Those, of course, sparked a lot of discussion. Questioning Java and XML, the darlings of 'Enterprise' computing, would hardly pass without comment. It provoked an even stronger reaction than I'd have expected from questioning AGILE methodologies.

One person, though, asked me a really good question: why did I think D-BUS and Thrift were better?

I had to think carefully about my enthusiasm for these two things, and I've reached some conclusions...

First, I like D-BUS a lot better. The problem domain involves communications between processes on a single computer. Context switch overhead is a big deal here, but latency isn't. Also, many encoding issues are greatly simplified in this sort of environment, though D-BUS handles those well regardless.

There are several things I like about D-BUS. There is a consistent naming scheme for referring to D-BUS objects and interfaces. Objects are generally expected to implement a standard interface for describing their interfaces, so they're self-describing. There is a heavy emphasis on notifications and events: so much so that frequently the return you get from a function call is not the real result, and you're supposed to expect a notification later that whatever you asked for has finished. Very asynchronous. And the format of messages is a self-describing binary format designed to support a reasonable set of datatypes and user-defined compositions (aka structs). There is also a heavy emphasis on data. Notifications are all about getting the right data to the right thing so it can do whatever it needs to do with it. Very few imperative 'you must do X in such and such a way or I will be very upset' sort of messages are passed.
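The self-description comes through the standard org.freedesktop.DBus.Introspectable.Introspect method, which returns an XML description of the object. A minimal Python sketch of walking such a reply — the interface and member names here are made up for illustration, and I'm parsing a canned string rather than talking to a real bus, but the XML shape follows the D-Bus introspection format:

```python
import xml.etree.ElementTree as ET

# A made-up Introspect() reply; a real one comes back from calling
# org.freedesktop.DBus.Introspectable.Introspect on a live object.
reply = """
<node>
  <interface name="org.example.Frobnicator">
    <method name="Frobnicate">
      <arg name="target" type="s" direction="in"/>
    </method>
    <signal name="FrobnicationDone">
      <arg name="target" type="s"/>
    </signal>
  </interface>
</node>
"""

root = ET.fromstring(reply)
interfaces = {}
for iface in root.findall("interface"):
    interfaces[iface.get("name")] = {
        "methods": [m.get("name") for m in iface.findall("method")],
        "signals": [s.get("name") for s in iface.findall("signal")],
    }
print(interfaces)
```

Note how the signal sits right next to the method in the description: a client can discover that FrobnicationDone exists and subscribe to it, which is exactly the call-now, notify-later pattern described above.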

D-BUS is RPCish, but many of the things I really dislike about RPC are either non-issues in D-BUS' environment, things D-BUS handles better than most RPC systems, or both.

Thrift is not nearly so well designed. The main positive thing it has is a nice language-agnostic method of describing a data structure in a way that allows for clever upgrades of that data structure later without breaking old servers or clients.

Thrift is not inherently self-describing. There isn't even a standard for asking an object implementing Thrift to send you its IDL, whereas D-BUS actively encourages objects to implement a standard interface for this.

Thrift has the concept of async methods that return as soon as they've made it to the transport. But the methods are still very function-call-like, and they encourage thinking in imperative 'do this, change your state in this way' sort of messages. I asked on the Thrift channel about an 'idempotent' method modifier for a method that could be implemented via HTTP GET, and people were puzzled as to why I'd even want such a thing. The idea of using RPC to implement REST was met with general puzzlement.

Lastly, there is no standard way of naming objects in Thrift. And the HTTP transport that comes by default just assumes it will be sending data via POST requests to http://something:port/, which is not much of a standard for naming the thing you're talking to at all.

So, Thrift as it stands is very poor for implementing random publicly accessible services.

But providing a language independent way to describe data structures is pretty darned nifty. The closest things I know of that are widely deployed on the wild Internet are JSON, ASN.1 and XDR. JSON is ugly and designed for Javascript Ajax clients. ASN.1 is just plain horribly ugly, overdesigned and a bear to parse. Given that ASN.1 comes out of OSI-land and they gave us such glorious gems as X.500, this is really no surprise. (Strangely, it's used for SNMP. What does that tell you about the S in SNMP?) XDR is not very self-describing in the data stream (from what I remember), has a bad name in other ways because it was associated with an awful RPC implementation from Sun, and is missing a map type.

I've been thinking that having web pages for interacting with data that will return a Thrift encoding of the data if passed an application/x-thrift Accept header might be interesting. They could even accept application/x-thrift data in HTTP POST or PUT requests. This would allow you to use URLs to name objects and a fairly efficient, easily parsed protocol encoding for sending the data needed to manipulate them.
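The negotiation part is just ordinary HTTP content negotiation. A deliberately naive Python sketch of the server-side decision — real Accept parsing handles q-values, wildcards, and ordering, and application/x-thrift is the hypothetical media type from above, not a registered one:

```python
def pick_representation(accept_header):
    # Split "text/html, application/x-thrift;q=0.9" into bare media
    # types, ignoring parameters.  Naive on purpose: no q-value or
    # wildcard handling.
    offered = [t.split(";")[0].strip() for t in accept_header.split(",")]
    if "application/x-thrift" in offered:
        return "application/x-thrift"
    return "text/html"  # fall back to the human-readable page

print(pick_representation("text/html, application/x-thrift"))
# -> application/x-thrift
```

A browser asking for text/html gets the web page; a Thrift-aware client asking for application/x-thrift gets the compact encoding of the same resource, at the same URL.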

I'd have to look at XDR, but as I recall it was missing a way to handle maps. And the data format was fairly C-specific and not very self-describing. I believe that Thrift, even when the IDL is not available, still describes the types of the various bits of data being sent well enough for you to be able to print them out regardless of whether or not you're aware of the IDL.

But yes, you're right. XDR is another thing that fits in that family of things.

Yeah, XDR isn't self-describing at all. (Though ASN.1 BER/DER isn't necessarily self-describing either...) You would have to build maps out of lower-level primitives (e.g., a list of key-value pairs), but I don't see that as a problem. It does have a paucity of primitive data types: no wide integers, for example.
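Building a map out of lower-level primitives really is no problem; the usual trick is a list of key-value pairs, sorted so that equal maps always encode the same way. A quick Python illustration of the shape of the idea (this isn't actual XDR, just the pair-list construction):

```python
def map_to_pairs(d):
    # Represent a map as a sorted list of (key, value) pairs, the way
    # a format without a native map type would carry it on the wire.
    # Sorting makes the encoding canonical: equal maps, equal bytes.
    return sorted(d.items())

def pairs_to_map(pairs):
    # The inverse: rebuild the map on the receiving side.
    return dict(pairs)

pairs = map_to_pairs({"b": 2, "a": 1})
print(pairs)  # -> [('a', 1), ('b', 2)]
```

The cost is purely notational: the reader has to know (out of band, or from the IDL) that this particular pair list is meant to be a map.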

I'm surprised you're so down on JSON; it's my current favorite. Of course at work I end up seeing a lot of bencode, and it's more appropriate for random binary data -- length-encoded strings are better than delimited strings in some cases! But the fact that you can write pretty JSON given appropriate whitespace is a big win for some use cases.

I only dislike JSON because it is enormously convenient to decode if you happen to be using Javascript, and a mild pain if you're using anything else. I also have a preference for binary encodings because they can compactly represent integers and floating point numbers, and can also be parsed quite efficiently.

Thrift's use of numeric field ids allows a structure to grow new fields over time without breaking old stuff, which is kind of a nice feature. You can achieve something similar in a much more verbose way using maps, but again, it's verbose.
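The field-id trick can be sketched without any actual Thrift. If every field on the wire carries a numeric id, an old reader just skips ids it doesn't know about. Here each field is a plain (field_id, value) pair, which glosses over Thrift's real binary encoding but shows the compatibility property; the field names are made up:

```python
# What an "old" reader was compiled against: it knows fields 1 and 2.
KNOWN_FIELDS = {1: "user", 2: "host"}

def decode(fields):
    # Keep fields we recognize, silently skip the rest -- this is what
    # lets new writers add fields without breaking old readers.
    out = {}
    for field_id, value in fields:
        name = KNOWN_FIELDS.get(field_id)
        if name is not None:
            out[name] = value
    return out

# A "new" writer added field 3; the old reader still works fine.
print(decode([(1, "jeremy"), (2, "example.org"), (3, "extra stuff")]))
# -> {'user': 'jeremy', 'host': 'example.org'}
```

Doing the same with maps keyed by field name gets you the extensibility, but you pay for the full key string on every record instead of a small integer.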

AFAIK JSON is only especially convenient to decode in JS if you're willing to use eval(), which I hope no sane person will do in the general case. The pain level for any reasonable serialization format is about the same once you've left the realm of "built into the environment"...

Regarding binary -- I am very skeptical of the "binary is more efficient" argument. Recently a coworker proposed replacing a fairly verbose ASCII logfile (think Apache logs, though not quite -- we were most interested in the 20-odd CGI parameters in the GET requests) with a binary format "because it'll be more compact". I ran some experiments and found that the ASCII logs were around 400 bytes per record, which gzipped to an average of 54 bytes per record, while his proposed binary format would have been around 150 bytes per record. The binary format would have been unextensible (think "fwrite(3) a network-byte-order struct") and single bit errors or other corruption would have a very high likelihood of causing undetectable data corruption.

The other part of "more efficient" is the CPU overhead of marshalling and unmarshalling to the wire format. ntohl(3) is certainly faster than strtol(3); my benchmark shows

strtol: 460527 values in 0.035447 seconds (76.970514 ns/conversion) r = 0x60de8a86
ntohl: 460527 values in 0.001460 seconds (3.170281 ns/conversion) r = 0x60de8a86

about a 70-nanosecond penalty for using ASCII (representative dataset coming from /(\d+)/ on $MAIL). That's not big enough to bother me for most wire format scenarios, where the network stack overhead will swamp the formatting overhead and issues of debuggability, extensibility, and codeability are far more important.
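The compression half of that experiment is easy to reproduce in spirit: ASCII logs are redundant, so gzip eats them. A Python sketch with synthetic, made-up log lines (the exact ratios will of course differ from the numbers above):

```python
import random
import zlib

random.seed(0)  # deterministic synthetic data
lines = []
for i in range(1000):
    # Vaguely Apache-ish access log lines with 20 CGI parameters.
    params = "&".join("p%d=%d" % (j, random.randrange(10000)) for j in range(20))
    lines.append('192.0.2.%d - - [01/Jan/2024] "GET /cgi?%s" 200 %d\n'
                 % (i % 255, params, random.randrange(100000)))

raw = "".join(lines).encode()
packed = zlib.compress(raw, 9)  # same DEFLATE algorithm gzip uses
print(len(raw) // len(lines), len(packed) // len(lines))  # bytes/record: raw vs compressed
```

The compressed stream stays extensible and debuggable in a way the fwrite-a-struct format never would, and a flipped bit shows up as a CRC failure instead of silently wrong data.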

Now, I certainly wouldn't claim that there is no place for binary formats. I quite like the extreme simplicity of the NBD wire protocol, for example.

Also, there's definitely a potential efficiency advantage to length-tagged formats like Bencode; you can implement zero-copy network protocols using writev(2) (it's even possible to do so using a high-level language like Python if you implement the buffering right!), allowing large-message traffic to saturate 1GBit links and maybe even push towards filling a 10GBit link, which you definitely cannot do using JSON or any other delimited serialization format.
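The zero-copy property falls straight out of length-prefixing: the encoder can emit a list of buffers in which large payloads are the caller's own bytes objects, never concatenated, ready to hand to writev(2) (socket.sendmsg in Python). A sketch for a bencode subset covering byte strings, integers, and lists:

```python
def bencode_iov(value, out=None):
    # Build an iovec-style list of buffers.  Byte strings go into the
    # list by reference, so a large payload is never copied; only the
    # small framing pieces are newly allocated.
    if out is None:
        out = []
    if isinstance(value, bytes):
        out.append(b"%d:" % len(value))   # length prefix
        out.append(value)                 # zero-copy: caller's own object
    elif isinstance(value, int):
        out.append(b"i%de" % value)
    elif isinstance(value, list):
        out.append(b"l")
        for item in value:
            bencode_iov(item, out)
        out.append(b"e")
    else:
        raise TypeError("unsupported type: %r" % type(value))
    return out

payload = b"x" * 1000
iov = bencode_iov([payload, 42])
# A real sender would do sock.sendmsg(iov); joining is only for inspection.
print(b"".join(iov)[:16])
```

With JSON you can't do this: strings have to be escaped, so the payload bytes must be rewritten before they hit the wire.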

Thinking out loud

Type tags are lowercase if there is no field id, uppercase if there is. The field id is always a count and always comes immediately after the type tag. All fields in a tuple must either have or not have a field id (i.e. their type tags must all be uppercase or lowercase).

't' aka Tuple

A grouping of values.

'c' aka Count

Something countable, but not just an arbitrary integer with no meaning.

Arrays, lists and sets are largely indistinguishable in how they look on the wire, and so they are all represented by an array. Arrays are not length-delimited; there is a flag before each element saying whether or not it's the last one.

'n' aka Known-length array

Like an array, but prefixed with a count and no flag before each element.

'd' aka Dictionary

A mapping from keys to values.

'r' aka Random type

This is what's used in a type field for an array, tuple or dictionary when one of the types is not known. The expectation is that the element will be prefixed with a type tag.

Re: Thinking out loud

A field id is considered part of a type id. If a type id is a capital letter, then it is immediately followed by a count containing the field id.

Here is a description of the encodings which include type ids in their encoding. Note that only the tuple allows a type id that includes a field id.

Note also that the 'random' type does not allow a field id for its enclosed type, because the field id (if any) should've been applied where the type id for the random type occurred. If the random type allowed a field id for its enclosed type, then you could just say an array or dictionary contained a random type and then put meaningless field ids on all the elements.

Tuple (type tag 't')

t<count of # of fields><type id field 1>...<type id field n><field 1>...<field n>
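To sanity-check the layout, here's a toy Python encoder for just the tuple rule above. The spec so far doesn't pin down how counts are encoded on the wire, so purely for illustration I'm assuming a count is a single unsigned byte:

```python
import struct

def encode_count(n):
    # Illustrative assumption only: the post doesn't specify the wire
    # encoding of a count, so use a single unsigned byte here.
    return struct.pack("B", n)

def encode_tuple(fields):
    # fields: list of (field_id_or_None, type_tag, encoded_body).
    # Per the rule above: 't', then the field count, then all type ids
    # (uppercase tag + field id count when a field id is present),
    # then all the field bodies.
    out = [b"t", encode_count(len(fields))]
    for field_id, tag, _body in fields:
        if field_id is not None:
            out.append(tag.upper().encode())
            out.append(encode_count(field_id))
        else:
            out.append(tag.encode())
    for _field_id, _tag, body in fields:
        out.append(body)
    return b"".join(out)

# Two count fields with field ids 1 and 2, holding values 5 and 7.
print(encode_tuple([(1, "c", encode_count(5)), (2, "c", encode_count(7))]))
```

One nice property of front-loading all the type ids: a reader knows the full shape of the tuple before it sees any of the bodies.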