Sorry for throwing this in at the last minute. But I think there are
advantages to this scheme that make it likely that people will tread this
path in the future regardless of what decision the WG comes to on HTTP
encoding. I did consult with Mark Nottingham before throwing this in here.
The idea that prompted this proposal was Hoffman and Bormann's CBOR
proposal which defines a subset that provides a binary encoding of JSON. It
seemed to me that many of the ideas raised in the HTTP encoding debate
would be relevant to binary JSON. I also realized that to be useful in a
Web Services context, any scheme would need something like the HTTP chunked
transport encoding.
The primary value of a binary encoding for JSON would be that it avoids the
need to base64 encode binary blobs, a problem that gets a lot worse when
binary structures are nested: each base64 pass adds roughly 33% overhead,
so the expansion compounds exponentially with the nesting depth.
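To make the compounding concrete, here is a small sketch that base64-encodes the same blob through three nested layers; each pass multiplies the size by 4/3 (plus padding):

```python
import base64

# Each pass of base64 encoding multiplies the size by 4/3 (rounded up to
# whole 4-byte groups), so nested binary structures compound the overhead.
blob = b"\x00" * 3000  # 3 KB of binary data
sizes = [len(blob)]
data = blob
for _ in range(3):  # three nested layers, each base64-encoding the last
    data = base64.b64encode(data)
    sizes.append(len(data))
print(sizes)  # [3000, 4000, 5336, 7116]
```

After three layers the data has grown from 3000 to 7116 bytes, more than double, which is exactly the cost a binary-clean encoding avoids.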
So imagine for the sake of argument we have an encoding for JSON that:
* Is easy to retarget existing JSON parsers/emitters to read and write
* Is as compact as the standard JSON encoding and has the same data model
* Supports compression of strings to tag codes
* Supports binary blobs of data
* Supports chunked encoding of binary data and strings
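As a sketch of the last requirement, a chunked binary item could be framed much like HTTP chunked transfer: a sequence of length-prefixed pieces with a terminator. The tag values and the 2-byte length field below are illustrative assumptions, not from any actual specification:

```python
import struct

# Hypothetical chunked framing for a binary blob: each chunk is a
# one-byte tag (0xC0 = more chunks follow, 0xC1 = final chunk),
# a 2-byte big-endian length, then the raw bytes. Tag values and
# field widths are made up for illustration.
def encode_chunked(data: bytes, chunk_size: int = 4) -> bytes:
    out = bytearray()
    chunks = [data[i:i + chunk_size]
              for i in range(0, len(data), chunk_size)] or [b""]
    for i, chunk in enumerate(chunks):
        tag = 0xC1 if i == len(chunks) - 1 else 0xC0
        out += struct.pack(">BH", tag, len(chunk)) + chunk
    return bytes(out)

def decode_chunked(buf: bytes) -> bytes:
    out, pos = bytearray(), 0
    while True:
        tag, length = struct.unpack_from(">BH", buf, pos)
        pos += 3
        out += buf[pos:pos + length]
        pos += length
        if tag == 0xC1:
            return bytes(out)

payload = b"0123456789"
assert decode_chunked(encode_chunked(payload)) == payload
```

The point of the terminator tag is that, as with HTTP chunked encoding, the sender need not know the total length in advance.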
Such an encoding would have an obvious advantage in Web Services applications
that deal with large chunks of binary data. Instead of having to switch
encodings or pass binary data as links, all the data can be managed in one
framework.
Now imagine that the Web Service could use the same encoding for the HTTP
framing layer and the application layer. Instead of needing separate HTTP
and application stacks, it is now easy to combine both into a single
scheme. This is the part of the ASN.1 vision that was brilliant. It was the
implementation of ASN.1 that sucked.
In this model HTTP headers would become JSON tags. Since JSON tags are
simply strings, a mechanism that allowed codes to be substituted for
strings would allow compression of both.
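A minimal sketch of that substitution mechanism, assuming a static table of well-known header names mapped to small integer codes (the table contents and encoding shape are my invention, purely for illustration):

```python
# Hypothetical static table: common header names compress to small
# integer codes; unknown names pass through as plain strings.
STATIC_TABLE = {"content-type": 1, "content-length": 2, "accept": 3}
REVERSE = {v: k for k, v in STATIC_TABLE.items()}

def compress_headers(headers: dict) -> list:
    """Replace known header names with integer codes."""
    return [(STATIC_TABLE.get(name.lower(), name), value)
            for name, value in headers.items()]

def expand_headers(pairs: list) -> dict:
    """Map integer codes back to header-name strings."""
    return {REVERSE.get(tag, tag): value for tag, value in pairs}

hdrs = {"Content-Type": "application/json", "X-Custom": "yes"}
packed = compress_headers(hdrs)
# "content-type" became code 1; "X-Custom" passed through as a string
assert expand_headers(packed) == {"content-type": "application/json",
                                  "X-Custom": "yes"}
```

Because the same table can compress application-level JSON tags as well as HTTP headers, one mechanism serves both layers.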
Proposing a scheme that meets these requirements is not difficult, and I
propose to have one sometime next week. There are many other existing
proposals, including BSON, which is heavily used in MongoDB. And there is a
whole set of arguments around the limitations that JSON imposes because it
is not possible to convert between binary and decimal representations of
floating point numbers without loss of precision, which is a showstopper
for scientific applications.
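The precision problem is easy to demonstrate: the decimal literal 0.1 has no exact binary floating-point representation, as Python's Decimal type exposes directly:

```python
from decimal import Decimal

# Decimal(0.1) shows the exact value the binary double actually stores,
# which is not the decimal value 0.1.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
assert Decimal(0.1) != Decimal("0.1")
```

So a round trip through a binary float silently perturbs decimal data, and vice versa, which is why the data model question matters as much as the wire format.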
For the sake of this argument assume that such an encoding can be devised
that offers comparable space/time efficiency to other encoding proposals.
This approach would be a huge win for Web Services. It would also provide
an easy and consistent way to represent HTTP headers in JavaScript and
other scripting languages, so there are advantages on the browser side as
well.
The main problem is the time factor. Now is not the time to restart the
encoding discussion from scratch. But that is not the only area that this
proposal would affect. The much bigger impact would be on the data model
used to describe HTTP headers. If the group chooses a data model for HTTP
headers that is aligned with the JavaScript data model and so has a
convenient conversion into and out of JSON, it will be possible to adopt a
binary encoding of JSON in Web Services applications at a later date.
--
Website: http://hallambaker.com/