Preface

Introduction

fastBinaryJSON is based on my fastJSON article (http://www.codeproject.com/Articles/159450/fastJSON) and code base, which is a polymorphic object serializer. The main purpose of fastBinaryJSON is speed when serializing and deserializing data for transfer and storage to disk. It was created for performance in my upcoming RaptorDB document database engine.

Features

fastBinaryJSON has the following feature list:

Based on fastJSON code base (very fast, polymorphic)

Supports: Hashtables, Dictionaries, generic Lists, DataSets, ...

Typically 2-10% faster on serialization, 17%+ faster on deserialization.

Why?

Why another serializer, you may ask; why not just use fastJSON? The answer is simple: performance. JSON, while a great format, has the following problem:

JSON is a text format, so you lose type information on serialization, which makes deserializing the data time consuming.

Why not BSON?

Looking at the BSON specification, you feel overwhelmed, as it is hard to follow.

You feel that the specs have evolved over time and that many parts of the encoding have been deprecated.

BSON encodes lengths into the stream, which inflates the data; this might be fine for the use case the authors envisioned, but for data transfer and storage it just makes things larger than they need to be.

Because of the length prefixes, encoding a data object must be done in two passes: once to output the data, and a second time to set the lengths.
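One common way to implement such two-pass encoding is to reserve a placeholder slot and backpatch it once the payload size is known. A minimal sketch in Python (the 4-byte little-endian prefix that counts itself plus the payload mirrors BSON-style document headers; the rest is illustrative, not the library's code):

```python
import io
import struct

def write_with_length_prefix(out: io.BytesIO, payload: bytes) -> None:
    """Write a 4-byte little-endian length prefix by backpatching:
    reserve the slot, write the payload, then seek back and fill it in."""
    start = out.tell()
    out.write(b"\x00\x00\x00\x00")             # pass 1: placeholder
    out.write(payload)                          # pass 1: the actual data
    end = out.tell()
    out.seek(start)
    out.write(struct.pack("<i", end - start))   # pass 2: patch the length
    out.seek(end)

buf = io.BytesIO()
write_with_length_prefix(buf, b"hello")
data = buf.getvalue()
# the prefix counts itself plus the payload: 4 + 5 = 9
assert struct.unpack("<i", data[:4])[0] == 9
```

A token-prefixed format avoids this entirely: the writer emits each value in a single forward pass.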

I initially started off by doing a BSON conversion on fastJSON but it got too complicated, so it was scrapped.

How is Data Encoded in fastBinaryJSON?

JSON is an extremely simple format, so fastBinaryJSON takes that simplicity and adds the parts needed for binary serialization. fastBinaryJSON follows the same rules as the JSON specification (http://json.org), with the following table showing how data is encoded:

As you can see from the above, all the encoding rules are the same as JSON, and primitive data types have been given 1-byte tokens for encoding data. So the general format is:

TOKEN, { DATA } : where DATA can be 0 or more bytes
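The token-plus-payload idea can be sketched as follows; note that the token byte values below are made up for illustration and are not fastBinaryJSON's actual byte assignments:

```python
import struct

# Hypothetical token bytes -- fastBinaryJSON defines its own table.
TOKEN_NULL  = 0x01   # no payload
TOKEN_TRUE  = 0x02   # no payload
TOKEN_INT32 = 0x03   # 4-byte payload
TOKEN_UTF8  = 0x04   # length byte + UTF-8 bytes

def encode(value) -> bytes:
    """Emit a 1-byte token followed by zero or more data bytes."""
    if value is None:
        return bytes([TOKEN_NULL])
    if value is True:
        return bytes([TOKEN_TRUE])
    if isinstance(value, int):
        return bytes([TOKEN_INT32]) + struct.pack("<i", value)
    if isinstance(value, str):
        raw = value.encode("utf-8")
        return bytes([TOKEN_UTF8, len(raw)]) + raw
    raise TypeError(f"unsupported type: {type(value)}")

assert encode(None) == b"\x01"                  # token only, zero data bytes
assert encode(42) == b"\x03\x2a\x00\x00\x00"    # token + 4 data bytes
```

Because the token tells the reader how many bytes follow, the stream can be decoded in one forward pass without any length bookkeeping for the whole document.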

Strings can be encoded in two ways, as UTF-8 or as Unicode (UTF-16), where UTF-8 is more space efficient and Unicode is faster.
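The trade-off is easy to see: for ASCII-range text, UTF-8 uses one byte per character while UTF-16 uses two (UTF-16 matches .NET's internal string representation, which is why decoding it can be faster). Illustrated in Python:

```python
s = "serializer"
utf8  = s.encode("utf-8")      # 10 bytes: one per ASCII character
utf16 = s.encode("utf-16-le")  # 20 bytes: two per character

assert len(utf8) == 10
assert len(utf16) == 2 * len(utf8)
```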

String keys or property names are encoded as a special UTF-8 stream which is limited to 255 bytes in length to save space (you should not have a problem with this, as most property names are short).
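A single length byte caps the name at 255 bytes but saves three bytes per name versus a 4-byte length prefix. A sketch of such an encoding in Python (the exact layout is assumed for illustration, not taken from the library's source):

```python
def encode_name(name: str) -> bytes:
    """Encode a property name as a 1-byte length followed by UTF-8 bytes."""
    raw = name.encode("utf-8")
    if len(raw) > 255:
        raise ValueError("property name longer than 255 bytes")
    return bytes([len(raw)]) + raw

assert encode_name("Name") == b"\x04Name"
```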

Performance Tests

To get a sense of the performance difference between fastBinaryJSON and fastJSON, the following tests were performed. Times are in milliseconds; each test was done on 1000 objects and repeated 5 times. The AVG column is the average of the runs excluding the first, which is skewed by initialization time:

As you can see in the DIFF column, which is [ fastJSON / fastBinaryJSON ], the serializer performs at least 2% faster and the deserializer at least 17% faster, with the greatest difference being with DataSet types, which contain many rows of data.

Now to do this, fastBinaryJSON uses FormatterServices.GetUninitializedObject(type) in the framework, which essentially just allocates a memory region for your type and gives it to you as an object, bypassing all initialization including the constructor. While this is really fast, it has the unfortunate side effect of ignoring all class initialization, like default values for properties, so you should be aware of this if you are restoring partial data to an object (if all the data is in the JSON and matches the class structure, then you are fine).
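The same pitfall can be demonstrated with Python's rough analogue, object.__new__, which allocates an instance without running __init__, just as GetUninitializedObject skips constructors and field initializers (the class below is invented for illustration):

```python
class Account:
    def __init__(self):
        self.status = "active"   # default set by the constructor

# Normal construction runs __init__, so the default is present.
normal = Account()
assert normal.status == "active"

# Allocate without running __init__, as an uninitialized-object
# deserializer would: the default value is simply never set.
raw = object.__new__(Account)
assert not hasattr(raw, "status")
```

If the serialized data does not contain every field, nothing backfills the missing defaults, which is exactly the partial-restore hazard described above.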

To control this, you can set ParametricConstructorOverride to true in BJSONParameters.

Appendix v1.4.0 - Circular References & Breaking changes

As of this version, I fixed a design flaw that had been bugging me since the start, by removing the BJSON.Instance singleton. This means you type less to use the library, which is always a good thing; the bad thing is that you need to do a find-and-replace in your code.

Also, I found a really simple and fast way to support circular-reference object structures. A complex structure like the following will serialize and deserialize properly (the unit test is CircularReferences()):
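The article's sample structure is not reproduced here, but the usual technique, which fastBinaryJSON may implement differently, is to track already-visited objects by identity and emit a small reference marker instead of recursing forever. A Python sketch using hypothetical "$id"/"$ref" markers:

```python
def serialize(obj, seen=None):
    """Flatten an object graph of dicts, replacing repeat visits
    with a {"$ref": id} marker so that cycles terminate."""
    if seen is None:
        seen = {}
    if id(obj) in seen:
        return {"$ref": seen[id(obj)]}           # already emitted: reference it
    if isinstance(obj, dict):
        seen[id(obj)] = len(seen)                # assign the next reference id
        return {"$id": seen[id(obj)],
                **{k: serialize(v, seen) for k, v in obj.items()}}
    return obj                                   # primitives pass through

# A parent/child cycle: each node points at the other.
parent = {"name": "root"}
child = {"name": "leaf", "parent": parent}
parent["child"] = child

flat = serialize(parent)
assert flat["child"]["parent"] == {"$ref": 0}    # cycle replaced by a reference
```

The deserializer then rebuilds the graph by resolving each "$ref" against the objects created for the matching "$id" values.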


About the Author

Mehdi first started programming when he was 8, on a BBC+ 128k machine in 6512 processor language. After various hardware and software changes he eventually came across .NET and C#, which he has been using since v1.0. He is formally educated as a systems analyst and industrial engineer, but his programming passion continues.

* Mehdi is the 5th person to get 6 out of 7 Platinums on CodeProject (13th Jan '12)
* Mehdi is the 3rd person to get 7 out of 7 Platinums on CodeProject (26th Aug '16)

First of all thanks for sharing your code. I was quite impressed with RaptorDB!

I am currently building a WCF-based AJAX service, and I was wondering whether it would be possible to build a fastBinaryJSON-based service instead, which would serve JSON data to client pages. These would then deserialize the data in JavaScript.

Is such a scenario possible? Do you have any samples where you've used fastBinaryJSON with JavaScript?

will, as expected, effectively prevent serialization of those fields/properties in the class you are serializing which are adorned with the attribute.

However, if the class has a static data structure, such as a Dictionary field, applying the custom attribute will not prevent serialization. This is unexpected, and it is true of properties declared as static as well as fields.

«I want to stay as close to the edge as I can without going over. Out on the edge you see all kinds of things you can't see from the center» Kurt Vonnegut, Jr.

CA1001: Types that own disposable fields should be disposable. Implement IDisposable on 'BJSONSerializer' because it creates members of the following IDisposable types: 'MemoryStream'. (fastBinaryJSON, BJsonSerializer.cs, line 14)


The object I am supplying is an instance of a generic data structure, Node<T>, which is hierarchic ... each Node contains a List of other Nodes, a reference to its parent Node, and root Node, etc.

The type of T in the case that generates the error is a simple POCO class named 'TestClass' with integer and string properties only. The class is adorned with DataContract/DataMember and ProtoContract/ProtoMember attributes.

When the error occurs, the Node supplied to the code has only five levels of depth.

Ah! I believe you have encountered an edge case: you don't have circular references, but you do have a deeply nested object structure where the 20-level depth limit comes into play.

Try increasing line 22 in bjsonserializer.cs, int _MAX_DEPTH = 20;, to some larger value and see if it works (in fastJSON I made this a parameter which you can change via JSONParameters; I will do the same here).

I am having a problem when trying to work with a class that uses the "new" modifier on a property it inherits from a base class. For example, in dataobjects.cs, I defined a base class (BaseColClass) that defines a MyFooClass property. The colclass class definition uses the "new" modifier on that property defined in the base class.

The problem is that when the BJsonParser class is adding to the dictionary it returns in ParseObject(), it eventually has a key collision, because both BaseColClass and colclass have a property called "MyFooClass", even though those properties have different types.

I'm trying to figure out how to resolve this problem. Do you (or anyone else) have any ideas?

1) You can omit the [Serializable()] attribute, since fastJSON and fastBinaryJSON don't need it.
2) Since fastBinaryJSON understands polymorphism and FooClass : BaseFooClass, you don't need the "new" modifier, and you can store FooClass or any other type derived from BaseFooClass in MyFooClass.

Everyone is entitled to their opinion; that said, good manners would be to submit the code for review, and you will get help, whether the issue is within the library code or there is a suggestion/workaround to overcome your problem ...

I'd like to build a big file, appending documents to it, etc., and I'll maintain indexes to find what I am looking for.

Is there currently a method to tell it to parse a "SomeItem" from this memory stream at a given location? It seems like, if it's not there, it would be easy enough to add by looking for a start/end token to stop parsing.

A programmer walks into a bar and asks the bartender for 1.00000000000003123939 root beers. Bartender says, I'll have to charge you extra, that's a root beer float. Programmer says, better make it a double then.

Thanks, Mehdi. I am trying out that code, and left the author of the Tip/Trick what I hope is some constructive feedback.

It was interesting to note that taking the file output by MiniLZO and then (from the desktop) creating a zipped file from it reduced the file size by another 500k or so.

Next-up: trying the GZip facility in .NET.

yours, Bill

“Human beings do not live in the objective world alone, nor alone in the world of social activity as ordinarily understood, but are very much at the mercy of the particular language which has become the medium of expression for their society. It is quite an illusion to imagine that one adjusts to reality essentially without the use of language and that language is merely an incidental means of solving specific problems of communication or reflection.” Edward Sapir, 1929

MiniLZO was designed to be really fast; it does not compress nearly as well as the zip method. For even better compression, try 7-Zip.
