Huffman Serialization Techniques

Networking expert Glenn Fiedler’s article on network serialization techniques is a great read. It discusses a number of scenarios, including serializing a sparse set of items from an array. This blog post discusses Glenn’s implementation of sparse array serialization and my thoughts on it. To be sure, Glenn’s code has the virtue of simplicity, and the concepts I’m going to discuss are more complex. Don’t assume the space or time trade-offs of different serialization techniques; measure them in practice.

Let’s talk about why this code immediately stood out to me – it triggered an intuition: code structure correlates with entropy. In the information theory sense, repetitive data has low entropy and unique data has high entropy. This intuition cues me to think about refactoring along DRY principles or moving to a more data-driven approach. These kinds of changes take low-entropy code and “compress” it so it has higher entropy – in other words, code that is less repetitive while accomplishing the same results.

The repetitive tree structure of the code immediately suggests that it could be written more concisely, especially as you have more arguments from which to pick. A sort algorithm is an obvious choice, e.g.
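As a minimal sketch of the sort-based idea (the names here are hypothetical, not from the original listing): instead of a hand-written tree of comparisons that grows with every added argument, let the standard library scan the candidates in one pass.

```cpp
#include <algorithm>
#include <initializer_list>

// Hypothetical sketch: rather than a nested if/else tree that gets deeper
// with every added argument, scan the candidates once.
int pickLargest(std::initializer_list<int> values)
{
    return *std::max_element(values.begin(), values.end());
}
```

For example, `pickLargest({3, 9, 4, 7})` returns 9, and adding a tenth candidate doesn’t change the code at all.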

This version is much shorter, handles any number of items, and gets right to the point.

(Caveats: Of course, the astute reader will note that there are situations where for small argument counts the tree can be a faster choice. Rewriting it with ternary operators doesn’t count – we’re concerned with execution structure here, not syntactic structure. Finally, there are plenty of loose ends to tune in the new implementation for correctness and speed.)

Huffman Codes

How does this apply to Glenn’s implementation above? Well, we see it has a repetitive structure. It’s attempting to encode more common numbers in fewer bits, and rarer numbers in more bits. That smells like a Huffman code to me! A Huffman code is a tree-like encoding structure that assigns unique bit strings of varying lengths to different values based on their expected frequency. So the most common value is encoded with a couple of bits, while the least common value might take ten or twenty bits. This can be a big win, provided your assumptions about what’s common are right. As an example, algorithms like deflate/zip rely on Huffman codes. There are also techniques like arithmetic coding that are more powerful and complex expressions of this concept.

A canonical Huffman code is one which generates N codes where the first value is expected to be most frequent, and the last value is least frequent. Implicitly, this means the first value’s code is shortest and the last value’s code is longest. Fortunately, this is exactly the situation we want to deal with! If you knew more about the actual distribution of values you could generate your own tree based on the observed values and get better efficiency in your serialization.
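A nice property of a canonical code is that it can be rebuilt from the code lengths alone. Here’s a minimal sketch (`canonicalCodes` is a hypothetical helper, not any library’s API): each code is the previous code plus one, shifted left whenever the next code length is longer.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch: assign canonical codes from code lengths sorted
// shortest-first (i.e. most frequent value first). Each code is the
// previous code plus one, left-shifted when the length grows.
std::vector<std::uint32_t> canonicalCodes(const std::vector<int>& lengths)
{
    std::vector<std::uint32_t> codes(lengths.size());
    std::uint32_t code = 0;
    for (int i = 0; i < (int)lengths.size(); ++i) {
        codes[i] = code;
        if (i + 1 < (int)lengths.size())
            code = (code + 1) << (lengths[i + 1] - lengths[i]);
    }
    return codes;
}
```

With lengths {1, 2, 3, 3} this yields the binary codes 0, 10, 110, 111 – so the most frequent value costs a single bit while no code is a prefix of another.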

(In the following code, I’m going to assume you have a serialize function that can read/write a canonical Huffman code representing values between 0 and 127. This is definitely sweeping complexity under the rug, although a Huffman coder is a handy thing to have in general. If I were implementing this, I’d start with the HuffmanProcessor from Torque, although there are surely more efficient implementations.)
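A hedged sketch of how that flow could look under those assumptions – all names here are mine, and `writeValue` stands in for the assumed canonical-Huffman writer (values 0..127), simply recording the values it’s given so the control flow can be shown without a real bit stream:

```cpp
#include <vector>

// Largest value the assumed Huffman table can represent.
static const int kMaxCoded = 127;

// Stand-in for the canonical-Huffman serialize routine: just records
// the value so the surrounding logic can be exercised.
void writeValue(std::vector<int>& out, int value) { out.push_back(value); }

// Serialize a sorted sparse set of indices as the gaps between
// neighbors; small gaps (the common case) get the shortest codes.
void writeSparseIndices(std::vector<int>& out, const std::vector<int>& indices)
{
    writeValue(out, (int)indices.size());
    int previous = -1;
    for (int index : indices) {
        int gap = index - previous - 1;
        // Escape hatch for gaps too big to Huffman-code directly:
        // the max code means "add 127 and keep reading".
        while (gap >= kMaxCoded) {
            writeValue(out, kMaxCoded);
            gap -= kMaxCoded;
        }
        writeValue(out, gap);
        previous = index;
    }
}
```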

Notice that there is now no repetitive structure to the code. We have to deal with a couple of details (what if it’s a number bigger than we can Huffman code?), but the basic flow is much simpler. And it achieves a very similar result.

Conclusions

What are the trade-offs of this approach? Well, first off, it may not be as bit-efficient in the “every item is consecutive” case, because the canonical Huffman code won’t assign a single bit to that most common case. You could add a special case for that (and spend an extra bit per serialization in the non-consecutive case).
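A hedged sketch of such a special case (hypothetical names, with values recorded rather than bit-packed for illustration): one flag bit up front selects between the consecutive-run fast path and the general path.

```cpp
#include <vector>

// Hypothetical sketch: spend one flag bit up front. When every index is
// consecutive we only need the count and the starting index; otherwise
// we fall back to per-index coding (represented by recorded values).
void writeIndices(std::vector<int>& out, const std::vector<int>& indices)
{
    bool consecutive = true;
    for (int i = 1; i < (int)indices.size(); ++i)
        consecutive = consecutive && (indices[i] == indices[i - 1] + 1);

    out.push_back(consecutive ? 1 : 0);   // the extra flag bit
    out.push_back((int)indices.size());
    if (consecutive) {
        if (!indices.empty())
            out.push_back(indices[0]);    // the rest are implied
    } else {
        for (int index : indices)
            out.push_back(index);         // per-index codes
    }
}
```

So a run like {3, 4, 5} costs the flag, a count, and one start index, while a scattered set pays one extra bit over the plain encoding.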

Second, Huffman codes are more complex to read and write. If you are doing a LOT of serialization, this might affect CPU usage. Most likely it will not be a significant difference – I’ve rarely seen any game be bottlenecked by network serialization.

Third, tuning is more indirect. With Glenn’s implementation, you can modify the code to directly affect the bit cost of various scenarios. With this approach, your main tuning option is to regenerate the Huffman code based on actual network traffic. You can also add special cases e.g. for the consecutive item case. Regardless of the approach you use, you always want to check your changes by running your game and measuring actual update sizes in various scenarios to confirm you are really getting a win.

Fourth, there are better coding techniques than Huffman – in practice you probably want a variant of arithmetic coding. Compressing the entire packet with a standard compression algorithm like LZMA is also an option. This article is just a piece in the puzzle of the ultimate network protocol.

Is the Huffman approach a slam-dunk win over the manual approach? Unclear. But it gives a more formal perspective on the smart serialization technique Glenn uses, and it’s a good entrée into a larger discussion of coding techniques for network serialization. I hope you found this article interesting!
