Thursday, June 18, 2009

thought i was done caring about endianness when i left kernel programming... oops

I quickly replied:

You put bits on a {network,disk} that transcend architectures, you worry about byte-order.

I've often wondered why people with apps for Solaris on SPARC are so concerned about getting them to work on Solaris for x86, and vice-versa. Seeing Stephen equate byte-order sensitivity with kernel hacking suddenly made me realize why: byte-order sensitivity is everyone's problem.

Any time your program puts a multi-byte value in a network packet or a disk block, it is highly likely that another program, on a platform with a different byte order, will attempt to read that packet or block. Never mind the historical holy wars about byte order; even today, there are plenty of platforms of both byte orders in active use.

It's really not tough to write endian-independent code. The first thing you need to decide is how to encode your disk/network data. Most Internet apps use a canonical format (big-endian for things in RFCs). There have been schemes for universally self-describing encodings (XDR, ASN.1), but these can be big and bulky. OS research in the early 90s proposed a scheme called "receiver makes right", where a producer tags its data with the encoding it used, and it is then up to the receiver to normalize the data to its native representation.
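A minimal sketch of the "receiver makes right" idea. The one-byte tag and the helper names here are my own invention for illustration, not any standard wire format:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical one-byte tag the producer prepends to its data. */
#define TAG_BIG_ENDIAN    0x00
#define TAG_LITTLE_ENDIAN 0x01

/* Detect this host's byte order at run time. */
static uint8_t host_tag(void)
{
    const uint32_t probe = 1;
    uint8_t first;
    memcpy(&first, &probe, 1);
    return first ? TAG_LITTLE_ENDIAN : TAG_BIG_ENDIAN;
}

static uint32_t bswap32(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0x0000ff00U) |
           ((v << 8) & 0x00ff0000U) | (v << 24);
}

/* Receiver makes right: swap only when the producer's byte order
 * differs from ours; same-order transfers cost nothing. */
uint32_t normalize32(uint8_t producer_tag, uint32_t raw)
{
    return (producer_tag == host_tag()) ? raw : bswap32(raw);
}
```

The appeal of the scheme is visible in `normalize32`: when both sides share a byte order, no swapping happens at all, unlike a fixed canonical format where two little-endian peers each swap anyway.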

Regardless of encoding scheme, if you are reading data from the network or disk, the first step is to normalize it. Different architectures offer different aids here. x86 has bswap instructions to swap big-endian data into x86-native little-endian. SPARC has alternate-space-identifier load instructions: a predefined alternate space (0x88) is the little-endian space, which means if you utter "lduwa [address-reg] 0x88, [dst-reg]", the word pointed to by [address-reg] will be byte-swapped on its way into [dst-reg]. The sun4u version of MD5 exploits this instruction to overcome MD5's little-endian bias, for example.

Compilers and system header files should provide higher-level abstractions for these operations, for example the hton{s,l,ll}() and ntoh{s,l,ll}() functions that Internet apps use. After manipulating the data, encoding should follow the same steps as decoding, in reverse. Also, in some cases (e.g. TCP or UDP port numbers being passed along as opaque values), the number can often just be used without manipulation.
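In practice that normalization usually goes through the portable wrappers rather than raw swap instructions. A small sketch of decoding and re-encoding a 32-bit length field stored in network (big-endian) order — the field layout here is made up for illustration:

```c
#include <arpa/inet.h>   /* htonl/ntohl */
#include <stdint.h>
#include <string.h>

/* Decode a 32-bit length stored big-endian (network order) in a buffer. */
uint32_t decode_length(const unsigned char *buf)
{
    uint32_t be;
    memcpy(&be, buf, sizeof be);   /* memcpy avoids alignment traps */
    return ntohl(be);              /* normalize to host order */
}

/* Encoding follows the same steps in reverse. */
void encode_length(unsigned char *buf, uint32_t len)
{
    uint32_t be = htonl(len);      /* host order back to network order */
    memcpy(buf, &be, sizeof be);
}
```

On a big-endian host ntohl/htonl compile away to nothing; on x86 a decent compiler lowers them to a single bswap. Either way the source is identical on both.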

Some have called for compiler writers to step up and provide clean language-level abstractions for byte-ordering. I'm no language lawyer, but I've heard the next revision of Standard C may include endian keywords:
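I have not seen the actual proposal text, so the spelling below is pure guesswork on my part, but the idea would be something along these lines:

```c
/* Hypothetical syntax -- no shipping compiler accepts this today.
 * The compiler would emit any needed byte swaps at each access. */
struct pkt_header {
    big_endian uint16_t src_port;
    big_endian uint16_t dst_port;
    big_endian uint32_t length;
};
```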

Today, such fields need htons() or ntohs() calls wrapping every reference to them. Of course, a lot of (otherwise correctly written) existing code would need to be rewritten, but such a type-enforced scheme would reduce errors.
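For contrast, here is what the status quo looks like when filling in a destination with the standard sockets API — every multi-byte field wrapped by hand:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>

/* Forget one of these wrappers and the bug only shows up when you
 * talk to a peer of the opposite byte order -- nothing enforces it. */
void fill_dest(struct sockaddr_in *sin, uint32_t addr, uint16_t port)
{
    memset(sin, 0, sizeof *sin);
    sin->sin_family = AF_INET;
    sin->sin_port = htons(port);           /* host -> network order */
    sin->sin_addr.s_addr = htonl(addr);    /* ditto */
}
```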

Finally, one other cause of non-portable code is doing stupid tricks based on how multi-byte integers are stored. For example, on little-endian boxes:
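This is the classic example of the trick — type-punning a pointer to grab the low byte, which only works where the low-order byte is stored first:

```c
#include <stdint.h>

/* Grab the low byte of a 32-bit value by reinterpreting its storage.
 * On a little-endian box the first byte in memory is the least
 * significant, so this "works" -- and silently returns the WRONG
 * byte on big-endian hardware. */
uint8_t low_byte_nonportable(const uint32_t *p)
{
    return *(const uint8_t *)p;
}
```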

People micro-optimize based on such behavior, which limits their code to little-endian platforms only. A compiler can exploit the native platform's representation to make such hand-optimizations redundant; any compiler guys among my half-dozen readers can correct or confirm that assertion.
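The portable counterpart is to do arithmetic on the value rather than poke at its storage; on a little-endian target a decent compiler lowers the mask below to the identical single-byte load, so the pointer trick buys nothing:

```c
#include <stdint.h>

/* Portable: operates on the VALUE, so it means the same thing on
 * any byte order. Compilers turn this into one byte load where the
 * target's representation allows it. */
uint8_t low_byte_portable(uint32_t x)
{
    return (uint8_t)(x & 0xff);
}

uint8_t high_byte_portable(uint32_t x)
{
    return (uint8_t)(x >> 24);
}
```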