Confessions of a Recovering Proprietary Programmer - Endianness

This entry by Paul McKenney is the eleventh in a series. You can find the rest of the series in Paul McKenney's Journal.

The computing field has had its share of holy wars over the decades, with one famous 1970s/1980s holy war being fought over endianness.

Of course, endianness within a system refers to the order in which the bytes of a multi-byte numerical value are laid out in memory. On a little-endian system, the least-significant byte comes first, so that the numerical value 0x1 results in the first byte of the corresponding byte string being non-zero. On a big-endian system, the most-significant byte comes first, so that the same value 0x1 results in the last byte being non-zero.

Although the "cool kids" of the 1980s (e.g., Apple and Sun) were big-endian, the vast majority of today's Linux-capable systems are little-endian. This of course includes x86, but it also includes ARM, whose partners shipped more than one ARM CPU for each man, woman, and child on this planet -- and that only counts the ARM CPUs shipped in 2012.

Of course, this means that someone considering creating special-purpose hardware would likely do a little-endian implementation first and foremost, and a big-endian implementation later, if at all. Because single-threaded CPU throughput is not rising anywhere near as quickly as it was back in the 1980s and 1990s, special-purpose hardware will become increasingly important.

So what is a big-endian architecture like Power supposed to do?

The answer turns out to be both!

Power hardware has long supported both big-endian and little-endian byte ordering, and the toolchain has had prototype little-endian support for some years. Furthermore, if you have been paying close attention, you will have noticed that this little-endian support has received significant care and feeding over the past few months. Expect to see patches for the Linux kernel, QEMU, and other packages soon, from IBM and from others. IBM's partners in this effort include Google, Mellanox, NVIDIA, and Tyan.