To
avoid byte order issues purely on the client side, it is important
not to use any tricks when writing data. E.g. if you want to
fill a buffer with individual bytes you should use a byte array; if
you want to store int32s you should use an int32 view; etc. Mixing
view types otherwise opens up the possibility of byte order problems
on the client side.
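A minimal sketch of what I mean: write int32 values through an
Int32Array view of the buffer rather than assembling bytes by hand,
so the same (native) byte order is used on both the write and the
read:

```javascript
// Write and read through the same view type; the host's native
// byte order is applied consistently in both directions.
var buffer = new ArrayBuffer(8);
var ints = new Int32Array(buffer);
ints[0] = 0x12345678;
ints[1] = -1;
// Reading back through the same Int32Array view round-trips the
// values regardless of whether the host is little- or big-endian.
```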

So if I fill a buffer with bytes that represent 'int32' data on the
server how do I know which byte order the client expects the data to be
in? That was the point of my original question. I realize that I can
*assume* that byte order is little-endian for most common web browser
implementations on Intel platforms. Obviously, I can also read headers
from the request to determine the browser/OS - but that's hit and miss.
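For what it's worth, the host's byte order can be detected at
runtime by aliasing one buffer with two view types -- a common
idiom, not something the spec mandates:

```javascript
// Detect the host's byte order: write a known uint32, then inspect
// its first byte through a byte-wide view of the same buffer.
function hostIsLittleEndian() {
    var probe = new ArrayBuffer(4);
    new Uint32Array(probe)[0] = 1;
    // On a little-endian host the low-order byte is stored first.
    return new Uint8Array(probe)[0] === 1;
}
```

But that only tells you the client's own byte order after the fact;
it does not tell the server anything at request time.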

The usage that the spec implies to me is as follows:

// Grab the data
var uint8_array = ImaginarySynchronousXHRBinaryDataFetch(url);
var network_view = new DataView(uint8_array.buffer);

// Since you produced the resource, you know the endian-ness:
var network_endian = false; // false = big-endian

// Now massage the data
var size = uint8_array.buffer.byteLength / 4;
var native_array = new Uint32Array(size);
for (var i = 0; i < size; ++i) {
    // getUint32 takes a *byte* offset, so step by 4
    native_array[i] = network_view.getUint32(i * 4, network_endian);
}

While the above looks a bit silly in this case, I imagine the
more common case will be one where the data format is more complex,
so the massaging will not be a simple loop copying 32 bits at a
time, e.g. reading raster or mesh binary file formats.
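As a sketch of that more complex case, here is how a hypothetical
big-endian mesh-file header with a mixed field layout might be
unpacked through a DataView. The field names and layout are invented
for illustration; no real format is implied:

```javascript
// Hypothetical big-endian header: uint32 magic, uint16 version,
// uint16 flags, float32 scale. A single typed-array view cannot
// describe this mixed layout, but a DataView can walk it field
// by field at explicit byte offsets.
function readMeshHeader(buffer) {
    var view = new DataView(buffer);
    var littleEndian = false; // assume the server wrote big-endian
    return {
        magic:   view.getUint32(0, littleEndian),
        version: view.getUint16(4, littleEndian),
        flags:   view.getUint16(6, littleEndian),
        scale:   view.getFloat32(8, littleEndian)
    };
}
```

The point is the same as the loop above: the producer's byte order
is stated explicitly at each read, so the client's native order
never enters into it.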