Backwards uint16_t?


Hello

I have a function that takes a struct and serializes it into a uint8_t array. This code runs on an embedded device and works well. I recently copied it to a Linux development environment to get it working there as well.

Everything works as it should, except that the 16-bit variables in my struct come out backwards.

On the embedded device I get a byte array containing all of the struct's data in the expected order. If I run the same code in the Linux dev environment, the 16-bit variables are written to the byte array in reverse byte order:

embedded device example byte array: (correct)

0x10 | 0x20 | 0x01 | 0xFF | 0xDA | 0x33

Linux dev environment example for same data:

0x10 | 0x20 | 0xFF | 0x01 | 0x33 | 0xDA

Here the 16-bit values have been written to the byte array, but with their two bytes swapped.

It is exactly the same code doing this on each platform, which leads me to think there is a compiler option somewhere I need to change.

Linux on x86 is little-endian because x86 itself is little-endian; this isn't a Linux or compiler setting. There are also architectures running Linux that are big-endian. Your embedded device evidently stores 16-bit values big-endian, which is why the byte order differs between the two platforms.

The Internet Protocol defines network byte order as big-endian, so Linux provides ntohs() and htons() (and ntohl()/htonl() for 32-bit values) to convert between host and network byte order in a data stream.
On big-endian systems those functions do nothing because the host order already matches.