Encoding is the process of transforming a set of Unicode characters into a sequence of bytes. Decoding is the reverse process: transforming a sequence of encoded bytes back into a set of Unicode characters.

The Unicode Standard assigns a code point (a number) to each character in every supported script. A Unicode Transformation Format (UTF) is a way to encode that code point. The Unicode Standard version 3.2 uses the following UTFs:

UTF-8, which represents each code point as a sequence of one to four bytes.

UTF-16, which represents each code point as a sequence of one to two 16-bit integers.

UTF-32, which represents each code point as a 32-bit integer.
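The difference among the three formats can be seen by encoding a single code point under each. The following is a conceptual sketch using Python's built-in codecs rather than the .NET API the surrounding text describes; the "utf-16-le" and "utf-32-le" codec names select a fixed byte order with no byte order mark.

```python
# Encode one BMP character and one supplementary character
# under each Unicode Transformation Format.
for ch in ("A", "\U0001F600"):               # U+0041 and U+1F600
    print(f"U+{ord(ch):04X}:",
          ch.encode("utf-8").hex(" "),       # one to four bytes
          ch.encode("utf-16-le").hex(" "),   # one or two 16-bit units
          ch.encode("utf-32-le").hex(" "))   # a single 32-bit unit
```

Note that U+0041 needs one byte in UTF-8 but one 16-bit unit in UTF-16, while U+1F600, which lies outside the Basic Multilingual Plane, needs four bytes in UTF-8 and a surrogate pair (two 16-bit units) in UTF-16.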

The GetByteCount method determines how many bytes result from encoding a set of Unicode characters, and the GetBytes method performs the actual encoding.

Likewise, the GetCharCount method determines how many characters result from decoding a sequence of bytes, and the GetChars and GetString methods perform the actual decoding.
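The count-then-convert pattern these methods support can be sketched as follows. The function names below are hypothetical Python analogues built on the standard UTF-16LE codec, not the .NET API itself.

```python
# Hypothetical analogues of the four methods named above,
# sketched with Python's standard UTF-16LE codec.
def get_byte_count(s: str) -> int:
    """How many bytes encoding s would produce (GetByteCount analogue)."""
    return len(s.encode("utf-16-le"))

def get_bytes(s: str) -> bytes:
    """Perform the actual encoding (GetBytes analogue)."""
    return s.encode("utf-16-le")

def get_char_count(b: bytes) -> int:
    """How many characters decoding b would produce (GetCharCount analogue).

    Caveat: .NET counts UTF-16 code units, while Python's len counts
    code points; the two agree for text confined to the Basic
    Multilingual Plane.
    """
    return len(b.decode("utf-16-le"))

def get_string(b: bytes) -> str:
    """Perform the actual decoding (GetString analogue)."""
    return b.decode("utf-16-le")

assert get_byte_count("A\u03a0") == 4   # two BMP characters, 2 bytes each
assert get_string(get_bytes("A\u03a0")) == "A\u03a0"
```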

The encoder can use the big-endian byte order (most significant byte first) or the little-endian byte order (least significant byte first). For example, the Latin Capital Letter A (code point U+0041) is serialized as follows (in hexadecimal):

Big-endian byte order: 00 41

Little-endian byte order: 41 00
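The serialization above can be reproduced directly; this sketch again uses Python's codecs, whose "utf-16-be" and "utf-16-le" names correspond to the two byte orders.

```python
# U+0041 serialized under each byte order (hex, as shown in the text).
a = "A"                                  # Latin Capital Letter A, U+0041
print(a.encode("utf-16-be").hex(" "))    # big-endian: 00 41
print(a.encode("utf-16-le").hex(" "))    # little-endian: 41 00
```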

Optionally, a UnicodeEncoding object provides a preamble, which is an array of bytes that you can prefix to the sequence of bytes resulting from the encoding process. If the preamble contains a byte order mark (code point U+FEFF), it helps the decoder determine the byte order and the transformation format, or UTF. The Unicode byte order mark is serialized as follows (in hexadecimal):

Big-endian byte order: FE FF

Little-endian byte order: FF FE
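Python's standard library exposes these two byte sequences as constants, which can serve to confirm the values given above; encoding U+FEFF itself yields the same bytes.

```python
import codecs

# The byte order mark (U+FEFF) serialized under each byte order.
print(codecs.BOM_UTF16_BE.hex(" "))   # fe ff
print(codecs.BOM_UTF16_LE.hex(" "))   # ff fe

# Encoding the code point U+FEFF directly yields the same two bytes.
assert "\ufeff".encode("utf-16-be") == codecs.BOM_UTF16_BE
assert "\ufeff".encode("utf-16-le") == codecs.BOM_UTF16_LE
```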

It is generally more efficient to store Unicode characters using the native byte order. For example, it is better to use the little-endian byte order on little-endian platforms, such as Intel machines.

The GetPreamble method returns an array of bytes containing the byte order mark. If this byte array is prefixed to an encoded stream, it helps the decoder to identify the encoding format used.
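How a decoder can use that prefixed byte order mark might be sketched as follows. The decode_with_bom function below is a hypothetical illustration in Python, not part of any library API: it inspects the first two bytes, picks the matching byte order, and decodes the rest of the stream.

```python
import codecs

def decode_with_bom(data: bytes) -> str:
    """Hypothetical sketch: choose the byte order from a UTF-16 byte
    order mark, then decode the remaining bytes accordingly."""
    if data.startswith(codecs.BOM_UTF16_BE):
        return data[2:].decode("utf-16-be")
    if data.startswith(codecs.BOM_UTF16_LE):
        return data[2:].decode("utf-16-le")
    # No preamble: fall back to an assumed (here, little-endian) order.
    return data.decode("utf-16-le")

# A big-endian stream prefixed with its preamble round-trips correctly,
# even though the fallback order is little-endian.
stream = codecs.BOM_UTF16_BE + "Pi \u03a0".encode("utf-16-be")
assert decode_with_bom(stream) == "Pi \u03a0"
```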

For more information on Unicode encoding, byte order, and the byte order mark, see The Unicode Standard at www.unicode.org.

Note

To enable error detection and to make the class instance more secure, use the UnicodeEncoding constructor that takes a throwOnInvalidBytes parameter and set that parameter to true. With error detection, a method that detects an invalid sequence of characters or bytes throws an ArgumentException. Without error detection, no exception is thrown, and the invalid sequence is generally ignored.
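The effect of turning error detection on or off can be illustrated with Python's codec error handlers, where errors="strict" plays the role of throwOnInvalidBytes=true and errors="ignore" the role of false. This is an analogy, not the .NET API: Python raises UnicodeDecodeError rather than ArgumentException.

```python
# An unpaired high surrogate (0xD800) is an invalid UTF-16 sequence.
bad = b"\x41\x00\x00\xd8"   # 'A' followed by a lone surrogate, little-endian

# "Error detection on": the invalid sequence raises an exception,
# analogous to the ArgumentException described above.
try:
    bad.decode("utf-16-le", errors="strict")
except UnicodeDecodeError as e:
    print("invalid sequence detected:", e.reason)

# "Error detection off": the invalid sequence is silently dropped.
print(bad.decode("utf-16-le", errors="ignore"))
```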

The following code example demonstrates how to encode a string of Unicode characters into a byte array by using UnicodeEncoding. The byte array is then decoded back into a string to demonstrate that no data is lost.
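A minimal sketch of the same round trip, using Python's UTF-16LE codec in place of UnicodeEncoding, might look like this:

```python
# Round-trip sketch: encode a string containing a non-ASCII character
# to UTF-16LE bytes, then decode it back and verify nothing was lost.
original = "This string contains the unicode character Pi (\u03a0)"

encoded = original.encode("utf-16-le")   # GetBytes analogue
decoded = encoded.decode("utf-16-le")    # GetString analogue

assert decoded == original               # no loss of data
print(decoded)
```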