Encoding is the process of transforming a sequence of Unicode characters into a sequence of bytes. Decoding is the process of transforming a sequence of encoded bytes back into a sequence of Unicode characters.

The Unicode Standard assigns a code point (a number) to each character in every supported script. A Unicode Transformation Format (UTF) is a way to encode that code point. The Unicode Standard uses the following UTFs:

UTF-8, which represents each code point as a sequence of one to four bytes.

UTF-16, which represents each code point as a sequence of one to two 16-bit integers.

UTF-32, which represents each code point as a 32-bit integer.
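
For example, the following minimal sketch prints how many bytes each transformation format needs for a character inside and for one outside the Basic Multilingual Plane. It uses only the standard Encoding.UTF8, Encoding.Unicode (UTF-16), and Encoding.UTF32 properties; the class name and the sample characters are illustrative:

    using System;
    using System.Text;

    class UtfWidths
    {
        static void Main()
        {
            // "A" (U+0041) fits in a single UTF-8 byte; the emoji U+1F600
            // needs the maximum width in every transformation format.
            foreach (string s in new[] { "A", "\U0001F600" })
            {
                Console.WriteLine("U+{0:X4}: UTF-8 = {1} bytes, UTF-16 = {2} bytes, UTF-32 = {3} bytes",
                    char.ConvertToUtf32(s, 0),
                    Encoding.UTF8.GetByteCount(s),
                    Encoding.Unicode.GetByteCount(s),
                    Encoding.UTF32.GetByteCount(s));
            }
        }
    }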

Note:

The UTF-7 encoding is supported only for protocols that require it, most often e-mail or newsgroup protocols. Because UTF-7 is neither particularly secure nor robust, UTF-8 should normally be preferred.

The encoder can use the big endian byte order (most significant byte first) or the little endian byte order (least significant byte first). For example, the Latin Capital Letter A (code point U+0041) is serialized as follows (in hexadecimal):

Big endian byte order: 00 41

Little endian byte order: 41 00
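
The following minimal sketch reproduces both serializations using the UnicodeEncoding constructor that selects the byte order; byteOrderMark is set to false here so that only the character bytes appear (the class name is illustrative):

    using System;
    using System.Text;

    class ByteOrderDemo
    {
        static void Main()
        {
            var bigEndian = new UnicodeEncoding(bigEndian: true, byteOrderMark: false);
            var littleEndian = new UnicodeEncoding(bigEndian: false, byteOrderMark: false);

            // The Latin Capital Letter A is a single 16-bit code unit,
            // so only the order of its two bytes changes.
            Console.WriteLine(BitConverter.ToString(bigEndian.GetBytes("A")));    // 00-41
            Console.WriteLine(BitConverter.ToString(littleEndian.GetBytes("A"))); // 41-00
        }
    }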

It is generally more efficient to store Unicode characters using the native byte order. For example, it is better to use the little endian byte order on little endian platforms, such as Intel computers.

Optionally, the UnicodeEncoding object provides a preamble, which is an array of bytes that can be prefixed to the sequence of bytes resulting from the encoding process. If the preamble contains a byte order mark (BOM), it helps the decoder determine the byte order and the transformation format or UTF. The GetPreamble method retrieves an array of bytes that can include the BOM. For more information on byte order and the byte order mark, see The Unicode Standard at the Unicode home page.
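
For example, the following minimal sketch constructs one UnicodeEncoding object for each byte order with byteOrderMark set to true, and prints the two-byte BOM that GetPreamble returns (FF FE for little endian, FE FF for big endian; the class name is illustrative):

    using System;
    using System.Text;

    class PreambleDemo
    {
        static void Main()
        {
            var littleEndian = new UnicodeEncoding(bigEndian: false, byteOrderMark: true);
            var bigEndian = new UnicodeEncoding(bigEndian: true, byteOrderMark: true);

            // GetPreamble returns the BOM when byteOrderMark is true,
            // and an empty array otherwise.
            Console.WriteLine(BitConverter.ToString(littleEndian.GetPreamble())); // FF-FE
            Console.WriteLine(BitConverter.ToString(bigEndian.GetPreamble()));    // FE-FF
        }
    }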

Note:

To enable error detection and to make the class instance more secure, the application should use the UnicodeEncoding constructor that takes a throwOnInvalidBytes parameter, and set that parameter to true. With error detection, a method that detects an invalid sequence of characters or bytes throws an ArgumentException. Without error detection, no exception is thrown, and the invalid sequence is generally ignored.
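
The following minimal sketch shows error detection in action. The decoder is constructed with throwOnInvalidBytes set to true, and the byte sequence (chosen for illustration) encodes an unpaired high surrogate, which is not valid UTF-16; decoding it therefore throws a DecoderFallbackException, which derives from ArgumentException:

    using System;
    using System.Text;

    class ErrorDetectionDemo
    {
        static void Main()
        {
            var strict = new UnicodeEncoding(bigEndian: false, byteOrderMark: true,
                                             throwOnInvalidBytes: true);

            // 0x00 0xD8 is U+D800 in little endian byte order: a lone
            // high surrogate with no matching low surrogate.
            byte[] invalid = { 0x00, 0xD8 };

            try
            {
                Console.WriteLine(strict.GetString(invalid));
            }
            catch (ArgumentException e)
            {
                Console.WriteLine("Invalid byte sequence: {0}", e.Message);
            }
        }
    }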

The following example demonstrates how to encode a string of Unicode characters into a byte array, using UnicodeEncoding. The byte array is decoded into a string to demonstrate that there is no loss of data.
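
A minimal sketch of such a round trip (the class name and sample string are illustrative):

    using System;
    using System.Text;

    class RoundTripDemo
    {
        static void Main()
        {
            UnicodeEncoding unicode = new UnicodeEncoding();

            string original = "This string contains pi (\u03C0).";

            // Encode the string into a UTF-16 byte array.
            byte[] encoded = unicode.GetBytes(original);

            // Decode the byte array back into a string.
            string decoded = unicode.GetString(encoded);

            Console.WriteLine(decoded);
            Console.WriteLine(original == decoded); // True: no data was lost
        }
    }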