Source: Wikipedia. Pages: 75. Chapters: Huffman coding, Lossless data compression, Arithmetic coding, Run-length encoding, Lempel-Ziv-Welch, Entropy encoding, DEFLATE, Burrows-Wheeler transform, Shannon-Fano coding, LZ77 and LZ78, PackBits, Bzip2, Fibonacci coding, Elias gamma coding, Prefix code, Elias delta coding, Range encoding, Golomb coding, Delta encoding, PAQ, Lossless JPEG, Lempel-Ziv-Markov chain algorithm, Dynamic Markov compression, JBIG2, Variable-length code, Universal code, MrSID, Move-to-front transform, HTTP compression, Package-merge algorithm, Dictionary coder, Truncated binary encoding, Liblzg, FreeArc, Prediction by partial matching, Canonical Huffman code, LEB128, Embedded Zerotrees of Wavelet transforms, Adam7 algorithm, Elias omega coding, Adaptive Huffman coding, Adaptive coding, FELICS, LZX, Lempel-Ziv-Stac, Statistical Lempel Ziv, Huffyuv, SheerVideo, LZWL, NegaFibonacci coding, Lempel-Ziv-Storer-Szymanski, Exponential-Golomb coding, Unary coding, Context-adaptive binary arithmetic coding, Lempel-Ziv-Oberhumer, CCSDS 122.0-B-1, Incremental encoding, Levenstein coding, Microsoft Point-to-Point Compression, Chain code, Lagarith, Incompressible string, Recursive indexing, Sequitur algorithm, Byte pair encoding, Context-adaptive variable-length coding, Context tree weighting, Algorithm BSTW, MSU Lossless Video Codec, LZRW, LZJB, Modified Huffman coding, QUAD. Excerpt: Arithmetic coding is a form of variable-length entropy encoding used in lossless data compression. Normally, a string of characters such as the words "hello there" is represented using a fixed number of bits per character, as in the ASCII code. When a string is converted to arithmetic encoding, frequently used characters will be stored with fewer bits and not-so-frequently occurring characters will be stored with more bits, resulting in fewer bits used in total. 
Arithmetic coding differs from other forms of entropy encoding such as Huffman coding in that rather than separating the input into component symbols and replacing each with a code, arithmetic coding encodes the entire message into a single number, a fraction n where 0.0 ≤ n < 1.0.

In the simplest case, the probability of each symbol occurring is equal. For example, consider a sequence taken from a set of three symbols, A, B, and C, each equally likely to occur. Simple block encoding would use 2 bits per symbol, which is wasteful: one of the four bit patterns is never used. A more efficient solution is to represent the sequence as a rational number between 0 and 1 in base 3, where each digit represents a symbol. For example, the sequence "ABBCAB" could become 0.011201₃ (mapping A→0, B→1, C→2). The next step is to encode this ternary number using a fixed-point binary number of sufficient precision to recover it, such as 0.001011001₂; this is only 9 bits, 25% smaller than the 12 bits of the naïve block encoding. This is feasible for long sequences because there are efficient, in-place algorithms for converting the base of arbitrarily precise numbers. To decode the value, knowing the original string had length 6, one can simply convert back to base 3, round to 6 digits, and recover the string.

In general, arithmetic coders can produce near-optimal output for any given set of symbols and probabilities (the optimal value is −log₂ P bits for each symbol of probability P; see source coding theorem). Compression algorithms that use arithmetic coding st
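The base-3 walkthrough above can be sketched in a few lines of Python. This is an illustration of the equal-probability example only, not a general arithmetic coder; the symbol-to-digit mapping (A→0, B→1, C→2) and all function names are assumptions made for the sketch. Exact rational arithmetic stands in for the in-place base-conversion algorithms the text mentions.

```python
# Sketch of the equal-probability example: encode "ABBCAB" as a base-3
# fraction, emit 9 binary digits, then round back to 6 ternary digits.
from fractions import Fraction

SYMBOLS = "ABC"  # assumed mapping: A -> 0, B -> 1, C -> 2

def encode(message):
    """Return the message as an exact base-3 fraction 0 <= n < 1."""
    n = Fraction(0)
    for i, ch in enumerate(message, start=1):
        n += Fraction(SYMBOLS.index(ch), 3 ** i)
    return n

def to_binary(n, bits):
    """Fixed-point binary expansion of n, `bits` digits after the point."""
    out = []
    for _ in range(bits):
        n *= 2
        digit = int(n)          # next binary digit, 0 or 1
        out.append(str(digit))
        n -= digit
    return "".join(out)

def decode(n, length):
    """Round n to `length` base-3 digits and read off the symbols."""
    k = round(n * 3 ** length)  # nearest length-digit ternary fraction
    digits = []
    for _ in range(length):
        k, d = divmod(k, 3)
        digits.append(SYMBOLS[d])
    return "".join(reversed(digits))

msg = "ABBCAB"
n = encode(msg)                 # exactly 0.011201 in base 3 (= 127/729)
bits = to_binary(n, 9)          # 9 bits, versus 12 for 2-bit block coding
approx = Fraction(int(bits, 2), 2 ** 9)
print(bits, decode(approx, len(msg)))   # -> 001011001 ABBCAB
```

Note that the rounding step in `decode` matters: truncating 0.001011001₂ to six ternary digits would give 0.011200₃, but rounding recovers 0.011201₃ and hence the original string, exactly as the text describes.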