In addition, there are slower compression levels that achieve a quite competitive compression ratio while still decompressing at this very high speed.

The LZO algorithms and implementations are copyrighted open source software distributed under the GNU General Public License.

Introduction
------------

LZO is a data compression library which is suitable for data de-/compression in real-time. This means it favours speed over compression ratio.

The acronym LZO stands for Lempel-Ziv-Oberhumer.

LZO is written in ANSI C. Both the source code and the compressed data format are designed to be portable across platforms.

LZO implements a number of algorithms with the following features:

- Decompression is simple and *very* fast.
- Requires no memory for decompression.
- Compression is pretty fast.
- Requires 64 kB of memory for compression.
- Allows you to dial up extra compression at a speed cost in the compressor. The speed of the decompressor is not reduced.
- Includes compression levels for generating pre-compressed data which achieve a quite competitive compression ratio.
- There is also a compression level which needs only 8 kB for compression.
- Algorithm is thread safe.
- Algorithm is lossless.

LZO supports overlapping compression and in-place decompression.

Design criteria
---------------

LZO was designed with speed in mind. Decompressor speed has been favoured over compressor speed. Real-time decompression should be possible for virtually any application. The implementation of the LZO1X decompressor in optimized i386 assembler code runs at about one third of the speed of a memcpy() - and is even faster for many files.

In fact, I first wrote the decompressor of each algorithm, thereby defining the compressed data format, then verified it with manually created test data, and only afterwards added the compressor.

Performance
-----------

To keep you interested, here is an overview of the average results when compressing the Calgary Corpus test suite with a blocksize of 256 kB, originally done on an ancient Intel Pentium 133.

The naming convention of the various algorithms goes LZOxx-N, where N is the compression level. Range 1-9 indicates the fast standard levels using 64 kB memory for compression. Level 99 offers better compression at the cost of more memory (256 kB), and is still reasonably fast. Level 999 achieves nearly optimal compression - but it is slow and uses much memory, and is mainly intended for generating pre-compressed data.

The C version of LZO1X-1 is about 4-5 times faster than the fastest zlib compression level, and it also outperforms other algorithms like LZRW1-A and LZV in compression ratio, compression speed and decompression speed.

Notes:
- CxB is the number of blocks
- K/s is the speed measured in 1000 uncompressed bytes per second
- the assembler decompressors are even faster

Short documentation
-------------------

LZO is a block compression algorithm - it compresses and decompresses a block of data. The block size must be the same for compression and decompression.

LZO compresses a block of data into matches (using a sliding dictionary) and runs of non-matching literals. LZO takes special care of long matches and long literal runs, so it produces good results on highly redundant data and deals acceptably with non-compressible data.

When dealing with incompressible data, LZO expands the input block by at most 16 bytes per 1024 bytes of input.

I have verified LZO with valgrind and other memory checkers, and, in addition to compressing gigabytes of files while tuning some parameters, I have also consulted various `lint' programs to spot potential portability problems. LZO is free of any known bugs.

The algorithms
--------------

There are too many algorithms implemented. But I want to support unlimited backward compatibility, so I will not reduce the LZO distribution in the future.

As the many object files are mostly independent of each other, the size overhead for an executable statically linked with the LZO library is usually pretty low (just a few kB) because the linker will only add the modules that you are actually using.

I first published LZO1 and LZO1A in the Internet newsgroups comp.compression and comp.compression.research in March 1996. They are mainly included for compatibility reasons. The LZO2A decompressor is too slow, and there is no fast compressor anyway.

My experiments have shown that LZO1B is good with a large blocksize or with very redundant data, LZO1F is good with a small blocksize or with binary data, and LZO1X is often the best choice of all. LZO1Y and LZO1Z are almost identical to LZO1X - they can achieve a better compression ratio on some files. Beware, your mileage may vary.

Usage of the library
--------------------

Despite its size, the basic usage of LZO is really very simple.

The program examples/simple.c shows a fully working example. See also LZO.FAQ for more information.

Building LZO
------------

As LZO uses Autoconf, Automake and Libtool, building under UNIX systems should be quite straightforward. Shared libraries are supported on many architectures as well. For detailed instructions see the file INSTALL.

Please note that due to the design of the ELF executable format, a shared library on i386 systems (e.g. Linux) is a little slower, so you may want to link your applications against the static version (liblzo2.a) anyway.

For building under DOS, Win16, Win32, OS/2 and other systems take a look at the file B/00readme.txt.

In case of trouble (like decompression data errors), try recompiling everything without optimizations - LZO has been known to break the optimizer of some compilers. See the file BUGS.

LZO is written in ANSI C. In particular this means:
- your compiler must understand prototypes
- your compiler must understand prototypes in function pointers
- your compiler must correctly promote integrals ("value-preserving")
- your preprocessor must implement #elif, #error and stringizing
- you must have a conforming and correct <limits.h> header
- you must have <stddef.h>, <string.h> and other ANSI C headers
- you should have size_t and ptrdiff_t

Portability
-----------

I have built and tested LZO successfully on a variety of platforms including DOS (16 + 32 bit), Windows 3.x (16-bit), Win32, Win64, Linux, *BSD, HP-UX and many more.

LZO is also reported to work under AIX, ConvexOS, IRIX, MacOS, PalmOS (Pilot), PSX (Sony Playstation), Solaris, SunOS, TOS (Atari ST) and VxWorks. Furthermore it is said that its performance on a Cray is superior to all other machines...

And I think it would be much fun to translate the decompressors to Z-80 or 6502 assembly.

The future
----------

Here is what I'm planning for the next months. No promises, though...

Some comments about the source code
-----------------------------------

Be warned: the main source code in the `src' directory is a real pain to understand, as I've experimented with hundreds of slightly different versions. It contains many #ifs and some gotos, and it is *completely optimized for speed* and not for readability. Code sharing between the different algorithms is implemented by stressing the preprocessor - this can be really confusing. Lots of macros and assertions don't make things better.

Nevertheless LZO compiles very quietly on a variety of compilers with the highest warning levels turned on, even in C++ mode.