Documentation

This module provides pure functions for compressing and decompressing
streams of data in the zlib format, represented by lazy ByteStrings.
This makes it easy to use either in memory or with disk or network IO.

Simple compression and decompression

This uses the default compression parameters. In particular it uses the
default compression level, which favours a higher compression ratio over
compression speed, though it does not use the maximum compression level.

Use compressWith to adjust the compression level or other compression
parameters.
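A minimal round trip, assuming this is the zlib package's Codec.Compression.Zlib module, looks like:

```haskell
import qualified Data.ByteString.Lazy.Char8 as BL
import Codec.Compression.Zlib (compress, decompress)

main :: IO ()
main = do
  let original = BL.pack "hello, hello, hello, hello"
      packed   = compress original      -- lazy ByteString in zlib format
      unpacked = decompress packed      -- recover the original bytes
  print (unpacked == original)          -- True
```

Because both sides work on lazy ByteStrings, the same code serves in-memory data and streams read from disk or the network.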

There are a number of errors that can occur. In each case an exception will
be thrown. The possible error conditions are:

if the stream does not start with a valid zlib header

if the compressed stream is corrupted

if the compressed stream ends prematurely

Note that the decompression is performed lazily. Errors in the data stream
may not be detected until the end of the stream is demanded (since it is
only at the end that the final checksum can be checked). If this is
important to you, you must make sure to consume the whole decompressed
stream before doing any IO action that depends on it.
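One way to consume the whole stream up front is to force its length before returning it; the helper name here is a hypothetical sketch, assuming the zlib package's decompress:

```haskell
import qualified Data.ByteString.Lazy as BL
import Codec.Compression.Zlib (decompress)
import Control.Exception (evaluate)

-- Force the entire decompressed stream (and hence the final checksum
-- check) before any dependent IO, so corruption surfaces here rather
-- than part-way through a write.
decompressStrict :: BL.ByteString -> IO BL.ByteString
decompressStrict input = do
  let output = decompress input
  _ <- evaluate (BL.length output)  -- demands every chunk
  return output
```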

The compressBufferSize is the size of the first output buffer, containing
the compressed data. If you know an approximate upper bound on the size of
the compressed data then setting this parameter can save memory. The default
compression output buffer size is 16k. If your estimate is wrong it does
not matter too much: the default buffer size will be used for the remaining
chunks.
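Assuming the zlib package's compressWith and CompressParams record, the first-buffer size can be overridden like so:

```haskell
import qualified Data.ByteString.Lazy as BL
import Codec.Compression.Zlib
  (compressWith, defaultCompressParams, CompressParams(..))

-- If we expect the compressed output to fit in about 4k, start with a
-- 4k first buffer instead of the 16k default; later chunks fall back
-- to the default size.
compressSmall :: BL.ByteString -> BL.ByteString
compressSmall = compressWith defaultCompressParams
  { compressBufferSize = 4096 }
```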

The decompressBufferSize is the size of the first output buffer,
containing the uncompressed data. If you know an exact or approximate upper
bound on the size of the decompressed data then setting this parameter can
save memory. The default decompression output buffer size is 32k. If your
estimate is wrong it does not matter too much: the default buffer size will
be used for the remaining chunks.
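The decompression side is analogous, assuming the zlib package's decompressWith and DecompressParams record:

```haskell
import qualified Data.ByteString.Lazy as BL
import Codec.Compression.Zlib
  (decompressWith, defaultDecompressParams, DecompressParams(..))

-- If we expect at most about 8k of decompressed output, start with an
-- 8k first buffer instead of the 32k default.
decompressSmall :: BL.ByteString -> BL.ByteString
decompressSmall = decompressWith defaultDecompressParams
  { decompressBufferSize = 8192 }
```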

This specifies the size of the compression window. Larger values of this
parameter result in better compression at the expense of higher memory
usage.

The compression window size is 2 raised to the power of the window bits.
The window bits must be in the range 9..15, which corresponds to
compression window sizes of 512b to 32Kb. The default is 15, which is also
the maximum size.
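Assuming the zlib package's windowBits smart constructor and the compressWindowBits field, a low-memory setting might look like:

```haskell
import qualified Data.ByteString.Lazy as BL
import Codec.Compression.Zlib
  (compressWith, defaultCompressParams, CompressParams(..), windowBits)

-- windowBits 9 gives a 2^9 = 512 byte window: the lowest memory use
-- and the weakest matching. The default, 15, gives a 32Kb window.
compressLowMem :: BL.ByteString -> BL.ByteString
compressLowMem = compressWith defaultCompressParams
  { compressWindowBits = windowBits 9 }
```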

The total amount of memory used depends on the window bits and the
MemoryLevel. See the MemoryLevel for the details.

The MemoryLevel parameter specifies how much memory should be allocated
for the internal compression state. It is a trade-off between memory usage,
compression ratio and compression speed. Using more memory allows faster
compression and a better compression ratio.
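Assuming the zlib package's memoryLevel smart constructor and the compressMemoryLevel field, the level can be raised like so:

```haskell
import qualified Data.ByteString.Lazy as BL
import Codec.Compression.Zlib
  (compressWith, defaultCompressParams, CompressParams(..), memoryLevel)

-- Trade memory for speed and ratio: level 9 is the maximum, 1 the
-- minimum, and 8 the default.
compressMaxMem :: BL.ByteString -> BL.ByteString
compressMaxMem = compressWith defaultCompressParams
  { compressMemoryLevel = memoryLevel 9 }
```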

The total amount of memory used for compression depends on the WindowBits
and the MemoryLevel. For decompression it depends only on the
WindowBits. The totals are given by the functions:
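These totals can be sketched as follows; the exact formulas here are an assumption taken from the underlying zlib C library's documentation, but they agree with the 256Kb figure quoted below for the defaults:

```haskell
-- Approximate memory use in bytes, per the underlying zlib C library.
compressMemory :: Int -> Int -> Int
compressMemory wBits mLevel = 2 ^ (wBits + 2) + 2 ^ (mLevel + 9)

decompressMemory :: Int -> Int
decompressMemory wBits = 2 ^ wBits

-- compressMemory 15 8  = 131072 + 131072 = 262144 bytes (256Kb)
-- decompressMemory 15  = 32768 bytes (32Kb)
```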

For example, compression with the default windowBits = 15 and
memLevel = 8 uses 256Kb. A network server with 100 concurrent compressed
streams would therefore use about 25Mb. The memory per stream can be
halved (at the cost of somewhat degraded and slower compression) by
reducing the windowBits and memLevel by one.

Decompression takes less memory: the default windowBits = 15 corresponds
to just 32Kb.

Use the filtered compression strategy for data produced by a filter (or
predictor). Filtered data consists mostly of small values with a somewhat
random distribution. In this case, the compression algorithm is tuned to
compress them better. The effect of this strategy is to force more Huffman
coding and less string matching; it is somewhat intermediate between
defaultCompressionStrategy and huffmanOnlyCompressionStrategy.
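Assuming the zlib package's filteredStrategy and the compressStrategy field, selecting this strategy looks like:

```haskell
import qualified Data.ByteString.Lazy as BL
import Codec.Compression.Zlib
  (compressWith, defaultCompressParams, CompressParams(..),
   filteredStrategy)

-- Tune the encoder for filtered/predictor output: more Huffman coding,
-- less string matching than the default strategy.
compressFiltered :: BL.ByteString -> BL.ByteString
compressFiltered = compressWith defaultCompressParams
  { compressStrategy = filteredStrategy }
```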