Although Compress::Zlib has a pair of functions called compress and uncompress,
they are not related to the Unix programs of the same name.
The Compress::Zlib module is not compatible with Unix compress.

If you have the uncompress program available,
you can use this to read compressed files:

open F, "uncompress -c $filename |"
    or die "Cannot run uncompress: $!\n";
while (<F>)
{
    ...
}
close F;

Alternatively, if you have the gunzip program available, you can use this to read compressed files:

open F, "gunzip -c $filename |"
    or die "Cannot run gunzip: $!\n";
while (<F>)
{
    ...
}
close F;

and this to write compressed files, if you have the compress program available:
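The write example itself is omitted above; a minimal sketch of the same piped approach, assuming the compress program is on your PATH and that $filename and $data are your own variables:

```perl
# Open a pipe that feeds whatever we print through compress
# and into $filename.
open F, "| compress -c > $filename"
    or die "Cannot run compress: $!\n";
print F $data;
close F
    or die "compress reported an error: $!\n";
```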

The Archive::Tar module can optionally use Compress::Zlib (via the IO::Zlib module) to access tar files that have been compressed with gzip. Unfortunately, tar files compressed with the Unix compress utility cannot be read by Compress::Zlib, and so cannot be accessed directly by Archive::Tar.

If the uncompress or gunzip programs are available, you can use one of these workarounds to read .tar.Z files with Archive::Tar.
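The workarounds themselves are omitted above. One of them can be sketched as follows, assuming uncompress is available and "somefile.tar.Z" stands in for your archive: pipe the archive through uncompress and hand the resulting filehandle to Archive::Tar.

```perl
use strict;
use warnings;
use Archive::Tar;

my $filename = "somefile.tar.Z";   # hypothetical archive name

open F, "uncompress -c $filename |"
    or die "Cannot run uncompress: $!\n";

# Archive::Tar reads the uncompressed tar stream from the pipe.
my $tar = Archive::Tar->new(*F);
print "$_\n" for $tar->list_files();

close F;
```

The same sketch works with "gunzip -c" in place of "uncompress -c" if only gunzip is installed.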

Note that this technique has a limitation. Some compression formats store extra information along with the compressed data payload. For example, gzip can optionally store the original filename, and Zip stores a lot of information about the original file. If the original compressed file contains any of this extra information, it will not be transferred to the new compressed file using the technique above.

Although at first sight there seems to be quite a lot going on in Apache::GZip, you could sum up what the code does as follows: read the contents of the file in $r->filename, compress it, and write the compressed data to standard output. That's all.

The code has to jump through a few hoops to achieve this because:

The gzip support in Compress::Zlib version 1.x can only work with a real filesystem filehandle. The filehandles used by Apache modules are not associated with the filesystem.

That means all the gzip support has to be done by hand, in this case by creating a tied filehandle to deal with creating the gzip header and trailer.

IO::Compress::Gzip doesn't have that filehandle limitation (this was one of the reasons for writing it in the first place). So if IO::Compress::Gzip is used instead of Compress::Zlib the whole tied filehandle code can be removed. Here is the rewritten code.
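The rewritten module is not reproduced here, but a sketch of what it might look like follows. The mod_perl details (Apache::Constants, the handler interface) are illustrative only; the key point is that a single one-shot gzip call replaces all of the tied-filehandle machinery:

```perl
package Apache::GZip;

use strict;
use warnings;
use Apache::Constants qw(:common);
use IO::Compress::Gzip qw(gzip $GzipError);

sub handler
{
    my $r = shift;

    $r->content_type('text/html');
    $r->content_encoding('gzip');
    $r->send_http_header;
    return OK if $r->header_only;

    # One-shot gzip: read $r->filename, write the compressed data
    # to standard output ('-'). Minimal keeps the filename out of
    # the gzip header.
    gzip $r->filename => '-', Minimal => 1
        or return SERVER_ERROR;

    return OK;
}

1;
```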

The use of one-shot gzip above just reads from $r->filename and writes the compressed data to standard output.

Note the use of the Minimal option in the code above. When using gzip for Content-Encoding you should always use this option. In the example above it prevents the filename being included in the gzip header and makes the gzip data stream slightly smaller.
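To see the difference, compare a stream written with Minimal against one written without it. The filename below is hypothetical; with file input, one-shot gzip stores the filename and timestamp in the header unless Minimal is set, so the Minimal stream comes out a few bytes shorter:

```perl
use strict;
use warnings;
use IO::Compress::Gzip qw(gzip $GzipError);

my $file = "page.html";    # hypothetical input file

my ($minimal, $full);
gzip $file => \$minimal, Minimal => 1
    or die "gzip failed: $GzipError\n";
gzip $file => \$full
    or die "gzip failed: $GzipError\n";

# The Minimal header drops the filename and zeroes the timestamp.
printf "Minimal: %d bytes, default: %d bytes\n",
    length $minimal, length $full;
```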

The Net::FTP module provides two low-level methods, stor and retr, that both return filehandles. These filehandles can be used with the IO::Compress/Uncompress modules to compress or uncompress files as they are read from or written to an FTP server on the fly, without having to create a temporary file.

First, here is code that uses retr to uncompress a file as it is read from the FTP server.
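That code is omitted above; a sketch of the approach, with a hypothetical server, login, and filenames, might look like this. AutoClose makes gunzip close the FTP data connection when it has finished with it:

```perl
use strict;
use warnings;
use Net::FTP;
use IO::Uncompress::Gunzip qw(gunzip $GunzipError);

# Hypothetical server and filenames -- substitute your own.
my $ftp = Net::FTP->new("ftp.example.com")
    or die "Cannot connect: $@\n";
$ftp->login("anonymous", 'anon@example.com')
    or die "Cannot login: " . $ftp->message;
$ftp->binary;

# retr returns a filehandle connected to the remote file; gunzip
# reads compressed data from it and writes the uncompressed data
# to a local file on the fly.
my $retr_fh = $ftp->retr("data.txt.gz")
    or die "Cannot retrieve file: " . $ftp->message;

gunzip $retr_fh => "data.txt", AutoClose => 1
    or die "Cannot uncompress: $GunzipError\n";

$ftp->quit;
```

Writing works the same way in reverse: pass the filehandle returned by stor as the output parameter of gzip.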

A fairly common use-case is where compressed data is embedded in a larger file/buffer and you want to read both.

As an example consider the structure of a zip file. This is a well-defined file format that mixes both compressed and uncompressed sections of data in a single file.

For the purposes of this discussion you can think of a zip file as a sequence of compressed data streams, each of which is prefixed by an uncompressed local header. The local header contains information about the compressed data stream, including the name of the compressed file and, in particular, the length of the compressed data stream.

To illustrate how to use InputLength, here is a script that walks a zip file and prints out how many lines are in each compressed file. (If you intend to write code that walks through a zip file for real, see "Walking through a zip file" in IO::Uncompress::Unzip.) Also, although this example uses zlib-based compression, the technique can be used with the other IO::Uncompress::* modules.
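The script itself is omitted above; a sketch of what it does follows. It reads each local header by hand, skips the filename and extra fields, and then lets RawInflate consume exactly the member's compressed data. It assumes the archive was written without streaming (so the compressed length really is in the local header) and that the members use deflate:

```perl
use strict;
use warnings;
use IO::File;
use IO::Uncompress::RawInflate qw($RawInflateError);

my $zipfile = shift @ARGV;
my $fh = IO::File->new("<$zipfile")
    or die "Cannot open '$zipfile': $!\n";

while (1)
{
    # Stop at anything other than a local file header signature,
    # e.g. when we reach the central directory.
    read($fh, my $sig, 4) == 4 or last;
    last unless unpack("V", $sig) == 0x04034b50;

    # The remaining 26 bytes of the local header.
    read($fh, my $header, 26) == 26
        or die "Truncated local header\n";
    my ($method, $compressedLength, $filenameLength, $extraLength) =
        (unpack("v5 V3 v2", $header))[2, 6, 8, 9];

    read($fh, my $name, $filenameLength) == $filenameLength
        or die "Truncated filename\n";
    read($fh, my $extra, $extraLength) == $extraLength
        or die "Truncated extra field\n";

    die "Cannot handle compression method $method for '$name'\n"
        unless $method == 8;    # deflate

    # InputLength stops RawInflate reading past this member's
    # compressed data, leaving $fh at the next local header.
    my $inf = IO::Uncompress::RawInflate->new($fh,
            Transparent => 0,
            InputLength => $compressedLength)
        or die "Cannot uncompress '$name': $RawInflateError\n";

    my $lines = 0;
    $lines++ while <$inf>;

    print "$name: $lines lines\n";
}
```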

The call to IO::Uncompress::RawInflate creates a new filehandle, $inf, that can be used to read from the parent filehandle $fh, uncompressing as it goes. The InputLength option guarantees that at most $compressedLength bytes of compressed data will be read from $fh (the only exception is an error case such as a truncated file or a corrupt data stream).

This means that once RawInflate is finished, $fh will be left positioned at the byte directly after the compressed data stream.

The difference here is the addition of the temporary variable $data. This is used to store a copy of the compressed data while it is being uncompressed.
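For reference, the temporary-storage variant being described can be sketched like this. To keep the fragment self-contained it builds its own raw deflate stream in memory; in the zip case $fh and $compressedLength would come from the local header as before:

```perl
use strict;
use warnings;
use IO::Compress::RawDeflate qw(rawdeflate $RawDeflateError);
use IO::Uncompress::RawInflate qw($RawInflateError);

# Build an in-memory "file" holding a raw deflate stream, just so
# this fragment can run on its own.
rawdeflate \"line 1\nline 2\n" => \my $compressed
    or die "rawdeflate failed: $RawDeflateError\n";
open my $fh, '<', \$compressed or die $!;
my $compressedLength = length $compressed;

# Copy the compressed data stream into temporary storage...
read($fh, my $data, $compressedLength) == $compressedLength
    or die "Truncated compressed data\n";

# ...then uncompress from the in-memory copy instead of reading
# directly from $fh. This costs $compressedLength bytes of
# temporary storage while the data is uncompressed.
my $inf = IO::Uncompress::RawInflate->new(\$data, Transparent => 0)
    or die "Cannot uncompress: $RawInflateError\n";

my $lines = 0;
$lines++ while <$inf>;
print "$lines lines\n";
```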

If you know that $compressedLength isn't that big, then using temporary storage won't be a problem. But if $compressedLength is very large, or you are writing an application that other people will use and so have no idea how big $compressedLength will be, it could be an issue.

Using InputLength avoids the use of temporary storage and means the application can cope with large compressed data streams.

One final point: obviously InputLength can only be used when you know the length of the compressed data beforehand, as is the case here with a zip file.