> On Sun, 23 Nov 1997, Umar Goldeli wrote:
>
> > YES!
> > I love it! More power to compressed pipes!
> > Wonderful... need I say more?
> > > Compressed intercache compressions. I've made a lame hack of this for my
> > > home squid: I use ssh to setup a non-encrypted compressed pipe and use
>
> I have some questions:
>
> - would this be for specific persistent connections, or all connections?
> - if for all connections, what of already compressed files?
> - what is the performance hit for the compression over lots of small
> text files?
>
> Don't get me wrong - the idea of compressed pipes sounds good; I'm just
> curious as to what sort of benefits one might expect, that's all.

Well, the way I *currently* use it (with 1.1.x, which has no persistent
connections), I use a cache_host_acl to make certain document types
(text/* basically) go through the compressed pipe...
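For anyone wanting to try the pipe itself: it's just ssh with compression
turned on and encryption turned off. A minimal sketch, where the hostname
and both port numbers are made up for illustration:

```shell
# Compressed, unencrypted tunnel to the parent cache's HTTP port.
# -C turns on compression, -c none disables encryption (ssh1's "none"
# cipher), -f backgrounds ssh after starting a do-nothing command.
# parent.example.com, 3129 and 3128 are placeholders for this sketch.
ssh -C -c none -f -L 3129:127.0.0.1:3128 parent.example.com sleep 99999999
```

With that up, point a cache_host entry at localhost:3129 and use the
cache_host_acl trick above to send only text/* through it.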

With lots of small files I don't get the full benefit of compression (the
symbol dictionary needs to build up), but it's still an advantage.. With
persistent connections it would be a greater advantage (no dictionary
buildup time, as long as the text is similar in nature)..
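The dictionary effect is easy to demonstrate (the file count and contents
below are just an illustration): compress fifty small, similar HTML files
one at a time, then as a single stream, and compare the totals:

```shell
# Each file compressed alone restarts gzip's dictionary from scratch;
# one continuous stream builds the dictionary once and reuses it.
dir=$(mktemp -d)
for i in $(seq 1 50); do
  printf '<html><head><title>page %d</title></head><body>some similar text</body></html>\n' "$i" > "$dir/page$i.html"
done

separate=0
for f in "$dir"/page*.html; do
  separate=$((separate + $(gzip -6 -c "$f" | wc -c)))
done
stream=$(cat "$dir"/page*.html | gzip -6 -c | wc -c)

echo "separate: $separate bytes   one stream: $stream bytes"
rm -r "$dir"
```

The single stream comes out far smaller, which is exactly what a
persistent compressed connection buys you.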

So with compression support, you would end up with two persistent
connections to your parent cache.. One for compressed things, one for
non-compressible types...

To get an idea about the type of savings, wget a popular website:
wget -A .html,.htm,.txt -r -l5 -D .popular.site.com http://www.popular.site.com/

And tar the output... And gzip -6 it.. See how much it compresses..
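The tar-and-gzip step, spelled out (using a generated directory of text
files as a stand-in for the wget mirror; swap in whatever directory wget
actually created, and note "site.tar" is just an example name):

```shell
# Stand-in for the mirrored site: a directory of repetitive text files.
site=$(mktemp -d)
for i in $(seq 1 20); do
  yes 'the quick brown fox jumps over the lazy dog' | head -200 > "$site/page$i.html"
done

tar cf /tmp/site.tar -C "$site" .
gzip -6 -c /tmp/site.tar > /tmp/site.tar.gz
orig=$(wc -c < /tmp/site.tar)
comp=$(wc -c < /tmp/site.tar.gz)
echo "gzip -6 shrinks the tar to $((comp * 100 / orig))% of its size"
rm -r "$site"
```

Real HTML won't compress as well as this synthetic text, but text/*
content typically still shrinks to a fraction of its size.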

Then time the compression and decompression and figure out how many MB/s
it's doing on an idle computer.. Figure out how many MB/s it could do on
your cache; if your result is bigger than your load then you can use
compression..
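One way to put a number on it (the sample data is synthetic, so real
pages will give different figures; %N needs GNU date):

```shell
# Time gzip -6 over a known number of bytes and derive MB/s.
mb=16
sample=$(mktemp)
# 16 MB of repetitive text as a stand-in for cached pages.
yes 'GET /index.html HTTP/1.0 some fairly compressible text' | head -c $((mb * 1024 * 1024)) > "$sample"

start=$(date +%s%N)            # nanoseconds since the epoch (GNU date)
gzip -6 -c "$sample" > /dev/null
end=$(date +%s%N)

elapsed_ms=$(( (end - start) / 1000000 ))
[ "$elapsed_ms" -gt 0 ] || elapsed_ms=1   # guard against clock granularity
echo "gzip -6: roughly $(( mb * 1000 / elapsed_ms )) MB/s on this machine"
rm "$sample"
```

Compare that figure against your cache's peak transfer rate; if the
compressor is faster than your link plus load, it's a win.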

If gzip -6 is too slow then get a copy of lzop and try that.. It's much
faster (esp for decompression)..