Great work. I thought I was the only one doing it this way, although I only gzip.

Comment by cdude — April 26, 2007

The Dean Edwards JS decompression algorithm can take up a lot of CPU if the script is large. It takes a few seconds to unpack TinyMCE (150k) on a P4 machine in Firefox, so that algorithm isn’t usable on larger scripts, since the decompression has to run every time the JS is loaded, even from the local browser cache.

I’ve also seen that if you just remove all comments and whitespace and then gzip, the result can be even smaller than the Dean Edwards algorithm plus gzip; I guess the Huffman coding can compress the script better when it’s still plain text.
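(A rough way to check that claim is to gzip both variants of the same script and compare byte counts. A minimal sketch follows; it assumes you have already produced a comment/whitespace-stripped copy and a Dean Edwards packed copy of the script, and the file names are placeholders.)

```python
import gzip
from pathlib import Path

# Compare on-disk size and gzipped size of two variants of the same script:
# one run through a minifier (comments/whitespace removed) and one run through
# the Dean Edwards packer. Both file names are placeholders.
def gz_size(path: Path) -> int:
    return len(gzip.compress(path.read_bytes(), compresslevel=9))

for name in ("tiny_mce_jsmin.js", "tiny_mce_packed.js"):
    p = Path(name)
    print(name, p.stat().st_size, "bytes ->", gz_size(p), "bytes gzipped")
```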

I’m a tad confused. If you have mod_deflate/mod_gzip within Apache, why is this necessary? We’ve used these on Apache 1.3 and 2.0 for years and years to compress all outgoing text files on the fly, and since this is done at the compiled C level (it being an Apache module), there is no CPU concern. Visit http://www.silverstripe.com and experience the excitement of compressed JS/CSS/HTML et al.
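(A quick way to see whether a server really is compressing on the fly is to send an Accept-Encoding: gzip header and inspect what comes back; a minimal sketch, using the URL from the comment, which may of course behave differently today.)

```python
import urllib.request

# Request a page while advertising gzip support, then report what came back.
# urllib does not decompress automatically, so len(body) is the size on the wire.
req = urllib.request.Request(
    "http://www.silverstripe.com/",
    headers={"Accept-Encoding": "gzip"},
)
with urllib.request.urlopen(req) as resp:
    body = resp.read()
    print("Content-Encoding:", resp.headers.get("Content-Encoding"))
    print("Bytes on the wire:", len(body))
```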

This is a good thing to run once: save the JavaScript and CSS into two files and be done with it. Compressing on demand is a worthless waste of resources, and you never know what bugs will be introduced if you make tiny changes to the files.
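(That "run it once" approach is easy to script as a one-off build step; a minimal sketch, assuming the minified files already exist under these placeholder names.)

```python
import gzip
from pathlib import Path

# One-off build step: keep a pre-gzipped copy next to each static file so
# nothing has to be compressed per request. File names are placeholders.
for name in ("site.min.js", "site.min.css"):
    data = Path(name).read_bytes()
    Path(name + ".gz").write_bytes(gzip.compress(data, compresslevel=9))
```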

Comment by Nice — April 27, 2007

I think it’s important to note that the performance hit of this script only occurs when the script has detected that either a compressed version does not exist OR that the script has been modified and needs to be recompressed. That’s why the library wants a “cache” directory. Once the script has compressed the files there’s no performance hit, and quite a substantial performance boost.
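(The behaviour described above, rebuilding only when the compressed copy is missing or stale, amounts to a simple mtime check; a minimal sketch, not the library's actual cache layout.)

```python
import gzip
from pathlib import Path

# Rebuild the cached .gz only when it is missing or older than its source.
def cached_gzip(source: Path, cache_dir: Path) -> Path:
    cache_dir.mkdir(exist_ok=True)
    cached = cache_dir / (source.name + ".gz")
    if not cached.exists() or cached.stat().st_mtime < source.stat().st_mtime:
        cached.write_bytes(gzip.compress(source.read_bytes(), compresslevel=9))
    return cached

print(cached_gzip(Path("app.js"), Path("cache")))
```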

mod_deflate/mod_gzip can’t consolidate multiple files into one request (which would reduce the number of HTTP requests, the associated overhead, and pressure on browser connection limits).
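(The consolidation itself is just concatenation before serving; a minimal sketch with placeholder file names.)

```python
from pathlib import Path

# Serve several scripts as one file: one HTTP request instead of three.
parts = ["prototype.js", "effects.js", "app.js"]
combined = "\n;\n".join(Path(p).read_text(encoding="utf-8") for p in parts)
Path("combined.js").write_text(combined, encoding="utf-8")
```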

As for the corruption of the prototype.js library, I haven’t had a chance to use the code myself, but it’s probably that JSMin was set too aggressively, something that should be easy enough to fix in a few days.

This library has quite a bit of potential. I look forward to seeing it evolve.

First of all: why zip content if the webserver does the compression as well? I did some tests on file size two years ago. Although you can reduce the file size with this method, those files cannot be compressed much further by the webserver.
Say the original file is 100 KB and this method reduces it to 30 KB; the webserver then can’t compress the 30 KB much further. But if the webserver compresses the original 100 KB, probably only 20 KB will remain.
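(The point is easy to demonstrate: gzip applied to already-compressed data gains almost nothing, while gzip applied to the plain text does much better; a small self-contained sketch.)

```python
import gzip

# Gzipping data that is already gzipped (high entropy) barely shrinks it,
# and can even grow it slightly.
plain = b"var example = 'some fairly repetitive javascript source';\n" * 2000
once = gzip.compress(plain, compresslevel=9)
twice = gzip.compress(once, compresslevel=9)

print("plain:", len(plain))
print("gzipped once:", len(once))
print("gzipped twice:", len(twice))
```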