Every modern browser supports svg and svgz natively. This issue (for svg) was reported seven years ago, and the suggestion then was that it was up to "distributions" to set up the MIME mapping. You distribute the lighttpd sources, so you are the most important distributor. You can't expect every distribution to figure out your configuration scheme, much less every sysadmin. What possible reason do you have for not making these simple changes to the default configuration?

"Won't work with clients that are not supporting gzip" Every modern browser I've tried renders svgz content perfectly as long as both Content-Encoding and Content-Type headers are provided. Try any browser on the images at http://www.cs.queensu.ca/students/undergraduate/prerequisites/; if there's a problem I want to know about it.

For downloads, that web site provides PDFs. Does curl not realize PDFs are compressed? I'd call this discrimination against svgz a bug in curl.

You do realize that most people save the compressed PDF, right? I would also argue that if someone clicks on a compressed file, he wants to save it as a compressed file. Personally, I would rather let my users only click on .svg files, and send them the precompressed file when their browser announces/requests compression support. Exposing users to the detail that you have .svgz files seems wrong.

most people save the compressed PDF. I would also argue that if someone clicks on a compressed file he wants to save it as a compressed file.

I agree, but it seems curl doesn't allow the compressed svg to be downloaded and saved. That's what's wrong.

let my users only click on .svg files. and send them the precompressed file when their browser announces/requests compression support.

Compressing static content every time it's requested seems inefficient. And in the case of that server (not mine to configure), it doesn't happen: the svg file itself gets transferred, which is certainly inefficient. What does lighttpd do (in the default configuration)? I'm using it locally and can't really tell whether there's automatic compression. And I doubt that it's any easier to configure automatic compression than to configure an encoding header.
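For reference, automatic compression in lighttpd 1.4 is opt-in via mod_compress; if nothing like the following appears in the config, static files go out uncompressed. This is a hedged sketch, not a recommendation: the cache directory and the file-type list are placeholders you'd adjust locally.

```
server.modules += ( "mod_compress" )

# Compressed copies are cached here, so it is not re-compressed on every
# request -- but the directory must exist and be writable by lighttpd.
compress.cache-dir = "/var/cache/lighttpd/compress/"

# Only responses with these MIME types are compressed.
compress.filetype  = ( "image/svg+xml", "text/css", "text/html", "text/plain" )
```

Note that this path compresses the .svg on first request and caches the result; it never looks at a pre-made .svgz sitting next to it.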

I think the RFC can be blamed for this. The current draft of the semantics spec says (and RFC 2616 is similar):

A request without an Accept-Encoding header field implies that the user agent has no preferences regarding content-codings. Although this allows the server to use any content-coding in a response, it does not imply that the user agent will be able to correctly process all encodings.

This is basically not specifying anything at all. The server is allowed to send what it wants, but the client obviously may not be able to read it.
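The "send the precompressed file only when the browser asks for gzip" policy from earlier in the thread is a small amount of logic. A hedged Python sketch (the function name and file-naming convention are mine, and the Accept-Encoding parsing is deliberately simplified: real parsing would honor q-values):

```python
import os

def choose_variant(path, accept_encoding):
    """Pick foo.svgz over foo.svg when the client advertises gzip support.

    `path` is the requested .svg file on disk; `accept_encoding` is the raw
    Accept-Encoding request header (or None if the client sent none).
    Returns (file_to_send, extra_response_headers).
    """
    # Crude split of "gzip, deflate;q=0.5" into ["gzip", "deflate"];
    # a production version would also respect q-values like gzip;q=0.
    codings = [c.strip().split(";")[0] for c in (accept_encoding or "").split(",")]

    precompressed = path + "z"  # foo.svg -> foo.svgz
    if "gzip" in codings and os.path.exists(precompressed):
        return precompressed, {"Content-Encoding": "gzip",
                               "Vary": "Accept-Encoding"}
    # Client didn't ask for gzip (or no .svgz exists): send the plain file.
    return path, {"Vary": "Accept-Encoding"}
```

The Vary: Accept-Encoding header goes out in both cases so shared caches don't serve the gzipped variant to a client that never asked for it.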

But it doesn't matter; we can't include a properly working default config for handling those pre-compressed files: setenv.add-response-header = ( "Content-Encoding" => "gzip" ) must only be used if the request is handled by mod_staticfile (not fastcgi, proxy, ...), and $HTTP["url"] can't be used reliably to check the file extension ($PHYSICAL["existing-path"] should work).
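For completeness, the workaround I have in mind looks roughly like the following; it is exactly the kind of snippet that can't ship as a default, because it assumes mod_setenv is loaded and that the matched requests are served from disk by mod_staticfile rather than handed to a backend:

```
# Match on the physical file on disk, not on the request URL, so rewrites
# and backend handlers (fastcgi, proxy, ...) don't trigger it.
$PHYSICAL["existing-path"] =~ "\.svgz$" {
    setenv.add-response-header = ( "Content-Encoding" => "gzip" )
}

# Both extensions carry the same media type; only the coding differs.
mimetype.assign += ( ".svg"  => "image/svg+xml",
                     ".svgz" => "image/svg+xml" )
```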