I prize your sense of humor, calling JPEG "the most optimized lossy compression format", as it can be further compressed losslessly by another 23-24%.

@pmeenan

I was thinking along the lines of a new content-encoding. That's why I also mentioned the IE Vary problem.

Yeah, you have a good point concerning decent browser support for WebP.

Concerning WebP, is there a way of getting content negotiation [JPEG <-> WebP] and cacheability at the same time without resorting to the self-defeating Vary: User-Agent header? Alternatively, is there a way of doing it with HTTP 2.0?

The current effort for negotiation is around Vary: Accept. Opera has always announced WebP support in its Accept headers, and Chrome added it recently, so both of the browsers that support WebP announce it.

CDN/proxy support for Vary: Accept is pretty weak right now, though it is improving (Akamai is one of the first that I know of that supports it).
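To make the mechanism concrete, here is a minimal sketch of Accept-based negotiation in Python. The `negotiate_image` helper is hypothetical (not any particular framework's API); it just shows the two moving parts: checking the client's Accept header for `image/webp`, and emitting `Vary: Accept` so caches key variants correctly.

```python
# Hypothetical server-side helper for JPEG <-> WebP content negotiation.
# The server picks WebP only when the client's Accept header advertises
# image/webp, and must emit "Vary: Accept" so caches store one variant
# per distinct Accept value.

def negotiate_image(accept_header: str, base_name: str) -> tuple[str, dict]:
    """Return (filename, response_headers) for a JPEG/WebP pair."""
    wants_webp = "image/webp" in (accept_header or "")
    filename = base_name + (".webp" if wants_webp else ".jpg")
    headers = {
        "Content-Type": "image/webp" if wants_webp else "image/jpeg",
        "Vary": "Accept",  # caches must key the response on Accept
    }
    return filename, headers

# Chrome/Opera include image/webp in Accept; other browsers do not.
print(negotiate_image("image/webp,image/*,*/*;q=0.8", "photo"))
print(negotiate_image("image/png,image/*;q=0.8,*/*;q=0.5", "photo"))
```

A CDN that honors Vary: Accept would then cache the `.jpg` and `.webp` responses as separate entries under the same URI.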

HTTP 2.0 doesn't really add any mechanisms that help, other than being encrypted by default, so you can feel safe that you're bypassing any intermediaries.

I prize your sense of humor, calling JPEG "the most optimized lossy compression format", as it can be further compressed losslessly by another 23-24%.

Indeed, which is why I said the most optimized lossy format. Any format can be losslessly compressed further, but it's a matter of the CPU utilization required to achieve that, on both the compression end and the decompression end. CPU utilization during browser rendering is not something to ignore, and lossless compression techniques that are tacked on only add to it, not to mention the diminishing returns of squeezing the file that much. It takes considerably more processing power to compress an image further and further once the low-hanging fruit of binary-level compression has been taken.

Technically you can tell your browser to use gzip compression for images right now, but it will actually slow down your overall page display time (from my testing) for the little bit of download size it saves. I know there are better lossless compression techniques than those used by gzip, but I still doubt the tradeoff is worth it, and those would require adoption of a new format, as mentioned; not an easy task ;-)
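The "gzip on images" point is easy to demonstrate: JPEG payloads are entropy-coded and look close to random at the byte level, so DEFLATE finds almost nothing to remove. A rough sketch (using random bytes as a stand-in for JPEG data, since the effect is about incompressibility, not the JPEG format itself):

```python
# Why gzipping an already-compressed image buys almost nothing:
# entropy-coded JPEG data is nearly incompressible, so DEFLATE at its
# highest effort level returns output about the same size as the input.
import os
import zlib

payload = os.urandom(50_000)            # stand-in for JPEG entropy-coded bytes
compressed = zlib.compress(payload, level=9)

ratio = len(compressed) / len(payload)
print(f"zlib ratio on incompressible data: {ratio:.3f}")
# ratio lands around 1.0 (often slightly above, due to container overhead)
```

So the browser would spend extra CPU inflating the stream for essentially zero transfer savings, which matches the "slows down page display" observation above.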

(12-04-2013 03:04 AM)pmeenan Wrote: The current effort for negotiation is around Vary: Accept. Opera has always announced WebP support in its Accept headers, and Chrome added it recently, so both of the browsers that support WebP announce it.

Why not use the HTTP Link response header? The URL would be that of a directory containing the images, and the rel attribute [with a chosen and agreed name, for instance jpg->webp] would be used as a hint for browsers to request "image.webp" instead of "image.jpg" for images located under that URL.

It seems to be backward compatible [no side effects with current browsers, and I would guess that proxies already pass this response header (i.e. Link) through] and future proof [it's up to the browsers to opt in].
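A sketch of what the browser-side opt-in might look like, assuming the hypothetical `jpg->webp` relation proposed above (this rel value is invented for this thread, not a registered link relation):

```python
# Sketch of the proposed Link-header hint. Given a response header like
#   Link: </images/>; rel="jpg->webp"        (hypothetical relation)
# a participating browser could rewrite .jpg requests under that directory
# to .webp before fetching; non-participating browsers ignore the header.

from urllib.parse import urlsplit

def rewrite_image_url(url: str, hinted_dir: str) -> str:
    """Swap .jpg for .webp when the URL path lives under the hinted directory."""
    path = urlsplit(url).path
    if path.startswith(hinted_dir) and path.endswith(".jpg"):
        return url[: -len(".jpg")] + ".webp"
    return url

print(rewrite_image_url("https://example.com/images/cat.jpg", "/images/"))
print(rewrite_image_url("https://example.com/other/cat.jpg", "/images/"))
```

Because the rewritten URL is different from the original, each variant caches under its own key, which is the "cacheability without problems" property claimed above.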

Not sure I understand where you'd pass the header or how you'd use it. Using it on the base page ties all sorts of conventions in with its use that are unlikely to work for a lot of sites, and applying it to individual resources implies that the image was already requested, so it would cause a double fetch.

We considered a srcset-style solution that required markup to define alternate image formats, but adoption would take forever and it's not all that clean.

Delivering different file formats for the same URI based on advertised capabilities opens up support for automatic image transcoding services and delivery without having to change the pages themselves at all.

New compression algorithms will not help unless they are implemented in all browsers. If a new algorithm were added to the major browsers today, it would be many years before it would be practical to use, because older browsers would not be able to render the image.

I was thinking at the HTML [document] level, be it in the HTTP headers or in the head section of the HTML document. It is probably equivalent to the srcset proposal. The advantage is that it has no side effects with current browsers [i.e. the IE Vary problem] or with current proxies, because the URL is different [cacheability without problems] and because proxies do not have to learn new tricks, such as Vary: Accept.

On the other hand, I concede that Vary: Accept is slightly simpler from the author's point of view. I might be wrong, but I would guess that IE <= 9 would have problems with it, i.e. the images would not be cached by those browsers.