
Thursday, September 30, 2010

WebP, a new image format for the Web

As part of Google’s initiative to make the web faster, over the past few months we have released a number of tools to help site owners speed up their websites. We launched the Page Speed Firefox extension to evaluate the performance of web pages and to get suggestions on how to improve them, we introduced the Speed Tracer Chrome extension to help identify and fix performance problems in web applications, and we released the Closure Tools to help build rich web applications with fully optimized JavaScript code. While these tools have been incredibly successful in helping developers optimize their sites, as we’ve evaluated our progress, we continue to notice that a single component of web pages is consistently responsible for the majority of the latency on pages across the web: images.

Most of the common image formats on the web today were established over a decade ago and are based on technology from around that time. Some engineers at Google decided to figure out if there was a way to further compress lossy images like JPEG to make them load faster, while still preserving quality and resolution. As part of this effort, we are releasing a developer preview of a new image format, WebP, that promises to significantly reduce the byte size of photos on the web, allowing web sites to load faster than before.

Images and photos make up about 65% of the bytes transmitted per web page today. They can significantly slow down a user’s web experience, especially on bandwidth-constrained networks such as a mobile network. Images on the web consist primarily of lossy formats such as JPEG, and to a lesser extent lossless formats such as PNG and GIF. Our team focused on improving compression of the lossy images, which constitute the larger percentage of images on the web today.

To improve on the compression that JPEG provides, we used an image compressor based on the VP8 codec that Google open-sourced in May 2010. We applied the techniques from VP8 video intra-frame coding to push the envelope in still image coding. We also adapted a very lightweight container based on RIFF. While this container format contributes a minimal overhead of only 20 bytes per image, it is extensible, so authors can store whatever metadata they like.
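The 20-byte overhead mentioned above corresponds to standard RIFF framing: a 12-byte RIFF header ("RIFF", file size, "WEBP") followed by an 8-byte chunk header (FourCC plus payload size). A minimal Python sketch of parsing that framing — real WebP files may carry additional chunks beyond this simple case:

```python
import struct

def parse_webp_header(data: bytes) -> dict:
    """Parse the 20-byte RIFF framing of a simple one-chunk WebP file.

    Layout (sizes are little-endian uint32):
      bytes  0-3   'RIFF'
      bytes  4-7   file size minus 8
      bytes  8-11  'WEBP'
      bytes 12-15  chunk FourCC, e.g. 'VP8 '
      bytes 16-19  chunk payload size
    """
    if len(data) < 20:
        raise ValueError("too short to be a WebP file")
    riff, riff_size, webp = struct.unpack_from("<4sI4s", data, 0)
    if riff != b"RIFF" or webp != b"WEBP":
        raise ValueError("not a RIFF/WEBP container")
    fourcc, chunk_size = struct.unpack_from("<4sI", data, 12)
    return {"riff_size": riff_size, "chunk": fourcc.decode("ascii"),
            "chunk_size": chunk_size}

# Example: a synthetic 4-byte payload wrapped in the 20-byte container.
payload = b"\x00\x01\x02\x03"
header = struct.pack("<4sI4s4sI", b"RIFF", 12 + len(payload), b"WEBP",
                     b"VP8 ", len(payload))
info = parse_webp_header(header + payload)
```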

While the benefits of a VP8-based image format were clear in theory, we needed to test them in the real world. In order to gauge the effectiveness of our efforts, we randomly picked about 1,000,000 images from the web (mostly JPEGs and some PNGs and GIFs) and re-encoded them to WebP without perceptibly compromising visual quality. This resulted in an average 39% reduction in file size. We expect that, in practice, developers will achieve even better file-size reductions with WebP when starting from an uncompressed image.
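The 39% figure is an average of the per-file size reductions across the re-encoded corpus. A toy illustration of that bookkeeping — the file sizes below are made up for the example; the real study used roughly 1,000,000 images fetched from the web:

```python
def average_reduction(original_sizes, reencoded_sizes):
    """Mean per-file byte-size reduction, as a fraction of each original."""
    reductions = [
        (orig - new) / orig
        for orig, new in zip(original_sizes, reencoded_sizes)
    ]
    return sum(reductions) / len(reductions)

# Hypothetical sizes (bytes) for four images before/after WebP re-encoding.
jpeg_sizes = [120_000, 80_000, 200_000, 50_000]
webp_sizes = [70_000, 50_000, 125_000, 31_000]

avg = average_reduction(jpeg_sizes, webp_sizes)  # a fraction, ~0.387 here
```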

To help you assess WebP’s performance against other formats, we have shared a selection of open-source and classic images, along with their file sizes, so you can compare them visually on this site. We are also releasing a conversion tool that you can use to convert images to the WebP format. We’re looking forward to working with the browser and web developer community on the WebP spec and on adding native support for WebP. While WebP images can’t be viewed until browsers support the format, we are developing a patch for WebKit to provide native support for WebP in an upcoming release of Google Chrome. We plan to add support for a transparency layer, also known as an alpha channel, in a future update.

We’re excited to hear feedback from the developer community on our discussion group, so download the conversion tool, try it out on your favorite set of images, and let us know what you think.

Love it. I thought JPEG 2000 wasn't a good fit for the web, but JPEG is old — so old that there has to be something better — and WebP with an alpha channel would be a winner. But how does the performance compare to JPEG and PNG, both for rendering in the browser and for encoding/decoding?

I think it would be a good idea to add a header with an index and embed several "versions" of the same image at different resolutions (with offsets and lengths). A small device such as a smartphone could then access a small version of the image without needing to download the whole file, while a normal computer could skip the small versions and get the highest resolution. That would greatly speed up browsing from mobile devices and reduce data traffic.
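The idea above can be sketched as an index of (width, height, offset, length) records that a client consults before range-requesting only the version it needs. To be clear, this chunk layout is invented to illustrate the commenter's proposal; it is not part of the WebP format:

```python
import struct

# Hypothetical multi-resolution index (NOT part of WebP): a uint16 record
# count, then per-version records of (uint16 width, uint16 height,
# uint32 offset, uint32 length), all little-endian.
RECORD = struct.Struct("<HHII")

def pack_index(versions):
    """Serialize a list of (width, height, offset, length) tuples."""
    out = struct.pack("<H", len(versions))
    return out + b"".join(RECORD.pack(*v) for v in versions)

def pick_version(index: bytes, max_width: int):
    """Return the largest version that still fits max_width, so a small
    device fetches a small file; fall back to the smallest version."""
    (count,) = struct.unpack_from("<H", index, 0)
    records = [RECORD.unpack_from(index, 2 + i * RECORD.size)
               for i in range(count)]
    fitting = [r for r in records if r[0] <= max_width]
    return max(fitting, key=lambda r: r[0]) if fitting \
        else min(records, key=lambda r: r[0])

index = pack_index([(320, 240, 100, 9_000),
                    (1024, 768, 9_100, 80_000),
                    (2048, 1536, 89_100, 300_000)])
phone_choice = pick_version(index, max_width=480)     # picks the 320x240 record
desktop_choice = pick_version(index, max_width=4000)  # picks the 2048x1536 record
```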

It would be nice to compare the same image at different compression levels in JPEG and WebP: the same image at 100K, at 50K, and at 25K. Also, do not use "Images in the left column are JPEG originals"; use TIFF or PNG originals.

Will it have a progressive mode? For small screens on mobile devices it's sometimes enough not to have the full resolution. A mobile client could decide it has enough of a picture once a certain resolution is reached.

And why would JPEG "originals" show DCT artifacts that are not present in the WebP examples? Were they both compressed from a lossless original, or is there a mistake in your presentation of the examples?

tux - I'm afraid you are completely wrong. The compression schemes used by JPEG and PNG are aimed at entirely different goals, so neither makes the other obsolete; your simplification of the differences doesn't demonstrate an understanding of their relative purposes. In layman's terms: JPEG is suited to photographic images, where pixels don't fall into uniform blocks, allowing lossy compression to have a minimal impact on perceived quality. PNG is suited to vector-style web graphics and images requiring hard outlines and smooth gradients; it is a lossless format, compressed using the deflate algorithm commonly employed by web servers, among other things. In short, it would appear that you've used Photoshop a little and compared a few file sizes — that isn't enough to understand the finer points of complex bitmap formats.
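The deflate point above is easy to see directly: deflate thrives on the flat regions and repeated runs typical of web graphics, but gains almost nothing on noise-like photographic detail. A small illustration using Python's zlib module, which implements the same deflate algorithm PNG uses:

```python
import random
import zlib

random.seed(0)

# "Graphic-like" data: large uniform blocks, as in logos and UI art.
flat = bytes([255] * 5_000 + [0] * 5_000)

# "Photo-like" data: high-entropy noise standing in for fine detail.
noisy = bytes(random.randrange(256) for _ in range(10_000))

flat_compressed = zlib.compress(flat, 9)
noisy_compressed = zlib.compress(noisy, 9)
# Deflate shrinks the flat data dramatically but barely touches the noise,
# which is why lossless PNG suits graphics while photos call for lossy coding.
```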

@Craig I'm afraid you're being needlessly abusive, and that you wouldn't say any of that face to face without a computer between you. Newsflash: using "newsflash" doesn't add any emphasis to your conclusion.

If you are going to make a web-specific image format, why not embed a metadata tag containing a URL to a higher-resolution or higher-quality image? You could create multi-scale images as a chain of multiple files, with the lowest-quality image selected by default (or whichever one a CSS setting suggests). The HTTP client could then recursively request better images in the background and refine the page dynamically. I'm sure this is not a novel idea.

I'd like to see:
- support for XMP metadata
- psychovisual optimizations in the WebM encoder
- support for lossless encoding
- using both WebM inter and intra coding for images (as in the hipix image format)