Tuesday, March 22, 2011

Today we released Firefox 4, the latest browser from Mozilla. There's a lot to love in Firefox 4 - better performance, hardware acceleration and a streamlined interface. All of that is great, but I'm here to talk about WebM support.

To understand why this is really important, look at global browser market share. According to StatCounter, Firefox accounts for about 30% of the market, or nearly a third of all browser users. Combine that with Chrome and Opera, and about 50% of internet users will have access to the high-quality WebM codec over the next few months as the Firefox 4 adoption curve plays out.

We've supported HTML5 and standards-based video since Firefox 3.5 with Theora and Vorbis support, and we're happy to add WebM to that mix since it offers an even higher-quality option for the web.

Thursday, March 17, 2011

In the Bali release post, we mentioned that we've added a new encoding mode called "constrained quality" (CQ) to the VP8 Codec SDK (libvpx). The idea for CQ mode arose as we began testing approaches for encoding WebM versions, in multiple resolutions, of every file in the YouTube corpus. Approaching video encoding on such an immense scale sets one to thinking very carefully about how every bit is used; wasting even small amounts of data across many millions of files adds up very quickly, translating to higher storage and bandwidth costs.

After trying a few approaches it became apparent that we needed not a better way to allocate bits within each WebM file, but rather a better way to distribute them across all the WebM files. The result was CQ mode.

I presented the slides below at the February WebM Summit to explain CQ in general terms and summarize its benefits to content publishers when applied across large collections of WebM files. I hope you find them informative, and welcome your feedback in the comments.
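The intuition behind CQ mode can be sketched with a toy Python model. This is not libvpx's actual rate control, and the clip names, complexity scores, and constants below are invented; it only illustrates the idea that capping each clip's spend at a quality ceiling frees bits across a collection, compared with giving every clip the same fixed target.

```python
# Toy illustration (not libvpx code): how a constrained-quality-style cap
# redistributes bits across a collection of clips. All numbers are invented.

def vbr_allocation(clips, target_kbps):
    """Plain fixed-target allocation: every clip gets the same rate."""
    return {name: target_kbps for name, _ in clips}

def cq_allocation(clips, target_kbps, bits_per_complexity):
    """CQ-style allocation: a clip never spends more than it needs to
    reach the quality ceiling implied by its content complexity."""
    alloc = {}
    for name, complexity in clips:
        needed = complexity * bits_per_complexity  # kbps to hit the quality cap
        alloc[name] = min(target_kbps, needed)    # easy clips stop spending early
    return alloc

# "complexity" is an arbitrary hardness score per clip
clips = [("talking_head", 2.0), ("screencast", 1.0), ("sports", 8.0)]

vbr = vbr_allocation(clips, target_kbps=500)
cq = cq_allocation(clips, target_kbps=500, bits_per_complexity=100)

saved = sum(vbr.values()) - sum(cq.values())
print(f"VBR total: {sum(vbr.values())} kbps, CQ total: {sum(cq.values())} kbps, saved: {saved} kbps")
```

In this sketch the easy clips (talking head, screencast) hit their quality ceiling well below the fixed target, so the collection as a whole spends far fewer bits, while the hard sports clip still receives the full target.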

Wednesday, March 16, 2011

Today we're making available a preview release of Microsoft Media Foundation (MF) components for WebM. Microsoft has also announced the components on the IE Blog and posted a demo page.

As Internet Explorer General Manager Dean Hachamovitch wrote last year on the Windows Blog, when these components are installed in Windows they enable rendering of WebM media in Microsoft Internet Explorer 9. Because the components are installed directly in Windows, they can also render WebM in other applications that support MF, such as Windows Media Player 12 on Windows 7.

To download the component installer package, visit the IE9 page on the WebM Project site. After installing the components, IE9 will be able to render HTML5 pages that include WebM video or audio media, such as the YouTube HTML5 experiment (see the YouTube instructions on the WebM Project site).

Microsoft Media Foundation is a powerful and flexible API that allowed us to seamlessly integrate WebM with Windows, providing a great HTML5 user experience in IE9. Microsoft collaborated closely with us to make the components fully compatible with HTML5 in IE9, so features such as the <video> tag and its canPlayType method are fully enabled for WebM. Our thanks go out to the Microsoft engineers who provided technical assistance and hosted our team in Redmond last month.

We hope you enjoy watching WebM in IE9! We value your comments and feedback and, as always, developers are welcome to contribute to the code.

Matthew Heaney is a Software Engineer for the WebM Project.

Monday, March 14, 2011

Last week the WebM Finland team finalized our H1 hardware RTL design. The H1 is the world’s first VP8 hardware encoder. This initial release, which we're calling "Anthill," is now available through the WebM Project hardware page. Google does not require payment of any license fee or royalty in connection with use of the H1 encoder RTL.

Why "Anthill"? 77% of Finland is covered by forests, and the Finns are very fond of them. The Finnish freedom to roam rights allow anyone to wander in the woods, and pick wild berries, flowers and mushrooms. We thought it would be fitting to alphabetically name each VP8 hardware release with things that can be found amidst our Finnish evergreens.

The H1 encoder offloads the entire VP8 video encoding process from the host CPU to a separate accelerator block on the SoC. It significantly reduces power consumption and enables encoding of 1080p video at a full 30 FPS, or 720p at 60 FPS. Without a hardware accelerator like the H1, modern multi-core mobile devices can only encode video at around VGA resolution at 25 FPS, with little CPU headroom left for anything else while doing so.

To provide an idea of our hardware's capabilities we compared it to the WebM Project's VP8 software encoder* (libvpx). The figures below show the required processor cycles for VGA resolution video at 30 frames per second, and are scaled from the FPS speed reached when running the Tegra2 at 1 GHz#.

Note: Power consumption measurements are for the ARM core vs. H1 encoder core in TSMC 65nm technology. ARM power consumption is estimated using the 65nm figure given at http://www.arm.com/products/processors/cortex-a/cortex-a9.php. H1 encoder core is measured using RTL netlist and Synopsys Power Compiler.

In terms of quality, hardware implementations of real-time encoders typically trail their software counterparts, because adaptive algorithms for motion search and mode selection (or exact rate-distortion optimizations) are often not feasible in hardware. The following graph shows PSNR quality metrics for a 720p video conferencing use case, comparing the H1 Anthill release to the libvpx Bali release in different complexity modes (higher PSNR is better).
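For readers unfamiliar with the metric used in the graph, PSNR is computed from the mean squared error between reference and encoded frames. A minimal Python sketch (the sample values are invented, and real measurements average over full frames of the video):

```python
import math

def psnr(reference, encoded, max_value=255):
    """Peak signal-to-noise ratio in dB between two equal-length
    sequences of pixel samples (e.g. 8-bit luma values)."""
    assert len(reference) == len(encoded)
    mse = sum((r - e) ** 2 for r, e in zip(reference, encoded)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 10 * math.log10(max_value ** 2 / mse)

# Invented 8-bit luma samples for illustration
ref = [52, 55, 61, 59, 79, 61, 76, 61]
enc = [50, 57, 60, 60, 77, 63, 75, 62]
print(round(psnr(ref, enc), 2))
```

Higher values mean the encoded output is closer to the source; differences of a few dB at the same bitrate are what quality comparisons like the one above measure.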

These graphs show that the H1 hardware encoder can produce good quality with very low power consumption using almost no clock cycles from the CPU. In the next release, we are planning to narrow the quality gap between the libvpx "Best" mode and the hardware implementation, while cutting down the required power even further. The next release is planned to be out in early Q2.

Several top-tier semiconductor partners have already started to integrate the H1 IP into their next chipsets, and we’re eager to share the technology with new partners.

For technical and licensing details about the H1, see our hardware page.

Aki Kuusela is Engineering Manager of the WebM Project hardware team in Oulu, Finland.

*libvpx Aylesbury and Bali software encoder releases running on an NVidia Tegra2 development board with dual-core ARM Cortex A9 processors. In the test, libvpx was using both cores with the slowest and fastest real-time settings (-cpu-used=-5 and -cpu-used=-16).

Tuesday, March 8, 2011

We're targeting late Q2 2011 for our next named release of the VP8 Codec SDK (libvpx). We're calling this release "Cayuga" in honor of our project's roots in New York state. Also because it's fun to say. Go ahead, say it: Cayuga.

We will continue to focus on encoder speed in Cayuga. Though our Bali encoder is up to 4.5x faster than the initial VP8 release (at "Best" quality mode), there are more speed improvements to be had. As always, we'll continue to improve video quality in the encoder.

We welcome contributions from developers, so if you have ideas for improving libvpx speed or quality, get coding!

John Luther is Product Manager of the WebM Project.

For Bali we focused on making the encoder faster while continuing to improve its video quality. Using our previous releases (our initial 0.9.0 launch release and "Aylesbury") as benchmarks, we’ve seen the following high-level encoder improvements:

On ARM platforms with Neon extensions, real-time encoding of video telephony content is 7% faster than Aylesbury on a single-core ARM Cortex A9, 15% faster on dual-core and 26% faster on quad-core.

On the NVidia Tegra2 platform, real-time encoding is 21-36% faster than Aylesbury, depending on encoding parameters.

"Best" mode average quality improved 6.3% over Aylesbury using the PSNR metric.

"Best" mode average quality improved 6.1% over Aylesbury using the SSIM metric.

For readers curious about the technical details, here are some detailed improvements we made in the Bali encoder:

Implemented a new "constrained quality" (CQ) data rate control mode. Within a large set of videos, this mode better allocates bits from videos where they can't provide significant visual benefit to videos where they can.

Achieved more consistent high video quality across entire video clips. We now use a better two-pass rate control option that no longer favors early sections of videos.