Overview

FFmpeg

Open source library used for container parsing and audio/video decoding

WebKit

Implements the HTML and JavaScript bindings as specified by the WHATWG

Handles rendering the user agent controls

Provides a MediaPlayerPrivate interface for port-specific implementations of a media playback engine
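
As a rough illustration, a port-specific playback engine plugs in behind an interface along these lines. This is a hypothetical, pared-down sketch: the actual MediaPlayerPrivate interface is considerably larger and uses WebKit's own types (e.g. its String class) rather than std::string.

```cpp
#include <string>

// Pared down and hypothetical; see WebKit for the real interface.
class MediaPlayerPrivateSketch {
 public:
  virtual ~MediaPlayerPrivateSketch() = default;
  virtual void load(const std::string& url) = 0;  // Start fetching media.
  virtual void play() = 0;
  virtual void pause() = 0;
  virtual void seek(double time_seconds) = 0;
  virtual double duration() const = 0;
  virtual double currentTime() const = 0;
};
```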

Pipeline

The pipeline is a pull-based media playback engine that abstracts each step of media playback into six filters: data source, demuxing, audio decoding, video decoding, audio rendering, and video rendering. The pipeline manages the lifetime of the filters and exposes a simple thread-safe interface to clients. The filters are connected together to form a filter graph.
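
A minimal sketch of the pull-based contract between filters, with invented names rather than the pipeline's actual types: each filter exposes an asynchronous Read() that its downstream neighbor calls only when it needs more data.

```cpp
#include <cstdint>
#include <functional>
#include <memory>
#include <vector>

// A buffer flowing through the graph: encoded or decoded bytes depending
// on which stage produced it.
struct Buffer {
  std::vector<std::uint8_t> data;
  double timestamp_seconds = 0.0;
};

using ReadCB = std::function<void(std::unique_ptr<Buffer>)>;

// Implemented by the data source, demuxer, and decoders; invoked by the
// filter immediately downstream, ultimately at the renderers' demand.
class Filter {
 public:
  virtual ~Filter() = default;
  // Asynchronously satisfy one read request; the callback fires on
  // completion, so no filter blocks waiting for data.
  virtual void Read(ReadCB callback) = 0;
};
```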

Several existing playback engines were considered and rejected:

- GStreamer (Windows support was questionable at the time; an extra ~2MB of DLLs due to library dependencies; targets many of our non-goals)

- VLC (cannot use due to GPL)

- MPlayer (cannot use due to GPL)

- OpenMAX (complete overkill for our purposes)

- liboggplay (specific to Ogg Theora/Vorbis)

Our approach was to write our own media playback engine that was audio/video codec agnostic and focused on playback. Using FFmpeg both avoids the use of proprietary/commercial codecs and allows Chromium's media engine to support a wide variety of formats, depending on FFmpeg's build configuration.

As previously mentioned, the pipeline is completely pull-based and relies on the sound card to drive playback. As the sound card requests additional data, the audio renderer requests decoded audio from the audio decoder, which requests encoded buffers from the demuxer, which reads from the data source, and so on. As decoded audio is fed into the sound card, the pipeline's global clock is updated. The video renderer polls the global clock to determine when to request decoded frames from the video decoder and when to render new frames to the video display. In the absence of a sound card or an audio track, the system clock drives video decoding and rendering instead. Relevant source code: /src/media, filters.h, clock.h, decoder_base.h, audio_renderer_base.h, video_renderer_base.h.
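
The clock relationship can be illustrated with a small sketch (hypothetical names; the real implementation lives in clock.h and the renderer base classes): the audio path advances the media clock as it delivers samples, and the video renderer polls that clock to decide when a frame is due.

```cpp
#include <atomic>

// Single writer (the audio thread) advances the clock; the video thread
// only reads it, so plain loads and stores suffice for this sketch.
class PipelineClock {
 public:
  // Called by the audio renderer as decoded samples are accepted by the
  // sound card; |seconds_played| = samples_written / sample_rate.
  void Advance(double seconds_played) {
    media_time_.store(media_time_.load() + seconds_played);
  }
  double GetTime() const { return media_time_.load(); }

 private:
  std::atomic<double> media_time_{0.0};
};

// Video renderer side: paint a frame only once the media clock has
// reached that frame's presentation timestamp.
bool FrameIsDue(const PipelineClock& clock, double frame_timestamp) {
  return clock.GetTime() >= frame_timestamp;
}
```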

The pipeline uses a state machine to handle playback and events such as pausing, seeking, and stopping. A state transition typically consists of notifying all filters of the event and waiting for completion callbacks before completing the transition (the state diagram can be found in pipeline_impl.h).
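
A minimal sketch of that transition pattern, assuming a hypothetical per-filter seek hook with a completion callback (the real filter interfaces differ): notify every filter, count the completion callbacks, and only then declare the transition finished.

```cpp
#include <cstddef>
#include <functional>
#include <memory>
#include <vector>

// Hypothetical per-filter hook: seek asynchronously and invoke |done|
// once the filter has flushed and repositioned itself.
struct SeekableFilter {
  std::function<void(double time, std::function<void()> done)> seek;
};

// Assumes all completion callbacks arrive on the same thread.
void SeekAll(std::vector<SeekableFilter>& filters, double time,
             std::function<void()> on_complete) {
  if (filters.empty()) {
    on_complete();
    return;
  }
  auto remaining = std::make_shared<std::size_t>(filters.size());
  for (auto& filter : filters) {
    filter.seek(time, [remaining, on_complete] {
      if (--*remaining == 0)
        on_complete();  // Last filter finished; leave the seeking state.
    });
  }
}
```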

The pull-based design allows pause to be implemented by setting the playback rate to zero: the audio and video renderers stop requesting data from upstream filters, and without any pending requests the entire pipeline settles into an implicit paused state.
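
In sketch form (again with invented names; the real logic lives in the renderer base classes), a renderer-side check is all it takes:

```cpp
class RendererSketch {
 public:
  void SetPlaybackRate(float rate) { playback_rate_ = rate; }

  // Called whenever the renderer would normally fetch more data.
  void MaybeScheduleRead() {
    if (playback_rate_ == 0.0f)
      return;  // Rate is zero: issue no request, so upstream goes idle.
    RequestDataFromUpstream();
  }

 private:
  void RequestDataFromUpstream() { /* issue an asynchronous read */ }
  float playback_rate_ = 0.0f;
};
```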

FFmpeg

After many rounds of internal testing, we decided to use the ffmpeg-mt branch of FFmpeg, which implements parallel frame-level decoding for many popular codecs. Although FFmpeg supports parallel slice-level decoding for H.264, that requires the content to be encoded with slices and does not help other video formats. We measured a significant performance increase on multi-core systems when using ffmpeg-mt to decode H.264 content compared to vanilla FFmpeg. FFmpeg is used to implement our demuxer and our audio and video decoders. Relevant source code: /deps/third_party/ffmpeg, ffmpeg_demuxer.h, ffmpeg_audio_decoder.h, ffmpeg_video_decoder.h.
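
To make the division of labor concrete, here is a hedged sketch of the FFmpeg calls a demuxer/decoder pair is built around. It uses FFmpeg's present-day send/receive decoding API rather than the exact API of that era, and it is illustrative rather than a copy of our code; thread_count is the knob that engages the frame-level threading described above.

```cpp
// Illustrative only: open a container, set up a threaded video decoder,
// and demux/decode until one frame emerges. Error handling is minimal.
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}

bool DecodeOneFrame(const char* path) {
  AVFormatContext* format = nullptr;
  if (avformat_open_input(&format, path, nullptr, nullptr) < 0)
    return false;  // Container parsing failed.
  avformat_find_stream_info(format, nullptr);

  // Locate the first video stream and build a decoder for it.
  const int stream =
      av_find_best_stream(format, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
  if (stream < 0) {
    avformat_close_input(&format);
    return false;
  }
  const AVCodec* codec =
      avcodec_find_decoder(format->streams[stream]->codecpar->codec_id);
  AVCodecContext* ctx = avcodec_alloc_context3(codec);
  avcodec_parameters_to_context(ctx, format->streams[stream]->codecpar);
  ctx->thread_count = 0;  // 0 = auto-detect; enables frame-level
                          // multithreading where the codec supports it.
  avcodec_open2(ctx, codec, nullptr);

  // Demux and decode; the pipeline issues these steps on demand rather
  // than in a tight loop like this.
  AVPacket* packet = av_packet_alloc();
  AVFrame* frame = av_frame_alloc();
  bool got_frame = false;
  while (!got_frame && av_read_frame(format, packet) >= 0) {
    if (packet->stream_index == stream &&
        avcodec_send_packet(ctx, packet) >= 0 &&
        avcodec_receive_frame(ctx, frame) >= 0) {
      got_frame = true;  // |frame| now holds a decoded picture.
    }
    av_packet_unref(packet);
  }

  av_frame_free(&frame);
  av_packet_free(&packet);
  avcodec_free_context(&ctx);
  avformat_close_input(&format);
  return got_frame;
}
```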