(ideas by JMV, not yet approved by anyone else. Should be merged in respective header definition above if approved)

In order to simplify implementations when it comes to channel mappings, several defaults are defined when no extra header is present.

* Files containing one channel are assumed to be plain mono files with:

channel_type = OGG_CHANNEL_MAP_MONO
channel_map[0] = OGG_CHANNEL_FRONT_CENTER

* Files containing two channels are assumed to be stereo files with:

channel_type = OGG_CHANNEL_MAP_STEREO
channel_map[0] = OGG_CHANNEL_FRONT_LEFT
channel_map[1] = OGG_CHANNEL_FRONT_RIGHT

* Files containing three channels are assumed to be B-format Ambisonic files with:

channel_type = OGG_CHANNEL_MAP_B_FORMAT
channel_map[0] = OGG_CHANNEL_W
channel_map[1] = OGG_CHANNEL_X
channel_map[2] = OGG_CHANNEL_Y

* Files containing four channels are assumed to be B-format Ambisonic files with:

channel_type = OGG_CHANNEL_MAP_B_FORMAT
channel_map[0] = OGG_CHANNEL_W
channel_map[1] = OGG_CHANNEL_X
channel_map[2] = OGG_CHANNEL_Y
channel_map[3] = OGG_CHANNEL_Z
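The defaults above amount to a lookup on the channel count. The following is an illustrative sketch, not part of the specification; the constants are represented as strings here because the spec text names them without assigning numeric values at this point.

```python
def default_channel_map(num_channels):
    """Return the (channel_type, channel_map) implied when no extra
    header is present, per the defaults above."""
    defaults = {
        1: ("OGG_CHANNEL_MAP_MONO", ["OGG_CHANNEL_FRONT_CENTER"]),
        2: ("OGG_CHANNEL_MAP_STEREO",
            ["OGG_CHANNEL_FRONT_LEFT", "OGG_CHANNEL_FRONT_RIGHT"]),
        3: ("OGG_CHANNEL_MAP_B_FORMAT",
            ["OGG_CHANNEL_W", "OGG_CHANNEL_X", "OGG_CHANNEL_Y"]),
        4: ("OGG_CHANNEL_MAP_B_FORMAT",
            ["OGG_CHANNEL_W", "OGG_CHANNEL_X", "OGG_CHANNEL_Y",
             "OGG_CHANNEL_Z"]),
    }
    if num_channels not in defaults:
        raise ValueError("no default mapping for %d channels" % num_channels)
    return defaults[num_channels]
```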

===== Channel Conversion Header =====

Any number of channel conversion headers can be specified. This header specifies how to down-mix the data to another format.

32 0x00000001 Remixing Header Id
16 [uint] Major version
16 [uint] Minor version
32 [uint] Target Channel type
32xMxN [sint] Target Channel (M) x Src Channel (N) Gain array

The ordering of the mixing matrix is such that the source channel gains are consecutive. Each gain is a *signed* integer with the 16 most significant bits holding the integer part (including sign) and the 16 least significant bits holding the fractional part. Note that the gain can be negative.
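As an illustration of the gain array layout, the following sketch applies an M x N conversion matrix to one PCM frame. The function name and the use of floating-point samples are illustrative assumptions; the gains are the signed 16.16 fixed-point values described above, stored so that the N source-channel gains for each target channel are consecutive.

```python
def apply_conversion(frame, gains, num_targets):
    """Down-mix one PCM frame using a Channel Conversion Header's gain
    array. `gains` is the flat list of M*N 16.16 fixed-point gains,
    given here as already sign-extended Python integers."""
    n_src = len(frame)
    assert len(gains) == num_targets * n_src
    out = []
    for m in range(num_targets):
        acc = 0.0
        for n in range(n_src):
            # Source gains for target channel m are consecutive.
            acc += (gains[m * n_src + n] / 65536.0) * frame[n]
        out.append(acc)
    return out
```

For example, an identity 2x2 matrix (gain 0x10000 = 1.0 on the diagonal) passes a stereo frame through unchanged, and a 1x2 matrix of two 0x8000 (= 0.5) gains averages it to mono.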

===== Channel Conversion Defaults =====

* Stereo files SHOULD be converted to a mono file by averaging the left channel and the right channel.
* Ambisonic files SHOULD be converted to a mono file using Mono = W*sqrt(2).
* Ambisonic files SHOULD be converted to stereo files by dematrixing W, X and Y.
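A minimal sketch of the first two default conversions, assuming floating-point samples (the function names are illustrative, not part of the spec):

```python
import math

def stereo_to_mono(left, right):
    # Default stereo -> mono: average the left and right channels.
    return [(l + r) / 2.0 for l, r in zip(left, right)]

def bformat_to_mono(w):
    # Default Ambisonic -> mono: Mono = W * sqrt(2).
    return [s * math.sqrt(2.0) for s in w]
```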

==== Channel Mapping, proposed option 2 ====

This proposed version of Channel Mapping has not yet gained the support of the Xiph.Org Foundation. However, it is likely the more mature of the two proposals, though it still needs a bit more polish.

Channel mappings are used to convey the meaning of the PCM signals stored in an OggPCM stream. They have been designed so that commonly used transmission formats like stereo, 5.1 and Ambisonics can be accurately tagged and distinguished from each other. Rudimentary downmixing from multichannel formats to stereo and mono and interoperability with compatibility formats like Dolby Surround and Ambisonics UHJ are also supported.

OggPCM

The following is a draft format for OggPCM. This is not a final proposal, but it has remained reasonably stable for over a year.

OggPCM is an encapsulation of PCM audio data into an Ogg logical bitstream. An OggPCM bitstream may be concurrently multiplexed with other Ogg logical bitstreams such as OggUVS video or CMML metadata.

Note that unless otherwise noted, all multi-byte fields use the network byte order (big endian). The first packet in a stream MUST be the main header packet. The second packet MUST be the comment packet. Some extra header packets MAY be included after the comment header, provided this is identified in the main header. The packets that follow MUST all be data packets.

Main Header Packet

Multibyte fields in the header packets are packed in big endian order, to be consistent with network byte order. A header packet contains the following fields:

A PCM "frame" is composed of samples for all channels at a given time.

The "Codec identifier" is 64 bits long since most other Ogg codecs specify their identifier within the first 64 bits rather than the first 32 bits; this allows applications to match on all 64 bits consistently.

The "Maximum number of frames per packet" field is meant to notify an application reading the file that no data packet will contain more than a certain number of frames. This not only makes implementation easier, but also provides information on how much needs to be buffered when streaming PCM files. A value of 0 means a maximum of 65536 frames. Implementations SHOULD make this field such that packets do not get split into multiple pages.
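Decoding this field can be sketched as follows (the function name is an illustrative assumption):

```python
def max_frames_per_packet(field_value):
    """Decode the "Maximum number of frames per packet" header field.
    Per the text above, a stored value of 0 means 65536 frames."""
    return 65536 if field_value == 0 else field_value
```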

The "Number of significant bits" field specifies how many bits are actually used. The other bits MUST be zero. This can be used to support audio with any resolution. For example, 12-bit PCM can be supported as "16 bit PCM" for the format and 12 for the number of significant bits.

For streams where the number of significant bits is the same as the bit width specified by the format, the significant bits field may be set to zero.

For streams where the number of significant bits is less than that specified by the bit width, the data shall be justified to fill the most significant bits. For 12 bit PCM in a 16 bit format, the 12 valid bits will occupy the 12 most significant bits of the 16 bit word and the least significant 4 bits shall be zero.
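The justification rule can be sketched as follows; the helper name is an illustrative assumption, and `sample` is the raw two's-complement value before shifting:

```python
def justify_sample(sample, significant_bits, storage_bits=16):
    """Left-justify a sample of `significant_bits` valid bits into a
    `storage_bits`-wide word; the low bits end up zero, as required."""
    shift = storage_bits - significant_bits
    return (sample << shift) & ((1 << storage_bits) - 1)
```

For 12-bit PCM stored as 16-bit words, a full-scale 12-bit value 0x0FFF becomes 0xFFF0, with the low four bits zero.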

Since the main header packet and the comment packet are mandatory, the "extra header packets" field counts any additional header packets (aside from these two) that can be provided before the start of the data packets.

Format IDs below 0x80000000 are reserved for use by Xiph and all the ones above are allowed for application-specific formats.

Comment packet

The codec header is followed by a "vorbis comment" packet and by optional extra headers, if any. The format used is the same as for Vorbis with the exception that there is no packet identifier (so the packet is exactly like it is for Speex).

Data Packets

Data packets contain the raw PCM audio in interleaved format (complete frames are encoded sequentially) with the following definitions/restrictions:

A PCM "frame" is composed of samples for all channels at a given time.

Any OggPCM packet MUST only contain complete frames (i.e. samples for all channels at a given sampling instant). Partial frames are forbidden. It is RECOMMENDED that decoders which come across an invalid packet containing a partial frame drop the partial frame (at the end) and issue an error.

There is no padding allowed in a frame except when some bits (<8) are needed to complete a byte. This means that packet size has a direct relationship to the number of frames in the packet (for purposes of seeking).
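A sketch of this relationship, assuming the padding completes each frame to a byte boundary (the helper name is illustrative):

```python
def frames_in_packet(packet_bytes, channels, bits_per_sample):
    """Number of complete frames in a data packet. Because packets hold
    only whole frames and the sole padding allowed is the <8 bits needed
    to complete a byte, the packet size determines the frame count."""
    frame_bytes = (channels * bits_per_sample + 7) // 8  # round up to bytes
    if packet_bytes % frame_bytes != 0:
        raise ValueError("partial frame: packets must hold whole frames")
    return packet_bytes // frame_bytes
```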

The recommended packet size is smaller than 4 kB, since interleaving and seeking in Ogg bitstreams is done at the resolution of packets, and larger packet sizes thus create suboptimal bitstreams.

Extra Headers (optional)

Extra header packets contain additional information about the OggPCM stream, and must come after the Comment Packet and before the first Data Packet. Each extra header is defined as:

32 [uint] Header ID
... Header data

The first optional headers to be defined handle mappings from physically stored channels to logical channels, such as speaker feeds and Ambisonic signals.

Channel Mapping Headers

Channel mappings are used to convey the meaning of the PCM signals stored in an OggPCM stream. They have been designed so that commonly used transmission formats like stereo, 5.1 and Ambisonics can be accurately tagged and distinguished from each other. Rudimentary downmixing from multichannel formats to stereo and mono and interoperability with compatibility formats like Dolby Surround and Ambisonics UHJ are also supported.

A channel mapping can be given in two forms, using one of two headers. The Channel Mapping Header tags any subset of the transmitted channels with its intended playback semantics. The Channel Conversion Header additionally provides a mixing matrix which can be applied to the channels before interpretation as a target, logical channel type. A compatible implementation of OggPCM channel maps SHOULD support both types of maps, but MAY omit support for the Channel Conversion Header.

An arbitrary number of mapping and conversion headers can be present, including none at all. The header types can be mixed and they can appear in any order. When neither header is present, the defaults spelled out in the section below on defaulting apply. When more than one header is present, they describe alternative mappings in a decreasing preferential order, and the first supported one SHOULD be used.

A header is considered to be present once its header ID has been read successfully. If a field or structure is prematurely terminated after reading the ID, the header is considered erroneous. If an error is encountered in a header, it MUST be discarded and parsing SHOULD continue with the next header. If mapping headers are present but they are all erroneous, defaulting MUST NOT be applied.

The channel mapping header lists physical channels and their associated logical channels, identified by a channel_type value. It is defined as:

Channel numbers refer to the physical channels transmitted in the OggPCM stream. They start at zero, denoting the first channel transmitted in a frame, and range to the number of channels indicated in the main header packet minus one. References to absent channels MUST be treated as an error. If a physical channel is not referenced in any of the channel maps and defaulting is not being used, its semantics are unknown. Such channels SHOULD NOT be played without user intervention, and SHOULD NOT be routed to audio outputs which are currently in use, but they MUST NOT be considered an error.

Channel_types refer to logical channels with a clear interpretation of how the sound data routed to them is to be reproduced. All channel_types less than 0x80000000 are reserved for use by Xiph; 0x80000000 and above are allowed for application specific extensions. This scheme allows for 2^31 - 1 Xiph defined channel map types and 2^32 distinct channel names. If a channel_type is encountered which has not been defined at all, has not been defined for the indicated version of the header, is not supported by the player, or cannot be rendered accurately, parsing SHOULD continue with the next header.

Encoders SHOULD include appropriate Channel Conversion Headers at least into stereo and mono, if possible. It MUST be possible for the user to override any mixing coefficients included by the encoder by default. If no header is found by a decoder which leads to an accurate rendering but at least one valid, supported header is present, approximate rendering MAY be attempted, as outlined in the section on conversions and rendering below.

The mapping rows SHOULD be written sorted first by channel number, then by channel_type, and then by mixing coefficient. If a channel number is present more than once in a Channel Mapping Header, the first associated channel_type MUST be used. If a channel_type is present more than once in a Channel Mapping Header, the first associated channel number MUST be used. However, if more than one channel is tagged as OGG_CHANNEL_UNUSED, they are all considered unused and SHOULD NOT be played. If a channel number, channel_type pair is present more than once in a Channel Conversion Header, the first mixing coefficient for the pair MUST be used. If channel mapping data is neglected because of these rules, readers SHOULD still accept the header without treating it as an error, but MAY warn the user.

The mixing coefficients are 32 bit signed, two's complement, fixed point numbers. The 16 most significant bits contain the integer part (including sign), and the 16 least significant bits are the fraction. Note that the gain can be negative.
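The coefficient format can be sketched as follows; the helper names are illustrative assumptions:

```python
def coeff_to_float(coeff):
    """Interpret a 32-bit mixing coefficient as described above:
    signed two's complement, 16.16 fixed point."""
    if coeff & 0x80000000:      # restore the sign of the 32-bit word
        coeff -= 1 << 32
    return coeff / 65536.0

def float_to_coeff(value):
    """Encode a gain as a 32-bit 16.16 fixed-point coefficient."""
    return int(round(value * 65536.0)) & 0xFFFFFFFF
```

For example, unity gain is 0x00010000 and a gain of -0.5 is 0xFFFF8000.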

For major version 0, minor version 0 of the Channel Mapping and Channel Conversion headers, the following channel_type values are defined. They are divided into groups corresponding to the closest mapping into the set of channels used in CAF, RIFF WAVE and USB channel masks:

// specials
OGG_CHANNEL_UNUSED = 2816 = 0x00000B00 (the channel is unused and should not be rendered)

Unless otherwise indicated, the logical channels are assumed to be speaker feeds, with the speaker lying in the indicated direction. The direction is referenced to either the front center, or where indicated, the back center speaker. By default all of the speakers SHOULD be at the same distance from the listener, or the so called "sweet spot", so that temporally coincident signals give rise to temporally coincident sound at the listening position. Where the channel_type indicates an interpretation other than a speaker feed, temporal coincidence SHOULD still hold.

Some of the base standards used to derive the channel mappings are sensitive to speaker distance in addition to any possible time delay, and some are not. In any case interoperability between the different standards calls for setting the distance. The base standards used to derive the channel map rarely take a stance on that, so for the purposes of this specification the speaker distance, the listening area, and the Ambisonics coding radius are all idealized as being infinite, unless otherwise noted. Hence, the field produced by any speaker feed SHOULD by default approximate a planar wave at the sweet spot.

Unless otherwise indicated, each channel should give rise to the same sound pressure level at the listener. The channel mapping metadata does not impose an absolute reference level for the channel data. The relative levels for ambisonic channels are given by the Furse-Malham convention.

Channel_types marked as being "diffuse" are intended to be reproduced in a spatially dispersed manner, from and around the indicated direction. As two common examples, they might be reproduced using dipole speakers aligned so that direct arrival of sound to the sweet spot is minimized, or by multiple speakers placed slightly above the listener in and around the stated direction. They SHOULD retain flat average spectral response as measured from the sweet spot.

Defaulting and Standard Mappings

OggPCM streams were originally defined without channel maps, so for compatibility purposes, the simplest cases are defaulted based on the number of physical channels present. The precise Channel Mapping Headers and Channel Conversion Headers that are implied are specified below. Further INFORMATIVE mappings for various channel layouts can be found in a companion document.

Files containing precisely one channel and no explicit channel map are assumed to contain plain mono.

Files containing precisely two channels and no explicit channel map are assumed to contain plain stereo.

Files containing precisely three channels and no explicit channel map are assumed to contain 1st order pantophonic Ambisonics (W, X and Y).

Files containing precisely four channels and no explicit channel map are assumed to contain 1st order periphonic Ambisonics (W, X, Y and Z).

Files containing precisely six channels and no explicit channel map are assumed to contain 5.1 in the ITU-R BS.775-1 layout.

Files containing precisely seven channels and no explicit channel map are assumed to contain 6.1 in the ITU+back channel layout.

Files containing precisely eight channels and no explicit channel map are assumed to contain 7.1 in the Dolby/DTS discrete layout.

Files containing some other number of channels and no explicit channel map are assumed to contain channels tagged with OGG_CHANNEL_UNUSED.
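The defaulting rules above amount to a lookup on the channel count; a minimal sketch (the labels and function name are illustrative):

```python
def assumed_layout(channels):
    """Assumed contents of a stream with no explicit channel map,
    following the defaulting rules above."""
    layouts = {
        1: "mono",
        2: "stereo",
        3: "1st order pantophonic Ambisonics (W, X, Y)",
        4: "1st order periphonic Ambisonics (W, X, Y, Z)",
        6: "5.1, ITU-R BS.775-1 layout",
        7: "6.1, ITU + back channel layout",
        8: "7.1, Dolby/DTS discrete layout",
    }
    return layouts.get(channels, "all channels OGG_CHANNEL_UNUSED")
```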

Further Suggestions for Conversion and Rendering

Even if a decoder supports a given channel_type, it is not always possible to recreate the precise intention of the coder because of differences and uncertainties in the available speaker layout. This section outlines some strategies which MAY be used in approximate rendering.

When speakers are available all around the position of the desired feed, and they subtend no more than a typical stereo pair (60 degrees), a common simulation of a feed is to apply an equal power panning law among the closest two or three speakers. This is also what is done in the default mappings from the central channels to the stereo pair. In three dimensions this approach leads to Vector Base Amplitude Panning (VBAP).
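For the two-speaker case, the equal power panning law mentioned above can be sketched as follows; the function name, the degree-based interface, and the default ±30 degree speaker positions of a standard stereo pair are illustrative assumptions:

```python
import math

def equal_power_pan(angle, left_angle=-30.0, right_angle=30.0):
    """Equal-power pan of a phantom source between two speakers.
    `angle` is the desired source direction in degrees. Returns
    (left_gain, right_gain) with left**2 + right**2 == 1, so total
    radiated power is independent of the pan position."""
    t = (angle - left_angle) / (right_angle - left_angle)  # 0..1
    theta = t * math.pi / 2.0
    return math.cos(theta), math.sin(theta)
```

A source at the center (0 degrees) gets equal gains of about 0.707 on both speakers, while a source at a speaker position is routed entirely to that speaker.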

Another possibility is to pan the speaker feed into an Ambisonics representation of some chosen order, and then to decode for the actual speaker layout.

When speakers to the back are not available, the respective speaker feeds are often mixed to opposite speakers in the front. The default mappings do this for stereo, and apply a 3dB attenuation to the back. The latter convention derives from European Broadcasting Union downmixing guidelines, and Dolby AC-3 defaults.

If speakers close to the desired positions are available, the speaker feed can also be routed directly to one of them, as long as the overall directional errors stay limited and the relative ordering of the channels is not affected. The channel maps and channel numbering are aimed at helping such mappings in existing systems: when the mapping examples in this document and the companion one are applied, in most cases channel ordering becomes identical to that used in WAV files and USB serializations. Rounding the channels to the positions specified in the respective channel masks should then lead to a workable rendering.

OggPCM encoders are encouraged to supply downmixing information for common output formats, but it is to be expected that the information will often be incomplete. In such cases, the mapping examples given in this document and the companion can be applied by default by the decoder when the stored signal set seems to fit one of them.

Proper downmixing to certain output formats, like Dolby Surround and Ambisonics UHJ, requires complex processing which cannot be specified using a simple downmix matrix. If such output options are available, proper transcoding can be attempted as soon as the first channel map has been found which specifies a set of channels amenable to such a representation. The same goes for binaural rendering and its extensions, like crosstalk cancellation, which utilize head related transfer functions (HRTF) for accurate spatial rendering over a limited number of channels.