"Mozilla and the Xiph.Org Foundation are pleased to announce the Internet Engineering Task Force (IETF) has standardized Opus as RFC 6716. Opus is the first state-of-the-art, fully Free and Open audio codec ratified by a major standards organization."

In a great victory for open standards, the Internet Engineering Task Force (IETF) has just standardized Opus as RFC 6716.

Opus is the first state-of-the-art, free audio codec to be standardized. We think this will help us achieve wider adoption than prior royalty-free codecs like Speex and Vorbis. This spells the beginning of the end for proprietary formats, and we are now working on doing the same thing for video.

Is storage in the Ogg container format intended to be the go-forward standard for encoding of local files with the Opus codec?

Following the link from the Opus homepage to the Ogg-Opus page, it appears so. The mapping is quite highly developed, and even specifies integration with EBU R128 loudness normalization.

Congratulations to all involved, including the corporate interests who saw fit to keep it an open standard and prevent fragmentation, and those such as Mozilla who contributed employees to work on it.

Hopefully the mandatory status in WebRTC, the IETF standardization, and the number of organisations behind Opus will encourage a wide range of others to implement it: a well-backed standard and very much an open door to numerous use cases, each of which needs implementing only once.

Pardon my ignorance, but I had always been led to believe that in order to use audio software at low latencies, your operating system needs to be running a low-latency kernel.

I have experimented with audio a fair bit on Linux, and what musicians tend to do is replace the kernel that comes by default with their distribution with a low-latency one.

They then tend to install the Jack audio server, which is designed for low latency work.

They also tend to use a dedicated soundcard.

My question is - it is great that Opus offers low latency, but will most users be able to benefit from this functionality if their operating system does not have a low latency kernel?

Also, given the benefits that a low latency kernel can offer why is the standard Linux kernel shipped by major distributions not low latency by default?

Are there any downsides to having a low latency kernel?

Is low latency not enabled by default because the integrated sound modules on a lot of motherboards are not powerful enough to run at low latency? Do you tend to need a dedicated soundcard to do low-latency recording? I noticed that when recording with Jack at low latency using my motherboard's sound I kept getting xruns, but once I started using a dedicated soundcard this problem vanished. It was as if the integrated sound module was struggling to cope at low latency, and the more powerful dedicated card was necessary for such work.

If this is the case, could the usefulness of Opus as low-latency software be hampered somewhat by the kernels that many folks use being compiled without low-latency options enabled? Similarly, could its usefulness be hampered by hardware shortcomings? Dedicated soundcards are very much an item for the enthusiast; the average computer user will rely on less powerful integrated sound.
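The buffer-size arithmetic behind those xruns can be sketched. The figures below are illustrative assumptions about an ALSA/Jack-style setup, not measurements:

```python
# Rough period/latency arithmetic for a Jack-style audio setup.
# All numbers are illustrative; real latency depends on driver,
# period count, and hardware.

def period_latency_ms(frames_per_period: int, sample_rate: int) -> float:
    """Time the hardware gives the CPU to refill one buffer period."""
    return 1000.0 * frames_per_period / sample_rate

# At 48 kHz, a 1024-frame period gives the audio thread about 21 ms to
# wake up and refill the buffer; a 64-frame period gives it about 1.3 ms.
# If the scheduler misses that deadline (stock kernel, struggling onboard
# codec), the buffer underruns: that is an xrun.
for frames in (1024, 256, 64):
    print(frames, "frames ->", round(period_latency_ms(frames, 48000), 2), "ms")
```

This is why small buffers expose both kernel scheduling and hardware weaknesses: the deadline shrinks linearly with the period size.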

If I were to jam with a friend over the internet, what prerequisites would we both need? A fast internet connection? Low-latency kernels? Dedicated soundcards?
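As a back-of-the-envelope check on the jamming question: one-way delay is roughly codec delay plus sound-card buffering on each end plus network transit. Opus's minimum algorithmic delay is 5 ms (2.5 ms frame plus look-ahead in its lowest-delay mode); the other component figures and the ~30 ms comfort threshold below are my own assumptions for illustration:

```python
# Hypothetical one-way latency budget for jamming over the internet.
# Component figures are assumptions, not measurements.
OPUS_ALGORITHMIC_DELAY_MS = 5.0   # 2.5 ms frame + look-ahead, lowest-delay mode
CAPTURE_BUFFER_MS = 2.7           # e.g. ~2 x 64 frames at 48 kHz on the sender (assumed)
PLAYBACK_BUFFER_MS = 2.7          # same buffering assumed on the receiver
NETWORK_ONE_WAY_MS = 15.0         # a good same-region connection (assumed)

total = (OPUS_ALGORITHMIC_DELAY_MS + CAPTURE_BUFFER_MS
         + PLAYBACK_BUFFER_MS + NETWORK_ONE_WAY_MS)
print(f"one-way delay: {total:.1f} ms")
```

The point is that the codec is now the smallest term in the budget: network distance and buffering dominate, which is why all three of your prerequisites matter at once.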

Sorry if I am coming across as a bit negative - just trying to get my head around this low latency stuff as it is confusing me a bit.

I have transcoded my FLAC files to Opus at the default bitrate of 96 kbps and am astounded by the quality.


Opus in Matroska is also being worked on within Xiph.org, but isn't ready today, from what I've read. I suspect that .opus files (i.e. opus in ogg container) will be the typical form of file-based Opus audio-only playback, just as .ogg is the typical form of file-based Vorbis audio-only playback.

Essentially, little changed except the removal of the -voice and -music modes, which weren't useful, and other things that simply made building the binary easier, AFAICT.

Recent versions, including the free reference implementation, have all been tuned to very good performance, though moderate improvements on music and probably on problem samples will doubtless be made over time. The days of naive reference encoders (like the early mp3 encoders) seem to have passed.

I guess this is the time and place for the big "Thank You!" I'm particularly grateful to everyone involved in pushing Opus through the standardisation process. I hope it is now going to be as successful as it deserves. To world domination! ;-)

Let's hope that projects like Daala can reach something similar in the future.

Seeking in Opus files requires a pre-roll (recommended to be at least 80 ms). However, currently Matroska requires its index entries to point directly to the data whose timestamp matches the corresponding seek point, not some place arbitrarily before that timestamp. These two requirements are incompatible, and mean that seeking in Opus will be broken in all existing Matroska software. In particularly unlucky cases (e.g., around a transient), playing back audio decoded without any pre-roll can produce extremely loud (possibly equipment-damaging) results. We need a new element to signal this, e.g. Track::TrackEntry::PreRoll.
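The seek-with-pre-roll behaviour described above can be sketched as follows. The 80 ms figure is from the comment; the function name and granule arithmetic are my own illustration, not a real demuxer API (Opus always decodes at 48 kHz):

```python
# Sketch of seeking with pre-roll: start decoding before the seek target,
# then discard the converged-but-unwanted samples. Illustrative only.
PREROLL_MS = 80
SAMPLE_RATE = 48000  # Opus output is always timed at 48 kHz

def seek_plan(target_ms: int) -> tuple[int, int]:
    """Return (decode_start_ms, samples_to_discard) for a seek to target_ms."""
    decode_start = max(0, target_ms - PREROLL_MS)
    discard = (target_ms - decode_start) * SAMPLE_RATE // 1000
    return decode_start, discard

print(seek_plan(1000))  # seek to 1 s: decode from 920 ms, discard 3840 samples
print(seek_plan(40))    # near the start: decode from 0, discard 1920 samples
```

The incompatibility with Matroska is exactly the first line of `seek_plan`: existing index entries must point at `target_ms` itself, with no way to tell the player to begin at `decode_start` instead.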

I believe the pre-roll is essential because of the SILK layer's predictors needing time to converge based on previous samples, but wouldn't matter in CELT-only (MDCT only) mode. Without this, excessive volume bursts may occur if SILK is active, so the BEST solution is a PreRoll (decoding but not playing 80 ms of audio or more to make it converge). Presumably if the PreRoll isn't accessible it should be acceptable (and mandatory) to mute the audio for 80ms (or follow a suitable fade-in curve).
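The mute/fade fallback mentioned above could look like the sketch below. The linear ramp and the flat sample buffer are my own assumptions; 80 ms at 48 kHz works out to 3840 samples:

```python
# Fallback when no pre-roll data is available: fade in over the first
# 80 ms so un-converged SILK predictor output cannot produce a
# damaging volume burst. Illustrative sketch, not a real decoder hook.
SAMPLE_RATE = 48000
FADE_SAMPLES = SAMPLE_RATE * 80 // 1000  # 3840 samples

def fade_in(samples: list[float]) -> list[float]:
    """Apply a linear ramp from 0 to 1 over the first FADE_SAMPLES samples."""
    n = min(len(samples), FADE_SAMPLES)
    return [s * (i / FADE_SAMPLES) for i, s in enumerate(samples[:n])] + samples[n:]

out = fade_in([1.0] * 4000)
print(out[0], out[3999])  # silent at the start, full scale once past the ramp
```

A shaped curve (e.g. raised cosine) would sound smoother than a linear ramp, but any monotonic fade bounds the worst-case burst amplitude, which is the safety property that matters here.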