The issue of which format becomes prevalent is very important for the future of the open web (and especially Linux). YouTube is one of the biggest providers of H.264 encoded media (currently encapsulated in Flash, but there is an HTML5 beta program) and Google will pay hefty royalties for the privilege.

The question of royalties over the use of H.264 has become a popular talking point of late because, while Safari and Chrome support it, Chromium (the free software version of the Chrome browser), Opera and Firefox don’t.

I read through the FAQ and can’t find out if Free and Open Source developers and products need to license the MPEG LA patents for MPEG-4 Visual. It was alleged in a comment that royalties are only necessary for products sold, not for free products. Is this correct? Could you please comment on the licensing options for Free (e.g. GPL) and open source implementations of MPEG-4 Visual, specifically h.264? What about downstream users/developers/distributors of Free and open source software?

The answer is a resounding “Yes” and even end users are liable:

In response to your specific question, under the Licenses royalties are paid on all MPEG-4 Visual/AVC products of like functionality, and the Licenses do not make any distinction for products offered for free (whether open source or otherwise)…

I would also like to mention that while our Licenses are not concluded by End Users, anyone in the product chain has liability if an end product is unlicensed. Therefore, a royalty paid for an end product by the end product supplier would render the product licensed in the hands of the End User, but where a royalty has not been paid, such a product remains unlicensed and any downstream users/distributors would have liability.

As an article over at OSNews states, we must ensure that H.264 does NOT become the de facto standard for video on the web:

“In other words, h264 is simply not an option for Free and open source software. It is not compatible with “Free”, and the licensing costs are prohibitive for most Free and open source software projects. This means that if the web were to standardise on this encumbered codec, we’d be falling into the same trap as we did with Flash, GIF, and Internet Explorer 6.”

I guess it’s up to web developers and corporations to make the smart choice. If Google can complete its purchase of On2 Technologies, they might release later generations of the VP codecs (Theora is based on On2’s VP3) to surpass the quality of H.264.

We get a lot of E-mail from people, demanding support for OGM. Of course they come to us, it’s called ‘Ogg.’ We’re the ‘Ogg People.’ We don’t support OGM. We didn’t write it, and we don’t have the resources to help people with it…

The Ogg and OGM file formats are the same; the main difference is the first header in each stream. OGM uses several standardised header formats (audio, video and text) in order to make identifying unknown codecs easier in DirectShow (and subsequently other frameworks). That is, with those three headers you can use any audio or video format you choose without having to write custom header parsing routines for each codec in the demuxer.

In other words, ogmtools provides the standard du jour for encapsulating various common-in-AVI codecs in an Ogg bitstream, like ‘divx’, ‘mp3’ and so on.

Remember, this is still very early in H.264’s history so the licensing is very friendly, just like it used to be for MP3. The companies who own the IP in these large patent pools aren’t in this for the fun of it – this is what they do. They patent and they enforce and then enjoy the royalties. If they are in a position to charge more, they will. We can expect that if we allow H.264 to become a fundamental web technology that we’ll see license requirements get more onerous and more expensive over time, with little recourse.

Google has created an opt-in beta program for anyone wanting to test YouTube with the HTML5 video tag rather than Flash. There are a few caveats however, with the number one being that it’s still all H.264 video. No Theora to speak of yet, but it’s possibly a step in the right direction!

Microsoft has made a video on how to host a Windows 7 launch party. Of course, the characters include representatives of everyone: the nerd (red shirt), the older lady (blue shirt), the younger woman (purple shirt) and the African American (green shirt).

They try to be cool by cutting in and out and zooming the camera around, which really just ends up looking stupid.

One of the hot tips:

“Now, of course the first thing you want to do is install Windows 7.. [All laugh].. Der, der! Make sure you do that a couple of days in advance of the party..”

A few days before, hahaha.. Then it cuts over to a badly dubbed voice as though it’s the same guy continuing with:

In a lot of ways, you’re just throwing a party with Windows 7 as an honoured guest! Sounds easy, and it is!

“Oh my gosh, well when everyone was there and settled, I led an overview of some of my favourite Windows 7 features. I showed my guest things from two of the Windows 7 orientation videos and it took like ten minutes. Oh you know what was great? It was totally informal, like, everyone just crowded around the computer in the kitchen.”

Finally, it ends with a deep message to everyone about how Windows 7 is all about you..

Prior to the purchase of brand new workstations at work, Justin and Andy were working from MacBook Pro laptops. We had these Matrox DualHead2Go boxes which took a video signal and split it in two, for the purposes of connecting two monitors to a non-dualhead video card. I cannot tell you how much of a pain it was getting not only DVI output working under Linux through the proprietary ATI driver (although now that I know how, it’s pretty easy), but also getting it to talk to these Matrox boxes.. modelines.. resolutions.. triple displays.. gahh..

Nevertheless, I did get it to work. The final setup consisted of the laptop screen being enabled as the primary desktop, then the secondary desktop through the DVI output connecting to the Matrox box at a resolution of 2560×1024, which the box then split across two LCD screens. One of the problems was that the DPI resolution for the dualscreen setup was very wrong and as a result the fonts on the monitors were TINY.

So, the next trick was to tell the secondary monitor (the dualview box) what DPI it should run at (in this case, 96×96).
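For reference, the usual xorg.conf way to pin a DPI (a sketch, not necessarily the exact config I used; the Identifier name here is a placeholder you’d match to your own Monitor section) is to set DisplaySize so that the physical dimensions work out to 96 DPI for the 2560×1024 virtual screen, using mm = pixels × 25.4 / 96:

```
# Sketch: force ~96 DPI on the 2560x1024 Matrox output.
# "DVI-Monitor" is a placeholder identifier.
# mm = pixels * 25.4 / 96:
#   2560 * 25.4 / 96 ~= 677 mm
#   1024 * 25.4 / 96 ~= 271 mm
Section "Monitor"
    Identifier  "DVI-Monitor"
    DisplaySize 677 271
EndSection
```

On a running server, 'xrandr --dpi 96' achieves much the same thing without editing config files.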

Even on my main box at work using the NVIDIA driver on a dualscreen setup, the DPI is wrong:

    chris@gentoo ~ $ xdpyinfo | grep -A1 dimensions
      dimensions:    3360x1050 pixels (948x303 millimeters)
      resolution:    90x88 dots per inch
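You can sanity-check what X is reporting with a quick bit of arithmetic, since DPI is just pixels × 25.4 divided by the physical size in millimetres. Plugging in the numbers from the xdpyinfo output above:

```shell
# DPI = pixels * 25.4 / millimetres
# 3360x1050 pixels across 948x303 mm, per xdpyinfo above:
awk 'BEGIN { printf "%.0fx%.0f dpi\n", 3360*25.4/948, 1050*25.4/303 }'
# → 90x88 dpi
```

Which matches the 90x88 X reports, so the driver really does believe the screens are that physical size.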