
SD-Arcadia writes to tell us that Theora 1.1 has officially been released. It features improved encoding, providing better video quality for a given file size, a faster decoder, bitrate controls to help with streaming, and two-pass encoding. "The new rate control module hits its target much more accurately and obeys strict buffer constraints, including dropping frames if necessary. The latter is needed to enable live streaming without disconnecting users or pausing to buffer during sudden motion. Obeying these constraints can yield substantially worse quality than the 1.0 encoder, whose rate control did not obey any such constraints, and often landed only in the vague neighborhood of the desired rate target. The new --soft-target option can relax a few of these constraints, but the new two-pass rate control mode gives quality approaching full 'constant quality' mode with a predictable output size. This should be the preferred encoding method when not doing live streaming. Two-pass may also be used with finite buffer constraints, for non-live streaming." A detailed writeup on the new release has been posted at Mozilla.

The version of Theora used in that comparison is also rather out of date. Nearly a year out of date, in fact - it's an SVN snapshot dating from 2008-11-25, not the released version 1.1. I think the experimental Thusnelda encoder was known to have regressed slightly on video taken from Touhou games back then.

Also of note, the comparison you cited doesn't actually say what you claim. It says that Theora beats the h263 YouTube version at a lower bit rate. Read the conclusions: they admit that the h264 version on YouTube is better quality.

Even at ~500k, the YouTube version is clearly blurrier on details. And I thought I would need to download yet another codec to play the Theora video, but I was surprised to learn that my Firefox 3.5 does support it natively!

Maybe now Google will use Theora instead of the patent-encumbered H.264 in their new HTML5 Youtube.

"Encumbered" implies some sort of difficulty. H.264 decoding is available, for free (and, if you must, for free as in freedom, as well), on every OS, including Linux.

So, where's the encumbering?

It seems to me that requiring only open standards, when *they* are not the norm and require going out of one's way is more encumbering than going with something like h.264. Not to mention being encumbered with a format that offers inferior quality.

Freedom is cool and all, and I'm supremely grateful for Theora's existence.

I find it intriguing that in every discussion I see on tech sites like /., it is always the patents that people focus on.

What about the built-in hardware support for h.264 in millions upon millions of existing general computing and embedded devices? It seems like Google would want YouTube accessible on these devices, and on many it is. Being able to bring that support to phones, satellite boxes, cable boxes, TVs, etc. that already have h.264 is probably a bigger motivator than the

Theora is an open video codec being developed by the Xiph.org Foundation as part of their Ogg project, which aims to integrate On2's VP3 video codec, the Ogg Vorbis audio codec, and the Ogg multimedia container format into a multimedia solution that can compete with the MPEG-4 format. Theora is derived directly from On2's VP3 codec; currently the two are nearly identical, varying only in framing headers, but Theora will diverge and improve from the main VP3 development lineage as time progresses.

Moreover, Theora is the only decent video codec which complies with the W3C's patent policy. There is no question or threat of demands for patent royalties or license payments for any use of the codec.

Dirac [diracvideo.org] strikes me as another codec worth following. It's available to all developers, high-quality, and was in production use by the BBC during the Olympics (they said so in their Dirac promotional video [bbc.co.uk]). VLC has support for playing back Dirac streams. I'd guess other players do as well.

I expect Theora and Dirac to be of interest to all who want high-quality free video codecs.

Worth following? Yes. Especially as a profile of Dirac is in the process of being adopted as VC-2 and so will be used a lot for digital masters. Worth deploying? Not so much. A decent (Core 2 or better) laptop can probably play back Dirac without dropping frames, but it will be at a very high CPU load. A handheld has no chance. There are a couple of GPU-based decoders which may be ported to run on OpenGL ES 2.0 hardware in a modern handheld, and there is a hardware decoder under development that may help too (especially if it's licensed as an IP core for integration into ARM SoCs).

That said, most handhelds can handle Theora, so providing both Theora and Dirac should cover most clients. Not the iPhone, of course, but if people will buy into a closed platform then they can't expect things to always work...

VLC has support for playing back Dirac streams.

The OS X builds prior to 1.0 had Dirac support, but 1.0 didn't and neither have any of the subsequent ones. No word on whether this is intentional or not from the VLC team, but playing a Dirac file now pops up an error saying 'dirac' is an unrecognised CODEC ID.

It's an outdated video codec that loses to H.264 in pretty much every codec shootout, and is in general ignored in HD media (H.264/VC-1), HD broadcasts (H.264/MPEG2), set-top boxes, mobile players and so on. It's also pretty much completely ignored by the pirate community, which prefers mkv/H.264. While possibly FUD, the fear of submarine patents means not everyone is willing to ship this codec, so it's lost its only real shot at relevance as the default codec for HTML5 video, which now also seems to be a mix probably dominated by H.264. The end result is that it might be used by a few geeks and internally in video games and such that provide their own player, but it'll likely have as much impact as Vorbis had on the mp3/aac format. That is, none.

Sadly I agree. But the vorbis mp3 example is too kind. Ogg Vorbis was significantly better than mp3 at a given bitrate, and it still didn't get much traction. Theora on the other hand, like you said, doesn't compare to modern proprietary codecs. It's too bad, but it's true.

> outdated video codec

An arbitrary definition which could very well apply equally well to H.264 in comparison to almost any other codec.

> loses to H.264 in pretty much every codec shootout

But not usually by very much; and in any case, countless codecs beat H.264 in pretty much every respect in turn. Since the issue is not some theoretical perfect codec but a cost/bandwidth/quality/encode-CPU-time/decode-CPU-time/features/etc. tradeoff, this might still result in a net benefit.

A moot point, given that people who are misappropriating unpaid-for content choosing to use a misappropriated unpaid-for format is hardly surprising.

Seriously? Do you work for the MPAA or some other group like that? People who pirate stuff aren't comic book villains who break laws just for the sake of breaking laws. They don't think "oh hey while I'm violating copyright I'll violate patents too, just because I can!" H.264 is more popular because it is better, not because the people who encode stuff get hard at the thought of breaking laws in a way nobody particularly cares about and they're never ever going to get in trouble for.

The AC above me covers the rest of your points quite nicely, so I'm not going to write something that would be much the same as his. Your post is utter nonsense, and you and the people who actually looked at your post and not only managed to not laugh, but modded you up need to pull your heads out of the GNU/sand and admit that Theora is simply inferior. If you think not having any patent problems is a big enough issue to prefer a technologically inferior codec, that's fine. But don't twist the facts and outright lie just so you can try to pretend Theora is otherwise a match for modern codecs, because it is not.

No, but if they don't particularly care about violating copyright, they won't care much about violating patents, either.

H.264 is more popular because it is better

Because it's better, or because it's perceived as better -- in terms of quality per bit. But again, anywhere other than the pirate community, patents are likely to be an issue, and an open-but-worse format may be preferred over a closed-but-better format, especially if it's not that much worse.

admit that Theora is simply inferior.

I'm pretty sure that's what was meant by this part:

But not usually by very much; and in any case, countless codecs beat H.264 in pretty much every respect in turn

No, but if they don't particularly care about violating copyright, they won't care much about violating patents, either.

Phrasing it as them using "a misappropriated unpaid-for format" is not saying they merely don't care. You really have to read that line very loosely and optimistically to interpret it in a way that doesn't make it seem like the author was thinking "damn filthy fucking pirates" when he wrote it.

Because it's better, or because it's perceived as better -- in terms of quality per bit.

However, "misappropriated unpaid-for format" is about a hair's width away from "stolen"

Which, like it or not, is still not a terribly inaccurate way of describing what's going on here. You could say "illegal unlicensed format", if you like, but I don't think it changes the tone significantly.

Unless you can provide evidence of widespread usage of "misappropriated" in anything but a negative way

Are you going to argue that patent infringement is a positive thing?

I'm not necessarily saying I disagree, but let's be clear, because it sounds like that's what you're advocating.

a better codec is better even if it's impractical or even impossible to use it given current hardware.

It's come down to a semantic argument, but this seems pretty blatantly wrong to me. It's "better" even if it's impossible? In

Which, like it or not, is still not a terribly inaccurate way of describing what's going on here. You could say "illegal unlicensed format", if you like, but I don't think it changes the tone significantly.

Violating IP laws is not the same as theft. Not even close. Were it merely claimed to be illegal, I never would've had a problem with its wording. Let us look up the definition of "misappropriate [merriam-webster.com]", shall we? "to appropriate wrongly (as by theft or embezzlement)" Saying merely that it's "illegal" is a neut

When you cut out the wishful thinking you pretty much agree with me that it isn't being used by those that do care about software patents or those that don't care about software patents. The former licenses H.264 or jumps at shadows, the latter uses H.264 without a license. Your futile attempts at counter attack against H.264 failed the save vs reality. Oh by the way, I also forgot one other big thing - modern digicams/video cameras record in AVCHD which is H.264, so unless they edit and transcode it that'l

> which now also seems to be a mix probably dominated by H.264.
The jury's still out on that one - I think most people expect the W3C to wash their hands of baseline video recommendations entirely (at least until an appropriate future format meets the requirements).

The trouble is, Theora does meet the requirements, and it's the only halfway modern codec that does. However, Apple's requirement for accepting a video tag for HTML seems to be that it cannot be a royalty-free codec, because that would allow Firefox to continue to exist, which would slow market-share growth for Safari. Instead, a patent-encumbered codec will make it impossible for free software to implement HTML5, and manufacturers of proprietary software will have another string in their monopoly.

The link on xiph.org (http://people.xiph.org/~greg/video/ytcompare/comparison.html I'm guessing you mean) doesn't show that it's better than h264, it shows that it's better than h263, at a low bit rate. In the conclusions it freely admits that the h264 video is better quality.

Here's [saintdevelopment.com] another comparison that clearly shows that both 1.1 and 1.0 are worse than h264; the x264 encoder used here is actually pretty old, and missing a couple of major improvements too.

Did you even watch the videos at your second link? It's pretty clear to me that Theora is the better codec in that clip. Play them side by side and notice how much better the butterfly and the sky look with Theora.

It just suffered from needing a higher-powered processor to decode video for playback.

Also hilariously wrong. Hell, one of the advantages (what few there are) of Theora its proponents like to bring up is that it takes less resources to decode than H.264. I have no fucking clue where you got this idea from.

See www.xiph.org. I believe there is a link on there comparing H.264 and Theora. Theora is noticeably better

Wrong again. There have been several comparisons between H.264 and Theora by the Xiph folks and they've all come out in favor of H.264. They've only tried to argue that Theora isn't really that bad. The problem is it is, and the only reason Theora didn't get utterly murdered in their comparisons is they've compared default Theora to default x264 and YouTube's H.264.

Default Theora is pretty much as good as it gets unless you want to set custom quantization/Huffman tables. Default x264 falls far short of x264 with its settings set for maximum quality, mainly because when you set them like that it's slow as fuck and most people will take worse quality over sub-1 FPS encoding. I don't know what YouTube uses or how they set it, but I seriously doubt a site that huge goes for the maximum possible quality.

Furthermore, Theora is simply inferior technology-wise to H.264. Theora-the-specification is far behind H.264 and it makes it pretty much impossible for Theora-the-software to ever be better than a decent H.264 encoder, as any improvements could simply be copied by the H.264 encoder (though it's more likely it'd be the other way around).

My guess is Theora 1.1 should be noticeably better.

It is noticeably better than Theora 1.0, but remains noticeably worse than H.264 and will continue to be so.

I would rather that community based projects with low budgets distribute video using an absolutely free codec if the alternative is that they don't distribute at all because they can't afford the fees. If the quality is a little bit worse, but it's still fit for the purpose, and it's free, then it has more value than superior technology that is not affordable.

People shouldn't be using YouTube as their distribution mechanism in the first place. They should be using their own devices.

Where in my post did I say that you can't choose inferior codecs for other reasons? All I did was respond to the absurd assertion that Theora is better than H.264. If you think using outdated technology is an acceptable price to pay to avoid patent issues, go right ahead.

If the quality is a little bit worse, but it's still fit for the purpose, and it's free, then it has more value than superior technology that is not affordable.

MPEG-1 is completely free, in most areas of the world, due to patent expiration.

It'll also put Theora to shame in just about every respect. Encoding and decoding complexity is so low your digital watch could handle it, and h.264 offers practically no quality improvement at high bitrates, and only a small improvement at VERY LOW bitrates (what it was

MPEG-1 is completely free, in most areas of the world, due to patent expiration.

Possibly - so long as you don't want any audio with your video.

It'll also put Theora to shame in just about every respect.

Unlikely - even the original VP3 can beat MPEG-1, despite its major flaws.

Encoding and decoding complexity is so low your digital watch could handle it, and h.264 offers practically no quality improvement at high bitrates, and only a small improvement at VERY LOW bitrates (what it was designed for).

Encoding and decoding complexity for MPEG-1 is... actually going to be quite close to MPEG-2. h.264 also offers quality improvements at *every* bitrate - due to CABAC (which provides better compression of the encoded data), better motion compensation that allows the available bitrate to be used more efficiently, and possibly even in-loop deblocking.

I once claimed that xvid was better than h.264. Boy was I wrong! Slashdotters set me right almost immediately, and then I started researching it.

The h.264 4.0 profile isn't that good. I believe YouTube uses that, with optimizations like CABAC and B/ref frames turned off, and motion estimation set quite low. During my research, and after days of tweaking, I put together some ludicrously good x264 settings (a very tweaked 5.1 profile) which yielded incredible results for FRAPS'd test vids. I was getting Youtube HD's

Theora is definitely improved, but I see a lot of basis patterns throughout these samples. Theora would be well served by a postprocessing filter. Theora's 1-pass CBR encoding definitely needs a LOT of tuning before it'd be viable for real-world content; I don't think we'll see it used effectively for live encoding in this version.

Why? If the video and audio are compressed already, are you really gaining much by trying to compress them again? As for subtitles, aren't you better off with a container that supports them (i.e.: mkv)?

Yep. They are actually zero-compressed files, but still inside multi-part archives. But the subtitle files are separate. I can load a video file just fine in VLC, but I can't load subtitles unless I decompress them and they have the same filename.

I cant load subtitles in [VLC] unless I decompress and they have the same filename.

Just wanted to let you know that SMplayer [sourceforge.net] lets you load any file as the subtitle file. Of course, Mplayer itself does, too, but some people get intimidated by the command-line. With SMplayer, you go to the Subtitles menu, click on Load, and then pick whichever file you want.

In case anyone doesn't know yet, SMplayer is a user-friendly front-end for the powerful Mplayer program. Mplayer is probably the next best thing to an omnipotent video (and audio) player, but it's a command-line program with a bewildering array of options guaranteed to intimidate the weak of heart. SMplayer is a very well done user interface, just as easy to use as VLC but allows use of most of the features of Mplayer. SMplayer is to Mplayer what Ubuntu is to Debian.

Now, it still doesn't work on zip files. I wish someone had written SMplayer with the KDE toolkit instead of GTK+; then you could use the zip Kpart and just dive right into the Zip file (or even specify the subtitle filename as "fish://mylogin@myhomemachine/mypath/mysubtitlefile" and just pull it off another machine on the SOHO net).

SMplayer is the best MPlayer frontend I've tried. I still prefer MPC-HC + KLite for the GPU shaders, but I can't deny that SMplayer and MPlayer are quality software! Based on CPU usage when playing stuff, I'd bet that the GPU acceleration/decoding is fully enabled and working.

Yep. They are actually zero-compressed files, but still inside multi-part archives. But the subtitle files are separate.

This is wrong on so many levels it's not even funny. Why the hell would you want to keep an already compressed file format in a zero-compressed multi-archive?

I can understand if you want to seed your torrent, but in that case that's not the video player you're having trouble with. Why don't you ask for a torrent client that automatically decompresses them when the download is complete?

You don't understand what MKV [wikipedia.org] is... it's not a codec, it's a container format for holding the video & audio streams along with assorted other information. This could mean multiple video and audio streams, as is common for many movies dubbed in different languages or with alternate video scenes. The hardware acceleration applies to whatever codec is used to create the streams held within the MKV file... and that could be many different things, from MPEG2 to h.264 to VC1, etc.

It's up to the media player to ensure the streams are accelerated by picking a proper codec. It's also up to the media player to understand the container format. These things aren't very difficult, because of the codec frameworks that exist. On Windows, the most common one is DirectShow. (or whatever they've renamed it in Vista/Win7)

The media player has to pipe the stream data through to wherever it has to go - the Codec handles this, so once the media player picks a hardware accelerated codec, you're set!

VLC usually just sends it to its own CPU-based codecs, but other media players (like MPC, loaded up with directshow codecs for different formats) will send parts of it to the GPU to be decoded/accelerated. MPC-HC also has GPU shaders that can enhance the quality, regardless of the codec.

H.264 will be accelerated in .MKV, .MOV, and .MP4 unless your media player doesn't know what to do, which is unlikely because of the codec frameworks. The biggest issue is either going to be a missing codec (solved by using a pack like the K-Lite Mega Codec Pack) or your media player of choice (VLC) favouring compatibility over performance. VLC likes to choose CPU-decode codecs rather than GPU-decode ones. As far as I know, it also lacks GPU shaders.

Side note: Recently I was uploading H.264/AAC to YouTube. There was a glitch on YouTube's end where it thought VBR AAC was longer than it really was, so it rejected the video. After switching to .mp4 (h.264/mp3), I had problems with audio desyncing. Then I switched to .mkv (h.264/mp3), and it worked fine. Seems like YouTube has solid mkv support, just like most desktop software I've tested.

Let's say I have a video that is... let's say H264 in a .mkv format. Now will any hardware accelerators actually recognize what it is through the mkv "wrapper" and accelerate it?

Hardware accelerators don't know what a mkv "wrapper" is. They don't care about the container format at all, and don't know anything about an AVI file, a MPEG transport or program stream, RTSP, etc. The software just reads the H.264 bitstream from the container and feeds only the H.264 stream to the decoder.
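To make the container/codec split concrete, here's a minimal sketch (my own illustration, not any real demuxer's code) of how software identifies a container from its leading magic bytes before handing the bare bitstream inside it to a decoder. The function name is hypothetical; the magic numbers are the published ones for each format:

```python
def sniff_container(header: bytes) -> str:
    """Guess the container format from the first bytes of a file."""
    if header.startswith(b"\x1a\x45\xdf\xa3"):        # EBML magic: Matroska
        return "mkv"
    if len(header) >= 12 and header[4:8] == b"ftyp":  # ISO base media: MP4/MOV
        return "mp4/mov"
    if header.startswith(b"RIFF") and header[8:12] == b"AVI ":
        return "avi"
    if header.startswith(b"OggS"):                    # Ogg (Theora/Vorbis live here)
        return "ogg"
    return "unknown"

# The same H.264 bitstream could sit behind any of these headers; a demuxer
# strips the container away and feeds only the raw stream to the (possibly
# hardware) decoder, which never sees the container at all.
print(sniff_container(b"\x1a\x45\xdf\xa3" + b"\x00" * 8))          # mkv
print(sniff_container(b"\x00\x00\x00\x18ftypisom" + b"\x00" * 4))  # mp4/mov
```

This is why "does the hardware accelerate mkv" is the wrong question: the accelerator only ever sees the codec's bitstream, never the wrapper.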

No, because mkv is a container format. Hardware acceleration for a container format makes no sense. Other than to demonstrate that you don't know the difference between containers and CODECs (or between Gb and GB), was there a point to your rant?

The really screwed up thing is that it is very rare for people to differentiate between coders, decoders, codecs, and encoding schemes.

DivX is a codec. A codec is a specific piece of software for converting between video and a specific encoding scheme. It is actually a terrible term. There is no such thing as a codec. There are encoders and decoders. They are often distributed in pairs. When doing so, the decoder will always support the output of the encoder, but might also support video using encoding sche

Fine, if you wish to be pedantic, I'll word it so you can understand: if I have a h264 file in a mkv "container", will it fucking be accelerated or not? Is that really so damned hard to understand? Mark me as troll all you want, mods, I got karma from hell baby, yeah! That doesn't answer my fucking question, which is: are there ANY current devices that will accelerate any damned thing in a mkv "container"?

And can we PLEASE get off this "container" bullshit already? it is the same bullshit as "avi is a conta

You keep getting modded down because you keep on ranting and not listening to what you're being told.

That doesn't answer my fucking question, which is there ANY of the current devices that will accelerate any damned thing in a mkv "container".

The answer to your question would be obvious if you knew what a container file is. That's why people keep on trying to explain it to you. However, to keep you from

I can't see much reason for that. I can only guess that he's downloading TV programs or something off free file hosting companies. With a limit of 100MB for example, people break larger media into rar archives so that they can be downloaded piecemeal.

And it's also a pain to transfer a "folder" of files to someone over the net. Torrents are the only remotely usable solution and that requires making a torrent, uploading it to a site, and then finding a user you want to give it to who also understands bittorrent...

And it's also a pain to transfer a "folder" of files to someone over the net. Torrents are the only remotely usable solution and that requires making a torrent, uploading it to a site, and then finding a user you want to give it to who also understands bittorrent...

Totally. Someone should get on this immediately. It would be totally cool to have a program which is able to string a number of files and their associated directories together, and just dump them into one file for ease of distribution! And then,

And I mean good support, not just something that works like a stream, but where you can seek and do everything like you can do with actual files.

AFAICT the only way to read a portion of a file in a zip is to read and decompress the whole file up to the portion wanted, so seeking is going to be pretty damn slow.

I'll just have to ask... why? Except for some holdouts from Usenet, I think pretty much everyone uses torrents without any rar/zip compression. And even those are automatically decompressed if you set up something like hellanzb. It certainly doesn't save you any space; it's just for grouping files together and integrity checking. Except torrents already do that, same with PAR on the Usenet side. It's completely redundant these days.

Indeed, both torrents and Usenet are secondary distribution systems. The original scene releases work through a completely different system of topsites and PREs, etc.

Of course, I've definitely downloaded a torrent before consisting of many rar files, inside which was a zip file, inside which were the original scene rars, inside which was the content, plus some supplementary material in a zip file.

That means that some files have had 4 layers of compression. That drives me nuts personally. I far prefer that

Funny how these days no one knows how the real Scene works. But it's surely better this way.

The old scene follows obscure rules to be l33t like ftping around rars, but they're a fraction of a fraction of the people downloading. There's also a new scene that's not so lame, I can tell you there's original releases that go on private torrents first but are packed up to make the old scene happy. Or they stay as internals, which is just fine with me.

The old scene follows obscure rules to be l33t like ftping around rars, but they're a fraction of a fraction of the people downloading.

Sure. But they are supplying 95% of what other people are downloading.

The topsite network was never meant to supply a large number of people, but was and is a *fast* and *secure* distributed exchange system for those who are in, *and* are contributing.

There's also a new scene that's not so lame, I can tell you there's original releases that go on private torrents first but are packed up to make the old scene happy.

Sure, I know and I respect them. They often fill the many holes left by the old scene these days. But still, these new scenes are *mostly* supplying mp3/cam/ts/scr/rips. No technical knowledge in there, just a matter of having fresh meat working for you. Yes,

The point was that there's relatively few people that get rars from the topsite system. Once you get past the fan-up and fan-out and start sharing in any form of peer group it's more effective to put up a torrent. I've never felt the need to view anything inside rars, and I'd say my hookup is stellar. But then I probably know one of the two exceptions you speak of.

Mod parent up; exactly what I was going to say. If you want to be able to treat archives as directories, this functionality belongs in the OS, not in every application. Windows has done this automatically for ZIP files since XP and other operating systems that support FUSE (including OS X) can do it for a variety of different archive formats.

Umm, video/audio files are already compressed, so technically we have this already. What you are proposing is compression inside of compression, which is quite useless. If you actually are getting good compression ratios from the RAR or ZIP, then the video wasn't encoded with a good compression algorithm to begin with.

The one thing I'd like to have with players is good support for playing files off from compressed (rar/zip etc) files. And I mean good support, not just something that works like a stream, but where you can seek and do everything like you can do with actual files.

There's really only one graceful way to implement this, which is to decompress it to disk well ahead of time to avoid getting I/O bound. Maybe you can do it in blocks. The best option is to only support uncompressed files in archives; compressed files get decompressed wholesale. And really, anything compressed in there is probably small enough to just decompress too.
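The stored-vs-compressed distinction the parent describes can be sketched with nothing but the standard-library zipfile module. A zero-compressed ("stored") member is just raw bytes at an offset, so a player can seek into it cheaply, whereas a DEFLATE member has to be decompressed up to the target position. The filenames here are made up for illustration:

```python
import io
import zipfile

payload = bytes(range(256)) * 1024   # 256 KiB stand-in for a media file

# Build an archive in memory with a single zero-compressed member,
# which is what the "zero-compressed files" above actually are.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("movie.bin", payload, compress_type=zipfile.ZIP_STORED)

with zipfile.ZipFile(buf) as zf:
    with zf.open("movie.bin") as member:
        assert member.seekable()     # Python 3.7+: archive members support seek
        member.seek(100_000)         # jump into the middle, like a player would
        chunk = member.read(16)
        assert chunk == payload[100_000:100_016]
```

With a DEFLATE member the same `seek()` call still works, but the library has to decompress everything before the target offset, which is exactly why seeking feels slow on compressed archives.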

Unlike H.264, you can use Theora in open-source software without worrying about being sued or shut down overnight.

Sure, if you don't care about freedom and don't mind paying for the privilege, go ahead and use H.264. But why would you want to, when you can use Theora however you want to, and without paying a cent?

Because everyone else in the industry is using H.264. If you want your materials to play nice with others' hardware, software, etc., you'd better damn well be using H.264.

Generally, the cost of the H.264 license is covered by the software/hardware purchased by the consumer, whether it's a business or personal use. It's licensed by Adobe/Apple/Google/whomever when you buy or use their encoder. I don't have to pay a licensing fee for every video I create in H.264.

I've tested Theora on a few occasions. Every time, H.264 has beaten it in terms of quality for file size. Plus, I can send an H.264 file to anyone else in the industry and I guarantee it will play for them. And today, I can put it out on the web and be pretty much guaranteed that just about everyone can view it.

Something that may cost you money starting 2011. MPEG-LA has indicated that it's likely to require royalties for streaming (not encoding; simply making available in a streamable fashion) H.264 starting then, with the final decisions on pricing and such to be made in December 2009, last I checked.

Of course for the next year or so you're ok.

The fact is, the video codec landscape on the web just doesn't look very good.

Unless it becomes popular, in which case the so-called "submarine" (actually they may not even be submarine) patents will come to the fore, and you'll have to pay.

I don't trust Xiph, having read their comments about what exactly they mean by "patent free" [hydrogenaudio.org], and having seen the silence over, say, Vorbis's apparent infringement of US Patent 5,214,742. Is Theora "safer" than Vorbis? Well, it's another DCT-based codec, just like 99% of the video codecs in use since H.261, and it's essentially doing stuff where everyone else is doing stuff. The chances of it not violating some patent somewhere are minimal to non-existent, as everyone and their brother is trying to come up with ways to improve DCT-based algorithms that they can patent and then submit to MPEG or VCEG for incorporation into the next MPEG or H.26* video standard.
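For readers wondering what "another DCT-based codec" actually means: the shared core is the 8x8 DCT-II block transform. The naive floating-point sketch below is the textbook transform only, not any codec's actual integer approximation, and it shows the property the whole family exploits: a flat block's energy collapses into a single DC coefficient, leaving the rest near zero to be quantized away.

```python
import math

N = 8  # video codecs in this lineage transform 8x8 blocks of samples

def dct2d(block):
    """Naive O(N^4) 2-D DCT-II of an NxN block, orthonormal scaling."""
    def c(k):
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

flat = [[128.0] * N for _ in range(N)]   # a featureless grey block
coeffs = dct2d(flat)
print(round(coeffs[0][0], 3))            # all the energy lands in DC: 1024.0
print(round(abs(coeffs[3][5]), 3))       # every AC coefficient is ~0: 0.0
```

Every improvement the patent holders pile on (integer transforms, prediction, entropy coding of these coefficients) sits around this same core, which is why it's so hard for any DCT codec to be confident it infringes nothing.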

There are really only three standards that could be considered free of patent issues, and even then it's not entirely 100% certain. H.261 dates back to the mid-1980s. The ITU lists no current patents applying to MPEG-1. (It's worth pointing out that Theora's predecessor, VP3, is considered to sit somewhere between H.261 and MPEG-1 in terms of quality.) And finally, the BBC did an extensive search for anything that might hit their Dirac codec and came up blank, as well as proposing some patents themselves and then withdrawing them once published, so they count as prior art; Dirac is in the running too.

Theora? If I were a commercial concern, I would avoid it. I'd go for the predictability of a licensable codec over one that would almost certainly become a target for patent lawsuits if it ever achieved critical mass, and possibly earlier.

I might feel differently if Xiph didn't play word games with the term "patent free" and gave a straight answer on the actual patents people have found, rather than saying "Yeah, we ran it by a lawyer, and they said we're OK, but we're not going to tell you why because it's our super secret defense we'll use if we're ever sued." That doesn't exactly inspire confidence, especially as nobody will ever sue Xiph anyway: Xiph just writes the software, and leaves the packaging, compiling, possible selling, and actual using to everyone else.

Unless it becomes popular, in which case the so-called "submarine" (actually they may not even be submarine) patents will come to the fore, and you'll have to pay... I'd go for the predictability of a licensable codec ahead of one that almost certainly would be a target for patent lawsuits if it ever achieves critical mass,

Ludicrous FUD. Did concerns like this make anyone even pause, for a heartbeat, before considering H.264?

Nothing about Theora's "openness" makes it more likely to be hit by a submarine patent than any proprietary project.

And remember, it was originally proprietary, and is covered by a few patents, which have been released to the public domain -- so if your argument is that having something patented once makes it less likely to be infringing on someone else's patents, then even if that was ever a valid argument, it applies to Theora as much as to any proprietary codec.

The first claim of 5,214,742 states (in part): "the improvement comprising selecting the length of the respective window functions as a function of signal amplitude changes"; all the other claims are dependent on this one.

Libvorbis lib/envelope.c, line 87:

    /* fairly straight threshhold-by-band based until we find something
       that works better and isn't patented. */

The code goes on to NOT select the window length as a function of signal amplitude changes.

Never mind the fact that block-switching transform codecs significantly pre-date that patent, and that switching based on amplitude changes is the most obvious criterion, since the primary purpose of block switching is to keep signal energy from high-amplitude parts from smearing into the preceding low-amplitude parts.
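For the curious, here's a rough sketch of the technique claim 1 describes: picking a long or short analysis window purely as a function of frame-to-frame amplitude change. This is purely illustrative (the window sizes and threshold are made up); it is not code from Vorbis or any other codec, and it's exactly the selection rule the envelope.c comment above says Vorbis avoids.

```python
# Hypothetical illustration of claim 1 of US 5,214,742: select the
# window length as a function of signal amplitude changes.
# NOT real codec code; sizes/threshold are arbitrary assumptions.

LONG_WINDOW = 2048   # samples -- typical long transform block
SHORT_WINDOW = 256   # samples -- short block used around transients

def select_window(prev_frame, cur_frame, ratio_threshold=4.0):
    """Pick a window length from the change in peak amplitude
    between consecutive frames of samples."""
    prev_peak = max((abs(s) for s in prev_frame), default=0.0)
    cur_peak = max((abs(s) for s in cur_frame), default=0.0)
    # A sudden amplitude jump (an attack) selects the short window
    # to limit pre-echo; a steady signal keeps the long window.
    if prev_peak == 0.0 or cur_peak / prev_peak > ratio_threshold:
        return SHORT_WINDOW
    return LONG_WINDOW

# quiet frame followed by a loud transient -> short window
print(select_window([0.01] * 64, [0.9] * 64))   # 256
# steady signal -> long window
print(select_window([0.5] * 64, [0.55] * 64))   # 2048
```

The point is that this decision depends directly on the amplitude change, which is what the claim covers; a per-band threshold detector structured differently can reach similar decisions without matching that claim's wording.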

So, how much do they pay you to spread bullshit? Are there openings available? My soul is also for sale, at the right price...

The MPEG-LA license only protects you against the MPEG-LA members. In no way does it provide any sort of guarantee that someone who isn't in MPEG-LA won't start suing at any point in time. The argument against Theora in this regard can really be made against any codec.

As for your "safe" codecs: MPEG-1 may no longer be patentable by MPEG-LA's standards, but that doesn't mean someone didn't patent some part of the format after the standard came out, leaving that patent still valid today. Would such a patent survive a prior-art challenge? It depends on what was patented, but even if it wouldn't, a patent grant from the USPTO is all it takes to file a lawsuit, and the patent then has to be invalidated afterwards. You can still get sued, even if the claim turns out to be baseless.

The BBC may have done research about Dirac and come up with nothing, but are they more open about what exactly they did than Xiph is? If they are, please give a link showing what you considered acceptable for Dirac but not for Theora.

The MPEG-LA license only protects you against the MPEG-LA members. In no way does it provide any sort of guarantee that someone who isn't in MPEG-LA won't start suing at any point in time. The argument against Theora in this regard can really be made against any codec.

Well, the members of the MPEG-LA patent pools hold pretty much all the known-critical patents for video compression, so that's actually a pretty good real-world protection.

Unless it becomes popular, in which case the so-called "submarine" (actually they may not even be submarine) patents will come to the fore, and you'll have to pay.

If there were going to be submarine patents, they would have shown up when Xiph was selling the codecs... or their successors... or when AOL licensed them and used them in Winamp and AIM... or when Adobe licensed VP6 for Flash 8 video... or...

Encoders such as Theora, DVD rippers, and GUIs for these are largely separate things. Normally an end user never interacts directly with a Theora encoder, or with an H.264 encoder implementation such as x264. The article is about encoders, not the GUI applications that use them.

While I don't know much about MediaCoder, judging from screenshots on the site it's clearly a front-end that binds together these features -- ripping, decoding, processing (scaling etc.), and re-encoding, and