Lib-Ray video standard: using Google/On2’s VP8 video codec

“When I started working on a no-DRM, open-standards-based solution for distributing high-definition video on fixed media (‘Lib-Ray’), I naturally thought of Theora, because it was developed as a free software project. Several people have suggested, though, that the VP8 codec would be a better fit for my application. This month, I’ve finally gotten the necessary vpxtools and mkvtoolnix packages installed on my Debian system, and so I’m having a first-look at VP8. The results are very promising, though the tools are somewhat finicky.”

Now, my own take on this: it’s generally a rather good idea, but the spec is not well-defined enough.

Problem one: there is no file defined with a clear enough name that would immediately indicate that the media contains a Lib-Ray product. “meta.cnf” is just too ambiguous: I’ve already seen “.cnf” files used by multiple different applications and games, and “meta” is… well, clearly meta. A more clearly named file present on the media would make it very easy to determine whether there is a Lib-Ray product on the media or in another location. My suggestion would simply be a “Lib-Ray.media” file containing information such as which codecs are used; the number of titles, chapters, audio tracks and so on included; and, most importantly, the Lib-Ray specification version the media adheres to, so that the specification can be extended in the future without breaking support for older media.
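As a rough sketch of how a player or file manager could use such a file, here is what detection might look like, assuming a hypothetical “Lib-Ray.media” file in simple key/value form (the filename, keys and layout here are my own invention, not anything from the actual spec):

```python
import configparser

def read_libray_version(text):
    """Parse the text of a hypothetical 'Lib-Ray.media' file and return
    the declared spec version, or None if this is not Lib-Ray media."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    if "Lib-Ray" not in parser:
        return None
    return parser["Lib-Ray"].get("SpecVersion")

# Example contents a mastering tool might write:
SAMPLE = """\
[Lib-Ray]
SpecVersion: 0.3
VideoCodec: VP8
AudioCodec: Vorbis
Titles: 2
Chapters: 14
AudioTracks: en, fr
"""
```

A player that reads the version first can refuse, or degrade gracefully, when it sees a spec version newer than it understands, which is the whole point of putting the version in the identification file.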

Second problem: the specification defines a combination of VP8/Vorbis as mandatory. A much more flexible and future-proof approach would make VP8 and Vorbis support mandatory in the player for this version of the specification, and make the VP8/Vorbis combination the default, while allowing the use of different codecs at users’ own discretion. This would give users some flexibility when they have a specific need for a different combination of codecs, and it would make it simple to extend the spec in the future: you’d only need to bump the spec version and define the new set of codecs that must be supported in order to be compliant with that new spec.
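The codec-versioning idea could be modeled roughly like this; the table contents and function name are illustrative assumptions, not part of any published Lib-Ray spec:

```python
# Hypothetical mapping from spec version to the codec combinations a
# compliant player MUST support.  Media may use other codecs, but then
# playback everywhere is not guaranteed.
REQUIRED_CODECS = {
    "0.3": {("VP8", "Vorbis")},
    # A future revision would simply extend the mandatory set, e.g.:
    # "1.1": {("VP8", "Vorbis"), ("VP9", "Opus")},
}

def guaranteed_playable(spec_version, video_codec, audio_codec):
    """True if every player compliant with spec_version must be able to
    play media using this codec combination."""
    return (video_codec, audio_codec) in REQUIRED_CODECS.get(spec_version, set())
```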

Basically, there are plenty of things in the spec that are mandated outright, instead of mandating support for at least the values presented, making those the default, and allowing values outside that range in future versions of the spec.

I am also somewhat uncertain if the way the menus are done is all that good and flexible enough, but I suppose that remains to be seen. One thing that springs to mind at first is that I do not see anything that would allow for multiple full titles, each with their own subtitle and audio settings, on one media. Personally, I would like such a feature to be added to the spec: it would allow for the creation of collections where the titles are in one way or another related to each other, but do not share the same characteristics when it comes to audio and subtitling.

Hehe, more seriously, I appreciate that benefit we all get from open technologies, but I don’t see this gaining much traction. It just seems like the proposed media format doesn’t offer a compelling advantage over existing technology that’s been in use for years.

I’d be more interested in an open spec for running arbitrary unlocked apps on a media center PC. Media centers like TiVo, Xbox, etc. are powerful, if only they weren’t so locked down. Adding more advanced programming features would help set this apart from “just another DVD player”, for example.

Hehe, more seriously, I appreciate that benefit we all get from open technologies, but I don’t see this gaining much traction. It just seems like the proposed media format doesn’t offer a compelling advantage over existing technology that’s been in use for years.

I suppose the aim is for people who like to keep a physical copy of all their movies and whatnot in a format that will be readable far into the future, and for indie filmmakers who may not wish to pay, or may not be able to afford, all the various kinds of license fees needed to make Blu-Ray and/or DVD discs. The license fees can apparently be really costly, and as such a media format that is free of them seems like a rather good idea and will allow these people to use their budget for actual filming instead.

Hehe, more seriously, I appreciate that benefit we all get from open technologies, but I don’t see this gaining much traction. It just seems like the proposed media format doesn’t offer a compelling advantage over existing technology that’s been in use for years.

I suppose the aim is for people who like to keep a physical copy of all their movies and whatnot in a format that will be readable far into the future, and for indie filmmakers who may not wish to pay, or may not be able to afford, all the various kinds of license fees needed to make Blu-Ray and/or DVD discs. The license fees can apparently be really costly

Exactly.

I’m very conscious that Lib-Ray is almost certainly going to remain a niche product alongside proprietary standards like DVD and Blu-Ray. What I’m doing is embracing that, and trying to make it good at that role.

Thus I’m considering things like ease of producing Lib-Ray releases in small runs (because indie filmmakers and free culture projects often need to do that).

People are already working around the Blu-Ray problem. The Blender Foundation, for example, has released HD video on data DVD-ROMs. But there’s not really a defined standard for it, and that’s what I wanted to fix.

Well, I’d say there is another problem that will be a real elephant in the room, and it’s why Google refuses to indemnify VP8: the patents on H.264 are so wide and so numerous that frankly it would be extremely difficult to do much of anything with video without walking right into that minefield.

This is why I had hopes that developers would refuse to support HTML5 unless a FOSS codec was chosen as the baseline, as that might finally bring this thing to a head. Sadly, it looks like the lure of iMoney nixed that, and with Google refusing to step up to the plate, getting any kind of FOSS high-def format adopted by the mainstream (and getting that crucial hardware support) is going to be nearly impossible, as the hardware manufacturers won’t dare risk the wrath of MPEG-LA.

So I just don’t see any FOSS format gaining any real traction as long as the specter of being buried in lawsuits by MPEG-LA hangs over the manufacturers. Instead they will pay their MPEG-LA license fees, and since they have already paid, why not just use H.264? What we need is for the FSF and the EFF to get together with several developers of software like this and have it out in court with MPEG-LA, because as long as they can drop the patent bomb on any company at any time, nobody is going to risk going against them; their patent pool is just too vast.

After all, the ONLY way for this format to gain any real traction is to have players manufactured (probably by small companies at first) and then build grassroots support, but if MPEG-LA drops the patent bomb, nobody will touch this with a fifty-foot pole. And you can rest assured that there is NO WAY MPEG-LA is going to give up being the de facto standard for high-def video without a nasty legal battle; there is simply too much profit involved.

Well, I’d say there is another problem that will be a real elephant in the room, and it’s why Google refuses to indemnify VP8: the patents on H.264 <snip>

That load of crap again? I thought the trolling had moved on. Oh well. No doubt Google will indemnify users of VP8 once MPEG-LA indemnifies users of H.264. Until that time, any talk of indemnification is just a double standard and can safely be ignored.

So your answer is to ignore the elephant in the room, then? Because that is what you are doing. After all, I can buy an H.264 license, and the odds are virtually nil that I will have a thing to worry about as a manufacturer, because if H.264 is shot down it will cost MPEG-LA huge piles of money; therefore they have a stake in the fight. Google, on the other hand, other than using it some on YouTube, really doesn’t seem to care one way or another about WebM, so you would be on your own.

And this, of course, is a wonderful example of “welcome to fantasy island”, where everything that doesn’t follow your ideal MUST be some evil nasty dirty troll poo-poo head, instead of what it is, which is reality. Whether you choose to accept reality (MPEG-LA has a history of suing, has a metric buttload of money and could bury any small project in lawsuits) is of course up to you, but don’t pretend that just because you live on fantasy island, the rest of us live there too.

While I would love it if ALL software patents were abolished tomorrow, that simply isn’t the world we live in. Heck, if MPEG-LA wanted to get nasty they could simply start by crushing x264, since its developers didn’t use clean-room engineering to come up with their encoder. And if you honestly think that a company making the incredibly huge amounts that MPEG-LA makes on licensing is going to say “Oh, you’re FOSS? Well, that makes all the difference, let’s hold hands and dance through the flowers” if you threaten their business, I have a bridge you might be interested in. Whether you like it or not, until software patents are abolished they hang over every project like this one like a Sword of Damocles, and can be just as dangerous.

if H.264 is shot down it will cost MPEG-LA huge piles of money; therefore they have a stake in the fight. Google, on the other hand, other than using it some on YouTube, really doesn’t seem to care one way or another about WebM, so you would be on your own.

While Google doesn’t depend on the codec business as its main source of income, it would still hurt their business and income if MPEG-LA actually attacked VP8/WebM. If you had been following the trends you’d have noticed that Google is slowly building WebM/VP8 support into all their relevant services and applications, that they have various kinds of deals with hardware manufacturers regarding support for it, and so on.

This has been discussed at length before, even here on OSNews, but in short the situation seems to be thus: when Google acquired On2 they also acquired some patents that predate H.264 and which H.264 seemingly does violate, so both sides have ‘weaponry’ to aim at the other. The thing is, it would be a bad business decision on either side to actually do anything about it, as it could at worst lead to both of them losing large portions of their related patent portfolios. That would be devastating for MPEG-LA, as their main source of income is licensing fees from their codecs, whereas Google has other means of securing income.

In other words, the elephant may indeed exist, but it’s all tied up in chains and red tape.

No, my answer is to deny there is an elephant in the room, demand proof that there has ever even been an elephant in the room and point out that you’re not even in the same building as me, let alone the same room.

Well, I’d say there is another problem that will be a real elephant in the room, and it’s why Google refuses to indemnify VP8: the patents on H.264 are so wide and so numerous that frankly it would be extremely difficult to do much of anything with video without walking right into that minefield.

This is why I had hopes that developers would refuse to support HTML5 unless a FOSS codec was chosen as the baseline, as that might finally bring this thing to a head. Sadly, it looks like the lure of iMoney nixed that, and with Google refusing to step up to the plate, getting any kind of FOSS high-def format adopted by the mainstream (and getting that crucial hardware support) is going to be nearly impossible, as the hardware manufacturers won’t dare risk the wrath of MPEG-LA.

So I just don’t see any FOSS format gaining any real traction as long as the specter of being buried in lawsuits by MPEG-LA hangs over the manufacturers. Instead they will pay their MPEG-LA license fees, and since they have already paid, why not just use H.264? What we need is for the FSF and the EFF to get together with several developers of software like this and have it out in court with MPEG-LA, because as long as they can drop the patent bomb on any company at any time, nobody is going to risk going against them; their patent pool is just too vast.

After all, the ONLY way for this format to gain any real traction is to have players manufactured (probably by small companies at first) and then build grassroots support, but if MPEG-LA drops the patent bomb, nobody will touch this with a fifty-foot pole. And you can rest assured that there is NO WAY MPEG-LA is going to give up being the de facto standard for high-def video without a nasty legal battle; there is simply too much profit involved.

Although adoption has been a bit slow, there is a lot going on. Android, Chrome, Firefox and Opera all have support for WebM. A whole slew of SoCs used by phones and various appliances are going to be released over the next six months, and some existing ones have upgraded DSP code as part of their SDK to allow hardware-accelerated VP8 decoding as well. My Boxee Box even has WebM support. IMO, if this was such a big deal you would not have so many big-name companies from across various industries committing to WebM support.

Well, I’d say there is another problem that will be a real elephant in the room, and it’s why Google refuses to indemnify VP8: the patents on H.264 are so wide and so numerous that frankly it would be extremely difficult to do much of anything with video without walking right into that minefield.

If you can’t avoid a risk, then it doesn’t affect your choice. You just hold your breath and leap. As far as I can tell, the risks associated with Theora and VP8 are similar and both are better than any other available choice. H.264, on the other hand, is clearly off-limits.

After all, the ONLY way for this format to gain any real traction is to have players manufactured

Depends a bit on what the goal is. I don’t expect to put Blu-Ray out of business, just give the free-culture/indie-film community a better option than we’ve got now.

Casual viewers will probably be happy with DVDs, while serious movie fans will probably pop for an inexpensive HTPC anyway, which is becoming increasingly attainable. Lib-Ray is designed with the idea of being more convenient in that environment than Blu-Ray (or even DVD).

Meanwhile, anybody with an Android tablet or a desktop PC will probably be able to watch Lib-Ray without buying any new equipment. (At least I hope this will be true). Choosing SDHC as a hardware medium will help with that, as almost all of these devices already have a reader for it (and if they don’t, they have USB which allows a reader to be attached easily).

Well, I’d say there is another problem that will be a real elephant in the room, and it’s why Google refuses to indemnify VP8: the patents on H.264 are so wide and so numerous that frankly it would be extremely difficult to do much of anything with video without walking right into that minefield.

Google doesn’t indemnify WebM users for the same reason that MPEG-LA doesn’t indemnify H.264 users… no one licensing codecs (either commercial or OSS) does that.

Considering MPEG-LA makes people pay to use H.264, you would think that there might be a wee bit more outrage about them failing to indemnify their licensees, but nooooooo… everyone grills Google over it, even though they give their stuff away for free.

This is why I had hopes that developers would refuse to support HTML5 unless a FOSS codec was chosen as the baseline, as that might finally bring this thing to a head. Sadly, it looks like the lure of iMoney nixed that, and with Google refusing to step up to the plate, getting any kind of FOSS high-def format adopted by the mainstream (and getting that crucial hardware support) is going to be nearly impossible, as the hardware manufacturers won’t dare risk the wrath of MPEG-LA.

Google refusing to step up to the plate? What else do you want them to do for goodness sake…?

And why is hardware support so crucial – I still don’t get this. WebM doesn’t have to become a dominant format to “win” – it just has to exist. If companies want to use H.264 and pay for licences that is fine – why should I care? If I’m buying a little box to play netflix movies, why should I care what format they are in?

That isn’t at all the point of WebM – the point is to have a video distribution format for the web that can be used without having to pay for licensing. All it needs to accomplish that is get widespread browser support and a little time…

I just don’t see any FOSS format gaining any real traction as long as the specter of being buried in lawsuits by MPEG-LA hangs over the manufacturers. Instead they will pay their MPEG-LA license fees, and since they have already paid, why not just use H.264?

For almost all practical purposes, the manufacturers you are talking about are H.264 licensors.

There are about thirty of them: global industrial giants like Mitsubishi, Philips, Samsung and Toshiba.

H.264 fees are capped. The maximum bite is 20 cents a unit for a hardware encoder/decoder, dropping to 10 cents a unit on sales of more than 5 million units a year.
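To put numbers on that, under one plausible reading of those tiered rates (ignoring the real schedule’s royalty-free allowance for the first units each year and its overall annual cap), the math looks like this:

```python
def h264_device_royalty_usd(units):
    """Estimated annual H.264 device royalty, assuming the quoted tiers:
    20 cents/unit up to 5 million units, 10 cents/unit beyond that.
    (The royalty-free tier and annual cap are deliberately ignored.)"""
    tier1 = min(units, 5_000_000)
    tier2 = max(units - 5_000_000, 0)
    return (tier1 * 20 + tier2 * 10) / 100  # rates in cents, result in dollars

# A maker shipping 8 million decoders a year would owe roughly
# 5,000,000 * $0.20 + 3,000,000 * $0.10 = $1,300,000.
```

For the industrial giants above that is pocket change, which is exactly why they keep paying rather than fight.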

Which I’m writing as I go through the development of my third prototype design.

Now, my own take on this: it’s generally a rather good idea, but the spec is not well-defined enough.

Thus far, I agree. Hence, the “0” version numbers. 🙂

I’ll release a version “1” when I think I’ve got past that point.

Problem one: there is no file defined with a clear enough name that would immediately indicate that the media contains a Lib-Ray product. “meta.cnf” is just too ambiguous: I’ve already seen “.cnf” files used by multiple different applications and games

Not a problem. It’s not just the ‘meta.cnf’ file that defines it as a Lib-Ray volume, it’s the contents of the file, e.g.:

[Lib-Ray]
# Mandatory fields
LibRayVersion: 0.2
LibRayID: 1 # ID for the producer (sign up for this with lib-ray.org)
DiskID: 2 # ID for this disk (you assign this number)
Cover: cover.jpg
Title: Sintel

Second problem: the specification defines a combination of VP8/Vorbis as mandatory

Bear in mind that “VP8-only” constrains the disk/card format, not the player. The point is that a player developer need only support this format.

(And AFAIK, the only choices for free/open formats are VP8 and Theora at this time. Allowing proprietary standards like H.264 would defeat the purpose).

I am also somewhat uncertain if the way the menus are done is all that good and flexible enough, but I suppose that remains to be seen.

Well, I’ll be revisiting that issue. The thing is, I don’t like the idea of creating a whole new menu format when there are perfectly good free software HTML engines and lots of developers already know how to write HTML.

One thing that springs to mind at first is that I do not see anything that would allow for multiple full titles, each with their own subtitle and audio settings, on one media.

That’s definitely going to happen, though. I’m still considering the best way to organize it — and at the moment I’m trying to get the basics dealt with first.

it would allow for the creation of collections where the titles […] do not share the same characteristics when it comes to audio and subtitling.

That’s a good point that I haven’t given a lot of thought to. However, I think it gets covered by using the already-specified HTML5 Javascript approach, where the subtitle and audio options are properties of the video objects.

What’s wrong with some people? Will they use something just because it’s open and royalty-free, despite it being clearly unfit for the purpose? Despite what the Stallman crowd will tell you, Theora achieves an MPEG-1/MPEG-2 level of compression, so it’s not fit for HD streams, because you are essentially looking at 56 Mbit/s bitrates. You may have it as an option (like Blu-Ray’s M2TS allows MPEG-2), but this new Lib-Ray format also has to have a proper codec, like VP8. The fact that the guy started the Lib-Ray project without having secured a proper codec is dumb.

Theora was glorified as a viable format back when it was the only interframe lossy royalty-free format in existence, and the FSF needed to propose an alternative to H.264 to the HTML5 committee. So Theora was dragged in to fulfill a mission it was clearly unfit for. Pure politics, in other words. But for some reason all this pro-Theora campaigning has led many people to believe Theora is the hottest format in town.

What’s wrong with some people? Will they use something just because it’s open and royalty-free, despite it being clearly unfit for the purpose?

Linux started out being clearly unfit for the purpose of a high performance computing kernel.

What’s wrong with people choosing a format for their own purpose? It’s not like they’re forcing YOU to use Theora. One little project using Theora is not going to destroy the future hope of MPEG-4 and your HD needs.

At the time the project was started, Theora apparently was the least awful option. Now that VP8, which is a superior format in every respect, is available, the guy has chosen to switch to it. So, what’s the issue?

It would have been great if Xiph were as good at designing video codecs as they are at designing audio codecs, or if the MPEG-LA weren’t a bunch of dicks that would patent the concept of moving pictures if they could. But sadly, this isn’t the case, so people had to find a less-than-optimal compromise until Google bought and open-sourced a true competitor to H.264.

Wow. Thanks for the interest! And thanks to WereCatf for letting me know about this thread. I can share some basic info on Lib-Ray here, and I’ve already posted some specific responses up-thread.

I started Lib-Ray because I need it myself. We’re planning to release a pilot for our free culture animated video series “Lunatics” ( http://lunatics.tv ) later this year or early next. Like a lot of people these days, we are funding this project through Kickstarter and pre-sales of the videos. For standard-definition, what we’re selling is obviously going to be all-region DVDs (you can opt-out of using DRM with DVDs).

But of course, we’re actually producing the film in high definition. So how can we offer a high-definition version? Downloads are certainly possible, but for this purpose, I want something you can put on your bookshelf. And right now, the only option is Blu-Ray. Which sucks, because Blu-Ray is all kinds of proprietary. That’s not for us.

So, finding no really good solution existed, I decided to look into making one. Karl Fogel of QuestionCopyright.org suggested the name “Lib-Ray” as a nice pun, and I decided to run with that in March, 2011.

In April, I presented an early prototype at the Texas Linux Fest, and I got some more feedback there and through comments at FSM. One of the suggestions was to switch to VP8 rather than Theora as the video codec. At the time, I wasn’t sure VP8 really qualified as a free format, but since then I’ve been convinced that it is legally as acceptable as Theora. And technically, of course, VP8 is very, very good.

H.264 is ruled out entirely by the patent problems — why go to the trouble to create a free standard and then saddle it with such problems?

However, I do have several other projects (including actually producing “Lunatics”), so I didn’t get to installing the new Matroska + VP8 toolchain until just last month (March 2012). I’m still working on creating a prototype, from the masters for “Sita Sings the Blues”. I think I’ve got the video file format settled, and I’m now working on the menu system.

Although I do have some programming skills, I am not primarily a programmer, and I wanted to keep the programming part of this project as limited as possible. So my original choice was to target existing Webkit-based HTML5 browsers for playback.

That’s starting to look unrealistic, though.

So another part of this revised start on Lib-Ray is that I’m going to bite the bullet and develop a reference-implementation player. This I will write in Python, using Gstreamer and Webkit and their respective Python APIs. From what I’ve seen so far, that seems to lie within my skill set.

I’d like to implement the video playback modal switching in Javascript, though, as in my version 0.2 prototype that I made last April. That will involve extending the Javascript engine used in Webkit — although what I need is already specified in the current WHATWG standard:

(But as yet, no browser implements the needed features, which are the ability to switch embedded audio and subtitle tracks for playback).
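To make the intended behavior concrete, here is a toy Python model of the track switching I want; it mimics the shape of the WHATWG audioTracks/textTracks idea but is purely my own sketch, not the browser API:

```python
class Track:
    def __init__(self, kind, language):
        self.kind = kind          # "audio" or "subtitles"
        self.language = language  # e.g. "en", "fr"
        self.enabled = False

class VideoModel:
    """Toy stand-in for an HTML5 media element with switchable tracks."""
    def __init__(self, tracks):
        self.tracks = tracks

    def select(self, kind, language):
        """Enable the matching track and disable its siblings of the
        same kind, which is essentially what choosing from a menu does."""
        for t in self.tracks:
            if t.kind == kind:
                t.enabled = (t.language == language)

video = VideoModel([Track("audio", "en"), Track("audio", "fr"),
                    Track("subtitles", "en"), Track("subtitles", "fr")])
video.select("audio", "fr")       # what the menu Javascript would trigger
video.select("subtitles", "en")
```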

That part could be tricky for me. I could probably use some help on that from interested programmers — might even be able to pay a commission or bounty on that part (see below). At the moment, though, I’m not even sure which project controls the Javascript implementation or if I need to choose that myself.

I’m researching that, however, and plan to have a proposal finished in early May. Parts of it are being released in Free Software Magazine as part of my column:

Still in the queue are articles on the Matroska container format and how to handle Languages/Localization. I’m currently working on the subtitle format (going to be SRT now, in separate files), menu design and implementation, and other details, such as how to physically package the cards.

Of course, the website ( http://lib-ray.org ) is a little out of date now, and I’ll be overhauling that once I’ve ironed out the kinks in the 0.3 prototype.

I’m planning also to launch a Kickstarter campaign in May (maybe May 4 — coinciding with FSF’s “Day Against DRM”), which would fund me for the time I’ll need to spend on the development. The goal would be to reach a version “1.0” standard that we can actually use for “Lunatics” and other videos which we’ll probably make available. And of course, a reference player implemented with free software for GNU/Linux platforms.

Afterward we’d offer mastering services for a small fee (generating revenue to cover what I hope will be the small cost of maintaining the standard), and we’ll provide free tutorials on how to do it yourself.

Rewards would be stuff like “Sita Sings the Blues” or the Blender Movies in Lib-Ray Format on the low end and ready-built HTPCs with Lib-Ray playback software installed (i.e. players) on the high end.

Of course, I do not expect to put Blu-Ray or Sony out of business with this project. I doubt we’ll ever have the volume to make practical embedded players for sale in your local discount department store (not even going to try to beat $75 Blu-Ray players for the mass market).

I just want to have an easy-to-use, marketing-friendly, free-software, open-standards, non-DRM, HD video format available. These will play on computers, including portable devices like Android tablets, and most importantly on Home Theater PC systems (which will benefit from the high-quality high-definition video and high-fidelity audio). The way the industry is going, I suspect that such HTPCs will be a lot more affordable by the time we release. Currently, it looks like a minimal “Lib-Ray player” would be an HTPC in the $500 range, but I need to fine that down a bit.

Cheaper systems based on SoC devices intended for the mobile market might be available before long — it looks like maybe ARM Cortex 9 quad cores or NVIDIA Tegra 3 might be able to handle it. And of course, Google has released code for hardware acceleration implementations. AFAIK, no one is marketing hardware based on that yet, but if they do, then the price will come down further, because decoding won’t have to be done in software.

Wow. Thanks for the interest! And thanks to WereCatf for letting me know about this thread.

You’re welcome. Good to see that you’re not being discouraged easily.

I’m researching that, however, and plan to have a proposal finished in early May.

I have to keep that in mind; I would love to see the finished spec. Though I do not promise not to point out any flaws if I see any.

I’m currently working on the subtitle format (going to be SRT now, in separate files)

This I want some more clarification on: is there some specific reason for those files to be separate instead of inside the Matroska-container? I personally always place cover-images, cast-details, description of the title, all the various subtitles etc. inside Matroska-containers.

Also, do you plan to support italics, bold and non-white text in subtitles? Those are all very useful properties and I feel it would be a rather important shortcoming if the spec didn’t support those. The issue, though, is that such properties inside an .SRT would be non-standard and thus those wouldn’t show up properly in e.g. VLC Player. Perhaps it might be worth it to research possible alternatives to .SRT?
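For reference, the markup in question is the informal HTML-like tagging that many players recognize inside SRT cue text; a hearing-impaired cue might look something like this (the cue content is just an invented example):

```
12
00:01:02,000 --> 00:01:05,500
<i>[wind howling]</i>
<font color="#ffff00">ANNA: Close the door!</font>
```

The <i>, <b> and <font> tags are a de facto convention rather than part of any written SRT standard, which is exactly why their rendering varies from player to player.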

Of course, I do not expect to put Blu-Ray or Sony out of business with this project. I doubt we’ll ever have the volume to make practical embedded players for sale in your local discount department store (not even going to try to beat $75 Blu-Ray players for the mass market).

Before the spec is finished there really is no point in worrying about such things at all, but producing a small no-hassle media player and emulator-gaming console that also happens to support Lib-Ray wouldn’t be terribly far-fetched, as long as it’s actually polished. I have actually been laying out various kinds of plans for such hardware and software for some time now, but since there is no Kickstarter-like service here, I haven’t been able to actually do anything about it.

Currently, it looks like a minimal “Lib-Ray player” would be an HTPC in the $500 range, but I need to fine that down a bit.

A minimal device capable of playing Lib-Ray would be closer to $60, tbh, if we include the cost of producing a simple plastic container to hold the electronics. E.g. the Raspberry Pi would hardware-wise be perfectly capable of doing that up to 1080p resolution, as the SoC does support VP8 decoding; the only limiting factor is the drivers, which do not at the moment have support for that. I.e. if you really wanted a minimal device you’d just need to create something similar to the RPi with a SoC that supports VP8 decoding in hardware, and make certain that such capabilities are exported through GStreamer and/or OpenMAX.

$500 for a player is definitely way, WAY too much, and if you’re aiming for such a price tag you may as well just shoot yourself in the foot now and save the hassle.

Cheaper systems based on SoC devices intended for the mobile market might be available before long — it looks like maybe ARM Cortex 9 quad cores or NVIDIA Tegra 3 might be able to handle it.

It seems you’re not entirely up to snuff when it comes to SoC systems. I do not mean that as an insult; I am merely saying that you do not seem to fully understand what even a single-core SoC is capable of nowadays. A quad-core Tegra 3 would definitely be overkill if you merely want a media player. Of course, if you intend the system to be usable as a desktop PC too, then it might make sense, but then you’d be targeting an entirely different market sector altogether.

I’m currently working on the subtitle format (going to be SRT now, in separate files)

This I want some more clarification on: is there some specific reason for those files to be separate instead of inside the Matroska-container?

Easy-patching. I mentioned that in the SDHC article.

Also — transparency to the end user.

I personally always place cover-images, cast-details, description of the title, all the various subtitles etc. inside Matroska-containers.

It’s true that Matroska supports that, but it makes it more opaque to the end user.

Also, do you plan to support italics, bold and non-white text in subtitles?

Those are the advantages of the Kate (“OggKate”) format that I originally specified. In fact, it can handle vector graphics as well.

However, SRT is a far more popular format, and you’re the first person I’ve heard from who sees the benefit of the extra features in Kate (though admittedly, that has been a small sample so far).

Will give it further thought.

“E.g. the Raspberry Pi would, hardware-wise, be perfectly capable of doing that up to 1080p resolution, as the SoC does support VP8 decoding; the only limiting factor is the drivers, which do not at the moment support it.”

Interesting claim. Can you back it up?

The queries I made earlier in tech forums led me to believe this was not a realistic expectation. In fact, it appeared that no VP8 hardware support existed in any SoC that could also handle the 1920x1080x30fps, 12–24 Mb/s video throughput.

However, I’d be very happy to see it work. A player based on a low-cost open hardware platform like RPi would be extremely attractive.

I suspect writing the support code for it is out of my depth, however. I would need to find someone capable of doing that.

“However, SRT is a far more popular format, and you’re the first person I’ve heard from who sees the benefit of the extra features in Kate (though admittedly, that has been a small sample so far).

Will give it further thought.”

It just happens that in most cases such functionality isn’t taken advantage of in any meaningful way, but I have also seen cases where it has been used to at least some advantage. For example, subtitling for the hearing-impaired could present sound effects and the like with different characteristics than speech, making them easier to follow and clearer.
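To make that concrete, SRT does have a set of de-facto HTML-like formatting tags (`<i>`, `<b>`, `<font color>`); they are a convention with varying player support, not part of any formal SRT specification. A minimal sketch of a cue styled that way:

```python
# Sketch: building an SRT cue that styles a sound effect differently
# from dialogue. The <i>/<b>/<font color> tags are a de-facto
# convention with varying player support, not a formal part of SRT.

def srt_cue(index, start, end, text):
    """Format a single SubRip cue; timestamps are 'HH:MM:SS,mmm' strings."""
    return "%d\n%s --> %s\n%s\n" % (index, start, end, text)

# Hearing-impaired style: the sound effect in italic yellow, speech plain.
effect = '<i><font color="#ffff00">[door slams]</font></i>'
speech = "Who's there?"

cue = srt_cue(1, "00:01:02,000", "00:01:04,500", effect + "\n" + speech)
print(cue)
```

A player that ignores the tags would still show readable text, which is part of why SRT degrades so gracefully compared with richer formats like Kate.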

“Interesting claim. Can you back it up?

The queries I made earlier in tech forums led me to believe this was not a realistic expectation. In fact, it appeared that no VP8 hardware support existed in any SoC that could also handle the 1920x1080x30fps, 12–24 Mb/s video throughput.”

http://wiki.webmproject.org/hardware/arm-socs lists some of the SoCs that are already shipping and support VP8. More of them are coming in the future; I can’t remember which manufacturer it was, but someone was claiming their next chip would do 4k-resolution decoding. I.e. there’s quite good support now, and it’s only getting better in the near future.

The RPi uses the Broadcom chip mentioned on that list. The list does not mention the maximum bandwidth the SoCs can handle, since such details are usually only given after you’ve signed an NDA, but at least in the case of the BCM2835 I think I saw a Broadcom engineer claim it can do up to 24 Mb/s. (I could be remembering wrong, so don’t take it as fact, but that’s how I remember it. You could just try asking them if you’re interested.)

“However, I’d be very happy to see it work. A player based on a low-cost open hardware platform like RPi would be extremely attractive.”

The SoC is not open hardware (none of them are), so that could pose a problem. I am not aware of a single good-quality SoC that wouldn’t require you to sign an NDA in order to gain full programming information and tools.

“I suspect writing the support code for it is out of my depth, however. I would need to find someone capable of doing that.”

I doubt that would be an issue once you settle on the spec; there’s always someone out there willing to help. I have done some coding here and there, including a few kernels and such, but I’m really rusty these days and better at just bitching about things and trying to appear smarter than I actually am, so I likely wouldn’t be able to help much myself.

“E.g. the Raspberry Pi would, hardware-wise, be perfectly capable of doing that up to 1080p resolution, as the SoC does support VP8 decoding; the only limiting factor is the drivers, which do not at the moment support it.”

This doesn’t necessarily refute what you say, but this is what it says on the Raspberry Pi wiki at e-Linux:

GPU

The RaspberryPi appears to handle h264 1080p movie from USB to HDMI at least 4MB/s.

The Admin “JamesH” said it would handle “basically 1080p30, high profile, >40Mb/s.” (5MB/s) in h264

And about WVGA(480p30) or 720p20 in VP8/WEBM

Based on that, it can’t do it. But I guess you mean this can be fixed with software? Put another way, could you do it?
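For what it’s worth, the figures in that quote mix megabits per second (Mb/s) and megabytes per second (MB/s), which is easy to trip over; a quick sanity check of the numbers (the 24 Mb/s upper bound is the one mentioned earlier in this thread):

```python
# Sanity-checking the bandwidth units: the wiki quote mixes megabits
# per second (Mb/s) and megabytes per second (MB/s).

def mbit_to_mbyte(mbit_per_s):
    """Convert megabits/s to megabytes/s (8 bits per byte)."""
    return mbit_per_s / 8.0

# JamesH's ">40Mb/s" h264 figure is indeed the quoted 5 MB/s:
print(mbit_to_mbyte(40))

# A two-hour film at the 24 Mb/s upper end of the Lib-Ray range works
# out to roughly 21.6 GB, which matters for choosing SDHC card sizes:
size_gb = 24e6 / 8 * 2 * 3600 / 1e9
print(round(size_gb, 1))
```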

“Based on that, it can’t do it. But I guess you mean this can be fixed with software? Put another way, could you do it?”

I do not know if the people behind the RPi have managed to fix the issue, but when I was still lurking around the forums, the problem with VP8 decoding was that it simply was not implemented in hardware; it was all done in software. I assume the 720p20 mentioned there is based on software decoding.

EDIT: I cannot find any suitable WebM video to download anywhere to test on my ARM devices. Urgh.

“A minimal device capable of playing Lib-Ray would be closer to $60, tbh…”

You seem to know a lot about this, so I’m not trying to doubt your figure, but I’m currently looking for living-room-suitable hardware capable of running a Linux media center, and I haven’t found anything for less than $200 and then some, and even that’s for a very low-end system.

The main thing I’m looking to do is stream video over the network and hook up some kind of remote.

The only stuff I’ve found in the sub-hundred-dollar range is mass-produced proprietary devices. I could probably buy something and jailbreak it, but I don’t like *having* to jailbreak it and void the warranty (though I’ll listen to anyone’s suggestions on this, too). The Raspberry Pi is intriguing, but it’s neither proven nor available yet.

I’d earnestly like to hear what else you could be referring to under a $100 price point that has similar capabilities.

I have also been in the market for a suitable HTPC based on the ARM architecture, but more or less all the currently shipping ones are either aimed at tinkerers or try to pack too many things into the package, making them too expensive. The RPi would almost work for me, as it is supported these days by the XBMC media center, but the lack of optical audio output breaks the deal.

That said, I do not know of any currently shipping ARM HTPC below the $100 price point other than the RPi. I am merely saying that it is entirely possible to create one; it’s just that no one seems all that interested in doing so.

As such, I have only three recommendations if you want an HTPC on the cheap: the Apple TV 2 (can only do 720p, not 1080p), the RPi (will take a few more months to get), or waiting for someone to realize there is demand and start manufacturing something. The Apple TV 3 can do 1080p, but I do not know whether it has been jailbroken yet or whether XBMC even supports it.

I have actually tried e-mailing various manufacturers to hint that there is interest in sub-$100 HTPC devices, but none of them have so much as bothered to respond. I can’t start manufacturing such devices myself either, because I have neither the required legal expertise nor the financial resources; yet another reason why having Kickstarter available here in Europe would be great.

Oh I see. It sounded to me like Digitante was talking about hardware that consumers could put together today by themselves, which is rather different.

Yeah, I would like more manufacturers to sell unlocked ARM hardware too. I have purchased numerous proprietary systems with the intent of reflashing them with open firmware, but that’s a really tedious game to play, especially since we end up financially supporting a vendor who sells closed products. There simply aren’t enough sources of genuinely unlocked hardware at reasonable cost. I really hope the Raspberry Pi can start to change that.

“Oh I see. It sounded to me like Digitante was talking about hardware that consumers could put together today by themselves, which is rather different.”

In fact, I was.

And my $500 estimate was based on the hardware needed to do VP8 decoding at 1920x1080x30fps entirely in software, because my information said that nothing had yet hit the market that had hardware decoding for VP8 and could also output at that video rate (some portable devices have hardware decoding support for VP8, but they target smaller portable displays: the chip can do it, but the mainboard can’t).

But if it’s possible to do better, even if it’s only cost-effective for, say, 100 units, I’d be interested (I might be able to interest that many people if I did this through a Kickstarter). But I’d have to be CERTAIN: you can’t promise 100 people a $100 player and then find out it costs $200 to make them!

(If it takes 1000s of units to be feasible, then it’s probably out of my reach).

It sounds like I should do some asking on the Raspberry Pi forums to see if I can find out more.

“But if it’s possible to do better, even if it’s only cost-effective for, say, 100 units, I’d be interested (I might be able to interest that many people if I did this through a Kickstarter). But I’d have to be CERTAIN: you can’t promise 100 people a $100 player and then find out it costs $200 to make them!”

I am not confident about the $100 price point for 100 units, or even 1,000 units. However, if you could make that promise, I’d be up for buying two of them myself. I could probably find several Linux user group members who would, too.

What about Dirac? Its drawbacks are known: no widespread use and no SoC support (though there is hardware that supports Dirac).

On the other hand: it is optimized for high resolutions and up to 16-bit color depth (!), it has a lossless mode, it is royalty-free, it has been proposed to SMPTE as VC-2, it can be used inside a Matroska container, it has good software support (via libschroedinger/GStreamer), and its decoding performance is not that bad on modern hardware.

Good point; I had completely forgotten about Dirac. From what little I have seen and experimented with myself, Dirac produces very high quality and compresses quite well. Dirac might be The Choice(TM) for 4k resolutions and up, but for practical reasons 1080p and below is still better off with VP8, simply due to hardware decoding support. Manufacturing a player that can do Dirac with software decoding at full speed would place too high demands on the hardware, meaning the price would be way too high for general market acceptance; not to mention that current dual-core ARM devices, i.e. the bulk of mobile devices, wouldn’t be able to play back Dirac content even at 1080p.

As such, I propose a solution, temporary or not: a Lib-Ray Standard Edition would define VP8 as the default codec, and a Lib-Ray HD would use Dirac. Most players and media would then be Lib-Ray Standard Edition, and Lib-Ray HD could be used for the more professional-end stuff.