
An anonymous reader writes "Three FFmpeg developers — Ronald Bultje, David Conrad, and x264 developer Jason Garrett-Glaser — have written the first independent, free implementation of a VP8 video decoder. Benchmarks show that it's as much as 65% faster than Google's official libvpx. The announcement also gives a taste of what went into the development process, as well as the optimization techniques used. Currently it's only fully optimized on x86, but ARM and PowerPC optimizations are next in line for development."

As someone who spends most of their workday implementing someone else's specifications, I know exactly where they are coming from. I honestly cannot tell whether people are bad at writing specs because they're simply lazy, or whether they need to be trained to document their file formats completely.

When I think back to my university days, we never really learned how to write a specification, and I wonder whether that wouldn't be a course worth teaching. Perhaps you could have the students write a program that outputs a set of complex information in some format, and then have them write an end-to-end specification for both reading and writing that format.

My favourite moments are when you realise that the current implementation not only doesn't follow the spec but directly contradicts it (e.g. a "bool" that can be TRUE, FALSE, "", "null", or "nan").

Well, since TRUE and FALSE are uppercase (suggesting preprocessor definitions or value constants), it's obvious that "bool" was not meant to be a unique type in this hypothetical language, but was instead a typedef for an integer type. Nothing a coding standard can't rectify.

One of the best ways I have seen to avoid that kind of ambiguity is in the PNG specification (RFC 2083). By not only explaining how something should be done but also expressing the reasoning behind why that method was chosen, the person implementing the specification can follow the same reasoning, and in cases where the "how" is not specific enough, the "why" will make it evident. One of the worst is probably RFC 1034, with its many instances of "I think it should be done this way, but I refuse to say whether it is important or why I have chosen this method. Here is some pseudocode in a language that I just invented."


By not only explaining how something should be done but also expressing the reasoning to why this method has been chosen

Yes - a truly excellent thing to do. They should do the same thing with laws - define the law as they currently do but also provide a justification for the law. That way the law can be challenged in the future when the justification no longer holds. In addition, no one will ever misinterpret the meaning of a law and use it for purposes for which it was not designed.

That is what already happens in some European countries. It is part of a judge's job to interpret laws while taking the lawmaker's intent into account. Laws have accompanying documents that detail that intent (and you can also tell from the history and press coverage of the time).

Now back to format specifications: code makes for a very poor specification. While code can implement a specification, it generally does so in an unorganized fashion. Specifications should be clear, having no ambiguities or vagueness. Code is not so clear. And, as the parent mentioned, it generally does not communicate reasoning to the reader.

Mathematically based definitions are great - they are both clear and organized. Tools such as lex/flex/yacc/bison combine code with mathematical definitions to implement such specifications. The ideal format specification would include a mathematical definition along with reasoning explaining the design decisions.
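To make the lex/yacc point concrete, here is a minimal sketch (my own toy example, not from any real spec): a format is first defined as a small grammar, and the parser is then written so that each function mirrors one production of that definition.

```python
# Toy format defined mathematically as a grammar, then implemented so the
# code mirrors the definition line by line:
#
#   value := INT | list
#   list  := '(' value* ')'

def tokenize(text):
    # Split on whitespace, treating parentheses as standalone tokens.
    return text.replace('(', ' ( ').replace(')', ' ) ').split()

def parse_value(tokens):
    tok = tokens.pop(0)
    if tok == '(':                      # list := '(' value* ')'
        items = []
        while tokens[0] != ')':
            items.append(parse_value(tokens))
        tokens.pop(0)                   # consume the closing ')'
        return items
    return int(tok)                     # value := INT

def parse(text):
    return parse_value(tokenize(text))

print(parse("(1 (2 3))"))  # [1, [2, 3]]
```

Because each function corresponds to one grammar rule, the specification and the implementation can be checked against each other rule by rule, which is exactly what lex/yacc-style tools automate.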

In design theory there is the notion of explicit and implicit knowledge. Unless you are very reflective and a very experienced designer, you will not be able to make all your knowledge and your whole decision process explicit.

Clearly written code with no 'clever' tricks can be part of a decent spec, but never all of it. Rationale matters as does an indication of possible future versions (is there a version field because we EXPECT to run against shortcomings or is it just future proofing).

Even with code, ambiguity remains. If there are numerically tagged fields, MUST they be included in numerical order, or did this code just happen to do that? When the code writes that struct onto the wire, is it big endian or little endian?
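The endianness ambiguity is easy to demonstrate. A quick sketch using a hypothetical on-wire record (a 16-bit tag followed by a 32-bit length; the field layout here is invented for illustration):

```python
import struct

# The same struct serializes to different bytes depending on byte order,
# which is exactly the kind of detail code alone leaves ambiguous.
tag, length = 0x0102, 0x0A0B0C0D

big = struct.pack('>HI', tag, length)     # big endian, standard sizes
little = struct.pack('<HI', tag, length)  # little endian, standard sizes

print(big.hex())     # 01020a0b0c0d
print(little.hex())  # 02010d0c0b0a
```

A reference implementation that happens to run on a little-endian machine and uses native byte order pins down nothing; a spec sentence ("all multi-byte integers are big endian") settles it permanently.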

Not at all; mathematicians figured out how to be precise many, many years ago. Admittedly Dijkstra helped a lot, but mathematics has been able to produce exact specifications for far longer than programming has existed.

Where I went to school we actually did have a course in business writing that included writing specifications, proposal requests, etc. I didn't enjoy it at the time but it has come in useful on many occasions.

When I think back to my university days, we never really learned how to write a specification, and I wonder whether that wouldn't be a course worth teaching.

At WVU we had Software Engineering, which was pretty much entirely about writing specs, and is required for all CS majors.

Most people think we're just a party school (which, for the most part, is true), but the more I hear about other universities, the more I realize that our computer science and engineering programs are probably some of the best in the country.

That's why I like Perl so much. Anything can be a bool, and it's easy to understand: if it is "something" it is true; otherwise (0, "0", the empty string, or undef) it is false. It's the sort of thing that drives me crazy in places like PHP or JavaScript, where you suddenly need "===" operators or crazy tests for something that should be completely obvious.
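Python follows a similar anything-can-be-a-bool philosophy (the exact falsy set differs from Perl's, but the idea is the same), and a quick sketch shows how predictable it is:

```python
# In Python, empty/zero-like values are false, everything else is true.
# No special === operator or type-juggling tables needed.
falsy = [0, 0.0, "", [], {}, None]
truthy = ["0.0", "false", [0], -1]   # non-empty strings/lists are true

print([bool(v) for v in falsy])   # all False
print([bool(v) for v in truthy])  # all True
```

Note that the string "0.0" is true here simply because it is a non-empty string; the rule never looks inside the value.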

Abolishing software patents will take years. Most of the short-term goals are a waste of time, or a distraction by companies that don't really want to end the problem, but WebM is a project that would have a big impact, and has a good chance of succeeding. Great to hear that Xiph continues to support it!

File formats and compatibility are the biggest problem caused by software patents. They're how monopolies get too powerful, and they're how companies with people-friendly terms get locked out of commercial software development. (Commerce isn't the only valid form of software development, but it's important for the sustainability of a project.)

I usually rip my DVDs to ~1.2GiB Xvid avi files at native res using mencoder (not reencoding the audio), and have been doing this for many years. Does anyone know what combination of muxer and audio/video codecs is preferred nowadays? I'm thinking of using Matroska with Vorbis for audio but I'm completely lost as to what video codec to use. As for which tools to use, I find most of what I need in the Debian repositories but I'm open to suggestions.

Probably not. x264 has a number of innate visual advantages when compressing video that was previously MPEG compressed. VP8 generally seems to win on raw uncompressed video in the races I've seen.

And the problem is, where do you find raw, uncompressed video? About the only place you will find it is if you are transferring an analog source to a dedicated internal capture card. Virtually everything else uses some form of MPEG or H.264 compression.

Agreed. The fact that VP8 generally does hold its own side by side with x264 is a pretty impressive testament to the codec.

But who cares if VP8 is technically a better codec if it doesn't actually produce superior results with the source video we work with? If it cuts CPU for decoding while offering on-par quality, that would be a solid advantage.

If it produces adequate results, then I for one will use it simply because of the stance it takes with regards to patent encumbrance. To me that is perfectly sufficient, because I'm not trying to create any HD video... yet? I don't want to get into building disk farms. Anyway, I shouldn't have to worry about things like whether the camera that says pro on it has a professional H.264 license associated with it, or whether the video editing software whose name ends in pro has a professional H.264 license... but last I heard, there were rather high-profile examples of each indeed not having same. This is not something that I want to have to worry about. Indeed, I would say that an intellectual property law system which permits this to become something you have to worry about is broken as designed.

last I heard, there were rather high-profile examples of each indeed not having same.

Of course there are.

You owe MPEG LA nothing until you are distributing product on a commercial scale - and by commercial I mean the premium cable channel with a minimum of 100,000 paying subscribers. The broadcast station serving 500,000 households.

H.264 royalties on sales of your 30 minute Star Trek fan-flick max out at 2 cents a disk or download. Wake them when you have a check for $20,000 to deliver.
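A back-of-the-envelope check of the numbers in that comment (the 2-cent cap and the $20,000 figure are the poster's claims, not the actual MPEG LA rate card):

```python
# How many discs/downloads would it take before you owed MPEG LA $20,000,
# at the claimed 2-cent-per-unit royalty cap?
royalty_per_unit = 0.02   # dollars per disc or download, as claimed above
check_size = 20_000.00    # dollars

units = check_size / royalty_per_unit
print(f"{units:,.0f} units")  # 1,000,000 units
```

In other words, the poster's "wake them when you have a check for $20,000" corresponds to a million units sold.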

And the problem is, where do you find raw, uncompressed video? About the only place you will find it is if you are transferring an analog source to a dedicated internal capture card. Virtually everything else uses some form of MPEG or H.264 compression.

You don't. I work in the TV industry and don't get much access to uncompressed video, not from a camera anyway. In the SD world, the camera output gets dropped down to a 270 Mbit stream (165 Mbit of video data), which uses YUV, compressing the chroma, and uses interlacing.

If you intend to edit the HD content, you might like it to be more than 100Mbps. For example, the default settings for ProRes HQ 50i content result in variable-bitrate files up to 185Mbps, and DNxHD 50i is 184Mbps (60i content differs, with DNxHD at 220Mbps [avid.com]). AVC-Ultra (and the AVC-Intra derivative) is up to 200Mbps.

There are other sub-100Mbps options, such as XDCAM HD422 @ 50Mbps, but it really depends on your production: for high-end natural history and drama, I'd want as much more than 100Mbps as my system could handle.

If you intend to edit the HD content, you might like it to be more than 100Mbps.

I might; however, corporate strategy has settled on DVCPRO-100 as the base codec. As it's news, a lot of the footage coming in has been squished through a satellite in any case, or at very best been shot on a 35 or 50 Mbit camera.

> Probably not. x264 has a number of innate visual advantages when compressing video that was previously MPEG compressed. VP8 generally seems to win on raw uncompressed video in the races I've seen.

You are right and wrong at the same time. x264 has many psycho-visual optimizations (these lower PSNR) that make it look better, while VP8 is optimized for PSNR, which doesn't necessarily look good. If you compare x264 in baseline profile (not what you'd use for a DVD rip) and VP8 at best settings, VP8 might beat x264 at PSNR, but it'll still look worse.

Now, for recompression: This is basically misinformation, based on a comment made by the VP8 devs in the MSU test, where VP8 did relatively better on the only uncompressed source. Of course this source (moving calendar) also has very peculiar properties with regards to motion compensation, which is more likely the reason for different performance. MPEG compressed content doesn't actually bias against VP8 more than against H.264, since VP8's block transform is actually more similar to MPEG2's than that of H.264.

Long story short: Until VP8 gets psycho-visual optimizations, it'll always look worse than x264 at the same bitrate. Once it does get them, it might be possible for VP8 to reach x264 quality in baseline profile. Baseline is only used for phones and the like, though.
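Since the whole PSNR-versus-looks argument hinges on what PSNR actually measures, here is a minimal sketch of the metric for 8-bit samples (pure Python, my own toy data): it is just a log-scaled mean squared error, so an encoder can chase it without any regard for what the eye prefers.

```python
import math

def psnr(reference, distorted, peak=255):
    # Mean squared error between the two sample sequences...
    mse = sum((a - b) ** 2 for a, b in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float('inf')   # identical signals: PSNR is unbounded
    # ...expressed in decibels relative to the peak sample value.
    return 10 * math.log10(peak ** 2 / mse)

ref = [52, 55, 61, 66, 70, 61, 64, 73]
enc = [54, 55, 60, 66, 69, 60, 66, 71]
print(round(psnr(ref, enc), 2))  # ~45.4 dB
```

Psycho-visual optimizations deliberately add error where the eye won't notice (lowering PSNR) to spend bits where it will, which is why a PSNR-tuned codec can "win" the number and still look worse.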

Probably not. x264 has a number of innate visual advantages when compressing video that was previously MPEG compressed. VP8 generally seems to win on raw uncompressed video in the races I've seen.

Then you've been fooled. The study you're referring to [compression.ru] used videos pre-compressed with other MPEG standards, and the VP8 guys claimed that that biased it toward other MPEG-like codecs. They said VP8 got better with an uncompressed source.

VP8 got better. On a single test. A single test is not conclusive for anything.

I rip to H.264 (libx264, level 4.1, high profile), with AAC 5.1 and 2.0 sound (muxed properly: combine left front/rear, split center) in an MPEG-4 container, because that is what my PS3 will play back. I use ffmpeg's -crf setting.

First, ffmpeg's deinterlacer kinda sucks, especially if you're working with NTSC broadcast (60000/1001fps) content, because really you want to be doing a pullup. Since in your example you're using 24000/1001, I guess it's progressive content. In that case, do you really need to deinterlace at all? If you do, you might be interested in the "top" parameter (see the manpage) for setting which field is first in interlaced content; usually it won't be necessary, but it's nice to have.

Normally I'm doing DVDs: some progressive, some interlaced/telecined content; some of it is interlaced, most isn't. I was under the impression that -deinterlace didn't do anything if the content wasn't interlaced. I'll look into the -top option. The full Python code can be found at http://github.com/cynyr/ps3enc [github.com]. It is a huge mess, I know.

Hey thanks for the link! A mess isn't a problem; hell, it's better than what I have -- nothing!

I could be wrong about -deinterlace using cycles when it isn't necessary; I'm very much an amateur at this stuff still. For me, I've been using mencoder with -vf pullup,softskip for telecine'd content. It's slower, but the results (IMO) look better than ffmpeg, particularly for animation. For the mencoder tasks, I use a modified version of the script found at this blog: http://blog.dastrup.com/?p=34 [dastrup.com]

x264 in an AVI container is the most popular. x264 is going to take a little more than double the CPU to play back. The quality is better than Xvid even for DVD source, but not dramatically better. It is substantially better for HD rips.

If you use a weaker CPU for playback, you are going to want to stick to Xvid. And if you are concerned about 1.2GB of disk space when 1TB HDDs are less than $100, you probably are.

"I'm thinking of using Matroska with Vorbis for audio but I'm completely lost as to what video codec to use."

Using a modern audio codec like Vorbis is hardly "killing" the audio. Vorbis is generally transparent at around q3 and still quite respectable below that, and can thus offer savings ranging from "pretty good" (~1/2 with 192kbps AC-3) to "very significant" (~1/16 with raw PCM).

Also, H.264 in AVI is an abomination, like sex with other men or eating shrimp. If you really want to risk eternal suffering in the fiery depths of encoding hell, go right ahead, but don't say I didn't warn you.

But if you have 5.1 Vorbis, how do you play it back? I have optical cables to my receiver; when playing a DVD, the raw AC-3 sound is sent over this cable to the receiver and decoded there. I don't think it has the capability to receive 5.1 channels of raw PCM audio over this link, so the only way to get it to the receiver is to either encode it back to AC-3 or use the 8 individual analog channel inputs on the receiver, which would get extremely messy with long cables...

Also, I prefer quality over size, but over 1.2GiB for a 90-minute DVD is too much IMHO.

Really? It's still a factor of 5-10 improvement over the DVD...

When I got my first DVD drive, it went in a computer with a 20GB hard disk. For about half what I paid for that disk (less when you factor in inflation), I can now buy a 1.5TB disk. Most DVDs aren't full; they only use 6-8GB of space, so that's enough for about 200 DVDs. At that price, why bother messing around with transcoders and recreating the menus? Just store them as disk images, and then you can transcode them later if you want.
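The arithmetic behind "enough for 200 DVDs" checks out, using the rough sizes from the comment:

```python
# How many average DVDs fit on a 1.5TB disk, taking the midpoint of the
# 6-8GB-per-disc estimate from the comment above.
disk_tb = 1.5
dvd_gb = 7                           # midpoint of 6-8 GB

dvds = int(disk_tb * 1000 / dvd_gb)
print(dvds)   # 214
```

Even at the pessimistic 8GB per disc that is still 187 DVDs per drive, so "about 200" is a fair summary.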

That's sort of what I do, but I would like to watch my DVDs on a dedicated device, which doesn't support ISOs.

I have a 3TB RAID array that I'm just beginning to populate. I rip the full ISO, and then rip the videos (usually TV episodes) into h.264+aac in mp4. I used to use mkv, but it doesn't have good device support. I use a UPnP server on my Linux box to share with my PS3, which works great. Also, mp4 (really m4v) is great for iDevices as well, so I have that flexibility if I want.

I encode with handbrake, which is ok, although I'm not happy with the Linux support. Since it's so Mac-centric, there isn't any support for the most recent release of Gnome (so no distros released after March 2010 work), so I have to run a dev version. I want really high quality encodes. I get pretty much perfect quality from the encodes and they run about 600-800MB/hr for film; animation is all over the place, but quality is good: 280-600MB/hr.

I don't plan to delete the ISOs until my disk space is full. This way if technology changes, then I can still encode from source rather than from another encode.

I rip DVDs straight to a drive array as plain isos. The original plan was to get around to re-encoding the video as h264 when the drives were full / I could be bothered. I don't buy new drives very often but the drive array is still growing faster than it is filling.

The next hike is 2TB drives to replace some of the oldest in the array, now that they are down to 100 quid. This is prompted by errors starting to show up on an old drive rather than a lack of space.

At that price, why bother messing around with transcoders and recreating the menus

Well, I for one would rather the menus not be there. If I want them I'll use the physical DVD. The other reason for transcoding is to reduce the file size so that it streams over wireless. Those little media players are cheap and work great - but often people do not have a CAT5 cable going to their TV. And while wireless might be fast enough most of the time, a little interference and you will notice the player start to stutter. Lower bandwidth requirements result in more reliable streaming.

Use x264 to encode video to H.264; audio can be anything you like if you mux into Matroska. If you care about quality, then forget about using Xvid. OK, it seemed great several years ago, but next to video encoded with x264 it is *pitifully* bad. You can use ffmpeg or mencoder to handle the cropping, scaling, muxing, encoding parameters, etc. ffmpeg is better in some ways because it can mux successfully into mkv or mp4, while mencoder is only really useful for AVI or outputting raw video.

Except he is encoding his own movies, since movies from P2P networks already come encoded (and nobody re-encodes; it would be stupid), and ripping his own DVDs/Blu-rays was considered fair use, even if it breaks encryption, in RealNetworks v. DVD CCA.

I usually rip my DVDs to ~1.2GiB Xvid avi files at native res using mencoder (not reencoding the audio), and have been doing this for many years. Does anyone know what combination of muxer and audio/video codecs is preferred nowadays?

Speaking for myself, I use Xvid for video, raw AC-3 or perhaps Ogg Vorbis for audio, and mux everything together in an mkv file for best results.

The big question I've faced is whether to use h264 or not for video. After considering this for a long time, I finally came to the conclusion