I’ve got a few natty programs that draw stuff with OpenGL, and I’d like to be able to record their output in some appropriate video format that I can show my friends, post on YouTube, etc. I’m not quite sure how to do that, but it can’t be that difficult, I’d have thought.

As an experiment, I’ll write it all up as I go along.

It seems that I can use the data in the frame buffer directly with glReadPixels:
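
The prototype, from the GL reference pages:

```c
void glReadPixels(GLint x, GLint y,
                  GLsizei width, GLsizei height,
                  GLenum format, GLenum type,
                  GLvoid *data);
```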

so it should be easy to dump out frames as raw RGB values or whatever is required by the next step, i.e. encoding to the actual movie format: AVI, MPEG-4, whatever.

ffmpeg seems to be the tool for doing this. Let’s try some experiments: we can dump from X:

ffmpeg -f x11grab -s cif -r 25 -i :0.0+10,20 /tmp/out.mpg

but that has horrible encoding artefacts, trying with:

ffmpeg -qscale 5 -f x11grab -s cif -r 25 -i :0.0 /tmp/out.mpg

does rather better (qscale, it seems, goes from 1 to 31; lower means higher quality).

Apparently, ffmpeg is ‘deprecated’ and we should use avconv, but that seemed to fail dismally: it looked like it wasn’t synchronized with the frame buffer refreshes or something, and I get all sorts of purple streaks and flashes, not nice.

avconv -f x11grab -s cif -r 25 -i :0.0 /tmp/out.mpg

maybe there’s an option for that too, but no idea what it might be.

Sticking to ffmpeg for the moment, after the usual Googling, this seems useful:
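
Something along these lines (the program name, frame size, rate and pixel format here are my stand-ins; they have to match whatever the program actually writes to stdout):

```shell
# "./glprog" stands in for the actual OpenGL program; each frame is
# width * height * 3 bytes of raw RGB, so -s, -r and -pix_fmt must
# agree with what it emits
./glprog | ffmpeg -f rawvideo -pix_fmt rgb24 -s 640x480 -r 25 -i - /tmp/out.avi
```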

ffmpeg seems to work out that we need an AVI wrapper for a .avi file, so we don’t need -f avi; it uses its own mpeg4 encoder by default, so we don’t need -vcodec mpeg4; and the -f rawvideo covers -vcodec rawvideo. Another thing that I now know, but not knowing held me up a little: making an AVI file like this but with just one frame doesn’t work very well (at least Linux Movie Player won’t play the result).

Back to OpenGL: now I’ve got the video encoding back end sorted, I need to generate some real data. Since I’ll be piping the data into ffmpeg, I just need to dump out the frames to stdout in the appropriate format. The OpenGL doc for the various formats is, as usual, fairly baffling for a newbie like myself, but this seems to work:
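
A sketch of the sort of thing (the fixed WIDTH/HEIGHT and the function name are mine; call it once per frame, after drawing and before swapping buffers):

```c
#include <stdio.h>
#include <stdlib.h>
#include <GL/gl.h>

#define WIDTH  640
#define HEIGHT 480

/* Dump the back buffer to stdout as raw RGB24, ready to pipe into
   ffmpeg. Static buffer, no resize protection, no error checking:
   not model code. */
void dumpframe(void)
{
    static unsigned char *buffer = NULL;
    if (buffer == NULL)
        buffer = malloc(WIDTH * HEIGHT * 3);

    glPixelStorei(GL_PACK_ALIGNMENT, 1);   /* no row padding */
    glReadBuffer(GL_BACK);
    glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, buffer);
    fwrite(buffer, WIDTH * HEIGHT * 3, 1, stdout);
}
```

One gotcha: glReadPixels returns rows bottom-up, so the video comes out upside down unless either the scene is flipped or ffmpeg is told to flip it (the vflip filter, -vf vflip, should do).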

I’m not going to put this forward as model code: static variables, yuk; there’s no protection against some damn fool resizing the window while we are ‘recording’, which won’t make anything very happy; and I’m not even checking the return value of fwrite or for errors from glReadPixels. We could do things entirely offline, but we might want to record some interactions, and writing out each frame doesn’t seem to slow things down too much.

It seems that I probably want 480p format, 854×480, as a sort of baseline. I still don’t understand codecs, but it looks like H.264 is the way to go. Now, what encoder should I use? It looks like x264, or libx264, is the thing, but nothing is installed:
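
On Ubuntu, something like this sorts it out (the package name is a guess for your particular distribution and vintage), and then the encode, piping the raw frames straight in (“./invert” stands in for the actual program):

```shell
# package name is a guess; it pulls in libx264 support for ffmpeg
sudo apt-get install libavcodec-extra

# 854x480 at 25 fps, raw RGB in, H.264 out via libx264
./invert | ffmpeg -f rawvideo -pix_fmt rgb24 -s 854x480 -r 25 \
    -i - -vcodec libx264 /tmp/out.mp4
```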

makes me a nice 20 MB, 1 minute 17 second video of some coloured balls writhing around in empty space. A nicer background (and one that disguises the stuck pixel on my new laptop, which I only notice watching this sort of thing) would be good.

Now for that YouTube account, I suppose I might as well use my Google account, anything for an easy life and it’s not like I’m uploading anything too embarrassing. OK, here we go: “We did not recognise the audio format for this file, but we will try to process it anyway. See this article on recommended formats for more information”, well, I didn’t include any audio, so fair enough I suppose (a soundtrack would be nice though, suggestions on a postcard).

Anyway, here it is, in all its glory: I give you “invert264”:

Perhaps I should come up with a catchier name, but that will do for now.