recording opengl applications in linux isn't easily done with traditional software like "recordmydesktop", because those tools hook into the x server, capture screenshots from there, combine them into a video and compress it on the fly. all in all, that's a big load for the average pc and already needs a multicore machine to run smoothly. and you'd still need resources left over to run the game itself, so this "solution" generally doesn't work.

to save us from that dilemma, a cool guy named "nullkey" made a tool that hooks into the opengl libraries instead and captures the rendered frames from there!
the suite is called "glc" and can be found at: http://nullkey.ath.cx/projects/glc/

it uses almost no resources while capturing, but to save disk space you might want to compress the video on the fly, so i still recommend at least a dualcore machine (one core for the game, one for the encoding).

you can start right now and record your first video by running "glc-capture savage2.bin" from your savage2 directory. once you start recording with shift+f8, a file with a numbered name and the extension ".glc" pops up and starts to grow insanely fast. this is the raw video/audio data, which can be played with "glc-play <filename>". you can stop recording by quitting savage2 or pressing shift+f9. apart from eating up your hdd, this should have close to no impact on your game performance.
PITFALL: savage2 still receives those keypresses and serves you some nice debug-stats which may further block your input. close them with Esc.
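the steps above in one place, as a sketch (glc must be installed and you'd run this from your savage2 directory; nothing here actually starts glc, the commands are only stored and printed so the snippet is safe to try anywhere):

```shell
# capture command from the text; shift+f8 starts recording, shift+f9 stops,
# quitting savage2 stops it too. a numbered .glc file appears in the directory.
record="glc-capture savage2.bin"
# playback of the raw dump; substitute the numbered file glc created:
replay="glc-play <filename>.glc"
echo "$record"
echo "$replay"
```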

if you recorded a full match that way, you would probably need a terabyte of free hdd, so we are going to look into a way to reduce the amount of data. we're going to build a chain of processes to do that, including compressing the video with mpeg4. however, it's always best to reduce the input as much as possible to take some burden off the later workers. my videos are recorded at 15 fps, which seems to be ok for youtube or any internet-video-thingy; this can be done by passing glc-capture the option "-f 15". further, i don't record sound (it works only with alsa and i use oss anyway; i put on music via youtube later, so screw it here), so use "--disable-audio" for this. "--pbo" enables fast texture transfer via opengl pixel buffer objects and speeds things up a bit more. "-b back" captures from the back buffer instead of the front buffer, so it'll be one frame ahead of what you see; shouldn't really matter.
PITFALL: "-b back" won't work for compiz because it draws some things directly to the front buffer. you have to use "-g" there instead (only for recording compiz sessions, not the game).
also, we can already scale down to a nice resolution here (youtube recommends 1280x720) with the "-r <factor>" switch. while recording, i play at 1024x768, so i use "-r 0.9375" to reduce the height to 720 (720/768 = 0.9375). i add black bars to stretch the width to 16:9 with mencoder later.
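putting all those switches together, here's a sketch of the full capture command (assuming glc is installed; the snippet only assembles and prints the command line so you can inspect each flag):

```shell
# build the glc-capture call from the options discussed above
cmd="glc-capture"
cmd="$cmd -f 15"           # record at 15 fps
cmd="$cmd --disable-audio" # no sound capture
cmd="$cmd --pbo"           # faster texture transfer via pixel buffer objects
cmd="$cmd -b back"         # grab the back buffer (one frame ahead)
cmd="$cmd -r 0.9375"       # scale the height from 768 down to 720
cmd="$cmd savage2.bin"
echo "$cmd"                # run this from your savage2 directory
```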

all these steps should already massively reduce the size of the recorded raw data. to compress it on the fly, we will use a nice mechanism of linux: named pipes. you may already know the anonymous pipe from things like "glxinfo | grep -i version" (it's the vertical bar in the middle), which hands the output of one command directly to the input of the next one. however, not all commands support input from stdin or output to stdout, but almost all commands support writing to and reading from a file.
it comes in handy here that one of the basic unix concepts (linux is based on unix) says: "everything is a file!". so we create a file that does the same thing as our vertical bar above, and because it has a file name, it's called a "named pipe".
this is done with the command "mkfifo" and basically only has to be done once (until you delete the file). so we run "mkfifo glc.fifo" and get: an empty file!
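here's a tiny self-contained demonstration of a named pipe with plain coreutils (no glc involved), just to show the mechanism:

```shell
# create the pipe; it shows up as an empty "file" on disk
mkfifo demo.fifo
# a writer blocks until a reader opens the other end, so background it
printf 'hello through the pipe\n' > demo.fifo &
# the reader drains the pipe; the writer finishes as soon as it's been read
msg=$(cat demo.fifo)
wait
# the pipe persists until you delete it, just like a normal file
rm demo.fifo
echo "$msg"
```

the data never touches the disk; the kernel hands it straight from writer to reader.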
now we send the output of glc-capture to this file with "-o glc.fifo". linux will keep the data in ram as long as possible but eventually swap it out, as it still grows very fast. of course it does, because we're not pulling any data out again.
so we now start "glc-play glc.fifo" and while it reads data from our pipe, linux automatically discards the consumed data and frees the memory. great stuff, huh?
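so the fifo-based flow looks like this (a sketch; the glc commands are shown as comments because they need glc and a running game, while the fifo handling itself is real):

```shell
# create the named pipe once (it stays around until you delete it)
mkfifo glc.fifo
test -p glc.fifo && echo "glc.fifo is a pipe"
# terminal 1, the writer (needs glc and the game):
#   glc-capture -o glc.fifo savage2.bin
# terminal 2, the reader, which frees the buffered data as it consumes it:
#   glc-play glc.fifo
rm glc.fifo    # cleanup for this demo; keep the fifo around for real use
```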

we decided not to record sound above, so we can reduce the load a bit here by telling glc-play to only extract video with "-y 2" (-y selects a video stream, and savage ends up being the second stream for me; you might have to vary the number to get a working result). also, to further process (encode) the video, we don't send it to a window but to stdout with "-o -" (- is stdout if written to as a file and stdin if read from as a file, so basically a shortcut to still use the anonymous pipe). that output is now piped to mencoder, which does the real work: compressing the video data. the input is yuv, so we need "-demuxer y4m" first. we can further add "-nosound" to take the burden of sound processing off the cpu. to get a nice 16:9 hd video, we use one of the numerous filters and tell it "-vf expand=:::::16/9" (this adds black bars at the sides if you are not playing in a 16:9 resolution). now the most important and most cpu-hungry step: encoding the video. this is done with "-ovc lavc". last but not least we add an output file ("-o gl-capture.mpeg") and tell mencoder to read from stdin with a final "-".
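the whole chain from that paragraph, assembled into one pipeline (a sketch assuming glc and mencoder are installed, and that the recorder writes to glc.fifo as above; the stream number after -y may differ on your machine). the snippet only stores and prints the pipeline so you can copy it:

```shell
# glc-play drains the fifo and emits raw yuv video on stdout;
# mencoder reads it from stdin, pads to 16:9 and encodes with lavc
pipeline='glc-play glc.fifo -y 2 -o - | mencoder -demuxer y4m -nosound -vf expand=:::::16/9 -ovc lavc -o gl-capture.mpeg -'
echo "$pipeline"   # run it in a second terminal while glc-capture is recording
```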

download it, put it into your path and run "glc-encode savage2.bin" from your savage2 directory to encode while recording.

however, even if you are on a multicore machine like i suggested, you will probably notice a BIG drop in performance (just hit shift+f9 to compare), so you might want to turn down some settings.
PITFALL: do not turn the shaders down to low or medium! they are broken and perform worse than the high-quality shaders! (this has been fixed in hon, so i really hope for a backport of the renderer.)

the final file "gl-capture.mpeg" will still be about 1 megabyte per 10 seconds, but hey, it's fucking HD, what'd you expect?

Been there, done that. Good suggestions though. I got this working a few weeks ago with gstreamer.

Unfortunately this can't be used for streaming (yet) on Justin.TV. All the major A/V suites (ffmpeg, VLC, gstreamer) have Google Summer of Code projects going on to implement RTMP, but they haven't gotten there yet. Last I checked, the guy working on the VLC one got a 3 line comment checked in, but that was about it. Hopefully at least ffmpeg supports RTMP payloading soon.

There's another possibility that xnixnix and I investigated, which was to use JTV's builtin streaming using a "webcam in flash" (i.e. Video4Linux version 2, v4l2)

There's a virtual webcam driver which is the Linux equivalent of VHScrnCap called v4l2loopback. xnixnix and I have been trying to get that to work properly with glc, but the problem is mainly that the output video has a variable framerate because of the way glc-play just "throws out frames" as fast as it can produce them, and yuv4mpeg doesn't have timestamps in the format spec.

So my JTV stream, looking at the main menu, makes the malphas' idle animation change speeds dramatically depending on how fast your video card is rendering at the moment. :/

I'm still thinking going direct RTMP would help, but I'm still not sure how to solve the problem with streaming yuv4mpeg "live" because of the way the framerate is variable.

Anyway, if you're looking for a challenge, create a (free) Justin.TV channel and use glc as a frontend and something else as a backend to stream live -- without a variable framerate. It'd be quite an accomplishment. :P

Actually RedOmen has gotten it to work via the webcam route. Unfortunately the archives of him streaming Quake Live have been deleted and he didn't make any clips:

I still think it was something in your gstreamer pipeline that varied the framerate. Going
glc-play | mencoder/ffmpeg | videoloopback ensured for RedOmen and me that the stream was always at the same rate that was applied during capturing with the -f option.
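A sketch of that chain with ffmpeg as the middle stage (the exact flags are my assumption, not RedOmen's verified setup; it assumes an ffmpeg built with v4l2 output support and the v4l2loopback module providing /dev/video0). The snippet only prints the pipeline:

```shell
# glc-play emits yuv4mpeg on stdout; ffmpeg pushes it into the loopback device
chain='glc-play glc.fifo -y 1 -o - | ffmpeg -f yuv4mpegpipe -i - -f v4l2 /dev/video0'
echo "$chain"
```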

As for streaming with RTMP: the auth system of justin.tv is very basic. It simply depends on a hidden stream name that you have to stream to, which you can find out by downloading the fme xml configs from justin.tv.

I couldn't be bothered to dive deeper into RTMP, so I stopped after fiddling a bit. I'm sure someone more determined can figure out how to stream directly with it. Or just wait till those GSoC projects have finished client implementations.

Well, it wasn't Quake III, but of course it's less taxing than Savage 2. I was still quite impressed, because it was the first time (after I streamed glxgears) that I saw a more complex opengl app live-streamed from linux.

I'm still a bit nonplussed at your varying-framerate problem, because I actually found the -f option to glc-capture quite accurate at ensuring a constant framerate while capturing.
The only way I can see those varying framerates being introduced is if you get a LOWER fps in your program than what you specified with -f. That would mean the capture hook of glc gets called less often than necessary to reach that fps. But in that case you shouldn't be capturing anyway; rather reduce the resolution or something to ensure that your program runs at least at the fps you want to capture at.

I saved two archives there now of testing quake3 using the following scripts:

As you can see, V4L1 is MUCH smoother, but local playback looks ok to me with V4L2, so I think it's just flash having problems with it. But until we get some direct RTMP method we can't have the higher-quality x264 streams anyway.