Framedump and OpenGL

I am working on a DVD with 5.1 sound and images processed or generated with
Jitter. I am recording in two stages. First, I record the sound with
sfrecord~ while running the images at 180x120 in a "proof" mode. My
composition algorithm generates the music along with messages to jitter
objects that are frame-indexed and stored in collections. Second, I set
frame size to 720x480 and jit.qt.record with lossless DV codec in framedump
mode. As the frames are dumped, I use the frame numbers to index the
collections and the recorded messages are strobed out in perfect sync. The
result is a movie and a sound file that match beautifully when reassembled
in Final Cut.
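
The frame-indexed lookup described above can be sketched in JavaScript (as might run in a Max `js` object). This is a hypothetical illustration, not the poster's actual patch: the names `frameRate`, `messageStore`, and the functions are all assumptions, and a plain object stands in for a Max coll.

```javascript
// Sketch of frame-indexed message storage and recall.
// Assumes NTSC DV frame rate for the 720x480 output.
var frameRate = 29.97;   // assumed frame rate (NTSC DV)
var messageStore = {};   // stand-in for a coll keyed by frame number

function storeMessage(frame, msg) {
    // store one message (or message list) per frame index
    messageStore[frame] = msg;
}

function frameForTime(ms) {
    // convert elapsed milliseconds to an integer frame number
    return Math.floor(ms * frameRate / 1000.0);
}

function recallForFrame(frame) {
    // strobe out whatever was stored for this frame, if anything
    return messageStore.hasOwnProperty(frame) ? messageStore[frame] : null;
}
```

During the framedump pass, the frame number handed to `recallForFrame` would come from the recorder rather than from a clock, which is what keeps the messages in sync regardless of how long each frame takes to render.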

OK, so that's the idea. Here's the problem. Framedump works very nicely
when processing a QT movie, but much of my video is generated with OpenGL
objects driven by a qmetro. Is there something equivalent to framedump for
OpenGL, or do I just have to run the metro at a very low speed to ensure that
no frames are dropped?

You might like to use Render_node for Jitter. It solves this problem
of ensuring all frames are recorded by implementing an offline
rendering mode. When rendering offline, the nonrealtime driver and
qfaker are used to generate events which are guaranteed to complete.
Audio and full-frame video are recorded to disk at the same time. (I
also use Final Cut to reassemble them and do a little post-production.)

Render_node runs on a master SMPTE clock, which should be easy to
convert into frames to work with your patch. It's an open Max patch,
and reasonably well documented, so if you just want to extract the
relevant snippets and use them in your own patch, you can do that too.
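
The SMPTE-to-frames conversion mentioned above is a straightforward calculation. A minimal sketch, assuming hh:mm:ss:ff timecode fields; the function names and the drop-frame variant are illustrations, not part of Render_node itself:

```javascript
// Non-drop-frame timecode at a fixed integer rate (e.g. 30 fps):
// hours, minutes, and seconds each expand to whole frames.
function smpteToFrames(hh, mm, ss, ff, fps) {
    return ((hh * 60 + mm) * 60 + ss) * fps + ff;
}

// Drop-frame variant for 29.97 fps NTSC: two frame *numbers* are
// skipped at the start of each minute, except every tenth minute.
function smpteDropToFrames(hh, mm, ss, ff) {
    var totalMinutes = hh * 60 + mm;
    var dropped = 2 * (totalMinutes - Math.floor(totalMinutes / 10));
    return (totalMinutes * 60 + ss) * 30 + ff - dropped;
}
```

For a DV NTSC project the drop-frame form keeps the frame count aligned with real elapsed time (10 timecode minutes come out to exactly 17982 frames, i.e. 600 seconds at 29.97 fps).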

This is exactly what I was looking for. I've seen the render node subject
go by a few times without really knowing what it was. I played with
Render_node and the test pattern example and got a basic understanding. I
will work a bit more and send some questions later. Great work!

My return to Max after four years is exciting. When I last saw it, it was 4.6. Max 6 is super. The new interface was confusing at first, but now it is a good friend. Happily, most of my 4.6 objects and patches still work. The ones that didn't proved easy to fix. I had a large collection of great third-party objects that still work or have been updated. There are many exciting new tools available through the toolbox and this list. One that I used on several projects in that previous life is Randy Jones' Render Node. It was/is great for complex graphics because it facilitated high-quality rendering in non-real time. The version I have seems to be 0.51 from 2006. I still find Randy's tracks but, so far, no link to an updated download. Any pointers?