The VMR on the bridge source graph that's connected to the bridge render graph is very choppy -- displaying a new frame only every 4-5 seconds.

If I seek the bridge source graph that's connected to the bridge render graph, then all output (the VMRs in the connected bridge source and render graphs, as well as the output on the external renderer) stops for about a minute. Once it resumes, the choppiness from the first problem is gone.

I've tried disconnecting and stopping the bridge render graph before seeking, then reconnecting and running it afterwards, but it still either freezes up or the VMR on the connected bridge source graph displays a frame only about every 10 seconds.

Sort of unimportant problem:

I did have smart tees where the infinite tees are, with the VMRs connected to the preview pins, but after seeking they'd play back at 1.5-2x the normal rate until they caught up with the live stream. Is there a sane way to fix it so I can go back to a smart tee?

1 Answer

The bridge adjusts the timestamps on the samples going into the render graph, since the stream time is not the same in the two graphs. However, the inftee filter sends the same sample to both of its outputs. So the VMR in the source graph will (sometimes) be asked to render samples whose timestamps have been adjusted in the render graph. The choppy playback you see is a result of the VMR trying unsuccessfully to catch up.

You need to copy the data, or at least copy the metadata, so that the modified timestamps don't appear in the source graph. The simplest way to do this with uncompressed video data is to insert a copy transform such as the colour space converter (probably between inftee and bridge sink).

To help you debug issues like this, you can create an empty file c:\gmfbridge.txt and the bridge code will create a log including timestamp adjustments and latency.

The GMFBridge sample demonstrates that you can divide a task into multiple separate graphs with very little overhead, and for that reason it avoids copying the data or introducing new threads in the delivery pipeline. However, for some tasks this is overcomplicated, and a simpler, more decoupled solution is more appropriate, such as a pool of buffers with a worker thread downstream.

On the other issue: the smart tee strips timestamps from the preview output, and so the samples downstream of the tee are rendered as soon as they arrive. In a normal capture graph, the samples are timestamped with their capture time -- if you pass these direct to a renderer, they will always be late for rendering. The proper solution is to adjust the timestamps for the latency from capture to rendering, but the crude solution of stripping the timestamps works in most cases. The smart tee does this by duplicating the IMediaSample object, but pointing to the same data buffer (so it copies the metadata but not the data). Note that the smart tee will also discard samples on the preview output if it thinks (according to 1996 heuristics) that the capture output is falling behind.

Thanks Geraint. I was going to post this link in my DirectShow forum thread but I see you found that as well. I put it there since, as a new user on StackOverflow, I am unable to add some of the tags I felt were more relevant and figured my question here would just get lost among everything else.
– user173891 Sep 18 '09 at 14:38