Hi Gregory,
On 01/26/2011 01:11 PM, Gregory Maxwell wrote:
> I think this is a pretty poor solution for the stated problem. It
> would basically be copying the braindead behaviour we're stuck with in
> libogg because of some poorly thought out parts of our ABI.
>
> The fundamental issue here is that more tightly packed pages mean
> greater efficiency, but they also mean higher delay. If you don't care
> about delay, pack away. But if you care then you have issues.
>
> The reason this solution is not good is that a straight size limit
> does _not_ bound the delay very tightly. If the packets being emitted
> by the codec are very small then you can still get up to 255 of them
> in a single page, which is the same worst case delay you had with
> maximum size pages! This means that your delay bounded streaming app
> will mostly work, but if the bitrate drops down it will stall and
> rebuffer.
>
> The _correct_ behaviour is to decide how much delay you are willing to
> tolerate and flush at least that often, regardless of how much data is
> in the pages. Gstreamer does this. Ffmpeg2theora also has a cap on
> the number of packets it will attempt to place per page.
I definitely agree with this; I implemented something similar in the
past. Anyway, with per-muxer options it's easy to add.
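To illustrate the flush policy Gregory describes, here is a minimal C sketch of the decision logic: flush a page when either the buffered duration exceeds a configured delay budget or the Ogg hard limit of 255 packets per page is reached. The names (`PageState`, `should_flush`, `max_delay_granules`) are illustrative assumptions, not the actual libavformat/libogg API.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical page-buffering state; not the real libavformat structs. */
typedef struct {
    int64_t page_start_granule; /* granule position when the page was opened */
    int     packets_on_page;    /* packets buffered on the current page */
} PageState;

/* Flush when the buffered duration exceeds the delay budget, or when
 * the Ogg format limit of 255 packets (segments) per page is reached.
 * This bounds worst-case delay even at very low bitrates, which a
 * size-only limit does not. */
static int should_flush(const PageState *p, int64_t cur_granule,
                        int64_t max_delay_granules)
{
    if (p->packets_on_page >= 255)
        return 1;
    if (cur_granule - p->page_start_granule >= max_delay_granules)
        return 1;
    return 0;
}
```

With a per-muxer option, `max_delay_granules` would simply be derived from the user-specified maximum delay and the stream's granule rate.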
[...]
--
Baptiste COUDURIER
Key fingerprint 8D77134D20CC9220201FC5DB0AC9325C5C1ABAAA
FFmpeg maintainer http://www.ffmpeg.org