On 4/18/2012 6:36 PM, Robert O'Callahan wrote:
> On Wed, Apr 18, 2012 at 12:23 PM, Randell Jesup <randell-ietf@jesup.org> wrote:
>
> So it sounds like, to modify audio in a MediaStream, you'll need to:
>
> * Extract each track from a MediaStream
> * Turn each track into a source (might be combined with previous step)
> * Attach each source to a graph
> * Extract tracks from the destination of the graphs
> * Extract the video stream(s) from the MediaStream source
> * Combine all the tracks back into a new MediaStream
>
>
> And one of the downsides of doing it this way is that you lose sync
> between the audio and video streams. Usually not by much, but more for
> certain kinds of processing. Given there's a way to not lose sync at
> all, why not use it? Sorry to harp on this :-).
For that matter, you'd (in theory at least) lose sync between audio
tracks as well. In practice, if the graphs are identical you probably
*shouldn't* lose sync, but it's not guaranteed - and what if you want to
process tracks differently? (E.g., add reverb to the background music
but not the voice-over.)
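
For concreteness, the round trip in the quoted list looks roughly like
this (a sketch only, untested; it assumes the MediaStream
source/destination nodes and the track-array MediaStream constructor,
with a GainNode standing in for real per-track processing):

  const ctx = new AudioContext();

  function processAudioTracks(input: MediaStream): MediaStream {
    const output = new MediaStream();

    // Each audio track becomes its own source, runs through its own
    // graph, and comes back out of its own destination.
    for (const track of input.getAudioTracks()) {
      const source = ctx.createMediaStreamSource(new MediaStream([track]));
      const dest = ctx.createMediaStreamDestination();
      const gain = ctx.createGain(); // placeholder for real processing
      source.connect(gain);
      gain.connect(dest);
      for (const t of dest.stream.getAudioTracks()) {
        output.addTrack(t);
      }
    }

    // Video tracks never enter the audio graph; copy them across as-is,
    // which is exactly where the sync question comes from.
    for (const track of input.getVideoTracks()) {
      output.addTrack(track);
    }

    return output;
  }

Note how the video tracks take a completely separate path from the
audio, and each audio track takes a separate path from its siblings.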
Personally, I'd like to see MediaStreams be the primary "container"
object for media in the HTML5 world, and have things operate on those.
In that world, a processing node would operate on MediaStreams and could
contain processing graphs, etc. internally. MediaStream sources can be
hardware (getUserMedia), HTML5 media elements (<video>, <audio>), complex
sources (PeerConnections), etc., and MediaStreams can be routed to similar
sorts of sinks (minus getUserMedia). That gives us a single, unified
abstraction for these media flows, instead of a set of
similar-but-not-the-same ones that require a lot of conversion back and
forth and grow the forest of APIs and abstractions to learn.
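
To sketch the shape I have in mind (all names below are hypothetical,
not a proposal for a concrete API):

  // Hypothetical shape only - none of these names exist in any spec.
  // A processing node takes a whole MediaStream and yields a whole
  // MediaStream; any per-track graphs live inside the node, so the
  // platform keeps the tracks (and A/V sync) together.
  interface StreamProcessorNode {
    readonly output: MediaStream; // routable to <video>, PeerConnection, etc.
  }

  declare function createStreamProcessor(
    input: MediaStream, // getUserMedia, <audio>/<video>, PeerConnection, ...
    buildGraph: (ctx: AudioContext, tracks: MediaStreamTrack[]) => void
  ): StreamProcessorNode;

  // e.g. getUserMedia -> processor (reverb on music track only)
  //      -> PeerConnection, with sync maintained by the node rather
  //         than reassembled by the author.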
--
Randell Jesup
randell-ietf@jesup.org