On 4/19/2012 4:45 PM, Robert O'Callahan wrote:
> On Fri, Apr 20, 2012 at 6:58 AM, Maciej Stachowiak <mjs@apple.com
> <mailto:mjs@apple.com>> wrote:
>
> It seems to me that this spec has some conceptual overlap with
> WebRTC, and WebAudio, which both involve some direct manipulation
> and streaming of media data.
>
> WebRTC: http://dev.w3.org/2011/webrtc/editor/webrtc.html
> Web Audio API:
> https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html
>
>
>
> I actually think those are fairly well separated from this proposal.
> This proposal is all about manipulating the data that goes into a
> decoder; WebRTC and the Audio WG are all about manipulating decoded
> data. The latter two need to be carefully coordinated, but this doesn't.
Their creation was well separated. Now that they're relatively stable,
it's a good time to coordinate.
Web Audio needs reliable buffering; otherwise too much decoded data has
to be held in RAM.
The Web Audio and MediaStream APIs may allow audio post-processing of
Encrypted Media. Timing attacks may be an issue.
Use case: Bob turns on his camera-enabled TV and connects to Alice over
a secure channel; Bob runs a filter on Alice's audio stream to reduce
background noise so he can hear the conversation better.
Alice is running a secure mobile phone which handles encryption via
hardware acceleration with encryption keys isolated from the OS.
Technical examples:
Web Audio and Media Source overlap:
https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html
createBufferSource
"Creates an AudioBuffer given the audio file data contained in the
ArrayBuffer. The ArrayBuffer can, for example, be loaded from an
XMLHttpRequest with the new responseType and response attributes"
decodeAudioData
"Asynchronously decodes the audio file data contained in the
ArrayBuffer. The ArrayBuffer can, for example, be loaded from an
XMLHttpRequest with the new responseType and response attributes. Audio
file data can be in any of the formats supported by the audio element."
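To make the decode path concrete, here is a minimal sketch of the flow the quoted text describes: encoded audio in an ArrayBuffer (e.g. loaded via XMLHttpRequest with responseType "arraybuffer") handed to decodeAudioData. AudioContext is browser-only, so a stub stands in for the real decoder here; the names decodeAudioDataStub and the 16-byte buffer are illustrative, not from the spec.

```javascript
// Stub for AudioContext.decodeAudioData: the real method demuxes and
// decodes asynchronously and yields an AudioBuffer; the stub just
// reports the encoded size so the data flow is visible.
function decodeAudioDataStub(arrayBuffer, onSuccess) {
  onSuccess({ byteLength: arrayBuffer.byteLength });
}

// Pretend these 16 bytes arrived as xhr.response with
// xhr.responseType === "arraybuffer":
const encoded = new ArrayBuffer(16);
let decoded = null;
decodeAudioDataStub(encoded, (buffer) => { decoded = buffer; });
```

The point of overlap: this is the same kind of ArrayBuffer that Media Source's sourceAppend consumes, so the two specs are handling the same bytes on opposite sides of the decoder.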
http://dvcs.w3.org/hg/html-media/raw-file/tip/media-source/media-source.html
void sourceAppend(in DOMString id, in Uint8Array data);
"Segments can be appended in pieces"
"[A segment is a] sequence of bytes that contain packetized &
timestamped media data"
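A sketch of the "appended in pieces" behavior, assuming the sourceAppend signature quoted above. The browser-side media source is simulated with a small stand-in object (makeAppender is hypothetical) so the example is self-contained:

```javascript
// Stand-in for the media-source side of sourceAppend; it only records
// what was appended so the piecewise behavior can be observed.
function makeAppender() {
  const received = [];
  return {
    // Mirrors: void sourceAppend(in DOMString id, in Uint8Array data);
    sourceAppend(id, data) {
      received.push({ id, data });
    },
    appendCount() { return received.length; },
    totalBytes() {
      return received.reduce((n, r) => n + r.data.length, 0);
    },
  };
}

// A fake 10-byte packetized, timestamped segment, appended in
// 4-byte pieces rather than as one buffer:
const segment = new Uint8Array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]);
const appender = makeAppender();
const PIECE = 4;
for (let off = 0; off < segment.length; off += PIECE) {
  appender.sourceAppend("audio", segment.subarray(off, off + PIECE));
}
```

Whether those pieces arrive from XHR or from a decrypting transform is exactly where this proposal and Encrypted Media would need to agree.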
As a general pipeline issue, the worker-processing model of the
MediaStream Processing API is also relevant:
http://www.w3.org/TR/streamproc/#worker-processing
Whereas the Web Audio API's AudioProcessingEvent exposes buffers to the
main thread, the MediaStream Processing API's ProcessMediaEvent is
exposed only to worker threads, which may be relevant to the Encrypted
Media proposal.
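Either way, the per-block callback shape is similar: fill an output buffer from an input buffer. A standalone sketch of such a block processor (processBlock is a made-up name; a gain multiply stands in for the noise-reduction filtering in the use case above, and no browser event objects are involved):

```javascript
// Minimal per-block audio processor of the kind either
// AudioProcessingEvent (main thread) or ProcessMediaEvent (worker)
// would drive: read input samples, write transformed output samples.
function processBlock(inputSamples, gain) {
  const out = new Float32Array(inputSamples.length);
  for (let i = 0; i < inputSamples.length; i++) {
    out[i] = inputSamples[i] * gain; // placeholder for real filtering
  }
  return out;
}

const input = new Float32Array([0.5, -0.5, 1.0]);
const output = processBlock(input, 0.5);
```

If the input samples come from decrypted protected content, whether this callback runs on the main thread or only in a worker changes what script can observe, which is why the two models differ for Encrypted Media.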
WebRTC does not currently specify encryption techniques beyond requiring
that "configuration information should always be transmitted using an
encrypted connection."
Content protection seems to be a reasonable item to consider.
Note that the WebRTC reference implementation includes hooks to
"external encryption":
http://www.webrtc.org/reference/webrtc-internals/voeencryption
-Charles