MPTF/MPTF Discussions/Media Synchronous Web Content

Web content can synchronize its execution with video and audio content playback.

The content provider is able to signal a temporal region in the media stream in which Web content is to be rendered simultaneously with video and audio playback.
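One way to model such a temporal region is as a cue-like object with explicit start and end times on the media timeline; a minimal sketch (all names are illustrative, not from any specification):

```javascript
// Sketch: a temporal region modeled as a cue-like object with a start and
// end time in seconds on the media timeline. Field names are illustrative.
function makeRegion(startTime, endTime, payload) {
  if (endTime <= startTime) throw new RangeError("endTime must exceed startTime");
  return { startTime, endTime, payload };
}

// True while playback is inside the region, i.e. while the Web content
// should be rendered alongside the audio/video.
function isActive(region, currentTime) {
  return currentTime >= region.startTime && currentTime < region.endTime;
}

// Hypothetical region: render an overlay between 30s and 45s of the stream.
const region = makeRegion(30.0, 45.0, { app: "https://example.com/overlay" });
```

In a browser this maps naturally onto a metadata text track cue, whose `startTime` and `endTime` play the same role and whose `enter`/`exit` events fire at the region boundaries.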

The content provider is able to signal messages that control Web content lifecycle, e.g. identify Web content to be executed, start execution, pause execution, end execution. Some of these signals need to be available prior to the play time of the temporal region, e.g. specification of Web content to be executed.
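The lifecycle messages above can be sketched as a small state machine. This is only an illustration under the assumption of four message kinds (specify, start, pause, end); neither the message names nor the state names come from any standard:

```javascript
// Sketch of a Web content lifecycle driven by signaled messages.
// Message kinds and state names are illustrative, not from any standard.
// "specify" must arrive before the temporal region so the content can be
// identified and fetched ahead of its play time.
const transitions = {
  idle:    { specify: "loaded" },               // content identified, prefetched
  loaded:  { start: "running" },
  running: { pause: "paused", end: "ended" },
  paused:  { start: "running", end: "ended" },
  ended:   {},
};

function applyMessage(state, message) {
  const next = (transitions[state] || {})[message];
  if (!next) throw new Error(`message "${message}" not valid in state "${state}"`);
  return next;
}

// Example: a sequence of messages a content provider might signal.
let state = "idle";
for (const m of ["specify", "start", "pause", "start", "end"]) {
  state = applyMessage(state, m);
}
```

Note that "start" is rejected in the idle state, capturing the requirement that the specification of the Web content be available before execution begins.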

Track elements are expected to appear only at the beginning of a media stream. If the media stream is scheduled programming, however, track elements will come and go over the life of the stream, so there needs to be a way to account for this.
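In a browser, tracks that come and go could be observed through the `addtrack` event on the media element's text track list; a sketch, using a minimal stand-in for `TextTrackList` built on `EventTarget` (the handler body and mock are illustrative):

```javascript
// Sketch: in scheduled programming, tracks may appear after playback starts,
// so handle both the tracks present now and any added later. A browser's
// TextTrackList fires "addtrack"; here a minimal mock built on EventTarget
// (global in Node 15+) stands in for it.
function watchMetadataTracks(trackList, onTrack) {
  for (const track of trackList.tracks || []) {
    if (track.kind === "metadata") onTrack(track);
  }
  trackList.addEventListener("addtrack", (e) => {
    if (e.track.kind === "metadata") onTrack(e.track);
  });
}

// Minimal mock of a TextTrackList for the example; a real one is live,
// indexed, and dispatches TrackEvent rather than a plain Event.
class MockTrackList extends EventTarget {
  constructor() { super(); this.tracks = []; }
  add(track) {
    this.tracks.push(track);
    const e = new Event("addtrack");
    e.track = track;               // real browsers expose this via TrackEvent
    this.dispatchEvent(e);
  }
}

const seen = [];
const list = new MockTrackList();
watchMetadataTracks(list, (t) => seen.push(t.label));
list.add({ kind: "metadata", label: "signals" }); // appears mid-stream
list.add({ kind: "captions", label: "en" });      // not metadata, ignored
```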

Motivation:

Content providers need Web content, e.g. interactive applications tied to insertion opportunities, to execute in step with the audio/video timeline. Signal formats for this purpose have been, or will be, specified in standards external to W3C, e.g. CableLabs' EISS and SCTE 35, but there is currently no standard way to deliver those signals to Web content through the media pipeline.

Dependencies:

None at this point.

What needs to be standardized:

Timed text tracks of @kind = metadata can be used to deliver the signals. Standards external to W3C have been, or will be, written specifying insertion opportunity signal formats, e.g. SCTE 35. What needs to be standardized is:

How each supported format gets mapped to a timed text track cue.
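Such a mapping might treat the signal payload as opaque and carry it in a generic cue; a sketch under that assumption (field names are illustrative, and in a browser the result would be a `DataCue` or `VTTCue` added via `TextTrack.addCue()`):

```javascript
// Sketch: wrap an externally specified signal (e.g. an SCTE 35 splice
// message) in a cue-like object for a metadata text track. The payload is
// treated as opaque; field names are illustrative, not from any standard.
function signalToCue(signal) {
  return {
    startTime: signal.presentationTime,             // when the signal applies
    endTime: signal.presentationTime + (signal.duration ?? 0),
    // Serialize so a generic metadata cue can carry it as text.
    text: JSON.stringify({ format: signal.format, payload: signal.payload }),
  };
}

const cue = signalToCue({
  format: "scte35",
  presentationTime: 120.5,
  duration: 30,
  payload: "BASE64-SPLICE-INFO", // placeholder for an opaque encoded payload
});
```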

Timed text tracks of @kind = metadata do not distinguish between types of metadata. An ability to do so must be added.
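Until such an ability exists, one workaround is to embed a type tag in each cue's text and dispatch on it; a sketch (the envelope shape and type names are illustrative, while a standardized solution would presumably expose the type on the track or cue itself):

```javascript
// Sketch: @kind = metadata cues carry no type of their own, so embed one in
// the cue text and route on it. Envelope shape and type names are
// illustrative, not from any standard.
const handlers = new Map();

function onMetadataType(type, handler) {
  handlers.set(type, handler);
}

// Returns true if the cue carried a recognized type and was handled.
function dispatchCue(cue) {
  let envelope;
  try { envelope = JSON.parse(cue.text); } catch { return false; }
  const handler = handlers.get(envelope.type);
  if (!handler) return false;
  handler(envelope.payload);
  return true;
}

const received = [];
onMetadataType("eiss", (p) => received.push(p));
const handled = dispatchCue({ text: JSON.stringify({ type: "eiss", payload: "app-1" }) });
const ignored = dispatchCue({ text: JSON.stringify({ type: "other", payload: "x" }) });
```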
