Mark Watson wrote:
>
> On Feb 18, 2011, at 2:08 AM, Philip Jägenstedt wrote:
> >
> > I don't think we should spend much time making extra in-band video
> > tracks work more than barely, if at all, since the extra bandwidth
> > needed to have multiple in-band video tracks makes it quite unlikely
> > the feature would be used to any greater extent.
>
> A track declared within an adaptive streaming manifest (e.g. a DASH
> manifest or take-your-pick of various proprietary adaptive streaming
> solutions) would be an in-band track but would only be fetched when
> actually needed.
This has been an interesting conversation.
Philip, I think we need to be careful about that assumption. From an
accessibility best-practices perspective, all supporting media (whether
textual or binary) is best included as in-band content, for the very same
reason that providing textual (captioning) data in-band is preferable:
portability and re-use. Isn't this why we worked on getting the JavaScript
API ready early on?
While I concede that the inclusion of sign language interpretation and
descriptive audio may seem an edge case compared to the larger body of
content envisioned to be on the web, it is important that we ensure we can
do this, and do it well. Thus I think we need to spend as much time as
required to ensure we *have* met this requirement, and I am concerned by
the suggestion that content such as this *not* be treated in the same way
as textual supporting content.
The idea that a DASH manifest would only fetch this type of content
'on-demand' is intriguing; however, does it not presume an active
connection to the network? Or would the DASH manifest also be used to
'activate' or expose supporting in-band content, such as sign language
video, to the user-agent?
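(For illustration only: a DASH MPD can declare a sign-language rendition as
its own Adaptation Set, annotated with the standard Role descriptor, so the
user agent knows the track exists without fetching any of its media until
the user selects it. The file names and bitrates below are hypothetical.)

```xml
<!-- Hypothetical DASH MPD fragment: the sign-language track is declared
     alongside the main video but need not be fetched unless selected. -->
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static"
     mediaPresentationDuration="PT10M"
     profiles="urn:mpeg:dash:profile:isoff-on-demand:2011">
  <Period>
    <!-- Main programme video -->
    <AdaptationSet mimeType="video/mp4" contentType="video">
      <Role schemeIdUri="urn:mpeg:dash:role:2011" value="main"/>
      <Representation id="video-main" bandwidth="2000000">
        <BaseURL>video_main.mp4</BaseURL>
      </Representation>
    </AdaptationSet>
    <!-- Sign-language interpretation, exposed to the UA via the
         "sign" role but fetched only on demand -->
    <AdaptationSet mimeType="video/mp4" contentType="video">
      <Role schemeIdUri="urn:mpeg:dash:role:2011" value="sign"/>
      <Representation id="video-sign" bandwidth="500000">
        <BaseURL>video_sign.mp4</BaseURL>
      </Representation>
    </AdaptationSet>
  </Period>
</MPD>
```

So the manifest itself answers the 'expose' question, but an active network
connection would still be needed at the moment the track is activated.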
JF