David Singer [mailto:singer@apple.com]
>
> It is part of the video *element*; just as the separate tracks that are
> composed are.
Correct; however, in <video> we have many different parts that come from
different places: we have both subtitle and closed-caption files, for
example. Those are two discrete files, "merged" as a video in the UI (along
with the actual video file). Yet the caption file and the subtitle file
themselves are different, and we have strong semantics to express that this
text file is @kind="captions" versus @kind="subtitles". Further, I can even
express that those assets are in different languages (@srclang="fr"). Yet
somehow, while we are allowing for all of this granular distinction between
text files, "images are images" is the argument we are also getting. If
images are images, then timed-text files are timed-text files, right? They
are, after all, all part of the <video>, right?
From the a11y perspective that I and others are arguing, the imagery being
merged inside the <video> element also needs strong semantics, and a
mechanism to convey, in textual form, some understanding of those various
images is what we and our users are asking for, in much the same way that we
can convey that "this" text file has certain properties that are different
from "that" text file.
There is no argument that the <video> element is a specific and unique
element, but it is composed of many different "child" elements that are
merged in the UI. I had previously argued for a <firstframe> element as a
child of <video>, in the same way that <source> and <track> are children of
<video>, so that we could apply strong semantics, but that got shot down.
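For illustration only, that rejected proposal would have looked something like the following; <firstframe> is not part of HTML, and the attribute names shown are my own hypothetical choices:

```html
<video controls>
  <source src="movie.webm" type="video/webm">
  <!-- Hypothetical, never-adopted child element: a first frame / poster image
       with its own text alternative, parallel to how <track> carries @kind -->
  <firstframe src="poster.jpg" alt="Hypothetical text alternative for the poster image"></firstframe>
</video>
```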
I am ambivalent about how we achieve the functional requirement, but
intransigent on what that requirement is.
JF