On 8/9/2011 1:00 AM, Cyril Concolato wrote:
> Hi Charles,
>
>> I believe that GPAC seeks through large SVG files via offsets and
>> small buffers, from what I understood at SVG F2F.
>> http://gpac.wp.institut-telecom.fr/
>> The technique is similar to what PDF has in its spec.
> I don't know what you're referring to.
See PDF 7.5.8.3, "Cross-Reference Stream Data".
PDF supports byte offsets, links, and SMIL.
I suppose I was referring more to the MP4Box work than to GPAC itself,
though the two work in harmony.
MP4 has chunk offsets, and GPAC includes support for the SVG <discard>
element.
I believe that MP4Box stores, and GPAC reads, fragments of a large SVG
file throughout the MP4 stream, in a limited manner, similar to how a
PDF reader processes streams.
Both allow someone to seek into and render portions of a large file
without loading it all into memory.
From the article:
"We have applied the proposed method to fragment SVG content into SVG
streams on long-running animated vector graphics cartoons, resulting
from the transcoding of Flash content... NHML descriptions were
generated automatically by the cartoon or subtitle transcoders."
"... the smallest amount of memory [consumed] is the 'Streaming and
Progressive Rendering'. The memory consumption peak is reduced by 64%"
>> SVG does not have byte offset hints, but GPAC expects
>> data to be processed by an authoring tool and otherwise works with
>> transcoding, much as VLC (VideoLan) does.
> The details of how we can do it are here:
> http://biblio.telecom-paristech.fr/cgi-bin/download.cgi?id=7129
> Basically, for long running SVG animations (e.g. automatic translation
> from Flash to SVG), it is interesting to load only some SVG parts when
> they are needed and to discard them (using the SVG Tiny 1.2 <discard>
> element), when they are no longer needed. For that, we use an
> auxiliary file that indicates how to fragment the SVG file into a
> stream, giving timestamps to each SVG file fragment. That auxiliary
> file is then used to store the SVG fragments as regular access units
> in MP4 files; we use MP4Box for that. The manipulation of those
> fragments for storage and playback is then similar to what you would
> do for audio/video streams. We don't do transcoding for SVG fragments
> but for instance individual gzip encoding is possible.
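For readers unfamiliar with it, the discard mechanism you mention is
part of SVG Tiny 1.2. A minimal sketch of a timed fragment (the
<discard> element and its attributes are per the spec; the ids, shapes,
and times here are invented for illustration):

```xml
<!-- A scene that is only needed for the first five seconds. -->
<g id="scene1">
  <rect x="0" y="0" width="100" height="100" fill="blue"/>
</g>
<!-- SVG Tiny 1.2: remove #scene1 from the document tree (and so
     from memory) once the timeline reaches 5s. -->
<discard xlink:href="#scene1" begin="5s"/>
```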
>
> I think an interesting use case for XHR would be to be able to request
> data with some synchronization, i.e. with a clock reference and
> timestamp for each response data.
Some part of that could be handled via custom HTTP headers, though it's
certainly a bit of extra work,
much as implementing "seek" over HTTP can be.
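For the "seek over HTTP" part, a standard Range header already gets you
byte-offset access. A minimal sketch (the rangeHeader helper is my own
name, not an existing API; the Range header syntax is standard HTTP):

```javascript
// Build an HTTP Range header value for a byte-offset seek.
// start: first byte wanted; length: optional number of bytes.
function rangeHeader(start, length) {
  return length === undefined
    ? "bytes=" + start + "-"                         // from offset to end
    : "bytes=" + start + "-" + (start + length - 1); // inclusive end offset
}

// Hypothetical XHR usage (not executed here): fetch one fragment of a
// large SVG file from a known byte offset instead of the whole file.
// var xhr = new XMLHttpRequest();
// xhr.open("GET", "cartoon.svg");
// xhr.setRequestHeader("Range", rangeHeader(4096, 1024));
// xhr.send();
```

The server answers 206 Partial Content when it honors the range, so the
client can tell whether seeking is actually supported.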
I'll keep thinking about the case you brought up. I do believe
timestamps are currently
available on events, indicating when the event was raised.
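If I follow your use case, per-response timestamps could be derived by
pairing an agreed clock reference with the event timestamps already
available. A rough sketch (mediaTime and the surrounding names are
hypothetical, not an existing API):

```javascript
// Sketch: media time of a response chunk = the sender's clock
// reference plus the elapsed time since the first chunk arrived.
// All values in milliseconds.
function mediaTime(clockReferenceMs, eventTimeStampMs, firstEventTimeStampMs) {
  return clockReferenceMs + (eventTimeStampMs - firstEventTimeStampMs);
}

// Hypothetical XHR usage (not executed here): the clock reference
// could arrive in a custom response header, and each progress event
// supplies its own timeStamp.
// xhr.onprogress = function (e) {
//   var t = mediaTime(refMs, e.timeStamp, firstStampMs);
//   scheduleFragment(xhr.responseText, t);
// };
```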
What do you mean by a clock reference?
-Charles