Contents

1 Introduction

Suppose you're at an institution whose members generate video and audio assets. How should these assets appear on the web, and how do you deliver them? This article outlines a multimedia encoding and archiving workflow that is ideally suited for Multiformat Media Delivery.

I argue that the members of your institution should focus on producing their assets, and that their work ends once the asset has been produced in full quality (e.g. as a DV movie or uncompressed audio, together with metadata and other additional data).

Once the asset has been produced, the institution should take over and

archive the asset suitably, and also

make the asset available publicly in suitable formats.

There is no reason why the members of your institution should have an in-depth understanding of video encoding, or of the various delivery channels for which you may want to encode your videos. Video and audio production is difficult enough, and challenging in terms of the technical knowledge required, pedagogy, creativity, etc. My view is that almost all training should be invested in improving production, while the institution takes care of delivery. (There are of course exceptions to this, but in sufficiently large institutions, centrally coordinated delivery is viable, and desirable.)

This article outlines a blueprint for a multimedia encoding workflow.

(See section on updates below!)

2 Overview

The workflow proceeds in stages: assets and their accompanying data are ingested, placed in secure storage, transcoded and analysed, transferred to high performance storage, and finally served to users and machines through various modes of access. The sections below describe each stage in turn.

3 Ingest

You'd want to capture:

Multimedia data, i.e. video and audio clips, as well as images. Ideally these would be ingested in the highest quality available. Where the original sources have been lost, you may need to ingest several encoded formats, the highest quality version of which becomes the new source. You might also have 'composite' data, e.g. an audio file and a set of images, together with synchronisation information.

Metadata. Text-based metadata, including a basic description of the media as well as technical metadata. This might include a transcript, chapter information, and so on.

Rights data. Ideally you would capture information relating to rights. Imagine a scenario where the movie is a 'feature film', where rights are highly composite, or a 'lecture', where your institution may only acquire partial rights in the material. This could be scans of release forms, copyright clearance information, etc.

Auxiliary data. There may be additional data that you want to store alongside the multimedia data: further images, a PDF of the presentation given that day, information that isn't captured by the metadata record, perhaps relational links to other records on the system.

Clearly the distinction between metadata, rights data and auxiliary data is somewhat fluid, but it's worth bearing these different aspects in mind.

Although this step is referred to as 'ingest', it also includes amending data at a later stage.
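The ingest package described above can be sketched as a simple data structure. This is only an illustration of the four kinds of data being captured together; the field names and sample values are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class IngestPackage:
    """One deposited asset plus everything captured alongside it."""
    asset_files: list              # full-quality sources, e.g. a DV movie
    metadata: dict                 # descriptive and technical metadata
    rights: dict                   # release forms, clearance information
    aux_files: list = field(default_factory=list)  # slides, extra images, ...

# A hypothetical lecture deposit:
pkg = IngestPackage(
    asset_files=["lecture.dv"],
    metadata={"title": "Opening lecture", "transcript": "Good morning..."},
    rights={"clearance": "partial", "release_form": "scan042.pdf"},
    aux_files=["slides.pdf"],
)
```

Keeping the four aspects as separate fields makes the fluid boundary between them explicit: moving an item from auxiliary data to metadata is a deliberate decision, not an accident of file layout.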

4 Secure storage

The next step in the process is secure storage. I call this 'secure storage' to distinguish it from the high performance storage discussed later.
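One concrete aspect of 'secure' is being able to detect silent corruption of the archived sources. As an illustration (not part of the original workflow description), a checksum manifest could be written at ingest time and re-verified periodically; the manifest filename is an assumption:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large DV files aren't read into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(asset_dir: Path) -> Path:
    """Record a checksum per file; re-run later to detect bit rot."""
    manifest = asset_dir / "MANIFEST.sha256"
    lines = [
        f"{sha256_of(p)}  {p.name}"
        for p in sorted(asset_dir.iterdir())
        if p.is_file() and p.name != "MANIFEST.sha256"
    ]
    manifest.write_text("\n".join(lines) + "\n")
    return manifest
```

The two-space format matches what `sha256sum -c` expects, so verification can be scripted without any custom code.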

5 Transcoding and analysis

Materials are picked up from the secure storage, then transcoded and analysed. Several processes operate in this area.

Generation of various movie formats for Multiformat Media Delivery. The source materials are transcoded into a range of encoded multimedia formats, which are transferred to the 'high performance storage'. Suitable fields from the metadata are of course embedded into the encoded formats generated.

Generation of an archiving format. You might permit users of the system to upload in a variety of formats with varying degrees of openness. Hence all incoming materials are transcoded into an archival format that has relatively good chances of remaining accessible. This archival format is not used immediately, but once the original source material becomes inaccessible, the archival format can be used instead. The archival format will likely degrade the original source material in some way, but that's OK, as the emphasis is on long term storage, not the highest possible image quality.

As well as a reasonably high quality archival version, you might also want to deposit an intermediate quality version in the archive. This covers the case where the presence of some materials on the 'high performance storage' is discontinued, or where users wish to access previews straight from the archive.

Generation of automatic metadata. The source materials are analysed, e.g. by automatic speech transcription, automated image analysis, or automated classification processes. Automatic metadata is merged with the original metadata and transferred to the high performance storage, perhaps as part of a database.

Generation of additional auxiliary data. You might also extract still images from movie materials. This additional auxiliary data is merged with the initial auxiliary data (as appropriate), and likewise transferred to the 'high performance storage' area.
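The multiformat transcoding step could be driven by a small table of delivery profiles. The sketch below only builds ffmpeg command lines rather than running them; the profile names, codecs and bitrates are assumptions for illustration, not a recommended encoding ladder:

```python
# Hypothetical delivery profiles; 'archive' uses open codecs (FFV1 video,
# FLAC audio) in line with the long-term accessibility goal above.
PROFILES = {
    "web_mp4": ["-c:v", "libx264", "-b:v", "1200k", "-c:a", "aac"],
    "podcast": ["-c:v", "libx264", "-b:v", "600k", "-vf", "scale=640:-2"],
    "archive": ["-c:v", "ffv1", "-c:a", "flac"],
}

def ffmpeg_command(source: str, profile: str, out_dir: str = "hps") -> list:
    """Build (but don't run) an ffmpeg command line for one target format."""
    # FFV1 needs a container that supports it, so the archive copy uses MKV.
    ext = "mkv" if profile == "archive" else "mp4"
    return (["ffmpeg", "-y", "-i", source]
            + PROFILES[profile]
            + [f"{out_dir}/{profile}.{ext}"])
```

Adding a delivery channel then means adding one dictionary entry, which is the point of centralising delivery: producers never see this table.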

6 High performance storage

I call this 'high performance storage' to distinguish it from the secure storage; of course it needs to be secure as well. Generating a large number of encoded movies is a time-consuming task that doesn't allow for fast recovery. The high performance storage area holds encoded movies, as well as metadata and auxiliary data. Metadata (which is held in XML format in the secure storage) may now be held in a high performance database.
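Moving metadata from archived XML into a query-friendly database could look like the following minimal sketch. SQLite and the record format are assumptions chosen to keep the example self-contained; the real schema would be richer:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical minimal metadata record as held in the secure storage.
SAMPLE = """<record id="talk-042">
  <title>Opening lecture</title>
  <duration>3120</duration>
</record>"""

def load_record(conn: sqlite3.Connection, xml_text: str) -> None:
    """Copy one XML metadata record into a table the web servers can query."""
    root = ET.fromstring(xml_text)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS records "
        "(id TEXT PRIMARY KEY, title TEXT, duration INTEGER)"
    )
    conn.execute(
        "INSERT OR REPLACE INTO records VALUES (?, ?, ?)",
        (root.get("id"), root.findtext("title"), int(root.findtext("duration"))),
    )

conn = sqlite3.connect(":memory:")
load_record(conn, SAMPLE)
```

Because the XML in secure storage remains the canonical copy, the database can always be rebuilt from scratch, which is exactly the recovery property the secure/high-performance split is meant to provide.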

7 Modes of access

The high performance storage is used by web servers, which may query the metadata database to generate web pages. Video servers access the high performance storage area to stream movies to the web, or to provide high performance HTTP access for downloads. The high performance storage area also presents machine interfaces, allowing data to be obtained via OAI-PMH, media:rss, etc.
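The media:rss machine interface could be generated straight from the records in the high performance storage. The sketch below emits a minimal RSS 2.0 feed with Media RSS `media:content` elements; the channel title, URLs and item data are placeholders:

```python
import xml.etree.ElementTree as ET

# Official Media RSS namespace.
MEDIA_NS = "http://search.yahoo.com/mrss/"
ET.register_namespace("media", MEDIA_NS)

def build_feed(items) -> str:
    """items: (title, url, mime_type) tuples from the high performance storage."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Institutional media"  # placeholder
    for title, url, mime in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, f"{{{MEDIA_NS}}}content", url=url, type=mime)
    return ET.tostring(rss, encoding="unicode")

feed = build_feed([
    ("Opening lecture", "http://example.org/talk-042.mp4", "video/mp4"),
])
```

The same records could feed an OAI-PMH responder; only the XML envelope differs, which is why holding metadata in a database behind these interfaces pays off.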

8 Implementation

I started thinking about these ideas around 2004, but back then many of the available frameworks for implementation seemed quite resource intensive, and disk space seemed too expensive to archive everything. Now, in 2007, much of this has changed.

In particular, since the release of OS X Leopard, Podcast Producer is publicly available and offers a viable implementation strategy, using an Xgrid based workflow. Together with Episode Podcast, you can then implement a multiformat media delivery strategy; see Multiformat Media Delivery.