Digital Tape Preservation Strategy: Preserving Data or Video?

By David Rice and Chris Lacinak – December 2, 2009

Abstract: This paper examines preservation philosophies and strategies applied to large scale video collections that are both born-digital and tape-based. Technically and philosophically different approaches may be applied to migrating born-digital, tape-based content with decisions ranging from deck selection and choice of output to specifications of the resulting file. At the core of this is the distinction between migrating digital video as an audiovisual signal versus migrating it as data.

Introduction

In trying to conceptualize the issues around the migration of born-digital tape-based content we’re challenged to separate our normal associations with videotape and the video/audio signals from the fact that the content is stored digitally on the tape. It looks and acts like a legacy analog videocassette in many ways, but some of the underlying technology is different. This is why we may discuss the “migration”, and not “digitization”, of the content to the file-based domain. The content in this specific collection is already digital, born in a compressed DV codec.

Profile of DV tapes (miniDV and DVCam)

Composition: DVCam and miniDV tapes are metal-evaporated tapes designed to prioritize compact size and low cost. The lubricated tape is extremely thin, chemically complex, and susceptible to drop-outs, errors, and data loss. The tape is also finicky: damaged or deteriorated portions may play back with varying degrees of accuracy from one playback to another, or from one deck to another, depending on each deck's error correction and concealment strategies.

Compression and DV metadata: DV is highly compressed, but only spatially. Because DV uses no temporal compression, such as compressing groups of frames against one another, DV compression poses no challenge to any prominent modern computer-based editing system. Uncompressing a DV stream during transfer results in a file about five times larger than the original, increasing the storage and bandwidth requirements accordingly. An uncompressed derivative of DV may represent the audio and video of a DV stream but lose the DV file’s rich set of metadata from the originating camera and from the transfer off the deck. In instances where DV tapes are digitized directly to an uncompressed format, much of that metadata never transfers, creating a result that, though lossless visually and aurally, gains no quality and loses related, useful metadata. Each DIF sequence (a structural component of a DV frame equivalent to one head pass of the deck across the tape) in a DV stream may contain time code, the time and date from the recording device, camera and lens settings, closed captioning data, information on audio specifications, video standards, the relationship of the current frame to neighboring frames, information on where recordings start and stop, repeated frame data, and so on. The presence or absence of this metadata affects the future ability to utilize the content.
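The "about five times larger" figure can be checked with back-of-envelope arithmetic. The sketch below uses illustrative NTSC figures (a 120,000-byte DV frame and an uncompressed 8-bit 4:2:2 derivative at 720×486); exact sizes vary by capture pipeline.

```python
# Rough comparison of native DV storage vs. an uncompressed 8-bit 4:2:2
# transfer. Figures are for NTSC DV and are illustrative, not exhaustive.

DV_FRAME_BYTES = 120_000   # one NTSC DV frame: 10 DIF sequences x 150 DIF blocks x 80 bytes
FPS = 30_000 / 1_001       # NTSC frame rate (~29.97 fps)

dv_mbps = DV_FRAME_BYTES * 8 * FPS / 1_000_000

# Uncompressed 8-bit 4:2:2 at 720x486 averages 2 bytes per pixel
UNC_FRAME_BYTES = 720 * 486 * 2
unc_mbps = UNC_FRAME_BYTES * 8 * FPS / 1_000_000

ratio = unc_mbps / dv_mbps
print(f"native DV: {dv_mbps:.1f} Mb/s, uncompressed 4:2:2: {unc_mbps:.1f} Mb/s, ~{ratio:.1f}x larger")
```

Under these assumptions the uncompressed derivative is roughly 5.8 times the size of the native stream, consistent with the factor of five cited above.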

To understand this issue a little better, let’s first look at a format that is more commonplace and consider preservation strategies for an audio collection that was created using a CD recorder. Let’s imagine the Audio CDs are born-digital, unique, and the focus of a preservation project that seeks to start with Audio CDs and end with files stored on a server. The two distinct methodologies referenced in the abstract, applied to this scenario, would yield one approach focusing on the CDs as audio and one focusing on the CDs as data (keep in mind that these are Audio CDs, not Data CDs).

Option 1: The Transcoding Approach

Play the discs back in a CD player in real time.

Connect the digital audio output (SP-DIF or AES/EBU) of the CD Player to the digital input of an internal or external soundcard.

Record in real time using software such as WaveLab, Peak, ProTools etc.

This workflow essentially replicates analog digitization by applying a workflow designed for analog audio cassettes to digital Audio CDs. The migration occurs in real time and there are minimal data integrity monitoring capabilities. Information such as track IDs and the separation of tracks is lost in the migration and must be recreated manually.

Option 2: The Native Approach

Software facilitates the transfer, but essentially the data is copied bit for bit from the CD to a set of files. Many operating systems and applications will interpret the raw audio sample data of the CD as a WAV or AIFF file. When the content is read this way, the resulting file adds a header to store technical information that is inherent to the Audio CD. Information such as track IDs is preserved through the automatic creation of individual files per track. This method also enables the use of data integrity monitoring.

The Native Approach is highly accessible, easy, cheap, and common, and it maintains the integrity of the original to a much greater degree. For both the casual user looking to rip CDs onto an iPod and the archivist seeking precise preservation, this second, data-focused approach is the most widely used for migrating Audio CDs to files.

In the example of digitization strategy for audio CDs, handling the disc as data is highly preferable for many reasons:

Accuracy: The results will more accurately represent the original.

Speed: The migration from disc to file may be performed faster than real-time.

Workflow: Rather than queuing ‘Record’ on one machine and ‘Play’ on another, a single human action can coordinate the entire transfer.

Integrity: Digital recordings held on discs or tapes may only be copied bit for bit if the data can be properly read. Scratches or other damage prevent accurate reads, in which case the hardware may try to conceal the missing data, fail the process, or report on it. With Audio CDs, programs such as cdParanoia and Exact Audio Copy report specific information on the quality or integrity of the migration. One can save these reports to document the quality of the migration or act on the information to try to improve the migration process, such as cleaning the disc or trying another drive.
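One simple integrity technique in this spirit, akin to Exact Audio Copy's test-and-copy mode, is to read the same track twice and compare checksums: matching digests indicate a consistent read, while a mismatch signals a read problem worth investigating. The sketch below illustrates the idea with standard-library hashing; the file names are hypothetical placeholders.

```python
# Compare two independent rip passes of the same track by SHA-256 digest.
# Matching digests indicate the drive returned the same bits both times.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def reads_match(pass1: str, pass2: str) -> bool:
    # True when both rip passes produced identical data
    return sha256_of(pass1) == sha256_of(pass2)
```

The saved digests double as documentation of the migration's integrity for the preservation record.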

Optimizing the Workflow

According to the IASA Guidelines on the Production and Preservation of Digital Audio Objects (2004), “Under optimal conditions digital tapes can produce an unaltered copy of the recorded signal, however any uncorrected errors in the replay process will be permanently recorded into the new copy… Optimization of the transfer process will ensure that the data transferred most closely equates to the information on the original carrier.” Optimization results in the direct bit-to-bit migration of the digital object from tape to file. Any errors which occur during reproduction will permanently become part of the new digital object. Playback devices typically contain some sort of safety or error concealment mechanism in order to prevent or mask the kinds of playback errors that one would not want recorded into a new copy.

The IASA Guidelines recognize this and go on to state, “In reality, effects of standards conversion, re-sampling or error concealment or interpolation may result in data loss or distortion in copies, and deterioration over time degrades the quality of original recordings and subsequent copies.” Attempts to improve the quality of the audio through conversion, error concealment, or other means cause changes to the original bits themselves, thus undermining the authenticity of the original and inhibiting functionalities available with the original object.

There is a nuanced but vitally important distinction to be made here. The Transcoding approach incorporates the error concealment into the decoded output. In the case of DVCam decks there are two categories of outputs: audiovisual signal outputs, such as Composite and SDI, and data stream outputs, such as FireWire and SDTI. The audiovisual signal outputs treat error concealment as an indistinguishable part of the signal; the data outputs, however, maintain the video error concealment macroblock status markers and audio error codes written into the stream during playback, which can then be used to identify and evaluate the extent and location of video errors throughout the stream. With the audiovisual outputs, the interpolation choices made by the deck become a permanent part of the signal, not distinguished from any other part of the signal in any way. Taking an audiovisual signal out of the deck accepts the interpolated bits, whereas a data stream output enables the user to evaluate and act on the interpolation. The error concealment choices and locations are labeled within the stream and discoverable by tools such as DV Analyzer. One is able to identify all bits where the deck failed to accurately read the data from the tape. As in a workflow using cdParanoia or Exact Audio Copy on Audio CDs, the user can respond to a DV Analyzer report by attempting playback in another deck, repacking the tape, or working to improve playback conditions.
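To make the data-stream advantage concrete, the simplified sketch below scans a raw DV stream for those concealment markers. A DV stream is a series of 80-byte DIF blocks; each video DIF block carries a 4-bit STA (status) field, and a nonzero STA indicates the deck flagged that block's data as errored or concealed (per the DV DIF layout in IEC 61834-2). This is an illustration of the principle, not a full parser like DV Analyzer.

```python
# Count video DIF blocks flagged with a nonzero STA (status) nibble,
# i.e. blocks where the deck signaled error concealment during playback.
# Simplified sketch based on the 80-byte DIF block layout of DV.

DIF_BLOCK = 80

def count_concealed_video_blocks(dv_bytes: bytes) -> int:
    flagged = 0
    for off in range(0, len(dv_bytes) - DIF_BLOCK + 1, DIF_BLOCK):
        sct = dv_bytes[off] >> 5          # section type: top 3 bits of ID byte 0
        if sct == 4:                      # 4 = video DIF block
            sta = dv_bytes[off + 3] >> 4  # STA nibble in the first video data byte
            if sta != 0:
                flagged += 1
    return flagged
```

A migration over an audiovisual output has no equivalent of this check: the concealed macroblocks arrive indistinguishable from properly read ones.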

Transcoding would force the future encoder to interpret whatever new code is produced by error concealment, which it may not be able to do correctly. IASA has documented this in the case of CD transfers as audio signal where “no record of error correction is maintained in the record metadata” (Guidelines on the Production and Preservation of Digital Audio Objects 2004). If the original metadata from the reading of the digital object is lost, then the ability to work with it and act on it is lost. But also, there is no record of changes to the original data, and thus it is unknown where concealment or changes actually occurred. Provenance and authenticity are disregarded in this case.

With DV playback, video error concealment attempts to conceal improper reading of DV tape by, among other strategies, substituting information from the previous frame. Often the result works very well:

The image on the right shows a frame of DV as output over FireWire. The tape was not read properly, and the deck used a significant amount of video error concealment. This is especially noticeable on the wall on the right side of the image, where the image becomes blocky and glitchy. The image on the left was made by substituting every macroblock noted to have concealment with a white macroblock in order to show the extent of the concealment.

As we can see in the example above, the full extent of the error concealment is not apparent until the concealed macroblocks are revealed. The chosen method for error concealment works well according to our eyes in the more static parts of the image here, but in many cases, particularly in scenes with motion, the results are not desirable.

See this article for more information about this file and video error concealment.

The video above is an example of such undesirable results. The error concealment in the playback deck is trying to compensate for data errors by replacing unreadable information in the current frame with information from a previous frame. What works well in video with little movement causes a highly distorted result in moving images. The Transcoding approach would encode this error concealment as a digitally indistinguishable part of the video frame itself, permanently changing the object in a non-reversible way. A data stream output over FireWire retains information that explicitly notes where and how much concealment occurred, so that the result may be identified and evaluated.

The Case-by-Case Basis

There are circumstances where the Transcoding approach for digital video tapes is necessary and recommended. This is the case when the original tape-based codec is obsolete, esoteric, or highly proprietary, such as with Digital Betacam, Digital VHS, D1, D2, and D3 tapes. These factors place the content at risk of loss and require transcoding to a more sustainable format in order to preserve the content, as hardware to play back the digital video is increasingly scarce while software to play back the digital video can often be non-existent. It is still advisable in these cases, in keeping with traditional preservation principles, to maintain the original, as there may be valuable data and future unanticipated uses of the original.

The Case of DV tape

With DV tape we are once again faced with a physical format that is dependent on certain hardware for playback. However, DV is a published standard that is well documented and is supported by both major hardware and software manufacturers.

Still, the conflation of a digital signal and a tape-based video carrier brings about confusion on how to approach migration. The hardware for playback offers multiple types of signal outputs, and the selection of output raises a key question: should the information on the tape be treated purely as audiovisual information, similar to legacy video formats, or should it be treated as a stream of data? The fact is that DV tape contains a great deal of information beyond the audiovisual signal.

Selecting the FireWire output of a DV deck treats the DV tape as data and migrates all of the above data in its native format. Selecting the Serial Digital Interface (SDI) output from a DV deck treats the DV tape like a legacy videotape: it discards a great deal of metadata, decompresses the video, and blocks information on whether the deck used error concealment or produced a drop-out.

You may be asking, why does this other metadata matter?

Authenticity: DV attaches metadata to every recorded frame that can tie that frame to its place in the production chain. Inconsistencies in the timestamp, timecode, and camera information that occur during filming or editing can be identified and tracked.
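As a sketch of how such inconsistencies can be tracked, the function below flags discontinuities in per-frame recording timestamps (the kind of values a tool such as DV Analyzer can extract from the stream). A jump backward, or forward by more than the expected frame-to-frame interval, typically marks a new recording, an edit point, or something worth documenting. The one-second tolerance reflects the one-second resolution of DV recording-time metadata and is an assumption of this example.

```python
# Flag frames where the recorded clock jumps backward or ahead by more than
# a tolerance -- a sign of a recording start/stop or an edit. Timestamps are
# assumed to have been extracted from the DV stream already.
from datetime import datetime, timedelta

def find_discontinuities(frame_times: list[datetime],
                         tolerance: timedelta = timedelta(seconds=1)) -> list[int]:
    jumps = []
    for i in range(1, len(frame_times)):
        delta = frame_times[i] - frame_times[i - 1]
        if delta < timedelta(0) or delta > tolerance:
            jumps.append(i)  # index of the first frame after the jump
    return jumps
```

A report of these indices gives the archivist a frame-accurate map of where the recording history changes.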

Error identification: As addressed above, errors or concealments that occur during playback can be permanently encoded into the migrated datastream. A bit-for-bit transfer captures metadata that describes the error-related processes within the deck. This data can be analyzed for causes and possible solutions to determine if a second transfer is warranted or to identify potential issues in the workflow. Further information about different kinds of errors can be found under the DV Analyzer: Case Studies heading here.

Efficiency: Because so much information about the digital transfer itself is encoded in the stream, that data can be leveraged to perform targeted, efficient analysis and quality control with tools such as Live Capture Plus and DV Analyzer. During a high-throughput migration of content, quality control must be selective and focused. Putting on headphones and viewing every sample may not be practical or affordable. Tools that solely analyze the decoded audiovisual playback may play a useful role but may also under- or over-report on errors through estimations rather than responding to quality control information that is explicitly documented in the DV stream output of a FireWire cable. If the tools managing the migration of the content can provide information about the work being done, especially the parts of the migration pertinent to errors, we can narrow our quality control to areas where the errors are known.
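This kind of error-driven triage can be sketched simply: given per-frame concealment counts (e.g. from a DV Analyzer report), review only the regions surrounding frames whose error count crosses a threshold, rather than auditing the whole tape. The threshold and context window here are arbitrary example values.

```python
# Select frames for manual quality control based on reported per-frame
# error counts, including a few frames of surrounding context.

def frames_to_review(error_counts: dict[int, int],
                     threshold: int = 10, context: int = 15) -> list[int]:
    review = set()
    for frame, count in error_counts.items():
        if count >= threshold:
            # include neighboring frames so the reviewer sees the error in context
            review.update(range(max(0, frame - context), frame + context + 1))
    return sorted(review)
```

On a tape with errors clustered in a few spots, this reduces hours of review to minutes while still catching every documented problem area.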

Integrity: Although SDI is an interface specifically designed for digital data, it does not carry DV without decompressing and transforming the original data in a way that blocks valuable metadata.

DV is not uncompressed (the data must be decompressed from 25 Mb/s to 270 Mb/s which is a lossy process yielding a great deal more data)

DV is not 4:2:2 as SDI output is (additional color data must be interpolated)

Sending DV over SDI requires extra data processing such as upsampling and decompressing the original data

SDI does not contain the space necessary to transmit the full set of metadata defined in the DV specifications as FireWire does

As noted in the previously referenced IASA Guidelines, not only can errors and error concealments change the original code of a digital object, but so do any kinds of conversion or re-sampling, even if those conversions are to a higher resolution format. Upsampling and decompressing a DV stream in order to migrate it, even if you are re-compressing it into its original format, can produce lossy results.

Conclusion

When approaching migration of tape-based digital media, such as DVCam or miniDV, using a data-transfer migration approach (FireWire) instead of an audiovisual signal output approach (SDI) better enables the attainment of preservation principles and provides much greater flexibility by providing the following:

It retains the data in its original form with no new encoding, loss of metadata, or digital generation loss.

In addition to the audio and video data it also retains the time code, content metadata and recording information.

It preserves the ability to emulate the original tape and maintains the original media’s function (i.e., leveraging and utilizing components of the original data structure for quality control and other activities).

It enables effective identification of error concealment and other error types while maintaining the ability to address integrity, exhibition, and collection management considerations.

It enables capture software to generate provenance metadata regarding the capture process.