Many users of Apple's Final Cut Pro start editing with miniDV, the popular digital video format. DV not only offers great quality at an affordable price, it is also very easy to edit on a computer, and especially easy with FCP. Many people used to think that FCP was "DV only", but the renaming of FCP to Final Cut Pro HD reminds us that FCP works with practically every video format available, whether high definition or standard definition, either "natively", like DV, or with the use of special capture cards.

In this article I will look "Beyond DV" in two ways: first at the higher quality formats DVCPro50 and DVCProHD, both of which can be edited in FCP almost as easily as DV, and then at 24p.

Compression

Before digital video tape, we used to talk about "digitizing" when capturing our video into our non-linear editor. That is because analogue video needs to be converted to digital video before a computer can understand it. Now that most of the video we use is already digital, the term "digitizing" is no longer appropriate, and we can simply say "capturing" instead.

Practically all video tape formats use some kind of compression to fit the vast amounts of data that video takes onto convenient, affordable tapes. This compression means that there are two main methods of capturing digital video to a non-linear editing system. Either the compressed digital data on tape can be captured directly as a bit-for-bit copy to the NLE's hard drive system, or the video recorder can uncompress the video before transfer to the NLE, in which case it is stored on the hard drive as uncompressed video. Although Firewire is often used to transfer the compressed data to the hard drive as in the first method, Firewire can also be used to transfer uncompressed standard definition video. (For instance, the AJA-IO transfers uncompressed video from the IO breakout box to FCP over a Firewire cable.)
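To put numbers on why that compression matters, here's a quick back-of-the-envelope sketch in Python. The 720x486 raster and 2 bytes per pixel (8-bit 4:2:2) are my assumed figures for uncompressed NTSC SD, not values from FCP itself:

```python
# Rough data-rate comparison: uncompressed 8-bit 4:2:2 NTSC SD vs DV's 25 Mbps.
# The raster and sample sizes below are assumptions for illustration.
width, height = 720, 486        # active picture, NTSC SD (assumed)
bytes_per_pixel = 2             # 8-bit 4:2:2: one luma plus half of each chroma
fps = 30000 / 1001              # 29.97 frames per second

uncompressed_mbps = width * height * bytes_per_pixel * 8 * fps / 1_000_000
print(f"uncompressed SD: about {uncompressed_mbps:.0f} Mbps, vs 25 Mbps for DV")
```

Roughly 168 megabits per second against DV's 25, which is why capturing uncompressed demands so much faster and larger storage.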

There are two other methods of transferring digital video, SDI and SDTI. SDI is "Serial Digital Interface", a standard video interface for transferring uncompressed video. SDTI is "Serial Data Transmission Interface", a variant of SDI for transferring compressed video data.

SDI is used to transfer uncompressed video. The Decklink, Kona, Aurora and Cinewave cards all support SDI and can be used to transfer uncompressed video into FCP. SDTI is not currently supported by any FCP capture card, but it can be used to transfer compressed video to a supported NLE.

From this, you can see that you can't always determine whether compressed or uncompressed video is being captured by looking at the kind of connecting cable! For what it's worth, all video formats that FCP currently deals with natively are transferred over Firewire.

Native?

What do we mean when we say that "FCP works with DV natively"? Editing compressed video in an NLE like Final Cut Pro depends on the availability of software codecs that match the compression of the video. When an NLE works with compressed video via the appropriate codec, we say that the NLE edits that kind of video "natively". For example, FCP can edit DV, DVCAM, DVCPro, DVCPro50 and DVCProHD natively, as FCP comes with the appropriate codecs. The Sony Xpri system, by contrast, can edit HDCAM natively, whereas FCP cannot and must edit HDCAM uncompressed. This is because FCP does not have access to a software HDCAM codec.

Tape formats Natively Supported in Final Cut Pro

Format        Recorded Res.   Playback Res.   Data Rate   Chroma     Quantisation   Notes
              (H x V)         (H x V)                     Sampling
miniDV / DV   720 x 480       720 x 480       25 Mbps     4:1:1      8-bit          *
DVCAM         720 x 480       720 x 480       25 Mbps     4:1:1      8-bit          *
DVCPro        720 x 480       720 x 480       25 Mbps     4:1:1      8-bit          *
DVCPro50      720 x 480       720 x 480       50 Mbps     4:2:2      8-bit
DVCProHD      960 x 720       1280 x 720      100 Mbps    4:2:2      8-bit          720p60
DVCProHD      1280 x 1080     1920 x 1080     100 Mbps    4:2:2      8-bit          1080i60

* Identical picture quality, but DVCAM and DVCPro are more robust tape formats that offer greater resilience to dropouts.

Why would I want to use DVCAM, DVCPro, DVCPro50 or DVCProHD over miniDV?

All the above formats can be used natively in FCP. However, some offer better picture quality than others, and some offer better robustness against tape dropouts than normal miniDV.

Both DVCPro and DVCAM are "pro" versions of miniDV, with DVCPro made by Panasonic and DVCAM made by Sony. Both are more robust than miniDV in that the tapes run faster and are of higher quality. Neither offers any picture quality benefit over miniDV because miniDV, DVCPro and DVCAM all use the same codec at the same data rates. Because these formats all record at 25 megabits per second, DVCPro is often called DVCPro25 to differentiate it from the higher quality DVCPro50.

The DV codec compresses video in two ways to fit high quality video onto a small, affordable tape at low data rates. The first method records the colour part of the video signal at a lower resolution than the brightness information. This is not as bad as it sounds, but can be troublesome for certain operations like chroma keys. In DV, the colour resolution is reduced to 1/4 of the full 720-pixel horizontal resolution, which means that on each horizontal line of the picture, the colour is represented by 180 pixels. The second method affects both brightness and colour information and works very much like JPEG; indeed, both share the same core compression technology, DCT or "Discrete Cosine Transform". The overall compression of DV is about 5:1.

Panasonic improved DVCPro to create DVCPro50 by lowering the amount of compression used. First, they reduced the chroma resolution only to 1/2 of the full 720-pixel horizontal resolution, which means the colour is represented by 360 pixels horizontally. The compression is still DCT-based, but it is reduced so that the overall compression is about 3:1 and the final data rate is twice that of DV at 50 megabits per second, hence the name DVCPro50.
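A couple of lines of Python make the chroma arithmetic of the last two paragraphs concrete:

```python
# Colour samples per line for the subsampling schemes mentioned in the text.
luma_samples_per_line = 720                 # full horizontal resolution
chroma_411 = luma_samples_per_line // 4     # DV's 4:1:1 -> 180 colour samples
chroma_422 = luma_samples_per_line // 2     # DVCPro50's 4:2:2 -> 360 colour samples
print(chroma_411, chroma_422)
```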

DVCProHD is a further advancement of DVCPro to allow the recording of high definition (HD) video compressed to 100 megabits per second. Although the Panasonic Varicam only records at 720p (see Naming Formats and Frame Rates), the DVCProHD format itself is capable of both 720p and 1080i. Again, the compression uses similar methods: reducing the resolution of the colour to 1/2, and using JPEG-like compression on the whole image. In addition, to help squeeze the high HD resolution onto the tape, some of the overall resolution is sacrificed, so even though the pixel dimensions of 720p HD are 1280x720, the DVCProHD format only records at 960x720, which is expanded back up to full resolution on output. Some detail is lost in this process, but it does help fit the enormous amount of data needed for HD onto a small tape. Similarly, for 1080i recording, the full 1920x1080 resolution is reduced to 1280x1080, and expanded back up on output.
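The horizontal "squeeze" is easy to express as a sketch. The figures come straight from the text; the percentage formatting is just for illustration:

```python
# DVCProHD records a narrower raster than it displays (figures from the text).
rasters = {"720p": (1280, 960), "1080i": (1920, 1280)}  # (displayed, recorded) width
for name, (displayed, recorded) in rasters.items():
    print(f"{name}: records {recorded} of {displayed} pixels per line "
          f"({recorded / displayed:.0%} of full horizontal resolution)")
```

So 720p keeps 75% of its horizontal detail on tape and 1080i about 67%, with the interpolation back up to full width happening on output.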

Clone Dubbing

Some video recorders, such as Digital Betacam decks, do not give the outside world access to the compressed data on the tape, so their formats cannot be edited natively. This also means that you cannot make a perfect clone dub of a Digital Betacam tape.

The compressed output, which we use to edit a video format natively, also allows a perfect clone of a digital tape to be made. You can easily make a perfect dub (within the bounds of dropout and data-related errors, which might, theoretically, over many repeated dubs reduce the picture quality) by connecting a Firewire cable from one DV deck to another, pressing play on the first and record on the second. Some DVCAM decks will also copy the timecode from the original over to the new copy.

When tape-to-tape dubbing via an uncompressed SDI output, you cannot make a clone copy. This is because the video must be uncompressed on the play-out machine and then re-compressed on the recorder, which will theoretically reduce the picture quality (perhaps only slightly in the case of formats like Digital Betacam), so the result is not a perfect clone. In practice, dubbing from Digital Betacam to Digital Betacam over SDI is visually lossless over hundreds of generations, and indeed the error correction used in Digital Betacam is so good that tapes with errors on them can often be improved through a Digital Betacam to Digital Betacam dub over SDI.

Natively Supported Cameras

Footage from every miniDV and DVCAM camera can be edited natively in FCP. To keep this list to a manageable size, I have only included cameras that are either 24p capable or offer higher recording quality than DV.

Native editing can offer many advantages over an uncompressed approach to editing. The major advantage is the one that sparked the DV revolution in the first place: the lower requirements for the speed and capacity of your hard drive storage system. Before DV, video was generally edited on an NLE with a special capture card and either low, or no, compression. This generated massive files that needed large arrays of fast SCSI drives. The whole system, with computer, capture card and storage, could be very expensive indeed.

DV changed that because it could be edited natively, and the 25megabits per second data rate was well within the specifications for the standard IDE hard drives of the time. Apple had invented Firewire as a replacement for SCSI for connecting hard drives and other devices to the computer, and Sony had renamed it iLink and put it on their camcorders. With FCP 1 we were welcomed to the world of native editing because FCP came with a DV codec.

As well as taking less space on your hard drive than uncompressed video, native video can also be put back to video tape without loss if no rendering has occurred. This means that if you capture some DV, perform a cuts-only edit, and then send it back to DV tape, the very same bits of data that made up the original video images on tape are transferred back to tape without alteration, and the picture quality is identical to that with which you started.

Indeed, even when rendering is involved, there is no advantage to editing video uncompressed instead of natively if you are going back to the format you started from. So, if you are editing DV, DVCAM, DVCPro50 or DVCProHD, and are recording the finished edit back to the same tape format, the video will have gone through exactly the same process of decompression and compression as if you had edited it uncompressed. Let me explain:

Native Workflow

When FCP adds an effect or transition to a piece of video, if the video is not already uncompressed, FCP must decompress it. If the video is native, FCP has the codec to do that decompression, and it will only recompress to that codec after all the filters and effects that have been applied have been rendered. That means there is precisely one decompression of the video and one recompression. No further compression is needed to take the video back to tape, because it is already compressed in precisely the format the tape needs, and so a plain data transfer can take place.

Uncompressed Workflow

When you edit uncompressed video in FCP, the video is decompressed by the deck before it is captured. Because the video on the timeline is uncompressed, FCP needs neither to decompress nor to compress it when effects or transitions are applied. However, because the video on the tape is compressed, when we output uncompressed video to tape, the recorder must compress it. This means that precisely one decompression and one recompression have occurred, the same number as in the native workflow.
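If it helps to see the bookkeeping, here is a toy tally of codec passes in the two workflows. The stage labels are invented for illustration; this is a sketch of the argument, not FCP's actual pipeline:

```python
# Count matched decompress/compress pairs: each pair is one "generation".
def generations(stages):
    return min(stages.count("decompress"), stages.count("compress"))

native = ["firewire capture", "decompress", "render effects",
          "compress", "firewire output"]
uncompressed = ["decompress",              # done by the deck on play-out
                "sdi capture", "render effects", "sdi output",
                "compress"]                # done by the recorder on layback
print(generations(native), generations(uncompressed))
```

Either way, the video suffers exactly one decompression and one recompression between source tape and master tape.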

So, does editing native have any disadvantages? Well, if the native FCP version of the codec is not very good, as the DV codec was with version 1 of FCP, then quality losses can occur. However, the current DV, DVCPro50 and DVCProHD codecs in FCP are excellent, and this is no longer a worry.

You might be able to get slightly more realtime performance with uncompressed video, as there is less load on your computer's processor, which does not have to decompress the video for playback. However, for this to be an advantage, you must have very fast and large hard drives.

The major disadvantage of editing native is that the native codec you need might not be available for your edit system. FCP owners would love to be able to edit Sony's HDCAM format natively in FCP, but because Sony has not made the HDCAM codec available to Apple, this is not possible.

Native editing, however, might not always fit in with your particular workflow. If, for instance, you're shooting on DV but mastering to Digital Betacam, it might make more sense to adopt an uncompressed workflow, perhaps bringing in your DV over SDI. This is because it is best not to render any effects or graphics to the DV codec if you are mastering back to a deck whose format is other than DV.

What Hard Drive Options Do I Have?

As a prelude to the discussion of hard drive systems, let me begin with a few words from the FCP 4 manual, Volume 1, page 44: "While not recommended for all users, Firewire drives can be effectively used to capture and edit projects using low data rate video clips, such as those captured using the DV codec. However, most Firewire drives lack the performance of internal Ultra ATA drive or of internal or external SCSI drives." I would recommend reading the full section of the FCP manual if you intend to use Firewire drives in your edit suite. I will point out, however, that many of us long-term FCP users have used Firewire drives for DV and beyond without any problems, but your mileage may vary!

The major options for hard drives fall into categories based upon how they connect to your Mac:

Firewire

Firewire drives are affordable and connect to your Mac with a simple Firewire cable. They are suitable for the low data rates used by the native formats above, but you should check with the drive supplier as to whether it supports the necessary data rates and has been tested for such use. Firewire comes in two flavours, the original Firewire400, and the new Firewire800. Firewire800 is not widely supported outside the Mac world, but offers a useful speed increase over Firewire400.

ATA

ATA drives are internal to your Mac. Some older Macs can easily have 3 or 4 extra ATA drives installed, and they make a very affordable way of adding extra storage to your Mac. With the G5, however, ATA was replaced by SATA.

SATA

The G5's internal drives are Serial ATA, or SATA as it is called. One extra internal drive can be added to a G5, and this is suitable for editing the native formats above. Extra SATA drives can be added by means of a third-party bracket and SATA PCI card. SATA drives can also be added externally to a Mac. The new SATA II format should provide enhanced performance and become the method of choice for adding external drives in the years to come.

SCSI

SCSI comes in many flavours, but has always been the traditional way to add large amounts of fast storage to a Mac for video editing. SCSI is not necessary for native format editing, but if you need large amounts of storage it can still be an effective option.

Fibre

Fibre is used by the Xserve RAIDs to connect to your Mac. The Xserve RAID provides large amounts of very fast storage and is ideally suited to video projects and formats of all kinds, whether native or uncompressed. It is, however, very large, heavy and expensive. There are also other fibre-based solutions available from other manufacturers.

24p

Even when using DV natively over Firewire, there are new options now available, like 24p. 24p has become a real buzzword over the last couple of years as video shooters look for the fabled and elusive look of film. The characteristics of film differ from video in many ways: resolution, colour depth, contrast, grain and frame rate, to name a few. 24p addresses the issue of frame rate. Normal NTSC video is shot at 60 (actually 59.94) interlaced fields per second (60i). Because we see 60 discrete intervals of time per second represented in the video, motion is very smooth; however, this motion is a dead give-away that video was used, not film. Even to non-techies, film is associated with quality and expense, so one of the easiest ways to make your video look like film is to shoot it at the frame rate film is shot at, and that rate is 24 frames per second. Now that sounds simple, so you may ask why all video cameras don't allow you to shoot at 24 frames per second. Well, video was never designed for such a frame rate, and indeed 24fps is incompatible with normal NTSC video (and PAL for that matter). To allow a video camera to shoot 24fps and record it to tape in such a way that you can play it back on a normal TV, a bit of televisual trickery needs to be performed.

When colour was brought to NTSC video, the frame rate had to change from the whole number of 30 frames per second (60 fields) to 29.97 frames per second (59.94 fields) to stop the audio subcarrier interfering with the newly added colour subcarrier and producing artifacts. That also means that in video, the film frame rate of 24 frames per second is really 23.98 frames per second. But we still need our 29.97 frames per second. This is done by recording each of the 23.98 frames per second to tape with duplicate frames and fields to pad out the gaps. Traditionally, this was always in a 3:2 pattern, hence the name 3:2 pulldown. More recently, introduced with the Panasonic DVX100, a new "advanced" 2:3:3:2 pattern was used which, although it looks terrible on video playback, allows FCP to remove the duplicate frames and fields very easily, leaving behind the original 23.98 frames per second the camera shot. The original frames look great, and are in fact of slightly higher quality than if 3:2 pulldown had been used, because no decompression of the video is needed to remove the unwanted frames.
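The two cadences are easy to sketch. Four film frames (A to D) are padded out to ten fields, which pair up into five video frames:

```python
# Pad film frames to fields per the given cadence, then pair fields into
# video frames. A frame like "BC" mixes two different film frames.
def pulldown(cadence, film_frames="ABCD"):
    fields = [f for f, n in zip(film_frames, cadence) for _ in range(n)]
    return [fields[i] + fields[i + 1] for i in range(0, len(fields), 2)]

standard = pulldown((2, 3, 2, 3))  # traditional 3:2 pulldown
advanced = pulldown((2, 3, 3, 2))  # the DVX100's advanced pulldown
print(standard)  # ['AA', 'BB', 'BC', 'CD', 'DD'] - two mixed frames
print(advanced)  # ['AA', 'BB', 'BC', 'CC', 'DD'] - only one mixed frame
```

With the advanced cadence, only one video frame ("BC") mixes two film frames, so FCP can discard that single whole frame and keep the other four untouched, which is why removal is so easy.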

The Varicam offers a different approach to shooting 24p because the DVCProHD format it uses is designed to always record 60 (59.94) progressive frames per second. The Varicam takes its name from its ability to record at any frame rate between 4 and 60 frames per second. If the camera is set to record fewer than 60 frames per second, duplicate frames are used to fill in the gaps, and flags are added to the electronic data on tape to tell FCP that those frames are duplicates to be discarded.
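Here is a sketch of the idea for 24 frames per second material. The alternating 2:3 repeat cadence is my assumption for illustration; the real flagging lives in the DVCProHD data stream:

```python
# Pad 24 shot frames up to the 60 frames/s the tape always carries;
# duplicates are flagged (True) so the editing system can discard them.
def pad_24_to_60(frames):
    out = []
    for i, frame in enumerate(frames):
        copies = 2 if i % 2 == 0 else 3          # assumed 2:3 repeat cadence
        out.append((frame, False))               # the real frame
        out.extend((frame, True) for _ in range(copies - 1))  # flagged dupes
    return out

padded = pad_24_to_60(range(24))
print(len(padded), sum(1 for _, dup in padded if not dup))  # 60 on tape, 24 real
```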

Which Decks for 24p?

Because 24p is created in camera, any of the decks that support playback of the format you recorded in can play back 24p into FCP. Even for advanced modes where frames are flagged for removal by FCP, you can use any deck that supports the format you shot in. The flags are stored in the data on the tape, and because we're editing natively, that exact data gets transferred to FCP, regardless of the native playback deck used.

Advantages for 24p

24p has advantages as well as disadvantages. The disadvantages are that if you shoot 24p, it's very hard to get it looking like video again if that's what you really wanted. You also have to be much more careful with your camerawork, because if you move the camera too fast, the motion will look jumpy and choppy. The advantages are that it gets you nearer the look of film, and, if you remove the extra frames and edit in a 23.98p timeline, you can make a 24p DVD (see pages 45 and 90 of the DVD Studio Pro 3 manual for full details on making a 24p DVD), which will look better than the same video on a "normal" 29.97 DVD because fewer frames of video have to fit on the disc, and hence a higher bit rate can be used for the remaining frames.
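The bit-budget arithmetic is simple. The 5000 kbps average video rate below is an assumed figure for illustration; real DVD encodes vary:

```python
# The same disc bit budget spread over fewer frames leaves more bits per frame.
video_bitrate_kbps = 5000                       # assumed average video bit rate
for label, fps in (("29.97p", 30000 / 1001), ("23.98p", 24000 / 1001)):
    print(f"{label}: {video_bitrate_kbps / fps:.0f} kbit per frame")
```

At 23.98 frames per second, each frame gets 25% more bits than at 29.97, which is exactly the ratio 30000/24000.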

While editing video at 23.98 frames per second, you will also find that rendering is faster, as your hard drives have to send fewer frames per second to FCP and the Mac's processors have less work to do.

Naming Formats and Frame Rates

60i, 30p? What does it all mean? Let's look at the 'number' part first, then the 'i' or 'p' bit after. Because NTSC video does not have an integer frame rate (29.97 frames per second rather than 30), any time you see an NTSC frame rate you have to be very careful. If someone says that a particular video runs at 30 frames per second, do they mean that it runs at precisely 30.00 frames per second, or that it is running at 29.97 frames per second, rounded up for ease?

24p should really be called 23.98p,
30p should be 29.97p,
and 60i should be 59.94i.
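Those NTSC rates are not arbitrary decimals; they are exact ratios, which Python's fractions module shows neatly:

```python
# NTSC rates as exact rationals; the familiar decimals are just roundings.
from fractions import Fraction

rates = {"60i fields/s": Fraction(60000, 1001),
         "30p frames/s": Fraction(30000, 1001),
         "24p frames/s": Fraction(24000, 1001)}
for name, rate in rates.items():
    print(f"{name}: exactly {rate} = {float(rate):.5f}")
```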

OK, so what do the 'i' and the 'p' mean? 'i' means interlaced and 'p' means progressive. Standard video has been interlaced right from the earliest days, as interlacing (whereby each frame is split into two fields, and each field represents a slightly different instant in time) is a form of compression suited to analogue use, allowing broadcasters to use a higher resolution and frame rate in the bandwidth available for their transmissions. NTSC, PAL and High Definition can all be interlaced or progressive.

Progressive means that each frame of video represents an instant in time and that each frame is displayed sequentially without interlacing. Film is inherently progressive, and it is only recently that video cameras have been shooting progressive video.

It is quite easy to give interlaced video the look of progressive, and, indeed, turn it into progressive via use of a "de-interlacer". However, many de-interlacers reduce the resolution of the video and you should look for a "smart" or "adaptive" de-interlacer if you wish to use one. It is very hard to turn progressive video to interlaced, although such a process is now being used to help restore archive television programmes that were telerecorded (kinescoped) onto film and make them look like video once again.

High Definition video formats are often referred to as, say, 720p. This does not mean 720 progressive frames per second. The 720 refers to the number of scan lines that make up the picture. For instance, you might say that DVCProHD is 720p60, meaning 60 (really 59.94) progressive frames per second, each with 720 scan lines. Similarly, 1080i refers to the high definition format which has 1080 scan lines and is interlaced, so 1080i60 would mean 60 (really 59.94) interlaced fields per second, with each entire frame having 1080 scan lines.

Revolution?

Native editing gives many advantages to digital video editors, and perhaps the greatest of these is reduced cost. Although some of the equipment listed above is quite pricey, the computer and especially the hard drive requirements needed to edit with it are much cheaper than were previously needed to edit video of that quality. Whereas once you needed an expensive capture card to edit high definition video, it can now be captured using a standard Firewire cable. Native editing is really, in my mind, the technological advance that sparked the DV revolution, and it should be remembered that the revolution is not over yet!

Graeme Nattress is a software developer who has been developing cutting edge algorithms for the improvement of video quality. Nattress Productions Inc. offers Special Effects filters and plugins for Final Cut Pro. Graeme is a frequent contributor to the kenstone.net, LAFCPUG, 2-Pop and Creative Cow websites and forums.