A Media Manager Has Your Back

In the world of HDSLR technology, media management is a very important position. Every Elite Team member has held this position at some point during the untitled Navy SEAL movie to gain an understanding of HD image capture in a small-footprint work-flow system, and they all have jumped in head first!

The unique skill set that my Elite Team brings is that they all have a film background and are comfortable with certain rituals that accompany being a motion picture film loader and 2nd assistant cameraman. These include: managing the truck; keeping track of the gear and specialty pieces of equipment; creating an inventory and log; assessing how many magazines you have to load and color coding it according to the stock; labeling the magazines with the date, job, film stock and amount loaded on the magazine itself; and writing a camera report with the same information.

The system we designed for the untitled Navy SEAL movie is a mixture of the traditional film loader's job and the DIT's job in the digital world. On our movie, Mike McCarthy, a brilliant post-production guy at Bandito Brothers with an IQ that I swear is above 180, set up our media manager work-flow system. The media manager station is very simple and compact. Sticking with the small-footprint approach, we employ a MacBook Pro laptop, a 24" HD Cinema Display monitor, and four external 500GB hard drives.

MacBook Pro

We shoot 10 to 15 minutes on an 8GB card. I like using the 8GB cards the best because the counter on top of the camera kicks in, depending on JPEG settings, at approximately 15 minutes of media recorded. This is a great gauge. Once the counter starts to come off of 999, we re-load the card, just like a 1,000-foot magazine on a film camera.

Card Reader with 8GB Card

There are three important reasons to do it this way:

We can get the card to the media manager quickly, and he can check the focus on his big monitor. We all know how critical focus is with these cameras.

The cards tend to heat up, and when that happens the noise factor goes up. Keeping a fresh card in there is a very good way to keep the image as clean as possible.

It promotes a steady pace of backing up cards, so if for any reason something happens to the camera or the card, you are not losing a whole day's worth of footage.

In our work-flow system, the 8GB card from the 5D camera goes to the media manager. He downloads the media into the computer and simultaneously sends it to the 4 external hard drives. After the download is complete, he checks for focus and exposure and labels each set-up for the assistant editor with as much detail and description as possible. Then, he formats each card before sending it back to the cameras in the field. When the cards go back to the field to be reused, the camera assistant knows to double check that each card is coming back empty.
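The download-and-verify step described above can be sketched in Python. This is a minimal illustration of the idea, not the tool Bandito actually uses; the paths, the MD5 choice, and the copy-then-rehash verification are my assumptions:

```python
import hashlib
import shutil
from pathlib import Path

def md5sum(path, chunk=1 << 20):
    """Hash a file in 1MB chunks so large MOV clips don't load into memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def offload(card_dir, drive_dirs):
    """Copy every clip on the card to each backup drive, then verify each copy."""
    for clip in sorted(Path(card_dir).glob("*.MOV")):
        source_hash = md5sum(clip)
        for drive in drive_dirs:
            dest = Path(drive) / clip.name
            shutil.copy2(clip, dest)  # copy2 preserves the clip's timestamps
            # Re-hash the copy; a mismatch means the transfer is bad and
            # the card must NOT be formatted yet.
            if md5sum(dest) != source_hash:
                raise IOError(f"Copy of {clip.name} to {drive} failed verification")
```

Only after every drive passes verification would the card be safe to format and send back to the field.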

2 of 4 Hard Drives

Next, one hard drive is shipped to the editor to start logging the footage; one is a backup in case the original gets lost in shipping. A third is for the director to view on his laptop. The last one is a "cloned master" of what we sent to the editor, which is held in post. This system has been successful in delivering the equivalent of 1.8 million feet of film safely into the edit room.

How do you manage media? What successes have you had? I would love to hear your formula.

Brilliant work flow and DIT strategy, Shane. You guys actually back it up on more drives than most pros I know. I like the idea of using 8GB cards like 1,000-foot mags.

Some advice for those adding a DIT to their shoots. This mostly applies to low budget shoots. These might seem like common sense but I’ve seen some bad stuff happen on set recently.

1. Pay a knowledgeable DIT a fair wage for the day. You want the last person who is handling your footage to care! I've seen many productions put production assistants or family friends in charge of downloading footage. Not only are they not knowledgeable, but they lose focus when there is an on-set distraction. You don't want to be surprised when the shoot is over to find that transferred footage is missing or corrupt.

2. Even if you're a pro DIT and your production is happening on a secured stage where you will leave your equipment locked overnight, distribute your copied drives. Take one home at night, and have a producer take one home.
This comes from an experienced DIT pro I work with. With four days of shooting left on a movie, a recent inside-stage robbery in LA left a crew without equipment. Equipment can be replaced by insurance; your priceless digital footage can't. Protect it at all costs.

Dustin McKim, thank you so much. I like the system also. In the first week of production a drive was lost in shipping, so it was take no prisoners after that. The 8GB cards, I feel, are the way to go. Constantly downloading protects the data. Thank you for all that info, very useful.

I just dropped my 8GB card in the 5D, and the 999 reading displays when I have selected large JPEG as the picture format. When does the 999 change? I have rolled some footage and it seems to stay at 999; does it eventually go down?

It would also be great to know the editor's workflow once you give them the footage. Is it being converted to ProRes for the edit?

Thanks for sharing, and it would be really cool to see some examples of footage degraded by heat, and even by the slower cards you mentioned in a prior post.

Jon Carr, yes, you have to roll for about 12-15 minutes and then the card will start to count down. It does not have to be a continuous take; it just has to add up to 12-15 minutes of footage. We input the RAW .mov file into the Avid, and the Avid automatically converts it to an MXF file. The Avid drops frames for the 30p-to-24p conversion. Once we lock our edit, we convert all of the 30p RAW .mov footage that we want in the film or short with Twixtor. The conversion usually takes an hour for one minute of footage. I love sharing, and we are doing a whole slew of tests at Bandito in late January after I get off the nuclear submarine. We will post those and also the heat-factor footage.
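One simple way a 30p-to-24p frame-drop conversion like the Avid pass described above can work is to discard every fifth frame, since 24/30 = 4/5. This is an illustrative sketch of that cadence, not necessarily the algorithm Avid actually uses:

```python
def drop_frames_30_to_24(frames):
    """Convert a 30p frame sequence to 24p by keeping 4 of every 5 frames
    (dropping each fifth frame), since 24/30 reduces to 4/5."""
    return [frame for i, frame in enumerate(frames) if i % 5 != 4]
```

One second of 30p footage (30 frames) comes out as exactly 24 frames; the motion-compensated Twixtor approach mentioned above exists precisely because this kind of hard drop can make motion stutter.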

What a great work-flow! I may have to adopt this. One quick question: I noticed you took down the blog about the faster CF cards reducing rolling shutter. Do you find this not to be true? I'm just curious before I drop the additional cash on the faster cards.

Sam J, my experience and testing have found that the best card to use is a SanDisk Extreme IV UDMA. I submerged a camera to the bottom of the Mississippi River in an underwater housing that was compromised by the stuntmen exiting the truck. We let the card dry out for 2 hours and then downloaded it into the laptop; it was beautiful, flawless. When I shot my night work on the Terminator webisodes, I had Extreme III cards and Extreme IV UDMA cards. This was February of 2009, just when the camera had hit. When our media manager was pulling up the footage, he saw a cleaner and a little less compressed image with the UDMA cards, so I have been shooting with them ever since. You can go to http://www.memorysuppliers.com and get an 8GB card for $109.00.

As I'm coming across more and more examples of shooting movies on DSLRs, the one thing no one has mentioned, which means it may not be an issue, has to do with timecode. Does it matter that the DSLR doesn't burn timecode to the footage? And does that matter when it comes to logging and syncing the footage? Once the show has been finished in editing, how do you refer back to the original material, assuming that material has been transcoded and converted to lower-res files for offline editing? All in all, this is a fascinating blog to follow. Thank you.

Roger Mattiussi, I will forward a blog to you; my friend was the post supervisor on the untitled Navy SEAL project, and he broke it down. It works incredibly well without timecode. It all comes down to the labeling of files. He will be a guest blogger soon and do an updated version on this. We have learned so much from this movie. Here you go. Thanks.
Shooting a Feature Film with the Canon 5D
Posted by McCarthyTech on November 4th, 2009 filed in Product Reviews, Workflow Ideas
The Canon 5D MarkII was the first DSLR that offered HD video capture capability worth considering as a replacement for film. Its full-sized sensor, full-resolution 1080p recording, and high-quality 40Mb AVCHD compression differentiated it from all competitors. I have experimented with many of the other DSLR options on the market, but most of the projects I have worked on for the last year have been shot with the Canon 5D, so the majority of my experience and workflow expertise has been with that particular camera, most of which I will try to share here. The workflow has improved greatly as the tools have become further developed over the course of the last year. While the most glaringly obvious issue was that the 5D only shot 30fps, that was acceptable for certain workflows, especially if the 5D was the only camera on a project.

A much larger issue was the fact that the camera did not give the user manual control over certain important settings while in video mode, including aperture, shutter speed, and ISO level. The settings could not be specifically dialed in, but any setting brought about through the automatic feature could be paused or locked for the duration of the next shot. Having three variables all changing made it nearly impossible to trick the camera's auto-exposure system into giving you the settings you wanted with any level of consistency. The easiest setting to over-ride was aperture, since this was on the lens. By preventing the camera from communicating with the lens, the automatic feature could be disabled. But with no electronic communication to the lens, the aperture had to be set physically. Older Nikon Nikkor manual lenses, which had physical rings for controlling the aperture, were the only ones that easily adapted to the 5D. Once the aperture was set, the standard practice was to point the camera at lighter or darker areas until the automatic exposure feature gave the user the desired settings, and then to lock it. This process had to be repeated for each take or shot, as stopping record put the camera back into full auto. Regardless, many people used this method of manipulating the camera to achieve the desired results for the first few months after its release, and I worked on a number of commercial projects that did. Canon was not really excited about promoting the use of Nikon glass over its own lenses, so this was one of the first issues they fixed. The 1.1.0 firmware update solved this problem by allowing the user to manually set the aperture, shutter speed, and ISO, and keep them consistent from shot to shot.

So once the lens issue was dealt with, we were left with a selection of AVCHD-encoded MOV files. AVCHD is a processing-intensive format that does not play back or edit very well. While QuickTime would play the files, it clipped the blacks and the whites at incorrect levels: 16 and 235 were being stretched to 0 and 255 on decode, lowering the dynamic range. This was caused by QuickTime incorrectly interpreting one of the header fields in the file. The solution was to use CoreAVC to decode the files when converting into a different, and ideally more edit-friendly, compression format. Shortly after this workaround was developed, Apple released a QuickTime update (7.6) that fixed this particular issue entirely.
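The faulty decode described above can be illustrated with the mapping itself. This sketch is my reconstruction from the description (video-range 16-235 stretched to full-range 0-255), not QuickTime's actual code; any pixel value outside the legal video range simply clips:

```python
def broken_decode(y):
    """Model of the faulty levels stretch: video-range luma (16-235) is
    mapped to full range (0-255), so super-blacks (below 16) and
    super-whites (above 235) clip, losing that headroom entirely."""
    out = round((y - 16) * 255 / (235 - 16))
    return max(0, min(255, out))  # clamp to the 8-bit output range
```

A super-white highlight at code value 240, which a correct decoder would preserve, comes out hard-clipped at 255 under this mapping, which is why the CoreAVC workaround mattered before QuickTime 7.6.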

Beyond the clipping issue, there are other tricks to maximize the dynamic range of the 5D. The picture style controls the way the camera converts the 14-bit RAW still into an 8-bit JPEG, and the same picture profile settings are applied to the 8-bit recorded video. This allows you to pull the maximum detail out of the available 8 bits of color depth. On the first few projects I worked on that used the 5D, we used a custom picture profile that I got from Stu Maschwitz's ProLost blog, High Gamma 5. We did a number of comparison tests, and while High Gamma 5 gave us a wider total dynamic range, for our feature film we eventually decided to use Neutral, one of the default Canon presets. Neutral gave us a file that was closer to the final look we were going for, and with only 8 bits of color depth, burning in your look, at least to a degree, should result in better picture quality at the end of the day.

Every file the camera records is named MVI_####.mov, with an auto-incrementing number and no real override options. That makes things simple on tiny projects with one camera, since each file has a unique name. On larger projects, and ones that use more than one camera (we usually have 15), file management can be a bit more work to keep things straight throughout the post-production process. Our solution was to rename each MOV file with a unique 8-digit identifier as the new filename, and store the key to the original card and filename in a database. This allows each clip to have a consistent name throughout the process, and to show up on EDLs as a tape name or clip name as desired, without truncating unique values after the 8th digit for certain formats. By the time we are done, we usually have a source MOV, an Avid MXF, and an online Cineform AVI, all with the same content and file name.
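A renaming pass along those lines might look like the following sketch. The `A01C`-style card ID, the 4+4 character naming pattern, and the CSV file standing in for their database are all hypothetical; the post says only that each clip gets a unique 8-character identifier with a key back to the original card and filename:

```python
import csv
from pathlib import Path

def rename_clips(card_dir, card_id, key_file):
    """Give every camera MOV a unique 8-character name (card ID + index)
    and log the original card and filename so any clip can always be
    traced back to its source card."""
    with open(key_file, "a", newline="") as f:
        writer = csv.writer(f)
        # sorted() materializes the listing before we start renaming
        for i, clip in enumerate(sorted(Path(card_dir).glob("MVI_*.mov")), 1):
            new_name = f"{card_id}{i:04d}.mov"  # e.g. A01C0001.mov
            writer.writerow([new_name, card_id, clip.name])
            clip.rename(clip.with_name(new_name))
```

Because the new name is exactly 8 characters before the extension, it survives as a tape/clip name in EDL formats that truncate longer values.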

Next up was the frame-rate problem: 30p. The first few projects I did with the 5D we posted at 29.97, so the issue was solved with a simple reinterpretation of the frame rate when converting from the source AVCHD into an editing codec, and tweaking the audio 0.1% to match. Unfortunately, 29.97 footage doesn't intercut with film very well, and won't print back for theatrical masters either, so sometimes a 24p workflow is required. For 24p projects, the conversion solution is much more complicated, involving motion-compensated frame blending. After extensive testing we concluded that this was best done with the RE:Vision Effects Twixtor plugin for AE, or using Optical Flow in FCS Compressor on OSX. Having a PC-centered workflow, I favor the AE-based solution. With render times at around an hour per minute of source footage, it is impractical to convert all of the source footage on large projects, which necessitates an offline edit. Since we don't have timecode and keycode, relinking for the online requires a bit more creativity. We have found some interesting options unique to Premiere Pro CS4, related to the way it links EDLs to existing source footage, that make this much simpler than our first tedious tests, which involved manually rebuilding projects at 24p back in Premiere Pro CS3. The new CS4 version can convert the TC-In on an EDL to a frame-counted in-point of an existing media file, which makes the onlining of 5D footage a relatively simple automatic process after a few find-replace edits (.mov to .avi in our case) to the EDL. In the future, it looks like Canon is going to support 24p recording on all of their DSLR offerings, so all of these crazy 30p workarounds will soon be obsolete.
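The two EDL manipulations mentioned above can be sketched simply. Both pieces are simplified assumptions for illustration, not Premiere's internals: the timecode-to-frame-count conversion ignores drop-frame timecode, and the blanket extension replace assumes `.mov` appears only in filenames:

```python
def timecode_to_frames(tc, fps=24):
    """Convert an HH:MM:SS:FF timecode string to an absolute frame count,
    the kind of frame-counted in-point CS4 derives from an EDL's TC-In."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def relink_edl(edl_text):
    """The find-replace pass described above: point EDL events at the
    online Cineform AVIs instead of the offline source MOVs."""
    return edl_text.replace(".mov", ".avi")
```

After a pass like `relink_edl`, each event's in-point resolves as a frame offset into the matching AVI, which is what makes the online conform nearly automatic.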

Although they hold up much better in rough environments than most other electronics, Canon DSLRs do have their weaknesses. I have operated a 5D in temperatures of 20 below zero, and in the desert at over 120 degrees Fahrenheit. While we had no issues in the cold, where solid-state recording has a huge advantage over tape, there are some issues at higher temperatures. The camera sensor itself is a large piece of silicon that generates a lot of heat on its own, and when combined with a high external temperature, in the worst cases it shuts off the camera. You probably have to be over 150 degrees to get to that point, such as by leaving the camera in a black metal box in direct sunlight for an extended period of time, but we have seen it happen. A much more frequent problem, and one that is harder to detect, is that as the sensor begins to overheat, there will be much more video noise in the recorded picture, especially in the darks. This is probably due to a higher latent voltage on the chip as its electrical resistance changes with the temperature increase. This has only been a problem for us when shooting with the same camera for many hours in a hot environment, and our solution is usually just to swap the camera body for one that has not been used in a while. This obviously requires having multiple cameras on set, which isn't always an option on lower-budget projects.

The last issue, and one we are still finding new ways to deal with, is rolling shutter. Having a large-format CMOS sensor, DSLRs are subject to rolling shutter, or inconsistencies between when the top and bottom of the frame are sampled. I have spent the last few months working on a project that put the 5D into some of the most intense situations. As a fairly lightweight device, it is subject to more jitter and shake than a larger camera with more inertia, and with the camera moving, the rolling shutter results in the recorded picture being slightly geometrically skewed, depending on the direction of the motion. We also shoot high-speed objects, like helicopter rotor blades, which are known to cause strange artifacts in certain instances. So far we have been lucky with that, and haven't found any of those types of issues in our footage.

The type of rolling shutter artifact we are struggling with the most is gunfire muzzle flashes, especially at night. In the dark, the flash blows out the imager, but the flash does not last as long as even a single frame. So with the rolling shutter, the top half of a frame will be totally blown out with the bottom part looking normal, because the flash had subsided by the time that part of the chip was sampled, or vice versa. Setting the shutter speed slower causes it to corrupt more of the frame or frames, and setting it faster narrows the flash into a distinct horizontal band in the footage; neither is desirable. One thing we have found that helps is setting the shutter on the 5D to 1/30th. (We usually set it to 1/50 to get motion blur similar to film shot with a 180-degree shutter.) With the 30p frame rate, the flash either affects an entire frame, or matching parts of two subsequent frames (the bottom part of one frame, and the reverse area at the top of the next one). This gives us an entire over-exposed frame if we stitch the two parts together. This can be hand-cut back into footage that has been brought from 30p to 24p by manually selecting frames. It remains to be seen if this solution can be scaled practically to our entire movie. The best way to avoid this issue is to avoid recording gunfire at close range in very dark environments. The farther you are from the muzzle flash, and the more ambient light there is, the less it is going to flare out your camera, minimizing the degree of the resulting rolling shutter artifact.
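The stitching idea can be sketched with a frame represented as a list of pixel rows. This is only an illustration of the principle, with a hypothetical function name, not the compositing step actually used on the film:

```python
def stitch_flash_frame(first, second, split_row):
    """Rebuild one fully flash-exposed frame from a muzzle flash split
    across two consecutive frames by the rolling shutter: the flash lands
    on the bottom rows of the first frame and the top rows of the next,
    so we join those two regions at the split row. Frames are lists of
    pixel rows."""
    return second[:split_row] + first[split_row:]
```

In practice the rebuilt frame would then be hand-cut into the 30p-to-24p converted footage, as described above.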

So that should convey some of the challenges we have faced in using DSLRs for filmmaking, especially on large-scale projects, but it is by no means an exhaustive list. As the tools evolve to suit the cameras, and the cameras evolve to suit the tools, many of these issues will become much easier to solve and require fewer workarounds. The AVCHD decoding issue was solved by a new release of QuickTime; the manual lens control was solved with a new firmware release from Canon. The 30p conversion process is the next issue I see becoming a thing of the past, if Canon can get a 24p recording option onto the 5D. I am looking forward to that day, but in the meantime I have 2TB of 30p footage, divided into 5,000 shots, to cut into a 24p film, so I have a lot of work ahead of me.

Has anyone had any experience with a solid post workflow with the final exhibition format being large screen projection off of HDCAM? I have had a music video screen a couple times and have not been happy with the result. There was a lot of artifacting and stairstepping, while some other 5D stuff that screened in the same venue did not have such issues.

I saw the comments on the types of CF cards, the temperature of the cards, and video image quality in this thread, and I'd like to find out more. I'm getting pixelated, blocky, and grainy clips from my 5D Mark II in very contrasty situations. I've been using SanDisk Extreme III (30MB/s) cards and am wondering if that is the problem. I'd be very appreciative of any comments.
I came across this site and I’m enjoying all the information you provided here. Thank you so much.

Keiji Iwai, your blocky clips are because you need to decompress your footage. If you have Final Cut, go ahead and import the files as ProRes 4:2:2. This will take the 8-bit compressed file and decompress it with 10-bit color, and open up all your compressed, contrasty, and grainy clips. Your CF cards should be Extreme IV UDMA 45MB/s. The camera processes its data at 45MB/s, and you need a card that can do the same. You are very welcome. I hope this helps.

From what I could gather from the post and the replies to comments that you’ve posted, you seem to be using some combination of Avid, Final Cut, and Cineform. How exactly do you use each part? Or what is your shoot-to-deliver workflow?

Above you say something about inputting the "RAW mov. file into the Avid and then the Avid automatically converts it to a MXF file." What is this RAW .mov file? Do you just mean the H.264 .mov files generated by the camera, or are you recording the uncompressed 4:2:2 10-bit out of the HDMI into the nanoFlash or something like that?

I’m working with the 7d and the T2i. I’m editing in Sony Vegas, simply because I love the render-free environment. It may only play back the Canon .mov files at 20 fps, but at least it plays them back, unlike Final Cut which requires a render. Premiere CS4 is too buggy, and I don’t have access to any other platforms. With Vegas I can do rough cuts on the raw footage then only transcode the shots I need if I’m on a deadline.

If I'm not on a super strict deadline, I let it transcode to Cineform with NeoScene. I love the Cineform quality, and on my not-that-fast computer it cuts super easily at full resolution.

So yeah, how exactly is your workflow laid out?

On another note, I love your blog! You are very specific and precise about what you do and how you write, and that is much appreciated. Could you shed a little light on what you do about avoiding aliasing? For example, I just did a project where there was a wide exterior of a suburban neighborhood, and the director wanted the bricks on the houses in focus. Those thousands of tiny horizontal lines just wreaked havoc on my line-skipping sensors haha.

Shane,
Can you please explain how you set up the 24″ display monitor on the set. Is it just for playback or can it be used like the 7″ Marshall monitor? Also, is there a less expensive alternative to the Apple monitor? (Dell U2410f?) Thanks so much for all of the valuable information.

Stark, the HP DreamColor 24-inch ZX monitor is for lighting and playback. I have also been mounting this monitor on the dolly and operating off of it so that I can continue to view the subtleties of the light and be able to adjust. You can get this monitor refurbished for $1,600.00. It holds its calibration for 6-9 months. Where have you ever heard that out of an HD monitor?

Hey Shane,
Truly amazing work by the way. The HDSLR has really changed things.
I have used this workflow for a while now and it seems to give me great results. I take everything into FCP with Log and Transfer. There is now a plugin that will transcode your 5D/7D footage for FCP, the Canon EOS E1 (http://www.usa.canon.com/dlc/controller?act=GetArticleAct&articleID=3249), and do a conversion to ProRes 4444. This is to try to get 4:4:4 color space introduced into the footage. I am finding that it gives just a bit more to play with in post CC. Anyone have any luck with this? I am actually able to edit this off a laptop with an internal drive. Plus, FCP loves ProRes. Backups are done on an 8TB RAID array, then put on a FireWire bus-powered hard drive as well. I personally always like to have a backup that is completely disconnected from a power supply.
Completely unrelated though….
What are your thoughts on RED (One, Scarlet, Epic) and how it compares to Canon? Personally I am really liking Canon, but I wanted to hear from an expert.
Looking forward to seeing more. Thanks Shane.

Militello, thank you so much for those kind words; yes, it has changed everything. I am glad all that is working out. On the RED front, I have not gotten my hands on the Scarlet or Epic. I am scheduled to test them soon. My plan of attack is film first, and then the 5D. The next episode is next week, which is the final Carnival. You are welcome.

Hi Shane, thanks for all the blog posts and for taking the time to reply to people's questions. It's really generous of you to share your knowledge and experience with everyone, especially considering that you are a busy working professional.

I am just curious whether you use any data-copy verification methods (checksums, etc.) when downloading footage onto your 4 drives. As a DIT with a lot of experience working with the RED, I would never do a RED job without using checksum copying to give an extra level of security and ensure that all 4 copies have full bit-for-bit data integrity. If you do use something, what software is it?
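Checksum verification of the kind described here can be sketched as follows; the SHA-256 choice and the compare-all-copies approach are one reasonable way to do it, not a description of any particular DIT tool:

```python
import hashlib

def file_digest(path, chunk=1 << 20):
    """SHA-256 of a file, read in 1MB chunks so big clips fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def verify_copies(paths):
    """True only if every listed copy of a clip is bit-for-bit identical:
    all the digests must collapse to a single value."""
    return len({file_digest(p) for p in paths}) == 1
```

Run against the same clip's path on all four backup drives, a False result flags the one silent corruption that a plain copy would never surface.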

I know this post is a little old now, but I wanted to reach out and ask whether your workflow has changed over the last year. I like the idea of the 8GB cards, but I know technology has changed a bit.