
Admittedly, that’s two similes in a row in titles on this blog, but since my writing output here has been so meagre, I suppose rhetorical crutches are to be expected.

As busy as I’ve been, there’s been enough of a kerfuffle surrounding Final Cut Pro X version 10.4’s new color correction controls, specifically the new color-balance controls, that it’s penetrated my curtain of tasks and made me curious. The process of examining FCP X further gave me food for thought concerning the differences between color-balance controls in different grading applications, the lack of true cross-platform standardization, and the need for more user-accessible customization in this area.

But First, New Color Controls in Final Cut Pro X

Before I say anything else, let me just go on the record that I’m very happy to see the Final Cut Pro team putting effort into more professionalized integrated grading tools. As much as I love me my dedicated grading applications for their superior grade management capabilities and greater depth of tools, editors need integrated color controls in every NLE for a wide variety of reasons and situations, and I really hated the way the old color board worked.

New FCP X Color Balance Controls

And not only did they add color-balance control driven grading, but they added a set of color curves for RGB, and ALSO a set of HSL curves! It was like Christmas on, well, almost Christmas! Not only that, there’s even some new thinking with an innovative hue-limited Saturation vs. Luminance control (called COLOR vs. SAT), which defaults to Orange (ORANGE vs. SAT) but that can be set to any hue of the color wheel, with a nifty auto-naming color picker control. SEA FOAM vs. SAT anyone? I can see this being quick and useful for anyone doing product shots, or for fine-tuning skin tone intensity.

The new FCP X Customizable HUE vs. SAT control

So, new tools, hooray. However, since I’ve been swimming in a river of unexpected work as 2017 turned into 2018, I didn’t have any time to look more closely at them. Over time, I caught a bit of unhappiness at the edges of social media, where some responses seemed to be, “yeah, the new color controls are nice and all, but the color wheels and contrast controls are a little wonky.”

So that’s what prompted me to finally have a look for myself. I loaded up a few shots, including a pre-rendered ramp gradient I use for testing color pipelines, and did some grading with the new color wheel controls.

In an effort to be fair-minded, I’ll start with what I like about the new Final Cut Pro color-balance controls:

They exist.

The controls corresponding to each tonal range include saturation, which is really useful.

The visual design of the controls is sensible, with crosshairs, space-efficient vertical sliders, and muted hues.

While I’m not a fan of having to add a color tool to a clip prior to adjusting it, I do like the ability to have multiple instances of tools, making management of color adjustments a layered experience, which is useful.

I like that all adjustments made by dragging within the color controls work like a virtual trackball, making adjustments relative to the previous color adjustment, for smooth operation.

I kind of like the option of switching between a single mode-switchable color control and a simultaneous group of four, though to be honest I’m not sure how useful that really is. I think if the single control could also simultaneously show all four tonal control handles, and if it got bigger and smaller as the Inspector’s width changed, it would be more interesting.

I like that you can Option-drag controls to make slower, fine-tuning adjustments.

I like that you can select a control handle and use the Arrow keys to adjust it, although I’m not sure how much I would actually do this in practice since you’re incrementing in coarser units when you do so.

Moving on to things I don’t like:

I don’t like that there’s only one reset button governing the color, contrast, and saturation controls for each tonal zone, so you can’t seem to reset contrast without also resetting color, etc.

I don’t like that you have to click right on the center control of the color-balance controls to actually make an adjustment. Other implementations let you click anywhere within the color wheel, so you don’t have to be so nit-picky.

I don’t like how the contrast and saturation vertical sliders make absolute adjustments, jumping the adjusted level right to where you click; to drag up or down from the previous level, you have to make a precision click right on top of the slider handle.

I wish they hadn’t used the terms Shadows, Midtones, and Highlights to name these controls, or that the documentation explained in any amount of detail how these tonal ranges are defined. These names have come to mean something very different since Final Cut Pro 3 added similarly named controls 17 years ago (that actually did more of a Lift/Gamma/Gain operation). It’s time every application stuck to standardized terminology, so as not to be confusing.

Lastly, and the reason for this article, I don’t like how the color-balance and contrast controls interact with one another as you make adjustments. This is probably my biggest complaint. It’s also possibly a bug.

This last point is immediately noticeable to anyone who’s used color controls across different platforms, and while it’s not horrific, it’s not ideal, at least for me. Lift/Gamma/Gain controls typically pin the highlights in place (relative to 100 percent) as you raise or lower Lift, pin the shadows in place (relative to zero) as you raise or lower Gain, and pin both the shadows and highlights in place as you raise or lower Gamma. In this way, you can make structured adjustments and not have to worry that a change you’ve made to the gamma is going to cause radical changes to the shadows and highlights levels you’ve just set. Now, in practice, there’s usually a small bit of interactivity, but it shouldn’t be that noticeable.
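To make the pinning concrete, here’s a minimal Python sketch of one common formulation of Lift/Gamma/Gain on a normalized signal. This is illustrative math only, not any particular application’s implementation; as discussed later, every app tunes this differently.

```python
def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
    """One common Lift/Gamma/Gain formulation for a normalized value x in [0, 1].

    Illustrative only; every grading application tunes this math differently.
    """
    # Lift raises the shadows but contributes nothing at x = 1.0,
    # so the highlights stay pinned while you adjust it.
    y = x * (1.0 - lift) + lift
    # Gain scales about zero, so the shadows stay pinned at black.
    y *= gain
    # Gamma bends the midtones; 0 and 1 are fixed points of the power
    # function, so both shadows and highlights stay pinned.
    return max(y, 0.0) ** (1.0 / gamma)
```

With this kind of math, raising lift leaves 100 percent white at 100 percent, raising gain leaves 0 percent black at 0 percent, and gamma moves mid-gray while leaving both endpoints alone, which is exactly the structured, low-interactivity response I’m describing.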

Furthermore, any small interactivity is usually compensated for through the act of using a color correction control surface of some kind that lets you simultaneously adjust Lift and Gain, or Gamma and Lift, or whatever combination you require. Simultaneous adjustment makes it easy to compensate for this sort of thing as you turn the rings, dials, or knobs used to adjust contrast, to the point where you may not even really know it’s happening. On the other hand, weekend warrior colorists driving with the mouse will notice this kind of interactivity a lot, since they’ll be making adjustments one at a time, such that a change to Highlights making a change to Shadows will necessitate an adjustment to Shadows which in turn alters Highlights so you need to readjust Highlights and then, well, you get the idea. It’s somewhat less than ideal.

In FCP X, at least on my system, grading in Rec. 709 exhibited an enormous amount of interactivity among the Shadows/Midtones/Highlights contrast controls. I casually mentioned this on Twitter, and colleague Marc Bach helpfully pointed out Simon Ubsdell’s YouTube video, which seeks to explain why this is happening, and which (spoiler alert) points out that these behaviors change when you set the project to Rec. 2020. My only complaint about Simon’s video is that it doesn’t specifically show the steps he used to set up the Library, Project, and Media to reproduce his results. However, since you need to set the Library that contains the content you’re grading to Wide Gamut HDR in order to set a new project to Rec. 2020, I assumed that was the proper combination of settings. I was a bit confused by the “Rec. 2020 PQ” banner at the top of the video scopes in Simon’s video where we see the Gamma control finally working properly, since a standard dynamic range gradient shouldn’t show a linear response if the color space OETF/EOTF is set to PQ, but maybe he was using a ramp that’s linear in PQ space. On the other hand, I don’t know FCP X that well, and maybe I was doing something wrong. So I set about to explore this issue more thoroughly to see if I could reproduce his results.
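For what it’s worth, the nonlinearity I’d expect is easy to demonstrate. The PQ inverse EOTF is publicly specified in SMPTE ST 2084; here’s a sketch using the spec’s constants, with display luminance normalized so 1.0 equals 10,000 nits, showing how far from a straight line a linear-light ramp lands once it’s PQ-encoded:

```python
def pq_inverse_eotf(y):
    """SMPTE ST 2084 (PQ) inverse EOTF.

    y is display luminance normalized to 10,000 nits (y = 1.0 means
    10,000 nits); returns the nonlinear PQ code value in [0, 1].
    """
    # Constants as defined in SMPTE ST 2084.
    m1 = 2610.0 / 16384.0          # 0.1593017578125
    m2 = 2523.0 / 4096.0 * 128.0   # 78.84375
    c1 = 3424.0 / 4096.0           # 0.8359375
    c2 = 2413.0 / 4096.0 * 32.0    # 18.8515625
    c3 = 2392.0 / 4096.0 * 32.0    # 18.6875
    yp = max(y, 0.0) ** m1
    return ((c1 + c2 * yp) / (1.0 + c3 * yp)) ** m2
```

Half of full-scale luminance encodes to roughly 0.93 rather than 0.5, so a gradient that draws a straight diagonal on a PQ-scaled scope can’t be a linear-light ramp.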

Long story short, using my own gradient .mov file, I found reproducing Simon’s results difficult to do. I first set the Library to Wide Gamut HDR and then the Project to Rec. 2020, and still found myself getting a wonky contrast response. I then set the Library back to Standard but left the Project set to Rec. 2020, which I didn’t think would be possible since you can’t set a Project to 2020 unless you’ve first set the Library it’s in to Wide Gamut, but it’s possible (seems like a bug). I tried every combination of color management settings FCP X allows, and while the contrast response differed, it was still wonky and not at all “standard” to the way I’d expect these controls to work.

Then, I finally stumbled upon the magic combination of settings. Turns out I needed to:

Create a new Project.

Import a ramp gradient .mov file into the same Library, and edit it into the timeline.

Set the Library to Wide Gamut.

Set the Project to Rec. 2020.

(Here’s the step that makes a difference) Select the ramp gradient clip in the timeline, and use the New Compound Clip command to turn it into a compound clip.

Exit the compound clip back to the overall timeline, select the compound clip you just created, and add a Color Wheels layer.

NOW, select the Library, and set the color processing setting to Standard.

Once you’ve done this, using the Color Wheels contrast controls to adjust the clip in the Project timeline (still identified as Rec. 2020) results in exactly the kind of adjustments I would expect from Lift/Gamma/Gain operations in other applications.

Weird.

Given all the screwing around with the Color Management settings required to force a clip to grade correctly with these controls, clearly the math that governs these controls is being handled incorrectly relative to the Color Management settings, so I believe Simon’s assessment is generally correct. And frankly, I’m happy to stop there, because from my experience working with the development teams of Final Cut Pro classic, Apple Color, and DaVinci Resolve, I have no doubt there’s way more going on under the hood of FCP X that affects these operations than any simple tests can really clarify. Color management is not simple to implement, and the interactions of color management and color grading operations are especially not simple, so I’m sympathetic.

And Really, It Doesn’t Matter

And at the end of the day, the why of all this really doesn’t matter to me as an artist. To offer a metaphor, if I build a guitar and give it to a musician, and they play it and tell me they don’t like it, the process of explaining to the musician why the guitar sounds the way it does doesn’t help. The bottom line is, the musician expects a certain result when manipulating the instrument in a familiar way, and if that result is not forthcoming, there’s not enough math in the world to change that musician’s mind.

Which brings up a point I’m actually more interested in discussing. As I mentioned on Twitter when chatting about some of this, most grading software I’ve used varies slightly in how the tonal ranges affected by Lift, Gamma, and Gain are defined. The falloff of Gamma towards Lift and Gain might be a bit wider or a bit narrower, steeper or shallower, and the curves governing the falloff of Lift and Gain against one another may also vary by small amounts. Not enough to make these controls work night-and-day differently, but enough to feel just a bit different.

Additionally, color management affects all of this. For example, working in different timeline color spaces in DaVinci Resolve makes the controls feel different, more or less sensitive in different ways within different zones. And these small differences matter to color grading artists, who play their control surface like an instrument from which they expect to get a particular result when they make a particular set of adjustments to the trackballs or contrast rings/dials/knobs.

I find that colorists tend to prefer the controls of the last grading application they used professionally, which makes sense since you’ve built up muscle memory to know what motions correspond to which corrections, and who wants to learn new instincts all over again? Even small changes take a while to get used to.

Furthermore, each application’s variances are no doubt rooted in the soundest of reasons, and I have no interest in litigating who has the best controls. They’re all great. But they’re all different.

Standards, Anyone?

From a workflow perspective, however, it’s a bit vexing that something as fundamental as Lift/Gamma/Gain hasn’t really been formally standardized across grading platforms. The closest we’ve got is the standardization of the ASC CDL Slope/Offset/Power/Saturation adjustments, which isn’t quite the same thing, but which is serving as the cross-platform glue in pre-production through post-production look management workflows for many projects and organizations. However, in many instances, the SOPS operations need to be translated to/from an application’s own Lift/Gamma/Gain operations. It’s not really a big deal, but I always have the supernatural fear of extra steps introducing the potential for extra problems. I know, too much of a life spent in post has made me paranoid.
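Unlike Lift/Gamma/Gain, the CDL’s math is publicly defined: per channel, out = (in × slope + offset) raised to power, followed by a saturation adjustment weighted by Rec. 709 luma. Here’s a minimal Python sketch (the clamping detail is my own simplification):

```python
def asc_cdl(rgb, slope=(1.0, 1.0, 1.0), offset=(0.0, 0.0, 0.0),
            power=(1.0, 1.0, 1.0), sat=1.0):
    """Apply an ASC CDL Slope/Offset/Power/Saturation correction.

    Per the CDL definition: out = (in * slope + offset) ** power per
    channel, then saturation relative to Rec. 709 luma.
    """
    # Slope/Offset/Power, per channel; negatives are clamped before
    # the power function to avoid complex results.
    out = [max(c * s + o, 0.0) ** p
           for c, s, o, p in zip(rgb, slope, offset, power)]
    # Saturation: blend each channel toward the result's Rec. 709 luma.
    luma = 0.2126 * out[0] + 0.7152 * out[1] + 0.0722 * out[2]
    return [luma + sat * (c - luma) for c in out]
```

The translation problem I’m describing is that an application’s Lift/Gamma/Gain trims have to be re-expressed in these four operations (or vice versa) when moving a look between tools, and the two models aren’t a perfect match.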

Now, I admit it’s improbable that a standardization of Lift/Gamma/Gain will ever happen. Grading applications would have to give up their time-honored and thoroughly-justified wrinkles on Lift/Gamma/Gain functionality in favor of math from someone else “who’s been doing it wrong all these years.” Even worse, giving up your in-house Lift/Gamma/Gain functions risks alienating a user base that has spent years building muscle memory for different corrections. Nobody wants to get used to another application’s operational differences if they can help it.

On the other hand, Lift/Gamma/Gain standardization could offer an opportunity, were it to be implemented in such a way as to be globally customizable. Application-wide presets would allow each company to tailor the “custom tuning” of Lift/Gamma/Gain to their in-house standard, while also offering access to the “standard tuning” of the industry standard. Furthermore, this would facilitate project exchange, as each company’s variation could be described and implemented within the new standard.

Think of it, NLEs could finally exchange meaningful color grading information with grading applications without having to resort to CDL tools, or custom plugin solutions, or black box math that can’t be adjusted. Project grading could be moved from one grading application to another in emergencies. Colorists who find themselves forced to use another grading application for a single job could switch to the controls they’re used to in order to be maximally effective. And new colorists could choose to get used to an industry-wide standard that every application would be able to implement.

Of course, this is a utopian vision that isn’t completely realizable, because control response isn’t just based on the math of the color operations those controls perform. An application’s color management, render pipeline, image processing order of operations, UI widget implementation, and control surface implementation all affect how an adjustment “feels” while you make it. But standardizing at least one set of parameters would get us closer to a portable world of people moving image data among different apps as necessary.

Playing Slack-Key Guitar

However, the guitar analogy points the way towards another useful metaphor: that of the slack-key guitar player.

Modern guitars have a standard tuning, so that any guitarist can pick up a guitar that’s in tune and immediately play a song with a reasonable chance of success. However, there are numerous traditions of guitarists who modify the tuning of their guitar to play a particular type of music.

With this in mind, I think about how nice it would be if a standard tuning of Lift/Gamma/Gain controls across platforms also provided a guaranteed means of customizing this response in subtle and useful ways, making it easier to grade different kinds of projects. Imagine having one set of controls that you like to use when grading night shoots, and another set of controls you like to use when grading bright desert shoots. Maybe they’re not night and day different (I’m sorry), but I could see having slightly different tonal ranges for each making these scenes a bit faster to grade.

Now, there are already applications that allow you to customize the tonal range affected by a given Lift/Gamma/Gain control, and that even let you adjust the slope of these ranges, but there are still many grading applications and plugins that don’t. Ideally, a Lift/Gamma/Gain standard would also encompass a standardized means of customization that would be available as an application-wide (or project) setting, such that every application and plugin that implements Lift/Gamma/Gain would be easily able to implement it, thus providing the benefits described above to everyone in postproduction, on every platform.
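As a sketch of what such a customization parameter could look like, here are hypothetical tonal-range weights with an adjustable pivot. None of this reflects any shipping application’s actual math; it’s just one way the idea could be parameterized:

```python
def smoothstep(e0, e1, x):
    """Standard cubic smoothstep between edges e0 and e1."""
    t = min(max((x - e0) / (e1 - e0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def tonal_weights(x, pivot=0.5):
    """Return (shadow, midtone, highlight) weights for a normalized level x.

    The hypothetical 'pivot' parameter shifts where shadows hand off to
    highlights; the three weights always sum to 1, so a full correction
    would be a weighted blend of the three zone adjustments.
    """
    shadow = 1.0 - smoothstep(0.0, pivot, x)
    highlight = smoothstep(pivot, 1.0, x)
    midtone = 1.0 - shadow - highlight
    return shadow, midtone, highlight
```

A night-shoot preset might drop the pivot so the highlight controls reach further into dark midtones, while a bright-exterior preset might raise it; a second parameter controlling the steepness of each smoothstep would give the slope customization I mentioned.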

Given the news of the day, it would seem that there’s a vast swath of men in the film and television industry who’ve not gotten the memo that sexual harassment and assault is not an acceptable way of interacting with women. However, it’s occurred to me that perhaps it might be helpful to couch some helpful advice to would-be harassers in a way that someone familiar with the entertainment industry might understand.

In the following paragraph, substitute the word screenplay with dick.

Paraphrasing the advice of too many other folks to cite, nobody wants to see your screenplay. In nearly every situation, if someone hasn’t specifically asked to see your screenplay, they really don’t want to see it. Trust me. Showing your unsolicited screenplay unexpectedly is universally unwelcome, and guaranteed to brand you with a reputation that nobody should want. Furthermore, nobody wants an email or text of your screenplay if they’ve no context-appropriate relationship to you and have not asked for it. Nobody wants to get your screenplay sprung upon them in a box. Nobody wants you to show up to a convention and surprise them with your screenplay. Nobody has any interest in you popping by their hotel room to whip out your screenplay unexpectedly. People in general don’t want to talk about your screenplay. They don’t want an unbidden description of your screenplay delivered furtively in an elevator. They have no interest in you exposing your screenplay to them in the back of a cab. In fact, it’s probably best to just not bring your screenplay up at all unless you’re in a clearly communicated situation with someone who’s expressed a specific interest in it. And if you’re in a situation with someone where you find yourself in doubt, it’s absolutely the best policy to keep your screenplay to yourself.

After seeing Blade Runner 2049, and reading reactions and reviews afterward, it seems to me that opinion varies, in part, based on whether one believes the themes raised by a movie need only begin a conversation, or must of necessity conclude it.

I found the film to be an enjoyably contemplative experience (a rarity at its budget), and I also found the world building compelling. While dystopias seem to be a dime a dozen these days, I thought the extrapolation well thought out from the perspective of ascendant corporatism ruthlessly pursuing questionable technologies in a time of governance weakened by man-made and natural disaster, containing many threads worth examining that may be unhappily relevant in the coming years.

Besides finding the film visually engaging, I enjoyed that the plotting was that of a detective story, I particularly enjoyed how much went unsaid and unexplained in favor of visual cues and narrative hints, and I thought the performances throughout were exceptional. However, one of my more durable benchmarks for a successful film is whether or not Kaylynn and I spend our time discussing it, not just immediately afterwards, but days later. In this respect, it was definitely a success.

As someone who’s owned the original Blade Runner in every consumer video format except Betamax (yes, including Laserdisc), I found Blade Runner 2049 a surprisingly worthy followup to the original. I thought it a thought-provoking and evocative film, and I’ll take that over a tidily efficient screenplay that wraps itself up and effervesces from the mind any day.

To the untrained eye, this may look like a surplus of people shooting the breeze, but each of these individuals played an indispensable role in the production over the course of my short film’s two days of shooting. From left, Script Supervisor John Pata, Director Alexis Van Hurkman, Hair/Makeup Laura Hart, Assistant Director Molly Katagiri, Hair/Makeup Melissa Martin.

Over the years I’ve periodically dipped my toe into the waters of discussing the “business” of filmmaking; a topic I am, so far at least, fairly lousy at (at least from the perspective of profit). Frankly, as a filmmaker it’s all I can do to break even at the moment, and my go-to joke is that I make money doing and describing post-production so I can lose it doing production.

However, as the odd duck who both works in post and produces original work, I’m in the unusual position of being on both sides of the fence when it comes to film budgets. As a colorist, I want to be paid for my time, and I want a rate that’s commensurate with what I offer as an artist. As a filmmaker, I need to pay a lot of people to get my projects done, so while I want to pay for professionals, I need to stretch my budget and minimize each line item in as non-insulting a way as I can so I can write all the checks I need to.

As a colorist, I want to deliver the goods. As a filmmaker, I want to impress the audience. However, as a colorist I’m limited by the number of hours the client can afford; I work hard to maximize that time by being efficient, and I often round down a bit to keep the budget in the box, but I can’t afford to give my time away past a certain point. As a filmmaker, I only ever have so much money that I’m trying to stretch to fit the project of the moment, and that demands various sacrifices that shape the result.

As much as I wish the visual mediums in which we work were entirely divorced from commercial concerns, the truth is that shooting anything more than a webcam video where you’re sitting on your ass is going to involve some manner of financial commitment, whether in terms of dollars spent on rentals and crew, or in time spent on behalf of the volunteers who are sacrificing other things they could be doing in order to fulfill your dreams. Even if you use a phone to shoot an available light comedy sketch with friends of yours that you’re editing into the next hot zero-budget web series, at the very least hard drives cost money. The phone or tablet or laptop you’re cutting on cost you money. The software you’re using (probably) costs money. If you’re going that route, film festival entrance fees cost money. The internet access you use to upload your projects to YouTube costs money. You’re paying, even if you think you aren’t paying.

And I sure hope you at least bought your friends lunch for their efforts.

If you want to do a project of any ambition, that financial commitment grows. By “project of ambition” I mean a project that’s lit by one or more people who know how, that’s shot with a camera recording a high-quality image with high-quality lenses that are being focused by someone who’s paying attention, that has high quality audio recorded on a dedicated device by a dedicated person paying attention to it, that has some manner of camera motion facilitated by whatever type of camera support you can access, that uses practiced actors and interesting locations both of which are deliberately dressed to look the way the story needs them to. In other words, you have more people helping you out, and perhaps more equipment that you either bought, are borrowing, or are renting.

John Eremic, who works at HBO, who’s responsible for Endcrawl, and who knows what he’s talking about, and I had an interesting discussion on Twitter a month or so back about his ongoing assertion that the film industry is ripe for “disruption.” It was spawned by John’s response to the following article by Canadian filmmaker Kevan Funk (which I wholeheartedly agree with, by the way), itself a response to a prior article to which I had the same response as Kevan.

I get what he’s saying; by being held hostage to the feature film and television forms of narrative we’re making now, we’re stuck with a system of financing and distribution that doesn’t encourage innovative stories, because the institutions that provide the money for these activities are risk-averse. The consequence is that either (a) filmmakers dedicate themselves to pitching work that conforms to what studios are willing to buy, or (b) filmmakers do whatever they want, and distributors will cherrypick only those projects they’re willing to buy, based on what the distributors think audiences will attend. In either case, the outcome of what movies are available to people browsing Fandango or Netflix or Hulu or their cable listings for something to watch is the same.

New forms of media that are ostensibly cheaper to produce promise to free media makers from the burden of raising so much money to do their thing, while new distribution methods promise to make it easier to reach an audience and potentially monetize your activity, thereby enabling you to take greater risks on telling stories in novel ways that the more expensive and hidebound forms don’t allow, due to their expense.

Sounds great. I’m all for that.

However, I’ve developed a Pavlovian response to the word “disruption,” with its attendant implication that old workers must lose their jobs so that a new guard doing innovative new things can flourish, an implication stated outright in the title of John’s blog entry, “Do We Want Better Movies, or Do We Want to Keep Our Jobs?”

Clearly, change is upon all of us who work in visual media. The evolution of film and video technologies into digital technologies, whether we’re talking about cameras, postproduction methods, or distribution technologies and strategies, has in many respects reshaped what we do, and reduced the footprint of these activities in terms of both expense and employees.

And new forms have absolutely emerged to compete for the leisure hours of audiences. Video games have become an enormous industry, spawning epic storyline-driven masterpieces that will easily consume fifty to one hundred and fifty hours of your life (I believe I spent 125-odd hours playing through Dragon Age 3, I can’t remember how many hours I put into Skyrim, and I never wanted to know how many hours I put into the final collection of Diablo III). Indie games on both traditional game platforms and on your phone compete for your time against YouTube stars with various followings, and the various streaming services are beginning to experiment with new types of programming with which to retain subscriber dollars.

These new forms change the landscape in important ways. However, in many ways they don’t. It still takes a lot of people to make a video game or movie or television show of “ambition” (and I use that word loosely). I don’t think anyone in the industry would argue with me when I say that gear is not the most expensive part of any given production. People are.

That’s not to say there aren’t cost advantages that have emerged. Cameras are getting cheaper. Reasonably equipped post suites are nowhere near the financial commitments they once were. 3D modeling and animation software has plummeted in price, and the hardware needed to render intensive scenes has also fallen by amazing amounts. On the other hand, those were always things people could rent or hire for a fraction of the cost of ownership. Don’t get me wrong, easier and cheaper long-term availability of gear and software is an improvement for all media producers and laborers, but it’s not the key driver of a modern production budget.

The cost of hiring artists and craftspeople remains the bulk of any budget. Actors, whether on-screen or voiceover. Gaffers and grips. Audio recordists. Stylists. Cinematographers. Makeup artists. Editors. Set decorators. Audio mixers. Wardrobe. Colorists. Stunt people. Coordinators. Assistant Directors. Directors. Screenwriters. Storyboard and Concept artists. Modelers and animators and compositors and previs and model makers and practical effects specialists and myriad other VFX artists. Composers and musicians. The list of available specialists goes on, with each one you add to your budget making a unique artistic contribution that years of experience have honed. One can hire fewer of these specialists and do the remainder of the work oneself, or one can hire more specialists in order to focus oneself more deeply on fewer personal contributions to the work, but the bottom line is that to do any manner of audio-visual presentation, you’ll need some combination of other people to create content of sufficient polish to be tolerable to an audience.

Even in other forms of media creation.

The four most solitary media forms I can think of at the moment, web comics, webcam entertainers, indie game programming, and podcasts, are all certainly things that individuals can do. And as such, they’re the most successful forms for truly independent media artists right now, “success” being loosely defined as something an individual can sustainably do in an ongoing fashion without becoming homeless.

This success encompasses everything from (a) being able to do these things as a sideline from day jobs, to (b) being able to quit the day job and be able to make a modest living doing the thing, to (c) upon rare occasions being able to do well financially when the thing somehow grabs an audience of significance.

This is an exciting development, and provides both hope to creators and a fertile body of lively work to audiences. But it is of limited applicability to the narrative short subject, feature-length, or episodic storyteller, or to the game developer doing a larger project involving ever more art, performance, and technology; in these instances, such “works of ambition” of necessity require more than a single person to either break even or be paid. And this isn’t just a matter of “well, shrink the crew,” because from what I’ve seen, even if you only add two more people to your project, you’ve significantly diminished your ability to be sustainable from the modest revenue streams available to the totally independent media creator (direct sales, Patreon subscriptions, online ads, t-shirts, swag, etc.). The most successful independent media creators I’m seeing out on the web are one-person-bands. Maybe they hire an assistant to do specific, containable things, but largely they’re working solo to produce work with very low overhead, or that’s subsidized somehow by circumstances unique to that person’s situation (their day job affords certain access to gear or expertise, for example).

If you want to tell a story in a manner beyond a recorded Moth podcast, let’s look at a solid minimum way of going about it, using narrative storytelling as an example. You write the script and direct it yourself. You borrow a DSLR kit from a friend in exchange for lunch. You shoot available light, but get a friend to hold a bounce card for you and generally help with the camera. You write a scenario with the minimum characters necessary to tell the story, three actors who bring their own wardrobe and do their own hair/makeup as necessary. You shoot on private locations arranged by friends in order to avoid municipal permits and the absolute necessity of production insurance (it’s never a good idea to avoid insurance, by the way). However, you realize that without good audio, all your intentions are for nought, so you get another friend with a Zoom recorder, and rent or borrow a microphone and boom pole. After the shoot, you decide to save on post by teaching yourself everything: editing, audio mixing, color grading, and title design. You use automated music software to generate a score, and you do all the post yourself. Done.

You’ve still needed the help of five other people. And this scenario depends on you being a good screenwriter, director, cameraperson, person with an eye for lighting, organizer, DIT, editor, sound designer, dialog editor, mixer, colorist, motion graphics designer, and finishing editor. You must do all of those things well in order to have something at a level of quality people won’t turn off after the first 10 or so seconds (or so I’m told).

If we’re talking about stories here, no matter what the medium, there are certain minimums. Someone has to create the story. Someone has to visualize the story. Someone must bring the characters to life. The result must be assembled and fine-tuned. Visuals must be polished. Audio must be polished. The people and the process must be organized somehow. Even for a project in which the characters are drawings on sequential cards brought to life with clever narration, it has to be written, drawn, voice acted, edited, mixed, finished, and organized.

Using any combination of innovative technologies, how many of those things can you do yourself?

How many things should anyone do by themselves?

Additionally, think on this. You’re not just hiring people to do things you aren’t good at doing yourself, or that you can’t do yourself. You’re hiring people to bring an additional perspective to the work being done. You’re hiring people who, hopefully, are in a position to say, “hey, that thing you thought was a good idea? It’s not, we should do something better, and here’s a suggestion.” You’re hiring people to bounce your ideas off of. The more you undertake to do by yourself, the fewer opportunities you get for someone else to bring value to your creative work, no matter what it is.

And if you’re not paying those people money? Then what you’re doing isn’t sustainable. You’re working at a loss, even if that loss is non-monetary, in terms of burning out your volunteers and collaborators over time. I’m not saying working with unpaid collaborators is bad; it’s certainly something that must be done of necessity from time to time when you’re new to your craft, but I am saying that it’s not an ideal way to pursue a long-term career.

So that’s why I’m dubious about the disruptive potential of technological advancements and new categories of entertainment for changing how projects are made when multiple artists are needed. Certainly a bit of “doing more with less” becomes more possible, but while you may have replaced your “cast of thousands” with a Massive simulation, you now need a not-insignificant team of programmers and artists to drive the replacement methodology at the same level of quality, who are likely (hopefully?) earning more per person than those extras would have.

Art, which in my view is hopefully the thing we’re all pursuing here, is a profoundly human engagement with the world. It’s the need of a person to say things to other people. The writing of stories, the framing of images, the realization of characters, the shaping of narrative through the units of shots and scenes, the moulding of audio to draw out the emotional potential of each scenario, the adjustment of color and contrast to perfect each image, the composition of music. Each one of these tasks provides the means for that person to communicate to the rest of us, and requires a universe of skills unto itself. Each part of the process is a craft where having a human practitioner with a point of view is valuable.

I have no doubt that finding ways of automating various of these tasks will increase efficiency and drive costs down, and I have no doubt that machine learning and various other technologies being developed will eventually eliminate the necessity of hiring, say, colorists, as filmmakers will be able to simply point at a style off a menu and have their scene instantly balanced and made to look like that. But where do we stop? Automated story generation? Automated editing technologies? Automated music generation is already upon us, and will undoubtedly improve, but how much fun is it to work with composers and musicians to find the unexpected?

My point of view about where innovation can help, rather than steamroll, the artist is that the overall goal is to empower humans to tell more and different types of stories, and for other humans to exercise their art to help improve these stories. At a certain point, unless the goal is to create streaming services powered by audience-feedback-driven automated story generators and to eliminate humans from the storytelling process altogether, we’re still going to want to find a way for human practitioners to sustainably do audio-visual art.

More to the point, in the face of increasing technology, automation, and innovation, there are still people who are going to want to do things the “old” way. Novels and short stories continue to be written, forms which, if you include epic poetry, go back millennia. Paintings and sculptures are still made from paint and stone. Comic books are still being written and drawn, increasingly digitally, but still by hand via stylus, and colored and lettered by separate individuals. Over thousands of years of evolving forms, theater is still being made. All of these methods of delivering narrative content to audiences continue to be practiced by artists and engaged with by audiences. Technology has certainly impacted the creation of all these forms, and yet artistic specialists continue. Authors need editors. Painters hire frame makers. Comics creators employ writers, pencillers, inkers, and letterers. Theater, depending on the production, has its own armies of specialists despite, and perhaps because of, the many more technologies that have become available to the theater production, many of which overlap with cinema and television production.

And so it goes with visual narrative. Technologies don’t alter the fundamental desire for humans to drive the mechanisms used by each of the various disciplines. Steenbecks have given way to NLEs, rolls of film and reels of mag have given way to volumes of bits, and klieg lights have given way to LED lighting, but the decisions about how to use all of these things continue to come from the human artistic impulse.

This isn’t to say I don’t believe in the need for evolution, to improve how media reaches the public and to make the means of visual storytelling available to a wider pool of culturally and politically varied individuals, because I agree that there is a problem. That problem is that the business of storytelling interferes with the process of storytelling and the selection of who does the storytelling, even as the business of storytelling enables storytelling at the scale at which mass audiences are interested in seeing it. It’s well and truly a conundrum in need of innovation.

However, visual narrative, be it cinema, series, video games, or whatever VR content proves to have the most durable audience, requires teams of people, and these teams ought not be considered a liability. Humans are not a bug in the system, they’re a feature, because when you hire right, each one of these people makes the project better. Which is nice, because people need jobs.

Innovation is great, but let’s try and focus it on the things that suck. Let’s not just envision innovation for the sake of eliminating people from a process that, in the long run, probably won’t need us as much as we need it.

The DaVinci Resolve Mini Panel next to my “maxi” panel, comparing for size

Now that the new DaVinci Resolve panels are announced, I wanted to weigh in on a few of my own observations, as I’ve been hands-on with them for a little while. All the normal caveats, take what I say with the appropriate grain of salt since I work with DaVinci and I’m obviously biased.

In a nutshell? I think they’re really nice.

But here are some specifics. The layout is, in my opinion, well thought out. I especially like that the bottom half of the Mini Panel is exactly the same as the Micro Panel, so if you use one, the muscle memory you develop applies to the other. Working my way through the different sets of controls, I find I can quickly access most of the soft-mapped tools I regularly use, including the Custom and HSL (Hue) Curve knobs, Log and Offset controls, different windows, and RGB pots.

With 20 knobs, eight of which are remappable and 12 of which are permanently mapped to some of the most common controls you use (including the Y Lift, Y Gamma, and Y Gain controls of which many Resolve colorists are so fond), this panel provides great simultaneous access to controls. The three grids of buttons, two to the left and right of the displays, and one to the right of the trackballs, are logically arranged and provide access to an excellent selection of the most frequently executed commands by the Resolve colorist. Everyone will undoubtedly be missing something they’d like to add, but there are few buttons that I think will not see use. Finally, a set of eight remappable buttons along the top round off what’s available. Blackmagic Design have jammed a whole lot of controls into a reasonably compact and portable control surface, and have done so with a layout that doesn’t feel cramped.

As a regular user of what is now called the Advanced Panel, I find that the trackballs, master contrast wheels, and knobs of the Mini Panel all feel as good as (or even a tiny bit better than) those of my big panel. The trackballs are a bit smaller than those of the Advanced, but to my hand it’s a negligible difference. Bottom line, these controls on the Mini Panel have all the smoothness and solidity of the Advanced Panel, with the added bonus that the wheels and trackballs can be disassembled for cleaning, a really handy thing for something small enough to drag around on set.

The DaVinci Resolve Mini Panel newly unpacked

The buttons are completely different to those found on the Advanced. While the Advanced Panel has hard plastic buttons that clack when you press them, the Mini and Micro Panel buttons are softer and make a muted thump when pressed. They have definite positive feedback when you press; you won’t wonder when the button engages, but because of their design there’s a double-thump to each button press, once when you press down, and again when the button springs back up. It took a little getting used to, but it’s not bad, just different. The buttons are backlit (white), so you’ll see them in the dark, and brighter buttons indicate when you’re in a particular mode.

The displays of the Mini Panel are beautiful, sharp, and crisp. The design of the menus that appear on them is really nice, with a good mix of icons (where it makes sense), labels, and values to show you what controls are available for each palette of color correction you can access.

The sizing menu in the DaVinci Resolve Mini Panel

The window menu in the DaVinci Resolve Mini Panel

One thing I really like is the new page system this panel introduces, that lets you move left and right through different pages of controls for palettes that have more controls than can be simultaneously displayed with the available two displays, eight mappable knobs, and eight mappable buttons.

Custom curves controls

Any palette (accessed via the grid of buttons to the left of the displays) that has multiple pages of controls displays dots under the page label text, with the number of dots showing how many pages are available. You then use the Left and Right arrow buttons to access each page. It’s logical, it’s quick, and it feels very straightforward. Additionally, some palettes with different modes of operation expose each mode as a button at the top of the screen (this can be seen in the image above). So it’s really fast to jump to the curve, qualifier mode, or sizing mode you want to use.

I especially like the Offset button that’s front and center, right above the gamma trackball. I don’t mind saying that one of the things I find a pain in the neck about the Advanced Panel is that it’s easy to forget whether or not the fourth trackball is set to Offset. With the Mini Panel, pressing the offset button dims the All and Level reset buttons of the Lift and Gamma controls, giving you a visual cue that the right-most trackball is now adjusting Offset. Since this button is so handy (as is Log), it’s easy and fast to switch the trackballs and rings among Primary Lift/Gamma/Gain, Log Shadow/Midtone/Highlight, and Offset as your mood takes you. As an added bonus, when you’re in Offset mode, the left and center rings adjust Temp and Tint (in case you’re wondering why no Temp and Tint controls along the middle row of knobs).

The offset control

When it comes to plugging the Mini Panel in, I really like that it has both USB-C and Ethernet connectivity. USB is nice and simple, but being able to connect Ethernet for long distances cheaply is a really nice option to have. It’s also really cool that you can power the panel over Ethernet as well. I also like that the new version of DaVinci Resolve 12.5.5 that’s been updated to be able to use these panels can auto-sense when the panels are plugged in via USB-C. If you’re connecting via Ethernet on macOS or Windows, a popup in the Control Panels panel of the preferences lets you choose your panel from a list (Linux users have to enter the IP address of the panel manually).

Mini Panel connections

Dedicated power connectors include a standard AC computer plug, and a 4-pin XLR connection suitable for grabbing power from a variety of situations on set, so including the PoE option, you have a lot of ways to get this thing turned on.

I should probably mention that the Micro Panel only has USB-C, so if you want the other connectivity options, you’ll need to get yourself the Mini.

Interestingly, I’m told that the USB-C port can provide power out to other devices when this panel is plugged in via the AC computer plug. That means if you’ve got a new computer or phone that can get power through USB-C, you can use the panel to power and charge it.

So, if you’re a DaVinci Resolve colorist who’s in the market for a set of panels and you can’t afford the $30,000 Advanced Panels, the Mini Panel is a fantastic choice, especially at $2,995 USD. The only caveat I would mention is that this panel only works with DaVinci Resolve. If you’re someone who uses a variety of panel-aware applications and you want a panel that can drive them all, you’ll want to look to Tangent Design, Avid, JL Cooper, or OxygenTec.

However, if DaVinci Resolve is your grading tool, you’ll be well served by these new panels. As always, I recommend finding somewhere you can give them a try before you throw your money down. Control surface touch and feel is an incredibly subjective thing, and just because one person likes the feel of a knob, trackball, ring, or button is no guarantee that another person will like it just as well. Hooray for choices!

Yes, it’s been MONTHS since I’ve posted an article. Partially I figured it’d take most people this long to get through the last gigantic article I wrote on HDR, and partially it’s been because I’ve been utterly slammed this year producing the pilot for a TV show I’ve been developing to shoot in China. I’d love to tell you more, (you have no idea how much), but there have been many twists and turns, “goes” and “stops,” and I’ve been waiting to see what’s going to happen with that before saying anything more specific (you know how these things go).

However, it’s come to my attention that a lot of folks who are considering using Resolve for editing are having trouble finding the commands they need, which leads them to wonder if Resolve even HAS those commands. Other than recommending a thorough read of the editing chapters I’ve painstakingly written over the years, the following movie has a nice tip for searching through the many commands Resolve provides. Resolve’s capabilities are deeper than you might think, and this will hopefully help you explore more widely what can be done.

Have fun, and hopefully I’ll have more news to share as I put my producer/director hat on in the coming months.

High Dynamic Range (HDR) video describes an emerging group of monitoring, video encoding, and distribution technologies designed to enable a new generation of television displays to play video capable of intensely bright highlights and increased maximum saturation. I’ve been keen on this technology ever since I first saw demonstrations at the 2015 NAB conference, and I’ve had the good fortune to sit with some excellent colorists who’ve been grading HDR projects to see what they’ve been doing with it. I’ve also managed to work on a few HDR grading jobs myself, on two different HDR displays, which was the point at which I felt I had something interesting to contribute to the topic.

While I’d started, many weeks ago, to write an overview of HDR for folks who are interested in what’s going on, the article’s ever-growing length meant it was still unfinished when I paused to attend the 2016 NAB conference to see this year’s update on what directions HDR seems to be taking. In the process, I was also invited to participate on a panel moderated by colorist Robbie Carman and hosted by Future Media Concepts, on which Katie Hinsen (Light Iron), Marco Solorio (One River Media), Bram Desmet (Flanders Scientific), Robert Carroll (Dolby), Joel Barsotti (SpectraCal) and I got to chat about HDR. Happily, it seems that most of what I’d written before NAB was in line with the experiences of others in the field, providing both confirmation and a sense of relief that I was on the right track.

In this article, I provide a summary, from a colorist’s perspective, of what HDR is, what the different flavors of HDR distribution look like right now, and how HDR works inside of DaVinci Resolve (this article is a vast expansion of a new section on HDR I added to the DaVinci Resolve 12.5 User Manual). Lastly, I try to provide some food for thought regarding the creative uses of HDR, in an effort to get you to think differently about your grading in the wake of this wonderfully freeing and expanded palette for viewing images.

Before I continue, I want to give thanks to some folks who generously provided information and answered questions in conversation as I developed this piece, including Robert Carroll and Bill Villarreal at Dolby, colorist Shane Ruggieri, Bram Desmet at Flanders Scientific, and Gary Mandle at Sony. I also want to thank Mason Marshall at the Best Buy in Roseville, Minnesota, who was able to give me a quite knowledgeable tour of the actual consumer HDR televisions that are for sale in 2016.

What Is It?

Simply put, HDR (High Dynamic Range) is an escape from the tiny box, as currently defined by BT.709 (governing color gamut), BT.1886 (governing EOTF), and ST.2080-1 (governing reference white luminance levels), in which colorists and the video signals/images they manipulate have been kept imprisoned for decades.

HDR for film and video is not the same as “high dynamic range photography,” which is a question I’ve gotten a few times from DPs I know. Whereas high dynamic range photography is about finding tricky ways of squeezing both dark shadow details and bright highlight details from wide-latitude image formats into the existing narrow gamuts available for print and/or on-screen display, HDR for film and video is about actually expanding the available display gamut, to make a wider range of dark-to-light tones and colors available to the video and cinema artist for showing contrast and color to viewers on HDR-capable displays.

It’s impossible to accurately show what HDR looks like in this article, given the screen you’re likely reading this on is not HDR, because the levels I’m discussing simply cannot be visually represented. However, if you look at a side-by-side picture of an HDR-capable display and a regular broadcast-calibrated BT.709 display, with the picture exposed for the HDR highlights, it’s possible to see how the peak highlights and saturation on both displays compare, relative to one another. In such a picture, the comparative dimness of the BT.709 display’s highlights is painfully obvious. The following (admittedly terrible) photo I took at NAB 2016 gives you somewhat of an idea what this difference is like. To be clear, were you to see the SDR display to the right by itself, you would have said it looks fine, but in contrast to the HDR image being displayed at the left, there’s no comparison.

Another approach to illustrating the difference between High Dynamic Range and BT.709 displays is to show a simulation of the diminished highlight definition, color volume, and contrast of the BT.709 image in a side by side comparison. Something similar can be seen in the following photo of a comparison of two images from the same scene represented on Canon reference displays. At left is the HDR image, at right is the BT.709 version of the image.

HDR (left) compared to a BT.709 rendition (right) of the same scene (above), on Canon displays

Again, these sorts of example images give you a vague impression of the benefits of HDR monitoring, but in truth they’re an extremely poor substitute for actually looking at an HDR display in person.

So, HDR displays are capable of representing an expanded range of lightness, and in the process can output a far larger color volume than previous generations of displays can. However, this expanded range of color and lightness is meant to be used in a specific way, at least for now as we transition from an all “Standard Dynamic Range” (SDR) distribution landscape, to a mixture of SDR and HDR televisions, disc players, streaming services, and broadcast infrastructure, using potentially different methods of distributing and displaying HDR signals.

The general idea is that much of the tonal range of an HDR image will be graded similarly to how an SDR image is graded now, with the shadows and midtones being treated similarly between traditionally SDR and HDR-graded images in order to promote wider contrast, maintain a comfortable viewing experience, and to ease backward compatibility when re-grading for non-HDR displays. “Diffuse white” highlights (such as someone’s white shirt), are where the expanded range of HDR begins to offer options for providing more vivid levels to the viewer. HDR’s most immediately noticeable benefit, however, is in providing abundant additional headroom for “peak” highlights and more intense color saturation that far exceeds what has been visible (without clipping) in SDR television and cinema up until now.

For example, a reference SDR display should have a peak luminance level of 100 “nits” (cd/m2), above which all video levels are (probably) clipped. Meanwhile, today’s generation of professional HDR displays have peak luminance levels of 1000, 2000, or even 4000 nits (depending on the model and manufacturer), and support at least most of the expanded P3 gamut for color. Eventually, televisions capable of displaying even brighter highlights (Dolby Vision and ST.2084 support levels up to 10,000 nits) and expanded color saturation (reaching out towards the promise of BT.2020) may become available.

And these peak HDR-strength highlights look spectacular.

Why Is This Cool?

Frankly, the only way to answer this question is to finagle yourself into an HDR screening. I can type until my fingers cramp about how wonderful all of this is, but without seeing it for yourself, the benefits of HDR are a bit abstract. Once you’ve seen it, you’ll know why it’s cool, why you’ll want to shoot your next project with HDR in mind (as I am), and why getting your hands on HDR as a colorist will be enormous fun. I’ve now sat in on several different HDR demonstration screenings, grading sessions, and theatrical viewings, and have had a few HDR grading gigs of my own, and everyone I’ve talked to afterwards, both colorists and clients, has been almost immediately enthusiastic.

The core benefits of HDR, as I see them, are two-fold.

Firstly, you can have portions of the highlights of your image exhibit extremely bright specular highlights, glints, and sparkles with far greater visible detail within these areas because much of the detail within these highlights won’t clip. Practically, this means that instead of clipping all highlights above 100 nits (ST.2080-1 standardizes the peak luminance that’s associated with displays set to output BT.709/BT.1886), now you can see the difference between a 100 nit detail, a 300 nit detail, a 500 nit detail, and an 800 nit detail within such a highlight, assuming you’re looking at an HDR display capable of showing you that range. There’s simply no comparison.

If we look at a linear vertical representation of these values, similarly to how we’d plot the scale of a waveform monitor, it becomes immediately obvious what a difference this is. Keep in mind that the tiny green slice at the bottom of the illustration represents the total range of luminance that’s available to colorists in a conventionally graded BT.709/BT.1886/ST.2080-1 image.

Secondly, and to me almost more importantly, richly saturated colorful and bright image details, such as neon lights, emergency vehicle lights, backlit tinted glass, explosion effects, firelight, skin shine and bright highlights, and other saturated reflective areas and direct light sources, as well as the glows and volumetric lighting effects they emit, may carry saturation well above the 100 nit level on an HDR display. This is a creative choice previously forbidden to colorists, who had to compress color saturation somewhat below the 100%/100 IRE/700 mV maximum allowed by most conservative QC specifications for broadcast television just to be on the safe side. With HDR, you no longer have to crush the life out of vividly bright highlights to squeeze them onto TV. You can actually leave them be, and revel in the abundance of smear-free extra saturation and detail you can allow in the highlights of sunsets, stained-glass windows, Vegas-style signage, and other brightly-lit areas of colorful detail.

Now, the illustration above, while exciting, is not quite accurate, in that the human eye has a logarithmic response to highlights. Practically speaking, this means that our eyes perceive a difference between two very bright levels as a smaller percentage of what that difference actually is. This is one reason why we can handle going outside on a sunny day without being blinded, when there are reflective nit levels all over the place that are off the chart of what we see on an SDR television or in an SDR movie theater. Not coincidentally, HDR signals are logarithmically encoded for distribution, and if we look at an actual logarithmically compressed waveform scope scale for evaluating HDR media, we get a somewhat more comprehensible comparison of SDR and HDR signals, that’s a bit more actionable from the colorist’s perspective.
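For the curious, the logarithmic encoding used for HDR distribution, the SMPTE ST.2084 “PQ” inverse EOTF, can be sketched in a few lines of Python. The constants come from the standard itself, but treat this as an illustration rather than production code:

```python
# SMPTE ST.2084 (PQ) inverse EOTF: absolute luminance in nits in,
# normalized 0.0-1.0 code value out. Constants are from the standard.
M1 = 2610.0 / 16384.0        # 0.1593017578125
M2 = 2523.0 / 4096.0 * 128   # 78.84375
C1 = 3424.0 / 4096.0         # 0.8359375
C2 = 2413.0 / 4096.0 * 32    # 18.8515625
C3 = 2392.0 / 4096.0 * 32    # 18.6875

def pq_encode(nits):
    """Encode an absolute luminance (0 to 10,000 nits) as a PQ code value."""
    y = max(nits, 0.0) / 10000.0
    return ((C1 + C2 * y ** M1) / (1.0 + C3 * y ** M1)) ** M2

# The allocation is steeply logarithmic: SDR reference white (100 nits)
# already occupies about half of the available code range.
print(round(pq_encode(100), 3))    # ~0.508
print(round(pq_encode(1000), 3))   # ~0.752
print(round(pq_encode(10000), 3))  # 1.0
```

Notice how the curve mirrors the perceptual behavior described above: the top half of the code range is devoted entirely to levels above SDR reference white, where our eyes discriminate differences less finely.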

Another advantage to HDR displays is that, since viewers experience contrast as the difference between the brightest and darkest pixels within an image, and since edge contrast is a visual cue for sharpness, having dramatically brighter pixels, even a few of them in the top highlights, means that the perceived contrast of the image will be dramatically higher, and details will appear to be much crisper. My experience from looking at a few HD-resolution HDR displays at NAB 2015 was that they appeared to be sharper than some of the 4K displays I was seeing, because HDR highlights add contrast that make the edges by which we evaluate sharpness really pop. Combining HDR with 4K will be an exceptional viewing experience no matter how huge your living room television is.

One last advantage to HDR for distribution is that, with few exceptions, HDR distribution standards require a minimum of 10 bits to accommodate the wide range being distributed (HDR mastering requires 12 bits). Even though those 10 bits will be stretched more thinly than with an SDR signal, given the expanded latitude of HDR, this hopefully means that a side benefit of HDR will be a reduction in the kind of 8-bit banding artifacts, in shadows and in areas of shallow gradated color such as blue skies or bare walls, that we’ve been cursed with ever since television first embraced digital signals. That alone is worth the cost of admission.
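To see why those two extra bits matter for banding, here’s a back-of-the-envelope sketch in Python. This is my own illustration, not drawn from any standard: it simply counts how many distinct code values a shallow gradient can land on at each bit depth.

```python
# Count how many distinct integer code values a narrow, smooth gradient
# can land on at a given bit depth. Fewer steps = more visible banding.
def distinct_steps(lo, hi, bits):
    levels = (1 << bits) - 1          # 255 for 8-bit, 1023 for 10-bit
    return int(hi * levels) - int(lo * levels) + 1

# A shallow shadow gradient occupying just 5% of the signal range:
print(distinct_steps(0.10, 0.15, 8))   # 14 steps -> likely banding
print(distinct_steps(0.10, 0.15, 10))  # 52 steps -> far smoother
```

Roughly four times the steps across the same tonal span, which is exactly the difference between a posterized sky and a smooth one.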

Another interesting thing about HDR is that, unlike other emerging distribution technologies such as Stereo 3D, high-frame-rate exhibition, wide gamuts, and ever-higher resolutions (4K, 8K) which engender quite a bit of debate about whether or not they’re worth it, HDR is something that nearly everyone I’ve spoken with, professional and layperson alike, agree looks fantastic once they’ve seen it. This, given all the griping about those other technologies I’d mentioned, is amazing to me. Furthermore, it’s easy for almost anyone to see the improvement, no matter what your eyeglass prescription happens to be.

(Updated) However, because it’s an emerging technology, the technical standards being promulgated at the moment exceed what the first few generations of consumer displays are capable of. I had a look at what’s on store shelves at the time of this writing in 2016, and depending on the make and model you get, consumer televisions are “only” capable of outputting a maximum of 300, 500, 800, or 1000-1400 nits peak luminance. Capabilities vary widely. Moreover, because display manufacturers are racing one another to improve each subsequent generation of consumer televisions, HDR standards for peak brightness are a moving target. While HDR this year means peak luminance of 300–1000 nits, maybe next year will bring a 2000 nit model. The year after that, who knows?

Because of this, two of the proposed mastering methods of HDR have been designed to accommodate up to 10,000 nits, while one other will accommodate up to 5,000 nits. Of course, no current television can get anywhere even remotely close to either of these maximum levels, but the Dolby Pulsar, which has the highest nit output display in use for mastering HDR (at the time of this writing), is capable of displaying an HDR signal with a peak luminance level of 4,000 nits, making this the de facto reference at facilities lucky enough to be grading programs from movie studios and content distributors that are mastering for Dolby Vision. Many other facilities are using 1000 nits as a more achievable de facto reference given that’s what the Sony BVM X300 HDR display is capable of doing.

This basically means that many colorists are grading and mastering programs to be future-proofed for later generations of television viewers with better televisions, and in the short term different strategies are employed to deal with how these higher-than-currently-feasible peak HDR-strength highlights will be displayed on the first generations of consumer HDR televisions.
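As an illustration of what one such strategy might look like, here’s a deliberately simplified highlight roll-off sketched in Python. To be clear, this is a toy example of my own, not any actual Dolby Vision or consumer-TV algorithm: levels below a chosen knee pass through untouched, and everything above the knee is compressed so a 4000-nit master peak lands exactly at a 1000-nit display’s peak.

```python
# Toy highlight roll-off (NOT any vendor's actual algorithm): fit a
# 4000-nit master onto a 1000-nit display. Below the knee, levels pass
# through untouched; above it, they're compressed so the master's peak
# maps exactly to the display's peak.
def roll_off(nits, master_peak=4000.0, display_peak=1000.0, knee=750.0):
    if nits <= knee:
        return nits
    x = (nits - knee) / (master_peak - knee)   # 0..1 position above knee
    return knee + (display_peak - knee) * (2.0 * x / (x + 1.0))

print(roll_off(100.0))   # 100.0  -- SDR-range levels untouched
print(roll_off(4000.0))  # 1000.0 -- master peak lands at display peak
```

The design choice here, preserving everything up to a knee and progressively squeezing only the top of the range, reflects the general principle described earlier: shadows and midtones stay consistent, and only the HDR-specific headroom gets remapped per display.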

Automatic Brightness Limiting (ABL)

There’s one other wrinkle. Consumer HDR displays have legally mandated (regulated by the California Energy Commission and by similar European agencies) limits on the maximum power that televisions can use in relation to their size and resolution. Consequently, automatic brightness limiting (ABL) circuits are a common solution manufacturers use to limit power consumption to acceptable and safe levels for home use. Practically speaking, an ABL circuit limits the percentage of the picture that may reach peak luminance without automatically dimming the display. This type of ABL limiting is not required on professional displays, but some manner of limiting may still be used to protect the display from damage stemming from drawing more current than they can handle in exceptionally bright scenes.

Naturally, on my first HDR grading job I was keenly interested in just how much of the picture could go into very-bright HDR levels before the average consumer HDR-capable TV would interfere, since I didn’t want to push things too far. Unfortunately, nobody could tell me what that threshold was at the time, so I simply proceeded with caution, grading relative to the 30″ Sony BVM X300 display we were using as our HDR reference display (and a beautiful monitor it is). The grade went well, I tried to be judicious about how far I pushed the brightest of the signal levels, and the client went away with a master that made them happy (sadly, it was a secret project…).

Later, I had the good fortune of speaking with Gary Mandle, of Sony Professional Solutions, who illuminated the topic of how ABL affects the HDR image, at least so far as the BVM X300 is concerned. A number of different rules are followed, all of which interact with one another:

In general, only 10% of the overall image may reach the X300’s peak brightness of 1000 nits (assuming the rest of the signal is back down at 100 nits or under)

The overall image is evaluated to determine the allowable output. An extremely simple (and certainly oversimplified) example is that you could (probably) have 20% of the signal at 500 nits, rather than 10% at 1000 nits. I have no idea if this kind of tradeoff is linear, so the truth undoubtedly varies. The general idea is that if you only had, say, 2% of the image at 1000 nits, and 5% of the image at 500 nits, then you can probably have a reasonable additional percentage of the image at 200 nits, which is by no means at the top of the range, but is still twice as bright as SDR (standard dynamic range) images that peak at 100 nits. I don’t know what the actual numbers are, but the basic idea is that the total percentage of pixels of HDR-strength highlights you’re allowed to have depends on the intensity of those pixels.

The dispersion of image brightness over the area of the screen is also evaluated, and output intensity is managed so that areas with a lot of brightness don’t overheat the OLED panel.

Long story short, how ABL gets triggered is complicated, and while you can keep track of how much of the image you’re pushing into HDR-specific highlights, how bright those highlights are, and how clustered or scattered the highlights happen to be, there will still be unknowable interactions at work. Fortunately, the Sony BVM X300 has an “Over Range indicator” light, which illuminates and turns amber whenever ABL is triggered, so you know what’s happening and can back off if necessary. Incidentally, it’s worth noting that the X300, being an OLED display, is susceptible to screen burn-in if you leave bright levels on-screen for too long, so don’t leave an HDR image on pause going out to your display before going home for the evening.
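Lacking the real numbers, a toy power-budget model can at least make the idea concrete. Everything below (the linear trade-off, the 10% budget, the 100 nit floor) is invented purely for illustration; real ABL logic is proprietary and, as noted above, almost certainly not this simple.

```python
# Toy sketch of an ABL-style power budget check. The budget model and
# thresholds below are invented for illustration only; actual ABL
# behavior is proprietary and display-specific.

def abl_margin(pixels_nits, peak=1000.0, budget_fraction=0.10):
    """Estimate how close a frame is to a hypothetical power budget.

    pixels_nits: iterable of per-pixel luminance values in nits.
    budget_fraction: fraction of the frame allowed at full peak
                     (the X300's rough 10% rule of thumb, above).
    Returns a ratio; > 1.0 means our toy limiter would engage.
    """
    pixels = list(pixels_nits)
    # Power spent on HDR-strength highlights, normalized so that
    # `budget_fraction` of the frame at `peak` exactly spends the budget.
    spent = sum(max(p - 100.0, 0.0) / (peak - 100.0) for p in pixels)
    budget = budget_fraction * len(pixels)
    return spent / budget

# 10% of a frame at 1000 nits exactly consumes the toy budget...
print(abl_margin([1000.0] * 100 + [100.0] * 900))  # 1.0

# ...while 20% at 550 nits spends the same budget (a linear trade-off,
# which real displays almost certainly don't follow).
print(abl_margin([550.0] * 200 + [100.0] * 800))   # 1.0
```

The point of the sketch is only that intensity and pixel count trade off against a shared budget; the actual weighting, and the spatial clustering factor, remain unknowable from the outside.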

Bram Desmet, CEO of Flanders Scientific, pointed out that VESA publishes a set of test patterns (ICDMtp-HL01) devised by the International Committee for Display Metrology (ICDM) which can be used to analyze a display’s (a) susceptibility to halation, defined as “the contamination of darks with surrounding light areas,” and (b) susceptibility to power loading, which describes screens “that cannot maintain their brightest luminance at full screen because of power loading.” The set consists of two groups of ten test patterns. Black squares against white backgrounds are used to measure halation, while white squares against black backgrounds are used to measure power loading. For the power loading patterns, the ten patterns feature progressively larger white squares against a black background labeled as L05 to L90; the number indicates what diagonal percentage of the screen each box represents (which I’m told is different from a simple percentage of total pixels).

By measuring a display’s actual peak luminance while outputting progressively larger white boxes on black backgrounds, you can determine the maximum percentage of screen pixels that are possible to display at full strength before peak luminance is reduced due to power limiting. Of course, this doesn’t account for all the factors that trigger ABL, but it does provide at least one comprehensible metric for display performance, and some display manufacturers cite one of these test patches as an indication of a particular display’s performance.

Of course, the ABL on consumer televisions is potentially another thing entirely, as each manufacturer will have their own secret sauce for how to handle excess HDR brightness that exceeds a given television’s power limits. Hopefully, consumer ABL will be close enough to the response of professional ABL that we colorists won’t have to worry about it too much, but this will be an area for more exploration as time goes on and more models of HDR televisions become available.

(Update) In fact, I had just published this article when I had to run over to Best Buy to purchase a video game for a friend who I’ve decided is entirely too productive with their time. While I was there, I had a look at the televisions, and in the course of chatting about all of this (because I can’t stop), associate Mason Marshall pointed out a chart at rtings.com that does the kind of test chart evaluation I mentioned previously to investigate the peak luminance performance of different displays as they output different percentages of maximum white. The results are, ahem, illuminating. For example, while the Samsung KS9500 outputs a startling 1412 nits when 2% of the picture is at maximum white, peak luminance drops to 924 nits with 25% of the picture at maximum white, and it drops further down to 617 nits with 50 percent of the picture at maximum white. Results of different displays vary widely, so check out their chart. Now, this simple kind of Loading Pattern test isn’t going to account for all the variables that a display’s ABL contends with, but it does show the core principle in action of which colorists need to beware.
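If you have a few measured points for a given display, such as the rtings.com figures quoted above, you can at least ballpark the sustained peak at intermediate window sizes. This is a rough sketch; linear interpolation between measurements is my assumption, not how the panel actually behaves.

```python
# Rough interpolation of sustained peak luminance from window-size
# measurements like those rtings.com publishes. The three KS9500 data
# points are the ones quoted above; values in between are a linear
# guess, not a measurement.

from bisect import bisect_left

KS9500 = [(2, 1412.0), (25, 924.0), (50, 617.0)]  # (% window, nits)

def estimated_peak(window_pct, measurements=KS9500):
    xs = [m[0] for m in measurements]
    ys = [m[1] for m in measurements]
    if window_pct <= xs[0]:
        return ys[0]
    if window_pct >= xs[-1]:
        return ys[-1]
    i = bisect_left(xs, window_pct)
    # Linear interpolation between the two bracketing measurements.
    t = (window_pct - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

print(estimated_peak(25))    # 924.0 (a measured point)
print(estimated_peak(37.5))  # 770.5 (halfway between 924 and 617)
```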

Dire as all this may sound, don’t be discouraged. Keep in mind that, at least for now, HDR-strength highlights are meant to be flavoring, not the base of the image. My experience so far has been if you’re judicious with your use of very-bright HDR-strength highlights, you’ll probably be relatively safe from the ravages of ABL, at least so far as the average consumer is concerned. Hopefully as technology improves and brighter output is possible with more efficient energy consumption, these issues will become less of a consideration. For now, however, they are.

More About Halation

Because of the intense light output required by HDR displays, different backlighting schemes are being developed to achieve the necessary peak luminance while attempting to keep power consumption economical. This is a period of rapid change in display technologies, but at this point in time some displays may exhibit halation in certain scenes, which can be seen as a fringing or ringing in lighter areas of the picture that surround darker subjects. These artifacts are not in the original signal, but are a consequence of a display whose backlighting technology is susceptible to this issue. This is the reason for the Halation test patterns described above, and it’s something you should keep an eye out for when looking at HDR displays you want to use for professional work.

Terminology in the Age of HDR

The advent of HDR requires some new distinguishing terminology, most of which has already been used in this article. Still, in the interests of clarification, SDR, or Standard Dynamic Range, describes video as it has previously been experienced on conventional consumer televisions, where we talk about a display’s EOTF (electro-optical transfer function) being (hopefully) governed by the BT.1886 standard, and your peak luminance level is probably (if you’ve calibrated) 100 nits as defined by the ST.2080-1 standard. Of course, standards compliance is entirely dependent on you and your clients choosing the correct settings on your displays, and maintaining the calibration of said displays on a regular-enough basis.

If you want to be specific, a “nit” is a colloquialism for candelas per meter squared (cd/m²), a unit for measuring emitted light. Nits is easier to type and more fun to say.

At the risk of being redundant, HDR describes video meant to be shown on a display that delivers considerably higher peak reference white levels, and that doesn’t use the BT.1886 EOTF you’re used to with SDR. Instead, HDR displays use an EOTF that’s described either by the ST.2084 or Hybrid Log-Gamma (HLG) standards (more on these later).

It used to be that Gamma was colloquially used to describe how image values at different levels of tonality were displayed when output to a SDR television. With the ratification of BT.1886 recommending a slightly more complicated tonal response with which to standardize modern digital SDR displays, we must now refer more specifically to the EOTF of a display, which describes the same principle of how image values at different levels of tonality are output on a display, but in a more general way that may encompass multiple methods and standards.

So, BT.1886, ST.2084, and HLG each describe a different EOTF. On a brand new professional HDR display, you must make sure that it’s set to the correct EOTF for the type of signal you’re mastering, since it can probably be set to any one of these standards.
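For the curious, the ST.2084 (PQ) EOTF itself is public and simple to implement. Here’s a minimal sketch in Python, using the constants from the SMPTE specification:

```python
# The ST.2084 (PQ) EOTF, mapping a normalized code value [0, 1] to
# absolute luminance in nits (cd/m²), plus its inverse. Constants are
# as published in SMPTE ST.2084.

M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_eotf(code):
    """Normalized PQ code value -> luminance in nits."""
    p = code ** (1 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

def pq_inverse_eotf(nits):
    """Luminance in nits -> normalized PQ code value."""
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

# 100 nit SDR reference white lands at roughly half the code range,
# which is part of why an undecoded PQ signal looks so flat.
print(round(pq_inverse_eotf(100) * 1023))  # ~520 on a 10-bit scale
print(round(pq_eotf(1.0)))                 # 10000
```

Note how 100 nit reference white encodes to roughly code value 520 on a 10-bit scale, leaving the entire upper half of the signal for HDR-strength highlights.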

HDR is Not Tied to Resolution

Whether a signal is SDR or HDR has nothing to do with display resolution, gamut, or frame rate. These characteristics are all completely independent of one another. Most importantly:

HDR is resolution agnostic. You can have a 1080p (HD) HDR image, or you can have a 3840 x 2160 (UHD) SDR image, or you can have a UHD HDR image. Right this moment, a display being capable of HDR doesn’t guarantee anything else about it.

HDR is gamut agnostic as well, although the HDR displays I’ve seen so far adhere either to P3, or to whatever portion of the far wider Rec.2020 gamut they can manage. Still, there’s no reason you couldn’t master a BT.709 signal with an HDR EOTF, it’d just be kind of sad.

You can deliver HDR in any of the standardized frame rates you care to deliver.

That said, the next generation of professional and consumer displays seems focused on the combination of UHD resolution (3840×2160) and HDR, with at least a P3 gamut. To encourage this, the HDR10 industry recommendation or “Ultra HD Premium” industry brand-name is being attached to consumer displays capable of a combination of such high-end features (more on this later). As a side note, HDR10 is not the same as Dolby Vision, although both standards use the same EOTF as defined by ST.2084 (more on this later).

Higher resolutions are not required to output HDR images. They’re just nice to have in addition.

How Do You Shoot HDR?

You don’t.

By which I mean to say that you’re not required to do anything in particular to shoot material that’s suitable for HDR grading if you’re using one of numerous digital cinema cameras available today that are capable of capturing and recording 13 – 15 stops of wide-latitude imagery. The more latitude you have in the source signal, the greater a range of imagery you’ll be able to make available to the colorist for fitting into the above-100 nit overhead that HDR allows. My first client-driven HDR job consisted of RED DRAGON R3D media, which wasn’t originally shot for HDR grading. However, there was plenty of extra signal available in the raw highlights to create compelling HDR-strength highlights with naturalistic detail.

Of course, I imagine intrepid DPs will find themselves making all kinds of different decisions, potentially, about whether or not to let windows blow out, what to do with ND, how to deal with direct sunlight, etcetera. However, since most of the signal (shadows and midtones) in a well-graded image will initially continue to be graded down around 0-100 nits, you’re probably not going to be doing anything radically different in terms of how you shoot faces, shadows, and anything up to the sorts of diffuse white highlights that constitute the bedrock of your images. You just have to know that whatever peak highlights you have in the frame will be preserved, and have the potential to venture into super-bright levels, so you should start planning your highlights within the image accordingly.

I’m guessing DPs will start asking for a lot more flags on set.

Even if you’re shooting with a camera that doesn’t have the widest latitude possible, colorists can always “gin up” HDR-strength highlights in post from low-strength highlights, by isolating whatever highlights there happen to be and stretching them up to reasonably good effect. You probably won’t want to push these kinds of “fake” HDR-strength pixels as high as you would genuinely wide-latitude highlights for fear of banding and artifacts given the thin image data, but you can still do a lot, so you’re not without options.
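As a sketch of the idea (the knee and target peak below are arbitrary examples, and in practice you’d isolate highlights with a soft qualifier rather than a hard knee):

```python
# Toy version of "ginning up" HDR-strength highlights from SDR-range
# material: leave everything below a knee untouched and linearly
# stretch what's above it. Knee and peaks are illustrative values only.

def stretch_highlights(nits, knee=80.0, src_peak=100.0, dst_peak=600.0):
    """Piecewise-linear highlight stretch; input and output in nits."""
    if nits <= knee:
        return nits
    # Remap the [knee, src_peak] range onto [knee, dst_peak].
    t = (nits - knee) / (src_peak - knee)
    return knee + t * (dst_peak - knee)

print(stretch_highlights(50))   # 50.0  (midtones untouched)
print(stretch_highlights(100))  # 600.0 (old peak -> new peak)
```

The steeper the stretch, the thinner the image data gets spread, which is exactly where the banding and artifacts mentioned above come from.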

Bottom line, if you already own a camera with reasonably wide latitude, HDR won’t be an excuse to buy another one, and it seems to me that there’s nothing extra you need to buy for the camera or lighting departments if you want to shoot media for an HDR grade. At least, not unless you really, really want to. As time goes on, I’m sure DPs will find new methodologies for taking advantage of greater dynamic range, and there will be much more to say on the subject. We’re in the very early days of HDR, and I’m sure I’ll have more interesting advice to contribute after working with my DP on my next shoot.

Don’t Lose Your Dynamic Range in Post

It ought to go without saying, but shooting wide-latitude images in the field as raw or log-encoded media files is only useful so long as you preserve this wide latitude during post-production. In terms of mastering, grading with your camera-original raw files such as R3D, ARRIRAW, Sony RAW, and CinemaDNG is an easy way to do this.

If you’re dealing with VFX pipelines, you can transcode wide-latitude raw media into log-encoded 16-bit OpenEXR files to retain latitude in a media format that’s useful in a wide variety of applications. Otherwise, grading with 12-bit log-encoded 4:4:4 sampled media in formats such as ProRes 4444, ProRes 4444 XQ, or DNxHR 444 will also preserve the latitude necessary for high-quality HDR grading. In either case, documentation from Dolby indicates that PQ-, Log C-, and Slog-encoded media is all suitable within a 12- or 16-bit container format.

Happily, all of these formats are compatible with DaVinci Resolve.

The Different Formats of HDR

Now that we’ve discussed in broad terms what HDR is, and what it takes to make it, how is it mastered?

While different HDR technologies use different methods to map the video levels of your program to an HDR display’s capabilities, they all output a “near-logarithmically” encoded signal that requires a compatible television that’s capable of correctly stretching this signal into its “normalized” form for viewing. This means if you look at an HDR signal that’s output from the video interface of your grading workstation on an SDR display, it will look flat, desaturated, and unappealing until it’s plugged into your HDR display of choice.

A log-like HDR image with 4000 nit peak highlights

It should go without saying that most professional grading applications such as FilmLight Baselight and SGO Mistika support HDR in color management, grading, and finishing workflows, and everything I describe in this article that’s non app-specific equally applies to HDR being worked on in any software environment with support for the standards you want to use. Since I’m obviously most familiar with DaVinci Resolve, that’s what I describe in this article.

At the time of this writing, there are three approaches to mastering HDR that DaVinci Resolve is capable of supporting: Dolby Vision, HDR10 using ST.2084, and Hybrid Log-Gamma (HLG). Each of these HDR mastering/distribution methods focuses on describing how an HDR signal is encoded for output, and how that signal is later mapped to the output of an HDR display.

Each of these standards are most easily enabled using Resolve Color Management (RCM) via Color Space options in the Color Management panel of the Project Settings. Alternately, LUTs are available for each of these color space conversions if you want to do things the old-fashioned way, but Resolve Color Management has become so mature in the last year that, from experience, I personally recommend this approach to handling HDR within Resolve.

However, these standards have nothing to say about how these HDR-strength levels are to be used creatively. The question of how to utilize the expansive headroom for brightness and saturation that HDR enables is fully within the domain of the colorist: a series of artistic decisions about how to assign the range of highlights available in your source media to the above-100 nit HDR levels you’re mastering to as you grade, given the peak reference white that you’re mastering with.

Funnily enough, even though HDR workflows are most easily organized using scene-referred color management, at the moment, HDR grading decisions are display-referred by virtue of the fact that the HDR peak luminance level of the display you happen to be using (1000 nit, 4000 nit, more?) will strongly influence the creative decisions you make, despite underlying HDR distribution standards all having much higher maximums.

Because of all of this, the following sections will describe in general terms how to work with Dolby Vision, HDR10, and Hybrid Log-Gamma in Resolve. However, the creative use of HDR will be addressed separately in a later section.

Dolby Vision

(Updated) Long a pioneer and champion of the concept of HDR for enhancing the consumer video experience, Dolby Labs has developed a proprietary method for encoding HDR called Dolby Vision. Dolby Vision defines a “PQ” color space, with an accompanying PQ electro-optical transfer function (EOTF) that is designed to accommodate displays capable of a wide luminance range, from 0 to 10,000 cd/m². In short, instead of mastering with the BT.1886 EOTF, you’ll be mastering with the ST.2084 (or PQ) EOTF instead.

However, to accommodate backwards compatibility with SDR displays, as well as the varying maximum brightness of different makes and models of HDR consumer displays, Dolby Vision has been designed as a two-stream video delivery system consisting of a base layer and an enhancement layer with metadata. On an SDR television, only the base layer is played, which contains a Rec.709-compatible image that’s a colorist-guided approximation of the HDR image. On an HDR television, however, both the base and enhancement layers will be recombined, using additional “artistic guidance” metadata generated by the colorist to determine how the resulting HDR image highlights should be scaled to fit the varied peak luminance levels and highlight performance that’s available on any given Dolby Vision compatible television. Dolby Vision also supports a more bandwidth-friendly single layer delivery stream that is not backwards compatible; mastering is identical for both single and dual layer delivery.

Those, in a nutshell, are the twin advantages of the Dolby Vision system. It’s backward compatible with SDR televisions, and it’s capable of intelligently scaling the HDR highlights, using metadata generated by the colorist as a guide, to provide the best representation of the mastered image for whatever peak luminance a particular television is capable of. All of this is guided by decisions made by the colorist during the grade.

So, who’s using Dolby Vision? At the time of this writing, all seven major Hollywood studios are mastering in Dolby Vision for Cinema. Studios that have pledged support to master content in Dolby Vision for home distribution include Universal, Warner Brothers, Sony Pictures, and MGM. Content providers that have agreed to distribute streaming Dolby Vision content include Netflix, Vudu, and Amazon. If you want to watch Dolby Vision content on television at home, consumer display manufacturers LG, TCL, Vizio, and HiSense have all announced models with Dolby Vision support.

DaVinci Resolve Hardware Setup for Dolby Vision

To make all this work in DaVinci Resolve, you need a somewhat elaborate hardware setup, consisting of the following equipment:

A standalone hardware video processor called the Content Mapping Unit (CMU), which is a standard computer platform with a Video I/O card. The CMU is only available from Dolby Authorized System Integrators; you must contact Dolby for an Authorized Systems Integrator near you.

A video router, such as the BMD Smart Videohub

This hardware is all connected as seen in the following illustration:

In one possible scenario, you’ll connect your Resolve workstation’s dual SDI outputs to the BMD Smart Videohub, which splits the video signal to two mirrored sets of SDI outputs. One mirrored pair of SDI outputs goes to your HDR display. The other mirrored pair of SDI outputs goes to the CMU (Content Mapping Unit), which is itself connected to your SDR display via SDI. Lastly, the Resolve workstation is connected to the Dolby CMU via Gigabit Ethernet to enable the CMU to communicate back to Resolve.

The CMU is an off-the-shelf video processor that uses a combination of proprietary automatic algorithms and colorist-adjustable metadata within Resolve to define, at least initially, how an HDR-graded video should be transformed into an SDR picture that can be displayed on a standard Rec. 709 display, as well as how the enhancement layer should scale itself to varying peak luminance levels.

Dolby Vision automatic analysis and manual trim controls in DaVinci Resolve send metadata to the CMU that’s encoded into the first line of the SDI output. This metadata guides how the CMU makes this transformation, and the controls for adjusting this metadata are exposed in the Dolby Vision palette. These controls consist of luminance-only Lift/Gamma/Gain controls (that work slightly differently than those found in the Color Wheels palette), Chroma Weight (which darkens parts of the picture to preserve colorfulness that’s clipping in Rec.709), and Chroma Gain.

Dolby Vision Palette in the Color page

(Updated) These Dolby Vision analysis and trim controls in DaVinci Resolve send metadata to the CMU by encoding it into the first line of the SDI output, and this metadata guides how the CMU makes its transformation. Because the CMU is actually the functional equivalent of the Dolby Vision chip that’s inside each Dolby Vision-enabled television, what you’re really doing is using the CMU to make your SDR display simulate a 100 nit Dolby Vision television.

Additionally, the CMU can be used to output 600 nit, 1000 nit, and 2000 nit versions of your program, if you want to see how your master will scale to those peak luminance levels. This, of course, requires the CMU to be connected to a display that’s capable of being set to those peak luminance output levels.

Though not required, you have the option to visually trim your grade at up to four different peak luminance levels, using 100 nit, 600 nit, 1000 nit, and 2000 nit reference points, so you can optimize a program’s visuals for the peak luminance and color volume performance of many different televisions with a much finer degree of control. If you take this extra step, Dolby Vision compatible televisions will use the artistic guidance metadata you generate in each trim pass to ensure the creative intent is preserved as closely as possible, in an attempt to provide the viewer with the best possible representation of the director’s intent.

For example, if a program were graded relative to a 4000 nit display, along with a single 100 nit Rec 709 trim pass, then a Dolby Vision compatible television with 750 nit peak output will reference the 100 nit trim pass artistic guidance metadata in order to come up with the best way of “splitting the difference” to output the signal correctly. On the other hand, were the colorist to do three trim passes, the first at 100 nits, a second at 600 nits, and a third at 1000 nits, then a 750 nit-capable Dolby Vision television would be able to use the 600 and 1000 nit artistic intent metadata to output more accurate HDR-strength highlights that take better advantage of the 750 nit output of that television.
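The actual mapping logic inside a Dolby Vision television is proprietary, but the bracketing idea can be sketched. The function below is purely illustrative; it only shows which mastered trim targets a given display would sit between, not how the interpolation itself is performed:

```python
# Illustrative only: which trim passes would a TV of a given peak
# luminance consult? Real Dolby Vision mapping is proprietary; this
# just demonstrates the "bracketing" idea described above.

def bracketing_trims(tv_peak, trim_targets):
    """Return the (lower, upper) mastered trim targets, in nits, that
    a tv_peak-nit display would interpolate between."""
    targets = sorted(trim_targets)
    below = [t for t in targets if t <= tv_peak]
    above = [t for t in targets if t >= tv_peak]
    lo = below[-1] if below else targets[0]
    hi = above[0] if above else targets[-1]
    return (lo, hi)

# With only a 100 nit trim, a 750 nit TV has little to go on...
print(bracketing_trims(750, [100]))             # (100, 100)
# ...but with 100/600/1000 nit trims it can interpolate sensibly.
print(bracketing_trims(750, [100, 600, 1000]))  # (600, 1000)
```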

You should note that to expose the Dolby Vision controls in DaVinci Resolve Studio, you need a Dolby Vision Mastering license from Dolby. More instructions for all of this are available in the DaVinci Resolve User Manual.

Dolby Vision Certified Mastering Monitors

At the time of this writing, only three displays have been certified as Dolby Vision Certified Mastering Monitors. Requirements include a minimum peak brightness of 1000 nits, a 200,000:1 contrast ratio, P3 color gamut, and native support for SMPTE ST.2084 as the EOTF (otherwise known as PQ). When grading Dolby Vision, your monitor should be set to a P3 gamut using a D65 white point. Suitable displays include:

The Sony BVM X300 (30″, 1000 nit peak luminance, 4K)

The Dolby PRM 32FHD (32″, 2000 nit peak luminance, 1080)

The Dolby Pulsar (42″, 4000 nit, 1080)

Of these, only the Sony is commercially available. The Dolby monitors are not commercially available, and are provided only in limited availability from Dolby.

Setting Up Resolve Color Management For Grading HDR

Once the hardware is set up, setting up Resolve itself to output HDR for Dolby Vision mastering is easy using Resolve Color Management (RCM). In fact, this procedure is pretty much the same no matter which HDR mastering technology you’re using. Only specific Output Color Space settings will differ.

Set Color Science to DaVinci YRGB Color Managed in the Master Project Settings, and Option-click the Save button to apply the change without closing the Project Settings. Then, open the Color Management panel, and set the Output Color Space pop-up to the HDR ST.2084 setting that corresponds to the peak luminance, in nits, of the grading display you’re using. For example, if you’re grading with a Sony BVM X300, choose HDR ST.2084 1000 nits. At the time of this writing, RCM supports six HDR ST.2084 peak luminance settings:

HDR ST.2084 300 nits

HDR ST.2084 500 nits

HDR ST.2084 800 nits

HDR ST.2084 1000 nits

HDR ST.2084 2000 nits

HDR ST.2084 4000 nits

This setting is only the EOTF (a gamma transform, if you will). If “Use Separate Color Space and Gamma” is turned off, the Timeline Color Space setting will define your output gamut. If “Use Separate Color Space and Gamma” is turned on, then you can specify whatever gamut you want in the left Output Color Space pop-up menu, and choose the EOTF from the right pop-up menu.

Be aware that whichever HDR setting you choose will impose a hard clip at the maximum nit value supported by that setting. This is to prevent accidentally overdriving HDR displays, which can possibly have negative consequences depending on which display you happen to be using.

Next, choose a setting in the Timeline Color Space that corresponds to the gamut you want to use for grading, and that will be output. For example, if you want to grade the timeline as a log-encoded signal and “normalize” it yourself, you can choose Arri Log C or Cineon Film Log. If you would rather have Resolve normalize the timeline to P3-D65 and grade that way, you could choose that setting as well.

Be aware that, when it’s being properly output, HDR ST.2084 signals appear to be very log-like, in order to pack their wide dynamic range into the bandwidth of a standard video signal. It’s the HDR display itself that “normalizes” this log-encoded image to look as it should. For this reason, the image you see in your Color page Viewer is going to appear flat and log-like, even though the image being displayed on your HDR reference display looks vivid and correct. If you want to make the image in the Color Page Viewer look “normalized” at the expense of clipping the HDR highlights, you can use the 3D Color Viewer Lookup Table setting in the Color Management panel of the Project Settings to assign the appropriate “HDR X nits to Gamma 2.4 LUT,” with X being the peak nit level of the HDR display you’re using.
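Conceptually, what such a viewer LUT does is decode the PQ signal to nits, clip everything above SDR reference white, and re-encode with a 2.4 display gamma. Here’s a minimal sketch of that transform (the ST.2084 constants are from the spec; Resolve’s shipping LUTs may well differ in their details):

```python
# Sketch of an "HDR X nits to Gamma 2.4" style viewer transform:
# decode PQ to nits, hard-clip at 100 nits, re-encode at gamma 2.4.
# ST.2084 constants per the SMPTE spec; the clip behavior is my
# assumption about what such a LUT does, not Resolve's actual code.

M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_to_gamma24(code):
    """Normalized PQ code -> normalized gamma 2.4 code, clipped at 100 nits."""
    p = code ** (1 / M2)
    nits = 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)
    return (min(nits, 100.0) / 100.0) ** (1 / 2.4)

print(pq_to_gamma24(1.0))  # 1.0 (everything above 100 nits clips to white)
print(pq_to_gamma24(0.0))  # 0.0
```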

Additionally, the “Timeline resolution” and “Pixel aspect ratio” (in the project settings) that your project is set to use is saved to the Dolby Vision metadata, so make sure your project is set to the final Timeline resolution and PAR before you begin grading.

Resolve Grading Workflow For Dolby Vision

Once the hardware and software is all set up, you’re ready to begin grading Dolby Vision HDR. The general workflow in DaVinci Resolve is fairly straightforward.

First, grade the HDR image on your Dolby Vision Certified Mastering Monitor to look as you want it to. Dolby recommends starting by setting the look of the HDR image first, to determine the overall intention for your grade.

When using various grading controls in the Color page to grade HDR images, you may find it useful to enable the HDR Mode of the node you’re working on by right-clicking that node in the Node Editor and choosing HDR mode from the contextual menu. This setting adapts that node’s controls to work within an expanded HDR range. Practically speaking, this makes controls that operate by letting you make adjustments at different tonal ranges, such as Custom Curves, Soft Clip, etcetera, work over an expanded range, which makes adjusting wide-latitude images being output to HDR much easier.

When you’re happy with the HDR grade, click the Analysis button in the Dolby Vision palette. This analyzes every pixel of every frame of the current shot, and performs and stores a statistical analysis that is sent to the CMU to guide its automatic conversion of the HDR signal to an SDR signal.

If you’re not happy with the automatic conversion, use the Lift/Gamma/Gain/Chroma Weight/Chroma Gain controls in the Dolby Vision palette to manually “trim” the result to the best possible Rec.709 approximation of the HDR grade you created in step 1. This stores what Dolby refers to as “artistic guidance” metadata.

(Updated) If you obtain a good result, then move on to the next shot and continue work. If you cannot obtain a good result, and worry that you may have gone too far with your HDR grade to derive an acceptable SDR downconvert, you can always trim the HDR grade a bit, and then re-trim the SDR grade to try and achieve a better downconversion. Dolby recommends that if you make significant changes to the HDR master, particularly if you modify the blacks or the peak highlights, you should re-analyze the scene. However, if you only make small changes, then reanalyzing is not strictly required.

As you can see, the general idea promoted by Dolby is that the colorist will focus on grading the HDR picture relative to the 1000, 2000, 4000, or higher nit display that is being used, and will then use the Dolby Vision controls to “trim” this into a 100 nit SDR version, with this artistic guidance turned into metadata and saved for each shot. This “artistic guidance” metadata is saved into the mastered media, and it’s used to more intelligently scale the HDR highlights to fit within any given HDR display’s peak highlights, to handle how to downconvert the image for SDR displays, and also to determine how to respond when a television’s ABL circuit kicks in. In all of these cases, the colorist’s artistic intent is used to guide all dynamic adjustments to the content, so that the resulting picture looks as it should.

Analyzing HDR Signals Using Scopes

When you’re using waveform scopes of any kind, including parade and overlay scopes, the signal occupies the 10-bit full-range numeric scale much differently, owing to the way HDR is encoded. The following chart of values will make it easier to understand what you’re seeing:

If you’re monitoring with the built-in video scopes in DaVinci Resolve Studio, you can turn on the “Enable HDR Scopes for ST.2084” checkbox in the Color panel of the Project Settings, which will replace the 10-bit scale of the video scopes with a scale based on “nit” values (or cd/m²) instead.

If you’re unsatisfied with the amount of detail you’re seeing in the 0 – 519 range (0 – 100 nits) of the video scope graphs, then you can use the 3D Scopes Lookup Table setting in the Color Management panel of the Project Settings to assign the appropriate “HDR X nits to Gamma 2.4 LUT,” with X being the peak nit level of the HDR display you’re using. This converts the way the scopes are drawn so that the 0 – 100 nit range of the signal takes up the entire range of the scopes, from 0 through 1023. This will push the HDR-strength highlights up past the top of the visible area of the scopes, making them invisible, but it will make it easier to see detail in the midtones of the image.

Rendering a Dolby Vision Master

To deliver a Dolby Vision master after you’ve finished grading, you want to make sure that the Output Color Space in the Color Management panel of the Project Settings is set to the appropriate HDR ST.2084 setting, based on the peak luminance in nits of your HDR display. Then, you want to set your render up to use one of the following Format/Codec combinations:

TIFF, RGB 16-bit

EXR, RGB-half (no compression)

(Updated) When you render for tapeless delivery, the artistic intent metadata is rendered into a Dolby Vision XML file and delivered alongside either the TIFF or EXR renders. These two sets of files are then delivered to a facility that’s capable of creating the Dolby Vision Mezzanine File (this cannot be done in Resolve).

Playing Dolby Vision at Home

On distribution, televisions that have licensed Dolby Vision use the base layer and the enhancement layer plus metadata to determine how the HDR image should be rendered given each display’s particular peak luminance capabilities. Distributors, for their part, need to provide a minimum 10-bit signal to accommodate Dolby Vision’s wide range. As a result, Dolby Vision videos will look as they should on displays from 100 nits up through however many nits the program was mastered to take advantage of, up to 10,000 nits. The enhancement layer’s HDR-strength highlights are scaled to whatever peak luminance level is possible on a given display, using the artistic intent metadata as a guide, and recombined with the base layer, so that there’s no unpredictable clipping and the image looks as it should.

SMPTE ST.2084, Ultra HD Premium, and HDR10

Some display manufacturers who have no interest in licensing Dolby Vision for inclusion in their televisions are instead going with the simpler method of engineering their displays to be compatible with SMPTE ST.2084. It requires only a single stream for distribution, there are no licensing fees, no special hardware is required to master for it (other than an HDR mastering display such as the Sony X300), and there’s no special metadata to write or deal with (at this time).

Interestingly, SMPTE ST.2084 ratifies into a general standard the “PQ” EOTF that was developed by Dolby and used by Dolby Vision, which accommodates displays capable of peak luminance up to 10,000 cd/m². This standard requires at minimum a 10-bit signal for distribution, and the EOTF is described such that the video signal utilizes the available code values of a 10-bit signal as efficiently as possible, while allowing for such a wide range of luminance in the image.

SMPTE ST.2084 is also part of the new “Ultra HD Premium” television manufacturer specification, which stipulates that televisions bearing the Ultra HD Premium logo have the following capabilities:

Finally, ST.2084 has been included in the HDR10 distribution specification adopted by the Blu-ray Disc Association (BDA) that covers Ultra HD Blu-ray. HDR10 stipulates that Ultra HD Blu-ray discs have the following characteristics:

UHD resolution of 3840 x 2160

Up to the Rec.2020 gamut

SMPTE ST.2084

Mastered with a peak luminance of 1000 nits

The downside is that, by itself, this EOTF is not backwards compatible with SDR displays that use BT.1886 (although the emerging metadata standard SMPTE ST.2086 seeks to address this). Furthermore, no provision is made to scale the above-100 nit portion of the image to accommodate different displays with differing peak luminance levels. For example, let’s say you grade and master an image to have peak highlights of 4000 nits, as seen in the following image:

An image with 4000 nit peak luminance highlights

Then, you play that signal on an ST.2084-compatible television that’s only capable of 800 nits. The result will be that all peaks of the signal above 800 nits will be clipped, while everything below 800 nits will look exactly as it should relative to your grade, as seen in the following image:

The same image clipped to 800 nit peak luminance highlights

This is because ST.2084 is referenced to absolute luminance. If you grade an HDR image referencing a 1000 nit peak luminance display as is recommended by HDR10, then any display using ST.2084 will respect and reproduce all levels from the HDR signal that it’s capable of reproducing as you graded them, up to the maximum peak luminance level it can output. For example, the Vizio R Series television can output 800 nits, so all mastered levels from 801 – 1000 will be clipped.

How much of a problem this is really depends on how you choose to grade your HDR-strength highlights. If you’re only raising the most extreme peak highlights to maximum HDR-strength levels, then it’s entirely possible that the audience might not notice that the display is only outputting 800 nits worth of signal and clipping any image details from 801 – 1000 nits because there weren’t that many details above 800 anyway other than glints and sparks. Or, if you’re grading large explosions filled with fiery detail up above 800 nits in their entirety because it looks cool, then maybe the audience will notice. The bottom line is, when you’re grading for displays that simply display ST.2084, you need to think about these sorts of things.
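The clipping behavior described above can be sketched as a toy model. This is purely illustrative: a real television’s tone-mapping circuitry may roll highlights off more gracefully than a hard clip, and the function name is my own.

```python
def hdr10_display(pixel_nits, display_peak_nits):
    """ST.2084 is referenced to absolute luminance: every level a display
    can reach is reproduced exactly as graded, and everything above its
    peak simply clips."""
    return [min(nits, display_peak_nits) for nits in pixel_nits]

# A shot graded with glints up to 1000 nits, shown on an 800 nit panel:
graded = [0.05, 45.0, 100.0, 650.0, 801.0, 1000.0]
print(hdr10_display(graded, 800))  # everything above 800 flattens to 800
```

Notice that the shadows, midtones, and highlights up to 800 nits pass through untouched; only the topmost glints merge together, which is exactly the tradeoff discussed above.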

Monitoring and Grading to ST.2084 in DaVinci Resolve

Monitoring an ST.2084 image is as simple as getting a ST.2084-compatible HDR display (such as the Sony X300), and connecting the output of your video interface to the input of the display. In the case of the Sony X300, which is a 4K capable display, you can connect four SDI outputs from a DeckLink 4K Extreme 12G with the optional DeckLink 4K Extreme 12G Quad SDI daughtercard, or an UltraStudio 4K Extreme, directly from your grading workstation to the X300, and you’re ready to go.

Setting up Resolve Color Management to grade for ST.2084 is identical to setting up to grade for Dolby Vision. You’ll also monitor the video scopes identically, and output a master identically, given that both standards rely upon the same EOTF, and require the same high bit depth.

Hybrid Log-Gamma (HLG)

The BBC and NHK jointly developed a different EOTF that presents another method of encoding HDR video, referred to as Hybrid Log-Gamma (HLG). The goal of HLG was to develop a method of mastering HDR video that would support a range of displays of different brightness without additional metadata, that could be broadcast via a single stream of data, that would fit into a 10-bit signal, and that would be easily backward-compatible with SDR televisions without requiring a separate grade.

The basic idea is that the HLG EOTF functions very similarly to BT.1886 from 0 to 0.6 of the signal (with a typical 0 – 1 numeric range), while 0.6 to 1.0 segues into logarithmic encoding for the highlights. This means that, if you just send an HDR Hybrid Log-Gamma signal to an SDR display, you’d be able to see much of the image identically to the way it would appear on an HDR display, and the highlights would be compressed to present what ought to be an acceptable amount of detail for SDR broadcast.
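For reference, the HLG OETF can be sketched in a few lines of Python using the constants from the published ITU-R BT.2100 formulation; note that in that formulation the exact square-root-to-logarithmic crossover sits at signal level 0.5:

```python
import math

# HLG constants from ITU-R BT.2100
A = 0.17883277
B = 1 - 4 * A                    # 0.28466892
C = 0.5 - A * math.log(4 * A)    # 0.55991073

def hlg_oetf(scene):
    """Normalized scene light (0-1) to a normalized 0-1 HLG signal:
    square-root (gamma-like) below the knee, logarithmic above it."""
    if scene <= 1 / 12:
        return math.sqrt(3 * scene)
    return A * math.log(12 * scene - B) + C

# The knee sits at half signal level, and the log segment carries the
# highlights up to full signal at peak scene light:
print(round(hlg_oetf(1 / 12), 3))  # 0.5
print(round(hlg_oetf(1.0), 3))     # 1.0
```

The hybrid shape is the whole trick: the lower segment behaves much like a conventional gamma curve, which is what an SDR display expects, while the logarithmic upper segment compresses the HDR-strength highlights into the remaining signal range.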

On a Hybrid Log-Gamma compatible HDR display, however, the highlights of the image (not the BT.1886-like bottom portion of the signal, just the highlights) would be stretched back up, relative to whatever peak luminance level a given HDR television is capable of outputting, to return the image to its true HDR glory. This is different from the HDR10 method of distribution described previously, in which the graded signal is referenced to absolute luminance levels dictated by ST.2084, with levels higher than a TV can output being clipped. With HLG, all HDR-strength highlights will be scaled relative to whatever a television is capable of.

And while this facility to support multiple HDR displays with differing peak luminance levels seeks to accomplish the same goal of scaling HDR-strength highlights to suit whatever a given television is capable of outputting, HLG requires no additional metadata to guide how the highlights are scaled. Depending on your point of view, this is either a benefit (less work), or a deficiency (no artistic guidance to make sure the highlights are being scaled in the best possible way).

As is true for most things, you don’t get something for nothing. BBC White Paper WHP 309 states that, for a 2000 nit HDR display with a black level of 0.01 nits, up to 17.6 stops of dynamic range without visible quantization artifacts (“banding”) is possible. BBC White Paper WHP 286 states that the proposed HLG EOTF should support displays up to about 5000 nits. So, part of the backwards compatibility that HLG makes possible comes from discarding long-term support for 10,000 nit displays. However, given that the brightest commercially available HDR display at the time of this writing outputs only 1000 nits peak luminance (the Sony X300), and the brightest HDR display I’m aware of only outputs 4000 nits peak luminance (the experimental Dolby Pulsar), it’s an open question whether more than 5000 nits is necessary for consumer enjoyment. Only time will tell.
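As a quick sanity check on the WHP 309 figure, dynamic range in photographic stops is just the base-2 logarithm of the peak-to-black luminance ratio:

```python
import math

def stops_of_dynamic_range(peak_nits, black_nits):
    """Each stop is a doubling of luminance, so dynamic range in stops
    is the base-2 log of the peak-to-black ratio."""
    return math.log2(peak_nits / black_nits)

# The WHP 309 example: a 2000 nit display with 0.01 nit blacks
print(round(stops_of_dynamic_range(2000, 0.01), 1))  # 17.6
```

The same arithmetic shows why black level matters as much as peak brightness: doubling the peak buys one more stop, but cutting the black level in half buys one stop too.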

At the time of this writing, Sony and Canon have demonstrated displays capable of outputting HLG encoded video. DaVinci Resolve, naturally, supports this standard through Resolve Color Management (the RCM setting is labeled HLG).

Monitoring and Grading to Hybrid Log-Gamma in DaVinci Resolve

Monitoring an HLG image is as simple as getting a Hybrid Log-Gamma-compatible HDR display, and connecting the output of your video interface to the input of the display.

Setting up Resolve Color Management to grade for HLG is identical to setting up to grade for Dolby Vision, except that there are two basic settings that are available:

HDR HLG-2020

HDR HLG-709

Optionally, if you choose to enable “Use Separate Color Space and Gamma,” you can choose either Rec.2020 or Rec.709 as your gamut, and HLG as your EOTF.

The Aesthetics of Shooting and Grading HDR

At the moment, given that we’re in the earliest days of HDR grading and distribution, there are no hard rules when it comes to how to use HDR. The sky’s the limit, which makes it either an exciting or harrowing time to be a colorist, depending on your point of view. For me, it’s exciting, and I’ve been telling everyone that grading HDR is the most fun I’ve had as a colorist since I started doing this crazy job.

Developing the HDR image to best effect is, in my view, the domain of the colorist. The importance of lighting well and shooting a wide-latitude format is indisputable, but the process of actively deciding which highlights of the image are diffuse white, which qualify as HDR-strength, and how bright to make each “plane” of HDR-strength highlights are all artistic decisions and assignments that are most easily and specifically controllable in the grading suite. In this way, I think HDR is going to bind the creative partnership between DPs and colorists even more tightly.

In this section, I deliberately veer away from the technical in order to explore the creative potential for HDR. Some of this section is based on my experiences, some on my observations of the work of others, but much is also based on my perennial quest to mine the fine arts that have come before us for creative solutions that already exist, but have been neglected due to colorists having been stuck within the narrow confines of BT.709 and BT.1886 for so long. Breaking free of those restraints makes the work of other artistic disciplines even more accessible to us as models for what is artistically possible.

Differentiating Highlights

With images governed by BT.1886, the difference between diffuse and specular highlights can often be as little as 10% of the signal, sometimes less, and these differences are often so subtle as to be lost on most viewers. The difference between highlights of varying intensity can be accentuated by reducing the average levels of your midtones and shadows to create more headroom for differentiated highlights, but then you’re potentially fighting the legibility of the picture in uncertain viewing conditions (read – shitty televisions that are calibrated poorly). Bottom line, with SDR signals, you’re in a position where the white shine on someone’s face and a naked light bulb may both be up around 100 nits, which in truth has never really made any sense.

This no longer need be true in an HDR grade, where it’s possible to have skin shine around 100 nits if you want, but you can then push the light bulb up higher in the grade, where it would really peak, maybe at 800 nits. In addition, there will be much more detail available within that light bulb (depending on the latitude of the recording format), so you’ll potentially be able to see the interior of the bulb’s housing, so that the bulb isn’t simply a flat white flare.

Going farther, in an outdoor scene, it’s possible to have a bright white t-shirt at one level, colorful highlights on a face at a clearly differentiated level, the rim-lighting of the sun on clouds at a different, higher level, and reflected sun glints off of a lake in the distance at an even higher level, resulting in a much richer distribution of highlight tonality throughout the scene. This is what’s new about grading HDR: you’ve finally got the ability to create dramatically differentiated planes of highlights, which gives the digital colorist the perceptual tools that fine artists working in the medium of painting have had for hundreds of years.

Given the elevated black levels of the photograph as seen on this computer screen, it’s hard to grok the true impact of the way this painting looks in person, where more ideal gallery lighting and the direct reflection of light off the surface of the painting provide brighter levels than can be reproduced in a photograph. In person, the dimmer highlights of the background players emerge from the inky pools of shadow surrounding everyone. The highlights of those background players’ faces are clearly dimmer than the highlights reflected off of the central two characters, and those highlights are themselves at a slightly but noticeably reduced level from the brilliant whites of the foreground sleeves and metallic glints dappled here and there throughout the image.

This, to me, represents the promise of what HDR grading done creatively can offer, in terms of using multiple planes of differentiated highlights to create a sensual glimmer, to add exciting punch to the image, and to guide the eye on a prioritized tour around the scene; to the arm encircling the woman’s waist, to the hand splashing wine into the Prodigal’s goblet, to the Prodigal’s face lasciviously eyeing the activities before him.

Getting Used to It

One thing that multiple colorists warned me about, and that I definitely experienced, is that it takes a little time to get used to “the look of HDR.” When you’ve spent years getting to know how audiences respond to images with a BT.1886 distribution of tonal values that max out at 100 nits, how to see and allocate highlights within that narrow range of tonality, and how images “should” look when graded for broadcast, the shockingly brilliant highlights and color volume that HDR allows can be confusing at first. It doesn’t look “right.” It shouldn’t even work.

More to the point, it’s tempting to either avoid highlights that seem too bright altogether, or to succumb to the impulse to linearly scale the entire image, midtones and all, to be uniformly brighter. Both impulses are ones you should try to avoid, but to avoid them, you’re going to need some time to get used to seeing what HDR images have to offer. To get used to the idea of comparatively subdued shadows and midtones contrasted against brilliant splashes of color and contrast. To familiarize yourself with tones and colors on an expanded palette that you’ve never had the opportunity to play with before. In conversation, colorist Shane Ruggieri was emphatic about the need to “unlearn 709 thinking” in order to be able to more fully explore the possibilities that HDR presents.

Don’t Just Turn It to Eleven

It cannot be over-emphasized that HDR grading is not about making everything brighter. Setting aside the limitations imposed by the ABL on consumer televisions, just making everything brighter is like doing a music mix where you simply make everything louder. You’re not really taking advantage of the ability to emphasize specific musical details via increased dynamic range, you’re just making individual details harder to hear amongst all the increased energy bombarding the audience. Maintaining contrast is the key to taking the best advantage of HDR-strength highlights, which will lack punch if you boost all of your midtones too much and neglect the importance and depth of your shadows. HDR images only really look like HDR images when you’re judicious with your highlights.

I honestly think that looking to various eras of painting can be enormously instructive when getting ideas for what to do with HDR. I was in the middle of this article when I happened to go to an event at the Minneapolis Institute of Art. Since I had HDR on the brain as I wandered the collection, a few pieces leapt out at me as terrific examples of the use of selective specular highlights, large shadow areas combined with pools of highlights, and the guidance of the viewer’s eye through an entire scene within a single frame using lighting. Clearly, the reproductions I include in this article are a poor facsimile compared to seeing these paintings in person, where the reflective light from the surface of the painting results in a considerably more vivid experience, but I’ve tried to simulate their punch by applying a simple, slight gamma correction to give you a similar impression to what I felt when viewing the originals. Of course, your computer screen’s accuracy is the limiting factor.

A Little Can Go a Long Way

The following painting (Nicolas Poussin’s The Death of Germanicus, 1627) is a great example of using targeted high-octane highlights to great effect. Notice how the vast majority of the image is relatively dark, employing rich colors in the low midtones and high shadows (which can also be reproduced due to the increased color volume of HDR displays) but the artist uses polished strokes of brightness in key areas to add specular highlights that make the image really pop. These highlights are few, small, and they’re carefully targeted, but they punch up an image that otherwise has relatively subdued highlights falling on the skin and cloth of the participants. Also, because of the latitude available to HDR-strength highlights, specular shines such as these can fall off gracefully towards the shadows, so that they’re not harsh “cigarette burns” with an abrupt edge, but areas that transition smoothly and naturalistically out of the lower tones of the image.

Nicolas Poussin’s The Death of Germanicus, 1627

This, to me, is a tremendous illustration of what HDR enables the colorist to now do. In another example (Cornelis Jacobz. Delff’s Allegory of the Four Elements, c. 1600), a still life with metal vessels is brought vividly to life through the use of some carefully placed metallic shine, despite a preponderance of shadows wrapped around every surface. These bright highlights are streaked here and there through the image, adding an impression of considerable sharpness thanks to the resulting contrast.

Cornelis Jacobz. Delff’s Allegory of the Four Elements, c. 1600

Know When to Fold It

Granted, it’s easy to overdo HDR-strength highlights. On one job I was grading, one of the characters of a scene had brass buttons on their jacket, which were natural candidates for putting out some HDR-strength glints. I keyed and boosted them, but I was moving so fast that the first adjustment I made had the buttons glowing like little suns. I paused to take in the effect, and the client and I simultaneously burst out laughing, the result was so completely ridiculous. It goes without saying that HDR-strength highlights should be motivated, but I was surprised by just how instantly hilarious the wrong use of these highlights was.

Balancing Subjects in the Frame, Using Negative Space

Keeping the people inhabiting a scene interesting despite amazing HDR effects happening in the background also becomes a new and interesting challenge. In an SDR image, even the brightest highlights in an image may only be 25 nits higher than the highlights coming off of people, so subjects aren’t so easily overwhelmed by their surroundings. However, in HDR you might have vividly colorful 600 nit highlights in the background that are competing with 100 nit highlights illuminating people inhabiting the foreground. One example that springs to mind from a program I saw graded by another colorist was a scene with sun-drenched stained-glass windows placed behind two actors having a conversation. After a first primary adjustment that followed the natural lighting in the scene, the window was so beautifully spectacular that the people in front held practically zero interest. A bit of extra work was required to pull the actors back out in front so they could compete with the scenery.

A useful example can be seen in the following painting (Constant Troyon’s Landscape with Cattle and Sheep, c. 1852-58), where the white cow catches the sunlight in dazzling fashion, relative to the far dimmer tones found throughout the rest of the image. The milk-maid would be easy to miss, were she not so forcefully present as negative space within the cow’s dazzling highlights.

Constant Troyon’s Landscape with Cattle and Sheep, c. 1852-58

A creative use of negative space in the composition of an image can be a powerful way out of this dilemma, which is nice as this is a technique the colorist can harness through careful control of contrasting midtone and shadow values.

Plan for a Wandering Eye

I’ve heard several people express concern about HDR-strength highlights proving distracting, but I think it’s a mistake to be too terrified of losing the audience’s attention to the bold highlights that are possible within an HDR image. In the following image (Giovanni Francesco Barbieri’s Erminia and the Shepherds, 1648-49), the most vivid planes of highlights are on the armored woman’s arm, face, breastplate, and robes, on the man’s sleeve, elbow, and knee, and on the arm of the foremost boy to the right, and the sheep. The man’s face is highlighted, but diminished relative to these other elements, as are (to a greater extent) the faces of the two boys far to the back. However, this lighting scheme adds considerable depth to the image, as the brighter elements jump forward, pushing the darker elements back. And the artist uses contrast of saturation to make sure that the ruddy faces of the boys are still worthy of the viewer’s attention vs. their immediate background. The highlights don’t necessarily drive our gaze directly to each face as the first thing we look at, but the path traced by our eyes moving among each available highlight gets us there nonetheless, as a secondary act of exploration.

Giovanni Francesco Barbieri’s Erminia and the Shepherds, 1648-49

Something I’m keen to try more of as I work with a greater range of HDR programming is the potential for directing the viewer’s gaze by sprinkling HDR highlights strategically across the image. I think we’ve become a bit too obsessed with treating the colorist’s ability to guide the eye using digital relighting and vignetting as a “bulls-eye” targeting technique, giving the viewer only a single clear region of the image to focus on. I suspect that to utilize HDR most effectively, we need to reconsider the notion of guiding the viewer’s eye through the scene, providing a path from one part of the image to another that encourages the viewer to explore the frame, rather than simply having the viewer obsess over just one element within it. In this way, HDR-strength highlights can be used to provide a roadmap through the image.

In this regard, fine artists showed the way hundreds of years ago. I’ve long felt that painted scenes were once the equivalent of an entire short film in terms of the viewer’s experience, and the technique of being guided through an ambitious work’s mise-en-scène by the painter via lighting is an amazing thing to experience in person, if you’re willing to give the time. In the following image (Francesco Bassano and Jacopo Bassano’s The Element of Water, c. 1576-1577), dappled highlights pluck each of the scene’s participants from the shadows to spectacular effect, and guide the viewer’s eye along the thoroughfare of the scene’s major areas of activity, not just through the street, but farther down the road, to the horizon in the distance.

With the wider and now-standard 16:9 frame available to the home viewer and the considerably wider availability of large-screen televisions from 55-85 inches, the medium is ripe for creating a more ambitious mise-en-scène that challenges the viewer to engage more fully with the narrative image. And even on smaller devices, the so-called “retinal” resolutions now available to the tablet and “phablet” viewer make it possible to peer more deeply into even these diminutive images. So, instead of using grading as an invitation to the viewer to dwell on a single element of the picture, it might be time to compose, light, and grade in such a way as to invite a more sweeping gaze, guided in part by HDR-strength highlights.

Choices For Handling Midtones

So yes, HDR provides endless opportunities for finding creative uses for your highlights. Blah, blah, blah. However, in an HDR grade, what are we to do with our midtones? This is an interesting question that is, in my opinion, ripe for exploration.

The first answer is the “party line” that many discussions of HDR emphasize (myself included), which is to grade your midtones (including skin tones, which fall squarely within the midtones of most images) largely the same as you would before. Not only does this make it easier to create dazzling HDR effects in contrast to restrained midtones and deep shadows, but it makes it considerably easier to maintain backward compatibility with the Rec.709 trim pass that you’re inevitably going to have to produce, given that the vast majority of televisions out in the world are still SDR. At this point in time, grading to make your trim pass easier makes all the sense in the world.

However, I don’t think it’s going to take very long for colorists to begin seeing the potential of using the lower portion of whatever range of HDR highlights you’re mastering with to let the brighter midtones of an image breathe, so long as you can count on a few hundred nits more peak luminance to maintain the separation and punch of your HDR-strength highlights. Of course, if you’re grading relative to a lower peak luminance threshold, then you should probably keep your high midtones lower, otherwise you risk de-emphasizing the glittery effect that’s possible.

However, assuming you’ve got the headroom, an example of what should be possible when using the brightness and saturation available within the 100 – 400 nit midtone range can be seen in the following painting (Gerrit van Honthorst’s The Denial of St. Peter, c. 1623). This painting employs a beautiful use of silhouettes and vignetting shadows as negative space against the vividly lit face at the center of the image. Pushing these skin tone highlights up past what’s ordinarily possible in SDR, to achieve more luminosity through the combination of brightness and saturation, would make this image practically jump off the screen, while maintaining an even more profound separation from the shadows. Those shadows would nonetheless hold considerable detail, because it’s not necessary to crush them to flat black to maintain contrast when the midtones sit higher. In such an image, 800 nit highlights wouldn’t even be necessary, though you’d probably find a few pixels of eye glints, metal on the candlestick, or (as in the painting) shine off the top edge of the foreground soldier’s breastplate, to provide just a tiny bit of flash up around 700 – 1000 nits.

Gerrit van Honthorst’s The Denial of St. Peter, c. 1623

If you let yourself use higher-nit midtones, you’ll have more of a chore before you as you trim those grades to look as they should on a BT.709/BT.1886 display, but I anticipate that as more and more of the viewing audience upgrades to HDR-capable televisions, it’ll be worth it.

Contrast of Saturation Becomes Even More Powerful

Truly, all forms of color contrast will become more potent tools for the colorist given the increased color volume that a P3 or Rec.2020 gamut coupled with ST.2084 or HLG permits. Different hues have the potential to ring brilliantly against one another at the higher levels of saturation that will be allowed. However, the availability of richer saturation also means that you can have multiple planes of even the same hue of blue, for instance, all differentiated from one another by significantly different levels of saturation.

Should I Worry About the Audience’s Eyeballs?

It’s good to be mindful that, should someone at home eventually have a 2000 nit television, that sun in the frame that you decided to put all the way up at the top of your grade will definitely make them squint. I’m not kidding: in one shot I graded a sun all the way to peak luminance on a 2000 nit Dolby display, and everyone in the room was squinting. However, I’m not too personally worried. I’ve had long HDR grading sessions with 1000 nit displays, and while I was initially worried about early eye fatigue, in truth I had not much more eye fatigue at the end of an 8-hour day than I do with SDR grading sessions. That said, I’m pretty firm about taking regular breaks every 2-3 hours from the grading suite to stretch the legs, get a tasty beverage, and see the sun for a few minutes before diving back into the job, so perhaps my good habits help.

However, spectacularly vivid contrast is something that regularly occurs in our everyday lives. For example, while chatting with Shane, he shared some actual luminance measurements from his office, in which shadowed areas of the wall with visible image detail fell around 1.5 nits, and light reflected from just under a fluorescent fixture measured 3070 nits, making the point that examples from life can inform and reestablish what dynamic range can plausibly be within a scene, even one as subdued as a “dimly lit office.”

Dolby’s “The Art of Better Pixels” document, authored by D.G. Brooks of Dolby Laboratories (available here), cites tests performed to determine preferred viewer experiences for black, diffuse white, and highlight levels. Studies with viewers show that on large-screen displays, diffuse white values around 3,000 nits and peak highlights at 7,000 nits were luminance levels that satisfied 90% of the test subjects (smaller screens engendered even higher preferred levels). I suspect any colorist who’s had a client ask for “more contrast, more contrast, still more contrast” can certainly relate to this data.

I also see the ability to have these sorts of squint-inducing highlights as another creative opportunity, one that’s been available to audiences looking up at the stage lighting of plays, musicals, and concerts for years. If you’re careful not to abuse the privilege, I think the ability to cut to a bright frame, surprise with a sudden flare or shower of sparks, or grade a light-show with similar physiological impact to the real thing can create compelling narrative opportunities in our storytelling.

HDR In Movie Restoration

I’ve seen several examples of older films being remastered for HDR, which I find an interesting task for consideration. In truth, when remastering older films, you’re adding something that wasn’t there. Even during a film’s original theatrical run, the standard for peak luminance in the theater has long been only 48 nits (SMPTE 196M specified 16 fL open gate with a minimum of 11 fL, practically 14 – 9 fL with light running through a strip of clear film), although with a gamma of 2.6 and a lack of surround lighting, that peak luminance seems much brighter than it actually is.

Bottom line, a television displaying HDR-strength highlights at even 500 nits is going to present isolated highlights that are vastly brighter than this (at least when viewed in a darkened room). If you’re interested in preserving the director’s intent, then splashing HDR onto older films is a case of deliberately imposing something new onto an older set of decisions.

On the other hand, for directors and cinematographers who are revisiting their own films, older negatives have ample latitude to be re-scanned and regraded to take advantage of HDR, to present a new look at previously released material.

While this article is largely focused on HDR for television, it’s also worth mentioning that there are emerging theatrical exhibition formats for HDR, such as Dolby Cinema, which allows the projection of images with a peak luminance of 108 nits, over double the brightness of ordinary theatrical projection. Dolby advertises black levels down to 0 nits for a claimed 1,000,000:1 contrast ratio on Dolby Cinema projectors (a collaboration between Dolby and Christie). This high contrast in a darkened theater yields similarly dazzling results when graded carefully, and I believe many of the creative decisions I describe here will apply to cinema grading as well.

Being Creative During the Shoot

I think HDR really shines when contemplating the creation of new films and television, where you have the opportunity to think about how to use HDR as a deliberate part of the project.

Despite my assertion that HDR will thrive as a domain of colorist creativity, cinematographers obviously have real decisions to make. I suspect that more careful and deliberate lighting schemes will mean more lighting and grip used to shape the pools and ratios of light and shadow. From my experience, it really helps colorists save time if you create the preconditions for the sparkly bits that you want, so we don’t have to go digging around the highlights of the signal to find something specific to pry out. It’ll be interesting to see more deliberate planning for a differentiation between diffuse whites and HDR-strength highlights, to take advantage of the fact that there’s a difference between 100 nit, 400 nit, and 800 nit highlights.

Additionally, the art department has an enormous contribution to make, as production designers, set dressers, wardrobe, and makeup all have something to add to (or subtract from) the HDR image. Production Designers will be tasked with making sure there are highlights to be had through careful selection of set materials, paint, and glossy vs. flat regions of the environment. Small set dressing and propmaster decisions will have a large impact – selection of items within the frame such as having a couple of shiny desk accessories in the office (or not), using a car with chrome trimming, using reflective fabrics, etcetera, etcetera.

Wardrobe choices offer similar opportunities. Sequins, brass buttons, shimmery or flat fabrics, choice and manner of stitching, selection of wardrobe accessories, all these and more are opportunities to contribute to what HDR can present to the audience. Even the makeup artists can contribute. It only takes a few pixels of highlights adjacent to a few pixels of lower midtones to create some HDR-strength flash, so cosmetics with shimmer or gloss, glitter, or simple control of shine become powerful tools to shape HDR effects on arguably the most important subject within any frame, people’s faces.

All these are tremendously meaningful decisions when shooting for HDR mastering, and the prudent creative team would do well to schedule more on-camera testing with a colorist’s support to see how things are going to work out. This is exactly what I’m contemplating doing for my next film, more on-camera testing prior to the actual shoot to see how different makeup and costume schemes are going to work. It’s a bit more logistics than the typical indie project has to go through, but I think it’ll be worth the hassle. I’ll let you know when it’s done.

If Grading In DaVinci Resolve, What Tools Will Help?

At the end of the day, grading HDR material is simply a matter of manipulating a video signal with a different weight to the distribution of shadows, midtones, and however many levels of highlights you’ll be individuating. Just as a quick tip, here are some Resolve tools that I’ve found help enormously when grading HDR material:

Resolve 12.5 has a new HDR Mode in the Node Editor, which has become indispensable for HDR grading when you’re outputting to one of the HDR or HLG profiles using Resolve Color Management (RCM). Right-click any node and turn on HDR Mode to set the controls in that node to act upon a wider signal range than normal, and you’ll find that the controls in the Color Wheels palette, the Custom Curve controls, and soft clip all feel much more natural than they do when HDR Mode is turned off (which is the default).

The Highlights control, found in page 2 of the Color Wheels palette, can be a fast way of boosting or attenuating highlights while simultaneously adjusting the high midtones of the image. This control works better with HDR Mode enabled.

Using the Highlight master control in the Log mode of the Color Wheels palette is a more targeted way of boosting or attenuating the highlights of your image. Using the default High Range parameter setting and HDR Mode enabled, this control affects only the top HDR-strength highlights of the image. With HDR Mode disabled, this control affects more of the highlights of the image, but is more restrictive than the Highlights control. You can of course change how much of the top end of the signal is affected by adjusting the High Range parameter.

Custom Curves, with HDR Mode enabled, are hugely useful when shaping the contrast of the HDR highlights, the midtones, and the shadows. In fact, I can safely say that every HDR grade I’ve done has used the Custom Curves to create just the right tonal separation for each situation.

Making secondary corrections by Luma Keying or Chroma Keying just the range of highlights I want to boost or attenuate is an invaluable technique that I’ve used again and again. Often, I want to isolate highlights that aren’t actually the brightest thing in the picture and boost them up to HDR strength, because the natural highlights of the image (light falling on someone’s face, for example) weren’t good candidates for that treatment.

In Conclusion

Due in part to my unbridled enthusiasm for the topic, and the fact that HDR is such a wide-ranging subject, what I had intended to be a quick overview of HDR gradually snowballed into a massive 14,000 word essay on the topic. At the time of its writing, there’s a lively debate about which formats will “win” the hearts and minds of audiences and the industry, whether or not people are ready for HDR-strength brightness, what the correct (and by extension incorrect) uses of HDR should be, and ultimately, whether HDR is worth the hassle.

Clearly, I think it is.

That said, this is a rapidly evolving facet of the industry, and I’ll be curious to find out how long it takes for this article to become woefully out of date and in need of an upgrade. Usually I just write an article here and leave it for the ages, but this one I’ll have to keep an eye on. I hope you’ve found it useful.

(5/11/16 Update – Updated Dolby Vision section with updated information. 5/7/16 Update – I updated a paragraph covering the peak luminance capabilities of current televisions, and added another paragraph describing the ABL performance of consumer televisions. Yes, I made this article even longer.)

It all started with me wanting to analyze the color of some out-of-calibration projectors with potentially aged bulbs in order to see if I could create a “poorly calibrated projector” LUT to more closely examine the effects of poor projector quality on a graded image. Why is a tale for another time; suffice it to say, it’s a research project.

I have a Klein K-10 Colorimeter which I was originally intending to use for the project. But while discussing my plan with Bram Desmet at Flanders Scientific, who’s an extremely knowledgeable fellow when it comes to display calibration, he pointed out that a Colorimeter would be unsuitable for my purposes: the potentially aged bulbs of the projectors I needed to measure would have an unknown spectral distribution, and Colorimeters assume a known spectral distribution for any given device (which is supplied as a profile for each device).

Crap.

Turns out I needed a Spectroradiometer, another device for measuring color, one that directly measures the short, medium, and long wavelengths of light that we see as color, making it able to accurately measure the spectral distribution of any light source without any other information.

I’ve avoided Spectroradiometers up until now because (a) they’ve traditionally been pretty expensive, and (b) like I said, I’ve already got a Colorimeter. However, given some projects on the horizon, it had occurred to me that it might not be a bad thing to bite the bullet and invest in another measurement instrument, not only for its value in future color research, but also because I could then use it to recalibrate my Colorimeter, since all Colorimeters benefit from periodic recalibration to make sure that everything is being measured accurately.

The Colorimetry Research CR-250 Spectroradiometer; the model shown includes the optional targeting scope.

Of course, it turns out that you ALSO need to get the Spectroradiometer periodically calibrated. However, I discovered that I had no idea how Spectroradiometers got calibrated. And I hate not knowing things.

Bram introduced me to Guillermo Keller, President of Colorimetry Research, who graciously invited me to the lab where the Spectroradiometers they make (the CR-250) are calibrated before being shipped out, so I could see the whole process in person.

I’ve written about display calibration before, both on this blog, and in my Color Correction Handbook. In order to do color-critical work such as grading a movie, episodic show, or music video for the public’s enjoyment, it’s essential to have a display capable of outputting accurate, standards-compliant video. Displays are made accurate via a calibration procedure whereby thousands of color patches are displayed on that monitor and measured by a color probe of some kind, either a Colorimeter or Spectroradiometer.

Using the CR-250 with LightSpace to calibrate a theater screen.

The software that generates the color patches going to the display while simultaneously recording measurements made with the probe (applications include Light Illusion’s LightSpace and SpectraCal’s CalMan) compares the known color of each patch with the measured color being emitted by your display, and compiles the thousands of measurements into a characterization that describes how that display is really showing color. The calibration software can then mathematically compare a display’s characterization to the video standard that display is supposed to conform to (BT.709, P3, or Rec.2020), and generate a calibration LUT to load back onto the display (or onto a LUT box sending a video signal to the display) that guarantees the display outputs accurate color across the spectrum, according to the appropriate video standard in use.
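The characterize-then-correct logic at the heart of this process can be illustrated with a deliberately simplified 1D example. Real calibration works on thousands of RGB patches and produces a 3D LUT; the display response and gamma values below are invented purely for illustration:

```python
# Minimal illustration of the characterize-then-correct idea behind
# calibration LUTs, reduced to a single 1D channel. The "display"
# is a made-up device whose gamma (2.2) differs from the 2.4 target.

def display_response(v):
    """Hypothetical measured display: drive level 0-1 -> emitted light 0-1."""
    return v ** 2.2

def target_response(v):
    """The standard the display should match (a 2.4 gamma, for this sketch)."""
    return v ** 2.4

def build_lut(steps=256):
    """For each input level, find the drive value that makes the
    display emit what the target demands."""
    lut = []
    for i in range(steps):
        v = i / (steps - 1)
        desired = target_response(v)
        drive = desired ** (1 / 2.2)  # invert the display's measured response
        lut.append(drive)
    return lut

lut = build_lut()

# Applying the LUT, then the display, should reproduce the target:
v = 0.5
corrected = display_response(lut[int(v * 255)])
print(abs(corrected - target_response(v)) < 0.01)  # True
```

The real-world version does exactly this, except the "measured response" comes from thousands of probe readings across the full RGB cube rather than a formula.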

Display calibration is dependent on the accuracy of your measuring device, and Colorimeters and Spectroradiometers can subtly shift over time. Unfortunately, it’s not enough to simply buy an expensive probe and put it on your shelf; you need to have your probe of choice recalibrated periodically. Guillermo recommends having both the CR-250 and CR-100 recalibrated once yearly.

Calibration, in fact, is a carefully controlled chain of device measurements. Monitors can be calibrated using Colorimeters. Colorimeters can be calibrated using Spectroradiometers. But how then are Spectroradiometers calibrated?

Very carefully, it turns out, and using equipment that is itself calibrated, extending the chain of calibration all the way back to fundamental components manufactured and performance-tracked by companies such as Gooch & Housego, which are themselves compared to light sources traceable to devices and methods standardized by NIST, the National Institute of Standards and Technology, an agency of the U.S. Department of Commerce. So, if you’re wondering who, through the long chain of calibration, is ultimately responsible for the color accuracy of every movie, television show, promo, and advertisement you watch, it’s the federal government.

But this is going all the way down the rabbit hole. For the film and video practitioner’s practical purposes, it is the calibration of Spectroradiometers upon which the scaffolding of our industry rests, and there are four fundamental procedures involved. Each of these tests relies on taking spectral measurements of a known light source, and the accuracy of everything else relies entirely on the maintenance and care taken with these light sources.

First, a Helium-gas lamp is used to calibrate the Spectroradiometer sensor’s pixel-to-wavelength transformation.

Calibrating a Spectroradiometer to a Helium light source.

The Helium-gas lamp bulb, which is similar in principle to a Neon sign tube, has a unique and utterly reliable spectral distribution that spikes at specific wavelengths. These spikes are clear to see, do not vary, and provide an easy way to calculate the difference between what the probe is reading, and the reality of physics. This offset is stored on the probe as a transformation.

The spectral distribution of a Helium-gas lamp.

Next, a tungsten light source reflecting diffusely within an integrating sphere is used to calibrate the probe’s reading of spectral distribution.

An integrating sphere, used to calibrate color temperature.

Calibrating the Spectroradiometer

The integrating sphere is itself calibrated to NIST standards, and the bulb usage is carefully timed and recorded, since the whole sphere is periodically sent in for measurement. In fact, one of the measures taken to extend the life of this device is to turn it on only by slowly increasing the voltage from 0 to full, in order to prevent voltage spikes from causing unnecessary wear to the bulb.

As with the Helium measurement, the difference between the measured spectral radiance in linear pixels (the raw data that is recorded by the probe through the optics) and the known output of the integrating sphere is used to determine the transform from the pixel value recorded by the probe to an accurate reading of spectral radiance. This transform is also stored on the probe.

Spectral output of the diffuse tungsten lighting within the integrating sphere.

Lastly, as an alternate step, the quality of the integrating sphere’s output can be verified by measuring the reflectance of a NIST-traceable tungsten bulb (a $1000 200-watt lamp) shining on a similarly NIST-standardized diffuse “reflectance standard” from a specific distance. To highlight how picky these devices are, the bulb must be sent in to be re-measured every 600 minutes of use, with the new measurements factored into subsequent use of that bulb. Meanwhile, the reflectance target, which is composed of compressed chalk-like particles, must be certified to be close to 100% reflective.

Reflectance Standard

NIST traceable 200 watt tungsten bulb

This is only done for spot checking, in order to verify that the integrating sphere is operating correctly. The bulb and reflectance target are mounted a measured distance apart (the intensity of the reflected light is controlled in this way via the inverse square law), with the probe pointed at the target, and another measurement is taken and compared.

Spectroradiometer measuring the NIST traceable bulb reflecting off of the reflectance standard target.
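The reason the bulb-to-target distance matters so much is just the inverse square law at work. A tiny sketch (the intensity and distance values are illustrative, not the lab's actual setup):

```python
# Illuminance from a point source falls off with the square of distance.
# The numbers here are illustrative, not the calibration lab's values.

def illuminance(intensity_cd, distance_m):
    """Illuminance in lux from a source of given intensity (candela)."""
    return intensity_cd / distance_m ** 2

# Doubling the distance quarters the light landing on the reflectance target:
print(illuminance(400, 1.0))  # 400.0 lux
print(illuminance(400, 2.0))  # 100.0 lux
```

This is why the mounting distance has to be measured precisely: a small error in distance produces a squared error in the light level being verified.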

And that’s it. Once each Spectroradiometer has been calibrated in this way with the offsets stored on the probe, they’re shipped out to manufacturers, calibrators, and facility people who in turn use them to calibrate the displays we use in the world of film and video.

In the process of learning how Spectroradiometers are calibrated, I also learned much more about how they actually work, and how they fundamentally differ in operation from Colorimeters. These differences are key to understanding each device’s differing advantages and disadvantages when it comes to you making a choice about what kind of device to use.

Spectroradiometers measure the wavelengths of light directly. Optics gather light through the front lens and focus it through a “diffraction grating,” a grooved filter in which each groove works as a tiny prism to split the light apart for measurement. In Spectroradiometers, the quality of these optics determines the quality of the instrument, given in nanometers (for example, the CR-250 is a 4 nm probe, which is considered extremely accurate for purposes of video calibration).

The light that’s split apart via the diffraction grating then falls upon the 250-pixel grid of the Spectroradiometer’s CMOS sensor, which is set up to measure the 380 to 780 nanometer range of the spectrum that CIE 1931 specifies as the visible range of light. Because Spectroradiometers measure the spectral distribution of light directly, they need no other information about the source being measured.
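You can work out the sensor's spectral sampling pitch directly from those numbers (note that this pitch is a different spec from the probe's quoted optical bandwidth):

```python
# Spectral sampling pitch of a 250-pixel sensor spanning the
# CIE visible range of 380-780 nm.
span_nm = 780 - 380
pixels = 250
print(span_nm / pixels)  # 1.6 nm per pixel
```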

However, because of the physics of how they function, Spectroradiometers are slow. The diffraction grating is not efficient at transmitting light; two-thirds of the light coming in through the front lens is lost right off the bat. Then, only 1/250th of the remaining light is measured by each pixel of the probe’s sensor. The only way to compensate for this low sensitivity is to increase the exposure time of light falling onto the sensor. That isn’t a problem when measuring bright colors, but it becomes a significant one when measuring very dark colors. For example, measuring a 3 candela source requires a 30 second exposure on a Spectroradiometer.
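Anchoring to that 30-second figure, and assuming (my assumption, consistent with the physics described above) that required exposure scales inversely with source luminance, you can estimate read times at other levels:

```python
# Rough model: exposure time scales inversely with source luminance.
# Anchored to the article's figure of 30 s for a 3 cd source; the
# inverse-proportionality assumption is mine, for illustration only.

REF_LUMINANCE = 3.0   # cd
REF_EXPOSURE = 30.0   # seconds

def exposure_time(luminance_cd):
    """Estimated spectroradiometer exposure time for a given luminance."""
    return REF_EXPOSURE * REF_LUMINANCE / luminance_cd

print(exposure_time(3))    # 30.0 s for a dark patch
print(exposure_time(100))  # 0.9 s for a brighter patch
```

This is why the dark patches of a calibration run dominate the total measurement time.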

Colorimeters work very differently. Colorimetry Research also makes a Colorimeter, the CR-100, but the principle is the same for colorimeters made by anyone. In the CR-100, light coming through the front lens is split and directed through three colored glass filters, one each for Red, Green, and Blue, with the filtration specified by the CIE 1931 2 degree standard observer spectral response curves, which model the sensitivity of the cones of human eyes to low, medium, and high wavelengths of light. The output of each filter is then measured, with the quality of the measurement depending entirely on how well the filters match the CIE 1931 standard observer model.

The CR-250 and CR-100 mounted side by side.

Because the sensors reading the output of the Red, Green, and Blue filters are each receiving one-third of the available light, Colorimeters are extremely fast. The same 3 candela source that takes 30 seconds to be read by a Spectroradiometer only takes 1 millisecond on a Colorimeter. However, the truth is that the speed of Colorimeter readings also depends on the refresh rate of the display device (in Hz), so assuming a display running at 60 Hz, the measurement actually takes 16.6 milliseconds. Either way, this is considerably faster than a Spectroradiometer.
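The refresh-rate quantization works out like this: a reading effectively rounds up to a whole number of display refresh periods, so the refresh rate sets a floor on measurement speed (a sketch of the concept, not any particular probe's firmware):

```python
import math

# A colorimeter reading effectively integrates over a whole number of
# display refresh periods, so the refresh rate sets a floor on speed.

def measurement_time_ms(raw_ms, refresh_hz):
    """Round a raw measurement time up to whole refresh periods."""
    period_ms = 1000.0 / refresh_hz
    return math.ceil(raw_ms / period_ms) * period_ms

# A 1 ms raw reading on a 60 Hz display still takes one full refresh:
print(round(measurement_time_ms(1.0, 60), 1))  # one period, ~16.7 ms
```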

And this increased sensitivity means that Colorimeters are also better at measuring extremely dark colors, with the CR-100 capable of taking accurate color measurements all the way down to 0.03 cd/m2, and accurate luminance measurements all the way down to 0.003 cd/m2.

However, because Colorimeters use fixed filters based on CIE 1931, they must be supplied with specific information about the spectral distribution of the particular type of light they’re measuring, as different displays use completely different types of light sources to emit an image. Otherwise, they’ll give inaccurate results. This means storing different profiles on the Colorimeter for Plasma, Fluorescent-backlit LCD, White-LED-backlit LCD, OLED, etcetera. Typically, Colorimeters store generic profiles on the probe itself (available via pop-up menus in the calibration software you’re using) for use in measuring each display you have, and usually this works fine.

Different profiles for each of the available display backlight technologies.

However, depending on the quality of your display and the accuracy and age of its backlight, it’s possible that the backlight of your display may diverge from the assumptions baked into the generic profile on your probe, in which case the resulting measurements may be a little off.

So, the basic choices are between a Spectroradiometer that will be totally accurate for any device, but will take a really, really long time to do a full 17 x 17 x 17 sampling of the RGB color cube to profile your display (that’s 4,913 color patches), or a Colorimeter which will do that same 4,913 color patch calibration in an hour, but that might be a tiny bit off if there’s something obscure that’s wrong with your display.
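The arithmetic behind that comparison is easy to verify. The per-patch timings below are illustrative averages I've assumed for the sketch, not measured specs for either probe:

```python
# A 17 x 17 x 17 sampling of the RGB cube:
patches = 17 ** 3
print(patches)  # 4913

# Illustrative average read times per patch (assumptions, not specs):
colorimeter_s = 0.75  # fast; limited mostly by refresh and settling time
spectro_s = 5.0       # slow; dominated by long dark-patch exposures

print(round(patches * colorimeter_s / 3600, 1))  # roughly 1.0 hours
print(round(patches * spectro_s / 3600, 1))      # roughly 6.8 hours
```

Even with generous assumptions, the spectroradiometer's dark-patch exposures push a full profiling run into a different league of tedium.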

I’m not trying to scare you. To put this into perspective, many companies get great results using a calibrated Colorimeter’s generic presets to measure a high-quality display device. This is yet another reason not to try to use a cheap television or computer display; displays that are designed to be color-critical also happen to be easier to calibrate.

However, if you demand total accuracy and total efficiency in any situation, there is another path, and that is to use a Spectroradiometer in addition to a Colorimeter in what calibration applications refer to as offset mode. Both LightSpace and CalMan can do this, and it involves using the Spectroradiometer to take four readings from your monitor, Red, Green, Blue, and White. Those readings are then used to calculate an offset for the Colorimeter’s measurements, so that the Colorimeter’s 4,913 readings are totally accurate for that display at that moment in time.
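The core of that offset idea can be sketched as a small linear-algebra problem: use the spectroradiometer's readings of the display's primaries to derive a 3×3 matrix that corrects the colorimeter for that display. All the XYZ numbers below are invented for illustration, and real implementations also fold in the white reading for scaling, which this sketch omits:

```python
import numpy as np

# Sketch of the offset-mode idea: derive a 3x3 matrix mapping the
# colorimeter's readings onto the spectroradiometer's, for one display.
# The XYZ readings below are invented for illustration.

# Columns are XYZ readings of pure Red, Green, and Blue patches.
colorimeter = np.array([[0.42, 0.36, 0.19],
                        [0.21, 0.72, 0.07],
                        [0.02, 0.12, 0.96]])
spectro     = np.array([[0.44, 0.35, 0.18],
                        [0.22, 0.71, 0.08],
                        [0.02, 0.11, 0.99]])

# Solve M @ colorimeter = spectro for the correction matrix M.
M = spectro @ np.linalg.inv(colorimeter)

# Subsequent colorimeter readings get corrected through M:
reading = np.array([0.30, 0.40, 0.25])
corrected = M @ reading

print(np.allclose(M @ colorimeter, spectro))  # True
```

Once M is computed from a handful of slow spectroradiometer readings, every one of the colorimeter's thousands of fast readings inherits that display-specific accuracy.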

So, if you were wondering why high-quality color probes are so expensive, this glimpse behind the curtain of the technologies involved hopefully provides some, ahem, illumination. Although I would be remiss were I not to point out that prices are lower than they’ve ever been, with Colorimetry Research’s CR-250 Spectroradiometer going for $6,990, and their CR-100 Colorimeter going for $4,990 (prices taken from Flanders Scientific). Furthermore, there are many other vendors to consider, including Klein Instruments, Photo Research, Konica Minolta, and X-Rite, to name the ones with which I’m familiar.

And hopefully this has clarified the concrete differences between the two kinds of probes, giving you some background for further research in the process of trying to figure out which will be more useful for your application.

As usual, the easy answer is the most expensive one. Buy one of each.

Rage is the engine, and retribution is the fuel that keeps the carousel of violence on which we find ourselves spinning. More rage and more retribution won’t solve or end anything, but it will result in more death, and it will keep the carousel spinning.

Here’s an important tip when using “Optimized Media” in DaVinci Resolve 12 (or higher) to spare yourself the processing overhead of debayering raw media. For those of you who don’t know, you can right-click a selection of clips in the Media Pool that are in one or more formats that are processor intensive to work with (camera raw clips, H.264, other intensive-to-decode media types), and choose “Generate Optimized Media” to have Resolve automatically create an alternate set of media files that let you work faster.

All Optimized Media you generate is compressed using whatever setting is currently selected in the General Options panel of the Project Settings. The default media format is ProRes 422 HQ.

Once you’ve generated optimized media for a set of clips in a project, the Playback > Use Optimized Media if Available setting determines whether you’re using the Optimized Media or the original media files you imported into the Media Pool.

When using Optimized Media, you can also reveal an additional column in the Media Pool’s list view, which lets you see which clips have been optimized, and which clips haven’t.

However, there’s a potential problem with using Optimized Media, which can be seen in clips with high dynamic range; the highlights of any image data with levels above 1023 become clipped. In the following screenshots, you can see the winter exterior has plenty of levels above 1023, as evidenced by the waveform below.

However, after optimizing these CinemaDNG raw clips, any attempt to retrieve the highlights above 1023 by lowering the Gain or Offset controls results in flat, clipped highlights, which can also be seen as a flattening in the waveform.

This, of course, defeats the whole purpose of shooting camera raw media in the first place. However, there’s a way you can generate optimized media that actually preserves these highlights, and that’s by changing the format used for optimization in the General Options panel of the Project Settings to “Uncompressed 16-bit float.”

Uncompressed 16-bit float is a proprietary DaVinci image format designed to preserve out-of-gamut floating point image data. The only downside to this is that by using Uncompressed 16-bit float to generate optimized media, you create larger optimized media files. However, you still spare yourself the processor overhead of having to debayer your camera raw media, and you preserve high dynamic range image data for grading. So, you might need to make sure you have fast hard drive storage, but you’ll still work faster.
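You can see why the float format preserves highlights with a toy example: a clamped integer pipeline tops out at the maximum code value, while floating point keeps values above the nominal range available for later grading. This models the concept only, not Resolve's actual internal processing:

```python
# Toy illustration of why an integer-range intermediate clips HDR
# highlights while a float intermediate preserves them. A conceptual
# model, not Resolve's internals.

MAX_10BIT = 1023

def store_as_10bit(code_value):
    """Integer pipeline: anything above the top code value is clipped."""
    return min(max(int(code_value), 0), MAX_10BIT)

def store_as_float(code_value):
    """Float pipeline: out-of-range values survive for later grading."""
    return float(code_value)

highlight = 1400  # a super-white highlight level from a raw debayer

clipped = store_as_10bit(highlight)
kept = store_as_float(highlight)

# Pulling the signal down by 50% in the grade:
print(clipped * 0.5)  # 511.5 -- the detail is gone; it's just a dimmer clip
print(kept * 0.5)     # 700.0 -- the original highlight is recovered
```

Once a highlight has been clipped into the intermediate file, no amount of Gain or Offset adjustment can bring the lost detail back, which is exactly the flattened waveform shown above.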

Incidentally, the exact same issue occurs when using the Smart Cache, which generates cache media for timeline and grading effects that are too processor intensive to play back in real time, except you’ll need to change the “Cache frames in” pop-up in the General Options panel of the Project Settings to Uncompressed 16-bit float, instead.

Optimized Media and the Smart Cache are two of Resolve’s best features for letting you grade higher quality media on systems with lower processing power. If you’re careful about what media format you use, you can preserve the quality of high dynamic range media, and you can even use Optimized Media for finishing and final output.

I’m very happy to announce that, after a huge amount of recording, and even more time spent editing and organizing, my new Editing & Finishing in DaVinci Resolve 12 video training is now available from Ripple Training, for $99 USD. I’m really happy with how these lessons turned out, so if you want to understand how editing in Resolve works, then this is the title for you.

It’s an exhaustive look at editing in DaVinci Resolve, detailing every nook and cranny of the Media and Edit pages. There are nine hours and thirty minutes of videos, spanning 90 meticulously organized lessons complete with chapter markers that let you jump to whatever topic you want to focus on next, making this useful as a reference as well as a class.

And every relevant topic is covered, from choosing whether to use the free or studio version of Resolve and touring the application, to setting up and organizing projects, importing and organizing media, improving performance and managing media, drag & drop editing, precision editing, cutting dialog, multicam editing, trimming and rearranging clips, using effects and transitions, and working with audio. Absolutely every available editing technique in DaVinci Resolve is demonstrated in detail.

However, the true power of Resolve is in its seamless marriage between editing and color, so there is also over an hour of tutorials dedicated to color correction and grading. These start with how you can prep the color of your clips prior to editing, and continue with the basics of the Color page, making automatic and manual color adjustments using Lift/Gamma/Gain and curve controls, copying and matching grades, and adding secondary adjustments.

And since Resolve is such a capable finishing environment, additional lessons cover audio mixing and effects, creating still and animated video effects, compositing, titling, stabilization, green-screen compositing, and the use of third party filters.

And, in a first for me, this tutorial is accompanied by a complete set of high-quality media and project files so you can follow along as I demonstrate each feature and technique, and then continue to experiment on your own.

At this point, I have several titles available covering DaVinci Resolve from Ripple Training, so here’s how they all fit together.

On the other hand, if you want a faster overview of how both editing and grading works in Resolve, you might want to check out my DaVinci Resolve 12 Quick Start, which is a more approachable 4 hour overview of how to use Resolve, focusing only on the basics.

This is it. After two years of production and post-production, and a year traveling on the film festival circuit, I can finally release my Science Fiction short “The Place Where You Live” free on the web to the general public, available on both YouTube and Vimeo. It’s been a long time coming.

While the shoot itself went fairly quickly, with two-and-a-half days of principal photography, and another day of pickups a year later, post-production took a good long time for everyone involved. It’s tough squeezing in ambitious VFX composites in-between paid gigs, and even I wasn’t immune as this came during the same year I ended up writing and revising a total of five different books (Adobe SpeedGrade Classroom in a Book, Autodesk Smoke Essentials, the DaVinci Resolve 10 manual, Color Correction Handbook 2nd Edition, and Color Correction Look Book), in addition to the color grading gigs I had that year. Squeezing in my portion of the post where I could was hard, and not a day passed where I didn’t wake up and feel guilt over not being able to get to my film (I’m never writing that many books in a year ever, ever again).

In the end, nothing motivates finishing like a deadline, and an early look at the trailer and a teaser convinced the organizer of the Midwest Sci-Fi Film Festival that he wanted my short in their lineup. This prompted my last and most break-neck month of post-production and finishing, to wrap up the project once and for all, and to embark upon what would become a total of 18 festival screenings, plus one promotional screening (in Beijing, no less). In the process, we garnered six awards for everything from “Best Science Fiction Short” (Big Easy International Film Festival) to “Best Leading Actress” (ConCarolinas Short Film Festival), to a “Special Jury Prize” at the Worldfest-Houston International Film Festival. I travelled to what festivals I could, along the way meeting many talented filmmakers, actors, and film enthusiasts at screenings both in the U.S. and abroad.

Film Festivals are always a great experience; films are meant to be seen by an audience, so it’s gratifying to put the work in front of people, which to me is the whole point. Happily, we had great audiences who were, on the whole, enthusiastic about the film. And being in the Science Fiction category of a lot of festivals, I have to say there’s a lot of really fantastic work out there right now. “The Place Where You Live” was in great company in every shorts program in which it played.

Please watch the credits, as I can’t thank the folks who worked with me on this nearly enough. Additionally, I want to give a huge shout-out of thanks to Autodesk, who sponsored the project, and develop the software that made it possible (the short was edited and composited entirely in Autodesk Smoke). Their support was key to this film’s creation, and helped me to get up to speed with an incredibly capable and deep application. Smoke’s fantastic integration of node-based compositing and editing made it easy to tweak every shot in this movie until the day it was finished. Autodesk 3D Studio Max was also used by artist B.J. West to create the CG effects, so Autodesk software touches every single frame of this film (along with Adobe Illustrator, Photoshop, and After Effects to create animated graphics elements, DaVinci Resolve Studio to create dailies and do the final grade, GenArts Sapphire plugins to help all along the way, and Avid Pro Tools to do the sound design and mix). If you’re interested in learning more about the workflow I and the other artists who worked on this project used, you can see a presentation I gave at the 2013 Amsterdam SuperMeet here. In the coming weeks, I’ll be posting a couple more “making of” videos showing preproduction and workflow.

And now, my only appeal. If you like this short movie, please help spread the word among your friends, colleagues, or anyone you know who likes thoughtful Science Fiction. Promotion is one of the great challenges facing independent filmmakers, and word of mouth on social media and in person is one of the best ways you can reward this project if you like what you see.

And so, without further ado, it’s showtime!

Thank you for watching! If you want to read more about our adventures making this film and following the film festival circuit, please check out The Place Where You Live website.

The shoot for my goofy little rant, “The Importance of Color Correction,” came on the heels of some promos that Steve Martin wanted me to record for my newest Ripple Training titles for DaVinci Resolve 12. I figured, since I’m there on a stage, why not have a bit of fun with it?

A confession – I suffer from incurable impatience between a shoot and the beginning of the cut, so once home I immediately fired up Resolve 12 and got to work. I was determined to do the entire thing inside of Resolve, to test the workflow of grading, compositing, cutting, and finishing a green-screen intensive project, all within Resolve 12. Since I knew I wanted to edit a series of dynamically changing backgrounds that reacted to what was being said, my first order of business was to grade the clip, and create transparency from the green background for compositing within the timeline.

I shot with the BMD Production 4K camera, but I made the decision to record to ProRes HQ, instead of raw, as I wasn’t sure how many takes I’d burn through, or how much space I’d ultimately need. This meant that, although I recorded a log-encoded image, my camera settings were burned into the files. The result, owing to a combination of camera color temperature settings and shooting through the glass of the teleprompter I was using, was the following image (after normalizing to Rec. 709 using Resolve Color Management):

After a relatively straightforward grade, this was easily turned into:

This took two nodes. It could’ve been one, but I like keeping my HSL curves separate for organization.

This was the original grade, but since I rendered out self-contained graded clips to hand off to Ripple, I ended up re-importing the graded media and using it as the basis of my next few adjustments and the edit. This wasn’t necessary at all; it just seemed like the thing to do, since I had the media and all.

With the grade accomplished, it was time to create transparency, which I did using the blue-labeled Alpha Output in the Color page’s Node Editor, connecting a matte I created using a combination of techniques (nodes 3, 4, and the Key node), while the color adjustment nodes (1 and 2) connected to the RGB output.

In particular, since some idiot (that would be me) rolled out of bed and threw on a green jacket with a green pocket square without thinking before rushing over to the stage, I needed to be a bit clever with how I created the matte. Although, faced with this kind of issue, I was almost glad to have an interesting test of the new 3D Keyer’s capabilities for green-screen compositing in a slightly awkward situation.

Turns out, the 3D Keyer (in node 3) did a fantastic job of specifically keying the green screen background while omitting the slightly different green of my jacket, while retaining nice edges without too much crunchiness, so big props to the 3D Keyer; it only took one sample of the background green and a second subtractive sample of the foreground jacket to do it (along with very slight application of the Clean Black and Clean White controls).

However, no combination of samples would also omit the green pocket square, which was just too similar to the background. This required me to divide and conquer, using the Key mixer to combine the 3D Keyer matte with a second matte generated by a tracked window to cover the pocket square.

The window itself was easy to make and track, except for the part where the “talent” (that idiot again) decided to wave his arms around.

The hand completely screwed up the track, but my body motion was so irregular that just deleting the disrupted part of the track and letting Resolve automatically interpolate between the areas of the clip that had good tracking data wouldn’t cut it (although that was the first step). So, I ended up using yet another one of Resolve 12’s new features to solve the issue, the new Frame mode of the Tracker palette, that makes it easier to auto-keyframe manual alterations to a window’s shape and position (i.e. a bit of rotoscoping). Five manual adjustments (and keyframes) later, and the hole in the tracking data was nicely filled.

Inverting the 3D Keyer matte in Node 3 (using the Invert button within the Keyer Palette) and letting the Key Mixer node add the two mattes together from nodes 3 and 4 gave me the overall matte I needed, which, when connected to the Alpha Output, punched out the background nicely.

Now, however, I needed to deal with the green spill that was figuratively (possibly even literally) hitting me in the head. Sadly, while the Despill checkbox that’s built into the 3D Keyer works wonderfully in situations where the person being keyed isn’t wearing fucking green, in my case I couldn’t use it without leeching all the color out of my jacket. So, time to go back to the old ways, isolating my head using a tracked circular window in node 2, and using the Hue vs. Sat curve to selectively desaturate the greens that I didn’t want contaminating my face.

With all that done, I could now go back to the edit page and cut together the varied mix of backgrounds behind the foreground clip. While I was at it, although the entire rant is a single long take (thank you teleprompter), I wanted to chop it up to punch up the rhythm by rippling out a few pauses, masking the jumps with push-ins made using the Zoom controls of the Edit page Inspector. Thus, at the end of the edit, I had a timeline that looked like this:

For the backdrops and audio cues, I used clips from the THAT Studio Effects collection of HD resolution effect clips (licensed from Rampant Design, which offers 2K–5K resolution media). The cut went smoothly, pretty much in real time on my 2010 Mac Pro with Nvidia GTX 770 GPU. (I can’t believe how much life I’ve gotten out of that five-year-old machine.)

However, I had one last problem. Because I had decided to record to ProRes HQ at 1080 resolution, some of my more aggressive push-ins started to look soft, softer than I liked going out the door. Mulling over how to deal with the issue, I thought it would be funny to try to emulate the effect of zooming into a televised image, such that you’d see the pixels of the TV. Red Giant Universe to the rescue, I used their Holomatrix OpenFX filter to add vertical scan lines (hey, why not) to the zoom-ins, stylizing them to the point where the softness is irrelevant.

And that, as they say, was that. A composite-heavy green-screen promotional piece graded, composited, edited, and finished entirely within DaVinci Resolve. I did the mix as well, but that was nothing to brag about as the first version I uploaded to Vimeo had all of my dialog mixed to the left channel (there’s a reason I send final mixes for my projects to dedicated audio professionals). Still, I fixed the problem, tuned the mix, and completed the program, which you can see in the previous blog post.

All in all, it was a great experience, and while I’m the first to say I’m biased since I work with the DaVinci design team, I’m also being completely honest when I say that I’ve been really enjoying editing in Resolve 12, and using the hell out of all the new grading features, to boot.

Ripple Training is hard at work editing my “New Features in Resolve 12” title, which should be coming out really, really soon. To tide folks over until then, they’ve started posting some free new features videos I’ve made on the “DaVinci Resolve in Under 5 Minutes” section of their YouTube channel. Two came out today, and there are more to come covering both editing and grading features in the public beta of DaVinci Resolve 12.

The first of this week’s pair of new videos covers the new Smooth Cut transition in the Edit page, for eliminating “ums,” stutters, and other speech disfluencies, and patching up the hole. This feature’s effectiveness depends heavily on how much motion there is in the frame, so it won’t work for every jump cut you throw at it, and it works best when there’s a minimum of subject and camera movement. This video shows what it does.

The second video summarizes how to use the new 3D Qualifier, a brand new keyer in Resolve 12 that is often faster and more accurate, and can in many cases be more pleasant to use than the older HSL qualifier. Bottom line, this keyer should let you work more efficiently for most chroma key isolations.

Color Correction Handbook: Second Edition

Get the 2nd Edition of this best-selling, platform-agnostic book covering all aspects of professional color correction theory and practice. Expanded with 200 pages of new and revised information covering grading workflow, display selection and calibration, detailed color and contrast theory for both 709 and log-encoded workflows, practical grading techniques, quality control adherence, scene balancing, and a deep exploration of memory color, image ideals, and the intersection of video grading and fine art in an interdisciplinary context. Whether you're just starting out or have been grading for a while, there's something for colorists of all levels.
In print from Amazon; From Barnes & Noble; ePub, MOBI, and PDF from Peachpit

Color Correction Look Book

Expanding on material broken out from the original Handbook, the 216-page Color Correction Look Book focuses entirely on creative grading techniques. Covering the very latest generation of grading software, classic techniques such as color washes, undertones, bleach bypass emulation, and cross-processing stylizations have been updated to take advantage of new features, and entirely new techniques have been added including film stock emulations, flat looks, greenscreen grading for compositing, flaring, light leaks and color bleeds, vibrance and targeted saturation, monochrome looks, grain/noise and texture, and much more.
In print from Amazon

What’s New in DaVinci Resolve 14

With extensive workflow enhancements in every page, 20 new ResolveFX plugins, and the all-new Fairlight audio editing and mixing page, Resolve 14 is a giant leap forward in postproduction workflow. In this 7-plus-hour training, I take you on a grand tour of every new feature, large and small. If you’re an editor, colorist, or content creator who’s worked with previous versions of Resolve, or you’re currently working with the public beta of Resolve 14, this training will give you a jump start so you’re ready to work when the GM version becomes available. Available from Ripple Training.

DaVinci Resolve Video Tutorials From Ripple Training

Learn and master DaVinci Resolve with my online library of Resolve tutorials from Ripple Training, geared for the beginning or professional editor and colorist. Choose from a wide variety of workflow-specific training titles; whether your interest is editing, color grading, effects or finishing, I cover each topic the only way I know how, thoroughly and with real-world examples whenever possible. Available from Ripple Training.