ACES – Academy Color Encoding System – A Beginner’s Overview

ACES, or the Academy Color Encoding System, has been around since 2014 but has mainly been the concern of higher-budget productions. I’ve seen more mid-size productions utilizing the workflow, and thought it would be a good opportunity to do a broad, non-technical overview of the topic. Color space has always been a complicated topic, and a recurring problem when discussing output formats and future-proofing film and video. Look at this locus with just a few random color spaces and where they fall (www.ditspot.net):

ACES provides a way to work with footage without worrying about the destination color space until the very end of the process, since the ACES color space completely encompasses the CIE spectral locus.

The core of ACES is its encoding system: a set of transforms, carried as metadata definitions, that dictates exactly how an image is brought in and sent out, stored in an OpenEXR 16-bit half-float file that gives you over 33 stops of exposure. There are a few main terms you need to understand when it comes to ACES:
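As a rough sanity check on that dynamic-range claim, here is a small Python sketch (using NumPy) that counts how many stops, i.e. powers of two, a 16-bit half float spans between its largest and smallest normal values. Denormals extend the floor further still, which is why quoted figures for half-float range vary:

```python
import math

import numpy as np

# ACES2065-1 files store scene-linear light in OpenEXR 16-bit half floats.
# Count the stops (doublings) between the largest and smallest normal values.
info = np.finfo(np.float16)
largest = float(info.max)            # ~65504
smallest_normal = float(info.tiny)   # ~6.1e-5 (denormals go lower still)

stops = math.log2(largest / smallest_normal)
print(f"{stops:.1f} stops of normal range")  # ≈ 30 stops
```

Counting the denormal range below `tiny` pushes the total well past that, which is where the larger published figures come from.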

IDT (Input Device Transform)
This transforms the source footage into the scene-referred ACES color space. Each camera requires its own IDT, typically developed by the camera manufacturer, to properly transform the image into ACES. IDTs can be selected by camera model and color temperature.

LMT(optional) (Look Modification Transform)
This provides the first way to apply a look to a shot. Unlike a traditional color grade, it does not bake anything into the pixels: the look travels with the shot as transform metadata, leaving the underlying image data untouched.
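One common form of look transform is an ASC CDL: per-channel slope, offset, and power applied to the linear data. A minimal sketch, with values that are purely illustrative (not a recommended look, and a real LMT is defined more formally than this):

```python
import numpy as np

def apply_cdl(rgb,
              slope=(1.10, 1.00, 0.95),
              offset=(0.00, 0.00, 0.01),
              power=(1.00, 1.00, 1.05)):
    """ASC CDL-style look: out = (in * slope + offset) ** power, per channel."""
    rgb = np.asarray(rgb, dtype=np.float64)
    out = rgb * np.asarray(slope) + np.asarray(offset)
    # Apply power while preserving sign, since scene-linear data can dip below 0.
    return np.sign(out) * np.abs(out) ** np.asarray(power)

print(apply_cdl([0.18, 0.18, 0.18]))  # mid-grey nudged warmer by the example look
```

Because the shot itself is untouched, swapping or removing the look later is just a matter of changing these few numbers.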

RRT (Reference Rendering Transform)
This converts the scene-referred data into display-referred data in a very wide gamut. It can be thought of as the renderer of ACES. This “render,” for lack of a better term, is passed on to the ODT for the final step.

ODT (Output Device Transform)
This is the final step. You pick a transform depending on what device you are outputting to, and it converts the wide-gamut data from the RRT into the proper color space for the display device.
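The last leg of any output transform is encoding linear display light with the target’s transfer function. As one concrete example of that step, here is the standard sRGB encoding curve in Python; a real ODT also handles gamut conversion and works together with the RRT’s tone mapping:

```python
import numpy as np

def linear_to_srgb(x):
    """Encode linear display-referred values [0, 1] with the sRGB transfer function."""
    x = np.clip(np.asarray(x, dtype=np.float64), 0.0, 1.0)
    return np.where(x <= 0.0031308,
                    12.92 * x,                          # linear toe near black
                    1.055 * np.power(x, 1 / 2.4) - 0.055)  # power curve above it

print(linear_to_srgb(0.18))  # ≈ 0.461: mid-grey's code value on an sRGB display
```

An ODT targeting a different device (Rec.709, P3 projection, PQ for HDR) swaps in that device’s transfer function and primaries, which is exactly why the choice can wait until output.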

So in its simplest form, an ACES workflow would go as follows:

Camera Acquisition > IDT > Color Grade > LMT > RRT > ODT
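The chain above is really just function composition over each pixel. A toy sketch with stand-in transforms (each stub here is illustrative, not the real ACES math) makes the ordering concrete:

```python
# Stand-in stubs for each stage, applied in the order ACES applies them.
def idt(x):   return x              # camera encoding -> scene-linear (identity stub)
def grade(x): return x * 1.2        # colorist exposure tweak (illustrative)
def lmt(x):   return x              # optional look (identity stub)
def rrt(x):   return x / (x + 1.0)  # simple tone-map stand-in for the RRT
def odt(x):   return x ** (1 / 2.2) # display gamma stand-in for the ODT

def pipeline(x):
    """Scene value in, display code value out, via the full transform chain."""
    return odt(rrt(lmt(grade(idt(x)))))

print(pipeline(0.18))  # mid-grey scene value through the whole toy chain
```

Swapping the `odt` stub is the only change needed to retarget a different display, which is the whole point of deferring that decision.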

The main benefits of this workflow really show in a collaborative environment where shots move from department to department. It allows for a standardized screen representation, where shots look the same no matter where they are viewed, since the images are only transformed and tone-mapped on the output side. It also helps match different cameras: since the goal of each manufacturer’s IDT is essentially to reverse-engineer the image into linear light, it removes any baked-in ‘look’ that the camera’s regular output gives you.

And lastly, one of the main benefits is the future-proofing of your projects. Since ACES preserves the full gamut in the master, you could master your project in 2017 for DCI projection and then remaster it two years later for HDR displays without having to start from the beginning, since the ODT creates the transformation to the final output. This is ACES’s strength as a way to archive film and video for viewing on future technologies.

I won’t (and can’t) get into actual workflows because I am just starting to learn them myself. I will be using Resolve to grade in the ACES color space, so I hope to share my findings here eventually.