Part 3: File Formats

The discussion surrounding file formats can be a difficult one. Individual artists and studios have different preferences, and applications support different formats. When it comes to a linear workflow, what matters most is saving to a lossless, high-bit-depth format, preferably floating point. A few of your options are TIFF, TGA and EXR. Of course, feel free to research other formats that may suit you better.

Personally, I use the OpenEXR format, which 3ds Max can export to and which is supported by almost every professional imaging application. Developed by ILM, OpenEXR is a great format with support for 16-bit floating-point, 32-bit floating-point and 32-bit integer pixels, lossless compression and multiple channels. This enables you to store all of your render passes in a single file at the highest quality. For more information, visit www.openexr.com

So how do you render out to the very awesome .EXR format? Simply select OpenEXR from the list of available file formats in 3ds Max when saving your renders.

As far as options go, I have a couple of suggestions, but feel free to consult the 3ds Max help or Google if you are uncertain about other aspects.

What Bit Depth?

The 16-bit vs. 32-bit argument can go in favor of either bit depth, but when it comes down to it, 16-bit is suitable 99% of the time. As long as your raw renders are at least somewhat close to your desired final composite output (e.g., no extreme exposure changes) and you don't have a client who constantly requests drastic changes on short notice, 16-bit will give you more than enough data for your compositing and color correction needs. 16-bit also has the added benefits of significantly smaller file sizes than 32-bit and faster load times, without much loss of flexibility.
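The adequacy of half float is easy to demonstrate. Here is an illustrative sketch (not 3ds Max-specific) using Python's standard `struct` module, which can pack IEEE 754 half-precision values: typical linear pixel values round-trip through 16-bit half float with errors far too small to matter while grading.

```python
import struct

def to_half(x):
    """Round-trip a value through IEEE 754 half precision ('e' format)."""
    return struct.unpack('e', struct.pack('e', x))[0]

# Typical linear pixel values survive with plenty of precision:
for v in (0.18, 1.0, 4.5):
    print(v, '->', to_half(v))
# 0.18 comes back within ~0.03% of itself; 1.0 and 4.5 are exact.
```

Half float keeps about three significant decimal digits across its whole range, which is ample for color data that started life as a render.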

32-bit is fantastic for those times when you need the extra bit depth, either for extreme color correction or for render passes that benefit from it. Although I mostly render to 16-bit half float, I render my Motion Vector and Z-Depth passes in 32-bit float.
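To see why Z-Depth benefits from 32-bit, note that half floats get sparse as values grow: near 5000, adjacent representable half-float values are 4 apart. A short stdlib sketch (the depth value is hypothetical, standing in for a camera-space distance in scene units):

```python
import struct

def to_half(x):
    """Round-trip through IEEE 754 half precision (16-bit)."""
    return struct.unpack('e', struct.pack('e', x))[0]

def to_single(x):
    """Round-trip through IEEE 754 single precision (32-bit)."""
    return struct.unpack('f', struct.pack('f', x))[0]

depth = 5001.0  # hypothetical camera-space depth in scene units
print(to_half(depth))    # 5000.0: half floats near 5000 step in units of 4
print(to_single(depth))  # 5001.0: 32-bit float preserves it exactly
```

Quantized depth like this produces visible banding in depth-of-field or fog effects, which is why data passes are worth the extra storage.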

16-bit vs. 32-bit can also depend on your project, so I certainly wouldn't rule out 32-bit totally. It is all about what works best for you, and it may be beneficial for you to test both formats.

Compression Options

There are quite a few compression options to choose from, and we could spend days getting into what each one does and where it is applicable. My suggestion is to use ZIPS. I find it a good tradeoff between compression ratio and read speed. This matters because, although stronger compression gives you smaller files, your compositing application has to decompress them to display and work with them. The more compression, the longer it takes to decompress and the more lag (read time) you will experience during compositing. Of course, this also depends on the speed of your hard drives or network, but that is a whole other can of worms.

Storage Type

Most compositing applications can work with either Scanline or Tile storage types, but there are preferable settings for different applications. If you are compositing using After Effects, set your storage type to Store Image as Tiles. But if you are using The Foundry Nuke, Autodesk Composite or Eyeon Fusion, it is best to set your storage type to Store Image as Scanlines. As an example in Nuke, storing the .EXR by Scanline can give you a 5-10% speed increase. It may not sound like much, but once you begin working with a large composition any speed increase you can gain is well appreciated.

Note about OpenEXR and V-Ray: V-Ray doesn't render directly to .EXR stored as scanlines, due to the way it stores buckets while rendering. If you want to take advantage of the significant speed increases in Nuke, you will need to convert the files to scanline storage after rendering is complete. V-Ray provides a utility for this called vrimg2exr; you can find more information about it in the V-Ray help. There are a couple of other considerations when outputting .EXR from V-Ray, all of which are available through a simple Google search.

Render Elements

Whether you decide to use multichannel .EXR or render to individual .EXRs per pass is completely up to you, your workflow and your requirements. Generally, I render each render layer and its associated render elements into its own multichannel .EXR file. For example, say I have a character lying on a daybed in a garden. I might use three render layers for this: the character, the daybed and the garden. Each render layer has the following render elements: Diffuse, Shadow, Reflection, Specular, Alpha, GI, ObjectID, Z-Depth, Motion Vector and Ambient Occlusion. There may even be extra render elements depending on the render layer, such as SSS for the character.

So pretty quickly we are rendering more than 30 individual .EXR files. But if I render each render layer out to a multichannel .EXR, I only have three files to deal with for that shot. Then, in my compositing application, I can access each render element from the multichannel .EXR per render layer as needed. This makes file management a lot easier, while avoiding the long disk read times that come with a single .EXR containing a large number of render elements.

Of course, test out the various options (one large multichannel .EXR, multiple multichannel .EXRs or individual .EXRs for every pass) and see what works best for you.

Okay, so you've set up 3ds Max correctly, configured your render engine, chosen your file format and rendered out your project. But how do you actually composite it?

Part 4: Compositing with a Linear Workflow

There are certain workflow changes that need to be taken into consideration when using a linear workflow for compositing. I will give you a walkthrough and some hints for After Effects, Nuke and Composite.

Adobe After Effects - Layer Based Compositor

After Effects' implementation of color spaces is still incomplete as of CS5.5. Even though the features are there, they aren't fully usable or intuitive. Here I will show you the method that works for me, as well as point you to alternative methods. I recommend you try out various methods to find what works best for you.

In your Project Settings (File > Project Settings), set your Depth to 32 bits per channel (float). After Effects doesn't support 16-bit float, so to work with such files without artifacts and issues, you must set the project depth to 32 bits per channel (float). As is evident from the options in the dropdown, After Effects doesn't support 32-bit integer either, so keep that in mind when rendering to 32-bit .EXR in 3ds Max.

You want to set your Working Space to sRGB. The options may vary on your system, as After Effects takes your monitor's gamma correction into account. You also want to tick Linearize Working Space. This means After Effects will do any blending calculations at a gamma of 1.0, which helps prevent artifacts (Fig.09).

Fig.09

What we have told After Effects to do is display and output with a gamma of 2.2 and perform most operations with a gamma of 2.2, except those involving pixel blending, which are calculated with a gamma of 1.0.
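The reason blending at gamma 1.0 matters can be shown in a few lines. Mixing a black and a white pixel 50/50 directly on gamma-encoded values gives a darker result than doing the same mix on linear light (a simple gamma 2.2 power curve is assumed here, not the exact sRGB formula):

```python
DISPLAY_GAMMA = 2.2

def to_linear(v):
    """Display-referred value -> linear light (simple power-law model)."""
    return v ** DISPLAY_GAMMA

def to_display(v):
    """Linear light -> display-referred value."""
    return v ** (1.0 / DISPLAY_GAMMA)

black, white = 0.0, 1.0  # already display-encoded

naive = (black + white) / 2  # blend directly in gamma space
linear = to_display((to_linear(black) + to_linear(white)) / 2)  # blend linearly

print(naive)             # 0.5
print(round(linear, 3))  # ~0.73: the physically correct blend is brighter
```

That difference is exactly the dark fringing you see on antialiased edges, motion blur and defocus when a composite is blended in gamma space.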

Autodesk Composite - Node Based Compositor

Composite (formerly called Toxik) comes with 3ds Max, so it is a great alternative for those who don't want or need to spend extra money on additional software. The fact that it is node based and fully compatible with a 16-bit/32-bit linear workflow makes me recommend it over After Effects.

It is very simple to set up Composite to work in a linear fashion. First of all, you need to add an sRGB tool to your Player (a step-by-step guide is provided in the documentation) and set the Output Depth to Input Depth. This means all operations, calculations and rendering will be performed at a gamma of 1.0, while your Player displays your composition with a gamma of 2.2 (Fig.10).
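Under the hood, an sRGB view transform like this simply applies the standard transfer function for display while the underlying data stays linear. A minimal sketch of the official sRGB curves (the piecewise form from IEC 61966-2-1, not a plain 2.2 power):

```python
def srgb_encode(linear):
    """Linear light -> sRGB display value (IEC 61966-2-1 piecewise curve)."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1.0 / 2.4) - 0.055

def srgb_decode(display):
    """sRGB display value -> linear light (inverse of srgb_encode)."""
    if display <= 0.04045:
        return display / 12.92
    return ((display + 0.055) / 1.055) ** 2.4

mid_gray = 0.18  # linear 18% gray
encoded = srgb_encode(mid_gray)
print(round(encoded, 4))               # ~0.4614: what the viewer displays
print(round(srgb_decode(encoded), 4))  # 0.18: the stored data stays linear
```

The view transform is applied only on the way to the screen, which is why your saved .EXR files and all node math remain at gamma 1.0.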
