Rendering and shading in ADAM: Episode 3

Have you seen ADAM: The Mirror and ADAM: Episode 3 yet? These two short films have captivated millions of viewers, and many are eager to know how Neill Blomkamp and his team achieved such cool effects in real time. Read our in-depth, behind-the-scenes posts on lighting, Alembic support, clothing simulation, Timeline, shaders, real-time rendering and more.

John Parsaie is the Software Engineer on the Made with Unity team. On the recent ADAM films, John delivered features like subsurface scattering, transparent post-processing effects, Alembic graphics integrations, and more. Prior to his time on the team, John worked as an engineering intern at Unity and Warner Bros. Interactive, while studying in Vermont and Montreal.

Yibing Jiang is the Technical Art Supervisor on the Made with Unity team. Before joining Unity, she worked at top studios in both animated feature films and AAA games, including character shading for Uncharted 4 (Naughty Dog), sets shading for Monsters University and Cars 2 (Pixar), and look development for Wreck-It Ralph (Disney).

Setting the stage: Frame breakdown

In the striking, real-time ADAM films, a number of components come together in Unity to deliver the effects that have gained so much attention. In this post, we focus on just two aspects – albeit very important aspects – of how Oats Studios achieved such memorable effects. So if you’d like to know more about the custom shaders that these artists used, and real-time rendering of just one frame with Unity 2017.1, read on!

The frame in ADAM: Episode 3 to be analyzed

In this post we will be cracking open the frame above from ADAM: Episode 3 using RenderDoc – an extremely useful tool for frame analysis and graphics debugging – to give you an insider’s understanding of some of the visuals Oats accomplished. Conveniently, RenderDoc already has built-in editor integration with Unity, making it the logical next step from our own Frame Debugger, in case you want to do similar analysis on one of your own projects. Read more about RenderDoc and Unity here.

Rendering the G-Buffer

Both ADAM films were rendered on Unity 2017.1’s deferred render path. This means that all opaque objects* are rendered into a set of buffers collectively referred to as the G-Buffer, or the Geometry Buffer. The G-Buffer contains all the data necessary to perform the actual lighting calculations further down the render pipe.

The G-Buffer (Geometry Buffer), alpha channels in left corners

By setting up multiple render targets (MRT), the following data can be written to all four constituents of the G-Buffer within the same draw call, for each opaque object as shown.

1. Diffuse Color RGB / Occlusion A (ARGB32)

The “intrinsic color” and baked ambient occlusion of geometry.

2. Specular Color RGB / Smoothness A (ARGB32)

Unity supports both specular and metallic PBR workflows. Internally, however, both workflow inputs boil down to the same information written in this buffer. This is done to unify PBR inputs under the same shading model, which is used later.

3. World Normal RGB / Unused A (ARGB2101010)

A higher precision buffer is used to store the world space normals, or the facing direction of a surface, at each pixel. This information is crucial for calculating illuminance with respect to a light source.

4. Emission, Ambient RGB / Unused A (ARGBHalf)

Emission and ambient GI are rendered into this buffer. Later down the pipe, this buffer is also used to gather reflections and to accumulate lighting. It is set to an LDR or HDR format depending on the user setting; ADAM is rendered with HDR lighting.

*Transparents and opaque items with custom shading models are handled differently, further down the pipe.
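To make the MRT layout above concrete, here is a small illustrative sketch of one G-Buffer write, modeled as plain Python tuples. The channel assignments follow the list above, but the packing helpers themselves are invented for this example and are not Unity's actual code.

```python
# Hypothetical model of the four G-Buffer render targets described above.

def encode_normal(n):
    """Map a unit world-space normal from [-1, 1] to [0, 1] for storage
    in the 10-bits-per-channel target (RT2)."""
    return tuple(0.5 * c + 0.5 for c in n)

def decode_normal(enc):
    """Recover the world-space normal from its stored encoding."""
    return tuple(2.0 * c - 1.0 for c in enc)

def pack_gbuffer(diffuse, occlusion, specular, smoothness, normal, emission):
    """One MRT write: all four targets filled for a single opaque fragment."""
    rt0 = (*diffuse, occlusion)          # ARGB32: diffuse RGB / occlusion A
    rt1 = (*specular, smoothness)        # ARGB32: specular RGB / smoothness A
    rt2 = (*encode_normal(normal), 0.0)  # ARGB2101010: world normal RGB / unused A
    rt3 = (*emission, 0.0)               # ARGBHalf: emission + ambient GI, HDR
    return rt0, rt1, rt2, rt3
```

The normal encode/decode round-trips exactly, which is the property the lighting passes rely on when they read RT2 back later.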

The depth-stencil buffer

As the G-Buffer is rendered, so is the scene’s depth into its own special buffer. Storing depth is critical in real-time graphics for holding onto our sense of the third dimension, both during and after the projection of the scene onto a 2D space. It is also essential for reconstructing the world position of a pixel, needed later in deferred shading. Moreover, this is the “bread and butter” for the advanced post-processing effects we all know and love in real-time.
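As a sketch of why stored depth is enough to recover position, here is a minimal reconstruction of a pixel's view-space position from a [0, 1] depth-buffer sample, assuming a standard (non-reversed) perspective projection with given near/far planes and vertical field of view. The constants and conventions are illustrative, not Unity's exact internals.

```python
import math

def linear_eye_depth(d, near, far):
    """Convert a nonlinear [0, 1] depth-buffer value back to eye-space depth,
    using the conventional perspective depth mapping."""
    return near * far / (far - d * (far - near))

def view_space_position(uv, d, near, far, fov_y, aspect):
    """Reconstruct a pixel's view-space position from its screen UV in [0, 1]^2
    and its depth sample."""
    z = linear_eye_depth(d, near, far)
    half_h = math.tan(0.5 * fov_y)        # half-height of the view frustum at z = 1
    x_ndc = 2.0 * uv[0] - 1.0             # remap UV to NDC [-1, 1]
    y_ndc = 2.0 * uv[1] - 1.0
    return (x_ndc * half_h * aspect * z, y_ndc * half_h * z, z)
```

A depth value of 0 maps back to the near plane and 1 to the far plane; multiplying by the inverse view matrix (not shown) would then yield the world-space position the deferred passes need.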

The depth and stencil buffers

The stencil buffer (right) shares the same resource as the depth buffer. It is particularly useful for categorizing pixels based on what was rendered to them. We can use that information later to discriminate between pixels and choose what kind of work is done on them. In Unity’s case, the stencil buffer is used for light culling masks. Specifically for ADAM, it is also used to mark objects that exhibit subsurface scattering (SSS).
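The categorization idea boils down to simple bit masking. This toy sketch marks and tests a hypothetical "subsurface scattering" stencil bit; the bit assignment is made up for the example and is not Unity's actual stencil layout.

```python
# Hypothetical stencil bit for "this pixel was rendered with SSS".
SSS_BIT = 1 << 4

def mark_sss(stencil):
    """Set the SSS bit when an SSS object writes this pixel."""
    return stencil | SSS_BIT

def is_sss(stencil):
    """Later passes test the bit to decide whether extra work is needed."""
    return (stencil & SSS_BIT) != 0
```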

Subsurface profile buffer

For subsurface scattering, the renderer was extended to write indices into an extra buffer during G-Buffer generation. These indices are later used to look up the relevant data from the subsurface profiles. The buffer also stores a scalar representing how much scattering should occur.

Subsurface profile buffer: (R) profile index, (G) scatter radius

As mentioned, that important data comes from subsurface diffusion profiles, which a user creates on the Editor side. These user-defined profiles define how diffuse light scatters within highly translucent media.

A subsurface scattering profile

We are also able to control forward-scattering, or transmittance, through these profiles. Examples of this are shots where light transmits through the thin flesh of the ear and nostrils. All of this information is sent to the GPU to be read later.

Next steps

With the G-Buffer rendered, we have effectively deferred all complexity of the scene geometry onto a handful of buffers. Doing this makes nearly all of our future calculations a reasonably fixed, predictable cost; this is because lighting calculations are now completely agnostic to the actual geometric complexity of the scene. Prior to main lighting, however, a few key preliminary passes remain, which are explored below.

Environment reflections

Using data from the recently created G-Buffer, a calculation is run against the skybox cubemap to obtain environment reflections. This calculation takes roughness, normals, view direction, specular color, and more into consideration, pushing them through a series of equations to produce a physically correct specular response from the environment. These reflections are additively blended into the emissive HDR buffer.
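One common ingredient of this lookup is choosing which mip of the prefiltered skybox cubemap to sample: rougher surfaces read blurrier, lower-resolution mips. The sketch below uses a perceptual-roughness remap of the kind found in Unity's shader library, but treat the exact numbers as illustrative rather than a guaranteed match for 2017.1.

```python
def reflection_mip(smoothness, mip_count):
    """Pick a cubemap mip level from G-Buffer smoothness.

    smoothness: the [0, 1] value stored in RT1's alpha channel.
    mip_count:  number of mips in the prefiltered environment cubemap.
    """
    roughness = 1.0 - smoothness
    # Perceptual remap so perceived blur grows non-linearly with roughness.
    r = roughness * (1.7 - 0.7 * roughness)
    return r * (mip_count - 1)
```

A perfectly smooth surface samples the sharpest mip (0), while a fully rough one samples the blurriest.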

The environment reflections

Shadows

Nearly all preliminary work required by the renderer is now complete. With that, the renderer enters the deferred lighting phase, which begins with shadows.

Unity uses a well-known technique called cascaded shadow mapping (CSM) on its directional lights. The idea is simple: our eyes can’t make out much detail the further away we look, so why should so much effort be put into calculating the faraway details in computer graphics? CSM works with this fact, and actually distributes, or cascades, the resolution of the shadow map based on distance from the camera.
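A common way to distribute that resolution is the classic log/uniform split-distance blend from the parallel-split shadow maps literature, sketched below. Unity actually exposes its cascade split percentages as user settings, so these computed numbers are illustrative of the idea rather than of Unity's defaults.

```python
def cascade_splits(near, far, count, blend=0.75):
    """Compute far-plane distances for each shadow cascade.

    Blends a logarithmic distribution (dense detail near the camera) with a
    uniform one, weighted by `blend`.
    """
    splits = []
    for i in range(1, count + 1):
        f = i / count
        log_split = near * (far / near) ** f          # logarithmic scheme
        uni_split = near + (far - near) * f           # uniform scheme
        splits.append(blend * log_split + (1.0 - blend) * uni_split)
    return splits
```

With four cascades over a 1–100 unit range, the first cascade covers only a few units near the camera, where our eyes notice detail, while the last one sweeps out the distant remainder.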

(L) Cascaded Shadow Map (CSM), (R) Spot Light Shadow Map

In this particular shot, the directional light’s CSM is actually only used on the environment geometry, leaving the two characters to be handled by a set of spotlights! This was done in some shots, including this one, because it gave the lighters at Oats Studios greater flexibility in accentuating the key visuals of a shot.

Screen-Space Shadows

We also deployed a technique called “screen-space shadows”, also known as “contact shadows”, which grants us highly detailed shadows by raymarching in the depth buffer. This technique was especially important because it captured the granular shadow details in Oats’ photogrammetry environments, which even CSM was not strong enough to resolve at times. Screen-space shadows work together with Unity’s shadow-mapping techniques to “fill in” light leaks.
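The core of the raymarch can be sketched in one dimension: starting from a pixel's position, step a short distance toward the light and compare against the depth buffer; if any sample of the scene lies in front of the ray, the pixel is in contact shadow. Everything here (step counts, bias, the 1D depth buffer) is a simplification for illustration.

```python
def contact_shadow(depth_buffer, start_uv, start_depth, light_step_uv,
                   light_step_depth, steps=8, bias=1e-3):
    """March toward the light in screen space, testing the depth buffer.

    depth_buffer: callable mapping a screen coordinate to scene depth.
    Returns 0.0 if an occluder is found (shadowed), 1.0 otherwise (lit).
    """
    uv, z = start_uv, start_depth
    for _ in range(steps):
        uv += light_step_uv
        z += light_step_depth
        if not 0.0 <= uv <= 1.0:          # marched off screen: give up
            break
        if depth_buffer(uv) < z - bias:   # scene geometry in front of the ray
            return 0.0
    return 1.0
```

Because it only consumes the depth buffer, this captures tiny occluders (pebbles, crevices in photogrammetry meshes) that a finite-resolution shadow map misses.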

Deferred shading

With all of the pieces in place, we’re now equipped with enough information to completely reconstruct the lighting scenario at each pixel.

All inputs in one of the deferred lighting passes

Deferred lighting performs one pass per light in view, accumulating each light’s contribution into the HDR buffer. In every pass, the contents of the G-Buffer are computed against the current light’s information, including its shadow map.

As a first step, the world space position of the pixel is reconstructed from the depth buffer, which is then used to calculate the direction from the surface point to the eye. This is crucial in determining the view-dependent specular reflections later. Shadow and other light information (cookie, etc.) is also gathered into a single scalar term to attenuate the final result. Next, all surface data is fetched from the G-Buffer. Finally, everything gets submitted to our shading model, a physically based, microfacet bidirectional reflectance distribution function (BRDF) for final shading.
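The final step above, the microfacet BRDF, can be sketched compactly: a Lambertian diffuse term plus a GGX specular term with Smith visibility and Schlick Fresnel. Unity's Standard shader differs in its exact terms and remappings, so read this as the shape of the computation, not its implementation.

```python
import math

def ggx_ndf(n_dot_h, roughness):
    """GGX normal distribution: concentration of microfacets around H."""
    a2 = roughness ** 4  # alpha = roughness^2, squared again for the NDF
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

def smith_visibility(n_dot_l, n_dot_v, roughness):
    """Schlick-GGX geometric shadowing/masking, combined for light and view."""
    k = (roughness + 1.0) ** 2 / 8.0
    g1 = lambda x: x / (x * (1.0 - k) + k)
    return g1(n_dot_l) * g1(n_dot_v)

def fresnel_schlick(v_dot_h, f0):
    """Schlick's approximation: reflectance rises toward grazing angles."""
    return f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

def brdf(n_dot_l, n_dot_v, n_dot_h, v_dot_h, albedo, f0, roughness):
    """Outgoing radiance factor for one light, per color channel (scalars here)."""
    diffuse = albedo / math.pi
    spec = (ggx_ndf(n_dot_h, roughness) *
            smith_visibility(n_dot_l, n_dot_v, roughness) *
            fresnel_schlick(v_dot_h, f0)) / max(4.0 * n_dot_l * n_dot_v, 1e-4)
    return (diffuse + spec) * n_dot_l
```

Note the final multiply by N·L: a surface facing away from the light receives nothing, which is exactly why the reconstructed world position and normal matter.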

All final lighting is accumulated for opaque objects, except subsurface objects

At this point we nearly have a fully shaded scene, but what’s up with the white outlines? Well, if you remember, those were the pixels that we marked in the stencil buffer for subsurface scattering, and we’re not quite done shading them.

Subsurface scattering

As briefly mentioned earlier, subsurface scattering is the scattering and re-emergence of diffuse light, most easily seen in translucent media (in fact, subsurface scattering occurs in all non-metals to some degree; you just don’t notice it most of the time). One of the most classic examples is skin.

But what does subsurface scattering really mean in the context of real-time computer graphics?

(L) Scattering distance is smaller than the pixel, (R) Scattering distance is larger than the pixel

The answer exposes a real problem. Both diagrams above contain a green circle, representing a pixel, with incident light arriving at its center. The blue arrows represent diffuse light, and the orange arrows specular. The left diagram shows all of the diffuse light scattering within the material and re-emerging inside the bounds of the same pixel. This is the case for nearly all materials one could hope to render, allowing the safe assumption that outgoing diffuse light exits at the entry point.

The problem arises when rendering a material that scatters diffuse light so much that it re-emerges farther than the bounds of the pixel, shown right. The previous assumption is of no help in a situation like this, and more advanced techniques must be explored to solve it.

(L) Diffuse, (R) Specular

Following current state-of-the-art, real-time subsurface scattering techniques, a special screen-space blur is applied to the lighting. Before we can do that, though, we must separate the diffuse and specular lighting into their own buffers. Why bother? Looking back at the diagrams, you will see that specular light reflects immediately off the surface, taking no part in the subsurface phenomenon. It makes sense to keep it separate from the diffuse lighting, at least until after performing the subsurface approximation.

Below you can see a closer capture of the split lighting for clarity. Note that all specular light is completely separated from the diffuse, allowing for the work needed on the irradiance/diffuse buffer on the left to be done without any concern for damaging the integrity of the high-frequency detail in the specular.

(L) Diffuse buffer, (R) Specular buffer

Armed with the diffusion kernels built from the user-authored subsurface profiles, a screen-space blur approximates the subsurface scattering phenomenon.
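Here is a 1D illustration of that idea: the diffuse buffer is convolved with a diffusion kernel, while the specular buffer is left untouched and recombined afterward. The kernel taps and buffers are invented for the example; the real pass works in 2D with per-pixel profile lookups and scatter radii.

```python
def sss_blur_and_recombine(diffuse, specular, kernel):
    """Blur lit diffuse with a diffusion kernel, then add specular back.

    diffuse, specular: per-pixel scalars (stand-ins for RGB buffers).
    kernel: list of (offset, weight) taps summing to 1.
    """
    n = len(diffuse)
    out = []
    for i in range(n):
        total = 0.0
        for offset, weight in kernel:
            j = min(max(i + offset, 0), n - 1)  # clamp at screen edges
            total += weight * diffuse[j]
        out.append(total + specular[i])         # specular recombined untouched
    return out
```

A single bright diffuse pixel bleeds into its neighbors, exactly the "light re-emerging outside the pixel" behavior the earlier diagrams called for, while specular highlights stay crisp.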

Multiple subsurface profiles can be used for different materials

By recombining with the specular at the end of the calculation, we have addressed our original problem! A technique like this is extremely effective at approximating the scattering that should occur outside the reach of a pixel. At this point, all of the opaque objects in the scene are now shaded.

All opaque objects are fully shaded

What comes next is rendering of screen-space reflections (SSR), skybox, screen-space ambient occlusion (SSAO), and transparents. Below you can observe the stepping through of these passes.

Rendering of SSR, skybox, SSAO, and transparents

The importance of motion blur

Motion blur played a key role in the films. Offering yet another axis of subtle cinematic quality, it could make or break a shot.

Of course, rendering motion blur requires the renderer to have knowledge of motion itself. This information is acquired by first rendering a preliminary motion vector texture (left). This buffer is produced by calculating the delta between the current and previous vertex positions in screen space, yielding per-pixel velocities for the motion blur calculation.
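The per-vertex computation is essentially two projections and a subtraction: project the vertex with this frame's and last frame's transforms, then store the screen-space delta. The sketch below assumes clip-space positions are already available and shows only the perspective divide and difference.

```python
def ndc(clip):
    """Perspective divide: clip-space (x, y, z, w) -> 2D NDC position."""
    x, y, _, w = clip
    return (x / w, y / w)

def motion_vector(curr_clip, prev_clip):
    """Screen-space velocity: where the vertex is now minus where it was."""
    cx, cy = ndc(curr_clip)
    px, py = ndc(prev_clip)
    return (cx - px, cy - py)
```

A static vertex under a static camera yields a zero vector; anything else produces the velocity the blur filter stretches samples along.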

Some extra work was done to properly obtain motion vectors from the Alembic streams. For details, see my colleague Sean’s recent post on that and other Alembic topics.

Post-FX

Before/after applying final post-processing

We finally arrive at the uber-shaded post-processing pass. Here, final color grading, ACES tonemapping, vignette, bloom, and depth of field are composited, providing a near-complete image. One thing is off, though: where is Marian’s visor?

Marian’s visor

Dealing with transparency is a well-known problem in real-time graphics. Effects like depth of field, screen-space reflections, motion blur, and ambient occlusion all depend on spatial awareness reconstructed from the scene depth – but how could that work for a pixel covered by a transparent object? You would need two or more depth values!

What is done instead is to first render all opaque objects in the scene, followed by a special back-to-front forward pass over the transparent object list, blending each transparent onto the frame without writing depth. This effectively sidesteps the issue as best it can, getting the job done just fine for most things, like character eyebrows or the incense smoke.
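That back-to-front pass is just repeated "over" compositing, which can be sketched per channel as follows (scalar colors here stand in for RGB):

```python
def blend_over(dst, src, alpha):
    """Standard alpha blending: src over dst."""
    return src * alpha + dst * (1.0 - alpha)

def composite_transparents(background, layers):
    """Blend a back-to-front list of (color, alpha) transparent layers onto
    the opaque background, never touching the depth buffer."""
    color = background
    for src, alpha in layers:
        color = blend_over(color, src, alpha)
    return color
```

Because ordering is baked into the list, sorting errors or intersecting transparents produce wrong results, which hints at why the depth-dependent post effects have no transparent depth to work with.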

However, as seen in the examples above, ignoring the issue would not fly for Marian’s transparent visor, which takes up nearly half of the screen time of what is intended to be a cinematic short film. We needed some sort of alternative for this specific corner case, and quickly.

The solution during production was to defer transparency to a composite pass between two fully shaded frames. As you have seen so far in this post, the first frame contains everything but the visor. After the first frame is rendered, its G-Buffer and depth are carried over into a second render pass for the second frame, in which the visor is rendered as an opaque object.

Visor transparency was deferred to a composite pass between two fully shaded frames

By running a second post-process pass on the second frame, armed with the contents of the original frame’s G-Buffer and depth, we can successfully obtain proper SSR, SSAO, motion blur, and depth of field for the visor. All it takes to get the visor back into the original frame is to composite by its alpha, which gets blurred by motion blur or depth of field along with everything else.

With and without the technique. Notice the expected occlusion on the right.

By taking this necessary step for Marian’s visor we integrated it nicely back into the picture, as shown in the above comparison. You will notice the proper SSR and AO taking effect on the right. While by no means an all-encompassing solution for the transparency problem, this technique addressed the original corner case and offered full post-processing compliance for a transparent object.

The finishing touch: Flares

Putting their in-house flare system to good use, Oats Studios completely elevated their picture’s cinematic quality with this great finishing touch. Animated and sequenced in Timeline, these lens flares are additively blended to the frame, producing our final picture.

The final shaded frame, and the lens flares to be added on top

Final result

Here you see how everything we’ve discussed is rendered at runtime.

Marian approaches her hostage brother with a rock

In summary, frame breakdowns are a great way to understand the interesting choices graphics teams make to suit the needs of a production, as well as being a trove of useful information to learn from and use in your own projects. If you’d like to know more about this type of analysis, check out Adrian Courrèges’ excellent graphics study series, where he deconstructs frames from various AAA titles.

Looking ahead

Unity has big plans to deliver enhanced graphics features (like the subsurface scattering used in this film) to every user in 2018. With what we call the Scriptable Render Pipeline (SRP), a new set of APIs now available in the 2018 beta, users can define the renderer for themselves. We will also be shipping a template implementation of SRP called the High Definition Render Pipeline (HDRP), a modern renderer that includes subsurface scattering and other awesome new features. The subsurface scattering used in ADAM was a direct port from the HDRP to the 2017.1 stock renderer.

If you are curious and want to know more about SRP and what Unity has in store for graphics this year, be sure to check out Tim Cooper’s 2018 and Graphics post.

Learn more about Unity

Find out how Unity 2017 and its features like Timeline, Cinemachine, Post-Processing Stack, and real-time rendering at 30 FPS help teams like Oats Studios change the future of filmmaking.

11 Comments

Loving the ADAM shorts, really trying to get to grips with Unity for use in VR and Architectural simulations. I’d love to know if it’s possible to incorporate the ADAM scene asset into a VR/WMR environment without much knowledge of coding.

Dude, this is actually an incredibly detailed and in-depth article about the current and, quite likely, future graphics pipeline in Unity. To “do any of that” you do the same things as with everything else: you learn, and this here is a very good read just for that.

Hey guys, I’ve been doing some work taking just the G-Buffer and doing a lot of custom shading using the data. I too want to implement a translucency effect in a deferred manner. Did you just hijack one of the empty G-Buffer channels? If not, how did you get deferred rendering to write to an additional buffer without re-rendering? If the answer is just to use the Scriptable Render Pipeline – any idea how you’d do that? The docs just obsessively describe culling.

Well, I did not try this, but I am almost sure that if you set up two cameras and two layers (one for each camera), set the cameras to only render the objects in their respective layer (first camera renders layer Alpha, second camera renders layer Omega), and then clone the visor into two objects – one transparent for layer Alpha and one opaque for layer Omega – you get what they got. Then I guess you forego the lighting pass in the “opaque” case and only use the “opaque” G-Buffer during a custom transparent shader pass (I assume?). Keep in mind, in the case of a visor this was only possible because the geometry of a visor is convex, that is, no matter how the camera is positioned, the visor only renders one layer per pixel. If the visor were a complex object with cavities (say, a semi-transparent ghost figure of a person), they would have needed way, way more G-Buffers than just two :)