About this sample

One of the “must-have” effects for modern video games is HDR rendering. It allows for the rendering of beautiful, realistic lighting effects that use luminance values with a very high dynamic range. This makes for striking scenes, and it can also simplify the creation of artwork and assets, since extremely bright or dark areas are no longer special-case scenarios.

This sample presents a basic implementation of a typical HDR pipeline using the XNA Framework. The sample also implements a PostProcessor class, which can be extended for other post-processing tasks (such as depth of field, motion blur, etc.). The graphics code is compatible with both the PC and the Xbox 360; however, the sample itself only includes a Windows PC project. Update: the sample has now been tweaked for the 360 as well.

All details about the implementation, along with further background information on HDR, can be found in a write-up in the zip below, or downloaded separately as a Word document. Below you'll find the contents of this article:

Basic HDR Theory

When we create a GraphicsDevice, we typically have it create a backbuffer with SurfaceFormat.Color (INT8) as the surface format. This format specifies 8 bits per component per pixel, giving us 256 discrete values for each component. In our pixel shaders this [0,255] range is mapped to [0.0,1.0], and if our shaders output any value greater than 1.0 it is simply clamped when written to the backbuffer. This gives us a wide range of colors to work with in terms of what we display on the screen; however, a problem arises when areas of the screen need to be significantly brighter than others.

To solve this problem, we can render to a format whose precision allows us to extend past the [0,1] range. This allows us to render a wide range of luminance (brightness) values to a surface, which in turn allows us to do some pretty neat effects with the data. The most convenient way for us to store this extended data is in a floating-point format, such as SurfaceFormat.HalfVector4 (FP16). This format uses 16 bits per component, which can comfortably store a wide range of color values. However, it also uses twice as many bits per pixel as SurfaceFormat.Color, which means double the memory usage and double the bandwidth. Other penalties can also be incurred depending on the hardware; for instance, the ROPs or the texture units may not handle FP16 data at the same speed as INT8. To alleviate these problems, an alternate encoding format known as LogLuv, which encodes HDR data into a standard INT8 surface, is demonstrated in the sample code.

LogLuv Encoding

LogLuv is an encoding format described in Greg Ward’s paper The LogLuv Encoding for Full Gamut, High Dynamic Range Images. Originally designed for storing static images, it was adapted for use in real-time games by former Ninja Theory programmer Marco Salvi. By dedicating 16 bits of a 32-bpp pixel to storing luminance information, it is capable of storing a very wide dynamic range of luminance values suitable for HDR rendering. Encoding and decoding can be achieved via simple pixel shader functions, found below:

NOTE: credit for the optimized encoding function goes to Christer Ericson, who posted it on his blog.

Getting Set Up

The sample initializes itself by loading models, effects, and textures in the LoadContent method. Our two custom classes, FirstPersonCamera and PostProcessor, are also initialized here.

Before we start rendering, we also need to create the render target to which we’ll render our HDR color information. We do this in the MakeRenderTarget method, which creates a single RenderTarget2D. The parameters we supply to the RenderTarget2D constructor depend on the current settings of our app: if we’re using LogLuv encoding we use SurfaceFormat.Color, otherwise we use SurfaceFormat.HalfVector4. We also specify MultiSampleType.FourSamples if multisampling is enabled, and in that case create a new DepthStencilBuffer to match. Note that we don’t request a multisampled backbuffer, since we’ll already have rendered to a multisampled RenderTarget2D.

Rendering the scene

For our scene, we’re going to render an HDR skybox and a single model. The HDR skybox contains texture data in FP16 format, which allows it to have areas that are significantly brighter than others. When we’re using FP16, we don’t need to do anything special in our shaders: we output values as normal, and the only difference is that when writing to FP16 they won’t be clamped to [0,1]. For LogLuv, however, we need to encode our final color value before outputting it. The same goes for the model’s Effect.

To accomplish this, we include a header file called LogLuv.fxh in both the Skybox and Model .fx files. This file contains the encoding and decoding functions specified at the beginning of the tutorial. By placing the functions in a header file, we can simply include it in any Effect that requires these routines. To enable easy switching between output in linear RGB and encoded LogLuv, we send a uniform bool parameter to the pixel shader. When we use a parameter like this whose value is specified in the technique definition, the effect compiler generates two versions of the pixel shader: one with the encoding, and one without. This allows us to easily generate different permutations of our shader, with each permutation conveniently referenced by the technique name.

Applying Bloom and Tone mapping

Once we have HDR color data rendered to a RenderTarget2D, we can send it off to our PostProcessor to do some neat things with it. In the sample, we mainly accomplish two things:

Apply tone mapping to the scene, compressing the color values to the visible range

Add an HDR bloom effect

The tone mapping process implemented by the sample uses an operator described in Equation 4 of this paper. This operator allows colors above a specified luminance value (Lwhite) to “burn out”: they map to values above 1.0 and display as pure white. This can be highly desirable for bloom effects, as it allows bloom to be applied only to the very brightest areas of the screen. The sample’s implementation is in pp_Tonemap.fxh, as seen below:
float g_fMiddleGrey = 0.6f;
float g_fMaxLuminance = 16.0f;

This tone mapping operator requires calculation of the average luminance of the entire scene. To do this, we convert the HDR render target to luminance values and then repeatedly downscale until we have a 1x1 texture. To simulate the gradual adaptation of the human eye to different lighting conditions (or the auto-exposure feature of a camera) we can gradually adapt the current luminance value rather than directly using the value calculated through downscaling. The sample implements this feature in pp_HDR.fx, using a technique described in this presentation by Wolfgang Engel:

In the code, fTau is a constant that controls the rate of adaptation, and g_fDT is the amount of time elapsed since the last frame. fLastLum is the adapted luminance from the previous frame, and fCurrentLum is the luminance calculated from the current frame.

To add bloom effects, we first downscale our initial HDR texture to 1/16 size. We then apply our tone mapping operator to figure out what the color of each pixel will be once tone mapped, and then apply a simple threshold: