Color Quantization For Film

RenderDotC's default behavior is to generate 8 bit linear data.
This means that each channel (red, green, blue, and alpha) spans from nominal
black at 0 to nominal white at 255. While this is often acceptable,
RenderDotC is capable of producing higher quality output. When rendering
for film, the 8 bit model may run out of steam. This document explains
why rendering for film is more demanding on color quantization (especially
when composited with live action) and how to achieve superior results with
RenderDotC.

Background

For a thorough and accurate description of film and the 10 bit digital
negative (Cineon) format, see the original documents cited as [1] through [5]:

Film has a tremendous dynamic range [4]. It starts at black and
must be overexposed mercilessly before it becomes totally saturated and
can get no whiter. Instead of considering the total possible range
of film, we start by focusing on the normal range of intensities that might
occur when filming an indoor or outdoor scene, from a reference black to
a reference white. Cameras are sometimes calibrated by holding benchmark
gray or white cards in front of the lens. Such cards are of a known
intensity expressed as a percentage of 100% reference white. The
18% gray card is used in film photography, and television cameras are calibrated
against the 90% white card [2].

The Cineon format uses the ~1% black card as reference black and the 90% white
card as reference white [1]. On film, total saturation is perhaps
20 times brighter than reference white. This additional range is
called "headroom" and can be used to capture specular highlights on water
or chrome. Cineon's digital negative also captures the extended headroom.
A logarithmic scale is employed to focus the numeric precision where it
is needed most, on the darker shades, sacrificing precision in the headroom.
Cineon uses 10 bits, enough to avoid contouring (Mach bands) in digital images.
Therefore, this format is known as "10-bit log".
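As a rough illustration of the 10-bit log idea, the following Python sketch encodes linear light into Cineon-style code values. The constants used here (reference white at code 685, 0.002 negative density per code value, 0.6 negative gamma) are commonly cited Kodak defaults; they are assumptions for this sketch, not values taken from this document.

```python
import math

# Assumed Kodak-style Cineon constants (not specified in this document):
REF_WHITE_CODE = 685       # 10-bit code assigned to reference white
DENSITY_PER_CODE = 0.002   # negative density step per code value
NEG_GAMMA = 0.6            # assumed negative gamma

def lin_to_cineon(lin):
    """Map linear light (1.0 = reference white) to a 10-bit Cineon code."""
    code = REF_WHITE_CODE + (NEG_GAMMA / DENSITY_PER_CODE) * math.log10(lin)
    return max(0, min(1023, round(code)))

def cineon_to_lin(code):
    """Inverse mapping: 10-bit code back to linear light."""
    return 10 ** ((code - REF_WHITE_CODE) * DENSITY_PER_CODE / NEG_GAMMA)

# Reference white lands at code 685; the top code, 1023, carries the headroom.
print(lin_to_cineon(1.0))             # 685
print(round(cineon_to_lin(1023), 1))  # about 13.4x reference white
```

Note how the log scale spends most of its 1024 codes below and near reference white, with the remaining codes covering roughly an order of magnitude of headroom above it.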

RenderMan Limitations

The RenderMan Interface Specification 3.1 [5] requires
that the default color quantization be 8 bit linear. RenderDotC honors
this requirement. One may change the color quantization with RiQuantize:

Quantize type one min max ditheramplitude

The arguments to RiQuantize are as follows:

type: "rgba" to set quantization levels for color and alpha; "z"
to set depth quantization.

one: Integer to which the floating point value 1.0 is mapped. If
one equals 0, quantization is turned off and floating point values are output.

min: Lowest integer that should ever be generated.

max: Highest integer that should ever be generated.

ditheramplitude: Maximum magnitude of the random value added or subtracted
before rounding to an integer.
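Putting the arguments together, the quantization step can be sketched in Python. This is a minimal model of the behavior described above; the exact dither distribution and clamping details are assumptions about points the text leaves open.

```python
import random

def ri_quantize(c, one, qmin, qmax, dither):
    """Sketch of the RiQuantize mapping for one channel value 'c':
    scale so that 1.0 maps to 'one', add random dither, round,
    and clamp the result into [qmin, qmax]."""
    if one == 0:
        return c  # one == 0 turns quantization off; floats pass through
    noise = random.uniform(-dither, dither)  # assumed uniform dither
    return int(max(qmin, min(qmax, round(one * c + noise))))

# Default 8 bit linear quantization: Quantize "rgba" 255 0 255 0.5
print(ri_quantize(1.0, 255, 0, 255, 0.0))  # 255
print(ri_quantize(0.0, 255, 0, 255, 0.0))  # 0
```

The dither breaks up contouring by randomizing which of two adjacent integers a value rounds to; min and max simply clamp the final result.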

One limitation of the RenderMan model of quantization is that only linear
mapping is possible. Creating a 10 bit log scale such as Cineon must
be done as a post processing step.

Another limitation is that colors and alpha must have identical quantization
parameters [3]. As we shall see later, RenderDotC overcomes this
limitation with an implementation specific extension to the RenderMan standard.

Quantization in Practice

When rendering for film, the first problem encountered is that 8 bits are
not enough [3]. Artifacts such as Mach bands may appear. The
solution is simple enough: use 16 bits:

Quantize "rgba" 65535 0 65535 0.5    # 16 bit integer

Or even 32 bit floating point:

Quantize "rgba" 0 0 0 0              # 32 bit floating point

[Note that min, max, and ditheramplitude are meaningless
when using floating point output. Here, they are arbitrarily set
to all zeros.]

When quantizing to integers, RenderDotC looks only at max to
determine how many bits per channel to use. Possible values are 1,
2, 4, 8, or 16 bits per channel. The smallest number of bits that
will accommodate max is selected. It makes sense to choose
a value for max that uses all of the bits. In the example
above, we wanted to use 16 bits so we set max = 2^16 - 1 = 65535.
This is the largest possible unsigned 16 bit number.
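The bit-depth selection rule just described can be expressed as a small Python sketch (the function name is ours, purely illustrative, not part of RenderDotC):

```python
def bits_for_max(qmax):
    """Return the smallest supported channel depth (1, 2, 4, 8, or 16
    bits) whose unsigned range can hold the integer 'qmax'."""
    for bits in (1, 2, 4, 8, 16):
        if qmax <= 2 ** bits - 1:
            return bits
    raise ValueError("max exceeds the 16 bit unsigned range")

print(bits_for_max(1))      # 1
print(bits_for_max(255))    # 8
print(bits_for_max(65535))  # 16
```

Note that a max of, say, 4095 still selects 16 bits per channel, since 12 is not among the supported depths; this is another reason to pick a max that fills the chosen depth completely.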

The standard (but shortsighted) approach that is often taken when recording
computer generated images (CGI) to film is to align 0 with reference black
and max with reference white. This works reasonably well except
that the extended headroom of the film goes unused. Bright highlights
in the CGI just aren't as bright as they should be. If the CGI is
mixed with live action, it may look dull and flat by comparison [4].

Here's where the one parameter of RiQuantize comes into play.
We can set one to some value less than max, align one
with reference white, and leave the range from one to max for
the extended headroom. If an object in the scene is a fully illuminated
white object, the shader will return Ci = color(1.0) and reference white
will be met. A specular highlight off of chrome may produce an even
more intense color such as color(5.0). Be careful that the shader
does not arbitrarily clamp all colors to 1.0. Otherwise, the headroom
will never be exercised.

What's a good value for one? The perfect value for covering
the same extended range as 10 bit log is about 4829. This is close
enough to 2^12 - 1 = 4095 that we may substitute it for the convenience
of nice round numbers:

Quantize "rgba" 4095 0 65535 0.5

Don't make the mistake of choosing 1023 just because that sounds like a
good value for 10 bit log. The RenderMan quantization space remains
linear. 10 bits in linear space has nothing to do with 10 bit log.
Choosing 1023 results in leaving more headroom than can be captured on
film at the cost of reduced precision below reference white.
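To make the arithmetic concrete, here is a Python sketch of the recommended Quantize "rgba" 4095 0 65535 0.5 setup (dither omitted for clarity; the function name is illustrative):

```python
def quantize_with_headroom(c, one=4095, qmax=65535):
    """Map a channel value so reference white (1.0) lands at 'one',
    leaving the codes above 'one' for highlights brighter than white."""
    return min(qmax, max(0, round(one * c)))

print(quantize_with_headroom(1.0))   # 4095: reference white
print(quantize_with_headroom(5.0))   # 20475: a chrome highlight
print(quantize_with_headroom(20.0))  # 65535: clipped; the headroom
                                     # tops out near 16x reference white
```

With these settings the headroom covers shader colors up to about 65535 / 4095 = 16 times reference white before clipping, comfortably above what film can capture.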

Texture Files

When using an extended linear range, texture files (especially environment
maps) should be stored in the same space. If the image was created
with RenderDotC, then reference white is automatically stored as the value
of one and is automatically transferred to the texture/environment
maps by texdc (or RiMakeTexture). The texture() and environment()
functions used by the shader then convert the texel to floating point by
dividing it by reference white. For a texel at reference white of
4095, texture() will return 4095 / 4095 = 1.0, exactly the desired result.
A texel at the maximum value of 65535 results in a color of 16.0, well
above reference white and into the range reserved for the headroom.
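The texel-to-float conversion described above amounts to a single division by the stored reference white (a sketch; refwhite here is a plain parameter, not the actual texture file metadata):

```python
def texel_to_float(texel, refwhite=4095):
    """Recover linear light from a stored texel by dividing by the
    reference white code carried with the texture file."""
    return texel / refwhite

print(texel_to_float(4095))             # 1.0, exactly reference white
print(round(texel_to_float(65535), 2))  # about 16.0, in the headroom
```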

For images not created by RenderDotC, reference white may be explicitly
set when building the texture or environment maps. For more information,
see the documentation on the refwhite
option.

A RenderDotC Extension

One problem with the procedure described above is that alpha goes through
the same quantization procedure as color [3]. It doesn't make sense
for alpha to have headroom. We really just want a simple, 16 bit
linear alpha channel. The RenderMan Specification [5] provides no
mechanism for separating RGB from A in RiQuantize. RenderDotC offers
an extension to do just that: