-The Reyes Rendering Architecture-

Part one.

This article aims to take you through the basic architecture of the Reyes algorithm. I'm not going to describe every tiny step but I will detail the major portions and offer pseudo code where appropriate. Also included is the source code for a simple Reyes renderer in various stages of completion.

A big thank you goes to Dave Myers for his proof reading skills and suggestions.

Rendering algorithms have been a hobby of mine for almost ten years now. I wrote my first ray tracer in 1999 and I've been hooked ever since. After hacking my way through path tracing, photon mapping, radiosity, and Metropolis light transport, I started looking for a change of pace. Ray tracing is simple and straightforward; once you understand the basics, everything else falls into place. This is great for beginners, but for repeat rendering offenders like myself, something a bit more challenging is always welcome.

Enter micropolygon renderers. The basic concept behind micropolygon renderers such as Reyes is to divide each surface into tiny, pixel-sized polygons and to project them onto the screen. Micropolygon renderers are quite fast and offer a number of interesting features such as displacement mapping. They're able to handle parametric surfaces very efficiently and there's no need to worry about complicated spatial subdivision structures. Rendering a displaced NURBS surface in a ray tracer ain't easy. It's trivial in a Reyes renderer.

A Reyes renderer can be divided into five basic steps. In order they are:

Bound

Split

Dice

Shade

Sample

A primitive is bound by the screen and the near and far clipping planes. It is then split into smaller primitives. Each smaller primitive is then diced into a grid of micropolygons. The micropolygons are then shaded. Finally, the micropolygons are projected onto the screen and sampled. We'll be covering the bounding, splitting, and dicing stages in this chapter.
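The cycle described above can be sketched as a simple work loop. This is only an illustration: the Primitive interface and every name in it are my own invention, not taken from the renderer's actual source.

```cpp
#include <queue>
#include <vector>

static int g_diced = 0;  // counts how many primitives reached the dice stage

// Hypothetical primitive interface -- illustrative names only.
struct Primitive {
    virtual ~Primitive() {}
    virtual bool isVisible() const = 0;                 // Bound
    virtual bool needsSplit() const = 0;
    virtual std::vector<Primitive*> split() const = 0;  // Split
    virtual void diceShadeAndSample() { ++g_diced; }    // Dice/Shade/Sample stub
};

// A toy primitive whose screen size simply halves on each split.
struct ToyPrim : Primitive {
    float size;
    explicit ToyPrim(float s) : size(s) {}
    bool isVisible() const override { return true; }
    bool needsSplit() const override { return size > 32.0f; }
    std::vector<Primitive*> split() const override {
        return { new ToyPrim(size * 0.5f), new ToyPrim(size * 0.5f) };
    }
};

// Bound -> Split -> Dice loop: split children go back to the bounding stage.
void render(std::queue<Primitive*>& work) {
    while (!work.empty()) {
        Primitive* prim = work.front();
        work.pop();
        if (!prim->isVisible()) {
            // culled: off screen or outside the clipping planes
        } else if (prim->needsSplit()) {
            for (Primitive* child : prim->split())
                work.push(child);
        } else {
            prim->diceShadeAndSample();
        }
        delete prim;
    }
}
```

Note how the children of a split primitive are pushed back onto the same queue, so they get bounded (and possibly split again) before being diced.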

Bounding

Bounding involves estimating how large a piece of geometry is when projected onto the screen. There are numerous ways to do this. The simplest way is to coarsely dice each piece of geometry and project the resulting vertices onto the screen. Some primitives such as Bezier patches are bounded by their control points, meaning that projecting the control points onto the screen will give you a solid estimate of the object's size. For other objects, a 3d bounding box can be constructed around them and projected onto the screen. Care must be taken to ensure that the screen space estimate accounts for the possibility that the geometry is displaced.

Each method has its trade-offs. Coarse dicing is the slowest, as it requires repeatedly dicing the primitive and possibly running displacement shaders, but it's also the most general.

Splitting

Splitting a shape serves two purposes. First, it allows a more granular level of bounding to be done. If an object straddles a bounding plane, splitting it will allow the sections outside to be discarded.

Second, splitting ensures that there is a good balance between the number of primitives in memory and the number of micropolygons needed to represent them. Primitives that haven't been split enough will result in microgrids that contain tens or hundreds of thousands of micropolygons. On the other hand, it's possible to split a primitive too much, requiring millions of tiny primitives to be stored in memory.

The following code determines if a primitive is larger than the splitting threshold by dicing it and projecting the vertices of the microgrid onto the screen. It also determines which axis the primitive should be split along.
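A sketch of what such a test might look like, assuming the coarse dice produces an nu-by-nv grid of projected screen-space points stored row-major with u varying fastest (the layout and names are my assumptions, not the article's code):

```cpp
#include <cmath>

enum SplitAxis { NO_SPLIT, SPLIT_U, SPLIT_V };

// Length of a polyline of n screen-space points, sampled with stride `step`.
static float polylineLength(const float* sx, const float* sy,
                            int n, int step) {
    float len = 0.0f;
    for (int i = 1; i < n; ++i) {
        float dx = sx[i * step] - sx[(i - 1) * step];
        float dy = sy[i * step] - sy[(i - 1) * step];
        len += std::sqrt(dx * dx + dy * dy);
    }
    return len;
}

// Given a coarse nu x nv grid of projected vertices (row-major, u fastest),
// report whether the primitive exceeds the split threshold and, if so,
// which parametric axis is longest on screen.
SplitAxis testSplit(const float* sx, const float* sy,
                    int nu, int nv, float threshold) {
    // Measure along u using the middle row, along v using the middle column.
    int row = (nv / 2) * nu;
    int col = nu / 2;
    float uLen = polylineLength(sx + row, sy + row, nu, 1);
    float vLen = polylineLength(sx + col, sy + col, nv, nu);
    if (uLen <= threshold && vLen <= threshold)
        return NO_SPLIT;
    return (uLen >= vLen) ? SPLIT_U : SPLIT_V;
}
```

Measuring along the grid's parametric rows and columns rather than its screen-space bounding box means a curved or diagonal primitive still reports its true surface extent.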

It's important to choose the splitting direction properly so that the primitive is always being split across the longest axis. Choosing the right splitting direction can substantially reduce the number of primitives that need to be stored in memory. The following graph illustrates two possible splitting directions for a primitive of roughly 6 by 3 pixels with a splitting threshold of 4 pixels.

There is no hard and fast splitting threshold; the number can be anything you want. I've found that values between 16 and 32 pixels strike a fairly good balance between the number of primitives and the number of micropolygons.

Splitting parametric surfaces is easy. Rather than constructing two new objects complete with new control points (which can be computationally expensive, as in the case of NURBS), it's simpler to define "start" and "end" values on the larger object.
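In code, that amounts to halving a parameter range. This is a minimal sketch under assumed names; the children both point at the same underlying surface and differ only in the subrange they cover.

```cpp
// A primitive's window into its parent surface's parameter space.
struct ParamRange {
    float uStart, uEnd, vStart, vEnd;
};

// Split a parametric primitive in half along u or v by producing two child
// ranges. No new control points are computed -- both children evaluate the
// same surface, just over narrower parameter intervals.
void splitRange(const ParamRange& p, bool alongU,
                ParamRange& a, ParamRange& b) {
    a = p;
    b = p;
    if (alongU) {
        float mid = 0.5f * (p.uStart + p.uEnd);
        a.uEnd = mid;
        b.uStart = mid;
    } else {
        float mid = 0.5f * (p.vStart + p.vEnd);
        a.vEnd = mid;
        b.vStart = mid;
    }
}
```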

This single splitting function can be reused for every type of parametric surface. Some primitives like triangles and polygons require slightly more involved splitting functions. Triangles are particularly inefficient because they must be divided into three quadrilaterals.

Once a primitive has been split, its children are sent back to the bounding stage and the original primitive is deleted. From a conceptual standpoint, the relationship between the bounding and splitting stages is probably the most difficult part of Reyes to understand. This flowchart should hopefully make it a little clearer:

Dicing

When a primitive is smaller than the splitting threshold, it is diced into micropolygons. Micropolygons are stored in a grid structure called a microgrid. Due to the bounding and splitting phase, each primitive in the scene is now only a few pixels across.

The microgrid is simply a 2d array containing the position, normal, colour, and opacity associated with each vertex in each micropolygon.
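One possible layout for that structure, with assumed field names (the article's renderer may arrange things differently):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// A microgrid: a 2d array of shading data, one entry per vertex.
// Micropolygon (i, j) is the quad whose corners are the vertices at
// (i, j), (i+1, j), (i, j+1), and (i+1, j+1).
struct MicroGrid {
    int width, height;             // vertex counts along u and v
    std::vector<Vec3> position;
    std::vector<Vec3> normal;
    std::vector<Vec3> colour;      // r, g, b packed into a Vec3
    std::vector<Vec3> opacity;

    MicroGrid(int w, int h)
        : width(w), height(h),
          position(w * h), normal(w * h),
          colour(w * h), opacity(w * h) {}

    // Flat index of the vertex at parametric coordinates (u, v).
    int index(int u, int v) const { return v * width + u; }
};
```

Keeping the grid as flat arrays rather than per-micropolygon objects makes the shading stage a simple loop over vertices, which pays off in part two.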

Like the splitting function, this can be reused for each type of primitive your renderer supports. Every additional type of primitive only requires you to write new evaluation functions. The evaluation functions for a sphere are:
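As an illustration, here is one plausible parameterisation of a sphere: u in [0, 1] wraps around the equator and v in [0, 1] runs from pole to pole. The article's actual functions may use a different convention, so treat this as a sketch.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static const float kPi = 3.14159265358979f;

// Position on a sphere of radius r centred at the origin.
// u: longitude in [0, 1], v: latitude in [0, 1] (0 = north pole).
Vec3 spherePosition(float u, float v, float r) {
    float theta = u * 2.0f * kPi;
    float phi   = v * kPi;
    return { r * std::cos(theta) * std::sin(phi),
             r * std::cos(phi),
             r * std::sin(theta) * std::sin(phi) };
}

// For a sphere centred at the origin, the normal is just the position on
// the unit sphere -- already unit length, no normalisation needed.
Vec3 sphereNormal(float u, float v) {
    return spherePosition(u, v, 1.0f);
}
```

The dicer calls these at each (u, v) grid coordinate to fill the microgrid's position and normal arrays.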

Source code

You can download the source code for this chapter here. It implements the basic bounding, splitting, and dicing steps described in this article along with a sphere primitive. You'll need SDL installed to compile it.

And that's it for part one. Part two will detail the shading and sampling steps.