FuzzyPhoton

What is Raytracing?

In the very simplest terms, raytracing is a method for producing views of a virtual 3-dimensional scene on a computer (other techniques, such as scanline or z-buffer rendering, may also be described this way). Raytracing is closely allied to, and is an extension of, raycasting, a common hidden-surface removal method. It tries to mimic actual physical effects associated with the propagation of light.

Let's forget about computers for a while and look at the branch of physics known as optics. At the most fundamental level, we see objects when light bounces off the surfaces of these objects (or when light is produced by the objects themselves) and reaches our eyes. Light is a form of radiant energy. Solar cells work because the light energy falling on them can be converted to electric energy. When you focus the sun's rays with a magnifying glass, you can set a piece of paper on fire.

Light may be considered as being composed of waves or particles -- this is the classic wave-particle duality of matter and of radiant energy.

The problem regarding the nature of light arose in the 17th century. On one side were the particle-theorists, who spoke of light as being composed of fast-moving "corpuscles". It is worth noting that Isaac Newton belonged to this group. On the other side were the wave-theorists, who thought of light as a wave, a periodic fluctuation of a transmitting medium, much like sound waves.

To most of us, particles are necessarily composed of matter. This is the common sense definition. Matter may be defined as that which has mass. But where is the "matter" in a light wave, and how can it have mass? The answer may be found in Einstein's equivalence relation between mass and energy and in quantum theory. The first is the famous formula

E = mc^2

where E is energy, m is the mass of the object and c is the speed of light.

This tells us that mass and energy are equivalent. If you are given a packet of energy E (a hypothetical situation!) you can treat it as an object of mass E/c^2. Light energy is thus equivalent to a certain amount of mass.

Max Planck, the father of quantum theory, established (largely through a trial-and-error process to deal with a specific problem in blackbody radiation, the "ultraviolet catastrophe") that light energy is emitted not continuously, but in little packets called quanta (plural of quantum). Each of these packets has energy given by

E = hf

where h is Planck's constant, a ridiculously small number (6.626 * 10^-34 joule-seconds), and f is the frequency of the light (see the next section).

Now using these two equations, we have

mc^2 = hf

which gives, after dividing both sides by c^2,

m = hf / c^2

This is the effective mass of a quantum of light. Because of this property, we may consider a beam of light to be composed of many little particles called "photons" (photo = light), each having mass as given above and energy of one quantum.
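To get a feel for the numbers, here is a quick sketch of this formula in Python (the green-light frequency is just an example value):

```python
# Effective mass of a photon: m = h * f / c^2.
h = 6.626e-34   # Planck's constant, in joule-seconds
c = 2.998e8     # speed of light in vacuum, in metres per second

def photon_mass(f):
    """Effective mass (kg) of a photon of frequency f (Hz)."""
    return h * f / c ** 2

# Green light has a frequency of roughly 5.5e14 Hz:
m = photon_mass(5.5e14)   # around 4e-36 kg -- utterly negligible
```

Even for visible light the result is some twenty orders of magnitude lighter than an electron, which is why the effective mass is a calculational tool rather than something you could weigh.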

The effective mass idea is mainly a calculational tool and a useful concept: it is useless trying to catch a photon and measure its mass. In fact, modern physics usually speaks of a photon as having zero rest mass, that is, it has no mass when stationary. It acquires mass only when it moves (at the speed of light, naturally!).

The particle theory was useful in explaining many phenomena associated with light (such as photoelectricity) but it was quite useless in understanding others (such as interference, the production of a light and dark pattern when light from multiple sources interacts). This indicated that an alternative way of looking at light was required.

We often speak of "light waves" (in fact, a popular raytracing package is called Lightwave). Light may be considered as a periodic disturbance of a medium, which propagates from one place to another. Initially, physicists thought that this medium was invisible but material (like air in the case of sound waves), and they named it the "luminiferous ether". The ether concept was later discarded. Nowadays, light is treated as a fluctuation in an electromagnetic field. We shall not go into this concept in detail; the reader is advised to consult elementary high-school physics textbooks on electromagnetism.

Light, as a wave, has the following properties:

Frequency (f): the number of fluctuations per unit time
Time period (T): the time taken for one fluctuation
Wavelength (lambda): the distance moved by the wave in one time period
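These quantities are related: f = 1/T, and since light travels at speed c, lambda = c * T = c / f. A small sketch in Python (the red-light frequency is an example value):

```python
c = 2.998e8  # speed of light in vacuum, in metres per second

def wavelength(f):
    """Wavelength (m) of light of frequency f (Hz): lambda = c / f."""
    return c / f

def time_period(f):
    """Time period (s) of one fluctuation: T = 1 / f."""
    return 1.0 / f

# Red light at about 4.3e14 Hz has a wavelength near 700 nanometres:
lam = wavelength(4.3e14)
```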

This treatment fully explained interference effects, dispersion, refraction, reflection etc. However, it was still impossible to explain photoelectricity (the production of electrons when light falls on a surface) with the wave concept.

The two theories given above were complementary -- each had certain advantages that the other lacked, but taken together they explained just about any lighting effect physicists could think of. Finally, physicists (probably as a compromise, at least initially!) decided that light was neither just a stream of particles, nor just a bunch of waves -- it was, in a strange and ill-understood way, both at the same time. This idea became known as the "wave-particle duality" of light.

This does not mean that a quantum of light is a wave some of the time and a particle the rest of the time. It possesses both wave-like and particle-like properties at the same time -- depending upon the experiment, one of these two states may become more apparent to the experimenter than the other.

This idea gained a great deal of theoretical underpinning with the rapid development of quantum theory. A short description of this theory is given in the section on The Particle Theory.

At the heart of modern optics is the concept of a light ray. This may be understood in the languages of both the particle theory and the wave theory.

In the particle theory, a ray is just the path of a photon. This is always a straight line, where a straight line is defined as the shortest distance between two points. (This definition is useful when dealing with non-Euclidean geometries. Because of the presence of ponderable mass, our own universe actually has such a geometry -- space is curved.)

In the wave theory, the definition is a little more complicated and requires the introduction of another concept, that of the "wave-front". A wave has periodic fluctuations. Points on a wave are said to have the same "phase" if they are moving in the same way at the same time. The locus of all points having the same phase at a certain time is called the wave-front of the wave. A ray, in this system, is the locus of a given point on the moving wave-front.

In simple terms, both definitions amount to a single concept -- a ray is a line along the path of propagation of the light. Thus the ray gives us the path taken by the light.

One of the main problems in the rendering of 3-D scenes is the elimination of hidden surfaces, i.e. surfaces that are not visible from the position of the eye. This problem is tackled in various ways. For example, in the z-buffer method, a depth value is associated with each pixel on the screen. If the depth of the point under consideration from the view plane (the projection screen) is less than the stored depth at that pixel, the pixel is assigned the colour of the point and the depth value is updated to the new value. The process continues in this way.
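The z-buffer update rule just described can be sketched in a few lines of Python (a toy frame buffer; the resolution and names are made up for the example):

```python
# Toy z-buffer: one depth value and one colour per pixel.
W, H = 4, 3
FAR = float("inf")
depth = [[FAR] * W for _ in range(H)]          # start infinitely far away
frame = [[(0, 0, 0)] * W for _ in range(H)]    # start with a black screen

def plot(x, y, z, colour):
    """Write colour at pixel (x, y) only if z is nearer than what is stored."""
    if z < depth[y][x]:
        depth[y][x] = z
        frame[y][x] = colour

plot(1, 1, 5.0, (255, 0, 0))   # red point at depth 5: accepted
plot(1, 1, 9.0, (0, 255, 0))   # green point behind it: rejected
plot(1, 1, 2.0, (0, 0, 255))   # blue point in front: accepted
```

Note that the order in which points arrive does not matter; the nearest one always ends up owning the pixel.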

In raycasting, a ray of light is followed from the eye position through the point on the view plane corresponding to the pixel. For each surface in the scene, the distance the ray must travel before it intersects that surface is calculated, using elementary or advanced mathematical methods. The surface with the shortest distance is the nearest one, therefore it blocks the others and is the visible surface. Raycasting is therefore a hidden surface removal method.
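As a sketch of this nearest-surface rule, here is a ray tested against a few spheres in Python; ray-sphere intersection is one such elementary mathematical method (the scene itself is invented for the example):

```python
import math

def intersect_sphere(origin, direction, centre, radius):
    """Distance along the ray to the sphere, or None if the ray misses it.
    direction is assumed to be a unit vector."""
    oc = [o - c for o, c in zip(origin, centre)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c          # quadratic discriminant (a = 1)
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# Three spheres at different depths along the ray from the eye:
spheres = [((0, 0, 10), 1.0), ((0, 0, 5), 1.0), ((0, 0, 20), 1.0)]
eye, ray = (0, 0, 0), (0, 0, 1)
hits = [(intersect_sphere(eye, ray, c, r), i) for i, (c, r) in enumerate(spheres)]
# The visible surface is the one with the smallest positive distance:
nearest = min((h for h in hits if h[0] is not None), default=None)
```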

Actually, raycasting has other applications as well. Volumes of non-standard objects may be estimated with raycasting methods coupled with integration techniques (the volume of a thin cylinder through a surface element of area dA is dA times the distance between successive intersections of the ray with the object along the cylinder's axis). Raycasting may also be applied in rendering volume data.

We are now ready to understand what raytracing involves. The name itself is a clue. A raytracing program calculates the illumination effects of a surface by tracking, or tracing, the path of a light ray as it bounces off or is refracted through the surface. The technique known as "forward raytracing" starts off with a ray from a light source in an arbitrary direction. When this ray intersects a surface, two child rays in the reflected and refracted directions are generated. Each of these rays has a colour given by the colour of the light source modified by the properties of the surface. Each of the child rays is followed as it intersects other surfaces and more children are spawned. This process is continued until all of the child rays have either arrived at the eye position or have escaped into space. The observed colours of the last set of intersected surfaces are given by the colours of the rays arriving at the eye.

As you can imagine, this technique is immensely wasteful. In most cases, about 99.9% of the tested rays will not reach the eye, or will do so only after an unmanageably large number of surface intersections. A more efficient technique is "backward raytracing". This method is based on the principle that the path of a light ray can be reversed: two light rays along the same straight line but in opposite directions will follow the same path, even if many reflections or refractions are involved. Backward raytracing follows a light ray in the reverse direction, from the eye to the light source. This ensures that the colour obtained is in fact the colour observed.
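The recursive flavour of backward raytracing can be sketched with a toy "scene" in Python: a ray from the eye is followed from surface to surface until it reaches a light source or escapes, and each surface filters the colour found beyond it. Everything here (the surfaces, the colour values) is invented purely for illustration:

```python
MAX_DEPTH = 5   # give up after this many bounces

def trace(surface, depth=0):
    """Colour seen along a ray that hits `surface`.
    A surface either emits light or reflects whatever the next surface shows."""
    if surface is None or depth > MAX_DEPTH:
        return (0.0, 0.0, 0.0)                 # ray escaped into space
    if surface["emits"]:
        return surface["colour"]               # ray reached a light source
    r, g, b = trace(surface["reflects"], depth + 1)
    cr, cg, cb = surface["colour"]
    return (r * cr, g * cg, b * cb)            # filter by the surface colour

light = {"emits": True, "colour": (1.0, 1.0, 1.0)}
red_mirror = {"emits": False, "colour": (1.0, 0.2, 0.2), "reflects": light}
grey_mirror = {"emits": False, "colour": (0.5, 0.5, 0.5), "reflects": red_mirror}
seen = trace(grey_mirror)   # the eye ray hits the grey mirror first
```

Because the path is reversible, filtering the light source's colour in this eye-to-light order gives the same result as following the light forward through the same two mirrors.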

Note: Sometimes, the method of tracing a ray from the eye to the light source is termed "raytracing" and the reverse "backward raytracing". I feel the convention used above is more logical (since you're travelling against the light, you should be going "backward"). However, the second convention appears sometimes and might lead to some confusion. [-- Thanks to Mathias Baas for pointing this out --]

Diagram of the backward raytracing process

Usually, a raytracing program makes a few other compromises. If the algorithm I have just given is strictly followed, and the light sources are not very large, then most rays will miss the sources altogether. Also, most surfaces are not perfect reflectors, they show diffuse reflection. Diffuse reflection is caused by surface roughness, and results in the commonly seen effect of a gradient from light to dark when a rough surface scatters light from a point light source (the strongest scattering occurs when the surface is perpendicular to the line from the surface point to the light source). Thus in addition to the colours obtained from the rays following the paths of perfect reflection and refraction, empirical models are applied to the visible points to determine the diffuse reflection intensities of the surfaces. These models have been described in the section on lighting models elsewhere on this site. Related effects such as specular and metallic highlights (the little bright spots seen on shiny surfaces) may also be produced with these models.