In an imaginary world of infinite CPU power, it would be feasible to use ray
casting to compute the lighting information for every face at runtime. Since the Pentium CXXIX has yet
to make it out of internal testing, we have to live with less-than-ideal methods for lighting.

Many graphics engines use precomputed lightmaps that cover all faces of the mesh. The greatest advantage
of this method is that the complexity of the lighting affects only the preprocessing time, not the
runtime. However, precomputed systems do not handle dynamic lighting well at all.

In my case, I have a very dynamic world, so I need to be able to light the world at runtime. One of
the obvious trade-offs is that the engine will not be able to handle complex lighting: as the number of
lights increases, the runtime stress on the engine increases.

As a first stab at rendering runtime lighting, I chose to use projected lightmap textures.

Note: This method has some very serious drawbacks. In this algorithm, the intensity of the light defines
the fall-off distance; the visible intensity at the center of every light is the same. Thus, a light with
an intensity of 500 would be just as bright at its center as a light with an intensity of 100. In my case,
lighting is an ambience and not a critical part of the engine. Realistic lighting is not required.

The Lightmap Texture

For my purposes, the intensity of a light at a given distance from the
light source is defined as:

PixelIntensity = 1 - (PixelDistance^2 / LightIntensity^2)

Where:
LightIntensity is defined as the distance at which the light no longer illuminates a pixel.

To generate the lightmap, light is computed for an imaginary face that cuts through a sphere of light
at the center of the sphere. The following code sample generates a lightmap texture where the bytes
of the texture contain the intensity of the light as values between 0 and 255, where 0 is dark and 255
is bright.
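The code sample itself is not reproduced here, but the idea can be sketched as follows. This is my
own minimal version, assuming the lightmap is a square texture whose lit circle touches the texture
edges; the function name and layout are mine, not the article's:

```c
#include <stdlib.h>
#include <math.h>

/* Generate a square lightmap whose texels hold light intensity,
   0 = dark, 255 = bright.  Falloff follows the intensity equation
   above, 1 - (d^2 / r^2), clamped to zero outside the radius. */
unsigned char *GenerateLightmap(int size)
{
    unsigned char *texels = malloc((size_t)size * size);
    float radius = size / 2.0f;       /* light reaches the texture edge */
    float cx = (size - 1) / 2.0f;     /* texel-space center */
    float cy = (size - 1) / 2.0f;

    for (int y = 0; y < size; ++y) {
        for (int x = 0; x < size; ++x) {
            float dx = x - cx;
            float dy = y - cy;
            float falloff = 1.0f - (dx * dx + dy * dy) / (radius * radius);
            if (falloff < 0.0f)
                falloff = 0.0f;
            texels[y * size + x] = (unsigned char)(falloff * 255.0f);
        }
    }
    return texels;
}
```

The resulting buffer can be uploaded as a single-channel (alpha or luminance) OpenGL texture and
reused for every light, since the per-light brightness is applied later via glColor.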

Projecting a lightmap for an omnidirectional light is different than
projecting a lightmap for a directional light. With omnidirectional lights, there is no need
to worry about perspective corrections beyond what is required for normal texturing.

When a light projects the lightmap onto a face, the center of the lightmap texture is located
at the point that is closest to the light. This point need not actually lie on the face. However, it
will lie on the plane of the face.
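Finding that point is a standard point-onto-plane projection. A sketch, assuming the plane is stored
as a unit normal n and a distance d (points p on the plane satisfy dot(n, p) = d); the names are mine:

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Project the light position onto the plane of the face.  The signed
   distance from the light to the plane is dot(n, light) - d; stepping
   back along the normal by that amount lands on the plane. */
Vec3 ClosestPointOnPlane(Vec3 light, Vec3 n, float d)
{
    float dist = light.x * n.x + light.y * n.y + light.z * n.z - d;
    Vec3 r = { light.x - dist * n.x,
               light.y - dist * n.y,
               light.z - dist * n.z };
    return r;
}
```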

Texture Coordinates

Computing the texture coordinates of the lightmap for each of the face’s vertices is relatively
easy. First, two vectors, vS and vT, must be generated.

Where:
vFaceVertex is any vertex of the face
vClosestPointOnPlane is the closest point on the plane to the light source
vLightNormal is the normalized vector from the light source to the closest point on the plane.

The final part required to compute the texture coordinates from the vertex is the scale of the lightmap.
As the plane gets farther away from the light source, the circle of light cast on it also gets
smaller. The scale of the lightmap is defined as:

Since the intensity of a pixel is defined by the previous equation, the intensity of the lightmap at
the center of the lightmap must be adjusted to match the required intensity. With OpenGL, the color
and intensity of a texture can be adjusted by calling glColor. If the plane actually passed through
the center of the light, the intensity would be 100%, so glColor would be called as
glColor4f (1, 1, 1, 1) for a white light. As the plane moves farther from the light source,
the RGB values passed to glColor are reduced as specified by the equation.
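That center intensity follows the same falloff equation, evaluated at the plane's distance from the
light. A sketch; the function name is mine, and in a real render loop the result would be multiplied
into the light's color before the glColor4f call:

```c
/* Intensity of the lightmap at its center for a plane at the given
   distance from the light: 1 at the light's center, falling to 0 at
   the fall-off distance.  For a light of color (r, g, b) one would
   then call glColor4f(i * r, i * g, i * b, 1.0f) before drawing the
   lightmap pass. */
float CenterIntensity(float lightIntensity, float distanceToPlane)
{
    float falloff = 1.0f - (distanceToPlane * distanceToPlane) /
                           (lightIntensity * lightIntensity);
    return falloff > 0.0f ? falloff : 0.0f;
}
```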