
Category Archives: Shadow Mapping

This is probably the first technique one will think of for creating soft shadows when doing shadow mapping. The shadows are rendered to a screen-space texture, that texture is blurred (also in screen space), and the blurred result is then applied to the screen. It is a very easy way to get soft shadows. The main drawbacks are shadow bleeding and the cost of the extra passes.
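The blur pass can be sketched on the CPU as a simple box filter over the screen-space shadow mask. This is a minimal Python sketch (a real implementation would do this in a fragment shader, typically as a separable blur); the function name `box_blur` is illustrative:

```python
def box_blur(mask, w, h):
    """3x3 box blur of a row-major screen-space shadow mask
    (1.0 = lit, 0.0 = shadowed). Edge pixels are clamped, mimicking
    clamp-to-edge texture sampling."""
    out = [0.0] * (w * h)
    for y in range(h):
        for x in range(w):
            total = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    sx = min(max(x + dx, 0), w - 1)
                    sy = min(max(y + dy, 0), h - 1)
                    total += mask[sy * w + sx]
            out[y * w + x] = total / 9.0
    return out

# a hard 4x4 mask: left half lit, right half shadowed
mask = [1.0, 1.0, 0.0, 0.0] * 4
soft = box_blur(mask, 4, 4)
```

After the blur the hard edge between lit and shadowed pixels becomes a gradient, which is exactly the soft look this technique is after.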

This is a technique for making softer shadows when doing shadow mapping, known as percentage-closer filtering (PCF). It works by filtering the result of the depth comparison: when comparing a depth, several nearby depths are also compared and the results are averaged. This gives a softer look to the shadow edges.

It can be implemented as simply as this in the pixel shader:

float result = shadow2DProj(shadowMap, texCoord + offset[0]).r;
result += shadow2DProj(shadowMap, texCoord + offset[1]).r;
result += shadow2DProj(shadowMap, texCoord + offset[2]).r;
result += shadow2DProj(shadowMap, texCoord + offset[3]).r;
result /= 4.0; // result now holds the average shadow factor

The samples are usually taken either in a regular grid around the original sample location or randomly scattered around it.
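The averaging described above can be sketched on the CPU as well. This is a minimal Python sketch of the four-sample comparison with a grid of offsets; the function name `pcf_shadow` and the tiny depth map are illustrative, not from the original post:

```python
def pcf_shadow(depth_map, size, x, y, frag_depth, offsets):
    """Average the results of several depth comparisons (PCF).

    depth_map: row-major light-space depths; offsets: texel offsets
    around the original sample location; coordinates are clamped at
    the edges."""
    result = 0.0
    for dx, dy in offsets:
        sx = min(max(x + dx, 0), size - 1)
        sy = min(max(y + dy, 0), size - 1)
        # 1.0 = lit, 0.0 = in shadow (fragment is behind the stored depth)
        result += 1.0 if frag_depth <= depth_map[sy * size + sx] else 0.0
    return result / len(offsets)

# a 2x2 grid of offsets around the sample point, in texels
grid = [(0, 0), (1, 0), (0, 1), (1, 1)]
# 2x2 depth map: one texel holds a nearer occluder
depths = [0.5, 0.9,
          0.9, 0.9]
```

With the data above, a fragment at depth 0.7 is shadowed by one of the four texels and lit by the other three, so the averaged result is a fractional shadow value rather than hard 0 or 1.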

Optimization:

NVIDIA has built-in hardware support for performing the four nearest depth comparisons and bilinearly interpolating the results.

ATI has Fetch4, which fetches four neighboring texture samples in a single fetch.
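The important detail of the hardware-filtered path is the order of operations: the four depth comparisons happen first, and only then are the four 0/1 results blended with bilinear weights. A minimal Python sketch of that behavior (the name `filtered_compare` and the tiny depth map are illustrative):

```python
import math

def filtered_compare(depth_map, size, u, v, frag_depth):
    """Emulate a bilinearly filtered depth comparison: compare the
    fragment depth against the four nearest texels first, then blend
    the 0/1 results using the fractional part of the sample position."""
    x0, y0 = int(math.floor(u)), int(math.floor(v))
    fx, fy = u - x0, v - y0

    def cmp(x, y):
        x = min(max(x, 0), size - 1)
        y = min(max(y, 0), size - 1)
        return 1.0 if frag_depth <= depth_map[y * size + x] else 0.0

    top = cmp(x0, y0) * (1.0 - fx) + cmp(x0 + 1, y0) * fx
    bot = cmp(x0, y0 + 1) * (1.0 - fx) + cmp(x0 + 1, y0 + 1) * fx
    return top * (1.0 - fy) + bot * fy

depths = [0.5, 0.9,
          0.9, 0.9]
```

Sampling halfway between a shadowed and a lit texel returns 0.5, so a single fetch already yields a soft edge.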

Shadow mapping works by checking whether a point is visible from the light or not. If a point is visible from the light it is obviously not in shadow; otherwise it is. The basic shadow mapping algorithm can be described as briefly as this:

Render the scene from the light's view and store the depths as the shadow map

Render the scene from the camera and compare the depths: if the current fragment's depth is greater than the stored shadow-map depth, the fragment is in shadow
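The two steps above can be sketched in a few lines of Python. The helper names `build_shadow_map` and `in_shadow` are illustrative, and the depth bias is an assumption of the sketch (real implementations need one to avoid self-shadowing, often called shadow acne):

```python
def build_shadow_map(occluder_depths_per_texel):
    """Pass 1: for each shadow-map texel, keep the depth nearest to
    the light (empty texels see nothing, stored as infinity)."""
    return [min(ds) if ds else float('inf')
            for ds in occluder_depths_per_texel]

def in_shadow(shadow_map, texel, frag_light_depth, bias=0.001):
    """Pass 2: the fragment is in shadow if something nearer to the
    light was recorded at its texel; the bias avoids a surface
    shadowing itself due to limited depth precision."""
    return frag_light_depth > shadow_map[texel] + bias

# texel 0 sees an occluder at depth 0.3 and a floor at 0.8;
# texel 1 sees only a wall at depth 0.6
shadow_map = build_shadow_map([[0.3, 0.8], [0.6]])
```

A fragment on the floor under the occluder (light-space depth 0.8 at texel 0) is in shadow, while the occluder surface itself (depth 0.3) is lit.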

This technique for shadow mapping does the reverse when comparing depths. Instead of the normal case of comparing depths in light space, this method suggests that the comparison should be done in eye space. This requires both the depth buffer from the light's view and the one from the camera's view at the time of the comparison. This method is probably most useful when doing deferred lighting.
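A minimal sketch of the idea, under strong simplifying assumptions: positions are assumed already reconstructed from the camera's depth buffer into world space (as a deferred renderer would do), the light's view is taken as orthographic looking down -z, and the names `to_view_depth` and `shadowed_in_eye_space` are illustrative:

```python
def to_view_depth(world_pos, eye_pos):
    """Depth of a world-space point in a view looking down -z from
    eye_pos (orthographic; an assumption of this sketch)."""
    return eye_pos[2] - world_pos[2]

def shadowed_in_eye_space(world_pos, light_pos, light_depth_buffer,
                          texel, bias=0.001):
    """Compare the fragment's depth from the light against the depth
    stored in the light's depth buffer, both in the light's eye space."""
    d = to_view_depth(world_pos, light_pos)
    return d > light_depth_buffer[texel] + bias

# light at z = 10; its depth buffer recorded an occluder at z = 6,
# i.e. an eye-space depth of 4.0
light_pos = (0.0, 0.0, 10.0)
light_depths = [4.0]
```

A reconstructed point at z = 2 lies at depth 8 from the light, which is behind the recorded occluder, so it is shadowed; a point at z = 7 is nearer to the light than the occluder and stays lit.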