I've read some material about deferred rendering, and I think I get the gist of it. But what I don't understand is how it accomplishes shadows. The G-buffer, as far as I'm aware, does not involve creating a shadow map for each light, so I'm confused as to how the lighting pass knows whether each pixel is occluded. After all, a given pixel that is visible from the camera's perspective may not be visible from the perspective of any given light, and the occluding geometry may itself be invisible from the camera's perspective, so nothing about it gets written into the G-buffer.

If you start rendering shadow maps, then it seems pretty much the same as forward rendering: you still render all the geometry in the scene once per light to build its shadow map.

So how does deferred rendering accomplish shadows equivalent to forward rendering?

2 Answers

Deferred shading doesn't do anything special for shadows. You still need to render the shadow maps normally and then render each light with the appropriate shadow map bound as a texture.

It's still better than forward rendering because you don't need to redraw the scene in the main view to apply the lighting. Drawing the shadow map is often much cheaper than drawing more passes in the main view because you don't need to do any pixel shading, and shadow maps often contain less of the scene (you can cull out a lot more stuff).
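As a rough illustration of why the shadow pass is cheap, here is what a depth-only setup might look like in OpenGL. The answer doesn't name an API, so the API choice is an assumption, and names like `shadowFbo` are hypothetical:

```cpp
#include <GL/glew.h> // any GL loader will do; an assumption, the thread names no API

// Hypothetical sketch: configure a depth-only pass so the shadow map costs
// rasterization plus depth writes, with no pixel shading at all.
void beginShadowPass(GLuint shadowFbo, int shadowMapSize) {
    glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo); // FBO with only a depth attachment
    glViewport(0, 0, shadowMapSize, shadowMapSize);
    glDrawBuffer(GL_NONE);   // no color buffer is written...
    glReadBuffer(GL_NONE);   // ...or read
    glClear(GL_DEPTH_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);
    // Bind a minimal program here: the vertex shader is just the light's
    // model-view-projection transform, and the fragment shader can be empty,
    // since the depth value comes from rasterization.
}
```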

People do sometimes do "deferred shadows" for one light, typically the main directional light. The primary reason to do this is to use cascaded shadow maps or another approach that uses multiple shadow maps for the same light. You can reserve one channel in the G-buffer for a shadow mask (white where lit, dark where shadowed) and apply all cascaded shadow maps into this G-buffer channel; then the shader for the light just reads the shadow mask and multiplies it into the light color. This is nice since it decouples shadows from shading, but you're still drawing all the same shadow maps.
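A sketch of what producing that mask might involve, written as C++ with GLM standing in for shader code. The cascade split distances and helper names are made up for illustration, and `sampleDepth` is a stand-in for a shadow-map texture fetch:

```cpp
#include <glm/glm.hpp>

const int kCascades = 3;
// Hypothetical cascade split distances (far plane of each cascade, view-space).
float splitFar[kCascades] = { 10.0f, 40.0f, 200.0f };

// Computes the shadow-mask value for one fragment: pick a cascade by
// view-space depth, do the shadow test, return 1 where lit, 0 where shadowed.
// viewToShadow[i] maps view space into cascade i's shadow texture space.
float shadowMask(const glm::vec3& viewPos,
                 const glm::mat4 viewToShadow[kCascades],
                 float (*sampleDepth)(int cascade, glm::vec2 uv))
{
    // Select the first cascade whose far plane covers this fragment
    // (camera looks down -z in GL conventions, so depth is -viewPos.z).
    int c = kCascades - 1;
    for (int i = 0; i < kCascades; ++i)
        if (-viewPos.z < splitFar[i]) { c = i; break; }

    glm::vec4 s = viewToShadow[c] * glm::vec4(viewPos, 1.0f);
    glm::vec3 coord = glm::vec3(s) / s.w;                 // projective divide
    float occluderDepth = sampleDepth(c, glm::vec2(coord));
    return (coord.z - 0.002f > occluderDepth) ? 0.0f : 1.0f; // small bias vs. acne
}
```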

If you are extremely clever you could try using a geometry shader during the creation of the G-buffer to 'inline' the creation of the shadow map (like the cube map DX10 sample does). I'm not sure if it has been done, if it's at all possible, or if it would be slower in the end, but it would bring the creation of shadow maps closer to deferred shading, or into the definition depending on your religion.
– Jonathan Dickinson, Mar 14 '12 at 8:24

Well, what is a shadow map? A shadow map is a texture whose texels answer a simple question: at what distance from the light, along the direction represented by the texel, is the light occluded? Texture coordinates are generated using various projective texturing means, depending on the particular shadow mapping algorithm.

Projective texturing is simply a way of transforming an object into the space of the texture (and yes, I know that sounds backwards. But that's how it works). Shadow mapping algorithms use several different kinds of transforms. But ultimately, these are just transformations from one space into another.
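Concretely, for basic shadow mapping that transformation is just the light's view and projection matrices followed by a fixed scale/bias that remaps NDC coordinates from [-1, 1] into the texture's [0, 1] range. A sketch, assuming GLM for the math and GL-style conventions (D3D differs slightly):

```cpp
#include <glm/glm.hpp>

// Scale/bias from NDC [-1, 1] into texture space [0, 1].
// glm matrices are column-major: each line below is a column,
// so the last line is the translation column.
const glm::mat4 kBias(
    0.5f, 0.0f, 0.0f, 0.0f,
    0.0f, 0.5f, 0.0f, 0.0f,
    0.0f, 0.0f, 0.5f, 0.0f,
    0.5f, 0.5f, 0.5f, 1.0f);

// World space -> shadow texture space, for the simplest shadow mapping setup.
glm::mat4 worldToShadowTexture(const glm::mat4& lightView,
                               const glm::mat4& lightProj)
{
    return kBias * lightProj * lightView;
}
```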

When rendering the shadow map, you take the vertices of your geometry and transform them through a standard rendering pipeline. But the camera and projection matrices are designed for your light's position and direction, not for the view position and orientation.
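For example, the light's "camera" might be built like this minimal sketch, assuming GLM (the answer names no library) and a directional light with made-up scene extents; a spot light would use glm::perspective instead:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// View matrix positioned along the light direction, aimed at the scene.
glm::mat4 lightView = glm::lookAt(
    glm::vec3(50.0f, 80.0f, 20.0f),  // hypothetical light position
    glm::vec3(0.0f),                 // looking at the scene's center
    glm::vec3(0.0f, 1.0f, 0.0f));    // up vector

// Orthographic projection for a directional light; extents are made up
// and would normally be fit to the visible scene.
glm::mat4 lightProj = glm::ortho(
    -60.0f, 60.0f, -60.0f, 60.0f,    // left/right/bottom/top
     1.0f, 200.0f);                  // near/far along the light direction
```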

When doing forward rendering with a shadow map, you render the object as normal, transforming the vertices into the view camera space and through the viewing projection matrix. However, you also transform the vertices through your light camera and projection matrices, passing the results as per-vertex data to the fragment shader. The fragment shader uses them via projective texturing to access the shadow texture.
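The extra per-vertex work might look like the following sketch. It is plain C++ using GLM to mirror what would normally be vertex-shader code; kBias is the NDC-to-texture matrix from the earlier sketch, and all names are illustrative:

```cpp
#include <glm/glm.hpp>

struct VertexOut {
    glm::vec4 clipPos;     // what gl_Position would receive
    glm::vec4 shadowCoord; // interpolated, consumed by the fragment stage
};

VertexOut transformVertex(const glm::vec3& position,
                          const glm::mat4& model,
                          const glm::mat4& view,      const glm::mat4& proj,
                          const glm::mat4& lightView, const glm::mat4& lightProj,
                          const glm::mat4& kBias)
{
    glm::vec4 world = model * glm::vec4(position, 1.0f);
    VertexOut out;
    out.clipPos     = proj * view * world;                   // the normal path
    out.shadowCoord = kBias * lightProj * lightView * world; // the extra shadow path
    return out;
}
```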

Here's the important point. The projective texture access is designed such that the location it accesses on the texture represents the direction between that point on the surface (the point that you're rendering in the fragment shader) and the light. Therefore, it fetches the texel that represents the depth at which occlusion happens for the fragment being rendered.
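The fragment-stage half might look like this sketch (again C++/GLM standing in for shader code; `sampleShadowMap` is a hypothetical stand-in for a texture fetch, and in real GLSL textureProj or a sampler2DShadow would do the divide and compare for you):

```cpp
#include <glm/glm.hpp>

// Returns 1 where lit, 0 where shadowed, for the interpolated shadow coordinate.
float shadowFactor(const glm::vec4& shadowCoord,
                   float (*sampleShadowMap)(glm::vec2 uv))
{
    glm::vec3 p = glm::vec3(shadowCoord) / shadowCoord.w; // projective divide
    float occluderDepth = sampleShadowMap(glm::vec2(p));  // depth the light saw
    float bias = 0.002f;                                  // fights shadow acne
    return (p.z - bias > occluderDepth) ? 0.0f : 1.0f;    // fragment behind occluder?
}
```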

But there's nothing special about this pipeline. You don't have to transform the vertex positions into the shadow texture and pass those to the fragment shader. You could pass the world-space vertex positions to the fragment shader, and then have the fragment shader transform them into the projective space of the shadow texture. Granted, you'd be throwing away lots of performance, since you'll come up with the exact same texture coordinates. But it is mathematically viable.

Indeed, you could pass the view camera-space vertex positions to the fragment shader. It could then transform them to world, then into light camera-space, then into the projective shadow texture space. You could put all of that transformation into one matrix (depending on your shadow projection algorithm). Again, this gives you exactly what you had before, so when forward rendering, there's no reason to do it.
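That single combined matrix might be built like this (a sketch with GLM, where glm::inverse(view) undoes the main camera's view transform and kBias is the NDC-to-texture matrix from earlier):

```cpp
#include <glm/glm.hpp>

// View camera space -> world -> light camera space -> light clip space
// -> shadow texture space, collapsed into one matrix.
glm::mat4 makeViewToShadow(const glm::mat4& view,
                           const glm::mat4& lightView,
                           const glm::mat4& lightProj,
                           const glm::mat4& kBias)
{
    return kBias * lightProj * lightView * glm::inverse(view);
}
```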

But in deferred rendering, you already have the view camera-space vertex positions. You have to, otherwise you can't do lighting. You either wasted a lot of memory and bandwidth by writing them to a buffer, or you were smart and recomputed them using the depth buffer and various math (which I won't go into here, but is covered online).
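The "recompute from the depth buffer" option, sketched with GLM under GL conventions (uv and the stored depth are both assumed to be in [0, 1]):

```cpp
#include <glm/glm.hpp>

// Rebuild the fragment's view camera-space position from its screen
// coordinate and the depth buffer value, via the inverse projection.
glm::vec3 viewPosFromDepth(glm::vec2 uv, float depth,
                           const glm::mat4& invProj)
{
    glm::vec4 ndc(uv * 2.0f - 1.0f, depth * 2.0f - 1.0f, 1.0f);
    glm::vec4 view = invProj * ndc;   // back into view space...
    return glm::vec3(view) / view.w;  // ...undoing the perspective divide
}
```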

Either way, you have view camera-space positions. And, as stated above, we can apply a matrix to transform them from view camera-space into shadow projective texture space. So... do that. Then access your shadow map.
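Putting the pieces together, the deferred lighting pass might do something like this sketch, reusing the hypothetical helpers from the earlier snippets:

```cpp
#include <glm/glm.hpp>

// Declarations matching the earlier sketches:
glm::vec3 viewPosFromDepth(glm::vec2 uv, float depth, const glm::mat4& invProj);
float shadowFactor(const glm::vec4& shadowCoord,
                   float (*sampleShadowMap)(glm::vec2 uv));

// Per-fragment shadow term for the deferred lighting pass:
// reconstruct the view-space position, transform it straight into
// shadow texture space, and do the usual comparison.
float deferredShadow(glm::vec2 uv, float storedDepth,
                     const glm::mat4& invProj,
                     const glm::mat4& viewToShadow,
                     float (*sampleShadowMap)(glm::vec2 uv))
{
    glm::vec3 viewPos = viewPosFromDepth(uv, storedDepth, invProj);
    glm::vec4 shadowCoord = viewToShadow * glm::vec4(viewPos, 1.0f);
    return shadowFactor(shadowCoord, sampleShadowMap); // 0 = shadowed, 1 = lit
}
```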