2 Answers

The defining characteristic of deferred rendering is that it essentially changes the complexity of scene rendering from O(geometry * lights) to O(geometry + lights).

This is achieved by first rendering the scene using shaders designed to output basic attributes such as (at a minimum) position*, normal, and diffuse color. Other attributes might include per-pixel specular values and other material properties. These are stored in full screen render targets, collectively known as the G-buffer.
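As a rough mental model, the G-buffer is one record of attributes per screen pixel. A minimal CPU-side sketch in Python (the field names and layout here are illustrative assumptions; real implementations pack these attributes into GPU render-target textures):

```python
# Illustrative CPU-side model of a G-buffer; not how a GPU stores it.
from dataclasses import dataclass

@dataclass
class GBufferTexel:
    normal: tuple   # surface normal (nx, ny, nz)
    diffuse: tuple  # diffuse/albedo color (r, g, b)
    depth: float    # stored depth (commonly used to reconstruct position)

WIDTH, HEIGHT = 4, 4

# One record per screen pixel; the geometry pass fills these in.
gbuffer = [[GBufferTexel((0.0, 0.0, 1.0), (0.5, 0.5, 0.5), 1.0)
            for _ in range(WIDTH)] for _ in range(HEIGHT)]
```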

(*: It's worth noting that developers will more commonly opt to store depth, and use that to reconstruct position, since having depth available is useful for so many other effects.)
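The reconstruction of position from depth can be sketched as follows, assuming linear view-space depth and a symmetric perspective projection (the function and parameter names are hypothetical):

```python
import math

def reconstruct_view_position(u, v, view_depth, fov_y, aspect):
    """Reconstruct a view-space position from stored linear depth.

    u, v are normalized device coordinates in [-1, 1]; view_depth is the
    linear distance along the view axis. Assumes a symmetric perspective
    projection -- a simplifying assumption for illustration.
    """
    half_h = math.tan(fov_y / 2.0)  # half-height of the view plane at z = 1
    half_w = half_h * aspect        # half-width of the view plane at z = 1
    # Scale the per-pixel view ray by the stored depth.
    return (u * half_w * view_depth,
            v * half_h * view_depth,
            view_depth)
```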

Once the G-buffer has been generated, it's possible to compute a fully lit result for any pixel on the screen by solving the BRDF exactly once per pixel per light. In other words, if you have 20 meshes, each affected by 20 lights, traditional ("forward") rendering would demand that you re-render each mesh several times in order to accumulate the result of each light affecting it. In the worst case, this would be one draw call per mesh per light, or 400 total draw calls! For each of these draw calls, you're redundantly retransforming the vertices of the mesh. There's also a good chance that you'll be shading pixels that aren't actually affected by the light, or won't be visible in the final result (because they'll be occluded by other geometry in the scene). Each of these inefficiencies wastes GPU resources.

Compare to deferred rendering: You only have to render the meshes once to populate the G-buffer. After that, for each light, you render a bounding shape that represents the extents of the light's influence. For a point light, this could be a sphere scaled to the light's range; for a directional light, it would be a full screen quad, since the entire scene is affected.
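The draw-call arithmetic from the 20-mesh, 20-light example can be made concrete (a toy count that ignores batching, culling, and other real-world factors):

```python
# Back-of-the-envelope draw-call comparison for 20 meshes and 20 lights.
meshes, lights = 20, 20

forward_draws = meshes * lights   # worst case: one draw per mesh per light
deferred_draws = meshes + lights  # geometry pass + one bounding volume per light

print(forward_draws, deferred_draws)  # 400 vs 40
```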

Then, when you're executing the pixel/fragment shader for that light's bounding volume, you read the geometry attributes from the appropriate position in the G-buffer textures, and use those values to determine the lighting result. Only the scene pixels that are visible in the final result are shaded, and they are shaded exactly once per light. This represents a potentially huge savings.
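A CPU-side sketch of such a lighting pass, using simple Lambert diffuse and a dictionary standing in for the G-buffer textures (the names, data layout, and shading model are all simplifying assumptions):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade_point_light(gbuffer, light_pos, light_color, radius):
    """Accumulate Lambert diffuse from one point light, G-buffer style.

    gbuffer: dict mapping (x, y) pixel -> {'pos': .., 'normal': .., 'albedo': ..}
    Only pixels inside the light's radius are shaded, exactly once each.
    (Illustrative CPU sketch; a real implementation runs this per-fragment
    on the GPU while rasterizing the light's bounding volume.)
    """
    out = {}
    for pixel, g in gbuffer.items():
        to_light = tuple(l - p for l, p in zip(light_pos, g['pos']))
        dist = math.sqrt(dot(to_light, to_light))
        if dist > radius:  # outside the light's bounding volume: skip
            continue
        l_dir = tuple(c / dist for c in to_light)
        n_dot_l = max(0.0, dot(g['normal'], l_dir))
        out[pixel] = tuple(a * c * n_dot_l
                           for a, c in zip(g['albedo'], light_color))
    return out
```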

However, it's not without drawbacks. It's a paradigm that is very difficult to extend to handle transparent geometry (see: depth peeling). So difficult, in fact, that virtually all deferred rendering implementations fall back to forward rendering for the transparent portions of the scene. Deferred rendering also consumes a large amount of VRAM and frame buffer bandwidth, which leads developers to go to great lengths to cleverly pack and compress G-buffer attributes into the smallest and fewest components possible.
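One classic packing trick, sketched below, stores only two components of the unit normal and reconstructs the third. This particular reconstruction assumes front-facing view-space normals (z >= 0); real engines often prefer more robust schemes such as octahedral encoding:

```python
import math

def pack_normal(n):
    """Store only x and y of a unit normal (two G-buffer channels)."""
    return (n[0], n[1])

def unpack_normal(xy):
    """Reconstruct z from x and y of a unit normal.

    Assumes a front-facing (z >= 0) view-space normal -- a simplifying
    assumption; this breaks down for normals pointing away from the viewer.
    """
    x, y = xy
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)
```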

Also referred to as Deferred Shading, Deferred Rendering covers a wide set of possible rendering paths that store intermediate results into textures, then complete the rendering equation later by sampling the intermediate data.

Geometry buffers are an early example: the scene is rendered into a series of buffers that contain, e.g., the position, normal, and base texture of opaque geometry. At this point lighting has not been applied, and the final color is not yet known. In subsequent passes the lights are rendered, sampling the geometry buffers, so the per-pixel cost depends only on the number of lights that actually reach that screen pixel. Traditional rendering would have evaluated all light sources even for surfaces that were occluded and never seen on screen.

Many variations exist, such as rendering the lighting information first (as in light pre-pass / deferred lighting approaches).