I was considering switching my engine from deferred shading to a light pre-pass (deferred lighting) approach. From my initial reading on deferred lighting, it seems that this method will not generate the same output as deferred shading, since we are not taking the diffuse + specular colors of the materials into account during light buffer generation. So if an object is affected by multiple lights, the surface color is only applied to the output once, whereas the deferred shading approach multiplies in the surface color for each light (I am talking about the Phong model specifically).

I assume that to generate the same output as before, I would either have to modify the properties of each light, or change the deferred shading implementation to only apply the surface color once. Another option is to add surface data to the G-buffer, but that brings us back to deferred shading. In my current implementation I can switch between deferred and forward shading and the output is about the same; however, this will no longer be the case with deferred lighting.

Is there something I am missing, or is this indeed the case? How are other engines that have switched to deferred lighting handling this? Are you just ignoring the differences and sticking with one lighting method, or applying some function in code to modify the light properties in a pre-pass renderer? I would assume this transition would be a bigger issue in large projects with multiple scenes and lights.

To get the same results with deferred lighting and deferred shading, your deferred-lighting "lighting accumulation buffer" has to accumulate diffuse and specular light separately, so that later you can multiply them by the diffuse surface colour and specular surface colour, respectively. As long as you do that, they'll be the same.

The reason that many deferred-lighting systems are different is that the above setup requires 6 channels in the accumulation buffer. As an optimization, you can instead accumulate just 4 channels -- the diffuse light RGB, and the specular light without any colour information. Later on, you can either treat all specular light as monochromatic, or 'guess' its colour by looking at the accumulated diffuse colour.
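As a concrete illustration, here's a minimal CPU-side sketch in Python of the 4-channel variant described above. The function names and the light/material layout are invented for the example; the specular chroma is simply discarded (the "monochromatic" option), using luminance as the scalar specular weight:

```python
# Illustrative sketch of light pre-pass accumulation with a 4-channel
# buffer: RGB = accumulated diffuse light, A = monochromatic specular.
# All names and the light/material layout are invented for this example.

def light_pass(lights, shininess):
    """Light pass: accumulate per-light diffuse RGB plus a scalar specular
    term. The specular chroma is discarded (the 4-channel optimization)."""
    diffuse = [0.0, 0.0, 0.0]
    specular = 0.0
    for light in lights:
        ndl = max(light["n_dot_l"], 0.0)   # Lambert term, precomputed per pixel
        ndh = max(light["n_dot_h"], 0.0)   # Blinn-Phong half-vector term
        # Reduce the light colour to luminance for the monochrome specular.
        r, g, b = light["color"]
        luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b
        for i in range(3):
            diffuse[i] += ndl * light["color"][i]
        specular += (ndh ** shininess) * luminance
    return diffuse, specular

def material_pass(diffuse, specular, albedo, spec_color):
    """Second geometry pass: multiply the accumulated light by the
    surface colours, once, after all lights have been summed."""
    return [diffuse[i] * albedo[i] + specular * spec_color[i] for i in range(3)]
```

The 6-channel variant would simply keep `specular` as an RGB triple instead of collapsing it to luminance.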

[edit]

So if an object is affected by multiple lights, it will only apply the surface color to the output once vs the deferred shading approach which multiplies in the surface color for each light

Splitting diffuse + specular seems like a good idea. Wouldn't that still produce a different output than deferred shading, though? For example, let's just take diffuse into account, ignoring specular.

My current deferred shading approach (which I believe is how everyone does it?):

float4 color = float4(0, 0, 0, 0);
for each light
    color += surfaceColor * (NdotL * lightDiffuseColor);
frameBuffer = color;

deferred lighting approach:

lightAccumBuffer = float4(0, 0, 0, 0);
for each light
    lightAccumBuffer += (NdotL * lightDiffuseColor);

scene render pass:
    frameBuffer = lightAccumBuffer * surfaceColor;

In the examples above, deferred shading multiplies surfaceColor into the lighting contribution of each light and adds that result to the frame buffer, whereas deferred lighting multiplies the surface color in only once. Wouldn't this produce a different result?

Both of these equations are mathematically equal (just ask Wolfram). If you can re-arrange the first into the second, then it may be more efficient to implement on a computer because there are fewer operations.
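The distributive step is easy to check numerically; here's a tiny Python sketch with arbitrary values (one colour channel, three lights):

```python
# Distributivity: sum_i(surfaceColor * NdotL_i * lightColor_i)
#              == surfaceColor * sum_i(NdotL_i * lightColor_i)

surface_color = 0.8                               # one channel, arbitrary
lights = [(0.7, 1.0), (0.3, 0.5), (0.9, 0.2)]     # (NdotL, lightDiffuseColor)

# Deferred shading: multiply the surface colour in per light.
shading = sum(surface_color * ndl * c for ndl, c in lights)

# Deferred lighting: accumulate the light first, multiply once.
lighting = sum(ndl * c for ndl, c in lights) * surface_color

assert abs(shading - lighting) < 1e-12            # equal up to rounding
```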


Mathematically the results are the same due to the distributive property, as Hodgman has already pointed out. In practice there can be differences due to precision and conversion behavior of render target formats. If you're using floating-point formats then it's not likely to be a significant issue.
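As a toy illustration of that precision point, here's a Python sketch comparing float accumulation with an 8-bit render target that quantizes after every additive blend. The values and the to_8bit helper are invented for the example:

```python
# Toy illustration: accumulating light in an 8-bit render target
# (quantizing after every additive blend) vs. accumulating in float.
# The values and the to_8bit helper are invented for the example.

def to_8bit(x):
    """Quantize a [0, 1] value the way an 8-bit render target would."""
    return round(min(max(x, 0.0), 1.0) * 255) / 255

albedo = 0.8
light_terms = [0.123, 0.234, 0.077]   # per-light NdotL * lightColor, one channel

# Float accumulation: where the albedo multiply happens doesn't matter.
float_result = sum(light_terms) * albedo

# 8-bit accumulation: each additive blend rounds, so the pre-pass
# result drifts slightly from the single-pass one.
accum = 0.0
for t in light_terms:
    accum = to_8bit(accum + t)
quantized_result = to_8bit(accum * albedo)

print(abs(float_result - quantized_result))   # small, nonzero error
```

With a 16-bit or 32-bit float accumulation buffer this drift becomes negligible, which is why it's rarely a practical concern.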

However, I feel I should ask why you're considering a "light pre-pass" approach in the first place. It can be useful if you *really* don't want to use multiple render targets (which can be beneficial on a certain current-gen console), but outside of that it doesn't really have any advantages. It forces you to render your geometry twice (both times with a pixel shader), it's harder than regular deferred rendering to handle MSAA (at least if you want to do it correctly), and the second geometry pass doesn't really give you much more material flexibility, since it happens after the BRDF has been applied.

The main reason was material flexibility without having to store additional parameters in the G-buffer, but thinking about it more based on what you said, it doesn't seem that useful since the BRDF has already been applied. Basically I was just wondering whether there would be any discrepancies in the output if I were to switch. I probably won't switch unless I find a good reason.