Hello, I have managed to implement a technique similar to the one used in Battlefield 3 (deferred shading with tiled light culling in a compute shader). However, I couldn't find any info on the so-called irradiance volumes. What exactly are they? Are they more efficient than the method used in Battlefield 3? With all the limitations of deferred shading, the method I'm currently using gets really annoying to work with. Wouldn't it be better to just have a forward renderer that renders once and, in the pixel shader, loops over all the lights for each pixel to calculate its illumination? Why do people say the geometry has to get "re-drawn" for each light in the forward rendering approach?

Why do people say the geometry has to get "re-drawn" for each light in the forward rendering approach?

On older hardware, you would have a shader that only calculated a single light. Then, for each light, you would draw the object using that same shader (and additively blend the second and subsequent lights). Later, people optimized this technique to only require a single pass by having n different shaders that worked for n different lights: if an object was lit by 5 lights, you'd use the 'Forward5Lights' shader. Today, you can load thousands of lights into a cbuffer/texture/etc., and then use dynamic branching in your shader to loop through each object's required lights (so we're back to one shader and one pass).
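The equivalence between the two approaches can be sketched outside of shader code. This toy Python model (a hypothetical inverse-square point light, no normals or materials) just illustrates that additively blending one pass per light accumulates the same sum a single-pass light loop computes:

```python
def light_contribution(surface_pos, light_pos, intensity):
    """Toy diffuse contribution of one light: inverse-square falloff only."""
    dist_sq = sum((s - l) ** 2 for s, l in zip(surface_pos, light_pos)) or 1e-6
    return intensity / dist_sq

def shade_multipass(surface_pos, lights):
    """Old approach: one 'draw call' per light, additively blended."""
    framebuffer = 0.0
    for light_pos, intensity in lights:
        framebuffer += light_contribution(surface_pos, light_pos, intensity)
    return framebuffer

def shade_single_pass(surface_pos, lights):
    """Modern approach: one draw, loop over all lights in the 'pixel shader'."""
    return sum(light_contribution(surface_pos, lp, i) for lp, i in lights)
```

Because addition is associative, both produce the same result; the win of the single-pass loop is avoiding the repeated vertex work and framebuffer traffic, not a different answer.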


Does this mean that deferred shading is now obsolete in DirectX 11? EDIT: What I mean is: if it was so easy, why do modern game engines use complex G-Buffer techniques?

No, deferred shading is still very efficient when you've got high light counts, especially the tiled variants. However, there are still people working on new variants of forward shading; it lost popularity to deferred, but never truly became obsolete. Recently, people have taken the ideas from tiled deferred shading and created tiled forward shading too. Different scenes will perform differently on different kinds of rendering pipelines, so the best shading pipeline design will change from game to game. Also, it's not as black and white as it used to be: many games are somewhere in between forward and deferred.
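The tile-based light culling that both tiled variants share is easy to sketch. This Python toy simplifies it to 2D: lights are screen-space circles tested against tile rectangles (a real compute-shader implementation tests light volumes against per-tile, depth-bounded frusta):

```python
def build_tile_light_lists(screen_w, screen_h, tile_size, lights):
    """For each screen tile, list the indices of lights whose bounding
    circle (x, y, radius) overlaps the tile. 2D simplification of the
    per-tile culling a tiled renderer does in a compute shader."""
    tiles = {}
    for ty in range(0, screen_h, tile_size):
        for tx in range(0, screen_w, tile_size):
            tile_lights = []
            for i, (lx, ly, radius) in enumerate(lights):
                # closest point on the tile rectangle to the light centre
                cx = min(max(lx, tx), tx + tile_size)
                cy = min(max(ly, ty), ty + tile_size)
                if (lx - cx) ** 2 + (ly - cy) ** 2 <= radius ** 2:
                    tile_lights.append(i)
            tiles[(tx, ty)] = tile_lights
    return tiles
```

The shading pass (deferred or forward) then only loops over each tile's short list instead of every light in the scene, which is where the efficiency at high light counts comes from.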


Irradiance volumes (as in Tatarchuk's presentation "Irradiance Volumes for Games") are simply a light-probe technique, like the one in the codeflow.org post linked previously.
Crytek mention that irradiance is used in conjunction with LPV. Don't confuse the two methods, because they are radically different. In their first SIGGRAPH presentation, Crytek mention that they make the two work together because it is a bad idea to inject sky radiance into the LPV. First, you would need 55 propagation steps to make the radiance flow all the way down; second, it would disrupt the flux of other lights because of the poor 2-band SH representation, and it would interfere with itself, since sky radiance is hemispherical and has to be injected from 5 faces of the LPV; third, a volume full of flux makes the volume's limits very noticeable; and lastly, since this technique is full of leaks, it is better to keep the flux to a minimum.
So they use a classic irradiance dome for the sky's contribution. They don't say how they handle occlusion of the sky's radiance, and I suspect they don't; that is why it is nowhere mentioned in their second paper.
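As a rough illustration of the light-probe idea behind irradiance volumes: probes are stored on a regular grid and blended at shading time. This Python sketch interpolates one scalar per probe; a real implementation would store a set of SH coefficients per probe and evaluate them against the surface normal:

```python
import math

def sample_irradiance_volume(probes, pos):
    """Trilinearly interpolate between the 8 probes surrounding `pos`.
    `probes[x][y][z]` holds a scalar irradiance value (a stand-in for
    the SH coefficient set a real probe would store)."""
    x0, y0, z0 = (math.floor(c) for c in pos)
    fx, fy, fz = pos[0] - x0, pos[1] - y0, pos[2] - z0

    def p(i, j, k):
        return probes[x0 + i][y0 + j][z0 + k]

    # blend along x, then y, then z
    c00 = p(0, 0, 0) * (1 - fx) + p(1, 0, 0) * fx
    c10 = p(0, 1, 0) * (1 - fx) + p(1, 1, 0) * fx
    c01 = p(0, 0, 1) * (1 - fx) + p(1, 0, 1) * fx
    c11 = p(0, 1, 1) * (1 - fx) + p(1, 1, 1) * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz
```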
Battlefield uses Enlighten, which is a radically different approach: it has an 'offline' cost, is less dynamic, and handles occlusion with more difficulty. But LPV is not the holy grail in that domain either; its advantage is its very low cost. A better technique for distant geometric AO would be cone tracing, but it costs too much memory and requires heavy shader model 5 code.

Are you talking about the global illumination from Geomerics' Enlighten? If so, can you tell me some more about this technique?

It's just radiosity performed at run-time, with some of their own particular optimizations. If you read up on classic radiosity techniques you should be able to get a rough idea as to what they're doing. They also have some public presentations on their website.
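To give a rough idea of the classic radiosity being referred to, here is a minimal gathering-style solver in Python. It assumes precomputed form factors `form_factors[i][j]` (the fraction of patch i's incoming energy arriving from patch j) and iterates the standard radiosity equation B_i = E_i + rho_i * sum_j F_ij * B_j to convergence:

```python
def solve_radiosity(emission, reflectance, form_factors, iterations=64):
    """Classic radiosity via repeated gathering (Jacobi iteration):
    B_i = E_i + rho_i * sum_j F_ij * B_j."""
    n = len(emission)
    radiosity = list(emission)  # start from the emitted light only
    for _ in range(iterations):
        radiosity = [
            emission[i] + reflectance[i] *
            sum(form_factors[i][j] * radiosity[j] for j in range(n))
            for i in range(n)
        ]
    return radiosity
```

A runtime system like Enlighten is, conceptually, solving this kind of linear system against precomputed visibility data; the engineering is in making the solve incremental and fast, not in changing the underlying equation.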

By the way, I'm pretty sure that BF3 doesn't even use Enlighten at runtime. They don't appear to have any dynamic GI, so they probably just use Enlighten to (very quickly) pre-bake GI for their lightmaps.


This. That's why you don't see, for example, a dynamic day/night cycle in BF3. It's all pre-baked and tuned per-scene by artists.


Voxel cone tracing is viable on newer hardware, and besides, there's a lot of optimization room left. Compressed 3D textures could help a lot with the memory problem. Speaking of voxel cone tracing, it can also handle direct lighting well enough if you're using it already. Not only does it handle occlusion, but you also get a full image-based lighting solution if you want one.

The real problems with voxel cone tracing are other things: the constant re-rasterization (though again, pre-computed voxels in a 3D texture could help a lot), thin geometry not working well, and how you'd handle things like grass and trees still needing to be worked out; not to mention the cone tracing itself is fairly expensive.
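The core cone-tracing loop itself is simple to sketch. This 1D Python toy (a stand-in for the 3D mip-mapped voxel texture a real implementation pre-filters) builds an occupancy mip chain, then marches a cone, sampling coarser mips as the cone widens and accumulating occlusion front-to-back:

```python
def build_mips(occupancy):
    """Pre-filter a 1D occupancy row into mips by averaging pairs
    (stand-in for pre-filtering a 3D voxel texture's mip chain)."""
    mips = [occupancy]
    while len(mips[-1]) > 1:
        prev = mips[-1]
        mips.append([(prev[2 * i] + prev[2 * i + 1]) / 2
                     for i in range(len(prev) // 2)])
    return mips

def cone_trace_occlusion(mips, start, direction, aperture, max_dist):
    """March along the cone; pick a mip level matching the cone's current
    radius and accumulate occlusion front-to-back until opaque."""
    occlusion, t = 0.0, 1.0
    while t < max_dist and occlusion < 1.0:
        radius = max(aperture * t, 1.0)               # cone widens with distance
        level = min(int(radius).bit_length() - 1, len(mips) - 1)
        idx = int((start + direction * t) / (1 << level))
        if 0 <= idx < len(mips[level]):
            occlusion += (1.0 - occlusion) * mips[level][idx]
        t += radius                                   # step grows with the cone
    return min(occlusion, 1.0)
```

The step size growing with the cone radius is what makes a single cone so much cheaper than many rays; it is also exactly where the leaking and over-darkening artefacts discussed here come from, since wide samples mix occupied and empty space.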

As for Battlefield, the original plan (as far as I know) was for dynamic GI. Unfortunately, Enlighten's solution for dynamic content isn't terribly great, nor nearly fast enough to run on the 360/PS3. Eventually they scaled it back to PC only, and then dropped it altogether.

It is true that cone tracing has an open road of improvements and derivations ahead of it. We are already seeing people on this forum trying to implement a version in a regular grid (cabaleira). Of course, the issue of thin objects remains (maybe solvable by tessellating/cutting objects?), as do light leaks; but apart from view-bound solutions like path tracing, I do not know of any GI technique that doesn't have difficulties with leaks. Final gathering is full of them: empirical tricks have to be used to clip photon samples, and other horrors, to avoid leaks. In LPV, Crytek uses an empirical 'central difference' damping which doesn't work perfectly, because it darkens areas that are receivers on one side and emitters on the other (black-hole effect). I wonder what kinds of artefacts cone tracing will give; judging from the last scene in the video (plates and trees in some kind of atrium restaurant), I see horrible dark splotches on the arcades and near the ceilings.
And the older techniques are all just worse:
- pure SSGI (RSM): not even usable with large discs, and what about occlusion?
- radiance hints: horribly low information
- instant radiosity: mentioned above (BF3)
- sparse voxel GI: same issues as LPV (noise in occlusions, etc.)
- imperfect shadow maps: horrible to implement, dirty result, slow...

Fortunately, with all the room left for improvement, researchers won't be out of a job anytime soon.

Well, I think MJP was not using "just" in that sense. It was in the sense that "there is nothing more to it": the method already existed, so the steps are known. That kind of "just". Basically, if you know radiosity, you don't need to read the "instant radiosity" paper. Roughly, anyway.

What I meant is that they aren't doing anything that's radically new at a fundamental level... radiosity has been around for a very long time, and a lot of research has been devoted to optimizing it. Most of what Enlighten offers is a framework for processing scenes and handling the data in a way that's optimized for their algorithms. I'm sure they've devoted a lot of time to optimizing the radiosity solve, but I don't think that's really necessary for understanding what they're doing at a higher level.

What they're doing isn't magic... their techniques only work on static geometry, so a lot of the heavy lifting can be performed in a pre-process. They also require you to work with proxy geometry with a limited number of patches, which limits quality. They also limit the influence of patches to zones, and only update a certain number of zones at a time (you can see this in any video of their algorithm where the lighting changes quickly).

I don't mean to sound like I'm trivializing their tech or saying it's "bad" in any way (I'm actually a big fan of their work); my point was just that their techniques stem from an area of graphics that's been around for a long time and is well documented.