Lighting

I started this little rendering sandbox about a year ago, and this blog at the same time to share my progress and findings. One year later, I now have what looks like a basic modern rendering sandbox integrating the subjects I studied. I have experimented with a lot of concepts since my last post; I will quickly summarize all of them and share the references I used for my implementation.

Reflections

I implemented reflections using parallax-corrected prefiltered cubemaps. With this system, you place probes and optional associated geometry volumes in the scene.

Probes and associated geometry volumes.

The probe is the point from which the 6 scene captures are taken, and the geometry volume is the geometric approximation of the scene used to perform the optional parallax correction (limited to an AABB in my implementation). In my implementation, each mesh in the scene is manually associated with one probe.

The idea is then to prefilter the cubemap for each level of roughness and store the results in successive mipmaps. For the reflections to match the analytical specular lighting model, we need to convolve the cubemap with the same shape; this model is Cook-Torrance with GGX for the normal distribution in my case. To understand this integration in the discrete domain, we can consider each cubemap texel as a light source. The brute-force method would then be to iterate over all of them and compute their contribution according to the specular BRDF. To reduce the number of texels, we instead only consider the ones that have a significant impact on the result, using importance sampling. I learned about importance sampling in this GPU Gems 3 chapter.

One problem is that the Cook-Torrance lobe changes shape according to roughness and the viewing angle, which would add another dimension to our prefiltered cubemap. To visualize this shape, it is very instructive to use the Disney BRDF Explorer. Fortunately, a clever approximation relying on a split sum and a 2D LUT texture exists. This whole process is explained in great detail in Real Shading in Unreal Engine 4 and Moving Frostbite to Physically Based Rendering. The biggest trade-off of this approximation is that the integration is done with the viewing angle aligned to the normal (V = N); because of this, we lose the stretching of the reflection at grazing angles. Finally, I also don't have the indirect lighting from the sky in my probe captures. To fix this, I should do the capture pass twice: a first pass to generate the irradiance map and a second with that contribution applied.
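The parallax correction itself boils down to a ray-AABB intersection: intersect the reflection ray with the proxy box, then aim the cubemap lookup from the probe toward the hit point. A minimal sketch using the classic slab method (the function and variable names are mine for illustration, not from the sandbox's code):

```python
import numpy as np

def parallax_corrected_reflection(frag_pos, reflect_dir, box_min, box_max, probe_pos):
    """Correct a reflection vector against the probe's AABB proxy geometry.

    Intersect the reflection ray with the AABB (slab method), then build the
    cubemap lookup direction from the probe position to the intersection point.
    """
    # Slab method: ray-parameter distances to each pair of box planes.
    inv_dir = 1.0 / reflect_dir             # assumes no zero components
    t1 = (box_min - frag_pos) * inv_dir
    t2 = (box_max - frag_pos) * inv_dir
    t_far = np.min(np.maximum(t1, t2))      # nearest exit point of the box
    hit = frag_pos + t_far * reflect_dir    # intersection with the proxy AABB
    lookup = hit - probe_pos                # direction used to sample the cubemap
    return lookup / np.linalg.norm(lookup)
```

When the fragment sits at the probe position the corrected direction degenerates to the plain reflection vector, which is a handy sanity check.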

Filtered cubemap with increasing roughness stored in mipmaps and the 2D LUT texture.

Result on spheres of increasing roughness.

I precomputed all of this using compute shaders and bound a key to force a re-bake when needed. For now I can only say that it doesn't look too bad, since I don't yet have a system to compare my results against a reference. To be honest, I'm not sure I will go that far, since my primary goal was to get a better understanding of these concepts. This is how it looks in action.

Indirect lighting

I rely on 2 systems for the indirect lighting contribution: irradiance environment maps and single-bounce indirect illumination.

For the irradiance map, I simply reuse the 4×4 mipmap of the probes used for reflections, since at this level the convolution is almost 180°. I use this for outdoor scenes. This is not exact and I don't precompute a per-fragment sky visibility factor, but it is still better than no indirect lighting.

For indoor (and outdoor) scenes, I pre-compute a per-vertex indirect light contribution using something along the lines of reflective shadow maps (RSM). The idea is simple: each texel of the shadow map is considered as a new light source that you re-inject for the second bounce of light. To reconstruct light from it, the RSM needs to contain the normal and the color in addition to the depth. Computing this for all texels would be too expensive, so I uniformly select a subset of texels using the Hammersley distribution. I iterate over all the VBOs in the scene and do the lighting computation per vertex in a compute shader. Again, I bound a key to trigger a re-bake. I selected this approach because it is simple, bakes really quickly, and I have the feeling that I could derive a stable realtime version from it.

At the final lighting stage, both indirect light contributions are multiplied by the SSAO factor. At this point I'm pretty sure I break all energy conservation rules with these indirect lighting systems and that pixels are about to explode, but this is what I have for now.
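The Hammersley set used to pick the RSM texels is cheap to generate: the first coordinate is just i/N and the second is the base-2 radical inverse of i (a bit-reversed fraction). A small sketch, following the common bit-twiddling formulation:

```python
def hammersley(i, n):
    """i-th point of an n-point 2D Hammersley set in [0, 1)^2.

    The second coordinate is the radical inverse of i in base 2
    (the bits of i reversed, read as a binary fraction). Used here to
    pick a well-distributed subset of shadow-map texels to treat as
    virtual point lights.
    """
    bits = (i << 16 | i >> 16) & 0xFFFFFFFF
    bits = ((bits & 0x55555555) << 1) | ((bits & 0xAAAAAAAA) >> 1)
    bits = ((bits & 0x33333333) << 2) | ((bits & 0xCCCCCCCC) >> 2)
    bits = ((bits & 0x0F0F0F0F) << 4) | ((bits & 0xF0F0F0F0) >> 4)
    bits = ((bits & 0x00FF00FF) << 8) | ((bits & 0xFF00FF00) >> 8)
    return (i / n, bits * 2.3283064365386963e-10)  # bits / 2^32
```

Scaling each point by the shadow-map resolution gives the texel coordinates to re-inject as bounce lights.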

Indoor scene without (left) and with (right) per-vertex radiosity pre-computed using RSM.

Volumetric light scattering

This one was in the instant gratification category: a simple system reusing shadow maps that can bring convincing effects. Lighting participating media in the air is a little bit different from lighting an opaque surface. You have a lighting model that determines how much of the light is scattered toward the camera, depending on the light direction and a forward scattering factor. The model I used is Mie scattering approximated with Henyey-Greenstein. Since we are dealing with a non-opaque medium, we need to accumulate the result of each fragment between the shaded fragment and the camera. To do so, we ray march from the fragment position to the camera position and accumulate the result, looking into the shadow maps at each step to see if the light is blocked. The number of steps determines the fidelity of the effect; about 100 steps are needed to obtain good quality. This can be optimized by taking random steps (about 16) followed by a blur, and by performing the ray marching at half resolution followed by a bilateral upsampling. I gathered information on this subject in GPU Pro 5 – Volumetric Light Effects in Killzone and this excellent blog post.
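The core of the effect can be sketched in a few lines: a Henyey-Greenstein phase term plus a shadowed march along the view ray. This is a simplified CPU-side sketch under a directional-light assumption; `lit(p)` stands in for the shadow-map visibility lookup, and all names are mine for illustration:

```python
import math

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function: how much light is scattered
    toward the viewer given the cosine of the light/view angle.
    g in (-1, 1); g > 0 favors forward scattering, g = 0 is isotropic."""
    denom = 1.0 + g * g - 2.0 * g * cos_theta
    return (1.0 - g * g) / (4.0 * math.pi * denom ** 1.5)

def scattered_light(frag_pos, cam_pos, light_dir, lit, g=0.6, steps=100):
    """March from the shaded fragment toward the camera, accumulating
    in-scattered light at each step that is not shadowed."""
    ray = [c - f for c, f in zip(cam_pos, frag_pos)]
    length = math.sqrt(sum(r * r for r in ray))
    view = [r / length for r in ray]
    # For a directional light the phase term is constant along the ray.
    cos_theta = sum(-v * l for v, l in zip(view, light_dir))
    phase = henyey_greenstein(cos_theta, g)
    step_len = length / steps
    accum = 0.0
    for i in range(steps):
        t = (i + 0.5) / steps  # sample at the middle of each segment
        p = [f + t * r for f, r in zip(frag_pos, ray)]
        if lit(p):  # shadow-map test in the real renderer
            accum += phase * step_len
    return accum
```

A real implementation would also fold in extinction and the light color, but this captures the accumulation loop described above.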

Even a less contrasted outdoor light scattering with a slight tint on the sun color brings a nice contribution.

Outdoor scene without (left) and with (right) volumetric light scattering.

I’m now thinking about releasing the source of this small sandbox; I guess it could be of some value to someone, and it would open the door to comments and improvements. But first I will need to find time to prepare the source to make it as clear as possible, so that should be the topic of the next post.

Theory is one thing, but there is nothing like practice and execution. It was time for me to spend some time experimenting with more physically based BRDF models to get a better feel for how they work. Just for fun I started with Cook-Torrance using the Beckmann distribution, but like everybody else I ended up using the GGX distribution for the microfacet slope distribution (D).

Some takeaways from this experiment:

The application of the Fresnel term and proper energy conservation really have an impressive visual impact.

Even if the instruction count is higher than the good old Blinn-Phong, the performance hit is not that high on modern GPUs.

Authoring and calibrating textures is hard; I ended up using DDO to have a good reference.

You really need some environment reflections (which I don’t have for now) for metals to look good, since they don’t have a diffuse component.

Not really specific to the BRDF model, but the high-contrast specular highlights don’t come out too well on the Rift DK1, revealing the pixel grid even more. I can’t wait to see how it will perform on the DK2.

Goodbye Blinn-Phong.

The next step will be to add some IBL to handle indirect lighting and radiance properly. These more realistic materials combined with the lack of irradiance really give the impression that we are on the moon.

I guess I’m a typical Nintendo-generation game developer. Video games have been part of my life forever, and I started rolling my own when I was around 13. Since then, I have always been involved at various levels in game projects. Over the years, the insane amount of time spent playing games has shifted toward an insane amount of time working on game-related technologies.