Rendering Terrain Part 13 – Basic Shadow Maps, The Not Quite Right Edition

Well, I solved the problem with Shadow Mapping that I mentioned last post. Shadows now work. They’re not quite right, but they work. I’ll talk about the basic Shadow Mapping algorithm today and what’s wrong with my implementation currently. Hopefully I’ll be able to get the problems fixed over the weekend, then I can talk about those fixes next post.
Most of the work presented here is based heavily on the chapter on Shadow Mapping in Introduction to 3D Game Programming with DirectX 12. I actually used the DirectX 11 version for most of it, but, as I mentioned last week, I wound up buying the new version when I was stuck. The chapters are pretty much identical, other than the API-specific code. I also studied the Microsoft sample code; the MiniEngine and Multithreading projects both have code for Shadow Maps.

The Algorithm

Shadow Mapping works by rendering the scene from the point of view of your light source, saving the depth of each pixel, and then using the resulting map as a texture we can sample against on a second pass where we render the scene normally. If you have multiple light sources, you need a separate pass and resulting texture for each light source.
There are tons of tutorials out there for Shadow Mapping, so I don’t want to spend too much time on the algorithm itself. Let’s try to focus on the DirectX 12 implementation.

The first thing we know is that we need a separate rendering pass to create the Shadow Map. This means a new Root Signature and Pipeline State Object. There isn’t much that needs to change. I created both from scratch, but most of the examples I looked at actually try to reuse the same Root Signature and copy most of the PSO settings from the normal render pass. I could have done this as well and just set any buffers I didn’t need to NULL. In the shadow pass, that would literally just be the shadow map buffer itself; everything else is still needed.
As for what is different, there are a couple of tweaks to the PSO.

As long as we’re not rendering anything transparent, we don’t need to load a pixel shader at all. The algorithm works by only filling the Depth Buffer, which happens in the Rasterizer stage before the pixel shader is called. From what I’ve read, GPUs are optimized for this sort of Depth Buffer only rendering, so let’s hope we don’t kill our frame rate!

We need to turn off the render targets. Since we’re not loading a pixel shader, we’re not technically ‘rendering’ anything.

I’ve seen many examples and tutorials mention that changing your Cull Mode to cull front facing polygons instead of back facing will help with some common issues with Shadow Maps, like shadow acne, so I changed this variable.

Related to the shadow acne issue, there are also values for DepthBias, DepthBiasClamp, and SlopeScaledDepthBias. These values let you add a bias to the depth values written while you’re building your Shadow Map. The idea is that this limits shadow acne by nudging the stored depths so surfaces don’t incorrectly shadow themselves because of the limited resolution of the Shadow Map. You can read what Microsoft has to say about Depth Bias. I had a hard time finding much about what values to actually use. Microsoft’s examples used -100, 0, and -1.5 respectively. In the DirectX 12 book I linked to previously, they use 100000, 0, and 1. Both sets of values either didn’t work at all or didn’t improve anything in my case. I’m currently using 100, 0, and 1.5 because at least the shadows work, even if they aren’t quite right yet.
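Collecting the PSO tweaks above into one sketch (shadowPsoDesc, basePsoDesc, the shader blobs, and the depth-stencil format are my assumptions about the project’s code, not its actual names):

```cpp
// Shadow-pass PSO, derived from the main pass's descriptor.
D3D12_GRAPHICS_PIPELINE_STATE_DESC shadowPsoDesc = basePsoDesc;

// No pixel shader: we only want the rasterizer to fill the depth buffer.
shadowPsoDesc.PS = { nullptr, 0 };

// No render targets; the depth-stencil buffer IS the shadow map.
shadowPsoDesc.NumRenderTargets = 0;
shadowPsoDesc.RTVFormats[0] = DXGI_FORMAT_UNKNOWN;
shadowPsoDesc.DSVFormat = DXGI_FORMAT_D24_UNORM_S8_UINT; // assumed format

// Cull front faces to fight shadow acne.
shadowPsoDesc.RasterizerState.CullMode = D3D12_CULL_MODE_FRONT;

// Bias values from the text; these are scene-dependent.
shadowPsoDesc.RasterizerState.DepthBias = 100;
shadowPsoDesc.RasterizerState.DepthBiasClamp = 0.0f;
shadowPsoDesc.RasterizerState.SlopeScaledDepthBias = 1.5f;
```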

As far as what shaders I’m compiling, I use the exact same vertex shader as it is just a pass through. I could have used the same hull shader, but I decided to make a separate one that doesn’t do frustum culling, as our entire scene has to be in the frustum of the light at all times anyway. This may change later when I start working on Cascaded Shadow Maps.
My domain shader is nearly identical, but I need to multiply my vertex positions by a different view/projection matrix. I could handle this in two ways.

I could create a new constant buffer for the shadow map pass. It would be pretty much identical, but I would store a different value in the view/projection matrix slot. I could then use the same domain shader and just bind the new buffer.

I could add a variable to the existing constant buffer for the light’s view/projection matrix and create a new domain shader that multiplies by that variable instead of the regular view/projection matrix.
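A minimal HLSL sketch of that second option (the constant-buffer layout and every name here are guesses, not the project’s actual code):

```hlsl
// Option 2: one shared constant buffer with an extra slot for the light's
// view/projection matrix.
cbuffer PerFrameConstants : register(b0)
{
    float4x4 gViewProj;        // camera view * projection (normal pass)
    float4x4 gShadowViewProj;  // light view * projection (shadow pass)
    // ...everything else the buffer already held...
};

// In the shadow pass's domain shader, the only real change is which
// matrix the world-space position is multiplied by:
dout.position = mul(float4(worldPos, 1.0f), gShadowViewProj);
```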

Obviously the first is the option that makes more sense, but I went with the second because I just didn’t get it at the time. This actually wound up being the source of all my troubles getting Shadow Mapping working. Mostly. I actually thought I could just use the existing view/projection matrix slot for the light in the shadow pass and then update it to the camera for the normal pass. The problem with this is that both passes are using the same buffer in video memory, not separate copies like I assumed. So I was overwriting the value before the shadow pass had a chance to use it.
I finally figured that out because when I went into the graphics analyzer in Visual Studio and captured a set of frames, it would slow the frame rate down enough that the shadows would actually show up! Talk about confusing. So rather than make an entirely new buffer, I just quickly added a variable to the existing one and made a copy of the domain shader that used that variable.
I’m definitely going to go back and fix this. It’s an ugly way of doing it, and possibly less efficient, as well.

It took me a while to figure out what format to use for the depth stencil buffer. The depth buffer literally is the shadow map. We create a buffer for it, like a normal depth buffer. That basically means that we create a default buffer but no upload buffer. Then we make a Depth Stencil View and a Shader Resource View for it.
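A sketch of that setup, assuming the common R24G8_TYPELESS route, where the typeless resource lets the DSV and SRV each reinterpret the same bits (all names and the 4096 size are placeholders; error handling omitted):

```cpp
// Shadow map resource: a depth buffer we can also sample.
D3D12_RESOURCE_DESC texDesc = {};
texDesc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
texDesc.Width = 4096;
texDesc.Height = 4096;
texDesc.DepthOrArraySize = 1;
texDesc.MipLevels = 1;
texDesc.Format = DXGI_FORMAT_R24G8_TYPELESS;
texDesc.SampleDesc = { 1, 0 };
texDesc.Flags = D3D12_RESOURCE_FLAG_ALLOW_DEPTH_STENCIL;

D3D12_CLEAR_VALUE clearValue = {};
clearValue.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
clearValue.DepthStencil = { 1.0f, 0 };

D3D12_HEAP_PROPERTIES heapProps = {};
heapProps.Type = D3D12_HEAP_TYPE_DEFAULT;   // default buffer, no upload buffer

device->CreateCommittedResource(&heapProps, D3D12_HEAP_FLAG_NONE, &texDesc,
    D3D12_RESOURCE_STATE_DEPTH_WRITE, &clearValue, IID_PPV_ARGS(&shadowMap));

// The DSV writes depth...
D3D12_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
dsvDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
dsvDesc.ViewDimension = D3D12_DSV_DIMENSION_TEXTURE2D;
device->CreateDepthStencilView(shadowMap.Get(), &dsvDesc, dsvHandle);

// ...and the SRV reads the depth bits back as a single red channel.
D3D12_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
srvDesc.ViewDimension = D3D12_SRV_DIMENSION_TEXTURE2D;
srvDesc.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING;
srvDesc.Texture2D.MipLevels = 1;
device->CreateShaderResourceView(shadowMap.Get(), &srvDesc, srvHandle);
```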

At first, none of the formats I tried seemed to work. That was before I fixed the bug with the view/projection matrix, though I did get something different once I switched to the values above, which came from the DirectX 12 book. I’ll go back and try some of the other depth-stencil-related formats. They should all work in theory; some are just higher or lower precision. The one I settled on has memory for both a depth buffer and a stencil buffer, but we currently only use the depth portion (I think).

Since this is a different render pass, it probably has different dimensions than your default pass (it doesn’t have to, but a smaller depth buffer is cheaper and a larger one is better quality), so we need to set up a viewport and scissor rectangle for this pass.
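Setting those up is just a couple of calls (assuming a 4096×4096 shadow map; names are placeholders):

```cpp
// Viewport and scissor sized to the shadow map, not the back buffer.
D3D12_VIEWPORT shadowViewport = { 0.0f, 0.0f, 4096.0f, 4096.0f, 0.0f, 1.0f };
D3D12_RECT shadowScissor = { 0, 0, 4096, 4096 };
commandList->RSSetViewports(1, &shadowViewport);
commandList->RSSetScissorRects(1, &shadowScissor);
```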

Once we’ve set everything up, the render pass is pretty much the same as the normal pass. Set the render targets, clear the depth stencil view, set the root signature and PSO, set all of your resources, and draw the objects in the scene. We also need to set resource transition barriers for the shadow map buffer to ensure that it is configured for writing while we’re drawing to it, then switched to a readable state before we try to sample it in the normal pass.
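The barrier pair looks roughly like this (the shadow map is sampled in the normal pass’s pixel shader; shadowMap and commandList are placeholder names):

```cpp
// Before the shadow pass: make the shadow map writable as a depth buffer.
D3D12_RESOURCE_BARRIER toWrite = {};
toWrite.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
toWrite.Transition.pResource = shadowMap.Get();
toWrite.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
toWrite.Transition.StateBefore = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
toWrite.Transition.StateAfter = D3D12_RESOURCE_STATE_DEPTH_WRITE;
commandList->ResourceBarrier(1, &toWrite);

// ...record the shadow pass here...

// Before the normal pass: switch it to a readable shader resource.
D3D12_RESOURCE_BARRIER toRead = toWrite;
toRead.Transition.StateBefore = D3D12_RESOURCE_STATE_DEPTH_WRITE;
toRead.Transition.StateAfter = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
commandList->ResourceBarrier(1, &toRead);
```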

You’ll notice that I hard-coded a value there to pass to the GetShadowViewProjectionMatrixTransposed() method. That value describes a bounding sphere that contains the entire terrain. As you can see in the following image, there’s a lot of dead space around the terrain currently. We already have code to generate an Axis Aligned Bounding Box, so I’m going to try to rework the code in that method to use an AABB instead of the sphere.

This image was actually taken while the light was directly below the terrain. Because of Front-Face Culling, not much is visible from the top.

The GetShadowViewProjectionMatrixTransposed() method is very similar to the method we created for the camera way back in Part 4, only now we’re creating a matrix from the perspective of our Sun. Since the Sun is represented by a directional light, we’re going to base our view frustum on the bounding sphere, so that the terrain is centered, and then we’ll use orthographic projection so we don’t get any funky perspective warping in the shadows.
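Here’s a tiny self-contained sketch of the projection half of that. Using plain structs instead of DirectXMath is my choice for illustration; the numbers match what XMMatrixOrthographicLH would build for a frustum that exactly wraps the sphere, with the light sitting on the sphere’s surface looking through its center.

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[4][4]; };
struct Sphere { Vec3 center; float radius; };  // bounds the whole terrain

// Left-handed orthographic projection, depth mapped to [0,1] (D3D-style),
// row-major with the row-vector convention DirectXMath uses.
Mat4 OrthoForSphere(const Sphere& s)
{
    float width  = 2.0f * s.radius;  // frustum spans the sphere's diameter
    float height = 2.0f * s.radius;
    float zn = 0.0f;                 // near plane at the light
    float zf = 2.0f * s.radius;      // far plane past the whole sphere

    Mat4 p = {};
    p.m[0][0] = 2.0f / width;
    p.m[1][1] = 2.0f / height;
    p.m[2][2] = 1.0f / (zf - zn);
    p.m[3][2] = -zn / (zf - zn);
    p.m[3][3] = 1.0f;
    return p;
}
```

Switching from the bounding sphere to the AABB mentioned above would just mean feeding the box’s extents in as width, height, and depth instead of the diameter.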

We also have a second, nearly identical method called GetShadowTransformMatrixTransposed(), which performs the same calculations, but adds a transformation to Texture space. This matrix is used in the normal draw pass to transform our vertices from world space to the shadow map’s texture space, so we can look up whether we’re in shadow. I’ll include just the extra code. A smarter implementation would create both matrices in one function that is run on update, and then we’d just have Get() methods. I’ll fix this soon.
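That extra texture-space step is just one fixed matrix appended to the view/projection. A small self-contained sketch (the names are mine; the layout is the standard NDC-to-texture transform, row-vector convention as in DirectXMath):

```cpp
#include <array>
#include <cassert>

using Mat4 = std::array<std::array<float, 4>, 4>;

// NDC x,y in [-1,1] (y up) map to texture u,v in [0,1] (v down), so we
// scale by one half, flip v, and shift the origin to the top-left corner.
const Mat4 kNdcToTexture = {{
    { 0.5f,  0.0f, 0.0f, 0.0f },
    { 0.0f, -0.5f, 0.0f, 0.0f },
    { 0.0f,  0.0f, 1.0f, 0.0f },
    { 0.5f,  0.5f, 0.0f, 1.0f },
}};

// Apply the matrix to an NDC point (only x and y matter here).
std::array<float, 2> NdcToUv(float x, float y)
{
    float u = x * kNdcToTexture[0][0] + kNdcToTexture[3][0];
    float v = y * kNdcToTexture[1][1] + kNdcToTexture[3][1];
    return { u, v };
}
```

The shadow transform is then just lightView * lightProj * this matrix, which is why computing both matrices in one update function, as suggested above, is the cleaner design.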

As I said above, there isn’t really any necessary change to the shaders. I made some because I did a pretty poor job on the implementation. Once I fix my code, I’ll probably be able to delete the extra shaders I added.

As for the final render pass, we add the new SRV we created for the Shadow Map and add the Transform matrix we made above. We also need to add a new sampler for the shadow map. Rather than just sampling the map, we want to do a comparison of the depth value stored against the depth value we calculate for the current point, which is found by multiplying the world position by the transform matrix. This step is done in the domain shader on the vertex level. We then pass it to the pixel shader as a texture coordinate that will be interpolated across the polygon. That multiplication and new output is the only thing new about our domain shader.
We do add a new function to our pixel shader, taking advantage of the fact that there is a built-in function for doing Percentage Closer Filtering. The built-in function automatically samples a 2×2 grid of the shadow map using the comparison sampler we just set up and linearly interpolates the results. Our function then takes a 3×3 grid of these samples and averages them. The idea here is to help with shadow acne, and to create softer shadows.
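The function closely follows the book’s CalcShadowFactor; a sketch, with the register slots as placeholders:

```hlsl
Texture2D gShadowMap : register(t3);            // placeholder slot
SamplerComparisonState gsamShadow : register(s1); // placeholder slot

float CalcShadowFactor(float4 shadowPosH)
{
    shadowPosH.xyz /= shadowPosH.w;   // complete the projection
    float depth = shadowPosH.z;       // depth of this point in light space

    uint width, height, numMips;
    gShadowMap.GetDimensions(0, width, height, numMips);
    float dx = 1.0f / (float)width;   // size of one shadow map texel

    const float2 offsets[9] =
    {
        float2(-dx, -dx), float2(0.0f, -dx), float2(dx, -dx),
        float2(-dx, 0.0f), float2(0.0f, 0.0f), float2(dx, 0.0f),
        float2(-dx, dx), float2(0.0f, dx), float2(dx, dx)
    };

    float percentLit = 0.0f;
    [unroll]
    for (int i = 0; i < 9; ++i)
    {
        // SampleCmpLevelZero is the built-in PCF: it compares depth against
        // a 2x2 footprint and bilinearly filters the pass/fail results.
        percentLit += gShadowMap.SampleCmpLevelZero(gsamShadow,
            shadowPosH.xy + offsets[i], depth).r;
    }
    return percentLit / 9.0f;         // average the 3x3 grid
}
```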

We can then multiply our diffuse and specular components by the shadow factor to add in the shadow component. I also added a quick check so that if the light is behind the point on the surface, we’d automatically put the point in shadow. I just added this because the Sun was lighting the terrain a bit after it had set.
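Putting the shadow factor to use looks roughly like this (variable names are placeholders for whatever the lighting code already has in scope):

```hlsl
float shadowFactor = CalcShadowFactor(pin.ShadowPosH);

// If the light is behind the surface, force the point into shadow. Added
// because the Sun kept lighting the terrain a bit after it had set.
if (dot(-gLightDir, normal) < 0.0f)
    shadowFactor = 0.0f;

// Only the direct terms are shadowed; ambient light is unaffected.
float3 litColor = ambient + shadowFactor * (diffuse + specular);
```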

So that’s pretty much it for the basic Shadow Mapping algorithm. Things look pretty decent, at least from a distance.

Problems

There are, however, a few problems. The first major one is how the shadows look up close.
Pretty bad. First off, the shadow is too soft. You can barely even see it, but there is, in fact, a shadow cast across the flat area. You can see the jagged, badly aliased line across the ground where the shadow SHOULD be connected to the dark area, which is a steep wall. Pretty bad Peter Panning happening there. I played around a bit with the Depth Bias, but didn’t know what kinds of values to use to get rid of it. I also tried Normal Offset Bias, but didn’t have any better luck. I’m pretty certain the problem here is that the resolution of the shadow map is just too low. Even a 4096×4096 shadow map, which is ridiculously big, only works out to 4 texels per 1 m² on the 2048×2048 height map in these pictures. The same size shadow map on a 512×512 height map actually works pretty well, but then the shadow map is 64 times the size of the height map. A 1024×1024 shadow map actually works fairly well, too. Not quite as smooth, obviously, but better in terms of performance and memory.
There’s still an issue with areas within the shadow not being dark. I think it is also related to the resolution, but it may not be entirely that. The following video demonstrates this issue. In the second half of the clip, you’ll see the shadow sweep by. Once the umbra passes, part of the terrain will be slightly lit again until the Sun dips below the level of that point. Since the Sun is still behind the terrain at that point, the region should be completely dark.

I was thinking these problems were all related to resolution, so I squeezed the light’s frustum down to a sphere of radius 100, compared to one of radius 1024 for the 2048×2048 heightmap or 256 for the 512×512 height map. I couldn’t see any issue at all within the area covered by that radius. The shadows looked perfect. Mind you, that was also a 4096×4096 shadow map. But if I implement Cascaded Shadow Maps and perhaps find some Depth Bias values that actually work, I think I can get these problems licked.

Those aren’t the only problems, though. I mentioned last post that my Sun rises and sets aligned with the y axis. Since the terrain is also aligned with the x and y axes, I would think the shadows would be aligned as well. You can see in the above video that they aren’t.
The following God’s eye view video shows the problem in a more pronounced way. It also appears that the problem is opposite at dawn and dusk.

I’m really not sure what’s causing this. I didn’t notice anything odd about the lighting without the shadows, so I’m fairly certain the Sun’s direction vector is updating correctly. I decided to render the shadow map directly as it is animating, so I can see what that is doing.

As I said before, there’s a lot of wasted space. Getting the light frustum tighter to the terrain would help with the resolution quite a bit.
I’m not sure why it rotates horizontally. I would have thought it would rotate vertically. I apparently have created a frustum with the light on its side…
…I stopped and fixed that as I was writing this post. I had set the up vector to always point along the y axis. I actually needed the cross product of that vector with the lightdir vector to get the correct up vector as the Sun’s direction rotates.
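One way to build a rotation-stable up vector is to cross twice: world-up crossed with the light direction gives a horizontal "right" axis, and crossing back gives an up vector that is always perpendicular to the light direction. The post only mentions the first cross product, so the double-cross construction here is my sketch, not necessarily the code as written; it degenerates when the light points straight up or down, which needs special-casing.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 Cross(Vec3 a, Vec3 b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

Vec3 Normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Up vector for the light's view matrix that stays valid as the Sun rotates.
Vec3 ShadowUp(Vec3 lightDir)
{
    const Vec3 worldUp = { 0.0f, 1.0f, 0.0f };
    Vec3 right = Normalize(Cross(worldUp, lightDir)); // horizontal axis
    return Cross(lightDir, right);                    // perpendicular up
}
```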

In that video above, it seemed to me as if the terrain is getting warped as it spins. I didn’t make a new video with it rotating vertically, but now it feels less like a warping and more like a wobble. I’ll have to work on getting that fixed this weekend.

Performance

Since there are just a couple of (hopefully) minor, non-performance-related issues left to deal with, we may as well talk briefly about performance.
I was pretty disappointed. In Part 11, we were rendering our 2048×2048 height map at 3.8ms per frame, 266fps. Now? With a 4096×4096 shadow map, it took 10.4ms, 94fps. That’s a pretty big shadow map, so I tried a 1024×1024 one. Somehow it was slower, hitting 12.1ms, 82fps. A 512×512 shadow map was just as bad! Clearly, it isn’t the size of the map. It also doesn’t appear to be anything I’ve added to the regular render pass, so it must be the amount of geometry going through the shadow map pass.
We’re sending a mesh of vertices made up of 1 vertex per 4 texels of the height map. We then tessellate that based on the distance to the camera. We linearly interpolate from no tessellation at 512m to a tessellation factor of 64 up to 16m from the eye. You can check that out in Part 9. For the shadow map, I had read that you can sometimes cut down the tessellation factor without really hurting the scene. I tried changing the maximum tessellation factor in the shadow map to 8. It made a huge difference in the frame rate, getting us down below 5ms per frame, but it definitely impacted the quality of the shadows.
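The distance-based factor described above can be sketched as follows. Treating a factor of 1 as "no tessellation" and the falloff as linear in distance are my assumptions; the real hull shader (see Part 9) may interpolate differently.

```cpp
#include <cassert>

// Tessellation factor for a patch: maxFactor at or inside nearD metres from
// the eye, falling linearly to 1 (untessellated) at farD and beyond.
float TessFactor(float distance, float nearD = 16.0f, float farD = 512.0f,
                 float maxFactor = 64.0f)
{
    float t = (distance - nearD) / (farD - nearD); // 0 at near, 1 at far
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    return maxFactor + t * (1.0f - maxFactor);     // lerp maxFactor -> 1
}
```

With this shape, shrinking farD does two things at once: distant patches drop straight to factor 1, and every intermediate factor gets squeezed into a shorter band of distances, which is exactly the saving described below.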
I decided to play a bit more with the tessellation factor. I discovered that in the normal render mode, I could tell no difference between a far distance of 512m, and 256m. That’s a lot of patches that don’t need to be tessellated at all. Plus, because we’re interpolating between near and far values, a shorter far distance means all the in between tessellation factors cover a shorter distance as well.
So switching both the shadow and normal render passes to a closer far-distance value looks exactly the same to me, but saves a ton of work. I was able to turn the shadow tessellation back up to max and still get a frame time of 5.7ms, 175fps.

Hopefully I’ll have the basic Shadow Map implementation working much better by Tuesday. I don’t think I’ll get Cascaded Shadow Maps implemented by then, but we’ll see.