Tuesday, February 12, 2013

Restoring Depth

I've been refactoring some of the drawing code in the Rogue Moon game engine. I had nice bounding-box drawing working, and wanted to add some sort of 'engine trail' effect in the same manner. I got this working, but then decided that both of these needed to move to a 'post process' step.

After all, these were simple primitive drawings that didn't need to be properly lit, shadowed or the like. So there was no point in drawing them during the (comparatively) expensive GBuffer stage.

The only problem was that, since I am using a deferred rendering solution, I'm presented with the scene rendered to a flat image at the end of the GBuffer combining.

Draw GBuffer (color, normals, depth render targets)

Draw Lighting (uses normals and depth to draw light render target)

Draw Shadows (draws light depth, shadow occlusion texture)

Combine (combines color, light and shadows using normals and depth)
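In code, the frame sequence above might be sketched like this (the method names here are illustrative, not the engine's actual API):

```csharp
// A rough sketch of one deferred frame, one method per stage above.
// These method names are hypothetical, for illustration only.
void DrawFrame()
{
    DrawGBuffer();    // writes the color, normal and depth rendertargets
    DrawLighting();   // reads normals + depth, writes the light rendertarget
    DrawShadows();    // writes light depth and the shadow occlusion texture
    Combine();        // composites color, light and shadows into the scene
}
```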

At the end of the final stage we have a finished scene render (this does include the post-effects I'm talking about here, but bear with me):

Anyway, this is a composite of the color, light and shadow information we recorded in previous steps. Since it is a composite, the depth information in the original GBuffer write has been lost.

Why is this important?

Consider the above image. There are lots of spaceships, the ground, and the sky beyond. If you don't take care to handle depth correctly, nearby spaceships might appear partially behind the terrain or trees. Or a closer ship might be drawn before a further one that it overlapped, which would look... odd. There are basically two ways to get this to draw right:

1. Sort every object in the scene based on distance, then draw from furthest to nearest.

2. Draw in whatever order you like, but record depth information when you do.

Actually, some combination of 1 and 2 is best. Method 1 (sorting) is nice and simple, but sorting every object in a scene where many things are moving gets to be a fair amount of work, and it also does nothing to prevent overdraw.

Method 2 involves storing not just the color of each pixel drawn, but also the depth of that pixel, in what is called a depth buffer, or sometimes a Z-buffer. The depth buffer is a floating-point surface associated with the rendertarget you create (another IDirect3DSurface9):

There are various formats for the depth buffer (including DepthFormat.None, i.e. no depth at all), depending on the resolution you require. Depth is generally stored as 0 (close) to 1.0 (far), and the mapping generally isn't linear.
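In XNA 4.0, for instance, the depth format is chosen when you create the rendertarget; something along these lines (the size and formats here are just example values):

```csharp
// A rendertarget with an attached 24-bit depth buffer (XNA 4.0).
RenderTarget2D sceneRT = new RenderTarget2D(
    Device,                 // the GraphicsDevice
    1280, 720,              // rendertarget dimensions (example values)
    false,                  // no mipmaps
    SurfaceFormat.Color,    // color format
    DepthFormat.Depth24);   // depth format; DepthFormat.None attaches no depth
```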

Anyway, how does this help?

Modern GPUs can cheaply test against this depth information before they do any pixel writing. So when the GPU is about to render a pixel, it checks whether anything CLOSER to the point of view has already been written there. If it has, the pixel is skipped.

Basically, if something closer has been drawn in that spot, skip it. Not only does this let you draw without worrying about depth ordering, it also saves your GPU the work of rendering pixels that would be covered by 'closer' information anyway.
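In XNA the depth test (and depth write) is controlled through the DepthStencilState; the built-in states cover the common cases:

```csharp
// Test against the depth buffer AND write new depth values (the usual case).
Device.DepthStencilState = DepthStencilState.Default;

// Test but don't write; useful for transparent effects that should be
// hidden behind nearer geometry without occluding each other.
Device.DepthStencilState = DepthStencilState.DepthRead;

// No depth testing at all; draw order alone decides visibility.
Device.DepthStencilState = DepthStencilState.None;
```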

Now, if you followed that, you might get what I said about combining methods 1 and 2 above. If you sort your objects and draw the nearer ones first, a lot of the further objects won't be drawn, or will only be partially drawn (i.e. they will fail the Z-test because of nearer objects). In the image above, if you drew front-to-back, every pixel covered by the terrain is one the sky didn't have to draw. Every pixel covered by a ship is one the sky or terrain didn't have to render, and so on. This is a simplification, but basically correct (as it happens I draw the sky to a separate sky rendertarget and combine it during the GBuffer combine phase, so I'm not getting that particular efficiency).

However, as I said before, the scene rendertarget I get is a manufactured one based on the GBuffer; while there is a depth buffer associated with it, its contents are completely incorrect. So if you clear the useless depth and draw your post effects into it, you get this:

Clearly not what we want. You can see that the engine trails are being drawn atop the ships. To be precise, the engine trails are drawn at the correct depth, but without knowing the depth of the other pixels you cannot know if a given engine trail pixel is behind or in front of any other part of the image.

The obvious answer is that you need to somehow restore the depth 'Z-buffer' of this final scene rendertarget. We definitely have the information from before; the depth buffer from the GBuffer drawing pass would be perfect.

Unfortunately, in DirectX 9 there is no way to simply read from a depth buffer, so that is right out. What we do have is the depth rendertarget we also wrote during the GBuffer phase to enable our lighting. This is the same information. Aha!

So, we have to copy from the depth rendertarget into the depth channel of our scene rendertarget. Now, how to go about this? It should be simple, but I didn't know how to write to the depth channel only (I didn't want to replace the color of the scene rendertarget; we went through a lot of trouble to generate it!). I thought about it for a bit without getting anywhere.

A quick Google search turned up this, from the ever-helpful Catalin Zima. It is very simple: you just use the output semantic DEPTH:
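I won't reproduce Catalin's shader exactly, but the core of the technique looks something like this: sample the GBuffer depth texture in the pixel shader and write the value back out through the DEPTH semantic (the texture and sampler names here are mine):

```hlsl
// A sketch of a depth-restore pixel shader; names are illustrative.
texture ColorTexture;
texture DepthTexture;
sampler ColorSampler = sampler_state { Texture = <ColorTexture>; };
sampler DepthSampler = sampler_state { Texture = <DepthTexture>; };

void RestoreDepthPS(in  float2 texCoord : TEXCOORD0,
                    out float4 color    : COLOR0,
                    out float  depth    : DEPTH)
{
    color = tex2D(ColorSampler, texCoord);    // restores the color as well
    depth = tex2D(DepthSampler, texCoord).r;  // written straight into the Z-buffer
}
```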

Unfortunately, that still left me with the color-overwriting problem. Catalin's code restores both color and depth, and I didn't want that. So how to skip the color? You can't remove the color output from the pixel shader (the 'out float4 color : COLOR0' above); a pixel shader MUST write out COLOR0 (error X4530: pixel shader must minimally write all four components of COLOR0).

So that was out. I quickly thought up one solution, though it was a bad one: I simply set the pass to enable alpha blending, and in the pixel shader did:

color.rgba=0;

Alpha (the 'a' in rgba) is the transparency of a pixel when alpha blending. Zero = fully transparent, one = fully opaque. So basically I just forced every pixel to be transparent.

That worked, but seemed terribly wasteful. All those color pixels were being examined for transparency and rejected. That is a lot of unnecessary work, so I really didn't like this solution.

Fortunately there is another way; you can simply disable color writes:
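In an effect file that's a render state set on the pass; roughly like this (the shader function names are illustrative):

```hlsl
technique RestoreDepth
{
    pass P0
    {
        // Write depth only; suppress all four color channels.
        ColorWriteEnable = 0;
        VertexShader = compile vs_2_0 RestoreDepthVS();
        PixelShader  = compile ps_2_0 RestoreDepthPS();
    }
}
```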

The 'ColorWriteEnable' render state turns off color writing. ColorWriteEnable = RED | GREEN, for example, would allow the R and G channels of color to be written but nothing else (giving you a very Christmas-y scene). Zero turns off everything.

Of course, once I did that, all my engine trails and bounding boxes disappeared. Hm. Clearly color writing wasn't being re-enabled. In XNA, ColorWriteEnable is part of the BlendState settings, so I quickly reasoned that resetting that would fix it, and it did:

Device.BlendState = BlendState.Opaque;

Then it all worked and was about as efficient as I think it can be!

Anyway, here is the code:

//For drawing things post G-Buffer resolution. Transparent items, overlays, etc.
public override void ForwardRenderingPass(GameTime gameTime)
{
    if (!IsActive)
        return;

    RestoreDepth();
    gameEngine.ForwardRenderingPass();
    Device.SetRenderTarget(null);
    base.ForwardRenderingPass(gameTime);
}

//Restores the depth to this scene's output rendertarget.
//Uses the current renderer's DepthRT as the source.
void RestoreDepth()
{
    Device.SetRenderTarget(OutputRT);

    //Clear the depth buffer of the output render target (it is not at
    //all correct as it was never set properly during GBuffer combine).
    Device.Clear(ClearOptions.DepthBuffer, Color.CornflowerBlue, 1, 0);

    //Now restore the OutputRT depth from the GBuffer depth texture.
    Device.DepthStencilState = DepthStencilState.Default;
    Effects["RestoreDepth"].Parameters["DepthTexture"].SetValue(currentRenderer.DepthRT);
    DrawQuad(null, Device, Effects["RestoreDepth"]);

    //We must do this to turn color write back on.
    //It was disabled in the RestoreDepth effect as
    //we only want the depth, not any color values.
    Device.BlendState = BlendState.Opaque;
}