
It's not often that I have to resort to posting questions on here but my debugging and googling skills have failed me this time.

I have a deferred renderer that I am integrating ray-traced shadows (in HLSL) into for my master's dissertation. For this I render a depth map and use it to reconstruct the position of each pixel in the ray-tracing and deferred lighting stage. This works fine and I get the correct position; everything is hunky dory until I actually go to move the camera.
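For reference, the reconstruction step works along these lines. This is only a sketch: `DepthMap`, `InvViewProjection`, and the assumption that the depth target stores post-projection z/w are mine, not taken from the actual project.

```hlsl
// Reconstructing world-space position from a stored depth value.
// Assumes: DepthMap holds post-projection z/w in a Single (32-bit) target,
// InvViewProjection is the inverse of the camera's view-projection matrix,
// and texCoord spans the full-screen quad.
float4x4 InvViewProjection;
sampler DepthMap : register(s0);

float3 ReconstructWorldPosition(float2 texCoord)
{
    float depth = tex2D(DepthMap, texCoord).r;

    // Rebuild the clip-space position: x/y from the screen coordinate
    // (note the flipped y axis in texture space), z from the stored depth.
    float4 clipPos;
    clipPos.x = texCoord.x * 2.0f - 1.0f;
    clipPos.y = (1.0f - texCoord.y) * 2.0f - 1.0f;
    clipPos.z = depth;
    clipPos.w = 1.0f;

    // Undo the projection, then the perspective divide.
    float4 worldPos = mul(clipPos, InvViewProjection);
    return worldPos.xyz / worldPos.w;
}
```

Any error in the sampled depth value feeds straight through this divide, which is why small precision problems become visible positions.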

I think what is happening is that when the camera moves, the texture holding the depth obviously changes, and when it gets passed to the ray-tracer effect it seems to be very low quality, as if it were rendered in 8-bit instead of the Single (32-bit) format it is actually rendered in. I don't know if this is some "optimisation" to lower bandwidth usage, but whatever it is really screws with the position calculation in the deferred stage.

This is most obvious when I render the height of the pixel. I made a short video of it, which you can see here (view in HD):

I hadn't noticed this when just doing deferred lighting, but when it comes to ray-tracing the shadows I have to use the pixel position as the start point of the ray, so any error is very noticeable.

Does anyone know what exactly is causing this problem, and is there a way to stop it? I have an ATI Radeon HD 2900 XT; it might be a driver optimisation on that card.

Whenever I try to take a screenshot of it happening, the artefact doesn't show up; the screenshot must be captured from the full-quality textures.


Quote:

Original post by xnunes: So you think the problem is in the texture precision?

Not exactly. I know that the texture is fine when the camera is staying still; when it isn't moving, everything works fine and there is no flicker. So I think it's something to do with the texture precision only while the camera is moving.

Quote:

Original post by xnunes: Are you using POINT as the texture sampler?

Yes, I have tried POINT and NONE with the same issue.

Quote:

Original post by xnunes: Are you considering the half-pixel offset that DirectX 9 (and XNA) uses when sampling the texture?

Yep, I tried it both with and without the offset, with no difference either way.
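For anyone else following along, the half-pixel correction being discussed usually looks like this in the full-screen-quad vertex shader. The `HalfPixel` constant and the `VSOutput` layout here are illustrative assumptions, not the poster's actual code.

```hlsl
// DirectX 9 / XNA maps texels so that a full-screen quad samples half a
// texel away from pixel centres unless corrected. Assumes HalfPixel is
// set from application code to float2(0.5 / targetWidth, 0.5 / targetHeight)
// whenever the render-target size changes.
float2 HalfPixel;

struct VSOutput
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
};

VSOutput FullScreenQuadVS(float3 position : POSITION0,
                          float2 texCoord : TEXCOORD0)
{
    VSOutput output;
    output.Position = float4(position, 1.0f);
    output.TexCoord = texCoord + HalfPixel; // align texel and pixel centres
    return output;
}
```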

Quote:

Original post by xnunes: Is your near and far plane close enough to each other to avoid Z imprecision?

Yeah, there isn't any Z imprecision when rendering the depth image.

I think the main issue is the compression of the image when it's sent to the graphics card; if you look at the YouTube video it just seems to lose its bit depth. It becomes "chunky" in the same way a depth map would if you reduced it to, say, just 256 colours.


It seems to flicker not when the camera moves, but when you yaw/pitch around the camera's axes.

Very weird; I don't have any other guess. I have done a lot of re-feeding textures (including deferred rendering) with XNA and didn't have any compression problems. I don't believe there's any compression when you are feeding the texture back in. Maybe someone else knows the problem.


My guess is that the camera's view matrix is being updated after some of the deferred render targets have already been drawn.

Try moving your camera code to the beginning of the loop, or move your render-target update code to the update method, and see if this fixes the issue. If it does, then you know the texture "flickering" is caused by the camera's matrix being updated between deferred passes.


I have v-sync off, and the camera matrices are updated during the Update() phase, which should only happen once per frame with v-sync off. All deferred rendering is handled during the Draw() phase, which comes after the Update().

It does look a bit like that, though. I don't know why that would produce the gradient lines on the texture, however; it really does look exactly as if you took the 32-bit texture and dropped its bit depth to 8-bit.


I know from extensive experience that Update does not get called right after a Draw. They are both threaded (you can tell by looking at the output window and seeing all of the Thread Exited messages). This threw me for a loop when I was first developing my game engine, Reactor 3D, on XNA; the threading of the update and draw functions was really apparent on the Xbox 360. This behaviour exists so that Update gets called at a fixed interval while Draw gets called as soon as it can with v-sync off. This has been researched extensively...

There are a few things you need to do to force XNA to call Update and Draw one after another, and it gets rather complicated.

If, just for testing, you put your camera update code at the beginning of your Draw call, you could check whether this is the culprit and cross it off your list of potential adverse effects.

*EDIT* This would only fix the flickering... the rest of your issues would probably be due to either sampler state changes or render-surface formats...


Quote:

Original post by gabereiser: I know from extensive experience that Update does not get called right after a Draw. They are both threaded (you can tell by looking at the output window and seeing all of the Thread Exited messages).

No, they're not. That would be pure craziness... it would mean every newbie using the framework would have to understand and work around the implications of using multiple threads. Just put a breakpoint in Update or Draw and you'll see they're executed on the same thread. Or dig in with Reflector if you want.

The rules for when Update and Draw get called are spelled out clearly in Shawn's blog post, and they're not really all that complicated. The only thing you need to be aware of with a fixed time step is that Update may get called more than once in between Draws, which results in "jerkiness", since your simulation suddenly lurches forward relative to your rendering.

Anyway, this doesn't seem to have anything to do with the TC's problem, which is flickering, not jerkiness.


Quote:

Original post by Darg: I don't know if this is some "optimisation" to lower bandwidth usage or something but whatever it is is really screwing with the position calculation in the deferred stage.

There are no such driver optimizations. Your GPU will execute everything in your shaders at 32-bit precision, and the driver has to implement the surface format exactly. Filtering and blending are not always done at full precision (depending on the GPU), but you're not doing either of those here.

You do need to be careful with regards to precision if you want to avoid error. It sounds like you're storing depth as post-perspective z/w. That value is non-linear with respect to your view-space depth: it rises steeply just past the near plane and then flattens out towards the far plane, so the vast majority of the representable range is spent on areas close to the camera, leaving far less precision for areas further away. This is compounded by the fact that floating-point numbers have more precision close to 0 than close to 1. It can get really icky when you use that depth value to reconstruct position, especially when you then use that position for generating shadows. I have a blog post with some pictures, if you're interested. Since you're using XNA and you have to write out a depth buffer anyway, I would recommend storing a linear depth value instead.
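The linear-depth approach can be sketched in two small pieces of HLSL. Names like `FarClip`, `DepthMap`, and the frustum-ray reconstruction are illustrative assumptions about how the pipeline might be wired up, not code from either poster.

```hlsl
// Pass 1: write normalised linear depth (view-space depth / far plane)
// into a Single-format render target. Assumes FarClip is the camera's
// far-plane distance and the vertex shader passes view-space depth
// through TEXCOORD0.
float FarClip;

float4 LinearDepthPS(float viewDepth : TEXCOORD0) : COLOR0
{
    // 0 at the camera, 1 at the far plane, evenly spaced in between.
    return float4(viewDepth / FarClip, 0.0f, 0.0f, 1.0f);
}

// Pass 2: reconstruct view-space position. frustumRay is a ray from the
// camera to the matching far-plane corner, interpolated across the
// full-screen quad, so scaling it by the linear depth lands on the pixel.
sampler DepthMap : register(s0);

float3 ReconstructViewPosition(float2 texCoord, float3 frustumRay)
{
    float linearDepth = tex2D(DepthMap, texCoord).r;
    return frustumRay * linearDepth;
}
```

Because the stored value varies linearly with view-space depth, precision is spread evenly across the whole near-to-far range rather than being concentrated next to the near plane.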

I don't know if any of this is related to your flickering problem, but I figured it was worth mentioning.