Poor precision in depth texture

I'm trying to perform an opacity calculation in GLSL by comparing the value in a depth texture with the depth value of the current fragment. My plan is to lerp between a captured color texture ( the scene ) and the color of my water volume by how "thick" the water is for that pixel.

Here's my GLSL -- all it's doing is trying to translate the value in the depth texture into world coordinates and then subtract the z value of the incoming fragment from that to determine the thickness. Since I'm just messing around here, it's nothing complex.

What I see when I run the app suggests -- as far as I can tell -- that the precision of the depth buffer isn't up to the task. I see solid green, as if the thickness always passes the threshold, but if I bring the camera very close to an edge I can see a smooth interpolation across depth, just as I expect.

I know that the depth buffer is non-linear, reserving precision for near fragments rather than far ones. What I don't know is why it's failing so badly -- I'd expect *some* transition, even if it's not accurate.

Here are a couple of screenshots:

Looking from a distance:

And looking close ( where it sort of works like I'd expect ):

Any idea how I can increase the precision of the depth buffer? Or, failing that, how I can work around it? Perhaps ( in fact, almost certainly ) my math is wrong.

I'm not sure if this will help you. I haven't tried doing this in OpenGL, but this is Cg code that works on the PS3's RSX and should in theory work in OpenGL. The actual code is slightly different, as we obtain the depth value from the texture in a different way; I just didn't want to overcomplicate things.

I actually pass in the values for a and b so I don't have to do the calculations per fragment; I've just shown the calcs here for simplicity.

We use this code in our depth of field effect, to blend between the back buffer and a blurred version of it, based on the world-space distance of the fragment from the camera. So it should work in your situation too.
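The Cg itself didn't survive the post, but the usual shape of that precomputed-constant linearization, translated to GLSL, is something like the following sketch. The names `a`, `b`, and `depthTex` are mine, not the poster's; a = far / (far - near) and b = a * near are computed once on the CPU:

```glsl
uniform float a;            // far / (far - near)
uniform float b;            // a * near
uniform sampler2D depthTex;

float worldDistance(vec2 uv)
{
    float d = texture2D(depthTex, uv).r; // window-space depth, 0..1
    return b / (a - d);                  // back to eye-space distance
}
```

At d = 0 this returns the near distance and at d = 1 the far distance, so a thickness (or depth-of-field blend) falls out as a plain difference of two such distances.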

glGetIntegerv(GL_DEPTH_BITS, &foo) on your framebuffer and glGetTexLevelParameteriv(...GL_TEXTURE_DEPTH_SIZE...) on your texture.
Most likely your texture was created as 16 bit and you are dropping bits during the copy.
Try explicitly requesting a sized internal depth format, like GL_DEPTH_COMPONENT24.

If you use an FBO instead of copying, you can just query GL_DEPTH_BITS after binding the FBO, it updates the framebuffer-dependent state.
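In code, those two queries look roughly like this (a sketch; `depthTexture` is a placeholder name, and the target is assumed to be GL_TEXTURE_2D):

```c
GLint fbDepthBits = 0, texDepthBits = 0;

/* Depth bits of the currently bound framebuffer. */
glGetIntegerv(GL_DEPTH_BITS, &fbDepthBits);

/* Depth bits of the texture's internal format. */
glBindTexture(GL_TEXTURE_2D, depthTexture);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_DEPTH_SIZE,
                         &texDepthBits);

printf("framebuffer: %d bits, texture: %d bits\n",
       fbDepthBits, texDepthBits);
```

If the two numbers disagree, that's where the bits are going.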

Ulp! I thought for a second there I had inadvertently violated my NDA. But that information is quite public now.

Yes, we use Cg, and there is a version of OpenGL ES for the PS3, but I have not used it.

Sorry the code didn't help. I have a few ideas but I don't want to send you on a wild goose chase, so if I get the chance later I'll look into it as I want to support similar functionality myself on the Mac.

You could check that GL_TEXTURE_COMPARE_MODE is set to GL_NONE for the depth texture, just in case you are implicitly getting a depth-compare lookup. Look at the GL_ARB_shadow spec for more info on that.
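i.e. something like this (assuming a GL_TEXTURE_2D target and a placeholder `depthTexture` name):

```c
glBindTexture(GL_TEXTURE_2D, depthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
```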

arekkusu Wrote:glGetIntegerv(GL_DEPTH_BITS, &foo) on your framebuffer and glGetTexLevelParameteriv(...GL_TEXTURE_DEPTH_SIZE...) on your texture.
Most likely your texture was created as 16 bit and you are dropping bits during the copy.
Try explicitly requesting a sized internal depth format, like GL_DEPTH_COMPONENT24.

If you use an FBO instead of copying, you can just query GL_DEPTH_BITS after binding the FBO, it updates the framebuffer-dependent state.

To request a sized internal format, would I call something like so?

EDIT: I tried this, and got errors from GL. GL seems only to accept the 'internalFormat' param as GL_DEPTH_COMPONENT. The 'format' param as GL_DEPTH_COMPONENT24 works, but produces no difference.

What about the GL_FLOAT in there? Should I ask for some other data type? That said, I'd rather stay away from using FBOs to render a depth pass, just because I'm already performing an extra render ( into FBO ) for the reflection. I'd be happiest to just be able to grab the values already in the ( I assume ) 24 or 32-bit depth buffer.

iklefrelp Wrote:Ulp! I thought for a second there I had inadvertently violated my NDA. But that information is quite public now.

I'm sorry!

And don't worry about wild goose chases -- I can use any pointers you've got.

One thing I managed to realize ( since I'm reading the Orange Book as I go ) is that gl_FragCoord.z is already in depth-buffer space, so I need to convert it too, using my convertZ ( or your conversion ) method. That doesn't fix it, though.

Accepted by the <format> parameter of GetTexImage, TexImage1D,
TexImage2D, TexSubImage1D, and TexSubImage2D:

DEPTH_COMPONENT
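In other words, the sized format belongs in the <internalFormat> parameter, while <format> stays plain GL_DEPTH_COMPONENT. A sketch of the allocation call (width/height are placeholders; this is the spec-correct form, whatever your driver made of it):

```c
glTexImage2D(GL_TEXTURE_2D, 0,
             GL_DEPTH_COMPONENT24,  /* internalFormat: sized, 24-bit */
             width, height, 0,
             GL_DEPTH_COMPONENT,    /* format: unsized, per the spec */
             GL_FLOAT,              /* type: irrelevant, no data here */
             NULL);
```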

Quote:What about the GL_FLOAT in there? Should I ask for some other data type?

Depth data is defined to be in the range [0..1], but is typically stored internally in an integer format. There isn't really a good 24 bit int format you can request (except via EXT_packed_depth_stencil) so FLOAT is as good as you can do. It doesn't really matter since you aren't providing any data here; no format conversion cost.

Quote:That said, I'd rather stay away from using FBOs to render a depth pass, just because I'm already performing an extra render ( into FBO ) for the reflection. I'd be happiest to just be able to grab the values already in the ( I assume ) 24 or 32-bit depth buffer.

I see. Yes, unfortunately there is no way to share the window's depth buffer with an FBO, so you need to copy. Just make sure you aren't dropping bits in the process.
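A copy that keeps the bits looks roughly like this (a sketch; it assumes `depthTexture` was already allocated with a 24-bit depth internal format at `width` x `height`):

```c
/* After rendering the depth pass: copy the window's depth buffer into
   the already-allocated depth texture. There is no format/type argument
   here -- the texture's internal format decides what precision is kept. */
glBindTexture(GL_TEXTURE_2D, depthTexture);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
```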

It looks to me like my depth texture is losing precision. You can see the banding here in these screenshots:

Here the camera's near an intersection with the water plane -- looks OK, aside from banding due to aliasing. The water is completely transparent where its depth == depthtexture depth for the fragment. That's what I want:

And here it's a bit farther away. Looks horrid, due to loss of precision in the depth texture ( depth texture appears to be 8-bit. WTF?):

Here's the GLSL. Note, the code is hacky -- I'm trying to figure things out as I go, so I have three different methods for converting depth values to world z, and another method, DepthRange, that converts those back to 0-1, but linearly.

What do you have for your near and far planes? Those determine how precise your depth buffer will be. But seriously, if you want fog, instead of using GLSL I would recommend making sure the floor is subdivided, then using per-vertex fog. For most scenes it won't make that much of a difference in visual quality, but it will save you a lot of problems, including this one, and some speed.

BTW, I don't think ATI supports 32-bit depth buffers, and for FBOs they don't support above 16 bits. Of course, you can always use plain old GL_DEPTH_COMPONENT to be safe.

I'm aware of how near and far affect precision! That said, I'm not actually doing fog -- I'm doing something like fog, but based on the thickness of the water, comparing the solid geometry's depth ( via the depth texture ) to the incoming fragment depth.

The thing is, I'm certain that 16-bit precision would be enough; my problem is that what I'm getting looks like 8!

Isn't it just that requesting a non-existent 32-bit depth causes you to fall back to eight? It sounds stupid, but http://www.beyond3d.com/forum/showthread.php?t=21773 suggests requesting 16-bit. This is way out of my depth ( pun intended ), so please do disregard me if I'm talking rubbish.

So far I have had no luck with this either. I'm creating a 24-bit depth buffer, but when I create the depth texture, for some reason I lose precision and only get a 16-bit depth texture. I would have thought that 16 bits of precision would be fine, though, especially with the scene depth ranges I am using, so there could be another problem regarding the shader's access of the depth texture.

Sorry, but since it wasn't mentioned -- I know how easy it is to forget the simplest things -- what happens if you use different types, such as GL_UNSIGNED_INT or GL_UNSIGNED_SHORT? It could just be a problem where, even though it's using GL_FLOAT, it's still clipping the values as if it were GL_UNSIGNED_BYTE. It may end up doing that for every type, not just float, but AFAIK support for that type is rather new, so it would probably have the most problems. It's just a guess, but it would certainly explain the loss of precision.

I gave that a stab, but all it did was slow the app down from 20 fps to 6 -- profiling revealed the time to be in glCopyTexSubImage, implying to me that a conversion was occurring. So it seems that GL_DEPTH_COMPONENT24 and GL_FLOAT result in no conversion.

So then why am I seeing poor precision?

Also, when I sample the depth texture, are r, g, b and a all the same value? Am I sampling the incorrect one?

I know that I'm supposed to use shadow2D samplers for GL_DEPTH_COMPONENT ( at least according to the Orange Book ), but there's no Rect variant.

And a final thought... what if I used a different internal format than GL_DEPTH_COMPONENT -- what if I used a luminance texture or something? Is that an option? Would that blow up? Obviously, I'll just try and see, but I'm curious if anybody has any actual suggestions for me.
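For what it's worth, with GL_TEXTURE_COMPARE_MODE set to GL_NONE a depth texture samples as a plain value rather than a shadow-compare result, so a rect-target lookup can just take the red channel. A sketch (the sampler name is a placeholder; ARB_texture_rectangle is assumed):

```glsl
uniform sampler2DRect depthTex;

float sceneDepth(vec2 fragXY)
{
    // With the default DEPTH_TEXTURE_MODE of LUMINANCE,
    // r, g and b all carry the same depth value; a is 1.0.
    return texture2DRect(depthTex, fragXY).r;
}
```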