I'm trying to do something like per-pixel fog for water rendering:
I capture the depth buffer and color buffer of the scene before
rendering the water, and then for each water fragment I lerp between
the solid geometry's captured color texture and the water color,
based on the eye-space distance between the water fragment and the
depth-buffer value at that fragment. The idea is that the further
the water surface is from the solid geometry beneath it ( in
eye space ), the more that fragment should take on the water's
color. Conversely, where the water is very close to the solid
geometry ( like a shoreline ) it would be almost transparent.
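In case it helps to pin down what I mean, here's the blend I'm
describing, sketched in Python ( the `falloff` parameter and the
names are mine, purely for illustration ):

```python
def water_blend(refraction_rgb, water_rgb, thickness, falloff=5.0):
    """Blend the captured scene color toward the water color by
    eye-space water thickness.

    thickness: eye-space distance between the water fragment and the
    solid geometry behind it.
    falloff:   thickness (in world units) at which the water becomes
               fully opaque -- an illustrative tuning knob.
    """
    t = max(0.0, min(1.0, thickness / falloff))  # clamp to [0, 1]
    # per-channel lerp: refraction color -> water color
    return tuple(r + (w - r) * t
                 for r, w in zip(refraction_rgb, water_rgb))
```

In the actual shader this would just be a mix() driven by the
thickness, but the math is the same.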

I know that the depth buffer isn't linear, so I poked around and
found a function which seems to perform a reasonable conversion from
depth-buffer values to eye-space z, and hacked up a simple GLSL
fragment program to test with. For what it's worth, I tested the
depth-to-world conversion function in Python and it gave me ( what
seem to be ) reasonable results across my near/far range, taking
into account the hyperbolic ( non-linear ) distribution of depth
values in [0-1].
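For reference, the conversion I'm using has this shape ( a standard
perspective depth linearization; it assumes the default
glDepthRange(0, 1) and a plain perspective projection -- the
near/far values below are just examples ):

```python
def depth_to_eye_z(d, near, far):
    """Convert a [0,1] depth-buffer value back to eye-space distance,
    assuming glDepthRange(0, 1) and a standard perspective
    projection."""
    z_ndc = 2.0 * d - 1.0  # window [0,1] -> NDC [-1,1]
    return (2.0 * near * far) / (far + near - z_ndc * (far - near))
```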

The trouble is, I'm getting what appears to be far too little
precision from my depth buffer for this to work. Which is to say, if
I bring the camera close to an intersection between the water quad
and the solid geometry I *do* see a smooth transition from the color
texture ( "refraction" ) to the water color based on thickness.
However, if I back off even a few "meters", it goes solidly to the
water color.
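To put a rough number on how little precision would be left if the
buffer really were 16-bit: assuming the same linearization as above,
the eye-space distance covered by a single depth-buffer step grows
rapidly toward the far plane ( near = 1, far = 100 here are just
example values ):

```python
def eye_z_step_16bit(d, near, far):
    """Eye-space distance covered by one 16-bit depth-buffer step at
    window depth d (standard perspective linearization assumed)."""
    def to_eye(depth):
        z_ndc = 2.0 * depth - 1.0  # window [0,1] -> NDC [-1,1]
        return (2.0 * near * far) / (far + near - z_ndc * (far - near))
    lsb = 1.0 / 65535.0  # one least-significant bit of 16 bits
    return to_eye(min(d + lsb, 1.0)) - to_eye(d)
```

With these example numbers, one step near the far plane spans a few
centimeters of eye space -- easily enough to swamp a shoreline-scale
thickness falloff.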

The app is a dummy GLUT app for working this stuff out; my context
initialization is asking for a 32-bit depth buffer:

glutInitDisplayString("depth>=32 double rgb");

I know from looking at OpenGL Profiler that I'm getting a depth
buffer and that the values seem reasonable. OpenGL Profiler reports
the depth buffer as being GL_UNSIGNED_SHORT, which strikes me as
interesting, since I'm requesting GL_FLOAT for its format.

So, I guess my problem is either that my math is really incorrect,
or that my depth texture is somehow being converted from 32-bit
greyscale down to a lower precision ( 16- or 8-bit ), causing me to
lose precision. Another thing I'm curious about is what coordinate
space gl_FragCoord.z is in. Is it eye space ( ranging from the
projection matrix's near to far ), or the depth buffer's [0-1]
window space?

Can anybody give me a few pointers? Thanks,

email@hidden
"authentic frontier gibberish"

_______________________________________________
Mac-opengl mailing list (email@hidden)
