Shadow Mapping Depth Buffer not storing 32-bit floats…

Not sure if this should go here or in the shader forum...

I am having trouble working out why my off-screen depth buffer is not coming out as expected. I am attempting to write a shader-based shadow mapper, having been partially successful with an immediate-mode version.

I have attached three screenshots to illustrate this, along with the GL setup code and the two shaders.

It appears that my off-screen depth buffer is storing only 8-bit values, not the 32-bit float packed into RGBA that I am attempting to store. This shows up when I capture the depth buffer and write it out to a file: the areas with infinite depth have 80 80 80 FF as their hex values, so R, G, and B all have the same value and alpha is FF (i.e. 1.0); I'm assuming 80 is 0.5. This is illustrated by depth.jpg.
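For reference, the pack/unpack pair I am using follows the common fract-based trick. A minimal sketch (the constants here are the usual 255-based ones; my actual shader code may differ slightly):

```glsl
// Pack a float in [0,1) into four 8-bit-friendly channels.
vec4 packFloatToRGBA(float v)
{
    vec4 enc = vec4(1.0, 255.0, 65025.0, 16581375.0) * v;
    enc = fract(enc);
    // Subtract the carry that the next channel will reintroduce on unpack.
    enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
    return enc;
}

// Reverse the packing: weight each channel back into place.
float unpackRGBAToFloat(vec4 rgba)
{
    return dot(rgba, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}
```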

The depth_noshader.jpg shows the same logic but without enabling the off-screen framebuffer, thus rendering to the current screen buffer; basically, not calling glBindFramebuffer(GL_FRAMEBUFFER, …). By the looks of the output, I'd say the shader is calculating a floating-point representation in RGBA, and the banding seems to indicate reasonably accurate scaling.

The screencapture.jpg is a shot of the final render stage, with the depth buffer shown in the lower right.

All I can seem to get out of the shader is an alpha effect. All my depth calculations fail, and I suspect it's because the depth buffer doesn't actually contain what I really want: the linearly adjusted distance from the light source, written on the first pass.

Can anyone see from the code below if there is something wrong with the way the depth buffer is set up? I suspect the problem is there, since when rendering to the actual frame buffer the bluish banding seems consistent with how I would expect the bit pattern to progress with depth.
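For comparison, this is the general shape of the setup I have been following from the examples I've read (a sketch, not my exact code; the names fbo/depthTex and the 1024x1024 size are illustrative):

```c
/* Off-screen depth-only FBO with a depth texture attachment (sketch). */
GLuint fbo, depthTex;

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);
glDrawBuffer(GL_NONE);   /* no color buffer on the depth-only pass */
glReadBuffer(GL_NONE);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    /* handle incomplete framebuffer */;
```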

...may be deprecated. Maybe DEPTH_TEXTURE_MODE is not the way to go. As it stands, I am only getting the red component. I have seen it referenced as rrr1, which seems to be about what I am getting: one value repeated three times and an FF, which I suspect translates to 1.0 in GL terms.

Anyone? I'm stumped. I've been reading FAQs and other examples on the net, and everything seems to indicate this is correct. Playing with the shadow depths from the unpack and setting colors for ranges, it appears my unpack is returning values from 0 to 1, which seems to indicate the depth buffer texture has values from 0 to 1; that roughly matches what I am seeing.

I have ported the pack/unpack methods to a C# program using random floats from 0 to 1, and the logic makes sense and works.
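The round-trip test is along these lines (sketched here in C rather than my actual C#; pack/unpack mirror the GLSL above):

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* CPU-side mirror of the GLSL pack/unpack for a round-trip sanity check. */
static void pack(float v, float enc[4])
{
    enc[0] = v;               enc[1] = v * 255.0f;
    enc[2] = v * 65025.0f;    enc[3] = v * 16581375.0f;
    for (int i = 0; i < 4; ++i)
        enc[i] -= floorf(enc[i]);          /* fract() */
    for (int i = 0; i < 3; ++i)
        enc[i] -= enc[i + 1] / 255.0f;     /* carry correction */
}

static float unpack(const float e[4])
{
    return e[0] + e[1] / 255.0f + e[2] / 65025.0f + e[3] / 16581375.0f;
}

int main(void)
{
    for (int i = 0; i < 5; ++i) {
        float v = (float)rand() / (float)RAND_MAX;   /* random 0..1 */
        float e[4];
        pack(v, e);
        printf("%f -> %f\n", v, unpack(e));          /* should match closely */
    }
    return 0;
}
```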

I'm missing something really trivial but I can't see it.

Can anyone throw me a bone?

I am starting to think it's in my second-pass shaders.

Note: the vAdj and the multiply by vShadowDepth are there just so I can see something. The intent of vAdj is to multiply by all 1s (white) when in the light and by (0, 1, 0, 1) (green) when in shadow. The vShadowDepth factor will eventually come out of the final color calculation.
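Roughly, the debug path in the fragment shader looks like this (a sketch of the intent only; inShadow and baseColor are illustrative names, not my exact code):

```glsl
// Debug tint: white when lit, green when in shadow.
vec4 vAdj = inShadow ? vec4(0.0, 1.0, 0.0, 1.0)   // (0,1,0,1) = green
                     : vec4(1.0);                 // all 1s = white
gl_FragColor = baseColor * vAdj * vShadowDepth;   // vShadowDepth is temporary
```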

I still think something is wrong in the depth texture, since all I see is rrr1, so I think there are two issues at play here.

Your depth buffer, if it is a standard depth buffer, only has a single value per sample.

The rrr1 is just how GLSL populates the vec4 returned from the texture sampling function. Pre-GLSL 1.3, GLSL populates the vec4 return value as follows for these DEPTH_TEXTURE_MODE assignments:

* INTENSITY = rrrr
* LUMINANCE = rrr1
* ALPHA = 000r
* RED = r001

where "r" is the depth value (or depth comparison value). In GLSL 1.3+, DEPTH_TEXTURE_MODE is ignored and GLSL behaves as if it is always set to LUMINANCE.

Not quite sure I understand this statement. What else could it be? In the definition of the FBO, RBO, and texture above, I define the texture as a depth buffer and attach it to the depth attachment point, and when I read back to create a screenshot, I read from the depth attachment. Am I missing something about what a depth buffer is?

I was thinking about this... in the shaders I am manually calculating the distance from the fragment to the light source, so I shouldn't really need a depth buffer at all. I could encode my linear depth values using the pack/unpack methods, store them in a color buffer (or any buffer, for that matter), and reconstitute them in the fragment shader on the other side. The 'depth' buffer seems to be irrelevant. 'Depth' may be the buzzword for shadow mapping, but ultimately I don't think the type of off-screen render buffer matters; it's simply a transport mechanism for data from shader to shader.
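Concretely, that would mean swapping the depth attachment for an ordinary color attachment and keeping a renderbuffer around just for depth testing (a sketch; packedTex and depthRbo are illustrative names):

```c
/* Sketch: render packed linear depth into an RGBA8 color attachment.
   A depth renderbuffer is still attached so the light pass depth-tests. */
GLuint fbo, packedTex, depthRbo;

glGenTextures(1, &packedTex);
glBindTexture(GL_TEXTURE_2D, packedTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 1024, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenRenderbuffers(1, &depthRbo);
glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 1024, 1024);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, packedTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRbo);
```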

I could encode my linear depth floating points using the pack/unpack methods, store in a color buffer, or any buffer for that matter...

Your 2nd quote answers your 1st. By "standard" I'm referring to a texture or renderbuffer with format GL_DEPTH_STENCIL or GL_DEPTH_COMPONENT: one that stores 0..1 window-space depth values written by the standard depth pipeline.