Saving Depth Buffer to Texture

Hello,

I am attempting to save both the color and depth buffers to textures for use in subsequent render passes. To verify that the depth buffer is correctly being rendered to the texture, I display the texture on screen (as well as saving it as a TGA); the result is a texture completely populated with values of 1.0. After looking at the documentation and several examples, my code is as follows:
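Roughly, the setup looks like this (a minimal sketch - the sizes, filter settings, and GLEW loader here are placeholders, not my exact code):

```c
#include <GL/glew.h>  /* or whichever extension loader you use */

/* Minimal sketch of the FBO setup: one color texture (renderTex) and one
   depth texture (depthTextureId). Sizes and filter modes are placeholders. */
GLuint fbo, renderTex, depthTextureId;
const GLsizei w = 1024, h = 768;

void setupFbo(void)
{
    glGenTextures(1, &renderTex);
    glBindTexture(GL_TEXTURE_2D, renderTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenTextures(1, &depthTextureId);
    glBindTexture(GL_TEXTURE_2D, depthTextureId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, w, h, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, renderTex, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, depthTextureId, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        /* handle incomplete framebuffer */
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```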

Where pVar lets me switch between the two textures at run time. The renderTex displays correctly; however, as I said, the depthTextureId does not. Additionally, I should mention that the final render shader is using a sampler2D.

If anyone has any ideas as to why this is not working, please let me know. Thanks.

I do not know what is going wrong there, since it might be the drawing itself, the texture setup, or the rendering afterwards. I'd suggest you install a copy of gDEBugger from http://www.gremedy.com/ - it can show you the exact contents of your textures (and buffers) during execution. Pause execution before and after rendering and check that the depth buffer is really modified and that the contents really are 1.000000 and not merely close to it. It may be perfectly normal for all depth values to be > 0.99.
For this I would reduce the buffer size to about 100x100 max to ease debugging and minimize texture scrolling.

Thanks for the reply. I tried gDEBugger and was able to verify that the depth information is indeed being written to depthTex. Additionally, as you suggested, the depth values are around 0.989.
With that said, I need to somehow convert that depth texture into an RGB texture with normalized values. I attempted to use glTexParameterf(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);, which changed the RGB values from 255 to 254; however, the static color buffer still reads it as 255. Also, I should mention that the frustum matrix is set with a near value of 0.1 and a far value of 600, with the majority of the content sitting somewhere around 300.
Any ideas on how I should go about reading the depth information from the depthTex?
Thanks.

This returns a GL_INVALID_ENUM error on the glTexImage2D step. I also tried the above using GL_R16F. Any suggestions?
Additionally, should there not be a more efficient way to use the depth texture directly (instead of copying it)?
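For reference, a glTexImage2D call of the following shape is what GL 3.0+ accepts for a single-channel float texture: the sized GL_R32F belongs in the internalformat slot, while the external format must be GL_RED with type GL_FLOAT (passing the sized format as the external format is a common source of GL_INVALID_ENUM). The names here are placeholders:

```c
/* Allocate a single-channel 32-bit float texture (GL 3.0 / ARB_texture_rg).
   Note the split: sized internal format GL_R32F, external format GL_RED,
   type GL_FLOAT. depthCopyTex, width and height are placeholder names. */
glBindTexture(GL_TEXTURE_2D, depthCopyTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0,
             GL_RED, GL_FLOAT, NULL);
```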

I would have thought this would keep the full value of the depth buffer. (I believe it is normally a 24-bit int.)

@tonyo_au: The default depth buffer format on most devices should be a 24-bit fixed-point format. You could use a 32-bit float, but be careful when mixing/comparing them.

@VASMIR:
If I understand this correctly, what you want is to copy the depth data from one texture to another. In general you should not download it to the CPU and then re-upload it to the GPU as you are trying. As you suspected, there is a more efficient way: use glCopyTexImage2D, or better yet glCopyTexSubImage2D (if you copy more than once, this avoids reinitializing with new glTexImage2D calls again and again).

However, if you just want your depth texture to stick around a little longer, e.g. to do shadow mapping or something in the shader, there seems to be no need to copy anything at all. Just remember your depth texture id, create a new blank depth buffer, and attach it to your FBO to continue. Bind the remembered depth texture id to a texture unit and use that one to access the texture in the shader. Is that what you want?
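The copy variant would look roughly like this (a sketch; it assumes the source FBO with the depth attachment is currently bound and destTex was allocated beforehand as GL_DEPTH_COMPONENT24):

```c
/* Copy the currently bound FBO's depth attachment into destTex.
   Signature: glCopyTexSubImage2D(target, level, xoffset, yoffset,
                                  x, y, width, height) */
glBindTexture(GL_TEXTURE_2D, destTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
```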

Specifically what I am looking to do is save the depth buffer in RGBF16/F32 (or RF16/F32 format if that will work) so that I can send it to a sampler2D to be used in the edge detection stage of an MLAA pass. Additionally, I wanted to have the ability to discard any pixels that have a depth value corresponding to the far plane (for drawing semi-transparent scenes to a texture - i.e. GUI).
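For the far-plane discard, I am picturing something like this in the GUI pass (depthTex and uv being placeholders for whatever the pass already uses):

```glsl
// Sketch: skip fragments whose stored depth still sits on the far plane,
// i.e. where nothing was drawn and the buffer holds its clear value.
uniform sampler2D depthTex;
in vec2 uv;

void main()
{
    float d = texture(depthTex, uv).r;
    if (d >= 1.0)
        discard;
    // ... rest of the pass ...
}
```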

If you want precise values, set up a separate texture in GL_R32F, clear it to 1.0f, and write gl_FragCoord.z into it - this works well for me.
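In the shader that looks roughly like this (assuming GLSL 3.30+ and the R32F texture bound as color attachment 1):

```glsl
#version 330 core

// Color goes to attachment 0 as usual; raw window-space depth goes to the
// R32F texture on attachment 1.
layout(location = 0) out vec4  fragColor;
layout(location = 1) out float depthOut;

void main()
{
    fragColor = vec4(1.0);        // whatever the pass normally outputs
    depthOut  = gl_FragCoord.z;   // window-space depth in [0,1]
}
```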

I was considering doing that, however, I don't think it needs to be that precise. As I said, I am only planning on using it for MLAA and discarding fragments on the far plane. I figured that I could probably save a few function calls by taking the depth buffer directly in that case.

I tried your solution of writing to a separate texture (I attached a depth renderbuffer instead), and am having difficulty writing to it. Would the fragment shader be something like:

(where there is other code in place of the ... depending on which shader I am using - I should probably combine them all into one with a subroutine switch)
This ends up producing a texture filled entirely with 1.0, and attempting to replace gl_FragCoord.z with a constant ( depthOut = 0.5; ) produces the same result.
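One thing I still need to verify is that both outputs are actually enabled on the FBO, i.e. something like:

```c
/* Enable both color outputs; without this, writes to the second
   fragment output are silently discarded. */
GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);
```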

This is basically what I am doing. I actually use a GL_RGBA32F texture because I collect a few things, so I have a vec4 and store gl_FragCoord.z in one component and other info in the others.
If you bind the texture to an attachment point and do not write to it in the shader, my driver seems to write 0 to each component. I would make sure you only bind it to an attachment point for shaders that write to it.