Hello friends, do you know if it's possible to draw an object on top of everything else in the z-depth order? For example, in a third person shooter, when the player is behind an obstacle, make the player glow or something so it gets drawn on top of the obstacle.. (don't worry about the obstacle calculation, it was just an example, I only need the "always on top" shader). I tried to tinker with z-depth in GLSL in FTE but with no success.. it's like it's impossible to retrieve the z-depth of the scene.

I don't understand whether it's a matter of depth calculation in FTE or a pure GLSL topic. Plus, if anyone has succeeded in retrieving the z buffer in GLSL in FTE and would like to share it here, he/she would be a saint to me!

Disclaimer: I don't know glsl. That said, I would start looking for a way to DISABLE any z-buffer test/writing. Reading from the z-buffer probably is a very expensive operation in terms of performance, if it's even supported in glsl.


frag.machine wrote:Reading from the z-buffer probably is a very expensive operation in terms of performance, if it's even supported in glsl.

No no, reading the z-buffer IS supported in glsl; in fact it's a crucial part of many important post processing operations (screen space ambient occlusion, outline shaders, etc.). I don't know how expensive it is, I guess quite a lot. The problem is that once I've stored it, divided by the .w coord and added it to gl_FragCoord, usually nothing happens.

This is an excerpt from a chat with Spike

Spike wrote:depth maps are single-channel (so only the .r component is valid) and when displayed generally result in values close to 1. it's also non-linear, which is fun. using calcLightWorldPos will calculate the world-space position. this is generally not very useful unless you have some second world-space position to compare it to, like the view position. so if it's displaying green then it's because the y coord is positive and the x+z coords are negative, with any actual 0-1 values being such a small part of the map that you're unlikely to even notice them.
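To illustrate the non-linearity Spike mentions, here's a minimal fragment-shader sketch that linearises a sampled depth value, assuming a standard OpenGL perspective projection. The uniform names and near/far values are placeholders I made up, not FTE's actual ones:

```glsl
// hedged sketch: linearising a non-linear depth sample
uniform sampler2D u_depth;  // hypothetical depth texture binding
uniform float u_znear;      // e.g. 4.0
uniform float u_zfar;       // e.g. 8192.0

varying vec2 tc;

float linearDepth(float d)
{
	float ndc = d * 2.0 - 1.0;   // window-space [0,1] back to NDC [-1,1]
	return (2.0 * u_znear * u_zfar) /
	       (u_zfar + u_znear - ndc * (u_zfar - u_znear));
}

void main(void)
{
	float d = texture2D(u_depth, tc).r;  // single-channel: only .r is valid
	float lin = linearDepth(d) / u_zfar; // rescale to roughly 0..1 for display
	gl_FragColor = vec4(vec3(lin), 1.0);
}
```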

So, if I use view coords -> nothing happens; if I use world coords -> nothing happens.. you have to trust me.. the only GLSL post processing effects that I couldn't achieve in FTE are those related to the depth buffer!

In case any of you are curious, this is my basic depth post processing effect

no glsl required... alternatively you can just drawpic it after the 3d scene if you're trying to do post processing. that's what you originally asked for anyway.

glsl has these outputs:
- gl_FragColor/gl_FragData[]/outs (not to be confused with the actual pixel colour)
- gl_FragDepth (overrides the fragment's depth value - using this WILL disable early-z optimisations, so only use it when the only other choice is more overdraw)
- discard; (discards this part of the fragment entirely - can have performance implications as the z values are only known once the fragment shader completes, rather than before)
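As a rough illustration of those three outputs, here's a hedged sketch in old-style GLSL (generic, not FTE-specific; the texture binding is assumed):

```glsl
// sketch of the three fragment outputs listed above
uniform sampler2D s_t0;
varying vec2 tc;

void main(void)
{
	vec4 col = texture2D(s_t0, tc);

	if (col.a < 0.5)
		discard;                   // drop the fragment entirely (late-z cost)

	gl_FragColor = col;                // value handed to the gpu's blend unit
	gl_FragDepth = gl_FragCoord.z;     // explicit depth write (the default value,
	                                   // shown only for illustration - any write
	                                   // here disables early-z)
}
```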

the fragment vs pixel distinction is valid - the glsl does NOT write the colour that the pixel will become, rather it writes a value that will be passed to the blend unit of the gpu. the fragment shader doesn't have access to the depth buffer or the colour buffer - only the blend unit does.

gl_FragDepth contains the depth of the fragment, not that of the framebuffer. Reading it is explicitly disallowed (you should be able to calculate it regardless). This is why you're normally expected to use render-to-texture or whatever first, if you want access to the 'framebuffer' (ie: by making a copy of it first) - this avoids weird race conditions.

then if you have a mypostprocshader shader with maps $sourcecolour then $sourcedepth, then you can read from s_t1 - the red channel will hold the depth values. note that if you try to directly draw it then you'll find that there's not much difference between any of the pixels. be prepared to rescale it by a lot before you can actually see any clear differences.
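A hedged sketch of what such a pair might look like. The shader and program names are invented, and the exact script keywords should be checked against FTE's shader docs; the glsl assumes FTE's combined-source convention (#ifdef VERTEX_SHADER / FRAGMENT_SHADER) and its ftetransform() helper:

```
// hypothetical mypostprocshader script: $sourcecolour binds to s_t0,
// $sourcedepth to s_t1, per the post above
mypostprocshader
{
	program mydepthview
	{
		map $sourcecolour
	}
	{
		map $sourcedepth
	}
}
```

```glsl
// mydepthview.glsl - sketch only
varying vec2 tc;
#ifdef VERTEX_SHADER
attribute vec2 v_texcoord;
void main(void)
{
	tc = v_texcoord;
	gl_Position = ftetransform();
}
#endif
#ifdef FRAGMENT_SHADER
uniform sampler2D s_t0;  // $sourcecolour
uniform sampler2D s_t1;  // $sourcedepth
void main(void)
{
	float d = texture2D(s_t1, tc).r;  // red channel holds the depth
	d = (d - 0.99) * 100.0;           // rescale hard: raw values sit near 1
	gl_FragColor = vec4(vec3(d), 1.0);
}
#endif
```

The (d - 0.99) * 100.0 rescale is an arbitrary choice that stretches the 0.99..1.0 range to 0..1, matching the advice above to "rescale it by a lot" before differences become visible.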

note that you may need to draw two scenes if you want to do weird depth compares - first time to generate the normal scene, second time you have a depth buffer that you can read to compare against.

alternatively if you're using fte's deferred lighting, you can use the $gbufferN image indicated by gl_deferred_pre_depth, in any shader with a sort key of unlitdecal, banner, underwater, blend, additive, or nearest, or you can use it freely in post-processing shaders. note that you can sample any of the gbuffer images after that point, so you can have different (opaque) entities writing into one of the channels that you can then read out later. but yeah, I'll probably end up breaking that method again at some point. I get bored, see...

using .forceshader, you can draw the ent into the gbuffers, and then add it to the scene using a shader with a different sort key.

note that a 'sort nearest' shader with 'depthfunc greater' will draw only where the thing you're trying to draw was obscured. using forceshader and two addentity calls you can get weird overlays working. using the undocumented/untested VF_RT_DESTCOLOUR1 value, you can draw stuff to a different image (or you could try to figure out some way to keep the alpha channel usable, like using alphamask in all your other shaders). don't underestimate blend funcs using gl_dst_alpha either. you can then run an edge finding post-process shader to draw outlines for obscured ents. no depth buffer reading needed.
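A hedged sketch of what the obscured-only overlay shader might look like in Q3-style shader script, for the second addentity call via .forceshader. The name and colour are invented and the keyword set should be verified against FTE's shader documentation:

```
// hypothetical overlay: draws only where the entity is hidden,
// because 'depthfunc greater' passes only fragments behind the
// depth already in the buffer
obscured_overlay
{
	sort nearest
	{
		map $whiteimage
		rgbGen const ( 0.2 1.0 0.2 )  // flat tint for the hidden parts
		depthfunc greater
		blendfunc add
	}
}
```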

Spike, the first one didn't work well.. but this one worked perfectly!! That was EXACTLY what I was looking for! Thanks a bunch, man!

Thanks a lot for the GLSL code and the super detailed explanation. I tried your code but I still see a white screen, so I'll write some examples and when I have something useful to show I'll post it here!