deferred spotlight shadow map value issues

hello, i would like some fresh ideas to help solve my issue.

so i have an engine with deferred rendering. i've already implemented point lights with dual paraboloid shadows/pcf, and they're working fine.
but i've stumbled upon some unexpected issues trying to implement cone spotlights in a similar way, with a single projective shadow map.

and i see no shadow map effect until i set attenuation to something around 0.99-1.0; obviously, that won't produce a correct depth comparison. but the coord.xy values are correct. i guess it has something to do with perspective... i tried different manipulations with the attenuation value/W component - no acceptable result. if i manually output linearized depth values from the shadow map - they are correct.

and i see no shadow map effect until i set attenuation to something around 0.99-1.0; obviously, that won't produce a correct depth comparison. but the coord.xy values are correct. i guess it has something to do with perspective...

Your transform math and your comment about it don't make much sense to me. What puzzles me is that you would have already beaten the bugs out of your transform math in doing a standard omni point light source -- which you said works fine -- and adding a cone is just a small extension to that.

I'll explain my puzzlement in a minute, but I would have expected you to be reading the camera WINDOW-SPACE depth value from the G-buffer, back-projecting that with the fragment's WINDOW-SPACE XY position to get a camera EYE-SPACE position for the fragment, then transforming that through WORLD-SPACE to the light's EYE-SPACE, and on to the light's CLIP-SPACE. Then you apply your *0.5+0.5 bias to X, Y, & Z. And then you do your w-divide, shadow map lookup, and depth comparison (which is what shadow2DProj does). I'm not seeing that here. Graphically, it's the standard shadow mapping transform diagram (courtesy of Paul's Projects).
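In shader terms, the light-side half of that chain might look roughly like this (a sketch only -- the names are made up, and eyePos stands for the camera EYE-SPACE position recovered from the G-buffer):

Code glsl:

uniform mat4 cameraEyeToLightClip;   // camera EYE -> WORLD -> light EYE -> light CLIP
uniform sampler2DShadow shadowMap;

float shadowLookup(vec4 eyePos)
{
    // forward through WORLD-SPACE into the light's CLIP-SPACE
    vec4 coord = cameraEyeToLightClip * eyePos;

    // *0.5+0.5 bias on X, Y and Z, applied pre-divide so it survives the w-divide
    coord.xyz = coord.xyz * 0.5 + 0.5 * coord.w;

    // w-divide, shadow map lookup and depth comparison in one call
    return shadow2DProj(shadowMap, coord).r;
}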

Some of my puzzlement is the following. If modelMatrix is what it sounds like (a camera MODELING transform), the first line makes no sense. As to the second: even if we assume that attenuation gets you the distance from the fragment to the light source, linearly scaled to 0..1 within 0..lightRadius, that's still a linear, radial value, and it bears little resemblance to the biased light's clip-space depth value that you should be using for the shadow map depth comparison. And of course the w-value for the lookup position wouldn't ever be 1 for a perspective projection. There's no clue here what backPosition is, so I can't really trace the coord.xy logic.

So there are some resemblances to shadow map logic here, but not enough to convince me this is right.

hope this clears up everything. i will review my matrix math now. and yes, i don't get what i should use as the W component for the shadow lookup. for dual paraboloid it was a lot simpler, because all the significant transformations were manual and happened in the same place. here i get a bit confused by all the spaces.

which takes us from the camera's EYE-SPACE to WORLD-SPACE -> light's EYE-SPACE -> light's CLIP-SPACE. And then it stacks on the *0.5+0.5 bias matrix that shifts the position (post perspective divide) from -1..1 range to 0..1 range, in X, Y, and Z. Note that the entire product of matrices in the parenthesis can and probably should be precomputed per-frame and uploaded to the shader in a single matrix uniform. Then you only do one matrix transform and not 4.
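For instance (a sketch -- the matrix names here are assumptions, and the bias matrix is written in GLSL's column-major constructor order):

Code glsl:

// On the CPU, once per frame:
//   shadowMatrix = bias * lightProjection * lightView * inverse(cameraView);
// where bias is the *0.5+0.5 scale-and-shift:
//   mat4 bias = mat4(0.5, 0.0, 0.0, 0.0,
//                    0.0, 0.5, 0.0, 0.0,
//                    0.0, 0.0, 0.5, 0.0,
//                    0.5, 0.5, 0.5, 1.0);
uniform mat4 shadowMatrix;  // camera EYE-SPACE -> biased light CLIP-SPACE

vec4 coord = shadowMatrix * eyeSpacePos;       // one transform instead of four
float lit  = shadow2DProj(shadowMap, coord).r;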

I get that you're trying to get the WORLD-SPACE position here. However, this shouldn't compile because in the first line you're assigning a vec4 to a vec3. I'll move on assuming that's just a typo. With that fix, it should give you a WORLD-SPACE position "assuming" the input is truly a camera EYE-SPACE position.

However, note that you generally shouldn't use world-space positions on the GPU because these could have large magnitude -- but you can leave that as a nuance for later. If your world-space positions are tiny, should work fine.

Houston, we have a problem. Since your light source is a point light source, your light projection matrix is perspective. Perspective projections make use of the .w coordinate (that is, after applying it, .w is typically not 1). You can't just thunk down to vec3 here. You need to keep this as vec4.
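In other words (a sketch; shadowMatrix and eyePos are placeholder names):

Code glsl:

// WRONG: truncating to vec3 throws away the perspective .w
// vec3 coord = vec3(shadowMatrix * eyePos);

// RIGHT: keep the vec4; shadow2DProj does the divide by .w for you
vec4 coord = shadowMatrix * eyePos;
float lit = shadow2DProj(shadowMap, coord).r;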

Code glsl:

float len = length(coord.xyz);
coord /= len;

Why are you doing this? I think I know what you're "trying" to do. That is, you're trying to fit things down into some unit box. But this doesn't do it. What you might not appreciate is that's what the projection matrix does for you. It squeezes things down such that (post-perspective-divide, which comes later), the entire view frustum fits neatly into a (-1..1, -1..1, -1..1) cube. This post-perspective-divide "cube" space is called NDC. Read about it in the Viewing chapter of the OpenGL Programming Guide for instance.
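Side by side (sketch):

Code glsl:

// radial normalization -- NOT what gets you into NDC:
// coord /= length(coord.xyz);

// the projection matrix already arranged things so that the
// perspective divide lands the frustum in the -1..1 NDC cube:
coord.xyz /= coord.w;   // coord.xyz is now in NDC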

Code glsl:

coord.xy = coord.xy * 0.5 + 0.5;

Yeah, you're gonna need a bias as the last step before the texture lookup, but this only biases X and Y. You need to bias Z too. Recall I said that NDC is (-1..1, -1..1, -1..1) (that is, in X, Y, and Z). And what you want in the end is to get to a space "like" NDC, but which has the extents (0..1, 0..1, 0..1). So you just scale and shift NDC to fit.
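That is (sketch, assuming coord is already in NDC after the divide):

Code glsl:

// bias all three components, not just X and Y:
coord.xyz = coord.xyz * 0.5 + 0.5;   // NDC (-1..1) -> texture/depth range (0..1)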

I don't really have a clue on what this is doing. Where did this come from? Note that the first line computes a "radial" distance, but a standard depth buffer (like a shadow map) encodes a distance along the EYE-SPACE Z-axis. This is not a radial distance.

ok, i've done it. the real problem for me was the fact that i lost the W component at some point, replacing it with 1.0 and trying to treat it like a linear point light. but that was, not surprisingly, identical to basic shadow mapping. it should go like this:
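roughly this (simplified, with made-up uniform names):

Code glsl:

// what i had: .w lost somewhere, effectively
//   coord.w = 1.0;   // degenerates to plain shadow mapping
// what it should be: keep the perspective .w all the way to the lookup
vec4 coord = shadowMatrix * eyePos;
float lit = shadow2DProj(shadowMap, coord).r;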

the vec3 thing was a typo. before showing you the code, i edited it a lot, because until i finalize a feature, i make a mess in the code. i think that's common. and i tried to make a readable, isolated example.

i will concentrate all the transformations to light-space into a single matrix. what you've seen is dirty code. first finalize, then optimize.

can you explain

However, note that you generally shouldn't use world-space positions on the GPU because these could have large magnitude -- but you can leave that as a nuance for later. If your world-space positions are tiny, should work fine.

that part? in what space should i make my computations? what is the better way to store positions? you mentioned reconstruction of position from depth... but isn't it expensive? i use position a lot. i have deferred global light with specular, local point lights with specular/shadows, soft-edged water, and now spotlights with shadows. i think i'm gonna lose a lot of performance with position reconstruction routines. for tests, i have a scene of about 4000 units and didn't notice significant problems with lighting. global lighting is done in world-space.

also i'm facing a problem doing projective texturing for this light. strangely, when i use the same coordinates i've generated for the shadow map with texture2DProj - they don't work. the texture offsets when the light is rotated.

However, note that you generally shouldn't use world-space positions on the GPU because these could have large magnitude -- but you can leave that as a nuance for later. If your world-space positions are tiny, should work fine.

Fundamentally, you're not doing anything different (same source and destination spaces in your transformation chain). The only thing you do is optimize one thing: premultiply (lightMatrix * cameraViewInverse) on the CPU and call that lightMatrix. Then there's no need for your shader to deal with WORLD-SPACE positions. If your world is tiny though, you don't care about this.

you mentioned reconstruction of position from depth... but isn't it expensive?

It's not too bad. We're talking a few compute cycles vs. more memory (6-12 more bytes per sample in the G-buffer for storing eye-space X,Y,Z rather than just using the depth buffer you're writing anyway). Depends on your specific code and GPU, but compute usually wins the race. Once you get your shadows working, time it both ways. Or you can just stay with storing eye-space X, Y, and Z in your G-Buffer.
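The reconstruction itself is only a handful of instructions (a sketch, assuming a standard depth buffer and an inverse-projection uniform):

Code glsl:

uniform sampler2D depthTex;
uniform mat4 invProjection;   // camera CLIP-SPACE -> camera EYE-SPACE

vec3 eyePosFromDepth(vec2 uv)
{
    float depth = texture2D(depthTex, uv).r;            // window-space, 0..1
    vec4 ndc = vec4(vec3(uv, depth) * 2.0 - 1.0, 1.0);  // back to -1..1
    vec4 eye = invProjection * ndc;
    return eye.xyz / eye.w;                             // undo the perspective divide
}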

also i'm facing a problem doing projective texturing for this light. strangely, when i use the same coordinates i've generated for the shadow map with texture2DProj - they don't work. the texture offsets when the light is rotated.

Dunno. I don't have enough info to take a guess -- but I'm sure you'll figure it out!

Dunno. I don't have enough info to take a guess -- but I'm sure you'll figure it out!

that is disappointing. i thought you would point out some obvious mistake i made. i can't find any difference in math between shadow mapping and projective mapping. i used exactly the same texture coordinates i used for the working shadow mapping in the same shader, and it doesn't position the projected texture properly if i rotate the light. example: