Omnidirectional shadow mapping and wrong space?

I've been banging my head against the wall with point-light shadow mapping, and my latest idea is that I might be comparing two depth values from different spaces. I'd like confirmation on whether that's the case.

The shadow pass draws all the geometry from 6 different angles originating at the point light, using the light's own view and projection matrices. Meanwhile, the direction vector ("positionDiff") used both to sample the shadow map and to compute the comparison depth value (via "VectorToDepthValue") is in world space. Is this a potential error?

At first glance, the guts of VectorToDepthValue() look correct. Assuming the input to VectorToDepthValue() is in the light's eye space (it had better be; if not, that's a bug), the guts essentially run the Z coordinate through the light's projection transform, do the perspective divide, and remap -1..1 to 0..1 to turn it into a window-space Z value for comparison with the shadow-map depth values.
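For reference, the remap described above can be sketched in plain C like this. The near/far values and the function name are my own placeholders, not from the code in question; the planes must match whatever projection was used when rendering the light's shadow cube map:

```c
#include <math.h>

/* Assumed near/far planes -- placeholder values; these must match the
   projection used when rendering the light's shadow cube map. */
static const float NEAR_PLANE = 0.1f;
static const float FAR_PLANE  = 100.0f;

/* Take a positive eye-space distance along the view axis, run it through the
   Z row of a standard GL perspective projection, do the perspective divide,
   and remap NDC -1..1 to window-space 0..1. */
static float eye_z_to_window_depth(float eye_z)
{
    float ndc_z = (FAR_PLANE + NEAR_PLANE) / (FAR_PLANE - NEAR_PLANE)
                - (2.0f * FAR_PLANE * NEAR_PLANE)
                  / ((FAR_PLANE - NEAR_PLANE) * eye_z);
    return ndc_z * 0.5f + 0.5f; /* -1..1 -> 0..1 */
}
```

Sanity check: geometry at the near plane should come out as depth 0, and geometry at the far plane as depth 1, matching what the hardware wrote into the shadow map.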

If positionDiff is in world space and not the light's eye space, then that's a bug. From what you're saying, it sounds like it is in world space, which is the problem.

It definitely is in world space; I'll try to get it into light space instead. A couple of follow-up questions arise, though:

1. What I don't understand, though, is how you would put it in light space, since I have 6 different view matrices, one for each cube face. Looking at the code you linked, their "light_view_matrix" seems to be only a translation matrix; given that my lightPosition is already in world space, isn't that the same thing?

2. Looking at the shader source you posted, why doesn't it subtract the light's position from the vertex position? Won't the cubemap coordinates come out wrong otherwise? More like this:

If you look at the code I linked to, there's a base light viewing matrix for the "forward" direction, plus 6 "face" matrices that are stacked on top of it when rendering the cube-map faces, reorienting it to point in the direction of each face.

However, in the shader, when sampling the cube map, only the base light viewing matrix for the "forward" direction is used, and the vector in that space is what's used as the direction vector for your cube-map lookup.

The magic that grabs the appropriate value from this vector to use as the face-specific depth value (for the comparison, once projected) is:

But what is the "forward" direction in this case? I find the term confusing; I can understand a forward vector in terms of a directional light, but what is it for omnidirectional lights? What is the purpose of the base light view matrix?

Looking at the code:

// create the light view matrix (construct this as you would a camera matrix)
glLoadIdentity();
glTranslatef(-light_position_ws[0], -light_position_ws[1], -light_position_ws[2]);
glGetFloatv(GL_MODELVIEW_MATRIX, light_view_matrix);

Yeah, for omni it seems arbitrary. But I think you agree that (for cube-map face rendering purposes) you need viewing transforms for each face. You have a base viewing transform that gets you into the eye space for one face, and then a "rotate" transform you can apply on top of it that moves you from that face to each of the others.

True, there's no "real" forward direction on an omni light. But since OpenGL eye space looks down the -Z axis with +Y up, it turns out that (I think) the CUBE_MAP_NEGATIVE_Z face ends up being your base transform here (the face matrix for it should be the identity). So let's call that your base direction.
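To make the base-vs-face relationship concrete, here's a rough sketch in plain C (my own naming, not from the linked code). It rotates a vector from the base (-Z-looking) light eye space into a given face's eye space, with the NEGATIVE_Z face as the identity. Note that real cube-map face matrices also include per-face up-vector flips required by the cube-map face-orientation rules, which I'm leaving out here:

```c
/* Rotate a vector from the base (-Z-looking) light eye space into a given
   cube-map face's eye space. Face order: 0=+X 1=-X 2=+Y 3=-Y 4=+Z 5=-Z.
   NOTE: real cube-map face matrices also flip the up vector per the
   cube-map face-orientation rules; that detail is omitted in this sketch. */
static void rotate_to_face(int face, const float in[3], float out[3])
{
    float x = in[0], y = in[1], z = in[2];
    switch (face) {
    case 0: out[0] =  z; out[1] =  y; out[2] = -x; break; /* +X:  90 deg about Y */
    case 1: out[0] = -z; out[1] =  y; out[2] =  x; break; /* -X: -90 deg about Y */
    case 2: out[0] =  x; out[1] =  z; out[2] = -y; break; /* +Y: -90 deg about X */
    case 3: out[0] =  x; out[1] = -z; out[2] =  y; break; /* -Y:  90 deg about X */
    case 4: out[0] = -x; out[1] =  y; out[2] = -z; break; /* +Z: 180 deg about Y */
    default: out[0] = x; out[1] =  y; out[2] =  z; break; /* -Z: base, identity  */
    }
}
```

The point being: every face shares the same translation (the base light view matrix); only the stacked rotation differs per face. That's why the shader can get away with using the base matrix alone to produce the lookup direction.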

The view matrix consists of a translation component and a rotation component, right? So the "base matrix", which we agree I need in the shader, is just the translation matrix? Does that translation alone "put me in eye (light) space"?