A few issues with Shadow mapping & Deferred Shading

My goal, captured perfectly on Stack Overflow, is as follows:

A user tried to do shadow mapping with a deferred renderer, calculated the shadow coordinates as if it were a forward renderer, and got this reply:

The process is actually the opposite. You don't convert texture coordinates to the clip space but you convert coordinates from the clip space to the texture. For this to work you need to pass the light camera and projection to the fragment shader (and pass the position from vertex shader to fragment shader at least in OpenGLES 2.0, don't know about OpenGL 3.3).

Multiply position by camera and projection and you'll get the position in the light's view.
Divide xyz by w and you'll get the position in light view's clip space.
Multiply x and y by 0.5 and add 0.5 and you'll have the uv coordinates of the shadow map.
Now you can read the depth from the shadow map at uv and compare it to your pixel's depth.

Since I am still a novice OpenGL programmer, I found this tutorial that did exactly that, but with the caveat that it uses deprecated GL functionality.
(http://www.codinglabs.net/tutorial_o...w_mapping.aspx)
I tried my best to convert it; shadow mapping works, but the shadows are not projected into the world correctly.

These were my assumptions, though they could be wrong (hopefully this explains my current issues):

The tutorial reads GL_MODELVIEW_MATRIX and GL_PROJECTION_MATRIX for both the camera and the light.

I am unsure about the conversion, but I build my Model-View-Projection matrices with GLM and did the following:

The author passes three matrices as uniform variables and computes the shadow coordinate from them.

I have confirmed that the shadow map is generated from the light's P*V*M; the position, projection, etc. look correct when I draw the texture to screen. Also, even though the shadows jump about when I move the camera, I can see the expected shape in the shadow.

I have confirmed that the camera (my viewport) is correct.

What I am unsure of is what the deprecated GL_MODELVIEW_MATRIX returns via glGetFloatv as compared to my expected matrices.

With shadow mapping, you should have two model-view-projection matrices (if you have a separate model-view matrix and projection matrix, just concatenate them). One is the camera transformation, the other is the light transformation (i.e. the transformation which was used when rendering the shadow map).

The vertex shader should transform the incoming vertex coordinates by each transformation. The result of transforming the vertex by the camera transformation should be stored in gl_Position. The result of transforming the vertex by the light transformation should be converted to a vec3 (by dividing by w) then converted from the -1..+1 range used for normalised device coordinates to the 0..1 range used for texture coordinates and depth values, then stored in an output variable.
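The forward-rendering version described above might look like this in GLSL; a sketch, with uniform and variable names that are my own invention:

```glsl
#version 330 core

layout(location = 0) in vec3 position;

uniform mat4 cameraMVP;  // camera's model-view-projection
uniform mat4 lightMVP;   // light's MVP, as used when rendering the shadow map

out vec3 shadowCoord;    // shadow-map UV + depth, in [0,1]

void main()
{
    gl_Position = cameraMVP * vec4(position, 1.0);

    vec4 lightClip = lightMVP * vec4(position, 1.0);
    // perspective divide, then remap NDC [-1,1] to the [0,1] range
    // used for texture coordinates and depth values
    shadowCoord = (lightClip.xyz / lightClip.w) * 0.5 + 0.5;
}
```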

Thanks for the response. I noticed that I had a typo in my first leading paragraph, using the words "forward renderer", which might have been misleading (sorry!). (Fixed that.)

Unless I am totally mistaken, your point covers forward rendering. I want to compute the shadow coordinates in the lighting pass (after the geometry pass). The steps you cover are no longer possible there (unless I want to store the shadow coordinates in the G-buffer, which is a big no-no).
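For the deferred case, the same math simply moves into the lighting-pass fragment shader: fetch (or reconstruct from depth) the world-space position from the G-buffer, then run it through the light's matrix there, per fragment instead of per vertex. A sketch, assuming the G-buffer stores world-space position; all names here are my own, and the bias value is a typical placeholder:

```glsl
#version 330 core

in vec2 uv;                      // full-screen quad texture coordinate
out vec4 fragColor;

uniform sampler2D gPosition;     // G-buffer: world-space position
uniform sampler2D shadowMap;     // depth rendered from the light's view
uniform mat4 lightVP;            // light's view-projection (no model matrix:
                                 // G-buffer positions are already world space)

void main()
{
    vec3 worldPos = texture(gPosition, uv).xyz;

    // same steps as forward: clip space, perspective divide, bias to [0,1]
    vec4 lightClip = lightVP * vec4(worldPos, 1.0);
    vec3 shadowCoord = (lightClip.xyz / lightClip.w) * 0.5 + 0.5;

    float closest = texture(shadowMap, shadowCoord.xy).r;
    float bias = 0.005;          // small offset against shadow acne
    float lit = (shadowCoord.z - bias) <= closest ? 1.0 : 0.3;

    fragColor = vec4(vec3(lit), 1.0);
}
```

Note that only the world position needs to come from the G-buffer; the light matrix is just a uniform, so nothing shadow-related has to be stored per pixel.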