We have been trying to get cascaded shadow maps working from MJP's sample, but I can't seem to get it right. All I get is a diagonal area of shadow across the ground regardless of which direction I look or where the light is. From my debugging of the shader, it seems like the problem is that the shader that builds the occlusion map is sampling the cascade map from the wrong point. I've gone over the code several times and can't see what would be causing the issue.

This is the part of the shader that calculates the texture coordinate which seems to be wrong:
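For context, the step in question is the standard CSM lookup: transform the pixel's world-space position by the cascade's shadow matrix and use the result as the texture coordinate. A rough sketch of what that part of the sample does (paraphrased from memory, not my exact code; `positionWS` and `shadowMatrix` stand in for whatever names the shader actually uses):

```hlsl
// Transform the world-space position into the cascade's light clip space
float4 shadowPos = mul(float4(positionWS, 1.0f), shadowMatrix);

// Perspective divide (a no-op for an orthographic light projection, but harmless)
shadowPos.xyz /= shadowPos.w;

// Map clip space [-1, 1] to texture space [0, 1]; note the Y flip for D3D
float2 shadowTexCoord = shadowPos.xy * float2(0.5f, -0.5f) + 0.5f;

// Depth of this pixel as seen from the light, compared against the shadow map
float pixelDepth = shadowPos.z;
```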

My guess would be that you're not creating the orthographic projection matrix for the shadow map wide enough to contain the whole scene. Try widening that projection and see if the problem goes away.

Why would that cause the sampling position to be wrong? And how would I just make it bigger? The code for the frustum building is also from MJP's sample, so there shouldn't be any fundamental problems with it unless I ported it wrong.

Without seeing your code, it's hard to say exactly where the issue is. The shader code you posted looks alright. Judging from the images you posted, I've seen that issue before when the shadow frustum hasn't been built correctly and/or the frustum is missing a transform into light space. Also look over the matrices you're passing into the shader to make sure they're correct.

Well, here's the code for the frustum creation. It's possible the issue comes from convention differences between XNA (what the sample uses) and SharpDX (what we're using), but I wouldn't really know.

There is one limitation with the example MJP gives as well that may be problematic for you. Only the viewable objects in the scene are being considered as shadow casters. So if you have an object behind the camera that's casting a shadow in front of the camera then the shadow will not be visible. Depending on your application this may not be an issue, but it's something to keep in mind.

That part of the code is actually changed from the sample. There's a comment on MJP's blog where a user uploaded a changed ComputeFrustum function that is supposed to reduce jitter that occurs when the camera is moving. The code is here: http://pastebin.com/Yn5SVPUP. Is that code actually wrong? Should I just use MJP's original version instead?

Another thing I noticed: in your original images the shadow maps look upside down. Could that be a SlimDX thing? Maybe you're already compensating for it somewhere? Try this: in your .fx file, invert the .y element of your texture coordinates when you check whether a pixel is in shadow. Somewhere in there you will have something like:
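For example (identifier names here are placeholders; substitute whatever your shader actually uses):

```hlsl
// Flip Y before sampling, in case the shadow map is stored upside down
shadowTexCoord.y = 1.0f - shadowTexCoord.y;

// Compare the light-space depth of this pixel against the stored shadow depth
float shadowMapDepth = ShadowMap.Sample(ShadowSampler, shadowTexCoord).r;
bool inShadow = pixelDepth > shadowMapDepth + shadowBias;
```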

The comment from the sample says that position is going to be in view-space, which fits the assumption your equation makes. I've never seen this kind of position reconstruction before and don't really understand it, so I can only assume it's doing what it says.

InverseView here is the inverse of the player camera's view matrix. lightViewProjection is the View * Projection matrix for the cascade camera this pixel is in. Broken down, mul(position, InverseView) matches your step 1. inverse(L) should be the view matrix for the cascade camera and L_proj the projection matrix, so in mine they're combined into one matrix. So I think it is going through the right transforms, right?
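In shader terms, the chain I'm describing is essentially this (a sketch with the matrix names from my shader; position is the view-space position reconstructed earlier):

```hlsl
// Step 1: player camera view space -> world space
float4 positionWS = mul(position, InverseView);

// Step 2: world space -> the cascade's light clip space
// (lightViewProjection = inverse(L) * L_proj folded into one matrix)
float4 positionLightCS = mul(positionWS, lightViewProjection);
```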

This process seems correct, but I think one or more of the matrices is calculated incorrectly. Looking at your screenshots, the sign is pointing to the left on your shadow map (middle). The lighting on the sign in the screenshot supports this (light coming from the right side of the screenshot), therefore the shadow should fall to the left, but it falls to the right.

Check and debug your InverseView first; it should be what puts your pixels into world space. You could color-encode the world position relative to a reference point and a scale factor to check it. The result should be stable even if you rotate your camera. Test shader:
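Something like this (a sketch; `ReferencePoint` and `Scale` are constants you pick yourself so the colors stay in a visible range):

```hlsl
// Debug pixel shader: color-encode the reconstructed world-space position.
// If InverseView is correct, the color pattern painted onto the scene stays
// fixed in place while you rotate the camera without moving it.
float4 PS_DebugWorldPos(float4 positionVS : TEXCOORD0) : SV_Target
{
    // view space -> world space, the transform under test
    float3 positionWS = mul(positionVS, InverseView).xyz;

    // frac() wraps each channel into [0, 1) so the pattern repeats visibly
    float3 color = frac((positionWS - ReferencePoint) * Scale);
    return float4(color, 1.0f);
}
```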

Yes, the camera position. What happens if you stand still and only rotate the camera? In that case the "terrain texture" should not change; if it changes, your InverseView matrix is most likely broken.

Another test: if you only move the camera along the look-at and right axes (no rotation), the color pattern should stay the same (like a projected texture pointing along the up vector, centered at the camera).