Cascaded shadow mapping and GL_TEXTURE_2D_ARRAY?

I've got basic directional shadow mapping working and now I'm looking into improving it with a technique called "Cascaded Shadow Mapping", where you use several depth textures to get better-quality shadows.
More specifically, I believe I can use GL_TEXTURE_2D_ARRAY as my depth attachment to my FBO and specify the number of cascades in glTexImage3D. The problem is I do not understand how the rendering to the depth attachment works when you've got a texture array and I cannot find any examples of this (only examples where you render into color attachments). For example, how is the range determined when selecting which texture to render depth into? How do I access the right layer in the fragment shader during the lighting process?

I've got basic directional shadow mapping working and now I'm looking into improving it with a technique called "Cascaded Shadow Mapping", where you use several depth textures to get better-quality shadows.
More specifically, I believe I can use GL_TEXTURE_2D_ARRAY as my depth attachment to my FBO and specify the number of cascades in glTexImage3D.

Yep.

The problem is I do not understand how the rendering to the depth attachment works when you've got a texture array and I cannot find any examples of this (only examples where you render into color attachments). For example, how is the range determined when selecting which texture to render depth into? How do I access the right layer in the fragment shader during the lighting process?

Use glFramebufferTextureLayer() to bind a specific slice of the texture array to the FBO, passing the slice index as the layer parameter. Use this when you want to render to one slice of the array at a time. You'd want to do this if you are implementing some CPU-side split culling, to avoid throwing the entire world down the pipe for all splits.
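A minimal sketch of that per-split loop (names like `shadowFBO`, `shadowTex`, and the two helper calls are illustrative, not from the thread):

```cpp
// One pass per cascade: attach slice i of the depth array, then draw
// only the casters your split culling kept for that cascade.
glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);
for (int i = 0; i < numCascades; ++i)
{
    glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              shadowTex, 0 /* mip level */, i /* layer */);
    glClear(GL_DEPTH_BUFFER_BIT);

    setLightViewProjection(i);    // hypothetical: upload split i's light VP
    drawShadowCastersForSplit(i); // hypothetical: draw split i's casters
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```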

An alternative, if you don't have any smart split culling, is to blast every object rendered into your shadow maps to multiple splits simultaneously. There you'd use glFramebufferTexture() to bind the entire texture array (all slices) and use layered rendering to select which slices get rasterized to. Generally, I wouldn't expect this to be as efficient, but think about it within your specific problem domain to be sure.
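A sketch of that layered-rendering path, assuming 4 cascades and a vertex shader that passes world-space positions through `gl_Position` (`uLightVP` is an illustrative uniform name). The geometry shader writes `gl_Layer` to pick which slice of the bound array each emitted triangle is rasterized to:

```glsl
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 12) out; // 3 vertices * 4 cascades

uniform mat4 uLightVP[4]; // one light view-projection per split

void main()
{
    for (int layer = 0; layer < 4; ++layer)
    {
        gl_Layer = layer; // selects the texture array slice
        for (int v = 0; v < 3; ++v)
        {
            gl_Position = uLightVP[layer] * gl_in[v].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}
```

In the lighting pass you'd then pick a split by comparing the fragment's view-space depth against your split distances, and sample the map with a `sampler2DArrayShadow`, passing the split index as the array-layer coordinate.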

Among other links, check out this for docs and code (search down to "Cascaded Shadow Maps").

I've been looking at the article, and one thing I don't understand is what the "crop matrix" they talk about is, and what it is used for.

All they're saying is that they're choosing the orthographic projection for each split (i.e. the "cube" of world space that is chopped out by this light split's frustum) to be tightly fit to the corners of the AABB around that view frustum split's corners. The "crop matrix" is just the "scale and translate" projection matrix that takes the region in this AABB and squeezes/shifts it to be within -1..1 (the clip-space cube) for clipping.

Since this is an orthographic projection, where there is effectively no perspective divide, you can think of this clip-space cube as essentially NDC. See the docs on that space in the OpenGL Programming Guide.

The "view frustum" you mention is the camera's view-projection matrix, right? Is there any way to "chop up" the camera's view-projection matrix for each of the cascade ranges, or do you have to recalculate the view-projection matrix for each cascade?

The "view frustum" you mention is the camera's view-projection matrix, right?

Yes. And a "view frustum split" is one piece of the (as you put it) "chopped up" view frustum.

Is there any way to "chop up" the camera's view-projection matrix for each of the cascade ranges, or do you have to recalculate the view-projection matrix for each cascade?

I may be missing something in your question, but you don't "chop up" the camera's view-projection matrix. You chop up its view frustum into pieces, and then you fit a light-space frustum around each.

Ok - do I use the camera's view matrix, and then for each split generate a new perspective projection matrix, using the split range as the near/far distances?

Each camera split has the same VIEWING matrix, yes (same eyepoint, same look direction, same up vector), but different PROJECTION matrices (really only differing in their near and far clip values). But since you "typically" don't render each camera frustum split separately from the camera's perspective, you typically wouldn't care about having separate camera transforms per split.

Something like:

Code :

...

This would give me an AABB around the light-space frustum for each split, right?

I haven't checked your math, but I see what you're trying to do: start in clip space and backproject the camera frustum corners to get them into WORLD space. Conceptually this is reasonable, but since this is a perspective frustum, it alone isn't going to give you an AABB as you said. Remember the shape of a perspective frustum. Also, it looks a bit strange to me that you are doing a perspective divide here on a back-projection, and after moving back to world space. Recall that a perspective divide happens after the PROJECTION transform on a "forward" projection.

Doing the inverse camera view-projection and then dividing by W, doesn't that put the frustum corners in world space?

And once I have the camera's frustum, I want to split it into sub-frusta, one per split. Say I decide on [0, 5] for the first split; how do I find the min/max X/Y/Z? The minZ/maxZ are 0 and 5, but the X/Y? Is there an easy way to get that?