
Hey all, I'm trying to do omnidirectional shadow mapping for a point light with a cube map, but am running into some issues when it comes to sampling the depth values in my second pass. I've put some sample renders up on imgur at /a/74JLz (I can't seem to post links or images? I guess because I'm new?) where you can see the errors. It seems to depend somewhat on model complexity: I see the most errors in the Suzanne render, quite a few on the polyhedrons, but none on the cubes. Earlier on I color coded things based on the cube face they rendered to, to help make sure I was looking up the right faces in my second pass, which is why the colors are a bit funky.

The actual shadow lookup is done in world space by using the vector from the light to the fragment, and comparing against the fragment's z coordinate after transforming by the cube face's view and projection matrices. The code for the project can be viewed on github at Twinklebear/Deferred-Rendering/tree/layered_rendering_test (I can't post URLs at all?) but I'll post the important snippets below.

The shadow pass uses the shaders (located under res/): vlayered_instanced (vertex), glayered_test (geometry) and fshadow (fragment). The cube face is selected via layered rendering in the geometry shader, which for now just amplifies each primitive and outputs it to every face of the cube map. The pass draws to an FBO with color and depth cube maps attached, images of which are also in the album, and the draw happens on lines 345-355 of main. As far as I can tell the shadow pass itself goes well.
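For reference, the core of that geometry shader amplification looks something like this (a minimal sketch of the approach rather than the exact code from the repo; the face_vp uniform name is illustrative):

```glsl
#version 330 core
layout(triangles) in;
// One copy of the triangle per cube face: 6 faces * 3 vertices
layout(triangle_strip, max_vertices = 18) out;

// View * projection matrix for each cube face, uploaded from the app
uniform mat4 face_vp[6];

void main() {
    for (int face = 0; face < 6; ++face) {
        // gl_Layer routes the emitted primitive to that cube map face
        gl_Layer = face;
        for (int i = 0; i < 3; ++i) {
            // Assumes the vertex shader passed through world-space positions
            gl_Position = face_vp[face] * gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}
```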

The shadow pass render call lives in main; Model::bindShadow binds the model's shadow pass program and VAO, and both models use the same program.

The second pass fragment shader (fshader) is where I think the issue is, but I'm not really sure. The shader finds the view, light and half vectors for Blinn-Phong shading, then uses the negative light vector (ie. from light->fragment) to look up which cube face we're closest to, using a pretty naive method of just finding the largest dot product with the cube face normals. From there we compute the depth of the fragment for that face by applying the face's view and projection matrices, followed by perspective division and scaling. The lookup in the cubemap texture is done with the negative light vector (light->fragment), taking the z coordinate of the shadow position as the depth to compare against for that face.
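In case it helps discussion, here's roughly what that lookup logic amounts to (a sketch, not the exact code from fshader; the uniform and function names are made up for illustration):

```glsl
// Sketch of the second-pass shadow lookup. Names are illustrative.
uniform samplerCube shadow_map;
uniform mat4 face_vp[6];   // view * projection for each cube face
uniform vec3 light_pos;

const vec3 face_normals[6] = vec3[](
    vec3( 1, 0, 0), vec3(-1, 0, 0),
    vec3( 0, 1, 0), vec3( 0,-1, 0),
    vec3( 0, 0, 1), vec3( 0, 0,-1)
);

float shadowFactor(vec3 world_pos) {
    vec3 to_frag = world_pos - light_pos;   // light -> fragment
    vec3 dir = normalize(to_frag);
    // Naive face selection: largest dot product with the face normals
    int face = 0;
    float best = dot(dir, face_normals[0]);
    for (int i = 1; i < 6; ++i) {
        float d = dot(dir, face_normals[i]);
        if (d > best) { best = d; face = i; }
    }
    // Depth of this fragment as seen from that face
    vec4 shadow_pos = face_vp[face] * vec4(world_pos, 1.0);
    // Perspective division, then scale NDC z from [-1, 1] to [0, 1]
    float frag_depth = (shadow_pos.z / shadow_pos.w) * 0.5 + 0.5;
    float stored = texture(shadow_map, to_frag).r;
    // A small bias here is a common addition to avoid shadow acne
    return frag_depth > stored ? 0.0 : 1.0;
}
```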

The color is then chosen based on the face index and the shadow lookup value is factored into the lighting calculations.

Let me know if there's any more information that would be helpful; I've been stumped by this for a bit. Sorry that I can't seem to post the images or URLs to the code, which makes it a huge pain to go view them, but please do at least look at the images (on imgur at /a/74JLz) as they show the issue much more clearly than I can explain. If you want to run the program yourself you'll need SDL2, GLEW and GLM, and on Windows you should have the environment variables SDL2, GLEW and GLM set to the root folders of those libraries so CMake can find them. You can also view the color or depth of the cube map faces by pressing D (depth) or C (color) and picking a face with 1-6; the scene view is chosen with S.

Thanks folks.

Edit: I put up some clearer, higher resolution renders, and also noticed that the missing fragments are pure black, while any fragment that's part of a model should have at least some very low ambient color. Very strange.

From some further fiddling (ie. rotating the cubes) it seems the issue is somehow provoked by the face's angle, although on the cubes, where the normal is the same across the entire face, the gaps only appear in portions of the face instead of the whole thing. So perhaps it's really more related to the light direction? The gaps also remain in the same location when viewed from a different angle, so I don't think the viewing angle has an effect. These new images have also been added to the imgur album.

Edit: I realized I shouldn't be normalizing the PCF value, and am starting to think the issue is with how I'm scaling shadow_pos.z to match the range of values in the cube map's depth texture. The texture itself is DEPTH_COMPONENT_32F, but I'm having trouble finding how to scale the values to match it properly.
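For what it's worth, what the shadow pass stores in the depth texture is gl_FragCoord.z, which with the default glDepthRange(0, 1) is the NDC z mapped to [0, 1], even for a DEPTH_COMPONENT_32F texture. So the matching scaling on the sampling side should just be (assuming shadow_pos is the fragment transformed by the face's view and projection matrices):

```glsl
// Perspective divide to NDC, then map z from [-1, 1] to [0, 1] so it
// is comparable against the depth values written in the shadow pass.
float frag_depth = (shadow_pos.z / shadow_pos.w) * 0.5 + 0.5;
```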

Final edit: I solved it! Turns out I had the wrong up vectors for some of my cube face view matrices.