I've been implementing deferred shading, and have finally got it working. However, as expected, performance is terrible when the number of lights increases, as every light is currently being rendered as a full-screen quad.

I'd like to limit the rendering of each light using a scissor region, but I'm having trouble understanding how to calculate the screen-space region affected by the light. I've been trying to follow this tutorial, which is the only source I can find that talks about using scissor testing for deferred shading, but it isn't the easiest to understand.

Does anyone have better references or explanations on how to calculate the region to scissor for a given light? I'm also open to other techniques besides scissor tests, but since I'm already using scissor testing, it seemed a good fit.

I too am interested in a good resource for this, although I think I have a rough idea of how to achieve it. Currently I'm only doing a simple radius check in the fragment shader. However, since implementing 3D picking I think I've realized a different approach. If you take the view_matrix (camera_matrix) and the projection_matrix and find the corners of the light's area of effect like this:
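(The snippet was lost from the post; here's a minimal sketch of the idea, with illustrative names: run candidate corner points of the light's area of effect through the combined view–projection matrix, perspective-divide, and take the min/max of the projected x and y.)

```c
#include <math.h>

/* Column-major 4x4 matrix times vec4, as OpenGL lays matrices out. */
static void mul_mat4_vec4(const float m[16], const float v[4], float out[4])
{
    for (int row = 0; row < 4; ++row)
        out[row] = m[row] * v[0] + m[4 + row] * v[1] +
                   m[8 + row] * v[2] + m[12 + row] * v[3];
}

/* corners[i]: world-space points on the light's sphere of effect,
 * e.g. center +/- radius along the camera's right and up vectors.
 * Writes the NDC bounding rectangle of the projected corners. */
void light_ndc_bounds(const float view_proj[16],
                      float corners[][3], int n,
                      float *min_x, float *min_y, float *max_x, float *max_y)
{
    *min_x = *min_y = 1.0f;
    *max_x = *max_y = -1.0f;
    for (int i = 0; i < n; ++i) {
        float v[4] = { corners[i][0], corners[i][1], corners[i][2], 1.0f };
        float clip[4];
        mul_mat4_vec4(view_proj, v, clip);
        float x = clip[0] / clip[3];   /* perspective divide -> NDC */
        float y = clip[1] / clip[3];
        if (x < *min_x) *min_x = x;
        if (x > *max_x) *max_x = x;
        if (y < *min_y) *min_y = y;
        if (y > *max_y) *max_y = y;
    }
}
```

This ignores lights that cross the near plane (clip.w <= 0), which a real implementation would have to special-case by falling back to a full-screen region.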

You then know the bounds for the scissor rectangle by taking the x and y of each projected corner. I'm sure there's a much better way to do this, and I haven't tested it yet; just figured I'd give you some inspiration to get the wheels turning.

You can just use a sphere / cube to approximate the light's area of effect. Set the radius of the sphere to encompass the radius of the light (probably in the vertex shader). I'm not sure how that compares to using scissor testing, but it should be faster than a fullscreen quad.

Something I have read about that is really nice is to chop the screen up into multiple regions. Say you divide the screen into a 2x2 grid. You figure out which lights are present in each grid cell, so you have to do some BSP-style tests (or just rectangle-overlap tests). Once you know that, you draw a quad over each cell instead of the full screen (you don't even need the scissor test any more), and send all of that cell's lights to the shader at once. It does the lighting for all of those lights in one pass, instead of doing one light and blending with the main scene, then the second light and blending with the main scene again.
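The binning step described above can be sketched like this (the grid size and all names are mine, assuming each light already has a screen-space bounding rectangle):

```c
#include <string.h>

#define GRID 2          /* 2x2 screen grid, as in the example above */
#define MAX_LIGHTS 64

typedef struct { int x0, y0, x1, y1; } Rect;  /* pixel bounds, inclusive */

/* counts[row][col] = how many lights touch that cell;
 * ids[row][col][i]  = indices of those lights.
 * A light belongs to every cell its rectangle overlaps. */
void bin_lights(const Rect *lights, int n, int screen_w, int screen_h,
                int counts[GRID][GRID], int ids[GRID][GRID][MAX_LIGHTS])
{
    int cell_w = screen_w / GRID, cell_h = screen_h / GRID;
    memset(counts, 0, sizeof(int) * GRID * GRID);
    for (int i = 0; i < n; ++i)
        for (int row = 0; row < GRID; ++row)
            for (int col = 0; col < GRID; ++col) {
                int cx0 = col * cell_w, cy0 = row * cell_h;
                int cx1 = cx0 + cell_w - 1, cy1 = cy0 + cell_h - 1;
                /* standard rectangle-overlap test against this cell */
                if (lights[i].x0 <= cx1 && lights[i].x1 >= cx0 &&
                    lights[i].y0 <= cy1 && lights[i].y1 >= cy0)
                    ids[row][col][counts[row][col]++] = i;
            }
}
```

After binning, you draw one quad per cell and upload that cell's light list to the shader, so each framebuffer pixel is blended once per cell rather than once per light.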

For their method, though, do what the other guy said and just draw a 3D box around the x, y, z of your light, sized big enough to cover how far the light shines, and it will project onto the screen as exactly the quad you want. You still won't need scissor testing, since the pixels the box projects to on screen are the only pixels that will be drawn and shaded anyway.

It is really similar to shadow volumes, so you could search for that name too, I guess.

Basically you draw a closed model that completely surrounds the light (it can be a cube or a rough sphere). While rendering this volume you use stencil operations to find which pixels will be illuminated (or shadowed, for shadow volumes).

There are three possibilities for a pixel, depending on which sides of the volume (front or back, relative to the camera) pass the depth test:

If both sides pass the depth test, the pixel is behind the light volume and too far away. It won't be illuminated.

If the back side fails while the front side passes, the pixel is inside the light volume, so it will be illuminated.

If both sides fail the depth test (or the pixel isn't covered by the volume at all), it is not illuminated either.

You use stencil tests to find which pixels are inside the light. First render the back side of the model by culling front faces (and of course write nothing to the depth or color buffers, just stencil), setting a stencil bit to 1 if the depth test fails. After this step the stencil bit will be 1 for each pixel that is in front of the back side of the light volume. Then render the front side of the light volume and shade the light only where the depth test passes and the stencil bit is 1. This ensures you only shade pixels that lie between the two sides of the light volume.

Alternatively you can use two-sided stencil testing: increment the stencil buffer while rendering front faces and decrement it while rendering back faces. If the stencil value is 1 at the end, it means only the front side was rendered and the pixel will be illuminated. Finally render a full-screen quad to shade the light.
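The stencil bookkeeping can be sanity-checked on the CPU. A tiny sketch of the two-sided variant, assuming a GL_LESS depth test where smaller depth values are nearer:

```c
/* Simulates the two-sided stencil trick for one pixel.  With GL_LESS,
 * a volume face "passes" when its depth is less than the scene depth
 * already in the buffer.  Front faces increment the stencil, back
 * faces decrement; a final count of 1 means the scene pixel lies
 * inside the light volume and should be lit. */
int pixel_is_lit(float front_depth, float back_depth, float scene_depth)
{
    int stencil = 0;
    if (front_depth < scene_depth) stencil++;  /* front face passes */
    if (back_depth  < scene_depth) stencil--;  /* back face passes  */
    return stencil == 1;
}
```

The three cases fall out directly: scene between the faces gives 1 (lit), scene behind both faces gives 0, scene in front of both faces gives 0.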

Thanks for the shadow volume comments. I'm still hoping to find a solution using scissor regions, but I may have to give that a try.

I've started trying to read the "Scissor Optimization" chapter from the Mechanics of Robust Stencil Shadows article on Gamasutra, which deals exactly with the problem I'm having of using scissor regions to limit light rendering. However, I so far haven't been able to follow along very well. The article makes assumptions and pulls in equations that I don't know about, and doesn't really explain what it's doing or why the equations are important.

I really want to understand the "why" of what I'm doing, and not just copy/paste math equations, so I'm still searching for solutions...

Drawing spheres as we suggested IS scissor testing. Just draw a sphere where your light is, and the only pixels that will be lit are the ones that the sphere projects to on screen.

However, as expected, performance is terrible when the number of lights increases, as every light is currently being rendered as a full-screen quad.

I think you missed the deferred rendering memo. You draw a sphere for each light. When you really need to optimize, you draw portions of your screen with multiple lights at once using scissor tests:

Assuming 8 lights in your scene:

1. Draw the sphere; the sphere projected on screen basically gives you the "scissor pixels", the only ones the light will affect.
2. Light those pixels and ADD-blend them into the scene.
3. Repeat step 1 for the next light.

8 Blend operations.

Optimized:
1. Figure out (like that Mechanics article, or basic ray tracing) which lights fall in which part of your screen.
2. Assuming your screen is cut into 4, say 2 lights are in each quadrant. You draw 4 quads (upper-left, upper-right, etc.) and send your pixel shader the 2 light positions for each quad you draw (the lights that actually project into, and only influence, that one quadrant).
3. Light those pixels and ADD-blend them into the scene.
4. Repeat for the next quadrant.

4 blend operations, because you calculated lighting for 2 lights at once.

Drawing spheres as we suggested IS scissor testing. Just draw a sphere where your light is, and the only pixels that will be lit are the ones that the sphere projects to on screen.

However, as expected, performance is terrible when the number of lights increases, as every light is currently being rendered as a full-screen quad.

I think you missed the deferred rendering memo. You draw a sphere for each light. When you really need to optimize, you draw portions of your screen with multiple lights at once using scissor tests:

What you're describing is using stencil testing, right? Correct me if I'm wrong, but isn't that just another method of doing the same thing as screen-space scissor testing? I don't understand why you would need both. Stencil test would show you exactly where to render based on the "shadow" created by the sphere geometry, but requires an extra pass to render the sphere geometry. Scissor test limits rendering to a particular rectangle on the screen, with no extra pass, but requires some fancy math to calculate the extent of the light on the screen. End result should be roughly the same either way.

I've started trying to read the "Scissor Optimization" chapter from the Mechanics of Robust Stencil Shadows article on Gamasutra (http://www.gamasutra.com/view/feature/2942/the_mechanics_of_robust_stencil_.php?page=6), which deals exactly with the problem I'm having of using scissor regions to limit light rendering. However, I so far haven't been able to follow along very well. The article makes assumptions and pulls in equations that I don't know about, and doesn't really explain what it's doing or why the equations are important.

I really want to understand the "why" of what I'm doing, and not just copy/paste math equations, so I'm still searching for solutions...

That article is self-contained. The only assumptions it makes are that you know what a plane is and what a dot product is, and it does not pull equations out of nowhere. If you really want to understand how to calculate the proper scissor rectangle for a light source, then that article is the right place to look.

That article is self-contained. The only assumptions it makes are that you know what a plane is and what a dot product is, and it does not pull equations out of nowhere. If you really want to understand how to calculate the proper scissor rectangle for a light source, then that article is the right place to look.

I have no doubt that the article is sound, I'm just having a hard time understanding it. For example, it mentions that, in order to calculate the coordinates for the planes bounding the light, the following two equations must be satisfied:

T · L = r
Nx^2 + Nz^2 = 1

But so far I don't have the slightest idea where these two conditions come from. I did some reading on dot products today to see if it had some other property I didn't know of, and found the geometric form which might be what the first equation is using, but other than that I'm still swimming in the dark. Again, no fault of the article, just a lack in my understanding. I just haven't yet found what I'm missing for it to make sense.

The dot product between a normalized plane and a point gives you the signed perpendicular distance from the point to the plane. The equation T * L = r means that the distance between the tangent plane T and the light position L is equal to the radius r of the light source. The equation Nx^2 + Nz^2 = 1 just means that the normal to the plane is unit length, where we left out the y coordinate because it is known to be zero.

The dot product between a normalized plane and a point gives you the signed perpendicular distance from the point to the plane. The equation T * L = r means that the distance between the tangent plane T and the light position L is equal to the radius r of the light source. The equation Nx^2 + Nz^2 = 1 just means that the normal to the plane is unit length, where we left out the y coordinate because it is known to be zero.

Wow... awesome, thank you for that explanation! That is the first I've read of a dot product doing that. I will start through the article again with that in mind.

No. I think deferred is a bit over your head. No offense, but the reason you're not getting a lot of this is that you're skipping from beginner to advanced with no intermediate step.

As I said, the whole optimization everyone here (including you) is talking about is: for each light, I don't want to run a pixel shader on pixels that are not affected by the light. How do you do that? Simplest way: know how big a 3D area in the world your light affects. So draw a sphere in your world; wherever surfaces collide with that sphere (use a depth test of GL_GREATER), the sphere ("light") is actually hitting a surface. Since the sphere is drawn right over the screen, those are the exact pixels you need to light. So while you draw your sphere, you aren't really drawing the sphere: you still run your pixel shader and pass the light position etc., but the shader only runs on the fragments being rasterized (the ones from the light's sphere that collide with and hit surfaces).

Look up some more tutorials; I swear every one I read talked about this. Your scissor test is going to do NOTHING faster than the first method I posted UNLESS you actually figure out and do method 2 and compute lighting in grid regions for multiple lights in one pass. Just do the method we told you and you're fine. All you have to do is draw a 3D sphere at the light's position and scale it big enough. If it's a small desk lamp, draw the sphere around 3 units; a streetlight, 50 units; etc.

Which do you think would be faster: drawing a 16-triangle sphere where each vertex has to be multiplied by a set of matrices? Or computing the edges of the scissor region by taking the light position, adding the light's radius along the camera's strafe and up directions to get the top and right edges of the light's volume of effect, and projecting that onto 2D screen space?

I agree that drawing spheres to set a stencil region is fairly easy... perhaps I'll do a test. Here's the code I was thinking of...

As I said, the whole optimization everyone here (including you) is talking about is: for each light, I don't want to run a pixel shader on pixels that are not affected by the light. How do you do that? Simplest way: know how big a 3D area in the world your light affects. So draw a sphere in your world; wherever surfaces collide with that sphere (use a depth test of GL_GREATER), the sphere ("light") is actually hitting a surface. Since the sphere is drawn right over the screen, those are the exact pixels you need to light. So while you draw your sphere, you aren't really drawing the sphere: you still run your pixel shader and pass the light position etc., but the shader only runs on the fragments being rasterized (the ones from the light's sphere that collide with and hit surfaces).

Look up some more tutorials; I swear every one I read talked about this. Your scissor test is going to do NOTHING faster than the first method I posted UNLESS you actually figure out and do method 2 and compute lighting in grid regions for multiple lights in one pass. Just do the method we told you and you're fine. All you have to do is draw a 3D sphere at the light's position and scale it big enough. If it's a small desk lamp, draw the sphere around 3 units; a streetlight, 50 units; etc.

The reason for my confusion is that I've already tried restricting light rendering to small regions of the screen using scissor testing, and achieved better performance. This article on deferred shading uses scissor testing, but never mentions rendering spheres around the light sources. All lights are rendered as fullscreen quads, but the scissor test limits shader execution to the portion of the screen where the light is located.

I'm certainly not opposed to trying the sphere method, as it sounds fairly simple, I just don't understand why you think both are required for deferred shading to work.

You mention that the method you describe doesn't use stencil testing. How else would you limit shader execution to an area of the screen masked by the rendered sphere? I've tried looking up more tutorials, as you suggest, but the only alternative I've found to scissor regions is using stencil testing, using an extra pass to render the light sphere to the stencil buffer, and then doing stencil failure tests to determine which pixels to run the shaders on. I'd love to see the tutorials you refer to.

Which do you think would be faster, drawing a 16 triangle sphere where each vertex has to be multiplied by a set of matrices?

I'm certainly not opposed to trying the sphere method, as it sounds fairly simple,

No, what I'm getting at is that it's easier from his perspective: use what you know vs. copying a function that doesn't make any sense to you. But in the theory of deferred shading, yes, they will take the EXACT same time, unless you're doing the 1,000+ disco lights. Also Pablo, the original method (the one I'm proposing) is faster IF IF IF you're doing your stencil region one light at a time. How many operations does it take to project a low-poly sphere of say 100 polys? 100 * 32 instructions = 3200 instructions. Let's say his lighting shader is 32 instructions per pixel, and that includes pixels inside his scissor rectangle that are still NOT receiving any light. So if your scissor region has more than 100 pixels that won't receive light (which is going to happen), then you're still running your pixel shader on pixels that amount to nothing, AND still paying the blending operation, which is the biggest concern for performance.

As to your question about the sphere: well, I have nothing to do at work at the moment, so damn, if you don't get this then I give up.


Further, the scissor test is trying to solve the problem of blending the pixel shader output with the current framebuffer being slow, NOT the problem of determining which pixels the light source actually hits; those are two different problems. You can still draw spheres into a stencil buffer, compute those same areas, and shade them all at once. So if there were another light on the right side of the box, you could draw that intersection as well and run the pixel shader on both regions at once. That way you only ADD to the framebuffer once. That's all I've got. I have no benchmarks for performance, and to me the scissor test is overkill. Most real-world lights don't overlap, but if you're using school-type industrial ceiling lights, your non-sphere rendering would be sunk, because all those lights would have basically full-screen scissor regions. Whereas the actual deferred method gives you just the pixels that intersect with the light, i.e. the ones actually on the ceiling can't be lit by a light that only casts downward, etc. Not drawing spheres is horrible.

The dot product between a normalized plane and a point gives you the signed perpendicular distance from the point to the plane. The equation T * L = r means that the distance between the tangent plane T and the light position L is equal to the radius r of the light source. The equation Nx^2 + Nz^2 = 1 just means that the normal to the plane is unit length, where we left out the y coordinate because it is known to be zero.

After this explanation and a bit more reading, I am having a much easier time understanding the math used in the article. The only question I have at this point is more abstract.

Why is a quadratic equation being used to determine the extents of the light in screen space? I know what a quadratic equation looks like, and I understand how its shape makes sense for finding the two solutions that define the two sides of the light source. What I'm looking for is the proof or explanation of why the quadratic equation gives valid results for this use; I want to actually understand how this works as much as possible. Most of the quadratic equations I've seen describe curves, but I would assume this calculation requires straight lines from the camera to the light's edges. I'm guessing there is a variation on quadratic equations that mimics this?

I've been reading what I can find on quadratic equations all evening, but haven't found much beyond the typical curve examples from math class. Any information or links to explain how/why they are being used here would be awesome.