I'm currently working on implementing deferred lighting in my engine. I have the basics working so far (rendering the world positions / view-space normals / color into textures, then applying them to a screen-filling quad). However, I ran into a few problems doing so.

First of all, I managed to bring one point light into the scene. That seems to work fine, but I want to be able to bring a lot of lights into the scene, and now I can't really see how I'm supposed to move on from one light to multiple ones.

Should I just pass one big buffer with all their positions and attributes into the shader and then loop through them? And what would the benefit be compared to passing all of them to an ordinary forward shader?

I was told that shadow mapping would be an easy thing to add when rendering lights with this method, but I can't see how it's supposed to work.

Is there an easy way to add some kind of anti-aliasing on top of this?

And one other thing: when I render things to simple quads, the textures seem to take a pretty big hit in quality, with visible banding from one color to the next rather than smooth gradients. Does anyone have an idea what could cause this?

When it comes to lighting, you should iterate through your light list and render a screen-space volume for each light (so for a point light, you'd use a sphere) covering the pixels the light affects, outputting your color to the light buffer. If you enable additive blending, then all the lights can overlap and there will be no issues. You only ever need to worry about one light at a time.
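The additive-accumulation idea can be sketched on the CPU like this. The types, names, and the linear falloff are illustrative assumptions, not taken from anyone's engine; the point is just that each light's contribution is computed independently and summed, exactly like additive blending into a light buffer:

```cpp
#include <array>
#include <cmath>

// Illustrative types; a real engine would use its own math library.
struct Vec3 { float x, y, z; };

struct PointLight {
    Vec3  position;
    Vec3  color;
    float radius; // area of influence
};

static float distanceBetween(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// One light's contribution at a surface point: simple linear falloff
// to zero at the light's radius (a common but arbitrary choice).
Vec3 lightContribution(const PointLight& light, const Vec3& worldPos) {
    float d = distanceBetween(light.position, worldPos);
    float att = (d >= light.radius) ? 0.0f : 1.0f - d / light.radius;
    return { light.color.x * att, light.color.y * att, light.color.z * att };
}

// Additive accumulation: each light is handled one at a time, and the
// results simply sum, just like rendering each light volume with
// additive blending enabled.
template <std::size_t N>
Vec3 accumulateLights(const std::array<PointLight, N>& lights, const Vec3& worldPos) {
    Vec3 sum{0.0f, 0.0f, 0.0f};
    for (const PointLight& l : lights) {
        Vec3 c = lightContribution(l, worldPos);
        sum.x += c.x; sum.y += c.y; sum.z += c.z;
    }
    return sum;
}
```

On the GPU the loop disappears: each light volume is a separate draw call, and the blend unit does the summing.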

Shadow mapping is very easy to add on top of this, since you can just make the alpha component of your light buffer your shadow term, and apply it easily enough when you combine everything at the end.
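A minimal sketch of that final combine step, assuming (as described above) that the light buffer's RGB holds the accumulated light color and A holds a shadow term where 0 means fully shadowed and 1 fully lit; the struct and function names are hypothetical:

```cpp
#include <algorithm>

struct RGBA { float r, g, b, a; };
struct RGB  { float r, g, b;    };

// Final composition for one pixel: modulate the accumulated light by
// the shadow term stored in alpha, then multiply by the albedo from
// the G-buffer.
RGB combine(const RGB& albedo, const RGBA& lightBuffer) {
    float shadow = std::min(std::max(lightBuffer.a, 0.0f), 1.0f);
    return { albedo.r * lightBuffer.r * shadow,
             albedo.g * lightBuffer.g * shadow,
             albedo.b * lightBuffer.b * shadow };
}
```

Note that with several overlapping shadow-casting lights a single shared alpha channel gets crowded, so this works cleanly mainly when the shadow term is written per light pass.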

I'm also currently working on getting my deferred shader to work with multiple point lights.

I pass in all my light parameters for a given frame (positions, colors, whatever params I need) in a single constant buffer.

Then I run the actual point-light shader once per light, rendering the lit areas (using a stencil culling algorithm, by the way) and passing in an index to access the current light's data.
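The one-buffer-plus-index layout might look something like this on the CPU side. All names and the 64-light cap are illustrative assumptions; the float4-style padding mirrors HLSL's 16-byte constant-buffer packing:

```cpp
#include <cstddef>

constexpr std::size_t kMaxLights = 64; // arbitrary per-frame cap

// Mirrors a float4-packed GPU layout: position in xyz (w padding),
// color in rgb with the radius stashed in w, for example.
struct GpuLight {
    float position[4];
    float color[4];
};

// The whole frame's lights go up once, in a single constant buffer.
struct LightCBuffer {
    GpuLight lights[kMaxLights];
};

// Per-light draw call: only the index changes between draws; in the
// shader this is simply lights[lightIndex].
const GpuLight& currentLight(const LightCBuffer& cb, std::size_t lightIndex) {
    return cb.lights[lightIndex];
}
```

The win over re-uploading per light is that the buffer is bound once per frame and each draw only needs a tiny index constant.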

As a render target, I use (a) separate buffer(s) storing the calculated light color for a given pixel, including specular. This buffer accumulates the effect of all lights, and gets passed as a resource for the final composition with the global light.

I have no idea how this would help me achieve shadow mapping though; like I said, I'm also doing this for the first time. Make sure to tell me how you did it (especially the shadow mapping) when you get it up and running, and don't forget to mention the algorithm you used.

Also, for each point-light volume, remember to cull the polygons of the sphere appropriately, or else your lights will be too bright because each pixel will be shaded twice, once by the front faces and once by the back faces (and it will also cost more to render). The cull mode for each sphere mesh depends on whether the camera is inside or outside a point light's "area of influence".

For each point light, I test whether its sphere intersects the camera frustum's near plane. Depending on whether it does or not, I place them in separate "buckets" or lists so that they get rendered with the right cull parameters. By grouping them into two lists, you avoid having to switch the cull mode render state many, many times and at the most you will switch the cull mode twice.
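The near-plane test and two-bucket grouping can be sketched like this. The plane representation (unit normal `n` and offset `d`, with points `p` on the plane satisfying `dot(n, p) + d == 0`) and all names are assumptions for illustration:

```cpp
#include <cmath>
#include <vector>

struct Vec3   { float x, y, z; };
struct Plane  { Vec3 normal; float d; };     // normal assumed unit length
struct Sphere { Vec3 center; float radius; }; // a light's area of influence

float signedDistance(const Plane& plane, const Vec3& p) {
    return plane.normal.x * p.x + plane.normal.y * p.y
         + plane.normal.z * p.z + plane.d;
}

// A sphere straddles the near plane when its center lies within one
// radius of the plane.
bool intersectsNearPlane(const Sphere& s, const Plane& nearPlane) {
    return std::fabs(signedDistance(nearPlane, s.center)) <= s.radius;
}

// Bucket the lights once per frame: volumes crossing the near plane get
// front-face culling (we draw their back faces, since the camera may be
// inside), the rest get the usual back-face culling. Rendering each
// bucket in turn means at most two cull-mode state switches per frame.
void bucketLights(const std::vector<Sphere>& lights, const Plane& nearPlane,
                  std::vector<Sphere>& cullFront, std::vector<Sphere>& cullBack) {
    for (const Sphere& s : lights) {
        (intersectsNearPlane(s, nearPlane) ? cullFront : cullBack).push_back(s);
    }
}
```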