I want to implement volumetric lighting with a fragment shader in OpenGL, but I have no idea how to do it. Please help.

:mellow:

Faelenor
—
2006-02-17T15:26:14Z —
#2

Did you try Google? If you search for "volumetric lighting", you'll find some interesting links.

Anudhyan
—
2006-02-17T16:06:01Z —
#3

I just need the basic idea; the rest I will try to do on my own. Can I do it with a transparent screen-like object in front of my viewport, or do I have to do advanced stuff such as fixing a color for every point in the air?

Thanks for mentioning Google, but I didn't get any interesting stuff.

Jynks
—
2006-02-18T12:19:51Z —
#4

Have you tried using 3D cones with animated alpha channels instead of proper volumes?

Anudhyan
—
2006-02-19T14:02:35Z —
#5

No. I want proper, foggy, spherical volumes... like in the game Hitman 3...

Nautilus
—
2006-02-19T16:51:09Z —
#6

Hi, 3D graphics is the art of obtaining the most while doing the least. Never forget that.

If it tricks the eye, then it's good to go (no matter *how* the effect is achieved). Do not be so sure that in H3 what you see is genuine volumetric lighting.

Ciao ciao

Anudhyan
—
2006-02-21T12:05:30Z —
#7

I understand your point. Guess I got a little too impatient. However, these spherical volumes I had in mind are something like this:

Nautilus
—
2006-02-21T13:40:00Z —
#8

I see. It's certainly effective. That is probably the work of a pixel shader. But I'm positive it's not real volumetric lighting (it would cost too much processing power). It seems more like a diffuse glowing effect well positioned in 3D space (in fact the blackened column in the foreground, at the center of the scene, is not affected by the glow of the light in the background). But I'm guessing here. It's hard to tell what it is without seeing it in action and examining its behavior when some non-static object (like a human being) enters the radius covered by that glow, or partially overlaps the source of the light. And I never played H3.

I'm sure you can find some visual glitch in the behavior of that glow. Glitches often give you important clues about the nature of a special effect. I don't know the game, but (if you can) try using a 3rd-person camera view. Zoom out with the camera as much as you can, and then step into the light with your avatar. As we say here, find the 'crystal's flaw'.

Ciao ciao

oisyn
—
2006-02-21T13:48:03Z —
#9

Add the depth values of back-facing light-volume polygons to a buffer, and subtract those of the front-facing polygons. What remains is the total length of the ray's intersection with the volume. You can use these values to apply some sort of fogging technique.

Of course, you still need to handle the degenerate case where a front-facing polygon is visible but the corresponding back-facing one is hidden behind other geometry. This can be done by using min(current_z, z_in_depth_buffer) instead of the actual depth value.

Nautilus
—
2006-02-21T15:14:20Z —
#10

.oisyn you are terrible! Can you explain that again in more high-level language? I am interested in what you said, but have trouble understanding it.

Thank you very much, Ciao ciao

Anudhyan
—
2006-02-22T03:27:09Z —
#11

I was thinking: if this kind of volumetric lighting is too costly to perform in realtime, why not precalculate it and have a sort of volumetric lightmap?

void_
—
2006-02-22T09:13:05Z —
#12

The screenshot seems to be just a lightmapped room (precalculated lights) with some coronas applied to visible lights.

A corona can just be a billboard that is rendered over the light, giving the effect that it shines bright.
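To make the billboard idea concrete, here is a toy sketch in Python (the grid size, falloff shape, and function names are invented for illustration, not taken from any engine): a corona is just a sprite with a radial brightness falloff, additively blended over the framebuffer at the light's screen position and clamped so it saturates to white near the center.

```python
import math

SIZE = 9  # tiny "framebuffer" for illustration

# Start with a dark scene: one brightness value per pixel.
framebuffer = [[0.1 for _ in range(SIZE)] for _ in range(SIZE)]

def draw_corona(buf, cx, cy, radius, intensity):
    """Additively blend a radial-falloff sprite over the buffer, clamped to 1.0."""
    for y in range(len(buf)):
        for x in range(len(buf[0])):
            d = math.hypot(x - cx, y - cy)
            falloff = max(0.0, 1.0 - d / radius)  # linear falloff toward the rim
            buf[y][x] = min(1.0, buf[y][x] + intensity * falloff)

draw_corona(framebuffer, cx=4, cy=4, radius=4, intensity=0.9)
```

On the GPU the same thing is one textured quad drawn with additive blending; the clamp falls out of the framebuffer format for free.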

Anudhyan
—
2006-02-22T09:20:27Z —
#13

Are these coronas applied by disabling the depth test (i.e. drawn on top of everything)? If you look at the left yellow light, you will see that the 'corona' is occluding the pillar next to it. But if you look at the right yellow light, the corona isn't occluding the pillar (the dark one). How is this done?

kusma
—
2006-02-22T11:45:43Z —
#14

I'd rather just use additive cones, simply because they can look quite good and are easy for an artist to tweak...

> .oisyn you are terrible! Can you explain that again in a more high-level language? I am interested in what you said, but have problems to understand.

I think there is a sample included in the DirectX 9 SDK that does exactly this.

For fogging, you need to know how far a ray from the eye travels through the fog, so you can calculate how much of the light is scattered and how much will pass through. Classical fogging does this by taking the z-values either at the vertices or at the pixels when rendering polygons, but this obviously doesn't work with custom volumes. Fortunately, it isn't that hard to calculate the total distance that a ray travels through fog at every pixel.
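As a concrete model of that "how much passes through" step (my own illustration, not necessarily what any particular game uses): uniform fog is usually given exponential attenuation, so once the per-pixel distance through the fog is known, the rest is a single blend.

```python
import math

def fog_transmittance(distance, density):
    """Fraction of light that survives travelling `distance` through uniform fog."""
    return math.exp(-density * distance)

def fog_blend(scene_color, fog_color, distance, density):
    """Blend the scene toward the fog color by the amount of light scattered away."""
    t = fog_transmittance(distance, density)
    return t * scene_color + (1.0 - t) * fog_color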

Suppose you want to solve this with raytracing. Think of a convex volume, like a sphere. Shoot the ray through the sphere, and calculate the intersection points where the ray enters and exits the sphere. Of course, a raytracer calculates the actual z-values: the length that a ray needs to travel until it reaches the surface. So if you subtract the z-value where the ray enters the volume from the z-value where it exits, you are left with the total distance (in the z direction) the ray travels through the fog.
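A minimal raytracing version of that, sketched in Python (the sphere position, radius, and function name are made up for the example): intersect the ray with the sphere, and the fog distance is simply the exit parameter minus the entry parameter.

```python
import math

def ray_sphere_span(origin, direction, center, radius):
    """Return (t_enter, t_exit) for a unit-length ray direction, or None on a miss."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c           # quadratic discriminant; a == 1 for a unit direction
    if disc < 0.0:
        return None                  # ray misses the sphere entirely
    root = math.sqrt(disc)
    return ((-b - root) / 2.0, (-b + root) / 2.0)

# Ray along +z through a unit sphere centered 5 units away:
span = ray_sphere_span((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
thickness = span[1] - span[0]        # distance travelled inside the fog: 2 units
```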

Since you only need the values of where a ray enters and exits the volume, you can simply render the actual volume using the GPU to a depth buffer. All the polygons that face the camera are polygons where rays enter the volume. All polygons that face away from the camera are polygons where rays exit the volume. So if you add the z-values of all back-facing polygons of a volume to a buffer, and subtract the z-values of all front-facing polygons from that buffer, you are left with the total distance a ray travels through the volume at every pixel. Note that this also works for concave volumes, as for every volume entry there is a corresponding volume exit.

Of course, a ray stops as soon as it hits actual geometry. If the geometry is between the camera and the fog volume, the volume polygons will get z-tested away. If the geometry is completely behind the volume, you'll get fog values as expected. But if a ray enters the volume and then hits geometry without exiting the volume first, your back-facing volume polygons will get z-tested away while the front-facing ones won't, which leaves you with incorrect values in the buffer. This can be solved by taking the minimum of the current z-value of the fog polygon being rendered and the value at that pixel in the depth buffer.
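A CPU sketch of that whole buffer pass for a single pixel, in Python (on the GPU this would be additive/subtractive blending into a float buffer; the depth values here are invented): clamping every volume depth against the scene depth handles the "ray hits geometry inside the volume" case, and correctly yields zero when geometry sits in front of the volume.

```python
def fog_thickness(front_depths, back_depths, scene_depth):
    """Per-pixel fog distance: sum of clamped back-face (exit) depths minus
    sum of clamped front-face (entry) depths. Works for concave volumes too,
    since every entry has a matching exit."""
    total = 0.0
    for z in back_depths:
        total += min(z, scene_depth)   # ray exits the volume (or stops at geometry)
    for z in front_depths:
        total -= min(z, scene_depth)   # ray enters the volume
    return total

# Volume spans z = 2..6 at this pixel:
#   scene_depth = 10 -> geometry behind the volume, full thickness 6 - 2 = 4
#   scene_depth = 4  -> geometry inside the volume, fog only from 2 to 4
#   scene_depth = 1  -> geometry in front of the volume, no fog at all
```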

Another problem is clipping against the near and far planes; you obviously don't want that to happen. Far-plane clipping can be resolved using an infinite far plane, but I'm not sure how to solve the near-plane clipping problem.

Thanks .oisyn for explaining fogging. Still sounds kind of expensive. Can all this be done in realtime? If I could only get my hands on some OpenGL source code... :happy:

geon
—
2006-02-24T16:08:23Z —
#18

.oisyn: That's volumetric fog, not light. However, it can be used to make volumetric light.

First, "simply" subtract the light's shadow volume from the fog volume. This new fog volume will be used to calculate the light added by the fog. The original volume should be used only for the lost light.

This (I guess) would need multiple render targets that read from and write to each other before they are combined into the final image (much like depth peeling). Would this even be possible with DX10?

> .oisyn: That's volumetric fog, not light. However, it can be used to make volumetric light.

You know, those are physically exactly the same thing. Of course, the effect you want to achieve depends on the postprocessing filter you use. But fog means light scattering, and light volumes are light volumes because there is fog (lots of tiny particles) around the light that scatters it in all directions. So it's fogging either way. What matters is the technique, not what name you give it.

geon
—
2006-02-24T17:53:31Z —
#20

@.oisyn

> So it's fogging either way. But what matters is the technique, not what name you give it.

You are right, of course. But to make volumetric light, I feel it is implied that the fog should be shadowed by any object between it and the light source. Or to put it the other way around: objects should cast shadows onto the fog.