So, we have run into an interesting issue: how do we make caves dark, without the ambient light of the world above... but if somebody breaks into the cave and opens it up to the world, how do we give it that light back?

Minecraft does some odd stuff with the fog as well, based on your depth; I don't want to do that. So if we can figure out some way for the system to tell whether you are in a cave rather than in open land, then I would have it modify our dynamic world lighting so that there is no ambient (and such) for the player while underground.

Any thoughts or ideas are welcome at this point, since we are at a loss.

Can't you just change the ambient light value depending on the surrounding tile types? If you walk into a cave, bring the ambient down to something like (16, 16, 16) or (32, 32, 32). You could even extend that to the whole game and always interpolate the ambient value based on terrain type and time of day.

Do you have specific cave tiles? Just do a check to see what type of tile the player is standing on. If grass tiles only appear outside, and cave tiles only appear inside a cave with cave roof tiles overhead, then it's an easy assumption to make.
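A minimal sketch of that interpolation idea, in case it helps. Everything here is made up for illustration: the terrain names, the ambient colors, and the `blend` parameter (how far into the cave the player is) are all assumptions, not anyone's actual engine code:

```python
import math

# Hypothetical target ambient colors per terrain type (RGB 0-255).
TERRAIN_AMBIENT = {
    "grass": (200, 200, 210),   # open land: bright
    "cave":  (16, 16, 16),      # cave: near dark
}

def lerp(a, b, t):
    """Componentwise linear interpolation between colors a and b, t in [0, 1]."""
    return tuple(int(x + (y - x) * t) for x, y in zip(a, b))

def ambient_for(terrain, time_of_day, blend):
    """time_of_day in [0, 1] (0.5 = noon) dims the outdoor ambient at night;
    blend in [0, 1] smooths the transition as the player walks into a cave."""
    # Simple day curve: full brightness at noon, near dark at midnight.
    daylight = max(0.0, math.sin(time_of_day * math.pi))
    outdoor = lerp((8, 8, 16), TERRAIN_AMBIENT["grass"], daylight)
    if terrain == "cave":
        # Caves ignore the time of day once you are fully inside.
        return lerp(outdoor, TERRAIN_AMBIENT["cave"], blend)
    return outdoor
```

Because everything goes through one interpolation, the same function covers day/night cycles outdoors and the fade-out at a cave mouth.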

I think the OP wants something like: when you break a few tiles off the cave roof, daylight floods into the cavern (with a direction depending on the time of day) and illuminates it a bit. You'd need to take geometry into account, or at least identify which tiles are "lit" and propagate that property to nearby tiles with a realistic diffusion algorithm (a sweep algorithm could perhaps work here if the world is divided into chunks).

Minecraft in particular has a very specific lighting model where each cell has two lighting values ranging from 0-15 (one for sunlight and one for artificial light). Sunlight is propagated downwards at full intensity, but loses intensity "sideways" (reduced by 1 per cell). Artificial light is always reduced by 1 per cell. Also, the lighting value of a cell is not the brightness of the cell (or the geometry in that cell), but the brightness used for adjacent faces. In fact, all cells filled with a solid have lighting values of 0.
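That propagation can be sketched as a flood fill. A minimal 2D version (3D works the same way; the grid layout and 0-15 range follow the description above, everything else is an assumption):

```python
from collections import deque

def propagate_light(solid, width, height):
    """solid[y][x]: True if the cell is filled; y = 0 is the top row.
    Returns a sunlight grid 0-15: columns open to the sky get full
    sunlight straight down, then light spreads into dark cells via
    BFS, losing 1 level per cell."""
    light = [[0] * width for _ in range(height)]
    queue = deque()
    # Seed: walk each column down from the sky until a solid cell blocks it.
    for x in range(width):
        for y in range(height):
            if solid[y][x]:
                break                    # solid cells stay at 0
            light[y][x] = 15
            queue.append((x, y))
    # BFS spread: each step into a neighbour loses one light level.
    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < width and 0 <= ny < height and not solid[ny][nx]:
                if light[ny][nx] < light[y][x] - 1:
                    light[ny][nx] = light[y][x] - 1
                    queue.append((nx, ny))
    return light
```

Removing a roof tile then just means re-seeding that column and re-running the BFS locally, which is why daylight "floods in" when you open a cave up.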

We already have the Minecraft-style lighting, as well as DYNAMIC lighting (the Minecraft style is there for low-end computers). We are trying to figure out a way to remove the ambient light when you go down into a cave system. The issue is: how do you define a cave system? We are already tracking how much light to show or not, so that is one possible approach. By doing this we could handle not only caves but buildings as well. Ever been in a huge building in Minecraft and seen some bullshit blue fog? Yeah, that is not something we want. We can dynamically change the fog and other settings of the player's view, so why not try to figure out a way to auto-control it? That is what we are attempting to do.

I only played Minecraft for a very short time, so I may be way off. If you do the simple lighting the way Trienco described, couldn't you use a third channel for the ambient light, propagated just like artificial light but with a much smaller falloff?

I think I would use a trigger plane at the entrance, which controls the scene's ambience. As you approach the trigger, it activates and records your distance from the plane; as this distance decreases (you approach the entrance), dim the ambient. At a certain point, perhaps when you hit some -r distance past the plane, leave it alone; reverse it on the way out.
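A rough sketch of that trigger-plane idea. The plane, the radius r, and the ambient colors are all placeholders, just to show the signed-distance blend:

```python
def ambient_near_entrance(player_pos, plane_point, plane_normal, r,
                          outside=(200, 200, 210), inside=(16, 16, 16)):
    """Signed distance from the entrance plane controls the ambient:
    at +r or further outside you get the full outdoor ambient, at -r
    or deeper inside the full cave ambient, with a smooth blend in
    between. The same formula works both ways, so walking out
    reverses the dim automatically."""
    # Signed distance: positive on the 'outside' side of the plane.
    d = sum((p - q) * n for p, q, n in zip(player_pos, plane_point, plane_normal))
    t = max(0.0, min(1.0, (d + r) / (2 * r)))   # 0 = fully inside, 1 = fully outside
    return tuple(int(i + (o - i) * t) for i, o in zip(inside, outside))
```

Since the blend depends only on the current position, there is no state to track beyond the plane itself.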

That would work in a non-dynamic world. The issue is how you get a trigger plane to exist in a world that is laid out a certain way and then modified by the player into something else. Conventional methods cannot apply, since we have no linear control over where the player goes.

I can think of ways, but how heavyweight they are is a different question. I appreciate that directional lighting with occlusion would give too drastic an effect, e.g. the area next to a cliff in mid afternoon would be pitch black. I doubt that you want to do a full realtime radiosity/global illumination solution, but you could perhaps cannibalise the techniques for a very coarse voxel grid and a limited maximum range for radiosity calculations. A couple of compendiums of links follow:

What I did here: http://pic.twitter.com/foX0EZvS was trace a bunch of rays for each surface. The Minecraft approach is essentially a special case of the ray tracing where you only use a single ray pointing straight up. The problem with this is that it's pretty expensive, and it becomes fairly hard to determine what has to be updated when a single block is removed; you end up updating a whole bunch of surrounding blocks.
From a results point of view it looks very nice, though. I spent a lot of time thinking about optimizations like mipmapping the terrain and using that to accelerate the ray tracing. But the problem is always that you will miss some detail when doing that (thin walls become transparent, etc.).

I think the best and fastest way is this: we already have light propagation for each block. In the G-buffer, I can pass those lighting values through a channel; then in the directional lighting pass, apply the ambient lighting according to what's in that channel.

What you want is real-time global illumination, which is a tough topic.
You could try the Crytek method, Light Propagation Volumes, for instance. But it does not handle multiple light bounces well, and those are almost the only thing that physically lights a cave.

The ray idea is a simplification of the "final gather" step in a radiosity calculation. Radiosity is traditionally calculated this way:
use the light source to create a photon map (the equivalent of a stochastic shadow map), then for each surfel gather photons over a hemisphere; this lets you evaluate indirect occlusion.

In a less correct way, there is another method using spherical harmonics: they encode the irradiance at each vertex of the scene (in your case you could go with each voxel). But encoding irradiance requires integrating the environment over the sphere, multiple times, which is not real time at all.
The DirectX SDK includes some samples of that method (e.g. the little spaceship in a Greek-Parthenon-like temple, based on a paper from ATI).

Other real-time GI methods: the first paper from Dachsbacher, Reflective Shadow Maps. But this requires screen-space gathering with 400 samples per pixel, which is crazy slow; in the paper they do it at a reduced resolution, but still...
CryEngine 3 has an implementation of that method for everything that is outside the light propagation volume cascades (far world, thus a small gathering disk on screen, thus fast).

The most complex real-time GI method: Imperfect Shadow Maps.
This consists of rendering an approximated scene geometry with point rendering, from the point of view of virtual point lights distributed throughout the scene volume, then applying a 2D reconstruction algorithm to those little (cubic) shadow maps. This gives good volumetric information about occlusion in any direction, like a final gather in the classical ray-traced way, from which ambient occlusion can be deduced in real time.

The SSAO: you already have it, nothing to say on that.

In my view, you should go with a volume partitioning that is coarser than your basic cell, say a grid of sectors that can each contain 1024 cells, and for those sectors you can easily update a flag recording the actual presence of cells, which you propagate up the octree that stores this grid structure.
Then, when there is an invalidation in your world (elements moving/deleted/created), you can quickly locate which sector needs to be re-gathered, and run a small custom simplified ray-traced photon map/final gather pass for that sector (which is much faster than having to do it for every surfel).
Then you sample that sector like you would a volume texture, with real trilinear sampling.
You will need something to improve the frequency of the information at interfaces where your geometry has thin layers/walls/ceilings, because you will get "light bleeding" and also "shadow bleeding" on the outside.
Also, a final gather is always noisy because of the limited number of rays (samples) => lack of information <=> aliasing.
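The dirty-flag bookkeeping part of that idea could look something like this. The sector size, the class, and the neighbour handling are all hypothetical; only the "locate what needs re-gathering" logic is shown, not the gather itself:

```python
SECTOR = 8  # hypothetical: cubic sectors of 8x8x8 = 512 cells

class SectorGrid:
    """Tracks which coarse sectors need their lighting re-gathered
    after blocks in them are created, deleted, or moved."""
    def __init__(self):
        self.dirty = set()

    def invalidate(self, x, y, z):
        # Map the changed cell (and its immediate neighbours) to sector
        # coordinates: a change right at a sector boundary also affects
        # the light gathered in the adjacent sector.
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    self.dirty.add(((x + dx) // SECTOR,
                                    (y + dy) // SECTOR,
                                    (z + dz) // SECTOR))

    def regather(self):
        """Return the set of sectors to run the mini final-gather on,
        and clear the dirty list for the next frame."""
        todo, self.dirty = self.dirty, set()
        return todo
```

The per-sector gather stays expensive, but it only runs on the handful of sectors the player actually touched that frame.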

Or you could invent a totally new method, why not with a compute shader, that takes advantage of the blocky structure of your world... needs thinking.

The first thought that comes to mind for defining a cave system is to cast rays from the edges of the blocks in the area in question towards (and away from) the sun. You'll end up with an odd cone-shaped volume of sorts defining the area of the cave that's visible to the light source.

This can be used to determine which blocks receive light, and the cave volume can then be defined as "the volume that is not in the lit cone and is below the ray-cast point".

Possibly a brutish approach to it. I'm not really familiar with the current techniques used today.
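For what it's worth, the per-block sun-ray test could be sketched like this on a 2D grid. This is a crude fixed-step march rather than a proper voxel DDA, and the sun direction and step size are arbitrary assumptions:

```python
import math

def lit_by_sun(solid, x, y, sun_dir=(0.5, -1.0), step=0.25):
    """March from cell (x, y) toward the sun; the cell counts as lit if
    the ray leaves the grid without hitting a solid cell. y grows
    downward, so a sun direction with negative y points upward."""
    height, width = len(solid), len(solid[0])
    px, py = x + 0.5, y + 0.5          # start at the cell centre
    while True:
        px += sun_dir[0] * step
        py += sun_dir[1] * step
        cx, cy = math.floor(px), math.floor(py)
        if not (0 <= cx < width and 0 <= cy < height):
            return True                # escaped the grid: open sky
        if solid[cy][cx]:
            return False               # blocked before reaching the sky
```

As noted upthread, doing this per block gets expensive fast; the step size also has to stay well under one cell, or thin walls get skipped.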

We have been busy with moving this week so nothing has been touched on this. I can see some seriously great ideas going on here and I am excited to get working on them. I will keep you posted as to what we find and what we end up doing.

For those who are curious, we are using a deferred rendering system right now. We do not have point lights set up, because we are having issues with how we are doing our depth field: we are using LOG depth and it doesn't seem to want to render the lighting properly... anyway, I'll keep you posted.

I just had an idea that could serve you here:
you could use environment maps. You don't need a high resolution (something like 32x6), but they will let you get the information "how much of the sky is seen from this point in space".
You can then use a shader to convert them to irradiance maps, keeping only the spherical harmonics coefficients at the position where the environment map was rendered.

The issue is that you will need many of those to cover your whole world.
You could prepare those coefficients in a structure that places the sample points near interfaces (between empty space and the presence of voxels),
then invalidate the SH near any voxel that changed, and re-render the envmaps when needed.

All in all, it is exactly a final gather, just purely GPU-friendly and very light in storage.
The problem is that it could still take several seconds to prepare all the SH for a map a few hundred meters across.
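The "how much of the sky is seen from this point" quantity can be approximated on the CPU just to get a feel for the numbers, with a 2D fan of rays standing in for the low-res environment map. Everything here (ray count, step size, treating any grid exit as open sky) is a simplifying assumption:

```python
import math

def sky_visibility(solid, x, y, n_rays=16, step=0.25):
    """Fraction of upward directions from cell (x, y) that reach open
    sky without hitting a solid cell: 1.0 = fully open, 0.0 = sealed.
    y grows downward, so upward rays have negative dy."""
    height, width = len(solid), len(solid[0])
    hits = 0
    for i in range(n_rays):
        # Fan of directions over the upper half-circle.
        a = math.pi * (i + 0.5) / n_rays
        dx, dy = math.cos(a), -math.sin(a)
        px, py = x + 0.5, y + 0.5
        while True:
            px += dx * step
            py += dy * step
            cx, cy = math.floor(px), math.floor(py)
            if not (0 <= cx < width and 0 <= cy < height):
                hits += 1          # escaped the grid: open sky
                break
            if solid[cy][cx]:
                break              # blocked by a solid cell
    return hits / n_rays
```

This is the scalar version of what the irradiance-map/SH approach would give you per direction; it also makes clear why many sample points are needed, since the value changes sharply right at a cave mouth.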