Hi. I'm trying to implement this technique: http://blog.icare3d....tion-using.html. The paper about it isn't detailed and is missing some info. I did research on voxels and found a technique for fast voxelization, but I still don't know about cone tracing, and there are no papers about this technique. I know the basic difference between path tracing and cone tracing: you trace cones instead of rays, so instead of shooting hundreds of rays you shoot just a few cones. But I'm still missing a lot of information needed to implement it. Can somebody explain cone tracing to me, or give me a link to a paper? I WAS using Google and found nothing; all I found is an undetailed wiki page about cone tracing. Thanks in advance. Sorry for my poor English.

As the cone's radius grows with distance, you sample progressively lower (coarser) mipmap levels, so when a sample hits something you get the pre-filtered sum of all the information in the smaller voxels further down the mipmap tree.
This is the reason you do not need to shoot so many rays/cones this way.
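The stepping described above can be sketched roughly like this (Python pseudocode; `sample_mip` is an assumed callback into a prefiltered/mipmapped voxel volume, and the constants are only illustrative, not from the paper):

```python
import math

def cone_trace(sample_mip, origin, direction, aperture, max_dist,
               base_voxel_size=1.0, step_factor=0.5):
    """March a cone through a mipmapped voxel volume.

    sample_mip(position, level) -> (color, opacity) is an assumed callback;
    level 0 is the finest mip. Step size and mip level grow with the cone
    radius, so far-away samples are single coarse lookups.
    """
    color = [0.0, 0.0, 0.0]
    occlusion = 0.0
    dist = base_voxel_size  # start one voxel out to avoid self-sampling
    while dist < max_dist and occlusion < 1.0:
        radius = dist * math.tan(aperture * 0.5)  # cone radius at this distance
        # a wider cone footprint maps to a coarser mip level
        level = max(0.0, math.log2(max(radius / base_voxel_size, 1.0)))
        pos = [origin[i] + direction[i] * dist for i in range(3)]
        c, a = sample_mip(pos, level)
        # front-to-back compositing: later samples are attenuated
        # by the opacity accumulated so far
        weight = (1.0 - occlusion) * a
        color = [color[i] + weight * c[i] for i in range(3)]
        occlusion += weight
        dist += max(radius, base_voxel_size) * step_factor  # step ~ radius
    return color, occlusion
```

The key point is that one coarse `sample_mip` lookup stands in for the hundreds of rays a path tracer would need to cover the same cone footprint.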

Just FYI, the first part relating to the voxelization was released as a free chapter from the OpenGL Insights book; you can find it here (according to Crassin's Twitter page, the full source will be released soon, probably to the git repo here).

I know how to do fast voxelization. I'm looking for info about cone tracing.

Well, I was trying hard, but my 13-year-old brain can't handle cone tracing. Do you know any other easy-to-implement, real-time global illumination techniques?

The words "easy" and "global illumination" don't really get along all too well.

Indirect lighting is an advanced technique, so if you want to implement it you'll pretty much have to get your hands dirty. When it comes to global illumination you have quite a few options, each with their own advantages and disadvantages.

There are precomputed methods like precomputed radiance transfer (PRT), photon mapping, lightmap baking, etc. These techniques are mostly static and won't have any effect on dynamic objects in your scene, but they are very cheap to run since all the expensive calculations have been done in a pre-processing step. These only support diffuse indirect light bounces as far as I know.

When you look at more dynamic approaches you have VPL-based approaches like instant radiosity, which allow for dynamic objects and a single low-frequency light bounce. You could also directly use VPLs, but this will require some filtering algorithm if you want to get smooth results and prevent flickering.
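The instant-radiosity idea can be sketched in a few lines (a hedged Python sketch; `trace_ray` is an assumed scene-intersection callback, and real implementations generate VPLs from a reflective shadow map on the GPU):

```python
import math
import random

def generate_vpls(light_pos, light_intensity, trace_ray, num_paths=64, seed=0):
    """Instant-radiosity sketch: shoot random rays from the light and place
    a virtual point light (VPL) at each first surface hit.

    trace_ray(origin, direction) -> (hit_pos, albedo) or None is assumed.
    Each VPL carries a share of the light's power tinted by the surface,
    so summing the VPLs at shading time approximates one indirect bounce.
    """
    rng = random.Random(seed)
    vpls = []
    for _ in range(num_paths):
        # uniform random direction on the unit sphere
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(max(0.0, 1.0 - z * z))
        d = (r * math.cos(phi), r * math.sin(phi), z)
        hit = trace_ray(light_pos, d)
        if hit is not None:
            hit_pos, albedo = hit
            power = [light_intensity[i] * albedo[i] / num_paths
                     for i in range(3)]
            vpls.append((hit_pos, power))
    return vpls
```

Shading then lights each pixel by every VPL as if it were a small point light, which is exactly where the flickering and filtering issues mentioned above come from when the VPL set changes between frames.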

Another interesting dynamic approach is the light propagation volume approach used by Crytek which uses reflective shadow maps to set up a 3D grid with indirect lighting values, and which then applies a propagation algorithm to correctly fill the grid. This is fast, but also only allows for a single low-frequency diffuse bounce.
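The propagation step can be illustrated with a heavily simplified sketch (Crytek's actual LPV propagates directional spherical-harmonics coefficients; this just diffuses a scalar intensity to the 6 axis neighbours to show the grid-iteration structure):

```python
def propagate(grid):
    """One simplified LPV-style propagation iteration over a dense 3D grid.

    Each cell keeps part of its intensity and pushes a fixed share to its
    six axis-aligned neighbours; the real algorithm is directional (SH).
    """
    nx, ny, nz = len(grid), len(grid[0]), len(grid[0][0])
    out = [[[0.0] * nz for _ in range(ny)] for _ in range(nx)]
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                v = grid[x][y][z]
                out[x][y][z] += v * 0.4  # energy that stays in place
                share = v * 0.1          # energy pushed to each neighbour
                for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                    i, j, k = x + dx, y + dy, z + dz
                    if 0 <= i < nx and 0 <= j < ny and 0 <= k < nz:
                        out[i][j][k] += share
    return out
```

Running a handful of these iterations per frame is what spreads the reflective-shadow-map injection through the volume.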

There's also screen-space indirect lighting, which is an extension of SSAO. Of course this technique can only use the information available on screen, so it may not give satisfying results.

Recently, Ritschel et al. wrote a nice state-of-the-art report on interactive global illumination techniques. Most of them require a solid amount of work, but it’s good to get an overview of what’s out there.

Once you get your ray tracer going, you can quite easily extend it to a path tracer or – with a little more effort – to a photon mapper. Extending to a path tracer is easier, but with a photon mapper you could compute indirect lighting for real-time applications, see McGuire et al., or you could look into progressive photon mapping (PPM, SPPM, Photon Beams) and all its extensions if you want photometrically correct lighting, which, however, takes a few more hours to compute.
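The core photon-mapping gather step mentioned above is just density estimation over stored photons. A minimal sketch, leaving out the BRDF, normals, and the kd-tree a real implementation would use for the neighbour query:

```python
import math

def estimate_radiance(photons, point, radius):
    """Photon-map gather sketch: sum the power of photons within `radius`
    of the shading point and divide by the disc area.

    photons is a list of (position, power) tuples produced by a photon
    tracing pass; this is a simplified stand-in for the radiance estimate.
    """
    r2 = radius * radius
    total = [0.0, 0.0, 0.0]
    for pos, power in photons:
        d2 = sum((pos[i] - point[i]) ** 2 for i in range(3))
        if d2 <= r2:
            total = [total[i] + power[i] for i in range(3)]
    area = math.pi * r2
    return [t / area for t in total]
```

The two halves of the algorithm are independent: a tracing pass deposits photons on surfaces, and this estimate is evaluated at shading time wherever indirect light is needed.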

Cheers!

Hey, thanks for the good information. And thanks for getting me into photon mapping! This technique is awesome and easy to understand. I'll try to do some optimizations for it. Voxelizing the geometry before mapping photons will be a big speedup, I think.

I will say this paper is very complicated, and perhaps impossible to implement directly from the paper. Additionally, the technique was implemented taking advantage of some of the latest GPU features, some of which are only available in OpenGL 4.3 or DirectX 11 and/or Nvidia-specific extensions, so unless you have the latest and greatest Nvidia hardware (I think only Kepler technology), you will not be able to fully implement it. For example, in order to voxelize the dynamic objects per frame, it requires access to a compute shader. In the absence of a compute shader, I guess you could do something with CUDA or OpenCL, but that would require OpenGL interop to write data to a buffer that can be used by these other libraries to build the octree or voxelize. You could always pre-build the octree, but then your scene would have to be static with no dynamic geometry.

That being said, if you would like more information on these techniques, you should check out Cyril Crassin's webpage, as it contains other papers on techniques this method builds on (http://maverick.inria.fr/Members/Cyril.Crassin/). Also, the newest Unreal Engine 4 takes advantage of this technique (see demos on YouTube).

I can do voxelization in a vertex/fragment shader. The technique is explained here: http://graphics.snu....09_voxel_gi.pdf. You basically render the model's vertex coordinates into a texture, and for each pixel of that texture you add a voxel at the position given by that pixel's value.
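On the CPU the equivalent quantization step would look something like this (a sketch only; on the GPU this happens in the shader, and the texel layout here is assumed, not taken from the paper):

```python
def voxelize_from_position_texture(position_texels, bounds_min,
                                   voxel_size, grid_dim):
    """Sketch of texture-based voxelization: each texel holds a world-space
    surface position; quantize it to a grid cell and mark that voxel.

    position_texels is an iterable of (x, y, z) positions read back from
    the position texture; grid_dim is the grid resolution per axis.
    """
    occupied = set()
    for p in position_texels:
        idx = tuple(int((p[i] - bounds_min[i]) / voxel_size)
                    for i in range(3))
        if all(0 <= idx[i] < grid_dim for i in range(3)):
            occupied.add(idx)
    return occupied
```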

Maybe I explained it wrong; see the paper for the best explanation.

Well yeah, but that's not enough to implement the technique presented in Crassin's paper. For cone tracing to work you need to generate mipmap data for your voxels stored in an octree. To maintain any kind of performance, that octree structure should be held entirely in GPU memory in a linear layout and recalculated on each scene update by the GPU, and you'll really need a compute shader solution to do this.
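To make the mipmap-generation requirement concrete, here is a sketch of the bottom-up filtering over a dense grid (a real implementation does this over sparse octree bricks on the GPU, not a dense CPU array):

```python
def build_mip_levels(level0):
    """Bottom-up mip construction sketch over a dense cubic voxel grid.

    Each coarser level averages 2x2x2 cells of the finer one; level0 must
    be a power-of-two cube. This is the pre-filtering that cone tracing
    samples instead of visiting individual fine voxels.
    """
    levels = [level0]
    cur = level0
    while len(cur) > 1:
        n = len(cur) // 2
        nxt = [[[0.0] * n for _ in range(n)] for _ in range(n)]
        for x in range(n):
            for y in range(n):
                for z in range(n):
                    s = 0.0
                    for dx in (0, 1):
                        for dy in (0, 1):
                            for dz in (0, 1):
                                s += cur[2*x+dx][2*y+dy][2*z+dz]
                    nxt[x][y][z] = s / 8.0  # average the 8 children
        levels.append(nxt)
        cur = nxt
    return levels
```

Doing this every frame for dynamic geometry is exactly why the compute-shader path matters: the whole pyramid has to be rebuilt on the GPU without a round-trip to the CPU.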

This is exactly right, the key being it is an octree structure and that it is entirely generated/updated/accessed on the GPU.

You might be in luck. A book titled OpenGL Insights just came out, and Cyril Crassin has a chapter in it entitled Octree-Based Sparse Voxelization Using the GPU Hardware Rasterizer in which he explains the Octree voxelization technique presented in the paper. It shows how to use the compute shader and all that. What's more, it's your lucky day because the website has a link to sample chapters you can download for free in PDF form, and this chapter is one of them.