My screen is completely white, which means that the depth of every pixel is equal to 0... Could someone check whether I made an error here, or send me a piece of code that can access the depth buffer? Thank you for your help!
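For reference, here is a minimal fragment-shader sketch for visualizing a depth texture. All names (`depthTex`, `near`, `far`) are assumptions on my part, and the linearization step is only there because raw depth values are non-linear and bunch up near 1.0, which can make the whole screen look uniformly white even when the buffer is fine:

```glsl
// Hypothetical depth-visualization fragment shader (GLSL 1.x style,
// matching the 8800 GT era). Uniform names are illustrative.
uniform sampler2D depthTex; // depth buffer bound as a texture
uniform float near;         // camera near plane
uniform float far;          // camera far plane

void main()
{
    // Raw depth in [0,1]; non-linear, heavily biased toward 1.0.
    float z = texture2D(depthTex, gl_TexCoord[0].xy).r;

    // Linearize so the depth gradient becomes visible on screen.
    float linearZ = (2.0 * near) / (far + near - z * (far - near));

    gl_FragColor = vec4(vec3(linearZ), 1.0);
}
```

If the image is still uniformly flat after linearizing, the texture binding or the depth attachment itself is the more likely culprit.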

I finally succeeded in eliminating most of the visual artifacts in my DOF shader! My biggest problem was the bleeding caused by merging the blur buffer with the scene. I played a bit with the kernel and sampler values and finally found a way to attenuate it.

Still, I wanted to try other approaches, so I wrote another version of the DOF shader. In this one, instead of blurring the whole scene and merging it with the initial image based on the depth buffer, I do the blur passes by interpolating the kernel value as a function of the depth buffer. This technique is far more GPU intensive, but it yields better results: it completely eliminates the bleeding artifact, gives a smoother blur, and gives the impression of a smoother focal transition when moving the camera.
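The depth-dependent kernel idea described above can be sketched roughly like this. This is only my reading of the technique, not the actual shader from the demo; every uniform name (`sceneTex`, `depthTex`, `focalDepth`, `blurScale`, `texelSize`) is an assumption:

```glsl
// Sketch: per-pixel blur radius interpolated from the depth buffer,
// so in-focus pixels never fetch far-away (blurred) neighbors.
uniform sampler2D sceneTex;  // rendered scene
uniform sampler2D depthTex;  // scene depth
uniform vec2 texelSize;      // 1.0 / viewport dimensions
uniform float focalDepth;    // depth that should stay sharp
uniform float blurScale;     // maximum blur radius, in texels

void main()
{
    vec2 uv = gl_TexCoord[0].xy;
    float depth = texture2D(depthTex, uv).r;

    // Kernel radius grows with the distance from the focal plane.
    float radius = clamp(abs(depth - focalDepth) * blurScale, 0.0, blurScale);

    // 9-tap horizontal pass; a second pass would do the vertical direction.
    vec4 sum = vec4(0.0);
    for (int i = -4; i <= 4; i++)
    {
        sum += texture2D(sceneTex,
                         uv + vec2(float(i) * radius * texelSize.x, 0.0));
    }
    gl_FragColor = sum / 9.0;
}
```

The extra cost compared to the fast version comes from sampling depth and computing a variable radius for every tap position, instead of blurring everything once and compositing afterwards.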

Here is a picture that shows the two techniques:

The fast DOF is probably better for games, though: the visual difference isn't that noticeable, but the performance is far better (2x faster: 200 fps vs. 400 fps on my 8800 GT).

I tested what you suggested and did some ping-pong with the blur buffer. It was a worthwhile experiment, and it confirmed what I was thinking: it produces a sharper but more aliased blur. The reason is that when using the same buffer for input and output, once a pixel is blurred it is written to the output buffer, so the other pixels do their blur computations with the values of already-blurred pixels. This creates a very smooth blur and helps reduce aliasing. Performance is not affected by using two buffers.

I must admit, though, that I don't fully understand how this works, because modern GPUs are supposed to do their computations in parallel, so there is no guarantee that all the neighbors of the currently processed pixel are done processing. My guess is that the computing time per pixel varies each frame, and thus so does the resulting image, though the difference is so minimal that we don't see it.

In my opinion, using the same buffer for input and output produces a better-looking blur, so dof_demo3.rar (see my previous post) will certainly be the final version of my DOF shader.

Yes, I am French. Did I make some English mistakes? Well, I should say that I *speak* French; I live in Quebec. And no, I don't have a blog.

I tried GeeXLab a few weeks ago, so I am quite a beginner with this tool, but I must say it is so easy to learn and use that it's now hard for me to go back to the tools/engines I used before for shaders and demos. I've seen and used a lot of engines, but GeeXLab (and Demoniak3D) really is unique. It's a good compromise between tool and engine, offering the control that only a programming language can provide together with the ease of use of RAD applications. I can only admire the work you've done creating this tool, and I will probably use it extensively for my future projects.
