Screen Space Ambient Occlusion

“Ambient occlusion is a shading method used in 3D computer graphics which helps add realism to local reflection models by taking into account attenuation of light due to occlusion” [Wikipedia.org]

The proper way to do ambient occlusion is very expensive on today’s hardware, especially without a ray-tracing renderer. For that reason a few new techniques were developed that try to approximate the same result using simpler and faster algorithms. One of these approaches is Screen Space Ambient Occlusion (aka SSAO), which performs the ambient occlusion calculations in 2D space, just like a normal 2D filter. This article will not go deep into what SSAO is; there are many great readings around the web that cover the theory behind SSAO. Instead we will jump into the implementation that AnKi uses.

There are quite a few techniques to produce SSAO; a top-level way to group them is by the resources they need to produce the SSAO:

AnKi implemented the second technique for quite some time. The results were good but not good enough, so for the past couple of weeks I’ve been implementing the third technique, following this great article from gamedev. The present article extends the gamedev one by adding a few optimization tips and by presenting the code in GLSL.

Below you can see the old and the new SSAO, rendered at half the original rendering size with two blurring passes.

Old implementation of SSAO

New implementation of SSAO

The whole scene with the new SSAO

In order to produce the SSAO factor we practically need three inputs for every fragment: the position of the fragment in view or world space (view space in our case), the normal of that fragment, and a random vector that we obtain from a noise texture. Since AnKi uses a deferred shading renderer, the normals of the scene are already available in a texture. The gamedev article suggests storing the view space positions in a texture, but AnKi deliberately avoids that. Storing positions in a texture is very expensive, so instead we use a few techniques to reconstruct the position from the depth buffer.
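Fetching the first two of those inputs might look like the following sketch. The sampler names, the normal packing and the noise tiling scheme are assumptions for illustration; AnKi’s actual G-buffer layout may differ.

```glsl
uniform sampler2D msNormalFai; // normals written by the deferred pass
uniform sampler2D noiseMap;    // small tiling RGB noise texture

// screenSize / noiseTexSize, so the noise texture tiles over the screen
uniform vec2 noiseScale;

in vec2 vTexCoords;

vec3 readNormal()
{
	// Assuming normals are stored in [0, 1]; unpack back to [-1, 1]
	return normalize(texture(msNormalFai, vTexCoords).xyz * 2.0 - 1.0);
}

vec3 readRandomVec()
{
	// Tile the noise texture and unpack to a random unit vector
	return normalize(texture(noiseMap, vTexCoords * noiseScale).xyz * 2.0 - 1.0);
}
```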

To obtain the fragment’s position in view space using the fragment’s depth we do:
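A common way to reconstruct the view space position from the depth buffer, assuming a perspective projection, is sketched below. The uniform names (`planes`, `limitsOfNearPlane`, `zNear`) are illustrative, not necessarily AnKi’s actual ones.

```glsl
uniform sampler2D msDepthFai; // depth buffer from the deferred pass

// planes.x, planes.y: precomputed from the near/far clip planes so that
// the hardware depth value can be linearized into view space z
uniform vec2 planes;
// Half-extents of the near plane in view space, used to recover x and y
uniform vec2 limitsOfNearPlane;
uniform float zNear;

in vec2 vTexCoords;

vec3 getViewSpacePosition()
{
	float depth = texture(msDepthFai, vTexCoords).r;

	// Linearize depth into view space z (negative in front of the
	// camera, following OpenGL conventions)
	float z = -planes.y / (planes.x + depth);

	// The ray through this pixel hits the near plane at
	// (ndc * limitsOfNearPlane, -zNear); x and y scale linearly with z
	vec2 ndc = vTexCoords * 2.0 - 1.0;
	vec2 xy = ndc * limitsOfNearPlane * (z / -zNear);

	return vec3(xy, z);
}
```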

The position is the vertex coordinate of the quad; the coordinates are {1.0, 1.0}, {0.0, 1.0}, {0.0, 0.0}, {1.0, 0.0}. The values are pretty easy to digest and they double as the texture coordinates. This way we don’t pass a separate vertex attribute for the texture coordinates.
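The vertex shader for such a quad can be sketched as follows. Since the positions are already in [0, 1], they serve directly as texture coordinates and only need remapping to clip space; the attribute and varying names are illustrative.

```glsl
// Positions of the full-screen quad: {1,1}, {0,1}, {0,0}, {1,0}
layout(location = 0) in vec2 position;

out vec2 vTexCoords;

void main()
{
	// Reuse the [0, 1] position directly as texture coordinates
	vTexCoords = position;

	// Remap [0, 1] to clip space [-1, 1]
	gl_Position = vec4(position * 2.0 - 1.0, 0.0, 1.0);
}
```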