
Fullscreen Motion Blur on iDevices with OpenGL ES 1+

This, sadly, is my last post for this cycle, but I promise I’ll be back. It’s been a lot of fun being on the rotation, and it helped me a lot to share my findings.

However, for this final post, I’ve picked something special. A lot of people have asked me how we do the motion blur in our latest game Nuts! and our very-soon-to-be-released game Zombie Gunship. The technique is by no means new, but the fact that it works so beautifully on the iDevices, and its simplicity, really seal the deal for me.

Showcase

First of all, let me give you some arguments for why the motion blur is so cool.

In the case of Nuts!, it is actually pretty hidden. The only place where you can see it is when you pick up a fireball nut. But as you can see in the screenshots (and even more so when you play the actual game), the motion blur adds a lot of “speed” feeling to those nuts. The whole fireball effect is a lot more convincing with the motion blur. Interestingly, the motion blur is only used in those situations and runs at half the resolution of the rest of the game, but this is not noticeable because of the temporal blurring. Even when the resolution switches back to the full 640×960 once the effect has worn off, no popping is noticeable.

In the case of Zombie Gunship, the visuals of the whole game are essentially built around this effect. It gives the game the look of an 80s warplane targeting computer, with its artificial “imperfection”. Also, as you can see in the screenshots, we’re actually running at quite a low resolution (480×320), and the models are quite low-res as well. But with the motion blur the game looks a lot smoother, and it’s harder to make out individual pixels.

Since it is a temporal blur by its nature, it is actually harder to see in screenshots.

How it’s done

The best thing about this technique is that it’s super simple. It even works in OpenGL ES 1, and like many post-processing effects it can be dropped into a game very easily.

In a traditional rendering setup on iOS, we would bind the final framebuffer, then draw the solid geometry, the blended geometry, and then the UI on top. Finally, we would present the renderbuffer and the frame is done.

With motion blur, instead of rendering into the final framebuffer, we render into an intermediate framebuffer backed by a color texture. For us, this buffer is usually half the size of the final framebuffer. Once we’ve rendered the solid and blended geometry into this buffer, we enable alpha blending and render the intermediate texture into a so-called accumulation buffer with an alpha value smaller than one. This accumulation buffer is only cleared when the blur begins. Finally, the accumulation buffer is rendered into the final framebuffer.

In pseudocode, it looks something like this:

Traditional Rendering:

ActivateFinalFramebuffer();
Clear();
RenderScene();
RenderUI();
Present();

With Motion Blur:

ActivateIntermediateFramebuffer();
Clear();
RenderScene();

ActivateAccumulationFramebuffer();
// No clear here!
RenderIntermediateTextureWithAlpha(alpha);

ActivateFinalFramebuffer();
RenderAccumulationTexture();
RenderUI();
Present();

As you can see, you “just” need to add a few calls to your -(void) draw method in order to add the motion blur, and you can turn it on and off on the fly.
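For the accumulation step itself (RenderIntermediateTextureWithAlpha in the pseudocode above), a rough OpenGL ES 1.1 sketch might look like this. This is a hedged illustration, not our actual code: the handle names are placeholders, and it assumes a valid EAGL context, identity modelview/projection matrices, and a viewport covering the accumulation buffer.

```c
/* Sketch: blend the intermediate texture into the accumulation buffer.
   accumFramebuffer and intermediateTexture are placeholder handles. */
void RenderIntermediateTextureWithAlpha(GLuint accumFramebuffer,
                                        GLuint intermediateTexture,
                                        GLfloat alpha)
{
    /* Fullscreen quad in normalized device coordinates. */
    static const GLfloat verts[] = { -1,-1,  1,-1,  -1,1,  1,1 };
    static const GLfloat uvs[]   = {  0, 0,  1, 0,   0,1,  1,1 };

    glBindFramebufferOES(GL_FRAMEBUFFER_OES, accumFramebuffer);
    /* Deliberately no glClear() here: old frames must stay in the buffer. */

    glDisable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glColor4f(1.0f, 1.0f, 1.0f, alpha);   /* smaller alpha = longer trail */

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, intermediateTexture);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, verts);
    glTexCoordPointer(2, GL_FLOAT, 0, uvs);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
```

RenderAccumulationTexture would look almost the same, just with blending disabled and the final framebuffer bound.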

The smaller the alpha, the longer the blur, because less of each pixel is “overwritten” every frame. In the first frame, a pixel’s contribution to the final pixel value is alpha; in the second frame it is alpha*(1-alpha), then alpha*(1-alpha)^2, and so on, so it slowly fades out over time.

Of course, alpha can be varied every frame. We use that in Nuts! to slowly fade out the fireball effect at the end.

Two small remarks

One simple idea for optimization would be to use the final framebuffer as the accumulation buffer. This would save us one full-screen quad rendering operation. However, the framebuffer on iOS is at least double buffered. That means every second frame has a different render target, which leads to a very choppy and mind-twisting blur effect. Also, if you want to display non-blurred components, such as UI and text, they should be rendered into the final framebuffer, after the accumulation buffer has been rendered.

Another thing to note is that the first frame needs to have alpha=1, e.g. the moment the fireball nut is picked up in Nuts!. This makes sure the accumulation buffer is properly initialized and doesn’t contain any very old data.

Is this right for rendering? To my understanding:

1. render the scene to the intermediate texture
2. render the intermediate texture with alpha to the accumulation texture
3. render the accumulation texture to the final color buffer

Hey, sorry, I can’t release any source code for this. But the pseudocode above should be a good start. Also, you can google “accumulation buffer motion blur”, which should give you a few interesting hints (mostly for desktop OpenGL, though).