Non-uniform Deblurring for Shaken Images

Overview

In general, blur resulting from camera shake is mostly due to the 3D
rotation of the camera, causing a blur that can be significantly
non-uniform across the image. However, most previous deblurring
methods model the observed image as a convolution of a sharp image
with a uniform blur kernel. We propose a new model of the blurring process and apply this model in the
context of two different algorithms for camera shake removal, showing that our approach makes it possible to model and
remove camera shake more effectively than previous methods.

Blur Model

In the paper, we show that the blur arising from camera shake is caused mainly by the 3D rotation of the camera while the shutter is open. For a perspective camera, we know that rotation of the camera leads to a 2D projective transformation of the image, as shown in image (a) below, which displays the apparent motion of scene points under two simple rotations of the camera. Note that in both cases, the motion clearly varies across the image.
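For a camera with internal calibration matrix K, a pure rotation R of the camera induces the homography H = K R K^-1 on the image plane. The following sketch (illustrative only, not code from the paper; the focal length and principal point are hypothetical values) shows how even a small in-plane rotation moves pixels far from the image centre more than pixels near it:

```python
import numpy as np

def rotation_matrix(theta_x, theta_y, theta_z):
    """Camera rotation from Euler angles (radians)."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(theta_x), -np.sin(theta_x)],
                   [0, np.sin(theta_x),  np.cos(theta_x)]])
    Ry = np.array([[ np.cos(theta_y), 0, np.sin(theta_y)],
                   [0, 1, 0],
                   [-np.sin(theta_y), 0, np.cos(theta_y)]])
    Rz = np.array([[np.cos(theta_z), -np.sin(theta_z), 0],
                   [np.sin(theta_z),  np.cos(theta_z), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def homography_from_rotation(R, K):
    """H = K R K^-1: the image warp induced by rotating the camera by R."""
    return K @ R @ np.linalg.inv(K)

def warp_point(H, x, y):
    """Apply homography H to pixel (x, y)."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Hypothetical calibration: 1000 px focal length, principal point (320, 240).
K = np.array([[1000., 0., 320.],
              [0., 1000., 240.],
              [0., 0., 1.]])

# A small rotation about the optical axis: the principal point stays fixed,
# while points far from it move further -- the motion is non-uniform.
H = homography_from_rotation(rotation_matrix(0.0, 0.0, 0.01), K)
```

Note that no such closed-form warp exists for a convolutional (translation-only) model, which is why convolution cannot reproduce the motion fields shown in image (a).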

Traditionally, blur has been modelled as the convolution of a sharp image with a blur kernel; however, this model is clearly insufficient to describe the effects seen below. Convolution models a blurry image as a weighted sum of translated versions of the sharp image, whereas in fact, while the shutter is open, the camera sees a sequence of projectively-transformed versions of the sharp image. Images (b) & (c) below show the difference between these two models.

(a) Apparent motion of scene points under camera rotations

(b) Blur by convolution

(c) Blur by rotation of camera

In the same way that a convolution kernel can be considered as a set of weights used to sum up translated versions of the sharp image, we define a blur kernel for our model to be the set of weights used to sum up projectively-transformed versions of the sharp image. The parameter space for camera rotations is 3D, giving us 3D blur kernels, as shown below. Such a kernel in fact describes a different blur at each pixel of the image, as illustrated by the point spread functions (PSFs) plotted at several locations on top of the blurry image itself.
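The forward model above can be sketched in a few lines: a blurry image is synthesised as a weighted sum of homography-warped copies of the sharp image, one weight per sampled camera orientation. This is an illustrative re-implementation, not the authors' code; for brevity the 3D orientation space is reduced to in-plane (z-axis) rotation and a nearest-neighbour warp is used:

```python
import numpy as np

def warp_nearest(img, H):
    """Warp img by homography H, via inverse nearest-neighbour mapping."""
    h, w = img.shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = Hinv @ np.stack([xs.ravel(), ys.ravel(),
                           np.ones(h * w)]).astype(float)
    u = np.rint(pts[0] / pts[2]).astype(int)
    v = np.rint(pts[1] / pts[2]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out = np.zeros_like(img, dtype=float)
    out.ravel()[valid] = img[v[valid], u[valid]]
    return out

def blur_by_rotation(sharp, K, angles, weights):
    """Weighted sum of projectively-transformed copies of the sharp image.
    'weights' plays the role of the 3D blur kernel; here only z-axis
    rotation is sampled, whereas the full model samples all three axes."""
    Kinv = np.linalg.inv(K)
    out = np.zeros_like(sharp, dtype=float)
    for theta, w in zip(angles, weights):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
        out += w * warp_nearest(sharp, K @ R @ Kinv)  # H = K R K^-1
    return out
```

With a single orientation of weight 1 the "blur" is just a warp; spreading the weight over a range of orientations produces the spatially-varying PSFs shown in the figure, without any per-pixel kernel being stored explicitly.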

Camera axes

The weights, or "blur kernel", defined over orientations of the camera

A blurry image, with the equivalent PSFs overlaid at several locations

Applications

In order to demonstrate the effectiveness of our model, we substitute it into two existing algorithms for removing camera shake, in place of their convolutional blur models, and show superior results and the ability to handle highly non-uniform blurs.

Blind Deblurring

We have adapted the camera shake-removal algorithm of Fergus et al. [1], based on the variational deblurring method of Miskin & MacKay [2], to use our rotational blur model.

Blurry image

Deblurred using uniform blur model

Deblurred using our model

We have also adapted the fast blind deblurring algorithm of Cho & Lee [4] to use our rotational blur model.
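As background on what such algorithms iterate over, the MAP approach alternates between estimating the blur kernel given the current sharp estimate, and estimating the sharp image given the current kernel. The toy 1D sketch below (purely illustrative; it omits the gradient-domain prediction, shock filtering and coarse-to-fine scheme of Cho & Lee, and all names are our own) shows the alternating least-squares structure:

```python
import numpy as np

def conv_full_matrix(a, m):
    """Matrix A such that A @ k == np.convolve(a, k) for len(k) == m."""
    n = len(a)
    A = np.zeros((n + m - 1, m))
    for j in range(m):
        A[j:j + n, j] = a
    return A

def blind_deblur_1d(b, n, m, iters=20, lam=1e-3):
    """Toy alternating MAP estimation: b (length n+m-1) is the full
    convolution of an unknown signal a (length n) with a kernel k
    (length m). Alternate least-squares updates of k and of a."""
    a = b[m // 2 : m // 2 + n].copy()   # crude initial sharp estimate
    k = np.zeros(m)
    k[m // 2] = 1.0                     # start from a delta kernel
    for _ in range(iters):
        # Kernel step: minimise ||conv(a, k) - b||^2 over k.
        A = conv_full_matrix(a, m)
        k = np.linalg.lstsq(A, b, rcond=None)[0]
        # Image step: minimise ||conv(a, k) - b||^2 + lam*||a||^2 over a.
        B = conv_full_matrix(k, n)      # convolution commutes
        a = np.linalg.solve(B.T @ B + lam * np.eye(n), B.T @ b)
    return a, k
```

In our adaptation, the convolution operator in the kernel step is replaced by the weighted sum of projectively-transformed images, so the unknowns become weights over camera orientations rather than over translations.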

Blurry image

Deblurred using uniform blur model

Deblurred using our model

Noisy/Blurry Image Pairs

Following the approach of Yuan et al. [3], we also apply our model in the situation where an additional sharp but noisy image of the same scene is available. This enables us to find a good estimate of the kernel with a standard convex optimization algorithm, and to obtain a deblurred result free from "ringing" artifacts.
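The reason this case is convex: with the sharp image (approximately) known from the noisy exposure, the blurry image is linear in the kernel weights, so the weights solve a non-negative least-squares problem. A toy sketch, using integer translations as a stand-in for the projectively-transformed copies and plain projected-gradient descent (function names and parameters are illustrative, not from the paper):

```python
import numpy as np

def shifted(img, dx, dy):
    """Integer-shifted, zero-padded copy of img (a stand-in for the
    projectively-transformed copies used in the rotational model)."""
    out = np.zeros_like(img)
    h, w = img.shape
    out[max(dy, 0):min(h + dy, h), max(dx, 0):min(w + dx, w)] = \
        img[max(-dy, 0):h - max(dy, 0), max(-dx, 0):w - max(dx, 0)]
    return out

def estimate_kernel(sharp, blurry, shifts, iters=500):
    """Non-negative least squares for the kernel weights w:
    minimise ||A w - b||^2 subject to w >= 0, by projected gradient."""
    # Each column of A is one transformed copy of the sharp image.
    A = np.stack([shifted(sharp, dx, dy).ravel() for dx, dy in shifts],
                 axis=1)
    b = blurry.ravel()
    w = np.full(len(shifts), 1.0 / len(shifts))
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, Lipschitz step size
    for _ in range(iters):
        w = np.clip(w - step * (A.T @(A @ w - b)), 0.0, None)
    return w / w.sum()
```

Because the objective is convex, any standard non-negative least-squares solver can be substituted for the gradient loop; the recovered weights then define the full non-uniform blur at every pixel.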

Abstract

Blur from camera shake is mostly due to the 3D rotation of the camera,
resulting in a blur kernel that can be significantly non-uniform
across the image. However, most current deblurring
methods model the observed image as a convolution of a sharp image
with a uniform blur kernel. We propose a new parametrized
geometric model of the blurring process in terms of the rotational
velocity of the camera during exposure. We apply this model
to two different algorithms for camera shake removal: the
first one uses a single blurry image (blind deblurring), while the second
one uses both a blurry image and a sharp but noisy image of the same
scene. We show that our approach makes it possible to model and
remove a wider class of blurs than previous approaches, including
uniform blur as a special case, and demonstrate its effectiveness with
experiments on real images.

Abstract

Photographs taken in low-light conditions are often blurry as a result of
camera shake, i.e. a motion of the camera while its shutter is open. Most
existing deblurring methods model the observed blurry image as the convolution
of a sharp image with a uniform blur kernel. However, we show that blur from
camera shake is in general mostly due to the 3D rotation of the camera,
resulting in a blur that can be significantly non-uniform across the
image. We propose a new parametrized geometric model of the blurring process
in terms of the rotational motion of the camera during exposure. This model is
able to capture non-uniform blur in an image due to camera shake using a
single global descriptor, and can be substituted into existing deblurring
algorithms with only small modifications. To demonstrate its effectiveness, we
apply this model to two deblurring problems; first, the case where a single
blurry image is available, for which we examine both an approximate
marginalization approach and a maximum a posteriori approach, and second, the
case where a sharp but noisy image of the scene is available in addition to
the blurry image. We show that our approach makes it possible to model and
remove a wider class of blurs than previous approaches, including uniform blur
as a special case, and demonstrate its effectiveness with experiments on
synthetic and real images.

This package contains code to perform fast blind deblurring of images degraded by camera shake, using the MAP algorithm described in our IJCV 2012 paper, and the fast approximation of spatially-varying blur described in our CPCV 2011 paper.