Pictures and videos taken with smartphone cameras often suffer from motion
blur due to hand shake during the exposure. Recovering a sharp frame from
a blurry one is an ill-posed problem, but in smartphone applications additional
cues can aid the solution. We propose a blur removal algorithm that exploits
information from subsequent camera frames and the built-in inertial sensors of
an unmodified smartphone. We extend the fast non-blind uniform blur removal
algorithm of Krishnan and Fergus to non-uniform blur and to multiple input
frames. We estimate piecewise uniform blur kernels from the smartphone's
gyroscope measurements and adaptively steer our multiframe
deconvolution framework towards the sharpest input patches. We show in
qualitative experiments that our algorithm can remove synthetic and real blur
from individual frames of a degraded image sequence within a few seconds.
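To illustrate the kernel-estimation step, the sketch below rasterizes a blur kernel from gyroscope samples recorded during the exposure. It is a minimal stand-in, not the authors' projection model: it assumes pure rotation about the two in-plane axes, a small-angle approximation (pixel shift ≈ focal length × angle), a principal point at the image center, and a single kernel rather than the piecewise uniform kernels estimated per patch in the paper. The helper name `gyro_to_kernel` is hypothetical.

```python
import numpy as np

def gyro_to_kernel(omega, timestamps, focal_px, ksize=15):
    """Rasterize a blur kernel from gyroscope samples (hypothetical
    helper; small-angle approximation, in-plane rotations only).

    omega      : (n, 2) angular velocities in rad/s mapping to (dx, dy)
    timestamps : (n,) sample times in seconds spanning the exposure
    focal_px   : focal length in pixels
    """
    # integrate angular velocity to rotation angles over the exposure
    dt = np.diff(timestamps)
    theta = np.cumsum(omega[:-1] * dt[:, None], axis=0)  # (n-1, 2) rad
    # small-angle projection: pixel displacement ~ focal length * angle
    traj = focal_px * theta
    traj -= traj.mean(axis=0)  # center the trajectory in the kernel
    k = np.zeros((ksize, ksize))
    c = ksize // 2
    for dx, dy in traj:
        ix = int(np.rint(c + dx))
        iy = int(np.rint(c + dy))
        if 0 <= ix < ksize and 0 <= iy < ksize:
            k[iy, ix] += 1.0  # accumulate dwell time along the path
    s = k.sum()
    return k / s if s > 0 else k
```

A constant rotation about one axis, for instance, produces the expected straight horizontal streak through the kernel center.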
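The deconvolution step builds on Krishnan and Fergus's half-quadratic splitting scheme. The following single-frame sketch shows the alternation that scheme is built on, under simplifying assumptions: circular boundary conditions, a single uniform kernel, and an l1 soft-shrinkage step standing in for the analytic hyper-Laplacian (alpha = 1/2) w-subproblem of the original paper; the multiframe and non-uniform extensions described above are omitted.

```python
import numpy as np

def psf2otf(kernel, shape):
    """Embed a small kernel in a full-size array, shift its center to
    the origin, and take the FFT (circular convolution transfer fn)."""
    pad = np.zeros(shape)
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(pad)

def deconvolve(blurry, kernel, lam=1e3, iters=10):
    """Non-blind deconvolution by half-quadratic splitting with a
    sparse-gradient prior, in the spirit of Krishnan and Fergus
    (simplified: l1 shrinkage instead of the alpha = 1/2 solver)."""
    fx = np.array([[1.0, -1.0]])    # horizontal gradient filter
    fy = np.array([[1.0], [-1.0]])  # vertical gradient filter
    K = psf2otf(kernel, blurry.shape)
    FX = psf2otf(fx, blurry.shape)
    FY = psf2otf(fy, blurry.shape)
    KtB = np.conj(K) * np.fft.fft2(blurry)
    grad_energy = np.abs(FX) ** 2 + np.abs(FY) ** 2
    x = blurry.copy()
    beta = 1.0
    for _ in range(iters):
        # w-subproblem: soft-shrink the gradients of the estimate
        Xf = np.fft.fft2(x)
        gx = np.real(np.fft.ifft2(FX * Xf))
        gy = np.real(np.fft.ifft2(FY * Xf))
        wx = np.sign(gx) * np.maximum(np.abs(gx) - 1.0 / beta, 0.0)
        wy = np.sign(gy) * np.maximum(np.abs(gy) - 1.0 / beta, 0.0)
        # x-subproblem: quadratic, solved exactly in the Fourier domain
        num = lam * KtB + beta * (np.conj(FX) * np.fft.fft2(wx)
                                  + np.conj(FY) * np.fft.fft2(wy))
        den = lam * np.abs(K) ** 2 + beta * grad_energy
        x = np.real(np.fft.ifft2(num / den))
        beta *= 2.0  # continuation: tighten the splitting each iteration
    return x
```

Because the x-subproblem is solved with a handful of FFTs per iteration, the inner loop stays fast, which is what makes the few-seconds running time quoted above plausible for this family of solvers.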