The interplay of camera position and focal length allows a photographer to achieve different compositions of the same scene. Moving the camera away from the scene while increasing the focal length f affects the sense of depth of the scene, as well as the relative magnification of objects at different depths; see (a) through (c). Note that the woman did not move while these three pictures were being taken. Given a stack of images captured with a fixed focal length at different distances from the scene, our framework allows the composition of the scene to be modified after capture, and leverages multi-perspective cameras for added flexibility. This is shown in the animation in (d). The colors overlaying the key-frames of the animation indicate regions imaged with different focal lengths, shown in the visualization on the right.
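The effect in (a) through (c) follows from the pinhole magnification relation: an object of height H at depth z projects to a size of roughly f·H/z. Scaling the focal length in proportion to the subject distance keeps the subject the same size while magnifying the background, which flattens the apparent depth. A minimal sketch of this relation (the specific distances, heights, and focal lengths are hypothetical, chosen only for illustration):

```python
def projected_size(f, H, z):
    """Image-plane size of an object of height H at depth z,
    under the pinhole (thin-lens, z >> f) approximation: f * H / z."""
    return f * H / z

subject_h, background_h = 1.7, 3.0  # object heights in meters

# Wide-angle, close: f = 24 mm, subject 2 m away, background 10 m away.
s_wide = projected_size(24, subject_h, 2.0)
b_wide = projected_size(24, background_h, 10.0)

# Telephoto, far: move the camera back 4 m (subject now 6 m away) and
# scale f by the same factor, 24 * (6 / 2) = 72 mm, so the subject's
# projected size is unchanged.
s_tele = projected_size(72, subject_h, 6.0)
b_tele = projected_size(72, background_h, 14.0)

print(s_wide, s_tele)  # equal: the subject keeps its size in the frame
print(b_wide, b_tele)  # b_tele > b_wide: the background appears larger,
                       # "compressing" the perceived depth of the scene
```

This is why the woman appears the same size in all three frames while the background grows: the camera-to-subject distance and the focal length change in lockstep.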

Abstract

Capturing a picture that "tells a story" requires the ability to create the right composition. The two most important parameters controlling composition are the camera position and the focal length of the lens. The traditional paradigm is for a photographer to mentally visualize the desired picture, select the capture parameters to produce it, and finally take the photograph, thus committing to a particular composition. We propose to change this paradigm by introducing computational zoom, a framework that allows a photographer to manipulate several aspects of composition after capture, using a stack of pictures taken at different distances from the scene. We further define a multi-perspective camera model that can generate compositions that are not physically attainable, thus extending the photographer's control over factors such as the relative size of objects at different depths and the sense of depth of the picture. We show several applications and results of the proposed computational zoom framework.