Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Abstract

Digital photographs and video are exciting inventions that let us capture the visual experience of events around us in a computer and re-live it, albeit in a restricted manner. Photographs capture only snapshots of a dynamic event, and while video does capture motion, it is recorded from pre-determined positions and consists of images discretely sampled in time, so its timing cannot be changed.

This thesis presents an approach for re-rendering a dynamic event from an arbitrary viewpoint with arbitrary timing, using images captured from multiple video cameras. The event is modeled as a non-rigidly varying dynamic scene observed in many images from different viewpoints at discretely sampled times. First, the spatio-temporal geometric properties of the scene (shape and instantaneous motion) are computed. Scene flow is introduced as a measure of non-rigid motion, along with algorithms that compute it together with the scene shape. The novel view synthesis problem is then posed as one of recovering corresponding points in the original images, using the shape and scene flow. A reverse mapping algorithm, ray-casting across space and time, is developed to compute a novel image from any viewpoint in the 4D space of position and time. Results are shown on real-world events captured in the CMU 3D Room by creating synthetic renderings of the events from novel, arbitrary positions in space and time. Multiple such renderings can be combined into re-timed fly-by movies of an event, yielding a visual experience richer than that of a regular video clip or of simply switching between frames from multiple cameras.
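To fix the central idea, scene flow can be pictured as a per-point 3D velocity attached to the scene's surface. The sketch below is purely illustrative (not the thesis algorithm): it assumes a recovered point set and its scene flow at one sampled instant, and shows how a first-order linear-motion assumption lets the shape be predicted at an arbitrary intermediate time, which is the kind of temporal interpolation that re-timed rendering relies on. The function name and data are hypothetical.

```python
import numpy as np

def interpolate_shape(points_t, flow_t, dt):
    """Predict surface point positions at time t + dt by first-order
    extrapolation along the scene flow (assumes locally linear motion).

    points_t : (N, 3) array of 3D surface points recovered at time t
    flow_t   : (N, 3) array of scene flow vectors dX/dt at those points
    dt       : time offset, in units of the flow (e.g. frames)
    """
    return points_t + dt * flow_t

# Two example points, each with its own non-rigid motion.
points_t = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0]])
flow_t = np.array([[0.0, 1.0, 0.0],    # moving along +Y
                   [0.0, 0.0, 2.0]])   # moving along +Z, faster

# Shape halfway between two sampled frames (dt = 0.5).
mid = interpolate_shape(points_t, flow_t, 0.5)
```

In the full pipeline each such interpolated 3D point would then be projected into the nearby input cameras to fetch corresponding pixels, which is what the ray-casting step across space and time accomplishes in reverse.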