We present an algorithm designed for navigating around a performance that
was filmed as a "casual" multi-view video collection: real-world footage
captured on handheld cameras by a few audience members. The objective is
easy navigation in 3D, generating a video-based rendering (VBR) of a
performance filmed with widely separated cameras. Casually filmed events
are especially challenging because they yield footage with complicated
backgrounds and camera motion. Such challenging conditions preclude the
use of most algorithms that depend on correlation-based stereo or 3D
shape-from-silhouettes.

Our algorithm builds on the concepts developed for the exploration of
photo collections of empty scenes. Interactive, performer-specific
view interpolation is now possible through innovations in interactive
rendering and offline matting: i) modeling the foreground subject as
video sprites on billboards, ii) modeling the background geometry with
adaptive view-dependent textures, and iii) view interpolation that
follows a performer. The billboards are embedded in a
simple but realistic reconstruction of the environment. The reconstructed
environment provides very effective visual cues for spatial navigation as
the user transitions between viewpoints. The prototype is tested on
footage from several challenging events and demonstrates the editorial
utility of the whole system, as well as the particular value of our new
billboard-to-billboard optimization.
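
To make the view-dependent blending concrete, the sketch below (in Python with
NumPy) weights each source camera by how closely its viewing ray toward the
subject matches that of the novel viewpoint, then cross-fades the best-matching
video sprites. The function name, the top-k camera selection, and the cosine
weighting are illustrative assumptions on our part, not the paper's exact
formulation.

import numpy as np

def view_blend_weights(novel_cam_pos, source_cam_positions, subject_pos, k=2):
    """Pick the k source cameras whose rays toward the subject best match
    the novel view's ray, and return normalized blending weights."""
    def unit(v):
        return v / np.linalg.norm(v)

    d_novel = unit(subject_pos - novel_cam_pos)          # novel view's ray
    dirs = np.array([unit(subject_pos - c) for c in source_cam_positions])
    cos_sim = dirs @ d_novel                             # angular proximity per camera
    best = np.argsort(-cos_sim)[:k]                      # k closest cameras
    w = np.clip(cos_sim[best], 0.0, None)                # ignore back-facing cameras
    total = w.sum()
    if total > 0:
        w = w / total                                    # normalize to sum to 1
    else:
        w = np.full(len(best), 1.0 / len(best))          # uniform fallback
    return best, w

# Example: cross-fade two billboard sprites as the virtual camera moves.
cams = np.array([[0.0, 0.0, 5.0], [4.0, 0.0, 3.0], [-4.0, 0.0, 3.0]])
idx, w = view_blend_weights(np.array([2.0, 0.5, 4.0]), cams, np.zeros(3))
# blended = w[0] * sprite[idx[0]] + w[1] * sprite[idx[1]]

The same weighting idea applies to the adaptive view-dependent textures on the
background geometry; there, the weights would modulate per-surface texture
contributions rather than whole sprites.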

Note: The code is optimized for quad-core machines with a GPU. Recommended configuration:
Intel Core i7, NVIDIA GTX 280, 7200 rpm hard drive, Windows Vista (32- or 64-bit). The
laptop version runs at half the video resolution and should play smoothly on a reasonably
capable dual-core laptop.

Related publication:

Acquiring Shape and Motion of Interacting People from Videos
L. Ballan and G. M. Cortelazzo [link]

Acknowledgments:

We thank Ralph Wiedemeier, Davide Scaramuzza, and Mark Rothman, whose performances constitute
the Juggler, Magician, and Rothman data, Nils Hasler and Juergen
Gall for the Climber videos, and Christopher Zach, David Gallup,
Oisin Mac Aodha, Mike Terry, and the anonymous reviewers for
help and valuable suggestions. The research leading to these results
has received funding from the ERC under the EC’s Seventh Framework
Programme (FP7/2007-2013) / ERC grant #210806, and from
the Packard Foundation.