A detailed account of learning to work with Lytro Cinema — a pioneering light-field capture system.

By David Stump, ASC

Images courtesy of Lytro

Light-field capture — a form of computational imaging — has been hailed by many as the future of filmmaking. While we are admittedly still in the early stages, the technology promises new opportunities in the way we capture images. I recently had the opportunity to serve as director of photography on Life, the first short film captured with Lytro Cinema — a pioneering light-field capture system introduced at NAB this year. This article relays my experience on the project, shot for director Robert Stromberg, and serves as an introduction to light-field cinematography for readers who may not be familiar with the concept.

Preparing for Life

As the chair of the Camera subcommittee for the ASC, and as a matter of personal interest, I’m often asked to evaluate new prototype technologies. When Lytro reached out to Robert and the Virtual Reality Company (VRC) production studio to assemble Life, I was asked to be the cinematographer, both through my ASC affiliations and in my role as chief imaging scientist at VRC.

The concept for Life was developed and designed by Robert, and the shots were defined in collaboration with Jeff Barnes, Lytro’s executive director of studio productions, to showcase the capabilities of the technology. The short is a visual poem that tells the story of a boy and girl as they traverse from youth to old age. It was the first use of the system on a production set — and that along with a tight deadline for our NAB premiere contributed to making Life a brave and exciting endeavor.

We knew going in that we needed to present Lytro Cinema as a practical, production-friendly solution, with images that could intercut seamlessly with conventional footage, so we decided to capture half of the shots on the Arri Alexa SXT. I want to relay a big thank-you to Arri for supporting the project and providing us with production gear.

Though it will get there very soon, the Lytro camera is not yet optimal for shooting an entire show or feature, due to the amount of data it generates and the size of the camera in its first-generation state. In its present incarnation, the camera is best used for shots that are visual-effects-oriented or otherwise impossible to capture traditionally.

Light Field

I have been following the development of light-field capture and plenoptics since the first research papers on the topic came out of Stanford University. There are different approaches to light-field capture, but the fundamental principles are generally the same. A light field is the collection of rays of light reflecting off the objects in one’s view. We interpret these rays from the two vantage points of our eyes, which helps our brain perceive an object’s position in the world. Thought of in terms of camera capture, traditional cameras record an image through a single lens, and stereo cameras through two; Lytro Cinema, however, captures light from many vantage points at once — as if an array of hundreds of thousands of cameras were standing side by side, perfectly synced to capture the scene. Lytro has developed several technologies that enable this type of capture, along with software that interprets the light field by computing the angles of the rays arriving at the sensor.
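To make the "array of cameras" idea concrete: once a scene exists as a grid of sub-aperture views, refocusing after the fact amounts to shifting each view in proportion to its offset from the central viewpoint and averaging — the classic shift-and-sum technique from the plenoptic-imaging literature. The sketch below is illustrative only; the grid size, disparity values and `refocus` helper are my own assumptions, not Lytro's actual pipeline.

```python
import numpy as np

def refocus(views, shift_per_view):
    """Shift-and-sum refocusing over a (U, V) grid of sub-aperture views.

    views: array of shape (U, V, H, W). shift_per_view selects the focal
    plane: it should cancel the per-view disparity of the depth you want
    in focus.
    """
    U, V, H, W = views.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round((u - cu) * shift_per_view))
            dx = int(round((v - cv) * shift_per_view))
            # Align this viewpoint's image with the central view, then sum.
            out += np.roll(views[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# Toy light field: one bright point whose position shifts by 1 pixel per view.
views = np.zeros((3, 3, 9, 9))
for u in range(3):
    for v in range(3):
        views[u, v, 4 + (u - 1), 4 + (v - 1)] = 1.0

in_focus = refocus(views, -1.0)     # shift cancels the disparity: sharp point
out_of_focus = refocus(views, 0.0)  # no shift: the point smears into a blur
```

With the matching shift, all nine views align and the point returns to full intensity at the center; with no shift, its energy spreads across nine pixels — the same mechanism that lets focus be chosen in post rather than on set.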

Light-field capture in the Lytro Cinema system is made possible through the use of a “micro-lens array,” the equivalent of millions of lenses built at the wafer scale and inserted between the main camera lens and the camera sensor. The micro-lens array takes the light within a scene and breaks it apart into the color, intensity and direction of the rays, which are captured by a collection of sensor pixels beneath each micro-lens. It takes a bit to digest all of this — and without getting too deep into the weeds, I would recommend watching some of the detailed video presentations, easily searchable online, given by Lytro’s head of light-field video, Jon Karafin.

The magic behind Lytro Cinema lies in the fact that each captured pixel has color properties, directional properties and a calculated awareness of its exact placement in space. As the process produces a lot of data, Lytro Cinema isn’t just a camera; it’s an end-to-end solution. The workflow includes a camera, a server for storing and processing light fields, and software plug-ins that work hand-in-hand with off-the-shelf production solutions.
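The per-pixel properties described above — color, direction and a computed placement in space — can be pictured as a small record attached to every sample. The following is a purely hypothetical sketch of such a record (Lytro’s actual internal representation is proprietary and not public):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class RaySample:
    """One hypothetical light-field sample: color plus ray geometry."""
    rgb: Tuple[float, float, float]        # color of the ray
    direction: Tuple[float, float, float]  # unit vector: angle of arrival
    origin: Tuple[float, float, float]     # computed position in space

    def point_at(self, t: float) -> Tuple[float, float, float]:
        """Position t units along the ray from its origin."""
        return tuple(o + t * d for o, d in zip(self.origin, self.direction))

# A sample traveling along the z-axis from the origin.
sample = RaySample(rgb=(1.0, 0.0, 0.0),
                   direction=(0.0, 0.0, 1.0),
                   origin=(0.0, 0.0, 0.0))
```

It is this geometric component — knowing where each ray came from, not just how bright it was — that separates a light-field capture from a conventional raster of pixels.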

Since we were going to be working with an alpha camera, my gaffer, Craig “Cowboy” Aines, and I flew up to the Lytro headquarters in Mountain View, Calif., early on to get a feel for the system and gauge our cabling and lighting needs. The camera has a variable length, extending from about 6’ to 11’ depending on framing and refocusing range, and the unit carries substantial weight, so from a planning perspective we needed to be adequately prepared for the mechanics of moving, panning, tilting and dollying on set. We also had to take into consideration the parameters of shooting at very high frame rates with a fixed-T-stop lens at the sensor’s native ISO, which was around 200. Given these circumstances, Cowboy and I went to Mole-Richardson to speak with honorary ASC member Larry Mole Parker, in order to spec out lamps that would give us the horsepower to create beams and shafts of light within smoke at the illumination levels we would need for a successful shoot.

One of my early mentors, Phil Lathrop, ASC — who shot The Pink Panther and the Peter Gunn television series using very high key-light levels — taught me how to control big lighting units and how to use hard light. It was very rewarding to draw on his mentorship as part of this project.

Set Life

The experience on set with Lytro Cinema was designed to be sensitive to the standard workflow of traditional cinematography. In addition to capturing the light-field data — comprising 755 raw megapixels with 16 stops of dynamic range — we took advantage of the camera’s ability to capture QuickTime ProRes, which allowed us to review files on set immediately after each take. Because the metadata in the light field is tied directly to the information in the preview capture, we could make on-set decisions — which were baked into the metadata of the file system — just as on a traditional shoot, and then see those versions in the real-time preview. Those files were then uploaded to our editor, Damien Acker, based at The Third Floor, who began cutting while we were still on set.

As a cinematographer, there really wasn’t much about the Lytro Cinema workflow that differed from a typical shoot; the Lytro on-set support team made the amount of data flowing through our project all but invisible to me. Their solution includes a drive array, and because captured data moved through a 100-meter fiber cable, tethering the camera to the drives imposed effectively no restriction on where we could work — the servers could be placed anywhere to accommodate sound considerations. The camera specs out at up to 300 fps, but for this project we shot most of our material between 24 and 120 fps, which still gave us the ability to manipulate all of the aspects of the image that make this camera unique. At 755 megapixels, the files coming off the camera contain an enormous amount of data, but the end goal for Lytro Cinema is for all of these files to be stored and processed in the cloud.