Computational Photography for Cinema | When Lenses Become Plugins

What is computational photography? I want you to put aside everything you know about the form and function of the camera. Whether digital or film, cameras are essentially no different in operation from those at the birth of photography.

The camera obscura is defined on Wikipedia as:

A camera obscura (Latin for “dark room”) is an optical device that led to photography and the photographic camera. The device consists of a box or room with a hole in one side. Light from an external scene passes through the hole and strikes a surface inside, where it is reproduced, inverted (thus upside-down), but with color and perspective preserved. (Wikipedia)

Until recently, the sum of all our various advancements in photographic technology has been focussed on improving the camera obscura.

Things are changing quickly. Before long we will have improved the image sensor in a dark box to the point of diminishing returns in packing in ever more photosites. Sensors will be as big (or as small) as we can make them, and it’s very likely that the physical, mechanical glass optics we love so much will become the limiting factor.

I believe we are close to the end of the digital imaging technology we know, but we are only at the beginning of what is to come.

Computational Photography

Computational photography as a term encompasses a wide range of technologies and applications based around methods of sensing and capturing light directly as data rather than a fixed and focussed optical image. This data is then processed to achieve a desired image that may only be one of many possible interpretations of the data.

One simple application of computational photography you’ll be familiar with is the act of stitching together multiple images to create a panorama that is larger than what was achievable within the field of view of a single exposure. This process has to take into account not only alignment, but also optical distortion and any color or luminance shift across the image that may result from the physical optics.
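The core of that process can be sketched in miniature. The toy functions below (illustrative names, not any real stitching library) align two overlapping one-dimensional luminance strips by searching for the overlap that minimises their squared difference, then average the overlap to smooth out a luminance shift between the two exposures; a real stitcher does the same in two dimensions with distortion correction on top.

```python
def estimate_offset(left, right, max_shift):
    """Find the overlap length (in samples) that best aligns the
    start of `right` with the tail of `left`, by minimising the
    mean squared difference over the candidate overlap."""
    best_shift, best_err = 0, float("inf")
    for shift in range(1, max_shift + 1):
        overlap_l = left[-shift:]
        overlap_r = right[:shift]
        err = sum((a - b) ** 2 for a, b in zip(overlap_l, overlap_r)) / shift
        if err < best_err:
            best_shift, best_err = shift, err
    return best_shift

def stitch(left, right, max_shift):
    """Merge two overlapping scanlines into one panorama row,
    averaging samples in the overlap to hide exposure differences."""
    shift = estimate_offset(left, right, max_shift)
    blended = [(a + b) / 2 for a, b in zip(left[-shift:], right[:shift])]
    return left[:-shift] + blended + right[shift:]
```

For example, `stitch([0, 1, 2, 3, 4, 5], [4, 5, 6, 7], 4)` detects the two-sample overlap and returns the merged row `[0, 1, 2, 3, 4, 5, 6, 7]`.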

Another familiar application of computational photography is the processing of HDR image data by combining various exposures to create one final HDR image.
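The idea behind HDR merging can also be shown in a few lines. This is a deliberately simplified sketch (in the spirit of classic radiance-recovery techniques, not any specific camera’s pipeline): each bracketed exposure is divided by its exposure time to estimate scene radiance, and a triangle weighting trusts mid-tones while discounting pixels near the clipping points.

```python
def weight(z, z_min=0.0, z_max=1.0):
    """Triangle weighting: trust mid-tones, distrust pixel values
    near the clipping points of each exposure."""
    mid = (z_min + z_max) / 2
    return (z - z_min) if z <= mid else (z_max - z)

def merge_hdr(exposures):
    """Merge bracketed exposures into one radiance estimate per pixel.
    `exposures` is a list of (pixels, exposure_time) pairs, with pixel
    values normalised to [0, 1] and assumed linear (no gamma)."""
    n = len(exposures[0][0])
    radiance = []
    for i in range(n):
        num = den = 0.0
        for pixels, t in exposures:
            w = weight(pixels[i])
            num += w * pixels[i] / t  # divide out exposure time
            den += w
        radiance.append(num / den if den > 0 else 0.0)
    return radiance
```

A pixel that reads 0.2 at one second and 0.4 at two seconds is describing the same scene radiance, and the merge recovers exactly that: `merge_hdr([([0.2], 1.0), ([0.4], 2.0)])` returns `[0.2]`.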

Both of these examples rely on traditional underlying imaging techniques and optics, but the most exciting technologies in development don’t necessarily rely on single sensors, or lenses as we know them at all. These technologies have the potential to capture far more data than light hitting a two dimensional plane.

Virtual Optics

Imagine that all the beautiful photo and cine lenses, past and present, exist not physically, to be bought and sold, insured, flown around the world from shoot to shoot, and lugged from location to location in heavy protective cases. Instead, these lenses exist in a huge database of precision laser-mapped mathematical transforms: essentially software plugins, each precisely emulating its physical counterpart in every optical nuance and flaw.

These plugins are applied to captured data in order to computationally generate the final rendered image. You can switch between and modify virtual optics in post.

Focal length, aperture, and focus, as well as frame rate, shutter speed, and precise shutter response, all become modifiable and post-rendered as the data is interpreted through these interchangeable, flexible mathematical transforms.
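To make the “lens as plugin” idea concrete, here is a toy sketch. The vignette model and its `strength` parameter are illustrative inventions, standing in for a measured optical profile; the point is only that a lens characteristic becomes a parameterised function you can swap or retune in post.

```python
import math

def vignette_plugin(strength):
    """Return a 'lens plugin': a function that darkens pixels by
    their squared distance from the optical centre. `strength` is a
    made-up parameter standing in for a laser-mapped lens profile."""
    def apply(image, width, height):
        cx, cy = (width - 1) / 2, (height - 1) / 2
        max_r = math.hypot(cx, cy)
        out = []
        for i, v in enumerate(image):
            x, y = i % width, i // width
            r = math.hypot(x - cx, y - cy) / max_r  # 0 at centre, 1 at corners
            out.append(v * (1.0 - strength * r * r))
        return out
    return apply

# Swapping "lenses" in post is just choosing a different transform:
soft = vignette_plugin(0.2)
heavy = vignette_plugin(0.8)
```

Applied to a flat white 3×3 frame, `heavy` leaves the centre pixel at 1.0 and pulls the corners down to 0.2; switching to `soft` re-renders the same captured data with a gentler falloff, no reshoot required.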

Perfect stereoscopy becomes just another possible interpretation of captured data, again with more precise, finer control over the baked or rendered result than we’ve ever had before.

The Data-Centric Role of the Cinematographer

Today many cinematographers are coming to terms with the need for an ever greater level of understanding, integration, and input on how their captured images are rendered during post production. It’s no longer just photography; it’s understanding digital color, and how an image may be reframed, adjusted, or even re-lit after it has been shot.

Imagine a scenario where the role of the cinematographer becomes 20% lighting a set and placing the camera, and 80% virtually lensing the production in post. This is the likely future of computational cinematography.

It’s no longer about shooting for post, it’s about shooting in post.

Vast amounts of light-field, volumetric 3D, and range data, along with metadata, will be captured on a live set, effectively at very high sample rates, and likely over a 360-degree or even fully spherical field of view, possibly from multiple positions simultaneously. This data-capture process will replace the single-POV photography of today.

The director(s) and actors are probably the only people on such a live-action set whose role remains intact. Every other role, skillset and specialisation will be radically different from what we know today.

The nuanced art and craft of sculpting light and capturing moments will not disappear, but the cinematographer of tomorrow will frame, reframe, light and lens virtually, and all of this will shift to post production, requiring ever tighter integration, knowledge sharing and working partnership with the ever advancing technologies of VFX and editorial for all screen types and sizes.

The Time Is Now

This is without a doubt the future of our industry. The camera as we know it and the production processes we are familiar with will not disappear overnight; they may never disappear entirely. But as these technologies make their way out of the lab and onto our film sets, everything we know is going to change.

As creatives and technicians, now is the time to open our minds to the new, to what will be possible.

At the heart of it all is story, and that is the rock that unites us. No matter what our role or how it changes, we are storytellers. Humanity has been crafting and telling stories throughout the ages. Our methods, however, are constantly evolving: from verbal traditions passed down through generations, to the written and printed word, the first photograph, the first motion picture, and now immersive virtual reality.