In Case You Missed the Crazy Future Camera That's Refocusable in Post, Here It Is

Years ago a reader emailed me about plenoptic cameras, also known as light-field cameras, which allow an image to be refocused after the picture is taken. Sometimes referred to as a 4D camera, this crazy technology is now headed to a consumer camera from new manufacturer Lytro. News of this development, which utilizes technology first seen in a 2005 Stanford research paper, hit the internet last week, with Lytro now taking reservations for the device. Check out the refocusable images in action, and let me know what you think -- game-changer or gimmick?

First off is the company's pitch video:

https://www.youtube.com/watch?v=7babcK2GH3I

And here are a few images from the company's full gallery; click anywhere on these images to refocus:

Dr Ng's camera recreates the light field thanks to an array of microlenses inserted in between an ordinary camera lens and the image sensor. (Dr Ng declined to reveal the precise specifications for the commercial device, but prototypes from his academic days sported 90,000 minuscule lenses, arranged in a 300-by-300 grid.) Each microlens functions as a kind of superpixel. A typical camera works by recording where light strikes the focal plane—the area onto which rays passing through a lens are captured. In traditional cameras the focal plane was a piece of film; modern ones use arrays of digital sensors. In Lytro's case, however, light first passes through a microlens and only then hits the sensors behind it. By calculating the path between the lens and the sensor, the precise direction of a light ray can be reconstructed. This in turn means that it is possible to determine where the ray would strike if the focal plane were moved.
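The refocusing described above is usually implemented as "shift-and-add": each (u, v) position under the microlenses gives a slightly different view of the scene, and shifting those views against each other before averaging moves the synthetic focal plane. Here's a toy sketch of that idea (the array layout, the `alpha` depth parameter, and the integer-shift simplification are my assumptions for illustration, not Lytro's actual pipeline):

```python
import numpy as np

def refocus(light_field, alpha):
    """Toy synthetic refocus by shift-and-add.

    light_field: 4D array indexed [u, v, x, y] -- (u, v) is the
    position under each microlens (i.e. ray direction), (x, y) is
    the microlens (spatial) position.
    alpha: relative depth of the new focal plane (1.0 = no shift).
    """
    U, V, X, Y = light_field.shape
    image = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture view in proportion to its
            # angular offset from the center, then average: rays
            # converge (stay sharp) at the chosen depth and smear
            # out (blur) everywhere else.
            du = int(round((u - U // 2) * (1 - 1 / alpha)))
            dv = int(round((v - V // 2) * (1 - 1 / alpha)))
            image += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return image / (U * V)
```

A real implementation would interpolate fractional shifts rather than rounding to whole pixels, but the principle -- pick a depth, re-project every ray, sum -- is the same.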

This technology obviously has a lot of potential for professional applications, but no word on higher-end implementations at present.
Why? Because, as the article brings up, the resolution of the camera is limited, presumably coming in far lower than current digital cameras (the resolution is limited by the number of microlenses, each of which produces one pixel of the "final" image). As a result, "the new device might just reignite the once-furious race for ever more megapixels." With the added depth element in the image, a twenty megapixel sensor will output an image with a much lower horizontal and vertical resolution.
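To make that trade-off concrete, here's a back-of-the-envelope calculation -- the numbers are purely illustrative, not Lytro's actual specs:

```python
# Illustrative numbers only -- not Lytro's actual specifications.
sensor_pixels = 20_000_000          # a "20 megapixel" sensor
pixels_per_microlens = 10 * 10      # directional samples captured under each microlens

# Each microlens yields roughly one pixel of the refocusable image,
# so spatial resolution drops by the angular sample count.
effective_pixels = sensor_pixels // pixels_per_microlens
print(effective_pixels)             # 200000 -> about 0.2 megapixels
```

By the same math, the 300-by-300 academic prototype mentioned above would produce final images of roughly 0.09 megapixels -- which is why the megapixel race may heat up again.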

This brings up another question: what about for video? Presumably the data throughput would be tremendous -- not to mention the processing power on the consumer's end in terms of decoding such a signal -- but imagine watching a movie and focusing on what you want right now instead of what the director wanted when it was shot. I'm not saying that would be better -- but it would certainly be different. Hook up such a technology to an eye-tracking mechanism that allows viewers to focus with their eyes instead of a mouse or touchscreen, and heads may explode.

For more on the camera and company, which is being hyped as "the first time picture-making has been changed since 1826," here's an interview with Lytro's founder, Ren Ng:

As Ng says in the video, pricing and availability are not public at present, but the camera should be out by the end of 2011 at a "competitive consumer camera price."

I'll ask again -- what do you think, game-changer or gimmick? What other applications could you see for light-field cameras?

Agreed. This basically cans the 1st AC job if it's applied to video. This is the next generation of photography and cinematography as we know it. I predict in 5 years, we'll all be scrambling for HDSLR-like cameras with these types of lenses. And then in 5.25 years, our clients will be saying "what, you can't just fix that in post?" if we happen to have the old-fashion-y manual focus lenses : D.

I do think there will always be a place for a 1st AC and keeping good focus. But the ability to change the focus afterwards would be great to fix any mistakes or for a run-and-gun docu shooter who has no 1st AC. Even switching to digital media didn't can the 2nd AC; instead of a loader we now have a "Data Management Technician" who, at least around here, is considered part of the Camera Department (as much as it is almost assistant editor work). Likewise, if this new lens cans the 1st AC job, the workload of Assistant Editors in post will be that much heavier, and we'll see 2nd AEs or some other new title pop up to absorb all the extra jobs the AE does now. These technologies never really replace people, just move them to different departments.
I welcome the new technology, it's a great power to have in a pinch, but nothing beats getting footage from a seasoned pro and his team that needs the slightest curve and colour adjustment to fit into sequence.

I agree. Interesting development and a *potential* game-changer. Adobe already offers a software-based solution where you can change the focal plane on something that was shot with deep focus. This can be a workable solution, but requires yet another few more hours in post.

IMHO, these technologies are best suited for those who have their story developed in the edit bay, rather than as a well thought-out script beforehand. I mainly do live marketing events, so I can see the benefits, although my clients are generally not willing to pick up the tab for fixing it in post...

The little discussed facts about plenoptic lenses are that they have less sharpness and contrast than ordinary lenses, and they require massive amounts of data storage and processing bandwidth to actually work. I would be surprised if these lenses make their way to video any time soon, and even if they do, they will not touch the quality and clarity of pro lenses.

I think this will turn into a Kinect-like device which will eventually allow capturing hi-res depth information directly from the actual sensor (I'm guessing you could build a z-depth map from those pictures using highpass filters). The applications in post-production and interface design are endless.
This thing makes me very happy :)
totally a game changer!

If this is the real deal? I'll take it in a heartbeat. Nothing against my buds who have great-paying union gigs on studio shoots; but for us indies... I need to shoot on a shoestring, just getting my own stuff going.
Smaller the crew the better. That's just the way it is.

I think this may, at some point, spell doom for the 1st AC, but it's a long, long way off: the cost of a skilled person is still much less than the investment in technology--including even MORE storage and back-up (it's cheap, but it's not free)--and even the increased time ($$) on the back-end (if the files are, say, five times bigger, that means five times longer to transfer, right?) And this is all assuming a plenoptic system that delivers video at professional quality.

For now, it's an interesting curiosity...a bit more than a gimmick, but not yet a game-changer.

It will be interesting to see what impact it has on editing systems, though: real-time focus shifts that don't need to be rendered until the final print? Hm...

Showed this to a VFX artist and they were convinced it was a digital blur (Gaussian, fast, etc.) -- i.e., deep DOF with just real-time post-processing. What would be the best way to explain how it really works?
