Lytro's Ren Ng sheds some light on the company's ambitions

Lytro's announcement that it will be launching a plenoptic 'light field' camera that allows images to be re-focused after they've been taken was met with equal measures of interest and skepticism. Keen to find out more, we spoke to the company's founder and CEO, Ren Ng, to hear just what he has planned and how close the company is to a finished product.

The first thing to understand, he stressed, is how the system works: by placing an array of microlenses some distance in front of an imaging sensor, points of light arriving through the lens are scattered across multiple photo sites, depending on the angle they've arrived from. This information, captured in a single exposure, provides the ability to render images as if they'd been focused at different distances. The company says it will begin selling such a device before the end of the year (2011). Not only would such a device be able to produce re-focusable images, but it also wouldn't need focusing at the point of shooting.
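The refocusing step Ng describes can be illustrated with a toy 'shift-and-add' sketch. This is a simplification for illustration only, not Lytro's actual Light Field Engine; the `refocus` helper, the integer wrap-around shifts, and the test scene are all assumptions of this sketch. Each microlens yields a set of sub-aperture views of the scene, and synthetic refocusing shifts each view in proportion to its angular offset before averaging them.

```python
def refocus(light_field, alpha):
    """Shift-and-add refocusing of a toy 4D light field.

    light_field[u][v] is one 2D sub-aperture image (a list of rows);
    alpha sets the refocus depth: each view is shifted in proportion
    to its (u, v) offset from the central view, then all views are
    averaged into a single output image.
    """
    U, V = len(light_field), len(light_field[0])
    X, Y = len(light_field[0][0]), len(light_field[0][0][0])
    out = [[0.0] * Y for _ in range(X)]
    for u in range(U):
        for v in range(V):
            du = round((u - U // 2) * alpha)
            dv = round((v - V // 2) * alpha)
            img = light_field[u][v]
            for x in range(X):
                for y in range(Y):
                    # Wrap-around shift keeps the toy example simple.
                    out[(x + du) % X][(y + dv) % Y] += img[x][y]
    return [[p / (U * V) for p in row] for row in out]

# A point source whose sub-aperture views are displaced by one pixel
# per unit of (u, v) lens offset: 3x3 views, 9x9 pixels each.
views = [[[[0.0] * 9 for _ in range(9)] for _ in range(3)] for _ in range(3)]
for u in range(3):
    for v in range(3):
        views[u][v][4 - (u - 1)][4 - (v - 1)] = 1.0

sharp = refocus(views, 1.0)    # views align: peak value 1.0
blurred = refocus(views, 0.0)  # views spread out: peak value 1/9
```

The point snaps back into focus at `alpha = 1.0` and smears into a dim 3x3 patch at `alpha = 0.0`, which is the sense in which the focal plane becomes a rendering choice rather than a capture-time commitment.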

"Our vision is a product that allows people to shoot and share very simply"

The first device will be aimed at the consumer end of the market, says Ng, explaining that the company is targeting: 'people who really like to have fun with pictures and share them with friends and family. Our vision is a product that allows people to shoot and share very simply.' And this product is not far from becoming a reality, he says: 'The product will be out in 2011 and priced competitively for a consumer product. It's already in the hands of photographers.' (Of the people shooting the samples on the company's website, only Eric Cheng, its director of photography, is an employee).

Despite the consumer focus for the first product, Ng believes the nature of the technology means this won't just entail people pointing and shooting: 'We're looking at someone really interested in what photography means, who wants to experiment with the capabilities of this new approach, and wants to explore and enjoy the artistic possibilities of working with a new medium.'

Sharing the experience

'Light field photography creates a fundamentally different type of data. When we moved from film to digital it made all sorts of changes to what we could do with photographs, but we were still collecting essentially the same 2D data that we always had been, right back to the days of the daguerreotype. There are opportunities as an artistic process for people to experiment and be creative. The type of data is very resonant with that - you can create an image and invite the viewers to explore the picture. There are opportunities in terms of crafting and posing pictures in a way that gives a sense of discovery to the viewer. A sense of discovering a story for themselves.'

The first product's focus will be on making this capability accessible and easy to share, he says: 'Five years ago, this would have been impossible. It's only the development of web infrastructure, technologies such as Flash and HTML5 that allow us to program the interaction through an internet browser without having to download or install additional software. That's what powers the experience of our product, just as much as the instant shutter, instant focus or any of the other benefits.'

"The end user gets the full 'living picture' experience without onerous downloads"

'The software to convert the captured 'light field' into an image, which we're calling the Light Field Engine, is in the camera. It is installed on your PC as well and, when you share your images through social networks, mobile devices or all the other places people share images, the Light Field Engine goes with the picture, so that the end user gets the full 'living picture' experience without onerous downloads.'

What about those samples?

He notes the concerns expressed about the samples that have already been shown on the web, explaining that, while they show how shareable the images can be, they are not representative of the camera's full capabilities: 'The ability to focus after-the-fact is fully continuous - you can focus at any depth. There are two factors that make this less apparent in the samples. The first is the tendency in photography for depth to appear compressed, so objects of similar distances appear together [as they do when you shoot a portrait with a long focal length, as an extreme example]. Depending on composition and arrangement of subjects, there may only be two or three significant depths within an image. Also the way we've packaged the data for easy viewing on the internet has an effect. It's not the full light field you're seeing - it's a subset to make it more portable. It's analogous to comparing the Raw data that an enthusiast photographer might take, with the small, compressed JPEG that Facebook might serve up if you view it on your smart phone.'

Also, while he explains that the sample images come from devices taken from the production line, they are not yet final: 'The devices themselves look very close to final on the outside, but the hardware internals, software and image quality are not production standard yet,' he says.

"The 0.1MP resolution we were producing then is not consumer-ready, so we've come a long way from there to make a commercializable product"

This is a long way beyond the point at which dpreview last spoke to him (in 2005), when Ng had adapted a 16MP medium format camera to produce 900,000 pixel images: 'An important thing to note is that at that stage of development, the focus was on: "how do we take a multi-camera array and miniaturize it to a single device?" The results at that time were not anywhere near commercializable. It was a scientific breakthrough we were working towards. The next step we've been working on has been making a commercial breakthrough. The 0.1MP resolution we were producing then is not consumer-ready, so we've come a long way from there to make a commercializable product that can sell in the highest volumes. And doing that has required making a product that makes it easy to share the results on the internet. If you look at the way people use pictures, the vast majority of pictures are on the web.'

More creativity to come

The initial software won't allow a great deal of post-shot editing, he explains: 'At first we'll be making those decisions for the user - so that we can make the process as simple as possible but, further down the line, we'll provide tools to give more control over the final output. It's important to understand that Lytro's camera will record full light fields from day one, and folks will be able to do more and more with those same files as the software grows into the future. It's a bit like DSLR shooters working with the initial Raw formats: the editing features you could achieve with those Raw files increased over time as software support matured.'

"We're very keen to see light field images develop through an ecosystem of software"

'We're very keen to see light field images develop through an ecosystem of software, to allow people to share images and edit images, as with normal, 2D images. We're producing a format with an API to provide developer access to the format's capabilities. Kurt Akeley, our Chief Technology Officer, used to work for SGI, where he invented the OpenGL API, so we've got some truly world-class experience in this sort of thing.'

'It's not going to just drop into existing software, it's going to require a bit of work - it's richer data with greater possibilities. The light field, when turned into pictures, is 2D, but there are opportunities to work on light fields directly to access their full possibility. Tapping the full potential is a huge opportunity, and paves the road for a great deal of exciting R&D.'

"In the same way that Polaroid changed the market - it brought an immediacy and shareability to photography"

But, even at his most positive, Ng says he doesn't expect light field photography to replace conventional, 2D photography: 'The folks at dpreview are not going to replace all their cameras with our first product. But, once it comes and opens up all these other capabilities, I think they're going to be enchanted by what it represents for photography. It provides new opportunities - allowing you to create compositions that tell a story in a way you never could before. They're going to keep their existing tools but add this as well. In the same way that Polaroid changed the market - it brought an immediacy and shareability to photography, but that wasn't at the expense of conventional film photography, it was in addition.'

Over time, he believes, people will find additional creative options in the images. For example, some enthusiast photographers discuss the quality of the out-of-focus regions of their photographs, which is influenced by the complexity of the design of the lens used to shoot the image. This could be an area people want to experiment in, Ng proposes: 'At the beginning, the out-of-focus region will look as it would through an optical viewfinder. For people who want to shape their bokeh, this could be the thing that really interests them. The ability to control your bokeh after the fact could be another example of creative control on the editing side. The photographic possibilities will explode as people experiment with this sort of thing.'

Pushing sensor technology

And, if it achieves the level of success he and the company are hoping for, he says he can envisage light field cameras influencing sensor technology: 'As well as a scientific and commercial breakthrough, this could cause a technological breakthrough. We've got to the stage where we're seeing 14-16MP sensors for compacts and 20-24MP in larger sensors. It's not technological limitations that are defining that figure, it's a marketing-driven progression. When we went from VGA to 1MP to 4MP sensors, that was technology growth.'

"You could in theory make a sensor with hundreds of millions of pixels"

'Growth in that underlying industry capacity hasn't stopped, there's just no demand for it. With 14MP, for print or web use, those are enormous images, so there's no great pressure to move on from there. But if you applied the technology being developed for mobile phone cameras and applied it to an APS-C sensor, you could in theory make a sensor with hundreds of millions of pixels - an order of magnitude beyond what we're currently seeing. With such a sensor in a light field camera, we'd be able to measure hundreds of millions of rays of light. Light field technology can utilize and re-invigorate amazing growth in density of sensors.'

And the lower output resolution of light field cameras, compared to conventional ones, could be a real benefit: 'Light field technology is inherently more capable in low light - we can shoot wide-open with apertures larger than would make any sense for conventional photography. And we're not just trying to make enormous pictures. One dead or noisy pixel in conventional photography results in one bad output pixel in the final image. In light field photography it translates to a dead 'ray', which won't have as much impact on the final output - the sensitivity to sensor defects will go down.'
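Ng's dead-pixel argument is just averaging arithmetic. A minimal sketch (the figure of 16 rays contributing to one output pixel is a hypothetical number, not from the interview) shows why one dead photosite matters less in a light field readout:

```python
# Toy illustration of the dead-pixel claim: if each output pixel is
# synthesized by averaging N ray samples, one dead photosite perturbs
# the result by only ~1/N of its value, versus ruining an entire pixel
# in a conventional one-photosite-per-pixel readout.
def output_pixel(ray_samples):
    return sum(ray_samples) / len(ray_samples)

all_good = [100.0] * 16            # 16 rays behind one microlens
one_dead = [0.0] + [100.0] * 15    # same, with one dead photosite

print(output_pixel(all_good))  # 100.0
print(output_pixel(one_dead))  # 93.75 -- degraded, not destroyed
```

The same averaging is also why the per-ray noise floor matters less than it would per conventional pixel.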

Going it alone

Trying to go to market with a product based on a fundamentally different technology may sound ambitious for a camera company nobody had heard of two months ago, but Ng is unfazed by the challenge: 'We feel we're better placed to bring the full benefits of this technology. It's a transformational technology, it needs a transformational product. If you look at most digital cameras, they're very good but they've come about as a result of a series of incremental changes to the previous technology. Trying to do this as an incremental change to an existing technology would rob the consumer of many of the most disruptive benefits.'

"We feel we're better placed to bring the full benefits of this technology"

There's another reason for producing the camera themselves, Ng says: 'because we can build this kind of company today. Ten years ago [doing all this themselves] would have been impossible but, as with the advances in web infrastructure that make the pictures sharable, there have been great advances in manufacturing and distribution that make it possible for a new company to do this. In the past, to get the message out, you'd have needed to buy an ad during the Super Bowl - which is a very expensive thing to do and doesn't get your message to the right people. The web has made it so much easier in terms of localizing the message. Just the pictures we've posted, spreading out across the web, have generated so much interest.'

Comments

I'm very excited to see where this technology goes, but I have mixed feelings. After all, focus points are core to the challenge and experience of photography! On the other hand, removing this challenge would open a lot of creative avenues. More thoughts here: http://www.aputure.com/blog/?p=2447

Other companies exist now, but the price of their cameras is outrageous to say the least. How much is this so-called consumer camera going to cost? That's what I want to know. Still, I do believe this is the future of photography; no more lost captures. Also, there's the ability to create 3D models on the fly and so many other benefits that, as the article describes, this is a game changer, that's for sure. :)

There are some overly optimistic ideas in the posts, but no, you cannot market this to movie studios, because you are light years from developing this idea into 2K or 4K video on a full-frame slice. The idea presented by Ng is interesting, but at this stage of workable densities it's only a mere curiosity. You would need to put a far higher density of wells on an APS-C sized sensor to keep up with the resolution of today's (or even yesterday's) cameras, which would be beyond operability (you will mostly receive noise due to huge crosstalk between wells). So we face a problem: we can choose between a much smaller resolution or a huge increase in noise, just to be able to focus later... hmmm... maybe neither?

Page 27 is interesting, as it explains that a plenoptic camera (read: a Lytro camera) is confined to 0.3MP (say, sub-MP), and if you want to push the barriers a little by building a hybrid between a light field and a conventional camera, you'll need Raytrix patents.

Part 1: I think diffraction is not as well understood here as it could be. There are applications where diffraction data is used in science today: space interferometry, especially when using more than one telescope to simulate a bigger telescope with a diameter equal to the space between them.

We must not suppose there will be ONE lens in front of the microlens layer. The image sensor, composed of a plane of light detectors and a plane of microlenses, doesn't need a lens in front, and capturing multiple planes doesn't require multiple physical planes.

The innovation lies in the what and the where: instead of just having one microlens for each pixel directly behind it, the technology can measure light arriving at an angle, using many pixels for one lens and the same pixels for different lenses.

Part 2: In the articles I studied back in 2000, this was only for astronomy applications, and the focus was on taking one huge light field and being able to re-aperture it (not re-focus it), making 10x more Z slices out of the picture of a galaxy, to map x and y at a much higher Z resolution.

That's my only complaint about Mike Davis's postings: he talks about diffraction. Because Ng could in theory have explored new approaches, I prefer talking about Heisenberg's uncertainty relation, which can't be circumvented.

An example is an array of lenses. They overcome the obstacles of diffraction-limited lenses but still obey the Heisenberg relation.

Therefore, I want to make sure everybody understands that my argument relied solely on the Heisenberg principle. Ng can do only so much, which may be nice enough for some, but no more.

I've read all the posts by Mike Davis. I'm impressed by how sure he can be of what he thought he understood. Has he considered that the camera could work in a fundamentally different way than cameras do now? Actually, this system is not diffraction-limited in the old sense. Indeed, it uses all the information in the diffraction as data.

"Multi-plane" light field sensor does NOT mean that multiple exposures are taken on actual multi-plane sensors. It means that all the information about the "rays" ("the field" is better), and not only the area they light up, is recorded on a single sensor.

For example, the direction of a ray cannot be recovered from the "which-pixel-is-lit-up" information alone. With a light field this information can be recovered, allowing you to trace back where (which plane in the real scene) a ray comes from. Then, selecting a narrow set of planes in the final image creates a narrow depth-of-field picture; selecting a wide set creates a deep DOF picture.

Sure, Mike Davis understands things better than the one who invested millions of dollars in the project (and far more on technology than on marketing).

I'm not saying his arguments are nonsense; they are very sensible as far as classical photography is concerned (by classical, I mean "single-plane"), but they may simply not apply to the kind of information recorded by this new technology. As a clue, imagine how to think "classically" with an array of micro-lenses between the lens and the sensor: 1) the effective "density" is no longer that of the sensor, but that of the microlens array. 2) Multiple sensor pixels are then required to get information from a single micro-lens. 3) As a consequence, it is understandable both why there is no such diffraction limitation on the sensor density itself, and why so many pixels are required: a 1-megapixel microlens array * 1000 sensor pixels per microlens = 1 gigapixel of sensor pixels, for only 1 Mpix of output resolution: no diffraction problem.
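The commenter's closing arithmetic can be written out directly. The figures below (1 megapixel of microlenses, 1000 sensor pixels behind each) come from the comment itself and are illustrative, not Lytro specifications:

```python
# In a plenoptic design the output image resolution is set by the
# microlens count, while the sensor must supply one photosite per
# angular sample behind every microlens.
def sensor_pixels_needed(microlenses, samples_per_lens):
    return microlenses * samples_per_lens

# 1-megapixel microlens array with a 1000-pixel sub-sensor per lens:
total = sensor_pixels_needed(1_000_000, 1_000)
print(total)  # 1000000000 -- a gigapixel sensor for 1 MP of output
```

This spatial/angular trade-off is the reason light field cameras give up so much output resolution relative to their raw photosite count.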

I just don't get one thing: it's the lens that focuses, not the sensor. So how is the sensor going to fix, let's say, a 135mm f/2.0 shot where the subject is 7 feet away and the background is 150 feet? Nothing in the background is even close to focus when it reaches the sensor; there just isn't enough data for the computer to work with.

I expect it to be something like this: a 1x1mm 1MP sensor, a 1-4mm f/11 lens, and a lot of sharpening afterwards in the computer at the selected point.

Until we get real samples, not these Flash animations, I must assume that this is all a hoax. And what's with all this secrecy?

Quoting my opening sentence: "You're a brilliant young man, Mr. Ng, and I am genuinely impressed with your innovative technology - it will no doubt have a lasting impact on photography as we know it - but I'm compelled to point out that you would have us believe the limitations imposed by DIFFRACTION can be ignored."

I went on to quote Mr. Ng's RIDICULOUS statement that "there's just no demand for [APS-C sensors having pixel counts greater than 14 MP.]" For that and his obvious pretense that diffraction isn't an issue, yes, the remainder of my 6-part post was negative.

I want him to succeed, but his assertion that his resolution problem can be cured by using the same or higher diffraction-prone pixel densities as currently used on mobile phones borders on being a fraudulent statement - not a statement made out of ignorance - not from someone with his obvious knowledge of optics.

You're a brilliant young man, Mr. Ng, and I am genuinely impressed with your innovative technology - it will no doubt have a lasting impact on photography as we know it - but I'm compelled to point out that you would have us believe the limitations imposed by DIFFRACTION can be ignored:

Quoting from the interview, above:

"'Growth in that underlying industry capacity hasn't stopped, there's just no demand for it. With 14MP, for print or web use, those are enormous images, so there's no great pressure to move on from there. But if you applied the technology being developed for mobile phone cameras and applied it to an APS-C sensor, you could in theory make a sensor with hundreds of millions of pixels - an order of magnitude beyond what we're currently seeing. With such a sensor in a light field camera, we'd be able to measure hundreds of millions of rays of light. Light field technology can utilize and re-invigorate amazing growth in density of sensors.'"

First, let me point out that your multi-plane 'Light Field' sensor will always have a resolution less than conventional, single-plane sensors because some of your photosites are, by design, blocking the view of any photosites that could reside beneath them. But in your pipe-dream statement, above, you are arguing that your multi-plane sensor technology could ultimately achieve the same resolution as today's conventional, single-plane sensors if only manufacturers would apply the same pixel densities to APS-C (and full frame) sensors as are currently being applied to mobile phone cameras!

I hate to pop your balloon, Mr. Ng, but we will NEVER see pixel densities anywhere near those ultra-high mobile phone pixel densities in conventional APS-C (or full frame) sensors because DIFFRACTION would prevent users from selecting the f-Numbers available with current lens technology while deploying enlargement factors far greater than those we are suffering with current pixel densities. Diffraction is already discouraging conventional APS-C users from shooting at f/16 and f/22 when making unresampled 300 dpi enlargements. Would you have us always shoot wide open with the expensive lenses made for APS-C and full-frame bodies - avoiding f/4.0, f/5.6, f/8, and f/11 in addition to f/16 and f/22?

Diffraction is your enemy Mr. Ng. The 2-inch by 3-inch images seen on our mobile phones are already softened by diffraction thanks to the outrageous enlargement factors involved! It's diffraction that is preventing the use of higher pixel densities in conventional APS-C and full-frame sensors and it will be diffraction that prevents your 'Light Field' sensor from exceeding the resolutions already enjoyed by conventional APS-C and full-frame sensors. You're welcome to use higher densities as necessary in your Light Field sensor to end up at the same resolutions as current technology, but diffraction will prevent you from going any higher. What good is the infinite Depth of Field offered by your technology if it will be accompanied by diffraction that degrades the entire image, independent of subject distance?
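The diffraction argument above can be roughly quantified with the standard Airy-disk formula (first-minimum diameter 2.44λN). The sketch below uses illustrative assumptions, none of which come from the article: approximate APS-C dimensions of 23.6 x 15.6 mm, 550nm green light, and the common rule of thumb that softening becomes visible once the Airy disk spans about two pixel pitches.

```python
import math

def airy_disk_diameter_um(f_number, wavelength_nm=550.0):
    # First-minimum diameter of the Airy diffraction pattern, in microns.
    return 2.44 * (wavelength_nm / 1000.0) * f_number

def pixel_pitch_um(sensor_w_mm, sensor_h_mm, megapixels):
    # Square-pixel approximation: pitch = sqrt(sensor area / pixel count).
    area_um2 = sensor_w_mm * sensor_h_mm * 1e6
    return math.sqrt(area_um2 / (megapixels * 1e6))

# Approximate APS-C at today's 14MP versus a phone-density 170MP
# version of the same sensor size.
for mp in (14, 170):
    pitch = pixel_pitch_um(23.6, 15.6, mp)
    for n in (2.8, 8.0, 22.0):
        limited = airy_disk_diameter_um(n) > 2 * pitch
        print(f"{mp}MP at f/{n}: pitch {pitch:.2f}um, "
              f"diffraction-limited: {limited}")
```

By this rule of thumb the 14MP sensor only runs into trouble around f/8 and beyond, while the 170MP version is diffraction-limited even at f/2.8 - which is the commenters' point, though it says nothing about whether a light field design actually needs full per-photosite resolution.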

Here's a solution to your problem and to the problem we're already up against with conventional small dimension sensors that require huge enlargement factors to make prints that exploit all those pixels: Start working on a camera that not only embodies your Light Field sensor, but also includes an ultra-fast zoom lens - one that incorporates the following f-Numbers:

That would give us nine stops to work with, much like an f/1.4 lens that includes f/22, and they would ALL be diffraction-free at the enlargement factors to which we are currently limited due to diffraction. Such a lens would permit the use of a sensor having pixel densities three times higher than what is currently used in APS-C and full-frame sensors, giving the user a creative choice of several combinations of shutter speed and aperture, instead of forcing us to shoot wide open, at only one aperture, to avoid diffraction - as is the case with some of today's 12 and 14 Megapixel tiny-sensored digicams.

By the way, good luck with controlling all the aberrations you'll suffer designing lenses that operate at those f-Numbers.

Every time I try to understand what these folks are doing my eyes glaze over. I have a pretty strong technical/physics background but reading comments by the Lytro folks makes me feel like I slept through too many classes. I keep expecting someone to pull back the curtains and shout "Hey, they are shooting everything at f30 and then messing about with gaussian blurs!".

For all the naysayers, this is interesting stuff. Whether it comes to fruition or not is another matter. I was at RIT in the late 70's, and the concept of autofocus cameras was ridiculous then. Yes, we were taught the old-fashioned way and, not that I can recite the formulas anymore, we could figure out depth of field, shutter speed, ASA and aperture combinations, etc. Today there are many pros shooting autofocus, simply because it is better than their eyes. A little sharpening in Photoshop and you get a beautiful image. I would give this technology a chance. Who knows where it will go, and as for the science and physics behind it... big deal. If someone can change all of these things that we think are written in stone, isn't that great? I say best of luck to him and his company.

I am still fascinated by Ng's ignorance with regard to the facts of optical physics.

Photons are elementary particles with a wave nature, obeying the Heisenberg uncertainty relationship. Speaking about rays and hundreds of millions of pixels at the same time is close to spreading false information: e.g., to capture 170MP on an APS-C sensor (not exactly a P&S specification, btw) you need 1.5µm pixels (OK, if you ignore the Bayer matrix for a second) and an f/2.0 - f/2.2 lens which resolves 800 lp/mm across the entire field (otherwise, you won't be able to refocus outside the image center). The best system camera lenses on earth resolve about 400 lp/mm (primes from Leica or Zeiss), and they only do so in the center and at about f/4. Lenses with a smaller image circle can do better, but this wouldn't solve the light field capture problem.

I guess, it is all ok because US investors typically didn't exactly study physics ;)

You are right, and you cannot mean me, because I did not say such a thing. What has been done is possible. And I even provide details where I explain what to expect when full specs become public. All I say is that some of the predictions made by Lytro lack scientific background.

What Lytro does is in no way magical. It is applying rather old physics. What is new is the processing which can now be done in-camera.

Falconeyes has exhibited no arrogance whatsoever in his comment, above. He has only stated the facts. It is Mr. Ng (CEO of Lytro) who is exhibiting arrogance in his attempt to pull the wool over the eyes of investors and consumers alike when he claims that it's only the lack of market interest that has prevented manufacturers from applying the same pixel densities used in mobile phones to APS-C sensors. No, Mr. Ng, it's DIFFRACTION that is already preventing us from using f/16 and f/22 with APS-C sensors when we try to use enlargement factors that exploit the pixel densities had with 12 or 14 Megapixel sensors! Mr. Ng's technology can do NOTHING about overcoming diffraction. (See my long-version comment above, dated Aug 21, 2011 at 14:01:40 GMT.)

The amount of data that would have to be collected to account for all the light rays and angles should be staggering, and I find it hard to believe it will all fit in the file sizes they are talking about. Could it simply be that the picture is taken with a large depth of field (everything in focus) and a Gaussian blur applied throughout, which is selectively removed as you click on parts of the picture? That would work if it were also coupled with distance information... Just a thought.

The optical flaws of large aperture lenses are more pronounced at the edges of the lens. If this camera uses micro lenses does the software sample light more from the center of the lens as DOF is increased in post? Does that result in a reduction of Coma, chromatic aberration and increased sharpness? i.e. Does the image improve optically in a similar way to stopping down the lens when increasing the DOF?

3D from one picture: Use depth information to create (limited) 3d objects similar to what helicon focus can do as a side product of focus stacking.

Stereoscopic light fields: Use two light field cameras for 3D movies with additional depth information (which would allow you to move the head a bit a see a slightly different image) or to render an area sharp depending on what the viewer is looking at.

Lenticulars: Use depth information with lenticular prints

Embosser: Combine the light field camera with a device that creates a relief of the scene. Useful if you want to share pictures with blind people. Or take a portrait, invert the depth information, use it with a 3D printer, and put it on the wall to have the person watch you.

Relight: Use the raw light field that has the directional information of the light to change the brightness of the light sources or adjust their colors in mixed light situations.

Compress: Use the depth information to compress the perspective as if the picture was taken with a longer lens.

There are probably thousands of other ideas just waiting to be thought of. So I'm much more excited about what could be done with this technology than I am concerned about what the initial technical shortcomings might be.

I was hoping he'd share what the final image size will be, but we only get this comment: "The 0.1MP resolution we were producing then is not consumer-ready, so we've come a long way from there to make a commercializable product"

They need to market this first to Hollywood and movie makers and get their technology used in movie studios, allowing the director and editors to place specific focus on a character or object as needed to best tell the story.

They can use this approach to build a bit of brand recognition first. It's also where this product has the most use, IMHO. They are making photo sharing more difficult in the world of Instagram and quick mobile photo sharing, and I think the consumer market is not the way to go.

They will sell a few thousand units to a bunch of early adopters, but all their investment and time in the consumer market will end there. I just don't think they can convince enough consumers to buy these things for the company to survive in a very competitive consumer camera market.

Ha, I agree. 3D moviemaking has once again about run its course and Hollywood is always in want of some technological magic to throw its huge amounts of money at. If anything, at least it'll earn the engineers a decent salary.

Such technology eventually will trickle down as people see the possibilities, but for that initial cash you want those deep pockets that are always looking for the next big thing.

I'm open to seeing what this is like and, depending on cost, will probably try it and then pass it on to a favored nephew or grandchild. I don't really need a consumer-grade camera.

I've seen a lot of new things flop in 50 years in the medium. Some work. Others don't. Others are novel, but so what - e.g. a panorama camera is useful, but there's not always room in the bag.

What I don't expect to see is something that would affect the basic nature of what I like to shoot, although the aspects of low light photography mentioned have some appeal, depending on what Ng means. We're to a degree looking at an elephant blindfolded, with one hand tied behind our backs, sitting in a chair, with no sense of smell.

I don't read too much into the samples, which, as he says, are limited. But he's demonstrating one thing, and the possibility of edge-to-edge focus sharpness could be useful. I prefer to work with prints, and this is to some degree aimed at screen viewing.

What happened to learning a craft and mastering it? I heard someone down in this thread say "Haven't you ever missed focus and wanted to correct it?" Well, yes, I actually have. But you know what I had to do? Learn. All I could do was learn to pay closer attention to focus next time and nail it. If I missed a shot, I missed a shot. It's my fault.

I never shot film. I've only been into photography in the digital age. But even so, I miss film. I would love a film SLR to take around and better teach myself to pay more attention to composition, focus, and lighting.

I don't mean to sound like someone who is afraid of new technology. I love new tech. I eagerly await the next Nikon and Canon cameras. But I can't help but see this as photography being taken over by laziness. Don't be lazy people.

Take a shot and correct focus later? Do they even hear themselves when they say that?

Instead of a plenoptic lens, wouldn't it be cheaper and easier to have a camera with four 1/2.3" sensors and four ordinary lenses, each of them set at a different focus? One could then pick the preferred mix of focus from the four layered shots.

Easier still: a single deep-focus shot, followed by application of Gaussian blur when editing.

Some things are charming in the abstract, but come with high costs or constraints. Solar energy, for example, won't re-charge your Volt unless it is mid-summer, you have an acre with $500k worth of panels, and you don't drive very much.

I think this whole idea could be achieved much better with a normal camera and implementing the selective focus in software only. No microlenses, use the full res of the sensor, do the re-focus parlor trick after the fact. How? Just take a deep DOF picture (small aperture) and then apply gaussian blur in a horizontal gradient above and below the point of click? Of course, that will only work in good light.

Would a plenoptic lens work any better in low light? Any approach would require a slower shutter speed and risk blur unless the camera were held very steady. If some folks think plenoptics means "never a bad photo," they are confusing bad focus with blur due to movement of the camera or subject.

Clearly, you've never actually tried applying a horizontal gradient blur to give the illusion of shallow depth of field. It might work if all you're shooting is landscapes from a hot-air balloon, but for anything else it's mostly useless.

Though still far from competing with conventional sensors on quality, I believe it is a great technology. The main drawback is the low resolution; even double the current figure would already allow wide use and make it sellable. Online media photographers who normally use photos at up to screen resolution would benefit hugely. Imagine photographing sports, or constantly moving distant subjects at social events, without the need for focusing. It would mean fast shooting with much less power drain.

Maybe it'll never be a replacement technology, but surely it'll have its market, and it will not be small.

Ah, but you can... sort of! The Panasonic GH2 has an electronic shutter that works at 40fps at 4MP. If you make a 1s capture in that mode and then choose how many frames you stack in software later, it's basically the same as choosing the shutter speed after you took the shot. ;)
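That stacking idea is simple to sketch. A toy Python/NumPy illustration with made-up numbers, not GH2 firmware: summing the first n frames of a burst approximates a single exposure n times as long, and highlights clip just as they would in-camera.

```python
import numpy as np

def synthetic_shutter(burst, n_frames, full_well=255.0):
    """Pick an effective shutter speed after capture: summing the first
    n_frames of a short-exposure burst approximates one exposure that
    many times longer. Bright areas clip at the sensor's full-well
    value, as they would in a real long exposure."""
    summed = np.stack(burst[:n_frames]).astype(float).sum(axis=0)
    return np.clip(summed, 0.0, full_well)
```

For example, stacking 2 of a burst of mid-gray (value 100) frames gives 200, while stacking 3 clips at the 255 full-well ceiling.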

1) The way images are recreated suggests to me that it would be quite possible to populate the sensor plane with multiple smaller chips; the microlens arrangement can ensure there are no gaps in the captured information.

2) 1024x768 is a popular computer display size, but it's only about 0.8MP.

3) Each point in a plenoptic camera draws information from multiple sensor points; that in itself reduces noise, so why do we think these cameras are going to be noisy?

4) I am overjoyed that the first product is going to be a consumer product, so I can afford to buy one and try it out. And I hope, I really hope, it is going to work.

Not a single detail about the camera. Just show us the camera and we'll know whether this is going to be a flop or not. I think they went the wrong way in making a camera; instead they should concentrate on the sensor tech, file some patents, and later license it to other camera makers. Much fuss about nothing (yet).

As with all the other samples, they are very poor and would ordinarily end up 'on the cutting room floor'!

Even when trying to apply the single stated advantage, that of selecting focus point, it is so inaccurate and arbitrary, that the 'gain' is actually a negative. To see what I mean, just try getting the earring in focus on the first sample shot...

Even if this system succeeded to some degree, the thought of having to fool around afterwards with every shot, just to optimise focus, sends shivers down my spine! It's bad enough post processing 500 - 1000 wedding shots when I've pretty well nailed the focus and the exposure is already reasonably good. Add this into the mix and you are looking at nervous breakdown territory!!!

What makes you think the picture with the earring was taken with a plenoptic lens at all? Basically, the choice is between the face and the flag. That is either "duo-optic" or a montage of two mono-optic photos.

But this would not be for wedding photos. It would be for people to share holiday greetings with little family photos at screen resolution. Pet lovers would share pictures of their pride. Recipients would buy a reader to see the pictures and have a good chuckle. Maybe there would be a market for coffee table digital viewers and touch screen selection of focus. Politicians' PR staff could use it to "airbrush" photos to highlight their employers' favorable features and defocus any nefarious faces nearby.

Re: the earring shot. From the interview: "Also the way we've packaged the data for easy viewing on the internet has an effect. It's not the full light field you're seeing - it's a subset to make it more portable. It's analogous to comparing the Raw data that an enthusiast photographer might take, with the small, compressed JPEG that Facebook might serve up if you view it on your smart phone."

So what, might I ask, is the point of the samples? They purport to show the results, but don't! Merely a feeble impression. We are told that the majority of use (certainly initially) will be for Facebook et al., but this only demonstrates that it is not even usable for that. The only conclusion, then, is that the device is even more useless than common sense suggests, and far less likely to see the light of day than has been stated.

I agree with some of the previous comments: first, you need a very high sensor density to get even a decent picture resolution, else what you gain by later focusing the image details, you lose to poor definition. The other point is that as a photographer you should... ahem, FOCUS on your subjects in the shooting INSTANT. Having said that, it may be a useful way to re-work an image, but in that case the matter takes on a much wider and more controversial scope.

I am getting more and more the impression that this is vaporware. Plenoptic cameras exist already and are in use (mostly industrial). The concept is sound, but to achieve more than web resolution the sensor has to capture an enormous number of pixels. This has consequences for implementing it in small devices (memory, power consumption). Besides the manufacturing problems for the then-needed smaller microlenses, eventually the noise level of the individual pixels will become too high to allow the plenoptic reconstruction - this is a physical limit that cannot be overcome.

For me, this potential new "tool," if you will, has very little to do with creative photography. Rather, it seems a convenient way to achieve in retrospect something that I prefer to realize while shooting. I am absolutely not interested in that sort of thing. It's surely great for picture-takers who can't decide where they should focus.

I am assuming the DOF seen in the photo is due to the aperture of the lens, and moving the focal point is achieved with the information gathered from the microlenses? So greater DOF would be possible with a smaller aperture. But with a wide aperture, surely there is the opportunity (now, or later with software development) to do an 'HDR'-like effect on DOF: shoot at f/1.8 but layer up to what would have been achieved at f/8, if that is what you wanted. This would be very useful in low light, if you wanted more of the picture in focus than the aperture needed for the conditions allowed.

The "HDR" effect you're talking about is somewhat similar to focus stacking; it's already something people do, mostly for macro photography, but it can also be used for landscapes and architecture. Software is readily available for cameras with normal lenses to do this. Of course, this means taking multiple photos at different focus distances, so you're pretty much limited to non-moving subjects. But you can still shoot wide open and have each image with less noise. So instead of one f/8 shot at ISO 1600 you can have, say, four or five f/2.8 shots at ISO 200, or still at ISO 1600 but with a faster, more stable shutter speed.

Not exactly as versatile as a one-shot plenoptic, but I thought I should point out that this is something available to you.
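A bare-bones version of that focus-stacking merge can be sketched like this. This is a toy Python/NumPy illustration; real stacking software aligns the frames first and uses far better sharpness measures:

```python
import numpy as np

def focus_stack(slices):
    """Merge shots focused at different distances by picking, per pixel,
    the slice with the strongest local contrast (a crude discrete
    Laplacian as the sharpness measure; edges wrap around, which is
    fine for a sketch)."""
    stack = np.stack(slices).astype(float)   # shape (n_slices, h, w)
    lap = np.abs(
        -4 * stack
        + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
        + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2)
    )
    best = lap.argmax(axis=0)                # sharpest slice per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```

Given two slices, each sharp in a different region, the merge keeps the sharp region from each.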

This all seems to work with a tiny (not just very small, but tiny) sensor with a tremendous pixel count. And you can see how Ng keeps repeating "sharing, sharing, sharing...". A revolution? Perhaps, in cameraphones, but nothing more than a new useless toy for a photographer...

What I'm looking forward to is a matching display for these images: it has the same amount of photo sites and micro-lenses as the camera, and turns the image into something viewable without the need for processing. It would give a 3D effect when viewed from the right range of angles, without the need for 3D glasses.

How about field curvature, chromatic aberration, spherical aberration, distortion? All trivially solved with the right algorithm. Some of that is already done today, and light field photography would add a whole new dimension to the possibilities.

This camera cannot correct a drastically out-of-focus picture any more than stopping down with a conventional camera can. See Joseph's post below. What it will let you do is have selective-focus effects within a range of focus (at a tremendous reduction in resolution). That range, however, is no larger than if you had stopped down for larger DOF to begin with.

Nor is it easy to generate a smooth and believable transition between the sharp and blurred areas. I also find something deeply unsatisfying in the images posted for experimentation: the refocused area never looks truly sharp, even in the heavily downsized images they provide. Good enough for Facebook? Sure, but so is my crappy cell phone camera.

I think it will have a larger DOF than a normal camera at maximum aperture. In fact, it does not need an aperture change for DOF.

In a normal camera, all you have is a plane with a sensor. With this camera you have microlenses at different distances from the lens. It is like taking multiple shots at maximum DOF (with different focus points). The problem, however, is that you lose resolution/sensitivity compared with a normal camera. If, for example, you have 16MP and 4 focus planes, and you want a small-DOF image, then you only have 4MP to work with. For more focal planes, even less.
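The trade-off in that example is just division (the 16MP/4-plane figures are the commenter's illustration, not Lytro specs):

```python
def rendered_megapixels(sensor_mp, angular_samples):
    """Pixels under each microlens record direction instead of position,
    so each refocused output image divides the sensor resolution by the
    number of angular samples."""
    return sensor_mp / angular_samples
```

So a 16MP sensor resolving 4 directions yields 4MP refocused images; resolving 16 directions would leave only 1MP.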

Cy, if you have a lens with an aperture so large the DOF is too small for the scene, you could shoot it wide open anyway to gather enough light that you can then increase the shutter speed. Now motion blur is reduced, but your DOF is too shallow. With this technique, you could increase your DOF in post. High ISO doesn't blur shots, it adds noise. Noise reduction often does add blur, though. There is the potential that this technique could smooth noise in a way similar to picture-stack averaging, which could slightly sharpen the image while reducing noise, as in astrophotography. Actual first-gen products may not be capable of either improvement even if it is technically possible.
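The picture-stack averaging mentioned is straightforward to illustrate. A toy Python/NumPy sketch, with synthetic Gaussian noise standing in for sensor noise: averaging N registered frames cuts independent noise roughly by a factor of sqrt(N).

```python
import numpy as np

def stack_average(frames):
    """Average a burst of registered frames; independent per-frame noise
    falls roughly as 1/sqrt(N), the trick used in astrophotography."""
    return np.stack(frames).astype(float).mean(axis=0)
```

Averaging 16 frames with noise of standard deviation 10 should leave residual noise around 2.5, while the mean signal level is preserved.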

kenw: If you could correct anywhere within the normal stop-down range, I would say that is very significant! Fully stopped down can give a pretty huge range - from a few cm to infinity. However, I suspect that in practice the system will be more limited by the number of photo-site positions and the design of lenses to match.

As a side note, not having to focus at all could also help make for a very fast and responsive camera!

Cy: How could the war in Afghanistan kill Elvis if you perceived him to be shot?

You can do it if they give you more access to the computation engine. They do say that initially they will take the Apple approach - dumb down everything to the lowest common denominator. Which is why some of us don't touch Apple products.

"When we moved from film to digital it made all sorts of changes to what we could do with photographs, but we were still collecting essentially the same 2D data that we always had been, right back to the days of the daguerreotype."

I guess he's never seen:
* A hologram. That's a film technique over half a century old for capturing a light field. It captures a lot more information than Ng's folly.
* A lenticular array, like the one in his camera, used for the analog film processes for which it was designed (baseball cards, advertisements, Cracker Jack box prizes).
* A stereo or higher-order film camera. Remember the four-lens Nimslo?

What Ng should have said is: "For the overwhelming majority of photos taken today, we are still collecting essentially the same 2D data". The important exceptions you note are very small niches compared to mainstream photography. This new device may have the potential to change that.

I have been following this for some time and am excited to see what Lytro releases and how it progresses. Having used DSLRs since their infancy, I think Ng's forecast that the growth in sophistication and power of both Lytro files and tools to work with them will be like the growth for raw files and tools is probably very fair.

I got on the waiting list some time ago, and I'm very much looking forward to seeing what Lytro releases. Wish I had it for my upcoming hiking trip to the Pacific Northwest!

While a Lytro image's focus may not be as tack sharp as the photo of Mr. Ng at the head of this article, I think that is simply a matter of time. The code jockey in me is fascinated by the whole concept and the photographer just wants to have fun.

It can get an arbitrary number of zones of focus. But the calculations to do that aren't anywhere near real-time. To get something where you can move sliders and see something happening, you need to calculate a finite number of focus zones offline, with a big computer, then put them together in some sort of layered presentation (Flash is ideal for fluff like this).
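That layered presentation reduces the viewer's click to a lookup among precomputed slices, something like this toy Python sketch (the depth values are invented):

```python
def nearest_focus_depth(available_depths, clicked_depth):
    """Given the focus depths of the precomputed slices, return the one
    closest to where the viewer clicked; the player then just swaps in
    that pre-rendered image instead of recomputing a refocus."""
    return min(available_depths, key=lambda d: abs(d - clicked_depth))
```

With slices at depths 0.5, 1.0, 2.0 and 5.0 (in arbitrary units), a click corresponding to 1.7 would display the 2.0 slice.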

"Also the way we've packaged the data for easy viewing on the internet has an effect. It's not the full light field you're seeing - it's a subset to make it more portable. It's analogous to comparing the Raw data that an enthusiast photographer might take, with the small, compressed JPEG that Facebook might serve up if you view it on your smart phone."

It sounds like they sampled the data depending on the image used, so if the image has 3 points of interest, they only included those DOF slices plus a few OOF to save on bandwidth.

The points available to select for focus are iffy. Trying to focus on the girl's teeth, nothing happens. Using her earring as a point of focus does trigger a refocus, but nothing, including the earring, comes into focus.

There are many other aspects which I expect will keep this from becoming mainstream, even among uncritical snapshooters. It is a shame because I was really enthusiastic about this as I first read it.

This is all an interesting concept, and I applaud those who have taken it this far, but it has a long way to go before it will be widely accepted by more than a very few.

Please keep working in this direction. While it may not replace pro cameras, there is certainly a place for this in today's world once its many shortcomings have been overcome.
