Lytro poised to forever change filmmaking: debuts Cinema prototype and short film at NAB

Lytro greeted a packed showroom at NAB 2016 in Las Vegas, Nevada to demo its prototype Lytro Cinema camera and platform, as well as debut footage shot on the system. To say we're impressed by what we saw would be an understatement: Lytro may be poised to change the face of cinema forever.

The short film 'Life', containing footage shot both on Lytro Cinema as well as an Arri Alexa, demonstrated some of the exciting applications of light field in video. Directed by Academy Award winner Robert Stromberg and shot by VRC Chief Imaging Scientist David Stump, 'Life' showcased the ability of light field to obviate green screens, allowing for extraction of backgrounds or other scene elements based on depth information, and seamless integration of CGI elements into scenes. Lytro calls it 'depth screening', and the effect looked realistic to us.

'Life' showcased the ability of Lytro Cinema to essentially kill off the green screen

Just as exciting was the demonstration of a movable virtual camera in post: since the light field contains multiple perspectives, a movie-maker can add in camera movement at the editing stage, despite using a static camera to shoot. And we're not talking about a simple pan left/right, up/down, or a simple Ken Burns effect... we're talking about actual perspective shifts. Up, down, left, right, back and forth, even short dolly movements - all simulated by moving a virtual camera in post, not by actually having to move the camera on set. To see the effect, have a look at our interview with Ariel Braunstein of Lytro, where he presents a camera fly-through from a single Lytro Illum shot (3:39 - 4:05):

The Lytro Cinema is capable of capturing these multiple perspectives because of 'sub-aperture imaging'. Head of Light Field Video Jon Karafin explains that in front of the sensor sits a microlens array consisting of millions of small lenses, similar to what traditional cameras have. The difference, though, is that there is a 6x6 pixel array underneath each microlens, meaning that an image assembled from the pixel at one particular (X,Y) position underneath every microlens represents the scene as seen through one portion, or 'sub-aperture', of the lens. There are 36 of these 'sub-aperture' images, each providing one of 36 different perspectives, which then allows for computational reconstruction of the image with all the benefits of light field.
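To make the geometry concrete, here is a toy sketch in Python/NumPy of how sub-aperture images fall out of a plenoptic sensor. The sensor dimensions are invented for illustration, and this is not Lytro's actual data format; the point is simply that picking the same pixel position under every microlens yields one view through one part of the main lens.

```python
import numpy as np

# Hypothetical plenoptic sensor: each microlens covers a 6x6 block of pixels.
N = 6                                    # pixels per microlens side
lenses_y, lenses_x = 100, 150            # invented microlens grid for illustration
raw = np.random.rand(lenses_y * N, lenses_x * N)  # stand-in for raw sensor data

def sub_aperture(raw, u, v, n=N):
    """Image formed by pixel (u, v) under every microlens: one 'sub-aperture' view."""
    return raw[u::n, v::n]

# 36 views, each seeing the scene through a different portion of the main lens.
views = [sub_aperture(raw, u, v) for u in range(N) for v in range(N)]
print(len(views), views[0].shape)        # 36 views of 100x150 'spatial' pixels each
```

Each of the 36 slices shares the same spatial resolution but a slightly different vantage point, which is what makes the later perspective reconstruction possible.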

The 36 different perspectives afford you some freedom in moving a virtual camera in post, but it is of course limited, affected by considerations like lens, focal length, and subject distance. It's not clear yet what that range of freedom is with the Cinema, but what we saw in the short film was impressive - something cinematographers will undoubtedly welcome in place of setting up motion rigs for small camera movements. Even from a consumer perspective, consider what auto-curation of user-generated content could do with tools like these. Think Animoto on steroids.

Front of the Lytro Cinema, on display at NAB 2016. There are two optical paths, one for the actual light field capture, and the other for previewing the live view and dialing in creative decisions like exposure, focus and depth-of-field at the time of capture. With light field, though, those decisions are reversible.

We've focused on depth screening and perspective shift, but let's not forget all the other benefits light field brings. The multiple perspectives captured mean you can generate 3D images or video from every shot at any desired parallax disparity (3D filmmakers often have to choose their disparity on-set, only able to optimize for one set of viewing conditions). You can focus your image after the fact, which saves critical focus and focus approach (its cadence) for post.* Selective depth-of-field is also available in post: you can choose shallow or extended depth-of-field, or even transition between the two in your timeline. You can even isolate shallow or extended depth-of-field to different objects in the scene using focus spread: say F5.6 for a face to get it all in focus, but F0.3 for the rest of the scene.

Speaking of F0.3 (yes, you read that right), light field allows you to simulate, in post, faster apertures (smaller f-numbers) previously thought impossible, which in turn places fewer demands on lens design. That's what allowed the Illum camera to house a 30-250mm equiv. F2.0 constant aperture lens in a relatively small and lightweight body. You could open that aperture up to F1.0 in post, and at the demo of Cinema at NAB, Lytro impressed its audience with - we kid you not - F0.3 depth-of-field footage. A Lytro representative claimed even faster apertures can be simulated.

The sensor housing appears to be over a foot wide. That huge light field sensor gets you unreal f-stops down to F0.3 or faster

But all this doesn't come without a cost: the Lytro Cinema appears massive, and rightfully so. A 6x6 pixel array underneath each microlens means there are 36 pixels for every 1 pixel on a traditional camera; so to maintain spatial resolution, you need to grow your sensor, and your total number of pixels. Which is exactly what Lytro did - the sensor housing appeared to our eyes to be over a foot in width, sporting a whopping 755 million total pixels. That should mean that at worst, you'd get 755/36, or roughly 21MP final video output. Final output resolution was a concern with previous Lytro cameras: the Illum yielded roughly 5MP equivalent (sometimes worse) stills from a 40MP sensor. However, as we understand it, the theoretical lowest resolution of 21MP with the Cinema sensor means that output resolution shouldn't be a concern for 4K, or even higher-res, video.**
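As a quick sanity check on that arithmetic, here is a sketch using the figures quoted above (the drop from total to spatial resolution is simply the 6x6 sub-aperture grid):

```python
# Worst-case output resolution: total pixels divided across the 6x6 sub-aperture grid.
total_pixels = 755_000_000          # reported total sensor pixels
views = 6 * 6                       # pixels under each microlens
spatial_pixels = total_pixels // views
print(spatial_pixels)               # ~21 million microlenses -> ~21MP worst-case output

dci_4k = 4096 * 2160                # ~8.8MP per 4K DCI frame
print(spatial_pixels / dci_4k)      # comfortably more than 2x what 4K needs
```

Even this pessimistic division leaves well over twice the pixel count of a 4K DCI frame, which is why output resolution shouldn't be the concern it was with the Illum.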

The Lytro Cinema is massive. The sensor is housed in the black box behind the orange strut, which appears to be at least a foot wide. It's actively cooled, and comes with its own traveling server to deal with the 300GB/s data rates. Processing takes place in the cloud, where Google spools up thousands of CPUs to compute each thing you do while you work with real-time proxies.

The optics appear as massive as the resolution, but that's partly because there are two optical paths: one for the 755MP light field capture, and the other to give the cinematographer a live preview for framing, focus, and exposure. The insane data rates for the light field capture, on the order of terabytes for every few seconds, mean that Lytro Cinema comes with its own server on-set. The sensor is also actively cooled. The total unit lives on rails on wheels, so forget hand-held footage - for now. Bear in mind, though, that the original Technicolor cinema camera introduced back in 1932 appeared similarly gargantuan, and Lytro specifically mentioned that different versions of Cinema are planned, some smaller in size.

Processing all that data isn't easy - in fact, no mortal laptop or desktop need apply. Lytro is partnering with Google to send footage to the cloud, where thousands of CPUs crunch the data and provide you real-time proxies for editing. Lytro stated the importance of integration with existing workflows, and to that end is building plug-ins to allow for light field video editing within existing editors - starting with Nuke. But Lytro is going a step further: they suggest the light field is the ultimate mastering format, and they're capable of converting all content - from footage to visual effects - into a 4D light field so you can, at any time, go back and re-render your film for any display device. This will be particularly important with the advent of holographic and other innovative light field displays.

Thousands of CPUs on Google's servers crunch the data and provide you real-time proxies for editing

The 4K footage from the Lytro Cinema that was mixed with Arri Alexa footage to create the short 'Life', viewed from our seating position, appeared comparable to what one might expect from professional cinema capture. CEO Jason Rosenthal commented that the short film was shot on both cameras to speak to how interchangeable footage can be with other cameras. Importantly, the footage appeared virtually noise free - which one might expect of such a large sensor area. Furthermore, Jon Karafin pointed out there are 'hundreds of input samples for every one output sample', which means a significant amount of noise averaging occurs, yielding a clean image, and a claimed 16 stops of dynamic range. In fact, in 'Life', noise had to be added back in to get the Lytro footage to match the Alexa.

That's incredibly impressive, given all the advantages light field brings. This may be the start of something incredibly transformative for the industry. After all, who wouldn't want the option for F0.3 depth-of-field with perfect focus in post, adjustable shutter angle and frame rate, compellingly real 3D imagery when paired with a light field display, and more? With increased capabilities for handling large data bandwidths, larger sensors, and more pixels, we think some form of light field will exist perhaps in most cameras of the future. Particularly when it comes to virtual reality capture, which Lytro also intends to disrupt with Immerge.

It's admirable just how far Lytro has come in such a short while, and we can't wait to see what's next. For more information, visit Lytro Cinema.

* If it's anything like the Illum, though, some level of focusing will still be required on set, as there are optimal planes of refocus-ability.

** We're not certain of the actual trade-off for the current Lytro Cinema. It's correlated to the number of pixels underneath each microlens, and effective resolution can vary at different focal planes, or change based on where focus was placed. This may be one reason for the overkill resolution - to ensure that at worst, capture is high resolution enough to meet high demands.

There seems to be some confusion over what the Lytro can offer, such as the green-screen potential. There are hundreds of shots in film that need roto/keying, many of which were not shot with a green screen but still need background removal; this is where Lytro could be even more valuable. I've known roto artists who have had to cut out/roto individual trees in a forest scene. Now picture having to do that without a green screen: masking 24fps over, say, 5 seconds, x2 (for 3D), thousands of leaves, hundreds of branches, blowing around in the wind - and film is very pedantic about every aspect of the work involved... to the pixel. Roto'ing hair is another nightmare that could be solved by Lytro. This sounds unimportant, but film demands just as much quality in isolating elements like hair and fine detail as every other aspect of the filmmaking process. Supervisors will check every frame, every edge of your mask, gamma up & down, to make sure you've saved all the detail...

There is no wrong camera size...just the wrong pocket...lmao...so if you get a few square meters of fabric, a (I would say) 2 by 4 m (very) rigid base and a ton of wheels, you got your camera bag...I would suggest a tent on some kind of a wheeled palette...

Regarding the calculations and assumptions made by dpreview: I agree that this video beast likely has (and needs to have) an excess of resolution in order to give the editor a real choice for focus.

With the Lytro Illum you could refocus your image to any point, but there was only a limited range in which the image was anything like sharp. (It had two maxima at depth -4 and +4 with a local minimum at ±0, where the image was just acceptable but not really good.) This left me with no real choice where to put my subjects if I wanted them to be sharp (~-6 to -2 and/or +2 to +6, roughly). Excess resolution would give me the freedom to re-focus on anything from, let's say, -10 to +10 and still have acceptable sharpness.

P.S. The Lytro Illum did not have a spatial resolution of 5MP! The spatial resolution is nowhere officially stated (afaik), and the camera can only export rather soft 4MP JPG files; the effective resolution is ~2MP in ideal conditions.

Digital photography has always been distinct from film, and yet both in form factor, and mindset, it has slavishly emulated it. One reason is ease of marketing, another is the innate conservatism of "photographers" as opposed to people using a tool towards a conceptual result. This break will not only make the film/digital difference apparent to lytro users, but may inspire a new perspective, and consequent new ideas, from "traditional" digital users.

"The multiple perspectives captured mean you can generate 3D images or video from every shot at any desired parallax disparity"

Except that single point cameras are essentially useless for 3D video, because the occlusion is only correct from the single viewpoint. Moving the viewpoint means that you now need background that was blocked from the single viewpoint.

The end result is that the images look like bad automatic 3D conversions, until a skilled artist retouches all occluded areas.

Light field data allows correct perspective shift within a limited area because it captures both the angle and intensity of light passing through an area. Essentially, it has the background and foreground captured from a continuum of perspectives within an area. The tradeoff is the somewhat ridiculous amount of redundant data and power needed to process it all.

Exactly. The light field cameras should not be seen as "single point cameras"; they can best be compared to an array of cameras mounted next to and above each other, and in post you can choose the perspective of the one you like most, or choose two of them for a 3D effect.

The human interpupillary distance ranges from about 50-70mm, and 3D filmmakers tend to shoot a bit wider. The "correct perspective shift within a limited area" of a lightfield camera is limited by the aperture of the lens. To get a minimal 50mm interpupillary distance on a normal lens would require a 30mm f0.5 on a super-35mm sized sensor.

I stand by my observation that "single point cameras are essentially useless for 3D video" and for 3D work, the lightfield cameras are essentially single points.

Single point techniques only work in macro 3D focus stacking, where the interpupillary distances are on the order of 1mm, not 60mm, and the background is totally out of focus.
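The baseline limit being argued here can be sketched numerically, assuming (as the comment above does) that the maximum synthesizable stereo baseline is roughly the lens's entrance pupil diameter, i.e. focal length divided by f-number:

```python
def entrance_pupil_mm(focal_length_mm, f_number):
    """Entrance pupil diameter: a rough upper bound on light field stereo baseline."""
    return focal_length_mm / f_number

# The comment's example: a 30mm F0.5 lens just exceeds a 50mm interpupillary distance...
print(entrance_pupil_mm(30, 0.5))   # 60.0
# ...while a conventional fast lens falls far short of the human stereo baseline.
print(entrance_pupil_mm(30, 2.0))   # 15.0
```

This is why the physical width of the optics matters so much for 3D: no amount of post-processing can recover parallax the aperture never saw.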

See the caption in the photo of the camera: "The sensor is housed in the black box behind the orange strut, which appears to be at least a foot wide." We're not talking about podunk Super35 sensors - that's a ~304mm wide sensor, while Super35 is 24mm wide. There's a reason the thing is the size of a car.

This is obviously not going to replace most standard film production crews shooting ordinary scenes of people; there's no reason to use a car-sized camera to get a kiss scene in the bedroom right. I think people are missing the point that this could revolutionize shooting the ultra-costly massive special effects scenes - the explosion/building collapse/million-dollar-per-second kind of shot - where you've got one take to get it right and you don't want to miss something as mundane as focus, for the love of God.

Stills and cinema are two very different things. Unlike stills photographers, who have demands on resolution and usability that Lytro couldn't meet (along with Lytro trying to solve a problem that really wasn't a problem), cinema has limited resolution needs, which this does meet, and as it moves into new technologies it has real problems that this solves.

I don't doubt that, but I can't stand marketing BS like "forever change filmmaking" or "killing the green screen" and "best ever". Come to market, show what you're capable of, and the crowd will decide what's good and what's not. Work comes before party.

Well, they need buzzwords to sell, haha. But they will definitely change parts of filmmaking, that's for sure. This thing obviously isn't going to fit in any tight spaces or do any action sequences involving a lot of motion. It isn't being stuck on a crane to get long tracking shots. But for some things it will have its place. It will be a very large tool in a very large bag.

The problem with stills is that nobody cared about what lytro was offering. They were offering something that was kind of neat but nobody wanted or needed it. The application was limited, and the execution left a lot to be desired. Nobody wanted to do what lytro gave them the ability to do.

With cinema it is a whole different ball game with many companies rushing to try and offer what lytro just unveiled. Tons of people want it. This time there actually is a market.

Dulynoted is correct, this is a massive deal for cinema. Masking a still frame is one thing; rotoscoping five seconds of 4K footage at 24fps that's stereoscopic for 3D, so 24 x 5 x 2... possibly even 50fps (like in The Hobbit, for example) - then depth separation becomes a real game-changer....

Dulynoted, which parts of the filmmaking process do you think are really being solved?

Focus pullers: I can almost guarantee you it's easier and cheaper to have an excellent focus puller on your crew than it is to hire the Lytro camera, specialized new crew, and pay for post-processing the specialized footage.

Really complex action sequences that can be done just once: those are shot from multiple cameras - I'd guess more than 20 at once. Cheaper that way, and more guaranteed to load the editor with good footage than a single Lytro camera will ever offer.

Green screen: that's the industry standard. There are lots of green screen sets out there, and lots of great keying artists available. No big deal. No need for a completely new system to solve that.

Camera movement: logistics and costs say it's easier to have a crane or tracking dolly on your set (let's say, wild woods like in The Revenant) than it is to have this monster set up on location.

We're not sticking with traditional methods, we're just questioning a product. I was not even that impressed by the video, but I can see the possibilities.

So, let's see how it develops, and I wish them the best of luck with it (and the massive server backend). I just can't stand the mega-hype marketing buzz. This seems to be normal nowadays. Every new product has to be at least "market-leading" and "changing the universe". A lot of hot air.

Any tech that suggests it can separate subjects without roto is changing the universe! It's a bold statement, and the film industry has every right to get excited. Marcio, your response kinda shows that you do not quite understand what is being shown here.

Focus pullers: What if whilst editing, the director decides he wants to pull focus differently? Perhaps the opposite way or shift focus from the eyes, to the hands and back or whatever? With Lytro, there's an opportunity to extend creativity past production. A re-shoot would be incredibly expensive.

Lytro isn't claiming to replace multiple cameras onset, but it could be the primary one.

Green-Screen: You make it sound easy. Keying can require multiple keys to pull mattes from and a lot of labor. Hair and other fine detail can get lost through keying, wardrobe is restricted by green-screen, Compositors have to spend a lot of time de-spilling, removing green reflection and such from their subjects etc. Any system that mitigates keying saves a lot of effort and time.

There are shots that require a camera for movement, there's also shots that do not. Choose the right one for the job. Who knows, common sense says that this monster will one day fit in the palm of your hand.

> common sense says that this monster will one day fit in the palm of your hand.

Not as long as physical laws are valid.

Getting an idea of the 3D scene, and therefore the mentioned perspective shift, only works if the camera is able to look at the object from sufficiently different angles. You wouldn't get any serious parallax from a camera with a 1cm front lens. The parallax of the Lytro Illum (~4cm, maybe) is already rather limited compared to the human eye. And although there will surely be further advances in miniaturizing servers and storage, I doubt you will get a camera with these specs down to the size of a common DSLR. Maybe down to the size of a "normal" studio movie camera, which would be sufficient.

I don't understand the point you're making... work will still be passed onto post-production, obviously not everything but certainly more, and more will be expected from existing shots too, seeing as this offers wider creative freedom...

If it can't do the basic shot I mentioned, then a producer will think twice about using it, as it would be cheaper and quicker to just hire an Alexa for the whole shoot. Roto work will be its forte, but it's not as if there isn't a lot of cheap labour to do this in post. 1000TB per hour, using thousands of Google CPUs - how does this work in editing with Avid and maybe Premiere? There's a lot of cheap and very mature keying, spill suppression, and tracking software and expertise available. There's more to compositing than just pulling the key; that's the easy part, so the VFX artist won't be diminished. Will the optics compare to the best primes and the best sensors available? 'Forever changing filmmaking' - rotoscoping and VR films, yes, but the rest may not be economically practical. 3D TV and cinema sounded like stereo audio, but where is it now? It will have applications, but I can't see it supporting Lytro's remaining workforce; maybe in ten years.

I saw a shot on Gatsby where they had covered a three-story building and half the street with a green screen - now there's an application - but the crane did a huge sweeping shot from high up and then tracked along the side of the travelling car. Maybe not possible with Lytro?

You pick the right camera for the right job; you don't hire one camera to do everything. Lytro has its place for certain shots, just like any other camera. Keying isn't as easy as you think, especially for film-quality pulling. A lot of keyers out there would be annoyed with that statement!

OK, keying is comparatively easy compared to the rest of compositing and effects creation. It's amazing, the quality of keying these days, even with digital mixers - UHD realtime. Different industry, I know. One guy who has always used Nuke showed me a plugin in After Effects that did rotoscoping. If he was impressed it must have been pretty good; he had a vested interest in it not working. Surely DPR has bits and bytes wrong; hopefully this camera can run at 50fps too, without the ability to change the shutter in post.

Just the green-screen/rotoscoping aspect alone is a HUGE deal with this technology. Both are painstaking, slow, expensive and sometimes impossible to achieve without quality trade off. The drawback however is that all those artists jobs will be gone, and we're talking potentially hundreds per movie...

Lol. So much salt in this thread from photogs who see Lytro's approach only as a failed stills technology, or from flat-out shortsighted image capturing "purists."

Motion picture production was always the dream with the Lytro technology for anyone that understood just how much control it gave you to fix, or simply adjust, as is needed in post. It adds so much more control that there's no reason *not to use it*... once the price and size drop enough to make it economical on set, that is.

As a motion picture editor, I can't wait to be able to make focal shifts and "aperture changes" after I'm rolling with a cut. Because no matter how intentional a DP or Director is with their work on set, they can't visualize EXACTLY how everything's going to splice together later, and tech like this just gives editors and compositors more options to make things work in a final cut in a way that might not have worked without it.

To add... as an editor, sometimes your entire job is fixing what everyone else messed up before the footage got to you... simply because you're at the end of the production line. So to have more tools to fix those mistakes? I'll take every one I can get.

And for another take on the utility of this tech, imagine this: being able to leave another camera operator, or lighting setup, right smack dab in the master shot, on set, and then being able to just erase everything at that depth level from the shot in post. Wow! That alone opens up so many new creative opportunities!

Yeah, the fix-it-in-post crowd will love this, and proper DPs can shed a tear as yet another of their skills is handed over to the team of cooks in the edit suite. Hopefully some will be allowed to use this without over-gilding it.

Hell yeah, let's replace expert operators and cinematographers with the editor, who has ostensibly zero experience in either field. That'll make for a great film full of artistic intent end-to-end. And if we can't do that, we'll at least be able to undermine the creative choices made by the DP after the fact! So perfect. What a dream.

*Honestly, I've edited so much footage from so many different "DPs," camera ops, and shooters, and frankly many of them *should* have to share control, or need to be babysat better by their directors.

Same can be said of many editors, I'm sure. But more often than not, crappy shooting has really backed me into a corner in the edit suite. On more than one occasion, I've even gone out on my lunch break and shot extra B-Roll around the city on an RX100 because I needed a few, better bumper shots.

But, you know, from preproduction to post, that's what happens when tech gets cheap enough that anyone with a credit card and a dream can call themselves a pro.

So I guess we just keep arguing about who is able to have control. In the end it has always (historically) been up to the editor to be creative and deal with the elements they are given in order to come up with solutions. There will always be shortcomings in every department – directing, shooting, vfx, production design, et al. But transferring power over the focus, framing, aperture, depth of field and even camera moves to a person who is not the DP, and all of the people who have power over that person (clients and producers), is sickening to anyone who takes their role as a cinematographer seriously. Digital RAW came first, and now technologies like Lytro continue to give every non-artist in the movie-making process after-the-fact control of decisions made by the Director and Director of Photography.

It should... but I've been editing broadcast content for NBC and ABC shows (both daytime and primetime), and you'd be surprised how much corner cutting still goes on, and how much "good enough" talent is kept around because it's "easy." Of course, once you're talking about feature artists, and big media houses, the cream really does rise up.

@CWSmith First, it's not about replacing anyone. It's about being able to fix things that couldn't be done properly on set, for whatever reason, which happens all the time. Sometimes that's as simple as nailing critical focus, and others it's as complex as compositing something in or out, or adding a "camera move" to something that was shot statically because it's better for the cut. So good DPs, and their great work, aren't going anywhere.

Secondly, if you don't know that half the direction in a project happens in the edit room, you've never been in one.

Looks very interesting. Exciting to see what it will bring in the future.

To me the insane data rates of the quoted 300 Gb/s look like a huge problem with present technology. I take it that the 300 Gb/s is before compression? Even with compression it seems an awful lot of data for shooting a whole movie. Even with the fastest commercial standards it would take a loooooong time to transfer from one medium to another, and I dare not think about the time it would take to upload it to the servers. It would probably be faster to physically deliver the data :) I am not in the film/movie business, so I don't know if there are any proprietary, non-commercial standards which are substantially faster than USB3/Thunderbolt?

Let's say they're on location and they take the data truck with them; they plan to shoot 5 hours over the week. 5000TB would require 833 6TB drives, with no RAID protection. Let's jam 200 drives into a full-height rack: you need 4 racks, and you should back that up, so 8 racks. We now need a small generator truck to power it and the air con. Back at the post house we transfer it using the fibre optic server in the truck. At 1000MB/s, 1300 hours later it's all done. No good - let's use 10 fibres and a military-grade server: 130 hours. Now transcode it to proxies so the editor can have a look. The camera has replaced the need for 4 cameras or 4 takes, so better transcode the 4 angles. People will say that it's not intended to replace mainstream production - that's my point.
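For reference, the back-of-envelope numbers above can be reproduced in a few lines, assuming the article's 300GB/s figure; every value here is illustrative:

```python
# 5 hours of capture at the quoted sustained data rate.
rate_gb_per_s = 300
total_gb = rate_gb_per_s * 5 * 3600      # 5,400,000 GB captured
total_tb = total_gb / 1000               # 5,400 TB (the comment rounds to ~5,000)
drives_6tb = total_tb / 6                # ~900 unprotected 6TB drives
hours_at_1gb_s = total_gb / 1 / 3600     # ~1,500 hours to move it at 1GB/s
print(total_tb, round(drives_6tb), round(hours_at_1gb_s))
```

Whatever the exact figures turn out to be, the order of magnitude explains why Lytro ships its own server with the camera and leans on cloud processing.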

The key here is whether or not this solves anything that movie makers are dying to solve and if so, at a cost that makes it worth switching over. I don't know how much money is spent now on things, but you also have unions to challenge. Just because you have a camera that can "eliminate the green screen" doesn't mean the industry will accept it. Lots of hurdles to overcome. Stay tuned...

Think of all the dedicated focus pullers that could be replaced by one camera operator. Actually, loss of a shot due to bad focus can be incredibly costly and may even result in a later reshoot, so that's a potential cost saving right there.

@cdembrey - you're more entrenched into this than me. I am in a media company and see movie sets quite a bit, but don't have any direct experience. What do you think? What are the chances that this will be embraced?

@cdembrey - I think of saving money as being all-encompassing as well. Fewer people, fewer man-hours, etc. So if they're renting, what changes for the producers? I would imagine post gets a big change and they need to learn how this new technology changes things. But exactly how does this save money? I.e., I've seen a few re-shoots on some small sets, but I don't recall how many were for missed focus etc. They were usually because someone didn't time it right or hit their spot, or missed a line, or the lighting didn't suit the producer, or the producer just didn't like the feeling that was portrayed and wanted different approaches from the actors.

@wtevo23 They are renting fewer crew members. Most producers have a very small staff of full time employees.

"I've seen a few re-shoots on some small sets, but I don't recall how many were for missed focus etc." This is not a problem for union Hollywood crews. If the 1st AC (focus puller) misses focus they will say so. Then they go again - no big deal. Even the best 1st ACs, camera operators and dolly grips will make an occasional mistake; they own up to it, and another take is made.

@cdembrey - I'm not questioning the savings of not needing certain people on a crew, I'm asking how Lytro accomplishes that and if there is any way a million dollar system can replace a $150k personnel reduction. (I don't know the numbers - just guessing.) And then that expensive post processing workflow. I'm skeptical that they've actually got something that will be embraced.

No keying or roto work, as you can generate mattes from the depth information; they can insert low-res geometry into the scene whilst shooting so the director knows exactly where things will be, making life easier in post rather than recomposing shots and set extensions to compensate; tracking information can be extrapolated, so no need for camera solvers; DoF information will be available to the compositors for quicker, more realistic results; clean plating will be easier with the added information stored in the different perspectives, etc. etc.

@stormy - it isn't adopted until it saves money or presents a significant improvement that everyone has been asking for. With CD Audio, I was immediately able to see the benefits and was willing to adopt it before it came out. I don't know enough about the movie industry, but what little I know doesn't make me think people are dying for what Lytro offers and certainly not at this cost. Maybe VR, but I don't see standard or even HD cinema.

If this did make it to a commercial production, I wonder if people will reject the "fake" looking video, similar to some people's rejection of high frame rate, on the grounds of "knowing" what a proper film should look like.

Put simply, if a director starts dinking with extreme aperture differences across the screen, people could be quite turned off.

It all depends on the skills (and budget) of those creating the film doesn't it? You could make exactly the same comment about all CGI, but done well it can be totally believable - just watch the latest Star Wars film for a great example. Some scenes are obviously CGI, often simply because we know they couldn't be created any other way, but there are some which I only know are CGI because I've seen the 'making of' extras.

The same is going to apply to light field, and it will be possible to use it badly, no question.

The soap opera effect was only noticed because of conditioning: TV versus film. If TV had always been 24p and film had switched to 50 or 60, everyone would have been blown away instead of making the association with soap operas. This would be something new, so people will probably be blown away.

@CheersUK Kodak is betting their future on the Super8 camera they announced at CES. I'll be one of the first in line when they become available in the fall. Like a lot of old-timers I started out with Super8, and it looks like Super8 film-making will make my retirement more enjoyable.

Obtaining knowledge and experience has always required encounters with both failure and success, and often years of dedication, experimentation, and hard work. The sum of this experience gave a person both added value and character. I guess we're moving away from these pesky old bugaboos now. Welcome to the age of shake-and-bake perfection and expertise! Still, I gotta wonder: what kind of world is all this progress progressing us towards? I might buy one of these, just to find out.

"THAT is the example I didn't think about!!! Elvis Vs. Bieber. Tina Turner vs. Britney Spears. Guns-n-Roses vs. Nickelback. Modern technology gives anyone a chance to "make music". The result - there is no music."

Yes. So whether it is clear to you or not, we make similar observations.

A key benefit this camera aims to provide is the ability to return to a shot and "re-do" it an unlimited number of times, rather than being constrained by the traditional idea of being such an experienced "pro" that you "nail" your shots the first time.

Kind of ironic, then, that you took notice of the 11 changes (edits?) I made. I confess I wanted to make a few more changes, but DPR imposes a limit!

So thank you for using your intelligence to distract other readers by mocking me, rather than actually contemplating and responding to the questions I raised. Questions, I might add, which we evidently share, despite your apparent difficulty understanding me.

Serious English? You can't be serious. Trust me. All three words are in the serious dictionary. Are there a lot of words and perhaps entire languages and forms of expression you're unfamiliar with that you summarily dismiss out of hand? What a pity. But let's get back to the subject at hand: Do you think technology which obviates expertise that once took sometimes years of experience and study is a good thing?

Very exciting news. I'm not that interested in cinema, but if ever the technology translates over to TV, just think of the quality of wildlife films that would be possible! You could get a lion's-eye view of an entire herd of wildebeest and actually roam among them! Or you could start from a tree canopy and make your way down to the undergrowth, stopping anywhere to examine entire ecosystems. I'd pay half my monthly salary to have a TV channel like that :-)

Thanks for the additional info. As a matter of fact, it means the Lytro Cinema is an example of a "Multi-Pixel AF Sensor" with 21 MP and Multi=6x6. The Canon Dual-Pixel AF sensor is another example, with Multi=2x1. For some time now, I have advocated broader adoption of Multi-Pixel AF sensors, specifically Quad-Pixel with Multi=2x2. OTOH, it is a significant step back from the original idea of a real light-field or plenoptic camera, which requires a much higher sub-ray resolution to be captured. But this stepped-back specification yields a much more interesting camera overall, still increasing the available depth of field by a factor of six.

Just adding to it... We do not know yet the 35mm-equivalent properties of this camera. But as it looks massive, even f-stop numbers of 6*N could imply shallow enough depth of field to make this camera operate more like a conventional camera, which requires constant focus operation.
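For anyone trying to picture the sub-aperture layout described in the article (a 6x6 pixel block under each microlens), here's a toy sketch of how one perspective image is pulled out of the raw mosaic. The array sizes and the `sub_aperture` helper are made up for illustration; this is the general plenoptic decoding idea, not Lytro's actual code.

```python
import numpy as np

# Hypothetical raw mosaic: a grid of microlenses, each covering an
# N x N block of pixels (N = 6, as described for Lytro Cinema)
N = 6
h_ml, w_ml = 4, 5   # toy sensor: 4 x 5 microlenses
raw = np.arange(h_ml * N * w_ml * N, dtype=float).reshape(h_ml * N, w_ml * N)

def sub_aperture(raw, u, v, n=N):
    """Pick pixel (u, v) under every microlens -> one of n*n perspective images."""
    return raw[u::n, v::n]

view = sub_aperture(raw, 0, 0)
# Each view has one pixel per microlens, i.e. 1/36 of the raw pixel count,
# which is the resolution trade-off light field cameras make
```

Stepping u and v through all 36 combinations yields the 36 slightly shifted perspectives that make virtual camera moves in post possible.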

Zdman, you clearly haven't read the text. Firstly, 30-250 is the equivalent focal length of the Illum, not that of the Cinema prototype. Secondly, ovatab was asking for the physical focal length, not the 35mm equivalent.

"noise had to be added back in to get the Lytro footage to match the Alexa" ... that's strange. If I were making a tech demo, I would certainly love to prove that the IQ of my product is better than the best available... it's this kind of unsupported claim that I don't really like.

On the other hand... it seems we are indeed at the beginning of a new era... finally!

They did show the best IQ possible. But as a cinematographer you don't want your shots to look completely different from one scene to the next unless intended. So it was demonstrated that they need not worry about that.

"CEO Jason Rosenthal commented that the short film was shot on both cameras to speak to how interchangeable footage can be with other cameras. Importantly, the footage appeared virtually noise free"

He's just pointing out that if you don't add noise to CGI, it stands out from the rest of the real video, which does have noise. That's very much on point with what you said. You're agreeing; now stop fighting.

It looks like one giant Lytro camera. I'm guessing most film makers are incredibly picky when it comes to lenses. Would they rather shoot with this than the lens of their choice? DPs use Arri just for the brand name and default color profiles.

I think the point is that shooting with this could potentially BE any lens of choice. ;-)

I'm sure that in the long run you'll be able to adjust 'character' of the lytro capture as well in post, just like we're able to do new things with our old raw files now when new software is introduced.

It's not like some guys at Google do the work there, looking at videos and stuff... they just upload some amazing lot of RAW Lytro data to some datacenter and then receive back a huge lot of RAW video data to grade, and so on... so leaking is not the most worrying part :) Anyway, probably all companies do that already, since it's a better way nowadays to use cloud power when you need a lot of computing (thinking of CGI-intensive movies, animations, etc.)

I expect that, when used for 3D production, the maximum interocular will be limited to the actual opening of the aperture being used, and not much wider. Perhaps I'm wrong, but I don't see how it can be larger than the "hole" the sensor "looks" through...

Well, if the 100MP Hassy is any guide, this sensor might be in the 180-200mm diagonal range - or are the pixel sites much smaller than a traditional sensor? Because with a sensor that big, you're definitely going to need some serious glass.

I don't intend to sound against technology. Actually, it's clear this thing is alien technology. Seriously, Lytro should open their warehouses to the public, to prove they're not reverse engineering tech from a fallen UFO. ;)

Having said that, there's a bit of exaggeration in the thought that all new techs are pushing the industry forward.

Sometimes, just because one thing allows more, it's not in any way the "future".

Speaking as photographers, we don't change lenses just because of limitations imposed by physics and optics.

We choose from WA to telephoto, as an artistic choice, as well as a technical one.

For example, has your D810 or 5ds made you dump the 70-200? I guess not, even though you have the option to crop in post now.

You won't dump your teles. Not now, not in the 200 mp era. Because of artistic merits.

My simplified example above may be just that: too simplified, but you get the idea.

Lytro reps themselves stressed you can lock in creative decisions. In fact, you do still make them at the time of capture. But flexibility in what you do w/ that footage is key in cinema, in a way it may not be to many photographers. Especially b/c what they do in movie making is so much more difficult than what traditional stills photographers might have to deal with.

The reason you're having trouble seeing the application may simply be b/c you're not thinking like a Hollywood DP - and I don't blame you for that!

But this has the potential to absolutely change the way real footage is merged with CGI, how such real world footage is shot, etc.

Hollywood productions strive for perfection in a way many stills photographers don't (although many do) - attend a tech summit at NAB if you're skeptical.

Your argument kind of feels like it's saying digital photographers don't appreciate what the Exposure slider in ACR gets you over what you were 'locked' into with film prints years ago.

Hmm, impressive tech, and having worked in the digital animation and effects business I can see the benefits. But although I somewhat agree with your thoughts, Rishi, I have trouble imagining this product "poised forever to change filmmaking". Over the last 20 yrs digital technology has transformed filmmaking... but you know what? Many, many major motion pictures are still shot on film... and for good reason too. I really do hope this kind of tech survives and they make a success of it, but I think it will probably end up like the Lytro Illum.

@marcio_napoli, I agree, not all new tech pushes the industry forward. Having limitations is a big part of the creative process. It's like choosing to use a prime lens when you could be using a zoom lens. Most people didn't dump all of their primes in favor of zooms. Primes impose a creative discipline. Having no limitations is sometimes a prescription for creative indecision.

Rishi, I included the last info (opinions from a fellow filmmaker) just to make sure it wasn't biased as a photographer's mindset.

Of course I'm nowhere near Hollywood level, and I do admit I was completely paranoid about nailing focus on a recent short film I've just finished.

The crew on set knows how paranoid I was about things like focus, choosing the right lens, etc.

I admit that, but that's because I'm just a low end independent filmmaker having to do things like focusing myself, while shooting.

Hollywood movies, which can afford very efficient and expensive focus pullers, don't worry as much.

There are guys who are very well paid to get things right on the set.

Examples that come to mind where a Lytro camera would come in handy are very complex action sequences, or specialized effects where you can't have a second go (like blowing up a large-scale miniature).

But in either case, you won't ever have just one camera rolling, and none of them can be as large as the Lytro camera.

There may not be a single cinematographer in the world that'd prefer to use a green screen if they didn't have to. But you have to ask them whether they'd rather use a green screen with the camera they already have, or use a colossal camera. Some may choose the colossal camera because of its special advantages. But some may still choose the green screen because it's readily available and doesn't require a special camera.

New technology tends to succeed better if it offers a quicker, easier, cheaper way to do something that we already do. If it's not significantly quicker, easier or cheaper, then it's hard to sell it based mainly on some new effect that it offers.

Rishi, I saw you come out with the same argument when they pulled the Illum... and to me the company is making the same mistake again. For this tech to get smaller, it needs investment. For that to happen, this current version of the tech needs plenty of sales. With the size of this camera it would make more sense to use it in the studio (not many advantages there, in such a controlled environment)... but this tech cries out to be used outside on location. Guess what? It's nowhere near portable enough. Imagine having to take 3 or 4 of these on location, let alone attaching one to a boom 😆

Don't get me wrong, I think the tech used is very promising. I just think the company keeps biting off more than they can chew, misreading the market and creating overly niche products.

OK, so to begin with, you're repeating what I wrote about how you can forget about hand-holding this.

Second, you're saying they're still misreading the market, trying to go from consumer-focus to far more niche-focused, just a niche that would actually benefit from all this tech. But it's still too niche.

What, pray, would you have them develop today anyway, then?

Or is it just that it's easier to pontificate and complain on the internet?

Rishi, as I said, I'm not denying the technology has great merit... but sit back for a minute and just look at the size of this thing... it's ridiculous. This tech needs many more years of development to be ready to be the 'standard' in cinematography... and even then optical physics may beat it. This kind of development cannot be sustained by such a small company without substantial backing. A camera that size has huge implications regarding logistics, let alone on-location data management.

Let's look at it this way - 4K is the future. If 4K determines that a sensor using this tech needs to be 1x1 ft, then it's still a huge camera. Too big. The many advantages start to be outweighed by the negatives.

As a side note, I thought on-topic opinions were the whole idea of this website... a shame one can't express honest opinions without being deemed negative. I didn't think I was being negative, just giving an honest appraisal from my experience in the film industry.

I'm also surprised at the lack of balance from Rishi on this thread of debate. It lacks caution and appears to deride a user who has some realistic concerns about this product. I can see some significant barriers with this first generation of Lytro cinema camera, whereas any subsequent improvements, certainly with regards to size and processing overhead, would be most welcome, and may well shape the future of cinematography. This appears to be a potential precursor to that shaping.

<< Also, keep in mind that even for Hollywood level, IMAX cameras are prohibitively large and cumbersome. Few filmmakers dare to use it, even with Hollywood budgets. And an IMAX camera is "pocketable" compared to this colossus. Picture an absurd example: imagine carrying this monster into the wild woods to shoot The Revenant. Advancing tech is nice, but there's one point where you wonder if it's really doable, or if it's just a gorgeous science experiment with limited real world use. >>

You know, I bet they said all those things about the Technicolor camera too. And look what happened.

"Green screen gets in the way of the creative process" - says who? That is part of the process, period. And there are a million examples of super-creative movies that use the archaic tech with success and with complete control. I've used green/blue screens for many years in TV ads, and I'd rather have the set painted green than have to upload 100GB of footage to "the cloud" and then what, edit it on Lytro's website? Yeah, right. These guys really are experts in creating solutions for non-existing problems.

<< But Technicolor was everything they had available back then. The tech had to be accepted, one way or another, simple as that. >>

It was everything they had available *to shoot in colour*. That's the point. The only other choice was to shoot black and white.

Granted, if you wanted to shoot colour you had to accept the unwieldiness of the Technicolor process, but then exactly the same is true of the Lytro. If you need or want the facilities the Lytro offers, it's currently the only game in town, just as the Technicolor camera was in the early days of colour.

That's the price of early adoption, and without the early adopters, Technicolor wouldn't have had the money to make their kit lighter and more compact and better quality. If the Lytro camera offers a significant advantage - which it does for certain types of production - people will use it and Lytro will have the money to develop it.

The difference between this product and the introduction of Technicolor is that Technicolor had a huge advantage: the technological advancement was obvious to the public - B&W compared to colour. With this, compared to traditional tech, the advance isn't so obvious in the final result to the paying consumer.

Using this camera would certainly change workflow for the better in the long run, but even now, with all the digital advancements, many studios prefer to still shoot with film.

The size of this camera is one obvious disadvantage, but the [un]willingness of the industry to adopt it with any conviction may be another.

Yes you can, paired with the latest Lytro Desktop software. It's called 'DepthFX', and we covered it here.

What's particularly cool for stills is the idea of doing certain image edits based on depth. I may wish to tone a bride's face a certain color, but not the background - and I don't have to build a mask to do so with light field.

Furthermore, I may want her entire face in focus, which might traditionally require F5.6, but I may wish for the rest of the image to maintain F1.4 depth-of-field/bokeh. That's what depth-based editing allows and, frankly, I still hope that light field tech will trickle back down to stills cameras, even though the market has been hard to convince so far.
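The bride example can be sketched very simply (again, my own hypothetical illustration, not Lytro Desktop's actual implementation): weight a color adjustment by distance from the subject's depth, so the edit fades out automatically with no hand-built mask.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((4, 4, 3))   # toy RGB image in [0, 1]
depth = np.full((4, 4), 8.0)    # background depth
depth[1:3, 1:3] = 2.0           # subject (e.g. the bride's face)

# Gaussian weight centered on the subject's depth (2.0, sigma 0.5)
weight = np.exp(-((depth - 2.0) ** 2) / (2 * 0.5 ** 2))

toned = image * np.array([1.1, 1.0, 0.9])  # warmer tone: red up, blue down
result = image * (1 - weight[..., None]) + toned * weight[..., None]
# Subject pixels receive the full warm tone; background pixels are
# left (numerically) untouched, with no mask drawn by hand
```

The same depth-weighted blend could mix an f/5.6-sharp render of the face with an f/1.4 render of everything else, which is the selective depth-of-field edit described above.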

I can see this having applications in big budget Hollywood where people work on effects for thousands of hours. But not for stills. Stills photographers don't want to do depth edits (ok, maybe a small number do). Stills photographers might be curious to try depth edits if that were a feature on their existing cameras, but they don't want to buy a separate camera just to work on depth edits. It's more cool as an "idea" than as something people want to spend time and money on. That's why the stills Lytro failed.

"It's more cool as an "idea" than as something people want to spend time and money on. That's why the stills Lytro failed."

Disagree. It failed because it couldn't fundamentally provide the quality current cameras provide. So the very audience it was targeted at - creative pros who do care about quality - had to choose between traditional quality and the newfound freedom. That doesn't mean, though, that one day it couldn't compete on a practically level playing field with traditional cameras. Yes, there'll always be a resolution tradeoff, but after a certain point it won't matter. Simply writing off the technology is short-sighted - how is a company to develop a future technology if it's only ever evaluated by a current implementation bounded by today's technological limits?

Lytro Cinema, by the way, is trying to remove that trade-off by starting with something that almost only offers advantages with respect to image quality.

Having this thing for movies or dynamic images is one thing, but it kind of failed on still pictures, more so ones that have to be printed. Could they make a light field camera with a variable focal length? With a good, sharp lens?

They didn't announce at the event whether or not it would be released, but I can't imagine why they wouldn't since it really showcases the camera's capability.

On the other hand, an important part of the presentation was a live demo afterwards in which they showed how the camera allowed them to achieve some of the effects in the film. This was important to put the visuals in 'Life' in context.

I'd love to find /any/ samples from this camera that can be viewed at more than roughly 300 pixels. The embedded Vimeo player doesn't allow full-screen viewing in any of the samples. And even there, the actual samples from the camera, only a few seconds' worth, are very hard to make out.

Are there other samples out there I'm missing? You would think they'd want these teaser videos viewable at 1080 as soon as possible.

It's a great idea. The convenience in compositing alone is worth the price of admission. But I'm puzzled as to why they wouldn't have HD samples up in time for NAB.
