CEA-Leti has developed a fully-functional curved full-frame image sensor

CEA-Leti has created this fully-functional prototype of a 20MP full frame image sensor. Photo Credit: LAM / CEA-Leti

Curved image sensors are an area of intense interest and research, and now the field of astronomy is jumping into the mix, developing curved sensors for use in telescopes. The latest prototype comes from CEA-Leti, a France-based research institute for electronics that has managed to create a fully-functional, 20MP full-frame curved image sensor prototype.

The project was conceived in a collaboration between the Laboratoire d’Astrophysique de Marseille (LAM) and CEA-Leti.

The optical benefits of a curved sensor are fairly straightforward: primarily more direct illumination for pixels at the edge of the sensor, and lighter lenses because there's no longer a need to correct a curved projection onto a flat sensor. And while Microsoft and Sony are both working on mass-market applications for this tech, these benefits are of serious interest to astronomical observatories.

"In terms of technologies, this is the dawn of a new era for astronomical instrumentation, with the access to wider fields and exquisite homogeneity of the optical properties across the images, and faster systems not possible with classical flat focal planes," explains a press release about the new sensor from the European Week of Astronomy and Space Science 2017 (EWASS). "Fewer components are needed, and the remaining ones are less complex."

Further competition in this market, even from niche markets like this, can only be a good thing. Curved sensors might very well be the next major 'breakthrough' in the world of digital imaging.

I see a great opportunity in bridge cameras, since they usually have long zooms and fixed lenses. There is usually a new lens design for each launch anyway, so I don't see a problem in using a curved sensor in a new project. I only see advantages: more zoom, more brightness, or both.

A curved sensor would not reduce vignetting, or improve peripheral illumination from lenses. That is dictated by the lens design itself. Light fall-off towards the edges will be exactly the same whether on a flat sensor, or elevated by a few millimeters on a curved sensor. More modern lens designs mitigate the need for such sensors. Not to say they are useless, far from it. Matching a telescope to a specific curved sensor should make coma a thing of the past.

There is vignetting, which reduces peripheral illumination. There is also the cos^4 law: for a nonzero chief ray angle (the case at the edges of the frame for most camera lenses), the beam arrives obliquely at the image plane, so there is a loss of image intensity.
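This ideal cos^4 roll-off is easy to sketch numerically; a minimal illustration (real lenses can fall short of this best case):

```python
import math

def cos4_relative_illumination(field_angle_deg):
    """Ideal relative illumination from the cos^4 law for a beam
    arriving at the given angle off the optical axis (1.0 = on-axis)."""
    return math.cos(math.radians(field_angle_deg)) ** 4

# Best-case illumination falloff at increasing chief ray angles
for angle in (0, 10, 20, 30):
    print(f"{angle:2d} deg -> {cos4_relative_illumination(angle):.2f}")
# 0 deg -> 1.00, 10 deg -> 0.94, 20 deg -> 0.78, 30 deg -> 0.56
```

The 30° value shows why high chief ray angles alone cost the corners nearly a stop of light, even before any lens vignetting is considered.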

Coma is not directly impacted by a curved sensor. Petzval field curvature is the only aberration mitigated directly. Freeing its correction may allow correction of e.g. coma or astigmatism, but this is not true for telescope design forms with less than 3 curved mirrors.

There is also light fall-off caused by the increasing angle of light on the microlenses towards the edge of the sensor. This was a very big deal with DSLRs 10 years ago and was mostly mitigated by offset microlenses, but a curved sensor would certainly help with that.

As it has been mentioned in other comments, curved sensors don't make sense for ILCs or zoom lenses. This is where I was coming from. In a known, invariable focal length geometry, this makes perfect sense to match the optics with a specific curved sensor.

As far as vignetting goes, yes, microcells help reduce it to some degree. One can go to DxOMark and compare the same lens on different bodies, and sometimes see vastly different amounts of vignetting. But the fact remains that a lens will always project some degree of vignetting onto the sensor, and curving the edges will do nothing to increase light. The only remedy is perhaps custom lens profiles built into cameras, with signal amplification applied accordingly at the analog stage. What Canon and Nikon do with lens profiles is likely post A/D conversion, so it induces a certain amount of posterization around the edges. I'm not knocking this technological accomplishment, I just want people to be better informed about vignetting.

Alberto, you seem to be assuming that lenses always achieve the best case relative illumination (RI) set by the cos^4 law. This is not the case.

Maybe high-end SLR lenses get close, but for many applications, we fall far short. Mobile phone lenses are a prime example. Many of the modern mobile lenses (where the manufacturers have been driven hard to reduce TTL) have RI around 35%.

That's not the whole story, because even with microlens shift (I guess that's what you mean by "microcells") the sensor still has some fall off in sensitivity for the high CRAs used in mobile.

Thanks for your insights Jon, I'm not an optical engineer, mostly a technically inclined shooter. I definitely see the benefits of this technology, and I hope to see it applied to a future product purchase someday :)

@Jon Stern, by design cell phone lenses don't have any vignetting and have a constraint that the chief ray angle is below, say, 30 degrees. The system RI may be ~35%, but half (or so) of that is due to the angular dependence of the sensor or microlens's response.

For a start, vignetting isn't the same as RI. Vignetting really means something blocking light rays (whether chief or marginal rays), whereas relative illumination is the characteristic intensity roll-off inherent in a particular lens design and manufacturing process (for the latter, particularly the quality of the coatings).

People tend to interchange the terms, so I'm not sure if you're saying that cellphone lenses have 100% RI, or there's nothing in the module that causes vignetting.

If you're saying there's 100% RI, that's not correct. The shading is corrected in the ISP using a lens shading correction (LSC) block that applies digital gain to the image according to a per-module calibrated shading table.

You don't see the RI and sensor roll off because it's hidden. In low light it will show up as noisier corners.

@Airydiscus, with microlens shift, the sensor roll-off with CRA is mitigated. I won't tell you specific numbers as I'm covered by an NDA with my last employer (a CMOS image sensor manufacturer), but the sensor roll-off (thanks to microlens shift) is a smaller effect than the lens RI.

Having worked in the mobile phone sensor and module business for 6 years, I can tell you that I'm intimately familiar with this stuff. I've seen more mobile lens specs than I care to remember, and I've built and characterized dozens of mobile modules.

As for the max CRA, that's now significantly above 30°. Before back-side illumination technology, it was capped at about 28°, but with BSI and deep trench isolation (which helps control crosstalk) CRAs as high as 34° are relatively common.

I did not say RI and vignetting are the same. I am saying that in the optical design, the lens is constrained to have no (or very little) vignetting. You cannot block the chief ray in any optical design program and still evaluate anything at a given field point, as the field is explicitly located by that ray.

The efficiency of a pixel (and also the microlens) varies as a function of angle of incidence. This is the source of the CRA spec on cell phone lenses. For an f/2-ish lens, the lower rim ray will be at about 50 degrees, which ends up contributing little to the final image intensity as it is not efficiently collected. This is an issue not well solved by microlens shift, as there is not enough range in the shift of a microlens to significantly trade increased AoI for the upper rim ray in exchange for reduced AoI in the lower rim ray.
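As a rough paraxial sketch of that rim-ray geometry (the 34° CRA and f/2 figures are taken from this thread, not from any datasheet):

```python
import math

def marginal_ray_angle_deg(f_number):
    """Half-cone angle of the focused beam for a given f-number,
    using the paraxial approximation tan(u) = 1 / (2N)."""
    return math.degrees(math.atan(1 / (2 * f_number)))

cra = 34.0                       # high chief ray angle at the sensor corner
u = marginal_ray_angle_deg(2.0)  # f/2 lens -> ~14 deg half-cone
# The lower rim ray arrives at roughly CRA + half-cone angle:
print(f"lower rim ray ~ {cra + u:.0f} deg")  # ~48 deg
```

This lands in the neighborhood of the ~50° figure above, well past the angles that pixels collect efficiently.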

@Jon Stern, off-topic question: why don't these small 1/2.3" sensors use their excess of pixels for a CFA other than Bayer, or e.g. for polarization sensing? We could have better colors with 9 instead of 3 color filters, and the excess of pixels offsets moiré.

@Enginel, polarization is not easy to achieve at the pixel level. It is an area of research though.

It's difficult to do better than the standard Bayer pattern. Attempts to do so have rarely gone far.

The two green diagonal pixels together with a red and a blue end up being a good approximation of the human photopic response. If you try to add more colors, you get a lower spatial sampling frequency in red, green and blue, which would result in more moiré, not less.
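The sampling-density argument can be made concrete by counting filter coverage in a repeating CFA tile; the nine-filter tile below is purely hypothetical:

```python
from collections import Counter

def coverage(tile):
    """Fraction of pixels assigned to each filter in a repeating CFA tile."""
    counts = Counter(tile)
    return {color: counts[color] / len(tile) for color in counts}

bayer = ["G", "R", "B", "G"]              # 2x2 Bayer tile: 50% G, 25% R, 25% B
nine_color = [f"C{i}" for i in range(9)]  # hypothetical 3x3 nine-filter tile

print(coverage(bayer))       # {'G': 0.5, 'R': 0.25, 'B': 0.25}
print(coverage(nine_color))  # each filter covers only ~11% of the pixels
```

Each extra filter lowers the spatial sampling frequency of every channel, which is why adding colors tends to worsen moiré rather than cure it.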

@Jon Stern I'm talking about these modern 18-21MP sensors at 1/2.3" size. E.g. the Canon SX730 HS is 21MP and the lens is rather slow, f/6.9 at the telephoto end (probably even a rounded value, so it's slower still). Suppose the lens gives best IQ at a 0.5 or 0.6 micron wavelength instead of 0.55; then we could improve IQ by picking data from the corresponding color channel.

This would benefit fixed-lens cameras the most. But it will never come to fruition, as the camera industry refuses to give us what we want: an affordable, small, simple APS-C compact with a 35mm F2 FF-equivalent lens, a PASM dial, fast AF, fast startup, and Wi-Fi with the ability to transfer raw files. I would never need anything else.

Firstly, it's not what "we" want, it's what you want. Secondly, you want a perfect small camera AND you want for it to be affordable. This is not how things work in the real world, I'm sorry. They "refuse" to give you that, because producing good stuff isn't just a matter of want. It costs money and requires resources.

I want a 70mm or 85mm FF-equivalent camera as it happens, and they're never going to make that. Well, actually Sigma already does with the DP1, DP2 and DP3, but you have to deal with the Foveon compromises with those.

The sensor is 32x24mm, so why not call it that instead of the inaccurate "Full Frame" which means 36x24mm — except when it means 54x42mm for medium format fans!

By the way, 32x24mm has the advantage that it can be made without on-chip stitching, with the needed fab equipment having a maximum field size of 33x26mm. And the squarer 4:3 makes more sense for astronomy.

There is a secret Air Force telescope I know of for a fact (I cannot tell you where it is), and it has used not one but a dozen of these large-format curved detectors for a few years now. For consumer products this technology has a lot of promise once the yield of curved sensor production reaches high values, and thus becomes cheap. It can significantly simplify, for example, cell phone camera design, allowing for even slimmer lenses in those ever-thinning smart 'bricks'. No wonder Sony and some others are putting big efforts into this. Correcting field curvature this way, if sensors can be bent cheaply, has a lot of benefits. It will likely not make it into your DSLR, or even compact cameras, for the stated reasons (the curvature needs to match the lens, so interchangeable lenses are a no-go for this tech), but it has potential for a lot of other devices, which actually generate most of the revenue for digital image recording devices.

So it's not new to astronomy at all. The reason why it is not widespread yet is the relatively high cost due to the low yield of conforming the sensors to curved substrates, and the sensors we use in our cameras are not cheap to start with (being 4k x 4k or even 9k x 9k sensors with a physical size of 50x50 to 90x90mm and costs in the range of $50-250k per sensor). And as Ebrahim stated below, the curvature of the sensor needs to be matched to the Petzval sum (field curvature) of the optics, so every optical design needs its own sensor curvature. However, the appeal for astronomy is that these super large cameras we build can cost in the range of 5-25 million USD, so if a curved sensor can simplify the optical design and save on the optics/alignment fixturing cost even just 5%, that is still a million bucks.

Being an astronomer, and designing optics for spectrographs on large telescopes, I'd like to make a correction to the statement "and now the field of astronomy is jumping into the mix". Astronomers have thought about it for decades, and at least 4 institutions produced curved sensors many years ago (among them the University of Arizona, Lincoln Laboratory, the English company e2v, and a German company called Andanta; the latter even has one as a catalog item).

After reading up on curved sensors for hours, including white papers, ten or so patents, and the latest production work, I can assuredly tell you that this is only advantageous for a single focal length per sensor.

If we design an ultra-wide-angle lens with a correspondingly curved chip, one with the cheapest and fewest elements yet edge-to-edge sharpness (long story short, a really good, cheap, small ultra-wide made possible by the sensor curve),

and then want to design an 85mm portrait lens for that same chip, we'll be working with the constraints of a sensor curve that's ideal for the ultra-wide lens, and it wouldn't give the portrait lens any advantage.

Same story with any zoom lens.

It just doesn't work the way they make it seem ("we will make sensors that make your lenses cheap and small"). Yes you will, but only one lens per sensor will be made smaller, compared to an identical flat sensor with an identical flat-field lens.

So is it the future? No. Not unless they can make silicon chips with photosites and microlenses plus AA and infrared filters and glass that can all bend as one unit when given a numerical value in-camera (hint: they can't, and won't be able to with any current lithography process; maybe in 50 years or more).

So useless? Of course not: this, if cheap enough, could work wonders for fixed-prime-lens cameras like the Fuji X100 or Sony RX1, or mobile phones, or maybe best of all a 16-18mm FF-equivalent pocket camera with quite a large chip and edge-to-edge sharpness. Wouldn't that be cool? It would.

Your point is quite valid. However, whether or not a curved sensor could find its way into a consumer/pro interchangeable lens camera, there MAY be an answer.

I'm thinking about the fact that the latest large astronomical telescopes use separate mirror segments whose curvature can be minutely adjusted to compensate for atmospheric disturbances and so forth. Granted, the devices used to accomplish that are far too large and cumbersome for a handheld camera, but who knows what an ingenious engineer might come up with to overcome that limitation?

Actually, one of the earlier claims (I think from Sony) was that a curved sensor caused mechanical stress that actually favorably altered the electrical properties.... Anyway, I don't see that here. Additionally, it's ok to match focus field curvature in the optics when the sensor is used with only one lens -- or telescope -- which seems to be the plan here. Not a game changer for most camera applications....

No. There is an advantage for a curved sensor straight away. Any curved sensor is better than a flat one. We'll still need curve correction in glass for differences between wide and telephoto. But the correction will have to deal with smaller errors.

drajit: the matching is not of focal length, but of curvature of field (on the sensor side of the lens: the curvature of what would be the image plane if the sensor were flat), which is not directly related to focal length in a complex lens.

@ProfHankD Virtually every time I ran an optimization of a prime lens in CAD with image curvature as a free variable, it tended to choose a curvature radius equal to the FL. Well, maybe this doesn't work for fisheyes, which need to be retrofocus anyway xD

Enginel: Interesting. I'd expect that for simple, roughly symmetric designs around a normal focal length, but not for retrofocus or telephoto ones. In any case, I don't think the sensor can curve much, mostly for manufacturing constraints but also for positioning in the camera. For example, to match curvature to focal length, a non-retrofocus 24mm lens on a 43mm-diagonal FF sensor would have the sensor corners lifted nearly 14mm from the center, interfering with lens mounting and making a focal plane shutter impossible.
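The corner lift is just the sagitta of a spherical surface; a quick sketch, assuming the curvature radius equals the focal length as suggested above:

```python
import math

def sagitta(radius_mm, half_diagonal_mm):
    """Height of the sensor corner above its center (the sag) for a
    spherically curved sensor with the given radius of curvature."""
    return radius_mm - math.sqrt(radius_mm**2 - half_diagonal_mm**2)

half_diag = 43.3 / 2  # the full-frame diagonal is ~43.3mm
for radius in (24, 50, 85):
    print(f"R = {radius}mm -> corners lifted {sagitta(radius, half_diag):.1f}mm")
# R = 24mm -> 13.6mm; R = 50mm -> 4.9mm; R = 85mm -> 2.8mm
```

At R = 24mm the sag is enormous relative to a camera mount, while at R = 85mm it is far more modest, which is why a longer matching radius looks more practical for an interchangeable-lens system.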

We already design different lens types for sensors with an infinite radius of curvature (i.e., flat); the same logic applies to curved sensors, only it should be easier to match a curved focal surface. According to R. Kingslake, field curvature is the most difficult correction to make in lens design [http://spie.org/Publications/Proceedings/Paper/10.1117/12.959113]. If a manufacturer decides to put out a line of cameras with sensors of curvature R, then they will need lenses to match that sensor curvature. That is true regardless of focal length, zoom, et cetera, for both flat and curved surfaces; only flattening the field is more difficult and forces more compromises. Read the literature on lens designs for curved image sensors and you'll see there are improvements in speed, relative illumination, element count, and aberrations.

Just as a zoom in a compact camera is designed to optical excellence with a flattish field on a flat sensor, so too optical engineers could design a zoom that behaves properly on a single curved sensor. Like any zoom, it would be corrected at various points along the zoom range, but for a curved field instead of a flat one. It still has the potential to make lenses smaller and less complex; a flat sensor is the hardest of all to design for.

Simple!? They're fewer, yes, but the types of aspherics used in there are impossible to produce for larger formats. If anything, I'd say it's the other way around: with a curved sensor, compacts with larger sensors will become possible.

While I can see this as an advantage for extremely expensive professional telescope usage, I really don’t see it being that useful for regular photography.

People aren’t thinking this through. Whether for DSLRs or for mirrorless, it will require an entirely new line of lenses, and the old flat-field lenses out now will perform poorly with this sensor.

And what curvature should it have? As was pointed out by indohydra, this is more useful for wide work than anything else. And what will Canon and Nikon, with their vast lens lines, do? Even Sony, with its much smaller selection of lenses, will have a very hard time building an entirely new line, and Sony has several different lines. Smaller manufacturers, whose economic situation is teetering, will have almost no funding for this.

If this were done 10 to 15 years ago, before major digital lens lines were built up, it might have been feasible, but for now, that doesn’t seem possible.

Light passing through wide-angle lenses requires more bending and therefore more glass, so the first real advantage of a curved sensor would be in wide-angle photography, not telephoto. Also, I would think the optimal curvature of the sensor would itself need to vary according to the angle at which light comes through the newly redesigned lens. This means you would perhaps be swapping out sensors instead of lenses, so the comment that this technology would best suit a fixed, non-interchangeable prime lens camera seems right. A professional would then need to carry around a lot of these cameras; maybe they would be lighter or cheaper in the long run. In development is the technology to create a "flexible" curved sensor, which would solve the problem. And, of course, there are other pathways to bending light.

I made the same observations on another website a few months ago and was severely spanked for my shortsightedness. I agree with your post, that unless you have a flexible sensor (not impossible) that can dynamically adjust to the field curvature of the lens, this technology will be most useful in fixed lens cameras. It will be interesting to see where the technology goes.

If production costs are realistic, I can imagine this technology taking hold in enthusiast-grade fixed-lens cameras first. As someone else suggested, it would create a market for adapters to make existing lenses with longer flange distances usable on interchangeable-lens bodies.

If any company nails down a cost-effective mass-production method, then curved sensors are going to have a huge impact on the industry. The ramifications this has for reducing weight and size of lenses, while reducing production cost and increasing quality of the optics, will finally allow smartphone manufacturers to produce cameras capable of rivaling micro four thirds systems.

It's going to be a tough pill to swallow for professional photographers and manufacturers, because legacy glass will no longer be compatible with new curved sensor camera bodies. But I think the potential gains outweigh all opposing arguments. Ultimately because the lenses will be so much easier and less expensive to produce. It won't take long at all for companies like Nikon, Canon, and Sony to rebuild their lens libraries from the ground up.

The problem with something like this is that it's all or nothing: a chicken-and-egg problem. Without new lens lines, the sensor is useless; without the sensor, standard lenses are useless. There is no in-between.

With companies able to produce between 2 and 4 new, or updated lenses a year, it will indeed take years before modest lens lines are rebuilt. Replacing large lines could take a decade, and would cost hundreds of millions.

@Melgross: There's no doubt it will be expensive, but I don't see it taking decades. Applications beyond consumer camera systems, such as medical imaging, automotive/aerospace imaging systems will drive demand for the optics.

There'll be an impact but it won't be as big, considering the sensor must match any specific lens. So 50mm lens calls for a 50mm-spec'ed sensor, etc.

Phones, modern cars, webcams and the like will surely be affected. But for us photographers, only non-zoom, fixed-lens cameras will benefit from it, and that represents a tiny part of the photo market. (But I'm personally eager to see this happen.)

It will definitely make sensors much more expensive. But if, say, $100 cost increase on the sensor can reduce lens cost by $200 and can reduce the weight and size by 30%, then it will be worth the development.

It's like Global Shutter readout replacing mechanical shutters: we all know it's a technological breakthrough in camera-making history to remove the shutter (and mirror) from cameras, but then you look at a drawback and go hmm... It's better: it removes any speed limits, it disrupts the entire DSLR and mirrorless design, and gives near-infinite shutter actuation life. BUT let's wait until the kinks are ironed out and it can be mass-produced without a step back in another area. That other area can be price, or it can be technical (low-light performance in the GS case).

So I see curved sensors giving us tiny ultra-sharp wide glass, small f/1.4 FF primes with edge-to-edge sharpness at kit-lens prices, sizes and weights, high-performance superzooms, and very, very sharp pocketable fixed-lens cameras. Just not now. In the future. I hope I'm there.

I really think the biggest drawback is perfecting the production method so it's economically feasible. Silicon is a rigid structure, so some kind of chemical/industrial engineering magic needs to occur before we see it introduced in the 1" compact bodies.

@Jon Stern: Wafer thinning is one way of doing it, and it's the most common method for front-side illuminated sensor designs, since the transistors are usually layered on the upper 10 microns of the chip with the silicon substrate underneath. Then the bottom of the sensor is planed to a flexible thickness (I've heard 50um does the trick).

But there's also another way which I think Sony/Microsoft are pursuing, one a little more suited to back-side illuminated sensor designs. High-speed flexible silicon was a process identified by a research team at MIT back in 2006. It involves using silicon germanium layers to create an elastic foundation that other silicon, and silicon oxide layers can be stacked on to. The silicon germanium absorbs the stress when the chip is bent after production with little or no planing involved, which usually results in higher production yields since ultra thin silicon can crack under its own weight.

@Josh Leavitt, wafer thinning is used for back-side illuminated sensors. In that case the sensor wafer is bonded to a mechanical support wafer, or in the case of new stacked die sensors, to an ASIC layer, and then thinned.

For visible light CMOS sensors the epi layer is taken down to about 2.5µm. The support wafer is then thinned to bring the whole stack down to 150µm or 200µm (in most cases). To create curved sensors, you really need to thin down to <10µm.

If you're curious about methods of forming curved sensors, take a look at my patent:

@Jon Stern: That's a clever way of curving the sensors. Did Semiconductor Components Industries partner with Sony to test these processes? A stable sensor structure less than 10 micrometers thick would be very, very impressive.

True or false: the amount of curvature on a curved sensor is ideal for a specific focal length, and the further you get away from that focal length, the more the lens still needs to "correct" for the shape of the sensor?

True. Optimal sensor curvature for a prime is about its focal length (there even exist concentric lenses whose only aberration is spherochromatism). For a zoom, I don't know xD. So for an interchangeable-lens system, they might e.g. pick a curvature of 85mm, which would shrink an 85/1.2, a 50/1.2 and the like a lot, and yet telephotos up to <170mm would not suffer.

Someone explain why a 24-70 equivalent compact camera can correct the image to focus correctly along the entire zoom range on a flat sensor, but you can't design a lens to behave correctly on a single curved sensor. It's done on a single flat sensor, isn't it? I believe a flat sensor is the hardest of all.

The curvature is very modest??? Just the fact that you can see it demonstrates that it's huge! For astronomical instruments, a nanometer is a mountain. This sensor has been designed for astronomical instrumentation. Discussing applications for the public makes no more sense than installing a telescope mirror in a bathroom ;-) ESA and ESO are not ordinary customers.

A nanometer is about 1/550 of the wavelength of green light. Not a mountain. They'd happily design such a sensor for general photography if they could. Yes, the curvature is visible to the naked eye, but such a modest curvature could be emulated by making its cover glass into a concave Piazzi Smyth lens instead of a flat one. Now, if the curvature radius were at least 2-3 times the sensor diagonal, and therefore unable to be emulated via a concave Smyth lens, that would be another story.

The largest problem is marketability xD. Lenses designed for a flat field would require adapters to be used on this. Also, the benefit for zoom lenses is quite small. @miggylicious it's experimental production, so of course it is; production becomes cheap as volumes rise.

For normal photography, I don't think the curved sensor is really a big deal. For AP, yep, it is a big deal. Just dealing with issues like coma alone (usually handled with an additional optical element called a field flattener) would be improved. Also, I would think vignetting would be improved.

That said, instead of being just spherically curved, I think being shaped to the design of the specific type of telescope (reflector, refractor, SCT) would be even better, and harder to accomplish.

Coma is caused by the parabolic mirror in a Newtonian reflector projecting onto a flat surface like a sensor, so I'm not sure what you mean. A corrective lens is used to remove coma. https://www.astronomics.com/coma_t.aspx

Also, wide angle is a very relative term. I am guessing your idea of wide angle is based on the more accepted terms used in normal photography.

As far as light hitting the sensor goes, it gets down to the image circle and how the lens is designed as well as the microlens design on the sensor. Both affect vignetting and can be found at both wide angle or extremely long focal lengths.

If the sensor is curved to match the lens, say in a refractor, that should help with vignetting as well as distortion. Even field flatteners don't necessarily provide perfect correction.

On closer examination of this sensor, I see they fully curved the design, though based on telescope optics, it would need to be something specific to a given design.

@Jon Stern e.g. photos? xD Somehow I thought you were a hobbyist writing about this in a blog. BTW, some Sony patents feature a flat cover glass over a curved sensor. If they glued something like that to a curved sensor, there's a chance it could be mistaken for a flat sensor.


You know that feeling when you're already all suited up and out on a spacewalk outside the International Space Station, and only then do you realize you forgot to put the SD card in your GoPro? No? Us either... but one astronaut on the ISS sure does.

From 2015 to 2017, filmmaker Macgregor and his crew spend many months traveling back and forth on the famed Mauritanian Railway—the so-called 'Backbone of the Sahara—to document the grueling journey endured by merchants who regularly travel atop this train. This beautifully-executed short doc is the result.

Synology has added a new 6-bay NAS to its DiskStation+ series, and it's aimed squarely at photographers and medium sized businesses. The DS1618+ can handle up to six 12TB drives, giving it a max capacity of 72TB, or up to 60TB in RAID 5.

Our original gallery for Tamron's new 70-210mm F4 had portraits, slow-moving wildlife and city scenes, but was sorely missing fast action. We remedied that by photographing some motorcycles flying through the air.

This week on DPReview TV, Chris and Jordan prepare for the summer holiday season by putting several popular waterproof cameras to the test. If you're considering a rugged camera for the beach or pool this summer, or if you just want to see what a Chris and Jordan fishing show might look like, tune in.

Soulumination is a non-profit organization that provides life-affirming legacy photography to families facing serious medical conditions, completely free of charge. This video shares the work they are doing.

Fujifilm EU seems to have accidentally leaked an unreleased camera to the masses. The leaked page details a new "X-T100" camera that will share most of its specs with the X-A5, but includes an EVF, deeper buffer, and 3-way tilting touchscreen.

LA-based director and cinematographer Phil Holland of PHFX recently joined forces with Gotham Film Works to create something out-of-this-world. Using a special aerial camera array, Holland shot a flyover of New York City using not one, not two, but three 8K RED Weapon Monstro VistaVision cameras.