iPhone Xs: How does the variable bokeh effect compare to a real lens?

One of the key new features of Apple's latest iPhones is the ability to adjust the 'bokeh effect' on portrait images, after they've been taken. But, as well as letting you adjust the intensity of the effect, the function has been enhanced to more accurately represent the bokeh characteristics of a real lens, rather than just trying to blur the background.

Every time you shoot an image using the 52mm-equivalent F2.4 portrait camera on the iPhone Xs you have the choice of editing the bokeh effect. This brings up a scale marked in F-numbers. This may sound like Apple just borrowing an interface from the real-world (a process called skeuomorphism), but it goes beyond this: the company says it's modeled the bokeh characteristics to mimic the behavior of a Zeiss lens.
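For readers curious what that F-number scale is actually modeling: in the thin-lens approximation, the blur disc a background point casts on the sensor scales inversely with the F-number, which is presumably why Apple marks the slider in stops. Here's a rough sketch of the relationship — our own toy numbers and function, not Apple's actual code:

```python
def blur_disc_mm(focal_mm, f_number, focus_dist_mm, subject_dist_mm):
    """Diameter (in mm, at the sensor) of the blur disc for a point at
    subject_dist_mm when the lens is focused at focus_dist_mm
    (thin-lens approximation)."""
    aperture_mm = focal_mm / f_number  # entrance pupil diameter
    return (aperture_mm * focal_mm * abs(subject_dist_mm - focus_dist_mm)
            / (subject_dist_mm * (focus_dist_mm - focal_mm)))

# A background point 3 m behind a subject focused at 1.5 m, 58 mm lens:
wide_open = blur_disc_mm(58, 1.4, 1500, 4500)  # 'F1.4'
stopped = blur_disc_mm(58, 8.0, 1500, 4500)    # 'F8'
# The disc scales as 1/N: 'F8' renders 1.4/8 of the blur of 'F1.4'.
```

If Apple's slider follows this model, dragging it from 'F1.4' to 'F8' should shrink the rendered blur by a factor of about 5.7.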

We thought we'd put this to the test: how convincingly does the iPhone Xs resemble a real-world lens? Is the F-number scale anything more than a pastiche? To find out, we shot the Xs alongside the Nikkor 58mm F1.4, mounted on a full frame camera.

iPhone Xs vs Nikon 58mm at F1.4

iPhone Xs image processed as 'F1.4'

Nikkor 58mm at F1.4

Scaling the Nikon image down to the same width, you can see the bokeh is around the right size:

Then, when you look at the bokeh off-center, you'll see it develops an elongated 'cat-eye' effect.

iPhone Xs vs Nikon 58mm at F8

iPhone Xs image processed as 'F8'

Nikkor 58mm at F8

Just as with the real lens, the cat-eye effect diminishes as you 'stop down.' And Apple has given its bokeh a smooth, fairly gaussian look, rather than the slightly bright-edged bokeh that Nikon has produced, being constrained by the limitations of things such as glass and physics.
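The difference noted above — smooth 'gaussian' synthetic bokeh versus the brighter-edged discs a physical lens produces — comes down to the shape of the blur kernel. A minimal illustration with toy kernels (not Apple's or Nikon's actual rendering):

```python
import math

def disc_kernel(radius):
    """Hard-edged circular kernel: approximates optical defocus, which
    renders point lights as uniform (often bright-edged) discs."""
    size = 2 * radius + 1
    k = [[1.0 if (x - radius) ** 2 + (y - radius) ** 2 <= radius ** 2 else 0.0
          for x in range(size)] for y in range(size)]
    total = sum(map(sum, k))
    return [[v / total for v in row] for row in k]

def gaussian_kernel(sigma, radius):
    """Smooth Gaussian kernel: closer to the soft synthetic blur the
    iPhone appears to favour."""
    size = 2 * radius + 1
    k = [[math.exp(-((x - radius) ** 2 + (y - radius) ** 2) / (2 * sigma ** 2))
          for x in range(size)] for y in range(size)]
    total = sum(map(sum, k))
    return [[v / total for v in row] for row in k]

disc = disc_kernel(5)
gauss = gaussian_kernel(2.0, 5)
# The disc is flat all the way to its edge; the Gaussian falls off smoothly.
```

Convolving a point light with the disc gives a flat, hard-edged highlight; the Gaussian gives the soft falloff seen in the iPhone renders.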

Unlike the 'real' camera, the iPhone's sharpness doesn't always drop off smoothly: for instance, it's blurred both shoulders and the subject's scarf, despite the nearer shoulder being in a similar plane to the face.

However, while this doesn't always look natural, the phone is intentionally ensuring that the subject's face remains entirely in focus, which is usually a good thing. And, unlike the $1600 Nikkor lens, it doesn't become a little soft and dreamy when set to 'F1.4.'

Equally, because the iPhone isn't actually changing its aperture, you don't lose light when you want more depth of field. (The portrait camera's actual depth of field is roughly F15-equivalent, so plenty is in focus in the underlying 'native' image.) That means you don't have to worry so much about camera shake or subject movement.

The end result isn't going to convince anyone if they look too closely (the processing has cut-off some of the fine hairs, for instance), but for social media use, it's hard to deny that the effect is impressive. And we have to assume this technology will only get smarter and more powerful in future generations.

Comments

The iPhag skintone makes her look more human than "shape-shifting-reptilian posing as manjawed-women's-studies-professor-about-to-resume-shrieking-about-microagressions" anyway, accuracy be damned. Nothing can help the constipated expression and the missing two inches of cranium height.

Our experience is the opposite: the Pixel 3 Portrait Mode generally produces more believable results, with fewer errors and also the ability to keep an entire body sharp, not just the face. The Pixel 3 Portrait Mode also 'cuts around' subjects (especially hair) better. We'll have examples soon.

I tried the Pixel 3 in store. I basically took a few shots of static, simple items like a table or laptop, and it blurred lines that should have been in focus. I didn't use it on a person, so maybe that was the issue, but I thought: if it can't get a picture of a laptop in focus and blur the background, what hope does a person have? That said, I had the exact same result with the XR. I'm on the way to the shop again, so I might give it another go.

I have a technical question: wouldn't it be possible to use a technology like the one in Microsoft Kinect, which was able to retrieve depth information, as an additional lens/sensor on the phone, so you'd have a depth value for every color pixel? This way you would have a 3D shot of the scene, and calculating a more proper, custom fake bokeh would be easy. No?

There's a lot of research on new sensors for depth information, but they still have low resolution or poor depth accuracy. Good enough to do a much better job than with a standard sensor alone, though. They should be able to do a better job at deducing depth than they show here if they just use simultaneous shots from the phone's multiple cameras.

http://image-sensors-world.blogspot.com/ has the latest sensor research news, e.g. most recent post just now: "Huawei Smartphone to use Sony ToF Sensor for 3D Imaging" (ToF is "time of flight", measures how long the light takes to bounce off the different parts of the scene) - Huawei phone model to be available in the next few weeks, "Besides generating pictures that can be viewed from numerous angles, Huawei’s new camera can create 3-D models of people and objects that can be used by augmented-reality apps, according to one of the people. "
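For reference, the time-of-flight principle mentioned above is simple to state: the sensor times how long an emitted light pulse takes to return, and distance is half the round trip at the speed of light. A sketch (the 10 ns figure is just an illustrative number):

```python
C_M_PER_S = 299_792_458.0  # speed of light in vacuum

def tof_distance_m(round_trip_ns):
    """Distance implied by a time-of-flight echo: the pulse travels out
    and back, so halve the round trip."""
    return C_M_PER_S * round_trip_ns * 1e-9 / 2

# A subject about 1.5 m away returns the pulse in roughly 10 ns:
subject_m = tof_distance_m(10.0)
```

The nanosecond timescales involved are why ToF sensors trade off depth precision against cost and resolution.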

I can see a bright future for Nature Photographers! You just take a picture of a sitting duck and the computer will convert it into a flying eagle! I can't wait. Maybe eventually I do not even have to leave the house anymore ;-)

My overall impression was that something was off with the iPhone shots; I put it down to them looking flat and artificial in comparison to the Nikon. But I'd agree: the subject's face in particular is so bright in comparison that it looks unreal to me. Finer textures and colours are just washed away.

Must get around to testing my new phone's raw shots to see what the real data is like (the AI enhanced camera shots are somewhat 'overemphasised').

Ultimately I actually like the idea of cameras for more DOF and less DOF, different challenges and uses. Of course if the smaller, simpler one could completely ape the larger, more complex camera (indistinguishably) then for almost anyone there'd be no point in carrying the larger one. If my phone could do what the K-50 could then I'd carry something the phone couldn't replicate, like the ME Super...

A lot of manufacturers have a mechanism for doing that. Pentax, Leica, etc. and then you can get WiFi-capable SD cards for a lot of cameras even if the manufacturer offers no "native" solution (note of caution: not ALL cameras are compatible).

It looks like the software-generated Bokeh is becoming more perfect as the cellphone makers refine the technology. Real/optical Bokeh often has onion rings, non-circular shapes, sharp edges, etc., which are undesirable.

In the future, I believe even real Bokeh will be corrected in software on-the-fly, just like the chromatic aberration (CA) of a lens is already being corrected before an image is output to a file (JPG or RAW) in a modern digital camera.

Perhaps camera and lens makers can implement variable/customizable Bokeh effect on a physical dial or in the menu. You can leave the Bokeh entirely real, or you can dial in the effect desired. Best of both worlds.

The problem with these algorithms is that they live in a 2D world, not a 3D one. The phone blurs everything but the face and (most of) the hair. It totally ignores everything else in the supposed plane of focus, resulting in a totally unrealistic sharpness drop-off. Blur is a function of distance relative to the focal plane; the algorithm does not know what a focal plane is, and it just has an x and y direction, no z. The phone needs a dual-pixel AF sensor that uses distance information over the entire field to compute the amount of blur.
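The behaviour this commenter is asking for is straightforward once a depth value exists per pixel: blur grows with the defocus term of the thin-lens model rather than being an all-or-nothing mask. A toy sketch, with a made-up 'strength' constant standing in for the real optical parameters:

```python
def blur_radius_px(depth_m, focus_m, strength=12.0, cap=25.0):
    """Toy per-pixel blur radius: proportional to the thin-lens defocus
    term |1/focus - 1/depth|, clamped to a cap. 'strength' lumps together
    focal length, aperture and pixel pitch (hypothetical value)."""
    return min(strength * abs(1.0 / focus_m - 1.0 / depth_m), cap)

# Subject in focus at 1.5 m, shoulder slightly behind, wall far behind:
face = blur_radius_px(1.5, 1.5)      # sharp
shoulder = blur_radius_px(1.7, 1.5)  # slightly soft, not masked out
wall = blur_radius_px(5.0, 1.5)      # strongly blurred
```

Driven this way, a shoulder just behind the face would be only slightly soft instead of fully blurred, which is exactly the smooth drop-off the article finds missing.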

Totally incorrect. By using two lenses, a depth map is created on the fly, just like our eyes do. The map 'knows', for every point, its x, y and z coordinates, using simple trigonometry (the distance between the lenses is known). Notice that the blur is more intense for points further from the plane of focus. The problem, however, is in the region of the hair. I invite you to read about the algorithm.
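The trigonometry referred to here is the standard stereo-disparity relation: with two cameras a known baseline apart, depth is focal length times baseline divided by disparity. A sketch with hypothetical phone-like numbers, not Apple's actual calibration:

```python
def stereo_depth_m(focal_px, baseline_m, disparity_px):
    """Depth from two horizontally offset cameras. Similar triangles
    give Z = f * B / d: the smaller the disparity between the two
    views, the farther away the point."""
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: focal length 3000 px, lenses 10 mm apart.
near = stereo_depth_m(3000, 0.010, 20.0)  # large disparity -> close
far = stereo_depth_m(3000, 0.010, 6.0)    # small disparity -> distant
```

As later comments in this thread point out, real phones complicate this: the two lenses differ in focal length and the baseline is tiny, which is part of why the resulting depth maps struggle with fine detail like hair.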

Yes, as long as they are producing a depth map they should be able to do the blur correctly, but making an accurate depth map is tough, and getting it accurate enough to detect features like fly-away hair is nigh impossible. I know because we were working on this when I was with Canon. Technology has greatly improved, and of course phones are massively more powerful than cameras. I'm actually quite impressed given this tech is only in its infancy. We will see it in normal cameras down the track once they have enough compute power.

Indeed, hair is problematic. See my comment on "hair masking" between Google Pixel 2 and iPhone 8 portrait mode a little bit higher in this thread. You can see side by side examples of both algorithms, albeit previous versions, Google is far more advanced in hair masking.

@XPloRR: The eye comparison isn't accurate, since the two lenses have different focal lengths on the iPhone. This is like trying to use one eye glued to a telescope, and the other having normal vision, to generate depth. Not impossible, but much less precise and accurate than regular binocular vision. This is why Google's algorithm in the Pixel 3 does better, I think. It uses the same focal length, and the comparison shots are separated by the distance introduced by camera shake.

The other issue is that the separation between the two lenses isn't much. This one dogs Google even more. The trigonometry is much more accurate with greater lens separation, but it introduces the issue of only parts of the fields of view overlapping.

I repeat: Google's bokeh algorithm is far more advanced than Apple's. See my previous posting with side-by-side examples of Apple vs Google. I can only judge on the basis of comparative material.

I think it's important to stay informed and aware of these developments because, inevitably, computational photography will replace many of the optical approaches that we have become used to over the past 50 years.

Phone manufacturers and software developers rarely have any stake in the production of interchangeable lenses. So they won't hesitate to make them obsolete, gradually, with each iteration of their devices and software.

It will take ages to have a fake bokeh close to a real one... As the article mentions, the fine hairs just disappear, blurred off... But if your only goal is to publish on Instagram, then it may be ok...

Just as a point of style, the 'S' following the 'X' in the phone model name should be capitalized. Apple styles this as a small cap, so it's not entirely obvious with the 'S' whether it is 's' or 'S', but it is immediately obvious with the iPhone XR where the 'R' is clearly a capital 'R'. So, it should be 'iPhone XS', with the 'S' capitalized and optionally rendered as a small cap.

Here's a prophesy, DPReview: if you actually care for photography, you're going to regret putting so many "journalistic" eggs in the phone-cloud-computational world. The misanthropic companies driving these "innovations" don't care about the art, society, or cultural value of photography in the least. They're the modern-day equivalent of the Western US's 19th-century timber barons. You think an old-growth forest ecosystem is important? They saw only fields of telephone poles and railroad ties awaiting harvest. Photography as an art or craft or cultural force with personal and social gravity? Pfft: give the dumb lemmings a few "computational" tricks and they'll fall over themselves to hand you every bit of data that defines their lives. "Wait'll we roll out deepfakes, they'll love it!"

Yet, the three name-drops that you mentioned have NOTHING to do with this article or the phone that is in THIS article. Good job there, oh-rocket-scientist!

You think that Nikon, Canon, or Sony care about the "art" of photography? Have these companies lowered their prices in the name of art? Have these companies pushed useless "features" as a way to increase prices? You think ANY of these companies would die for more profits?

Mira has a very valid point. The names she mentions very much have an influence on the modern ways in which our art is shared, or in their case, exploited for endless financial gain by those whose fortunes already defy reckoning by the average worker's mind.

And if Nikon, Canon, Fuji etc. didn't care about photography then they wouldn't run their companies how they do or make the products that they do. They'd just merge mediocre optics into an Android interface and call it a day.

It's hard to understand. DPR is owned by Amazon, which makes money when DPR readers buy the things DPR writes about. But an article like this, touting the advancements in phone camera tech, is not going to drive phone sales, because people are going to buy phones anyway—and they usually don't buy them on Amazon either.

You bring up a great point about DPR's cessation of lens reviews. If they want to make their corporate masters money, they should be reviewing every new lens under the sun! In addition to the cameras.

And it was something TheVerge, 9to5mac, Engadget, and the rest of the phone-tech "press" explored two months ago.

The problem with putting your eggs in the phone basket, Richard--beyond the broader philosophical issues of pimping big tech's bait-and-switch remarket of photography as a mass data collection regime--is that you end up duplicating the coverage of literally every other tech news site who writes about smartphones and big tech. There are lots of them.

Even John Gruber over at Daring Fireball did a quick bokeh test.

(I suppose you're unique in that TheVerge conducted its bokeh-comparison test with a Canon rig?)

John Gruber, on the other hand, doesn't do lens reviews for interchangeable lens camera systems. But then again, neither do you!

Nilay and Dieter at The Verge beat Richard & Co. like a Cherokee drum to appropriately breathless photographic "coverage" of the latest big player smartphones. Not only were they months ahead with the bokeh comparison, but they also had pre-production access to the Google Pixel "night view" software. They had samples and analysis in the can and published long before it was ever mentioned here.

In fact, The Verge and their ilk are eating DPReview's lunch around all of these big-tech photographic initiatives. They had a working prototype of Adobe's "real Photoshop" for iPad in October and were able to offer in-depth impressions of it in use with Vox Media's in-house photography and graphic design crew. Has DPReview even had hands-on time with it yet?

They don't have to agree with me, or even listen to me. Neither do you. Differences of opinion make the world go round. Meanwhile, as a reasonable, thinking human being, you can't expect that what you like represents anything more inherently valid than what I like. My complaints are at least as worthy as your applause.

Sometimes I think DPReview writes gloriously thoughtful things. And I tell them so. Sometimes I think they're dense. And I tell them so. In this case, I think they regard big tech's bait-switch re-engineering of photography with some real shortsightedness. I'm concerned it won't play out in a way that's healthy for our craft or our culture. I'm here to suggest they may care to consider that perspective.

I don't understand this new bokeh or smartphone-vs-camera discussion. I photograph weddings and always ask my clients which photos they like best. And guess what: it's always the photo where I captured the biggest feelings, the authentic moments. They don't care if there is bokeh or not. They don't even care if I photograph their wedding with a camera, a smartphone or a banana. They just want someone who does the job. I know that this here is a technical forum, but it gets a little bit strange.

I agree that this is a technical forum, which is why we see articles and discussions on the minute details of technology. That's fine; however, I hope that photographers remember to use these tech articles as supplementary information to the overall art of photography.

Technology isn't the end game. The end game is to create beautifully composed images, capture emotions, and unique moments.

Focus and depth of field are tools, like composition, exposure or shutter speed. They serve a purpose, as they can be used to draw attention to a subject. The point is, you will always have clients looking for "the authentic moments", but it's also the way you use your tools, and how well they work, that will define how strong their feelings will be towards the picture.

Amazing how people resist progress. Look at it as a photo and not something you are trying to technologically dissect. In that light these look pretty good. In some ways better, such as the better sharpness vs the 58 at f1.4. Sure, you can sit there and look for something 'wrong' with it, and surely you will find it. Many posts here remind me of audiophiles who don't listen to their stereo system for the music, but just try to find something wrong with it so they can spend lots of money later in an ill-fated attempt at perfection. This is the future of photography. Get used to it.

I agree in general, but your mention of the better facial skin sharpness of the iPhone shot vs the 58/1.4... well, that's an old pet peeve of mine. I don't need macro-level detail for human skin, be it from a modern prime or a phone, and I prefer the 58/1.4 shot's level of skin detail here.

Why try to simulate the bokeh in the first place if it's not going to look good?

Why not just accept the composition with the original deeper depth of field and focus on the subject, if the aesthetics of computational photography can't successfully replicate the look of a larger aperture without getting into the "uncanny valley"?

Very shallow depth of field is a relatively modern aesthetic trend. People are mostly obsessed with it because smartphones (and cheap point & shoots) can't produce it, so it's associated with higher end camera gear. Shallow DoF has prestige.

Back when everyone could shoot with a 35mm film camera, shallow DoF wasn't prestigious. Most people tried to get more DoF, most of the time, but were limited by available light and film speed.

This isn't about resisting progress. It's as much a commentary on aesthetic trends and the, uh, shallowness of shallow depth of field...

Yes, it's "amazing" that a 2018 iPhone still hasn't progressed to the image quality level of the 2015 (released) Panasonic CM1.

The bokeh effect with this iPhone Xess is garbage.

"In that light these look pretty good. In some ways better, such as the better sharpness vs the 58 at f1.4."

And note well that this Nikon 58mm lens is not sharp in the centre wide open BY DESIGN--unlike basically every Zeiss lens available for Canikon SLRs. (Right, I think it was a mistake for DPR to use this 58mm lens for comparison.)

"Many posts here remind me of audiophiles who don't listen to their stereo system for the music,"

And the problem remains that iTunes software, garbage DACs in many MacBooks and several (not all) iPods, combined with compressed formats like AAC and MP3, really harmed some good recordings.

"This is the future of photography. "

Yellow rings added to the top of a model's forehead is the future of photography? Is that a medieval iconography "filter"?

This is spot on. A lot of discussions on here remind me of the days when hi-fi "golden ears" would spend hours debating the sound of various speaker cables. Most people don't see, or don't care to see, minor differences, not in bokeh anyway.

Jwilliams - It's the future for some; for others, no. People can and do make choices, and are also secure enough not to fall for the marketing propaganda or feel pressured by popular norms. The cell phone has parachuted into the photography domain with little history, little credibility and a box full of software party tricks in the form of a fool's guide to easy results. Fine if you like that kind of thing, and enjoy contributing to the growing mountains of selfies, food photographs and voyeuristic social reportage. The bit that needs acceptance is that there will always be differences of opinion on this issue; there is room for diversity. Get used to it.

The good thing about Apple's latest implementation is that you can change the virtual DOF easily after the fact. For this shot, f1.4 did horribly and looks too fake. F8 looks pretty good, but not a ton of blur. Maybe if you play around with the aperture slider, f4 is the sweet spot.

Either way, it's not baked in and can be adjusted or turned off afterward.

FWIW, I'm fairly sure that most (if not all) Android implementations have done it that way to begin with.

The 2 year-old firmly mid-range Huawei Honor 8 that I bought for $200 as a quick replacement for a broken phone lets me adjust the blur after the fact. The blur itself is crap, but hey, you pays your money and you takes your choice...

I was familiar with Huawei and Samsung implementations which definitely allow you to adjust the amount of background blur (though I suppose I don't know if Samsung still allows adjustment after capture).

I suppose I figured that if the two OEMs that together hold >40% of the market share did it, it was likely to be a more universal feature. Then again, I suppose an analogy to the dedicated camera market (i.e. Canikon) would pretty quickly put the lie to that kind of thinking!

I don't care for phone photos. I use my big ol' Note 4 camera for note-taking, or Telegramming a price tag in a store to a friend I know would be interested.

So I do not look at phone photo reviews. But I cannot avoid the buzz. And lately what I have been hearing is that phone photos are so good they are about ready to make cameras obsolete. I believed the hype (although I didn't care) until I saw these photos.

I am not talking about the poor fake blur that looks like me 15 years ago trying to learn Photoshop 7. Why is her skin banding when the light is not difficult at all? Are these the phones that are supposed to make cameras obsolete? I really do not see how that is going to happen, and these poorly implemented attempts to imitate real cameras only make it more clear.

Computational photography will displace traditional digital photography from the consumer space. Very soon. Just as the latter displaced traditional chemical photography. The process will be accompanied by holy wars online about what's Better and what's True. But the dust will settle, and 10 years from now a guy with a DSLR will be called a hipster.

Mostly this, yes. I have got to wonder about tele and macro options, though. If computations can make a 500/4 lens, and the lighting problems in macro situations, irrelevant, then yes, it's probably time to throw in the towel; it just seems as if physics has to show up at some point.

@Becksvart There are more interesting things available with computational photography than just simulating what's already available with DSLRs. Likewise, simulating film grain or its color response is not the most interesting part of digital photography. Potentially available are things like altering the point of view in order to move those annoying light poles which always seem to grow out of people's heads, or altering the scene's lighting to taste: direct, soft, morning, and so on. It's not necessarily a matter of altering things in the picture, but the way a scene is represented: composition and lighting, or what distinguishes a good picture from a snapshot.

Well Richard, I didn't say that. (Though you are doing that as well with all these favourite camera and gear listings.) Teach people to look at pictures through a lens so they are not fooled by these cheap effects. It's not just the blurring of hairs; that is just the iPhone being bad at what it does, probably so it can do it faster, because the millennial generation doesn't think waiting for anything is acceptable. It is totally ignoring what a lens does. The limitations of optical glass you are talking about are called character, by the way, and if you don't like character, there are plenty of lenses without it, and then there is always the clarity slider to even things out :p

A green blob in the models's hair is acceptable as a representation of hair colour change when shooting for a blurred background?

And the blurring of the herringbone pattern on the models' right shoulder is a joke in the iPhone sample.

The scarf in the iPhone image, is that supposed to be a joke? How about the yellow ring on the top of the model's forehead that the iPhone added!

I take it that DPR didn't have a Zeiss 85mm Otus or Milvus to try. That would have avoided the over-yellowed Nikon look. And of course, that Nikon 58mm lens is not sharp wide open on purpose, whereas the Zeisses are--though only the Otus would be sharp across the frame.

Now, I would like to have seen what the Panasonic CM1 with its f/2.8 lens and 1 inch sensor could have done with the same model, lighting, and background. (I know, not a smartphone one can buy new any longer, so no potential sales--still the best higher ISO smartphone ever shipped as of late 2018.)

For all of the people complaining about the "quality" of the bokeh: most of the intended audience won't care. The vast majority of these images will be consumed on social media, and the viewing time will be a few seconds at most.

Indeed, thank you. Imagine what the man who said “sometimes I do get to places just when God’s ready to have somebody click the shutter" would make of those arriving in front of a beautiful scene, turn their back to it, and leave as quickly as they came...

The blur modeling isn't the BIG issue. That's fine and dandy. The real issue is LACK of 3D.

The segmentation algorithm (besides being way off) looks like it's taking an "all or nothing" approach, and in the end, even if it achieved a perfect segmentation (which it doesn't), it would produce a 2D picture: there is a focused subject, there is a defocused background, and nothing in between.

I'll have to look more closely. On my iPhone 7 Plus, the first model to offer Portrait Mode, I learned that there are nine fields of depth measured and captured in PM. So I did some tests (e.g., a long sign veering away from the focus point at an angle), and sure enough there was super crisp, just a little less crisp, etc., all the way to totally blurred out. I'm wondering why this dynamic isn't showing up on the latest models.

Apparently they already use a Time of Flight camera in the iPhone X's front sensor suite, but really it's just for detection purposes, and apparently it doesn't have sufficient accuracy for much more than that. They need the dot projector and an infrared camera to actually do the more sophisticated 3D detection.

The fakeness from this is no different than sky replacements or adding light leaks or doing day to night post processing. I use my mirrorless over my phone for my own enjoyment whether or not people can tell the difference. Now if you're doing this professionally I'd assume it's only logical to welcome advancement to tech to either speed up your workflow or cut overhead cost.

The result is surprisingly poor. It looks like the shallow depth-of-field feature is barely usable right now. But the technology is promising and I'm sure it is only a matter of time before phones start convincingly simulating shallow DOF. Thanks for exposing the hype, DPReview!

Did you see The LEGO Movie? All of the bokeh in that is fake, but it looks really good because they spent a lot of time and effort on it, simulating actual cine lenses.

"Real-world lenses exhibit various photographic idiosyncracies, and we wanted to emulate some of these ... in order to avoid a clean look that would indicate that a CG camera had been used. After the trailer for the film was released there were a few comments we noticed where people were authoritatively holding forth about how they "knew" exactly what camera had been used to "film" the "stop motion". Mission accomplished :)"

Seriously! "iPhone processing has cut off nearly half of her head" is more like it. The first-glance impression of the first photo was that the iPhone image looks like it has been cut out with scissors and pasted on top of a uniformly blurred background. A closer look shows it's not only fine hairs that are cut off: everything but the strongest solid shapes of hair is cut off.

The iPhone shot looks like there is a huge amount of wax applied to her hair, which has then been flattened.

It has gone too far. Most people, including the crowd here that goes on and on about it, would not know what bokeh was if it whacked them in the face. 20 years ago, almost no one worried about it. Why today?

Yeah, it looks much better at f/8 than it does at f/1.4. The extreme fakeness and obvious cardboard-cutout factor is reduced, and it gives a subtle separation.

The trouble with bokeh simulation more generally though is that you kind of need to go over-the-top with shallow DoF to make it visible on 5" phone screen. And as a result, it's likely going to look terrible when viewed on any larger screen.

I can shoot my medium format camera at f/11 and get a really beautiful, subtle separation that's obvious on a 24" monitor, evident on a 15" laptop, but invisible on a 5" phone screen. The visual effect and the perceived quality of the bokeh is inextricably tied to the output magnification.

Enlargement factor and viewing distance are inputs to the DOF equation; they are basically proxies for output size. So yes, DOF definitely changes depending on how large or small you view a digital image.

Most of the classic DOF calculators use an 8x10" print held at arm's length as a fixed point of reference.
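Those classic calculators can be sketched directly from the thin-lens DOF formulas, with the circle of confusion acting as the knob that encodes output size and viewing distance (the specific coc values below are illustrative, not tied to any particular calculator):

```python
def dof_limits_mm(f_mm, n, focus_mm, coc_mm):
    """Near and far limits of acceptable sharpness (classic thin-lens
    DOF formulas). The circle of confusion (coc) is where output size
    and viewing distance enter: a bigger print, or closer viewing,
    demands a smaller coc."""
    hyperfocal = f_mm ** 2 / (n * coc_mm) + f_mm
    near = focus_mm * (hyperfocal - f_mm) / (hyperfocal + focus_mm - 2 * f_mm)
    if focus_mm >= hyperfocal:
        return near, float("inf")
    far = focus_mm * (hyperfocal - f_mm) / (hyperfocal - focus_mm)
    return near, far

# Same 58mm F1.4 shot focused at 1.5 m, judged at two output sizes:
small_near, small_far = dof_limits_mm(58, 1.4, 1500, 0.060)  # lax coc (phone screen)
large_near, large_far = dof_limits_mm(58, 1.4, 1500, 0.030)  # stricter coc (big print)
```

Running both shows the stricter circle of confusion leaves a shallower acceptable zone, which is exactly why blur that looks subtle on a 5" screen can look over-the-top on a 24" monitor.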

I was really impressed until I looked at the border between hair and background. Sure, the bokeh looks nice, and the amount of blur is as similar as necessary, but the cutout is woeful and would show up really badly in a print or on a large screen. You lose most of the hair detail at the edges.

With the iPhone shot, you might as well cut out a picture and drop in a completely different background, as that is what it looks like has been done.

One is a photograph and one looks like a still from the upcoming Steve Carell film "Marwen". I see the perimeter of the subject in the phone's file is a jumble of computer-algorithm nonsense. When phone photos become the best they can be, without endlessly obsessing over what they are not, they will be better.

It's terrible. The fact that it doesn't calculate a specific plane of focus is key. It's a pointless technology until that happens. I thought that was supposed to be one of the points of having multiple cameras, but I guess it isn't there yet.


K&R Photographics, a camera store in Crescent Springs, Kentucky, was robbed by armed men, who not only took thousands of dollars worth of camera equipment, but also injured the 70-year-old co-owner of the store.

The new Fujifilm GFX 100 boasts some impressive specifications, including 100MP, in-body stabilization and 4K video. But what's it like to shoot with? Senior Editor Barnaby Britton found out on a recent trip to Florence, Italy.

It's here! The long-awaited next-generation Fujifilm GFX has been officially launched. Click through to learn more about the camera that Fujifilm is hoping will shake up the pro photography market - the GFX100.

We've known about the Fujifilm GFX 100 since last fall, but now it's official: this 102MP medium-format monster will be available at the end of June for $10,000. In addition to its incredible resolution, the camera also has in-body IS, a hybrid AF system, 4K video and a removable EVF.

According to DJI, any drone model weighing over 250 grams will have AirSense Automatic Dependent Surveillance-Broadcast (ADS-B) receivers installed to help drone operators know when planes and helicopters are nearby.

Chris and Jordan are kicking off a new segment in which they make feature suggestions to manufacturers for the benefit of all photographer-kind. To start things off, they take a look at the humble USB-C port and everything it could be doing for us.

The Olympus TG-5 is one of our favorite waterproof cameras, and the company today introduced the TG-6, a relatively low-key update. New features include the addition of an anti-reflective coating on the sensor, a higher-res LCD, and more underwater and macro modes.

The Leica Q2 is an impressively capable fixed-lens, full-frame camera with a 47MP sensor and a sharp, stabilized 28mm F1.7 Summilux lens. It's styled like a traditional Leica M rangefinder and brings a host of updates to the hugely popular original Leica Q (Typ 116) that was launched in 2015.

We've been playing around with a prototype of the new Peak Design Travel Tripod and are impressed so far: it's incredibly compact, fast to deploy and stable enough for the heaviest bodies. However, the price may turn some away.