Your HTC One M8 isn't a Lytro, but that doesn't mean you can't create some neat-looking effects

Background defocus effects are the hot new feature for smartphone cameras this year. Sony, Nokia, LG, Samsung and HTC are all offering DSLR-like bokeh effects in photos through software trickery. These effects have even been likened to the Lytro, the first mainstream light-field camera, which lets users refocus their photos after capture.

Instant after-the-fact refocusing from a smartphone camera isn't truly upon us just yet, however, and each manufacturer's approach to defocusing and refocusing shots after taking them has its individual quirks. That includes the HTC One M8's Ufocus feature, which, while capable of producing some striking depth-of-field effects, has its limitations. And the same goes for single-camera refocusing effects from competitors like the Samsung Galaxy S5, LG G Pro 2 and Sony Xperia Z2.

Let's take a closer look after the break.

Right now, real Lytro-like functionality is way beyond what's possible in a smartphone.

First of all, it's worth underscoring the differences between traditional digital cameras like the one in your smartphone and a light-field camera like the Lytro. Whereas most cameras capture the color and intensity of light at given points, the Lytro is also able to measure the direction of light at a given point. It does this using a matrix of microlenses over its sensor, and from this the device's onboard processor can generate images with different things in focus. Needless to say, that's way beyond what's currently possible in a smartphone camera.

If you want to emulate this kind of effect on a phone, you need to work around the fact that you've only got a traditional lens-and-sensor combo — not to mention a very small amount of space in which to work. The latter eliminates the possibility of creating DSLR-like bokeh effects the old-fashioned way. But fortunately what a high-end smartphone lacks in fancy optics it usually makes up for in processing grunt, meaning computational photography can pick up some of the slack.

Sony, Samsung and Nokia all combine multiple shots into one 'refocusable' image.

The most common approach to defocusing or refocusing photos on a phone is to combine exposures with different focus points between the foreground and background — and this is what Sony, Samsung and Nokia do. (It's similar to the way that HDR shots combine longer and shorter exposures into the same image.) The camera app then does some number crunching and combines these far and near-focused exposures into a single interactive image, while perhaps overlaying a tasteful blurring effect over areas of the photo deemed to be out of focus.
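The merging step described above can be sketched in a few lines. This is a rough illustration, not any manufacturer's actual pipeline: it assumes a stack of grayscale exposures, measures per-pixel sharpness with a simple Laplacian, and builds an "all in focus" result by keeping each pixel from whichever exposure renders it sharpest. The function names are made up for the example.

```python
import numpy as np

def sharpness(img):
    # 4-neighbour Laplacian magnitude as a crude per-pixel sharpness measure:
    # checkerboard-like detail scores high, smooth (defocused) areas score low.
    lap = np.zeros(img.shape)
    lap[1:-1, 1:-1] = np.abs(
        img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:]
        - 4.0 * img[1:-1, 1:-1])
    return lap

def all_in_focus(exposures):
    # exposures: list of 2-D float arrays, each focused at a different
    # distance. For every pixel, keep the value from the exposure where
    # that pixel is sharpest.
    stack = np.stack(exposures)                       # (n, h, w)
    best = np.argmax([sharpness(s) for s in stack], axis=0)
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

A "refocus" tap works the same way in reverse: instead of taking the sharpest exposure everywhere, the app keeps the exposure whose focus distance matches the tapped region and leaves the rest soft.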

Sony's Background Defocus app is the most limited of the bunch, doing exactly what the name suggests and keeping the foreground in focus — from there, you can add various levels of blur to the background. Samsung's selective focus mode on the Galaxy S5 is — well — more selective. After rapid-firing a bunch of exposures, the GS5 lets you choose to focus on the background or foreground, or enable pan focus, which aims to keep the entire shot in focus.

Nokia is the only smartphone maker offering an easy way to share 'refocus' photos.

Meanwhile Nokia has a more Lytro-like setup, at least from the perspective of the images it produces. Like Samsung and Sony, the Nokia Refocus camera app for PureView devices captures images with varying focus points over a couple of seconds, then lets you tap anywhere on the image to "refocus" it. (Or tap "all in focus" to bring everything back in focus.) You're still somewhat limited by the finite number of exposures that are captured, but it does give you more fine control over the focus point than Samsung or Sony's alternatives. The "color pop" feature also lets you highlight certain objects while desaturating the background.

What's more, Nokia is the only smartphone maker offering an easy way to share "refocusable" photos on the web and through social networks. (Though it'll be interesting to see whether HTC's Zoe cloud sharing app includes anything like this when it launches.)

The Nokia-Samsung-Sony method has its advantages — you get a realistic blurring effect because defocused areas are based on exposures that are actually out of focus. But the main disadvantage is that it's painfully slow compared to normal smartphone photography — taking multiple shots with different focus points and then waiting for them to be processed into one image takes time. These kinds of shots require patience and are highly sensitive to movement of any kind.

That's where the HTC One M8's Duo Camera and its Ufocus effect come in. Instead of waiting around to combine multiple exposures into a single image, standard shots on the M8 combine an image from the main camera with depth information from the second camera. (With two cameras, the phone can easily judge depth.) You don't need to do anything special to get that depth information — if you're in Auto mode, it's always captured and saved alongside the image data in the JPEG file.

There's actually nothing out of the ordinary about the hardware of the M8's secondary camera; in fact, Anand Lal Shimpi of AnandTech reports that it's basically just the front-facing camera from the old HTC One (M7). (That would explain why Duo Effects aren't available in some situations, like low-light and macro shots.)

In any case, with two cameras working in tandem it's possible to determine how far away each point in the photo is, and apply effects to what you calculate is either the foreground or background — and that's what happens when you use Ufocus, Foregrounder or any of the M8's other Duo effects. With Ufocus, for instance, HTC isn't combining in-focus and out-of-focus areas, just applying a blurring filter over certain areas of the photo.
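The Ufocus-style approach can be sketched as follows. Again, this is a toy illustration rather than HTC's actual processing: it assumes one real exposure plus a per-pixel depth map, keeps pixels near the chosen focus plane untouched, and pastes in a blurred copy everywhere else. Note how it mirrors the limitation described below — there is no second, differently focused exposure to fall back on. All names here are invented for the example.

```python
import numpy as np

def box_blur(img, radius):
    # Naive box blur via shifted sums -- good enough for a sketch.
    out = np.zeros_like(img, dtype=float)
    n = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            n += 1
    return out / n

def ufocus_style(img, depth, focus_depth, tolerance=0.1, radius=2):
    # Keep pixels whose depth is near the chosen focus plane; everywhere
    # else, substitute the blurred copy. Unlike exposure stacking, this
    # can only ADD blur -- it can never recover sharpness that the single
    # original exposure didn't capture.
    blurred = box_blur(img, radius)
    in_focus = np.abs(depth - focus_depth) <= tolerance
    return np.where(in_focus, img, blurred)
```

Because the blur is synthetic, edge accuracy depends entirely on how good the depth map is — which is exactly where the M8's results sometimes fall apart.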

HTC's Duo Camera setup is ridiculously fast, and lets you apply artistic, depth-sensitive effects later with no added inconvenience. But it too has its weaknesses. As we've mentioned, Duo effects don't work in all situations, and even when the second camera does capture depth information, the edges it calculates aren't always 100 percent accurate. And because you've only got one proper exposure to work from, you can't bring an out-of-focus object into focus. If an object is blurry in the original photo, it's blurry forever. For this reason you're best off using Ufocus and other Duo effects in shots with a clear foreground and background, and not much in between the two. Portraits, for example, are a great use case.

Smartphone photography is one of the areas where mobile tech still has plenty of room for advancement, and the recent trend towards refocus effects is just one way in which manufacturers are using new hardware and software to work around the challenges of getting great photos from a super-thin device that fits in your pocket.

And while Lytro-like wizardry is outside the grasp of mobile photographers for the moment, that doesn't mean we can't blur, shade and filter to our hearts' content.

Nice layman's explanation, though you could've expanded a bit on how DSLR and mirrorless cameras create the same bokeh effects with just their optics (the old-fashioned way, as also alluded to)... After all, it's a far more common practice than Lytro anyway. One of the best explanations I've seen uses the tube of paste analogy, see here:

The technical reasons explained there aren't really necessary to understand what creates a shallow depth of field or bokeh effect tho; if you just wanna take it on faith, then it's simply a combination of sensor size (the most lacking bit in smartphone cameras), focal length (most smartphone lenses are also relatively wide, usually near 35mm equivalent), and aperture.
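The commenter's three factors can be rolled into one rough number: the "equivalent aperture", the lens's f-number multiplied by the sensor's crop factor relative to full frame, which indicates comparable depth-of-field potential across formats. The specific numbers below are typical illustrative values, not figures from the article.

```python
def equivalent_aperture(f_number, crop_factor):
    # Full-frame-equivalent f-number for depth-of-field comparisons:
    # a bigger result means deeper focus and weaker background blur.
    return f_number * crop_factor

# Typical small smartphone sensor: crop factor around 7.6 (assumed value).
phone = equivalent_aperture(2.0, 7.6)   # roughly f/15 equivalent
# APS-C camera with a fast prime: crop factor around 1.5.
apsc = equivalent_aperture(1.8, 1.5)    # roughly f/2.7 equivalent
```

The gap between those two results is why a phone's bright-sounding f/2.0 lens still renders almost everything in focus, while even a modest APS-C prime blurs backgrounds easily.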

For those same reasons a cheap P&S camera can't achieve said effects either, since its sensor tends not to be much bigger than a smartphone's, and while you obviously get longer zooms, they're also usually slower or less bright (smaller aperture, larger f-stop number). It's always possible if you compose a picture with enough natural separation between subject and background while using a lot of zoom tho...

The upshot is, fancy lenses aren't the only thing keeping DSLR and mirrorless cameras relevant, but a combination of sensor size and said lenses (lens designs haven't changed much in years). Smartphones are unlikely to overcome either limitation anytime soon; software trickery is alright but ultimately pales in comparison. Still, Ufocus IS fun to play with if you don't have an alternative... Not sure I agree with HTC's decision to drop OIS for it tho.

I would think low light photography is still a priority for the average user over artistic effects, and OIS can help with the former (granted, at the expense of shutter speed). It's kind of huge for taking watchable handheld videos too.

Agree! I've been screaming about this "more megapixels" & sensor size problem for years, but the under-educated public always thinks more is better. Snapshots are about the best you can get from a PINHOLE-sized sensor and sub-par lens in a smart device. Until the smartphone camera starts to approach APS-C-size sensors, software tricks are about all you can do to mimic the bokeh effect of a fast lens (low f-stop). The next advance I would like to see in smart devices isn't more megapixels, but support for saving photos in RAW mode, so people like us who understand photography can use Photoshop, Lightroom or whatever to develop the photo without the heavy post-processing that a lot of JPEGs get in-camera.

I thought RAW shooting was already in the works as part of Google's new camera API, we'll see how it pans out... Larger sensors in phones might remain a niche market for a long time, at least as long as ultra thin unibody devices continue to be the overriding trend.

You can get a large-ish sensor into pretty small devices tho, see the M43 GM1 from Panasonic (it's smaller than many P&S, only with a lens hump for the pancake zoom) or the 1" sensor Sony RX100.

I think Sony made a mistake in designing such large zooms into their QX modules... The concepts are very interesting but the execution isn't; a long zoom on a tiny sensor is no better than a P&S, and even the shorter zoom on the 1" model made it huge. Smartphone users are already accustomed to shooting at a single focal length!

Why not capitalize on that? A QX line that pairs tiny pancake primes with the 1" sensor would get a ton of enthusiast interest IMO. If Panasonic can make pocketable 20mm/1.7 and 12-32mm/3.5-5.6 pancake lenses that cover the larger M43 image circle, Sony could surely do better or at least on par while bolting a 1" sensor on back, plus the necessary BT/NFC/Wi-Fi etc. around it.

The modular approach is well worth exploring further IMO... Cause otherwise at some point you really do reach the limits of what's practical on a device you carry every day (the point where a second device starts making more sense). Maybe Samsung will get the hint, their Galaxy camera range has been pretty uninspiring.

Interesting article Alex, but as another reader commented, for the average user low-light performance is more important for sharable/usable pics. Let's face it, most people will apply a crappy Instagram defocus filter to try and achieve shallow depth of field.

Fantastic write-up putting things into simpler terms. True optics just aren't possible in a smartphone at this point, so software is the answer until we start getting real glass and sensors in there. I have to say I love what the M8 is doing; yeah, it's not going to fix a blurry photo, but what it can do with that depth info is fantastic! Maybe that's the compositor in me though.

Portions of this page are modifications based on work created and shared by the Android Open Source Project
and used according to terms described in the Creative Commons 2.5 Attribution License. AndroidCentral is an independent site
that is not affiliated with or endorsed by Google.