Neural Photo Editor works like magic thanks to machine learning

Neural Photo Editor is an experimental piece of retouching software from researchers at the University of Edinburgh that uses neural networks to act like Photoshop on steroids. Thanks to machine learning, it can intuitively interpret how a user intends to retouch a photo based on a “contextual paintbrush.” A single brush can change hair color, fill in bald spots, or add a toothy grin.

The process couldn’t be simpler: Users select a color for their paintbrush and the system analyzes that color in context with the image in order to produce an intelligent output. Painting over a subject’s mouth with a white brush, for example, can make a smile larger, while painting with a dark color on a forehead can add bangs.
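The mechanics behind a "contextual paintbrush" can be sketched in a few lines. The following is a minimal toy illustration, not the researchers' actual implementation: the generator here is a fixed random decoder standing in for a trained neural network, and the brush works by nudging the model's latent code so that pixels under the brush move toward the chosen color.

```python
import numpy as np

# Toy "generator": maps a latent vector to a flat grayscale image.
# In the real system this would be a trained generative neural network.
rng = np.random.default_rng(0)
W = 0.25 * rng.normal(size=(16, 64))   # fixed random decoder weights

def generate(z):
    return np.tanh(z @ W)              # image with 64 "pixels" in [-1, 1]

def brush_edit(z, mask, target_color, steps=200, lr=0.05):
    """Nudge latent z so pixels under the brush mask approach target_color."""
    z = z.copy()
    for _ in range(steps):
        img = generate(z)
        # Gradient of 0.5 * ||mask * (img - target)||^2 w.r.t. z,
        # using d tanh(u)/du = 1 - tanh(u)^2.
        err = mask * (img - target_color)
        grad_z = (err * (1 - img ** 2)) @ W.T
        z -= lr * grad_z
    return z

z0 = rng.normal(size=16)
mask = np.zeros(64)
mask[:8] = 1.0                          # the "brush" covers the first 8 pixels

z1 = brush_edit(z0, mask, target_color=0.9)

before = np.abs(generate(z0)[:8] - 0.9).mean()
after = np.abs(generate(z1)[:8] - 0.9).mean()
print(after < before)                   # brushed region moved toward the color
```

Because the edit happens in the model's latent space rather than directly on pixels, the whole image shifts coherently — which is why one brush stroke can produce a plausible smile or hairstyle rather than a smear of paint.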

The software currently works best on images it has generated itself, where the range of possible manipulations is naturally constrained. Through the use of “introspective adversarial networks,” however, an advanced masking system allows it to be applied to existing photographs as well.
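The masking idea can be illustrated with a simplified sketch (this is an assumption about the general technique, not the paper's exact formulation): rather than replacing the photograph with the network's imperfect reconstruction, only the *change* the edit produced is applied back to the original, and only where the mask indicates the user painted.

```python
import numpy as np

def masked_edit(original, reconstruction, edited, mask):
    """Apply only the change the network made, restricted to the mask.

    original:       the real photo
    reconstruction: the network's (imperfect) reconstruction of the photo
    edited:         the reconstruction after the brush edit
    mask:           1.0 where the user painted, 0.0 elsewhere
    """
    delta = edited - reconstruction      # what the edit actually changed
    return original + mask * delta       # untouched pixels stay pixel-exact

# Four-pixel toy example: the brush only touched pixel 0.
original       = np.array([0.20, 0.40, 0.60, 0.80])
reconstruction = np.array([0.25, 0.38, 0.61, 0.79])   # imperfect copy
edited         = np.array([0.90, 0.38, 0.61, 0.79])   # brush raised pixel 0
mask           = np.array([1.0, 0.0, 0.0, 0.0])

result = masked_edit(original, reconstruction, edited, mask)
print(result)   # pixel 0 picks up the edit; pixels 1-3 equal the original
```

The benefit of compositing this way is that reconstruction error in unedited areas never reaches the final image — the real photo survives intact everywhere the brush didn't go.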

The tech-savvy can download Neural Photo Editor from GitHub, but don’t go canceling your Adobe Creative Cloud subscription anytime soon. The tool is very much in its early days and far from perfect. While it produces stable results much of the time, it can still yield bizarre output on occasion, as the video above demonstrates. The video also shows the system working only with very low-resolution images, so for working photographers the tool currently has no practical application.

Still, it is an intriguing look at how photo editing could evolve to become smarter and more context-aware, significantly increasing the speed at which a photo could be heavily retouched. It also represents a practical application of neural networks in personal computing.
