Google just made 'zoom and enhance' a reality -- kinda

This glorious TV and movie trope, my friends, may be coming true. The ability to "zoom and enhance" an image that's far too low-res for humans to make out is now way, way closer, thanks to a team of AI researchers at Google.

[Image: The four rightmost columns show how the same low-res image can be interpreted in very different ways. Credit: Google]

So, how did it do that? Google's Brain research team trained a pair of neural networks to do it all by themselves, by feeding them images of celebrity faces (and for a later test, bedrooms).

One network was responsible for figuring out how the pixels might map to a plausible higher-res image, while the second added fine details. Each network drew on what it "knew" about celebrities (or bedrooms) from having analyzed lots of similar pictures.
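To make the idea concrete, here's a heavily simplified toy sketch of that two-network recipe, not Google's actual code: one function stands in for the "conditioning" network that spreads the low-res pixels over a bigger canvas, another stands in for the "prior" network that nudges in fine detail, and the output is sampled one pixel at a time from their combined scores. All function names and numbers here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W, LEVELS = 8, 8, 256  # tiny toy output; the real system works on 32x32 faces


def conditioning_logits(low_res):
    """Stand-in for the first network: turns the low-res input into
    per-pixel scores over intensity levels (here, by naive 2x upsampling
    and favoring each upsampled value)."""
    up = np.kron(low_res, np.ones((2, 2)))  # blow each pixel up to a 2x2 block
    logits = np.zeros((H, W, LEVELS))
    idx = np.clip(up.astype(int), 0, LEVELS - 1)
    for y in range(H):
        for x in range(W):
            logits[y, x, idx[y, x]] = 4.0
    return logits


def prior_logits(canvas, y, x):
    """Stand-in for the second network: scores for pixel (y, x) given the
    pixels generated so far (here, a toy preference for values near the
    left neighbor, mimicking learned fine-detail structure)."""
    logits = np.zeros(LEVELS)
    if x > 0:
        left = int(canvas[y, x - 1])
        logits[max(0, left - 8):min(LEVELS, left + 9)] = 2.0
    return logits


def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()


def sample_high_res(low_res):
    """Sample pixels one at a time, in raster order, each conditioned on
    both the low-res input and the pixels already 'imagined'."""
    cond = conditioning_logits(low_res)
    canvas = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            p = softmax(cond[y, x] + prior_logits(canvas, y, x))
            canvas[y, x] = rng.choice(LEVELS, p=p)
    return canvas


low = rng.integers(0, LEVELS, size=(4, 4))  # a made-up 4x4 "low-res" input
high = sample_high_res(low)
```

The key point the sketch captures is that the details are *sampled*, not recovered: run it with a different random seed and the same low-res input yields a different high-res guess, which is exactly why the same blurry face can be "enhanced" in several different ways.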

No, it's not remotely close enough to identify an exact person -- and remember, the computer is imagining the details, not magically extracting them. Still, it could be another tool (like a police artist's sketch) to help detectives ID their suspect. Perhaps it could help agencies get more value out of satellite images, too.

Unfortunately, a Google rep tells us this was a "one-off research exploration," and that the company has no current plans to use it.

It's also probably worth noting that Google's computers knew that they were looking at faces (or bedrooms) to begin with.

You can read much, much more about precisely how the Google technique works (and how it fooled 11 percent of humans, which Google's researchers claim is a remarkably high number) in the PDF document below.