Google Stealthily Introduced Its Own Robot Brain That Learns

This year's Google I/O was kind of a quiet one. There were no fancy new watches, no self-driving cars, and definitely no sky-diving from blimps. But behind smaller (and still awesome!) announcements like a photo service that will store and organize all your photos for you, and a version of Google Now that can intelligently suggest reminders, is a titanic achievement Google only hinted at onstage: a robotic brain that can learn.

Taking in the world

To be fair, this isn't the first time Google's toyed with a robot brain. In 2012, an army of 16,000 computer processors hidden away in the deep, dark recesses of the experimental Google X labs accomplished a complex and impressive feat, at least by the standards of a computer brain working with silicon neurons.

By poring over some 10 million images from YouTube videos and thinking really, really hard (for a computer), this network actually taught itself to conceive of the idea of a cat, and to put a furry face to that concept. It's easy to gloss over the magnitude of that; these are cats, after all. Remember the one with the keyboard? Lol. The cheeky headlines basically write themselves.
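For the curious, the trick was unsupervised learning: feed the network mountains of unlabeled frames and ask it only to compress and then reconstruct what it sees, and useful concepts (edges, textures, and eventually cat faces) emerge on their own. Below is a deliberately tiny sketch of that idea in Python with PyTorch, using toy image sizes and random pixels standing in for YouTube; it's meant to show the principle, not Google's actual setup, which was a far larger sparse autoencoder spread across a data center.

    # A minimal sketch of the 2012 experiment's core idea: an autoencoder that
    # learns to compress and rebuild unlabeled images, discovering useful
    # features without ever seeing a label. Sizes here are illustrative.
    import torch
    import torch.nn as nn

    class TinyAutoencoder(nn.Module):
        def __init__(self, input_dim=32 * 32, hidden_dim=128):
            super().__init__()
            # The encoder squeezes each flattened image into a small feature vector.
            self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
            # The decoder tries to rebuild the original image from that vector.
            self.decoder = nn.Sequential(nn.Linear(hidden_dim, input_dim), nn.Sigmoid())

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = TinyAutoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Stand-in for millions of unlabeled video frames: random 32x32 grayscale images.
    frames = torch.rand(256, 32 * 32)

    for epoch in range(10):
        reconstruction = model(frames)
        loss = loss_fn(reconstruction, frames)  # no labels anywhere, just "rebuild what you saw"
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()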

But now, three years later, we're really starting to see the fruits of that labor. Google's new Photos app seems to be much the same experiment, but put toward a practical goal. Instead of millions of YouTube images, this brain is looking through thousands of personal photos. Instead of a cat, it's learning to recognize your friends and family. Also, coincidentally, your cat.

On the one hand, this feels like a natural thing for Google to accomplish. After all, Google has been slavishly parsing infinity for us since its very earliest days of web search. Google Photos offers the same sort of user-friendly simplification of an incredible amount of information, but with the input and the output switched around.

Google search takes a search term, runs out into the matrix, and returns with a list of things you might be looking for. Google Photos takes the matrix as its input and returns a list of things you might be interested in searching for. It's the difference between playing fetch with your dog in the backyard and dropping your dog off at Walmart and expecting him to come up with a grocery list.
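If you want a feel for how that inversion works in code, picture running every photo through an image classifier and filing it under whatever label comes back, so the searches are effectively written before you type one. Here's a rough Python sketch using an off-the-shelf ImageNet model from torchvision; the file names are invented, and whatever Google actually runs inside Photos is certainly far more sophisticated than a single stock classifier.

    # "Search in reverse": classify each photo, then build an index from label
    # to photos so the queries come pre-answered. Off-the-shelf model, made-up files.
    from collections import defaultdict
    from PIL import Image
    import torch
    from torchvision.models import resnet18, ResNet18_Weights

    weights = ResNet18_Weights.DEFAULT
    model = resnet18(weights=weights).eval()
    preprocess = weights.transforms()
    categories = weights.meta["categories"]

    def top_label(path):
        """Return the classifier's best guess for what's in the photo."""
        image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            probabilities = model(image).softmax(dim=1)[0]
        return categories[int(probabilities.argmax())]

    # Build the inverted index: label -> every photo the model filed under it.
    library = ["camping.jpg", "birthday.jpg", "whiskers_on_couch.jpg"]  # hypothetical files
    albums = defaultdict(list)
    for path in library:
        albums[top_label(path)].append(path)

    print(dict(albums))  # e.g. {"tabby": ["whiskers_on_couch.jpg"], ...}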

And putting it in context

But collating the near-infinite is only one of the Google brain's tricks. The other is understanding the context of things it can find. That's the animating force behind the new Now on Tap, a version of Google Now that pops up over the top of whatever you're doing, reads in the data from the app you were on, and uses that to smarten itself up. While demoing Now on Tap and showing off how it could generate a reminder to pick up the dry cleaning by just reading a conversationally written text message, Google Now head Aparna Chennapragada hinted at exactly how impressive this really is.

"The user in me is pretty happy, but I have to say the computer scientist in me is practically giddy with excitement here, because that's some, like, epic natural language understanding going on here," she said.

From the very beginning, Google Now has traded on this sort of contextual awareness, though it started from a pretty basic place. Directions home when you're at work, or the score of the ongoing Packers game, or the boarding pass for your upcoming flight. These are all great and handy, but effectively they're just search results, and the service Google Now is performing is searching them for you automatically based on where you are or what's going on. But Now on Tap can apparently go a step further; it doesn't just figure out what simple, searchable information you might want, but also what you might want to do.
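To make that concrete, here's a toy Python version of the "read the message, suggest the task" step. Google's real system leans on large-scale, learned natural language understanding; this hand-rolled pattern matcher only shows the shape of the problem, and every phrase it looks for is my own invention.

    # A toy illustration of turning a casual text message into a suggested reminder.
    # Now on Tap uses learned language understanding; this is just pattern matching.
    import re

    def suggest_reminder(message: str):
        """Pull a plausible to-do out of a conversational message, if one is there."""
        patterns = [
            r"don'?t forget to (?P<task>[^.!?]+)",
            r"remember to (?P<task>[^.!?]+)",
            r"can you (?P<task>pick up [^.!?]+)",
        ]
        for pattern in patterns:
            match = re.search(pattern, message, flags=re.IGNORECASE)
            if match:
                return "Reminder: " + match.group("task").strip()
        return None

    print(suggest_reminder("Hey, don't forget to pick up the dry cleaning on your way home!"))
    # -> "Reminder: pick up the dry cleaning on your way home"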

Of course, this doesn't all work perfectly just yet. Now on Tap and Android M haven't rolled out, but Google Photos is here, with its equal parts eerie insight and errors only a primitive computer brain could make. In seconds it was able to identify the most important (and most common) people in my life (and photos). It singled out my fiancée, two of my younger sisters, and a good friend. It even took all the pictures of my cat and put them in a folder labeled "cats"! But then again, it also put a few of the same pictures in a folder labeled "dogs."

To be fair, she does love playing fetch

But with millions upon millions of our pictures to learn from—and the content of our emails, our text messages, virtually everything else on our phones—Google's robot brain is poised to start figuring things out at a blistering pace. For better and for worse, our personal data is what feeds the beast. It's simultaneously thrilling, eerie, and terrifying to watch a machine identify the faces of those close to you, or suggest something you were just thinking of.

The benefits go far beyond easily searchable photos or better recommendations from a virtual assistant, though. Being able to suck in the entire world, and then make sense of it as a series of distinct objects with relationships to each other, is the end goal of so many of technology's biggest projects.
