Latest Technology News

Monthly Archives: June 2012

There’s More To Google’s Artificial Brain Than Finding Cats On The Internet

Google’s got a brain. An actual electronic brain.

The New York Times has news that inside Google’s high-tech R&D “X” laboratory, the search giant has been creating a simulation of the human brain. And rather than programming it with explicit rules, Google’s staff have been exposing it to information from the Net so that it learns organically, a little like the way we humans do. It’s built by hooking together 16,000 processor cores with over one billion interconnections, in a notional model of the roughly 86 billion neurons in a typical adult human brain.

Some AI systems are all about code run on very fast computers, simulating the various layers of thought and decision that make up a mind with statistics or logic. Google’s approach is closer to a natural model: the inspiration isn’t some abstract algorithm for simulating a mind, but a replica of a brain’s structure that is simply exposed to raw information. And Google has lots of information at its disposal.

In Google’s case there was no complex training regime: the team simply exposed the brain to around 10 million random digital pictures, extracted as thumbnails from YouTube videos, and let it do its own thing, adjusting the signals from some neurons up and down and strengthening or weakening some of the connections between them. It’s a concept well known to science fiction; Douglas Adams even used it in The Hitchhiker’s Guide To The Galaxy to describe the Deep Thought supercomputer, “which was so amazingly intelligent that even before the data banks had been connected up it had started from I think therefore I am and got as far as the existence of rice pudding and income tax before anyone managed to turn it off.”
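To get a feel for what “strengthening and weakening connections” means, here is a deliberately tiny sketch of one classic unsupervised update rule (Oja’s variant of Hebbian learning) for a single artificial neuron. This is an illustration of the general idea only, not Google’s actual system; the pattern, learning rate, and four-input neuron are all made up for the example.

```python
import random

def hebbian_step(w, x, lr=0.1):
    """One unsupervised update: connections that fire together strengthen.

    Oja's rule nudges the weights toward inputs that correlate with the
    neuron's own output, while the -y*w term keeps them from blowing up.
    """
    y = sum(wi * xi for wi, xi in zip(w, x))  # the neuron's response
    return [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]

random.seed(0)
w = [random.uniform(0.01, 0.1) for _ in range(4)]  # weak random connections
pattern = [1.0, 0.8, 0.0, 0.1]  # a recurring "feature" hidden in the data

# Show the neuron many noisy copies of the pattern -- no labels, no teacher.
for _ in range(200):
    noisy = [p + random.gauss(0, 0.05) for p in pattern]
    w = hebbian_step(w, noisy)

# After mere repetition, the weights have drifted toward the repeated
# pattern: the connection for the strong input now dominates the dead one.
```

Google’s network used vastly more neurons, layers, and a more sophisticated learning procedure, but the flavor is the same: repetition alone, with no labels, reshapes the connections.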

Google’s brain, more or less undirected, developed through this process of repetition a “concept” of human faces and the different parts of a human body from these images, and also a concept of cats. “Concept” here means a fuzzy, ill-understood pattern that the network can use to categorize a new image it has never seen before, based on its previous learning. The cat concept was a surprise to the researchers, but given that YouTube is a skewed data set, and that we humans do love Lolcats and their like, perhaps it was inevitable.
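Using a learned “concept” to categorize an unseen image boils down to asking which stored pattern the new input resembles most. A minimal sketch, with entirely made-up three-number feature vectors standing in for whatever the network actually learned:

```python
def cosine(a, b):
    """Similarity between two feature vectors, ignoring their magnitude."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def categorize(features, concepts):
    """Return the learned concept whose pattern best matches the input."""
    return max(concepts, key=lambda name: cosine(features, concepts[name]))

# Hypothetical learned patterns -- the real ones live in billions of weights.
concepts = {
    "face": [0.9, 0.1, 0.2],
    "cat":  [0.1, 0.8, 0.3],
}

categorize([0.2, 0.9, 0.2], concepts)  # matches the "cat" pattern
```

The real system’s patterns are distributed across its billion connections rather than sitting in a tidy dictionary, which is exactly why the researchers describe the concepts as fuzzy and ill-understood.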

So what Google’s done is develop a very simplified digital simulation of a human visual cortex. Given that such power is usually imagined as belonging to some military research facility, why’s Google trying it?

The answers are many. In one sense, it’s a natural progression from much of the semantic web research Google’s been doing: investigating how best to process and interpret really human, natural-language inputs so that it can deliver even more relevant web search results. Google’s Knowledge Graph is the most recent example of how powerful semantic search can be. The idea is that if you can better understand what someone actually means when they type a query into Google, then you’ve got a better chance of delivering a matched set of answers in the search results.

A more complete artificial intelligence is simply the successor to these systems, because it could make a guess at the meaning of a search term like “how many roads must a man walk down?” far beyond merely matching the words to the famous song, perhaps guessing that the mileage of metalled roads in the U.S. may be useful data, or even engaging in a little discussion about the meaning of life or the stupidity of Homer Simpson. Though this is a frivolous example, think about how you sometimes have to trawl through hundreds of Google search results to find the one you want because searching for it isn’t straightforward. An AI search may well be swifter and more helpful.

But an AI trained like this would also make for an improved image recognition system, and a much more astute voice recognition system. That could turbocharge the usefulness of search using text or imagery on your Android phone. And given what we know of Project Glass, Google’s effort to get us all wearing augmented reality goggles, a future Glass system hooked up to an AI that recognizes what the wearer sees and hears would seem an almost inevitable goal. Smarter AI could also help with Google’s self-driving cars project, perhaps resulting in safer drives or more efficient journeys.

Ultimately you have to wonder if Google’s system could plug into its Siri-like service, rumored to be codenamed Majel, to create a genuinely smart digital personal assistant. Fun though that is, we can also guess that Google would most likely use an AI for its own ends, to best work out what kinds of targeted ads to sell to its users.

Very few companies know how to scale and deploy cloud applications like Netflix (NFLX), the ginormous movie streaming site. And now it’s making some of that cloud management expertise available to the masses via GitHub.

On Monday, the company open sourced Asgard, a Grails and jQuery web interface that Netflix engineers use to deploy code changes and manage resources in the Amazon (AMZN) cloud in a massive way. The technology was named after the home of the Norse gods, but was once known as the Netflix Application Console, or NAC. And it offers some capabilities that the AWS Console does not.

Asgard, for example, helps engineers track the multiple Amazon Web Services components (AMIs, EC2 instances, and so on) used by their application, and manage them more efficiently.
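The core convenience here is grouping loose cloud resources under the application that owns them, something the stock AWS Console of the time didn’t do. A toy sketch of that idea, with invented resource records standing in for what Asgard actually pulls from the AWS APIs:

```python
# Hypothetical inventory -- in reality this data comes from AWS API calls.
resources = [
    {"type": "EC2 instance", "id": "i-0001",  "app": "api"},
    {"type": "AMI",          "id": "ami-01",  "app": "api"},
    {"type": "EC2 instance", "id": "i-0002",  "app": "billing"},
]

def by_application(resources):
    """Group cloud components under the application that owns them."""
    apps = {}
    for r in resources:
        apps.setdefault(r["app"], []).append((r["type"], r["id"]))
    return apps

grouped = by_application(resources)
# grouped["api"] now holds both of that application's components
```

Asgard layers deployment workflows (rolling pushes, autoscaling-group management) on top of this application-centric view; the grouping itself is the foundation.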

As Joe Sondow, the Netflix senior software engineer who leads the project, wrote in the blog: