But one of the most fascinating -- and perhaps frightening -- Google X accomplishments has been its creation of one of the world's largest self-learning "unsupervised" neural networks. Consisting of 16,000 computer processors, the array is capable of complex tasks that are considered impossible using traditional algorithms. One such task is finding cute cats on the internet.

As a test of the nascent system, Stanford University Electrical Engineering Professor Andrew Y. Ng and Google fellow Jeff Dean fed the machine 10 million thumbnails of YouTube videos. Without being told exactly what to "look for", the network began to arrange the data hierarchically, pruning duplicate features and grouping similar images together.
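The actual Google system used a very large deep network trained on raw thumbnails, but the core idea -- grouping similar images together without ever being shown a label -- can be illustrated with a toy sketch. Here, synthetic "thumbnail feature vectors" from two unlabeled groups are clustered with a minimal k-means loop (all data and the two seed points are assumed for the demo; nothing below comes from the published system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for thumbnail feature vectors: two unlabeled groups.
# (The real system trained a deep network on 10 million YouTube
# thumbnails; this sketch only illustrates grouping similar images
# without being told what to look for.)
group_a = rng.normal(loc=0.0, scale=0.5, size=(50, 8))
group_b = rng.normal(loc=3.0, scale=0.5, size=(50, 8))
images = np.vstack([group_a, group_b])

# Minimal k-means: the algorithm never sees a label, yet similar
# vectors end up in the same cluster. For a deterministic demo we
# seed the two centroids with one point from each region.
centroids = images[[0, 50]]
for _ in range(10):
    dists = np.linalg.norm(images[:, None] - centroids[None], axis=2)
    labels = dists.argmin(axis=1)
    centroids = np.array([images[labels == k].mean(axis=0)
                          for k in range(2)])

# Every group_a vector shares one cluster id, group_b the other.
print(set(labels[:50].tolist()), set(labels[50:].tolist()))
```

The separation emerges purely from similarity in the data -- the loop is never told which vectors are "cats".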

One example was the cat. Thanks to the wealth of cat videos on YouTube, the cyber-brain eventually arrived at a single dream-like image representing the network's knowledge of what a cat looks like. The network was then able to recognize its favorite thing -- cat videos -- no matter what subtle variations merry YouTubers introduce in their felines' appearance.

The "cat neuron" holds the learned appearance of what a cat looks like.
[Image Source: Jim Wilson/The New York Times]
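The "dream-like" image described above was produced by numerically optimizing the input so as to maximize one neuron's activation. A minimal sketch of that idea, assuming a single hypothetical linear unit with fixed weights (the real network's features were learned, and its units were nonlinear):

```python
import numpy as np

# Hypothetical "cat neuron": one linear unit with fixed weights w.
rng = np.random.default_rng(1)
w = rng.normal(size=64)        # stand-in for learned weights
x = np.zeros(64)               # start from a blank "image"

# Gradient ascent on the *input*, keeping it on the unit sphere.
# For a linear unit w.x, the gradient w.r.t. x is just w, so the
# best bounded input -- the "dream image" -- is w / ||w||.
for _ in range(100):
    grad = w                   # d(w.x)/dx = w
    x = x + 0.1 * grad
    x = x / np.linalg.norm(x)

best = w / np.linalg.norm(w)
print(np.allclose(x, best))    # the optimized input aligns with w
```

For a deep nonlinear network the same loop applies, but the gradient must be backpropagated through the whole model rather than read off directly.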

The significant part, say researchers, is that the network wasn't told what to look for.

Dean comments in an interview with The New York Times, "We never told it during the training, ‘This is a cat.' It basically invented the concept of a cat. We probably have other ones that are side views of cats."

II. Future Systems May Match or Beat Human Brain

Google researchers believe this capability is due to the fact that the network operates similarly to the visual cortex in the human brain. The visual cortex is thought to contain so-called "grandmother neurons", which store key images, such as your loved ones' faces. The system developed an idea of what a human face looks like, though it lacked the specificity of known faces stored in the human visual cortex.

The system taught itself what a human looks like. [Image Source: Google]

He says that although the network learned what a cat looks like and many basic human features, it still has far fewer connections ("synapses") than a human brain. In short, mankind is still winning versus its digital counterpart. Writes the team, "It is worth noting that our network is still tiny compared to the human visual cortex, which is a million times larger in terms of the number of neurons and synapses."
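To put the team's "a million times larger" remark in rough numbers: the network was widely reported to have on the order of one billion connections (an assumed figure, not stated in this article), which implies a cortex on the order of a quadrillion synapses:

```python
# Back-of-the-envelope scale check (the 10**9 figure is an
# assumption based on press coverage of the system, not this text).
network_connections = 10**9
cortex_synapses = network_connections * 10**6  # "a million times larger"
print(cortex_synapses)  # 1000000000000000, i.e. ~a quadrillion
```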

David A. Bader, executive director of high-performance computing at the Georgia Tech College of Computing, though, says that the team's findings indicate that mankind's era of superiority will be short-lived. He comments, "The scale of modeling the full human visual cortex may be within reach before the end of the decade."

In a difficult test of recognizing 20,000 images, the post-learning system performed better than any machine to date. The final accuracy was 15.8 percent, a 70 percent improvement over the previous record-holder.
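The "70 percent better" figure is a relative improvement, which pins down what the previous record must have been:

```python
# If 15.8 percent accuracy is a 70 percent *relative* improvement,
# the previous record was roughly 15.8 / 1.70 ~= 9.3 percent.
new_accuracy = 15.8
previous_record = new_accuracy / 1.70
print(round(previous_record, 1))  # 9.3
```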

The project is now headed out of the top-secret lab and into Google's server farms. Potential applications include improving results in Google's image search and adaptive speech recognition for Android mobile devices.

But Professor Ng has his sights set on a far more ambitious goal -- a machine capable of true learning, developing into a fully sentient digital system. To get there, he'll need the never-ending process of hardware improvement to reach a bit further, and he'll also have to work on the fundamental algorithms.

The Google X system is close, but not quite there. He states, "It’d be fantastic if it turns out that all we need to do is take current algorithms and run them bigger, but my gut feeling is that we still don’t quite have the right algorithm yet."

I'm not attaching much to the "cat" thing. It found a series of related patterns but it ends there.

The term cat has so much more information attached to it. Beyond the objective information like anatomy, behavior, history, population, human interaction, etc., there is each observer's subjective information (countless variables, many working in us right now to make cat seem important).

Treating the human neuron as merely a logic gate multiplied by its synapse complexity is not enough to explain anything remotely close to consciousness. Google's neural net is off by anywhere from ten to dozens of orders of magnitude.

This network is modeling the visual cortex of the brain, not the whole brain itself. Extraneous, peripheral information about cats regarding their history and connection with humans and whatnot is therefore irrelevant in this case, as the system deals with images only -- at this stage anyway, it would seem. Who can say for sure what'll happen in the future, though.

Maybe in a decade we can finally have some proper "agent" type programs that can independently track down information for us in an intelligent manner...