Google acquires human-like AI company for $500 million, Skynet is now a real possibility

Continuing its rather intimidating streak of acquisitions, Google has acquired the British artificial intelligence company DeepMind for around $500 million. There is no doubt that this acquisition is linked to Google’s hiring of futurist and inventor Ray Kurzweil, and the string of eight robotics acquisitions that ended last year with the purchase of Boston Dynamics, one of the world’s biggest names in robotics. We would not be surprised if there was also a connection to Google’s acquisition of Nest, Google Glass, and its Calico life-extension project. All the pieces are now in place for a Google-created Skynet and the robotic Judgment Day apocalypse that would surely follow.

Despite the exorbitant $500 million price tag, there's sadly very little public information about DeepMind. In the last few years I have noticed a slightly worrying trend where many acquired companies don't even have a functioning website — and DeepMind is no different. According to our own in-house neuroscientist, John Hewitt, DeepMind appears to be in the business of creating artificial general intelligence (AGI). The co-founder and apparent brains of the operation, Demis Hassabis, has published some papers on AGI.

Google now owns Boston Dynamics' Atlas robot. Imagine if it were equipped with a strong AI. Hello, Judgment Day!

AGI (sometimes referred to as strong AI) differs from conventional AI (weak AI) in that it is capable of performing, and learning from, very general tasks. Most AI (weak AI) is programmed to perform a very specific task, such as decoding house numbers in Google Street View imagery, or playing Jeopardy! in the case of IBM's Watson. AGI, on the other hand, is programmed to solve problems in a much more human way. Where weak AI is usually characterized by speed and accuracy, strong AI is more closely linked to reasoning, planning, self-awareness, consciousness, and communicating in natural language. In other words, if you want to build useful, human-like robots, you need a really good AGI.

Building an AGI, as you can imagine, is rather difficult. Ray Kurzweil, in his 2005 book The Singularity Is Near, speculates that human-level machine intelligence — and ultimately the technological singularity — should be possible sometime between 2015 and 2045, depending on the rate at which computing power grows. There are numerous groups, including IBM, trying to emulate neurons and synapses within supercomputers, in the hope of understanding how we might eventually build an AGI. Ben Goertzel, who has done a lot of research into AGI, is currently writing software (OpenCog) intended to imbue a humanoid robot with the intelligence of a human toddler, but there's no timeline for the (hoped-for) success of this project. (Read: How to create a mind, or die trying.)

At this point, we have absolutely no clue how advanced DeepMind's AGI is. Presumably, if Google saw fit to pay $500 million, there must be something juicy worth acquiring. Furthermore, inside sources say that Facebook was DeepMind's original suitor, but for some reason the deal fell through, allowing Google to step in. Hopefully it won't be too long before we see the fruits of Google's recent robotics and AI acquisitions, but who knows — it wouldn't be that surprising if we never hear about DeepMind again. $500 million is a drop in the ocean of Google's estimated $60 billion cash reserve.

As an interesting aside, according to two people familiar with the deal, Google agreed to set up an "ethics board" that will govern how the company can and can't use DeepMind's technology. DeepMind pushed for this arrangement. We don't have any more details at the moment, but presumably we're talking about a group of people who will try to prevent Google from turning into Skynet. Yes, Google might become the first big corporation to enact Isaac Asimov's Three Laws of Robotics. The three laws are: 1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) a robot must obey orders given by humans, except where such orders would conflict with the first law; 3) a robot must protect its own existence, as long as such protection does not conflict with the first or second law.