Google could be the first company to implement Asimov’s Three Laws


Rumors that an ethics board will accompany Google’s recent acquisition of the artificial intelligence company DeepMind Technologies hint at the creation of uniform robotics laws in the future.

Science fiction is positively littered with the idea of artificial intelligence. From Samuel Butler’s first whispers of mechanical consciousness in 1863 to present-day concepts of fully formed digital people like Cortana in the Halo franchise, the basic idea that machines will one day be able to think and act on their own is deeply seated in our global culture.

Google has taken surprising and fantastic strides in the last couple of months that actively encourage imaginations to run wild, with Andy Rubin now leading a rapidly expanding group of acquired robotics companies working toward bipedal robots. All of the pieces are on the board for the kind of humanoid thinking machines found in our favorite childhood books, but with that comes the responsibility of governing those machines and setting standards for their existence.

Artificial intelligence, in a basic sense, is already a significant part of how Google is able to deliver some of the incredible services the company offers today. Products like Google Now, search, and ad delivery networks rely heavily on advanced algorithms. For its next trick, Google has purchased DeepMind Technologies, whose software has demonstrated human-like learning while playing video games.

It’s not been made clear where or how this technology will be used at Google, but the purchase has been coupled with a report that an ethics board is being formed to help create rules for the application of artificial intelligence.

Google has plenty of projects that an ethics board for artificial intelligence could have a hand in. As a contractor to the Department of Defense, as the creator of self-driving vehicles, and as the glue bringing together dozens of the brightest minds in robotics, Google has more than a few applications that will require rules before the general public is comfortable with them. Almost more important than how this newly acquired technology is used on public-facing projects is how it is used internally at Google, which is a big part of why the ethics board would need to be composed of employees familiar with Google’s inner workings rather than outside sources.

The next few steps for Google are important ones, because right now the company is at the forefront of these technologies. The rules created for the use of artificial intelligence, in whatever context Google decides to apply them, may start out as specific to Google, but the company’s significant global influence could quickly see them become standards as other organizations move into the same areas. That puts Google in a unique new position of power as an authority in artificial intelligence, and could easily make it responsible for the first implementation of the Three Laws of Robotics.

And, of course, Isaac Asimov’s Three Laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
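Purely as an illustrative sketch (this reflects no actual Google or DeepMind system, and every input flag below is hypothetical), the strict precedence among the three laws can be expressed as a rule check that evaluates a proposed action’s predicted consequences in order:

```python
# Illustrative sketch only -- not any real Google/DeepMind system. The Three
# Laws are modeled as a strict priority ordering over a proposed action's
# predicted consequences; each boolean flag is a hypothetical input.

def allowed(action: dict) -> bool:
    """Return True if a proposed action is permitted under the Three Laws."""
    # First Law: a robot may not injure a human, or through inaction allow
    # a human to come to harm. This check overrides everything below it.
    if action["injures_human"] or action["allows_harm_by_inaction"]:
        return False
    # Second Law: obey human orders, except where obeying would conflict
    # with the First Law.
    if action["disobeys_order"] and not action["order_conflicts_with_first_law"]:
        return False
    # Third Law: self-preservation, yielding to both higher laws.
    if action["endangers_self"] and not action["self_risk_required_by_higher_law"]:
        return False
    return True

# A harmless, obedient, risk-free action is permitted:
print(allowed({
    "injures_human": False,
    "allows_harm_by_inaction": False,
    "disobeys_order": False,
    "order_conflicts_with_first_law": False,
    "endangers_self": False,
    "self_risk_required_by_higher_law": False,
}))  # True
```

The point of the sketch is the ordering: each `if` acts as a veto that the rules below it can never override, which is exactly the hierarchy Asimov’s wording encodes.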