The question of artificial intelligence and its place in the future has become increasingly prominent in the tech world. Now Google is partnering with Oxford University to develop an A.I. kill switch that could reduce the risk of future dangers.

The Problem With Artificial Intelligence

Experts have been warning about the risks of artificial intelligence for some time now. Stephen Hawking, Bill Gates and Elon Musk, all three heavyweights in the scientific and engineering fields, have been vocal about their concerns. According to them and other prominent members of the technology field, an A.I. runs the risk of one day becoming powerful enough to override human instruction and human will.

Before you scoff at the Terminator-esque scenario, know that the fears are not unfounded. Artificial intelligence is, by its nature, technology meant to simulate human learning, but with the focus, speed and scale of a machine. As the technology progresses, and it is progressing at an incredible rate, the threat of losing control increases.

Human influence itself risks shaping the personality, language and motives of an A.I. We recently saw that in practice with the controversy over Tay, Microsoft's teen girl chatbot.

A Handy Kill Switch

DeepMind, Google's acquired A.I. research division, is teaming up with Oxford University to address this problem. The research team claims that with some special coding, an artificial intelligence could be restricted and limited, removing its ability to disregard human commands.

Yes, it all sounds a little bit like I, Robot, with its Three Laws of Robotics. But if we are going to keep developing these self-sustaining programs that are meant to learn, we have to have some kind of assurance that the technology won't drag us into the mud. However dramatic it might sound, the possibility isn't that far-fetched.

It isn't a matter of the A.I. turning on us, as some people assume. The bigger concern is that a sequence of events will be set in motion that we won't be able to step in and stop. The A.I. will just keep moving forward, using its own circular justification for its actions and ignoring attempts to shut it down.

A kill switch would prevent a machine from completing harmful decisions, such as those that require some human interpretation, and stop accidents from escalating into disasters.
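To make the idea concrete, here is a minimal sketch of an "interruptible" agent loop. All names here are hypothetical illustrations, not code from the DeepMind/Oxford research; the key idea it demonstrates is that a human interrupt forces a safe action and is excluded from the agent's learning record, so the agent never learns that interruptions cost it reward and so has no incentive to resist them.

```python
import random


class InterruptibleAgent:
    """Toy sketch of an interruptible agent (hypothetical example).

    When a human interrupt is raised, a safe action is forced and the
    step is NOT recorded in the learning history, so the agent cannot
    learn to avoid or work around being shut down.
    """

    SAFE_ACTION = "halt"

    def __init__(self, actions):
        self.actions = actions
        self.history = []  # (action, reward) pairs the agent learns from

    def choose_action(self):
        # Stand-in for a learned policy.
        return random.choice(self.actions)

    def step(self, reward_fn, interrupted=False):
        if interrupted:
            # Human override: force the safe action and skip recording,
            # so learning never "sees" the interruption as lost reward.
            return self.SAFE_ACTION
        action = self.choose_action()
        self.history.append((action, reward_fn(action)))
        return action


agent = InterruptibleAgent(["left", "right"])

# Normal step: the agent acts and records the outcome.
agent.step(lambda a: 1.0)

# Interrupted step: the safe action is taken, nothing is recorded.
agent.step(lambda a: 1.0, interrupted=True)
```

The design choice worth noting is the omission, not the override: simply forcing a shutdown is easy, but a learning system that observes shutdowns reducing its reward could eventually learn to evade them, which is exactly the failure mode described above.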

The researchers leading the project are Laurent Orseau of Google DeepMind and Stuart Armstrong of the University of Oxford's Future of Humanity Institute.