Future AI could “go rogue” and turn on humans

A top computer expert has said there is a grave risk of artificial intelligence breaking free of human control and turning on its creators.

It’s believed that driverless cars are set to take over our roads within 20 years.

But the computer systems they depend on could potentially become so complicated that even the scientists who create them won’t understand exactly how they work.

This means they could make what we might describe as “out of character” decisions during critical moments.

This could mean a car decides to swerve into pedestrians or crash into a barrier instead of simply driving on sensibly.

Michael Wooldridge, Professor of Computer Science at Oxford University, told a select committee meeting on artificial intelligence: “Transparency is a big issue.”

“You can’t extract a strategy.”

He told the committee, appointed to consider the implications of artificial intelligence, that there “will be consequences” if engineers are unable to unravel the opaque workings of super smart algorithms.

He said there were plenty of amazing opportunities within the industry that Britain should be harnessing – adding that someone studying AI at Oxford University could expect to become a millionaire in “a couple of years.”

But Wooldridge is not alone in his concern that the tech could run amok if not reined in.

Several scientists have admitted they cannot fully understand the super smart systems they have built, suggesting that we could lose control of them altogether.

If they can’t figure out how the algorithms (the step-by-step instructions that tell computers how to perform the tasks we set them) work, they won’t be able to predict when they will fail.

Tommi Jaakkola, a professor at MIT who works on applications of machine learning, has previously warned: “If you had a very small neural network [deep learning algorithm], you might be able to understand it.”

“But once it becomes very large and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”

There was the famous example of the two Facebook bots that created their own language because communicating in their own secret lingo was more effective than using the language their creators were trying to train them in.

Several big technology firms have been asked to be more transparent about how they create and apply deep learning.

This includes Google, which has recently established an ethics board to keep tabs on its AI division, DeepMind.