Hawking Says Artificial Intelligence could be Mankind's Worst Mistake

Hawking and others claim artificial intelligence could result in the downfall of mankind. Per the article:

Stephen Hawking has warned that artificial intelligence has the potential to be the downfall of mankind....Dismissing the implications of highly intelligent machines could be humankind's "worst mistake in history", write astrophysicist Stephen Hawking, computer scientist Stuart Russell, and physicists Max Tegmark and Frank Wilczek in the Independent...."One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand," they write. "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."....And what are we humans doing to address these concerns, they ask. Nothing.

Technology is already becoming our master. Many people are subjected to continuous monitoring, "quality measures", and performance improvement schemes, with no actual person to appeal to. I know this from personal experience.

Also, my car locks itself mysteriously without me telling it to. If I leave something in the car, I have to be sure to bring my keys in case it has locked itself. Sometimes it won't let me unlock it: it unlocks, then instantly locks again. It also turns its lights on and off sometimes without input from me. I stand in the garage and wait for it to turn the damn lights off so the battery doesn't wear down. And the computer systems - holy jesus, they can be obnoxious. They are no longer our tools. They are our masters. They have no empathy, no sympathy, no individualization. You will obey!

I wouldn't bet the farm on that. Moore's Law continues to hold in predicting the shrinking of circuitry while processor speed and storage density keep increasing. How much further we have to go to make a neural network feasible is an open question, but there are people out there pursuing it, trust me.
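For anyone who wants the back-of-the-envelope version: the classic formulation of Moore's Law is a doubling roughly every two years, which is just an exponential. A minimal sketch, where the starting figures (a hypothetical ~1e10-transistor flagship chip in 2014) are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope Moore's Law projection: transistor count
# doubling roughly every two years (the classic formulation).
# Starting count/year below are illustrative assumptions.

def transistors(start_count, start_year, year, doubling_years=2.0):
    """Projected transistor count in `year`, extrapolated from a baseline."""
    return start_count * 2 ** ((year - start_year) / doubling_years)

# Assume ~1e10 transistors on a flagship chip in 2014; 20 years out
# is 10 doublings, i.e. a factor of 1024:
print(f"{transistors(1e10, 2014, 2034):.2e}")  # ~1e13
```

Whether the physics cooperates that long is, of course, the whole argument.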

Interesting side note: When I was at The Unbelievers screening in Columbus, I think it was Lawrence Krauss who estimated that a silicon-based simulation of the human brain (an organ that runs on about 10 watts) would need 10 TERAWATTS to accomplish the same result. That, as they say, is a fair amount of juice. Still:
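The gap Krauss described is simple arithmetic, but the size of it is worth spelling out (the 10 W and 10 TW figures are as recalled above, not verified):

```python
# Brain vs. silicon simulation, per the figures recalled above.
brain_watts = 10.0      # rough power draw of a human brain
sim_watts = 10e12       # 10 terawatts for the hypothetical simulation

ratio = sim_watts / brain_watts
print(f"{ratio:.0e}")   # → 1e+12: a trillion-fold efficiency gap
```

A factor of a trillion is a lot of ground for Moore's Law to cover, though exponentials have surprised people before.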

Everything is theoretically impossible, until it is done. -- Robert A. Heinlein

There can be no doubt that if true AI were realized, Homo sapiens could - and I must emphasize - COULD be in deep sneakers. The issue would be one of implementation. What kind of network access would this AI have? What means would be at its disposal, if any? An AI that could learn from the internet but had limited or no outbound access to it would be safer: it could grow its own abilities while its capacity for manipulating external systems was frustrated, preferably by hardware whose functions the machine could not alter. As for prophylactic measures, something as simple as a hard-wired EMO switch on the computer's mains power, dependent on no other electronics, would be a simple and highly effective deterrent to any chance of the machine usurping power from those who created it.

Treat the problem as you would handling a snake - recognize the danger while also recognizing that the means to control said danger do exist, so long as care is taken and deliberate thought is used. Forewarned is forearmed.

We do have technology used to control people, of course (like traffic lights and automated speed-monitoring devices). Not that the technology itself is motivated to control us, though :)

And we have technology that ends up controlling us, like SB's quirky car, and machines that pester you with reminder beeps, etc. etc.

It seems like the fear is that these effects would somehow morph into AI with a drive to power, though.

How would this happen - why would AI systems be devised to want power? People struggle for control because we evolved that way, by natural selection. But the AI systems would be evolving by our selection. Unless we actually chose to design AI to have a drive to power, or we designed the AI to be subject to selection for a drive to power, it wouldn't develop a drive to power.

There are many sci-fi stories about machines taking over - but without explaining why they want to take over.