Artificial intelligence is a great threat

Developments in artificial intelligence could be "the best or worst thing" ever to happen to humanity, says legendary astrophysicist, author and cosmologist Stephen Hawking, who urges more research into the possible risks involved so that problems can be addressed before they arise, rather than after it is too late to fix them.

He recently warned about the dangers of artificial intelligence in an article co-written with fellow scientists Stuart Russell, a computer-science professor at the University of California, Berkeley, and physics professors Frank Wilczek and Max Tegmark of the Massachusetts Institute of Technology. In the piece they cite several achievements in the field of artificial intelligence, including self-driving cars, Siri and the computer that won Jeopardy! "Such achievements will probably pale against what the coming decades will bring," the article, published in Britain's Independent, said. "Success in creating AI would be the biggest event in human history," it continued. "Unfortunately, it might also be the last, unless we learn how to avoid the risks."

Hawking was inspired to write the piece after watching Transcendence, the latest Johnny Depp and Morgan Freeman film, which arrives in cinemas this July. The film presents two opposing futures for humanity: one in which artificial intelligence becomes a strong and crucial part of our existence, taking over many aspects of human life, and another built on an anti-technology perspective. Hawking, however, warns against dismissing this sort of artificial intelligence as mere science fiction.

While Hawking writes that successfully creating AI would be one of the greatest achievements of the human race, he also sees potential problems ahead. The possible uses of AI are endless: world issues such as poverty, disease and war could become things of the past. But there is a price. Defence firms are already looking into how AI could be used to create fully autonomous weapons that eliminate their targets on their own. Super-intelligent machines could self-replicate, improving on their faults and learning as they go.
As such, they could outsmart any human counterparts in financial markets, invent better machines than humanity could devise, and manipulate human leaders to their own benefit. Hawking notes that in the short term the issue with AI is simply who controls the technology, whereas the long-term question is whether it can be controlled at all. In an effort to prevent the technology from falling into the wrong hands, the UN and Human Rights Watch have proposed a treaty banning the production of such weapons.

"Looking further ahead, there are no fundamental limits to what can be achieved," said Professor Hawking. "There is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains," he notes. Indeed, IBM has already developed smart chips that could pave the way for sensor networks that mimic the brain's capacity for perception, action and thought. One day, this could allow computer scientists to build a machine with a brain even more intelligent than a human's.

"As Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a singularity," said Professor Hawking, calling for more research as a preventive measure. "Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks," he writes.

Source: http://sputniknews.com/