Hope Against Hope… And Smile

Number 1 Risk For This Century

Is it global warming? World War III, perhaps?

Nope. It’s superintelligence.

“Eventually, I think human extinction will probably occur, and technology will likely play a part in this.” Thus spoke Shane Legg, a co-founder of DeepMind, expressing his belief that artificial intelligence could play a part in humanity’s demise. Neuroscientist Demis Hassabis co-founded DeepMind, which was recently sold to Google. The company’s aim is to develop AI that allows computers to think like humans.

Another AI group, San Francisco-based Vicarious, is attempting to build a program that mimics the brain’s neocortex, the region responsible for sensory perception, spatial reasoning, conscious thought, and language in humans. In the company’s own words, “Vicarious is developing machine learning software based on the computational principles of the human brain.”

What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us?

Superintelligence: Paths, Dangers, Strategies, a book by Nick Bostrom, asks these questions from the other side of the equation: how will humanity cope with superintelligent machines? The book lays a foundation for understanding the future of humanity and intelligent life. Bostrom has also argued that the world we live in may be nothing but a computer simulation. But never mind that.

Nell Watson, an Amsterdam-based engineer, futurist and CEO of Poikos, said computer chips could soon have the same level of brain power as a bumblebee – allowing them to analyse social situations.

“I am deeply saddened by the inability of robots to do something as simple as telling apart an apple and a nectarine. […] Machines are going to be aware of the environments around them and, to a small extent, they’re going to be aware of themselves.”

At a conference just days ago, she warned that robots could decide that the most compassionate act towards the human race is to get rid of everyone. Watson argues that as robots grow smarter and more capable, “the most important work of our lifetime is to ensure that machines are capable of understanding human values. It is those values that will ensure machines don’t end up killing us out of kindness.”

Nell Watson’s comments follow tweets earlier this month by Tesla founder Elon Musk, who said AI could be more dangerous than nuclear weapons. Musk has invested in Vicarious, along with Mark Zuckerberg and actor Ashton Kutcher. Indeed, Musk is so concerned that he is investing in several AI companies – not to make money, he says, but to keep an eye on the technology in case it gets out of hand.

Stephen Hawking, too, has warned that artificial intelligence could be the downfall of mankind. Writing in The Independent, he said, “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last.”