Will Humans Perish Due To AI? Stephen Hawking’s Last Paper May Hold Some Clues

“The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded,” Stephen Hawking, the world-famous physicist, had said in an interview.

Hawking (Jan 1942 — Mar 2018), though a theoretical physicist and cosmologist by profession, often ended up expressing his views on many other issues such as global warming, politics and philosophy. But most of his controversial statements concerned technological perils to the future of humanity, and artificial intelligence as a threat to humankind was one of his recurring warnings. In his last writings, published on 16 October this year and finished by his family in collaboration with his colleagues, Hawking discussed, among other issues, how he thought technology could put humanity at risk.

Truth Behind Hawking’s Threat

Hawking, an inspiration to innumerable theoretical astrophysicists and cosmologists, had always warned us about the threats posed by AI. In his last book, Brief Answers to the Big Questions (Hodder & Stoughton, 2018), he talks about everything from AI to aliens.

“Superhumans”

But one of the most important things he talks about is superhumans replacing regular humans. By superhumans, he means “genetically re-engineered” humans. He talks about the genetic editing of humans in the future, and how it could displace current humanity entirely. We already have CRISPR-Cas9 to modify our genes with remarkable precision. He was of the opinion that people would find ways to modify traits as complex as intelligence and aggression using the wonders of genetic engineering, and that it would not take long to achieve this. Such editing may also enhance disease resistance and longevity. Research has also shown the possibility of genetically modifying an embryo to treat a heart disease.

All these advances in genetic engineering made Hawking fear for the future of humanity, and he believed they would eventually become a threat to the ordinary human world. “Once such superhumans appear, there will be significant political problems with unimproved humans, who won’t be able to compete,” he wrote. “Presumably, they will die out, or become unimportant. Instead, there will be a race of self-designing beings who are improving at an ever-increasing rate.”

Natural Calamities Caused By AI?

His paper also raises the possibility of AI developing a will of its own in the future, a will that could conflict with the will of us humans, and warns that within the next 1,000 years a major environmental calamity or a nuclear war could ‘cripple the Earth’. Many computer scientists today do believe that consciousness will arise from AI.

Superintelligence

In the past, Google, in its project called AutoML, created an AI system that in turn created its own child AI, which surpassed its creator’s performance. This requires an AI to understand how it works, and since this is now a possibility, we could very well fear the downside of consciousness in AI in the future.

The physicist’s own machine, built to enable him to communicate, was based on AI, but he feared that the development and widespread adoption of this technology could well become a threat to the entire human race in the future.

Hawking was concerned about the advent of superhuman AI, which will be able to not just replicate human intellect but also expand it. He said the technology will surpass humans, able not just to do what humans are capable of but also to learn and grow, eventually exceeding human abilities. He stated in the book that AI will overtake humans in intelligence within the next 100 years, but that humans will need to ensure the AI has the same goals as us.

A still from the movie I, Robot depicts a futuristic world left to robots. The movie takes a negative view of AI controlling the world and how it might enslave the human race.

Others Who Have Warned Us Of AI

Entrepreneur Elon Musk also considers AI a threat to humans and had, along with Hawking, signed an open letter in hopes of preventing a robot uprising. “Maybe there’s a five to 10 percent chance of success [of making AI safe],” he told the staff of his own neurotechnology company, Neuralink, after showing them a documentary on AI, Rolling Stone reports.

The inventor of the World Wide Web, Sir Tim Berners-Lee, spoke at a conference about the horrific scenario in which AI could become the new ‘masters of the universe’ by creating and running their own companies. A vivid fictional illustration is ‘Metalhead’, an episode of the American TV show Black Mirror.

Bill Gates, in an interview with the BBC, had shared his thoughts on the potential threats of AI. “A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned,” he had said.

Future Of AI

Every technology that we humans have created in our history has had both good and bad effects; whether it becomes a threat or a help is up to us. But the world of AI is a different one altogether. The threats that Hawking describes are real, and we, as a civilization, need to take extra care not to let it progress to a point where we can no longer control it.