Imagine a computer that wants to calculate π to as many digits as possible. That computer will see humans as being made of atoms which it could use to build more computers; and worse, since we would object to that and might try to stop it, we'd be a potential threat that it would be in the AI's interest to eliminate.

Ideally, “each time you build a more powerful machine, it effectively models human values and does what humans would like,” says Christiano. But he worries that these machines may stray from human values as they surpass human intelligence.

One of the things that makes AI risk scary is that it's one of the few risks that could genuinely cause extinction if it were to go bad. With a lot of other risks, it's actually surprisingly hard to get to an extinction-level outcome.

In strategy games, the most powerful abilities are those that let you take more actions per turn or give you a wider array of possible actions, so you can perform the one best suited to the situation at hand. AI is literally a machine producing ideas, which lets you act faster (and thus perform more actions) or execute different plans (and thus have more choices). This is a serious game imbalance […]

1) Given the history of AI development and the current rate of AI progress, it's almost obvious that superhuman-level AI will be invented and run in millions of copies within the next 50 years if we don't impose severe restrictions on its development.
2) It's highly unobvious that scientists can invent a way to control general superhuman-level AI. Moreover, it's even much more dubious that scientists […]

Artificial General Intelligence (AGI) will undoubtedly become humanity's most transformative technological force. However, the nature of such a force is unclear, with many contemplating scenarios in which this novel form of intelligence will find humans an inevitable adversary.

Robert Provine
Research Professor/Professor Emeritus, University of Maryland

There is no indication that we will have a problem keeping our machines on a leash, even if they misbehave. We are far from building teams of swaggering, unpredictable, Machiavellian robots with an attitude problem and an urge to reproduce.

There are plenty of consequences of the development of AI that warrant intensive discussion (economic consequences, ethical decisions made by AIs, etc.), but it is unlikely that they will bring the end of humanity.

All species go extinct. Homo sapiens will be no exception. We don't know how it will happen: a virus, an alien invasion, nuclear war, a supervolcano, a large meteor, a red-giant sun. Yes, it could be AIs, but I would bet long odds against it. I would bet, instead, that AIs will be a source of awe, insight, inspiration, and yes, profit, for years to come.

People are worried about the free will of machines. So far, no scientific evidence supports such a claim. Even human beings' free will remains an enigma, let alone that of machines. AI researchers who are deep in the field have a crystal-clear picture of the industry's status quo and of which risks may not be manageable. The reality is far from what people might imagine.

[…] AI can truly help solve some of the world's most vexing problems, from improving day-to-day communication to energy, climate, health care, transportation and more. The real magic of AI, in the end, won't be magic at all. It will be technology that adapts to people. This will be profoundly transformational for humans.

We can turn machines into workers: they can be labor, and that actually deeply undercuts human value. My biggest concern at the moment is whether we as a society can find a way of valuing people not just for the work they do.

I don't think there's going to be a single change that's black and white, where one moment we're on one side and then, after the change, we're on the other side. It's the cumulative effect of everything: AI is embedded in many of the technologies that have been changing our world over the last several decades, and it will continue to be.