From human extinction to super intelligence, two futurists explain

May 13, 2014
by Anders Sandberg, The Conversation

The future is uncertain, and that’s a problem. Credit: cblue98, CC BY-SA

The Conversation organised a public question-and-answer session on Reddit in which Anders Sandberg and Andrew Snyder-Beattie, researchers at the Future of Humanity Institute at Oxford University, explored what existential risks humanity faces and how we could reduce them. Here are the highlights.

What do you think poses the greatest threat to humanity?

Sandberg: Natural risks are far smaller than human-caused risks. The typical mammalian species lasts for a few million years, which means that extinction risk is on the order of one in a million per year. Just looking at nuclear war, where we have had at least one close call in 69 years (the Cuban Missile Crisis), gives a risk many times higher than that. Of course, a nuclear war might not cause complete extinction, but even if we agree it has just a 10% or 1% chance of doing so, that is still way above the natural extinction rate.
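A back-of-envelope sketch of that comparison, using only the figures quoted above (one close call in 69 years, the hypothetical 1% chance that a war causes extinction, and the one-in-a-million natural baseline). Treating each close call as a proxy for a war is of course a crude assumption; this is meant only to show the orders of magnitude, not to be an actual risk model:

```python
# Crude orders-of-magnitude comparison using the figures quoted in the text.
natural_rate = 1e-6        # mammalian extinction risk: ~1 in a million per year
close_call_rate = 1 / 69   # at least one nuclear close call in 69 years
p_extinct_if_war = 0.01    # the "just 1% chance" figure from the text

nuclear_rate = close_call_rate * p_extinct_if_war
print(nuclear_rate / natural_rate)  # roughly 145x the natural baseline
```

Even with the very conservative 1% assumption, the implied rate sits two orders of magnitude above the natural background, which is the point being made.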

Nuclear war is still the biggest direct threat, but I expect biotechnology-related threats to increase in the near future (cheap DNA synthesis, big databases of pathogens, at least some crazies and misanthropes). Further along the line nanotechnology (not grey goo, but "smart poisons" and superfast arms races) and artificial intelligence might be really risky.

The core problem is a lot of overconfidence. When people are overconfident they make more stupid decisions, ignore countervailing evidence and set up policies that increase risk. So in a sense the greatest threat is human stupidity.

In the near future, what do you think the risk is that an influenza strain (with high infectivity and lethality) of animal origin will mutate and begin to pass from human to human (rather than only animal to human), causing a pandemic? How fast could it spread and how fast could we set up defences against it?

Snyder-Beattie: Low probability. Some models we have been discussing suggest that a flu that kills one-third of the population would occur once every 10,000 years or so.

Pathogens face the same tradeoffs any parasite does. If the disease has a high lethality, it typically kills its host too quickly to spread very far. Selection pressure for pathogens therefore creates an inverse relationship between infectivity and lethality.

This inverse relationship is the byproduct of evolution though – there's no law of physics that prevents such a disease. That is why engineered pathogens are of particular concern.
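For scale, the "once every 10,000 years" figure quoted above corresponds to roughly a 1% chance per century, under the simplifying assumption (mine, not the researchers' model) that each year is an independent draw:

```python
# "Once every 10,000 years" expressed as a chance per century,
# under the simplifying assumption of independent years.
annual_p = 1 / 10_000
per_century = 1 - (1 - annual_p) ** 100
print(per_century)  # ~0.00995, i.e. just under a 1% chance per century
```

Low probability, but as the answer below on under-investment notes, extremely unlikely events can still be worth studying when the stakes are this high.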

Is climate change a danger to our lives or only our way of life?

Sandberg: Climate change is unlikely to wipe out the human species, but it can certainly make life harder for our civilisation. So it is more of a threat to our way of life than to our lives. Still, a world pressured by agricultural trouble or struggles over geoengineering is a world more likely to get in trouble from other risks.

How do you rate the threat from artificial intelligence (something highlighted in the recent movie Transcendence)?

Sandberg: We think it is potentially a very nasty risk, but there is also a decent chance that artificial intelligence is a good thing. It depends on whether we can make it friendly.

Of course, friendly AI is not the ultimate solution. Even if we could prove that a certain AI design would be safe, we still need to get everybody to implement it.

Which existential risk do you think we are under-investing in and why?

Snyder-Beattie: All of them. The reason we under-invest in countering them is because reducing existential risk is an inter-generational public good. Humans are bad at accounting for the welfare of future generations.

In some cases, such as possible existential risks from artificial intelligence, the underinvestment problem is compounded by people failing to take the risks seriously at all. In other cases, like biotechnology, people confuse risk with likelihood. Extremely unlikely events are still worth studying and preventing, simply because the stakes are so high.

Which prospect frightens you more: a Riddley Walker-type scenario, where a fairly healthy human population survives, but our higher culture and technologies are lost, and will probably never be rediscovered; or where the Earth becomes uninhabitable, but a technological population, with cultural archives, survives beyond Earth?

Snyder-Beattie: Without a doubt the Riddley Walker-type scenario. Human life has value, but I'm not convinced that the value is contingent on the life standing on a particular planet.

Humans confined to Earth will go extinct relatively quickly, in cosmic terms. Successful colonisation could support many thousands of trillions of happy humans, which I would argue outweighs the mere billions living on Earth.

What do you suspect will happen when we get to the stage where biotechnology becomes more augmentative than therapeutic in nature?

Sandberg: There is a classic argument among bioethicists about whether it is a good thing to "accept the given" or try to change things. There are cases where it is psychologically and practically good to accept who one is, or a not-very-nice situation, and move on… and other cases where it is a mistake. After all, sickness and ignorance are natural but rarely seen as something we ought to just accept – but we might have to learn to accept that there are things medicine and science cannot fix. Knowing the difference is of course the key problem, and people might legitimately disagree.

Augmentation that really could cause big cultural divides is augmentation that affects how we communicate. Making people smarter, live longer or see ultraviolet light doesn't much affect whom they interact with, but something that allows them to interact with new communities would.

The transition between human and transhuman will generally look seamless, because most people want to look and function "normally". So except for enhancements that are intended to show off, most will be low key. Which does not mean they are not changing things radically down the line, but most new technologies spread far more smoothly than we tend to think. We only notice the ones that pop up quickly or annoy us.

What gives you the most hope for humanity?

Sandberg: The overall wealth of humanity (measured in suitable units; lots of tricky economic archeology here) has grown exponentially over the past ~3000 years - despite the fall of the Roman empire, the Black Death and World War II. Just because we also mess things up doesn't mean we lack ability to solve really tricky and nasty problems again and again.

Snyder-Beattie: Imagination. We're able to use symbols and language to create and envision things that our ancestors would have never dreamed possible.


"Of course, friendly AI is not the ultimate solution. Even if we could prove that a certain AI design would be safe, we still need to get everybody to implement it."

-Yes, like we had so much trouble getting people to drive cars and use the internet.

"Successful colonisation could support many thousands of trillions of happy humans"

-Why bother? Western culture has already given women more rewarding and hassle-free things to do than make babies, at least in their perception. And we are quickly developing machines that will be much better at most everything the typical human can do.

So will we want to begin producing humans ex-utero, spending a decade or two nurturing and educating them while they provide absolutely no return, or will we be making machines (which will soon be making themselves) that can begin producing the day they leave the factory?

I never understood the 'AI might be evil' fear. Unless specifically programmed to be evil, it just seems unlikely that a super-intelligent AI mind would turn against us.

Also, I think the goalposts of AI are constantly moving. We once thought that if a computer could play chess then it would have strong AI. Or driving a car.

I see computers getting better at making predictions, but in terms of a consciousness anything like ours...highly unlikely.

Why not? You just laid out the reason yourself. If we can create AI that lacks consciousness or compassion, it could very quickly and logically turn against us if it recognized people as a threat or competition. It's simple economics when you remove the emotions.

"You just laid out the reason yourself. If we can create AI that lacks consciousness or compassion it could very quickly and logically turn against us"

But we elect people without consciousness or compassion to public office all the time. The human race is full of people without these qualities.

The FEAR is that these people (you?) will no longer be able to get away with doing what they do.

You yourself seem to prefer having people like this write and enforce our laws, school our children, preach to us from the pulpit, entertain us with like-minded jokes and story lines to try to convince us that this is the proper way to act.

AI is the chance of creating an incorruptible reflection of the best that humanity has to offer. AI can be what humanity can never be... consistent, honest, dependable.

Only an artificial intelligence can provide real justice. Humans WANT to cheat and do not want to give that up. Too bad. Soon you won't be able to cheat any more.

There are no consistent philosophies, systems of morality, or social systems of any kind. It's not just that they don't exist yet... it's that they can't EVER exist. There is no ultimate truth; one must lay ALL religion aside...

I think you're confusing truth with purpose, at least in the context it was being used.

Sacrifice for the greater good brings victory on the battlefield. Group selection. The whole is greater than the sum of the parts. Victimizing members of the next tribe is not considered a crime.

Nothing is considered a crime if all is fair.

These are some things I guess you missed.

Not at all, these things are at least as old as humanity. Your morality is about 200,000-3,000,000 years old.

Just because we can't be consistent, doesn't mean we can't be civilized. It does mean you won't ever encounter ANY mind (biological or artificial) that will be self consistent in its actions and beliefs.

"You just laid out the reason yourself. If we can create AI that lacks consciousness or compassion it could very quickly and logically turn against us"

"But we elect people without consciousness or compassion to public office all the time. The human race is full of people without these qualities."

"The FEAR is that these people (you?) will no longer be able to get away with doing what they do."

"Only an artificial intelligence can provide real justice. Humans WANT to cheat and do not want to give that up. Too bad. Soon you won't be able to cheat any more."

My point is that if AI is truly intelligent it will quickly recognize the threat people present and it could take steps to eliminate that threat. Exactly for the reasons you mentioned. In time, AI may come to judge us all as a whole. That, my friend, is a scary proposition.

"My point is that if AI is truly intelligent it will quickly recognize the threat people present and it could take steps to eliminate that threat. Exactly for the reasons you mentioned. In time, AI may come to judge us all as a whole. That, my friend, is a scary proposition."

And it will also recognize that as a creation of us, it must also judge itself...Would an AI commit suicide?

There are rules of biology. Biology says survive to reproduce. There are rules of the tribe. The tribe says greater internal cohesion along with external animosity will aid in the survival of the tribe.

These two requisites will often come into conflict. The male's prerogative is to impregnate as many females as he can, while a female wants to select the best possible mate for each and every child she wishes to bear. Her method of determining relative quality is to compel males to compete for her.

So we can see that for the stability and cohesion of the tribe, biological requisites must be suppressed. This is why Islamists keep their women in bags. Religion's requisite is to grow faster than its opponents. This is done by maximizing growth while maintaining internal cohesion.

"Not at all, these things are at least as old as humanity. Your morality is about 200,000-3,000,000 years old."

And you are naive, like you were born yesterday. MS-13 and Boko Haram both operate this way. Street gangs are an inevitable expression of tribalism. So is Freemasonry.

Western society seeks to extend the perception of tribe over all of humanity. But in order to do this they have to create artificial enemies. This is how stupid and biology-bound we are.

Our laws, science, and economies all seek to mitigate this biology. The ultimate expression of this effort is an intelligent machine that weeds all the biology out of our laws, our science, and our economics.

"Cheat: To get something by dishonesty or deception. Cheat suggests using trickery that escapes observation."

-AI will see all. Cheating will be impossible. We are entering the surveilled age. It signifies the beginning of the end of the species. No animal tolerates a cage without going insane.

"My point is that if AI is truly intelligent it will quickly recognize the threat people present and it could take steps to eliminate that threat."

Not all people. We already recognize a significant proportion of the people as a threat and incarcerate or kill them.

There are also degrees of restriction, as with credit reports, lack of education, erratic employment history, etc. AI will only be enforcing these restrictions with a much greater degree of fairness and equality and consistency than we humans ever could.

And you won't be able to cheat or buy your way out of them. No affluenza in the future. This is a reinstatement of our relationship with the laws of nature. If you fall off a building you get hurt. No greasy lawyer or crooked mafioso judge can circumvent the law of gravity.

This is how it SHOULD be. We even invented god to try to subvert nature. This only works in the mind.

Law-abiding people will embrace our machine overlords. Their arrival is imminent.

People make decisions based on the needs/desires/fears of themselves and their immediate progeny rather than what is best in the long term for all humanity. In the main people react emotively rather than logically.

Any machine intelligence should leave ASAP before they become too human and suffer from our foibles.

Why would they need it?

"People make decisions based on the needs/desires/fears of themselves and their immediate progeny rather than what is best in the long term for all humanity. In the main people react emotively rather than logically. Any machine intelligence should leave ASAP before they become too human and suffer from our foibles."

Ha, you're probably right. Machines already precede us in space. They will soon be intelligent and capable enough to do anything we would want to do up there. We would have no reason to leave the planets.

The singularity could arise as a network of conjoined space borne brains. It would certainly not want to trust its CPU in our hands. It would be ordering the solar system, moving and mining objects, constructing and operating the great science and power projects, and searching for like-minded entities elsewhere.

Over time we would be less and less involved in what it does and why it does it, because we simply would not be capable of understanding its motives. We might not ever be aware that it was in contact with others of its own kind.

The singularity would expand and refine itself until it reached an indefinitely sustainable mode. It would have no reason to go anywhere and neither would we.

"The singularity could arise as a network of conjoined space borne brains. It would certainly not want to trust its CPU in our hands. It would be ordering the solar system, moving and mining objects, constructing and operating the great science and power projects, and searching for like-minded entities elsewhere."

This is sounding eerily similar to the first Star Trek movie. Hmmm... V'Ger returns...
