Risking Human Extinction

by Lifeboat Foundation Scientific Advisory Board member John
Leslie, 1999.

Abstract

Of all humans so far, roughly ten per cent are alive with
you and me. If human extinction occurred soon, our position in
population history would have been fairly ordinary. But if, in
contrast, humankind survived for many more centuries, perhaps
colonizing the galaxy, then we could easily be among the earliest 0.001
per cent of all humans who will ever have lived. This could seem a very
surprising position to be in, a point which is crucial to a “doomsday
argument” originated by the cosmologist Brandon Carter.

People who
accept the argument, even in a weakened form which takes account of the
fact that the world is probably indeterministic, will re-estimate the
size of the threats to humankind, showing increased reluctance to
believe that humans will survive for very long.

Possible
threats
include nuclear and biological warfare; ozone layer destruction;
greenhouse warming of a runaway kind; an environmental crisis caused by
overpopulation; new diseases; disasters from genetic engineering or
from nanotechnology; computers replacing humans entirely, as some
people think would be desirable; the upsetting of a space-filling
scalar field through an experiment at very high energies, as discussed
in a recent book by Britain’s Astronomer Royal; and even the arguments
of the many philosophers who see no duty to keep the human race in
existence. But despite all such dangers and despite Carter’s
disturbing argument, humans may well have a good chance of surviving
the next five centuries.

Paper

What chance has the human race of surviving the coming century, and
perhaps further centuries?

Comets and asteroids are very unlikely to
exterminate us. During the next hundred years Earth may well be hit by
something large enough to wipe out a city, or several cities if the
thing hits an ocean, producing huge tidal waves. Yet even something
much bigger, like the monster which exterminated the dinosaurs,
probably would not be enough to kill all humans, and objects of that
size arrive only about once in a hundred million years. Long before the
next one did, humans should have spread far beyond their tiny planet so
long as they had not exterminated themselves. How likely are they to do
that? Let us look at various risks.

First, there is nuclear warfare. There are
still thousands of hydrogen bombs despite the collapse of the Soviet
Union. Because of the chaos of its collapse, the threat of accidental
nuclear war could well be greater than ever. Nitrogen oxides from a
nuclear war might be disastrous to the ozone layer, Earth’s shield
against ultraviolet light. Also, nobody can be sure whether a “nuclear
winter”, severe cooling which lasted for months, would result from all
the soot which burning cities and forests threw into the atmosphere.
The radioactive fallout would work mischief too. Humans might be wiped
out through the deaths of microorganisms which were crucial to the
health of the biosphere.

Biological warfare could be still more
dangerous. Scientists could produce new diseases that spread more
easily and killed far more efficiently than the Spanish ’flu which,
appearing in 1918, ended more lives than the World War had just done.
An aggressor nation’s vaccines to protect itself could fail, in which
case perhaps everybody would be killed off.

Do not say that
nobody would
be criminal enough to risk it! The world contains some very unpleasant
individuals and, now that mammalian cells can be grown on tiny beads, a
single bottle can produce viruses in numbers which previously required
large factories.

It is often population pressures which lead to
warfare, and the world’s population is still exploding. We have some
six billion humans now, which leaves very little usable land for each
person. There could be up to twelve billion humans by the end of the next
century. Even without warfare, the environment could come under
disastrous pressure. Many think it already is, thanks to such things as
the unholy alliance between fertilizers and pesticides, the loss of
forests, and the chlorofluorocarbons which continue to erode the ozone
layer.

Recent research suggests that in the northern
hemisphere, during
the crucial spring growing season, ozone losses will be double what had
been estimated, because of how global greenhouse warming is linked to
stratospheric cooling. And the warming might be disastrous just by
itself. To get the consensus needed in 1992 for persuading the
politicians in Rio, the Intergovernmental Panel on Climate Change
disregarded worst case predictions, also dealing with biological
feedback loops in just one sentence: “Biological feedbacks have not yet
been taken into account.”

Scenarios involving positive feedback and
runaway overheating are easy to construct. For instance: (i) Ocean
waters warm up, becoming less able to absorb carbon dioxide, which is a
powerful greenhouse gas; (ii) cold-water nutrients then rise to the
warmed sea surface less often, so phytoplankton grow more slowly,
absorb less carbon dioxide, and generate less dimethyl sulphide, a
substance which encourages the birth of the clouds which keep us cool
in daytime; (iii) many phytoplankton die because of ozone layer losses;
(iv) warmer weather increases production of carbon dioxide by plants
and soil microbes; (v) tundra melts and peat bogs dry out, producing
yet more carbon dioxide and vast amounts of another greenhouse gas,
methane, which is, molecule for molecule, perhaps thirty times as
powerful; (vi) changes in high altitude clouds make them trap more
heat; (vii) drought then kills vegetation, returning carbon dioxide to
the atmosphere; (viii) there next comes depletion, through the ravages
of methane and other greenhouse gases, of the hydroxyls which are so
important in destroying these gases; (ix) there follows a retreat of
sea ice, so that less sunlight is reflected back into space; (x)
heating of the sea thereupon releases the trillions of tons of methane
which are at present locked up in the clathrates of the continental
shelves; (xi) the new heat produces much more water vapor, an
extremely important greenhouse gas, so that a greenhouse-effect
disaster arrives.

James
Lovelock is well known for his “Gaia”
hypothesis which, stripped of the mysticism that has sometimes been
attached to it, is simply that negative feedback loops have kept Earth
healthy. However, he has recently been saying that positive feedbacks
could become dominant any day now, with the planet’s temperature then
perhaps rocketing upwards and producing “gigadeath”: the death of
billions, that is to say, if not of everybody.

Even without severe heating, the environment
could suffer heavily through industrial pollution, pesticides, and
deliberate destruction of habitats. Nobody knows whether this would be
ruinous. Again, consider the concentration of so many people in huge
cities, international travel, and the weakening of our immune systems
through onslaughts from the eighty thousand chemicals which are
synthesized today on an industrial scale.

These factors
could combine
to produce worldwide plagues of startling killing power. It is
nowadays often suggested that the AIDS virus had been around for a long
time, unable to spread widely because twentieth-century civilization
had not arrived. An expert on global risks,
Norman Myers, has estimated
that it may now destroy up to one in five of us. Perhaps, though, we
can count ourselves lucky that it is not still more threatening. The
closely related visna virus, which infects sheep, is spread simply by
coughing.

All the same, humanity’s future prospects are
markedly uncertain. The next couple of centuries will probably be the
period of highest risk. When thinking about them, remember Enrico
Fermi’s question of why we have not detected any extraterrestrials.

Calculations suggest that a species which had developed industrial
technology could spread across its galaxy in a few million years. Now,
our galaxy has existed for several billion years, and it contains many
billion sun-like stars. Even if only a small proportion of those stars
have suitable planets, how comes it that not a single planet has given
birth to intelligent beings who have spread far enough for us to
notice? One currently popular answer is that intelligent species do
appear quite often, but that they then quickly destroy themselves
through their scientific discoveries.

Our own latest discovery, genetic engineering,
has alarmed many people. Most experts say that its risks are fairly
small, but social pressures might be influencing them. Instead of just
affecting industry, as in the case of regulations applying to nuclear
power plants, efforts to restrict genetic engineering threaten the
salaries and research grants of numerous scientists.

I confess that I
lose little sleep from fear of the so-called “green scum disaster”, in
which a genetically engineered super-organism performs so efficiently
that it wipes out everything else; yet maybe not even this can be
entirely ruled out. And it was rather a shock to find Toronto’s The
Globe and Mail reporting on November 2nd, 1993, on its front page, yet
with no mention at all of possible risks, that researchers at
Washington University had used genetically modified salmonella bacteria
to produce harmless, temporary infections able to act as
contraceptives, rendering recipient women infertile for months. Are not
salmonella bacteria extremely widespread, and might not the genetically
modified ones mutate so that they spread efficiently from woman to
woman? And what might that not do to humanity’s survival prospects?

Bear in mind, too, that genetic
engineers could
develop truly terrible germ warfare agents. The greatest threat during
the coming century could easily be from these, perhaps let loose by
terrorists.

An exotic variant on the “green scum disaster”
is the “grey goo calamity” which nanotechnology might conceivably make
possible. Nanotechnology, originally suggested by
Richard Feynman, is
the development of very, very tiny machines, preferably controlled by
their own miniature computers. Such machines might be made
self-reproducing.

After all, viruses and bacteria can be
viewed as
“natural machines” and they reproduce themselves with ease. If it used
sunlight as its power source, and naturally available chemicals,
self-reproducing nanomachinery might perhaps reduce the biosphere to dust. I
think nothing of the sort will be possible for at least another three
or four centuries, yet it is very hard to be sure.

Computers could become threatening rather more
quickly than that. Already, humans are strikingly dependent on them.
The “year two thousand problem” (the fact that many computers will
fail at the opening of the next century, a date which their internal
clocks will be unable to handle) may result in economic chaos. And
people have suggested that something a lot grimmer could hit us later,
as nation-states handed over more and more of their decision-making to
computers of ever greater intelligence. The most powerful
nation-states, the ones which won the struggle to survive, could be those
whose computers were programmed simply to defeat other nation-states
economically. They might have few citizens, almost everything being
done by machine. A conceivable final outcome would be that computers
replaced humans entirely.

On some versions of this scenario, the
computers take over through developing enough cunning to defeat us when
we try to interfere with them. On other versions, they take over with
our full blessing.

Hans Moravec, a leading expert on
artificial
intelligence, states in his book
Mind Children
[2] that he believes the
human race to be in its final hundred years, and that he intends to
work for its speedy extinction. Computers, he argues, will soon be far
cleverer than humans. They will be virtually immortal. Their thought
processes will be much less neurotic. In a word, they will be better,
happier beings than we are, and we should be eager to have them replace
us.

Moravec here assumes that, as well as being
highly intelligent, computers could be fully conscious. Many
philosophers would agree with this, yet many other philosophers would
not.

It is a difficult issue.

Another such issue is whether physicists could
destroy the world by experiments at extremely high energies. There has
been some discussion of this in the journals. The energies available to
experimental physics have increased roughly tenfold in each decade
since 1900.
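To get a feel for that rate of growth, here is a back-of-envelope extrapolation, a minimal sketch in Python. The starting and present-day figures are illustrative assumptions of mine (very roughly 10^5 eV around 1900, and about 10^12 eV, or 1 TeV, by the late 1990s), not numbers taken from the essay; the Planck energy is about 1.22 × 10^19 GeV.

```python
import math

# Hedged extrapolation of the tenfold-per-decade growth in the energies
# available to experimental physics. The 1900 figure is an assumption.
e_1900 = 1e5         # assumed energy scale circa 1900, in eV
e_planck = 1.22e28   # Planck energy (about 1.22e19 GeV), in eV

decades = math.log10(e_planck / e_1900)   # decades of tenfold growth
print(f"decades needed: {decades:.0f}")                   # about 23
print(f"trend reaches Planck scale near {1900 + 10 * decades:.0f}")
```

On those assumed figures, a simple continuation of the trend would not reach the Planck scale until well into the twenty-second century.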

In his book
Dreams of a Final Theory
[3] the
Nobel-prizewinning physicist Steven Weinberg suggests that, using
lasers to accelerate atomic particles, even Planck-scale energies might
be had. A collision between two particles at Planck-scale energies
could release into a very tiny volume the energy of two small jet
aircraft colliding head-on. Now, we know that very high kinetic
energies are reached by cosmic rays, particles which can sometimes pack
the punch of bullets, but the energies of collisions between these
rays, at least inside “our past light cone” (which means the section of
the universe’s history which is relevant to us), are thought to have
been well below those of bullets in collision. Could disaster strike if
physicists exceeded cosmic ray collision energies, which would seem to
be the very highest energies known to be safe?

According to most physicists, what we think of
as “empty space” is in fact filled with one or more force fields
(“scalar fields”) whose presence is vital to the world as we know it.
Perhaps as early as in the coming century (although this can seem very
unlikely) experimenters could reach energies above those of the
colliding cosmic rays. They might then conceivably produce a tiny
bubble of new-strength scalar field, a bubble of an “energetically
advantageous” sort. Such a bubble would expand through the surrounding
space rather like a rock rushing downhill, but much faster: at
virtually the speed of light, in fact. The solar system would be
destroyed, and then our entire galaxy, and so on. The bubble would just
keep expanding.

People often think there is little danger
of this,
pointing to an article by the astronomers Piet Hut and Martin Rees
[4].
But last year
Rees, who is Britain’s Astronomer Royal, said in his book
Before the Beginning
[5] that the danger seemed real enough, given how
clever physicists are at pushing energies upwards. “Caution,” he wrote,
“should surely be urged (if not enforced) on experiments that create
energy conditions that may never have occurred naturally.”

A possible ground for taking such warnings
seriously is the “doomsday argument” originated by the cosmologist
Brandon Carter, an argument which has now been developed and defended
by several people, including
Richard Gott. I have defended it myself
[6],
trying to refute a large number of objections which people have thrown
at it. Suppose that many thousand intelligent races, of more or less
the same size, had been more or less bound to evolve in our universe.
We could not at all expect to be in the very earliest race, could we?
Now, very similarly, you and I could not expect to find ourselves among
the very first (for instance, in the first one per cent) of all
humans who will ever have existed.

That, however, is where you and I would be, if
the human race were indeed going to last for many more centuries, even
at just its present size. If it spread right across the galaxy, then
you and I could easily be in the first thousandth of one per cent. But
if, in contrast, humans became extinct shortly, then you and I would
have lived at the same time as up to ten per cent of all humans ever.
Remember that, of all humans so far, maybe seven or eight per cent are
alive at this very moment.
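The arithmetic behind these percentages is easy to check. In the minimal sketch below, the totals are assumptions of mine: demographic estimates in the 1990s put the number of humans ever born at very roughly sixty to a hundred billion, and the galactic total is purely illustrative.

```python
# Rough check of the percentages in the text, using assumed totals.
alive_now    = 6e9    # world population circa 1999
ever_born    = 80e9   # assumed humans ever born (estimates ~60-100e9)
galaxy_total = 6e15   # illustrative total if humankind filled the galaxy

print(f"share of all humans so far alive today: {alive_now / ever_born:.1%}")
print(f"everyone so far, as a share of a galactic total: "
      f"{ever_born / galaxy_total:.4%}")
```

With these figures about 7.5 per cent of all humans so far are alive now, while everyone born to date would make up only about a thousandth of one per cent of a galaxy-colonizing humanity.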

Quite a useful principle of scientific
reasoning runs as follows. Avoid treating what you observe as very
extraordinary, so long as you can instead fairly easily regard it as
fairly ordinary. Now, how about your own observed position in human
population history? To regard it as fairly ordinary rather than highly
extraordinary, all you need do is to think that humans will fairly soon
be extinct.

Imagine a scene towards the end of the next
century. Ten billion humans walk the Earth, but all will die inside a
month because of germ warfare. One of the doomed humans complains of
his terrible luck in having been born so late. “There have been”, he
complains, “upward of fifteen thousand generations since the start of
human history, yet here I am in the very last generation, the one which
gets wiped out by the progress of military science. What fantastically
extraordinary ill fortune!” Well, would not this be absurd? Perhaps
ten per cent of all the humans who would ever have been born would be
alive at that very moment. How could it be “fantastically
extraordinary” to find yourself in company with ten per cent of all
humans?

Here is another way of looking at the matter.
Suppose that in our gigantic universe (over a hundred billion
galaxies) intelligent life is bound to evolve many times. If intelligent
species found it easy to survive long enough to start spreading through
their galaxies, then would not almost all intelligent beings live at
times when their races had spread through their galaxies, since at
those times the races in question would have so immensely many people
in them? But do you and I find ourselves in a race which has spread
through its galaxy? We do not.

If intelligent life will span the universe, living for trillions of
years, what are the odds that we would be part of the 0.00000000001 per
cent of intelligent life that is stuck living on just one rock instead
of being part of an interstellar civilization?

People often protest that we have to find
ourselves alive at the time when we do, since it is that time now. If
now is a time before the human race has spread through its galaxy, then
so what? Finding ourselves alive when we do cannot, they say, be any
indication that there will not be trillions of humans later. It is
absurd, their protest runs, to ask what period you could expect to have
been born into, against the background of this or that story about how
humans are distributed across the ages. “When you could expect to have
been born” is a nonsensical notion. But any protest on these lines is
simply wrong, I think. Suppose you have forgotten your birthday. Is it
likely to have been March 21st? No, for only relatively few humans
have March 21st as their birthdays.

Think about lemmings. If you were a lemming,
when could you expect to have been born: at a time when there were
hardly any lemmings, or after a lemming population explosion?

According to the cosmologist I mentioned,
Brandon Carter, reflections on these lines ought to increase any doubts
which we had about a long future for humankind, a future containing
perhaps many trillions of humans in different parts of the galaxy. This
is what has come to be known as Carter’s “doomsday
argument”.

After
looking at the dangers confronting the human race, but before
considering Carter’s argument, I would have guessed that humans had
only about a five per cent chance of becoming extinct during the next
few centuries, and perhaps a ninety per cent chance of spreading across
the galaxy. Now that I have taken Carter’s argument into account, what
is my estimate of the chance that humans will be extinct within a few
centuries? Answer: about thirty per cent. So you could say that
Carter’s reasoning has made me about six times more pessimistic.

However, I have reservations about the variant
on this reasoning which was arrived at, quite independently, by Richard
Gott
[7]. If I understand him rightly, Gott reasons that your chance and
mine of being in, for instance, the first 2.5 per cent of all humans
who will ever have lived is 2.5 per cent, and that is that. He points
out that if, when the human race came to an end, all humans had always
guessed that they lived in the first 2.5 per cent, then exactly 2.5 per
cent would have guessed correctly.
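Gott’s observation can be checked mechanically. A minimal sketch, with an arbitrary assumed total, since the fraction comes out the same whatever the total:

```python
# If every human guesses "I am among the first 2.5 per cent of all
# humans who will ever live", exactly 2.5 per cent guess correctly,
# whatever the total number of humans turns out to be.
total = 1_000_000   # assumed total; any value yields the same fraction
correct = sum(1 for rank in range(1, total + 1) if rank <= 0.025 * total)
print(f"fraction guessing correctly: {correct / total:.1%}")   # 2.5%
```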

I suspect that his argument is too
straightforward. We have evidence of actual risks facing the human
race, and of actual attempts to counter those risks by, for instance,
trying to prevent disgusting tyrants from developing biological
weaponry. This is evidence which we should not disregard. I discussed
the point with Carter, and what we concluded is that the doomsday
argument is best used simply as a magnifying glass, increasing whatever
pessimism we had developed after thinking about disgusting tyrants, the
ozone layer, greenhouse warming, nuclear bombs, etcetera. Suppose that
thinking about these things still left us extremely confident in a long
future for humankind. In that case the doomsday argument, although
reducing our confidence, could still fail to remove it. Pessimism which
starts off extremely small can remain small even when magnified.
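The magnifying-glass behaviour is easy to exhibit with Bayes’ Rule. In the sketch below, the two hypothetical totals, and hence the likelihood ratio of ten between the hypotheses, are illustrative assumptions of mine rather than figures from Carter; with them, a five per cent prior is magnified to roughly the thirty per cent I mentioned earlier, while a very small prior stays small.

```python
# Illustrative Bayesian "magnifying glass". Our birth rank is treated
# as a uniform draw from whichever total is true, so each hypothesis
# makes the observed rank likely in proportion to 1/total.
def doomsday_update(prior_doom, total_doom=2e11, total_long=2e12):
    like_doom, like_long = 1 / total_doom, 1 / total_long  # assumed totals
    joint_doom = prior_doom * like_doom
    joint_long = (1 - prior_doom) * like_long
    return joint_doom / (joint_doom + joint_long)

print(f"{doomsday_update(0.05):.2f}")    # prior 5%   -> about 0.34
print(f"{doomsday_update(0.001):.4f}")   # prior 0.1% -> about 0.0099
```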

Another point on which Gott and I may disagree
is this. I think the doomsday argument works in a smooth mathematical
way only if our world is a deterministic world, a world in which the
number of humans who will ever have lived is something which has
already been fixed: something “already out there” in the realm of
genuine facts.

In that case your situation and mine could
be compared
to that of a man drawing names from a hat. He knows that his name was
in the hat only once, and that the hat contained either sixteen names
or else six hundred. He has just drawn his name, and it was the seventh
to be drawn. He now asks how likely it is that the hat contained six
hundred names. He ought to take into account two things: first, how
confident he used to be, before he started drawing names from the hat,
that it contained six hundred names, and second, the fact that his name
was the seventh to be drawn, which would have been less remarkable if
there were sixteen names instead. A mathematical formula known as
Bayes’ Rule can tell him what he should now think. But this
mathematical formula works unproblematically only because the number of
names in the hat was, all along, something “already there”.
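Spelled out, the hat calculation runs as follows, in a minimal Python sketch. Since his name was in the hat exactly once, its draw position is uniform over the number of names, so the probability of its being seventh is 1/16 on one hypothesis and 1/600 on the other.

```python
# Bayes' Rule applied to the hat example: 16 names or 600 names, and
# the man's name was the seventh drawn. P(7th drawn | N names) = 1/N.
def posterior_600(prior_600):
    like_600, like_16 = 1 / 600, 1 / 16
    joint_600 = prior_600 * like_600
    joint_16 = (1 - prior_600) * like_16
    return joint_600 / (joint_600 + joint_16)

# Even a man who began 90 per cent sure the hat held 600 names should
# now put the chance at only about 19 per cent:
print(f"{posterior_600(0.90):.2f}")   # about 0.19
```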

The future of the human race, you might
reasonably believe, is not “already there” in the requisite way. The
number of humans who will ever have lived may depend, for example, on
whether some politician is going to push a nuclear warfare button.
Whether this politician will push the button may depend on some utterly
free decision which the politician has not yet made, or perhaps on some
utterly random quantum event which will occur in his or her brain.

The right conclusion, I suggest, is that the
doomsday argument runs smoothly only as a way of reducing very great
confidence in a long future for humans: confidence, in effect, that a
long future for humans is already “there” or “as good as there”, in the
sense that it is inevitable or very near inevitable. If you lack this
sort of confidence, then the argument perhaps ought not to influence
you all that much.

It is because of this that my own confidence in
a long and heavily populated future for humans, a future in which they
spread right across the galaxy, can be as high as it is. It stands at a
little above fifty per cent. If the doomsday argument worked really
smoothly, then my confidence would instead be well below one per cent.

People worrying about human extinction have
also raised other problems with which philosophers have struggled. The
main one is this. If human extinction seemed imminent, then ought we to
try to prevent it?

For a start, is it ever a fact, in any
straightforward sense of the word “fact”, that we ought to do this or
that? So-called facts about what we ought to do: may not these be
simply matters of taste, or of how we order one another around, or of
something similar? And next, could it ever be a fact that people who
had not been born had a right to be born?

Would not
these be “merely
possible people”, so that if they never got to be born (because,
perhaps, the human race had been wiped out by biological warfare) then
this would not really matter? Which real people would have been
harmed? How could the possible people have a right to be indignant
about never being born? Merely possible people cannot be indignant,
can they?

Here let me just comment that, when my book
settled down to examining the threats to the human race, threats from
faulty philosophy were on the list. I know of no adequate reason for
denying that facts about good and bad are as much “out there in the
real world” as the fact that two and two make four; and yet, you will
find plenty of philosophers who deny it.

Again, you will
find plenty of
philosophers who run the argument that merely possible people are not
real people, and that therefore there can be no basic duty to keep the
human race in existence. And you will find other philosophers who think
that getting rid of the human race would be really rather a good thing,
because of all the unhappy humans who would live if the race continued
onwards. Even one unhappy human per million would be altogether too
much, according to some.

You will actually find a few philosophers, for
instance
Schopenhauer, arguing that virtually every human is more
unhappy than happy, so that it would be wonderful if humans could be
made extinct.

There are all sorts of difficult philosophical
points in this area, and it is intellectually right and proper to
recognize just how difficult they can get. Still, I urge you not to be
persuaded by most of the philosophers who discuss them. If you were,
then you could soon find yourself saying that the extinction of all
humans would not be tragic.

I think it would be. Advanced intelligence, or
even just anything worth calling life and consciousness, may arrive in
a galaxy only through a great deal of luck. It might actually be that
humans are the only intelligent species in our entire universe. If they
survived the next century or two, then their descendants would quite
probably spread right through the Milky Way and then possibly through
other galaxies too. The lives of these descendants could be quite a bit
happier than ours, and there could be hundreds of trillions of them.

Biosphere 2

All this might depend on whether a few billion
dollars were spent on building artificial refuges, so that a few
hundred humans could survive nuclear or biological warfare. Yet almost
nobody is interested in building them. The first major artificial
habitat,
Biosphere 2, cost only a
hundred and fifty million dollars. Designed with the help of some
rather sloppy science, it soon failed to be self-sustaining. Since
then, nobody has even tried. Meanwhile, the global armaments industry
has been swallowing about a thousand billion dollars
annually.