Sunday, March 12, 2006

Reading List: The Cosmic Landscape

Leonard Susskind (and, independently, Yoichiro Nambu) discovered
the original hadronic string theory in 1969. He has been a prominent
contributor to a wide variety of topics in theoretical physics over
his long career, and is a talented explainer of abstract theoretical
concepts to the general reader. This book communicates both the
physics and cosmology of the “string landscape” (a term he
coined in 2003) revolution which has swiftly become the consensus
among string theorists, and the intellectual excitement of
those exploring this new frontier.

The book is subtitled “String Theory and the Illusion of
Intelligent Design” which may be better
marketing copy—controversy sells—than descriptive of the
contents. There is very little explicit discussion of intelligent
design in the book at all except in the first and last
pages, and what is meant by “intelligent design”
is not what the reader might expect: not design arguments
about the origin and evolution of life, but rather the apparent
fine-tuning of the physical constants of our universe, the
cosmological constant in particular, without which life as
we know it (and, in many cases, not just life but even atoms,
stars, and galaxies) could not exist. Susskind is eloquent in
describing why the discovery that the cosmological
constant, which virtually every theoretical physicist would
have bet had to be precisely zero, is (apparently) a
tiny positive number, seemingly fine-tuned to one hundred
and twenty decimal places, “hit us like the proverbial ton
of bricks” (p. 185)—here was a number which not only
did theory suggest should be 120 orders of magnitude greater, but
which, had it been slightly larger than its minuscule value,
would have precluded structure formation (and hence life) in
the universe. One can imagine some as-yet-undiscovered
mathematical explanation why a value is precisely zero
(and, indeed, physicists did: it's called supersymmetry,
and searching for evidence of it is one of the reasons they're
spending billions of taxpayer funds to build the
Large Hadron
Collider), but when you come across a dial set with the
almost ridiculous precision of 120 decimal places and it's
a requirement for our own existence, thoughts of a benevolent
Creator tend to creep into the mind of even the most
doctrinaire scientific secularist. This is how the appearance of
“intelligent design” (as the author defines it)
threatens to get into the act, and the book is an
exposition of the argument string theorists and cosmologists
have developed to contend that such apparent design is entirely an illusion.

The very title of the book, then, invites us to contrast two
theories of the origin of the universe: “intelligent
design” and the “string landscape”. So,
let's accept that challenge and plunge right in, shall we?
First of all, permit me to observe that despite frequent claims
to the contrary, including some in this book, intelligent
design need not presuppose a supernatural being operating
outside the laws of science and/or inaccessible to discovery
through scientific investigation. The origin of life on
Earth due to deliberate seeding
with engineered organisms by intelligent extraterrestrials
is a theory of intelligent design which has no supernatural
component, evidence of which may be discovered by science
in the future, and which is sufficiently plausible to have
persuaded Francis Crick, co-discoverer of the structure
of DNA, that it was the most likely explanation.
If you observe a watch, you're entitled to infer the existence
of a watchmaker, but there's no reason to believe he's a
magician, just a craftsman.

If we're to compare these theories, let us begin by stating them
both succinctly:

Theory 1: Intelligent Design. An intelligent being
created the universe and chose the initial conditions and
physical laws so as to permit the existence of beings like
ourselves.

Theory 2: String Landscape. The laws of physics and initial
conditions of the universe are chosen at random from among
10^500 possibilities, only a vanishingly small fraction
of which (probably no more than one in 10^120) can
support life. The universe we observe, which is infinite in
extent and may contain regions where the laws of physics differ,
is one of an infinite number of causally disconnected “pocket
universes” which spontaneously form from quantum
fluctuations in the vacuum of parent universes, a process
which has been occurring for an infinite time in the past and
will continue in the future, time without end. Each of these
pocket universes which, together, make up the “megaverse”,
has its own randomly selected laws of physics, and hence the
overwhelming majority are sterile. We find ourselves in one of the
tiny fraction of hospitable universes because if we weren't
in such an exceptionally rare universe, we wouldn't exist to make
the observation. Since there are an infinite number of universes,
however, every possibility not only occurs, but occurs an
infinite number of times, so not only are there an infinite number
of inhabited universes, there are an infinite number identical
to ours, including an infinity of identical copies of yourself
wondering if this paragraph will ever end. Not only does the megaverse
spawn an infinity of universes, each universe itself splits into two
copies every time a quantum measurement occurs. Our own
universe will eventually spawn a bubble which will destroy all life
within it, probably not for a long, long time, but you never
know. Evidence for all of the other universes is hidden behind
a cosmic horizon and may remain forever inaccessible to observation.

Paging Friar Ockham! If unnecessarily multiplied hypotheses are
stubble indicating a fuzzy theory, it's pretty clear which of
these is in need of the razor! Further, while one can imagine
scientific investigation discovering evidence for Theory 1,
almost all of the mechanisms which underlie Theory 2 remain,
barring some conceptual breakthrough equivalent to looking inside
a black hole, forever hidden from science by an impenetrable
horizon through which no causal influence can propagate. So
severe is this problem that chapter 9 of the book is devoted to
the question of how far theoretical physics can go in the total
absence of experimental evidence. What's more, unlike virtually
every theory in the history of science, which attempted to
describe the world we observe as accurately and uniquely as possible, Theory 2
predicts every conceivable universe and says, hey,
since we do, after all, inhabit a conceivable universe, it's
consistent with the theory. To one accustomed to the crystalline
inevitability of Newtonian gravitation, general relativity, quantum
electrodynamics, or the laws of thermodynamics, this seems by
comparison like a California blonde saying
“whatever”—the cosmology of despair.

Scientists will, of course, immediately rush to attack Theory 1, arguing
that a being such as the one it posits would necessarily be
“indistinguishable from magic”, capable of explaining anything,
and hence unfalsifiable and beyond the purview of science. (Although
note that on pp. 192–197 Susskind argues that Popperian
falsifiability should not be a rigid requirement for a theory to be
deemed scientific. See Lee Smolin's
Scientific
Alternatives to the Anthropic Principle for the
argument against the string landscape theory on the
grounds of falsifiability, and the 2004
Smolin/Susskind
debate for a more detailed discussion of this question.)
But let us look more deeply at the attributes of what might be called the
First Cause of Theory 2. It not only permeates all of our universe,
potentially spawning a bubble which may destroy it and replace it
with something different, it pervades the abstract landscape of
all possible universes, populating them with an infinity of
independent and diverse universes over an eternity of time:
omnipresent in spacetime. When a universe is created,
all the parameters which govern its ultimate evolution
(under the probabilistic laws of quantum mechanics, to be sure)
are fixed at the moment of creation: omnipotent to
create any possibility, perhaps even
varying the
mathematical structures underlying the laws of physics.
As a budded-off universe evolves, whether a sterile formless
void or teeming with intelligent life, no information is
ever lost in its quantum evolution, not even down a black
hole or across a cosmic horizon, and every quantum event splits the
universe and preserves all possible outcomes. The ensemble of
universes is thus omniscient of all its contents.
Throw in intelligent and benevolent, and you've got the
typical deity, and since you can't observe the parallel universes
where the action takes place, you pretty much have to take it on
faith. Where have we heard that before?

Lest I be accused of taking a cheap shot at string theory, or
advocating a deistic view of the universe, consider the
following creation story which, after John A. Wheeler, I shall
call “Creation without the Creator”. Many extrapolations
of continued exponential growth in computing power envision
a technological
singularity in which super-intelligent computers
designing their own successors rapidly approach the
ultimate
physical limits on computation. Such computers would be
sufficiently powerful to run highly faithful simulations of
complex worlds, including intelligent beings living within
them who need not be aware they were inhabiting a simulation,
but thought they were living at the “top level”,
who eventually passed through their own technological
singularity, created their own simulated universes,
populated them with intelligent beings who, in turn,…world
without end. Of course, each level of simulation imposes a
speed penalty (though, perhaps not much in the case of
quantum computation), but it's not apparent to the
inhabitants of the simulation since their own perceived time
scale is in units of the “clock rate” of the
simulation.

If an intelligent civilisation develops to the point where it
can build these simulated universes, will it do so? Of course
it will—just look at the fascination crude video game
simulations have for people today. Now imagine a simulation as
rich as reality and unpredictable as tomorrow, actually creating
an inhabited universe—who could resist? As unlimited computing
power becomes commonplace, kids will create innovative universes
and evolve them for billions of simulated years for science fair
projects. Call the mean number of simulated universes created by
intelligent civilisations in a given universe (whether top-level
or itself simulated) the branching factor. If this
is greater than one, and there is a single top-level
non-simulated universe, then it will be outnumbered by simulated
universes which grow exponentially in numbers with the depth
of the simulation. Hence, by the Copernican principle, or
principle of mediocrity, we should expect to find ourselves
in a simulated universe, since they vastly outnumber the
single top-level one, which would be an exceptional place
in the ensemble of real and simulated universes. Now here's the
point: if, as we should expect from this argument, we do live
in a simulated universe, then our universe is the product
of intelligent design and Theory 1 is an absolutely correct
description of its origin.
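The arithmetic behind this argument is easy to sketch. The toy model below is my own illustration (the branching factor b and maximum depth d are arbitrary assumptions, not the author's): if every universe, real or simulated, spawns b simulated universes, then depth k contains b^k universes, and the single top-level universe is swamped as soon as b exceeds one.

```python
# Toy model of the branching-factor argument: one top-level
# universe; every universe (real or simulated) spawns b
# simulated universes, down to simulation depth d.

def simulated_fraction(b, d):
    """Fraction of all universes at depths 0..d that are simulated."""
    total = sum(b**k for k in range(d + 1))   # 1 + b + b^2 + ... + b^d
    return (total - 1) / total                # all but the one top level

# Even a modest branching factor makes the top level a rarity:
print(simulated_fraction(2, 10))    # ~0.9995
print(simulated_fraction(10, 5))    # ~0.99999
```

By the principle of mediocrity, a randomly chosen observer in this ensemble should overwhelmingly expect to be in one of the simulated levels.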

Suppose this is the case: we're inside a simulation designed by
a freckle-faced superkid for extra credit in her fifth grade
science class. Is this something we could discover, or must it,
like so many aspects of Theory 2, be forever hidden from our
scientific investigation? Surprisingly, this variety of Theory 1
is quite amenable to experiment: neither revelation nor faith
is required. What would we expect to see if we inhabited a
simulation? Well, there would probably be a discrete time step
and granularity in position fixed by the time and position
resolution of the simulation—check, and check: the Planck
time and distance appear to behave this way in our universe.
There would probably be an absolute speed limit to constrain the
extent we could directly explore and impose a locality constraint
on propagating updates throughout the simulation—check:
speed of light. There would be a limit on the extent of the
universe we could observe—check: the Hubble radius is an
absolute horizon we cannot penetrate, and the last scattering
surface of the cosmic background radiation limits electromagnetic
observation to a still smaller radius. There would be a limit on
the accuracy of physical measurements due to the finite precision
of the computation in the simulation—check: Heisenberg
uncertainty principle—and, as in games, randomness would be
used as a fudge when precision limits were hit—check: quantum
mechanics.
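For concreteness, the “granularity” entries in this checklist refer to the Planck scales, which follow from ħ, G, and c by dimensional analysis. A back-of-the-envelope computation (a sketch using CODATA constant values, not anything from the book):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

planck_time = math.sqrt(hbar * G / c**5)     # ~5.39e-44 s
planck_length = math.sqrt(hbar * G / c**3)   # ~1.62e-35 m

print(planck_time, planck_length)
```

These are the candidate “time step” and “grid spacing” a simulation hypothesis would point to.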

Might we expect surprises as we subject our simulated universe
to ever more precise scrutiny, perhaps even astonishing the
being which programmed it with our cunning and deviousness (as
the author of any software package has experienced at the
hands of real-world users)? Who knows, we might run into
round-off errors which “hit us like a ton
of bricks”! Suppose there were some quantity, say, that
was supposed to be exactly zero but, if you went and actually
measured the geometry way out there near the edge and crunched
the numbers, you found out it differed from zero in the
120th decimal place. Why, you might be as shocked as the
naïve Perl programmer who ran the program
“printf("%.18f", 0.2)” and was
aghast when it printed “0.200000000000000011”
until somebody explained that with 53 bits of significand
in IEEE double-precision floating point, you only get about
16 decimal digits (log10 2^53) of precision.
So, what does a round-off in the 120th digit imply? Not
Theory 2, with its infinite number of infinitely reproducing
infinite universes, but simply that our Theory 1 intelligent designer
used 400-bit numbers (log2 10^120 ≈ 400)
in the simulation and didn't count on our noticing—remember
you heard it here first, and if pointing this out causes the simulation
to be turned off, sorry about that, folks!
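The floating-point surprise above is easy to reproduce (here in Python rather than Perl; the result is identical, since both use IEEE doubles), along with the back-of-the-envelope bit count for a designer who wanted about 120 good decimal digits:

```python
import math

# The same surprise as the Perl printf: 0.2 has no exact binary
# representation, so an IEEE double stores the nearest value
# representable with a 53-bit significand.
print(f"{0.2:.18f}")                    # 0.200000000000000011

# Decimal digits carried by a 53-bit significand:
print(math.log10(2**53))                # ~15.95, i.e. about 16 digits

# Bits needed to carry ~120 decimal digits, the precision to which
# the cosmological constant appears fine-tuned:
print(math.ceil(math.log2(10**120)))    # 399, i.e. roughly 400 bits
```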
Surprises from
future experiments which would be suggestive (though not probative)
that we're in a simulated universe would include failure to find any
experimental signature of quantum gravity (general relativity
could be classical
in the simulation, since potential conflicts with quantum mechanics
would be hidden behind event horizons in the present-day universe, and
extrapolating backward to the big bang would be meaningless if the
simulation were started at a later stage, say at the time of big bang
nucleosynthesis), and discovery of limits on the ability to superpose
wave functions for quantum computation which could result from limited
precision in the simulation as opposed to the continuous complex
values assumed by quantum mechanics. An interesting theoretical
program would be to investigate feasible experiments which, by
magnifying physical effects similar to proposed searches for
quantum gravity signals,
would detect round-off errors of magnitude comparable to the
cosmological constant.

But seriously, this is an excellent book and anybody who's
interested in the strange direction in which the string
theorists are veering these days ought to read it; it's
well-written, authoritative, reasonably fair to opposing
viewpoints (although I'm surprised the author didn't address
the background spacetime criticism of string theory
raised so eloquently by Lee Smolin), and provides a roadmap
of how string theory may develop in the coming
years. The only nagging question you're left with after finishing
the book is whether, after thirty years of theorising which
comes to the conclusion that everything is predicted and
nothing can be observed, it's about science any more.