Digital Physics:
Is our universe just the output of a deterministic computer program?

As a consequence of Moore's law, each decade computers get
roughly 1000 times faster per unit of cost.
Now apply Moore's law to the video game business: as the virtual
worlds get more convincing, many people will spend more time
in them. Soon most universes will be virtual; only one (the
original) will be real. Then many will be led to suspect that the
real one is a simulation as well.
Some already suspect this today.

Then the simplest explanation of our universe
is the simplest program that computes it. In 1997
Schmidhuber
pointed out [1] that the simplest such program
actually computes all possible universes with all
types of physical constants and laws, not just ours.
His essay also talks about universes
simulated within parent universes in nested fashion,
and about universal complexity-based measures on
possible universes.
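The idea that one short program can compute all possible universes can be illustrated by dovetailing: interleave infinitely many computations so that every one of them advances forever. The sketch below is only an illustration of the scheduling idea; the generator-based "universe programs" are invented stand-ins for the programs of a universal machine.

```python
from itertools import islice

def dovetail(programs):
    """Interleave many universe computations so that each one advances
    forever: phase i admits the i-th program and then runs every program
    admitted so far for one more step."""
    active = []
    gens = iter(programs)
    while True:
        try:
            active.append(next(gens))   # admit one more program per phase
        except StopIteration:
            pass
        for i, g in enumerate(active):
            yield i, next(g)            # (program index, its next output)

def universe(k):
    """Toy 'universe program' number k: emits the multiples of k."""
    n = 0
    while True:
        yield n
        n += k

# run the first 10 interleaved steps over toy universes 1..99
steps = list(islice(dovetail(universe(k) for k in range(1, 100)), 10))
```

No single universe ever monopolizes the machine, yet each one is computed ever further; that is the scheduling trick behind computing "all" of them at once.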

Prior measure problem:
if every possible future exists, how can we predict anything?

Unfortunately, knowledge about the program that computes all
computable universes is
not yet sufficient to make good predictions about the future of
our own particular universe.
Some of our possible futures are obviously more likely than others.
For example, tomorrow the sun will probably shine in the Sahara desert.
To predict according to Bayes' rule, what we need is a
prior probability distribution, or measure, on the possible futures.
Which one is the right one?
It is not the uniform one: if all futures were equally
likely, then our world might as well dissolve
right now. But it does not.
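As a toy illustration of such Bayesian prediction (the hypothesis names and all numbers below are invented for the example):

```python
def posterior(prior, likelihood, data):
    """Bayes' rule: P(h | data) is proportional to P(h) * P(data | h)."""
    joint = {h: p * likelihood(h, data) for h, p in prior.items()}
    z = sum(joint.values())
    return {h: j / z for h, j in joint.items()}

# Two invented hypotheses about how the world is computed; the prior is
# deliberately non-uniform, favoring the lawful world:
prior = {"sunny-world": 0.9, "chaotic-world": 0.1}

def likelihood(h, data):
    # a lawful world makes the observed sunny day very likely;
    # a chaotic world makes it a coin flip (numbers are illustrative)
    return 0.99 if h == "sunny-world" else 0.5

post = posterior(prior, likelihood, "sun shone in the Sahara today")
```

With a uniform prior over all futures no such calculation helps; the entire predictive content sits in the non-uniform prior, which is exactly why choosing the right one matters.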

Some think the famous anthropic principle (AP) might help us here.
But it does not, as will be seen below.

Zuse's thesis vs quantum physics?
Einstein
always claimed that `God does not play dice,'
and back in 1969 Zuse
published a book about
what is known today as
Zuse's thesis:
the universe is being deterministically computed on some sort of
giant but discrete computer - compare the
PDF of MIT's translation (1970)
of Zuse's book (1969).
Contrary to common belief, Heisenberg's uncertainty
principle is
no physical evidence against
Zuse's thesis - Nobel laureate
't Hooft
agrees. Compare [7].

The anthropic principle (AP)
essentially just says that the conditional probability of finding
oneself in a universe compatible with one's existence will always
remain 1. AP by itself
does not have any additional predictive power.
For example, it does not predict that tomorrow
the sun will shine in the Sahara,
or that gravity will keep working in quite the same way -
neither rain in the Sahara nor certain changes of gravity
would destroy us, and thus both would be allowed by the AP.
To make
nontrivial predictions about the future we need more than AP - see below!

Predictions for universes sampled from
any computable probability distribution

To make better predictions,
can we postulate
any reasonable nontrivial constraints on the
prior probability distribution on our possible futures?
Yes! The distribution should at least be computable in the
limit.
That is, there should exist a program that takes
as input
any beginning of the universe's history
as well as a possible next event,
and produces an output converging on the conditional
probability of that event.
If there were no such program, we could not
even formally specify our universe, let alone write
reasonable scientific papers about it.
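A minimal sketch of what "computable in the limit" means here, using an invented toy machine (the machine and the 2^-length program weighting are stand-ins; the point is the sequence of ever-better approximations to a conditional probability):

```python
from itertools import product

def run(program, steps):
    """Toy machine: a bit-tuple program outputs itself repeated forever."""
    out = []
    while len(out) < steps:
        out.extend(program)
    return tuple(out[:steps])

def approx_conditional(history, event, n):
    """n-th approximation of P(event | history): sum the 2^-len(p)
    weights of all programs of length <= n whose output extends the
    history, and of those that additionally predict the event."""
    m_hist = m_ext = 0.0
    for length in range(1, n + 1):
        for p in product((0, 1), repeat=length):
            out = run(p, len(history) + 1)
            if out[:len(history)] == tuple(history):
                w = 2.0 ** -length
                m_hist += w
                if out[len(history)] == event:
                    m_ext += w
    return m_ext / m_hist if m_hist else 0.0

# successive approximations of P(next bit = 0 | history 0,0)
approx = [approx_conditional((0, 0), 0, n) for n in (2, 3, 4)]
```

Each approximation is computable; no single one is exact, but the sequence converges - that is all the limit-computability assumption demands of a universe's true distribution.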

It turns out that the very weak
assumption of a limit-computable
probability distribution
is enough to make quite nontrivial predictions about
our own future. This is the topic of Schmidhuber's work (2000) on
Algorithmic Theories of Everything [2].
Sections 2-5 led to
generalizations of Kolmogorov's and Solomonoff's
complexity and probability measures [3,4].

There is a fastest way of computing
all computable universes!

Speed Prior

Sooner or later, observers evolving inside simulated universes
will build their own computers and nested virtual
worlds, extrapolate from there, and get the idea that their
"real" universe is a "simulation" as well, speculating that
their God also suffers from resource constraints. This
will naturally lead them to the Speed Prior [2, 4], which assigns
low probability to universes that are hard to compute.
Then they will make
non-traditional
Speed Prior-based predictions
about their future. Shouldn't we do so, too?
Compare [4, 5].
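As a rough, non-authoritative illustration of the idea (this simplification, which merely divides the usual 2^-length program weight by the program's runtime, is not the exact definition in [2, 4]):

```python
def speed_weight(prog_len, runtime):
    """Crude stand-in for a Speed Prior-style weight (NOT the exact
    definition in [2, 4]): the classic 2^-len(p) program weight,
    further divided by the program's runtime, so hard-to-compute
    universes receive less prior mass."""
    return 2.0 ** -prog_len / runtime

# two hypothetical universes whose programs are equally short, but one
# is a thousand times harder to compute:
fast_universe = speed_weight(prog_len=10, runtime=10 ** 3)
slow_universe = speed_weight(prog_len=10, runtime=10 ** 6)
```

Under a description-size-only prior these two universes would be equally likely; penalizing runtime is what makes the Speed Prior's predictions non-traditional.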

The work mentioned above focused on description size and
completely ignored computation time.
From a pragmatic point of view, however,
time seems essential.
For example, in the near future
kids will find it natural to play God
or "Great Programmer" by creating
on their computers their own universes inhabited by simulated
observers. But since
most computable universes are hard to compute, and since
resources will always be limited despite faster and faster
hardware, self-appointed Gods will always have to
focus on relatively
few universes that are relatively easy to compute.
So which is the best universe-computing algorithm for any
decent "Great Programmer" with resource constraints? It turns out there
is an optimally fast algorithm which computes each universe
as quickly as this universe's unknown (!) fastest program,
save for a constant factor that does not depend on universe
size. In fact, this algorithm is essentially identical
to the one on page 1 of Schmidhuber's above-mentioned
1997 paper [1].
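The phase scheme behind this optimally fast algorithm can be sketched as follows. The toy `run` machine below is an invented stand-in for a resource-bounded universal computer, so only the scheduling, not the machine, reflects [1]:

```python
from itertools import product

def run(p, steps):
    """Toy stand-in for a resource-bounded universal machine: one step
    emits one program bit, cycling through the program forever."""
    return tuple(p[k % len(p)] for k in range(steps))

def fast_phases(run, max_phase):
    """Phase scheme of the asymptotically fastest algorithm: in phase i,
    every program p with len(p) <= i gets 2**(i - len(p)) steps, so a
    program of length l receives a constant fraction 2**-l of all
    compute time across phases."""
    outputs = {}
    for i in range(1, max_phase + 1):
        for length in range(1, i + 1):
            for p in product((0, 1), repeat=length):
                # a real implementation would resume p where it left off;
                # this toy machine simply reruns it with a larger budget
                outputs[p] = run(p, 2 ** (i - length))
    return outputs

outs = fast_phases(run, 4)
```

Because every program keeps a fixed share of the total compute time, each universe is produced only a constant factor slower than by its own unknown fastest program.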

Given any limited computer, the easily computable universes
will come out much faster than the others. Obviously, the first
observers to evolve in any universe will find themselves in
a quickly computable one.