How to Simulate the Universe

By John Tierney, August 15, 2007, 9:14 am

A lot of readers don’t want to believe they’re living in a computer simulation. The last post prompted a virtual avalanche of comments from people insisting it would be impossible for any computer to create a virtual world with virtual people with virtual nervous systems.

These readers might be right, but many of them dismissed the idea without studying the argument in Nick Bostrom’s paper. He didn’t just steal an idea from old sci-fi stories; he considered how to do it and how much computational power would be required. A simulation, as he explains, wouldn’t have to include every single microscopic entity in the universe — “only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don’t notice any irregularities.”

And what if there were glitches in the software? “Should any error occur, the director could easily edit the states of any brains that have become aware of an anomaly before it spoils the simulation. Alternatively, the director could skip back a few seconds and rerun the simulation in a way that avoids the problem.”

After reviewing estimates for the processing power of human brains and the projections for the power of future computers, Dr. Bostrom concludes:

It thus seems plausible that the main computational cost in creating simulations that are indistinguishable from physical reality for human minds in the simulation resides in simulating organic brains down to the neuronal or sub-neuronal level. While it is not possible to get a very exact estimate of the cost of a realistic simulation of human history, we can use ~10^33 – 10^36 operations as a rough estimate. As we gain more experience with virtual reality, we will get a better grasp of the computational requirements for making such worlds appear realistic to their visitors. But in any case, even if our estimate is off by several orders of magnitude, this does not matter much for our argument. We noted that a rough approximation of the computational power of a planetary-mass computer is 10^42 operations per second, and that assumes only already known nanotechnological designs, which are probably far from optimal. A single such computer could simulate the entire mental history of humankind (call this an ancestor-simulation) by using less than one millionth of its processing power for one second. A posthuman civilization may eventually build an astronomical number of such computers. We can conclude that the computing power available to a posthuman civilization is sufficient to run a huge number of ancestor-simulations even if it allocates only a minute fraction of its resources to that purpose. We can draw this conclusion even while leaving a substantial margin of error in all our estimates.
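The arithmetic in that passage is easy to check. A minimal sketch in Python, using only the order-of-magnitude figures Bostrom quotes (all rough estimates, not measured values):

```python
# Order-of-magnitude check of the figures quoted above (rough estimates
# from Bostrom's paper, not measured values).
ancestor_sim_ops = 1e36        # upper rough estimate: simulate all human mental history
planetary_ops_per_sec = 1e42   # nanotech planetary-mass computer, per second

# Fraction of one second of the computer's full output that the job needs:
fraction = ancestor_sim_ops / planetary_ops_per_sec
# fraction is ~1e-06, i.e. "less than one millionth of its processing
# power for one second," exactly as the quoted passage claims.
print(fraction)
```

Even if the brain-simulation estimate is off by a factor of a million, the job still fits in a single second of one such computer's time.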

Now, you may think it’s unlikely that civilization on Earth could last long enough to build that kind of computer. But even if humans don’t survive, or don’t figure out how to build this computer, some other civilization in the universe might. And as long as some intelligent beings out there run a lot of simulations, the central argument in Dr. Bostrom’s paper applies: the number of simulated beings in the universe grows so much larger than the “real” beings that we should figure it’s likely we’re living in a simulated world.

A lot of readers said it’s a waste of time to speculate about this possibility — and maybe for them it is. (Although I don’t understand why they spent so much time complaining about wasting time.) But I’d like to hear from the rest of readers who have answers to the other two questions I raised in the last post: Would it be ethical to create a simulated world with sentient beings? And what’s the best strategy for living and surviving in a simulation?

Assuming we are a simulation, the creator of the simulation now takes on the role of god to us. By causing so many people in the world pain, one could say they are unethical. But one could also argue that they just set some initial conditions and we are the ones causing the pain in the world, in which case it’s pretty much what would have happened anyway! I guess it all depends on your interpretation of the creator’s purposes and actions.

My guess is that if it is a simulation, it’s for research purposes. Surviving till the next simulation depends on conforming to whatever criteria the user has set for this study. “I only want the really mean people for my next one,” for example.

Okay, let’s play with simulated people. I think I would not mind simulating people in a simulated world. Why am I not a simulated person whose mind is controlled by an editor, just like the 6.6 bil … . Since it is a simulated world, though, I may need to send signs of my presence to show them the way. I would create a character to visit a rundown California beach and have weird things happen to set them right. I would call this person ‘John from Premium Cable’. His appearance would allow me to tend them as the Deity I would be. I would use my simulated world to raise money to pay my vast power bill.
John from Premium Cable would keep my simulated characters from looking at the physical world too closely, since it would cut down on the editing time on millions of minds.
I would not want them to look around at their small world too much, or they would find something, since I am a perfect coder and never have to deal with the challenges of real-world simulators.

I don’t see how this theory interferes with the idea of God at all. In fact, for me, this hypothesis makes the idea of the “Prime Mover” more translatable. The “people” running these programs don’t even need to be humanoid. Instinctive fear obviates ignorance.
I’m betting on being interesting as opposed to moral. Let loose the Id! Be cannon fodder. Be revolutionary! Be the best Sim. There is real love for characters.

The best strategy is the same as it has always been. One should try to do good work, avoid unnecessary risk, be of good moral character, take care of family and enjoy life, but not to the point that unnecessary risks are taken or morals are violated. Perhaps we are all simulations, but that isn’t really any different than being ripples in space-time, as many in science believe.

The point is, either way we do not really direct this show, we are but poor players upon the stage. Yes, reality could end with a “game over” message, or with a wandering black hole that sucks in the entire solar system in an instant with no warning, or via any one of thousands of other possible ends. But that has always been the case and it will always be the case. It doesn’t change the formula for living a good life.

If, however, one does believe there may be some super programmer or “creator” in charge who monitors our every thought and action for research or entertainment, praying for forgiveness might not be a bad strategy. That’s just a very logical suggestion, not a sermon.

#1 is a poorly posited question.
Ethics considers how we should conduct ourselves in relation to others.
Who, here, are the “others”?
The Simulators’ fellow Simulators, or the Simulacrons?
What does a painter owe to her canvas? A composer to his ink?

I’m not sure about the ethical side, since I’m not posthuman (I’m not even transhuman, sadly). I would guess, however, that posthumans might regard it as ethical to create intelligence — possibly even imperative, if they see intelligence and the creation/modeling thereof as some kind of critical metric.

As far as sim-survival: Be interesting, and try not to wreck things. Perhaps the Prime User will be intrigued enough to keep you around for the next iteration.

Art. The value of such a simulation to a posthuman designer would not come from the sick pleasure of watching us fight but from watching us create new meaning. We can safely assume that a posthuman would be bored by the meaningless violence and the constant, mundane struggle man endures in this world. I think the kind of posthuman who would set up and run such a complex simulation would be far more interested in the open-ended creative potential of his sims in making art. In answer to your question, creating beauty in color and sound and movement, in form, language, spontaneity, abstraction – the creation of new meaning would be the best “strategy” for survival in a simulation, and perhaps the only truly meaningful human endeavor regardless.

Though I still think the possibility of simulation is not so high as Bostrom predicts (his inevitability argument rests on a fundamental assumption about how *we* will eventually become post-humans, then concedes that *we* are probably not adequate representations of pre-post-humans, which seems to break the logical chain), the ethics of it – IF it is possible, SHOULD we simulate – is very much worth considering.

First off, depending on the intention, I’m not sure it’s necessary to actually create conscious beings in your simulated universe. Recall Searle’s Chinese Room thought experiment, where a person can “easily” (at least conceivably) be fooled by a non-conscious rule-machine into thinking it understands Chinese (//en.wikipedia.org/wiki/Chinese_room). If it’s a game, that’s much easier to code than real thought, and Occam’s razor DOES apply to programming efficiency. All you want is for a person playing the sim to “think” they’re encountering other conscious people. Given ANY additional ethical considerations, then, it seems unlikely games will need to reach that level.
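To make the point concrete, here is a deliberately dumb sketch (my own toy, not anything from Searle or Bostrom) of a Chinese-Room-style responder: a pure lookup table that shuffles symbols it does not understand, which may be all a game needs to seem populated.

```python
# A toy "Chinese Room": a rulebook mapping inputs to canned replies.
# Nothing here understands anything; it only matches symbols to symbols,
# yet a casual player might take it for another conscious person.
RULEBOOK = {
    "hello": "Hi there. Nice weather in the sim today.",
    "how are you?": "Can't complain. You?",
}

def room_reply(message: str) -> str:
    """Follow the rulebook; fall back to a vague, conversation-sustaining reply."""
    return RULEBOOK.get(message.strip().lower(), "Hmm, tell me more.")

print(room_reply("Hello"))          # Hi there. Nice weather in the sim today.
print(room_reply("What is love?"))  # Hmm, tell me more.
```

Scaling the rulebook up until the illusion holds is exactly the cheap shortcut the comment suggests a game designer would take instead of simulating real thought.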

Psychological experiments are another matter entirely, though. Perhaps you want a conscious mind, or a whole society of them, to see how they’ll react to varying stimuli, interact with each other etc.

Now, it is clearly against ethical guidelines to do anything close to enforcing a “simulated” world on actual bio-humans. Even a *voluntary* scenario which resulted in most people suffering significantly was widely condemned and forced a revision of ethical psychological practices – that was the Stanford prison experiment (//en.wikipedia.org/wiki/Stanford_prison_experiment).

The REAL question here, I believe, is whether an adequately simulated being, who exists only in silicon, should get the same rights as a “bio-human.” Bostrom should think so, since he takes the issue of “substrate-independence” as a given, and necessary to his argument. That is to say, his argument presupposes that making a silicon- (or whatever non-carbon-) based computer conscious is POSSIBLE. If it is, then there is no difference anyway between the way the sim thinks and the way we think.

It suffers. It has hopes. It has dreams. It has, perhaps, free will. And like (most of) us, it doesn’t want to live in The Matrix.

Just as it would be immoral to give birth to a child specifically for the purpose of subjecting it to a lifelong psychological experiment, with NO WAY to opt out, NO CHOICE given, and obvious suffering as the result, so it would be immoral to create a conscious program for the same end.

I cannot then, at this time, think of a valid ethical justification for creating simulated beings.

One key problem of the thesis possibly not yet raised is an algorithmic one. Consider analog computers, which solve problems in real number space, instead of in computable number space (//en.wikipedia.org/wiki/Real_computer).

A simulation based on digital technology would not result in real-number observations for the agents living in that world.
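A small illustration of that discreteness, assuming standard IEEE-754 double-precision floats (my own example, not the commenter's): a digital machine's "reals" form a finite grid with gaps an in-sim observer could, in principle, probe.

```python
import math

# IEEE-754 doubles are a finite grid, not the real number line: the nearest
# representable neighbour of 1.0 sits one machine epsilon (2**-52) away.
gap = math.nextafter(1.0, 2.0) - 1.0
print(gap)                 # 2.220446049250313e-16

# Classic artifact of that granularity:
print(0.1 + 0.2 == 0.3)    # False
```

(`math.nextafter` requires Python 3.9 or later.) Whether such gaps would actually be observable from inside a simulation is, of course, exactly the open question the comment raises.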

Well if we are in a computer simulation, our laws of physics and our universe are not the “real” laws of physics or the “real” universe. The simulators might live in a world with a much finer grain of particles (making our sub-atomic particles look huge) or they might live in a much larger universe. In each case they would theoretically be able to simulate our entire universe without resorting to any kind of data compression.

This whole thought process is moot. Unless he can produce physical proof, it’s just as useful as my hypothesizing that the great spaghetti monster farted the universe into existence. Prove me wrong… Instead of wrapping his theory in mysticism, he chose science fiction.

What could possibly be unethical about it? That’d be like arguing it’s unethical to have children. That sentient creature didn’t ask to be created, or to be born into that country, religion, environment, etc.

The best strategy for survival in any structured system is always the same: learn the rules and follow them.

Even after going back and reading Bostrom’s paper, as a software engineer, I cannot shake the feeling that his concept of simulation is too simple (almost like a cheesy version of Searle’s Chinese Room puzzle with the all-powerful book).

Consider climatology, which has developed a very precise notion of how soon the computer simulations underlying weather forecasts choke on their lack of precision, given the dynamics of the systems they are modeling. A “you don’t need that much precision in the simulation” argument is really equivalent to saying “you don’t have competent climatologists,” because their daily experiences would call the bluff. (The programmer can’t patch around this because the model from which the patch would come could not be any less complex.)
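The climatology point rests on sensitive dependence on initial conditions; a minimal sketch using the logistic map, a standard toy chaotic system (my illustration, not a climate model):

```python
def max_divergence(x0, eps=1e-12, r=4.0, steps=60):
    """Largest gap between two logistic-map trajectories started eps apart.

    The logistic map x -> r*x*(1-x) with r=4 is fully chaotic: tiny initial
    differences roughly double every step, so a perturbation of 1e-12 is
    amplified to an order-1 disagreement within a few dozen iterations.
    This is the mechanism that makes finite-precision weather simulation
    lose predictive power so quickly.
    """
    x, y, worst = x0, x0 + eps, 0.0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        worst = max(worst, abs(x - y))
    return worst

print(max_divergence(0.4))  # typically order 1: the runs disagree completely
```

Any rounding a cost-cutting simulator introduced would snowball the same way, which is what makes the "find the cut corners" strategy below plausible.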

So, for my money, the best strategy for living and surviving in a simulation is to do what my software engineering buddies do to beat computer games: find the corners that are being cut and exploit them.

Anyone who is familiar with the “Conversations With God” books by Neal Walsh recognizes this concept. We and everything around us are indeed physical manifestations of the one energy of all that is. There is only one being manifesting as the many individual entities around us. We are living in a simulation very much like the Holodeck on Star Trek. All possibilities have been created within the Matrix, and we are experiencing our world through a bubble of perception. We are all tied together via collective consciousness, moving on the Matrix from one “reality” to another, creating the illusion of time. The point is that we are creating our simulation with our beliefs, thoughts, words and deeds, and can create any “reality” that we can imagine! How about LIFE, LOVE, UNITY AND PEACE!

I can, at least, explain why people complain about the waste of time. They (we) foolishly hope that in this world of crumbling infrastructures, sensible persons may turn their attention to working on real solutions to real problems.

I haven’t scrolled through all the comments; so, someone may have covered this already.
Hasn’t the gist of these questions, and the answer, been the core of Buddhism for centuries already?
Basically, the very simple idea that there is only one real Being in the Universe? That the universe is being ‘dreamed up’ by this singular force / energy / consciousness; a Being who would otherwise be quite alone and bored?
There’s a Micronesian creation myth mentioned in the book ‘God Had A Dog’, I think it was, explaining that the Universe was the result of a ‘whim’ in the mind of an all-powerful Being. Of course everything’s very convincingly ‘real’ ~ there’s no ‘comparable’. It is what it is.

The ethical questions are made moot insofar as the ‘winners’ and ‘losers’ are not in reality distinct entities to begin with. The Cosmic Dreamer is lost in the Dream. Shiva creates and destroys; but everything is always Shiva anyway.

The modern scientific / rational approach; say, a book like Tipler’s ‘Physics of Immortality” ; seems to just be taking the long way ’round to arrive at the same place as the ‘Tibetan Book of the Great Liberation’.

The Grateful Dead covered this too with ~’wake up to find out that you are the Eyes of the World’~ & I’d add; also the Light Source.

We already do so with our limited computation power. It’s rarely debated how ethical it is because our sentient beings are, in our perception, much simpler than we are. To a prime designer, I don’t think we would be seen any differently than how we view our simulations now.

And what’s the best strategy for living and surviving in a simulation?

Do I feel any twinge of guilt when I blast the bad guys on a video game? No, not even a little. In fact, I doubt ANY of the MILLIONS of game players ever feel any guilt. So, based on this very real world experiment, verified millions of times over, I don’t agree with the argument that post humans will find it immoral to create a virtual world with evil in it. In fact, they’d probably put evil in just to make it more interesting.

With the second proposition eliminated, we are left with either 1.) we are likely in a simulation, or 2.) we will likely be destroyed before we can create a virtual-reality simulation. Take your pick; they both suck.

hahah wow, upon reading this I immediately thought of the years where I spent hours each week controlling virtual worlds. Well, OK, I led you on a bit. I’m talking about those lovely computer/video games which simulate historical contexts (usually involving wars, cause hey, peace is lame dude). One thing that always bothered my friends and me was how crappy the simulated people were, and their automatic responses to their environment were always so unnatural. However, as I got older (and unfortunately have less time to control these minions) their responses have gotten better. Not within any particular video game, but in new games, as new algorithms were created that allowed for more natural and challenging/realistic responses to the virtual world which I manipulated.

So for Dr. Bostrom and his colleagues, I have some first-hand experience and words of advice. It is fine that you may assume that the beings who created this vast virtual universe (or maybe it only extends to the edge of the solar system, or maybe just a few planets out before we hit a virtual border/backdrop and can’t go any further) are benevolent and good… but I bet $100 their teenage sons get pretty bored pretty fast with these benevolent societies.

Just look at the Sim City series of games… what’s your first instinct after building a marvelous city where you can no longer improve it by much (or maybe you just get bored or frustrated because you can’t seem to balance your budget and build new expensive things)? You go to the disasters menu and erupt a massive volcano in the middle of Main Street!!! And then you unleash UFOs who zap at your city, blowing it up and creating havoc, so much so that your firefighters and police officers are overrun and the city burns burns burns burns… excuse me, I got carried away there.

Anyways, a plus side to this is that as long as the kid saved the simulation before all the havoc took place, all he has to do is click “load city” and everything will be back to benevolence. So is it immoral to cause such horrible pain and suffering so long as the guy running the show remembers NOT to save the game after causing some epidemic of destruction? It’s not like we’d remember it; well, not after it happened, anyways.

and as far as survival goes, perhaps we should pool our collective (simulated) resources to construct a massive complex across the globe that spells out: “would your mother approve of this”.

Ethical? Logically you’d have to say it is, to a certain extent – is God (or the Prime Designer, as you say; any designer) held liable to the rules of his simulated world? Only by personal guilt? I think the purpose of a simulation might be to experiment in making an idealized version of oneself. Where each iteration is better than the last, and rules change after experimental data proves which settings are best. Best for what? Well, happiness, or eudaimonia, or nirvana, I suppose. Closest to harmonious perfection.

How to cope with one’s role in a simulation? Assuming there is an intelligent being working the table, so to speak, you should make it clear that you’re not trying to overthrow him (Cronus-Zeus complex) but are out to help. Offer a deal, so to speak – just like Jesus supposedly did with God, or Neo with the machines in The Matrix. Because yes, those are products of the system, but they’re unexpected anomalies, and worth cooperating with if you’re really serious about getting further. Like a business negotiation, be clear about what you’re willing to offer and that fairness is worth upholding. The iterated prisoner’s dilemma is a direct logical implication.
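The iterated prisoner's dilemma the commenter invokes is easy to sketch. Here is a minimal toy version (my own implementation, using the standard payoff values) showing why conditional cooperation like tit-for-tat embodies the "fair business negotiation" strategy described:

```python
# Standard prisoner's dilemma payoffs for (row, column):
# both cooperate 3 each; both defect 1 each; lone defector 5, sucker 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=10):
    """Play an iterated game; each strategy sees the opponent's history."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

tit_for_tat = lambda opp: "C" if not opp else opp[-1]   # cooperate, then mirror
always_defect = lambda opp: "D"
always_cooperate = lambda opp: "C"

print(play(tit_for_tat, always_cooperate))  # (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect))     # (9, 14): one sucker payoff, then mutual defection
```

Tit-for-tat offers cooperation, punishes betrayal, and forgives, which is roughly the posture toward a Prime Designer the comment recommends.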

0. When do we start?
1. How do we know that, much like The Bomb, this is extraordinary “knowledge” that has never been, and most likely – to the rational mind – never will be, used? To me, paranoia provided the means for the atomic bomb, and continues to be the reason for fear.
2. How do we know that Bostrom is not an alien?
3. Continuing in the thread of extraterrestrial life, how do we know that we are not in another being’s experience? This being may not be posthuman – just simply another incomprehensible reality, one that irks and flummoxes us, much the way this complex fantasy or reality irks us.
4. Who gets there first? How might that resolve?
5. How do we know that this theory and reality is not simply a Creationist theory? How could it not be a creationist theory? How could a posthuman, alien, or other solution to this ultimately (it seems to me) scientific problem not be a Creationist?

These are only a few flaws – or opinions – depending on your point of view, regarding Bostrom’s theory (probably a concept). John Tierney rightfully resolves to bring this issue to light for readers of this newspaper.

And you can make up your own mind about this in America! Personally, I think that any construction worker could tell you that we’re out of our minds to even consider a theory like this. Any composer. Any lawyer. Any journalist. Any scientist. The possibilities of human life are endless, and rightfully they may be that way. And America deserves endless possibilities. Even Bostrom, a foreigner, can dream, because of the freedom of religion espoused by the Founding Fathers.

About

John Tierney always wanted to be a scientist but went into journalism because its peer-review process was a great deal easier to sneak through. Now a columnist for the Science Times section, Tierney previously wrote columns for the Op-Ed page, the Metro section and the Times Magazine. Before that he covered science for magazines like Discover, Hippocrates and Science 86.

With your help, he's using TierneyLab to check out new research and rethink conventional wisdom about science and society. The Lab's work is guided by two founding principles:

Just because an idea appeals to a lot of people doesn't mean it's wrong.

But that's a good working theory.

Comments and suggestions are welcome, particularly from researchers with new findings. E-mail tierneylab@nytimes.com.