Simulated Reality

Simulated reality is the proposition that reality could be simulated—perhaps by computer simulation—to a degree indistinguishable from ‘true’ reality. It could contain conscious minds which may or may not be fully aware that they are living inside a simulation. This is quite different from the current, technologically achievable concept of virtual reality. Virtual reality is easily distinguished from the experience of actuality; participants are never in doubt about the nature of what they experience. Simulated reality, by contrast, would be hard or impossible to separate from ‘true’ reality.

In brain-computer interface simulations, each participant enters from outside, directly connecting their brain to the simulation computer. The computer transmits sensory data to the participant and reads and responds to their desires and actions in return; in this way the participant interacts with the simulated world and receives feedback from it. The participant may be induced by any number of possible means to forget, temporarily or otherwise, that they are inside a virtual realm (e.g. ‘passing through the veil,’ a term borrowed from Christian tradition, which describes the passage of a soul from an earthly body to an afterlife). While inside the simulation, the participant’s consciousness is represented by an avatar, which can look very different from the participant’s actual appearance.

In a virtual-people simulation, every inhabitant is a native of the simulated world. They do not have a ‘real’ body in the external reality of the physical world. Instead, each is a fully simulated entity, possessing an appropriate level of consciousness that is implemented using the simulation’s own logic (i.e. using its own physics). As such, they could be downloaded from one simulation to another, or even archived and resurrected at a later time. It is also possible that a simulated entity could be moved out of the simulation entirely by means of mind transfer into a synthetic body.

Another way of moving an inhabitant of the virtual reality out of its simulation would be to ‘clone’ the entity: take a sample of its virtual DNA and create a real-world counterpart from that model, assuming the real world’s physics is compatible with the virtual world’s. This would not bring the ‘mind’ of the entity out of the simulation, but its body would be born in the real world. ‘Virtual people’ simulations subdivide into two further types: ‘virtual people-virtual world,’ in which an external reality is simulated separately from the artificial consciousnesses; and ‘solipsistic simulation,’ in which only consciousness is simulated and the ‘world’ the participants perceive exists only within their minds.

In an emigration simulation, the participant enters the simulation from the outer reality, as in the brain-computer interface simulation, but with far deeper immersion. On entry, the participant could use a variety of hypothetical methods to participate in the simulated reality, including mind transfer to temporarily relocate their mental processing into a virtual person. After the simulation is over, the participant’s mind is restored, along with all new memories and experiences gained within.

Further, there is the option of a completely virtual person, born in the simulation, who wishes to escape it (after ‘waking up’) and somehow succeeds in being transferred into an outer-reality person. This would ultimately mean exiting (emigrating) the simulation and being transformed on exit into a ‘real’ person.

Finally, there is the option of a simulated reality being dynamically constructed and modified using real-world matter and energy within an enclosing container or room, such as the ‘Holodeck’ in Star Trek. Upon entering such a space, the real-world person would effectively feel immersed in the simulated environment, with a variety of potential methods being used to convince the user of the presence of motion, gravity, environments, and so on, and with the user presumably able to interact (or not) with the simulated reality.

An intermingled simulation supports both types of consciousness: ‘players’ from the outer reality who are visiting (as a brain-computer interface simulation) or emigrating, and virtual-people who are natives of the simulation and hence lack any physical body in the outer reality.

—

Swedish philosopher Nick Bostrom investigated the possibility that we are living in a simulated reality. His premise is that, given sufficiently advanced technology, it would be possible to simulate entire inhabited planets, larger habitats, or even entire universes (perhaps as quantum simulations in time/space pockets) on a computer, including all the people on them, and that such simulated people could be fully conscious, and as fully sentient, as non-simulated people.

We assume that the human race could reach such a technologically advanced level without destroying itself in the process. We presume that, once we reached such a level, we would still be interested in history, the past, and our ancestors, and that there would be no legal or moral strictures on running such simulations. It is likely that we would run a very large number of so-called ancestor simulations to study our past, and, by the same line of reasoning, that many of these simulations would in turn run sub-simulations of their own, and so on. Given that it is currently impossible to tell whether we are living in one of this vast number of simulations or in the original ancestor universe, the likelihood is that the former is true.

Assumptions as to whether the human race (or another intelligent species) could reach such a technological level without destroying itself depend greatly on the value of the Drake equation, which attempts to calculate the number of intelligent technological species communicating via radio in a galaxy at any given point in time. The expanded equation considers the number of posthuman civilizations that would ever exist in any given universe. If the average for all universes, real or simulated, is greater than or equal to one such civilization existing in each universe’s entire history, then the odds are overwhelmingly in favor of the proposition that the average civilization is in a simulation, assuming that such simulated universes are possible and that such civilizations would want to run them.
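The core of this reasoning can be written as a single fraction. The sketch below is a minimal illustration in the form the fraction takes in Bostrom’s simulation argument; the function name and the parameter values in the usage line are illustrative assumptions, not figures from the source.

```python
def simulated_fraction(f_p, n_sims):
    """Fraction of all observers with human-type experiences who are simulated,
    if a fraction f_p of civilizations reaches a posthuman, simulation-running
    stage and each such civilization runs n_sims ancestor simulations on
    average.  Each posthuman civilization contributes n_sims simulated
    populations for every one unsimulated population."""
    return (f_p * n_sims) / (f_p * n_sims + 1)

# Illustrative: even if only 1 in 1,000 civilizations ever runs simulations,
# a million runs each pushes the simulated fraction above 99.9%.
fraction = simulated_fraction(0.001, 10 ** 6)
```

The fraction approaches 1 whenever `f_p * n_sims` is large, which is why the argument needs only the assumption that simulations are possible and that some civilizations run many of them.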

As to the question of whether we are living in a simulated reality or a ‘real’ one, the answer may be ‘indistinguishable’ in principle. In a commemorative article for the World Year of Physics 2005, physicist Bin-Guang Ma proposed a theory of the ‘relativity of reality’ (a notion also suggested in other contexts, such as ancient philosophy, Zhuangzi’s ‘Butterfly Dream,’ and psychological analysis). The relativity principle in physics concerns chiefly the relativity of motion: motion has no absolute meaning, since to say whether something is moving or at rest one must adopt a reference frame, and without one there is no way to distinguish rest from uniform motion. Generalizing this principle, a similar property is suggested for reality: without a reference world, one cannot tell whether the world one lives in is real or simulated, and therefore reality has no absolute meaning. As in Einstein’s relativity, the theory rests on two fundamental principles: all worlds are equally real; and simulated events and simulating events coexist.

Computationalism is a theory in the philosophy of mind which holds that cognition is a form of computation. It is relevant to the Simulation Hypothesis in that it illustrates how a simulation could contain conscious subjects, as required by a ‘virtual people’ simulation. For example, it is well known that physical systems can be simulated to some degree of accuracy. If computationalism is correct, and if there is no problem in generating artificial consciousness from cognition, this would establish the theoretical possibility of a simulated reality. However, the relationship between cognition and phenomenal consciousness is disputed. It is possible that consciousness requires a physical substrate that a computational simulator cannot provide, in which case simulated people, while behaving appropriately, would be philosophical zombies. This would also seem to undermine Nick Bostrom’s simulation argument: we cannot be inside a simulation, as conscious beings, if consciousness cannot be simulated. We could, however, still be within a simulation as envatted brains, existing as conscious beings within a simulated environment even if that environment could not itself simulate consciousness.

Some theorists have argued that if the ‘consciousness-is-computation’ version of computationalism and mathematical realism (also known as mathematical Platonism) are both true, then our consciousnesses must be inside a simulation. The argument holds that a ‘Plato’s heaven’ or ultimate ensemble would contain every algorithm, including those which implement consciousness. Platonic simulation theories are also subsets of multiverse theories and theories of everything.

A dream could be considered a type of simulation capable of fooling someone who is asleep. As a result the ‘dream hypothesis’ cannot be ruled out, although it has been argued that common sense and considerations of simplicity rule against it. One of the first philosophers to question the distinction between reality and dreams was Zhuangzi, a Chinese philosopher from the 4th century BC. He phrased the problem as the well-known ‘Butterfly Dream,’ which went as follows: ‘Once Zhuangzi dreamt he was a butterfly, a butterfly flitting and fluttering around, happy with himself and doing as he pleased. He didn’t know he was Zhuangzi. Suddenly he woke up and there he was, solid and unmistakable Zhuangzi. But he didn’t know if he was Zhuangzi who had dreamt he was a butterfly, or a butterfly dreaming he was Zhuangzi. Between Zhuangzi and a butterfly there must be some distinction! This is called the Transformation of Things.’

The philosophical underpinnings of this argument are also brought up by Descartes, who was one of the first Western philosophers to do so. In ‘Meditations on First Philosophy,’ he states ‘… there are no certain indications by which we may clearly distinguish wakefulness from sleep,’ and goes on to conclude that ‘It is possible that I am dreaming right now and that all of my perceptions are false.’

A decisive refutation of any claim that our reality is computer-simulated would be the discovery of some uncomputable physics, because if reality is doing something that no computer can do, it cannot be a computer simulation. Known physical laws (including those of quantum mechanics) are very much infused with real numbers and continua, and the universe seems to be able to decide their values on a moment-by-moment basis. As Richard Feynman put it: ‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypotheses that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

Note that these objections all relate to the idea of reality being exactly simulated. Ordinary computer simulations as used by physicists are always approximations.

These objections do not apply if the hypothetical simulation is being run on a hypercomputer, a hypothetical machine more powerful than a Turing machine. Unfortunately, there is no way of working out whether the computers running a simulation are capable of doing things that computers inside the simulation cannot do. No one has shown that the laws of physics inside a simulation and those outside it have to be the same, and simulations of different physical laws have been constructed. The problem is that no evidence could conceivably be produced to show that the universe is not some kind of computer, which makes that form of the simulation hypothesis unfalsifiable and therefore scientifically unacceptable. All conventional computers, however, are less than hypercomputational, and the simulated reality hypothesis is usually expressed in terms of conventional computers, i.e. Turing machines; to that extent, the hypothesis is falsifiable.

Roger Penrose, an English mathematical physicist, presents the argument that human consciousness is non-algorithmic, and thus is not capable of being modeled by a conventional Turing machine-type of digital computer. Penrose hypothesizes that quantum mechanics plays an essential role in the understanding of human consciousness. The collapse of the quantum wavefunction is seen as playing an important role in brain function.

In his book ‘The Fabric of Reality,’ David Deutsch discusses how the limits to computability imposed by Gödel’s incompleteness theorem affect the virtual-reality rendering process. To do this, Deutsch invents the notion of a CantGoTu environment (named after Cantor, Gödel, and Turing), using Cantor’s diagonal argument to construct an ‘impossible’ virtual reality that no physical VR generator would be able to render.
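The diagonal construction can be sketched abstractly. In the toy sketch below, `render(i, t)` is assumed to give the t-th experience-bit of the i-th environment a VR generator can produce; flipping the diagonal yields an environment that differs from every renderable one, so it appears nowhere in the enumeration. All names and the bit-encoding are illustrative assumptions, not Deutsch’s notation.

```python
def cantgotu(render):
    """Cantor-style diagonalization over a countable enumeration of
    renderable environments.  render(i, t) is the t-th bit of the i-th
    environment; the returned environment disagrees with environment i
    at time-step i, so the generator cannot render it."""
    return lambda t: 1 - render(t, t)

# Example: a generator whose i-th environment constantly renders the bit i % 2.
render = lambda i, t: i % 2
impossible = cantgotu(render)

# For every i, the diagonal environment disagrees with environment i at step i:
all(impossible(i) != render(i, i) for i in range(100))   # True
```

The same one-line flip works for any enumeration, which is the point of the diagonal argument: the limitation is structural, not a matter of the generator’s power.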

The computational requirements for molecular dynamics are such that in 2002, ‘while the fastest proteins fold on the order of tens of microseconds,’ ‘current single computer processors’ could ‘only simulate on the order of a nanosecond of real-time of folding in full atomic detail per CPU day.’ To simulate an entire galaxy would require more computing power than can presently be envisioned, assuming that no shortcuts are taken when simulating areas that nobody is observing.

In answer to this objection, Bostrom calculated that simulating the brain functions of all humans who have ever lived would require roughly 10^36 calculations. He further calculated that a planet-sized computer built with computronium using known nanotechnological methods could perform about 10^42 calculations per second, and that a planet-sized computer, or an even larger stellar-system-sized one, is not inherently impossible to build (although the speed of light could severely constrain the speed at which its subprocessors share data). In any case, a simulation need not compute every single molecular event that occurs inside it; it may only process events that its participants can actively perceive. This is particularly so if the simulation contained only a handful of people; far less processing power would be needed to make them believe they were in a ‘world’ much larger than was actually the case. A real-world analogue could be the observer paradox or the Heisenberg uncertainty principle: an unobserved region of space is indeterminate until observed, which could be because the simulating computer does not simulate it until it needs to.
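The arithmetic behind these estimates is straightforward order-of-magnitude division; the sketch below reproduces it with the figures quoted above (the 20-microsecond folding time is an assumed stand-in for ‘tens of microseconds’).

```python
# Bostrom's estimates: total operations for all human brain history,
# divided by the throughput of a planet-sized computronium computer.
brain_history_ops = 10 ** 36       # operations, all human brains ever
planet_computer_rate = 10 ** 42    # operations per second
seconds_needed = brain_history_ops / planet_computer_rate
# ~1e-6: about a microsecond of runtime for all of human mental history

# The protein-folding comparison (2002 figures quoted in the text):
fold_time = 20e-6                  # "tens of microseconds" (20 us assumed)
simulated_per_cpu_day = 1e-9       # one nanosecond of folding per CPU-day
cpu_days_per_fold = fold_time / simulated_per_cpu_day
# ~20,000 CPU-days of 2002-era computing to simulate one fold in full detail
```

The contrast between the two results is the point of the paragraph: brute-force molecular detail is staggeringly expensive, while simulating only what minds perceive is, on these assumptions, cheap.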

The existence of simulated reality is unprovable in any concrete sense: any ‘evidence’ that is directly observed could be another simulation itself. In other words, there is an infinite regress problem with the argument. Even if we are a simulated reality, there is no way to be sure the beings running the simulation are not themselves a simulation, and the operators of that simulation are not a simulation, ad infinitum. Given the premises of the simulation argument, any reality, even one running a simulation, has no better or worse a chance of being a simulation than any other.

—

A computed simulation may have voids or other errors that manifest inside. As a simple example of this, when the ‘hall of mirrors’ effect occurs in the first person shooter ‘Doom,’ the game attempts to display ‘nothing’ and obviously fails in its attempt to do so. If a void can be found and tested, and if the observers survive its discovery, then it may reveal the underlying computational substrate. However, lapses in physical law could be attributed to other explanations, for instance inherent instability in the nature of reality.

In fact, bugs could be very common. An interesting question is whether knowledge of bugs or loopholes in a sufficiently powerful simulation would be instantly erased the moment it was observed, since presumably all thoughts and experiences in a simulated world could be carefully monitored and altered. Doing so, however, would require enormous processing capability in order to monitor billions of people simultaneously. If this were the case, we would never be able to act on the discovery of bugs. Indeed, any simulation sufficiently determined to protect its existence could erase any proof that it was a simulation whenever such proof arose, provided it had the enormous capacity necessary to do so.

To take this argument to an even greater extreme, a sufficiently powerful simulation could make its inhabitants think that erasing proof of its existence is difficult. The computer would then actually have an easy time erasing glitches, while we all believe that changing reality requires great power. One could interpret miracles and paranormal activity as software bugs, especially those that seem to affect people negatively. This notion has been explored in ‘The Matrix,’ where déjà vu is considered a sign of crude alteration to the system, and in ‘The Animatrix,’ where software glitches are concentrated in a house the neighbors call ‘haunted,’ subsequently corrected by the Agents. On this view, demons and evil spirits could be regarded as ‘hackers’ who attempt to exploit such bugs.

Additionally, it can be argued that what are in fact errors in the software, we perceive as part of the ‘proper’ reality. For example, tornadoes may never have been meant to exist in this simulation but came to be through a programming error. Removing them from this reality would only arouse suspicion and raise questions among its inhabitants; in such an instance, it would make more sense to leave the ‘error’ in place.

The simulation may contain hidden or secret messages or exits, placed there by the designer or by other inhabitants who have solved the riddle, in the way that Easter eggs in computer games and other media sometimes do. People have already spent considerable effort searching for patterns or messages within the endless decimal expansions of fundamental constants such as e and pi. In Carl Sagan’s science fiction novel ‘Contact,’ Sagan contemplates the possibility of finding a signature embedded in pi (in its base-11 expansion) by the creators of our reality.

However, such messages have not been made public if they have been found, and the argument relies on the messages being truthful. As usual, other hypotheses could explain the same evidence. In any case, if such constants are in fact normal, then at some point an apparently meaningful message will appear in them (this is known as the infinite monkey theorem), not necessarily because it was placed there.
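Such a search is easy to sketch. The snippet below computes digits of pi via Machin’s formula in exact integer arithmetic and then looks for the most famous ‘pattern’ actually known: the so-called Feynman point, a run of six consecutive 9s beginning at the 762nd decimal place, generally regarded as exactly the kind of coincidence the infinite monkey theorem predicts rather than a message.

```python
def arccot(x, unity):
    """arccot(x) = arctan(1/x), scaled by `unity`, via its alternating
    Taylor series using only integer arithmetic (terms truncate to zero)."""
    total = term = unity // x
    n, sign = 3, -1
    while term:
        term //= x * x                 # next power: unity / x**(2k+1)
        total += sign * (term // n)
        sign, n = -sign, n + 2
    return total

def pi_digits(d):
    """The first d+1 decimal digits of pi as an integer 314159..., using
    Machin's formula: pi/4 = 4*arctan(1/5) - arctan(1/239)."""
    unity = 10 ** (d + 10)             # 10 guard digits absorb truncation error
    pi_scaled = 4 * (4 * arccot(5, unity) - arccot(239, unity))
    return pi_scaled // 10 ** 10

digits = str(pi_digits(800))
# Index 0 is '3', so a match at index k begins at decimal place k.
position = digits.find("999999")       # the Feynman point, at place 762
```

Any fixed string will eventually turn up in a normal number, so a genuine ‘signature’ would have to appear far earlier, or far more elaborately, than chance allows.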

The Easter egg theory also assumes that a simulation would want to inform its inhabitants of its real nature; it may not. Conversely, if we suppose that the human race will eventually be capable of creating intelligent programs (i.e. machines) living inside a virtual subspace of our ‘real’ world, an interesting question is whether we could prevent such sentient creations from discovering their own artificial nature.

A computer simulation would be limited to the processing power of its host computer, and so there may be aspects of the simulation that are not computed at a fine-grained (e.g. subatomic) level. This might show up as a limitation on the accuracy of information that can be obtained in particle physics. However, this argument, like many others, assumes that accurate judgments about the simulating computer can be made from within the simulation. If we are being simulated, we might be misled about the nature of computers.

Taken one step further, the ‘fine-grained’ elements of our world could themselves be simulated, since we never see subatomic particles directly, owing to our inherent physical limitations. To see such particles we rely on instruments that appear to magnify or translate that information into a format our limited senses can view: a computer printout, the lens of a microscope, and so on. We therefore essentially take it on faith that these are accurate portrayals of a fine-grained world which appears to exist in a realm beyond our natural senses. If the subatomic level were also simulated, the processing power required to generate a realistic world would be greatly reduced.

In theoretical physics, digital physics holds the basic premise that the entire history of our universe is computable in some sense. The hypothesis was pioneered in Konrad Zuse’s book ‘Rechnender Raum’ (translated by MIT into English as ‘Calculating Space,’ 1970), which focuses on cellular automata. Juergen Schmidhuber suggested that the universe could be a Turing machine, because there is a very short program that outputs all possible programs in an asymptotically optimal way. Other proponents include Edward Fredkin, Stephen Wolfram, and Nobel laureate Gerard ‘t Hooft. They hold that the apparently probabilistic nature of quantum physics is not incompatible with the notion of computability. A quantum version of digital physics has recently been proposed by Seth Lloyd. None of these suggestions has been developed into a workable physical theory.

Some of the people in a simulated reality may be automatons, philosophical zombies, or ‘bots’ added to the simulation to make it more realistic or interesting or challenging. Indeed, it is conceivable that every person other than oneself is a bot. Bostrom called this a ‘me-simulation,’ in which oneself is the only sovereign lifeform, or at least the only inhabitant who entered the simulation from outside.

Bostrom further elaborated on the idea of bots: ‘In addition to ancestor-simulations, one may also consider the possibility of more selective simulations that include only a small group of humans or a single individual. The rest of humanity would then be zombies or ‘shadow-people’ – humans simulated only at a level sufficient for the fully simulated people not to notice anything suspicious. It is not clear how much [computationally] cheaper shadow-people would be to simulate than real people. It is not even obvious that it is possible for an entity to behave indistinguishably from a real human and yet lack conscious experience.’

A brain-computer interface simulated reality may be required to progress at a rate that is near real time; that is, time within it may need to pass at approximately the same rate as the outer reality that contains it. This might be the case because the players interact with the simulation using brains which still reside in the outer reality; if the simulation ran faster or slower, those brains could notice, because they are not contained within it. Time may pass more slowly or quickly for brains in a dream state (i.e., in a brain-computer interface trance), but they still function at a finite, biological speed, and the simulation must keep pace with them, unless those interacting with the simulation are augmented and capable of processing information at the same rate as the simulation itself.

A virtual-people or emigration simulated reality, on the other hand, need not. This is because its inhabitants are using the simulation’s own physics in order to experience, think, and react. If the simulation were slowed down or sped up, so also would the inhabitants’ own senses, brains, and muscles, as well as every other molecule inside. The inhabitants would perceive no change in the passage of time, simply because their method of measuring time is dependent on the cosmic clock that they are seeking to measure.

For that matter, they could not even detect whether the simulation had been completely halted: a pause in the simulation would pause every life and mind within it. When the simulation was later resumed, the inhabitants would continue exactly as they were before the pause, completely unaware that (for example) their cosmos had been paused and archived for a billion years before being resumed. A simulation could also be created with its inhabitants already possessing memories as though they had already lived part of their lives before; said inhabitants would not be able to tell the difference unless informed of it by the simulation.
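The pause-and-archive scenario is exactly what programmers call checkpointing: serialize the complete state, store it for any length of outer time, and resume from an identical state. A toy sketch, with an entirely made-up world state:

```python
import pickle

# A toy "world state": everything the simulation would need to continue,
# including the inhabitants' (possibly pre-installed) memories.
world = {"tick": 123_456,
         "inhabitants": [{"name": "avatar-1", "memories": [1, 2, 3]}]}

# Pausing and archiving: serialize the complete state...
archive = pickle.dumps(world)

# ...and, a billion outer-reality years later, resume it.
restored = pickle.loads(archive)

# From the inside nothing distinguishable happened: the restored state is
# identical, so the next computed tick follows exactly as it would have.
assert restored == world
```

The same mechanism covers the pre-installed-memories case: a freshly created checkpoint is indistinguishable, from the inside, from one that was actually lived through.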

Recursive simulation involves a simulation, or an entity in a simulation, creating another simulation within a simulated environment. The ‘parent’ simulator would be simulating all of the atoms of the computer, atoms which happen to be calculating a ‘child’ simulation. This recursion could continue to infinitely many levels — a simulation containing a computer running a simulation containing a computer running a simulation and so on. The recursion is subject only to one constraint (assuming no level has infinite computational power): each ‘nested’ simulation must be smaller than its parent reality.
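The ‘strictly smaller’ constraint implies that, for any finite host, the tower of nested simulations is finite. A toy sketch, assuming (arbitrarily) that each child must be ten times smaller than its parent:

```python
def nesting_levels(ops_per_second, shrink=10):
    """How many levels of simulation fit inside a host of the given
    capacity, if each child must be at least `shrink` times smaller
    than its parent.  The shrink factor is an assumed illustration;
    any factor > 1 gives the same logarithmic result."""
    levels = 0
    while ops_per_second >= 1:
        levels += 1
        ops_per_second //= shrink
    return levels

nesting_levels(10 ** 6)   # -> 7: the chain bottoms out after finitely many levels
```

Because the depth grows only logarithmically with the host’s capacity, even an astronomically powerful top-level computer supports a strictly finite tower of worlds.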