In this, the last essay in the virtual cheating series, the focus of the discussion is on virtual people. The virtual aspect is easy enough to define—these are entities that exist entirely within the realm of computer memory and do not exist as physical beings in that they lack bodies of the traditional sort. They are, of course, physical beings in the broad sense, existing as data within physical memory systems.

An example of such a virtual being is a non-player character (NPC) in a video game. These coded entities range from enemies that fight the player to characters that engage in the illusion of conversation and interaction. As it now stands, these NPCs are quite simple—although players often have very strong emotional responses to and even (one-sided) relationships with them. BioWare, for example, excels at creating NPCs that players become deeply involved with, and its games often feature elaborate relationship and romance systems.

While these simple coded entities are usually designed to look like and imitate the behavior of people, they are obviously not people. They cannot even pass a basic Turing test. They are, at best, the illusion of people. As such, while humans could become emotionally attached to these virtual entities, it would be impossible to cheat with them. Naturally, a human could become angry with how involved their partner is with video games, but that is another matter.

As technology improves, the virtual people will become more and more person-like. As with the robots discussed in the previous essay, if a virtual person were a person, then cheating would be potentially possible. Also as with the discussion of robots, there could be degrees of virtual personhood, thus allowing for degrees of cheating. Since virtual people are essentially robots in the virtual world, the discussion of robots in that essay would apply analogously to the virtual robots of the virtual world. There is, however, one obvious break in the analogy: unlike robots, virtual people lack physical bodies. This leads to the obvious question of whether a human can virtually cheat with a virtual person or if cheating requires a physical sexual component.

While, as discussed in a previous essay, there is a form of virtual sex that involves physical devices that stimulate the sexual organs, this is not “pure” virtual sex. After all, the user is using a VR headset to “look” at the partner, but the stimulation is all done mechanically. Pure virtual sex would require the sci-fi sort of virtual reality of cyberpunk—a person fully “jacked in” to the virtual reality so all the inputs and outputs are essentially directly to and from the brain. The person would have a virtual body in the virtual reality that mediates their interaction with that world, rather than having crude devices stimulating their physical body.

Assuming the technology is good enough, a person could have virtual sex with a virtual person (or another person who is also jacked into the virtual world). On the one hand, this would obviously not be sex in the usual sense—those involved would have no physical contact. This would avoid many of the usual harms of traditional cheating—STDs and pregnancies would not be possible (although sexual malware and virtual babies might be possible). This does, of course, leave open the door for accusations of emotional infidelity.

On the other hand, if the experience is indistinguishable from the experience of physical sex, then it could be argued that the lack of physical contact is irrelevant. At this point, the classic problem of the external world becomes relevant. The gist of this problem is that because I cannot get outside of my experiences to “see” that they are really being caused by external things that seem to be causing them, I can never know if there is really an external world. For all I know, I am dreaming or already in a virtual world. While this is usually seen as the nightmare scenario in epistemology, George Berkeley embraced this view in his idealism—he argued that there is no metaphysical matter and that “to be is to be perceived.” On his view, all that exists are minds and within them are ideas. Crudely put, Berkeley’s reality is virtual and God is the server.

So, if cheating is defined such that it requires physical sexual activity, knowing whether a person is cheating or not would require solving the problem of the external world. And there would be the possibility that there never has been any cheating since there might be no physical world. If sexual activity is defined in terms of the behavior and sensations without references to a need for physical systems, then virtual cheating would be possible—assuming the technology can reach the required level.

While this discussion of virtual cheating is currently purely theoretical, it does provide an interesting way to explore what it is about cheating (if anything) that is wrong. As noted at the start of the series, many of the main concerns about cheating are purely physical concerns about STDs and pregnancy. These concerns are avoided by virtual cheating. What remains are the emotions of those involved and the agreements between them. As a practical matter, the future is likely to see people working out the specifics of their relationships in terms of what sort of virtual and robotic activities are allowed and which are forbidden. While people can simply agree to anything, there is the deeper question of the rational foundation of relationship boundaries. For example, there is the question of whether it is reasonable to consider interaction with a sexbot cheating or merely elaborate masturbation. Perhaps Bill Clinton, with his inquiries into the definition of “sex,” should be leading the discussion of this matter.

Elon Musk and others have advanced the idea that we exist within a simulation. The latest twist on this is that he and others are allegedly funding efforts to escape this simulation. This is, of course, the most recent chapter in the ancient philosophical problem of the external world. Put briefly, this problem is the challenge of proving that what seems to be a real external world is, in fact, a real external world. As such, it is a problem in epistemology (the study of knowledge).

The problem is often presented in the context of metaphysical dualism. This is the view that reality is composed of two fundamental categories of stuff: mental stuff and physical stuff. The mental stuff is supposed to be what the soul or mind is composed of, while things like tables and kiwis (the fruit and the bird) are supposed to be composed of physical stuff. Using the example of a fire that I seem to be experiencing, the problem would be trying to prove that the idea of the fire in my mind is being caused by a physical fire in the external world.

René Descartes has probably the best known version of this problem—he proposed that he is being deceived by an evil demon that creates, in his mind, an entire fictional world. His solution to this problem was to doubt until he reached something he could not doubt: his own existence. From this, he inferred the existence of God and then, over the rest of his Meditations on First Philosophy, he established that God was not a deceiver. Going back to the fire example, if I seem to see a fire, then there probably is an external, physical fire causing that idea. Descartes did not, obviously, decisively solve the problem: otherwise Musk and his fellows would be easily refuted by using Descartes’ argument.

One often overlooked contribution Descartes made to the problem of the external world is consideration of why the deception is taking place. Descartes attributes the deception of the demon to malice—it is an evil demon (or evil genius). In contrast, God’s goodness entails he is not a deceiver. In the case of Musk’s simulation, there is the obvious question of the motivation behind it—is it malicious (like Descartes’ demon) or more benign? On the face of it, such deceit does seem morally problematic—but perhaps the simulators have excellent moral reasons for this deceit. Descartes’ evil demon does provide the best classic version of Musk’s simulation idea since it involves an imposed deception. More on this later.

John Locke took a rather more pragmatic approach to the problem. He rejected the possibility of certainty and instead argued that what matters is understanding matters enough to avoid pain and achieve pleasure. Going back to the fire, Locke would say that he could not be sure that the fire was really an external, physical entity. But, he has found that being in what appears to be fire has consistently resulted in pain and hence he understands enough to want to avoid standing in fire (whether it is real or not). This invites an obvious comparison to video games: when playing a game like World of Warcraft or Destiny, the fire is clearly not real. But, because having your character fake die in fake fire results in real annoyance, it does not really matter that the fire is not real. The game is, in terms of enjoyment, best played as if it is.

Locke does provide the basis of a response to worries about being in a simulation, namely that it would not matter if we were or were not—from the standpoint of our happiness and misery, it would make no difference if the causes of pain and pleasure were real or simulated. Locke, however, does not consider that we might be within a simulation run by others. If it were determined that we are victims of a deceit, then this would presumably matter—especially if the deceit were malicious.

George Berkeley, unlike Locke and Descartes, explicitly and passionately rejected the existence of matter—he considered it a gateway drug to atheism. Instead, he embraced what is called “idealism,” “immaterialism,” and “phenomenalism.” His view was that reality is composed of metaphysical immaterial minds and these minds have ideas. As such, for him there is no external physical reality because there is nothing physical. He does, however, need to distinguish between real things and hallucinations or dreams. His approach was to claim that real things are more vivid than hallucinations and dreams. Going back to the example of fire, a real fire for him would not be a physical fire composed of matter and energy. Rather, I would have a vivid idea of fire. For Berkeley, the classic problem of the external world is sidestepped by his rejection of the external world. However, it is interesting to speculate how a simulation would be handled by Berkeley’s view.

Since Berkeley does not accept the existence of matter, the real world outside the simulation would not be a material world—it would be a world composed of minds. A possible basis for the difference is that the simulated world is less vivid than the real world (to use his distinction between hallucinations and reality). On this view, we would be minds trapped in a forced dream or hallucination. We would be denied the more vivid experiences of minds “outside” the simulation, but we would not be denied an external world in the metaphysical sense. To use an analogy, we would be watching VHS, while the minds “outside” the simulation would be watching Blu-Ray.

While Musk does not seem to have laid out a complete philosophical theory on the matter, his discussion indicates that he thinks we could be in a virtual reality style simulation. On this view, the external world would presumably be a physical world of some sort. This distinction is not a metaphysical one—presumably the simulation is being run on physical hardware and we are some sort of virtual entities in the program. Our error, then, would be to think that our experiences correspond to material entities when they, in fact, merely correspond to virtual entities. Or perhaps we are in a Matrix style situation—we do have material bodies, but receive virtual sensory input that does not correspond to the physical world.

Musk’s discussion seems to indicate that he thinks there is a purpose behind the simulation—that it has been constructed by others. He does not envision a Cartesian demon, but presumably envisions beings like what we think we are. If they are supposed to be like us (or we like them, since we are supposed to be their creation), then speculation about their motives would be based on why we might do such a thing.

There are, of course, many reasons why we would create such a simulation. One reason would be scientific research: we already create simulations to help us understand and predict what we think is the real world. Perhaps we are in a simulation used for this purpose. Another reason would be for entertainment. We created games and simulated worlds to play in and watch; perhaps we are non-player characters in a game world or unwitting actors in a long running virtual reality show (or, more likely, shows).

One idea, which was explored in Frederik Pohl’s short story “The Tunnel under the World”, is that our virtual world exists to test advertising and marketing techniques for the real world. In Pohl’s story, the inhabitants of Tylerton are killed in the explosion of the town’s chemical plant and they are duplicated as tiny robots inhabiting a miniature reconstruction of the town. Each day for the inhabitants is June 15th and they wake up with their memories erased, ready to be subject to the advertising techniques to be tested that day. The results of the methods are analyzed, the inhabitants are wiped, and it all starts up again the next day.

While this tale is science fiction, Google and Facebook are working very hard to collect as much data as they can about us with the goal of monetizing all this information. While the technology does not yet exist to duplicate us within a computer simulation, that would seem to be a logical goal of this data collection—just imagine the monetary value of being able to simulate and predict people’s behavior at the individual level. To be effective, a simulation owned by one company would need to model the influences of its competitors—so we could be in a Google World or a Facebook World right now, with these companies using the simulation to monetize and exploit the real versions of us in the external world.

Given that a simulated world is likely to exist to exploit the inhabitants, it certainly makes sense to not only want to know if we are in such a world, but also to try to undertake an escape. This will be the subject of the next essay.

The problem of the external world is a classic challenge in epistemology (the theory of knowledge). This challenge, which was first presented by the ancient skeptics, is met by proving that what I seem to be experiencing is actually real. As an example, it would require proving that the computer I seem to be typing this on exists outside of my mind.

Some of the early skeptics generated the problem by noting that what seems real could be just a dream, generated in the mind of the dreamer. Descartes added a new element to the problem by considering that an evil demon might be causing him to have experiences of a world that does not actually exist outside of his mind. While the evil demon was said to be devoted to deception, little is said about its motive in this matter. After Descartes there was a move from supernatural to technological deceivers: the classic brain-in-a-vat scenarios that are precursors to the more recent notion of virtual reality. In these philosophical scenarios little is said about the motivation or purpose of the deceit, beyond the desire to epistemically mess with someone. Movies and TV shows do sometimes explore the motives of the deceit. The Matrix trilogy, for example, endeavors to present something of a backstory for the Matrix. While considering the motivation behind the alleged deceit might not bear on the epistemic problem, it does seem a matter worth considering.

The only viable approach to sorting out a possible motivation for the deceit is to consider the nature of the world that is experienced. As various philosophers, such as David Hume, have laid out in their formulations of the problem of evil (the challenge of reconciling God’s perfection with the existence of evil), the world seems to be an awful place. As Hume has noted, it is infested with disease, suffused with suffering, and awash in annoying things. While there are some positive things, there is an overabundance of bad, thus indicating that whatever lies behind the appearances is either not benign or not very competent. This, of course, assumes some purpose behind the deceit. But, perhaps there is deceit without a deceiver and there is no malice. This would make the unreal like what atheists claim about the allegedly real: it is purposeless. However, deceit (like design) seems to suggest an intentional agent and this implies a purpose. This purpose, if there is one, must be consistent with the apparent awfulness of the world.

One approach is to follow Descartes and go with a malicious supernatural deceiver. This being might be acting from mere malice—inflicting both deceit and suffering. Or it might be acting as an agent of punishment for past transgressions on my part. The supernatural hypothesis does have some problems, the main one being that it involves postulating a supernatural entity. Following Occam’s Razor, if I do not need to postulate a supernatural being, then I should not do so.

Another possibility is that I am in a technologically created unreal world. In terms of motives consistent with the nature of the world, there are numerous alternatives. One is punishment for some crime or transgression. A problem with this hypothesis is that I have no recollection of a crime or indication that I am serving a sentence. But, it is easy to imagine a system of justice that does not inform prisoners of their crimes during the punishment and that someday I will awaken in the real world, having served my virtual time. It is also easy to imagine that this is merely a system of torment, not a system of punishment. There could be endless speculation about the motives behind such torment. For example, it could be an act of revenge or simple madness. Or even a complete accident. There could be other people here with me; but I have no way of solving the problem of other minds—no way of knowing if those I encounter are fellow prisoners or mere empty constructs. This ignorance does seem to ground a moral approach—since they could be fellow prisoners, I should treat them as such.

A second possibility is that the world is an experiment or simulation of an awful world and I am a construct within that world. Perhaps those conducting it have no idea the inhabitants are suffering; perhaps they do not care; or perhaps the suffering is the experiment. I might even be a researcher, trapped in my own experiment. Given how scientists in the allegedly real world have treated subjects, the idea that this is a simulation of suffering has considerable appeal.

A third possibility is that the world is a game or educational system of some sort. Perhaps I am playing a very lame game of Assessment & Income Tax; perhaps I am in a simulation learning to develop character in the face of an awful world; or perhaps I am just part of the game someone else is playing. All of these are consistent with how the world seems to be.

It is also worth considering the possibility of solipsism: that I am the only being that exists. It could be countered that if I were creating the world, it would be much better for me and far more awesome. After all, I actually write adventures for games and can easily visualize a far more enjoyable and fun world. The easy and obvious counter is to point out that when I dream (or, more accurately, have nightmares), I experience unpleasant things on a fairly regular basis and have little control. Since my dreams presumably come from me and are often awful, it makes perfect sense that if the world came from me, it would be comparable in its awfulness. The waking world would be more vivid and consistent because I am awake; the dream world less so because of mental fatigue. In this case, I would be my own demon.

One of the oldest problems in philosophy is that of the external world. It presents an epistemic challenge forged by the skeptics: how do I know that what I seem to be experiencing as the external world is really real? Early skeptics often claimed that what seems real might be just a dream. Descartes upgraded the problem through his evil genius/demon, which used either psionic or supernatural powers to befuddle its victim. As technology progressed, philosophers presented the brain-in-a-vat scenarios and then moved on to more impressive virtual reality scenarios. One recent variation on this problem has been made famous by Elon Musk: the idea that we are characters within a video game and merely think we are in a real world. This is, of course, a variation on the idea that this apparent reality is just a simulation. There is, interestingly enough, a logically strong inductive argument for the claim that this is a virtual world.

One stock argument for the simulated world is built in the form of the inductive argument generally known as a statistical syllogism. It is statistical because it reasons from a statistical generalization. It is a syllogism by definition: it has two premises and one conclusion. Generically, a statistical syllogism looks like this:

Premise 1: X% of As are Bs.

Premise 2: This is an A.

Conclusion: This is a B.

The quality (or strength, to use the proper term) of this argument depends on the percentage of As that are B. The higher the percentage, the stronger the argument. This makes good sense: the more As that are Bs, the more reasonable it is that a specific A is a B. Now, to the simulation argument.

Premise 1: Most worlds are simulated worlds.

Premise 2: This is a world.

Conclusion: This is a simulated world.

While “most” is a vague term, the argument is strong in that if its premises are true, then the conclusion is logically more likely to be true than not. Before embracing your virtuality, it is worth considering a rather similar argument:

Premise 1: Most organisms are bacteria.

Premise 2: You are an organism.

Conclusion: You are a bacterium.

Like the previous argument, the truth of the premises makes the conclusion more likely to be true than false. However, you are almost certainly not a bacterium. This does not show that the argument itself is flawed. After all, the reasoning is quite good and any organism selected truly at random would most likely be a bacterium. Rather, it indicates that when considering the truth of a conclusion, one must consider the total evidence. That is, information about the specific A must be considered when deciding whether or not it is actually a B. In the bacteria example, there are obviously facts about you that would count against the claim that you are a bacterium—such as the fact that you are a multicellular organism.
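The point about total evidence can be made concrete with a small sketch. The following toy model (not from the essay—the population sizes and labels are invented for illustration) shows that a randomly drawn organism from a mostly-bacterial population is very likely a bacterium, but that conditioning on a further piece of evidence (being multicellular) completely overturns that conclusion:

```python
import random

# Hypothetical toy population: 95% bacteria, 5% multicellular animals.
population = (
    [{"kind": "bacterium", "multicellular": False}] * 950
    + [{"kind": "animal", "multicellular": True}] * 50
)

random.seed(0)
sample = [random.choice(population) for _ in range(10_000)]

# Base rate: with no further evidence, "bacterium" is the likely verdict
# for a randomly selected organism (this is the statistical syllogism).
p_bacterium = sum(o["kind"] == "bacterium" for o in sample) / len(sample)

# Total evidence: condition on what we know about the specific case.
multicellular = [o for o in sample if o["multicellular"]]
p_bacterium_given_multi = (
    sum(o["kind"] == "bacterium" for o in multicellular) / len(multicellular)
)

print(p_bacterium)               # high, near 0.95
print(p_bacterium_given_multi)   # 0.0 in this toy model
```

The base rate makes the syllogism strong for a random organism, yet the conditional probability given the extra fact drops to zero—which is exactly why the bacteria argument fails for you, and why the simulation argument must also face the specific evidence about this world.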

Turning back to the simulation argument, the same consideration is in play. If it is true that most worlds are simulations, then any random world is more likely to be a simulation than not. However, the claim that this specific world is a simulation would require due consideration of the total evidence: what evidence is there that this specific world is a simulation rather than real? This reverses the usual challenge of proving that the world is real to trying to prove it is not real. At this point, there seems to be little in the way of evidence that this is a simulation. Using the usual fiction examples, we do not seem to find glitches that would be best explained as programming bugs, we do not seem to encounter outsiders from reality, and we do not run into some sort of exit system (like the Star Trek holodeck). Naturally, this is all consistent with this being a simulation—it might be well programmed, the outsider might never be spotted (or never go into the system) and there might be no way out. At this point, the most reasonable position is that the simulation claim is at best on par with the claim that the world is real—all the evidence is consistent with both accounts. There is, however, still the matter of the truth of the premises in the simulation argument.

The second premise seems true—whatever this is, it seems to be a world. It seems fine to simply grant this premise. As such, the first premise is the key—while the logic of the argument is good, if the premise is not plausible then it is not a good argument overall.

The first premise is usually supported by its own stock argument. The reasoning includes the points that the real universe contains large numbers of civilizations, that many of these civilizations are advanced and that enough of these advanced civilizations create incredibly complex simulations of worlds. Alternatively, it could be claimed that there are only a few (or just one) advanced civilizations but that they create vast numbers of complex simulated worlds.

The easy and obvious problem with this sort of reasoning is that it requires making claims about an external real world in order to try to prove that this world is not real. If this world is taken to not be real, there is no reason to think that what seems true of this world (that we are developing simulations) would be true of the real world (that they developed super simulations, one of which is our world). Drawing inferences from what we think is a simulation to a greater reality would be like the intelligent inhabitants of a Pac Man world trying to draw inferences from their game to our world. This would be rather problematic.

There is also the fact that it seems simpler to accept that this world is real rather than making claims about a real world beyond this one. After all, the simulation hypothesis requires accepting a real world on top of our simulated world—why not just have this be the real world?