A future society will very likely have the technological ability and the motivation to create large numbers of completely realistic historical simulations and be able to overcome any ethical and legal obstacles to doing so. It is thus highly probable that we are a form of artificial intelligence inhabiting one of these simulations. To avoid stacking (i.e. simulations within simulations), the termination of these simulations is likely to be the point in history when the technology to create them first became widely available (estimated to be 2050). Long-range planning beyond this date would therefore be futile.

This is a nearly perfect abstract. My only complaint is that it does not accurately reflect what is actually in the paper.

“A future society will very likely have the technological ability … to overcome any ethical and legal obstacles…”

I imagine this would have to depend on some sort of robot lawyers (for the latter obstacles) and robot philosophers (for the former, or perhaps just robot op-ed columnists, depending on the nature of the ethical obstacles). I hope this future isn't too near, because there are only so many jobs for philosophers as it is.

Jenkins notes in his conclusion that one major reason for stopping the simulation at the point where nested simulations become possible is that

a historical simulation that is set in a period when the necessary simulation technology already exists would tend to stymie any efforts to make the simulated entities unaware that they exist in a simulation. This lack of awareness is necessary for the simulation to run effectively, otherwise the behavior of the AIs inhabiting the simulation would not be genuine and the basic purpose of the simulation would not be accomplished.

Next paper topic: is it ethical to publish a paper that, by raising awareness that we all live in a simulation, is likely to help bring about the end of the world?