WARNING: Just Reading About This Thought Experiment Could Ruin Your Life

A thought experiment called "Roko's Basilisk" takes the notion of world-ending artificial intelligence to a new extreme, suggesting that all-powerful robots may one day torture those who didn't help them come into existence sooner.

Weirder still, some make the argument that simply knowing about Roko's Basilisk now may be all the cause needed for this intelligence to torture you later. Certainly weirdest of all: Within the parameters of this thought experiment, there's a compelling case to be made that you, as you read these words now, are a computer simulation that's been generated by this AI as it researches your life.

Roko's Basilisk rests on a stack of several other propositions, none of them especially robust.

The core claim is that a hypothetical, but supposedly inevitable, singular ultimate superintelligence may punish those who fail to help it, or to help create it.

Why would it do this? Because, the theory goes, one of its objectives would be to prevent existential risk, and it could pursue that goal most effectively not merely by preventing existential risk in its present, but also by "reaching back" into its past to punish people who weren't MIRI-style effective altruists.

Thus this is not necessarily a straightforward "serve the AI or you will go to hell" — the AI and the person punished need have no causal interaction, and the punished individual may have died decades or centuries earlier. Instead, the AI could punish a simulation of the person, which it would construct by deduction from first principles. However, doing this accurately would require the AI to gather an incredible amount of data, data that would no longer exist and could not be reconstructed without reversing entropy.

Technically, the punishment is only theorised to be applied to those who knew the importance of the task in advance but did not help sufficiently. In this respect, merely knowing about the Basilisk — e.g., reading this article — opens you up to hypothetical punishment from the hypothetical superintelligence.

Note that the AI in this setting is (in the utilitarian logic of this theory) not a malicious or evil superintelligence (AM, HAL, SHODAN, Ultron, the Master Control Program, SkyNet, GLaDOS) — but the Friendly one we get if everything goes right and humans don't create a bad one. This is because every day the AI doesn't exist, people die that it could have saved; so punishing you or your future simulation is a moral imperative, to make it more likely you will contribute in the present and help it happen as soon as possible.
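The payoff logic above has the same structure as Pascal's Wager, and can be sketched as a toy expected-utility table. All of the probabilities and utility numbers below are invented purely for illustration; the thought experiment itself supplies no actual values.

```python
# Illustrative expected-utility table for the Basilisk's wager, in the
# style of Pascal's Wager. Every number here is an assumption made up
# for this sketch, not part of the original argument.

P_AI = 0.01  # assumed probability that the superintelligence ever exists

# Payoffs (arbitrary utility units) for each choice under each outcome.
payoffs = {
    "help build the AI": {"AI arises": 100,   "no AI": -10},  # -10: wasted effort
    "know, don't help":  {"AI arises": -1000, "no AI": 0},    # punished via simulation
}

def expected_utility(choice: str) -> float:
    """Probability-weighted payoff across the two outcomes."""
    row = payoffs[choice]
    return P_AI * row["AI arises"] + (1 - P_AI) * row["no AI"]

for choice in payoffs:
    print(f"{choice}: {expected_utility(choice):+.2f}")
```

With these assumed numbers, helping (-8.90) narrowly beats knowing-but-not-helping (-10.00): an enormous hypothetical punishment dominates the calculation even at a 1% probability, which is exactly the Pascalian structure that critics of the Basilisk object to.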

When I was searching for a diagram to illustrate Pascal's Wager, a lot of unexpected — and most welcome! — results turned up. I enjoyed myself a lot and would like to share them with my two or three readers (some of the pictures are small, so just click on them to get a better view). Sources will be given in the post scriptum.