There is some connection between “veil of ignorance”-based thinking (also called reasoning from the original position) and timeless/updateless/functional decision theory. It’s not clear to me whether they are basically the same thing or not.

UDT-like reasoning is also related to the Kantian categorical imperative (see Gary Drescher’s Good and Real for some discussion).

Quotes

Timeless decision theory / updateless decision theory / functional decision theory. Roughly, choosing a policy from behind a Rawlsian veil of ignorance. As I mentioned with accounting for base rates, it might seem from one perspective like this kind of reasoning is throwing information away; but actually, it is much more powerful. It allows you to set up arbitrary functions from information states to strategies. You are not actually throwing information away; you always have the option of responding to it as usual. You are gaining the option of ignoring it, or reacting to it in a different way, based on larger considerations.
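To make the “arbitrary functions from information states to strategies” idea concrete, here is a minimal sketch (my own illustration, not from the quoted source) using the standard counterfactual mugging payoffs, which are assumed numbers: rank whole policies by their ex ante expected payoff, before the coin flip is observed, rather than deciding after seeing it.

```python
from itertools import product

# A minimal sketch, assuming the standard counterfactual mugging setup:
# a fair coin is flipped; on heads you are asked to pay $100; on tails a
# reliable predictor pays you $10,000 iff your policy pays on heads.
decision_points = ["heads_and_asked_to_pay"]
actions = ["pay", "refuse"]

def expected_value(policy):
    """Ex ante expected payoff, averaging over the coin flip before it is seen."""
    pays = policy["heads_and_asked_to_pay"] == "pay"
    heads_branch = -100 if pays else 0      # cost of paying when asked
    tails_branch = 10_000 if pays else 0    # the predictor rewards would-be payers
    return 0.5 * heads_branch + 0.5 * tails_branch

# A policy is an arbitrary function from information states to actions.
policies = [dict(zip(decision_points, choice))
            for choice in product(actions, repeat=len(decision_points))]

best = max(policies, key=expected_value)
print(best, expected_value(best))  # {'heads_and_asked_to_pay': 'pay'} 4950.0
```

The agent is not throwing away the coin-flip information; the chosen policy still sees the observation, but the ranking of policies is done from the earlier, less-informed vantage point.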

In a post on the Open Phil blog, Holden Karnofsky talks about representing worldviews as agents, then says:2

We can further imagine deals that might be made behind a “veil of ignorance” (discussed previously). That is, if we can think of some deal that might have been made while there was little information about e.g. which charitable causes would turn out to be important, neglected, and tractable, then we might “enforce” that deal in setting the allocation. For example, take the hypothetical deal between the long-termist and near-termist worldviews discussed above. We might imagine that this deal had been struck before we knew anything about the major global catastrophic risks that exist, and we can now use the knowledge about global catastrophic risks that we have to “enforce” the deal - in other words, if risks are larger than might reasonably have been expected before we looked into the matter at all, then allocate more to long-termist buckets, and if they are smaller allocate more to near-termist buckets. This would amount to what we term a “fairness agreement” between agents representing the different worldviews: honoring a deal they would have made at some earlier/less knowledgeable point.

(I’m actually curious why Karnofsky doesn’t mention functional decision theory in his post, since I would guess he knows about it. Is it because he doesn’t want to be associated with MIRI?)

The above is basically the same kind of reasoning that Wei Dai calls “UDT-like reasoning”. Interestingly, Dai uses this reasoning to reach the conclusion that one might care less about astronomical waste, while Karnofsky uses it to give more weight to long-termist worldviews (since they are relatively more neglected).4
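As a toy illustration of the “fairness agreement” in Karnofsky’s quote above, the rule below fixes an allocation function before looking at the evidence and then feeds the observed evidence into it; the specific numbers (a 10% prior expectation of risk, a 50/50 baseline split, a 25-point adjustment) are hypothetical, not anything from the quoted post.

```python
# A toy sketch of a pre-agreed "fairness agreement" allocation rule: the
# function is fixed behind the veil, and the later evidence only gets fed
# into it, rather than the split being renegotiated after the fact.
def allocation(observed_risk: float,
               expected_risk: float = 0.10,
               baseline_longtermist_share: float = 0.50) -> float:
    """Return the long-termist share of the budget under the pre-agreed rule."""
    if observed_risk > expected_risk:
        # Risks turned out larger than could reasonably have been expected
        # ex ante: shift the allocation toward the long-termist bucket.
        return min(1.0, baseline_longtermist_share + 0.25)
    # Risks turned out smaller than expected: shift toward the near-termist bucket.
    return max(0.0, baseline_longtermist_share - 0.25)

print(allocation(observed_risk=0.20))  # 0.75: larger risks than expected
print(allocation(observed_risk=0.02))  # 0.25: smaller risks than expected
```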

A post by Carl Shulman also mentions both the veil of ignorance and acausal decision theories (although the two are not discussed together):5

However, from behind a veil of ignorance, before learning about the existence of large inaccessible populations, they might have preferred a deal in which their precepts would be followed in a case where great good could be done by their lights, e.g. an Adam and Eve scenario, in exchange for deferring to other concerns in worlds with big inaccessible populations.

The lesson of the dual-simulation transparent-boxes problem is thus consistent with the proposal of John Rawls (1999). Rawls advocates choosing a social policy as though under a veil of ignorance about your station—that is, you should choose a policy that you would want (for your sake) to be in place if you were unaware of your actual circumstances, as though you were betting on the entire range of possible events that contributed to your present circumstances. The dual-simulation discussion offers an abstract decision-theoretic justification for betting on such a range of possibilities, regardless of which of those possibilities is already known to have come about.
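The following sketch (my own, with assumed payoffs and a perfectly reliable predictor; it captures the policy-level logic rather than the exact dual-simulation setup described in the quote) shows why ranking whole policies leads to one-boxing even when the box’s contents are already visible: the policy that one-boxes upon seeing the full box is the one that gets the full box in the first place.

```python
from itertools import product

# Transparent-boxes structure with assumed payoffs: a small box always holds
# $1,000; the big box holds $1,000,000 iff the predictor expects the agent's
# policy to take only the big box upon seeing it full.
observations = ["box_full", "box_empty"]
actions = ["one_box", "two_box"]

def outcome(policy):
    box_filled = policy["box_full"] == "one_box"
    obs = "box_full" if box_filled else "box_empty"
    big = 1_000_000 if box_filled else 0
    small = 1_000 if policy[obs] == "two_box" else 0
    return big + small

policies = [dict(zip(observations, combo)) for combo in product(actions, repeat=2)]
best = max(policies, key=outcome)
print(best, outcome(best))
# {'box_full': 'one_box', 'box_empty': 'one_box'} 1000000
# (the box_empty entry never matters for this policy, since its box is full)
```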

An agent may also believe that her decision to extort someone makes it more likely (via correlated decision-making) that others extort her. Using an acausal decision theory, she may view this as a (potentially strong) reason to refrain from extortion, unless she gains sufficient confidence that she herself will only ever be the one making threats. Even in that case, an updateless agent might reason that in the original position, she was equally likely to be threatened as to be the one threatening. Under the assumption of sufficiently strong correlation with other decision-makers, this potentially implies (similarly to the Counterfactual mugging problem) that she should never use extortion, even if she happens to find herself in a situation where she would profit from it.

From an original position, i.e. a perspective from which we do not yet know which position in the multiverse we will take, how many resources will be at our disposal, etc., it seems reasonable to give equal weight to all utility functions. Updatelessness gives this argument some additional appeal, as it asks us to make our decisions from a similar perspective.
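A small illustration (my own; the utility functions and numbers are made up) of the step from the original position to equal weights: if, behind the veil, you think you are equally likely to end up with any of the candidate utility functions, then maximizing your ex ante expected utility is the same as maximizing their equal-weighted average.

```python
# Toy utility functions over two outcomes; all values are hypothetical.
utility_functions = {
    "u_near_term": {"outcome_A": 10, "outcome_B": 2},
    "u_long_term": {"outcome_A": 1,  "outcome_B": 9},
    "u_suffering": {"outcome_A": 3,  "outcome_B": 6},
}

def ex_ante_expected_utility(outcome):
    # Uniform (equal-weight) mixture over which utility function turns out to be yours.
    return sum(u[outcome] for u in utility_functions.values()) / len(utility_functions)

# Choosing to maximize this expectation is choosing by the equal-weighted average.
print(max(["outcome_A", "outcome_B"], key=ex_ante_expected_utility))  # outcome_B
```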

Paul Christiano’s “On SETI” also mentions both the veil of ignorance and UDT.