An introduction to Timeless Decision Theory

So far this sequence has introduced two of the principal decision theories, Causal Decision Theory and Evidential Decision Theory. It has explored the claims, accepted by many people on Less Wrong, that Causal Decision Theory fails at Newcomb’s Problem and Evidential Decision Theory fails at the Smoking Lesion Problem. It has then looked at a series of other scenarios that can be used to test the effectiveness of a new decision theory. We’re now ready to look at one of the new decision theories developed by people connected to Less Wrong: Eliezer’s Timeless Decision Theory.

At this stage, it’s worth noting that Timeless Decision Theory is a work in progress. The version presented here is the one that was first introduced to Less Wrong and the one that has consequently been explained in the most detail. However, since that time, Eliezer has been developing a better version of the theory, so this post should be seen only as introducing the basic idea. For a more up-to-date, more authoritative view of Timeless Decision Theory, see the Less Wrong Wiki article on the topic (http://wiki.lesswrong.com/wiki/Timeless_decision_theory), which links to posts by Eliezer and others and will presumably be updated as new versions are made public.

A non-technical view of Timeless Decision Theory

There are a number of people who would be well qualified to present a technical introduction to Timeless Decision Theory. I am not one of them. Eliezer has posted on this topic a number of times (see the above link) and Timeless Decision Theory has been regularly discussed in comments on Less Wrong. Of course, it’s the technical details that give the theory its power, so what follows should be seen only as a basic-level introduction that might make it easier to then understand Eliezer’s more sophisticated presentations.

Timeless Decision Theory is basically Causal Decision Theory with a modified sense of causality. Take Newcomb’s Problem: here two processes occur.

1.) Omega predicts your choice.

2.) You make your choice.

Timeless Decision Theory states that both of these have a common cause: An abstract computation that is being carried out. Eliezer uses the example of two calculators, separated so that they aren’t causally interacting in the usual sense of the word. If you enter a multiplication question you don’t know the answer to (239 x 338) on one calculator and press equals you get an answer. Despite the lack of causal connection, you are now relatively confident that this is the answer that the other calculator would give to the same question. The reason for this is that they are both carrying out the same abstract computation. That is to say, they are both causally linked via this computation.
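The two-calculator example above can be sketched in code. This is a toy illustration (the function names are my own, not from the source): two separate "calculators" that instantiate the same abstract computation must agree on every input, even though they never causally interact in the usual sense.

```python
def calculator_a(x, y):
    # One physical instantiation of the multiplication computation.
    return x * y

def calculator_b(x, y):
    # A separate, causally disconnected instantiation of the SAME
    # abstract computation.
    return x * y

if __name__ == "__main__":
    # Learning the answer from one calculator tells you what the other
    # would display, because both are linked via the shared computation.
    print(calculator_a(239, 338))                            # 80782
    print(calculator_a(239, 338) == calculator_b(239, 338))  # True
```

The point of the sketch is that the agreement between the two outputs is guaranteed by the shared computation alone, not by any physical signal passing between the calculators.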

Timeless Decision Theory then says: decide as if you are deciding the output of that abstract computation, and hence as if you are deciding the output of every other instantiation of that computation.

What this means in practice is best seen in Newcomb’s Problem. When you make your decision as to whether to one-box or two-box, you now have to make that decision as if you are deciding the output of a certain computation. But when Omega predicted your behaviour, he also carried out this same computation (it’s important to note that this isn’t the same as saying he simulated you).

So according to Timeless Decision Theory, you should act as if you’re deciding a computation which determines both your choice and Omega’s prediction of your choice. Since you’re acting as if you’re deciding Omega’s prediction, Timeless Decision Theory suggests that you one-box, because you are thereby determining that Omega’s computation led him to put $1,000,000 in the left-hand box rather than $0.
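The Newcomb reasoning above can be sketched as a toy payoff comparison. The sketch assumes an idealised setup in which Omega's prediction is an output of the same computation as your choice, so the two always match; the function and payoff amounts follow the standard presentation of the problem.

```python
def payoff(choice):
    # Under the TDT view, 'choice' is the output of one abstract
    # computation that also produced Omega's prediction, so the
    # prediction always matches the choice.
    prediction = choice
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    transparent_box = 1_000
    if choice == "one-box":
        return opaque_box
    else:  # two-box: take both boxes
        return opaque_box + transparent_box

print(payoff("one-box"))  # 1000000
print(payoff("two-box"))  # 1000
```

Because the prediction covaries with the choice, one-boxing dominates: the "extra" $1,000 from two-boxing is never worth forfeiting the $1,000,000 that the shared computation would otherwise place in the opaque box.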

To summarise: When you make a decision, you are carrying out a computation. Timeless Decision Theory says that, rather than acting like you are determining that individual decision, you should act like you are determining the output of that abstract computation.

Conclusion

Timeless Decision Theory gets the correct answer to Newcomb’s Problem and to the Smoking Lesion Problem. It takes the strengths of Causal Decision Theory while attempting to deal with its weaknesses. It also resolves many of the other decision problems mentioned. For example, if two agents using Timeless Decision Theory met for a Prisoner’s Dilemma, they would cooperate rather than defect.
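The Prisoner’s Dilemma claim can be sketched in the same toy style. The key assumption (labelled in the comments) is that both agents instantiate the same decision computation, so each agent reasons that its output fixes the other’s output too; only the symmetric outcomes are then reachable, and the comparison reduces to mutual cooperation versus mutual defection.

```python
# Standard one-shot Prisoner's Dilemma payoffs: (my payoff, their payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tdt_choice():
    # ASSUMPTION: both agents run the same abstract computation, so my
    # output equals the other agent's output. Only the symmetric
    # outcomes (C, C) and (D, D) are reachable, and I pick the action
    # that maximises my payoff among them.
    return max(["C", "D"], key=lambda a: PAYOFFS[(a, a)][0])

print(tdt_choice())  # C  (mutual cooperation pays 3, mutual defection 1)
```

A Causal Decision Theory agent, treating the other player’s choice as fixed, would instead note that defection dominates row by row; the TDT sketch avoids that conclusion precisely because it discards the assumption that the two choices vary independently.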

That’s not to say it solves everything. Eliezer has explored some problems it still doesn’t provide the desired answers to in a post and Gary Drescher has outlined another issue. However, given that Eliezer is currently writing a paper on Timeless Decision Theory, it seems likely that some of these issues may be resolved soon.

That concludes this sequence on Less Wrong and decision theory: not because it has covered everything that is discussed on Less Wrong, but because this sequence wasn’t intended to replicate everything discussed there. Rather, it was meant to give a non-technical introduction that would satisfy people’s curiosity (if they didn’t want to look any further) or would give them a basic understanding of the issues so they could focus on the technical aspects more easily in further investigations.

Thanks for reading. Feel free to leave comments letting me know what you think about whether this style of sequence was of any value.
