Monday, November 23, 2009

Warning: This post is very long and goes quite a bit further than what I originally set out to do, namely to defend a narrow point about the implications of a prisoner’s dilemma in a society of perfectly rational agents. I tried to posit a few general implications, and the whole thing got out of hand. Anyway, the next paragraph basically starts where I originally wanted to start. Get some snacks before you start reading; this is going to be long.

In my previous post, I said something so briefly that people did not seem to get my point. Reproduced here:

20. One should note that things like the prisoner's dilemma and the tragedy of the commons seem to posit a conflict between individual rationality and group rationality. The payoff for any individual for defecting is always more than that for cooperating; however, mass cooperation yields a larger payoff. It seems as if the simplest way for cooperation to be individually rational as well as globally rational is if each person internalises the "externalities", namely, that the payoff that the "opponent" receives is also reflected back on the person. E.g. if by defecting I receive a payoff of +10 and my opponent -10, I must internalise everything by incorporating his payoff into mine. Then, a mutual cooperation where each of us receives a +5 payoff would be more rational, as by internalising, I've got a net payoff of +10 instead of 0.
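The arithmetic in point 20 can be sketched in a few lines. This is only an illustration of the internalisation move, using exactly the payoffs from the text (+10/-10 for unilateral defection, +5 each for mutual cooperation); the function name `internalised` is mine, not the post's.

```python
def internalised(own, opponent):
    """Internalise the externality: count the opponent's payoff as one's own."""
    return own + opponent

# Raw (selfish) comparison: defecting (+10) beats cooperating (+5).
assert 10 > 5

# Internalised comparison: cooperation (+5 +5 = +10) now beats
# defection (+10 - 10 = 0), as the text says.
assert internalised(5, 5) == 10
assert internalised(10, -10) == 0
assert internalised(5, 5) > internalised(10, -10)
```

The point of the sketch is just that the same two outcomes flip their ranking once the opponent's payoff is folded into one's own.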

21. There are a variety of ways in which we could interpret this requirement to internalise the costs. One way is by supposing that other people's payoffs really do matter. To be very clear, I am not assuming some notion of the good. Instead, I am concluding that some notion of the good is necessary in order to resolve the paradoxes inherent in prisoner's dilemma situations.

I will try in this post to defend the following thesis:

There are rational paradoxes (e.g. hedonic paradoxes) whereby achieving an end A is hindered by actively pursuing A (or A only), and A can instead be achieved by pursuing some other end B or cultivating some disposition C.

If an ideal evaluator (perfectly knowledgeable and rational) necessarily held disposition C or pursued end B (in order that A be achieved), then it follows that B is an end in itself worth pursuing, or that C is a fitting disposition to have (little difference, really: an end being valuable conceptually entails a disposition being fitting).

1. We must consider precisely what it means when we say someone is perfectly rational.

1.1 A person is rational to the extent that he is sensitive (i.e. responsive) to reasons. Note that in itself, this says nothing as to what the correct reasons are. A theory can specify the reasons as broadly or as narrowly as it wants. However, 1.1 is a conceptual truth, and not a controversial one.

However, there are theoretical implications.

1.1.1 Let's take the case of act utilitarianism, where pleasure itself is reason-giving. In this case, a fitting disposition is one which is sensitive to the consequences of acts in terms of how much pleasure they generate, i.e. a person with this fitting disposition generally tries to maximise pleasure with each act.

1.1.2 However, such a disposition is very inefficient (it is time-consuming to consider all the consequences all the time and work out how much pleasure each would produce) and error-prone (people make lots of mistakes and are biased in favour of their own pleasure), so it is better to cultivate other dispositions or act according to rules of thumb. For example, an indirect consequentialist would advise you to cultivate the sort of dispositions Aristotelians recommend: justice, prudence, courage etc. However, the only reasons classical utilitarians acknowledge are means-ends reasoning and pleasure. These dispositions are thus fortunate, in that they result in the agent acting in accordance with what the theory describes as reasons to act. But these fortunate dispositions are not fitting: they almost never involve any consideration of how much pleasure an act generates, or of how to achieve the end of maximising pleasure. An agent who has these dispositions is irrational, but morally fortunate: even though he is not sensitive to reasons, he tends to do the things he really morally ought to do. This of course puts indirect act consequentialism in the very awkward position of having a rational agent (one with fitting dispositions) try to cultivate merely fortunate dispositions and thereby become an irrational agent. (H/T to Richard Chappell from Princeton.) This seems like a strong criticism of act consequentialism.

We have wandered quite far afield here. Nevertheless, it is certainly true that a perfectly rational agent is perfectly sensitive to reasons.

1.2 The reasons in question definitely involve means-ends reasoning, i.e. reasons relating to finding the best means to a given end. However, there is also the question of whether there are any reasons to prefer some ends over others.

1.2.1 It would be uncontroversial to say that we may not want to pursue some of our ends because they would inhibit or interfere with other ends dearer to us.

1.2.2 While it is certainly controversial to assume that there are reasons to adopt some particular definite ends, it is also questionable to flat-footedly assert that there can be no such reasons; that would require arguing that any such reason would in fact be contradictory. What we can do is judge whether any particular candidate reason to adopt specific ends works on its own merits.

1.2.3 I.e. if you are forced to conclude that we have reason to adopt some particular end, this is not by itself reason to think that the whole line of argument has gone wrong.

2. This is roughly the form my argument is going to take:

2.1 A perfectly rational agent necessarily has some other-regarding disposition X.

2.2 Such a disposition is not fitting if we only consider the kind of self interested reasons that can be uncontroversially asserted about all agents, or all humans/mortals.

2.3 However, all of a rational agent's dispositions are fitting. This follows from the argument set up in (1), specifically the conclusion obtained in 1.1.2.

2.4 It must be the case not only that a rational agent will have disposition X, but that X stands in an appropriate relation to reasons.

2.4.1 For example, if a rational agent were to have the otherwise inexplicable disposition to increase the number of sheep in Texas, then, barring some further explanation, the simplest explanation must be that Texan sheep are genuinely valuable.

2.5 The nature of such reasons, which go beyond means to particular ends, is that they are tied to features of the situation, i.e. becoming fully aware of the feature in every relevant way would motivate the properly reasoning agent to respond to it appropriately. We could formalise the structure of such reasons as follows: feature Y demands B-ing, where Y is a feature of the world and B is an action directed at Y. For example, utilitarians would say that pleasure demands maximising, Kant would say that rational natures demand respect, conservatives would say that traditions demand preserving, and theists would claim that sins demand avoiding.

3. In order to proceed, I need to actually demonstrate that there are some dispositions which cannot be explained under means-ends reasoning (MER). Some obvious paradoxes in means-ends reasoning include hedonic paradoxes and consequentialist paradoxes: cases where common wisdom tells us that actively pursuing our own happiness, or actively trying to maximise pleasure, often falls short of the goal. Achieving success in these ends often requires us to forget about them and do other things, e.g. immersing ourselves in charity work (for the former) or using commonsense morality (for the latter). However, it seems that these are merely products of our own incompetence; it is not clear that a perfectly rational and knowledgeable agent would in fact suffer from these problems. It is my contention that the prisoner's dilemma is precisely such a case, where perfectly reasoning self-interested agents can face paradoxical circumstances.

3.1 The prisoner’s dilemma will take place in a society of ideal agents. i.e. one where all agents are perfectly knowledgeable and are able to reason perfectly.

3.2 It is not at first clear what this society would look like. The society may just be us, except that all of us know all the facts and are able to reason perfectly; this would be the most relevant instantiation of the hypothetical. However, we may want to make some adjustments. If we were all perfectly rational, it is almost certain that we would not be doing the things we are doing now. We currently compensate for many of our weaknesses using various heuristics; when idealised, we would not have to do so. Maybe we wouldn't need a state, or maybe we still would. The exact shape of this society is not known. But one thing we can be sure of is that there won't be any strange utility monsters, or any strange creature demanding that we adopt certain dispositions lest it kill us all, or any of the standard list of monsters that consequentialists like to throw at each other. (See, I can just stipulate them out of existence!!!)

4. Here I will explain the salient aspects of a prisoner’s dilemma (PD) and explain where the paradox lies.

4.1 A prisoner's dilemma is a game traditionally played by two people, where you and your opponent have two options each and the payoff you receive depends on the exact combination of your move and your opponent's.

4.1.1 The traditional story goes like this: two thieves have robbed a bank and hidden the money, $10,000,000. Both of the thieves are subsequently brought in for questioning, where they are questioned separately. If both thieves keep silent (cooperate), the police let both of them go and they can later collect the money, $5,000,000 each. If only one of them defects by ratting out his friend, he gets away scot-free and gets the whole 10 million, while his friend (ex-friend by now) is thrown into prison for 10 years. If both rat each other out, they each get 5 years in jail.

4.2 To formalise the whole system, there are 4 outcomes based on whether you or your friend cooperate or defect.

4.2.1 Both cooperate: payoff is A

4.2.2 You cooperate, your friend defects: payoff is B

4.2.3 You defect, your friend cooperates: payoff is C

4.2.4 Both defect: payoff is D

4.2.5 C > A > D > B is the weak (necessary) requirement for the situation to be considered a prisoner's dilemma. For a single-round game, this condition is also sufficient. (For multiple rounds with the same partner, an additional condition, 2A > B + C, is required to prevent the winning strategy from being alternating cooperation and defection.)

4.2.6 Because C > A and D > B, whether or not your opponent defects, you can always improve your payoff by defecting.

4.2.7 However, A > D. There is a better payoff if both cooperate than if both defect.
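The structure of 4.2.5 through 4.2.7 can be checked mechanically. The sketch below uses illustrative numbers of my own choosing (3, 0, 5, 1) that satisfy the ordering C > A > D > B; any values respecting that ordering behave the same way.

```python
# Payoff matrix for the row player, keyed by (my move, opponent's move).
# Letters in comments match the labels in 4.2.1-4.2.4.
PAYOFF = {
    ("C", "C"): 3,  # A: both cooperate
    ("C", "D"): 0,  # B: I cooperate, opponent defects
    ("D", "C"): 5,  # C: I defect, opponent cooperates
    ("D", "D"): 1,  # D: both defect
}

# 4.2.6: whatever the opponent does, defecting improves my payoff...
for opp in ("C", "D"):
    assert PAYOFF[("D", opp)] > PAYOFF[("C", opp)]

# 4.2.7: ...yet mutual cooperation beats mutual defection.
assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]

# The iterated-game condition 2A > B + C from 4.2.5 also holds here,
# so alternating cooperation/defection cannot beat mutual cooperation.
assert 2 * PAYOFF[("C", "C")] > PAYOFF[("C", "D")] + PAYOFF[("D", "C")]
```

The two assertions that both pass are exactly the paradox: defection dominates move by move, while cooperation dominates outcome by outcome.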

4.2.8 To recap, given that an agent knows everything and is reasoning perfectly, there is no action he could have taken that would make him better off with respect to his goals than those actions he has taken.

4.2.9 Therefore, it would be rational to X iff X-ing maximises the success of the agent with respect to all his ends.

4.2.10 Note that we can spell out exactly what the payoffs represent. They may, for example, represent the agent's own happiness, or his own and that of his closest relatives. If the latter, however, the opponent cannot be those same relatives the player is concerned about; and it would be a rather strange interaction in which the happiness of the opponent's loved ones is directly affected when the interaction is with the opponent himself. Similarly, the payoff cannot simply be the general welfare of everyone either: if the payoffs were arranged so, it would be impossible for them to conform to the requirements of the prisoner's dilemma, since any gain in the general welfare (i.e. the agent's payoff) would similarly increase the opponent's payoff. Keep in mind that this is only a limitation when we are setting up a prisoner's dilemma. Solving the prisoner's dilemma always involves dissolving it, i.e. changing the game so that the payoffs are different, either by re-specifying what the payoffs reflect or by introducing various incentive-changing practices like punishment.

4.2.11 For the moment, let's specify that the payoff is one's own happiness, i.e. a selfish agent can always increase his payoff by defecting.

4.2.12 It is rational for a selfish person to defect.

4.2.13 In a society of perfectly rational and knowledgeable selfish agents, all agents would defect.

4.2.14 But everybody could do better if they all cooperated.

4.2.14.1 Note that if cooperation were rational, then everybody would cooperate, since they are all perfectly rational.

4.2.14.2 The unanimity (everyone defects or everyone cooperates) always follows from the premise that everyone is rational. It seems that insofar as we can analyse a situation through the lens of PD in a society of ideal agents, an action is rational iff the maxim behind the action, if made a universal law, is consistent with the ends of the agent in question, i.e. it is rational iff the maxim can be willed to be universal law.

4.2.14.2.1 This may in fact extend to all games and not just PD.

4.2.15 Selfishness is self-defeating. Caring only about your own happiness means that the actions taken thereby have not necessarily maximised your happiness.

4.2.16 To wit, given that everybody cares about their own happiness, everything else being equal, everybody does better by cooperating.

4.2.16.1 It is taken as a given that all agents do in fact desire their own happiness.

4.2.16.2 We can also take everything else to be equal. The only ends that can be better achieved by defecting are one's own happiness and the opponent's unhappiness, the latter rarely, if ever, being desired for its own sake.

4.2.16.3 It seems that one cannot consistently desire one's own happiness and one's opponent's unhappiness at the same time.

4.2.17 Given that the society of ideal agents is one where they cannot do better, it follows that it is one where they all cooperate.

4.2.18 Since each agent is rational and each agent cooperates, it is rational to cooperate.

4.2.19 But consideration of only one’s self interest fails to provide sufficient reason to cooperate since one can always increase one’s payoff by defecting.

4.2.19.1 Neither can an agent argue that since his actions are necessarily rational, everyone's will mirror his, and therefore cooperating will in fact produce the better outcome. The reason is primary, and the agent will act only if he has reason to do so. If there are no considerations other than self-interest, there is no reason why the agent cannot in fact improve his payoff by defecting.

4.2.20 Points 4.2.18 and 4.2.19 distil the paradox to its essence.

4.3 Here are a number of reasons why PD is relevant:

4.3.1 Tragedy of the commons is an example of PD with multiple players. Defection refers to over-use of a common resource such that the resource supply starts to fail (e.g. over-fishing destroys the ecosystem). Cooperation is simply refraining from overuse. The payoffs are simply the sum of all resources extracted over time.
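The multi-player structure in 4.3.1 can be sketched with a toy model. The payoff function and all its numbers below are my own illustrative assumptions, not from the text: each of ten agents either restrains herself (cooperates) or overuses (defects) a common resource, and each defector degrades the resource for everyone.

```python
def payoff(my_move, others_defecting):
    """My harvest: overusing extracts more (10 vs 6), but every
    defector, myself included, degrades the common resource by 1
    unit for everyone."""
    base = 10 if my_move == "D" else 6
    total_defectors = others_defecting + (1 if my_move == "D" else 0)
    return base - total_defectors

# Whatever the other nine do, I extract more by overusing...
for k in range(10):
    assert payoff("D", k) > payoff("C", k)

# ...but universal restraint beats universal overuse.
assert payoff("C", 0) > payoff("D", 9)
```

This is the same dominance-versus-unanimity structure as the two-player matrix, just with one opponent replaced by nine.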

4.3.2 A prisoner’s dilemma basically reflects any situation where one could harm another for personal gain. Note that prisoner’s dilemmas are symmetrical.

5 There are a variety of ways of resolving the paradox detailed in 4.2. All of these ways make it such that it is no longer a PD. Any solution must also make it the case that the disposition to cooperate is a fitting one.

5.1 One way would be to introduce practices like punishment which would incentivise cooperation or disincentivise defection. If defection could be punished, then the payoff from defecting would be reduced.

5.1.1 However, punishment is not always possible. When two people who do not know each other meet briefly at a market to trade, there is no opportunity to retaliate. Yet they would both be better off if they dealt honestly with each other rather than cheating each other with shoddy goods, imitations, counterfeit money etc.

5.2 A disposition to cooperate is fitting iff there are reasons to cooperate or reasons to not defect. Absent any punishment practices, there are a variety of ways we could cash out the reasons for cooperation or non-defection. What follows are some possibilities.

5.2.1 Cooperation (in the PD sense) is an end in itself. Merely being aware of cooperation and all it entails is sufficient to make it the case that it would be irrational to not adopt it as an end.

5.2.2 We could do the same with non-defection, i.e. non-defection is an end in itself. What is characteristic about 5.2.1 and 5.2.2 is that they do not regard the payoffs to others (their happiness) from cooperation and defection. This could be along the lines of recognising that the opponent is a person: it could simply be the case that, from the simple fact that the other guy is a person, we are not to use him as a mere means to an end.

5.2.2.1 Hey, it’s possible! Besides, we are just speculating here.

5.2.3 Other people's happiness is intrinsically valuable. Understanding what happiness is means that we would want to maximise it for everybody and not just ourselves. This principle is sensitive to the payoffs. Discount rates may also apply. It is not obviously unreasonable.

5.3 Just to remind everyone of point 1.2.2. Don’t get your panties in a bunch just because I managed to introduce some end which we rationally ought to adopt.

5.4 Note that these reasons are speculative. What we could do is try to look at all the dispositions that ideally rational agents have and try to come up with the simplest set of principles that would consistently motivate these dispositions.

5.4.1 See what I've done here. There are basically practical reasons and theoretical reasons. That everybody will do what is rational, and that since they do better when they cooperate than when they defect, cooperation is rational, is a theoretical reason. But theoretical reasons are not motivating; only practical reasons are. A practical reason is one like 5.2.3, which says that happiness is intrinsically valuable (valuable simply being what we have reason to desire).

To look back at what I've done in this post so far: I have established that cooperation in a prisoner's dilemma is what rational agents would do; that there therefore have to be practical reasons in favour of cooperating, for which self-interest alone is insufficient; and that there are two types of reasons, practical and theoretical. Practical reasons are those which will motivate rational people to act; theoretical reasons, even if they concern human action, are not motivating. From the theoretical consideration that agents do aim at their own happiness, we derive that since perfectly rational and knowledgeable agents must be maximally successful, and all of them are similarly situated (to cooperate or defect) and similarly rational and knowledgeable, they must all cooperate. Therefore there must be some practical reason/principle which would motivate cooperation, of which I have provided a list while admitting that the entries are indeed speculative. This concludes the bulk of what I set out to do. What follows is a quick foray into whether I can extend these conclusions about people in a symmetrical situation to agents in asymmetric situations, i.e. where the opponent couldn't possibly do anything to the player.

6 Note that whichever of the reasons in 5.2.1–5.2.3 are true, they automatically apply not only to the symmetrical prisoner's dilemma case, but also to asymmetric cases where the other guy cannot defect. Their happiness is still valuable, or they are persons too, etc.

6.1 Note, however, that from a purely theoretical standpoint, there seems to be no paradox of self-interest as there is in the PD case. We cannot simply leave the issue by saying that it is obvious that the other kinds of reasons do in fact apply. We are at least obligated to investigate whether we could justify limiting such practical reasons as could motivate cooperation to the symmetrical situation only.

6.2 The only difference between the two is that in the asymmetric case, the opponent has no choice but to cooperate (not that they are automata, but that their attempted defection wouldn't harm you in any way, nor would their cooperation do anything for you). Think of this as the case where everybody else has the coordination and strength of two-year-olds. Here defection always dominates: it always gives a better self-interested payoff than cooperation, and even if everybody defects, the payoff is better than if everybody cooperates.

6.3 This principle cannot be based on the fact that the opponent cannot retaliate, as even in the one off PD case, retaliation is not possible.

6.4 Is their ability to harm you a sufficient consideration?

6.4.1 One is tempted to argue that it isn't; that there is no logical connection between one being stronger and it being acceptable to transgress against the weaker, especially once we rule out fear of retaliation. However, that would be too stringent a standard. Any reason-giving feature would in fact involve a substantive claim, which would not follow merely logically from the feature. Other people's happiness is not logically connected to any notion of maximisation; claiming that happiness is valuable and therefore ought to be maximised is a substantive claim. Similarly, a claim about strength conferring prerogative would be a substantive claim of the same kind, as is a claim about desert or moral responsibility.

6.5 One could, however, generalise the lesson in point 4.2.14.2: any practical principle/reason upon which we act with respect to our opponent also applies when some third party acts with respect to us, as long as said third party is situated in the same respects to us as we are to our opponent.

6.6 For people who are situated with respect to each other as equals (or approximately so), this article has already demonstrated how this would work.

6.7 For the case described in 6.2, we could do a regress, saying that some other person is situated in such a superior position to you etc etc. However, the regress has to stop somewhere, and it can only stop with some entity that is so potent and powerful that there are no competitors anywhere near. This entity has an effective monopoly on the use of force. We can call this either the Leviathan, or the state, or maybe just a pro wrestler. (Yes, I’m borrowing shamelessly from Hobbes)

6.8 However, we can also note this: people do best when the leviathan does not transgress against them. Therefore any principle which allows or motivates people to transgress against those weaker than them would similarly motivate the leviathan to transgress against them. More generally, any practical principle which motivates an agent with respect to his inferiors would also motivate the leviathan with respect to the agent.

6.9 Since people do best when the leviathan does not transgress against them, they would not similarly transgress against their inferiors. Let the practical principle that motivates this be X.

6.10 For the same reason X, the rational leviathan would not transgress against the people.

7 I think that now, we can generalise the point made in 4.2.14.2.

7.1 If a reason genuinely counts in favour of an action in a particular situation, then it would count similarly for all people who are similarly situated. If it doesn’t, there has to be some principle that explains why.

7.2 Therefore any maxim which acts as a reason would function, in a society of ideal agents as a universal law of nature.

7.3 Because people necessarily desire their own happiness, we can measure success by whether or not people can do any better as far as their happiness is concerned.

7.4 Ideal agents are maximally successful. Being completely rational and knowledgeable, it is in fact impossible for them to do any better than they are doing.

7.5 Therefore the maxims which they act on are those which, when conceived as universal laws of nature, will maximise their happiness.

7.6 This is not different from the categorical imperative which tells us to act on the maxim which we can will to be a universal law of nature.

7.6.1 On the understanding that you cannot will your own unhappiness.

I think that is it for now. Of course, this says nothing of what rational people, or the rational leviathan, would do in the current world. But if, having reached this point, all is agreeable, then we can proceed confidently.

8 Much, however, can be said regarding the maxims that can be willed as universal laws.

8.1 A maxim, if explicitly stated, will have the general form: perform action A under conditions M.

8.1.1 A would be a general imperative e.g. kill a person.

8.1.2 M would be a qualifying condition e.g. if the person has white hair and it would increase the number of sheep in Texas.

8.1.3 For now, let's not quibble about whether the maxim is right or not, although maxims are constrained in that they may not specify a false link between the action and the rationale. Let us presume that killing this particular white-haired man would in fact increase the number of sheep in Texas. There is thus a limitation on what these maxims can say: a maxim cannot be obviously false, in the sense that killing the white-haired man would not in fact increase the sheep in Texas, or that the person is not white-haired. I.e. the conditions M refers to must actually apply. Anyone who did anything to a black-haired person based on a maxim whose stated conditions did not apply could fairly be accused of being utterly irrational.

8.2 Even given the limitations mentioned in 8.1.3, there are an infinite number of semantic variations a maxim could take, i.e. there is nothing in the formal structure of a rational maxim which distinguishes it from an irrational one. They all have the same formal structure.

8.3 There is little or nothing in the semantic content itself (to us) that, apart from determining whether the maxim applies or not, would be indicative of the rationality of the maxim.

8.3.1 Note that being willed as a universal law is not part of the semantic content of a maxim. Conceptual analysis of the content of the maxim yields no information as to whether or not it can be willed as a universal law. Trying to see if it can be willed as a universal law is in fact a synthetic proposition.

8.3.2 The point is that there are few non question-begging ways in which we could reject a maxim based on the semantic content alone.

8.3.3 Just because we cannot properly evaluate the semantic content of a maxim, doesn’t mean that fully rational and knowledgeable agents cannot. In fact, full knowledge of all the facts would apprise the agents of the semantic differences which were important. In fact, it seems that it is because we have special epistemic access to our own happiness, that we find that we necessarily desire it.

8.4 Note that the formula of universal law is not a practical reason in and of itself. It is a mere conceptual tool which we as disinterested observers could use to decide if a particular maxim would be truly motivating to an ideal agent in a society of ideal agents.

8.5 Therefore, if the ideal agent would in fact act on a particular maxim, it must be because of the semantic content of the maxim. I.e. if an ideal agent necessarily would act to increase the sheep in Texas, then there must be some feature of Texan sheep such that agents with full knowledge of it would want to increase their number.

8.6 Texan sheep in the real world would have the same features as Texan sheep in a society of ideal agents. They would have the same reason giving force in both cases.

8.7 There could of course be other features of the case which may involve other maxims, and might change whether an action was right or not, but by looking at how all the features play out in the ideal setting, we could determine how those features which remain invariant play out in the real world.

8.7.1 That is, even though there could be other principles involved, the principles and maxims which have reason-giving force in the idealised world have reason-giving force in the actual world.

8.7.2 Consider the PD case again. In the ideal world, it is a fact that the ideal agent cooperates. It is also a fact that self-interest alone would motivate the agent to defect. Therefore, there must be some principle A, based on some feature of the situation, that overrides self-interest in PD and all other relevantly similar situations. In order to justify defecting in the real world, there must be some additional principle B which is neither self-interested nor parasitic on such notions. It is doubtful that there is any principle B which could in fact do this.

8.8 What talking about a society of ideal agents allows us to do is talk about at least some of the features of the world which have reason-giving force. If we find that an ideal agent in a society of other ideal agents necessarily cares about a lot of things other than just herself (let's call these things X), then she does not simply stop caring about those things just because the situation changes such that the people around her are not reasoning properly, or are ignorant in various ways. I.e. she may find that there are other considerations as well, but she cannot cease to care about X.

8.9 At this point, I may as well distinguish between reasons and the good. It may in fact be the case that happiness is simply the good. But whether or not concepts like desert and need are genuinely reason-giving, they in themselves are not the good. Instead of being additional goods to promote, these concepts weigh in favour of or against the provision, withholding, or alienation of the goods with respect to certain people, in specific instances and to specific extents. I.e. they transform a utility function in a very localised manner. The value of giving a murderer pleasure becomes negative because he is not deserving of the pleasure, not because there is an additional disvalue which outweighs the hedonic value. The task of a future post will be to determine how these concepts, which at the moment are at best intuitions, can be properly justified within the given framework. Scanlonian considerations might be informative.