Cooperation Un-Veiled

Contractualism tries to derive morality from an agreement that even selfish agents would willingly sign if they knew about it. In theory, you would gain from such an agreement, since the costs of not being able to behave unethically towards others would be at least balanced by the benefits of other people not behaving unethically to you.

Such attempts crash into the brick wall that not everybody would, in fact, sign such an agreement. For example, the King might reasonably argue that he is able to reap the benefits of oppressing lots of people, but almost nobody can oppress him. To give another example, rich people might feel no need to give to charity, since they don’t need anyone else to give charity to them.

One classic solution to the problem is Rawls’ “veil of ignorance”. Rawls asks: what if we have to make the agreement before we know who exactly we’re going to be? The future King, not knowing he will be born a King, will agree oppression is bad along with everyone else; the future rich, not knowing they will be rich, will want to create a strong social safety net and tradition of charitable giving.

The great thing about this thought experiment is that it works pretty well to get us what we want – assuming a veil at just the right spot, we end up with something like utilitarianism being in everyone’s best interests.

The bad thing about the thought experiment is that there is not, in fact, a veil of ignorance. There’s just a King, who when asked will tell you he knows perfectly well he’s a King and would like to keep on oppressing people. So what can we do with the universe we actually have?

Here’s a model I have been playing around with recently.

Suppose there is a society of one hundred men, conveniently named Mr. 1, Mr. 2, and so on to Mr. 100. Higher-numbered people are stronger than lower-numbered people, such that a higher-numbered person can always win fights against a lower-numbered person at no danger to themselves. Further, suppose this society has a god who enforces all oaths and agreements, but who otherwise stays out of the picture.

(in order to avoid finicky math distinctions between choosing with replacement and choosing without replacement, it might help to think of these as arbitrarily large clans of people with specified strengths instead. Whatever.)

This society is marked by interactions where two randomly selected people meet each other. Sometimes the people nod at each other and pass each other by. Other times, the stronger of the two people overpowers the weaker one and oppresses them in some way, where an oppression is an interaction where the stronger person gains and the weaker person loses some utility.

One person proposes a rule: “no oppressing anyone else.” How much support does the rule get?

Well, that depends on the character of the oppression. Some oppression can give the oppressor exactly as much utility as it costs the victim – for example, I steal $10 from you, making me $10 richer and you $10 poorer. Other oppression can cost the victim more than it benefits the oppressor – for example, I steal your wallet, which gives me only whatever small change you have in there, but you have to replace all your credit cards and licenses and so on. Still other oppression could help the oppressor more than it hurts the victim – for example, starving Jean Valjean steals a loaf of bread from a rich man.

So let’s be more specific. One person proposes a rule: “No zero-sum oppression.” Who agrees?

Naively – and I’ll challenge this later – Mr. 1 through Mr. 50 agree, but Mr. 51 through Mr. 100 refuse. Mr. 25’s thought process explains why: “In 25% of interactions, I will be the oppressor. In 75%, I will be oppressed. Assuming each interaction trades one of my utils for one of theirs, in a hundred interactions I will on average lose fifty utils. Therefore, I should ban this type of interaction.”
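Mr. 25’s arithmetic generalizes to every rank. Here is a minimal sketch (the function name and the one-util-per-interaction framing are mine, following the text’s with-replacement approximation):

```python
def net_utils_zero_sum(k, n=100):
    """Expected net utils for Mr. k over n zero-sum interactions:
    he gains 1 util as oppressor in k of them and loses 1 util as
    victim in the remaining n - k (the text's approximation)."""
    return k - (n - k)

# Mr. 25 loses 50 utils per hundred interactions; Mr. 99 gains 98.
print(net_utils_zero_sum(25), net_utils_zero_sum(99))  # -50 98

# Everyone who doesn't come out ahead supports the ban
# (Mr. 50 exactly breaks even).
supporters = [k for k in range(1, 101) if net_utils_zero_sum(k) <= 0]
print(supporters[0], supporters[-1])  # 1 50
```
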

Mr. 99, on the other hand, likes this kind of oppression. He thinks “In 99% of interactions, I will gain. In 1%, I will lose. So in a hundred zero-sum interactions, I will on average gain 98 utils. Therefore, I like this type of interaction.”

But Mr. 99 might have a different rule he would agree to. He might say “No oppression so bad that it hurts the victim >100x as much as it helps the oppressor.”

It’s easy to think of examples of this kind of oppression. For example, if I’m having a really bad day and just want to beat someone up, breaking your ribs might make me feel a little bit better, but probably not even one percent as much as it makes you feel worse.

Mr. 99 thinks “In 99% of interactions I will be the oppressor; in 1% I will be the victim. Each time I am the oppressor, I gain one util; each time I am the victim, I lose 100. Therefore, in 100 interactions I will lose on average one util. Therefore, I don’t like this kind of oppression.”

And it’s easy to see that Mr. 1 through Mr. 98 will agree with him and be able to sign this contract.

The logical conclusion is a hierarchy of agreements. Mr. 1 signs an agreement banning all oppression, Mr. 1 and 2 together sign an agreement banning oppression that helps the oppressor less than 50 times as much as it hurts the victim, Mr. 1 and 2 and 3 together sign an agreement banning oppression that helps the oppressor less than 33 times as much as it hurts the victim, and so on all the way to everyone except Mr. 100 signing an agreement banning oppression that helps the oppressor less than 1/100 as much as it hurts the victim. Mr. 100 signs no agreements – why would he?
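The whole hierarchy falls out of a single break-even condition. A sketch under the same with-replacement approximation (the function name is mine):

```python
from fractions import Fraction

def break_even_ratio(k, n=100):
    """The worst oppressor-gain : victim-loss ratio Mr. k tolerates.
    Over n interactions he nets k * gain - (n - k) * loss, so he bans
    any oppression where gain / loss falls below (n - k) / k."""
    return Fraction(n - k, k)

print(break_even_ratio(2))    # 49 -- the "roughly 50 times" agreement
print(break_even_ratio(3))    # 97/3 -- roughly 33 times
print(break_even_ratio(99))   # 1/99 -- roughly the 1/100 agreement
```
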

Before I explain why this doesn’t work, I want to think about what it means in real world terms.

It would replace the one-size-fits-all principle of utilitarianism with the idea of power-based utility ratios. This seems to map onto real-life experience. For example, the King may order his servant to spend hours getting the floor polished absolutely spotlessly. Having a perfectly spotless floor (rather than a very clean floor with exactly one spot) gives the King only a tiny utility gain, but may require many more hours of the servant’s time and labor. That the King can command a large amount of the servant’s utility to improve his own utility only a tiny bit seems a lot like what it means to say there’s a power differential between the King and the servant. If the servant tried to reduce the King’s utility by a large amount in order to improve his own utility by a tiny amount, he would be in big trouble.

I notice this in my own life as well. Last year I worked under a doctor who was consistently late. The way it would work was that he would say “I have a meeting at 8 AM every morning, so you should be in by 9 so we can start work together.” Then his meeting would invariably run to 10, and I would be left sitting around for an hour doing nothing. It might seem that the smart choice would have been for me to just sleep late and arrive at 10 anyway, but suppose one day a week, my boss’ meeting finishes exactly on time. Then if I’m not there, he has to wait for me, and he considers this unacceptable. So if my boss and I value an hour of our time the same amount, it would seem this arrangement implies my boss’ utility is worth at least seven times as much as my own.

There are some features of this power-ratio utilitarianism that are repugnant: the rich seem to be held to a very low standard, whereas the poorer you are, the more exacting a moral standard you’ve got to live up to. That seems like if anything the opposite of how it should be. But other features actually seem better than our current morality – if giving charity to the poor improves their utility 100x as much as it decreases yours, then the 1% have to donate, probably quite a lot.

Enough of that. The reason this doesn’t work is simple. Mr. 1 through Mr. 50 would want to sign the zero-sum agreement. But if he knows the rules of the thought experiment, Mr. 50 can predict that Mr. 51 through Mr. 100 won’t sign the agreement. None of the people who could conceivably oppress him will consider themselves bound by the rule. So he’s not trading his right to oppress others in exchange for others’ right to oppress him, he’s giving up his right to oppress others but should still expect exactly the same amount of oppression as he had before. Therefore, he does not sign.

But now Mr. 49 is in the same position. He knows nobody stronger than he is, including Mr. 50, will sign the agreement. Thus the agreement is useless to him.

And so on by induction all the way to Mr. 2 refusing to sign (it doesn’t matter much for poor Mr. 1 either way).
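The unraveling is mechanical enough to simulate. A sketch (my formalization of the induction: a signer only stays in if someone stronger than him is also bound by the rule):

```python
def stable_signers(n=100):
    """Start from everyone who naively benefits from banning zero-sum
    oppression (Mr. 1 through Mr. n/2), then repeatedly drop the
    strongest remaining signer: nobody stronger than him is bound,
    so the rule costs him his right to oppress and buys him nothing."""
    signers = set(range(1, n // 2 + 1))
    while signers:
        strongest = max(signers)
        if any(s > strongest for s in signers):
            break  # someone stronger is bound -- never actually happens
        signers.remove(strongest)
    return signers

print(stable_signers())  # set() -- the agreement unravels completely
```
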

This produces some weird results. Mr. 99 is no longer willing to accept his “No breaking people’s ribs just to let out some stress” agreement that banned utility exchanges worse than 1:100, because the only person whose help he wants, Mr. 100, isn’t going to sign. That means Mr. 98 won’t sign, Mr. 97 won’t sign, and again, so on all the way down to Mr. 2.

In other words, even the second weakest person in a society has no interest in signing an agreement not to punch weaker people when he’s having a bad day.

But this is a stupid result!

It reminds me of a problem noticed in Iterated Prisoner’s Dilemma. Conventional wisdom says the best thing to do is to cooperate on a tit-for-tat basis – that is, we both keep cooperating, because if we don’t the other person will punish us next turn by defecting.

But it has been pointed out there’s a flaw here. Suppose we are iterating for one hundred games. On Turn 100, you might as well defect, because there’s no way your opponent can punish you later. But that means both sides should always play (D,D) on Turn 100. But since you know on Turn 99 that your opponent must defect next turn, they can’t punish you any worse if you defect now. So both sides should always play (D,D) on turn 99. And so on by induction to everyone defecting the entire game. I don’t know of any good way to solve this problem, although it often doesn’t turn up in the real world because no one knows exactly how many interactions they will have with another person. Which suggests one possible solution to the original problem is for nobody to know the exact number of people.
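The induction can be checked by brute force. A minimal sketch of the backward-induction argument, using conventional payoffs (T=5, R=3, P=1, S=0 are my assumed values; the argument doesn’t depend on them):

```python
# Row player's payoffs: T(emptation) > R(eward) > P(unishment) > S(ucker).
T, R, P, S = 5, 3, 1, 0

def subgame_perfect_plays(rounds):
    """Work backward from the last round. Against an opponent who will
    defect in every remaining round (the equilibrium continuation),
    defecting now (P + continuation) beats cooperating now
    (S + continuation), so defection propagates back to round 1."""
    plays = []
    continuation = 0  # equilibrium value of the rounds still to come
    for _ in range(rounds):
        defect_value = P + continuation
        cooperate_value = S + continuation  # opponent defects regardless
        plays.append('D' if defect_value > cooperate_value else 'C')
        continuation = defect_value  # both defect, each earns P
    return plays

print(subgame_perfect_plays(100).count('D'))  # 100 -- defect every round
```
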

(now I want to write a science fiction novel about a planet full of aliens who are perfect game theorists, but who always behave kindly and respectfully to one another. Then some idiot performs a census, and the whole place collapses into apocalyptic total war.)

It seems like there ought to be some kind of superrational basis on which the two sides in the iterated-100 prisoners dilemma can cooperate. And along the same lines there ought to be some kind of superrational basis upon which everyone in the society of 100 people should stick to some basic utility-ratio principles. But I’m not sure what it would be.

Some other variations of this problem might be more interesting, but I don’t think I’ve got the math ability or the time to think about them as carefully as they deserve:

1. What if all fights contained a random element? For example, suppose your chance of overpowering someone else (and thus being able to oppress them) was your_strength/(your_strength + opponent_strength)? In societies of this type, agreements to ban strongly negative-sum interactions would be more salient for everyone, since even Mr. 100 would have some chance of being beaten in a typical interaction.

2. How about a meta-agreement, in which people say “I agree to sign the agreements requested by people weaker than myself if and only if the people above me agree to sign the agreements benefiting people weaker than themselves?” Such an agreement wouldn’t make sense for Mr. 100, and so Mr. 99 would not sign, and so on down, but is there a superrational solution?

3. What if one type of agreement people were allowed to make was a coalition to gang up against opponents? This seems one of the most important real-world considerations – one of the things that does make Kings behave at least somewhat morally is the knowledge that they will be overthrown if they do not; likewise, some countries implement social welfare systems with the explicit goal of decreasing the poor’s incentive to overthrow the rich (I think Bismarck tried this). On the other hand, it also gives the powerful an incentive to band together to better oppress the weak. I’m pretty sure the effects of this would be impossible to really calculate, but might we lump them together into saying “This is so nondeterministic that no one can ever be sure they’ll end up in the winning as opposed to the losing coalition, therefore they are less certain of victory, therefore they should be more likely to agree to rules against oppression”?
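Variation 1 above is easy to quantify. A sketch (the gain/loss parameters are mine; the win probability follows the formula in the text):

```python
def expected_net(k, gain, loss, n=100):
    """Mr. k's expected net utils from one interaction with a uniformly
    random opponent, winning with probability k / (k + opponent)."""
    total = 0.0
    for opp in range(1, n + 1):
        if opp == k:
            continue
        p_win = k / (k + opp)
        total += p_win * gain - (1 - p_win) * loss
    return total / (n - 1)

# Zero-sum fights still favor Mr. 100, but in a strongly negative-sum
# regime (gain 1, loss 100) even he expects to lose on average, so he
# too has reason to sign the ban.
print(expected_net(100, 1, 1) > 0, expected_net(100, 1, 100) < 0)  # True True
```
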

246 Responses to Cooperation Un-Veiled

I notice this in my own life as well. Last year I worked under a doctor who was consistently late. The way it would work was that he would say “I have a meeting at 8 AM every morning, so you should be in by 9 so we can start work together.” Then his meeting would invariably run to 10, and I would be left sitting around for an hour doing nothing. It might seem that the smart choice would have been for me to just sleep late and arrive at 10 anyway, but suppose one day a week, my boss’ meeting finishes exactly on time. Then if I’m not there, he has to wait for me, and he considers this unacceptable. So if my boss and I value an hour of our time the same amount, it would seem this arrangement implies my boss’ utility is worth at least seven times as much as my own.

When I was much younger (like, age 8-13), this sort of inequity would send me into paroxysms, and I would regularly point it out to my superiors.

I learned not to do so, but I never quite got over the resentment.

An interesting question: how do you anticipate you would act in this specific scenario, when you’re the one with the power?

Possibly the same, depending on how I get power. It might be I have power because my time is worth ten times as much (in terms of monetary compensation) as other people’s (and we assume this difference to be a fair reflection of the value of our work). In that case, it seems fairest to whatever institution is paying us, or to whoever is gaining from our labor, that I make the other person wait (thus wasting on average seven hours of their time) instead of making me wait (thus wasting one hour of my time).

There’s also an altruism angle. In a lot of these cases, these are doctors who have volunteered time out of their busy and lucrative schedule to teach others. I don’t want to disincentivize that by making it inconvenient for them.

Overall I think I’d rather try to arrange things so the issue doesn’t come up, like by not scheduling the thing where I have to work with another person directly after my meeting.

For my part (having had that power previously) I wouldn’t be so ridiculously insensitive and stupid as to connect my employee’s commencement of work to my own meeting times. I can’t see any good reason to do that, when a set commencement would mean my employee could embark on other work while waiting for me to finish my meeting, rather than sitting about doing nothing, and thus would feel valued.

I mean, it’s almost like the doctor in Scott’s example wanted Scott to hate him. One possible explanation is that he was deliberately creating that situation to reinforce his superiority over Scott.

It seems to me that the solution to the 100-iteration-Prisoner’s-Dilemma game isn’t superrational, but subrational. That is, even in this precise situation, the vast majority of people don’t have the game theory to figure out that they really should be defecting all the way, and those who do know that most of the people they’re playing with, not being perfect game theorists, won’t follow their rules, and hence the reduction doesn’t work anyway. See also: why the 100-blue-eyed-men-on-the-island question is so hard, because most people will model the islanders as at least something like people, whereas the problem states explicitly that they’re all actually perfect game theorists.

I suppose my takeaway is that perfect game theorists aren’t actually much like people at all, and if you’re trying to come up with results that are useful for actual people, reasoning using perfect game theorists is kind of silly.

I don’t have any answer to the amoralist challenge. I pursue moral behavior because it is aesthetically pleasing to me. Given the power, I would enforce moral behavior in others, similarly because it is aesthetically pleasing to me.

I don’t think the amoralist challenge has any consistent answer. Whatever your utility function is, it has axioms you can’t justify.

Not to my knowledge. “Emotivism” is not it, although emotivists are likely to hold the view that moral and aesthetic preferences are the same kind of thing.

I haven’t seen this position actually defended in academic moral philosophy, but I may well have just missed it.

Someone I once discussed this with pointed out how much, if you look at it, aesthetic and moral judgments behave alike in many people; they will condemn you for liking the wrong kinds of things in the same way that they condemn you for doing something they consider immoral. People will similarly argue about objective aesthetic worth in the same way as they will talk about objective morality.

Therefore, an epistemic rationalist will be motivated to accept the truth of well justified moral claims.

Therefore, morality is not incompatible with rationality.

That’s about all I need to answer the challenge.

To answer some further questions:

An epistemic rationalist won’t necessarily act on a moral claim she accepts. She can be akratic. Altruistic morality requires her to lose utility in some areas, which may or may not be balanced out in other areas. That doesn’t mean she is irrationally losing net utility every time she acts morally.

I think Hume is facepalming in his grave. That line of reasoning ducks the entire is-ought problem.

EDIT: To clarify, by using truth, valid argumentation, etc, one could potentially come up with a beautifully designed social contract that Mr. 1 through Mr. 100 could agree to, but that has everything to do with utility and nothing to do with morality.

The is-ought divide is really a fact-normativity divide…it isn’t specific to ethical normativity. And the point of basing ethical normativity on some other normativity is that you don’t have to leverage it from non-normativity.

Yes, precisely. That’s why people try to get morality from game theory, because the normativity of game theory is about something that by definition you care about already.

I read knyazmyshkin as objecting to the idea that epistemic rationalists, seeing the truth of moral statements, would immediately be motivated to follow them. And rightly so, because that assumes a whole lot, not only that moral statements are truth-apt, but also motivational internalism.

The amoralist challenge is especially applicable to externalist theories. If morality really is entirely external, then the amoralist can coherently and rationally reject morality and not have it bind him.

Um … duh? The problem with the hardened sociopath is not irrationality, it’s not giving a crap about the rest of us. That’s what externalism says, and the diagnosis seems to agree better with common sense, than does the contrary view.

But for someone to have no reason to be moral conflicts with common sense, and more importantly makes morality itself a questionable enterprise. If it’s possible for someone to have no reason to be moral, why should I bother being moral? If I reject morality in the same way that the amoralist does, perhaps the things I’d do would coincide with morality more often than the things the sociopath would do, but why should I be concerned with whether I’m being moral?

You’re not someone though, you’re you, and people like you (who take morality at least somewhat seriously) are in the majority, and can talk freely about morality without confusing anyone. I don’t understand what the problem is supposed to be. I mean, for example, I wouldn’t say “why should I care what happens to my wife?” even though I recognize that not everybody cares about their spouse (or about mine!) or even has a spouse, and lots of that not-caring is perfectly rational.

The world isn’t divided into sociopaths and people who care about morality*, there’s a spectrum of psychological constitutions, preferences, etc, that have different degrees to which they have reasons to care about morality (or aspects of it). Sociopaths are at one end of the spectrum, and they presumably have no reason to be moral – let’s say their optimal rational behavior is called Optimal_Sociopath, and it rarely recommends the same behaviors as morality. Now suppose that I’m somewhere in the middle of the spectrum, and my optimal rational behavior is Optimal_Me. Because I’m in the middle of the spectrum, Optimal_Me’s behavior recommendations coincide with morality much more often than Optimal_Sociopath’s do. But Optimal_Me doesn’t always agree with the prescriptions of morality. If the sociopath has no reason to be moral – if there’s no reason he should abandon Optimal_Sociopath behavior for moral behavior – then why should I abandon Optimal_Me behavior for moral behavior?

The law is assumed to be designed such that the penalties for violating it are sufficient to motivate people to follow it. There may be additional reasons to follow some particular laws, but they don’t apply to laws in general. If there’s a law I disagree with, it’s likely that I should follow it anyway, not because of some obligation but because something bad will happen to me if I don’t. I’m not obliged to follow the law, but it’s the prudent thing to do most of the time.

In contrast, there’s no external punishment for acting immorally as such. There are sometimes punishments for some immoral acts, but at most that gives you a reason to avoid acting immorally in cases where it would get you punished. “If I’m not going to be punished for it, why should I act morally?” is a significant problem for external conceptions of morality. If you say that I should do moral things because that’s what “moral” means, then that just shifts the question to “Why should I do the things that are in the set labeled ‘moral’?” (e.g. give to charity, not murder, etc.) The problem is connecting “What I should do” to specific moral acts. It’s a dilemma, and there are two options:
1. You can conclude that not everyone should be moral, because for some agent, “What I should do” isn’t any act that’s commonly labeled “moral”. You then preserve the intuitive content of morality (maybe), but you lose moral universality.
2. You can widen the set of possible moral acts to anything that “What I should do” can be, and concede that a moral act for Agent A may not be a moral act for Agent B. You then have a morality that applies to everyone, but then you lose the intuitive content of morality.

You can say that morality is what you should do, or that morality is stuff like helping old ladies cross the street, but it cannot necessarily be both.

If you restrict “morality” to what (presumably human) “sane adults should do”, that’s fine. That’s taking Option 1 in the dilemma. But then that means that neither Caligula nor sociopaths have a reason to be moral. Also, even if you restrict morality to what sane adults should do, there is still significant variance in what one should do depending on who one is, and it’s perfectly possible to be a sane human adult who shouldn’t donate to charity, for example. There are moral claims because there are true statements about what sane human adults should do, and these statements don’t prescribe behaviors like paperclip maximization or torturing everyone, but they may still not prescribe behaviors that are commonly considered to be moral.

I have no problem giving up on “universal” motivation, since it only ever existed in a qualified sense. I have never heard of a moralism which required all humans to be morally motivated. Asserting that not everyone is morally motivated is fairly banal, especially if the unmotivated coincide with the people we don’t consider moral agents anyway.

I don’t know why you say there is variance in what people should do. I don’t know which meaning of should you are using, or whether you agree that there is more than one.

I could guess, based on past experience, that you mean “should in order to satisfy their preferences”. If so, I would not default to the assumption that that is any kind of morality.

The identification of a “should” based on satisfying preferences, and treating that as a master or ultimate “should”, seems cart-before-horse-ish. People can change some of their preferences on a whim, so to speak: they can embrace a certain point of view. And that changes the balance of preferences. So it’s useless to ask “what favors the final balance of my preferences?” when the latter is a moving target. The very act of discussing morality, which involves trying to justify your actions to your interlocutors and get them to justify theirs, tends to make you more inclined to prefer justifiable actions.

People aren’t always consistent, and they can change points of view if the new view is more consistent with their deeper-seated preferences. For example, if I believe that pleasure is the only good, and yet I oppose wireheading, and then am asked to justify my opposition, I change my view, having discovered it to be inconsistent, but I change it because of a more fundamental foundation that doesn’t change. Justifiable actions are justifiable with respect to a consistent framework, and the act of discussing morality imposes consistency.

A successful theory needs some at least ideally motivating foundation – that is, a rational being must have sufficiently motivating reasons to act morally. So, whatever morality is, it must be motivating for the consistent rational being. Preferences are motivating for normal people, and they’re even more motivating for the ideally rational person, because they don’t suffer from akrasia and other forms of inconsistency. So we start with preferences, because they give us reasons to act, and if morality is anything beyond individual preference-satisfaction, there must be something that is simultaneously motivating to the rational person and overrides their motive for preference-satisfaction. But I don’t think there is any such thing, and the burden of proof is on whoever is arguing for the existence of such a thing.

External morality means giving up on at least some of your preferences some of the time, so there must be something that overrides own-preference-satisfaction at least in some cases.

You should start with preferences because they give us reasons to act. If a preference can be immoral, that means that the consistent rational person would have reasons not to act in accordance with that preference. The burden of proof for the existence of such reasons lies on whoever is arguing for their existence.

If you can argue that X is moral, the argument itself is a motivation to an epistemic rationalist.

Yes, but that’s tautological. That pushes aside the problem of what it means for X to be moral. You could just as well say that if the argument isn’t a motivation to the epistemic rationalist, that means that X isn’t moral.

but I change it [preference] because of a more fundamental foundation that doesn’t change.

Maybe, but you haven’t shown that this “more fundamental foundation” must itself be a further preference. Presumably, a zygote has no preferences, and an adult human does, so at least sometimes preferences can evolve out of non-preferences. If sometimes, why not often?

It’s certainly possible to bring one’s beliefs about how one should act into consistency with more fundamental beliefs about how one should act, and to have none of those beliefs be grounded in preferences. But your point was that preferences change “on a whim”, which isn’t what’s happening here – it’s merely a case of beliefs being made coherent with each other.

The case of the zygote and the adult human is completely unrelated, because it’s not a case of preferences being derived from non-preferences, but the psychological constitution of a being changing from being incapable of having preferences to being capable of having them.

” External morality means giving up on at least some of your preferences some of the time, so there must be something that overrides own-preference-satisfaction at least in some cases.”

…overrides your self-centered preferences.

Yes, I largely agree, since that is largely what I have been saying. But so what? You think it’s irrational to sacrifice some preferences to others? I think it’s inevitable, if you have complex sets of preferences, and not restricted to ethical issues.

“You should start with preferences because they give us reasons to act.”

Not in the sense that if you start anywhere else, you necessarily don’t get motivation.

“If a preference can be immoral, that means that the consistent rational person would have reasons not to act in accordance with that preference. ”

If there are reasons for morality.

“The burden of proof for the existence of such reasons lies on whomever is arguing for their existence.”

Up to a point. And there are many attempted justifications. However, if you’re going to assert that amoralism, rather than “dunno”, is the correct answer, there’s a burden on you.

“Yes, but that’s tautological. That pushes aside the problem of what it means for X to be moral.”

I am doing that quite deliberately because so far, this discussion has been about motivation, not truth. Moral truth has its problems, to be sure, but they’re not the same as the problems of motivation.

Yes, I largely agree, since that is largely what I have been saying. But so what? You think it’s irrational to sacrifice some preferences to others? I think it’s inevitable, if you have complex sets of preferences, and not restricted to ethical issues.

If your preferences are internally consistent, then sacrificing the net fulfillment of your preferences would be a failure to be instrumentally rational. It’s certainly possible and rational to follow one preference rather than another (when they’re mutually exclusive) when the fulfillment of the first would give more utility than the fulfillment of the second, but then your action would still be fully in line with your preferences, so it wouldn’t be like morality overriding your preferences.

Not in the sense that if you start anywhere else, you necessarily don’t get motivation.

Maybe (though I don’t know what you’re suggesting to start with instead), but if you start with preferences, you get behavior that is justified by instrumental rationality. You don’t get that with anything else.

And there are many attempted justifications. However, if you’re going to assert that amoralism, rather than “dunno”, is the correct answer, there’s a burden on you.

Someone arguing for the existence of a morality bears the burden of proof. Until they’ve proved it, you assume that morality doesn’t exist, which means amoralism.

If your preferences are internally consistent, then sacrificing the net fulfillment of your preferences would be a failure to be instrumentally rational.

If your preferences are internally consistent, such that you can always satisfy all of them under all circumstances, you are probably quite unusual.

It’s certainly possible and rational to follow one preference rather than another (when they’re mutually exclusive) when the fulfillment of the first would give more utility than the fulfillment of the second, but then your action would still be fully in line with your preferences, so it wouldn’t be like morality overriding your preferences.

It wouldn’t be overriding all your preferences, but it would be overriding some of them. If you give money to charity, you are losing utility in a limited sense. But you can still have an overall gain in utility even if some of the quantities involved in the calculation are negative. So it can be rational to perform actions which are altruistic in the sense of involving a moment of sacrifice.

“Maybe (though I don’t know what you’re suggesting to start with instead), but if you start with preferences, you get behavior that is justified by instrumental rationality. You don’t get that with anything else.”

If you don’t get morality out of that, what’s the point?

“Someone arguing for the existence of a morality bears the burden of proof. Until they’ve proved it, you assume that morality doesn’t exist, which means amoralism.”

Morality undoubtedly exists in some sense. The question is how it is justified. Some justifications multiply entities, some do not.

It wouldn’t be overriding all your preferences, but it would be overriding some of them. If you give money to charity, you are losing utility in a limited sense. But you can still have an overall gain in utility even if some of the quantities involved in the calculation are negative. So it can be rational to perform actions which are altruistic in the sense of involving a moment of sacrifice.

An overall gain in utility compared to what? Compared to not giving to charity? That’s certainly possible, but that’s also contingent on an individual’s preferences. If I can rationally reject me giving to charity, i.e. if me giving to charity would be a net loss of my utility, the externalist moralist would say that I should give to charity regardless – that’s why this view is called “external”. If I should give to charity if and only if it would maximize my utility, then external morality (as commonly conceived of) doesn’t exist.

If you don’t get morality out of that, what’s the point?

You get morality in the sense of there being truths about what you should do. But the content of these truths may not be the same as the content of popular morality or utilitarianism. It generates morality, but not external morality.

> An overall gain in utility compared to what? Compared to not giving to charity? That’s certainly possible, but that’s also contingent on an individual’s preferences.

Assuming instrumental rationality, yes. Assuming epistemic rationality, it is more contingent on good arguments.

> If I can rationally reject me giving to charity, i.e. if me giving to charity would be a net loss of my utility, the externalist moralist would say that I should give to charity regardless – that’s why this view is called “external”. If I should give to charity if and only if it would maximize my utility, then external morality (as commonly conceived of) doesn’t exist.

Then it is false, rather.

But you don’t have an argument that “external morality” — moral shoulds that don’t amount to self interest — are false.

Nor do you have an argument that self interest is moral…all you have done is exploit the ambiguity of “should”.

Motivation is not relevant.

> You get morality in the sense of there being truths about what you should do.

Only if you can establish that they are moral shoulds.
There are things you should do to be a good chess player, or a notorious criminal, but they are not moral shoulds.

There are burdens on all proponents of all moral theories, including egoism. You need to show that doing what you want is moral. Showing that it is motivating is not the same thing.

> But the content of these truths may not be the same as the content of popular morality or utilitarianism. It generates morality, but not external morality.

But you don’t have an argument that “external morality” – moral shoulds that don’t amount to self interest – are false.

Nor do you have an argument that self interest is moral…all you have done is exploit the ambiguity of “should”.

If external morality is true, then it’s possible for me to have reasons to donate to charity regardless of whether it would be instrumentally rational for me to do so. However, if morality would cause me to act in a way that’s contrary to my instrumental rationality, I can rationally reject it. That means that it’s not something that I should do, and because “morality” means “what I should do”, it’s not morality. Therefore external morality is false.

There are burdens on all proponents of all moral theories, including egoism. You need to show that doing what you want is moral. Showing that it is motivating is not the same thing.

As you yourself said, “morality” means “what you should do”. If I show that it’s what you should do, that means I’ve shown that it’s morality.

> If external morality is true, then it’s possible for me to have reasons to donate to charity regardless of whether it would be instrumentally rational for me to do so. However, if morality would cause me to act in a way that’s contrary to my instrumental rationality, I can rationally reject it.

If there are reasons to donate to charity, then you can’t epistemically rationally reject it.

> That means that it’s not something that I should do,

The default meaning of “should” is not the actual behaviour of a realistic version of an entity. In fact, another word, “would”, labels realistic predictions. “Should” relates to ideals and optimization. There are different things that can be optimised, so there are different shoulds.

Giving to charity optimizes human happiness, or something, so it is what you morally-should do.

If it is not what you would do, that just means you are an imperfect moral agent, not that there is something wrong with morality itself. You are kind of blaming ideals for being ideals.

> and because “morality” means “what I should do”, it’s not morality. Therefore external morality is false.

Morality means what you morally-should do and not what you would do, or even instrumentally-should do.

If there are reasons to donate to charity, then you can’t epistemically rationally reject it.

Yes, but whether there are epistemically rationally non-rejectable reasons to donate to charity is determined by whether there are instrumentally rationally non-rejectable reasons to donate to charity.

The default meaning of “should” is not the actual behaviour of a realistic version of an entity. In fact, another word, “would”, labels realistic predictions. “Should” relates to ideals and optimization. There are different things that can be optimised, so there are different shoulds.

That’s true, but most shoulds are themselves dependent on other shoulds, so they’re not fundamental. For example, if I should do X to be a good chess player, whether I should do X is dependent on whether I should be a good chess player. Whether I should do X and be moral (if doing X would make me moral) is dependent on whether I should be moral. Indeed, if you want to be “moral” by some commonly accepted content of morality, there are things that you should do, but that doesn’t mean you should be moral. Whether you should do or be any particular thing ultimately comes down to instrumental rationality, because that’s the only thing you can’t rationally reject.

I’d be happy to concede not having an answer to an amoralist challenge (which I understand to be “there is no rational basis for morality”) so long as the amoralist is willing to not use the little sleight-of-hand where you say “morality has no rational basis, therefore some conception of self-interest is the only rational basis for behavior.” I’m not allowing the assumption that pursuit of self-interest (however defined), or any other potential motive, has some innate rationality without subjecting it to the same scrutiny as morality.

Moral right and wrong as we know it at the moment is an absolutely pointless concept to have due to the amoralist challenge, and an incoherent concept in the same sense that grue is if defined by human moral beliefs as they differ.

Key to the argument, however, is the amoralist challenge. If morality isn’t motivated, why bother trying to figure out a vaguely defined concept of right and wrong?

Let me give you an analogy here. Say I create a concept of “Sumerness” – similarity to ancient Sumerian civilisation. In a very broad sense this is objective, even if there are many details in which parts of Sumerian civilisation differ and the borders of what is and is not Sumerian are vaguely defined.

But there is no point in the existence of the concept. It may be coherent, but it is so useless as to not be worth discussing and has only a vague connection to reality as we know it.

It is “less rational” in the sense that it works specifically by throwing away the first-level rational solution, and most of the people doing this do so not because they’ve parsed that solution and decided they don’t like it, but because they can’t come up with that solution in the first place.

If you define “rational” as “wins”, then yes, it’s more rational. But this implies that an RNG which has a long lucky streak is also being “rational”, which seems counterintuitive.

IPDs with evolution (tit for tat defect last x, where x changes over generations) settles on x = 2n/3 (where n is the number of rounds). Can’t remember if this is due to the standard payoff matrix or true for all matrices.
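The strategy family in the comment above (“tit-for-tat, but defect on the last x rounds”) can be scored directly. Here is a minimal sketch; the payoff values T=5, R=3, P=1, S=0 are an assumption, since the comment doesn’t specify a matrix:

```python
# Sketch: pairwise scores for "tit-for-tat, but defect on the last x rounds"
# in a fixed-length IPD. Assumed standard payoffs:
# T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker).

T, R, P, S = 5, 3, 1, 0

def play(n, x_a, x_b):
    """Score one n-round match between TFT-defect-last-x_a and TFT-defect-last-x_b."""
    score_a = score_b = 0
    last_a = last_b = "C"  # tit-for-tat opens with cooperation
    for rnd in range(1, n + 1):
        # Defect inside the end-game window; otherwise copy the opponent's last move
        a = "D" if rnd > n - x_a else last_b
        b = "D" if rnd > n - x_b else last_a
        if a == "C" and b == "C":
            score_a += R; score_b += R
        elif a == "D" and b == "D":
            score_a += P; score_b += P
        elif a == "D":
            score_a += T; score_b += S
        else:
            score_a += S; score_b += T
        last_a, last_b = a, b
    return score_a, score_b
```

For example, `play(10, 2, 1)` gives (30, 25): defecting one round earlier than your opponent collects the temptation payoff once, which is the selection pressure that pushes x upward over generations. Whether it equilibrates at x = 2n/3 would need the evolutionary step actually run.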

I am not sure why you see this as a problem with anything other than the veil of ignorance and/or utilitarianism. I find it interesting because IMO both of these theories suffer from what I can only call a measurement problem. In the Veil’s case, the problem of accurately appraising pain and distress of others. Actually, stated that way, it is a problem in both cases, since it’s really a consequence of “de gustibus non est disputandum” – not only do I not know whether I will inhabit such-and-such a position, I also don’t know how much I’ll care (e.g., about the socially just distribution of pork rinds). This axis is really hard to analyze, I think.

It seems to me that you can artificially create almost any scenario you like simply by tweaking your util function, in what I’ll sloppily call “Arrow’s Impossibility Analogy”. Though in a pretty specific sense you are talking about something like voting on an outcome from behind the veil. Now suppose that there is something like an objective measure of utils in some things, and then we go to vote, and the method we choose for aggregating our preferences is…

Ah, I always get sidetracked in these thoughts.

Mostly it reminds me that I need to unpack my Rawls because I don’t know how far you can really extend Rawlsian veils to the things it is often applied to. Also, it always seems to me to be something that some clever behavioral economist somewhere should have already written three papers on and I’m vaguely saddened that your post doesn’t have two of them with seven commentaries each linked.

The whole point of the example was that it takes place after the veil of ignorance, so it cannot possibly be a problem with the veil of ignorance. It is harder to remove interpersonal comparison from the analysis. But the core of the analysis applies to zero-sum muggings for cash. The purpose of the range of positive sum and negative sum interactions is really to fill in a range of intuitions, and not because the whole range is needed.

In short, I have never seen someone bring up Arrow in a utilitarian context who seems to have understood a word he claims to have read.

Well my response was less than charitable. Please consider the following.

The whole point of the example was that it takes place after the veil of ignorance, so it cannot possibly be a problem with the veil of ignorance.

Scott’s hypothetical fails to update the participants’ util functions to account for “results Scott thinks are stupid.” From behind that veil they’ve found a stable result. He thinks he’s removed the veil, but he forgot that he was judging the outcome, not the hypothetical participants. His preferences were unrevealed.

The Rawlsian veil is there so that we cannot gain advantage from either our preferences or our place in society. Scott’s example removes that veil for the participants judging their own situation, but these 100 hypothetical souls don’t know that their place isn’t in the society their util functions are determining, but rather in the meta-context of Scott’s estimation of that society. They picked the correct results through flawless reasoning. If anyone says otherwise, then their prior preferences were not revealed to the participants. And if the participants picked the “not stupid” result, it would be a coincidence.

It is analogous to Arrow’s Impossibility Theorem because AIT is about problems of voting systems, not the outcomes of voting. It’s a “stupid result” that someone is a dictator or etc. But it doesn’t say that the outcome selected by the social choice function is “stupid.”

There is no prisoner’s dilemma. The participants make their decisions based on the payoff matrix. The payoff matrix does not include the post-analysis of “if you did anything but coop/coop then you failed to choose ‘optimally.'” The “dilemma” exists only to outsiders; to insiders the stable strategy is determined by the payoff matrix and possibly the form of the game (in e.g. iterated cases).

It seems to me that you can artificially create almost any scenario you like simply by tweaking your util function,

This does work in reality. Pro-slavery apologists would create theories by which their slaves didn’t actually feel suffering, and were instead best suited to a life of constant labor and torture. Some of the more toxic SJWs occasionally argue that privileged people can’t actually be bullied. These arguments seem to hinge on arguing that:
(in the pro-slavery case) making someone pick cotton all day, whipping them to work harder, not letting them marry without permission, and selling their children on the auction block did not decrease their utility,
(in the toxic-SJW case) any time privileged people claim to feel pain they should be ignored, as these are simply attempts to undermine the march of justice.
Both arguments are really, really hard to argue against.

Torture? The pro-slavery advocates portrayed slaves as happy-go-lucky types who would willingly do the sort of drudge-work they would be stuck with in any scenario because of their lack of ability. Which lack also, happily, scuttled their imagination, thus preventing their being unhappy from ambition.

1. What if all fights contained a random element? For example, suppose your chance of overpowering someone else (and thus being able to oppress them) was your_strength/(your_strength + opponent_strength)? In societies of this type, agreements to ban strongly negative-sum interactions would be more salient for everyone, since even Mr. 100 would have some chance of being beaten in a typical interaction.
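The arithmetic under this proposed rule is easy to sketch. A minimal example, where the particular strengths and fight counts are illustrative assumptions:

```python
# Sketch of the proposed random-fight rule: P(i beats j) = i / (i + j),
# using each man's number as his strength.
from fractions import Fraction

def win_prob(strength, opponent):
    """Chance of beating a single opponent under the proposed rule."""
    return Fraction(strength, strength + opponent)

def prob_all_wins(strength, opponents):
    """Chance of winning every fight in a sequence of independent fights."""
    p = Fraction(1)
    for o in opponents:
        p *= win_prob(strength, o)
    return p
```

For instance, `win_prob(100, 50)` is 2/3, and the chance of Mr. 100 winning five straight fights against Mr. 50 is (2/3)^5 = 32/243, about 13%, so under this rule even the strongest man usually takes a loss somewhere.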

This is almost precisely Hobbes’ argument – he notes that almost anybody can bash almost anybody else’s head in, assuming the latter is asleep or something.

The usual way is not having a fixed number of rounds. Maybe each round there’s a fixed probability of there being another round. Or you could use some other distribution, I don’t know. (Note that the conventional “tit-for-tat” wisdom is not for the fixed-rounds case!) I mean, you kind of mention this, but I think it is worth making explicit how this is actually handled normally.

Yeah, I know that, but if you do get stuck in a tournament with a fixed number of rounds, it seems really dumb that you can’t get two perfectly rational agents to cooperate.

EDIT: Also, Liskantope brings up just below that this is similar to the Unexpected Hanging problem. And that does seem to have a solution, in that it is indeed possible to hang the prisoner unexpectedly.

Oh boy. The growth/transformation of Walter into Heisenberg looks like climbing from Mr. 1 (loses to schoolkids and car wash managers) up the ranks to Mr. 100. First defeating drug dealers, then motorcycle gangs, then large-scale distributors, then… Nazis?

Rationality is a human construct, not an objective feature of the world. I know nowhere near enough to categorically state there is no solution, but it sounding ‘really dumb’ isn’t a good enough reason to rule one out.

Theoretically speaking, perhaps rationality itself as rationalists understand it is ultimately incoherent because at some level there are unsolvable paradoxes?

Yes – if you consider only two strategies to fixed-iteration, always-cooperate (C) and always-defect (D), C-C works out better for both players than D-C. In a regular PD, D-C works out better for the defector than C-C.

Huh? First of all, the hyphen is generally understood to indicate time-differentiated strategies, and ordered pair used to indicate player-differentiated strategies, so your notation is rather confusing. As for the content of your post, (D,C) is better for the defector than (C,C) in all versions of PD.

It’s possible to hang the prisoner unexpectedly. It’s not possible to promise to. Similar to the iterated prisoner’s dilemma, perfect common knowledge screws everything up when a slight relaxation of that assumption makes everything work out like we would expect.

What happens in the really degenerate case of the unexpected hanging paradox?

Judge: Tomorrow I will hang you, but only if it will be a surprise. Which it will be, when I hang you.

Prisoner: I know the judge is going to hang me tomorrow, so I’ll expect it. But he won’t hang me if I expect it, so there won’t be a hanging. Which means that I should anticipate no hanging, which means that I will be hanged unexpectedly, which means that I should anticipate it, which means that I won’t be hanged unexpectedly, which means I won’t be hanged… AAUGH!

Also, what happens if the judge says to the defense attorney “Your client will be hanged tomorrow, but won’t expect it”? The defense attorney knows that the prisoner will be hanged tomorrow, so the defense attorney expects it, but that’s not paradoxical. But even if the prisoner overhears the statement, the prisoner will not be able to consistently conclude that the statement is true.

It’s basically a Godel statement: “This statement is true, but you will not know that it’s true”.

He “promises” to hang the prisoner unexpectedly, but it is not an infinitely iron-clad and infinitely credible promise, which is why he is able to fulfill it. The “paradox” arises because philosophers see the judge saying words, and then the words coming true, and they confuse that with a successful promise in the hyper-idealized ultra-rigorous philosophical sense.

It just so happens that a similar type of thought experiment was brought up in my department earlier today. The scenario is that a professor tells the students on the first day of class that there will be exactly one pop quiz this semester. Now suppose the professor were to set this pop quiz on the last day of the semester. Then the students would come in on the last day knowing this quiz would take place, since there was no pop quiz any of the other days of the semester. That would take away the element of surprise, so it would not be a pop quiz; thus, one concludes that the pop quiz couldn’t fall on the last day. But then, by the same argument (combined with “reverse induction”), the pop quiz couldn’t fall on the day before, or the day before that, etc. so that the professor couldn’t give the quiz at all.

Right now I’m scratching my head over this; maybe I’ll be able to think of the “right” way to resolve this type of paradox when I’m feeling more clear-headed.

Let’s say that I owe you $1000. I promise you that I’ll have your $1000 tomorrow. I only have $500. I head to the roulette tables, put it all on red, and win. I pay you your $1000. Did I lie to you when I promised the $1000, a promise I wasn’t sure I could keep?

Let’s call this a pseudo-lie.

In the unexpected hanging problem, we could say that the judge pseudo-lied to the prisoner. He promised that the hanging would be a surprise, knowing that there was the possibility the prisoner could be hanged on Friday unsurprised.

This seems to boil down to judging truthfulness based on intentionality/belief versus action. That is, you promised me the $1000 tomorrow, not actually intending to pay me all of it tomorrow, because you didn’t anticipate that you’d have it by then. So if your definition of “promise” involves some kind of intention, then your promise was a lie. If, on the other hand, the truthfulness of a promise is reflected solely by one’s later actions, then in this case the promise turned out not to be a lie. I lean towards the former position: the truthfulness of a promise depends on one’s intentions. So in your scenario, I would consider the judge to be lying.

(Then again, maybe “intentionality” isn’t the best word to use here. I’m reminded of the time I witnessed two philosophy professors get into an argument about whether or not, if I pay for a lottery ticket and beat the million-to-one odds by winning, I intentionally won the lottery.)

I’m reminded of the time I witnessed two philosophy professors get into an argument about whether or not, if I pay for a lottery ticket and beat the million-to-one odds by winning, I intentionally won the lottery.

I’m not sure I see how this is a solution, though. Wouldn’t the students go on to say, “If the correct conclusion is that the pop quiz is impossible, then the professor will anticipate us concluding that the pop quiz is impossible. Then the professor will go ahead and give us the quiz, knowing that we will be surprised. So we’re back at square one.”

If I understand it correctly, yes, that’s how the students ought to reason, hence why it’s a paradox. Scott’s “solution” is just the typical telling of the joke. It’s funny because most people wouldn’t think immediately to continue the student’s reasoning the way you did and so laugh at the result for the poor students; and your continuation is not so funny as that, so no one tells the joke the “right” way, assuming students who reason better.

Yes, Scott did present it as the punchline of a joke rather than as a solution to the paradox. But in a comment in an above thread, he refers to the Unexpected Hanging Paradox and says, “that does seem to have a solution, in that it is indeed possible to hang the prisoner unexpectedly”. So I’m a little confused as to what the purported solution is.

Let’s say that there are 100 days in a semester. Before classes start, the professor secretly writes down a random number from 1 to 100; he will give the surprise quiz on that day.

On the first day of class, the probability that the professor will give the quiz is 1/100; so, encountering the quiz on this day would be quite surprising. On the second day, this probability is 1/99; still surprising, but not as surprising as before. On day 100, it’s 1/1, which is not surprising at all.

Where’s the paradox? The professor never promised to give the quiz on the maximally surprising day, did he?
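The counting in the comment above can be made explicit. A minimal sketch, assuming the quiz day is drawn uniformly and “surprise” means the conditional probability of the quiz being today, given that it hasn’t happened yet:

```python
# Sketch: quiz day drawn uniformly from n_days days. The "surprise" on day d
# is the conditional chance the quiz is today, given no quiz so far:
# 1 / (n_days - d + 1).

def quiz_prob_today(day, n_days=100):
    """P(quiz is on `day` | no quiz on days 1..day-1), uniform draw."""
    return 1 / (n_days - day + 1)
```

By day 100 the conditional probability has climbed to 1, which is exactly the “not surprising at all” case.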

No, the problem is the professor promises it will be at least a little surprising.

So we can’t just say “The probability on the last day would be 1/1” because that wouldn’t be surprising, so the professor would have broken his promise by giving it on a non-surprising day. He can’t give it on that day.

Perhaps I’m missing something, but couldn’t the professor randomise over 100 to select the number of days that will be excluded from the end of the semester (eg, if they roll 30, the test won’t happen on any of the last 30 days), and then randomise over whatever number is left to decide which day the test will fall on? All they have to do is tell the students that they’re doing this (so they know that there is at least one day at the end of the semester that the test won’t fall on) but not tell the students what number they rolled on the first roll (so they don’t know when to start inducting back from), and then whatever number they get on the second must be a surprise.

The first randomization cannot include the number 0, because if 0 comes up then the second one could give the number 100, and the test would be on the last day and not be surprising. Could the first randomization include the number 1? No, because then the second one could give the number 99, and on reaching day 99 the students would know (having already replicated the first stage of reasoning) that the test must be today. And so on…

If the professor must be 100% certain of fulfilling the promise of surprise, then the two-step randomization doesn’t work.
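The unraveling is ordinary backward induction, and can be sketched mechanically: if “surprise” must hold with certainty, the latest remaining candidate day can always be eliminated, so nothing survives.

```python
# Sketch of the backward-induction argument: when surprise must be certain,
# repeatedly strike the latest candidate day until none remain.

def surviving_days(n_days=100):
    candidates = set(range(1, n_days + 1))
    while candidates:
        last = max(candidates)
        # If the quiz hasn't happened by the latest remaining candidate day,
        # students would know it must be that day, so it can't be a surprise.
        candidates.discard(last)
    return candidates
```

`surviving_days()` returns the empty set: no day survives, which mirrors how the two-step randomisation unravels when the professor must be 100% certain.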

The hanging paradox is ultimately a consequence of the false assumption that if one knows an agent’s goal and part of the agent’s reasoning process, one can predict that agent’s actions.

Alternatively considered, it’s a consequence of the implicit premise that the professor is constrained by/to inductive reasoning alone.

If the professor was a computer program that could use no reasoning other than inductive reasoning, then I think the program would return an error message. I don’t think it is actually possible to build a program limited to this logic that would output an action it expected to be surprising. If actually limited to the described sequence of logic, the program or professor would simply be incapable of action.

There is no true paradox in showing that some reasoning processes are insufficient to guarantee some types of promises. The apparent paradox is only a consequence of the hidden premise that is first assumed and then violated.

In this problem, because of how the students gain information, surprised students have to be purchased in the coin of unsurprised students. You can only surprise the students on some day if there is some chance it would be the next day – therefore in a bounded set there is some day that will be unsurprising. This is just a cost of doing business, like a compost pile or a sewage treatment plant.

Suppose the professor decided which day to give the quiz by using some random process like die-rolling or coin-flipping. I’m saying that if this random process outputs “last day of class”, the professor realio trulio should give the quiz on the last day. The random process’s surprisingness is bought with very occasional unsurprisingness.

It seems like the standard veil of ignorance argument also concludes that we should have a society which taxes everyone to supply the world’s poor with whatever intervention maximizes their utility. Yet nobody does this.

There’s a difference between a theory of morality that some people don’t follow, and one which pretty much nobody follows, ever. I would conclude that a theory in the latter category fails to capture something important about what we really consider moral.

I don’t think there’s any difference between a theory of morality that some people don’t follow and a theory of morality that nobody follows at all, except for contingent environmental factors that affect adoption and persistence of the theory.

For example: You put five chimps in a room, but this time instead of putting a banana on the ladder, you put a baby chimp being endlessly tortured. (Rescuing a baby chimp from endless torture is something all chimps agree is morally right.) Whenever a chimp tries to rescue the poor baby chimp, we blast all five chimps with ice cold water.

Pretty soon, the chimps learn that if they see another chimp trying to climb the ladder, they better drag that guy down before we all get hurt.

Swap one chimp out for a new one, who immediately tries to climb the ladder to rescue the poor baby chimp, and who immediately gets thrashed by the four chimps (who ‘wisely’ administer the beating to the ‘naive’ chimp to stop bad things from happening). Wait until the new chimp has internalised this lesson, then swap out one of the original four ‘wise’ chimps for a new one.

After a while, you have a room with a baby chimp being endlessly tortured, with five chimps sitting around not helping it and beating up anybody who tries to. Even if one of the chimps inspected the cold water system and discovered that years of disuse had rendered it nonfunctional, this wouldn’t have any relevance – chimp beatings are now reinforced by chimp beatings, not cold water.

If one of these chimps told you about their moral theory (that considers saving baby chimps from endless torture a moral thing to do), would you respond that their theory fails to capture something important about what they really consider to be moral?

I think it would be more accurate to say their moral theory fails to direct their actions in certain hostile environments. From the outside, changing the environment to allow their moral theory to start directing their actions seems obviously the right thing to do, and adopting a different moral theory to reflect the environment seems like the obviously wrong thing to do.

I would prefer it to all others if I was behind the veil of ignorance and didn’t know whether I was going to be one of the people paying the taxes or one of the people receiving them, since by supposition the gain to those receiving them is maximal.

No, that was not the supposition. The supposition is that the utility is maximal given that this distribution scheme is in place, not that it’s maximal over all possible worlds. Second order effects could easily leave everyone worse off.

These iterated prisoner dilemma type problems require that everyone’s perfect rationality be common knowledge. This is an even more unrealistic condition than everyone being perfectly rational! Even if everyone happens to be perfectly rational, they can’t necessarily be sure that the other guy isn’t going to be an irrational tit-for-tat player or an irrational anti-oppressionist. If you’re playing 100-round IPD, a >1% chance of your opponent being tit-for-tat instead of rational makes tit-for-tat the rational thing to play. And so a >1% chance of your opponent thinking there’s a >1% chance of his opponent being tit-for-tat instead of rational makes tit-for-tat the rational thing to play. And so on and so on, for as many levels of meta as you need to go to convince you to cooperate.

The trick with these game-theoretical constructs is that they’re infinitely fragile. An infinitesimal probability of deviation from ideal conditions can lead to a large change in outcomes. If you’re not thinking probabilistically, you’re not really thinking.
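The >1% threshold can be checked with a quick expected-value sketch. The payoff matrix (standard T=5, R=3, P=1, S=0) and modelling the “rational” opponent as always-defect are both assumptions for illustration:

```python
# Sketch: 100-round IPD where the opponent is tit-for-tat with probability p,
# otherwise always-defects. Compare committing to tit-for-tat vs always-defect.
# Assumed standard payoffs: T=5, R=3, P=1, S=0.

T, R, P, S = 5, 3, 1, 0
N = 100

def expected_tft(p):
    # vs TFT: mutual cooperation every round.
    # vs all-D: suckered once, then mutual defection.
    return p * (N * R) + (1 - p) * (S + (N - 1) * P)

def expected_alld(p):
    # vs TFT: one temptation payoff, then mutual defection.
    # vs all-D: mutual defection throughout.
    return p * (T + (N - 1) * P) + (1 - p) * (N * P)
```

Solving 300p + 99(1-p) > 104p + 100(1-p) gives p > 1/197, roughly 0.5%, so under these assumed payoffs a 1% chance of a tit-for-tat opponent is indeed enough to make tit-for-tat the better commitment.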

A condition generally is that there is no secret knowledge (other than the players’ strategies themselves). So if one player knows they are perfectly rational, then everyone else knows that, too. Stating this rigorously takes a bit of work, though, since it involves self-reference and infinite recursion.

But this doesn’t work for problems where perfect spheres have special properties which are wildly different from every other solid, including spheres that have been deformed to an infinitesimal degree. Which is basically the situation with perfect common knowledge game theory.

Suntzu – the issue might be that game theory’s perfect spheres actually do approximate our real spheres in most situations (e.g. ducks in a pond) but we are specifically fixating on just the questions where real spheres do not behave like perfect spheres at all.

Some objections to these game theory questions take the form of “of course your ‘if-the-sphere-is-perfectly-round-then-flash-green-otherwise-flash-red’ machine gives different results when applied to real life spheres!”.

Interestingly enough, if you assume that pro-social behavior is due to hard-coded moral urges instead of conscious game-theory, then there is a real world veil! Your genes do not necessarily know what your eventual ranking is, the source code is behind the veil.

Your genes “know” that you’ll be a being with those genes, which can be a substantial bit of information. Furthermore, there are no coordination mechanisms; precommitting to an altruistic strategy doesn’t make others altruistic as well.

For every gene, there is some first mutation, and that first mutation does not have any other copies of the gene to take advantage of. And it’s not clear how it benefits from other copies of the gene even if they do exist; reproductive fitness is entirely relative, so the gene would have to benefit from other copies of gene more than other genes do. If you’re appealing to the fact that organisms will be interacting with relatives, then it’s not clear how this is adding anything to pure kin selection.

Fixed-length IPD can, in a sense, reduce to one-shot IPD. Does your system reduce to one with two agents, a more powerful one and a less powerful one? How might you look for solutions to that? Are there any?

If oppression is usually negative-sum rather than positive- or neutral-sum, and there are other activities that are positive-sum but interfered with by “oppression”, then everyone including Mr. 100 has an incentive to agree to SOME oppression-lowering contract, claiming a smaller slice of a bigger pie. This is basically how serfdom works, as opposed to outright theft: “If I let you farm land on your own and pay me 10 percent of all your food every year, I will end up with more wealth than if I kill you and take all the food you have now.”

Once anyone is in a partial oppression relationship, they have incentives to push out total oppressors.

It should be possible, yes, and if there is no cost to 100 for beating someone else up, something like that will happen. You can see that if you add in a random element or allow coalitions, the “I’ll take you all on!” approach quickly becomes disincentivised.

I am usually pretty skeptical of “privilege” but reading the stuff about the doctor actually made me feel privileged. I have never had a job where I had to interact with a boss in this way. I have always been able to just ignore “stupid” rules and have never shown even moderate deference to a boss/professor/etc.

I try to treat “superiors” the same way I treat everyone. I intend to be as nice as I can, and don’t want to set anyone back if I can avoid it. But I would find it pretty funny if someone actually expected me to always be on time, never mind early. At least for activities that do not critically depend on being on time (I have taught before and I always show up on time and prepared).

I think I actually have the opposite ethic of most people. I feel terrible if I even plausibly mistreat someone below me. Of course I can only do so much, but I try. And I certainly don’t expect any deference. But with respect to my “bosses” I am pretty quick to say to myself “she is being unreasonable and hence I’m ignoring her, probably she will find someone else to bother.”

Computer programmer right now. I have also taught and tutored both privately and as someone’s employee. In high school worked in a pet shop lol.

My theory is this is a mix of things. One is that the ideology in most organizations is intentionally made to seem more repressive than it is. Most people will obey rules even if no real punishment occurs for ignoring them. So organizations benefit by pretending to care a lot about many things they don’t care about. Though some things are obviously serious (stealing for example).

The other reason is I always try to avoid direct conflict. If chastised I say I will follow the rule from now on, then just keep ignoring it. Getting into an open fight does not leave your “superior” a line of retreat (stop bothering you).

I only break dumb rules. My quality of work is pretty high imo. I actually think I am a significant benefit to my employers. Relevantly, in high school I cut over 1/2 of my classes and in college more like 3/4s.

The other humorous possibility is the redpill people are right and ignoring rules signals high status (whether you have it or not). So it’s actually beneficial or neutral to your standing in an organization. I really hope this one is true! And I don’t usually root for the fing redpill.

I like the saying “It’s better to ask for forgiveness than for permission”; I don’t really see myself as a rule breaker though (mostly because I can’t think of many explicit rules I’ve been told to follow at work).

(this probably does make me somewhat privileged, thanks for pointing that out by the way)

Intuitively, it seems like the best case for your model is an unstable equilibrium – and that’s unlikely. You posit only two types of activity (rule-building and oppression), within which utility can only flow up. With that kind of unidirectional utility transfer, there’s no balancing force to motivate rule-making. The lowest energy state looks an awful lot like Mr. 100 taking all the utils from everybody.

The original hypothetical is highly unrealistic, not only in that it simplifies things, but that it ignores salient characteristics of the real world, to wit that even the most powerless can hurt the most powerful, even if it costs them much more utility than it costs others. For instance, a serf committing suicide would hurt their lord a minuscule amount, but it would still be a cost. A lord who oppressed his serfs so much that their lives were not worth living would soon find himself with no one to work his fields. So we should move on to the random case as more legitimately modeling the real world. And in this case, the utility multiple doesn’t have to be especially high for constant oppression to not be Kaldor–Hicks efficient. For instance, Mr. 100 could agree to reduce oppression of Mr. 1 by 1% in exchange for Mr. 1 not oppressing Mr. 100 at all. I’m not going to work out the math, but it’s likely that Mr. 1 could get an even better deal by “buying” the retaliatory power of others; for instance,

Mr. 1 could promise to not oppress Mr. 2 at all if
Mr. 2 will not oppress Mr. 100 iff
Mr. 100 promises to reduce his oppression of Mr. 1 by 2%.

I think your variation 1 covered it, but people don’t normally hold fixed positions in society. So Mr 50 has a chance of being demoted to Mr 49, which means he should accept agreements that would positively affect people of strength 49. People might hold out on agreements because they might be promoted as well, though, which might produce an equilibrium.

Mr 82 has good reason to go around to all the Mr 70s and insinuate that they will very shortly be promoted to the 80s. If convinced, the 70s will be complicit in making themselves easier targets for the 80s.

This makes me think of the United Nations. Everyone thinks the United Nations is _supposed_ to be fair, even while blatantly scheming to take advantage of other countries. But from one angle of view what it actually does (very imperfectly) is to rearrange inter-state conflict so that everyone gets what they want in proportion to how likely they’d be to win a war if there was one. Which is still equally unfair, but a lot better simply by not wasting a lot of resources on _having_ wars.

Assuming that it’s not raw utility that is exchanged in these interactions, but something that is limited or can otherwise affect the payoff (like money), it makes sense for Mr. 100 to sign an agreement banning extreme negative-sum interactions; otherwise there would be nothing left for him to steal. Or perhaps he will be even more agreeable to the proposal that others do not engage in any negative-sum “trades”, and he in return will refrain from unreasonably brutal beatings. If necessary, it can be stated in a technically uniform way, where everybody signs the same document: “you agree that if you happen to be Mr. 100, you will do this, and if you are a mere mortal, you will do that”.

In reality agreement of course can be multilayered, where the more powerful one is, the more one favors altruism (at least among one’s inferiors), because one wants healthy substrate to feed on.

Fun fact – the prisoner’s dilemma was completely solved a few years ago by Freeman Dyson and William Press, who identified a class of zero-determinant strategies, of which tit-for-tat is one. These are strategies in which each player completely controls the score of the other player, meaning that it turns into an instance of the ultimatum game.
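The extortion result can be checked numerically. Here is a minimal sketch, assuming the standard prisoner’s dilemma payoffs (T=5, R=3, P=1, S=0) and the extortionate memory-one strategy (11/13, 1/2, 7/26, 0), one strategy with extortion factor 3 derivable from the Press–Dyson framework; the helper function and its name are my own:

```python
def stationary_payoffs(p, q, iters=5000):
    """Long-run per-round payoffs when memory-one strategies p and q
    play iterated PD. States are (CC, CD, DC, DD) from X's viewpoint;
    p[i]/q[i] are cooperation probabilities after state i."""
    # Re-index Y's strategy into X's state order: in X-state CD,
    # Y last saw (D, C) -> q[2]; in X-state DC, Y saw (C, D) -> q[1].
    qx = [q[0], q[2], q[1], q[3]]
    # 4x4 Markov transition matrix over the joint states.
    T = []
    for s in range(4):
        pc, qc = p[s], qx[s]
        T.append([pc * qc, pc * (1 - qc), (1 - pc) * qc, (1 - pc) * (1 - qc)])
    # Power iteration from the uniform distribution.
    v = [0.25] * 4
    for _ in range(iters):
        v = [sum(v[s] * T[s][t] for s in range(4)) for t in range(4)]
    payoff_x = 3 * v[0] + 0 * v[1] + 5 * v[2] + 1 * v[3]
    payoff_y = 3 * v[0] + 5 * v[1] + 0 * v[2] + 1 * v[3]
    return payoff_x, payoff_y

extort = (11/13, 1/2, 7/26, 0)
for q in [(1, 1, 1, 1), (0.7, 0.2, 0.9, 0.4)]:
    sx, sy = stationary_payoffs(extort, q)
    # Extortion relation: (s_X - P) = 3 * (s_Y - P).
    assert abs((sx - 1) - 3 * (sy - 1)) < 1e-6
```

Against any opponent with a unique stationary distribution, the extorter’s surplus over the mutual-defection payoff is pinned at exactly three times the opponent’s surplus, which is the sense in which one player “controls” the other’s score.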

Press and Dyson have an interesting model, but their claim about reduction to an ultimatum game is off. If a player has the power to commit to one strategy and the other is forced to react, they’re implicitly assuming an ultimatum game in the background. Their contribution is to show the extent of power someone might have with commitment.

If they talk about repeated games without at least mentioning the extensive literature on the folk theorem, something is missing. I don’t see how this is a big deal.

it also gives the powerful an incentive to band together to better oppress the weak

I’ve played games before where I was offered the agreement “Let’s team up against him because he’s in position X” and noticed “but after we do that, I’ll be in position X”. Such an offer is only superficially attractive.

On the other hand, it’s stereotypically tragic that people do fall for “I did not speak out, because I was not an X” in real life…

It seems to me that there are some games that are extremely amenable to such reasoning, primary examples being Risk and Illuminati. And of course it’s a major part of gameplay in such reality shows as Survivor. So a lot of the game comes down to figuring out how much of Position X you can be in without everyone ganging up on you.

The two-stage game has lots of subgame-perfect equilibria, including both unconditionally playing D each round or E each round. The most promising equilibrium strategy would be “On the first round, play C. If (C,C) happened on the first round, play D on the second, and otherwise play E,” which is available only because of the threat of E.

Do you mean the strategy of “C on the first round and D in every subgame of the second round”? That’s one best response to my proposed strategy, but that doesn’t stop my strategy from being a mutual best response. It’s also not an equilibrium strategy on its own since “D in the first and every subgame of the second” is the only best response to it.

I’m not clear what you mean by “mutual best response”. And of course this strategy isn’t a Nash equilibrium. The term “Nash equilibrium” doesn’t refer to a strategy, it refers to the set of strategies that the players have.

By “equilibrium strategy”, I meant a strategy that appears in some Nash equilibrium. Since there are multiple equilibria, I could have clarified that I was referring to the symmetric equilibrium where both players use the same strategy. I also should have said the strategy is a best response to itself rather than say the implicit pair of strategies are mutual best responses.

Do you still think the strategy is strictly dominated? If the other player uses that strategy, which action would you choose in each subgame?

I still maintain the above is an equilibrium strategy, but on second thought the “best” equilibrium strategy would be “Play C on the first round. Play D on the second, unless (D,C) or (C,D) happened on the first round. In those two subgames, play E”. That gives the same payoffs on the equilibrium path, but is more forgiving off the equilibrium path.

Has my rtsoc [rest of name omitted] email been banned for some reason? I don’t recall posting anything that would have earned me a ban, but I cannot comment using my normal account or when changing the account’s name.

It would be nice if I were notified when banned. I spent a long time trying to figure out whether something was wrong with my computer. It would also be nice if whatever comment(s) that earned me the ban were pointed out to me.

Did I simply comment too often? I was worried about that at one point. If that was so, I’ll try to be quiet more often in the future, if you confirm.

Was it my concur comment that irritated you? I didn’t know what the rules were regarding comments like that. I understood brief comments might be irritating, but I also thought it was important that agreement be expressed so that your impression of the community’s opinion would be more accurate. But if you say so, then I won’t make such comments.

Please take into consideration that social norms are difficult for me to recognize as I have Asperger’s. This is another reason feedback would be nice. 🙂

I did expect to receive a warning message of some kind before getting banned. Was the policy changed?

Everyone agrees to the following oath “whenever I encounter someone stronger than me I will truthfully state the most I would pay in dollars to not be oppressed by him (X), and whenever I encounter someone weaker than me I will truthfully state the most I would pay in dollars for the opportunity to oppress him (Y). If Y>X the oppression will happen, otherwise the weaker party will pay the stronger party [X+Y]/2 and no oppression will occur.”
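The oath resolves to a simple rule; here is a sketch (the function name and return convention are mine):

```python
def resolve_encounter(avoid_bid, oppress_bid):
    """avoid_bid (X): the most the weaker party would pay to avoid oppression.
    oppress_bid (Y): the most the stronger party would pay to oppress.
    Returns (oppression_happens, payment_from_weaker_to_stronger)."""
    if oppress_bid > avoid_bid:      # Y > X: the oppression happens
        return True, 0.0
    # Otherwise the weaker party buys their safety at the midpoint price.
    return False, (avoid_bid + oppress_bid) / 2
```

Whenever X ≥ Y, both sides prefer the transfer to the fight: the weaker party pays (X+Y)/2, which is at most X, and the stronger receives (X+Y)/2, which is at least Y.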

“[…] the weaker party will pay the stronger party [X+Y]/2 and no oppression will occur.”

I don’t think that’s quite correct. In the scenario oppression is defined as “an interaction where the stronger person gains and the weaker person loses some utility”, so making the reasonable assumption that dollars have utility, there is still oppression going on.

However, I agree that your system is better than a scenario with no pacts.

But it has been pointed out there’s a flaw here. Suppose we are iterating for one hundred games. On Turn 100, you might as well defect, because there’s no way your opponent can punish you later. But that means both sides should always play (D,D) on Turn 100. But since you know on Turn 99 that your opponent must defect next turn, they can’t punish you any worse if you defect now. So both sides should always play (D,D) on turn 99. And so on by induction to everyone defecting the entire game.

One common way to preserve the effectiveness of tit-for-tat here is to have an ecosystem of iterated prisoner’s dilemma games. Many players are in the ecosystem, and they are trying to accumulate points – one common game type even has the players reproduce, with success in reproduction governed by how many points you have.

How does this favor tit-for-tat players? Well, consider a pool containing both tit-for-tat players and defect-rocks. When the defect rocks play each other they gain little. When two players from different sub-populations play, the tit-for-tat loses one turn of exploitation and the defect rock gains one turn of exploitation, but then after that they play like defect rocks and gain little.

But when two tit-for-tat players play, they gain the entire game length of cooperation. As long as tit-for-tat players are common enough in the ecosystem that they can run into each other and reap the rewards, they will gain more points than defect-rocks.
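A minimal simulation of this pool, assuming 10-round games and the usual payoffs (3/3 for mutual cooperation, 1/1 for mutual defection, 5/0 for a lone defector against a cooperator); the function and strategy names are mine:

```python
def scores(strat_a, strat_b, rounds):
    """Play iterated PD between two strategies, each a function
    (my_history, their_history) -> 'C' or 'D'. Returns total scores."""
    payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    hist_a, hist_b, total_a, total_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = payoff[(a, b)]
        total_a += pa
        total_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return total_a, total_b

def tit_for_tat(mine, theirs):
    # Cooperate first, then copy the opponent's previous move.
    return theirs[-1] if theirs else "C"

def defect_rock(mine, theirs):
    return "D"
```

Over 10 rounds this gives (30, 30) for two tit-for-tats, (10, 10) for two rocks, and (9, 14) when a rock exploits a tit-for-tat once and then both stall on mutual defection. Averaging over random pairings, tit-for-tat’s expected score exceeds the rocks’ once its share of the pool passes roughly 1/17 under these numbers, so a fairly small foothold suffices.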

One might ask “what about defecting on the last turn?” And indeed, if tit-for-tats have taken over the population, a mutant who defects on the last turn has a competitive advantage. If tit-for-tat is a plant, defecting on the last turn makes you an herbivore. So the population of last-turn-defectors will grow and grow.

But induction does not lead us back to defect-rocks. We have already seen that tit-for-tat can take over an ecosystem of mostly defect-rocks, because defect rocks gain such little payoff when they play against each other. The trend to start defecting on earlier and earlier turns will continue until the players in the ecosystem are paying so much in defection-costs that a small band of tit-for-tat players could gain a foothold again.

In the eventual equilibrium, there’s a diverse mixture of levels of defection, but everyone in the ecosystem actually gets the same average payoff. So even though it looks like the more defecting strategies are “exploiting suckers,” they get the same average payoff as those suckers, because the suckers can cooperate with each other and the defectors can’t. Both defectors and cooperators are just filling a niche in the ecosystem.

What if you took the “Ultimatum Game” (pairs of agents are randomly matched each round with $10 up for grabs; whoever gets picked as Person A proposes how to divide the amount between the two, and Person B can either accept the division or refuse, in which case both people get $0 that round), and you had both humans and computer programs playing in the game, but you don’t know which is which except for being given information about that agent’s past deals with others (this has been done, I think), AND (this is the novel part) you made it a real competition, a real professional sport, centered on seeing who could end up with the most money after a random number of rounds?
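A toy version of the proposed competition is easy to set up. This sketch assumes simple fixed-strategy agents, each described by an offer amount and an acceptance threshold; the agent parameters and function names here are hypothetical illustrations, not part of any actual experiment:

```python
import random

def play_round(proposer, responder):
    # Person A offers `offer` dollars of the $10 pot to Person B.
    if proposer["offer"] >= responder["threshold"]:
        proposer["money"] += 10 - proposer["offer"]
        responder["money"] += proposer["offer"]
    # On refusal, both get $0 this round.

def tournament(agents, rounds, rng):
    # Each round, shuffle the pool into random pairs; the first agent
    # of each pair is the proposer.
    for _ in range(rounds):
        rng.shuffle(agents)
        for i in range(0, len(agents) - 1, 2):
            play_round(agents[i], agents[i + 1])
    return agents
```

A real competition would swap the fixed `offer`/`threshold` parameters for human players or learned programs conditioning on each opponent’s deal history, but the scoring loop would look much like this.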

I would find it hard to believe that a) the best method for maximising dollars in this game is complicated enough that something humans can do that computers can’t is relevant (computers are better than humans at guessing an agent’s computer-or-human likelihood based on past actions, etc) and b) humans can carry out this method using their privileged knowledge consistently enough that they average a better return than a computer perfectly carrying out a simpler method.

They should all spontaneously decide to be nice, because it would end up benefiting almost everyone.

Suppose they all gang up on Mr. 100, kill him, and replace him with an equally strong robot that is programmed to agree to all pacts which everyone else agrees to. This benefits everybody but Mr. 100. Therefore, if it were possible, Mr. 100 should agree to act as the robot would act, because he prefers losing some oppression opportunities over dying.

If they could get everyone to co-operate, it would benefit everyone; but they can’t, and the most rational option for the individual screws everyone.

That’s kind of the essence of Moloch.

[In the actual example, they can’t co-operate because the strongest person is always tempted to defect, even though acausally this will have caused the person above them to defect.

In your example … IIUC the strongest coalition would be 2-100 ganging up on 1 to enforce their preferences on him. But regardless, if 1-99 ganged up on 100, then 1-98 would gang up on 99, and so on by induction until 2 rules a kingdom of robots. They should refuse to gang up in the first place, not agree to serve 2 – or act like utilitarians, as you suggest.]

From a story point of view, the competitions between Misters 1-100 shouldn’t be one on one. They should be team on team.

Mister 1 has value, in that he can help Mister 99 tie Mister 100.

Mister 10 has to spend his time intimidating 1-9 (because he can, if he catches them alone), which gets his team power up to 55. So maybe his team can intimidate 11 through 14 to join. Now he has a team with 105 power, enough to take on Mister 100 alone.
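A quick check of the arithmetic above, assuming each Mister contributes power equal to his number:

```python
# Misters 1 through 10 together: 1 + 2 + ... + 10.
team = sum(range(1, 11))
assert team == 55
# Recruit Misters 11 through 14 as well.
team += sum(range(11, 15))
assert team == 105 and team > 100  # just enough to take on Mr. 100 alone
```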

Who ends up winning and losing depends on energy, confidence, leadership, and luck, as well as unequal advantages starting out. Just like real life.

1. Depending on what opportunities for oppression exist with what frequency, one of the key tactics for stronger participants in this scenario could be to say things like “If you don’t resist my mild oppression, I promise to pass up opportunities to severely oppress you later; and if you do, I promise to take them”. They’d probably end up publicly signing contracts to that effect. This has complicated tactical implications, but I’m not going to try and puzzle them out.

2. I don’t think superrationality could apply here. Mr 100 has no reason to sign anything, ever, because his share of the pie is already at its theoretical maximum. No-one can offer him anything he can’t take. The only reason TDT etc. could make him want to ease up is if he’s worried about the possibility of Mr 101 moving in next door.

3. In this scenario, I think it would become a race to see who can get more than half the world’s power on their side. For the top quarter of the population, the ‘universal harmony’ solution is dominated by the ‘form an unbreakable, undefeatable alliance composed of everyone from Mr 75 to Mr 100, then freely oppress everyone else forever’ solution.
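If strength is simply additive (the original scenario never says it is), a quick check suggests the Mr 75–100 coalition actually falls a bit short of half the total power, and the alliance would have to reach down to about Mr 71 to clear a majority:

```python
total = sum(range(1, 101))          # total power of Misters 1-100: 5050
top_quarter = sum(range(75, 101))   # Misters 75 through 100
assert total == 5050 and top_quarter == 2275
# 2275 < 5050/2, so under the additive reading this coalition is short
# of a majority. Find how far down the alliance must extend:
k = 100
while sum(range(k, 101)) <= total / 2:
    k -= 1
assert k == 71  # Misters 71-100 total 2565, just over half
```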

(now I want to write a science fiction novel about a planet full of aliens who are perfect game theorists, but who always behave kindly and respectfully to one another. Then some idiot performs a census, and the whole place collapses into apocalyptic total war.)

Tom Hunt comes kinda close to what I want to say, but I don’t think cooperation is sub-rational. Rather, it’s part non-rational, and part superrational, in that order of importance. The non-rational part is that humans typically come with basic preferences for empathy and fairness. Not all humans, but most. As a direct consequence, crudely applied utility theory and game theory fail to describe human beings, completely aside from human irrationality. When we pretend that outcomes can be evaluated for one human without taking into consideration both the outcomes for others, and the relationships/actions that led there, we get crude utility and game theory. Sometimes, crude theory works well enough for practical purposes; usually, not so much.

The superrational part is that rationality itself stacks the moral deck. Rational discussion requires openness, honesty, consideration of interlocutors’ points, and other factors that are definitely not morally neutral. Rationality doesn’t require morality, but it is pro-moral. (David Velleman puts this nicely, so I stole his phrase.) Rational people prize rationality – perhaps not by definition, but that is the only way it works in actual fact; those who do not prize it will not achieve it. This puts a thumb on the scale – one that can be outweighed, for sure, but that doesn’t make it nonexistent – in favor of cooperation.

So indeed, you Kant dismiss universalizability, and cooperation needs to be un-veiled. But please let’s leave the toy (Homo economicus, atomistic) game theory problems behind, and remember what real humans value.

Of course, even within the atomistic paradigm, there are good responses available, as social justice warlock points out about Hobbes. Not trashing that. But without the two points I’ve emphasized, I think you only get a feeble shadow of morality as we know it.

A minor tweak to the hundred-people scenario: say everybody can spend some resources to increase their strength, but if everybody does so, they end up in pretty much the same place. In this case, even Mr 100 may have an incentive to sign some agreements so that he can stop spending those resources, provided that cost is greater than the gain he gets from fights (but he pays it anyway because if he didn’t he’d lose fights and lose even more).

That constitutes one more mechanism (along with uncertainty and teaming up on each other) by which I believe people are inclined to get into such agreements in the real world. An arms race can be expensive!

Another mechanism is powerful people that care about the welfare of non-powerful people (typically, their children).

Another simple solution to this is to make knowledge of each person’s number uncertain, such that it’s impossible to tell Mr. 80 apart from anyone in the Mr. 75–85 range until you fight him. (You also have to introduce a range of possible maximum numbers, so that Mr. 100 can’t rule out a perceived Mr. 102 just because there are only 100 people.) This should make everyone agree to the contract.

one of the things that does make Kings behave at least somewhat morally is the knowledge that they will be overthrown if they do not;

No. The reason kings behave much, much better than presidents and prime ministers is that they have a stake in the future, beyond the next election, or even the next few decades.

The other thing that’s wrong with this entire article is the insistence that people should behave “rationally”, instead of according to a combination of the incentive structure and their conscience. We are not robots, we are not angels, we are humans.

Think about conscience and private property. Then, as an added bonus, think about some people being born into ethnic groups, and some ethnic groups having different types of consciences. I.e. think like a neoreactionary of the religious, commercialist, or nationalist strains.

1- Even ignoring points Scott Alexander has already made, I cite the behaviour of the Great Powers of Europe from 1648 to 1815 since I actually know enough about them to make a reasonable argument there.

a- England/Britain was considered the best governed (as French Revolutionary demands eventually admitted) but was pretty far from the neoreactionary model of how things should be.
b- Comparatively, far less of the budget was spent on ordinary people and far more attention was spent on war with other powers. Yes, democracies do plot against each other, but it doesn’t take up as much time and money.
c- Large numbers of wars were started for no reason that modern people (I’m an amoralist so I don’t really count) would accept. Proportionate to population, far more people died from their wars than die in modern wars, because democracies are held back from wars by their own people.

2- It’s far more reasonable to say that rationality is the ideal against which actual behaviour is measured. People really are capable of considerable improvement in their rationality even if they can never perfect it. Understanding what perfectly rational behaviour is is a useful step towards improvement.
3- What’s the evidence that different ethnic groups behave differently regardless of cultural upbringing? Don’t studies of adoptions contradict you?

Is 1c actually true? I mean sure it is if your definition of modern is “post WW2”, but that would be really disingenuous. My understanding is that the European death rate due to war was pretty similar for the 18th and 20th centuries.

1b is certainly true but kind of a pointless claim, since you can achieve a huge fraction of your budget being spent on ordinary people by taxing ordinary people their whole salaries and then redistributing it back to them, without making their lives better at all on net. A more reasonable way to measure the costliness of military spending is as a fraction of GDP, not as a fraction of government budget, to give governments that tax less fair credit.

It’s an amazingly audacious lie. Pick up a book from the time of the War between the States – democracy had been grinding away at Americans for a hundred years – to find out how much more civilized that was than WWII or the post-WWII conflicts. Much less the American Rebellion.

The French Revolution was exceptionally bloody, with the city of Nantes in particular being harrowed by mass executions – of fewer than 10,000 people. One form of execution was called “Republican Marriage”, but France was at the time a dictatorship under Robespierre and later Napoleon, certainly not a democracy. The French Revolution was definitely not a war fought between democracies or even by a democracy, so democracy has her hands still clean.

lots of civil wars in undeveloped countries are of questionable relevance to the issue of which forms of government are more or less conducive to peace.

Yes, let’s ignore the effect of the US and EU, so we can pretend that third-world violence isn’t our problem. The US is the arsenal of democracy; when the US decided to no longer sell weapons to Egypt, the government fell.

absence of wars between democracies,

What if I told you [morpheus.jpg] that in this context democracy means nothing other than US puppet?

Is Venezuela a democracy? Is Russia? Was South Vietnam? Is North Vietnam? Is Pakistan a democracy? How about Afghanistan?

So yes, there have been few wars between democracies, with the notable exception of Argentina and the UK. The UK is of course not a democracy but a monarchy, and Argentina at the time was 100% democratic; so predictably the bumbling US chose the less-progressive side in that conflict, like in every other conflict since WWII.

Yes, let’s ignore the effect of the US and EU, so we can pretend that third-world violence isn’t our problem.

By implicature, you’re claiming that democracies are especially conducive to war in the rest of the world, just not among themselves. Is the idea that democracy, and the consequent leftward shift in policy, caused colonies to be abandoned, which caused them to descend into a mess?

It’s still hard to see how such effects, which depend on the behaviour of a large number of political players on the global scene, bear on the question of which system of government is best for an individual state. And if we’re not talking “best for an individual state”, then what is the criterion we want to evaluate systems of government on, anyway?

World War 2 should have been enough to keep the silly notion that democracy is enough to prevent war from ever arising. Hitler was elected.

It’s a very “No True Scotsman” thing; Hitler did nasty things, therefore his government is categorized as “fascist” instead of “democratic” and we talk about how wonderful “democratic” regimes are as though we did not artificially define that wonderfulness into existence.

The US and the EU give money, weapons, and various kinds of random NGO crap to the third world.

Mugabe, who was supported by the UK back when he took over Rhodesia and took away the civil rights of Whites, recently said that he prefers to deal with China instead of the West, because China isn’t trying to impose current Western moral standards like homosexuality.

Does the US have any responsibility for the effects of what the US imposes?

@suntzuanime: Are you saying that Nazi Germany was a democracy, as opposed to a dictatorship that had arisen from a democracy? Of course the potential to morph into a harmful dictatorship is a relevant consideration, but it seems to me that it is reasonable to separate this from the question of how belligerent states with a certain form of government are.

@peppermint: Nothing to reply to since you aren’t addressing my points.

You have a gift for telescoping historic events to give a misleading impression. The UK backed a democratic election which was won, as it turned out, with typical chicanery, by Mugabe, rather than their favoured candidate, Bishop Muzorewa.

And Mugabe’s policies about the white farmers weren’t exactly on his manifesto.

And what would you have done? Recolonised Zim/Rhod? Sent troops in on the ground? Backed Ian Smith to create another South Africa?

@peterdjones: Bush was elected “Kinda Sorta”, let’s not split hairs. The point is that people abuse the fuzzy definition of what counts as a “true” democracy to engage in all sorts of hypocrisy to make their preferred political system look better or worse, to the point where “no wars between democracies” isn’t a meaningful thing to say unless you pin down your definition pretty explicitly.

the ghosts of WWII, South America and Southeast Asia, and a thousand African bush wars say what?

Oh, but maybe all the constant, occasionally genocidal warfare over the past 50 years wasn’t against democracies. That means it’s not democracy’s fault.

What you said for (2) is completely unrelated to what I said.

As to (3), no, adoption studies do not contradict genetic differences in behavioral traits. I can’t imagine how anyone can take evolution seriously and at the same time believe that behavioral traits should be the same in all populations, or that anyone who has had serious contact with other groups could believe that behavioral traits are the same in all populations.

Judging from this list, WWII blows everything else out of the water in absolute numbers (unless you take the high estimate for the European colonization of the Americas, an option I’m not happy with for various reasons beyond the scope of this post). As a percentage of world population, though, the Mongol conquests seem likely to take that dubious crown, and most of the other high slots would go to Chinese civil wars.

The Napoleonic Wars and the Thirty Years’ War — by most measures the most destructive European wars prior to WWI — are no more than middling by comparison. The worst Cold War conflicts are more than an order of magnitude less significant.

Fortunately, the Napoleonic Wars were between a dictatorship of Napoleon, who was not democratically elected, and the Ancien Regime of old reactionary Catholics who really needed to codify and update their laws anyway.

What if I told you [morpheus.jpg] that jingoistic nationalism with a simplistic, irrevocably idealistic foreign policy is the result of democratic tendencies?

Hear me out – who was singing “we don’t want to fight, but by Jingo, if we do / we’ve got the guns, we’ve got the ships, we’ve got the money too / we’ve beat the Bear before and while we’re Britons true / no Russian will set foot in Constantinople”?

Why were they singing that?

What, exactly, is a “Briton” anyway?

But really, invading Iraq and setting up a farcical government there is the least of democracy’s crimes.

What if I told you [morpheus.jpg] that jingoistic nationalism with a simplistic, irrevocably idealistic foreign policy is the result of democratic tendencies?

If you told me that, I’d tell you that you’d neglected to support that statement with anything resembling an argument.

Nationalism has a complex history but I don’t see a case for tying it closely to democracy, unless you’re defining “democratic tendencies” as something closer to “modernity” (and I’d still disagree; there’s plenty of pugnacious nationalism in Herodotus and Tacitus, among other classical authors). The usual type specimen for a nationalistic war — WWI — was fought mainly between aristocratic powers, though the transition to constitutional monarchy had begun in several and was largely complete in a few.

I mention a war that the English were convinced to fight, under the name Britons, to prevent the Russians from taking a city from the Turks.

You mention wars fought by Romans and Greeks for the territorial aggrandizement of Romans and Greeks.

Just as long as you can find some evidence for something you can call nationalism, though. I mean, I could have defined it better, but Pericles is pretty persuasive:

We secure our friends not by accepting favours but by doing them…. We are alone among mankind in doing men benefits, not on calculations of self-interest, but in the fearless confidence of freedom. In a word I claim that our city as a whole is an education to Hellas.

Sure. On point (2) I was attacking the narrow game-theoretic rationality that this article talks about, which I believe few in the rationalist community actually subscribe to. I think that if I were going to play iterated prisoner’s dilemma with Scott Alexander, we would end up cooperating every time, including the last time.
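The game-theoretic point at issue can be made concrete. Below is a minimal sketch (my own illustration, not from the thread; the payoff values and strategy names are the standard textbook ones): in a finitely repeated prisoner’s dilemma, backward induction says “rational” players defect every round, yet two players who cooperate conditionally do far better.

```python
# Standard prisoner's dilemma payoffs: T=5, R=3, P=1, S=0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat_a, strat_b, rounds=100):
    """Run a finitely repeated PD; each strategy sees only the opponent's history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_b)
        move_b = strat_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

tit_for_tat = lambda opp: "C" if not opp else opp[-1]   # cooperate first, then mirror
always_defect = lambda opp: "D"                          # the backward-induction play

print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation
print(play(always_defect, always_defect))  # (100, 100): mutual defection
```

Backward induction treats the last round as a one-shot game, so defection unravels all the way back to round one; the commenter’s point is that real agents with dispositions and reputations don’t play that equilibrium.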

As to point (3), it’s tricky, because it’s probably the most emotionally charged political issue on this patch of the globe today. From 30,000 feet, there’s a pretty extensive literature on both sides making their respective claims.

There are these two species of baboons, the Hamadryas baboon of the highlands and the Yellow baboon from the lowlands. If they interbreed, they get a hybrid with mixed characteristics.

The Hamadryas prefers to live in harems, and the Yellows don’t. There’s a video on Youtube of some researchers taking a Yellow female into Hamadryas territory, but leaving her in a cage: https://www.youtube.com/watch?v=4LTWi13_jjk (starts at 17:30).

Are their behaviors cultural, or instinctual, or both? Are our behaviors cultural, or instinctual, or both? How about groups of humans who have had particular types of families for thousands of years – does that affect their instinctual behavior? If it doesn’t, there needs to be a compelling reason.

I think the idea of anti-racism as a scientific principle is ultimately going to sound as silly as Aristotle’s four humors theory of psychology. Biology has evolved and psychology is increasingly grounded in it.

anyway… the reason I come here is because arguing with reasonable people is a great way to discover things. For point 1, episodes that could be embarrassing to reactionaries include the constant wars against heretics that left part of central Europe depopulated, and the constant Arab slave raids that left parts of the European coast depopulated. The relevant question isn’t so much why the heretics ruined everything they touched (see also Shafarevich’s The Socialist Phenomenon), or why the Arabs wanted slaves, but why the kings of the middle ages weren’t able to protect their peoples.

…yes, they can. If you disagree, please create a computer program capable of checking the syntax and tagging articles based on the semantics of human language, or identifying human faces.

Who tells a goat to butt heads with the other goats to determine their social ranking, and thus the order that they choose mates when they meet the female herd? Who tells them to stick their hoohah in the wazoo?

Of course complicated behavior is partly in the DNA. It could not be otherwise.

” Who tells a goat to butt heads with the other goats to determine their social ranking, and thus the order that they choose mates when they meet the female herd? Who tells them to stick their hoohah in the wazoo?”

Can you define complex behavior? I would go with anything that a computer would have trouble with, like walking around, recognizing faces, speaking fluently and picking up chicks. I probably wouldn’t include the logic puzzles I spend a lot of time on.

Of course, humanity spent quite a long time evolving certain behaviors – and different ethnic groups spent quite a long time using various cultural norms to determine who mates with whom, thereby exerting an influence on human evolution.

What does being deposed peacefully mean? It means incentivizing a time horizon of the next election cycle, with policies judged on winning the next election and/or on how to disingenuously push something through.

For example, the Bush tax cuts that the CBO dutifully marked according to the written text that they would be phased in over the next ten years, then abruptly canceled.

Or Obamacare, whose contents no one really knew when it was passed, and whose major effects were intentionally delayed long enough to make it difficult to repeal.

Or Social Security – why is it paid for by the most regressive tax possible?

Why is fracking in 100 places possible, but nuclear reactors in 1 place impossible?

What is Corexit and who approved of its use at Deepwater Horizon, and why?

Democracy has a punitive cost in terms of the things that are possible. But that’s okay, because at least you get to pretend that there’s no difference between you and the people in charge.

Ah yes, the US Senate. Or did you mean the House of Lords – which, being “aristocracy” with titles, not actual ownership of anything, is reliably progressive.

Ivan the Terrible

Meanwhile, Joseph Stalin, who was not a dictator, was not the democratically elected leader of the USSR. The Chairmen of the Presidium of the Supreme Soviet of the USSR while Stalin was a secretary of a political party were Mikhail Kalinin and Nikolay Shvernik.

You say you like career civil servants and bicameralism. I will of course note that career civil servants mean less democracy, not more, and are therefore a good idea. However, bicameralism doesn’t distinguish the USSR from a democracy.

Maybe you can find a principle that distinguishes the USSR from a democracy. Is it that the USSR didn’t have very many political parties? If so, China and Venezuela, which have multiple parties, count as democracies. Well, then we need to let the other parties win. And then we have the West as it stands; and the few wars between Western countries prove that democracy is okay.

There are of course books about why multiple parties with a chance at winning cause the great technological advances and social progress of the 20th century. There has indeed been sublime social progress – where else would the mainstream press come to bat for female game developer Zoe Quinn, or the robber who punched the cop?

But when you look at the actual machinations of a political system with multiple parties that could win, what you see is that they try to sneak their ideology in through lying, try to steal everything that is not nailed down and burn everything else so the other party can’t have it, and the time horizon is at most two years.

You may not have a problem being lied to. After all, it’s for the best – Western countries haven’t fought any wars recently, and ‘citizen’ sounds so much better than ‘peasant’.

How about that: democracy creates incentives for lying, vote buying, two-year time horizons and slash-and-burn politics, whipping up the public into a murderous fury and other kinds of media manipulation; and all you get out of it is the ability to tell yourself that there’s no difference between you and the people in charge.

That pretty much sums it up.

I don’t care if the king is noble or not. Metternich was prime minister to a Habsburg monarch whose only coherent order was “I am the Emperor, and I want dumplings”. In 1848, there was a revolution – the Emperor asked Metternich, “But, are they allowed to do that?”

“No. The reason kings behave much, much better than presidents and prime ministers is that they have a stake in the future, beyond the next election, or even the next few decades.”

I think that it’s rather bad form to quote someone’s statement, claim disagreement, and then present a supporting point that doesn’t actually contradict the statement, and in fact is a corollary. Not wanting to be deposed is a subset of “stake in the future”.

“Then, as an added bonus, think about some people being born into ethnic groups, and some ethnic groups having different types of consciences.”

Ethnic groups do not have consciences, people do. You are posting nonsense, and the most apparent reason for doing so is racist motives.

That is not at all a response to 1c. You have now lost any assumption of good faith I may have had with regard to the idea that if you post a link in a manner implying that it establishes a claim, that it does in fact establish that claim.

I think that it’s rather bad form to quote someone’s statement, claim disagreement, and then present a supporting point that doesn’t actually contradict the statement, and in fact is a corollary. Not wanting to be deposed is a subset of “stake in the future”.

Yes, elected politicians have a stake in the next two, four, or six years, depending on what office they are elected to.

Ethnic groups do not have consciences, people do.

When did I say ethnic groups have consciences? I said that people of different ethnic groups have different behavioral traits. You even recognized what I actually said, though you still had to pretend I said something retarded.

You are posting nonsense, and the most apparent reason for doing so is racist motives.

You are posting nonsense, and the most apparent reason for doing so is anti-racist motives.

this is not at all a response to 1c

Point 1c was that today’s world is less violent than the pre-democracy times.

What the hell do you mean, when did you say that? I clearly quoted you saying that. What, you want me to include the timestamp? Here you go:

September 6, 2014 at 10:28 pm:

and some ethnic groups having different types of consciences.

“I said that people of different ethnic groups have different behavioral traits.”

So, there exist pairs of a trait T and an ethnic group E such that T is unique to E? That’s really not much less idiotic than your original claim. I could steelman your claim into something reasonable such as “There are substantial variations in the prevalence of some traits among ethnic groups”, but I shouldn’t have to write your argument for you. You either hold a manifestly absurd position, or you are wording your claim in an outrageously sloppy manner that displays a deep indifference to clarity and precision, and looks for all the world like a precursor to a bunch of racist equivocation.

“You even recognized what I actually said, though you still had to pretend I said something retarded.”

Reported for managing to be dishonest, rude, and ableist in the same sentence.

“Point 1c was that today’s world is less violent than the pre-democracy times.”

You might find this interesting. Like all simulation studies, it’s an argument from fictional evidence, but it’s an interesting proof-of-concept of a world of game-interacting agents where equity norms are the ultimate destination, but where it can take an arbitrary amount of time being non-equitable and class-based to get there.

More generally I think the fact that humans exhibit bounded rationality means that instead of a transcendental deduction of the categories of moral thought the empirical process will be one of distributed computation over history, with flashes of transcendence here and there.
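The kind of simulation being described can be sketched in a toy model (my own illustration in the spirit of agent-based bargaining studies, not the linked paper’s actual code; every name and parameter below is an assumption). Agents repeatedly pair up to play a Nash demand game – demand 30, 50, or 70 percent of a pie, with demands exceeding 100 percent paying nothing – and best-respond to a noisy memory of past opponents. Depending on the seed, the population can linger in an inequitable high/low convention for a long time before (or instead of) settling on the 50/50 equity norm.

```python
import random

DEMANDS = [30, 50, 70]          # low, equitable, and high shares of the pie

def payoff(mine, theirs):
    """Nash demand game: you get your demand only if both demands fit in the pie."""
    return mine if mine + theirs <= 100 else 0

def simulate(n_agents=50, steps=10000, memory=10, noise=0.1, seed=0):
    rng = random.Random(seed)
    # Each agent remembers the last few demands it has faced.
    memories = [[rng.choice(DEMANDS) for _ in range(memory)] for _ in range(n_agents)]
    strategy = [rng.choice(DEMANDS) for _ in range(n_agents)]
    for _ in range(steps):
        i, j = rng.sample(range(n_agents), 2)
        di, dj = strategy[i], strategy[j]
        for me, seen in ((i, dj), (j, di)):
            memories[me].pop(0)
            memories[me].append(seen)
            if rng.random() < noise:
                strategy[me] = rng.choice(DEMANDS)   # occasional experimentation
            else:
                # Best-respond to the remembered distribution of opposing demands.
                strategy[me] = max(DEMANDS, key=lambda d: sum(
                    payoff(d, m) for m in memories[me]))
    return strategy

final = simulate()
equity_share = sum(1 for d in final if d == 50) / len(final)
print(f"fraction demanding 50/50: {equity_share:.2f}")
```

The two stable conventions here are all-50 (everyone best-responds to expected 50s with 50) and a class-like mix of 30-demanders and 70-demanders, which illustrates the “arbitrary amount of time being non-equitable” point.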

Scott, it seems to me as if you’ve been poisoned by philosophy classes, and are thus attempting to reason morality out from first principles and thought experiments rather than by looking at the cognition, sociology, and history of actual people. Why not start from reality and then try to work back to a theory? You’re enough of a rationalist that you ought to dismiss the Open Question Argument with a simple wave of “Morals/values are complex — but reducible!”

For reference, I am a realist in the sense that I do think something genuine is going on in our moral cognition, that moral statements have truth-values and some of those are “true!”, but I also think that phrasing these things in the language used by contemporary normative ethics and meta-ethics researchers biases the search space of hypotheses to make the truth extremely difficult to locate.

That depends on how strong a realist you are. From weak realist standpoints like desirism, oughts are at least in principle derivable from ises about what we would do under meta-meta-reflexiveness or something.

Hence my remark on the Open Question Argument. Either “should” means something, or it doesn’t: we can examine its actual semantic content. The Open Question Argument basically says, “Yes, you’ve examined the real structure that’s generating ought-propositions, but that doesn’t mean I have to endorse that structure as any more normatively compelling than paperclip maximization! And don’t try to get to me by talking about what’s normatively compelling to my mind-design, or even for truth-optimizing minds in general, as Real Ethics is from the point of view of the universe!”

At this point, you are implicitly anthropomorphizing the universe (or rather, demanding that the universe must have a mind in order for compelling normativity to exist), acting as an amoralist (by putting your own norms on a level with Clippy), and also denying that ethical intuitions have any normative grounding whatsoever (which runs against the vast majority of theoretical and applied ethicists). Yes, error theorists come out and endorse this exact position, but then they’re left with the work of explaining why this vast collective delusion called “right and wrong” actually has an observable effect, through its ability to compel human actions, on the real world.

By then you’re admitting that morality exists, you are most likely at least a bit compelled by it yourself, but you’re kvetching at it like a wannabe-rebellious teenager who refuses to recognize normative obligations that result in having to walk the dog.

(NOTE: In case you’re asking, I intend to get back to other posts here- I’m just dealing with this first because it’s way easier)

Moore obviously has a different definition of moral right and wrong from you. Yours appeals to human instincts, whilst Moore’s doesn’t. I don’t know if he would, but Moore could easily say that what we see in human brains is an inbuilt perception of right and wrong, not proof of right and wrong itself. Error theorists most certainly would.

What most ethicists think is an argumentum ad populum, and a silly one given you don’t like philosophy.

Not all moral realists think of morality as from the point of the universe. There are other ways, such as the mind of God (admittedly bunk, but not incoherent).

Morality from the point of view of intuitions starts to look a lot less credible when you start to see conflicts between moral intuitions. That’s only the start of it- there’s even worse.

1- A lot of intuitions are culturally produced. If you get rid of these you get a result we really won’t like. If you keep them, your morality is culturally subjective.
2- Even ignoring cultural upbringing, personality alone can lead to drastic differences in moral intuitions. How does an ethicist account for these?

Ethicists try to solve these problems, but none of them have a decent answer.

The normative thrust of all this? That normatively speaking it is best to view moral intuitions as simply another kind of want, and weigh them just as you do other wants. If the moral intuitions come up short, so much worse for the moral intuitions.

To be compelled by a sense of right and wrong as more than a mere desire is a cognitive bias, and should be treated like all the others.

reason morality out from first principles and thought experiments rather than by looking at the cognition, sociology, and history of actual people.

This exercise is particularly interesting to the Friendly AI/LessWrong believers, because it is necessary to have a grounded theory of morality to think about the morality of a supergenius AI, which is not an actual person.

My model of Scott suggests that it’s some portion this, and some portion obsession. (In a friendly, non-attacking way.)

A recurring theme in philosophy: someone proclaims a heavyweight Principle (metaphysical and/or epistemic) which is supposedly absolutely necessary for X. For example, X might be free will and the Principle might be “being the sole ultimate uncaused cause of an action.” X might be ethical value, and the Principle might be “a fundamental non-natural property sensed by intuition.” X might be knowledge, and the Principle “absolute certainty.”

Skeptics point to the thin evidence for the Principle and a large body of evidence against, and declare that Science has discovered there’s no such thing as X! Meanwhile, people go about their everyday lives making choices and holding each other responsible, evaluating characters and acts and results, and knowing or doubting, just the same, without any obvious contact with the mysterious Principle. People have erotic love affairs without believing in Eros, and biologists study life without regard for elan vital. Why, it’s as if the Principle was beside the point all along!

There is a difference between saying that morality (in the sense of humans having moral beliefs) exists, saying that morality (in the sense of objective morality as believed in by non-philosophers) exists, and saying that rational morality (as in it being rational for humans to consult moral codes to decide how to behave) exists.

1 clearly exists. 2 clearly doesn’t. 3 is a point of legitimate dispute which I hold doesn’t exist.

Meaning that it is irrational to follow moral codes even if they successfully maximize human wellbeing, and even if you are motivated to do that… or that there are no such principles… or that there is no such motivation?

On Version 1- If you don’t want to do something but do so because you believe you have a moral duty, your action is irrational. If you do it because you care, and it’s truly separated from a belief in a moral duty to act, it’s a different matter.

On Version 2- Such principles are an incoherent conception- they don’t exist objectively in the universe. Humans have varying beliefs about them, but see 1.

On Version 3- People clearly are motivated, but in most cases they are motivated delusionally by their belief in moral norms.

—————————-
That’s not very helpful, so let me clarify. I’ve been thinking about my ideas a bit, so this is a partial revision. I’m using standard Sequences ideas of CEV as I understand them, but I may have got them wrong.

-Moral truths do not exist.
-Therefore, to the extent a person is motivated by perceived moral truths to act, they are breaking their own CEV.

Things get more complicated when dealing with somebody who attempts to define moral truth in terms of human intuitions. However,

-First, such a person needs to deal with how to resolve issues with contradicting intuitions.

-Second, they need to have at absolute minimum a rational response to a person who contradicts their moral code based on said person’s own conscience, even if such a person is merely hypothetical.

-Third, such a person needs to factor for the problem of Culturally Created Intuitions. If they allow them to factor into their system, morality becomes subjective. If they don’t, it becomes abhorrent to their target audience.

Even if they succeed in all three, without an answer to the amoralist challenge there is no rational reason to be moral besides ‘Because I want to’. If a person genuinely doesn’t want to follow their system despite seeing its logic, they have no rational answer for that person.

> On Version 1- If you don’t want to do something but do so because you believe you have a moral duty, your action is irrational.

Rationality is more than one thing. If there is a good rational argument for behaving morally, then it would be irrational not to acknowledge it, and hypocritical not to act on it. But that’s epistemic rationality.

> Such principles are an incoherent conception- they don’t exist objectively in the universe. Humans have varying beliefs about them, but see 1.

Needs justification. Also, you seem to have confused an ontological argument with an epistemological one. It is possible to argue for the truth of an ethical claim without introducing novel entities: there’s an example in the OP.

> On Version 3- People clearly are motivated, but in most cases they are motivated delusionally by their belief in moral norms.

You appear to be appealing to an assumption that morality is nonsense in order to conclude that it is nonsense.

“That’s not very helpful, so let me clarify. I’ve been thinking about my ideas a bit, so this is a partial revision. I’m using standard Sequences ideas of CEV as I understand them, but I may have got them wrong.”

In order to argue for the conclusion that objective moral truths don’t exist, you ideally need an impossibility proof, one that shows that there CANNOT be any.

Failing that, it would be good to have a comprehensive refutation of all known theories. You are not going to get that from Mr Yudkowsky, who is proudly unacquainted with the philosophical tradition.

“Moral truths do not exist.”

Truths don’t exist. When I assert the truth of “there is no pegasus”, I am denying the existence of a purported entity, namely Pegasus. What would be the point of doing that if the act of making a claim involved asserting the existence of some true-claim-entity?

> Part of my contention is that such an argument does not exist. Try one on me if you like, but I have seen none that actually work.

Don’t work because…? You have repeatedly attacked ontological arguments, arguments that assert the existence of some kind of entity that pins down morality. Not all arguments work that way: Scott’s argument in the OP is an example. So what’s wrong with it? Well, apparently, what’s wrong is the Amoralist Challenge. Which says there are no moral truths. So Scott’s theory isn’t a moral truth because there are no moral truths, and there are no moral truths because you haven’t seen a good theory, and Scott’s theory isn’t a good theory because there are none…

1- I’ve seen two ways to try to argue for morality without introducing a novel entity.

i- Appeal to intuitions.
ii- Appeal to self-interest.

If you appeal to intuitions, you get severe problems with conflicting intuitions. If you appeal to self-interest, you forget that your argument can at most be correct some of the time. Sometimes it is in a person’s interests to double-cross another person.

If a theory is fundamentally based on self-interest, exists from the perspective of an individual rather than society (people’s interests clash, so a societal contractualist theory would be pointless), and sometimes advocates double-crossing society or undermining society from within as a ‘moral’ course of action, it is so far from the ordinary understanding of ‘moral’ as not to merit the term.

2- The understanding most people have of a moral norm is utter nonsense- see moral realism. I’ve just explained why intuitions or self-interest don’t work either.
——————

4- Absence of evidence is evidence of absence. The burden of proof is on somebody who postulates the existence of something.

1.i Intuition has its problems. So does every approach, including amoralism. Skepticism about the real world has the problem that no one really believes it, in the sense of acting on it: skeptics cross the road with their eyes open. Likewise, amoralists still get aggrieved about things they don’t like. You have been treating amoralism as an unproblematic default, in that you never mention its problems, but it isn’t. If amoralism has its own problems, the correct answer is a noncommittal “dunno”.

1.ii There is a difference between short-term and long-term self-interest. If everyone defects by choosing short-term interest, then Moloch wins and everyone ends up with less utility. So if co-operation is the winning strategy, why would defection be rational?

Note that your objection to contractarian morality isn’t particularly specific to morality. Why not cheat in games or break business contracts for short term gain?

2 The understanding most people have of everything is utter nonsense.

4. Assertions of altruism or ethical objectivism are not per se assertions about the existence of entities, so Occam’s razor is not applicable. Moral realism asserts entities; other stances don’t.

5. Are you not arguing for a default to amoralism on the basis of something like Occam’s razor – on the basis that moralism (all moralism, not just moral realism) asserts the existence of some extra entity, apparently a truth?
