Three people, whom we'll call Xannon, Yancy and Zaire, are separately wandering through the forest; by chance, they happen upon a clearing, meeting each other. Introductions are performed. And then they discover, in the center of the clearing, a delicious blueberry pie.

Xannon: "A pie! What good fortune! But which of us should get it?"

Yancy: "Let us divide it fairly."

Zaire: "I agree; let the pie be distributed fairly. Who could argue against fairness?"

Xannon: "So we are agreed, then. But what is a fair division?"

Yancy: "Eh? Three equal parts, of course!"

Zaire: "Nonsense! A fair distribution is half for me, and a quarter apiece for the two of you."

Yancy: "What? How is that fair?"

Zaire: "I'm hungry, therefore I should be fed; that is fair."

Xannon: "Oh, dear. It seems we have a dispute as to what is fair. For myself, I want to divide the pie the same way as Yancy. But let us resolve this dispute over the meaning of fairness, fairly: that is, giving equal weight to each of our desires. Zaire desires the pie to be divided {1/4, 1/4, 1/2}, and Yancy and I desire the pie to be divided {1/3, 1/3, 1/3}. So the fair compromise is {11/36, 11/36, 14/36}."
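Xannon's compromise is just the equal-weight column average of the three desired divisions; a quick sketch with exact fractions confirms the arithmetic:

```python
from fractions import Fraction as F

# The three desired divisions, one row per person (Xannon, Yancy, Zaire):
desires = [
    [F(1, 3), F(1, 3), F(1, 3)],
    [F(1, 3), F(1, 3), F(1, 3)],
    [F(1, 4), F(1, 4), F(1, 2)],
]

# Equal weight to each desire: average each column (each person's share).
compromise = [sum(shares) / 3 for shares in zip(*desires)]
print([str(c) for c in compromise])  # ['11/36', '11/36', '7/18']  (7/18 = 14/36)
```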

Zaire: "What? That's crazy. There's two different opinions as to how fairness works—why should the opinion that happens to be yours, get twice as much weight as the opinion that happens to be mine? Do you think your theory is twice as good? I think my theory is a hundred times as good as yours! So there!"

Yancy: "Craziness indeed. Xannon, I already took Zaire's desires into account in saying that he should get 1/3 of the pie. You can't count the same factor twice. Even if we count fairness as an inherent desire, why should Zaire be rewarded for being selfish? Think about which agents thrive under your system!"

Xannon: "Alas! I was hoping that, even if we could not agree on how to distribute the pie, we could agree on a fair resolution procedure for our dispute, such as averaging our desires together. But even that hope was dashed. Now what are we to do?"

Yancy: "Xannon, you are overcomplicating things. 1/3 apiece. It's not that complicated. A fair distribution is an even split, not a distribution arrived at by a 'fair resolution procedure' that everyone agrees on. What if we'd all been raised in a society that believed that men should get twice as much pie as women? Then we would split the pie unevenly, and even though no one of us disputed the split, it would still be unfair."

Xannon: "What? Where is this 'fairness' stored if not in human minds? Who says that something is unfair if no intelligent agent does so? Not upon the stars or the mountains is 'fairness' written."

Yancy: "So what you're saying is that if you've got a whole society where women are chattel and men sell them like farm animals and it hasn't occurred to anyone that things could be other than they are, that this society is fair, and at the exact moment where someone first realizes it shouldn't have to be that way, the whole society suddenly becomes unfair."

Xannon: "How can a society be unfair without some specific party who claims injury and receives no reparation? If it hasn't occurred to anyone that things could work differently, and no one's asked for things to work differently, then—"

Yancy: "Then the women are still being treated like farm animals and that is unfair. Where's your common sense? Fairness is not agreement, fairness is symmetry."

Zaire: "Is this all working out to my getting half the pie?"

Yancy: "No."

Xannon: "I don't know... maybe as the limit of an infinite sequence of meta-meta-fairnesses..."

Yancy: "I wanted to give you a third of the pie, and you equate this to seizing the whole thing for myself? Small wonder that you don't want to acknowledge the existence of morality—you don't want to acknowledge that anyone can be so much less of a jerk."

Xannon: "You oversimplify the world, Zaire. Banana-fights occur across thousands and perhaps millions of species, in the animal kingdom. But if this were all there was, Homo sapiens would never have evolved moral intuitions. Why would the human animal evolve to cry morality, if the cry had no effect?"

Zaire: "To make themselves feel better."

Yancy: "Ha! You fail at evolutionary biology."

Xannon: "A murderer accosts a victim, in a dark alley; the murderer desires the victim to die, and the victim desires to live. Is there nothing more to the universe than their conflict? No, because if I happen along, I will side with the victim, and not with the murderer. The victim's plea crosses the gap of persons, to me; it is not locked up inside the victim's own mind. But the murderer cannot obtain my sympathy, nor incite me to help murder. Morality crosses the gap between persons; you might not see it in a conflict between two people, but you would see it in a society."

Yancy: "So you define morality as that which crosses the gap of persons?"

Xannon: "It seems to me that social arguments over disputed goals are how human moral intuitions arose, beyond the simple clash over bananas. So that is how I define the term."

Yancy: "Then I disagree. If someone wants to murder me, and the two of us are alone, then I am still in the right and they are still in the wrong, even if no one else is present."

Zaire: "And the murderer says, 'I am in the right, you are in the wrong'. So what?"

Xannon: "How does your statement that you are in the right, and the murderer is in the wrong, impinge upon the universe—if there is no one else present to be persuaded?"

Yancy: "It licenses me to resist being murdered; which I might not do, if I thought that my desire to avoid being murdered was wrong, and the murderer's desire to kill me was right. I can distinguish between things I merely want, and things that are right—though alas, I do not always live up to my own standards. The murderer is blind to the morality, perhaps, but that doesn't change the morality. And if we were both blind, the morality still would not change."

Xannon: "Blind? What is being seen, what sees it?"

Yancy: "You're trying to treat fairness as... I don't know, something like an array-mapped 2-place function that goes out and eats a list of human minds, and returns a list of what each person thinks is 'fair', and then averages it together. The problem with this isn't just that different people could have different ideas about fairness. It's not just that they could have different ideas about how to combine the results. It's that it leads to infinite recursion outright—passing the recursive buck. You want there to be some level on which everyone agrees, but at least some possible minds will disagree with any statement you make."
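Yancy's "recursive buck" can be made concrete. A hypothetical sketch (the function and its signature are invented for illustration): a fairness procedure that polls each mind for its preferred division still needs a rule for combining the answers — and if the choice of combination rule is itself referred back to the same fairness procedure, the recursion never bottoms out:

```python
def fair_division(minds, combine=None):
    # Step 1: "eat a list of minds" -- ask each for the division it calls fair.
    proposals = [mind() for mind in minds]
    # Step 2: combine the proposals. But *which* combination rule is fair?
    # With no agreed object-level bedrock, that question goes back to the
    # same function: the recursive buck, with no base case.
    if combine is None:
        combine = fair_division(minds)
    return combine(proposals)

minds = [lambda: "1/3 each", lambda: "1/3 each", lambda: "half for me"]
try:
    fair_division(minds)
except RecursionError:
    print("the buck never stops")
```

Supplying an object-level `combine` rule up front — Yancy's "equal split, full stop" — is exactly what terminates the recursion.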

Xannon: "Isn't the whole point of fairness to let people agree on a division, instead of fighting over it?"

Yancy: "What is fair is one question, and whether someone else accepts that this is fair is another question. What is fair? That's easy: an equal division of the pie is fair. Anything else won't be fair no matter what kind of pretty arguments you put around it. Even if I gave Zaire a sixth of my pie, that might be a voluntary division but it wouldn't be a fair division. Let fairness be a simple and object-level procedure, instead of this infinite meta-recursion, and the buck will stop immediately."

Zaire: "If the word 'fair' simply means 'equal division' then why not just say 'equal division' instead of this strange additional word, 'fair'? You want the pie divided equally, I want half the pie for myself. That's the whole fact of the matter; this word 'fair' is merely an attempt to get more of the pie for yourself."

Xannon: "If that's the whole fact of the matter, why would anyone talk about 'fairness' in the first place, I wonder?"

Zaire: "Because they all share the same delusion."

Yancy: "A delusion of what? What is it that you are saying people think incorrectly the universe is like?"

Zaire: "I am under no obligation to describe other people's confusions."

Yancy: "If you can't dissolve their confusion, how can you be sure they're confused? But it seems clear enough to me that if the word 'fair' is going to have any meaning at all, it has to finally add up to each of us getting one-third of the pie."

Xannon: "How odd it is to have a procedure of which we are more sure of the result than the procedure itself."

Comments (81)

It does not seem to come to any definite conclusion, instead simply presenting arguments and leaving the three participants in the dialogue with beliefs that are largely unchanged from their original position.

I am unable to come up with anything of substance to add, other than praise, but I feel compelled to comment anyway.

At first I tend to side with Zaire. The pie should be divided according to everyone's needs. But what if Zaire has a bigger body and generally needs to eat more? Should he always get more? Should the others receive less and be penalized because Zaire happens to be bigger? This is not easy, sigh...

I'm about to make a naked assertion with nothing to back it up, just to put it out there.

The purpose of morality is to prevent such an argument from ever occurring. If the moral engine of society is working correctly, then all its members will have a desire for everyone to get an equally sized portion of the pie (in this example). If there is a Zaire who believes he should get 1/2 of the pie, then there was a malfunction when morality was being programmed into him. This malfunction will lead to conflict.

View it like you would view programming a friendly AI. The purpose is to program the AI with desires that will motivate it to help humanity, and to have a strong aversion to destroying humanity. If this goal is not reached, there was a failure by the programmers. I think it's been said on this blog that if you create an AI without having made it friendly, you've already lost and the game is over. It's not quite as drastic if you fail with humans, but the principle is the same. If a friendly Human Intelligence is not programmed with the desires that will help to keep humanity thriving, then there was a failure by its programmers (parents/society/teachers/whoever).

Why is it that human morality is this confusing and mysterious realm that no one seems able to fathom, when AI morality is straightforward? Is it just that humans can easily see the goal of one (an AI that desires to help rather than hurt humanity) and for some reason can't see the goal of the other (a human that desires to help rather than hurt humanity)?

Zaire's argument is that some people actually need more of "the pie" than others. Equal portions aren't necessarily fair, in that situation.

For example: would it be fair if every person on the globe got an equal portion of diabetic insulin?
No, obviously not. We disproportionately give insulin to diabetics. Because that is more fair than to distribute it equally amongst all people (regardless of their health situation).

The disagreement here is between two perfectly understandable concepts of fairness. Both of them make sense in different ways. I see no easy solution to this myself.

It licenses me to resist being murdered; which I might not do, if I thought that my desire to avoid being murdered was wrong, and the murderer's desire to kill me was right.

Licenses relative to what authority? Himself, I presume. Of course the murderer would say the same.

Blind? What is being seen, what sees it?

Optimistically, I would say that if the murderer perfectly knew all the relevant facts, including the victim's experience, ve wouldn't do it (at least if ve's human or similar; a paperclip maximizer won't care).

Tiiba - Sure, I got no problem with that. There are often extenuating circumstances that change how any particular interaction plays out; however, that was not the case presented in this hypothetical. :) Still, as a baseline that everyone should start with (and work forward from), an equally sized portion for all is the ideal, as it will lead to the least conflict.

Gowder, I'm talking to the people who say unto me, "Friendly to who?" and "Oh, so you get to say what 'Friendly' means." I find that the existing literature rarely serves my purposes. In this case I'm driving at a distinction between the object level and the meta level, and the notion of bedrock (the Buck Stops Immediately). Does the political philosophy go there? - for I am not wholly naive, but of course I have only read a tiny fraction of what's out there. I fear that much political philosophy is written for humans by humans.

Roland, I confess that I'd intended the original reading of the dialogue as simple greed on Zaire's part, but your reading is also interesting... I would still tend to say that 1/3 apiece is the fair division, but that either of the other two are welcome to donate portions of pie to Zaire; the resulting division might perhaps be termed utilitarian. The morally interesting situation is when Xannon thinks Zaire deserves more pie but Yancy disagrees.

I'm curious why you chose to use the norm-connoting term "fair" in place of the less loaded term "equal division" ... what properties does equal division have that make you want to give it special normative consideration? I could think of some, but I'm particularly interested in what your thoughts are here!

A bit of unfairness is acceptable, if that is needed to get us all back to fairness. Example: Zaire should get a bigger piece of pie if they are on a lifeboat and if he is the only one who can row the boat back ashore, and needs some extra carbs to do that. Xannon and Yancy should agree that this is a useful distribution in this context.

Say x and y come from, respectively: a tribe of quasi-eugenicists that settle distributions based on "fitness" rankings (using something like IQ - probably largely arbitrary - but that doesn't matter), and a tribe of equal-sharers (who subscribe to y's conclusion in the dialogue). Within each culture the relevant version of "fairness" (or the 'core distributive principle') is intuitive, much like y's system is for us. In the x culture people with low rankings intuit that their superiors are 'entitled' to their larger share, and in fact this reinforces a strictly tiered society with little to no concept(s) of equality (sure there would be squabbles between the closely ranked - but the distinction between low and high would be clear). Their philosophers do speculate on other systems - but, barring the occasional sociopath, people typically retreat to the same intuition. Thus both societies largely avoid the recursion problem. So now what happens when x and y stumble upon the sylvan pastry?

Of course y might not signal the relevant information pertaining to an xish fitness ranking, especially if the ranking system doesn't have anything to do with appearance. So x might be momentarily confused. But, applying his intuitions, he will probably attempt to recreate whatever routines and evaluations the xers use to establish distribution (just as y applies his familiar calculus). The point is: there will be an argument. And as long as this isn't a survival situation, it's difficult to see any variety of bedrock within walking distance.

The Xers and Yers are radically different - but similar enough, I think, to be included within the space of possible human cultures (history is replete with every flavor of hierarchy). I think the reality is that we depend precariously on a very sloppy overlapping of billions of similar but clearly distinct conceptions of morality. The more you venture beyond your social bubble the less you overlap with those adjacent to you, and eventually you start getting into situations like x and y's (so you best mind your ps and qs). Of course the knitting gets progressively tighter as we move closer to the preferred world of Marginal Revolution and OB.

Why not divide the pie equally among cells, which make up the agglomerations we call "persons"? And if there is a distinction between voluntary and fair so that Xannon and Yancy honestly couldn't comfortably eat another bite and gave extra to Zaire, would that be unfair?

We've already got a society in which living things are treated like farm animals, by which of course I speak of farm animals themselves. They are of course privileged over even more defenseless living beings that they live off of as parasites, namely plants. Some Swiss officials are working to remedy that situation, but it causes me to wonder why we should privilege the selfish replicating and entropy-producing patterns known as "life" over non-life? Fairness as symmetry is underdetermined and, at least to me, not particularly compelling.

In a hypothetical gender-chattel society, how does the notion that it is unfair pay rent?

I notice that Eliezer asks the question why it is we discuss fairness and find it compelling. He does not answer that question. My guess is that it signals a desire for the cooperation of others and establishes a Schelling point by which you are willing to cooperate with your allies against defectors. Upon violating the implied promise of future cooperation your reputation would take a significant hit. As I use a pseudonym only relevant in a restricted domain, I am free to ignore the damage to my reputation and the willingness of others to cooperate with me rather than punish.

Eliezer, to the extent I understand what you're referencing with those terms, the political philosophy does indeed go there (albeit in very different vocabulary). Certainly, the question about the extent to which ideas of fairness are accessible at what I guess you'd call the object level are constantly treated. Really, it's one of the most major issues out there -- the extent to which reasonable disagreement on object-level issues (disagreement that we think we're obligated to respect) can be resolved on the meta-level (see Waldron, Democracy and Disagreement, and, for an argument that this leads into just the infinite recursion you suggest, at least in the case of democratic procedures, see the review of the same by Christiano, which google scholar will turn up easy).

I think the important thing is to separate two questions: 1. what is the true object-level statement, and 2. to what extent do we have epistemic access to the answer to 1? There may be an objectively correct answer to 1, but we might not be able to get sufficient grip on it to legitimately coerce others to go along -- at which point Xannon starts to seem exactly right.

Oh, hell, go read Ch. 5. of Hobbes, Leviathan. And both of Rawls's major books.

I mean, Xannon has been around for hundreds of years. Here's Hobbes, from previous cite.

But no one mans Reason, nor the Reason of any one number of men, makes the certaintie; no more than an account is therefore well cast up, because a great many men have unanimously approved it. And therfore, as when there is a controversy in account, the parties must by their own accord, set up for right Reason, the Reason of some Arbitrator, or Judge, to whose sentence they will both stand, or their controversie must either come to blowes, or be undecided, for want of a right Reason constituted by Nature...

Okay, how does standard political philosophy say you should fairly / rightly construct an ultrapowerful superintelligence (not to be confused with a corruptible government) that can compute moral and metamoral questions only given a well-formed specification of what is to be computed?

After you've carried out these instructions, what's the standard reply to someone who says, "Friendly to who?" or "So you get to decide what's Friendly"?

That's a really fascinating question. I don't know that there'd be a "standard" answer to this -- were the questions taken up, they'd be subject to hot debate.

Are we specifying that this ultrapowerful superintelligence has mind-reading power, or the closest non-magical equivalent in the form of access to every mental state that an arbitrary individual human has, even stuff that now gets lumped under the label "qualia"/ability to perfectly simulate the neurobiology of such an individual?

If so, then two approaches seem defensible to me. First: let's assume there is an answer out there to moral questions, in a form that is accessible to a superintelligence, and let's just assume the hard problem away, viz., that the questioners know how to tell the superintelligence where to look (or the superintelligence can figure it out itself).

We might not be able to produce a well-formed specification of what is to be computed when we're talking about moral questions (it's easy to think that any attempt to do so would rig the answer in advance -- for example, if you ask it for universal principles, you're going to get something different from what you'd get if you left the universality variable free...). But if the superintelligence could simulate our mental processes such that it could tell what it is that we want (for some appropriate values of we, like the person asking or the whole of humanity if there was any consensus -- which I doubt), then in principle it could simply answer that by declaring what the truth of the matter is with respect to that which it has determined that we desire.

That assumes the superintelligence has access to moral truth, but once we do that, I think the standard arguments against "guardianship" (e.g. the first few chapters of Robert Dahl, Democracy and its Critics) fail, in that if they're true -- if people are really better off deciding for themselves (the standard argument), and making people better off is what is morally correct, then we can expect the superintelligence to return "you figure it out." And then the answer to "friendly to who" or "so you get to decide what's friendly" is simply to point to the fact that the superintelligence has access to moral truth.

The more interesting question perhaps is what should happen if the superintelligence doesn't have access to moral truth (either because there is no such thing in the ordinary sense, or because it exists but is unobservable). I assume here that being responsive to reasons is an appropriate way to address moral questions (if not, all bets are off). Then the superintelligence loses one major advantage over ordinary human reasoning (access to the truth on the question), but not the other (while humans are responsive to reasons in a limited and inconsistent sense, the supercomputer is ideally responsive to reasons). For this situation, I think the second defensible outcome would be that the superintelligence should simulate ideal democracy. That is, it should simulate all the minds in the world, and put them into an unlimited discussion with one another, as if they were bayesians with infinite time. The answers it would come up with would be the equivalent to the most legitimate conceivable human decisional process, but better...

I'm pretty sure this is a situation that hasn't come under sustained discussion in the literature as such (in superintelligence terms -- though it has come up in discussions of benevolent dictators and the value of democracy), so I'm talking out my ass a little here, but drawing on familiar themes. Still, the argument defending these two notions -- especially the second -- isn't a blog comment, it's a series of long articles or more.

First: let's assume there is an answer out there to moral questions, in a form that is accessible to a superintelligence, and let's just assume the hard problem away

Let's not. See, this is what I mean by saying that political philosophy is written for humans by humans.

Your other answer, "ideal democracy", bears a certain primitive resemblance to this, as you'd know if you were familiar with the Friendliness literature...

Okay, sorry about that, just emphasizing that it's not like I'm making all this up as I go along; and also, that there's a hell of a lot of literature out there on everything, but it isn't always easy to adapt to a sufficiently different purpose.

Why doesn't Zaire just divide himself in half, let each half get 1/4 of the pie, then merge back together and be in possession of half of the pie?

Or, Zaire might say: Hey guys, my wife just called and told me that she made a blueberry pie this morning and put it in this forest for me to find. There's a label on the bottom of the plate if you don't believe me. Do you still think 'fair' = 'equal division'?

Or maybe Zaire came with his dog, and claims that the dog deserves an equal share.

I appreciate the distinction Eliezer is trying to draw between the object level and the meta level. But why the assumption that the object-level procedure will be simple?

I was expecting Xannon and Yancy to get into an exchange, only to find that Zaire had taken half the pie while they were talking. Xannon is motivated by consensus, Yancy is motivated by fairness, and Zaire is motivated by pie. I know who I bet on to end up with more pie.

And then they discover, in the center of the clearing, a delicious blueberry pie.

If the pie is edible then it was recently made and placed there. Whoever made it is probably close at hand. That person has a much better claim on the pie than these three and is therefore most likely rightly considered the owner. Let the owner of the pie decide. If the owner does not show up, leave the pie alone. Arguably the difficulty the three have in coming to a conclusion is related to the fact that none of the three has anything close to a legitimate claim on the pie.

Arguably the difficulty the three have in coming to a conclusion is related to the fact that none of the three has anything close to a legitimate claim on the pie.

If you modify the scenario by postulating that the pie is accompanied by a note reading "I hereby leave this pie as a gift to whomever finds it. Enjoy. -- Flying Pie-Baking Monster", how does that make the problem any easier?

If you modify the scenario by postulating that the pie is accompanied by a note reading "I hereby leave this pie as a gift to whomever finds it. Enjoy. -- Flying Pie-Baking Monster", how does that make the problem any easier?

If, indeed, it requires that we imagine a flying pie-baking monster in order to come up with a situation in which the concept of 'fairness' is actually relevant (e.g. not immediately trumped by an external factor), then it suggests that the concept of 'fairness' is in the real world virtually irrelevant. I notice also that the three have arrived separately and exactly simultaneously, another rarity, but also important to make 'fairness' an issue.

I notice also that the three have arrived separately and exactly simultaneously, another rarity, but also important to make 'fairness' an issue.

Yet most people in a situation of near simultaneity find it easier (or perhaps just safer?) to assume they had arrived simultaneously and come to agreement on dividing the pie 'fairly', rather than argue over who got there first.

It seems that 1/3 each is where the recursive buck ends, anyhow. Upon learning that Zaire claims half for him/herself and Xannon insists on averaging fairness algorithms, Xannon and Yancy merely update their own claims to match Zaire's at all times. That way, the average of the three desires will always come out to 1/3 apiece. Perhaps that is an argument for why an equal share is most fair. If not, Zaire could just wait until the other two had stated their desires and then claim the whole pie for him/herself, thus always skewing the final average in his/her favor.
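On one reading of this comment: if each person claims for themselves whatever share Zaire claims for himself, and the claims are then renormalized to sum to one whole pie, the result is equal thirds — while a last mover who waits and then claims everything skews the renormalized result in their favor. A small sketch (the renormalization rule is my own assumption, not stated in the comment):

```python
from fractions import Fraction as F

def renormalized_average(self_claims):
    # Each person's share is their own claim, rescaled so the shares sum to 1.
    total = sum(self_claims)
    return [claim / total for claim in self_claims]

# All three mirror Zaire's claim of 1/2 for themselves: equal thirds.
print(renormalized_average([F(1, 2), F(1, 2), F(1, 2)]))

# Zaire waits, then claims the whole pie while the others claim 1/3 each:
print(renormalized_average([F(1, 3), F(1, 3), F(1, 1)]))  # Zaire gets 3/5
```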

I don't have an argument here; rather, I just want to see if I understand each position taken in the dialogue. After all, it would be a dreadful waste of time to argue one way or the other against our three musketeers while completely misunderstanding some key point. As far as I can tell, these are the essential arguments being made:

Yancy's position: that fairness is a rational (mathematical) system. There is no moral factor; rather than "to each according to his need," it is "to each according to the equation." This presumes fairness is a universal, natural system which people must follow, uncomplaining; any corruption of the system would be unfair, any bending or breaking of the rules renders them useless.

Zaire's position, that it is fair for individuals to define the product of fairness, handily illustrates this; his conception of fairness breaks down as soon as another conception is introduced. Fairness is entirely relative.

Xannon's initial position is that the product of fairness can be rationally derived from individually relative definitions of fairness; that fairness itself is the sum of differing concepts of fair.

Xannon revises this position, in that fairness is derived from the moral rights of a group and has an intrinsic, understood value. Those who do not inherently comprehend this value are not moral and do not belong to the group, like the murderer. Of course, this assumes that passersby, like Xannon, would side with the victim. Not only could they side with the assailant, they could even refuse to become involved. If the victim is "licensed" to resist being murdered, is the victim likewise licensed to kill the murderer in self-defense, and is that fair? The question of "who started it?" begins a new problem. If the observer is joined by five others who think that the victim, who killed his attacker in self-defense, must be put to death, is that also fair? This position argues for the existence of absolute morality, but only achieves a weak implication of moral relativity.

This is at odds with Xannon's initial position; if Zaire wants more of the pie than Yancy, but Xannon sides with Yancy, Xannon thinks it is fair to average their desires. However, Xannon would not average the desire of the murderer, the victim, and the passerby. If the murderer is presumed in the wrong, then Zaire is also presumed in the wrong. Therefore, it is unfair to attempt to combine Zaire's desire with Yancy and Xannon's.

In essence, Xannon's position is ultimately not far removed from Zaire's; where Zaire believes in the individual's right to define fairness, Xannon believes in the group's right to the same. Both believe in a moral right to inflict their own definition of fairness on the other. Yancy, in believing a universal system of fairness can be applied, would attempt the same. Further, even if Xannon agreed Yancy's proposal was fair, it would not be for the same reason, as Xannon believes fairness is derived from moral right; therefore, arriving at a fair decision through the amoral application of a rational system may not be "morally fair" to Xannon. There is no resolution to be found here.

I would like to see the question of the purpose of fairness addressed more comprehensively. If fairness as a system is not effective, why does it exist? If it is an artificial social construction, it must have an agreed-upon definition; if it is an evolved, biological system, it must have a physical basis; if it is a universal rule, there must be evidence of it.

As for the question "Friendly to who?"/"So you get to decide what's Friendly?", may I suggest
Who Gets to Decide? as a reasonable answer? To summarize (while of course skipping a lot of the detail in the original post), no one gets to decide what's Friendly just like no one gets to decide the speed of light. There are simply facts that can be discovered (or that we can be wrong about). Certain desires help the human race, other desires hurt the human race, and these can be discovered in the same way we discover any other facts about the universe.

Does anyone think that this disagreement can be resolved without threat-signalling? I think valuing a particular model of 'fairness' over another (the Xers and Yers from Leif's post) ultimately boils down to the cost/benefit of being accepted/rejected by a particular social group.

So does this disagreement take place in a universe consisting only of the entities Xannon, Yancy, and Zaire, or do they all go back to the same village afterward and reminisce about what happened, or do they each go back to their separate villages?

Yet most people in a situation of near simultaneity find it easier (or perhaps just safer?) to assume they had arrived simultaneously and come to agreement on dividing the pie 'fairly', rather than argue over who got there first.

You are claiming it is a common practice. But common practice is common practice - not necessarily "fairness". We often do things precisely because they are commonly done. One common practice which is not equal is, if two cars arrive at the same intersection at right angles, then the car on the right has the right of way. This is the common practice, and we do it because it is common practice, and it is common practice because we do it.

Even if it is not common practice, dividing it into thirds may well be apt to occur to most people. This makes it a likely Schelling point. Schelling points aren't about fairness either. They are about trying to predict what the other guy will predict that you predict, all without communicating with each other. You can use a Schelling point to try to find each other in a large city without a prior agreement on where to meet. Each of you tries to figure out what location the other will choose, keeping in mind that the other guy is trying to pick the location which you're most likely to predict he's going to pick (and you can probably keep recursing).

If all we're trying to do is come to an agreement there is no need to get deeply philosophical about fairness per se.

One common practice which is not equal is, if two cars arrive at the same intersection at right angles, then the car on the right has the right of way. This is the common practice, and we do it because it is common practice, and it is common practice because we do it.

We do it that way because the delay the car on the left will experience if the car on the right goes first is shorter than the delay the car on the right would experience if the car on the left went first.

This rule is reversed in left-hand-of-the-road driving regions, because of the reversal of the asymmetry.

This dialogue leads me to conclude that "fairness" is a form of social lubricant that ensures our pies don't get cold while we're busy arguing. The meta-rule for fairness rules would then be: (1) fast; (2) easy to apply; and (3) everybody gets a share.

Optimistically, I would say that if the murderer perfectly knew all the relevant facts, including the victim's experience, ve wouldn't do it.

The murderer may have all the facts, understand exactly what ve is doing and what the experience of the other will be, and just decide that ve doesn't care. Which fact is ve not aware of? Ve may understand all the pain and suffering it will cause, ve may understand that ve is wiping out a future for the other person and doing something that ve would prefer not to be on the receiving end of, may realize that it is behavior that if universalized would destroy society, may realize that it lessens the sum total of happiness or whatever else, may even *know* that "ve *should* feel compelled not to murder" etc. But at the end of the day, ve still might say, "regardless of all that, I don't care, and this is what I want to do and what I will do".

There is a conflict of desire (and of values) here, not a difference of fact. Having all the facts is one thing. Caring about the facts is something altogether different.

--

On the question of the bedrock of fairness, at the end of the day it seems to me that one of two scenarios will occur:

(1) all parties happen to agree on what the bedrock is, or they are able to come to an agreement.

(2) all parties cannot agree on what the bedrock is. The matter is resolved by force with some party or coalition of parties saying "this is our bedrock, and we will punish you if you do not obey it".

I tend to agree with Xannon, that 'fairness' is defined by society. So the question is whether society's moral norms still affect the three opponents. If Xannon decides "we are still members of a society where equal shares for everyone are considered fair" he might side with Yancy, split the pie into thirds, and label Zaire a criminal. If he decides "we are out in the desert with no society around to push its moral values onto us" he might side with Zaire, divide the pie Zaire's way, and tell Yancy to shove his ideas of equality up his behind.

Y's whole "fair distribution is an even split, not a distribution arrived at by a 'fair resolution procedure' that everyone agrees on" argument seems to either say 'fair' == 'equal division' or bring in some sort of external source of morality: "The Howly Blooble says we shall divide equally and so we shall."

Y's intuitive grasp of fairness seems to be derived from the ideas of modern western society, but even in our world there is, for example, the medical practice of triage, where a doctor spends more time with patients who require more treatment. Nobody seems to call that unfair. As has already been mentioned, the same situation would look different if X and Y had eaten big dinners an hour ago and Z hadn't eaten in two days. I suppose in that case Y would be arguing that it is fair to give the whole pie to Z.

Certain desires help the human race, other desires hurt the human race, and these can be discovered in the same way we discover any other facts about the universe.

You simply passed the recursive buck to "help" and "hurt". I will let you take for granted the superintelligence's knowledge of, or well-calibrated probability distribution over, any empirical truth about consequences; but when it comes to the valuation of those consequences in terms of "helping" or "hurting" you must tell me how to compute it, or run a computation that computes how to compute it.

The resemblance between my second suggestion and your thing didn't go unnoticed -- I had in fact read your coherent extrapolated volition thing before (there's probably an old e-mail from me to you about it, in fact). I think it's basically correct. But the method of justification is importantly different, because the idea is that we're trying to approximate something with epistemic content -- we're not just trying to do what you might call a Xannon thing -- we're not just trying to model what humans would do. Rather, we're trying to model and improve a specific feature of humanity that we see as morally relevant -- responsiveness to reasons.

That's really, really important.

In the context of your dialogue above, it's what reconciles Xannon and Yancy: even if Yancy can't convince Xannon that there's some kind of non-subjective moral truth, he ought to be able to convince Xannon that moral beliefs should be responsive to reasons -- and likewise, even if Xannon can't convince Yancy that what really matters, morally, is what people can agree on, he should be able to convince Yancy that the best way to get at it in the real world is by a collective process of reasoning.

So you see that this method of justification does provide a way to answer questions like "friendliness to whom." I know what I'm doing, Eliezer. :-)

Eliezer: as you are aware yourself, we don't know how to compute it, nor how to run a computation that computes how to compute it. If we leave it up to the superintelligence to decide how to interpret "helping" and "hurting," it will be in a position no worse than our own, and possibly better, seeing that we are not superintelligent.

The answer to "Friendly to who?" had damn well better always be "Friendly to the author and by proxy those things the author wants." Otherwise leaving aside what it actually is friendly to, it was constructed by a madman.

Right, but those questions are responsive to reasons too. Here's where I embrace the recursion. Either we believe that ultimately the reasons stop -- that is, that after a sufficiently ideal process, all of the minds in the relevant mind design space agree on the values, or we don't. If we do, then the superintelligence should replicate that process. If we don't, then what basis do we have for asking a superintelligence to answer the question? We might as well flip a coin.

Of course, the content of the ideal process is tricky. I'm hiding the really hard questions in there, like what counts as rationality, what kinds of minds are in the relevant mind design space, etc. Those questions are extra-hard because we can't appeal to an ideal process to answer them on pain of circularity. (Again, political philosophy has been struggling with a version of this question for a very long time. And I do mean struggling -- it's one of the hardest questions there is.) And the best answer I can give is that there is no completely justifiable stopping point: at some point, we're going to have to declare "these are our axioms, and we're going with them," even though those axioms are not going to be justifiable within the system.

What this all comes down to is that it's all necessarily dependent on social context. The axioms of rationality and the decisions about what constitute relevant mind-space for any such superintelligence would be determined by the brute facts of what kind of reasoning is socially acceptable in the society that creates such a superintelligence. And that's the best we can do.

The only reasons that exist for taking any actions at all are desires. In specific - the desires of the being taking the action. Under any given condition the being will always take the action that best fulfills the most/strongest of its desires (given its beliefs). The question isn't which action is right/wrong based on some universal bedrock of fairness, but rather what desires we want the being to have. We can shape many desires in humans (and presumably all the desires of an AI) and thus we want to give it the desires that best help and least hurt humanity.

You say this is passing the recursive buck. Unknown says it's impossible for us to calculate what's helpful or hurtful. I disagree in both cases. The desires that we most want to encourage are those that tend to fulfill the desires of other beings ("helpful" desires). The desires we most want to discourage are those that tend to thwart the desires of other beings ("harmful" desires). It doesn't have to be some grand confusing thing.

Paul: Sounds like you're just describing the "thought faster" part of the CEV process, i.e., "What would you decide if you could search a larger argument space for reasons?" However, it seems to me that you're idealizing this process very highly, and overlooking such questions as "What if different orderings of the arguments would end up convincing us of different things?" which a CEV has to handle somehow, e.g. by weighting the possibilities by length, combining them into a common superposition, and acting only where strong coherence exists... but now we're heading into strictly Friendly AI territory.

If you say "reasons" or "reasons for reasons", that's philosophy written by humans for humans; if you want to put the weight of your theory on "reasons" you have to tell me how to compute a "reason", or how to make a superintelligence compute something that computes a reason.

Things like the ordering of arguments are just additional questions about the rationality criteria, and my point above applies to them just as well -- either there's a justifiable answer ("this is how arguments are to be ordered,") or it's going to be fundamentally socially determined and there's nothing to be done about it. The political is really deeply prior to the workings of a superintelligence in such cases: if there's no determinate correct answer to these process questions, then humans will have to collectively muddle through to get something to feed the superintelligence. (Aristotle was right when he said politics was the ruling science...)

On the humans for humans point, I'll appeal back to the notion of modeling minds. If we take P to be a reason, then all we have to be able to tell the superintelligence is "simulate us and consider what we take to be reasons," and, after simulating us, the superintelligence ought to know what those things are, what we mean when we say "take to be reasons," etc. Philosophy written by humans for humans ought to be sufficient once we specify the process by which reasons that matter to humans are to be taken into account.

Eliezer: Are you looking for a new definition of "fairness" which would reconcile the partisans of existing definitions? Or are you just pointing out that this is a sort of damned-if-you-do, damned-if-you-don't problem, and that any rule for establishing fairness will piss somebody or other off? If the latter, from the point of view of your larger project, why not just insert a dummy answer for this question - pick any definition that grabs you - and see how it fits with the rest of what you need to work out. Or work through several different obviously computable answers.

As far as it goes, it seems plausible-ish that fairness has to do with equality of *something* - resources, or opportunity, or utility, or whatever - but I doubt whether there's any general agreement over what should be equalised, and I don't see the value of descending to a meta level of discussion to sort the question out. Meta-discussions would have to be answerable to fairness anyway, if they were to be fair, and that looks circular. So why not cut the knot and pick whatever answer is nearest to hand?

I suppose that's just to second Paul Gowder's point that the political problem is insurmountable. But I imagine few things would resolve a political problem faster than the backing of an all-powerful supermind.

@Paul: You seem to suggest that we all take the same things to be reasons, perhaps even the same reasons. Is this warranted?

Things like the ordering of arguments are just additional questions about the rationality criteria

...which problem you can't hand off to the superintelligence until you've specified how it decides 'rationality criteria'. Bootstrapping is allowed, skyhooking isn't. Suppose that 98% of humans, under 98% of the extrapolated spread, would both choose a certain ordering of arguments, and also claim that this is the uniquely correct ordering. Is this sufficient to just go ahead and label that ordering the rational one? If you refuse to answer that question yourself, what is the procedure that answers it?

Poke has it exactly right. Thinking further along the lines suggested by his "social lubricant" idea, I'd suggest that fairness is no more than efficiency. Or, at the very least, if two prevailing doctrines of fairness exist, the more efficient doctrine will, ceteris paribus, in the long run prevail.

This leaves open the question of how closely our notions of fairness have actually evolved toward efficiency, but that's an empirical question.

This question, of what is fairness / morality, seems a lot easier (to me) than the posters here appear to feel.

Isn't the answer: You start with purely selfish desires. These sometimes cause conflict over limited resources. Then you take Rawls's Veil of Ignorance, and come up with social rules (like "don't murder") that result in a net positive outcome for society. It's not a zero-sum game. Cooperation can result in greater returns for everybody than constant conflict.

Individuals breaking agreed morality are shunned, in much the same way as someone betraying in a Prisoner's Dilemma, or a herder allowing extra sheep onto a field, resulting in the Tragedy of the Commons.

Yes, any of us could break common morality -- that's easy. The whole point is, that if you didn't know which of the individuals you were going to be, then you wouldn't be so eager to propose some particularly non-moral solution.

Meanwhile, moral dilemmas that actually are zero sum, like two monkeys and a banana that can't be divided, don't have consensus solutions in society.

Finally, this formulation doesn't completely resolve all scenarios, because it matters a lot which group of people/things you consider in the class that "you" "might have been". In morality a few centuries ago, "you" were a white slaveowner, and it didn't occur to you that "you" "might have been" a black slave. So it was not immoral to own slaves, then. Just as, today, you might imagine yourself to be any citizen (of your country? of the world?), but not, say, a cow. So the conflicts become one of what is the population from which the Veil of Ignorance is drawn.

(Of course, all this imagination is beside the point that it is meaningless that "you" "might have been" someone else. But you can still do the computation even though the scenario is not physically plausible.)

But the basic structure seems pretty clear. It's not "right" for strong people to beat up weak people, because if you don't know whether you would have been born strong or weak, you'd much rather a society where nobody does it, than one where the strong dominate the weak. In other words, the gains from beating people up are far less than the losses from being beaten up.

(...we do what we must, because we can. For the good of all of us. Except the ones who are dead.)

Certain desires help the human race, other desires hurt the human race, and these can be discovered in the same way we discover any other facts about the universe.

You simply passed the recursive buck to "help" and "hurt". I will let you take for granted the superintelligence's knowledge of, or well-calibrated probability distribution over, any empirical truth about consequences; but when it comes to the valuation of those consequences in terms of "helping" or "hurting" you must tell me how to compute it, or run a computation that computes how to compute it.

To rephrase the statement without passing the recursive buck:

Certain desires help the human race fulfill desires, other desires prevent the human race from fulfilling desires, and these can be discovered in the same way we discover any other facts about the universe.

I see no reason to believe there is such a thing as an objective definition of "fair" in this case. The idea that an equal division is "fair" is based on the assumption that none of the three has a good argument as to why he should receive more than either of the others. If one has a reasonable argument as to why he should receive more, the fairness argument breaks down. In fact, none of the three really has a good argument as to why he is entitled to any of it, and I can't see why it would be wrong for the first one there to grab it and claim the whole pie under "right of capture".

what's the standard reply to someone who says, "Friendly to who?" or "So you get to decide what's Friendly"?

This is an important question. I don't believe there is such a thing as an objective definition of friendliness, and I doubt that "reasonable" people can come to an agreement as to what friendliness means. But I'm eager to be proven wrong, keep writing.

Why not divide the pie according to who will ultimately put the pie to the best use? If X and Y intend to take a nap after eating the pie, but Z is willing to plant a tree, wouldn't the best outcome for the pie favor Z getting more?

Before you dismiss the analogy, consider this - what if the pie was $1800.00 that none of the three had earned? What if the $1800.00 had been BORROWED with a certain expectation of its utility? Should X, Y, and Z each get $600.00, even though there is no stipulation as to what each of them must DO with that money? If X intends to save his portion, and Y intends to pay down debt, but Z will spend the money though it may not be in HIS best interests to do so, should he still only get an equal portion, even though his actions with his share best accomplish the purpose of the money?

If we return to pie, you may now see that pie represents potential action (as one of the earlier commenters who mentioned carbs noted). Instead of arguing for division based on merit for PAST actions/attributes (as mentioned by another commenter), why not argue for division based on merit of INTENDED actions? Who provides the best return on the invested carbs? Why assume that 'fair' division should reflect mere existence? Why can't 'fairness' include an evaluation of potential return?

This may simply deflect the argument of 'fairness' to one wherein 'best return' must be determined with regard to each individual and the group as a whole. If Y gets no shade from the tree Z plants, then perhaps her 'best return' might be a contented nap.

The ratio of productive and beneficial action, as a function of the input (pie), calculated across time (a tree has longer benefits than an immediate nap) seems to also be a 'fair' way to divide the pie.

Suppose that 98% of humans, under 98% of the extrapolated spread, would both choose a certain ordering of arguments, and also claim that this is the uniquely correct ordering. Is this sufficient to just go ahead and label that ordering the rational one? If you refuse to answer that question yourself, what is the procedure that answers it?

Again, this is why it's irreducibly social. If there isn't a procedure that yields a justified determinate answer to the rationality of that order, then the best we can do is take what is socially accepted at the time and in the society in which such a superintelligence is created. There's nowhere else to look.

Early in the story, Z is hungry, and X and Y are not. Z says that he thinks that because he is hungry, 'fair' is defined with him getting more pie, while X and Y disagree. This seems like a slightly strange story to me, but here's a much stranger one:

Z is hungry, and X and Y are not. X thinks that it would be fair to give Z 1/2 the pie, but Z and Y both think it would be fair to split the pie 1/3;1/3;1/3. In other words, the person who is arguing the fairness of the unequal distribution is not the person who would benefit from it. This feels much less likely to me than the above story. In fact, it informs the following, which I submit: when people have a disagreement about what is fair, each person's opinion usually favors a positive outcome for himself.

If I accept this intuition as true, I can see two reasons why it might be true: One, that in cases where each person thinks fairness means being more generous to another party, they can easily find a compromise, because people are always willing to accept gifts. Two, that 'fair' is really just another word for 'compromise,' in which competing entities agree to a division of a resource simply in order to resolve conflict without violence. The latter seems more likely to me, at least as an explanation for the origin of the idea of fairness in our minds and our vocabularies.

An objective definition of 'fair' seems like it would have to be identical to 'moral'; what is the 'moral' distribution of the pie?

Joe Mathes: I thought it was fairly obvious that a fair distribution is in this case synonymous with a moral distribution (was I wrong?). In this context, the word fair doesn't have any meaning if one tries to remove the concept of morality.

However I don't think that arguing for fairness when one is not the beneficiary is that unusual. The civil rights movement was supported by a lot of white people, and the women's liberation movement was supported by a lot of males. In both cases these people are losing an advantage they previously held in order to preserve fairness, which they viewed as more valuable than their advantages. I think they're right, of course, but people are rarely motivated by abstract arguments about society as a whole being better off. They are motivated by a love of fairness and a desire to promote fairness, which has been inculcated into them by their programmers. A desire strong enough to override their desire for advantages over the oppressed group.

They are motivated by a love of fairness and a desire to promote fairness, which has been inculcated into them by their programmers.

Unlikely. The basic principles of fairness are constant between human cultures and societies, and seem to be intuitively understood by humans. What changes is the status of categories of people - but humans agree on what behavior is fair towards an equal.

To deal with the question "what is moral", we need first to establish the purpose of "morality". How can you evaluate the effectiveness of a design unless you first understand what it is intended to do and not do?

Eneasz: you're ignoring "moral benefits". Let's say Joe is crossing a desert with enough food and water to live comfortably until he reaches his destination. Midway through, he comes across Bob, who is dying of thirst. If Joe gives Bob sufficient food and water to save his life, Joe can still make it across the desert, but not as comfortably. Giving Bob food and water represents a loss of benefits for Joe; withholding food and water represents a more significant loss, though. Most people would be wracked by guilt at leaving someone to die when they could have saved them; conversely, saving someone's life imparts an enormous feeling of goodwill and self-confidence. Surely the loss of a small amount of comfort is insignificant compared to the loss of moral respectability? Fairness in this scenario benefits both parties; Bob gets to live, and Joe gains an intangible but nevertheless real moral benefit.

Supporting the civil rights movement might have represented the loss of a certain kind of benefit for white people; say, the exercise of force over black people. However, opposing the movement would have represented a moral deficit. Not all benefits are material. In supporting the movement, white people gained moral benefits. They certainly have some advantage over white people who did not support the civil rights movement, do they not?

For every possible division of pie into three pieces (including pieces of 0 size), take each person and ask how fair they would think the division if they received each of the three slices. Average those together to get each person's overall fairness rating for a given pie distribution.

Average those per-person results into an "overall fairness" rating for each pie distribution.

This includes:
- You can have people involved who don't like pie and don't want any. It seems pointless to say that division into thirds is the only fair division, if one of the three people is equally happy with any division.
- There can be more than one fair division
- The inputs are not as simple as "I want half the pie": each person's fairness rating is a function of slice size, which distinguishes whether your happiness grows in direct proportion to the size of the slice, peaks at "half the pie" and stays flat for any value above that, or declines for any value above that.

Zaire says he wants half the pie and the other two want to divide it into thirds, but they may at the same time all have a linear link between happiness and amount of pie, leading to thirds being the fairest division.

- Situations such as the murderer and murderee in the alley: the murderer is happy to kill but unhappy to be killed, and the murderee is unhappy either way, leaning towards both of them walking away as the 'fairest' outcome.

The process may lead to a less than happiest outcome for a given situation, but when applied to many situations over your lifetime such that you may be in a different position in the situation each time, gives the most long-term happiness.

I've wandered into describing "happiness" rather than "fairness" in several places, and seem to be heading towards "fairness as position-independent happiness".

This seems to be similar to Don Geddis's answer, except where he says it is meaningless to talk about "what position you could have been in", I suggest that it's a process you can agree to apply to situations you will be in in the future to get a 'fairest' outcome, even though you don't yet know which position you will be in the next time pie-slicing turns up in your life. So, fair is the process that gets you the best outcome for all situations for the rest of your life, given that you don't yet know what the situations are or what position you will play in them.
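The position-averaged procedure described above can be made concrete. Here is a minimal Python sketch, assuming three hypothetical fairness curves (one per person) that map a slice fraction to a rating in [0, 1] — the particular shapes are illustrative assumptions, not anything stated in the thread:

```python
# Hypothetical per-person fairness curves: each maps the fraction of pie
# received to a rating in [0, 1]. The shapes are illustrative assumptions.
ratings = {
    "Xannon": lambda f: 1 - 3 * abs(f - 1/3),  # happiest with exactly a third
    "Yancy":  lambda f: 1 - 3 * abs(f - 1/3),
    "Zaire":  lambda f: min(2 * f, 1.0),       # happiness rises until half
}

def overall_fairness(division):
    """Rate a division by asking each person how fair it would feel to
    receive each of the slices, averaging per person, then across people."""
    per_person = [sum(rate(f) for f in division) / len(division)
                  for rate in ratings.values()]
    return sum(per_person) / len(per_person)

even = overall_fairness((1/3, 1/3, 1/3))   # ≈ 0.889
skew = overall_fairness((1/4, 1/4, 1/2))   # ≈ 0.667
```

Under these made-up curves the even split scores higher than Zaire's proposal, matching the intuition that averaging over the position you might occupy pushes toward equal division.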

A possible mathematical rule for fairness in this situation.
1. Select who gets to cut the pie into three pieces by a random process.
2. That individual can cut it into any size sections he chooses, as long as there are three sections.
3. The order of choice selection again is determined by a random process.
Result: on average, everyone receives a 1/3 share.
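A quick simulation shows why the random choosing order makes this average out. This is a hypothetical sketch: it assumes a fixed, maximally selfish cut and choosers who each take the largest remaining piece:

```python
import random

def divide(pieces, people=("X", "Y", "Z"), rng=random):
    """One round of the rule: the cut is fixed, the choosing order is
    random, and each chooser greedily takes the largest remaining piece."""
    remaining = sorted(pieces, reverse=True)
    order = list(people)
    rng.shuffle(order)
    return {person: remaining[i] for i, person in enumerate(order)}

rng = random.Random(0)  # seeded for reproducibility
trials = 100_000
totals = dict.fromkeys(("X", "Y", "Z"), 0.0)
for _ in range(trials):
    for person, share in divide((0.6, 0.3, 0.1), rng=rng).items():
        totals[person] += share
averages = {p: t / trials for p, t in totals.items()}  # each ≈ 1/3
```

Even with a lopsided cut like (0.6, 0.3, 0.1), each person's long-run average share is a third; what the rule does not guarantee is a third of the pie on any particular day.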

A variant on demiurge: A standard way of dividing something into two parts is to have one person divide and the other choose. Alice cuts the slice of cake in half, and Bob takes whichever piece he likes. If Alice is unhappy with her piece, she should have cut the two more evenly. You can apply the same rule to three people by adding an extra step: glide the knife along the edge to create an increasingly large piece, and any of the three can call a stop and take that piece (then divide the rest as for two people). (For a pie, you might make an initial cut at 0 degrees and proceed clockwise, expecting someone to call for the first piece around 120 degrees.) We expect it to lead to a roughly even distribution.

Is this sort of thing (one cuts, the other chooses) a procedure that would inform "fairness" more generally or just a solution to the problem at hand?
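The two-person base case is easy to make concrete. A sketch, where the valuation functions are hypothetical: each maps an interval (a, b) of a cake laid out on [0, 1] to that person's subjective value of it:

```python
def divide_and_choose(cutter_value, chooser_value, steps=10_000):
    """'One cuts, the other chooses': the cutter cuts where the two
    pieces look equal in the cutter's own measure; the chooser then
    takes whichever piece the chooser prefers."""
    # The cutter searches (on a grid, for simplicity) for the cut point
    # that best equalizes the two pieces by the cutter's own valuation.
    cut = min((i / steps for i in range(1, steps)),
              key=lambda x: abs(cutter_value(0, x) - cutter_value(x, 1)))
    left, right = (0, cut), (cut, 1)
    if chooser_value(*left) >= chooser_value(*right):
        return {"chooser": left, "cutter": right}
    return {"chooser": right, "cutter": left}

# Cutter values the cake uniformly; chooser only cares about [0.5, 1].
result = divide_and_choose(
    lambda a, b: b - a,
    lambda a, b: max(0.0, min(b, 1) - max(a, 0.5)),
)
# The cutter cuts at 0.5 and is content with either half; the chooser
# takes the right piece, which holds everything the chooser values.
```

Neither party has cause to complain: the cutter was indifferent between the pieces, and the chooser got the preferred one. The extra moving-knife step described above extends the same no-complaints property to three people.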

Yancy: "If someone wants to murder me, and the two of us are alone, then I am still in the right and they are still in the wrong, even if no one else is present."

So the trick here is to realize that fairness is defined with respect to an expected or typical observer -- when you try to murder me, and I scream "Foul play!", the propositional content of my cry is that I expect any human who happens to pass by to agree with me and to help stop the murder. If nobody passes by this time, well, that's just my bad luck, and I can go to my grave with the small comfort that whatever behavior of mine led to my being murdered by you was at least marginally more adaptive than a behavior that would lead our fellow tribespeople to think that you were justified in murdering me, because then I would have had no chance of survival, as opposed to having my survival depend upon having the good luck of being observed in a timely fashion.

On the other hand, if it were impossible for a disinterested party to pass by, because you and I were the only two intelligent beings in the known world, or because all known intelligent beings would have a political reason to pick one side or the other in our little tiff, then fairness would have no propositional content, and would be meaningless. That seems like a small bullet to bite -- it seems plausible to think that fairness norms really did evolve -- and that people continue to make a big deal about the concept -- because there were and often are disinterested third parties that observe two-party conflicts (or disinterested fourth parties who observe three-party conflicts, and so on). If there weren't any such thing as disinterested parties, it really wouldn't make any sense to talk about "fairness" as an arrangement that's distinct from "equal division".

My favorite answer to this problem comes from "How to Cut a Cake: And Other Mathematical Conundrums."
The solution in the book was that "fair" means "no one has cause to complain." It doesn't work in the case here, since one party wants to divide the pie unevenly, but if you were trying to make even cuts, it works. The algorithm was:
1. Make a cut from the center to the edge.
2. Have one person hold the knife over that cut.
3. Slowly rotate the knife (or the pie) at, say, a few degrees per second.
4. At any time, any person (including the one holding the knife) can say "cut." A cut is made there, and the speaker gets the thus-cut piece.

At the end, anyone who thinks they got too little (meaning, someone else got too much) could have said "cut" before that other person's cut got too big.
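This rotating-knife rule can be sketched as a simulation. Everything here is an illustrative assumption: the pie is discretized into small unit arcs, each person has a subjective valuation over arcs, and a person calls "cut" the moment the swept piece is worth a fair share of what remains to them (ties go to whoever is checked first):

```python
def rotating_knife(densities):
    """densities: person -> list of per-arc values giving that person's
    subjective valuation of each small arc of the pie (a hypothetical
    discretization of the book's continuous procedure)."""
    resolution = len(next(iter(densities.values())))
    players = list(densities)
    start = 0
    allocation = {}
    while len(players) > 1:
        swept = dict.fromkeys(players, 0)
        remaining = {p: sum(densities[p][start:]) for p in players}
        for pos in range(start, resolution):
            caller = None
            for p in players:
                swept[p] += densities[p][pos]
                # Call "cut" once the swept piece is worth a fair share
                # (1/number-of-remaining-players) of what's left to p.
                if caller is None and swept[p] * len(players) >= remaining[p]:
                    caller = p
            if caller is not None:
                allocation[caller] = (start, pos + 1)
                players.remove(caller)
                start = pos + 1
                break
    allocation[players[0]] = (start, resolution)  # last player takes the rest
    return allocation

# Three people who all value the pie uniformly each end up with a
# "120-degree" slice (here: 1200 of 3600 unit arcs).
alloc = rotating_knife({p: [1] * 3600 for p in "XYZ"})
```

The "no cause to complain" property falls out directly: anyone who stayed silent as a piece was claimed had, by their own valuation, judged that piece not yet worth a fair share of what was left.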

This was a pun by the way. I was playing on "fair" in the sense of retributive justice (Xannon and Yancy punishing Zaire for being antisocial) as opposed to distributive justice. Sorry if that wasn't clear.

On reflection, it is important that these senses are closely linked... societies probably can't get the distributive part of justice at all unless they are first firm on the retributive part. Close-knit, egalitarian communities do seem to get very nasty about members taking more than their fair share, and have no truck with long-winded, self-serving debate about what constitutes a fair share. (On a wider scale, it is also interesting how much noisier and greedier the super-Zaires of this world have become in recent years, ever since the threat of socialist revolution went the way of the dodo. A few decades back, the rich really did fear reds under the beds lynching them any time soon, with compromises like Keynesianism and Social Democracy among the results. Not so much these days.)

Lastly, I don't think I (or you) need to ignore Zaire's preferences any more than those of Xannon or Yancy. Each of them, presumably, has an individual utility function which increases with the proportion of pie that they personally get. The real difference is that Xannon and Yancy are at least attempting to construct a symmetric joint utility function (one which is invariant under permutations of the variables X, Y and Z), whereas Zaire is just trying it on.

When people get this embroiled in philosophy, I usually start eating pie.

However as I don't like blueberries, we will split the pie into thirds fairly as Yancy wants, then I will give 1/6th of my pie to Zaire so he has the half he wants, and I'll leave the other 1/6th where I found it since A PIE WE FOUND IN THE FOREST AND KNOW NOTHING ABOUT ISN'T NECESSARILY MINE TO STEAL FROM.

A great post. It captured a lot of intriguing questions I currently have about ethics. One question I have, which I am curious to see addressed in further posts in this sequence, is: Once we dissolve the question of "fairness" (or "morality" or any other such term) and taboo the term, is there a common referent that all parties are really discussing, or do the parties have fundamentally different and irreconcilable ideas of what fairness (or morality, etc.) is? Is Xannon's "fairness" merely a homonym for Yancy's "fairness" rather than something they could figure out and agree on?

If the different views of "fairness" are irreconcilable, then I am inclined to wonder if moral notions really do generally function (without this intention, oftentimes) as a means for each party to bamboozle the other into giving the speaker what she wants, by appealing to a multifaceted "moral" concept that creates the mere illusion of common ground (similar to how "sound" functions in the question of the tree falling). Perhaps Xannon wants agreement, Yancy wants equal division, and there is no common ground between them except for a shared delusion that there is common ground. (I certainly hope this isn't true.)

More generally, what about different ethical systems? Although we can easily rule out non-naturalist systems, if two different moral reductionist systems clash (yet neither contradicts known facts), which one is "best"? How can we answer this question without defining the word "best", and what if the two systems disagree on the definition? It would seem to result in an infinite recursion of criteria disagreements--even between two systems that agree on all the facts. (As I understand it, Luke's discussion on pluralistic moral reductionism is relevant to this, but I have not yet read it and am very distressed that he is apparently never going to finish it.)

I tentatively stand by my own theory of moral reductionism (similar to Fyfe's desirism, with traces of hedonistic utilitarianism and Carrier's goal theory) but it concerns me that different people might be using moral concepts in irreconcilably different ways, and some of those that contradict mine might be equally "legitimate" to mine... After reading the Human's Guide to Words sequence, I am more hesitant to use any kind of appeal to common usage, which is what I'd previously done. My views and arguments may continue to change as I read further, and I try always to be grateful to read things that do this to me.

Interesting. As far as I can tell, the moral is that most definitions in an argument are supplied such that the arguer gets their way, instead of being a solid fact that can be followed in a logical sequence in order to deduce the correct course of action.

But I think using the rationalists' Taboo would benefit the three, as the word "fair" is defined differently by each of them: Xannon defines fairness as a compromise between the involved parties. Yancy defines fairness as an objective equality wherein everyone receives the same treatment. Zaire either defines fairness as accounting for the needs of each of the involved parties, or as whatever gets him half the pie. Define "fairness" first before agreeing to divide the pie "fairly", or shut up and compromise.

In this situation, I think it would just be easier to split the pie while the other two are arguing and ask, "Do you guys want to eat the pie or continue arguing?" while holding a piece. Arguing about the definition of fairness while starving in a forest in the middle of who-knows-where is generally not a good idea. An argument that leads nowhere is a waste of time, and not useful.

There's another compromise position. Namely, two can form a coalition against the third and treat the problem as dividing a pie between two individuals with different claims. For example, Xannon and Yancy have a combined claim of 2/3 to Zaire's 1/2. Proportional division according to those terms would give Zaire 3/7 to the duo's 4/7, which they can then split in half to get the distribution {2/7, 2/7, 3/7}. As it turns out, you get this same division no matter how the coalitions form. This sort of principle dates back to the Talmud.

Of course, this only works if Xannon agrees that Zaire can justly claim half the pie. If not, Xannon and Yancy could compel Zaire to accept one-third of the pie.
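The coalition arithmetic above is easy to check mechanically. A small sketch, assuming "proportional division" simply means splitting the pie in proportion to claim sizes (the `proportional` helper and the exact-fraction bookkeeping are mine, not the commenter's):

```python
from fractions import Fraction as F

def proportional(claims):
    # split the whole pie in proportion to the size of each claim
    total = sum(claims.values())
    return {name: c / total for name, c in claims.items()}

claims = {"Xannon": F(1, 3), "Yancy": F(1, 3), "Zaire": F(1, 2)}

# Route 1: divide among all three directly.
direct = proportional(claims)           # {2/7, 2/7, 3/7}

# Route 2: Xannon and Yancy pool their claims (2/3) against Zaire's (1/2),
# then the duo halves its share, since their individual claims are equal.
stage1 = proportional({"duo": F(2, 3), "Zaire": F(1, 2)})
coalition = {"Xannon": stage1["duo"] / 2,
             "Yancy": stage1["duo"] / 2,
             "Zaire": stage1["Zaire"]}
```

Both routes give {2/7, 2/7, 3/7}: the invariance claim holds here because proportional division composes, i.e. pooling claims and then sub-dividing proportionally gives the same answer as dividing proportionally in one step.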

Rational actors would each claim the entire pie for themselves, and then any fair division scheme would result in each getting a third.