Do you believe in luck? If not, and you live near DC, come to an OB meetup at my place southwest of Washington DC on Friday March 13th, at 7PM! If you reply to this post and say you want to come, AND provide your real e-mail in the "email" line when you post, I'll email back with details. (The email address provided by each commenter gets sent to the author of the original post.)

Nonetheless, I would like to present some of my motivations on Newcomb’s Problem – the reasons I felt impelled to seek a new theory – because they illustrate my source-attitudes toward rationality. Even if I can’t present the theory that these motivations motivate…

First, foremost, fundamentally, above all else:

Rational agents should WIN.

As I just commented on another thread, this is faith in rationality, which is an oxymoron.

It isn’t obvious whether there is a rational winning approach to Newcomb’s problem. But here’s a similar, simpler problem that billions of people have believed was real, which I’ll call Augustine’s Paradox (“Lord, make me chaste – but not yet!”).

Most kinds of Christianity teach that your eternal fate depends on your state in the last moment of your life. If you live a nearly-flawless Christian life, but you sin ten minutes before dying and you don’t repent (Protestantism), or you commit a mortal sin and the priest has already left (Catholicism), you go to Hell. If you’re sinful all your life but repent in your final minute, you go to Heaven.

The optimal self-interested strategy is to act selfishly all your life, and then repent at the final moment. But if you repent as part of a plan, it won’t work; you’ll go to Hell anyway. The optimal strategy is to be selfish all your life, without intending to repent, and then repent in your final moments and truly mean it.

I don’t think there’s any rational winning strategy here. Yet the purely emotional strategy of fear plus an irrationally large devaluation of the future wins.

On NPR some days ago, I heard a speaker say that there were a lot of reasons for closing the US prison at Guantanamo, but that "the most important of them is that it’s the right thing to do." He said it twice.

(I was already amused by the idea of closing Guantanamo because people there carried out policies decided on in Washington DC. The logic could be that guilt adhered to the place itself; or that guilt could be made to adhere to it and then be done away with, as with a scapegoat. If it worked with Jesus, why not with Guantanamo?)

But the idea that "being the right thing to do" is a reason rather than a conclusion is more intriguing. Is this just circular logic? I don’t think so.

Take away the faith. (This is NPR, after all.) What happens? After thousands of years of being attached to faith, does morality attach itself, in the minds of the public, to reason? Or does it just become detached? I think that the latter model explains the speaker’s statements: "The right thing" is something you just know – a support, not a conclusion.

Say your first car got 10 mpg, and you replaced it with a 20 mpg car. Now you’re ready to get another car. How many mpg will your new car need to get, to be as much of an improvement over your last car (gas-wise), as that car was over your first car?

A recent Science article, summarized here, reports on this as an instance of a simple yet subtle bias: When given information, people assume that the effects relevant to them scale linearly with the measurement scale used. In this instance, it’s miles per gallon.

If this were wartime, and you were rationed 10 gallons per week, the measurement of interest to you in evaluating a car’s mileage might be the number of different places you could visit once a week with that car. Then the relevant statistic would be (miles/gallon)². But since we aren’t rationing gas, a better measurement is gallons per mile, which can be translated into dollars and environmental impact per mile.
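To make the scaling concrete, here is a small Python sketch. The 10-gallon ration is the figure from the paragraph above; the function names are mine, chosen for illustration. Under a fixed ration, the area you can reach grows as the square of mpg, while cost and emissions per mile track gallons per mile, which is just 1/mpg.

```python
RATION_GAL_PER_WEEK = 10  # the hypothetical wartime ration from the text

def gallons_per_mile(mpg):
    """Fuel burned per mile; scales linearly with dollars and emissions."""
    return 1.0 / mpg

def reachable_area(mpg, ration=RATION_GAL_PER_WEEK):
    """Area reachable on one week's ration (up to a constant factor):
    range = mpg * ration miles, and area grows as range**2,
    i.e. as (miles/gallon)**2."""
    return (mpg * ration) ** 2

for mpg in (10, 20, 40):
    print(mpg, gallons_per_mile(mpg), reachable_area(mpg))
```

Doubling mpg halves gallons per mile but quadruples the reachable area, which is why the "right" unit depends on what you actually care about.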

When people are given figures in miles per gallon, they usually think that the answer to the above question is 30 mpg. "Sixty percent of participants ordered the pairs according to linear improvement and 1% according to actual improvement. A third strategy, proportional improvement, was used by 10% of participants." (The proportional strategy says that the answer is 40 mpg.)

People get the right answer when you rephrase the question in units that scale linearly with the effect. Try this: Your first car could go 100 miles on 10 gallons of gas. Your second car could go 100 miles on 5 gallons of gas. Your third car needs to go 100 miles on… 0 gallons of gas. So it needs to get infinite mpg, to match the improvement in going from 10 to 20 mpg.
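The same rephrasing can be done in code: measure each car in gallons per 100 miles, a unit that scales linearly with fuel used, and the third car's required figure falls out immediately. This is just the paragraph's arithmetic restated; the function name is my own.

```python
def gal_per_100mi(mpg):
    """Gallons burned per 100 miles; scales linearly with fuel used."""
    return 100.0 / mpg

first, second = 10, 20
saving = gal_per_100mi(first) - gal_per_100mi(second)   # 10 - 5 = 5 gal per 100 mi
# To improve on the second car by the same amount, the third car must burn:
target = gal_per_100mi(second) - saving                 # 5 - 5 = 0 gal per 100 mi
# Zero gallons per 100 miles means infinite mpg.

# The popular "linear" answer, 30 mpg, saves far less than 5 gal per 100 mi:
naive_saving = gal_per_100mi(second) - gal_per_100mi(30)
print(target, round(naive_saving, 2))  # 0.0 1.67
```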

(Incidentally, there probably is no viable distinction between cognitive structure and content.)

This statement is true, in that there is probably no distinction I can write that Jed can’t come up with a counter-example to, just as I can’t write a definition of "game" that Wittgenstein couldn’t come up with a counter-example to.

But the statement was used to imply that distinguishing AI architectures by reliance on content vs. learning is nonsensical. If that were so, knowledgeable people would be confused when Eliezer (or Lenat) says Cyc emphasizes content more than other architectures do. They aren’t.

Some more-popular false false dichotomies:

Nature vs. nurture (e.g., genetic or instinctual vs. learned behavior): We’re told that there’s no true distinction between them, since "nurture" can only occur when expected by "nature". I like Paul Bloom’s reply (paraphrasing), "There’s something wrong with a theory of mind that says that a knee reflex and word learning are the same sort of thing."

Race: We’re told that race is a "social construct" because, for any particular genetic criteria you set to determine who is in a race, someone can be found who looks to us like they belong to that race, yet doesn’t satisfy your criteria.

Gender: There are people naturally having characteristics of both sexes; people whose phenotypic gender is different from their genotypic gender; and people who’ve had sex-change operations. Therefore, there is no gender.

You probably knew where I was going with this when you saw the Wittgenstein reference. Every word in our languages breaks down when you apply enough pressure to it. A word encodes a statistical regularity. Applicability in all cases is not required. Forbid us from using words that aren’t precise, and we’d be unable to talk at all.

According to a recent study, on the day of a US presidential election there are, on average, an extra 24 auto-accident fatalities. The study covered the past 32 years, not including this year.

The number of times that a single vote has affected the outcome of a US presidential election is, so far, zero.

In order for voting to be rational, the expected benefit to you from your vote affecting the outcome must be greater than the expected cost to you of dying in an auto accident on your way to vote.

The traffic accident study covers only 32 years; but we have over 200 years of data on individual votes not swinging an election. Over time, it has become much less likely for one person’s vote to swing an election due to population increase. I will approximate this effect by saying that 210 years of one vote not swinging an election is similar to 1000 years of one vote not swinging an election at current population levels. That’s a sloppy off-the-cuff guess at how the population changes affect the probabilities.

So, the odds of your dying in a traffic accident on your way to vote would at first seem to be 24 * (1000/4) = 6,000 times the odds of your vote changing the outcome of the election: 24 extra deaths per election day, times the roughly 250 election days in those 1,000 equivalent years. (Probably much higher. Those are the odds they would be if one person’s vote had swung an election once.) The odds of your being disabled in a traffic accident on your way to vote would, similarly, seem to be 800 * (1000/4) = 200,000 times higher than the odds of your vote swinging the election.
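The multiplications above, spelled out in Python. All figures are from the post itself (24 deaths and 800 disablings per election day, and the author's admittedly off-the-cuff 1,000-year population-adjusted equivalence); the variable names are mine.

```python
ELECTION_FATALITIES = 24    # extra auto-accident deaths per election day (the study)
ELECTION_DISABLINGS = 800   # extra disabling injuries per election day (post's figure)
EQUIVALENT_YEARS = 1000     # the post's population-adjusted span of zero swung elections
YEARS_PER_ELECTION = 4

elections = EQUIVALENT_YEARS / YEARS_PER_ELECTION  # 250 election days

# If one vote had swung exactly one election in that span, the odds ratios would be:
death_ratio = ELECTION_FATALITIES * elections      # 6,000
disabling_ratio = ELECTION_DISABLINGS * elections  # 200,000
print(death_ratio, disabling_ratio)
```

Since in fact zero votes have swung an election, these ratios are lower bounds on how lopsided the comparison is.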