1) If I don’t have free will, then I can’t choose what to believe.
2) If I can choose what to believe, then I have free will. [from 1]
3) If I have free will, then I ought to believe it.
4) If I can choose what to believe, then I ought to believe that I have free will. [from 2,3]
5) I ought, if I can, to choose to believe that I have free will. [restatement of 4]

He remarks in the comments:

I’m taking it as analytic (true by definition) that choice requires free will. If we’re not free, then we can’t choose, can we? We might “reach a conclusion”, much like a computer program does, but we couldn’t choose it.

I understand the word “choice” somewhat differently: I would say that we are obviously choosing, in the ordinary sense of the term, if we consider two options which are possible to us as far as we know, and then make up our minds to do one of them, even if it turned out in some metaphysical sense that we were guaranteed in advance to do that one. In other words, Chappell is discussing determinism vs. libertarian free will, apparently ruling out compatibilist free will on linguistic grounds. I do not merely disagree in the sense that I use language differently, but in the sense that I do not agree that his usage corresponds to normal English usage. [N.B. I misunderstood Richard here. He explains in the comments.] Since people can easily be led astray by such linguistic confusions, given the relationships between thought and language, I prefer to reformulate the argument:

1) If I don’t have libertarian free will, then I can’t make an ultimate difference in what I believe that was not determined by some initial conditions.

2) If I can make an ultimate difference in what I believe that was not determined by some initial conditions, then I have libertarian free will. [from 1]

3) If I have libertarian free will, then it is good to believe that I have it.

4) If I can make an ultimate difference in my beliefs undetermined by initial conditions, then it is good to believe that I have libertarian free will. [from 2, 3]

5) It is good, if I can, to make a difference in my beliefs undetermined by initial conditions, such that I believe that I have libertarian free will.

We would have to add that the means that can make such a difference, if any means can, would be choosing to believe that I have libertarian free will.

I have reformulated (3) to speak of what is good, rather than of what one ought to believe, for two reasons: first, to avoid confusion about the meaning of “ought”; and second, because the resolution of the argument lies here.

The argument is in fact a good argument as far as it goes. It does give a practical reason to hold the voluntary belief that one has libertarian free will. The problem is that it does not establish that it is better overall to hold this belief, because various factors can contribute to whether an action or belief is a good thing.

We can see this with the following thought experiment:

Either people have libertarian free will or they do not. This is unknown. But God has decreed that people who believe that they have libertarian free will go to hell for eternity, while people who believe that they do not will go to heaven for eternity.

This is basically like the story of the Alien Implant. Having libertarian free will is like the situation where the black box is predicting your choice, and not having it is like the case where the box is causing your choice. The better thing here is to believe that you do not have libertarian free will, and this is true despite whatever theoretical sense you might have that you are “not responsible” for this belief if it is true, just as it is better not to smoke even if you think that your choice is being caused.

But note that if a person believes that he has libertarian free will, and it turns out to be true, he gets some benefit from this, namely the truth. But the evil of going to hell presumably outweighs this benefit. And this reveals the fundamental problem with the argument: we need to weigh the consequences overall. We made the consequences heaven and hell for dramatic effect, but even in the original situation, believing that you have libertarian free will when you do not has an evil effect, namely believing something false, and potentially many evil effects, namely whatever else follows from this falsehood. This means that in order to determine what is better to believe here, it is necessary to consider the consequences of being mistaken, just as it is in general when one formulates beliefs.
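The weighing that this requires can be made concrete with a toy expected-value calculation. All of the numbers below (the prior, the utilities) are illustrative assumptions of mine, not part of the thought experiment; the only structural fact taken from it is God’s decree that belief in libertarian free will leads to hell and disbelief leads to heaven, regardless of which belief is true:

```python
# Toy expected-value sketch of the heaven/hell thought experiment.
# All numeric values are illustrative assumptions.

P_FREE_WILL = 0.5   # the truth is unknown, so split the odds
U_TRUTH = 1         # the modest good of believing something true
U_HEAVEN = 1000     # eternal reward
U_HELL = -1000      # eternal punishment

# Per the decree, the belief itself fixes the eternal outcome,
# while the truth-benefit depends on whether the belief is correct.
value_believe = P_FREE_WILL * U_TRUTH + U_HELL
value_disbelieve = (1 - P_FREE_WILL) * U_TRUTH + U_HEAVEN

print(value_believe)     # -999.5
print(value_disbelieve)  # 1000.5
```

However the prior is set, the truth-benefit is swamped by the eternal consequences, which is the point: the goodness of a belief depends on all of its consequences, not only on the one the argument highlights.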

In an alternate universe, on an alternate earth, all smokers, and only smokers, get brain cancer. Everyone enjoys smoking, but many resist the temptation to smoke, in order to avoid getting cancer. For a long time, however, there was no known cause of the link between smoking and cancer.

Twenty years ago, autopsies revealed tiny black boxes implanted in the brains of dead persons, connected to their brains by means of intricate wiring. The source and function of the boxes and of the wiring, however, remains unknown. There is a dial on the outside of the boxes, pointing to one of two positions.

Scientists now know that these black boxes are universal: every human being has one. And in those humans who smoke and get cancer, in every case, the dial turns out to be pointing to the first position. Likewise, in those humans who do not smoke or get cancer, in every case, the dial turns out to be pointing to the second position.

It turns out that when the dial points to the first position, the black box releases dangerous chemicals into the brain which cause brain cancer.

Scientists first formed the reasonable hypothesis that smoking causes the dial to be set to the first position. Ten years ago, however, this hypothesis was definitively disproved. It is now known with certainty that the box is present, and the dial pointing to its position, well before a person ever makes a decision about smoking. Attempts to read the state of the dial during a person’s lifetime, however, result most unfortunately in an explosion of the equipment involved, and the gruesome death of the person.

Some believe that the black box must be reading information from the brain, and predicting a person’s choice. “This is Newcomb’s Problem,” they say. These persons choose not to smoke, and they do not get cancer. Their dials turn out to be set to the second position.

Others believe that such a prediction ability is unlikely. The black box is writing information into the brain, they believe, and causing a person’s choice. “This is literally the Smoking Lesion,” they say. Accepting Andy Egan’s conclusion that one should smoke in such cases, these persons choose to smoke, and they die of cancer. Their dials turn out to be set to the first position.

Still others, more perceptive, note that the argument about prediction or causality is utterly irrelevant for all practical purposes. “The ritual of cognition is irrelevant,” they say. “What matters is winning.” Like the first group, these choose not to smoke, and they do not get cancer. Their dials, naturally, turn out to be set to the second position.

Susan is debating whether or not to smoke. She knows that smoking is strongly correlated with lung cancer, but only because there is a common cause – a condition that tends to cause both smoking and cancer. Once we fix the presence or absence of this condition, there is no additional correlation between smoking and cancer. Susan prefers smoking without cancer to not smoking without cancer, and prefers smoking with cancer to not smoking with cancer. Should Susan smoke? It seems clear that she should. (Set aside your theoretical commitments and put yourself in Susan’s situation. Would you smoke? Would you take yourself to be irrational for doing so?)

Causal decision theory distinguishes itself from evidential decision theory by delivering the right result for the Smoking Lesion, where its competitor does not. The difference between the two theories is in how they compute the relative value of actions. Roughly: evidential decision theory says to do the thing you’d be happiest to learn that you’d done, and causal decision theory tells you to do the thing most likely to bring about good results. Evidential decision theory tells Susan not to smoke, roughly because it treats the fact that her smoking is evidence that she has the lesion, and therefore is evidence that she is likely to get cancer, as a reason not to smoke. Causal decision theory tells her to smoke, roughly because it does not treat this sort of common-cause based evidential connection between an action and a bad outcome as a reason not to perform the action.

Egan’s argument is basically that either she has the lesion or she does not, and she can make no difference to this, and so apparently she can make no difference to whether or not she gets cancer. So if she feels like smoking, she should smoke. If she gets cancer, she was going to get it anyway.

Answering Egan’s question: if there were a strong correlation like this in reality, I would think that smoking was a bad idea, and I would choose not to do it.

We can change the problem somewhat, without making any essential differences, such that every reasonable person would agree.

Suppose that every person is infallibly predestined to heaven or to hell. This predestination has a 100% correlation with actually going there, and it has effects in the physical world: in some unknown place, there is a physical book with a list of the names of those who are predestined to heaven, and those who are predestined to hell.

But it has nothing to do with the life you live on earth. Instead, when you die, you find yourself in a room with two doors. One is a green door with a label, “Heaven.” The other is a red door with a label, “Hell.” The doors do not actually lead to those places but to the same place, so they have no special causal effect. You only end up in your final destination later. Predestination to heaven, of course, causes you to choose the green door, while predestination to hell causes you to choose the red door.

You find yourself in this situation. You like red a bit more than green, and so you prefer going through red doors rather than green ones, other things being equal. Do you go through the green door or the red door?

It is clear enough that this situation is equivalent in all essential respects to Egan’s thought experiment. We can rephrase his version:

“Susan is debating whether or not to go through the red door. She knows that going through the red door is perfectly correlated with going to hell, but only because there is a common cause – a condition that tends to cause both going through the red door and going to hell. Once we fix the presence or absence of this condition, there is no additional correlation between going through the red door and going to hell. Susan prefers going through the red door without going to hell to not going through the red door without going to hell, and prefers going through the red door with going to hell to not going through the red door with going to hell. Should Susan go through the red door? It seems clear that she should. (Set aside your theoretical commitments and put yourself in Susan’s situation. Would you go through the red door? Would you take yourself to be irrational for doing so?)”

It should be clear that Egan is wrong. Don’t go through the red door, and don’t smoke.