1) If I don’t have free will, then I can’t choose what to believe.
2) If I can choose what to believe, then I have free will. [from 1]
3) If I have free will, then I ought to believe it.
4) If I can choose what to believe, then I ought to believe that I have free will. [from 2,3]
5) I ought, if I can, to choose to believe that I have free will. [restatement of 4]

He remarks in the comments:

I’m taking it as analytic (true by definition) that choice requires free will. If we’re not free, then we can’t choose, can we? We might “reach a conclusion”, much like a computer program does, but we couldn’t choose it.

I understand the word “choice” a bit differently, in that I would say that we are obviously choosing, in the ordinary sense of the term, if we consider two options which are possible to us as far as we know, and then make up our minds to do one of them, even if it turned out in some metaphysical sense that we were already guaranteed in advance to do that one. Or in other words, Chappell is discussing determinism vs. libertarian free will, apparently ruling out compatibilist free will on linguistic grounds. I don’t merely disagree in the sense that I use language differently, but in the sense that I don’t agree that his usage corresponds to normal English usage. [N.B. I misunderstood Richard here. He explains in the comments.] Since people can easily be led astray by such linguistic confusions, given the relationships between thought and language, I prefer to reformulate the argument:

If I don’t have libertarian free will, then I can’t make an ultimate difference in what I believe that was not determined by some initial conditions.

If I can make an ultimate difference in what I believe that was not determined by some initial conditions, then I have libertarian free will [from 1].

If I have libertarian free will, then it is good to believe that I have it.

If I can make an ultimate difference in my beliefs undetermined by initial conditions, then it is good to believe that I have libertarian free will. [from 2, 3]

It is good, if I can, to make a difference in my beliefs undetermined by initial conditions, such that I believe that I have libertarian free will.

We would have to add that the means that can make such a difference, if any means can, would be choosing to believe that I have libertarian free will.

I have reformulated (3) to speak of what is good, rather than of what one ought to believe, for two reasons: first, to avoid confusion about the meaning of “ought”; second, because the resolution of the argument lies here.

The argument is in fact a good argument as far as it goes. It does give a practical reason to hold the voluntary belief that one has libertarian free will. The problem is that it does not establish that it is better overall to hold this belief, because various factors can contribute to whether an action or belief is a good thing.

We can see this with the following thought experiment:

Either people have libertarian free will or they do not. This is unknown. But God has decreed that people who believe that they have libertarian free will go to hell for eternity, while people who believe that they lack it will go to heaven for eternity.

This is basically like the story of the Alien Implant. Having libertarian free will is like the situation where the black box is predicting your choice, and not having it is like the case where the box is causing your choice. The better thing here is to believe that you do not have libertarian free will, and this is true despite whatever theoretical sense you might have that you are “not responsible” for this belief if it is true, just as it is better not to smoke even if you think that your choice is being caused.

But note that if a person believes that he has libertarian free will, and it turns out to be true, he has some benefit from this, namely the truth. But the evil of going to hell presumably outweighs this benefit. And this reveals the fundamental problem with the argument, namely that we need to weigh the consequences overall. We made the consequences heaven and hell for dramatic effect, but even in the original situation, believing that you have libertarian free will when you do not has an evil effect, namely believing something false, and potentially many evil effects, namely whatever else follows from this falsehood. This means that in order to determine what is better to believe here, it is necessary to consider the consequences of being mistaken, just as it is in general when one formulates beliefs.

There is a “no free will” exception to moral and legal responsibility. When the Boston Marathon bombers were escaping Boston, they hijacked a car and forced the driver at gunpoint to assist them in their escape. The driver was not charged with “aiding and abetting” the criminals because he was forced to act against his will.

If determinism is used as an excuse to claim that nobody ever has any free will, then the special exception gets applied to all cases, and nobody is responsible for anything they do. So it is not unexpected that a person who believes he will not be held responsible would behave worse than someone who would be held responsible for their actions.

When free will is correctly defined as a decision that one makes for oneself, free of coercion or undue influence, a definition that nearly everyone recognizes in cases like the hijacked driver, then free will clearly does exist.

It is only when free will is given the added requirement, of being “free from reliable causation”, that it conflicts with determinism.

And likewise, it is only when determinism is viewed incorrectly as a constraint forcing us to act against our will, that determinism conflicts with free will.

(a) Without reliable cause and effect, we cannot reliably cause any effect, and thus would have no freedom to do anything at all.

(b) The direct prior cause of our choice is a mental process called “choosing”, where multiple options are reduced to a single choice.

(c) Choosing is an empirical event that occurs within our own brains, thus it is an unquestionable product of ourselves.

(d) The fact that choosing involves unconscious as well as conscious processes does not change the fact that all of these processes are “us” in the act of “choosing”.

(e) What we inevitably do is exactly identical to us just being us, doing what we do, and choosing what we choose. This is not a meaningful constraint. However, coercion and undue influence are meaningful constraints.

(f) The idea that determinism is itself a force of nature is false. Only actual objects and forces can cause things to happen.

(g) We are physical objects, living organisms, and an intelligent species. When we choose to do something, we are also forces of nature.

I’m actually a compatibilist myself, so certainly didn’t mean for my argument to be interpreted as ruling out this position! I don’t think the argument works for specifically libertarian free will, because it’s an open possibility that the relevant sense of “choice” is compatibilist in nature, and that compatibilism is true, and hence that we ought to believe precisely this (and not libertarianism).

Thanks for the correction. I edited the post to indicate that I had misunderstood you.

Still, my understanding of compatibilism implies that even when you make a choice, that particular choice was physically necessary, and the opposite choice physically impossible. Do you understand this differently? If so, what is the difference from libertarian free will?

If you agree with that, I am curious how you would respond to the argument for libertarian free will by comparison with the smoking lesion:

If libertarianism is false, then everyone who believes it, believes it with physical necessity. Consequently choosing to believe in libertarianism has an upside, namely the possibility of being right, but no downside, because no one will fall into the situation of falsely believing that libertarianism is true, except people for whom it was physically impossible to avoid that situation.

That seems the same as saying that smoking has an upside (namely the benefit of smoking) but no downside because no one will get cancer except people who were definitely going to get it on account of the lesion.

If you answer that the possible falsehood of the belief should be considered a downside because you had the compatibilist option of choosing not to believe in libertarianism, then getting cancer should be considered a downside because you had the compatibilist option of not smoking, which would have ruled out cancer (in the 100% correlation case).

There are certain words which have meaning only in the context of our imagination. A “possibility”, for example, only exists in the mind. And it operates within the context of mentally making a choice. Each choosing process begins with more than one possibility. Once we’ve chosen that possibility it becomes our choice. Once we actualize that possibility it is no longer referred to by the word “possibility”, at least not until the next time we consider making a new choice.

There is only one inevitability. But this fact has nothing to do with any possibilities. For example, it does not make any possibility an impossibility. The fact that one of the possibilities should turn out to be the inevitable choice does not change the nature of the other possibilities. They never were impossibilities, nor would they become impossibilities, because one of them may be selected the next time we make this choice.

The mental process of choosing always begins with imagining two or more possibilities. There is an evaluation, perhaps a mental scenario of how we expect things to turn out if we choose this rather than that. And then based upon the relative value assigned, we choose the one that seems best.

This mental process plays out deterministically according to how we think and feel about each of our possibilities.

The fact that multiple options were considered and reduced to a single choice is empirical evidence that the mental event of choosing took place. And if we trace the prior causes of our choice, we will find that the most relevant and meaningful prior cause was the mental process that just took place in our heads.

This matter of context is especially important for understanding “I could have done otherwise”. The context of “could have” is again a mental event taking place in our imagination. We are reviewing a past act or decision (usually when things didn’t work out as we expected) to play out some of our other possibilities in our minds, to see what we might do differently next time. It is how we learn from our mistakes.

Again, the fact of the single inevitability has no place in this context of going over scenarios in our minds. It has no impact upon the fact that “we could have made a different choice”, because all that really means is that we had more than one possibility at the time.

So I go into this restaurant with a hard determinist waiter.
I ask him, “What possibilities do I have for my main course tonight?”
He says, “You only have one real possibility”.
I say, “Okay, what is that?”
He says, “I don’t know until you tell me your choice.”

A little late here, but nice article! I feel I have to rant a bit because the free will question is one of those I’m surprised is still considered such a big deal. The whole question seems an obvious result of using a conceptual model that makes sense on one level but fails when you zoom in too much. We see the self as a singular unit without internal structure, separate from the physical universe. We also kind of understand that everything needs to be determined by something (or just be meaninglessly random), and that for us to be free our actions must be determined by us and nothing else (like the physical universe).

This makes us think there is a conflict between free will and determinism or universal causation. But there is no “us” existing outside causal processes, and even a nonphysical mind would work through some combination of deterministic mechanism and randomness if we were to view it as having an internal structure and process, which it must have (otherwise it’s all meaningless randomness).

Instead, a self is a nexus in the causal network, and actions are “free” to the extent that the causal chain leading up to them passes through (and not past) the self. In short, there is no reason physical causation and free will (as in “our actions are determined by our selves”) are in conflict if we see the self as a subnetwork of the total network of physical causation. The self is singular when viewed from afar; if we keep thinking it’s singular when we zoom in on it, there are going to be weird, paradoxical effects. Otherwise the problem just dissolves. I hope this makes sense.

What you are saying is clear enough, and mostly right. And as you say, even the actions of a non-physical mind would have to be analyzed in terms of either deterministic actions or statistical ones. The one exception would be a mind with an infinite knowing power: such a mind could plan actions that did not fit into any statistical analysis. However, there is no proof such a mind is possible, and human minds are certainly not such minds.

As soon as you say “such a mind could plan actions” you imply that this mind has some purpose or goal for which these actions, but not those actions, would be chosen. Purpose and reason are causes. Therefore, the choice would be deterministic, and causally inevitable.

What would be necessary there is choosing randomly between two possibilities. There would be a clear purpose for that: namely, if he succeeded in choosing randomly, he would accomplish the purpose of proving you wrong. This purpose would not be accomplished, on the other hand, by choosing one of them deterministically.

Whether or not you agree this is possible, it proves definitively that your argument is wrong: your argument is that if you have a purpose, the best way to accomplish that purpose must be deterministic. This is false.

Random is not indeterministic. We might routinely flip a coin if two choices are roughly equal in value. But the location of the thumb under the coin, the force of the flipping, the air resistance, the distance of the fall, the characteristics of the surface it hits, etc. would physically determine whether it comes up heads or tails. It would be theoretically possible to build a machine that would guarantee the coin would land heads up every time. An expert at knife throwing, for example, must control the rotations of the knife to assure the point rather than the hilt hits the target.

But flipping the coin in any less controlled fashion would give us an unknowable result, which is what we want when choosing which team kicks off.
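The point that a flipped coin is deterministic but practically unpredictable can be sketched with a toy model. The “physics” below is drastically simplified and purely illustrative; the function and its parameters are my own invention, not anything from the discussion above:

```python
# Toy model, not real physics: the outcome is fixed entirely by the
# initial conditions of the flip.
def flip(spin_rate: float, flight_time: float) -> str:
    """An even number of half-turns lands on the starting face (heads);
    an odd number lands tails."""
    half_turns = round(spin_rate * flight_time * 2)
    return "heads" if half_turns % 2 == 0 else "tails"

# Identical initial conditions always give the same result...
assert flip(10.0, 0.5) == flip(10.0, 0.5)

# ...but a small, practically unmeasurable change flips the outcome,
# which is why the result is unknowable to us in practice.
print(flip(10.0, 0.5))   # → heads
print(flip(10.0, 0.55))  # → tails
```

Nothing here is uncaused; the “randomness” is just our ignorance of the exact inputs.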

Now, if we have an omniscient being flipping the coin, then the result will never be random. It is logically impossible, just like it is logically impossible for an omnipotent being to create a rock so heavy he cannot lift it.

An omniscient being would not have to use a process like flipping a coin: it would just choose, without using any process. It would have a determinate purpose, namely disproving your theory, but no process, and therefore no determinate process.

I’m not sure what a “no process” could produce. But if it accomplished the goal then the goal would be the determinate cause of the choice of using a “no process”, thus disproving the disproof.

Free will does not mean “uncaused”, it means authentically caused by the agent (to satisfy the agent’s purpose and the agent’s reasons), and not imposed upon the agent against his/her/its will. Thus the omniscient being’s free will is identical to our own.

If you assert that something is logically necessary, you need to prove that a contradiction follows from the opposite claim. In other words, not being sure what can be produced without a process is not helpful: you need to show that producing something without a process is a contradiction, and it is not.

In fact, it cannot be a contradiction, because a process has elements. Each of those elements will cause something. Do they do it with a process, or without? At some point you will come to something that causes something without any process. And without any process, there is no need for it to be deterministic.

I did not say that free will means uncaused. In fact, in the situation we are discussing, nothing is uncaused. The omniscient agent chooses between two options, A and B, indeterministically and without a process. It chooses one of them. Let’s suppose it is A. That is not uncaused: it is caused by the omniscient agent, and for a purpose, namely refuting you. The purpose is accomplished precisely because it is still true that it could have chosen B. If it could not have chosen B, the purpose would not be accomplished.

Choosing is a process. Thus choosing without a process is a logical contradiction a priori (by definition).

The process in this case begins with the omniscient being desiring to demonstrate indeterminacy. And that is a second logical contradiction because it requires it to “cause an uncaused choice”.

An element of a process is a step. Each step is also a process, ad infinitum (here’s hoping you have provided sufficient stack space). Actually, it is not ad infinitum, but rather as far as is humanly meaningful and relevant.

When you shift to “element” you seem to be looking for smaller reductions of material. A process is not a material object. “Process” is what we call the “set of steps by which something is brought about”.

EU: “At some point you will come to something that causes something without any process.”

No. At some point we will lose interest in further reduction because it becomes irrelevant.

EU: “The omniscient agent chooses between two options, A and B, indeterministically and without a process.”

Logically impossible as per above.

EU: “That is not uncaused: it is caused by the omniscient agent, and for a purpose, namely refuting you.”

But it doesn’t. Logically, nothing can be indeterministically caused, because “to cause” is “to (causally) determine.” On the other hand, it is often the case that we cannot determine (know) what determined (caused) a specific event.

EU: “The purpose is accomplished precisely because it is still true that it could have chosen B. If it could not have chosen B, the purpose would not be accomplished.”

Oh. Well, it turns out that if A can be implemented IF CHOSEN and if B also can be implemented IF CHOSEN, then it will always be true that I COULD HAVE DONE OTHERWISE.

If it is true right now that I can choose A or I can choose B, then it will always be the case TOMORROW that I could have done otherwise. That’s the way these words work.

There is but a single inevitability, but our possibilities are only limited by our imagination. And imagination is the actual logical context of “possibility” and “can” and “could have”. And that is how those words work.

You say that a process is a number of steps. In that case, choosing is not a process by definition, as you say, because choosing means selecting between alternatives; it does not say whether this happens by steps or all at once.

Speaking of which, you also say that every process has an infinite number of steps. This is a contradiction itself; infinite means “without end”, so a process that comes to an end cannot have an infinite number of steps. Therefore it has a finite number of steps; and each of these is a step without any other steps. So each step is not a process. Consequently something can happen without any process.

“Indeterminate” does not mean “uncaused.” It means there are several options, and there can be several even if one of them is caused to be actual.

Options, possibilities, can’s, could have’s, are all about imagination. The imagination is where we “try out” our options and estimate their outcomes. This is often part of the choosing process. Within the imagination we have room for an infinite number of possibilities. What we do not have room for is the concept of inevitability. It tends to break the process. It has its own, very limited context.

No, I didn’t say an “infinite number of steps”; I’m saying each process can be further broken down. It could literally be a single 1) step. That single step may be broken down into sub-steps, like 1a) lifting the foot, 1b) moving it forward, and 1c) placing it on the ground. And we might break down 1a) lifting the foot as 1a1) nerves send a signal to the motor neurons and 1a2) muscle cells contract. Etc.

“Choosing between alternatives” suggests: 1) Identifying the alternatives, 2) Estimating the outcome of each alternative in terms of the desired goal, 3) Selecting the alternative with the best outcome.
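The three steps just listed can be sketched as a tiny program; the menu and the valuations are made-up stand-ins, not anything from the original exchange:

```python
# A minimal sketch of the three-step choosing process described above.
def choose(alternatives, estimate_value):
    """Reduce multiple options to a single choice."""
    # Step 1: identify the alternatives (here, received as input).
    # Step 2: estimate the outcome of each in terms of the desired goal.
    evaluations = {option: estimate_value(option) for option in alternatives}
    # Step 3: select the alternative with the best estimated outcome.
    return max(evaluations, key=evaluations.get)

# The restaurant example: same menu, same valuations, same choice every time.
menu = ["salad", "steak", "salmon"]
preference = {"salad": 1, "steak": 3, "salmon": 2}
print(choose(menu, preference.get))  # → steak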

If causal determinism is about the reliability of cause and effect, then causal indeterminism should necessarily be about the unreliability of cause and effect (the light switch turns the light on and off most of the time, but occasionally instead of controlling the light it causes gravity to reverse).

In the case of causal indeterminacy our freedom to carry out our intent (such as turning on the light) is diminished by the degree of unreliability of the effect of our cause.

If knowledge determinism means knowing the effect that will result from a specific cause, then knowledge indeterminism would be not knowing the effect (as in random or chaotic phenomena).

If every process can be broken down into steps, and every step can be broken down into additional steps, then either the original process consists of an infinite number of steps, which is impossible, or sooner or later you will come to steps that cannot be broken down any more. These will be things that happen without any process, since they will be a single step that cannot be broken down.

Choosing between alternatives does not have to include the things you mention as distinct steps. They could all be done at once, in a single step. Thus there would be no process.

Causal determinism is not about the reliability of cause and effect: it is about causing one determinate thing instead of one out of many options. Cause and effect would be quite reliable, even if a cause randomly caused A or B. It just would not cause anything like reversing gravity. In other words, reliability is quite possible without determinism.

EU: “either the original process consists of an infinite number of steps, which is impossible”

Unfortunately, it is not impossible, thus the Zeno paradox of getting from the chair to the door. Before getting to the door, you must get halfway there. Before getting halfway, you must get halfway to the halfway point. Before that you must get halfway to the halfway to the halfway point … (Same with the Achilles and the tortoise paradox: every time Achilles gets to where the tortoise is now, the tortoise will be a little ahead.) Solution: you don’t go to the halfway point, you compute what you need to do to get to the door (or to where the tortoise will be, not where it is now).
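Numerically, the halving series behind the paradox sums to a finite distance, which a quick check confirms (the units and loop bound here are arbitrary illustrative choices):

```python
# The infinitely many "halfway" steps sum to a finite distance,
# so the walk from the chair to the door still completes.
distance = 1.0           # chair to door, in arbitrary units
total, step = 0.0, distance / 2
for _ in range(60):      # 60 halvings already exhausts double precision
    total += step        # add the next "halfway" step
    step /= 2
print(total)             # → 1.0 (to floating-point precision)
```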

EU: “or sooner or later you will come to steps that cannot be broken down any more.”

Well, that’s the thing. Steps are the way we break down any task into subtasks. As long as we can relate the sub-sub-sub-…-task to the goal, the breakdown of the step may be valid.

EU: “They could all be done at once, in a single step. Thus there would be no process.”

But a single step is sufficient to be called a process. If your goal is to “go East” you take a step in one direction. If your goal is to “go West” you step in the opposite direction.

EU: (a) “Causal determinism is not about the reliability of cause and effect:” (b) “it is about causing one determinate thing instead of one out of many options.”

(a) = (b)

EU: “Cause and effect would be quite reliable, even if a cause randomly caused A or B.”

But it would beg the questions, “why A this time and not B” and “why B this time and not A”. And if it were important enough, we would eventually determine the additional cause in play (or if not that important, we would still assume there is some additional cause in play).

EU: “It just would not cause anything like reversing gravity.”

Who can say what a truly indeterminate cause can do? You essentially have a button, and each time you push it, something unpredictable will happen.

EU: “In other words, reliability is quite possible without determinism.”

Cause and effect is a necessary mental structure for coping with the real world. The more reliable it is, the better able we are to predict and often control what happens next.

Exactly. You seem to say this as though we should say causal determination is something it is not, instead of something it is. But a statement, (a) is (b), is true when (a) = (b). So my formulation was correct, and yours was not.

The Zeno paradox proves that you do not actually pass through an infinite number of diminishing steps. There is such a thing as a motion, but it does not and cannot consist of passing through an infinite number of things.

If you say a single step is sufficient to count as a process, even if it cannot be divided, then “process” no longer says anything interesting, and an omniscient being might choose randomly between two possibilities in a single indivisible step. It would be a “process” in this sense, but not a determinate one, nor a divisible one.

It is true that if a choice is made randomly, or even if some physical event happens randomly, “why A this time and not B” has no answer. But no one promised you a priori that every question has an answer. And “not B” is not being, but non-being; there is no reason to expect non-being to have a reason. The question, “Why A?” will still have an answer: it is to prove that Marven is mistaken. But “Why A and not B?” will not have an answer, and no one promised you it does.

The cause we are talking about is indeterminate between A and B; there is no reason for that indeterminacy to extend to reversing gravity.

Cause and effect, as I said, are quite real even without determinism. And the fact that we can control things better with more determinism, does not mean that everything is determinate: again, no one promised you that you are in control of all of reality.

Scientifically, we wish to believe that everything that happens can potentially be understood. By experiment and observation we might find an explanation of why things happen. For example, if we can understand what causes someone to become ill, we might avoid, prevent, or cure one or many illnesses.

If we believe that everything has a cause, then we would be motivated to find the cause. But if we believe that illness randomly strikes us without cause, then we would have no cause to seek the reason and no confidence that we could find one.

So we assert that there is a reliable cause or combination of causes for every effect or event we observe.

Having made that assertion, we discover a logical corollary: If every event has reliable causes, and each of those causes is also an effect or event that has reliable causes and so on, then the current and future state of everything can, in theory, be traced back to a prior state of everything. And might be said to be the inevitable result of that prior state.
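The corollary can be illustrated with a toy deterministic rule; the update constants below are just the classic linear congruential ones, chosen purely for illustration:

```python
# If each state follows reliably from the previous state, then the
# entire future is fixed by the initial state.
def evolve(state: int, steps: int) -> int:
    """Apply a toy deterministic update rule `steps` times
    (the constants are the classic glibc LCG parameters)."""
    for _ in range(steps):
        state = (state * 1103515245 + 12345) % 2**31
    return state

# Rerunning from the same initial state always reaches the same
# future state: the "inevitable result of that prior state".
assert evolve(42, 1000) == evolve(42, 1000)
```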

And, most important, if this is a logical and irrefutable truth, then, (a) How should we feel about this? (b) Does it actually change anything?

And the answer is: (b) No, it doesn’t change anything. So, (a) We should feel okay about it.

I agree that the desire to understand things can lead someone to say that everything is absolutely deterministic, since it would be easier to understand in that way. Leibniz did this. However, this is wishful thinking: it may be that it is not possible to understand everything perfectly, and not everything is deterministic.

I also agree that it does not matter if everything is deterministic. That is quite different from saying it is true; and if it turns out that not everything is deterministic, there is no reason to feel bad about that either, even if it means that it is not possible to understand everything perfectly.

I agree with points (1) and (5). I disagree with (4), in two ways. First, just because you cannot think of a way to prove that something did not have a cause does not mean there will be no such way in the future. It may not be possible to “reliably duplicate” it, but there is no proof that this is relevant. In fact, if we prove that we definitely cannot reliably duplicate it, then according to you we would have proven that it was uncaused. That, of course, is my second disagreement with that statement: you continue to assume that if something cannot be reliably duplicated, it is automatically uncaused. That is false: a thing could be caused, but in an unreliable manner.

Statement (3) is wishful thinking. It might sometimes be possible to explain random behavior in terms of reliable causes, but that is no proof that all such behavior has that kind of cause. And as I have said previously, scientists do not typically understand quantum mechanics as having such causes, and they do not think it is likely that any such causes will ever be found.

Statement (2), as you note, is an assumption, and quite possibly false.

There’s a natural human tendency to ask “Why did this happen?” If it was something good, people will want to be able to repeat it. If it was something bad, they’ll want to be able to avoid it. The question assumes a cause that might be discovered or not discovered.

If something is uncaused, then there is no hope that we can do anything about it. We can’t bring it about if it’s good. We can’t avoid it if it’s bad.

To find a cause, we guess (form a hypothesis), and then attempt to cause the event ourselves, to test our guess. Or we survey similar phenomena, like gravity’s effect upon orbits.

Something that is truly uncaused is irrelevant to us, because there is no way to predict it and no way to control it. So our concerns are only about caused events.

I don’t know anything about QM. Rumor has it that quantum events are mysterious and unexplainable. But I don’t know that this makes them “uncaused” or “unreliable”, it just points up the weaknesses in our human understanding of these events.

Again, if something is not deterministic, that does not mean it is uncaused.

Second, whether deterministic or not, and whether caused or not, you may or may not be interested. But it is wishful thinking to say that everything is deterministic because that way you will be able to control it.

In the same way, if you cared about what is true, then you would care about knowing that an uncaused thing is uncaused, and an indeterminate thing indeterminate, even though you would not be able to control it. Perhaps most people do not care about what is true: but I do.

Saying that QM has hidden causes that make it deterministic, is just wishful thinking unless you can find them.

I believe the truth is that everything is reliably caused and that we, by our choices and actions, causally determine significant parts of what becomes inevitable.

When I say that something is reliably caused, I am including three levels of causation: physical, biological, and rational.

For example, by social law I am required to stop my car at a red light. I generally choose to obey that law, so, when I come upon a red stop light, I apply the brakes and stop. Rational causation led me to deliberately stop at the red light. Biological causation translated my intent into raising my foot and pressing the brake pedal. Physical causation, by the brake pads pressing against the wheel drum, halted the car’s forward motion.

A “random” or “unpredictable” event may change the outcome. The physical linkage from the pedal to the brake might have been damaged or worn such that it fails. A neurological failure could prevent my leg from working. A lapse in attention, perhaps due to a distraction, could prevent me from noticing the light until it was too late to stop, despite my intent to obey the law.

However, each of these failures will be caused by something. And it will be the convergence of these different causes that will inevitably result in the success or failure of my intent to stop at a red light.

Reliable causation is evidenced every time you type a letter on your keyboard. You type a “c” and you see a “c” appear on your screen. Reliable causation is so constantly demonstrated in our daily lives that we all take it for granted.

On the other side, we have not seen any uncaused events. When we run into an unexpected event, such as our car skidding when we apply the brakes, we presume a cause, such as a wet or oily road surface.

And if our world were so unreliable that our car skidded for no reason at all, and at any time at all, no one would be free to drive a car. Every freedom that we enjoy requires a deterministic universe.

“We have not seen any uncaused events.” Indeed. But as I said before, not being deterministic is not being uncaused. And as far as anyone knows, as I said, QM events are indeterministic (but not uncaused.)

Your position that everything is deterministic is just a personal dogma; you have provided no good arguments for it, simply repeating that it must be that way because you would like it that way.

Obviously we disagree upon the definition of determinism. Determinism is the belief that objects and forces behave reliably and predictably. Determinism itself is neither an object nor a force, and thus it causes nothing at all. Only objects and forces can actually cause things to happen. And we happen to be one of those objects.

I’m a compatibilist, and find the argument unconvincing for a number of reasons, but I’d raise the following one:

Premise 3. states “If I have libertarian free will, then it is good to believe that I have it.”, but as you pointed out, it doesn’t establish that it’s overall better to believe it.
While I don’t think hell is a live option, the following one is (or would be, if I thought libertarian free will is a live option): If I have libertarian free will, but on the basis of the info available to me, any epistemically proper assessment will give compatibilist free will a higher probability than it will give libertarian free will, then it’s epistemically irrational to believe I have libertarian free will. It’s bad to be epistemically irrational. Being epistemically rational on this matter seems to have no downside, by the way, since compatibilism is also compatible with moral judgment, responsibility, blame, etc.

Another issue is that even if I have free will, that does not mean I have the freedom to choose what to believe. I’m free to choose whether to post here or not. But I don’t think I’m free to choose whether to believe in libertarian free will, or generally to have one belief or another – my free choices do impact my beliefs, but not in that direct, calculated manner of picking to believe P or ¬P for practical reasons.

I would agree with your argument that it is better to believe in compatibilism if it is more likely to be true.

I disagree about whether we can choose to believe something. I am pretty sure that we can do that, and I have a number of times in my life chosen to believe things for quite calculated practical reasons. However, in terms of this argument, it does not matter, since the argument concerns what would turn out to be the case if it turned out that you were free to believe something, even though you mistakenly supposed that you did not have that freedom.

I’ll take a look at the link and read the argument later, but as far as I can tell, I don’t seem to have that capability. I can pretend like an actor, so at some superficial level, it looks like belief, but it’s not actual belief.

Still, even if sometimes I can choose what to believe, it does not follow that I frequently do so. For example, it’s pretty obvious to me I can’t choose to believe that there isn’t a keyboard or a computer screen in front of me, that I’m a woman or a Vulcan, that the POTUS is Hillary Clinton, that IS promotes kindness to people of all religions, etc. Also, in particular (and always leaving aside the question of whether I can sometimes choose what to believe), it looks also pretty obvious to me I can’t choose to believe that Thor, Zeus, Baal or Yahweh exists, or that we have libertarian free will, or that – say – a moral error theory is true, and so on.

I can choose – for example – to read such-and-such books on X, Y or Z, knowing that that will likely affect my beliefs on the subject matters (e.g., by giving me knowledge I didn’t have before), but that’s a different matter.

I’m not sure why you think it doesn’t matter to the argument. For example, what if we had lfw, but not the lfw to choose what to believe for practical reasons?

If P1 is ” I can make an ultimate difference in my beliefs undetermined by initial conditions”, and P2 is “I have libertarian free will”, your point 4 is:

4. If P1, then it’s good to believe P2.

But 5. seems to me something like:

5. If P1, then it’s good that I use my power as described in P1 to bring about that I believe P2.

Yet, what if P1 is true, but I don’t have the power to bring about that I believe P2? (maybe it’s still good to believe P2, but I would have to reach that conclusion by reading arguments and reasoning in a Bayesian manner, not by means of using lfw to choose).
But maybe I misunderstood (one of) your premises?
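As an aside, the formal validity of the step from 2. and 3. to 4. isn’t really in dispute: it is a hypothetical syllogism. A minimal sketch, purely illustrative (the variable names are my own shorthand for the premises), checks it by brute force over all truth assignments:

```python
from itertools import product

def implies(a, b):
    """Material conditional: a -> b."""
    return (not a) or b

# p1: "I can make an ultimate difference ... undetermined by initial conditions"
# lfw: "I have libertarian free will"
# good: "it is good to believe that I have libertarian free will"
valid = all(
    implies(implies(p1, lfw) and implies(lfw, good), implies(p1, good))
    for p1, lfw, good in product([True, False], repeat=3)
)
print(valid)  # True: (P1 -> LFW) and (LFW -> Good) together entail (P1 -> Good)
```

The substantive disagreement, then, is over the premises themselves, not over the inference.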

I read the argument you linked to. I don’t find it persuasive, for a number of reasons (e.g., it lists as among our capabilities a number of things that I don’t seem capable of doing; “deferring to experts” is in my view a way of incorporating the info that such experts make such claims, etc.).

Have you never in your life had this experience: you disagree with experts about something. Then you quite naturally find out you were mistaken. If so, don’t you find yourself at least tempted to blame yourself for ignoring the experts in the first place?

If you never had this experience, do you not find yourself at least tempted to blame others for ignoring experts?

I don’t recall having been in that situation, no. But I blamed myself (for example, and in terms of negligence) for not dedicating sufficient time to consider a matter, argument, etc., or for not having made an effort not to be upset but to consider the matters at hand in a calm fashion, trying to understand the arguments in detail.

In fact, I do think there is a clear sense in which one may be at fault (epistemically and even morally) for having some belief, failing to have some belief, etc. This isn’t because one chooses what to believe, but rather, because one makes choices that likely result in one having one belief or another.

For example – and to address a case of ignoring experts – , take a Young Earth Creationist (YEC) who has internet access and is fully aware of the fact that biologists, geologists, etc., generally support that the Earth is over 4.5 billion years old, but believes and promotes Young Earth Creationism (also YEC). I would say that she epistemically and morally should not believe in YEC, and morally should not promote YEC.
However, I do not believe that the typical YEC says to herself something like: “I know that they are the experts, but it’s in my interest to believe that they’re mistaken, so I will believe that regardless of what the evidence says”, and then actually comes to believe (or continues to believe) in YEC on the basis of that sort of choice (maybe she believes she is making that sort of choice, though; that depends on the YEC).
Rather, I think when it comes to arguments against YEC, arguments for common descent, old rocks, etc., she just reads them (if she does) believing that the claims they make are false, and is motivated to find counterarguments online, which she then (due to bias) considers compelling. However, she does have a choice not to do that. She can choose to read the arguments against YEC, for common descent, etc., in a dispassionate, calm and careful manner, dedicate time to think about them also carefully, etc. She might not be able to understand some of them, but she can still choose to read the background of the people making some of the arguments, etc., also carefully, calmly, etc. If she does that, and unless the relevant part of her brain is simply too damaged, she will find herself not being a YEC anymore. That’s why I think she shouldn’t be a YEC, but not because I think she’s making a direct choice.

Now, it might be suggested that I’m not literally blaming her for being a YEC, but rather, for failing to read arguments calmly, etc.; but while I think strictly speaking, it might be so, saying that she’s at fault for being a YEC is no different from what one says in other cases. For example, let’s say that one blames Bob for failing to pay his electricity bill. He forgot about it, the company disconnected his home, so his family is now temporarily without electricity. Strictly speaking, his failure was to not properly attempt to pay. If he had attempted to pay the bill but, say, someone hacked the bank’s website, and as a result the payment didn’t get through but it looked like it did, then there would be no fault on his part. Yet, it’s customary (and proper, given custom) to say that his fault was to fail to pay the bill. Similarly, I hold that the YEC’s fault is to remain a YEC and to promote YEC, but strictly speaking, her fault is not to do as I described above, in my view.

I have seen people insist that intellectual activity can never be praised or blamed at all, even in cases like the one you discuss (ironically, sometimes while very obviously blaming me for disagreeing with them). Since you are not holding such a position, this makes me suspect that the disagreement is largely verbal. Nonetheless, I think it might be productive to consider a few things in greater detail. First, there is this question: how do we know what we believe?

Do you believe that you would be hurt if you jumped from the roof of a ten-story apartment building? Presumably you do. But how do you know that you believe that?

There are an almost indefinite number of things you could say (and you can probably come up with others):

– I know I believe I would be hurt because as soon as the question is asked, I say, “yes, of course”
– As soon as the question is asked I know the answer is yes without thinking about it. If you ask “What do you mean by knowing without thinking about it,” the answer is that I am immediately inclined to respond in one way, and in no other.
– I possess large amounts of evidence that I would be hurt by such jumping, such as the experience of shorter falls and accounts of what has happened to others who have done such things
– The very fact that I am giving these reasons proves that I believe I would be hurt
– I don’t want to jump from such a building, and I am quite sure the reason I do not want to jump is that I do not want to be hurt. So I must believe that I would be hurt if I jumped.
– I will not in fact jump, even if someone offers me $1,000 for it, and clearly that must be because I believe I would be hurt.

While we can probably multiply these reasons almost indefinitely, we can divide them into two kinds:

1. Things that could theoretically be modified voluntarily. The fact that I would say “yes”, either externally in response to the question, or even in my head, can be modified voluntarily: I can say “no,” if I want, even internally, as long as we are talking about verbal expression. Of course you will likely say that however much you say “no” inside your head, you do not really believe it; but we will get to that later. Likewise, giving reasons in this way is voluntary: I could, if I wanted, give reasons that would argue that I do not believe that I would be hurt. They would be bad reasons, surely, but I could give them if I so chose. Likewise, the fact that I will not in fact jump is voluntary; I could accept the offer of $1,000 and jump, if I wanted.

2. Things that could not be modified voluntarily. The fact that I feel an impulse to say yes is not something I can directly change. Likewise, I cannot change the fact that I have evidence that I would be hurt by jumping. Nor can I directly change the fact that I do not want to jump.

But here is an important point about these latter things: none of them prove that I believe I would be hurt. Rather, they provide evidence about my belief. I sometimes feel an immediate impulse to respond to a question in a certain way, but reflection proves quite conclusively that I do not believe that response. Angry responses might be a good example. Having evidence for a claim does not prove that I believe it, since there can be evidence on both sides of a claim, and I cannot believe both sides at once. And not wanting to jump surely does not prove that I actually believe I will be hurt; I have seen someone stand above a pool for 20 minutes, trying to make himself jump in, and being unable. He consciously believed that he would not be hurt, but nonetheless there was surely some kind of unconscious expectation that made him not want to jump.

But it might appear that I have neglected an important question. If the above facts are not the belief itself, but provide evidence for belief, what is the belief itself? What are they providing evidence for?

And this is where I say we have a choice about what to use the word “belief” for, at least in an important sense. (I agree with your qualifications on this; we are not at every moment deciding how to use each particular word that we are using. But this is not really different from the way our bodily movements are voluntary. Such movements also are not consciously considered and decided at every moment.) Here are two proposed definitions, a voluntarist definition and an involuntarist definition:

1. Voluntarist definition: “Belief” signifies a plan of life, based on treating the claim as a fact. In this case “choosing to believe that I would be hurt if I jumped from the building” would mean choosing one side of all of the above voluntary aspects. Choosing to say “yes”, internally and externally, when someone asks if I think I will be hurt. Choosing to give reasons for saying that I believe I would be hurt, instead of searching for reasons for saying that I don’t believe that. Choosing not to jump, even if someone offers me $1,000 for it.

On the other hand, choosing to believe that I would not be hurt, would mean choosing to say “no,” when asked that question, both externally and inside my head. It would mean choosing to give reasons for saying that I do not believe I would be hurt, instead of the opposite reasons. And importantly, it would mean choosing to accept the $1,000 and go and jump, instead of choosing not jumping.

The importance of the last factor results from the fact that in real life people will often divide these up, making some of the choices but not others. And then in voluntarist terms they neither completely believe nor completely disbelieve.

2. Involuntarist definition. “Belief” signifies an unconscious evaluation of the world, which since it is not even conscious, cannot be directly changed by choice. It expresses itself, or in any case gives evidence of itself, in various conscious ways, such as the impulse to say “yes I would be hurt,” as well as in the desire not to jump, even when offered $1,000.

Personally, when I speak of “belief,” I speak of it in the first sense. When I say I can choose my beliefs, I mean that I can voluntarily perform all of the voluntary aspects of my intellectual life and can voluntarily express those in other behaviors (such as by accepting the $1,000 offer or not.) And on the other hand, it seems to me that you must mean the second thing or something very much like it.

Speaking of belief in the first way is not an arbitrarily chosen way of speaking, such that I might at other times accidentally start using the word in the second way; that would pretty much never happen. If I wanted to refer to the second thing in distinction from the first, I might say something like, “that person unconsciously suspects such and such,” or “they have an implicit belief,” or something like that. But I would not use the unqualified language of belief. I think that my usage both more closely conforms to the way people usually speak of belief in ordinary life, and is a more productive usage.

As to ordinary usage, consider the case of religious people. The unconscious evaluation is often almost completely opposed to the plan of life. For example, consider the doubts of Therese of Lisieux about an immortal soul. She insisted that the soul is immortal, but her unconscious evaluation, which expressed itself consciously in all sorts of ways, was very obviously “the immortal soul is very dubious and there is a good chance it is false.” Should we count her as a believer or as a disbeliever? Some people from the rationalist community speak of “belief in belief” and would likely say that Therese did not “really” believe, but this is not in conformity with ordinary talk of belief, which would say that she was a believer. And an important part of this is the fact that she committed her life to that belief as to a fact; she accepted the offer of $1,000 in order to jump, so to speak.

In a similar way, while there are certainly many creationists who think the way you describe, it certainly is not universal. Consider Kurt Wise: he has said openly that if “all the evidence in the universe” were against young earth creationism, he would still believe in it. And he completely rejects most young earth arguments (e.g. he insists that “there are no missing links” is completely false), so it is not a question of not having considered the evidence in sufficient detail. It is also clear enough that he implicitly expects the earth to be old in the same way a person expects to be hurt if he jumps; Wise would absolutely not expect the discovery of animals with a different genetic code, for example.

I suppose you could say people like Therese and Kurt Wise are liars, but I do not find that convincing. In part this is from past experience; I was myself in such a situation for many years, and I know that I was not lying. I also know that my unconscious evaluation was in total contradiction to my consciously affirmed beliefs, and I knew it at the time.

As to what way of speaking is more productive, I will return to a point I sort of made in another comment. Human beings are not perfect Bayesians. So our unconscious evaluation does not always keep up with the evidence we have available to us. Thus it is perfectly possible to know, “If I thought about it calmly and carefully, I would conclude that X is true,” without already thinking that X is true. You say that the reason people are to blame for unworthy beliefs is that, for example, they did not think carefully enough. Perhaps they did not, but they might know quite well the direction in which their belief would change if they did. And this means they should already change their belief, and there is a way for them to know this, even before they do that careful thinking.

Consider the case of a Christian confronting evidence that their religion is false. One way that people occasionally actually respond in such a situation: “This is good evidence that my religion is false. But I can’t conclude that my religion is false, because I might go to hell, and that must be avoided at all costs. So I will continue to say that it is true.” You seem to be suggesting that they think they are thinking this, but really something else is convincing them, or they are even lying. In contrast, I am quite sure that the fear of hell can be very convincing, just like the fear of jumping. It is quite reasonable to tell that person, “Look, your unconscious evaluation is wrong, and that is clear from the condition of the evidence that you already admit. So it is reasonable for you to treat it as a fact that your religion is false, e.g. by saying that it is false and acting on that.” If they choose to do this (in the plan-of-life sense), that fear will probably go away soon enough anyway, since it is not in conformity with the evidence.

In other words, if we speak of belief as the plan of life, we can say, “Choose to believe the things you have the best evidence for, namely by treating those things as facts.” But if we speak of belief as the unconscious evaluation, someone can say, “I cannot treat this as a fact, no matter how strong the evidence for it, because my unconscious evaluation says that it is not.” In that way the voluntarist definition is more productive as well as being more normal.

Yes, I believe I would be hurt. How do I know that I believe it?
That’s a difficult question about human psychology.

An immediate and naive hypothesis with a certain degree of (immediate) intuitive plausibility (i.e., the prior isn’t too low) would be that I made a very quick assessment about what would likely happen if I were to jump (i.e., I make an assessment of the scenario, using my epistemic intuitions), and the outcome that I get killed (and thus hurt) got a very high probability, almost 1.

However, after further consideration, the hypothesis does not appear plausible. Rather, that seems to be a plausible (but I wouldn’t say probable; I’d say not too improbable) hypothesis about how I have come to be aware that I believe I would be hurt, which is not the same as how I know that I believe that I would be hurt. Generally, being aware of P and/or coming to be aware of P isn’t the same as knowing P. After further consideration, I have to conclude I do not have any plausible hypothesis to offer. Introspection only goes so far; beyond that, the way the mind works is pretty obscure. It’s a fascinating psychology question, though.

Regarding your distinction between “proving” that you believe you would get hurt vs. providing evidence, I don’t know that I see any difference. “Proving” in one sense is only for math, logic, etc., and it’s about deductions. On the other hand, there is another sense of “proving” in which it means to provide evidence so that a claim is established beyond a reasonable doubt. In the former case (i.e., math, logic, etc.), neither the things in your first nor your second category prove it. In the sense of providing evidence beyond a reasonable doubt, the matter would have to be assessed on a case-by-case basis, I think; also, what provides evidence beyond a reasonable doubt to you about your own mental states is not always the same as what provides evidence beyond a reasonable doubt to me about your mental states, since your priors will be different from my priors due to information you have about yourself even if you’re not aware you have it, etc.
Personally, I think probably just representing in my head the scenario in the naive hypothesis above and becoming aware of the result is enough to provide beyond-a-reasonable-doubt evidence that I believe it, but that does not suggest that’s how I know I believe it: it may well be I already knew it before having this particular piece of evidence.

As for your assessment that having evidence for a claim does not prove that you believe it, here I’m going to ask: do you mean “prove” in the sense of deriving it by means of a deductive argument from some premises, or in the sense of establishing it beyond a reasonable doubt?
If it’s the former, I’m not sure what you’re trying to get at, or how deductive logic plays a role here. Could you clarify, please?
If it’s the latter, I would say your argumentation fails. Sure, there can be evidence for both sides of a claim, but that does not preclude people from establishing beyond a reasonable doubt that – say – the Moon Landing happened, or that a defendant is guilty – even though there can still be evidence for a hoax and for their being not guilty respectively.
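The point about priors can be made concrete with a toy Bayesian update. The numbers below are invented purely for illustration: the same piece of evidence (the same likelihood ratio) moves two observers with different priors to very different posteriors, so evidence sufficient to establish something beyond a reasonable doubt for one observer need not be sufficient for the other.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Same evidence (a 9:1 likelihood ratio in favor of H), different priors.
for prior in (0.5, 0.05):
    print(prior, round(posterior(prior, 0.9, 0.1), 3))
# A prior of 0.5 updates to 0.9, while a prior of 0.05 updates only to about 0.321.
```

This is only a sketch of the arithmetic behind the “your priors will be different from my priors” point, not a claim about how actual human credences work.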

Regarding how we use words vs. bodily movements, yes, we’re not all the time consciously considering our movements before we make them. But that does not seem to affect my point that even if we choose to use a word in a certain manner that is against common usage, we would probably often fail to do so when we’re not consciously thinking about it.

As for the two definitions of “belief” you propose, I don’t think either of them captures the intuitive concept of belief. In fact, I think that in the case of nearly all or probably all concepts of folk psychology, we do not at this point have definitions in terms of other words that fully capture their meaning. I don’t even think they fully capture their referent. There is still the question of which of the definitions is closer to common usage, or whether there is a single or more than one common usage (that is relevant in this context). But at any rate, I prefer not to attempt to stick to a definition I consider incorrect, but rather, use the words (in this case, “belief”), intuitively.

With respect to doubts and the like, I think there are several things that might be happening, but two are:

1. A person assigns a middling probability to a hypothesis, but asserts it anyway. In that case, she does not actually have the belief, but wants to have it or at least wants others to think she has it.
2. Two different parts of the brain of a person have conflicting beliefs: a part believes that P and a part believes that ¬P (or believes something that implies ¬P, but let’s simplify). Does she believe it? I think in that case we already have a more accurate picture of the situation, and it’s that part of her believes it, and part of her does not believe it. If you ask whether she fully believes it – or close to it – the answer would be negative, because part of her doesn’t. But also, it’s not the case that she fully believes ¬P. That may not be consistent with usual folk beliefs about belief and about human psychology, but I don’t think it’s inconsistent with the usual meaning of the word “belief”.

I’m not familiar enough with Kurt Wise’s thought processes to tell what’s going on, but there is a difference between what he asserts about his beliefs and what he actually believes or would believe. He might be mistaken. Or part of his brain may have a belief, and part of his brain may have another one. I do not know. Given that you say that you were in a similar situation, perhaps if you didn’t mind explaining how you thought about it in greater detail, I could make an assessment. As it is, I don’t know enough to tell, either. I do not recall finding myself in any situation remotely like what you describe, though, and I don’t even think I’m psychologically capable of that, at least going by my immediate interpretation of your words. I mean, going by your description of Kurt Wise (again, just by that), my immediate gut reaction under the assumption he’s not lying is “weird alien mind”. Of course, I do know that we’re the same species, so that’s very unlikely. I probably would make a very different assessment if I had details. So, I’ll leave it to you: If you want to give me details in your case, I’ll give it a shot.

Regarding the hypothetical Christian case, I’d need more details, but here’s a hypothesis, at least for some cases:

a. She’s still a Christian. She’s found evidence that reduces her epistemic confidence in Christianity being true, but she still assigns Christianity an extremely high probability. In particular, she still believes in Hell, and out of fear, she chooses to continue to say that Christianity is true, perhaps even to herself, in an attempt to reduce the risk that she would eventually lower her assessment to the point of non-belief.
However, she does so not because she believes that concluding that Christianity is false would be the rationally proper assessment, but rather, because she believes (for example) that she’s weak, “fallen” and prone to sin, and so she will probably be lured into attributing too much weight to insufficient evidence against Christianity, or else she will be tricked by smart people who will give her evidence that Christianity is false but without showing her all of the evidence that supports that Christianity is true, or something like that.
Moreover, it may well be that she’s simply being epistemically irrational and is not even attempting to consciously assess the epistemic propriety or impropriety of the assessment she might make if she were to expose herself to further evidence: she’s simply afraid that she might end up making the – in her belief, false and terrible for her future – assessment that Christianity is false, and as a result of that fear, she chooses not to expose herself to further evidence and/or to the arguments she knows some people put forth when arguing against Christianity, etc.: in other words, she’s not choosing directly what to believe, but rather, she is acting on her actual belief that Christianity is true, and on the basis of that belief, she chooses not to take a course of action that has a non-negligible chance of terrible consequences for her, in her own assessment. That does not seem to me like a case in which she’s choosing what to believe in a direct fashion. Rather, she’s choosing not to place herself in a situation that is risky from her perspective.

b. An alternative possibility is that part of her brain already reckons that Christianity is false, but part of her brain doesn’t and is afraid of Hell. It’s possible to have contradictory beliefs. If so, she might end up acting on some of her beliefs sometimes – e.g., when she affirms Christianity, and she rejects looking at the evidence – while acting on contrary beliefs at other times – e.g., when she goes around her daily life most of the times, not thinking there is an omnipotent agent with the preferences and behavioral tendencies of the Biblical creator looking at her, and so acting in an “unChristian” manner without a care in the world.

Perhaps, one of those alternatives is (roughly) true in the case of some Christians, the other in others, and other Christians have other beliefs and intentions. That would have to be assessed on a case by case basis. But with regard to the fear of Hell, I also think it’s very real and I see no good reason why an example like alternative a. above isn’t the case.

And I can also consistently tell people to never shy away from the evidence out of fear, at least not in realistic circumstances. Now, granted, Christians who make the sort of claims you describe will almost certainly not listen to me if I tell them that, maybe because of their fear. But then again, I don’t think they would listen to your alternative, either – and in any event, the issues of whether belief can be chosen and how involve questions about the meaning of “belief” in ordinary language, and questions about human psychology, but not about the results one gets from that (in particular, I reject any practical argument for epistemic voluntarism, or for any other beliefs. :)).

Regarding whether having evidence for a claim proves that you believe it, I was saying that it does not prove it, either to you or to anyone else, nor does it establish it beyond reasonable doubt.

In other words, you may have evidence that is sufficient to establish beyond reasonable doubt that the moon landing occurred. This does establish beyond reasonable doubt that the moon landing occurred. But it does not establish beyond reasonable doubt that “Angra believes that the moon landing occurred,” either for other people or for you. You would have to note that you accept the implications of that evidence, unlike some people, in order to know beyond reasonable doubt that you believe that it occurred.

To some extent, this is even one of the motivations for my position: you can know perfectly well that there is sufficient evidence to establish something beyond reasonable doubt, while having involuntary doubts about it. In other words, you can know that your doubts are unreasonable ones. In which case, for reasons which will become clear as we go along, there will be very good reason for setting those doubts outside the bounds of what is counted as “belief” or “disbelief.” But much remains to be explained here (in the rest of this comment.)

Regarding the voluntary usage of words in general, I don't deny that if we try to modify our usage, we might sometimes fail. That does not imply that "modify the use of this word in such a way" is never a beneficial project, any more than the fact that you do not always succeed in your bodily movements means that you should never attempt them.

The usage of words should be considered voluntary to a first approximation, even if we are not constantly attending to it. But to illustrate this, consider a stupid example. Some people seriously propose that we should define “man” and “woman” in the following way:

1. “Man” means “a human being who wants to be called a ‘man’.”
2. “Woman” means “a human being who wants to be called a ‘woman’.”

Obviously these definitions are inherently meaningless, because people who want to be called those things don't just want to be called by certain sounds, but want to be called those sounds with a certain meaning. So what does "man" mean when the person says they want to be called one? If we take the meaning from the above definition, a man will be someone who wants to be called someone who wants to be called someone who wants to be called… and likewise with a woman. Or in other words, the words will be meaningless.

The people who propose these definitions do so for motives of their own. In fact those motives are pretty unclear to me. They say that they want to make the people who want to be called those things feel good. And maybe that is a part of their motive, but I highly doubt it is the principal one.

But one thing is perfectly clear: their motive is not to understand reality better. That definition cannot help us to understand reality better, because it makes the words meaningless.

Perhaps you resist the idea of voluntarily modifying the meaning of words because you consider this a general truth: if we purposely change the meaning of a word, this will not help us to understand the world. But this is not necessarily true. Sometimes the ordinary usage of words does not “carve nature at the joints,” and consequently the ordinary usage is confusing rather than clarifying. In such cases, we can improve our understanding by modifying our usage to be closer to the real divisions of the world.

Let me return to my proposal: we should use the word “belief” to refer to the voluntary aspects of our intellectual behavior, and not to the involuntary aspects.

I agree with you that this does not perfectly conform to ordinary usage, but I deny that it is “against” ordinary usage. Rather, ordinary usage does not carve nature at the joints, and in fact it cannot be used to describe all situations. So I propose a semi-technical meaning to resolve these difficulties.

Like the people in the stupid example, I have a motive. I have two motives and will say them openly: first, to divide nature at the joints, and therefore to understand the world better, and second, to understand our own abilities better, and consequently become more capable of (intellectually) virtuous behavior.

Suppose someone thought that “get off the couch” means “move your body in such a way that it is no longer on a couch, and simultaneously annihilate the moon.” Then if you ask them, “Can you get off the couch?” they will say, “no, because I can’t annihilate the moon.” That’s fine; it is not false, in the way they understand it. But there is a problem: if they never separately think about moving in such a way that they are not on the couch, they will never notice that they can move in that way. In this particular case, even without changing the person’s language, we can just say “look, I just want you to move in such a way that you are no longer on the couch,” and they might realize they can do that. But it would simplify things if we just decided to use “get off the couch,” to mean the physical movement, and not include the annihilation of the moon.

Something similar is true about the ordinary usage of the word “belief.” The word is a vague generalization, and includes both voluntary and involuntary elements. So if you ask, “Can you change your beliefs?” someone will think that you mean to include changing involuntary elements, and so they will say no, like you have been doing. But including those voluntary and involuntary things together in one word is not carving nature at the joints, since they do not automatically go together; we can change any voluntary thing, but we cannot change the involuntary ones. The practical question that matters is not, “Can you change the things you cannot change?” but “Can you change the things you can change?” That is why I proposed to identify belief entirely with the voluntary elements. This is a more natural division of reality, since it does not count voluntary and involuntary things together, and it makes clear what is in our power and what is not, rather than saying that something actually in our power is not in our power “because we can’t annihilate the moon.”

The customary usage presupposes as a fact that the voluntary and involuntary elements never come apart. But obviously, since the voluntary ones are voluntary, they do sometimes come apart. I brought up the example of religious beliefs because this coming apart is particularly manifest there. Common usage really does not know how to deal with this, since it presupposes that they do not come apart. Nonetheless, there is a normal usage even for the extraordinary cases: people are normally said to believe their religious beliefs. And that is why my usage is not “against” normal usage, but simply a more precise and better specification of it. There is really nothing particularly special about religious beliefs here either; I chose the case because it is especially obvious, but all beliefs are like this. Political beliefs, for example, are often extremely strong personal commitments that can be almost entirely independent of evidence, just like Kurt Wise’s religious beliefs.

“But at any rate, I prefer not to attempt to stick to a definition I consider incorrect, but rather, use the words (in this case, “belief”), intuitively.”

As you quite rightly say, you “prefer” to do this. So it is not that you involuntarily must do this, but you choose to do so. What is your purpose in that choice? Now it could be that the purpose is that you want to understand reality, as I said my purpose was. But I doubt this is the principal motive involved. For you have not established that the intuitive usage (insofar as it is different from mine, which is not very far), is a better help to understanding reality. I have argued above that it is not, in fact, a better way to understand, because it promiscuously mixes the voluntary and involuntary. Here is what I think your motive is:

As you were writing your comment, you had the idea in mind to defend your thesis that belief is not voluntary. If we decide to use the word “belief” to refer just to the voluntary aspects of our intellectual life, then it will be obvious that “belief is voluntary”, in just that sense, is true. So a good way to defend your position, that is, to achieve the goal you were pursuing, is to reject that definition. You therefore rejected it, under cover of being objective, by saying that you are just preferring the intuitive usage.

A nice little piece of bulverism, you are likely to say. Indeed. But that is what I am saying: all of our words, statements, beliefs, and meanings, all of the time, are shot through with practical motivations. If we want to understand ourselves and reality better, we need to see those motives. The pretense that the motives are not there and that we are just seeking truth and nothing else is just that: a pretense, and one that will very surely not bring us closer to truth (although it might well help us obtain those other goals, as in the above case, where rejecting my definition does indeed help you defend your position, although it does not bring you closer to truth.)

In your analysis of my examples, you spoke about "part" of a person believing something and another part disbelieving or doubting. This is not an unreasonable way to speak, but it reveals the inadequacy of the "folk psychology" that the ordinary usage presupposes. The ordinary usage presupposes that either a person believes, or they don't. It is black or white. In reality there are an indefinite number of intermediate conditions. And not only in the sense of degrees of belief, which you could explain in terms of probabilities: this person thinks something has a high probability, that person thinks it has a low probability. Rather, there are a very large number of scales on which people are more or less believing. Some of these are voluntary, and some are not.

For all practical purposes, it is the voluntary ones that matter. So if I ask, “Should I believe that the moon landing occurred?” that is a question about voluntary things. Should I say that the moon landing occurred, or say that it didn’t? Should I argue for and defend the position that it occurred, or that it didn’t? These are voluntary aspects, and they have a clear answer: I should say that it occurred, and defend that position. And note that this is the correct conclusion both theoretically and practically, even if there happens to be some involuntary part of myself “saying” that it did not occur, or that it is doubtful.

One thing that is clear there is that it is a question of personal commitment for him, not a question of evidence. Of course evidence is not going to change his belief, because evidence is not the reason for his belief. But it is very, very clear that “Wise believes in a young earth” is closer to ordinary usage than “Wise is a liar who pretends to believe in a young earth.” And the personal commitment is clearly a voluntary one; he experienced it as a decision. “With that, in great sorrow, I tossed into the fire all my dreams and hopes in science.” He did not feel them taken away from him involuntarily: he chose to toss them into the fire.

In a similar way, there came a time in my life which was basically the opposite process: I had to make this decision: 1) stop respecting evidence and continue to hold my original beliefs; or 2) continue respecting evidence and stop holding those beliefs.

That was very much a decision, as free as any I have ever made. I chose to continue to respect the evidence and to stop holding the beliefs. But I was free to do the opposite; if you say that I wasn't, that respects my experience no more than if you say that when I went left, I was not free to go right. As a compatibilist, you should be willing to respect people's experience of choices, and they experience the choice to believe or not to believe.

This sort of thing is of course a dramatic example, but it is what is happening to us all the time, with all our words, statements, and beliefs. We just normally don't think about it much, just as when you go left, you are not automatically thinking, "I could have gone right."

“in particular, I reject any practical argument for epistemic voluntarism, or for any other beliefs.”

Theoretically, this would be for the sake of truth: if you accept a practical argument for a belief, then you might be moved to a belief for reasons other than truth.

As I pointed out above, whether you were paying attention to it or not, a motive that was moving you while you were writing your comment was “defend the thesis that belief is not voluntary.” I don’t really need my “definition” in order to say what I want: it is enough to assert the obvious fact that all the voluntary aspects of our beliefs are voluntary, and that one of the voluntary aspects is externally saying and defending a position.

So let us divide reality at the joints. Are you saying that you will not accept a practical argument for believing something, where “believing something” refers to something involuntary, or to something voluntary? If it refers to something involuntary, then you might guess or hope that you will not be convinced by a practical argument, but it is not up to you: it is involuntary, so you might be convinced by a practical argument despite your hope not to be convinced.

On the other hand, if it is voluntary, then indeed you have more hope: you can simply choose not to be convinced by a practical argument. But then you clearly are being unreasonable: for all voluntary things, we should consider practical arguments for behaving differently.

At one point you spoke about how when we consciously try to change the meaning of a word, we often accidentally slip back to using it the intuitive way. In a similar way, when we choose to affirm a false belief, we often slip back accidentally to speaking as though that belief were false. It is false that belief is involuntary, and you voluntarily choose to hold that it is true; and in this case, you accidentally slipped back to speaking in a way which gave away the voluntary character of belief. For you stated what was clearly a voluntary intention not to be swayed by practical arguments; and that voluntary intention is clearly unreasonable, since it is the intention not to be moved by practical arguments, in matters which are themselves practical and voluntary.

I agree that having evidence for a claim does not show beyond a reasonable doubt that I believe it.
On the other hand, making an assessment of the information (or evidence, if you like) while focusing (by means of introspection) on how I'm reacting to it – i.e., "looking at" my probabilistic assessment, so to speak – seems to establish beyond a reasonable doubt, from my epistemic perspective, that I believe one thing or another, at least in most cases (there probably are cases in which that's not enough). But that does not imply that the procedure in question is how I know that I have a belief. It may well be that I already knew it.

In the case of involuntary doubts, that would seem to indicate that some part of your brain assigns a probability close to 1, and some other part assigns some other probability. We are not indivisible agents. Different parts of our brains (and then, of our minds) might sometimes do different things. But I don't think that gives me the power to choose what to believe, in any way.

Can one know that one’s doubts are unreasonable?
Yes, but here I’d need more precision: some part of me would know it; some other part of me does not.

With regard to changing the meaning of the words, I do agree that sometimes, it’s useful. It is however problematic when the words pick categories that come naturally to us, such as terms of folk psychology, moral terms, etc., because in those cases, if we manage to change the meaning, we will still need to come up with another word to mean whatever it is we (even instinctively) want to bring up. In particular, I wouldn’t be inclined to try to change the meaning of “belief”, just as I wouldn’t attempt to change the meaning of “immoral”, barring reasons such as some types of error theories (in both cases).

Regarding “man” and “woman”, I’m not sure whether those are categories that come naturally to us (i.e., instinctively), but it seems to me that it’s probable that either they do pick two natural human categories (i.e., categories humans instinctively make), or that there is more than one common meaning of those words, and at least one of the common usages picks two natural human categories.

Now, the definitions those people propose usually do not even match their own usage, let alone common usage. You ask what "man" means when a person says they want to be called one. I would say that usually, it means what the word "man" usually means in colloquial speech, assuming there is one meaning of that word; otherwise, it might vary. Either way, in a minority of cases – when their ideology is getting in their way – "man" probably means what it means in their ideology, or is just meaningless. But I don't see what you're getting at here.

As for the motivations of the people who propose those definitions, I don't know. I have seen a lot of epistemic mistakes with regard to those words (both on the left and the right; more often on the left), but I've not seen those particular definitions. At any rate, from what I've seen (especially from the left, some of whom you might be thinking of), motivations are – as usual – multiple and variable from person to person, but one common motivation on the left actually seems to be to help reduce misunderstandings about reality. In other words, it's not that they want to understand reality better, but that they want others to stop misunderstanding reality – in their assessment, of course. And while I haven't seen those particular definitions, I have seen eliminativism with respect to females and males (e.g., take a look at a recent post at thedanceofreason, including my replies).

You say: “But one thing is perfectly clear: their motive is not to understand reality better. That definition cannot help us to understand reality better, because it makes the words meaningless.”
I disagree with that assessment, for the following reasons:
First, while I have not seen those particular definitions, going by plenty of examples from the left (which I have seen and read), I reckon there is a pretty good chance that one of their several motives is to promote a better understanding of reality, by reducing the misunderstanding of reality that – in their view – results from people using the words in the common, different manner.
Second, it’s not proper to assess their motives from the perspective of your assessment that their words are meaningless. It’s a very common error, committed by people of very different ideologies/religions. The problem is that there is generally no good reason to suspect that they share your assessment.
If they do propose those definitions, it's very probable that they do not consider them meaningless, and even that they disagree with some of your key views about what it takes for a word to be meaningful. Alternatively – though much less probably – they think the words are meaningless, but they do not share your assessment that meaningless words cannot help us understand reality better: they may think that meaningless words reduce the level of confusion caused by present-day words that have false assumptions about reality built into their meaning.

“Perhaps you resist the idea of voluntarily modifying the meaning of words because you consider this a general truth: if we purposely change the meaning of a word, this will not help us to understand the world.”
No, that's not at all the case. I'm all for voluntarily changing the meaning of words if, for example, everyone feels like it and there is no problem of implementation. I would be disinclined to try to voluntarily change the meaning of words that pick natural human categories (including those in folk psychology), unless we also come up with a new word with the meaning of the previous one, or unless our evidence decisively supports an error theory on the common meanings while something in the vicinity might still do some of the job we want the words to do, or some other unusual case (I would have to consider the matter on a case-by-case basis). But in the error-theory case, I would first need at least some evidence that the change is psychologically doable.

With regard to “carving nature at the joints”, I’m not sure nature has joints in the first place, but that would side-track us considerably, so let’s leave that aside, and let me address your proposal. I see it as problematic for the following reasons:
1. Most people will still use "belief" in the common sense or senses, resulting in miscommunication (I guess here our disagreement is whether your proposal is or is not in conflict with common usages).
2. It has consequences when it comes to accusations like “You’ve chosen not to believe that Jesus is God”. Well, no, I have not chosen that. I do not believe that he is. I used to, since I was raised a Catholic. But I didn’t choose to believe that Jesus is God, and then I did not choose to stop believing it. As a kid, I was told that he was, and then I found myself believing. Later, I found myself not believing (ironically, after considering theistic and Christian philosophical arguments and reading the Bible, the Catechism, etc., and without reading any atheistic arguments, but that aside). But I didn’t make any choices. Later, I chose to study arguments in philosophy of religion in much greater detail, but that’s another matter. The people making the accusation generally do not use a technical or “semi-technical” definition.

“As you quite rightly say, you “prefer” to do this. So it is not that you involuntarily must do this, but you choose to do so.”
Actually, I never made a choice to start using the word “belief” in that manner. I involuntarily just did it. But when changing the meaning is proposed, I generally choose not to attempt to change the way I use it. I can give you reasons for that, like:

a. I probably would not succeed much of the time. I would often fall back to the usual meaning. My interlocutors would probably do so as well.
b. Even if and when I succeeded, I would end up being misunderstood by people who use the word in its usual sense. I would still understand them – since I also have the intuitive understanding – but they would not understand me, and we would talk past each other.
c. I sometimes want to actually respond to accusations like the ones I described above. I can't do so if I'm not speaking the language other people are speaking.
d. I sometimes want to discuss belief, and I need the word “belief” for that, given that I don’t have a synonym. In particular, I wanted to discuss here whether belief is voluntary.

“As you were writing your comment, you had the idea in mind to defend your thesis that belief is not voluntary.”
I would say one of the main theses is that we cannot choose what to believe and bring that about – or at least some of us can't. But there are many indirect voluntary acts that predictably affect belief.

“If we decide to use the word “belief” to refer just to the voluntary aspects of our intellectual life, then it will be obvious that “belief is voluntary”, in just that sense, is true. So a good way to defend your position, that is, to achieve the goal you were pursuing, is to reject that definition. You therefore rejected it, under cover of being objective, by saying that you are just preferring the intuitive usage.”
First, I was not talking about changing the meaning of “belief” only in this thread. I was talking about a general proposal.
Second, of course one of the reasons I wouldn’t want to do that is that without that word, my ability to communicate my thoughts would diminish (see above), and that includes my ability to make my case (but see below). But I’m not doing anything undercover.
Third, even if I did not want to change the meaning of "belief" because otherwise I would lack a word for even saying what my thesis is, that would not imply I'm not "objective", in any sense of the word that would somehow make my choice a negative one – and the accusation is negatively loaded, of course.
For example, let's say you want to argue that FGM is immoral, but your interlocutor proposes changing the meaning of "immoral", "morally wrong", etc. If you opposed that because you want to make your case and you would be out of words to make it without moral terms, it would not be proper to tell you that you're acting under cover of objectivity. But in any case, leaving you out of words by definition surely would not help us understand reality better.

That said, if we stipulate that we’re going to use “belief” to refer to the voluntary aspects of our intellectual life, in the context of this thread, I would still have something important to talk about, without that word. So, in the context of this thread, I am willing to make the stipulation in question if you insist, but in that case, I’m not particularly interested in defending a thesis about belief. Instead, I would go with the following thesis: We cannot choose what epistemic probability to assign to a proposition/statement/assertion/etc. and bring that about – or at least some of us can’t, at least in nearly all cases.

“This is not an unreasonable way to speak, but it reveals the inadequacy of the “folk psychology” that the ordinary usage presupposes.”
It reveals the inadequacy of part of the folk psychology, but I don't think ordinary usage presupposes that part, if we're talking about ordinary meaning: ordinary beliefs are like that. Even if it does, that would be a reason to change the meaning only enough to remove the inconsistency with reality, and one would still want to talk about what is or isn't voluntary.

“For all practical purposes, it is the voluntary ones that matter.”
I don’t think so. There are plenty of practical purposes. One of which is to figure out what’s voluntary and what’s not (and I think belief, at least in one common and relevant sense, is not). Another one is to reply to accusations of making choices one has never made and is not even capable of making (successfully), and so on.

“I have a discussion about Wise here, which you may or may not have time to read: ”
I’m afraid I don’t at the moment, but I will take a look when I can.

“But it is very, very clear that “Wise believes in a young earth” is closer to ordinary usage than “Wise is a liar who pretends to believe in a young earth.””
Perhaps, but I never claimed or suggested otherwise, and I replied to some of your comments in which you said that I might be inclined to think some people (under your description) were lying, explaining that that was not at all my position.

“In a similar way, there came a time in my life which was basically the opposite process: I had to make this decision: 1) stop respecting evidence and continue to hold my original beliefs; or 2) continue respecting evidence and stop holding those beliefs.”
See, that’s what I don’t understand, in the usual sense of “belief”. But if you’re using “belief” in your modified sense, I would like to ask whether you had to make a choice regarding your epistemic probabilistic assessments.

“That was very much a decision, as free as any I have ever made. I chose to continue to respect the evidence and to stop holding the beliefs.”
In your modified meaning of “belief”, that seems obvious. But the question I find interesting is whether you chose whether or not to assign the epistemic probabilities that you did, or you found yourself making probabilistic assessments very different from your previous ones.

“As I pointed out above, whether you were paying attention to it or not, a motive that was moving you while you were writing your comment was “defend the thesis that belief is not voluntary.”
As I pointed out above, it was that we cannot choose what to believe and bring that about – or at least some of us can't. But that is also for the sake of truth: I'm trying to defend a thesis that I believe is true. Of course, we care about some truths more than about others, but that is irrelevant to the point I was making. When I reject any practical argument for epistemic voluntarism, or for any other beliefs, that is not something I choose to do for the sake of truth. I choose to tell you that I reject it. But I just do reject it. I don't choose to reject it. I don't modify my probabilistic assessments – or my beliefs, for that matter – on the basis of practical arguments. And if I unconsciously did that sometimes, I would be epistemically irrational in those cases.

“I don’t really need my “definition” in order to say what I want: it is enough to assert the obvious fact that all the voluntary aspects of our beliefs are voluntary, and that one of the voluntary aspects is externally saying and defending a position.”
I don't think the voluntary parts are aspects of our beliefs, but rather things that affect our beliefs. In any case, I got the impression you were trying to say more than that.

“So let us divide reality at the joints. Are you saying that you will not accept a practical argument for believing something, where “believing something” refers to something involuntary, or to something voluntary?”
I’m saying that the practical arguments fail to change my probabilistic assessments, or my beliefs in the usual sense of the term, which I think is involuntary. Furthermore, I think it would be epistemically improper on my part to change my probabilistic assessments in that manner, or my beliefs in the usual sense (if I’m mistaken about the usual sense, I’m afraid I have no words to say that part; perhaps I could try an ostensive definition to explain it to you, but it would take too long and there is no realistic chance of success if I actually got the meaning wrong, so in that case, I will have to leave it at the probabilistic level).

Practical arguments can of course affect my practices. You could make a case that – for example – it's in my interest to spend less time in this debate and more time reading instead of writing posts. But I'm unlikely to be moved by the practical considerations given in arguments usually described as practical arguments for belief in anything – except, perhaps, that I will get a motivation to reply to the argument, which I think is generally not a good kind of argument, since it seems to imply (or is usually understood to imply) that some involuntary things are voluntary.

“If it refers to something involuntary, then you might guess or hope that you will not be convinced by a practical argument, but it is not up to you: it is involuntary, so you might be convinced by a practical argument despite your hope not to be convinced.”
But I assert that that does not happen. I suppose you might say that it might happen and that I cannot properly rule it out. But it seems to me that I can properly do so. I do so on the basis of my knowledge of my own reactions, which is imperfect, of course, but not that bad. And that kind of reaction seems pretty alien to me.

“It is false that belief is involuntary, and you voluntarily choose to hold that it is true; and in this case, you accidentally slipped back to speaking in a way which gave away the voluntary character of belief. For you stated what was clearly a voluntary intention not to be swayed by practical arguments; and that voluntary intention is clearly unreasonable, since it is the intention not to be moved by practical arguments, in matters which are themselves practical and voluntary.”
First, if you’re using “belief” in your sense, you’re equivocating, since I was not using “belief” in that sense.
Second, no, I did not express an intention of not being swayed by practical arguments. I told you that I’m not swayed by them (and I’m adding it would be epistemically irrational of me to be swayed).

However, it is true that I wouldn’t want to be swayed by them. There is nothing unreasonable about my saying that. You would have to misconstrue my words to reach that conclusion.

It would be consistent for me to hold that I can make choices that reduce the chances that I would be swayed by them, even if I cannot choose what to believe, in the same way that I can consistently say I can make choices that reduce the chances that I will – say – acquire false beliefs about biology (e.g., go to college and study biology), or that I can generally reduce my chances of being epistemically irrational by studying philosophy, etc. Those are choices that predictably affect our beliefs. I don't think that is choosing what to believe, in the way I'm using the word "belief", which I think is the common one (or at least one of the common ones, and the one relevant to what I'm saying). And when I'm talking about choices, I made it perfectly clear that I mean making a choice to believe that X and successfully bringing it about by means of an act of will.

I wouldn’t expect you to have an ability to suddenly adopt the sort of strange beliefs that you talk about. Consider these two questions:

1. Can you go kill yourself in the next 10 minutes?
2. Can you go and do 1,000 one arm pull-ups in the next few hours?

I would hope there is no conceivable path that would lead to you killing yourself in the next 10 minutes. You will likely say that this does not take away any freedom, because the reason it cannot happen is that you do not want to kill yourself.

In a similar way, I would suggest you do not want to have those beliefs.

Likewise, I suspect that there is no conceivable path to performing the pull-ups. You will likely say that this does not take away any freedom, because the reason it cannot happen is that it is physically impossible.

In a similar way, I would suggest that it would be intellectually impossible for you to hold those particular beliefs right now.

But the fact that some things are physically impossible, and that you do not want to do certain things, does not prevent physical activity in general from being voluntary, and I would suggest the same is true of intellectual activity, including belief, although not with respect to all possible beliefs and on all occasions.

Consider the following thought experiment. You and I argue for days about this issue. At the end, I start to be convinced. But I keep making arguments against your position. When do I stop making arguments and say, “Well, it seems like you were right”? Isn’t it obvious that this is a choice? I can give in, and admit I was wrong, or I can stubbornly resist, and keep saying I am right. But then I am choosing to accept that I was mistaken and that something else is true. Which means of course that I was right after all.

Another line of thought might be helpful here. Some arguments are arguments about the world, and some arguments are about the meaning of words. Let’s suppose for the moment that our argument is about the meaning of words. Then I am saying that we should use the word “belief” to describe the aspects of our intellectual behavior which are voluntary. Sure, we might have involuntary feelings that are opposed to what we say we believe. But let us count our “belief” as the thing we say, and count the involuntary feelings as just that, involuntary feelings.

Note that whether or not we use the word “belief” in this way is definitely voluntary: we can certainly use the word that way if we want, and avoid it if we want. Then I am arguing that the practical benefits of using the word in this way justify the verbal practice, and that it will have intellectual benefits as well, even with respect to the aspects of our intellectual behavior which are involuntary. If someone says, “I can choose to believe what I want,” then they can also say, “I can choose to respect the evidence or not.” But if someone says that their beliefs are involuntary, then they will likely also say something like, “Well, I am not the one who decides what I think. Sure you may have a good argument, but it is just a brute fact that I think you are wrong.” And then they will not even notice that they have the ability to respect the evidence.

The conclusion of the argument includes “if I can” precisely for the reason you say: there is nothing that proves you can force yourself to accept that particular conclusion, even if it is true in general that you can control your beliefs in a libertarian way. But it would prove that you should try to accept that, if the argument worked (in any case I argued that it doesn’t.) Because you would only know for sure whether you could do it or not, after you tried, just like you only know for sure whether you can do pull-ups or not if you try it.

Let me clarify a little bit. When I say I can or I can’t do something, I’m using that word in the colloquial sense, which in my assessment is not the same as saying that it’s metaphysically or physically possible or impossible, etc.; in fact, I think it’s probably metaphysically possible that I go and do 1,000 one arm pull-ups in the next few hours.

As for whether there is a conceivable path, I think the word “conceivable” is a difficult word. My answer would be that in a usual philosophical sense, there are plenty of conceivable paths. But it’s well beyond a reasonable doubt from my epistemic perspective that I will not do that, and furthermore, that I will be alive in ten minutes. In fact, I will very probably still be writing this reply.

I don’t think the fact that I won’t implies that I have no freedom. I don’t see any good reason why it would. I’m not entirely sure I would put it in terms of wants, though. There are subtleties here that may well be problematic if one tries to put this in terms of wants; I think it does have everything to do with my own psychology, but in a manner that is more subtle than limiting it to the specific psychological phenomenon (or phenomena) of wanting.

As for the pull ups, it seems to me it’s conceivable, and metaphysically possible. Is it physically impossible? Let’s say that aliens from another planet land, abduct me, take me to space, and tell me to do them in microgravity or else. I think I would do them. But I don’t think that the matter of physical possibility means I can do them. In the ordinary sense of the words, I can’t – i.e., I don’t have the power or capability to do them. I’m not sure in which sense you would ask whether not being able to do them “takes away” my freedom. I’m free to choose to do them, and try. I won’t, and I would not succeed if I tried. I don’t know if that addresses your question; if not, I’d like to ask you to elaborate a little.

Regarding your suggestion that I don’t want to have those beliefs, I’m not sure what beliefs you mean, but at any rate, that’s not the same as saying that I choose what to believe. For example, I don’t want to have the belief that the Moon Landing was a hoax, that Thor exists, or that there is no keyboard in front of me, because I don’t want to have irrational beliefs, and it would be epistemically irrational on my part to believe those things. However, the fact that I don’t want to have those beliefs does not suggest that I have the power or ability to have them, or that if I attempted to have them, I would be able to pull it off. I don’t think I would be able to pull it off.
For example, if the aliens of the previous example abducted me and told me to have those beliefs or else (and they plugged something into my brain to see what I actually believe), I would almost certainly fail, and then suffer the “else” (whatever they want to do to me). Of course, this is a wholly unrealistic (though metaphysically and physically possible, in my assessment) scenario. But this is very different – for example – from the case of killing myself. There are unrealistic scenarios in which I would attempt and successfully kill myself. It simply won’t happen.

As for your hypothetical scenario in which I begin to convince you, yes, you choose when to stop making arguments. You also choose whether to read my arguments, whether to do so carefully, whether to plan to dedicate some time in the future to find counterarguments, or rather, to dedicate that time to take a closer look at my arguments and test some of your beliefs against them, etc.; so, I think that there is plenty of room for free choice. But I don’t think that you have the power to choose now to no longer believe that doxastic voluntarism is true, and bring about that result, except in the indirect sense that you have the power (and the freedom) to choose to carefully read arguments against doxastic voluntarism, and perhaps some of them will persuade you (then again, there is an issue with the definition of “doxastic voluntarism”; perhaps, the room I leave for choices would be enough under some conceptions).

In re: meaning of the words, I do not think that’s what the word “belief” means, in the relevant sense. I do believe that Thor does not exist, and I’m pretty sure I do not have the power to bring about that I believe that Thor exists. I have the freedom to decide to try, but if I tried, I’m pretty sure I would fail.
Now, whether we use a word in a certain manner is partially voluntary. The fact is that words have meaning, and that meaning might differ from what one thinks one should mean by them, in order to achieve some goal or another. For example, a left-wing activist might believe she should use the words “man” and “woman” in a certain manner, and give a definition, and actually use them in that way sometimes. Yet, some other times, when she’s not alert, she might fall back on the usual meaning of the words, which she picked – like nearly everyone else – from the linguistic environment around her, and which she might end up using despite her own choice.

Back again to the pull ups, I don’t think I would have to try in order to know I cannot do it. I do pull ups almost every day, and I’m certain (i.e., it’s beyond a reasonable doubt) that I cannot do 1,000 one arm pull-ups; I don’t need to use or know science for that. The same goes for many other things. I don’t need to try to move faster than a speeding bullet in order to be certain that I cannot. And I don’t think I need to attempt to believe that Thor or Yahweh exists in order to be certain that I cannot – but at any rate, even assuming I can’t have certainty, I would say it’s very, very improbable that I can.

I agree with most of this comment, I think, except the discussion of the meaning of “belief” and the relevant conclusions from that.

That is, I consider the word “belief”, in my usage and in ordinary usage, to signify the plan of life that treats something as a fact. So when you say, “But I don’t think that you have the power to choose now to no longer believe that doxastic voluntarism is true, and bring about that result, except in the indirect sense etc.”, I would just say that if we take belief as I do, it certainly is in my power; it would simply mean choosing to start arguing that doxastic voluntarism is false, choosing to say that is false, and choosing to act for all practical purposes as if it is false (as in the example of accepting the $1,000). But it is in my power in the way that killing myself is in my power; I consider both to be bad ideas, and so will not do them. I think the same applies to the beliefs that you say it is not in your power to bring yourself to accept. I think you could adopt the plan of life that treats them as facts, but you have no good reason to do so. And I don’t doubt that you cannot directly change the unconscious evaluation against those things; I simply do not call that belief.

I don’t know that you would actually be acting as if doxastic voluntarism were false in all situations. You could do it in some, though. Maybe you can do it in all, in which case you seem to have a capability I don’t have.
But I can consider my case. I can (if I chose to) claim that Christianity is true, argue for Christianity, go to church, etc. But I would still not be treating Christianity as if it were true – not fully. I would – for example – still reckon it’s false. I would still be attributing an extremely low probability to Christianity, even if I wanted to attribute it a high probability. Let me give you an example. I would not be actually afraid of Hell, or of the Christian creator, at least most of the time.
Now, I could pretend as an actor would, and while pretending, I would be able to have some of the emotions associated with the character’s situations, including some fear when the possibility of Hell is raised. However:

1. It would require a serious effort, it would be taxing, and I would not be able to keep it up for more than short periods at a time even if I tried (while going to church, arguing for Christianity, etc., is something I could keep up indefinitely if I so chose). When I’m not making that effort, e.g., when I’m going about my daily life not doing religious-related stuff, I would fall back into not behaving as if Christianity is true.
2. It would be superficial. Beneath the surface, I would still reckon Christianity is false. If – for example – someone credibly threatened to literally throw me to the lions or set me on fire unless I publicly renounced Christianity, I’m pretty sure I would not be able to be more afraid of Hell than of the lions or the fire, even if I were to try.

But I don’t know how much of this is disagreement about what we can do, and how much is disagreement about the semantics of the word “belief”.

“I would still be attributing an extremely low probability to Christianity, even if I wanted to attribute it a high probability. ”

Someone I know told me that he would be happy with his religious beliefs, if he knew for a fact that they had a probability of 30%. So he does not think that attributing a low probability is inconsistent with believing something. And it is not inconsistent, since “This is a low probability truth” is not a contradiction.

Still, I don’t want to get sidetracked. Let’s assume that you have to attribute a high probability to believe something. Then you can go around saying “Christianity has a probability of 99.9%,” and defending that probability using specious Bayesian arguments and so on, in the same way that you can say that it is true. So you can attribute the high probability in the same way that you can attribute truth.

But, I think, your real objection is that you will still internally feel like it is low probability or false, and you cannot voluntarily change that feeling.

Sure. But my point is that for all practical purposes that does not matter. If someone agrees that the evidence of the moon landing objectively proves it beyond reasonable doubt, it is reasonable for him to say that it happened and to defend that, even if he has internal feelings saying “it did not happen,” exactly like your feeling that Christianity is false.

This is different, you might say, because your feeling is reasonable and his is not. But this is not a difference that matters, because the feeling in both cases is involuntary. The real difference is that his reasons for defending the moon landing are good reasons, whereas in our example we did not really give you a good reason for defending Christianity. If you did have a good motive for this plan, then you could do it, and it would be just as reasonable (if the motive was just as good) as for the other guy’s plan to defend the moon landing.

I agree that you would be unlikely to accept martyrdom. But this isn’t about what you can or can’t do, but about what you would feel motivated to do. You wouldn’t accept martyrdom for reasons much like the reason you will not go kill yourself in the next 10 minutes. In any case we’re talking about the proposed course of action: if you had a good enough motive, you could choose to commit your life to something even to the point of accepting martyrdom.

In re: the 30% guy, I think he may be talking about a different meaning of “belief”, which is the sense in which people say (for example), “I believe I left them on the table…but I’m not really confident”. In that sense, attributing low probability is of course consistent with belief. But in the sense I think we’re discussing, that does not seem to be the case in my view. At least, I don’t seem to grasp the claim, in the sense of belief that – if I didn’t get it wrong – we’re talking about (as a side note, I would not be able to assign Christianity 30% if I tried to, or 3%, or 0.003%, but then again, I don’t seem to be able to pick how to assign probability).

That said, given your reply, it occurs to me that this might help us narrow down what it is that we disagree about – i.e., I don’t know whether this would get us side-tracked. I know that you think that beliefs are a matter of choice, but do you also think probabilistic assessments are?

“Sure. But my point is that for all practical purposes that does not matter. If someone agrees that the evidence of the moon landing objectively proves it beyond reasonable doubt, it is reasonable for him to say that it happened and to defend that, even if he has internal feelings saying “it did not happen,” exactly like your feeling that Christianity is false.”
I’m not sure how “objectively” plays a role here; do you mean to make a distinction between objectively and non-objectively “proving” it? What the observations show depends on the epistemic perspective of the agent making the assessment, but there is an objective fact of the matter (in the usual sense of that expression) as to whether a probabilistic assessment would be epistemically proper.

Leaving that aside, I don’t think the example is similar to my point about Christianity. Rather, I’m not sure how that would work at all – psychologically, it seems extremely weird to me.
I mean, if I agreed that the evidence of Christianity showed that it’s true beyond a reasonable doubt, I would assign a probability close to 1, and would believe it. I don’t think I could do otherwise. As it happens, I think on the basis of priors and observations, it’s beyond a reasonable doubt that Christianity is not true (I’m not suggesting there aren’t thoughtful, brilliant Christian philosophers, just to be clear. Of course there are), and I cannot choose to believe that that is not so (and successfully bring myself to believe it; so, more precisely, I should say that I don’t have the power to bring myself to have that belief). I can choose to read more Christian arguments, think about the ones I considered once again, etc.; I can even choose to pretend to be a Christian, go to church, etc., so there are plenty of things I can choose and bring about, but not believing it – or, for that matter, assigning it a probability not close to zero.

“This is different, you might say, because your feeling is reasonable and his is not.”
While that’s a difference, I wasn’t going to bring it up, since the issue here is the psychology of it. My main issue with the scenario you propose is not that he’s being unreasonable, but that I’m having trouble even understanding what’s going on in his mind – or how he can manage that.
The best I can come up with is that part of his brain/mind assigns high probability, whereas some other part assigns low probability. But that’s not at all what I’m experiencing.

“If you did have a good motive for this plan, then you could do it, and it would be just as reasonable (if the motive was just as good) as for the other guy’s plan to defend the moon landing.”
If I had information on the basis of which I would assign a very high probability to Christianity, that would be it. But I wouldn’t be following a plan. I would just find myself believing, and then I would probably plan to do whatever it is I think the version of Christianity I ended up believing in requires of me.

I can think of a good motive for actually believing Christianity. For example, if aliens abducted me and my family and credibly threatened to inflict horrific suffering on all of us if I failed to believe in Christianity, and they hooked something into my brain that – they show me – can tell what my beliefs are, whether I assign high or low probability, etc., then in my assessment I would have a very good reason to believe that Christianity is true – even though it would be epistemically irrational on my part to do so.
However, I don’t think I could pull it off. I don’t even know how to go about trying that. Maybe what I would do is go the actor route, just in case their device makes a mistake and takes what’s on the surface for belief. But my chances of success would be, in my assessment, negligible.

I’m replying here because the comment structure is becoming unwieldy. I want to focus on four things (replying to both of your recent comments):

1. The ordinary usage of “belief”
2. What you are calling an “epistemic probability assignment”
3. The discussion of the moon landing skeptic
4. How we know ourselves

Ordinary usage.

First, I am not proposing to use “belief” in a way which contrasts with ordinary usage. At most I am clarifying boundaries that would have to be clarified one way or another. The people I say believe things, ordinary usage says believe things, and the people I say don’t believe things, ordinary usage says don’t believe things. If you think my usage is actually in contrast, what would be an example where I would say that someone believes something (or doesn’t believe something), and people in everyday life would say the opposite?

Let me discuss one example. You talk about the accusation, “You’ve chosen not to believe that Jesus is God.” But as an accusation, they are not just saying that you happened to choose that. They are saying that you chose that, and that it was wicked to do so. Now your accusers are being uncharitable in accusing you of wickedness here. But that is no reason to be uncharitable in response. Since they say you chose that, they must be accusing you of choosing something voluntary, not of choosing something involuntary. If you understand them to be saying that you chose something involuntary, that’s an uncharitable interpretation.

To put this another way, it is easy to see that they are using “belief” in my way. And that is because my way is in fact the ordinary way of using it, except perhaps with a little more precision.

Catholics might very well make the same accusation against me. And here is my response: “Yes, I chose, and I still choose, not to believe that Jesus is God. But that was and is not a wicked choice, but a good and reasonable one, because the evidence shows that it is highly likely that he is not.”

Epistemic probability assignments.

I think that in order to guess what is going to happen next, the brain does involuntarily generate estimates of the likelihood of various possibilities. Let me give an example from memory. In the news some time ago, there was a story about a man who died after being bitten by snakes, who apparently interpreted literally the thing in Scripture about how believers won’t be harmed by poison. And the thing that made it really weird is that his father had died in the same way. But he said that his father just didn’t have enough faith, and he thought that he did.

I think that this man chose to believe that he would not be harmed by the snakes. “He really believed he wouldn’t be harmed,” seems to me the correct, ordinary way of describing him. At the same time, it is quite possible that his brain assessed a likelihood that he would be harmed, so that he felt fear about it. But if we said, “He does not really believe the snakes will not harm him,” then we cannot give a good explanation for why he handles them anyway. Sure, perhaps it made people think well of him. But do you really think he committed suicide so that people would think well of him? And if he knew that he would be harmed, then he knew people would make fun of his belief afterwards, which surely he would not have wanted.

But so far, I have simply distinguished the ordinary sense of belief from this assessment of likelihood by the brain. I don’t identify this assessment with probability, even though it is related to probability, for the following reason (and perhaps for other reasons as well):

How many different ways of feeling towards a statement can you have? You can feel like it’s definitely the case; you can feel it definitely isn’t. You can feel you just don’t know. You can feel it’s pretty likely, and this one could have various degrees. But however many degrees you might find, it just won’t be very many distinguishable feelings. Probably not more than 25 or so; perhaps with practice you could increase this to 100, the way someone might learn perfect pitch. And then you could say, “this feeling corresponds with a 57% chance; that one with an 85% chance.”

But probability is a continuous scale. And Bayes’s theorem implies that your probability should change with every single bit of evidence you find relevant to a claim. If you always adjusted by 1%, you would be adjusting far too much. Suppose you have an epistemic probability of 85% about something, and you identify this with a certain feeling of likeliness that you have. Then you find evidence that should increase the probability to 85.5%. What happens to the feeling? Either it will be left behind at 85%, or it will go too far, to 86%, because you don’t have an infinite number of feelings to use.
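To make the arithmetic concrete, here is a minimal sketch of the kind of update described above (the prior and likelihood numbers are made-up illustrative values, not anything from the discussion): a single weak piece of evidence moves a Bayesian probability from 85% to roughly 85.5%, a shift far finer than a few dozen distinguishable feelings could track.

```python
# Illustrative only: hypothetical prior and likelihoods chosen so that
# one weak piece of evidence produces a sub-1% shift in probability.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a claim via Bayes's theorem."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

prior = 0.85
# Evidence only slightly more expected if the claim is true.
posterior = bayes_update(prior, likelihood_if_true=0.52, likelihood_if_false=0.50)
print(round(posterior, 4))  # prints 0.8549
```

The jump from 85% to about 85.5% is exactly the sort of adjustment at issue: real on the continuous scale, but invisible at the resolution of felt confidence.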

What should we do instead, in situations where we know that the probability of something is 85.5%? Well, as I’ve said before, the true answer is that the feeling does not matter. The feeling may correspond to 85%, or to 86%. Or perhaps the feeling is more off, and it feels like 60% or 99%. In any case, none of that prevents me from saying and believing, “The probability is 85.5%.” In this sense, probability assignments are just as voluntary as beliefs in general. But at the same time, there is an involuntary assessment of likeliness, one very like a probability but not quite a probability (the thing which is 85% or 86%), and this is the thing that you generally seem to have been identifying as “belief” or “epistemic probability.” This is not, however, ordinary usage, except insofar as ordinary usage assumes that this feeling will go along with a bunch of other voluntary things (which voluntary things are taken as the actual meaning of belief.)

Another reason not to identify the feeling with a probability assignment is that it can change depending on what you are thinking about at the moment, even when you know that the probability cannot be changing. So something may feel very likely when you are thinking about evidence in favor of it, and very unlikely when you are thinking of evidence against it.

Here is another point about the issue about the meanings of words. One of your concerns is that no matter what meanings we are using, there will still be something you want to talk about, namely this involuntary assessment of likelihood. But basically what I am saying is that although there is reason to talk about that, there is no need for some simple description like “belief” for it, first because that is not the ordinary meaning of belief, but more importantly, because the feeling is less important in practice than you think it is. In other words, you seem to have the attitude, “If I feel like X is probably true, I need to act as if X is probably true.” I am saying that we should treat the feeling as a feeling, and it may or may not mean that we should act as if X is probably true.

The moon landing skeptic.

Let’s suppose you are on a jury. And suppose you say to yourself, “If the defense can provide evidence XYZ, I will be convinced that the accused is innocent.”

But this is a prediction about the future, and you are not infallible. Suppose the defense provides evidence XYZ, exactly as you stipulated, and you are not convinced the person is innocent: in fact, you find yourself with an involuntary assessment that the accused is definitely completely guilty.

What do you do now? Do you vote guilty or not guilty?

This is related to what I said about the probability of 85.5%. We may not have any feeling corresponding to that, but that does not prevent us from saying, “The probability is 85.5%.” In the same way, you know that evidence XYZ indicates innocence. That is why you thought you would be convinced. So there is nothing to prevent you from ignoring the involuntary assessment, and voting, “Not guilty.”

I have the feeling that you are again going to say that this scenario just could not happen to you. If you thought you would be convinced, you definitely will be convinced. But this scenario happens to people in real life all the time.

This is basically the situation I intended with the moon landing skeptic. The skeptical arguments are very convincing to his brain’s involuntary assessment. But he has a general knowledge of what constitutes convincing evidence. So he says, “such and such evidence would prove the moon landing happened.” Then that very evidence is presented to him. But his brain is structured somewhat differently than for most people. That in fact is why he was inclined to conspiracy theories in the first place. So he still feels like the moon landing couldn’t have happened. But he continues to recognize as an abstract truth, “Such and such evidence proves the moon landing happened.”

Then there is nothing to prevent him from saying, “The moon landing happened despite my feeling that it didn’t,” just as in the jury case you can vote not guilty. And in ordinary usage, he is not only saying that the landing happened, but believing it, at least if he continues to act on that claim.

In the discussion of choosing to believe Christianity, there seems to have been some confusion between “good reason to do this,” in the sense of good evidence that it is true, and good purpose in the sense of “some good that will be accomplished.” Since I am talking about voluntary things, I am talking about good in general, not just about evidence in favor. Of course, truth is good. And evidence in favor of something provides a probability of truth. So “I have evidence for this,” is a reason to choose to believe it, namely because of the probability that you will be believing the truth. But there are other good things that can happen when you believe something besides attaining truth, and to the degree that belief is voluntary, those can be motives for believing.

It may be that a practical argument will never affect your involuntary assessment of likelihood. But I don’t care about that involuntary assessment; what matters is whether you go about saying and defending something, or the opposite. And if I say, “defending this position will destroy the world,” that is a practical argument against defending it. If it is true that defending the position will destroy the world, it is a very good reason not to defend that position, regardless of what the evidence may indicate.

I was talking about this when I said that we had not provided you with a good reason for choosing to believe (act as if it were true) Christianity. And one good reason would be good evidence, if it existed, since it would indicate the likelihood of believing truth. But it is not the only possible good reason. You might think that the likelihood of believing falsehood would be an evil that would outweigh any possible good purpose, but that is not necessarily the case. However, I have no reason to speculate on what such a good purpose might be, if there is any.

You asked about what happened in the case of my own choice and probability assignments. You probably meant by probability assignments the involuntary assessment that I have been talking about. And indeed, since this is involuntary, it changes gradually over time, and at no point did I or can I choose to change it. But in that sense, I felt for a long time that Christianity was unlikely. Again, this shows that my usage corresponds to ordinary usage: ordinary usage would say that I changed my mind when I decided to change my mind, not that I was a closet unbeliever for a long time (except in a very qualified sense.)

How we know ourselves.

Earlier you said that you do not know how you know what you believe. I do know how I know what I believe. All of our knowledge comes from experience, and knowledge of ourselves comes from experience of ourselves. In this sense, I know what I believe in very much the same way that I know what other people believe. In both cases it is by seeing how I or they behave, although I get to see more of my behavior (such as internal behavior) than I do of theirs. This is basically why the ordinary usage of “belief” is based mainly on voluntary behavior: because most of the things that would tell us what people believe, including ourselves, is voluntary behavior, either external or internal.

But let us consider the involuntary aspect. I have an involuntary assessment that the USA is larger than Austria. And that involuntary assessment can be experienced by thinking about that statement. But here is a question: how do I know that this experience expresses the assessment that the USA is larger than Austria, rather than being the feeling that the moon is purple?

I think I know this by induction. The feeling I get when I think of “the USA is larger than Austria” strongly inclines me to say that the USA is larger than Austria. In no way does it incline me to say, “the moon is purple.” And it inclines me to act in ways that benefit me if the USA is larger than Austria, such as by accepting bets about what we will find if we look it up. It does not incline me to act in ways that benefit me if the moon is purple.

Note that what is really important here is how I end up acting, not the involuntary assessment, except insofar as it brings about ways of acting which are themselves voluntary. So yes, that involuntary assessment inclines me to say that the USA is larger, and to act in ways that benefit me if the USA is larger. But in principle, if I see that I will be benefited more by acting in other ways, and saying other things, there is no reason for that involuntary assessment to prevent me from achieving the good.

Okay, I finally made it to the fourth point you want to discuss, namely how we know what we believe.

We know what other people believe from experience of their behavior, but we have introspective access to our own beliefs – to some extent, at least – whereas we don’t have introspective access to the beliefs of others. Maybe that’s all you mean by “internal behavior”?
As for whether all our knowledge “comes from” experience, in a way it does, but only in a very broad sense of “experience” – broad enough that I would not be inclined to say that I know how I know my beliefs, if all I know is that I know them by some process of “experience” (in that very broad sense) of my “internal behavior”. That might be a matter of definitions only, though.

On the other hand, when you say that the “internal behavior” in question is mostly voluntary, then I disagree, at least when it comes to how I know my beliefs. That’s not at all how I experience it.

For example, you say: “But let us consider the involuntary aspect. I have an involuntary assessment that the USA is larger than Austria. And that involuntary assessment can be experience by thinking about that statement. But here is a question: how do I know that this experience expresses the assessment that the USA is larger than Austria, rather than being the feeling that the moon is purple?

“I think I know this by induction. The feeling I get when I think of “the USA is larger than Austria” strongly inclines me to say that the USA is larger than Austria. In no way does it incline me to say, “the moon is purple.” And it inclines me to act in ways that benefit me if the USA is larger than Austria, such as by accepting bets about what we will find if we look it up. It does not incline me to act in ways that benefit me if the moon is purple.”
I don’t know about you, but I think that’s not how I know it. I contemplate the statement “the USA is larger than Austria”, and I intuitively assign a very high probability (almost 1) to it, and I assign high probability (almost 1) to my assigning such high probability. But while I do choose to contemplate the statement that the USA is larger than Austria and the statement that I believe so, I intuitively assign (not by choice; I just find myself assigning such probability) extremely high probability to both.

It seems to me that my probabilistic assessment (almost probability 1) that I believe that the USA is larger than Austria comes first – temporally first, even – to any kind of inductive test. In fact, when someone asks me whether I believe something, I don’t normally do any such tests. If they press me and ask me for more evidence, I may well decide to start doing some tests like that, but as I mentioned, my assessment was earlier than that. Of course, it may well be that my probabilistic assessment was even earlier than my being aware of my probabilistic assessment. I don’t know whether that is so.
Now, how does the introspective system work? How do I know that I’m assigning high probability?
I don’t know. It’s an interesting matter for research in human psychology, assuming other humans are like me in that regard (but it’s extremely unlikely that I’m alone in that and other humans have very different thought processes, even if it’s true that I am in certain ways unusual). But the assessment seems immediate and not voluntary, and I assign high probability to my assigning high probability.

It might be suggested that I’m making an unconscious and involuntary inductive test, seeing that it inclines me to say this or that, and that I come to know that I believe that the USA is larger than Austria on the basis of that unconscious and involuntary inductive test. But I see no evidence of that. Moreover, there is also the problem of how I know that I’m inclined to do something during the unconscious test. In any event, if that were the case and I were making an unconscious inductive test somehow, it would still be an unconscious and involuntary test.

“Note that what is really important here is how I end up acting, not the involuntary assessment, except insofar as it brings about ways of acting, ways of acting which are themselves voluntary.”
What’s important to whom, and/or with respect to what end?
I’ll consider two cases:
i. If you’re talking about what’s important to your assessment of what you believe (i.e., relevant for making your assessment), maybe so, but it’s definitely not at all important to my assessment of what I believe. Involuntary assessments rule, so to speak, even though I can choose voluntarily to contemplate the statements, etc. (to some extent, though; when I read your example about the USA and Austria, by the time I finished reading I had already assessed that I believe that the USA is bigger, before I tried to assess it. But a deliberate choice to keep contemplating the matter is of course an option for me). If we are indeed so different in this regard – which has a very low prior, but one that can get higher as the examples keep coming – that would be a surprise, and actually it would be an interesting matter for future research in human psychology. Is there such a degree of variability in the human psychology of belief?
It still is improbable in my assessment, but it’s an intriguing possibility.

ii. If you’re talking about what matters to you, okay. To me, what I actually assess is very important to me. But what I do is also very important to me. Which one is more so depends on the case.

If you mean important with respect to something else, please clarify.

“But in principle, if I see that I will be benefited more by acting in other ways, and saying other things, there is no reason for that involuntary assessment to prevent me from achieving the good.”
No doubt, but it seems to me you’d be acting.

We may be reaching a point where discussion is not going to be helpful, since you are saying that you can find no way to justify your claims, but you are going to make them anyway. “I contemplate the statement “the USA is larger than Austria”, and I intuitively assign a very high probability (almost 1) to it, and I assign high probability (almost 1) to my assigning such high probability.” That cannot be right, because the mathematical notion of probability is not an intuitive one, but a formalization.

But more basically, you are not understanding your experience well. Let me talk instead about the feeling of hunger. Your stomach feels empty, you might feel a little pain etc. How do we know that the feeling of hunger is a desire for food, instead of a desire for flying?

If you respond in the way that you did about knowing our beliefs, you would say that we know intuitively that the feeling is a desire for food. But this is not the case. We learn by induction that the feeling is a desire for food. In particular, when we feel that way, we normally go and find food and eat it. This is why we end up saying that “this feeling is a desire for food,” namely because it tends to make us go and get food.

Now you might ask how we know it is that tendency rather than some other (like David Hume.) And the answer is simply that this is how we judge a tendency: in most cases, when we feel that way, we go and get food. We do not normally jump off a building and try to fly when we feel that way. So it is the tendency to eat, not the tendency to fly.

Let’s look at the involuntary assessment. (And while I might comment on this later, whether it is stable or fluctuating is not really relevant; it is the same kind of thing in each case.) At one point you asked why I called this a “feeling.” The answer is because it is a feeling; it is a feeling in the same way that what we feel when we are hungry is a feeling. It is not a thought, or a belief, or a statement.

When we say, “hunger is a feeling,” and then ask, “What kind of feeling is it?” the answer is that it is a feeling which is a desire for food. As I said, we work that out by induction. In the same way, if we ask what kind of feeling that assessment is, it is a tendency as well: it is the tendency to treat a certain claim as a fact. And you treat a claim as a fact by thinking things about that claim, like “it is true,” by saying things about the claim, and by acting in certain ways in the world. And just as in the case of hunger, we learn by experience that the feeling makes us do these things, because when we feel that way, we end up acting in those ways, just as we end up getting food when we are hungry.

So rather than calling that assessment a belief, it would be more accurate to call it a desire. It is not believing something, but desiring to believe something. Hunger is the tendency to go and get food; that assessment is the tendency to treat a certain claim (“the USA is larger than Austria”) as a fact. And in both cases there are good reasons for those desires: you are benefited by food, and you are benefited by treating that claim as a fact.

But note that just because you have the desire for food, does not mean that you are forced to go and get food. And just because you have the desire to treat a certain claim as a fact, does not mean that you are forced to treat it as a fact. Going to get food is voluntary regardless of how you feel about it; and treating a claim as a fact, or as not a fact, is voluntary, regardless of how you feel about the claim.

I’ll stop here for now, partly because of time constraints, and partly because I feel that you have been distracted by details and that we have the heart of the matter here. Unfortunately I doubt you will be convinced at this point; but I think you need to spend more time thinking about these experiences and our knowledge of them.

“We may be reaching a point where discussion is not going to be helpful, since you are saying that you can find no way to justify your claims, but you are going to make them anyway.”
Perhaps we are, but for other reasons (miscommunication comes to mind). That’s not what I’m saying. But it’s true that if the two of us are so different psychologically, I may not be able to communicate to you what I’m trying to say.

Still, I’ve been thinking of ways in which – assuming we are indeed so different psychologically – you could still get evidence of what I’m saying; here’s a method that you can use:

1. Ask some people (preferably familiar with philosophical reasoning) whether they believe that the USA is larger than Austria, whether they believe that Tokyo is the capital of Japan, or that the Moon is smaller than the Earth, or anything you like. When they answer, ask them what procedure they used to consciously assess whether they believe that the USA is larger than Austria, and see whether they reply something like “I intuitively apprehended it”, “I just saw it”, etc., or whether they come up with something like “I used observations about my behavior, like my disposition to say that the USA is larger than Austria, etc.”.
I suspect that most will fall into the first category, but of course you can test that too (even if it’s anecdotal evidence, it is evidence).

2. Regardless of what the majority say, in the case of the people who say something along the lines of what I said, try to determine by your own means whether their claims about their own beliefs are correct. You already have some (I’d say pretty good) evidence because they’re telling you they believe that, but you can seek further evidence by presenting scenarios and asking them how they react, so that you can see whether they behave as if they believe that.

““I contemplate the statement “the USA is larger than Austria”, and I intuitively assign a very high probability (almost 1) to it, and I assign high probability (almost 1) to my assigning such high probability.” That cannot be right, because the mathematical notion of probability is not an intuitive one, but a formalization.”
If I intuitively reckon that it’s probable that my mouse is black, then it’s proper to say that I’m assigning a probability greater than 0.5. It doesn’t mean I’m actually giving it a specific number, but even if the mathematical notion is a formalization, it’s one that reflects what’s going on intuitively – even if it’s more precise – so it can be used to convey an idea of what sort of probability I’m assigning. In this context, I’m using the “close to 1” statement (but without giving it a number, which I intuitively don’t) to give you an idea of how high the probability I’m assigning is.

“But more basically, you are not understanding your experience well.”
I think I am, and you’re probably not understanding yours if you think that in order to check whether you believe that the USA is bigger than Austria, you make observations about your behavior. But I’m not asserting that (just saying “probably”) because I’m leaving open the chance that we are indeed very different psychologically in that regard.

“Let me talk instead about the feeling of hunger. Your stomach feels empty, you might feel a little pain etc. How do we know that the feeling of hunger is a desire for food, instead of a desire for flying?”
First, the question “how do you know?” and the question “what method do you use to become aware of it?” are not the same question, and in many cases, the results may not match. I become aware of my belief that the USA is larger than Austria immediately, after contemplating it in the way I described, but it may very well be that I already knew, before I even did that, that I believed that the USA is bigger than Austria.

Second, I don’t know about you (not for sure), but I immediately know it’s hunger. I’m not sure “desire” is the way of saying it. It’s more that one feels compelled to get food. But I guess in a broad sense, that would work.

“If you respond in the way that you did about knowing our beliefs, you would say that we know intuitively that the feeling is a desire for food. But this is not the case. We learn by induction that the feeling is a desire for food. In particular, when we feel that way, we normally go and find food and eat it. This is why we end up saying that “this feeling is a desire for food,” namely because it tends to make us go and get food.”
That would not be the answer. If you ask me how I know it, I will tell you I do not know how I know it, just as I told you earlier.
The procedure I mentioned is sufficient to establish that I believe that the USA is larger than Austria, and it’s a procedure I use immediately and intuitively when prompted, and which seems to have made me aware that I have such belief, unless I was intuitively aware even before that and I don’t realize it (which is an open hypothesis). What is very clear to me is that I did not engage in any sort of inductive reasoning considering several cases in order to become aware that I believe that the USA is larger than Austria.
Now, in any event, and however I became aware that I believe that the USA is larger than Austria (after you raised the matter), I don’t know how I know that I believe that the USA is larger than Austria. Maybe the way by which I know it is the same way by which I became aware of it after you raised the issue. Or maybe it’s not. It’s an interesting matter for research in human psychology (or at least the psychology of the humans that are like me on this, if it’s true that we’re so different), but I don’t claim to know the answer.

As for the food, that one is more complicated, because there is one sense in which feeling hunger already makes me aware that I want food (in a very broad sense of “want”), but you may be asking something else. I’m not entirely sure what you’re asking, but I can tell you that the moment you raised the issue, I found myself assigning, intuitively and immediately, a very high probability (which I can describe as close to 1 to give you an idea, even if the intuitive probability is actually not a specific number) to the hypothesis that it’s a desire for food. I surely did not do any inductive reasoning – certainly not conscious inductive reasoning.

Granted, you might argue that it was some unconscious inductive reasoning. I don’t know about that. I see no good evidence, but even if it was some sort of unconscious inductive reasoning, the fact remains that it was unconscious.

“Now you might ask how we know it is that tendency rather than some other (like David Hume.) And the answer is simply that this is how we judge a tendency: in most cases, when we feel that way, we go and get food. We do not normally jump off a building and try to fly when we feel that way. So it is the tendency to eat, not the tendency to fly.”
I wouldn’t be inclined to ask you that, since it seems intuitively clear to me. But while I don’t know how we know it, I’m pretty sure that’s not how I know it, and I would be very surprised if that were how you know it – though, again, I’m leaving the door open to the possibility of wide phenotypical variation among humans on this matter; still, it does seem pretty improbable to me.

The fact is that what you describe does not seem to match my psychological processes at all, or even resemble them. But maybe there is a misunderstanding here.
Are you saying that we unconsciously do all that inductive reasoning, and so when we consciously observe that we are immediately aware that it’s a desire for food without doing any conscious induction, that is still because of the unconscious induction? If that is what you’re suggesting, I would ask you to please clarify. In that case, I would say that I simply do not know whether that is so. Maybe, or maybe not.

“Let’s look at the involuntary assessment. (And while I might comment on this later, whether it is stable or fluctuating is not really relevant; it is the same kind of thing in each case.) At one point you asked why I called this a “feeling.” The answer is because it is a feeling; it is a feeling in the same way that what we feel when we are hungry is a feeling. It is not a thought, or a belief, or a statement.”
I don’t think so. The extremely high probabilistic assessment may well be the same as a belief. I’m not sure whether the theories that hold that are correct, or whether belief supervenes on, but is distinct from, said assessments. But I was using “probabilistic assessment” instead of “belief” because, given your preferred definition of “belief”, I realized that I couldn’t properly make the main points I wanted to make using the word “belief” – I just couldn’t express them with your definition, and you insisted on your definition.

So, I switched to probabilistic assessments, hoping that I would be able to make my main points (since they’re pretty much equivalent, in terms of belief or epistemic probabilistic assessments as I understand those expressions) using that terminology – essentially, I was tabooing the word “belief”, given that there was not much progress we could make.
Now that you have replied more than once, it seems to me we’re not successfully communicating, either. Expressions like “probabilistic assessments” do not allow me to make my case any better than “belief”: we’re still apparently not communicating.

At this point, I’m not sure what else to say about that. But at least, perhaps (though it doesn’t seem promising) we can sort out some of the issues I addressed earlier in this reply.

“So rather than calling that assessment a belief, it would be more accurate to call it a desire.”
No, our intuitive epistemic probabilistic assessments may or may not be beliefs (if they’re high enough), or be something on which belief supervenes, but regardless of whether those theories are true, they are surely nothing like desires. That statement gives me evidence that we’re probably not talking about the same or even similar psychological phenomena – i.e., we’re probably talking past each other.

“I’ll stop here for now, partly because of time constraints, and partly because I feel that you have been distracted by details and that we have the heart of the matter here. Unfortunately I doubt you will be convinced at this point; but I think you need to spend more time thinking about these experiences and our knowledge of them.”
You’re right that I’m not convinced, and I also don’t think you’re likely to be convinced (hopefully, you’ll at least reckon that we’re probably miscommunicating, though I don’t know that you will).
Whether I need to spend more time thinking about these experiences and our knowledge of them depends on what my goals are. I don’t need to spend more time (I’ve already spent a lot) to know that they are not as you describe them (of course, I do not expect that you will be convinced of that), but if I want to learn more about them than I do at this point, I would need to spend more time than I already have. Maybe I will do so, but while the matter interests me, there are many other matters that interest me too, and I have a limited amount of time to allocate among them, so I don’t know how much time I will dedicate to them.

If you’re interested in learning more about these matters too, I would suggest that you dedicate some time to asking others to describe as best they can their psychological processes when thinking about their beliefs, desires, how they are aware of them (which is not the same as how they know), etc., in ways similar to 1. and 2. I suggested above, and other ways one may add. Personally, I would be interested in asking that sort of question too, if I had more time and a proper venue to ask (I know I could go to a forum and make a post asking these questions, but I know nearly all if not all of the replies would very probably be either hostile or a complete misunderstanding, or both – probably both). Maybe I’ll find the time and the proper venue to ask in the future, and I’ll do so. I’m not sure you’ll be interested in asking and gathering evidence from other people on these matters, but I think it would give you interesting info if you did.

entirelyuseless, for now, I only have time to address the issue of the ordinary usage of “belief”, but I will address your other points later.

With regard to the ordinary usage of “belief”, I know that you claim that your definition is not in conflict with ordinary usage. But you say “we should use the word “belief” to refer to the voluntary aspects of our intellectual behavior, and not to the involuntary aspects.”
How is that in conflict with ordinary usage? (or at least, the ordinary usage that is relevant in this context; there is more than one ordinary usage).
I do believe that there is a computer screen in front of me right now. That is not at all a voluntary aspect of our intellectual behavior (I could stand up and walk away, and then I would not believe that there is a computer screen in front of me. But that’s not the point). I have never voluntarily chosen to believe that the computer screen is there, nor do I have the power to make myself disbelieve it.

As for the accusation that “You’ve chosen not to believe that Jesus is God”, yes, indeed, that is an accusation of an alleged voluntary action.
But what I’m saying is that the accusation is false. The people who make it have a false belief about my psychology, and apparently about the psychology of belief. They reckon that belief is always a choice. But that does not imply that they use the word “belief” in a way that matches your proposed definition. They just have a mistaken theory of belief. On that note:

“Since they say you chose that, they must be accusing you of choosing something voluntary, not of choosing something involuntary.”
Yes, of course. They believe it was something voluntary. I’m not being uncharitable. But as it turns out, my belief that Jesus is not God is not something voluntary. If we were to use the word “belief” in the way you propose – namely, to “refer to the voluntary aspects of our intellectual behavior, and not to the involuntary aspects” – then it would not be the case that I believe that Jesus is not God, since that is – as a matter of psychological fact – not a voluntary aspect of my intellectual life. Clearly, that is in conflict with ordinary usage.
Of course, you might not believe that that is a psychological fact, in which case we have a disagreement about my psychology, and we can discuss that; but in order to defend the claim that your definition is in line with ordinary usage, you would have to – among other things; I have plenty of other examples, of course – make your case about my psychology, and argue that my claim about my belief about Jesus (or about my computer screen, for that matter) is false.

Now, they would disagree with my claim that it’s a fact about my psychology, of course, and insist that I do believe that, but that it was my choice. But I’m saying that that would be a mistake (I don’t know what your position is).

“To put this another way, it is easy to see that they are using “belief” in my way.”
No, that is not the case.

In reality, they do use the word “belief” to refer to (say) my belief that Jesus is not God. Were they to come to realize that that is not a voluntary choice on my part, if they were rational about it they would still agree that I believe that Jesus is not God, and without changing what they mean by “belief”.

“Catholics might very well make the same accusation against me. And here is my response: “Yes, I chose, and I still choose, not to believe that Jesus is God. But that was and is not a wicked choice, but a good and reasonable one, because the evidence shows that it is highly likely that he is not.””
Right, I would have expected you to respond that (well, if I had known you believe that Jesus is not God). If you are correct and you actually chose that, then it seems in one respect, your mind and mine are very different, but the point about the usage of “belief” remains: I did not and do not choose to believe that Jesus is not God. I grew up a Catholic, believing that he was, until eventually I found myself believing otherwise, and I do not have the power to bring myself to believe that he is God, just as I do not have the power to bring myself to believe that there is no computer screen in front of me, or that there is no chair in my living room, etc.

First, Bayes’ Theorem does not necessitate that probability be a continuous scale. There are finite probability spaces, or even infinite probability spaces that are not a continuum but are discrete.
Second, you say we attribute probabilities like 85.5%, etc. That may well be true, but if it is true that our probabilistic assessments actually do make up a continuum, then it seems we do have a continuum of different feelings/subjective perspectives. Indeed, if we’re willing to say that the probability of some event En is Pn, and there are infinitely many pairwise different Pn, then it seems we do experience infinitely many different things, since it’s different to say that the probability is Pn than to say it is Pm if n ≠ m.
Similarly, if you are capable of experiencing infinitely many distinct bits of evidence that you find relevant to a claim, then it seems that you can in fact have infinitely many different ways of experiencing things, and in particular, you would also be disposed to act in infinitely many different manners.
Personally, I doubt humans are like that, but I don’t rule it out. But either way, that does not seem to challenge my position, for the following reasons:
a. If we indeed can experience infinitely many different things, and be disposed to act in infinitely many different manners (e.g., by asserting infinitely many different probability values), then indeed we can have infinitely many distinct feelings.
b. If not, then probability is not a continuous scale.
c. I didn’t identify intuitive probabilistic assessments with “feelings” anyway. I’m not sure why you do (I was hoping the category would be clear to you, if “belief” is not. I’m not sure how to get around that. We might not be able to communicate on this particular point).
d. Usually, we do not give a number when we make probabilistic assessments, and we have no means to do so. It’s “probable”, “more probable”, etc.
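The first remark above – that Bayes’ theorem does not require a continuous probability scale – can be checked on a finite example. Here is a minimal sketch in Python; the two-urn setup and its numbers are my own illustrative choices, not anything from the discussion. Every probability involved is an exact rational, so the whole space is discrete, and Bayes’ theorem still applies without any continuum:

```python
from fractions import Fraction

# A finite, discrete probability space: two urns, each chosen with probability 1/2.
# Urn A holds 3 black balls and 1 white; urn B holds 1 black and 3 white.
p_urn = {"A": Fraction(1, 2), "B": Fraction(1, 2)}
p_black_given_urn = {"A": Fraction(3, 4), "B": Fraction(1, 4)}

# Law of total probability: P(black) = sum over urns of P(black | urn) * P(urn).
p_black = sum(p_urn[u] * p_black_given_urn[u] for u in p_urn)

# Bayes' theorem: P(urn | black) = P(black | urn) * P(urn) / P(black).
posterior = {u: p_black_given_urn[u] * p_urn[u] / p_black for u in p_urn}

print(p_black)          # 1/2
print(posterior["A"])   # 3/4
print(posterior["B"])   # 1/4
```

Only finitely many probability values ever occur here, yet the posterior comes out well-defined and sums to 1, which is all the theorem demands.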

Regarding the man killed by a snake, I don’t know enough about his mind to tell. One possibility is that he assigned a high probability to the hypothesis that he would not be harmed, but assigned a non-negligible probability to the hypothesis that he would be. So, he was taking a risk from his perspective, but not committing suicide. As for his fear, part of it may be because of the non-negligible probability, but much of it (especially when he sees the snake, or thinks about it and represents the snake in his head so to speak) probably was just instinctive fear of snakes. Mice, cats, dogs, etc., do not fear snakes because they believe they might be killed. They just fear snakes. It’s instinctive, without a belief in death or other consequences. Humans evolved from other organisms that didn’t have the mental capacity to make conscious predictions about being poisoned or dying. Keeping part of the mental machinery that was already in place (e.g., fear of snakes) was probably conducive to reproductive success – i.e., losing that would have been bad for reproductive success.

At any rate, I don’t have enough info to make a certain assessment of his psychology, or close to that, so that’s a potential hypothesis.

But I have to say, your identification of that “feeling” with the probabilistic assessment is a bit puzzling to me. I’m talking about the probabilistic assessment. It feels in some way, sure, but I don’t think it’s a feeling, and I’m not sure what feeling you have in mind. In any case, the probabilistic assessment is not chosen – at least, I can’t seem to choose it. I can choose to say different things about it, but that’s another matter.

2. Regarding probability, I’d like to add that when you say that our “feelings” can change quickly, you seem to be talking about some quick, preliminary probabilistic assessments we make when assessing an argument, matter, etc., and before considering further data. There are also more stable assessments that we make, after we have considered the arguments. But either way, we (at least I) do not seem capable of choosing what they are, even though we have the power to choose freely which arguments to entertain, how much time to dedicate to think about a matter or another, etc.; so, there is plenty of room for choice, but how we end up making a probabilistic assessment does not seem to be among them.

3. The discussion of the Moon Landing skeptic:

“But this is a prediction about the future, and you are not infallible. Suppose the defense provides evidence XYZ, exactly as you stipulated, and you are not convinced the person is innocent: in fact, you find yourself with an involuntary assessment that the accused is definitely completely guilty.

What do you do now? Do you vote guilty or not guilty?”
In order to make the prediction, I contemplated the hypothesis that XYZ obtains, and reckoned that under that hypothesis, the person is innocent. Now that XYZ obtains, I reckon differently. That’s pretty odd, if in both the first and the second case I considered the facts carefully. But given that you say it’s happening, it may well be that in the second case, I’ve considered info I hadn’t considered before – I had more time to think about it. So, I would keep thinking about that, and if my “guilty” assessment remains when I have to vote, then I vote guilty.
Now, it might happen that as I kept thinking, I would come to conclude that I need more time to consider some hypotheses – for example – and that, as it stands, I’m not sure that he’s guilty. In fact, the probabilistic assessments that changed are only preliminary assessments made while I’m contemplating a matter. If so, I will vote not guilty, because there is room for reasonable doubt. Whether there would still be room if I had more time to consider the pieces of evidence that were presented is another matter. But I don’t have more time, so for now I’m not certain (i.e., my probabilistic assessment is not high enough for me to say it’s beyond a reasonable doubt), and I vote not guilty.
On the other hand, if you stipulate that my probabilistic assessment that he’s guilty is almost 1, then I will vote guilty.

“This is related to what I said about the probability of 85.5%. We may not have any feeling corresponding to that, but that does not prevent us from saying, “The probability is 85.5%.””
I’m not sure why you say that it’s a feeling. But normally, we wouldn’t say that the probability is 85.5%. We are not in a position to give precise numbers, at least in most cases. We can, however, say that the probability is roughly something (e.g., if you throw a die, roughly 1/6), and we do have a mental state corresponding to that (call it a “feeling” if you like, though that word seems to obscure matters, as it encompasses a wide variety of psychological phenomena).

“In the same way, you know that evidence XYZ indicates innocence. That is why you thought you would be convinced.”
No, that’s not it. At first, I thought that XYZ indicated innocence, by contemplating the scenario in which XYZ obtains, which resulted in my assigning low probability to the hypothesis that he’s guilty. Later, I contemplated the scenario after I came to incorporate as data that XYZ obtains (or it’s almost certain), and yet, that no longer resulted in my assigning low probability to that hypothesis. As I said, something changed. What happened? I already speculated above.

“I have the feeling that you are again going to say that this scenario just could not happen to you. If you thought you would be convinced, you definitely will be convinced. But this scenario happens to people in real life all the time.”
Actually, that’s not what I’m going to say. It is quite improbable, sure, because in order to make that prediction, I would have contemplated many scenarios to rule out alternatives. But in case it happened, my reply is above.

“This is basically the situation I intended with the moon landing skeptic. The skeptical arguments are very convincing to his brain’s involuntary assessment. But he has a general knowledge of what constitutes convincing evidence. So he says, “such and such evidence would prove the moon landing happened.” Then that very evidence is presented to him. But his brain is structured somewhat differently than for most people. That in fact is why he was inclined to conspiracy theories in the first place. So he still feels like the moon landing couldn’t have happened. But he continues to recognize as an abstract truth, “Such and such evidence proves the moon landing happened.””
But his reaction remains extremely weird to me. It’s not because it would be improbable that my assessment would change; I mean, that is improbable, but I’m leaving that aside. The problem is that his reaction remains very different from my reaction to the jury scenario you constructed (see above).

“Then there is nothing to prevent him from saying, “The moon landing happened despite my feeling that it didn’t,” just as in the jury case you can vote not guilty. And in ordinary usage, he is not only saying that the landing happened, but believing it, at least if he continues to act on that claim.”
He still seems very weird to me, but I can grant that maybe different parts of his brain have different beliefs.

“It may be that a practical argument will never affect your involuntary assessment of likelihood. But I don’t care about that involuntary assessment; what matters is whether you go about saying and defending something, or the opposite. And if I say, “defending this position will destroy the world,” that is a practical argument against defending it. If it is true that defending the position will destroy the world, it is a very good reason not to defend that position, regardless of what the evidence may indicate.”
Fair enough, you don’t care. That’s understandable. Different people care about different things. But I often do care when people imply that my probabilistic assessment is a choice when it’s not.

“You might think that the likelihood of believing falsehood would be an evil that would outweigh any possible good purpose, but that is not necessarily the case.”
No, I don’t think that’s true, actually. And I wouldn’t even be believing a falsehood in the sense I understand it. I would be pretending to believe a falsehood. For example, if I were told credibly that unless I argue for Christianity (or Islam, or whatever), I will be burned at the stake, you bet I would be arguing for Christianity (or Islam, or whatever), and I think that I would have a good reason (though I wouldn’t be free, since I would have been coerced).
Similarly, if defending a position will result in the destruction of the world, I will not defend it (well, unless something worse will happen, etc., but you get my point).
I’m in no way suggesting that practical arguments would never affect my behavior regarding whether to defend one position or another, or to refrain from defending a position, etc.

“You probably meant by probability assignments the involuntary assessment that I have been talking about. ”

I’m not even sure we’re talking about the same thing. I’m not talking about preliminary assessments only. But maybe that’s it.

“And indeed, since this is involuntary, it changes gradually over time, and at no point did I or can I choose to change it. But in that sense, I felt for a long time that Christianity was unlikely. ”
Okay, so I would be inclined to say that you did not believe that Christianity was true anymore (unless part of you did and part of you didn’t; I don’t have enough info to tell, but the description you give me makes that improbable, though not very much so).

“Again, this shows that my usage corresponds to ordinary usage: ordinary usage would say that I changed my mind when I decided to change my mind, not that I was a closet unbeliever for a long time (except in a very qualified sense.)”
I don’t agree with that one, though I wouldn’t say you were a “closet unbeliever”, because perhaps, given your theory of belief, you thought you were speaking the truth when you said you were a Christian (and there is the option that part of your brain believed it, though as I mentioned, the description you give me makes that improbable).

But let me ask you if you don’t mind (it’s okay if you don’t want to answer, of course). Did you say to yourself something like: “I believe that Christianity is true, but from now on, I will choose to believe otherwise?” Or how did that work? I’m having difficulty understanding how that works.