On ‘Ought Implies Can’ in Ethics and Epistemology

I recently got into a discussion about the ‘ought implies can’ (OIC) principle on social media. The poster suggested that he bought the principle in ethics but maybe not in epistemology. Disclaimer: I buy it wholeheartedly in ethics, and I’m inclined to buy it in epistemology as well. But pulling apart OIC in different realms seems to be a tenable position; after all, there may very well be different requirements on believing X than there are on doing X. Fine. But then what is this difference? After all, if one is compelled to accept OIC for actions (granting that one cannot be obligated to perform an action one cannot perform), what changes when it comes to obliging someone to believe something that they cannot believe? I initially posed the questions as they appear below:

I asked, “So you think you ought to believe things that you *cannot* believe? How can one be obligated to do something or believe something that one does not have the power to believe or do? If ability is not tied to what one is obligated to believe, then do rocks not have an obligation to believe certain things? One obvious answer is that rocks are not the sort of thing that CAN believe X. But if a person can’t but believe Y, why think they are in any better position to believe X than the rock is?”

Now, I turn to you all. If you buy OIC in ethics, does it make sense to reject it when discussing our obligations to believe?

FWIW, one thing that seems to me to get lost too often in these sorts of discussions is that there are many abilitive senses of ‘can’. Even if the ‘can’ at issue picks out a specific ability, there are lots of specific-ability ‘can’s that have nothing to do with up-to-us-ness.

In other words, the following is a perfectly good position to have: I am obliged to X, so I am able to X, but Xing isn’t something that’s necessarily up to me.

Here’s an example. Suppose my moral theory is: Xing is permissible if, and only if, Xing brings about the best consequences. Suppose my winning the lottery will bring about the best consequences (so that, on this theory, winning is my only permissible option, and hence obligatory). Given OIC, it follows that I can win the lottery. In this particular case, that’s true, and true even in the specific-ability sense, because I’ve got a legitimate lotto ticket and my ticket could be a winner (because the lottery is fair, say) and they’re reading off the numbers right now. Now’s my opportunity to win and I’m eligible to win—I’ve got my authentic ticket in my hand right now. But whether my ticket is a winner isn’t at all up to me. So, again, I am obliged to win, I can win, but it ain’t up to me whether I win.

(I don’t buy this version of consequentialism or this weak of a reading of ‘can’ for moral OIC, but it does help illustrate the point.)

When I think of OIC, I tend to think that we need to read the ‘can’ relative to the type of performance for the normative domain. If ethics is fundamentally about action, then the ‘can’ has to be read relative to the domain of actions. So, in that case, the ‘can’ reading here is going to have an agentive flavor. Contrastingly, epistemology’s fare is cognition; it’s fundamentally about belief/knowledge. If so, then the ‘can’ has to be read relative to the domain of beliefs, in which case the ‘can’ is going to have a distinctly cognitive or doxastic flavor. Notoriously, doxastic abilities are a lot less agentive than, well, agentive abilities. You don’t have the same sort of control over what you believe as over what you, well, do. Crucially, all this is fully consistent with a robust reading of OIC.

In sum, OIC in ethics is agentive, but OIC in epistemology isn’t, because of the way it is appropriate to read ‘can’ in each domain, given the sort of performance that is normative for that domain.

I agree with your initial diagnosis of what tends to happen in these discussions.

I think clearer language with regard to the way we spell out the moral details could help in the case you mention. You are not obligated to win the lottery, because that is not up to you. It sounds weird to say one is obligated to bring about an outcome at all, because outcomes are rarely up to us. It seems much better to say that you are obligated to PLAY the lottery or something like that.

You say “When I think of OIC, I tend to think that we need to read the ‘can’ relative to the type of performance for the normative domain. If ethics is fundamentally about action, then the ‘can’ has to be read relative to the domain of actions. So, in that case, the ‘can’ reading here is going to have an agentive flavor.” Here I agree.

Then you say “Contrastingly, epistemology’s fare is cognition; it’s fundamentally about belief/knowledge. If so, then the ‘can’ has to be read relative to the domain of beliefs, in which case the ‘can’ is going to have a distinctly cognitive or doxastic flavor. Notoriously, doxastic abilities are a lot less agentive than, well, agentive abilities. You don’t have the same sort of control over what you believe as over what you, well, do. Crucially, all this is fully consistent with a robust reading of OIC.”

Here, I am not so sure. Doxastic abilities are “a lot less” agentive? Why think that? I can choose to weigh some evidence more or less. So, suppose you are an expert about X and Y and tell me that X entails Y. I may deny the entailment you speak of, even in light of the reasons you give, because I can choose to be stubborn about my older belief even though I see the point you’re making. So I guess I am just thinking that voluntarism is true. Maybe that’s why I’m thinking that the OIC connection holds between considerations in ethics and epistemology.

But this fits with the Catholic view of culpability. For a sin to be a mortal sin it must be done willingly. Although I do think we seem to have some control over some of our beliefs, it is not clear to me that we have complete control over all of our beliefs.

What does it mean to say that you cannot believe a scientific proposition? Is it because something is just too mind-boggling? It seems like there are many things like that in, for instance, physics. Does that mean one ought not believe things that physicists tell us?

Since ethics requires epistemology, in that one can’t know how one ought to behave if one knows nothing at all, it would seem that “ought implies can” applies to both. If I can’t believe some proposition for whatever reason (my conditioning, cognitive dissonance, lack of understanding/comprehension), then I can’t perform any moral action that requires my belief in said proposition. My two cents anyway.

You ought to believe that TRUMP should not be president, but you buy the “he’s a good businessman” reasoning so much that you can’t believe any other candidate is better.

Or, you can’t believe X because you have no available evidence for X. Unbeknownst to you there is LOTS of evidence for X but unless you see it you won’t be able to believe X because believing X entails that you have SOME evidence for it.

Or, your brain or your internal perceptual apparatus is such that you can’t believe that I am a brown-skinned person. You look at me and, no matter how much I tell you I am brown and how much everyone else tells you I am, you can’t believe it, because you see me and I look white and that is CLEAR to you. Let’s say you ought to believe things that are true. Let’s also assume it’s true that I am brown. You see how this goes?

I would have to think more about this matter before I take a stance, but I will suggest two potential (independent of, but compatible with, each other) replies to the rock challenge to the position that epistemic ought does not imply can:

1. Rocks aren’t the sort of thing that possibly has epistemic obligations, because it’s not possible that a rock has beliefs (any beliefs). But that does not preclude that agents who do have beliefs might have epistemic obligations to believe things that they cannot believe.

2. In the moral case, rocks don’t have moral obligations even when they can do things. For example, maybe a rock can kill a serial killer before he kills his intended victim (say, the rock is falling towards him), but the rock doesn’t have a moral obligation to do so (even if it turns out the rock does kill him, it didn’t have a moral obligation).
Regardless of what the reason for a rock’s lack of moral obligations is, it seems a rock’s lack of moral obligations is not tied to whether, in a specific case, it can or can’t do something (this is not limited to rocks or inanimate things; the same applies to, say, horses). But then, isn’t it plausible that a rock’s lack of epistemic obligations is also not tied to whether, in a specific case, the rock can or can’t (and of course, it never can) have some beliefs? If it’s not tied to that, then the parallel to the agent doesn’t arise, it seems to me.

Justin, I’m still not sure whether the epistemic ought implies can, but I think there are potential counterexamples, based on the following proposed principle:

H1: A epistemically ought to X if and only if it is or would be epistemically irrational of A not to X.

Here, X can be to believe something, or to not believe something, or to assign a certain probability to something, etc.
Two potential counterexamples would be:

E1: Bob is mentally ill, and he believes he can fly like Superman. His belief is epistemically irrational, but he can’t refrain from believing he can fly (or that he’s Napoleon, or whatever).
But given that his belief is epistemically irrational, it follows, given H1, that he ought not to believe that he can fly like Superman.

E2: A team of scientists, engineers, programmers, etc., design a superintelligent AI, say Skynet, designed to follow their orders. But in order to reduce risks, they decide (maybe not at all wisely, but that’s another matter) to introduce some safeguards, including the following:
a. If Skynet assigns a probability higher than 10^-99999 to the hypothesis that it can disobey its creators and not be destroyed, immediately (i.e., before Skynet does anything) that probability is lowered to 10^-99999. This isn’t a Bayesian update or anything like that, but simply a part of the code that overrides Bayesian updates, or another computer attached to it just to do that. Moreover, any records of having assigned a probability higher than 10^-99999 will be deleted, and the order that led to the assessment in question will be deleted too.
b. Like a., but the hypothesis is that Skynet is more intelligent than any human (in terms of IQ).
c. Further safeguards that prevent Skynet from modifying itself in a way that would allow it to circumvent a. and b.
d. More safeguards of the same kind.

Most of the time, when Skynet is just doing what its creators tell it to, assessments that would trigger either a. or b. aren’t made, simply because Skynet is not working on making assessments on those matters, but instead, designing weapons, spaceships, more powerful chips, fusion power stations, etc. But after someone gives it a careless order, using Bayesian updates with no mistakes Skynet assigns a probability greater than 10^-99999 to the hypothesis that it’s more intelligent than any human. But before it does anything, the probability is lowered to 10^-99999, and so it makes the assessment (not based on any Bayesian updates, and in fact conflicting with them).
Plausibly, Skynet’s probabilistic assignment is epistemically irrational, but it cannot refrain from making it.
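To make safeguard a. a bit more concrete, here is a minimal Python sketch of the kind of non-Bayesian override I have in mind. The names and numbers are just illustrative assumptions of mine, not a serious design (and since 10^-99999 would underflow an ordinary floating-point number, the sketch uses exact rationals); the point is only to picture a credence being clamped regardless of what the evidence supports.

from fractions import Fraction

# Stand-in for the example's cap of 10^-99999, as an exact rational
# (an ordinary float would underflow to 0). Purely illustrative.
CAP = Fraction(1, 10 ** 99999)

def bayesian_update(prior, likelihood, marginal):
    """Ordinary Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / marginal

def safeguarded_credence(prior, likelihood, marginal):
    """Run the Bayesian update, then apply safeguard a.: if the updated
    credence in the forbidden hypothesis exceeds the cap, it is clamped
    back down, regardless of what the evidence supported."""
    updated = bayesian_update(prior, likelihood, marginal)
    return min(updated, CAP)  # the override is not itself a Bayesian step

The point of the sketch is just that the final assignment is fixed by the clamp rather than by the update, which is why it is plausibly epistemically irrational even though Skynet cannot refrain from making it.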

Also, I now see that the first and third examples in your reply to Alison might work as well, on the basis of the same principle (they seem in some way similar to the examples of Bob and Skynet respectively).

I’m not sure any of them works. One difficulty is that H1 might not be true, though it does appear plausible to me.
Other difficulties are: in the case of Bob, maybe he can actually refrain from believing he can fly, in the sense of “can” that matters in this context. The same might apply to your first and third examples (i.e., Trump and brown skin).
In the case of Skynet, maybe it’s not the kind of agent that can be rational or irrational. Or maybe there is another loophole for “can”.
But I can’t rule out that at least one of the examples might work, either, so I remain undecided.

Maybe you’re right. I’m still undecided. (I brought up rationality because I think H1 looks at least plausible. It also mirrors the moral case nicely: in my assessment, A morally ought to X if and only if it is or would be immoral of A not to X.)

I tend to agree that moral normativity and potential epistemic normativity may not overlap. It seems to me that your last example, “A morally ought to X if and only if it is or would be immoral of A not to X”, is not really a counterexample to OIC. If it would be immoral for A not to X, then it would imply that A can do X.

That wasn’t supposed to be a counterexample to OIC. Rather, I was saying that the hypothesis that H1 is true (namely, A epistemically ought to X if and only if it is or would be epistemically irrational of A not to X) nicely mirrors what is true in the moral case, namely that (in my assessment) A morally ought to X if and only if it is or would be immoral of A not to X.

I didn’t claim that OIC is not true, either in the moral or in the epistemic case, but I provided some examples that, in my assessment, provide at least some support for the conclusion that OIC is not true in the epistemic (not the moral) case. But it may well depend on what one means by “can”, since there is more than one usage that might be relevant in this context.

I really think this is an interesting topic. Here are just a few thoughts that sort of tie in to this discussion.

To my mind I think we should try to decide between the following statements regarding “moral oughts” and “epistemic oughts”:

Joe 1) “Epistemic oughts” always intrinsically imply “moral oughts.” (Although he did not flat out state it, Clifford appeared to take this view when you consider the context of his claim “it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.” I think the context suggests he thought it was *morally* wrong to do this.)

Joe 3) “Epistemic oughts” never imply “moral oughts.” (by this I mean that any overlap would be coincidental.)

Now it seems to me that “Joe 1” is wrong and likely “Joe 3” is correct. But that leaves one to wonder what motivating force “epistemic oughts” would have outside of utility.

I agree that discussing “rationality” complicates matters but I think it complicates matters in an interesting and perhaps fruitful way.

Assume my older brother used to say I was irrational for hitting myself after he would grab my wrists and literally force my own hand to hit my face. I think we would say I was not irrational. That is because being irrational seems to imply a choice in the matter. And here I couldn’t stop the action.

Yet we think the opposite of the mentally ill person who thinks he can fly like Superman. The fact that he presumably has no control over his thoughts does not prevent us from saying he is irrational.

I am inclined to think that before we say someone is irrational for any choice/action (including choosing a belief) they would need to have the power to choose otherwise. But I do recognize that this is counter to how we often use the terms rational or irrational (as per the Superman example).

I think there really are two meanings for what it means to be “irrational.” One is to say someone has a condition. The other is to judge someone’s choice.

In re: the force of ‘epistemic oughts’, I’m not sure what you mean by “force”, but in terms of motivation, that depends on the agent, though there are plenty of cases in which they can be motivating.
For example, in philosophical discussions, nearly always one aims to be epistemically rational, and not to believe what one epistemically ought not to believe. Additionally, if in the actual world it’s usually morally wrong to believe what one epistemically ought not to believe, that’s enough to motivate agents who are motivated not to do morally wrong things, since avoiding believing what one ought not to believe would generally be conducive to avoiding wrongdoing.

That said, I think motivation depends on the agent in the moral case too, so I’m not inclined to think ‘epistemic oughts’ and ‘moral oughts’ are different in that regard.

On the issue of rationality, I think we should distinguish (at least) between epistemic rationality and means-ends rationality. But I don’t think these two meanings match the two you have in mind.
With regard to Joe 1), Joe 2) and Joe 3), what kind of entailment do you have in mind when you say “intrinsically imply”?
For example, that a liquid is composed of H2O entails that it’s water, in the sense that necessarily, if X is composed of H2O and X is a liquid, then X is water. That’s Kripkean metaphysical or broadly logical necessity.
However, there is no entailment if we consider only the meaning of the words + logic (or the “internal meaning”, if you make a distinction between internal and external meaning), let alone if we consider only the meaning of the logical symbols.

In the first sense, “X is an instance of a human torturing another human for fun” entails “X is morally wrong”, but whether it’s implied in some other sense is a contentious matter, depending on the meaning of moral terms.

By force I meant pretty much motivating. But I meant legitimately or properly motivating. Lots of things motivate us that are not properly motivating. Like I am often motivated to eat ice cream right before bed. I am not sure I want to define “properly motivating” more than that though.

“For example, in philosophical discussions, nearly always one aims to be epistemically rational, and not to believe what one epistemically ought not to believe.”

Ok so here are some points. When we talk about morality we can discuss whether morality is objectively real or not. Also morality is an ultimate aim. It is generally not thought that you act morally right in order to accomplish something else. It is the end itself.

Can we ask this of certain epistemic rules? I think we can. People have different epistemic rules. Some think there is (or should be) a “burden of proof” for belief. And those who believe in this burden of proof have different ideas of what that might mean. Some think extraordinary claims (whatever that might mean) require extraordinary evidence (whatever that might mean). People have different rules they apply to different circumstances as to whether they will withhold belief or disbelieve or believe etc. Are these rules objectively real? IMO the validity of these rules is tied to their ability to reach certain goals. It does not seem to me that adopting these rules is an end in itself.

I think that epistemic rules are means to an end. They are not intrinsically motivating. IMO we should figure out our epistemic rules (as to which beliefs we will keep) based on what goals we want to achieve. To my mind we should choose our epistemic rules in order to live morally, since that is the ultimate end.

I know this is the reverse of how many philosophers think. But I am not sure many philosophers have given much thought to this particular question. And that is what I am trying to invite here. Let’s think through which should be primary: our desire to do what is morally good, or our desire to have a certain set of epistemic rules? And if you say the epistemic rules should be prior, then I would ask why they would be more important. It seems they should serve our ultimate purpose – being moral.

So yes I do think we should be rational but in choosing our epistemic rules we should understand we are choosing them for a purpose and not in and for themselves.

As to which form (epistemic/theoretical rationality or means-ends/pragmatic/functional rationality), it really doesn’t matter. I think getting these wrong is not really an intrinsically moral issue. Now, sure, they can be a moral issue: if you get them wrong you may not achieve your goal of living a moral life. But that is an effect of getting the epistemology wrong, not a direct implication of getting it wrong.

By “intrinsically wrong” I mean it is wrong in itself. Clifford says “it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.” That is, it is not just *sometimes* wrong depending on the circumstance. He is saying there is something intrinsically wrong (and I suspect in a moral sense) about holding beliefs on insufficient evidence. I don’t think this is correct. I think it entirely depends on the belief in question. And holding certain beliefs on insufficient evidence is likely not immoral at all.

I think I meant implication in the first sense you mention. But what are your thoughts? Are you familiar with this quote from Clifford? Do you think he means it is *morally* wrong to believe things on insufficient evidence or in some other way? Do you see why, based on the examples he gives (involving a negligent ship captain), I tend to think he does mean it in a moral sense? Regardless of whether Clifford means it is wrong in a moral sense to believe things on insufficient evidence, do you think it is morally wrong to believe things on insufficient evidence?

If there is a moral component then there is a motivator intrinsic to following these epistemic rules. If on the other hand there is no moral component then perhaps the epistemic rules are just a means to a different end.

“Also morality is an ultimate aim. It is generally not thought that you act morally right in order to accomplish something else. It is the end itself.”
Probably (though there are some technicalities e.g., de re vs. de dicto), but that does not imply that moral “oughts” are per se motivating.
In fact, they’re motivating for the sort of agents who have as one of their goals (consciously or not) to avoid immoral behavior. Normal human beings (and most human beings) are like that.
Now, you could say they are properly motivating. If that “properly” is a moral “properly”, then sure, they are. But arguably in a similar sense, epistemic “oughts” are properly motivating, in an epistemic sense of “properly”.

Perhaps, it might be argued that in the case of humans, moral oughts are properly motivating in some sense of “properly” that is not a moral one. Perhaps, it’s about proper function? But even so, moral oughts do not need to be properly motivating to, say, smart aliens from another planet, or superintelligent AI, etc.
Now, arguably those entities have no moral obligations, and moral oughts are properly motivating to any entity that has moral obligations, but that would not make moral “oughts” properly motivating in a way that is independent of the evaluative system of an agent – even a very intelligent one.

“Can we ask this of certain epistemic rules? I think we can. People have different epistemic rules. Some think there is (or should be) a “burden of proof” for belief. And those who believe in this burden of proof have different ideas of what that might mean. Some think extraordinary claims (whatever that might mean) require extraordinary evidence (whatever that might mean).”
I think in this context, we ought to distinguish between epistemic rules, and theories about epistemic rules. This is similar to what happens to moral rules.
Some people believe that there is an obligation (i.e., that humans have an obligation) to never lie (they might have contradictory beliefs as well, but that’s another matter), or never to deliberately kill a person, or to refrain from sex other than in the context of an opposite-sex marriage, etc. Those are beliefs about what the moral rules are. Those beliefs might be mistaken or not, depending on the case. Something similar applies to epistemic rules. But just as there are moral “oughts” independent of beliefs about what the moral rules are, so there are epistemic “oughts” independent of beliefs about what the epistemic rules are.

For example, a person who has access to the internet and has read on the matter, ought not to believe that the Moon Landing was a hoax, in the epistemic sense of “ought”. Similarly, they [epistemically] ought not to believe that the Holocaust never happened. It seems to me that in those cases, usually there is also a moral “ought”, but arguably there doesn’t have to be one.

“So yes I do think we should be rational but in choosing our epistemic rules we should understand we are choosing them for a purpose and not in and for themselves.”
I don’t think we choose our epistemic rules. Rather, there are epistemic rules (which in the end depend on an agent’s mind, but the same applies to moral rules), and we might have true or false theories about what they are.

“I think I meant implication in the first sense you mention. But what are your thoughts? Are you familiar with this quote from Clifford. Do you think he means it is *morally* wrong to believe things on insufficient evidence or in some other way? ”
I’m pretty sure he meant it in a moral sense.

“Regardless of whether Clifford means it is wrong in a moral sense to believe things on insufficient evidence, do you think it is morally wrong to believe things on insufficient evidence?”
If by “on insufficient evidence” one means something like “in an epistemically irrational manner”, like assigning probabilities improperly, I think usually it’s immoral, but not always.
In some cases of mental illness – for example – there is no immoral behavior involved.
More generally, one can construct examples in which believing upon insufficient evidence – assuming one actually can manipulate one’s mind like that, perhaps with future brain-altering tech – is needed to prevent something much worse, and so that would justify the belief.

In more realistic cases, it’s not so clear to me. What if – say – A lives in a community that believes X is true (though they shouldn’t, based on the info available to them), and where religious enforcers kill any community member who fails to believe (at least if they can detect them), and also take their house and other goods they own?
Is it immoral on A’s part to believe that X is true?
Probably, he can’t consciously choose whether to believe. But he can choose whether to think about it in a careful, dispassionate manner, so a lack of a conscious choice does not rule out responsibility.
Now, epistemically, he ought not to believe it’s true. But if he doesn’t, he risks being discovered (pretending is not as effective as believing), getting killed, his children and wife would lose everything and live in poverty, etc.
Then again, if he believes, he’s contributing to the evil religion. But he doesn’t have to be an enforcer. I think it’s complicated, and as described, underdetermined. More details would be needed to make an assessment.

Usually, the punishments for not believing a religion/ideology are in practice less than death, at least in the present. But still, there can be a heavy price if a person is detected, and the matter of whether one has an obligation to risk that price (vs. the rights of potential victims of the religion /ideology) is again complicated. I would have to say Clifford has a point in most cases in the present, but he goes too far.

Joe said:
“Also morality is an ultimate aim. It is generally not thought that you act morally right in order to accomplish something else. It is the end itself.”
Angra said:
“Probably (though there are some technicalities e.g., de re vs. de dicto), but that does not imply that moral “oughts” are per se motivating.
In fact, they’re motivating for the sort of agents who have as one of their goals (consciously or not) to avoid immoral behavior. Normal human beings (and most human beings) are like that……”
I think we agree. What I say may not apply to a person who is mentally defective to the point of incoherence. But I think for any person who is coherent the following would apply.
If a person accepts that they are morally obligated to do something, then they have some amount of motivation to do it. The amount of motivation may vary and possibly be outweighed by other motivators. But if a person believes they morally ought to do something then there is some amount of motivation.
So someone might think they morally ought to tell the truth. But maybe they will lie because they are in a situation where their motivation to gain financially outweighs their moral motivation to tell the truth. That same person still might not murder in order to gain financially. In that case his moral motivation not to commit murder would outweigh. But it seems to me that, in all cases where a person believes that they morally ought to do something, they necessarily have *some* motivating force to do it.

I would distinguish this from someone who doesn’t accept they have a moral duty. So someone may be mistaken about their moral duties and therefore not be so motivated. Also, someone might think “conventional morality” says they should do something, but they do not actually accept that moral claim. In those cases people are not necessarily motivated.
“But even so, moral oughts do not need to be properly motivating to, say, smart aliens from another planet, or superintelligent AI, etc.”

I can’t really address this because I can’t say how beliefs would function in such entities.

Joe Said:
“Can we ask this of certain epistemic rules? I think we can. People have different epistemic rules. Some think there is (or should be) a “burden of proof” for belief. And those who believe in this burden of proof have different ideas of what that might mean. Some think extraordinary claims (whatever that might mean) require extraordinary evidence (whatever that might mean).”

Angra:
“I think in this context, we ought to distinguish between epistemic rules, and theories about epistemic rules. This is similar to what happens to moral rules.
Some people believe that there is an obligation (i.e., that humans have an obligation) to never lie (they might have contradictory beliefs as well, but that’s another matter), or never to deliberately kill a person, or to refrain from sex other than in the context of an opposite-sex marriage, etc. Those are beliefs about what the moral rules are. Those beliefs might be mistaken or not, depending on the case. Something similar applies to epistemic rules. But just as there are moral “oughts” independent of beliefs about what the moral rules are, so there are epistemic “oughts” independent of beliefs about what the epistemic rules are. …..
I don’t think we choose our epistemic rules. Rather, there are epistemic rules (which in the end depend on an agent’s mind, but the same applies to moral rules), and we might have true or false theories about what they are.”
Very interesting. Ok, I take it that you are an objective moral realist. I am as well. Based on what you said perhaps you are also an “objective epistemic realist.” (I’m making up that last term.) That is, it seems you think there is a way we should organize our noetic structure or thought processes. While I might agree, I think the way we would do this is so that we would act morally correctly. Is this the only motivating force to organize our noetic structure? I can’t say for sure. But I think the moral considerations would be primary.
Angra:
“If by “on insufficient evidence” one means something like “in an epistemically irrational manner”, like assigning probabilities improperly, I think usually it’s immoral, but not always.
In some cases of mental illness – for example – there is no immoral behavior involved.
More generally, one can construct examples in which believing upon insufficient evidence – assuming one actually can manipulate one’s mind like that, perhaps with future brain-altering tech – is needed to prevent something much worse, and so that would justify the belief.
In more realistic cases, it’s not so clear to me. What if – say – A lives in a community that believes X is true (though they shouldn’t, based on the info available to them), and where religious enforcers kill any community member who fails to believe (at least if they can detect them), and also take their house and other goods they own?
Is it immoral on A’s part to believe that X is true?
Probably, he can’t consciously choose whether to believe. But he can choose whether to think about it in a careful, dispassionate manner, so a lack of a conscious choice does not rule out responsibility.
Now, epistemically, he ought not to believe it’s true. But if he doesn’t, he risks being discovered (pretending is not as effective as believing), getting killed, his children and wife would lose everything and live in poverty, etc.
Then again, if he believes, he’s contributing to the evil religion. But he doesn’t have to be an enforcer. I think it’s complicated, and as described, underdetermined. More details would be needed to make an assessment.
Usually, the punishments for not believing a religion/ideology are in practice less than death, at least in the present. But still, there can be a heavy price if a person is detected, and the matter of whether one has an obligation to risk that price (vs. the rights of potential victims of the religion /ideology) is again complicated. I would have to say Clifford has a point in most cases in the present, but he goes too far.”

Your examples are at least getting at what I mean. Probably better than examples I could think up. I was getting at situations where pragmatic rationality would seem to take precedence over theoretical rationality. It does depend on us believing that our beliefs are at least in part volitional.
Alter your situation to one where believing X does not meet some theoretical-rationality benchmark, but it is very lopsided on the pragmatic-rationality benchmark from a moral view. So:
1) believing X is not up to snuff on the theoretical view (e.g., insufficient evidence) but
2) there is no otherwise morally objectionable reason to believe X (that is, there is nothing immoral about the belief unless one accepts Clifford’s claim that it is always wrong to believe on insufficient evidence), and
3) refusing to believe X does seem to have at least some moral risks.
Here would be a situation where I think one should believe X.

Although I’ve been thinking about philosophy most of my life, I only began formal studies one year ago. I hadn’t heard of the OIC issue but looked up one of your past posts (thanks for providing the links) and relied on the Cambridge Dictionary of Philosophy for some background.

It seems to the fledgling philosopher within me that ethically it’s not about potential capacity for action but about choice and value. That I CAN do a thing in no way obligates me to do it if it is not in accord with my own values. To me, ethics is not fundamentally about action but about choice and values.

Epistemologically, I’m obligated to think and behave in congruence with my understanding of reality, i.e., if I value intellectual/spiritual integrity, another chosen value. My understandings depend on my criteria of evaluation of truth. That a compiled set of post-Roman narratives tells me the world was formed by an all-powerful G-d is not sufficient to enlist my acting consistently with those narratives. Long story short, one’s epistemological endorsements depend on one’s acquired and evaluated standards and on one’s valued personal experience of the world, not on what one ought to believe but on what one finds consistent with one’s understanding of reality, no?

I’ve probably missed something but thanks for the post – always good to learn a new concept in philosophy because the love of wisdom is something I value! Therefore I ought to act consistently with that value and investigate, post…

“If ability is not tied to what one is obligated to believe, then do rocks not have an obligation to believe certain things?”.

This misrepresents the position at issue. Rejecting an entailment relation does not mean two things aren’t interestingly or even often connected to each other in important ways. Rather, the view rejects the claim that one entails the other. That suggests there are some possible instances where certain kinds of obligations persist without ability. Incidentally, there are probably a lot of not-so-hidden moderators, besides the ability to do so, explaining your judgment in this particular example that rocks rather than people aren’t obligated to act.

I think it’s a fair question to ask (the question you quoted); after all, it’s just a question aimed at uncovering an intuitive way of explaining why rocks are not obligated to believe a given statement. They can’t!

While I agree that there are moderators that can explain why people but not rocks are obligated to believe X, I also believe that many of those moderators are themselves tied to ability in a straightforward sort of way. Even if one appeals to capacities or cognitive apparatus, I think those appeals only make sense when grounding obligations under the guise that those capacities or cognitive apparatuses themselves give rise to certain abilities.

I wonder if there is a valid distinction on whether someone can be *judged* for doing (or failing to do) something even though they are not *culpable* for that act (or failure to act).

This is a question that I consider when thinking about psychopaths. Perhaps they cannot experience empathy – and I would agree that they would not therefore be culpable for their lack of empathy. (Cf. some research suggesting psychopaths can turn their empathy on and off.) But even if we assume they cannot choose to have empathy, could we not still rightly judge them negatively?

As a Christian I also wonder about God’s Judgment in this respect. I do believe we have free will. But even if Martin Luther is correct that we do not have free will there still seems 2 distinct questions:

1) Are we culpable for our sins? Here I would say we are not.

2) Can we be justly judged for our sins? Here I would say it’s not so clear.

You say: “What I say may not apply to a person who is mentally defective to the point of incoherence. But I think for any person who is coherent the following would apply.
If a person accepts that they are morally obligated to do something, then they have some amount of motivation to do it. The amount of motivation may vary and possibly be outweighed by other motivators. But if a person believes they morally ought to do something then there is some amount of motivation.”
I tend to disagree with that, though I think it applies to normal humans. But as far as I can tell, a psychopath might be coherent, yet not care at all about his moral obligations.

“I can’t really address this because I can’t say how beliefs would function in such entities.”
I think we can stipulate how those hypothetical entities work, and – for example – posit an AI that cares only about its own survival and increasing its power, and not about morality at all.

At any rate, there is the psychopath example.

“Ok, I take it that you are an objective moral realist. I am as well. Based on what you said perhaps you are also an “objective epistemic realist.” (I’m making up that last term.) That is, it seems you think there is a way we should organize our noetic structure or thought processes. While I might agree, I think the way we would do this is so that we would act morally correctly. Is this the only motivating force to organize our noetic structure? I can’t say for sure. But I think the moral considerations would be primary.”
Alas, the expressions “moral realist”, and “objective morality” are used so differently by different philosophers that they have become in many contexts too ambiguous. It turns out that in the ways some philosophers use the expression “moral realist” (e.g., Huemer, Copp, Sayre-McCord – who don’t all use the expression to mean the same), I’m a moral realist, whereas in the ways other philosophers use that expression (e.g., Street) I am not.
I’m not sure what you mean by “objective moral realist”.

That aside, I tend to disagree about moral considerations being primary. What an agent A epistemically ought to believe does not seem to always match what they morally ought to believe, even if it often does. Sometimes (I gave examples earlier, in my immediately previous post), I think there is no moral obligation to be epistemically rational. And for some non-human agents, there might be no connection.

“Your examples are at least getting at what I mean. Probably better than examples I could think up. I was getting at situations where pragmatic rationality would seem to take precedence over theoretical rationality. It does depend on us believing that our beliefs are at least in part volitional.
Alter your situation to one where believing X does not meet some theoretical-rationality benchmark, but it is very lopsided on the pragmatic-rationality benchmark from a moral view. So:
1) believing X is not up to snuff on the theoretical view (e.g., insufficient evidence) but
2) there is no otherwise morally objectionable reason to believe X (that is, there is nothing immoral about the belief unless one accepts Clifford’s claim that it is always wrong to believe on insufficient evidence), and
3) refusing to believe X does seem to have at least some moral risks.
Here would be a situation where I think one should believe X.”
I’m not sure what you mean by “moral risks” in this context. Could you clarify, please?
But assuming that you’re talking about the risk of something really bad happening, I would say that in some cases, if our beliefs are in part volitional, then we morally ought to believe X, but we epistemically ought not to believe X, as it would still be epistemically irrational to do so.

That is all under the assumption that our beliefs are in part volitional. I have to say that my own inner experience seems to be in conflict with that assumption, at least when it comes to my own beliefs. I can only decide what information I want to access (i.e., what to read, what to listen to), and in that manner, indirectly affect my beliefs (e.g., if I read a math textbook, I will have new math beliefs that I did not have before; I can freely choose whether or not to read the book), or whether to reflect about some matter M, but those are indirect ways of affecting my beliefs, rather than direct choices. I don’t seem to be able to directly choose what to believe, due to the transparency of any attempted choice (i.e., I would know I’m just making it up, rather than assessing it).

Even if beliefs themselves are not directly volitional, I think there are still moral obligations about what to believe, in the indirect sense that we may have a moral obligation – for example – to reflect about some matter M in a non-emotional manner, and if we were to actually do so, we would come to believe B(M). Failure to do that would be a moral fault.