Thursday, April 07, 2016

Consider cases in which a person sincerely endorses some proposition ("women are just as smart as men", "family is more important than work", "the working poor deserve as much respect as the financially well off"), but often behaves in ways that fail to fit with that sincerely endorsed proposition (typically treats individual women as dumb, consistently prioritizes work time over family, sees nothing wrong in his or others' disrespectful behavior toward the working poor). Call such cases "dissonant cases" of belief. Intellectualism is the view that in dissonant cases the person genuinely believes the sincerely endorsed proposition, even if she fails to live accordingly. Broad-based views, in contrast, treat belief as a matter of how you steer your way through the world generally.

Dissonant cases of belief are, I think, "antecedently unclear cases" of the sort I discussed in this post on pragmatic metaphysics. The philosophical concept of belief is sufficiently vague or open-textured that we can choose whether to embrace an account of belief that counts dissonant cases as cases of belief, as intellectualism would do, or whether instead to embrace an account that counts them as cases of failure to believe or as in-between cases that aren't quite classifiable either as believing or as failing to believe.

I offer the following pragmatic grounds for rejecting intellectualism in favor of a broad-based view. My argument has a trunk and three branches.

--------------------------------------------

The trunk argument.

Belief is one of the most central and important concepts in all of philosophy. It is central to philosophy of mind: Belief is the most commonly discussed of the "propositional attitudes". It is central to philosophy of action, where it's standard to regard actions as arising from the interaction of beliefs, desires, and intentions. It is central to epistemology, much of which concerns the conditions under which beliefs are justified or count as knowledge. A concept this important to philosophical thinking should be reserved for the most important thing in the vicinity that can plausibly answer to it. The most important thing in the vicinity is not our patterns of intellectual endorsement. It is our overall patterns of action and reaction. What we say matters, but what we do in general, how we live our lives through the world -- that matters even more.

Consider a case of implicit classism. Daniel, for example, sincerely says that the working poor deserve equal respect, but in fact for the most part he treats them disrespectfully and doesn't find it jarring when others do so. If we, as philosophers, choose to describe Daniel as believing what he intellectually endorses, then we implicitly convey the idea that Daniel's patterns of intellectual endorsement are what matter most to philosophy: Daniel has the attitude that stands at the center of so much of epistemology, philosophy of action, and philosophy of mind. If we instead describe Daniel as mixed-up, as in-betweenish, or even as failing to believe what he intellectually endorses, we do not implicitly convey that intellectualist idea.

Branch 1.

Too intellectualist a view invites us to adopt noxiously comfortable opinions about ourselves. Suppose our implicit classist Daniel asks himself, "Do I believe that the working poor deserve equal respect?" He notices that he is inclined sincerely to judge that they deserve equal respect. Embracing intellectualism about belief, he concludes that he does believe they deserve equal respect. He can say to himself, then, that he has the attitude that philosophers care about most – belief. Maybe he lacks something else. He lacks "alief" maybe, or the right habits, or something. But based on how philosophers usually talk, you'd think that's kind of secondary. Daniel can comfortably assume that he has the most important thing straightened out. But of course he doesn't.

Intellectualist philosophers can deny that Daniel does have the most important thing straightened out. They can say that how Daniel treats people matters more than what he intellectually endorses. But if so, their choice of language mismatches their priorities. If they want to say that the central issue of concern in philosophy is, or should be, how you act in general, then the most effective way to encourage others to join them in that thought is to build the importance of one's general patterns of action right into the foundational terms of the discipline.

Branch 2.

Too intellectualist a view hides our splintering dispositions. Here's another, maybe deeper, reason Daniel might find himself too comfortable: He might not even think to look at his overall patterns of behavior in evaluating what his attitude is toward the working poor. In Branch 1, I assumed that Daniel knew that his spontaneous reactions were out of line, and he only devalued those spontaneous reactions, not thinking of them as central to the question of whether he believed. But how would he come to know that his spontaneous reactions are out of line? If he's a somewhat reflective, self-critical person, he might just happen to notice that fact about himself. But an intellectualist view of the attitudes doesn’t encourage him to notice that about himself. It encourages Daniel, instead, to determine what his belief is by introspection of or reflection upon what he is disposed to sincerely say or accept.

In contrast, a broad-based view of belief encourages Daniel to cast his eye more widely in thinking about what he believes. In doing so, he might learn something important. The broad-based approach brings our non-intellectual side forward into view, while the intellectualist approach tends to hide that non-intellectual side. Or at least it does so to the extent we are talking specifically about belief -- which is of course a large part of what philosophers actually talk about in philosophy of mind, philosophy of action, and epistemology.

Another way in which intellectualism hides our splintering dispositions is this: Suppose Suleyma has the same intellectual inclinations as Daniel but unlike Daniel her whole dispositional structure is egalitarian. She really does, and quite thoroughly, have as much respect for the custodian as for the wealthy business-owner. An intellectualist approach treats Daniel and Suleyma as the same in any domain where what matters is what one believes. They both count as believers, so now let's talk about how belief couples with desire to beget intentions, let's talk about whether their beliefs are justified, let's talk about what set of worlds makes their beliefs true -- for all these purposes, they are modeled in the same way. The difference between them is obscured, unless additional effort is made to bring it to light.

You might think Daniel's and Suleyma's differences don't matter too much. They're worth hiding, eliding, or disregarding unless for some reason those differences become important. If that's your view, then an intellectualist approach to belief is for you. If, on the other hand, you think their differences are crucially important in a way that ought to disallow treating them as equivalent in matters of belief, then an intellectualist view is not for you. Of course, the differences matter for some purposes and not so much for others. The question is whether on balance it's better to put those differences in the foreground or to tuck them away as a nuance.

Branch 3.

Too intellectualist a view risks downgrading our responsibility. It's a common idea in philosophy that we are responsible for our beliefs. We don't choose our beliefs in any straightforward way, but if our beliefs don't align with the best evidence available to us we are epistemically blameworthy for that failure of alignment. In contrast, our habits, spontaneous reactions, that sort of thing -- those are not in our control, at least not directly, and we are less blameworthy for them. My true self, my "real" attitude, the being I most fundamentally am, the locus of my freedom and responsibility -- that's constituted by the aspects of myself that I consciously endorse upon reflection. You can see how the intellectualist view of belief fits nicely with this.

I think that view is almost exactly backwards. Our intellectual endorsements, when they don't align with our lived behavior, count for little. They still count for something, but what matters more is how we spontaneously live our way through the world, how we actually treat the people we are with, the actual practical choices we make. That is the "real" us. And if Daniel says, however sincerely, that he is an egalitarian, but he doesn't live that way, I don't want to call him a straight-up egalitarian. I don't want to excuse him by saying that his inegalitarian reactions are mere uncontrollable habit and not the real him. It's easy to talk. It's hard to change your life. I don't want to let you off the hook for it in that way, and I don't want to let myself off the hook. I don't want to say that I really believe and I am somehow kind of alienated from all my unlovely habits and reactions. It's more appropriately condemnatory to say that my attitude, my belief state, is actually pretty mixed up.

It's hard to live up to all the wonderful values and aspirations we intellectually endorse. I am stunned by the breadth and diversity of our failures. What we sincerely say we believe about ourselves and the people around us and how we actually spontaneously react to people and what we actually choose and do -- so often they are so far out of line with each other! So I think we've got to have quite a lot of forgiveness and sympathy for our failures. My empirical, normative, pragmatic conjecture is this: In an appropriate context of forgiveness and sympathy, the best way to frankly confront our regular failure to live up to our verbally espoused attitudes is to avoid placing intellectual endorsements too close to the center of philosophy.

10 comments:

Tad said...

Interesting! Are you suggesting that the chief (defining?) difference between Intellectualist views and Broad-based ones is that in the former appropriate action is not a necessary condition for belief (perhaps only sincere endorsement), but in the latter it is?

Randy said...

Eric, this is a timely post for my epistemology class, as we've just been through your "Knowing your own beliefs" and are now covering Gendler's papers on alief. Some of my students have asked about how the concept of alief would square with your liberal dispositionalism and I'm not sure I know. In this post it seems as if you might want to reject it as it would tend to drain belief of some of its behavioral significance.

But there is a problem, I think, with wanting the concept of belief to account for too much of our behavior. The distinction between a System 1-ish associative state that activates motor routines and takes over under a cognitive load, and a System 2-ish state that is associated with reflective rationality and accorded some non-trivial ability to override primitive impulses, strikes me as useful and also not inherently committed to the kind of intellectualism that you are criticizing here.

In KYOB you consider Piotr who sincerely avows that people deserve equal respect but does not behave accordingly. He’s a beautifully recognizable character, but perhaps a bit of a caricature. Most of us are just south of Piotr behaviorally. We sincerely avow that people deserve equal respect, treat people that way when it is convenient, fail to treat people that way under a great deal of stress, but in most “in-between” cases notice in ourselves an impulse to treat people poorly, which we can usually overcome. The trick, I think, is to find a conception of belief that is not restricted to the prediction of utterances, box-checking and voting behavior, but which is still well-defined enough to explain this sort of thing, as well as less challenging situations.

Of course I understand there is a problem on your view with thinking of belief in explanatory terms, since beliefs on your view are not what causes behavioral dispositions, but the dispositions themselves.

I'm sure my class would love to hear any way you can straighten me out here.

One further thought. Your anti-intellectualist argument really seems to have its strongest appeal at the level of normative beliefs, which is where we often see the discordancy between what we say we believe and what we do. With empirical beliefs, such as a belief about where I live or who my children are or which car is mine, we hardly ever see it. So an intellectualist might respond that she agrees with you that belief is extremely important, and specifically what we want to account for is how this kind of state typically does produce appropriate behavior. Of course, we also want to explain why it sometimes doesn't, especially in the case of normative beliefs, but that in no way diminishes the behavioral significance of belief itself.

Eric said...

Tad: That's probably a bit too baldly and simply put, but yes, something in that direction.

Randy: My view of alief/belief is that there are some very alief-like cases and some very belief-like cases but that there's a vast expanse between, where we have some of the belief-y stuff and some of the alief-y stuff. So the term can be useful in my view, but it's potentially misleading in suggesting a clean sort. I have pretty much the same view about the System 1 / System 2 distinction. There's probably no pure System 2 stuff (I'm not sure what that would even mean) -- but there are clear cases of System 1-ish processes and clear cases of System 2-ish processes. For the kinds of big-picture folk-psychology-ish things that we care about (not e.g., edge detection), probably it's typically a mix. So it's a useful distinction but again possibly misleading if it suggests a clean sort.

On explanation: This is something I plan to straighten out more clearly in a follow-up paper, but I have a two-pronged approach, depending on whether one is committed to the idea that beliefs cause behavior (and other manifestations). Prong one: If beliefs do not cause behavior (because, as some but not all metaphysicians think, dispositions don't cause their manifestations), then one can still explain, but in a "pattern explanation" way. For example, Kepler's mathematics of the orbits could explain the particular location of Venus in the sky non-causally by fitting it into a pattern. Prong two: If beliefs do cause behavior, then the explanation story is more conventional. One move available to me, if one insists both that beliefs must cause behavior and that dispositions cannot cause behavior, is the "jujitsu" move (as I call it in a 2013 paper) of identifying the belief, token for token, with whatever, in the relevant world, happens to be the causal structure responsible both for the manifested dispositions and (centered on that world) for the counterfactual tendency to manifest the other dispositions.

On normative beliefs: Here I think the relevant issue is not so much whether the belief is a normative one but rather whether possessing (or failing to possess) the belief is normatively loaded for you (which is true for most normative beliefs but also for some non-normative beliefs, e.g., whether my son will graduate college). I agree that straightforward, empirical, not-normatively-loaded beliefs tend not to splinter. But even there, I'd point to cases like believing, or not, that the bridge you usually take to work is closed. If you're driving blithely toward it, instead of on the alternate route, expecting to cross it, but at the same time you are such that if you thought *explicitly* about it you'd remember that it was closed, then your belief state is, in my taxonomy, probably best thought of as in-between (this is "Ben" in my 2010). One advantage of classifying forgetfulness cases as in-between rather than as straight-out believed is that you'll need in-between cases for gradual forgetting over time anyway (e.g., "Konstantin" in my 2001), and this yields a unified treatment.

Aleksandar said...

Thanks for posting! I'm more interested in where your intuition against intellectualism comes from. This is my hypothesis: the reason we want to know about other people's mental states (beliefs included) is that they predict people's behavior, i.e., what effects other people will have on us (good or bad, etc.), so that we can adopt the right attitude toward them (e.g., avoid them if bad). So, when a person believes X but acts inconsistently with that belief, we're inclined to deny that they really believe X, because the predictive aspect of their belief is gone (i.e., we are inclined to reject 'intellectualism').

In other words: we are more inclined to 'broad-based view' (that treats belief as a matter of how you steer your way through the world) when we evaluate *other* people's beliefs because it helps *us* to steer our way through the world. On the other hand, we tend to accept intellectualism when we're evaluating our own beliefs because we don't predict our own behavior in the same way we predict other people's behavior (i.e. by figuring out what we believe).

Eric said...

Thanks for that interesting comment! That seems like a plausible conjecture. A possible competing conjecture might be that for others we have to rely more on outward behavior, because we don't have as much information about the internal side. It does seem likely to me that intellectualism is somewhat intuitively difficult to deny in the first-person case.

Francois said...

Just a remark (following Aleksandar’s comment and your response to it). Something seems amusing to me: If indeed what makes us intellectualist or anti-intellectualist about belief is (at least partly) the relevance of our concept of belief for the prediction of behavior, then it seems to me that it may have the consequence that we should sometimes adopt intellectualism about belief, and sometimes anti-intellectualism, depending on the context in which the agent we are thinking of is embedded (as well as some rather high-level properties of this agent). For example: let’s say I live in a highly artificial and linguistic environment (A), in which I don’t have a lot of actual face-to-face physical interactions with my fellow humans, and in which my most relevant actions are constituted by conscious and careful linguistic utterances. Then maybe in this case intellectualism about belief is justified (because it doesn’t leave out much, compared to a “broad-based” view of belief, and intellectualism about belief has advantages of its own: it fits our first-person conception of belief, it makes beliefs rather easily accessible and knowable, etc.).

On the contrary, if I live in an environment (B) in which my conscious and careful linguistic utterances are not causally relevant, or almost not causally relevant, then most of my important “behaviors” won’t be predictable on the basis of my intellectual beliefs, though they would be on the basis of my broad-based beliefs. So, in these contexts, anti-intellectualism about belief would seem more justified. What is funny is that:

1/ It seems that most real environments (past, present, and future) are somewhere on a spectrum that goes from A to B, so it may be that intellectualism or anti-intellectualism about belief is more or less relevant depending on where the agents we are considering live.

2/ It may be that technological evolution makes our environment more and more like A, and less and less like B. So maybe technology is making our environment one in which intellectualism about belief is more and more relevant.

3/ It may be that, compared to other people, academics live in environments that are closer to A. I don’t know whether this could justify intellectualism about beliefs about academics themselves, but this could maybe explain (following Aleksandar’s comment) why academics have traditionally tended to find intellectualism about belief attractive. After all, the kind of beliefs that count, when you want to understand major parts of an academic’s behavior (notably, the papers she writes), seem to be her “intellectual” beliefs.

Chinaphil said...

This seems good, and certainly my initial objections were all from an intellectualist angle. But I wonder if there's a pragmatist counter argument as well. This is rather unformed, but I'm thinking about something along these lines:

Most of our actions are not motivated by any kind of belief at all. Most of our actions are instinctive things like breathing, walking, and chewing. And ascribing non-belief-driven actions to some kind of logical, belief-related framework is one of the classic mistakes of human cognition (originally extending far beyond people - we ascribed intention and beliefs to the random interactions of physical stuff for millennia).

Beginning with the assumption that every action which *could* be related to a belief *is in fact* connected to that belief is simply replaying that old fallacy, or at the very least, failing to take a scientific and/or objective view of the subject. Just as it is genuinely difficult to demonstrate that astrology isn't true, because you can always find plenty of evidence to point you in either direction, you can look at the tea leaves of a person's behaviour and read into it whatever you want. Not just that: you are choosing to systematically bias the way you read the tea leaves by relating their behaviour to the small subset of beliefs which we commonly choose to discuss.

There is also an issue of respect to consider. Your branch 1 says: "Too intellectualist a view invites us to adopt noxiously comfortable opinions about ourselves"; but too pragmatic a view invites us to adopt views about the contents of other peoples' minds in a way that we should not feel too comfortable with, either. I think you make an assumption of transparency in the ways that beliefs ought to play out in conduct; but that assumption is very troubling. The classic example that I've seen discussed is conservatives incensed by the argument that they "don't care" about the poor.

There is a curious link to your recent work on rationalisation. If rationalisation is the internal building of explanatory frameworks after action, isn't what you're proposing here a bit like the mirror image of that, building explanatory frameworks about a person's "beliefs" after observing their action? Obviously there's an asymmetry of information which explains why one is problematic, and the other less so, but there still seems to me to be a common element: that in both cases something has to be denied. The rationaliser has to deny their real motives; the pragmatic judger of beliefs has to deny the subject's claims about their own beliefs.

I'm not at all sure if those arguments can be hung together as a tree or any other structure!

Eric said...

Francois and Chinaphil: Thanks for those thoughtful and helpful comments. I'm embarrassed that I have neglected them until now.

Francois: That is very interesting, and I think I agree with all of it. In my 2002 and 2010 papers on belief, for example, I say that whether we want to attribute a belief in an in-between case can depend on the context of ascription: If what we care about is what the person would say about P (e.g., in a debate), then attributing the intellectually endorsed proposition might be the way to go. I hadn't thought about its connection to academia and evolving technology, though!

Chinaphil: I do see your two points as tied together. My preferred metaphysics of belief and other attitudes is dispositionalist, so that all there is to caring about the poor, for example, is being disposed to help them, feeling bad when they suffer, etc. So if we have access to the person's full range of dispositions we already know their attitude -- there's not even anything to "read off" or anything "transparent" that needs to be seen through. Similarly, believing that the ground is solid is in part constituted by the act of not walking fearfully upon it. In light of that metaphysics, my thought about your first point is this: *If* "beliefs" were some sort of stored internal representations that needed to be retrieved and deployed then I'd agree that most of our behavior is not "motivated" by belief at all. But if belief is not ontologically separate from the patterns of behavior, then the way to see it is this: Our actions and reactions form patterns. Of course they do. One way of *labeling* certain types of patterns is with the language of belief.

All that said, of course there is plenty of room for uncharitably biased interpretations and rationalization. But in my view those are bad for just the usual reasons that uncharitably biased interpretations and rationalizations are bad -- nothing particularly harmful (I think) about my account of belief in these respects.

Well, one caveat on that last point: My account does make it less likely that we will accept people's sincere-seeming attitude self-ascriptions as authoritative. That does, I think, cut both ways. I'm inclined to think that in sum it's good -- especially if handled with a disposition toward forgiveness and sympathy, in light of knowledge of one's own failings, as I recommend.