Tuesday, August 31, 2010

Not by self-report, at least. Here's a bit more data from a survey Josh Rust and I conducted of ethicists, non-ethicist philosophers, and comparison professors in other departments in five U.S. states. (Other preliminary survey results, and more about the methods, are here, here, here, here, here, here, and here.)

In the first part of the survey we asked respondents their attitudes about various moral issues. One item asked them to rate "Not keeping in at least monthly face-to-face or telephone contact with one’s mother" on a nine-point scale from "very morally bad" (1) through "morally neutral" (5) to "very morally good" (9). As it turned out, the respondent groups were all about equally likely to rate not keeping in contact on the morally bad side of the scale: 73% of ethicists said it was morally bad, compared to 74% of non-ethicist philosophers and 71% of non-philosophers (not a statistically significant difference). There was a small difference in mean response (3.4 for ethicists vs. 3.7 for non-ethicist philosophers and 3.3 for non-philosophers), but I suspect that was at least partly due to scaling issues. In sum, the groups expressed similar normative attitudes, with perhaps the non-ethicist philosophers a bit closer to neutral than the other groups. (Contrast the case of vegetarianism, where the groups expressed very different attitudes.)

In the second part of the survey we asked respondents to describe their own behavior on the same moral issues that we had inquired about in the first part of the survey. We asked two questions about keeping in touch with mom. First we asked: "Over the last two years, about how many times per month on average have you spoken with your mother (face to face or on the phone)? (If your mother is deceased, consider how often you spoke during her last two years of life.)" The response options were "once (or less) every 2-3 months", "about once a month", "2-4 times a month", and "5 times a month or more". Only the first of these responses was counternormative by the standards of the earlier normative question. By this measure there was a statistically marginal tendency for the philosophers to report higher rates of neglecting their mothers: 11% of ethicists reported infrequent contact, compared to 12% of non-ethicist philosophers and only 5% of non-philosophers (chi-square, p = .06). (There was a similar trend for the non-philosophers to report more contact overall, across the response options.)
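The chi-square comparison described above can be sketched in a few lines. The group sizes below are invented purely for illustration (the post reports only the percentages 11%, 12%, and 5%, not the underlying Ns), so the resulting p-value will not match the reported p = .06.

```python
# Hypothetical sketch of a chi-square test of independence across the
# three respondent groups. Counts are invented stand-ins, not the
# actual survey data.
from scipy.stats import chi2_contingency

# Rows: ethicists, non-ethicist philosophers, non-philosophers
# Columns: [infrequent contact, at-least-monthly contact]
table = [
    [22, 178],   # ~11% of a hypothetical n = 200
    [24, 176],   # ~12% of a hypothetical n = 200
    [10, 190],   # ~5% of a hypothetical n = 200
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```

With a 3x2 table the test has two degrees of freedom; the real analysis would use the actual group sizes from the survey.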

Second, we asked those with living mothers to report how many days it had been since their last telephone or face-to-face contact. The trend was in the same direction, but only weakly: 10% of ethicists reported its having been more than 30 days, compared to 11% of non-ethicist philosophers and 8% of non-philosophers (chi-square, p = .82). We also confirmed that age and gender were not confounding factors. (Older respondents reported less contact with their mothers, even looking just at cases in which the mother is living, but age did not differ between the groups. Gender did differ between the groups -- philosophers being more likely to be male -- but did not relate to self-reported contact with one's mother.) So -- at least to judge by self-report -- ethicists are no more attentive to their mothers than are non-ethicist professors, and perhaps a bit less attentive than professors outside of philosophy.

Maybe this isn't too surprising. But the fact that most people seem to find this kind of thing unsurprising is itself, I think, interesting. Do we simply take it for granted that ethicists behave, overall, no more kindly, responsibly, caringly than do other professors -- except perhaps on a few of their chosen pet issues? Why should we take that for granted? Why shouldn't we expect their evident interest in, and habits of reflection about, morality to improve their day-to-day behavior?

You might think that ethicists would at least show more consistency than the other groups between their expressed normative attitudes about keeping in touch with mom and their self-reported behavior. However, that was also not the case. In fact the trend -- not statistically significant -- was in the opposite direction. Among ethicists who said it was bad not to keep in at least monthly contact, 8% reported no contact within the previous 30 days, compared to 13% of ethicists reporting no contact within 30 days among those who did not say that a lack of contact was bad. Among non-ethicist philosophers, the corresponding numbers were 6% and 27%. Among non-philosophers, 4% and 14%. Summarized in plainer English, the trend was this: Among those who said it was bad not to keep in at least monthly contact with their mothers, ethicists were the ones most likely to report not in fact keeping in contact. There was also less correspondence between ethicists' expressed normative view and their self-reported behavior than for either of the other groups of professors (8%-13% being a smaller spread than either 6%-27% or 4%-14%). It bears repeating that these differences are not statistically significant by the tests Josh and I used (multiple logistic regression) -- so I draw only this weaker conclusion: Ethicists did not show any more consistency between their normative views and their behavior than did the other groups.

Perhaps the ethicists were simply more honest in their self-described behavior than were the other groups? -- that is, less willing to lie or fudge so as to make their self-reported behavior match up with their previously expressed normative view? It's possible, but to the extent we were able to measure honesty in survey response, we found no trend for more honest responding among the ethicists.

Tuesday, August 24, 2010

As I suggested in my previous post, we sometimes have the impression that people do not fully endorse their delusions. In some circumstances, they don’t seem to act in a way that is consistent with genuinely believing the content of their delusions. For instance, a person with persecutory delusions may accuse the nurses in the hospital of wanting to poison him, and yet happily eat the food he’s given; a person with Capgras delusion may claim that his wife has been replaced by an impostor but do nothing to look for his wife or make life difficult for the alleged “impostor”.

Some philosophers, such as Shaun Gallagher, Keith Frankish and Greg Currie, have argued on the basis of this phenomenon (which is sometimes called “double bookkeeping”) that delusions are not beliefs. They assume that action guidance is a core feature of beliefs and maintain that, if delusions are not action guiding, then they are not beliefs. Although I have sympathies with the view that action guidance is an important aspect of many of our beliefs, I find the argument against the belief status of delusions a bit too quick.

First, as psychiatrists know all too well, delusions lead people to act. People who believe that they are dead (Cotard delusion) may become akinetic, and may stop eating and washing as a result. People who suffer from delusions of guilt, and believe they should be punished for something evil they have done, engage in self-mutilation. People who falsely believe they are in danger (persecutory delusions) avoid the alleged source of danger and adopt so-called “safety behaviours”. The list could go on. In general it isn’t true that delusions are inert.

Second, when delusions don’t cause people to act, a plausible explanation is that the motivation to act is not acquired or not sustained. Independent evidence suggests that people with schizophrenia have meta-representational deficits, flattened affect and emotional disturbances, which can adversely affect motivation. Moreover, as Matthew Broome argues, the physical environment surrounding people with the delusion doesn’t support the action that would ensue from believing the content of the delusion. The content of one’s delusion may be so bizarre (e.g., “There’s a nuclear reactor in my belly”) that no appropriate action presents itself. The social environment might be equally unsupportive. One may stop talking about one’s delusion and acting on it to avoid incredulity or abuse from others.

My view that delusions are continuous with ordinary beliefs is not challenged by these considerations: maybe to a lesser extent than people with delusions, we all act in ways that are inconsistent with some of the beliefs we report - when we’re hypocritical - and we may fail to act on some of our beliefs for lack of motivation - when we’re weak-willed.

Friday, August 20, 2010

Brian Leiter just revealed this year's selections for the Philosopher's Annual "attempt to pick the ten best papers of the year". I'm led to wonder: How successful are such attempts? Selection for the Philosopher's Annual is certainly prestigious -- but to what extent does selection reflect durable quality as opposed to other factors?

There is of course no straightforward way to measure philosophical quality. But here's my thought: If an article is still being cited thirty years after publication, that's at least a sign of influence. [Insert here all necessary caveats, qualifications, hesitations, and hem-haws about the relationship between quality and citation rates.]

I compared citation rates of the 30 articles appearing in the first three volumes of Philosopher's Annual (articles published 1977-1979) with the citation rates of the first ten articles published in Philosophical Review during each of those same years. (I only included citations of the articles' original appearances, not citations of later reprints of those articles, since the latter are much harder to track. I see no reason to think this would bias the results.) To the extent the Philosopher's Annual selection committee adds valuable expertise to the process, they ought to be able to beat a dumb procedure like just selecting the first ten articles each year in a leading journal.

Evidently, however, they don't. Or at least they didn't. (Time will tell if this year's committee did any better.)

The median total citation rate in the ISI citation database was 14 for Philosopher's Annual and 18 for Philosophical Review -- that's total citations in indexed journals over the course of 30+ years (excluding self-citations), less than half a citation per year on average. (The difference in median is not statistically significant by the Mann-Whitney test, p = .72.) The median total citations since 2001 is 2.5 for Philosopher's Annual and 3.5 for Philosophical Review (again not significantly different, p = .62).
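The median comparison above can be sketched with the Mann-Whitney test. The citation-count lists below are invented stand-ins (the post reports only the medians and p-values, not the full data):

```python
# Hypothetical sketch of the Mann-Whitney comparison of citation counts.
# Both lists are invented for illustration, not the actual ISI data.
from scipy.stats import mannwhitneyu

pa_cites = [0, 2, 3, 5, 9, 12, 14, 14, 20, 31, 44, 118, 224, 301]  # hypothetical
pr_cites = [4, 6, 10, 13, 16, 18, 18, 21, 25, 30, 40, 55, 73]      # hypothetical

u, p = mannwhitneyu(pa_cites, pr_cites, alternative="two-sided")
print(f"U = {u}, p = {p:.2f}")
```

The Mann-Whitney test is a natural choice here since citation counts are heavily skewed, making a t-test on the raw counts unreliable.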

But the medians might not tell the whole story here. Look at the distribution of citation rates in the following graph.

Although the median is about the same, it looks like Philosophical Review has more articles near the median, while Philosopher's Annual has more barely-cited articles and much-cited articles. It's hard to know this for sure, though, since we're dealing with small numbers subject to chance fluctuation: Only three articles, all from Philosopher's Annual, had 100 or more citations. (My measure of the difference in spread is statistically marginal: Levene's test for equal variances on a log(x+1) transform of the data, p = .09.)
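The variance comparison can be sketched as follows. Again the citation counts are invented for illustration; the point is the procedure, namely Levene's test applied after a log(x+1) transform to tame the skew.

```python
# Hypothetical sketch of Levene's test for equal variances on
# log(x+1)-transformed citation counts. Data are invented stand-ins.
import numpy as np
from scipy.stats import levene

pa = np.log1p([0, 1, 1, 2, 5, 9, 14, 19, 40, 118, 224, 301])  # hypothetical
pr = np.log1p([6, 10, 13, 16, 18, 18, 21, 25, 30, 40, 73])    # hypothetical

stat, p = levene(pa, pr)
print(f"Levene W = {stat:.2f}, p = {p:.3f}")
```

The log(x+1) transform (np.log1p) keeps zero-citation articles in the analysis while pulling in the long right tail of heavily cited articles.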

The three articles with at least 100 citations? Walton's "Fearing Fictions" (118 cites), Lewis's "Attitudes De Dicto and De Se" (224 cites), and Perry's "The Problem of the Essential Indexical" (301 cites) -- all good choices for influential articles from the late 1970s. The most cited article from the Philosophical Review list was Watson's "Skepticism about Weakness of Will" (73 cites). Worth noting: Lewis's "Attitudes De Dicto and De Se", though counted toward PA rather than PR, was actually published in Philosophical Review -- just not among the first ten articles its year. Also, skimming forward through the mid-1980s, my impressionistic sense is that it is not the case that 10% of the PA articles are as influential as the three just mentioned. So possibly the apparent difference is chance after all, at least on the upside of the curve.

Maybe those people who selected articles for Philosopher's Annual in the late 1970s were more likely both to pick swiftly-forgotten duds and to pick home-run articles, compared to an arbitrary sample from Philosophical Review. The selection process did not appear to favor quality, as measured by influence over three decades, but possibly the selection procedure added variance in quality, both on the upside and the downside. The normative implications of all this, I leave to you.

UPDATE, August 22:

Given the uncertainty about whether Philosopher's Annual shows greater variance, I couldn't resist looking at the next three years (which, it turns out, are articles published in 1980, 1981, and 1983, skipping 1982). The trend toward similar median and greater variance appears to be confirmed. On the high side, PA had three articles with 100 or more cites (R. Dworkin "What is Equality: Part 2" [426 cites], Lewis "New Work for a Theory of Universals" [262], and Shoemaker "Causality and Properties" [106]), while PR had only one article in that group, the least-cited of the four (Dupre, "Natural Kinds and Biological Taxa" [104]). On the low side, PA had 6 articles cited 3 times or fewer, while PR had only 3. Here is a new graph of the spread, merging all the data from 1977-1983:

Monday, August 16, 2010

Suppose that Chloe suffers from a delusion of erotomania and believes that President Obama is secretly in love with her. Chloe has never met him, so how does she know about his feelings? When probed, Chloe may offer no reason in support of her belief or offer reasons that others would consider unsatisfactory or irrelevant (e.g., “He is sending me love messages that only I can decipher”).

One explanation is that the belief is so certain for Chloe that she doesn’t feel the need to provide a justification for it. John Campbell argued that at least some delusions play the role of framework beliefs, a notion introduced by Wittgenstein in On Certainty. Framework beliefs (e.g., “The Earth existed long before my birth”) are central to our world-view and become virtually indubitable. They are the pillars on which the rest of our belief system rests, and can’t themselves be justified on the basis of beliefs that are more certain. However, they are manifested in our way of life - we wouldn’t believe our grandparents’ war stories if we thought that the Earth had come into existence at the same time as we did. In my view, delusions are unlikely to play the same role as framework beliefs. Framework beliefs are typically shared by an entire linguistic community; delusions are not. Framework beliefs are perfectly integrated in a belief system, whereas delusions are often in conflict with other beliefs.

What puzzles us about those delusions that seem to come out of nowhere is that the person reports them with conviction but doesn’t seem to genuinely endorse them, whereas there is no doubt that framework beliefs are endorsed. Richard Moran developed the notion of authorship, which captures the sense in which we know what our beliefs are on the basis of the fact that we endorse their content. We can introspect some of our beliefs. We can infer some of our beliefs from our past behaviour. But at times we know that we believe that p, because we have made our mind up that p based on evidence for p. This mode of knowledge is direct like introspection, but it’s not as passive as perceiving a belief floating around in our stream of consciousness, and doesn’t involve looking inward, but looking outward, at the evidence for p. I know that I believe that the death penalty should be abolished because I have good reasons to believe that the death penalty should be abolished.

When I justify my beliefs with reasons that I regard as my best reasons, according to Moran I’m the author of the belief. The notion of authorship combines aspects of rationality and self-knowledge that we tend to take for granted. We expect that, if Chloe is convinced that Obama is in love with her, she must have some reasons to believe that, and she must be able to justify her belief on the basis of those reasons. But in the case of delusions, authorship can be fully or partially compromised. This suggests that people like Chloe experience a failure of self-knowledge.

Friday, August 13, 2010

Ten years ago, when I visited the Luray Caverns in Virginia, the tour guide took us down to the deepest cave and turned off the lights. He told us to wave our hands in front of our faces and asked if we could see our hands waving. We could, faintly -- or so we thought. He then asked us to wave our hands in front of our friends' faces. Our friends' hands we couldn't see at all. When we thought we could see our own hands we were fooling ourselves, he said. Not a single photon penetrated that darkness; no light actually came from our hands into our eyes.

Call this the spelunker illusion. The existence of this illusion appears to be common lore among avid cave explorers. I have also confirmed it in more pedestrian lightless environments. Yet no psychologist discusses it in any of the literature I've reviewed in writing my forthcoming chapter on visual experience in darkness. But surely someone must have written a treatment? If you know of any discussions, I'd appreciate the reference!

I see three possible explanations of the spelunker illusion:

(1.) The brain's motor output creates hints of visual experience in accord with that output.

(2.) Since you know you are moving your hand, your visual system interprets low-level sensory noise in conformity with your knowledge (much as you might see a meaningful shape in a random splash of line segments).

(3.) There is no visual experience of motion, but the spelunker mistakenly thinks there is such experience because she expects there to be.

There might be other explanations too. I can see how we might start to empirically tease apart the three explanations above. For example, (1) and (2) seem to come apart if you have your friend move your passive hand rather than actively moving your hand yourself. And (3) can come apart from (1) and (2) if we can quash the expectation of experience.

Is this a mere curious triviality? Maybe. But the three explanations above do bear somewhat differently on different theories of sensory experience and our knowledge of it. Like other illusions, this illusion promises to reveal something about the hidden operation of the visual system, if it can be properly understood.

Wednesday, August 11, 2010

Irrationality is considered a defining feature of delusions in many influential definitions. But in what sense are delusions irrational?

Delusions can be procedurally irrational if they are badly integrated in one’s system of beliefs. They can also be inconsistent with other beliefs one has. Lucy, who has Cotard delusion, believes at the same time that dead people are motionless and speechless, that she can move and talk, and that she is dead. Here there is an apparent inconsistency that is “tolerated”; that is, it doesn’t lead her to revise or reject one of the beliefs. Typical delusions are epistemically irrational, that is, they are badly supported by the evidence available to the subject, and they are not adequately revised in the light of new evidence. John, who suffers from anosognosia, doesn’t acknowledge that one of his legs was amputated and explains the fact that he can’t climb stairs any longer by a violent attack of arthritis.

These examples are striking. For many philosophers, the irrationality of delusions is a reason to deny that delusions are beliefs. Lucy can’t really believe that she’s dead; maybe what she means is that she feels empty and detached, as if she were dead. John can’t really believe that he has both legs, because there is no problem with his visual perception. Maybe he wishes he still had both legs. This way of discounting delusions as metaphorical talk or wishful thinking is appealing. It is based on the view that there is a rationality constraint on the ascription of beliefs. We wouldn’t be charitable interpreters if we ascribed to Lucy the belief that she’s dead and to John the belief that he has arthritis.

I want to resist the idea that people with delusions don’t mean what they say. First, people often act on their delusions and base even important decisions in their lives on the almost unshakeable conviction that the content of their delusions is true. We couldn’t make sense of their behaviour at all if we couldn’t ascribe to them delusional beliefs. Second, the type of irrationality delusions exhibit is not qualitatively different from the irrationality of ordinary beliefs. Delusions may be irrational to a greater extent than ordinary beliefs, and the examples we considered were certainly puzzling, but procedural and epistemic irrationality can be found closer to home.

Students believe that wearing clothes of a certain colour will bring them good luck during the exam, and nurses believe that more accidents occur on nights with a full moon. These beliefs are certainly inconsistent with other beliefs well-educated people have about what counts as the probable cause of an event. Prejudiced beliefs about black people being more violent are groundless generalisations that can be just as insulated from evidence and as resistant to change as clinical delusions.

Maybe what makes delusions so puzzling is only that they are statistically less common (not necessarily more irrational) than other procedurally and epistemically irrational beliefs.

In the last five years I have been working on the nature of clinical delusions, and have asked what they can tell us about the philosophy and psychology of belief. Clinical delusions are a symptom of a variety of psychiatric disorders, among which are schizophrenia and dementia. Some delusions have fairly mundane content, such as delusions of persecution or jealousy. Other delusions are very bizarre, and people may come to assert that they are dead (Cotard delusion) or that their spouse or family member has been replaced by an impostor (Capgras delusion).

In the Diagnostic and Statistical Manual of Mental Disorders (American Psychiatric Association DSM-IV-TR, 2000), delusions are defined in epistemic terms, as beliefs that are false, insufficiently supported by the available evidence, resistant to counterevidence and not shared by other people belonging to the same cultural group. In the philosophical literature it is an open question whether delusions should be considered genuine instances of belief.

According to the two-factor theory of delusions, delusions are explanatory hypotheses for an abnormal experience which is due to brain damage. The first factor contributing to the formation of the delusion is a neuropsychological deficit and the second factor is an impairment in the evaluation of hypotheses. Imagine that, overnight, Julia’s sister appears different to Julia, and this is a powerful experience. One possible explanation is that an alien has abducted Julia’s sister during the night and replaced her with an almost identical replica without anybody else noticing. This hypothesis is implausible and even Julia would consider it highly improbable, but if her hypothesis-evaluation system doesn’t work properly and doesn’t dismiss it, Julia may endorse it as something she truly believes. As a result, she may become hostile and even aggressive towards the alleged impostor.

Even from the oversimplified example above, one can see where the tension is. On the one hand, delusions seem to be just like any other beliefs. They are reported with sincerity and they can affect the person’s other intentional states and behaviour. They “make sense” of a very unusual experience. On the other hand, there is a neuropsychological deficit at the origin of delusions that is not present in the non-clinical population. The affective channel of Julia’s facial recognition process is damaged. The good functioning of the hypothesis-evaluation system is also compromised, maybe due to exaggerated versions of common reasoning biases. Julia “jumped to conclusions” as she accepted her initial hypothesis on the basis of insufficient evidence and without considering other, more probable, alternatives. This unusual aetiology and the apparent extreme irrationality might seem to be in tension with the view that delusions are “beliefs” in the ordinary sense of that term.

However, in my view, the main difference between clinical delusions and other irrational beliefs is that delusions severely undermine well-being. People with schizophrenia are often isolated and withdrawn and their life plans are disrupted. But on purely epistemic grounds we can’t easily tell delusions apart from the false beliefs that we ourselves report and ascribe to others on an everyday basis, such as: “Women make poor scientists” or “I failed the exam because the teacher hates me”. Irrationality is indeed all around us.

Monday, August 02, 2010

The standard view in contemporary epistemology is that knowledge entails belief. Proponents of this claim rarely offer a positive argument in support of it. Rather, they tend to treat the view as obvious, and if anything, support the view by arguing that there are no convincing counterexamples. We find this strategy to be problematic. In particular, we do not think the standard view is obvious, and moreover, we think there are cases in which a subject can know some proposition P without (or at least without determinately) believing that P. In accordance with this, we present four plausible examples of knowledge without belief, and we provide empirical evidence which suggests that our intuitions about these scenarios are by no means atypical.

Comments welcome, as always! (Either on this post or to my email address.)

This research was previously summarized in this post. The current version presents the issues and results in more detail and includes some new controls to address objections raised in the comments to the earlier post.