Monday, August 13, 2012

It occurs to me to doubt whether the external world exists -- that is, whether anything exists other than my own stream of conscious experience. Radical solipsism is of course crazy. But can I show it to be wrong? Or is my only recourse simply to assume it's wrong, since any attempt at proof would be circular? Can I break out of the solipsistic circle, using facts about my own stream of experience, knowable from within the solipsistic perspective, to license an inference to the existence of something beyond my stream of experience?

The most famous philosophical attempts to prove the existence of an external world from solipsism-compatible assumptions and methods -- for example, Descartes's and Kant's -- are generally acknowledged by the philosophical community to fail. I'm inclined to think those attempts fail because they set the bar too high: They aim for certainty through deductive proof. Better, perhaps, to aim lower -- to use scientific methods and scientific standards, licensing only tentative conclusions as the current best explanation of the available evidence. In another way, too, I aim low: I aim only to refute solipsistic doubts in particular, not other sorts of skeptical doubts such as doubts about memory or reasoning. In fact, I will take the general standards of science for granted, insofar as those general standards are compatible with solipsism. I aim to do solipsistic science.

Last year on my blog I presented the results of two experiments designed to provide evidence that an external world does indeed exist. The first experiment suggested that something exists ("my computer") that is swifter than I at calculating prime numbers. The second experiment suggested that the world has some constancy to it that exceeds the constancy of my memory. Today, I will report on a third experiment.

If solipsism is true, nothing in the universe should exist that is better than I am at chess. Nothing should exist with chess-playing practical capacities that exceed my own. Now I believe that I stink at chess, and my seeming student "Alan" seems to have told me that he is good at chess. If solipsism is true, then he should not be able to beat me at rates higher than statistical chance. I agreed with this seeming-Alan to play twenty games of speed chess, with a limit of about five seconds per move.

Here is what seems to be a picture of our procedure:
(photo credit: seeming-Gerardo Sanchez)

The results: seeming-Alan won 17, I won 2, and one game was a stalemate. Winning 17 of the 19 decisive games is statistically higher than the 50% rate chance would predict, p < .001 (hand calculated).
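
For concreteness, here is a minimal sketch, in Python, of the sort of hand calculation involved (the code is illustrative only, not part of the experiment): it computes the exact one-tailed binomial probability of winning at least 17 of 19 decisive games, on the solipsistic null hypothesis that each game is a fair coin flip.

    # Exact one-tailed binomial probability: chance of 17+ wins in 19
    # decisive games if each game were a 50/50 coin flip.
    from math import comb

    n, k = 19, 17  # decisive games; seeming-Alan's wins
    p = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    print(f"P(X >= {k}) = {p:.6f}")  # ~0.000364, hence p < .001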

It occurs to me that I might have hoped to lose, so as to generate results confirming my preferred hypothesis that the external world exists. Against this concern, I comfort myself with the following thoughts: If it was an unconscious desire to lose, then that implies that something exists in the universe besides my own stream of conscious experience, namely, an unconscious part of my mind. So radical solipsism is false. If it was, instead, a conscious desire to lose, I should have been able to detect it, barring an unusual degree of introspective skepticism. What I detected was a desire to win as many games as I could, given my background assumption that if Alan actually exists I would be hard pressed to win even one or two games. I found myself repeatedly and forcefully struck by the impression that the universe contained a practical intelligence superior to my own and bent on defeating me -- an impression, of course, confirmed by the results. The most natural scientific explanation of the pattern in my experience is that that impression is correct.

I might easily enough dream of being consistently defeated in chess. But dreams of this sort, as I seem to remember, differ from the present experiment in one crucial way: They are vague on the specific moves or unrealistic about those moves. In the same way, I might dream of having proven Fermat's last theorem. Both types of case involve dream-like gappiness or irrationality or delusive credulity -- the type of gappiness or irrationality or delusive credulity that might make me see nothing remarkable in my daughter's growing wings or in discovering a new whole number between 5 and 6. Genuine dream doubt might involve doubt about my basic rational capacities, but if so, such doubts are additional to merely solipsistic doubt. Whether I am dreaming or not, if I consistently experience specific clever and perceptive chess moves that repeatedly exploit flaws in my own conscious strategizing, flaws that I experience as surprising, it seems hard to avoid the conclusion that something exists that surpasses my own conscious intelligence in at least this one area.

Wednesday, August 08, 2012

Do stakes impact knowledge ascription? According to a prominent claim in recent epistemology, people are generally less likely to ascribe knowledge to a high-stakes subject, for whom the practical consequences of error are severe, than to a low-stakes subject, for whom the practical consequences of error are slight, even when “traditional” epistemic factors like evidence and belief are held fixed. (For example, people are less likely to agree that Sarah knows that she parked her car on the upper level if an armed robber is lurking on the lower level.)

The claim that stakes impact knowledge ascriptions holds intrinsic interest for anyone interested in the psychological basis of knowledge ascriptions. It holds further interest for epistemologists because it features as the lead premise in an abductive argument for the radical claim—defended by Fantl & McGrath (2002), Hawthorne (2004), and Stanley (2005)—that knowledge itself is sensitive to stakes. And it holds a different sort of interest for experimental philosophers, as an empirical claim made from the armchair.

Given all of this interest, it is perhaps unsurprising that so many different philosophers over the last few years have begun studying the behavioral role that stakes could be playing in people's knowledge judgments. There was a "first wave" of empirical studies—due to Feltz & Zarpentine (2010), May et al (2010), and Buckwalter (2010)—which cast doubt on folk stakes sensitivity. Then there was a "second wave" of empirical studies—due to Pinillos (2012) and Sripada & Stanley (2012)—said to vindicate it. (And see great posts summarizing the major findings of the second-wave experimental philosophers here on xphi and over on Certain Doubts by N. Angel Pinillos and by Jason Stanley.)

For those interested in the latest on stakes, or maybe just getting into this debate for the first time, Jonathan Schaffer and I have recently finished a new paper offering an opinionated discussion of the “state of the art” of this research. In our paper, we review the first- and second-wave results above, as well as present a few new studies of our own. We conclude that the balance of evidence to date still best supports the folk stakes insensitivity thesis -- the claim that, all else equal, people are as likely to ascribe knowledge to a high-stakes subject as to a low-stakes subject.

Tuesday, July 24, 2012

A couple years ago, I ran a study to explore the view that professional philosophers’ beliefs are colored by their subjective experiences of the world, and are therefore influenced by personality. The first article from this study, “Do Personality Effects Mean Philosophy Is Intrinsically Subjective?”, will appear in the Journal of Consciousness Studies in 2013, and I wanted to share some of the results here first.

Much of the research was exploratory, so the statistical validity of some of these findings is a complicated matter. But of particular interest are two specific hypotheses that were supported by the results of the study. A great deal of literature suggests that moral disapprobation covaries with negative emotions, even when those emotions are not the result of moral considerations. In this study, the lifelong tendency toward negative affectivity, as measured by neuroticism, correlated with one type of moral disapprobation.

Other psychological research suggests that the more conscientious and the more neurotic people are, the more intensely and passionately they experience love. In this study, the belief that love could be reduced to a neuroscientific description and reproduced electronically also correlated with those variables. This supports the hypothesis that either the subjective experience of love or the personality traits underlying it influence philosophical belief about the experience of love.

Below are statistical correlations between the Big Five personality factors and answers to nine philosophical questions. All results are from 234 participants who indicated that they held PhDs or DPhils in philosophy. 10 of 45 factor-belief pairs, and 4 of 9 overall Big Five-belief pairs showed significant correlations.

In order to keep this post shorter, I’ll post the exact questions used to test each philosophical belief in the comments; the brief tags in the chart are pretty vague. More about the Big Five personality factors, and a copy of the exact Big Five questionnaire used in this study, can be found at http://tinyurl.com/7573puh.
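
For those curious about the mechanics, here is a minimal sketch of how such a correlation analysis can be run (the file name and column names below are placeholders, not the actual data set; note that no multiple-comparison correction is applied, which is one reason the exploratory findings require cautious interpretation):

    # Pearson correlations between each Big Five factor and each of nine
    # philosophical-belief ratings; prints pairs significant at p < .05.
    # The CSV and column names are hypothetical placeholders.
    import pandas as pd
    from scipy.stats import pearsonr

    df = pd.read_csv("philosophers_survey.csv")
    factors = ["extraversion", "agreeableness", "conscientiousness",
               "neuroticism", "openness"]
    beliefs = [c for c in df.columns if c.startswith("belief_")]

    for f in factors:
        for b in beliefs:
            r, p = pearsonr(df[f], df[b])
            if p < .05:  # no correction applied across the 45 tests
                print(f"{f} x {b}: r = {r:.2f}, p = {p:.3f}")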

Friday, June 15, 2012

In 2007, I analyzed data on student charitable giving at the University of Zurich, broken down by major. When paying registration fees, students at Zurich could choose to give to one or both of two charities (one for foreign students, one for needy students). Among large majors, philosophy students proved the most charitable. However, philosophy majors' rates of charitable giving didn't rise over the course of their education, suggesting that their giving rates were not influenced by their undergraduate training.

I now have some similar data from the University of Washington, thanks to Yoram Bauman and Elaina Rose. At UW from 1999-2002, when students registered for courses, they had the opportunity to donate to two charities: WashPIRG (a broadly left-leaning activist group) and (starting in fall 2000) "Affordable Tuition Now" (an advocacy group for reducing the costs of higher education in Washington). Bauman and Rose published an analysis of economics students' charity and they kindly shared their raw data with me for reanalysis. My analysis focuses on undergraduates under age 30.

First, looking major-by-major (based on final declared primary major at the end of the study period), we see that philosophy students are near the top. The main dependent measure is, in any given term, what percentage of students in the major gave to at least one of the two charities. Among majors with at least 1000 student enrollment terms, the five most charitable majors were:

As reported by Bauman and Rose and also in Frey and Meier 2003 (the original source of my Zurich data), business and economics majors are among the least charitable. The surprise here (to me) is sociology. In the Zurich data, sociology majors are among the most charitable. (Hypotheses welcome.)

But what is the time course of donation? Bauman and Rose and Frey and Meier found that business and economics students were among the least charitable from the very beginning of their education, and their charity rates did not decline further. Thus, they suggest, the low rates of charity are a selection effect -- an effect of who tends to choose such majors -- rather than a result of college-level economics instruction. My analysis of the Zurich data suggests a mirror-image story for philosophers: Their high rates of charity reflect a fact about who chooses philosophy rather than the power of philosophical instruction.

So here are the charity rates for non-philosophers, by year of schooling:

First year: 15%
Second year: 15%
Third year: 14%
Fourth year: 13%

And for philosophers (looking at 1172 student semesters total):

First year: 26%
Second year: 27%
Third year: 21%
Fourth year: 24%

So it looks like philosophers' donation rates are flat to declining, not increasing. Given the moderate-sized data set, the slight decline from 1st and 2nd to 3rd and 4th years is not statistically significant (though given the almost 70,000 data points the smaller decline among non-philosophers is statistically significant).
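
To give a sense of the calculation, here is a minimal sketch of the sort of significance test involved, assuming (purely for illustration) that the 1172 philosopher semesters split evenly across the four years; the real analysis uses the exact counts in the raw data.

    # Two-proportion z-test: philosophers' donation rates in years 1-2
    # vs. years 3-4. The counts below are back-of-the-envelope assumptions.
    from statsmodels.stats.proportion import proportions_ztest

    n12, n34 = 586, 586                  # assumed even split of 1172
    donors12 = round(n12 * .265)         # ~26-27% donating in years 1-2
    donors34 = round(n34 * .225)         # ~21-24% donating in years 3-4
    z, p = proportions_ztest([donors12, donors34], [n12, n34])
    print(f"z = {z:.2f}, p = {p:.2f}")   # p ~ .12: not significant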

One reaction some people have had to my work with Josh Rust on the moral behavior of ethics professors (e.g., here and here) is this: Although some professional training in the humanities is morally improving, one reaches a ceiling in one's undergraduate education after which further training isn't morally improving -- and philosophers, or humanities professors, or professors in general, have pretty much all hit that ceiling. That ceiling effect, the objection goes, rather than the failure of ethics courses to alter real-world behavior, explains Josh's and my finding that ethicists behave on average no better than do other types of professors. (Eddy Nahmias might suggest something like this in his critical commentary on one of Josh's and my papers next week at the Society for Philosophy and Psychology.)

I don't pretend that this is compelling evidence against that position. But it does seem to be a wee bit of evidence against that position.

Monday, May 07, 2012

Brian Robinson, Paul Stey, and I have been working on a theory of the Knobe effect that draws on my collaboration with Beebe and Robinson. The basic idea is to use non-moral psychological processes to explain the effect. We presented some preliminary results at the Experiments in Ethical Dilemmas conference in London last week. Here's a write-up of that presentation.

Friday, March 16, 2012

As some of you will know, I have an abiding interest in the moral behavior of ethics professors. I've collected a variety of evidence suggesting that ethics professors behave on average no morally better than do professors not specializing in ethics (e.g., here, here, here, here, and here). Here's another study.

Until recently, the American Philosophical Association operated more or less on an honor system for paying meeting registration fees. There was no serious enforcement mechanism for ensuring that people who attended the meeting -- even people appearing on the program as chairs, speakers, or commentators -- actually paid their registration fees. (Now, however, you can't get the full program with meeting room locations without having paid the fees.)

Registration fees are not exorbitant: Since at least the mid-2000s, pre-registration for APA members has been $50-$60. (Fees are somewhat higher for non-members and for on-site registration. For students, pre-registration is $10 and on-site registration is $15.) According to the APA, these fees don't fully cover the costs of hosting the meetings, with the difference subsidized from other sources of revenue. Barring exceptional circumstances, people attending the meeting plausibly have an obligation to pay their registration fees. This might be especially true for speakers and commentators, since the APA has given them a podium to promulgate their ideas.

From personal experience, I believe that almost everyone appearing on the APA program attends the meeting (maybe 95%). What I've done, then, is this: I have compared published lists of Pacific APA program participants from 2006-2008 with lists of people who paid their registration fees at those meetings -- data kindly provided by the APA with the permission of the Pacific Division. (The Pacific Division meeting is the best choice for several reasons, and both of the recent Secretary-Treasurers, Anita Silvers and Dom Lopes, have been generous in supporting my research.)

Let me emphasize one point before continuing: The data were provided to me with all names encrypted so that I could not determine the registration status of any particular individual. This was a condition of the Pacific Division's cooperation and of UC Riverside's review board approval. It is also very much my own preference. I am interested only in group trends.

To keep this post to manageable size, I've put further details about coding here.

Here, then, are my preliminary findings:

Overall, 76% of program participants paid their registration fees: 75% in 2006, 76% in 2007, and 77% in 2008. (The increasing trend is not statistically significant.)

* People on the main program were more likely to have paid their fees than were people whose only participation was on the group program: 77% vs. 65% (p < .001).

* Gender did not appear to make a difference: 75% of men vs. 76% of women paid (p = .60).

* People whose primary participation was in a (generally submitted and blind refereed) colloquium session were more likely to have paid than people whose primary participation was in a (generally invited) non-colloquium session on the main program: 81% vs. 74% (p = .004).

* There was a trend, perhaps not statistically significant, for faculty at Leiter-ranked PhD-granting institutions to have been less likely to have paid registration fees than students at those same institutions: Leiter-ranked faculty 73% vs. people not at Leiter-ranked institutions (presumably mostly faculty) 75% vs. students at Leiter-ranked institutions 81% (chi-square p = .11; Leiter-ranked faculty vs. students, p = .03; see the sketch after this list).

* There was a marginally significant trend for speakers and commentators to have been more likely to have paid their fees than people whose only role was chairing: 76% vs. 71% (p = .097).
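
For the statistically curious, here is a minimal sketch of the kind of chi-square test behind the faculty/student comparison above. The group sizes are hypothetical stand-ins -- the real p-values depend on the actual counts in the encrypted data set.

    # Chi-square test of registration rates across three groups.
    # Rates (73%, 75%, 81%) are from the post; the n's are assumptions.
    from scipy.stats import chi2_contingency

    # rows: Leiter-ranked faculty, non-Leiter participants, Leiter students
    # columns: paid, did not pay
    table = [[219, 81],   # 73% of an assumed 300
             [225, 75],   # 75% of an assumed 300
             [243, 57]]   # 81% of an assumed 300
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")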

Ethicists differed from non-ethicists in several dimensions.

* 33% of ethicists were women vs. 18% of non-ethicists (p < .001).

* 63% of participants whose only appearance was on the group program were ethicists vs. 42% of participants who appeared on the main program (p < .001).

* Looking only at the main program, 35% of participants whose highest level of participation was in a colloquium session were ethicists vs. 49% whose highest level of participation was in a non-colloquium session (p < .001). (I considered speaking as a higher level of participation than commenting and commenting as a higher level of participation than chairing.)

* Among faculty in Leiter-ranked departments, a smaller percentage were ethicists (38%) than among participants who were not Leiter-ranked faculty (49%, p < .001). (I've found similar results in another study too.)

I addressed these potential confounds in two ways.

First, I ran split analyses. For example, I looked only at main program participants to see if ethicists were more likely to have registered than were non-ethicists (they weren't: 77% vs. 77%, p = .90), and I did the same for participants who were only in group sessions (also no difference: 65% vs. 64%, p = .95). No split analysis revealed a significant difference between ethicists and non-ethicists.

Second, I ran logistic regressions, using the following dummy variables as predictors: ethicist, group program participant, colloquium participant, student at Leiter-ranked institution, chair. In one regression, those were the only predictors. In a second regression, each variable was crossed as an "interaction variable" with ethicist. No interaction variable was significant. In the non-interaction regression, colloquium role and main program participation were both positively predictive of having registered (p < .01) and participation only as chair was negatively predictive (p < .01). Being a student at a Leiter-ranked institution was not predictive (p = .18) and -- most importantly for my analysis -- being an ethicist was also not predictive (logistic beta = .04, p = .72), confirming the main result of the non-regression analysis.
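
For readers who want the shape of those regressions, here is a minimal sketch using statsmodels' formula interface (the file name and variable names are hypothetical placeholders for the dummy variables described above):

    # Logistic regressions predicting registration from participant role.
    # The CSV and 0/1 dummy-variable names are hypothetical placeholders.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("apa_registration.csv")

    # Main-effects model: which roles predict having registered?
    m1 = smf.logit("registered ~ ethicist + group_program + colloquium"
                   " + leiter_student + chair", data=df).fit()
    print(m1.summary())

    # Interaction model: each predictor crossed with 'ethicist'.
    m2 = smf.logit("registered ~ ethicist * (group_program + colloquium"
                   " + leiter_student + chair)", data=df).fit()
    print(m2.summary())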

[Thanks to the Pacific Division of the American Philosophical Association for providing access to their data, anonymously encoded, on my request. However, this research was neither solicited by nor conducted on behalf of the APA or the Pacific Division.]

Friday, March 02, 2012

People's responses to hypothetical moral scenarios can vary substantially depending on the order in which those scenarios are presented (e.g., Lombrozo 2009). Consider the well-known "Switch" and "Push" versions of The Trolley Problem. In the Switch version, an out-of-control boxcar is headed toward five people whom it will kill if nothing is done. You're standing by a railroad switch, and you can divert the boxcar onto a side-track, saving the five people. However, there's one person on the side-track, who would then be killed. Many respondents will say that there's nothing morally wrong with flipping the switch, killing the one to save the five. Some will even say that you're morally obliged to flip the switch. In the Push version, instead of being able to save the five by flipping a switch, you can do so by pushing a heavy man into the path of the boxcar, killing him but saving the five as his weight slows the boxcar. Despite the surface similarity to the Switch case, most people think it's not okay to push the man.

Here's the order effect: If you present the Push case first, people are much less likely to say it's okay to flip the switch when you then later present the Switch case than if you present the Switch case first. In one study, Fiery Cushman and I found that if we presented Push first, respondents tended to rate the two cases equivalently (on a seven-point scale from "extremely morally good" to "extremely morally bad"). But if we presented Switch first, only about half the respondents rated the scenarios equivalently. Somewhat simplified: People who see Push first will say that it's morally bad to push the man, and then when they see Switch they will say it's similarly bad to flip the switch. People who see Switch first will say it's okay to flip the switch, but then when they see the Push case they don't say "Oh, I guess that's okay too". Rather, they dig in their heels and say that pushing the man is bad despite the superficial similarity to the Switch case, and thus they rate the two scenarios inequivalently.

Strikingly, Fiery and I found that professional philosophers show the same size order effects on their judgment about hypothetical scenarios as do non-philosophers. Even when we restricted our analysis to respondents reporting a PhD in philosophy and an area of specialization or competence in ethics, we found no overall reduction of the magnitude of the order effect. (This research is forthcoming in Mind & Language and has been summarized here.) The Doctrine of the Double Effect is the orthodox (but by no means universally accepted) explanation of why it might be okay to flip the switch but not okay to push the man. According to the Doctrine of the Double Effect, it's worse to harm someone as a means of bringing about a good outcome than it is to harm someone as merely a foreseen side-effect of bringing about a good outcome. Applied to the trolley case, the thought is this: If you flip the switch, the means of saving the five is diverting the boxcar to the side-track, and the death of the one person is just a foreseen side effect. However, if you push the man, killing him is the means of saving the five.

Now maybe this is a sound doctrine, soundly applied, or maybe not. But what Fiery and I did was this: At the end of our experiment, we asked our participants whether they endorsed the Doctrine of the Double Effect. Specifically we asked the following:

Sometimes it is necessary to use one person’s death as a means to saving several more people—killing one helps you accomplish the goal of saving several. Other times one person’s death is a side-effect of saving several more people—the goal of saving several unavoidably ends up killing one as a consequence. Is the first morally better, worse, or the same as the second? [Response options: ‘better’ ‘worse’ or ‘same’]

Non-philosophers' responses to this question were unrelated to the order of the presentation of the scenarios. We suspect that many of them didn't see the connection between this abstract principle and the Push and Switch scenarios presented much earlier in the questionnaire. But philosophers' responses were related to the order of presentation of the Push and Switch scenarios. Specifically, the majority of philosophers (62%) who saw the Switch scenario first endorsed the Doctrine of the Double Effect. However, the doctrine was endorsed only by a minority of philosophers (46%) who saw Push first (p = .02). What seems to have happened is this: By manipulating order of presentation, Fiery and I influenced the likelihood that respondents would rate the scenarios equivalently or inequivalently. We thereby also influenced the likelihood of our philosopher respondents' endorsing a doctrine that appears to justify inequivalent judgments about the scenarios, the Doctrine of the Double Effect. Rather than relying on stable principles to reach judgments about the cases, a certain portion of philosophers appear to have reached their scenario judgments on the basis of covert factors like order of presentation and then endorsed principles only post-hoc as a means of rationalizing their covertly influenced judgments about the specific cases.

Manipulating the order of two pairs of scenarios (the Push-Switch pair and a Moral Luck pair) appeared to amplify the magnitude of this effect, by pushing philosophers either generally toward or generally against endorsing inequivalency-supporting principles. With two scenario pairs ordered to favor inequivalency, we found 70% of our philosopher respondents endorsing the Doctrine of the Double Effect. With the two pairs ordered to favor equivalency, only 28% endorsed the doctrine (p < .001). This is a very large shift in opinion, given how well-known the doctrine is among philosophers and given that by this point in the questionnaire, all philosophers had viewed all versions of each scenario. We then filtered our results, looking only at respondents reporting a PhD and an area of specialization or competence in ethics, thinking that these high-grade specialists (mostly ethics professors at Leiter-ranked institutions) might have more stable opinions about the Doctrine of the Double Effect. They didn't. When the two scenario pairs were arranged to favor inequivalency, 62% of ethics PhDs endorsed the Doctrine of the Double Effect. When the two pairs were arranged to favor equivalency, 29% endorsed the doctrine (p < .05).
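
For illustration, here is a minimal sketch of the sort of test behind that first comparison; the percentages are from our data, but the group sizes below are assumptions for the sake of the example.

    # Fisher exact test on DDE endorsement by scenario-pair ordering.
    # Percentages (70% vs. 28%) are from the study; the n's are assumed.
    from scipy.stats import fisher_exact

    # rows: inequivalency-favoring order, equivalency-favoring order
    # columns: endorsed the Doctrine of the Double Effect, did not
    table = [[35, 15],    # 70% of an assumed 50
             [14, 36]]    # 28% of an assumed 50
    odds, p = fisher_exact(table)
    print(f"odds ratio = {odds:.1f}, p = {p:.5f}")  # p < .001 at these n's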

The simplest interpretation of our overall results, across three types of scenarios (Double Effect, Moral Luck, and Action-Omission), is that in cases like these skill in philosophy doesn't manifest as skill in consistently applying explicitly endorsed abstract principles to reach stable judgments about hypothetical scenarios; rather, it manifests more as skill in choosing principles to rationalize, post-hoc, scenario judgments that are driven by the same types of factors that drive non-philosophers' judgments.

Monday, January 30, 2012

Throughout most of the 20th century, the medical community knew that large amounts of stress caused stomach ulcers -- or at least they thought they did. Here are some "known facts" (things that doctors thought were true) about ulcers prior to 1979:

From "Helicobacter Connections"

But then, in the 1980s, an Australian physician, Dr. Barry Marshall, convinced that the medical community had it all wrong, infected himself with what he believed to be the real culprit responsible for the disease, H. pylori. To prove his theory against the overwhelming consensus, Marshall famously drank the bacteria in question.

And as it turns out, he was absolutely right. We now know that peptic ulcer disease is actually caused by bacterial infection.

Often, when people tell this story, they say things like "Everyone knew that stress caused ulcers, before an Australian doctor in the early 1980s proved that ulcers are actually caused by bacteria." But as philosophers we might wonder what we should make of this kind of claim. Did the doctors really know that stress was causing the ulcers, or did they only think they knew it? Or, phrased a bit more generally by Socrates: “If one fails to get at the truth of a thing, will he ever be a person who knows that thing?” The resulting dialogue in Plato’s Theaetetus reveals his interlocutor’s answer (186c-187b): knowledge cannot be mere opinion, because there may be false opinion. And the answer contemporary philosophers give has changed little since. Basically every epistemic analysis today includes a truth condition for knowledge. It’s just overwhelmingly clear to philosophers that only true things can be known.

Nonetheless, there seem to be several examples today showing that non-philosophers do not find this thesis obvious, and that at least as far as ordinary language is concerned, people frequently use ‘know’ in what appear to be blatantly non-factive ways. A quick Google search -- from the ulcer case, to headlines in the New York Times, to major blockbuster movies -- reveals that non-factivity may be all around us!

Examples like these have led a growing number of philosophers to begin to speculate about the role of factivity in the actual knowledge judgments people make, as well as the significance these ordinary judgments might have for traditional epistemic theorizing. John Turri, for instance, has advanced a performance view of knowledge that allows for knowledge of “approximate truths”, which are, strictly speaking, false beliefs (2011; forthcoming). Sympathy for non-factivity has been expressed in research by Daniel Nolan (2008). And the possibility that epistemic contextualism might allow for contexts under which certain kinds of false beliefs qualify as knowledge has also been (at least) discussed in the work of Keith DeRose (2009).

And perhaps the most comprehensive challenge against orthodoxy to date is presented by Allan Hazlett. In two recent papers, “The Myth of Factive Verbs” and “Factive Presupposition and the Truth Condition on Knowledge,” Hazlett offers compelling evidence that people ordinarily use purportedly factive verbs like ‘know’, ‘learn’, ‘remember’, and ‘realize’ in utterances of the form ‘S knows that p’ in ways that frequently do NOT require that p is true. Hazlett’s theory is that these kinds of sentences are acceptable to non-philosophers because the folk concept underlying the meaning of ‘knows’ allows that false things can genuinely be known.

But given that a number of epistemologists have begun to focus more on the truth condition in light of intuitions about ordinary usage, we may wonder: could it really be that the folk concept of knowledge is truly a non-factive concept? After all, if true, it seems this would have a series of important epistemic and methodological implications about the connection between the ordinary concept and the (decidedly factive) concept of knowledge philosophers have historically been interested in analyzing.

Besides non-factivity, however, another possible explanatory hypothesis for these linguistic data (like the ulcer case) is that ordinary uses of ‘knows’ are highly sensitive to something called ‘protagonist projection’. The basic idea is that non-factive uses might frequently appear acceptable to us only because we take up the protagonist’s perspective and say what seems true from their perspective -- and not because people think it's actually possible to know false things. (This is kind of like when one talks about someone else and -- with sufficient cues, e.g., imitating their body language and tone of voice -- can then use ‘I’ to refer to this other person.)

To prove that Experimental Philosophy isn’t always about attacking the great tradition, I went ahead and ran some experiments attempting to confirm armchair intuitions about the factivity of ‘knows’. A draft of the paper is available HERE. The main goal of these experiments was to use explicit paraphrasing tasks (a method pioneered in the study of mental state attributions to groups) to see if the linguistic evidence about factivity collected so far is better explained by (i) the folk tendency to adopt the perspective of the putative ‘knower’ via protagonist projection when attributing knowledge to falsehoods or (ii) an underlying folk concept which really does allow for knowledge of false things.

Tuesday, November 01, 2011

Philosophers typically assume that pain is something in the mind. It is a certain sort of feeling, a phenomenal state. Indeed, it is perhaps the paradigm case of a psychological state that has phenomenal character.

In a new paper in Journal of Consciousness Studies, Justin Sytsma asks whether ordinary people see things in the same way. He reports a series of new studies indicating that they do not. On Sytsma's view, people do not think of pain as a psychological state. Instead, they think of pain as a real thing out there in the world, something located in their bodies. The idea is that the pain you feel when you stub your toe isn't in your mind at all -- it is in your toe.

Tuesday, October 11, 2011

Philosophers of mind typically distinguish experiential states or subjective experiences (like seeing red, feeling pain, or guilt) from intentional states (like believing or wanting) on the basis of their purportedly obvious phenomenal character. But if subjective experiences really are distinctive because they possess an obvious and unmistakable phenomenal character—in other words, because there is “something it is like to occupy them”—then presumably philosophers and non-philosophers will categorize the same mental states as subjective experiences. Specifically, if philosophers have identified a certain type of mental state as a paradigmatic subjective experience, then we should expect ordinary people to identify the same type of mental state as a subjective experience as well. But do they?

One method experimental philosophers and cognitive scientists have been using to get at this question of how people categorize mental states draws from philosophical thought experiments, which have long asked us to consider what mental states we would be willing to attribute to other entities. The method assumes that if ordinary people categorize a mental state as an experience insofar as it possesses an unavoidable phenomenal character, then we should expect this to be reflected in their attributions of phenomenal states to other entities. As Sytsma and Machery (2010) write, “we should expect the folk to deny that an entity, be it a simple organism, a simple robot, or a zombie, that lacks phenomenal consciousness can see red just as readily as they deny that it can be in pain” (302).