Monday, May 26, 2008

One of the classic debates in ethics is between realism and anti-realism. It's hard to state precisely what is at issue without being tendentious, but one way to put it would be this: Are there moral facts (realism), or are there only individual human or social opinions and reactions (anti-realism)?

I'm not going to say anything in this post about the arguments for or against each side, and I'm intentionally not going to say which side I'm on personally. Instead, I want to just make a couple of sociological observations (with the caveat that my results are purely anecdotal).

(1) Most people feel strongly about this issue, whether or not they have a "philosophical" mind. This topic is a sure-fire discussion starter in any introductory philosophy class.

(2) Whichever side a person agrees with, she generally thinks that the other position is pretty obviously mistaken, and is a little bemused that anyone actually believes the other side.

(3) Realists worry that, if you actually took anti-realism seriously, it would encourage some sort of moral decay, while anti-realists worry that realism is really just a rationale for being dogmatic about morality.

(4) If pressed, most realists will assert that they "know" that anti-realism does NOT actually encourage moral decay, while anti-realists will assert that they "know" that realism is NOT actually just a rationale for being dogmatic about morality.

My sense is that (3) is what accounts for (1). In addition, no matter how often people assert what they do in (4), they still really believe (3) in their gut. This explains (2), because the arguments for one's position are really just rationalizations, while the arguments against one's position don't touch what really motivates one to accept it.

Maybe it would lead to a more productive debate if we talked less about moral realism and anti-realism, and more about how to find the mean between (A) taking one's ethical commitments seriously, and (B) dogmatically sticking to one's commitments? (David Wong has an interesting discussion of this in his recent book, Natural Moralities, pp. 179-272.)

(By the way, sorry for being so behind in replying to comments. I got a copy-edited manuscript this week, which I was rushing to revise for the publisher. But I sent it off, and I'll be back to my contrary self on Monday. *smile* )

Thursday, May 22, 2008

... that is, my essay of that title, is now out in Philosophical Review. I confess to some pride: The essay is the culmination of years of thought and discussion, including a series of more narrowly focused essays on the same general theme; and Philosophical Review is the most selective and prestigious of all philosophy journals.

A certain very eminent philosopher (who will go unnamed) told me that he thought the essay was perhaps "the chattiest essay ever published in Phil Review". I'm not quite sure what to make of that remark....

Here are the first two paragraphs [footnotes excluded]:

Current conscious experience is generally the last refuge of the skeptic against uncertainty. Though we might doubt the existence of other minds, that the sun will rise tomorrow, that the earth existed five minutes ago, that there's any "external world" at all, even whether two and three make five, still we can know, it's said, the basic features of our ongoing stream of experience. Descartes espouses this view in his first two Meditations. So does Hume, in the first book of the Treatise, and -- as I read him -- Sextus Empiricus. Other radical skeptics like Zhuangzi and Montaigne, though they appear to aim at very general skeptical goals, don't grapple specifically and directly with the possibility of radical mistakes about current conscious experience. Is this an unmentioned exception to their skepticism? Unintentional oversight? Do they dodge the issue for fear that it is too poor a field on which to fight their battles? Where is the skeptic who says: We have no reliable means of learning about our own ongoing conscious experience, our current imagery, our inward sensations -- we are as in the dark about that as about anything else, perhaps even more in the dark?

Is introspection (if that's what's going on here) just that good? If so, that would be great news for the blossoming -- or should I say recently resurrected? -- field of consciousness studies. Or does contemporary discord about consciousness -- not just about the physical bases of consciousness but seemingly about the basic features of experience itself -- point to some deeper, maybe fundamental, elusiveness that somehow escaped the notice of the skeptics, that perhaps partly explains the first, ignoble death of consciousness studies a century ago?

Monday, May 19, 2008

A fascinating question. A reflexive rejection of eugenics is too simplistic. There are so many ways now open (or opening) to shape ourselves and future generations, and the benefits and drawbacks are so diverse and complex, that you'll be glad to hear of

Scientists, students, ordinary folks, and even philosophers sometimes find the word "consciousness" baffling or suspicious. Understandably, they want a definition. To the embarrassment of us in "consciousness studies", it proves surprisingly difficult to give one. Why? The reason is this: The two most respectable avenues for scientific definition are both blocked in the case of consciousness.

Analytic definitions break a concept into more basic parts: A "bachelor" is a marriageable but unmarried man. A "triangle" is a closed, three-sided planar figure. In the case of consciousness, analytic definition is impossible because consciousness is already a basic concept. It's not a concept that can be analyzed in terms of something else. One can give synonyms ("stream of experience", "qualia", "phenomenology") but synonyms are not scientifically satisfying in the way analytic definitions are.

Functional definitions characterize terms by means of their causal role: A "heart" is the primary organ for pumping blood; "currency" is whatever serves as the medium of exchange. Someday maybe a functional definition of consciousness will be possible; but first we have to know what kind of causal role (if any) consciousness plays, and we're a long way from knowing this. Various philosophers and psychologists have theories, of course, but to define consciousness in terms of one of these contentious theories begs the question.

So maybe the best we can do is definition by instance and counterinstance: "Furniture" includes these things (tables, desks, chairs, beds) and not these (doors, toys, clothes). "Square" refers to these shapes and not these. Hopefully with enough instances and counterinstances one begins to get the idea. So also with consciousness: Consciousness includes inner speech; visual imagery; felt emotions; dreams; hallucinations; vivid visual, auditory, and other sensations. It does not include immune system response, early visual processing, myelination of axons, or what goes on in dreamless sleep.

Unfortunately, definition by instance and counterinstance leaves unclear what to do about cases that don't obviously fit with either the given instances or counterinstances: If all the given instances and counterinstances of "square" are in Euclidean space, what does one do with non-Euclidean figures? Are paintings "furniture"?

Now maybe (this is my hope) there really is just one property -- what consciousness researchers call "consciousness" -- that stands out as the obvious referent of any term meant to fit the instances and counterinstances above, so that no human would accidentally glom on to another property, given the definition, just as no human would glom on to "undetached rabbit part" as the referent of the term "rabbit" used in the normal way. But this may not be true; and if it's also not acceptable (as I think it's not) simply to lump cases like the sound of the fridge and the feeling of one's shoes into the class of vague, in-between cases, then even definition by instance and counterinstance fails as a means of characterizing consciousness.

Are we left then with Ned Block's paraphrase of Louis Armstrong: "If you got to ask [what consciousness/jazz is], you ain't never gonna get to know"?

Thursday, May 15, 2008

Aristotle was wrong. Philosophy begins when a community of people encounter a problem that outstrips their current methods for problem-solving. For example, in ancient Greece, the Sophists seemingly could argue persuasively for either side in a court case or public policy debate. Or in Eastern Zhou-dynasty China, the traditional Way of organizing society was no longer promoting prosperity and preserving social order. Plato addressed the former problem, and Confucius the latter. (Forgive me for greatly oversimplifying the views of these two subtle and multifaceted philosophers.)

Faced with a philosophy-inducing problem, members of the community continue to share (and hold true) most of their background beliefs. If they did not, they would be unable to communicate about the problem. But whatever the problem is, it will call into question some of their beliefs. (For example, "rational argumentation can arrive at objective truth" or "the Way of the ancients is relevant to contemporary society.") On the basis of their shared beliefs, the members of the community formulate solutions to the problem. (For example, "mathematics provides a paradigm for how rational argumentation can succeed" or "if we ethically cultivate individuals and put them into positions of authority, society can be returned to the ancient Way.")

Any solution must satisfy two criteria. (1) It must answer all plausible, substantive objections that are raised against it by other members of the community (including alternative solutions); and (2) it must fit our interactions with the world. In other words, philosophy always involves two types of dialogue partners: other people and the world. (Obiter dicta, I think Richard Rorty tended to forget or underemphasize the role of the latter dialogue partner.) To continue my earlier examples, Plato had to answer the objection that most people are not convinced by the type of argumentation he recommends (and he replied with the "Myth of the Cave") and Confucius had to answer the objection that brute force was the only plausible method for enforcing social order (and he replied with the concept of sagely "Virtue").

What is the payoff of this "hermeneutic account" of philosophy?

Contrast Descartes, who began with subjective "ideas," and then tried to make the jump from them to the world. The problem is that if you start with subjective ideas, and assume that there is a world independent of those ideas, you will be led to skepticism. Or if you start with subjective ideas, and abandon the notion of some unattainable world beyond them, you will be led to relativism. (I think the influence of the Cartesian picture is part of the reason undergraduates assume that relativism or skepticism is self-evident.) But Descartes' epistemological starting point is arbitrary and unwarranted. We begin as creatures in the world, communicating with other creatures in the world with whom we share many common beliefs about the world. Now, through a subtle process of abstraction we can temporarily adopt a Cartesian standpoint, but we do not start out there, and we are not obligated to go there.

The methodological implication of the hermeneutic approach is that, in order for our position to be justified, we need to (1) know the major objections that have been raised against our "solution," and (2) know the major alternative solutions to the problem we face, so that we can (3) answer the objections, and (4) explain why our solution is superior to the alternatives. Again, a contrast with Descartes is instructive, because his Meditations invites us to think of philosophy as an individual process conducted in isolation from previous beliefs. But, as I noted in an earlier post, one cannot even understand Descartes himself without seeing him as a participant in an ongoing dialogue. So the individualist methodology is self-undermining.

Tuesday, May 13, 2008

Between the new daughter and two and a half weeks (so far) of raging sinusitis, it's been hard for me to keep up with the blog. Thankfully, Hagop and Bryan have done some interesting guest posts in the meantime!

But I can't let things go entirely, not without at least tossing out a little food for thought. Here's the food: In my experience, philosophers' spouses (unless they are themselves philosophers) are almost universally disdainful of the value of philosophy -- much more so than the average comparably-educated non-philosopher, it seems to me, and much more so than the spouses of professors in other fields are of the value of those other fields.

Here's a tiny bit of empirical data: Eleven people with no academic affiliation, mostly philosophers' spouses, responded to Josh Rust's and my survey of people's opinion of the moral behavior of ethicists. None of them thought ethicists behaved better on average than non-academics, six thought they behaved the same, and five thought they behaved worse (two-tailed binomial, p = .06). Although the sample size is obviously very small, of all respondents, philosophers' spouses appeared to have the darkest view of ethicists' behavior. (I wonder what ethicists' spouses in particular would say.)
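The p = .06 quoted above can be reproduced by hand: set aside the six "same" responses and run a two-tailed exact binomial (sign) test on the 0 "better" vs. 5 "worse" answers against chance (p = 0.5). A minimal sketch in Python (the function and variable names are my own, not from the survey materials):

```python
from math import comb

def two_tailed_binomial_p(k: int, n: int, p: float = 0.5) -> float:
    """Two-tailed exact binomial test: sum the probabilities of all
    outcomes no more likely than the observed count k."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    # Include every outcome at least as extreme (i.e., no more probable)
    # than the one observed; the tiny tolerance guards against float error.
    return sum(pr for pr in probs if pr <= observed * (1 + 1e-12))

# 0 of 11 respondents said ethicists behave "better", 5 said "worse";
# the 6 "same" responses are dropped, leaving 5 directional answers.
print(two_tailed_binomial_p(0, 5))  # 0.0625, i.e. the reported p = .06
```

With only two equally probable extreme outcomes (0 of 5 or 5 of 5), the p-value is simply 2 x (0.5)^5 = 0.0625, which rounds to the .06 in the post.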

Suppose I'm right about the disdain philosophers' spouses generally have for philosophy. What might explain that? Do they have a clearer view of philosophy than we? Bryan complains that (American) philosophers don't get no respect. Maybe we don't deserve respect, and our spouses are just the ones who know this best?

Update, May 14: Here's one possibility: Philosophy is about confronting questions that resist straightforward resolution and many of which are pretty much timeless. When faced with such a daunting task, most of what we mere mortals can provide is only bunk, even if the bunk-providing philosophers themselves don't realize it. (It doesn't follow that philosophy isn't worth doing.)

Tuesday, May 06, 2008

Eric wrote an interesting entry two months ago in which he noted how much higher the social status of philosophers is in Iran than in the U.S. I'd like to expand on that a bit.

I think that almost every civilization today or in recorded history has given philosophers (and humanists in general) more respect than does the contemporary US.

Abelard, the medieval philosopher, was greeted like visiting royalty wherever he went. The brilliant and well-born Heloise could have had any husband she wanted, but she famously said to him, "I would rather be your whore than another man's wife."

I knew a fellow graduate student who studied in Taiwan for a while. He came back with a Chinese wife from a wealthy and influential family. She knew that her new husband would never make a lot of money, but she was content because of the prestige that came with being a scholar (or a scholar's wife). When he got his first academic job, and she saw what the social status of professors in the US is, she divorced him and returned to Taiwan.

Jürgen Habermas is routinely consulted by European media for his views on current events, as were Derrida and Bertrand Russell. Here in the US, Larry King has interviewed Sean Penn about his views on the Iraq War, and Jenny McCarthy about the causes of autism. And Paris Hilton is a celebrity because she had the good fortune to be born rich, and the misfortune to appear in a sex tape.

If you look at representations of intellectuals in US media, they are almost always either arrogant and cruel (like Professor Kingsfield of The Paper Chase) or amusingly feckless (like Diane Chambers or Frasier Crane on Cheers). As Eric reminded me, there is a bit of an exception for scientists. Einstein posters still grace a few dorm rooms, and brilliant doctors like "House" are often pop-culture icons. But there is no social cachet for those of us who know that modern natural science would have been impossible without our own intellectual discipline.

I think that the American disdain for intellectuals grew out of a preference for populism and a rejection of what was seen as European elitism. But our country is in actuality very elitist. It is not, however, an elitism of intelligence and achievement; it is an elitism of wealth and celebrity.

Saturday, May 03, 2008

Academic philosophers are a fairly homogeneous bunch--mostly male, mostly white. So, when a philosopher proposes a thought experiment and then goes on to make a general claim of the form "in this case, most people would surely say that P" or "in this case, it is clearly the case that P", one might wonder whether the philosopher's intuitions really are so obvious or widely shared. Perhaps only philosophers, or male philosophers, or Western male philosophers, or Western male philosophers who maintain theory x, would find it obvious (or even entertain the idea) "that P".

Indeed, in recent years, experimental philosophers interested in such intuitions about particular cases (and their role in philosophical theories) have discovered that it's not hard to find significant cross-cultural variation. For example, recent studies have shown that Americans, East Asians and Indians may differ considerably in their intuitions concerning key thought experiments in epistemology and philosophy of language.

Some colleagues and I wanted to see whether this phenomenon held true for beliefs concerning free will and moral responsibility. We asked participants in Colombia, Hong Kong, India, and the United States what they thought about the following case (for a video presentation of these questions, click here):

**********

Imagine a universe (Universe A) in which everything that happens is completely caused by whatever happened before it. This is true from the very beginning of the universe, so what happened in the beginning of the universe caused what happened next, and so on right up until the present. For example, one day John decided to have French Fries at lunch. Like everything else, this decision was completely caused by what happened before it. So, if everything in this universe was exactly the same up until John made his decision, then it had to happen that John would decide to have French Fries.

Now imagine a universe (Universe B) in which almost everything that happens is completely caused by whatever happened before it. The one exception is human decision making. For example, one day Mary decided to have French Fries at lunch. Since a person’s decision in this universe is not completely caused by what happened before it, even if everything in the universe was exactly the same up until Mary made her decision, it did not have to happen that Mary would decide to have French Fries. She could have decided to have something different.

The key difference, then, is that in Universe A every decision is completely caused by what happened before the decision – given the past, each decision has to happen the way that it does. By contrast, in Universe B, decisions are not completely caused by the past, and each human decision does not have to happen the way that it does.

1. Which of these universes do you think is most like ours? (circle one)

Universe A-----Universe B

2. In Universe A, is it possible for a person to be fully morally responsible for their actions?

YES-----NO

**********

Previous work in cross-cultural psychology has shown that Westerners and non-Westerners differ in the way they think about moral responsibility, individual agency, and even the more fundamental notion of what it means to be a person; naturally, we expected to find some significant differences in the response patterns of these groups. Not so. In all four cultures, the majority of participants responded as indeterminists and incompatibilists! That is, the majority of participants believed our own universe to be indeterministic, and denied that moral responsibility could be compatible with determinism.

How is it, then, that individuals from such different cultural and religious backgrounds, with divergent ways of understanding the world, who have probably never been instructed on the topic of causal determinism, all tend to embrace the same two theses--indeterminism and incompatibilism? Such cross-cultural similarities in beliefs cry out for one of two kinds of explanation. The first would focus on innate endowment, such as some basic capacities concerning causal cognition or theory of mind. The second would focus on shared experience, such as the phenomenology of moral choice or the unpredictability of human action. Either approach seems worthy of exploration. It is also possible—and indeed quite likely—that the similarities here arise from a complex interaction between innate endowment and shared experience. Future research may shed light on the mechanisms involved. For now, it remains a puzzling finding in an area where one might expect some variation.

Thursday, May 01, 2008

This blog entry is by Bryan Van Norden, Professor in the Philosophy Department and the Department of Chinese and Japanese at Vassar College.

Thank you to Eric for kindly allowing me to be a guest blogger for the next few weeks. The first topic I would like to write about is the importance of knowing the secondary literature in one's field.

1. The Problem

I recently wrote a Letter to the Editor that was published in the Proceedings and Addresses of the APA. In it, I described my experience when my department was interviewing job candidates. I noted that we met many terrific young philosophers, and ended up hiring someone we are delighted with. However, we also discovered that many job candidates are not familiar with even the most basic secondary literature on their areas of research (including the work of their own supposed advisors). I concluded the letter by reminding my fellow philosophers of the obvious (I hope) fact that professors have an obligation to train their graduate students. I was writing primarily about "mainstream" philosophy, but my experience has been the same in Chinese philosophy.

So my claim is that it is crucial to know the secondary literature and that far too many people get doctorates without knowing it (or even knowing that it is important).

2. Why Is It a Problem?

I don't know how many people would actually come out and say it, but I think there is a common view that it is not important to know the secondary literature. This view has several sources.

Isn't what's really important that we read the PRIMARY texts?

It's crucial that we read the primary texts! But it is not enough to read the primary texts.

Oh yeah? Why not?

Newton famously said, "If I have seen farther than others, it is because I stand on the shoulders of giants." By this he meant that his work would have been impossible without building upon the previous research of people like Euclid, Copernicus, Galileo and Kepler. Where would we be today if Newton had remained ignorant of them? So even in natural science, which people often think of as an enterprise that can "prove" things independently of tradition, it is impossible to achieve progress without building upon previous research.

But I want to think independently! I don't want to just parrot what people in the past have said on this topic.

Good! But you won't be able to do so unless you become self-aware about what assumptions you bring to the text. Descartes set the tone for much of modern philosophy when he said he was going to reject tradition and custom and just think for himself. Almost everyone today proudly rejects the content of Descartes' claims, but it is far too common to implicitly assume that the methodology (confronting reality with one's individual thoughts) is correct. But the methodology is fundamentally flawed as well. Descartes was certainly original in many ways, but (as any serious historian of modern philosophy will tell you) his work is deeply dependent upon its Platonistic, Aristotelian, Augustinian and Scholastic sources. ("I think therefore I am" is a paraphrase of a line from Augustine's Confessions.)

The issue isn't whether we should be original or not. The issue is whether we can be original and insightful while we are ignorant.

Give me an example of what you are talking about.

Okay. I have heard more than one person ingenuously discuss "Mengzi's claim that human nature is originally good." There's just one problem: Mengzi never says that. Mengzi says that human nature is good, simpliciter. The "originally" is a Neo-Confucian gloss. Even people who have read the primary text often assume the Neo-Confucian reading. But this is the sort of issue raised in the secondary literature.

In general, the problem is that you can't be open-minded if you don't know what the alternatives are to your view.

If the secondary literature is so interesting, just tell me what it says.

What would you say to a student who told you, "I didn't do the reading. Just tell me what it said and I'll argue with you about whether what you say is right."

But don't you think dialogue is important?

Absolutely! But the secondary literature is PART of the dialogue. Besides, if you don't know the secondary literature and I do, how productive will my conversation with you be?

But research is hard work. It's more fun to just chat about my impressions of the text.

Aw, I suspected that was the root of it all! ;)

3. The Solution

I'd like to conclude with a list of what are, in my opinion, the absolutely essential secondary readings for anyone interested in pre-Qin dynasty Chinese philosophy. (One could easily add to this list, but I think it would be hard to say anything on it is optional for someone who claims to have an AOS in this area.)

Hall, David and Roger Ames. Thinking through Confucius. (Or at least one other book in the trilogy they wrote, which includes Thinking from the Han and Anticipating China.)

Hansen, Chad. Language and Logic in Ancient China (U of Michigan Press). (Or his A Daoist Theory of Chinese Thought.)

Harbsmeier, Christoph. Language and Logic. Vol 7, Part 1 of Joseph Needham, ed., Science and Civilisation in China (Cambridge U Press). (Or A.C. Graham's Later Mohist Logic, Ethics and Science. Some of the same material is also covered in Graham's Disputers of the Tao.)