Here is an outstanding interview of Joshua Greene by Harvard Law Student Jeff Pote. The interview, titled “On Moral Judgment and Normative Questions,” lasts just over 58 minutes. It was conducted as part of the Law and Mind Science Seminar at Harvard.

Bio:

Joshua D. Greene is an Assistant Professor of Psychology at Harvard University. He received his A.B. from Harvard University in 1997, where he was advised by Derek Parfit. He received his PhD in Philosophy from Princeton University in 2002, having written a dissertation on the foundations of ethics advised by David Lewis and Gilbert Harman. From 2002 until 2006, when he joined the Harvard faculty, he was a postdoctoral fellow at Princeton in the Neuroscience of Cognitive Control Laboratory under Jonathan Cohen. He is currently the Director of the Moral Cognition Lab.

* * *

Table of contents:

00:00 — Title Frame

00:23 — Introduction

00:54 — How did your professional interests develop?

04:58 — What are the questions that interest you?

06:07 — What research projects are you currently working on?

08:32 — Could you describe the original experiment that supported a dual-process view of moral judgment?

13:13 — Has further research supported the dual-process view of moral judgment?

16:43 — Could you explain how this, or any, psychological understanding could bear on normative questions of law and policy?

24:39 — Could you provide an example of a situation where we should not rely on “blunt intuition”?

30:42 — Can you see other places where psychological research illuminates normative questions of law or policy?

We commonly describe people’s behavior in terms of character traits such as honest, courageous, generous, and the like. Furthermore, we praise and reward those who display virtuous character traits and we look down upon those who exemplify vices such as dishonesty, cowardice, and stinginess. That virtue ethics captures this aspect of our everyday moral practices—i.e., our tendency to describe human behavior in terms of dispositional traits that give rise to virtues and vices—is purportedly one of its chief selling points. On Aristotle’s intuitively plausible view, for instance, being properly habituated, morally speaking, makes it more likely that one will engage in the right behavior, under the right circumstances, and for the right reasons. Moreover, not only does having the virtues make it maximally likely that one will engage in virtuous activity, but Aristotle also suggests that once an agent acquires the proper character traits, these dispositions are “firm and unchangeable” (NE, 1105b1). So, while the virtues are not themselves sufficient for moral behavior, truly virtuous individuals will usually do what’s right even under the most difficult circumstances (NE, 1105a8-10). If, on the other hand, virtuous character traits were not robust and stable predictors of moral behavior as Aristotle and others suggest, it is unclear why inculcating the virtues would better equip one to reliably navigate the complex moral world we inhabit.

However, as intuitive and attractive as the characterological approach to moral psychology may initially appear, some philosophers have recently suggested that the virtue theorist’s commitment to robust and stable character traits opens her view up to possible empirical refutation (Harman 1999; 2000; Doris 1998; 2002). On this skeptical view, the gathering data concerning the etiological role played by situational stimuli paint a different picture of moral agency than the one adopted by Plato, Aristotle, and their contemporary followers. Rather than a world navigated by moral agents armed with robust and stable habituated dispositions to act, what we find is a world in which situational forces play a much larger role in moral agency than philosophers have traditionally assumed.

For present purposes, let’s call this the Situationist Challenge. To get a feel for the sorts of empirical pressures that allegedly face virtue theorists, consider the surprising results from the “helping for a dime” studies reported in Isen & Levin (1972). Subjects were random pedestrians in San Francisco, CA, and Philadelphia, PA, who stopped to use a public payphone. Whereas some subjects found a dime that had been planted in the phone booth by researchers, other subjects did not. When subjects left the phone booth, a female confederate of the researchers dropped an armful of papers, and the researchers recorded whether or not the individuals leaving the phone booth stopped to help. The results were shocking: the subjects who found the dime were 22 times more likely to help the woman who “dropped” her papers than the subjects who did not. Let that sink in for a moment. The slight elevation in mood caused by randomly finding a dime in a pay phone made a significant difference in subjects’ moral behavior—something presumably all participants would deny if asked. Perhaps the most surprising feature of these results isn’t that something so morally insignificant—namely, finding a dime in a phone booth—had such a pronounced effect on people’s moral behavior; rather, it’s that these results appear to be representative of moral behavior rather than anomalous.
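As a back-of-the-envelope check on that “22 times” figure, here is a small Python sketch. The 2×2 counts below are the figures commonly cited for Isen & Levin (1972)—14 of 16 dime-finders helped, versus 1 of 25 non-finders—but treat them as illustrative rather than authoritative; the point is only how a ratio of helping rates (a relative risk) of roughly 22 falls out of such a table.

```python
# Illustrative counts commonly cited for Isen & Levin (1972);
# treat as a sketch, not as the authoritative dataset.
dime_helped, dime_total = 14, 16        # found a dime: helped / total
no_dime_helped, no_dime_total = 1, 25   # no dime: helped / total

p_dime = dime_helped / dime_total            # helping rate with dime (0.875)
p_no_dime = no_dime_helped / no_dime_total   # helping rate without (0.04)

# Relative risk: how many times more likely dime-finders were to help.
relative_risk = p_dime / p_no_dime
print(f"Helping rate with dime:    {p_dime:.3f}")
print(f"Helping rate without dime: {p_no_dime:.3f}")
print(f"Relative risk:             {relative_risk:.1f}")  # ≈ 21.9, i.e. roughly 22x
```

Note that the oft-quoted “22 times” is a ratio of helping *rates*; an odds ratio computed from the same table would be far larger, since almost no one in the no-dime condition helped.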

Unsurprisingly, virtue theorists have not taken the Situationist Challenge lightly. Perhaps the most common rejoinder to characterological skepticism is to suggest that the situationist literature is entirely consistent with traditional accounts of virtue ethics. Indeed, we are told that the only reason virtue ethics appears to be under empirical attack is that the skeptics have purportedly either misread or misrepresented the ancient virtue theorists. In making their case on this front, virtue theorists often appeal to the purported rarity of truly virtuous individuals. Merritt (2000) summarizes this so-called “argument from rarity” (Doris 1998) in the following manner:

Now many sympathizers with virtue ethics will want to say, “So what? The experimental evidence shows only that most people aren’t genuinely virtuous. (And haven’t we always known this anyway, without needing experimental psychology to reveal it?) That doesn’t mean there’s a problem with the normative ideal of virtue ethics. It just means that being genuinely virtuous is a rare and difficult achievement.” These people have a point. (pp. 367-68)

For instance, as Kamtekar (2004) points out, Plato openly admits that non-virtuous people are “impulsive and unstable” (Lysis 214d) and that they “shift back and forth” (Gorgias 481e). Moreover, Kamtekar reminds us that Plato also acknowledges that “guaranteeing the behavior of ordinary people (i.e., people who lack philosophical wisdom) consistently conforms to virtue requires manipulating their situations—not only the environment in which people are brought up but also the situations in which they are called upon to act as adults” (2004, p. 483). According to Kamtekar, Merritt, and others, if this reading of the ancient virtue theorists is correct, then not only is the literature on situationism consistent with the moral theories of Plato and Aristotle, but the gathering data are precisely what these theorists would predict. Finding a dime only makes it more likely that those lacking in virtue will help. The truly virtuous would have helped regardless of whether or not they found a dime. The same could arguably be said about all of the aforementioned situationist studies. What you find in each case is evidence that for many (if not most) people, situational forces can sometimes trump dispositional traits when it comes to moral behavior. However, this is purportedly a far cry from a refutation of virtue ethics. Instead, it is a reminder of just how genuinely hard it is to be a virtuous agent.

Of course, this is not the only line of response open to the virtue ethicists.[1] Rather than falling back on the rarity of virtue—which is not a move without its dialectical and theoretical costs—virtue theorists could also opt for any of the following strategies:

The Empirical Counter-Challenge: One could directly dispute the data from situational psychology rather than try to show that the data are compatible with the characterological moral psychology of virtue ethics.[2]

The Immunization Thesis: One could accept the data on situationism at face value and suggest that we can use these data to immunize or shield ourselves from the etiological encroachment of morally irrelevant situational variables—i.e., armed with a better understanding of the threat of situationism, we will be better equipped to allow our dispositions to find expression in our action.[3]

The Mischaracterization Response: Rather than focusing on the supposed rarity of truly virtuous agents and behavior, virtue theorists could focus instead on trying to show that characterological skeptics have misunderstood or misstated other important aspects of virtue theory.[4]

The Revisionist Response: The virtue theorists could accept that the data on situationism puts serious pressure on classical versions of virtue ethics. So, rather than defending the Platonic or Aristotelian views from the challenge, these virtue theorists could offer revisionist or rival versions of virtue ethics that are purportedly better equipped to deal with the situationist challenge.[5]

Regardless of which of these strategies the virtue theorist adopts, it is clear that the empirical data on the dispositional and situational roots of behavior have forced virtue theorists to carefully reexamine both the views of the ancients and the contemporary views rooted in them. While the data themselves do not (and presumably cannot) undermine virtue ethics full stop, they do represent an empirically tractable challenge that virtue theorists must take seriously.

References:

Annas, J. 2005. “Comments on John Doris’ Lack of Character.” Philosophy and Phenomenological Research 73: 636-47.

[2] See, e.g., Kamtekar (2004). The two most common issues raised about the studies on situationism are: (a) several of the studies have very small sample sizes; and (b) the studies don’t observe people’s behaviors across situations.

[3] See, e.g., Merritt (2000): “Situationist psychology does show that certain kinds of seemingly irrelevant situational factors may derail a person’s usual expressions of ethical concern…but that’s less likely to happen if we are aware of such situational factors and their usual influences on behavior” (p. 372).

[4] See, e.g., Kamtekar (2004): “I argue that the character traits conceived of and debunked by Situationist social psychological studies have very little to do with character as it is conceived of in traditional virtue ethics” (p. 460).

[5] See, e.g., Merritt (2000): “What is important for Hume’s purposes is that one’s possession of the virtues, which he characterizes as socially or personally beneficial qualities of mind, should be relatively stable over time somehow or other, not that it should be stable through taking a special, self-sufficiently sustainable psychological form. A Humean approach leaves us plenty of room to say that if an otherwise admirable structure of motivation were stable in a person only because it was in large part socially sustained, it would be no less a genuine virtue for that” (p. 378).

Below is a fascinating and enlightening interview of Thomas Nadelhoffer by Harvard Law Student Brian Wood. The interview, titled “Developments in Neuroscience and their Implications for Criminal Law,” lasts just over 51 minutes. It was conducted as part of the Law and Mind Science Seminar at Harvard (taught by Situationist Editor Jon Hanson).

Bio:

Situationist Contributor Dr. Thomas Nadelhoffer was born and raised in Atlanta, Georgia. He has earned degrees in philosophy from The University of Georgia (BA), Georgia State University (MA), and Florida State University (PhD). Since 2006, he has been an assistant professor of philosophy and a member of the law and policy faculty at Dickinson College in Carlisle, Pennsylvania. He is currently at Duke University as a Visiting Scholar in the Kenan Institute for Ethics.

His main areas of research include moral psychology, the philosophy of action, free will, punishment theory, and neurolaw. He is particularly interested in research at the crossroads of philosophy and the sciences of the mind. His articles have appeared in journals such as Analysis, Midwest Studies in Philosophy, Mind & Language, Neuroethics, and Philosophy and Phenomenological Research. He is the coordinator of the blogs Flickers of Freedom and the Law and Neuroscience Blog. He is also a contributing author to blogs such as The Situationist, The Leiter Reports, and Experimental Philosophy.

* * *

Table of contents:

What have you been working on recently? 0:22

What are some areas of the legal system in which this science is relevant? 1:07

What are the problems with the traditional approaches to using science in the criminal system, and how are new scientific methods relevant to fixing them? 2:15

How could these newer scientific methods be employed? 4:09

What are the rationales society has traditionally cited as justifying criminal punishment? 6:55

Can you explain what Compatibilism is? 10:17

Aren’t there problems with notions of moral responsibility under Compatibilism? 12:26

How do neuroscience, Compatibilism, and determinism relate to our notions of law? 12:55

What do you see as the problems with the classic approaches to punishment? 15:25

Is there anything especially strange about Retributivism to you? 20:37

Can you detail what you believe to be the just reasons for punishment and how society can punish people more justly? 23:41

In your view, how would you punish psychopaths under the consequentialist rationale? 30:40

Can you give an example of the distinctions psychopaths cannot draw? 34:50

What’s the most interesting experiment you have conducted? 37:01

Do you think these participants just misunderstood what determinism is? 38:15

What qualities do you believe you and other researchers and philosophers need to be successful? 40:03

How has what you have learned through your research influenced the way you live your life? 41:35

How do you see the relationship of law and mind science developing in the future? 44:55

On Thursday, April 1st, the HLS Student Association for Law and Mind Sciences (SALMS) and the Harvard Graduate Mind, Brain, and Behavior (MBB) Steering Committee are hosting a talk by Joshua Greene called “Moral Cognition and the Law.”

Joshua Greene is an Assistant Professor in the Psychology Department at Harvard University. He studies emotion and reason in moral judgment using behavioral experiments, functional neuroimaging (fMRI), and other neuroscientific methods. The goal of his research is to understand how moral judgments are shaped by automatic processes, such as emotional gut reactions, and controlled cognitive processes, such as reasoning and self-control.

The event will take place in Pound 101 at Harvard Law School, from 12:00 – 1:00 p.m.

From Youtube: Experimental philosophers take on one of philosophy’s most revered figures, Aristotle, by seeing if ordinary people agree with Aristotle’s conclusions about when one is forced to do something and when one does it freely.

From Googlevideo: John A. Bargh, Ph.D., professor at Yale University [and Situationist Contributor], speaks during a symposium at the Society for Personality and Social Psychology Convention in Tampa, FL. This special keynote session was titled “What Social Psychology can Tell Us about the ‘Free Will’ Question.”

* * *

From Googlevideo: Roy Baumeister of Florida State University speaks at the same event about the usefulness and complexity of consciousness and human culture.

Despite recurring interest in the potential for affect to influence “rational” reasoning, in particular the effect of emotion on moral judgments, legal scholars and social scientists have conducted far less empirical research directly testing such questions than might be expected. Nevertheless, the extent to which affect can influence moral decisions is an important question for the law. Watching a certain sort of movie, for instance, can significantly influence responses to opinion polls conducted shortly after that movie. Legislative action based on public opinion as so expressed, or media reports of public opinion based on such polls, could thus inaccurately reflect that public sentiment. This is especially so for social and policy issues that are heavily emotional, such as capital punishment or affirmative action.

Most discussion on law and emotions has been theoretical, addressing philosophical approaches to law and emotion. What psychological data exist are mixed, and virtually none appears in the legal literature. Thus, to bring the legal academic discussion into the realm of the empirical, and to provide further data on the question of affective influences on moral and legal decision-making, I conducted two experimental studies examining mood’s influence on moral judgments.

After clarifying what I mean by “moral judgment” and how I measured it, I report the methodologies and results of those studies. Briefly, the data support other empirical research showing that individuals in a positive mood (here, happiness) tend to process information more superficially than those in a negative mood (here, anxiety). I then discuss the results’ implications for the legal system, including implications for trials (e.g., victim impact statements or graphic testimony), and implications for public policy-making (e.g., the context of public opinion polls and surveys).

Most broadly, the data contribute to the developing legal literature on the role of emotions in the law. They highlight the importance of conducting empirical research, and of the translation of such empirics to specific legal and policy applications.

Professor Nadelhoffer’s main areas of research include moral psychology, the philosophy of action, free will, punishment theory, and neurolaw. He is particularly interested in research at the crossroads of philosophy and the sciences of the mind. His articles have appeared in journals such as Analysis, Midwest Studies in Philosophy, Mind & Language, Neuroethics, and Philosophy and Phenomenological Research. He is also a contributing author to several other blogs, such as The Leiter Reports, The Garden of Forking Paths, and Experimental Philosophy. When not thinking about or teaching philosophy, he spends lots of time hanging out with his pack of dogs, climbing boulders and walls, and listening to indie rock.

Imagine you are serving on a jury: the defendant is charged with murder, but he also suffers from a brain tumor that causes erratic behavior. Is he to be held responsible for the crime? Now imagine you are the judge: What should the defendant’s sentence be? Does the tumor count as a mitigating circumstance?

The assignment of responsibility and the choice of an appropriate punishment lie at the heart of our justice system. At the same time, these are cognitive processes like many others—reasoning, remembering, decision-making—and as such must originate in the brain. These two facts lead to the intriguing question: How does the brain enable judges, juries, and you and me to perform these tasks? What are the neural mechanisms that let you decide whether someone is guilty or innocent?

A recent study, published in the December 2008 issue of the journal Neuron by Joshua Buckholtz and his colleagues at Vanderbilt University, tackles exactly this question. Until recently, such topics would have been out of the reach of cognitive neuroscience for lack of methods; today, functional magnetic resonance imaging (fMRI) allows researchers to watch the brain “in action” as normal human participants make decisions about responsibility and punishment. In the new study, Buckholtz and colleagues asked participants to read vignettes describing hypothetical crimes that a fictitious agent, “John,” commits against another person. The stories were divided into three conditions: in the first, the “responsibility” (R) condition, the perpetrator was fully responsible for the negative consequences of his action against the victim; for instance, John might have intentionally pushed his fiancée’s lover off a cliff. In the “diminished responsibility” (DR) condition, mitigating circumstances were present that reduced John’s responsibility; imagine that John committed the same crime, but suffered from a brain tumor.

And finally, the “no crime” (NC) condition consisted of stories that did not describe crimes. The participants had to make judgments regarding the degree of punishment that John should receive, on a scale from one to nine.

The authors then analyzed the brain activation linked to these judgments. To identify neural correlates of responsibility, they contrasted activation in the R and DR conditions. Note that the stories in the two conditions are identical, except for the degree to which John is responsible for his crime. This contrast thus aims to identify which regions of the brain are involved in assigning responsibility for a crime, holding constant the crime itself. Buckholtz and colleagues found a peak of activation in the right dorsolateral prefrontal cortex (rDLPFC), a brain region on the top surface of the right frontal lobe that is known to be involved in high-level cognitive processes such as reasoning and decision-making. In addition, this same region was more active when subjects thought a diminished-responsibility crime deserved punishment compared with when it did not.

Thus, these findings suggest that rDLPFC might be involved in assigning responsibility for crimes, or making judgments about appropriate punishments. Based on this finding, one might have expected that activation in rDLPFC should be higher when participants decide that very severe punishments are appropriate. Buckholtz and colleagues found no correlation between neural activation and punishment magnitude in rDLPFC, however, suggesting that this brain region does not directly underlie the decision on the amount of punishment. In contrast, there was some evidence that activation in emotion-related areas, such as the amygdala, correlates with the degree of punishment subjects assign to John: higher punishment scores were associated with higher activation in these regions during the decision period.

Reconciling the Findings

Have we found, then, the brain center for jurisprudence? Probably not: the brain regions identified in this new study, in particular right DLPFC, have previously been highlighted in a number of other studies addressing related but slightly different questions. Unifying patterns do exist, however. We therefore first describe some related studies, and then outline a possible reconciliation between the different findings.

What does rDLPFC do when it isn’t busy assigning responsibility for crimes? One answer comes from a study by Alan Sanfey and colleagues in 2003: these authors found activation in rDLPFC when subjects decided whether to accept or reject a low offer in a two-person economic game called the Ultimatum Game. In addition, Daria Knoch and her colleagues in 2006 found that when rDLPFC was deactivated with a technique called repetitive transcranial magnetic stimulation (TMS), participants became less able to reject low offers in this game, although they still judged these offers as very unfair. A different line of work by Joshua Greene and colleagues in 2004 suggests that rDLPFC may be involved in moral reasoning. They presented participants with moral dilemmas such as the decision whether or not to kill one’s own crying child to keep it from drawing the attention of enemy soldiers and thereby endangering the whole group. The rDLPFC region was activated when subjects acted in the interest of greater overall welfare, against their emotional impulses. Finally, rDLPFC was also highlighted by another study involving social decision-making by Manfred Spitzer and colleagues in 2007: these authors asked participants how much of their wealth they wanted to share with another player. This amount wasn’t very much, usually—unless participants were threatened with punishment. Under the punishment threat, participants transferred more money, and rDLPFC was more active. Moreover, the more subjects changed their behavior under the punishment threat relative to the situation without a threat, the more rDLPFC was activated, suggesting that rDLPFC played a key role in adapting behavior when facing the sanctioning threat.

* * *

To read the rest of the article, including Haushofer and Fehr’s discussion of the “big picture,” click here.

A recent story on MSNBC summarizes research indicating “why we’re all moral hypocrites.” Here are a few excerpts.

* * *

Most of us, whether we admit it or not, are moral hypocrites. We judge others more severely than we judge ourselves.

Mounting evidence suggests moral decisions result from the jousting between our knee-jerk responses . . . and our slower, but more collected evaluations. Which is more responsible for our self-leniency?

To find out, a recent study presented people with two tasks. One was described as tedious and time-consuming; the other, easy and brief. The subjects were asked to assign each task to either themselves or the next participant. They could do this independently or defer to a computer, which would assign the tasks randomly.

Eighty-five percent of 42 subjects passed up the computer’s objectivity and assigned themselves the short task – leaving the laborious one to someone else. Furthermore, they thought their decision was fair. However, when 43 other subjects watched strangers make the same decision, they thought it unjust.
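For a sense of how far that result departs from chance, here is a quick exact binomial sketch in Python. The split of 36 of 42 subjects is my reconstruction from the reported “eighty-five percent of 42 subjects” (36/42 ≈ 0.857), so treat it as an assumption; the point is only that if each subject were equally likely to assign either task to themselves, so lopsided an outcome would be vanishingly unlikely.

```python
from math import comb

# Assumed split consistent with the reported "85 percent of 42 subjects".
n, k = 42, 36

# Exact one-sided binomial tail: probability of at least k self-serving
# choices out of n if each choice were a fair coin flip.
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"P(>= {k} of {n} self-serving choices under chance) = {p_value:.2e}")  # ≈ 1.41e-06
```

A probability on the order of one in a million is of course only a null-model comparison, not the study’s own analysis, but it conveys how decisively subjects favored themselves.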

* * *

The researchers then “constrained cognition” by asking subjects to memorize long strings of numbers. In this greatly distracted state, subjects became impartial. They thought their own transgressions were just as terrible as those of others.

This suggests that we are intuitively moral beings, but “when we are given time to think about it, we construct arguments about why what we did wasn’t that bad,” said lead researcher Piercarlo Valdesolo, who conducted this study at Northeastern University and is now a professor at Amherst College.

* * *

The researchers speculate that instinctive morality results from evolutionary selection for team players. Being fair, they point out, strengthens mutually beneficial relationships and improves our chances for survival.

So why do we choose to judge ourselves so leniently?

* * *

To read the entire article, including the answer to that last question, click here.

In 2006, Slate’s John Lackman wrote a fine introduction to the then-nascent movement within philosophy known as “experimental philosophy” or “X-Phi”:

Philosophers have ignored the real world because it’s messy, full of happenstance details and meaningless coincidences; philosophy, they argue, has achieved its successes by focusing on deducing universal truths from basic principles. X-phi, on the other hand, argues that philosophers need to ask people what and how they think. Traditional philosophy relies on certain intuitions, presented as “common sense,” that are presumed to be shared by everyone. But are they? For example, can people be morally responsible for their actions if they don’t have free will? Many philosophers have assumed that all sane people would of course say no. Experimentalists don’t assume. They ask. Recently, they presented the following scenario to two groups:

* * *

Bill and his wife were flying home from vacation with their friend Frank, who was having an affair with Bill’s wife, as Bill knew. Kidnappers injected Bill with a drug that forced him to obey orders, then told him to shoot Frank in the head, which he did.

* * *

They told the first group that Bill wanted Frank dead and so grieved little for him. To the second, they said that Bill hated what he’d done. Both groups were then asked if Bill deserved blame for Frank’s death. Traditional philosophers have argued that Bill shouldn’t be blamed in either case because it’s common sense that moral responsibility requires free will. But, in fact, the first x-phi group did blame Bill in the scenario in which he welcomed Frank’s death. Similarly, groups praised a hypothetical involuntary organ donor, even though he had no choice but to give. This doesn’t prove that you can have moral responsibility without free will. But it does vaporize a traditional philosophical objection to that view—that it lacks common sense.

* * *

Experimental philosophy is also challenging such basic philosophical notions as “intentional action.” What do we mean when we say that someone did something intentionally? Most philosophers assume that we’d all agree that this is a question of the actor’s state of mind. Experimentalist Joshua Knobe of the University of North Carolina at Chapel Hill asked college students: If a businessman interested only in profits knowingly harms the environment, should we say he did so intentionally? The students answered yes. Yet if the same businessman knowingly helped the environment, they said no. Apparently, intentionality depends not just on an actor’s state of mind, but also on the outcome he or she produces. And also on skill—the ability to carry out one’s intention. If you hit a bull’s-eye your first time playing darts, did you do it “intentionally”? It turns out that the most common answer is yes if you keep regularly hitting the target and no if you don’t. Outcome trumps skill, though, when it comes to determining intentionality. Say a man tries to shoot his aunt, misfires, but is lucky and hits her anyway. Most people will say he killed her intentionally, even though he didn’t really have the skill to. It’s enough that he wanted to. Hijackers with experimental drugs, land-despoiling executives, aunt killers—what’s not to like about x-phi?

Although X-Phi certainly has its critics, it has only gained in strength and legitimacy since Lackman’s article, and the lines between this new brand of philosophy and cognitive psychology have faded into what appears to be a healthy, interdisciplinary blur. A recent “Call for Papers” summarized the field this way:

Over the last decade, philosophers have started using experimental and quasi-experimental methods to obtain data that are relevant for philosophical controversies. Surprising results have been obtained for a large range of topics, including intuitions about reference, intuitions about free will and responsibility, and the relation between judgments of causation and moral judgments. Meanwhile, psychologists are increasingly paying attention to aspects of our folk theories that directly bear on philosophy, such as the nature of folk explanation, the nature of causal judgments, the processes underlying moral judgments, the folk concept of race, and the nature of imagination. This movement, unified by a common desire to apply experimental methods to philosophical issues, is known as “experimental philosophy.”