
Archive for the ‘Philosophy’ Category

Recent discoveries in evolutionary biology, ethology, neurology, cognitive psychology, and behavioral economics impel us to rethink the very foundations of law if we want to answer the many questions that remain unanswered in legal theory. Where does our ability to interpret rules and to think in terms of fairness toward others come from? Does the ability to reason about norms derive from certain aspects of our innate rationality and from mechanisms that were sculpted into our moral psychology by evolutionary processes?

Legal theory must take the complexity of the human mind into account

Any answer to these foundational issues requires us to take into consideration what these other sciences are discovering about how we behave. For instance, ethology has shown that many moral behaviors we usually assume are uniquely human have been identified in other species as well.

Please watch this video, a TED talk by primatologist Frans de Waal:

The skills needed to feel empathy, to engage in mutual cooperation, to react to certain injustices, to form coalitions, to share, and to punish those who refuse to comply with expected behaviors – abilities once considered exclusive to humans – have been observed in many animal species, especially those closer to our evolutionary lineage, such as the great apes. In the human case, these instinctive elements are also present. Even children around one year of age show great capacity for moral cognition. They can identify patterns of distributive justice, even if they cannot explain how they reached a certain conclusion (after all, they cannot even speak at that age!).

In addition, several studies have shown that certain neural connections in our brains are actively involved in processing information related to capabilities typical of normative behavior. Think about the ability to empathize, for example. It is an essential skill that prevents us from seeing other people as things or mere means. Empathy is needed to respect the Kantian categorical imperative to treat others as ends in themselves, never merely as means to other ends. This is something many psychopaths cannot do, because their ability to empathize with others is severely reduced. Year after year, fMRI studies have shown that many diagnosed psychopaths have deficiencies in brain areas associated with empathy.

If this sounds like science fiction, please consider the following cases.

A 40-year-old man, who had hitherto displayed absolutely normal sexual behavior, was kicked out by his wife after she discovered that he was visiting child pornography sites and had even tried to sexually molest children. He was arrested, and the judge determined that he would have to complete a rehabilitation program for sex addiction or face jail. But he was soon expelled from the program after propositioning women there. Just before being arrested again for failing the program, he developed a severe headache and went to a hospital, where he underwent an MRI. The doctors identified a tumor in his orbitofrontal cortex, a brain region associated with moral judgment, impulse control, and the regulation of social behavior. After the removal of the tumor, his behavior returned to normal. Seven months later, the deviant behavior returned – and further tests showed the reappearance of the tumor. After the removal of the new tumor, his sexual behavior once again returned to normal.

You could also consider the case of Charles Whitman. Until he was 24, he had been a reasonably normal person. However, on August 1st, 1966, he climbed to the top of the Tower at the University of Texas, where, armed to the teeth, he killed 13 people and wounded 32 before being killed by the police. It was later discovered that, just before the mass shooting, he had also murdered his wife and his mother. The previous day, he had left a typewritten letter in which one could read the following:

“I do not quite understand what it is that compels me to type this letter. Perhaps it is to leave some vague reason for the actions I have recently performed. I do not really understand myself these days. I am supposed to be an average reasonable and intelligent young man. However, lately (I cannot recall when it started) I have been a victim of many unusual and irrational thoughts.”

What does this mean for legal theory? At the very least, it means that law has so far been based on a false metaphysical conception: that the brain is a Lockean blank slate and that our actions derive purely from our rational dispositions. Criminal law theory assumes that an offender breaks the law exclusively out of free will and reasoning. Private law assumes that people sign contracts only after considering all of their possible legal effects, fully conscious of the reasons that motivated them to do so. Constitutional theory assumes that everyone is endowed with a rational disposition that enables the free exercise of civil and constitutional rights such as freedom of expression or freedom of religion. It is not in question that we are able to exercise such rights. But these examples show that the capacity to interpret norms and to act in accordance with the law does not derive from a blank slate endowed with free will and rationality, but from a complex mind that evolved in our hominin lineage and that relies on brain structures that enable us to reason and to choose among alternatives.

This means that our rationality is not perfect. It is affected not only by tumors but also by various cognitive biases that distort our decisions. Psychologists have studied these biases since the 1970s. Daniel Kahneman, for example, won the 2002 Nobel Prize in Economic Sciences for his research on their impact on decision-making. We can make truly irrational decisions because our mind relies on certain heuristics (fast-and-frugal rules) to evaluate situations. In most situations, these heuristics help us make the right decisions, but they can also lead us into serious mistakes.

There are dozens of heuristics that structure our rationality. We are terrible at assessing the significance of statistical correlations, we discard unfavorable evidence, we tend to follow the most common behavior in our group (the herd effect), and we tend to see past events as if they had been easily predictable. We are inclined to cooperate with those who are part of our group (the parochialist bias), but not with those who belong to another group. And those are just some of the biases that have already been identified.

It is really hard to overcome these biases, because they constitute much of what we call rationality; these flaws are an unavoidable part of it. Sure, with some effort we can avoid many mistakes by using techniques that lead to unbiased, correct answers. But such artificial techniques can be expensive and demand considerable effort. We can use a computer and train our mathematical skills in order to overcome biases that cause errors in statistical evaluation, for instance. But how can we use a computer to reason about moral or legal issues while "getting around" these psychological biases? Probably, we can't.

The best we can do is to reconsider the psychological assumptions of legal theory, taking into account what we actually know about our psychology and how it affects our judgment. And there is evidence that these biases really do influence how judges evaluate cases. For instance, a study by Birte Englich, Thomas Mussweiler, and Fritz Strack concluded that even legal experts are affected by cognitive biases. More specifically, they studied the effect of the anchoring bias on judicial activity by submitting 52 legal experts to the following experiment: the participants were asked to examine a hypothetical court case and to determine the sentence in a fictitious shoplifting trial. After reading the materials, the participants had to answer a questionnaire, at the end of which they would set the sentence.

Before answering the questions, however, the participants had to throw a pair of dice to determine the prosecutor's sentencing demand. Half of the dice were loaded so as to always show the numbers 1 and 2; the other half were loaded to show 3 and 6. The sum of the numbers indicated the prosecutor's demand. Afterwards, the participants answered questions about legal issues concerning the case, including the sentencing decision. The researchers found that the results of the dice had a real impact on the proposed sentences: the average penalty imposed by participants whose dice showed the higher result (3 + 6 = 9) was 7.81 months in prison, while participants whose dice showed the lower result (1 + 2 = 3) proposed an average punishment of 5.28 months.
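The size of that anchoring effect can be made concrete with a little arithmetic on the figures just quoted (a minimal sketch; the variable names are mine, not the researchers'):

```python
# Average sentences reported above from the Englich, Mussweiler & Strack
# dice study (figures are those quoted in the text).
high_anchor_mean = 7.81  # months; dice loaded to show 3 + 6 = 9
low_anchor_mean = 5.28   # months; dice loaded to show 1 + 2 = 3

# Absolute and relative gap produced by a transparently irrelevant anchor.
gap = high_anchor_mean - low_anchor_mean
relative_increase = gap / low_anchor_mean

print(f"absolute gap: {gap:.2f} months")              # 2.53 months
print(f"relative increase: {relative_increase:.0%}")  # roughly 48%
```

In other words, a pair of visibly random dice shifted the average proposed sentence by almost half of its baseline value.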

In another study, conducted in Israel, it was found that, on average, tired and hungry judges take the easier decision of denying parole rather than granting it. The researchers divided the judges' daily schedule into three sessions, at the beginning of each of which the participants could rest and eat. It turned out that, soon after eating and resting, judges granted parole in about 65% of cases; by the end of each session, the rate fell to almost zero. Granted, this is not really a cognitive bias but a physiological condition – still, it shows that a tired mind and bodily energy needs can induce decisions that almost everyone would consider intrinsically unfair.

And so on. Study after study shows that (1) our ability to develop moral reasoning is innate, (2) our mind is filled with innate biases that are needed to process cultural information related to compliance with moral and legal norms, and (3) these biases affect our rationality.

This research raises many questions that legal scholars will have to face sooner or later. Would anyone say that due process of law is respected when judges anchor their decisions to completely external factors – factors of which they are not even aware? Of course, the dice study was a controlled experiment, and nobody expects a judge to roll dice before deciding a case. But might judges be influenced by other anchors as well, such as the numbers on a clock, a date on the calendar, or a figure printed on a banknote? And would anyone consider due process respected if parole was denied simply because the case was heard late in the morning? These external elements decisively influenced the judicial outcomes, yet none of them were mentioned in the decisions.

Legal theory needs to incorporate this knowledge into its structure. We need to build institutions capable of taking biases into account and, as far as possible, circumventing them or at least diminishing their influence. For instance, knowing that judges tend to become impatient and harsher toward defendants when they are hungry and tired, a court could require judges to take a 30-minute break after every 3 hours of work in order to restore their capacity to be as impartial as possible. This is just one small suggestion of how institutions could respond to these discoveries.

Of course, there are more complex cases, such as that of offenders who had always displayed good behavior but had the misfortune of developing a brain tumor that contributed to the commission of a crime. Criminal theory is based on the thesis that the agent must intentionally engage in criminal conduct. But is it possible to speak of intention when a tumor was a direct cause of the result? And what if it had been not a tumor but a brain malformation, as occurs in many cases of psychopathy? Saying that criminal law can already resolve such cases by holding that the offender bears no responsibility due to his condition would not solve the problem, because the issue lies in the very concept of intention assumed by legal theory.

And this problem extends into the rest of legal theory. We must take into account the role of cognitive biases in consumer relations. The law has not yet recognized the role of these biases in decision-making, but many companies are well aware of them. How many times have you bought a 750 ml soda for $2.00 just because it cost only $0.20 more than a 500 ml one? Possibly you reasoned that you were paying less per ml than you would for the smaller size. But all you really wanted was 500 ml, and you ended up paying more, for extra soda you didn't even want! In other words, the company simply exploits a bias that affects most people in order to induce them to buy more of its products. Another example: for evolutionary reasons, humans are prone to consuming fatty and sugary foods. Companies exploit this fact to their advantage, which contributes to the obesity crisis we see in the world today. In their defense, companies say that consumers purchased the product of their own accord. What they do not say, but what the neurosciences and evolutionary theory do say, is that our "free will" has a long evolutionary history that propels us toward exactly the kinds of food that, over the years, damage our health. Law needs to take these facts into consideration if it wants to adequately protect and enforce consumer rights.

Law is still based on an "agency model" very similar to game theory's assumption of rationality. But we are not fully rational. Every decision we make is influenced by the way our minds operate. Can we really think it is fair to blame someone who committed a crime on the basis of erroneous results generated by a cognitive bias? And, on the other hand, would it be right to exonerate a defendant on those grounds? To answer these and other difficult questions, legal scholars must rethink the concept of the person assumed by law, taking into account our intrinsic biological nature.

Tufts University professor Daniel C. Dennett discussed the ways in which neuroscience may impact human understanding of moral and legal responsibility to an overflowing audience in Pound Hall at Harvard Law School yesterday.

The event, titled “Free Will, Responsibility, and the Brain,” was sponsored by the Law School’s Student Association for Law and Mind Sciences (SALMS), and began with a Dilbert comic strip depicting free will as an ambiguous concept.

“It does justice to our common sense thinking about free will,” he said of the comic strip.

Dennett, who co-directs the Tufts University Center for Cognitive Studies, is best known for his arguments that human consciousness and free will are the result of physical processes in the brain.

Early in the talk, Dennett asked the audience to flick their right wrists in the next ten seconds, explaining that their brains decided to perform the action a third of a second before it actually occurred.

The experiment showed, he said, that unconscious action of the brain precedes the conscious action of an individual.

“Your conscious is out of the loop,” he said. “A voluntary act begins in the brain unconsciously before the person acts consciously.”

Yet Dennett said that the last minute “veto window,” also known as “free won’t,” allows conscious function to affect the final outcome.

Through the discussion, Dennett said he hoped to figure out how to undo the misunderstandings surrounding neuroscience's implications for human responsibility.

He mentioned the common belief that determinism is incompatible with free will, but quickly dismissed it as a mistake.

The talk was intended to pique interest in understanding the human animal, in accordance with SALMS’s efforts to expose the Law School community to research and concepts from psychology, neuroscience, and other mind sciences, said SALMS President Matthew B. McFeely.

“I hope that attendees of the talk were encouraged to examine a little closer some of commonly held assumptions about people and their behavior,” McFeely said.

* * *

The video of Dan Dennett’s talk will be made available on the PLMS website and The Situationist in November.

On Tuesday, September 28th, the HLS Student Association for Law and Mind Sciences (SALMS) is hosting a talk by Tufts professor Daniel Dennett entitled Free Will, Responsibility, and the Brain.

Professor Dennett is the Austin B. Fletcher Professor of Philosophy at Tufts University, as well as the co-director for the school’s Center for Cognitive Studies. His work examines the intersection of philosophy and cognitive science in relation to religion, biology, science, and the human mind. Professor Dennett has also contributed greatly to the fields of evolutionary theory and psychology.

Professor Dennett will turn a critical eye on the recent influx of work regarding the impact of neuroscience on scholarly concepts of moral and legal responsibility.

He will be speaking in Pound 101 from 12:00 – 1:00 p.m. Free burritos will be provided!

We commonly describe people’s behavior in terms of character traits such as honest, courageous, generous, and the like. Furthermore, we praise and reward those who display virtuous character traits and we look down upon those who exemplify vices such as dishonesty, cowardice, and stinginess. That virtue ethics captures this aspect of our everyday moral practices—i.e., our tendency to describe human behavior in terms of dispositional traits that give rise to virtues and vices—is purportedly one of its chief selling points. On Aristotle’s intuitively plausible view, for instance, being properly habituated, morally speaking, makes it more likely that one will engage in the right behavior, under the right circumstances, and for the right reasons. Moreover, not only does having the virtues make it maximally likely that one will engage in virtuous activity, but Aristotle also suggests that once an agent acquires the proper character traits, these dispositions are “firm and unchangeable” (NE, 1105b1). So, while the virtues are not themselves sufficient for moral behavior, truly virtuous individuals will usually do what’s right even under the most difficult circumstances (NE, 1105a88-10). If, on the other hand, virtuous character traits were not robust and stable predictors of moral behavior as Aristotle and others suggest, it is unclear why inculcating the virtues would better equip one to reliably navigate the complex moral world we inhabit.

However, as intuitive and attractive as the characterological approach to moral psychology may initially appear, some philosophers have recently suggested that the virtue theorist's commitment to robust and stable character traits opens her view up to possible empirical refutation (Harman 1999; 2000; Doris 1998; 2002). On this skeptical view, the gathering data concerning the etiological role played by situational stimuli paint a different picture of moral agency than the one adopted by Plato, Aristotle, and their contemporary followers. Rather than a world being navigated by moral agents armed with robust and stable habituated dispositions to act, what we find is a world in which situational forces play a much larger role in moral agency than philosophers have traditionally assumed.

For present purposes, let's call this the Situationist Challenge. To get a feel for the sorts of empirical pressures that allegedly face virtue theorists, consider the surprising results from the "helping for a dime" studies reported in Isen & Levin (1972). Subjects were random pedestrians in San Francisco, CA and Philadelphia, PA who stopped to use a public payphone. Whereas some subjects found a dime that had been planted in the phone booth by researchers, other subjects did not find a dime. When subjects left the phone booth, a female confederate of the researchers dropped an armful of papers, and researchers recorded whether or not the individuals leaving the phone booth stopped to help. The results were shocking: the subjects who found the dimes were 22 times more likely to help the woman who "dropped" her papers than the subjects who did not find a dime. Let that sink in for a moment. The slight elevation in emotion caused by randomly finding a dime on top of a pay phone made a significant difference in subjects' moral behavior—something presumably all participants would deny if asked. Perhaps the most surprising feature of these results isn't that something so morally insignificant—namely, finding a dime in a phone booth—had such a pronounced effect on people's moral behavior; rather, it's that these results appear to be representative of moral behavior rather than anomalous.

Unsurprisingly, virtue theorists have not taken the Situationist Challenge lightly. Perhaps the most common rejoinder to characterological skepticism is to suggest that the situationist literature is entirely consistent with traditional accounts of virtue ethics. Indeed, we are told that the only reason virtue ethics appears to be under empirical attack is that the skeptics have purportedly either misread or misrepresented the ancient virtue theorists. In making their case on this front, virtue theorists often appeal to the purported rarity of truly virtuous individuals. Merritt (2000) summarizes this so-called “argument from rarity” (Doris 1998, p. ) in the following manner:

Now many sympathizers with virtue ethics will want to say, "So what? The experimental evidence shows only that most people aren't genuinely virtuous. (And haven't we always known this anyway, without needing experimental psychology to reveal it?) That doesn't mean there's a problem with the normative ideal of virtue ethics. It just means that being genuinely virtuous is a rare and difficult achievement." These people have a point. (pp. 367-68)

For instance, as Kamtekar (2004) points out, Plato openly admits that non-virtuous people are “impulsive and unstable” (Lysis 214d) and that they “shift back and forth” (Gorgias 481e). Moreover, Kamtekar reminds us that Plato also acknowledges that “guaranteeing the behavior of ordinary people (i.e., people who lack philosophical wisdom) consistently conforms to virtue ethics requires manipulating their situations—not only the environment in which people are brought up but also the situations in which they are called upon to act as adults” (2004, p.483). According to Kamtekar, Merritt, and others, if this reading of the ancient virtue theorists is correct, then not only is the literature on situationism consistent with the moral theories of Plato and Aristotle, but the gathering data are precisely what these theorists would predict. Finding a dime only makes it more likely that those lacking in virtue will help. The truly virtuous would have helped regardless of whether or not they found a dime. The same could arguably be said about all of the aforementioned situationist studies. What you find in each case is evidence that for many (if not most) people, situational forces can sometimes trump dispositional traits when it comes to moral behavior. However, this is purportedly a far cry from a refutation of virtue ethics. Instead, it is a reminder of just how genuinely hard it is to be a virtuous agent.

Of course, this is not the only line of response open to the virtue ethicists.[1] Rather than falling back on the rarity of virtue—which is not a move without its dialectical and theoretical costs—virtue theorists could also opt for any of the following strategies:

The Empirical Counter-Challenge: One could directly dispute the data from situational psychology rather than try to show that the data are compatible with the characterological moral psychology of virtue ethics.[2]

The Immunization Thesis: One could accept the data on situationism at face value and suggest that we can use these data to immunize or shield ourselves from the etiological encroachment of morally irrelevant situational variables—i.e., armed with a better understanding of the threat of situationism, we will be better equipped to allow our dispositions to find expression in our action.[3]

The Mischaracterization Response: Rather than focusing on the supposed rarity of truly virtuous agents and behavior, virtue theorists could focus instead on trying to show that characterological skeptics have misunderstood or misstated other important aspects of virtue theory.[4]

The Revisionist Response: The virtue theorists could accept that the data on situationism puts serious pressure on classical versions of virtue ethics. So, rather than defending the Platonic or Aristotelian views from the challenge, these virtue theorists could offer revisionist or rival versions of virtue ethics that are purportedly better equipped to deal with the situationist challenge.[5]

Regardless of which of these strategies the virtue theorist adopts, it is clear that the empirical data on the dispositional and situational roots of behavior have forced virtue theorists to carefully reexamine both the views of the ancients as well as the contemporary views rooted in these earlier views. While the data themselves do not (and presumably cannot) undermine virtue ethics full stop, they do represent an empirically-tractable challenge that virtue theorists must take seriously.

References:

Annas, J. 2005. “Comments on John Doris’ Lack of Character.” Philosophy and Phenomenological Research 73: 636-47.

[2] See, e.g., Kamtekar (2004). The two most common issues raised about the studies on situationism are: (a) several of the studies have very small sample sizes; and (b) the studies don’t observe people’s behaviors across situations.

[3] See, e.g., Merritt (2000): “Situationist psychology does show that certain kinds of seemingly irrelevant situational factors may derail a person’s usual expressions of ethical concern…but that’s less likely to happen if we are aware of such situational factors and their usual influences on behavior” (p. 372).

[4] See, e.g., Kamtekar (2004): “I argue that the character traits conceived of and debunked by Situationist social psychological studies have very little to do with character as it is conceived of in traditional virtue ethics” (p. 460).

[5] See, e.g., Merritt (2000): “What is important for Hume’s purposes is that one’s possession of the virtues, which he characterizes as socially or personally beneficial qualities of mind, should be relatively stable over time somehow or other, not that it should be stable through taking a special, self-sufficiently sustainable psychological form. A Humean approach leaves us plenty of room to say that if an otherwise admirable structure of motivation were stable in a person only because it was in large part socially sustained, it would be no less a genuine virtue for that” (p. 378).

Below is a fascinating and enlightening 51-minute interview of Thomas Nadelhoffer by Harvard Law student Brian Wood, titled "Developments in Neuroscience and their Implications for Criminal Law." It was conducted for the Law and Mind Science Seminar at Harvard (taught by Situationist Editor Jon Hanson).

Bio:

Situationist Contributor Dr. Thomas Nadelhoffer was born and raised in Atlanta, Georgia. He has earned degrees in philosophy from The University of Georgia (BA), Georgia State University (MA), and Florida State University (PhD). Since 2006, he has been an assistant professor of philosophy and a member of the law and policy faculty at Dickinson College in Carlisle, Pennsylvania. He is currently at Duke University as a Visiting Scholar in the Kenan Institute for Ethics.

His main areas of research include moral psychology, the philosophy of action, free will, punishment theory, and neurolaw. He is particularly interested in research at the crossroads of philosophy and the sciences of the mind. His articles have appeared in journals such as Analysis, Midwest Studies in Philosophy, Mind & Language, Neuroethics, and Philosophy and Phenomenological Research. He is the coordinator of the blogs Flickers of Freedom and the Law and Neuroscience Blog. He is also a contributing author to blogs such as The Situationist, The Leiter Reports, and Experimental Philosophy.

* * *


Table of contents:

What have you been working on recently? 0:22

What are some areas of the legal system in which this science is relevant? 1:07

What are the problems with the traditional approaches to using science in the criminal system, and how are new scientific methods relevant to fixing them? 2:15

How could these newer scientific methods be employed? 4:09

What are the rationales society has traditionally cited as justifying criminal punishment? 6:55

Can you explain what Compatibilism is? 10:17

Aren't there problems with notions of moral responsibility under Compatibilism? 12:26

How do neuroscience, Compatibilism, and determinism relate to our notions of law? 12:55

What do you see as the problems with the classic approaches to punishment? 15:25

Is there anything especially strange about Retributivism to you? 20:37

Can you detail what you believe to be the just reasons for punishment and how society can punish people more justly? 23:41

In your view, how would you punish psychopaths under the consequentialist rationale? 30:40

Can you give an example of the distinctions psychopaths cannot draw? 34:50

What’s the most interesting experiment you have conducted? 37:01

Do you think these participants just misunderstood what determinism is? 38:15

What qualities do you believe you and other researchers and philosophers need to be successful? 40:03

How has what you have learned through your research influenced the way you live your life? 41:35

How do you see the relationship of law and mind science developing in the future? 44:55

Situationist Contributor John Jost recently co-authored a brief comment, titled “Virtue ethics and the social psychology of character: Philosophical lessons from the person–situation debate,” which will be of interest to many of our readers. Here are the opening paragraphs.

* * *

A venerable tradition of ethical theory drawing on Aristotle’s Ethics still flourishes alongside consequentialist (utilitarian) and deontological (Kantian) alternatives. The Aristotelian notion is that if humans develop in themselves and inculcate in others certain settled dispositions to reason and act in characteristic ways—bravely, honestly, generously—they will behave in ways that secure and preserve eudaimonia (happiness or well-being) for themselves and others (Burnyeat, 1980; Hursthouse, 1999; Sherman, 1997). Virtue theorists are therefore committed to the existence of significant moral personality traits that not only summarize good (vs. bad) behavior but also explain the actions of the virtuous (and vicious) agents.

A powerful empirical challenge to virtue theories developed out of Mischel’s (1968) critique of personality traits and social psychological research emphasizing the ‘‘power of the situation” (Ross & Nisbett, 1991). These lessons were applied, perhaps overzealously, to moral philosophy by Flanagan (1991), Harman (1999), Doris (2002), and Appiah (2008). Harman (1999) claimed: ‘‘We need to convince people to look at situational factors and to stop trying to explain things in terms of character traits. . . [and] to abandon all talk of virtue and character, not to find a way to save it by reinterpreting it” (p. 1). This position, which might be termed eliminative situationalism, stimulated useful philosophical debate, but it is too dismissive of the role of personality (or character) in producing ethically responsible behavior.

Below you will find three parts of an edited lecture by Harvard Professor Marc Hauser. The first part moves from various philosophical theories of morality to social science research into moral dilemmas, leading up to the philosopher’s classic, the “trolley problem.”

* * *

In the second part, below, Professor Hauser completes his description of the trolley problem and conclusions based on his research into how humans make moral decisions.

* * *

In the final part, below, Professor Hauser discusses the impact of religious belief on moral decision-making.


Tomorrow (Monday, September 21), the Student Association for Law and Mind Sciences (SALMS) at Harvard Law School is hosting a talk, titled “Outcome vs. Intent: Which Do We Punish and Why?,” by Professor Fiery Cushman. The abstract for the talk is as follows:

Sometimes people cause harm accidentally; other times they attempt to cause harm, but fail. How do ordinary people treat cases where intentions and outcomes are mismatched? Dr. Cushman will present a series of studies suggesting that while people’s judgments of moral wrongness depend overwhelmingly on an assessment of intent, their judgments of deserved punishment exhibit substantial reliance on accidental outcomes as well. This pattern of behavior is present at an early age and consistent across both survey-based and behavioral economic paradigms. These findings raise a question about the function of our moral psychology: why do we judge moral wrongness and deserved punishment by different standards? Dr. Cushman will present evidence that punishment is sensitive to accidental outcomes in part because it is designed to teach social partners not to engage in harmful behaviors and because teaching on the basis of outcomes is more effective than teaching on the basis of intentions.

* * *

The event will take place in Hauser 104 at Harvard Law School, from 12:00 – 1:00 p.m. For more information, e-mail salms@law.harvard.edu.

Anyone who followed this past election season — and, considering the voter turnout records, that’s pretty much everyone — no doubt grew familiar with, and likely a bit tired of, each candidate’s avowed mission of “reaching across the aisle.” Almost immediately upon winning the presidency, Barack Obama set out to do just that, inviting a handful of Republicans to a Super Bowl party. Still he was able to rally only meager cross-party support for his historic stimulus bill, failing, in some eyes, to validate his call for a bipartisan era — which in turn prompted The New Yorker to point out that eight days in office was, after all, “a tight schedule for era-delivering.”

In the sciences, the era of interdisciplinary study has been delivering for some time. The past 50 years have seen researchers engaged in their own version of aisle reaching, extending a hand or a methodology or a graduate student across campus and, in some cases, across the globe, to advance some form of basic understanding. A recent National Academy of Sciences committee, charged with summarizing the state of scientific study across disciplines, reeled off an impressive list of achievements, from genome sequencing to neuroimaging to the Manhattan Project.

Psychologists have not been strangers to this trend. Rather, they have been in the vanguard, according to a paper published in Science (Wuchty, 2007). In the second half of the 20th century, the average size of a psychology research team increased 75 percent — the top rate of increase among social sciences.

As research teams have expanded, their composition has diversified. Economists and political scientists, in particular, have teamed with psychologists at a progressive rate, the Science authors found. More importantly, the citation impact of these larger teams seems to have increased with their added size and breadth. This heightened influence holds true even when adjusting for the increase in self-citation that comes with a greater number of researchers per study.

New fields have already begun to emerge from these meetings of minds—neuroscience, political psychology, cognitive science, and evolutionary psychology, to name a handful. Such instances distinguish true interdisciplinary work from multi-disciplinary efforts, which, as APS Past President John Cacioppo pointed out in a previous Observer column, require “only that one share an established procedure with an investigator in another field.” Ideally, interdisciplinary collaborations lead to more than a parlor game of pass the procedure. They don’t just shift eyes onto the question at hand; they ask completely new questions. The goal here, it would seem, is not to reach across the aisle, but rather to eliminate it.

Still, despite their head start over the Aisle Reacher-in-Chief, collaborative scientists also face many challenges when it comes to working outside their comfort zone. An additional workload, communication breakdowns, and tenure-track requirements are some of the interdisciplinary scientist's heaviest burdens. But most consider the evolution of psychology well worth the growing pains. "When psychology departments were forming, it was experimental, social, clinical, developmental — as if any one of these things can be studied independent of the other," says APS Fellow and Past Board Member Elizabeth Phelps, who is part of the interdisciplinary Center for Neuroeconomics at New York University, of the way psychology operated up through the first half of the 20th century. "I think we had divided up how we understand human behavior. I see a lot of those barriers starting to be broken."

THE BELIEVER: I take it that one of the goals of the Stanford Prison Experiment was to build on Milgram’s results that demonstrated the power of situational elements. Is that right?

PHILIP ZIMBARDO: It was really to broaden his message and put it to a higher-level test. In Milgram’s study, we don’t know about those thousand people who answered the ad. His subjects were not Yale students, although he did it at Yale. They were a thousand ordinary citizens from New Haven and Bridgeport, Connecticut, ages twenty to fifty, and in his advertisement in the newspaper he said: college students and high-school students cannot be used. It could have been a selection of people who were more psychopathic. For our study, we picked only two dozen of seventy-five who applied, who on seven different personality tests were normal or average. So we knew there were no psychopaths, no deviants. Nobody had been in therapy, and even though it was a drug era, nobody (at least in the reports) had taken anything more than marijuana, and they were physically healthy at the time. So the question was: Suppose you had only kids who were normally healthy, psychologically and physically, and they knew they would be going into a prison-like environment and that some of their civil rights would be sacrificed. Would those good people, put in that bad, evil place—would their goodness triumph?

***

That situationist snippet should convince you to check out the rest of the interview! Also, it is worth pointing out that Sommers has a forthcoming collection entitled A Very Bad Wizard: Morality Behind the Curtain, which includes past interviews with philosophers and psychologists such as Galen Strawson, Michael Ruse, Jon Haidt, Frans de Waal, Steve Stich, Josh Greene, Liane Young, Joe Henrich, William Ian Miller, and Zimbardo. So, make sure to check it out as well once it comes out.

I recently stumbled upon a really provocative paper by Anders Kaye entitled, "The Secret Politics of the Compatibilist Criminal Law." One of the key issues addressed in the paper is whether compatibilist theories of free will, which focus heavily on dispositional traits and conscious mental states, can accommodate situational forces that are criminogenic (e.g., poverty and early childhood abuse). According to Kaye, compatibilist theories of free will and responsibility have been used by contemporary legal retributivists such as Michael Moore and Stephen Morse to shield the criminal law from developments in behavioral science, criminology, etc. that might otherwise lead to a less punitive justice system as well as a more egalitarian society. In short, Kaye suggests that compatibilism is not a "politically innocent" theory of free will. Here is the abstract:

***

Many criminal theorists say that we have a ‘compatibilist’ criminal law, by which they mean that in our criminal law a person can deserve punishment for her acts even if she does not have ‘genuinely’ free will. This conception of the criminal law harbors and is driven by a secret politics, one that resists social change and idealizes the existing social order. In this Article, I map this secret politics. In so doing, I call into question the descriptive accuracy of the compatibilist account of the criminal law, and set the stage for a franker discussion of criminal punishment – one that recognizes that the perpetual struggle to say just who ‘deserves’ punishment is driven as much by brute politics and the competition to allocate power and resources in society as by any independent moral logic.

***

There is already a heated debate about Kaye’s novel line of reasoning over at The Garden of Forking Paths. However, it would be nice to see an active comment thread here at The Situationist as well. So, please take a look at the paper and then give us your thoughts!

Social psychologists have shown human decisions to be sensitive to numerous ordinary, possibly nonconscious, situational contingencies, motivating the view that control is largely illusory, and that our choices are largely governed by such external contingencies. Against this view is evidence that self-control and goal-maintenance are regularly displayed by humans and other animals, and evidence concerning neurobiological processes that support such control. Evolutionarily speaking, animals with a robust capacity to exercise control – both conscious and nonconscious – probably enjoyed a selective advantage. Counterbalancing data thus point to an account of control that sees an important role for nonconscious control in action and goal maintenance. We propose a conceptual model of control that encompasses such nonconscious control and links in-control behavior to neurobiological parameters.

* * *

I am curious to see what the readers of this blog make of their provocative suggestion that the kind of control needed for responsibility can actually be attributed to the very automatic processes that situationists often point to in an effort to put pressure on traditional models of moral and legal responsibility. There is also a post about this paper over at The Garden of Forking Paths.


Daniel Dennett is the co-director of the Center for Cognitive Studies, the Austin B. Fletcher Professor of Philosophy, and a University Professor at Tufts University. Here is a brief Big Think video of Dennett discussing some of the problems of the human brain, including, the “very sharp limit to the depth that we as conscious agents can probe our own activities.”

The death of free will, or its exposure as a convenient illusion, some worry, could wreak havoc on our sense of moral and legal responsibility. According to those who believe that free will and determinism are incompatible…it would mean that people are no more responsible for their actions than asteroids or planets. Anything would go.

–Dennis Overbye, The New York Times (2007)

During the past few years the popular press has become increasingly interested in free will, agency, and responsibility, with stories appearing in mainstream media outlets such as The New York Times, The Economist, Forbes Magazine, Wired, and FOX News. As psychologists continue to demystify the mind by uncovering the mechanisms that undergird human behavior, what was once an issue that fell mostly under the purview of philosophers and theologians has started to pique the curiosity of the public more generally. This interest is quite understandable. If free will provides the foundation for our traditional moral beliefs and practices, and its existence is incompatible with the gathering data from the so-called "sciences of the mind," then free will isn't just a topic fit for philosophers—it is a psychological, sociological, cultural, and policy issue as well. To the extent that scientific advancements undermine or threaten our traditional views about human agency, we ought to carefully consider what impact this might have on our moral and legal practices.

To get a sense for why some philosophers and psychologists are anxious when it comes to folk beliefs about free will and moral responsibility, consider the following extended quote from Francis Crick's The Astonishing Hypothesis:

“You,” your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules. Who you are is nothing but a pack of neurons.

Most religions hold that some kind of spirit exists that persists after one’s bodily death and, to some degree, embodies the essence of that human being. Religions may not have all the same beliefs, but they do have a broad agreement that people have souls.

Yet the common belief of today has a totally different view. It is inclined to believe that the idea of a soul, distinct from the body and not subject to our known scientific laws, is a myth. It is quite understandable how this myth arose without today's scientific knowledge of the nature of matter and radiation, and of biological evolution. Such myths, of having a soul, seem only too plausible. For example, four thousand years ago almost everyone believed the earth was flat. Only with modern science has it occurred to us that in fact the earth is round.

From modern science we now know that all living things, from bacteria to ourselves, are closely related at the biochemical level. We now know that many species of plants and animals have evolved over time. We can watch the basic processes of evolution happening today, both in the field and in our test tubes, and therefore there is no need for the religious concept of a soul to explain the behavior of humans and other animals. In addition to scientists, many educated people also share the belief that the soul is a metaphor and that there is no personal life either before conception or after death.

Most people take free will for granted, since they feel that usually they are free to act as they please. Three assumptions can be made about free will. The first assumption is that part of one's brain is concerned with making plans for future actions, without necessarily carrying them out. The second assumption is that one is not conscious of the "computations" done by this part of the brain but only of the "decisions" it makes – that is, its plans, depending of course on its current inputs from other parts of the brain. The third assumption is that the decision to act on one's plan or another is also subject to the same limitations in that one has immediate recall of what is decided, but not of the computations that went into the decision.

So, although we appear to have free will, in fact, our choices have already been predetermined for us and we cannot change that. The actual cause of the decision may be clear cut or it may be determined by chaos, that is, a very small perturbation may make a big difference to the end result. This would give the appearance of the Will being "free" since it would make the outcome essentially unpredictable. Of course, conscious activities may also influence the decision mechanism.

One's self can attempt to explain why it made a certain choice. Sometimes we may reach the correct conclusion. At other times, we will either not know or, more likely, will confabulate, because there is no conscious knowledge of the 'reason' for the choice. This implies that there must be a mechanism for confabulation, meaning that given a certain amount of evidence, which may or may not be misleading, part of the brain will jump to the simplest conclusion.

Having just read Crick's deflationary remarks about free will, do you think you would be more likely to cheat if you were given the opportunity? The obvious answer is "No, of course not!" However, the results from a series of recent studies by psychologists Kathleen Vohs and Jonathan Schooler suggest that things are less obvious than they seem.

For instance, in a recent paper entitled "The Value of Believing in Free Will," Vohs and Schooler suggest that when people are induced to disbelieve in free will and believe in determinism—as the result of reading the aforementioned passage from Crick—they are more likely to cheat shortly thereafter. Here is the abstract:

Does moral behavior draw on a belief in free will? Two experiments examined whether inducing participants to believe that human behavior is predetermined would encourage cheating. In Experiment 1, participants read either text that encouraged a belief in determinism (i.e., that portrayed behavior as the consequence of environmental and genetic factors) or neutral text. Exposure to the deterministic message increased cheating on a task in which participants could passively allow a flawed computer program to reveal answers to mathematical problems that they had been instructed to solve themselves. Moreover, increased cheating behavior was mediated by decreased belief in free will. In Experiment 2, participants who read deterministic statements cheated by overpaying themselves for performance on a cognitive task; participants who read statements endorsing free will did not. These findings suggest that the debate over free will has societal, as well as scientific and theoretical, implications.

In light of the results from these two studies, Vohs and Schooler conclude that ‘the fact that brief exposure to a message asserting that there is no such thing as free will can increase both passive and active cheating raises the concern that advocating a deterministic worldview could undermine moral behavior’ (Vohs & Schooler 2008: 53).

If we assume for the sake of argument that being induced to disbelieve in free will is what is really driving the results of their studies—and I am unconvinced that it is, but that is a story for another day—then we are faced with the interesting question of what philosophers and psychologists who work on free will ought to do in light of these findings. For free will skeptics, the stakes are particularly high. After all, ought one to be advocating for the so-called "death of free will" if doing so might make it more likely that people will cheat or steal?

Professor Nadelhoffer's main areas of research include moral psychology, the philosophy of action, free will, punishment theory, and neurolaw. He is particularly interested in research at the crossroads of philosophy and the sciences of the mind. His articles have appeared in journals such as Analysis, Midwest Studies in Philosophy, Mind & Language, Neuroethics, and Philosophy and Phenomenological Research. He is also a contributing author to several other blogs such as The Leiter Reports, The Garden of Forking Paths, and Experimental Philosophy. When not thinking about or teaching philosophy, he spends lots of time hanging out with his pack of dogs, climbing boulders and walls, and listening to indie rock.

Distributive justice concerns how individuals and societies distribute benefits and burdens in a just or moral manner. We combined distribution choices with functional magnetic resonance imaging to investigate the central problem of distributive justice: the trade-off between equity and efficiency. We found that the putamen responds to efficiency, whereas the insula encodes inequity, and the caudate/septal subgenual region encodes a unified measure of efficiency and inequity (utility). Notably, individual differences in inequity aversion correlate with activity in inequity and utility regions. Against utilitarianism, our results support the deontological intuition that a sense of fairness is fundamental to distributive justice but, as suggested by moral sentimentalists, is rooted in emotional processing. More generally, emotional responses related to norm violations may underlie individual differences in equity considerations and adherence to ethical rules.
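The equity/efficiency trade-off the abstract describes can be made concrete with a toy calculation. The sketch below is purely illustrative and not the study's actual model: the mean-pairwise-gap inequity measure and the `alpha` inequity-aversion weight are my assumptions, chosen only to show how a single "utility" score could penalize total benefit by unfairness, loosely mirroring the unified signal described above.

```python
def efficiency(allocation):
    """Total benefit delivered -- the 'efficiency' of an allocation."""
    return sum(allocation)

def inequity(allocation):
    """Mean absolute pairwise gap between recipients -- one simple
    (assumed, not the study's) measure of inequity."""
    n = len(allocation)
    return sum(abs(a - b) for a in allocation for b in allocation) / (n * n)

def utility(allocation, alpha=0.5):
    """A unified score: efficiency penalized by inequity. The weight
    `alpha` stands in for an individual's inequity aversion."""
    return efficiency(allocation) - alpha * inequity(allocation)

equal = [10, 10, 10]   # equitable, lower total benefit
skewed = [30, 5, 1]    # higher total benefit, very unequal
print(utility(equal), utility(skewed))
```

With a sufficiently large `alpha`, the equal allocation scores higher despite delivering less total benefit; with `alpha = 0`, the scorer is a pure utilitarian. Individual differences in inequity aversion, as in the study, would correspond to different values of the weight.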

* * *

For a brief, helpful summary of the report on BPS Research Digest, click here.

Dan Jones has a terrific article in the April issue of Prospect, titled “The Emerging Moral Psychology.” We’ve included some excerpts from the article below.

* * *

Long thought to be a topic of enquiry within the humanities, the nature of human morality is increasingly being scrutinised by the natural sciences. This shift is now beginning to provide impressive intellectual returns on investment. Philosophers, psychologists, neuroscientists, economists, primatologists and anthropologists, all borrowing liberally from each others’ insights, are putting together a novel picture of morality—a trend that University of Virginia psychologist Jonathan Haidt has described as the “new synthesis in moral psychology.” The picture emerging shows the moral sense to be the product of biologically evolved and culturally sensitive brain systems that together make up the human “moral faculty.”

A pillar of the new synthesis is a renewed appreciation of the powerful role played by intuitions in producing our ethical judgements. Our moral intuitions, argue Haidt and other psychologists, derive not from our powers of reasoning, but from an evolved and innate suite of “affective” systems that generate “hot” flashes of feelings when we are confronted with a putative moral violation.

This intuitionist perspective marks a sharp break from traditional “rationalist” approaches in moral psychology, which gained a large following in the second half of the 20th century under the stewardship of the late Harvard psychologist Lawrence Kohlberg. In the Kohlbergian tradition, moral verdicts derive from the application of conscious reasoning, and moral development throughout our lives reflects our improved ability to articulate sound reasons for the verdicts . . . .

But experimental studies give cause to question the primacy of rationality in morality. In one experiment, Jonathan Haidt presented people with a range of peculiar stories, each of which depicted behaviour that was harmless (in that no sentient being was hurt) but which also felt “bad” or “wrong.” One involved a son who promised his mother, while she was on her deathbed, that he would visit her grave every week, and then reneged on his commitment because he was busy. Another scenario told of a man buying a dead chicken at the supermarket and then having sex with it before cooking and eating it. These weird but essentially harmless acts were, nonetheless, by and large deemed to be immoral.

Further evidence that emotions are in the driving seat of morality surfaces when people are probed on why they take their particular moral positions. In a separate study which asked subjects for their ethical views on consensual incest, most people intuitively felt that incestuous sex is wrong, but when asked why, many gave up, saying, “I just know it’s wrong!”—a phenomenon Haidt calls “moral dumbfounding.”

It’s hard to argue that people are rationally working their way to moral judgements when they can’t come up with any compelling reasons—or sometimes any reasons at all—for their moral verdicts. Haidt suggests that the judgements are based on intuitive, emotional responses, and that conscious reasoning comes into its own in creating post hoc justifications for our moral stances. Our powers of reason, in this view, operate more like a lawyer hired to defend a client than a disinterested scientist searching for the truth.

Our rational and rhetorical skill is also recruited from time to time as a lobbyist. Haidt points out that the reasons—whether good or bad—that we offer for our moral views often function to press the emotional buttons of those we wish to bring around to our way of thinking. So even when explicit reasons appear to have the effect of changing people’s moral opinions, the effect may have less to do with the logic of the arguments than their power to elicit the right emotional responses. We may win hearts without necessarily converting minds. . . .

Even if you recognise the tendency to base moral judgements on how moral violations make you feel, you probably would also like to think that you have some capacity to think through moral issues, to weigh up alternative outcomes and make a call on what is right and wrong.

Thankfully, neuroscience gives some cause for optimism. Philosopher-cum-cognitive scientist Joshua Greene of Harvard University and his colleagues have used functional magnetic resonance imaging to map the brain as it churns over moral problems, inspired by a classic pair of dilemmas from the annals of moral philosophy called the Trolley Problem and the Footbridge Problem. [For a review of Greene's research, click here.]

* * *

What is going on in the brain when people mull over these different scenarios? Thinking through cases like the Trolley Problem—what Greene calls an impersonal moral dilemma as it involves no direct violence against another person—increases activity in brain regions located in the prefrontal cortex that are associated with deliberative reasoning and cognitive control (so-called executive functions). This pattern of activity suggests that impersonal moral dilemmas such as the Trolley Problem are treated as straightforward rational problems: how to maximise the number of lives saved. By contrast, brain imaging of the Footbridge Problem—a personal dilemma that invokes up-close and personal violence—tells a rather different story. Along with the brain regions activated in the Trolley Problem, areas known to process negative emotional responses also crank up their activity. In these more difficult dilemmas, people take much longer to make a decision and their brains show patterns of activity indicating increased emotional and cognitive conflict within the brain as the two appalling options are weighed up.

Greene interprets these different activation patterns, and the relative difficulty of making a choice in the Footbridge Problem, as the sign of conflict within the brain. On the one hand is a negative emotional response elicited by the prospect of pushing a man to his death saying "Don't do it!"; on the other, cognitive elements saying "Save as many people as possible and push the man!" For most people thinking about the Footbridge Problem, emotion wins out; for a minority, it is the utilitarian conclusion of maximising the number of lives saved that prevails.

* * *

While there is a growing consensus that the moral intuitions revealed by moral dilemmas such as the Trolley and Footbridge problems draw on unconscious psychological processes, there is an emerging debate about how best to characterise these unconscious elements.

On the one hand is the dual-processing view, in which “hot” affectively-laden intuitions that militate against personal violence are sometimes pitted against the ethical conclusions of deliberative, rational systems. An alternative perspective that is gaining increased attention sees our moral intuitions as driven by “cooler,” non-affective general “principles” that are innately built into the human moral faculty and that we unconsciously follow when assessing social behaviour.

In order to find out whether such principles drive moral judgements, scientists need to know how people actually judge a range of moral dilemmas. In recent years, Marc Hauser, a biologist and psychologist at Harvard, has been heading up the Moral Sense Test (MST) project to gather just this sort of data from around the globe and across cultures.

The project is casting its net as wide as possible: the MST can be taken by anyone with access to the internet. Visitors to the “online lab” are presented with a series of short moral scenarios—subtle variations of the original Footbridge and Trolley dilemmas, as well as a variety of other moral dilemmas. The scenarios are designed to explore whether, and how, specific factors influence moral judgements. Data from 5,000 MST participants showed that people appear to follow a moral code prescribed by three principles:

• The action principle: harm caused by action is morally worse than equivalent harm caused by omission.

• The intention principle: harm intended as the means to a goal is morally worse than equivalent harm foreseen as the side-effect of a goal.

• The contact principle: using physical contact to cause harm to a victim is morally worse than causing equivalent harm to a victim without using physical contact.

Crucially, the researchers also asked participants to justify their decisions. Most people appealed to the action and contact principles; only a small minority explicitly referred to the intention principle. Hauser and colleagues interpret this as evidence that some principles that guide our moral judgments are simply not available to, and certainly not the product of, conscious reasoning. These principles, it is proposed, are an innate and universal part of the human moral faculty, guiding us in ways we are unaware of. In a (less elegant) reformulation of Pascal’s famous claim that “The heart has reasons that reason does not know,” we might say “The moral faculty has principles that reason does not know.”

The notion that our judgements of moral situations are driven by principles of which we are not cognisant will no doubt strike many as implausible. Proponents of the “innate principles” perspective, however, can draw succour from the influential Chomskyan idea that humans are equipped with an innate and universal grammar for language as part of their basic design spec. In everyday conversation, we effortlessly decode a stream of noise into meaningful sentences according to rules that most of us are unaware of, and use these same rules to produce meaningful phrases of our own. Any adult with normal linguistic competence can rapidly decide whether an utterance or sentence is grammatically valid or not without conscious recourse to the specific rules that determine grammaticality. Just as we intuitively know what we can and cannot say, so too might we have an intuitive appreciation of what is morally permissible and what is forbidden.

Marc Hauser and legal theorist John Mikhail of Georgetown University have started to develop detailed models of what such an “innate moral grammar” might look like. Such models usually posit a number of key components, or psychological systems. One system uses “conversion rules” to break down observed (or imagined) behaviour into a meaningful set of actions, which is then used to create a “structural description” of the events. This structural description captures not only the causal and temporal sequence of events (what happened and when), but also intentional aspects of action (was the outcome intended as a means or a side effect? What was the intention behind the action?).

With the structural description in place, the causal and intentional aspects of events can be compared with a database of unconscious rules, such as "harm intended as a means to an end is morally worse than equivalent harm foreseen as the side-effect of a goal." If the events involve harm caused as a means to the greater good (and particularly if caused by the action and direct contact of another person), then a judgement of impermissibility is more likely to be generated by the moral faculty. In the most radical models of the moral grammar, judgements of permissibility and impermissibility occur prior to any emotional response. Rather than driving moral judgements, emotions in this view arise as a by-product of unconsciously reached judgements as to what is morally right and wrong.
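The pipeline sketched above (structural description in, rule database consulted, verdict out) can be caricatured in a few lines of code. This is a toy illustration only: the field names, the two rules, and the binary verdicts are my assumptions, not the actual Hauser-Mikhail grammar, which is far richer.

```python
from dataclasses import dataclass

@dataclass
class StructuralDescription:
    """Minimal stand-in for the grammar's structural description of an act."""
    harm: bool                     # does the act cause harm at all?
    harm_intended_as_means: bool   # means to a goal vs. foreseen side-effect
    direct_contact: bool           # was harm caused by up-close physical contact?

def judge(desc: StructuralDescription) -> str:
    """Compare the description against a tiny 'database' of rules:
    harm intended as a means, or caused by direct contact, pushes the
    verdict toward impermissibility; side-effect harm is more tolerated."""
    if not desc.harm:
        return "permissible"
    if desc.harm_intended_as_means or desc.direct_contact:
        return "impermissible"
    return "permissible"

# Trolley: harm foreseen as a side-effect, no contact.
trolley = StructuralDescription(harm=True, harm_intended_as_means=False,
                                direct_contact=False)
# Footbridge: harm used as a means, with direct personal contact.
footbridge = StructuralDescription(harm=True, harm_intended_as_means=True,
                                   direct_contact=True)
print(judge(trolley), judge(footbridge))
```

Note that nothing in the toy judge consults an emotional state: on the radical reading described above, a verdict like this would be computed first, with emotion arising downstream as a by-product.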

Hauser argues that a similar “principles and parameters” model of moral judgement could help make sense of universal themes in human morality as well as differences across cultures (see below). There is little evidence about how innate principles are affected by culture, but Hauser has some expectations as to what might be found. If the intention principle is really an innate part of the moral faculty, then its operation should be seen in all cultures. However, cultures might vary in how much harm as a means to a goal they typically tolerate, which in turn could reflect how extensively that culture sanctions means-based harm such as infanticide (deliberately killing one child so that others may flourish, for example). These intriguing though speculative ideas await a thorough empirical test.

* * *

Although current studies have only begun to scratch the surface, the take-home message is clear: intuitions that function below the radar of consciousness are most often the wellsprings of our moral judgements. . . .

Despite the knocking it has received, reason is clearly not entirely impotent in the moral domain. We can reflect on our moral positions and, with a bit of effort, potentially revise them. An understanding of our moral intuitions, and the unconscious forces that fuel them, gives us perhaps the greatest hope of overcoming them.

In 2006, Slate’s John Lackman wrote a fine introduction to the then-nascent movement within philosophy known as “experimental philosophy” or “X-Phi”:

Philosophers have ignored the real world because it’s messy, full of happenstance details and meaningless coincidences; philosophy, they argue, has achieved its successes by focusing on deducing universal truths from basic principles. X-phi, on the other hand, argues that philosophers need to ask people what and how they think. Traditional philosophy relies on certain intuitions, presented as “common sense,” that are presumed to be shared by everyone. But are they? For example, can people be morally responsible for their actions if they don’t have free will? Many philosophers have assumed that all sane people would of course say no. Experimentalists don’t assume. They ask. Recently, they presented the following scenario to two groups:

* * *

Bill and his wife were flying home from vacation with their friend Frank, who was having an affair with Bill’s wife, as Bill knew. Kidnappers injected Bill with a drug that forced him to obey orders, then told him to shoot Frank in the head, which he did.

* * *

They told the first group that Bill wanted Frank dead and so grieved little for him. To the second, they said that Bill hated what he’d done. Both groups were then asked if Bill deserved blame for Frank’s death. Traditional philosophers have argued that Bill shouldn’t be blamed in either case because it’s common sense that moral responsibility requires free will. But, in fact, the first x-phi group did blame Bill in the scenario in which he welcomed Frank’s death. Similarly, groups praised a hypothetical involuntary organ donor, even though he had no choice but to give. This doesn’t prove that you can have moral responsibility without free will. But it does vaporize a traditional philosophical objection to that view—that it lacks common sense.

* * *

Experimental philosophy is also challenging such basic philosophical notions as “intentional action.” What do we mean when we say that someone did something intentionally? Most philosophers assume that we’d all agree that this is a question of the actor’s state of mind. Experimentalist Joshua Knobe of the University of North Carolina at Chapel Hill asked college students: If a businessman interested only in profits knowingly harms the environment, should we say he did so intentionally? The students answered yes. Yet if the same businessman knowingly helped the environment, they said no. Apparently, intentionality depends not just on an actor’s state of mind, but also on the outcome he or she produces. And also on skill—the ability to carry out one’s intention. If you hit a bull’s-eye your first time playing darts, did you do it “intentionally”? It turns out that the most common answer is yes if you keep regularly hitting the target and no if you don’t. Outcome trumps skill, though, when it comes to determining intentionality. Say a man tries to shoot his aunt, misfires, but is lucky and hits her anyway. Most people will say he killed her intentionally, even though he didn’t really have the skill to. It’s enough that he wanted to. Hijackers with experimental drugs, land-despoiling executives, aunt killers—what’s not to like about x-phi?

Although X-Phi certainly has its critics, it has only gained in strength and legitimacy since Lackman’s article, and the lines between this new brand of philosophy and cognitive psychology have faded into what appears to be a healthy, interdisciplinary blur. A recent “Call for Papers” summarized the field this way:

Over the last decade, philosophers have started using experimental and quasi-experimental methods to obtain data that are relevant for philosophical controversies. Surprising results have been obtained for a large range of topics, including intuitions about reference, intuitions about free will and responsibility, and the relation between judgments of causation and moral judgments. Meanwhile, psychologists are increasingly paying attention to aspects of our folk theories that directly bear on philosophy, such as the nature of folk explanation, the nature of causal judgments, the processes underlying moral judgments, the folk concept of race, and the nature of imagination. This movement, unified by a common desire to apply experimental methods to philosophical issues, is known as “experimental philosophy.”