Is moral judgment accomplished by intuition or conscious reasoning? An answer demands a detailed account of the moral principles in question. We investigated three principles that guide moral judgments: (a) harm caused by action is worse than harm caused by omission, (b) harm intended as the means to a goal is worse than harm foreseen as the side effect of a goal, and (c) harm involving physical contact with the victim is worse than harm involving no physical contact. Asking whether these principles are invoked to explain moral judgments, we found that subjects generally appealed to the first and third principles in their justifications, but not to the second. This finding has significance for methods and theories of moral psychology: the moral principles used in judgment must be directly compared with those articulated in justification, and doing so shows that some moral principles are available to conscious reasoning whereas others are not.

To what extent do moral judgments depend on conscious reasoning from explicitly understood principles? We address this question by investigating one particular moral principle, the principle of the double effect. Using web-based technology, we collected a large data set on individuals' responses to a series of moral dilemmas, asking when harm to innocent others is permissible. Each moral dilemma presented a choice between action and inaction, both resulting in lives saved and lives lost. Results showed that (1) patterns of moral judgments were consistent with the principle of double effect and showed little variation across differences in gender, age, educational level, ethnicity, religion, or national affiliation (within the limited range of our sample population), and (2) a majority of subjects failed to provide justifications that could account for their judgments. These results indicate that the principle of the double effect may be operative in our moral judgments but not open to conscious introspection. We discuss these results in light of current psychological theories of moral cognition, emphasizing the need to consider the unconscious appraisal system that mentally represents the causal and intentional properties of human action.

Mind perception entails ascribing mental capacities to other entities, whereas moral judgment entails labeling entities as good or bad or actions as right or wrong. We suggest that mind perception is the essence of moral judgment. In particular, we suggest that moral judgment is rooted in a cognitive template of two perceived minds—a moral dyad of an intentional agent and a suffering moral patient. Diverse lines of research support dyadic morality. First, perceptions of mind are linked to moral judgments: dimensions of mind perception (agency and experience) map onto moral types (agents and patients), and deficits of mind perception correspond to difficulties with moral judgment. Second, not only are moral judgments sensitive to perceived agency and experience, but all moral transgressions are fundamentally understood as agency plus experienced suffering—that is, interpersonal harm—even ostensibly harmless acts such as purity violations. Third, dyadic morality uniquely accounts for the phenomena of dyadic completion (seeing agents in response to patients, and vice versa), and moral typecasting (characterizing others as either moral agents or moral patients). Discussion also explores how mind perception can unify morality across explanatory levels, how a dyadic template of morality may be developmentally acquired, and future directions.

When we judge an action as morally right or wrong, we rely on our capacity to infer the actor's mental states. Here, we test the hypothesis that the right temporoparietal junction (RTPJ), an area involved in mental state reasoning, is necessary for making moral judgments. In two experiments, we used transcranial magnetic stimulation (TMS) to disrupt neural activity in the RTPJ transiently before moral judgment and during moral judgment. In both experiments, TMS to the RTPJ led participants to rely less on the actor's mental states. A particularly striking effect occurred for attempted harms: relative to TMS to a control site, TMS to the RTPJ caused participants to judge attempted harms as less morally forbidden and more morally permissible. Thus, interfering with activity in the RTPJ disrupts the capacity to use mental states in moral judgment, especially in the case of attempted harms.

Ordinary people often make moral judgments that are consistent with philosophical principles and legal distinctions. For example, they judge killing as worse than letting die, and harm caused as a necessary means to a greater good as worse than harm caused as a side-effect (Cushman, Young, & Hauser, 2006). Are these patterns of judgment produced by mechanisms specific to the moral domain, or do they derive from other psychological domains? We show that the action/omission and means/side-effect distinctions affect nonmoral representations and provide evidence that their role in moral judgment is mediated by these nonmoral psychological representations. Specifically, the action/omission distinction affects moral judgment primarily via causal attribution, while the means/side-effect distinction affects moral judgment via intentional attribution. We suggest that many of the specific patterns evident in our moral judgments in fact derive from nonmoral psychological mechanisms, and especially from the processes of causal and intentional attribution.

Is the basis of criminality an act that causes harm, or an act undertaken with the belief that one will cause harm? The present study takes a cognitive neuroscience approach to investigating how information about an agent's beliefs and an action's consequences contributes to moral judgment. We build on prior developmental evidence showing that these factors contribute differentially to the young child's moral judgments, coupled with neurobiological evidence suggesting a role for the right temporoparietal junction (RTPJ) in belief attribution. Participants read vignettes in a 2 × 2 design: protagonists produced either a negative or neutral outcome based on the belief that they were causing the negative outcome ("negative" belief) or the neutral outcome ("neutral" belief). The RTPJ showed significant activation above baseline for all four conditions but was modulated by an interaction between belief and outcome. Specifically, the RTPJ response was highest for cases of attempted harm, where protagonists were condemned for actions that they believed would cause harm to others, even though the harm did not occur. The results not only suggest a general role for belief attribution during moral judgment, but also add detail to our understanding of the interaction between these processes at both the neural and behavioral levels.

Studies of normal individuals reveal an asymmetry in the folk concept of intentional action: an action is more likely to be thought of as intentional when it is morally bad than when it is morally good. One interpretation of these results comes from the hypothesis that emotion plays a critical mediating role in the relationship between an action's moral status and its intentional status. According to this hypothesis, the negative emotional response triggered by a morally bad action drives the attribution of intent to the actor, or the judgment that the actor acted intentionally. We test this hypothesis by presenting cases of morally bad and morally good action to seven individuals with deficits in emotional processing resulting from damage to the ventromedial prefrontal cortex (VMPC). If normal emotional processing is necessary for the observed asymmetry, then individuals with VMPC lesions should show no asymmetry. Our results provide no support for this hypothesis: like normal individuals, those with VMPC lesions showed the same asymmetry, tending to judge that an action was intentional when it was morally bad but not when it was morally good. Based on this finding, we suggest that normal emotional processing is not responsible for the observed asymmetry of intentional attributions and thus does not mediate the relationship between an action's moral status and its intentional status.

When we evaluate moral agents, we consider many factors, including whether the agent acted freely, or under duress or coercion. In turn, moral evaluations have been shown to influence our (non-moral) evaluations of these same factors. For example, when we judge an agent to have acted immorally, we are subsequently more likely to judge the agent to have acted freely, not under force. Here, we investigate the cognitive signatures of this effect in interpersonal situations, in which one agent ("forcer") forces another agent ("forcee") to act either immorally or morally. The structure of this relationship allowed us to ask questions about both the "forcer" and the "forcee." Paradoxically, participants judged that the "forcer" forced the "forcee" to act immorally (i.e., X forced Y), but that the "forcee" was not forced to act immorally (i.e., Y was not forced by X). This pattern obtained only for human agents who acted intentionally. Directly changing participants' focus from one agent to another (forcer vs. forcee) also changed the target of moral evaluation and therefore force attributions. The full pattern of judgments may provide a window into motivated moral reasoning and focusing bias more generally; participants may have been motivated to attribute greater force to the immoral forcer and greater freedom to the immoral forcee.

Moral judgments, whether delivered in ordinary experience or in the courtroom, depend on our ability to infer intentions. We forgive unintentional or accidental harms and condemn failed attempts to harm. Prior work demonstrates that patients with damage to the ventromedial prefrontal cortex (VMPC) deliver abnormal judgments in response to moral dilemmas and that these patients are especially impaired in triggering emotional responses to inferred or abstract events, as opposed to real or actual outcomes. We therefore predicted that VMPC patients would deliver abnormal moral judgments of harmful intentions in the absence of harmful outcomes, as in failed attempts to harm. This prediction was confirmed in the current study: VMPC patients judged attempted harms, including attempted murder, as more morally permissible relative to controls. These results highlight the critical role of the VMPC in processing harmful intent for moral judgment.

Moral judgments, we expect, ought not to depend on luck. A person should be blamed only for actions and outcomes that were under the person's control. Yet often, moral judgments appear to be influenced by luck. A father who leaves his child by the bath, after telling his child to stay put and believing that he will stay put, is judged to be morally blameworthy if the child drowns (an unlucky outcome), but not if his child stays put and doesn't drown. Previous theories of moral luck suggest that this asymmetry reflects primarily the influence of unlucky outcomes on moral judgments. In the current study, we use behavioral methods and fMRI to test an alternative: these moral judgments largely reflect participants' judgments of the agent's beliefs. In "moral luck" scenarios, the unlucky agent also holds a false belief. Here, we show that moral luck depends more on false beliefs than bad outcomes. We also show that participants with false beliefs are judged as having less justified beliefs and are therefore judged as more morally blameworthy. The current study lends support to a rationalist account of moral luck: moral luck asymmetries are driven not by outcome bias primarily, but by mental state assessments we endorse as morally relevant, i.e., whether agents are justified in thinking that they won't cause harm.

People perceive that if their memories and moral beliefs changed, they would change. We investigated why individuals respond this way. In Study 1, participants judged that identity would change more after changes to memories and widely shared moral beliefs versus preferences and controversial moral beliefs. The extent to which participants judged that changes would affect their relationships predicted identity change and mediated the relationship between type of moral belief and perceived identity change. We discuss the role that social relationships play in judgments of identity and highlight implications for psychology and philosophy.

We review several instances where cognitive research has identified distinct psychological mechanisms for moral judgment that yield conflicting answers to moral dilemmas. In each of these cases, the conflict between psychological mechanisms is paralleled by prominent philosophical debates between different moral theories. A parsimonious account of these data is that key claims supporting different moral theories ultimately derive from the psychological mechanisms that give rise to moral judgments. If this view is correct, it has some important implications for the practice of philosophy. We suggest several ways that moral philosophy and practical reasoning can proceed in the face of discordant theories grounded in diverse psychological mechanisms.

Moral intuitions are strong, stable, immediate moral beliefs. Moral philosophers ask when they are justified. This question cannot be answered separately from a psychological question: How do moral intuitions arise? Their reliability depends upon their source. This chapter develops and argues for a new theory of how moral intuitions arise—that they arise through heuristic processes best understood as unconscious attribute substitutions. That is, when asked whether something has the attribute of moral wrongness, people unconsciously substitute a different question about a separate but related heuristic attribute (such as emotional impact). Evidence for this view is drawn from psychology and neuroscience, and competing views of moral heuristics are contrasted. It is argued that moral intuitions are not direct perceptions and, in many cases, are unreliable sources of evidence for moral claims.

For centuries, humans have contemplated the minds of gods. Research on religious cognition is spread across sub-disciplines, making it difficult to gain a complete understanding of how people reason about gods' minds. We integrate approaches from cognitive, developmental, and social psychology and neuroscience to illuminate the origins of religious cognition. First, we show that although adults explicitly discriminate supernatural minds from human minds, their implicit responses reveal far less discrimination. Next, we demonstrate that children's religious cognition often matches adults' implicit responses, revealing anthropomorphic notions of God's mind. Together, data from children and adults suggest the intuitive nature of perceiving God's mind as human-like. We then propose three complementary explanations for why anthropomorphism persists in adulthood, suggesting that anthropomorphism may be an instance of the anchoring and adjustment heuristic; a reflection of early testimony; and/or an evolutionary byproduct.

Moral judgment depends critically on theory of mind (ToM), reasoning about mental states such as beliefs and intentions. People assign blame for failed attempts to harm and offer forgiveness in the case of accidents. Here we use fMRI to investigate the role of ToM in moral judgment of harmful vs. helpful actions. Is ToM deployed differently for judgments of blame vs. praise? Participants evaluated agents who produced a harmful, helpful, or neutral outcome, based on a harmful, helpful, or neutral intention; participants made blame and praise judgments. In the right temporo-parietal junction (TPJ), and, to a lesser extent, the left TPJ and medial prefrontal cortex, the neural response reflected an interaction between belief and outcome factors, for both blame and praise judgments: the response in these regions was highest when participants delivered a negative moral judgment, i.e., assigned blame or withheld praise, based solely on the agent's intent. These results show enhanced attention to mental states for negative moral verdicts based exclusively on mental state information.

What is the role of self-concept in motivating moral behavior? On one account, when people are primed to perceive themselves as "do-gooders", conscious access to this positive self-concept will reinforce good behavior. On an alternative account, when people are reminded that they have done their "good deed for the day", they will feel licensed to behave worse. In the current study, when participants were asked to recall their own good deeds (positive self-concept), their subsequent charitable donations were nearly twice those of participants who recalled bad deeds, or recent conversation topics, consistent with an account of moral reinforcement. In addition, among participants reporting good deeds, those who did not note whether they were recognized or unrecognized by other people donated significantly more than participants who took note of others' responses. In sum, when people are primed to see themselves as good people, who do good for goodness' sake, not to obtain public credit, they may be motivated to do more good.

The thesis we develop in this essay is that all humans are endowed with a moral faculty. The moral faculty enables us to produce moral judgments on the basis of the causes and consequences of actions. As an empirical research program, we follow the framework of modern linguistics. The spirit of the argument dates back at least to the economist Adam Smith (1759/1976), who argued for something akin to a moral grammar, and more recently, to the political philosopher John Rawls (1971). The logic of the argument, however, comes from Noam Chomsky's thinking on language specifically and the nature of knowledge more generally (Chomsky, 1986, 1988, 2000; Saporta, 1978). If the nature of moral knowledge is comparable in some way to the nature of linguistic knowledge, as defended recently by Harman (1977), Dwyer (1999, 2004), and Mikhail (2000; in press), then what should we expect to find when we look at the anatomy of our moral faculty? Is there a grammar, and if so, how can the moral grammarian uncover its structure? Are we aware of our moral grammar, its method of operation, and its moment-to-moment functioning in our judgments? Is there a universal moral grammar that allows each child to build a particular moral grammar? Once acquired, are different moral grammars mutually incomprehensible in the same way that a native Chinese speaker finds a native Italian speaker incomprehensible? How does the child acquire a particular moral grammar, especially if her experiences are impoverished relative to the moral judgments she makes? Are there certain forms of brain damage that disrupt moral competence but leave other forms of reasoning intact? And how did this machinery evolve, and for what particular adaptive function? We will have more to say about many of these questions later on, and Hauser (2006) develops others. However, in order to flesh out the key ideas and particular empirical research paths, let us turn to some of the central questions in the study of our language faculty.

The importance of situational constraint for moral evaluations is widely accepted in philosophy, psychology, and the law. However, recent work suggests that this relationship is actually bidirectional: moral evaluations can also influence our judgments of situational constraint. For example, if an agent is thought to have acted immorally rather than morally, that agent is often judged to have acted with greater freedom and under less situational constraint. Moreover, when considering interpersonal situations, we judge that an agent who forces another to act immorally (versus morally) uses more force. These two features can result in contradictory response patterns in which participants judge both that (1) a forcer forced a forcee to act and (2) the forcee was not forced by the forcer to act. Here, we characterize potential psychological mechanisms, in particular "moral focus" and counterfactual reasoning, that account for this paradoxical pattern of judgments.

People often use indirect speech, for example, when trying to bribe a police officer by asking whether there might be "a way to take care of things without all the paperwork." Recent game theoretic accounts suggest that a speaker uses indirect speech to reduce public accountability for socially risky behaviors. The present studies examine a secondary function of indirect speech use: increasing the perceived moral permissibility of an action. Participants report that indirect speech is associated with reduced accountability for unethical behavior, as well as increased moral permissibility and increased likelihood of unethical behavior. Importantly, moral permissibility was a stronger mediator of the effect of indirect speech on likelihood of action for judgments of one's own versus others' unethical action. In sum, the motorist who bribes the police officer with winks and nudges may not only avoid public punishment but also maintain the sense that his actions are morally permissible.

Contemporary moral psychology has focused on the notion of a universal moral sense, robust to individual and cultural differences. Yet recent evidence has revealed individual differences in the psychological processes for moral judgment: controlled cognition, mental-state reasoning, and emotional responding. We discuss this evidence and its relation to cross-cultural diversity in morality.

What is the impact of science on philosophy? In "Experiments in Ethics", Kwame Anthony Appiah addresses this question for morality and ethics. Appiah suggests that scientific results may undermine moral intuitions by undermining our confidence in the actual sources of our intuitions, or by invalidating our factual assumptions about the causes of human behavior. Appiah worries that scientific results showing situational causes of human behavior force us to abandon the intuition, formalized in virtue ethics, that what matters is "who you are on the inside". In this review, we agree with Appiah that scientific results at once force and do not force us to abandon this intuition. We also propose that Appiah's worry is due in part to an over-simplified conception of "internal causes", shared widely among scientists and philosophers. By re-introducing the true richness of internal causes invoked in moral judgments, we hope to relax the tension between scientific results and moral intuitions. Ultimately, we propose that science can undermine and constrain but cannot affirm our commitment to specific moral intuitions.

In Natural Justice, Binmore offers a game-theoretic map to the landscape of human morality. Following a long tradition of such accounts, Binmore's argument concerns the forces of biological and cultural evolution that have shaped our judgments about the appropriate distribution of resources. In this sense, Binmore focuses on the morality of outcomes. This is a valuable perspective to which we add a friendly amendment from our own research: moral judgments appear to depend on process just as much as outcome. What matters is not just that the butler is dead, but who killed him, how, and for what reason. Thus, a complete understanding of "natural justice" will entail an account not only of evolutionary pressures, but also of the psychological mechanisms upon which they act.

The issues of climate change and environmental degradation elicit diverse responses. This paper explores how an understanding of human moral psychology might be used to motivate conservation efforts. Moral concerns for the environment can relate to issues of harm or impurity. Aversions to harm are linked to concern for current or future generations, non-human animals, and anthropomorphized aspects of the environment. Concerns for purity are linked to viewing the environment as imbued with sacred value and therefore worthy of being protected at all costs. While both harm-based and purity-based framings of environmental issues can sometimes backfire, we argue that making these moral concerns salient can nevertheless bring about moral responses to environmental issues. In sum, scientists' emerging knowledge about the moral mind can be used to facilitate the sustainable conservation of the planet.