Monthly Archives: February 2013

Below is a draft of a rather short section of “Experimental Moral Philosophy” (co-authored with Don Loeb for the Stanford Encyclopedia of Philosophy) on xphi and wellbeing. As always, comments, questions, criticisms, and suggestions are most welcome.

L1: A child raised in a particular linguistic community inevitably ends up speaking an idiolect of the local language despite lack of explicit instruction, lack of negative feedback for mistakes, and grammatical mistakes by caretakers.

M1: A child raised in a particular moral community inevitably ends up judging in accordance with an idiolect of the local moral code despite lack of explicit instruction, lack of negative feedback for moral mistakes, and moral mistakes by caretakers.

L2: While there is great diversity among natural languages, there are systematic constraints on possible natural languages.

M2: While there is great diversity among natural moralities, there are systematic constraints on possible natural moralities.

L3: Language-speakers obey many esoteric rules that they themselves would not recognize.

M3: Moral agents judge according to esoteric rules (such as the doctrine of double effect) that they themselves would not recognize.

L4: Drawing on a limited vocabulary, a speaker can express a potential infinity of thoughts.

M4: Drawing on a limited moral vocabulary, an agent can express a potential infinity of moral judgments.

Pair 1 suggests the “poverty of the stimulus” argument, according to which there must be an innate language (morality) faculty because it would otherwise be impossible for children to learn what and as they do. However, as Prinz (2008) points out, the moral stimulus may be less penurious than the linguistic stimulus: children are typically punished for moral violations, whereas their grammatical violations are often ignored. Nichols, Kumar, & Lopez (unpublished manuscript) lend support to Prinz’s contention with a series of Bayesian moral-norm learning experiments.

Pair 2 suggests the “principles and parameters” approach, according to which, though the exact content of linguistic (moral) rules is not innate, there are innate rule-schemas, the parameters of which may take only a few values. The role of environmental factors is to set these parameters. For instance, the linguistic environment determines whether the child learns a language in which noun phrases precede verb phrases or vice versa. Similarly, say proponents of the analogy, there may be a moral rule-schema according to which members of group G may not be intentionally harmed unless p, and the moral environment sets the values of G and p. As with the first point of analogy, philosophers such as Prinz (2008) find this comparison dubious. Whereas linguistic parameters typically take just one of two or three values, the moral parameters mentioned above can take indefinitely many values and seem to admit of diverse exceptions.
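The parameter-setting idea can be made vivid with a small sketch (the rule, group names, and exception predicates below are hypothetical illustrations, not drawn from any cited study): the rule-schema is fixed, while the moral environment supplies the values of G and p.

```python
# Hypothetical sketch of a moral rule-schema: "members of group G may not
# be intentionally harmed unless p." The schema is fixed; the environment
# sets the parameters G (a group) and p (a set of exception conditions).

def harm_rule(group, exceptions):
    """Build a judgment function from parameters G (group) and p (exceptions)."""
    def permissible(target, circumstances):
        if target not in group:
            return True  # the rule is silent about non-members of G
        # Harming a member of G is permissible only if some exception p holds.
        return any(exc(circumstances) for exc in exceptions)
    return permissible

# One "moral environment" sets the parameters one way:
judge = harm_rule(group={"kin", "neighbor"},
                  exceptions=[lambda c: c.get("self_defense", False)])

print(judge("neighbor", {}))                      # False
print(judge("neighbor", {"self_defense": True}))  # True
print(judge("stranger", {}))                      # True
```

Prinz's worry can be restated in these terms: whereas a linguistic parameter typically takes one of two or three values, G and p here can take indefinitely many values, which strains the analogy with parameter-setting in syntax.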

Pair 3 suggests that people have knowledge of language (morality) that is inaccessible to consciousness but implicitly represented, such that they produce judgments of grammatical (moral) permissibility and impermissibility that far outstrip their own capacities to reflectively identify, explain, or justify. One potential explanation of this gap is that there is a sub-personal “module” for language (morality) that has proprietary information and processing capacities. Only the outputs of these capacities are consciously accessible.

Pair 4 suggests the linguistic (moral) essentiality of recursion, which allows the embedding of type-identical structures within one another to generate further structures of the same type. For instance, noun phrases can be embedded in other noun phrases to form more complex noun phrases:

the calico cat –> the calico cat (that the dog chased) –> the calico cat (that the dog [that the man owned] chased) –> the calico cat (that the dog [that the man {who was married to the heiress} owned] chased)

Moral judgments, likewise, can be embedded in other moral judgments to produce novel moral judgments:

“Thou shalt not kill” (Deuteronomy 5:17) –> “Ye have heard that it was said of them of old time, Thou shalt not kill; and whosoever shall kill shall be in danger of the judgment: But I say unto you, that whosoever is angry with his brother shall be in danger of the judgment.” (Matthew 5:21-2)

Another example: plausibly, if it’s wrong to x, then it’s wrong to persuade someone to x and wrong to coerce someone to x, and therefore also wrong to persuade someone to coerce someone to x. Such moral embedding has been experimentally investigated by John Mikhail (2007, 2008, 2011), who argues on the basis of a large number of experiments using variants on the “trolley problem” (Foot 1978) that moral judgments are generated by imposing a deontic structure on one’s representation of the causal and evaluative features of the action under consideration.
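The generative character of recursive embedding can be sketched in a few lines of code (a toy illustration with an invented helper; nothing here is drawn from Mikhail's experiments): a single rule that builds a noun phrase out of a noun phrase can be applied to its own output indefinitely.

```python
# Toy sketch of recursive embedding: one rule, "NP -> NP (that NP V)",
# applied to its own output, turns a finite vocabulary into an unbounded
# set of noun phrases of the same type.

def embed(np, clauses):
    """Wrap `np` in relative clauses, each of whose subjects is itself embedded."""
    if not clauses:
        return np
    subject, verb = clauses[0]
    # The subject of the relative clause is itself a noun phrase,
    # so the same rule recurses on it.
    return f"{np} (that {embed(subject, clauses[1:])} {verb})"

print(embed("the calico cat", []))
print(embed("the calico cat", [("the dog", "chased")]))
print(embed("the calico cat", [("the dog", "chased"), ("the man", "owned")]))
```

Because the output of `embed` is an expression of the same type as its input, a finite lexicon suffices to generate structures of any depth, which is the point pairs L4/M4 trade on.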

As with any analogy, there are points of disanalogy between language and morality. Within a given dialect, lay judgments about whether a given sentence is grammatical tend to be nearly unanimous, whereas, even within a given “moral dialect,” there is a great deal of variance in lay judgments about whether a given action is permissible. Moral judgments are also, at least sometimes, corrigible in the face of argument, whereas grammaticality judgments seem to be incorrigible. People are often tempted to act contrary to their moral judgments, but not to their grammaticality judgments. Recursive embedding seems able to generate all of language, whereas in morality it may apply only to deontic judgments about actions, and not, for instance, to judgments about norms, institutions, situations, and character traits. Indeed, it’s hard to imagine what recursion would mean for character traits: does it make sense to think of honesty being embedded in courage to generate a new trait? If it does, what would that trait be?

References:

Chomsky, N. (1957). Syntactic Structures. The Hague: Mouton.

Foot, P. (1978). Virtues and Vices and Other Essays in Moral Philosophy. Berkeley, CA: University of California Press; Oxford: Blackwell.

Harman, G. (2000). Explaining Value and Other Essays in Moral Philosophy. New York: Oxford University Press.

Here’s a draft of the section of “Experimental Moral Philosophy” (for the Stanford Encyclopedia of Philosophy) on the emotions. As always, questions, comments, suggestions, and criticisms are most welcome.

———————————————————————————————–

Experimental inquiries into morality and emotion overlap in myriad, distantly related ways. We can only hope to gesture at many of the interesting questions that have been investigated in this context. For instance, are moral judgments always motivating? In other words, does it follow that, insofar as you judge that x is morally right (wrong), you are – perhaps only defeasibly, but to some extent – motivated to x (avoid xing)? An affirmative answer is often labeled “internalist,” whereas a negative answer is labeled “externalist.” Emotions are intrinsically motivational, so if experimental investigation could show that emotion was implicated in all moral judgments, that would be a point in favor of internalism.[1] Another question we will not discuss in depth: is emotionally driven reasoning in general better or worse than “cold,” affectless reasoning? Greene et al. (2001, 2004) seem to presuppose that cold reasoning is typically or even always better, but we see little reason to make such a sweeping judgment.

Instead of trying to address all of the relevant questions, we focus on a particular application based on what have come to be known as dual-system models of cognition, reasoning, decision-making, and behavior. While the exact details of the two systems vary from author to author, the basic distinction is between what Daniel Kahneman calls System 1, which is fast, automatic, effortless, potentially unconscious, often affect-laden, and sometimes incorrigible, and System 2, which is slow, deliberative, effortful, typically conscious, and associated with the subjective experience of agency, choice, and concentration (2011, pp. 20-21). Whereas System 2 exhibits a degree of functional unity, System 1 is better conceived as a loose conglomeration of semi-autonomous dispositions, states, and processes, which can conflict not only with System 2 but also with each other.

The dual-system approach has been employed by various experimental moral philosophers and experimental moral psychologists, including Joshua Greene (2008, 2012), Jonathan Haidt (2012; Haidt & Björklund 2008), Joshua Knobe (Inbar et al. 2009), Fiery Cushman (Cushman & Greene forthcoming), and Daniel Kelly (2011). We will focus in particular on one process that relies heavily on System 1, disgust, to show what experimental moral philosophy of the emotions can do.

Disgust is an emotion that seems to be unique to human animals. It involves characteristic bodily, affective, motivational, evaluative, and cognitive patterns. For instance, someone who feels disgusted almost always makes a gaping facial expression, withdraws slightly from the object of disgust, experiences a slight reduction in body temperature and heart rate, and feels a sense of nausea and the need to cleanse herself. In addition, she is motivated to avoid and even expunge the offending object, experiences it as contaminating and repugnant, becomes more attuned to other disgusting objects in the immediate environment, is inclined to treat anything that the object comes in contact with (whether physically or symbolically) as also disgusting, and is more inclined to make harsh moral judgments – both about the object and in general. There are certain objects that basically all normal adults are disgusted by (feces, decaying corpses, rotting food, spiders, maggots, gross physical deformities), but there is also considerable intercultural and interpersonal variation beyond these core objects of disgust, including in some better-studied cases cuisines, sexual behaviors, out-group members, and violations of social norms. Furthermore, the disgust reaction is nearly impossible to repress, is easily recognized, and – when recognized – empathically induces disgust in the other person.[2]

In a recent monograph, Kelly (2011) persuasively argues that this seemingly bizarre combination of features is best explained by what he calls the “entanglement thesis” (chapter 2) and the “co-opt thesis” (chapter 4). First, the universal bodily manifestations of disgust evolved to help humans avoid ingesting toxins and other harmful substances, while the more cognitive or symbolic sense of offensiveness and contamination associated with disgust evolved to help humans avoid diseases and parasites. According to the entanglement thesis, these initially distinct System 1 responses became entangled in the course of human evolution and now systematically co-occur. If you make the gape face, whatever you’re attending to will start to look contaminated; if something disgusts you at a cognitive level, you will flash a quick gape face. Second, according to the co-opt thesis, the entangled emotional system for disgust was later recruited for an entirely distinct purpose: to help mark the boundaries between in-group and out-group, and thus to motivate cooperation with in-group members, punishment of in-group defectors, and exclusion of out-group members. Because the disgust reaction is both on a “hair trigger” (it acquires new cues extremely easily and empathically, p. 51) and “ballistic” (once set in motion, it is nearly impossible to halt or reverse, p. 72), it was ripe to be co-opted in this way.

Dan Kelly’s “Yuck!”

If Kelly’s account of disgust is on the right track, it seems to have a number of important moral upshots. One of the more direct consequences of this theory is what he calls “disgust skepticism” (p. 139), according to which the combination of disgust’s hair trigger and its ballistic trajectory means that it is extremely prone to incorrigible false positives that involve unwarranted feelings of contamination and even dehumanization. Hence, “the fact that something is disgusting is not even remotely a reliable indicator of moral foul play” but is instead “irrelevant to moral justification” (p. 148).

Many theories of value incorporate a link between emotions and value. According to fitting-attitude theories (Rønnow-Rasmussen 2011), something is bad if and only if there is reason to take a con-attitude (e.g., dislike, aversion, anger, hatred, disgust, contempt) towards it, and good if and only if there is reason to take a pro-attitude (e.g., liking, love, respect, pride, awe, gratitude) towards it. According to response-dependence theories (Prinz 2007), something is bad (good) just in case one would, after reflection and deliberation, hold a con-attitude (pro-attitude) towards it. According to desire-satisfaction theories of well-being (Heathwood 2006), your life is going well to the extent that objects towards which you harbor pro-attitudes are promoted and preserved, and objects towards which you harbor con-attitudes suffer or are harmed. If Kelly’s disgust skepticism is on the right track, it looks like it would be a mistake to lump together all con-attitudes. Perhaps it still makes sense to connect other con-attitudes, such as indignation, with moral badness, but it seems unwarranted to connect disgust with moral badness. Thus, experimental moral philosophy of the emotions leads to a potential insight into the evaluative diversity of con-attitudes.

Another potential upshot of the experimental research derives from the fact that disgust belongs firmly in System 1: it is fast, automatic, effortless, potentially unconscious, affect-laden, and nearly incorrigible. Moreover, while it is exceedingly easy to acquire new disgust triggers whether you want to or not, there seems to be no reliable way to de-acquire them, even if you want to. Together, these points raise worries about moral responsibility. It’s a widely accepted platitude that the less control you have over your behavior, the less responsible you are for that behavior. At one extreme, if you totally lack control, many would say that you are not responsible for what you do. Imagine an individual who acts badly because he is disgusted: he gapes when he sees two men kissing, even though he reflectively does not endorse homophobia; the men see this gape and, understandably, feel ostracized. Would it be appropriate for them to take up a Strawsonian (1962) reactive attitude towards him, such as indignation? Would it be appropriate for him to feel a corresponding attitude towards himself, such as guilt or shame? Of course, if his flash of disgust is something that he recognizes and endorses, the answers to these questions may be simpler, but what are we to say about the case where someone is, as it were, stuck with a disgust trigger that he would rather be rid of? We will not try to answer this question here; instead, we intend it to show that, while experimental moral philosophy of the emotions may provide new insights, it also raises thorny questions.

Here’s a draft of a paper to be presented at a conference at UNC in May. As always, comments, criticisms, questions, etc. are most welcome.

1 Introduction

Gone are the heady days when Bernard Williams (1993) could get away with saying that “Nietzsche is not a source of philosophical theories” (p. 4). The last two decades have witnessed a flowering of research that aims to interpret, elucidate, and defend Nietzsche’s theories about science, the mind, and morality. This paper is one more blossom in that efflorescence. What I want to argue is that, in light of contemporary science, Nietzsche’s is the best-supported moral psychological theory in the history of philosophy.

Given limitations of space, I will not be able to engage at length with the many competitors for this title. Instead, I will proceed by discussing three key Nietzschean insights and the contemporary psychological evidence for them. The first Nietzschean insight is the disunity of the self. The second, connected, Nietzschean insight is the primacy of affect. This primacy is expressed by what I have called elsewhere (Alfano 2010, forthcoming b) the tenacity of the intentional, and what Nietzsche calls the Socratic equation (TI Socrates 4, 10; WP 2:432-3). The third major Nietzschean insight is the social construction of character, which presupposes a wild diversity within the extensions of trait-terms and the dual direction of fit of character trait attributions. This last point is somewhat in tension with the only other published defense of the empirical credentials of Nietzsche’s moral psychology (Knobe & Leiter 2007), so I will make a few remarks about the contrast between my view and theirs.