informal discussion of ethics, moral psychology, Nietzsche, and other topics of interest


Monthly Archives: July 2012

Below is the third installment in the SEP series. This one is on the side-effect effect (aka the Knobe effect). As always, apologies for typos and missing citations.

——————————————————————————

Since Knobe’s seminal (2003) paper, experimental philosophers have investigated the complex patterns in people’s dispositions to make judgments about moral notions (praiseworthiness, blameworthiness, responsibility), cognitive attitudes (belief, knowledge, remembering), motivational attitudes (desire, favor, advocacy), and character traits (compassion, callousness) in the context of violations of and conformity to norms (moral, prudential, aesthetic, legal, conventional, descriptive).[1] In Knobe’s original study, participants first read a description of a choice scenario: the protagonist is presented with a potential policy (to increase profits) that would result in a side effect (either harming or helping the environment). Next, the protagonist explicitly disavows caring about the side effect, and chooses to go ahead with the policy. The policy results as advertised: both the primary and the side effect occur. Finally, participants are asked to attribute an attitude to the protagonist. What Knobe found was that participants were significantly more inclined to indicate that the protagonist had intentionally brought about the side effect when it was bad (harming the environment) than when it was good (helping the environment). This effect has been replicated dozens of times, and its scope has been greatly expanded from intentionality attributions after violations of a moral norm to attributions of diverse properties after violations of almost every imaginable kind of norm.

The first-order aim of interpreters of this body of evidence is to create a model that predicts when the attribution asymmetry will crop up. The second-order aims are to explain as systematically as possible why the effect occurs, and to determine the extent to which the attribution asymmetry can be considered rational. To that end, we have modeled how participants’ responses to this sort of vignette are produced.

Figure: Model of Participant Response to X-Phi Vignettes

In this model, the boxes represent entities, the arrows represent causal or functional processes, and the area in grey represents the mind of the participant, which is not directly observable but is the target of investigation. In broad strokes, the idea is that a participant first reads the text of the vignette and forms a mental model of what happens in the story. On the basis of this model (and almost certainly while the vignette is still being read), the participant begins to interpret, i.e., to make both descriptive and normative judgments about the scenario, especially about the mental states and character traits of the people in it. The participant then reads the experimenter’s question, forms a mental model of what is being asked, and – based on her judgments about the scenario – forms an answer to that question. That answer may then be pragmatically revised (to avoid unwanted implicatures, to bring it more into accord with what the participant thinks the experimenter wants to hear, etc.) and is finally recorded as an explicit response on the Likert scale.

What we know is that vignette texts in which a norm violation is described tend to produce higher Likert scale responses. What experimental philosophers try to do is to explain this asymmetry by postulating models of the unobservable entities.

Perhaps the best known is Knobe’s conceptual competence model, according to which the asymmetry arises at the judgment stage. He claims that normative judgments about the evaluative valence of the action influence otherwise descriptive judgments about whether it was intentional (or desired, or expected, etc.), and that, moreover, this input is part of the very conception of intentionality (desire, belief, etc.). Thus, on the conceptual competence model, the asymmetry in attributions is a rational expression of the ordinary conception of intentionality (desire, belief, etc.), which turns out to have a normative component.[2]

The motivational bias model (Alicke 2008; Nadelhoffer 2004, 2006) agrees that the asymmetry originates in the judgment stage, and that normative judgments influence descriptive judgments. However, unlike the conceptual competence model, it takes this to be a bias rather than an expression of conceptual competence. Thus, on this model, the asymmetry in attributions is a distortion of the correct conception of intentionality (desire, belief, etc.).

The deep self concordance model (Sripada 2010, 2012; Sripada & Konrath 2011) also locates the source of the asymmetry in the judgment stage, but does not recognize an influence (licit or illicit) of normative judgments on descriptive judgments. Instead, the model claims that participants routinely distinguish someone’s “deep” self – which harbors her sentiments, values, and principles – from her “shallow” self – which contains her expectations, means-end beliefs, moment-to-moment intentions, and conditional desires. According to the model, when assessing whether someone intentionally brings about some state of affairs, people determine whether there exists concordance or discordance between the relevant portions of the shallow and deep self. For instance, when the chairman harms the environment, this is concordant with his deep self, since he has expressed a deep-seated contempt for the environment; in contrast, when the chairman helps the environment, this is discordant with his deep self. According to the deep self concordance model, then, the asymmetry in attributions is a reasonable expression of the folk psychological distinction between the deep and shallow self (whether that distinction in turn is defensible is of course another question).

Unlike the models discussed so far, the conversational pragmatics model (Adams & Steadman 2004, 2007) locates the source of the asymmetry in the pragmatic revision stage. According to this model, participants judge the protagonist not to have acted intentionally in both norm-conforming and norm-violating cases. However, when it comes time to tell the experimenter what they think, participants do not want to imply that the harm-causing protagonist is blameless, so they report that he acted intentionally. This is a reasonable goal, so according to the conversational pragmatics model, the attribution asymmetry is rational, though misleading.

According to the deliberation model (Alfano, Beebe, & Robinson 2012; Robinson, Stey, & Alfano forthcoming; Scaife & Webber forthcoming), the best explanation of the complex patterns of evidence is that the very first mental stage, the formation of a mental model of the scenario, differs between norm-violation and norm-conformity vignettes.[3] When the protagonist is told that a policy he would ordinarily want to pursue violates a norm, he acquires a reason to deliberate further about what to do; in contrast, when the protagonist is told that the policy conforms to some norm, he acquires no such reason. Participants tend to model the protagonist as considering what to do when and only when a norm would be violated. Since deliberation leads to the formation of other mental states – such as beliefs, desires, and intentions – this basal difference between participants’ models of what happens in the story flows through the rest of their interpretation and leads to the attribution asymmetry. On the deliberation model, then, the attribution asymmetry originates much earlier than other experimental philosophers suppose, and is due to rational processes.

Of course, single-factor models are not the only way of explaining the attribution asymmetry. Mark Phelan and Hagop Sarkissian (2009, p. 179) find the idea of localizing the source of the asymmetry in a single stage or variable implausible, claiming that “attempts to account for the Knobe effect by recourse to only one or two variables, though instructive, are incomplete and overreaching in their ambition.” While they do not propose a more complicated model, it’s clear that many could be generated by permuting the existing single-factor models.

[2] The idea that seemingly predictive and explanatory concepts might also have a normative component is not entirely original with Knobe; Bernard Williams (1985, p. 129) pointed out that virtues and vices have such a dual nature.

[3] See also Scaife & Webber (forthcoming), who helpfully point out that “the same words spoken by a character in the story do not necessarily have the same meaning, or give the reader the same impression, when the surrounding story has changed.”

Below is a draft of the section of the Stanford Encyclopedia of Philosophy article on “Experimental Moral Philosophy” devoted to virtue ethics and skepticism about character. As always, comments and criticisms are most welcome. And again my apologies for typos and missing citations.

—————————————————————————-

A virtue is a complex disposition comprising sub-dispositions to notice, construe, think, desire, and act in characteristic ways. To be generous, for instance, is (among other things) to be disposed to notice occasions for giving, to construe ambiguous social cues charitably, to want to give people things they need or would enjoy, to deliberate well about what would in fact be appreciated, and to act on the basis of such deliberation. Manifestations of such a disposition are observable and hence ripe for empirical investigation. Virtue ethicists of the last several decades have tended, furthermore, to be optimistic about the distribution of virtue in the population. Alasdair MacIntyre claims, for example, that “without allusion to the place that justice and injustice, courage and cowardice play in human life very little will be genuinely explicable” (1984, p. 199; see also Annas 2011, pp. 8-10).

Philosophical situationists argue on the basis of such evidence that the structure of most people’s dispositions does not match the structure of virtues (or vices). According to Doris (2002), the best explanation of this lack of cross-situational consistency is that the great majority of people have local, rather than global, traits: they are not honest, courageous, or greedy, but they may be honest-while-in-a-good-mood, courageous-while-sailing-in-rough-weather-with-friends, and greedy-unless-watched-by-fellow-parishioners. In contrast, Christian Miller (2013a, 2013b) thinks the evidence is best explained by a theory of mixed global traits, such as the disposition to (among other things) help because it improves one’s mood. Such traits are global, in the sense that they explain and predict behavior across situations (someone with such a disposition will, other things being equal, typically help so long as it will maintain her mood), but normatively mixed, in the sense that they are neither virtues nor vices. Mark Alfano (2013) goes in a third direction, arguing that virtue and vice attributions tend to function as self-fulfilling prophecies. People tend to act in accordance with the traits that are attributed to them, whether the traits are minor virtues such as tidiness (Miller, Brickman, & Bolen 1975) and ecology-mindedness (Cornelissen et al. 2006, 2007), major virtues such as charity (Jensen & Moore 1977), cooperativeness (Grusec, Kuczynski, Simutis & Rushton 1978), and generosity (Grusec & Redler 1980), or vices such as cutthroat competitiveness (Grusec, Kuczynski, Simutis & Rushton 1978). On Alfano’s view, when people act in accordance with a virtue, they often do so not because they have the trait in question, but because they think they do or because they know that other people think they do. 
He calls such simulations of moral character factitious virtues, and even goes so far as to suggest that the notion of a virtue should be revised to include reflexive and social expectations.[2]

It might seem that this criticism misses its mark. After all, virtue ethicists needn’t (and typically don’t) commit themselves to the claim that almost everyone is virtuous. Instead, they usually argue that virtue is the normative goal of moral development, and that people may fail to reach that goal in various ways. When Doris, Harman, Miller, or Alfano argues from the fact that most people’s dispositions are not virtues to a rejection of orthodox virtue ethics, then, he might be thought to be committing a non sequitur. But empirically-minded critics of virtue ethics do not stop there. They all have positive views about what sorts of dispositions people have instead of virtues.[3] These dispositions are alleged to be so structurally dissimilar from virtues (as traditionally understood) that it’s psychologically unrealistic to treat virtue as a regulative ideal. What matters, then, is the width of the gap between the descriptive and the normative, between the (structure of the) dispositions most people have and the (structure of the) dispositions that count as virtues.

Three leading defenses against this criticism have been offered. Some virtue ethicists (Badhwar 2009, Kupperman 2009) have conceded that virtue is extremely rare, but argued that it may still be a useful regulative ideal. Others (Hurka 2006, Merritt 2000) have attempted to weaken the concept of virtue in such a way as to enable more people, or at least more behaviors, to count as virtuous. Still others (Kamtekar 2004, Russell 2009, Snow 2010, Sreenivasan 2002) have challenged the situationist evidence or its interpretation. While it remains unclear whether these defenses succeed, grappling with the situationist challenge has led both defenders and challengers of virtue ethics to develop more nuanced and empirically informed views.[4]

[1] Owen Flanagan (1991) considered some of the same evidence before Doris and Harman, but he was reluctant to draw the pessimistic conclusions they did about virtue ethics.

[2] Merritt (2000) was the first to suggest that the situationist critique could be handled by offloading some of the responsibility for virtue onto the social environment.

[4] One might hope that philosophical reflection on ethics would promote moral behavior. Eric Schwitzgebel has recently begun to investigate whether professional ethicists behave more morally than their non-ethicist philosophical peers, and has found that, on most measures, the two groups are indistinguishable (Schwitzgebel 2009; Schwitzgebel & Rust 2010; Schwitzgebel et al. 2011).

Below is a draft of a section of the Stanford Encyclopedia of Philosophy entry on “Experimental Moral Philosophy.” Comments welcome. Apologies for typos and missing citations.

Which tasks are appropriate to experimental ethics? The answer to this question depends on which aspects of ethics are under investigation. Experimentalists investigate moral intuition, moral judgments, moral emotions, and moral behaviors, among other things. The most thoroughly investigated is moral judgment, which we discuss now.

One project for the experimental ethics of moral judgment, associated with Stephen Stich[1] and Jonathan Weinberg, is to determine the extent to which philosophical intuitions are shared by both philosophers and ordinary people. (The distinction between moral intuitions and moral judgments is fraught, but for the sake of this discussion, we’ll treat moral intuitions as moral seemings and moral judgments as considered moral beliefs.) As Stich, Weinberg, and their fellow travelers are fond of pointing out, many philosophers appeal to intuitions as evidence or use their content as premises in arguments. They often say such things as, “as everyone would agree, p,” “the common man thinks that p,” or simply, “intuitively, p.” But would everyone agree that p? Does the common man think that p? Is p genuinely intuitive? These are empirical questions, and if the early results documented by experimental philosophers are replicated, the answer often seems to be negative. This raises the question of how much work the adverb ‘intuitively’ is meant to do when it comes out of a philosopher’s mouth. If it’s just something she says to clear her throat before she makes an assertion, then the fact that intuitions exhibit a great degree of variance matters little. If, on the other hand, the claim “intuitively p” is meant to be evidence for p, the philosophers who make such claims should tread carefully.

Furthermore, the factors that predict disagreement about supposedly intuitive philosophical claims are often non-evidential. Women find p intuitive, whereas men find ~p intuitive (Buckwalter & Stich forthcoming). Westerners mostly agree that q, but East Asians tend to think ~q (Machery, Mallon, Nichols, & Stich 2004).[2] People find r plausible if they’re asked about s first, but not otherwise (Nadelhoffer & Feltz 2008; Sinnott-Armstrong 2008; Sinnott-Armstrong, Mallon, McCoy, & Hull 2008). This leads to the second use of experimental evidence: arguing for the (un)reliability of moral intuitions, and, to the extent that moral judgments are a function of moral intuitions, those as well. Walter Sinnott-Armstrong (2008) and Eric Schwitzgebel and Fiery Cushman (2012) have recently followed this train of thought, arguing that moral intuitions are subject to normatively irrelevant situational influences (e.g., order effects), while Feltz & Cokely (2009) and Knobe (2011) have demonstrated correlations between moral intuitions and (presumably) normatively irrelevant individual differences (e.g., extroversion). Such results, if they can be replicated and explained, may warrant skepticism about moral intuition, or at least about some classes of intuitions or intuiters.

Other philosophers are more sanguine about the upshot of experimental ethics. Joshua Knobe, among others, attempts to use experimental investigations of the determinants of moral judgments to identify the contours of philosophically interesting concepts and the mechanisms or processes that underlie moral judgment. He has famously argued for the pervasive influence of moral considerations throughout folk psychological concepts (2009, 2010; see also Pettit & Knobe 2009), claiming, among other things, that the concept of an intentional action is sensitive to the moral valence of the consequences of that action (2003, 2004b, 2006). Others, such as Joshua Greene and his colleagues (2001, 2004, 2008), argue for dual-systems approaches to moral judgment. On their view, a slower, more deliberative, system tends to issue in utilitarian judgments, whereas a quicker, more automatic system tends to produce Kantian judgments. Which system is engaged by a given moral reasoning task is determined in part by personal style and in part by situational factors.[3]

A related approach favored by Chandra Sripada (2011) aims to identify the features to which philosophically important intuitions are sensitive. Sripada thinks that the proper role of experimental ethics is not to identify the mechanisms underlying moral intuitions – such knowledge, it is claimed, contributes little of relevance to philosophical theorizing. It is rather to investigate, on a case by case basis, the features to which people are responding when they have such intuitions. On this view, people (philosophers included) can readily identify whether they have a given intuition, but not why they have it. An example: “manipulation cases” have been thought to undermine compatibilist notions of free will. In such a case, an unwitting person is surreptitiously manipulated into having and reflectively endorsing a motivation to φ. Critics of compatibilism say that such a case satisfies compatibilist criteria for free will, and yet, intuitively, the actor is not free. Sripada showed, however, through both mediation analysis and structural equation modeling, that to the extent that people feel the manipulee not to be free, they do so because they judge him in fact not to satisfy the compatibilist criteria. Thus, by determining which aspects of the case philosophical intuitions are responding to, it may be possible to resolve otherwise intractable questions of interpretation.

Just finished up attending and speaking at the “Architecture of Personal Dispositions” conference at the Sorbonne, organized by Jon Webber and Alberto Masala. Walter Mischel’s keynote was terrific, as were many of the other papers (especially Kate & Daniel Manne’s co-authored paper on bystander training). I was presenting some of my work on factitious moral virtue. Below is the script I wrote up but mostly didn’t use; when I went off script, it was mostly to argue for asymmetric standards of evidence for attributing virtues and vices (I think we should have low evidential standards for virtue attributions and high evidential standards for vice attributions) and my revisionary social ontology of character traits (I think that being designated honest, generous, or open-minded may be partially constitutive of being honest, generous, or open-minded). All that’s in the book, in case you’re interested.

———————————————————————————-

I’ll begin by sketching out my approach to ethics in broad strokes, and then describe in more detail one particular example of that approach, which I call factitious – or artificial – virtue.

I assume you’re all familiar with the traditional tripartite distinction that divides ethical theory into metaethics, normative ethics, and applied ethics. At the applied level, the ethicist tries to answer moral questions about specific ethical issues. What are the rights and responsibilities of businesses with respect to employees, shareholders, customers, societies, and governments? Under what conditions is euthanasia permissible? Under what conditions is abortion permissible? What obligation do people – from the developed world and the developing world – have to protect the environment from climate change? At a more general level, normative ethicists try to articulate an account of what makes things right, good, better, and best; of what makes someone virtuous and caring; and of the relations among these concepts. At a still more general level, metaethicists address questions about the meaning of moral language, the content of moral thoughts, and the reality of moral properties and facts.

It might seem that this tripartite distinction – metaethics, normative ethics, and applied ethics – pretty much exhausts the logical space. However, it’s become fashionable in recent decades to care as well about moral psychology. If ethical theory is about what would be good, right, virtuous, and caring, and about what people should, may, and shouldn’t do, be, think, and feel, moral psychology is about what people actually do, think, and feel in and about moral contexts.

One of the really attractive things about moral psychology is its essentially interdisciplinary nature. On the one hand, empirically informed philosophers such as Jesse Prinz construct theories of the moral emotions; on the other hand, philosophically savvy psychologists such as Jon Haidt investigate the causes, effects, and interactions of the emotions. This interdisciplinarity crops up not only in the exploration of the moral emotions but also in thinking about moral behavior. Over the last couple decades, Gilbert Harman and John Doris – armed with the empirical evidence gathered by psychologists such as John Darley – have made some waves by arguing that the ordinary notion of virtue and good character is an inadequate picture of how people really behave in moral contexts.

I’m going to discuss this research in a bit more detail later, when I lay out my theory of factitious virtue, but first I’d like to spell out the rest of the framework. In addition to moral emotions and moral behavior, moral psychologists also investigate moral judgments. In this field, the line between philosophy and psychology breaks down even further because so-called experimental philosophers such as Josh Knobe and Shaun Nichols run their own experiments in addition to interpreting and systematizing the studies of psychologists. One of the more prominent debates in this field has to do with the attribution of morally important attitudes like intention, belief, and desire to people who produce good or bad side-effects. In 2003, Knobe published a study which suggested that ordinary people are more inclined to say that someone intentionally brought about a side effect when it was bad than when it was good. I happen to disagree with his interpretation of the data, and have published an article that provides a better alternative, but the basic idea here is that philosophers and psychologists (notably Fiery Cushman) are now investigating the sorts of moral judgments that real people make.

Surely, you might think, by supplementing ethical theory with moral psychology, we’ve exhausted the logical space. No. Ethical theory is about how things should be; moral psychology is about how things are. What’s needed is a way to bridge the gap between how things are and how they should be. I call that bridge moral technology. There can be various types of moral technology. The “nudge” theory of Cass Sunstein and Richard Thaler is a highly political version of moral technology. The ancients were very concerned with moral education or cultivation; after all, Plato devoted several of the central chapters of the Republic to the training of the guardians. I have a pet theory that the reason Epicurus had a statue of himself erected in his school was to remind his disciples of his maxim, “Act at all times as if Epicurus were watching.”

In any event, this is how I picture the complete logical space of ethics. My own work addresses all three aspects – ethical theory, moral psychology, and moral technology – but what I’m most excited about are the connections among them. You don’t have an adequate ethical theory, I think, if it identifies norms that no human being could live up to. You don’t have an adequate moral psychology unless you have some idea what makes a judgment, action, or feeling moral. And you certainly don’t have an adequate moral technology if the moral psychology it presupposes is wrong or the normative goals it aims at are not the right goals.

In the remainder of my time, I’m going to try to show you how these things might go together. The thesis I’m going to defend is that even if the situationist critique of virtue ethics is successful, it’s still advisable to attribute virtues to people, plausibly and publicly (but not to attribute vices), because such attributions alter their target’s self-concepts and social expectations, and thereby function as self-fulfilling prophecies.

I’d like to start by distinguishing between attributions to agents and attributions to actions. You can say that someone is honest. You can also say that what someone did is honest. I’m going to be focusing primarily on attributions to agents, and hence attributions of virtuous traits or dispositions, though there are surely important connections between the two types of attributions.

Now here’s something interesting about attributions of virtues to agents. You can use a virtue attribution as part of an explanation of an action. If the question is, Why did Jenny donate a thousand dollars to Oxfam this year? the answer could be, Because she wanted to impress her pastor, or Because she wanted the tax write-off. But it could also be, Because Jenny is generous. Another use of a virtue attribution is as part of a prediction of an action. Here. I’m going to put this here. [place some money within easy reach of audience members] And now I’m going to turn around. And because I attribute at least a modicum of honesty to each of you, I’m going to predict that when I turn back around, my money will still be there.

Just so.

So far, virtue attributions differ in no way from attributions of other traits, such as neuroticism and dominance. Saying that someone is neurotic can help to explain her behavior. If you ask, Why does Karl chew his fingernails? the answer could be, Because he’s neurotic. Saying that someone is dominant can help to predict her behavior. Where virtues differ from other traits is in the fact that virtue attributions can be used in the evaluation of behavior as well. Saying that someone is open-minded doesn’t just license certain explanations and predictions; it also praises the target of the attribution.

This union of fact and value is a welcome feature of virtue theory, but it comes at a price. People such as John Doris, Gilbert Harman, and I have argued that virtue theory faces what’s come to be known as the situationist challenge: if the evidence from social psychology is to be trusted, it would seem that most people don’t have such traits as honesty, generosity, open-mindedness, or curiosity. Seemingly trivial and normatively irrelevant situational factors have a huge influence over whether someone will do the virtuous thing. It seems that people tend to have only local versions of these traits, like honesty-while-watched-by-fellow-parishioners, generosity-while-in-a-good-mood, open-mindedness-after-eating-candy, and curiosity-when-it’s-sunny. I think it was Oscar Wilde who once quipped that he could resist anything but temptation. If the situationist challenge succeeds, even that might be too strong. The idea is not that people easily succumb to temptations: a temptation is a reason to do what you ought not to do. The idea is that non-temptations like the weather and mood elevators play a surprisingly large role in moral conduct, including both external behavior and more internal phenomena such as thought, feeling, emotion, and deliberation.

If this is right, it puts some pressure on virtue ethicists, since they would then have to say that most people aren’t virtuous. People might approximate virtue in some contexts, but overall they don’t exhibit the kind of consistency that virtue requires. How embarrassing an admission this is depends on how demanding you think an ethical theory should be. I tend to think that what might be called a hierarchy of demandingness is appropriate: it should be possible for most anyone to satisfy the minimal constraints of a normative theory (this is something like the ought-can principle), though perhaps only a moral elite can ever aspire to sainthood.

In any event, I’m not going to argue today that the situationist challenge succeeds, though on my reading of the literature it largely does. What I want to discuss today is a response I think is available even and especially if the challenge succeeds. What I want to suggest is that the situationist challenge to virtue ethics should not be resisted so much as co-opted.

Let’s approach this response from the point of view of moral education. My sister has a 4-year-old son, and, like any good mother, she wants him to grow up to be honest. How should she go about ensuring that he does? One strategy that naturally comes to mind is to exhort him to be honest, to give him some moral rules (don’t lie, don’t cheat, don’t steal), to punish him when he breaks those rules, to reward him when he follows them, to explain to him the benefits of being honest, and so on. Alternatively, she could tell him that he already is honest, especially when he does something that could be construed as honest and especially in front of other people. It might surprise you to learn that the latter strategy is more promising than the former.

To illustrate why I think this, let me tell you about a study conducted by Miller, Brickman, and Bolen in 1975 with two groups of fifth-graders. One group, call them the exhortation group, was repeatedly asked, encouraged, and wheedled into being tidier in the classroom – by the teacher, the principal, even the janitors. The other group, call them the labeling group, was praised (falsely) for their above-average tidiness. For instance, the teacher told them that they were ecology-minded and that the janitors had commented to her that theirs was one of the cleanest classrooms in the school. The principal visited them to commend them for the orderliness of their classroom. The janitors left a note thanking them for making their job so easy. After a brief increase in their tidy behavior, the exhortation group fell back into their old ways. In contrast, the behavior of the labeling group remained tidier for an extended period.

This is just one study, and of course it would be crazy to base a theory on a single study. Fortunately, similar studies have turned up the same phenomenon when people are labeled with other traits, including charity, generosity, cooperativeness, helpfulness, and eco-friendliness. Labeling agents with traits beats exhortation. It also beats mere praise (“that was a good thing to do”). It even beats the labeling of actions with virtue terms. In one study, people gave 350% more to a charity after being told that they were generous than they did after being told that what they had done was generous.

It seems to me that two mechanisms are at work here: self-concept and social expectations.

Roughly, your self-concept is your picture of yourself, your settled beliefs about what personality traits you have. People enjoy acting in accordance with their self-concepts, especially when the relevant traits are evaluatively positive. And, unsurprisingly, they’re averse to acting in violation of their self-concepts, again, especially when the relevant traits are evaluatively positive. Since I think I’m curious, it pleases me when I note that I’m doing something curious. If you think you’re generous, it pains you to note that you’re not being generous when that’s what’s called for. To the extent that labeling people with virtues alters or reinforces the relevant parts of their self-concepts, then, it makes sense that labeling would induce action in accordance with those virtues. This is why the attribution has to be plausible. You’re not going to alter someone’s self-concept if you tell them something they won’t believe.

From this point of view, factitious virtue looks like the placebo effect. The placebo effect is the phenomenon in which someone’s beliefs about herself are causally implicated in their own truth. For instance, her pain goes away because she thinks it will. Or she recovers from some illness because she expects to. What’s so intriguing about the placebo effect and factitious virtue is that the fact tracks the belief, rather than conversely. Placebo analgesia depends on expectations of pain relief; expectations of pain relief don’t depend on placebo analgesia. And just as with factitious virtue, the beliefs involved in placebo effects don’t typically spring from nowhere. The patient usually has some reason to believe, such as a sugar pill, a sham surgery, or the prayer of a priest. In the same way, the target of the virtue attribution has some reason to believe it, because the attribution was made plausible. In the Miller, Brickman, & Bolen study I described earlier, the experimenters surreptitiously cleaned the classroom of the labeling group so that the attribution of tidiness would seem plausible.

The other thing that I think helps to explain this phenomenon is social expectations. Just as people enjoy acting in accordance with their self-concepts and are averse to violating them, so they often enjoy doing what others expect of them and are averse to letting others down. If someone tells you that you’re courageous, you’re going to be especially keen not to look a coward in front of that person (and anyone else who was around when the label was applied). So again, it’s important that the attribution be plausible. Social expectations don’t get built up for free. This is also why it’s useful for the attribution to be public. The more people who expect you to act in accordance with some virtue, the more inclined you’ll be to live up to those expectations.

So from this point of view, factitious virtue looks like a self-fulfilling prophecy. A self-fulfilling prophecy is a public announcement whose truth depends in part on the very fact that it was publicly announced. If Ben Bernanke, the chairman of the US Federal Reserve, were to announce on Sunday night that the stock market was going to crash on Monday, it’s quite likely that it would in fact crash. Some people would think that he had evidence for his announcement, and therefore sell their stocks. Other people might think he had no evidence, but that others would be duped; they too would sell their stocks. As in the case of the placebo effect, the fact here tracks the announcement, not the other way around. The market crash depends on the announcement: if Bernanke hadn’t made the prediction, the crash wouldn’t happen. But the announcement doesn’t depend on the likelihood of the crash: Bernanke doesn’t know, independently of announcing it, that the crash is going to happen.

The social expectations mechanism suggests that virtues are more closely related to social categories than it might seem. Take noble, for example. Originally, this was clearly a social category. To be noble was to belong to a certain family, with a certain pedigree. It was a matter of being an aristocrat. Later, a more psychological conception of nobility emerged: to be noble was to be disposed to act and react in certain ways. It didn’t matter what social class you belonged to. What I’m suggesting is that being noble even in this latter sense may still be socially infused. Being psychologically noble depends in part on being considered noble. For some virtues (and vices) this notion is going to be more appealing than for others. For instance, it seems quite plausible that being charming depends in part on being thought charming. It’s hard to charm people who sneer at you. And it seems natural to say that being leaderly (to coin a term) depends in part on being thought leaderly. Similarly for some vices: thinking that someone is antagonistic is a pretty good way to make them disposed to antagonism. Expecting unfriendliness from someone may dispose them to be unfriendly.

For other virtues and vices, this suggestion may seem more far-fetched. Is it really true that being courageous depends on being thought courageous? Is it really true that being thought unfair disposes someone to be unfair? As I mentioned earlier, this phenomenon hasn’t been systematically investigated for all of the virtues and vices. It does seem to crop up at the very least, though, with charity, generosity, cooperativeness, helpfulness, and eco-friendliness, as well as with selfishness. The idea, then, is to blunt the force of the situationist challenge to virtue ethics with a moral technological intervention: even if virtue as traditionally defined is too demanding because people are surprisingly susceptible to seemingly trivial and normatively irrelevant situational influence, we should go on attributing virtues to people because such attributions induce something very much like virtue.

You can probably see now why I think it’s not advisable to attribute vices to people, even when you have pretty good evidence. If the attribution changes their self-concepts, they’ll be more inclined to act in accordance with vice. And if it alters their social expectations, again, they’ll be more inclined to act in accordance with vice. Saying that someone is vicious is likely to confirm them in that behavior, not encourage them to change it.

An interesting further upshot is that it’s especially useful to make plural rather than singular attributions. What I mean is that there’s an important and (as far as I know) hitherto unnoticed difference between saying, “You are honest,” and saying, “Y’all are honest,” or, “We are honest.” There’s some pretty convincing evidence in behavioral economics that lots of people aren’t cooperative as such but conditionally cooperative: they’ll cooperate only if they think enough other people will do so as well. Plural attributions help to generate these expectations. If I say, “We’re all generous people. Let’s all chip in for a travel fund for the graduate students,” you might be more inclined to play along because you think the rest of us will too. Nobody wants to play the sucker to someone else’s free-rider.

OK, that’s about all I have time to say about factitious virtue. I’ll end by pointing out that factitious virtue isn’t quite the same thing as inducing real virtue in people. The person with factitious generosity gives to others because he thinks he’s generous, whereas presumably the genuinely generous person gives because it would help someone. The factitiously courageous person overcomes threats to valued ends because others expect her to, whereas presumably the genuinely courageous person doesn’t need that kind of social support. Still, it’s a good start. And it might just be – though this is purely speculative – that exercising factitious virtue long enough might lead someone to see the value in acting not just in accordance with virtue but from virtue. In other words, factitious virtue can be seen as an intermediate stage in the cultivation of genuine virtue.