What can Moral Psychology do for Normative Ethics?

A heated debate has arisen since the rise of moral psychology as a scientific discipline. Moral psychology examines, descriptively, how we come to make moral judgments and have moral intuitions (these two terms I will henceforth use interchangeably). Some thinkers argue that normative ethics — the study of what we morally ought to do — can be informed by moral psychology. Of course, whether this claim is controversial depends on exactly what one means by “informed.” In some rather trivial sense, moral psychology informs normative ethics insofar as it provides us with the mechanisms that lead to moral judgments.

The more controversial claim is that, in some sense, understanding how we come to make moral judgments can tell us, in a more direct way, what we ought to do. Somehow, learning about how we come to think about morality will inform us about how we should behave.

Opponents of this are quick to cite the famous is-ought gap — one cannot derive how things ought to be from how things are. Since moral psychology is a descriptive enterprise about how our minds work, it would seem to fall into the category of telling us what is, and so cannot tell us what ought to be. Those who take this position we can call philosophical ethicists.

Proponents of the view that moral psych can tell us what we ought to do, who we’ll call psychological ethicists, will reply by first asking what it is that we use to determine what we ought to do. The answer that almost anyone would agree on, they say, is moral intuitions — those spontaneous, felt-judgments we have about what is right or wrong in various cases.

Philosophers will put forward a moral theory and then “test” it against our moral intuitions in cases. Take Utilitarianism, which loosely states that what one ought to do is whatever promotes the greatest amount of happiness. Well, if utilitarianism is correct, then holding gladiator fights for large numbers of people is not only morally good, but morally required. Since our intuition is that it is wrong to subject people to violent fighting simply to amuse an audience, we have a reason to doubt the truth of utilitarianism.

Psychological ethicists then claim that some facts about how we come to have moral intuitions can tell us whether those intuitions reliably track what is moral. Take the following case to help illustrate:

Utilitarian Drug

Psychologists, in collaboration with pharmacologists, develop a drug whose effects last for five hours. The drug causes you to have only strong utilitarian intuitions in any moral dilemma you are presented with. Should you act on the basis of these intuitions while you’re under the influence of this drug?

Given that your intuitions aren’t being “properly” generated, it would seem that you should not rely on the moral judgments you make while under the influence of the utilitarian drug, either when building a moral theory or making moral decisions. Thus, knowing facts about how your moral judgments are being formed has changed how you ought to behave.

Similarly, psychological ethicists will claim, understanding how our moral intuitions are formed can reveal cases where, perhaps, we shouldn’t trust such intuitions. For example, there is evidence that the order in which one is presented with different moral dilemmas can influence whether one intuits that an action is permissible, and also how strongly that intuition is felt. Clearly, the order in which moral cases are presented is irrelevant to the truth of one’s intuitions about how one ought to behave, and so, like in the case of the utilitarian drug, we should be wary of trusting our intuitions when we are presented with cases in serial fashion.

One way that philosophical ethicists might respond to this argument is to still deny that moral psychology has told us anything about what we morally ought to do or even ought not do. Rather, they might argue, it has told us what we rationally ought to do. When you are under the influence of a utilitarian drug, it would seem that such knowledge informs us that we are in a kind of skeptical scenario — whereby we have strong reason to doubt our intuitions. As such, we should refrain from believing any of those intuitions. But of course, this seems far from a moral intuition. Rather, it closely resembles a rational intuition about what we should believe based on prior knowledge about defeaters for our beliefs. Perhaps psychology hasn’t told us anything about what we ought to do, morally, after all.

Some psychological ethicists will concede this point, and be happy to do so, for they consider it a pyrrhic victory. Numerous studies have been marshaled to show that our moral intuitions are formed by all sorts of morally irrelevant factors, such as the presence of a noxious odor, the wording and ordering of cases, and whether you have just seen a violent or negative visual stimulus. The idea is that such causal influences on our moral intuitions are the product of our idiosyncratic and morally irrelevant evolutionary history, and so they are quite ubiquitous. Given the ubiquity of such influences, perhaps we rationally ought not trust our moral intuitions at all.

Walter Sinnott-Armstrong takes a position that is somewhat like this. His claim is that the enormous and frequent impact of irrelevant influences on moral judgments implies that moral intuitions are never non-inferentially justified. In other words, such intuitions are not justified without further supporting reasons — they are not justified by default.

Peter Singer takes an even more radical approach, and, for somewhat similar reasons to those I just mentioned, claims that we shouldn’t use moral intuitions at all in moral theorizing. Instead, we should perhaps only appeal to what he calls rational intuitions.

There is little point in constructing a moral theory designed to match considered moral judgments that themselves stem from our evolved responses to the situations in which we and our ancestors lived during the period of our evolution as social mammals, primates, and finally, human beings. We should, with our current powers of reasoning and our rapidly changing circumstances, be able to do better than that (Singer, 2005).

At this point in the debate, one might be tempted to conclude that the answer to the question about what influence moral psychology has on normative ethics is either that it has none, or that it undermines the whole program, leading to a kind of moral skepticism (at least insofar as we shouldn’t use moral intuitions in moral theorizing). But I think that moral psychology can have a very different role, one that is compatible with the idea that we can trust moral intuitions, and still should use them in deciding how we ought to act. On my view, moral psychology should be treated as a small but useful tool in normative ethics.

In showing how our moral judgments are formed, psychology’s power isn’t restricted to undermining our intuitions; it can also clarify our reasons for having those intuitions. In particular, moral psychology can help specify which principles underlie our moral intuitions.

Take the famous principle of the Doctrine of Double Effect (DDE), which states that it is impermissible to cause harm intentionally as a means to an end, but permissible to act with a good intention while merely foreseeing harm as an inevitable side effect. This is classically illustrated by the contrast between two types of trolley-problem cases:

Pull

A runaway trolley is barreling down the tracks. If it continues on its course, it will strike five people who stand unaware on the tracks. However, beside you is a lever which, if pulled, will redirect the trolley onto a different track, where one unaware person is sitting. If this person is struck, he will surely die. Should you pull the lever?

Push

A runaway trolley is barreling down the tracks. If it continues on its course, it will strike five people who stand unaware on the tracks. However, you are standing on a footbridge above the tracks next to a large man. If you push this man off the bridge and in front of the trolley, he will surely die, but the trolley will come to a halt, and the five people will be saved. Should you push the large man?

Most people will say that it is morally permissible to pull the lever, but impermissible to push the large man. Philosophers have traditionally explained this difference in intuitions with the DDE. It is permissible to pull the lever because you intend to save the five, even though the death of the one individual is a foreseeable side effect. It is impermissible to push the large man because in doing so you must intend his death as a means to saving the five.

Fiery Cushman has provided a good heuristic for determining whether an agent intends harm as a means to an end, as opposed to intending to help with a foreseeable harmful side effect — ask whether the agent’s plan to bring about good consequences could be implemented without harming the individual in the scenario. In Push, one cannot stop the trolley and save the five without the presence of the large man, but in Pull one could save the five even if the individual on the side track were not there.

The DDE intuition has been so powerful that it is reflected in many ethical and legal positions. For example, some people justify certain forms of euthanasia with this doctrine. The reason that euthanasia is permissible, they will say, is that the physician only intends to alleviate the pain of the patient, though he foresees the inevitable side effect of death. Chief Justice William Rehnquist appealed to the DDE in his majority opinion in Vacco v. Quill when he said, “Just as a State may prohibit assisting suicide while permitting patients to refuse unwanted lifesaving treatment, it may permit palliative care related to that refusal, which may have the foreseen but unintended ‘double effect’ of hastening the patient’s death.”

I believe that moral psychology can play a role in clarifying whether our moral intuitions actually justify the DDE. First, recall that what grounds the DDE is the intuitive difference between cases like Push and Pull. But while our intuitions are consistent with the DDE, it’s not as though people consciously cite the DDE as the principle that justifies their intuitions. So it isn’t obvious that the DDE is what our intuitions are actually tracking; it is an inference philosophers have made to explain the intuitive “data.” If we showed that there are cases where our intuitions track not the DDE but other, similar principles, we might want to give up the DDE on the basis of our intuitions.

To be clear, this is something that philosophers do all the time. They try to find thought experiments and cases that will show that our moral intuitions don’t track a principle. But moral psychologists can play a role in the process as well. First, they can help adjudicate what our moral intuitions are actually tracking and second, they can investigate the moral intuitions we have not just in thought experiments, but in performing actual actions. I’ll elaborate on these in turn.

The intuitions in Push and Pull are consistent with a variety of principles — for example, a prohibition against more causally “direct” harms, and a permission for more causally “indirect” harms. In Push, the harm that is caused is up-close-and-personal, with your push immediately leading to the death of the large man. In Pull, there is a series of steps between your action — pulling the lever — and the one person dying.

The DDE is silent on the moral relevance of causal “directness,” as it makes no mention of causation; what matters on the DDE is the intention an agent has. Of course, we sometimes infer an intention from causal directness. For example, it is sometimes said, following chaos theory, that a person waving his arms in Texas can cause a hurricane in China (through a long chain of causal effects). Given how many causal steps lie between the arm-waving and the hurricane, we readily infer that such a person did not intend to cause a hurricane.

So while the principle of causal directness is related to the DDE, it is not identical to it. Both principles, however, are compatible with the intuitive difference between Push and Pull. Maybe our intuitions don’t actually track the DDE, but instead track some other principle(s) that typically co-occur with DDE-style cases, such as causal directness.

In order to determine which of these two principles our intuitions actually track, philosophers traditionally would try to give thought experiments that would differentiate them, like these (again provided by Fiery Cushman):

Weighted Rescue

You are driving a motorboat and see five people drowning far off in the distance. You know that your boat is too heavy to reach the speed that would allow you to rescue the five in time. However, if you accelerate hard, one of your passengers will be thrown off the boat and drown, and only the loss of his weight will make the boat fast enough to save the five. Should you speed up the boat?

Speedy Rescue

You are driving a motorboat and see five people drowning far off in the distance. You know that if you don’t speed up the boat drastically, the five will surely die. However, you also know that speeding up will throw off one of your passengers, causing him to drown, but allowing you to save the five. Should you speed up the boat?

These cases are consequentially identical to Push and Pull, but they no longer involve any difference in causal directness. So, if people report differences in intuitions similar to those in the Push and Pull cases, this bodes well for the DDE.

However, suppose we couldn’t think of examples like this and still wanted to figure out whether causal directness was grounding our intuitions. Psychology can help here by providing neural data. Suppose ascriptions of direct causation are paradigmatically associated with a certain brain region X, and ascriptions of indirect causation with a region Y. We could present cases like Push and Pull and see whether there is a difference in the activation of X and Y, which would give us some reason to think that ascriptions of causation are underpinning our intuitions. In this way, moral psychology can assist in adjudicating (though not determining) which principles our moral intuitions track, and therefore justify.

One may deny that there can be any such useful mapping between brain states and mental states. But I have two concerns with this move. First, it would be hard to make it without also claiming that the whole brain-mapping project — whereby we correlate particular areas of the brain with functions like emotion and vision — is mistaken, which seems implausible. Second, this would be to reject a broad assumption of neuroscience, rather than to reject the narrower claim that moral psychology, granting that it can map such states, has a role to play in normative ethics. I, at least, am interested in the latter, narrower question, as are most people in this debate.

There is another way moral psychology can be helpful in normative theorizing — by showing how people’s intuitions differ between thought experiments and practice. In the philosophy classroom, people make moral judgments from behind their desks. What they don’t do is embed themselves in situations where they must actually act on the relevant moral choices. Plausibly, moral intuitions will differ between theory and practice, and that raises interesting ethical questions.

Suppose a medical student in Oregon feels that physician-assisted suicide is permissible when he considers the question in a medical ethics class. But when he is doing his palliative care rotation, he finds that he is not only hesitant to administer the drug, but has the moral intuition that euthanasia is impermissible. In such a situation, there is a disconnect between his judgment-in-theory and his judgment-in-action.

Moral psychologists can uncover such clashes. Since they have the advantage of a laboratory setting, they can construct situations in which subjects must make moral decisions, and then ask for their moral intuitions shortly thereafter. By contrasting these in-practice intuitions with other subjects’ behind-the-desk intuitions, they may reveal some stark differences. Such contrasts should interest philosophers, for they raise questions about which intuitions we should rely on in normative theorizing. Conversely, if there is no difference between in-practice and behind-the-desk intuitions about particular cases, perhaps we can have more confidence in them. But aside from merely raising questions, revealing differences between these kinds of intuitions provides philosophers with more kinds of intuitive “data” to build on, which strikes me as valuable.

While moral psychology certainly doesn’t settle normative questions, I think it can be a useful tool. It can help us determine which principles our moral intuitions actually track, and can reveal differences between in-practice and behind-the-desk intuitions, which provides philosophers with interesting intuitive contrasts that they can use in theory-construction. When it comes to normative ethics, moral psychology need not be seen as either simply irrelevant, or entirely subversive. There is a middle ground it can, and I think should, occupy.

Interesting analysis, Daniel. I observe only that when I went through the controversies regarding environmental responsibility formulated by the moral psychologists, I came away realizing that “moment in time” ethical conundrums don’t generate consistent results. Rather, the contradictions in the studies appeared to be consistent with morality as a quality of relation that can only be determined by an extended history.

Thus one-offs like the runaway trolley car might be taken as unworthy of deep consideration. To improve the safety of the trolley system seems clearly to be a “moral” commitment.

Many spiritual paths (notably Buddhism) also herald self-introspection as a method for minimizing the influence of confounding factors on our ethical decisions. This is not to supplant our natural moral compass with rational analysis, but rather to clarify its operation. Are you aware of any studies that look at the relative robustness of moral decisions as regards mindfulness practice as a factor?

DanT,
I wasn’t going to reply until/unless I could do so fully, but felt I needed to interject as quickly as I can before the comment conversation got fully underway.

There is so much wrong here, I don’t know where to begin.

First, I reject conflating ‘moral judgments’ which are expressed through action, with ‘moral intuitions’ which are only felt, or sensed or thought. I can sense all the moral intuition on a matter one cares for, but only what I do in response expresses my moral judgment on it, however I may be unhappy with that.

I reject all trolley/rowboat thought experiments as having anything to do with ethical decisions in the real world whatsoever. You want me to decide what to do with a trolley rolling toward 5 people? Then hire a trolley and drive it down toward five real, living people, and see what I do. Ethics is not about what we imagine, but the concrete lives we actually live. Again, an ethical decision is not what I think but what I do.

“In such a situation, there is a disconnect between his judgment-in-theory and his judgment-in-action.”

There will always be a disconnect between theory and action. Good ethical theory generalizes from actual behavior, but no previous behavior can predict future behavior with any certainty.

“Since our intuition is that it is wrong to subject people to violent fighting simply to amuse an audience…” Whoever said that? What world are we living in? You don’t watch professional wrestling?

In fact the complexity and diversity of human intuition concerning violence – a problem I’ve been concerned with for decades, largely because of various commitments towards pacifism, but also as a writer of violent fiction in my youth, and as critic of violent films – pretty well crucifies this whole discussion. One of the reasons I might not write a full comment here is because your article has incited me to write a reply on this issue, or perhaps a series on my own web log, although it would mean revisiting unpleasant territory for me.

However, I will use this moment to point out that this elemental assumption reveals this discussion as occurring, and only possible, in a certain culture and a certain time and place – which means that its generalizations are extremely narrow in focus, and inapplicable across time or differing cultures. And it also suggests that ‘ethical psychologists’ know dam’ little about the depth or breadth of human psychology – certainly less than did Freud or Nietzsche or even the Marquis de Sade.

The way you describe it here, you have the philosophers doing an activity that is basically social science: trying to map out people’s reactions to various behavior. The difference being that the philosophers contemplate thought experiments while the psychologists go to the lab or field. If the project is trying to figure out what people think they ought to do, isn’t such a project descriptive and not prescriptive?

I do think there is a promising role for psychology, but claiming to investigate “moral” intuitions seems question-begging; I’d prefer to say “social” intuitions and reserve for a separate discussion whether and when they should be indulged. For example, of Haidt’s categories, having somewhat less “authority” and a lot less “purity” would do us good.

Which brings me to something else that has long puzzled me. So much of ethics is done in isolation, but people always live in societies that have some particular collection of physical and social technologies. Abstracting ethics away from that seems rather hollow. As an example, here in the U.S., we could make a big Pareto improvement in our criminal justice system by moving sharply away from expensive long incarceration in prisons and towards rehabilitation focused community based systems that look forward toward preventing future crime. But this hasn’t happened yet since there are sizable numbers of people who look backwards with a strong emotional commitment to making the criminal “pay” for what they did. So a good question is what are the prospects for dialing down these retributive reactions and how could you go about doing it? Clearly plenty of work for psychologists. But isn’t there also a role for philosophers in helping to explicate a coherent worldview that makes this less punitive policy broadly appealing? After all, there would have to be shifts in other attitudes and beliefs too. How could it all fit together?

So Daniel, I guess I subscribe to a stronger thesis than you advance. I’d say “the study of what we morally ought to do” necessarily involves learning things that psychologists and other social scientist are in a much better position to discover than philosophers are.

Hi Dan, I’m not sure I can be as hard on your article as EJ, since you are more raising questions than delivering answers, but I sympathize with some of the points he made.

There is some difference between intuition and judgment, as well as theory and practice. Though, I guess I am a bit less skeptical than EJ that future behavior can be judged from past behavior. Perhaps nothing is certain, but you can get some pretty useful predictors. Of course, like EJ, I was also caught on the claim about violence. That seems more what we like to say about ourselves, than what we do in practice.

Anyway, taking the piece as is, I would agree that moral psychology has a limited role in moral philosophy, but would go further and say that neither field of study does much to support (in some objective sense) normative claims.

For me the best moral philosophy can do is try to elucidate the many different ways problems we face can be considered. What aspects could be thought relevant? What relative weights would we assign different aspects? Why? In the end you will have to say what you feel, and decide what you will do, and figure out what that means about yourself and others (based on what they say and/or do). It is a process of discovery about the moral character of yourself and others. From this one can try to adopt changes (to either), but no moral theory can dictate (beyond your will) that you must make changes. It comes down to a refinement of one’s character through reflection.

Moral psychology can elucidate the mechanics of how one goes about considering problems. It is wholly unsurprising, and so uninteresting to me, that people’s moral intuitions/judgments can be influenced by transient conditions. That is why serious moral consideration and action is not done on the spur of the moment; it takes some time of reflection under different conditions. More interesting to me are tricks one can use to take different perspectives, or to prevent oneself from being manipulated by others. Also, I think people like Haidt have been useful in experimentally finding and grouping the common factors on which people base moral judgments. That can help us reflect on the different systems, or choices, that moral philosophy discusses.

In no case can a scientific enterprise help us understand what is “right” without falling to the “is…ought” problem, or the “naturalistic fallacy” (yes, I consider that different than what Hume was discussing).

Lowest in utility is neuroscience. It can tell one how the brain physically processes and so facilitates ethical considerations… but what does that help us do? Except maybe it will tell us what we can do physically to our brain to get a result we want, but the current brain processes do not allow? Ok, but that will never tell us we ought to want that change (or not).

“For example, of Haidt’s categories, having somewhat less “authority” and a lot less “purity” would do us good.”

and…

“So a good question is what are the prospects for dialing down these retributive reactions and how could you go about doing it? Clearly plenty of work for psychologists. But isn’t there also a role for philosophers in helping to explicate a coherent worldview that makes this less punitive policy broadly appealing? ”

and…

“I’d say “the study of what we morally ought to do” necessarily involves learning things that psychologists and other social scientist are in a much better position to discover than philosophers are.”

… you seem to have snuck in your own moral outlook as what is “right,” treating the question of “what we morally ought to do” as a merely instrumental issue of how to achieve that goal.

“First, I reject conflating ‘moral judgments’ which are expressed through action, with ‘moral intuitions’ which are only felt, or sensed or thought. I can sense all the moral intuition on a matter one cares for, but only what I do in response expresses my moral judgment on it, however I may be unhappy with that.”

This is just philosophical convention. Frequently, intuitions are taken to be, as I said in the essay, those spontaneous felt-judgments that we make. Philosophers also don’t understand “judgment” as needing to be expressed through action — it can simply be a thought with propositional content. But all this aside, I was conflating the two for simplicity, so that I wouldn’t have to repeat the word “intuition” throughout the entire essay. I did not intend to make any substantive philosophical claim in “conflating” the distinction.

“I reject all trolley/ rowboat thought experiments as having to do with ethical decisions in the real world whatsoever. You want me to decide what to do with a trolley rolling toward 5 people? Then hire a trolley and drive it down toward five real, living people, and see what I do. Ethics is not about what we imagine, but the concrete lives we actually live. Again, an ethical decision is not what I think but what I do.”

You can reject the standard philosophical methodology of using thought experiments to pump intuitions, and then using those intuitions to generate moral principles, if you want. But debating *that* is basically a different topic. I’m assuming that the normative-ethics methodology I described in the essay, which uses thought experiments, is at least somewhat legitimate.

“There will always be a disconnect between theory and action. Good ethical theory generalizes from actual behavior, but no previous behavior can predict future behavior with any certainty.”

Your first claim (that there is *always* a disconnect between theory and action) is an empirical one, assuming that you’re referring to someone’s “behind-the-desk” intuitions and their actual behavior. As I said in the essay, this is the kind of empirical claim that moral psychology can assist in answering; I don’t accept it a priori. The second point is one that you feel strongly about, but others disagree. For example, many people hold that basing ethical theory on actual behavior is precisely the wrong thing to do. I won’t go into too many details, but suppose you were in Germany during World War II. Should you base your ethical theory on the way that Nazis, or Germans generally, acted?

I think the best way to put my concern is that I suspect you’re basically succumbing to an is-ought problem here. When you say “good ethical theory generalizes from actual behavior” that sounds an awful lot like saying “we ought to do what we actually do.” But, of course, this is just to make an is-ought inference.

““Since our intuition is that it is wrong to subject people to violent fighting simply to amuse an audience…” Whoever said that? What world are we living in? You don’t watch professional wrestling?”

I see why you took issue with this. When I said “violence” I meant to imply killing. Poor word choice. Also worth noting is that when I said “subject people to violent fighting” I meant to imply a lack of consent (e.g., slaves in gladiator fights). I plead guilty to poor phrasing.

Hi DW,

“The way you describe it here, you have the philosophers doing an activity that is basically social science: trying to map out people’s reactions to various behavior. The difference being that the philosophers contemplate thought experiments while the psychologists go to the lab or field. If the project is trying to figure out what people think they ought to do, isn’t such a project descriptive and not prescriptive?”

I think normative ethics has both descriptive and prescriptive components. Its primary *goal* is prescriptive, but of course it will have to use facts about the world in some (at least minor) sense. This could mean nothing more than figuring out, descriptively, what one’s own intuitions are on certain matters.

“I do think there is a promising role for psychology, but claiming to investigate “moral” intuitions seems to be question begging, I’d prefer to say “social” intuitions and reserve for a separate discussion whether and when they should be indulged. For example, of Haidt’s categories, having somewhat less “authority” and a lot less “purity” would do us good.”

I’m fine with not calling those spontaneous, felt-judgments about what is right or wrong “moral intuitions” if you think they can be misleading. I don’t mind how words are used as long as we are referring to the same thing.

“So a good question is what are the prospects for dialing down these retributive reactions and how could you go about doing it? Clearly plenty of work for psychologists. But isn’t there also a role for philosophers in helping to explicate a coherent worldview that makes this less punitive policy broadly appealing? After all, there would have to be shifts in other attitudes and beliefs too. How could it all fit together? So Daniel, I guess I subscribe to a stronger thesis than you advance. I’d say ‘the study of what we morally ought to do’ necessarily involves learning things that psychologists and other social scientists are in a much better position to discover than philosophers are.”

Well, certainly empirical work is extremely important when you are doing *applied* ethics (which I think is what you’re concerned with), but normative ethics tries to develop moral theories about how to act as opposed to applying those theories to the world.

Hi Ontological Realist,

I take it you’re asking me if I am some kind of anti-realist about morality. I happen to be a realist (I hold that there are moral facts), but I don’t want to dwell on this issue because it is kinda tangential to the topic in the OP.

With a broad brush, that criticism is rather weak. Of course, any statement of the form “we ought ____” is contingent on something in virtue of which it is considered worth doing. But if we are unwilling to differentiate among the various “somethings”, then the only thing to do would be to try to get people to give up on concepts like ethics and morals, because such notions hide what is really going on, which is merely transactional.

But our various somethings are not all alike. I chose the criminal justice example precisely because it offered a Pareto improvement (more realistically, a near Pareto improvement). That is very low-hanging fruit, in that it takes little in the way of “moral outlook” buy-in and thus can have broad appeal. Not many people would reject “less crime, at less taxpayer expense, and fewer people suffering in captivity”. The issue instead is whether we can get them to agree to the changes that would be needed to get to that different society.

I think this notion of how deep the motivating commitment runs is an important one. I disagree with many of Rawls’ particulars, but I think it is very useful to consider what can be achieved by working out from starting points of minimal attitudes and beliefs that humans fairly broadly share.

One way to look at this is that we are in a local equilibrium, but there might be a different equilibrium nearby that is better by easy criteria: it is less violent, people live longer and healthier lives, they are more cooperative, they are richer, they are managing the world in a sustainable way that will benefit future generations… If we can tunnel through from here to there, then it isn’t a very big claim to say that we got better.

So not only should social scientists and philosophers study the current range of social intuitions, they should also study where these attitudes are amenable or resistant to change. Learning such things would be of use in developing policies that might actually succeed.

“normative ethics tries to develop moral theories about how to act as opposed to applying those theories to the world”

Yes, claims like this have always puzzled me. What is the context over which you see “normative ethics” ranging? Every action has only ever taken place at a particular time and place, so how do you decide how you are going to abstract away from this?

If you consider the East African plains apes as they were on the verge of colonizing the world, compare them to a typical gunpowder empire, and compare those to any of our modern cultures, you see (among much else) a fantastic expansion in cooperation with total strangers. A truly amazing change. Is there much that can be said about “how to act” across this wide expanse without taking account of the particulars of what people thought of each other and how they expected others to act?

I guess I consider that there is much less of a distinction to be made with regard to “applied” ethics. Since some of our psychological makeup has changed a lot, and some has changed only a little, I would think that a worthwhile theoretical ethics would need to take a lot of empirical issues seriously. And it should be up front about it if it is a project of only limited scope.

Again it is not unreasonable to make some tentative inferences from these kinds of facts about moral foundations. These kinds of findings may not be entirely subversive, but they should certainly make one think twice. Lazari-Radek and Singer [2012] try to argue that if a moral axiom runs counter to likely evolutionary (and we could add neurocognitive) biases, then this is supposed to make us happier that reflective equilibrium has been more effective and we should trust that axiom (coming from their moral realist stance). ISTM that is not a particularly good test, though Christianity always got some mileage out of ostentatious acts against individual interest.

DanT,
“Philosophers also don’t understand “judgement” as needing to be expressed through action -” Now that is poor phrasing; Peirce, James, and Dewey, as well as Heidegger, Sartre, Ortega y Gasset, and Tetsuro Watsuji, have all either suggested or stated explicitly a contrary view; all were recognized philosophers of their time, and they still have adherents among current philosophers.

In fact the distinction between moral judgment, and actions predicated upon these, and moral intuitions is not casual but decisive – they must be made and discussed in order to understand the complexity of ethical decision making – which cannot be reduced to, say, the binary choices of such as the trolley problem.

I have no problems with asking questions about hypothetical test-cases for ethical inquiry. I have enormous problems with making simplistic assumptions and then positing thought experiments equally simplistic on the basis of these.

Let’s go back to the problem of violence: “Well, if utilitarianism is correct, then holding gladiator fights for large amounts of people is not only morally good, but morally required. Since our intuition is that it is wrong to subject people to violent fighting simply to amuse an audience, we have a reason to doubt the truth of utilitarianism.” This is also poorly phrased; you are phrasing the utilitarian judgment on gladiatorial combat from the outside perspective of those with the power to initiate or intervene against them. Then you wing inward with a claim about internal intuitions from the perspective of those who might or might not enjoy them.

But the public demand for gladiatorial games is a documented fact; thus the evidence is that large numbers of Romans felt no such intuitive rejection of watching killing for pleasure. And indeed, the games could only be defended on a utilitarian basis if this were so.

“DanT, ‘Philosophers also don’t understand “judgement” as needing to be expressed through action -‘ Now that is poor phrasing; Peirce, James, and Dewey, as well as Heidegger, Sartre, Ortega y Gasset, and Tetsuro Watsuji, have all either suggested or stated explicitly a contrary view; all were recognized philosophers of their time, and they still have adherents among current philosophers.”

Sorry for the poor phrasing. I wasn’t aware of the popularity of this account, and if you want to point me to contemporary defenders I’d love to look through them. I think that you and I read very different philosophical traditions.

But that concession aside, the fact is that it is convention these days to define intuitions in terms of judgments, and to understand judgments as thoughts with propositional content. Here are a couple of examples, and I know you could find many others (I’ve gone through some of the literature on the topic, and this is quite common).

I hope this illustrates that defining intuition in terms of judgment is a convention in moral philosophy. I’m using it here. If you want to debate the difference between the two that is a whole other topic, which we can discuss at a different time, but it’s not the subject of the essay. I’m operating within a certain framework I specified.

“In fact the distinction between moral judgment, and actions predicated upon these, and moral intuitions is not casual but decisive – they must be made and discussed in order to understand the complexity of ethical decision making – which cannot be reduced to, say, the binary choices of such as the trolley problem. I have no problems with asking questions about hypothetical test-cases for ethical inquiry. I have enormous problems with making simplistic assumptions and then positing thought experiments equally simplistic on the basis of these.”

It sounds like you’re questioning a large part of normative ethics. That’s fine, but again, it’s not the discussion topic of the essay. I’m operating within a standard methodology. I did not want to discuss the legitimacy of normative ethical methods here. Maybe this essay just wasn’t for you, since it takes place in frameworks you fundamentally disagree with. That’s fine, I think Dan K, for example, would also reject most of the frameworks I’m operating within.

“Let’s go back to the problem of violence: “Well, if utilitarianism is correct, then holding gladiator fights for large amounts of people is not only morally good, but morally required. Since our intuition is that it is wrong to subject people to violent fighting simply to amuse an audience, we have a reason to doubt the truth of utilitarianism.” This is also poorly phrased; you are phrasing the utilitarian judgment on gladiatorial combat from the outside perspective of those with the power to initiate or intervene against them. Then you wing inward with a claim about internal intuitions from the perspective of those who might or might not enjoy them.”

I’m really not sure what you’re getting at here (and I don’t intend for that to be dismissive of your point, but an invitation to elaborate), but I feel I need to remind you again that nothing important to the topic of my essay hinges on this example. I was using it to quickly illustrate standard philosophical methodology. If you don’t like this case, you can change it to something else — say, a child’s blood creates a cure for cancer when he is beaten. Utilitarianism would imply we ought to beat the child all the time in order to dispense the cure. We have an intuition against this.

“Should you base your ethical theory on the way that nazis or germans acted?”

You’ve misinterpreted my point by taking me to be suggesting that ethical theory exists to justify the behaviors it must begin with. That’s simply not the case, and that’s not what I was saying. Do we begin an understanding of ethics in Germany by studying the behavior of the Germans and the Nazis in the ’30s and ’40s? Of course, but how could it be otherwise? And in such study our purpose is not to justify that behavior, but to understand it, and to derive principles, both positive and negative, according to which we have greater purchase over our own behavior in the future.

You may have forgotten here that, having written a study on Hitler, I had to confront a wide range of behaviors in Germany in that era. In that confrontation, I had to ask some painful questions. What made highly intelligent and otherwise ethical doctors engage in crude and cruel ‘experiments’? Why did supposedly decent truck drivers willingly deliver Zyklon B to the death camps, knowing what it was intended for? If one asked a young soldier whether it was right to beat an infant to death, he would not only have rejected that suggestion, he would have been appalled. Yet the next day he would beat an infant to death, persuaded that the infant’s Jewish descent, or the presumed wisdom of the officer ordering him to do this, effectively excused him from responsibility.

After ordering the police to form what were in effect death squads, to ‘clean up’ Jewish villages in Poland in the wake of the invasion, Himmler decided it was his duty to witness one of these mass executions. He came, he saw, he promptly threw up, overcome with horror and disgust. Then he just as promptly reassured the men involved that they were engaging in terrible acts for the greater glory of Germany, and that they would be well remembered for their ‘moral’ sacrifice. (By the way, the notion that these special police had to follow orders in performing mass murders happens to be a lie. If any of them felt they could not in good conscience participate, they were re-assigned to desk jobs back in Germany. Partly for this reason they were replaced by the more dedicated SS.)

Were you aware that the Supreme Court of Germany (at least up to the time of my study) had not ruled Hitler’s dictatorship or the laws made by him illegitimate, but rather held that they were completely constitutional for their time, though superseded by the post-war constitution? That should give us pause.

Other odd facts raising troubling questions: Himmler was a school teacher who believed stars were ice crystals. But the Nazis condemned contemporary physics as “Jewish science,” except of course when it could be used to build weapons. Goebbels had a doctorate in engineering – along with some 40,000 Nazis holding graduate degrees in various fields, including half the medical doctors in Germany.

A right-wing influence on the young in the ’20s and ’30s was a major folk music revival. One of the most popular poets in this era was Walt Whitman in translation. Germany was peppered with pagan-revival religious cults, a movement dating back a century previous. The concentration camps were modeled in part on relocation camps for American Indians in the previous century.

Although homosexuals were oppressed and sent to camps in the later ’30s, the leadership of the Nazi SA (Brownshirts) was notorious for its homosexual orgies (which led the Army General Staff to demand their execution, carried out in the Night of the Long Knives).

The Marxists in the Reichstag voted for Hitler’s chancellorship, thinking that would position them to better negotiate with the Nazis.

Sociological analysis indicates that a third of Germany’s population actively supported Hitler, another third decided to go along with him, because what the heck, what did they have to lose? The final third were opposed to Hitler, but after all, they were Germans, and respected his legitimate election. Given the brutal totalitarianism of the Nazis, by the time they thought to resist, they were stuck.

Hitler himself was a vegetarian, something of an ascetic who only indulged by pouring sugar in his wine; he ended up addicted to pain pills. He banned modern artists, but in his youth had hoped to become one. He was fond of Mickey Mouse cartoons. Once the war started he found himself losing interest in Wagner’s operas. He told his architect Speer that he wanted buildings that would make ‘beautiful ruins.’ He refused to marry his lover Eva Braun until the moment he determined that they both needed to die. In the bunker he admitted bitterly that Schopenhauer had been right that the way of ‘Will’ was an exercise in futility, and that the Germans had proven the weaker race after all.

Historical facts like these present a wide array of ethical and political problems that aren’t going to be solved by trolley decisions made in a research clinic.

It was a mistake to play the ‘ethical Nazi dilemma’ card with me, Dan. I’ve already played that game, and hold bower in hand.

What next, the ‘five-year old Hitler dilemma’? – ‘if you could go back in time and shoot Hitler at age five, would you do so?’ Yes; double tap – and always put one in the brain.

Who are those five people the trolley is racing towards? Answer that question and the problem might be easier to solve.

We are getting drastically off topic now, and since you said you’ve been inspired to write a reply piece, I’ll just wait for that essay. I’d prefer if the discussion could revolve around the contemporary debates about moral psychology and its relation to normative ethics, assuming the framework I’ve described. You can challenge the frameworks in your next piece, but I’m going to leave it there.

DanT,
Sorry, I was busy writing my musings on the Nazi problem while you were posting your response. But I suggest it’s a mistake to use ‘Nazi dilemma’ problems (which were all the rage back in the early ’70s) in a discussion like this. However, it did allow me to develop an example of the depth and breadth of the issues that makes ethical inquiry and theory quite complex and difficult in the current era.

In looking up living philosophers I find interesting on such matters – such as Habermas, Sloterdijk, Agamben – it occurred to me that one interesting issue here is that the traditions I favor all assume a socially embedded ethics, while the whole purpose of trolley problems is to narrow ethical decisions down to the detached individual. The effort is thus to generate a sense that such decisions are made only by the individual, based on the individual’s ‘intuitions’, which from my perspective is a mistake: how do we acquire these intuitions if not from family, friends, communities?

My point about the gladiatorial combat was that a utilitarian claim cannot be mitigated by an intuitionist claim, partly because of the diversity of possible intuitions regarding the same behavior, partly because behaviors are themselves too diverse to be easily allotted a utilitarian rationale. (In short, I am rejecting both horns of that dilemma.)

There certainly is a ‘clash of frameworks’ here. But it is worth the confrontation if we can at least persuade readers, if not each other, to consider the issues involved from differing perspectives.

Yes, I certainly appreciate (as usual!) you bringing in a perspective from a (I think) different tradition. I hope you don’t take my decision to back off from it as dismissive. I look forward to your next essay.

Hi EJ, that had to be one of the most fascinating OT replies I’ve ever read (not being sarcastic). I also look forward to your next essay. I think we share a similar outlook on the field of ethics. I see some differences (though will not go into them here) and hope these can be discussed at some point.

On the whole German history thing… Brownshirt homosexual orgies: never heard of that, and how is it known that it isn’t a fabrication to retroactively excuse the deed? (Though if true, and Trump et al. really take a fascist turn, it might be something Milo Yiannopoulos should consider; I’ve been wondering which way their “long knives” will cut.) And the Hitler renunciation of the will, and denunciation of the German people, sounds a bit like the infamous “deathbed conversions” of atheists. I suppose no need to answer here, but if you have cites somewhere I’d love to have them.

………………

Hi Dan, given what you said to EJ I think much of my argument was also OT. Most of it was a similar rejection-of-framework issue. Within the framework, I guess my main problem would be in what way psych or neuro can inform. I agree they can, but not exactly in the way you set out (I gave some examples).

……………..

Hi Davidlduffy,

“doesn’t that imply utilitarians are reflecting for longer?”

It could, but it could also just mean they have had time for another system to kick in (and complete). The “utilitarian” deliberation time might be the same, but just have a delayed start time. One might also point out that requiring more time in one type of deliberation does not mean that it is better… it just means it takes longer. It would seem the 47 ronin had a long time to deliberate on their actions and the meanings of their actions and still sided with a deontological outcome.

“Again it is not unreasonable to make some tentative inferences from these kinds of facts about moral foundations. These kinds of findings may not be entirely subversive, but they should certainly make one think twice.”

I would like to see that argument unpacked.

“…if a moral axiom runs counter to likely evolutionary (and we could add neurocognitive) biases, then this is supposed to make us happier that reflective equilibrium has been more effective and we should trust that axiom (coming from their moral realist stance). ”

I realize you were not taking this position, so I am not criticizing you. But this is another good example of why I loathe EvPsych.

………………..

Hi DW,

“But if we are unwilling to differentiate among the various “somethings” then the only thing to do would be try to get people to give up on concepts like ethics and morals because such notions hide what is really going on which is merely transactional.”

There is something to that, but I don’t go exactly that far. In any case, you seem to have missed my point. My problem is that you seemed to frontload your moral question (what should we do) with your own moral baggage. Your reply here did not really change my impression. Much of the debate in philosophical ethics is over whose moral baggage should be used.

“That is very low hanging fruit in being something that takes little in the way of “moral outlook” buy-in, and thus it can have broad appeal. Not many people would reject “less crime, at less tax-payer expense, and fewer people suffering in captivity”.”

The same could be handled by completely retributive models, using corporal punishment up to and including death sentences for all crimes. Intriguingly, many in the West find caning and chopping off hands to be morally repugnant, but find the slow excruciating torture and damage from cutting off huge portions of people’s lives (by sticking them in prison) somehow more “civilized”.

“I think it is very useful to consider what can be achieved by working out from starting points of minimal attitudes and beliefs that humans fairly broadly share. One way to look at this is that we are in a local equilibrium, but there might be a different equilibrium that is nearby that is better…”

Within a common social system, with shared political ties, I might agree this works as a question of what our community should pursue as policy. But as an account of how morality works, of what *is* ethical? That is way off. You are adopting a utilitarian framework, and an assumption about humans that is not very realistic.

“… less violent, people live longer and healthier lives, they are more cooperative, they are richer, they are managing the world in a sustainable way that will benefit future generations… If we can tunnel through from here to there, then it isn’t a very big claim to say that we got better… Learning such things would be of use in developing policies that might actually succeed.”

Although I didn’t find it so horrific as an outcome, Brave New World is widely considered a dystopian future.

You said, “how do we acquire these intuitions if not from family, friends, communities?”

One alternative is that some of our moral intuitions stem from innate moral knowledge. John Mikhail, a student of Chomsky’s, has used Chomsky-style arguments to defend a position he calls Universal Moral Grammar, which is an (at least partially) nativist account of moral knowledge. It has been gaining quite a bit of attention. Something on this is here: http://www.sciencedirect.com/science/article/pii/S1364661307000496

If your framework of a socially embedded ethics assumes, as an empirical fact, that intuitions arise solely from friends and family etc., then it relies on an empirical assumption. Perhaps moral psychology can be a small tool in helping us resolve this factual matter, since it investigates not only how we form our moral intuitions when evaluating cases, but how we develop such intuitions from birth.

I don’t see how one can say that moral psychologists identify “how we come to make moral judgments and have moral intuitions.” At best they identify how we come to make *what we think* are moral judgments and have *what we think* are moral intuitions. The two, of course, are not the same. And the latter, while perhaps interesting sociologically, is utterly uninteresting philosophically.

To do what you suggest they do would mean that we know what moral judgments/intuitions are, or even that there are any. And that would mean that we have resolved the disputes between rival moral theories, and also between moral realists, broadly construed, and moral skeptics, broadly construed. After all, if Kant is right, rather than Mill, then certain intuitions about motives are going to count as moral intuitions, but intuitions about outcomes are not. If Mill is right, then the opposite will be the case. And if the radical moral skeptic is right — the one who thinks that morality is a fiction — then there aren’t any moral intuitions or judgments for moral psychologists to study.

Part of the problem, here, is that it is not at all clear that one can give a generic account of what makes a thought or action a *moral* thought or action. Certainly, the various normative ethical theories will have a view of what makes an action *right* or *wrong*, morally speaking, but for *that* to be the object of moral psychological research would require, again, that all the disputes between moral theorists have been settled. What’s needed is some generic account of what makes a thought or action moral, rather than belonging to some other category, and that seems precisely what we do not have. Aristotle thought that what made certain character traits and behaviors count as the moral ones was that they had to do with our social and civic interactions. Kant, of course, eschewed the idea that one could identify the domain of the “moral” by subject-matter and instead proposed that one do it on the basis of the *form* of the judgments in question — i.e. the moral ones are the ones that can be framed as categorical imperatives.

It seems to me, then, that moral psychology is almost entirely worthless. To the extent that it can actually do something, it has to presuppose all sorts of things that it cannot presuppose — that what counts as moral is settled or even that there is such a thing as morality is settled. The trouble is that they have not been and never will be, given that there is nothing in philosophy that can play the role of decisive evidence in science. The result, then, is that moral psychologists simply assume a particular moral theory — usually utilitarianism — as well as moral realism of one sort or another — and then proceed to study “how we come to make” and “how we come to have” the relevant judgments and intuitions. For the reasons I have already articulated however, this strikes me as relatively worthless.

There doesn’t have to be a science of everything. Not everything has to be amenable to reductive analysis and explanation. Ethics strikes me as one of those areas that is the least amenable to such analyses and explanations and one that really is only fruitfully explored horizontally, so to speak, rather than vertically. I know that I am in a minority on this, but I would argue that this is just because analytic philosophy has been largely occupied by science-fetishists and reductionists and this not only skews the work that is being done, but the sort of people who get hired to teach in philosophy departments and to do professional philosophical research.

dbholmes.
My study of Hitler was conducted 20 years ago. I have the text, but unless the citation’s title is in the text, I have no notes. All I can do is remark that the homosexuality of the SA leadership is well known to historians of the Third Reich. Unfortunately both homosexuality and Nazi history are so politicized that I found it impossible to find a trustworthy web citation for you among all the rants. The quotes from Hitler would have to have been remarked by survivors of the bunker, possibly in memoirs. I’d have to spend some time digging up those sources.

I apologize to DanT for going so off topic; but doing that study was a major experience in my life, and changed my thinking considerably, so I just let myself muse, which was not fair to Dan.

“To do what you suggest they do would mean that we know what moral judgments/intuitions are, or even that there are any. And that would mean that we have resolved the disputes between rival moral theories, and also between moral realists, broadly construed, and moral skeptics, broadly construed. After all, if Kant is right, rather than Mill, then certain intuitions about motives are going to count as moral intuitions, but intuitions about outcomes are not. If Mill is right, then the opposite will be the case. And if the radical moral skeptic is right — the one who thinks that morality is a fiction — then there aren’t any moral intuitions or judgments for moral psychologists to study.”

I think you are understanding the word “moral” in “moral intuitions” as a kind of success term — something counts as a moral intuition only when that intuition *in fact* tracks what is morally good or bad, and when there is *in fact* something to which it refers. On this view, there can be no such things as “moral intuitions” if, for example, there were no moral facts, for there would be nothing morally good or bad, and our alleged “moral intuitions” would really just be intuitions, but they wouldn’t be moral.

But the moral psychologists, and the philosophers in this line of work (like Jeff McMahan, mentioned above, or Walter Sinnott-Armstrong), are not using the term “moral intuition” in this way. They are not understanding the word “moral” in “moral intuitions” as a success term. Rather, they understand it as, basically, just a feeling (or thought, perhaps both) about what is right or wrong, *whether or not* the thing the intuition is about is *in fact* right or wrong. The idea is that in the Trolley Problem, we have a certain feeling about what is right or wrong, and that feeling is all they are talking about.

This could be considered analogous to color perception. Let’s assume that there are no such things as colors out there in the world. We still have color experiences (in the sense that we have an experience of red or green), and vision scientists can study how photons and signal transduction come to produce such color experiences *even if* no colors in fact exist, and even if we have no agreement on what colors are, exactly. I think that moral psychologists and the moral philosophers working in this area understand “moral intuitions” in this kind of way — as a mental state that is produced in response to certain things — and not as a success term.

This is why I didn’t find this part of your response convincing, either:

“It seems to me, then, that moral psychology is almost entirely worthless. To the extent that it can actually do something, it has to presuppose all sorts of things that it cannot presuppose — that what counts as moral is settled or even that there is such a thing as morality is settled. The trouble is that they have not been and never will be, given that there is nothing in philosophy that can play the role of decisive evidence in science. The result, then, is that moral psychologists simply assume a particular moral theory — usually utilitarianism — as well as moral realism of one sort or another — and then proceed to study “how we come to make” and “how we come to have” the relevant judgments and intuitions. For the reasons I have already articulated however, this strikes me as relatively worthless.”

The case of color perception is an apt analogy, I think. We can study how we come to have experiences of redness or blueness *without* having agreement on, or presupposing, what colors are (this is still an ongoing philosophical debate) and even if there are no such things as colors. Indeed, I think empirical work has done quite well here, given that we have some helpful restorative therapies to help people regain sight, and even color experiences.

I also have concerns about this bit of your comment because it would seem to render worthless not just moral psychology, but applied ethics too. You said that assuming a moral theory, or the falsity of moral nihilism, makes investigation into the intuitions about morality worthless. But applied ethics clearly assumes the falsity of moral nihilism, and sometimes even employs moral theories or principles (like the DDE in legal and medical ethics issues). Do these assumptions make such investigations worthless?

The reason moral intuitions, in the sense I’m talking about, are important to me is that I think (which I’m sure is disputable) they are a significant factor we rely on to construct moral theories (whether they are produced in response to actual behavior or thought experiments), and to decide what to do even when we have no moral theory. In fact, in a dialogue we did over at MoLTV, we were talking about moral intuitions in just the way I’ve described, I think. I recall you criticizing Singer’s view that we don’t need moral intuitions by saying that normative ethics couldn’t happen, and wouldn’t even make sense, if we didn’t use moral intuitions in some way or another.

In sum, you are referring to moral intuitions as a success term, while the people in the debate are using the term for a mental state about what is right or wrong, whether or not it refers to anything or is true. As such, investigating it doesn’t presuppose the truth of moral realism or of a normative theory, just as in the case of color perception. But we will always have to rely on moral intuitions in the latter sense in practical decision making and theory construction, so it seems worthwhile to investigate them.

I want to emphasize, again, though, that I do *not* think moral psychology plays any kind of dominant or determinative role in normative ethics. As I said in my essay, I consider it a “small but useful tool.” I hope people (I know you didn’t, Dan) haven’t gotten the impression from my essay that I am on the scientism bandwagon when it comes to doing ethics. I am not. But I do think that philosophers, like many natural philosophers in the past (Aristotle, Newton, Descartes, etc.), should draw upon any tool that can be helpful, which includes empirical tools. For the reasons I discussed in my essay, I think moral psychology can be one such small but useful instrument.

Oh, one more thing — even if these moral psych studies have to assume moral realism, is that really a problem? Doesn’t all scientific inquiry assume external world realism in some form or another? We still find such investigations worthwhile, right?

“Anyway, taking the piece as is, I would agree that moral psychology has a limited role in moral philosophy, but would go further to say neither fields of study does much to support (in some objective sense) normative claims.”

Just wanna make clear that I did not say that moral psychology could do anything to support moral realism. I just said it could help normative ethics. Normative ethics, by the way, doesn’t support the conclusion of moral realism either. It just tells us how we ought to act and derives principles. So normative ethics, I think, is just as irrelevant to support “objective” normative claims as moral psychology is.

“For me the best moral philosophy can do is try to elucidate the many different ways problems we face can be considered. What aspects could be thought relevant? What relative weights would we assign different aspects? Why? In the end you will have to say what you feel, and decide what you will do, and figure out what that means about yourself and others (based on what they say and/or do). It is a process of discovery about the moral character of yourself and others. From this one can try to adopt changes (to either), but no moral theory can dictate (beyond your will) that you must make changes. It comes down to a refinement of one’s character through reflection.”

I think the last part of your comment, here, suggests that you are talking about moral motivation, as opposed to moral intuition. Sure, moral theory might not dictate beyond your will that you must make changes, but the law can, and the law is sometimes motivated by moral principles (though not moral theories), like the DDE. So, doing moral principle construction, even in the abstract way normative philosophers do it, can still be relevant to moral motivation through legal policy. If moral principle construction is worthwhile, and my points about how moral psychology can (minimally) help here hold, then moral psychology aimed at indirectly assisting normative ethics is worthwhile. Or so I think.

“Moral psychology can elucidate the mechanics of how one goes about considering problems. It is wholly unsurprising, and so uninteresting to me, that people’s moral intuitions/judgments can be influenced by transient conditions. That is why serious moral consideration and action is not done in the spur of the moment, it takes some time of reflection under different conditions. More interesting to me are tricks one can use to take different perspectives, or to prevent oneself from being manipulated by others. Also, I think people like Haidt have been useful in experimentally finding and grouping common aspects people use to base moral judgment. That can help us reflect on the different systems, or choices, that moral philosophy discusses.”

I think the issue is that, even in serious moral consideration — a reflective equilibrium or something — one uses moral intuitions, and if the wording one uses in thinking about moral issues, the smells present while you’re reflecting, or the order in which you think about cases can influence moral judgment, then the issues still remain. I don’t think we can say that reflective moral reasoning *necessarily* circumvents all of the problems I outlined above with moral intuitions, especially if one doesn’t *know* when morally irrelevant factors are influencing one’s intuitions on the matter. Where I do agree with you is that we can’t simply infer that all of the problems moral psychologists are finding certainly extend into reflective moral reasoning, but I don’t think we can rule it out.

“In no case can a scientific enterprise help us understand what is “right” without falling to the “is…ought” problem, or the “naturalistic fallacy” (yes, I consider that different than what Hume was discussing).”

I mentioned in the essay that I don’t think moral psychology plays any kind of determinative role. But I think it can help us clarify our reasons for holding certain principles, as I argued in the essay.

“Lowest in utility is neuroscience. It can tell one how the brain physically processes and so facilitates ethical considerations… but what does that help us do? Except maybe it will tell us what we can do physically to our brain to get a result we want, but the current brain processes do not allow? Ok, but that will never tell us we ought to want that change (or not).”

I mentioned in the essay one way neuroscience could help (though yes, I agree the assistance is small), though I never claimed it could tell us directly what we ought to do or want. Do you take issue with what I claimed? I’d love to hear why!

Hi ej. Google Scholar finds hopefully less contentious citations eg ‘Hitler was of the opinion that Rohm’s “private life was his own affair as long as he used some discretion.” [Mosse, Nationalism and Sexuality: Respectability and Abnormal Sexuality in Modern Europe, 1985]…By 1941, Hitler ordered that “police officers who committed lewdness with another man or permitted themselves to be misused were to be given the death sentence [Oosterhuis, Medicine, Male Bonding and Homosexuality in Nazi Germany, J Contemp Hist 1997, 32:194].”’

Hi db. “why I loathe EvPsych”: Crazy, unless you believe in a blank slate and a complete discontinuity from other animals. I will allow you to be unhappy with, say, 90% of all papers in that literature 😉 Here is Darwin from 1838 indulging what I reckon is evolutionary psychology (you will know his other bee epigram about morality):

Two classes of moralists: one says our rule of life is what will produce the greatest happiness.—The other says we have a moral sense.—But my view unites both & shows them to be almost identical. What has produced the greatest good or rather what was necessary for good at all is the instinctive moral senses: (& this alone explains why our moral sense points to revenge). In judging of the rule of happiness we must look far forward & to the general action—certainly because it is the result of what has generally been best for our good far back.—(much further than we can look forward: hence our rule may sometimes be hard to tell). Society could not go on except for the moral sense, any more than a hive of Bees without their instincts.

The revenge comment is particularly interesting.

“that argument unpacked…” Simply that philosophers do make commonsensical type statements in the realm of practical ethics eg wide reflective equilibrium is better than a snap judgement. Inference: conscious and unconscious reasoning must be given time to work, ie information processing is going on. Surely further inferences can be made based on findings in cognitive neuroscience. For example,

Spanning from neurocognitive to hormonal to interpersonal levels of analysis, we identify six antecedents that increase both utilitarian and risky choices (ventromedial prefrontal cortex brain lesions, psychopathology, testosterone, incidental positive affect, power, and social connection) and one antecedent that reduces these choices (serotonin activity).

Hi Dan T, before we get to anything else we should probably clarify some things…

“I mentioned in the essay that I don’t think moral psychology plays any kind of determinative role… I mentioned in the essay one way neuroscience could help (though yes, I agree the assistance is small), though I never claimed it could tell us directly what we ought to do or want.”

I took your position to be a bit stronger than what you are saying here (as it would seem Dan K did as well). The title itself links moral psychology to normative ethics. I consider the crucial portion of normative ethics to be its delivering prescriptive statements. While it has descriptive elements in its background, that is not enough to distinguish it from, well, descriptive ethics! So right from the title I was primed with the idea that you were going to discuss its contribution (or lack thereof) to prescriptive statements.

The entire first portion of the essay only reinforced this idea:

“In some rather trivial sense, moral psychology informs normative ethics insofar as it provides us with the mechanisms that lead to moral judgments… The more controversial claim is that, in some sense, understanding how we come to make moral judgments can tell us, in a more direct way, what we ought to do. Somehow, learning about how we come to think about morality will inform us about how we should behave… Proponents of the view that moral psych can tell us what we ought to do, who we’ll call psychological ethicists, will reply by first asking what it is that we use to determine what we ought to do… Psychological ethicists then claim that some facts about how we come to have moral intuitions can tell us whether those intuitions reliably track what is moral.”

So it seemed you were setting up to argue for the controversial rather than the trivial claim. I did not see anything to suggest a reversal on this, even if you were reducing some of the scope and power of what moral psych contributed. It seemed you wanted to argue for more than the trivial point about the mechanisms that lead to moral judgments; rather, that moral psych can somehow help support a prescriptive claim. Here are more, seemingly, pointed statements:

“Thus, knowing facts about how your moral judgments are being formed has changed how you ought to behave… But I think that moral psychology can have a very different role, one that is compatible with the idea that we can trust moral intuitions, and still should use them in deciding how we ought to act. On my view, moral psychology should be treated as a small but useful tool in normative ethics… In showing how our moral judgments are formed, psychology’s power isn’t only restricted to undermining our intuitions, but also clarifying our reasons for having those intuitions. Moral psychology can help specify which principles underlie our moral intuitions.”

Those last two sentences could be read in either the trivial or the controversial sense of “reasons” and “underlie”, but given all the preceding I took you to mean the more controversial. After all, how would knowing neutral mechanisms which facilitate intuitions deliver any reason we should “trust” or “use them”?*** Keep this in mind, because I will come back to it.

As you move toward your close, the wording continues to point toward an understanding of mechanisms, accurately aligning with an envisioned moral criterion, as somehow supporting normative prescriptive claims:

“As such, moral psychology can assist in adjudicating (but not determining) which principles our moral intuitions track, and therefore justify, in this way… These contrasts should be interesting to philosophers, for they raise questions about which intuitions we should rely on in normative theorizing… While moral psychology certainly doesn’t settle normative questions, I think it can be a useful tool. It can help us determine which principles our moral intuitions actually track, and can reveal differences between in-practice and behind-the-desk intuitions, which provides philosophers with interesting intuitive contrasts that they can use in theory-construction.”

Again, “justify” and “should rely on” are very different than “are consistent with” and “better predict or describe the nature of”. Your last two sentences could be used to explain moral psych’s utility to a purely descriptive moral philosophy. That you tie it to normative ethics, and with the initial emphasis on a controversial role that could help with prescriptive claims, made me think that is what you were arguing for. I realize the last sentence starts with “certainly doesn’t settle normative questions”, but that is not the same thing as saying “contributes nothing that could help settle normative questions.”

I am arguing for the latter, which does not allow for some “middle ground” between moral psych and the most important aspect of normative ethics, namely, the ability to make prescriptive statements.

Yes, it can help distinguish between descriptive accounts, and yes, normative ethical systems have descriptive accounts within them. But that would only leave you arguing that this distinguishing helps normative ethics by allowing us to choose between two (or more) systems based on whether one descriptive account better aligns with the underlying mechanics, presumably because the one more in line with underlying mechanisms would more consistently predict what we would intuit. But how is that any more than trivial to the philosophical and practical *normative* question at hand?

***I actually believe the “causal directness” issue is a better explanation for common intuitions on things like trolley problems. It may also help explain differences between theory and real life situations, though that could also be explained by there being a difference between being morally permissible (for those who want such a thing) and morally required (for those that don’t). I think these background mechanisms are interesting issues to explore. And they are “worth it” for those interested in such questions. But they really break down when moving forward to some sort of justification of intuition or action.

A great example coming out of neuro is that we can see differences between how a socio- or psycho-path processes something like a trolley problem. We could conceivably generate excellent predictors of what they will intuit in any instance. Would that argue they are justified in running with their moral intuitions? No? Why? Just because the majority doesn’t process in the same way? We already have indications that (and how) brains of women/men, straights/gays process things differently. Should we use that same criteria we’d use against socio-paths against them? No? Why not? The list goes on.

Ultimately, the field of ethics exists because people intuit things differently in the first place, and people are unhappy when others intuit differently than they would like. What are we going to find at the psych and neuro level, other than that people intuit things differently for such and such reasons? All of the normative ethicist’s work lies ahead of them.

Dan T.: I’m afraid that your reply fails to confront the heart of the objection. Indeed, it fails even to take up the objection at all.

You say this:

“Rather, they understand it as, basically, just a feeling (or thought, perhaps both), about what is right or wrong *whether or not* the thing the intuition is about is *in fact* right or wrong. The idea is that in the Trolley Problem, we have a certain feeling about what is right or wrong, and that feeling is all they are talking about.”

———————-

But this was exactly my point. Thinking that one is having a moral thought is not the same thing as actually having one. Indeed, it was in my very first paragraph. And my further point was that knowing the causes of why *people think* something is moral, while sociologically interesting, has no bearing on the question of what *is* moral or whether *anything* is moral, which is what philosophical ethics is about.

If radical moral skeptics — those who think there literally is no such thing as morality at all — are right, then every time a person thinks they are having a moral thought, they are wrong. Not because they have the wrong moral thought, but because there can be no moral thoughts, as there is no such thing as “morality.” Now, if this is true, then of what philosophical use is the moral psychologists’ knowing why people think they have moral thoughts? None.

(This is also why the comparison with perception is inapt. There is no difference between thinking one is having a red quale and having one.)

Hi Davidlduffy, ah yes I should have been a bit less enthusiastic in my condemning “evpsych”.

I actually love the concept of the field. I was almost in a project that focused on evneuro. I sort of regret that I turned it down, and I might return to it. Absolutely fascinating.

Then there is the pseudoscientific field that (unfortunately) goes by the same name, and (even worse) seems to have swallowed up a lot of suckers inside and outside of science. There is a huge difference between what is hereditary and so “evolved,” what is allowed as behavior (social or individual), and (further down the line) what is currently seen as common behavior.

You let me know when you see an actual study that has involved careful study of genotype, brain organization, brain function, and social behavior across species (with explanations for outliers), and I’ll agree it is an actual evopsych study.

What we have now is a bunch of people jumping the gun with postulations, based on potentialities and handwaving, usually arguing to some social/political point they wanted to make.

I might agree with Darwin that in some generic way society would not go on without a moral sense, though it might also be read the other way. We are not bees and most of what we do is not innate and so not “evolved”. I disagree with his theory of revenge.

The heart of the point is that there is no way to identify subject matter as “moral” separately from any particular moral theory. Aristotle simply thought that the moral subjects were those having to do with social and civic matters, but this is much broader than the moral psychologist is interested in. Kant thought that the moral judgments were those that could be issued categorically — that is, he thought that moral subject matter could be identified by its form — but of course, this requires that he be right, and he probably isn’t.

So, without being able to identify subject matter as moral, independently of any particular moral theory, I don’t see how a psychologist could identify particular thoughts as being the moral ones, as opposed to others.

Hi Dan K, I’m agreeing to some degree with your counterargument as well.

When Dan T gave the example of color I thought that was ok on the surface, but a more relevant analogy to morality would be (for vision) “What is a good painting?” or (for eating) “What is a good meal?” There are so many different ways to judge either, and sure, science can track them all, but it can never pin down which is the correct set of criteria (outside of excluding the entirely frivolous).

Most of my last post was explaining why I thought you were trying to make a harder (“controversial”) point. If not, that is fine, and so most of that can be ignored for further discussion.

However, the last 4-5 paragraphs (including the note) are on point. So I will repeat the point here so it can be focused on, because I think it identifies an important “hole” in the argument you present.

You suggest that psych or neuro can help us adjudicate between models of how we reason morally. To a very limited degree I think this is possible (again, I sort of agree with “causal directness” being a better explanation for trolley problem judgments). Then again, that breaks down at the sub-population and individual level, because not everyone gives the same answer. This severely limits the “we” being considered, and/or puts into question the idea that one is studying *the single* moral system which humans use (as Dan K argues, some would even reject that you are studying a moral intuition).

I mentioned problems with using neuro, for example on trolley problems, between socio- and psycho-paths and others. But we can look at Haidt’s work (which has direct comparisons with Lamme’s work in neuro). He has identified different traits that people use to make moral judgments, and has discovered that one can separate liberals and conservatives (loosely) based on different weightings of those traits. So right there we see a problem for “adjudication” between moral systems. At some level we’d have to say, well, for these guys it will run this way and for those guys it will run another.

Hopefully this makes sense? Do you see that as missing your argument in some way?

“You suggest that psych or neuro can help us adjudicate between models of how we reason morally.”

I understood Dan to be saying, and I agree with him, that psych or neuro data can help us adjudicate which models better (or which combination of models, how well they do, including the need for new models, etc.) track how we reason in situations that involve what is commonly referred to as ethical questions.

“psych or neuro data can help us adjudicate which models better… track how we reason in situations that involve what is commonly referred to as ethical questions.”

I agree. Maybe I am still not making myself clear. I thought he was going for something stronger; I now understand (especially from his reply) that he was saying something not as strong (what you describe here).

I still have a problem with that, because I don’t see what that can give us. We know that people come to different answers to the same ethical questions, even if they have the same starting info. So we know (before psych or neuro) that people may not even use the same system of reasoning, which means we are going to see different processes going on. And in fact from neuro we do see that people process things differently. So that would seem to cut off tracking how “we” reason (in some generic sense), though one could say scientists can track how this person, or that group of people reason. That seems to limit contribution to only a descriptive enterprise (which can be worthwhile, but… limited).

“So that would seem to cut off tracking how “we” reason (in some generic sense)”

Change “how we reason” to “the ways we reason”

“though one could say scientists can track how this person, or that group of people reason”

Yeah, looking for patterns, commonalities between groups, and so on.

“That seems to limit contribution to only a descriptive enterprise (which can be worthwhile, but… limited)”

On the one hand, I’m not sure what you would mean by ‘more than a descriptive scientific enterprise’ in this case; on the other hand, I don’t think what Dan is talking about is ‘only’ descriptive, or limited in any negative way.
