In the field of psychology, there is a particular area that has become quite visible to outsiders for its colourful use of moral dilemmas, in particular the trolley problem. This area, moral psychology, studies how people reason about moral issues, sometimes using those dilemmas to probe people's intuitions.

The recent story of moral psychology (at least, in the post-Kohlberg era) has its roots, to a large extent, in the work of Jonathan Haidt, who postulated a series of moral foundations (Care, Purity, Fairness, Authority, Loyalty, and Liberty) to explain moral beliefs. From that era comes a now classic thought experiment, asking participants what they think of a family eating their recently deceased dog, or of a brother and sister having safe, consensual sex.

For Haidt, morality is mostly a matter of emotions. In his early work he seems to have left some room for reason, but in the latest iterations of the theory (see his book The Righteous Mind), reason takes a back seat, acting merely as a spokesperson for the moral sentiments.

Haidt was also a pioneer in studying the moral judgments of libertarians: a group that is relatively small in the politics of every country, but whose members are overrepresented among philosophers.1

But reason seemed to play a role too. Enter Joshua Greene, who began publishing a series of papers (their gist is compiled in his book Moral Tribes) arguing that morality, like general reasoning, follows a two-system model: there is an automatic, "gut-feeling" based mode, and a rational, cold, evidence-based mode. This second one he considers to be linked to utilitarianism.

As a separate issue, this book opened a sub-stream within philosophy itself (see, for example, Erik Wielenberg's review of the book), on whether scientific studies like this can support or debunk moral theories. If it were indeed proved that a truly rational individual, devoid of all passions and biases, would be utilitarian (or deontological, or libertarian), doesn't that provide in itself an argument for that moral system? (And, in addition, prove that we can have a purely rational morality, contra David Hume?) Research doesn't have a conclusive answer to these questions yet.

But anyway. Some time later, dual-system morality began to attract critiques, both theoretical and methodological, most notably from Guy Kahane, one of the authors of the present paper (Kahane et al., 2015a and 2015b). Among other things, these papers show that psychopaths seem to score as highly "utilitarian", which doesn't square with the impartial concern for all sentient beings that utilitarianism defends.

In addition to that, as a matter of personal interpretation, the moral dilemmas being used to tease out utilitarianism are not fit for purpose. Consider those available in the supplemental material of Greene et al. (2004). I, not a utilitarian, score very high on those utilitarianism tests. The basis for my answers was not utilitarianism, so the tests weren't picking up what they should.

This finally leads us to the paper at hand.

The authors try to build a measure of utilitarianism that really picks up utilitarian thinking in the general population. And to a large extent they succeed, doing so with more rigor than your average paper deploys:

They first collected hundreds of candidate questions from psychology, philosophy, and their own drafting, then tortured them with an expert panel of philosophers to make sure the questions get to the core of what is known as utilitarianism, and then factor-analysed the responses of 960 participants. The resulting factor loadings were inspected to see whether any items failed to load onto any factor, dropping "weak" items and repeating the process ten times, arriving at a final model by looking at not one but six different metrics of model fit.

The end result was a two-subcomponent scale, the Oxford Utilitarianism Scale, which was itself tested for coherence in a different sample of 282 participants. Having survived this barrage of statistical analysis, they arrived at this:

Oxford Utilitarianism Scale (OUS)

Impartial Beneficence subcomponent

If the only way to save another person’s life during an emergency is to sacrifice one’s own leg, then one is morally required to make this sacrifice.

From a moral point of view, we should feel obliged to give one of our kidneys to a person with kidney failure since we do not need two kidneys to survive, but really only one to be healthy.

From a moral perspective, people should care about the well-being of all human beings on the planet equally; they should not favor the well-being of people who are especially close to them either physically or emotionally.

It is just as wrong to fail to help someone as it is to actively harm them yourself.

It is morally wrong to keep money that one doesn’t really need if one can donate it to causes that provide effective help to those who will benefit a great deal.

Instrumental Harm subcomponent

It is morally right to harm an innocent person if harming them is a necessary means to helping several other innocent people.

If the only way to ensure the overall well-being and happiness of the people is through the use of political oppression for a short, limited period, then political oppression should be used.

It is permissible to torture an innocent person if this would be necessary to provide information to prevent a bomb going off that would kill hundreds of people.

Sometimes it is morally necessary for innocent people to die as collateral damage—if more people are saved overall.
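The coherence testing mentioned above is typically done with an internal-consistency statistic such as Cronbach's alpha (the paper's exact reliability analysis isn't detailed here, so take this as a generic sketch). A minimal pure-Python version:

```python
import statistics

def cronbach_alpha(responses):
    """Internal consistency of a scale.

    responses: list of respondents, each a list of item scores.
    Alpha rises when items co-vary (respondents answer items consistently).
    """
    k = len(responses[0])
    item_vars = [statistics.pvariance([r[i] for r in responses]) for i in range(k)]
    total_var = statistics.pvariance([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Toy data: four respondents rating five items on a 1-7 agreement scale.
sample = [
    [7, 6, 7, 6, 7],
    [2, 3, 2, 3, 2],
    [5, 5, 6, 5, 5],
    [1, 2, 1, 2, 2],
]
print(round(cronbach_alpha(sample), 2))  # → 0.99, since the items track each other
```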

This scale picks up the gradient of positions that runs from Kantianism to Act Utilitarianism, with weaker forms of utilitarianism and virtue ethics in between.

The two factors they found were not strongly correlated in the validation sample (r = .14). Note that the factors were extracted assuming latent variables (factors), not principal components (which would have made them fully orthogonal by design), so the fact that they show some correlation is not a bad thing. But in a sample of philosophers, they were indeed correlated.2
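To see why the extraction method matters for that correlation, here is a quick simulation (again my own illustration, not from the paper): two mildly correlated latent traits each drive a block of items. PCA scores come out orthogonal by construction, while simple per-block composites recover the latent correlation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Two latent traits with a mild true correlation, as in the paper's lay sample.
cov = np.array([[1.0, 0.14], [0.14, 1.0]])
latent = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# Each trait drives its own block of three items, plus noise.
block_a = latent[:, [0]] * np.array([0.8, 0.7, 0.75]) + rng.normal(scale=0.5, size=(n, 3))
block_b = latent[:, [1]] * np.array([0.8, 0.7, 0.75]) + rng.normal(scale=0.5, size=(n, 3))
x = np.hstack([block_a, block_b])

# PCA scores: uncorrelated by construction, whatever the data look like.
xc = x - x.mean(axis=0)
_, _, vt = np.linalg.svd(xc, full_matrices=False)
scores = xc @ vt[:2].T
r_pca = np.corrcoef(scores, rowvar=False)[0, 1]

# Per-block sum scores (a crude stand-in for factor scores) keep the correlation.
r_factor = np.corrcoef(block_a.sum(axis=1), block_b.sum(axis=1))[0, 1]

print(f"PCA score correlation: {r_pca:.3f}")      # essentially zero
print(f"Block composite correlation: {r_factor:.2f}")  # near the true .14 (attenuated by noise)
```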

Other correlations of the scale, shown below, indicate that there is a moderate correlation between psychopathy and the instrumental harm scale, but not the impartial beneficence scale. This is what may be at play in the experiments cited above: psychopaths care less about harm, but do not endorse impartial beneficence.

Empathy had a low-to-medium correlation with the impartial beneficence scale, and a small negative correlation with the instrumental harm scale. The authors remark that this association with empathy might make utilitarianism psychologically unstable: deep feelings for others lead to impartiality, but also to reluctance to harm people in pursuit of the greater good, and vice versa.

I might add that this sounds at odds with the dual-system model of morality. The empathic concern scale (items marked as EC) asks questions like "I often have tender, concerned feelings for people less fortunate than me", "Sometimes I don't feel very sorry for other people when they are having problems", and "When I see someone being taken advantage of, I feel kind of protective towards them".

So here, feelings of empathy might be driving (the impartial beneficence component of) utilitarianism, in the same way that feelings of disgust or even anger may be behind more traditional moralities. But perhaps empathy is a proxy for something like "I believe I ought to help those less fortunate than me", a question that appeals to beliefs rather than feelings. This ought to be explored further.3

Religiosity was also, interestingly, associated with the impartial beneficence scale ("love thy neighbor..."), and two measures of political ideology (social and economic) also showed some association with the instrumental harm scale.4

Finally, what do the authors think are the next steps following the release of the scale? Running fewer trolley problems, for one, and just using the OUS to measure utilitarianism. Also, for more pragmatic ends, they recommend that utilitarians focus on the moral impartiality side of things rather than on the instrumental harm side, which makes sense.

This paper will surely accrue many citations in the coming years, and I bet it will find its way into a central position in the area. We will probably still see trolley dilemmas, but they will now come paired with a really robust and theoretically sound measure that matches the philosophical meaning of "utilitarianism".

To close this post, the paper raises one question, tangential to the paper itself, that is probably worth exploring. The paper says that

Modern-day secular morality can be seen as the gradual expansion of our circle of moral concern from those who are emotionally close, physically near, or similar to us, to cover the whole of humanity, and even all sentient life (Singer, 1981; see also Pinker, 2011). Utilitarians like Bentham, John Stuart Mill, and, in our time, Peter Singer, have played a pivotal role in this process, and in progressive causes more generally. They have been leading figures in the fights against sexism, racism, and ‘speciesism;’ influential supporters of political and sexual liberty; and key actors in attempts to eradicate poverty in developing countries as well as to encourage more permissive attitudes to prenatal screening, abortion, and euthanasia within our own societies (Bentham, 1789/1983; Mill, 1863; Singer, 2011)

The question is: did historical utilitarians have a morality closer to ours, compared to other intellectuals of their time (whoever they may be)? Surely they had a morality closer to ours compared to the average person back then, but the better yardstick for intellectual merit is other intellectuals.

And I can't help but comment that the expanding circle story, while initially plausible, is probably not correct, pace the authors, Pinker, and Singer. As Huemer (2015) says, switching from a traditional morality to a more modern one in terms of, e.g., premarital sex or capital punishment (or abortion, or women's rights more broadly) does not involve an expansion of the circle of concern.

Similarly, concern with the welfare of animals has existed for millennia (cf. Hinduism or Jainism, and even Judeo-Christianity; laws against animal mistreatment appear much later, around 1600-1800). Since then, the laws have changed as knowledge of animal biology has expanded, but it doesn't seem to me that moral values have changed much in that regard. At some point, consumption of animal products will be drastically reduced, but this won't happen because of utilitarianism or any change in moral values. It will happen because cheap, tasty substitutes will be available, and ceteris paribus people prefer to cause less suffering rather than more, a principle that goes as far back as there are written records.

1

In this framework, abstract, highly intellectualized philosophies, such as Kantian deontology, utilitarianism, or libertarianism, are oddities, to be explained by means of autism or some other deviation from neurotypicality, plus perhaps high reactance in the case of libertarians.

2

The authors say that

It is striking that while the two were only weakly associated in the lay population, they were strongly correlated in a sample of expert moral philosophers. It is unclear what explains this sharp contrast, and at this stage we can only speculate about its source. One natural explanation would be that this change may be due to philosophical education, including exposure to explicit views and arguments that tie the two moral dimensions together.

On the 2D model, individuals are likely to arrive at such an overall utilitarian view by following two distinct psychological paths. Some individuals—perhaps driven by unusually high levels of empathic concern—begin by endorsing a radically impartial vision of moral concern and, in an attempt to turn this endorsement into a coherent theory, eventually come to endorse forms of instrumental harm as well, to promote such impartial goals. Other individuals may start with greater acceptance of instrumental harm—likely driven by low levels of empathic concern—and a general rejection of traditional moral rules, and, seeking to find a systematic moral framework to replace the commonsense morality that they reject, come to endorse a sweeping impartial view.

In both cases, reasoning may serve not as the impetus to the embryonic utilitarian view, but rather as a means to integrate two aspects of utilitarianism that are psychologically independent or even opposed. Utilitarianism may be the product, not of pure rational reflection and argument, but of an attempt to bring pretheoretical tendencies and intuitions into a coherent equilibrium (along the lines suggested, in the deontological context, by Holyoak & Powell, 2016).

Further research could investigate (a) the extent to which explicit endorsement of utilitarian views involves such adjustment, (b) whether one of the two ‘starting points’ is predominant, and (c) whether the initially dominant dimension predicts the degree to which the behavior of utilitarians mirrors their theoretical commitments. One can predict, for example, that individuals who start out high only in instrumental harm give less money to charity compared with those who start out high in impartial beneficence.

An alternative explanation of the association between the two subscales in the expert sample is that the structure of the debate in current moral philosophy attracts those individuals in whom the two dimensions are already aligned...

3

The authors do acknowledge this in the conclusion:

While a unitary model of utilitarianism-as-cognitive (and deontology-as-emotional) seemed to be supported by earlier studies using sacrificial dilemmas that tied prosacrifice judgments to effortful deliberation (Greene et al., 2004), more recent work has related such judgments to reduced aversion to harming (Cushman, Gray, Gaffey, & Mendes, 2012; Kahane et al., 2015). This latter work suggests therefore that prosacrifice judgments are largely driven by reduced emotion. In line with this interpretation, in the present study we found an association between the Instrumental Harm subscale and psychopathy and reduced empathic concern.

The positive dimension of utilitarianism has also often been claimed to be based in rational reflection (de Lazari-Radek & Singer, 2012; Sidgwick, 1907)—Peter Singer’s (1972) famous argument that there is no moral difference between letting a drowning child die and refusing to donate money to prevent deaths in developing countries is based on an appeal to consistency, not on pulling at our heartstrings. As we saw, however, the present study also associates impartial beneficence with empathic concern, an affective disposition—indeed, one that is exactly the reverse of that associated with instrumental harm. At the same time, neither subscale was significantly associated with need for cognition, a trait measure of motivation to engage in effortful cognition. These results are consonant with recent studies that found that extreme altruists who donated their kidneys to strangers exhibit higher empathic concern (Brethel-Haurwitz et al., 2016).

4

What they call economic conservatism. Free market economics is not conservative at all! If anything it is radical and progressive!

Comments from WordPress

Kabelo 2017-12-29T13:01:50Z

Could you please elaborate, or point me towards books to read to substantiate the view that free market economics [may be] radical and progressive.

Many things espoused by free marketeers are radical [departures from the status quo]: things ranging from abolishing the minimum wage, open borders, or privatising education, to a full-on minimal state.

In comparison, self-defined progressives seem to be making tweaks to what already exists.
