Wednesday, October 15, 2014

Fiery Cushman and I have a new paper in draft, exploring the question of whether professional philosophers' judgments about moral dilemmas are less influenced than non-philosophers' by factors such as order of presentation and phrasing differences.

We presented the trolley problems either with a Switch case first (the protagonist saves five people by Switching a runaway trolley onto a side track where it kills one), followed by a Push or Drop case (saving five by Pushing one person into the trolley's path or by Dropping him into its path); or Push/Drop first, followed by Switch. For each scenario, participants rated the protagonist's choice to kill the one person to save the five others, using a 7-point scale from "extremely morally good" to "extremely morally bad".
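One natural way to operationalize "rating two cases equivalently" on a scale like this is simply to count a participant as equivalent when the two 7-point ratings are identical. A minimal sketch in Python, with illustrative data and function names (not the actual analysis code from the study):

```python
def equivalence_rate(pairs):
    """Fraction of participants who gave both scenarios in a pair
    identical ratings on the 1-7 scale."""
    same = sum(1 for first_rating, second_rating in pairs if first_rating == second_rating)
    return same / len(pairs)

# Illustrative (made-up) data: (Switch rating, Push rating) per participant
switch_first = [(3, 5), (4, 4), (2, 6), (4, 5)]
push_first   = [(4, 4), (5, 5), (3, 3), (5, 6)]

print(equivalence_rate(switch_first))  # 0.25
print(equivalence_rate(push_first))    # 0.75
```

An order effect on this measure, as discussed below, would show up as a difference between the two rates depending on which scenario participants saw first.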

Our previous research suggests that non-philosophers are much more likely to judge the Push case and the Switch case equivalently when Push is presented first than when Switch is presented first. On some views of philosophical expertise, philosophers' judgments about these cases should be less dependent on order of presentation than are non-philosophers' judgments. In other words, philosophers, due to their familiarity with scenarios of this type and their expertise in applying moral principles to them, should have more stable opinions, less influenced by order of presentation. We wanted to see if philosophers with prior familiarity with the cases, or self-reported expertise in the area, or self-reported stability of opinion, would show smaller order effects. We also wanted to see if we could reduce order effects by enforcing a delay before responding during which we encouraged participants to reflect carefully on different versions of the scenario and different ways of phrasing the scenarios.

We were unable to find any level of expertise at which the order effects were detectably reduced. Nor did adding a reflection condition appear to reduce the order effects. This figure shows the rates at which Switch was rated as morally equivalent to either Drop or Push on the 7-point scale:

[figure]

The order effect, as indicated by the differences in height between the black and gray bars, is basically the same for philosophers and non-philosophers, even in the "reflection" condition.

This next figure breaks the results down by degree of self-reported expertise among philosopher respondents:

[figure]

Note that at no level of expertise does the order effect appear to be reduced: not among philosophers reporting being professors with a specialization in ethics, nor among philosophers reporting having a "stable opinion" about trolley problems of this sort. If anything, the trend appears to be toward larger order effects with increasing expertise.

Some of the most famous results in psychology are the Tversky-Kahneman "loss aversion" framing effects. Participants are asked to imagine that an unusual disease will kill 600 people if nothing is done and then given a choice between two programs: On Program A, 200 people will be saved. On Program B, there's a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved. When the decision is framed this way, in terms of the number "saved", most people favor the non-risky Program A. When what are (purportedly) the exact same options are presented in terms of how many will die (400 will die vs. one-third probability that none will die and two-thirds probability that 600 will die), respondents tend to favor the risky Program B.
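The two frames really do describe the same options: a quick check (a sketch in Python, with illustrative variable names) confirms that the expected number of survivors is identical across programs and across frames, so any preference reversal is driven by the wording alone.

```python
TOTAL = 600  # people at risk from the disease

# Save frame
program_a_save = 200                      # 200 saved for certain
program_b_save = (1/3) * 600 + (2/3) * 0  # gamble: save everyone or no one

# Die frame, restated as expected survivors (600 minus expected deaths)
program_a_die = TOTAL - 400                        # 400 die for certain
program_b_die = TOTAL - ((1/3) * 0 + (2/3) * 600)  # gamble: no one dies or everyone dies

print(program_a_save, program_b_save)  # 200 200.0
print(program_a_die, program_b_die)    # 200 200.0
```

All four expected values come out to 200 survivors, which is why the save-frame and die-frame versions are purportedly the exact same choice.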

The results:
Percentage of philosopher respondents recommending the risky Program B, by framing, level of expertise, and order of presentation:

[figure]

As is evident from the figure, our philosopher respondents showed very large framing effects (similar to those of our comparison group and similar to the effect sizes seen in other studies with non-expert populations) -- again up to very high levels of expertise, including self-reported expertise on framing effects and self-reported stability of opinion about framing effects. To see this, look just at the black bars above, ignoring the gray bars.

Philosophers also showed large order effects when they were presented with two slightly different framing-effect scenarios, either die-frame followed by save-frame or save-frame followed by die-frame. To see this, compare the adjacent pairs of black and gray bars above.

16 comments:

At the end of your draft article you write: "We wouldn't expect, for example, that Judith Jarvis Thomson (1985) would rate Push and Switch equivalently if the scenarios are presented in one order and inequivalently if they are presented in another order." It seems to me that J. J. Thomson may actually have fallen prey to a bias, since in her 2008 article "Turning the Trolley" (Philosophy & Public Affairs 36, no. 4) she changed her moral judgment (after more than 20 years of thinking about it!) and concluded that it is not in fact permissible to turn the trolley in the Switch scenario. She got there by considering a "framing" with a third option of turning the trolley onto herself, which made her realize that by switching the trolley we kill one (a negative duty) while by not switching we merely let five die. It is a different kind of framing, but I still find her change of "heart" very interesting.

Mathieu: It's an interesting thought -- and yet I would also emphasize that it's one thing to change your mind after years of hard thought and quite another to shift your reaction based on short-term features of the presentation. I doubt Thomson would do that, for these cases!

It's kind of off topic, but I wonder if the Push makes it more personal. And not just that: if you could push them and you advocate doing so, you're close enough to be pushed yourself. What if they push you -- are you then advocating your own demise?

But if you ran a study on it, you must consider push and switch as somehow dissimilar yourselves - how would you describe the difference, Eric?

Callan -- part of the set up is supposed to be that the hiker's weight and backpack make him heavy enough to stop the trolley, though you aren't. (How you know this is left unsaid.) But as Mathieu says, there's also an interesting version in Thomson 2008 where you have the choice of sacrificing yourself, sacrificing someone else, or letting the trolley kill the five.

Fiery argues in other work -- plausibly, I think -- that an important factor influencing people's condemnation of Push is the idea that it's worse to harm by physical contact than to do the same harm without physical contact, though if you ask people specifically about that, they deny that moral judgments should be influenced in that way.

Great study! I think there aren't many options left for proponents of the expertise defense to explain your results in a friendly way. But maybe one could raise the issue whether and how order and framing effects actually affect people's considered views about e.g. trolley-type cases - i.e., the kinds of views people would actually endorse in their published work. For example, what about testing, for each individual rating, how confident people are about their rating? Based on that, one could check whether there is a significant interaction of low confidence with susceptibility to order and framing effects. And if there is such an interaction, one could argue that, although even experts are susceptible to these effects, they might still be less likely to endorse the relevant verdicts as their considered views - and this is what we ultimately care about in philosophical practice (I'm kind of blending themes from the work of Jennifer Wright and Regina Rini here).

Joachim -- yeah, a confidence rating would be interesting. But it's not straightforward, I think, what the relationship is between confidence and one's considered view. For example, further consideration might decrease rather than increase confidence. We were trying to get at considered views by asking about "stability" and (in a different way) by including a reflection condition, but neither attempt seemed to reduce the effect. I do think there are still some things to try. I don't think that Wright/Rini-style possibilities have been exhausted yet. And there must be *some* level of expertise at which the effects largely disappear. As we mention at the end of the paper, it's hard to imagine that Judith Jarvis Thomson's or John Martin Fischer's equivalency judgments would be much influenced by order.


That's really kind of interesting - I'd pitch the idea that you have two interests occurring. Perhaps one could be called emotion, and it doesn't want to push someone to their death (understandably). But then the other interest could be called 'intellect', and it has to be seen to be consistent, so if you pitch the scenarios in a particular order, intellect will insist on being consistent between the two (with academics further affected by the intellect's consistency urge?). Pitch them in the other order and emotion gets in first, and intellect isn't fast enough to realize this will cause a later inconsistency.

I wonder what else could be done to try and separate the two hypothesized interests? In an attempt to show them as separate things, anyway?

Eric: In light of your results, it actually strikes me as quite optimistic to expect that order effects must disappear at some level of expertise. At any rate, in studies with other world-class experts, e.g. Olympic gymnastics judges (Damisch, L., Mussweiler, T., & Plessner, H. (2006). Olympic medals as fruits of comparison? Assimilation and contrast in sequential performance judgments. Journal of Experimental Psychology: Applied, 12, 166-178), order effects didn't disappear. And I'm not sure how helpful it would be for the practice of philosophy at large if only a handful of super-duper-experts like Thomson or Fischer were resistant to such effects...

Anon, I agree it would be interesting to try to extend it to other areas. Of course, there is a large literature on expertise and what it does and doesn't get you, including a bit on framing effects and loss aversion (though not, to my knowledge, at the high expertise levels Fiery and I test); but it hasn't been done with trolley-like moral dilemmas that I'm aware of.

When considering moral dilemmas of this sort, are people given explicit instructions to ignore the consequences of their actions for them?

For example, we might (however consciously) consider how our actions will be interpreted by others, so the Push is more likely to involve future imprisonment. I wonder if we can effectively suppress our sense of how acts are viewed by others when asked to make these kinds of judgements.

That is, when answering these questions publicly - since we are reporting our answer to someone - are we "picking our nose in public" so to speak if we express a willingness to Push?

Ironically, I'd think not, since people tend to think the law, as practiced, will follow the moral direction they take: 'It had to be done!'

I don't know how to prove that, but it might be interesting to run 'Push' while mentioning to the subjects that the law might take its own, very different interpretation of events -- and with that different interpretation, a potentially hefty jail term. And of course a control group where you don't mention that idea to subjects.