The Messy Reality Of Judicial Decisions

How do we make moral and ethical decisions? It would be nice if our choices were grounded in nothing but the facts, in the details of the issue at hand. Alas, that’s not the way it works. Jonathan Haidt, a psychologist at the University of Virginia, has famously argued that our moral judgments are like aesthetic judgments. “When you see a painting, you usually know instantly and automatically whether you like it,” Haidt writes. “If someone asks you to explain your judgment, you confabulate…Moral arguments are much the same: Two people feel strongly about an issue, their feelings come first, and their reasons are invented on the fly, to throw at each other.” In other words, when it comes to making ethical decisions, our rationality isn’t a scientist, dispassionately chasing after the facts. Instead, it’s a lawyer. This inner attorney gathers bits of evidence, post hoc justifications, and pithy rhetoric in order to make our automatic reaction seem reasonable. But this reasonableness is just a façade, an elaborate self-delusion. Benjamin Franklin said it best: “It is so wonderful to be a rational animal, that there is a reason for everything that one does.”

Ed Yong, over at the always excellent Not Exactly Rocket Science, covers a fascinating new PNAS paper that exposes this fraught mental process in judges overseeing parole hearings:

The graph above is…the work of Shai Danziger from Ben Gurion University of the Negev, and summarises the results of 1,112 parole board hearings in Israeli prisons, over a ten month period. The vertical axis is the proportion of cases where the judges granted parole. The horizontal axis shows the order in which the cases were heard during the day. And the dotted lines, they represent the points where the judges went away for a morning snack and their lunch break.

The graph is dramatic. It shows that the odds that prisoners will be successfully paroled start off fairly high at around 65% and quickly plummet to nothing over a few hours (although, see footnote). After the judges have returned from their breaks, the odds abruptly climb back up to 65%, before resuming their downward slide. A prisoner’s fate could hinge upon the point in the day when their case is heard.

According to Danziger, the effect is driven by a well-demonstrated human foible: a tired brain is much more likely to opt for the default choice. (It takes extra mental effort to break from the status quo.) For these judges, the default choice is denial of parole. And so, over the course of their long day, as they hear one case after another, the judges become more reliant on the easiest decision-making alternative, the path of least cognitive resistance. Because it takes extra work to come up with reasons why a prisoner deserves parole, the judges are less likely to do it; their first impressions settle the case. (Similarly, people become less likely to enjoy “aesthetically challenging” works of art after completing a series of difficult puzzles.) The details of the case can’t compete with a worn-down prefrontal cortex.

This isn’t the first time psychologists have documented the effect of human biases and flaws on judicial decisions. In 1989, Sheldon Solomon, a psychologist at Skidmore College, conducted a fascinating experiment on 22 municipal judges in Tucson, Arizona. He told the judges that he was studying the relationship between personality traits and bail decisions. However, Solomon slyly inserted into the middle of the personality questionnaire a question designed to evoke awareness of mortality, so that half of the judges were forced to contemplate “the emotions that the thought of your own death arouses in you.” After the judges completed the questionnaire, Solomon asked them to set bail in the hypothetical case of a woman charged with prostitution. The control group of judges, who were not prompted to think about their own death, set bail at around $50, an amount that conformed to the legal guidelines of Arizona. The judges who reflected on their mortality first, however, were much more punitive: their average bail was $455.

According to Solomon, the fleeting thought of death made the judges more conservative, more willing to punish those who “violate cultural values.” This is known as mortality salience and it’s been demonstrated in a number of different domains. In general, making people think about death makes their moral judgments more extreme: We react more positively to those who “share cherished aspects of one’s worldview” and more negatively to those who do not. (In subsequent studies, Solomon has shown that priming people with thoughts of 9/11 also makes them far more likely to vote for Republicans.) This suggests that, if you’re involved in a case that reminds people of death and dying, you definitely don’t want to appear before a judge who hasn’t eaten lunch.

There is, of course, nothing particularly surprising about these results. Judges are human beings, their choices shaped by all the usual feelings and flaws. (It would be remarkable if their moral decisions weren’t shaped by these forces!) Nevertheless, it’s imperative that we make judges aware of these tendencies, so that they can take steps to reduce their effects. Our moral decisions will always be shaped by our emotions and instincts, but that doesn’t mean they must also depend on whether or not we need a break. Some feelings are more relevant than others. And this is why judges should always interrogate their intuitions and question that inner lawyer. Does this prisoner really deserve to have his parole request rejected? Or does the judge just feel that way because he needs a snack?