What we found was that if you enhance people's serotonin function, it makes them more deontological in their judgments. We had some ideas for why this might be the case, to do with serotonin and aversive processing—the way we deal with losses.

It certainly seems as if the internet renders everyone more deontological than consequentialist. Perhaps it's the performative aspect of publishing on that stage, the norm of using those networks for signaling, that leads everyone up the steps to their high horses.

CROCKETT: Yeah. Indignation, or a retaliative desire to punish wrongdoing, is the product of a much less deliberative system. We have some data where we gave people the opportunity to punish by inflicting a financial cost on someone who treated them unfairly or fairly, and we varied whether the person who was going to get punished would find out if they'd been punished or not. We were able to do this by making the size of the overall pie ambiguous.

If people are punishing others in order to enforce a social norm and teach a lesson—I'm punishing you because I think you've done something wrong and I don't want you to do this again—then people should only invest in punishment if the person who's being punished finds out they've been punished. If punishment is rational and forward‑looking in this way, it's not worth it to punish when the person isn't going to find out they've been punished.

This is not what we find at all. In fact, people punish almost the same amount, even when the target of punishment will never find out that they've been punished. This would suggest that punishment—revenge, a desire for retaliation—is a knee-jerk reaction, a retrospective evaluation of the harm, rather than a goal‑directed deliberative desire to promote the greater good.

More:

We've done experiments where we give people the option to play a cooperative game with someone who endorses deontological morality, who says there are some rules that you just can't break even if they have good consequences. We compare that to someone who's consequentialist, who says that there are certain circumstances in which it is okay to harm one person if that will have better consequences. The average person would much rather interact with and trust a person who advocates sticking to moral rules. This is interesting because it suggests that, in addition to the cognitive efficiency you get by having a heuristic for morality, it can also give you social benefits.

One of my sisters won a school auction prize in which my niece Lyla, who is six, could have a special lunch with her teacher. She had the opportunity to invite one friend, and rather than choose her best friend, she chose the girl in the class with special needs. This girl can't speak, so Lyla spent the day communicating with the girl through an iPad. I saw the photos and was so moved that she would be so generous at such a young age.

In a family email thread, one of my brothers joked that we needed to send around photos of all my other nieces and nephews (nine in total!) doing noble things as confirmation they weren't all monsters. Smiley emojis all around, but there's an element of that on the web now. If something grave happens in the world and you haven't heard of it, god forbid you post something humorous at that moment lest ye be judged summarily and without mercy by the humorless scolds on the internet.

But observe enough trolls on the internet and you see why signaling your virtue, or your deontological cred, might be critical to passing through the morality filters that maintain the norms of good decorum in whatever internet space you're traveling through.

KAHNEMAN: The benefit that people get from taking a deontological position is that they look more trustworthy. Let's look at the other side of this. If I take a consequentialist position, it means that you can't trust me because, under some circumstances, I might decide to break the rule in my interaction with you. I was puzzled when I was looking at this. What is the essence of what is going on here? Is it deontology or trustworthiness? It doesn't seem to be the same to say we are wired to like people who take a deontological position, or we are wired to like people who are trustworthy. Which of these two is it?

CROCKETT: What the work suggests is that we infer how trustworthy someone is going to be by observing the kinds of judgments and decisions that they make. If I'm interacting with you, I can't get inside your head. I don't know what your utility function looks like. But I can infer what that utility function is by the things that you say and do.

This is one of the most important things that we do as humans. I've become increasingly interested in how we build mental models of other people's preferences and beliefs and how we make inferences about what those are, based on observables. We infer how trustworthy someone is going to be based on their condemnation of wrongdoing and their advocating a hard-and-fast morality over one that's more flexible.