The personal blog of Adam Nash

Predictably Irrational

For those of you who have actually clicked through the link about why I named this blog Psychohistory, you know that I’m fascinated by the ways in which the irrational (people) interact with the rational (math, technology, finance). In fact, to quote that original post:

As a software engineer, my primary interest was in human-computer interaction and the recognition that technology is useless without significant thought given to how people perceive and interact with it. As my interests shifted to the study of economics, I developed a deep fascination with the study of behavioral finance and the recognition that classic economic models fail to predict activity in many cases because people are often not rational actors.

These insights are fascinating to me because I firmly believe that in fact, there is a method to the madness. People are irrational in many situations, but in many cases predictably so.

So I named my blog after the fictional science, invented by Isaac Asimov, called Psychohistory, which claimed to predict the behavior of society by aggregating the behavior of unpredictable individuals.

Dan Ariely seems to have taken a more direct approach. He’s named his blog Predictably Irrational, and he’s launching his first book, of the same name, this month. And I have to say, I’m thinking that I should have used that name instead. 🙂

Here is a brief bio of Dan Ariely, in his own words:

Predictably Irrational is my attempt to take research findings in behavioral economics and describe them in non-academic terms so that more people will learn about this type of research, discover the excitement of this field, and possibly use some of the insights to enrich their own lives. In terms of official positions, I am the Alfred P. Sloan Professor of Behavioral Economics at MIT’s Sloan School of Management and at the Media Laboratory, a founding member of the Center for Advanced Hindsight, and a visiting professor at Duke University.

He goes on to describe one of his experiments on cheating:

Before we decide which parties are to blame, let me tell you about some experiments we recently conducted on cheating with MIT and Harvard students.

We gave a large group of students a sheet of paper with 20 simple math problems but only five minutes to solve these problems. A third of the students submitted their sheets and got paid 50 cents per correct answer. Another third were asked to tear up their worksheets, stuff the scraps into their pockets, and simply tell the experimenter their score in exchange for payment–making it possible for them to cheat. The final third were also told to tear up their worksheets and simply tell the experimenter how many questions they had answered correctly. But this time, the experimenter wouldn’t be giving them cash. Rather, she would give them a token for each question they claimed to have solved. The students would then walk 12 feet across the room to another experimenter, who would exchange each token for 50 cents.

What is the point of all of this? We had the intuition that people could easily take a pencil from work home without thinking of themselves as dishonest, but that they could not take 10¢ from a petty-cash box and feel good about themselves. In essence, we wanted to find out whether the insertion of a token into the transaction–a piece of valueless, nonmonetary currency–would affect the students’ honesty. Would the token make the students less honest in tallying their answers?

What were the results? The participants in the first group (who had no way to cheat) solved an average of 3.5 questions correctly (they were our control group). The participants in the second group, who tore up their worksheets, claimed to have correctly solved an average of 6.2 questions. Since we can assume that these students did not become smarter merely by tearing up their worksheets, we can attribute the 2.7 additional questions they claimed to have solved to cheating. But in terms of brazen dishonesty, the participants in the third group took the cake. They were no smarter than the previous two groups, but they claimed to have solved an average of 9.4 problems–5.9 more than the control group and 3.2 more than the group that merely ripped up the worksheets. This means that when given a chance to cheat under ordinary circumstances, the students cheated, on average, by 2.7 questions. But when they were given the same chance to cheat with nonmonetary currency, their cheating increased to 5.9–more than doubling in magnitude. What a difference there is in cheating for money versus cheating for something that is a step away from cash!
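The arithmetic behind those results is simple enough to check directly. Here is a small sketch (my own, not from Ariely) that recomputes the cheating deltas from the three reported averages:

```python
# Average questions claimed solved, as reported in the excerpt above
control = 3.5    # group 1: no opportunity to cheat
shredded = 6.2   # group 2: tore up worksheets, paid directly in cash
token = 9.4      # group 3: tore up worksheets, paid in tokens

# Extra questions claimed relative to the control group
cash_cheating = round(shredded - control, 1)   # cheating for cash
token_cheating = round(token - control, 1)     # cheating for tokens

print(cash_cheating)                        # 2.7
print(token_cheating)                       # 5.9
print(token_cheating > 2 * cash_cheating)   # True: more than doubled
```

Nothing deep here, but it makes the "more than doubling" claim concrete: 5.9 extra questions versus 2.7 when one small step separates the lie from the money.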

I find the implications of this fascinating, especially when extended to current thinking around executive compensation, the balance of incentives and disincentives in commerce and regulation, and even general management theory. How much of the historical “agency problem” exhibited by the misalignment of interests between management and investors might be exaggerated by this effect?

Fundamentally, there is something extremely powerful here. If it is true that humans don’t fit the classical model of rational actors, there may still be hope for creating extremely productive and efficient systems in technology and finance. If people are irrational, but in predictable patterns, then by investing time and thought into how those patterns affect behavior, we can optimize our products and services around those behaviors.

You can bet I’ll be ordering his book as soon as it is available. If you’d like, click through here to buy it on Amazon.com. I do, after all, get a marginal affiliate bonus if you order it through this site.

Coincidentally, I’m visiting MIT next week to give a speech on behalf of LinkedIn. Maybe I’ll be lucky and have a chance to meet Prof. Ariely while I’m there.