How We Know What Isn't So

Some observations on cognitive bias drawn from the book How We Know What
Isn't So by Thomas Gilovich.

Introduction

We have many beliefs that arent true. Examples:

Infertile couples who adopt are subsequently more likely to conceive

Admissions people believe that they can make better decisions using a personal interview

Nurses believe that more babies are born when the moon is full

Why do they believe these things if they aren't true?

Today, more people believe in ESP than in evolution. In the US, there are 20 times more
astrologers than astronomers.

It is not simply a lack of exposure to evidence: people hold these beliefs in spite of
the evidence. Nor is it stupidity. Probably the opposite: evolution has given us brains
that can process huge amounts of information using a variety of simple cognitive and
perceptual processes. These processes are a great strength, but they are also the cause of
some of our biggest follies.

Some of these cognitive principles:

Tendency to see regularity and pattern in random events

Inability to detect and correct for biases introduced by incomplete or unrepresentative
data

Tendency to interpret ambiguous data in the light of our pet theories and a priori
expectations

Some of the motivational and social determinants of false beliefs:

Wishful thinking and self-serving distortions of reality

Distortions introduced by summarizing and the need to entertain

Overestimating the extent that others share our beliefs

Why do we care about erroneous beliefs? They are the reason behind some of
humanity's most egregious and senseless acts, such as

Burning "witches"

Exterminating black rhinos (the horns are supposed to have curative powers)

Little children killed because of strange parental beliefs: Rhea Sullins, 7 years old,
became ill, so her father put her on a water-only diet for 18 days, followed by juice only
for another 17. She died of malnutrition.

While caring for his critically ill son, a man thinks he sees the face of Jesus in the
wood grain of the hospital floor. Hundreds now visit the site each year and confirm the
miraculous likeness.

The ability to spot real patterns is the key to human success. We can exploit
regularities we observe in nature to build technology. So the tendency to see pattern is
evolutionarily adaptive.

A folk theory: in basketball, success leads to success. Getting a basket gives you
confidence, and this helps you get the next basket. And so on. The result is hot streaks.
Virtually everyone believes this, including coaches and the players themselves. But the
data contradict this. (see page 13 of Gilovich).

Of course, it could be that some other process is masking the effects of the "hot
hand", such as a player who is hot drawing extra coverage from the defense. But even if
you examine controlled situations like free throws, you see that the probability of making
a basket is the same regardless of the success or failure of the previous shot.

Or it could be that the essence of the hot hand is not success but predictability:
players know whether the next shot will hit or miss. So this was tested experimentally.
But players' predictions were not correlated with the outcomes of their shots at all.

So why do we believe in hot streaks when they don't really happen? One reason is
that people have faulty images of what chance sequences look like. People expect that a
coin tossed many times will more or less alternate heads and tails. If there are runs
of 4 or 5 heads in a row, they think there is something non-random going on. But in fact
such runs are quite common. For example, a sequence like OXXXOXXXOXXOOOXOOXXOO looks
non-random, but it is.

It looks non-random because there are so many clusters, and we don't expect
clusters in random data. Why not? Because of the representativeness heuristic that
people seem to use to think with. We evaluate whether an outcome is likely based on
how similar it is, on a few simple features, to what we would expect given the cause. For
example, we will believe that someone is a librarian if they look bookish: they are
representative of the category. The salient feature of independent events like coin tosses
is that in the long run, we expect the two outcomes to occur an equal number of times:
50-50. But this holds only in the long run. Yet we expect the same result in the short run
as well, so if a given sequence has 9 heads and 1 tail, we think there is something wrong
with it.
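The claim about runs is easy to check with a quick simulation (a sketch; the function names are my own): most sequences of 20 fair coin flips contain a run of four or more identical outcomes.

```python
import random

def max_run(seq):
    """Length of the longest run of identical outcomes in seq."""
    best = run = 1
    for a, b in zip(seq, seq[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

def fraction_with_long_run(n_flips=20, run_len=4, trials=20000, seed=0):
    """Fraction of random coin-flip sequences that contain a run of
    run_len or more heads (or tails) in a row."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        flips = [rng.randint(0, 1) for _ in range(n_flips)]
        if max_run(flips) >= run_len:
            hits += 1
    return hits / trials

print(fraction_with_long_run())  # about 0.77: long runs are the norm, not a sign of pattern
```

So the "suspicious" clusters people point to are exactly what chance produces.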

There are lots of phenomena that are like this:

the ups and downs of the stock market, witnessed by traders

the sequences of boy and girl births in a hospital, witnessed by hospital workers

Both areas are filled with folk theories governing the outcomes. In births, there are
theories involving the phases of the moon. In stocks, there are dozens of strange
theories, like the hemline theory and the Super Bowl theory.

Even statistical analysis does not always help dispel the illusion. With hindsight, we
can always pick the most unusual features of the data and build an analysis around them.
An example is shown on pg 20: the pattern of bombs dropped on London. By choosing the
right quadrants, we can make it look non-random.

After-the-Fact, Ad Hoc Explanations

It is easy to create a story that justifies an outcome. Experiments with split-brain
patients show this vividly. The right hemisphere is made to choose something based on a
stimulus presented to the right hemisphere only. The left hemisphere is then asked why
that choice was made. There is never any hesitation: it makes up a reason instantly.

Regression

Whenever two variables are imperfectly correlated, extreme values of one variable are
matched, on average, by less extreme values of the other. We have trouble internalizing
this. So in life, we tend to assume that extraordinary performance in one year will be
matched by extraordinary performance the next year, but this is rarely the case. This
affects how we buy mutual funds and other stocks, how we hire people, lend money to
businesses, etc.

If I tell you that someone is in the 90th percentile of sense of humor, you
tend to predict that their GPA will also be in the 90th percentile. Yet, if the
correlation between sense of humor and GPA is near zero, your best guess for their GPA
would be the 50th percentile.
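The percentile logic follows the standard linear-regression rule: the best guess for the outcome's z-score is the correlation times the predictor's z-score. A sketch, assuming both variables are normally distributed (the function name is my own):

```python
from statistics import NormalDist

def regression_prediction(predictor_percentile, r):
    """Best linear prediction of the outcome percentile, given the
    predictor's percentile and the correlation r between the two
    (both variables assumed standard normal)."""
    nd = NormalDist()
    z_x = nd.inv_cdf(predictor_percentile / 100)
    z_y = r * z_x          # the predicted z-score regresses toward 0
    return 100 * nd.cdf(z_y)

print(round(regression_prediction(90, 0.0)))  # 50: zero correlation, guess the mean
print(round(regression_prediction(90, 0.5)))  # 74: partial regression to the mean
print(round(regression_prediction(90, 1.0)))  # 90: only perfect correlation preserves the extreme
```

Only when r = 1 does an extreme predictor justify an equally extreme prediction.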

Instead of recognizing regression effects, we tend to invent causal interpretations. If
someone who scored very high before scores less high now, we think they got overconfident,
or slacked off, or were resting on their laurels, etc.

This may be why most people, parents included, use punishment more often than reward, even
though psychological research suggests that reward works better. We give rewards when
someone has done something extraordinarily well. Then, of course, they don't do as
well the next time, so we think the reward was not effective. In contrast, we give
punishments when someone really screws up. And of course, the next time, they don't
screw up as much, so we think the punishment was effective. But it was just the regression
effect: the imperfect correlation between successive performances.

An experiment of this kind was done by having a teacher deal with a student's
lateness. A computer showed the teacher what time the kid arrived each day, and each day
the teacher could either issue praise, punishment, or no comment. After several
"days" of this, the teacher was asked which seemed to be most effective,
punishment or reward. Most felt that punishment was more effective. What they didn't
know was that the student's arrival time was pre-programmed and completely unrelated to
what the teacher did.
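The experiment is easy to reproduce in simulation (a sketch with made-up numbers: arrival noise with a 5-minute spread, praise/punish thresholds at plus or minus 5 minutes): even though arrivals are pure noise, the day after punishment looks better and the day after praise looks worse.

```python
import random

def lateness_illusion(days=100_000, seed=2):
    """Arrival times are pure noise, yet lateness improves after
    punishment and worsens after praise: the regression effect."""
    rng = random.Random(seed)
    times = [rng.gauss(0, 5) for _ in range(days)]   # minutes late, mean 0
    after_praise, after_punish = [], []
    for today, tomorrow in zip(times, times[1:]):
        if today < -5:                               # unusually early: praise
            after_praise.append(tomorrow - today)    # change in lateness
        elif today > 5:                              # unusually late: punish
            after_punish.append(tomorrow - today)
    return (sum(after_praise) / len(after_praise),
            sum(after_punish) / len(after_punish))

change_after_praise, change_after_punish = lateness_illusion()
print(round(change_after_praise, 1), round(change_after_punish, 1))
# lateness rises after praise and falls after punishment, even though
# nothing the "teacher" did had any effect at all
```

A teacher watching these numbers day by day would naturally conclude that punishment works.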

Coda

We can combine the clustering illusion with the regression fallacy: we see streaks or
clumps of events, and then interpret their subsequent absence causally. An example is the
episode in Israel described on pg 28. A flurry of deaths due to natural causes
in the northern part of Israel led to speculation of some new threat. A group of rabbis
attributed the problem to the sacrilege of allowing women to attend funerals, which was
previously forbidden. So they decreed that women could not attend funerals anymore. Soon,
the rash of deaths subsided, and this was taken as confirmation that the remedy was
effective!

In other words, simple features of human cognition can account for major beliefs such
as the proper role of women in society.

Incomplete and Unrepresentative Data

"I know someone who cured themselves of cancer through positive thinking."
"Of course there is a sophomore slump  you see it all the time."

If something is true, then there should be some
evidence of it. But the existence of some instances does not prove the general case. If
mixing a package of Pop Rocks with Coke will kill you, then there should be some cases of
people who did that and died immediately after. But just because somebody eats Pop Rocks
with Coke and dies immediately after doesn't mean that there is any connection
between the two!

We tend to believe in things because we saw them happen once. We also tend to give too
much weight to evidence that supports the belief, while ignoring evidence that disconfirms
it.

Take the claim that "African-Americans like Volvos". If you believe that, you
tend to find supporting evidence everywhere: every time you see an African-American
driving a Volvo you think "see, I told you!". But you don't really notice Volvos
driven by whites, or other cars (not Volvos) driven by African-Americans.

                     Volvo                               Other Car
African American     (a) you notice these the most!      (b) these are also seen
Other person         (c) these you ignore as irrelevant  (d) you may notice these

Yet all the cells are needed to evaluate the claim. Compare this table:

                     Volvo    Other Car
African American     100      10
Other person         10       100

With this one:

                     Volvo    Other Car
African American     100      10
Other person         10       1

If you focus only on the (a) and (b) cells, the two situations look the same. But only
the first table shows an actual relationship between race and car manufacturer. The second
table provides clear evidence that there is NO relationship between race and car
manufacturer. (Just convert the data to percentages and you'll see.)
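The percentage check is one line of arithmetic per row; a small sketch (the function name is my own):

```python
def volvo_rate(volvo, other):
    """Percentage of a group's cars that are Volvos."""
    return 100 * volvo / (volvo + other)

# First table: the rates differ sharply, a real association
print(volvo_rate(100, 10), volvo_rate(10, 100))  # ~90.9% vs ~9.1%

# Second table: the rates are identical, so no association at all
print(volvo_rate(100, 10), volvo_rate(10, 1))    # ~90.9% vs ~90.9%
```

The (c) and (d) cells are exactly what make the two tables come apart.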

Similarly, if you are trying to evaluate whether seeding clouds is effective in making
rain, you tend to look at only the (a) cases. These are unambiguously relevant. But the
(c) cases tend to be ignored because they are ambiguous: they don't clearly speak to
the issue.

              Rains                                             Doesn't Rain
Seed clouds   (a) you are persuaded by many cases of this type
Don't seed    (c) ambiguous, so ignored

It is particularly difficult to deal with variables in which one category signifies the
absence of the other, as in "rains" and "not rains". We have
difficulty working with the "not rains" category. In contrast, we do better with
"Male" and "female".

We generally favor the positive. John Holt played 20 questions with kids. He would say,
"I'm thinking of a number between 1 and 10,000. You have twenty yes/no questions to
guess the answer." So someone would ask "is the number less than 5,000?" and if
the answer was yes, the kids would cheer. If it was no, they would groan. But the answer
is just as useful no matter how it comes out!
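Holt's point, that a "no" is exactly as informative as a "yes", is the logic of binary search: either answer halves the remaining range. A sketch (the function name is my own):

```python
def questions_needed(lo, hi, secret):
    """Count the yes/no questions binary search needs to pin down
    secret in [lo, hi]. A "no" narrows the range exactly as much
    as a "yes" does."""
    count = 0
    while lo < hi:
        mid = (lo + hi) // 2
        count += 1
        if secret <= mid:      # answer to "is it <= mid?"
            hi = mid           # "yes" keeps the lower half
        else:
            lo = mid + 1       # "no" keeps the upper half
    return count

# Any number from 1 to 10,000 falls in at most 14 questions, well
# under the 20 allowed, whether the answers come up yes or no.
print(max(questions_needed(1, 10000, s) for s in (1, 5000, 9999, 10000)))  # 14
```

Cheering for "yes" makes no more sense than cheering when a coin lands heads.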

Suppose your job is to figure out what questions to ask a respondent to determine
whether they are an extrovert. Most people ask things like "what would you do to
liven up a party?" They are thinking about behaviors that extroverts do. They
don't think to ask "do you like to curl up in front of a fire and read a
novel?", even though a yes answer would help to determine that the person is not an
extrovert.

If asked to determine whether the respondent is an introvert, they ask about
introverted behaviors.

Censored Data

Suppose we evaluate whether students with high SAT scores really do perform better in
college. We can look at the SAT scores of good and bad students in colleges, but notice an
important point: students with really bad SAT scores don't get into college at all.
So you really can't examine disconfirming cases, like students with bad SAT scores
who do well in college: they are artificially removed from the game. Similarly, students
with really bad GPAs, who might have had good SAT scores (contrary to the hypothesis),
are bounced out and again are not available in your sample. Also, students with different
SAT scores don't go to the same schools, so it is hard to assess the effect of SAT on
GPA alone.
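This range-restriction effect is easy to demonstrate with simulated data (a sketch with made-up numbers: a true SAT-GPA correlation of about 0.6 and a hypothetical admissions cutoff on the SAT score): computing the correlation only among "admitted" students sharply understates the true relationship.

```python
import random

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(3)
# Hypothetical applicant pool: GPA genuinely tracks SAT (true r ~ 0.6)
sat = [rng.gauss(0, 1) for _ in range(50_000)]
gpa = [0.6 * s + 0.8 * rng.gauss(0, 1) for s in sat]

# Censoring: only applicants above the SAT cutoff ever show up in college data
admitted = [(s, g) for s, g in zip(sat, gpa) if s > 0.5]
r_all = correlation(sat, gpa)
r_admitted = correlation(*zip(*admitted))
print(round(r_all, 2), round(r_admitted, 2))  # the admitted-only r is much smaller
```

Looking only at enrolled students, one could wrongly conclude the SAT barely predicts college performance.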