Sunday, October 23, 2011

The illusion of validity

Surprise! There's a book coming out on this.

A fascinating article by a Nobel-prize-winning psychologist in the New York Times magazine:

I thought that what was happening to us was remarkable. The statistical
evidence of our failure should have shaken our confidence in our
judgments of particular candidates, but it did not. It should also have
caused us to moderate our predictions, but it did not. We knew as a
general fact that our predictions were little better than random
guesses, but we continued to feel and act as if each particular
prediction was valid. I was reminded of visual illusions, which remain
compelling even when you know that what you see is false. I was so
struck by the analogy that I coined a term for our experience: the
illusion of validity.

There is a lot here for everyone. Of course, it is easier to spot the illusion of validity in our opponents than in ourselves. Some will find in this concept a general critique of climate science, but to do so you must ignore the fact that climate science has made predictions which are quite a bit better than chance:

[Graph: Hansen's temperature projections compared with the observed temperature record]

Yet it should concern all of us that through the act of telling a story, we become more confident that that story is true.

As I learn more about the climate, I trust the experts more, and my own intuition less. I have come to realize that my ability to formulate a reasonable and plausible story about what is likely to happen far exceeds my ability to actually predict the future.

The confidence we experience as we make a judgment is not a reasoned
evaluation of the probability that it is right. Confidence is a feeling,
one determined mostly by the coherence of the story and by the ease
with which it comes to mind, even when the evidence for the story is
sparse and unreliable. The bias toward coherence favors overconfidence.
An individual who expresses high confidence probably has a good story,
which may or may not be true.

Dr Kahneman is blowing through some pretty advanced material on cognitive biases here, especially the availability heuristic: the observation that we are heavily influenced by examples that are easy to call to mind. This makes people more responsive to events that are dramatic and memorable -- terror attacks, for example -- out of proportion to their actual frequency. Physicians will often change their entire practice pattern on the basis of a single disaster -- eschewing a useful therapy because one patient had a horrible complication.

In blog science this probably comes up most often because it is easier to remember being right than being wrong. We are constantly in the process of reshaping our memories, usually with the result that they become more flattering to us over time. Another reason to favor scientists over non-scientists in the discussion of science is that through publication, we have a record of a person's actual predictions at a given time. They are accountable for what they said.

There are other concepts in the article highly relevant to the blogosphere:

Decades later, I can see many of the central themes of
my thinking about judgment in that old experience. One of these themes
is that people who face a difficult question often answer an easier one
instead, without realizing it.

Over the time I've been following politics, I've seen this become expected behavior in press conferences and debates: answering the question you wish you had been asked rather than the one you were actually asked. It's endemic among deniers, who will frequently answer a question about the climate with a political lecture on the evils of leftism.

You can try to use this, and I suppose some will, as one more argument that the "uncertainty monster" should hold us paralyzed in fear. But that seems to be just about the opposite of what Kahneman is arguing -- he is talking about the limits of snap judgments, and the biases that distort the (oftentimes exceptionally successful) application of rapid cognition to problems. How does he know they are flawed? Well, he systematically gathered data and applied statistical tests to it, based on his hypothesis. In other words, Kahneman's "gold standard" for accuracy is science. The lesson here is to interrogate our intuitions using hard data, not to make a fetish of doubt.
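Kahneman's check -- lining up recorded predictions against outcomes and testing them statistically -- can be made concrete. As a purely illustrative sketch (the forecast numbers below are invented, not Kahneman's data), a one-sided binomial test asks whether a forecaster's hit rate on binary predictions beats the 50% you'd expect from coin-flipping:

```python
from math import comb

def binomial_p_value(hits: int, trials: int, p: float = 0.5) -> float:
    """One-sided probability of observing at least `hits` successes in
    `trials` independent trials, each with success probability `p`."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(hits, trials + 1))

# Hypothetical record: 74 correct binary forecasts out of 100 attempts.
p = binomial_p_value(74, 100)
print(f"p = {p:.2e}")  # a small p means the hit rate is unlikely to be luck
```

A forecaster whose record yields a large p value here is, on the evidence, no better than chance -- however coherent and confident each individual prediction felt at the time.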

8 comments:

RE: ". . . you must ignore the fact that climate science has made predictions which are quite a bit better than chance:"

Wow, that's some achievement. I take it (from what I've read here) that if climate science had made predictions which were only a bit better than chance, it would still be seen as worth the time and money?

Feel free to enlighten me as to what purpose is served by pondering whether climate science would be worth the time and money spent on it if it were a lot less successful in explaining the climate than it actually is. Sounds much more like a waste of time to me than climate science does.

Your graph has Hansen's prediction lining up nicely with the temperature record, but it looks like you've chosen the curve that corresponds to Hansen's "zero emissions" scenario, i.e., a scenario in which all CO2 emissions are ended in 2000.

If the originators of the graph at Skeptical Science are to be believed (see http://www.skepticalscience.com/comparing-global-temperature-predictions.html), looks have deceived you: the graph is based on Hansen's Scenario B, not on Scenario C as you surmised.