Tuesday, April 10, 2012

Be careful what you wish for

In the year 1610, Galileo discovered that the planet Saturn, the most distant planet then known, had a peculiar shape. Galileo’s telescope was not good enough to resolve Saturn’s rings, but he saw two bumps on either side of the main disk. To make sure this discovery would be credited to him, while still leaving him time to do more observations, Galileo followed a procedure common at the time: He sent the announcement of the discovery to his colleagues in the form of an anagram

smaismrmilmepoetaleumibunenugttauiras

This way, Galileo could avoid revealing his discovery, but would still be able to later claim credit by solving the anagram, which meant “Altissimum planetam tergeminum observavi,” Latin for “I observed the highest of the planets to be three-formed.”

Among Galileo’s colleagues who received the anagram was Johannes Kepler. Kepler had at this time developed a “theory” according to which the number of moons per planet must follow a certain pattern. Since Earth has one moon and four of Jupiter’s moons were known, Kepler concluded that Mars, the planet between Earth and Jupiter, must have two moons. He worked hard to decipher Galileo’s anagram and came up with “Salve umbistineum geminatum Martia proles,” Latin for “Be greeted, double knob, children of Mars,” though one letter remained unused. Kepler interpreted this as meaning Galileo had seen the two moons of Mars, thereby confirming Kepler’s theory.

Psychologists call this effort the human mind makes to brighten the facts “motivated cognition,” more commonly known as “wishful thinking.” Strictly speaking, the literature distinguishes the two in that wishful thinking is about the outcome of a future event, while motivated cognition is concerned with partly unknown facts. Wishful thinking is an overestimate of the probability that a future event has a desirable outcome, for example that the dice will all show six. Motivated cognition is an overly optimistic judgment of a situation with unknowns, for example that you’ll find a free spot in a garage whose automatic counter says “occupied,” or that you’ll find the keys under the streetlight.

There have been many small-scale psychology experiments showing that most people are prone to overestimate a lucky outcome (see eg here for a summary), even if they know the odds, which is why motivated cognition is known as a “cognitive bias.” It’s an evolutionarily developed way of looking at the world that, however, doesn’t lead to an accurate picture of reality.

Another well-established cognitive bias is the overconfidence bias, which comes in various forms, the most striking being “illusory superiority.” To see just how common it is for people to overestimate their own performance, consider the 1981 study by Svenson, which found that 93% of US American drivers rate themselves to be better than the average.

The best-known bias is maybe confirmation bias, which leads one to unconsciously pay more attention to information confirming already-held beliefs than to information contradicting them. And a bias that got a lot of attention after the 2008 financial crisis is “loss aversion,” characterized by the perception of a loss being more relevant than a comparable gain, which is why people are willing to tolerate high risks just to avoid a loss.

It is important to keep in mind that these cognitive biases serve a psychologically beneficial purpose. They allow us to maintain hope in difficult situations and a positive self-image. That we have these cognitive biases doesn’t mean there’s something wrong with our brain. On the contrary, they’re helpful to its normal operation.

However, scientific research seeks to unravel the truth, which isn’t the brain’s normal mode of operation. Therefore scientists learn elaborate techniques to triple-check each and every conclusion. This is why we have measures for statistical significance, control experiments and double-blind trials.

Despite that, I suspect that cognitive biases still influence scientific research and hinder our truth-seeking efforts, because we can’t peer review scientists’ motivations, and we’re all alone inside our heads.

And so the researcher who tries to save his model by continuously adding new features might misjudge the odds of success due to loss aversion. The researcher who meticulously keeps track of advances of the theory he himself works on, but only focuses on the problems of rival approaches, might be subject to confirmation bias, skewing his own and other people’s evaluation of progress and promise. The researcher who believes that his prediction is always just on the edge of being observed is a candidate for motivated cognition.

And above all that, there’s the cognitive meta-bias, the bias blind spot: I can’t possibly be biased.

Scott Lilienfeld, in his SciAm article “Fudge Factor,” argued that scientists are particularly prone to confirmation bias because

“[D]ata show that eminent scientists tend to be more arrogant and confident than other scientists. As a consequence, they may be especially vulnerable to confirmation bias and to wrong-headed conclusions, unless they are perpetually vigilant”

As a scientist, I regard my brain as the toolbox for my daily work, and so I am trying to learn what can be done about its shortcomings. It is to some extent possible to work on a known bias by rationalizing it: by consciously seeking out information that might challenge one’s beliefs, asking a colleague for a second opinion on whether a model is worth investing more time in, and daring to admit to being wrong.

15 comments:

Speaking of planets and 'bias blindness', here is a pertinent example.

Roughly a year ago I tried to argue that the new exoplanet data: clearly showing high eccentricity systems, non-coplanar systems, retrograde orbits and "hot Jupiters" all pointed to the Planet Capture Model of stellar system formation, as predicted by Discrete Scale Relativity, and were very difficult to understand from the old Laplacian Cloud Collapse Model.

At sci.astro.research and sci.physics.research my arguments were received with nothing but loud barking that the Capture Model was "impossible" for various reasons.

I note that the distinguished astrophysicist Helmut Abt will give a talk at the June AAS meeting in Anchorage that says the orbital elements of exoplanet systems convince him that many exoplanet systems have to be the product of captures of unbound objects.

Main point: For many, many decades astrophysicists clung to the Laplacian Model uncritically. And worse, they were blind to the naturalness and elegance of the Capture Model.

Finally a few brave souls are questioning the old dogma. The observational data is heavily in their favor and they will win out in the end.

As with highly excited atoms, so with stellar systems because we live in a discrete self-similar cosmos.

This is because every man/woman is an island. They all think that they are special somehow and they can't possibly be wrong. They are chosen by fate to do marvels. They refuse to see themselves as part of a broader picture. They are the kings of their own personal kingdom and the rest of the people are just the background, the subjects obliged to admire them. They are the writers of their autobiography and not random heroes in life's mad and chaotic novel where misfortunes happen all the time to anybody.

The choice you have is to try walking in other people's shoes by building strong relations and a common understanding with your fellow men. These strong bonds within society will render you special and important. Such a man for example would never brag a priori that he is a better driver. On the contrary the Western man is taught that the only way to be special is to continuously boost his ego and alienate himself from society.

So the scientists instead of focusing all the time on what's wrong with the brain or the genetics should start thinking on what's wrong with the society in which people are brought up.

You're right of course in that these biases are socially driven. But I've written so often about the systemic problems with academia, I thought I'd try something that addresses the individual starting point instead. If the community didn't tolerate biases, the problem would be much alleviated. Unfortunately, my impression is that biases are actually amplified by the community, eg because people who are confident are paid more attention than people who are careful and maybe not entirely convinced of their own work. Or, as I wrote recently, because they punish you for admitting to being wrong, even though that's what good science sometimes requires.

Susan Cain in her book "Quiet" also picks on the problem, though from a somewhat different perspective. Introverts, she reports, are statistically less likely to misjudge realistic outcomes. They are also less likely to be very convincing and enthusiastic defenders of their own convictions. A culture that doesn't pay attention to the "quiet" people, Cain argues, does so at its own peril.

So. I agree with you. Still, I think it's worthwhile to point out that everybody can work on their biases individually too. Cain doesn't explicitly say it that way - she isn't concerned with the same problem as I am (how science works best) - but she too points out that we're not just stuck with the way our thought processes work, but that they can be to some extent improved with some effort. Best,

"To see just how common it is for people to overestimate their own performance, consider the 1981 study by Svenson which found that 93% of US American drivers rate themselves to be better than the average."

A high fraction of US Americans believe that they are among the top earners, when in fact they are not. As such, they oppose higher taxes for the rich (and lower taxes for the poor) because they mistakenly believe they would have to pay more, when in fact they would have to pay less.

It is said that Eisenhower was shocked when an aide mentioned that half of the US population was below average in intelligence.

@RLO: One can read the abstract here: http://adsabs.harvard.edu/abs/2010PASP..122.1015A

Note that he is not talking about the majority of planets, but the majority of exoplanets found to date. He mentions that we are limited in what we can at present detect and notes that we wouldn't detect something like our own solar system. In other words, a selection effect, well known in astronomy and in exoplanet research. You, not Abt, are making the jump from "majority of detected planets" to "majority of planets".

I would agree that recognizing our biases seems to be a difficult thing for people in general, and yet of the greatest necessity in science, especially since one of the bedrock premises of science is the maintenance of doubt. I think the problem arises when this premise is extended more to having scientists doubt their peers than to doubting the currently established models. I think then the avoidance of such often detrimental biases can only be achieved if theorists struggle with their own proposals to make them as clear and distinct as possible, while remaining skeptical of scientific models that are so vague and/or obscure that they depend on this to remain unquestionable.

I guess then that is why my favourite physicists of the past century have been those who seemed to have recognized this and let it guide them. That’s to say, seemingly arrogant confidence need not interfere with conducting science, as long as it is only the scientists themselves, and not their objective, who are left vulnerable to being blinded by it.

“I'd rather be clear and wrong than foggy and right”

-J.S. Bell (in response to John Wheeler arguing the point I’ve made here), as found in “The Age of Entanglement” by Louisa Gilder

"Ignorance more frequently begets confidence than does knowledge," Charles Darwin. "What does not support my imposition makes it stronger." Never write a bank holdup note on the back of one of your checks.

When you say misleading things like "You, not Abt, are making the jump from 'majority of detected planets' to 'majority of planets'," you are claiming that I have said things that I clearly have not said.

That behavior is quite unscientific and suggests some sort of personal vendetta against me and Discrete Scale Relativity.

Stick to the facts and what people actually say. Don't put words in their mouths or use the shabby and hackneyed straw-man debating tactic.