In the second chapter, Stanovich delves deeper into his third characteristic of science: Scientists deal with solvable problems. To define “solvable problems” in this context, we must start with two critical concepts: first, what are the defining characteristics of a scientific theory, and second, what does it mean to be falsifiable?

Scientific theories must always be stated in such a way that the predictions derived from them could potentially be shown to be false.

This principle is known as the falsifiability criterion, and it is attributed to the philosopher of science Karl Popper. Popper argued that nothing in science is verifiable (you can’t prove anything to be absolutely true), but that all science must be falsifiable (you must be able to prove it false). Interesting tangent: George Soros was a student of Popper’s, and his Open Society Institute is named after Popper’s book, The Open Society and Its Enemies.

Stanovich highlights that good theories “go out on a limb” and make predictions that might not be true. He discusses two examples that were not science because they resisted exactly this: Freud’s theory and bloodletting (using Benjamin Rush as an example). Freud’s theory, built on case studies and extensive storytelling, did not make predictions, but rather offered explanations of behavior after the fact. For example, Stanovich mentions the many psychoanalytic explanations for Tourette’s syndrome (discussed by Shapiro, 1978): tics were a source of erotic pleasure, and therefore the patient did not want to give them up; they were “a conversion symptom at the anal-sadistic level,” or symptoms of a “narcissistic character.”

“Sandor Ferenczi, a disciple of Freud who had never seen a patient with Tourette’s [italics added by Stanovich], made an equally serious error when he wrote that the frequent facial tics of people with Tourette’s were the result of a repressed urge to masturbate.”

Really, guys? A repressed urge to masturbate? Really? Ok, then let's make it a prediction: measure masturbation frequency in Tourette's vs. non-Tourette's patients, and across time within Tourette's patients. Wouldn't we see a correlation then?

The story was similar with bloodletting. Benjamin Rush, a famous American physician and patriot, would justify bloodletting by explaining every outcome. If a patient died, “the disease was too far gone for the treatment to work”; if the patient recovered, well then, “the treatment worked!”

The key for me here is that science must be conducted in the language and spirit of prediction. Scientific theories must both explain existing facts and guide the search for new ones. In that search, theories must make real predictions that can be proven wrong. In class, I take a step back at this point and try to get inside the heads of these crazy historical pseudoscientists.

Benjamin Rush was no idiot. He read a whole lot. He knew a whole lot. He wanted to help people. He was a pioneer in arguing that mental illness is a disease and could be treated. Ok, one of his treatments was this chair. But he was no silly fool.

Image Credit: University of Pennsylvania Digital Archives

He was the “Father of American Psychiatry,” for chrissakes! But he was wrong, wrong, wrong on bloodletting. Why? Because science is hard. We want to explain; we have all sorts of motives to explain while risking nothing. To predict is to risk being wrong: to go out on a limb and realize that we have been cutting people’s arms open, and blaming autism on “refrigerator mothers,” for nothing. Causing real harm while doing no good. Creating knowledge is great, but who wants that kind of knowledge?

Stanovich celebrates the “Freedom to Make a Mistake” as a key value in science, but also shows that the fear of making mistakes and looking stupid is, ahem, kind of human nature. Thomas Kuhn, in his Structure of Scientific Revolutions, takes this claim to its logical extreme, arguing that instead of science proceeding by falsification of theories, theories die when the people who hold them do, having clutched them tightly throughout their careers (Ok, that’s my image; Kuhn is a bit more academic).
I think Stanovich does a good job finding a middle ground between Popper and his students, like Imre Lakatos and Paul Feyerabend, who characterized scientists from a much more sociological point of view. He ends that section by citing a lighthearted ribbing from psychologist Ray Nickerson: science proceeds through falsification not because individual scientists enjoy being wrong, but because they enjoy showing that other scientists’ theories are wrong. Which reminds me of this delightfully ranty post from Peter Watts about those leaked emails in Climategate. Money quote:

Science doesn’t work despite scientists being asses. Science works, to at least some extent, because scientists are asses. Bickering and backstabbing are essential elements of the process. Haven’t any of these guys ever heard of “peer review”?
There’s this myth in wide circulation: rational, emotionless Vulcans in white coats, plumbing the secrets of the universe, their Scientific Methods unsullied by bias or emotionalism. Most people know it’s a myth, of course; they subscribe to a more nuanced view in which scientists are as petty and vain and human as anyone (and as egotistical as any therapist or financier), people who use scientific methodology to tamp down their human imperfections and manage some approximation of objectivity.
But that’s a myth too. The fact is, we are all humans; and humans come with dogma as standard equipment. We can no more shake off our biases than Liz Cheney could pay a compliment to Barack Obama. The best we can do— the best science can do— is make sure that at least, we get to choose among competing biases.

Ok, but I skipped two great sections in Chapter 2. One, titled “Not All Confirmations Are Equal,” lays out that part of what makes some predictions (and therefore some theories) better than others is the potential for disconfirmation. I was thinking about this this morning as I was trying to run barefoot, after reading that recent article in the NYT (and having followed the growth of barefoot running for a bit). After all the evidence and experience that shoes cushion our steps and make running easier, any evidence that running barefoot reduces injuries is pretty amazing to me. Of course, that article offers mostly testimonials, and its caveat is a welcome contrast to the celebratory wonder of McDougall’s piece. But if someone predicted to me that running a marathon in bare feet was not only possible for elite runners, but better for many other runners? That is quite a prediction, and it seems to be coming true.

The second great section in Chapter 2, the one I can’t let go, is titled “Thoughts Are Cheap.” In science, theories are cheap and experiments are currency. Yes, the value of experimental evidence may be cheapened or perverted by our modern “publish or perish” environment, but the fact remains, as Stanovich quotes Stephen Jay Gould:

“But thoughts are cheap. Any person of intelligence can devise his half dozen before breakfast. Scientists can also spin out ideas about ultimates. We don’t (or rather, we confine them to our private thoughts) because we cannot devise ways to test them, to decide whether they are right or wrong.”

I happen to disagree that scientists confine these big cheap thoughts to their own heads. Plenty of scientists are willing to use their scientific reputations to engage in provocative public speculation (Steven Pinker, Drew Westen, even … Stephen Jay Gould). I happen to agree with some of these stories, but they are not really science as Gould defines it above.

Ok. I’ll stop there, because actually measuring stuff is the subject of the next chapter, on Operationism and Essentialism.