Posts Tagged ‘Scientific Method’

“The problem with randomised controlled trials is that they don’t show how therapeutically useful homeopathy is.”

Statements like this elevate my blood pressure. They often signify an oncoming attack against this wicked thing called Science. One encounters the word ‘homeophobia’: the denial by scientists that unknowable truths exist, motivated by the scientists’ fear that this would shatter their precious little calculations. (They are not so quick to offer a name for the phobia’s counterpart: the fear of being told you are not in fact a beautiful and unique snowflake endowed with mystical powers and the blood of Númenor flowing through your veins; in essence, the fear that there is such a thing as a wrong answer, and you might just have given it.)

Complaining about science in this way tends to be more about identity politics than about any real issue of scientific methodology. Nevertheless, let’s take the above claim at face value, and make for it the best case that we can.

The assertion, minus the controversial H-word, is this:

“There exists, or might exist, such a treatment that has two properties: 1) it has some therapeutic value; and 2) it does so in a way that randomised controlled trials do not detect.”

This is the statement that proponents of randomised controlled trials (RCTs) must disprove; and since the RCT is held up as the gold standard, the burden of proof lies with its supporters.

Imagine a chemical called archibaldmatthewphillipsium or AMPium for short. This is a natural plant extract that has no medicinal value whatsoever – unless your name is Archibald Matthew Phillips. For this one lucky man, regular doses of archibaldmatthewphillipsium reduce the odds of cancer by 95%.

This is an extreme case. But it is well known that different people react differently to some drugs, and personalised medicine is an important field. A patient with a certain illness will often be given a series of treatments in order to find out which one is best for him.

It so happens that AMPium is exactly the right treatment for Mr Phillips to take, and any systematic drug test that missed its value would let Mr Phillips down. In this case, yes, a randomised controlled trial would not be the answer.

But this is unlikely. Nobody is that special. What is far more plausible is that what is 95% effective for Archibald Matthew Phillips is, say, 90% effective for his immediate family; 80% for his extended family; 50% for people from a similar genetic or socioeconomic background, and so on, tailing away to a large proportion of the population for whom the drug is essentially valueless. Now we are back in the realms of ordinary statistics.

What should a doctor do when presented with a new patient? Of course the best thing to do would be to test him with every conceivable chemical compound, mystical trinket and shamanistic rite to find out exactly which treatment is best for him personally, and I would dearly love to see this happen. But this is impossible: it would take too long, and his symptoms might grow much worse before the tests were completed. This is a fact; not to acknowledge it is extremely irresponsible.

What we ask instead is: which group do we think he belongs in? The group for which treatment x is perfect? The group for which x is just an okay sort of treatment? Or the group for which it’s effectively useless?

Given our imperfect knowledge of the patient, we would base our decision on the sizes of the groups, and start him on the treatment that is most effective for the greatest number of people. And it is precisely that information that a randomised controlled trial provides.
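That decision rule can be made concrete. Here is a minimal sketch in Python; the group sizes and effectiveness figures are invented for illustration (riffing on the AMPium example above), not real trial data.

```python
# Hypothetical treatment choice by population-weighted effectiveness.
# All figures below are made up for illustration; in practice the
# effectiveness numbers would come from randomised controlled trials.

def best_treatment(groups, treatments):
    """Pick the treatment with the highest expected effectiveness,
    weighting each group's effectiveness by the chance the patient
    belongs to that group.

    groups: {group_name: fraction_of_population}
    treatments: {treatment_name: {group_name: effectiveness}}
    """
    def expected(tx):
        return sum(groups[g] * eff for g, eff in treatments[tx].items())
    scores = {t: expected(t) for t in treatments}
    return max(scores, key=scores.get), scores

groups = {
    "phillips_family": 0.001,     # the rare group AMPium actually helps
    "similar_background": 0.099,
    "everyone_else": 0.900,
}
treatments = {
    "AMPium":   {"phillips_family": 0.95, "similar_background": 0.50, "everyone_else": 0.01},
    "standard": {"phillips_family": 0.30, "similar_background": 0.30, "everyone_else": 0.30},
}

choice, scores = best_treatment(groups, treatments)
# With no way to tell which group the patient is in, the trial-backed
# "standard" treatment has the higher expected value, despite AMPium
# being spectacular for one small group.
```

Unless we have specific evidence that the patient is a Phillips, the boringly trial-tested option wins on expectation; that is exactly the information an RCT supplies.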

REFERENCES

Jeanette Winterson OBE took the opposite view here. I advise you not to click. She doesn’t need the hit count.

With the scientific world abuzz with reports of neutrinos appearing to travel faster than the speed of light, I have become painfully aware that what I know about modern physics I could fold in half and fit between the keys of a typewriter without seriously impeding its function.

So when I heard about the discovery, I went to what I felt to be the most relevant academic treatise on the subject, which I found highly appropriate despite it being 263 years out of date.

David Hume’s Enquiry Concerning Human Understanding was published in 1748 and deals with the problem of how we are able to know things. Since Hume believed that everything we know comes from experience – from evidence and experimentation, as opposed to revelation and belief – the book hits at the very core of the scientific way of thinking.

When I heard of a discovery that appears to completely contradict the present scientific consensus, I went to Hume. In particular I went to chapter 10 of his Enquiry, a two-part essay entitled Of Miracles.

A miracle, for Hume, is “a violation of the laws of nature”. We determine the laws of nature by our experiences of how the world works. What we call a ‘good’ law of nature is one that we see demonstrated over and over again. Every time we have let go of a ball in mid-air, it has fallen; through habit of association we come to expect that the ball will always fall, and we arrive at a law of nature that says that all released balls fall – let’s call this gravity.

The question is: how should we react to someone’s story that once he saw a ball float in mid-air – that he saw the force of gravity disappear? In essence, how should we react to a miracle?

Hume’s general principle, and it’s a good one, is this: “That no testimony is sufficient to establish a miracle, unless the testimony be of such a kind, that its falsehood would be more miraculous, than the fact, which it endeavours to establish.”

In the case of our friend saying the ball didn’t fall, we must ask ourselves: what is more likely? That the man is mistaken/lying/joking, or that gravity really did stop for him? If it’s just one man’s account, unsupported by evidence, then of course we are within our rights to dismiss him out of hand. (A more modern commentator: “What can be asserted without evidence can also be dismissed without evidence.”)

But if he comes back with photos, confirmatory experiments, and other, independent witnesses, eventually it comes to the point where it really would be more miraculous that this was all a mistake. Then gravity would be a weakened hypothesis. We would propose a new law of nature: that gravity usually holds, but in some cases doesn’t, as in the following examples…
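Hume’s maxim can be read as a comparison of probabilities: accept the testimony only if a false report would be less probable than the miracle it asserts. A minimal sketch, where every number is invented purely to illustrate the comparison:

```python
# One way to formalise Hume's maxim: believe a miracle report only if
# the report being false would itself be *more* miraculous (i.e. less
# probable) than the event it describes.
# All probabilities below are invented for illustration.

def believe_testimony(p_miracle, p_false_report):
    """Hume: 'no testimony is sufficient to establish a miracle, unless
    ... its falsehood would be more miraculous than the fact'."""
    return p_false_report < p_miracle

# One man's unsupported story: people misremember, joke, or lie
# far more often than gravity switches off.
single_witness = believe_testimony(p_miracle=1e-9, p_false_report=1e-2)

# Photos, replicated experiments, independent witnesses: a coordinated
# error on that scale is itself extraordinarily improbable.
corroborated = believe_testimony(p_miracle=1e-9, p_false_report=1e-12)
```

The single unsupported account fails the test; the corroborated one passes it, which is exactly the tipping point described above.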

Whatever new theory, or extension of the old one, takes the place of the traditional concept of gravity, we would listen to it, but with caution and scepticism, until the evidence in its favour built up to make it more certain.

Whenever a scientist – or indeed anyone at all – comes out with something new, something really new, think of sceptical old Hume. In his own words:

A wise man … proceeds with more caution: he weighs the opposite experiments: he considers which side is supported by the greater number of experiments: to that side he inclines, with doubt and hesitation; and when at last he fixes his judgement, the evidence exceeds not what we properly call probability.

One of the problems scientists face in the lab is that human beings are really, really bad at science. There are fundamental mechanisms in the way we think that help us enormously in our everyday lives, but are the exact opposite of what a scientist needs to understand the world.

Let’s play a game.

I have a rule in my head that generates numbers, three at a time. What I’ll do is give you the first three numbers. You have to guess what the next three numbers generated by the rule will be. You will ask me “Are the next three numbers x, y, and z?” and I will say yes or no. And we can repeat this process as many times as you like until you feel you know what the rule is. The game ends when you tell me what you think the rule is, and I tell you if you’re right or wrong.

All set? Okay, let’s go.

The first three numbers are 2, 4, 6.

Which numbers do you think come next? Unfortunately we can’t play this live, but you might try it with a friend afterwards; in the meantime, I’ll provide a transcript of the time I played this game with my old friend Clint.

The SI: “The first three numbers are 2, 4, 6.”

Clint: “Are the next three numbers 8, 10, 12?”

The SI: “Yes.”

Clint: “Are the next three numbers 14, 16, 18?”

The SI: “Yes.”

Clint: “Then 20, 22, 24, 26, 28…”

The SI: “Yes, to all.”

Clint: “Okay, then I know what the rule is. The rule is, generate the next number by adding 2 to the last one. That’s it, right?”

Actually, it doesn’t matter. Clint might well be right: his theory does account for all the observed data, and has successfully predicted future results. But he has approached the problem in completely the wrong way. Yes, the n+2 rule is a viable one. But so is “the numbers always go up”. Had that been the rule, then asking “29, 30, 31?” would have got a yes, and exposed the n+2 theory as wrong; but Clint never asked that. Most people don’t.

What Clint did – what most people do – is come up with a theory and seek confirmation. This is a good rule of thumb that works most of the time, but it is fragile and dangerous because it can lead you in the wrong direction. You might go a considerable distance, wasting time and precious resources, before realising you’ve had it wrong all along.

The correct way to approach the problem, indeed all problems in science, is to seek disconfirmation. You have an idea: how do you prove it wrong? What are the minimum standards of scrutiny to which your theory must be subjected? What wringer can you put it through to see if it comes out unharmed?
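The difference between confirmation and disconfirmation in Clint’s game can be shown in a few lines. A minimal sketch (the rule names and probe sequences are my own, for illustration): both candidate rules agree on every confirmatory probe Clint made, and only a probe designed to break one of them tells them apart.

```python
# Two rules that both fit 2, 4, 6 and every probe Clint actually tried.

def add_two(seq):
    """Each number is the previous one plus 2."""
    return all(b - a == 2 for a, b in zip(seq, seq[1:]))

def ascending(seq):
    """The numbers always go up."""
    return all(b > a for a, b in zip(seq, seq[1:]))

# A confirmatory probe, continuing from the 6: both rules say yes.
confirming = [6, 8, 10, 12]
both_agree = add_two(confirming) == ascending(confirming)

# A disconfirming probe, continuing from the 28 Clint had reached:
# 'ascending' says yes, 'add_two' says no, so it separates them.
disconfirming = [28, 29, 30, 31]
rules_separated = add_two(disconfirming) != ascending(disconfirming)
```

Confirmation leaves both hypotheses standing no matter how many probes you run; one well-chosen attempt at refutation is worth more than all of them.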

This is how science is done, but it is not a natural way of thinking: our brains just aren’t wired up to work in this way. We try our best, but often get fooled; and if you take it out of the context of a lab and present it as just a game about numbers, it’s amazing to see how badly we fare.