Hi all, I haven’t posted much on badscience.net due to exciting home events, fun dayjob activity, a ton of behind-the-scenes work on trials transparency with alltrials.net, activity on policy RCTs, exciting websites, and a zillion talks. I’m going to post this year’s backlog over the next week or two (and maybe rejig the site if I get a chance). So first up…

Here’s an editorial I wrote in the British Medical Journal with David Spiegelhalter, about the complex contradictory mess of evidence on the impact of bicycle helmets. Like most places where there’s controversy and disagreement, this is a great opportunity to walk through the benefits and shortcomings of different epidemiological techniques, from case control studies to modelling. Epidemiology is my dayjob; Bad Science and Bad Pharma are both, effectively, epidemiology textbooks with bad guys; and since the techniques of epidemiology are at the core of most media stories and squabbles on health, it’s very weird that you don’t hear the word more often. More on that in another journal article, which I’ll post later on!

This link to the full piece on the BMJ website should get you past the paywall:

++++++++++++++++++++++++++++++++++++++++++
If you like what I do, and you want me to do more, you can: buy my books Bad Science and Bad Pharma, give them to your friends, put them on your reading list, employ me to do a talk, or tweet this article to your friends. Thanks!
++++++++++++++++++++++++++++++++++++++++++

jimb0123 said,

I know the thrust of the article was more about the difficulty of analysing multi-factorial, ‘real world’ outcomes, but as a non-helmet wearer this is interesting to me. Also, I’ve been doing some basic analysis recently where I’ve discovered that feedback is missing from the standard linear model I used, and I’d say “risk compensation” is an example of feedback.

I suspect that things will become clearer cut when the technology meets the brief. Current cycle helmets are a design compromise (they have to be lightweight and ventilated).

Something like the “invisible helmet” www.youtube.com/watch?v=WpznWLgGfv8 will change everything, and society will look back at cycling the way we look at the Wright brothers’ first aeroplane.

RRZ said,

Wondering what you think of Elvik’s meta-analysis. Does it suffice to reconcile the case-control results with the population observations? OTOH, I think the Canadian population results were based on helmet laws that weren’t enforced.

Guy Chapman said,

I’ve been interested in the evidence deficiency for a long time. I was at the founding meeting of the cyclehelmets.org editorial board; the amount of agenda-driven work in the field is quite staggering.

Amazingly, people still cite the 85%/88% figures of the 1989 Seattle study, even though nothing approaching this has ever been observed in a real cyclist population and the case and control groups were completely different.

The Cochrane review is disgraceful. It’s written by the Seattle group, is dominated by their own work, double-counts their data sets, and entirely ignores the fact that substituting co-author Rivara’s own street count data in the 1989 study, instead of their assumed value, makes the effect vanish into the statistical noise.

Something that I think is missing from this type of discussion is some acknowledgement of the level of intervention. It’s not like discussing whether to recommend that patients take drug A rather than drug B. Doctors are discussing whether to recommend that the police start arresting and prosecuting people who cycle with bare heads. It seems to me that a much greater level of harm, and a much higher standard of proof ought to be needed before advocating such a step.

For example, doctors are pretty certain that surgery is an effective treatment for some types of cancer. But can you imagine doctors campaigning to make it a crime to decline a surgical excision? The idea would be totally against the principles of ethical medicine: major surgery can only be done with the patient’s consent.

So why do some doctors feel happy to recommend that the criminal justice system enforce their preferred intervention in the case of cycle helmets? What happened to the principle of patient consent?

Greg Hill said,

I think I understand the issues you’ve covered, but I’m not sure what to take out of it. Here’s my question: if you’re teaching your 7-year-old child to ride a bike would you insist on a crash-hat?
Thanks, Greg.

woodchopper said,

njloof, if we are just looking at the financial effects of safety, a dead cyclist is no longer able to work and pay taxes. With an 18-year-old dead cyclist, some 50 years of productive work have probably been lost. There are of course many non-financial reasons to promote safety (though I’m not in favour of compulsory helmets).

greyspoke said,

Is there any decent research other than the epidemiological about this? It ought to be possible, with suitably designed test dummies and information about modes of cycle accident, to derive a decent model of the different ways cycle accidents affect the bod, both with and without helmets. Presumably there is biomedical data on how the bod reacts to different types of force? Data about how cycle accidents occur can be gathered easily enough.

There is such data for car safety, of course funded via the motor industry and the need for type-approval and so on. Surely it would be worth government and the industry stumping up some dosh to do likewise.

It has always appeared to me that epidemiology alone is not going to provide the answers that are required. I think the problem is that people look to the medical industry for research on this topic and epidemiology is how they do things.

The “invisible helmet” is a good example: it is never going to sell enough, to the right kind of cyclist, to provide useful data from real-world accidents. The dummy tests they appear to have done do not provide hard data or a comparison with helmets, nor is there anything to suggest their model accident scenario is one of the ones we should be concerned about.

Spinney said,

Great article. As a helmet wearing cyclist, I have to say that at least one reason I wear the thing is to help keep my hair out of my eyes!

To Greg Hill – the case of teaching a 7-year-old to ride is significantly different to that of an experienced cyclist. The former is far more likely to fall off and have the kind of low-speed accident for which a helmet may well provide useful protection. The latter is more likely to come into conflict with a moving car or turning lorry – for many accidents of this type a helmet will make little difference.

Anyway – Ben, if you read the comments – a bunch of us on a cycling forum were wondering if there was any chance you could write a similar article aimed at ‘Joe Bloggs’ in the street, who has never heard of epidemiology – the same stuff, but in complete layman’s language.

Since the editorial not only concedes that there is only a “complex contradictory mess of evidence”, and further discusses the impact of a law in a country not my own, rather than being about the wearing of bicycle helmets per se, it is of little help in my own individual case.

I believe that I wear a bicycle helmet because I am somewhat more risk-averse than average, notwithstanding that I suspect that bicycling is somewhat more risky than the alternatives (for recreation, exercise, or transportation), and even more so for me because I’m somewhat clumsy. But how to determine if my wearing of a helmet induces me to take more risks or even reduces my risk at all? I’m not much of a scientist, but I believe I’ll need a pack of scientists to follow me around and observe me, and a much larger group of subjects for a double-blind randomized controlled trial, which calls for a few thousand clones of me. I wonder if I can get someone to fund this.

lindaward said,

Several population-level helmet-law studies have controlled for background trends and included both head and non-head injuries, and shown the effect of the legislation on hospital admissions for cycling head injuries to be far from minimal.

The head injury results in all these population-level longitudinal studies, and the AIS3/4 head/brain injury results in the Carr study, are consistent with the (hospital control) results of the Thompson Cochrane Review, and the Attewell and Elvik meta-analyses, of case-control studies.

Two factors are likely to be responsible for the Dennis minimal-effect finding: collinearity (between ‘time’ and ‘time since intervention’); and the tiny number of pre-law data points for Ontario (30% of the 1994 injuries; the law was Oct 95) and British Columbia (19% of the 1994 injuries; the law was Sep 96).

Dennis et al. cite the Scuffham and Walter studies as being “limited by sample size or methodological quality”. However, both the Scuffham and Walter analyses took baseline trends into account, and had (more than) adequate sample sizes. Macpherson claimed that the Povey and Scuffham analyses, and a preliminary (1992) MUARC study by Cameron, “failed to include a concurrent control group in the analysis”; however, all three analyses used cyclist non-head injuries as concurrent control groups. (Povey’s and Scuffham’s analyses also included non-cyclist injuries.) Dennis also cites the preliminary 1992 Cameron/MUARC study; both Macpherson and Dennis have apparently overlooked the (1995) Carr/MUARC study (4 years of post-law data), which superseded the (1992) Cameron study (1 year of post-law data).

With respect to the 85/88% in the “Seattle” study, Guy Chapman states that “nothing approaching this has ever been observed in a real cyclist population and the case and control groups were completely different”. By “real cyclist population” and “completely different” “case and control groups”, it seems that Guy may mean population-level longitudinal studies, and hospital vs population controls. I am not aware of any studies using population controls, it would be helpful if Guy were to cite the studies he is talking about (and a reference for his claim, on a Wikipedia talk page last year, that “50% of cyclist deaths in London are due to crushing by goods vehicles at junctions, cause of death being abdominal trauma”).

Guy states that “substituting co-author Rivara’s own street count data in the 1989 study, instead of their assumed value, makes the effect vanish into the statistical noise”, but does not provide any references. Struggling to understand how one could (validly) “substitute” “Rivara’s own street count data” into a case-control study (and finding no helmet studies with Rivara as first author in PubMed), I forced myself to have a look at the (truly dreadful) cyclehelmets site. Guy’s claim that substituting “Rivara’s own” data “makes the effect vanish into the statistical noise” seems to be referring to the www.cyclehelmets.org/1068.html claim that “Of 4,501 child cyclists observed cycling around Seattle, just 3.2% wore helmets. This is not statistically different from the 2.1% of the hospital cases who were wearing helmets”. The required sample size to detect a difference (with 80% power) between 2.1% and 3.2% is 3,346 in EACH group; the cyclehelmets site states that there were 135 cases. The effect does not “vanish into the statistical noise”; it is (statistical) rubbish to claim, on the basis of such a grossly inadequate sample size (less than 1/20th of the number of cases required for such a comparison), that the lack of a statistically significant effect is (real) evidence that there is no effect.
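[For readers who want to check the sample-size arithmetic above: a minimal sketch of the standard two-proportion power calculation, using the unpooled normal approximation. The exact figure varies slightly with the formula used (pooled variance or a continuity correction nudges it up towards the 3,346 quoted), but the order of magnitude is the point: thousands of cases per group would be needed, not 135.]

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group to detect a difference between two
    proportions (two-sided test, unpooled normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# 2.1% helmeted among cases vs 3.2% in the street counts:
# requires roughly 3,300-3,400 cyclists in EACH group.
print(n_per_group(0.021, 0.032))
```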

I am still wondering what Guy means by “assumed value”; it would be helpful if Guy could explain how the case-control study “assumed” helmet-wearing prevalence.

It is the BHRF (cyclehelmets) site, not the Cochrane review, that is disgraceful: the site also misrepresents the results of the Carr, Hendrie, Povey, Scuffham, Karkhaneh, Walter, Attewell, and Elvik studies; it also misrepresents the results of the Australian (Victorian, New South Wales, and South Australian) participation surveys (see the above Olivier/ACRS link).

My current ‘favourite’ example is the claim (www.cyclehelmets.org/1146.html) that “Helmeted cyclists have about the same percentage of head injuries (27.4%) as unhelmeted car occupants and pedestrians (28.5%). Wearing a helmet seems to have no discernible impact on the risk of head injury.” The reference cited is “Serious injury due to land transport accidents, Australia, 2003–04”. As a BHRF “editorial board” member, maybe Guy can explain how it is possible to draw such a conclusion from a report that does not contain any information as to what the head injury rates were prior to the helmet legislation?

(The BHRF: a perfect teaching case for how NOT to ‘do’ epidemiology?)

old.johns said,

That’s a very good point. Bike helmet legislation falls under “road safety” in most people’s minds, and we’re accepting of strong coercion in that context. E.g. when it comes to stopping at red lights or driving on the right (or left), our actions can kill others, so coercion makes sense. But wearing a helmet affects only the wearer, so I think it should be seen as a public health measure, rather than a matter of road safety.

The case for coercion then falls apart, as you say, even if it works pretty well. On the other hand, as Goldacre and Spiegelhalter say, the benefits of legislation are unclear, to say the least. They are “too modest” to be scientifically measured.

In addition, cycling itself is known to be a very effective public health measure. Simple calculations show that the marginal health benefit of helmet use while cycling is about 1/50 of the benefit of cycling itself! It seems bizarre to force a weak health intervention on people who are voluntarily taking a much stronger one. Aren’t they doing enough already?