The other day the British Medical Journal (BMJ) published a clutch of articles about whether Tamiflu was as useful a drug as some have touted. I read the main article, another one of the Cochrane Collaborative meta-analyses of the studies they deem useful about any particular subject, and it didn’t seem to make much news. It confirmed what their previous review had said about the neuraminidase inhibitor antivirals for influenza (Tamiflu and Relenza): these drugs work but their effect is modest. We’ve been saying the same thing for years here, not because we did a fancy meta-analysis, but because that’s quite clearly what the literature said. They confirmed it. Again. Not very interesting, I guess, so the BMJ, quickly becoming medical tabloid central, fastened on the one scientific aspect of the paper that might remotely have a news hook: the meta-analysis didn’t have enough information to show — according to Cochrane Collaborative standards, that is — that healthy people who got flu and were given a neuraminidase inhibitor avoided more serious complications like pneumonia. It didn’t show the antivirals didn’t work for this. It just alleged there was no Cochrane-required level of evidence that they did. The data — in their hands — showed the evidence was compatible with either outcome. Yawn. But yawn was good enough. It was elevated to a different story: that one reason we didn’t know is that the drug companies were hiding the data. That is a news story, I agree, but not a news story about whether the drug works or doesn’t. While it was just an allegation (because they didn’t get to see the data), the journal pursued it in collaboration with Channel 4 television in the UK and the Cochrane Collaboration itself. Conflicts of interest?
The paper was co-authored by, among others, Dr. Thomas Jefferson and graduate student Mr. Peter Doshi, both of whom I have criticized here on one occasion or other. I’m not making any serious complaints about this paper (although why, on scientific grounds, it should have been published in a high profile journal isn’t clear to me, since it didn’t provide new evidence), but I do know enough about this kind of work to know that a great deal of judgment is used in accepting or rejecting papers (indeed that’s why this follow-up was done; someone objected to a paper that was considered in the previous review). However the overwrought point-counterpoint between drug maker Roche, the authors, BMJ reporters and the editors over access to the data and who was supported by whom had the effect of entangling use of an important class of drugs for influenza — a class of drugs that everyone seems to agree works to some extent when no other does (and during a pandemic, no less) — and the important but unrelated issue of transparency over data involving pharmaceuticals. Let me be clear that on this issue I am on the BMJ’s side. I think it’s a scandal that we don’t have access to information used to license drugs given to the public with an official sanction of safety and efficacy. But it’s an issue that would likely crop up with almost any drug they sought to examine. Doing this in tandem with a media outlet whose objectives are not science but snagging viewers and with the Cochrane Collaborative itself is unseemly at best and borders on the unethical. It makes it look like the BMJ was again engaging in self-promotion (OK, I understand it’s a business enterprise, but let’s recognize what’s involved). It didn’t hurt that the self-promotion seemed to serve everyone’s purpose (except for Roche’s, and frankly I can’t bring myself to feel sorry for them).

Well, maybe not everyone’s. I don’t think it served the public purpose or the public health. Tamiflu and Relenza work. They are in fact the only therapeutic modality we have other than supportive care or, in critical cases, heroic methods (mechanical ventilation). Vaccines are preventive, not therapeutic. So here we are in the middle of a pandemic and the Cochrane folks, aided and abetted by the BMJ and television producers, are saying, “How do you know for sure that they will prevent pneumonia in an otherwise healthy person who gets flu?” There is evidence about this, even if the Cochrane zealots don’t recognize it:

And observational data on these drugs’ usefulness in patients hospitalized with severe cases of flu – seasonal and H1N1 – points toward benefit, Dr. Tim Uyeki of the U.S. Centers for Disease Control recently reported in the New England Journal of Medicine.

“We don’t want people to stop using the antivirals in the way that we’re recommending they’re used because we believe they are having a beneficial effect on hospitalization, on severity of illness and indeed (preventing) death,” [antiviral expert Charles] Penn said Tuesday from WHO headquarters in Geneva.

“And if people stop using them, then the consequences of that will be an increased burden on the health-care system and worse outcomes.” (Helen Branswell, Canadian Press)

But as long as we’re talking about the need for randomized trials — something I’m not against but know enough about not to think they are automatically better than observational studies — let’s really talk about the need for them. The Cochrane Collaborative exists to do systematic reviews of the medical literature for the purpose of improving medical outcomes (note that they set their own rules for this). To be at all effective they have to be read and understood and used by practicing clinicians. Do we really know that clinicians who follow the recommendations of the Cochrane Collaborative have better outcomes for their patients? Isn’t it possible that some Cochrane recommendations actually do harm by inhibiting the use of treatments that are helpful in certain cases? Remember that their judgments, even if valid, are about average effects. If we wanted to find out whether anyone should pay attention to a Cochrane review, shouldn’t we subject it to a randomized clinical trial? It would be easy. We could give randomized groups of physicians real and sham Cochrane reviews (no one can really understand how they are done from the descriptions in the papers anyway, so the placebo ones would look just like the real ones). Then we could see if they result in better medical practice.

Comments

I am interested in hearing more about the idea that an RCT is not automatically better than an observational study. This warrants greater discussion. In addition, more comments about the Cochrane way of doing things would be helpful. I have had some concerns about Cochrane, and just what it takes for them to say that an intervention is effective. If you have any links to these topics, they would be of use to many readers.

I could not disagree more. This so-called “argument” about clinicians basing judgments on “averages” is ridiculously unfair because you are not complaining about the MANY MANY times greater problem of clinical interventions that use those same “averages” to do something.

Opportunity costs and negative outcomes from interventions are real whether those intervening admit responsibility or not.

There are ways to continue to make evidence better and the Cochrane Collaborative is not perfect. The above does not help.

Also the ridiculous ahistorical nature of your complaints about statistical bias against medical interventions. I guess you do not know about the thousands of times medical leaders, academic institutions, research orgs, and big pharma have cooked the books (or is that so boring nowadays that it is conveniently not worth mentioning).

I am constantly amazed by the hypocrisy of people who always fight against the “un-science” of “alternative” and even “anti-” medicine but have convenient blinders on about the medicalization of every aspect of life and the constant “up-selling” of disease and intervention. They are both problems but the relatively minor one gets all the blows. That is bullying in my book, but everyone seems to want to be a bully these days.

Floormaster: I could not disagree more. This so-called “argument” about clinicians basing judgments on “averages” is ridiculously unfair because you are not complaining about the MANY MANY times greater problem of clinical interventions that use those same “averages” to do something.

I don’t think you understood my point. The “averages” I referred to were the meta-analysis estimates, not those used by clinicians. So using your argument, all the more reason to require an RCT for Cochrane. Notice that is where my point was headed, so you are supporting it.

Opportunity costs and negative outcomes from interventions are real whether those intervening admit responsibility or not.

Again, you missed the point. You are supporting a demand for an RCT of Cochrane. I said nothing about medical interventions being harmless. On the contrary, I have good reason to know how harmful they can be and how often it occurs.

There are ways to continue to make evidence better and the Cochrane Collaborative is not perfect. The above does not help.

So shouldn’t it be submitted to an RCT? Does it really make medical practice better? Can they (or you) support the claim? Or is what’s sauce for the goose not sauce for the gander?

Also the ridiculous ahistorical nature of your complaints about statistical bias against medical interventions. I guess you do not know about the thousands of times medical leaders, academic institutions, research orgs, and big pharma have cooked the books (or is that so boring nowadays that it is conveniently not worth mentioning).

Not only did I not make such claims, but I explicitly said I agreed with BMJ that the data secrecy is a scandal. You may either wish to re-read, or if you are confident you read correctly, point us to where you think we said anything like what you claim.

I am constantly amazed by the hypocrisy of people who always fight against the “un-science” of “alternative” and even “anti-” medicine but have convenient blinders on about the medicalization of every aspect of life and the constant “up-selling” of disease and intervention. They are both problems but the relatively minor one gets all the blows. That is bullying in my book, but everyone seems to want to be a bully these days.

I don’t think we can be accused of any of this. But if you have specific instances, rather than blanket accusations, we’d be glad to discuss them with you. We hardly ever talk about CAM or quackery here, and I myself have tried acupuncture (as far as I could tell it didn’t help, but neither did anything in the medicine cabinet). We have been involved in very public disputes with Orac over the meaning of scientific method and it has always been our position that the “demarcation problem” (how do you separate science from so-called pseudoscience) is unsolved. We consider astrology a science (it makes testable claims and is based on theory). It’s just that the claims it makes are false (they have been tested in large scale data studies about things like vocation and birth sign), so as science it is bad and should be discarded. The theory also has no basis in what we know today. But that just makes it bad science, not pseudoscience. But apparently you have a different view. I’d like to know what it is. Can you tell science from pseudoscience, or perhaps more to the point, in over five years and 3400 posts written here can you find a single instance where we claimed to be able to do so?

As for the medicalization of everyday life, you will get no argument from me. When our children were born the battle cry among progressive young parents was not to medicalize pregnancy. Now that our children are having babies, it is a hundred times more medicalized than ever before. Mrs. R. and I just shake our heads. So we too are amazed, but you seem to be amazed at us. Perhaps, again, you can give me examples, because it is also one of our concerns and we do not wish to feed it. But if you want absolute zealots on the issue of what is proper and what isn’t in medicine, look to the Cochrane Collaborative, not us.

Our favorite journalists, Shannon Brownlee and Jeanne Lenzer, have come out with a story about this – published just today on Atlantic Online. I was waiting for them to weigh in on this important matter, since it seemed just up their alley, but they did it sooner than I expected.

I just read the first paper posted by SusanC @5, which was funded by Hoffmann-La Roche. What I thought was interesting was that 71% of their subjects (those admitted to hospital for influenza) had received the flu vaccine. Maybe some RCTs to establish vaccine efficacy would be worthwhile.

Thank you S. Lakshmi for that link. An excellent article.

Sid Offit: As of today, 10,402 H1N1 deaths have been reported worldwide, according to the European Centre for Disease Prevention and Control. The number of people infected is at best a rough estimate. According to WHO stats, seasonal influenza kills 250,000-500,000 people annually. It would take a lot more number massaging to justify all the media attention and money that’s been given to the flu this year.

(Those statistics and sources are all available from Wikipedia – “2009 flu pandemic”)

Revere: Evidence-based medicine is the most reliable, and double blind, randomized controlled trials are the strongest form of evidence we have. We can’t keep recommending drugs just because we think they work to some extent. These drugs have side effects. Evidence of their safety and efficacy is required to justify their use. You sound a bit like a homeopath.


Having been able to peek into the sausage factory of what goes on in the creation of a Cochrane Systematic Review, I’d like to endorse the worries expressed in this post.

It also chafes me when a Cochrane Review allows for an “Editor’s Conclusion” where words like “Promising!” or “Merits further research!” are bandied about for treatments which clearly flop if you even just quickly skim the data. The CAM reviews seem particularly guilty of this editorial legerdemain, but they’re not alone.

I’d *love* to see an RCT on the effectiveness of Cochrane meta-analyses and systematic reviews. And you know what, I bet Archie Cochrane himself would have been very open to the idea. Then again, he was Cochrane and not a Cochranist.

Quinn: I guess that means that sciences that don’t use experimental method (seismology, astronomy, natural history, cosmology) don’t have reliable evidence. A good observational study is better than a bad randomized trial any day. And there are a lot of bad randomized trials. They are experiments only in a provisional sense. Once treatment has been allocated they are just another observational study. They are difficult to do, often done poorly, and full of pitfalls. If this weren’t true, you would only need one RCT for each drug or intervention. Instead we get many and they conflict. That’s why Cochrane has a big Handbook on how to judge RCTs. What is your evidence that they are the most reliable form of inquiry? Or don’t you need evidence for that?

Quinn: Well, I think I’d rather listen to an epidemiologist or statistician (for different views) than a podiatrist, especially one who is addressing EBM in podiatry exclusively and doesn’t even get it all right. My colleagues and I actually do these studies and teach how to do them. Most clinicians don’t know a thing about them or how to evaluate them.

Quinn: So, to summarize: you present as your evidence the opinions of a podiatrist backed by no data. Do you think this article would be considered evidence by the Cochrane folks? Evidence of anything? Not because he’s a podiatrist but because there’s no evidence there. Just an opinion. You made a statement but have no data for it. I use evidence from all sorts of study designs and I know about them all. They each have strengths and weaknesses. But the one thing we don’t teach our students is that one design is automatically better than another. The design has to be fit to the question and the situation. There are questions for which RCTs are impossible, others for which the RCTs are of very poor quality or almost infeasible, and others where they really are good or even the best evidence. I hope that’s not too nuanced for you.

Revere,
I agree that the study design must fit the research question. “Levels of evidence” aren’t the sort of thing that can be easily tested using RCTs. But there is a solid rationale for this hierarchy and it is well accepted in the medical community.

I thought the article I offered did a nice job explaining this rationale. The levels of evidence and the reasons for them are not the author’s opinion, and he references his source – the Centre for Evidence Based Medicine at the University of Oxford. Here’s the link: http://www.cebm.net/index.aspx?o=1025

When it comes to the safety and efficacy of drugs, there are very good reasons why we consider double blind RCTs to be superior to uncontrolled studies, case reports, and anecdotes.

Quinn: many kinds of adverse event data are not easily uncovered by RCTs unless the trials are very large, run for a very long time, and have protocols that define what counts as an adverse event (minor, serious, or life-threatening), and so forth.

Quinn: The other study designs are not uncontrolled. They are carefully designed (or should be; like RCTs they can be done badly), and they use the same logic as RCTs (the randomization serves only to reduce, not eliminate, confounding, which can be dealt with in observational studies in both the design and the analysis). The Cochrane Collaboration accepts them in principle but often does studies like this with arbitrary exclusions of observational studies that can be highly informative (read: reliable evidence).

Marissa: I completely agree. RCTs are not useful in every situation. When it comes to establishing the efficacy of antivirals, however, properly designed, double blind RCTs would provide the strongest possible evidence.

Quinn: Well you are in error. Both cohort and case-control designs have comparison groups (what you call a control group). They are designed to take advantage of natural experiments that usually can’t be performed.

When I said “there are very good reasons why we consider double blind RCTs to be superior to *uncontrolled studies*, case reports, and anecdotes”, I wasn’t referring specifically to the observational studies in question.

But they could be included here as well. Well designed, double blinded RCTs provide stronger evidence than well designed cohort and case-control studies.

Of course you have now hedged your contention with qualifications, but even so it is not necessarily true. If the purpose of randomization is to reduce (although not eliminate) the problem of residual confounding, you can often design an observational study so that confounding is not a likely explanation for what you see. More important, perhaps, is that observational studies are of people living real lives in the real world, while RCTs are artificial. That is the reason the Cochrane analysts and others have to make a distinction between efficacy and effectiveness. Observational studies measure effectiveness (if you want to use their terms and concepts) and RCTs measure efficacy. Since it is usually effectiveness you are after, observational studies are superior for that purpose. All this says is that the design has to be fitted to the question and the answer properly interpreted. You get certain kinds of important and reliable information from well designed studies of some kinds and other kinds of important and reliable information from well designed studies of other kinds. The idea that you can put the reliability of study designs in a linear order, while a common misconception, is contrary to how evidence is actually used by scientists, and fallacious. Remember, too, that RCTs are only a tiny, almost minuscule fraction of the science that is done. Toxicologists and chemists and physicists rarely randomize. Yet they produce highly reliable science.

So your problem with this article is that they restrict the EBM review to RCTs only? Or that it (a mere EBM review, not new data) was published in a high profile rag? Or that it concluded something that you “know” isn’t the case?

People in RCTs are not living in the real world? Where are they living? Of course they are in the real world.

Of course some sorts of evidence are of better quality than others.

Observational studies can be designed to make it such that confounding is not a likely explanation (though often they are not), but they cannot eliminate the possibility like a true RCT can. Observational studies should count and are often the best evidence we’ve got, but the quality of evidence from observational studies is not as good as that from a well powered and well run RCT. I have a hard time understanding your arguing otherwise.

The evidence that Tamiflu does much is moderate strength evidence at best (of a sort not considered by Cochrane for reasons that are unclear) for a very modest effect. Re-emphasizing this fact is important as some have this season handed out Tamiflu like it is candy corn and as some panic over the emergence of Tamiflu resistance in Pandemic A/H1N1-SOIV.

Don: There was nothing much (if anything) new in this article. They don’t even claim there is. One claim: NIs reduce the symptoms of flu “modestly.” That’s not new. They say that they don’t work for non-flu disease (ILI that’s not flu). Duh. Another thing they claim as “new” is that evidence they prevent complications is insufficient. They updated a search for new literature but didn’t find any that met their standards. Nothing was good enough for them (out of 1400 studies whose evidence they disregarded). They restricted it to studies of otherwise healthy people (even though the drug has mainly been recommended for people with risk factors) who may not even have had flu, because they included use for any ILI; they excluded challenge studies because they didn’t consider them equivalent to field studies, even though they are maximally informative for the biological efficacy of the drug; and the studies had to have 75% of subjects in the group at lowest risk of complications from flu (ages between 14 and 60). So if you are at low risk for complications, they didn’t have enough data to show that the NIs helped, even though the data were compatible with major reductions in complications as well as with no reductions. And if you were at high risk for complications (risk factor or age), then they didn’t look. Not that you’d know it by the way it was spun by them and the journal and now the two journalists at The Atlantic, both of whom have a financial interest in this because they are paid to write articles that attract readers (and if you compare their version with the original article they did a lot of spinning and obfuscation); one of them also has been — in the language they like to use — a paid consultant to the journal where it was published.

As for RCTs versus observational studies, yes, the people in RCTs live in the real world, but an RCT is not a real-world intervention. The populations are different, compliance is different, outcomes are judged differently and the people behave differently. This is in addition to the restrictions on study populations I mentioned above. An RCT cannot be done with these drugs, given what we know, when seriously ill people are involved, something explicitly stated by the authors and obviously true. But some doctors, not having read the study, might hesitate to use them given current indications, and they would put themselves at risk of liability (correctly) and perhaps increase their patient’s risk of dying. This story is being spun as saying that the drugs don’t work. That’s false. They do work for healthy adults and animals, and the idea that they decrease complications is highly plausible. They have not established the contrary but have excluded much evidence from observational studies indicating a benefit.

Their conclusion? It should not be used for routine treatment of seasonal influenza. The only country that does this, as far as I know, is Japan. That has never been the practice in the US. They make no recommendations about pandemic influenza, people with risk factors or the elderly for seasonal or pandemic influenza. Not that you’d know this from reading about it. And you have to read the article carefully and parse it to even know this from the journal itself, not just news media.

The data on how much the NIs decrease symptoms has been known for a long time and there is clinical and observational evidence that they help seriously ill patients. Would you withhold them from a patient seriously ill with flu on the basis of this article? I would hope not. But some doctors will because all they read was the news articles or the crap put out by The Atlantic.

Actually there was another country in which Tamiflu was being handed out to healthy adults and children complaining of ILI – the UK. The UK ran a phone hotline, and if the algorithm placed the individual’s description as likely H1N1, they were instructed to go and pick up a Tamiflu pack. No need to be seen by the doc.

“Just 20 per cent of all cases diagnosed by the National Pandemic Flu Service were actually cases of swine flu, HPA scientists found. Everyone diagnosed by the service was given vouchers to get Tamiflu”

Don: Note that the BMJ piece was not about that. It was about seasonal flu, only. The use of Tamiflu for containment was incredibly stupid and much criticized, here and many other places. We also criticized WHO years ago for even broaching it as a possibility for H5N1. But the UK made a complete hash of the whole business.

Not that my opinion is worth much, nor that anyone needs my endorsement. But FWIW I’ve now read every single comment, and still fully support all of revere’s arguments. Thank you for your perseverance. There are people out there who need this treatment, and there are doctors out there who can be easily misled. When these two meet, the outcome may depend on how much stock the doctor puts in what they read in the BMJ (less likely) or in the media (more likely, unfortunately).

Oh c’mon revere. This review wasn’t about the use of Tamiflu in seasonal influenza (which is almost all Tamiflu resistant by now); it was about using the evidence of seasonal influenza as a proxy for Pandemic A/H1N1. As the editorial described the history (http://www.bmj.com/cgi/content/full/339/dec07_2/b5164) “Following the outbreak of influenza A/H1N1 in April 2009, the UK NHS National Institute of Health Research commissioned an update of the Cochrane systematic review of neuraminidase inhibitors in healthy adults.” A narrow question of great importance in a country in which the decision was to pass out Tamiflu to all healthy individuals with any hint of ILI.

Yes you are right that the methods they used are needlessly narrow, not considering good evidence that failed to meet their arbitrary definition of acceptability.

Yes you are right that the resultant article was written poorly enough as to allow misinterpretation of the results and that the media has magnified that error by misrepresenting it completely. But even so …

I guess our reactions come from two different perspectives. You (and Susan) see the risk that some doctors will be idiots and not use Tamiflu for those who have a good chance of benefiting from it because they have read of this report in the press. I see my fellow idiot physicians currently overusing Tamiflu beyond the CDC guidelines which are themselves a bit of a Tamiflu carpetbombing approach. They need the reminder that there is no solid evidence that it does all that much of importance in otherwise healthy adults and that there is some risk of harm.

I am operating from the clinician’s POV of being presented with an individual with ILI (in my case a child, often one under 2 years old with a febrile URI or croup, which could be H1N1 or not) and having to decide if Tamiflu is indicated. And we have established that my clinical judgment (along with every other clinician’s) sucks and that the available testing is worse yet. If the evidence were solid that Tamiflu was going to significantly reduce the risk of complications that the individual was at high risk of if they did have H1N1, then I should call it H1N1 in order not to miss a treatment opportunity. The cost of a false negative in that case would be high and the benefit of a true positive meaningful. But given that the evidence is only moderate, for a small benefit, the cost of my false negative and the benefit of my true positive are both less. I should hold off on making the call more often in order to avoid the cost of false positives (side effects and dollars both).
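The trade-off described above can be put as a back-of-the-envelope expected-value calculation. This is my own sketch, not anything from the comment, and all the numbers are invented for illustration:

```python
# Toy model of the treat/don't-treat decision for a patient with ILI.
# All probabilities and payoffs are made up for illustration only.

def net_benefit(p_flu: float, benefit_if_flu: float, harm: float) -> float:
    """Expected net benefit of treating one ILI patient.

    p_flu: probability the illness really is influenza
    benefit_if_flu: expected benefit of the drug when it really is flu
    harm: expected cost of treating (side effects, dollars), paid regardless
    """
    return p_flu * benefit_if_flu - harm

# If the drug reliably prevented serious complications, treating on
# suspicion pays off even when most ILI isn't flu:
strong = net_benefit(p_flu=0.3, benefit_if_flu=1.0, harm=0.1)

# If the benefit is modest, the same clinical picture argues for holding off:
weak = net_benefit(p_flu=0.3, benefit_if_flu=0.2, harm=0.1)

print(f"strong evidence of benefit: expected net benefit = {strong:+.2f}")
print(f"modest evidence of benefit: expected net benefit = {weak:+.2f}")
```

The sign flip as `benefit_if_flu` shrinks is the commenter's point: the weaker the evidence of benefit, the more false positives (unnecessary treatment) dominate the calculation.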

Double blind RCTs are conducted to establish that a medication works. *If* the beneficial effects of the medication are demonstrated, observational studies can be helpful in evaluating its usefulness in routine clinical practice. However, if double blind RCTs fail to show any beneficial effect, the drug shouldn’t be incorporated into clinical practice. If it is, and observational studies subsequently suggest that it’s beneficial, the apparent benefits shouldn’t be attributed to the drug.

For example, say RCTs indicate that an expensive medication has no therapeutic effect, but observational studies suggest that it does. If physicians tend to prescribe the drug more often to insured wealthy people, who receive better health care in general, the drug may appear to be beneficial because it was given to people who were more likely to recover anyway.
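The wealthy-patient example above is easy to demonstrate with a toy simulation (my own sketch; the prescribing and recovery rates are invented): a drug with zero true effect looks beneficial in a naive observational comparison when prescribing tracks wealth and wealth independently predicts recovery.

```python
# Simulated confounding by indication: the drug does nothing by construction,
# but wealthy patients both get it more often and recover more often.
import random

random.seed(42)

n = 100_000
treated, untreated = [], []
for _ in range(n):
    wealthy = random.random() < 0.5
    # Wealthy patients are prescribed the drug more often...
    got_drug = random.random() < (0.8 if wealthy else 0.2)
    # ...and recover more often for reasons unrelated to the drug.
    recovered = random.random() < (0.9 if wealthy else 0.6)
    (treated if got_drug else untreated).append(recovered)

naive_treated = sum(treated) / len(treated)
naive_untreated = sum(untreated) / len(untreated)
print(f"recovery rate, drug:    {naive_treated:.2f}")   # ~0.84
print(f"recovery rate, no drug: {naive_untreated:.2f}") # ~0.66
# A naive comparison shows a sizeable "benefit" for a drug with no effect.
```

Stratifying the comparison by wealth (comparing treated to untreated within the wealthy and non-wealthy groups separately) makes the apparent benefit vanish, which is the kind of design and analysis control for confounding discussed in this thread.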

There are strategies that can reduce confounding in an observational study, but such studies will always be more likely to be confounded than well designed, double blind RCTs.

The idea that you can put the reliability of study designs in a linear order, while a common misconception, is contrary to how evidence is actually used by scientists and fallacious.

Are we talking about “levels of evidence” here? Did you check out the link I gave to the Centre for Evidence Based Medicine at the University of Oxford? They seem to have bought in to the “misconception” too. As did the professors who taught me evidence based medicine, and most of the medical community.

Remember, too, that RCTs are only a tiny, almost minuscule fraction of the science that is done. Toxicologists and chemists and physicists rarely randomize. Yet they produce highly reliable science.

Double blind RCTs aren’t the *only* way to generate reliable data, but they generate the *most* reliable data. RCTs aren’t always applicable depending on the branch of science and the given research question. We want to know if Tamiflu works. Double blind RCTs are the best way to find out.

Don: Read the actual article. It is quite explicit. The commentaries are all add-ons to dress up the fact they didn’t find much and make a big to-do about the data (which I agree is a serious problem). Read the conclusions and what the article adds.

I live in the UK. The “just 20% had swine flu story” is one example of epidemiology taken out of context. I don’t know how exactly they arrived at that 20%, but I can tell you that the HPA reported that for people calling NHS Direct who were sent a self-swabbing kit, the positive rate was only 7%.

On the other hand, in an outbreak at Eton College, after they identified an index case, they notified all students and staff and asked people to self-identify. Out of 102 people who self-identified, 63 were still symptomatic at that point. PCR tests on all 63 came back positive. I repeat, the detection rate was 100%. You don’t see such kinds of information in the Daily Mail, do you?

The accuracy of such tests depends on who is taking the test, whether the samples were processed properly, etc. Of course, that is too complicated for the likes of the Daily Mail.

Quinn: Yes, that’s what RCTs are meant to do, and they may or may not be able to do it. Whenever someone talks this way about RCTs they omit all the qualifications they are then forced to stick in: “adequately powered,” “properly conducted,” “properly analyzed,” “sufficient follow-up,” etc. That’s because once allocation is made they are just another kind of observational study where confounding and other biases can return. And randomization does not eliminate confounding. It just reduces its probability, and it does so according to sample size. The problem I am speaking against is that uttering the magical words “randomized clinical trial” seems to befog the minds of many otherwise knowledgeable people. I don’t know how much you were taught about two things: how to do an RCT; and how to do a meta-analysis. But as Meat Robot @10 says, once you’ve seen the sausage made you don’t think it’s so magical anymore.

I think that this discussion could go on for a long time and that a better way for me to approach it will be to write a post about RCTs so we can isolate our points of agreement and disagreement. Because we are now repeating ourselves.

Revere,
I’ve repeatedly used the qualifier “well designed” when speaking of RCTs. I’m not saying well designed, double blind RCTs are perfect or magical, I’m saying they’re the best way to tell if a drug works.

Quinn: Yes, you have repeatedly used that qualifier. But you and most people not engaged in this work don’t know what it means, and even if you did, it’s not the end of the story. It’s the beginning. Any of my students can produce a well designed study, on paper. But they can’t execute one. In the real world these studies can be extraordinarily difficult to conduct, analyze and interpret properly. So what you are presenting me is either a platitude or a tautology. If a study is really good, it will show you what you want (that’s what it means to be really good, after all). Randomization doesn’t make that happen. It’s just another tool in the tool box. It has been elevated to magical status. I don’t think you understand that.

I’m not going to go down the RCT-vs-whatever track because that horse has been beaten to death many times over.

What gets my ire up is the tone with which all this is being reported, starting from the BMJ itself. Breathlessly, with ‘shock and awe’, as if there’s been some major discovery. Well, guess what? There hasn’t. There isn’t anything in there that we don’t already know, period.

So my next question is, why the big fuss?

Let’s look at the caricature of tamiflu being handed out like candy. Granted there IS some truth to that, anecdotally, and I don’t support the UK policy, but things happen in the fog of war that don’t necessarily stand up well to armchair scrutiny. So let’s look at some real numbers here. The UK has a population of 60+ million. Unlike the US, the summer wave in the UK was several times bigger than the previous seasonal flu wave (see more detailed analysis here http://www.newfluwiki2.com/diary/4115/are-we-done-here ). Granted not all of the cases were due to H1N1, but then by definition this was ILI tracking. Comparing like for like, this was an extraordinary ILI wave in the middle of the summer, when not much ILI generally happens.

So how many people got infected by H1N1? It’s anybody’s guess. But let me tell you about the outbreak at Eton (because it was thoroughly documented). When they identified the index case, the school had already broken up for a short break, so self-identified cases (102 of them) were dispersed all over the country (Eton being a boarding school). They were swabbed by whoever was available in their local area. These tests came back at an astonishing 100% positive by PCR. Because they were not taken at a single center, I tend to believe the results are probably reasonably accurate. But the more interesting part is this. After it was all over, they did some seroprevalence testing, and found 39% of the students tested positive. (see HPA report, link in previous comment).

In other words, in a school with 1300+ students, 39% were seropositive, 8% (102) self-reported clinical illness, of whom those who were still symptomatic tested 100% positive, but only 3% (39) sought medical treatment. Granted that the seroprevalence results may overestimate the true level of infection, we are still looking at a lot of people having been infected during that single outbreak over a 4 week period, of whom only a small fraction sought medical care. (FYI, tamiflu prophylaxis was offered to the whole school, with an uptake of 48%. Whether that is high or low depends on where you are coming from…)
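For anyone who wants to check the arithmetic, the Eton figures above reduce to a few lines. This is just a sketch using the numbers quoted in this comment; “1300+” is taken as exactly 1300, so everything is approximate:

```python
# Rough check of the Eton outbreak figures quoted above.
# All inputs are the thread's own numbers; "1300+" students is
# approximated as 1300, so the derived values are approximate too.
students = 1300
seropositive_frac = 0.39
self_reported = 102        # self-identified clinical illness
sought_treatment = 39      # sought medical care

seropositive = round(students * seropositive_frac)    # ~507 infections
pct_self_reported = 100 * self_reported / students    # ~7.8%, i.e. "8%"
pct_treated = 100 * sought_treatment / students       # 3.0%

# Of the estimated infections, only a small fraction sought care:
care_seeking_frac = sought_treatment / seropositive   # roughly 8%

print(seropositive, round(pct_self_reported, 1), round(pct_treated, 1))
```

The striking ratio is the last one: of the roughly 500 students the serology implies were infected, well under a tenth ever sought medical care.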

Anyway, the point of all this is, an AWFUL lot of people were infected. Now, the Daily Mail (and others) complain that >1 million doses of tamiflu were given out. We all know that compliance is never 100%, but even if 1+ million doses of tamiflu were consumed in a country with 60 million people, we are still talking about only 2%, give or take, of the population.

Where is the carpet-bombing?

We can debate whether this was/is a good policy. And we will likely never agree here, which is fine. What I dislike most is hyperbole and grandstanding. As if Cochrane, BMJ et al have suddenly caught the UK government red-handed doing something terrible that they didn’t tell the public about.

Guess what? Nothing happened. It was a policy. It may be a good one, or a bad one. Only time will tell. But there’s no call to conflate the issue (and confuse the public) with grandstanding and hype.

In the meantime, just for the record, even Jefferson et al, after reporting 567 serious neuropsychiatric adverse events (the most serious AE under discussion) from Japan, agree that the risk is RARE:

It is, however, estimated that more than 36 million doses have been prescribed since 2001, making such harms (even if confirmed) rare.

Oh I agree that this article doesn’t say much. I can’t say that I have seen the big to-do about it: the first I heard of it actually was here. No docs I know are talking about it. But then I don’t get out much.

2% of a national population getting a medication for dubious indications is, to me, carpet-bombing.

To use your specific example, offering Tamiflu prophylaxis to the entire school and having nearly half the population take them up on it is carpet-bombing. The benefit gained by that? Any? By your numbers most had already uneventfully recovered. Most would have done just fine without the medicine. Was there a single hospitalization, or even an outpatient case of pneumonia, likely prevented by doing that? In fact, revere, assuming that the studies excluded by the Cochrane review do indeed convincingly show a small but real benefit in preventing hospitalization, then what is the number needed to treat to prevent a single hospitalization? How many of that number would experience mild side effects (nausea, nightmares, etc.)? How many would experience more significant reactions (I had one kid with a serious anaphylactic reaction, for example, but also other allergies, etc.)?

Let’s stick to your UK numbers, Susan. The NHS cost is allegedly 20 pounds per course. That would come to 20 million pounds spent, but online I find that the actual cost to the UK was 500 million pounds. Now how many hospitalizations did that money prevent? Could its use have been restricted to a much higher risk group and prevented nearly or exactly as many hospitalizations for a fraction of the cost?
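A back-of-the-envelope version of that question can be sketched with the thread’s own rough figures: 20 GBP per course, about 1 million courses dispensed, and the NNT range (over 100, possibly as high as 1000) quoted from the BMJ observational-studies paper elsewhere in this thread. None of this is official costing; it only shows how the cost per case avoided scales with NNT:

```python
# Cost per pneumonia case avoided, for a range of NNT values.
# Inputs are this thread's rough numbers, not official data.
courses = 1_000_000      # courses dispensed (approximate)
cost_per_course = 20     # GBP, as quoted in the comment above

for nnt in (100, 1000):
    pneumonias_avoided = courses // nnt
    cost_per_case = cost_per_course * nnt
    print(f"NNT={nnt}: ~{pneumonias_avoided} cases avoided, "
          f"{cost_per_case} GBP per case avoided")
```

At the optimistic end of the quoted range that is 2,000 GBP per pneumonia avoided; at the pessimistic end, 20,000 GBP, which is the arithmetic behind the “restrict it to a higher risk group” argument.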

Tamiflu is held up by some to be this magic drug, much like how revere believes some hold up the RCT. While the review was not a great example of EBM its message that Tamiflu may be overused and is of only limited value is important for the public to hear. More important than reassurance that it does work a little bit.

Oh, I love that cost debate. Sure we spent however many hundred million pounds on tamiflu. Let me remind you that the tamiflu was stockpiled several years ago. The UK was among the first to start stockpiling, in anticipation of a possible H5N1 pandemic. Even with extensions, these meds were just about ready to expire. So even if we didn’t use it, the money was down the drain.

As I said, it’s easy to be armchair generals. Pandemic planning is complicated business, and things never happen the way you planned. If a pandemic with H5N1 had happened, would the same people complaining now consider the costs justified? Probably yes.

For me, I’d rather the money was wasted, either the drugs expired and there was no pandemic, or somehow some of it was used in a mild pandemic that probably didn’t need that level of treatment. I’d MUCH rather that money was ‘wasted’, than if it was ‘well spent’.

Don: Let me summarize what the paper seems to say. Let’s see if we can agree:

The data suggest that neuraminidase inhibitors are modestly effective at reducing the symptoms of influenza (about a one day reduction) for healthy individuals who get flu. Generalizing this to very ill people in hospital seems reasonable to them, although they don’t have data. However, they believe it is unlikely that ethics committees would permit a trial comparing treatment with no treatment in people with life threatening influenza.

The authors believe NIs should not be used in the routine control of seasonal influenza. They believe use during a pandemic is reasonable but have no data on it.

Yeah, well, I am sure there is a pile of antibiotics about to expire in a warehouse somewhere. We probably should hand it out to a lot of people with colds before it goes bad, y’know? After all, treating every cold with an antibiotic is sure to prevent a pneumonia or two.

Sorry, I come from the pediatric side, and in peds we learn that often the best thing to do is to do nothing, just so long as you do it well. Very seriously. You need to know when not to test and when not to treat at least as much as you need to know when to test and when to treat. An article whose message to a treatment-happy populace is that doing nothing (other than TLC) may be the better choice if you are an otherwise healthy individual is a good thing to me.

Yup revere, not much really to it. Modest benefit and no EBM comment on hospitalized patients. Poorly written as they really should just have said that that question was outside the scope of the review which focused only on healthy adults. As for pandemic use they think it is reasonable to generalize from seasonal to pandemic and that its role in controlling the spread of a pandemic is limited at best – if used then only as part of a “package of measures to interrupt spread”.

Don, you can agree or disagree with the UK policy. FYI I don’t support it either, not all of it, and I’ve been a severe critic over the years.

But that is not the issue under discussion. The issue is the BMJ articles, whether they were justified. I think they are technically justified on content, but not justified in presentation. Irrespective of whether the UK policy is a good one, there was NOTHING that was new and unknown, and nothing shockingly unethical or dangerous that justified the level of grandstanding accompanied by uninformed accusations in the media.

Hype is bad for a sane debate of policy. It’s also bad for science. It diminishes the credibility of BMJ and Cochrane et al.

That’s just my personal opinion. I’m sure there are plenty who disagree. Fine by me.

Don: Let’s talk about antibiotics. If you use them for something they aren’t effective for, does that mean you shouldn’t use them at all? Because the BMJ paper says NIs are not effective for ILI, and that is essentially being interpreted as “don’t use them for real flu.” And we are talking about doing nothing well, or poorly.

You see, I read the article as perhaps being spun that way, and as poorly phrased, but not as actually claiming that Tamiflu should not be used at all.

I follow the CDC’s guidance but allow myself the flexibility described above in deciding where I draw the line in labeling a particular febrile URI in an under-two as likely H1N1 or not. The issue as I presented it earlier is very real to me. If I were convinced that there was a significant lost opportunity to treat with a false negative, I’d label more readily; without that I am less likely to overdiagnose, even knowing that by doing so I will also miss some. Asthmatics and those with neurodevelopmental/neuromuscular issues, on the other hand, I call more and Rx more readily: they are less likely to have some other virus with a fever that high, and more likely not just to be hospitalized but to die.

What I’m hearing is there are actually no hard and fast rules. It’s a judgment call.

You’re worried about people over-prescribing. I’m worried about both over- and under-prescribing. What I’m most worried about is that the clinicians with the weakest judgment are probably the ones most likely to be swayed by media hype.

Revere,
re: post 34
Your suggestions that I lack knowledge and understanding wouldn’t be helpful even if they were true. Please keep in mind that you don’t know much about my educational background or professional experience. As Susan quite rightly pointed out, the “RCT vs whatever” issue has been beaten to death, so I’ll leave it alone.

But I have another question. In the first study that Susan posted (at comment 4), 71% of the patients hospitalized for influenza had received the vaccine. Less than a third of the population (in Canada, at least) is typically vaccinated for influenza. What do you make of this?

Susan: I’ve been referring to you in the third person here, but I’d appreciate your thoughts too.

Thanks for the link, Susan. I got the figure from recent news reports that I’ve seen and heard, but I didn’t have a specific source in mind. You’re right that the figure increases with age and risk factors.

About 36% of the subjects in the study were under 15, though. Even if 75% of the elderly in the province had been vaccinated, the finding is hard to explain.

Quinn: May I remind you that you called me presumptuous? But I don’t really mind. I’m happy to engage you on the substance now as I did then. But for the record, I said that you and most people like you (meaning people with training and knowledge but not experience actually doing these studies) don’t know what’s involved — if I am in error (which on the evidence of what you have been saying I don’t think I am but I will take your word for it), you can respond to what I actually said, not merely take umbrage. When you called me presumptuous I responded by asking what was presumptuous about what I said (and got no answer) and in this case I have spelled out what you don’t seem to understand: “Randomization doesn’t make that happen. It’s just another tool in the tool box. It has been elevated to magical status. I don’t think you understand that.”

So rather than just taking umbrage at my inference (that you didn’t understand something which I took care to spell out), you can respond by demonstrating that you do understand it, or by asking for clarification, as I did with your (I believe unfounded and not particularly constructive) accusation that I was “presumptuous,” a clarification I never received. Not to put too fine a point on it, but I did indicate what I thought you didn’t understand, while you merely said I was presumptuous.

So I’ll say it again. Randomization is just a tool to deal with one kind of bias, residual confounding by uncontrolled confounders. One tool among several. By itself it works no magic. I don’t think you appreciate that (I’ll say it that way if the word “understand” bothers you). The issues with randomization are deep and somewhat controversial but I am assuming we will remain skating on the surface since that’s the level it is usually treated in these discussions, and usually without harm. But not always, so if you want to get into the weeds about randomization I’m ready.
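The claim that randomization merely makes confounder imbalance improbable, with the probability shrinking as the sample grows, is easy to demonstrate with a toy simulation. This is my own illustrative sketch, not anything from the Cochrane paper; the 10-percentage-point threshold and 50% confounder prevalence are arbitrary choices:

```python
# Toy illustration: randomize n subjects, half to each arm, where
# 50% carry an unmeasured binary confounder. Count how often the
# two arms end up differing on it by more than 10 percentage points.
import random

def imbalance_rate(n, trials=2000, threshold=0.10, seed=1):
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        confounder = [rng.random() < 0.5 for _ in range(n)]
        idx = list(range(n))
        rng.shuffle(idx)              # random allocation to arms
        arm_a = idx[: n // 2]
        arm_b = idx[n // 2 :]
        pa = sum(confounder[i] for i in arm_a) / len(arm_a)
        pb = sum(confounder[i] for i in arm_b) / len(arm_b)
        if abs(pa - pb) > threshold:
            bad += 1
    return bad / trials

for n in (20, 100, 1000):
    print(n, imbalance_rate(n))
```

With 20 subjects, a sizeable imbalance on the unmeasured confounder is common; with 1000 it is rare but never impossible, which is exactly the point: randomization buys a probability, not a guarantee.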

That isn’t true. I said your statement was presumptuous and I offered a very clear, itemized explanation as to what the statement assumed. Here’s a recap:

you said: “This seems to me to be another argument in favor of the vaccine: it prevents DNA damage from the virus.”
I said “This is presumptuous.”
and explained:
“This assumes that a) the H1N1 virus is mutagenic, and b) the vaccine is not. Neither is a safe assumption.” There’s more of an explanation as to why the second isn’t a safe assumption in the post (the first is obvious).

Randomization doesn’t make that happen. It’s just another tool in the tool box. It has been elevated to magical status. I don’t think you understand that.

This spells out what you think I don’t understand? Randomization doesn’t make *what* happen? I never claimed that randomization makes a study perfect, or that it hasn’t been “elevated to magical status” by some.

My experience with research methodology (or lack thereof, if you choose to believe) isn’t relevant to the argument I’ve made. The Evidence Based Medicine folks at the University of Oxford share my views on “levels of evidence” – do you think they don’t understand randomization? It’s widely accepted in the medical community that double blind RCTs are the gold standard for research design. I’m shocked that you’d contest this.

Revere,
I’ve calmed down a bit and I’ll retract my statement that “I’m shocked that you’d contest this”. I think I do know where you’re coming from.

We may be arguing two slightly different things.

I actually agree on the following:
-observational studies can be useful and there are ways to reduce confounding, other than randomization.
-RCTs may be poorly designed and don’t always generate reliable data.
-the hierarchy of evidence does not eliminate the need for careful evaluation of study design in weighing evidence.

My argument has been that properly designed RCTs are the best way to answer the question “do NIs work?”. In other words, if we were going to conduct a new study to answer the question, a large, properly designed, double blind RCT would be the best choice. And if we’re assessing a bunch of different studies that are all well designed, the RCTs would get more weight.

But, if we want to answer the question “do NIs work?” on the basis of existing evidence, we need to carefully evaluate the studies we have. And it’s likely that some observational studies will be more useful than some RCTs. It would be inappropriate to weigh the evidence on the basis of study type alone.

I’m guessing that you don’t disagree so much with the “hierarchy of evidence”, but with its rigid application(?) I can appreciate that.

Susan, I have answered only for myself, and while that is judgment, it is also as messy a piece of sausage-making as any meta-analysis. And while it may be good medicine (in my assessment) it does risk medicolegal exposure: the “safer,” defensive-medicine thing to do is to think less and treat everyone in a CDC risk group who even might have an ILI. But that is another discussion.

How badly did the BMJ misrepresent or sex it up, and how much of that was the media coverage of it?

I read those reports as saying in several places that this is a question about treating otherwise healthy adults (as was widely done in the UK during this pandemic), not treating those with risk factors.

“…health policy makers often have to make important decisions about new drugs when not all relevant trials have been undertaken. This is the situation currently regarding the use of antiviral therapy in H1N1 influenza. It may be argued that at such times we should be informed by all available evidence and not constrained by randomised trials alone. …

… Results of multivariable analyses were broadly in line with those reported in the randomised trials. For participants with clinically diagnosed influenza, the estimated number who needed to be treated with oseltamivir to avoid one diagnosis of pneumonia was always over 100, and may be as high as 1000. Only one of the observational studies reviewed here considered safety issues. …

… generally support the conclusion that oseltamivir may reduce the incidence of pneumonia and other consequences of influenza in otherwise healthy adults. However, these events are rare, so for most otherwise healthy adults treatment of influenza with oseltamivir is not likely to be clinically important.

A potential advantage of observational studies is that they can provide evidence on the use of a drug in a realistic setting. However, this advantage was largely undermined by the studies’ selection criteria …

… The estimated effect of antiviral drugs in people with existing cardiovascular disease was substantial, and the difference in rates of death and serious morbidity were potentially clinically important. …

… Our rapid review of these “real life” data suggests that oseltamivir may reduce the risk of pneumonia in otherwise healthy people who contract flu. However, the absolute benefit is small, and side effects and safety should also be considered. None of the studies examined the role of oseltamivir in patients with H1N1 influenza, which may be associated with higher rates of pneumonitis than seasonal influenza.[14] We did not consider the evidence for the use of oseltamivir in high risk patients, although several of the studies identified by Roche were in special populations. Other observational studies suggest that early intervention with antivirals for influenza may benefit a range of high risk patients and potentially improve survival rates, but these studies are also open to residual confounding.[13-20]”

The BMJ article seems to be a bit much perhaps but this beating them up also seems uncalled for.

Quinn: Thanks for this good summary of the situation. I think that you ably set out where we are both coming from, and, as usual, when the dust settles it turns out people are not so far apart as it may have looked at the outset. I appreciate your willingness to see the conversation through to the point where we each have been able to clarify our views (noting that blog posts and comment threads are not ideally suited to well organized, nuanced and coherent arguments but still work if time is taken).

Don: The issue with BMJ isn’t just this article but a pattern of provocation and sensationalism that appeared when the new Editor arrived. There was the notorious Jefferson piece on vaccines which became the pretext for The Atlantic article which took up so much time and space here (and around the net and the MSM in general) and those same conflicted journalists have tried to leverage this piece, too. The Cochrane Collaborative is a network of volunteers, many of whom do an excellent job of survey and summarization, producing reviews which are useful for others looking to get a quick overview of a topic and guide to some of the literature. Because, like any review, they are limited in scope and full of selection biases of their own (sometimes clearly evident and sometimes not) they must be used judiciously.

The problem is that some of the volunteers have become zealots and have even made their reputations writing contrarian, attention-getting pieces, and BMJ has taken advantage of this for its own attention-getting purposes. Sadly, high profile journals are doing this in all sorts of ways these days, including using embargoed pre-publication seeding of articles to get journalists’ attention, deliberate selection of articles for public newsworthiness rather than scientific value, etc. That’s a fact of life. But what irritated so many people about the BMJ practice was both the pattern and the lack of responsibility in recognizing that this regrettable practice has a context, and in this case the context was the first influenza pandemic of the new millennium. This has been a trying period for everyone, including clinicians, policy makers, public health, the public, the media and many others, and therefore “sexing it up” has a consequence in this case that it doesn’t have in other contexts: it seeds the last thing we need, more confusion, confusion that isn’t the result of a confusing science but the result of a public message meant not to enlighten but to gain viewership (the colluding TV station), market share (The Atlantic, BMJ) and attention (the paper’s authors, who are least culpable for the paper itself but guilty of blowing this up in the commentaries and other material that turned a not very interesting or enlightening paper into a media event).

Let me say again that I have sympathy for the generic data issues that arise with drug companies and their own agendas as they overlap with and interfere with scientific publishing. But entangling that issue with the use of the only class of drugs we have for treating influenza in the midst of a pandemic was not responsible.

Don, see revere’s response. I agree entirely. I don’t always see eye to eye with the reveres. We’ve had some vigorous debates where we finished up neither convincing the other. But on this one, the BMJ articles annoy me for precisely the same reasons cited by revere, especially the issue of context. I hate opportunism in science, particularly opportunism that doesn’t care about the unintended (erring on the charitable side) consequences.

So to recap: your problem isn’t this particular article and its narrow conclusions, which as it turns out are, while not earth shattering, also not unfair or inaccurate (if a bit breathless in style), but the pattern of articles under the newest BMJ editor, and how it was played by the media misinformation machine.

Thanks so much for these excellent comments on a very important topic!

Despite the lack of definitive supporting data from randomized controlled clinical trials, patients with severe H1N1 disease do seem to benefit from early antiviral therapy.

Starting treatment with a neuraminidase inhibitor within 2 days after symptom onset was significantly associated with a lower risk of ICU admission or death in hospitalized 2009 H1N1 patients (N=272; median age, 21 years), as compared with later treatment (P <0.05).

In a multivariable model that included age, admission within 2 days or more than 2 days after the onset of illness, initiation of antiviral therapy within 2 days or more than 2 days after the onset of illness, and influenza-vaccination status, the only variable that was significantly associated with a positive outcome was the receipt of antiviral drugs within 2 days after the onset of illness.

At the bedside, physicians taking responsibility for the outcome of seriously ill patients often do not have the luxury of well-designed, definitive randomized controlled clinical trials to guide their therapeutic decisions (the science of medicine). Clinical decision-making is made even more difficult because young healthy patients can quickly succumb to severe H1N1 disease, and early in the course of their illness, when they may have only mild symptoms, is the optimal window of opportunity for the greatest possible benefit from antiviral therapy (the art of medicine).

Good investigative journalists know that where there’s smoke there’s usually fire. In this case, it seems, we have perhaps one more example of pharmaceutical companies withholding and distorting data.

I believe there is a special place in Hell reserved for those engaged in profit-driven dishonesty, deceit & deception that endangers patients’ lives.

I hope the media will continue to shine the light on this important issue and encourage badly-needed reform to restore scientific integrity in clinical research.

The story isn’t that Tamiflu is of little use; that would be a non-story indeed, everybody knows it’s a waste of time.

The story is that Roche destroyed data from their own research, then their PR department pre-wrote the conclusions of the studies that were actually published and instructed the researchers and paper writers to arrive at those conclusions.

If you think this is a big story, wait until people start going to prison.

ewan: You are right. The story isn’t that Tamiflu or Relenza are of little use. Because that’s not what the paper says (although the story has mistakenly been interpreted that way). As for the rest, I have problems with Roche and other Big Pharma handling of data. If some people go to jail over it, fine with me.

George: Except when they don’t. They “disprove” nothing and they have no hard data. They don’t do any studies of their own. They summarize other studies using a specific protocol. They, in effect, are doing observational studies where the data points are RCTs. That means all the problems of observational studies are on their doorstep, too, for example, observer bias, selection bias, etc. So their main fault is hubris. Not uncommon, even in the “hard” sciences (where, BTW, it is rare to find anybody doing a randomized trial; what’s the matter with you people? you don’t do “science”?).