If you wanted to minimize the real-life effects of misconduct, you might note that some of the retractions we cover are in tiny, obscure journals hardly anyone reads. But a new meta-analysis and editorial in JAMA today suggest — as a study by Grant Steen did a few years ago — that the risk of patient harm due to scientific misconduct is not just theoretical.

Synthetic colloids received market approval in the 1960s without evaluation of their efficacy and safety in large phase 3 clinical trials. Subsequent studies reported mixed evidence on their benefits and harms.

There has been controversy over the use of HES for decades, with the most recent high-level review showing “no significant mortality increase.” But one of the reasons for that review — by the prestigious Cochrane Collaboration — was to see if the dozens of now-retracted studies by Joachim Boldt had an effect on the overall evidence for HES. Boldt’s retractions resulted from a lack of evidence of IRB approval, as well as the likelihood of faked data.

An internal investigation found no evidence of harm to the patients Boldt treated, and the Cochrane review found “no change in the findings related to the inclusion or exclusion of the studies by Boldt et al.,” according to the editorial. But the new meta-analysis found something different:

In other words, there was an increased risk of death and kidney failure among those given HES:

The report by Zarychanski et al highlights the potentially important and adverse effect of scientific misconduct. With the inclusion of studies by Boldt et al, the medical community might reasonably have concluded that use of hydroxyethyl starch was not inappropriate. Yet the analyses in which these studies were excluded shifts the balance of evidence toward harm. This study also demonstrates the importance of revising and revisiting recommendations and guidelines in light of new systematic reviews and evidence.

To make matters worse, most respectable journals won’t publish “follow-up” or confirmatory articles. Most researchers don’t want to waste their time validating someone else’s work if they aren’t going to have anything to show for it.

I would take the link honestscientist provides with a pinch of salt. Its popular links include a story by David Icke and one about the Illuminati. “Collective Evolution” is also your one-stop shop for just about all your ‘alternative’ science and worldviews: the Illuminati are real and trying to control the world, fluoride in drinking water is poisonous, global warming is not man-made, vaccines are dangerous, and aliens are visiting us; all of these are frequent stories on that website.

For those who want to get an idea about the views on that site, but dare not go there, I have a choice quote:
“The current time is in alignment with the celestial bodies that are bringing forth the openings of the star gates and the doorways for transformational energies. We are past the mid point. The understanding of knowing who you are and what you are to become is manifested as social changes that are being brought into the world. This is a time when the world needs to directly face the shifting of energy bringing us into a new age of action and transformation. No one is doing all this to us, we are doing it to ourselves collectively in order to make way for the new. From the chaos of the Piscean eclipse and the Aryan influx of chaotic inspiration and ungrounded action there is a great awakening into stability and a want for a reconnection and a grounding of the planet. As we move from the Piscean age of mystery and confusion of saviors and sacrifice and pain, the truth unfolds.”

When the NIH’s belt starts tightening, more researchers may turn to industry for funding assistance. This is ominous. More often than not, testing a hypothesis may require someone else’s synthetic compound, and it may well be one from industry (which opens a potential conflict of interest: we give you this compound, but…)

Ivan,
“was to see if the dozens of now-retracted studies by Joachim Boldt had an effect on the overall evidence for HES.”

That’s not quite what the JAMA authors did. They started with 38 studies that included none of Boldt’s retracted papers but did include 7 papers by Boldt from before 1999 that were not retracted (the misconduct investigation was limited to post-1999). Using this record, the odds ratio of mortality was 1.07 but was not significant.

They then checked and found that the 7 Boldt papers which have not been retracted are significantly different from the rest of the database, and if they remove those papers, the odds ratio of mortality goes up to 1.09 (slight change) but now it is significant. (Boldt’s findings alone are that there is no harm and maybe a benefit — his numbers were stretching the confidence intervals at the low end.)

The JAMA authors did NOT include the retracted Boldt papers in any analysis, so this article doesn’t say what effect the retracted Boldt papers have on the field. We can assume that if the non-retracted Boldt papers messed up the field, the retracted ones messed it up as well, but that is not an actual finding of the JAMA paper. They also do not report whether the Boldt papers messed up the results on secondary outcomes like renal failure, because they only looked at secondary outcomes using the non-Boldt records.
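The exclusion logic described in this thread — pool every study, then re-pool with a suspect subset removed and compare the estimates — is straightforward to sketch. Below is a toy fixed-effect, inverse-variance sensitivity analysis in Python. Every study name and number in it is invented for illustration; none of it is the actual JAMA or Cochrane data.

```python
import math

# Toy dataset: (label, log odds ratio, standard error, flagged-for-exclusion).
# All values are fabricated for illustration only.
studies = [
    ("trial_a", math.log(1.12), 0.05, False),
    ("trial_b", math.log(1.10), 0.06, False),
    ("trial_c", math.log(1.08), 0.07, False),
    ("boldt_1", math.log(0.80), 0.10, True),   # hypothetical non-retracted study
    ("boldt_2", math.log(0.85), 0.10, True),   # hypothetical non-retracted study
]

def pool(rows):
    """Fixed-effect inverse-variance pooled odds ratio with a 95% CI."""
    weights = [1 / se ** 2 for _, _, se, _ in rows]
    total = sum(weights)
    mean = sum(w * lor for w, (_, lor, _, _) in zip(weights, rows)) / total
    se_pooled = math.sqrt(1 / total)
    lo, hi = mean - 1.96 * se_pooled, mean + 1.96 * se_pooled
    return math.exp(mean), math.exp(lo), math.exp(hi)

or_all, lo_all, hi_all = pool(studies)
or_excl, lo_excl, hi_excl = pool([s for s in studies if not s[3]])
print(f"all studies:      OR={or_all:.2f} (95% CI {lo_all:.2f}-{hi_all:.2f})")
print(f"flagged excluded: OR={or_excl:.2f} (95% CI {lo_excl:.2f}-{hi_excl:.2f})")
```

In the toy data, as in the narrative above, the flagged studies sit at the protective end of the distribution, so removing them nudges the pooled odds ratio upward and tips the lower bound of the confidence interval past 1, turning a non-significant estimate into a significant one.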

Thanks for the comment, but the phrase you quoted describes the Cochrane review, not the JAMA one:

There has been controversy over the use of HES for decades, with the most recent high-level review showing “no significant mortality increase.” But one of the reasons for that review — by the prestigious Cochrane Collaboration — was to see if the dozens of now-retracted studies by Joachim Boldt had an effect on the overall evidence for HES.

Yes, I noticed the silence about renal failure too.
It’s paywalled, so I can’t read it, and I don’t know what the overall rate of mortality across the meta-analysis was.
A couple of possibilities spring to mind:
1. Would a time selection show similar results? I.e., would (excluding Boldt) earlier papers be less likely to show a difference in mortality than later studies? That might point to an improvement in the nature of the alternative materials.
2. The nature of the patients enrolled in the individual studies might well influence results. If one study was enrolling patients whose condition was more critical, it might see an effect that might not be seen in another study of patients who were less ill or not suffering sepsis, etc.

Finally, I am not sure of the nature of the fabrications Joachim Boldt is supposed to have undertaken, but death is a fairly major outcome; a study might need to be retracted yet still have captured this type of data correctly.

@LGR,
It is indeed complicated. It is certainly possible that the non-retracted Boldt papers, and even some of the retracted Boldt papers, contain perfectly valid data, and he showed no mortality because his hospital “did it better.” I don’t know how that can be proven one way or the other, although the JAMA editorial highlights some of the findings from the article that seem to rule that out. The JAMA article also appears to rule out an effect of different materials, as something like 50-60% of the statistical power of the meta-analysis comes from 3 very recent large trials. But even the editorial says, “Thoughtfully planned, adequately powered, and rigorously conducted randomized controlled trials will be needed to assess the issues of safety and efficacy of hydroxyethyl starch 130/0.4 starch [the newest formulation] vs crystalloids.”

But I think the only reasonable thing to do is leave Boldt out altogether: if he was doing one unethical thing (experiments without IRB approval), then there is no assurance he was not doing other unethical things. “Falsus in uno, falsus in omnibus.”

Two observations: first, the link kindly provided by honestscientist is indeed horrifying, but I can’t find documentation of the figures mentioned in the collective evolution post; certainly not in the links at the end of the post. One refers to a 29% incidence of reported “conflict of interest” in clinical cancer trials; the other is even less damning.
I would agree that we have major problems, but I don’t think they are quite as bad as the collective evolution post claims. Just my opinion.

Second, hydroxyethyl starch is not good for resuscitation in patients with shock, due to sepsis or otherwise. It causes nephrotoxicity and other unpleasant reactions at times. The bogus studies by Boldt, if they contributed to the general attitude that using HES for resuscitation is a good idea, definitely caused patient harm. It seems from the literature (I’m too old to have much experience with this kind of emergency medicine) that HES was quite popular until very recently; then, in the last year or so, studies have been published showing increased mortality in severe sepsis as well as nephrotoxicity. A lot of ER doctors who were enthusiastic users of HES must be very red in the face as they realize what has transpired.

…and thus goes more evidence of the fallacies of “Evidence-Based Medicine,” EBM-based clinical guidelines, and the like. It is hard to justify the use of pathways based on this junk, particularly given the extant industry taint on current research in so many fields. So much of the ACA, ACO, and “effectiveness” literature is tainted in like fashion. What a mess.

It’s more than one fallacy.
1. Statistical analysis. A nurse said that the average temperature of a patient over 24 hours was normal – 36.7 C. (It was 41 C at night and it is now 21 C.)
2. I don’t see much difference between what David Icke is concluding and what all these molecular studies are concluding: they all mix small islands of real knowledge with a huge number of unknowns, although the latter are not mentioned – partly because they are simply not known and partly because these people live in a world of fantasy. The stories of cholesterol were amazing, and they probably are not finished.
3. True, research is research. “Liebig pointed out the injustice of confounding alchemy with gold-making. ‘Alchemy was a science’, he wrote…” (quoted from The Alchemist in Life, Literature and Art, by John Read). The alchemist tried his remedies on people, but he did not have thousands of patients and billions of dollars. The fallacy now is having these, and having the mass media deceive the population about science.
4. How about substituting opiates with aspirin and publishing dozens of papers fraudulently claiming success (Scott Reuben)? They said there was no data that patients were harmed.
5. More concretely, our organism is not a water solution; it is a gel, which in addition is divided minutely by membranes. The chemical reactions take place at specific locations and proceed very slowly (otherwise, 10 years of our life would be finished in 10 minutes); that is what life is, as compared with in vitro systems. Here, people add new colloids; that is a whole new ball game. It is very interesting to modify the physico-chemical properties of a natural fluid as opposed to adding reactants, but it should first be tried on dogs, and for a very long time.

Just for clarification, no one should be under the illusion that Boldt is being framed or singled out for any reason other than his own actions. Several investigations have been conducted into his research activities, and none has concluded that he ran a reputable research program. Even his own hospital has distanced itself from him, noting that at least 10% of his data was fabricated and that none of the studies they investigated had proper ethics committee approval.

Now as for the other data that Boldt reported on in his publications. They too were conflicting with what we were seeing from similar trials. It has been previously shown that mortality, an objective serious adverse event, is the most difficult outcome to bias. Therefore we began with that… when we noticed that Boldt’s data was discrepant [showing a protective effect of HES, while the remaining body of literature showing harm] we knew we couldn’t go any further with his data. That’s why Boldt’s data is only reported for mortality.

As for the negative comment about evidence-based medicine, I see that this paper contradicts the proposed argument. Without the principles of evidence-based medicine, including a need for regular re-evaluation of the evidence, we would still be debating the association between HES and unwanted outcomes.

Finally, I should mention that no recent study or published systematic review has shown a significant benefit of using HES in fluid resuscitation, not even the pharma-sponsored trials. Guidelines and governing agencies are taking a close look at their past recommendations, and many have been redrafted or are currently being re-evaluated.

“Now as for the other data that Boldt reported on in his publications. They too were conflicting with what we were seeing from similar trials. It has been previously shown that mortality, an objective serious adverse event, is the most difficult outcome to bias. Therefore we began with that… when we noticed that Boldt’s data was discrepant [showing a protective effect of HES, while the remaining body of literature showing harm] we knew we couldn’t go any further with his data.”

You say that one research group shows a protective effect that is in discord with other groups showing no effect or harm, therefore the discrepant data was discarded. The plain language of this statement is quite concerning, although I am sure you did not mean it this way. There are many reasons that one group might find different results than others. Bias is only one such reason. And I assume that if mortality data is easily biased, that goes both ways — excess mortality could also be the result of bias.

By discarding data that disagrees with your hypothesis, you naturally strengthen the statistics of the data that agrees with your hypothesis. If I did this on my cell culture experiments, it would be cherry-picking and probably misconduct.

Obviously here you have an objective and independent reason to discard Boldt’s data, and I’m sure you did not mean to indicate that you discarded the data that disagreed with your hypothesis because it disagreed. And yes I am nitpicking. But words are important.

The decision to exclude was not as naive as you make it out to be. In a previous review by the team, Boldt’s studies were included, no questions asked, as there was no proof of fraudulent activity. In 2011, 88% of Boldt’s publications since 1999 were retracted by their respective journals for scientific misconduct. Even his own organization has said that at least 10% of his work is fabricated. No one investigated his research prior to 1999. Therefore the question posed when conducting the current study was: should the scientific community automatically assume that before 1999 Boldt was an honest and reputable researcher who only went rogue from 1999 onwards, or do we consider all his research apples from the same poisonous tree and discard it to begin with? We decided to give his published data a chance to discredit the pundits and show that it was following the same trends as the others, or at least close enough to consider the difference acceptable as sampling error rather than systematic error (bias). The rest is, well… history.

I don’t take issue with what you did, just your phrasing of why you did it. The reason to discount the non-retracted study was statistical discordance PLUS independent evidence of misconduct in his work. Without evidence of misconduct, discordance alone will usually not be a reason to discount an experiment or result.

To clarify: first they excluded Boldt’s retracted articles only, and still found a lack of statistical significance for harm. Then they excluded the old Boldt studies that hadn’t been retracted, as well as the retracted ones. THEN they found statistical significance for harm (RR 1.07, not significant; RR 1.09, significant). The old studies, not retracted, were all at the lower end of the confidence interval, and most likely were bogus as well, even though they hadn’t been retracted.
So, Boldt made it look as if HES was OK, at least not harmful (to mortality), while in fact it WAS harmful. Very bad.
How many times has a similar thing happened, and it has not (yet) been discovered? How much of medical and biological science must be critically re-examined to see if it really stands up? I’m afraid it looks like MOST of our current practices have to be reviewed.
By the way, my favorite example of a wrong practice, debunked over a hundred years ago and still in use: the use of percussion for determining heart size. All medical students are still taught this technique (as far as I know.) By tapping the chest and listening for the “hollow” vs “dull” sound, you are supposed to be able to tell how large the heart is. This works for liver size and for the presence of lung consolidation. (However, the heart is hidden from the anterior chest wall by a lobe of the lung, making the mechanics of the technique questionable at best.) A study reported in a book written in 1896, shortly after the introduction of radiographic techniques, showed that heart size as determined by fluoroscopy did not correlate at all with the apparent size as determined by percussion. (I received a copy of this book as a bequest from a deceased doctor.)
That study was completely ignored and heart size by percussion continued to be taught, to this day as far as I know (someone please tell me otherwise).
When I was taught the technique in medical school, it seemed fishy to me: I couldn’t tell the difference in sound. I didn’t use the technique, especially since everybody got a CXR in those days. Nowadays, there is a trend against routine CXRs, which means people with enlarged hearts but no symptoms of congestive heart failure would go undetected. It seems routine CXRs aren’t “cost effective.” I wonder how many doctors realize that you can’t tell whether a person’s heart is enlarged without a chest X-ray.
The cost of X-rays has gone down dramatically with the introduction of digital techniques for image capture and presentation. Eliminating silver-based X-ray film has cut the cost. Of course, prices charged are still going up in most places, ignoring the drop in cost. I wonder if routine CXRs would be cost effective now if the cost reduction were factored in.

This is reminiscent of, but also has important differences from, a matter that we recently investigated. We accidentally stumbled across some inconsistencies in the literature on skin antisepsis (it might be considered a fairly trivial topic, but it has patient safety implications), investigated further, and found a fairly large-scale medical literature error that had affected primary clinical trials, systematic reviews, and evidence-based guidelines. Between about one-third and one-half of the literature on this topic was affected, simply because authors overlooked an important component that likely contributed to trial outcomes. The underlying error is actually quite trivial.

Our findings have potential implications for patient safety, by way of the error having led to unsubstantiated and/or incorrect recommendations in evidence-based guidelines, but so far there are no recorded instances of adverse outcomes (although this would be difficult to capture).

I followed the link. A thorough study showing the unacknowledged presence of an additional, essential active ingredient in a widely used formula for topical antisepsis… an ingredient that can be effective by itself. For unknown reasons, a third of the authors of studies evaluating the efficacy of this topical combination failed to mention the additional active ingredient in their conclusions. The ingredient was used as a solvent.
I’m trying not to ruin the suspense. You’ll have to follow the link.