Fudged statistics on the Iraq War death toll are still circulating today

Disclosure statement

Michael Spagat does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

What happens when a scientific journal publishes information that turns out to be false? A fracas over a recent Washington Post article provides an illuminating case study in how, even years after they’re published, uncorrected false claims can still end up repeated time and again. But at the same time, it shows how simply alerting responsible journalists and news editors to repeated errors can do a lot to combat false claims that stubbornly live on even after they’ve been debunked.

The story begins with an article the Lancet published in 2006, presenting shockingly high new estimates of violent deaths in Iraq. Although the article’s reported violence numbers increased over time far more rapidly than those reported by other sources, including the Iraq Body Count (IBC) project, a graph in the paper gave the inaccurate impression that IBC trends actually tracked the new data quite closely – ostensibly validating what, at first glance, seemed like a very hard-to-swallow new dataset.

In addition, the graph included a third dataset purporting to show violence trends measured by the US Department of Defense (DoD), trends that were again presented as consistent with the authors’ new data. The finished graph was central to the paper’s effort to “mainstream” the shocking new numbers by connecting them with other data on war violence.

Yet a few weeks after the article was published, letters sent to the Lancet from other researchers discredited the graph entirely.

Falling apart

First, Debarati Guha-Sapir and two colleagues pointed out that the graph used two Y axes, a device notorious for creating the illusion that two curves moving in the same direction at different speeds are in fact moving at the same speed.
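To see how a second Y axis can manufacture agreement, consider a minimal sketch with made-up numbers (not the Lancet data). Plotting a slowly growing series on its own axis is equivalent to linearly rescaling it onto the faster series’ range – which forces the two curves to share their start and end points, however different their growth rates really are.

```python
# Hypothetical series: one growing explosively, one growing steadily.
fast = [100, 400, 900, 1600]
slow = [10, 20, 30, 40]

# Giving "slow" its own Y axis amounts to stretching it so that its
# range matches the range of "fast":
lo, hi = min(fast), max(fast)
s_lo, s_hi = min(slow), max(slow)
rescaled = [lo + (v - s_lo) * (hi - lo) / (s_hi - s_lo) for v in slow]

print(rescaled)  # -> [100.0, 600.0, 1100.0, 1600.0]
```

Both curves now begin at 100 and end at 1,600, so to the eye they appear to rise together – even though one series quadruples at every step while the other merely adds a constant.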

But this was just one of the problems with the graph. The article’s authors also tweaked the trends by comparing their own data with cumulative IBC data. Specifically, they plotted the first 13 months of their data against 13 months of IBC data, their second 13 months against 26 months of IBC data, and their third 13 months against 39 months of IBC data. Each successive block of their data was thus set against an ever-longer – and therefore ever-larger – cumulative IBC total, inflating the apparent agreement between the two trends.
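The effect of comparing period counts against cumulative totals can be seen with a toy example (hypothetical counts, not the real figures): even a source whose per-period counts are completely flat produces a steeply rising curve once the periods are accumulated.

```python
# Made-up per-period death counts for an IBC-like source: a flat trend.
ibc_periods = [150, 160, 170]

# Accumulating them, as the graph effectively did, yields running totals:
ibc_cumulative = []
running = 0
for count in ibc_periods:
    running += count
    ibc_cumulative.append(running)

print(ibc_cumulative)  # -> [150, 310, 480]
```

The cumulative series rises sharply by construction, so plotting another dataset’s per-period counts against it will always suggest a shared upward trend – regardless of what the underlying data actually show.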

And in another published letter to the journal, IBC’s Joshua Dougherty demonstrated that the DoD curve the authors included did not represent what they said it did. For example, going back to the source, he said the DoD data they cite include both deaths and casualties, and “do not offer any direct means by which to calculate what number might be deaths, let alone civilian deaths”.

Rather surprisingly, in their reply – also published in the Lancet – the authors actually admitted to several problems with their graph. But their mea culpa was grudging and incomplete. In it, they suggested that the issues with the graph were merely technical, and that the trends really did match – but as the letters mentioned above pointed out, that is not correct.

One would think that by this point, deafening alarm bells would have been ringing in the Lancet editors’ ears. After all, they had just published a highly compromised graph that two letters had utterly discredited. And the authors had even admitted errors, an unusual development to say the least. Surely the Lancet would withdraw the graph – or, failing that, leave it online with a prominent warning attached. But no. Instead, the journal simply left the article and the graph online, unamended, in perpetuity.

And so, in spring 2018, enter the Washington Post and its reporter Philip Bump.

Caught out

To his credit, Bump wrote and published an article marking the 15th anniversary of the invasion of Iraq, an event that’s getting far too little media attention in the UK and US. The article read well – right up until it landed on the discredited graph, which it reproduced and presented as if it were legitimate.

The Lancet could argue that if Bump had only read the follow-up letters it published, he would never have reprinted the discredited graph. But that is akin to arguing that there is no need for warning labels on cigarettes because people can just read the scientific literature on smoking and consider themselves warned. In practice, many people will simply assume the graph is kosher because it sits on the Lancet website with no warning attached.

As you might expect, the dust-up over the graph is just the tip of the iceberg with the 2006 article. I myself have comprehensively debunked the article, documenting at least some of the inaccuracies that propped it up. The article’s lead author, Gilbert Burnham, was censured by the American Association for Public Opinion Research for refusing to explain basic elements of his methodology. He was also sanctioned by Johns Hopkins, which “suspended Dr. Burnham’s privileges to serve as a principal investigator on projects involving human subjects research”. Johns Hopkins also said it would send an erratum to the Lancet to address inaccuracies in the article’s text.

Bump did not mention any of this fallout from the article he cited; presumably he just didn’t come across it in the course of his own research. But maybe he would have been primed to dig deeper if the Lancet had done all it could to label the graph with an appropriate warning. Letters to a journal are better than nothing, but they are not enough to correct a published false claim; it is incumbent on all involved to flag inaccuracies and misrepresentations as conspicuously as they can.

That said, this particular chapter at least has a happy ending. I wrote to Bump and the Washington Post and they fixed the story, in the process demonstrating an admirable respect for evidence and a commitment to the truth. The Lancet would do well to follow their example.