New study argues war deaths are often overestimated

A new study, the Human Security Report, argues that politics and fund-raising priorities often lead to overestimates of war deaths, touching off a controversy among the researchers who work on the issue.

By Jina Moore, Correspondent
January 22, 2010

A Rwandan Hutu rebel carries a gun in the village of Kimua, eastern Congo. A study released last week argues that reports of battlefield deaths, such as those in the Democratic Republic of Congo, have been overestimated.

The hotly contested report calls into question the larger estimate, made by the International Rescue Committee (IRC), and alludes to controversies surrounding death tolls in Darfur and Iraq. It also questions the most general assumptions about conflict, from how deadly war is to whether the number of war dead can even be counted.

“[It’s] not just a battle of titans,” says Greg Greenough, director of research at the Harvard Humanitarian Initiative. “It really is [a battle] of philosophies, of how we approach a very difficult question of deaths of people in war.”

But it’s not just philosophical. Because government, military, and humanitarian officials all take into consideration death tolls and casualty rates, the numbers can play a key role in determining the response to a conflict.

Death tolls, the report argues, are better understood as fundraising arguments aimed at donors than as real scientific estimates of the human cost of war – a contention that has angered experts in the field.

"Many years ago we went out and attempted to report to the world about an unfolding crisis in the Congo. We did it carefully, but as we described at the time, crudely, at great risk to life and limb, and at only a few percent of the cost of this Human Security Report," wrote Les Roberts, a collaborator on previous Human Security Reports and on the IRC studies, in an open letter to the Center. "It is unbecoming to grab a headline a decade after by tearing down a study with erroneous speculation."

Criticism of the new report

After the International Rescue Committee published its early findings on the DRC death toll, humanitarian aid to Congo increased 500 percent. Peacekeeping assistance followed, and today the DRC hosts the world’s largest peacekeeping mission, with more than 20,000 members.

The IRC conducted five mortality surveys in the country between 2000 and 2007. The limitations of those surveys, the IRC says, have always been clear.

“We’ve discussed those [limitations], we’ve published those, and I think there’s been general agreement among many experts that they don’t invalidate our findings,” says Richard Brennan, who helped write the IRC’s last two surveys from Congo.

Dr. Brennan also questions the latest report’s own conclusions. “I think there are inconsistencies; there is cherry-picking of data; and they haven’t referenced other important reports that would counter what they’re saying,” he says.

In situations such as Congo, surveys strive to determine the number of people who would probably still be alive had there been no war. There’s never a single number, but rather a range – one necessarily based on problematic data.

“You start with the acknowledgment that there’s just not good population data during a conflict in most places,” says Harvard’s Dr. Greenough.

Brennan points out that the IRC studies reveal not a single number, but a scale of violence, with 3 million deaths at the low end of the IRC estimate and over 7 million at the high end. The point, he says, is to understand and prepare to respond to the scale of the conflict.
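The arithmetic behind such excess-mortality estimates is simple even though the inputs are contested. A minimal sketch (with hypothetical figures that only loosely echo the Congo debate; none of these numbers are the IRC's actual survey inputs) shows how a modest change in the assumed baseline mortality rate swings the total by millions:

```python
# A toy version of an excess-mortality calculation, the method behind
# the disputed Congo figures. All numbers are hypothetical stand-ins,
# not the IRC's actual survey data.

def excess_deaths(measured_cmr, baseline_cmr, population, months):
    """Excess deaths, given crude mortality rates in deaths per 1,000 per month."""
    return (measured_cmr - baseline_cmr) / 1000 * population * months

population = 60_000_000   # rough order of magnitude for the DRC (assumption)
months = 96               # eight years of conflict (assumption)
measured = 2.2            # hypothetical measured crude mortality rate

# Two plausible baselines: the "what mortality would have been without
# the war" assumption that the two camps disagree about.
for baseline in (1.5, 2.0):
    total = excess_deaths(measured, baseline, population, months)
    print(f"baseline {baseline}: {total:,.0f} excess deaths")
```

With these made-up inputs, moving the baseline from 1.5 to 2.0 deaths per 1,000 per month cuts the estimate from roughly 4.0 million to roughly 1.2 million – the kind of sensitivity Mack describes.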

Disagreements about which data to use and how to collect it have also resulted in rows about the number of dead in Iraq and in Darfur. A similar battle is now being fought about Congo.

“Our point in doing this is not to say we’re right, you’re wrong,” says Andrew Mack, project director for the research behind the latest report. “It was a way of trying to demonstrate that relatively small change in assumptions can produce a huge change in outcomes.”

But critics argue, and Mack acknowledges, that such a debate can lead to cynicism. “Were we to do it again – we didn’t phrase it quite appropriately,” says Mack. “If we were to rewrite it ... we would language it differently.”

The politics of numbers

Such professional disagreement – one that is often shaded by politics – has real-world consequences, as seen in Congo. It’s precisely because policymakers rely on such numbers to make decisions that it’s so critical to get them right, says Mack.

“The problem is – Darfur showed it, Iraq showed it – if policymakers start to get suspicious about numbers this whole issue of population surveys is going to lose credibility, and a lot of people worry about that,” he says.

That’s because the surveys used to estimate death tolls also generate attention and funds. But there are different schools of scientific thought about how to craft these surveys; the report essentially discounts the predominant approach.

But numbers and the disputes they inspire can be motivated by politics as much as science.

“The famous axiom that the truth is the first casualty of war is unfortunately also true of casualties,” says Anthony Cordesman at the Center for Strategic and International Studies in Washington. “There’s a tendency far too often to have the casualty estimate track with the NGO’s politics ... [and] there’s a tendency, if it’s the government that’s conducting the war, to underestimate.”

But Brennan argues that without surveys of the kind the report calls into question such estimates are “plucked ... out of the air and ... propagated.” The IRC conducted its first Congo survey in response to an unsourced figure of 100,000 deaths that appeared in a New York Times article, he says.

“In the case of Congo, the scale was completely unappreciated,” he says. “That’s what death toll estimates can do. They can counter misinformation when noncredible estimates are too high and draw attention to a crisis when noncredible (estimates) are sometimes too low.”

The case for and against death tolls

The Human Security Report also casts doubt on numbers from other places. It questions the Darfur estimate and points out that the first comprehensive study on Iraqi civilian deaths, published in 2006 by the British medical journal The Lancet, faced credibility challenges.

While some say death tolls are an imperfect science, statistician Patrick Ball – director of the Human Rights Data Analysis Group at Benetech in Palo Alto, Calif. – disagrees.

In work that Mack calls “the gold standard” of investigations, Ball and his team have estimated casualties for conflicts in Guatemala, Liberia, Peru, and elsewhere.

“Do people make mistakes? Absolutely,” says Ball. “Were mistakes made in IRC estimates? For sure. I think that Mack and his collaborators have pointed to some really important errors that need to be corrected.... Does that mean we should just give up on estimation? Absolutely not. It means we need to take the insights that Mack and his collaborators have brought from the IRC study and say, ‘How can we build a better study?’”

Today’s wars are less deadly

The project found a 70 percent decrease in high-intensity conflict – those wars with 1,000 or more battle deaths per year – since the end of the Cold War, and a 40 percent overall decrease in conflict, according to Mack.

Meanwhile, he says, emergency humanitarian assistance is increasingly effective – in part because those in war-torn areas are healthier when the fighting starts, a fact he attributes to peace-time health projects such as immunization or breast-feeding campaigns.

The number of battle deaths is also going down, he says, from 33,000 a year in 1950 to “just around” 1,000 a year in 2007.