Too much has been made of the claims about main street bias in the new Lancet study — if you do a few calculations you’ll find that even if it exists, it doesn’t make much difference. As Jon Pedersen said:

Pedersen did NOT think that there was anything to the “Main Street Bias” issue. He agreed, I thought, that, if there was a bias, it might be away from main streets [by picking streets which intersect with main streets]. In any case, he thought such a “bias”, if it had existed, would affect results only 10% or so.

But now Johnson, Spagat and co have put together a working paper where they argue that main street bias could reasonably produce a factor of 3 difference.

How did they get such a big number? Well, they made a simple model in which the bias depends on four numbers:

q, how much more deadly the sampled areas near main streets are than the other areas that allegedly were not sampled. They speculate that this number might be 5 (ie those areas are five times as dangerous). This is plausible — terrorist attacks are going to be made where the people are in order to cause the most damage.

n, the size of the unsampled population over the size of the sampled population. The Lancet authors say that this number is 0, but Johnson et al speculate that it might be 10. This is utterly ridiculous. They expect us to believe that Riyadh Lafta, while trying to make sure that all households could be sampled, came up with a scheme that excluded 91% of households and was so incompetent that he didn’t notice how completely hopeless the scheme was. To support their n=10 speculation they show that if you pick a very small number of main streets you can get n=10, but no-one who was trying to sample from all households would pick such a small set. If you use n=0.5 (saying that they missed a huge chunk of Iraq) and use their other three numbers, you get a bias of just 30%.

fi, the probability that someone who lives in the sampled area is in the sampled area, and fo, the probability that someone who lives outside the sampled area is outside the sampled area. They guess that both of these numbers are 15/16. This too is ridiculous. The great majority of the deaths were of males, so it’s clear that the great majority occurred outside the home. So the relevant probabilities for f are for the times when folks are outside the home. And when they are outside the home, people from both the unsampled area and the sampled area will be on the main streets, because that is where the shops, markets, cafes and restaurants are. Hence a reasonable estimate for fo is not 15/16 but 2/16. If you use this number along with their other three numbers (including their ridiculous estimate for n) you get a bias of just 5%.
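The calculation is easy to reproduce. The sketch below is my own reconstruction of a bias ratio from the four parameter definitions above (the working paper’s exact expression isn’t quoted in this post, so treat the formula as an assumption); it does, however, reproduce the factor-of-3 figure as well as the 30% and 5% figures.

```python
# A sketch of the bias ratio implied by the four parameters. The exact
# expression in the working paper isn't reproduced here, so take this
# form as my own reconstruction from the parameter definitions; it does
# reproduce the factor-of-3, 30% and 5% figures discussed in this post.

def bias(q, n, fi, fo):
    """Ratio of the measured death rate to the true death rate.

    Sampled-zone residents spend a fraction fi of their exposure in the
    risky zone (relative risk q) and 1-fi outside it (relative risk 1);
    unsampled-zone residents spend fo outside the risky zone and 1-fo
    inside it. n is the unsampled/sampled population ratio.
    """
    rate_sampled = q * fi + (1 - fi)      # risk faced by sampled-zone residents
    rate_unsampled = fo + (1 - fo) * q    # risk faced by unsampled-zone residents
    # The survey sees only sampled-zone residents; the true rate averages both.
    return (1 + n) * rate_sampled / (rate_sampled + n * rate_unsampled)

print(bias(q=5, n=10, fi=15/16, fo=15/16))   # Johnson et al's numbers: ~3.0
print(bias(q=5, n=0.5, fi=15/16, fo=15/16))  # n = 0.5 instead: ~1.3, a 30% bias
print(bias(q=5, n=10, fi=15/16, fo=2/16))    # fo = 2/16 instead: ~1.05, a 5% bias
```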

In summary, the only way Johnson et al were able to make “main street bias” a significant source of bias was by making several absurd assumptions about the sampling and the behaviour of Iraqis.

Comments

the size of the unsampled population over the size of the sampled population. The Lancet authors say that this number is 0, but Johnson et al speculate that it might be 10. This is utterly ridiculous.

Don’t you mean to say “Lancet authors *speculate* that this number is 0”, Tim?

They expect us to believe that Riyadh Lafta, while trying to make sure that all households could be sampled

That’s not what the Lancet report says he was trying to do. It says he was trying to sample main streets (however those are defined) and cross streets to them.

And the msb paper says: “The Iraq mortality study used a variation on the WHO/EPI methods that the Congo paper applied to widely distributed or larger units.”

Such a method precludes “making sure all households could be sampled” before the process even begins, under the assumption (speculation) that excluding the households that would be excluded by this short cut will not introduce bias.

It looks to me like Tim is right about the Main Street Bias. The paper gets its estimate of a bias of 3 by making wild assumptions. This is quite unusual if one is seriously interested in exploring the issue. At a minimum, they should provide a table with various parameter estimates. Of course one could calculate it, but such a table should be there to demonstrate what less radical assumptions would yield, especially about the value of n, the ratio of population in unsampled to sampled territory.

I can’t see anything here but a claim that an n of 10 would lead to substantial bias. The letter by Burnham and Roberts would contradict this.

That said, it sure would help to have more details on the methodology, including precisely what happened in the fieldwork in Iraq. I had expected that in the “Report” and was disappointed that it wasn’t there. A description of exactly who, what, why and when would go a long way to reducing the controversy.

“Of our 47 clusters, 13 or 28% were rural, approximating the UN estimates for the rural population of Iraq.”

Bohannon states that Gilbert Burnham did not know exactly how the Iraqi team conducted its survey. The text sent to Bohannon, which he fails to cite, said, “As far as selection of the start houses, in areas where there were residential streets that did not cross the main avenues in the area selected, these were included in the random street selection process, in an effort to reduce the selection bias that more busy streets would have.” In no place does our Lancet paper say that the survey team avoided small back alleys. The methods section of the paper was modified with the suggestions of peer reviewers and the editorial staff.”

If this is accurate, then the number would indeed be 0, and not 10. Johnson et al are essentially implying that Burnham et al are lying about their methods.

The claims about the L2 methodology have been slippery and contradictory. The passage you cite says:

“streets that did not cross the main avenues…were included in the random street selection process, in an effort to reduce the selection bias that more busy streets would have.”

There are two important points here. First, Burnham concedes that the methods as described in their published methodology would have this bias. Second, he claims that non-cross streets “were included”, but who knows what this means? How often, and by what method of selection, were they included? None of this has ever been described. How a “main street” was chosen or defined has never been described in the first place. If anything like the published methodology was used, even allowing some spill-over onto non-cross streets during the interviews, such streets would still not be included in anything like a representative or unbiased way. But if some number were somehow included, Burnham could still issue the vague statement that they “were included”, which is vague enough to still be technically true.

They seemed to have plenty of time in their report (and their companion document) to tell us all about all this research into “passive surveillance” that they’ve supposedly done, what “newspaper accounts” supposedly covered in Guatemala in 1980, along with many other superfluous arguments to the effect that nobody is supposed to think other lower findings contradict theirs, yet supposedly not enough space to give the methodology that produced their findings in the first place.

1. There is simply no way the Lancet study could have included all residential streets in the selection process (as Burnham claims) without departing from the published methodology in a fundamental way.

2. The Lancet study’s authors have failed to state how (and how often) they departed from the published methodology in order to select streets not connected to a main street. (The occasional case of 40 houses taking the interviewers around the block to a side street or two clearly isn’t sufficient to give them a chance of covering “all residential streets”.)

3. Since Gilbert Burnham has acknowledged that the Lancet team made “an effort to reduce the selection bias that more busy streets would have”, we must assume that he acknowledges that such a bias does in fact exist.

Therefore the burden must be on the Lancet team to demonstrate how, exactly, they reduced this bias in the selection process.

Tim Lambert wrote:
> The Lancet authors say that this number is 0,
> but Johnson et al speculate that it might be 10.
> This is utterly ridiculous.

Actually, based on the selection process described in the published methodology it’s not ridiculous at all. I would accept that the figure could be lowered if they departed from the methodology (ie to select streets not crossing main streets).

But the problem with that is you’ve got a survey that was conducted in a fundamentally different manner than the published account of it.

Tim Lambert wrote:
> Hence a reasonable estimate for fo is not 15/16 but 2/16.

Of course, in his rush to dismiss the main street bias figures as ridiculous, Tim fails to point out that the 15/16 figure already takes into account the women who stay at home, whilst allowing for two working-age males per average household of eight, with each spending six hours per 24-hour day outside their own zone.

Tim asserts that “the relevant probabilities for f are for the times when folks are outside the home”. This sounds like nonsense, unless Tim knows something about the “times” when people were typically killed (I don’t think this data was recorded by the Lancet study) – in which case f covers the whole 24 hour period. (If I misunderstand you here about “times”, Tim, please let me know).

Tim finds the MSB authors’ value for q plausible. So allowing for Tim’s absurdities and overstatements concerning n and f, we’re back to the MSB paper presenting a reasonable (though obviously not certain) case.

So when are these guys going to do their own mortality survey in Iraq? Or do they prefer the facts provided by government bureaucrats and overworked hospital staffers?

There are people who do polls in Iraq, so there shouldn’t be any problem in doing the whole thing over again, if funding is provided. It’s been two years since Lancet 1 – I’m surprised, given all the critics of that first paper, that nobody tried to do it again until Lancet 2 came out.

I don’t understand why critics of the study obsess over the “main street bias” when the study seems to have a much bigger red flag: the large number of deaths attributed to car bombs. Car bombs account for 13% of the deaths in the survey. If the authors extrapolated that proportion into the total number of excess deaths — and I think that’s what they did — then even at their low-end figure of 400k excess deaths you end up with 52000 deaths from car bombs. That averages to slightly over 40 deaths per day from car bombs since the occupation began, an implausibly high death rate.

How did they get such a high figure for car bomb deaths? The simplest explanation is that through some sort of error they dramatically oversampled especially violent areas of Iraq. And if they oversampled violent areas, then their entire death rate from violence will be considerably inflated.
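The arithmetic behind that per-day figure is easy to check (the round 1,200-day window for March 2003 to the survey cutoff in June 2006 is my own approximation):

```python
# Checking the car-bomb arithmetic above. The ~1,200-day window from
# March 2003 to the survey cutoff in June 2006 is my own rounding.

excess_deaths_low = 400_000   # low-end excess-death figure used above
car_bomb_share = 0.13         # share of survey deaths attributed to car bombs
days = 1_200                  # approx. days of occupation covered by the survey

car_bomb_deaths = excess_deaths_low * car_bomb_share
print(car_bomb_deaths)          # 52,000 car-bomb deaths
print(car_bomb_deaths / days)   # ~43 per day: "slightly over 40"
```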

Stephen Soldz wrote:
> The paper gets their estimate of a bias of 3
> by making wild assumptions.

Let’s examine these assumptions to see if they’re as “wild” as Stephen thinks. The “n=10” is based on representations (eg graphically on Iraqi street maps) of the street selection scheme published in the Lancet (plus the Lancet authors’ additional detail about spilling over into side streets). The only assumption here seems to be regarding the Lancet authors’ definition of main streets as “major commercial streets or avenues”. The MSB team considered conservative and liberal interpretations of this. The result is illustrated by the maps they’ve published (which, to me, depict clearly that n=10 is reasonable, even for liberal interpretations of “main street”): http://www.rhul.ac.uk/economics/Research/conflict-analysis/iraq-mortality/Iraqmaps.html

I don’t see any “wild” assumptions here. On the contrary, they’ve merely assumed that the survey was indeed conducted according to the description of it provided by the Lancet authors. The Lancet authors’ assertion that all streets were included in the selection process (which would result in n=0) flatly contradicts the methodology as published. The Lancet authors could instantly clarify this issue by providing the following (so why haven’t they?):

(a) The list of main streets from which they randomly sampled.
(b) A full and detailed description of exactly how they sampled streets not connected to a main street.

Until they do so, anyone reviewing the Lancet study must rely on what the Lancet authors have so far published. This is what the MSB team have done.

Moving on, what “wild assumptions” underlie “f=15/16”? The MSB team make the assumption that women, children and the elderly stay close to home, whilst allowing for two working-age males per average household of eight, with each spending six hours per 24-hour day outside their own zone. This yields f = 6/8 + (2/8 × 18/24) = 15/16. Any “wild assumptions” here?
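Spelling out that arithmetic:

```python
# The MSB team's f: six of the eight household members (women, children,
# the elderly) assumed always in their own zone, plus two working-age
# males each at home 18 of every 24 hours.
f = 6/8 + (2/8) * (18/24)
print(f)   # 0.9375, ie 15/16
```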

Women, children and the elderly staying close to home? Is this a wild assumption? Almost certainly not. To quote a TIME magazine reporter (Bobby Ghosh) based in Baghdad:

“Iraqi politics is now dominated by Islamicist parties – Shi’ite and Sunni. And many neighborhoods are controlled by religious militias or jihadi groups. Some of them openly demand that women confine themselves to their homes. Even where there are no such “rules”, many women say they feel safer staying indoors, or wearing the veil and abaya when they step out”

Does allowing for two working-age males (per average household of eight) to each spend six hours per day outside their own zone require any “wild” assumptions? To give some context, the UN puts the unemployment rate at 27%, but the Washington Post (for example) says it’s much higher. Many others have part-time or irregular jobs. One can assume that many men who are outside their homes aren’t at work. They may, for example, be in cafes or obtaining groceries (they’re more likely to choose local cafes/groceries, etc). And if you are a sunni, you probably avoid entering shi’ite areas (and vice versa) – generally you don’t go far from your neighbourhood unless you have to.

The MSB authors suggest a value for f of 15/16. This seems reasonable to me based on the above. It could be a little lower, perhaps, but not much. Tim Lambert suggests a value of 2/16. This implies that the average Iraqi (including women, children and the elderly) spends only 3 hours out of each 24-hr day in their own home/zone (presumably sleeping), and spends the other 21 hours outside their zone. Since this is clearly ludicrous, I imagine Tim is redefining “f” in an unspecified way, thus changing the whole equation (in a manner unknown to us). Tim might have grounds for doing this, but if so, can he please state what those grounds are, and on what assumptions they’re based (unknown assumptions are worse than “wild” ones).
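For concreteness, the hours-per-day reading of fo = 2/16 used in the paragraph above works out as:

```python
# If fo = 2/16 is read as the fraction of each 24-hour day spent in
# one's own zone, as the comment above does, the implied time at home is:
hours_in_own_zone = (2/16) * 24
print(hours_in_own_zone)   # 3.0 hours, leaving 21 hours outside
```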

Finally, since we’re debating “wild assumptions”, one might ask what assumptions underlie the Lancet authors’ (so far undemonstrated) claim that their methodology of selecting cross streets manages to include “all” streets in the selection process.

Robert Shone sent me the above comment as an email. I now see that it is posted. Here is my slightly edited reply (sent to him by email):

“Robert, if you want to call Burnham et al liars, do so openly. They stated in their Science letter: “As far as selection of the start houses, in areas where there were residential streets that did not cross the main avenues in the area selected, these were included in the random street selection process, in an effort to reduce the selection bias that more busy streets would have.” They assert that this was inadvertently left out of the original paper. As one who has published dozens of research papers, I find this entirely plausible.

This statement is now in the public domain. You are free, of course, to call them liars. But don’t play games about it. Either they are liars, or n is quite a bit closer to 0.

I am not averse to disagreeing with the Lancet authors. I do, however, admire them and take them at their word as being essentially honest. When MSB was first announced, I posted on the Media Lens message board a statement calling for people to not dismiss it and to look into possible sampling bias issues. Since then, I’ve seen the further work, which is dishonest in ignoring statements in the public domain.”

I do agree with Robert Shone and Josh, however, that it would be helpful to have more details on the L2006 methodology. I do hope that they produce such a document.

Let’s examine these assumptions to see if they’re as “wild” as Stephen [Soldz] thinks.

Let’s not. Let’s skip to the bottom line: Johnson et al. think the putative MSB could result in an overall bias factor of 3, while most others appear to think that the overall effect, if it exists, is much smaller. So let’s look at the 2004 Roberts study, which was quite similar to the Burnham study except that it did not use the same scheme for selecting clusters. If you (and Johnson) are right that a bias factor of 3 is plausible for the cluster selection scheme used in the 2006 Burnham study, then the Burnham study should have found something like 3x as many deaths as the 2004 Roberts study over the period from March 2003 to September 2004.

Robert et al: Could you explain why it is that although Lancet/JHU can produce 850 violent deaths a day in Iraq in 2005/6, the Sunnis and Shias even in their weekend splurges only manage on average about 100 per weekend? The weekly total needs to be about 6,000 to vindicate Lancet&co. If the big shows at weekends manage only 100, that leaves about 5,800 apparently unreported deaths per week. I think Roberts and Burnham are asleep on the beat and should get back to Baghdad to verify their daily count.


Stephen, nowhere have I called the Lancet authors “liars”. I have pointed out a fundamental contradiction between their published methodology and their statement that “all” streets were in included in the selection process. There are other possible reasons besides outright dishonesty for this contradiction. But until they clarify we will not know the reasons.

(Thanks for your email response, btw – I’ll respond to it at greater length than the above shortly).

Tim Lambert wrote:
> they get n=10 with a conservative interpretation of main street.
> This is not plausible as I explained above.

Tim, your “explanation” is based not on evidence, but on your acceptance of the unsupported claim by the Lancet authors that their methodology was designed from the outset to include all households. In other words you’ve accepted the very premise which the MSB research challenges.

Tim Lambert wrote:
> As for fo, you don’t seem to have read what I wrote about it.
> I have redefined it and I explained why.

I did read what you wrote (very carefully). Which is why I believe you haven’t adequately “redefined” f. The MSB authors are very clear in defining f. What is your definition, and on what assumptions do you base your value of 2/16?

Stephen Soldz wrote:
> Since then, I’ve seen the further [MSB] work, which is
> dishonest in ignoring statements in the public domain

The MSB authors didn’t “ignore” Burnham’s statement and are certainly not “dishonest” about it (as Stephen Soldz asserts). In fact the MSB team raised precisely this issue with Burnham et al in the long email correspondence mediated by Science magazine. They wanted to know how (and how often) the Lancet team departed from their published methodology to include streets that did not cross main streets. Burnham was unable to provide this information and, to date, has still not provided it, despite it being absolutely crucial to the process of evaluating sampling biases.

In other words, the only basis for the Lancet team’s claim that they included “all” streets in the selection process is their assertion that they did so. No information has been provided by them to support this assertion. The MSB work challenges (rather than “dishonestly” “ignoring”) this assertion, based on information which has been provided by the Lancet authors.

And while we may, like Stephen Soldz, assume that the Lancet authors are being honest, this doesn’t elevate an assertion to the status of evidence fit to include in a scientific review of a methodology. (Stephen, btw, whilst wanting us to assume that the Lancet authors are honest, is quick to dismiss the MSB team as “dishonest”).

Yeah, I’ve seen the estimates of violent deaths but in the residual way of estimating these kinds of things the precision for any subgroup is always going to be less than the precision for the whole. Which is not to say that I dismiss it (that would be a joshd-level of denial and evasion); just that I give it less weight than the whole.

Here’s my perspective: whenever you have a candidate for the One True Explanation, it has to handle all (or a good many) of the anomalies — not just the ones you or I cherry-pick to fit. If it can’t, then there’s probably a different explanation that addresses them.

So if you’re focusing on the violent deaths, a perfectly fair question is: what’s the mechanism by which MSB would result in a corresponding drop in non-violent deaths?

Robert Shone, the intent of the designers of the Lancet study was that all households should be in the sample frame. You seem to think that they just wanted to sample from 9% of the country, which is just silly. It is conceivable that their methodology for such sampling was flawed and some streets were not reachable by their sampling scheme. But anyone trying to get every household into the sampling frame would not use the very conservative definition of “main street” that Johnson et al use to get n=10. No-one would do this. It’s just ridiculous.

Let me cut and paste my more reasonable definition of fo for you:

>The great majority of the deaths were of males, so it’s clear that the great majority were outside the home. So the relevant probabilities for f are for the times when folks are outside the home. And when they are outside the home, people from both the unsampled area and the sampled area will be on the main streets because that is where the shops, markets, cafes and restaurants are. Hence a reasonable estimate for fo is not 15/16 but 2/16.

Tim Lambert wrote:
> You seem to think that they just wanted to sample
> from 9% of the country, which is just silly.

That’s not what I think. I have no doubt they intended (or “wanted”, to use your term) to make their sample as representative as possible of the whole Iraqi population, within practical/safety constraints. My point wasn’t about intention.

By the way, Stephen Soldz attributed some remarks about main street bias to Jon Pedersen (quoted by Tim Lambert at the top of this page). I have queried this with Jon Pedersen (who hadn’t read the MSB work prior to his discussion with Soldz). In an email, Pedersen says:

“Yes, probably Stephen Soldz confused the issue somewhat here. There are actually several issues:
1) I very much agree with the MSB-team that there is some main stream bias, and that this is certainly an important problem for many surveys – not only the Iraq Lancet one.
2) I am unsure about how large that problem is in the Iraq case – I find it difficult to separate that problem from a number of other problems in the study. A main street bias of the scale that we are talking about here, is very, very large, and I do not think that it can be the sole culprit.
3) The MSB people have come up with some intriguing analysis of these issues.”
(Jon Pedersen, email to me, 4/12/06)

Tim, I said I had no doubt they intended to make their sample “as representative as possible of the whole Iraqi population, within practical/safety constraints”. I didn’t say I had no doubt they originally intended to include all streets in the selection process. There’s an important distinction (for those paying attention).

The crucial point is whether they believe(d) a bias results from sampling “close” to main streets. Burnham seems to think so, but Les Roberts (at least from one quote I read) isn’t (or wasn’t) convinced.

“Yeah, I’ve seen the estimates of violent deaths but in the residual way of estimating these kinds of things the precision for any subgroup is always going to be less than the precision for the whole.”

The CI’s in the Lancet report appear to suggest otherwise in this case:

Total excess deaths: 654,965 (392,979-942,636)

Violent excess death ‘subgroup': 601,027 (426,369-793,663)

“Here’s my perspective: whenever you have a candidate for the One True Explanation, it has to handle all (or a good many) of the anomalies — not just the ones you or I cherry-pick to fit. If it can’t, then there’s probably a different explanation that addresses them.”

The MSB is an argument about bias wrt measuring “violent events” and violent deaths with this methodology.

Robert Shone wrote: The crucial point is whether they believe(d) a bias results from sampling “close” to main streets.

No, the crucial point is whether they sought to give all households an equal chance of inclusion insofar as that was practicable.

Meanwhile, joshd refers to a CI for the “violent excess death ‘subgroup'” in the Lancet report. The term “violent excess death” doesn’t appear at all in the Lancet report and the CI in question is for all post-war violent deaths, not for a component of excess deaths.

[Incidentally, I’ve changed my username to “Bob Shone” (from “Robert Shone” ) to avoid being confused with the other “Robert”]

Given Jon Pedersen’s views on main street bias (as expressed in the email to me quoted above), I hope both Tim Lambert and Stephen Soldz will consider amending their respective web pages, to avoid misrepresenting Pedersen.

Well, at least one of the Lancet authors (Burnham) had to believe in the possibility of a bias in order to state that the Lancet team made “an effort to reduce the selection bias that more busy streets would have”.

Why would they make this effort if they didn’t believe the bias existed? And in that case, why do various quotes from Les Roberts indicate he’s totally dismissive of the idea of such a bias? It doesn’t add up, whichever way you look at it.

Judging from the email you provide, Bob Shone, it sounds like Pedersen thinks a factor of 3 is a “very very large” main street bias. He evidently thinks the Lancet 2 paper is flawed, but that msb probably isn’t the main culprit.

I continue to wonder why nobody else has done a mortality study in Iraq, and I don’t mean this as snark. It’s rather peculiar. It’s been 2 years since Lancet 1 was denounced by so many (including IBC) as inflated. You’d think someone would want to get it right. Hell, you’d think someone would want to get it wrong – people conduct polls all the time in Iraq, and even if the samples aren’t large enough to determine mortality accurately (I don’t know), one could at least report something.

Or the UN or the US could sponsor a large-scale mortality survey run by Pedersen, for instance, but I suppose that’s crazy talk.

In fact what Pedersen says is: “A main street bias of the scale that we are talking about here, is very, very large, and I do not think that it can be the sole culprit.”

He also says: “I very much agree with the MSB-team that there is some main stream bias, and that this is certainly an important problem for many surveys – not only the Iraq Lancet one.”

Donald wrote:
> It’s been 2 years since Lancet 1 was denounced
> by so many (including IBC) as inflated.

Careful, Donald – you’re starting to sound like the folks at MediaLens. Try reading IBC’s 2004 press release which praises (and certainly doesn’t denounce) the 2004 Lancet study. See how many quotes you can find of IBC denouncing Lancet 2004. (But remember that Ron F, after trawling on Google for muck on Sloboda, only ever found one example of Sloboda being quoted as sounding critical of Lancet 2004 – and he’s not “denouncing” it. Only one example. Odd considering IBC’s “clout” with the mainstream media).

Bob, you really are making a mountain out of a molehill. Burnham is just stating the obvious. To make any inferences at all you want all households to have an equal chance of inclusion, or as nearly so as you can manage. Obviously people who live near main streets may be atypical. Where I live they are disproportionately young, single and wealthy, since the centre of town is expensive and unsuitable for families. Main streets would therefore be biased towards lower mortality rates.

I have seen nothing to suggest that Roberts is “totally dismissive of the idea of such a bias”; I have of course seen remarks of his to the effect that problems like this get more attention than they deserve. That’s not at all the same thing.

Judging by your own quotation from Pedersen, there is really no reason to suppose that he, Burnham and Roberts are very far apart at all on this issue. They all acknowledge the need to get as truly random a sample as possible and none of them is a taker for the idea that main street bias could be as severe as the Oxford group are suggesting.

Having said that I really don’t much care what they think. This isn’t some arcane branch of science where we have to take the word of leaders in the field as gospel, having no hope of getting to grips with the issues ourselves. For the most part the issues involved are straightforward enough. The bottom line is that either there were more than 300,000 violent deaths in Iraq since the invasion, or the Burnham et al paper is basically a fraud. Other possibilities (a gigantic fluke, spectacular incompetence) are just too far-fetched to be worth considering.

Bob, I followed IBC in 2004 and when Lancet 1 came out, they didn’t take a public position on its accuracy, though as you admit, Sloboda apparently let his real feelings out at one point. I wasn’t aware of Sloboda’s comment at the time. They more or less laid their cards on the table in the summer of 2005, when they released their 2 year analysis.

I wasn’t expecting an endorsement of Lancet 1, but I was expecting them to admit that they didn’t really know to what extent their data was complete, and I also expected them to say they didn’t know to what extent it was a representative sample of who was doing the killing. I expected a section on the sorts of biases that might be introduced when you have to rely on press reports and government data to determine the number of deaths and the identity of the perpetrators, and when they display a graph that shows that in most months coalition forces contributed an average of about 1 civilian death per day, I expected them to wonder if maybe either deaths were being covered up or the perpetrators were being incorrectly identified. In short, I expected some epistemological humility.

But no, they took everything in their database at face value and calculated percentages of who got killed by whom or by what sorts of weaponry to 3 significant figures and were triumphalist in tone about their methodology. And they grabbed onto the ILCS study because it verified (in their own minds anyway) their methodology and obviously they don’t want to hear that asking one question on a long survey might possibly lead to an undercount.

So, no, IBC didn’t directly attack Lancet 1 in 2005—-in fact at one point they praised it. But read carefully they obviously didn’t believe it and didn’t think you should either.

“Meanwhile, joshd refers to a CI for the “violent excess death ‘subgroup'” in the Lancet report. The term “violent excess death” doesn’t appear at all in the Lancet report and the CI in question is for all post-war violent deaths, not for a component of excess deaths.”

The study says:

“We estimate that between March 18, 2003, and June, 2006, an additional 654 965 (392 979-942 636) Iraqis have died above what would have been expected on the basis of the pre-invasion crude mortality rate as a consequence of the coalition invasion. Of these deaths, we estimate that 601 027 (426 369-793 663) were due to violence.”

Kevin seems to be right that the study does not use the exact term “violent excess death” when stating the figure, but the passage above says the same thing in different wording. Either he’s wrong or the passage from the report is mistaken.

In either case, I fail to see the relevance of his claim, as I’m not sure if or how he thinks this would change the basic point I was making about Robert’s claim.

Donald Johnson wrote:
> though as you admit, Sloboda apparently let his
> real feelings out at one point.

Well, it was hardly a denunciation of L1. In fact, it was difficult to be certain of what John Sloboda was commenting on – his quote was introduced out of context into a separate debate – you never knew what question he’d been asked and he doesn’t even mention Lancet by name. Why would you, Ron F or any of the MediaLens minions take one vague, unsatisfactory media quote as evidence for some sort of campaign by IBC against Lancet 2004?

If, as folks at MediaLens claim, IBC have both “clout” with the mainstream media and a desire to “rubbish” and “denounce” the Lancet study, then you should expect to see endless quotes from IBC in the media attacking the Lancet. But Ron F (in all his Google muckraking) managed to dig out just that one vague, unsatisfactory Sloboda quote from an obscure debate transcript. I don’t know anything about John Sloboda’s private opinions on the Lancet study, but the fact that only one critical quote of Lancet could be unearthed from the whole of the Worldwide Interweb says to me a lot of good things about IBC.

Bob, it’s a relatively subordinate point, but you are overworking the word “bias” in your comments. You use “selection bias” as though it were synonymous with bias in the final study estimates, but this is not the case. In what you’re calling the MSB paper, their parameter ‘n’ comes closest to representing their bad-faith (and, as Tim well says, absurd) guess about the size of “main street selection bias” in the work of their supposed scientific colleagues, while their calculated value for ‘R’ is their derived guess about the size of the resulting overall bias in their colleagues’ estimates. They discuss three special cases of their expression for ‘R’ in which some degree of selection bias leads to no “bias effect” in the final estimates at all, for instance.

Since, in the wake of their self-exciting and politically motivated press release, I called the Royal Holloway Dept of Economics’ dream team “jackasses”, I’m holding off criticism of their paper until I have something nice to say about them. Anyway, they have the paper out for comment sooner than expected after that press release, so “well done” to them for that.

Robert said; “If you (and Johnson) are right that a bias factor of 3 is plausible for the cluster selection scheme used in the 2006 Burnham study, then the Burnham study should have found something like 3x as many deaths as the 2004 Roberts study over the period from March 2003 to September 2004. It didn’t.”

Robert is correct.

In fact, the results from the second Johns Hopkins study are very consistent with those from the first. The two findings are within 10% of one another over the time period considered in the first study (March 2003 to August 2004), even though the methodology for selecting households to be surveyed was quite different.

“The results from the new study closely match the finding of the group’s October 2004 mortality survey. The earlier study, also published in The Lancet, estimated that more than 100,000 additional deaths from all causes had occurred in Iraq from March 2003 to August 2004. When data from the new study were examined, 112,000 deaths were estimated for the same time period of the 2004 study.”

This is a serious problem for Spagat et al. It essentially puts the final nail in the coffin of their argument, which was not very convincing to begin with.

For the simple reason that the very people with the means to undertake an all-encompassing study simply have no incentive to carry it out. They have no desire to see the real mortality number — or more precisely, they have a desire that others not see it.

And besides, even a limited study like that carried out by Johns Hopkins is very dangerous (to surveyors and surveyees alike) — and getting more so by the day.

Here’s my perspective: whenever you have a candidate for the One True Explanation, it has to handle all (or a good many) of the anomalies — not just the ones you or I cherry-pick to fit. If it can’t, then there’s probably a different explanation that addresses them.

To which JoshD replied:

The MSB is an argument about bias wrt measuring “violent events” and violent deaths with this methodology.

Exactly. They have a hypothesis that they think explains, at best, half of the problem and explains none of the other half. That’s cherry-picking.

when Lancet 1 came out, they [IBC] didn’t take a public position on its accuracy

I’d say a radio/web broadcast was pretty public. Here’s John Sloboda a few days after the paper was released, on a show discussing casualties in Iraq:

I think you’re going to find, in the weeks and months to follow, that there’s going to be very, very serious debates and criticism of the study, and maybe, at the end of the day, the figure will be retracted or modified. And one of the lasting problems of this is that then, somehow, everybody who’s trying to do estimates of civilian casualties in Iraq might be tarred with the same brush, and the whole enterprise kind of written off.

http://www.onthemedia.org/yore/transcripts/transcripts_110504_f.html

You can read the whole transcript or listen to the broadcast at the link. You’ll see Bob’s complaint that “he doesn’t even mention Lancet by name” is just smoke and mirrors, as is his claim that it’s from an “obscure debate” since that channel is syndicated to over 200 U.S. radio stations.

Bob mentions IBC’s press releases. I suggest you read them, if you want further education in smoke and mirrors, with headlines like this, from their February ’04 press release:

“As many as 10,000 civilians were killed in Iraq during 2003.”

Iraq Body Count know that figure is false, since they go on to suggest the final tally will be higher, though they forget to mention that their best estimate (their words) is that they only capture approx. 50% of casualties.

And again, from a press release headline dated October ’04:

“No Longer Unknowable: Falluja’s April Civilian Toll is 600”

No caveats in that headline either, such as “Toll at least 600”, and they again forget to mention that IBC’s best estimate is that they only capture approx. 50% of casualties. Clumsy, eh?

In their response to Lancet 1 they state that:

“Our count is purely a civilian count”

They forget to mention that this includes the Iraqi police, who are armed and trained by occupation forces. Some are even trained in Afghanistan by the United States. Their deaths deserve to be recorded as much as anyone’s, but they sure as heck aren’t civilians.

joshd writes: Kevin seems to be right that the study does not use the exact term “violent excess death” when stating the figure, but the above says the same thing with different wording. Either he’s wrong or the above passage from the report is mistaken.

It is perfectly clear that the CI referred to is for post-invasion violent deaths. In fact that is stated in the Summary.

In either case, I fail to see the relevance of his claim, as I’m not sure if or how he thinks this would change the basic point I was making about Robert’s claim.

Then look again at Robert’s claim: “the precision for any subgroup is always going to be less than the precision for the whole.” Do you really think you have refuted that claim by your comparison of the CI for total excess deaths with the CI for violent deaths post-invasion? If so, think again. Also, Robert was referring to a subset of the post-invasion violent deaths, which reinforces his point.

Kevin, I refer you to my previous posting, in which I quote the study saying:

“We estimate that between March 18, 2003, and June, 2006, an additional 654 965 (392 979-942 636) Iraqis have died above what would have been expected on the basis of the pre-invasion crude mortality rate as a consequence of the coalition invasion. Of these deaths, we estimate that 601 027 (426 369-793 663) were due to violence.”

This passage says the violence figure of 601,027 is a subset of the total “excess deaths”, which therefore means the 601,027 figure is the excess violent deaths.

The passage in the summary says:
“We estimate that through July 2006, there have been 654,965 ‘excess deaths’ – fatalities above the pre-invasion death rate – in Iraq as a consequence of the war. Of post-invasion deaths, 601,027 were due to violent causes.”

The last sentence follows a discussion of the total excess figure. It is unclear to me that it has suddenly switched to a straight post-invasion figure, prior to any excess calculation, just for the violence. If it is doing that, as you claim, then this statement in the study’s summary is contradicting the statement I quoted from the study above. It would also contradict this statement from Human Costs:

“Excess deaths can be further divided into those from violent and from non-violent causes. The vast majority of excess deaths were from violent causes. The excess deaths from violent causes were 7.2/1,000. Applying this to the population we estimate that 601,027 were due to violent causes.”

Are you suggesting still that the 601,027 figure is not referring to excess violent deaths? I still fail to see what great difference this would make to my point in any case, unless you could show 1) that the 601,027 is actually just a straight post-war violence figure, not “excess” violence, in contradiction to what the study says in the two passages I’ve quoted, and 2) that applying the excess calculation and subtracting the small number of pre-war violent deaths would dramatically weaken the published CI. You’ve yet to prove 1) and you haven’t even begun addressing 2).

And yes, I do believe I have refuted Robert’s claim. The CI for the total excess deaths in this case appears to have been weakened by its conflation of the non-violent deaths with violent deaths. Leaving the non-violent deaths out of the equation, and using only the violent death “subgroup” produces a stronger finding.

Ron F wrote:
> You’ll see Bob’s complaint that “he doesn’t even
> mention Lancet by name” is just smoke and mirrors

Ron, it’s great to see you yet again resurrecting the one and only quote you could find (across the Whole Worldwide Web) that suggests (shock, horror) that IBC’s John Sloboda had critical opinions of Lancet 2004.

Most people don’t have a problem with other people having critical opinions on various topics. Of course most people do have a problem with smear campaigns, etc. What does your repeated quoting of Sloboda’s solitary comment demonstrate? That he had an opinion (shock, horror again), or that he conducted a campaign? (Clue: if he had conducted a campaign, Ron would have found more than one quote).

Ron F is certainly conducting a campaign to smear IBC. I can provide several examples of very unpleasant smears that he’s posted on the MediaLens messageboard: for example that IBC are “cosy with” the military and intelligence agencies. I expect him at some point to claim IBC are part of a Satanic baby-eating cult. Or maybe they helped with the Kennedy assassination?

I do admire the final two paragraphs of that 2004 IBC press release, Bob (and joshd), it was very timely then and reflected well on the IBC. Second last paragraph:

We also recognise the bravery of the investigators who carried out the Lancet survey on the ground, and support the call for larger and more authoritative investigations with the full support of the coalition and other official bodies.

No Longer Unknowable: Falluja’s April Civilian Toll is 600″
No caveats in that headline either, such as “Toll at least 600″, and they again forget to mention that IBC’s best estimate is that they only capture approx. 50% of casualties. Clumsy, eh?

I do remember that release about Fallujah, and (in that ancient pre-Lancet 1 era) I remember finding the formulation a little annoying. I thought it was maybe a slip, but when they came out with their two-year analysis it became clear that it wasn’t: they really do think their press reports give a pretty clear and relatively complete picture of how many people are dying and who is killing them.

Well, maybe so. I’m not wedded to the Lancet estimates. But I’d be annoyed at IBC’s hubris even if Lancet 1 and Lancet 2 had never been published. What’s ironic is that before Lancet 2 came out, the argument was really over a factor of two. IBC seems to think they are getting about two-thirds of the total violent deaths. The midrange Lancet 1 estimate was 3 times higher than the corrected IBC figure for the first 18 months. (The Lancet 1 paper cited 15,000 or so from IBC, but IBC later added some data, so the IBC number for that period is actually about 19,000, I think, vs 60,000 for L1.) But the L1 authors thought the true figure was in the upper end of their CI, and IBC decided that ILCS’s number “proved” that the Lancet 1 midrange figure was too high, so the debate polarized.

Of course, with L2 there’s no possibility of an agreement, though plenty of room for both sides to be wrong.

“The Lancet 1 paper cited 15,000 or so from IBC, but IBC later added some data, so the IBC number for that period is actually about 19,000, I think, vs 60,000 for L1.”

L1’s violence is actually about 57,000. Compare this to IBC’s 19,000 (or even the 15,000, if you feel like removing everything IBC subsequently added for the period – and go ahead and put aside that IBC is only a civilian count, too).

Then ask yourself how well this squares with the oft-quoted L2 assertion that:

“Aside from Bosnia, we can find no conflict situation where passive surveillance recorded more than 20% of the deaths measured by population-based methods.”

I’m pretty sure that the CI in question relates exclusively to post-invasion violent deaths. Take a look at Table 3. Clearly that third line isn’t excess deaths, since the pre-invasion column isn’t zero. Now, to satisfy yourself that it’s the same confidence interval: check the following calculations, which express the end-points of the CIs as ratios of the point estimates:

5.2/7.2 = 0.72 and 9.5/7.2 = 1.32;

426369/601027 = 0.71 and 793663/601027 = 1.32.
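Those ratio checks can be reproduced directly (a minimal sketch, using only the figures quoted in this thread):

```python
# Compare CI endpoint / point-estimate ratios for the per-1000 violent
# mortality rate (Table 3) and the violent death count quoted in the text.
# If the two CIs are the same interval in different units, the ratios match.
rate_lo, rate_mid, rate_hi = 5.2, 7.2, 9.5            # deaths per 1000 per year
cnt_lo, cnt_mid, cnt_hi = 426_369, 601_027, 793_663   # violent death count

print(round(rate_lo / rate_mid, 2), round(rate_hi / rate_mid, 2))  # 0.72 1.32
print(round(cnt_lo / cnt_mid, 2), round(cnt_hi / cnt_mid, 2))      # 0.71 1.32
```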

Surely it’s intuitively clear that an excess-deaths CI will be wider than a post-invasion CI? The excess-deaths CI has to incorporate the uncertainty in the pre-invasion estimate. Look at what happened to their non-violent mortality CI when they translated that into excess deaths.

Incidentally the same table illustrates Robert’s point very nicely. Do the above calculation on the post-invasion crude mortality rate and you will see that, proportionately, the spread is narrower than that of both the violent mortality CI and the non-violent mortality CI.

Maybe that won’t convince you, but then I can’t prove that sheep aren’t carnivores either.

If you are “pretty sure” about this you had better tell the Lancet authors to revise their text, because the two passages I quoted say that the 601,027 figure and the CI range apply to the violence subset of the total excess deaths.

They also say the same (erroneous according to you) thing here too:

“We estimate that, as a consequence of the coalition invasion of March 18, 2003, about 655 000 Iraqis have died above the number that would be expected in a non-conflict situation, which is equivalent to about 2·5% of the population in the study area. About 601 000 of these excess deaths were due to violent causes.”

If you’re correct, why didn’t that rigorous peer review catch all these mistakes in the text? Though I’m not totally convinced that you’re right. The table you refer to simplifies to “deaths per 1000 people per year”. That may obscure some subtle differences. Remember that, relatively speaking, the pre-war violent death rate the study found was almost non-existent.

As I said before, though, I don’t believe this would change my point much in either case. Yes, adding the “excess” calculation on top adds more uncertainty. But if you are correct that the text in the peer-reviewed L2 report is wrong and misleading in all these places, and the 601,027 figure is violent deaths prior to any excess calculation, what would the figure and the CI be after making the small “excess” correction for the pre-invasion violent deaths? The CI given for violence is much narrower than the total excess CI. So, if you are correct that the violence figures are not excess figures, as the study repeatedly says they are, how much would making them excess violence figures widen the CI? It would have to be quite a lot to make Robert’s point correct, let alone relevant.

The anomaly in the excess calculations in this study appears to be the non-violent deaths, which first go down (counter-intuitively – why would natural-cause deaths and accidents go down due to the invasion?) and then back up again.

Sorry for the delayed response (I’m under the deadline gun, so any other responses from me may be delayed for a couple of days).

To clarify, when I wrote “precision” I wasn’t talking about statistical sampling precision: it’s often the case that for random variables X and Y, var(X+Y) > var(X). What I was talking about was the nature of determining whether an event has occurred, and then allocating that event into bins. Anyone who has worked with vital event registers knows what I’m talking about: causes of death are often far more ambiguous than whether the death itself occurred. That’s the basis of a large cross-national effort to regularize the recording of deaths by cause over both time and place.
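A quick simulation of that variance point, assuming two independent normal variables (my illustration, not anything from the studies):

```python
import random
import statistics

random.seed(0)

# For independent X and Y, var(X + Y) = var(X) + var(Y) > var(X).
x = [random.gauss(0, 1) for _ in range(100_000)]  # var ≈ 1
y = [random.gauss(0, 2) for _ in range(100_000)]  # var ≈ 4
s = [a + b for a, b in zip(x, y)]

print(statistics.variance(x))  # ≈ 1
print(statistics.variance(s))  # ≈ 5
```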

In any event, that was just a side issue to my main point: a bias factor of 3 for MSB is implausible.

Josh, I’ve already said myself that the Lancet people are wrong to imply that passive methods always result in a huge undercount. I gave as examples the Israeli-Palestinian conflict and the one in Lebanon last summer. Nobody to my knowledge thinks the true death tolls there are five or more times greater than what is commonly cited in the press.

There are wars where huge undercounts do occur–Vietnam and Algeria are likely examples, and probably Korea and various others. The question is where Iraq falls. One could totally ignore the Lancet papers and be deeply suspicious of some of the numbers that appear in the press–there are obvious incentives to lie. I admit (all along) that I also find some of the Lancet 2 numbers hard to believe, though it does seem possible that the death toll could be in the hundreds of thousands.

Donald said: “I admit (all along) that I also find some of the Lancet 2 numbers hard to believe, though it does seem possible that the death toll could be in the hundreds of thousands. But anyway, we need another survey.”

Because they differ so dramatically from the results that some other groups like IBC and Iraqi officials have provided? (Which are undercounts by any reasonable assessment.)

Do you believe that the people like Josh D who are questioning the first and second Johns Hopkins surveys are going to accept the results to yet another survey if it shows results that are not in keeping with their own beliefs — ie, that hundreds of thousands have died in Iraq since the war began?

Johns Hopkins has had two surveys now, and the results from the second are in very good keeping with those from the first if one considers the same period of time covered by the first survey.

According to the posting on JHU’s website:
“The results from the new study closely match the finding of the group’s October 2004 mortality survey. The earlier study, also published in The Lancet, estimated that more than 100,000 additional deaths from all causes had occurred in Iraq from March 2003 to August 2004. When data from the new study were examined, 112,000 deaths were estimated for the same time period of the 2004 study.”

The methodologies for selecting the sampled households were different in the two studies, and they were separated in time and space, so it is highly unlikely that the results came out almost the same (within 12%) merely by chance.
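The “within 12%” figure is simple arithmetic on the two estimates quoted from the JHU posting:

```python
# Estimates for the period covered by the 2004 study (March 2003 - August 2004).
l1_estimate = 100_000   # Lancet 2004: "more than 100,000" excess deaths
l2_estimate = 112_000   # Lancet 2006 data, restricted to the same period

diff = (l2_estimate - l1_estimate) / l1_estimate
print(f"{diff:.0%}")  # -> 12%
```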

..it is highly unlikely that the results came out almost the same (within 12%) merely by chance.

I find it interesting that critics of the Johns Hopkins studies are conveniently ignoring this fact, because until it is addressed, all their talk about “main street bias” and the like is just unadulterated garbage.

..it is highly unlikely that the results came out almost the same (within 12%) merely by chance.

I find it interesting that critics of the Johns Hopkins studies are conveniently ignoring this fact, because until it is addressed, all their talk about “main street bias” and the like is just unadulterated garbage.

I’ve addressed it before. And yes, it is by chance, or rather by convoluted and counter-intuitive statistical sleight of hand.

For the same period, L2 found about 145,000 extra violent deaths and a *decline* of about 33,000 non-violent deaths, where L1 had found about a 33,000 *increase* in non-violent deaths.

It just so happens (by chance) that if you conflate both types of deaths and apply the superfluous “excess” calculation, an illusion is created wherein these two sets of completely divergent findings look somewhat similar, even though the two studies might as well have been looking at two different wars, or two different countries.
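A sketch of that arithmetic, using the rough figures quoted in this thread for the L1 period (they are approximate and do not reproduce the published L1 total of about 98,000 exactly):

```python
# Violent and non-violent excess deaths for March 2003 - August 2004,
# as quoted (approximately) in this thread.
l1_violent, l1_nonviolent = 57_000, 33_000     # L1: increase in both
l2_violent, l2_nonviolent = 145_000, -33_000   # L2: non-violent deaths *decline*

l1_total = l1_violent + l1_nonviolent   # 90,000  -> the "about 100,000" headline
l2_total = l2_violent + l2_nonviolent   # 112,000 -> "closely matches" L1

# The totals look similar while the violent components differ by ~2.5x.
print(l1_total, l2_total, round(l2_violent / l1_violent, 1))
```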

i did some figuring from the two lancet papers and must agree that the ‘equivalence’ between the results for the period covered by the first survey is [as you indicated above] indeed not what it seems.

the numbers i calculate from the information provided were roughly the same as the ones you give above [with the one minor exception that i got 30,000 for L1 non-violent deaths as opposed to your 33,000].

i also read the paper by spagat et al and have one major qualm with it. it basically assumes that there is bias and then calculates what the bias is based on those assumptions.

not only does it assume that the death rate in the areas sampled is higher than in the areas not sampled, it also assumes that the people who live in those areas spend the vast majority of their time there, and that the people who live in low-violence areas stay in those areas.

the first assumption may be accurate, since the areas sampled might be the more accessible ones that are also accessible to those perpetrating the violence. however, i have some major qualms with the second assumption.

first, as tim lambert points out, even people who live in low-violence areas may venture out into the higher-violence areas of main street more than 6 hours a day (as spagat et al assumed when they came up with their factor-of-3 bias) if they work on main street or have other business there.

second, to think that the people who live in high-violence areas are going to spend nearly all their time confined to those areas is not reasonable — counterintuitive, actually. if one gives it a little thought, one can see why.

if i live on main street, which is subject to daily carbombings and the like, and there is a neighborhood a few streets over where there is much lower violence, why in the world would i spend all my waking hours in the high-violence area?

i would not. i would take my family away from the violence at every opportunity (in other words, whenever i did not have to be there, eg to work or sleep).

but that is precisely what spagat et al assume: that the probability that one will venture outside one’s area of residence is low.

the spagat paper assumes a considerable bias from the get-go and then gives a number to “quantify” the assumed bias — as if they were somehow ‘calculating’ what the bias is from first principles, which they are not.
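For what it’s worth, the sensitivity Tim describes at the top of the post can be reproduced with a toy version of the model. The functional form below is my reconstruction from the parameter values quoted, not necessarily the paper’s own formula:

```python
# Toy "main street bias" model (reconstructed; not the paper's own notation).
#   q  : relative death risk in the sampled (main-street) zone
#   n  : unsampled population / sampled population
#   fi : fraction of time sampled-zone residents spend inside their zone
#   fo : fraction of time unsampled-zone residents spend outside the sampled zone
def bias_factor(q, n, fi, fo):
    risk_in = fi * q + (1 - fi)        # average risk for sampled-zone residents
    risk_out = (1 - fo) * q + fo       # average risk for unsampled-zone residents
    population_avg = (risk_in + n * risk_out) / (1 + n)
    return risk_in / population_avg    # overestimation factor R

print(bias_factor(5, 10, 15/16, 15/16))   # ≈ 3.0  (Johnson et al's guesses)
print(bias_factor(5, 0.5, 15/16, 15/16))  # ≈ 1.3  (n = 0.5: ~30% bias)
print(bias_factor(5, 10, 15/16, 2/16))    # ≈ 1.05 (fo = 2/16: ~5% bias)
```

The headline factor of 3 follows mechanically from the assumed inputs; move n or fo to more plausible values and the bias largely disappears, which is the point made above.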