A controversial October 2004 report from The Lancet compared pre-invasion and post-invasion mortality rates in Iraq. Discussion of this report surfaced in comments to Dan Ellsworth’s excellent post on the Iraq War here and here (by Mark IV and Naismith respectively). As usual, I agree with Mark IV: The report by The Lancet deserves as much scorn as we can heap upon it.

The report compares mortality rates in the 18 months preceding the invasion with rates in the 18 months following it, and estimates an increase of 100,000 to 150,000 deaths.
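To see what such an estimate involves, here is a minimal sketch of the crude arithmetic behind an excess-death figure. The inputs are purely illustrative round numbers of my own choosing, not the report's parameters:

```python
def excess_deaths(pre_rate, post_rate, population, years):
    """Crude excess-death estimate: the difference between post- and
    pre-invasion crude mortality rates (deaths per 1,000 person-years),
    applied to the whole population over the study period."""
    return (post_rate - pre_rate) / 1000 * population * years

# Illustrative inputs only (not the report's figures): rates per
# 1,000 per year, a ~24 million population, a ~1.5-year window.
print(round(excess_deaths(pre_rate=5.0, post_rate=7.9,
                          population=24_000_000, years=1.5)))  # → 104400
```

A point calculation like this conveys none of the uncertainty in the underlying rates, which is precisely where the trouble begins.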

This is a very tenuous claim, and it results from a methodology that was far from rigorous. Moreover, The Lancet promoted the report using language that tended to deceive casual readers.

This article will first emphasize how The Lancet used deceitful hype to promote the report, and then discuss some of the report's shortcomings.

The Lancet’s press release was deceptively entitled “100,000 Excess Civilian Deaths after Iraq Invasion.” This was also the title of the link to the report that appeared on the front page of The Lancet’s website when the journal promoted the report in late 2004. In like manner, the opening paragraph of the press release boldly announces,

Public-health experts from the USA and Iraq estimate that around 100,000 Iraqi civilians have died as a result of the March 2003 invasion-the majority being violent deaths among women and children relating to military activity. Results of the research, done among clusters of Iraqi households last month, is published online by THE LANCET at 0001 H (London time) Friday 29 October 2004

The sensationalistic title and the wording of this press release led many to conclude that The Lancet accused America and its allies of killing 100,000 Iraqis. Only later in the article do we read:

The number of population clusters chosen for sampling is small; the confidence intervals around the point estimates of mortality are wide; the Falluja cluster has an especially high mortality and so is atypical of the rest of the sample; and there is clearly the potential for recall bias among those interviewed.
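The wide intervals and the outsized influence of a single cluster conceded here are easy to demonstrate. The sketch below is entirely hypothetical (the cluster death counts are invented), but it shows how one Falluja-like cluster out of 30 stretches a bootstrap confidence interval for the mean:

```python
import random

random.seed(1)

# Invented data: 29 typical clusters with 0-4 reported deaths each,
# plus one Falluja-like outlier cluster with 52.
clusters = [random.randint(0, 4) for _ in range(29)] + [52]

def bootstrap_ci(data, reps=5000):
    """95% bootstrap confidence interval for mean deaths per cluster,
    resampling whole clusters with replacement."""
    means = sorted(
        sum(random.choices(data, k=len(data))) / len(data)
        for _ in range(reps)
    )
    return means[int(0.025 * reps)], means[int(0.975 * reps)]

lo, hi = bootstrap_ci(clusters)
print(round(lo, 2), round(hi, 2))  # the interval spans a wide range
```

Dropping the single outlier cluster from the invented data tightens the interval dramatically, which is why the authors had to flag the Falluja cluster as atypical.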

Many readers will be surprised to learn that the actual title of the report was, “Mortality before and after the 2003 invasion of Iraq: cluster sample survey.” This accurately reflects what the report intended to measure and compare; viz., pre-invasion and post-invasion mortality rates, including (for example) infant mortality. The promotional material seems designed to imply that this prestigious, peer-reviewed journal endorsed far bolder claims than the report actually makes.

But the report’s claims, modest though they may be, do not withstand even a cursory examination.

The report sought to measure pre-invasion (Jan 2002 – Mar 2003) and post-invasion (Mar 2003 – Sep 2004) mortality rates using largely unverified household reports collected in Sep 2004. Households were simply asked to recall how many of their members had died and when. Seriously — that’s all. This is not an accepted method for determining mortality rates, and it’s appalling to see such work put forth in a prestigious, peer-reviewed journal. Worse, recall of this kind is prone to telescoping: people tend to underestimate the passage of time since someone’s death once it has receded into the indefinite past, which shifts remembered pre-invasion deaths into the post-invasion window and inflates the post-invasion mortality rate.
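The telescoping problem can be made concrete with a toy simulation. Everything here is invented (the number of deaths, the uniform timing, and the three-month recall shift); the point is only that a forward shift in recalled dates inflates the post-invasion count even when the true pre- and post-invasion rates are identical:

```python
import random

random.seed(0)

N = 10_000        # invented number of deaths
INVASION = 18     # invasion at month 18 of a 36-month window
SHIFT = 3         # hypothetical recall shift, in months

# True death dates are uniform, so the true pre- and post-invasion
# rates are equal by construction.
true_months = [random.uniform(0, 36) for _ in range(N)]

# Telescoping: each death is remembered as ~SHIFT months more recent
# than it was (capped at the survey date, month 36).
reported_months = [min(36.0, m + SHIFT) for m in true_months]

true_post = sum(m >= INVASION for m in true_months)
reported_post = sum(m >= INVASION for m in reported_months)
print(true_post, reported_post)  # reported count exceeds the true count
```

Under these assumptions, the reported post-invasion count overstates the true one by roughly the fraction of deaths that fell in the SHIFT months just before the invasion.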

The report itself states that it has the following shortcomings:

We obtained January, 2003, population estimates for each of Iraq’s 18 Governorates from the Ministry of Health. No attempt was made to adjust these numbers for recent displacement or immigration.


When violent deaths were attributed to a faction in the conflict or to criminal forces, no further investigation into the death was made to respect the privacy of the family and for the safety of the interviewers.


Within clusters [of which there are 30, with 30 households each], an attempt was made to confirm at least two reported non-infant deaths by asking to see the death certificate. Interviewers were initially reluctant to ask to see death certificates because this might have implied they did not believe the respondents, perhaps triggering violence. Thus, a compromise was reached for which interviewers would attempt to confirm at least two deaths per cluster.

This is not the type of report that warrants a claim of 100,000 to 150,000 additional deaths.

Confirmation bias leaves us less motivated to critically evaluate opinions with which we are sympathetic, and more motivated to scrutinize those with which we are not. I’m happy to admit that I’m motivated to criticize the report based on my biases. I just wish that those who embrace the report could be more candid about why they tout it as peer-reviewed.