http://www.slate.com/id/2108887
War Stories: Military analysis.
100,000 Dead—or 8,000
How many Iraqi civilians have died as a result of the war?
By Fred Kaplan
Posted Friday, Oct. 29, 2004, at 3:49 PM PT
The authors of a peer-reviewed study, conducted by a survey team from
Johns Hopkins University, claim that about 100,000 Iraqi civilians have
died as a result of the war. Yet a close look at the actual study,
published online today by the British medical journal the Lancet,
reveals that this number is so loose as to be meaningless.
The report's authors derive this figure by estimating how many Iraqis
died in a 14-month period before the U.S. invasion, conducting surveys
on how many died in a similar period after the invasion began (more on
those surveys later), and subtracting the first figure from the second. That
difference—the number of "extra" deaths in the post-invasion
period—signifies the war's toll. That number is 98,000. But read the
passage that cites the calculation more fully:
We estimate there were 98,000 extra deaths (95% CI 8000-194 000)
during the post-war period.
Readers who are accustomed to perusing statistical documents know what
the set of numbers in the parentheses means. For the other 99.9 percent
of you, I'll spell it out in plain English—which, disturbingly, the
study never does. It means that the authors are 95 percent confident
that the war-caused deaths totaled some number between 8,000 and
194,000. (The number cited in plain language—98,000—is roughly at the
halfway point in this absurdly vast range.)
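To get a feel for just how loose that is, consider the uncertainty the interval implies. Under a simple normal approximation (my assumption here; the study's own interval comes out of its cluster design, not this back-of-envelope calculation), a 95 percent confidence interval spans about 1.96 standard errors on either side of the estimate.

    # What standard error would a 95% CI of 8,000 to 194,000 around a
    # 98,000 estimate imply, under a simple normal approximation? This is
    # a back-of-envelope illustration, not the study's own calculation.

    low, high, estimate = 8_000, 194_000, 98_000

    implied_se = (high - low) / (2 * 1.96)   # CI is roughly estimate +/- 1.96 * SE

    print(f"Implied standard error: about {implied_se:,.0f} deaths")
    print(f"Interval width: {high - low:,} deaths, nearly double the estimate itself")

On that rough reading, a single standard error is about half the size of the headline number itself.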
This isn't an estimate. It's a dart board.
Imagine reading a poll reporting that George W. Bush will win somewhere
between 4 percent and 96 percent of the votes in this Tuesday's
election. You would say that this is a useless poll and that something
must have gone terribly wrong with the sampling. The same is true of the
Lancet article: It's a useless study; something went terribly wrong with
the sampling.
The problem is, ultimately, not with the scholars who conducted the
study; they did the best they could under the circumstances. The problem
is the circumstances. It's hard to conduct reliable, random surveys—and
to extrapolate meaningful data from the results of those surveys—in the
chaotic, restrictive environment of war.
However, these scholars are responsible for the hype surrounding the
study. Gilbert Burnham, one of the co-authors, told the International
Herald Tribune (for a story reprinted in today's New York Times), "We're
quite sure that the estimate of 100,000 is a conservative estimate." Yet
the text of the study reveals this is simply untrue. Burnham should have
said, "We're not quite sure what our estimate means. Assuming our model
is accurate, the actual death toll might be 100,000, or it might be
somewhere between 92,000 lower and 94,000 higher than that number."
Not a meaty headline, but truer to the findings of his own study.
Here's how the Johns Hopkins team—which, for the record, was led by Dr.
Les Roberts of the university's Bloomberg School of Public Health—went
about its work. They randomly selected 33 neighborhoods across
Iraq—equal-sized population "clusters"—and, this past September, set out
to interview 30 households in each. They asked how many people in each
household died, of what causes, during the 14 months before the U.S.
invasion—and how many died, of what, in the 17 months since the war
began. They then took the results of their random sample and
extrapolated them to the entire country, assuming that their 33 clusters
were perfectly representative of all Iraq.
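The leverage in that extrapolation is worth pausing on: a sample of roughly a thousand households stands in for the whole country. The sketch below uses an assumed household size and a round population figure (neither taken from the study) to show how far each surveyed household gets stretched.

    # Rough sense of the leverage in the extrapolation. The household size
    # and national population below are assumptions for illustration, not
    # figures taken from the study.

    clusters = 33
    households_per_cluster = 30
    assumed_household_size = 8          # assumption
    assumed_population = 24_000_000     # assumed round figure for Iraq

    people_surveyed = clusters * households_per_cluster * assumed_household_size
    scale_factor = assumed_population / people_surveyed

    print(f"People surveyed: roughly {people_surveyed:,}")
    print(f"Each surveyed person stands in for about {scale_factor:,.0f} Iraqis,")
    print(f"so each death recorded in the sample moves the national estimate")
    print(f"by about {scale_factor:,.0f} deaths.")

On those assumptions, a single death recorded in the sample shifts the national estimate by roughly 3,000 deaths, which is why everything hinges on how the clusters were chosen.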
This is a time-honored technique for many epidemiological studies, but
those conducting them have to take great care that the way they select
the neighborhoods is truly random (which, as most poll-watchers of any
sort know, is difficult under the easiest of circumstances). There's a
further complication when studying the results of war, especially a war
fought mainly by precision bombs dropped from the air: The damage is not
randomly distributed; it's very heavily concentrated in a few areas.
The Johns Hopkins team had to confront this problem. One of the 33
clusters they selected happened to be in Fallujah, one of the most
heavily bombed and shelled cities in all Iraq. Was it legitimate to
extrapolate from a sample that included such an extreme case? More
awkward yet, it turned out that two-thirds of all the violent deaths
the team recorded took place in the Fallujah cluster. They settled the
dilemma by issuing two sets of figures—one with Fallujah, the other
without. The estimate of 98,000 deaths is the extrapolation from the set
that does not include Fallujah. What's the extrapolation for the set
that does include Fallujah? They don't exactly say. Fallujah was nearly
unique; it's impossible to figure out how to extrapolate from it. A
question does arise, though: Is this difficulty a result of some
peculiarity about the fighting in Fallujah? Or is it a result of some
peculiarity in the survey's methodology?
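A toy simulation, with numbers invented purely for illustration (none of them come from the study), makes the general difficulty concrete: when most of the violence is packed into a few places, a 33-cluster draw swings wildly depending on whether one of those places happens to be sampled.

    # Toy simulation: deaths concentrated in a few clusters make a
    # 33-cluster estimate swing wildly. All numbers here are invented
    # for illustration; none come from the Lancet study.
    import random

    random.seed(0)

    TOTAL_CLUSTERS = 1_000        # pretend the country is carved into 1,000 clusters
    EXTREME_CLUSTERS = 10         # a handful of Fallujah-like clusters
    DEATHS_EXTREME = 50           # violent deaths a survey would record there
    DEATHS_TYPICAL = 1            # violent deaths a survey would record elsewhere
    SAMPLE_SIZE = 33
    SCALE = TOTAL_CLUSTERS / SAMPLE_SIZE

    deaths_by_cluster = ([DEATHS_EXTREME] * EXTREME_CLUSTERS
                         + [DEATHS_TYPICAL] * (TOTAL_CLUSTERS - EXTREME_CLUSTERS))

    estimates = []
    for _ in range(10_000):
        sample = random.sample(deaths_by_cluster, SAMPLE_SIZE)
        estimates.append(sum(sample) * SCALE)

    estimates.sort()
    print(f"True total of violent deaths: {sum(deaths_by_cluster):,}")
    print(f"Median estimate: {estimates[5_000]:,.0f}")
    print(f"Middle 95% of estimates: {estimates[250]:,.0f} to {estimates[9_749]:,.0f}")

In runs like this, most draws miss the extreme clusters entirely and understate the true toll, while the occasional draw that catches one overshoots it; dropping the extreme cluster, as the study does with Fallujah, tames the swings but discards real deaths.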
There were other problems. The survey team simply could not visit some
of the randomly chosen clusters; the roads were blocked off, in some
cases by coalition checkpoints. So the team picked other, more
accessible areas that had received similar amounts of damage. But it's
unclear how they judged which areas were similarly damaged. In any case, the detour
destroyed the survey's randomness; the results are inherently tainted.
In other cases, the team didn't find enough people in a cluster to
interview, so they expanded the survey to an adjoining cluster. Again,
at that point, the survey was no longer random, and so the results are
suspect.
Beth Osborne Daponte, senior research scholar at Yale University's
Institution for Social and Policy Studies, put the point diplomatically
after reading the Lancet article this morning and discussing it with me
in a phone conversation: "It attests to the difficulty of doing this
sort of survey work during a war. … No one can come up with any credible
estimates yet, at least not through the sorts of methods used here."
The study, though, does have a fundamental flaw that has nothing to do
with the limits imposed by wartime—and this flaw suggests that, within
the study's wide range of possible casualty estimates, the real number
tends more toward the lower end of the scale. In order to gauge the risk
of death brought on by the war, the researchers first had to measure the
risk of death in Iraq before the war. Based on their survey of how many
people in the sampled households died before the war, they calculated
that the mortality rate in prewar Iraq was 5 deaths per 1,000 people per
year. The mortality rate after the war started—not including
Fallujah—was 7.9 deaths per 1,000 people per year. In short, the risk of
death in Iraq since the war is 58 percent higher (7.9 divided by 5 =
1.58) than it was before the war.
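Those two rates are what produce the headline figure. Applied to the whole country over the post-invasion period, the gap between them works out, on reasonable round figures, to roughly 100,000 deaths. The sketch below shows that arithmetic; the population figure and the length of the window are round numbers I've assumed for illustration, not values taken from the study.

    # The arithmetic that turns the two mortality rates into a national
    # death toll. The population and period length are assumed round
    # figures for illustration, not numbers taken from the study.

    pre_war_rate = 5.0        # deaths per 1,000 people per year
    post_war_rate = 7.9       # deaths per 1,000 people per year, excluding Fallujah
    assumed_population = 24_000_000
    assumed_years = 1.5       # roughly the 17-to-18-month post-invasion window

    risk_ratio = post_war_rate / pre_war_rate
    excess_deaths = ((post_war_rate - pre_war_rate) / 1_000
                     * assumed_population * assumed_years)

    print(f"Relative risk: {risk_ratio:.2f} (a {risk_ratio - 1:.0%} increase)")
    print(f"Implied excess deaths: about {excess_deaths:,.0f}")

On these assumed figures, moving the prewar baseline by just one death per 1,000 shifts the national total by roughly 36,000, which is why that baseline matters so much.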
But there are two problems with this calculation. First, Daponte (who
has studied Iraqi population figures for many years) questions the
finding that prewar mortality was 5 deaths per 1,000. According to quite
comprehensive data collected by the United Nations, Iraq's mortality
rate from 1980-85 was 8.1 per 1,000. From 1985-90, the years leading up
to the 1991 Gulf War, the rate declined to 6.8 per 1,000. After '91, the
numbers are murkier, but clearly they went up. Whatever they were in
2002, they were almost certainly higher than 5 per 1,000. In other
words, the wartime mortality rate—if it is 7.9 per 1,000—probably does
not exceed the peacetime rate by as much as the Johns Hopkins team
assumes.
The second problem with the calculation goes back to the problem cited
at the top of this article—the margin of error. Here is the relevant
passage from the study: "The risk of death is 1.5-fold (1.1 – 2.3)
higher after the invasion." Those mysterious numbers in the parentheses
mean the authors are 95 percent confident that the risk of death now is
between 1.1 and 2.3 times higher than it was before the invasion—in
other words, as little as 10 percent higher or as much as 130 percent
higher. Again, the math is too vague to be useful.
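To see how much difference those bounds make in absolute terms, here is the same back-of-envelope arithmetic applied to each end of the range. It will not reproduce the study's published 8,000-to-194,000 interval, which comes from the study's own cluster model; it is only meant to show how far apart "as little as 10 percent" and "as much as 130 percent" really are, on the same assumed population and time window as before.

    # Translate the risk-ratio bounds into rough absolute tolls, reusing
    # the assumed population and time window from the earlier sketch.
    # This will not match the study's published 8,000-194,000 interval,
    # which comes from its own cluster model.

    pre_war_rate = 5.0          # deaths per 1,000 people per year
    assumed_population = 24_000_000
    assumed_years = 1.5

    for risk_ratio in (1.1, 1.58, 2.3):
        excess = ((risk_ratio - 1) * pre_war_rate / 1_000
                  * assumed_population * assumed_years)
        print(f"Risk ratio {risk_ratio}: about {excess:,.0f} excess deaths")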
There is one group out there counting civilian casualties in a way
that's tangible, specific, and very useful—a team of mainly British
researchers, led by Hamit Dardagan and John Sloboda, called Iraq Body
Count. They have kept a running total of civilian deaths, derived
entirely from press reports. Their count is triple fact-checked; their
database is itemized and fastidiously sourced; and they take great pains
to separate civilian from combatant casualties (for instance, last
Tuesday, the group released a report estimating that, of the 800 Iraqis
killed in last April's siege of Fallujah, 572 to 616 of them were
civilians, at least 308 of them women and children).
The IBC estimates that between 14,181 and 16,312 Iraqi civilians have
died as a result of the war—about half of them since the battlefield
phase of the war ended in May 2003. The group also notes that these figures
are probably on the low side, since some deaths must have taken place
outside the media's purview.
So, let's call it 15,000 or—allowing for deaths that the press didn't
report—20,000 or 25,000, maybe 30,000 Iraqi civilians killed in a
pre-emptive war waged (according to the latest rationale) on their
behalf. That's a number more solidly rooted in reality than the Hopkins
figure—and, given that fact, no less shocking.
Related in Slate: In October 2001, Chris Suellentrop explained the
difficulties of counting the number of Iraqi children "killed" by U.N.
sanctions on Saddam Hussein's regime.
Fred Kaplan writes the "War Stories" column for Slate.