The Oil Plume Paradox

Coverage of various studies engenders frustration

Pinpointing the amount of oil lingering in the Gulf of Mexico continues to be a source of frustration for journalists and scientists alike, with multiple, contradictory—if not necessarily “dueling”—research reports having been published on the subject over the last few weeks.

Last month, the federal government released an “oil budget,” which claimed that 74 percent of the crude had essentially been dealt with through skimming, burning, dispersion, evaporation, and other means. Last week, scientists from the University of Georgia contradicted the federal assessment, claiming that as much as 79 percent of the oil spilled remained in the Gulf. A few days after that announcement, scientists from the Woods Hole Oceanographic Institution reported in the journal Science that they had confirmed the existence of a 22-mile-long underwater oil plume near the leaking wellhead in late June, a little more than two weeks before the well was finally capped. Because both the federal government’s and the University of Georgia’s estimates of the remaining oil were based on incomplete information and not peer-reviewed, many reporters (myself included) turned to the Woods Hole paper in Science to help settle the contradictory findings. That, says one of the authors of that paper, was a big mistake. In a column for CNN published Wednesday, Woods Hole’s Christopher Reddy wrote:

Instead of being able to consider our results on the basis of the information alone (“just the facts, ma’am”), readers, viewers and listeners around the world were exposed to newspaper, TV and radio reports clouded with politically charged agendas that were premature at the least and outright wrong at the most.

I must have spoken with at least 25 journalists last week, and despite my every effort to explain our findings, the media were more interested in using the new information to portray a duel between competing scientists. The story turned into an us-versus-them scenario in which some scientists are right and others are wrong. Seeking to elucidate, I felt caught in a crossfire.

Even though my colleagues and I repeatedly avoided contrasting our results with previous NOAA estimates that some 75 percent of the spilled oil was already gone from the Gulf, much of last week’s coverage of our work made that a prominent part of the story.

By way of example, Reddy cited an article in The Washington Post, which reported that “Academic scientists are challenging the Obama administration’s assertion that most of BP’s oil in the Gulf of Mexico is either gone or rapidly disappearing — with one group Thursday announcing the discovery of a 22-mile ‘plume’ of oil that shows little sign of vanishing.” I quoted a similar sentence from The New York Times in a review of the coverage last week. Such reporting, Reddy argues, cast the Woods Hole paper as evidence that the NOAA estimates were wrong and that the University of Georgia was right:

Neither of these conclusions was ever meant to be drawn from our research on the oil plume. The reasoning implicit in the media coverage was not only premature, but it might turn out to be wrong.

Science does not work that way. It is incremental. It is not a house of cards where one dissenting view leads to a complete collapse. Rather, science is more like a jigsaw puzzle. Each piece is added. Occasionally a wrong piece may be placed, but eventually science will correct it.

As if to illustrate his point, Science published a paper on Tuesday from scientists at the Lawrence Berkeley National Laboratory, which found that oil in the twenty-two-mile-long plume near the wellhead was—contrary to the Woods Hole data—biodegrading rapidly. The measurements upon which the Berkeley team based its paper were actually taken before the Woods Hole team’s measurements, but at a meeting in Seattle on Tuesday, the lead author of the Berkeley paper, Terry Hazen, said that in the last three weeks his team hasn’t been able to detect the underwater plume at all.

What is a reporter to do, with so many competing analyses? The simplest answer might be to re-read Reddy’s media criticism in CNN and bear in mind that science is an incremental process, in which a single dataset rarely, if ever, settles a particular debate.

That understanding has, to some extent, been evident in this week’s coverage of the Berkeley paper. Many reporters have been careful to emphasize that, with the exception of the rate of oil biodegradation, it came to many of the same conclusions as the Woods Hole paper. Most importantly, perhaps, both studies found that as bacteria digest the spilled oil, they are not depleting oxygen levels in the water as much as many feared they would. Beyond that, Berkeley’s Hazen told Science News that he was “amazed” at how similar the Woods Hole data was to his team’s in terms of the flow rate, size, path, and oil concentration of the undersea plume. A number of outlets have also quoted the lead author of the Woods Hole paper, Richard Camilli, making similar points about the complementary nature of the two papers.

While it is nice to see reporters eschewing the “dueling papers” paradigm, however, there is a limit to the harmony as well. In an article for the Montreal Gazette, Margaret Munro went so far as to report that Camilli is “downplaying” the contradictions between the Berkeley and Woods Hole papers. That might be too much of an inference, but, as Georgia Institute of Technology microbiologist Jim Spain told The Wall Street Journal’s Robert Lee Hotz, “There is real disagreement here.”

Unfortunately, while almost every news article about the Berkeley paper has mentioned that it contradicted the Woods Hole conclusion about the rate at which the oil is biodegrading, few explained why it did so. Wired’s Brandon Keim did perhaps the best job of explaining: the Woods Hole team did not look directly at bacteria in the water, but rather used a “proxy”—oxygen depletion—to calculate their activity. In contrast, the Berkeley team made more direct measurements, extracting microbial DNA from plume samples and sequencing the genes to identify their functions.

(In so doing, the Berkeley team identified a previously unknown strain of oil-eating bacteria that doesn’t consume much oxygen when devouring crude, and the novelty of that discovery—rather than the ongoing debate about the amount of oil lingering in the Gulf—was actually the main thrust of articles from the San Francisco Chronicle and Associated Press.)

Such explanations, however, often fly over the heads of readers who have read that the oil is biodegrading slowly one week and quickly the next. As such, in the comments section of the Knight Science Journalism Tracker (see here and here), Keim and a number of other reporters covering the spill complained that Science, which published the Woods Hole and Berkeley papers, should have held the former in order to publish them together. Not doing so led to the Woods Hole paper being interpreted as more conclusive than it actually was, the reporters argued, whereas running it with the Berkeley paper would have allowed them to present a single, more comprehensive account of scientists’ current understanding of the oil plume. (Likewise, Greenwire reporter Paul Voosen conceded that he would not have worded the lede of his article about the Berkeley paper—which read, “The Gulf of Mexico’s undersea oil plume is no more.”—so strongly had he not been reacting to the poor coverage of the Woods Hole paper.)

I raised the reporters’ concerns with the people at Science. “We do try when possible to coordinate papers on a common topic, but this is not always possible or appropriate, particularly in rapidly developing fields,” the journal’s deputy editor for physical science, Dr. Brooks Hanson, explained in an e-mail. “In this case, note that the Hazen et al. paper was not even accepted until after the Camilli paper was published. We expect that Science and other journals will continue to publish papers when they are ready, rather than delaying the release of important peer-reviewed data that are needed in real time. We hope that journalists will understand the need for prompt release.”

Likewise, in an e-mail, Ginger Pinholster, the director of Science’s communications office, said that while she thought Keim and the other reporters had “valid concerns,” the publication schedule of the Woods Hole and Berkeley papers was not “bound to engender confusion for science journalists and the public.” In fact, she added, “Reporters who covered the Hazen paper have generally done a fine job of understanding and conveying the relationship between the two research efforts.” In particular, she cited stories from The Washington Post and NPR.

I would tend to agree with Pinholster about the quality of coverage, but with a few caveats. As mentioned above, not all outlets explained why the Woods Hole and Berkeley teams came to different conclusions about the rate of oil biodegradation. Nor did every outlet mention that the Berkeley team’s work was partly funded by an existing, ten-year grant that BP had made to the Energy Bioscience Institute at Berkeley (the grant, unrelated to the oil spill, does not cast aspersions on Hazen’s work, but should be mentioned in news coverage).

Finally, not every outlet checked the Berkeley team’s conclusions with other scientists (beyond those from Woods Hole, that is). Although the Berkeley team’s data on the rate of oil biodegradation is ostensibly more robust than that from the Woods Hole team, there is no reason to believe it is perfect. Indeed, a number of scientists—quoted by outlets such as Science News, Scientific American, Wired, and The Wall Street Journal—variously expressed concern about the Berkeley team’s methods or that the oil plume may have moved or been diluted, rather than consumed by bacteria (a few of these outlets appropriately gave Hazen a chance to respond; he told the LA Times that he hopes to use the oil-eating bacteria as a “tracking device” for hydrocarbons in the Gulf).

Moving forward, reporters will therefore hopefully be mindful of the admonitions in the column that Woods Hole’s Reddy wrote for CNN. “Science is incremental. Science takes time.” Rarely is one piece of research “absolutely right or absolutely wrong.”

Popular Science’s Rebecca Boyle, who wrote a good article about the recent papers, put it another way at the Knight Science Journalism Tracker, astutely observing: “While daily coverage is often a bad way to handle big science, the Web leaves us little choice. That’s why we felt it was important to keep saying, ‘this is not definitive; there’s more to come.’ And we intend to deliver on that.”

Indeed, the last few weeks of “see-saw swings of uncertainty” and “science on the fly” (as a Wall Street Journal reporter and one of his sources respectively described the situation) have not made the job of reporters covering the lingering oil in the Gulf any easier. But such is journalism, and reporters can improve their coverage by remembering that science is not, at the end of the day, an exact science.

Curtis Brainard writes on science and environment reporting. Follow him on Twitter @cbrainard.