New research findings from Princeton University


To gauge public interest, the researchers used Google Trends to document Internet search-engine activity for “global warming” (blue line) and “climate change” (red line) from 2004 to 2013. They examined activity both globally (top) and in the United States (bottom). The numbers on the left indicate each term’s search volume as a percentage of its maximum at any given point in time. (Image courtesy of William Anderegg)

By Morgan Kelly, Office of Communications

The good news for any passionate supporter of climate-change science is that negative media reports seem to have only a passing effect on public opinion, according to Princeton University and University of Oxford researchers. The bad news is that positive stories don’t appear to possess much staying power, either. This dynamic suggests that climate scientists should reexamine how to effectively and more regularly engage the public, the researchers write.

Measured by how often people worldwide scour the Internet for information related to climate change, overall public interest in the topic has steadily waned since 2007, according to a report in the journal Environmental Research Letters. Yet, the downturn in public interest does not seem tied to any particular negative publicity regarding climate-change science, which is what the researchers primarily wanted to gauge.

First author William Anderegg, a postdoctoral research associate in the Princeton Environmental Institute who studies communication and climate change, and Gregory Goldsmith, a postdoctoral researcher at Oxford’s Environmental Change Institute, specifically looked into the effect on public interest and opinion of two widely reported, almost simultaneous events.

The first involved the November 2009 hacking of emails from the Climate Research Unit at the University of East Anglia in the United Kingdom, which has been a preeminent source of data confirming human-driven climate change. Known as “climategate,” this event was initially trumpeted as proving that dissenting scientific views related to climate change had been maliciously quashed. Thorough investigations later concluded that no misconduct took place.

The second event was the revelation in late 2009 that an error in the 2007 Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) — an organization under the auspices of the United Nations that periodically evaluates the science and impacts of climate change — overestimated how quickly glaciers in the Himalayas would melt.

To first get a general sense of public interest in climate change, Anderegg and Goldsmith combed the freely available database Google Trends for “global warming,” “climate change” and all related terms that people around the world searched for between 2004 and 2013. The researchers documented search trends in English, Chinese and Spanish, which are the top three languages on the Internet. Google handles more than 80 percent of the world’s Internet search-engine activity, and Google Trends is increasingly called upon for research in economics, political science and public health.

Internet searches related to climate change began to climb following the 2006 release of the documentary “An Inconvenient Truth” starring former vice president Al Gore, and continued their ascent with the release of the IPCC’s fourth report, the researchers found.

Anderegg and Goldsmith examined searches for “climategate” between Nov. 1 and Dec. 31, 2009. They found that the search trend had a six-day “half-life,” meaning that search frequency dropped by 50 percent every six days. After 22 days, searches for climategate had fallen to a mere 10 percent of their peak. Information about climategate was most sought in the United States, Canada and Australia, while the cities with the most searchers were Toronto, London and Washington, D.C.
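Those decay figures are consistent with a simple exponential model; a minimal sketch (a hypothetical helper, not the authors’ code) shows how the six-day half-life implies the reported drop:

```python
def remaining_fraction(days, half_life_days=6.0):
    """Fraction of peak search volume left after `days`, assuming pure
    exponential decay with the reported six-day half-life."""
    return 0.5 ** (days / half_life_days)

print(remaining_fraction(6))   # 0.5, by definition of the half-life
print(remaining_fraction(22))  # about 0.08, consistent with the reported ~10 percent of peak
```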

The researchers found that searches for the phrase “global warming hoax” and related terms correlate in the United States with Republican or conservative political leanings. They compared the prevalence of searches for “global warming hoax” with the Cook Partisan Voting Index — which gauges how far toward Republicans or Democrats a congressional district leans — for 34 US states (above). They found that the more Republican/conservative the state (bottom measurement), the more frequently people in that state looked up related terms. The bottom graph shows how often a state votes Democrat (low numbers) versus Republican (high numbers). The numbers on the left indicate how often people looked up “global warming hoax” based on its percentage of the maximum search volume at any given point in time. (Image courtesy of William Anderegg)

The researchers tracked the popularity of the term “global warming hoax” to gauge the overall negative effect of climategate and the IPCC error on how the public perceives climate change. They found that searches for the term were actually higher the year before the events than during the year afterward.

“The search volume quickly returns to the same level as before the incident,” Goldsmith said. “This suggests no long-term change in the level of climate-change skepticism. We found that intense media coverage of an event such as ‘climategate’ was followed by bursts of public interest, but these bursts were short-lived.”

All of this is to say that moments of great consternation for climate scientists seem to barely register in the public consciousness, Anderegg said. The study notes that independent polling data also indicate that these events had very little effect on American public opinion. “There’s a lot of handwringing among scientists, and a belief that these events permanently damaged public trust. What these results suggest is that that’s just not true,” Anderegg said.

While that’s good in a sense, Anderegg said, his and Goldsmith’s results also suggest that climate change as a whole does not top the list of gripping public topics. For instance, he said, climategate had the same Internet half-life as the public fallout from pro-golfer Tiger Woods’ extramarital affair, which happened around the same time (but received far more searches).

A public with little interest in climate change is unlikely to push for policies that actually address the problem, Anderegg said. He and Goldsmith suggest communicating in terms familiar to the public rather than to scientists. For example, their findings suggest that most people still identify with the term “global warming” instead of “climate change,” though the shift toward embracing the more scientific term is clear.

“If public interest in climate change is falling, it may be more difficult to muster public concern to address climate change,” Anderegg said. “This long-term trend of declining interest is worrying and something I hope we can address soon.”

One outcome of the research might be to shift scientists’ focus away from battling short-lived, so-called scandals, said Michael Oppenheimer, Princeton’s Albert G. Milbank Professor of Geosciences and International Affairs. The study should remind climate scientists that every little misstep or controversy does not make or break the public’s confidence in their work, he said. Oppenheimer, who was not involved in the study, is a long-time participant in the IPCC and an author of the Fifth Assessment Report being released this year in sections.

“This is an important study because it puts scientists’ concerns about climate skepticism in perspective,” Oppenheimer said. “While scientists should maintain the aspirational goal of their work being error-free, they should be less distracted by concerns that a few missteps will seriously influence attitudes in the general public, which by-and-large has never heard of these episodes.”

Anderegg, William R. L., and Gregory R. Goldsmith. 2014. Public interest in climate change over the past decade and the effects of the ‘climategate’ media event. Environmental Research Letters 9: 054005. doi:10.1088/1748-9326/9/5/054005. Article published online May 20, 2014.

High-throughput screening for the discovery of small molecules that activate silent bacterial gene clusters. Image courtesy of Mohammad Seyedsayamdost.

by Tien Nguyen, Department of Chemistry

Resistance to antibiotics has been steadily rising and poses a serious threat to the effectiveness of existing treatments. Now, a method from Mohammad Seyedsayamdost, an assistant professor of chemistry at Princeton University, may open the door to the discovery of a host of potential drug candidates.

The vast majority of anti-infectives on the market today are bacterial natural products, made by biosynthetic gene clusters. Genome sequencing of bacteria has revealed that these active gene clusters are outnumbered approximately ten times by so-called silent gene clusters.

“Turning these clusters on would really expand our available chemical space to search for new antibiotic or otherwise therapeutically useful molecules,” Seyedsayamdost said.

In an article published last week in the journal Proceedings of the National Academy of Sciences, Seyedsayamdost reported a strategy to quickly screen whole libraries of compounds to find elicitors, small molecules that can turn on a specific gene cluster. He used a genetic reporter that fluoresces or generates a color when the gene cluster is activated to easily identify positive hits. Using this method, he activated two silent gene clusters and discovered a new metabolite.

Application of this work promises to uncover new bacterial natural products and provide insights into the regulatory networks that control silent gene clusters.

Princeton University researchers found that the banana-like curve of the bacteria Caulobacter crescentus provides stability and helps them flourish as a group in the moving water they experience in nature. The findings suggest a new way of studying the evolution of bacteria that emphasizes using naturalistic settings. The illustration shows how C. crescentus divides asymmetrically into a “stalked” mother cell that anchors to a bacterium’s home surface, and an upper unattached portion that forms a new, juvenile cell known as a “swarmer.” (Image by Laura Ancona)

By Morgan Kelly, Office of Communications

Drawing from his engineering background, Princeton University researcher Alexandre Persat had a notion as to why the bacteria Caulobacter crescentus are curved — a hunch that now could lead to a new way of studying the evolution of bacteria, according to research published in the journal Nature Communications.

Commonly used in labs to study cell division, C. crescentus naturally take on a banana-like curve, but they also can undergo a mutation in which they grow to be perfectly straight. The problem was that in a laboratory there was no apparent functional difference between the two shapes. So a question among biologists was, why would nature bother?

Then Persat, who is a postdoctoral researcher in the group of Associate Professor of Molecular Biology Zemer Gitai, considered that the bacteria dwell in large groups attached to surfaces in lakes, ponds and streams. That means that their curvature could be an adaptation that allows C. crescentus to better develop in the water currents the organisms experience in nature.

In the new paper, first author Persat, corresponding author Gitai and Howard Stone, Princeton’s Donald R. Dixon ’69 and Elizabeth W. Dixon Professor of Mechanical and Aerospace Engineering, report that curvature does more than just help C. crescentus hold their ground in moving fluid. The researchers monitored C. crescentus growth on surfaces in flow and found that the bacteria’s arched anatomy is crucial to flourishing as a group.

“It didn’t take a long time to figure out how flow brought out the advantages of curvature,” Persat said. “The obvious thing to me as someone with a fluid-dynamics background was that this shape had something to do with fluid flow.”

The findings emphasize the need to study bacteria in a naturalistic setting, said Gitai, whose group focuses on how bacterial shapes are genetically determined. While a petri dish generally suffices for this line of study, the functionality of bacterial genes and anatomy can be elusive in most lab settings, he said. For instance, he said, 80 percent of the genes in C. crescentus are seemingly disposable — but they might not be in nature.

“We now see there can be benefits to bacterial shapes that are only seen in a growth environment that is close to the bacteria’s natural environment,” Gitai said.

“For C. crescentus, the ecology was telling us there is an advantage to being curved, but nothing we previously did in the lab could detect what that was,” he said. “We need to not only think of the chemical environment of the bacteria — we also need to think of the physical environment. I think of this research as opening a whole new axis of studying bacteria.”

While most bacteria grow and divide as two identical “daughter” cells, C. crescentus divides asymmetrically. A “stalked” mother cell anchors to a bacterium’s home surface while the upper unattached portion forms a new, juvenile version of the stalked cell known as a “swarmer” cell. The swarmer cells eventually detach, then morph into stalked cells and lay down roots nearby. They repeat the life cycle with their own swarmer cell and the bacterial colony grows.

The Princeton researchers found that in moving water, curvature points the swarmer cell toward the surface to which it needs to attach. This ensures that the bacteria’s next generation does not stray too far from its progenitors, as well as from the nutrients that prompted cell division in the first place, Gitai said. On the other hand, the upper cells of straight bacteria — which are comparatively higher from the ground — are as likely to be carried far away as they are to stay near home.

But the advantage of curvature only goes so far. The researchers found that when the water current was too strong, both curved and straight bacteria were pressed flat against the surface, eliminating the curved cells’ colonization advantage.

These findings put some interesting boundaries on what is known about C. crescentus, starting with the upper limits of the current in which the organism can thrive, Gitai said. He and Persat also plan to pursue whether the bacteria are able to straighten out and cast offspring downstream when the home colony faces a decline in available nutrients.

At the same time, understanding why C. crescentus got its curve helps in figuring out the evolution of other bacteria, he said. Close relatives of the bacteria, for example, are not curved — could it have to do with the severity of their natural environment, such as the powerful turbulence of an ocean? Harmful bacteria such as Vibrio cholerae, strains of which cause cholera, are curved, though the reason is unclear. It’s possible this shape could be related to the organism’s environment in a way that might help treat those infected by it, Gitai said.

Whatever the reason for a specific bacterium’s shape, the Princeton research shows that exploring the influence of its natural habitat could be worthwhile, Gitai said.

“It was clear with C. crescentus that we needed to try something different,” Gitai said. “People didn’t really think of flow as a major driver of this bacteria’s evolution. That really is a new idea.”

The work was supported by the Gordon and Betty Moore Foundation (grant no. GBMF 2550.02), the National Science Foundation (grant no. CBET-1234500), and the National Institutes of Health Director’s New Investigator Innovator Award (grant no. 1DP2OD004389).

Smaller groups actually tend to make more accurate decisions, according to a new study from Princeton University Professor Iain Couzin and graduate student Albert Kao. (Photo credit: Gabriel Miller)

By Morgan Kelly, Office of Communications

The trope that the likelihood of an accurate group decision increases with the abundance of brains involved might not hold up when a collective faces a variety of factors — as often happens in life and nature. Instead, Princeton University researchers report that smaller groups actually tend to make more accurate decisions, while larger assemblies may become excessively focused on only certain pieces of information.

The findings present a significant caveat to what is known about collective intelligence, or the “wisdom of crowds,” wherein individual observations — even if imperfect — coalesce into a single, accurate group decision. A classic example of crowd wisdom is English statistician Sir Francis Galton’s 1907 observation of a contest in which villagers attempted to guess the weight of an ox. Although not one of the 787 estimates was correct, the average of the guessed weights was a mere one pound short of the animal’s recorded heft. Along those lines, the consensus has been that group decisions are enhanced as more individuals have input.
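Galton’s observation is a textbook case of independent errors averaging out. A toy simulation (the true weight and guess spread below are illustrative stand-ins, not Galton’s data, though 787 is his reported count of guesses) shows the crowd’s mean landing far closer to the truth than a typical individual guess:

```python
import random

def crowd_vs_individual(true_weight=1198, n_guesses=787, spread=100, seed=42):
    """Compare the crowd-average error with the typical individual error
    for unbiased, independent guesses around the true value."""
    rng = random.Random(seed)
    guesses = [rng.gauss(true_weight, spread) for _ in range(n_guesses)]
    crowd_error = abs(sum(guesses) / n_guesses - true_weight)
    typical_error = sum(abs(g - true_weight) for g in guesses) / n_guesses
    return crowd_error, typical_error

crowd, typical = crowd_vs_individual()
print(crowd < typical)  # True: averaging shrinks the error roughly by sqrt(n_guesses)
```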

But collective decision-making has rarely been tested under complex, “realistic” circumstances where information comes from multiple sources, the Princeton researchers report in the journal Proceedings of the Royal Society B. In these scenarios, crowd wisdom peaks early then becomes less accurate as more individuals become involved, explained senior author Iain Couzin, a professor of ecology and evolutionary biology.

“This is an extension of the wisdom-of-crowds theory that allows us to relax the assumption that being in big groups is always the best way to make a decision,” Couzin said.

“It’s a starting point that opens up the possibility of capturing collective decision-making in a more realistic environment,” he said. “When we do see small groups of animals or organisms making decisions they are not necessarily compromising accuracy. They might actually do worse if more individuals were involved. I think that’s the new insight.”

Couzin and first author Albert Kao, a graduate student of ecology and evolutionary biology in Couzin’s group, created a theoretical model in which a “group” had to decide between two potential food sources. The group’s decision accuracy was determined by how well individuals could use two types of information: One that was known to all members of the group — known as correlated information — and another that was perceived by only some individuals, or uncorrelated information. The researchers found that the communal ability to pool both pieces of information into a correct, or accurate, decision was highest in a band of five to 20. After that, the accurate decision increasingly eluded the expanding group.
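A toy Monte Carlo version of this two-cue setup (the probabilities below are illustrative choices, not the parameters of Kao and Couzin’s model) reproduces the qualitative finding: majority-vote accuracy peaks at a small group size, then sinks toward the accuracy of the shared cue alone as the group grows and that cue comes to dominate:

```python
import random

def group_accuracy(n, trials=20000, p_shared=0.4, q_shared=0.55, q_private=0.6, seed=1):
    """Estimate majority-vote accuracy for a group of n choosing between two options.
    Each member follows a cue seen by the whole group (correlated information)
    with probability p_shared, otherwise its own independent observation
    (uncorrelated information)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        shared_is_right = rng.random() < q_shared   # one draw per trial, shared by all
        votes = 0
        for _ in range(n):
            if rng.random() < p_shared:
                votes += shared_is_right            # follow the common cue
            else:
                votes += rng.random() < q_private   # follow a private observation
        correct += 2 * votes > n                    # strict majority (odd n avoids ties)
    return correct / trials

for n in (1, 7, 51, 201):
    print(n, group_accuracy(n))  # accuracy peaks around n = 7, then decays toward q_shared
```

For very large groups the shared cue decides almost every trial, so accuracy approaches q_shared; small groups retain enough randomness ("noise") for the better private observations to occasionally override a wrong shared cue.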

At work, Kao said, was the dynamic between correlated and uncorrelated cues. With more individuals, that which is known by all members comes to dominate the decision-making process. The uncorrelated information gets drowned out, even if individuals within the group are still well aware of it.

In smaller groups, on the other hand, the lesser-known cues nonetheless earn as much consideration as the more common information. This is due to the more random nature of small groups, which is known as “noise” and typically seen as an unwelcome distraction. Couzin and Kao, however, found that noise is surprisingly advantageous in these smaller arrangements.

“It’s surprising that noise can enhance the collective decision,” Kao said. “The typical assumption is that the larger the group, the greater the collective intelligence.

“We found that if you increase group size, you see the wisdom-of-crowds benefit, but if the group gets too large there is an over-reliance on high-correlation information,” he said. “You would find yourself in a situation where the group uses that information to the point that it dominates the group’s decision-making.”

None of this is to suggest that large groups would benefit from axing members, Couzin said. The size threshold he and Kao found corresponds with the number of individuals making the decisions, not the size of the group overall. The researchers cite numerous studies — including many from Couzin’s lab — showing that decisions in animal groups such as schools of fish can often fall to a select few members. Thus, these organisms can exhibit highly coordinated movements despite vast numbers of individuals. (Such hierarchies could help animals realize a dual benefit of efficient decision-making and defense via strength-in-numbers, Kao said.)

“What’s important is the number of individuals making the decision,” Couzin said. “Just looking at group size per se is not necessarily relevant. It depends on the number of individuals making the decision.”

Kao, Albert B., and Iain D. Couzin. 2014. Decision accuracy in complex environments is often maximized by small group sizes. Proceedings of the Royal Society B. doi:10.1098/rspb.2013.3305. Article published online April 23, 2014.

This work was supported by a National Science Foundation Graduate Research Fellowship, National Science Foundation Doctoral Dissertation Improvement (grant no. 1210029), the National Science Foundation (grant no. PHY-0848755), the Office of Naval Research Award (no. N00014-09-1-1074), the Human Frontier Science Project (grant no. RGP0065/2012), the Army Research Office (grant no. W911NG-11-1-0385), and an NSF EAGER grant (no. IOS-1251585).

Princeton University graduate student Andrew Babbin (left) prepares a seawater collection device known as a rosette. The team used samples of seawater to determine how nitrogen is removed from the oceans. (Research photos courtesy of A. Babbin)

A decades-long debate over how nitrogen is removed from the ocean may now be settled by new findings from researchers at Princeton University and their collaborators at the University of Washington.

The debate centers on how nitrogen — one of the most important food sources for ocean life and a controller of atmospheric carbon dioxide — becomes converted to a form that can exit the ocean and return to the atmosphere where it is reused in the global nitrogen cycle.

Researchers have argued over which of two nitrogen-removal mechanisms, denitrification and anammox, is most important in the oceans. The question is not just a scientific curiosity, but has real world applications because one mechanism contributes more greenhouse gases to the atmosphere than the other.

“Nitrogen controls much of the productivity of the ocean,” said Andrew Babbin, first author of the study and a graduate student who works with Bess Ward, Princeton’s William J. Sinclair Professor of Geosciences. “Understanding nitrogen cycling is crucial to understanding the productivity of the oceans as well as the global climate,” he said.

In the new study, the researchers found that both of these nitrogen “exit strategies” are at work in the oceans, with denitrification mopping up about 70 percent of the nitrogen and anammox disposing of the rest.

The researchers also found that this 70-30 ratio could shift in response to changes in the quantity and quality of the nitrogen in need of removal. The study was published online this week in the journal Science.

The two other members of the research team were Richard Keil and Allan Devol, both professors at the University of Washington’s School of Oceanography.

The researchers collected the samples in 2012 in the ocean off Baja California.

Essential for the Earth’s life and climate, nitrogen is an element that cycles between soils and the atmosphere and between the atmosphere and the ocean. Bacteria near the surface help shuttle nitrogen into the ocean food chain by converting or “fixing” atmospheric nitrogen into forms that phytoplankton can use.

Without this fixed nitrogen, phytoplankton could not absorb carbon dioxide from the air, a feat that helps check today’s rising carbon dioxide levels in the atmosphere. When these tiny marine algae die or are consumed by predators, their biomass sinks to the ocean interior where it becomes food for other types of bacteria.

Until about 20 years ago, most scientists thought that denitrification, carried out by some of these bacteria, was the primary way that fixed nitrogen was recycled back to nitrogen gas. The second process, known as anaerobic ammonia oxidation, or anammox, was discovered by Dutch researchers studying how nitrogen is removed in sewage treatment plants.

Both processes occur in regions of the ocean that are naturally low in oxygen, or anoxic, due to local lack of water circulation and intense phytoplankton productivity overlying these regions. Within the world’s ocean, such regions occur only in the Arabian Sea and off the coasts of Peru and Mexico.

In these anoxic environments, anaerobic bacteria feast on the decaying phytoplankton, and in the process cause the denitrification of nitrate into nitrogen gas, which cannot be used as a nutrient by most phytoplankton. During this process, ammonium is also produced, although marine geochemists had never been able to detect the ammonium that they knew must be there.

That riddle was solved in the early 2000s by the discovery of the anammox reaction in the marine environment, in which anaerobic bacteria feed on ammonium and convert it to nitrogen gas.

But another riddle soon appeared: the anammox rates that Dutch and German teams of researchers measured in the oceans appeared to account for the entire nitrogen loss, leaving no role for denitrification.

Then in 2009, Ward’s team published a study in the journal Nature showing that denitrification was still a major actor in returning nitrogen to the air, at least in the Arabian Sea. The paper further fueled the controversy.

Back at Princeton, Ward suspected that both processes were necessary, with denitrification churning out the ammonium that anammox then converted to nitrogen gas.

To settle the issue, Ward and Babbin decided to look at exactly what was going on in anoxic ocean water when bacteria were given nitrogen and other nutrients to chew on.

They collected water samples from an anoxic region in the ocean south of Baja California and brought test tubes of the water into an on-ship laboratory. Working inside a sturdy, flexible “glove bag” to keep air from contaminating the low-oxygen water, Babbin added specific amounts and types of nitrogen and organic compounds to each test tube, and then noted whether denitrification or anammox occurred.

“We conducted a suite of experiments in which we added different types of organic matter, with variable ammonium content, to see if the ratio between denitrification and anammox would change,” said Babbin. “We found that not only did increased ammonia favor anammox as predicted, but that the precise proportions of nitrogen loss matched exactly as predicted based on the ammonium content.”

The explanation of why, in past experiments, some researchers found mostly denitrification while others found only anammox comes down to a sort of “bloom and bust” cycle of phytoplankton life, explained Ward.

“If you have a big plankton bloom, then when those organisms die, a large amount of organic matter will sink and be degraded,” she said, “but we scientists are not always there to measure this. In other words, if you aren’t there on the day lunch is delivered, you won’t measure these processes.”

The researchers also linked the rates of nitrogen loss with the supply of organic material that drives the rates: more organic material equates to more nitrogen loss, so the quantity of the material matters too, Babbin said.

The two pathways have distinct metabolisms that turn out to be important in global climate change, he said. “Denitrification produces carbon dioxide and both produces and consumes nitrous oxide, which is another major greenhouse gas and an ozone depletion agent,” he said. “Anammox, however, consumes carbon dioxide and has no known nitrous oxide byproduct. The balance between the two therefore has a significant impact on the production and consumption of greenhouse gases in the ocean.”

The research was funded by National Science Foundation grant OCE-1029951.

Completion of a promising experimental facility at the U.S. Department of Energy’s Princeton Plasma Physics Laboratory (PPPL) could advance the development of fusion as a clean and abundant source of energy for generating electricity, according to a PPPL paper published this month in the journal IEEE Transactions on Plasma Science.

The facility, called the Quasi-Axisymmetric Stellarator Research (QUASAR) experiment, represents the first of a new class of fusion reactors based on the innovative theory of quasi-axisymmetry, which makes it possible to design a magnetic bottle that combines the advantages of the stellarator with the more widely used tokamak design. Experiments in QUASAR would test this theory. Construction of QUASAR — originally known as the National Compact Stellarator Experiment — began in 2004 and was halted in 2008, when costs exceeded projections, after some 80 percent of the machine’s major components had been built or procured.

“This type of facility must have a place on the roadmap to fusion,” said physicist George “Hutch” Neilson, the head of the Advanced Projects Department at PPPL.

Both stellarators and tokamaks use magnetic fields to control the hot, charged plasma gas that fuels fusion reactions. While tokamaks put electric current into the plasma to complete the magnetic confinement and hold the gas together, stellarators don’t require such a current to keep the plasma bottled up. Stellarators rely instead on twisting — or 3D — magnetic fields to contain the plasma in a controlled “steady state.”

Stellarator plasmas thus run little risk of disrupting — or falling apart — as can happen in tokamaks if the internal current abruptly shuts off. Developing systems to suppress or mitigate such disruptions is a challenge that builders of tokamaks like ITER, the international fusion experiment under construction in France, must face.

Stellarators had been the main line of fusion development in the 1950s and early 1960s before taking a back seat to tokamaks, whose symmetrical, doughnut-shaped magnetic field geometry produced good plasma confinement and proved easier to create. But breakthroughs in computing and physics understanding have revitalized interest in the twisty, cruller-shaped stellarator design and made it the subject of major experiments in Japan and Germany.

PPPL developed the QUASAR facility with both stellarators and tokamaks in mind. Tokamaks produce magnetic fields and a plasma shape that are the same all the way around the axis of the machine — a feature known as “axisymmetry.” QUASAR is symmetrical too, but in a different way. While QUASAR was designed to produce a twisting and curving magnetic field, the strength of that field varies gently as in a tokamak — hence the name “quasi-symmetry” (QS) for the design. This property of the field strength was predicted to produce plasma confinement properties identical to those of tokamaks.

“If the predicted near-equivalence in the confinement physics can be validated experimentally,” Neilson said, “then the development of the QS line may be able to continue as essentially a ‘3D tokamak.’”

Such development would test whether a QUASAR-like design could be a candidate for a demonstration — or DEMO — fusion facility that would pave the way for construction of a commercial fusion reactor that would generate electricity for the power grid.

The research was supported by the U.S. Department of Energy under contract DE-AC02-09CH11466. Princeton University manages PPPL, which is part of the national laboratory system funded by the U.S. Department of Energy through the Office of Science.

While carbon dioxide is typically painted as the bad boy of greenhouse gases, methane is roughly 30 times more potent as a heat-trapping gas. New research in the journal Nature indicates that for each degree that the Earth’s temperature rises, the amount of methane entering the atmosphere from microorganisms dwelling in lake sediment and freshwater wetlands — the primary sources of the gas — will increase several times. As temperatures rise, the relative increase of methane emissions will outpace that of carbon dioxide from these sources, the researchers report.

The findings condense the complex and varied process by which methane — currently the third most prevalent greenhouse gas after carbon dioxide and water vapor — enters the atmosphere into a measurement scientists can use, explained co-author Cristian Gudasz, a visiting postdoctoral research associate in Princeton’s Department of Ecology and Evolutionary Biology. In freshwater systems, methane is produced as microorganisms digest organic matter, a process known as “methanogenesis.” This process hinges on a slew of temperature, chemical, physical and ecological factors that can bedevil scientists working to model how the Earth’s systems will contribute, and respond, to a hotter future.

The researchers’ findings suggest that methane emissions from freshwater systems will likely rise with the global temperature, Gudasz said. But not knowing the extent of the methane contribution from such widely dispersed ecosystems — which include lakes, swamps, marshes and rice paddies — leaves a glaring hole in climate projections.

“The freshwater systems we talk about in our paper are an important component to the climate system,” Gudasz said. “There is more and more evidence that they have a contribution to the methane emissions. Methane produced from natural or manmade freshwater systems will increase with temperature.”

To provide a simple and accurate way for climate modelers to account for methanogenesis, Gudasz and his co-authors analyzed nearly 1,600 measurements of temperature and methane emissions from 127 freshwater ecosystems across the globe.

New research in the journal Nature found that for each degree that the Earth’s temperature rises, the amount of methane entering the atmosphere from microorganisms dwelling in freshwater wetlands — a primary source of the gas — will increase several times. The researchers analyzed nearly 1,600 measurements of temperature and methane emissions from 127 freshwater ecosystems across the globe (above), including lakes, swamps, marshes and rice paddies. The size of each point corresponds with the average rate of methane emissions in milligrams per square meter, per day, during the course of the study. The smallest points indicate less than one milligram per square meter, while the largest-sized point represents more than three milligrams. (Image courtesy of Cristian Gudasz)

The researchers found that a common effect emerged from those studies: freshwater methane generation very much thrives on high temperatures. Methane emissions would be 57 times higher at 30 degrees Celsius than at 0 degrees Celsius, the researchers report. For those inclined to model it, the researchers’ results translated to a temperature dependence of 0.96 electron volts (eV), an indication of the temperature sensitivity of the methane-emitting ecosystems.
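As a rough illustration of how the two figures fit together, the reported 0.96 eV temperature dependence can be plugged into a simple Boltzmann (Arrhenius-type) factor, exp(−E/kT), and compared at the two temperatures. This is a hedged back-of-envelope sketch, not the authors’ actual statistical model; the function name and structure here are illustrative.

```python
import math

# Boltzmann constant in electron volts per kelvin
K_B = 8.617e-5  # eV/K

def emission_ratio(e_ev, t_cold_c, t_warm_c):
    """Ratio of Boltzmann factors exp(-E/kT) between two temperatures (Celsius)."""
    t_cold = t_cold_c + 273.15  # convert to kelvin
    t_warm = t_warm_c + 273.15
    return math.exp((e_ev / K_B) * (1.0 / t_cold - 1.0 / t_warm))

# Reported temperature dependence of 0.96 eV, comparing 0 °C with 30 °C
print(round(emission_ratio(0.96, 0.0, 30.0)))  # prints 57
```

Evaluating the factor with E = 0.96 eV between 273 K and 303 K indeed gives a ratio of about 57, consistent with the increase the researchers report.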

“We all want to make predictions about greenhouse gas emissions and their impact on global warming,” Gudasz said. “Looking across these scales and constraining them as we have in this paper will allow us to make better predictions.”