Category Archives: Atmospheric CO2 levels


That is the assertion of a paper in Nature by Carolyn W. Snyder (Snyder, 2016), based on an analysis of the correlation between atmospheric CO2 concentrations and changes in the global average surface temperature (GAST) over the past 800,000 years. Actually, the assertion is that the 95% “credible interval” is 7 to 13 degrees Celsius (12.6 to 23.4 degrees Fahrenheit). Yikes! Even the current scientific consensus value of something on the order of 3 C (5.4 F) (Collins et al., 2013) is frightening when you consider that the difference in the GAST between the last glacial maximum about 20,000 years ago and the present wasn’t much larger than that.
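The Fahrenheit figures above are just unit conversions of the Celsius interval. Because these are temperature differences, not absolute readings, the conversion omits the usual +32 offset; a quick check:

```python
def delta_c_to_f(delta_c):
    """Convert a temperature *difference* in degrees C to degrees F.

    No +32 offset is applied, since these are changes, not absolute
    thermometer readings.
    """
    return delta_c * 9.0 / 5.0

# Snyder's 95% credible interval, in degrees C.
low_c, high_c = 7.0, 13.0
print(delta_c_to_f(low_c), delta_c_to_f(high_c))  # 12.6 23.4
```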

Atmospheric CO2 levels are always lower during glacial periods than during interglacials like the one we are in now. During the last glacial maximum 20,000 years ago, for example, they were at 190 parts per million (ppm), whereas during the most recent 10,000 years, almost up to the present, they have been about 280 ppm. [We have now succeeded in raising them to over 400 ppm and still counting, but that’s a different story.] Eric Galbraith and S. Eggleston (Galbraith and Eggleston, 2017) argue that, as far as we know, atmospheric CO2 levels have never gone below the typical glacial level of 190 ppm, even in extended snowball earth conditions. Why not? A carefully reasoned 2009 paper they cite (Pagani et al., 2009) suggests that even in the mostly warm conditions of the last 24 million years, whenever CO2 levels fell below about 190 ppm, terrestrial plants stopped photosynthesizing effectively. They thus not only stopped removing CO2 from the atmosphere directly, but also stopped the active root growth that increases soil acidity and enhances the chemical weathering of silicate rocks, which removes CO2 from the soil and ultimately from the atmosphere. Galbraith and Eggleston argue that the same thing has been happening during the glacial periods of the last 800,000 years, and they extend the argument to the photosynthesis of oceanic phytoplankton. To wit: when CO2 levels fall below 190 ppm, CO2 removal from the atmosphere by photosynthesis and chemical weathering is sharply reduced, so levels decline no further.

A glance at the graph above, from the University of East Anglia Climatic Research Unit (http://www.cru.uea.ac.uk/), shows that the last two time periods covered (encompassing 2015) are warmer than at any time since 1850. In the prior decade, however, there was much less of an upward trend, feeding speculation, particularly from climate-change deniers, that the warming we have seen since 1900 was largely unrelated to anthropogenic carbon dioxide emissions. Other scientists speculated that the apparent slowdown in warming was statistically “in the noise” and that, in time, the monotonic upward trend in place since the mid-1970s would resume, as it now seems to have. Time will tell, of course, but mainstream climate scientists Fyfe et al. (2016) have just made a new analysis of the early-2000s warming slowdown and pronounce it real, and probably largely attributable to the negative phase of the Interdecadal Pacific Oscillation (IPO) during those years, in which intensified trade winds lowered sea surface temperatures enough to offset the warming from ongoing increases in atmospheric greenhouse gases.
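Whether a short stretch of years shows a real slowdown comes down to trend estimation. As a minimal sketch (with made-up anomaly values, not CRU data), a decadal trend can be estimated by ordinary least squares:

```python
def ols_slope(xs, ys):
    """Ordinary least squares slope of ys regressed against xs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical annual temperature anomalies (degrees C) for 2001-2010,
# illustrative only -- a nearly flat stretch like the "slowdown" years.
years = list(range(2001, 2011))
anomalies = [0.40, 0.45, 0.44, 0.43, 0.47, 0.42, 0.41, 0.40, 0.44, 0.48]
trend_per_decade = ols_slope(years, anomalies) * 10  # degrees C per decade
print(round(trend_per_decade, 3))
```

Whether such a small slope differs meaningfully from the longer-term trend is exactly the statistical question Fyfe et al. address.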

Maize (corn) production continues to be a very important source of food, feed, and fuel all around the world, but climate change has raised concerns about maintaining yields. A negative relationship between extremely high temperatures (above 30˚C) and yield has already been observed in various regions. Previous studies, however, have not been able to demonstrate which mechanism causes the correlation between extreme temperatures and yield, so it is possible that the relationship reflects the influence of another variable, such as precipitation. This study explores the mechanisms behind the relationships documented in those studies of extreme heat and rainfed maize, using the process-based Agricultural Production Systems Simulator (APSIM). The study asks three main questions: Can APSIM reproduce the empirical relationships (what farmers are seeing on the ground)? If so, what does APSIM imply are the key processes giving rise to these relationships? And how much are these relationships affected by changes in atmospheric CO2?
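Empirical studies of this kind typically quantify extreme heat as cumulative exposure above a temperature threshold, often called extreme degree days. A minimal sketch of that metric, with hypothetical daily values:

```python
def extreme_degree_days(daily_tmax, threshold=30.0):
    """Sum of daily maximum temperature excesses above a threshold (deg C)."""
    return sum(max(0.0, t - threshold) for t in daily_tmax)

# Hypothetical daily maximum temperatures over part of a growing season.
season_tmax = [28.0, 31.5, 33.0, 29.5, 34.0, 30.0]
print(extreme_degree_days(season_tmax))  # 1.5 + 3.0 + 4.0 = 8.5
```

Regressing yields against a measure like this, rather than against mean temperature, is what isolates the damage done by the hottest days.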

Nitrogen fertilizer, crucial for growing commercial crops, is based on ammonia made in factories using the energy- and CO2-intensive Haber-Bosch process: hydrogen is stripped off natural gas using steam, then reacted with nitrogen from the air. The process requires repeated cycling at high temperature and pressure, and consumes 2% of the world’s energy production. Stuart Licht and colleagues at George Washington University noticed, however, that a recently developed fuel cell using ammonia as a fuel and producing electricity as an output might be run in reverse: electricity in, ammonia out, with far less temperature and pressure (and energy) required. Even better, it wouldn’t need natural gas as a hydrogen source, with its attendant CO2 production, since it can get hydrogen from air and steam at a temperature lower than that of a household oven baking bread, and at ambient pressure. Furthermore, only simple materials would be required: molten sodium and potassium hydroxide (inexpensive commodity chemicals), nickel electrodes, and an iron oxide catalyst, all in a single pot.

The amount of the powerful greenhouse gas methane (the main component of natural gas) released into the atmosphere by farmers and by gas and oil companies is substantially underestimated by the USEPA, according to a study by a team led by Scott Miller, a Harvard Ph.D. student, published late last year in the Proceedings of the National Academy of Sciences. The previously unaccounted-for sources are ruminants, manure, and fossil fuel extraction and processing in the South-Central US, traced back to their sources from fixed-tower and aircraft-based methane sensors using the Stochastic Time-Inverted Lagrangian Transport model (STILT). On the other hand, Hristov et al. (2014), in response to the Miller et al. paper, calculated cattle methane releases from the ground up, based on the number of cattle in the US, and concluded that the EPA had it right in the first place. So we have a calculation based on numbers of cattle competing with empirical data from methane sensors processed through an interesting atmospheric model. This type of atmospheric modeling, tracing airborne chemicals back to their sources, is what NASA is about to use with the data from its newly launched (July 2, 2014) Orbiting Carbon Observatory, OCO-2, except that the observatory will detect CO2 rather than methane. No public data from it are available as yet, but since methane oxidizes to CO2 in the atmosphere, it is possible that the satellite will soon confirm the sources of the methane. You can find out about the OCO-2 instrument here: http://1.usa.gov/1qT3KDI
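The ground-up approach of Hristov et al. amounts to multiplying animal inventories by per-head emission factors. A toy sketch of that bookkeeping follows; all herd sizes and factors below are illustrative placeholders, not the paper's values:

```python
# Toy bottom-up methane inventory: herd sizes times per-head emission factors.
# All numbers are illustrative placeholders, not values from Hristov et al.
EMISSION_FACTORS_KG_CH4_PER_HEAD_YR = {
    "dairy_cattle": 120.0,  # hypothetical enteric CH4 factor
    "beef_cattle": 55.0,    # hypothetical enteric CH4 factor
}
HERD_SIZES = {
    "dairy_cattle": 9_000_000,   # hypothetical head count
    "beef_cattle": 80_000_000,   # hypothetical head count
}

def total_ch4_tg_per_year(herds, factors):
    """Total methane emissions in teragrams (Tg) per year."""
    kg = sum(herds[k] * factors[k] for k in herds)
    return kg / 1e9  # 1 Tg = 1e9 kg

print(total_ch4_tg_per_year(HERD_SIZES, EMISSION_FACTORS_KG_CH4_PER_HEAD_YR))
```

The contrast in the post is between this kind of accounting, which stands or falls on the accuracy of the counts and factors, and the top-down sensor-plus-transport-model estimate of Miller et al.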

Anthropogenic carbon emissions have been a large factor in climate change since the start of the industrial revolution, and scientists have become increasingly concerned with the warming and other effects associated with the release of carbon into the atmosphere. Currently, most world governments have set a target that limits warming to two degrees Celsius above preindustrial times. With this target in place, policies are then enacted to limit carbon emissions and, hopefully, to mitigate anthropogenic effects on Earth’s climate. Steinacher et al. (2013) set out to show that setting a temperature target alone is not sufficient to control many other effects of anthropogenic carbon emissions, such as sea level rise and ocean acidification. They find that when targets are also set for these other factors, the allowable carbon emissions are much lower than current targets based on temperature alone.