GeoLog » Geosciences Column
The official blog of the European Geosciences Union

Geosciences Column: When water is scarce, understanding how we can save it is important
Fri, 17 Jul 2015

Supplies of water on Earth are running dry. The rate at which an ever-growing population consumes this precious resource is not matched by our planet's ability to replenish it. Water scarcity is proving a problem globally, with regions such as California and Brazil facing some of the most severe water shortages on record. Used for drinking, agriculture and industrial processes, water forms a fundamental part of our day-to-day life, so finding ways to preserve this vital resource is important.

The global population now exceeds 7.3 billion people. One of the greatest challenges of the 21st century will be to feed this ever-growing population – by 2050 crop production will have to double to meet demand. At the same time, agricultural irrigation currently accounts for approximately 80-90% of global freshwater consumption, while the agricultural area requiring irrigation has roughly doubled in the past 50 years. With both space and freshwater in short supply, innovative solutions and fresh approaches will be needed if the increase in crop demand is to be met.

The fields in the image are farmed on seemingly vertical hillsides. Terraced fields are present nearly to the top of every available mountain, and ploughed by hand or with a draft animal. Terraces, by Cheng Su, distributed via imaggeo.

It might come as a bit of a surprise that current irrigation systems operate at efficiencies of 50% or below. Water is wasted both as it is transported to the crops and as it is applied to the plants, with losses affected not only by the irrigation system itself but also by meteorological and environmental factors. A recent paper published in the open access EGU journal Hydrology and Earth System Sciences has found that improving current irrigation practices can contribute to sustainable food security.

To better understand where efficiencies might be made in irrigation systems, the scientists used a new approach: they took into account 'manageable' factors such as water lost through evaporation, run-off, deep percolation and uptake by weeds, while also assessing the mechanical performance of the systems and the vegetation dynamics, climate, soils and land use properties of a particular region. These factors were fed into a global irrigation model applied to the three main irrigation types: surface, sprinkler and drip.

The researchers created maps of the global distribution of irrigation systems at a country level, based on the results from their model. The maps showed that in areas where surface irrigation – where water is distributed over the surface of a field – is common, irrigation system efficiency was low, sometimes registering values of less than 30%! This is particularly applicable to Central, South and Southeast Asia due to the widespread cultivation of rice. In contrast, in areas with high usage of sprinkler systems – which mimic natural rainfall – and drip systems (where water is allowed to drip slowly to the root of the plant), such as North America, Brazil, South Africa, Ivory Coast and Europe, efficiency was above the global average.

Global patterns of beneficial irrigation efficiency for each irrigation system (a) surface, (b) sprinkler, and (c) drip. This figure is based on theoretical scenarios, in which each system is respectively assumed to be applied on the entire irrigated area. From Jägermeyr et al., 2015. Click to enlarge.

To investigate how the three irrigation system types compared to one another, irrespective of their geographical distribution, the researchers produced another map. They found that surface irrigation is the least efficient of the three methods, with values of less than 29%. Sprinkler and drip systems perform significantly better, with values of 51% and 70%, respectively. Interestingly, regardless of the system used, irrigation efficiency in Pakistan, northeast India and Bangladesh is always below the global average. Crop type can also play an important role: rice, pulses and rapeseed are linked to poor system efficiencies, whilst maize, sugarcane and root crops (such as potatoes) are above average.

Jägermeyr, the study's lead author, and his team calculated that 2469 km³ of water is withdrawn yearly for irrigation purposes – close to 5 times the volume of water held in the Canadian/American Lake Erie. Of that, 608 km³ is non-beneficially consumed: in other words, lost through evaporation, interception by foliage and during delivery to the plants. This represents an area where substantial water savings could be made.
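The scale of these figures is easier to grasp with a quick back-of-envelope check. This is only a sketch: the Lake Erie volume of roughly 484 km³ is an assumed, commonly cited value, not a number from the study.

```python
# Back-of-envelope check of the irrigation water figures above.
withdrawn = 2469.0        # km³ of water withdrawn for irrigation per year (from the study)
non_beneficial = 608.0    # km³ lost to evaporation, interception and delivery (from the study)
lake_erie = 484.0         # km³, an assumed commonly cited volume for Lake Erie

print(f"Withdrawals / Lake Erie volume: {withdrawn / lake_erie:.1f}x")
print(f"Non-beneficial share of withdrawals: {non_beneficial / withdrawn:.0%}")
```

The ratio works out to about 5.1 times Lake Erie, consistent with the "close to 5 times" above, and shows that roughly a quarter of all irrigation withdrawals are consumed non-beneficially.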

Replacing surface irrigation with a sprinkler or drip system proves to be one of the best solutions to the problem, with a potential 76% reduction in non-beneficial consumption of water. This would mean that up to 68% less water would be needed for the purposes of irrigating crops.

Irrigation system improvements could therefore make an important contribution to sustainably increasing food production. The water saved would allow irrigated areas to be expanded and yields to be increased on farms where production is currently limited by an insufficient water supply.

The upgrade of irrigation systems seems a very attractive solution to the problem, but the researchers warn that its suitability must be assessed at a river basin level. Factors such as crop management, soil type and local climate may affect the suitability of this approach in some geographical areas. The study finds that regions such as the Sahel, Korea and Madagascar, as well as temperate regions in Europe, North America, Brazil and parts of China, would benefit the most from irrigation system improvements.

Geoscience Column: Recent and future changes in the Greenland Ice Sheet
Fri, 05 Jun 2015

Over the past few decades, the Arctic region has warmed more than any other on Earth. The Greenland Ice Sheet is losing mass faster than ever before, and is expected to keep melting, with consequences for global sea-level rise and ocean circulation. At a media briefing during the EGU's General Assembly in April, researchers presented new results on the factors that influence the Greenland Ice Sheet's rapid and profound changes – from glacial lakes to clouds and snow darkening.

The vast expanse of the Greenland Ice Sheet covers an area of 1.71 million km² (approximately a tenth of the size of Russia), and holds a staggering volume of ice: 2.85 million km³. The ice sheet is rivalled in size by only one other: the Antarctic Ice Sheet. Scientists have calculated that the Greenland Ice Sheet stores enough freshwater to raise sea level by 7.4 m, should all the ice melt, so understanding what causes the ice to melt now and in the future is critical!
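The 7.4 m figure can be roughly sanity-checked with a first-order calculation. This is only a sketch: the ice density (~917 kg/m³) and global ocean area (~3.61 × 10⁸ km²) are assumed textbook values, and the estimate ignores effects such as ocean-area change and ice already sitting below sea level, which is why it lands slightly below the published number.

```python
# First-order estimate of sea-level rise from the Greenland Ice Sheet figures above.
ice_volume_km3 = 2.85e6                              # from the article
water_equivalent_km3 = ice_volume_km3 * 917 / 1000   # ice (~917 kg/m³) melted to liquid water
ocean_area_km2 = 3.61e8                              # assumed global ocean area

rise_m = water_equivalent_km3 / ocean_area_km2 * 1000  # km -> m
print(f"Potential sea-level rise: {rise_m:.1f} m")
```

The simple estimate comes out near 7.2 m, close enough to the 7.4 m figure to show where the number comes from.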

The importance of clouds

When you think of clouds, you probably think of them as purveyors of rain and bad weather. But that is not all: clouds form an intrinsic part of the climate system, in ways more complex than simply how they affect day-to-day weather. In Greenland, as elsewhere across the globe, clouds are a source of precipitation, bringing all-important snow which accumulates on the ice sheet and makes it grow in size.

Clouds also affect temperatures: on a clear day you'll feel the warmth of the sun on your back, but as night falls temperatures start dropping quickly as heat is lost to the atmosphere. If clouds roll in during the late afternoon, however, the night will be warmer, as clouds stop heat being lost to the atmosphere. If they stick around long enough, though, they promote cooling, as they reflect sunlight away from the Earth's surface.

"On a global scale, clouds (on average) tend to cool the Earth's surface, but there are many regional differences," explained Kristof Van Tricht, a PhD student at the University of Leuven in Belgium, during the press conference.

It turns out that, in Greenland, the warming effect of clouds dominates, and warming of the surface encourages melting of the ice sheet. However, the remoteness of the area means that direct observations of just how much the clouds warm the surface, and to what extent this impacts the ice sheet, have been limited. Until now.

Using satellite observations, Van Tricht and his team have been able to study the warming effect of clouds in more detail than ever before. Their models show that, in the presence of clouds, the Greenland Ice Sheet can be up to 1.2°C warmer, which can cause substantial melting. Compared to models run without cloud cover, the ice sheet could melt up to 38% more. This equates to 12% more runoff from the ice sheet into the oceans, solely due to the presence of clouds.

Predictions of what the findings mean for the ice sheet in the future are tricky though. The scientists’ model is based on real-time observations and so it isn’t possible to look into the future. For that, improved cloud model simulations are needed.

Beautiful lakes

Lakes form seasonally on the surface of the Greenland Ice Sheet as run-off water pools in depressions in the ice. Although beautiful to look at, because they are darker than the surrounding ice they absorb more heat. The lakes also drain sporadically, and when they do, some of the water they hold drains through the ice, making its way to the base of the ice sheet. Once there, the water lubricates the base of the ice sheet, encouraging it to flow more easily and quickly towards the ocean. Combined, these two effects alter the dynamics of the ice sheet.

Drained supraglacial lake bed. This lake has drained through the bottom for several years in a row. The large block was initially formed in the summer of 2006, but large cracks run through it from subsequent lake drainages. Credit: Ian Joughin (distributed via imaggeo.egu.eu).

At present, the lakes generally form within the ablation zone – the low-altitude regions towards the edges of the ice sheets where ice is lost through melting, evaporation, calving and other processes – where it is already warmest on the ice sheet.

At the press conference, Andrew Shepherd presented research carried out by Amber Leeson on how the locations at which supraglacial lakes (lakes that form on the surface of the ice) develop might change with a warming climate, and what this means for the Greenland Ice Sheet.

As the climate warms, higher-altitude regions on the ice sheet will warm too. Through building a hydrological model, Leeson found that the lakes spread farther inland. According to Leeson's simulations:

"by 2050, the lakes have spread about 50 to 100 km further inland, so more of the ice sheet is potentially exposed to this lubrication effect," added Shepherd.

Previous studies of how the ice sheet might respond to a warming climate do not consider the effects of the added melt water volume at the base of the ice sheet as a result of more lakes at the surface. Leeson’s findings mean that these models need to be re-run so that scientists can fully understand the potential implications. This is particularly true in terms of the lubrication effect at the base of the ice and whether the ice will more readily slip towards the oceans, potentially heightening the risk of sea level rise.

This blog post presents only some of the findings discussed during the press conference; other aspects were also covered in the media.

Floods and droughts set to increase due to climate change
Wed, 29 Apr 2015

The planet is set to encounter record levels of floods and droughts by 2050, researchers announced at the European Geosciences Union's General Assembly in Vienna. Nikita Marwaha shares their predictions on the impact that climate change will have on these extreme weather conditions.

In a study by the Joint Research Centre (JRC) – the European Commission's in-house science service – new climate impact models are being used to determine future flood risk in Europe under conditions of climate change. These state-of-the-art models, presented by JRC scientist Lorenzo Alfieri, indicate that the change in frequency of extreme river discharge is likely to have a larger impact on the overall flood hazard than changes in their magnitude.

"We predict a 150% increase in future flood risk by 2050," Alfieri said. This dramatic increase means that the so-called "floods of the century" – floods we currently experience on average once every 100 years – would double in frequency, submerging much of Europe under water within the next few decades. As a result, the extent of damage and number of people affected are expected to increase by 220% by the end of the century.
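The "flood of the century" claim is a statement about return periods: doubling a flood's frequency halves its return period. A minimal sketch of the arithmetic (the 30-year horizon below is an illustrative choice, not a figure from the study):

```python
# Return-period arithmetic: doubling flood frequency halves the return period,
# raising the annual exceedance probability from 1/100 to 1/50.
p_now, p_future = 1 / 100, 2 / 100

def prob_at_least_one(p_annual, years):
    """Chance of seeing at least one such flood within a given horizon."""
    return 1 - (1 - p_annual) ** years

print(f"Return period now: {1 / p_now:.0f} yr, future: {1 / p_future:.0f} yr")
print(f"Chance in 30 years, now:    {prob_at_least_one(p_now, 30):.0%}")
print(f"Chance in 30 years, future: {prob_at_least_one(p_future, 30):.0%}")
```

Over a 30-year horizon the chance of experiencing at least one such flood rises from roughly a quarter to nearly half, which is why a doubling in frequency matters so much for planning.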

With more lives predicted to be touched by this climate change-induced flooding, it is of utmost importance to accurately calculate projections of future flood events and to assess the situation that our planet faces. In this study, the JRC applied the most recent climate change projections to assess future flood risk in Europe. Using statistical tools and dedicated analysis, flood simulation was carried out to evaluate changes in the frequency of extreme river discharge peaks.

These projections of future flood events were then combined with data on the exposure and vulnerability of populations, in order to estimate the overall flood risk in Europe under a high-emission climate scenario. Socio-economic scenarios were also investigated. The research addressed both current and future scenarios – with the dates of 2020, 2050 and 2080 used in the socio-economic impact models of large, European river floods.

Alfieri estimated that between 500,000 and 640,000 people will be affected by river floods by 2050, increasing to 540,000 – 950,000 by 2080, as compared to 216,000 in today’s climate. A wider range was found for the annual economic impact of flood damage. It is currently estimated at 5.3 billion EUR, set to rise to between 20 and 40 billion EUR in 2050 and to between 30 and 100 billion EUR in 2080. Such predictions are dependent on future economic growth, resulting in the larger range of figures presented at the conference.

Another extreme weather condition the planet faces is drought, which is set to increase before the middle of the century. Yusuke Satoh, a researcher from the International Institute for Applied Systems Analysis (IIASA), shared new research suggesting that some parts of the world may see unprecedented levels of drought before 2050. These new findings urge swift action to adapt reservoirs and water management policies in response to depleting water resources.

“Our study shows an increasing urgency for water management systems to adapt for future drought”, Satoh said in a statement at the press conference. “In order for policymakers to plan for adaptation, they need to know when and where this is likely to happen, and have an understanding of the levels of uncertainty in such projections”.

Droughts are predicted to grow more severe and frequent by 2050 for 13 of the 26 countries mapped by the organisation. The study proposes a new measure – the Timing of Perception Change for Drought (TPCD) – marking the point at which drought surpasses all historical records. Countries will reach TPCD at varying times, with the western United States feeling the effects as early as 2017 and the Mediterranean by 2027, at current emission rates.

The new study by IIASA combined five different global climate models to examine two different scenarios for future climate change – a 1°C and a 3.7°C rise in temperatures by 2100. This multi-model approach allowed researchers to address uncertainty in the projections, since climate change is a manmade environmental issue that is difficult to accurately foresee using just one climate model.
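The value of a multi-model ensemble can be sketched in a few lines: the ensemble mean gives a central estimate, while the spread exposes uncertainty that a single model would hide. The drought-onset years below are invented purely for illustration; they are not IIASA results.

```python
# Sketch of ensemble statistics: a central estimate plus a spread.
from statistics import mean, stdev

# Hypothetical 'record-drought onset year' projected by five models for one region
onset_years = [2031, 2044, 2027, 2052, 2038]

print(f"Ensemble mean onset: {mean(onset_years):.0f}")
print(f"Spread (std dev):    {stdev(onset_years):.1f} years")
print(f"Full range:          {min(onset_years)}-{max(onset_years)}")
```

A single model would report one year with false precision; the ensemble makes clear that the projections disagree by a decade or more, which is exactly the uncertainty Satoh argues policymakers need to see.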

From this research, the predicted arrival date of these record-breaking droughts was found to be more uncertain in the Sahara, sub-Saharan Africa and South Australia regions, with certainty very high in southern South America and the Central United States.

Being aware of where the uncertainty lies is important. It allows policymakers and water resource managers to prepare for greater future variations in water availability, since the historical data that today's hydrological structures are built on will eventually become void as climate change carves new figures into the history books.

Satoh advised measures such as releasing water from reservoirs during the dry season to relieve the onset of future dryness. “The earlier we take this seriously, the better we will be able to adapt”, he said.

Managing seasonal water storage and water use will allow us to address both the natural and manmade causes of hydrological drought, giving us better control as the effects of climate change begin to set in.

By Nikita Marwaha, EGU Press Assistant and EJR-Quartz Editor

Geosciences Column: The quest for life on Mars
Fri, 20 Feb 2015

Understanding where we come from and whether Earth is the only habitable planet in the Solar System has been a long-standing conundrum in science. Partly because it is our nearest neighbour, and partly because of its past and current similarities with our own home, Mars, the red planet, is a likely contender in the quest for extra-terrestrial life. In this guest blog post, James Lewis, a PhD student at Imperial College London, takes a brief look at the findings of his recent research. Strap in, we are rocketing over to Mars!

Mars has always been at the forefront of our imaginations when we picture alien life, and the discoveries planetary science has made in recent decades reveal that the idea of our neighbouring world having once been inhabited is not so far-fetched. Mars appears to have once been a habitable world; the question is, did life ever exist there? This is one of the questions that the Curiosity rover is attempting to shed light on, but results so far have been inconclusive. One potential problem is that the mineralogy of Mars might seriously disrupt experiments looking for evidence of ancient microscopic Martians. Chlorine salts have already been proven to be problematic, and in research published today, summarised in this article, I have shown that a salt containing iron, sulfur and oxygen, known as jarosite, can also be added to the list of problematic minerals for life detection experiments.

Eberswalde Delta on Mars, evidence for an ancient persistent flow of water over an extended period of time on the Martian surface. Image Credit: NASA/JPL/MSSS.

The satellites, landers and rovers sent to Mars have started to unravel many of the mysteries of the red planet. Perhaps their most exciting discovery is that ancient Mars may have been a habitable environment for life. The Martian surface at present is extremely cold, exceptionally dry and bombarded by ultraviolet radiation. The atmosphere is at such a low pressure that liquid water would instantly vaporise. However, characteristic landforms and the presence of minerals that we know only form in water have revealed that ancient Mars had persistent surface or near-surface liquid water. The presence of liquid water is exciting because it is a precursor for life, and for it to persist on the surface would require a warmer, thicker atmosphere.

This potentially habitable liquid water existed billions of years ago, so how can we investigate whether life ever existed in these environments? If ancient Martians existed, they would likely have been microscopic organisms like bacteria on Earth. We could look for the fossils they might leave behind, but these features would be extremely small and there are many non-biological processes that can form similar structures. The least ambiguous evidence would be to find chemical compounds that only life leaves behind. As biological molecules contain carbon, they fall under a chemical class called organic compounds. However, not all organic compounds are biological: for example, asteroids and comets contain non-biological organic compounds that formed in the early Solar System.

Comets and asteroids have been impacting Mars throughout its history, so when we send missions to Mars we would expect to see the organic molecules delivered by impacts from outer space. The strange thing is that we haven't. If we can't detect compounds we know should be there, what are our chances of detecting possible organic compounds indicative of life? All that has been detected so far are very simple organic compounds with chlorine attached. Their origin is uncertain, as similar compounds are used as cleaning agents on Earth and sometimes as reagents inside the rovers, so they could just be contamination. However, recent discoveries have complicated things even further: in 2008 a salt called perchlorate was discovered on Mars. Perchlorate is very rare on Earth, as it is only stable in very arid environments such as the Atacama Desert and the Dry Valleys of Antarctica. Perchlorate has now been discovered by multiple Mars missions, so it would appear it is widespread in the extremely arid present-day Martian surface.

The Phoenix Lander made the first detection of perchlorate on Mars in 2008. Dusty Martian soil can be seen in the background and on the Lander’s frame. Image Credit: NASA/JPL-Caltech/University of Arizona/Texas A&M University.

Perchlorate is a big complication in our search for organic compounds on Mars. The most common technique used to analyse samples for the presence of organic compounds is to heat materials in an inert atmosphere until organic compounds break down and go into the gas phase. The chemical composition of this gas can then be analysed. For example, on the Curiosity rover the gas passes from the sample oven into a gas chromatograph and then a mass spectrometer, which separates out the constituent gases and identifies them. The problem with perchlorate is that it breaks down at low temperatures – in fact, just at the temperatures at which organic molecules would start to break down and be detectable. Perchlorate releases oxygen and chlorine when it thermally decomposes. Oxygen will react with, and break down, organic compounds into carbon dioxide and water, so if it is present in the sample heating oven it will greatly reduce the instrument's ability to detect organic molecules. The simultaneous release of chlorine by perchlorate could also chemically alter the products of heating experiments. This may explain why, so far, we have only detected simple chlorinated organic molecules on Mars.

I wanted to investigate whether perchlorate is the only mineral that might have a negative influence on our search for organic compounds on Mars. I analysed a group of minerals called sulfates. They contain sulfur and oxygen in the form SO4 and include common minerals such as gypsum. When sulfates thermally break down they release sulfur dioxide and oxygen, so they have the potential to be problematic like perchlorate. However, most break down at very high temperatures (above 1000 °C), which is sufficiently high not to interfere with the release of organic molecules from samples during heating experiments. Iron sulfates, however, start to break down at dramatically lower temperatures: they can decompose to give off sulfur dioxide and oxygen from around 500 °C, around the same temperature at which large, complex organic molecules might start to break down and be detectable. I was particularly interested in an iron sulfate called jarosite, as it has been detected on Mars, including recently by the Curiosity rover, and forms in wet, acidic conditions. It is therefore indicative of ancient wet environments that existed on Mars and may have once been inhabited by microorganisms, as similar environments on present-day Earth, such as Río Tinto in Spain, are a habitat for acid-resistant bacteria.

I conducted fieldwork on Brownsea Island, a small island in the south of the United Kingdom. If you walk along the southern coast of Brownsea you will often see crusts of a soft yellow mineral on the short cliffs. This is jarosite; it grows here because the clay-rich rocks that make up the cliff face contain the iron and sulfur mineral pyrite, which reacts with water and the atmosphere to form jarosite. The geology here is a perfect case study, as the rocks also contain a tough form of organic matter called lignite, a low rank of coal. I crushed the sample into a powder so that I had a mix of jarosite, clay and organic compounds. I then heated this powder at different temperatures to see if I would be able to detect the organic compounds contained in the sample. Unfortunately, all I could detect was carbon dioxide, carbon monoxide, water and sulfur dioxide. The first three are compounds you would expect to detect if organic matter was breaking down and reacting with oxygen, and the sulfur dioxide indicated that the jarosite was thermally decomposing. When a sulfate breaks down we know that sulfur dioxide is paired with oxygen, but when I heated this sample the oxygen wasn't detectable: it had been consumed by reacting with organic compounds and breaking them down. From these results, jarosite can now be added to the list of problematic minerals on Mars, alongside perchlorate.

Jarosite is a soft yellow mineral and can be seen growing on the clay rich cliffs of Brownsea Island, UK. As it is an iron mineral it can rust if exposed at the surface long enough in wet conditions. The orange-brown layer at the base of the cliff and the dark patches in the hand sample are rust. Image Credit: James Lewis.

Jarosite is indicative of environments that may have been habitable for life, so simply avoiding it is not a satisfactory solution. Though it has a major negative influence on organic detection experiments, some interpretation may still be possible. If sulfur dioxide and carbon dioxide peak at the same time in Curiosity rover data, from a sample known to contain jarosite, it may be evidence that organic matter was present and reacting with oxygen. Unfortunately, a carbon dioxide peak does not always mean the presence of organic matter. Minerals known as carbonates contain carbon and oxygen in the form CO3, and when carbonates thermally decompose they produce carbon dioxide. Therefore the chance of a carbonate being the source of carbon dioxide seen in Curiosity rover data must be considered. Fortunately, Curiosity has the ability to assess the mineralogy of the material it is adding to its heating ovens for analysis, so the presence of carbonates can be checked.

Identifying which rock units on Mars might contain abundant organic compounds would be of great use to future missions that might return samples to the Earth, where a whole suite of laboratory techniques can be employed without the tight space and energy constraints of a rover or lander.

My research is published online today in the journal Astrobiology and will be free for all to read once the open access application is processed.

Geosciences Column: Fire in ice – the history of boreal forest fires told by Greenland ice cores
Fri, 13 Feb 2015

Burning of biomass contributes a significant amount of greenhouse gases to the atmosphere, which in turn influences regional air quality and global climate. Since the advent of humans, there has been a significant increase in the amount of biomass burning, particularly after the industrial revolution. What might not be immediately obvious is that naturally occurring fires also play a part in emitting particulates and greenhouse gases, which can absorb solar radiation and contribute to changing Earth's climate. Producing a reliable record of pre-industrial fire history, as a benchmark to better understand the role of fires in the carbon cycle and climate system, is the focus of research recently published in the open access journal Climate of the Past.

Did you know the combustion of biomass can emit up to 50% as much CO2 as the burning of fossil fuels? The incomplete burning of biomass during fires also produces significant amounts of a fine particle known as black carbon (BC). Compare BC to more familiar greenhouse gases such as methane, ozone and nitrous oxide and you’ll find it absorbs more incoming radiation than the usual suspects. In fact, it is the second largest contributor to climate change.

NEEM camp position and representation of boreal vegetation and land cover between 50 and 90° N. Modified from the European Commission Global Land Cover 2000 database and based on the work of cartographer Hugo Alhenius, UNEP/GRID-Arendal (Alhenius, 2003). From Zennaro et al. (2014). Click to enlarge.

The boreal zone contains 30% of the world's forests, including needle-leaved and scale-leaved evergreen trees such as conifers. They are common in North America, Europe and Siberia, but fire styles in these regions are diverse owing to differences in weather and local tree types. For instance, fires in Russia are known to be more intense than those in North America, despite which they burn less fuel and so produce fewer emissions. All boreal forest fires are important sources of pollutants in the Arctic. Models suggest that in the summertime the fires in Siberian forests are the main source of BC in the Arctic and, shockingly, exceed all contributions from man-made sources!

To build a history of forest fires over a 2000-year period, the researchers used ice cores from the Greenland ice sheet. Compounds such as ammonium, nitrate, BC and charcoal (amongst others) are products of biomass burning, and can be measured in ice cores, acting as indicators of distant forest fires. Measure a single compound, however, and your results can't guarantee the signature is that of a forest fire, as these compounds can often be released during the burning of other natural sources and fossil fuels. To overcome this, a combined approach is best. In this new study, researchers measured the concentrations of levoglucosan, charcoal and ammonium to detect the signature of forest fires in the ice. Levoglucosan is a particularly good indicator because it is released during the burning of cellulose – a building block of trees – and is efficiently injected into the atmosphere via smoke plumes and deposited on the surface of glaciers.

The findings indicate that spikes in levoglucosan concentrations measured in the ice from the Greenland ice sheet correlate with known fire activity in the Northern Hemisphere, as well as with peaks in charcoal concentrations. Indeed, a proportion of the peaks indicate very large fire events in the last 2000 years. The links don’t end there! Spikes in concentrations of all three measured compounds record a strong fire in 1973 CE. Taking into account errors in the age model, this event can be correlated with a heat wave and severe drought in Russia during 1972 CE, which was reported in The New York Times and The Palm Beach Post at the time.

Ice core. Credit: Tour of the drilling facility by Eli Duke, Flickr.

The results show that a strong link exists between temperature, precipitation and the onset of fires. Increased atmospheric CO2 leads to higher temperatures, which result in greater plant productivity, creating more fuel for future fires. In periods of drought the risk of fire is increased. This is confirmed in the ice core studied, as a period of heightened fire activity from 1500–1700 CE coincides with an extensive period of drought in Asia at a time when the monsoons failed. More importantly, the concentrations of levoglucosan measured during this time exceed those of the past 150 years, when land-clearing by burning, for agricultural and other purposes, became commonplace. And so it seems that the occurrence of boreal forest fires has, until now, been influenced more by variability in climate than by anthropogenic activity. What remains unclear is what effect continued climate change might have on the number and intensity of boreal forest fires in the future.

Geosciences Column: Do roads mean landslides are more likely?
http://blogs.egu.eu/geolog/2015/01/16/geosciences-column-do-roads-mean-landslides-are-more-likely/
Fri, 16 Jan 2015 15:19:49 +0000

Landslides have been in the news frequently over the past 12 months or so. It’s not surprising considering their devastating consequences and potential impact on nearby communities. Data collected by Dave Petley in his Landslide Blog shows that from January to July 2014 alone, there were 222 landslides that caused loss of life, resulting in 1466 deaths.

A recent paper in the journal Natural Hazards and Earth System Sciences investigates the potential effect human activity can have on the occurrence of landslide events. There is no denying that landslide susceptibility has been increased by human activity. Global warming and greater precipitation are key contributing factors to the rise in the number of landslides occurring globally. On a local scale, the building of infrastructure, particularly roads, and the felling of trees to make way for agriculture are largely to blame for increased numbers of slides and slumps.

Overview of the study area with mean annual precipitation patterns (top panel), and its location in southern Ecuador (lower left panel). Highways Troncal de la Sierra E35 and Transversal Sur E50 extend in the north–south and east–west direction, respectively. The numbers along the street refer to the corresponding geological unit (1: unconsolidated rocks; 2: sedimentary rocks; 3: volcanic rocks; 4: metamorphic rocks; 5: plutonic rocks). Precipitation data are taken from the study of Rollenbeck and Bendix (2001). From Brenning et al., (2015). Click on the image for a larger version.

The research presented in the paper focuses on landslides along mountain roads in Ecuador, where drainage systems and hillside stabilisation are often inadequate, which is known to increase the likelihood of landslides. This problem is not exclusive to Ecuador and is often linked to poorer infrastructure and engineering in developing countries. In addition, the study area is a tropical mountain ecosystem, which is naturally more sensitive and prone to landslides. The key question here is: are more landslides likely to happen close to a road (in this particular case, an interurban highway), or does greater distance from it offer some hazard relief?

The geology, local climate and vegetation are also important factors to take into consideration when carrying out an assessment of this nature. Highways E35 and E50 run through southern Ecuador and intersect the Cordillera Real, which creates a strong local climate divide and generates a precipitation gradient along the area studied. Páramo ecosystems are dominant towards the east, whilst tropical dry forests are common in the west. The geology is also variable across the area studied: dipping and jointed metamorphic rocks are dominant, but are in contact with horizontally layered sedimentary units of loose conglomerates and sandstones. Additionally, the hillsides running along the highways are often deforested to make way for coffee, sugar cane and banana crops. When they are not, they are commonly handed over to cattle for grazing.

By mapping, in great detail, all landslide occurrences within a 300 m corridor along the highways, the researchers were able to digitise 2185 landslide initiation points! In total, 843 landslides were mapped and classified by recording the type of movement experienced, as well as the material type (soil, debris or rock) and whether the slide was still active, inactive or had been reactivated. The detailed data meant it was possible to statistically model the likelihood of landslides occurring in close proximity to the highway (25 m) vs. some distance away (200 m). The results showed that susceptibility to landslides increases by one order of magnitude close to the highway when compared to areas 150–300 m away from the mountain road. Furthermore, slides close to the highway were found to be more likely to be reactivated than those a greater distance away.
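What "one order of magnitude" means here can be made concrete with a toy calculation. The counts below are hypothetical, not the paper's data (the study fitted a proper statistical susceptibility model); the sketch simply shows how an odds ratio compares landslide occurrence in a near-road band with a far band:

```python
# Hypothetical terrain-cell counts, NOT the paper's data: how many mapped
# cells contain a landslide initiation point in each distance band.
near = {"slide": 300, "stable": 700}   # within ~25 m of the highway
far = {"slide": 40, "stable": 960}     # 150-300 m from the highway

def odds(counts):
    # Odds of a cell hosting a landslide: slide cells per stable cell.
    return counts["slide"] / counts["stable"]

odds_ratio = odds(near) / odds(far)
print(f"Odds ratio, near vs. far: {odds_ratio:.1f}")
```

With these made-up counts the odds ratio comes out around ten, i.e. one order of magnitude, which is the kind of contrast the study reports between the 25 m and 150–300 m bands.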

The study found that the local topography, geology and climate conditions had a lesser influence on the likelihood of landslides. However, the influence of stretches of mountain road constructed in the sedimentary units seems to enhance the hazard.

Landslides occurring along the investigated highways. (a) Typical landslides of the wet metamorphic part of the study area in the east. (b) Typical landslides of the semi-arid, conglomeratic part of the study area in the west. (c) Highway destroyed by landsliding. (d) A highway is cleared after a recent landslide. From Brenning et al., (2015).

In future, the model can be used to predict locations where landslides are more likely to occur along the E35 and E50. Recently, engineering works have been carried out along the studied stretch of highways to stabilise the hillsides. The data collected as part of the research presented in the paper will be useful in the future to monitor the efficacy of the improvements. On a larger scale, further studies of this type could be used by local governments when planning new infrastructure and could lead to incorporation of cost-effective mitigation measures in new developments.

When Astronomy Gets Closer to Home: Why space weather outreach is important and how to give it impact
http://blogs.egu.eu/geolog/2014/12/19/when-astronomy-gets-closer-to-home-why-space-weather-outreach-is-important-and-how-to-give-it-impact/
Fri, 19 Dec 2014 12:00:16 +0000

When the public think about natural hazards, space weather is not the first thing to come to mind. Yet, though uncommon, extreme space weather events can have an economic impact similar to that of large floods or earthquakes. Although there have been efforts across various sectors of society to communicate this topic, many people are still quite confused about it, having only a limited understanding of the relevance of space weather in their daily lives. As such, it is crucial to properly communicate this topic to a variety of audiences. This article explores why we should communicate space weather research, how it can be framed for different audiences and how researchers, science communicators, policy makers and the public can raise awareness of the topic.

Introduction

As you sit reading this article, the Sun is brimming with activity. The yellow disc in the sky may appear unimpressive but when looking in the extreme ultraviolet region of the spectrum, the Sun’s hot active regions glow bright (Figure 1). These are areas with an especially strong magnetic field — manifested in the form of dark patches or sunspots on the solar surface — that can be the source of explosive bursts of energy and solar material. Even though the Sun is some 150 million kilometres away, these solar storms can alter the near-Earth space environment, changing our space weather.

Of the solar storms that can hit the Earth, the most damaging are coronal mass ejections. These high-speed bursts of solar material — if powerful enough and directed towards our planet with the proper orientation of their magnetic field — can disturb the Earth’s magnetic field, creating a geomagnetic storm. This can impact power grids and pipelines, and affect communications and transportation systems. Coronal mass ejections and other solar storms such as solar flares — outbursts of radiation and high-energy particles — can also affect spacecraft and satellites and even be a radiation hazard for astronauts and air crews flying at high latitudes and altitudes.

Figure 1. The Sun in the extreme ultraviolet, imaged by NASA’s Solar Dynamics Observatory on 04 December 2014. This wavelength highlights the outer atmosphere of the Sun (corona) and active solar regions, which appear bright in the image. Solar flares and coronal mass ejections would also be highlighted in this channel. (Credit: Image courtesy of NASA/SDO and the AIA, EVE, and HMI science teams)

The importance of communicating space weather research

Space weather may be a concept unfamiliar to many, but, as with any natural hazard, it is important that the public know about it and understand the potential dangers. At its most extreme space weather can cause large-scale power blackouts and, thus, affect global supply chains including food and water supplies, damaging livelihoods and the economy in the process. Severe space weather occurs about once a century on average (Riley, 2012), but milder events can disrupt human activity once or twice per decade (POST Note, 2010). At a time when we are over-reliant on technology and our power grids are more connected than ever, meaning they are more vulnerable to space weather, telling people about this natural hazard becomes all the more crucial.
Space weather is an area of astronomy much closer to home than most, which can in itself act as a hook for audiences, whether children or policy makers. After all, most people have either seen or heard about the most visible and stunning space weather-related phenomenon, the aurora, which forms when particles from the Sun energise the atoms in the Earth’s atmosphere making it glow (Figures 2 and 3).

Communicating space weather is an opportunity to get others interested in space and science, and to inspire younger people to pursue a career in these areas. In more general terms researchers of space weather, as is the case with many areas of astronomy, have much to gain from communicating their research. Communicating space weather as a researcher can help to improve a CV, hone presentation and writing skills and bring a new perspective to research. Expanding the audience for this research beyond the astronomy community can further lead to interdisciplinary collaborations and an increase in citations for relevant research papers.

In addition, communicating space weather research with the public is a way of justifying the taxpayers’ money that funds most solar–terrestrial research. Engaging the public with this often-forgotten subject area could increase public support for it and inform policy, ensuring that legislation relating to space weather is based on sound science.

As with more general astronomy or science outreach, before communicating space weather it is important to define an audience. Will this be a talk at a school or an article for a popular astronomy magazine? Is the aim to brief engineers who work on infrastructure protection or to give evidence to a parliamentary committee? The message needs to be targeted to the public that the communicator is reaching out to.

Communicating with young people or a general audience

When communicating with school children, focussing on the Sun and the fascinating aspects of solar–terrestrial science is a way to get the audience excited rather than scared about space weather. For both younger crowds and the wider public, the use of images, videos, animations and other visuals helps to captivate the audience’s attention and can go a long way towards explaining tricky topics.

To help familiarise the audience with complex concepts, it is often useful to use everyday analogies and examples — like using a peppercorn and a football to give an idea of the relative sizes of the Earth and Sun. In addition, as with other topics, it is important for the communicator to speak or write clearly and avoid technical terms when reaching out to a general audience.
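The peppercorn-and-football analogy can even be checked with a quick back-of-the-envelope calculation. The diameters below are rounded published values, and the football size is an assumed ~22 cm:

```python
# Approximate diameters (rounded published values).
sun_km = 1_391_000      # Sun
earth_km = 12_742       # Earth
football_mm = 220       # a standard football, roughly 22 cm across (assumed)

ratio = sun_km / earth_km                # the Sun is ~109 Earth diameters wide
scaled_earth_mm = football_mm / ratio    # Earth shrunk to the football's scale

print(f"Sun/Earth diameter ratio: {ratio:.0f}")
print(f"Earth next to a football-sized Sun: {scaled_earth_mm:.1f} mm")
```

The scaled Earth comes out at about 2 mm across, peppercorn territory, which is why the analogy lands so well with audiences.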

The language can be more technical when communicating with engineers or policy makers, but should still be free of discipline-specific jargon. Engineers are likely interested in finding out about the properties of solar storms and how spacecraft can be made more resilient, or how the effects of geomagnetic storms could be mitigated to avoid excessive damage to technological infrastructure. Policy makers want the facts given in a balanced, clear and objective way, and are interested in space weather aspects with policy relevance, such as monitoring, resilience and funding.

Real-world examples and avoiding scaremongering

A crucial aspect is to strike a balance between informing about the dangers of space weather and avoiding scaremongering. The communicator should give concrete examples of past events that have affected human activity. Typical examples include the famous 1859 Carrington event, which affected telegraph systems and caused aurorae as far south as Cuba (Bell, 2008); the 1989 Quebec geomagnetic storm that caused a power blackout affecting several million people and temporarily paralysed the Montreal metro and international airport (POST Note, 2010); or the Halloween storms of 2003 over northern Europe that damaged satellites, caused a blackout in Sweden, and forced airlines to reroute trans-polar flights (POST Note, 2010).
These events illustrate that space weather is something that the public and policy makers need to be aware of because it can affect their daily lives. But it’s also important to explain that geomagnetic storms, particularly severe ones that could cause trillions of euros in damage, are not very common (Workshop report, 2008). It is important to raise awareness of space weather and educate the public on the best ways to prepare for and mitigate space weather without getting people needlessly worried about its impact. Always finish on a positive note when doing space weather outreach.

Getting involved as a science communicator, scientist or member of the public

For those convinced about the importance of engaging the public with space weather, and confident about delivering a targeted and informative message, there are many opportunities to get involved in space weather outreach. If you are an astronomy communicator, and thus likely to already be writing popular science articles or giving presentations about various aspects of astronomy, why not choose space weather as your next topic? As a researcher, you could bring space weather to the public at science cafes, blog about your work, give talks at local schools, or — if you are preparing a new and exciting paper on the topic — reach out to journalists through the press office at your institution.

Experienced scientists have an additional responsibility to communicate with policy makers. They can reach this audience by providing input to a policy briefing, such as those written by the Parliamentary Office of Science and Technology (POST) in the UK, or by contributing to a governmental report through their research council. Scientists can also apply to serve as science advisers to their local politician or to a governmental body, or join science policy groups in their country to raise the importance of space weather in the political agenda.

Finally, if you are a member of the public who knows little about space weather, but is interested in finding out more, you can help researchers and communicators in this area by taking part in public consultations, such as the Space Weather Public Dialogue underway (at the time of writing) in the UK, which is open to people from all countries. The aim of this project is to help UK research councils and entities find out more about how to best communicate space weather and its impacts and to evaluate the public’s level of preparedness.

If you want to communicate space weather, or help others do it more effectively, there are plenty of opportunities out there to get involved. Be enthusiastic and pro-active, and encourage others to raise public awareness about what happens on the Sun and in our local space environment.

# # #

Acknowledgements
This article is based on a presentation given at a session of the European Geosciences Union 2014 General Assembly in Vienna on 2 May 2014. The session, titled ‘Raising and Maintaining Awareness of our Local Space Weather: Education and public outreach’, was convened by Athanasios Papaioannou and Jean Lilensten. I am grateful to Athanasios for inviting me to speak at the European Geosciences Union conference, for encouraging me to write this article, and for the useful comments that improved the initial draft of this text.

Geosciences Column: Is it possible to quantify the effect of natural emissions on climate?
http://blogs.egu.eu/geolog/2014/11/14/geosciences-coloumn-is-it-possible-to-quantify-the-effect-of-natural-emissions-on-climate/
Fri, 14 Nov 2014 12:00:41 +0000

The air we breathe is full of tiny particles that can have a big impact on our climate. Industrial activities have greatly increased the number of these particles, cooling the climate and potentially offsetting some of the warming due to greenhouse gases. In this post Kirsty Pringle introduces new research that suggests it might not be possible to quantify the effect of industrial emissions on climate unless we constrain estimates of the natural emissions (Carslaw et al., 2013). Kirsty is a member of Ken Carslaw’s research group at the University of Leeds (UK), which performed the research.

Deserts, oceans, fire and even pine forests emit tiny particles called aerosol into the atmosphere. Human (anthropogenic) activities also play a role; burning fuel, e.g. in cars or power-plants, emits particles and aerosol concentrations have increased substantially since the start of the industrial revolution. One effect of these particles is to change the properties of clouds, causing them to become brighter, producing a net cooling that can offset some of the warming due to greenhouse gases. Treating this effect in climate models is challenging and it remains one of the most poorly quantified areas of climate science.

Schematic of the first aerosol indirect forcing: Human (anthropogenic) emissions add aerosol particles to the atmosphere, these particles can act as condensation sites, which aids cloud droplet formation. Anthropogenic emissions result in clouds with more, smaller, cloud droplets. These clouds are brighter and reflect more solar radiation, resulting in a net cooling. The first aerosol indirect forcing (AIE) is the change in the reflected solar radiation between the present day and the pre-industrial scenarios due to this effect. Click image for a larger version. (Credit: Kirsty Pringle)

This cloud brightening effect is called the first aerosol indirect effect (AIE). It is thought to produce a global average cooling sufficient to offset between 25 and 90% of the warming due to long-lived greenhouse gases (IPCC). The AIE is challenging to treat in climate models as atmospheric aerosol has a range of sources, the magnitudes of which are quite uncertain. Aerosols also undergo a series of processing steps in the atmosphere, which can change their properties, affecting both their lifetime and their ability to interact with clouds. These processing steps are difficult to represent in climate models and this complexity contributes to the large range of estimates.

As the aerosol indirect effect (AIE) is potentially large, but very uncertain, it is important to try to understand where this uncertainty arises. This has been a focus of Ken Carslaw’s research group for the past four years where a new collaboration between aerosol scientists and a statistician has resulted in some very long meetings, a new approach to uncertainty analysis, and the first study that has identified and quantified the factors that contribute to uncertainty in model estimates of the AIE.

I should clarify that Carslaw just considered parametric uncertainty: the uncertainty associated with inputs to the model, e.g. the magnitude of the emissions or uncertain values used within parameterisations. There are other types of model uncertainty, e.g. structural uncertainty, that were not considered in this study. Parametric uncertainty is, however, intrinsic to all climate models, so it is an important starting point.

The first step was to choose which parameters to focus on. The team did this by talking with other scientists to identify which input parameters everyone felt were most uncertain; together they estimated the maximum, minimum and median value for each parameter. They identified 28 uncertain parameters in total; these can be grouped into three categories:

Process parameters used within the aerosol microphysics (e.g. the rate of aerosol aging, or the wet deposition parameter), 14 parameters.

Parameters controlling natural emissions (e.g. aerosol from oceans, fires and vegetation).

Parameters controlling anthropogenic emissions (e.g. aerosol from fuel burning in cars and power plants).

The contribution of each parameter to the uncertainty in the AIE calculation can be found using a statistical technique called Monte Carlo analysis, but to perform Monte Carlo sampling on 28 uncertain parameters one would need to run many thousands of model simulations, which isn’t possible with a complex model. To avoid this, Carslaw ran a few hundred simulations and used a statistical emulator to carry out the Monte Carlo sampling much faster. The emulator is a statistical model that “learns” from the output of the computer model; it can be used to interpolate from the hundreds of runs performed to the thousands of runs needed for the statistical analysis.
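The emulator idea can be sketched in a few lines. The "expensive model" below is a cheap stand-in function of one parameter (the real study emulated a global aerosol model over 28 parameters), and the Gaussian-process emulator is a minimal NumPy implementation with an invented kernel and length scale, not the statistical package the group used:

```python
import numpy as np

# Toy stand-in for an "expensive" model run: in the real study each
# evaluation is a full global aerosol simulation.
def expensive_model(x):
    return np.sin(3 * x) + 0.5 * x

# Step 1: a small designed set of model runs (the study used a few hundred
# runs spanning 28 parameters; here, 10 runs over a single parameter).
X_train = np.linspace(0.0, 2.0, 10)
y_train = expensive_model(X_train)

# Step 2: fit a minimal Gaussian-process emulator (RBF kernel, noise-free).
def rbf(a, b, ell=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

K = rbf(X_train, X_train) + 1e-8 * np.eye(len(X_train))  # jitter for stability
alpha = np.linalg.solve(K, y_train)

def emulate(x_new):
    # GP posterior mean: a fast surrogate for the expensive model.
    return rbf(np.atleast_1d(x_new), X_train) @ alpha

# Step 3: Monte Carlo sample the cheap emulator, not the model itself.
rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 2.0, 100_000)  # draws of the uncertain parameter
mc_outputs = emulate(samples)
print(f"MC mean = {mc_outputs.mean():.3f}, MC spread = {mc_outputs.std():.3f}")
```

The point of the design is that the hundred-odd genuine model runs are spent once, up front, and the hundred thousand Monte Carlo draws then cost only matrix arithmetic.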

Schematic showing the methodology used by Carslaw to perform a sensitivity analysis on the first aerosol indirect effect (AIE). (Figure courtesy of Lindsay Lee).

Carslaw found that by varying the values of the uncertain parameters, the model produced a range of estimates of the strength of the AIE forcing that was similar to, but slightly smaller than, the range of the multi-model estimate from the IPCC. Surprisingly, 45% of the uncertainty was found to be due to uncertainty in natural emissions, 34% was due to uncertainty in anthropogenic emissions, and the remaining 21% was due to uncertainty in process parameters.

Magnitude and sources of uncertainty in the model estimate of the first aerosol indirect forcing. (Credit: Carslaw et al., 2013)

Although the forcing is caused by anthropogenic emissions, the amount of natural emissions has a large effect on how sensitive the climate is to these anthropogenic emissions: the natural emissions don’t produce the forcing but they contribute a lot to the uncertainty in the forcing.

This effect arises because the relationship between aerosol emissions and cloud brightness is not linear; instead it is curved, with cloud brightness being more sensitive to emissions when emissions are low (as they were in the pre-industrial atmosphere). This means that in the clean pre-industrial atmosphere a change in the amount of natural aerosol emissions has a large effect on cloud brightness: when natural emissions are small, the initial cloud brightness (albedo) is low and any anthropogenic emissions have a big impact on cloud brightness, so the calculated forcing is larger.
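The curvature argument can be illustrated numerically. The logarithmic albedo relation below is a hypothetical first-order stand-in for this saturating behaviour; the constants and aerosol numbers are invented for illustration, not taken from the paper:

```python
import numpy as np

def cloud_albedo(n_aerosol):
    # Hypothetical: albedo rises logarithmically with aerosol number,
    # so each extra particle matters less in an already-polluted sky.
    return 0.3 + 0.07 * np.log(n_aerosol)

anthropogenic = 400.0  # the same anthropogenic addition in both scenarios

for natural in (100.0, 1000.0):  # clean vs. aerosol-rich pre-industrial air
    change = cloud_albedo(natural + anthropogenic) - cloud_albedo(natural)
    print(f"natural emissions = {natural:6.0f} -> albedo change = {change:.3f}")
```

The identical anthropogenic addition brightens clouds roughly five times more over the clean baseline, which is why the assumed natural background contributes so much to the forcing uncertainty.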

Schematic explaining why the calculation of the first aerosol indirect forcing is sensitive to the magnitude of the natural aerosol emissions. (Credit: Kirsty Pringle).

This sensitivity to the natural aerosol emissions is important because it is very difficult to constrain estimates of natural aerosol emissions: measurements taken in today’s atmosphere are almost always affected by anthropogenic emissions. This means that some of the uncertainty in estimates of the first aerosol indirect effect may be irreducible, but it will still need to be considered in future estimates of warming due to greenhouse gases.

By Kirsty Pringle, Research Fellow, School of Earth and Environment, University of Leeds

GeoLog regularly brings readers information about recent research in the geosciences as well as updates on the EGU’s activities. Part of what makes GeoLog a great read is the variety that guest posts add to our regular features, and we welcome contributions from scientists, students and professionals in the Earth, planetary and space sciences. If you want to report on a recent Earth science event, conferences or fieldwork, comment on the latest geoscientific developments or highlight recently published findings in peer-reviewed journals, like Kirsty has done here, then we welcome your contribution. If you’ve got a great idea, why not submit a post?

Geosciences Column: Adapting to acidification, scientists add another piece to the puzzle
http://blogs.egu.eu/geolog/2014/09/12/geosciences-column-adapting-to-acidification-scientists-add-another-piece-to-the-puzzle/
Fri, 12 Sep 2014 10:30:15 +0000

In the latest Geosciences Column Sara Mynott sheds light on recent research into how ocean acidification is affecting the California Current Large Marine Ecosystem. The findings, published in Biogeosciences, reveal large differences between the abilities of different animals to adapt and highlight the urgent need to understand the way a greater suite of species are responding…

Large Marine Ecosystems (LMEs) are highly productive ocean areas that border the continents. To give you a flavour of just how productive we’re talking, together the world’s LMEs account for 80% of the global marine fisheries catch, making them incredibly important regions both socially and economically. The California Current Large Marine Ecosystem (CCLME) is one such system and covers the length of the US Pacific coast. But, like other ocean ecosystems, the CCLME is under threat from climate change.

Major changes in the carbonate chemistry of the oceans are expected over the next few decades, and changes in the California Current system are expected to be some of the most rapid. Determining how this system, and indeed other ecosystems, will respond is a significant challenge for biologists, ecologists and climate scientists alike.

In 2010, an interdisciplinary research group known as OMEGAS (Ocean Margin Ecosystems Group for Acidification Studies) set out to find answers by monitoring a ~1300 km stretch of the CCLME that runs from central Oregon to southern California. Because this stretch of ocean can be divided into distinct areas with differing pH and carbonate chemistry, the researchers could compare the characteristics of animals living in more acidic conditions with those living in a less acidic environment and assess their ability to adapt.

Like other LMEs, the California Current system is characterised by upwelling – a process that brings nutrient-rich deep water to the surface. Upwelling waters bring with them a change in pH. In the southern CCLME, there is regular upwelling but in the north it is intermittent. This means animals living off the Oregon coast experience more variable pH, and are exposed to lower pH water more often. By comparing animals in the north with those in the south of the study area, the OMEGAS scientists could effectively peer into the ecosystem’s future. The scientists were substituting space for time.

The California Current Large Marine Ecosystem, showing the sites monitored by OMEGAS for changes in the region’s biology and chemistry. Seawater is coloured according to temperature and land is shown in grey. (Credit: Hoffman et al., 2014)

By matching measurements of ocean properties, including pH, temperature and the amount of CO2 in the water, with information about the way different animals are responding to acidity (e.g. growth rate, shell thickness) and their genetic variation, the team are putting together a picture of how acidification is likely to affect the ecosystem in the future. One such animal is the purple sea urchin, a conspicuously bright spiny mass found throughout the CCLME, and an important control on the amount of algae carpeting the coast.

When peering at their skeletons for signs of acidification-related stress, the OMEGAS team found that the urchins differed little between sites – they were all tolerant of the pH range experienced across the CCLME. Urchin larvae travel large distances, rendering populations relatively homogeneous, so it isn’t too surprising. Taking a look at another ecologically important species, the Californian mussel, the team found that they were also made of hardy stuff, as growth in adult mussels was not reduced in low pH regions.

The news wasn’t all good though. A series of complementary experiments revealed that mussel larvae exposed to low pH water showed a decline in both growth and shell strength, similar to that seen in other young marine bivalves. Such a weakness would leave them more susceptible to attack from predators and, as ocean acidification continues, means they will become yet more vulnerable to predation in the future. Purple sea urchin larvae, on the other hand, could tolerate present day CO2 conditions, and higher levels had little influence on their growth and development. What’s more, studies of the sea urchin’s genetics revealed high genetic variation in the purple sea urchin population – a good indicator that they’d be able to adapt to future change.

California mussels, Mytilus californianus. (Credit: Stephen Bentsen)

The study highlights that the impact of acidification varies widely between species and that a greater understanding of how ocean acidification will affect a variety of marine organisms is urgently needed. The OMEGAS team are now figuring out the capacity of other organisms in the CCLME to adapt, including coralline algae, widely distributed algae with calcium carbonate skeletons, making them highly vulnerable to ocean acidification.

The team are continuing their work in an effort to find refuges that may be relatively safe from future acidification, populations and life stages that are particularly vulnerable and those that are able to adapt to the rate of change our oceans are currently experiencing. Understanding how multiple species can adapt is critical to creating a coherent picture of how acidification will affect regions such as the CCLME in the future.

Geosciences Column: The Toba eruption probably did have a global effect after all
Fri, 25 Jul 2014

Almost everyone has heard of the Toba super-eruption, which took place on the island of Sumatra roughly 74,000 years ago, yet the only evidence of tephra or tuff (volcanic fragments) from the eruption is found in Asia, with nothing definite further afield. It has long been suggested that this huge eruption led to a volcanic winter: a period of at least several years of low temperatures following a large eruption, caused by enormous quantities of volcanic ash entering the atmosphere and reducing the amount of sunlight reaching the surface. Sulphur aerosols also reduce the penetration of solar energy and increase the Earth's albedo, reflecting more solar radiation back to space and leading to cooler temperatures.

In a 2013 study, Anders Svensson and colleagues link the Toba eruption to the onset of Greenland Interstadials (GI) 19 and 20 and Antarctic Isotope Maxima (AIM) 19 and 20. Stadials are cold periods lasting less than a thousand years; interstadials are warmer periods lasting up to ten thousand years, occurring during an ice age but not lasting long enough to qualify as interglacial periods. The GI periods and AIMs are numbered on a scale based on Dansgaard-Oeschger events: relatively short-lived climate fluctuations that occurred 25 times during the last glacial period (110,000 to 12,000 years ago) and are used to help date events.

The study's claim rests on matching volcanic acidity spikes at both poles, produced by increased sulphur compounds in the atmosphere following the Toba eruption, against existing dates based on Asian tephra records and the bipolar seesaw hypothesis. This hypothesis explains the thousand-year offset between temperature changes at the two poles during the last glacial period as the result of a seesaw mechanism involving heat redistribution by the Atlantic Meridional Overturning Circulation (AMOC), whereby warm water moves north from southern waters, mixes with cooler water in Arctic regions, sinks, and returns south as deep bottom water.

The fluctuations in temperature associated with Dansgaard-Oeschger events over a few decades are reflected in Greenland ice cores, but Antarctic ice cores show slower changes over hundreds to thousands of years, out of phase with the Greenland records. A direct link was made by Carlo Barbante and colleagues in a 2006 paper. Using methane records from a northern Greenland ice core and oxygen isotope records from the Antarctic Dronning Maud Land ice core, they showed direct coupling between warm events in the Antarctic and the duration of cold events in Greenland, indicating a probable common origin in a reduction in AMOC. An increase in freshwater entering the North Atlantic during warming would slow heat transport towards the north, leading to cooler surface air temperatures there and warmer temperatures in southern waters, and vice versa. The signal takes several decades to pass from one hemisphere to the other, hence the name bipolar seesaw effect.

When volcanic gases, especially sulphurous gases, travel around the world in the atmosphere they form aerosols, which fall with polar precipitation and are trapped as bubbles. The bubbles are sealed in at the depth at which firn (dense, granular snow that looks like wet sugar) is compacted into ice. Successive precipitation and compaction over thousands of years thus preserves a record of atmospheric composition at different times, in trapped air bubbles that can be analysed in ice cores. The trapped gas at any given depth is, however, younger than the surrounding ice by 100-1000 years, depending on factors such as the thickness of the firn, temperature and the presence of impurities in the ice. Ice cores are a bit like tree rings in that their thickness and other properties reflect climatic conditions at the time, and layers can be counted annually in the younger parts of a core, before compaction makes them impossible to distinguish.

Isotope data for Greenland and Antarctic ice cores over the past 140,000 years. (Credit: Leland McInnes)

Oxygen isotope and atmospheric methane signals in ice cores were available to link two separate records (NGRIP and EDML) for the period covering 80-123 thousand years ago.

Different gases can be used to date a particular part of an ice core and, as can be seen above, δ2H in Antarctica and δ18O in Greenland were used for dating purposes. Beryllium-10 (10Be) is produced in the atmosphere and its levels change with solar activity and the Earth's magnetic field. It only remains in the atmosphere for one or two years at a time, so it can be used to synchronise data from different ice cores closely. A dating method using cosmogenic 10Be signals matched both Greenland and Antarctic ice cores to the Laschamp geomagnetic event (a short reversal of the Earth's magnetic field). This pinpoints a date of around 41 thousand years ago and provides a direct link between the ice-core horizons, so that the data could be matched and signs of the eruption sought at the known eruption time. With markers for 41,000 and 80,000 years ago synchronised for the two polar regions, the ice cores were examined for sulphate acidity spikes indicating a major eruption. The time scales were then combined for datasets from a number of ice cores (NGRIP, EDC, EDML and Vostok) to produce consistent scales for ice and gas records.
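The synchronisation step described above can be pictured as transferring an age scale between cores via shared tie points: once a marker (such as the 10Be peak at the Laschamp event) is identified at a known depth in each core, ages between markers are interpolated. Here is a minimal sketch of that idea; the function name, depths and ages below are all invented for illustration, not values from the study.

```python
# Hypothetical sketch of tie-point age transfer between ice cores.
# Real chronologies (e.g. GICC05) use annual-layer counting and many
# more markers; this only illustrates the interpolation principle.

def synchronise(depth, tie_depths, tie_ages):
    """Linearly interpolate an age (ka) for `depth` (m) between
    matched tie points shared by two cores."""
    for (d0, d1), (a0, a1) in zip(zip(tie_depths, tie_depths[1:]),
                                  zip(tie_ages, tie_ages[1:])):
        if d0 <= depth <= d1:
            frac = (depth - d0) / (d1 - d0)
            return a0 + frac * (a1 - a0)
    raise ValueError("depth outside tie-point range")

# Invented tie points: depths in a hypothetical Antarctic core matched
# to ages fixed by bipolar markers such as the Laschamp event (~41 ka).
tie_depths = [1500.0, 1800.0, 2100.0]  # metres
tie_ages = [41.0, 60.0, 80.0]          # thousand years ago

print(synchronise(1650.0, tie_depths, tie_ages))  # -> 50.5 (ka)
```

The same interpolation, run in the other direction, lets a feature dated in one core be located in the other, which is how an acidity spike at the known eruption time can be sought in both records.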

Using the Greenland (NGRIP) and Antarctic (EDML) ice-core data, the authors synchronised the records over roughly 2000 years around the known eruption time, using a pattern of bipolar volcanic spikes and the Greenland Ice Core Chronology 2005 defined for the NGRIP annual layer count. They found evidence of large quantities of atmospheric sulphates (usually linked to volcanic eruptions) in both sets of data, in the form of acidity spikes, and linked these to the Toba eruption.
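Identifying such acidity spikes amounts to flagging excursions that stand well above the background of the sulphate record. A toy version of that screening, assuming an entirely synthetic record and a simple median-based background (real analyses use far more careful baselines and bipolar matching), might look like this:

```python
# Hypothetical sketch: flag acidity spikes in an ice-core sulphate
# record as values exceeding a multiple of the background (median).
# The record below is synthetic, in arbitrary units.

def find_spikes(record, factor=3.0):
    """Return indices where the value exceeds `factor` x the median."""
    ordered = sorted(record)
    median = ordered[len(ordered) // 2]
    return [i for i, v in enumerate(record) if v > factor * median]

sulphate = [10, 11, 9, 12, 80, 10, 11, 95, 10, 9]
print(find_spikes(sulphate))  # -> [4, 7]
```

Spikes flagged this way in each core can then be compared, on the synchronised time scale, to see whether the same events appear at both poles.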

In fact, the authors found four bipolar acidity spikes within a few hundred years of the presumed Toba event, occurring between 74.1 and 74.5 thousand years ago and also supported by argon dating, suggesting that there may have been several eruptions. These Toba events occurred during a rapid climate change from warm interstadial to cold stadial conditions in Greenland, with the equivalent Antarctic warming within 100 years, fully consistent with the bipolar seesaw hypothesis.

Interestingly, another, more recent study showed that Lake Prespa in southeast Europe reached its lowest recorded level, or lowstand, at the time of the Toba eruption. The lowstand is dated to 73.6 ± 7.7 thousand years ago, based on electron spin resonance dating of shells. This short-lived lowstand also coincides with the onset of Greenland Stadial GS-20, suggesting a possible link to the Toba eruption.

The new data and more accurate dating described in these studies show that the Toba eruption (or series of eruptions) did, in fact, have a global impact.