Month: May 2015

Back in November of last year, the US and China came to an agreement to ensure a peak in carbon dioxide emissions by 2030. After 2030, yearly emissions would go down. Scholars in China estimated that emissions would peak in 2030 at 10.6 billion tonnes, 34% above the 2012 rate of 7.9 billion tonnes a year.1 Waiting this long to reduce emissions would be catastrophic, eliminating any chance we might have of staying beneath two degrees Celsius.
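
The arithmetic behind that projection is easy to check; the sketch below (a quick back-of-the-envelope in Python, using only the figures cited above) confirms the numbers are consistent:

```python
# Projected Chinese CO2 emissions at the 2030 peak, per the cited estimate.
emissions_2012 = 7.9   # billion tonnes per year
growth = 0.34          # 34% above the 2012 rate

peak_2030 = emissions_2012 * (1 + growth)
print(round(peak_2030, 1))  # → 10.6 billion tonnes
```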

The problem with passing the two-degree target is that we expect a number of positive feedback loops to eventually kick in, releasing such high amounts of greenhouse gases on their own that humans lose control over the process. For example, bacteria in wetlands will start producing higher amounts of methane, while melting permafrost will release methane as well. Forests may start to die in giant forest fires, releasing the carbon that’s currently stored in their soil and biomass.

Is there no solution whatsoever then? Well, there is one glimmer of hope. This hope is referred to as peak carbon: the idea that most of the world’s remaining fossil fuels are of such low quality that their use will prove economically nonviable. As a result, carbon and methane emissions would peak. Humans would be forced to use drastically less energy and the economy would rapidly start to contract.

Peak carbon would require a painful and difficult period of transition. How painful such a transition would be depends largely on whether we have taken measures to prepare ourselves for this scenario and on how a society responds to sudden shortages. Societies that suffer internal cultural divisions, like Syria and Iraq, seem less capable of coping with prolonged periods of economic contraction in a stable and peaceful manner than societies that are more homogeneous, like Japan and Greece.

Peak carbon is an extension of a concept that most people have heard of: peak oil. Oil has traditionally been seen as the fossil fuel that will be first to pose significant problems of shortage. Unlike with coal and natural gas, the Western world experienced oil shortages in the 1970s, as a result of political instability affecting the Middle East. Thus our society’s dependence on oil has traditionally been more prominent on the public radar than our dependence on coal and gas.

There are, however, peculiar global developments that suggest we are reaching limits in multiple natural resources simultaneously, which may lead to the phenomenon of peak carbon. To start with, the world was surprised a few months ago by the news that CO2 emissions in 2014 had flatlined compared to 2013.2 This is a unique development that hardly anyone had anticipated. The Global Carbon Project estimated in September 2014 that CO2 emissions in 2014 would be 2.5% higher than in 2013.3

Thus, it would appear that in late 2014, reality began to diverge from our projections. Something highly unusual appears to have happened in China.4 Economic growth declined to its lowest rate since 1990. Energy consumption in China grew by just 3.8%, while coal consumption dropped by 2.9%.

Important to note first of all is that this is not a fluke. The decline in coal consumption fits an overall pattern seen in China over the past few years, which suggests that China is running out of high quality coal. The image below shows Chinese coal production until 2009:

What we see here is that although bituminous coal production continued to grow enormously, anthracite production peaked. This is a signal of depletion, as anthracite is generally seen as a very high quality type of coal. Most of the world’s industrialized nations have hardly any anthracite left; some are even starting to run out of bituminous coal. As the image below demonstrates, the overall rate of growth in coal production also began to slow down before the drop observed in 2014:

This decline of coal consumption in China also corresponds to a global plateau in coal consumption:

Interestingly enough, the decline in coal consumption in China appears to be continuing. On an annualized basis, statistics show that in the first four months of 2015, coal consumption in China dropped by an incredible 8%, while overall CO2 emissions dropped by 5%.8 So what has happened? There have been suggestions that the Chinese data are simply inaccurate, and that coal mines are continuing to produce coal without being registered by the Chinese government. However, the stabilization in coal production has been a process that has taken place over multiple years, so I consider this an unlikely explanation, as the data fit what we would expect.

It might be tempting to assume that the Chinese economy has simply started to decarbonize, by transitioning to low-carbon sources of heat and electricity and increasing energy efficiency. This would contradict the Chinese plan to peak carbon emissions by 2030, however. Why wait until 2030 and risk a global catastrophe if you’re perfectly capable of reducing your emissions today without affecting economic growth?

A third possible explanation would be that the Chinese economy is being involuntarily decarbonized. Perhaps the Chinese are simply no longer capable of burning ever increasing amounts of coal. Multiple factors could be responsible for this. Overseas demand for carbon-intensive products may have declined. Low coal prices triggered by low demand may have forced some coal producers to drastically reduce their output, because they cannot produce coal profitably at such low prices.

Evidence for low demand being responsible includes the 6% decline in demand for steel in the first quarter of 2015.11 Rail freight in China is declining by double-digit figures, and electricity use has declined for the first time since 2009, back when the world was in the middle of the global recession. In light of these problems, some analysts are skeptical about the GDP growth figures coming out of China.

There are also other factors that could be interpreted as either voluntary or involuntary decarbonization, depending on how you wish to look at it. The Chinese government estimated in 2014 that 20% of its farmland is polluted as a result of industrial activity. Many places face epidemics of birth defects as a result of pollution, and cities across China are facing enormous problems with smog. Some coal resources are so dirty that the Chinese government aims to ensure through regulations that they will never be burned.9 If there is no “clean” coal available to replace such dirty coal, it’s inevitable that less coal will have to be burned.

The idea of China hitting peak coal around this time may come as a shock to some of us, but others have anticipated something similar occurring on a global level around this time. In 2010, Tadeusz Patzek and Gregory Croft estimated that the world would hit peak coal around 2011.10 The exact year that global coal production would peak is less interesting than the implication. They found that 36 of the IPCC’s 40 projected scenarios for the future are not going to happen, simply because we will never find enough fossil fuels that we can afford to burn. Croft estimates that we only have enough fossil fuels left to raise global temperatures by another 0.8 degrees Celsius, which should mean that we end up staying well below 2 degrees, assuming that positive feedback loops don’t begin to kick in at such temperatures.20

The idea of peak coal is not as strange as it may seem. The deindustrialization of Europe and North America that we have seen occur can be largely attributed to the fact that we simply could not afford to maintain our energy-intensive economies. The United Kingdom, which was first to undergo the industrial revolution, saw its coal production peak in the year 1913, at 292 Mt (million metric tons). Today, electricity prices in Germany, Denmark and other European countries are around four times as high as the price in India and China.

Western governments take limited action to maintain energy-intensive manufacturing industries inside their own countries. Rather, they try to preserve the relevance of their economies by focusing on service jobs. We find for example that Western nations prefer to focus on branding, marketing and associated factors in their products. The EU ensures that only certain regions of France are allowed to call their wine champagne.

To make the case for peak coal in China, it’s important to note the historical difference in coal use between China and Europe. Whereas between the years 1 and 1000 China had a significant share of the world’s population, Europe’s share back then was relatively low. Lower population density allowed Europe to use wood as a source of heat energy, whereas China was forced to rely on coal. In Britain, some monarchs even prohibited the burning of coal because of the pollution it created.

As a result, we find that throughout recorded history, China has always had a fair amount of industrial activity involving coal combustion, whereas Europe did not. No other nation in the world came as close to an industrial revolution as China did under the Southern Song dynasty, between 1127 and 1279. Such industrial activity would also have been significantly less energy efficient than modern industrial activity. Thus, although coal was burned at nowhere near the yearly rate at which China currently uses it, hundreds of years of industrial activity may have robbed China of some of its best and most easily accessible coal.

In addition to this, there is no guarantee that other nations have high quality coal reserves as large as those found in Europe. Climatic and geological conditions have varied across different parts of the planet for millions of years. Europe’s brown coal deposits were mostly produced by the giant coniferous trees related to redwoods that grew in Europe millions of years ago. A different climate may have left China with fewer economically useful coal deposits than we find in Europe.

At around 28% of the world’s carbon emissions, China is the most important factor in global carbon emissions. The United States trails China at about 15% of the world’s emissions. Looking at coal alone, China and the US are responsible for 40% and 16.2% of emissions respectively. In the United States, the total tonnage of coal mined peaked in 2008. The total energy content of the coal mined, however, peaked in 1998, because the quality of the coal mined continues to deteriorate.12 By 2012, the total energy content of coal mined in the United States was down to 86% of its 1998 high. Natural gas has increasingly been forced to substitute for coal as a result.
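
That decline in energy content can be translated into an average yearly rate. A small sketch, assuming a smooth compound decline from the 1998 peak to the 2012 figure cited above:

```python
# Implied average annual decline in the energy content of US coal output,
# given that it fell to 86% of its 1998 peak by 2012 (14 years).
ratio = 0.86
years = 2012 - 1998

annual_decline = 1 - ratio ** (1 / years)
print(f"{annual_decline:.2%}")  # roughly 1.1% per year
```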

An interesting development we can note in American coal production is that the sulfur content of burned coal is steadily climbing. Whereas between 2005 and 2008 sulfur content hovered around 0.98%, by 2009 it began to climb, rising to 1.32% by 2014.16 This is odd, considering the glut of natural gas. We would expect abundant natural gas to enable a move away from dirty coal. The rise in sulfur content is indicative of the increasing use of brown coal of particularly low quality.

Since China and the United States together constitute more than half of global coal production, a peak of coal use in these nations can be sufficient to ensure that the peak in coal use is now behind us. A skeptic might argue that this does not necessitate peak coal, because other developing countries home to billions of people are still nowhere near the level of electricity use of the Western world.

A big part of future coal use hinges on the amount of recoverable coal found in India, something that is still unclear. India’s coal use is rising rapidly to serve its rapidly expanding economy, but the industry is mired in tremendous corruption. This is a problem that can be overcome, but it leaves us to wonder how much can be stated with certainty regarding the size of its coal deposits.

Greenpeace has published an interesting report on India’s coal reserves.13 By 2012, it stated, India’s main producer CIL, responsible for 80% of production, had revised its coal reserves downwards by 16% compared to 2010. The company has consistently failed to meet its production targets. Greenpeace estimated in 2013 that at India’s targeted growth rates, CIL’s official coal reserves could be depleted within 17 years.

Important to note here is that CIL’s coal reserves are in all likelihood hopelessly optimistic. CIL has not made any effort to account for geographical and land use limitations in its estimate of extractable coal reserves. This is quite a big problem, as India’s population density is about as high as the Netherlands’. Much of this coal would thus seem likely never to be used, as the land above the coal deposits will prove to be more valuable. Recovering coal, after all, is inevitably a more disruptive process than recovering oil or gas.

When it comes to coal, it’s important to note that different grades of coal have different properties and uses. Anthracite is generally the most useful type, followed by bituminous and sub-bituminous coal. There is also coal that can be classified as metallurgical grade, based on its purity. This type of coal has to be very low in contaminating elements, as steel is very vulnerable to the effects of even small amounts of sulfur or phosphorus.

Finally, we have lignite, the desperate man’s coal. Burning lignite yields so little energy that it has to be burned near the location where it is recovered, as transporting this heavy coal over large distances means you simply end up spending more energy transporting the coal than you recover from it. At this point, lignite is the only type of coal Germany has left in significant amounts. This forces the country to evacuate small towns that happen to have lignite beneath their soil. For this reason, Germany is particularly enthusiastic about transitioning to renewable energy.

Having some background knowledge about the different types of coal allows us to look at stated coal reserves with some skepticism. As an example, Pakistan has large lignite deposits in the Thar desert. This lignite will have to be burned on the spot to generate energy; it cannot be exported. Of course this poses some problems, as mining and burning coal require large amounts of water, water that simply is not available in a desert in the middle of Pakistan.

This is part of a larger problem that industry in countries closer to the equator will face should they ever seek to utilize large amounts of nuclear or coal power. Water is used to generate power. This water can then be passed through cooling towers, where it is lost to evaporation. Large parts of the world are increasingly facing water shortages and will thus be less than enthusiastic about losing what little water remains to coal plants. Cooling towers also tend to be expensive to build, so many coal plants don’t have them. Rather than losing the water to evaporation, such plants dump it back into a river or lake, where it causes thermal pollution, which kills most of the fish that live there and creates toxic algae blooms and other problems.

We’ve looked at the problems that coal extraction faces. It’s interesting to look at oil and natural gas now, although an in-depth analysis of oil and natural gas depletion is outside the scope of this essay. It’s worth noting, however, that there is agreement that conventional oil production peaked in 2006, as even the IEA admits.17

What has increasingly substituted for conventional oil is unconventional oil, which is comparatively dirty, with higher carbon dioxide emissions per barrel of oil produced. The debate focuses on whether or not the economy can continue to function once it becomes fully dependent on such unconventional oil.

In regards to unconventional oil, it remains to be seen how much of it is economically viable to extract. Current oil prices have rendered much of the US shale oil deposits economically nonviable to extract, even though these companies benefit from low interest rates as a result of monetary policies. Companies have focused on “sweet spots”, where the geology is just right. The American “miracle” is also unlikely to be repeated elsewhere, as the United States is believed to hold more than three quarters of the world’s reserves.

Shale gas, now making up 39% of US natural gas production, can be produced at a low cost, because of the negative externalities that are imposed on the environment. If companies can be sued for the earthquakes their waste injections cause, oil and gas production is jeopardized.18 In the meantime, the earthquakes continue to get worse. The Oklahoma geological survey projects 941 M3+ earthquakes over the entire year 2015, a thousand-fold increase over the earthquake rate before the wastewater injection process began.19
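
To make the scale of that increase concrete, the implied pre-injection baseline follows directly from the two figures cited above (a sketch, treating the thousand-fold figure as exact):

```python
# Implied pre-injection baseline rate of M3+ earthquakes in Oklahoma,
# given the projected 941 quakes in 2015 and the cited thousand-fold increase.
projected_2015 = 941
increase_factor = 1000

baseline_per_year = projected_2015 / increase_factor
print(round(baseline_per_year, 2))  # → 0.94, i.e. roughly one M3+ quake per year
```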

The problems of peak oil and peak coal are difficult to see as separate from one another, as shortages in one resource will significantly affect the production potential of the other. Shale oil production, with its extensive network of pipelines and many wells, requires large amounts of steel. The collapse of the shale oil industry in the United States is causing hundreds of people working in steel manufacturing to lose their jobs. Steel production in turn depends largely on coal: 12% of all the world’s coal is used to produce steel, and that figure’s denominator includes poor quality coal types like lignite that are used exclusively for electricity generation.14

We find ourselves faced with a situation where a variety of resources are becoming increasingly difficult to extract. As an example, the mining industry in Australia is faced with ore grades that have halved in thirty years, while the amount of waste that has to be removed to access the ores has doubled, causing a tremendous increase in the energy required.14 This shouldn’t be surprising, as industrialization and exponential economic growth have pushed the exploitation of a variety of resources to record highs. The effect on our economy is comparable to piling one heavy burden after another onto a camel’s back.

I know what you’re thinking: “How is a fermentation method ever going to make a difference in my surviving climate change?” That’s exactly what I’m about to tell you. People who know how to make sourdough rye will have a big survival advantage over people who don’t, for the following reasons:

Reason one: Sourdough Rye Bread is higher in protein

Rye flour is higher in protein content than wheat flour.1 Rye has sixteen grams of protein per 100 grams; wheat has only thirteen grams of protein per 100 grams. In addition, rye has a higher protein efficiency ratio than wheat.2 This is because rye is higher in lysine than wheat, lysine being the limiting amino acid in cereal grains.

So how about fermentation? Fermentation is incredibly important, because the bacteria in sourdough increase the protein content of the dough. After all, bacteria have to grow, and to do so they have to synthesize proteins. Bacteria use the starch in flour as their energy source for this, which is good, because the flour has starch in abundance.

Studies find that protein content increases in maize and sorghum when they are fermented with lactic acid bacteria.3 After one week of treatment with Lactobacillus brevis, the protein content of maize increased from 10.2% to 13.6%. In sorghum, protein content increased from 15.1% to 18.4%. Bacteria also chop up large protein chains into smaller pieces, which makes the protein easier for our bodies to absorb. Notable here is that the study mentions a large increase in the amino acid lysine. This is the amino acid we discussed earlier as the limiting amino acid in our diet.
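
In relative terms those gains are substantial; a quick sketch using the before-and-after percentages cited above:

```python
# Relative protein gains after a week of Lactobacillus brevis fermentation,
# using the percentages reported in the cited study.
grains = {
    "maize": (10.2, 13.6),    # (before, after) protein %
    "sorghum": (15.1, 18.4),
}

for name, (before, after) in grains.items():
    gain = (after - before) / before
    print(f"{name}: +{gain:.0%}")  # maize: +33%, sorghum: +22%
```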

But why should you care about protein in your diet in the first place? The limiting element in your diet in the future is not going to be total calories, but rather protein content. Studies find that the protein content of our crops decreases by 20% when atmospheric CO2 concentrations increase.4 CO2 concentrations today are already much higher than before the industrial revolution, so chances are that the protein content of our cereals is already lower than it was when we started to genetically adapt to cereal grains as the main portion of our diet.

You might think that rye has no real place in the future, because rye is a cold-adapted plant, but you’d be wrong. Rising atmospheric CO2 concentrations increase winter temperatures, but they also make plants far more vulnerable to the effects of low temperatures. For many species, a doubling of CO2 raises the temperature at which they start freezing by about 1.5 degrees Celsius. Thus, cold-resistant plants will still be necessary in many locations, even if winters there become less cold.

Reason two: Sourdough bread keeps longer

Sourdough is colonized by the bacteria and fungi that live in it. They have no interest in competing organisms invading their habitat, so they produce chemical compounds that prevent infection. This increases the life expectancy of your sourdough bread: sourdough extends the shelf life of bread by somewhere between 12 and 14 days.5 When you have to bake bread, making it sourdough means you will find yourself throwing away less of it.

Reason three: Sourdough bacteria break down dangerous pesticides.

As our climate changes, our crops may suffer more from competition with fungi, insects and weeds. As a result, farmers may start using larger amounts of pesticides. It seems possible that Americans are already suffering effects of the excessive use of pesticides on food.

Well, this is where the good news comes in. The bacteria in sourdough bread break down pesticides. One study found that pesticide concentrations were reduced by 42% by lactic acid bacteria.6 Thus the lactic acid bacteria in sourdough bread can help protect you against pesticide exposure.

Rye is also important because it needs fewer pesticides to grow than wheat does. Rye needs fewer pesticides because it’s a hardier plant. It evolved as a weed growing in the wheat fields humans cultivated, so the plant is less demanding than wheat. In America, some pesticides that are allowed when growing wheat are forbidden from being used on rye.

As humans continue using rock phosphate to produce fertilizer, arsenic, cadmium, uranium and other harmful elements are building up in our soils. Normally, bacteria in the soil slowly and gradually make those elements biologically unavailable for absorption. When humans remove all plants and spray a variety of pesticides, we kill the organisms that live in the soil, so they can’t do this important work. As a result, you have arsenic, cadmium, uranium and other harmful elements in your food.

What do the lactic acid bacteria do? They perform the important task of making these elements unavailable for your body to absorb. Lactic acid fermented soy milk has the effect of increasing cadmium excretion in rats, while decreasing cadmium uptake.7 The same effect is found for lead, as well as for arsenic and other harmful elements.8

When humans grow cereal grains, it’s quite common for the grain to be infected with parasitic fungi, which then produce mycotoxins that harm our health. Billions of dollars worth of crops are lost because of fungal strains like Fusarium graminearum that infect grain plants. Studies find, however, that lactic acid bacteria can break down the mycotoxins found in food.9 One study found a 77% reduction in Fusarium mycotoxins after just 24 hours of fermentation. Thus, while other people might get sick from a particular batch of flour, using sourdough fermentation with the same flour may mean that you do not.

Conclusion:

Sourdough fermentation is an essential skill that you need to learn to adapt to the growing challenges we face as a result of resource depletion, climate change and associated problems. Knowing how to make sourdough bread will greatly increase your chances of survival.

Why did humans ever start practicing agriculture? It’s not as self-evident as it might seem to us. Anatomically modern humans emerged 200,000 years ago, but did not start practicing agriculture. Humans were smart enough to make boats, flutes and cave paintings tens of thousands of years before they ever engaged in agriculture. It took until about 12,000 years ago for some people in the Middle East to embark on this endeavor; a few thousand years later, agriculture began to emerge in other places around the world, in what we know were independent events, using entirely different plants.

So why did this happen? There are a number of different theories from people who have researched this topic extensively. Most of these theories attempt to explain the direct process of a particular group of people transitioning to agriculture. Demographic explanations propose that population density in some particular area of the world became too high to survive by hunting and gathering, forcing people there to engage in hard labor to increase the productivity of their environment through agriculture. Other explanations suggest that people engaged in agriculture to store food in preparation for big feasts, while yet others suggest that agriculture was a continuation of evolutionary processes.

These are interesting suggestions, but they don’t explain the bigger mystery: why it took so long for the type of conditions to emerge that could produce such an outcome. Surely people 200,000 years ago could have prepared for a feast or suffered overpopulation? Why didn’t they make the transition? One suggestion has been that something changed in our environment that enabled us to make this transition to a new way of living that had hitherto been impossible to sustain.

We don’t know the exact date when people began to practice agriculture, but we do know that the climate changed rapidly around 12,000 years ago, as we left a glacial period and entered an interglacial period, a period of relatively high temperatures within an ice age. Afterwards, the first signs of agriculture emerge in the archaeological record. Other places that don’t demonstrate signs of agriculture yet do begin to display signs of sedentism.

About four degrees Celsius separate the Holocene from the preceding Pleistocene. In the 21st century, we expect to see a temperature increase somewhere between two and four degrees Celsius as well.

It’s persuasive to consider that a change in the climate opened up the possibility of agriculture for us, but why is agriculture so dependent on particular climatic conditions? Surely if it was too cold for agriculture away from the equator, it might have been possible further to the south? For this question we turn to Richerson et al.1

The most important factor that makes the Holocene different from the Pleistocene is climatic stability. What matters most is not the average temperature in a particular region over a long period of time, but the year to year fluctuation between temperatures and precipitation. If the amount of rain that falls this year is relatively similar to the amount that falls next year and temperatures remain similar too, it’s relatively easy to use a single crop. On the other hand, if the amount of rain that falls varies significantly or temperatures vary a lot, plants that perform well this year may not perform particularly well during the next year.

The type of dependence on one particular crop we see for much of recorded history does not make sense under such conditions. In addition, the human population is likely to vary a lot between years. A series of bad years may cause a population to decline sharply, but the good years that follow may deliver people plenty of food. Population may recover during those years, but with plenty of food naturally available, people would have no need to take up agriculture.

Note also that the first steps of plant domestication likely did not occur consciously. People through their natural activities simply propagated seeds of plants they ate a lot, which caused plants to survive that were more edible. In an environment with a lot of humans who eat a limited number of plants, such selective pressure on plants is very strong, but perhaps not strong enough in a more variable climate.

Richerson et al provide some interesting graphs, shown below, that illustrate just how unstable the temperatures of the Pleistocene must have been compared to the temperatures we experience today.

Above we see the amount of temperature variability over periods longer than 150 years on the low pass filter, with the amount of variability over periods shorter than 150 years visible on the high pass filter. The Holocene is relatively easy to distinguish on the right side of the images here.

Two more interesting graphs. The bottom graph shows oxygen isotope ratios, used as a proxy for temperatures in the area where the ice core samples were taken. The top graph shows the amount of dust in the air. Note the almost complete absence of dust in the last few thousand years, compared to periods in the Pleistocene, indicating a relatively calm and hospitable environment, without a lot of dust reaching Greenland.

So, in conclusion, the evidence suggests that we’ve had a relatively stable and peaceful climate for the past 12,000 years or so. Understanding how unstable temperatures can be over short periods of time and the effect it had on our ancestors might help us appreciate the relative stability we inherited a bit more. We can’t see Glyptodons and Mammoths with our own eyes and we have to sit in cubicles and stare at computer screens all day long, but we can generally sow seeds with the knowledge that enough rain will fall and temperatures will probably be stable enough for plants to grow.

There are other factors in the climate that can affect agriculture besides temperatures and the amount of wind and rain, although these are probably the most important factors. Different atmospheric CO2 concentrations have varying effects on plants, some of them positive, others negative. At higher concentrations, some plants become more vulnerable to the effects of insect plagues and interactions between plants and fungi change in ways that can be problematic, either for the plants themselves or for animals that wish to feed on them. These factors are mostly outside of the scope of this article, although I look into them a bit more in some other posts on this blog.

Now that humans are changing the climate, the problem is often dismissed by people who point out that the climate has always changed in the past. This is a lie of omission, because it leaves out the important fact that our modern way of life emerged in a period of unusual stability. There are reasons to believe that temperature and precipitation fluctuations will increase again in the future for large parts of the planet. The year 2014 was very unusual, because the United States saw both extreme cold and extreme heat simultaneously in different parts of the nation.2 Why did this happen?

The jet stream normally prevents very cold Arctic air from escaping further south. However, because temperatures in the Arctic increase much faster than in the rest of the world, the jet stream weakens and begins to meander.3 What happens as a consequence is that unusually cold temperatures can occur further south and stay stuck for weeks. This is unpleasant for humans, but also potentially dangerous for agriculture. Global temperatures have so far increased about 0.8 degrees Celsius above the pre-industrial average, but are likely to increase much further, with current plans aimed at keeping the increase below 2 degrees Celsius. Thus the increasing instability we have seen so far is likely to get worse in the decades ahead.