Carbon Dioxide Puzzles

I like it when people do interesting calculations and help me put their results on this blog. Renato Iturriaga has plotted a graph that raises some interesting questions about carbon dioxide in the Earth’s atmosphere. Maybe you can help us out!

Renato decided to plot both of these curves and their difference. Here’s his result:

The blue curve shows how much CO2 we put into the atmosphere each year by burning fossil fuels, measured in parts per million.

The red curve shows the observed increase in atmospheric CO2.

The green curve is the difference.

The puzzle is to explain this graph. Why is the red curve roughly 40% lower than the blue one? Why is the red curve so jagged?

Of course, a lot of research has already been done on these issues. There are a lot of subtleties! So if you like, think of our puzzle as an invitation to read the existing literature and tell us how well it does at explaining this graph. You might start here, and then read the references, and then keep digging.

But first, let me explain exactly how Renato Iturriaga created this graph! If he’s making a mistake, maybe you can catch it.

The red curve is straightforward: he took the annual mean growth rate of CO2 from the NOAA website I mentioned above, and graphed it. Let me do a spot check to see if he did it correctly. I see a big spike in the red curve around 1998: it looks like the CO2 went up around 2.75 ppm that year. But then the next year it seems to have gone up just about 1 ppm. On the website it says 2.97 ppm for 1998, and 0.91 for 1999. So that looks roughly right, though I’m not completely happy about 1998.

[Note added later: as you’ll see below, he actually got his data from here; this explains the small discrepancy.]

Renato got the blue curve by taking the US Energy Information Administration numbers and converting them from gigatons of CO2 to parts per million moles. He assumed that the atmosphere weighs 5 × 10¹⁵ tons and that CO2 gets well mixed with the whole atmosphere each year. Given this, we can simply say that one gigaton is 0.2 parts per million of the atmosphere’s mass.

But people usually measure CO2 in parts per million volume. Now, a mole is just a certain large number of molecules. Furthermore, the volume of a gas at fixed pressure is almost exactly proportional to the number of molecules, regardless of its composition. So parts per million volume is essentially the same as parts per million moles.

So we just need to do a little conversion. Remember:

• The molecular mass of N2 is 28, and about 79% of the atmosphere’s volume is nitrogen.

• The molecular mass of O2 is 32, and about 21% of the atmosphere’s volume is oxygen.

• By comparison, there’s very little of the other gases.

So, the average molecular mass of air is

28 × .79 + 32 × .21 = 28.84

On the other hand, the molecular mass of CO2 is 44. So one ppm mass of CO2 is less than one ppm volume: it’s just

28.84/44 = 0.655

parts per million volume. So, a gigaton of CO2 is about 0.2 ppm mass, but only about

0.2 × 0.655 = 0.13

parts per million volume (or moles).

So to get the blue curve, Renato took gigatons of CO2 and multiplied by 0.13 to get ppm volume. Let me do another spot check! The blue curve reaches about 4 ppm in 2008. Dividing 4 by 0.13 we get about 30, and that’s good, because energy consumption put about 30 gigatons of CO2 into the atmosphere in 2008.
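For readers who want to reproduce this, here is the whole conversion as a small Python sketch, using the post's assumed atmosphere mass of 5 × 10¹⁵ tons:

```python
# Sketch of Renato's unit conversion, using the post's assumptions:
# atmosphere mass ~5e15 tons, CO2 well mixed, air ~79% N2 / 21% O2 by volume.

ATMOSPHERE_MASS_GT = 5e6                        # 5e15 tons = 5,000,000 gigatons
MEAN_AIR_MOLAR_MASS = 28 * 0.79 + 32 * 0.21     # ≈ 28.84 g/mol
CO2_MOLAR_MASS = 44.0

def gigatons_co2_to_ppm_volume(gt):
    """Convert gigatons of CO2 to parts per million by volume (= by moles)."""
    ppm_mass = gt / ATMOSPHERE_MASS_GT * 1e6                 # ppm by mass
    return ppm_mass * MEAN_AIR_MOLAR_MASS / CO2_MOLAR_MASS   # ppm by volume

# Spot check from the post: ~30 Gt emitted in 2008 should give ~4 ppm.
print(round(gigatons_co2_to_ppm_volume(30), 2))   # → 3.93
```

This reproduces the 0.13 ppm-per-gigaton factor (0.2 × 28.84/44 ≈ 0.131), so 30 gigatons comes out just under 4 ppm, matching the blue curve's value for 2008.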

And then, of course, the green curve is the blue one minus the red one:

Now, more about the puzzles.

One puzzle is why the red curve is so much lower than the blue one. The atmospheric CO2 concentration is only going up by about 60% of the CO2 emitted, on average — though the fluctuations are huge. So, you might ask, where’s the rest of the CO2 going?

Probably into the ocean, plants, and soil:

But at first glance, the fact that only 60% stays in the atmosphere seems to contradict this famous graph:

This shows it taking many years for a dose of CO2 added to the atmosphere to decrease to 60% of its original level!

Here’s a possible explanation. Maybe my estimate of 5 × 10¹⁵ tons for the mass of the atmosphere is too high! That would change everything. I got my estimate off the internet somewhere — does anyone know a really accurate figure?

Renato came up with a more interesting possible explanation. It’s very important, and very well-known, that CO2 doesn’t leave the atmosphere in a simple exponential decay process. Imagine for simplicity that carbon stays in three boxes:

As we pump CO2 into box A, a lot of it quickly flows into box B. It then slowly flows from boxes A and B into box C.

The quick flow from box A to box B accounts for the large amounts of ‘missing’ CO2 in Renato’s graph. But if we stop putting CO2 into box A, it will soon come into equilibrium with box B. At that point, we will not see the CO2 level continue to quickly drop. Instead, CO2 will continue to slowly flow from boxes A and B into box C. So, it can take many years for the atmospheric CO2 concentration to drop to 60% of its original level — as the famous graph suggests.

This makes sense to me. It shows that the red curve can be a lot lower than the blue one even if the famous graph is right.
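The three-box story is easy to simulate. Here is a minimal numerical sketch; the rate constants are invented purely to illustrate the two time scales, not fitted to the real carbon cycle:

```python
# A minimal sketch of the three-box picture (illustrative rate constants,
# NOT fitted to data): box A = atmosphere, B = fast reservoir (e.g. upper
# ocean, vegetation), C = slow sink (e.g. deep ocean, soil). A pulse added
# to A first equilibrates quickly with B, then A and B drain slowly into C.

def simulate(pulse=1.0, years=100, dt=0.01,
             k_ab=2.0,    # fast A -> B exchange rate (1/yr), assumed
             k_ba=1.0,    # B -> A return rate (1/yr), assumed
             k_c=0.02):   # slow leak from A and B into C (1/yr), assumed
    A, B, C = pulse, 0.0, 0.0
    history = []                       # amount left in the atmosphere, yearly
    steps_per_year = int(round(1 / dt))
    for step in range(int(years / dt)):
        dA = -k_ab * A + k_ba * B - k_c * A
        dB = k_ab * A - k_ba * B - k_c * B
        dC = k_c * (A + B)
        A, B, C = A + dA * dt, B + dB * dt, C + dC * dt
        if step % steps_per_year == 0:
            history.append(A)
    return history

airborne = simulate()
# Fast initial drop as A equilibrates with B, then a slow decay:
# even after many decades a sizeable fraction remains airborne.
print(airborne[0], airborne[2], airborne[50])
```

With these made-up rates, the atmospheric fraction falls quickly to about a third (the fast A↔B equilibrium) and then decays only slowly, just as the reconciliation above requires.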

But I’m still puzzled by the dramatic fluctuations in the red curve! That’s the other puzzle.


49 Responses to Carbon Dioxide Puzzles

Since I’m in Kansas, where we get a fair amount of snow from time to time, some winters being more snowy than others – and thus the following spring, there’s more of a recharge for the groundwater, which means that more grass and crops grow as opposed to drier winters in which less snow falls, maybe there’s a correlation between snowfall and CO2 levels in the atmosphere. In addition, ice crystals (and snowflakes) trap air and keep it close to the ground – maybe CO2 gets trapped in greater quantities than N2 and O2 in the H-O-H lattice…

Since I’m in Kansas, where we get a fair amount of snow from time to time…

You got a huge snowstorm there recently, right? So that would be on your mind. Year-to-year variations in weather worldwide might affect the CO2 concentration at Mauna Loa, both for reasons related to plant growth but also many other reasons. Just for fun I’d like to compare the CO2 concentrations to the El Niño Southern Oscillation, since that affects Pacific Ocean surface temperatures and warmer water might release CO2. I have no idea how significant this effect could be, but it would be amusing to check.

I have trouble believing that significant amounts of CO2 get trapped in falling snow: even if a bit gets trapped, the total volume of the world’s snow must be quite puny compared to that of the atmosphere.

This doesn’t have a big enough effect to explain the graphs, but getting energy use data is quite non-trivial. It’s most probably being done primarily by tracking fuel being sold and assuming that’s being used. This has two big problems:

1. There’s a risk of fuel being sold illicitly, or getting confused and counting repeated sales of the same load of fuel.

2. Various nations’ strategic fossil fuel reserves may not be fully transparent.

So the blue curve may not be as smooth as shown. But not by anything like the magnitude needed to reconcile the two curves.

I took a quick look at chapter 10 (the perturbed carbon cycle) of David Archer’s book (which I think is a very useful reference for people with at least some training in the sciences). He claims that we put 7 Gtons C/yr in, of which 3 Gtons C/yr stays in the atmosphere, and 4 Gtons go into the land and the oceans (about 50/50). Uptake by the oceans involves multiple time scales (a short time scale for warm shallow water, a longer time scale for mixing with deep water, and an even longer time scale for conversion to CaCO3).

Tracking fuel sold and treating it as all being used is a pretty fair assumption, given that storage capacity is well known and fairly constant, and figures for fuel in transit (usually by ship for oil and train for coal) are easily obtained.

Fuel use should increase as population increases, and that’s a fairly smooth curve (or has been).

Strategic fuel “reserves” are usually a classified or top-secret number, the public numbers are made-up nonsense, and they don’t matter anyway, because the fuel has to be extracted first: mined and broken up into fine pieces as with coal, or produced at the wellhead as with crude oil, dewatered (usually), and desulfured (sometimes). What’s left in the ground is, for all intents and purposes, largely unknown, but it doesn’t enter the marketplace until extracted.

If you’re talking about things like the Strategic Ready Reserve or whatever that salt dome down in Louisiana is called, then that fuel isn’t on the market and probably won’t go on the market unless the armed forces allow it, which is unlikely. No oil and the military machine grinds to a halt, as Patton found in the winter of 1944/45.

As for public storage of fuel, that is easily estimated by counting coal piles, which you can do by looking at Google Earth, looking for power plants (they’re pretty generic) and then estimating the size of the coal pile(s) which will obey certain physical constraints usually being conical with a given limit to height vs diameter; and by counting crude oil tankers and tank farms, also using Google Earth.

As for using wood as a fuel, remember that burning green wood is difficult and that you’ve got to let it dry out for six months to a year. Otherwise, you can look to the rate of deforestation for this figure, because most people don’t store appreciably sized woodpiles, and wood tends to rot away to compost.

I would have assumed an agency like the EIA would be looking at certain reported oil transactions, but to my understanding there’s some margin for unreported transactions.

The US’s Strategic Petroleum Reserve, for example, has been used many times to smooth out local supply difficulties (so that oil that comes out eventually gets replaced). Even if the US has been fully transparent about its behaviour, it’s unclear whether reserves such as China’s strategic petroleum reserve have been used in similar ways.

So the point I was making was that just pulling the headline figures on fossil fuel sales can oversmooth the actual true behaviour. However, I wouldn’t think the magnitudes of the fluctuations would remotely be large enough to match the reported carbon dioxide concentrations.

The human perturbation to ocean carbon is notoriously difficult to measure, despite the ocean’s large role in buffering the build-up of atmospheric CO2. The difficulty arises from the inhomogeneity of ocean carbon and from the fact that anthropogenic carbon has increased ocean carbon by only 1-2%, even while it has increased atmospheric carbon by about 38%. The only global observational estimate previously made of anthropogenic carbon in the ocean was a snapshot in time for 1994 made by Sabine et al. (2002). In the new study, we used novel indirect techniques to tease out the signal over the entire industrial era.

Looks like jitter noise to me, for all the familiar reasons. Some of it real, some of it due to NOAA’s clumping, resampling, and detrending: “monthly mean”, “centered on the middle of each month”, “after correction”. Some oversampled (but still processed, not raw) data are available at ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_weekly_mlo.txt

The single best ad hoc annualizing of trend would be to take annual differences between corresponding pairs of smoothed points near each of the zero crossings of the known annual cycle – looks like around July and January.

A better analysis procedure would be to Butterworth filter out the annual contribution (http://en.wikipedia.org/wiki/Butterworth_filter) using an extremely narrow (hence the advantage of oversampling) annual bandpass. Then use the 5-point trapezoid filter (1 2 2 2 1) to eliminate most remaining jitter. Unfortunately I don’t have time today, maybe this weekend, but hopefully someone else can also do it.
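Here is one way this suggestion might look in numpy. I've applied the Butterworth-shaped notch directly in the frequency domain rather than as a time-domain IIR filter, and the bandwidth, order, and synthetic data are all illustrative:

```python
import numpy as np

def butterworth_notch(x, fs, f0=1.0, bandwidth=0.05, order=4):
    """Suppress a narrow band around f0 (cycles/year) with a
    Butterworth-shaped notch applied in the frequency domain.
    fs = samples per year."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    # Butterworth band-pass magnitude centered on f0; the notch is 1 - bandpass
    bandpass = 1.0 / (1.0 + (np.abs(freqs - f0) / bandwidth) ** (2 * order))
    return np.fft.irfft(X * (1.0 - bandpass), n=len(x))

# Synthetic weekly series: linear trend + annual cycle + jitter
fs = 52
t = np.arange(10 * fs) / fs
rng = np.random.default_rng(0)
series = 2.0 * t + 3.0 * np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal(len(t))

cleaned = butterworth_notch(series, fs)
# ...then the 5-point trapezoid filter (1 2 2 2 1) to knock down remaining jitter:
smoothed = np.convolve(cleaned, np.array([1, 2, 2, 2, 1]) / 8.0, mode='same')
```

On this toy series the annual component vanishes from the notched output while the trend and the jitter pass through, which is exactly the separation the comment is after.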

There’s nothing wrong with your atmospheric mass calculation; see this comment for a reference with a more precise number.

There isn’t really a contradiction between short-term interannual variability and the longer-term average response to an atmospheric carbon pulse. As you say, there are multiple time scales at work here.

I’m sure you’ve seen the large seasonal fluctuations in the Keeling CO2 curve, due mostly to terrestrial vegetation in the Northern Hemisphere. Likewise, you can get some similarly significant variability year-to-year due to vegetation dynamics (and also ocean dynamics).

The multidecadal response time in the “famous” graph (is it famous?) is due, for example, to the export of carbon from fast (“labile”) carbon pools to slower (“recalcitrant”) pools — i.e., litter carbon moving into the soil, more isolated from the atmosphere. This does not preclude seasonal and interannual variability, as (for example) gross photosynthesis grows and wanes. You just end up with a multidecadal decay curve with short-term fluctuations superimposed. Many of the vegetation fluctuations are climatic in origin (temperature and precipitation variability), or due to disturbance (fire, insect invasion, etc.). Of course disturbance itself is related to climate.

Thanks, Nathan! What you say makes sense. I’ll look at these references. It was fun trying to figure things out without reading anything, but now that my curiosity is piqued, I’m eager to see what the literature says.

I know it’s a bit risky to blog about climate science before I really understand it, but everyone’s doing it these days, and I thought I’d try to set a good example by:

1) presenting data and calculations in a way that’s very easy to check and criticize,

and:

2) raising questions, rather than claiming to draw earth-shaking conclusions.

I’m sure you’ve seen the large seasonal fluctuations in the Keeling CO2 curve, due mostly to terrestrial vegetation in the Northern Hemisphere.

Yes, they’re visible in this blog entry. Anyone who doesn’t know what Nathan is talking about, just look at the red wiggles here, and the blowup in the lower right:

Efforts to control climate change require the stabilization of atmospheric CO2 concentrations. This can only be achieved through a drastic reduction of global CO2 emissions. Yet fossil fuel emissions increased by 29% between 2000 and 2008, in conjunction with increased contributions from emerging economies, from the production and international trade of goods and services, and from the use of coal as a fuel source. In contrast, emissions from land-use changes were nearly constant. Between 1959 and 2008, 43% of each year’s CO2 emissions remained in the atmosphere on average; the rest was absorbed by carbon sinks on land and in the oceans. In the past 50 years, the fraction of CO2 emissions that remains in the atmosphere each year has likely increased, from about 40% to 45%, and models suggest that this trend was caused by a decrease in the uptake of CO2 by the carbon sinks in response to climate change and variability. Changes in the CO2 sinks are highly uncertain, but they could have a significant influence on future atmospheric CO2 levels. It is therefore crucial to reduce the uncertainties.

There are some nice graphs in this paper, including figure a, which shows the rate of increase of CO2 concentration. This graph is jagged like Renato’s, but different, because it’s based on different data, also provided by NOAA:

We used the global mean data after 1980 and the Mauna Loa data between 1959 and 1980.

There’s a lot of good information here, but they note that it would be good to have more:

Progress has been made in monitoring the trends in the carbon cycle and understanding their drivers. However, major gaps remain, particularly in our ability to link anthropogenic CO2 emissions to atmospheric CO2 concentration on a year-to-year basis; this creates a multi-year delay and adds uncertainty to our capacity to quantify the effectiveness of climate mitigation policies. To fill this gap, the residual CO2 flux from the sum of all known components of the global CO2 budget needs to be reduced, from its current range of ±2.1 Pg C yr−1, to below the uncertainty in global CO2 emissions, ±0.9 Pg C yr−1. If this can be achieved with improvements in models and observing systems, geophysical data could provide constraints on global CO2 emissions estimates.

Atmospheric CO2 has increased at a nearly identical average rate of 3.3 and 3.2 Pg C yr−1 for the decades of the 1980s and the 1990s, in spite of a large increase in fossil fuel emissions from 5.4 to 6.3 Pg C yr−1. Thus, the sum of the ocean and land CO2 sinks was 1 Pg C yr−1 larger in the 1990s than in the 1980s. Here we quantify the ocean and land sinks for these two decades using recent atmospheric inversions and ocean models. The ocean and land sinks are estimated to be, respectively, 0.3 (0.1 to 0.6) and 0.7 (0.4 to 0.9) Pg C yr−1 larger in the 1990s than in the 1980s. When variability less than 5 yr is removed, all estimates show a global oceanic sink more or less steadily increasing with time, and a large anomaly in the land sink during 1990–1994. For year-to-year variability, all estimates show 1/3 to 1/2 less variability in the ocean than on land, but the amplitude and phase of the oceanic variability remain poorly determined. A mean oceanic sink of 1.9 Pg C yr−1 for the 1990s based on O2 observations corrected for ocean outgassing is supported by these estimates, but an uncertainty on the mean value of the order of ±0.7 Pg C yr−1 remains. The difference between the two decades appears to be more robust than the absolute value of either of the two decades.

(It says O2 there and I don’t think that’s a typo, though I don’t quite understand it — look at the end of section 4.)

There are also nice graphs here, but the story they tell seems to differ significantly from Le Quéré et al‘s more recent paper.

Knowledge of carbon exchange between the atmosphere, land and the oceans is important, given that the terrestrial and marine environments are currently absorbing about half of the carbon dioxide that is emitted by fossil-fuel combustion. This carbon uptake is therefore limiting the extent of atmospheric and climatic change, but its long-term nature remains uncertain. Here we provide an overview of the current state of knowledge of global and regional patterns of carbon exchange by terrestrial ecosystems. Atmospheric carbon dioxide and oxygen data confirm that the terrestrial biosphere was largely neutral with respect to net carbon exchange during the 1980s, but became a net carbon sink in the 1990s. This recent sink can be largely attributed to northern extratropical areas, and is roughly split between North America and Eurasia. Tropical land areas, however, were approximately in balance with respect to carbon exchange, implying a carbon sink that offset emissions due to tropical deforestation. The evolution of the terrestrial carbon sink is largely the result of changes in land use over time, such as regrowth on abandoned agricultural land and fire prevention, in addition to responses to environmental changes, such as longer growing seasons, and fertilization by carbon dioxide and nitrogen. Nevertheless, there remain considerable uncertainties as to the magnitude of the sink in different regions and the contribution of different processes.

I believe that the starting point in this analysis is that the atmospheric CO2 level comes about from the impulse response of the CO2 uptake with the forcing function of the CO2 fossil fuel input combined with seasonal variations.

So if g(t) is the impulse response and f(t) is the forcing function, then the atmospheric content is the time convolution of f(t) with g(t).

The reason that the seasonal variations are still observed is that the convolution is essentially a low-pass filter, and though it does filter the periodic signal, it doesn’t do it completely, and so we end up seeing the residual noisy oscillations in the Mauna Loa data.

There is also a time lag on the output of a convolution, and since g(t) has a significant fat-tail component, the convolved output keeps on accumulating, long after the forcing function is turned off.

I also combined these in a chapter of The Oil ConunDrum online book. I got interested in this topic because the oil production process can be described as a series of convolutions as well, and the CO2 residual is just another convolution stage in this process.

As William Feller said: “It is difficult to exaggerate the importance of convolutions in many branches of mathematics” from “An Introduction to the Probability Theory and its Applications”.

BTW, I found out that climate scientists understand convolutions very well but the knowledge of this technique amongst oil depletion analysts is very small.
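The convolution picture can be sketched numerically. The impulse response g below, a fast exponential plus a fat tail, is purely illustrative and not fitted to anything; the point is that convolving a smooth forcing with a smooth, monotone-decreasing g gives smooth increments:

```python
import numpy as np

# Sketch: atmospheric CO2 as the convolution of a forcing f(t) (annual
# emissions) with an impulse response g(t). The two-time-scale g here is
# purely illustrative: a fast component plus a fat tail.
years = np.arange(60)
g = 0.4 * np.exp(-years / 2.0) + 0.6 * np.exp(-years / 200.0)
f = np.linspace(1.0, 4.0, 60)          # smoothly growing emissions, ppm/yr
atmosphere = np.convolve(f, g)[:60]    # accumulated airborne CO2, ppm

# Year-over-year increments of the convolved output: they grow steadily
# and stay smooth, with no jaggedness of the kind seen in the red curve.
increments = np.diff(atmosphere)
```

So a smooth blue curve fed through any response of this general shape produces a smooth output, which is why the jaggedness of the red curve needs some other explanation.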

Thanks, WebHubTel! I hadn’t wanted to bring convolutions into my already long blog post, but thinking about convolutions is precisely what made me so puzzled by the jaggedness of the red curve here:

I don’t think I can get that red curve by convolving the blue curve with any function of the general sort shown here:

That is, some function g with g(t) = 0 for t < 0 and g monotone decreasing for t ≥ 0.

WebHubTel wrote:

The reason that the seasonal variations are still observed is that the convolution is essentially a low-pass filter, and though it does filter the periodic signal, it doesn’t do it completely, and so we end up seeing the residual noisy oscillations in the Mauna Loa data.

The variations in the red curve aren’t what I’d call ‘seasonal’: it’s wiggling around drastically at the 1-5 year scale. It would be nice to plot a graph of monthly averages, to see more detail.

But I like this aspect of your idea: if the production of CO2 by natural (as opposed to human) agents were very noisy, a low-pass filter might leave us with a curve like the red one. And it’s always worth remembering that natural processes produce and consume a lot more atmospheric CO2 than the human processes produce. So there’s potentially a lot of natural noise, with the blue curve as a small but significant signal buried in this natural noise.

Here’s another suggestion to think about (although sorry I’m feeling too lazy tonight to do it myself…)

The annual fluctuation that Nathan raised above is much larger than the annual mean increment. So the “rapid flux” into and out of the biosphere over the year is in effect the most rapid process. However, it’s likely that this has some variability from year to year, so that when you do the 12-month averaging you are going to end up with a signal containing effects from this variability. Another way of looking at it: if you Fourier transform the CO2 record after removing the exponential growth, you will get more than a simple 1-year peak – there will be a broader spread of energy. You might actually want to filter it out with a slightly better filter than a simple 12-month moving average in order to compare to the annual reported emissions – they are most likely estimates with some built-in smoothing from year to year anyway.

I read somewhere recently that the latest drought in the Amazon released as much CO2 as all the cars in the world. So presumably that will end up giving an upwards glitch in the red graph.

so that when you do the 12 month averaging you are going to end up with a signal containing effects from this variability.

I think it has something to do with this. The averaging process is a low-pass filter and will suppress the noise, e.g. in the classic Mauna Loa graph, which is a cumulative average. However, when we switch over to looking at year-over-year variations as in the incremental graph shown by Renato, we are essentially dealing with a derivative, which is a high-pass filter. In that case, any noise is accentuated and the curve starts looking more jagged.

OTOH, the energy production increments are likely based on data that is so filtered over time that the year-over-year increments turn very smooth. The derivative of this accentuates very little noise.

So I think this may be partially an artifact of how the CO2 data is collected and possible aliasing leading to derivative spikes and noise accentuation. We would really need to look at the original data.

This does require some thought as I now understand the concerns and see why John labelled it a “CO2 puzzle”.

Yes, it does have to do with the difference between natural source variation and equations, but you’ve missed the main reason for the difference.

The main reason is that an environment houses numerous simultaneously emerging and evolving systems, so best thought of as a kind of big pot of “pop corn” going off. It’s not “random” in that the large variations in locally developing events are just not connected at that scale. You don’t make progress with this subject unless you start asking questions about what animates these local developmental processes…

Phil, Interesting to bring up the popcorn analogy. Many people presume that the popcorn going off is in some ways predictable. Yet, when food science researchers carefully measure the time it takes to pop for individually cooked kernels, they find that it generates a spread in times that is not even normally distributed, with obviously fatter tails. That’s why you find lots of unpopped kernels, as the variability is so large. I actually have a section on the phenomenon in The Oil ConunDrum. I looked into this because the popping of popcorn mimics both the temporal dynamics of searching for stuff like oil and of predicting the reliability of components. These are complements in the sense that success is the complement of failure. Finding a success (like an oil reservoir) and exposing a failure are stochastic in mathematically similar ways.

I guess it further points to the great variability in natural processes.

It turns out that the popcorn hazard function (http://en.wikipedia.org/wiki/Survival_analysis) follows an extreme value (i.e. Gumbel) distribution. The reason is that the kernels that pop in a given interval (of increasing temperature) are simply the ones that were least likely to survive the interval. The details of the local processes (some slightly hotter, some slightly cracked) are irrelevant if reproducible.

Well, actually, there’s a special kind of “kernel” that is especially useful for exposing the “pop corn” events hidden in the confusion of time series data. It doesn’t work automatically everywhere, but works astoundingly well some places. It involves the careful use of a smoothing kernel with a hole in the middle. A smoothing kernel with a hole in the middle preferentially reduces fluctuation for higher derivative rates, and so minimizes scalar distortion. It means you can find the true shape of the natural phenomenon, making the data more differentiable, and so expose their dynamics more graphically. It’s one of several mathematical tools I developed in the 80’s & 90’s for investigating locally emergent systems phenomena. http://www.synapse9.com/drwork.htm

I’d be happy to discuss if anyone is interested. Use the side bar to navigate or scroll down the page to the table of contents. I haven’t touched the thing in 10 years, really, as it seems not a soul ever understood what it was about.
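For what it's worth, here is one plausible reading of the "smoothing kernel with a hole in the middle", sketched in Python. The window shape and normalization are my guesses, not anything taken from the linked page:

```python
import numpy as np

# A guess at the "smoothing kernel with a hole in the middle": an averaging
# window whose center weight is zero, so each point is replaced by an
# average of its neighbours but never of itself. (The window shape and
# normalization here are assumptions, not the author's actual method.)
def holed_smooth(x, half_width=2):
    w = np.ones(2 * half_width + 1)
    w[half_width] = 0.0        # the hole
    w /= w.sum()
    return np.convolve(x, w, mode='same')

# Demo: recover a smooth shape from a noisy version of it.
rng = np.random.default_rng(7)
true = np.sin(np.linspace(0, 6, 200))
noisy = true + 0.3 * rng.standard_normal(200)
smooth = holed_smooth(noisy)
```

Because each output point excludes its own (noisy) value, point-wise fluctuations are averaged down while the slowly varying shape survives, which seems to be the spirit of the description above.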

The complete CO2 system consists of a lot of cycles with different time scales, so they are more or less decoupled; each subcycle should have an equilibrium point which shifts as you pump in CO2. So there is a subtle difference between how many years the CO2 we just emitted will stay in the air, and how many years the excess CO2 will persist once we stop pumping!

Just for the record, the data of the red curves comes from the data of December of each year, not from the average of the year.

Just for the record, the data of the red curves comes from the data of December of each year, not from the average of the year.

Data for fossil fuel combustion is usually integrated over an entire year, so the noise excursions in CO2 levels might be reduced by around a factor of 3 if you followed the same procedure and integrated over a yearly cycle. I believe that noise reduction would occur if the noise were IID: the improved counting statistics would reduce it by the square root of 12.
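The square-root-of-12 claim is easy to check numerically with synthetic IID noise (nothing to do with the real CO2 data):

```python
import numpy as np

# Averaging 12 independent monthly readings shrinks IID noise by a factor
# of sqrt(12) ≈ 3.46, consistent with the "factor of 3" estimate above.
rng = np.random.default_rng(1)
monthly = rng.normal(0.0, 1.0, size=(10_000, 12))   # 10,000 synthetic years
annual_means = monthly.mean(axis=1)

print(monthly.std())        # close to 1.0
print(annual_means.std())   # close to 1/sqrt(12) ≈ 0.289
```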

Just for the record, the data of the red curves comes from the data of December of each year, not from the average of the year.

Because the details seem to matter now:

If you got your data from here, your red curve does not exactly show the difference in CO2 concentration at Mauna Loa between one December and the previous one. It’s a difference of ‘corrected’ four-month averages:

The annual mean rate of growth of CO2 in a given year is the difference in concentration between the end of December and the start of January of that year. If used as an average for the globe, it would represent the sum of all CO2 added to, and removed from, the atmosphere during the year by human activities and by natural processes. There is a small amount of month-to-month variability in the CO2 concentration that may be caused by anomalies of the winds or weather systems arriving at Mauna Loa. This variability would not be representative of the underlying trend for the northern hemisphere which Mauna Loa is intended to represent. Therefore, we finalize our estimate for the annual mean growth rate of the previous year in March, by using the average of the most recent November-February months, corrected for the average seasonal cycle, as the trend value for January 1. Our estimate for the annual mean growth rate (based on the Mauna Loa data) is obtained by subtracting the same four-month average centered on the previous January 1.

I wish they explained the ‘correction’ method.

And while we are thinking about small but perhaps important issues: you could make me happier if you’d carefully check the 1998 data on your red curve:

Let me do a spot check to see if he did it correctly. I see a big spike in the red curve around 1998: it looks like the CO2 went up around 2.75 ppm that year. But then the next year it seems to have gone up just about 1 ppm. On the website it says 2.97 ppm for 1998, and 0.91 for 1999. So that looks roughly right, though I’m not completely happy about 1998.

Is it just some inaccuracy in the graphing program, or something else?

When you say “‘correction’ method”, do you refer to where they “corrected for the average seasonal cycle”?

I don’t know quite what that means. But I did look up what they do to remove the seasonal cycle. Maybe their ‘correction’ has something to do with that. Thoning et al. (1989) describes the seasonal removal method (Sections 4.1-4.3). I don’t know if they’ve made any tweaks to the method since that paper was published.

They linearly detrend the gap-filled daily data, then apply a zero-padded fast Fourier transform. To remove the seasonal cycle, they apply a low-pass filter which is a decaying exponential of the fourth power of frequency (Eq. 2).

The filter has a “cutoff frequency” of 667 days (0.55 cycles/year), meaning the power is attenuated by half at a period of 667 days. 667 days was chosen so that the filter transfer function drops to almost zero right at a period of 1 year.

(They also talk about a 50-day filter to remove subseasonal variability, and it’s not entirely clear whether they apply that first before the seasonal filter, or whether that’s for a separate analysis.)

After filtering, they perform an inverse FFT back to the time domain, and add back the linear trend.
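Here is a rough numpy sketch of that recipe as I read it. The exact constant in the exponent (chosen so that the power drops by half at the cutoff) and the zero-padding details are my assumptions; the real NOAA code surely differs:

```python
import numpy as np

# Sketch of the filtering recipe described above (my reading of Thoning et
# al.; the constant in the exponent is an assumption chosen so power halves
# at the cutoff). Steps: linearly detrend, zero-padded FFT, multiply by an
# exp(-(f/fc)^4)-type low-pass, inverse FFT, re-add the trend.
def noaa_style_lowpass(x, samples_per_year, cutoff_days=667):
    t = np.arange(len(x), dtype=float)
    slope, intercept = np.polyfit(t, x, 1)          # linear detrend
    resid = x - (slope * t + intercept)
    n = 2 * len(x)                                  # zero-pad
    X = np.fft.rfft(resid, n=n)
    freq = np.fft.rfftfreq(n, d=1.0 / samples_per_year)   # cycles/year
    fc = 365.25 / cutoff_days                             # cutoff, cycles/year
    H = np.exp(-(np.log(2) / 2.0) * (freq / fc) ** 4)     # half power at fc
    filtered = np.fft.irfft(X * H, n=n)[:len(x)]
    return filtered + slope * t + intercept               # restore trend

# Example: weekly data with an annual cycle should come out nearly flat.
fs = 52
t = np.arange(5 * fs) / fs
x = 350 + 2 * t + 3 * np.sin(2 * np.pi * t)
trend_only = noaa_style_lowpass(x, fs)
```

With this reading, the transfer function at a 1-year period is about 0.02, matching the statement that the filter "drops to almost zero right at a period of 1 year" while leaving the multi-year trend intact.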

John, I have to say that this is a most impressive bit of data forensics and scientific sleuthing that I have seen in a while.

The take away message has to be that the natural cycles and variations in CO2 can be momentarily large but as long as they don’t accumulate above the long-term average, they still pale in comparison to the relentless, almost monotonic, advance of man-made CO2 emissions.

Have you used an accumulative variance test to see if it’s a random walk? Or a variance suppression test to see if it’s an accumulative process?

I haven’t really done anything except explain what Renato Iturriaga did. The CO2 data is here — have at it!

But I’m not sure what you mean by ‘it’. The anomaly, I guess. For that, I think the most exciting thing is its apparent correlation with the “Niño-3 SST index”, as shown above. This is the sea surface temperature in a patch of the ocean fairly close to Hawaii:

Data for this region are available here — monthly since 1950 and weekly since 1990.

Oh, that’s easy: the “it” in this case is the physical process you are trying to describe, using the data recorded from it as a guide. The question is whether it is a statistical process, or an overlay of many kinds of unrelated dynamic systems, or a single large-scale system with small-scale fluctuations, etc. That would be important for knowing how to construct your mathematical description of it, wouldn’t it?

Those two tests described on my drstats.htm page would help answer those questions, based on whether the visible trends have flowing change in their continuities, to help develop a case for its being one or another kind of natural phenomenon creating the data.

One can make a perfectly good statistical model of a dynamic system, but then it’s largely meaningless, isn’t it, due to the mismatch in kind?

I wonder if algal blooms are causing the difference. They should coincide with any significant influx of iron from stuff like upwelling during el nino, coastal runoff, dust from the Saharan and Mongolian deserts, and so on.

The total reduction of the regional sea-to-air CO2 flux during the 1991–94 El Niño period is estimated to account for up to one-third of the atmospheric anomaly (the difference between the annual and long-term-average increases in global atmospheric CO2 content) observed over the same period.

The data for the red curve in 1998 is 2.76; this is the difference between December 1998 and December 1997. There is no inaccuracy in the graphing program. It’s only that the data comes from December to December, not from the annual average. Since the data of the blue line is based on what happened in a given year, I thought it was reasonable to take the same time intervals for the observed increment.

What’s the best free software for graphing a lists of numbers? I want something that works on Windows. I think it would be fun to make a number of graphs related to this CO2 puzzle. For example, it would be fun to see the difference between your simple “December minus December” calculation and the more complicated calculation advocated by NOAA.

I use OpenOffice, avoiding software that I need to pay for, so I’ll give that a try. Now that I’m gradually ceasing to be a pure mathematician I need to learn how to draw pretty graphs — not just pretty commutative diagrams!
