Searching for the missing pieces of climate change communication

What difference does it make anyway?

In the previous two posts I looked at the original hockey stick and its latest incarnation. Both hockey stick shapes seemed to be artifacts of the methods used, not of the underlying data. But, you could say, “Even if this uptick doesn’t follow from the data in those two graphs, we have been measuring surface temperatures for more than a century and the temperatures go up. If the hockey stick graph doesn’t tell the story, the measurement data surely do! So what difference does it make anyway?”.
I have seen this remark pop up in several discussions. At first I was puzzled by such statements, but now I think they fail to take into account what is really at issue. Let’s look into it in more detail.

There are two data sets in play here. The first is the proxy data set, which consists of proxies like tree rings (Mann’s hockey stick) or ocean sediment cores (Marcott’s hockey stick). The second is the instrumental record, which consists of temperature measurements made with thermometers.

Proxy data is NOT real temperature data. Previously I assumed it was, because I knew that, for example, in a warm year the rings of a tree will be wider than in a colder year. Although this is definitely true, it is also true that there are other influences on tree rings, like moisture, nutrition, diseases, pests, competition with other plants/trees, interactions with wildlife, weather events and who knows how many other elements that matter for the health of that tree. In that sense, the width of a tree ring depends not only on temperature but also on these other influences. This means the temperature signal is diluted in the proxy data and not directly comparable with real temperature data. What could be said is that conditions for that tree were better or worse over time, not necessarily that temperatures went up or down. The proxy data will contain a temperature signal, but it will be noisy (the temperature signal is probably a big part of it, but not necessarily a constant part).

Thermometers, on the other hand, have a very good temperature signal. When the temperature goes up, the substance they contain (alcohol, mercury, metal) expands; when it cools, that substance contracts. The higher the temperature, the bigger the expansion; the lower the temperature, the bigger the contraction. After the measurements it becomes more complicated, with issues like the UHI (Urban Heat Island) effect on the measurements and the further processing of this data (is it really representative for global or Northern Hemisphere temperatures?), but that is a different story altogether.

Another issue in this comparison is the resolution. The Marcott hockey stick, for example, has a resolution of more than 300 years. The instrumental record has a much higher resolution, down to one day. Even if we bring that to a year, or even 10 or 20 years, it is still a much higher resolution than the proxy data set. If the instrumental record were somehow put behind the proxy data and treated the same way as the proxy data, it would be barely 1 (one) measly point, and probably not even placed high in the graph either.

As far as I know, there is no dispute that the world has been warming over the last 160 years. Temperatures have been measured for some time now, and although we are currently in a flat-lined region, the general trend since 1850 has been upwards. But that is not what these two hockey stick graphs were trying to say. The issue they want to prove is that the last century is unusually warm compared to previous eras. According to their statements it hasn’t happened in, say, the last 1,000 years (Mann’s hockey stick) or 11,300 years (the Marcott hockey stick), and therefore it has the human fingerprint all over it (because of humans emitting more and more CO2 into the atmosphere).

Let’s keep the focus on what is really being said here. At issue in the hockey sticks is the uniqueness of the warming, not the fact that it warmed. We already know it warmed, but we don’t know whether this happened before, and the data given by these two studies are not sufficient to base that conclusion on. Even if it had happened in the past, these methods would not be able to show it. When this uniqueness within the long time frame doesn’t follow from the data, it makes no sense to try to prove it with the incredibly short data set we have in the instrumental record.

Another issue that came to light with the Marcott paper: making the claim that the last 100 years are unprecedented (in the press release) and later saying the non-robustness of the last 100 years doesn’t matter because the instrumental record could well prove it (in the FAQ) is not really honest. The claim made was exactly about that non-robust data, when in reality the graph was not saying much about the last 100 years and even seemed to conclude that this data is useless for the current period. If the available evidence doesn’t support a claim, then one shouldn’t make that claim.

Returning to the initial question: what difference does the incorrectness of the last part of the hockey stick graphs make, since we know the earth has warmed over the last 160 years anyway? As seen above, that is a false premise, because that was not what the hockey sticks wanted to prove in the first place. But there is more to it than that, and it was the statements about the Marcott paper that led me to notice this. The initial question diverts attention from the strong statements that were made in the press. Let me turn the question around: if the proxy data has to be tortured in order to get it into a hockey stick shape, how much signal of our current temperatures is there really in the proxy data set? To put it in other words: how much of an “independent” confirmation of our current temperatures is this really?

In the end, does it matter? For those who have read the papers, probably not. If they saw the articles in the press, they could put this into context. But it does matter for the laymen who only got to see the articles in the press and were yet again confirmed in their beliefs, without realizing that the papers themselves didn’t warrant those conclusions at all.

Hmmm…interesting. Yeah, I can’t see how proxy data helps. I mean, if it could show that the temperatures have been moving one direction or the other over a historical time scale, and then throw in actual temperature data from the past 100yrs to prove it is different, then it might do something. Odd.

Sure, you can look at the direction of the proxy data and compare it with the instrumental record. Heck, it would even be technically possible to graft one on the other. But comparison of the two will be problematic.

How sure can we be of the real direction of the data if, for example, we know that the resolution of the proxy samples is much lower than the length of our current temperature record? Even if there had been a period with the same temperature increase as the current one, the method would not even detect it. For all we know there could be dozens of those in past eras, maybe even altering the direction in some part(s) of the graph altogether.

An example more in line with the post: suppose there was a temperature peak in the last 1,000 years similar to ours. This peak will be smoothed out because of the resolution and the noisiness of the proxy data. Probably only a small blip will remain. If one now compares that with high-resolution data from the instrumental record (where the current peak is clearly visible), the conclusion would be that there was no similar temperature peak in the past, because it is not seen in the proxy part of the data, and therefore our warming is “unprecedented”.
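To make the smoothing effect concrete, here is a small sketch of my own (made-up numbers, not data from either paper): a 1,000-year yearly temperature series containing one century-long warm peak of +1 degree, run through a 300-year moving average the way a coarse proxy record effectively averages the climate. The peak that is obvious in the yearly data shrinks to a small blip.

```python
import numpy as np

# Hypothetical yearly series: 1,000 years, flat except for a
# 100-year warm peak of +1.0 degrees, plus some proxy-style noise.
years = np.arange(1000)
temps = np.zeros(1000)
temps[450:550] += 1.0                      # century-long warm peak
rng = np.random.default_rng(0)
noisy = temps + rng.normal(0, 0.2, 1000)   # noisy "proxy" signal

# Smooth with a 300-year moving average, mimicking the low
# effective resolution of a proxy record like Marcott's.
window = 300
smoothed = np.convolve(noisy, np.ones(window) / window, mode="same")

print(f"peak in yearly data:  {noisy.max():.2f}")
print(f"peak after smoothing: {smoothed.max():.2f}")
```

The 100-year excursion survives only at roughly a third of its height (100/300 of the window), which is the point of the planet analogy below: the event was there, the method just cannot see it.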

That would be about the same as saying that there were far fewer planets in the past because the ancient astronomers only found three while the Hubble telescope today finds loads more. It is not that those planets weren’t there in the past; they just couldn’t be seen with the low-resolution instruments of the ancient astronomers.

The Keeling graph article: it is not really clear to me what you wanted to ask or clarify with that link.

Thanks for the clarification on the resolution issue–the Planet example is pretty good.

I wasn’t really meaning to ask a question with the Keeling Curve article, it was just an interesting bit of information that came across my fb one day and I thought I would let you look it over and see if there are errors in it like you’re talking about with the Hockey Stick graphs.

The Keeling curve consists of measurements of the CO2 concentration in the air at Mauna Loa (Hawaii) over the last 60-70 years. As far as I know this is high-quality data. It shows a steady increase from the end of the 1950s until now, most likely due to human emissions. I can agree with that. The Keeling curve has been in the news a lot lately because the concentration is reaching 400 ppm (parts per million).

The article at richarddawkins.net is the introduction to an article from the Washington Post. In that article they show a graph of the CO2 levels of the last 800,000 years, with the Mauna Loa record grafted onto it. The 800K-year record is probably the Dome C record (a core drilled out of the Antarctic ice, then melted so the composition of the released air could be analyzed). It should be an accurate process (assuming, for example, that the air bubbles were not contaminated during/after the icing process and that the sampling is really traceable to actual years), and I think most people find it an accurate representation.

It is the first time I have seen the combination of these two graphs, and the graph raises a lot of questions. How accurate are the measurements in that first part? What does the difference in resolution between the two sets (thousand(s) of years for the Dome C curve, probably yearly or per decade for Mauna Loa) do to the graph? But especially: how did they manage to calibrate the two graphs to each other, considering the last ice core measurements date from well before the first Mauna Loa measurements? This could well alter the height of the shaft. So for myself there are too many questions to make sense of it. I am only an (interested) layman and I can be wrong. Don’t rely only on my information; look also at others to form an opinion.

But following the theme of the post: what is really the issue here? The Keeling curve shows us only that the CO2 concentration rises. But what is mostly implied is that this is a dangerous thing (it wasn’t that high in 800,000 years). And this is where the debate starts. Some scientists think the powers of CO2 are highly exaggerated. CO2 is not the strongest greenhouse gas, nor the most abundant, and its effects are logarithmic (the higher the concentration, the smaller the additional effect of each further increase, so the strongest effects could well be behind us already), and so on. The rising temperatures could also be explained in other ways (for example, coming out of a cold period one would expect warming). Personally, I believe CO2 as a greenhouse gas has an effect on temperatures, but not necessarily as the main driver.
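The logarithmic behaviour mentioned above is usually written with the simplified forcing expression from Myhre et al. (1998), ΔF ≈ 5.35 · ln(C/C0) W/m². The snippet below is my own illustration of that formula (the 280 ppm pre-industrial baseline is an assumption on my part): two identical 60 ppm steps in concentration, where the second step adds less forcing than the first.

```python
import math

# Simplified CO2 radiative forcing (Myhre et al. 1998):
# delta_F = 5.35 * ln(C / C0) in W/m^2, relative to baseline C0.
def co2_forcing(c_ppm, c0_ppm=280.0):
    """Approximate extra forcing of concentration c_ppm vs baseline."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Two equal-sized concentration steps of +60 ppm each:
step1 = co2_forcing(340) - co2_forcing(280)   # 280 -> 340 ppm
step2 = co2_forcing(400) - co2_forcing(340)   # 340 -> 400 ppm

print(f"first  +60 ppm: {step1:.2f} W/m^2")
print(f"second +60 ppm: {step2:.2f} W/m^2")
```

Because of the logarithm, each further increment of CO2 contributes less than the previous one; whether that diminishing effect makes current levels safe or dangerous is exactly the debate described above.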

The question being, if there really is a limit: what is too much? Some say we should have left it at 290 ppm (the pre-industrial level) because the earth was in balance with that. Some say we already passed the safe limit at 350 ppm. Some say it is 650 ppm, and I wouldn’t be surprised if there are those who say it is even higher. These will always be educated guesses, and the scary scenarios have been wrong until now.