Proxy spikes: The missed message in Marcott et al

There is a message in Marcott that I think many have missed. Marcott tells us almost nothing about how the past compares with today, because of the resolution problem. Marcott recognizes this in their FAQ. The probability function is specific to the resolution. Thus, you cannot infer the probability function for a high resolution series from a low resolution series, because you cannot infer a high resolution signal from a low resolution signal. The result is nonsense.

However, what Marcott does tell us is still very important, and I hope the authors of Marcott et al will take the time to consider it. The easiest way to explain is by analogy:
50 years ago astronomers searched extensively for planets around stars using lower resolution equipment. They found none and concluded that they were unlikely to find any at the existing resolution. However, some scientists and the press generalized this further to say there were unlikely to be planets around stars, because none had been found.
This is the argument that since we haven’t found 20th century equivalent spikes in low resolution paleo proxies, they are unlikely to exist. However, this is a circular argument, and it is why Marcott et al has gotten into trouble. It didn’t hold for planets, and now we have evidence that it doesn’t hold for climate.

What astronomy found instead was that as we increased the resolution we found planets. Not just a few, but almost everywhere we looked. This is completely contrary to what the low resolution data told us and this example shows the problems with today’s thinking. You cannot use a low resolution series to infer anything reliable about a high resolution series.
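The resolution point can be made concrete with a small numerical sketch. These are toy numbers, not Marcott’s actual data or method: a flat 12,000-year annual series containing a single 100-year, 1-degree spike, viewed the way a 300-year-resolution proxy stack would see it.

```python
import numpy as np

# Toy series (invented numbers): 12,000 years of flat temperature
# with one 100-year, 1-degree "20th century style" spike.
temp = np.zeros(12000)
temp[5000:5100] = 1.0

# View it at low resolution: one 300-year average per data point,
# roughly the resolution limit stated for the Marcott stack.
low_res = temp.reshape(-1, 300).mean(axis=1)

print(temp.max())     # 1.0 -- the spike is obvious at annual resolution
print(low_res.max())  # ~0.33 -- at 300-year resolution it is a minor wiggle
```

Going from the low resolution series back to the annual one is impossible: the ~0.33-degree wiggle is equally consistent with a 1-degree century spike, a 0.33-degree three-century plateau, or many other histories.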

However, the reverse is not true. What Marcott is showing is that in the high resolution proxies there is a temperature spike. This is equivalent to looking at the first star with high resolution equipment and finding planets. To find a planet on the first star tells us you are likely to find planets around many stars.

Thus, what Marcott is telling us is that we should expect to find a 20th century type spike in many high resolution paleo series. Rather than being an anomaly, the 20th century spike should appear in many places as we improve the resolution of the paleo temperature series. This is the message of Marcott and it is an important message that the researchers need to consider.

Marcott et al: You have just looked at your first star with high resolution equipment and found a planet. Are you then to conclude that since none of the other stars show planets at low resolution, that there are no planets around them? That is nonsense. The only conclusion you can reasonably make is that as you increase the resolution of other paleo proxies, you are more likely to find spikes in them as well.

==============================================================

As a primer for this, our own “Charles the Moderator” submitted this low resolution Marcott proxy plot with Jo Nova’s plot of the Vostok ice core proxy overlaid to match the time scale. Yes, the vertical scales don’t match numerically (the ticks and offsets differ), but this image is solely for entertainment purposes in the context of this article, and it does make the point visually.

Spikes anyone? – Anthony

(Added) Study: Recent heat spike unlike anything in 11,000 years. “Rapid” heat spike unlike anything in 11,000 years. Research released Thursday in the journal Science uses fossils of tiny marine organisms to reconstruct global temperatures …. It shows how the globe for several thousands of years was cooling until an unprecedented reversal in the 20th century. — Seth Borenstein, The Associated Press, March 7th

Note: If somebody can point me to a comma delimited file of both the Marcott and Vostok datasets, I’d be happy to add a plot on a unified axis, or if you want to do one, leave a link to the finished image in comments using a service like Tinypic, Imageshack or Flickr. – Anthony

What Nancy Green says is true. However, the astronomers looking for planets wanted to find them. Climate ‘scientists’ looking for hockey sticks want a nice straight shaft then a single blade. This was shown in the CG1 emails with the discussion about getting rid of the Medieval Warm Period. I doubt very much that we will see any concerted search for high resolution proxies for paleo climates as ‘The Team’ would see finding a twentieth century type spike several thousand years ago as a threat to their hypothesis (aka funding).

Anthony, the vertical scales DO match. I did it by eyeball only, but carefully. The baseline is shifted with a WAG based on the smoothed lines.

REPLY: Perhaps I wasn’t clear. I’m saying they don’t match numerically on the scales, due to the ticks being different and the offset difference; the amplitude of the scales looks like a reasonable match. (added clarification to body of story) – Anthony

“Marcott tells us almost nothing about how the past compares with today, because of the resolution problem. Marcott recognizes this in their FAQ. The probability function is specific to the resolution. Thus, you cannot infer the probability function for a high resolution series from a low resolution series, because you cannot infer a high resolution signal from a low resolution signal. The result is nonsense.”

Perhaps you should reread the paragraph in the Marcott paper starting with, “Because the relatively low resolution and time-uncertainty of our data sets should generally suppress higher-frequency temperature variability, an important question is whether the Holocene stack adequately represents centennial- or millennial-scale variability.”

I know the FAQ helps most WUWT readers, but I don’t think there is anything new there. It’s all in the original paper, including the points made above.

I don’t think Tamino’s exercise shows what it claims, namely that the Marcott et al methodology would be able to detect thermal spikes such as the one we’ve seen in the past century and a half. Tamino’s result is not meaningful because for such spikes to exist in the Marcott data (where there is intrinsic natural and measurement smearing) the actual non-smeared spike would have had to be an order of magnitude larger, so as to appear smeared and flattened out as in the actual data.

In other words, the proxies are intrinsically smeared by natural processes over the millennia, as well as by the collection procedure, and by the averaging performed in the calculation. For a spike as used in Tamino’s demonstration to exist in the physical data, the actual temperature excursion, over a century, would have had to be one to two orders of magnitude greater than the current one. That is, Tamino may have shown that Marcott et al can rule out spikes of 10 or 100 degrees. But he did not show that spikes of 1 degree would have been detected.
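The attenuation argument can be made concrete with a toy smoothing kernel. The 1000-year boxcar below is an assumption standing in for the combined natural and methodological smearing; the real transfer function of the Marcott pipeline is not known here.

```python
import numpy as np

# A 100-year, 1-degree spike in an otherwise flat 11,000-year series.
temp = np.zeros(11000)
temp[5000:5100] = 1.0

# Stand-in for the combined smearing: a 1000-year moving average.
kernel = np.ones(1000) / 1000.0
smoothed = np.convolve(temp, kernel, mode='same')

peak_after = smoothed.max()     # ~0.1 degrees survives the smoothing
required = 1.0 / peak_after     # ~10-degree spike needed to still look
print(peak_after, required)     # like 1 degree after smoothing
```

Under this (assumed) kernel a real 1-degree century spike would emerge as a 0.1-degree bump, which is exactly the commenter’s point: ruling out 1-degree spikes in the smoothed record requires ruling out 0.1-degree bumps, not 1-degree ones.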

Strangely enough, the comment I left at RC applies perfectly to this post:

“1- That’s not “spikes”, that’s “noise”. Why do you think the authors performed this Monte Carlo simulation? Because the raw data itself is not interpretable on short timescales.

2- A short “spike” in the past could not be comparable to modern warming, because modern warming is not just fast, it is also *durable*. Even if CO2 emissions completely stop in 2100, the warmth will remain for centuries. If something like *that* had happened in the past, Marcott’s proxies and methods would have detected it.

A short “spike” in the past could not be comparable to modern warming, because modern warming is not just fast, it is also *durable*. Even if CO2 emissions completely stop in 2100, the warmth will remain for centuries.

You may believe that. But you can’t believe it based on a low resolution study such as Marcott.

This attempting to defend the “spike” by bringing in material completely external to the study is sadly typical of its defenders.

1) Marcott can’t show whether a spike is precedented or not.

2) Marcott can’t show the cause of anything at all, let alone a spike.

3) Marcott can say nothing of any long term trends (note, the base trend is down).

toto says:
April 3, 2013 at 8:41 pm
Because the raw data itself is not interpretable on short timescales.

and

A short “spike” in the past could not be comparable to modern warming,
>>>>>>>>>>>>>>>>>>

So you are saying that modern warming is different from the past warming even though you’ve stated that the data can’t be used to interpret the past warming so you don’t actually know what it looked like at all? Yet you are confident that it is different?

“…A short “spike” in the past could not be comparable to modern warming, because modern warming is not just fast, it is also *durable*. Even if CO2 emissions completely stop in 2100, the warmth will remain for centuries…”

And you know this how, exactly? Crystal ball, perhaps? Or simply unquestioning faith in what can only be described as a “plausible theory with several major suppositions”?

Any suggestions as to what may have caused the “temporary short spikes” in the past?

2- A short “spike” in the past could not be comparable to modern warming, because modern warming is not just fast, it is also *durable*. Even if CO2 emissions completely stop in 2100, the warmth will remain for centuries.

Durable, that’s a good one. Even if I don’t see the evidence for it, I like the idea that the only thing that humans have done which is sustainable is to fix the climate. /sarc

The case is also understated, because the issue is not only the low frequency sampling.

There is further spreading and flattening of maxima and minima due to
1. dating errors (suppose Hadcrut year 2001 temperatures would be computed by averaging 1905 temperatures from location A and 2003 temperatures from location B)
2. non-temperature influences on proxies.

The latter appears to be very severe in this reconstruction. McIntyre’s hemispheric reconstructions have very little similarity with instrumental temperature for the whole instrumental record.

The very good thing about Marcott et al is it has brought this ‘spike’ issue out into a blazing bright spotlight.

The spike created by grafting high resolution instrumental data onto very low resolution, ‘smoothed by nature and statistics’ proxies (even though that was not the cause of Marcott’s uptick, as Steve McIntyre and others have shown) had CAGW proponents bringing up the defensive argument “well, we already have an instrument record that shows the same thing”.

I love that Vostok overlay by Anthony, I’ll run that under a few noses. Thanks!

One thing I don’t understand. With 11000 years of history indicating the current trend is downwards, wouldn’t you think there might be a problem with a projection that goes straight up? The analysis of the paleo data would indicate CAGW projections are likely wrong. What am I missing?

Leif: sorry dude, but do you know what kind of average? Neither do I. It matters a lot. Without a priori knowledge of the actual process, it’s a WAG what the original data looked like. Moving averages decimated down are particularly messy in this regard.

You’d think Tamino, the guy with a real world time series analysis gig, would realize how attenuated an output “spike” would be, and would understand the implications, after passing through some apparently low frequency filtering mechanism.

Don’t get me wrong Leif, I’m more saying that if you highlight the word “reliable” in the quote you chose, rather than “anything,” the distinction is there. There is always an unknown, likely incalculable, uncertainty going in that direction.

I think I follow toto’s argument: the current temperature trend is unprecedented in the past 10,000 years. We know that because it is mainly caused by human CO2 emissions and we know that because it is unprecedented in the past 10,000 years.

“What Marcott is showing is that in the high resolution proxies there is a temperature spike. This is equivalent to looking at the first star with high resolution equipment and finding planets. To find a planet on the first star tells us you are likely to find planets around many stars.”

This analogy only works if you start by assuming that there is nothing special about the 20th century, i.e. that there can be no anthropogenic warming, and since Watts is using it to claim there is nothing special about the 20th century it becomes a circular argument. What Watts is doing is like starting with the observation that there are planets around our sun and concluding there should be planets around others, neglecting that if there weren’t planets here we wouldn’t be able to live here to observe the fact. You always have to consider if there is something special about one observation before you start extrapolating from it!

Marcott has certainly generated a lot of different posts now.
I made the observation on an early post at Climate Audit that the uptick at the end was very important. It may not have been “robust” so that Nick & Richard Telford can dismiss it as irrelevant, but it was a match to the instrumental record – leading to a sense of “confidence” in the proxy.
Critics jumped on the uptick as poor science, but within the paradigm of climate science, it was the uptick that made the paper useful. Get the headlines and wear the flak – mission accomplished. And in the climate science establishment it would be seen as the correct way to treat the data.
Now in the face of so many criticisms, their reaction is “who cares if the uptick was an artifact and not robust, the thermometers say it is actually real”, so they say the critics are running around over molehills.
But this post picks another aspect of the analysis. It is a feature of this analysis that it smooths out the past and gets rid of spikes. It is not a bug. The clear issue is to convey a smoothly varying climate so that the present appears way out of line with natural processes. This paper accomplishes this – and is rewarded with congratulations even from the more reasonable contributors like Richard Telford.
The third great “feature” of this study was to smear the data from various far flung places in order to come up with the most wonderful and physically meaningless statistic of “world” temperature. Using comparisons with ice cores can now be dismissed as looking only at regional effects, not global data.
I can understand why this study is very important to the establishment. And I suspect it may well survive the adverse comments.
For me though, it is clear that proxies for past temperatures or climate conditions are untestable and cannot be realistically calibrated. They are the unleashing of the tremendous power of human imagination onto a very noisy and mysterious pile of data, drawing out patterns based on very strong assumptions and world views to make grand pronouncements about the past. The fact that scientists are generally happy with such studies [that curiously seem to overturn each other with regular frequency!] is actually a sad commentary on the nature and direction of modern science [or is that post-modern science?].
As I have noted before, such highly suspect proxies should not even be considered as being in the same category as very large scale effects – such as villages and roads now abandoned under ice, the shifting of forest lines, written histories, etc.

There is a saying in Product Development circles to the effect that, “Just because you don’t know how to do it, doesn’t mean it can’t be done.” Resolution aside, we’d also cite Milutin Milankovich and Alfred Wegener, both universally condemned by geophysicists for decades on the specious grounds that equinoctial precession was solely an astronomical phenomenon while deep-ocean (bathyscaphic) geology was similar to that of continental landmasses.

Since no-one can experiment with global climate, “climatology” is not an empirical scientific discipline but a classificatory exercise akin to botany. “Peer review” of such as Marcott accordingly is not in any sense a validation, which can only come from Nature, but mere semantics indicative of a scholastic Aristotelian consensus. Characterizing this as “mere word-smithery” understates the case.

So Thomas, what you’re saying is the 20th C temperature is not itself evidence for unusual behaviour. It’s because we know something unusual is happening that the temperature must therefore be unusual. But Mann et al 1998 started by telling us ‘look, 20th C temperature is unusual, so something unusual must be happening’. Why am I feeling a little dizzy?

This analogy only works if you start by assuming that there is nothing special about the 20th century

But that is the way science works. Science ASSUMES that the null hypothesis is true until it is PROVEN false. Since no one has proven this false, we must as scientists conclude that there is nothing special about the 20th century.

This makes the rest of your comments, Thomas, rather unnecessary since the null hypothesis is that any changes witnessed are due to natural variation. This is most definitely not a circular argument, this is THE actual science today. We must assume that there is nothing special about an event or a particular time period, because if you do assume there is something special you are subjecting data and methods to what is known as observer bias.

And I do not think you quite understood the mistake in logic there either. This is the problem with observer bias. If you incorrectly dictate your logic in such a way that you assume a certain time period or a certain star system (such as our sun) is special, that is the second you are not looking at actual data but jumping at a whim. If a certain star system is special for instance, the science will eventually find the evidence of this and the entire conjecture WILL be proven by the science. This will never happen the other way around. You can not FORCE science to rotate around your point of reference and actually find evidence that way. The mistakes in logic and technique that you already applied shows that instead of ever finding proof of the special, you will instead just double-down on assumptions and observer bias and never actually find the proof that you need. Indeed, we find this time and time again in climate change.

So that is why climate science is doomed, in other words, to be stuck in a quagmire of tribalism and belief. The entire process was wrong from the start because it made an assumption that was not proven scientifically, and instead of leaving it at the hypothesis stage, scientists jumped the shark and made proclamations that we live in special times. This is why the scientists find “special” things in the data. They hunt and seek on a quest to find this missing evidence, and instead of ever finding it through this flawed approach, they find statistical mistakes of their own making more times than not.

Or they simply find an artifact in the data. How many times have we found that, through torture of the data, the scientist simply outputs the first hockey stick they find? I am always curious about how long it took them to find that hockey stick in data which does not show it. And the reason their methods are hard to duplicate?

Why that is the easy answer. I bet they really can not recall what they did exactly to get that result. But that is not truly incompetence, that is just observer bias in action. The incompetence part comes in when they subjected themselves to the observer bias in the first place.

This analogy only works if you start by assuming that there is nothing special about the 20th century, i.e. that there can be no anthropogenic warming, and since Watts is using it to claim there is nothing special about the 20th century it becomes a circular argument. What Watts is doing is like starting with the observation that there are planets around our sun and concluding there should be planets around others, neglecting that if there weren’t planets here we wouldn’t be able to live here to observe the fact. You always have to consider if there is something special about one observation before you start extrapolating from it!

I’m afraid the converse is true as well. If you see planets around our perfectly normal sun, then it is fairly safe to assume that we are NOT the anomaly. The only way you could assume that we were, is if you assume that life will happen anywhere at all, so it was likely to happen around the only star with planets, otherwise we are an extreme outlier in the universe.

Likewise, it is more sensible to assume, without any evidence to the contrary, that temperature fluctuates quite a bit, and the that the recent temperature increase is not unusual.

In order to convince the world that this is NOT the case, and that extreme measures are required that could plunge us into relative poverty for decades, we really need extraordinary proof. This isn’t it, and none has been generated to date.

When analyzing time series, there are two things to keep in mind:
1- the Nyquist–Shannon sampling theorem: it is impossible to detect a frequency higher than half the sampling frequency of your data (and the sampling interval is rather long for most of the proxies, so no “peaks” are detectable). In other (symmetrical) words: the shortest period you can detect is ALWAYS at least two times the time interval between two samples.
2- to avoid “border effects”, the longest period you can detect with some accuracy may not exceed about 1/4 of the total duration of the time series. This is why, when using time series obtained exclusively from direct measurements of temperature (more accurate than proxies), you cannot detect periodicities longer than 60 years (as the longest continuous temperature time series do not exceed 4*60 = 240 years). I think this is why Scafetta underlines the importance of a 60 year periodicity in temperature records, plus a long linear trend corresponding to an exit from the Dalton little ice age (circa 1830). If he had mixed proxies and temperature records, he could have seen that this “linear trend” is actually the ascending branch of another sinusoid (of period approximately equal to 360 years). The superposition of 11, 60 and 360 year sinusoids describes quite well all the climatic features detected over the last 2000 years, including the present standstill lasting for the last 15 years. No significant contribution of anthropogenic greenhouse gas must be invoked at all to explain the experimental evidence. I can provide more details on this, if wanted/needed.
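The first point can be demonstrated numerically. This is a toy illustration with made-up numbers (a 100-year cycle, one sample per 160 years, roughly the average proxy spacing Marcott et al report), not real proxy data: an undersampled cycle does not vanish, it aliases into a phantom long cycle.

```python
import numpy as np

# A 100-year temperature cycle sampled once every 160 years.
t_coarse = np.arange(0, 8000, 160)
sampled = np.sin(2 * np.pi * t_coarse / 100)

# Sampling theory predicts the alias frequency:
f, fs = 1 / 100, 1 / 160
f_alias = abs(f - round(f / fs) * fs)
print(1 / f_alias)   # ~400: the 100-year cycle masquerades as a 400-year one

# The coarse record is numerically indistinguishable from a
# 400-year cycle sampled the same way.
phantom = -np.sin(2 * np.pi * t_coarse / 400)
print(np.allclose(sampled, phantom))
```

So a coarse proxy does not merely miss short cycles; whatever short-period power exists gets folded into spurious long-period wiggles, which is why interpreting anything below twice the sampling interval is hopeless.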

The Vostok dataset shows 33 (eyeballed from chart) similar, or longer, warming periods over the past 11,000 years.

So, if you argue smoothing (low resolution) the data to achieve a long term trend is correct, then the current warming will be smoothed away.

If you argue the above provides insufficient resolution, then the argument has to be that we have seen the current situation circa 33 times before in the last 10,000 years, so the current warming is just another example of natural climate change.

Arguing that the extra carbon dioxide now in the atmosphere is solely responsible for all of the recent warming is clearly nonsense. However, arguing that CO2 was responsible for some of the recent warming does not seem unreasonable. Despite what the IPCC says, we still do not have a clue how much of the recent warming is man made and how much of it is natural.

Anyhow, far too much of ‘climate science’ is falsely concentrated on eliminating the previous warm periods (MWP, Roman, Minoan and Holocene Optimum) of the Holocene Era and exaggerating what is happening today.

This analogy only works if you start by assuming that there is nothing special about the 20th century

A fairly incredible statement, Thomas, when you have just seen it demonstrated that the recent spike disappears in the noise of a higher resolution proxy, and would probably be even less visible if we had long term precise instrumental data.

So, how do we really know whether the 20th century spike is ‘special’ or not?

I look forward to your answer to this important question, as the whole CAGW story rests upon that particular assumption.

(Please try to answer without mentioning the phrases “97%”, “deniers” or “conspiracy”).

How about this analogy:
I’m still alive and at my last check-up last week, doctors claim that I’m in a healthy condition.
This proves that my average temp was around 37°C from my birth, till the last check-up. (proxy)
Since last week, however, I started taking my temp at regular time intervals, and as I came down with a fever, by the end of the week my average temp over this week was an alarming 38.5°C.

If I plot these together, we would see a large uptick as well.
But does it mean that I never had a fever before?

To the others believing this study would or could pick up a spike similar to our recent temp averages…… stop. Move away from the PC. Pick up something useless like an i-something or use your cell phone for communicating. And no, don’t ever pick up a calculator, again. That includes you, Tammy. It’s an abhorrently stupid conversation to be having. The time resolution doesn’t allow for this. I wrote a post on this last month.

Instead of directing traffic to my site, I’ll offer this. ….. Just go to the WFT site. Pick one of the land/ocean data sets. It doesn’t matter which one, GISS or HadCrut…. it doesn’t even have to have ocean. Just pick one. Be generous and select from 1900, so as to get the full MONTHLY data set for last century and to the present. Click on “plot graph”. Note the high and low points as to where they are in relation to the vertical axis. Regardless of whether you picked HadCrut or GISS, the difference between the high and low should be about 1.5 C. Now, we don’t say that is the amount of warming we have. We typically say we have had about 1/2 to 1 deg warming or less. I believe Marcott’s spike showed about 0.8 C warming. Everyone with me?

Now Marcott seems confused about his resolution as well, in that they state in their SI and the FAQ….. ” the average resolution of the 73 paleoclimate series is 160 years, and the median is 120.” and “there is no statistically valid resolution to the combined proxy set for anything less than 300-year periods.” ……. okay, whatever. so, let’s be generous and take them at the lowest figure…… no, let’s even go lower. Go back to your WFT app…… now, set the mean at 100 year resolution. or for the math challenged, 1200 months. Now, click on “plot graph” again. Now mark the values of the low and high points and do that tricky math stuff. We should note about a 0.11 deg C increase. This is the “averaging” over time. What people are trying to do is the same thing as comparing one day’s temperature to the average annual temperature. Or even a month’s average. It’s insidiously stupid.
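The commenter’s WFT exercise can be mimicked with a synthetic monthly series. All numbers here are assumptions chosen to resemble the 20th century record (a 0.8 C/century trend plus a 5-year, ±0.35 C wiggle standing in for ENSO-scale variability); WFT itself is not involved.

```python
import numpy as np

# 113 years of synthetic monthly anomalies: trend plus short-term cycle.
months = np.arange(113 * 12)
temp = 0.8 * months / 1200 + 0.35 * np.sin(2 * np.pi * months / 60)

raw_range = temp.max() - temp.min()   # ~1.5 C between monthly high and low

# The analogue of "set the mean at 1200 months" (100-year resolution):
smooth = np.convolve(temp, np.ones(1200) / 1200, mode='valid')
smooth_range = smooth.max() - smooth.min()   # ~0.10 C after averaging

print(round(raw_range, 2), round(smooth_range, 2))
```

The same series shows a ~1.5 C spread month-to-month but only ~0.1 C at 100-year resolution, which is the point about comparing one day’s temperature to an annual average.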

To those who think Marcott spliced the temp record onto his graph, I don’t think that’s what he did. I think he simply manipulated his proxies to emulate Mann’s hs. In other words he used the temp record as a template and fit his garbage. So it’s a distinction, but without a real difference.

PS. Anthony, I can appreciate the desire to plot Vostok with Marcott’s, but wouldn’t that also be a distortion of the spatial resolution? But, heck, Marcott’s proxies swung faster than the thermometer record. MD01-2421 recorded a 1.15 deg uptick going from 356 BP to 334 BP….. that’s 22 years. That’s just the second one I looked at.

PPS The Marcott data is in workbook form, in other words it has many pages. So moving it to a flat file would be challenging.

Leif Svalgaard says:
April 3, 2013 at 7:37 pm
You cannot use a low resolution series to infer anything reliable about a high resolution series.
Yes you can. You can infer the long-term trend among other things. Don’t overstate your case.
/////////////////////////////////////////////////////////////////////
To be more accurate, should you not be saying :”You can infer the long-term trend SMOOTHED BY THE LIMITATIONS OF THE LOW RESOLUTION USED IN THE STUDY among other things”?

Quote Leif Svalgaard
> You cannot use a low resolution series to infer anything reliable about a high resolution series.

Yes you can. You can infer the long-term trend among other things. Don’t overstate your case.
Unquote

As I see your comments on over 50% of the posts here on WUWT, it appears to me that your comments may be perceived as aggressive.

There is a nice way of saying things, and there are a number of more or less aggressive ways of saying the same thing. Maybe it would be more productive to consider a less hostile way of addressing problems with both posts and comments?

lsvalgaard says:
April 3, 2013 at 8:40 pm
Mark T says:
April 3, 2013 at 8:11 pm
Leif: look up the term aliasing.
Won’t make any difference as the data is not sampled but averaged [as far as I can assess]

Actually, such averaged sampling is considered worse: more distorted, and with less high frequency content than ideal point sampling.
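The contrast between point sampling and block averaging can be shown with toy numbers (a 100-year cycle observed once every 160 years, either instantaneously or as 160-year means; none of this is real proxy data):

```python
import numpy as np

t = np.arange(8000)
signal = np.sin(2 * np.pi * t / 100)   # 100-year cycle, amplitude 1

point_sampled = signal[::160]                       # instantaneous samples
block_means = signal.reshape(-1, 160).mean(axis=1)  # 160-year averages

print(np.abs(point_sampled).max())  # ~0.95: amplitude survives, but aliased
print(np.abs(block_means).max())    # ~0.18: averaging strips the amplitude
```

Point sampling keeps the cycle’s amplitude but assigns it a wrong (aliased) period; block averaging suppresses it almost entirely. Either way the high frequency content cannot be recovered from the coarse series.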

Brian Eglinton says:
April 3, 2013 at 11:23 pm
“…For me though, it is clear that proxies for past temperatures or climate conditions are untestable and cannot be realistically calibrated.”
////////////////////////////////////
There are 2 main problems with proxies.
1.
Proxies respond to favourable environmental growing conditions in general, and not to one factor in isolation. It is therefore all but impossible to isolate the response to, say, temperature from the more general response of the proxy to beneficial environmental conditions. In other words, it is difficult to separate the signal that we are looking for from the noise.
2.
Tuning and calibration. One would need a very lengthy overlap with the thermometer record to properly and reliably tune the proxy to the thermometer record. Consider Mann’s tree rings. He was able to tune them to the 1910 to 1960 thermometer record, but when tuned to this they diverged from the post 1960 thermometer record, showing either a breakdown in the proxy’s response to temperature or a tuning issue. Had Mann tuned his tree proxies to, say, the post 1960 thermometer record, the hockey stick would have taken a different shape.

In summary, there are always huge error bars and uncertainties. Proxies should always be taken with a pinch of salt, merely giving a wide ballpark indicator of what may be happening, but nothing more than that. One must never splice one proxy record onto another; the calibration and tuning issues are too great to lead to anything other than a misleading result.

If our thermometer record went back 11,000 years we may see something very different to the smoothed curve produced by the Marcott proxy study.
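The tuning-window sensitivity described above is easy to sketch with an invented divergent proxy. None of these numbers come from Mann’s data; the drop of the proxy’s sensitivity to 0.3 after 1960 is an assumption made purely for illustration.

```python
import numpy as np

years = np.arange(1900, 2011)
temp = 0.01 * (years - 1900)     # steady 1 C/century warming (invented)

# Proxy tracks temperature until 1960, then its sensitivity drops to 0.3
# (the 0.42 offset keeps it continuous at 1960, where temp = 0.6).
proxy = np.where(years < 1960, temp, 0.3 * temp + 0.42)

def calibration_slope(start, end):
    # Fit temperature against the proxy over the chosen window.
    mask = (years >= start) & (years < end)
    return np.polyfit(proxy[mask], temp[mask], 1)[0]

early = calibration_slope(1910, 1960)   # 1.0: proxy looks like a thermometer
late = calibration_slope(1960, 2011)    # ~3.3: same proxy, 3x the scaling
print(early, late)
```

A reconstruction of the same past would swing three times harder depending purely on which calibration window was chosen, which is the point about tuning and splicing.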

Comparing local temperature spikes to a global reconstruction is apples-/-oranges and meaningless.

There are finer resolution global temperature data which also show no past spikes comparable to the last century warming.

So what?

Comparing low temporal resolution data to high temporal resolution data is apples-/-oranges and meaningless. And that is what this thread is about, because that is what Marcott et al. did.

There are finer resolution global temperature data which DO show past spikes comparable to the last century warming.
For example, the above graph which overlays Vostok data shows several such spikes.

Funny how warmunists claim the Vostok data provides global information when it suits their purposes but claim that same data is “meaningless” when it doesn’t fit their purposes.

And if all finer resolution global temperature data did not show past spikes comparable to the last century warming (n.b. some do), then that would be meaningless, because absence of evidence is NOT evidence of absence.
And that is also what this thread is about.

2- A short “spike” in the past could not be comparable to modern warming, because modern warming is not just fast, it is also *durable*. Even if CO2 emissions completely stop in 2100, the warmth will remain for centuries. If something like *that* had happened in the past, Marcott’s proxies and methods would have detected it.

They didn’t, so it hasn’t. Hence, “unprecedented”.

You assume CO2 caused warming.
You assume this warming will remain for centuries.
You assume Marcott’s proxies and methods would have picked up this warming.
You assume the current assumed CO2-induced warming is unprecedented.
You assume too much.

Excellent article, Nancy Green. The great truths are often very simple.

In answer to Leif Svalgaard whom I respect greatly but who I think on this occasion has fallen into the trap of making a word-based argument:

Yet another analogy, from the world of accounting this time. (That’s where I’ve spent much of my working life.) Say that I have the total salary remuneration paid by a company for the last 20 years. For affirmative action reporting purposes I am now required to break this down, trend it and chart it, categorized by race, gender, age, disabilities, etc.

If all I have is the statistical snapshot for the latest financial period, and no breakdown for the previous 19 periods, the data resolution is too coarse. It is impossible to infer anything reliable about the previous financial periods. No contortion of language can upset that conclusion.

@Thomas:
“This analogy only works if you start by assuming that there is nothing special about the 20th century, i.e. that there can be no anthropogenic warming, and since Watts is using it to claim there is nothing special about the 20th century it becomes a circular argument.”

Can you explain, without invoking an argument from authority, the precautionary principle or faith, how you haven’t turned logic on its head here? I suspect that once you look behind the press releases to uncover the real lack of empirical proof for anthropogenic warming on the scale alarmists would have us believe, you will find that your assertions are the ones that rest, squarely, on circular argument.

Or are you just another hit and run commenter unwilling to justify your boilerplate argument?

“Comparing local temperature spikes to a global reconstruction is apples-/-oranges and meaningless.”

Indeed. Seems to be what people here have been saying for a long time.

“There are finer resolution global temperature data which also show no past spikes comparable to the last century warming.”

Where? If you could find me a proxy that has been accurately calibrated to the ENTIRE unadjusted modern temperature record (one like HadCRUT, as opposed to GISS, which smears wildly variable Arctic temperatures over thousands of miles and gradually reduces all sampling from a mixture of urban and rural to exclusively airports over the late 20th century); agrees with it without any truncation of data; can be demonstrated through comparison to the instrumental record to have a resolution of less than 30 years; does not have to go through statistical manipulation akin to using a Photoshop skew/rotate (or invert) tool on a graph to show correlation with the instrumental record; is demonstrably not subject to bias from external factors like grazing sheep or CO2 levels (without these biases being arbitrarily ‘corrected’ for); correlates convincingly across its entire record with a large number of proxies of the same nature in other locations; and can, through the application of rigid tests (not ones made up to suit the occasion), be shown to be impervious to contamination from the act of extraction, then I will believe you. Otherwise, what you say just looks like hand waving to me.

For a short spike comparable to today’s rise to appear in the graph, it would have to show up in the same sample interval in most of the 70-some proxies. And even then the spike, being the average of all the proxies, would likely not have an amplitude comparable to today’s. What is the likelihood that they all have the centuries lined up exactly? And have you seen the spaghetti graph of all the proxies together on one chart? There is no way a one-century spike of half a degree would ever survive that mess.
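The survival question can be put to a quick numerical test. Here is a minimal Monte Carlo sketch in Python; all numbers are illustrative assumptions (73 synthetic proxies, a 100-year 0.5 C spike, and Gaussian dating errors with a 250-year standard deviation), not values taken from Marcott et al.:

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.arange(0, 4000)                             # years
spike = np.where(np.abs(t - 2000) < 50, 0.5, 0.0)  # ~100-yr, 0.5 C spike

# Every synthetic proxy records the identical spike, but each carries a
# random dating error (std. dev. ~250 yr, an assumed illustrative figure)
n_proxies = 73
offsets = rng.normal(0, 250, n_proxies).astype(int)
stack = np.mean([np.roll(spike, k) for k in offsets], axis=0)

# Amplitude of the spike in a single proxy vs in the averaged stack
print(float(spike.max()), float(stack.max()))
```

Even though every synthetic proxy contains the identical spike, the random age offsets spread it across centuries in the stack, so the averaged peak retains only a small fraction of the original half-degree amplitude.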

Even if the resolution had been high enough to detect such a spike, it appears to my layman’s mind that, given the core-top redating of most of the proxies to 1950, and given that the very nature of sediment sampling can destroy or disturb up to the top two feet of the sample (making the actual start date anything up to and including 1,000 years BP), it is not only unlikely that any such spike would survive their methods; I suggest it is a physical impossibility.
It’s clear that the majority of their data has ended up in the wrong bins when sorted and none of them in alignment.

Excellent observation.
This helps explain the failure of the hockey sticks.
It is difficult to imagine that trained academics could simply, as a group, overlook this.
But that raises the question of motive.

The takeaway is that you can’t compare long-period, low-resolution temperature proxies with the short, modern, high-resolution instrument record and conclude either that we are in a period of unprecedented warming OR that there were no such periods in the past.

We have volcanic dust and sulphate spikes in both the EPICA Dome C and Greenland ice cores that show eruptions more than an order of magnitude greater than observed in modern times.
The Aniakchak (Alaska) eruption of 1645 BC would have plunged the entire Northern Hemisphere into cooling and, if aerosols like dust and sulphate do block sunshine, would have cooled the whole world.
We know when major eruptions occurred, and we know which ones made it to both poles. If your proxy misses a huge thermal event like this, throw it away.

Benfrommo: “But that is the way science works. Science ASSUMES that the null hypothesis is true until it is PROVEN false. ”

In this case it is proven false. Our CO2-emissions are clear as is the rapidly rising concentration in the atmosphere. Your “null hypothesis” would have to pretend industrialization never happened or that CO2 isn’t a greenhouse gas. You should be careful about what you call the “null hypothesis” as that is usually far from obvious.

Jerome: “If you see planets around our perfectly normal sun, then it is fairly safe to assume that we are NOT the anomaly”

Then you have to know that our sun is “perfectly normal”, which you can’t do unless you know how common planets are. Assume only one sun in a million has inhabitable planets. What are the odds ours is that one? Given that we are here to ask, still 100%. I’m a perfectly ordinary human. I was born in Sweden. Does that mean everyone is born in Sweden? Statistics from a single sample is risky.

markx “So, how do we really know whether the 20th century spike is ‘special’ or not?”

We can’t; there is always the possibility that there has been rapid change that is just too small or too brief for us to see in the proxies (a “God of the gaps” argument). However, that was not what I argued against. Nancy Green claimed the much stronger “Rather than being an anomaly, the 20th century spike should appear in many places as we improve the resolution of the paleo temperature series”. She used a study that didn’t find any spikes before the current one as support for the claim that there ought to be others.

Kudos to nearly everyone making a comment. WUWT shows a more sophisticated understanding of scientific method than any comparable site.

In apology for the poor performance of some making comments, especially Thomas, we have to keep in mind that one leader of the Alarmist movement, Trenberth, has called for reversing the null hypothesis.

Anthony, this is a figure I was thinking of generating myself, but it bothers me when the caption states that the “vertical scales don’t match”. Do the ORIGINS not match (naturally) or is it the SCALE that does not match? This is a crucial question. The point can be made mathematically precisely — the annual variance (really, more than just variance, several cumulants/moments) about a running mean temperature is known to reasonable accuracy from the data from the thermometric era, and it is bone simple to generate colored noise with the same distribution to add to the oversmoothed data from Marcott et al.

I would argue that if the SCALES of the curves match up, this figure is meaningful even “as is” above, but it would be much more meaningful if one generated a half dozen variations of Marcott plus annual noise that matches the observed natural variability of the climate about 100 year smoothed trends or 50 year smoothed trends.

In any event the term “scales don’t match” needs to be clarified — if the size of the degrees match they have the same scale (but are overlaid with different vertical origins to have roughly the same mean). To REALLY get the right feel for things, the noise should be applied only to a segment of Marcott with the same length as the added sequence of noise — indeed, it should decorate the curve so the first 10,800 years match the noise apparent in the last 200 years in their actual figure.
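The decoration described above is straightforward to prototype. A minimal sketch, assuming a hypothetical stand-in for the smoothed stack and plain white noise with an illustrative 0.15 C standard deviation (matching the full set of cumulants, or using properly colored noise, would require fitting the instrumental spectrum):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for an oversmoothed reconstruction: 20-yr steps over the Holocene
years = np.arange(0, 11300, 20)
smooth = 0.3 * np.sin(years / 11300.0 * np.pi)  # hypothetical slow curve, deg C

# Short-term variability would be estimated from a high-resolution
# (instrumental-era) segment; 0.15 C here is an assumed figure
sigma = 0.15

# "Decorate" the smooth curve with noise of that amplitude to visualize
# what a plausible high-frequency history could look like
decorated = smooth + rng.normal(0.0, sigma, size=smooth.shape)
```

Overplotting several such realizations against the instrumental record gives a fairer visual comparison than the bare smoothed curve, since both series then carry comparable high-frequency scatter.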

Leif Svalgaard says:
April 3, 2013 at 7:37 pm
You can infer the long-term trend among other things. Don’t overstate your case.
=========
Dr Svalgaard, thank you for taking the time to review my paper. I considered your argument when I wrote the sentence in question and in the general sense I believe you are correct.

I used the word “reliable” in the sense that as you increase the resolution the trend may change, depending upon the error in the original signal. What I was trying to convey is that Mother Nature tends to be full of unexpected surprises and we should not overestimate the reliability of our findings. Do we really know that the trend over 8000 years is, for example, 1.1C? Might not better data change this to 1.3C?

I have repeated Tamino’s work on peak detection in the Marcott data, generating three 200-year-long triangular peaks of amplitude 0.9C at 7000 BC, 3000 BC and 1000 BC. I did exactly the same analysis but using the individual proxy measurements binned into 50-year intervals. I used the Hadley algorithms for generating the global average over the 5×5 degree grid. I found that it is indeed correct that the peaks would be detected. The signal, however, is more smeared out than Tamino’s, and I suspect it would be further reduced by ~30% after folding in measurement error.

What is even more interesting is that the underlying data actually do show a few slightly smaller peaks similar to the generated ones. One of these coincides with the Medieval Warm Period!
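A rough version of this test fits in a few lines. The 200-year, 0.9 C triangular peaks and their dates come from the comment above; the flat baseline, annual sampling, and a simple ~300-yr boxcar as the effective proxy smoothing are simplifying assumptions:

```python
import numpy as np

def triangular_peak(t, center, width=200.0, amplitude=0.9):
    """Triangular spike reaching `amplitude` at `center`,
    zero outside center +/- width/2 (all times in years BP)."""
    return amplitude * np.clip(1.0 - np.abs(t - center) / (width / 2.0), 0.0, None)

t = np.arange(0, 12000)  # annual axis, years before 1950

# Flat baseline plus three injected peaks (7000 BC, 3000 BC, 1000 BC)
signal = (triangular_peak(t, 8950)
          + triangular_peak(t, 4950)
          + triangular_peak(t, 2950))

# Bin into 50-year intervals, as in the reanalysis described above
bins = signal.reshape(-1, 50).mean(axis=1)

# Then apply a ~300-yr boxcar (6 bins), mimicking the effective resolution
smoothed = np.convolve(bins, np.ones(6) / 6.0, mode="same")

print(float(signal.max()),              # 0.9  (original peak height)
      round(float(bins.max()), 2),      # 0.68 after 50-yr binning
      round(float(smoothed.max()), 2))  # 0.3  after ~300-yr smoothing
```

So the injected peaks remain detectable, but at roughly a third of their original amplitude, consistent with the smearing described, and folding in measurement error would reduce them further.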

The analogy doesn’t really work because the “high resolution spike” is not a product of high resolution data but of data manipulated to create a spike.

Please pay attention to the following: Marcott et al. did NOT combine the proxy record with the thermometer record to produce their spike. This fallacy is frequently repeated here. Someone should be pointing out this error.

There was a comment a few days ago on another thread that stated the actual increase in the last century given by the Marcott proxies was around .5C. Interestingly, this matches the global raw data without adjustments. Has anyone verified if this is true? If it is, it would make for an interesting discussion. And, it makes claiming the recent warming is exceptional look a little silly.

Leif, I will give you this: you state more personal opinions than almost anyone. I have to totally disagree with you about using low resolution models to spot long-term trends. And speaking of long-term trends, the current trend is thus:

GAT is still above CAT, and average atmospheric CO2 is also below the running average. That you and the alarminati choose to study CO2 in the context of human history is just a biased opinion. The running average is the more objective number to use. And what you have now is a damaged science, because the alarminati continue to produce deceitful graphics in which they show current temperatures running warmer than the Roman and Medieval warm periods, which is a lie.

This article highlights the whole problem with the Warmista position:
Warmistas MUST demonstrate that today’s climate is abnormal, and try as hard as they do, they cannot.
This Marcott paper was perhaps the most dishonest attempt yet to do so.

Thomas says:
April 3, 2013 at 10:55 pm
This analogy only works if you start by assuming that there is nothing special about the 20th century
==========
Marcott ends at 1940. The population explosion and global industrialization based on fossil fuels are almost exclusively post-WWII. Therefore, the spike in Marcott cannot be attributed to AGW except in a minor sense. The majority of the spike must be due to natural causes or to error in the underlying work.

Jordan says:
April 4, 2013 at 5:10 amOn sampling, I can sample a pure sine wave at fixed intervals which are multiples of its period.
The data was not sampled at 300-yr intervals, but represents averages over 300 year windows. All the wailing about sampling, Nyquist, aliasing, etc miss the point.
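The distinction is easy to demonstrate with a toy sketch (the spike width, height and placement below are arbitrary assumptions): point samples can miss a short spike entirely, while window averages always record it, only attenuated in proportion to spike width over window width:

```python
import numpy as np

t = np.arange(0, 3000)                             # years
spike = np.where(np.abs(t - 1000) < 100, 0.9, 0.0) # ~200-yr, 0.9 C spike

# (a) Point-sampling every 300 yr, starting at year 0:
# the spike (years 901-1099) falls between the samples at 900 and 1200
samples = spike[::300]

# (b) Averaging over consecutive 300-yr windows: the window covering
# years 900-1199 contains 199 spike years, so the spike shows up
# at roughly 0.9 * 199/300, about 0.6
averages = spike.reshape(-1, 300).mean(axis=1)

print(float(samples.max()), round(float(averages.max()), 2))  # 0.0 0.6
```

So an averaging scheme cannot "miss" a real spike the way naive sampling can; it records it at reduced amplitude, which is exactly why resolution limits matter for amplitude comparisons.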

Alkenone series suffer from the same limitations as tree rings. Any subset of measurements could be disturbed by bioturbation, sediment slumping, changes in currents, etc., so you have to average a lot of series together. A high resolution series is a single core, but one with lots of uncertainty. This leaves lots of room for statistical interpretation.

Mark T. and Leif Svalgaard are both mostly right. You should be able to ascertain a trend from low resolution data as long as you can safely rule out systematic error like aliasing.

Another point should be made. Marcott et al. is getting a lot of press as being the “first global study that spans the Holocene.” Folks need to remember that there are lots of proxies that cover the Holocene, and they tend to show the Holocene Thermal Maximum as much warmer than today. But you have to go back to studies written on paper, not in bytes, and modern climate scientists seem unwilling to do that.

I apologize in advance for constantly flogging this point regarding the centrality of Tennessee, Oak Ridge National Laboratory, and political climate science. Marcott, Shakun, et al. ran their computations through ORNL’s custom-made hockey-stick-generating Jaguar supercomputer. ORNL is a creature of the nuclear power industry and perfectly represents our former Senator Al Gore’s interest in cutting-edge technology and edgy environmentalism. If you see ORNL on the list of ingredients you may want to consider the contents carefully.

The Vostok ice core plot completely negates the careful responses on RealClimate about the ice cores not showing upticks like the one grafted onto Marcott’s graph. Either Gavin, Eric et al. knew that the Vostok record does show such spikes and were trying to hide their knowledge from their readership, or they were unaware of the content of this basic climate record. Either way, they don’t look good.

Historical temp spikes near the poles are likely to be larger than temp spikes in temperate areas, just as shifts in temps today are amplified nearer the poles. So Vostok can make the case about spikes, but may overstate it.

Matt Ridley had a post on higher temps during the Holocene Optimum (about 6 to 8 thousand years ago):

Maybe somebody who is very good with numbers and is familiar with these issues might do a blog post, followed by a journal article, showing what these temp records look like? We could see the extent of temp spikes, then, not just at the poles but also nearer the equator.

Thomas says:
April 4, 2013 at 6:10 am
‘I got a bunch of replies and will answer a few of them:

Benfrommo: “But that is the way science works. Science ASSUMES that the null hypothesis is true until it is PROVEN false. ”

In this case it is proven false. Our CO2-emissions are clear as is the rapidly rising concentration in the atmosphere. Your “null hypothesis” would have to pretend industrialization never happened or that CO2 isn’t a greenhouse gas. You should be careful about what you call the “null hypothesis” as that is usually far from obvious.’

In this statement, you assume that the rise in manmade CO2 has caused warming, that the “forcings versus feedbacks” calculation has been completed and the magnitude of the warming caused by manmade CO2 alone is known, and that natural causes of temperature change are known and known not to have contributed to warming or cooling that interfered with the calculation for manmade CO2 alone. But you know none of these things nor does anyone else. In assuming that you know the contribution of natural causes to temperature change, you have assumed that the Null Hypothesis has been falsified.

The reason for the spikes is the distorted scales, especially the Y axis. If sensible scales were chosen, there would be no spikes, just minor bumps and hollows. The proxies are treated as absolutes when in fact they are just educated guesses.
The quasi-religious debate over global warming will only end when scientists with no axe to grind are formed into a group and tasked to look objectively at the scientific evidence. Unfortunately, hell will freeze over before that happens.

In that dataset, only the top 11,642 years of Vostok are shown even though the Vostok core goes back much farther. Yet Marcott uses sea-floor cores back to 20,000 years. So why didn’t Marcott use Vostok back to 20,000 years?

You come home to find your house burned down after a nasty run-in with a known arsonist.
The fire marshal shows up and notes: “Well, this house burned down way back in 1905. It wasn’t arson then, therefore it can’t be arson now.”

Paleo arguments about “unprecedented” are stupid on both sides.
Throw all the paleo data in the trash bin and you still know from physics that CO2 will warm the planet. Nothing can change that. Not paleo records, not modern records.

Nancy, I’m not saying you read my comments specifically on earlier posts on the Marcott et al. spike, but certainly a number of other commenters addressed this very subject. I suggested that, using the 150-yr record as a reasonable sample of the high resolution data, one could put a band of “whiskers” on Marcott’s graph that would add another 0.5 to 1C. I would say that Steve McIntyre at CA and McKitrick’s piece in the Financial Post explained it for the initiated, but should have given this example for the layman: “to come closer to apples to apples, one could average the 150-yr record to a single figure and add that on to the proxy graph: presto, the spike is gone. It still isn’t completely legitimate, but it is at least somewhere near where the next data point would be.” Your analogy is a good one, but still not good enough for ordinary good folks.

Here is a 20-year spline smooth on Marcott’s published data. Marcott’s global temperature stack is provided on a 20-year time step, but in actuality it clearly reflects hundreds of years of smoothing (more than 300). And it is not really a smooth but more of a random tweaking of the uncertainties to arrive at the smoothest curve possible.

We also know that he played around significantly with the dating (there is a word for what he did that I won’t say) so even the “published data” is useless as a temperature history.

But here it is anyway, a 20-year spline of the data as “presented”, which indeed shows large variability; even the cold spike at 8,200 years ago (6,300 BC) now shows up in the data.

This is a great argument for contrarians, because it can be made indefinitely. If no higher-resolution proxies are ever found then they can keep claiming, forever, that modern warming is nothing to be concerned about. And even if proxy resolution gets higher they can just continue to claim that resolution isn’t high enough. Because hey, it’s not like anyone actually ran any numbers for this claim. It’s just a statement.

Also, the statement that spikes similar to the modern one “should appear” with higher resolution is not at all reasonable. A mechanism to produce such a spike is not known to science. You’re just assuming it’s there because your world view demands that it is.

One wonders where we would be if Galileo, having turned his telescope onto Jupiter and discovering moons, had then concluded that this proved the consensus view that the earth was the center of the universe. Instead:

Galileo was found “vehemently suspect of heresy”, namely of having held the opinions that the Sun lies motionless at the centre of the universe, that the Earth is not at its centre and moves, and that one may hold and defend an opinion as probable after it has been declared contrary to Holy Scripture. He was required to “abjure, curse and detest” those opinions.[59]
He was sentenced to formal imprisonment at the pleasure of the Inquisition.[60] On the following day this was commuted to house arrest, which he remained under for the rest of his life.
His offending Dialogue was banned; and in an action not announced at the trial, publication of any of his works was forbidden, including any he might write in the future.[61]

Is the situation today that much different? Having seen the moons around Jupiter, has Marcott taken heed of the warning inherent in Galileo’s story? What should one expect when speaking out against the consensus view of the powerful and influential? Would it not be wiser to claim the moons around Jupiter are proof that the earth lies at the center of the universe?

Alternative analogy: You find an old mangled 8-track tape that somehow manages to play without jamming. You play it, and all you hear is bass. At the very end, you suddenly hear 3 seconds of bass, cymbals and tambourine. You conclude that there was nothing but bass in the original recording until the final 3 seconds.

Now hook up a scope. Look at all those spikes in the final 3 seconds. They’re the highest amplitude in the whole recording.

Thomas says: April 3, 2013 at 10:55 pm
. . . .
“This analogy only works if you start by assuming that there is nothing special about the 20th century, i.e. that there can be no anthropogenic warming, and since Watts is using it to claim there is nothing special about the 20th century it becomes a circular argument.”

Hoo boy. A few commenters have already walloped Thomas for this . . . butttttt there’s nothing like a pile on.

There’s nothing circular about it, it’s simple logic. And most importantly, Nancy’s argument assumes nothing, unlike yours. She simply pointed out that if you find a spike like Marcott found in the 20th century, that should give rise to investigation to see if there are more spikes. Note, she’s not making a conclusion—other than to say that more investigation is needed to find out if the spike is usual or unusual. Her proposal regarding that further investigation is that because the spike exists that makes it more probable that similar spikes happened in the past than if no spike was found. This is logical and exactly how science is supposed to work: devise hypothesis, test with data.

But if, like you, you start from the premise that there IS something special about the 20th Century, you have already made a conclusion: that something is special about the 20th century spike. In other words, how do you know it is special? You have no other data to evaluate that claim against—that I am aware of—it’s simply your bias.

This reveals that in addition to begging the question with your own argument, you also assume something about Nancy’s. She merely proposed a hypothesis and asked for investigation; you seemingly assume that she discounted the idea that such investigation might show the 20th Century spike is special. She did not.

As you correctly point out, Marcott et al. told us that, if we happen to see a spike, we should ignore it as not robust. Why do you point this out here? Why not tell Revkin, Borenstein, Gillis and the rest of the arm-waving Yellow Press?

Dr. Brown,

You are dead on: Vostok and GISP both show much larger swings, much faster. The chart should be perfected; it is worth the effort.

Below is a partial list of members of the Baker Center, named after Howard Baker. This is the highest-level bipartisan policy guidance group which calls the tune at Oak Ridge National Laboratory. Perhaps having this information helps explain the instant media hype given to Marcott. You can bet that these folks have high-level media contacts in their Rolodexes.

Baker Center Board
The Honorable Howard H. Baker Jr.
Former Ambassador to Japan
Former United States Senator
The Honorable Phil Bredesen
Former Governor of Tennessee
The Honorable Albert Gore Jr.
Former Vice President of the United States
Former United States Senator
The Honorable Don Sundquist
Former Governor of Tennessee
William H. Swain
The University of Tennessee Development Council
The Honorable Fred Thompson
Former United States Senator

Very bipartisan. All in favor of a carbon tax to raise the price of fossil fuels in the US.

I love the analogy and agree with the point made, and reading the comments has made my morning. But this is really all about Marcott, and when you clear away all the fog, probably the most important and most to-the-point comment here is:
Jeff Norman says:
April 4, 2013 at 6:31 am
Have a great day everyone, it was -12 C here in Saskatchewan this morning!

Here is a Vostok overlay on Marcott. The y-axis on the Vostok data has a 5°C range. I tried to match points at +0.8°C and −1.2°C from the Vostok graph to the Marcott graph. The zero lines mismatch by 0.05°C, 1% of the scale; not too bad for eyeballing, in my assessment.

Map: This Spotfire application allows a user to choose one or many proxy cores from a world map, so cores in an area can be easily compared. Local or regional trends can be investigated.

Drag or Ctrl-Drag to select cores for viewing in all other charts below. Or Ctrl-click on the Count column of the Cores table to the right of the map.

Charts:
B1 Age Mar vs Pub: an X-Y scatter to cross-plot the Marcott redating vs the published ages, both in YBP (YBP=0 is 1950).
B2 Age Chg vs Pub: plots only the age change (Marcott minus published) vs published age. Points above the zero line (negative values, reversed scale) are redated by Marcott to later, more recent, dates.

C1 Age Both vs Depth: depth on the X axis, published age in red, Marcott age in blue.

D1 Age Both vs TempAnom: same as C1, but the X axis is temperature anomaly. Left is cooler, right hotter. Red-blue separations are due to differences in age dating (Y axis).

E1 TempA vs AgeP: temperature anomaly vs published age; each core is a different color, with shape by proxy type.
E2: same as E1, but with the ages as redated by Marcott.

F1 TempA vs AgeP MAvg: same as E1, but all cores in blue. The X axis is binned, with the binning interval adjustable by the user (note the slider on the axis control); a 3-bin trailing moving average is plotted in red.
F2 TempA vs AgeM MAvg: same as F1, but using Marcott’s redated ages.

G1 TempA vs AgeP Box: similar to F1, but each bin is represented by a box-and-whisker plot, with the median in white and the mean in yellow.
G2: same as G1, but the ages are Marcott-redated.

Note: in this version of the data, the Vostok ice core has no data in the Age_Mar (redated) column. Only the published ages are present; therefore, Vostok won’t appear in any of the *2 charts.

Thomas, several people answered your first post but, unfortunately, your reply at April 4, 2013 at 6:10 am shows that you failed to understand. I hope this is easier for you to grasp.

Ryan, your post at April 4, 2013 at 8:03 am states that you have the same lack of scientific knowledge.

I write to explain the basic principle which you both say you don’t know. And I hope you find this basic information helpful.

The Null Hypothesis says it must be assumed a system has not experienced a change unless there is evidence of a change.

The Null Hypothesis is a fundamental scientific principle and forms the basis of all scientific understanding, investigation and interpretation. Indeed, it is the basic principle of experimental procedure where an input to a system is altered to discern a change: if the system is not observed to respond to the alteration then it has to be assumed the system did not respond to the alteration.

In the case of climate science there is a hypothesis that increased greenhouse gases (GHGs, notably CO2) in the air will increase global temperature. There are good reasons to suppose this hypothesis may be true, but the Null Hypothesis says it must be assumed the GHG changes have no effect unless and until increased GHGs are observed to increase global temperature. That is what the scientific method decrees. It does not matter how certain some people may be that the hypothesis is right because observation of reality (i.e. empiricism) trumps all opinions.

Please note that the Null Hypothesis is a hypothesis which exists to be refuted by empirical observation. It is a rejection of the scientific method to assert that one can “choose” any subjective Null Hypothesis one likes. There is only one Null Hypothesis: i.e. it has to be assumed a system has not changed unless it is observed that the system has changed.

In the case of global climate no unprecedented climate behaviours are observed so the Null Hypothesis decrees that the climate system has not changed.

Importantly, an effect may be real but not overcome the Null Hypothesis because it is too trivial for the effect to be observable. Human activities have some effect on global temperature for several reasons. An example of an anthropogenic effect on global temperature is the urban heat island (UHI). Cities are warmer than the land around them, so cities cause some warming. But the temperature rise from cities is too small to be detected when averaged over the entire surface of the planet, although this global warming from cities can be estimated by measuring the warming of all cities and their areas.

Clearly, the Null Hypothesis decrees that UHI is not affecting global temperature although there are good reasons to think UHI has some effect. Similarly, it is very probable that AGW from GHG emissions is too trivial to have observable effects.

The feedbacks in the climate system are negative and, therefore, any effect of increased CO2 will be probably too small to discern because natural climate sensitivity is much, much larger. This concurs with the empirically determined values of low climate sensitivity.

Indeed, because climate sensitivity is less than 1.0 deg C for a doubling of CO2 equivalent, it is physically impossible for the man-made global warming to be large enough to be detected (just as the global warming from UHI is too small to be detected). If something exists but is too small to be detected then it only has an abstract existence; it does not have a discernible existence that has effects (observation of the effects would be its detection).

To date there are no discernible effects of AGW. Hence, the Null Hypothesis decrees that AGW does not affect global climate to a discernible degree. That is the ONLY scientific conclusion possible at present.

The paper from Marcott et al. does not change this in any way. This is because it compares two different data sets. One data set only shows the period of recent rise of global temperature and the other data set does not – and cannot – show the recent rise or any similar rises which may have happened in the past.

… The data was not sampled at 300-yr intervals, but represents averages over 300 year windows.

You can do an experiment to see for yourself what Leif is talking about.

In your favourite spreadsheet, create numbers from 1 to 2000 in column A.
In cell B1 put the formula "=sin(A1/10)". Copy that to all the cells down to B2000.
In cell C1 put the formula "=sin(A1/100)". Copy that to all the cells down to C2000.
In cell D1 put the formula "=B1 + C1". Copy that to all the cells down to D2000.
In cell E101 put the formula "=sum(D1:D100)/100". Copy that to all the cells down to E2000.
Create a line graph for column D. That will be the raw signal.
Create a line graph for column E. That will be the signal averaged over 100 samples.

You can see how the signal with the shorter period is attenuated. You can play around with the divisor for the signal in column B. If you reduce the divisor to 1, the high frequency signal disappears from the graph of column E.

What the experiment demonstrates is that a running average is a low pass filter.

You can add a spike to the signal in column B to see how the running average affects it. In cell B1000 change the formula to “=sin(A1000/10) + 1” and copy it to the cells down to B1100. Compare the graphs for columns D and E to see how the spike is attenuated. In this case, the spike was the same width as the window. Shorter duration spikes are more attenuated.

The question is: If you had only the data in column E, what could you say about the signals that produced it? In this case, because we have a well behaved signal, we could apply an inverse filter and have a pretty good idea of what the original signals were. In the case of climate data … not so much. ;-)
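The spreadsheet experiment can also be run as a short script. Here is a sketch in Python with NumPy (a stand-in for columns A–E above; the component signals are also filtered separately so the attenuation can be read off directly):

```python
import numpy as np

n = np.arange(1, 2001)        # column A: sample index 1..2000
high = np.sin(n / 10)         # column B: high-frequency signal (period ~63 samples)
low = np.sin(n / 100)         # column C: low-frequency signal (period ~628 samples)
signal = high + low           # column D: raw combined signal

window = 100                  # column E: 100-sample running average
kernel = np.ones(window) / window
avg = np.convolve(signal, kernel, mode='valid')

# Filter each component separately to see what the low-pass filter does to it
high_avg = np.convolve(high, kernel, mode='valid')
low_avg = np.convolve(low, kernel, mode='valid')
print("high-frequency peak after averaging:", np.abs(high_avg).max())
print("low-frequency peak after averaging:", np.abs(low_avg).max())

# Add a spike the same width as the window (the B1000:B1100 change above)
spiked = signal.copy()
spiked[999:1100] += 1
spiked_avg = np.convolve(spiked, kernel, mode='valid')
```

Plotting `signal` against `avg` reproduces the D-versus-E comparison: the short-period component survives the average at only a fraction of its original amplitude, the long-period component passes almost untouched, and the spike comes through smeared and damped.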

benfrommo says:
April 4, 2013 at 12:23 am
————————————————–
I concur. I believe the questions jumped order without being answered.
1. Are the earth’s atmosphere and waters warming unnaturally?
2. Is the cause man’s (un-exhausting, disgusting, vile) discharge of CO2?
3. What other catastrophes are (maybe, might, could be) caused by man’s (mindless) activities?

Scientists and NGOs went straight to the government with #3 demanding man stop using fossil fuels and give them lots of money to prove why.

As a boy I learned the earth has had periods of ice and no ice.
Seems to me the earth’s temperature has been here before.
I don’t think they know how we got here in the past.
First, let’s make sure we get question #1 right.

This is why I’m skeptical.
I lack trust based on past and current activities.
I’m pretty sure everyone else has their reasons for believing or not.
I’ve made it through many scientific/government scares, it’s their M.O.
It’s not that the temperature is acting abnormal….
It’s the scientists, NGOs and government are acting normal.
cn

I am glad to see that you know Galileo. He is the creator of scientific method in the sense that he was the first to clearly formulate it and to clearly apply it. Kepler’s work exhibits an understanding of scientific method but Kepler produced no formulation of it. Climate science needs its own Galileo.

Nancy, the number of proxies in Marcott’s study drops off sharply towards the end so the last spike is rather uncertain, except we happen to know from the instrumental record that there is a temperature increase there.

Richard, we have plenty of evidence that CO2 is a significant greenhouse gas that changes the temperature, even if people on this blog do their best to deny that fact. Your null hypothesis regarding CO2 is not viable. You are just using it as a way of shifting all burden of evidence away from what you believe.

Your statement “There is only one Null Hypothesis: i.e. it has to be assumed a system has not changed unless it is observed that the system has changed.” is rather ironic given Nancy’s belief that there ought to have been earlier spikes in temperature even if we can’t observe them. On the other hand, we have observed both a sharp increase in CO2 and in temperature over the last century. That’s why your null hypothesis fails.

Could you also explain what the difference is between “climate sensitivity” that you claim, based on a very biased selection of papers, is low and “natural climate sensitivity” that you claim is much higher? A more unbiased selection of papers tends to give a climate sensitivity around 2.5-3 degrees. There is a certain chutzpah in your linking to Lindzen’s paper from Spencer’s site given what Spencer has to say about it: http://www.drroyspencer.com/2009/11/some-comments-on-the-lindzen-and-choi-2009-feedback-study/

I think this will be enough from me for now. As wte9 put it the idea here is to “wallop” any dissenters anyway, proving you are right not by strength of your arguments but by numbers. A local consensus.

Thomas says: April 4, 2013 at 6:10 am
“Benfrommo: ‘But that is the way science works. Science ASSUMES that the null hypothesis is true until it is PROVEN false.’

In this case it is proven false. Our CO2-emissions are clear as is the rapidly rising concentration in the atmosphere. Your “null hypothesis” would have to pretend industrialization never happened or that CO2 isn’t a greenhouse gas….”

I rather thought we were discussing temperature spikes (and perhaps their possible causes). I don’t think anyone is disagreeing that atmospheric CO2 levels have increased.

Are you stating here that you consider the 20th century temperature spike ‘proven’ because you know CO2 levels have risen? If so, you would seem to have missed a couple of crucial steps in the scientific process.

well…. went old school with the rolodex reference but they don’t even have to use their iPhone to get the word out. The fellow above is the Publisher Emeritus of The Tennessean and father of John Seigenthaler Jr., formerly of NBC and AP, and also the Robert F. Kennedy Book Awards for the RFK Center for Justice and Human Rights and chairman emeritus of the annual Profile in Courage Award selection committee of the John F. Kennedy Library Foundation. Seigenthaler is a member of the board of directors of the Howard H. Baker Jr. Center for Public Policy at the University of Tennessee at Knoxville.

His most enduring contribution to bunging up the sciences that many of you love is his decades-long mentorship of Robert Kennedy Jr. and Albert Gore Jr. Marcott is a press release.

“Though scientists have known for many years, based on Antarctic ice cores, that temperature and CO2 were linked over the Ice Ages, establishing a clear cause-and-effect relationship has remained difficult. In fact, when studied closely, the ice-core data indicate that CO2 levels rose after temperatures were already on the increase, a finding that has often been used by global warming skeptics to bolster claims that greenhouse gases do not contribute to climate change.

Many climate scientists have addressed the criticism and shown that the lag between temperature and CO2 increases means that greenhouse gases were an amplifier, rather than trigger, of past climate change, but Shakun and his colleagues saw a larger problem” …. and here’s the UCAR contribution to the press campaign elevating Shakun and mis-representing the views held by many of you.

did I mention Seigenthaler’s leadership position with the Nieman Fellowship at Harvard, or would that be gilding the lily?

Also, the statement that spikes similar to the modern one “should appear” with higher resolution is not at all reasonable. A mechanism to produce such a spike is not known to science. You’re just assuming it’s there because your world view demands that it is.

Big volcanic eruptions are known to science to produce global cooling. Huge volcanic eruptions are known from sediment to have occurred in the past. (One commenter upthread mentioned one in Alaska that occurred around 1600 BC.) But they don’t show up in Marcott. It’s reasonable to assume that they should.

The rapid warming in the Younger Dryas is known to have occurred, despite “A mechanism to produce such a spike is not known to science.” So what? Science knows very little about the climate system and what makes it change.

Richard, you are right that empiricism trumps all assumptions. Can you please point out the empiricism in this post? What did Anthony test and what methods did he use? Oh, right. None.
As far as trying to pull a null out of thin air and demanding that authors either disproved it or must accept it…that is nonsense. I can’t take a study about pollen charge and use it to demand that my null about alligator gestation must be accepted.

This is the Vostok graph I used to overlay on Marcott. It seems to show much more variation than the Vostok record overlaid in red accompanying this article.

Mike McMillan says: April 4, 2013 at 10:11 am

Are the large temperature upticks caused by UHI in the last century matched by UHI increases earlier in the Holocene during similar periods of massive global urbanization and industrialization?

Aaah. Good question, justasking.

I think we can safely come to the conclusion that UHI was not significant earlier in the Holocene. (evidence … complete absence of any evidence of “urban” anything, or any sign of massive global urbanization and industrialization ).

Therefore any earlier temperature spikes would have great significance, as we can perhaps fairly confidently assume that any such earlier temperature spikes, if they existed, very likely had other causes. Which would call into question somewhat the degree to which atmospheric CO2 may have contributed to 20th century warming.

“Nancy, the number of proxies in Marcott’s study drops off sharply towards the end so the last spike is rather uncertain, except we happen to know from the instrumental record that there is a temperature increase there.”

The point is that the proxy data is of such low resolution that it could not show any spikes even if they occurred. Go back to the top and look at Charles the moderator’s plot. See all those spikes? These could well have occurred, but Marcott’s data would never show it.

Look at the plot again. The current “instrument” spike doesn’t look unusual by comparison does it?

Regarding the graphs and discussion of Vostok, should we keep in mind that these are local temperatures but not global? After all, local temperatures can change rapidly, like yesterday to today at my house. Because warm air or water move around. I don’t see how global temperatures can do something similar.

“throw all the paleo data in the trash bin and you still know from physics that CO2 will warm the planet. Nothing can change that. Not paleo records, not modern records.”

Yes, but… This is the same thought that gets all the CAGW true believers in trouble. You, as a lukewarmer, only say ‘warm’, which is good since what we know from physics only supports limited warming. As opposed to what others claim to ‘know’. Not to mention that other physics processes may limit this warming even further.

@Steven Mosher says: April 4, 2013 at 7:41 am “paleo arguments about unprecedented are stupid on both sides. throw all the paleo data in the trash bin and you still know from physics that CO2 will warm the planet.”

Steven, you obfuscate as usual (I will add to markx).
The contentious issue is anthropogenic CO2. Show us the proof of anthropogenic CO2 warming, and what percentage of the warming may be attributed to it?

“Richard, we have plenty of evidence that CO2 is a significant greenhouse gas that changes the temperature, even if people on this blog do their best to deny that fact.”

I can grant you that increases in manmade CO2 concentrations cause increases in temperature. But then the question is whether those changes are trivial. What is the magnitude of temperature change caused by manmade CO2 alone? How does it compare to the magnitude of the temperature change caused by an increase in water vapor alone? You don’t know, do you?

“Your null hypothesis regarding CO2 is not viable. You are just using it as a way of shifting all burden of evidence away from what you believe.”

What is his null hypothesis? Can you state it?

As I understand the Null Hypothesis, it is that temperature fluctuations are natural and that the magnitude of such fluctuations can explain just about all of the warming in the last 100 years. To falsify the Null Hypothesis, which is what you want to do, you must show that the warming in the last 100 years cannot be explained by natural fluctuations. That is going to be difficult because the decade of the Thirties was just as warm as the last decade. That conclusion about the Thirties will become universally accepted now that Hansen is no longer the manager of the temperature data.

You have bet on the claim that only increases in manmade CO2 can explain the non-trivial increases in temperature during the last 100 years. A moment’s reflection will reveal that your bet is a loser.

moshe, if the house burns down regularly (alternating climate optima and minima), how do you know it was an arsonist this time? Better, maybe your nasty arsonist just happened to be in proximity, but hadn’t lit a match. Correlation does not prove causation, and your faith in proof may be hanging an innocent man. There are plenty suffering today through this miscarriage of justice and science. Time for the appeal.

Others have pointed out the problem of your absolute expansion of lab results to the real world.

You’ve dodged around Muller’s attribution. In your heart, do you believe it?

I am answering your comments addressed to me in your post at April 4, 2013 at 9:57 am.

In my post at April 4, 2013 at 9:15 am I tried to explain a basic scientific principle in sufficiently clear language for you to understand it.

There is no possibility of explaining anything to somebody who chooses not to understand. And your reply to me says you prefer superstition to science. Therefore, I would have ignored your reply were it not for you asking me some specific questions.

I am bothering to answer your reply because – although I recognise you choose not to learn from reason – I would not want you to convince yourself that I am stumped by your questions.

You say to me

Richard, we have plenty of evidence that CO2 is a significant greenhouse gas that changes the temperature, even if people on this blog do their best to deny that fact. Your null hypothesis regarding CO2 is not viable. You are just using it as a way of shifting all burden of evidence away from what you believe.

Your statement “There is only one Null Hypothesis: i.e. it has to be assumed a system has not changed unless it is observed that the system has changed.” is rather ironic given Nancy’s belief that there ought to have been earlier spikes in temperature even if we can’t observe them. On the other hand, we have observed both a sharp increase in CO2 and in temperature over the last century. That’s why your null hypothesis fails.

Could you also explain what the difference is between “climate sensitivity” that you claim, based on a very biased selection of papers, is low and “natural climate sensitivity” that you claim is much higher? A more unbiased selection of papers tends to give a climate sensitivity around 2.5-3 degrees. There is a certain chutzpah in your linking to Lindzen’s paper from Spencer’s site given what Spencer has to say about it: http://www.drroyspencer.com/2009/11/some-comments-on-the-lindzen-and-choi-2009-feedback-study/

There is absolute and certain evidence that CO2 is a greenhouse gas, but there is no evidence – none, zilch, nada – that an increase to atmospheric CO2 from present levels will have any discernible effect on global temperature. And there is evidence that it cannot have a discernible effect. I explained one of the reasons for this to you in my post (i.e. negative feedbacks), but you say you prefer your superstitious belief to scientific evidence. I accept scientific evidence whether I like what it indicates or not.

You really, really do insist on misunderstanding when you talk about [my] Null Hypothesis. The scientific method decrees the Null Hypothesis; not me, not you, and not anybody else. Were you not blinded by your superstition then you would have read that I wrote

It is a rejection of the scientific method to assert that one can “choose” any subjective Null Hypothesis one likes. There is only one Null Hypothesis: i.e. it has to be assumed a system has not changed unless it is observed that the system has changed.

There is nothing “ironic” in my telling you about this because – contrary to the delusion created by your superstition – Nancy’s analogy EXPLAINS an implication of the Null Hypothesis.

And her analogy does not say anything “ought” to be: it points out that something probably – but not certainly – IS.

And words fail me in expressing my astonishment that anybody would write as you do

On the other hand, we have observed both a sharp increase in CO2 and in temperature over the last century. That’s why your null hypothesis fails.

That is three logical fallacies in two sentences.
1.
Correlation is not causation.
“We have observed both the milk turning sour and an increase in witches in the last century”.
2.
‘Argument from ignorance’ is a classical logical fallacy.
It is superstitious nonsense to ascribe whatever cause you want (be it CO2 or witches) merely because you don’t know the true cause of an observed effect.
3.
And the Null Hypothesis would have been refuted if a clearly observed effect were that CO2 rise causes global temperature rise.
But that is NOT a clearly observed effect: CO2 has continued to rise while global temperature has NOT risen for at least the last 16 years.
Also, atmospheric CO2 concentration change follows temperature change at all time scales: a cause cannot follow its effect (without use of a time machine).

There is no “bias” in my “selection of papers”.
Those are the papers which – as I said – provide empirical derivations of climate sensitivity. They use very different methods, each analyses a different data set, and they each deduce climate sensitivity is ~0.4 deg.C for a doubling of atmospheric CO2.

And I don’t have a clue what you mean when you write
“Could you also explain what the difference is between “climate sensitivity” that you claim, based on a very biased selection of papers, is low and “natural climate sensitivity” that you claim is much higher?”
What “natural climate sensitivity” that [I] claim is much higher?
I mentioned no such thing!

You say Spencer has some dispute with one of the papers I cited. Good, that is how science is done. Evidence is challenged.

And you are further deluded when you claim I have “chutzpah” by citing scientific papers in refutation of your superstition. You are entitled to your superstition, but your superstition does NOT give you the right to deride my respect for science, the scientific method, and the findings of science.

Thomas says:
April 4, 2013 at 9:57 am
. . . .
“I think this will be enough from me for now. As wte9 put it the idea here is to “wallop” any dissenters anyway, proving you are right not by strength of your arguments but by numbers. A local consensus.”

Dear Lord, Thomas. The “wallop” I mentioned clearly referred to those commenters who had previously refuted your *premise,* which is demonstrably incorrect. No one here advanced any argument by consensus. Our argument isn’t made any stronger or weaker because more than one individual told you you were wrong. If you have a problem with people taking your flawed reasoning to task, then don’t comment here. You are quite thin-skinned.

It’s really remarkable how you try to twist the scientific method and logic on its head to support your case. You incredibly wrote: “Your statement ‘There is only one Null Hypothesis: i.e. it has to be assumed a system has not changed unless it is observed that the system has changed.’ is rather ironic given Nancy’s belief that there ought to have been earlier spikes in temperature even if we can’t observe them. On the other hand, we have observed both a sharp increase in CO2 and in temperature over the last century. That’s why your null hypothesis fails.”

First, Nancy doesn’t have a belief as to earlier spikes as you assert, she has a testable hypothesis. Conflating the two is inexcusable. Second, you run afoul of post hoc ergo propter hoc. The null hypothesis doesn’t change because there is a correlation between temperature and CO2 during part of the 20th Century. That the two have, at times, risen simultaneously proves nothing. We’re saying you haven’t proven your case about a causal relationship between CO2 and temperature and that to robustly do so you need to show that similar spikes haven’t happened at other points in the Earth’s history independent of CO2 and/or man’s influence. Surely you acknowledge that the CO2 hypothesis would at least be open to criticism if that had, in fact, happened?

Bottom line: There is no burden shifting, you still have to prove your assertion with data.

@Vince Causey 11:13am: “The point is that the proxy data is of such low resolution that it could not show any spikes even if they occurred”

Perhaps all we see are the spikes. The proxies show the envelope of maximum temperatures.
Life is a very non-linear process. Why should we believe that whatever the proxy is measuring is the average temperature, or even can be correlated to average temp? From a biological process point of view, we might be better correlated to the maximum weekly temperature of the year.

IF the proxies measure temperature to any significant correlation, what calibration do we have that they are closer to average than max?

rogerknight, you are correct that volcanoes produce cooling, but only for a few years, much shorter than what we are talking about here, and there is as far as I know no mechanism to get similar warming spikes.

markx: “Are you stating here that you consider the 20th century temperature spike ‘proven’ because you know CO2 levels have risen?”

No, I consider it proven because we have observed it.

Theo: “That is going to be difficult because the decade of the Thirties was just as warm as the last decade.” No temperature reconstruction shows anything even close to that.

“To falsify the Null Hypothesis, which is what you want to do, you must show that the warming in the last 100 years cannot be explained by natural fluctuations.”

Now, this is a fundamental misunderstanding of how science works. You can *never* prove that something can’t be caused by some unknown process. Maybe gravity doesn’t exist and it’s just angels pushing the planets around. How do you disprove that? In science you create different hypotheses about how something works, calculate what predictions they make, and make measurements to see which hypothesis works best. It may not be perfect, but until you find something better you work with the theory you have, trying to find the boundaries for where it is useful.

Richard, repeating a statement like “but there is no evidence – none, zilch, nada – that an increase to atmospheric CO2 from present levels will have any discernible effect on global temperature” doesn’t make it true. Repeating claims about negative feedbacks doesn’t make them true. You didn’t “explain” about negative feedbacks, you asserted them, contrary to mainstream science.

“Nancy’s analogy EXPLAINS an implication of the Null Hypothesis.” No, Nancy claims that you should expect warming spikes in the absence of either measurements or theoretical reason to see them, the absolute opposite of the null hypothesis as you define it.

“Correlation is not causation.” Yes, I know that. I considered pointing it out but assumed that no one would be stupid enough to misunderstand me. I apologize for my mistake. Go back to your post from 9:05 where you tried to trivialize the observed warming in order to get your null hypothesis to fit. It doesn’t: we have observed warming. That was my entire point. You can’t use the null hypothesis to “disprove” AGW the way you tried to do.

“It is superstitious nonsense to ascribe whatever cause you want (be it CO2 or witches) merely because you don’t know the true cause of an observed effect.”

Now you are being silly. Surely you are aware that we have had good scientific reasons to believe CO2 causes warming for well over a century. It’s not a matter of what I want but of what existing science tells us.

“CO2 has continued to rise while global temperature has NOT risen for at least the last 16 years.”

Actually we do have a positive trend over 16 years, and climate models show that you should expect periods with more and less warming even if in the longer term there is a steady trend. There is internal variation in the climate system.

Historically CO2 has acted as a positive feedback, amplifying e.g. the ice age cycles. There is unfortunately no good historical analogy to our current large-scale burning of fossil fuels. It’s terra incognita, an interesting scientific experiment, although somewhat risky to experiment on the planet we live on.

“There is no “bias” in my “selection of papers”.” Of course there is. I have no idea how many papers there are trying to estimate climate sensitivity, but they must be in the hundreds, and you pick three of the ones with the lowest sensitivity. Check the references in the IPCC report for other papers.

“What ‘natural climate sensitivity’ that [I] claim is much higher? I mentioned no such thing!”

You could have gone back and read what you wrote: “The feedbacks in the climate system are negative and, therefore, any effect of increased CO2 will probably be too small to discern because natural climate sensitivity is much, much larger.”

I’m very surprised an astronomer hasn’t chipped in here to show that the analogy is a bad one. The resolution of telescopes still isn’t good enough to view extrasolar planets directly. What we have is indirect evidence such as the transit of a planet making a momentary dip in the light being received from that star.

Also, the statement that spikes similar to the modern one “should appear” with higher resolution is not at all reasonable. A mechanism to produce such a spike is not known to science. You’re just assuming it’s there because your world view demands that it is.

Thomas says:
April 4, 2013 at 12:37 pm
…
——–
There’s so much wrong in what you said that I’m having a hard time deciding what to run with. Maybe a reset is in order, since I count 10 separate points in your last post. Of all the rubbish you’ve been arguing, what do you consider the most important point, or maybe the most important two or three?

Thomas says:
April 4, 2013 at 12:37 pm
Well, he says a lot but I think it can be summarised as:
1 We know that CO2 causes warming
2 We know that CO2 was released by industrialisation in the twentieth century
3 We know that the world warmed in the 20th century
Therefore,
4 Industrialisation caused the warming
He also implies that that is bad.

However, the challenges remain:
Point 1: How much warming for CO2? Is it a constant amount? If it can pause for a generation is it significant? How do you measure it?
Point 2: How much of the CO2 rise is from industrialisation? How much is due to changes in the natural balance? Why do ice cores show CO2 levels following global temperature by a few centuries but the effect of the medieval warm period is not counted by any model? And what would that effect be?
Point 3: No-one disputes the world warmed, but not in any way that can be related to CO2 rises alone. Something else must be significant, but what (the weather)? As we all agree that the effects of CO2 can be overwhelmed in the twenty-first century (for the moment at least), why would that not continue?

Point 4 can only be answered with, “So what?” If it’s worth answering at all.

However, Thomas makes it clear that he starts from the premise that AGW is proven and any contrary evidence should be discounted. It is no surprise that he has trouble following the scientific method. Empirical evidence will be contaminated by a biased selection and theoretical speculation (models) if you already know the truth.

That may be why he cannot doubt the models regardless of what actually happens.

It would require writing a book to correct all the errors in your comments addressed to me in your post at April 4, 2013 at 12:37 pm. However, one of the difficulties results from a typing error which I made and failed to see.

I will quote each of your points in turn and give brief answers. And I will state my error which caused confusion.

Richard, repeating a statement like “but there is no evidence – none, zilch, nada – that an increase to atmospheric CO2 from present levels will have any discernible effect on global temperature” doesn’t make it true. Repeating claims about negative feedbacks doesn’t make them true. You didn’t “explain” about negative feedbacks, you asserted them, contrary to mainstream science.

It is not possible for me to prove a negative, but it would be very simple for you to prove me wrong if I were. All you need to do is provide one single solitary piece of evidence “that an increase to atmospheric CO2 from present levels will have any discernible effect on global temperature.”

You cannot provide such evidence because there is none.
Three decades of research conducted world-wide and costing over $5 billion p.a. has failed to find any. If you find some such evidence then publish it and get a Nobel Prize. Santer tried to pretend he had found some in the 1990s but his shenanigans were rapidly exposed.

I did NOT “assert” negative feedbacks.
I cited the evidence for negative feedbacks and I provided references and links to the pertinent papers.
The Stefan-Boltzmann (SB) derivation of climate sensitivity with no feedbacks is ~1 deg.C for a doubling of CO2 equivalent. The empirical measurements each provide an indication of climate sensitivity which is about half that.
Quod erat demonstrandum. The feedbacks are measured to be negative.
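For reference, the no-feedback figure quoted here is the standard back-of-envelope Stefan-Boltzmann estimate (textbook values, not taken from the papers cited in this thread):

\[
F = \sigma T^4 \;\Rightarrow\; \frac{dF}{dT} = 4\sigma T^3 = \frac{4F}{T},
\qquad
\lambda_0 = \frac{dT}{dF} = \frac{T}{4F} \approx \frac{255\ \mathrm{K}}{4 \times 240\ \mathrm{W\,m^{-2}}} \approx 0.27\ \mathrm{K\,W^{-1}\,m^{2}}
\]
\[
\Delta T_{2\times} \approx \lambda_0\,\Delta F_{2\times} \approx 0.27 \times 3.7\ \mathrm{W\,m^{-2}} \approx 1\ \mathrm{deg.C}
\]

where $T \approx 255$ K is the effective emission temperature, $F \approx 240$ W/m² the outgoing flux, and $\Delta F_{2\times} \approx 3.7$ W/m² the canonical forcing for a CO2 doubling.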

“Nancy’s analogy EXPLAINS an implication of the Null Hypothesis.” No, Nancy claims that you should expect warming spikes in the absence of either measurements or theoretical reason to see them, the absolute opposite of the null hypothesis as you define it.

No! Clearly, you are as mystified by logic as you are by science.
This issue was explained to you by wte9 at April 4, 2013 at 11:45 am but you either did not read his explanation or have been incapable of understanding it. So, I will try to spell it out for you.
1.
Marcott provides a time series which cannot resolve ‘spikes’ similar to the recent global temperature rise because it lacks sufficient temporal resolution.
2.
The recent ‘spike’ is provided by the existing system.
3.
The Null Hypothesis says the system has to be assumed to be unchanged unless there is evidence of a change.
4.
Therefore, the Null Hypothesis provides a testable hypothesis which Nancy’s analogy explains; viz. if the system has not changed, then ‘spikes’ similar to the recent rise in global temperature would be observed in a time series with sufficient temporal resolution to observe them.

“Correlation is not causation.” Yes, I know that. I considered pointing it out but assumed that no one would be stupid enough to misunderstand me.

Say what!?
I quoted you verbatim. You said

On the other hand, we have observed both a sharp increase in CO2 and in temperature over the last century. That’s why your null hypothesis fails.

I fail to understand any possibility of that meaning other than you were claiming that correlation of those two parameters showed the system has changed (i.e. “Null Hypothesis fails”) and I don’t understand how that can be anything other than an assertion of causality.

“I apologize for my mistake. Go back to your post from 9:05 where you tried to trivialize the observed warming in order to get your null hypothesis to fit. It doesn’t: we have observed warming. That was my entire point. You can’t use the null hypothesis to “disprove” AGW the way you tried to do.”

That is such a mish-mash of illogical and untrue twaddle that I am tempted to let it stand because it defeats itself. However, it contains falsehoods so I will address those.

I did NOT try to “trivialize the observed warming”. There is no need to because it is trivial.
Over the twentieth century mean global temperature rose about 0.8 deg.C.
Each year mean global temperature rises by 3.8 deg.C from January to June and falls by 3.8 deg.C from June to January. So, the warming which you want to exaggerate is about a fifth of the rise experienced during 6 months of each year.

“It is superstitious nonsense to ascribe whatever cause you want (be it CO2 or witches) merely because you don’t know the true cause of an observed effect.”
Now you are being silly. Surely you are aware that we have had good scientific reasons to believe CO2 causes warming for well over a century. It’s not a matter of what I want but of what existing science tells us.

Twaddle.
I pointed out – with example – the fallacy of ascribing cause without evidence and you claim “science” says we should adopt that fallacy.
IT DOES NOT. SCIENCE SAYS WE SHOULD REJECT THE FALLACY.
I repeat, there is no evidence which uniquely identifies atmospheric CO2 concentration as being responsible for any of the recent rise in mean global temperature.

” CO2 has continued to rise while global temperature has NOT risen for at least the last 16 years.”
Actually we do have a positive trend over 16 years, and climate models show that you should expect periods with more and less warming even if there is a steady trend in the longer term. There is internal variation in the climate system.

It seems you really do want to show you can bat 100% wrong.

If one wants to know how long it has been since there was any discernible global warming at 95% confidence then one has to start from now – any other date is ‘cherry picking’ – and work back in time. Then one has to determine if the global temperature trend differs from zero at the low confidence level of 95% which is used by ‘climate science’.

It turns out that – depending on which time series is analysed – the time of no recent discernible global warming at 95% confidence is between 16 and 23 years. In other words, discernible global warming stopped at least 16 years ago.
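For readers who want to see what such a trend test involves, here is a minimal sketch in plain Python: ordinary least squares on a time series, with an approximate 95% confidence interval on the slope. The data below are synthetic noise, purely for illustration; a real analysis of monthly anomalies would also have to widen the interval to account for autocorrelation.

```python
import math, random

def trend_with_ci(y, dt=1/12):
    """Ordinary least squares slope of y against time (in years),
    with an approximate 95% confidence interval on the slope."""
    n = len(y)
    t = [i * dt for i in range(n)]
    tbar = sum(t) / n
    ybar = sum(y) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    sxy = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
    slope = sxy / sxx
    intercept = ybar - slope * tbar
    resid = [yi - (intercept + slope * ti) for ti, yi in zip(t, y)]
    s2 = sum(r * r for r in resid) / (n - 2)   # residual variance
    se = math.sqrt(s2 / sxx)                   # standard error of the slope
    half = 1.96 * se                           # normal approximation (n >> 30)
    return slope, slope - half, slope + half

# 16 years of synthetic monthly "anomalies": no underlying trend, just noise
random.seed(0)
y = [random.gauss(0.0, 0.1) for _ in range(16 * 12)]
slope, lo, hi = trend_with_ci(y)
print(f"slope = {slope:+.5f} per year, 95% CI [{lo:+.5f}, {hi:+.5f}]")
```

If the printed interval straddles zero, the trend is not discernible from zero at that confidence level, which is the test being described above.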

This finding refutes the AGW hypothesis as exemplified by global climate models.

” The multi-model average warming for all radiative forcing agents held constant at year 2000 (reported earlier for several of the models by Meehl et al., 2005c), is about 0.6°C for the period 2090 to 2099 relative to the 1980 to 1999 reference period. This is roughly the magnitude of warming simulated in the 20th century. Applying the same uncertainty assessment as for the SRES scenarios in Fig. 10.29 (–40 to +60%), the likely uncertainty range is 0.3°C to 0.9°C. Hansen et al. (2005a) calculate the current energy imbalance of the Earth to be 0.85 W m–2, implying that the unrealised global warming is about 0.6°C without any further increase in radiative forcing. The committed warming trend values show a rate of warming averaged over the first two decades of the 21st century of about 0.1°C per decade, due mainly to the slow response of the oceans. About twice as much warming (0.2°C per decade) would be expected if emissions are within the range of the SRES scenarios.

In other words, it was expected that global temperature would rise at an average rate of “0.2°C per decade” over the first two decades of this century, with half of this rise being due to atmospheric GHG emissions which were already in the system.

This assertion of “committed warming” should have had large uncertainty because the Report was published in 2007 and there was then no indication of any global temperature rise over the previous 7 years. There has still not been any rise and we are now way past the half-way mark of the “first two decades of the 21st century”.

So, if this “committed warming” is to occur such as to provide a rise of 0.2°C per decade by 2020 then global temperature would need to rise over the next 7 years by about 0.4°C. And this assumes the “average” rise over the two decades is the difference between the temperatures at 2000 and 2020. If the average rise of each of the two decades is assumed to be the “average” (i.e. linear trend) over those two decades then global temperature now needs to rise before 2020 by more than it rose over the entire twentieth century. It only rose ~0.8°C over the entire twentieth century.
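The arithmetic of the endpoint interpretation above can be laid out in a few lines. This is a sketch of the commenter’s reasoning only, not an endorsement of either interpretation of “average”:

```python
# The "committed warming" arithmetic: endpoint interpretation
rate = 0.2                     # °C per decade, projected average for 2000-2020
total_rise = rate * 2.0        # 0.4 °C expected between 2000 and 2020
observed_by_2013 = 0.0         # the commenter's premise: no rise observed so far
years_left = 2020 - 2013
needed = total_rise - observed_by_2013
print(f"{needed:.1f} °C needed in the remaining {years_left} years")
# prints "0.4 °C needed in the remaining 7 years"
```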

Simply, the “committed warming” has disappeared (perhaps it has eloped with Trenberth’s ‘missing heat’?).

I add that the disappearance of the “committed warming” is – of itself – sufficient to falsify the AGW hypothesis as emulated by climate models. If we reach 2020 without any detection of the “committed warming” then it will be 100% certain that all projections of global warming are complete bunkum.

Historically CO2 has acted as a positive feedback, amplifying e.g. the ice age cycles. There is unfortunately no good historical analogy to our current large scale burning of fossil fuels. It’s terra incognita, an interesting scientific experiment, although somewhat risky to experiment on the planet we live on.

Oh dear! Wrong again.
Perhaps CO2 did act as a positive feedback in geological times but – if it did – then
(a) that does not affect my point in any way
and
(b) that feedback was so small that it failed to stop temperature rising and falling because (according to the Vostok ice core) the delay of CO2 reversal was typically 800 years after each temperature reversal.

Also, that is the longest time scale and I said “at all time scales”.
At the shortest time scale CO2 follows temperature by 5 months. This was first discovered in 1990 by Kuo, Lindberg & Thomson
(ref. Cynthia Kuo, Craig Lindberg & David J. Thomson “Coherence established between atmospheric carbon dioxide and global temperature” Nature 343, 709 – 714 (22 February 1990) )
This has been independently confirmed by several others since and the subsequent studies have revealed that the time of the delay of CO2 after temperature varies with latitude.
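The kind of lagged-correlation analysis behind such results can be illustrated with a toy example: scan candidate lags and keep the one that maximizes the correlation between two series. Everything below is synthetic (sine waves standing in for temperature and CO2, a 5-sample shift); it shows the technique only, not the actual method of Kuo, Lindberg & Thomson.

```python
import math

def best_lag(x, y, max_lag):
    """Return the lag (in samples) at which y correlates best with x;
    a positive result means y lags x."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
        da = math.sqrt(sum((ai - ma) ** 2 for ai in a))
        db = math.sqrt(sum((bi - mb) ** 2 for bi in b))
        return num / (da * db)
    return max(range(max_lag + 1),
               key=lambda k: corr(x[:len(x) - k], y[k:]))

# Synthetic stand-ins: 'co2' is 'temp' delayed by exactly 5 samples
temp = [math.sin(2 * math.pi * i / 60) for i in range(600)]
co2 = [math.sin(2 * math.pi * (i - 5) / 60) for i in range(600)]
print(best_lag(temp, co2, 12))  # 5
```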

Atmospheric CO2 was much higher than now throughout most of the time since the Earth has had an oxygen rich atmosphere. Indeed, plants grow better in higher atmospheric CO2 concentrations because they evolved when CO2 was higher. Horticulturists usually try to keep CO2 concentration in their greenhouses at ~1,000 ppm. Pumping CO2 into the greenhouses costs money but the obtained plant growth more than compensates for this cost.

Burning fossil fuels returns to the carbon cycle the CO2 that was removed from it by the plants from which the fossil fuels were formed.

The Deccan Traps released CO2 at a rate at least as great as the burning of fossil fuels.

We know exactly what will happen if we return CO2 to the atmosphere: plant life and everything up the food chain from plants will gain the benefits which they had until the carbon was sequestered as fossil fuels.

“There is no ‘bias’ in my ‘selection of papers’.” Of course there is. I have no idea how many papers there are trying to estimate climate sensitivity, but they must number in the hundreds, and you pick three of the ones with the lowest sensitivity. Check the references in the IPCC report for other papers.

I object to your misrepresentation of what I said. I cited the EMPIRICAL derivations of climate sensitivity. That is NOT bias. Choosing fiddle factors used in computer models over the empirical data is bias.

“What ‘natural climate sensitivity’ that [I] claim is much higher? I mentioned no such thing!”
You could have gone back and read what you wrote: “The feedbacks in the climate system are negative and, therefore, any effect of increased CO2 will be probably too small to discern because natural climate sensitivity is much, much larger.”

Ouch!
Thank you. I mistyped and failed to see my mistake.
I apologise for that error and thank you for pointing it out.
Having made the error I ‘read’ what I intended to type which was
“The feedbacks in the climate system are negative and, therefore, any effect of increased CO2 will be probably too small to discern because natural climate VARIABILITY is much, much larger.”

This was a clear typing error by me and demonstrates my poor ability at proof-reading my own words. I apologise for the resulting confusion.

We all make mistakes. It is important to acknowledge when we have made them. Perhaps you can try to do it, too?

dr. lumpus spookytooth, phd. says:
April 4, 2013 at 6:40 am
Leif, I will give you this: you state more personal opinions than almost anyone.
Better than stating somebody else’s opinions, don’t you think?

Steve Keohane says: April 4, 2013 at 11:04 am
This is the Vostok graph I used to overlay on Marcott. It seems to show much more variation than the Vostok record overlaid in red accompanying this article and Mike McMillan says: April 4, 2013 at 10:11 am.

Your version of Vostok has different year dates from the Jo Nova post. It uses a 1999 ‘before present’ reference date, while Jo Nova may use a different one. I’ll overlay the two and if there’s a noticeable difference I’ll redo it.

“The easiest way to explain is by analogy: 50 years ago astronomers searched extensively for planets around stars using lower resolution equipment. They found none and concluded that they were unlikely to find any at the existing resolution.

[…]

What astronomy found instead was that as we increased the resolution we found planets. Not just a few, but almost everywhere we looked. This is completely contrary to what the low resolution data told us and this example shows the problems with today’s thinking. You cannot use a low resolution series to infer anything reliable about a high resolution series.”

Your analogy is very very useful and of course covers vast other discussions, from history, to crime, to well, everything. In short, ‘absence of evidence is not evidence of absence’.

I must nitpick the planet discovery thing though, “as we increased the resolution we found planets” unless we confine that statement only to our own solar system. The current trend at NASA and other places is to pronounce every “wobble” or “dimming” as a new planet, and they build up a ledger of names of “discoverers” of new planets, people who deserve no such recognition, yet.

We don’t even have adequate optical resolution photos of the “nearby” planet Pluto and her moons in my opinion, and those newer ones beyond her that are still “nearby”. These so-called “planets” in other solar systems are hundreds and thousands of light years away (while Pluto is about 0.0005 light years away on average), which is tens and hundreds of millions of times farther away. They can show me a picture when they’ve got one. :-)

This extra-solar planet thing is all about “safe Science”, stuff that cannot be proven in the next thousand human lifetimes, and is merely a way to get recognition and funding for doing nothing of consequence. Wikipedia proves this by saying “A total of 861 such planets (in 677 planetary systems, including 128 multiple planetary systems) have been identified as of March 22, 2013”. Utter garbage. Defund this nonsense. And stop naming planets in far-away places that may already be named by the local inhabitants ;-)

P.S. this is not a religious statement, it is not an attack on “Science”, it is an attack on lack of precision and fake work which should not be called “Science” at all, just like the entire AGW hoax. I don’t need fake Science to tell me there are planets everywhere. I already know they are there from glancing up at the stars.

“… throw all the paleo data in the trash bin and you still know from physics that CO2 will warm the planet. Nothing can change that.”

Personally I prefer a little more accuracy in my blanket statements. Using “warm the planet” in this sentence sure sounds like adding heat where there was none before. Precision demands that statement instead read ‘slows down the loss of heat’. That may sound like nitpicking, and certainly would be for the average layman, but that does not include Steve Mosher.

I can think of one scenario where CO2 would add heat where none existed before. Imagine an experimental planet with zero atmosphere, a complete vacuum. Now we add an atmosphere of pure CO2. Now, completely disregarding the “greenhouse effect” of capturing and re-emitting some IR back towards the planet, in fact just imagine there is no sun and no incoming radiation at all. Summary: there is “new heat” by way of friction of atmosphere molecules now bouncing off things, i.e., matter exists where previously there was none. Is this added heat even measurable? Probably not, just ask the Martians whose planet is an example of that experiment.

The point is Mosher, that phrase sure sounds to me like you are saying adding CO2 adds heat, but in reality the effect is only fractionally slowing down cooling. The existing heat is exactly the same heat that was always going to be there, minus the heat from friction of extra air molecules. Steve knows this full well, yet he continues to parrot the phrase “CO2 will warm the planet”. He apparently does not give a crap that he is misleading the people that do not follow these things closely. Why is that, Steve?

Thomas says:
April 4, 2013 at 12:37 pm” CO2 has continued to rise while global temperature has NOT risen for at least the last 16 years.”
Actually we do have a positive trend over 16 years,
It depends on your data set. For RSS, for example, the slope from December 1996 to March 2013 is -2.3524e-05 per year. This is 16 years and 4 months. See: http://www.woodfortrees.org/plot/rss/from:1996.9/plot/rss/from:1996.9/trend

‘“To falsify the Null Hypothesis, which is what you want to do, you must show that the warming in the last 100 years cannot be explained by natural fluctuations.”

Now, this is a fundamental misunderstanding of how science works. You can *never* prove that something can’t be caused by some unknown process. Maybe gravity doesn’t exist and it’s just angels pushing the planets around. How do you disprove that? In science you create different hypotheses about how something works, calculate what predictions they make, and make measurements to see which hypothesis works best. It may not be perfect, but until you find something better you work with the theory you have, trying to find the boundaries for where it is useful.’

The full range of natural variation is found in the historical records. The highest temperature is the top of the range and the lowest is the bottom of the range. If present temperatures exceed the highest historical temperature then we have something to explain, namely, what caused the high range of natural variation to be exceeded. The Roman Warm Period exceeded today’s temperatures and the Medieval Warm Period equaled them. Therefore, there is no reason to assert that natural variation cannot account for today’s temperatures.

Some Alarmists argue that the rate of temperature increase since 1975 or so has been greater than any known rate. Setting aside matters of cherry picking, the rate of increase in the Thirties was as high as the rate today. In any case, to make the argument you have to buy into Mann’s preposterous Hockey Stick. If you have bought into Mann’s work, you simply have not done your homework.

Do you now understand what natural variation is? It is simply the full range, bottom to top, of temperatures known from historic records and paleo studies and such. If present temperatures fall within that range then natural variation explains present temperatures. (Yes, climate scientists should be hard at work identifying all causes of historic variation but they seem unable to do that while publishing CAGW warnings brings in the grant dough.) If present temperatures were to fall outside the range in a non-trivial way, then the Null Hypothesis would be falsified and there would be a reason to search for a cause that is unique to our time, something like manmade CO2.
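The framing of the null hypothesis above reduces to a simple range check, which can be sketched as follows. The numbers are hypothetical anomaly values chosen purely for illustration, not real reconstruction data:

```python
def within_natural_variation(current, historical):
    """The 'null hypothesis' above as a bare range test: no special cause
    is needed while the current value stays inside the historical range."""
    return min(historical) <= current <= max(historical)

# Hypothetical anomaly values (°C), purely illustrative
historical = [-0.6, 0.3, 0.9, -0.2, 0.5]
print(within_natural_variation(0.8, historical))   # True: inside the range
print(within_natural_variation(1.2, historical))   # False: would need explaining
```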

You, like most Alarmists, labor under the confusion that climate science is about causes only. Alarmists present manmade CO2 as a cause of higher temperatures, though they will not admit that no such thing can be known until the “forcings and feedbacks” calculation is complete, and challenge Skeptics to come up with a better cause. That is not science because it ignores the records of observations that define the higher and lower bounds of what can be expected from natural variation.

markx: “Are you stating here that you consider the 20th century temperature spike ‘proven’ because you know CO2 levels have risen?”

No, I consider it proven because we have observed it.
====================================
LOL you get props for being persistent. Well done!

Yes, sort of. But you’ve no idea how it relates to the past. No one does. And that’s the whole point of the discussion about resolution.

As to CO2 being the cause, repeating a mantra doesn’t make it true. As you correctly pointed out, there are probably thousands of estimates of climate sensitivity to CO2. They’re all over the board. No one knows what it is or even if there is a measurable sensitivity. Forgoing rehashing of old arguments like the ignoring of other heat transfer mechanisms and absorption frequency windows, we can just say there is no skill in such calculations because we don’t know how a chaotic system is going to behave. It’s the ultimate hubris to believe otherwise.

As to the null hypothesis, you don’t know what else could cause a spike, ergo none exists? The problem arises when people feel an explanation is needed when no need really exists. There’s plenty of evidence the current warmth isn’t unusual. But, you’re focused on a spike which can’t be seen in the paleo recons because of the resolutions. So, it must be CO2. Sadly, I think this is how much of the climate science is conducted today.

As to the warming over 16 years, I’m sure you know it depends on which temp set one uses. http://www.woodfortrees.org/plot/rss/from:1997/plot/rss/from:1997/trend Of course, if we use that, it pokes some holes in the whole thought experiment about CO2. Sat temps are supposed to be amplified. Oops. Ignore that inconvenience. We can conceive of nothing else, ergo, we’re right.

As far as negative feedbacks go… they are published in all sorts of science literature. Indeed, we just read a paper asserting the warmth is the cause of the expansion of the southern hemisphere ice extent. We’ve also seen insistent claims about the warmth causing the cold and snow. Now, I’m not big on the warmcold theory, but it is out there, promoted by alarmist sciency types. Still, we can’t help but notice the expansion of the ice and the increase in the winter snow. If we are to believe calculations regarding albedo, I’d say we’ve found a couple of negative feedbacks, and because of the origins of snow, it seems we’ve probably found another.

“Now you are being silly. Surely you are aware that we have had good scientific reasons to believe CO2 causes warming for well over a century. It’s not a matter of what I want but of what existing science tells us.”

You continue to assert this point though people have explained that the “forcings and feedbacks” calculation is not complete. We cannot know that manmade CO2 causes increases in global average temperature until that calculation is complete. It does not matter what CO2 does in the laboratory. What we need to know is what it does in the atmosphere. No one will know until the matter of forcings and feedbacks has been sorted out.

I conclude that you have no idea what the “forcings and feedbacks” calculation is.

richardscourtney says:
April 4, 2013 at 2:35 pm
————–
As usual a pleasure reading your rebuttals Richard, well done as always, except for the frustration I experience due to your thoroughness. Once you’ve finished posting there isn’t any troll leftover for me. :)

Steve Keohane says: April 4, 2013 at 11:04 am
This is the Vostok graph I used to overlay on Marcott. It seems to show much more variation than the Vostok record overlaid in red accompanying this article and Mike McMillan says: April 4, 2013 at 10:11 am.

I apologise for my “thoroughness” but I only answered points addressed specifically to me. Others (e.g. Theo Goodwin) have also addressed some of those same points.

If I thought my contributions were to prevent your making your much more succinct and extremely cogent posts then I would cease to contribute.

At April 4, 2013 at 2:32 pm, my son provides a demonstration of how to make a clear and complete argument in far fewer words than I do.

I console myself with the hope that a variety of onlookers will contain a variety of types of people, so a variety of types of post will be appreciated. Different people like information to be provided in different ways.

Since most opinions on AGW are appropriated from others with no value added and often with no understanding of any science behind those opinions, the climate discussions in the blogs and media would be rather shorter without all that repetition.

Since most opinions on AGW are appropriated from others with no value added and often with no understanding of any science behind those opinions, the climate discussions in the blogs and media would be rather shorter without all that repetition.

The only way I can make sense of your assertion is to assume that by “blogs” you were talking about warmunist ‘echo chambers’ such as SkS.

It is certainly not true of WUWT as anybody reading e.g. this thread can see.

“A theory with this many holes in it would have been thrown out long ago, if not for the fact that it conveniently serves the political function of indicting fossil fuels as a planet-destroying evil and allowing radical environmentalists to put a modern, scientific face on their primitivist crusade to shut down industrial civilization.”

Skiphil says:
April 3, 2013 at 8:29 pm
fyi, Tamino has a new post claiming to have tested via “three spikes”
==========
My understanding is that Tamino has drawn in 3 spikes on top of the old proxies data and voila, Marcott can detect them. I’ve been hearing that RC and Tamino have become quite excited since I posted this paper to their sites!!

What you have to realize is that Tamino has played a magician’s trick on you. He hasn’t added the spikes to the location where the proxies were created, he has drawn them on top of the proxy data. To understand this by analogy, consider this:

Adding additional planets around stars would not have made them detectable to astronomers 50 years ago. These additional planets would be real spikes. However, drawing pictures of planets on the old photos will certainly make them detectable! These are Tamino’s spikes. Your half-blind old granny could detect them!! So, no surprise Marcott was able to do the same.

I would like to thank the many contributors that have made this the Top Post at WUWT. Wahoo! Please pat yourselves on the back. I would especially like to thank Anthony and “Charles the Moderator” for helping make this article such a success.

~Steve Keohane says: April 4, 2013 at 11:04 am
~This is the Vostok graph I used to overlay on Marcott. It seems to show much more variation ~than the Vostok record overlaid in red accompanying this article and Mike McMillan ~says: April 4, 2013 at 10:11 am.

The two curves are identical.

My proxy graph, linked above has 25% more variation, not counting the initial tail. It measured readings from -2°C to +2°C in the past 8100 years as opposed to -1.3°C to +1.87°C from the graph you used.
Graphically: http://i48.tinypic.com/2v8k11s.jpg
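The “25% more variation” figure can be checked from the two ranges quoted — a rough endpoint comparison that, as noted, ignores the initial tail:

```python
overlay_range = 2.0 - (-2.0)      # my graph: -2 °C to +2 °C
article_range = 1.87 - (-1.3)     # red curve: -1.3 °C to +1.87 °C
extra = 100 * (overlay_range / article_range - 1)
print(round(extra))  # 26 — roughly the "25% more" quoted above
```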

When an analog time-waveform is processed on a digital computer (as opposed to on an analog computer, which in today’s world is a rare event), the original analog signal must have been sampled at discrete time intervals. An ideal low pass filter (a) passes undistorted (constant gain – ideally unity – and either no phase shift at any frequency or a phase shift at each frequency directly proportional to the frequency) all frequency components in the signal below the filter cutoff frequency and (b) completely removes all frequency components above the cutoff frequency. The impulse response (both digital and analog) of an ideal low pass filter is a sinc function (continuous in the analog case, discrete-time in the digital case) that extends in time to plus and minus infinity. As such, ideal low pass filters don’t exist in the real world.

If the original analog signal is sampled at uniformly spaced time intervals, then the discipline of digital signal processing can be used to characterize filter performance. If it is sampled at nonuniformly spaced time intervals, then most uniform-time-sampled digital signal processing “rules” cease to apply. For a real analog signal (as opposed to a complex analog signal), uniform sampling aliases (causes frequency information to appear at a different frequency) all signal content above half the sampling rate into the frequency interval between 0 and half the sampling rate. If frequency components above half the sampling rate exist in the analog signal, uniform-rate sampling renders recovery of that information impossible and corrupts the information below half the sampling rate.

For uniform sampling, a moving average corresponds to an impulse response that is zero everywhere except over a short, finite, continuous time interval, during which it is constant. A moving-average impulse response acts like a poor low pass filter. Specifically, except at discrete frequencies, some high-frequency information is passed, and the gain as a function of frequency over low frequencies is not constant. The degree to which these imperfections affect conclusions drawn from uniformly sampled, low-pass filtered data will in large part depend on the original analog signal and the information being sought.
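The moving average’s leaky frequency response can be computed directly: the gain of an n-point running mean is a Dirichlet kernel, with nulls at multiples of 1/n and sidelobes in between that let high-frequency energy through. A minimal sketch (the function name and the choice n = 12 are mine, for illustration):

```python
import cmath, math

def moving_average_gain(n, f):
    """Magnitude response of an n-point moving average at normalized
    frequency f (cycles per sample, 0 <= f <= 0.5)."""
    # H(f) = (1/n) * sum_{k=0}^{n-1} e^{-2*pi*i*f*k}  -> geometric series
    if f == 0:
        return 1.0
    num = cmath.exp(-2j * math.pi * f * n) - 1
    den = cmath.exp(-2j * math.pi * f) - 1
    return abs(num / den) / n

n = 12  # e.g. a 12-sample running mean
print(round(moving_average_gain(n, 0.0), 3))     # 1.0  (DC passes untouched)
print(round(moving_average_gain(n, 1 / n), 3))   # 0.0  (first null)
# The first sidelobe near f = 1.5/n still passes ~22% of the amplitude:
print(round(moving_average_gain(n, 1.5 / n), 3))
```

The nonzero sidelobe gain is exactly the “some high-frequency information is passed” imperfection described above.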


richardscourtney,
I really tried to get through your whole post but was unable to. I had to stop reading after you wrote, “Each year mean global temperature rises by 3.8 deg.C from June to January and falls by 3.8 deg.C from January to June.
So, the warming which you want to exaggerate is about a fifth of the rise experienced during 6 months of each year.”

Please, could you reference where you got this from? I am at a loss as to how you came up with that number, but feel it explains so much about your posts.

The concluding statement of this comment is interesting in that it refers to the rate of change conclusion as ‘our main conclusion.’ That is, the main conclusion of the Marcott et al paper is, as per the press release etc, the unprecedented rate of change. However, this comes not by way of the Marcott et al proxy uptick but by comparison of its slow Holocene cooling with the instrumental hockey stick blade in the Mann08 composite:

It’s important to note though that the choice of core-top assumption should have little impact on our overall Holocene reconstruction or our main conclusion that 20th century warming from the instrumental record spanned much of the Holocene range.

Thus, while sceptics have something to work with in this defense, in the end their sound and fury will amount to naught. All the attention that Nancy Green, Steve McIntyre etc. have given to the proxy uptick is eschewed and deflected back to a defense of Mann’s old hockey stick.

Nancy, at first I thought Tamino’s shenanigans were indeed “tricks,” as you note. However, more and more I wonder if he knows how to use the tools, but lacks the fundamental understanding necessary to appreciate why and how the tools do what they do. In other words, incompetent, not a liar.

“rogerknight, You are correct that volcanoes produce cooling, but only for a few years, much shorter than what we are talking about here, . . . ”

But some of these ought to show up if they were super-sized, like this one:

DocMartyn says:
April 4, 2013 at 6:08 am
We have volcanic dust and sulphate spikes in both the EPICA Dome C and Greenland ice cores that show eruptions more than an order of magnitude greater than observed in modern times.
The Aniakchak (Alaska) eruption of 1645 BC would have plunged the entire Northern Hemisphere into cooling, and if aerosols like dust and sulfate do block sunshine, it would have cooled the whole world.
—————–

“and there is as far as I know no mechanism to get similar warming spikes.”

Nevertheless the immense warming spike of the Younger Dryas occurred. If such spikes can occur for an unknown reason, why couldn’t that unknown reason explain Modern Warming as well?

richardscourtney says, “I fail to understand any possibility of that meaning other than you were claiming that correlation of those two parameters showed the system has changed (i.e. “Null Hypothesis fails”) and I don’t understand how that can be anything other than an assertion of causality.”

No correlation or causality necessary. If atmospheric CO2 levels have changed and/or temperature has changed, then your null hypothesis that the system has not changed is shown to be false. Are you saying CO2 levels and/or temperature have not changed, or are you in agreement that the system has changed?

“Now you are being silly. Surely you are aware that we have had good scientific reasons to believe CO2 causes warming for well over a century. It’s not a matter of what I want but of what existing science tells us.”

The consensus view is that CO2 has caused warming since 1950 (63 years), not “for well over a century.”

I would like to thank the many contributors that have made this the Top Post at WUWT. Wahoo! Please pat yourselves on the back. I would especially like to thank Anthony and “Charles the Moderator” for helping make this article such a success.

As the author, I would tend to think it is you who is responsible for this article being a success. The rest of us just add minor details and have a discussion on what it means in relation to the Marcott mess. You did the hard and heavy lifting and suffer the nice attacks by trolls, the rest of us just troll bait and/or troll bash.

In any regard, looking forward to future articles and I want to remind everyone once again that correlation does not imply causation. (I am looking at you Thomas.)

Mark T says:
April 4, 2013 at 9:16 pm
Nancy, at first I thought Tamino’s shenanigans were indeed “tricks,” as you note.
===========
The point of misdirection is to confound the audience so they cannot see what is really happening. Almost always this requires the performer to control the stage, to limit the audience to a specific point of view. Otherwise the illusion will break down. For this reason sites like Tamino and RC need to heavily censor and control the presentation.

The purpose of a good analogy is to change your point of view. To allow you to look at the stage from a different angle in spite of the performer, so that you can see clearly how the trick is done. In this case Tamino wants you to believe he has traveled back in time and created 3 climate spikes. He wants you to believe that these three spikes are now part of the actual proxies and are at the same resolution as the proxies.

However, that is not what he has done at all. He has drawn three spikes onto the low resolution proxies, but he drew the spikes in high resolution. Much higher resolution than what exists in the proxies themselves. He has changed the resolution without telling you, the audience, hoping you will not notice. And as predicted in my article, these spikes are detectable by Marcott.

The only conclusion you can reasonably make is that as you increase the resolution of other paleo proxies, you are more likely to find spikes in them as well.

If anything, Tamino has simply demonstrated my point. He has shown that as you increase the resolution you are more likely to find spikes.


richardscourtney, thank you very much indeed for such a detailed and clear exposition of the Null Hypothesis, specifically as it relates to the assumption of AGW. I very much admire your patience in attempting to clarify the basics of scientific principle, in the face of so much willful ignorance.

I’m sorry that neither Thomas nor ‘sceptical’ is able to follow the argument, but I (and no doubt others) will find it useful here in the UK for forwarding to friends who get their news from the BBC, and who read the Guardian, the Independent and other such AGW-addicted media outlets, due to their blinkered Greenie political views.

@ sceptical, who wrote
“No correlation or causality necessary. If atmospheric CO2 levels have changed and or temperature has changed then your null hypothesis that the system has not changed is shown to be false. Are you saying CO2 levels and or temperature has not changed or are you in agreement that the system has changed?”

richardscourtney clearly demonstrated – with citations – that records show nothing unusual or unprecedented in the warming of the last half of the C20th. So no: the variations do NOT prove any change in the system, and your argument therefore falls.

This was again explained above, for your benefit, by Theo Goodwin (April 4, 2013 at 3:34 pm)

Furthermore, all records show that the increases in CO2 lag those in temperature by several hundred years. To quote richardscourtney again, because these points are crucial to the argument against AGW:

” (b) … that feedback was so small that it failed to stop temperature rising and falling because (according to the Vostock ice core) the delay of CO2 reversal was typically 800 years after each temperature reversal.
Also, that is the longest time scale and I said “at all time scales”.
At the shortest time scale CO2 follows temperature by 5 months. This was first discovered in 1990 by Kuo, Lindberg & Thomson (ref. Cynthia Kuo, Craig Lindberg & David J. Thomson “Coherence established between atmospheric carbon dioxide and global temperature” Nature 343, 709 – 714 (22 February 1990) )
This has been independently confirmed by several others since and the subsequent studies have revealed that the time of the delay of CO2 after temperature varies with latitude.

Atmospheric CO2 was much higher than now throughout most of the time since the Earth has had an oxygen rich atmosphere. ”

I wonder if you are in the West Country?
The excellent and impressively knowledgeable ‘tonyb’ lives in Devon and I am in Cornwall. If there were sufficient ‘climate realists’ in the West Country then it may be possible for us to meet up at a pub to swap AGW anecdotes.

At the very least we could amuse each other with tales of the abuse we have had from warmunists, and nobody can know what may develop from such a social gathering.

There’s lots of good discussion about scientific method here. Could I just add something?

For almost all physical sciences, experiments are performed in order to understand more about reality. An experiment is a closed system, so that a scientist can change one input variable and determine whether the outputs are changed. The null hypothesis is that there will be no change, and that is deemed disproven if there is a change in output which is statistically significant (generally at the 5% level, though this is just a convenience and does in fact permit 1 in 20 experiments to give a false positive (type I) error). Following that, theories are propounded to explain the mechanism, and then further experiments are performed on those… and so on in order to deepen our understanding (eg. elements -> atoms -> protons/neutrons/electrons -> subatomic particles…etc).
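The 1-in-20 false positive rate at the 5% level can be illustrated with a small simulation. This is a sketch of my own, not part of the comment: draw many "experiments" in which the null hypothesis is true by construction, and count how often a t-test nonetheless rejects it.

```python
# Sketch: with the null hypothesis true, a test at the 5% level
# still "rejects" about 1 time in 20 purely by chance.
import random
import statistics

random.seed(42)

def t_statistic(sample):
    """One-sample t statistic against a true mean of 0."""
    n = len(sample)
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    return mean / se

trials = 5000
rejections = 0
for _ in range(trials):
    # The null is true: the data are pure noise with mean 0.
    sample = [random.gauss(0.0, 1.0) for _ in range(30)]
    # 2.045 is the two-sided 5% critical value of the t distribution
    # with 29 degrees of freedom.
    if abs(t_statistic(sample)) > 2.045:
        rejections += 1

print(f"False positive rate: {rejections / trials:.3f}")
```

The printed rate lands close to 0.05, which is exactly the type I error rate the 5% convention permits.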

It is not possible to do experiments in an open system. The real causes may or may not be acting, but because the conditions are not controlled (ie. it’s not a closed system) it is not possible to know for sure. Anything that happens may be caused by the thing of interest, but it may also be caused by a million and one other things concurrently acting (or not acting). It’s impossible to tell.

So it may be that CO2 is a greenhouse gas, and that in a closed system (experiment) an increased concentration raises temperatures. This does not mean that it has that effect in an open system, because other forces are acting at the same time.

So the most we can ever say about CO2 (in the open system that is climate) is that an increase in its concentration is correlated with an increase in temperature (if indeed we ignore the last 15 years, and we accept the temperature record as re-re-re-adjusted and reported to 2 decimal places). We may assume that CO2 drives that temperature change but it’s impossible to know for sure because we can’t do any experiments. We can model its effects, but models are not evidence until we can be sure we know all the “million and one other things concurrently acting (or not acting)” — and I would argue we will never know them all.

I think the way the models fail on average to predict the last 15 years’ lack of warming is evidence enough that we should humbly be saying “we don’t know very much” rather than “the science is settled”.

toto “Even if CO2 emissions completely stop in 2100, the warmth will remain for centuries.”

Please, you have absolutely no idea what will happen to “the warmth” in the future. Even “the warmth” itself is questionable, it’s not at all clear that it’s warmer today than the 1930s — most of the signal is in questionable adjustments to the raw data, and the rest could be explained by siting problems, meaning CO2 may have little to no discernible effect at all.

I am grateful for your post at April 4, 2013 at 3:24 pm but I write to nit-pick. My nit-pick concerns statistical validity. And I make this cavil for the benefit of onlookers because I am certain you know the point I am writing to make.

The issue of the validity of statistically derived information goes to the heart of the subject of this thread. The elegance of Nancy’s analogy is that it explains a point concerning validity of statistically-derived information without need to provide any maths.

Also, I have waited until now before replying to your post because I wanted to be sure Thomas had withdrawn in case this comment induced another dialogue about superstitious nonsense.

Your post says

Thomas says:
April 4, 2013 at 12:37 pm


” CO2 has continued to rise while global temperature has NOT risen for at least the last 16 years.”

Actually, your graph addresses a different question to the point I was making. Your graph considers the recent period when the trend has been negative.

As I explained in my post at April 4, 2013 at 2:35 pm, I was considering discernible global warming at 95% confidence. And I wrote

”If one wants to know how long it has been since there was any discernible global warming at 95% confidence then one has to start from now – any other date is ‘cherry picking’ – and consider back in time. Then one has to determine if global temperature trend differs from zero at the low confidence level of 95% which is used by ‘climate science’.

It turns out that – depending on which time series is analysed – the time of no recent discernible global warming at 95% confidence is between 16 and 23 years. In other words, discernible global warming stopped at least 16 years ago.

That is true.

So, I reported the period when the global temperature trend is not discernibly different from zero at 95% confidence.
And
You have reported the period where the RSS trend is negative without assessing confidence limits.

I suspect Thomas would find the information you have provided more cogent than the information I stated, so I am grateful for you having provided it. However, readers who understand statistics will recognise that the information I provided has more validity.

Thank you for your comments. My next post is in the pipeline and I expect it out in a day or two. As with my past posts, I talk about both “no warming” on several data sets and “no significant warming” on the same data sets. The SkS site only goes to the January data for RSS. And for 16 years and 3 months, the slope on RSS is -0.003 ±0.223 °C/decade (2σ). I interpret this to mean that there is a larger than 50% chance there has been cooling for 16 years and 3 months.
And since September of 1989, the trend is 0.129 ±0.130 °C/decade (2σ).
So you could say that at the 95% level, there is no warming for 23 years and 7 months. If the data is updated with February and March, it would either have no effect or it would add a month to this time.
(I must confess to some confusion on this point since is it really 95% or is it 97.5%? Suppose that it said 0.129 ±0.129, would that mean the chances of being above 0.258 are 2.5% and the chances of being below 0 are also 2.5%? And if so, could we then only say that we are 97.5% certain of an increase in temperature since September 1989?)
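Werner's "±" figures are roughly 2σ confidence intervals on an ordinary least-squares trend. Here is a minimal sketch of that calculation using synthetic monthly anomalies; the numbers are invented and stand in for the real RSS series, so only the method is illustrated.

```python
# Sketch: OLS trend of a monthly anomaly series with a ~2-sigma interval.
# Synthetic data stand in for the real RSS series; only the method matters.
import random

random.seed(0)

months = 195  # roughly 16 years and 3 months of monthly values
t = [i / 120.0 for i in range(months)]  # time in decades (120 months/decade)
y = [random.gauss(0.0, 0.15) for _ in range(months)]  # invented anomalies, deg C

n = len(t)
tbar = sum(t) / n
ybar = sum(y) / n
sxx = sum((ti - tbar) ** 2 for ti in t)
slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / sxx
intercept = ybar - slope * tbar

# Residual standard error, then the standard error of the slope.
residuals = [yi - (intercept + slope * ti) for ti, yi in zip(t, y)]
s = (sum(r * r for r in residuals) / (n - 2)) ** 0.5
se_slope = s / sxx ** 0.5

# A ~2-sigma (roughly 95%, two-sided) interval on the trend.
lo, hi = slope - 2 * se_slope, slope + 2 * se_slope
print(f"trend = {slope:+.3f} +/- {2 * se_slope:.3f} deg C/decade (2 sigma)")
print("zero inside the interval:", lo <= 0.0 <= hi)
```

Two caveats. This naive OLS interval ignores the autocorrelation present in real temperature series, so properly computed intervals are wider. And on the 95%-vs-97.5% question: a symmetric ±2σ interval is two-sided, so when the lower bound sits exactly at zero, the chance that the true trend is below zero is about 2.5%, not 5%.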

richardscourtney,
Still waiting to hear a reference for your claim that the global temperature rises and falls by 3.8 deg.C twice a year. Thank you.
—————————–
What’s the gag, sceptical? Seasonal variation obviously, you know; gets hot in summer, gets cold in winter. Please don’t come back to cudgel me with some sort of stupidity about how this isn’t global temperature change, I know Richard said global in his original post. It’s got nothing to do with the point he was making about exactly how trivial a 0.8C change really is.

The GLOBAL temperature does rise and fall 3.8deg.C during each year. And Hemispheric temperatures seasonally fluctuate much more than this.
I will explain this later in this post and try to do it in language sceptical may be capable of understanding.

At April 4, 2013 at 8:55 pm sceptical wrote a post addressed to me which said in total

richardscourtney,
I really tried to get through your whole post but was unable to. I had to stop reading after you wrote, “Each year mean global temperature rises by 3.8 deg.C from June to January and falls by 3.8 deg.C from January to June.
So, the warming which you want to exaggerate is about a fifth of the rise experienced during 6 months of each year.”

Please, could you reference where you got this from. I am at a lost as to how you came up with that number but feel it explains so much about your posts.

That post states that
(a) an anonymous troll chooses not to read information I provide because the troll does not like the information
but
(b) the troll demands I provide him/her/them/it with more information for the troll to not read.

I chose to ignore that nonsense because I have better things to do than to flatter the ego of an anonymous troll by spending time providing references the troll says he/she/they/it will not read.

At April 5, 2013 at 5:39 pm the troll again demanded that I provide the reference for him/her/them/it to not read.

I will not do that. Instead, I intended to provide information which would enable the troll to use the ‘Wayback Machine’ to find it. Thus, if the troll really wanted the information then the troll could do the homework which the troll demanded of me.

However, that intention has been prevented by Werner who – in his typically useful, helpful and informative manner – has provided a link which I did not know and which is one such requested reference. I am grateful for the link but disappointed that the troll has had his/her/their/its question answered without being required to do any homework.

THE REASON GLOBAL TEMPERATURE VARIES THROUGHOUT EACH YEAR

The Earth is warmed by the Sun and the Earth goes round the Sun. This passage around the Sun is called the Earth’s orbit.

The Earth’s orbit is an ellipse – not a circle – so the Earth moves towards and away from the Sun as it makes an orbit. One orbit is one year.

The Earth obtains more warming from the Sun when it is nearest to the Sun than when it is furthest from the Sun. The Earth is nearest to the Sun in December/January and furthest in June/July.

The Earth gets hotter when most warmed: this is similar to a person getting warmer when moving nearer to a fire.

The Earth’s axis is tilted relative to the perpendicular to the plane of the Earth’s orbit. Ooops! Strike that last sentence: clearly, the troll will not ‘get it’. I will try again.
The Earth leans over so its Northern half gets more Sun for half the year, and this is called Northern Hemisphere summer. And the Earth’s Southern half gets more Sun for the other half of the year, and this is called Southern Hemisphere summer.

The Northern Hemisphere is mostly covered in land and the Southern hemisphere is mostly covered in oceans. Oceans are made of water, and water needs to get a lot more heat than land for them to warm by the same amount. This means the average temperature of the Northern Hemisphere varies more than the average temperature of the Southern Hemisphere throughout the year.

The average temperature of the Earth is the average including both the Northern and Southern Hemispheres.

Variation in heating of the Earth around its orbit and the different responses of the Earth’s Hemispheres combine to give a variation of the average temperature of the globe throughout the year. This variation is +/- 3.8 degrees Celsius with the Earth being hottest in January of each year.

Global temperature is usually quoted as ‘anomalies’. These are the differences in temperature from the Earth’s average temperature over a period of 30 years. The anomalies for an individual month (e.g. April) are obtained as differences from an average of 30 of those months (e.g. 30 Aprils).

So, in a year when January and June have the same global temperature anomaly (e.g. 0.0 deg.C), the global average temperature in January was 3.8 deg.C higher than in June.

In times past NASA GISS posted actual monthly global temperatures on its web site. NASA GISS removed this information from its web site when it was pointed out (e.g. by me) that the twentieth century global warming was put into perspective by this information on global temperature variation throughout each year.

Global temperature rose ~0.8 deg.C throughout the twentieth century. This is about a fifth of the rise in global temperature which occurs during 6 months of each year.
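The anomaly arithmetic described above can be made concrete with a toy sketch. The numbers are invented and purely illustrative: an anomaly is the difference between a month's actual temperature and the 30-year average for that same calendar month, so two months can share an anomaly while differing by several degrees in absolute temperature.

```python
# Sketch: anomalies subtract each calendar month's own 30-year baseline,
# so a January and a June with equal anomalies can still differ by
# several degrees in absolute temperature. All numbers are invented.
baseline = {"January": 16.0, "June": 12.2}   # 30-year means, deg C (illustrative)
this_year = {"January": 16.0, "June": 12.2}  # actual temperatures this year

anomalies = {m: this_year[m] - baseline[m] for m in baseline}
print(anomalies)  # both anomalies are 0.0 ...

absolute_gap = this_year["January"] - this_year["June"]
print(absolute_gap)  # ... yet the two months differ by 3.8 deg C in absolute terms
```

This is why a plot of anomalies shows no seasonal cycle even though the underlying absolute temperatures swing by several degrees each year.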

Following several attempts to answer your specific questions in different ways – and cognisant of the need to use language comprehensible to onlookers – I have decided to provide this general answer in hope that it adequately covers all the issues you raise.

A full and proper consideration of these issues requires reference to text books concerning the use of statistical procedures as part of the philosophy of science. This very brief answer is my attempt at an overall view of confidence limits.

Nothing is known with certainty, but some things can be inferred from a data set to determine probabilities of being ‘right’ and ‘wrong’. These probabilities are the ‘confidence’ which can be stated.

As illustration, consider a beach covered in pebbles.

There are millions of the pebbles. A random sample of, say, 100 pebbles is collected and each pebble is weighed. This provides 100 measurements each of the weight of an individual pebble. From this an average weight of a pebble can be calculated. One such average is the ‘mean’ and it is obtained by dividing the total weight of all the pebbles by the number of pebbles (in this case, dividing by 100).

The pebbles on the beach can then be said to have the deduced mean weight.

However, none of the pebbles in the sample may have a weight equal to the obtained average. In fact, none of the millions of pebbles on the beach may have a weight equal to that average. Any average – including a mean – is a statistical construct and is not reality.

(This difference of an average from reality is demonstrated by the average – i.e. mean – number of legs on people. On average people have fewer than two legs because nobody has more than two and a few people have fewer than two.)

In addition to considering the mean weight of pebbles in the sample, one can also determine the ‘distribution’ of weights in the sample. Every pebble may be within 1 gram of the mean weight. In this case, there is high probability that any pebble collected from the beach will have a weight within 1 gram of the obtained average. But that does NOT indicate there are no pebbles on the beach which are 10 grams heavier than the obtained average. (This leads to the wider discussions of sampling and randomness which I am ignoring.)

Importantly, there is likely to be a distribution of weights such that most pebbles in the sample each have a weight near the mean weight, and a few pebbles have weights much lower and much higher than that average. This may provide a symmetrical distribution of weights within the sample. However, the sample may not have a symmetrical distribution because no pebble can weigh less than nothing, but a few pebbles may be much, much heavier than the mean weight: in this case, the sample is said to be ‘skewed’.

Assuming the sample is symmetrical, it is equally likely that a pebble will be within a range of weights heavier or lighter than the mean weight. (If the sample is skewed in the suggested manner then the likely range of weights heavier than the mean weight is greater than the likely range of weights lighter than that average.) These ranges are the + and – ‘errors’ of the average and have determined probabilities.

The probable error range of +/-X at 99% confidence says that 99 out of 100 pebbles will probably be within X of the mean.
The probable error range of +/-X at 95% confidence says that 95 out of 100 (i.e. 19 out of 20) pebbles will probably be within X of the mean.
The probable error range of +/-X at 90% confidence says that 90 out of 100 (i.e. 9 out of 10) pebbles will probably be within X of the mean.
etc.

And that is all the confidence limits say; nothing more.

Therefore, if the weights of two pebbles are within the probable error range then at the stated confidence they cannot be distinguished as being different from the sample mean. And if a ‘heavy’ pebble has a weight within the probable error then that pebble’s weight says nothing about the sample mean.

In the above example, the sample mean is a statistical construct obtained from individual measurements of pebbles. And it is meaningless in the absence of a probable error with stated confidence: such an absence removes any indication of what the weight of a pebble from the beach is likely to be. And every pebble has equal chance of being within the +/- range of the mean.
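The pebble illustration can be sketched numerically. This is a toy simulation with an invented population, not real data: draw a sample, compute its mean and spread, and check what fraction of fresh pebbles from the beach falls within the ±2σ range.

```python
# Sketch: the pebble-beach illustration as a toy simulation.
# The "beach" is an invented population of pebble weights; we take a
# sample of 100, compute the mean and a +/- 2-sigma range, then see what
# fraction of fresh pebbles falls inside that range (roughly 95%).
import random
import statistics

random.seed(1)

def pebble():
    # Invented population: weights around 50 g with a spread of 10 g.
    return random.gauss(50.0, 10.0)

sample = [pebble() for _ in range(100)]
mean = statistics.fmean(sample)
sd = statistics.stdev(sample)

lo, hi = mean - 2 * sd, mean + 2 * sd

fresh = [pebble() for _ in range(10000)]
inside = sum(lo <= w <= hi for w in fresh) / len(fresh)
print(f"sample mean = {mean:.1f} g, 2-sigma range = ({lo:.1f}, {hi:.1f}) g")
print(f"fraction of fresh pebbles inside the range: {inside:.3f}")
```

Note that the mean is a construct: no individual pebble need weigh exactly `mean` grams, yet roughly 19 in 20 pebbles fall within the ±2σ range, and that coverage is all the confidence limit claims.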

The linear trend of a time series is also a statistical construct obtained from individual measurements. It has confidence limits with stated probability and it, too, is meaningless without such confidence limits.

At stated confidence a trend is equally likely to have any value within its limits of probable error.
(This is the same as any pebble is equally likely to have any weight within its limits of probable error from the mean weight.)

Therefore, if the trend is 0.003 ±0.223 °C/decade at 95% confidence then there is a 19:1 probability that the trend is somewhere between
(0.003-0.223) = 0.220 °C/decade and (0.003+0.223) = 0.226 °C/decade.

And, in this example, 0.220 °C/decade is not discernibly different from 0.226 °C/decade or from any value between them. There is a range of values from 0.220 °C/decade to 0.226 °C/decade which are not discernibly different. And similar is true for all probable error ranges.

Importantly, this lack of discernible difference is not affected by whether or not the range straddles 0.000 °C/decade.

I made a stupid typing error in my post at April 6, 2013 at 6:44 am.
This does not affect my argument but may cause confusion.

I wrote:

Therefore, if the trend is 0.003 ±0.223 °C/decade at 95% confidence then there is a 19:1 probability that the trend is somewhere between
(0.003-0.223) = 0.220 °C/decade and (0.003+0.223) = 0.226 °C/decade.

And, in this example, 0.220 °C/decade is not discernibly different from 0.226 °C/decade or from any value between them. There is a range of values from 0.220 °C/decade to 0.226 °C/decade which are not discernibly different. And similar is true for all probable error ranges.

It seems that another of my keys is getting dodgy. Obviously, I should have written

Therefore, if the trend is 0.003 ±0.223 °C/decade at 95% confidence then there is a 19:1 probability that the trend is somewhere between
(0.003-0.223) = -0.220 °C/decade and (0.003+0.223) = 0.226 °C/decade.

And, in this example, -0.220 °C/decade is not discernibly different from 0.226 °C/decade or from any value between them. There is a range of values from -0.220 °C/decade to 0.226 °C/decade which are not discernibly different. And similar is true for all probable error ranges.

Hello Richard, Thank you for your posts at 6:44 and 7:49. All is clear there. However in your post from 2:22, you had the warm and cold months backwards. The earth is actually hottest when furthest from the sun.

This variation is +/- 3.8 degrees Celsius with the Earth being hottest in January of each year.

It should read: This variation is +/- 3.8 degrees Celsius with the Earth being hottest in July of each year.

richardscourtney says:
April 6, 2013 at 2:22 am
“Each year mean global temperature rises by 3.8 deg.C from June to January and falls by 3.8 deg.C from January to June.”
Global temperature rose ~0.8 deg.C throughout the twentieth century. This is about a fifth of the rise in global temperature which occurs during 6 months of each year.
=============
Wow! I’d forgotten all about this. That certainly puts the “alarm” over global warming into perspective!! Could one use this 3.8C to provide a ballpark estimate as to natural variability?

What if one considered this 3.8C year to year to be similar to waves on the ocean? With no change in forcings we would expect some waves to be naturally smaller and others to be naturally larger. We already have a good body of work to estimate peak waves, so in theory it might work here.

I’ll leave it to others to do the formal math. Here is a quick estimate. The standard deviation of the data is the square root of the variance; taking the variance as roughly 1.9, the standard deviation is about 1.4 C.

This would suggest that 0.8 C is well within natural variability. Most of the time variability should be within 1.4 C, but we shouldn’t be surprised by a natural variability of 2.8 C, and in extreme cases we could see 4.1 C.

This would suggest that any attempt to keep average temperatures within 2.0 C is futile. I’d be interested in comments. Does this guesstimate seems reasonable? Maybe there is another paper here?
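The arithmetic behind this guesstimate can be spelled out, assuming (as the follow-up note suggests) a variance of roughly 1.9. The variance itself is a ballpark guess, not a measured result.

```python
# Sketch of the back-of-envelope arithmetic: a guessed variance of ~1.9
# gives a standard deviation of ~1.4 C, with 2- and 3-sigma bands of
# ~2.8 C and ~4.1 C. The variance is a guesstimate, not a measurement.
variance = 1.9
sigma = variance ** 0.5

print(f"1 sigma = {sigma:.1f} C")      # "most of the time"
print(f"2 sigma = {2 * sigma:.1f} C")  # should not surprise us
print(f"3 sigma = {3 * sigma:.1f} C")  # extreme cases
```

With this guessed variance the three bands round to 1.4 C, 2.8 C and 4.1 C, matching the figures quoted above.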

Climate variability is much lower than that. Decadal averages are a good parameter with respect to the effect on the economy. At this scale sigma is 2 or 3 tenths of a degree at the regional level. It has probably been thus for several millennia, including the twentieth century. The 0.8 °C of secular warming is a misunderstanding: thermometers at weather stations do not measure changes in regional temperatures, while proxies do the job only vaguely.

I am replying to your posts at April 6, 2013 at 9:09 am and April 6, 2013 at 12:02 pm, respectively.

Global climate variability is a statistical construct and – as I explained in my “statistics primer” post to Werner at at April 6, 2013 at 6:44 am – a statistical construct is not ‘real’: it is a formulation of the definition used to obtain it.

Hence, comments such as “Climate variability is much lower than that” are statements of what commenter means by “climate variability”.

Transition from a glacial to an interglacial state is a variation in global climate.

This poses several problems.

Firstly, what definition of “climate variability” is appropriate?

There are several global climate parameters which could be assessed as indicators of climate variability; e.g. temperature, total system heat content, precipitation, etc.

If global temperature were chosen then the seasonal variation each year would indicate that ~4deg.C is well within natural variability because that variation occurs each year.

Using global temperature anomaly enables political objectives such as avoiding 2deg.C rise. In global temperature terms this is hopeless because global temperature rises by double that during each year.

However, global temperature anomaly has been chosen. This seems to be a strange choice that has been made to advance political – not scientific – objectives.

The second problem is that the chosen indicator of climate variability is certainly not a good choice. It is based on the assumption of AGW and not on any real science.

It is difficult to see a problem if the global temperature anomaly were to rise by 2deg.C when the actual global temperature varies by 4deg.C each year.

However, an increase to the ‘seasonal’ variation by 4deg.C would be problematic but global temperature anomaly may not indicate it: colder winters would cancel hotter summers in a calculated anomaly.

Simply, until a sensible, rational and scientific definition of climate variability is obtained there cannot be a sensible, rational and scientific determination of climate variability.

Several people have done similar calculations to those in Nancy’s post.
And several people have used the change to solar radiative forcing during each year in attempt to estimate climate sensitivity.

But I think all such calculations are pointless. The basic point is that ‘climate variability’ is a statistical construct. No such construct is ‘real’: it is an expression of its definition (as I explained in my post at April 6, 2013 at 6:44 am). And nobody has provided a rational definition of ‘climate variability’.

But I can, do, and will say that a rise in global temperature anomaly of 2deg.C would be very unlikely to have discernible effects when global temperature varies by double that each year.

Anyway, those are my thoughts on the subject, and I hope they are helpful to your thoughts.

Richard Courtney’s April 4, 2013 at 2:35 pm comment on the Null Hypothesis is accurate: current climate parameters, including global temperatures, precipitation, extreme weather events, etc., are neither unusual nor unprecedented. All current climate parameters have been exceeded during the Holocene, therefore the Null Hypothesis has not been falsified.

Richard also makes a cogent point when he states that the global temperature change over the past century and a half has been extremely small. It is not unusual for global temperatures to fluctuate by tens of degrees, on decadal time scales. A period of more than 150 years with only a minuscule 0.8ºC temperature fluctuation is unusually benign. We are fortunate to be living in this “Goldilocks” climate. Things could be much worse.

Finally, there is no scientific evidence showing that CO2 is the major cause of global warming. In fact, there is no empirical evidence showing that CO2 is the cause of any global warming. It is my personal belief that AGW may exist as a minor forcing. But that is only my belief, as there are no testable measurements showing that CO2 causes global warming. Without any real world measurements, my belief is merely a conjecture.

In any case, the possible effect of CO2 can be completely disregarded, as it is at best a 3rd-order forcing, which is swamped by second-order forcings, which are in turn swamped by first-order forcings. Observing a putative AGW effect in that context shows how very inconsequential it is.

phi says:
April 6, 2013 at 12:02 pm
Climate variability is much lower than that.
============
The unexpected leveling of temps over the past 16 years suggests that natural variability has been grossly under-estimated — which has led mainstream climate science astray. As a result of this poor estimate, climate science mistakenly attributes almost every change to human activity. This has caused the climate models to go off the rails.

A more reasoned explanation is that their estimates of natural variability are wrong. Given that we see a 3.8 C variation in the overall temp of the earth’s surface every year, it is hard to believe this doesn’t induce some sort of instability in the climate system. At the very least this oscillation should induce all sorts of harmonics and sympathetic vibrations that should be visible at scales much larger than a year.

One obvious answer is that by reducing temperatures to anomalies, climate science has hidden the true magnitude of this vibration. In effect climate science has placed a set of noise cancelling headphones on the climate data, which has fooled the models into underestimating the amount of noise in the system.

note: my ballpark guesstimate of 1.9 as the variance looks a bit low. 2.0 would probably have been better, but it doesn’t significantly affect the result.

richardscourtney says:
April 6, 2013 at 12:57 pm
===========
Thanks Richard, that makes a great deal of sense.

My intuition tells me that if the difference in temperature over 1 year was closer to 0 C than 3.8 C, there would be less variability in the temps year to year, decade to decade. If instead of 3.8 the difference was 13.8 C then I would expect the variability to be greater year to year and decade to decade.

Given that we see a 3.8 C variation in the overall temp of the earth’s surface every year, it is hard to believe this doesn’t induce some sort of instability in the climate system. At the very least this oscillation should induce all sorts of harmonics and sympathetic vibrations that should be visible at scales much larger than a year.

YES!
I have been saying this in many places – including on WUWT – for years.

re. your question to me in your post at April 6, 2013 at 1:56 pm.
I share your intuition, but I know of no indication that it is correct and do not know what such an indication would be.

Please note that the concept of radiative forcing as the driver of climate change is completely without demonstration: it may be correct but has been adopted without any indication it is right.

The climate system exhibits bi-stability (i.e. stable in glacial and interglacial states). This is indicative of the chaotic system having two strange attractors. If this possibility is the reality then the concepts of climate variability and radiative forcing are both mistaken. All we see is the system constantly adjusting to its acting attractor while constantly being disturbed in that adjustment by the oscillations you mention in your paragraph I have quoted.

In my opinion ‘climate scientists’ have forgotten that the most profound scientific statement is
“We don’t know”. And climate science has been halted in its advance for three decades because they have forgotten it.

Nancy Green says:
April 6, 2013 at 1:56 pm
My intuition tells me that if the difference in temperature over 1 year was closer to 0 C than 3.8 C, there would be less variability in the temps year to year, decade to decade.

richardscourtney says:
April 6, 2013 at 2:27 pm
I share your intuition, but I know of no indication that it is correct and do not know what such an indication would be.

What I would often tell my physics students was to imagine an extreme situation and see what conclusions you would draw from it. So in this case, let us assume two very different scenarios. In one case, assume the earth is not tilted on its axis at all and that the orbit is a perfect circle. Then the temperature difference would be closer to 0. In the other case, assume the earth is tilted at 90 degrees instead of 23.5. And also assume the orbit is highly elliptical so the distance varies like a comet’s throughout the year. That would obviously give a huge variation! And in terms of our present discussion, I do not see how very wild swings in anomalies from year to year can be avoided. When put into its proper perspective as Richard has done, an increase of 2 C does not seem that much any more. Thank you for that!

richardscourtney says:
April 6, 2013 at 2:27 pm
The climate system exhibits bi-stability
==========
The 600 million year paleo record certainly shows this, with stable states at 11C and 22C. This suggests our present state of 14.5C is inherently unstable, which suggests that our present “natural variability” could be quite large and highly non-linear. It also suggests that most standard statistical methods will deliver spurious results.

I am between duties I have to perform today so I apologise if this post is perfunctory or I fail to give adequately quick replies to further messages to me today. I will certainly try to ‘catch up’ this evening.

Firstly, I take this opportunity to say that I like your desire to evaluate: it is lack of this desire which most offends me about the bulk of what is called ‘climate science’. Thank you.

I am replying to your post addressed to me at April 6, 2013 at 11:10 pm. It says

richardscourtney says:
April 6, 2013 at 2:27 pm

The climate system exhibits bi-stability

==========
The 600 million year paleo record certainly shows this, with stable states at 11C and 22C. This suggests our present state of 14.5C is inherently unstable, and that our present “natural variability” could be quite large and highly non-linear. It also suggests that most standard statistical methods will deliver spurious results.

I agree.
Indeed, in my opinion the problem is greater than you suggest.

Transitions between the two states have taken the form of ‘flickers’ which shift between the states over periods of a few decades. The system often switched back and forth in a series of such flickers until it stabilised in one state or the other.

This poses the questions as to why the two stable states exist and why we are not now in one of them.
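This flickering behaviour can be illustrated with a toy stochastic model. It is not a climate model, only a sketch of the qualitative picture under stated assumptions: a double-well drift x - x^3 whose stable states at -1 and +1 stand in for the two attractors, plus random forcing that occasionally kicks the system across the barrier between them:

```python
import math
import random

def simulate_bistable(steps=20000, dt=0.01, sigma=0.7, seed=0):
    """Euler-Maruyama integration of dx = (x - x**3) dt + sigma dW.

    The drift x - x**3 has two stable fixed points at x = -1 and
    x = +1 (standing in for the two attractors), separated by a
    barrier at x = 0.  The noise term occasionally drives the system
    across the barrier, producing 'flickers' between the states.
    Returns the number of sign changes (state switches) observed.
    """
    rng = random.Random(seed)
    x = 1.0
    crossings = 0
    prev_sign = 1
    for _ in range(steps):
        x += (x - x**3) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        sign = 1 if x > 0 else -1
        if sign != prev_sign:
            crossings += 1
            prev_sign = sign
    return crossings

print(simulate_bistable())  # with this noise level, many switches occur
```

Lower the noise (sigma) and the system sits in one well for long stretches; raise it and it flickers constantly. Which regime the real climate is in is exactly the open question being discussed here.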

I repeat that ‘climate science’ has stalled because it has forgotten the importance of recognising what is known to be not known. A starting point for understanding the variability of the existing global climate would be an attempt to understand the constraints (i.e. true boundary conditions) of the climate system in each of its observed states, including that which now exists.

Until these constraints are understood then factors which could shift the system to its 11C or 22C states cannot be known. I usually try to avoid this subject because it excuses scare-mongering about imagined ‘tipping points’. However, as you say, it has fundamental importance to any investigation of climate changes which may or may not happen.

It also raises your concern about statistical analysis. Climatology is a statistical science by definition, but ‘climate science’ as currently practiced displays astonishing ignorance of fundamental statistical limitations. For example, a NASA GISS scientist has come to WUWT and made statements about confidence limits that display a total lack of understanding concerning what confidence limits do – and do not – indicate.

Linear trends are the standard method for assessing climate changes but – as you say – non-linearity is the norm for all climate behaviours. This alone justifies your suggestion that “most standard statistical methods will deliver spurious results”. Applying an assumption of linearity to a non-linear process is guaranteed to provide “spurious results”.
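The point is easy to demonstrate: fit an ordinary least-squares trend to a purely cyclical series and the "trend" you recover depends entirely on where the fitting window happens to sit in the cycle. A minimal sketch (synthetic data, function name my own):

```python
import math

def linear_trend(y):
    """Ordinary least-squares slope of y against index 0, 1, ..., n-1."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    num = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

# A purely cyclical series with no long-run trend at all (period 60)
cycle = [math.sin(2 * math.pi * t / 60) for t in range(240)]

# The fitted "trend" is whatever limb of the cycle the window sits on
rising = linear_trend(cycle[:20])     # window on the rising limb
falling = linear_trend(cycle[25:45])  # window on the falling limb
print(rising > 0, falling < 0)        # both "trends" are artefacts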

There are real problems with ‘climate science’ and your post I am answering highlights them. Science is about seeking the closest approximation to ‘truth’ which we can obtain. But ‘climate science’ as currently practiced is about providing evidence which shows AGW is a problem. Indeed, this thread is about your very fine explanation of why one attempt to provide such evidence is flawed according to basic principles of science, logic and statistics.

Reality is often a problem (as anybody exposed to a hurricane will say) and it is time to consider what the realities of climate are. As D B Stealey says, we are in a “Goldilocks climate”, and I think it would be useful to know why because the 11C and 22C climates are more typical of past global climates.

Is the present climate state stable or likely to switch to one of the other states? I don’t know. Nobody knows. And until we understand the nature of existing climate nobody can know or can knowledgably apply appropriate statistical analyses of climate.

richardscourtney says:
April 7, 2013 at 1:51 am
Applying an assumption of linearity to a non-linear process is guaranteed to provide “spurious results”.
==============
The problems inherent in solving multivariate non-linear equations make it very tempting to approximate the problems using linear methods. And in many industries this can make sense – because you can validate the results to see if the approximation is valid.

One example that comes to mind is the oil industry and ground penetrating sonar data. A fast solution can be found by iteration of linear least squares approximations. If your result is not accurate you won’t find oil and you are quickly out of business. If your method is valid you will find oil and your product will be in demand. This quickly eliminates errors.
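The "iteration of linear least squares approximations" idea can be sketched as Gauss-Newton: linearise the nonlinear model around the current estimate, solve the resulting linear least-squares problem for a correction, and repeat. A toy one-parameter example (names and data are illustrative, not from any real exploration code):

```python
import math

def fit_exponential_rate(t, y, b0=0.1, iters=20):
    """Gauss-Newton fit of the model y ~ exp(b * t).

    Each iteration linearises the model around the current estimate b
    (Jacobian J_i = t_i * exp(b * t_i)) and applies the linear
    least-squares correction sum(J*r) / sum(J*J) to b.
    """
    b = b0
    for _ in range(iters):
        resid = [yi - math.exp(b * ti) for ti, yi in zip(t, y)]
        jac = [ti * math.exp(b * ti) for ti in t]
        b += sum(j * r for j, r in zip(jac, resid)) / sum(j * j for j in jac)
    return b

t = [i * 0.2 for i in range(10)]      # sample times 0.0 .. 1.8
y = [math.exp(0.5 * ti) for ti in t]  # noiseless data, true rate 0.5
print(round(fit_exponential_rate(t, y), 6))  # converges to 0.5
```

Here the validation feedback the comment describes is immediate: on data the model can actually fit, the iterated linear approximation converges to the true parameter; on data it cannot fit, the residuals stay large and the failure is visible.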

Similarly in weather forecasting. If your forecast is no good this will quickly become apparent, to the point where in Australia it was discovered that simply forecasting yesterday’s actual weather for today was more accurate than the government weather bureau’s forecasts.

However, in climate science there is no FACTUAL feedback loop to separate bad science from good science, because of the 30 year lag between weather and climate. By the time someone discovers your science is worthless you are retired. The only true feedback comes in the form of other people’s OPINIONS about your work, which makes the field inherently political. It matters not how accurate your results are; rather, what matters is how accurate people believe they are.

This leads to the “David Copperfield” effect in climate science. Dress up a simple trick in an elaborate stage setting and people will forget that they are seeing an illusion.

Yes. You are right and I stand corrected. I should have written:
Applying an assumption of linearity to a non-linear process is guaranteed to provide “spurious results” unless the approximation of linearity can be validated empirically.

However, I don’t feel chastened because I was discussing ‘climate science’ and – as you say – in that context the practicalities of the required empirical validation are usually insurmountable.

Anyway, I take your point and thank you for the correction.

And I like your point about “no factual feedback loop to separate bad science from good science”.
I will use it if you don’t mind my using it.