Driving Forces

There’s a new paper published in Nature’s Scientific Reports called “Identification of the driving forces of climate change using the longest instrumental temperature record”, by Geli Wang et al., hereinafter Wang2017.

By “the longest instrumental temperature record” they mean the Central England Temperature, commonly called the “CET”. Unfortunately, the CET is a spliced, averaged, and adjusted temperature record. Not only that, but the underlying datasets from which it is constructed have changed over time. Here are some details from the study by Parker et al.

Between 1772 and 1876 the daily series is based on a sequence of single stations whose variance has been reduced to counter the artificial increase that results from sampling single locations. For subsequent years, the series has been produced from combinations of as few stations as can reliably represent central England in the manner defined by Manley. We have used the daily series to update Manley’s published monthly series in a consistent way.

We have evaluated recent urban warming influences at the chosen stations by comparison with nearby rural stations, and have corrected the series from 1974 onwards.

According to the paper, no fewer than 14 distinct datasets were used to construct the CET.

Perhaps predictably, the authors of Wang2017 completely fail to mention any of this … instead, they simply say:

As the world’s longest instrumental record of temperature, the Met Office Hadley Centre’s CET time series represents the monthly mean surface air temperature averaged over the English midlands and spans the period January 1659 to December 2013.

Well … no, not really. And more to the point, using such a spliced, averaged, and adjusted dataset for an analysis of the underlying “driving forces” is totally inappropriate.

Now, in the Wang2017 analysis, they claim to find a couple of “driving forces” of the CET. Of these they say:

The peak L1=3.36 years seems to empirically correspond to the El Niño-Southern Oscillation (ENSO) signal, which has a period range of within 3 to 6 years. ENSO is arguably the most important global climate pattern and the dominant mode of climate variability13. The effect of ENSO on climate in Europe has been studied intensively using both models and observational or proxy data e.g. refs 14, 15, and a consistent and statistically significant ENSO signal on the European climate has been found e.g. refs 14 and 16.

The peak L4=22.6 years is coincident with the Hale sunspot cycle.

Let me start by saying that a claim that something “seems to empirically correspond” with something else is not a falsifiable claim … and that means it is not a scientific claim in any sense. And the same is true for a claim that something “is coincident with” something else. The use of such terms is scientific doublespeak, bafflegab of the highest order.

Setting that aside, here’s what the CET actually looks like:

Now, there is a commonly used way to determine whether two datasets are related: a cross-correlation analysis. It shows more than just the overall correlation of the two datasets; it shows the correlation at various lag and lead times. Here is the cross-correlation analysis of the Central England Temperature and the El Niño datasets:

Does El Niño affect the temperature in Central England? Well, perhaps, with a half-year lag or so. But it’s a very, very weak effect.
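For anyone who wants to try this at home, here is a minimal sketch of a lagged cross-correlation in Python (numpy only). The series below are synthetic stand-ins, not the actual CET or Niño data; substitute the real monthly records to reproduce the analysis.

```python
import numpy as np

def lagged_corr(x, y, lag):
    """Pearson correlation of x[t] against y[t - lag].

    A positive lag means y leads x by `lag` time steps.
    """
    if lag > 0:
        a, b = x[lag:], y[:-lag]
    elif lag < 0:
        a, b = x[:lag], y[-lag:]
    else:
        a, b = x, y
    return np.corrcoef(a, b)[0, 1]

# Synthetic stand-ins: y leads x by six months, plus a little noise
rng = np.random.default_rng(0)
y = rng.standard_normal(500)
x = np.roll(y, 6) + 0.1 * rng.standard_normal(500)

lags = range(-12, 13)
cc = [lagged_corr(x, y, k) for k in lags]
print("best lag:", list(lags)[int(np.argmax(cc))])  # recovers the 6-month lead
```

Scanning the correlation across lags like this is exactly what the cross-correlation plot shows: a sharp, strong peak at one lag indicates a relationship, while a flat, weak curve indicates none.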

Then we have their claim about the relationship of the CET with sunspots, wherein they make the claim that a 22.6-year signal is “coincident with” the sun’s Hale Cycle. “Coincident with” … sheesh …

Now, the “Hale Cycle” reflects the fact that around the time of the sunspot maximum, the magnetic polarity of the sun reverses. As a result, the Hale Cycle is the length of time from any given sunspot peak to the peak of the second sunspot cycle following the given peak.

And how long is the Hale Cycle? Well, here’s a histogram of the different lengths, from NASA data …

So … is a signal with a 22.6-year cycle “coincident with” the length of the Hale Cycle? Well, sure … but the same is true of any cycle length from 17 to 28 years. Color me totally unimpressed.
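The peak-to-second-peak definition is trivial to compute. Note that the maximum years below are rough illustrative values I’ve assumed for the sketch, not an official series:

```python
# Rough years of 20th-century solar maxima (illustrative values only)
maxima = [1907, 1917, 1928, 1937, 1947, 1958, 1968, 1979, 1989, 2000, 2014]

def hale_lengths(peak_years):
    """Hale cycle length: from one sunspot peak to the second following peak."""
    return [peak_years[i + 2] - peak_years[i] for i in range(len(peak_years) - 2)]

print(hale_lengths(maxima))  # the lengths scatter over a range of years
```

Even with these approximate dates, the point stands: the “Hale Cycle” is not one number but a wide scatter of lengths, so matching any single period to it is not much of a match.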

Next, do the sunspots actually affect the temperature of Central England? Again, the cross-correlation function comes into play:

Basic answer is … well … no. Cross-correlation shows no evidence that the sunspots have any significant effect on the CET.

Finally, what kinds of signals do show up in the CET data? To answer this question, I use the Complete Ensemble Empirical Mode Decomposition method, as discussed here. Below, I show the CEEMD decomposition of the monthly CET data. The upper graph (blue) shows the different empirical mode cycles (C1 to C7) which, when added together along with the Residual, give us the raw data in the top panel.

The lower graph (red) shows the periodogram of each of those same empirical mode cycles.

Of all of these empirical modes, the strongest signal is at about 15 years (C4, lower red graph). There is a signal at about 24 years (C5, lower red graph), but it is much weaker, less than half the strength. In the corresponding mode C5 in the upper blue graph we can see why—sometimes we can see a cycle in the 25 to 30-year range, but it fades in and out.

To me, this is one big advantage of the CEEMD method—it shows not only the strength of the various cycle lengths, but also just where in the entire timespan of the dataset the cycles are strong, weak, or non-existent. This lets us see whether we are looking at a real persistent cycle which is visible from the start to the finish of the data, or whether it is a pseudo-cycle which fades in and out of existence.

Finally, is there any evidence of anthropogenic global warming in the CET data? To answer this, look at the residual signal in the bottom panel of the blue graph above. This is what remains after all of the underlying cyclical waves have been removed … looking at that it seems that there is no unusual recent warming of any kind.

My conclusion? If you use enough different statistical methods, as Wang2017 has done, you can dig out even the most trivial underlying cycles of a dataset … but the reality is, when you decompose even the most random of signals, it will show peaks in the underlying cycles. It has to—as Joe Fourier showed, every signal can be decomposed into underlying simpler waves. However, this does not mean that those underlying simpler waves have any kind of meaning or significance.
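This is easy to demonstrate. A periodogram of pure white noise, which by construction contains no real cycles at all, still shows prominent peaks:

```python
import numpy as np

rng = np.random.default_rng(42)
noise = rng.standard_normal(4260)  # ~355 years of monthly "data", pure noise

# Periodogram: squared magnitude of the Fourier transform
power = np.abs(np.fft.rfft(noise - noise.mean())) ** 2
freq = np.fft.rfftfreq(noise.size, d=1.0)  # cycles per month

k = power[1:].argmax() + 1  # strongest non-zero-frequency "cycle"
print(f"spurious peak at {1.0 / freq[k] / 12.0:.1f} years")
```

Run it with a different seed and you get a different “dominant cycle” every time. Finding a spectral peak is not evidence of a physical driver; the peak has to be shown to be stronger and more persistent than what noise alone produces.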

Finally, an oddity. Look at Mode C2 in the upper blue graph. I suspect that those blips are related to the spliced nature of the CET dataset. When you splice two datasets together, it seems to me that you’ll get some kind of higher-frequency “ringing” around the splice. Below I show Mode C2 along with what I can determine regarding the dates of the splices in the CET …

Is this probative of the theory that these are related to the splices? By no means … but it is certainly suggestive.

Here on the coast north of San Francisco, after two days of one hundred plus temperatures the clouds and fog are returning, and the evening is cool … what a world.

My best to everyone, in warm and cool climes alike,

w.

My Usual Request: When you comment, please QUOTE THE EXACT WORDS THAT YOU ARE DISCUSSING, so that we can all understand just what you are referring to.

My Other Request: Please do not stop after merely claiming I’m using the wrong dataset or the wrong method. I may well be wrong, but such observations are not meaningful unless and until you add a link to the proper dataset or give us a demonstration of the right method.


118 thoughts on “Driving Forces”

Trying to make sense of red noise is a fool’s errand; too bad it pays a few idiots so well. Maybe Congress needs to see the old film The Music Man en masse, though that would not help much, since the film is well over our Congressmen’s and Senators’ heads. Somehow Jeff Flake’s name is very revealing about his mental state. McCain seems to have had the brain tumor for well over twenty years; his dementia rivals Reagan’s later years.

“Finally, is there any evidence of anthropogenic global warming in the CET data? To answer this, look at the residual signal in the bottom panel of the blue graph above. This is what remains after all of the underlying cyclical waves have been removed … looking at that it seems that there is no unusual recent warming of any kind.”

Really? Well, I guess it depends on what you understand by “unusual”. It is certainly unusual if you compare it to the rest of the same data set. Warming from 1900 onwards is clearly bigger and faster than before, in the residual. Although according to the climate establishment, only the part after 1950 could be anthropogenic. If that’s what you meant (1950-2013 not different from 1900-1950), then I agree.

Whilst CET is informative of past historic behavioral patterns in temperatures, one has to be very careful with CET since the whole of central England is now one great heat island. Not simply due to urbanisation and urban sprawl, but also changes in farming, irrigation and water management.

No doubt Climatereason will check in and comment since he probably knows more about CET than anyone else and has posted articles on Judith Curry’s site.

Two quick comments:
Firstly, if the paper by Wang et al. was worth reading it wouldn’t be in Scientific Reports. This is an open access journal designed by Nature to get authors to pay to publish, and it has essentially no quality standards except that the reviewers don’t think the manuscript is wrong. If this paper had anything significant to say then it would be published elsewhere.

Secondly, Willis and Wang are analysing two different things. The first thing that Wang et al. do is to construct a theoretical time series of the driving force for the Central England Temperature record. This is clearly similar to, but different from, the temperature record itself. I have no idea how this driving force is constructed and so can’t comment, but any discussion about the periodicities in the temperature record is irrelevant for the purposes of this paper. And given where the work was published I imagine that this work was rejected several times from higher quality journals before ending up here.

“the reviewers don’t think the manuscript is wrong.”
Gosh, I wish all journals were like that – on account of that being exactly and only what peer review ought to be. Trouble is they use “not interesting to our audience” to sink papers that are inconsistent with their thoughts and opinions.

Hi Willis, always interesting to see periodic analysis of climate, and you seem to make a better job of it than Wang et al.

one point:

To me, this is one big advantage of the CEEMD method—it shows not only the strength of the various cycle lengths, but also just where in the entire timespan of the dataset the cycles are strong, weak, or non-existent.

When Fourier decomposition fits a cycle it means that it is always there, even when not visible during some time frames, because it is cancelled by other cycles that are in anti-phase at that time. Fourier does not deal with cycles of variable magnitude.

So if your periodogram of C5 shows two somewhat broad peaks and the decomposition seems to have times which are rather flat, that is due to constructive and destructive interference of these cycles. Sometimes they add, sometimes they cancel. It does not mean that they have variable amplitudes. They do not.

When Fourier decomposition fits a cycle it means that it is always there, even when not visible during some time frames, because it is cancelled by other cycles that are in anti-phase at that time. Fourier does not deal with cycles of variable magnitude.

So if your periodogram of C5 shows two somewhat broad peaks and the decomposition seems to have times which are rather flat, that is due to constructive and destructive interference of these cycles. Sometimes they add, sometimes they cancel. It does not mean that they have variable amplitudes. They do not.

Greg, while that may be true, it’s a difference that makes no difference. If there is destructive interference that totally cancels out a given cycle, in the real world that cycle doesn’t exist, regardless of whether Joe Fourier says it is there or not.

Here’s an example. When two pure tones are near each other, we hear a “beat frequency”. Now, this beat frequency may NOT be present in the Fourier analysis … but it absolutely exists in the real world. We know this because we can hear it, even though Joe Fourier says it doesn’t exist …

Whether or not “beat frequencies” exist is more subtle than it appears. In a linear system they do not exist, and it is only when the intensity is detected with a “square law” detector that they appear. And with two closely spaced frequencies what you hear is a change in volume which the brain interprets as a third harmonic. It is not real.

Whether or not “beat frequencies” exist is more subtle than it appears. In a linear system they do not exist, and it is only when the intensity is detected with a “square law” detector that they appear. And with two closely spaced frequencies what you hear is a change in volume which the brain interprets as a third harmonic. It is not real.

Thanks, Germonio. I tune my guitar using beat frequencies. If they were not real, I could not do that.

Also, a beat frequency is not a “third harmonic”, although that is also real. I use them sometimes when playing the guitar.

Greg, while that may be true, it’s a difference that makes no difference. If there is destructive interference that totally cancels out a given cycle, in the real world that cycle doesn’t exist, regardless of whether Joe Fourier says it is there or not.

If the aim is to detect and determine drivers of a physical process, it is essential to understand that a periodic forcing may still be present, active, and real even when it is being cancelled out by other effects and not visible.
That is why, when you ask Joe Fourier’s advice on a signal, you should be prepared to accept what he tells you ;)

If you do not realise and accept this you may erroneously conclude that a driver has no effect on the system under study because at a certain point in time it is not visible. It is a difference which matters.

Willis, while “beat notes” is really separate from the main subject, I think it might be appropriate to note that “beat notes” (AKA mixing products of two or more signals) are a product of the detector used (your ears, or the detector stage in a radio receiver), and are not present in the environment ahead of the detector. This is why Fourier doesn’t “hear” them.

Willis, while “beat notes” is really separate from the main subject, I think it might be appropriate to note that “beat notes” (AKA mixing products of two or more signals) are a product of the detector used (your ears, or the detector stage in a radio receiver), and are not present in the environment ahead of the detector. This is why Fourier doesn’t “hear” them.

I disagree entirely. The beat notes are present in the air, as patterns of compression. They do NOT require a detector stage in a radio receiver. Instead, they are picked up by a microphone. Why?

Because they really exist, as patterns of rarefaction and compression, in the air itself. The “beat” takes place in the air, not in the radio receiver or in my ears. It is a real phenomenon.

As perhaps a better example, you can see the same thing with trains of waves in water. At times they reinforce each other, and at times they cancel each other out. This is NOT happening in our eyes (the “detector used” in your terms) … it is happening in the real world.

Indeed … but you must understand that a CEEMD analysis is NOT a Fourier analysis. It is a totally different analysis, one which DOES deal with cycles of variable magnitude. That is one advantage of CEEMD.

Thanks Willis, I did appreciate that the CEEMD decomposition is not FA. However, each band is then subjected to spectral analysis. That is the level at which I was making those points, e.g. the variable visibility in the C5 band and the periodogram of that band.

There is no contradiction with the beats phenomenon either. FA breaks down such a signal into two close but different fixed frequencies. But this is mathematically identical to an intermediate frequency which is amplitude modulated. There is nothing more “real world” about either representation.

At certain frequencies, human perception may interpret the sound as modulated beats; that is fine. FA resolves this as two fixed amplitudes: that is the “perception” of the Fourier method. Both are valid, real world, and totally mathematically equivalent.

To get the human “beats” from the FA result you take the average frequency, amplitude-modulated by a signal at half the difference of the FA frequencies. You then need to realise that human perception of sound only hears the amplitude of the intensity of the sound and is insensitive to the phase. So we do not hear the smoothly varying sine wave envelope but the folded sine wave bumps, i.e. we perceive two bumps per cycle. The “beats” are twice as fast as the physically real sine wave modulation.

How about this: the circa 60y cycle in the AMO is a “beats” phenomenon between the 9.1y lunar and the nominally 10.8y solar drivers?
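The two-tone identity described above is easy to verify numerically: the sum of two close tones is identically equal to a carrier at their average frequency, amplitude-modulated at half their difference frequency. A quick check (tone frequencies chosen arbitrarily for the example):

```python
import numpy as np

f1, f2 = 440.0, 444.0                      # two close pure tones, in Hz
t = np.linspace(0.0, 1.0, 48000, endpoint=False)

two_tones = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# The same signal written as an amplitude-modulated average frequency:
carrier = np.sin(2 * np.pi * (f1 + f2) / 2 * t)
envelope = 2 * np.cos(2 * np.pi * (f2 - f1) / 2 * t)

print(np.allclose(two_tones, envelope * carrier))  # True: the two forms are identical
```

The ear tracks the magnitude of the envelope, which bumps twice per 2 Hz modulation cycle, so we hear four beats per second, i.e. the |f2 − f1| beat rate: exactly the “two bumps per cycle” point above.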

I do not know what drives the AMO cycle, but I do know temps are regulated by the AMO and PDO, and the way these two control how most of the water vapor gets distributed across the planet to cool as it moves poleward.

the same is true for a claim that something “is coincident with” something else. The use of such terms is scientific doublespeak, bafflegab of the highest order.

I must beg to differ. Personally, I like the expression “coincident with” since it emphasises the point that correlation is not causation, and I consider the expression should be used more often.

As a scientist, one knows (or ought to know) that correlation does not mean causation, but even so, the mere use of the expression correlation carries a subliminal message which may impact upon the way one reads a paper or thinks about a point. To a lay person, the use of the word correlation very probably is suggestive of a causative link.

In my opinion coincident is precisely the right expression to use. It implies that the link may be nothing more than a fortuity and not in any way whatsoever causally connected, or that there could be some causality but we just do not know.

the same is true for a claim that something “is coincident with” something else. The use of such terms is scientific doublespeak, bafflegab of the highest order.

I must beg to differ. Personally, I like the expression “coincident with” since it emphasises the point that correlation is not causation, and I consider the expression should be used more often.

Huh? “Coincident with” is totally undefined. It has no numerical bounds. It cannot be falsified. I could say that short skirts are coincident with Dow-Jones highs … but that has no meaning because it is neither quantified nor quantifiable.

In our current example, would a 20-year cycle be “coincident” with Hale Cycle lengths? How about an 18-year cycle? Coincident or not? Their claim is TOTALLY SUBJECTIVE, unmeasurable, and thus not science in any form. Heck, it doesn’t even fit all that well, since their value is neither the mean, the median, nor somewhere between …

Coincidence is defined by the fact itself. If it cannot be defined by the fact, it cannot be coincident, and one is forced to use more general expressions such as “there are some similarities between”, which is an expression that I might use when comparing paleo proxy records of temperature and CO2. No correlation, no coincidence, but some similarities.

It does not matter whether it can or cannot be falsified. It is simply a factual statement, and nothing more.

Thus for example, If one looks at the satellite data set, one will note that there is a step change in temperature of around 0.3degC coincident with the Super El Nino of 1997/98 (I don’t like using one hundredths of a degree, so I have rounded the change).

That is just a fact. It points out that two happenings have occurred at the same time, ie., in and around 1997/98 there was a Super El Nino, and in and around 1997/98 there is a step change in temperature.

It does not state that the Super El Nino of 1997/98 drove the step change in temperature. We simply do not know what caused the step change in temperature that occurred. All we know is that the step change in temperature happened, when it happened, and also that it happened at the same time that there was a Super El Nino. Are the two in some way connected? Well, maybe. Presently our knowledge and understanding is not sufficient to answer that question, but it is a noteworthy point that the step change in temperature and the Super El Nino both occurred in a similar time frame, such that it highlights an area of investigation. That is why it would be appropriate to say the two events are coincident with one another.

What needs to be falsifiable is conclusions that are drawn. These conclusions need to be capable of testing, if they are to be something other than mere opinion.

As regards your cycle example, you are right that it is subjective, and I consider that one always has to use an element of common sense. It is difficult to see that a 20 year cycle and an 18 year cycle can be coincident with one another. Factually, they soon become out of phase with one another. With a short data set, it would be possible that the peak of each cycle is coincident with one another, or that the start of one cycle is coincident with the peak in the other cycle, but it is just a factual matter, and one must take account of margins of error, uncertainties and de minimis.

The same with “correlation”. It too is subjective and only applies to the data set used. It’s whether it can be used for prediction that counts. richard verney September 4, 2017 at 12:56 am perhaps says it better.

When one talks about a step, one has to differentiate between the riser and the tread. In the specific case, the riser is the ENSO event. And the ENSO event is not simply the El Nino but also the following La Nina and recovery therefrom.

The fact is that there is quite a long evolution in ENSO events, and the satellite appears to be more sensitive to warming than it is to cooling possibly due to convection carrying warmth up to the altitude at which the satellite takes its measurements. The satellite is also sensitive to short lived volcanic cooling possibly because the cooling occurs at altitude due to high level aerosol emissions.

The step change is the completion of the ENSO cycle, and whatever small lag there is in the system to that event particularly the oceans.

What one sees in the data set is that before the Super El Nino of 1997/98, the anomaly was tracking at around the –0.15 degC level, and after the completion of the ENSO cycle (that includes the subsequent La Nina and recovery therefrom), the anomaly tracks around the +0.2 degC level.

Coincidence is defined by the fact itself. If it cannot be defined by the fact, it cannot be coincident, and one is forced to use more general expressions such as “there are some similarities between”, which is an expression that I might use when comparing paleo proxy records of temperature and CO2. No correlation, no coincidence, but some similarities.

It does not matter whether it can or cannot be falsified. It is simply a factual statement, and nothing more.

DAV September 4, 2017 at 2:32 am

The same with “correlation”. It too is subjective and only applies to the data set used. It’s whether it can be used for prediction that counts. richard verney September 4, 2017 at 12:56 am perhaps says it better.

You guys are missing my point, likely due to my lack of clarity.

We have a mathematical method to measure the correlation of two datasets. We also have methods to determine whether that correlation is statistically significant.

We have no mathematical method to measure whatever it is you might be calling coincidence. Since we cannot measure it we cannot determine if it is statistically significant.

THEREFORE, one is a scientifically valuable and falsifiable statement, and one is not.

As in this example … where anything from a cycle length of 18 to 29 could be said to be “coincident” with the Hale Cycle … but when we measure the correlation of the Hale Cycle and the CET we find it is not statistically significant.
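To be concrete about “measure” and “statistically significant”, here is one simple way to do both: a Pearson correlation checked with a permutation test. This is a sketch, and note the caveat in the comment: shuffling assumes serially uncorrelated data, which real climate series are not, so on autocorrelated data this will overstate significance.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    return np.corrcoef(x, y)[0, 1]

def permutation_pvalue(x, y, n_perm=2000, seed=0):
    """Two-sided p-value for the correlation, by shuffling y.

    Caveat: assumes exchangeable (serially uncorrelated) data; for
    autocorrelated series like the CET this overstates significance.
    """
    rng = np.random.default_rng(seed)
    observed = abs(pearson_r(x, y))
    exceed = sum(abs(pearson_r(x, rng.permutation(y))) >= observed
                 for _ in range(n_perm))
    return (exceed + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
x = rng.standard_normal(200)
y = 0.7 * x + rng.standard_normal(200)   # genuinely related series
z = rng.standard_normal(200)             # unrelated noise

print(permutation_pvalue(x, y))  # small: the correlation is significant
print(permutation_pvalue(x, z))  # far larger: no evidence of a relationship
```

The point is that both numbers are computable and repeatable, so the claim “these two series are correlated” can be tested and falsified, which is exactly what “is coincident with” cannot offer.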

the ENSO event is not simply the El Nino but also the following La Nina

You are drifting even further from the evidence. There is no such thing as an ENSO event made up of an El Niño and a following La Niña. The big 2015-16 El Niño was not followed by a La Niña, and the frequencies of Niños and Niñas can be quite different over a certain period of time. ENSO is an oscillation that is anything but regular, and the biggest increases in temperatures come after strong La Niña events. This is only logical as El Niño discharges oceanic temperature to the atmosphere and eventually to space, lost by the climate system, while La Niña recharges oceanic temperature from solar irradiation, due to lower cloud cover in the tropics, that remains in the climate system.

We have a mathematical method to measure the correlation of two datasets. We also have methods to determine whether that correlation is statistically significant
…
THEREFORE, one is a scientifically valuable and falsifiable statement, and one is not.

No. One is quantifiable while the other is qualitative but neither is particularly informative.

As for falsifiable, a measure of correlation is no more falsifiable than the mean of the data. It is what it is. You will get the same value every time for the same data.

I’m not a mind reader, but I suspect that measures of correlation and statistical significance don’t mean what you think they mean. See: http://wmbriggs.com/post/4024/ for more on this topic.

Of all of these empirical modes, the strongest signal is at about 15 years (C4, lower red graph). There is a signal at about 24 years (C5, lower red graph), but it is much weaker, less than half the strength.

Why do you not mention the circa 21y peak? In C6 it is a narrower, clearer peak than the broad 24y lump. It is also present in C5, so you need to add the presence in the two bands C5 and C6, since the arbitrary processing bands have split this peak across the two because, by chance, it falls away from the centre of both and is attenuated by the bandpass effects of CEEMD in both cases. It is not even clear that adding will give an accurate measure, but clearly the attenuated peak in either band is not a full representation of its strength.

Maybe CEEMD could be modified to produce differently centred bands (maybe by choosing more or fewer bands in the processing).

Since you go to some length to challenge the idea of the Hale cycle having an effect, it does not seem right to simply ignore this circa 21y peak in your commentary.

Why do you not mention the circa 21y peak? In C6 it is a narrower, clearer peak than the broad 24y lump. It is also present in C5, so you need to add the presence in the two bands C5 and C6, since the arbitrary processing bands have split this peak across the two because, by chance, it falls away from the centre of both and is attenuated by the bandpass effects of CEEMD in both cases.

Huh? The peak at 21 years is minuscule, whether you add C5 and C6 together or not.

Maybe CEEMD could be modified to produce differently centred bands (maybe by choosing more or fewer bands in the processing).

Please, please read the explanation I gave in the link in the head post. The empirical modes are split based on the data itself, that’s why they are “empirical”. Choosing more or fewer bands does NOT change the bands, it just tosses the rest into the “Residual”.

Since you go to some length to challenge the idea of the Hale cycle having an effect, it does not seem right to simply ignore this circa 21y peak in your commentary.

It is teeny. There are dozens of other peaks larger than the 21-year peak. Why should I pay attention to something that is down in the weeds and lost in the noise?

As you rightly say the CET is an average. It is an average of all seasons. Thus any seasonal signals are diluted.
If you just look at the January CET it shows winter temperatures steadily becoming milder overlaid with a variable cycle that is no doubt influenced by jet stream periodicity.
Interestingly the June record shows no temperature change over the 300 years.

If you just look at the January CET it shows winter temperatures steadily becoming milder overlaid with a variable cycle that is no doubt influenced by jet stream periodicity.

Interestingly the June record shows no temperature change over the 300 years.

Huh? How about you demonstrate that with some data. I just looked at the January CET data. It is warming at the rate of 0.03°C per decade. June is warming at a rate of 0.01°C per decade.

As to the January data being “overlaid with a variable cycle”, that has no meaning. You could say the same about virtually any observational dataset. All you’ve said is that the dataset varies with time … as do all of them.

Finally, as far as I know we have no reason to assume that any temperature change is “no doubt influenced by jet stream periodicity”.

No doubt? I doubt it. If you think differently, please provide us with a couple hundred years of data regarding the location of the “jet stream” so we can analyze the relationship between that and the CET.
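For what it’s worth, the per-decade rates I quote above are simple ordinary least-squares slopes. A sketch of the computation, run here on a synthetic monthly series with a known trend rather than the actual CET data:

```python
import numpy as np

def trend_per_decade(monthly_series):
    """OLS slope of a monthly temperature series, in degrees per decade."""
    years = np.arange(len(monthly_series)) / 12.0
    slope_per_year = np.polyfit(years, monthly_series, 1)[0]
    return slope_per_year * 10.0

# 100 years of synthetic monthly data warming at 0.03 degrees per decade
t = np.arange(1200) / 12.0
synthetic = 10.0 + 0.003 * t

print(round(trend_per_decade(synthetic), 4))  # 0.03
```

Feed the January-only or June-only CET values into the same function and you get the per-decade rates directly.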

Here is a link to a Clive Best blog post showing CET seasonality: http://clivebest.com/?attachment_id=7018
Philip Eden, a UK weatherman, used to explain that the UK winter temperature was related to the number of days that we experienced high pressure systems that gave an easterly cold continental airflow and therefore reduced the prevailing mild south-westerly Atlantic weather. The periodicity of this I would attribute to the jet stream.

What is interesting from that plot is that the 20th century was not warmer than ~1775 until around the late 1990s.

It was only around the lead up to Super El Nino of 1997/98 and afterwards, that 20th century temperatures exceeded those seen in ~1775. So much for claims of unprecedented warmth.

Perhaps more significant than that is to look at the rate of rise in temperatures during the last half of the 20th century (i.e., the modern warm period) and compare that with the rate of rise in temperatures between 1680 and 1735, or for that matter between 1880 and 1940.

As can be seen, there is nothing remarkable about the rate of temperature rise in the last half of the 20th century; there is nothing unprecedented about the rate of change in temperatures, and no suggestion that increasing levels of CO2 have brought about changes in temperatures at a higher rate than previous warming episodes seen in CET.

The peak L1 = 3.36 years seems to empirically correspond to the El Niño-Southern Oscillation (ENSO) signal, which has a period range of within 3 to 6 years.

This is nonsense. Either you have a period or you do not. A period cannot change by a factor of two and still be a period. If you do not have a fixed period in ENSO you cannot attribute it to a fixed peak in a periodic decomposition.

There is quite a strong peak in the cross-correlation of UAH TLT (lower troposphere air temperature) and the ENSO index:

This does NOT establish that ENSO is a “driver”; there are many regions showing both positive and negative correlation with ENSO. That is not consistent with ENSO “driving” changes, simply with it being a planet-wide variability which can be characterised by the ENSO index.

There is a clear periodic variation in the positive-lag section of the graph with a period of about 272 mo / 6 = 45 mo, or 3.8 years. That is over the rather short period of the satellite data since 1979.

I always look forward to your essays Willis.
Both here and on your Skating blog.
The statistical treatments of temp reconstructions are however a bit beyond my ken.
Can you do a pre-emptive piece soon on whether someone on “The Team” has done a paper asserting the correlation of chicken entrails with agw?

It is good to see a scientific post by you. These days, these are too few and too far between.

Given the way climate science has developed, namely the extrapolation of trends from data sets not fit for scientific purpose, you highlight a deficiency in this science, namely a lack of competence in data handling, data presentation and statistics. In fact, statistics ought to be the key foundation course, and we all know what Ernest Rutherford said about the use of statistics and their role in science (“If your experiment needs statistics, you ought to have done a better experiment.“).

In my opinion, there is far too much in the way of curve fitting and seeking to make out causative trends when the data is so poor and inconclusive that no firm conclusions can be drawn. I know that you do not like the use of wishy washy expressions such as may or could etc but since almost all the data sets are not fit for scientific purpose (for a variety of different reasons) this is all an honest scientist can correctly state.

You state:

Finally, is there any evidence of anthropogenic global warming in the CET data? To answer this, look at the residual signal in the bottom panel of the blue graph above. This is what remains after all of the underlying cyclical waves have been removed … looking at that it seems that there is no unusual recent warming of any kind.

Obviously, your presentation bears that out, but the position is so stark that the mark 1 eyeball readily reveals the same point (as I noted at September 4, 2017 at 12:27 am above).

And was Central England, however many years ago, as it is now? The breakpoint being the end of WW2.
I am slap bang in the middle of Central England and right now, when I go driving around exploring, I see mile after mile of brown dirt.

Only a month ago growing wheat, barley and oilseed – now harvested and gone leaving bare dirt with a bit of dead (organic) trash lying around.
That brown dirt has low albedo, so it gets hot.
It is bone dry – gets hotter than if it were damp.
There are no living plants transpiring water to have a cooling effect.
So it warms up.

That trash, exposed to a still strong sun, is being smashed up (thence oxidised) by the same solar energy that created it less than 2 months ago.
Anything organic in the surface layer of that dirt (the ‘soil organic’ fraction and bacteria) is exposed to the sun and is being oxidised.
So it releases carbon dioxide.

And that soil organic fraction is what’s left of what was ‘litter’ from under the forest that covered England, until it was cut down (by Henry VIII?) to make ships, cannons and cannon balls.
That litter was anything up to 2 feet deep and nearly all organic material. The dirt round here now is little different to the sand you’d find in Southern Tunisia. (I say that because I’ve been there)

A tiny little scrap of forest is left (Sherwood Forest, they call it round these parts).
I put a datalogger (see the advert for them here) in a section of forest and another datalogger in my garden, with a cornfield on two sides, barley on one side and potatoes on the fourth corner.
The forest datalogger runs 1 degC cooler, on average of course (recording at 30-minute intervals).

And see the CET temp graph.
See it ramp up after WW2.
Because (certainly English) politicians got such a fright from having to beg for food from their colonial cousins, they went hell-for-leather to grow enough ‘at home’.
They coincidentally and fortunately got the tools to do that – big tractors and nitrogen fertiliser arrived simultaneously on farms.

CET temperatures and carbon dioxide levels started ramping up at the exact same time.
No coincidence, not in my book.

The bare dirt lifted the temperature and the nitrogen hoisted the CO2.
Tyndall noticed that CO2 had ‘colour’. That’s all.
So what?
Most things have colour.

Unless people have lived in this area, I suspect that they do not appreciate how things have changed these past 100 years. Whilst England still possesses pleasant countryside, the countryside is now very different. The point you make is a good one, providing a little more detail on the point that I made above.

Whilst CET is informative of past historic behavioral patterns in temperatures, one has to be very careful with CET since the whole of central England is now one great heat island. Not simply due to urbanisation and urban sprawl, but also changes in farming, irrigation and water management.

Most data has been falsely warmed after 1940. If NASA or NOAA have touched it, it’s been intentionally corrupted by government employees.

An excellent place to learn about that is Tony Heller’s site, where he has gone to the trouble of making flashing .GIFs showing the systematic and continual alteration of the past before 1940 and warming all data after 1940.

It’s called ” realclimatescience ” dot com, one of the topics is altered data, at the top of the title page.

This ongoing falsification of records is happening on a continual basis – it’s being done by artificially tying temps to GHG levels.

Everyone should really try to look over his charting of this; it’s simple and transparent once you see it.

I dunno if the records associated with this thread are among those. My bet is that they have been altered, and considerably so.

Whether CET is a good proxy for the Northern Hemisphere is a matter of conjecture and debate, and whether anything useful can be drawn as from 1950 onwards is also moot.

That said, it is noteworthy to look at the historic data and the large variability in temperatures. It shows a change of about 4 degC, from ~7.5 degC to ~11.5 degC.

If one looks at the historic part of the data set for the period prior to 1950 there is variability of upwards of 2degC. This is very different to the handle of the M@nn Hockey Stick.

One would have expected that if the MBH98 paper sought to proclaim that NH temperatures were essentially flat until the modern warm period, 1960s onwards (i.e., the date of the splicing-on of the adjusted thermometer record), they would have had to examine that claim in the light of CET for the period 1660 to 1960.

If I recall correctly, their first paper did a reconstruction back to about 1400 (and their second paper extended the reconstruction back a thousand years), so CET covers 50% (or more) of their historic reconstruction.

Of course, I accept that the early part of CET is a reconstruction but all of that is something that ought to have been addressed in the MBH98 paper.

As you know, I use CET a lot. It was compiled in a very painstaking manner by Manley. I have personally discussed it at the Met Office with Parker. He believes it to be a pretty accurate historic compilation, as does the Met Office generally. The older dataset back to 1659 is not generally used, as it provides a monthly rather than daily record, which is why Parker’s 1772 version (for which daily records are available) is used.

However, that is not to minimise the accuracy of the older set, or CET in general, but we need to bear in mind Hubert Lamb’s caveat concerning historic temperature databases: that ‘we can believe in their tendency but not their precision’. So is it accurate to a tenth of a degree? No. Is it broadly accurate in showing us the vagaries of the British climate, enough to be very useful? Yes.

I wrote very extensively on it here, citing all the relevant CET back-up material:

Can anything be discerned from a ‘spliced’ dataset? Within my study below I looked at the impacts of vulcanism and sunspots on CET. I cannot see that either has had a huge impact, although there are one or two extreme examples where there may be some sort of impact.

Volcanoes in particular are largely a red herring, with the gleeful pointing by certain scientists to sudden dips in temperatures due to a massive eruption (think of tree rings or a lack of them) being contradicted when more detailed records show the cooling had already started several years prior to the eruption.

So is CET a useful record of the likely approximate temperature, and some sort of useful proxy for a wider area? Yes. Should it be taken as a definitive record accurate to a fantastic level, with all that implies in trying to discern micro-trends from it? No.

Note that the graphs in the links above also show my reconstruction of CET prior to 1659. This is not the Met Office’s material, although they have seen it and have a broad interest. I am currently working on the 13th-century CET reconstruction, which includes several large volcanoes enthusiastically endorsed by Mann and Miller as showing cooling. Hmmm.

Is anything much happening in recent times when looking at the broad sweep of British weather records over many centuries? It is hard to see it. What is easy to see is man’s impact on his environment by way of UHI. That is why the recent record makes an allowance for UHI, but as I have discussed with David Parker and Richard Betts, I do not think enough of an allowance.

The volcanic effect appears to be warming the winters and cooling the summers, due to the different stratospheric effects of volcanic emissions. While they block a significant part of solar radiation, they also uncouple the Solar-QBO-Polar vortex-NAO signal by reducing ozone levels, and thus tend to be accompanied by higher NAO values in winter. The winter warming effect of stratospheric volcanic eruptions at mid-to-high NH latitudes has been known since 1992.
Robock, A., & Mao, J. (1992). Winter warming from large volcanic eruptions. Geophysical Research Letters, 19(24), 2405-2408.

Thus the effect of volcanic eruptions on temperatures of the past depends on the proxy you are using. Biological proxies are affected disproportionately, as they rely on summer half-year temperatures and, in some cases, precipitation. To my knowledge the effect of volcanic eruptions on precipitation has not been clarified. That the timing of the temperature effect appears to be off might be a consequence of different dating models. Volcanic eruptions are very precisely dated to the year by their ice-core signature, while most biological proxies depend on imprecise age models. Tree rings used for the radiocarbon calibration curve are also precisely dated, and most authors support an effect of volcanic eruptions on tree rings.
Scuderi, L. A. (1990). Tree-ring evidence for climatically effective volcanic eruptions. Quaternary Research, 34(1), 67-85. “Ringwidth variations from temperature-sensitive upper timberline sites in the Sierra Nevada show a marked correspondence to the decadal pattern of volcanic sulfate aerosols recorded in a Greenland ice-core acidity profile and a significant negative growth response to individual explosive volcanic events.”

Jones, P. D., Briffa, K. R., & Schweingruber, F. H. (1995). Tree-ring evidence of the widespread effects of explosive volcanic eruptions. Geophysical Research Letters, 22(11), 1333-1336. “Tree-ring evidence from 97 sites over North America and Europe are used to develop a chronology of widespread cool summers since 1600… A number of the extreme low density years occur in both North America and Europe, suggesting a common response to high-frequency forcing… Of the five common extreme low-density years (1601, 1641, 1669, 1699 and 1912) four are known to be coincident with the year or year following major volcanic eruptions.”

1. Where are the thermometers used – is it Oxford City?
2. What is the change in land use for the surrounding 10 miles?
3. How precise are the thermometers?
4. How frequently are readings taken?
From about 1690-1740, over a 50-year period or slightly less, temperatures appear to increase from about 8 C to 11 C, which coincides with the massive growth in the yield of British agriculture. The increase in temperatures produced a very wealthy yeoman farming class (the farmhouses of this period are often large) and aristocratic landowning class, which enabled the industrial revolution to occur. In the 1780s, a French aristocrat was surprised that all classes could eat butcher’s meat every day.

I would suggest that Britain’s very variable climate and warm periods have produced a well-fed, contented, resilient, resourceful people who loved liberty: so what is the problem?

Why not use the Uppsala series instead? That is a single record with daily data from 1722 to date. It has a few years spliced in from a nearby station in the earliest part and is affected by UHI, but it is still a better-quality record than CET:

Modern thermocouples have a materially different response time to that of the old LIG thermometers, and it may be that this is causing an artificial warming. There is a post about this on Jo Nova’s site.

This is well worth a read. This is potentially a systematic bias, since one never gets a short-lived blast of cold air from a jet engine, or a cold gust over a tarmac parking lot, etc.

I have for a long time suggested that we should retrofit the best-sited stations with the same LIG thermometers used by the stations in question back in the 1930s/1940s, and observe using the same practices as used by those stations in the 1930s/1940s. We could then obtain RAW data which could be compared to the stations’ historic RAW data without the need for any adjustments. That would quickly and easily give us a good idea as to whether temperatures have truly moved since the 1930s/1940s.

It is a pity that BEST did not adopt this type of approach, i.e., come at the temperature record from a different perspective. It should have audited all the stations, selected the best-sited and most pristine ones, and then retrofitted them. 50 to 150 of the best-sited, most pristine stations would have been plenty.

A good piece by Willis.
It’s somewhat marred by his attack on CET.
CET represents an area of England.
Combining stations is one way of doing this; it has its drawbacks (splicing artefacts).
Better to do an area average and avoid splicing.
The differences will be minor, of technical interest only.

A fair test of the hypothesis would be to use some of the other long records.

One would not expect these drivers to drive one single magical patch on the globe.

But CO2 is claimed to be a well mixed gas (and is acceptably so at high altitude), and therefore any impact it has ought to be seen to similar extent all over the globe, save only to the extent that there are material differences in humidity, or the impact of unique features due to particular oceanic currents, or unique weather patterns due to topography and the like.

There is no reason why just a handful or so of well-sited stations could not be capable of observing the signal of CO2, if there is any signal at all to observe.

As you are aware, the contiguous US has not warmed since the 1930s. What makes the contiguous US an outlier? What are its unique features that mean that it does not and could not be expected to behave like the mid latitude region of the NH below the Arctic and above the Tropic of Cancer?

Lower 48 raw temperatures have support from records in Canada, Greenland, Iceland, South Africa, Paraguay… What are the statistical chances of these not reflecting actual global patterns? Here is S. Africa for example from 1880.

A good piece by Willis.
It’s somewhat marred by his attack on CET.
CET represents an area of England.
Combining stations is one way of doing this; it has its drawbacks (splicing artefacts).
Better to do an area average and avoid splicing.
The differences will be minor, of technical interest only.

Thanks, Steven. My attack was not on CET, my apologies for lack of clarity.

It was on the use of CET, or any spliced, averaged, and adjusted dataset, as a subject for an analysis of “driving forces”. The CET is useful for many things. Such an analysis is not among them.

As to whether the differences between this spliced dataset and a non-spliced dataset will be “minor, of technical interest only”, I fear that until you actually present your analysis of two such datasets, one spliced and one not spliced, you’re just making an unsupported claim.

However, I subsequently realized that the method, usually called the “scalpel” method, will turn a saw-toothed wave which gradually increases and then drops quickly into a spurious trend. Like say when a station is sequentially moved out of town a bit and then the town slowly encroaches, so it is moved again to a cooler area, where the town encroaches again so it is moved again to a cooler area …

Or when a Cotton Region Shelter is periodically cleaned and repainted white, leading to a drop in recorded temperature.

The “scalpel” method will cut out the sudden drops but leave the gradual increases, baking in a totally bogus warming.
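The sawtooth objection is easy to demonstrate numerically. The following is a toy construction of my own (not BEST’s actual code or algorithm): a station drifts warm from urban encroachment, is moved to a cooler site, and the cycle repeats.

```python
import numpy as np

# Toy sawtooth station history (illustrative only, NOT BEST's algorithm):
# slow spurious warming from urban encroachment, then a station move that
# drops the record back down, repeated five times.
n_seg, seg_len = 5, 20                  # five 20-year segments
drift = 0.05                            # deg/yr of spurious UHI warming
t = np.arange(n_seg * seg_len)
temps = np.concatenate([10.0 + drift * np.arange(seg_len)] * n_seg)

# Trend of the whole record: the periodic station moves cancel the drift.
whole_trend = float(np.polyfit(t, temps, 1)[0])          # ~0 deg/yr

# Scalpel-style: cut at each sudden drop, fit a trend to each segment,
# and average. The gradual (spurious) warming survives; the drops vanish.
seg = lambda i: temps[i * seg_len:(i + 1) * seg_len]
seg_trends = [np.polyfit(np.arange(seg_len), seg(i), 1)[0]
              for i in range(n_seg)]
scalpel_trend = float(np.mean(seg_trends))               # ~0.05 deg/yr
```

The whole-record fit sees essentially no warming, while the segment-average approach recovers the full 0.05 deg/yr of spurious drift.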

I’ve pointed this out to both you and Zeke Hausfather a number of times, but you’ve never responded with anything real. All I’ve gotten back from you are claims that it’s no problem, claims you’ve considered the issue, and the usual bafflegab.

I see this as a sign of intellectual cowardice, which I’d expect from Richard Muller, but not from you and Zeke.

When are you going to deal with this clear and cogent scientific objection to the “scalpel” method from the person who, as you say, came up with the idea? How about you cut the crap, stop the cute two-sentence drive-by postings, and actually respond to a real problem with the method you are using?

It seems to me that a proper splicing of the temperature records would have used digital filter windowing on each part, say a Hamming window or even a triangular window, to minimize the high-frequency effects from the discontinuities of each splice.
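One way to read that suggestion, sketched below under my own assumptions (a generic overlap-and-taper blend, not anyone’s published method): instead of butting two segments together with a step, taper them across an overlap with complementary triangular ramps.

```python
import numpy as np

# Two hypothetical station segments that disagree by a 0.5-degree offset.
n, overlap = 100, 20
a = 10.0 + 0.01 * np.arange(n)          # station A, ending warm
b = 10.5 + 0.01 * np.arange(n)          # station B, offset by 0.5 deg

# Hard splice: just butt them together -> a ~0.5-degree step at the joint.
hard_jump = float(abs(b[0] - a[-1]))

# Windowed splice: blend the overlap with complementary triangular ramps.
down = np.linspace(1.0, 0.0, overlap)   # taper A out as B tapers in
spliced = np.concatenate([
    a[:n - overlap],
    a[n - overlap:] * down + b[:overlap] * (1.0 - down),  # blended joint
    b[overlap:],
])
soft_jump = float(np.max(np.abs(np.diff(spliced))))       # small steps only
```

The blended joint spreads the 0.5-degree disagreement across the overlap, so no single step in the spliced series is larger than a couple of hundredths of a degree. (Whether such blending is appropriate for climate records is a separate question; it suppresses the discontinuity rather than resolving which segment is right.)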

Regarding the graph of cross correlation of CET with sunspots: I do not expect that to show correlation, because the Hale cycle has two cycles of the number of sunspots, each with opposite overall solar magnetic field.

Regarding the graph of cross correlation of CET with sunspots: I do not expect that to show correlation, because the Hale cycle has two cycles of the number of sunspots, each with opposite overall solar magnetic field.

Thanks, Donald. I thought about that. However, after some experimentation, my conclusion was that if we have a cycle of length say 12, it will show significant peaks in a cross-correlation analysis with a cycle of length 6.
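That experiment is easy to reproduce with synthetic cycles (my construction, not the actual sunspot data): a non-sinusoidal cycle of period 12 contains a harmonic at period 6, so it still cross-correlates with a pure period-6 cycle, whereas a pure period-12 sinusoid does not.

```python
import numpy as np

n = 1200
t = np.arange(n)

cyc12 = np.abs(np.sin(np.pi * t / 12))   # period-12 cycle, NOT sinusoidal:
                                         # a rectified sine, which contains
                                         # a harmonic at period 6
pure12 = np.cos(2 * np.pi * t / 12)      # purely sinusoidal period-12 cycle
cyc6 = np.cos(2 * np.pi * t / 6)         # pure period-6 cycle

def xcorr(x, y, max_lag):
    """Normalized cross-correlation of y against x at lags 0..max_lag."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return np.array([np.mean(x[:len(x) - k] * y[k:]) for k in range(max_lag + 1)])

cc_harm = xcorr(cyc12, cyc6, 12)   # oscillates with lag, amplitude ~0.2
cc_pure = xcorr(pure12, cyc6, 12)  # essentially zero at every lag
```

So whether a length-12 cycle leaks peaks into a length-6 cross-correlation depends on the cycle’s shape, not just its period: any non-sinusoidal waveform carries harmonics at integer fractions of its period.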

The other problem is that, as far as I know, there’s no way to adequately represent the Hale Cycle. It seems to me that it is a vector cycle rather than a scalar cycle. How are you proposing to graph it out?

Willis, a nice quick shredding of ‘forced’ science. Two things about the volatility at your conjectured splice points in the record:
a) Are the splice points not recorded by the Met Office? If available, it is a fine series of data points you’ve teased out.
b) The data at the perturbed points would seem to offer a means to judge the legitimacy of what was done and, in some cases, to refine and improve the splice.

Well done, Willis. I read the paper because I am deeply interested in all things attribution. I thought it was awful, but was too lazy to write something up for here. You have properly shredded their shoddy analysis.
The only interesting attribution analysis I am aware of is Marohasy’s new paper using an advanced neural-network AI trained on six carefully selected high-resolution paleoclimate proxies over the past millennium to project natural warming since 1900, with the residual assumed to be AGW. Her numbers are >75% natural and <25% AGW since 1950. The main methodological issue is that proxy time resolution is still arguably poor compared to the attribution period examined. I would be interested in your keen analysis of the paper.

Willis, just a short question on this excellent post: did you calculate the cross-correlations on the raw series (series1, series2), or did you first subtract each series’ average from every data point (i.e., calculate crosscor(series1 − mean(series1), series2 − mean(series2)))?

Having collected environmental data of all sorts in the AGW “debate”, I have pondered how temperature has been measured over time. How we measure temperature has changed from LIG thermometers, to thermocouples, to satellites. Even LIG thermometers from different runs from the same manufacturer may have different precision. While there are ways to “adjust” as one replaces a thermometer at a weather station, I know that for a few it is seldom done.

Willis: Excellent post, as we have come to expect. Deflating dubious conclusions is fun.

Philip Eden (very distinguished, retired meteorologist) maintained an independent CET series because he wasn’t sure that The Met Office/Hadley was an appropriate custodian of the data, and he wasn’t sure of how accurately they added current temperature records to their version of CET. This is a quote from his website that articulates his concern, in a nicely understated, English way:

Since Professor Manley’s death, the Meteorological Office has become the self-appointed guardian of the CET series, although one wonders whether it is a guardianship of which Manley would have approved.

Eden seems to have stopped updating his CET series in 2014 – that’s when his website seems to stop getting updates. His data set from 1974 (when Gordon Manley died) to 2014 is here:

And his text (hidden behind the images, you have to get the page source) says:

The Hadley Centre’s CET calculation has recently undergone a major change, involving the replacement of several of the contributing stations. Their series is now based on Stonyhurst (Lancs), Pershore (Worcs) and Rothamsted (Herts), all of which are Campbell Automatic Weather Stations. The Philip Eden series continues to emulate Manley’s original work which calculates the mean between the Oxford district and the Lancashire Plain, and no changes have been introduced to this series in recent months. You can draw your own conclusions as to the efficacy of the Hadley Centre’s change.

The plot of “Hadley CET minus MO E&W” (I think that’s Met Office England and Wales) goes negative by about 0.25°C between July 2004 and July 2005, while the plot of “Philip Eden CET minus MO E&W” stays more or less flat. In other words, the Hadley CET went up by 0.25°C in one year, compared with an independent data series that follows a constant methodology.

Does this conclusion have a familiar ring to it? Do I hear “homogenization”? Karlization?

I’m sort of bogged down in work today, but I’m going to play with Eden’s data and see if the difference continues, or continues to increase. I may be back later today if work goes well and I can make the time.

The temperature at any specific point (e.g. a CET measurement station) varies continuously during any 24-hour period. This is normal and expected, because we see the sun rise and set and note that it has gone away for about half of the 24 hours. Interestingly, if you stare at the thermometer all day for several days and take regular notes every minute or five, you will also see that the variations are erratic and unpredictable. Sometimes the temperature moves quickly in a matter of minutes. Sometimes it hardly moves for hours. It appears quite random at times.

If you were interested in the underlying “heat” energy involved in these movements you would really need to take loads of measurements each day, plot the graphs, and look at the areas under the graphs too. Taking just the MAX and MIN isn’t going to show you much about what is really happening. And using (MAX+MIN)/2 adds nothing to your lack of knowledge!
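The point can be illustrated with a toy diurnal cycle (entirely my own construction, not measured data): a day that sits near its minimum for most of 24 hours, with one short warm spike.

```python
import numpy as np

# Toy diurnal record (illustrative only): minute-resolution temperatures
# that sit near 10 C all day except for a short afternoon spike to 18 C.
hours = np.linspace(0, 24, 24 * 60, endpoint=False)
temp = 10.0 + 8.0 * np.exp(-(((hours - 14.0) / 2.0) ** 2))  # spike at 2 pm

true_mean = float(temp.mean())               # area-under-the-curve mean
minmax_mean = (temp.max() + temp.min()) / 2  # the traditional (MAX+MIN)/2

# The brief spike barely moves the integral but completely sets MAX,
# so the two "daily means" disagree by nearly 3 degrees here.
bias = float(minmax_mean - true_mean)
```

The more skewed the diurnal shape, the larger the disagreement between the midrange (MAX+MIN)/2 and the true time-integral mean.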

Of course if what you are really interested in is the surface temperature of the Earth then you had better move those thermometers and stick them in the ground, or at least in contact with it because right now (or in 1752) you appear to be measuring the temperature of that thin wispy stuff with varying amounts of water in it which we call the atmosphere, or at least the tiny bit of it in your measuring cabinet blown one way or another by the wind.

In conclusion: CET data may be interesting, but it is of no use whatsoever in studying the SURFACE temperature of the planet BECAUSE THE THERMOMETERS ARE NOT ON THE SURFACE.

Mods,
I was trying to upload an image I have from a book, a graph of the CET record which I had on my Facebook page. It doesn’t seem to have worked! Can anyone tell me the easiest way to do this? It’s really worth seeing.

The graph is called
“Accumulated temperatures during the growing season in ‘month-degrees’, 1749-1950”.
Location: 600 ft above sea level, western Pennines.
It is from a book called “Climate and the British Scene” by Gordon Manley, published by Collins in 1952.
It was compiled from ‘unadjusted data’ during a period of genuine scientific curiosity by people with no agenda.
And in my eyes it is clear evidence of scientific fraud.

There being no correlation between CET (an energy measure) and SSN (a proxy for a forcing, a power measure) should not be a surprise. Why should there be? Are you surprised if a plot of your watt-hour meter reading is a different shape from a plot of your watt meter reading?

A rational comparison would be between CET and the time-integral of the SSN anomaly. Properly accounting also for the net effect of ocean cycles and water vapor achieves a 98% match with measurements since 1895.

Ste – If you understood this stuff you might recognize the SSN anomaly time-integral as a proxy for energy retained by the planet and thus contributing to the average global temperature anomaly. Click my name for an analysis that explains how it works.

Ste[ven] – If you understood this stuff you might recognize the SSN anomaly time-integral as a proxy for energy retained by the planet and thus contributing to the average global temperature anomaly.

Dan, I agree with Steven on this one. The problem is that the shape of the integral is totally determined by the choice of the zero point. If you choose one point the integral increases steeply … choose another zero point and the integral plunges downwards. Choose a third zero point and the integral winds up right back where it started.
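Here is the zero-point problem in miniature, using synthetic sunspot-like numbers (illustrative only, not the real SSN record):

```python
import numpy as np

# Synthetic sunspot-like series (NOT the real SSN record): an 11-year
# cycle around a mean of roughly 80, with a little noise, clipped at zero.
rng = np.random.default_rng(1)
t = np.arange(300)                                    # 300 "years"
ssn = np.clip(80 + 60 * np.sin(2 * np.pi * t / 11)
              + 5 * rng.standard_normal(t.size), 0, None)

trend = lambda y: float(np.polyfit(t, y, 1)[0])

# The trend of the time-integral is set entirely by the chosen zero point.
rise = trend(np.cumsum(ssn - 70))          # baseline below the mean: climbs
flat = trend(np.cumsum(ssn - ssn.mean()))  # baseline at the mean: ~flat
fall = trend(np.cumsum(ssn - 90))          # baseline above the mean: plunges
```

Same data, three baselines, three completely different "trends" in the integral: steeply up, roughly flat, steeply down.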

One other point: integrals are not totally useless. While the trends are meaningless, the variations sometimes are not. For example, the integral of the Southern Oscillation Index has interesting properties … more to follow.

Ste & Wil – Of course I am aware of that. What I did avoids that issue.

Sunspots are a proxy for a power thing. The integral is mandatory to get an energy thing and it must be with respect to a nominal to account for both gains and losses. Divide the energy thing by the effective thermal capacitance and you get the contribution to the temperature anomaly.

It appears that you have reached a conclusion prior to a rigorous assessment of what was done. Perhaps you would gain better understanding by spending some open-minded time with my blog/analysis.

In brief, what I have discovered is that CO2 has no significant effect on climate and that the rising water vapor (which is IR active) is countering the temperature decline that would otherwise be occurring.

We have a mathematical method to measure the correlation of two datasets. We also have methods to determine whether that correlation is statistically significant.
…
THEREFORE, one is a scientifically valuable and falsifiable statement, and one is not.

No. One is quantifiable while the other is qualitative but neither is particularly informative.

As for falsifiable, a measure of correlation is no more falsifiable than the mean of the data. It is what it is. You will get the same value every time for the same data.

I’m not a mind reader, but I suspect that measures of correlation and statistical significance don’t mean what you think they mean. See: http://wmbriggs.com/post/4024/ for more on this topic.

Perhaps I’m not making myself clear.

“The water at point X is over 2 metres deep” is a statement which is falsifiable.

“The water at point X is really deep” is a statement which is not falsifiable.

In general, science revolves around falsifiable statements. “E=MC^2” is a falsifiable statement. “John is a handsome man” is not falsifiable. One is a proper subject for scientific study. The other is not.

In fact, this is one of the central problems with climate alarmism, in that so few of the claims made are readily falsifiable.

Now, you say:

“As for falsifiable, a measure of correlation is no more falsifiable than the mean of the data. It is what it is.”

No. If you say “the mean of 6, 8, and 12 is 8.1”, that is a falsifiable statement. If you say “The two datasets have a correlation of 0.82”, again that is falsifiable. Why? Because both the mean and the correlation are measurable.

But if you say that “10 is coincident with the length of the sunspot cycle”, that is NOT measurable. There is no measurement of “coincidentness”, so we cannot say whether the statement is true or false.

And that, in turn, means that it is NOT a proper scientific statement. Science is a funny game. Someone makes a claim which can be falsified, like say E=MC^2. Other people try to falsify it. If they can do so, it is not accepted as a provisionally true scientific statement. But if they cannot falsify it despite their best efforts, it is considered provisionally true until someone comes along who can overthrow it.

But you see … all of that process, which is the very heart of science itself, cannot occur unless the statement of the first person is falsifiable.

No. If you say “the mean of 6, 8, and 12 is 8.1”, that is a falsifiable statement. If you say “The two datasets have a correlation of 0.82”, again that is falsifiable. Why? Because both the mean and the correlation are measurable.

The mean is either correctly calculated or it is not. By itself, it is a meaningless value. With the scientific method, you want to falsify predictions. Falsifying or verifying calculations (by redoing them) is just checking the work.

Neither the mean nor the correlations within the data have any meaning in and of themselves. Now if you were to come up with some hypothesis involving them with a prediction made using that hypothesis then you have something falsifiable.

However, if one is just making an observation, it is perfectly fine to qualitatively say that two things appear to be correlated. Saying it quantitatively doesn’t make it any more precise or scientific. The value really tells you nothing except that when the value is higher there is more correlation and when the value is lower there is less — a qualitative answer. To a lot of people, though, having a numerical answer is more sciency. So I can see where you are coming from and you are not alone.

Even if you really do have numbers to the Nth decimal place, the first thing that comes to my mind is: that’s nice; so what?

In addition, and somewhat tangentially, reliance on statistical significance gives a false sense of accomplishment. Please read the Briggs link.

The correlation is evidence of water vapor controlling daily Min Temp.

The correlation between any two variables is not evidence of a causal relationship between them — thus the “correlation is not necessarily causation” admonishment. At best, it means that a causal relationship cannot be ruled out. Using correlation alone one needs at least three variables and then this minimum can only indicate causation if one of the three is a cause of the other two. See Causality: Models, Reasoning and Inference by Judea Pearl ISBN-13: 978-0521895606.

The basic flaw with the temperature records is the fact that absolute temperature may vary within a wide margin of about 2 degrees C. That became the argument for using anomalies instead, the reasoning being that the anomaly would show a trend if the temperature changed, despite the problems with the station network. A nice logical assumption, mostly true but not always.

But then they screwed the pooch by changing stations, adjusting stations, and changing the means of extrapolating between stations. That nice 2 degree C range of uncertainty allows them to turn the concept of the anomaly showing the trend into license to do anything they wish within the originally believed bounds of accuracy.

Thus all the changes they make could be individually justified, yet the trend could be all wrong as a result.
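The splicing concern can be illustrated with a toy simulation (purely hypothetical numbers, assuming a flat "true" climate and a constant 1 °C siting offset between two stations): switching stations mid-record without adjustment manufactures an apparent warming in the anomalies.

```python
import random

random.seed(1)

# Hypothetical: a trendless "true" climate observed by two stations whose
# absolute readings differ by a constant 1.0 C siting offset.
years = list(range(1900, 2000))
true_temp = [10.0 + random.gauss(0, 0.2) for _ in years]  # no trend at all

station_a = list(true_temp)                  # used 1900-1949
station_b = [t + 1.0 for t in true_temp]     # used 1950-1999, warmer siting

# Naive splice: switch stations in 1950 with no offset adjustment.
spliced = station_a[:50] + station_b[50:]

# Anomalies relative to the 1900-1929 baseline of the spliced record.
baseline = sum(spliced[:30]) / 30
anomaly = [t - baseline for t in spliced]

early = sum(anomaly[:50]) / 50
late = sum(anomaly[50:]) / 50
print(late - early)  # about +1.0 C of purely artificial "warming"
```

The anomaly trick only protects the trend if the station changes and adjustments are themselves handled consistently, which is exactly the commenter's complaint.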

This is a big issue in accounting, where determining values is often uncertain. An appraisal, for example, might come up with multiple potential values. The accountant therefore has to be aware of that during an audit, ensuring that the company accounts for items in a consistent manner rather than cherry-picking one method for one asset and another method for another asset. Period-over-period comparison also creates a picture of the company's financial progress, so consistency must be observed there as well. The client can change methods, but to do so and account for it properly, the figures have to be redone for all periods using both the new and old methods, so the investor can see what effect the change has on the company's financial results. None of this is done in climate science, and as a result it is highly unreliable.

Especially in temperature measurement to determine global trends, "consistency must be observed". In addition, for temperature, one must rationally attend to confounding factors like the "heat island effect" and "satellite drift".

Where they fail is that GHGs are not the defining attribute of surface temps; water vapor is.
The ruse that water vapor is "condensing" is both why it controls surface temps and why they erroneously exclude it. But I ask them this: when was the last time the atmosphere did not hold any water vapor?

The non-condensing GHGs are really important to an iceball Earth. But we'd want more, not less. And we're not in an iceball.

Willis Eschenbach, thank you for the essay. I thought that the original and your critique were both worth reading.

If there is (or were) a causal link between ENSO and the Central England temperatures (of which the CET series is an imperfect record), what do you think the actual R^2 value is (or would be)? This relates to the problem I posed a while ago about the poor statistical power of statistical tests that adhere to the conventional 5% and 1% levels.
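The power problem raised here can be roughed out numerically. This is a sketch, assuming the standard Fisher z-transform approximation (atanh(r) is approximately normal with standard error 1/sqrt(n-3)); the R^2 = 0.01 figure is an illustrative value, not a claim about the actual ENSO-CET link.

```python
import math
from statistics import NormalDist

def correlation_power(r, n, alpha=0.05):
    """Approximate power of a two-sided test of H0: rho = 0, via the
    Fisher z-transform: atanh(r) ~ Normal(atanh(rho), 1/sqrt(n-3))."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    mean = math.atanh(r) * math.sqrt(n - 3)
    # Probability the test statistic falls outside the critical values:
    return (1 - nd.cdf(z_crit - mean)) + nd.cdf(-z_crit - mean)

# A "real" but weak link (R^2 = 0.01, i.e. r = 0.1) at the 5% level:
print(correlation_power(0.1, n=100))   # low power with 100 samples
print(correlation_power(0.1, n=1000))  # much better with 1000
```

The sketch shows why a weak but genuine link can routinely fail a 5% significance test at modest sample sizes: the test simply lacks the power to see it.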

Matt, thanks for the comments. I do think that there is a link between El Niño and other parts of the globe.

Regarding the value of the link, here’s the oddity.

The temperatures in the Pacific tropics and the North Atlantic seem to be operating in some kind of see-saw pattern. When one is going up the other is going down. Go figure … however, this effect fades out by the time you get to England.

Note that there are other areas of the globe which are more strongly anti-correlated with the NINO3.4 area.