Comments on: Did Canada switch from Engine Inlets in 1926 Back to Buckets? https://climateaudit.org/2008/06/01/did-canada-switch-from-engine-inlets-in-1926-back-to-buckets/
by Steve McIntyre
By: Historical Sea Surface Temperature Adjustments/Corrections aka “The Bucket Model”… | Watts Up With That? https://climateaudit.org/2008/06/01/did-canada-switch-from-engine-inlets-in-1926-back-to-buckets/#comment-420875
Sat, 25 May 2013 20:34:14 +0000
[…] Buckets and Engines, The Team and Pearl Harbor, Bucket Adjustments: More Bilge from RealClimate, Rasmus, the Chevalier and Bucket Adjustments, Did Canada switch from Engine Inlets in 1926 Back to Buckets?; […]
By: Reg https://climateaudit.org/2008/06/01/did-canada-switch-from-engine-inlets-in-1926-back-to-buckets/#comment-149566
Fri, 20 Nov 2009 00:25:24 +0000
A healthy mind in a healthy body.
By: Juraj V. https://climateaudit.org/2008/06/01/did-canada-switch-from-engine-inlets-in-1926-back-to-buckets/#comment-149565
Tue, 06 Oct 2009 21:25:25 +0000
I have been thinking about CA’s bucket SST issue, and I have realized that by that abrupt adjustment, based on false sampling premises, “they” managed to cut the top off the SST warm period by 0.3 deg C before it fully peaked in the 1950s.
If the sampling issue were handled correctly, the SST graph should look more like this: http://blog.sme.sk/blog/560/190772/hadsst2corr.jpg
70% of Earth is covered by ocean; above-ocean air temperatures (which were derived from SST before satellite data became available) carry 70% of the weight in the global data sets. When “they” prematurely cut down the SST warm peak, they also removed the top of the warm peak from the global temperature data sets in the 1950s-60s.
Had the corrected SST data been used, the oceans in the 1950s would have been as warm as the oceans in the 2000s.
I stand by my case that this “all engine intake since 1945” premise has actually been another artificial manipulation of the global temperature data sets. Those [self-snip] simply cut the warm SSTs down and thus cooled the whole inconveniently warm period.
By: John F. Pittman https://climateaudit.org/2008/06/01/did-canada-switch-from-engine-inlets-in-1926-back-to-buckets/#comment-149564
Sat, 07 Jun 2008 11:55:46 +0000
#66-68

“What is the proper way to view the errors introduced by improper thermometer siting (see Watts)? It isn’t a simple matter of calculating a warming bias of X amount.”

I think the proper way to state it is that the range of error is ±x, not just +x. As discussed on this thread http://www.climateaudit.org/?p=3114 #153, it appears an audit of the claimed precision and accuracy of the century temperature trend, and of its standard deviation and standard error, is needed.
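A minimal sketch of the kind of check such an audit might start with, in Python with made-up numbers (the series, trend, and noise level below are illustrative assumptions, not data from this thread): fit a century trend to a noisy annual series and report the standard error that the precision claim rests on.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)
# Synthetic annual series: a true 0.7 deg C/century trend plus noise.
temps = 0.007 * (years - 1900) + rng.normal(0.0, 0.25, years.size)

fit = stats.linregress(years, temps)
print(f"trend = {fit.slope * 100:+.2f} deg C/century")
print(f"s.e.  = {fit.stderr * 100:.2f} deg C/century")

Note that this standard error assumes independent residuals; autocorrelation or the station-quality problems discussed above would make the true uncertainty larger, which is precisely what an audit would need to quantify.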

First, I would look at completely rural stations. And by rural, I’m not talking about stations at an airport 5 feet from a tool shed. Then I’d look at large-city and smaller urban stations as separate sets. I don’t think there’s anything to be gained by trying to create homogeneous data from non-homogeneous data sets.

The next answer would be to throw it all out and start over.

By: MPaul https://climateaudit.org/2008/06/01/did-canada-switch-from-engine-inlets-in-1926-back-to-buckets/#comment-149562
Fri, 06 Jun 2008 16:09:12 +0000
Ugh, it’s been too long and my skills are rusty. Sorry, Steve, for getting into an elementary topic (and sorry for reducing this to a practical engineering discussion). If you have a population of data with an unknown distribution, and you take random samples from that population and average subgroups of samples, the new distribution (of the subgroup averages) will be approximately normal, and any random measurement errors will be remapped symmetrically about the new mean. So sayeth the CLT (if I’m remembering correctly).
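A quick numerical check of that CLT claim (my own sketch, with an arbitrarily chosen skewed parent distribution): averages of subgroups drawn from a heavily skewed population come out far more symmetric than the population itself, and approach normality as the subgroup size grows.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
parent = rng.exponential(scale=1.0, size=100_000)  # skewed parent population
means5 = parent.reshape(-1, 5).mean(axis=1)        # averages of subgroups of 5
means50 = parent.reshape(-1, 50).mean(axis=1)      # averages of subgroups of 50

print("skewness of parent         :", round(stats.skew(parent), 2))   # ~2.0
print("skewness of means (n = 5)  :", round(stats.skew(means5), 2))   # ~0.9
print("skewness of means (n = 50) :", round(stats.skew(means50), 2))  # ~0.3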

Let’s say you were measuring the bacteria level of frozen peas on a manufacturing line. Every 10 minutes you randomly take 5 samples, measure the bacteria level of each, and average the five. Do this procedure a sufficient number of times, and the distribution of the sample averages will be normal. You can calculate the mean of this distribution and you can calculate its standard deviation. Now let’s say you keep doing this every 10 minutes for several weeks: 99.73% of your sample averages will fall within your 3-sigma limits. If you then get 3 sample averages out of 5 consecutive ones outside the 3-sigma limits, it is highly, highly probable that something changed, something non-random. This is what an engineer would call assignable-cause variation. There’s a whole branch of statistics dedicated to techniques that separate random variation from assignable-cause variation; the manufacturing industry depends on them.
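A hedged sketch of that procedure (the numbers are invented, and a production x-bar chart would normally set sigma from within-subgroup ranges rather than the simple standard deviation used here):

import numpy as np

rng = np.random.default_rng(2)

# In-control history: 200 subgroups of 5 readings, averaged.
baseline = rng.normal(50.0, 4.0, size=(200, 5)).mean(axis=1)
center = baseline.mean()
ucl = center + 3 * baseline.std(ddof=1)  # upper 3-sigma limit
lcl = center - 3 * baseline.std(ddof=1)  # lower 3-sigma limit

# New production: the process mean has shifted (assignable cause).
new = rng.normal(58.0, 4.0, size=(20, 5)).mean(axis=1)
flags = (new > ucl) | (new < lcl)
print(f"limits [{lcl:.1f}, {ucl:.1f}]; flagged {flags.sum()} of {flags.size} subgroup means")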

You would never, ever, ever see Green Giant ‘adjusting’ the bacteria data the way climate scientists routinely adjust data. Such a practice would be considered reckless and would probably be the stuff of lawsuits. Rather, they rely on strict analytical procedures to determine when something changed.

“if your instrument only measures to +/- 1 degree then it does not matter how many samples you take the accuracy will not get any better.”

What is the proper way to view the errors introduced by improper thermometer siting (see Watts)? It isn’t a simple matter of calculating a warming bias of X amount. The siting problems can vary by type and by degree (extent) and influence temperature readings by different amounts at different times of the year. If it can be shown that temperature recordings at a majority of sites can be wrong by a range of 0 to 5 degrees, how can such error be resolved to demonstrate a trend measured in tenths of a degree?
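To make the question concrete, here is a toy simulation (50 hypothetical stations, per-station biases drawn from the 0-to-5-degree range mentioned above, a true trend of 0.7 deg C/century; all numbers invented). A siting bias that stays constant shifts the level but leaves the fitted trend intact; a bias that drifts in over the record aliases straight into the trend.

import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1900, 2001)
signal = 0.007 * (years - 1900)                       # true 0.7 deg C/century
noise = rng.normal(0.0, 0.3, size=(50, years.size))   # station-level scatter
bias = rng.uniform(0.0, 5.0, size=(50, 1))            # per-station siting error

constant = bias * np.ones(years.size)                 # bias fixed in time
drifting = bias * np.linspace(0.0, 1.0, years.size)   # bias creeping in

for name, b in (("constant bias", constant), ("drifting bias", drifting)):
    network = (signal + b + noise).mean(axis=0)       # network-average series
    slope = np.polyfit(years, network, 1)[0] * 100
    print(f"{name}: fitted trend = {slope:+.2f} deg C/century")

On these made-up numbers the constant-bias case recovers roughly +0.7 deg C/century, while the drifting-bias case reports several times that, which is one way of framing the tenths-of-a-degree question.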

[At my kids’ swim meets, parents are given stopwatches which purport to measure to one hundredth of a second. Two parents timing the same swimmer can differ by more than half a second. It seems that Watts’ survey is showing that the same type of “operator” error exists for surface temperature stations.]

Let’s not talk at cross purposes; also, this is OT. I write on CA because of dissatisfaction with the standards of climate and green/political science.

In my career we were required to lodge all significant results and reports with a government repository. Our corporate survival depended on good science; others were run out of the country, and some ended up in jail, for poor work and falsification, and I have no sympathy for them.

When your personal career depends on the excellence of your work and on delivering the goods, you tend to pay more attention to quality than those whose performance is judged, say, by the number of peer-reviewed papers produced, indifferent as to their importance.

I am not exaggerating greatly to claim that proper science is under assault as never before. Galileo the Sequel comes to mind.

Your post basically confirms what I’ve been saying in several posts. There simply is no raw (as-is) data of sufficiently high quality to say very much at all about what the temperature of the atmosphere or the oceans is with any degree of accuracy, and certainly not with enough accuracy to support a “consensus” on climate change. I’ve always thought that if the data were good, no adjustments or mathematical manipulations would be needed.

By: MC https://climateaudit.org/2008/06/01/did-canada-switch-from-engine-inlets-in-1926-back-to-buckets/#comment-149558
Thu, 05 Jun 2008 23:05:06 +0000
#61 I’m a scientist (a physicist), and though it is true that science is not “policed” in the same way, it depends on the field. Particle physics or materials science has little time for shoddy method. Climate science is inherently a bit more “woolly”, as the principle is to try to distill some patterns and behaviours from a multi-parameter system. However, this does not excuse bad scientific method.

#59 and general. As an experimental physicist, I feel I need to make the point that the precision of a measurement (as in how close it is to a theoretical steady-state value) is the key to all this, not the trend per se. This was stated much earlier: trends are being derived from data with larger measurement errors. Secondly, if your instrument only measures to ±1 degree, then it does not matter how many samples you take; the accuracy will not get any better. The measurements belong to what is called in maths a bounded set: you only know that your measurement lies between two limits. That is it. The only way you can improve accuracy is to calibrate against a simultaneous measurement with better-known accuracy. Any Gaussian or CLT treatment beyond that is an assumption that must be backed up by test or specifically stated as an assumption. This is scientific method.
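A small sketch of the distinction being drawn (illustrative numbers only): averaging many readings beats down the random scatter, but a systematic instrument offset, the kind only calibration can catch, survives any amount of averaging.

import numpy as np

rng = np.random.default_rng(4)
true_value = 15.00   # deg C, the quantity being measured
offset = 0.80        # systematic instrument error, unknown to the observer

for n in (10, 1_000, 100_000):
    readings = true_value + offset + rng.normal(0.0, 0.5, n)
    err = readings.mean() - true_value
    print(f"n = {n:>6}: mean of readings = {readings.mean():.3f}, error = {err:+.3f}")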

Hence the problem we see with the SST: it looks like the imprecision of the instruments is much larger than 0.1 degrees. In fact it looks to be 2-3% or more of the absolute reading for a lot of the historic readings, which, once a mean is subtracted, results in hundreds of percent uncertainty in the anomaly. The conclusion is simple: the SST record is nowhere near accurate enough to resolve 0.1-degree trends, so we cannot use it and need to make better measurements.
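The arithmetic behind “hundreds of percent”, as I read it (illustrative numbers: a typical absolute SST near 15 deg C and an anomaly signal of about 0.1 deg C):

reading = 15.0   # deg C, typical absolute SST
anomaly = 0.1    # deg C, the size of the trend signal of interest
for frac in (0.02, 0.03):
    err = frac * reading  # absolute measurement error at 2-3% of reading
    print(f"{frac:.0%} of {reading} deg C = {err:.2f} deg C "
          f"= {err / anomaly:.0%} of a {anomaly} deg C anomaly")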

We do not have a usable understanding (beyond the units of temperature) of how average temperatures have varied over the century. I do not need to go into statistics; the raw data is enough to tell me this.