Actually, the CET record is a good temperature proxy for an area much larger than the UK: the UK is a small island, so its temperature is linked to that of the surrounding water rather than to a large land mass. Better than a couple of bristlecones any day.

I agree with Steve’s (and JohnH’s) contention that this is strange. I can’t remember where now, but I recall reading that, given the British Isles’ predominantly maritime climate, the CET and Armagh datasets are actually very good proxies for the North Atlantic, a not inconsiderable volume of water even in global terms. Likewise, the data from Tiree, Lerwick, and Iceland all show a strong correlation for warm and cool years/periods, despite the distances between them. Most of the heat is in the ocean, and thankfully it wins out over the northerly winds in the end; otherwise we in Scotland would have a very much colder climate than we do.

I agree with Tom. The local weather has almost nothing to do with the global averages. Climate is a local effect; there is no such thing as a global climate. It’s not hard to make this statement quantitative.

The standard deviation of local monthly temperatures – e.g. in Central England – is something like 2-5 degrees Celsius. On the other hand, the global December anomalies oscillate with a standard deviation of 0.2 degrees Celsius or so. So whether you subtract the global swings from the local temperature or not is pretty much irrelevant.

Even many skeptics don’t appreciate how brutally negligible the global temperature changes are for any practical purpose, such as predicting the local weather. We have -8 degrees Celsius now in Pilsen, and it was -15 degrees Celsius a few days ago. One would have to raise that by 20-30 degrees to get a decently warm winter day. Even if the global trend were 3 degrees per century, as some insane people argue, it would still make no difference to the local weather. One has to observe the weather carefully for a very long time, and do the statistics very carefully, to see any “trend” at all. Normal life does not involve such precise measurements and calculations, which is why changes in the global average make no difference for any practical purpose.
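The point that a trend is invisible without long, careful statistics can be made concrete. As a rough sketch (my own illustration with assumed numbers, not anything from the comment): with year-to-year noise of standard deviation s, the standard error of an ordinary least-squares trend over n equally spaced years is s*sqrt(12/(n(n^2-1))), so one can ask how many years pass before a 3 K/century trend becomes a 2-sigma signal.

```python
import math

# Rough sketch (illustrative assumptions, not from the comment): how many
# years of annual data, with weather noise of standard deviation s (K),
# before an OLS trend of b K/year becomes a z-sigma signal?
# For unit-spaced years, SE(slope) = s * sqrt(12 / (n * (n**2 - 1))).
def years_to_detect(b, s, z=2.0):
    n = 3
    while b / (s * math.sqrt(12.0 / (n * (n * n - 1)))) < z:
        n += 1
    return n

# A 3 K/century trend (0.03 K/yr) buried in 2 K year-to-year noise:
print(years_to_detect(0.03, 2.0))  # 60 (years)
```

With less noisy data the wait shrinks accordingly, which is why globally averaged series (noise nearer 0.1 K) reveal trends far sooner than a local record does.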

Needless to say, the alarmists will offer you a completely different explanation. The global temperature is rising – by 1.3 deg C per century, oh so hot – and it is totally disrupting everything and making everything behave crazily. Now add the 56,000 things caused by global warming here – one of them is extreme weather events. The freezing is caused by the warming, some of them will say.

Needless to say, this explanation is complete rubbish especially because record cold winter weather events mostly come from the cold wind from the Arctic, as Richard Lindzen likes to emphasize, but because the Arctic is allegedly getting quickly warmer – even more quickly than general places on the Earth – it should mean that the winds from the North won’t be able to bring such cold weather to the moderate zones and the record cold events should almost totally disappear. For similar reasons, the weather fluctuations should get reduced because the Arctic-equatorial difference is also dropping, thus reducing the impact of Southern and Northern winds.

But many people are not able or willing to think rationally about such things.

Well now, if global cooling were to begin, what would the symptoms be? Some expert climatologist ought to be able to tell us that. For example, do we expect the tropics to cool first, and then the poles, or vice versa, or should it be equal?

Given the theory that global warming is magnified towards the poles, I think we must expect global cooling to be magnified towards the poles too. Also, given the heat retained in oceans, we should expect to see land masses cool more quickly too.

What are we seeing? Northern land masses appear to be cooling. This can either be viewed as a symptom of incipient global cooling, or a random weather fluctuation, depending on the configuration of various neurons in your brain.

There are a number of different aspects to “cooling” versus “warming” that tend to be entirely lost in the “debate” between “warmists” and “deniers.” For instance, if we are still within the gross pattern of the Pleistocene climate (calling the present the “Holocene” may have been geologically jumping the gun), then we should be well into the early stages of the next glacial epoch. If you consider the data on temperature trends since the early Holocene, the globe has in fact been experiencing a cooling trend. The recovery from the LIA is unlikely to exceed, or even meet, the levels seen at the peaks of the Minoan and Roman warm periods. Which in turn could mean that the warm-up from the LIA ended at about 1998, and the present cooling trend is really the near shoulder of the next step down toward a new glacial.

An interesting and opposite alternative is that we are at the end of the latest “ice house” phase and the climate will be considerably warmer for the next several tens of millions of years. Canada could become an important banana exporter if that happens, but with the growth of ice in Antarctica this seems unlikely.

I would think it would depend on what causes you to use the word ‘appear’. If it is the temperature of one or two years, you are probably misusing the word. Most people would assume that is due to variability in weather. Most of climate science (on either side of the debate) tends to use 30 year trends as guide posts, not 2 year weather incidents.

What makes you think that Northern land masses appear to be cooling? Is it 30 year trends or 1 or 2 year weather incidents?

It is not 30-year trends. Meteorologists are keen on those, even though 30 years is only half of a possible 60-year cycle (see Scafetta (2010)); unfortunately, the World Government wants to make precipitous decisions sooner than that. We need to be able to discern the gradient of global (and perhaps local) temperatures more quickly than over 30 years.

How soon after the Great Climatic Shift of the mid 1970s could one get a Bayes factor of 5 (say) that the climate had started to warm again? 10 years? How long might we need to get that factor for cooling, or at least for a cessation of warming?

I believe that Hansen’s predictions of 1988 are now statistically improbable. How many years would it take, if I predict a fall in temperature of 0.15K over 10 years, for my prediction to become statistically improbable?

If a clear warming signal is not seen post 1998, should the World Government be persuaded to wait 30 years before taking action against global warming?

Interesting questions… With regard to your question about a Bayes factor of 5, I don’t know. With regard to your other questions, it would probably be helpful if you defined some of your terms. What do you mean by ‘clear warming signal’? What would you need to see to acknowledge a clear warming signal? As an aside, what is so special about 1998? Is there some a priori selection criterion you used to determine the beginning of your period? Selecting 1998 without providing that criterion certainly makes it look like you just picked the warmest year as your starting point, so that it is difficult to see any warming from that point. I’m not implying you did that, but I am curious about the criterion you used for selecting 1998.

My previous comment was a bit rambling, for which I was going to apologize. Here I am going to try to be a bit more coherent, regarding Bayes factors.

Chris, you are right that one shouldn’t necessarily start from the super El Nino year 1998; in fact I usually start from 2001, after that El Nino-La Nina episode, and the same comment on lack of warming applies.

Anyway, to Bayes factors. Let’s suppose that under AGW there should be a steady increase in temperature, obviously with errors due to El Nino etc. To compare the coming decade with the last, suppose that in year 2011+i the global anomaly is expected to be x_i more than the 2001-2010 mean. Suppose the rise to be linear, so

x_i = (11+2i)d/20

This gives a mean for i=0 to 9 of d, and a value in 2005.5 (i=-5.5) of 0. Suppose also that the observations are normal with standard deviation s. Suppose the actual observations are y_i.

Then compared with a null hypothesis of no change between decades, the Bayes factor in favour of an increase by d over the decade, using year 2011+i only, is

exp((y_i^2 - (y_i - x_i)^2)/(2s^2))
= exp(x_i(2y_i - x_i)/(2s^2))

The total Bayes factor over i=0 to n-1 is the product of these values.

When I get back to my home computer I’ll work out some actual figures, but for now suppose that s=0.1 and d=0.2 (i.e. a mere 2 degrees per century). Suppose also that it happened that years 2011 to 2013 each record y_i=0. Then x_0=0.11, x_1=0.13, x_2=0.15, and the Bayes factor in favour of d=0.2 would be exp(-(0.11^2+0.13^2+0.15^2)/0.02) = exp(-2.575), about 1/13 – that is, odds of 13 to 1 against.
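As a check on the arithmetic, here is a short Python sketch (my own, purely illustrative) of the cumulative Bayes factor defined above; with s=0.1, d=0.2, and three flat years it comes out near 1/13, matching the “13 to 1” odds discussed below.

```python
import math

# Cumulative Bayes factor for a decadal rise d versus no change, given
# observed anomalies y_i (relative to the previous decade's mean) with
# noise s.d. s, under the linear model x_i = (11 + 2i) * d / 20.
def bayes_factor(ys, d=0.2, s=0.1):
    bf = 1.0
    for i, y in enumerate(ys):
        x = (11 + 2 * i) * d / 20.0
        bf *= math.exp(x * (2 * y - x) / (2 * s ** 2))
    return bf

# Three flat years (y_i = 0 for 2011-2013):
print(bayes_factor([0.0, 0.0, 0.0]))  # ~0.076, i.e. about 13:1 against
```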

Interesting. I don’t have the experience to fully understand the derivation, but I am usually pretty good at math (MS in Mechanical Engineering). In this case, maybe I’m just not following your notation, but I’m not following the math either. However, I’m content to say I trust you :)

However, I will question the application of your model to the underlying physical process. Let’s say that after 3 years y_i is 0 for each year and we get your calculated 13 to 1 odds. If I understand you correctly, those odds are 13 to 1 against the 2010 decade having an average temp that is .2 degrees higher than the 2000 decade?

If I do understand you correctly, then what does that imply about a long-term trend? I would argue that we would need 3 decades to determine a trend, not 1. What is the likelihood that an actual trend over 3 decades could show no trend after 1 decade?

I also did some quick research (OK, I googled “yearly mean temperature standard deviation”) and found this site: http://www.fhwa.dot.gov/publications/research/infrastructure/pavements/ltpp/03092/task3a.cfm I went to table 25 and found that the STD of max temps was .8 and the STD of min temps was .7. For argument’s sake, let’s say that the STD of mean temps is .75. Let’s replace your .1 with .75 and see what happens… (I would do it myself, but like I said, I’m going to have to trust you on this one.) I’m also open to someone correcting my value of .75 for the STD.

I’m making this a reply to an earlier comment, to retrieve some textual width!

Regarding the mathematics, it is the Normal probability density function

exp(-(t-mu)^2/(2 sigma^2))/(sigma sqrt(2 pi)),

using the ratio for two different means, which leads to the Bayes factor formula

exp(x(2y - x)/(2s^2))

where x is the predicted increase and y is the observed increase. One can easily see that the Bayes factor is >1 if y>x/2, and less than 1 otherwise.
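A quick numerical check (my own sketch) of that last claim, using the single-year factor exp(x(2y - x)/(2s^2)):

```python
import math

# Single-year Bayes factor in favour of a predicted increase x, given an
# observed increase y, with noise standard deviation s.
def single_year_bf(y, x, s=0.1):
    return math.exp(x * (2 * y - x) / (2 * s ** 2))

x = 0.2
print(single_year_bf(0.15, x))  # y > x/2: factor e^1, greater than 1
print(single_year_bf(0.05, x))  # y < x/2: factor e^-1, less than 1
print(single_year_bf(0.10, x))  # y = x/2: factor exactly 1
```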

Regarding the standard deviations, the site you quote seems to be giving the standard deviation for U.S. temperatures, and I’m not sure on what basis. Anyway, the s.d. for globally averaged temperatures is much lower (not surprisingly). See the table below for values from HadCRUT3, where 0.1 is a fair typical value if trend is allowed for.

The table has the first year of the period, the length of the period, the standard deviation if zero trend is assumed, the mean, the annual trend, the standard deviation about the trend line. I am happy to publish the R code for this if anyone wants it.

What can we learn from this table? First, the within-decade trends, with values above 0.2K in each of the 1860s, 1870s, 1890s, 1930s, and 1990s, are more volatile than the changes in decadal means, which never exceed 0.2K; the two largest are 0.194K from the 1990s to the 2000s and 0.176K from the 1920s to the 1930s.

Second, after allowing for each decadal trend, such as it is, the standard deviation shows 6 values above 0.1 and 10 below; the median is .087.

The lowest of all the s.d.’s is .053 in the 2000’s. Wow, that’s extreme – anthropogenic global warming is causing the global temperature to be highly stable with a negligible trend! Sounds great.

Chris, you say “Let’s say that after 3 years y_i is 0 for each year and we get your calculated 13 to 1 odds. If I understand you correctly, those odds are 13 to 1 against the 2010 decade having an average temp that is .2 degrees higher than the 2000 decade?” That’s almost right, but it doesn’t take into account my allowance that the 2010’s should (in some sense) warm at 0.02 per year through the decade. If the 2010’s were to be flat at +0.2, the Bayes factor against starting with 3 0’s would be even higher than 13:1. Anyway, this is moot for now – the 3 0’s haven’t happened yet (they would mean HadCRUT3=0.44 for each of 2011, 2012, 2013) – but it is intended to show the sort of tests that we can apply. Perhaps the next 3 years will be spot on the money – 2011 at 0.44+5.5*0.02=0.55, 2012 at 0.57, 2013 at 0.59. I don’t know if you’re aware, but the Met Office predicted a year or two back that at least half the years 2010-2019 would beat 1998’s 0.54; 2010 failed despite being a part El Nino year with a much better chance than 2011.

We can ask, in the same way, how the Bayes factors would have looked in the past. And the answer is that for each of the last 3 decades an increase of 0.2K would have attained a significantly high Bayes factor (though not as high as 0.16K would). But the question now is: are we going to see that happen again in the 2010’s? If we look at the within-decade trends, the last times they were less than 0.01K p.a. were in the 1950’s and 1960’s, and the following decades did not significantly warm. Therefore the flat 2000’s suggest that the 2010’s will not significantly warm. But hey, CO2 is increasing faster now, so perhaps they will – hence there is huge interest in the trend over the next few years.

Regarding your suggestion that we should only consider 30 years’ worth of data, there are two problems. The first is that 60 years makes more sense, since Scafetta (2010) has shown that 20- and 60-year cycles are significant in the data (and the 1951-2010 trend is 1.18K per century versus 1.66 for 1981-2010). The second is: are you suggesting that the IPCC and the “World Government” should wait 30 years before making any policy decisions? OK, I can agree with you on that :-).

But the situation we are in is that there is huge political pressure to act against CO2. Probably the only thing relieving that pressure is the current stasis in global temperatures. So continuing statistical analysis year-by-year is important for assessing the claims of the Global Circulation Models, with the possible result that the estimates of the sensitivity to CO2 will have to be reduced, with possible concomitant impact on policy.

I’m really not following you. Could you clarify this for me: “That’s almost right, but doesn’t take into account my allowance that the 2010’s should (in some sense) warm at 0.02 per year through the decade. If the 2010’s were to be flat at +0.2, the Bayes factor against starting with 3 0’s would be even higher than 13:1.”

At first you were saying that you were testing the proposition that d=.2. Now, you make the comment that the 2010’s should (in some sense) warm at .02 per year. I don’t even know what “If the 2010’s were to be flat at +0.2” means.

What exactly is the original test again?

To get away from the math a bit (which is disappointing, because I feel like I could learn a lot from you in this area if we continued this discussion), I do want to question a few of your statements. First, I never said that we should only consider 30 years’ worth of data. I thought I was implying that we should not look at one- or two-year events and mistake them for a significant climate change. I think that a one- or two-year event represents variability. I also think that choices like 30- vs. 60-year time periods need some sort of a priori criteria. What if we use 57-year periods? Are the trends larger or smaller than over 60-year periods? Does it mean anything if they are? So what if the MET made a silly prediction? Would it really make a difference if 6 years ‘beat’ 1998 vs. 4 years? Even the choice of year to start our decades seems fraught with uncertainty. Would trends be different if our decade ran from 4 to 3 instead of 0 to 9? What if we used 17-year periods to calculate trends instead of 10-year periods?

I guess I have a difficult time thinking that there is anything significant about a lot of the questions you are considering. Long term trends seem like they are important. Without any physical process tied to what I would consider short term trends a lot of this seems like it has little to do with AGW.

And with respect to your comment about waiting 30 years, that seems pretty reckless. I don’t know much about how large companies work, but I work for a small company and we tend to consider risk mitigation an important part of our strategic decision making process. If we decide that there is a 10% chance that something bad could happen, we start taking steps to deal with that. We don’t wait until it becomes a 75% chance that something bad is going to happen. My wife and I just purchased life insurance, even though there is a really good chance that it is a waste of money. What would you like to see happen 30 years from now, that is different from what you have seen in the last 30 years that would make you happy to see the World Government do something? What is your definition of ‘current stasis in global temperatures’, btw?

Here is my clarification for you. My main model was x_i = (11+2i)d/20, which follows gradual warming through the decade. Given observations y_i it will lead to certain Bayes factors as defined. But a simpler model would have x_i = d for each i, i.e. immediate warming followed by no more warming during the decade. For the first year, if y_0 were 0, the Bayes factor in favour of warming by d would be lower for the second model than for the first (and the second model sounded more like what you wrote).
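To make the comparison concrete, here is a sketch (the "gradual"/"step" labels are mine) of the first-year Bayes factors under the two models, with d=0.2, s=0.1, and a flat first year:

```python
import math

# First-year Bayes factor under each model when the first year comes in
# flat (y_0 = 0), with d = 0.2 and noise s.d. s = 0.1.
def bf(y, x, s=0.1):
    return math.exp(x * (2 * y - x) / (2 * s ** 2))

d = 0.2
gradual = bf(0.0, 11 * d / 20)  # "gradual" model: x_0 = 0.11
step = bf(0.0, d)               # "step" model:    x_0 = 0.20
print(gradual, step)  # ~0.55 vs ~0.14: the step model penalises a flat year more
```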

You say “So what if the MET made a silly prediction?”. I say, if it is silly enough to prove their models wrong, then we shouldn’t trust those models when they say the temperature in 2100 (or earlier) will be alarmingly hot.

You say “If we decide that there is a 10% chance that something bad could happen, we start taking steps to deal with that.” I say, it depends on the cost, and the benefit, from taking that action. You are talking about the Precautionary Principle, and I’ll leave other CA readers to debate that one; if you Google in this site you will probably find several enlightening discussions on it.

The basic point behind my statistical tests is to discover inaccuracies in the GCM’s if they exist. Recently Gavin Schmidt, in response to the warming by 2.4C by 2020 farce, stated an expectation of warming by 0.2C by 2020. We should test that.

By “current stasis in global temperatures” I mean the lack of upward or downward trend in 2001-2010. It looks like the top of a hill to me, but until it goes down significantly (or up) we don’t know.

You make a couple comments about testing certain models and their predictions. But it doesn’t look like you are testing models. It looks like you are testing a few ridiculous extreme cases. What aspect of the model actually drove that silly prediction? Is that something that came from the PR department at the MET? Is it from a published scientific paper from the MET? What did Schmidt really mean when he said that? Which model was he quoting? What were the error bounds? It looks like you are choosing a few extreme statements that really don’t have much to do with AGW. Maybe the MET has a model that says 5 years in the next decade will have temps higher than 98… So what? If it is only 4 years instead of 5, does that mean that the model is wrong? What could you possibly hope to prove by testing a prediction like that? It’s a PR stunt on both sides and really doesn’t do much to increase understanding or knowledge by anyone. Oh, and according to GISS, 2010 did beat 98, despite 98 being a strong El Nino and 2010 being a strong La Nina (at least, according to John Lancaster below): http://data.giss.nasa.gov/gistemp/graphs/

Personally, I’m not convinced either way, but it sounds like you are, and you are going out of your way to present certain evidence to support that conclusion. That’s fine, but please don’t be offended if I ask something like: why do you choose 2000-2010 as your period to examine when looking for stasis? That seems like a completely arbitrary period. What is it about that 10-year period, bounded by big round numbers based on a calendar convention, that makes you think it shows temperature stasis? That is, what are the criteria that you used, before knowing the result, that prompted you to look at those years?

You make a couple comments about testing certain models and their predictions. But it doesn’t look like you are testing models. It looks like you are testing a few ridiculous extreme cases. What aspect of the model actually drove that silly prediction? Is that something that came from the PR department at the MET? Is it from a published scientific paper from the MET?

I don’t think the Met Office would agree that the “5 of next 10 years > 1998” was a silly prediction, at least when they made it. Possibly, with 2010 failing, they may be rethinking it, or possibly they think that after this La Nina there will still be a good chance.

What did Schmidt really mean when he said that? Which model was he quoting? What were the error bounds?

Schmidt is one of the main people in the Team of “consensus scientists”, and it shows that they still think that the globe should warm by 0.2K this decade. They have recently realized that they need to become cleverer with error bounds, hence are now suddenly making statements (because 2010 wasn’t top) that 1998, 2005, and 2010 are statistically indistinguishable.

It looks like you are choosing a few extreme statements that really don’t have much to do with AGW.

I don’t follow. Aren’t all global temperature statements to do with AGW? Or is that only when the temperature is going up?

Maybe the MET has a model that says 5 years in the next decade will have temps higher than 98… So what? If it is only 4 years instead of 5, does that mean that the model is wrong?

The purpose of using statistics is to increase, or decrease, one’s confidence about certain statements. I am pretty sure that if 4 out of 10 years are higher than 1998, by a modest margin, then the Bayes factors will come out in favour of a linear +0.02 per year rise this decade. But if it’s 1 instead of 4, then that would be doubtful.

What could you possibly hope to prove by testing a prediction like that? Its a PR stunt on both sides and really doesn’t do much to increase understanding or knowledge by anyone.

Isn’t that statement anti-science? We’re trying to use mathematics to assess the state and rate of global warming. There is, we are told, a scientific consensus that it is a scary amount. Is the consensus actually correct?

I can’t tell you how low an opinion I hold of GISS. I will have no truck with it. Just search on this admirable site for “GISS” and you will see numerous serious problems with that series.

Personally, I’m not convinced either way, but it sounds like you are and you are going out of your way to present certain evidence to support that conclusion.

What do you mean by “either way”? Personally, I am convinced that CO2 causes some global warming, but I just don’t think it is by a large amount.

That’s fine, but please don’t be offended if I ask something like: why do you choose 2000-2010 as your period to examine when looking for stasis? That seems like a completely arbitrary period. What is it about that 10-year period, bounded by big round numbers based on a calendar convention, that makes you think it shows temperature stasis? That is, what are the criteria that you used, before knowing the result, that prompted you to look at those years?

Well, I think it’s important not to start at 1998, with a big El Nino, or at 1999/2000 with a big La Nina. As for using only 10 years, I would remind you of this. In 1988, when speaking to Congress rather than on a blog, James Hansen had at most 15 years of global warming to go on to make his disturbing predictions of doom. He felt it was important on the basis of that short length of data to make public pronouncements. I feel it is important, cumulatively year by year, to test the “consensus” predictions.

>>Schmidt is one of the main people in the Team of “consensus scientists”, and it shows that they still think that the globe should warm by 0.2K this decade. They have recently realized that they need to become cleverer with error bounds, hence are now suddenly making statements (because 2010 wasn’t top) that 1998, 2005, and 2010 are statistically indistinguishable.

All true. However, you didn’t address my comment. You’re trying to use statistics to address a PR move. What is the model you are testing there? Did he really quote the models accurately when he made that comment? Are you really testing a model or a silly PR comment? Either way is fine, but be up front about it.

>> I don’t follow. Aren’t all global temperature statements to do with AGW? Or is that only when the temperature is going up?

No and no. If I say “The global average temperature was really high this year”, it has nothing at all to do with AGW. Nothing. If I say, “there is a long term trend of increasing temperatures at a rate of X”, that is very much related to AGW. Seems a waste of time trying to test individual years as proof or disproof of AGW…

>>Isn’t that statement anti-science? We’re trying to use mathematics to assess the state and rate of global warming. There is, we are told, a scientific consensus in favour that it is a scary amount. Is the consensus actually correct?

No, it isn’t anti-science. I would say that it is anti-science to apply a test to an almost meaningless prediction from the MET and assume that it has some scientific meaning. If the MET wants to engage in PR, let them. If you want to engage in counter-PR, go ahead. But don’t call it science.

>> I can’t tell you how low an opinion I hold of GISS. I will have no truck with it. Just search on this admirable site for “GISS” and you will see numerous serious problems with that series.

I’m not actually ‘new’ to this site, and I have seen many of the discussions of GISS. It is difficult to argue with your statement that you will have no truck with them, so I guess we will leave that lie. So, whose data set that you trust have you used to show that 2010 is cooler than 1998?

>> Well, I think it’s important not to start at 1998, with a big El Nino, or at 1999/2000 with a big La Nina. As for using only 10 years, I would remind you of this. In 1988, when speaking to Congress rather than on a blog, James Hansen had at most 15 years of global warming to go on to make his disturbing predictions of doom. He felt it was important on the basis of that short length of data to make public pronouncements.

Interesting statement, but it didn’t answer my question. You seem to think that Hansen was wrong for only using 15 years’ worth of data. So your response to that is to use 10?

>> I feel it is important, cumulatively year by year, to test the “consensus” predictions.

I don’t argue with this. However, given that single year data points are subject to so much variability, I think it is important to note ‘error bounds’ when making statements based on 1 year of data. If you are testing a model prediction for 2011 and the model fails, what is the likelihood that the model is still accurate?

OK, Chris, I am beginning to see where you are coming from. You don’t like what you see as “PR” statements, whether from the AGW side or the sceptic side. Fair enough, but being on the luke-warming sceptic side I consider it important to counter AGW PR statements.

You seem to want to be using science, with good long periods like 30 years, which is, rightly or wrongly, an accepted length for establishing norms. If it’s just an academic exercise, then that’s fine, but unfortunately the World Government doesn’t want to wait that long before making political decisions. So we need better models of future temperature, and if the current models prove to be inaccurate then they need to be revised.

By the way, I never said that I disagreed with Hansen spouting after 15 years of global warming, so I reserve my right to spout after 10 years of “stasis” :-)

You say that I need to take account of error bounds, but then of course I do – that’s what the 0.1degC standard deviation is about, which affects the Bayes factors. A value which is just 1 s.d. down on expected attracts a Bayes factor of exp(-1/2) = 0.61, which isn’t a big deal – until there are 6 in a row which turns into a 20:1 shot.
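The compounding can be checked in a line or two of Python (my own sketch of the figures quoted above):

```python
import math

# One year coming in 1 s.d. below expectation contributes a Bayes factor
# of exp(-1/2); six such years in a row compound multiplicatively.
one_year = math.exp(-0.5)
six_years = one_year ** 6  # = exp(-3)
print(round(one_year, 2))   # 0.61
print(round(1 / six_years)) # 20, i.e. roughly a 20:1 shot
```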

Well, if you never disagreed with Hansen making predictions based on 15 years, it seems odd to bring it up… It’s entirely possible that 15 years is long enough to draw conclusions and 10 years is not (I’m not saying that I agree with that, but you now seem to be using Hansen’s use of 15 years as proof of the correctness of your using 10 years). And your reasons for not starting with 98 or 99, I’m sorry, don’t convince me. For one thing, you didn’t present any reason why 12 years or 11 years have any scientific basis. And your reason for discarding them seems to be based on what I would call ‘previous knowledge’ of how those data points would affect the result. That isn’t scientific. If 12 years is a valid period, then ’98 is fair game. So, you are making what I call a pretty big assumption based on there being a current ‘stasis’, and you are basing that current stasis on something that you still haven’t explained.

Couple other things, if you don’t mind… Please share your source that 2010 didn’t beat 1998. And, please share what you would need to see in 30 years that you aren’t seeing now that would make you comfortable having the World Government take action.

Regarding the question of where to start between 1998 and 2001, I’ll be happy to calculate figures for each, if it makes people happy. 10 is just a nice round number intended to avoid cherry-picking. 11 might be more logical as it’s an average solar cycle. Starting and ending at local maxima (or minima) could also make sense – thus 1998-2010, or 1999-2008.

For me to be comfortable with the World Government taking action now, I would need to see that in 30 years time the anomaly would have risen from the 2000’s decade +0.44C to +1.04C in the 2030’s. In my humble opinion we don’t have enough evidence yet that that is at all likely.

When you look at the side by side comparison of GISS and HADCRU, it seems odd that you trust one dataset but ‘will have no truck’ with the other.

It also seems odd that you need roughly 60 years of warming, with accelerated warming at the tail end of that period, before you think it appropriate to take any mitigating steps at all, yet 10 years are enough for you to decide that we are in a stasis. I’m not sure I would call you a ‘luke warm skeptic’.

It also seems odd that you go out of your way to state that 2010 was a thousandth of a degree away from being beaten into 4th. From what I can tell, the error on a single year measurement is .05, or 50 times the difference you are quoting. Is that really significant to you?

There seem to be so many things strange with this that it is difficult to consider them all at once. First, why would the OP think that local weather conditions varying greatly from global trends be ‘strange’? Isn’t that why they call it weather?

>> Needless to say, this explanation is complete rubbish especially because record cold winter weather events mostly come from the cold wind from the Arctic, as Richard Lindzen likes to emphasize, but because the Arctic is allegedly getting quickly warmer – even more quickly than general places on the Earth – it should mean that the winds from the North won’t be able to bring such cold weather to the moderate zones and the record cold events should almost totally disappear

Hmmm… From Dec-Feb, the average high temp in Central England is around 1-2C. During the same period, the average arctic temp is around -30 to -35C (quickly pulled that from wikipedia). That kinda implies that the arctic could warm by quite a bit and still be considerably colder than Central England during the winter months, doesn’t it?

You appear to be more of an expert than I, could you please tell me just how close the arctic and CET temperatures need to be before the arctic can stop influencing winters in England?

Needless to say, this explanation is complete rubbish, especially because record cold winter events mostly come with cold wind from the Arctic, as Richard Lindzen likes to emphasize. But because the Arctic is allegedly getting warmer quickly – even more quickly than the Earth in general – the winds from the North should no longer be able to bring such cold weather to the temperate zones, and record cold events should have almost totally disappeared.

The BOM results are biased through:
a) selection of sites, mainly airfields which have a UHI effect, and the discarding of long-term rural sites;
b) adjustments and amalgamations done the wrong way, i.e. reduction of older temperatures and increases of more recent temperatures. This was shown in the New Zealand figures and has been found by a number of assessments of Australian data, as for Darwin.
Australian temperatures have some connection to ENSO. Temperatures in 2010 were lower than normal due to cloud cover and higher than normal rainfall. Temperatures in the period 1900-1915 (the Federation drought) were higher than in the last decade if the raw data from rural sites are properly analysed and adjusted for UHI.

The first hundred years or so of the CET is a very rubbery compilation of ‘data’, and the next hundred ain’t that flash either. I suggest you look at the details.

Steve - Good point. I don’t have the time or inclination to wade through this data, but I agree that one can’t lean too heavily on it. Michael Mann, on the other hand, was quite sure that he knew NH temperature in AD1000 within a couple of tenths of a degree – a statement that he made to the NAS panel. However, the 20th century data has fewer problems and it is interesting that Dec 2010 was lower than any 20th century December.

If you have a couple of hundred years of ‘rubbery’ figures you would certainly expect those figures to fluctuate, and for some years to be lower than this past month. There are no such December figures anywhere in the early CET, and that to me is quite a surprise.

I don’t find it strange at all. I have been looking into instrumental records for 10 years and most of the time raw, unadjusted, unhomogenised data from single stations show less (or no) warming than the “official” averaged data sets. Many stations show cooling, of course depending on time scale.

We need some kind of global temperature index (not average global temperature), with the best stations we have (no moving, minimum urban/UHI influence) and even if it’s only 20 stations worldwide (from every continent), it would still be better than these “tortured” official data sets. Scientifically better.
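The distinction being drawn here (an index, not an average) can be sketched in a few lines: take each station’s anomaly relative to its own baseline and average those anomalies, rather than attempting an area-weighted global mean. Everything below – station names, temperatures, baseline years – is invented for illustration; this is a sketch of the idea, not any official method.

```python
# Sketch of a station-anomaly temperature index. All numbers are invented.
from statistics import mean

# Hypothetical annual mean temperatures (degC) for a handful of stable rural stations.
stations = {
    "station_A": {1991: 9.1, 1992: 9.3, 1993: 9.0, 2009: 9.6, 2010: 8.9},
    "station_B": {1991: 14.2, 1992: 14.0, 1993: 14.1, 2009: 14.5, 2010: 14.3},
}

def station_anomaly(record, year, base_years):
    """Anomaly of one station relative to its own baseline mean."""
    baseline = mean(record[y] for y in base_years)
    return record[year] - baseline

def index(stations, year, base_years):
    """Unweighted mean of per-station anomalies -- a 'temperature index',
    not an area-weighted average global temperature."""
    return mean(station_anomaly(rec, year, base_years)
                for rec in stations.values())

print(round(index(stations, 2010, [1991, 1992, 1993]), 2))  # → -0.02
```

Because each station is compared only with itself, a station never has to stand in for a grid cell it doesn’t represent; the price is that the index tracks “the network”, not the globe.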

Given that this area was once rural and now is one of the most heavily industrialised and populated areas on the planet, the UHI would probably mean that this December was really the coldest. Even in 1890, the urban warming would not have been as significant, I’d suggest.

I think these coldish years do matter; maybe now there will be some advance in smoothing methods. mike writes (1062784268.txt) (I think this is somewhat related to CET smoothing?):

The second, which he calls “reflecting the data across the endpoints”, is the constraint I
have been employing which, again, is mathematically equivalent to insuring a point of
inflection at the boundary. This is the preferable constraint for non-stationary mean
processes, and we are, I assert, on very solid ground (preferable ground in fact) in
employing this boundary constraint for series with trends…
mike

I assert that a
preferable alternative, when there is a trend in the series extending through the
boundary is to reflect both about the time axis and the amplitude axis (where the
reflection is with respect to the y value of the final data point). This insures a point
of inflection to the smooth at the boundary, and is essentially what the method I’m
employing does (I simply reflect the trend but not the variability about the trend–they
are almost the same)…
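As I read the quoted passage, the padding being described is a point reflection of the series through each endpoint – reflecting about both the time axis and the amplitude axis – before a symmetric smoother is applied. The moving average below is my stand-in for the actual smoother, which the email does not specify.

```python
# Sketch of "reflect about both the time axis and the amplitude axis" padding.
def pad_reflect_both(x, n):
    """Pad a series at each end by point reflection through the endpoint:
    the padded value k steps past the right boundary is 2*x[-1] - x[-1-k],
    and symmetrically at the left boundary."""
    left = [2 * x[0] - x[k] for k in range(n, 0, -1)]
    right = [2 * x[-1] - x[-1 - k] for k in range(1, n + 1)]
    return left + list(x) + right

def smooth(x, half_width):
    """Centered moving average of width 2*half_width+1 over the padded series."""
    n = half_width
    padded = pad_reflect_both(x, n)
    return [sum(padded[i:i + 2 * n + 1]) / (2 * n + 1) for i in range(len(x))]

series = [0.0, 0.1, 0.3, 0.2, 0.5, 0.6, 0.8]
print(smooth(series, 2))
```

One consequence visible in the code: with point-reflected padding, any symmetric filter returns exactly the raw value at the boundary (each padded point pairs with a real point to average to the endpoint), which is one way to read the “point of inflection at the boundary” claim.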

And now this leads to the following figure:

Jones also mentions CET:

> > > Normal people in the UK think the weather is cold and the summer
> > > is lousy, but the CET is on course for another very warm year.
> > > Warmth in winter/spring doesn’t seem to count in most people’s
> > > minds when it comes to warming.

Yes, extrapolations are problematic if someone bothers to check those later:

Can’t understand what Mann means by ‘preferable constraint for non-stationary mean processes’. I’d prefer no smoothing at all if there is no statistical model for the process itself; something like this, maybe:

You obviously don’t have the time to “wade through” the CET data. I do, and have done for many years, and thus claim a fairly detailed knowledge of what has been recorded, at least at the monthly average and lower resolution levels.

The notion that real global changes of one or even 3 degrees C /per century/ can possibly or noticeably affect weather systems whose lifespan is measured in days or weeks is obviously preposterous. In many conversations with friends, where the subject matter drifts towards climate, I try to point out the folly of putting any faith in the virtually continuous stream of hype that is put out by our government, the BBC and other biased and ill-informed media sources.

If you would be interested in a long contribution on what can be gleaned from careful study of the CET record I would be pleased to attempt one. It would depend heavily on graphical presentation, and currently I do not know how to post graphics :-((

Can’t remember the CA thread, but I pointed out a few years back that CET in the early 18th century correlates v. poorly with Lund, Sweden, whereas De Bilt correlates well. As a data gourmand, I would love to see your more complete chewing of the CET data sausage.

The CET–Armagh comparison is also a worthy subject, which would require detailed knowledge of both. The situation at Armagh is fairly good, although some of their readings had to come from Dublin because of problems elsewhere (and I can’t remember why). Thermometer changes have occurred over the years, of course, but otherwise it is a relatively excellent record, though difficult to extract.

“the red line is a 21-point binomial filter, which is equivalent to a 10-year running mean.”
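For what it’s worth, a “21-point binomial filter” is conventionally built from the coefficients C(20, k), normalised to sum to one; whether that is exactly the filter behind the quoted red line is an assumption on my part. A quick sketch:

```python
# Weights of an n-point binomial smoothing filter: C(n-1, k) / 2**(n-1).
from math import comb

def binomial_weights(npts):
    """Normalised binomial filter weights of the given (odd) length."""
    n = npts - 1
    total = 2 ** n
    return [comb(n, k) / total for k in range(npts)]

w = binomial_weights(21)
print(round(sum(w), 6))   # → 1.0  (weights sum to one)
print(round(w[10], 4))    # → 0.1762  (centre weight, C(20,10)/2**20)
```

Unlike a flat running mean, the binomial filter tapers smoothly to tiny weights at its ends, so its effective averaging width is considerably shorter than its 21-point span; that is presumably how it gets compared to a 10-year running mean.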

I am not a statistician and therefore probably talking rubbish, but I understand that a change in the direction of a trend is confirmed by its duration and rate of change – in this case, the “cooling phase” being as quick as the preceding “warming phase”.

It is also not just about cold winters: this year’s CET “growing season”, April through October, was 0.6C lower than 2009 and 1.6C lower than 2006.

I regularly study the daily CET max figures, via an R program, to look for interesting records. For December 2010, while the mean, and perhaps the mean max, are reported above to be coldest since 1981, the lowest max (i.e. the coldest day) was only the coldest since 1981.

I was relieved to see that CET recorded -3.2 for the 19th – having lived through it and observed through it I would have suspected an improper adjustment if it had been higher.

Here’s the bigger record: the warmest December day, at 7.5, was the coldest since 1933.

Re the CET maximum records I have noted, here is a summary of the way things have been going in the last 3 years, with 18 cold phenomena to 2 warm ones, at a time when the globe is supposedly near peak warmth. (I only note phenomena which are extreme over at least 10 years).

I would have suspected that the cold spell in the Little Ice Age would be
colder than this year, specifically the sometimes freezing over of the Thames
around London in the late 1700s, early 1800s (Edward Rutherfurd’s London) or
even the freezing over of the Hudson River about the same time. Maybe it was too cold then for anyone to get out and make a measurement, or maybe having the thermometer next to a roaring fire has led to some interesting values?

Freezing of the Thames in a way that allowed the old ‘Frost Fairs’ to be held on the solid ice is much less likely now than prior to the 19th century – the last such event was in the winter of 1813/14. The classic book on London weather (J H Brazell) expressed the view that the chances of this type of freeze had been reduced considerably by the demolition and replacement (in 1831) of the old medieval London Bridge – a bridge of many narrow arches that even had houses and shops built upon it. This bridge acted as a considerable restriction on the flow of the Thames. Brazell also notes other factors, including the drainage of the marshes on the river bank and the building of the embankment – which ‘canalised’ the river – as well as the piping of tributaries. He also noted that, at the time he wrote in the 1960s, warm cooling water from factories and power plants had made the Thames warmer!

Steve, Some time ago you posed a question about why we perceive the winter temperatures now are somewhat warmer than those in the past.

After this December, which was record-breakingly cold, I did not perceive it to be as cold as those in my childhood. For the record, I am 58 and have always lived in the same place, right in the middle of Central England.

Well, in my childhood the houses and bedrooms were not well heated; ice on the inside of the windows, frozen pipes and so on were commonplace. Nowadays, the internal temperature resembles almost tropical splendour, all year round.

So AGW has always been an easy fit with our human perceptions, it only takes a few dubious statistical studies, a few warm winters and the end of the world is at hand.

Are we sure that the temperature has been measured in comparable ways? In particular, was there a correct procedure for the move from mercury-in-glass thermometers to electronic thermistors, and for the way that average daily temperature is calculated now as opposed to 30 years ago (e.g. half of Tmax+Tmin, or smoothed half-hour profiles, or whatever is used in GB)?
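The two daily-mean conventions mentioned here really can disagree on identical data. A toy illustration with an invented, deliberately asymmetric temperature profile (the numbers mean nothing; only the difference between the methods matters):

```python
# Compare (Tmax + Tmin) / 2 with the mean of half-hourly readings for one day.
import math

# Hypothetical half-hourly temperatures: a skewed sinusoid with an afternoon
# peak, so the warm part of the day is longer than the cold part.
readings = [5.0 + 4.0 * math.sin(math.pi * (t / 48.0) ** 1.5)
            for t in range(48)]

minmax_mean = (max(readings) + min(readings)) / 2.0   # old convention
profile_mean = sum(readings) / len(readings)          # modern convention

print(round(minmax_mean, 2), round(profile_mean, 2))
```

On this skewed profile the half-hourly mean comes out noticeably warmer than (Tmax+Tmin)/2, so splicing one convention onto the other without an overlap comparison would introduce a spurious step in the record.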

For an Australian example see my post at Jan 4, 2011 at 1:58 AM just below in “NASA GISS Adjusting the Adjustments”.

Dr. F. Loewe, “A period of warm winters in Western Greenland and the temperature see-saw between Western Greenland and Central Europe”, Quarterly Journal of the Royal Meteorological Society, Vol. 63, Issue 271, pp. 365-372, July 1937.

Hann may have termed it the Jakobshavn see-saw, but it had been noted much earlier by Hans Egede Saabye, a Danish Missionary, who wrote in his late 18th century diary: “When the winter in Denmark was severe, as we perceive it, the winter in Greenland in its manner was mild, and conversely.”
[ h/t to alexjc38, December 19, 2010 at 8:04 pm http://diggingintheclay.wordpress.com/2010/12/18/nao-is-the-winter-of-our-discontent/ ]

I maintain that I can see the effect of the 1999 Gakkel Ridge vulcanism in a time-lapse reconstruction of the Arctic ice. Apparently the area in question was cloud-covered at the time, but photos should show whether the clouds were ordinary Arctic clouds or the clouds that would form over open water or relatively warmed ice. Those photos exist but are difficult to view.

While 2010 worldwide is nudging 1998 for warmest year in the three major land temperature indices, 2010 was the 6th coldest CET year since 1950.

This is such an eyeopener, yet is something I’ve suspected forever.

Every time I hear of world temperatures at some point in the distant past, I know it is only based on a few data points from proxies, sometimes only from one. And that one proxy data point might be used to represent an entire decade or century.

CET is as good as it gets for reliable single-point long term temp records, yet here we see that it is in stark counterpoint to the global average.

If the best instrument record can be at such odds with what is put out as a global average, how much faith can any of us purport to have regarding proxies from ages past, which have so few data sources from which to draw?

No matter how good the data from a single source, extrapolating it out to a global average is just horrible methodology.

In election polling, it is common to try to find voting precincts that consistently and closely reflect the overall voting. These precincts are rare, but they exist and have proven to be accurate, at least for several election cycles.

We don’t have that luxury in climate studies. We can’t go back and see how Yamal or bristlecone pines accurately reflect the overall global temps in order to say that the ones in, say, northern California are better indicators than Siberian ones.

A look at 2010 as a point in the milieu shows us that Siberia is at this time a poor indicator of the balance of the world, yet Siberia seems overly weighted IMHO in order to get the global average for 2010. Tremendously anomalously cold temps at CET, in Australia, in South America, all are outweighed by the high temps of Siberia and Pakistan.

None of this makes any sense to me. It makes me doubt nearly every data set, to the point of wondering if the data is reliable at all, and if the warming since 1980 even exists in reality. I have to agree with the late Michael Crichton in thinking there is a problem with what is done with the data and what the resulting compilations mean. The recent UHI studies scream out that urbanization is skewing the data and the scientists charged with studying the data are missing the skew. And when CET (which is in an area that has been urbanized at or above the global average) shows such a divergence from the published global averages, what confidence can we have that the global averages are anything but garbage?

I am loath to use such a strong word as “garbage,” but I am not sure I’d line a bird cage with any of it. I want the data to reflect reality, and I simply don’t think it does.

It made me want confirmation of something. I assume that what it shows is anomalies during a La Nina compared to anomalies during the average of the rest of the time, with ‘the rest of the time’ normalized to zero. If so, it doesn’t tell me anything: it is colder during a La Nina than during the average of ‘the rest of the time’, and coming into a La Nina it is hotter than that average, probably because that is an El Nino. If nothing else, this is a plot of anomalies for 7 random years, and I don’t see any a priori reason why these 7 years should show anything special about climate or long-term climate trends. What exactly do you think this proves or shows about long-term temperature trends, and why?

Steve – and those interested in the Central England Temp Record should note that a strange green line has appeared on the CET showing a rise to above 1 degree when the data for feb shows -0.3. see here: http://hadobs.metoffice.com/hadcet/

I have asked the met office for an explanation

One Trackback

[…] 2010 was the second-coldest December in the entire history dating back to 1659,” noted Steve McIntyre, a climate scientist and the editor of climate blog Climate Audit. He bases his claim on data from […]