Temperature analysis of 5 datasets shows the ‘Great Pause’ has endured for 13 years, 4 months

HadCRUT4, always the tardiest of the five global-temperature datasets, has at last coughed up its monthly global mean surface temperature anomaly value for June. So here is a six-monthly update on changes in global temperature since 1950, the year when the IPCC says we might first have begun to affect the climate by increases in atmospheric CO2 concentration.

The three established terrestrial temperature datasets that publish global monthly anomalies are GISS, HadCRUT4, and NCDC. Graphs for each are below.

GISS, as usual, shows more global warming than the others – but not by much. At worst, then, global warming since 1950 has occurred at a rate equivalent to 1.25 [1.1, 1.4] Cº/century. The interval occurs because the combined measurement, coverage and bias uncertainties in the data are around 0.15 Cº.

The IPCC says it is near certain that we caused at least half of that warming – say, 0.65 [0.5, 0.8] Cº/century equivalent. If the IPCC and the much-tampered temperature records are right, and if there has been no significant downward pressure on global temperatures from natural forcings, we have been causing global warming at an unremarkable central rate of less than two-thirds of a Celsius degree per century.

Roughly speaking, the business-as-usual warming from all greenhouse gases in a century is the same as the warming to be expected from a doubling of CO2 concentration. Yet at present the entire interval of warming rates that might have been caused by us falls well below the least value in the predicted climate-sensitivity interval [1.5, 4.5] Cº.

The literature, however, does not provide much in the way of explicit backing for the IPCC’s near-certainty that we caused at least half of the global warming since 1950. Legates et al. (2013) showed that only 0.5% of 11,944 abstracts of papers on climate science and related matters published in the 21 years 1991-2011 had explicitly stated that global warming in recent decades was mostly manmade. Not 97%: just 0.5%.

As I found when I conducted a straw poll of 650 of the most skeptical skeptics on Earth, at the recent Heartland climate conference in Las Vegas, the consensus that Man may have caused some global warming since 1950 is in the region of 100%.

The publication of that result provoked an extraordinary outbreak of fury among climate extremists (as well as one or two grouchy skeptics). For years the true-believers had gotten away with pretending that “climate deniers” – their hate-speech term for anyone who applies the scientific method to the climate question – do not accept the basic science behind the greenhouse theory.

Now that that pretense is shown to have been false, they are gradually being compelled to accept that, as Alec Rawls has demonstrated in his distinguished series of articles on Keating’s fatuous $30,000 challenge to skeptics to “disprove” the official hypothesis, the true divide between skeptics and extremists is not, repeat not, on the question whether human emissions may cause some warming. It is on the question how much warming we may cause.

On that question, there is little consensus in the reviewed literature. But opinion among the tiny handful of authors who research the “how-much-warming” question is moving rapidly in the direction of little more than 1 Cº warming per CO2 doubling. From the point of view of the profiteers of doom (profiteers indeed: half a dozen enviro-freako lobby groups collected $150 million from the EU alone in eight years), the problem is that 1 Cº is no problem.

Just 1 Cº per doubling of CO2 concentration is simply not enough to require any “climate policy” or “climate action” at all. It requires neither mitigation nor even adaptation: for the eventual global temperature change in response to a quadrupling of CO2 concentration compared with today, after which fossil fuels would run out, would be little more than 2 Cº – well within the natural variability of the climate.

It is also worth comparing the three terrestrial and two satellite datasets from January 1979 to June 2014, the longest period for which all five provide data.

We can now rank the results since 1950 (left) and since 1979 (right):

Next, let us look at the Great Pause – the astonishing absence of any global warming at all for the past decade or two notwithstanding ever-more-rapid rises in atmospheric CO2 concentration. Taken as the mean of all five datasets, the Great Pause has endured for 160 months – i.e., 13 years 4 months:
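The “Great Pause” figure is the kind of quantity anyone can reproduce from the published monthly anomalies: find the longest period ending at the latest month over which the least-squares trend is zero or negative. A minimal Python sketch of that calculation follows; the function names and the synthetic series are mine, not taken from any of the five datasets, and real use would substitute the mean of their monthly anomalies.

```python
def ols_slope(y):
    """Ordinary least-squares slope of y against 0, 1, 2, ... (per time step)."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def pause_length(anomalies, min_months=24):
    """Months in the longest period ending at the latest month whose
    least-squares trend is zero or negative; 0 if there is none."""
    for start in range(len(anomalies) - min_months + 1):
        if ols_slope(anomalies[start:]) <= 0:
            # the earliest qualifying start gives the longest period
            return len(anomalies) - start
    return 0
```

Fed a series that rises and then goes flat, the function returns the length of the flat tail; applied to the five-dataset mean, the same logic yields the 160 months quoted above.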

The knockout blow to the models is delivered by a comparison between the rates of near-term global warming predicted by the IPCC and those that have been observed since.

The IPCC’s most recent Assessment Report, published in 2013, backcast its near-term predictions to 2005 so that they continued from the predictions of the previous Assessment Report published in 2007. One-sixth of a Celsius degree of warming should have happened since 2005, but, on the mean of all five datasets, none has actually occurred:

The divergence between fanciful prediction and measured reality is still more startling if one goes back to the predictions made by the IPCC in its First Assessment Report of 1990:

In 1990 the IPCC said with “substantial confidence” that its medium-term prediction (the orange region on the graph) was correct. It was wrong.

The rate of global warming since 1990, taken as the mean of the three terrestrial datasets, is half what the IPCC had then projected. The trend line of real-world temperature, in bright blue, falls well below the entire orange region representing the interval of near-term global warming predicted by the IPCC in 1990.

The IPCC’s “substantial confidence” had no justification. Events have confirmed that it was misplaced.

These errors in prediction are by no means trivial. The central purpose for which the IPCC was founded was to tell the world how much global warming we might expect. The predictions have repeatedly turned out to have been grievous exaggerations.

It is baffling that each successive IPCC report states with ever-greater “statistical” certainty that most of the global warming since 1950 was attributable to us when only 0.5% of papers in the reviewed literature explicitly attribute most of that warming to us, and when all IPCC temperature predictions have overshot reality by so wide – and so widening – a margin.

Not one of the models relied upon by the IPCC predicted as its central estimate in 1990 that by today there would be half the warming the IPCC had then predicted. Not one predicted as its central estimate a “pause” in global warming that has now endured for approaching a decade and a half on the average of all five major datasets.

There are now at least two dozen mutually incompatible explanations for these grave and growing discrepancies between prediction and observation. The most likely explanation, however, is very seldom put forward in the reviewed literature, and never in the mainstream news media, most of which have been very careful never to tell their audiences how poorly the models have been performing.

By Occam’s razor, the simplest of all the explanations is the most likely to be true: namely, that the models are programmed to run far hotter than they should. They have been trained to yield a result profitable to those who operate them.

There is a simple cure for that. Pay the modelers only by results. If global temperature failed to fall anywhere within the projected 5%-95% uncertainty interval, the model in question would cease to be funded.

Likewise, the bastardization of science by the IPCC process, where open frauds are encouraged so long as they further the cause of more funding, and where governments anxious to raise more tax decide the final form of reports that advocate measures to do just that, must be brought at once to an end.

The IPCC never had a useful or legitimate scientific purpose. It was founded for purely political and not scientific reasons. It was flawed. It has failed. Time to sweep it away. It does not even deserve a place in the history books, except as a warning against the globalization of groupthink, and of government.

In the 21st century temps have flatlined and CO2 has skyrocketed. Do believers know it’s 2014, with just 86 more years to go before the scary 2100? Maybe their next move will be to adjust CO2 down, then project CO2 to skyrocket with temps.

Maybe when they adjust CO2 down they will say it went and hid in the deep oceans, and it will come back with a fury, making things worse than we first thought. I have a feeling they will play that card within the next 5 years, as this great pause has them scraping the bottom of the idea barrel to keep their agenda alive.

Meanwhile, in the UK, the so-called ‘protectors of the countryside’ are flailing about over fracking sites that are small and readily shielded by vegetation, whilst at the same time supporting bird mashers on every hilltop and in every coastal view, and acres of solar panels frying birds in flight in as many formerly green fields as possible.

Meanwhile the fear of ‘peak oil’ is disappearing as technological advances rapidly make vast reserves within the Earth’s crust available for use very cheaply for the indefinite future.

Fossil fuels are making more and more nations wealthy enough to get to the point where individuals can afford to decide to voluntarily reproduce at less than replacement level.

Fossil fuels, in getting people to a position where they choose to have fewer children, are the very means by which long-term sustainability can be achieved, via a voluntary slow decline in global population after the peak that is expected within the 21st century.

Nuclear energy has been grossly maligned, even as modern techniques offer increasingly tempting opportunities for huge amounts of relatively cheap energy. Radiation leaks and ‘accidents’ have led to surprisingly small negative consequences compared to the benefits. More people were killed or injured in the primitive 19th century UK coal mining industry than have ever been harmed by the nuclear power industry worldwide.

Plentiful cheap energy is there for the taking but that doesn’t suit certain political interests who regard a free and prosperous population as a threat to their hold on power.

Climate alarmists including Greenpeace et al are, in reality, the enemies of the planet in that they are wilfully and sometimes violently obstructing a natural progression of humanity towards a sustainable accommodation with Gaia.

The Luddites who opposed the industrial revolution were of the same caste of mind.

The world never was created as a paradise. It has been red in tooth and claw from the beginning. Humanity has always been engaged in a desperate struggle for survival against a hostile environment. The Garden of Eden was always a fantasy.

Fossil fuel use is the means by which humanity elevated itself above the viciousness of natural competition for survival and it turns out that there is more than enough energy cheaply available for us to get to a point where we can both live with nature and preserve it.

It is no coincidence that the wealthiest nations have greatest social stability and the most cared for environments.

The Earth’s crust is big enough and deep enough to enable us to achieve that which is necessary and the doomsayers are the main obstacle in our way.

It is time to tell them to go away and preach their negativism elsewhere.

Unfortunately, politicians gain power by fomenting fearfulness and encouraging a dependent electorate. As long as we vote for politicians who tell us that they will protect us from one threat or another then we shall remain slaves.

This appears to be a fair interpretation of the facts which clarifies that there is no reason to panic or spend vast amounts of money preparing for a disaster that is not going to happen. Adaptation to climatic changes as and when required would seem more intelligent.
Does anyone disagree with the data provided or the way it has been analyzed?
I am waiting patiently for a response from Paul Wheelhouse, the Scottish Minister for Environment and Climate Change, referring to similar issues.
Perhaps someone could forward this document to him to encourage him in turn, to respond to me.
I feel embarrassed about harassing him again.

I think it should have been “have gotten”. “Gotten” is acceptable in American English, but apparently it is not in British English. “had gotten”… implies that they no longer pretend this is the case. They still persist in their belief that skeptics do not accept the basic science of greenhouse gases.

It is surely time for a collective call to account to be issued – by a single body representing all sceptic organisations, scientists, journalists and blogs – to those responsible for this nonsense: the IPCC, leading alarmists and NGOs, the national governments of the US, UK and EU, and the alarmist media.

It’s time they were held to account for wrongly promoting what are now transparently failed projections, improbable scenarios and impossible outcomes.

AGW theory is dying on the vine right before our eyes. There is not one key indicator that is moving in favour of AGW theory. It’s way past time those responsible for pushing this junk were held responsible.

Gotten, past participle of “get”. The OED gives it a Middle English origin along with “got.” The OED does indicate the use is largely North American. The preceding “had” actually places it properly in past tense.

Steven Wilde: I have it on the testimony of a French engineer, whom I ran across in some contract work about 10 years ago. A “First Form” pupil, or 7th grader, in France, by the time she or he moves to the UPPER level (i.e., their grade school runs to 7th grade, I think), can be asked to go up to a blackboard and draw the nuclear power cycle, from ore to waste, with everything in between as block diagrams, with lines to “link” things. Not really engineering, but like knowing where the spark plugs are, the water pump, the distributor, the transmission, the oil pan, the differential, the gas tank, etc. in a car: it makes you a better owner/operator. Certainly having a complete concept of the nuclear cycle, ore to disposal (after reprocessing), helps you to be a better-informed citizen when it comes to choice. I also have it on “bad authority” that an American 7th grader “feels good” about themselves, and can put a condom on a cucumber. Alas, therein lies the problem.

Gotten, past participle of “get”. The OED gives it a Middle English origin along with “got.” The OED does indicate the use is largely North American. The preceding “had” actually places it properly in past tense.
==========================================================
Indeed. Some USA language usage is closer to that of Shakespeare than is current UK English. There are those in the UK who mock the likes of ‘gotten’, displaying their ignorance of how language changes.
See also e.g. ‘dived’ and ‘dove’.

SCheesman and James Abbott question the 17+ years of no global warming. If I may explain…

In 1999, über-Warmist Phil Jones gave an interview. He was asked about the fact that global warming had apparently stopped for the previous 2 years. Jones responded that it had stopped, but that to be statistically significant, global warming would have to stop for at least fifteen years. Remember that this ran from Jones’ own starting date of 1997.

Jones probably felt very secure giving himself another 13 years of wiggle room. But in the event, Planet Earth called his bluff: as of now there has been no global warming for more than 17 years, which takes us back to 1997.

That is why 1997 is often used as the beginning date: it was Phil Jones’ designated starting year.

Now, of course, that date causes immense consternation among climate alarmists. Their own HE-RO, Phil Jones, picked it himself. Some alarmists try to blame skeptics, but anyone familiar with events knows better.

Hello – how long will it be before we collectively realise that CO2 is a good gas? That simple fact is the problem. All the rest is just talk.

It’s a giant-sized scam, engineered by persons who want both power and lots of money. Expose them for what they are: for example, how does “Green” Al Gore actually live, and how many carbon miles does he clock up? We can then laugh at these wealthy clowns. As for the “Dumb” Greenies, all wanting the warm inner glow, it’s time they moved from their ivory towers into the real world. Put them naked into a cave in a cold place so they can find out that nature is not as Walt Disney portrayed it.

Unfortunately I don’t believe any of the above data. Recently someone posted here on WUWT a compilation of reliable class 1 rural surface data worldwide from 1640, showing a 0.46C “warming” per 100 years, which absolutely mimicked CET (which I’ve always maintained is the ONLY reliable surface data, plus Armagh). This I do believe, and 0.46C of change over 100 years might as well be 0C in my books! LOL

It can be your back yard temperature at noon for the last 15 years, or it can be the real telephone numbers in the Manhattan telephone directory, or the number of animals per square km , or even the numero-alphabet character at the right of the top line, on each page of an original edition of “Gone with the Wind.”

Once the dataset is known, the statistics are exactly determinate.

You are welcome to calculate the result yourself, since M of B has told you what the data set is.

As past participles of get, got and gotten both date back to Middle English. The form gotten is not used in British English but is very common in North American English, though even there it is often regarded as non-standard. In North American English, got and gotten are not identical in use. Gotten usually implies the process of obtaining something, as in he had gotten us tickets for the show, while got implies the state of possession or ownership, as in I haven’t got any money.

Although the British stopped using the past participle gotten about three hundred years ago, the American colonists and their descendants – especially in New England – still tend to use it. Some English teachers have tried to ban its usage to make American English conform to British English, especially during the nineteenth and early twentieth centuries, when there was a movement to purify English. Others are simply not used to it because it is not used in their region, and hear it as an error.

Ultimately, language is convention. If you are writing for a formal audience outside New England, you might want to use the simple past form got instead. It is like the dictum never to end a sentence with a preposition: something up with which some people just will not put!

Yes. For example: “Since I last saw you, you have gotten big!” Gotten is correct, and very old. In England many people wrongly assume that gotten is a modern Americanism, but the truth is the English more or less stopped using it, and have forgotten (!) that they used to use it. That said, “gotten” isn’t good English. In most cases other, more precise and meaningful words should be used in its place. While “have got” sounds wrong to American ears, “have gotten” can usually be replaced by “have become”, and “have been able to” or “have had the chance/opportunity to” would make better sense in other situations. “You would have got along with him” is proper English.

In the UK, the old word “gotten” dropped out of use except in such stock phrases as “ill-gotten” and “gotten up,” but in the US it is frequently used as the past participle of “get.” Sometimes the two are interchangeable, however, “got” implies current possession, as in “I’ve got just five dollars to buy my dinner with.” “Gotten,” in contrast, often implies the process of getting hold of something: “I’ve gotten five dollars for cleaning out Mrs. Quimby’s shed” emphasizing the earning of the money rather than its possession.

Phrases that involve some sort of process usually involve “gotten”: “My grades have gotten better since I moved out of the fraternity.” When you have to leave, you’ve got to go. If you say you’ve “gotten to go” you’re implying someone gave you permission to go.

In increasing order of prevarication the scale goes: lies, damned lies, statistics, and government reports. UN and NGO reports are completely off the prevarication scale, but it would look like a hockey stick at the right end.

Likewise, the bastardization of science by the IPCC process, where open frauds are encouraged so long as they further the cause of more funding, and where governments anxious to raise more tax decide the final form of reports that advocate measures to do just that, must be brought at once to an end.

I don’t think increased taxation is the main motive. Those in attendance at the SFP meeting are not primarily politicians, but emissaries of the Environmental departments of their governments. (In countries with proportional representation systems, the Greens are usually given control of the Environmental Department as their payoff for being part of a coalition government.) As such, they are full-fledged alarmists, and they are less cautious than scientists about going beyond the data. That is why the final SFP is more alarmist than the draft versions, and omits the embarrassing charts in the earlier drafts, and is more alarmist than the base document.

dbstealey, you “explain” nothing. What Phil Jones said in 1999 is completely irrelevant to the content of Lord M’s interesting article – which happens to back up what I have been saying for some time (i.e., that the pause started in 2002) – and which you (along with others) have regularly attacked.

george e. smith you miss the point entirely. Lord M’s (more realistic) analysis is based on taking the 5 data sets together which cover both satellite and terrestrial temperature measurements – as opposed to the single RSS data set (satellite sensing of atmospheric temperature in various altitude bands) which happens to give a favoured result for those looking for the least warming.

As a matter of historical interest, the use of the word gotten has an honourable mention in one of the finest examples of sublime English language, the Book of Common Prayer of 1662.

In the rubrics at the end of the Order for the Administration of the Lord’s Supper or Holy Communion the following instruction is given: “And to take away all occasion of dissension and superstition, which any person hath or might have concerning the Bread and Wine, it shall suffice that the Bread be such as is usual to be eaten; but the best and purest Wheat Bread that conveniently may be gotten.”

I am sure the Pilgrim Fathers took with them many Lincolnshire words and phrases, some of which have survived in current American culture.

I have two beefs with these data-sets. The first is that it is asinine to draw a straight line through the data from 1950 to 2014. If you want to show this temperature range, don’t be lazy: analyze the temperature changes taking place within it. As a corollary, there is not sufficient detail visible to do it well, because the horizontal scale is too compressed.

The second is that all three ground-based data-sets still feature the phony computer-processing traces I have from time to time pointed out. What happened is that the three global temperature sources considered in this article have been cooperating to fix the global temperature curve to their liking. Their first move was to change an eighteen-year temperature stretch in the eighties and nineties from no warming to warming, and designate it as “late twentieth century warming”. Satellite data show that there was no warming from 1979 to 1997, but their temperature curve shows a rise of 0.1 degrees Celsius between these two points.

That they cooperated in fixing this temperature came out accidentally, because they screwed up. They used computer processing on all three data-sets using the same equipment, and as an unanticipated consequence the operation left traces of itself in the finished product. These consist of sharp upward-pointing spikes at the beginnings of years in their published curves. You are showing all three data-sets as well as the UAH and RSS satellite curves in this article, so you should have no trouble recognizing these inserts by comparing satellite and ground-based data. The most prominent of these phony spikes in GISS are located at years 1987, 1990, 1995, 1998, 1999, 2002, 2007, and 2008. There are others, harder to find. They happen to be at the exact same locations in all three databases. The ones at 1998 and 1999 are attached to the super El Nino of 1998, which becomes a tenth of a degree higher than it is.

This kind of manipulation is exactly what Michael Crichton was afraid of would happen when he spoke to the United States Senate in 2005: “…let me tell you a story. It’s 1991, I am flying home from Germany, sitting next to a man who is almost in tears, he is so upset. He’s a physician involved in an FDA study of a new drug. It’s a double-blind study involving four separate teams—one plans the study, another administers the drug to patients, a third assess the effect on patients, and a fourth analyzes results. The teams do not know each other, and are prohibited from personal contact of any sort, on peril of contaminating the results. This man had been sitting in the Frankfurt airport, innocently chatting with another man, when they discovered to their mutual horror they are on two different teams studying the same drug. They were required to report their encounter to the FDA. And my companion was now waiting to see if the FDA would declare their multi-year, multi-million-dollar study invalid because of this contact.
For a person with a medical background, accustomed to this degree of rigor in research, the protocols of climate science appear considerably more relaxed. A striking feature of climate science is that it’s permissible for raw data to be “touched,” or modified, by many hands. Gaps in temperature and proxy records are filled in. Suspect values are deleted because a scientist deems them erroneous. A researcher may elect to use parts of existing records, ignoring other parts. But the fact that the data has been modified in so many ways inevitably raises the question of whether the results of a given study are wholly or partially caused by the modifications themselves.”

PS to my comment of 4:41 pm above. All the governmental delegates at the Summary For Policymakers meetings who come from developing nations (a majority of delegates) have an additional motive for alarmism: the transfer payments they hope to get from the knock-on effects of an alarmist report.

[IPCC AR5] conclusions have been reached with high statistical confidence by a working group made up of many of the world’s leading climate scientists drawing on areas of well-understood science – UK House of Commons report on IPCC AR5.

What is statistical confidence? In science you determine it in two steps:
1. Form a hypothesis (“conclusion” in IPCC jargon).
2. Test it against available data. The more data you have the more confidence you may get. There are formulas to compute a confidence level from a number of observations supporting or opposing the hypothesis.

Please note that statistical confidence has nothing to do with the size of a working group, or the number of leading climate scientists in it. “IPCC confidence” might be a better term.
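One of the formulas this comment alludes to can be sketched in a few lines of Python: a normal-approximation confidence interval for the proportion of independent observations supporting a hypothesis. This is illustrative only; the function name and the numbers are mine, and real attribution studies involve far more than a simple proportion.

```python
import math

def proportion_ci(supporting, total, z=1.96):
    """Approximate 95% confidence interval (z = 1.96) for the true
    proportion, using the normal approximation to the binomial.
    Reasonable only when total is fairly large."""
    p = supporting / total
    half = z * math.sqrt(p * (1 - p) / total)
    return max(0.0, p - half), min(1.0, p + half)
```

With 60 supporting observations out of 100, the interval is roughly 0.50 to 0.70 – and note that it depends only on the observations, not on who sits on the panel.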

And this reckoning is based on the admitted fiddling of the temperature record that is presently built into the algorithms of the main record keepers. Temperatures a hundred years ago were likely more than 0.3 C warmer than the reworked figures show, and the discrepancy is climbing.

“Get” also has roots in Old Norse and German words of similar meaning. The global spread of English is due in part to its large vocabulary, which comes from the lack of strict rules for admitting new words into the dictionary – rules of the kind the French tend to impose. An ugly but complex grammar may also help the spread of English, as it facilitates expressing complex thoughts.

I will go along with thirteen years, four months. I always wondered why others wanted to include the period covered by the super El Nino of 1998 because it is simply a different kind of temperature regime. If you study the satellite data carefully you will see that on both sides of the super El Nino are narrow La Nina valleys. The one on the right starts to form a new El Nino peak but strangely overshoots its mark and ends up a third of a degree higher than I expected. This is an unexpected step warming that created the no-warming platform we now inhabit. And instead of another El Nino peak we got a seven year, basically flat, temperature platform until the La Nina of 2008 finally showed up. (That is the one that Trenberth did not understand!). I put it all down to the extra warm water which the huge super El Nino brought over but the flat mean temperature persisted and here we are, trying to understand why the temperature did not go down when the century started.

Just how much of an increase in temperature would it take for the warmistas to dance in the street and declare that the “pause” was over? Or would it take more than one single reading… perhaps readings over a year?

“Calling it a “pause” is disingenuous, because in order to be a ‘pause’, we would have to be able to look back and see when warming resumed. But it hasn’t.”

See? That’s an explanation, isn’t it? Words matter, and “pause” is the wrong word under the circumstances.

Next, you say:

What Phil Jones said in 1999 is completely irrelevant to the content of Lord M’s interesting article

So what? I was explaining the background, for your edification.

Next:

(…the pause started in 2002) – and which you (along with others) have regularly attacked.

Because it deserves to be attacked. Is there something illogical or wrong with my point that you can only call it a “pause” if global warming resumes? For now, global warming has stopped. It may resume. Or not. Or, global cooling may start. Nobody knows. However, “pause” clearly implies that global warming will start again. But you don’t know that. Nobody does.

Next:

george e. smith you miss the point entirely. Lord M’s (more realistic) analysis is based on taking the 5 data sets together which cover both satellite and terrestrial temperature measurements – as opposed to the single RSS data set (satellite sensing of atmospheric temperature in various altitude bands) which happens to give a favoured result for those looking for the least warming.

Wrong again. George E. Smith can easily defend himself, but James Abbott, you go off on an unrelated tangent here:

…which happens to give a favoured result for those looking for the least warming.

Who is looking for “the least warming”? It is what it is. The point here is, rather, that global warming has stopped. And not for a short time, but for many years now. I know that causes you great consternation, because the planet isn’t conforming to what you believe it should be doing [accelerated warming]. But that is what happens when you put belief ahead of science.

As George says:

You are welcome to calculate the result yourself, since M of B has told you what the data set is.

But you would rather talk about the putative “pause”. There is no pause. Global warming has stopped. Your predictions were wrong. Accept reality. ☺

I would stick with the “no global warming for 17 years, 10 months.” You have the data, so why change it? You might be muddying the waters. (They will say that Monckton has retreated from his position on the warming pause.)

I didn’t expect to be schooled on that word, of all words. I am sure my 7th grade English teacher, Mrs Amrhein, never banned that word in 1957, and she was a real stickler. My bigger concern is hearing announcers on national TV saying “have went” or “he don’t”. Eck! Like fingernails on the chalkboard.

I’m pleased we are now talking about 13 years and not 17. With consensus data sets and trend-line calculations, the flat-lining period is very robustly portrayed. Using just the sceptics’ favorite temperature set (currently UAH) and stretching it to 17 years, ten months smells faintly of cherry-picking. When the data is on your side, you don’t need to Mann-handle it. Using 13 years instead of 17 is more accurate and more defensible.

Also, with the big warm-up in the mid-to-late 1990s, the flat-line trend is not going to reach back beyond 1997 for a long, long time unless some serious cooling kicks in. And a word of warning: the trend is quite flat, so it won’t take much (a good El Nino?) to turn all those near-flat trend lines slightly upward.

13.4 years is longer than the time it took for global cooling, all the rage in the 1970s, to be redefined as global warming in the 1980s. I note they’re not so quick to jump on the global climate pause bandwagon. The pause is quite honestly the least likely of the natural states of the climate, so you’d think they’d hasten to blame it on humans. There is something quite odd, in fact, about a stagnant climate. Think of all the drivers that have to be doing nothing right now for there to be a pause decades long.

I take it the error margins said to be associated with these data sets are the STATISTICAL error margins, which are themselves components of the much larger SURVEY margins of error. But we should not expect climate scientists to be aware of this latter concept, as there is no cause to believe that they are in any way experts in the collection, analysis, and interpretation of survey data, something I’ve been doing for 45 years. Actually, for various reasons, I think that the land-based matrix of temperature stations is a dog’s breakfast, and that the claims made for and from it are bloody outrageous, excuse my language.

Steven Wilde: I have it on the testimony of a French engineer, whom I ran across in some contract work about 10 years ago. A “First Form” pupil, or 7th grader, in France, by the time she or he moves to the UPPER level (i.e., their grade school runs to 7th grade, I think), can be asked to go up to a blackboard and draw the nuclear power cycle, from ore to waste, with everything in between as block diagrams, with lines to “link” things. Not really engineering, but like knowing where the spark plugs are, the water pump, the distributor, the transmission, the oil pan, the differential, the gas tank, etc. in a car…
_________________________
So, what did the French end up with?
A bunch of nuclear power plants and the Citroen Goddess.

“He states quite clearly in the present post that he took the average of five data sets.”

The underlying assumption is that the data sets are accurate, or at least reasonably so. If you average 5 sets of data that are wrong, you really aren’t providing more or better information. Since all the data sets disagree about the temps over the period, they can’t all be right, can they?

Oh dear, the magnificent Lord Monckton has a logical error in this one. Below the first set of graphs, he says, “The IPCC says it is near certain that we caused at least half of that warming – say, 0.65 [0.5, 0.8] Cº/century equivalent. ”

“At least half” is quite different from “at most half” or “exactly half.” Yet the rest of Monckton’s calculations seem predicated on “at most half.” We could even be causing more warming than the entire graph shows, as some alarmists claim, with Nature cooling the temperatures.
I also point out that carbon dioxide has not doubled in this period. It has risen from about 300 ppm to 400 ppm. Assuming that CO2 caused around 0.6 °C of warming from that rise (wild speculation), then a full doubling would cause over 1 degree of warming.

In high school physics, I learned that the essence of science is correct prediction. The IPCC is falsified as science by that standard, as shown in one of Monckton’s graphs above. But then IPCC does not stand for International Panel on Climate Change but for InterGOVERNMENTAL Panel on Climate Change. They are politicians out for tax dollars – at any cost to the actual biosphere.

I consider it unlikely that a doubling of carbon dioxide would cause more than 1/5 degree C rise because graphs of past temperatures and carbon dioxide levels show no correlation at all for the ones produced before alarmism began, and carbon dioxide leading for present ones. Another century or so, and a much larger rise in carbon dioxide may–perhaps–make the matter clear.
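The scaling implicit in the 300 ppm to 400 ppm point above can be checked in a few lines, assuming a purely logarithmic response; the 0.6 °C attribution is, as the comment itself says, speculation, and the figures are the comment’s own:

```python
import math

# Rough check of the comment's back-of-envelope scaling: if warming responds
# logarithmically to CO2 concentration, the observed rise (~300 -> ~400 ppm)
# is only a fraction of a doubling.
c0, c1 = 300.0, 400.0            # ppm, the comment's figures
attributed_warming = 0.6         # deg C assumed caused by that rise (speculative)

fraction_of_doubling = math.log(c1 / c0) / math.log(2.0)
per_doubling = attributed_warming / fraction_of_doubling

print(f"300 -> 400 ppm is {fraction_of_doubling:.3f} of a doubling")
print(f"implied warming per doubling: {per_doubling:.2f} deg C")
```

On those assumptions the implied sensitivity works out to roughly 1.4 °C per doubling, consistent with the “over 1 degree” figure above.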

icepilot says:
July 29, 2014 at 4:30 pm
I would note that almost the entire range of the IPCC’s 2005 prediction is below that of the 1990 prediction range, yet still above that of reality.
______

I’m glad you pointed this out. Since Nature didn’t cooperate with the earlier predictions/projections, IPCC and friends have devised new ones that are closer to observation. They may even try to justify their high confidence in the models on this basis. But it is important to remember that they sold the warming panic based on the work that was done in the 90’s. So, yeah, the projections they have now are a bit closer to actual measurements, but they are also less scary. If, way back then, they had used the more accurate models or data sets that we have today, would it have been frightening enough to stampede the world into all of its climate emergency measures?

The problem with defending the purity of the English language is that English is about as pure as a cribhouse whore. We don’t just borrow words; on occasion, English has pursued other languages down alleyways to beat them unconscious and rifle their pockets for new vocabulary.

Just how much of an increase in temperature would it take for the warmistas to dance in the street and declare that the “pause” was over? Or would it take more than one single reading… perhaps readings over a year?

Yesterday, I saw an article in a Democrat-type web newszine that said this past June was “the hottest June on record” at any time in recorded history. With cherry-picking the news, they can keep up their alarmism for a long time. This one was a bit unusual in mentioning the temperature itself – 61.3 degrees F, or something like that. This is 16.3 degrees C. That is far from hot – that is COLD!! They never notice that.

Harry Reid and his ‘Democratic’ “Whores” in the Senate are using the “Impeach Obama” meme to squeeze donors to line their pockets with cash [no checks, no credit cards, only cash], after “some” taxes by the “Gate Keeper” of course. snicker snicker.

Yes off topic.

But Hansen in 1988 went very “native”, and that is what got him fired in 2013!

A bit off topic, but in a comment on another thread someone (I cannot recall who or on which thread) suggested that the AGW believers had never got a single prediction right. I know some of their biggies (temperature, hot spot, ice) were wrong, but is there nothing they got right?

I’d caution against gloating over the pause too much. The earth has been warming up for the last 400 years, since the Little Ice Age. If you take the LIA out as a blip, the earth has been warming for thousands of years. I’ve no reason to believe that this long term trend won’t exhibit itself given enough time. What the current pause suggests is two things:

1. Sensitivity is far lower than IPCC estimates
2. The climate models are invalid based on the metrics of the modelling community themselves

When the pause ends (and it will, either up or down; the prospect of it staying the same for decades more is unlikely) the real argument will remain the same. The models can’t emulate natural variability and hence cannot ascribe any amount of warming (or cooling) to anthropogenic causes, and sensitivity has been grossly overestimated.

Generally, temperature follows the precipitation condition. If precipitation is above average, temperature runs below normal; if precipitation is below average, temperature runs above average. The Southern Oscillation factor is only part of this game. For example, in 2009 a severe drought increased the temperature by 0.9 °C.

Hot d@mn, my tomato plants on the subtropical Fraser Coast are stunted weeds, because the weather has been too cold for them to grow.
============================================================================

“Global warming has stopped. Your predictions were wrong. Accept reality …”
==================================
Agreed; at this stage of the game it has stopped, and that’s all one can say.
It puzzles me why those who genuinely believe that any future warming of the planet would be unequivocally harmful are so reluctant to accept that fact.

Werner Brozek;
Take a look at the following and tell me if we are in a pause or a cooling period:
>>>>>>>>>>>>>>>>>

I can’t answer. The data is too noisy and the time period too short, in my opinion, to draw any conclusions. Plus, we’re not even sure what the error bars are on that graph. RSS could be the best of the temperature records, or it could be the worst.

But let’s say for the moment that the 5-year trend you’ve spotted is real, and continues for another 5 years. Plotted since 1950, there would still be a substantive warming trend, and the warmists would still argue that there’s a serious problem, just being “hidden” by natural variability. Now, if the cooling trend were to get so pronounced that over 5 years it actually wiped out all the warming since 1950… well, the debate would certainly be over, but not in a good way.

But let’s go the other way, and suppose that over the next 5 years warming resumes more or less the same as since 1950. My point is that this would still point to lower sensitivity than calculated by the IPCC and would still invalidate the models because they need several times that rate over 5 years to catch up to their “projections” in any manner that would suggest sensitivity is high enough to be dangerous.

Which is why I focus on sensitivity across the temperature record rather than the length of the pause.

Given the vast quantities of cold Southern Ocean water gushing into the equatorial Pacific off the coast of Peru, the strong 2014 El Nino cycle the CAGW bed-wetters were predicting ain’t gonna happen:

Accordingly, there is a high probability global temp anomalies will continue to fall, or at least remain flat, for another 6 months to a year. If a full-blown La Nina event develops, then global temps will most likely fall for the next 1.5 years, making almost 20 years of flat/falling global temp trends (RSS)…

To top it off, the Arctic is experiencing its 3rd coldest summer (the coldest Arctic summers were 2010 and 2013…) since DMI started keeping records in 1958…. This year’s cold Arctic summer will add further consternation to the CAGW bed-wetters who were desperately hoping to propagandize a new record-low Arctic Sea Ice Minimum this September; not so much…

“By Occam’s razor, the simplest of all the explanations is the most likely to be true: namely, that the models are programmed to run far hotter than they should. They have been trained to yield a result profitable to those who operate them.”

That’s it, there is nothing more to be said.

For this, the western world is beggaring its economies and embracing unreliable and expensive energy supplies, thus ensuring future fuel poverty.

Steve Case says:
July 29, 2014 at 6:52 pm
If you plot out the entire HADCRUT4 Global Mean, http://woodfortrees.org/plot/hadcrut4gl/from:1850/to:2014
it sure looks like it’s nosing over.
——————————————————————————————
Indeed. But what I always find puzzling, Steve, is why the warmers and others get their knickers in a knot over the rise in temps. Put the normal range of temperature over, say, a season on the left side of the graph, and that HADCRUT plot would look pretty much like a straight line!

Remember the Precautionary Principle: We can’t wait for evidence because the risk of irreversible harm is too great.

Well, it isn’t. The rate of warming has now disproven the application of the Precautionary Principle. Any alarmist who has ever called on the Precautionary Principle can now be called out on it. They need to retract their rhetoric.

There has been AGW, but it was from cloud albedo falling due to increased aerosol emissions as Asia industrialised. You see it as a temporary decrease in cloud area because of the way the satellites analyse images. This is the same biofeedback responsible for amplifying TSI change at the end of ice ages. I submitted a paper about it in 2011, but to get such heresy accepted by Nature is near impossible. Now, I could be a crank, but the top US cloud physicist noticed the same effects as me in 2010; he hasn’t been able to publish either.

As for the CO2-AGW part: there is none. The atmosphere self-controls, making all well-mixed GHGs give no warming. It’s the way the system works.

Remember the Precautionary Principle: We can’t wait for evidence because the risk of irreversible harm is too great.

Of course, you also don’t know what the risks of taking the advocated action might be. So in practice the “Precautionary Principle” is anything but. Along with “sustainable”, which can only be short-term, and “renewable energy”, where the plant requires very frequent maintenance and is practically useless at electricity generation even at the best of times.
A while back the author of a dystopian novel coined the term “doublethink” to describe the kind of political thinking where words are used to mean their antithesis. (Of course, the all-time “classic” is naming countries “The Democratic Republic of …”)

D M Hoffer says at 10.21pm
Look at the later graphs to gain a better assessment of the natural temperature variations over time.
The Earth temperature appears to be in a steady decline (from the Minoan? period), with occasional, lower peaks and troughs coinciding with the Roman and Mediaeval warm periods and the Dark Ages and the LIA. The present warming is small and indistinguishable from natural variation.

Just how much of an increase in temperature would it take for the warmistas to dance in the street and declare that the “pause” was over? Or would it take more than one single reading… perhaps readings over a year?

For the RSS dataset, it doesn’t need any increase in temperature – only another two years at the same level. If the most recent anomaly (0.345 °C in June) were to persist until May 2016, then all the carefully selected negative slopes would vanish.
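The mechanism can be sketched in a few lines of Python. The numbers below are illustrative only, not the actual RSS series: a gently declining synthetic record gains 23 further months at a constant warm anomaly, and the least-squares slope flips sign.

```python
import numpy as np

# Illustrative sketch: appending months at a constant warm anomaly drags
# a slightly negative least-squares slope up through zero.
def trend(y):
    """Ordinary least-squares slope, in deg C per month."""
    x = np.arange(len(y))
    return np.polyfit(x, y, 1)[0]

months = 214                                   # ~17 years 10 months of data
series = 0.25 - 0.0002 * np.arange(months)     # gently declining anomalies
extended = np.append(series, [0.345] * 23)     # 23 further months at 0.345

print(f"slope before: {trend(series):+.6f} deg C/month")
print(f"slope after:  {trend(extended):+.6f} deg C/month")
```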

My, just a few days ago we were told that global warming had stopped for the past 17 years. Now we learn it’s only 13 years. To quote the author: “By Occam’s razor, the simplest of all the explanations is the most likely to be true: … They have been trained to yield a result profitable to those who operate them.”

It’s amazing how many of you are so seriously lacking in reading comprehension. The IPCC creates forecasts of the form: “if CO2 emissions increase by X amount, then temperatures increase by Y amount.” They give these forecasts for various values of X, and then state what they think the most likely business-as-usual value of X is. If people actually listen to the IPCC and work hard to lower CO2 emissions below the business-as-usual path, then temperatures will be lower. The actual temperatures are in line with the IPCC forecasts made for the actual amount of CO2 emitted.

The Earth temperature appears to be in a steady decline (from the Minoan? period), with occasional, lower peaks and troughs coinciding with the Roman and Mediaeval warm periods and the Dark Ages and the LIA. The present warming is small and indistinguishable from natural variation.

The great climatologist H.H. Lamb taught me (through his books) that the world had, indeed, seen much warmer times than now and much colder times than at present. He taught me that the world experienced bitter and brutal cold during the “Little Ice Age”, and that it had been warming up in fits and starts ever since. Later, unscrupulous men, unfit even to carry one of Lamb’s books, saw that if they could claim that the natural warming that had been going on since 1850 or so was caused by mankind, then they could become the modern, sciency profits of doom. The ticket was to claim that CO2 was a magic molecule that would destroy us all if mankind released any back into the atmosphere. (Mother Nature’s massive contributions were called “neutral” for some odd reason.)

As one might imagine, a fellow who came to the climate debate by reading Lamb first has had a really hard time buying any of the IPCC’s rubbish. Darn hard time indeed.

29 July: UK Register: Lewis Page: Just TWO climate committee MPs contradict IPCC: The two with SCIENCE degrees
‘Greenhouse effect is real, but as for the rest of it …’
The UK’s Parliamentary climate change select committee has just issued a written endorsement of the latest, alarmist UN Intergovernmental Panel on Climate Change (IPCC) report. However, two MPs – the two most scientifically qualified on the committee – have strongly disagreed with this position…
“As scientists by training, we do not dispute the science of the greenhouse effect – nor did any of our witnesses,” said Peter Lilley (Conservative) and Graham Stringer (Labour) in a statement issued as the committee report came out.
“However, there remain great uncertainties about how much warming a given increase in greenhouse gases will cause, how much damage any temperature increase will cause and the best balance between adaptation versus prevention of global warming.”
The two sceptics highlighted the ongoing hiatus in global warming, which has seen temperatures around the world remain basically the same for more than 15 years, following noticeable warming in the 1980s and early 1990s.
“About one third of all the CO2 emitted by mankind since the industrial revolution has been put into the atmosphere since 1997; yet there has been no statistically significant increase in the mean global temperature since then,” the two MPs state.
“By definition, a period with record emissions but no warming cannot provide evidence that emissions are the dominant cause of warming!”…
The other nine MPs disagreed, however, and outvoted the two sceptics to firmly endorse the IPCC view…
All in all, the snapshot view provided by the Parliamentary climate change committee would seem to bear out the results of a recent survey – which concluded that the more scientific and mathematical knowledge a person has, the less worried about climate change they tend to be. http://www.theregister.co.uk/2014/07/29/just_two_climate_committee_mps_clash_with_ipcc_the_two_with_science_degrees/

“…For years the true-believers had gotten away with pretending that ….

gotten?

It’s perfectly acceptable English even though a little archaic. Strangely enough there is some useful background information in a letter to the Scunthorpe Evening Telegraph:

VINCE Withers is wrong to criticise “Grimsby, its council and our nation” for allowing the use of the word GOTTEN in Grimsby’s Freshney Place.

He is wrong if he believes that the origin of the word gotten is American.

The legitimate use of the word gotten dates back to Middle English as a past participle of “get”. Gotten was used by such great English writers as William Shakespeare, Francis Bacon and Alexander Pope in the 16th to 18th centuries.

Our English language has a long, interesting, ever-changing and continuously developing history. The words used in Vince’s letter beautifully illustrate the way in which words move from one language to another and are then subtly changed.

Vince tells us about his wife choosing bras, looking at caricatured images and his attempt to educate others about the bastardisation of true words.

“Bra” was introduced into English in the 1930s as an abbreviation for “brassiere” which had found its way into English from the French language earlier in the 20th Century.

“Bastardisation” is an extension of “bastard”, which came into Middle English via Old French from the Latin word “bastardus”.

“Caption” and “Educate” also started as Latin words (captio-, capere, educare) and became part of late Middle English.

“Caricature” came into English in the mid-18th Century via French from Italian.

Similarly “Fall” is often decried as an Americanism but was used in this sense in Britain in the 1660s and is said to derive from “fall of leaf” from the 1540s.

My own favourite archaic word is “sennight”. People commonly use “fortnight” to mean “in two weeks’ time” without realising that it is a contraction of “fourteen nights.” In the same way, “sennight” is a contraction of “seven nights”, meaning “in a week’s time.” Try using it; it’s a very useful word.

cesium62 says:
July 30, 2014 at 2:31 am
My, just a few days ago we were told that global warming had stopped for the past 17 years. Now we learn it’s only 13 years.

We could use either 13 years or 17 years; the two periods are not mutually exclusive. Both require the same proof: that statistics show there has been no significant warming. If both periods pass this test, then either can be used.
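The test both periods must pass can be sketched like this. The anomalies below are synthetic, and a real analysis would also inflate the standard error for autocorrelation, which this toy version skips:

```python
import numpy as np

# Is the OLS trend statistically distinguishable from zero?
def trend_with_ci(y):
    """OLS slope and an approximate 95% half-width, per month."""
    x = np.arange(len(y), dtype=float)
    n = len(y)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    se = np.sqrt(resid @ resid / (n - 2) / ((x - x.mean()) @ (x - x.mean())))
    return slope, 1.96 * se

rng = np.random.default_rng(42)
x = np.arange(160, dtype=float)                 # ~13 years 4 months of months
flat = 0.25 + rng.normal(0, 0.1, 160)           # no underlying trend
warming = 0.25 + 0.003 * x + rng.normal(0, 0.1, 160)

for name, y in (("flat", flat), ("warming", warming)):
    slope, half = trend_with_ci(y)
    verdict = "significant" if abs(slope) > half else "not significant"
    print(f"{name:8s} {slope:+.5f} +/- {half:.5f} deg C/month -> {verdict}")
```

A 17-year window is tested the same way; only the length of the array changes.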

The multi-model average warming for all radiative forcing agents held constant at year 2000 (reported earlier for several of the models by Meehl et al., 2005c), is about 0.6°C for the period 2090 to 2099 relative to the 1980 to 1999 reference period. This is roughly the magnitude of warming simulated in the 20th century. Applying the same uncertainty assessment as for the SRES scenarios in Fig. 10.29 (–40 to +60%), the likely uncertainty range is 0.3°C to 0.9°C. Hansen et al. (2005a) calculate the current energy imbalance of the Earth to be 0.85 W m–2, implying that the unrealised global warming is about 0.6°C without any further increase in radiative forcing. The committed warming trend values show a rate of warming averaged over the first two decades of the 21st century of about 0.1°C per decade, due mainly to the slow response of the oceans. About twice as much warming (0.2°C per decade) would be expected if emissions are within the range of the SRES scenarios.

In other words, it was expected that global temperature would rise at an average rate of “0.2°C per decade” over the first two decades of this century with half of this rise being due to atmospheric GHG emissions which were already in the system.

This assertion of “committed warming” should have had large uncertainty because the Report was published in 2007 and there was then no indication of any global temperature rise over the previous 7 years. There has still not been any rise and we are now way past the half-way mark of the “first two decades of the 21st century”.

So, if this “committed warming” is to occur such as to provide a rise of 0.2°C per decade by 2020 then global temperature would need to rise over the next 6 years by about 0.4°C. And this assumes the “average” rise over the two decades is the difference between the temperatures at 2000 and 2020. If the average rise of each of the two decades is assumed to be the “average” (i.e. linear trend) over those two decades then global temperature now needs to rise before 2020 by more than it rose over the entire twentieth century. It only rose ~0.8°C over the entire twentieth century.

Simply, the “committed warming” has disappeared (perhaps it has eloped with Trenberth’s ‘missing heat’?).

This disappearance of the “committed warming” is – of itself – sufficient to falsify the AGW hypothesis as emulated by climate models. If we reach 2020 without any detection of the “committed warming” then it will be 100% certain that all projections of global warming are complete bunkum.
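The per-decade arithmetic in this comment can be restated in a few lines, using the comment’s own figures (an expected 0.2 °C/decade averaged over 2000–2020, and no rise observed through 2014):

```python
# Restating the "committed warming" arithmetic with the comment's figures.
expected_rate = 0.2                 # deg C per decade, 2000-2020 average
total_by_2020 = expected_rate * 2   # total rise that average implies
rise_2000_to_2014 = 0.0             # the comment's premise: no rise so far
years_remaining = 2020 - 2014

needed = total_by_2020 - rise_2000_to_2014
print(f"rise needed in the remaining {years_remaining} years: {needed:.1f} deg C")
print("for scale, the entire 20th century rose ~0.8 deg C")
```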

This disappearance of the “committed warming” is – of itself – sufficient to falsify the AGW hypothesis as emulated by climate models. If we reach 2020 without any detection of the “committed warming” then it will be 100% certain that all projections of global warming are complete bunkum.

One cannot falsify a projection, you know that. There isn’t even a legitimate statistical basis for the multimodel mean “projections” in AR4 or AR5, so one cannot perform a hypothesis test on it. AR5, in chapter 9, openly acknowledges this. That does not stop it from making statements with various levels of “confidence” in the summary for policy makers, even though they could not possibly offer a justification for their assertions of confidence other than “I pulled this level out of my ass” because there is no defensible statistical derivation of the numbers, or rather, the assignment of phrases in English that anywhere else in science would have to be backed up by hard, defensible, numbers.

We can never be certain that all projections of global warming are complete bunkum until each individual projection fails. We cannot falsify a projection in the meantime because the projections do not offer us any way to compute a p-value for the present state, so we cannot say how unlikely it is (given the results of the models assuming that the models are correct).

As I’ve pointed out, we could perform a hypothesis test for each of the models in CMIP5 — simply form the envelope of their perturbed parameter ensemble runs, split it up by percentiles, and look at the percentile that is the best match for the current climate. If the current climate falls at the extreme left of the distribution in the first few percent (as it does for most of the models), we can reject the null hypothesis that “this model is a correct climate simulation” for that model with some defensible probability of being correct.

If this were done collectively, one model at a time, one could actually think about making a defensible statement about the probability of the MME mean being correct – if (say) 30 out of 36 models fail and the remaining 6 don’t fail but are systematically off, all in the same (too warm) direction, then we could say with a great deal of confidence indeed that the MME prediction is useless. But we could do that analysis right now; we don’t need to wait six more years.
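A toy version of the proposed per-model test can be written down directly. The “ensembles” here are synthetic stand-ins for CMIP5 perturbed-parameter runs, and the observed trend is a made-up figure, not a real measurement:

```python
import numpy as np

# For each pseudo-model: at what percentile of its ensemble does the
# observation fall?  Reject the model if that percentile is in the
# extreme left tail.
rng = np.random.default_rng(1)
observed_trend = 0.10                    # deg C/decade, hypothetical

def left_tail_percentile(ensemble_trends, obs):
    """Fraction of ensemble runs whose trend falls below the observation."""
    return float(np.mean(np.asarray(ensemble_trends) < obs))

n_models, rejected = 36, 0
for _ in range(n_models):
    # each pseudo-model runs warm: ensemble trends centred near 0.25 C/decade
    runs = rng.normal(0.25, 0.05, size=100)
    p = left_tail_percentile(runs, observed_trend)
    if p < 0.025:        # observation sits in the first few percent
        rejected += 1    # reject "this model is a correct climate simulation"

print(f"{rejected} of {n_models} pseudo-models rejected at the 2.5% level")
```

With real model output, `runs` would be the trend from each ensemble member over the comparison period, and the per-model rejections could then be tallied exactly as the comment describes.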

Averaging surface data and satellite data is questionable. If you must do it then you should give equal strength to both types of data. First average the surface data and satellite data separately and then average the two results. You can see the problem by considering what you would get if you averaged 100 surface data sets with the 2 satellite data sets. The satellite data would be completely overwhelmed.

Personally, I would ignore the surface data. It is beyond hope. Giving it any credence at all destroys one of the principal skeptic points: that the surface data is not fit for purpose.
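The weighting point above fits in a few lines; the anomaly values are made up for illustration:

```python
# Pooling all series lets the larger family swamp the smaller; averaging
# each family first, then averaging the two results, weights them equally.
surface = [0.60, 0.58, 0.62]      # e.g. three surface anomaly values
satellite = [0.30, 0.32]          # e.g. two satellite anomaly values

naive = sum(surface + satellite) / len(surface + satellite)
balanced = (sum(surface) / len(surface) + sum(satellite) / len(satellite)) / 2

print(f"pooled average:   {naive:.3f}")
print(f"balanced average: {balanced:.3f}")
```

With 100 surface series and 2 satellite series, the pooled average would be almost entirely surface data, which is exactly the problem described.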

rgbatduke: “As I’ve pointed out, we could perform a hypothesis test for each of the models in CMIP5 — simply form the envelope of their perturbed parameter ensemble runs, split it up by percentiles, and look at the percentile that is the best match for the current climate. If the current climate falls at the extreme left of the distribution in the first few percent (as it does for most of the models), we can reject the null hypothesis that “this model is a correct climate simulation” for that model with some defensible probability of being correct.”

You’ve said stuff like this before, of course, and I’ve always sensed that there was something to what you were trying to communicate, but I could never understand precisely what it was. Perhaps I’ll get my mind around it better if you explain just what you mean by “percentile.” Percentile of what? Trend over some period? Temperature at some date? Squared differences from actual temperatures?

Maybe you’re saying the following. For each member of an ensemble of initial-value (boundary-value, forcing-value, whatever) sets, all of which we consider equally likely, we run a given model and take a histogram of, say, the temperature trends they produce. We don’t know what initial-value set actually applied in real life but, if the histogram is any guide and few ensemble members come within x degrees/century of the actually observed trend, then it is unlikely that the model is accurate to within x degrees/century.

For the sake of us who have trouble with statistics, in other words, could you put into English exactly what test you’re proposing?

Remember the Precautionary Principle: We can’t wait for evidence because the risk of irreversible harm is too great.

Yes, MC, you’re right to call out the PP – which is generally a load of rubbish. The PP is built around the propositions that “we must do something” and that “we must never let this happen again”. But sometimes it really is best to do NOTHING.

Meanwhile, in the way AGW is reported during the pause, we are expected to get all fired up over a rise in GT of around 1.25 °C per century. That means nothing to the man on the street, who has been through multiple-degree changes in temp this week alone (in the UK). Why would he be scared of a measly one-and-a-bit-degree rise in a hundred years? It really is all about scaremongering.

To those who question whether there has been a standstill in global warming for 13 years 4 months or 17 years 10 months, I reply that the HadCRUT4 dataset usefully provides measurement, coverage and bias uncertainties for each monthly data point. The combined uncertainties amount to 0.15 K. The differences between the different datasets are less than 0.15 K. Therefore, though the mean of all the datasets shows no global warming at all for 13 years 4 months, and the RSS dataset shows none for 17 years 10 months, the two values are within each other’s error margins, and they are saying broadly the same thing – that there has been no global warming for around a decade and a half. None of the models predicted, as its central estimate, any such outcome.

I very much hope that Professor Brown will carry out the statistical analysis of the CMIP5 models that he has suggested. That would be a great service to the truth, if he can find the time to do it.

To those who question my courteous use of the American usage “gotten”, it is a strong-verb past-participle akin to “wrought” (past participle of “wreak”) and “dove” (past participle of “dive”). In U.S. English, these past participles were carried across the pond on the Mayflower and are all commoner in the U.S. than here. But “gotten”, in particular, still survives in the phrase “ill-gotten gains” (I have never heard anyone say “ill-got gains”). And it is used not only in Cranmer’s Godly Order but also frequently in the King James version of the Bible, with which I was brought up.

For instance, Genesis IV:1, “And Adam knew Eve his wife, and she conceived, and bare Cain, and said, I have gotten a man from the LORD.” Genesis XII:5, “And Abram took Sarai his wife, and Lot his brother’s son, and all their substance that they had gathered, and the souls that they had gotten in Haran; …”. Exodus XIV:18, “And the Egyptians shall know that I [am] the LORD, when I have gotten me honour upon Pharaoh, upon his chariots, and upon his horsemen.” Leviticus VI:4, “… he shall restore that which he took violently away, or the thing which he hath deceitfully gotten, …”. Numbers XXXI:50, “… what every man hath gotten, of jewels of gold, chains and bracelets, rings, earrings and tablets, …”. Deuteronomy VIII:17, “And thou say in thine heart, My power and the might of [mine] hand hath gotten me this wealth.” And that’s just a few from the Pentateuch.

If “gotten” was good enough to be used frequently by the great committee that translated the Hebrew and Greek of the Bible into one of the finest works of literature in our language, then it’s good enough for me. And it satisfies the first obligation of the written word, that it should be comprehensible to its audience.

It is baffling that each successive IPCC report states with ever-greater “statistical” certainty that most of the global warming since 1950 was attributable to us when only 0.5% of papers in the reviewed literature explicitly attribute most of that warming to us, and when all IPCC temperature predictions have overshot reality by so wide – and so widening – a margin.

Exactly! I too have noticed this odd behaviour. Would this be possible in any other science?

Below is a nice graphic showing their INCREASING confidence levels at each new report, compared to their projections, compared to actual observations. They say a picture speaks a thousand words.

Richard M says:
July 30, 2014 at 5:57 am
If you must do it then you should give equal strength to both types of data.

WTI is a combination of HadCRUT3, UAH version 5.5, GISS and RSS. HadCRUT3 is not yet out for June; my best estimate is that when it does come, the WTI pause will be 13 years and 6 months to the end of June.

davidmhoffer says:
July 29, 2014 at 10:21 pm
Joel O’Bryan;
The LIA ended about 1850 AD, that’s 164 ya.
>>>>>>>>>>>>>>>>>>>
Yup. And when was the beginning? When was it at the “bottom”?
First slide shows warming trend starting in the 1600’s. About 400 years ago.
====================================================
The graphs I see of the Holocene say we are not “warming for thousands of years” at all as you said in a previous post, and this link shows that as well. We have been cooling for most of the Holocene. Not sure why you contradict yourself?

I very much hope that Professor Brown will carry out the statistical analysis of the CMIP5 models
==================
Just the sort of thing grad students were invented for. One would think the climate modelling community would also welcome such a study, and support the funding application, if in fact they have faith in their models.

I would very much like to see the variance for individual models before and after anthropogenic forcings are added. Does the addition of anthropogenic forcings in fact increase variability? And secondly, what is the natural variability without anthropogenic forcings? Do the individual model runs show very little variability, or do they in fact show that natural variability is high?

From a practical standpoint, it would seem that it is a much simpler problem to model the future as a probability than it is to predict which future we will arrive at. Like throwing a pair of dice, it is much simpler to predict that the values of the next 100 rolls will lie between 2 and 12, with an expected variance, than it is to predict the actual value of each of the next 100 rolls. Thus, studying the statistics of computer models is likely to tell us much more about the future climate than will simple projections of averages.
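The dice analogy can be made concrete with a short simulation. This is only an illustrative sketch: the seed and sample size are arbitrary choices, and the point is simply that the distribution of outcomes is easy to characterise even though any single roll is unpredictable.

```python
import random
import statistics

# Roll a pair of fair dice many times and characterise the distribution.
random.seed(42)
rolls = [random.randint(1, 6) + random.randint(1, 6) for _ in range(100_000)]

mean = statistics.mean(rolls)       # theoretical expected value: 7
var = statistics.pvariance(rolls)   # theoretical variance: 35/6, about 5.83

print(f"mean = {mean:.2f}, variance = {var:.2f}, "
      f"range = [{min(rolls)}, {max(rolls)}]")
```

Every sum is guaranteed to lie in [2, 12], and the sample mean and variance land close to the theoretical values, while no individual roll was ever predictable.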

At July 30, 2014 at 5:04 am you quote my having concluded at July 30, 2014 at 3:10 am, which is here.

This disappearance of the “committed warming” is – of itself – sufficient to falsify the AGW hypothesis as emulated by climate models. If we reach 2020 without any detection of the “committed warming” then it will be 100% certain that all projections of global warming are complete bunkum.

then respond saying to me

One cannot falsify a projection, you know that. There isn’t even a legitimate statistical basis for the multimodel mean “projections” in AR4 or AR5, so one cannot perform a hypothesis test on it. AR5, in chapter 9, openly acknowledges this. That does not stop it from making statements with various levels of “confidence” in the summary for policy makers, even though they could not possibly offer a justification for their assertions of confidence other than “I pulled this level out of my ass” because there is no defensible statistical derivation of the numbers, or rather, the assignment of phrases in English that anywhere else in science would have to be backed up by hard, defensible, numbers.

Yes, “One cannot falsify a projection” but you can falsify a prediction, you know that.
And – as I quoted and explained – the “committed warming” is a prediction.

No hypothesis test is needed because one compares the forecast to the outcome of a prediction, and I did.

The importance of the “committed warming” is that it did provide “hard, defensible, numbers” which passage of time has shown to be wrong.

And I stand by my correct statement that said
“This disappearance of the “committed warming” is – of itself – sufficient to falsify the AGW hypothesis as emulated by climate models.”
The models made a prediction which is wrong. Therefore,
(a) the hypothesis emulated by the models is wrong,
or
(b) the models fail to emulate the hypothesis correctly
or
(c) both (a) and (b).

And those models provide the projections of future climate, and they each incorporate an attempted emulation of the AGW hypothesis. So, if the AGW hypothesis as emulated by those climate models is wrong, then the projections provided by those models must be wrong; i.e. they are complete bunkum.

davidmhoffer July 29, 2014 at 8:31 pm says:
“What the current pause suggests is two things:
1. Sensitivity is far lower than IPCC estimates
2. The climate models are invalid based on the metrics of the modelling community themselves”

You are on the right track but do not go far enough. First, sensitivity is actually zero because carbon dioxide is not the cause of warming. Second, the models are so bad that the entire modeling enterprise should be closed down.

It started with Hansen in 1988, who tried to predict the climate out to 2019. His “business as usual” curve was so far off reality that it was ridiculous to see as we lived through his predicted years. They have now had 26 years to improve their product, have switched from an IBM mainframe to supercomputers costing millions of dollars, are using million-line code in their software, and their results are no better than Hansen’s – actually worse for the twenty-first century. If you just look at a CMIP5 ensemble you will see that the dozens of whips in it, each one from a different supercomputer, slope up and indicate warming while we actually experience a temperature standstill that has lasted since the beginning of the century.

It is easy enough to see where this stupidity comes from. Despite the fact that global temperature is at a standstill, atmospheric carbon dioxide at the same time is increasing. They have it coded into their software that increasing carbon dioxide means increasing global temperature, and that is what gives them the nerve to come out with predictions of warming that does not exist.

Fact is that the alleged greenhouse effect from this carbon dioxide simply does not exist. There is no experimental proof of it, and it all depends on accepting the greenhouse theory that goes back to Arrhenius. He observed that carbon dioxide in his laboratory absorbed IR radiation and got warm. From that he deduced that doubling the amount of atmospheric carbon dioxide would raise global temperature by four or five degrees. More current calculations put this value at 1.1 degrees Celsius. But carbon dioxide is not the only or even the most important GHG in the atmosphere; water vapor is.
And Arrhenius cannot handle several greenhouse gases simultaneously absorbing in the IR. The only theory that can do this is the Miskolczi greenhouse theory (MGT). What it predicts is what we see: addition of carbon dioxide to the atmosphere does not warm it.

According to MGT, carbon dioxide and water vapor jointly establish an IR absorption window in the atmosphere with an optical thickness of 1.87. If you now add carbon dioxide to the atmosphere it will start to absorb, just as the Arrhenius theory says. But this will increase the optical thickness. And as soon as this happens, water vapor will start to diminish, rain out, and the original optical thickness is restored. The carbon dioxide that was introduced will continue to absorb, of course, but it will not be able to generate heat because the reduction of water vapor compensates for the potential warming it could create.

That is the explanation of why there is no warming today despite rising atmospheric carbon dioxide. It follows that this suppression of greenhouse warming by water vapor is universal wherever water vapor is present. Hence, there is no such thing as anthropogenic global warming. AGW is just a pseudo-scientific fantasy, promulgated by true believers who babble about water vapor tripling their imaginary greenhouse warming.

I enjoyed reading R. B. Alley’s The Two-Mile Time Machine and have re-read it quite a few times. As someone with more than a passing interest in climate change, I was always intrigued by the graph in his book showing the temperature in Greenland over the past 800,000 years. The graph shows that the current warm period is well past its bedtime, and for some reason the Earth has had a very stable warm temperature for some 2,000 years now. The temperature has been on a slight cooling trend throughout those 2,000 years. Note this is based on just one graph in a book, but an interesting one. So was the Little Ice Age an attempt to start the major cooling trend again? Are we seeing another attempt now? Are the planetary forces overcoming the bit of warming we have seen over the 150 years prior to 14 years ago? Do 150 extra particles of anything per million make that much difference? Why does the current warm period of the Ice Age we are in differ from the previous 4 going back 800,000 years? i.e. they peaked very sharply then cooled very rapidly, unlike the teetering of the current warm 2,000 years. Thanks for some great debate and information on this site.

“If you now add carbon dioxide to the atmosphere it will start to absorb, just as the Arrhenius theory says. But this will increase the optical thickness. And as soon as this happens water vapor will start to diminish, rain out, and the original optical thickness is restored.”

What seems to happen is that if CO2 increases radiative capability from within an atmosphere then the radiation direct to space reduces the amount of energy that can be returned to the surface in adiabatic descent. Less energy is then returned adiabatically than is taken upward adiabatically which is a net cooling effect at the surface.

That weakens convective overturning which leads to less wind and so less evaporation and less water vapour as per Miskolczi’s observations.

The mistake of radiative theory lies in thinking that CO2 leads to more convective overturning rather than less.

They don’t realise that the surface cannot be warmed by more DWIR because, at the same time, an equivalent amount of radiation is leaking out of the adiabatic exchange to space, so that the reduction in energy getting back to the surface from weaker adiabatic descent exactly offsets the effect of the DWIR.

@ Roger… Fall, I was told, begins around Aug 1 in the Northern Hemisphere. That’s when the leaves first begin to fall. There won’t be very many, just a few (on their own, not storm or insect damage). It is no longer spring or high summer. It is a small but definite change. The leaves that fall will be yellow and supple. You will find none of them in May or June.

The reason that 17 yrs and 10 months is better than 13 years is that the longer the pause continues, the greater the distance grows between temperatures and their supposed relationship with CO2. At some point it will become obvious to all that there is no relationship, or only a marginal one at best. The sky is not falling, and we don’t need to tax ourselves into an icy cold death, starvation, or years of economic downturn. … As an aside, no tomatoes this year; it has been too cold.

We don’t need to bring the West down to a poverty level like the rest of the world. We need to bring the rest of the world up to a decent standard of living. Petty despots and throwback communists have no agenda for actually improving people’s lives. The Middle East is alive with such people, who subscribe to “do as I say or die”. A command economy has never brought people out of poverty, and never will. A transfer of wealth like the UN Agenda 21 will not improve people’s lives. It will enrich the few who are already rich or in control in those countries.

Lord Monckton: Most, if not all, of your peers expect the current pause to end sooner or later – perhaps later this year if a strong El Nino develops. It would make more sense to focus people’s attention on a phenomenon certain to last longer – the unambiguous over-projection of warming since AOGCMs were first developed in the 1980s. This discrepancy will survive the next big El Nino; the pause may not.

Genuine thanks for your post at July 30, 2014 at 10:04 am which says in total

Richard Courtney, you write “Yes, but the truth is what it is, and it is not affected by any group refusing to acknowledge it.”

Yes, but if the truth is not agreed by the warmists, then NOTHING is going to happen. Politicians are going to believe CAGW forever.

You address the most important issue pertaining to the anthropogenic (i.e. man-made) global warming scare (i.e. the AGW-scare). I tend to avoid it because whenever it is raised then certain ultra-right-wing trolls hijack threads with their bile.

As I see it, the issue is as follows.

The AGW-scare was killed at the failed 2009 UN climate conference in Copenhagen. I said then that the scare would continue to move as though alive, in similar manner to a beheaded chicken running around a farmyard. It continues to provide the movements of life, but it is already dead. And its deathly movements present an especial problem.

Nobody will declare the AGW-scare dead: it will slowly fade away, because politicians never proclaim that they were wrong. This is similar to the ‘acid rain’ scare of the 1980s. Few remember that scare unless reminded of it, but it still has effects; e.g. the Large Combustion Plant Directive (LCPD) exists. Importantly, the bureaucracy which the EU established to operate the LCPD still exists. And those bureaucrats justify their jobs by imposing ever more stringent, always more pointless, and extremely expensive emission limits which are forcing the closure of UK power stations (Didcot is being demolished now, and its cooling towers were felled this week).

Bureaucracies are difficult to eradicate and impossible to nullify.

As the AGW-scare fades away those in ‘prime positions’ will attempt to establish rules and bureaucracies to impose those rules which provide immortality to their objectives. Guarding against those attempts now needs to be a serious activity. Warmunist activists will be working to promote and install the rules and bureaucracies.

Publicising issues such as the ‘pause’ informs the public of a need to oppose imposition of the rules and bureaucracies. Politicians will never publicly admit they were wrong.

James Abbott
…which happens to give a favoured result for those looking for the least warming.

RSS is selected because it is by far the best data set. All the terrestrial sets are heavily adjusted in a warming-biased direction. Over and over, this blog has shown how various stations are adjusted in the wrong direction and have overly influenced other nearby stations due to the gridding technique.

Hot d@mn, my tomato plants on the subtropical Fraser Coast are stunted weeds, because the weather has gotten too cold for them to grow. I was hoping for at least a *little* global warming this year… :-)

One thing His Lordship fails to mention is that if you calculate 95% confidence intervals on the slopes he discusses, it will turn out that the slope for the shorter period (13 yrs, 4 mo, or whatever) is not only statistically indistinguishable from zero, but it’s also statistically indistinguishable from the longer-term slope of around 0.16 °C per decade. So if he wants to be really up-front and honest about all this, he should mention that the short-term slope is too uncertain (due to the strength of the interannual noise) to say whether it is essentially flat, or whether it is essentially the same as the long-term slope over the last several decades.

Another important fact he seems to have forgotten to mention is that back in 1990, the models the IPCC used didn’t include such things as ocean circulation. And yet, they still got the direction right, and the magnitude of the trend so far within a factor of 2. For Monckton, this means the model is falsified, but anyone who actually does modeling of natural systems knows that all models are wrong by definition, but some are useful. Getting such a simple model of the global climate system to work so well is a rather stunning achievement, and quite useful.

He also fails to mention that nobody ever claimed that current AOGCMs should be good at getting the timing of quasi-periodic oscillations like ENSO right, although some of them are reasonably good at mimicking the periodicity (not the exact timing). So the idea that the models should have predicted the La Niña-dominant conditions of the last decade or more is sheer nonsense. It’s a nice debate tactic to raise the bar so high that no real model could possibly be expected to pass the test, and then dismiss them all on that basis, but that’s not going to impress any scientists who have to model messy natural systems for a living.

There are quite a lot of uncertainties involved in climate modeling. But given that paleoclimate data (which has its own problems, but different ones) gives a central estimate of around 3 °C/2xCO2 for climate sensitivity, we have reason to believe that the standard AOGCMs are probably in the ballpark. It certainly isn’t impossible that the sensitivity is as low as His Lordship seems to want it to be–it’s just not very probable, given the present state of the data.

And so the actual climate scientists keep trying to refine their models, because they know there are always bound to be things that could be improved. Are the estimates of aerosol forcing right? Does the spatial distribution of aerosols matter (yes). Can we get better measurements of deep ocean temperatures to test the ocean circulation in the models? They’re looking at all that, just like they should be.

Your post at July 30, 2014 at 2:42 pm makes two points of significance, and they are each very misleading.

You say

One thing His Lordship fails to mention is that if you calculate 95% confidence intervals on the slopes he discusses, it will turn out that the slope for the shorter period (13 yrs, 4 mo, or whatever) is not only statistically indistinguishable from zero, but it’s also statistically indistinguishable from the longer-term slope of around 0.16 °C per decade.

True, but if you consider the previous 13 years, the lower bound of the linear trend IS positive at 95% confidence: (see this). In other words, discernible global warming stopped at least 13 years ago according to the data sets.

And you say

And so the actual climate scientists keep trying to refine their models, because they know there are always bound to be things that could be improved. Are the estimates of aerosol forcing right? Does the spatial distribution of aerosols matter (yes). Can we get better measurements of deep ocean temperatures to test the ocean circulation in the models? They’re looking at all that, just like they should be.

Refining worthless clunkers is pointless. The existing climate models should be scrapped and the “estimates of aerosol forcing” are a good illustration of why.

None of the models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al., http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes ) would make every climate model provide a mismatch between the global warming it hindcasts and the observed global warming for the twentieth century.

This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on
1.
the assumed degree of forcings resulting from human activity that produce warming
and
2.
the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.

More than a decade ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.

The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which was greater than was observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.

And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).

Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.

He says in his paper:

One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.

The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy.
Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at http://www.nature.com/reports/climatechange, 2007) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.

And, importantly, Kiehl’s paper says:

These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.

And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.

Kiehl’s Figure 2 can be seen at

Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:

It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.

In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
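The two factors quoted above follow directly from the ranges in Kiehl’s Figure 2, and can be checked with a few lines of arithmetic (a sketch only, using the endpoint values as quoted in this comment):

```python
# Ranges quoted above from Kiehl's Figure 2: total anthropogenic forcing
# 0.80 to 2.02 W/m^2, aerosol forcing -1.42 to -0.60 W/m^2.
total_lo, total_hi = 0.80, 2.02
aero_weakest, aero_strongest = -0.60, -1.42   # least and greatest assumed cooling

total_spread = total_hi / total_lo                      # just over 2.5
aero_spread = abs(aero_strongest) / abs(aero_weakest)   # about 2.4

print(f"total anthropogenic forcing differs by a factor of {total_spread:.2f}")
print(f"assumed aerosol forcing differs by a factor of {aero_spread:.2f}")
```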

So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.

In answer to apologists for the models, the IPCC in 1990 expressed “substantial confidence” that by 2025 there would be 1.0 [1.5, 0.7] K global warming – i.e., around 0.68 K by now. Instead, the rate of warming has been half the IPCC’s then central estimate and considerably below even its then least estimate. The long-term warming trend of less than 0.12 K/decade since 1950 is indeed statistically indistinguishable (over a sufficiently short period) from a zero trend: but that fact, of course, reinforces the absurdity of any pretense that we are – thus far, at any rate – facing any kind of “climate crisis” driven by global warming.
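The “around 0.68 K by now” figure can be roughly reproduced by pro-rating the quoted 1990 central estimate linearly to mid-2014. The exact baseline convention is an assumption and shifts the second decimal place; a straight linear pro-rating gives about 0.7 K:

```python
# IPCC (1990) central estimate as quoted above: 1.0 K warming by 2025,
# taking 1990 as the baseline year (an assumption for this sketch).
predicted_by_2025 = 1.0                                  # K
fraction_elapsed = (2014.5 - 1990) / (2025 - 1990)       # to mid-2014

expected_by_now = predicted_by_2025 * fraction_elapsed
print(f"expected warming by mid-2014: {expected_by_now:.2f} K")
```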

Nor is it credible to praise the models for “having gotten the direction right”. One could have gotten the direction right with the toss of a coin. The plain fact is that the models in 1990 predicted double the global warming that has occurred, and that the IPCC, far from seeking to pretend any longer that its original near-term predictions were appropriate, has all but halved them – and they are still too high.

Paleoclimate data, often prayed in aid by apologists for high climate sensitivity, can be tortured to give any desired sensitivity. But the least unreliable of the relevant paleoclimate records – the ice-core temperature reconstructions from Vostok station, Antarctica (Jouzel et al., 2007) – show (after due allowance for polar amplification) that global temperatures have probably fluctuated by little more than 1%, or 3 K, either side of the 810,000-year mean. That powerfully indicates a thermostatic rather than a feedback-dominated climate object, and indicates that the models – unduly obsessed with the radiative transports – are undervaluing the non-radiative transports. To take just one example, evaporation has been measured to increase as a result of warmer weather at a rate thrice the models’ central estimate.

Besides, the climate object is demonstrably chaotic in the variables that govern climate sensitivity. For this reason, models are unable to represent key features of the climate correctly – notably the ocean oscillations and the Nino/Nina cycles, whose causes are still poorly understood (there is a growing body of evidence in the literature that el Ninos are triggered by quasi-periodic subsea volcanism).

The bottom line is that the rate of warming since 1950 – at less than 0.12 K/decade – is far too little to be definitively distinguishable from natural internal variability in the climate. It is also far too little to justify the crippling taxes and charges that have flung more than a quarter of Scotland’s population, and almost a tenth of the UK population, unnecessarily into fuel poverty. While the profiteers of doom get rich at the expense of the poor, the poor are dying in large numbers because they cannot afford to heat their homes (7000 excess deaths, over and above the usual 24,000 excess winter deaths, in the UK alone in the cold 2012/13 winter). No small fraction of the doubling of fuel and power prices in recent years is attributable to cross-subsidies to useless wind farms and solar panels that make no measurable difference to (non-existent) planetary warming. Time to accept that the models have failed, desubsidize “green” energy, cut fuel and power bills in half, and spend the worldwide savings of $1 billion a day on something less murderous, less destructive, and more likely to do some good in the world.

“True, but if you consider the previous 13 years the lower bound of the linear trend IS positive at 95% confidence: (see this). In other words, discernible global warming stopped at least 13 years ago according to the data sets.”

This is nonsense. The systematic rise in temperature is slow enough, compared to the magnitude of interannual noise, that there will ALWAYS be some time period over which the slope is not statistically distinct from zero. Sometimes it will be a shorter time period, and sometimes longer, but there will always be one. But it is a mistake to assume that the null hypothesis always has to be that the slope is zero. Why not have the null hypothesis be that the slope is the same as it has been for last several decades? There are reasons for doing either one, but if you can’t rule out either null hypothesis, then you can’t rule out either null hypothesis. It’s as simple as that.

“In answer to apologists for the models, the IPCC in 1990 expressed “substantial confidence” that by 2025 there would be 1.0 [1.5, 0.7] K global warming – i.e., around 0.68 K by now. Instead, the rate of warming has been half the IPCC’s then central estimate and considerably below even its then least estimate.”

And like I said, the major reasons for this are obvious. Ocean circulation is important, for instance. Is this surprising to anyone?

“The long-term warming trend of less than 0.12 K/decade since 1950 is indeed statistically indistinguishable (over a sufficiently short period) from a zero trend….”

No it isn’t. You are confused, here. The long-term trend since 1950 would have very small error bars, so it would be quite readily distinguishable from zero. Saying that it is indistinguishable “over a sufficiently short period” is the same as saying that you aren’t really talking about the trend since 1950. This longer-term trend is NOT statistically distinguishable from the short-term trends you calculate, however, because the error bars on THOSE are quite large.

Do you know what I’m talking about? That is, do you know how to calculate a confidence interval (error bar) for a slope? (They generally don’t teach this in introductory statistics courses.)
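For readers following along, the textbook calculation being referred to can be sketched as follows. This is a minimal illustration on synthetic data: the trend value, noise level and window length are arbitrary choices, and real monthly anomalies have autocorrelated noise, which makes the true interval wider than this plain OLS formula suggests.

```python
import math
import random

def slope_ci(y, t_crit=1.96):
    """OLS slope of an evenly spaced series with an approximate 95% CI.
    Plain textbook formula: se(slope) = sqrt(SSE / (n-2) / Sxx)."""
    n = len(y)
    x = list(range(n))
    xbar, ybar = (n - 1) / 2, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    intercept = ybar - slope * xbar
    sse = sum((yi - intercept - slope * xi) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(sse / (n - 2) / sxx)
    return slope, slope - t_crit * se, slope + t_crit * se

# Synthetic 13-year monthly series: a 0.16 K/decade trend plus white noise
# (sigma = 0.3 K is an arbitrary illustrative choice).
random.seed(1)
true_monthly_trend = 0.16 / 120                  # K per month
y = [true_monthly_trend * m + random.gauss(0, 0.3) for m in range(13 * 12)]

slope, lo, hi = slope_ci(y)
print(f"trend: {slope*120:+.3f} [{lo*120:+.3f}, {hi*120:+.3f}] K/decade")
```

The width of the interval relative to the size of the trend is the whole issue: over short windows the noise dominates, so the data can be consistent with several competing null hypotheses at once.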

“Nor is it credible to praise the models for ‘having gotten the direction right’. One could have gotten the direction right with the toss of a coin.”

Yes, but when the known natural forcings (insolation, volcanoes) have been pushing toward slight cooling, it’s considerably more impressive that they got the direction right. Like I said before, getting the slope correct within a factor of 2 with such obviously oversimplified models isn’t bad at all.

“Paleoclimate data, often prayed in aid by apologists for high climate sensitivity, can be tortured to give any desired sensitivity. But the least unreliable of the relevant paleoclimate records – the ice-core temperature reconstructions from Vostok station, Antarctica (Jouzel et al., 2007) – show (after due allowance for polar amplification) that global temperatures have probably fluctuated by little more than 1%, or 3 K, either side of the 810,000-year mean. That powerfully indicates a thermostatic rather than a feedback-dominated climate object, and indicates that the models – unduly obsessed with the radiative transports – are undervaluing the non-radiative transports.”

I fail to see your point. Supposing you are correct, that means the difference between a glacial period and an interglacial is up to 6 °C. That actually seems roughly reasonable to me, because the best estimates for the difference between now and the last glacial maximum are in the range of 4-5 °C. Using the change in temperature, glacial extent (to estimate albedo change), and GHG concentrations since the last glacial max is one way paleoclimatologists estimate climate sensitivity. It’s about the same as the models.

Your faith in the Earth’s “thermostat” fails to be comforting, given that a 4-5 °C difference in global mean temperature gives you the difference between now, and a time when ice sheets thousands of feet thick covered much of the Northern Hemisphere. And there’s the little fact that the temperature oscillations correlate well with Milankovitch forcing, and that isn’t large enough to explain the changes without some hefty positive feedback.

“Besides, the climate object is demonstrably chaotic in the variables that govern climate sensitivity. For this reason, models are unable to represent key features of the climate correctly – notably the ocean oscillations and the Nino/Nina cycles, whose causes are still poorly understood (there is a growing body of evidence in the literature that el Ninos are triggered by quasi-periodic subsea volcanism).”

A system that is “chaotic” exhibits unpredictable behavior in the short term, but long-term averages can still be quite predictable. (Look up “strange attractors”, for instance, and determine why they are called “attractors”.)

In other words, you have produced quite a bit of hand-waving and bluster, but not any cogent arguments.

Your Lordship, my understanding is that it is not correct to draw a single straight trend line through clear break points, as was done in your graphs. The proper way is to do a linear regression through the data points between break points and then add the segment trends together to get the overall trend.

For example, in the first GISS chart the period 1950 to 1979 would be one regression, 1980 to 1996 a second, and 1997 to 2014 a third. The three trends would then be added together to get the overall trend. This gives a more correct trend which is considerably lower. You can easily see that the trend line drawn through 1950 to 1979 is totally incorrect: the actual trend there is essentially zero, yet done your way it is shown as 0.34 °C over 20 years.
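For what it is worth, the difference between the two procedures is easy to demonstrate on a toy series (a sketch only: synthetic data with invented break points and noise levels, not the GISS record):

```python
import random

random.seed(0)

def ols_slope(ts, ys):
    """Ordinary least-squares slope of ys regressed on ts."""
    n = len(ts)
    tbar = sum(ts) / n
    ybar = sum(ys) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
    den = sum((t - tbar) ** 2 for t in ts)
    return num / den

# Toy series: three flat segments separated by abrupt upward shifts,
# standing in for the "climate shift" picture described above.
def level(t):
    return 0.0 if t < 1980 else (0.3 if t < 1997 else 0.6)

years = list(range(1950, 2015))
temps = [level(t) + random.gauss(0, 0.05) for t in years]

# One regression straight through the break points...
single = ols_slope(years, temps)

# ...versus separate regressions within each segment.
seg = []
for lo, hi in [(1950, 1980), (1980, 1997), (1997, 2015)]:
    pts = [(t, y) for t, y in zip(years, temps) if lo <= t < hi]
    seg.append(ols_slope([t for t, _ in pts], [y for _, y in pts]))

# 'single' shows a sizeable positive trend even though every
# within-segment trend is close to zero.
```

Whether summing segment trends is the "proper" overall trend is itself contested; the sketch only shows that the two procedures can disagree sharply when step changes are present.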

Physically, the break points are climate shifts caused by known natural ocean cycles. It is much easier to see break points using actual temperatures instead of anomalies. I suspect that you (and others) have fallen into the Believers spin trap. Anomalies hide a multitude of spins (pun intended).

Barry Bickmore says:
July 30, 2014 at 2:42 pm
One thing His Lordship fails to mention is that if you calculate 95% confidence intervals on the slopes he discusses, it will turn out that the slope for the shorter period (13 yrs, 4 mo, or whatever) is not only statistically indistinguishable from zero, but it’s also statistically indistinguishable from the longer-term slope of around 0.16 °C per decade. So if he wants to be really up-front and honest about all this, he should mention that the short-term slope is too uncertain (due to the strength of the interannual noise) to say whether it is essentially flat, or whether it is essentially the same as the long-term slope over the last several decades.
I will apologize in advance if I am misinterpreting your statement above. But by only mentioning the presumably positive “0.16 °C per decade” without mentioning the negative “0.16 °C per decade”, it seems to me that you are confusing two different issues.
One issue is for how long the slope is actually zero with error bars of equal size above and below the zero.
The other is the length of the pause that could include zero. I do not know the numbers for the combination of the five data sets, but HadCRUT4 alone is pretty close. It is zero for 13 years and 5 months up to the latest update of May 2014 at Nick Stokes’ site.
However it is 17 years and 7 months for a time that could include zero at the 95% level.
The information below is taken from here: http://moyhu.blogspot.com.au/p/temperature-trend-viewer.html
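Werner’s point about a trend being "distinguishable from zero at the 95% level" can be sketched numerically (illustrative trend and noise magnitudes; plain OLS intervals, ignoring the autocorrelation that widens them further in real temperature data):

```python
import math
import random

random.seed(1)

def slope_with_ci(ys):
    """OLS slope per time step with a rough 95% half-width (t ~ 2)."""
    n = len(ys)
    ts = list(range(n))
    tbar = (n - 1) / 2
    ybar = sum(ys) / n
    sxx = sum((t - tbar) ** 2 for t in ts)
    b = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) / sxx
    a = ybar - b * tbar
    sse = sum((y - (a + b * t)) ** 2 for t, y in zip(ts, ys))
    half_width = 2 * math.sqrt(sse / (n - 2) / sxx)
    return b, half_width

# Toy anomaly series: 0.016 C/yr underlying trend plus 0.1 C of
# interannual noise (illustrative numbers, not a real data set).
series = [0.016 * t + random.gauss(0, 0.1) for t in range(64)]

long_b, long_w = slope_with_ci(series)          # all 64 years
short_b, short_w = slope_with_ci(series[-13:])  # last 13 years only

# The 13-year interval is several times wider than the 64-year one,
# so a short-window slope typically cannot be distinguished either
# from zero or from the long-term trend.
```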

Lord Monckton’s title was:
“Temperature analysis of 5 datasets shows the ‘Great Pause’ has endured for 13 years, 4 months”

It could just as easily have been something like:
“Temperature analysis of 5 datasets shows the ‘Great Pause’ has endured for 17 years and 7 months at a rate that is not distinguishable from zero at the 95% level”.

Apologists for the computer models are perhaps too indulgent of their manifest failures. Recall that the IPCC in 1990 expressed “substantial confidence” that by 2025 there would be 1.0 [0.7, 1.5] K global warming – i.e., around 0.68 K by now. Instead, the rate of warming since 1990 has been half the IPCC’s then central estimate and considerably below even its then least estimate. If, as one apologist here suggests, ocean circulation is so important that it was obvious that the models in use in 1990 would fail, the modelers, even if they had not yet incorporated ocean circulation into the models, would surely have been aware of this obvious point. In that event it must be inferred that the IPCC’s expression of “substantial confidence” in its interval of projections was intended to mislead. It would have been less dishonest if, right from the start, the IPCC had said that the models in their then state were inadequate and that, therefore, it was not possible to express “substantial confidence” in their output. But any such honesty might have been fatal to the IPCC’s own profitable continuance.

As to the significance of various trends, I trust that it is now agreed that the trend since 1950 is not +0.16 K/decade but less than +0.12 K/decade. The trend on the Central England Temperature Record (the world’s oldest regional record) from 1694-1733 was +0.39 K/decade, and that was before the industrial revolution began. Since the Central England record is a not unreasonable proxy for global temperature change, one may reasonably infer that warming of 0.12 K/decade is comfortably within the interval of natural internal variability.

Over any period long enough to show a trend in excess of +/- 0.15 K, the trend at least becomes sufficient to overcome the combined measurement, coverage, and bias uncertainties in the data, as published alongside the data themselves in the HadCRUT4 series. We may, therefore, infer that there has been some warming of the climate since 1950. However, it is possible to go back 18 years 6 months, to the beginning of 1996, before one finds a trend on the mean of the five principal global-temperature datasets that is in excess of +0.15 K and hence distinguishable from the published uncertainties.
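The arithmetic behind such a threshold is simple (a back-of-envelope check only, assuming a steady ~0.12 K/decade rate against the ±0.15 K combined uncertainty quoted above):

```python
# Years needed for warming at ~0.12 K/decade to accumulate more
# than the +/-0.15 K combined measurement uncertainty.
rate_per_year = 0.12 / 10        # K/yr
uncertainty = 0.15               # K
years_needed = uncertainty / rate_per_year
# years_needed = 12.5 at the mean rate; the comment's 18.5-year
# figure is longer because the actual recent trend is lower still.
```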

It is trivially true that in any sufficiently stochastic time-series that exhibits an overall trend (though not, of course, in every time series) there will be periods during which the trend will be zero. However, the global temperature trend has been indistinguishable from zero for well over 18 years, during which CO2 emissions have reached record levels. That that circumstance is startling may be deduced from the fact that not one of the CMIP3 or CMIP5 models predicted that outcome as its central estimate, and – as best I can determine – very few predicted that outcome even within the 95%-confidence interval. No surprise, then, that the IPCC – under the advice of expert reviewers such as me – has accepted that it can no longer get away with its absurdly overblown medium-term predictions. Its current interval of predictions is so much below its 1990 interval that the two barely overlap at any point. If the apologists for the models think the IPCC ought not to have taken account of the discrepancy between models’ past predictions and the far less exciting observed trend in global temperatures over the past quarter of a century, they should address their concerns not to me but to the IPCC.

It is interesting to note that the apologists for the models are reduced to asserting that in 1990 the models did well to make the coin-toss prediction whether temperatures would rise or fall. However, by 1990 temperatures had been rising appreciably for a decade and a half since the sudden climate shift of 1976, so – particularly since theory would lead us to expect that adding greenhouse gases to the atmosphere would be likely, all other things being equal, to cause some warming – one did not really need models at all to predict that some warming would continue. Indeed, though the solar physicists are telling us that they expect global temperature to begin falling in the next few years, I suspect that over a sufficiently long period – i.e. a full 60-year cycle of the ocean oscillations – temperature may well continue to rise, though probably not by much.

Apologists for the models should not underestimate the remarkable circumstance that absolute temperature has varied by little more than 1%, or 3K, either side of the 810,000-year mean, notwithstanding the substantial forcings to which the climate has been subjected in that time. Since we are already close to the upper bound on the interval of inferred temperature change over recent millennia, another degree or two of warming is the most that might be expected to occur before all recoverable fossil fuels were exhausted: for the climate object is manifestly better characterized as near-perfectly thermostatic than as driven by large positive feedbacks. And a degree or two of warming, given that nearly all of the ice on Earth has melted over the past 11,400 years, would be likely to do more good than harm.

Finally, the apologists for the models have long exhibited very little understanding of the relevant characteristics of an object – such as the climate – that behaves as a chaotic object. It is often falsely assumed by those inexperienced in the modeling of chaotic objects that across a sufficient interval the behavior of the object becomes respectably predictable. It is necessary only to read the paper that founded chaos theory (albeit without mentioning the word “chaos”), Lorenz (1963), to understand why any such notion is unscientific. For it is a key property of chaotic objects that a small perturbation in one of the initial conditions may cause a bifurcation which, while deterministic, is not determinable unless the initial conditions are known to a precision that is and will aye be unavailable in the climate. The IPCC itself understands this, and said so in paragraph 14.2.2.2 of its 2001 Third Assessment Report. So, if the apologists for the models consider that the IPCC has insufficient appreciation of the significance of Lorenz attractors, they should address their concerns not to me but to the Secretariat.

The IPCC attempts to overcome the inherent unpredictability of the climate over the very long term (i.e., over more than a few weeks) by the use of probability-density functions. However, it is a property of such functions that they require more understanding of the initial conditions and evolutionary processes of the object under examination than would be necessary to attempt a simple central estimate flanked by error-bars. Probability-density functions, therefore, are peculiarly unsuitable as a device to attempt the reliable, long-term prediction of the future states of chaotic objects such as the climate. It is time for those who prefer models to mere reality to accept that every model is an analogy, that by definition every analogy breaks down at some point, and that for well-understood mathematical reasons the climate models were broken from the outset. Models were not, are not and will never be capable of determining climate sensitivity, especially while it remains profitable to their operators to arrange for them to predict implausibly high rates of global warming.

Thank you for your post addressed to me at July 30, 2014 at 4:00 pm in reply to my post at July 30, 2014 at 3:25 pm.

Werner Brozek gave an excellent reply to your assertions about temperature trends in his post at July 30, 2014 at 9:30 pm. I see no reason for me to attempt to compete with that so I refer you to it.

I write to correct your failure to understand the Null Hypothesis, which is apparent when you write:

“True, but if you consider the previous 13 years the lower bound of the linear trend IS positive at 95% confidence: (see this). In other words, discernible global warming stopped at least 13 years ago according to the data sets.”

“This is nonsense. The systematic rise in temperature is slow enough, compared to the magnitude of interannual noise, that there will ALWAYS be some time period over which the slope is not statistically distinct from zero. Sometimes it will be a shorter time period, and sometimes longer, but there will always be one. But it is a mistake to assume that the null hypothesis always has to be that the slope is zero. Why not have the null hypothesis be that the slope is the same as it has been for the last several decades?”

The “nonsense” is your attempts
(a) to assert I claimed a Null hypothesis which I did not
and
(b) to replace the scientific method with a Null Hypothesis of your choosing.

I never cease to be amazed at how often I am called upon to explain the Null Hypothesis as it applies to anthropogenic (i.e. man-made) global warming (AGW). This is the second time this morning.

The Null Hypothesis says it must be assumed a system has not experienced a change unless there is evidence of a change.

The Null Hypothesis is a fundamental scientific principle and forms the basis of all scientific understanding, investigation and interpretation. Indeed, it is the basic principle of experimental procedure where an input to a system is altered to discern a change: if the system is not observed to respond to the alteration then it has to be assumed the system did not respond to the alteration.

In the case of climate science there is a hypothesis that increased greenhouse gases (GHGs, notably CO2) in the air will increase global temperature. There are good reasons to suppose this hypothesis may be true, but the Null Hypothesis says it must be assumed the GHG changes have no effect unless and until increased GHGs are observed to increase global temperature. That is what the scientific method decrees. It does not matter how certain some people may be that the hypothesis is right because observation of reality (i.e. empiricism) trumps all opinions.

Please note that the Null Hypothesis is a hypothesis which exists to be refuted by empirical observation. It is a rejection of the scientific method to assert that one can “choose” any subjective Null Hypothesis one likes. There is only one Null Hypothesis: i.e. it has to be assumed a system has not changed unless it is observed that the system has changed.

However, deciding a method which would discern a change may require a detailed statistical specification.

In the case of global climate no unprecedented climate behaviours are observed so the Null Hypothesis decrees that the climate system has not changed.

Importantly, an effect may be real but not overcome the Null Hypothesis because it is too trivial for the effect to be observable. Human activities have some effect on global temperature for several reasons. An example of an anthropogenic effect on global temperature is the urban heat island (UHI). Cities are warmer than the land around them, so cities cause some warming. But the temperature rise from cities is too small to be detected when averaged over the entire surface of the planet, although this global warming from cities can be estimated by measuring the warming of all cities and their areas.

Clearly, the Null Hypothesis decrees that UHI is not affecting global temperature although there are good reasons to think UHI has some effect. Similarly, it is very probable that AGW from GHG emissions is too trivial to have observable effects.

The feedbacks in the climate system are negative and, therefore, any effect of increased CO2 will probably be too small to discern because natural climate variability is much, much larger. This concurs with the empirically determined values of low climate sensitivity.

Indeed, because climate sensitivity is less than 1.0°C for a doubling of CO2 equivalent, it is physically impossible for the man-made global warming to be large enough to be detected (just as the global warming from UHI is too small to be detected). If something exists but is too small to be detected then it only has an abstract existence; it does not have a discernible existence that has effects (observation of the effects would be its detection).

To date there are no discernible effects of AGW. Hence, the Null Hypothesis decrees that AGW does not affect global climate to a discernible degree. That is the ONLY scientific conclusion possible at present.

Monckton: Provide specific references to your sources if you want anyone but sheep to believe you. The IPCC did not state with ‘substantial confidence’ that there would be 0.68 K of global surface warming by now. They might have said, with ‘substantial confidence’ that given CO2 emissions in a small projected range that we would have 0.68K of global surface warming by now. Just because you and your friends have reading comprehension problems doesn’t mean the rest of us do.

Your arrogant, ignorant and stupid post at July 31, 2014 at 1:39 am rudely says in total

Monckton: Provide specific references to your sources if you want anyone but sheep to believe you. The IPCC did not state with ‘substantial confidence’ that there would be 0.68 K of global surface warming by now. They might have said, with ‘substantial confidence’ that given CO2 emissions in a small projected range that we would have 0.68K of global surface warming by now. Just because you and your friends have reading comprehension problems doesn’t mean the rest of us do.

I can do better than that. I cite the IPCC prediction (n.b. PREDICTION not projection) of “committed warming” from CO2 emissions made in the past.

The multi-model average warming for all radiative forcing agents held constant at year 2000 (reported earlier for several of the models by Meehl et al., 2005c), is about 0.6°C for the period 2090 to 2099 relative to the 1980 to 1999 reference period. This is roughly the magnitude of warming simulated in the 20th century. Applying the same uncertainty assessment as for the SRES scenarios in Fig. 10.29 (–40 to +60%), the likely uncertainty range is 0.3°C to 0.9°C. Hansen et al. (2005a) calculate the current energy imbalance of the Earth to be 0.85 W m–2, implying that the unrealised global warming is about 0.6°C without any further increase in radiative forcing. The committed warming trend values show a rate of warming averaged over the first two decades of the 21st century of about 0.1°C per decade, due mainly to the slow response of the oceans. About twice as much warming (0.2°C per decade) would be expected if emissions are within the range of the SRES scenarios.

In other words, it was expected that global temperature would rise at an average rate of “0.2°C per decade” over the first two decades of this century with half of this rise being due to atmospheric GHG emissions which were already in the system.

This assertion of “committed warming” should have had large uncertainty because the Report was published in 2007 and there was then no indication of any global temperature rise over the previous 7 years. There has still not been any rise and we are now way past the half-way mark of the “first two decades of the 21st century”.

So, if this “committed warming” is to occur such as to provide a rise of 0.2°C per decade by 2020 then global temperature would need to rise over the next 7 years by about 0.4°C. And this assumes the “average” rise over the two decades is the difference between the temperatures at 2000 and 2020. If the average rise of each of the two decades is assumed to be the “average” (i.e. linear trend) over those two decades then global temperature now needs to rise before 2020 by more than it rose over the entire twentieth century. It only rose ~0.8°C over the entire twentieth century.
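The arithmetic here can be checked directly (using the figures as given in the comment, including its simple linear-average reading of "averaged over the first two decades"):

```python
# Expected warming under the AR4 "0.2 C per decade" figure for
# 2000-2020, versus what remains if there was no rise to 2013.
rate = 0.2                    # C/decade, SRES-range expectation
total_expected = rate * 2.0   # 0.4 C over 2000-2020
rise_so_far = 0.0             # the comment's premise: flat to 2013
years_left = 2020 - 2013      # 7 years remaining
rate_needed = (total_expected - rise_so_far) / (years_left / 10)
# rate_needed is about 0.57 C/decade over those 7 years, several
# times the ~0.08 C/decade observed over the 20th century.
```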

The linear global-temperature rise from year 2000 to now from “committed warming” should have been more than the 0.68 K reported by the Third Viscount Monckton of Brenchley.

Simply, the “committed warming” has disappeared (perhaps it has eloped with Trenberth’s ‘missing heat’?).

This disappearance of the “committed warming” is – of itself – sufficient to falsify the AGW hypothesis as emulated by climate models. If we reach 2020 without any detection of the “committed warming” then it will be 100% certain that all projections of global warming are complete bunkum.

richardscourtney: I’m happy to switch the conversation to “committed warming”
and happy to finally see concrete references. You may want to look at http://www.skepticalscience.com/ipcc-overestimate-global-warming.htm
starting at the ‘Scorecard’ heading:
“The IPCC AR4 Scenario A2 projected rate of warming from 2000 to 2012 was 0.18°C per decade. This is within the uncertainty range of the observed rate of warming (0.06 ± 0.16°C per decade) since 2000, though the observed warming has likely been lower than the AR4 projection. As we will show below, this is due to the preponderance of natural temperature influences being in the cooling direction since 2000, while the AR4 projection is consistent with the underlying human-caused warming trend.”
In the GISS data
(http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt)
you can see that there are ~0.3 C changes in temperature from year to year
(e.g. Feb 2001 is 0.46C, Feb 2002 is 0.74C). Using yearly averages, we have
0.40C for 2000 and 0.66C for 2010. Given the amount of noise in the data,
a prediction of 0.2C of committed warming per decade pretty much agrees with
what we are actually observing.
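As a quick check on the figures quoted in this comment (taking its numbers at face value):

```python
# Two-point decadal change implied by the yearly GISS averages
# quoted above, and the size of the year-to-year swings cited.
anom_2000 = 0.40                    # C, yearly average for 2000
anom_2010 = 0.66                    # C, yearly average for 2010
per_decade = anom_2010 - anom_2000  # 0.26 C over the decade
interannual = 0.74 - 0.46           # ~0.28 C Feb 2001 -> Feb 2002 swing
# With interannual swings of this size, a crude two-point estimate
# of 0.26 C/decade cannot sharply discriminate against 0.2 C/decade.
```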

“Cesium62” ought to know better than to be discourteous from behind a furtive cloak of pseudonymity. My statement about the predictions made by the IPCC in 1990 referred, of course, to the IPCC’s 1990 First Assessment Report. “Cesium62” need only read the summary for policy-makers to discover that the IPCC predicted what I said it had predicted, and did so using the phrase “We predict …”, and also expressed “substantial confidence” that the models were appropriately representing the principal features of the climate system (though Mr Bickmore disagrees with the IPCC on this point, in that the models did not at that time reflect ocean circulation, and he considers that this should have been “obvious” to all, inferentially including even the admittedly dim IPCC).

The IPCC’s predictions, of 1.0 [0.7, 1.5] Celsius degrees of warming by 2025, would amount to 0.68 Celsius degrees by now, assuming a linear trend since 1990. The observed warming has only been 0.34 Celsius degrees in the quarter-century since 1990. And that, exactly as I said, is half the warming that the IPCC had then predicted, and below its least estimate. The models were wrong. Get used to it. The IPCC itself has had to reduce its near-term projection by almost half compared with 1990, and its present and 1990 projection intervals barely overlap at any point. On any objective test, the predictions that provoked the scare were so badly adrift that there is no longer any reason to be scared about global warming, and still less to spend any taxpayers’ or energy-users’ money on making it go away.
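The pro-rating behind the “0.68” and “0.34” figures is straightforward (assuming, as the comment does, a linear trend across the 1990-2025 prediction window):

```python
# Scaling the IPCC (1990) prediction of 1.0 [0.7, 1.5] C of warming
# by 2025 down to 2014, assuming a linear trend from 1990.
elapsed = 2014 - 1990            # ~24 years so far
span = 2025 - 1990               # 35-year prediction window
central = 1.0 * elapsed / span   # ~0.69 C expected by now
lower = 0.7 * elapsed / span     # ~0.48 C, least estimate by now
# The observed 0.34 C since 1990 is about half the scaled central
# estimate and below even the scaled lower bound.
```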

“Cesium62” seems to think that a prediction of 0.2 C/decade global warming is “consistent” with the outturn of no global warming at all in the past decade and a half. However, the combined measurement, coverage and bias uncertainties amount to only 0.15 C in total, whereas there should have been 0.3 C warming in the past decade and a half and there has not been any to speak of. The discrepancy between prediction and observation is, therefore, significant.

Since 1950 the mean rate of global warming has been less than 0.12 C/decade, well below the IPCC’s prediction of 0.2 C/decade. Whichever way one tries to rearrange or apologize for the numbers, the IPCC’s projections have been consistently and substantially on the side of exaggeration, and the IPCC has foolishly attempted to claim ever-greater “confidence” in its predictions even as the failure of those predictions becomes ever greater and more obvious to all.

Well, if you are “happy” to discuss “committed warming” I wonder why you did not. Instead, you suggested I “may want to look” at the contents of the cess pit known as SkS.

I gave you reference, citation, link and quotation which shows the IPCC predicted (n.b. predicted) ~0.3°C of global temperature rise from year 2000 until now. Even allowing for their stated error estimate, the rise should have been over 0.2°C by now.

I point out that this “committed warming” has not happened. And your reply is to say that “I may want” to dip into climate porn: I don’t!

Stephen Wilde said –
More people were killed or injured in the primitive 19th century UK coal mining industry than have ever been harmed by the nuclear power industry worldwide.

There certainly were – and well into the 20th Century too. When I went down the pit with my dad in 1956 there were 550 fatalities in UK coal mining and over 10,000 serious accidents that year, in 50 working weeks. This was typical. Do the Math, as our American friends say. Such a casualty record today is unthinkable, in any industry.

And to Lord Monckton, right on Milord, more power to your elbow – nuclear power and fossil fuel power preferably.

And to Anthony, what a fantastic website this is, required lunchtime reading for me every day – you should be knighted too! Please publicise your trip to the UK so we can come and hear you and give you our support.

richardscourtney (July 31, 2014 at 1:32 am). A null hypothesis can be whatever you choose. Typically, one would choose a condition that indicates no change, but no change in what sense? My point was that, for the global mean temperature series, you could choose no change in the temperature, or you could choose no change in the slope if it’s been relatively consistent over a longer period. And whether or not choosing the second option fits with your philosophy of null-hypothesis-choosing, the fact remains that the 13-year slope is statistically indistinguishable from the 40-year slope. On this basis alone, it is dishonest to say warming “stopped” 13 years ago.

“Since 1950 the mean rate of global warming has been less than 0.12 C/decade, well below the IPCC’s prediction of 0.2 C/decade. Whichever way one tries to rearrange or apologize for the numbers, the IPCC’s projections have been consistently and substantially on the side of exaggeration, and the IPCC has foolishly attempted to claim ever-greater ‘confidence’ in its predictions even as the failure of those predictions becomes ever greater and more obvious to all.”

Given that the IPCC didn’t exist in 1950, I’m reasonably sure they didn’t predict a rate of 0.2 °C since then. Did you mean to say something else?

In response to Brian J in UK, I had great respect for the miners who went down Britain’s deep mines. They were dangerous, difficult, uncomfortable places to work. I thought of becoming a miner when I discovered they were paid three times what I was getting as a leader-writer on the Yorkshire Post. I went down a mine and took one look, and reckoned that they were a lot braver than I was.

It was a shame they were badly led by Communists trained in and funded by Moscow. If they hadn’t been used by Moscow as an instrument first to topple the Conservative government of Edward Heath and then to try (unsuccessfully) to do the same to Margaret Thatcher, the process of switching from dangerous, loss-making deep-mined coal to the cheaper and less life-threatening opencast would have been slower and more dignified, and the mining communities would not have suffered as sharply and painfully from the abrupt transition as they did.

It’s a shame that the Left, who once backed the miners because the miners were led by Communists, now oppose the miners on the bogus ground that coal emits dangerous quantities of CO2, when the real reason for Mr Obama’s opposition to coal is that coal corporations have traditionally been among the biggest donors to the Republican party.

When the history of the present age comes to be written, the cynical wickedness of the Left in exploiting and then in ditching the very workers for whom they claimed to speak will be exposed and recorded. But the deep miners of the UK, badly led though they were by the Communists who had captured their union, will always be the heroes of labor to me.

And if you plot the same metric, for the same length of time, but use 2006 as your endpoint, the rate is double the long-term trend, and nearly 50% above the IPCC (1990) prediction. Maybe the ‘pause’ is no more than regression to the mean?

A system that is “chaotic” exhibits unpredictable behavior in the short term, but long-term averages can still be quite predictable.
======================
Define “long-term”.
Based on the definition of climate as the average of weather over a 30-year period, the span from 1950 until today is 64 years, about 2.5 data points in our climate sample. It is nonsense to talk about long-term averages with only 2.5 data points.

As a rule of thumb you need 1000 data points, i.e. 3000 years of climate data, to get a reliable long-term average. I expect current temperatures are within ±3 standard deviations of the mean climate of the past 3000 years. As such, there is nothing unusual about the current climate.

Stand on the beach. Every now and then a wave will come along bigger than all the rest. Does this mean the waves are getting bigger? Only if you stand there a very long time can you make that call. Looking at 10 waves or 100 waves isn’t enough. You need 1000 waves, 2-3 hours of watching, to be sure. 64 years, or 2.5 climate cycles, is similar to watching 2.5 waves on the beach and concluding they are getting larger. It is a nonsense conclusion, because it fails to properly consider randomness.
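The wave argument is the familiar 1/sqrt(n) shrinkage of the standard error, which a short simulation makes concrete (unit-variance toy noise, not climate data):

```python
import random
import statistics

random.seed(2)

def spread_of_estimates(n_samples, trials=2000):
    """Std. dev. of the sample mean across many repeated experiments."""
    estimates = []
    for _ in range(trials):
        draws = [random.gauss(0, 1) for _ in range(n_samples)]
        estimates.append(sum(draws) / n_samples)
    return statistics.stdev(estimates)

few = spread_of_estimates(3)      # ~2.5 "climate data points"
many = spread_of_estimates(1000)  # ~1000 waves / 3000 years
# The mean estimated from 3 samples is roughly sqrt(1000/3) ~ 18x
# noisier than one estimated from 1000 samples.
```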

average ± 3 standard deviations
===============
Process control in modern manufacturing relies heavily on this definition. A process that is outside these limits is considered “out of control” — and thus in need of corrective action.

True: by confusing weather with climate, and thus juggling the timescales, one can reach faulty conclusions. But this is an abuse of statistics. You cannot slide the 30-year ruler back and forth to create multiple samples within a period of 2.5 samples and then try to attach significance to the result.

3000 years of climate will at a minimum include the Medieval and Roman warm periods, which are well matched to the modern warm period within the limits of the available records. Our current climate is an “in control” process.

Milankovitch forcing, and that isn’t large enough to explain the changes without some hefty positive feedback.
=============
Not correct. Every object has natural frequencies at which it will resonate. When a forcing is in sync with these frequencies, the object responds with a much greater amplitude than the forcing alone can explain.

Explain why, for example, Venus always shows the same face to Earth at the point of closest approach. The orbital forcings cannot explain this, yet the retrograde spin of Venus is clearly synchronized to Earth. But because this cannot be explained by science, the consensus view is that it must be coincidental.

It is like someone winning the lottery 10 times in a row: because it cannot be explained, it must be coincidence. Similar explanations were once given for ice ages and plate tectonics: because we cannot explain the cause, what we observe cannot be due to cause and effect; it must simply be coincidence.

The more obvious answer is the one scientists never give: that we don’t know the cause. The reality is we can’t say it is due to coincidence simply because we haven’t found the cause. That is like people in times past saying that God creates illness, or saying that humans cause global warming because we can’t find any other cause.

ferdberple, if you agree that Milankovitch forcing is the cause of the glacial-interglacial cycle, but that the Earth has to “resonate” with it to obtain the observed response… then you just repeated what I said about positive feedback.

richardscourtney (July 31, 2014 at 1:32 am). A null hypothesis can be whatever you choose.

NO! Bayesian priors can be chosen, but the scientific method defines the Null Hypothesis. Please see my explanation which you cite. It is here. It explains that The Null Hypothesis says it must be assumed a system has not experienced a change unless there is evidence of a change.

The Null Hypothesis is a fundamental scientific principle and forms the basis of all scientific understanding, investigation and interpretation. Indeed, it is the basic principle of experimental procedure where an input to a system is altered to discern a change: if the system is not observed to respond to the alteration then it has to be assumed the system did not respond to the alteration.

The scientific method has provided many benefits and I see no reason to think that your desire to alter it should be accepted.
Indeed, your rejection of the scientific method gives me reason to reject everything you say.

richardscourtney (July 31, 2014 at 1:32 am). A null hypothesis can be whatever you choose.

NO! Bayesian priors can be chosen, but the scientific method defines the Null Hypothesis. Please see my explanation which you cite. It is here. It explains that The Null Hypothesis says it must be assumed a system has not experienced a change unless there is evidence of a change.

But until 13 years ago the climate was warming at 0.16 degrees per decade. The NULL Hypothesis, therefore, should be that the climate is still warming at 0.16 degrees per decade (i.e. the system has not changed).

In truth, though, it doesn’t really matter what the surface/atmosphere is doing. Until we have evidence that the oceans have stopped warming then we can’t say global warming has stopped.

It seems that the apologists for the failed models have difficulty in understanding the basics of temperature change, just as the models themselves do. To establish the true trend, canceling out the warming and cooling phases of the Pacific Decadal Oscillation, it is necessary to take periods of approximately 60 years. Since 1950, the year when, according to the IPCC, we might have begun to influence the weather, 64 years have passed. The rate of warming over that period has been less than 0.12 K/decade. The IPCC, however, has been predicting 0.2 K/decade – close to twice what has occurred.

If, however, the apologists prefer to reckon from 1990, the date of the IPCC’s First Assessment Report, then the rate of observed warming is still below 0.14 K/decade, compared with the 0.28 K/decade predicted by the IPCC. As I have said, whichever way one slices the numbers, the IPCC’s predictions have been relentlessly exaggerated when compared with real-world measurement. It is futile, as well as intellectually dishonest, to try to maneuver and wriggle, duck and dive, rather than having the scientific integrity to admit – as the IPCC itself has admitted – that the models were simply wrong.

dbstealey you “explain” nothing. What Phil Jones said in 1999 is completely irrelevant to the content of Lord M’s interesting article – which happens to back up what I have been saying for some time (ie the pause started in 2002) – and which you (along with others) have regularly attacked.

george e. smith you miss the point entirely. Lord M’s (more realistic) analysis is based on taking the 5 data sets together which cover both satellite and terrestrial temperature measurements – as opposed to the single RSS data set (satellite sensing of atmospheric temperature in various altitude bands) which happens to give a favoured result for those looking for the least warming……”””””

Really; did I do that ??

……”””””” Lord M’s (more realistic) analysis is based on taking the 5 data sets together …..”””

Nah; didn’t miss that, I said that.

……”””””””as opposed to the single RSS data set ……””””

Nah, didn’t miss that, I said that.

Darned if I didn’t say he also had (in the past), tried his algorithm on each of the other data sets individually, and showed no great differences.

Only things I DIDN’T SAY, but are the results of your editorializing, are these two:

…..””””” (more realistic)…..”””””

…..””””” which happens to give a favoured result for those looking for the least warming…..””””

Now James, just what was the point you were attempting to make; since I missed nothing, that you claimed I missed.

Last time I studied statistics, about 60 years ago, if you use the same data set and apply the same statistical algorithms to it, you (and anybody else) ALWAYS get the exact same results. Statistics is completely deterministic.
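The determinism point is easily demonstrated. A least-squares trend is plain arithmetic, so the same series always yields the same slope; here is a minimal Python sketch (a toy illustration only, not the code any dataset provider actually uses):

```python
def trend_per_decade(anomalies):
    """Ordinary least-squares trend of a monthly anomaly series,
    in degrees per decade. Plain arithmetic: same input, same
    output, every time, whoever runs it."""
    n = len(anomalies)
    xbar = (n - 1) / 2.0
    ybar = sum(anomalies) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in enumerate(anomalies))
    sxx = sum((x - xbar) ** 2 for x in range(n))
    return (sxy / sxx) * 120.0   # per-month slope -> per-decade

# A toy series rising 0.001 degrees per month for 160 months:
series = [0.001 * m for m in range(160)]
print(trend_per_decade(series))  # ~0.12, identically on every run
```

Change the input series (say, to a different dataset’s anomalies) and the output slope changes; that is a different answer to a different question, not a contradiction.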

But if you change either the data set (for some reason) or the algorithm (for some other reason), then the output results will change as well.

Lord Monckton changed the data set; and got a different answer (to a different problem).

Furthermore, he correctly reported his results, in both instances.

Now if the five data set result had come out at 19 years and 7 months, would you now also be whining about the altered outcome?

Like I said, if you didn’t like his arithmetic; (statistics requires at least fourth grade arithmetic), then why not do the calculations yourself, and see if you get the same results.

I do know that summing a convergent but not absolutely convergent (that is, conditionally convergent) infinite series can give you any positive or negative total sum that you like, depending on the order in which you do the summation.
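That is Riemann’s rearrangement theorem, and it is easy to demonstrate with the alternating harmonic series. A short sketch of the standard greedy rearrangement (the targets below are arbitrary, chosen only for illustration):

```python
import math

def rearranged_sum(target, n_terms=100_000):
    """Greedily rearrange the conditionally convergent series
    1 - 1/2 + 1/3 - 1/4 + ... so its partial sums home in on an
    arbitrary target (Riemann's rearrangement theorem)."""
    pos, neg = 1, 2          # next odd / even denominators
    total = 0.0
    for _ in range(n_terms):
        if total < target:
            total += 1.0 / pos   # spend the next positive term
            pos += 2
        else:
            total -= 1.0 / neg   # spend the next negative term
            neg += 2
    return total

# In its natural order the series sums to ln 2 ~ 0.693, but a
# rearrangement can be steered anywhere:
print(rearranged_sum(1.5))    # ~1.5
print(rearranged_sum(-0.25))  # ~-0.25
print(math.log(2))            # ~0.693, the unrearranged sum
```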

But these are only finite data sets, so the result is pre-determined by the data and the algorithm, and not by the person doing the computation.

So give it a go, and show us where Christopher (allegedly) went wrong.

If, however, the apologists prefer to reckon from 1990, the date of the IPCC’s First Assessment Report, then the rate of observed warming is still below 0.14 K/decade, compared with the 0.28 K/decade predicted by the IPCC.
===========================================================================

The IPCC’s 1990 First Assessment Report also states in the Executive Summary:
“Based on current models we predict………The rise will not be steady because of the influence of other factors.”

I see your post at July 31, 2014 at 3:04 pm demonstrates you have reading comprehension problems.

I suggest that you read my explanation again and – this time – attempt to understand this bit

If something exists but is too small to be detected then it only has an abstract existence; it does not have a discernible existence that has effects (observation of the effects would be its detection).

To date there are no discernible effects of AGW. Hence, the Null Hypothesis decrees that AGW does not affect global climate to a discernible degree. That is the ONLY scientific conclusion possible at present.

I see your post at July 31, 2014 at 3:04 pm demonstrates you have reading comprehension problems.

I don’t think so and I’m not sure why you bang on about AGW later in your post. We are discussing the continuation or non-continuation of a warming trend. Whether that trend is due to AGW is irrelevant.

I maintain that there is insufficient evidence to conclude that the previous surface warming trend has stopped. I further maintain that global warming (for whatever reason) has certainly not stopped because the oceans are continuing to accumulate energy at the rate of ~7×10^22 Joules per decade.
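For scale, the ~7×10^22 J/decade figure can be converted into a bulk ocean temperature change with round numbers. (The ocean mass and seawater specific heat below are textbook-order assumptions for illustration, not values taken from the thread.)

```python
# Back-of-envelope: what whole-ocean temperature change does a heat
# uptake of ~7e22 J per decade imply?
ohc_per_decade = 7e22      # J/decade, the figure quoted above
ocean_mass     = 1.4e21    # kg, round number for the whole ocean
c_seawater     = 3990.0    # J/(kg K), specific heat of seawater

dT_per_decade = ohc_per_decade / (ocean_mass * c_seawater)
print(f"{dT_per_decade:.4f} K/decade")   # ~0.0125 K/decade
```

Averaged over the full ocean, an uptake of that size corresponds to only about a hundredth of a degree per decade, which is part of why both sides of this exchange agree it is hard to resolve observationally.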

In response to Mr Finn, the warming trend that was evident between 1976 and the turn of the millennium has manifestly ceased since, on all datasets. That is not to say that it will not resume in due course: we are continuing to add CO2 to the atmosphere and, all other things being equal, theory would lead us to expect that warming will resume.

However, one cannot safely pray ocean warming in aid. Sea level, according to the Envisat satellite, barely rose during its eight years of operation; and the GRACE gravitational-anomaly satellites showed it falling from 2004-2009, suggesting that any net accumulation of heat in the oceans must be small enough not to cause much sea-level rise. Unfortunately, the measurement of changes in sea temperature is insufficiently well resolved to allow us to be sure at what rate (if any) the oceans are accumulating heat; nor can we be sure how much of any accumulating heat in the oceans is attributable to warming of the atmosphere and how much to radiation from the Earth’s mantle.

The least ill-resolved record of ocean heat content we have is the 3500 ARGO bathythermograph buoys: but their resolution is the equivalent of taking a single temperature and salinity profile of the whole of Lake Superior less than once a year, as Mr Eschenbach, in one of his distinguished contributions here, has pointed out. What they indicate (subject to concerns about their resolution) is that the oceans have been accumulating heat at a rate approximately one-sixth of that which the models had predicted. One should hesitate, therefore, to draw definitive conclusions about the continuance of global warming from the apparent heat content of the oceans. We cannot measure it accurately enough to be confident of any such conclusion.

Furthermore, the oceans are denser than the atmosphere by three orders of magnitude: therefore, a very large change in atmospheric temperature would be needed to bring about a very small change in ocean heat content. Yet for a decade and a half there has been no change in atmospheric temperature at all – and certainly not a large one. Therefore, if there has been an increase in ocean heat content, it is perhaps more likely to have come from below than from above.
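The heat-capacity disparity invoked here can be checked with round textbook figures (the masses and specific heats below are illustrative assumptions):

```python
# Rough heat-capacity comparison of ocean vs atmosphere:
atm_mass   = 5.1e18    # kg, total mass of the atmosphere
ocean_mass = 1.4e21    # kg, total mass of the oceans
cp_air     = 1004.0    # J/(kg K), air at constant pressure
cp_sea     = 3990.0    # J/(kg K), seawater

ratio = (ocean_mass * cp_sea) / (atm_mass * cp_air)
print(f"ocean/atmosphere heat capacity ratio ~ {ratio:.0f}")  # ~1100
```

A ratio on the order of a thousand is consistent with the three-orders-of-magnitude remark: per degree of temperature change, the whole atmosphere holds roughly a thousandth of the heat the oceans do.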

John Finn:
I see your post at July 31, 2014 at 3:04 pm demonstrates you have reading comprehension problems.

I don’t think so and I’m not sure why you bang on about AGW later in your post. We are discussing the continuation or non-continuation of a warming trend. Whether that trend is due to AGW is irrelevant.

Well, that confirms the observation of your reading comprehension problems.

This part of the thread commenced with Barry Bickmore writing his post addressed to me at July 30, 2014 at 4:00 pm. My reply at July 31, 2014 at 1:32 am said

The “nonsense” is your attempts
(a) to assert I claimed a Null hypothesis which I did not
and
(b) to replace the scientific method with a Null Hypothesis of your choosing.

I never cease to be amazed at how often I am called upon to explain the Null Hypothesis as it applies to anthropogenic (i.e. man-made) global warming (AGW). This is the second time this morning.

The Null Hypothesis says it must be assumed a system has not experienced a change unless there is evidence of a change.

That is why my comments on that matter “bang on about AGW”. It was the subject under discussion.

Global warming stopped. It has ceased. It is kicking up the daisies. It is an ex-parrot.
But you try to pretend that global warming has not stopped by writing

I maintain that there is insufficient evidence to conclude that the previous surface warming trend has stopped. I further maintain that global warming (for whatever reason) has certainly not stopped because the oceans are continuing to accumulate energy at the rate of ~7×10^22 Joules per decade.

John Finn, warming consists of an increase in temperature. Global warming is an increase in the surface temperature of the Earth. There has been no increase in the surface temperature of the Earth for more than a decade. So, GLOBAL WARMING HAS STOPPED.

Your claim that global warming has not stopped because “the oceans are continuing to accumulate energy” is like saying the parrot has not died because its feet are nailed to its perch.

I have to say that this has been an entertaining conversation with Lord Monckton. He started with an assertion that there has been no statistically significant warming for the last 13 years, to which I replied that it’s also true you can’t statistically distinguish the 13-year slope from that over the last several decades. So if we want to be up-front and honest, we have to say that this short period isn’t enough to distinguish whether the warming rate is about the same as it has been, or essentially flat. This is an elementary point about overlapping error bars and honest reporting of statistics, but Monckton tried to cover his ignorance of statistical methods with this strange comment. “The long-term warming trend of less than 0.12 K/decade since 1950 is indeed statistically indistinguishable (over a sufficiently short period) from a zero trend….” This is flatly untrue, and it makes no sense to talk about the slope “since 1950… over a sufficiently short period”, because the slope since 1950… is the slope since 1950. But when I said so, he shifted the goalposts again. “Since 1950 the mean rate of global warming has been less than 0.12 C/decade, well below the IPCC’s prediction of 0.2 C/decade.” So now he’s comparing the rate of change since 1950 with the IPCC projected rate since sometime in the 1990’s or 2000’s. I pointed out that the IPCC wasn’t around in 1950 to make projections, but now he’s really bringing in the big guns–the PDO!
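The overlapping-error-bars point can be made concrete with a deterministic back-of-envelope calculation. (The 0.1 K residual scatter below is an assumed round figure, not a property of any real dataset, and the 1.96 multiplier is the usual normal approximation.)

```python
import math

# How tightly can 13 annual data points pin down a trend?
sigma = 0.10                               # K, assumed year-to-year scatter
years = list(range(13))
xbar = sum(years) / len(years)
sxx = sum((x - xbar) ** 2 for x in years)  # = 182 for 13 points

se_slope = sigma / math.sqrt(sxx)          # OLS slope std. error, K/yr
ci95 = 1.96 * se_slope * 10                # ~95% half-width, K/decade

print(f"95% half-width on a 13-year trend: +/-{ci95:.2f} K/decade")
```

A half-width near 0.15 K/decade excludes neither a zero trend nor a ~0.12 K/decade continuation of the earlier warming, which is precisely the point about overlapping error bars: a 13-year window, taken alone, settles nothing either way.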

Monckton of Brenchley (July 31, 2014 at 3:15 pm) says:

“It seems that the apologists for the failed models have difficulty in understanding the basics of temperature change, just as the models themselves do. To establish the true trend, canceling out the warming and cooling phases of the Pacific Decadal Oscillation, it is necessary to take periods of approximately 60 years. Since 1950, the year when, according to the IPCC, we might have begun to influence the weather, 64 years have passed. The rate of warming over that period has been less than 0.12 K/decade. The IPCC, however, has been predicting 0.2 K/decade – close to twice what has occurred.”

How bizarre. He simply dismisses the point that the IPCC projection he cites was not for the period since 1950, and throws up some smoke and mirrors about how you must have at least 60 years to cancel out the effect of the PDO. Let’s just think about that for a bit.

1. If you need at least 60 years to average out PDO effects, why is Monckton making a big deal about the 160-month temperature trend, when the slope of the PDO index has been negative over that time period? Shouldn’t he do a PDO correction on the recent trend, or at least acknowledge that the recent trend might have been pushed down due to PDO effects?

2. This is an important point, because the PDO generally follows ENSO, but at a lower frequency. Multiple studies have now shown that if you statistically correct for the correlation of the temperature with ENSO, the temperature slope over the last decade and a half is essentially the same as it has been since at least the 70’s.

3. I wonder what work on the PDO His Lordship is referring to. I’m quite familiar with Roy Spencer’s work on the subject. See the following, in which I showed that (among other things) Roy had started his model wildly out of equilibrium so he could fit the first 50 years of his model run, and then posited a 700 m ocean mixed layer (it’s maybe 50-200 m) so he could get a weak equilibrium climate sensitivity. He could have gotten any answer he wanted for the climate sensitivity, with exactly the same quality of fit.

It will be interesting to see how His Lordship follows this up. This conversation has given me a nostalgic feeling, because a few years ago I was able to show that Monckton had 1) somehow corrupted the IPCC A2 scenario CO2 values, and then 2) fed them into a simple equation meant to estimate EQUILIBRIUM temperature response, to 3) produce a fake IPCC “prediction” of the TRANSIENT temperature response. (I should also note that he neglected other types of forcing than CO2). He could have looked up the actual IPCC projections for the A2 scenario, but instead he blamed his own fake “predictions” on them. He even showed the graph to a Congressional committee. See here:

When he responded to the charges, Monckton included the following utterly fascinating paragraph.

“Some have said that the IPCC projection zone on our graphs should show exactly the values that the IPCC actually projects for the A2 scenario. However, as will soon become apparent, the IPCC’s ‘global-warming’ projections for the early part of the present century appear to have been, in effect, artificially detuned to conform more closely to observation. In compiling our graphs, we decided not merely to accept the IPCC’s projections as being a true representation of the warming that using the IPCC’s own methods for determining climate sensitivity would lead us to expect, but to establish just how much warming the use of the IPCC’s methods would predict, and to take that warming as the basis for the definition of the IPCC projection zone.”

In other words, it was really ok for him to say that the projection zone on his graphs was the IPCC’s, even though they had actually made different projections. Why? Because those sneaky bastards didn’t do their projections right!!! They should have done it like Monckton did, with a simple equation describing equilibrium climate sensitivity, rather than models that produce time series. All of which avoids the glaringly obvious fact that IF THEY DIDN’T MAKE THAT PROJECTION, IT IS DISHONEST TO SAY THEY DID.

Likewise, if you are going to use statistical analysis to say the temperature slope is different than it was before, you have to look at the error estimates. Monckton, along with some of the others who comment here, refuses to consider that.

John Finn, warming consists of an increase in temperature. Global warming is an increase in the surface temperature of the Earth.

So “global” only refers to the earth’s surface, does it? I don’t think so. The oceans are an important – in fact the most important – part of the earth’s climate system. If the oceans are warming it is a clear indication that the earth is gaining energy. The oceans are warming. Therefore, earth’s climate system is continuing to warm and we still have global warming.

John Finn, warming consists of an increase in temperature. Global warming is an increase in the surface temperature of the Earth.

So “global” only refers to the earth’s surface, does it? I don’t think so. The oceans are an important – in fact the most important – part of the earth’s climate system. If the oceans are warming it is a clear indication that the earth is gaining energy. The oceans are warming. Therefore, earth’s climate system is continuing to warm and we still have global warming.

That ‘moves the goalposts’ off the planet!

Global warming has always been an increase in the surface temperature of the Earth.
That is why HadCRU, NASA GISS, et al. have invested so much time, money and effort into devising their time series of global average surface temperature anomaly (GASTA) and why climate modellers have been attempting to project and predict GASTA.

But global warming has stopped so John Finn has decided that he “don’t think” the definition of global warming is right.

John Finn, I suggest you inform the compilers of GASTA data sets, the climate modellers, and the IPCC that you “don’t think” they have been assessing the right thing. Please report back on their responses and then we can discuss what you “don’t think”.

John Finn, I suggest you inform the compilers of GASTA data sets, the climate modellers, and the IPCC that you “don’t think” they have been assessing the right thing. Please report back on their responses and then we can discuss what you “don’t think”.

It’s not a case of whether or not the “right thing” is being assessed. It’s perfectly reasonable to consider the surface temperature trend as well as the OHC trend, but it’s important to recognise that, by far, the best measure of the earth’s energy (im)balance is in the OHC data. I think you’ll find Roger Pielke agrees very much with this statement. I do accept, though, that the mainstream (AGW) scientists were happy to focus on the surface temperature records while the observations were supporting their case.

The apologists for the failed models have done their best to throw up a Gish gallop of ingenious irrelevancies in the failed hope of diverting attention from the simple fact that, on the average of all five major global temperature datasets that report monthly data, the global temperature trend has been zero for 13 years 4 months, notwithstanding record increases in CO2 concentration.

One such irrelevancy is the suggestion that no account has been taken of the error-bars in the measurement of global temperatures. On the contrary: those error-bars are, as I have pointed out above, published as part of the HadCRUT4 data, and amount to 0.15 Celsius degrees. Accordingly, it is not possible to be 95% confident that there has been any global warming for approaching 18 years.
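The detectability claim can be illustrated with crude arithmetic, taking the ±0.15 K figure at face value (this compares endpoints only, which is a rougher criterion than fitting a trend to all the points):

```python
# At a steady warming rate, how many years before the accumulated
# change exceeds a +/-0.15 K combined measurement uncertainty band?
uncertainty = 0.15                     # K, figure quoted above
results = {}
for trend in (0.12, 0.20):             # K/decade
    results[trend] = uncertainty / (trend / 10.0)   # years
    print(f"{trend} K/decade -> ~{results[trend]:.1f} years to emerge")
```

At the observed ~0.12 K/decade a signal takes over a decade merely to clear the uncertainty band; at the IPCC’s projected 0.2 K/decade it would have cleared it in well under a decade.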

Building upon that irrelevancy, the apologists for the models offer the further irrelevancy that the zero trend of the past 13 years 4 months (or 17 years 10 months on the RSS data) is indistinguishable statistically from the warming trend of recent decades. At first, an attempt was made to state that that trend was +0.16 Celsius degrees per decade: however, since 1950 it has been +0.12 C/decade. And that, of course, is well below the currently-projected medium-term warming of +0.2 C/decade, and still further below the IPCC’s 1990 central projection of more like +0.3 C/decade.

Building upon that irrelevancy, the apologists maunder on to the effect that the IPCC in 1990 was making projections since 1990 and I should not be comparing them with results since 1950. However, the graph published in the head posting compares the predicted and actual trends since 1990. In that year the IPCC might well not have been aware of the 60-year quasi-periodicity in global temperatures that is known as the Pacific Decadal Oscillation. Yet we know about it now: so that, in evaluating the appropriateness of the IPCC’s 1990 projection, it is of course relevant – as anyone sufficiently educated in these matters would know – to compare that projection, expressed as a rate of warming in Celsius degrees per decade, with the outturn averaged across the most recent entire period of the PDO.

The ineluctable fact remains that, whichever way the numbers are sliced and diced by the apologists for the models, the models have made and continue to make predictions of near-term global warming that have been and are plainly excessive. The more the apologists wriggle, the more they draw attention to that fact. Remove the exaggerated predictions and the scare disappears. That is why they become so agitated when I merely point out the obvious – or, rather, what is obvious provided that in all the acres of print and hours of broadcasting someone actually bothers to point out the actual changes in global temperature (or, in recent decades, the conspicuous lack of such changes). When I asked the 650 attenders at the Heartland climate conference last month whether they had ever seen the Great Pause reported in any mainstream news medium, very few had seen it mentioned.

From this one may deduce that the apologists for the models, notwithstanding their whining bluster here, are becoming more and more alarmed at the failure of global temperatures to rise anything like as fast as ordered (or, recently, at all), and are doing their best to sneer at anyone who dares to point out the truth, in the hope of concealing the abject failure of the models.

First we were told by Hansen that one must wait at least five years before recognizing that the models were wrong. Five years without warming came and went. Make that ten years, he said. Ten years came and went. Make that 15 years, said the NCDC’s “State of the Climate” report for 2008. Fifteen years came and went. Make that 17 years, said Ben Santer (he who had single-handedly rewritten the 1995 IPCC report to claim that a discernible human influence on global climate had been found, when the scientists had said five times that no such influence could yet be detected and it was not known when such an influence might be found). Now, 17 years have come and gone without any warming that can be statistically distinguished from the combined measurement, coverage and bias uncertainties. But the apologists for the models now say that approaching 18 years without significant global warming is consistent with the models’ predictions. And they expect us to take them seriously. The shuffle and clatter of moving goalposts is audible.

Bottom line: as Mr Courtney has trenchantly pointed out, global warming has stopped. Theory would lead us to expect it will resume: but the notion that it will resume at a rate rapid enough to be dangerous looks ever more implausible with each month that passes. In the end, the ad-hominem remarks in which the sneering apologists for the models so readily indulge have now come back to haunt them: for the sheer venom with which they write has begun to alert all but a dwindling band of true-believers to the hollow falsity of all their fictions and fabrications – a falsity no longer effectively concealed by their shoddy readiness to descend into mere personalities.

Monckton of Brenchley: “[G]lobal warming has stopped. Theory would lead us to expect it will resume: but the notion that it will resume at a rate rapid enough to be dangerous looks ever more implausible with each month that passes.”

Having been among those who have never hesitated to criticize some of Lord Monckton’s posts, it is only fair for me to applaud the quoted passage’s measured and to-the-point summary of the situation.

Moreover, I think the penultimate paragraph of the comment that contained it merits recycling with some frequency. To that end, it may be worth the trouble to include hyperlink cites to the several mentioned Hansen claims as well as the Santer revision, against the admittedly unlikely eventuality that some mainstream-medium reporter will happen upon it in the course of a brief dalliance with actual facts.

Trying to determine why atmospheric temperature rises are not as predicted is NOT moving the goal posts, it’s science. There are many other indicators that show AGW is very real: rises in CO2 from fossil fuels, sea temperature, sea level, record-breaking weather events, damage costs due to weather, changes in wildlife behaviour, etc. The latest IPCC report shows that the atmospheric temperature increase has risen at a much slower rate than expected in the last 17 years, not that it has stopped. What the “multidisciplined” teams working on climate change have discovered is that the sea is taking up much of this increase in energy, and as a result the seas are becoming more acidic. The sea is absorbing 50% of “our” carbon emissions. However, it can only absorb so much. No wonder the surface temperatures are not rising as expected, especially when combining this with the newly discovered Pacific winds and a slowdown in the Gulf Stream. It appears to me that the so-called “climate sceptics” believe the latest IPCC report when it suits them but ignore the sections referring to the observed devastation and the consequences of inaction. I suggest that the “sceptics” read the full report.