In August 2007, Doug Smith took the biggest gamble of his career. After more than ten years of work with fellow modellers at the Met Office’s Hadley Centre in Exeter, UK, Smith published a detailed prediction of how the climate would change over the better part of a decade. His team forecasted that global warming would stall briefly and then pick up speed, sending the planet into record-breaking territory within a few years.

The Hadley prediction has not fared particularly well. Six years on, global temperatures have yet to shoot up as it projected. Despite this underwhelming result, such near-term forecasts have caught on among many climate modellers, who are now trying to predict how global conditions will evolve over the next several years and beyond. Eventually, they hope to offer forecasts that will enable humanity to prepare for the decade ahead just as meteorologists help people to choose their clothes each morning.

These near-term forecasts stand in sharp contrast to the generic projections that climate modellers typically produce, which look many decades ahead and don’t represent the actual climate at any given time. “This is very new to climate science,” says Francisco Doblas-Reyes, a modeller at the Catalan Institute of Climate Sciences in Barcelona, Spain, and a lead author of a chapter that covers climate prediction for a forthcoming report by the Intergovernmental Panel on Climate Change (IPCC). “We’re developing an additional tool that can tell us a lot more about the near-term future.”

In preparation for the IPCC report, the first part of which is due out in September, some 16 teams ran an intensive series of decadal forecasting experiments with climate models. Over the past two years, a number of papers based on these exercises have been published, and they generally predict less warming than standard models over the near term. For these researchers, decadal forecasting has come of age. But many prominent scientists question both the results and the utility of what is, by all accounts, an expensive and time-consuming exercise.

To make its climate prediction, Smith’s team used its standard climate model, but broke the mould by borrowing ideas from the way meteorologists forecast the weekly weather. Typical climate projections start some way back in the past, often well before the industrial era, in a bid to capture the average climate well enough to forecast broad patterns over the long term. Weekly weather forecasts, however, begin with the present.

“It’s fair to say that the real world warmed even less than our forecast suggested,” Smith says. “We don’t really understand at the moment why that is.”

The answer may lie in the oceans. Although the atmosphere largely controls day-to-day weather, the slow-moving oceans hold so much more energy and heat that they dominate how the climate changes from year to year. Researchers suspect that much of this variability is tied to widespread cycles, such as the El Niño warming and La Niña cooling system in the eastern tropical Pacific. In theory, the fact that salt water circulates more slowly than air should also make the oceans a little easier to model.

Despite their faults, such efforts helped spark a wave of research among modellers who are hungry for ways to test and improve their calculations. The global climate-modelling groups that took part in the IPCC’s experiments invested a substantial portion of their modelling time to produce the first systematic predictions of how the global climate will evolve in the coming years. These models predict cooler temperatures: on average 15% less warming over the next few decades compared with standard climate projections.

To determine whether these projections are likely to hold, the groups ran the usual test of seeing how well their models performed when hindcasting, or predicting the past. The teams plugged in all of the observational data and ran decadal climate predictions at least every five years beginning in 1960, comparing the resulting hindcasts with the actual climate as well as with standard climate models. In one such analysis, Doblas-Reyes and his colleagues say that their model anticipated the slowdown in global warming up to five years in advance. Their paper also bolstered the theory that the deep oceans, notably the Atlantic and tropical Pacific, had stalled atmospheric warming by absorbing much of the heat being trapped by rising concentrations of greenhouse gases in the air.
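The verification step described above amounts to scoring hindcasts against observations. A minimal sketch of one common skill metric, root-mean-square error, is below; the anomaly values are hypothetical numbers for illustration, not data from any of the cited studies:

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error between a hindcast and observations."""
    assert len(predicted) == len(observed)
    return math.sqrt(
        sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted)
    )

# Hypothetical decadal-mean temperature anomalies (degrees C). An
# initialized hindcast is judged to add skill if it beats a simple
# baseline (here, a flat climatology) on metrics like this one.
observed = [0.12, 0.18, 0.22, 0.25, 0.24]
hindcast = [0.10, 0.20, 0.25, 0.24, 0.26]
baseline = [0.15, 0.15, 0.15, 0.15, 0.15]
```

In practice the modelling groups also use anomaly correlation and ensemble-spread diagnostics, but the lower-RMSE-than-baseline test is the basic idea.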

——–

Lost heat: why has the warming slowed?

It is one of the biggest mysteries in climate science: humans are pumping more greenhouse gases into the atmosphere today than ever before, yet global temperatures have not risen much in more than a decade. That trend does not undermine the idea that greenhouse gases will eventually push global temperatures into uncharted territory, but it does have scientists puzzled.

One partial explanation is natural variation: temperatures are expected to plateau occasionally even during a warming climate. And the world remains a very warm place. The ten hottest years on record have all occurred since 1998.

Yet with the stalled warming now approaching its 15th year, researchers are seeking some deeper explanation. “The heat must be going somewhere,” says Ed Hawkins, a climate scientist at the University of Reading, UK. “The question is where.”

One likely culprit is the oceans, which already absorb most of the heat. The latest research suggests that more heat than expected could be going into the deep oceans, below 700 metres. Another possibility that scientists have investigated is whether volcanic ash from minor eruptions and pollution from the industrialization of China and other countries are reflecting more of the Sun’s energy back into space. Still another is the prolonged lull in solar activity early in the millennium, which might decrease the amount of energy hitting Earth.

But scientists cannot yet fully explain the recent trends, and the larger question is whether the lack of warming today portends less warming in the future.

Michael Ring and his colleagues at the University of Illinois at Urbana-Champaign argue that Earth might in fact be less sensitive to greenhouse gases than previously believed. Whereas the Intergovernmental Panel on Climate Change estimates that doubling atmospheric carbon dioxide levels would ultimately increase global temperatures by 2–4.5 °C, with a best estimate of 3 °C, the Illinois group says that the rise is more likely to be between 1.5 °C and 2 °C.

Other researchers argue the opposite, and the issue remains unsettled. Besides, the continuing climb in global emissions means that a lower climate sensitivity would cause only a slight delay in global warming, says Alexander Otto, a climate policy researcher at the University of Oxford, UK. “The impacts we were expecting in 2050 would happen a decade later,” he says. “There is certainly no reason for complacency.”

————


“We do see that there are some improvements,” says Lisa Goddard, a climate scientist at Columbia University in New York who is heading a systematic analysis and comparison of the predictions from the IPCC models. Many models, for instance, captured a sudden warming of sea surface temperatures in the North Atlantic that began around 1995. “They all predict the shift beautifully,” Goddard says. “Unfortunately, from what I hear, different models are doing it for different reasons.”

JC comments: You may recall a previous Climate Etc. post, CMIP5 decadal hindcasts, which described this modeling effort and provided the first multi-model verification analysis. The initialized decadal simulations starkly point out the deficiencies that climate models have in simulating natural internal variability.

While I think this focus on decadal simulations is very important for illuminating the deficiencies of climate models and developing insights for their improvement, I suspect that it will be a long time before climate models can do a better job on decadal timescales than statistical models, based either on persistence or climate dynamics.

Finally, I would like to take issue with this statement by Alexander Otto:

Besides, the continuing climb in global emissions means that a lower climate sensitivity would cause only a slight delay in global warming, says Alexander Otto, a climate policy researcher at the University of Oxford, UK. “The impacts we were expecting in 2050 would happen a decade later,” he says. “There is certainly no reason for complacency.”

So, cutting the climate sensitivity in half means the impacts we are expecting in 2050 would happen a decade later? Huh?
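Otto’s claim is easy to stress-test with a toy calculation. The sketch below uses the standard logarithmic forcing relation with equilibrium sensitivity and a constant fractional CO2 growth rate; the start year, concentrations, and growth rate are illustrative assumptions, not sourced figures, and Otto’s own argument rests on transient response under fast-rising emissions, which this deliberately crude model ignores:

```python
import math

def equilibrium_warming(conc_ppm, sensitivity, baseline_ppm=280.0):
    """Equilibrium warming for a given CO2 level, using the standard
    logarithmic forcing relation dT = S * log2(C / C0)."""
    return sensitivity * math.log2(conc_ppm / baseline_ppm)

def year_threshold_crossed(sensitivity, threshold=2.0,
                           start_year=2013, start_ppm=400.0,
                           growth_per_year=0.005):
    """First year the equilibrium warming exceeds `threshold`,
    assuming CO2 grows at a fixed fractional rate (illustrative)."""
    conc, year = start_ppm, start_year
    while equilibrium_warming(conc, sensitivity) < threshold:
        conc *= 1.0 + growth_per_year
        year += 1
    return year

# Compare the 2 C crossing year for a 3.0 C vs a 1.5 C sensitivity.
delay = year_threshold_crossed(1.5) - year_threshold_crossed(3.0)
```

Under these toy assumptions, halving the equilibrium sensitivity pushes the 2 °C crossing back by far more than a decade, which is the intuition behind the “Huh?” above; getting a delay as short as Otto’s requires the transient-response framing and steeper emissions growth.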

‘Just look at the Actual Data. There is more than enough data to understand what is really happening.’

So if this is the case, you can provide us with all the many sources that have the ‘actual data’ you say exists that further corroborates your assertions? I didn’t see any data in the link provided. I’m looking to you as the expert here, you write extensively about your theory so you must be getting this data from many sources to back it up… right? The link you provide talks a lot about ice loss still being the direction the world is going which seems contrary to what you assert. Did you miss that part?

I have been around long enough to see through Climate Alchemy’s BS and to develop my own ideas based on sound physics – Climate Alchemy makes many elementary mistakes but perhaps the worst of all is to assume that the Spectral Distribution of OLR is invariant.

It varies to keep OLR = SW in. Make this simple change in perspective and there can be no CO2-AGW!

I’ll leave the rest of this farrago of half-baked junior school physics designed to fool the public and the dim-witted to swing in the breeze of cold facts, scientific wind-chimes, the epitaph to Hansenkoism.

For those of a scientific bent, realise that the Earth’s purpose is to increase radiation entropy to Space. The rest is simple thermodynamics.

“It is one of the biggest mysteries in climate science: humans are pumping more greenhouse gases into the atmosphere today than ever before, yet global temperatures have not risen much in more than a decade. That trend does not undermine the idea that greenhouse gases will eventually push global temperatures into uncharted territory, but it does have scientists puzzled.”

Well that is just dumb as a box of rocks. Of course the failure of the trend to match model predictions “undermines” CAGW. It doesn’t disprove it, but it definitely undermines it. If it didn’t, we wouldn’t have such a frenzied rush to figure out why the models are so wrong.

“Where has the heat gone” is an explicit admission that the consensus does not understand the climate sufficiently to model it or make accurate predictions. Which is a shock only to themselves.

Let me make my own prediction now. We are going to get a plethora of short term, 5 to 10 year predictions over a whole range of temperatures. Then, in 5 or 10 years, when one of those forecasts comes close to reported temps by sheer dumb chance, there will be a spate of learned papers saying “See? We now have accurate climate models.”

They got the first part about right. 2004 through 2009 saw a negative trend as natural variation swamped the AGW signal, which is what they predicted. After 2009, the 2004 to 2011 trend is positive. 2010 was a very warm year, record or near record. So they looked good until then.

The Smith et al. study I suppose you are referring to was published in 2007:

“The dramatic warming predicted after 2008 has yet to arrive.”
“It’s fair to say that the real world warmed even less than our forecast suggested,” [modeller] Smith says. “We don’t really understand at the moment why that is.”

Part of the reason is intense and frequent La Niña for 20 to 40 years.

Unlike El Niño and La Niña, which may occur every 3 to 7 years and last from 6 to 18 months, the PDO can remain in the same phase for 20 to 30 years. The shift in the PDO can have significant implications for global climate, affecting Pacific and Atlantic hurricane activity, droughts and flooding around the Pacific basin, the productivity of marine ecosystems, and global land temperature patterns. “This multi-year Pacific Decadal Oscillation ‘cool’ trend can intensify La Niña or diminish El Niño impacts around the Pacific basin,” said Bill Patzert, an oceanographer and climatologist at NASA’s Jet Propulsion Laboratory, Pasadena, Calif. “The persistence of this large-scale pattern [in 2008] tells us there is much more than an isolated La Niña occurring in the Pacific Ocean.”

Natural, large-scale climate patterns like the PDO and El Niño-La Niña are superimposed on global warming caused by increasing concentrations of greenhouse gases and landscape changes like deforestation. According to Josh Willis, JPL oceanographer and climate scientist, “These natural climate phenomena can sometimes hide global warming caused by human activities. Or they can have the opposite effect of accentuating it.” http://earthobservatory.nasa.gov/IOTD/view.php?id=8703

Perhaps the slower rate of temp rise means that humans can easily adapt to the changes over time and there is no need to undertake vast changes in the world economy. Perhaps it is best to allow the over 3 billion people worldwide to get electricity as cost efficiently as possible.

What specifically is your largest fear regarding more CO2? Is it sea level rise? That certainly is NOT happening at an alarming rate.

You don’t need to have your own estimate to point out a stupid statement.

You do need at least a modicum of common sense (and perhaps some basic arithmetic skills) to realize that claiming a sensitivity that is half the current estimate will only result in a decade-long pause in projected warming is the stuff of morons.

My best estimate is that the finite nature of fossil fuel reserves/resources will limit future CO2 emissions to something approximating the IPCC RCP4.5 scenario over the next 100 years or so, but with a concentration peak occurring around 2150 driven by ever-increasing fossil fuel costs. “Cutting sensitivity in half” (to about 1.6 C) will then lead to a peak temperature increase of around 1 C in the 2150 time frame, followed by a slow decline thereafter. In contrast, the average CMIP5 model predicts an increase of about 1.5 C from now to 2100, with the temperature still slowly increasing at that point. This difference in sensitivity is (should the world prove to be rational about it) likely to be very significant as it concerns optimal emission-related policy options.

I agree Judith. Seems like we would have an extra 50 years at least. Not sure why 15 years of flat temps would only cause a ten-year delay even if the sensitivity was not cut in half. But if the heat goes into the deep ocean more efficiently and is only raising it 0.001 degrees or so thus far, would this not give us 200 or 1,000 years more?

If the sensitivity to doubling is 1.5 C and we have seen half of this on our way to 560 ppm, then it would have to double again to 1120 ppm to see another 1.5 C. Now that some estimates are saying the sensitivity may be less and the change may only be a few degrees, some are saying we really need to keep temps from going up more than 1C instead of the 2C they have been saying for so long. And now Hansen is saying if we pursue fracking, etc., that we could go way over 1,000 ppm. They have to keep raising the scare bar, it seems.
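The doubling arithmetic in this comment drops straight out of the logarithmic forcing relation; a minimal check (using the comment’s 1.5 °C-per-doubling figure as an input, not an endorsed estimate):

```python
import math

def equilibrium_warming(conc_ppm, sensitivity=1.5, baseline_ppm=280.0):
    # Logarithmic forcing: each doubling of CO2 contributes the same
    # equilibrium warming, namely `sensitivity` degrees C.
    return sensitivity * math.log2(conc_ppm / baseline_ppm)
```

With a 280 ppm baseline, 560 ppm is one doubling (1.5 °C here), and reaching another 1.5 °C does indeed require doubling again to 1120 ppm, exactly as the comment says.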

As climate models reflect the biases and assumptions of the modelers, would it not be simpler and so much cheaper just to ask the modelers what their best guess temperatures will be in 5 years. Think of all the computer time not spent on climate modeling. Instead, use the computer time to understand the weather, re: the previous post.


The Paul Ehrlich Population Bomb Prize for the most prognostic bombast, awarded to the modelers who failed to guess the correct 5-year temperature. Then everyone will go home a winner, just like 6-year-olds’ soccer teams.

The Ehrlich Award, awarded every year to the most outrageously wrong environmental prediction. Or even have a few categories: most dire prediction for the next 5 years, and whatnot. Add classic predictions (award the first prize to Ehrlich himself).

When I was at Australia’s Office of the Economic Planning Advisory Council, we once asked three leading modellers for medium-term economic forecasts (why three? I think that there was some politics involved; the ALP government was matey with one of the modellers, whose work was not generally well regarded). We got wildly different forecasts, and the reasons for the differences were not clear. When we got the three modellers in a room, the differences turned out to be entirely due to the different assumptions they used. So the output told us largely how the modellers’ assumptions varied; they gave us a useful forecast only to the extent that we accepted particular assumptions and the forecasting techniques applied to them.

Asking them for their “best guess” would have saved a lot of time and expenditure and would probably have been just as useful; which is to say, not very. Economic forecasting is generally very poor, in part because it has little or no capacity to allow for the unforeseen events which always occur, and because it is difficult to forecast the pace at which various economic adjustments occur.

But at least the modellers had a coherent framework based on evidence-backed theory, and good data to input, so they probably had more chance of successful prediction than climate modellers.

In general, when I was involved in economic modelling, getting the assumptions right was critical; the rest was fairly mechanical. Equally, in looking at model outputs, understanding the assumptions was the critical first step. Repeated criticisms of climate models at CE have been that the modellers don’t understand the processes they seek to model and ignore possible causal variables which don’t fit their prior assumptions. Moving to shorter-term forecasts from long-range predictions will require a much deeper understanding of the processes involved and probably data at a much finer level than has been available.

One last point: I’m not a modeller, but I’ve always had a good feel for figures. I often picked out serious flaws in model outputs which the learned professors presenting them had not seen, or saw important relationships or conclusions which they had not picked up. I suspect that there are parallels in the climate and weather areas, whereby some not so close to the models can see flaws/findings which the modellers miss.

(Pretty tired, I don’t know if this is helpful but it’s nice to write something after a long walk.)

I’m still curious as to why we refer to climate sensitivity as a single value. Isn’t it much more likely that sensitivity actually changes along with other conditions? It would not take much to convince me that sensitivity to a doubling of concentrations of CO2 well-mixed in the atmosphere is different today than what it was at some point in the past and what it will be at some point in the future.

Yes, traditionally, the feedback term is 1/f. It certainly is 1/f(time, temp), but the Box motto (“all models are wrong, some are useful”) plus Occam made a constant feedback term a useful running hypothesis.

I would guess feedback may end up being 3.0+ multiplier but over the multi-century timescale, where ice-albedo and ocean circulations have enough time to dominate decadal atmospheric variation.

Over the short-run, like these decadal forecasts described, there may be no actual sensitivity to a small forcing, just a change in initial conditions, applied to a chaotic system => not predictable in magnitude or even directionally.
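The 1/f shorthand in this exchange is the textbook feedback-gain picture; a minimal sketch is below. The 1.2 °C no-feedback response and f = 0.6 are illustrative round numbers, not values asserted anywhere in the thread:

```python
def amplified_response(delta_t0=1.2, f=0.6):
    """Textbook feedback amplification: equilibrium response
    dT = dT0 / (1 - f), meaningful only for f < 1.
    delta_t0 ~ 1.2 C is the oft-quoted no-feedback (Planck)
    response to doubled CO2; f lumps together water vapour,
    lapse-rate, cloud and albedo feedbacks."""
    if f >= 1.0:
        raise ValueError("f >= 1 implies a runaway response")
    return delta_t0 / (1.0 - f)
```

With these round numbers the multiplier is 2.5, giving about 3 °C per doubling; making f a function of time and temperature, as the comment suggests, is precisely what would turn this constant into a moving target.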

Judith and others have pointed out that the value may very well not behave in a linear fashion. However, over the period of interest, the next 100 years, you can assume that it’s linear and calculate a number. Whether that assumption will hold is a different question.

“With climate shifts every few decades – it is a very poor assumption. I know – let’s assume it won’t”

Wrong. You can’t just assume it won’t. You’d have to suggest something better than the linear assumption. Merely pointing out, as Judith and others do, that it might not be linear is no solace. You actually have to suggest something testable. Otherwise you are just playing sceptic rather than using scepticism as a methodological tool.

Simple, Tony. You assume it’s linear because then it’s amenable to analysis.
You always have to assume SOMETHING. Assuming it’s linear gives you a starting point that others can then use to suggest BETTER approaches.
Playing the game of skepticism, what if it’s UNICORNS, is fine in the philosophy class, but this isn’t philosophy. So, you assume it’s linear. You calculate accordingly. When somebody comes along with a better characterization, then you can compare them. When somebody points out that it might not be linear, well, true, it might be unicorns. So, there are countless examples of people solving problems by first assuming that it’s linear and then solving. Of course we KNOW that it cannot be linear over the entire span of time, but that doesn’t stop us from calculating an answer under the assumption of linearity. For example, when you calculate the “slope” of a set of data points you assume that the underlying process is linear. This is no different.

He assumes whatever he needs to support his preconceived closely held narrative. A committed lukewarmer needs a constant, predictable climate sensitivity number at least long enough to get scary close to the dreaded 2C temperature rise before end-of-century. Mosher will tell you he changed his mind one time on the matter of global warming (a reflection of his father/hero figure R. Muller undoubtedly). Read between the lines to hear him say he’ll be damned if he’s going to change it again. His opinions are now writ in granite and as predictable as the day is long. Just keep in mind he’s a BA in english/philosophy not a scientist or engineer. It shows. His arguments are more semantic and epistemological than anything else.

“steven | July 12, 2013 at 5:31 am |
I agree you always have to assume something. Assuming that climate sensitivity is a variable and that you aren’t sure what it is currently makes for a perfectly sound assumption.”

Unfortunately this is a perfectly reasonable epistemic assumption, but it’s operationally useless. You can assume that if you want to get nowhere, but if you are interested in understanding the world and producing operationally effective knowledge, then the assumption of “I don’t know” is a non-starter. In short, “I don’t know” is a useless assumption. It’s fine for the philosophy club or blog debates, but useless in science and practical knowing.

Steven, I don’t know is what you say when you don’t know. It isn’t useless. It is saying we need to study the issue before we will know. Your method says I don’t know but that is useless so I’ll make something up. Worse than an honest I don’t know answer by far.

‘Your method says I don’t know but that is useless so I’ll make something up. Worse than an honest I don’t know answer by far.’

steven, that is a very jaded way of looking at how you begin an investigation into what you don’t know. Assuming linearity gives you something to try and work with… it’s easy, and often it works for short x-axis ranges, whatever the independent variable ‘x’ is. So why assume that by using this technique, a technique used in science time and time again to get approximations to ideas, it is a worse-than-honest way of learning what we don’t know? This speaks more to your skewed way of thinking than it does about why it is a valid starting point, IMO.

You gotta start somewhere and make improvements from there. Linear is a start, I don’t think I am seeing anyone say we know linear is factually true. How is one to study the issue if you can’t start out with a few assumptions to get started? Your approach is doomed to result in nothing more than throwing up your hands and walking away because ‘we don’t know’.

No, my answer does not mean you throw up your hands and walk away. My answer says you study the issue until you can make an educated guess as to where to start. I don’t have a problem with a short-range linearity assumption in climate sensitivity. Short term from what? Paleo calculations? That isn’t short term. Derive climate sensitivity to the best of our ability as it is today and try to determine error bars. That would be the logical course of action. Just because I say your way is wrong doesn’t mean there just isn’t a right way to start. It sort of points to your skewed way of thinking actually.

steven, I have been over this time and again with the warmists. They believe that if physics cannot say what happens when you add CO2 to the atmosphere, we must never say so. We must pretend that there is a way of proving the hypothesis of CASGW, because that is the way we can save the world from over heating.

I am afraid you are correct, but, like myself before you, you are wasting your time.

steven, when solving problems, don’t we start simple and go from there? Linear is as simple as it gets. Assume simple… learn about simple… see where it breaks down when compared with observations… make adjustments that make the theory less simple but maybe more accurate, and try again. To get away from linear, you have to have an idea of what non-linear would look like so you can model it. You have to have a testable function to use to calculate. What would you use in place of a linear function? What information do you use to pick whatever alternative function you think should be used? How do you arrive at your educated guess? What do you study to get there? Are you asking yourself these questions while studying the problem?

Also, again, I wonder what makes you think this approach is ‘making something up’ which is ‘worse than an I don’t know answer by far’? It is not ‘making something up’, it’s starting simple and working from there. Why assume linear is ‘made up’?

“No, my answer does not mean you throw up your hands and walk away. My answer says you study the issue until you can make an educated guess as to where to start. I don’t have a problem with a short-range linearity assumption in climate sensitivity. Short term from what? Paleo calculations? That isn’t short term. Derive climate sensitivity to the best of our ability as it is today and try to determine error bars. That would be the logical course of action. Just because I say your way is wrong doesn’t mean there just isn’t a right way to start. It sort of points to your skewed way of thinking actually.”

A) A linear assumption is an educated guess; it’s certainly not an uneducated guess. In short, people have studied the issue long enough to make this assumption. Further, THEY ARE JUST DOING CALCULATIONS. Assume linearity, here is the answer. You want to assume something else? GO RIGHT AHEAD, see where you get.

B) The length of term is set from the outset. If you’re interested in, say, 100 years, can you assume it’s linear? Of course you can, and you have good evidence to assume that.

C) We do have error bars; where have you been? The issue is this: even with a simplifying assumption of linearity over the next 100 years, we have large error bars. What’s the problem?

” We must pretend that there is a way of proving the hypothesis of CASGW, because that is the way we can save the world from over heating.”

I’m not interested in saving the world from anything. I don’t really care about “the world.” What’s more interesting is how you calculate, estimate, measure, and guess at things. If some politician wants to take these numbers and make decisions, then he gets to do that. I couldn’t give a rat’s ass.

It’s like a guy who works on weapons. I really never cared that the stuff I worked on would be used in wars.

‘We must pretend that there is a way of proving the hypothesis of CASGW, because that is the way we can save the world from over heating.’

Not everyone studying the problem is trying to prove CAGW, Jim, because first, there is no definition of CAGW; therefore CAGW is whatever you want it to be, therefore it can’t be measured, and by your standard if you can’t measure it then it is the same as zero. So there is no such thing as CAGW by your own logic. Second, many people are genuinely interested in what the possible effects of elevated CO2 in the atmosphere could mean with respect to climate. The physics of radiative heat transfer theory is solid. It indicates elevated CO2 should lead to more warming. We have observed warming. Don’t you think the curious-minded want to know if the two are related? Don’t you think it’s slightly prudent to look at the problem and try to determine its size? Those who are interested make their predictions using their methods. They put it out there, and eventually, over a long enough period of time, it will be clear whether their theory holds water. You have made your prediction as well, and over a long enough period we will see if you’re right. The only problem with your prediction is there is no theory behind it, and so nothing practical will be learned from it.

John, linear is obviously made up. Why would a world half covered in ice, with clouds of dust everywhere, have the same sensitivity as today’s world? Why would the several stages in between have the same sensitivities? Does desert have the same albedo as marsh? Tundra the same albedo as forest? Picture the glaciers melting and where they were compared to modern topography. We didn’t go straight from melting mountains of ice to what we have now. There were several stages along the way. Besides the albedo, you would also have changes in the circulations of the atmosphere and the ocean. Heat transport systems almost certainly changed several times between the LGM and now. To assume it is linear is worse than saying I don’t know, because you are saying this is the wrong answer and we know it, but it is the best we can do, so we will treat it as if it is the right answer. Calculate the sensitivity over the last few hundred years and we can argue about that. Don’t tell me we know the sensitivity can’t be below 1.5C because that’s what it would have taken to get us from the LGM to now. Tell me that, using the data we have, we have calculated that an average sensitivity of at least 1.5C would have been required, but we aren’t sure what the sensitivity is currently; well, unless you have it calculated and handy?

Steven Mosher and John Carpenter. John writes “Don’t you think the curious-minded want to know if the two are related? Don’t you think it’s slightly prudent to look at the problem and try to determine its size?”

I am retired so I don’t mind wasting my time again; I am a bear for punishment. I agree with both statements that John makes. What I object to is the IPCC claiming that they have solved the problem scientifically, when they have done nothing of the sort. The various reports by the warmists establishing the hypothesis of CAGW are completely satisfactory. CAGW is a viable hypothesis.

The problem comes when the warmists, led by the IPCC, claim that the physics presented proves that certain things about CAGW are “extremely likely” or “very likely”: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch9s9-7.html The physics, as the other Steven and I both point out, can do nothing of the sort.
This is the issue both John and Steven Mosher refuse to discuss.

Steven, I must have been lost. Where is the study that calculates the climate sensitivity for the last 100 years and accounts for the increase in heat transport in the Atlantic? How much warming did that cause? Don’t say we don’t know. That’s something I would say.

Can someone please explain what it means to say “climate sensitivity is linear”? Don’t you mean “constant”? And why would we expect it to be constant (as a function of time or temperature) when all sorts of non-linear feedbacks (as a function of temperature) are included in determining CS?

“you can assume that if you want to get nowhere, but if you are interested in understanding the world and producing operationally effective knowledge then the assumption of “i dont know” is a non starter. In short, “I dont know” is a useless Assumption. its fine for the philosophy club or blog debates, but useless in science and practical knowing.”

No Steven. If “I don’t know” is the truth, then it’s damn useful to know that you don’t know. Assuming something that is false is a recipe for disaster. If we don’t know whether a bridge can safely hold the traffic it must bear, then we don’t assume that it can, nor do we assume it cannot. We do what needs to be done to improve our knowledge, and in the meantime we don’t build the bridge and hope for the best. Obviously they don’t teach common sense to English/philosophy majors. You’re an argumentative dimwit.

“If “I don’t know” is the truth then it’s damn useful to know that you don’t know.”

The discussion is not about ‘I don’t know’ as being the truth or not. The discussion is what do you do to try to understand what you don’t know when confronted with ‘I don’t know’ as the only answer you have. To your bridge example, before human knowledge had the current engineering understanding of physics to build safe bridges, did we not build bridges? Did we look at the other side of the chasm and say, ‘I don’t know how to get there’? No, some took risks with ideas. Some envisioned a walkway (a bridge) to get there instead of a 10 mile hike around the chasm. The first bridges were probably pretty crude. Many probably failed. People died using them. We also learned from those failures and built better bridges. As our knowledge about bridges and physics grew, we learned to engineer a bridge on paper purely from calculations (a mathematical model) that when finally built into reality would hold the loads it was designed for without any doubt. We don’t build bridges today without doing that exercise first. In fact we pretty much model most every building, bridge, airplane, rocket, car, etc etc on a computer first before we build it now.

You can keep pretending the discussion is about ‘I don’t know’, therefore whatever someone else says isn’t true. But it’s really a problem-solving discussion. How do you go about getting to the other side of the chasm without having to walk 10 miles out of your way, because you’re not satisfied with ‘I don’t know’? It’s about how you start to build the first bridges of understanding climate and what sensitivity, if any, it has to a forcing like higher concentrations of CO2 in the atmosphere. It’s about starting with simple approximations first and seeing where they break down. Show me where it has been stated that climate sensitivity to CO2 forcing is, without a shadow of a doubt, linear. You will not find it. Linear is used to simplify an already complex problem. If you simply use the ‘I don’t know’ strategy, you will be staring across the chasm for a long time. There are and will be more failures in modeling climate and its sensitivity to a forcing due to CO2. We will learn from the failures and build better models, make better approximations in the attempt to get closer to the observable. Failure is no reason not to continue to pursue knowledge and understanding with the excuse of ‘I don’t know’.

“If “I don’t know” is the truth then it’s damn useful to know that you don’t know.”

The discussion is not about ‘I don’t know’ as being the truth or not.

————————————————————————————-

Yes, it is precisely about that. Nowhere did I say that “I don’t know” means throwing one’s hands up in the air and quitting. It’s all about acknowledging what you don’t know and doing something to improve your knowledge before risking blood and treasure. Read the history of the manned space program. It was a world of “I don’t know” at the outset, and we did a buttload of careful experiments to learn what we needed to know to land a man on the moon and return him safely to the earth. And even then it was still dicey, because there’s many a slip twixt the cup and the lip.

When one doesn’t know one doesn’t pretend to know. Not in engineering. Not when blood and treasure is on the line. That’s a recipe for disaster.

Lumping everything together on a common basis makes some sense when you are looking at an outcome, global temperature anomalies, which also lumps the things together. There is an NRC book on the uses and misuses of TOA forcings from somewhere in the early 2000s, motivated by Pielke Sr.’s POV on land-use changes. Attribution studies such as those Ben Santer does try to find ways in which the different forcings will manifest themselves differently. A simple example is the comparison of solar vs. GHG forcing. The former will manifest as stratospheric ozone changes in a way the latter will not.

Forcings also make sense if you are looking at small changes where everything is linear. An example of where it would not is going from a very low concentration of a GHG, where the response would be linear, to one where it is high, where the response would be logarithmic. A nice (in the science sense) exploration of that would be how high methane has to go before the linear response folds over.

…I suspect that it will be a long time before climate models can do a better job on decadal timescales than statistical models, based either on persistence or climate dynamics…

2018??? The forecast is for a solar minimum and an intensification of the current cool Pacific Decadal Variation mode, more snow from a more open Arctic, more melt and a slowing THC, a spread of ice sheets in the NH, ice albedo feedbacks and a drop of 10 degrees C over North America and Europe.

Still – the issue is not climate sensitivity but abrupt and nonlinear change.

Judith,
“So, cutting the climate sensitivity in half means the impacts we are expecting in 2050 would happen a decade later? Huh?”

You are missing the point. If there is another 40-50 years available before we are doomed (DOOMED!) to cook in our own juices, then people are going to (quite rationally) question the shrill urgency of demands for immediate public action and take a wait-and-see stance. That is what motivates Alexander Otto to make such a nonsensical statement. It is, unfortunately, mostly political advocacy masquerading as science… and always has been.

Lost heat: why has the warming slowed? It is one of the biggest mysteries in climate science

Well, it is not a mystery at all. The much touted “Greenhouse Effect” simply delays the flow of energy through the Sun/Earth/Atmosphere/Universe system by causing energy to make multiple trips through the system (alternating as IR radiation/Heat/IR radiation/Heat/etc, etc.). Since the energy is travelling at very close to the speed of light, this delay simply changes the “response time” of the gases in the atmosphere.

A thermal insulator on the other hand actually reduces the velocity (distance travelled per unit time, rate of forward progress, etc.) with respect to other materials in the system (i.e. the oceans).

The “missing heat” is actually travelling away from the Earth as a spherical IR wavefront that is currently X + d light years away. In this equation X represents the elapsed time since the sunlight arrived (100 years for sunlight from 1913) and “d” represents the slight delay from the multiple passes through the system from the “Greenhouse Effect”. “d” is of course a statistical distribution and likely averages about 10 milliseconds (the distance to the TOA divided by the speed of light, multiplied over many passes). Some photons will “bounce” back and forth between the atmosphere and the surface many times, while others will just leave the surface and travel to the “cold” void of space without bouncing “back” at all.

The “missing heat” is long gone, almost “with the wind”, but in fact at a much higher velocity.

For more information look up the “temporal response of an integrating sphere”, a unique optical device that exhibits what the “climate science” community would consider 100% radiative forcing.

I’ve made exactly the same statement about the missing heat being uniformly distributed in a sphere with a radius of about 50 light years with the earth at its center.

I’m willing to accept the possibility that the missing heat is sequestered in the deep ocean instead, or perhaps split between the ocean and rejected into space by more efficient cooling. The practical result of dilution into the ocean basin and rejection to space is the same. The law of entropy prohibits the energy in either place from re-concentrating itself on the earth’s surface i.e. once diluted it stays diluted.

In both cases we remain in the position of needing an explanation for the mechanism which, much to the surprise of the usual suspects, kept the heat from manifesting itself as a rise in global average temperature. Until we know the mechanism it’s difficult to forecast with a straight face what the mechanism will do in the future.

Personally I’m favoring an even split between ejection to space, heat of fusion in lost glacial and sea ice, and dilution into the deep ocean. That’s mostly because the ocean appears to be rising at 3 mm/year instead of a historical 2 mm/year and, aside from measurement error, it seems that requires warming and meltwater to explain. Ejection into space was predictable enough from Arctic sea ice melt: remove the insulating ice cover from the ocean and heat loss to space must increase. There are probably also some changes to clouds, which I believe are forming at a slightly (almost undetectably) higher altitude but at the same temperature; that causes a slight decrease in the environmental lapse rate while putting cloud tops a bit closer to the heat sink of outer space, giving the heat an easier path to escape while presenting a more restricted path back to the surface.

David – maybe you know where this data might be. UC Boulder applies all kinds of corrections to the sea level rise. I can’t find data on their site that tells us only the change in sea level without the corrections. Do you know where such data might be?

Base data sets near top of page. Links to all papers describing methods.

I generally don’t quibble with sea level rise because it isn’t extraordinary.

In the previous interglacial sea level peaked some 9 meters higher than it has at any time in the Holocene. Moreover, it happened very early in the interglacial period. I think an approach to the maximum of the previous interglacial might be the herald for the ending of the Holocene IG. A warmer ocean with more surface area encourages massive snowfalls which can bury a continent and shrink the ocean surface area as it happens, causing a swing to the cold extreme, which lessens clouds and snow, allowing the process to reverse. Lather, rinse, repeat. Probably some perfect storms involving volcanoes, solar minima/maxima, and orbital parameters conspire to set the exact timing of transitions. The Holocene interglacial has lasted longer than average already. Persistence of the Greenland ice cap might have something to do with the longevity.

it appears that the short term increase in the rate of sea level rise will go away in the next measurements

——————————————————–

It’s a remarkably stable number around its trend line. An ocean with an average depth of 4000 meters hasn’t deviated in exact depth by more than a couple millimeters from a +3.3mm/year trend line since the radar sats have been flying.

IMO this is arguably the most reliable measure we have of the energy budget, especially for constraining other parameters. At 3.3 mm/year we have about 3000 years before sea level in this interglacial reaches the peak attained in the prior interglacial.
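The “about 3000 years” figure is easy to verify against the ~9 m gap to the previous interglacial peak mentioned earlier in the thread; this is just the quoted numbers divided out:

```python
# Checking the commenter's "about 3000 years": time for sea level to
# close the ~9 m gap to the previous interglacial peak at 3.3 mm/yr.
# Both numbers come from the comments above.

GAP_M = 9.0             # gap to prior interglacial peak, metres
RATE_M_PER_YR = 3.3e-3  # current rise rate, metres per year

years = GAP_M / RATE_M_PER_YR
print(f"{years:.0f} years")  # roughly 2700, i.e. "about 3000"
```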

A couple more centuries at that rate won’t mean crap. We may or may not have enough economically recoverable fossil energy to last that long. It’s imperative that we find a replacement. But it isn’t a climate imperative. Abundant affordable energy today is as important as food and water for billions of people in the modern world.

Your analysis, while interesting, is flawed. As energy travels through the Earth system, from one form to another and one sphere (atmosphere, hydrosphere, lithosphere, and cryosphere) to another, at certain times it is traveling at the speed of light, and at certain times it is barely moving at all in comparison (think of a raindrop falling). Increasing GH gases have a cumulative effect in that, while they are increasing, they allow more net energy to accumulate in the system. It is not lost to space. Natural variability and the internal dynamics of the system will determine at any given time where certain percentages of that accumulating energy are located, but you can be sure it remains in the system.

There is no mystery of the missing heat. How do we know the heat is here on earth and not light-years away in space? Satellite observations from CERES and GERB2 measure 5.5 W/m^2 energy imbalance at TOA. There’s net inflow of radiant energy from space. Where on earth is the heat? In the ocean below 700 m. How do we know? From measurements of sea level. It is rising at 3.3 mm/yr. This is due to thermal expansion of seawater. How much heat is required to expand the sea by that much? Basic physics calculation gives 5.2 W/m^2. Pretty close to 5.5 measured by the satellites. They matched. Mystery solved.
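The “basic physics calculation” gestured at above can be sketched in a few lines, with a big caveat: the answer hinges almost entirely on the assumed thermal-expansion coefficient of seawater, which is several times smaller for cold deep water than for warm surface water. The constants below, especially ALPHA, are illustrative assumptions; a deep-water value near 8×10⁻⁵ per kelvin is what it takes to land near the comment’s 5.2 W/m².

```python
# Back-of-envelope version of the thermal-expansion argument: heat flux
# needed to expand the ocean by 3.3 mm/yr. ALPHA is an assumed value for
# cold deep water; warm surface water is closer to 2.5e-4 /K.

RHO = 1025.0            # seawater density, kg/m^3
CP = 3990.0             # specific heat of seawater, J/(kg K)
ALPHA = 8e-5            # assumed thermal expansion coefficient, 1/K
RISE = 3.3e-3           # sea-level rise, m/yr
SECONDS_PER_YEAR = 3.156e7

# A layer of depth H warming by dT expands by dh = ALPHA * H * dT while
# absorbing Q = RHO * CP * H * dT per m^2. Eliminating H*dT gives
# Q = RHO * CP * dh / ALPHA, independent of the layer depth.
energy_per_m2_per_year = RHO * CP * RISE / ALPHA
flux = energy_per_m2_per_year / SECONDS_PER_YEAR  # W/m^2

print(f"implied flux: {flux:.1f} W/m^2")
```

With a warm surface-water coefficient of roughly 2.5×10⁻⁴ /K the same arithmetic gives under 2 W/m², which is part of why the figure is contested in the replies below.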

BTW why did warming slow down? Because we are measuring air temperature. The sea surface is radiating to the atmosphere, but the heat is in the subsurface, below 700 m. It may rise back to the atmosphere in the near future, or stay there for hundreds of years, or just dissipate like cigarette smoke disappearing after mixing with a large volume of air.

You misplaced a decimal point. The supposed imbalance at TOA is 0.5 W/m^2, not 5.5 W/m^2. And it isn’t measured; it’s modeled, using ARGO to correct satellite estimates. The margin of error for the satellites measuring total emission from earth is about 4 W/m^2, almost ten times the estimated imbalance.

The reason the satellite measurement is poor is that, unlike measuring the solar constant, earthshine isn’t a point source. The earth radiates in all directions, and a satellite can only measure what comes directly to it in a straight line. Ergo it must do a god-awful lot of interpolating and make assumptions about what a dynamic atmosphere is doing where it can’t take a direct measure.

It’s not a misplaced decimal. Your number is change in imbalance. My number is actual imbalance. See Dewitte 2008. It means 5.5 +/- 0.5 W/m^2. No interpolation and assumptions. It’s highly variable depending on night or day, land or sea, but it’s direct measurement and we take the average. The margin of error is not really an error in measurement but the variability of the thing being measured.

Certainly not a preponderance of the evidence, and all measurement data has its limitations, including those from gravity field models. There are a number of more recent papers, summarized by the following:

“the NOAA 2012 Sea Level Budget, which finds sea levels have risen at only 1.1–1.3 mm/yr over the past 7 years from 2005–2012 [less than 5 inches/century], and the paper of Chambers et al. finding ‘sea level has been rising on average by 1.7 mm/year over the last 110 years.’”

“From the IPCC FAR Chapter 5.5.2: Holgate and Woodworth (2004) estimated a rate of 1.7 ± 0.4 mm yr–1 sea level change averaged along the global coastline during the period 1948 to 2002, based on data from 177 stations divided into 13 regions. Church et al. (2004) (discussed further below) determined a global rise of 1.8 ± 0.3 mm yr–1 during 1950 to 2000, and Church and White (2006) determined a change of 1.7 ± 0.3 mm yr–1 for the 20th century.”

The other thing is that an as-yet-unknown mechanism for ocean heat to start at 2,300 feet below the surface does not have to be proposed. The missing heat is not missing at all. It never was to begin with, so it couldn’t be found.

First, it’s silly to deny sea level rise, because we can measure it at 3.3 mm/yr over the last 20 yrs. Second, it’s silly to propose that heat bypassed the upper 2,300 ft of the ocean. Heat flows from high to low temperature. The ocean surface is the warmest and the ocean floor is the coldest. No need to bypass everything in between.

“UK, Smith published a detailed prediction of how the climate would change over the better part of a decade. His team forecasted that global warming would stall briefly ”

To have predictive power his model would have to include details that explain the stall. The lack of an explanation would mean confidence in his model is no higher than in the IPCC’s. The latter have never explained CO2’s allegedly voracious appetite for heat. So Smith tried changing initial conditions; big deal. But common sense suggests modellers should choose a start far enough back that the system will have forgotten possible errors in the initial conditions.

“UK, Smith published a detailed prediction of how the climate would change over the better part of a decade. His team forecasted that global warming would stall briefly”

Briefly, but not 14 years!

“The answer may lie in the oceans.”

What an original thought! But does anyone know the average transport delay of the oceans? Remember it is not an inertial delay: it is more like a pure transport delay. One approach that I have used on my website, linked above, is to assume that the second period of global warming (1970 to 1998) was just the first period (1910 to 1940) having taken about 30 years to work through the oceans.

“Lost heat: why has the warming slowed? ”

One obvious solution lies in quantum mechanics, which predicts that temperatures of gases rise and fall in steps and stairs. That happens when a molecule captures or releases a photon carrying a large enough amount of energy. If enough molecules can do this in synchrony it is possible, especially in the early industrial revolution period ending in 1940, to produce the subsequent pause. Also remember that the CO2 molecule’s temperature can fall in the atmosphere to its lowest vibrational state, where its appetite for kinetic heat is only a little larger than that of O2 or N2.

A. Biggs;
Your quantum mechanical argument is pure nonsense. Once the photon is absorbed, within a few hundred picoseconds it is distributed into thermal equilibrium in the local thermal bath of O2 and N2.

When the CO2 molecule exits the tailpipe or chimney it is very hot and fully excited, so it rises in a plume of CO2, like a hot air balloon. As the plume rises it scatters and cools. Eventually the CO2 molecules fall back as they cool and lose excitation. When they reach an average of about 13 C, they will be at a low level of excitation and their heat absorption will only be comparable with that of N2 and O2. Hence the ‘pause’.

Alex,
At any temperature the ratio of the first vibrational excited state to the ground state of any vibrational mode is given explicitly by the Boltzmann distribution. For the doubly degenerate bending mode at 700 cm-1 (the only mode that is important for the greenhouse effect of CO2), at 40 C about 4% is vibrationally excited, while at -40 C it’s closer to 1%. The excited state can emit photons while the ground state cannot. It is this small % of excited CO2 molecules in the equilibrium thermal bath of the local atmosphere that can emit.
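Those percentages can be checked with a one-line Boltzmann factor, using the comment’s 700 cm⁻¹ figure (the CO2 bend is usually tabulated nearer 667 cm⁻¹) and ignoring the mode’s degeneracy, as the comment’s rough numbers appear to do:

```python
# Fraction of CO2 molecules with the ~700 cm^-1 bending mode in its
# first excited state, via the two-level Boltzmann factor exp(-E/kT).
# Degeneracy of the bend is ignored, matching the comment's rough figures.

import math

KB_CM = 0.695  # Boltzmann constant in cm^-1 per kelvin (hc units)
NU = 700.0     # bending-mode wavenumber from the comment, cm^-1

def excited_fraction(temp_c: float) -> float:
    """Boltzmann factor exp(-h*c*nu / (k*T)) relative to the ground state."""
    temp_k = temp_c + 273.15
    return math.exp(-NU / (KB_CM * temp_k))

for t in (40.0, -40.0):
    print(f"{t:+.0f} C: {100 * excited_fraction(t):.1f}% excited")
```

This reproduces roughly 4% at 40 C and a bit over 1% at -40 C, consistent with the comment.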

“We’re developing an additional tool that can tell us a lot more about the near-term future.”
All efforts to date tell us that they can’t do that.
“It’s fair to say that the real world warmed even less than our forecast suggested,” Smith says. “We don’t really understand at the moment why that is.”
Blind Freddie can see why that is: the models are useless. That’s what they STILL can’t understand.

Build something other than CO2 into the models, then test them, test them, test them – but before starting the tests, publish the success/failure criteria, and then publish everything about the tests. Don’t even think about publicly forecasting anything until the testing demonstrates a good level of reliability.

“some 16 teams ran an intensive series of decadal forecasting experiments ”
Doug Smith, write out 100 times “Computer models are not experiments”
Now, for all those who believe that “the missing heat” is lurking below this magic 700 m line, give me a mechanism for how it got there and how water at 4 C is going to:
1) get back to the surface
2) heat air that’s warmer than 4 C
Here’s an idea of what’s been happening: http://chiefio.wordpress.com/2013/07/10/why-land-air-temperature-is-exactly-wrong/
“When the sun has a quiet phase, as now, it makes much less blue and ultraviolet. When the sun is highly active, as it was in the 1980s and 1990s, it makes a lot more UV and blue light and less red and infrared. This changes where in the ocean that solar heat is deposited. When the sun is highly active, sunlight is absorbed more at depth, slowly warming a huge mass of ocean over decades. When the sun is quiet, much less is absorbed at depth, and more is absorbed in the surface. Less is stored at depth, more is moved as rain into the sky.”

A computer simulation is no more an experiment than calculating the sum of two numbers is an experiment. The only difference is there’s usually far less confidence in the outcome of the climate model calculation than a simple sum of numbers.

Of course they are experiments. An experiment is nothing more than a set of behaviors designed to improve your understanding. Now, there are various species of experiments. Don’t confuse lab experiments with other types of experiments.

Steve, they can only be experiments IF they test a testible hypothesis. However, when the models fail to match reality the modelers are prone to reject, and then alter, reality.
How you can defend the failure to reject models is beyond me. You should treat the models in the same way you treat drugs; would you risk your child’s life on a drug that failed to reproduce the desired outcome?

Our brand is chock full of the innocent victims of this computer haze.

Before digging in let us pray for the modelers fooled by their own biases, for the other climate scientists who trusted the modelers and the models, for the mass of other scientists who trusted that climate science operated approximately as their sciences did, for the public who thought science and policy were necessarily consenting adults, rather than the mewling and puking prodigals they are.

Now there’s a good question for Mosher. He can harmlessly bloviate until the cows come home. Even better get Willard in on it too. I’m not sure what, if any, kind of experience it will be to observe the two in an epistemological battle of wills but I know it’s an experience I’m going to avoid. The moving finger suggests; and, having suggested, Moves on.

Total power changes only 0.1 – 0.2% over a solar cycle but power in ultraviolet changes by 1 – 2%. Changes in UV power are balanced by an equal and opposite change in visible light power.

This is new information, obtained within just the past decade, and it may have profound effects in the stratosphere, especially with ozone. Trickle-down effects in the troposphere are known to happen with changes in ozone, but the extent and magnitude are controversial. Proxy studies attempting to get a longer history of UV power indicate that UV power has increased some 3% since the Maunder Solar Minimum. Accompanied by a 3% decrease in visible light, that’s a substantial shift, with possibly profound consequences for weather patterns.

I guess you could argue what Otto says if he previously thought the sensitivity was 2.5 C, at the lower end of the IPCC range, and is now 2 C in his study, so it takes 5 decades to warm as much as it would have in four, 2060 versus 2050.
I don’t think the Otto study made a correct estimate of the equilibrium sensitivity because of their linearity/uniformity assumption about how quickly the earth surface warms (Armour et al.).
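For what it’s worth, the arithmetic behind “a decade later” is just a ratio. The 2.5 C starting value and the linear scaling of years-to-threshold with sensitivity are assumptions for illustration, not anything Otto states:

```python
# Rough arithmetic behind the "decade later" reading: if warming toward
# a fixed threshold scales roughly linearly with sensitivity, cutting an
# assumed 2.5 C sensitivity to 2.0 C stretches the time to reach the
# threshold by the ratio 2.5/2.0 = 1.25.

def delayed_year(start_year: int, target_year: int,
                 old_sens: float, new_sens: float) -> float:
    """Stretch the years-to-threshold by the ratio of sensitivities."""
    years = target_year - start_year
    return start_year + years * (old_sens / new_sens)

# Four decades (2010 -> 2050) becomes five (2010 -> 2060):
print(delayed_year(2010, 2050, old_sens=2.5, new_sens=2.0))  # 2060.0
```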

Here is a running 30-year average global surface temperature, ending with the last 30 years marked at the mid-point (3 datasets): http://www.woodfortrees.org/plot/hadcrut4gl/mean:360/plot/gistemp/mean:360/plot/hadcrut3vgl/mean:360
You can see this is rising smoothly (still). This is the part that is climate change, and that curve is the one the climate scientists want to predict. There are real changes here. Will it rise another 2 or 3 degrees this century or just stop and reverse? This is the question.
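For anyone who wants to reproduce the smoothing rather than trust the plot, a 360-month running mean is only a few lines. The synthetic series below (an assumed trend plus noise) is a stand-in for the real HadCRUT or GISTEMP monthly anomaly data:

```python
# Minimal sketch of the 360-month (30-year) running mean behind the
# woodfortrees plot, applied to synthetic monthly anomalies so the
# example is self-contained.

import random

def running_mean(series, window):
    """Moving average; returns len(series) - window + 1 values."""
    out = []
    total = sum(series[:window])
    out.append(total / window)
    for i in range(window, len(series)):
        total += series[i] - series[i - window]
        out.append(total / window)
    return out

random.seed(0)
months = 113 * 12  # monthly values, 1900-2012
# Assumed 0.1 C/decade trend plus noise:
series = [0.01 * (i / 12) + random.gauss(0.0, 0.2) for i in range(months)]

smoothed = running_mean(series, 360)
print(f"{len(smoothed)} points; first {smoothed[0]:.2f}, last {smoothed[-1]:.2f}")
```

The smoothed curve rises steadily even though individual months are noisy, which is the point the comment makes about the 30-year average.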

Regional records like the Central England Temperature show a very similar last 100 years, with a step followed by a continuing rise, when the 30-year average is taken. They go back further and show that a previous warm episode got almost up to the 1940’s temperatures in the early 1700’s. What we have had since 1960 is unprecedented in that record which goes back to the 1600’s.

Consider why the pause doesn’t show up in the 30-year average. It is because prior to the pause was an anomalously fast rise from 1980-2000. Examine these graphs and you can see it. The pause just returns the rise rate to the average of 0.5 degrees per 30 years, so it is a correction period after an anomalous high in 1998.

““The heat must be going somewhere,” says Ed Hawkins, a climate scientist at the University of Reading, UK. “The question is where.” One likely culprit is the oceans, which already absorb most of the heat.”
———————————–
cul·prit
/ˈkəlprit/
Noun

A person responsible for a crime or other misdeed.
The cause of a problem or defect.

Synonyms
offender – delinquent – malefactor – criminal

They just can’t help themselves, can they? If their precious models are not working out, some evil malefactor, or “culprit”, (in this case, the naughty oceans) must be to blame.

This ranks with a recent briefing from a UK Minister about “unconventional oil and gas” in the remarkable pantheon of anthropomorphism spawned by Climate Science.

When this subject came up on WUWT, I posted the following.
@@@@@
From the Nature paper: “Smith says. ‘We don’t really understand at the moment why that is.’”
I find this to be a very interesting statement. The UK Met Office used the Smith et al. study as the basis for their prediction of future climate. Then, at Christmas 2012, they quietly changed the forecast, but gave no reason or analysis for this change. If Smith is right, and I suspect he is, and they don’t know why his study did not give the desired result, what is the basis for the Met Office believing the new forecast is any better than the old one?
@@@@@

It’s like the climate models. They are tuned to hindcast the past, so they look good there, but when it comes to the future, the forecast is cloudy. If these climate prognosticators were stock brokers, their clients would have put them in jail by now.

I’m afraid a lot of your colleagues, except perhaps Michael Ring fail to see the blindingly obvious. We here at the sharp end of bad policy driven by bad science won’t be forgiving when finally the funding bubble bursts. The only ‘mystery’ is why it is taking so long. I’m appalled at the sheer bloody-minded stupidity of academia.

This issue has a lot of inertia and you know what Newton says about that…it’s going to require a very large external force for a short time frame or a modest force for a longer time frame. I believe we are in the midst of the latter.

But not proved wrong, yet. ’Til then they SHOULD keep calling. Human DNA requires at least some degree of forward looking. It’s easy to look back at what was required and judge then. Not always so easy looking forward and being required to call out if you see danger.

“It’s fair to say that the real world warmed even less than our forecast suggested,” Smith says. “We don’t really understand at the moment why that is.”

The reason is the 0.2 deg C per decade warming from about 1975 to 2005 includes the warming phase of the multidecadal oscillation. When the multidecadal oscillation turns to its cooling phase, the trend returns to the long-term trend of about 0.1 deg C per decade warming.
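This trend-plus-oscillation picture is easy to make concrete. A minimal sketch follows, with an assumed 65-year period, a 1990 phase peak, and an amplitude chosen so the warming phase adds roughly 0.1 C per decade on top of the trend; none of these are fitted values:

```python
# Toy decomposition matching the comment: a steady ~0.1 C/decade trend
# plus a multidecadal oscillation whose warming phase adds roughly
# another 0.1 C/decade, giving ~0.2 C/decade from about 1975 to 2005.

import math

TREND = 0.01   # underlying trend, C per year (0.1 C per decade)
PERIOD = 65.0  # assumed oscillation period, years
# Amplitude chosen so the oscillation's steepest slope is ~0.01 C/yr:
AMP = 0.01 * PERIOD / (2 * math.pi)

def temp(year: float, phase_peak: float = 1990.0) -> float:
    """Trend plus sinusoid; the oscillation warms fastest at phase_peak."""
    return TREND * year + AMP * math.sin(
        2 * math.pi * (year - phase_peak) / PERIOD)

# Decadal warming rate near the steepest part of the warming phase:
rate = temp(1995.0) - temp(1985.0)  # C per decade over 1985-1995
print(f"peak-phase rate: {rate:.2f} C/decade")
```

During the cooling phase the same construction gives a rate near zero, i.e. a pause superimposed on the long-term trend.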

Our hostess writes “Finally, I would like to take issue with this statement by Alexander Otto:”

I am surprised that our hostess finds this statement surprising. It is not a scientific message; it is religious. It is like the author reciting the Nicene Creed in a Christian church. The empirical data is proving that CAGW is wrong. If the warmists don’t acknowledge this data they will lose ALL credibility. So they need to write papers showing that something is rotten in the state of Denmark. But in order for this paper to be published at all, it must recite the mandatory mantra that, despite all the “bad” news, CAGW is still true.

It is one of the biggest mysteries in climate science: humans are pumping more greenhouse gases into the atmosphere today than ever before, yet global temperatures have not risen much in more than a decade. That trend does not undermine the idea that greenhouse gases will eventually push global temperatures into uncharted territory, but it does have scientists puzzled.

One partial explanation is natural variation: temperatures are expected to plateau occasionally even during a warming climate. And the world remains a very warm place. The ten hottest years on record have all occurred since 1998.

Yet with the stalled warming now approaching its 15th year, researchers are seeking some deeper explanation. “The heat must be going somewhere,” says Ed Hawkins, a climate scientist at the University of Reading, UK. “The question is where.”

But scientists cannot yet fully explain the recent trends, and the larger question is whether the lack of warming today portends less warming in the future.

Michael Ring and his colleagues at the University of Illinois at Urbana-Champaign argue that Earth might in fact be less sensitive to greenhouse gases than previously believed. Whereas the Intergovernmental Panel on Climate Change estimates that doubling atmospheric carbon dioxide levels would ultimately increase global temperatures by 2–4.5 °C, with a best estimate of 3 °C, the Illinois group says that the rise is more likely to be between 1.5 °C and 2 °C.

Other researchers argue the opposite, and the issue remains unsettled. Besides, the continuing climb in global emissions means that a lower climate sensitivity would cause only a slight delay in global warming, says Alexander Otto, a climate policy researcher at the University of Oxford, UK. “The impacts we were expecting in 2050 would happen a decade later,” he says. “There is certainly no reason for complacency.”

A question.
The models must have a lot of good inputs, and many of the variables would differ between these models.
If one were to run them with the proviso that climate sensitivity to CO2 was neutral, i.e. remove the factor due to carbon sensitivity [and the associated forcings], would any of the models then reflect the actual temperature changes to date?
I am sure one or two would have a reasonable chance.
Any chance of this?
Anyone?
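The thought experiment above can at least be illustrated with a zero-dimensional energy-balance toy, where the CO2 term really is a single switch. Every number here is an assumption for illustration; in a real GCM there is no such knob, because forcing and feedbacks are emergent, which is why the experiment is harder than it sounds:

```python
import math

F_PER_DOUBLING = 3.7   # W/m^2 per CO2 doubling (common approximation)
LAMBDA = 1.25          # feedback parameter, W/m^2 per K (assumed)

def toy_response(co2_ppm, include_co2=True, natural_forcing=0.0, c0=280.0):
    """Equilibrium temperature change of a zero-dimensional energy-balance
    toy: dT = F / lambda. Setting include_co2=False is the commenter's
    'neutral sensitivity' experiment."""
    f_co2 = (F_PER_DOUBLING * math.log(co2_ppm / c0) / math.log(2.0)
             if include_co2 else 0.0)
    return (f_co2 + natural_forcing) / LAMBDA

with_co2 = toy_response(400)
without_co2 = toy_response(400, include_co2=False)
print(with_co2, without_co2)  # only the CO2 term differs between the runs
```

With the CO2 term off, whatever warming the toy produces has to come entirely from the `natural_forcing` input, which is exactly the comparison the question asks for.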

The forecast for 2018 is for scattered sunspots. Livingston hail and Penn gales will alternate in a generally stormy political climate; colliding fronts and streams of jets (Sharks?) will make travel hazardous locally.
============

A delightful post, as are many of the responses.
The nice thing about decadal forecasts is that they are falsifiable within a decade, unlike century long prognostications. One would think that with model tuning, they would also have a better chance at accuracy. That they have not simply shows how poor the models actually are.

Perhaps focussing on getting quarterly forecasts into the ballpark is a baby step to reasonable decadal forecasts, which will necessarily require accounting for the sort of short term natural variability TonyB has documented historically. Only then might some confidence in even longer prognostications begin to build.
As of now, the model credibility is absolutely shot thanks to the pause. But the IPCC AR5 SOD does not seem to have noticed. A predictable massive train wreck is coming soon.

‘A negative tendency of the predicted PDO phase in the coming decade will enhance the rising trend in surface air-temperature (SAT) over east Asia and over the KOE region, and suppress it along the west coasts of North and South America and over the equatorial Pacific. This suppression will contribute to a slowing down of the global-mean SAT rise.’ http://www.pnas.org/content/107/5/1833.full

There are very few initialized decadal forecasts I am aware of. They all seem fairly primitive. Smith et al. was wrong; Mochizuki et al. and Latif and Keenlyside are closer. At any rate, the discussion is about initialized models rather than longer runs of uninitialized models.

I wouldn’t think so.
Forcing needs energy, and energy equations are based on a kind of square law: mc^2, mv^2, RI^2, etc.
Thus I suggest using a square law in the forcing equation too, and as we can see here http://www.vukcevic.talktalk.net/SSN_NAP.htm
it neatly links solar activity to the N. Atlantic SST (AMO).
Oh yes, I forgot to say: CO2 is not the forcing agent, it is the Earth itself.

So you mean that the ‘matching function’ from TOA down to the surface has changed in some way that the system is not able to respond to in other ways (once whatever energy barrier that could prevent it is overcome, anyway)? That little glob of mercury is sure real easy to pick up.

No, I’m suggesting that defining climate sensitivity to changes in GH gases only in terms of tropospheric temperatures is far too narrow to get a good idea of what is happening with the entire Earth energy system as GH gases increase. Furthermore, the troposphere is the most subject to natural variability and has the lowest thermal inertia of all the components of the climate system, making it the most convenient, but not necessarily the best, choice for sensitivity to longer-term climate forcing such as we are getting from the multi-century human carbon volcano that continues to erupt with such increasing vigor.

‘This paper highlights how the emerging record of satellite observations from the Earth Observation System (EOS) and A-Train constellation are advancing our ability to more completely document and understand the underlying processes associated with variations in the Earth’s top-of-atmosphere (TOA) radiation budget. Large-scale TOA radiation changes during the past decade are observed to be within 0.5 Wm-2 per decade based upon comparisons between Clouds and the Earth’s Radiant Energy System (CERES) instruments aboard Terra and Aqua and other instruments. Tropical variations in emitted outgoing longwave (LW) radiation are found to closely track changes in the El Nino-Southern Oscillation (ENSO). During positive ENSO phase (El Nino), outgoing LW radiation increases, and decreases during the negative ENSO phase (La Nina)…

The top-of-atmosphere (TOA) Earth radiation budget (ERB) is determined from the difference between how much energy is absorbed and emitted by the planet. Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.’

I might note that we are talking about anomalies rather than absolutes, with ever-improving instrumentation, and that ARGO confirms rather dramatically the TOA radiant flux anomalies. Indeed, earlier ocean heat content (inferred from sea level rise) likewise confirms the ERBS results.

The significant changes in the satellite record are in SW, indeed with cooling in LW to the late 1990s. The ocean heat content seems to follow closely the TOA net flux, which has large natural fluctuations associated in recent times with cloud cover changes. The CO2 ‘volcano’ seems a relatively minor influence thus far.

——–
Depends on the metric of comparison you are using, doesn’t it? Since you are using the term “relatively”, you would have to compare it to some other multi-century event that has added 40% more CO2 to the atmosphere in such a geologically short time frame. Try as we might to find some natural forcing that has acted over a similar timeframe, it is hard to find one that is “relatively” the same. In short, we have no honest gauge to suggest whether the effects qualify as a “minor” influence or not. We are, as you well know, in uncharted territory.

All of the warming in the ARGO period is in SW. Hugely more so in the ERBS record where we get 2.1W/m2 warming in SW and 0.7W/m2 cooling in IR between the 80’s and 90’s. Warming from anthropogenic CO2 is minor in the record.

‘Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.’

You would have to provide evidence that shows SW at the ocean surface has increased to provide an average of at least 1 x 10^22 joules per year of additional energy to the ocean over the past 10 years, and then provide the mechanism whereby this energy has worked its way down to at least 2000 m. Get these two pieces of evidence for me and then I might be interested in what you aspire to articulate.

The increase in energy in the system was all in less reflected SW – the so called missing energy. Offset somewhat by a decline in TSI in the 11 year cycle. We know this closes the energy budget with ocean heat content because Kevin tells us so.

The mechanism for deep ocean warming is a balance between warm water buoyancy and eddy dissipation. It is assumed that this changed around the end of the millennium – but the earlier measurements to depth seem quite problematical.

Though you can post as many links as you want, if they all fail to show a consistently rising SW striking the ocean surface over the past several decades to match the rising heat content of the ocean, why insist on posting them? Do you think that somehow the data will magically show what it doesn’t simply by posting it? The facts are that cloud changes and SW changes do not explain the rising ocean heat content, and so you can’t find any links that would show that.

‘In summary, although there is independent evidence for decadal changes in TOA radiative fluxes over the last two decades, the evidence is equivocal. Changes in the planetary and tropical TOA radiative fluxes are consistent with independent global ocean heat-storage data, and are expected to be dominated by changes in cloud radiative forcing. To the extent that they are real, they may simply reflect natural low-frequency variability of the climate system.’ IPCC 3.4.4.1

“No, I’m suggesting that defining climate sensitivity to changes in GH gases only in terms of tropospheric temperatures is far too narrow to get a good idea of what is happening with the entire Earth energy system as GH gases increase. ”

So, you’ll agree with me that it’s time to pull the plug on this CO2 silliness until such time as we have teased out all of the other components of sensitivity, natural and anthropogenic, so we can finally get to a useful model. The CO2-plus-water-vapor-amplification model is now untenable and its doomsday predictions are null and void.

It should be said that some people don’t understand why the real-world troposphere warmed less than forecast, for certainly the “world”, as in the entire Earth system, has been adding energy quite steadily for many decades, including the last.

vukcevic: Forcing needs energy, energy equations are based on kind of a square law : mc^2, mv^2, RI^2 etc.
Thus I suggest using square law in the forcing equation too, and as we can see here

I am all in favor of exploring multiple functional forms: let a thousand models bloom. However, tangible heat flow is linear in the temperature difference, other things being equal, and energy radiation is proportional to T^4.
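The two scalings mentioned here can be put side by side, using the linear conduction law and the Stefan–Boltzmann T^4 law. Only textbook constants are involved; nothing is model-specific:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_flux(t_kelvin):
    """Blackbody emission, proportional to T^4."""
    return SIGMA * t_kelvin ** 4

def conducted_flux(k, delta_t):
    """Tangible (conductive) heat flow, linear in the temperature difference."""
    return k * delta_t

# Radiation's nonlinearity in action: 288 K (surface) and 255 K (effective
# emission temperature) differ by ~13% in T but ~63% in emitted flux.
print(radiated_flux(288.0))  # ~390 W/m^2
print(radiated_flux(255.0))  # ~240 W/m^2
```

The same temperature gap looks very different under the two laws, which is the point of insisting on the right functional form.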

Thanks for your comment. I am aware of T^4; however, in the graph http://www.vukcevic.talktalk.net/SSN_NAP.htm
the quoted ‘forcing’ formula relates to the assumed (tectonic) kinetic energy affecting the ocean currents, the most important regulator of the solar-energy absorb/discharge process controlling the climate’s long-term natural variability.

” The initialized decadal simulations starkly point out the deficiencies that climate models have in simulating natural internal variability.”

——

Rather, it should be said that:

The initialized decadal simulations starkly point out the deficiencies that climate models have in simulating natural internal variability THAT ACTUALLY OCCURS.

And that is precisely why the models are always going to be wrong, and should not be considered as forecasts but only as simulations that indicate known, quantifiable dynamics. From volcanic eruptions to ENSO to solar, these natural-variability factors are currently far from being captured in models, but these deterministic chaotic factors are not grounds for judging the models. What they should be judged on is the known, quantifiable dynamics that are in the models and how well they represent general longer-term trends, feedbacks, and teleconnections.

Models are not “bad” because they fail to account for natural variability; that’s simply why they go wrong. Models are bad when they get the dynamics wrong due to a lack of understanding of the system. Improving that understanding is what research is about and why the models are always evolving.

A model can fail to provide consistent results within an acceptable margin of error for a variety of reasons. When models fail to perform as desired, we call them “bad models”. It may mean the fundamental concept of the model was flawed, or that one or more of the variables was weighted incorrectly. It all comes down to understanding the details of why the model failed to match observed conditions.

It is a conclusion. Years 2005 through 2009 show natural variation swamping AGW, which is what they predicted. They predicted half of the years after 2009 would be record-hottest years. 2010 is the hottest/near-hottest year in the record.

How did they get it right? Nobody is saying. Maybe a WAG. I don’t know.

The question is, is it useful? You appear willing to throw it away without finding out, which is why I do not trust you at all.

Smith et al. have adjusted and moved on, just like any scientist would. This is, of course, seen as sinister in some way. Lol.

I do not throw out a model just because it does not provide the outputs I wanted. I work to understand and get the model to work well for its intended purpose, and only abandon it if I’m convinced the basic approach was wrong.

What I do not do is use the model until it works within the margin of error necessary for the intended purpose. That is not the practice in climate science when it comes to long-term modeling. In climate science, even when we can see that a model is performing terribly after a few years, it still gets considered in the ensemble of models being used for long-term policy decisions. It is the only field I can think of where this happens.

Again, Rob, we can have a great model in terms of accurate dynamics that is wrong because of natural variability, and we can have a bad model that is right because of dumb luck, in that errors in dynamics happen to match natural variability over some short time frame.

Your comment makes it sound like you believe natural variability is some external factor that impacts the system at unpredictable times. Natural variability is just the system working, and if a model does not accurately reproduce the output, it is a poor representation of the system.

Natural variability is far more than just the “system working”; it represents those unpredictable, chaotic and nonlinear factors that nudge the system this way or that over various time frames. To pull from an excellent resource on this:

“The natural variability of the climate system is the result of four factors:
■ mathematically, the climate system exhibits “chaotic” (i.e., complex and non-linear) behavior, which means that it has limited predictability;
■ important parts of the climate system exhibit oscillating behavior, e.g., the El Niño- Southern Oscillation (ENSO) cycle that repeats every 2-8 years in the tropical Pacific, and the North Atlantic Oscillation that has a cycle length of 60-80 years;
■ variability in solar intensity, a key natural driver of climate, which occurs in cycles varying in length from the familiar 11-year sunspot cycle to shifts in the Earth’s orbit that occur in cycles of 100,000 years; and
■ the random nature of volcanic eruptions, which are also a natural driver of climate.”

As to what models should be judged on: just how useful are models that can’t model large unquantifiable dynamics in the climate system? As for how well they represent general longer-term trends, at what point do we recognize they don’t appear to be doing all that well?

I am not anti-GCM’s. Modeling has its purposes. My objections have been two fold:

1) That most of the various claims for worry due to climate-change impacts are the result of relying on model outcomes, which you acknowledge as being simulations. As I’ve said before, I can run through an entire season’s worth of NFL games on my son’s Xbox and get a projected Super Bowl winner. I can even tweak and tune it by making modifications to various team rosters. But the bottom line is: would I bet a mortgage payment on the team the model projected as winning it all? I might be willing to bet your mortgage, but I sure as heck wouldn’t risk mine.

2) For all the money and resources spent on GCMs, exactly what have we gotten in return? I believe we have increased our level of knowledge and understanding of climate thanks to them. But what else? What concrete benefits have been derived from them? People argued the space program was a waste of money, but an incredible number of real benefits came out of it. Is it unfair to say that even if the only benefit the space program had provided was Tang, it would still represent greater value than what GCMs have produced? Personally, I think if we are going to spend billions of dollars on modelling, it should be on models which have a possibility of producing something of value, i.e. models which can provide reasonable short- to medium-term regional weather forecasts.

The models forecast more than temperature. If, as an example, you knew that rainfall would be lower (or higher) than it had been in the past, it could affect the types of crops that farmers plant. There are many examples.

May I ask something simple:
if CO2 is the major cause of temperature change (thermostatic CO2), should the value of the function Temperature Anomaly/log[CO2] be a constant over time, interrupted only by volcanic aerosol spikes?
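The question is directly checkable once you have an anomaly series and a CO2 series: compute ΔT / ln(C/C₀) and see whether it drifts. A sketch with made-up illustrative triples — not observations; substitute real Mauna Loa / GISTEMP values to answer the question properly:

```python
import math

# Hypothetical (year, CO2 ppm, temperature anomaly degC) triples,
# purely illustrative -- NOT observed data.
records = [(1970, 325.0, 0.00), (1990, 354.0, 0.25), (2010, 390.0, 0.55)]

base_year, c0, t0 = records[0]
# Ratio anomaly / ln(CO2 ratio) relative to the baseline year:
ratios = [(year, (t - t0) / math.log(c / c0)) for year, c, t in records[1:]]
for year, r in ratios:
    print(year, round(r, 2))
# If "thermostatic CO2" held exactly (volcanoes aside), these ratios
# would stay constant over time.
```

In this toy the two ratios come out nearly equal; the interesting test is whether real data behave the same way over the pause.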

I’m saying we need to look at all sources of energy flow to the atmosphere and how they vary over time to understand how the role of one forcing variable, namely CO2, should be interpreted in influencing that flow, if indeed the atmosphere, rather than the full Earth system, is what you want to focus on. For example, CO2 may influence both the rate of energy flow from atmosphere to space and the ocean-to-atmosphere energy flux, and these additionally have other natural variations and nonlinearities.

What do you mean by ‘vary over time’?
I have personally witnessed two total eclipses; in both cases, when the sun’s radiant flux was blocked by the moon, it cooled. Each day I note it begins to warm in the morning and begins to cool just after noon.
It is colder in the winter than in the summer. Personally, I would measure the incoming radiation year-round for a particular locale, take the two temperatures at the same flux, average them, and work out climate sensitivity, for locales worldwide, this way.

This is how the factors that are part of natural variability vary over time, from the smallest to the largest; none of them are predictable, and all of them can make any model rapidly wrong. Lorenz discovered this. What’s so hard to understand?

“It’s fair to say that the real world warmed even less than our forecast suggested,” Smith says. “We don’t really understand at the moment why that is.”

So naturally we discourage any large economic-change policies until we do understand things better?
Why can’t you say that?
So instead we get no pipelines, no coal and no clean coal, and a bunch of government do-gooders/criminals propping up useless green business models with other people’s money. The fact that you let that continue is NOT cool.

Bart R writes: “It’s only when your scales are … longer than three decades that you get much value from forecasts.”

I remember reading this nonsense many years ago from Gavin Schmidt, when I first got interested in CAGW; I could not believe someone would seriously write such a thing. It would be interesting to know what science Bart thinks proves that climate models can forecast accurately after 3 decades.

If thirty model runs with a GHE from CO2 all show warming of some kind under conditions like the present and the anticipated future, and if a dozen runs with no GHE from CO2 all show cooling, at every point and in every condition both past and future, and we never see the cooling but do see the warming, we can take that as some degree of evidence that there must be a GHE from CO2.

If every addition of new information used to refine the models continues to show this difference, we have no evidence to doubt the GHE hypothesis.

If someday we see a model run without a GHE that behaves at all closer to the real world, past or future, than the model runs with GHEs, we could dispense with the GHE hypothesis.

Likewise, if we could get the spectacularly large number of model runs it would take to discern among Bayesian priors for climate sensitivity, we could even suggest what sensitivity likeliest matches the real world.

Of course, if the climate sensitivity itself is multimodal, that’d be quite the feat of simulation.
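The Bayesian-prior idea can be made concrete with a toy discrete update. Every number below is an assumption for illustration, and the sketch deliberately conflates transient with equilibrium response, which a real analysis must not do:

```python
import math

sensitivities = [1.5, 2.0, 3.0, 4.5]       # candidate degC per doubling
prior = [0.25] * len(sensitivities)        # flat prior over the candidates

obs_warming, obs_sd = 0.8, 0.3             # hypothetical observation, degC

def predicted(s, c0=280.0, c=400.0):
    """Warming each candidate sensitivity predicts for 280 -> 400 ppm."""
    return s * math.log(c / c0) / math.log(2.0)

# Gaussian likelihood of the observation under each candidate:
likelihood = [math.exp(-0.5 * ((obs_warming - predicted(s)) / obs_sd) ** 2)
              for s in sensitivities]
unnorm = [p * l for p, l in zip(prior, likelihood)]
posterior = [u / sum(unnorm) for u in unnorm]
for s, p in zip(sensitivities, posterior):
    print(s, round(p, 3))
```

Even this toy shows why Bart’s “spectacularly large number of model runs” matters: a single noisy observation moves the posterior, but nowhere near enough to settle a multimodal sensitivity.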

Yes, typical climate forecasts are for at least 0.5 degrees in three decades. This is usually easy to see against the noise. Such a forecast in 1980 would have been verified by the temperature rise since then. A forecast of no change would have been falsified by now.
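Whether a trend of that size really is easy to see against the noise can be checked with a quick Monte Carlo; the 0.1 °C interannual noise level is an assumption chosen for illustration:

```python
import random

def ols_slope(ys):
    """Least-squares slope of ys against 0..n-1."""
    n = len(ys)
    xbar = (n - 1) / 2.0
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(ys))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

random.seed(42)
TREND = 0.5 / 30.0   # degC per year: 0.5 degC over three decades
NOISE = 0.1          # assumed interannual noise (std dev), degC

trials = 1000
positive = sum(
    ols_slope([TREND * t + random.gauss(0.0, NOISE) for t in range(30)]) > 0
    for trial in range(trials)
)
print(positive / trials)  # fraction of 30-year runs with a positive fitted trend
```

Over thirty years the fitted trend is essentially always positive; shrink the window to a decade and the signal sinks toward the noise, which is the whole difficulty of decadal verification.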

It’s only when your scales are shorter than a month or longer than three decades that you get much value from forecasts.

The truth of the matter is that global temperature forecasts of anything more than a few months are doubtful, and their predictive value decreases directly with the length of the projection period.

You are as bad as my master, the Grand Old Duke of York. He led us up to the top of the hill, then marched us down again. Which is exactly what you have done with promises of SERF three that don’t materialise.

I’d expect false information from our masters, but when fellow serfs promise things that don’t get delivered, why, I think to myself: would I be best sticking to my master, who has money and power, or the Serf Underground movement, which hasn’t got two groats to rub together and whose ‘power’ is the sound that a cat makes?

You give forecasts far too much credit. Also, I don’t believe you’re well enough acquainted with either the matter or truth to begin assertions as you have.

Likewise, the value of a meaningless prediction, once it is zero, isn’t going to fall any lower. Hardly anyone pays us to listen to their bad predictions.

Once a prediction’s reliability falls below the level we can distinguish it from random chance, it’s fairly useless for prediction.

But predictions that vary from random chance, that are really dramatically poor compared even to random chance, they give us something to sink our teeth into.

Predictions of climate based on models with no GHE, those are really, really wrong. Predictions of climate based on models with a CO2-induced GHE? Those are hard to distinguish from being about as wrong as random chance.

And that’s useful. It tells us we cannot explain climate without GHE, and more CO2 will lead to more GHE. Though what exactly that GHE will lead to on any given day, or in any given week or month or year, we cannot predict, except to say, “worse.”

Methane concentrations are increasing, but the rate of increase has slowed. It was more dramatic up to about 1998, moving from 1690 to 1770 nmol/mol, but then, leading up to 2006, reached a sort of plateau, and then moved up again to its present level of about 1820.

This is in major contrast to the Keeling Curve, where CO2 has followed one simple upward line without any complex features. But with CH4, it is 80 nmol/mol in 14 years, and only 50 in essentially the next 15. There have also been studies suggesting that methane release could cause temperature declines (burning of particles in the atmosphere causing negative feedback) and that the scenario of a dramatic change in climate is less likely on account of the nature of the gas itself.
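The growth-rate arithmetic in that comparison is easy to make explicit, using only the figures quoted above:

```python
# CH4 growth rates implied by the quoted figures (nmol/mol per year):
early = (1770 - 1690) / 14.0   # roughly the period up to 1998
late = (1820 - 1770) / 15.0    # roughly 1998 onward, plateau included
print(round(early, 2), round(late, 2))  # ~5.71 vs ~3.33
```

So the average growth rate in the later period is a bit over half the earlier one, which is the slowdown being described.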

The temperature record is also interesting, and this is something I cannot reconcile. From the NASA temperature anomaly, I took offhand decadal averages for decades ending 1989, 1994, etc., and compared them. The 1991 Pinatubo eruption accounts for declines during the early 1990s.
Decadal Change, decade ending 1979/89: +22
Decadal Change, decade ending 1984/94: +17
Decadal Change, decade ending 1989/99: +13
Decadal Change, decade ending 1994/04: +29
Decadal Change, decade ending 1999/09: +23
Also took an average for readings July 1993–June 2003 and July 2003–June 2013, and the difference is +20. This suggests a declining trend in the size of the increase.

HOWEVER, that is not all. February, 1994 was the last single-digit global reading, as the effects of Pinatubo subsided, and if I compare the average for 1994–2004 with JL03–JE13, then this change is remarkably less: +14.

Readings in the immediate future will need to show an even greater increase in temperature to keep pace. I have a book from 1992 which mentions the IPCC’s projected decadal increase of +0.30 °C. This is not happening. What is also thoroughly elusive is that the first decade of the century did not “compensate” (have higher readings) for the lag caused by Pinatubo in the 1990s average.
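The decade-on-decade comparison described above amounts to differencing ten-year means. A helper like this makes the procedure explicit; since the NASA values themselves are not reproduced here, it is run on a hypothetical steadily warming series:

```python
def decadal_change(anoms, end1, end2):
    """Difference between 10-year mean anomalies for the decades ending
    in end1 and end2 (inclusive). `anoms` maps year -> anomaly."""
    def decade_mean(end):
        return sum(anoms[y] for y in range(end - 9, end + 1)) / 10.0
    return decade_mean(end2) - decade_mean(end1)

# Hypothetical series warming at a steady 0.02 degC/yr:
anoms = {year: 0.02 * (year - 1970) for year in range(1970, 2014)}
print(round(decadal_change(anoms, 1989, 1999), 2))  # 0.2 for a steady trend
```

A perfectly steady trend gives identical decade-on-decade steps, so the uneven sequence quoted above is itself a sign of decadal variability in the record.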

What can I say? This is basically about modeling of climate with expensive equipment that is poorly done. There must be several dozen outfits with supercomputers involved, each costing fifty million plus, and none of them have got the current standstill right. Not one. These machines just refuse to tell us the future. And the past and present too. Small wonder, because these highly paid scientists are going about it ass backwards. They still have the delusion that carbon dioxide is a world warmer. That is over now. Atmospheric carbon dioxide today is the highest it has ever been, but there is no warming now. And there has not been any for the last 15 years, as even Pachauri of the IPCC has admitted. If you were doing a scientific experiment and for fifteen years you got nothing but negative results, you would be justified in calling that experiment a failure. A failure to cause greenhouse warming by the natural experiment set up by our climate is what we are looking at now.

But Jeff Tollefson still has faith in the existence of global warming and puts it this way: “That trend does not undermine the idea that greenhouse gases will eventually push global temperatures into uncharted territory…” Then he consoles himself further by saying that “…temperatures are expected to plateau occasionally even during a warming climate. And the world remains a very warm place. The ten hottest years on record have all occurred since 1998….” Wait till I show you a plateau you never heard of. And those ten hottest years were not caused by any imaginary greenhouse effect but exist only because of a step warming, brought on by warm water transported across the ocean, that raised global temperature by a third of a degree Celsius at the beginning of the twenty-first century. You might not know about it because that section of the temperature curve was covered up by a phony warming called “late twentieth century warming” until last fall.
I had demanded an investigation of it, but nothing happened for two years. Then, totally unexpectedly, GISTEMP, HadCRUT, and NCDC decided to get rid of that fake warming and aligned their temperature data in the eighties and nineties with the satellite data I referred to in my book [1]. This was done secretly and no one was told of its purpose. It is of course obvious why it was done. But while that warming was official, numerous articles referred to it as confirmation that the warming was man-made. Man-made all right: in the back rooms of temperature depositories.

This leaves us free to put together the correct temperature curve for the satellite era. Start with an 18-year no-warming period in 1979. Add to it all of the twenty-first century as another no-warming period. This leaves just a narrow window between the two, wide enough to accommodate the super El Nino of 1998 and its step warming. And no space at all for any greenhouse warming that might want in! Which means that we have now existed for 34 years without any greenhouse warming. With this knowledge, how likely is it that any of the earlier warming was greenhouse warming? Probably zero, I would say.

And since scientists really just don’t know what is causing this lack of warming, we are left to explain it. I would put their ignorance down to a lack of due diligence in doing their homework. Ferenc Miskolczi has the explanation, which he published in 2007. In 2010 he followed it up with experimental observations. Using the NOAA database of weather-balloon observations that goes back to 1948, he studied the absorption of infrared radiation by the atmosphere over time. He discovered that the absorption had been constant for 61 years while carbon dioxide over the same period went up by 21.6 percent [2]. The addition of this substantial amount of carbon dioxide to air had no influence whatsoever on the absorption of IR by the atmosphere. And no absorption means no greenhouse effect; case closed.
This invalidates all predictions of warming based on the existence of the greenhouse effect and explains why there is no warming now. But there are persistent attempts to explain away the current lack of warming, including a suggestion that the missing heat has disappeared into the ocean bottom [3]. That sure is a good place to hide it, because Trenberth and Fasullo [4] at one time managed to lose 80 percent of global energy somewhere in the ocean and no one has been able to find it since. Stuff like that I regard as last-ditch attempts to save global warming from the waste basket of history, where it has a place right next to phlogiston, another failed theory of heat. Like I said, it’s over.

When you talk about lost heat, how is the expected heat determined? Is it by measurement of some sort, if so how measured? Is it by calculations? If so how calculated. Pardon me if I have missed explanations for these questions.

The expected heat is determined by assuming that surrounding the cooling Earth with an atmosphere containing an extremely small proportion of CO2 will not only stop the Earth cooling, but will cause it to heat.

The technical name for this process is “magic”.

Because of the nature of this “magic”, the wizard in charge has to guess at the amount of “heat” that will be generated due to variations in the proportion of CO2.

Then the guess is used as the input to a “computer program” which provides the number that the Committee of Wizards agreed on – more or less. Now, in accordance with the Lore of Magic, the “heat” is created as a large quantity of “caloric”, which is a subtle fluid that moves of its own accord from warmer objects to cooler ones.

In this case, the caloric has not performed its World heating role as expected (probably due to a defective spell or three). It has instead flowed into the colder deeper parts of the Ocean. However, the Wizards assure us that the caloric fluid can be coaxed out of its hiding place, and convinced to warm itself up, so that the World can resume heating, rather than cooling.

I am sorry if you have missed this explanation. Or you can believe the fairy tale put about by the Warmists. It is pretty similar, but a bit more far-fetched – it’s hard not to be gripped by bouts of convulsive laughter interspersed with howls of incredulity when listening to it.

R. Gates: “Models are not ‘bad’ because they fail to account for natural variability, that’s why they go wrong. Models are bad when they get the dynamics wrong due to a lack of understanding of the system.”

Sorry, models are bad when they put in factors that are clearly wrong due to an inherent warming bias [known as climate sensitivity to CO2].
Supercomputers have ENSOs and cloud patterns and currents and solar irradiance with variation thrown into them to model natural variability. Volcanoes are put in the programmes; though they cannot predict when a large surprise event occurs, they do represent the average volcanic activity as it is known. Yes, they will be wrong, as they cannot model all the factors, but they should not go over a cliff or do a hockey stick unless a known, identifiable set of factors is at work.
CO2 is going up but…
Sea level might be going up [a lot of data worries]. Global temperature rise has stalled, apart from the one group of land-based [biased] measurements which uses constantly refreshed algorithms to alter the raw data [never downwards]. They may still be right, but who can trust them? Ocean heat content seems to be going up, but the amount of true measurement is again in question. Total sea-ice extent is above normal again 30 years down the track, albeit only for a few months.
None of them have gone up in step with CO2 rises. None of them.
Take out the known bias. Rerun the models. See which ones model the recent 10 years. I am sure some will.
I am not saying a CO2 rise shouldn’t cause some rise in temperature. I am saying it has not done so, and therefore it should be removed from the formula, as there seem to be coping mechanisms in the Earth that feed back and totally nullify this factor.

The Nature paper by Jeff Tollefson is an excellent example of warped, circular logic IMO.

I particularly liked these paragraphs:

It is one of the biggest mysteries in climate science: humans are pumping more greenhouse gases into the atmosphere today than ever before, yet global temperatures have not risen much in more than a decade. That trend does not undermine the idea that greenhouse gases will eventually push global temperatures into uncharted territory, but it does have scientists puzzled.

One partial explanation is natural variation: temperatures are expected to plateau occasionally even during a warming climate. And the world remains a very warm place. The ten hottest years on record have all occurred since 1998.

Yet with the stalled warming now approaching its 15th year, researchers are seeking some deeper explanation. “The heat must be going somewhere,” says Ed Hawkins, a climate scientist at the University of Reading, UK. “The question is where.”

But scientists cannot yet fully explain the recent trends, and …

Let’s analyze this.

The fact is that humans are pumping more greenhouse gases into the atmosphere today than ever before, yet global temperatures have not risen much [read: at all] in more than a decade.

This tells us that the earlier estimates of 0.2°C per decade greenhouse warming were wrong, despite unabated emissions of GHGs. Yet the author concludes: “That trend does not undermine the idea that greenhouse gases will eventually push global temperatures into uncharted territory, but it does have scientists puzzled.”

In other words, the theory is correct by definition, even if the facts on the ground do not support it, and this puzzles the scientists.

A more logical conclusion (Occam’s razor) is that the earlier estimates of 2xCO2 temperature response were grossly exaggerated.
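To put rough numbers on that argument, here is a minimal back-of-envelope sketch. It uses the standard simplified CO2 forcing approximation (dF = 5.35·ln(C/C0) W/m², after Myhre et al. 1998) and an equilibrium response dT = ECS·dF/F_2x; the ppm figures are approximate values I have plugged in for illustration, and the calculation ignores ocean lag, so it is an upper-bound sketch rather than anyone’s actual model output:

```python
import math

# Forcing for a CO2 doubling under the simplified Myhre et al. expression, ~3.7 W/m^2
F_2X = 5.35 * math.log(2.0)

def warming(c0_ppm, c_ppm, ecs):
    """Equilibrium warming (deg C) for a CO2 rise from c0_ppm to c_ppm,
    given a 2xCO2 sensitivity ecs (deg C per doubling)."""
    dF = 5.35 * math.log(c_ppm / c0_ppm)  # radiative forcing from the CO2 change
    return ecs * dF / F_2X

# Illustrative: CO2 rose from roughly 370 to 395 ppm over the 2000s.
for ecs in (1.0, 2.0, 3.0):
    print(f"ECS = {ecs} C -> expected warming over the period ~ {warming(370, 395, ecs):.2f} C")
```

Even with an ECS of 3 °C the implied equilibrium warming over that CO2 rise is under 0.3 °C, so a decade of flat temperatures does at least constrain which sensitivity values remain plausible.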

The natural variation explanation also raises the same question.

Earlier IPCC estimates in AR4 had anthropogenic forcing (principally from added CO2) playing the major role for all warming experienced since pre-industrial 1750: this supposedly accounted for 93% of all the warming.

Yet now, all of a sudden, these same natural factors have supposedly overwhelmed record increases in GHG concentrations, with CO2 levels reaching all-time record heights.

This is not logical.

Again, the more logical (and straightforward) conclusion is that the earlier estimates of 2xCO2 temperature response were grossly exaggerated and that natural factors played a larger role in the past warming than previously assumed.

The ten hottest years on record have all occurred since 1998.

This statement is a red herring. Warming has stalled (in fact it has started to cool). It just hasn’t cooled as much over the past decade as it warmed over the previous two or three decades, so obviously the current years are warmer than earlier years.
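The arithmetic behind that point is easy to demonstrate with a toy series (entirely synthetic numbers of my own, not real temperature data): warm steadily for 30 “years”, then hold flat, and the ten hottest years all land at the end of the series even though no warming is occurring there:

```python
# Synthetic illustration: steady warming followed by a plateau.
temps = [0.01 * year for year in range(30)]  # 30 years of warming, 0.01 per year
temps += [temps[-1]] * 15                    # then a 15-year plateau at the peak value

# Rank years by temperature, hottest first, and take the top ten.
hottest = sorted(range(len(temps)), key=lambda y: temps[y], reverse=True)[:10]
print(sorted(hottest))  # every one of the ten falls at the peak or in the plateau
```

So “the ten hottest years on record are all recent” is exactly what a plateau at the end of a warming period produces; it carries no information about whether warming is continuing.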

“The heat must be going somewhere,” says Ed Hawkins, a climate scientist at the University of Reading, UK. “The question is where.”

Like Jeff Tollefson, Hawkins is baffled about the missing heat that he knows, deep in his heart, must be somewhere, based on his earlier model projections. But rather than simply acknowledging that his earlier estimates of CO2 temperature response were grossly exaggerated and modifying his climate model inputs accordingly, he insists that the hypothetical heat must be going somewhere.

The larger question is whether the lack of warming today portends less warming in the future.

Of course it does. Duh!

Where is Occam when we need him so badly? (He wouldn’t need a razor, since he’d be pulling out his hair at the logic being used here, supposedly by scientists.)

That’s a value statement. I call BS on that statement. The planet is in a ‘Coldhouse Phase’. The planet’s average surface temperature is currently around 15 °C, whereas normally it is around 8 °C warmer than now.

Uhh, according to the latest guest post, “Unforced Variability and the Global Warming Slow Down,” some of the “buried” (i.e. missing) heat from the “deep Pacific ocean” is released periodically during El Niños. Which is surprising news, since other consensus types aren’t sure it is even there, or doubt that it will be “released” any time soon if it is there.

Amusingly, the alarmists are now in the position of hoping their deep missing heat will boil out of the oceans. Oddly, and not so amusingly, I hope so, too, because we are cooling, folks; for how long even kim doesn’t know.
================

Tell me, R Gates Skeptic, do you want(expect) the missing heat to boil out now or later? If now, is it to prove you are right or so that we may enjoy the manifold, and historically manifest, benefits of warming?
================

I only speak to the data. So long as ocean heat content continues to accumulate far in excess of the relatively small fluctuations of energy in the troposphere, I must speak about the data– not about what might be or could be. If we did not have the ARGO data (and I’m sure a few fake-skeptics wish we didn’t) then there would be more room for speculation, or if the ARGO data showed ocean heat content decreasing, we’d also be having a different conversation.

You could be right Jim2, or you might be wrong. Either way I’m an optimist. Should the human carbon volcano turn out to be catastrophic to the current web of life, after a few million years, another will evolve.

Oh where, oh where has the missing heat gone?
Oh where, oh where can it be?
Is it down with the dead-men a thousand fathoms below,
Or nicked inter space, even Trenberth don’t know,
It’s dispersal’s a deep mysteree.

That’s highly amusing, B Gates. First, because they virtually concede that the Maunder Minimum caused the Little Ice Age, something I suspect but am not prepared to concede. Second, because they maunder on about sunspots without once mentioning Livingston and Penn.

The paper in Nature is incredibly significant. A year ago only skeptical heretics were talking about a pause. Then a few months ago articles began to appear in other mainstream publications like The Economist. And now a decidedly Warmist journal allows such a paper to be published. This is no small change. It signals more honest debate in a variety of forums. Maybe we could even get back to legitimate science.

John Carpenter | July 13, 2013 at 7:47 am |
You write “Failure is no reason to not continue to pursue knowledge and understanding with the excuse of ‘I don’t know’.”

I basically agree with everything you have written, but it is you who have missed the point. Sure, we should try to do all the things you have suggested, and, indeed, the warmists have given it their best shot. But if, in the end, their best shot cannot answer the question, then warmists should not pretend that the question has been answered. Which is what has happened, and which you refuse to discuss.

Yes, the warmists have established that CAGW could be true; I agree completely. But if the warmists have not provided the vital bit of information – the actual measurement of climate sensitivity – then they have not proven that CAGW is any more than a hypothesis. So it is wrong for the IPCC to write http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch9s9-7.html. How can you possibly support this bit of AR4, when CAGW is still a hypothesis? That is the point you refuse to discuss.

1. We no longer have to deal with the crackpots saying that “there is no pause” when Nature and its mainstream sources readily admit that there is a pause and that the pause is puzzling.

2. “They all predict the shift beautifully,” Goddard says. “Unfortunately, from what I hear, different models are doing it for different reasons.” Could there be a funnier revelation of ex post tuning and data snooping?

“You give me a pattern, and I’ll give you a tweak to the simulator that can reproduce it” seems to be the name of the game.

Judith, the Nature News article does not show the most recent Met Office decadal forecasts. Nor does it show the forecasts submitted by the Met Office to IPCC AR5 as the benchmark. There is an amazing 0.52 deg decrease in the forecast between the CMIP5 submission (HadGEM2) and the most recent decadal forecast (using the more recent HadGEM3) for the most recent reported period of the HadGEM3 decadal forecast (2016.76–2017.75). See http://climateaudit.org/2013/07/15/nature-hides-the-decline/