Overestimated global warming over the past 20 years

Recent observed global warming is significantly less than that simulated by climate models. This difference might be explained by some combination of errors in external forcing, model response and internal climate variability.

The latest issue of Nature Climate Change includes the following Opinion & Comment by Fyfe, Gillett and Zwiers: Overestimated global warming over the past 20 years. [link; behind paywall]. It’s a short piece; here are some excerpts:

Global mean surface temperature over the past 20 years (1993–2012) rose at a rate of 0.14 ± 0.06 °C per decade (95% confidence interval). This rate of warming is significantly slower than that simulated by the climate models participating in Phase 5 of the Coupled Model Intercomparison Project (CMIP5). To illustrate this, we considered trends in global mean surface temperature computed from 117 simulations of the climate by 37 CMIP5 models. By averaging simulated temperatures only at locations where corresponding observations exist, we find an average simulated rise in global mean surface temperature of 0.30 ± 0.02 °C per decade (using 95% confidence intervals on the model average). The observed rate of warming given above is less than half of this simulated rate, and only a few simulations provide warming trends within the range of observational uncertainty.

The inconsistency between observed and simulated global warming is even more striking for temperature trends computed over the past fifteen years (1998–2012). For this period, the observed trend of 0.05 ± 0.08 °C per decade is more than four times smaller than the average simulated trend of 0.21 ± 0.03 °C per decade. The divergence between observed and CMIP5-simulated global warming begins in the early 1990s, as can be seen when comparing observed and simulated running trends from 1970–2012.

The evidence, therefore, indicates that the current generation of climate models (when run as a group, with the CMIP5 prescribed forcings) do not reproduce the observed global warming over the past 20 years, or the slowdown in global warming over the past fifteen years. [S]uch an inconsistency is only expected to occur by chance once in 500 years, if 20-year periods are considered statistically independent. Similar results apply to trends for 1998–2012. In conclusion, we reject the null hypothesis that the observed and model mean trends are equal at the 10% level.
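The non-overlap claim in these excerpts can be checked with the quoted numbers alone. The snippet below uses the intervals from the text (0.14 ± 0.06 observed vs 0.30 ± 0.02 simulated, °C per decade) and a simple interval-overlap check of my own for illustration; it is not the paper’s actual significance test.

```python
# Overlap check for the two 95% confidence intervals quoted in the text
# (an illustration, not the paper's actual statistical method).

def intervals_overlap(center_a, half_a, center_b, half_b):
    """True if the symmetric intervals a +/- half_a and b +/- half_b overlap."""
    return (center_a - half_a) <= (center_b + half_b) and \
           (center_b - half_b) <= (center_a + half_a)

obs = (0.14, 0.06)  # observed 1993-2012 trend, deg C per decade
sim = (0.30, 0.02)  # CMIP5 multi-model mean trend, 95% CI on the mean

print(intervals_overlap(*obs, *sim))  # -> False: the intervals are disjoint
```

The same check applied to the 1998–2012 numbers (0.05 ± 0.08 vs 0.21 ± 0.03) also returns False, consistent with the excerpt.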

One possible explanation for the discrepancy is that forced and internal variation might combine differently in observations than in models. For example, the forced trends in models are modulated up and down by simulated sequences of ENSO events, which are not expected to coincide with the observed sequence of such events. For this reason the moderating influence on global warming that arises from the decay of the 1998 El Niño event does not occur in the models at that time. Thus we employ here an established technique to estimate the impact of ENSO on global mean temperature, and to incorporate the effects of dynamically induced atmospheric variability and major explosive volcanic eruptions. Although these three natural variations account for some differences between simulated and observed global warming, these differences do not substantively change our conclusion that observed and simulated global warming are not in agreement over the past two decades. Another source of internal climate variability that may contribute to the inconsistency is the Atlantic multidecadal oscillation (AMO). However, this is difficult to assess as the observed and simulated variations in global temperature that are associated with the AMO seem to be dominated by a large and concurrent signal of presumed anthropogenic origin. It is worth noting that in any case the AMO has not driven cooling over the past 20 years.

Another possible driver of the difference between observed and simulated global warming is increasing stratospheric aerosol concentrations. Other factors that contribute to the discrepancy could include a missing decrease in stratospheric water vapour, errors in aerosol forcing in the CMIP5 models, a bias in the prescribed solar irradiance trend, the possibility that the transient climate sensitivity of the CMIP5 models could be on average too high or a possible unusual episode of internal climate variability not considered above. Ultimately the causes of this inconsistency will only be understood after careful comparison of simulated internal climate variability and climate model forcings with observations from the past two decades, and by waiting to see how global temperature responds over the coming decades.

JC comments: As far as I can tell, the methods for statistically comparing observations and models and drawing inferences from this comparison are rock solid.

The selection of 20 years is interesting for several reasons. It gets away from the ‘cherry picking’ criticism of starting with 1997 or 1998. Also, it includes the big jump from 1993–1998.

In terms of reasons for model underestimation, the apparent ‘preferred’ explanation of ‘the ocean ate it’ does not get any play here, other than in the context of a brief consideration of natural internal variability. Their conclusion, ‘This difference might be explained by some combination of errors in external forcing, model response and internal climate variability,’ is right on the money IMO, although I don’t think their analysis of why the models might be wrong was particularly illuminating. If you would like further illumination on why the climate models might be wrong, I refer you to my uncertainty monster paper.

The higher the sensitivity, the colder it would now be without the warming effect of AnthroGHGs. Pick a sensitivity that frightens you and then calculate how cold it would now be without human endeavour and beneficence.
==============

Recent observed global warming is significantly less than that simulated by climate models. This difference might be explained by some combination of errors in external forcing, model response and internal climate variability.

Gee, ya think? Ya think it might be some combination of those factors? Or maybe not.

What’s your problem? Actually, sometimes it takes a certain level of intelligence to say, “We’ve looked at this from all angles, and the conclusion is we don’t really know; let’s wait for more data, or come up with better ways of looking at the problem.” That seems to be what these guys are doing, and it seems to be the RIGHT answer.

It’s only an idiot that thinks he has to give an answer one way or the other when it’s clear he hasn’t a clue.

(Note: in the present context ‘he’ is not intended to be gender specific, both men and women can be idiots)

Now we need 1000s of times more computing power to find out what the question is.

‘AOS models are therefore to be judged by their degree of plausibility, not whether they are correct or best. This perspective extends to the component discrete algorithms, parameterizations, and coupling breadth: There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior.’ http://www.pnas.org/content/104/21/8709.full

Model error I suggest may have a bit to do with ‘a posteriori’ choosing the wrong solution out of many feasible solutions. If they had pulled the right solution out of their arses all this could have been avoided.

Finally – Walker Circulation changes with ENSO states and ENSO states have decadal, centennial and millennial modes. Some 5,000 years ago ENSO shifted from a La Nina dominant state to El Nino dominant. I expect Walker Circulation changed then as well.

From article: “The divergence between observed and CMIP5-simulated global warming begins in the early 1990s, as can be seen when comparing observed and simulated running trends from 1970–2012.” I’m not a modeler, but aren’t models supposed to be ‘calibrated’ occasionally? If the model diverged from reality, shouldn’t it have been ‘adjusted’?

Ah, the ‘models’ are supposed to be based on first principles, and the modelers claim that they are not in any way, shape or form fitted to the past; it just happens that the models all hindcast much better than they forecast.

An opinion leader is specified as a highly self-confident agent with strong opinions. . . .
as history has proven, paradigm shifts are possible. It all starts with an agent willing to propose a new opinion and strongly holding it.

the common assumption of bias cancellation (invariance) in climate change projections can have significant limitations when temperatures in the warmest months exceed 4-6 deg C above present day conditions.

When assumptions that underlie a particular regression method are inappropriate for the data, errors in estimated statistics result. . . .
Conclusion: Only iteratively reweighted general Deming regression produced statistically unbiased estimates of systematic bias and reliable confidence intervals of bias for all cases.

Pretty general there, Capt. Do you have more detail on “everything?” Does anyone have a detailed list of such assumptions? With no details, I’d assume that “ideal and symmetrical” would produce over- and under-estimates more or less equally.

The two simplifying assumptions I’ve seen are: 1) that CO2 is equally distributed at the same level as the Hawaiian measure, and 2) that outgoing radiation can be computed proportional to T^4 using the average world T.
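That second assumption is more consequential than it looks: because T^4 is convex, radiating at the average temperature is not the same as averaging the radiated fluxes (Jensen’s inequality). A toy two-region sketch, with made-up temperatures chosen only for illustration:

```python
# Jensen's inequality applied to T^4: computing outgoing radiation from the
# average temperature underestimates the average of the radiated fluxes.
# The two temperatures below are illustrative, not observed values.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

cold, hot = 230.0, 280.0   # K, a toy two-region planet
avg_t = (cold + hot) / 2   # 255 K

flux_of_avg = SIGMA * avg_t ** 4                       # flux at the mean T
avg_of_flux = (SIGMA * cold ** 4 + SIGMA * hot ** 4) / 2  # mean of the fluxes

print(flux_of_avg < avg_of_flux)  # -> True, by convexity of T^4
```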

Webby often dazzles us with his mathematical brilliance so we expect something that will blow our minds. I predict that FOMD and lolwot will bombard us with white noise. I would predict at least 30 references to Hansen from FOMD.

No problem with that. The year 2100 is the expected date at which atmospheric CO2 will double from pre-industrial times, at an increase of 2 PPM/year.

At a rate of 0.14C/decade, that will push the temperature up another 1.2C by 2100. That will add to the 0.8C already in the bank, bringing it to about 2 C per doubling. The equilibrium climate sensitivity will be about 50% greater than this due to the ocean acting as a heat sink, so the ECS will be about 3C, in line with the mean estimate from the models.
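The arithmetic in this comment is easy to reproduce; the inputs (0.14 C/decade sustained to 2100, 0.8 C already realized, a 50% uplift from transient to equilibrium) are the commenter’s assumptions, not established values:

```python
# Reproducing the comment's back-of-the-envelope arithmetic with its own
# assumptions (not established values).
rate = 0.14                               # deg C per decade
decades_to_2100 = (2100 - 2013) / 10.0    # 8.7 decades from the time of writing
future = rate * decades_to_2100           # additional warming by 2100
transient_per_doubling = 0.8 + future     # warming "in the bank" plus future
ecs = 1.5 * transient_per_doubling        # 50% uplift for the ocean heat sink

print(round(future, 1), round(transient_per_doubling, 1), round(ecs, 1))
# -> 1.2 2.0 3.0
```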

WebHubTelescope: At a rate of 0.14C/decade, that will push the temperature up another 1.2C by 2100. That will add to the 0.8C already in the bank, bringing it to about 2 C per doubling.

I am awaiting that 0.14C/decade. If we get a net 2C increase by 2100 as a consequence of CO2 doubling, I think that James Hansen’s great-great grandchildren will be safe, not to mention his grandchildren. You make a good case that energy is much more important a concern than warming.

At a rate of 0.14C/decade, that will push the temperature up another 1.2C by 2100.
===========
Where I live, the lowest average low during the year is 10F. The warmest average high is 82F. The lowest on record is -33F. Warmest is 101F. We average 7 days per year below 0F. We average 9 days above 90F annually. Which means there is no such thing as an “average” year.

While I don’t expect to see ppm level for CO2 to fall the century following this one, or probably the next, it is an interesting question, mainly for the answers it might garner.

From my Atmospheric Physics class I recall a 6-year residence time for CO2. However, I’ve since seen statements ranging up to hundreds of years. As the source of some of those statements has been folks such as Bill McKibben, I am not sure how much to trust them. Then again, I seem to recall serious climate folks also making that claim.

So I would very much like to see an answer to your question, along with an explanation.

Maybe they didn’t want two on the same subject? Don’t know how it works. But from what I saw of the von Storch paper, they _really_ said the same thing, even down to the three explanations for the discrepancy. I saw on klimazwiebel that Ed Hawkins pointed out that not all the CMIP5 models were used in the simulation; from the numbers he gave (43 models?), this paper didn’t use them all either. Zorita answered that they did their best and tried not to leave any out, but the CMIP5 website is really hard to use and it’s hard to find everything. They did the best they could and may have missed some of the models. Weird.
No one seems to think or be suggesting that the results would have been different.

Perhaps you should read Annan more closely,
“Note by the way that it’s not just the recent decade of data that points to a more moderate sensitivity estimate. For example, back in 2000, Forest et al generated an 90% range of 1.3-4.2C, when they used an expert prior – but at that time, the IPCC experts had all decided that a uniform prior was the correct approach.”

Von Storch comments today on the difference between his paper and Fyfe et al:

Paul, we are mostly interested in the ability of scenario simulations to describe the present stagnation, not in explaining the stagnation. That is quite different. What I find difficult with the “other” paper is that it is again an a-posteriori explanation (like cold European winters caused by less Arctic sea ice in the preceding fall), and just one. There are in principle others, and we would need to do some work to disentangle the plausibility of different explanations.

Von Storch and Zorita tried to find out whether the observations are still consistent with the models, or at least some of them. Their conclusion seems to be “only barely”, and soon not at all unless temperatures start to rise again. They didn’t compare average model projections but the lower tail of the projections for the rate of temperature rise.

Is it at all possible that measuring energy to determine climate may have nothing to do with real climate and we are chasing a rabbit down the wrong hole? Perhaps the planet doesn’t need an energy equilibrium as much as we thought?

Why stop at 20 years? We can go back 34+ years with UAH’s global surface temperature and find that the trend is around 1.4 degrees C per century, well below the low end of the consensus. The exact number depends on what method you use to calculate the trend, but every method that I’ve tried has a 95% CI that excludes the consensus-low-end of 2.0 degrees C per century.
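The kind of check described here (fit a trend, get a 95% confidence interval, see whether it excludes the consensus low end) can be sketched with ordinary least squares. The data below are synthetic, generated to have roughly a 1.4 C/century trend plus noise; they are not the actual UAH series:

```python
import math
import random

# OLS trend with a 95% confidence interval, then a check of whether a
# hypothesized trend (2.0 C/century) lies outside the interval.
# Synthetic annual anomalies, NOT the UAH record.
random.seed(0)
years = list(range(34))
temps = [0.014 * y + random.gauss(0.0, 0.1) for y in years]  # C/yr trend + noise

n = len(years)
xbar = sum(years) / n
ybar = sum(temps) / n
sxx = sum((x - xbar) ** 2 for x in years)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(years, temps)) / sxx
resid = [y - (ybar + slope * (x - xbar)) for x, y in zip(years, temps)]
se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
half_width = 2.037 * se  # t critical value for 32 degrees of freedom, ~95%

excludes = abs(0.020 - slope) > half_width  # is 2.0 C/century outside the CI?
print(round(slope * 100, 2), "C/century; CI excludes 2.0:", excludes)
```

With real data the answer depends on the method and the error model (autocorrelated residuals widen the interval considerably), which is presumably why the commenter says the exact number depends on the method used.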

The explanation is completely simple. Judith, you continually state that the limits of climate sensitivity for a doubling of CO2 are 1 C to 6 C. This is just plain WRONG. The lower limit is 0 C. The little empirical data we have (e.g. Beenstock et al.) gives a strong indication that the true value of climate sensitivity is indistinguishable from zero. If you use a climate sensitivity of zero as an input to the models, the gap between observed and estimated temperatures will miraculously disappear.

Edim, you write “Agree, but IMO the lower limit is not zero, it could be negative,”

I agree. But if I were to suggest, as I have in the past, that the lower limit could be negative, the warmists denizens of CE, like Steven Mosher and lolwot, might create a diversion to the topic I think needs discussing, by claiming that it is impossible for the climate sensitivity to be negative.

But if I were to suggest, as I have in the past, that the lower limit could be negative, the warmists denizens of CE, like Steven Mosher and lolwot, might create a diversion to the topic I think needs discussing, by claiming that it is impossible for the climate sensitivity to be negative.

Easy enough to shut them down: remind them that “feedback” in climate models is really a metaphor, not a “real” feedback in the sense of, say, an electrical circuit. A negative feedback larger than the original would be impossible in a (simple) electrical circuit, but the climate is an extremely complex non-linear system, and any time the word “feedback” is used, it’s only a vague and defective analogy to the referent in electrical circuits etc.

Of course, the same is true for “sensitivity”: the assumption that global “average” temperature is somehow part of an equilibrium is invalid, it may be a useful fiction but it’s still a metaphor.

A negative value would mean that the more the solar input strengthens, the more that the earth cools. Only a skeptic could believe such a thing is possible.

Typical slimeball rhetorical trick. That’s the only type of argument anybody can make that it’s impossible for sensitivity to be negative. Of course, possible doesn’t mean very probable. But it does change the shape of the expected PDF. Not that such expectations are anything but unvalidated assumptions.

“Climate” is also a myth. It’s normally defined as the “average” of weather, but as used in “climate science” its functional definition is something more like “a statistical description of the basin(s) of attraction within which weather follows its path”. Whatever it is, its definition is unclear, involves several detailed definitions that are assumed to be identical, and thus involves built-in assumptions that are unwarranted.

For instance, the assumption that the statistically measured basin(s) of attraction containing the path of evolution of various model runs is somehow representative of the actual basin(s) of attraction containing the path of the actual weather is totally unwarranted. They appear to match for hindcasting, but that’s mostly because models have been trained on known weather, and models that failed to simulate other known weather are deprecated.

AK, huh? I was just explaining what it meant. Some skeptics might change their minds in the light of this explanation.

Well, unless you’re calling Pekka a “skeptic“, you’re wrong. AFAIK, anybody who understands the climate as a complex non-linear system would have to agree that a local negative “sensitivity” is possible. (Local in the sense of small changes to pCO2.) Even if they don’t consider it very probable.

My point is that there are a number of mechanisms which can potentially lead to cooling greater than the warming provided by an increase to either the solar constant or CO2. Such mechanisms probably only produce regional negative net “sensitivities”, and while they could add up to a negative global sensitivity, they almost certainly don’t (IMO). But the regional effects can potentially contribute to a much lower global sensitivity than people would expect if they intuitively assumed that sensitivity (with “feedbacks”) can’t be negative.

Hansen’s 1988 paper seems like a source that would support a CS of 0. The most accurate scenario by far was the one that assumed no substantial increase in CO2. Which is indistinguishable from a zero effect from a substantial increase in CO2.

AK, you write “AFAIK no model has ‘a climate sensitivity […] as an input’. Would you like to provide a link to a peer-reviewed piece describing how some model uses such an ‘input’?”

I understand very well how models in general work; I do not understand the details of GCMs. So let me rephrase this. Either the model has an input of CS, which it uses, or it has inputs from which the model estimates the CS. So the models should be run either with an input of CS of zero, or the inputs from which the CS is estimated should be changed so that the result is a value of CS which is zero.

Jim C:
Why stop at zero? If zero is possible, negative sensitivity is also possible. Assume the climate acts like a refrigerator. By adding heat to a portion of the system, more heat is allowed to “escape” to “elsewhere” and another portion of the system cools. Call it the Accelerated Iris Effect. Another 200ppmv CO2 and we will cross the tipping point to the next big freeze. Not likely possible, but who knows. Gotta give something for the skydragoons to cling to.

The lower limit cannot be 0, mathematically, physically, or historically.

The sensitivity parameter or metric is defined as the following:

DeltaT = Lambda * Forcing

Rearranging: Lambda = Change in Temperature / Forcing.

So, for example, if we take the climate system and increase the forcing by 1 watt (let’s say we turn up the sun by 1 watt), then we will observe an increase in temperature. Let’s say that after 100 years of the increased forcing of 1 watt we see a change in temperature of 1 C.

Lambda = 1/1, or the sensitivity is 1.

The sensitivity can only be zero if there is no change in temperature when you add more energy to the system. That would violate fundamental physics.

There is only one other way that the sensitivity to doubling CO2 can be zero: that is if doubling CO2 leads to no increased forcing.

However, our best science and engineering tells us that doubling CO2 will lead to 3.7 W/m^2 of additional forcing.
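The definition in this comment can be written out as code. Note that the 5.35 * ln(C/C0) expression for CO2 forcing is the standard simplified formula (Myhre et al. 1998) and is my addition for illustration; the comment itself only asserts the ~3.7 W/m^2 result:

```python
import math

# Sensitivity parameter exactly as defined in the comment: Lambda = DeltaT / F.
def sensitivity(delta_t, forcing):
    return delta_t / forcing  # deg C per W/m^2

# Simplified CO2 forcing expression (Myhre et al. 1998), added here for
# illustration; the comment only quotes the ~3.7 W/m^2 result.
def co2_forcing(c_ppm, c0_ppm):
    return 5.35 * math.log(c_ppm / c0_ppm)  # W/m^2

print(sensitivity(1.0, 1.0))                # the comment's worked example: 1.0
print(round(co2_forcing(560.0, 280.0), 2))  # -> 3.71 W/m^2 for a doubling
```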

Edim. Just take a look at this long post that Steven Mosher has made. I was afraid that this would happen. That was why I did not claim CS was negative. Steven, of course, does not want to discuss the possibility that the value of CS for a doubling of CO2 could be INDISTINGUISHABLE from zero, and so he raises this red herring to divert the discussion.

Steven, you write “The sensitivity can only be zero if there is no change in temperature when you add more energy to the system. That would violate fundamental physics.”

I completely agree. But this is a fundamental error that the warmists, such as yourself, continually make. There are two completely different types of forcings: those that add energy to the system, and those that don’t. If the solar constant increases, more energy will reach the earth.

However, and it is an enormous HOWEVER, adding more CO2 to the atmosphere does NOT add any more energy to the system. The total amount of energy reaching and radiating from the earth is, to a first approximation, completely unchanged. Therefore there is no reason whatsoever why a change of CO2 concentration has to make any difference to global temperatures.

I would say that it is at least possible that the energy may not get added. A change in clouds may reflect it, or if the optical density of the atmosphere due to water + CO2 (+ everything else) is already saturated at those wavelengths, then no additional energy will get added. I’m not saying this is the case, but I don’t know that either of these is impossible.

Steven Mosher, it can be near zero, and historically Angstrom said CO2 was near saturated at the surface. Critiques of Angstrom’s experiment mention changing the dimensions of the test cylinder, which is kind of fine-tuning to reach an optimum rather than an expected value. Wood’s less-than-ideal experiment also indicated a small impact without fine-tuning for an optimum. Since the estimates include feedbacks, which are nowhere near certain, “It can’t be zero” is right up there with “I know I am right.”

The direct influence of additional CO2 is well understood, no doubt remains on that. It’s known that other influences than that on IR absorption and emission are minimal (for them ppm means that the effect is also ppm). The influence on radiative energy transfer is, however, quite significant.

That effect does reduce OLR affecting the energy balance by 3.4 W/m^2 according to the more recent estimates, when the concentration is doubled. All the other changes like those on albedo are indirect. First CO2 must warm the Earth system and then the warming leads to the additional changes.

It’s not absolutely inconceivable that warming by CO2 does lead to a chain of changes that cancels the effect on the global level, but unlikely enough to be safely ignored. (In principle warming could persist in some areas, and lead to cooling elsewhere in a way that keeps the average unchanged, but this is really unlikely.)

So, for example, if we take the climate system and increase the forcing by 1 watt (let’s say we turn up the sun by 1 watt), then we will observe an increase in temperature. Let’s say that after 100 years of the increased forcing of 1 watt we see a change in temperature of 1 C.

The sensitivity can only be zero if there is no change in temperature when you add more energy to the system. That would violate fundamental physics.

Nope, you’re not necessarily adding more energy to the system. You’re increasing the energy coming at the system. Depending on what effect more solar energy in some places has on cloud albedo in others, it’s quite possible that the actual amount of energy entering the system (as opposed to being reflected away by increased albedo) is smaller.

There is only one other way that the sensitivity to doubling CO2 can be zero: that is if doubling CO2 leads to no increased forcing.

Nope. Increased GH downwelling IR isn’t the same as increased sunlight. Because in many places it’s absorbed in the first few microns of the (ocean) surface, while most shortwave is absorbed at a depth of meters, it can produce increased evaporation without increasing the temperature (except within a few microns of the surface). Not only that, but this effect can be magnified by a positive feedback from increased low clouds, which produce increased downwelling IR while also increasing albedo and reflecting away more net energy.

In addition to the mechanism I mentioned above for an increased solar constant.

Bottom line, a zero or even negative ECS can’t be ruled out due to “fundamental physics.”

“I completely agree. But this is a fundamental error that the warmists, such as yourself, continually make. There are two completely different types of forcings: those that add energy to the system, and those that don’t. If the solar constant increases, more energy will reach the earth.”

By FORCING I mean, BY DEFINITION, something that adds energy to the system. There are not two kinds of forcing.

The units are watts; sensitivity = C per watt.

So you are left arguing that doubling CO2 adds no watts.

#####################################

However, and it is an enormous HOWEVER, adding more CO2 to the atmosphere does NOT add any more energy to the system. The total amount of energy reaching and radiating from the earth is, to a first approximation, completely unchanged. Therefore there is no reason whatsoever why a change of CO2 concentration has to make any difference to global temperatures.

1. Wrong. We can and do measure the additional watts that are radiated back to the surface; we measure this all over the planet.

2. These watts are no different than other watts.

Very simply, CO2 warms the planet by raising the ERL (effective radiating level). When the ERL is raised, the planet radiates from a cooler altitude. Cooler objects radiate more slowly than warmer objects. Slowing the escape of radiation to space means the planet is warmer than it would have been otherwise.

If you want to see the effect of freeing up the return of IR to space, look at Roy Spencer’s nice little experiment, or study any IR physics.
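The “cooler objects radiate more slowly” step is the Stefan–Boltzmann law, flux = sigma * T^4. A quick numerical sketch, with illustrative temperatures (255 K is often cited as Earth’s effective radiating temperature, but the 1 K drop here is purely for illustration), shows how much outgoing flux a small drop in the effective radiating temperature costs:

```python
# Stefan-Boltzmann: flux = sigma * T^4. A small drop in the effective
# radiating temperature means a disproportionate drop in outgoing flux.
# The 1 K difference below is illustrative, not an observed value.
SIGMA = 5.670e-8  # W m^-2 K^-4

def flux(t_kelvin):
    return SIGMA * t_kelvin ** 4

warm, cool = 255.0, 254.0  # effective radiating temperatures, K
print(round(flux(warm) - flux(cool), 2))  # -> 3.74 W/m^2 less radiated at 254 K
```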

Steven, you write “So you are left arguing that doubling CO2 adds no watts.”

Yes and no. I measure heat in joules, not watts. Watts are a rate of heating, not heat. The earth gets almost all its joules from the sun. It might get some more joules from the core. But wherever it gets its joules from, adding more CO2 to the atmosphere does not change the joule input to the earth. There might be some very slight effect of the absorption of sunlight by the extra CO2, but I suspect this effect is negligible. So adding more CO2 to the atmosphere, by itself, does not change the number of joules that the earth receives. Simple.

If, and it is a mighty big IF, but IF CO2 causes some warming, then maybe, just maybe, the feedbacks could change the amount of joules the earth receives. But that is a whole different issue. First one needs to show that adding CO2 to the atmosphere causes any changes.

In a system as complex as the Earth’s climate system, there is no “fundamental physics” that would prevent the sensitivity from being zero – or even negative.

I accept that increasing CO2 provides an “additional forcing”, but the reaction of the climate system to that forcing is not required to increase the global average temperature. It could increase the temperature in some places and reduce it in others. Relatively small changes in cloud cover and its distribution could do this, for example.

My guess is that in reality sensitivity is positive, but complex systems have a nasty habit of behaving in very unexpected ways – and these systems are often hard to model well.

Now we need 1000s of times more computing power to find out what the question is.

Sensitivity is more correctly sensitivity to initial conditions and can indeed be negative or positive at different times depending on the distance to a bifurcation point, the direction of approach and the nature of the resultant instability.

‘The climate system has jumped from one mode of operation to another in the past. We are trying to understand how the earth’s climate system is engineered, so we can understand what it takes to trigger mode switches. Until we do, we cannot make good predictions about future climate change… Over the last several hundred thousand years, climate change has come mainly in discrete jumps that appear to be related to changes in the mode of thermohaline circulation.’ http://www.earth.columbia.edu/articles/view/2246

Although THC seems significant in large scale shifts – there are of course other shifts that have been less persistent and long lived in the modern era. These seem related to ocean and atmospheric patterns associated with cloud changes that – from the data – seem the dominant forcing in the satellite era by far.

‘What happened in the years 1976/77 and 1998/99 in the Pacific was so unusual that scientists spoke of abrupt climate changes. They referred to a sudden warming of the tropical Pacific in the mid-1970s and rapid cooling in the late 1990s. Both events turned the world’s climate topsy-turvy and are clearly reflected in the average temperature of Earth.’ http://www.sciencedaily.com/releases/2013/08/130822105042.htm

It seems quite likely that temperatures won’t increase for a decade or three more.

Steve, you could not be more wrong. Near-surface temperature and SST could stay nearly constant despite an increase in forcing, because the atmosphere is a three-dimensional object and you are only measuring a two-dimensional object (the surface). It’s true the energy in the system would have to be higher, but the temperature distribution in the full three-dimensional atmosphere does not need to remain static.

Judith, you write “Dare we hope for sanity from the AR5 in their assessment of detection and attribution? Based upon the ‘leaks’, I am not too hopeful.”

And if the leaks turn out to be true, what are you going to do about it? Are you going to do what any true and proper scientist ought to do, which is yell at the top of your lungs from the rooftops that the IPCC is committing extreme violence to all that we hold sacred in science and physics?

Oh, for heavens sake! Pay attention, Jim! Just because Judith’s many criticisms over the years of the IPCC, the AGU, and others on the alarmist side of the fence have not echoed and/or matched word-for-word your own strident (and counter-productive, IMHO) Johnny-one-note clamouring does not make them any less valid.

In fact, I’m inclined to suspect that in the eyes of many, Judith’s approach has done far more to encourage others – from all walks of life – to investigate for themselves and come to their own skeptical conclusions – than any of your bitter knee-jerk rants (and those of others I could name!)

Thank you for reading what I wrote. I don’t feel like retracting one bit of it. But I resent being accused of “knee-jerk rants”. I have spent some 10 years trying to understand the physics which allegedly shows that CAGW is real. To typify what I write as a “knee-jerk rant” simply does not come anywhere near the truth. What I write has been painstakingly researched; whether it is accurately researched is another matter.

I should add that I have said on many occasions just how much I admire what Judith has done and is doing. It is just that I hope she will do a lot more.

Mine tends to be a bit twisted. I still think the funniest newspaper article I ever read was the story about a man in Chicago who accidentally killed himself playing Russian Roulette – with an automatic pistol.

omnologos, I don’t think it’s strange. The divergence began as soon as the models were frozen, more or less. Till then they were “training”, after that they are “testing”. They just aren’t doing very well. Normal in science, eventually they will do better, but unfortunately those models are bearing a lot of weight of world politics and economics.

Surely the problem stems from calibration of a possibly incorrect model. If observed warming is attributed to CO2 and there are other, unknown, components of the climate system, the parameterisation will be incorrect.

If we have a deterministic model, is the data sufficient to compute the parameters reliably, or will the measurements be ill-conditioned (either because the problem is poorly posed or the model is wrong)?

How much error is involved in linearisation of the system? Can we be certain that higher order terms can be ignored?

In these cases the predictions are likely to be incorrect. I was faced with a similar problem, in an entirely different discipline, some years ago. When I examined the sensitivity of parameter estimation to small changes in input data, I concluded that the system was ill-conditioned. Further investigation showed that the model was incorrect because of an ignored input. Correction of this gave marginally better performance.
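The kind of sensitivity check described here is easy to sketch. Below is a minimal, standard-library Python illustration (a toy problem of my own, not the commenter's actual system): two nearly collinear predictors make the least-squares parameter estimates swing wildly under a tiny perturbation of the data, even though the fitted curve barely changes. That is the hallmark of an ill-conditioned estimation problem.

```python
# Toy illustration of ill-conditioned parameter estimation: least-squares
# fit with two nearly collinear predictors. A ~0.1% perturbation of the data
# moves the parameter estimates by far more than the perturbation itself.

def fit_two_params(xs, ys):
    """Solve the 2x2 normal equations for y ~ a*f1(x) + b*f2(x),
    where f2 is nearly collinear with f1."""
    f1 = xs
    f2 = [x + 0.0001 * x * x for x in xs]   # almost the same direction as f1
    s11 = sum(u * u for u in f1)
    s12 = sum(u * v for u, v in zip(f1, f2))
    s22 = sum(v * v for v in f2)
    t1 = sum(u * y for u, y in zip(f1, ys))
    t2 = sum(v * y for v, y in zip(f2, ys))
    det = s11 * s22 - s12 * s12             # small det = ill-conditioned
    a = (s22 * t1 - s12 * t2) / det
    b = (s11 * t2 - s12 * t1) / det
    return a, b

xs = [float(i) for i in range(1, 11)]
ys_clean = [2.0 * x for x in xs]            # "true" data: y = 2x exactly
ys_pert = [y + (0.001 if i % 2 else -0.001) # tiny alternating measurement error
           for i, y in enumerate(ys_clean)]

a0, b0 = fit_two_params(xs, ys_clean)       # recovers a=2, b=0
a1, b1 = fit_two_params(xs, ys_pert)        # estimates jump by ~0.1
print(a0, b0)
print(a1, b1)
```

The fitted curve is nearly identical in both cases; only the parameter split between the two collinear directions is unstable, which is exactly why small input changes can flag a wrong or ill-posed model.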

Since I am not an expert in climate modelling, I am loath to comment on the details, except to say that GCMs are exceptionally complex and I wonder if all the numerical sources of error have been properly addressed, as well as the physical assumptions.

RC, welcome to the party. There are many different models, and of course "all the numerical sources of error" have not been properly addressed. They are still learning. Some models have high climate sensitivity to CO2 and high sensitivity to aerosols to balance; others do the opposite. And so forth. Unfortunately, the signature metric that they claim to be able to predict, global surface temperature, is a really poor choice for the purpose. You get _one new data point every month_. Regional predictions don't work (yet) at all.

I guess I’ll add that the big new variable that many are discussing, deep ocean heat transfer, only has data for the last five years from the Argo floats. All the models’ ability to predict surface temperatures for the past century does not take that into account, and cannot, as there is no data on it at all.

What is not clear to me is how much of the error is due to incorrect parameterisation and how much is due to a canonically incorrect model.

As you rightly point out, the influence of oceanic heat-sinks is potentially a large factor and I don't really know how the transfer of energy within them is modelled in terms of diffusion, layer mixing and bulk transport. Given the complexity of these processes and the lack of data, I would expect any estimates to be crude. Given the very large heat capacity involved, this would be expected to influence surface temperatures, as has been pointed out many times, but the parameterisation of the ocean heat model would be expected to have a large influence on surface temperatures. For example, what effect would a 1% error in the assumed thermohaline current have on predicted surface temperature rise?

▷ Best Available Climate Science derives from thermodynamic considerations associated with conservation of mass, conservation of energy, and increase of entropy, as instantiated by radiation transport theory, as calculated by slide-rule, and as affirmed by paleo-evidence and by sustained observation of global energy imbalance. In summary, the multi-decadal arc of Hansen-style climate change science (here and here and here and here).

And yet global surface temperature is still not increasing, perhaps for a decade to three more. Damn that weak and insipid science that pretends that changing ocean and atmospheric circulations can have any impact on climate at all. Even though we can measure them, thousands of scientists follow them worldwide, and there has been a century of work connecting these things to climate globally.

FORTRAN is alive and well on most scientific computers. To check for yourself, at the command line enter "man gfortran". Most likely you'll be up-and-running immediately (free-as-in-freedom, thanks to Richard Stallman/GNU Project).

There used to be a whole slide-rule design culture besides the classic Napier analog calculator. All kinds of linear and rotary gadgets, often manufactured out of non-rigid plastic and handed out as promotional sales tools, that could tell you the radius of a roll of material x feet long and y mils thick or even help you with grammar. I sometimes wonder what happened to the people who specialized in designing them.

“I am *sick* of moving these variables around. But Jean-Michel has all but threatened a hissy fit if they aren’t removed from PARAMS.h. So now here they are *back* in MNC_PARAMS.h where they were just a few days ago”.

GIGO — All of the GCMs fail validation testing, which means the GCMs do not agree with real-world observations, i.e., none of the GCMs demonstrate any predictive ability whatsoever. The only 'evidence' we have so far is that the climatists' GCMs are god-awful: GIGO (Garbage In, God Out). Billions of dollars have been wasted paying colluding, superstitious, fearmongering school teachers who loathe America to corrupt science and destroy the economy.

“Ultimately the causes of this inconsistency will only be understood after careful comparison of simulated internal climate variability and climate model forcings with observations from the past two decades, and by waiting to see how global temperature responds over the coming decades.”

I find it odd that no one suggests experiments where existing models are modified so that the ECS is lower. The range of model ECS is from 2.1 to 4.4 (in AR4). Waiting to see how temperature responds won't really tell you anything because the discrepancy will still exist. You can find similar discrepancies in the past, and more observation hasn't helped to resolve those.

Further, one thing we found is that the polar amplification in the models is inaccurate. Northern latitude warming is exaggerated. This might be important to understand, especially if you are looking at impact studies for agriculture.
Also, getting that pattern of warming wrong may point to something that is not related to forcing or internal variability but rather to the fundamentals of transport and reorganization.

My motto is "I believe I'm less wrong." Science isn't settled, from an epistemological standpoint, ever. It can be "settled" pragmatically. The pragmatic "settling" of science merely indicates that people don't waste their time questioning it.

If somebody assured you that the science was settled, then you made the mistake of listening to the wrong people. That is your bad. No scientist I work with ever says such things. Perhaps you were reading the internet and got fooled.

The first I ever heard of that was from skeptics saying it wasn't settled.

Granted, if you look around you will find somebody somewhere in the heat of some debate using this phrase. But if you have a thoughtful conversation with people they usually won't say such things, or they will mean something different than you do when you use the term "settled".

Like CO2 warms the planet. That's settled. Of course it could be otherwise, and of course some kooks will object, but nobody thinks it's worthwhile to question it.

You face every “finding” of science with the same question.

Can I build on this, or is it worth questioning?

E = mc^2. Is that "settled"? Well, it could be wrong, but when faced with the mountain of evidence, when faced with the amount of physics that would have to be redone if it's wrong, I make the PRAGMATIC decision that it is more fruitful to accept this "as settled" and build on it, rather than expending the effort to disprove it. Because the latter would mean that I have to rebuild huge swaths of physics.

So CO2 warms the planet. Could that be wrong? Ya, monkeys might fly out of my butt. It is both improbable and painful. If somebody just wants to question that and rebuild all the physics associated with that knowledge, they can try. But if somebody just wants to question its certainty, if they just want to question a physical theory WITHOUT replacing it, then they need to wear a philosophy hat and stop pretending that they are "doing" science.

“But if somebody just wants to question its certainty, if they just want to question a physical theory WITHOUT replacing it, then they need to wear a philosophy hat and stop pretending that they are “doing” science.”

Actually, all anyone needs to do is note that the squiggly line doesn't always go upwards. This indicates that CO2 doesn't always "warm the planet." It further indicates that it can't stop the planet from cooling. This is more than enough to chuck "CO2 warms the planet" into the overly simplistic/unscientific dustbin.

Unfortunately, the MSM and far too many policy makers must be listening to the “wrong people” since according to the MSM and members of the Safe Climate Caucus, the science is settled, and we must take immediate and drastic action to save the planet.

"It further indicates that it can't stop the planet from cooling. This is more than enough to chuck "CO2 warms the planet" into the overly simplistic/unscientific dustbin."

There's no evidence in the surface station record that there's a trend in the loss of nightly cooling since the 1950s. Over 100 million daily records.
You can read it if you follow the link in my name.

I heard it first in the media. I'll grant that politicians, environmental activists and media talking heads are not climate scientists. When I first started looking for information, the two sites I went to were SkS and Real Climate. I was told there it was settled. I'm pretty sure I've seen Dr Mann make reference to it being settled. The use of the label denier is strong evidence people believe it is settled.

I have no problem believing that thoughtful people do not believe the settled-science story line. Just visiting websites will show they exist. But my impression is a lot of people in the field don't speak out when the "settled" issue comes up, nor talk much about uncertainty, at least in public. Otherwise Dr Curry wouldn't stand out. Hers would be the majority view. If they express a different opinion privately, well, how would most of us know that?

I find it odd that no one suggests experiments where existing models are modified so that the ECS is lower.

Wouldn’t that require trying a large number of variations in parametrization with many model runs each? Different for different models (that is, models that really are different and not just clones of other models)?

Steven Mosher: I find it odd that no one suggests experiments where existing models are modified so that the ECS is lower.

That’s essentially a method to estimate, approximately, one parameter of each model while keeping the others fixed. For each model, there is an ad hoc change to this parameter that produces the best fit — but the confidence interval on the parameter estimate is extremely large, and correlated with all other parameter estimates.

Also, getting that pattern of warming wrong may point to something that is not related to forcing or internal variability but rather to the fundamentals of transport and reorganization

That’s what a lot of us call the source of “natural variation” (because the system is chaotic), and you call “unicorns”. The specific word “transport” that you use is what I often use to refer to “non-radiative transport” such as the dry and moist thermals that transport sensible and latent heat from surface and near surface to the upper troposphere. In short you hint that “the physics” embodied in the model may be incomplete.

If the models continue to be inaccurate enough for long enough, everyone will have to reorganize their cognitions along these lines.

"For each model, there is an ad hoc change to this parameter that produces the best fit — but the confidence interval on the parameter estimate is extremely large, and correlated with all other parameter estimates."

Matthew, ECS is not a parameter. It's an emergent property.

"That's what a lot of us call the source of "natural variation" (because the system is chaotic), and you call "unicorns". The specific word "transport" that you use is what I often use to refer to "non-radiative transport" such as the dry and moist thermals that transport sensible and latent heat from surface and near surface to the upper troposphere. In short you hint that "the physics" embodied in the model may be incomplete."

Wrong. The structural issue I’m talking about is not natural variation.
If you actually looked at model results (the spatial field) you would see that most models have the gradient too steep, others have it too shallow, and only a couple have it just right.
Finally the physics are always going to be incomplete of necessity. This is true for any physical model. That is the iron law of modelling reality. You will, you must, you cannot help but leave out some physics.

The idea of the EGO algorithm is to first fit a response surface to data collected by evaluating the objective function at a few points. Then, EGO balances between finding the minimum of the surface and improving the approximation by sampling where the prediction error may be high. EGO implements the algorithm of D. R. Jones, Matthias Schonlau and William J. Welch: Efficient Global Optimization of Expensive Black-Box Functions, Journal of Global Optimization, 13:455-492, 1998.
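For readers unfamiliar with the idea, here is a drastically simplified sketch in Python (standard library only). The real EGO algorithm uses a kriging surrogate and the expected-improvement criterion; this toy substitutes a quadratic least-squares surrogate and a crude largest-gap heuristic for "where the prediction error may be high", so treat it as a caricature of the exploit/explore balance, not an implementation of Jones et al.

```python
# Caricature of surrogate-based optimization: fit a cheap surrogate to a few
# evaluations of an "expensive" objective, then alternate between sampling at
# the surrogate minimum (exploit) and in the widest unexplored gap (explore).

def expensive_objective(x):
    return (x - 0.3) ** 2 + 0.05          # pretend each call costs hours

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def quad_fit(xs, ys):
    """Least-squares surrogate y ~ c0 + c1*x + c2*x^2 (normal equations)."""
    S = lambda p: sum(x ** p for x in xs)
    T = lambda p: sum((x ** p) * y for x, y in zip(xs, ys))
    A = [[S(0), S(1), S(2)], [S(1), S(2), S(3)], [S(2), S(3), S(4)]]
    return solve3(A, [T(0), T(1), T(2)])

xs = [0.0, 0.5, 1.0]                       # initial design on [0, 1]
ys = [expensive_objective(x) for x in xs]

for it in range(6):
    c0, c1, c2 = quad_fit(xs, ys)
    if it % 2 == 0 and c2 > 0:             # exploit: surrogate minimum
        nxt = min(max(-c1 / (2 * c2), 0.0), 1.0)
    else:                                  # explore: midpoint of widest gap,
        s = sorted(xs)                     # a crude prediction-error proxy
        gaps = [(b - a, (a + b) / 2) for a, b in zip(s, s[1:])]
        nxt = max(gaps)[1]
    xs.append(nxt)
    ys.append(expensive_objective(nxt))

best_x = xs[min(range(len(xs)), key=lambda i: ys[i])]
print(best_x)   # close to the true minimizer 0.3
```

With only nine objective evaluations the sketch homes in on the minimizer, which is the whole appeal when each "evaluation" is a multi-month model run.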

‘If you wish to reparameterise a model to get lower ECS, you have a big problem of knowing which parameter to perturb and whether the result you get is unique.”

The relationship between aerosol forcing and ECS is roughly linear.
That is, I can adjust the sensitivity by adjusting aerosols. Since the latter is loosely constrained, there is ample justification for perturbing that input.
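The point can be made with a toy zero-dimensional energy-balance sketch (all numbers below are my own round illustrative values, not from any actual model): because warming scales roughly with sensitivity times total forcing, a high-ECS model with strong aerosol cooling and a low-ECS model with weak aerosol cooling can match the same observed warming.

```python
# Toy, zero-dimensional illustration of the ECS/aerosol trade-off:
# dT ~ (ECS / F_2xCO2) * total_forcing. The specific forcing values are
# illustrative assumptions, not diagnosed from any GCM.

F_2XCO2 = 3.7          # W/m^2 for doubled CO2 (Myhre et al. estimate)
F_GHG   = 2.0          # assumed greenhouse-gas forcing to date, W/m^2

def equilibrium_warming(ecs, f_aerosol):
    """Equilibrium warming (K) for a given sensitivity and aerosol forcing."""
    return (ecs / F_2XCO2) * (F_GHG + f_aerosol)

high = equilibrium_warming(ecs=4.5, f_aerosol=-1.2)   # high ECS, strong aerosol cooling
low  = equilibrium_warming(ecs=2.0, f_aerosol=-0.2)   # low ECS, weak aerosol cooling
print(round(high, 2), round(low, 2))  # nearly identical warming either way
```

Both parameter pairs reproduce the same historical warming, which is why a loosely constrained aerosol forcing leaves the sensitivity poorly identified.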

Yes, that has always confused me. They don't appear to approach the problem like engineers. I'm not prepared to call their approach wrong. I am prepared to say that it is not how many engineers would approach the problem of doing sensitivity tests when an important parameter is unknown. One does have to appreciate the complexity of the models and the difficulty of testing the whole parameter space.

It also has to be annoying to have kibitzers yammering away on the internet. So, I'll just say that some of the things they do confuse me, and leave the judgments out of the discussion.

I would be happier if they compared what they believe the relative effects of CO2 and aerosols to be, then ran these values through the known CO2/dust levels in the ice cores. The higher the value that aerosols cool, the less sensitive the planet is to CO2. We know that in the past aerosols have changed by three orders of magnitude, with high levels associated with ice ages and very low levels with warm ages.

CS seems more like a vector field in a Hilbert space than a singular constant to be estimated, whereas ECS could be described as a tendency toward a constant, but one whose value would constantly be changing in response to changes in parameters over time in a dynamic system.

I dunno. ModelE in AR4 had an ECS of 2.7, and I think the changes made to it pushed the ECS up a bit.

It's unclear to me that model development, testing, and improvement is being done in any systematic fashion. That's to be expected, since there are so many different groups with different skills etc.

I think we are unfair when we expect the community of modelers to act like some giant engineering department. I would not want to herd those cats. And on the one hand we do want them all working in unique, different directions.

Let's see: if my goal is science and discovery, I don't want the modelers to listen to any guidance from engineers and policy. I want them in the ivory tower doing stuff with no interference, no drive for consensus or a "final answer".

If my goal is producing outputs for policy makers.. then I’d suggest picking one model and blessing it with IV&V and then controlling its development and testing in a rigorous fashion. It will not be the cutting edge of science.

Let's see: if my goal is science and discovery, I don't want the modelers to listen to any guidance from engineers and policy. I want them in the ivory tower doing stuff with no interference, no drive for consensus or a "final answer".

Yeah. Too bad the IPCC “consensus” process didn’t let it play out that way.

In AR4 FAQ 8.1, IPCC asks the rhetorical question “How Reliable Are the Models Used to Make Projections of Future Climate Change?”

– the early 20thC warming cycle (1910-1944) showed a linear warming of 0.53C over 35 years (0.15C per decade) while the models showed 0.21C over these 35 years (0.06C per decade).

– since 2001 the models projected warming of 0.2C per decade while the actual record shows slight cooling.
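The per-decade figures quoted above follow from simple division; a quick sketch for checking them:

```python
# Quick check of the per-decade rates quoted above: observed linear warming
# of 0.53 C over the 35 years 1910-1944 versus the models' 0.21 C.

def per_decade(total_change_c, years):
    """Convert a total change over a period into a rate per decade."""
    return total_change_c / years * 10

print(round(per_decade(0.53, 35), 2))  # observed: ~0.15 C per decade
print(round(per_decade(0.21, 35), 2))  # modelled: ~0.06 C per decade
```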

The problem, Mosh, is that there is an undeserved overconfidence on the part of IPCC in the ability of climate models to make projections of climate change; the projections have been consistently overestimated and overstated as the lead post points out.

And I’d suggest that this was simply the result of political pressure to arrive at a consensus, which supports the desired CAGW message, IOW a corruption of the science to support a political agenda.

Steven,
I am glad you mentioned this:
“..If my goal is producing outputs for policy makers.. then I’d suggest picking one model and blessing it with IV&V and then controlling its development and testing in a rigorous fashion. It will not be the cutting edge of science.”
I was going to ask you a question upthread in regards to picking that "one model": what criteria would you suggest be used to get to the three to five best models that could then follow a development approach leading to more useful outputs?

Steven Mosher | August 28, 2013 at 2:14 pm |
…
Let's see: if my goal is science and discovery, I don't want the modelers to listen to any guidance from engineers and policy. I want them in the ivory tower doing stuff with no interference, no drive for consensus or a "final answer"
===========
Problem is, “my goal” is to keep or increase my funding from policy makers who need something to scare people with in order to raise taxes so they can spend more…

In that case, if you are doing experiments at various values of ECS, then ECS is a constraint on the parameters, probably not a well defined constraint. You did say experiments intentionally designed to make ECS take specific values such as 2.1. Then you are essentially estimating a bunch of parameters subject to the constraint. So the caveat applies to all those parameters re-estimated to get the constraint satisfied.

The structural issue I’m talking about is not natural variation.

You specifically referred to transport, and the transport processes are nonlinear. That produces some of the natural oscillation: high-dimensional non-linear dissipative systems always have chaotic oscillations, i.e. natural variation.

Finally the physics are always going to be incomplete of necessity. This is true for any physical model. That is the iron law of modelling reality. You will, you must, you cannot help but leave out some physics.

Well sure. Then you have to test whether the omissions produce large enough errors to be non-ignorable. In the case of the models discussed here, the errors are so large that the models are useless for long-term forecasting. That’s the motive for experimenting with them by, for example, constraining parameter estimates to produce particular values of ECS.

It reminds me of a physics prof who could predict results in horse races. After a race had finished, one guy pointed out that he was wrong. The prof answered that the reason could be that his model was based on spherical horses in a vacuum :-)

It depends on the hypothesis. If the hypothesis is that an observation is within 2 standard deviations of the mean, this is 2-tailed, unlike a hypothesis that an observation is always greater than the mean, i.e. 1-tailed. This might seem a bit theoretical but is a potent source of error in hypothesis testing.
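A minimal standard-library sketch of the distinction (the z-score of 1.8 is my own example number): the same observation can be significant at the 5% level under a one-tailed test and not under a two-tailed test.

```python
# One- vs two-tailed p-values for the same observation, using only the
# standard library (normal CDF via the error function).
import math

def normal_cdf(z):
    """CDF of the standard normal distribution."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

z = 1.8   # an observation 1.8 standard deviations above the mean

p_one_tailed = 1 - normal_cdf(z)        # H1: observation > mean
p_two_tailed = 2 * (1 - normal_cdf(z))  # H1: observation != mean

print(round(p_one_tailed, 3), round(p_two_tailed, 3))
# At the 5% level the one-tailed test rejects, the two-tailed test does not.
```

Picking the wrong tail convention therefore changes the verdict, which is exactly the "potent source of error" the comment describes.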

To me the supposed statement of AR5 is not any stronger than that of AR4. The level of certainty is higher, but the statement itself is weaker as it refers to a longer period.

I don’t think that anyone knows, what the final formulation will be as this is certainly a point to be debated in September in Stockholm.

I'm not entirely happy with the way uncertainties are presented by the IPCC. They are, indeed, not calculated by a well specified method, but are just consensus estimates. On the positive side, it's good that the level of certainty the authors have is expressed in an understandable and quantified way. On the other hand, giving numbers, or statements equivalent to numbers, is not a good practice when they cannot be calculated.

Oh, it's OK Pekka, the public know that 95% of all the statistics they are given are fake.
In other fields, like my own, claiming a statistical value that you cannot support via data is called a lie, and such usage is a sackable offence.
If a medic were to claim a 95% success rate in any procedure, without evidence, the doctor would lose his licence.
Similarly, if an engineer claimed that there was a <5% chance of a component failure within a given time period, and did not have the data, his/her career would be over.
However, I understand that things are different in your area, where you don't actually have to support your hunches, just state them with confidence. How physics has changed.

“To me the supposed statement of AR5 is not any stronger than that of AR4. The level of certainty is higher, but the statement itself is weaker as it refers to a longer period.”

A statement that is more certain AND covers a longer period of time, is a weaker statement? If by weak you mean less supportable, then I would agree. If you mean not INTENDED to be as strong, that is nonsense. They are claiming they are more certain than ever, and their increased certainty covers a longer period of time.

It's much more certain than 99% that humans are causing some persistent warming. Thus the uncertainty is about the rate. Due to the plateau, warming up to 2010 is essentially equal to warming up to some earlier year. Having the same warming over a longer period means that the rate is smaller; thus the claim is weaker.

Nature Climate Change is not Nature; it's published by the Nature group but has much less exposure.

The paper was one of three short papers in the section "Opinion & Comment", not a regular scientific paper. The second of these commentaries should be of interest to you: Richard W. Katz, Peter F. Craigmile, Peter Guttorp, Murali Haran, Bruno Sansó and Michael L. Stein: Uncertainty analysis in climate change assessments. The byline of that paper reads:

Use of state-of-the-art statistical methods could substantially improve the quantification of uncertainty in assessments of climate change.

The paper presents the following recommendations:

• Replace qualitative assessments of uncertainty with quantitative ones.
• Reduce uncertainties in trend estimates for climate observations and projections through use of modern statistical methods for spatio-temporal data.
• Increase the accuracy with which the climate is monitored by combining various sources of information in hierarchical statistical models.
• Reduce uncertainties in climate change projections by applying experimental design to make more efficient use of computational resources.
• Quantify changes in the likelihood of extreme weather events in a manner that is more useful to decision-makers by using methods that are based on the statistical theory of extreme values.
• Include at least one author with expertise in uncertainty analysis on all chapters of IPCC and US national assessments.

Pekka, “Use of state-of-the-art statistical methods could substantially improve the quantification of uncertainty in assessments of climate change.” What is the reason climate scientists fail to use state of the art statistics in the first instance?

I'm not sure at all that big improvements are possible. There are certainly weaknesses in the existing statistical analyses, but I have some doubts about the improvement in the accuracy and reliability of the results that can be obtained by better methods. By that I don't mean that better methods should not be introduced; only practical experience will tell what difference they make.

My doubts about the level of improvement are based on the suspicion that the limiting factor lies deeper, in the data that can be collected, not so much in the methods used to process it.

The March 2011, 8.9-magnitude quake off the coast of Japan caused a tsunami whose waves reached California within twelve hours, moved the island of Japan by about eight feet, and shifted the Earth on its axis by about four inches. This is an example of the magnitude of natural forces, mostly unappreciated by humanity, that are continuously shaping Earth's climate. There is no global warming that cannot be explained by the effects of nature that we do know something about. Temperatures and sea levels have not needed man's help to rise and fall over the last 1,500 years.

Why is everyone so reluctant to accept the truth that combustion of fossil fuels is for the purpose of producing heat, and that this heat is enough to raise atmospheric temperature by four times the measured rise? And why are we surprised that temperatures have quit rising, since we now have a trillion tons a year of glaciers melting, which holds down the temperature? Now that enough glaciers have melted, ocean circulation has probably increased to the point that future melting will increase at a greatly accelerated rate. More and more flaws are showing up in the models that rely on CO2 concentration instead of heat emissions. I believe CO2 has very little to do with global warming.

We have a new definition of denier – someone who thinks that energy released into the atmosphere after millions of years is at least worth thinking about. It is the total of the energy released in a period and not the change that is relevant in this case.

The nominal forcing from CO2 is from a slug of CO2 released into the atmosphere at ambient temperature. The reduction of the mean free photon path – energy retained for longer in the atmosphere – causes the molecules to warm and emissions to increase again in a new conditional equilibrium. The radiant imbalance is reduced to zero after the atmosphere is warmed in this simplistic equilibrium formulation. The planet is never in equilibrium.

It is physically unrealistic. The molecules are emitted at a flame temperature of hundreds to thousands of degrees. The molecules cool down to the ambient temperature. The energy is – especially if we add internally generated heat from radioactive decay in the mantle – sufficient to increase the temperature of the atmosphere to the higher temperature without having a 'radiant imbalance' at TOA at all. The energy exceeds the energy from the increase in nominal forcing in any period. Does this seem like a difficult idea? Greenhouse gases from fossil fuel combustion are hot?

The difficulty seems to be binary either/or thinking. Can't walk and chew gum. The higher energy state of the atmosphere is maintained by increased emission and absorption in the atmosphere. But we just need a simple narrative for the true believers.

The heat from energy production is small, very much less than the influence of additional CO2. Furthermore, it increases only slowly, at the rate of increase in the power of energy production. The influence of CO2 is equally as cumulative as the influence of energy production. Thus we do not get any additional factor from that.

The conclusion is that the heat released from energy production has no observable influence on the warming.
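This claim is easy to check at the back-of-envelope level. A sketch with round ballpark numbers (my own choices, not taken from the thread): global primary energy use of roughly 18 TW spread over the Earth's surface, versus a CO2 forcing since preindustrial times of very roughly 1.8 W/m^2.

```python
# Back-of-envelope comparison of direct anthropogenic waste heat with CO2
# forcing. All inputs are rough, round illustrative values.

EARTH_SURFACE_M2 = 5.1e14      # Earth's surface area, m^2
PRIMARY_ENERGY_W = 18e12       # ~18 TW global primary energy use (approx.)
CO2_FORCING_WM2  = 1.8         # rough CO2 forcing since preindustrial, W/m^2

waste_heat_wm2 = PRIMARY_ENERGY_W / EARTH_SURFACE_M2
print(round(waste_heat_wm2, 3))                 # a few hundredths of a W/m^2
print(round(CO2_FORCING_WM2 / waste_heat_wm2))  # CO2 forcing ~50x larger
```

Waste heat comes out near 0.03-0.04 W/m^2, one to two orders of magnitude below the CO2 forcing, consistent with the conclusion above.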

But I do believe that there is still an uncertainty whether or not the laboratory-observed IR absorption mechanism of CO2 (and other GH gases) really translates into a significant forcing of our planet’s climate, even excluding the impacts of any short-term or long-term feedbacks.

Sure, IF 2xCO2 results in 3.71 W/m^2, as Myhre et al. have estimated, then an annual 2 ppmv addition would result in roughly 0.027 W/m^2, BUT
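Under the standard simplified expression F = (F_2x / ln 2) ln(C/C0), the quoted increment can be checked directly (the ~395 ppmv baseline below is my illustrative assumption):

```python
# Sanity check of the incremental CO2 forcing, using the standard simplified
# logarithmic expression with the Myhre et al. value for a doubling.
import math

F_2XCO2 = 3.71  # W/m^2 per doubling of CO2

def co2_forcing(c_new_ppmv, c_old_ppmv):
    """Forcing from a concentration change, F = (F_2x / ln 2) * ln(C/C0)."""
    return (F_2XCO2 / math.log(2)) * math.log(c_new_ppmv / c_old_ppmv)

annual = co2_forcing(397.0, 395.0)   # one year's ~2 ppmv addition
print(round(annual, 3))              # a few hundredths of a W/m^2
```

The same function returns 3.71 W/m^2 for a full doubling, so the annual increment is a small fraction of a percent of the doubling forcing.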

The increase in fossil fuel combustion is not relevant – the total energy added to the system in a period is. The system is of course dynamic and not static.

We have the increase in atmospheric temperature from nominal forcing and the increase from energy introduced into the system from combustion and radioactive decay in the mantle. They are of the same order of magnitude and the latter are overwhelmingly obvious physical processes.

Do I need to say again that the higher energy state in the atmosphere is maintained by increased emission and absorption in the atmosphere?

But I do believe that there is still an uncertainty whether or not the laboratory-observed IR absorption mechanism of CO2 (and other GH gases) really translates into a significant forcing of our planet’s climate, even excluding the impacts of any short-term or long-term feedbacks.


Are you serious???
============
Pekka, please point to the measurements, in the real atmosphere, that prove that the behavior of CO2 measured on a small scale translates to the real scale, and I'll believe.

Physics is a well established and thoroughly empirically tested science. That allows for drawing many conclusions that are as certain as anything else considered certain. The basic data and theory of radiative heat transfer belongs to that part of physics.

I cannot prove that. Thus you are free to believe or doubt my word on that. That’s, however, my view, and I’m not alone in holding that view.

And you are equally convinced that all of the differences between lab scale and atmospheric scale are accounted for? For example, the lab source replicates the sun in EVERY way? Represents the Earth’s diurnal cycle in EVERY way? Represents the distribution of molecules in the atmosphere in EVERY way?

I'm not trying to describe everything, only those phenomena that I have mentioned. They are real and answer the particular questions discussed originally in this thread. If you bring in something else, then something else must also be added to the answers.

I agree with you in general, and go a little further. As far as I know, CO2 is caused by the oxidation of C. This releases “heat” in various quantities and at various temperatures, depending on the process.

Combustion of carbon based fuels, say acetylene with oxygen, produces a demonstrably different effect to the oxidation of a similar amount of C by carbon based life forms.

In any case, man and his works generate heat by the oxidation of carbon on a ceaseless continuing basis, day and night. This “heat” does not “accumulate”, or get “stored”. It merely obeys the laws of physics, and, to put it simply, eventually wanders off into space.

I would be surprised if increasing the Earth's population by a factor of around four in the century from 1900 to 2000, and increasing the use of coal, oil, gas etc., did not raise the instantaneous heat production at any given instant. Now "temperatures" in various localities are read at a synchronised moment in time (supposedly), and anyone who cares to look at a thermograph chart over one diurnal cycle will soon appreciate the reason.

Whether this increase in instantaneous heat production shows up in near surface land based temperature measuring data sets or not, I don’t know.

I do know that when manually recording thermometer readings at an airport Met Office, one needs to take into account wind direction, aircraft exhaust upwind and so on, before rushing off to declare a new record maximum temperature.

As to CO2 in the atmosphere warming the planet, far greater concentrations in the past haven’t stopped the Earth’s surface temperature from falling a few thousand degrees in the last four and a half billion years, to its present tolerably comfortable point.

To paraphrase Mark Twain (or somebody), “everybody talks about the climate, but nobody does anything about it”. What a waste of time, effort and money!

I think there should be a grassroots letter-writing campaign to the Lord proposing similar bets. Not just John Abraham. One-dollar bets from children. Get as many little people as possible writing to the Lord demanding he lay a bet with them.

Also, betting 1000 dollars is too much. Much better to have 1000 people offer to bet him a dollar each. Or 10,000 people.

Then when he refuses the bet, you have 10,000 voices that can say: Monckton wouldn’t bet a dollar on his beliefs.

10,000 YouTube videos of children and grandchildren, old people and young people, all shapes and sizes.

Hopefully we are beginning to see the winds of change when it comes to science prevailing over climate alarmism and the hypocritical blaming of modernity for destroying the Earth. “How the Left thrives by substituting negative ads and nasty political rumors for genuine political debate.” ~Brad Miner

The always passionate, deeply committed FOMD, says: “The striking contrast with the discourse-destroying censorship practices of Watts/WUWT could scarcely be sharper … or more favorable to Judith Curry’s scientific reputation.”

I’ve put up innumerable posts critical of Anthony, and he’s published every one. However, let’s accept your operating premise, that WUWT is a veritable festival of authoritarian censorship. Why wouldn’t you also mention SkS and RC? Or is their censorship different, because it’s somehow the “right kind,” that is to say justified given that the planet is at stake and all?

You ask a very good question. I never get censored on WUWT. Does that mean they are more open, or that they generally like my point of view? Possibly a bit of both.

The Guardian is undoubtedly extremely hostile to opposing views, even when politely expressed. My factual posts citing such sources as the Met Office were routinely disappeared in minutes. In particular they don’t seem to like graphs. So they exhibit a greater degree of censorship than WUWT in my case.
Does that ratio hold true elsewhere?

Someone would have to pose as a strident commentator from both sides of the debate on at least five blogs each of warmist and sceptical views to find out.

Fanny complains of being censored at WUWT, but considering the fact that he’s broken at least one thread here with his incoherent topicless pictoart and Hansen p0rn, I’m surprised he’s not gone from here, too.

This creature going as “FOMD” got himself banned under a previous screen name at WUWT after an interminable barrage of vicious, off topic, and malignant harangues in violation of site policies and moderator warnings. Any blog operator would be within their rights (and duties) to ban him for what he did there.

I don’t read nearly all of the threads at any climate site, but my strong impression is that the moderation latitude is very wide at WUWT. Certainly tons of critical comments get posted there. It is only when someone gets extremely abusive and/or cannot obey site policies that they have any problem there, no matter what their views may be.

How can frightening children with fears about changing climates and bad weather end well? The UHIs where all of the blue city residents exercise their ignorance and vote their fears are not going to go away. The Left will never admit they’ve been wrong about global warming, or that we’ve all been had by the politicians, or that academia has been corrupted; nor will global warming alarmists ever take responsibility for the damage they’ve done to society. Does anyone imagine a time when Western school teachers will ever be held accountable for turning on the taxpayer, the founders and the scientific method, and burning common sense at the stake? Isn’t the mainstream media’s definition of a very smart person a secular socialist who thinks Christians and Jews are all idiots who need to be told how to think and what to do by public school teachers and Leftist politicians, and that those politicians should be put in charge of controlling the Earth’s climate even if it means rewriting the Constitution?

That is not an explanation, it is a string of words.
Here is the temperature of Vostok.
If the surface were a black body it would radiate at 200 K, with a Planck curve peaking at 14.5 µm, which is right by the CO2 absorption peak.
So why is this bulge greater than the ground temperature?
Why is the area under the curve greater than that for the ground temperature?

With a temperature inversion the surface is colder than the atmosphere. The surface seems to be at -70 °C, while the atmosphere is considerably warmer at some altitudes.

From the curve you show, we see that at wavelengths outside the CO2 feature the radiative level is essentially that of a black body at about 200 K, or roughly -70 °C. That’s the temperature of the surface. Because the air is very dry, the surface radiates directly to space wherever CO2 doesn’t intervene.

Around the 670 cm⁻¹ peak, CO2 makes the atmosphere opaque. Radiation from the cold surface is absorbed by the CO2, but that same CO2 radiates at a higher intensity, because it’s warmer. The effective temperature of almost 220 K is only a little colder than at other latitudes. Effective temperature is the weighted average of temperatures at the altitudes from which the radiation that reaches space originates.

The temperatures that I mention are marked on the black body curves presented with the spectra.
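The numbers above can be checked with Wien’s displacement law: a 200 K black body peaks almost exactly in the CO2 band. A minimal sketch (the surface temperature is the approximate Vostok value quoted in this thread; the constant is Wien’s displacement constant):

```python
# Wien's displacement law: lambda_max = b / T
b = 2897.77          # Wien's displacement constant, in µm·K
T_surface = 200.0    # approximate Vostok surface temperature, K

lam_peak = b / T_surface       # peak wavelength of the Planck curve, µm (~14.5)
nu_peak = 10_000.0 / lam_peak  # the same wavelength expressed as a wavenumber, cm^-1 (~690)
print(f"peak at {lam_peak:.1f} µm = {nu_peak:.0f} cm^-1")
```

That wavelength sits right next to the 667 cm⁻¹ CO2 bending-mode band, which is why the CO2 feature dominates the Vostok spectrum.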

The point has come up here, and repeatedly in the recent past, about the divergence of Hansen’s 1981 and 1988 predictions (except the no-CO2 scenario C) from reported temperatures. Not to mention that the GCMs used by the IPCC also seem to be diverging more and more from reported temps.

Given that climate scientists have been studying the climate with massive funding for over 30 years since Hansen’s 1981 predictions, and given that modelers are constantly trying to improve their models to include the newfound knowledge about various climate forcings, feedbacks, oscillations, etc. that resulted, it seems that there should have been SOME improvement in the predictions over the years. Yet they seem to just keep getting worse.

But another development over the last 30 years is that the climate industry, scientists and modelers alike, have become ever more certain that CO2 is the driver of climate, and that “it’s worse than we thought.”
What if that is the only reason the models are getting worse, not better?

Mind you, I am one who thinks the climate is too complex and chaotic to model. But that just means they should be generally bad. But it seems to me they are getting demonstrably worse. And all in the same direction.

That’s not the funny thought. This is.

Have any of the modelers, since Hansen’s scenario C in 1988, run their models for a zero impact of CO2? The funny thought is, what if we took all these doomsday models, told them to assume CO2 would have at most a negligible effect on temperature, and ran a series on each one. What would everyone say if the results suddenly got much closer to reality?

Now that would be funny. Or very, very sad, depending on your perspective.

Whenever we model, we run (reality − model), look at where it is failing, and alter the model to make (reality − model) smaller.
In climate science, they run (reality − model), look at where it is failing, and alter the past to make (reality − model) smaller.

This just in. The “mega-fires” are going to put climate deniers out of business. Invest in this little gem … go ahead … I dare you!

“SAN LUIS OBISPO, Calif. (MarketWatch) — New investment strategies, dead ahead. Not just for America’s 95 million investors. But for the climate-change deniers like Big Oil and the Koch brothers. The trigger: ‘megafires’ destroying treasures like our national parks. Time for a ‘mega-wakeup call.’

Act now, because the climate deniers will soon do a megashift and stop denying. Got that? Denialism will soon stop. End. So get out in front of this historic shift.”

The IPCC has been so wrong for so long that it is embarrassing to their own scientists as they plough on trying to meet their founding politicians’ objectives. The public no longer believes their stories of gloom and doom. Yet no one has the guts to say ‘scrap the IPCC’. The Australian leader of the Opposition once called global warming ‘crap’. Perhaps he was closer to the truth than he thought. Yes, the 20th and 21st centuries have yielded about 0.8 C of extra temperature, but you can get 3 C just by moving from Melbourne to Sydney!

Jim S | August 28, 2013 at 9:40 pm | “One possible explanation for the discrepancy is that forced and internal variation might combine differently in observations than in models.”

lol

This is science?

All of these analyses are what passes for science among the lysenkoclimatescientists; their only interest is in keeping alive the illusion that this is a scientific argument.

There was only one reason for the whole farce: the destruction of the coal industry by oil and nuclear interests, to which end they utilised the free energy of greenie emotions. And we have ended up with countless scenarios like this:

It is getting practically impossible not to conclude that all ‘climate scientists’ are either totally ignorant of the basic physics of the properties and processes of gases and energy, or are actively milking the punters paying for this, knowing full well that carbon dioxide cannot do what they say it does, anthropogenic or not.

The paper shows that the estimated observed global temperature change from 1998–2012 is between -0.03 and +0.13 °C per decade, mean 0.05 °C. Within the 95% confidence interval, it could be zero or negative.
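For readers wanting to see where such an interval comes from, here is a minimal ordinary-least-squares sketch on synthetic data. The series and its noise level are invented for illustration, and the interval assumes independent residuals; real surface temperature series are autocorrelated, which widens the interval beyond this naive calculation.

```python
import numpy as np
from scipy.stats import t


def ols_trend(years, anoms):
    """Least-squares slope (per year) and a naive 95% CI half-width,
    assuming independent, identically distributed residuals."""
    years = np.asarray(years, dtype=float)
    anoms = np.asarray(anoms, dtype=float)
    n = years.size
    x = years - years.mean()
    slope = (x @ anoms) / (x @ x)
    resid = anoms - anoms.mean() - slope * x
    se = np.sqrt((resid @ resid) / ((n - 2) * (x @ x)))
    return slope, t.ppf(0.975, n - 2) * se


# synthetic 1998-2012 anomalies: 0.005 C/yr trend plus noise (illustrative only)
rng = np.random.default_rng(0)
years = np.arange(1998, 2013)
anoms = 0.005 * (years - 1998) + rng.normal(0.0, 0.08, years.size)
slope, ci = ols_trend(years, anoms)
print(f"{10 * slope:+.2f} ± {10 * ci:.2f} °C per decade")
```

With only 15 noisy points the half-width is comparable to the trend itself, which is exactly why the 1998–2012 interval in the paper straddles zero.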

The take-home message for me is that there are no grounds for continuing expensive GHG emissions reduction programs. This is especially so in Australia, where the economic costs on a carbon-intensive economy are high (our main comparative advantage is in energy, minerals and energy-intensive metals processing; we provide a high proportion of world trade in such areas), and where any impact on eventual warming will be negligible. Australia’s policies in the last six years have been based on two demonstrably wrong propositions: (1) we can lead the world: show them the way and they will follow; and/or (2) the whole world will undertake serious emissions reduction programs, we can’t stand aside from that. No party in our imminent election acknowledges any of these points, they are all subject to CAGW brain-washing.

Faustino: You are right. Few people know that there has been no extra heating in the last 14 years. Even the Opposition has not had the courage to announce that. The minister for climate change should.

Taking into account CO2 changes, the 20-year trend of 0.14 °C per decade implies a sensitivity of 1 °C per doubling, while the climate models had an unsurprising 2 °C per doubling. However, just extending it back to 1980, including these same 20 years as a subset of the 32 years, the measured change is equivalent to 2 °C per doubling. Remarkable sensitivity to the span chosen, and I tend to trust longer spans to remove more natural variation in general. 1993 unfortunately bisects a significant period of warming, which is why there is a lot of sensitivity to starting dates.
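The arithmetic behind this kind of back-of-envelope estimate can be sketched as follows, assuming forcing logarithmic in CO2 concentration. The ppm values in the sanity check are a textbook pre-industrial doubling, not the actual 1993–2012 concentrations, and the estimate deliberately ignores lags, ocean heat uptake and non-CO2 forcings.

```python
from math import log


def implied_sensitivity(delta_t, c_start, c_end):
    """Warming per CO2 doubling implied by a temperature change delta_t (C)
    over a CO2 rise from c_start to c_end (ppm), assuming forcing is
    logarithmic in concentration. Back-of-envelope only: ignores lags,
    ocean heat uptake and non-CO2 forcings."""
    doublings = log(c_end / c_start) / log(2.0)
    return delta_t / doublings


# sanity check: 1 C of warming over one full doubling implies 1 C/doubling
print(implied_sensitivity(1.0, 280.0, 560.0))  # -> 1.0
```

Because the observed CO2 rise over any 15–20 year window is only a small fraction of a doubling, the implied sensitivity divides a small temperature change by a small number of doublings, which is why the answer swings so much with the chosen span.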

Maple leaves are a different shape than oak leaves. This isn’t controversial to botanists, but there’s politically charged perceptual interference in the climate discussion context, underscoring the darker side of human nature.

All statistical inference models are based on assumptions. A good mentor encourages students to pause reflectively to consider very carefully whether those assumptions are realistic. A realistic practitioner pursues due diagnostics to check the validity of assumptions underpinning stat inference.

Climate “science” does NOT pay due attention to taxonomy. If you skip the crucial sorting after it has been pointed out with crystal clarity countless times, a sensible, sober observer will be forced to conclude that you are ignorantly &/or deceptively (whether by naive accident or deliberate intent) leveraging statistical paradox (a more general category than Simpson’s Paradox) to maintain (perhaps for political purposes) a fatally razed narrative.

This paper by Fyfe et al. is a reminder to all to ignore the 95% confidence level of climate models, because it is meaningless. It’s not physical. Running many models many times does not make them more accurate. They can create a probability curve, but it’s just a mathematical game, nothing more. I can also create a probability curve with confidence intervals by tossing a die many times.
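The die-tossing point can be made literal. The interval below is perfectly well-defined mathematics, yet it says nothing about whether a die is a good model of anything physical; the sample size and seed are arbitrary choices for the illustration.

```python
import random
import statistics

random.seed(1)
rolls = [random.randint(1, 6) for _ in range(10_000)]

mean = statistics.fmean(rolls)
# normal-approximation 95% confidence interval for the mean (true mean is 3.5)
se = statistics.stdev(rolls) / len(rolls) ** 0.5
lo, hi = mean - 1.96 * se, mean + 1.96 * se
print(f"mean = {mean:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

The interval narrows as more rolls are added, exactly as a model ensemble’s spread narrows with more runs; neither tells you whether the underlying generator resembles the real climate.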