Why models can’t predict climate accurately

Dr Gavin Cawley, a computer modeler at the University of East Anglia, who posts as “dikranmarsupial”, is uncomfortable with my regular feature articles here at WUWT demonstrating the growing discrepancy between the rapid global warming predicted by the models and the far less exciting changes that actually happen in the real world.

He brings forward the following indictments, which I shall summarize and answer as I go:

1. The RSS satellite global temperature trend since 1996 is cherry-picked to show no statistically-discernible warming [+0.04 K]. One could also have picked some other period [say, 1979-1994: +0.05 K]. The trend on the full RSS dataset since 1979 is a lot higher if one takes the entire dataset [+0.44 K]. He says: “Cherry picking the interval to maximise the strength of the evidence in favour of your argument is bad statistics.”

The question I ask when compiling the monthly graph is this: “What is the earliest month from which the least-squares linear-regression temperature trend to the present does not exceed zero?” The answer, therefore, is not cherry-picked but calculated. It is currently September 1996 – a period of 17 years 6 months. Dr Pachauri, the IPCC’s climate-science chairman, admitted the 17-year Pause in Melbourne in February 2013 (though he has more recently got with the Party Line and has become a Pause Denier).
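For what it is worth, the calculation described here is mechanical enough to sketch in a few lines. The following is an illustrative reconstruction, not the author’s actual code; the toy anomaly series and all of its numbers are invented:

```python
import numpy as np

def pause_start(temps):
    """Return the index of the earliest month from which the
    least-squares trend of temps[i:] to the present does not exceed
    zero, or None if every window trends upward.
    temps: monthly anomalies, oldest first."""
    n = len(temps)
    for i in range(n - 2):
        slope = np.polyfit(np.arange(n - i), temps[i:], 1)[0]
        if slope <= 0:
            return i
    return None

# Toy anomaly series: 10 years of warming, then 5 years drifting
# very slightly downward.
toy = np.concatenate([np.linspace(0.0, 0.30, 120),
                      np.linspace(0.30, 0.29, 60)])
start = pause_start(toy)
```

The answer is calculated, not chosen: the function simply scans forward until the trailing trend first fails to be positive.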

2. “In the case of the ‘Pause’, the statistical test is straightforward. You just need to show that the observed trend is statistically inconsistent with a continuation of the trend in the preceding decades.”

No, I don’t. The significance of the long Pauses from 1979-1994 and again from 1996-date is that they tend to depress the long-run trend, which, on the entire dataset from 1979-date, is equivalent to a little over 1.2 K/century. In 1990 the IPCC predicted warming at 3 K/century. That was two and a half times the real-world rate observed since 1979. The IPCC has itself explicitly accepted the statistical implications of the Pause by cutting its mid-range near-term warming projection from 2.3 to 1.7 K/century between the pre-final and final drafts of AR5.

3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.”

One does not need anything as complex as a general-circulation model to explain observed temperature change. Dr Cawley may like to experiment with the time-integral of total solar irradiance across all relevant timescales. He will get a surprise. Besides, observed temperature change since 1950, when we might have begun to influence the warming trend, is well within natural variability. No explanation beyond natural variability is needed.

4. The evidence for an inconsistency between models and data is stronger than that for the existence of a pause, but neither is yet statistically significant.

Dr Hansen used to say one would need five years without warming to falsify his model. Five years without warming came and went. He said one would really need ten years. Ten years without warming came and went. The NOAA, in its State of the Climate report for 2008, said one would need 15 years. Fifteen years came and went. Ben Santer said, “Make that 17 years.” Seventeen years came and went. Now we’re told that even though the Pause has pushed the trend below the 95% significance threshold for very nearly all the models’ near-term projections, it is “not statistically significant”. Sorry – not buying.

5. If the models underestimate the magnitude of the ‘weather’ (e.g. by not predicting the Pause), the significance of the difference between the model mean and the observations is falsely inflated.

In Mark Twain’s words, “Climate is what you expect. Weather is what you get.” Strictly speaking one needs 60 years’ data to cancel the naturally-occurring influence of the cycles of the Pacific Decadal Oscillation. Let us take East Anglia’s own dataset: HadCRUT4. In the 60 years March 1953-February 2014 the warming trend was 0.7 K, equivalent to just 1.1 K/century. CO2 has been rising at the business-as-usual rate.

The IPCC’s mid-range business-as-usual projection, on its RCP 8.5 scenario, is for warming at 3.7 K/century from 2000-2100. The Pause means we won’t get 3.7 K warming this century unless the warming rate is 4.3 K/century from now to 2100. That is almost four times the observed trend of the past 60 years. One might well expect some growth in the so-far lacklustre warming rate as CO2 emissions continue to increase. But one needs a fanciful imagination (or a GCM) to pretend that we’re likely to see a near-quadrupling of the past 60 years’ warming rate over the next 88 years.
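The arithmetic behind the 4.3 K/century figure is simple to check. A back-of-envelope sketch, in which the 14 elapsed years and the zero warming realised since 2000 are assumptions for illustration:

```python
# Warming rate required to reach the RCP8.5 mid-range figure by 2100,
# given no warming so far this century (illustrative assumptions).
target_warming = 3.7      # K projected over 2000-2100
warmed_so_far = 0.0       # K assumed realised during the Pause
years_elapsed = 14        # 2000 to 2014
years_left = 100 - years_elapsed

required_rate = (target_warming - warmed_so_far) / years_left * 100  # K/century
print(round(required_rate, 1))  # 4.3
```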

6. It is better to understand the science than to reject the models, which are “the best method we currently have for reasoning about the effects of our (in)actions on future climate”.

No one is “rejecting” the models. However, they have accorded a substantially greater weighting to our warming influence than seems at all justifiable on the evidence to date. And Dr Cawley’s argument at this point is a common variant of the logical fallacy of arguing from ignorance. The correct question is not whether the models are the best method we have but whether, given their inherent limitations, they are – or can ever be – an adequate method of making predictions (and, so far, extravagantly excessive ones at that) on the basis of which the West is squandering $1 billion a day to no useful effect.

The answer to that question is No. Our knowledge of key processes – notably the behavior of clouds and aerosols – remains entirely insufficient. For example, a naturally-recurring (and unpredicted) reduction in cloud cover in just 18 years from 1983-2001 caused 2.9 Watts per square meter of radiative forcing. That natural forcing exceeded by more than a quarter the entire 2.3 W m⁻² anthropogenic forcing in the 262 years from 1750-2011 as published in the IPCC’s Fifth Assessment Report. Yet the models cannot correctly represent cloud forcings.

Then there are temperature feedbacks, which the models use to multiply the direct warming from greenhouse gases by 3. By this artifice, they contrive a problem out of a non-problem: for without strongly net-positive feedbacks the direct warming even from a quadrupling of today’s CO2 concentration would be a harmless 2.3 Cº.

But no feedback’s value can be directly measured, or theoretically inferred, or distinguished from that of any other feedback, or even distinguished from the forcing that triggered it. Yet the models pretend otherwise. They assume, for instance, that because the Clausius-Clapeyron relation establishes that the atmosphere can carry near-exponentially more water vapor as it warms it must do so. Yet some records, such as the ISCCP measurements, show water vapor declining. The models are also underestimating the cooling effect of evaporation threefold. And they are unable to account sufficiently for the heteroskedasticity evident even in the noise that overlies the signal.

But the key reason why the models will never be able to make policy-relevant predictions of future global temperature trends is that, mathematically speaking, the climate behaves as a chaotic object. A chaotic object has the following characteristics:

1. It is not random but deterministic. Every change in the climate happens for a reason.

2. It is aperiodic. Appearances of periodicity will occur in various elements of the climate, but closer inspection reveals that often the periods are not of equal length (Fig. 1).

3. It exhibits self-similarity at different scales. One can see this scalar self-similarity in the global temperature record (Fig. 1).

4. It is extremely sensitive to the most minuscule of perturbations in its initial conditions. This is the “butterfly effect”: a butterfly flaps its wings in the Amazon and rain falls on London (again).

5. Its evolution is inherently unpredictable, even by the most sophisticated of models, unless perfect knowledge of the initial conditions is available. With the climate, it’s not available.

Figure 1. Quasi-periodicity at 100,000,000-year, 100,000-year, 1000-year, and 100-year timescales, all showing cycles of lengths and magnitudes that vary unpredictably. Click each image to enlarge it.
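Point 4 above, extreme sensitivity to initial conditions, can be demonstrated with the simplest of chaotic objects, the logistic map. This is not a climate model, merely an illustration of the mathematical behavior:

```python
# Two trajectories of the logistic map x -> r*x*(1-x) at r = 4 (its
# chaotic regime), starting one part in a billion apart. The gap stays
# tiny at first, then the orbits decorrelate completely.
def logistic_orbit(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.4, 50)
b = logistic_orbit(0.4 + 1e-9, 50)

early_gap = abs(a[10] - b[10])                              # still minuscule
late_gap = max(abs(x - y) for x, y in zip(a[25:], b[25:]))  # macroscopic
```

A perturbation in the ninth decimal place, invisible for the first dozen steps, ends up dominating the trajectory: the butterfly effect in miniature.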

Not every variable in a chaotic object will behave chaotically: nor will the object as a whole behave chaotically under all conditions. I had great difficulty explaining this to the vice-chancellor of East Anglia and his head of research when I visited them a couple of years ago. When I mentioned the aperiodicity that is a characteristic of a chaotic object, the head of research sneered that it was possible to predict reliably that summer would be warmer than winter. So it is: but that fact does not render the climate object predictable.

By the same token, it would not be right to pray in aid the manifest chaoticity with which the climate object behaves as a pretext for denying that we can expect or predict that any warming will occur if we add greenhouse gases to the atmosphere. Some warming is to be expected. However, it is by now self-evident that trying to determine how much warming we can expect on the basis of outputs from general-circulation models is futile. They have gotten it too wrong for too long, and at unacceptable cost.

The simplest way to determine climate sensitivity is to run the experiment. We have been doing that since 1950. The answer to date is a warming trend so far below what the models have predicted that the probability of major warming diminishes by the month. The real world exists, and we who live in it will not indefinitely throw money at modelers to model what the models have failed to model: for models cannot predict future warming trends to anything like a sufficient resolution or accuracy to justify shutting down the West.

“Besides, observed temperature change since 1950, when we might have begun to influence the warming trend, is well within natural variability.”

There is a pair of temperature graphs from the 20th century which show nearly indistinguishable rates of temperature change. One is from before 1950, so must be natural variability. Anyone remember where those graphs are available? (Sure, I can recreate them, but prefer to give credit to the original article.)

As I understand it, supposedly 95% of the greenhouse effect is due to water vapor, the remaining 5% from methane, CO2, ozone and other chemicals. If this is true, why would anyone be surprised that models which primarily attempt to explain warming by examining CO2 and methane (and aerosols) are not working? Seems to me the slightest changes in water vapor behavior would completely overwhelm any effects of changes in CO2 and methane concentrations.

You forgot to mention non-stationarity. In statistics, stationarity means the underlying probability distribution stays the same over time; non-stationary means the distribution changes over time. When analyzing time-series data it is a problem if the underlying system changes, in effect changing the data distribution. Earth’s climate system does change over time, with respect to albedo, circulation, particulates, etc. These changes modify the climate engine, changing how it works and making accurate modeling practically impossible.
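The commenter’s point about a drifting distribution can be illustrated with synthetic data (all numbers invented, purely a sketch):

```python
import numpy as np

# A stationary series versus one whose mean drifts over time.
rng = np.random.default_rng(0)
stationary = rng.normal(0.0, 1.0, 1000)
drifting = rng.normal(0.0, 1.0, 1000) + np.linspace(0.0, 2.0, 1000)

def half_means(x):
    """Mean of the first and second halves of a series."""
    mid = len(x) // 2
    return x[:mid].mean(), x[mid:].mean()

s1, s2 = half_means(stationary)   # nearly equal
d1, d2 = half_means(drifting)     # second half sits about 1 higher
```

Any statistic estimated from the first half of the drifting series is a biased guide to the second half, which is exactly the analyst’s problem when the underlying system changes.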

3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.”

I’m sorry, but the proper response to this nonsense is “So #$%^ing what?”

Alarmist GCMs do not verify. The relative performance or even existence of any other GCM is absolutely irrelevant to that question. Claiming otherwise is a Tu Quoque fallacy, and a decidedly unscientific thing to say.

6. It is better to understand the science than to reject the models, which are “the best method we currently have for reasoning about the effects of our (in)actions on future climate”.

LOL. It is understanding science, specifically the science of modeling and the epistemology of science generally, that causes one to reject these models.

No matter what the alternative models might be, it is evident that the models from the IPCC have produced significant and biased errors when their forecasts and the future are compared. Given that these models are being used to justify extraordinary costs and efforts, it should be required that they at least do not produce biased predictions. End of topic.

Dikranmarsupial cannot be Dr Gavin Cawley. I have never met the good mild-mannered doctor with his refined manners and steel-trap mind. I have, however, encountered the marsupial in its native habitats at Skeptical Science and Tamino’s, where its behaviour and language lead the bunch. It might be a big brother to the little Australian island marsupial with bad manners. Thank you for tanning its Hyde.

This drivel regarding linear trends just gets my goat and for me epitomises the stupidity of so much of the so called research regarding climate change.

Fitting linear trends to a set of data is fine if you have no knowledge of any higher order trend behaviour of the data, i.e. you do not understand the mechanism. If you do have higher order insight into the mechanism and know or reasonably suspect it contains cyclical or other non linear elements then using a linear fit is simply puerile beyond a certain simplistic point.

I once saw a paper that purported to show an uptrend in sea-level rise. It had a fairly long data set of local sea level oscillating, and when compared with the PDO there was a correlation. The PDO oscillation was sine-wave-like, and the pattern was plain as day when you looked at the graph. So what did this paper do? It ran a linear regression and discerned an uptrend; ergo, global warming was causing sea-level rise. The problem? The data set started in a “trough” and finished near a “crest”, so the “uptrend” was a construct not of global warming but of mismatching linear mathematics with sinusoidal data. They would have got essentially the same result if the data had conformed to a pure sine wave, i.e. by definition had zero uptrend.
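The trap the commenter describes is easy to reproduce with synthetic data: fit a straight line to half a cycle of a pure sine wave, trough to crest, and least squares reports a healthy “trend”, even though a full cycle has none by construction. A sketch:

```python
import numpy as np

t_half = np.linspace(0.0, 0.5, 200)     # trough-to-crest window
t_full = np.linspace(0.0, 1.0, 400)     # one complete cycle

y_half = -np.cos(2 * np.pi * t_half)    # rises from -1 to +1
y_full = -np.cos(2 * np.pi * t_full)    # zero net trend by construction

half_slope = np.polyfit(t_half, y_half, 1)[0]   # strongly positive: spurious
full_slope = np.polyfit(t_full, y_full, 1)[0]   # essentially zero
```

The “uptrend” appears or vanishes depending solely on where the sampling window starts and ends relative to the oscillation.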

This sort of stuff just goes to the calibre of people involved in the work.

Please spare me the linear trend crap because it is as vulnerable to the cherry picking argument as cherry picking sinusoidal data.

I believe the models get it so wrong because they are tuned and constructed to replicate the warming from 1980 to 2000, which they do very well.
Unfortunately the modellers have made it so complicated that no one dares to start over again, so they cling to their own bad assumptions, and the only thing they do is build ever more layers on top to disguise the failings.

There are a lot of simulations in use for a lot of physical phenomena, but in every case that I know of where simulations produce useful results, the underlying physics is understood, but the equations are too difficult to solve given the boundary conditions. Climate science does not understand the underlying physics (any pretense to the contrary is laughable), nor are the boundary conditions known in anything like the detail required. We are many years away from the equations being known, but too hard to solve. Modelers are wandering in the dark. It’s pretty hard to incorporate what you don’t know in your models.

I was particularly amused by the assertion that “it’s the best thing we have”. Bleeding was the best way we had of treating most everything in the 16th century. That hardly made it right.

What the climate establishment really doesn’t want to say is what the real state of the science is. Their funders think it is entirely different. Eventually, the truth will out, despite everyone’s best efforts. Our task is to keep them from doing something stupid before that happens.

As soon as I saw the name “Cawley”, the association melan-cawley, or perhaps watermelon-cawley, came to mind (sorry).

As son of mulder says — erm, pedants :) — once a computer starts processing the (probably kludged; see the ?readme from CG1) code, the numbers acquire rounding errors. I would go even further and say that the initial state is an approximation, because of differences between binary and decimal representation.
Further to that, with all of the infilling of missing data points, real accuracy (pardon the pun) is impossible. Many years ago I had a rather picky but good professor who insisted on “working the units” (doing a reasonableness analysis) before even starting work on a problem (which shows that I’m from the slide-rule era…). He was also very careful about accuracy and precision, and the misuse thereof (yep, right, show me a thermometer with four digits….).

Dodgy Geezer: “The word is not difficult to understand once you realise that a proper education includes Classical Greek…”

I’m pretty sure that most people who know what “heteroskedastic” means didn’t take Greek–and vice versa. (And knowing Latin wouldn’t have given me a clue to what, e.g., “nisi prius” means as a legal term.)

I consider knowing how to say “faithful companions” or “wine-dark sea” in Homeric to be a relic of a misspent youth; don’t let your grandchildren take dead languages.

Melord quoth: It is extremely sensitive to the most minuscule of perturbations in its initial conditions. This is the “butterfly effect”: a butterfly flaps its wings in the Amazon and rain falls on London (again).

I think there is a better analogy. Think back to your childhood and the games we played in the playground:

What those models do is play crack the whip with your inputs.

The first kid in line makes a small change in course. By the time the last kid reaches that point, he is flying through the air. And that’s what happens when employing the bottom-to-top type of model we have seen applied: a small error goes in one end and comes out the other magnified manyfold. This is an inherent problem with all such models.

As an erstwhile wargame designer I have seen this play out many a time.

A top-down model, while far more rude and crude, does not go off the rails in such a manner.

The lesson here is if you want to design a reasonable simulation of controlled chaos, say, the Eastern Front, you start with armies and army groups and work your way on down (if at all). You (most emphatically) do NOT start out with a man to man simulation, where the design of a machine-gun barrel winds up (spuriously) turning defeat into victory.

“No skeptic has made a GCM that can explain the observed climate using only natural forcings.”

Maybe that’s because no sceptic would be foolish enough to try with current technology that cannot run the CFD in the vertical dimension in the global resolution required?

When Callendar tried in 1938 to revive the idea that adding radiative gases to the atmosphere would reduce the atmosphere’s radiative cooling ability, Sir George Simpson had this to say:

“..but he would like to mention a few points which Mr. Callendar might wish to reconsider. In the first place he thought it was not sufficiently realised by non-meteorologists who came for the first time to help the Society in its study, that it was impossible to solve the problem of the temperature distribution in the atmosphere by working out the radiation. The atmosphere was not in a state of radiative equilibrium, and it also received heat by transfer from one part to another. In the second place, one had to remember that the temperature distribution in the atmosphere was determined almost entirely by the movement of the air up and down. This forced the atmosphere into a temperature distribution which was quite out of balance with the radiation. One could not, therefore, calculate the effect of changing any one factor in the atmosphere..”

Still as true today as in 1938. If the models cannot properly model all non-radiative transports, then they cannot work. But climastrologists would not dare try to model non-radiative transports properly, because that would reveal that the net effect of our radiative atmosphere over the oceans is to cool the oceans. That would defeat the true purpose of the models, which is to serve as propaganda tools.

The IPCC models beg the question. They have coded in that adding CO2 to the atmosphere causes warming so that is what their computer predictions show but Nature shows otherwise.

Let us reason within the context of the greenhouse-effect theory.

AGW is based on the idea that adding CO2 to the atmosphere causes its radiative thermal insulation properties to increase because of CO2’s LWIR absorption bands. The insulation causes a restriction in radiative heat flow which results in warming in the lower atmosphere and cooling in the upper atmosphere where the earth radiates to space in the LWIR. The warming in the lower atmosphere causes more H2O to enter the atmosphere, which results in more warming because H2O is also a greenhouse gas with LWIR absorption bands. This mechanism provides a positive feedback. The results of added insulation and positive H2O feedback are modeled as if they were another heat source, but that is not what really happens in the Earth’s atmosphere.

Besides being a greenhouse gas, H2O is a primary coolant in the Earth’s atmosphere, moving heat from the surface to where clouds form via the heat of vaporization. More heat is moved in this manner than by LWIR absorption-band radiation from the surface and convection combined. So more H2O means that more heat is moved, which is a negative feedback that is not factored into the IPCC’s models.

More H2O means that more clouds form. Clouds not only reflect incoming solar energy but provide a more efficient LWIR radiator to space than the clear atmosphere they replace. Clouds provide another negative feedback that the IPCC models have ignored.

As the increased insulation warms the lower atmosphere, it cools the upper atmosphere. According to greenhouse-effect theory, from space the Earth looks like a 0 °F black body radiating at an equivalent altitude of 17,000 feet. But there is no black body radiating to space at 17,000 feet. Because of the low emissivity of the atmosphere we are really talking about grey bodies radiating at higher temperatures and hence lower altitudes. It is these lower altitudes, where the actual radiation takes place, that form the cold end of the radiative thermal insulation, so the upper atmosphere I speak of is well within the troposphere. The cooling in the upper atmosphere causes less H2O to appear, which counteracts the addition of more CO2 and provides still another negative feedback.

H2O provides negative feedbacks to the addition of greenhouse gases, which mitigates their possible effect. Negative feedback makes a system inherently stable. The Earth’s climate has been inherently stable to changes in greenhouse gases long enough for life to evolve. We are here. The IPCC models do not include the negative feedbacks, so they are wrong and hence their results have been wrong. It is all that simple.

4. It is extremely sensitive to the most minuscule of perturbations in its initial conditions. This is the “butterfly effect”: a butterfly flaps its wings in the Amazon and rain falls on London (again).

A short answer to the question of why the models can’t predict accurately is that they don’t predict at all. As I’m using the term a “prediction” is an extrapolation across a specified time interval between an observed state of nature and an unobserved but observable state of nature. For example, it is an extrapolation from the state “cloudy” to the state “rain in the next 24 hours.” Observation of the observed state provides the user of the associated model with information about the unobserved state. It is this information that makes it possible to control the associated system.

Each of the two states belongs to a collection of mutually exclusive, collectively exhaustive states that is called a “state-space.” A pairing of a state from each state-space describes an event. For the global-warming models of today, there are no states, state-spaces, events or specified time intervals. The user of the model is provided with no information. Thus, using existing climate models, control of the climate system is not possible.

AnonyMoose says:
April 2, 2014 at 1:46 pm
There is a pair of temperature graphs from the 20th century which show nearly indistinguishable rates of temperature change. One is from before 1950, so must be natural variability.

I spent about 4 hours yesterday in my local Barnes and Noble Booksellers perusing a copy of the book “The Unpersuadables” by Will Storr while sipping Starbucks triple venti cappuccinos.

There are a dozen or so pages in the book dedicated to Christopher Monckton as a “famous skeptic” about climate. Those pages include mostly background on Monckton and then a brief account of Storr’s interview with Monckton.

Storr casts the context about discussing Monckton in a socio-political-inheritable way, not a scientific way. No science was formally addressed on climate.

Storr basically claims to show that Monckton had to be the way he was acting / thinking and was “unpersuadable”. I was not persuaded by Storr that Monckton was “unpersuadable”. : )

According to the 2009 Trenberth ‘Energy Budget’, the IPCC modellers exaggerate the real GHE by a factor of 3 and the real surface mean heat transfer to the atmosphere by the same factor.

To offset this excess warming, they apply incorrect physics at the top of the atmosphere to cool the upper atmosphere. Then in ‘hind-casting’, they claim about 25% more low level cloud ‘reflection’ of solar energy than reality.

These shenanigans have the effect of making the sunlit part of the oceans much warmer and the cloudy bits colder, hence no average temperature rise compared with measured data. However, because water evaporation rate increases exponentially with temperature, the result is to create the imaginary ‘positive feedback’, needed to give the 3x real GHE.

It’s a clever fraud designed to meet the demands of the politicians and the Mafia who own renewables and carbon trading, for a way to con the Public.

3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.”
=============================
Just take a model, any model and remove the CO2 fudge factor, I reckon the accuracy of the model will improve somewhat, maybe by about 97%

“No skeptic has made a GCM that can explain the observed climate using only natural forcings.”

What an incredibly stupid statement.

I would comment – No climate scientist has made a GCM that can yet explain the observed climate! No matter what smoke and mirrors they put up, the climate modelling does not match observations! I.e. they do not work!

To be accurate, I am rejecting reductionist computational general-circulation climate models. I am rejecting them because they are trying to simulate a single run of a unique entity (terrestrial climate) with no adequate physical understanding of the general class to which said entity belongs.

The class is that of irreproducible quasi-stationary non-equilibrium thermodynamic systems. Some members of this class could be studied experimentally in the lab, but that was never done. Also, no entity belonging to this class has ever had a successful computational model.

A system is irreproducible if microstates belonging to the same macrostate can evolve into different macrostates in a short time, which is certainly the case with chaos. For such systems not even a straightforward definition of Jaynes entropy is known; therefore doing theoretical thermodynamics on them is premature.

However, all is not lost, there is tantalizing evidence of unexplained symmetries in the climate system. One could do experiments in the lab to see if it is a general property of such systems and if it is related to some variational principle. That’s how science is supposed to work.

Until such time, saying that models built along the current paradigm are “the best method we currently have for reasoning about the effects of our (in)actions on future climate” is plain silly. If the best we have is inadequate, we have nothing to work with.

92-94% of all CO2 comes from volcanoes, active and dead. All readings close to volcanoes, such as in Hawaii, come only from instruments placed there by volcano experts who use the figures to calculate the next eruption. The rest of the CO2 comes almost entirely from natural sources.

As for computer models (I am an educated systems programmer as well as a teacher in Geography (including Geology), History and some other subjects): computer models can never be better than the skill of the systems programmer, and only then if all the factors/variables at hand in real life — at least 43 to take into account — as well as correct (not “corrected”) figures are what is used in the program/model. (I used 43 variables, including underwater streams etc., when I wrote a program in the early 1990s to establish the correct sea level in the oceans from the Stone Age up to 1000 AD.) As for today’s so-called models — well, none of them would have passed an exam 30-40 years back. They have forgotten all Theory of Science… as we said in the old days: bad input, bad output.

Stupendus says:
April 2, 2014 at 3:24 pm
“Just take a model, any model and remove the CO2 fudge factor, I reckon the accuracy of the model will improve somewhat, maybe by about 97%”
——————————————————–
I believe this would be workable. Of course pressing “Delete” on the whole file would also improve accuracy and have the added benefit of reducing cost to the taxpayer by over 97%.

Still, it looks like the old ones of Chaco Canyon, New Mexico had a better handle on knowing the weather long-term than this Cawley one or Mike Mann et al. All they had was some curved stones, life out in the weather and the talkers from the past.

Sun comes up sun goes down.
Rain Comes, Snow comes, hot comes , cold comes, sometimes more, sometimes less.
Repeat long term, short term, very long term, then the very, very long terms come and its all new once more.

Rule #2……Obtain the same data source report, from the next previous month, and record that as “Initial Month Datum”.

Rule #3…..CALCULATE by standard statistical mathematical protocol, the value of the trend between the initial Month Datum, and the Final month Datum; and the statistical standard deviation for that trend value.

Rule #4…..If the calculated value for the trend is statistically different from zero; as indicated by the standard deviation value, go to END.

Rule #5…..If the calculated value for the trend is statistically equal to zero, based on the calculated standard deviation, jump to Rule #2.

END… subtract the month number for Initial Month Datum, from the month number for Final Month Datum.

Report the result at END to WUWT, and assert identity, with Monckton of Brenchley.
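The stepping rules above can be sketched in code. This is an illustrative reconstruction: the rules do not name the exact significance test, so a simple two-standard-error criterion on the least-squares slope is assumed here, and the toy series is invented:

```python
import numpy as np

def months_of_zero_trend(series):
    """Sketch of the stepping rules: starting from the latest month,
    extend the window back one month at a time; stop when the
    least-squares trend over the window differs from zero by more
    than twice its standard error, and report how long the trend
    stayed statistically indistinguishable from zero."""
    n = len(series)
    for back in range(3, n + 1):
        y = np.asarray(series[n - back:], dtype=float)
        x = np.arange(back, dtype=float)
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (slope * x + intercept)
        se = np.sqrt(resid @ resid / (back - 2) / ((x - x.mean()) ** 2).sum())
        if se > 0 and abs(slope) > 2 * se:
            return back - 1          # last window still "statistically zero"
    return n

# Toy series: 60 months of strong warming, then 60 essentially flat
# months (a small alternating wiggle keeps the standard error nonzero).
rise = np.linspace(0.0, 1.0, 60)
wiggle = 0.01 * np.tile([1.0, -1.0], 30)
toy = np.concatenate([rise, 1.0 + wiggle])
months = months_of_zero_trend(toy)
```

On the toy series the reported length covers the whole flat stretch and only ends once the window reaches back into the earlier warming.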

It is better to understand the science. And the science says to reject the models. So I disagree with Christopher Monckton. Some do indeed reject the models because they are useless. Now that does not say that “models” will never be useful. However, the ones in use today suffer from an extreme bias of a political nature that renders them useless.

Christopher Monckton of Brenchley
. . .
3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.”

One does not need anything as complex as a general-circulation model to explain observed temperature change. Dr Cawley may like to experiment with the time-integral of total solar irradiance across all relevant timescales. He will get a surprise. Besides, observed temperature change since 1950, when we might have begun to influence the warming trend, is well within natural variability. No explanation beyond natural variability is needed.

It would be interesting if the author could expand upon how this time-integral process works. To support the claim “no explanation beyond natural variability is needed” one has to do some credible attribution; otherwise the proper claim is “we don’t know what’s going on”. While that may well be the case, one can’t then rule out a significant anthropogenic driver.

It would also be interesting if the author could comment on how to reconcile this with the results of Kosaka and Xie (2013), which suggest an attribution consisting of both a specifically identified natural-variation component and an anthropogenic component over the relevant period.

“No skeptic has made a GCM that can explain the observed climate using only natural forcings.”

I’ve never tried to model the exact number of angels that will fit on the head of a pin either.

The observed climate can be explained simply; it’s awful dang chaotic, changes slowly (except when it changes abruptly), and there is so much noise (weather) superimposed on top of the climate that observing it is the only sensible thing that folks should be doing.

The full RSS MSU plots of anomaly from 1979 to 2014 show zero anomaly around 1980.
The current anomaly is about 0.2 degrees higher, 34 years later.
Note that anomaly values show a month-to-month variability that can easily reach 0.2 degrees, not that the real Earth actually changes by 0.2 degrees in a month.
Annual values can vary by more. The point here is that an anomaly change of 0.2 degrees over 34 years is not significant. UAH shows the same.
It does not matter what the models claim: actual global temperatures, measured with scientific integrity over 34 years, show no significant change.

Svend Ferdinandsen says:
I believe the models get it so wrong because they are tuned and constructed to replicate the warming from 1980 to 2000, which they do very well.

Well, they don’t do it that well. There are a lot of different ways to combine the many inputs and frig factors to reproduce the general wiggles of a short period. The problem is that the result does not project backwards or forwards into anything resembling the real world.

I’ve recently discovered that Lacis et al. (part of Hansen’s team at GISS) published a quite thorough paper in 1992 that had volcanic forcing considerably stronger than they now attribute to it, based on simple direct physics. More recently they’ve watered it down to try to get the data to fit their models!

In fact the earlier figures fit Mt. Pinatubo much better but require a recognition of the strong negative feedback in the tropics.

That also reveals a strong warming climate reaction that runs at least until the 1998 El Nino.

The warming they are trying to attribute to CO2 is a climate kickback to recover the energy deficit caused by major volcanoes.

To fix the models they just need to play with all the frig factors:

Put volcanic aerosol forcing back to what they properly calculated it to be in 1992 (optical density * 31 W/m2).

That matches the top-of-atmosphere energy budget measured by ERBE. Then temps stop rising and the models work. The CO2 problem disappears in a puff of colourless, odourless, non-toxic gas, since it too is reduced by 90% by cloud feedbacks.

That third graph is most remarkable. Not only does it show that what we are experiencing now is nothing new, I am amazed that the earth’s temperature has varied by only one degree (Celsius) over the last 2,000 years. The Earth’s climate is remarkably stable, all things considered.

It is clear that these pseudo-scientists doing modeling have not the slightest idea of what to do with scientific measurements. If you get obviously inaccurate results, what you do is find out the reason, make a correction, and try again. There is no evidence that anything of this sort has been done in the 24 years since Hansen first tried to model future “business as usual” temperatures. We know that his predictions have been way off, but year after year new predictions come out, more money is spent on supercomputers, the number of predictions skyrockets, and yet none of them work. After 24 years, their predictions are no better than Hansen’s first.

As an administrator, I would decide that after 24 years of trying to make it work, it is not working, cut my losses, and close down the enterprise. As a scientist, I would decide that after 24 years of trying, it is clear either that it is impossible to make it work or that the personnel are simply incompetent to handle the task. In either case, my decision would also be to shut it down, to stop the flow of erroneous predictions into global climate forecasts.

As a neutral outside observer, I seriously suggest shutting down the climate modeling arm, selling the hardware, and firing the personnel. The latter is common practice in business, and in government as well. When Nixon canceled the last three moon shots the prime contractor for the Apollo Lunar Lander Module was forced to lay off ten thousand men within a month. That was unjust, but laying off those non-performing modelers would serve the cause of justice and improve climate forecasts.

I too build predictive models as a career. I need to predict the chemical and physical stability of new drug products and set product specifications and expiration dates. We conduct lengthy scientific studies as well as required “formal stability studies” and all data and models are freely available to any agency where we are filing the product.

And in my professional opinion as a modeler of chemical reactions, climate models are a giant failure. They explain nothing and predict nothing. You don’t need a degree to see they fail. I don’t see what the AGW crowd is going on about. Your models are junk, get over it, it happens. They look foolish trying to defend them.

“The simplest way to determine climate sensitivity is to run the experiment. ”

Yes, I think the only way the IPCC and other alarmists will tone down their models and alarmist predictions is to give the climate another few decades or so to run the experiment. If temperatures don’t rise much in the next few decades, or even cool (which is my take on the data), this will show what effects things like high solar activity in the 20th century, clouds and the PDO had on the warming in the late 20th century (and indeed since the LIA). Then they will finally come around, the various paradigms will be replaced, and to save face we can thank them for stimulating debate, their ‘excellent’ research that led to the advance of science, etc.

Repeating the same action again and again, while expecting different results, is stupidity.

Running a climate model again and again with the same initial conditions, getting different results each time, then averaging the results to get a “projection” is both mathematically and scientifically unsound.

Averaging the results of an ensemble of models whose output has already been averaged, to get a “more reliable” final projection is much more than unsound – it’s scientific fraud.
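Mr Price’s complaint about re-running a model “with the same initial conditions, getting different results each time” can be tested on a toy system. A hedged sketch, using the logistic map purely as a stand-in for a GCM (my own illustrative choice, not anything from the comment): identical starts repeat bit-for-bit, while a perturbation of one part in a trillion eventually destroys the trajectory.

```python
def logistic_traj(x0, n=200, r=3.9):
    """Iterate the chaotic logistic map x -> r*x*(1-x), returning the whole path."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

# Two runs from IDENTICAL initial conditions: a deterministic model repeats exactly.
same = logistic_traj(0.3) == logistic_traj(0.3)

# Perturb the start by one part in a trillion: the two paths eventually diverge
# completely, which is the sensitivity climate modelers exploit to build ensembles.
gap = max(abs(a - b) for a, b in zip(logistic_traj(0.3), logistic_traj(0.3 + 1e-12)))
```

So “different results each time” can only come from deliberately perturbed starts (or changed code), not from literally identical runs, which is consistent with the reply further down the thread.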

No … GCM that can explain the observed climate using only natural forcings.
==============
This is completely false and Dr Gavin Cawley should know this.

The IPCC spaghetti graph clearly has some model runs that show no temperature increase, consistent with the pause. Thus the IPCC models themselves are telling us that the pause is within the natural variability predicted by the models.

Look at the IPCC spaghetti graph. The spread between the top and bottom models runs is the models themselves predicting natural variability. They are telling us that climate may follow the lowest results or the highest result, without any change in forcings.

The IPCC is being dishonest in saying that climate will follow the mean. The spaghetti graph is telling us that even the models think climate is highly variable without the slightest change in forcings.

But rather than listen to the models, the IPCC constructs an artificial model mean, and tries to sell this as future climate.

Simple, “…winter cool to colder, spring – a bit warmer, summer a bit warmer, autumn, cooling. Unfortunately we can not give exact dates when these changes will happen but they are approximate with vast variations expected below or above norm.”

No one can exactly predict climate, but weather can be predicted to a point, with cloud cover and expected rainfall observed by radar and satellite, and with warnings for cyclones, hurricanes, tsunamis and volcanic eruptions. Earthquakes? Yet when I regularly check the weather forecast with BOM or Essential Energy – Stormtracker, we often miss out on expected storms.

Climate is what we expect, weather is what we get! Keep repeating this, and yell it out aloud.

4. It is extremely sensitive to the most minuscule of perturbations in its initial conditions. This is the “butterfly effect”: a butterfly flaps its wings in the Amazon and rain falls on London (again).

That’s fun, but usually when a butterfly flaps its wings, the wave has been overwhelmed by turbulence within 10 inches or so of the butterfly. Chaotic models generally produce oscillations within a range for long periods into the future, but they are simply not perfectly periodic, and seldom explosive (the name for the effect quoted in italics). The large Lyapunov exponents make a chaotic model amplify the unknown error in the initial conditions and parameter estimates into unpredictable turbulence much more quickly than a periodic function would.

There are models of chaotic phenomena, heartbeat and breathing for example, where forecasts are reasonably accurate several cycles in advance. If the climate has “cycles” of about 60 years, there is no intrinsic reason why a chaotic model can not reasonably accurately predict the distribution of the weather (mean, variance, quartiles, 5% and 95% quantiles) 200 years into the future. That they don’t do so yet is evidence that they don’t do so yet, not that they can’t ever do so.
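The distinction drawn here between forecasting a trajectory and forecasting its distribution can be made concrete. A minimal sketch, using the logistic map purely as a toy chaotic system (an assumption for illustration, not a climate model): the long-run mean is insensitive to the starting point even though individual steps are unpredictable.

```python
def climate_mean(x0, n=200000, r=3.9, burn=1000):
    """Long-run average of the chaotic logistic map: individual steps
    ('weather') are unpredictable, but the time-average ('climate')
    settles onto the invariant distribution regardless of the start."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        total += x
    return total / n

m_a = climate_mean(0.3)
m_b = climate_mean(0.7)            # very different start, nearly identical mean
```

Whether real climate statistics are similarly predictable at 200 years is, as the comment says, an open question; the toy only shows there is no intrinsic bar.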

Attempts at modeling the climate at long range are hampered by the severe shortage of independent observed events; for example, there are no such events going back 200 years. I imagine that this is not a factor in studies of heartbeat or breathing.

Posted on ATTP re RP JUN but relevant to time frames with computer models so I thought I would add it here
“There is a difference between trending and truth, and a difference between probability and truth.
You may well be able to point out a trend in 20-50 years, heck there will always be a trend up,down or flat. But imputing significance to it is another matter.
A discernible upward trend in that time interval is only 10% likely to be correct, 90% likely to be wrong.
Given 98 years, on your figures, you are 50% likely to be right; at 247 years you are 95% likely to be right. Your words.
If we extrapolate this to surface temps for Marko we could say that the IPCC is 90% wrong to be advocating action on climate change based on a small 20-50 year trend in temperature changes particularly when the trend is now flattening rapidly due to the pause”.
Christopher, is this concept (that a 20-50 year trend is only 10% likely to be right, and 90% likely to be wrong) correct, and can you use it in your forays?

“When Nixon canceled the last three moon shots the prime contractor for the Apollo Lunar Lander Module was forced to lay off ten thousand men within a month.”

I could be mistaken, but I believe it was the US House of Representatives that withdrew the funding for the last three moon shots. Back then they controlled the “purse strings”.

After a while one of the top NASA officials observed; “It was probably a good thing we stopped when we did, before some more people (re: Apollo 1) got killed” (paraphrased by myself).

The Apollo missions were indeed amazing. But they also benefited from a string of relatively good luck. The more times you try to do something incredibly complex and risky, the more likely it is to fail in a spectacular manner; just simple statistics. Yes, the Apollo 13 crew made it back safely, but just barely.

My father ran trains and locomotives for a railroad, as a youngster starting out he came up to a “stop sign” (red STOP SIGNAL in RR parlance) quite fast and managed to stop the many thousand ton train JUST before the signal. The “old head” training him got out, walked up to the signal, looked around and said; “Wow, that’s pretty impressive, how many times do you think you can do that ???” Dad gave up his “hot-rodding” ways and went on to a safe 50 year career on the railroad without causing a single fatality.

The entire exercise of trend matching is foolishness. There is a downward trend from the Eocene, an upward trend from the LGM. There have been so many ups and downs that we have no meaningful way to evaluate these trends. We really have to get beyond trends and start digging into processes. The problem for my old SKS buddy Dikran et al is that the deeper one digs into processes, the more trends become the only game in town.

evanmjones writes: “You (most emphatically) do NOT start out with a man to man simulation, where the design of a machine-gun barrel winds up (spuriously) turning defeat into victory.” I don’t have any particular expertise in wargame modeling, so if I’m completely wrong here, just say so and no hard feelings. What strikes me about your interesting post is whether that barrel design can actually turn defeat into victory over a wide range of top-down inputs. (For example, if one side has a huge, well-equipped modern army and the other is on horses with swords, we wouldn’t expect the design of a barrel to affect things much; but for relatively evenly matched armies, is there a bottom-up effect in the real world?) If barrel design can have that effect, that means, I think, that reality acts like a bottom-up model and is intractable from a computational viewpoint over a wide range of top-down inputs. And for things like big wars, we really don’t have the controlled data to tell us whether bottom-up effects are meaningful, and over what range of top-down inputs. How would one even go about answering that question? The same questions apply to climate models, I think.

ossqs says (5:58 pm)
“… For that matter, how many actually have access to view that code.”

Well everyone, I think. I downloaded a copy of one of the GCMs a couple of years ago and looked through it. It was a big pile of Fortran code with lots of changes made (with the old code left in, but commented out). When I saw that one of the parameters controlling an equation had THE SIGN CHANGED (not just the value) I decided that it was all a bunch of crap and haven’t wasted much time worrying about the sacred models since. Oh yeah, it was obvious from the history comments that it was originally written by James Hansen — which probably explains why he was always so enamored of it.

” The RSS satellite global temperature trend since 1996 is cherry-picked to show no statistically-discernible warming [+0.04 K]. One could also have picked some other period [say, 1979-1994: +0.05 K]. The trend on the full RSS dataset since 1979 is a lot higher if one takes the entire dataset [+0.44 K].”

Is this accurate? If it is, doesn’t this mean that something like 80% of all the warming in the RSS data set took place in just 3 years (1994-1996)? Can you say “step change”?

Again the good Lord uses big scientific/mathematical words that he doesn’t fully understand.
No doubt they give him a warm feeling but they don’t lift the Highland fog.
The fact that Cawley doesn’t know what he is talking about makes the Lord’s verbosity especially meretricious.

3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.”

Hang on – why SHOULD skeptics have to produce such a GCM? Skeptics aren’t the ones demanding massive social, political and economic disruption. If someone wants me to take such a hit to my standard of living, they had better have all their scientific ducks in a row. Computer-assisted guessing isn’t going to cut it, especially when the guesses are diverging further and further from real-world observations.

Dr. Doug L. Hoffman says:
April 2, 2014 at 1:56 pm
You forgot to mention non-stationarity. In statistics, stationarity has to do with the underlying probability distribution being the same over time. So non-stationary means that the distribution is changing over time.

The last section on chaotic nonlinearity in climate is a golden nugget hidden at the end of Brenchley’s fine post. His point about quasi-periodicity is hugely important in the context of tireless attempts to attribute climate “oscillations” to direct astrophysical forcing. Equally important is Doug’s point about non-stationarity – the Lorenz climate runs clearly showed that the evolving chaotic system moved from one apparent plateau or baseline to another with no outside forcing – just its internal attractors. Bob Tisdale has educated us about the climate shifts – moments at which global ocean-driven temperatures have apparently shifted up to a different level at certain timepoints. This is classic behavior of Lorenz type deterministic nonlinear systems.

The foundational paper for climate modeling is of course Deterministic Nonperiodic Flow by Lorenz (1963).

Just think for a moment what computer modeling meant in 1963: computations in the hundreds rather than hundreds of millions per second, and iterative runs lasting several days which today would take only a second on a cell phone. Despite the advance in computer technology, however, this paper has not been and may never be surpassed in terms of its significance to climate science. Without an understanding of this paper there is no climate science, any more than there is a science of gravity without Newton’s Principia and Einstein’s general relativity, or chemistry without Mendeleev’s periodic table.

When will folks get the point – climate temperature shifts, or “climate change” does not need to be explained by outside forcing. Not by CO2, soot, small particles, big particles, ozone, nor by astrophysical cycles of whatever exotic flavor or harmonic.

Christopher Monckton has written an accurate, relevant and forceful article as usual. Thank you, CM. However I would not quite agree with you on two points:

Re point 2: I don’t buy the idea that there was a step rise in temperature around 1994-96. Yes, you can fit a flat linear trend up to 1994 and again, at a higher level, from 1996 on. But a flat linear trend over a period doesn’t mean the temperature was flat: if you take a simple sine wave and tilt it to a rising trend, you can get the same result.
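The tilted-sine-wave point is easy to demonstrate numerically. A minimal sketch with invented numbers (a steady 0.001/month rise plus a 0.2-amplitude, 120-month oscillation; none of these values comes from the RSS data): the full-series trend is positive, yet some 60-month windows fit flat or even negative.

```python
import math

def ols_slope(y):
    """Least-squares trend of y against index 0, 1, ..., n-1."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    sxy = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    return sxy / sxx

# Hypothetical monthly series: underlying rise plus a sinusoidal "oscillation".
series = [0.001 * t + 0.2 * math.sin(2 * math.pi * t / 120.0)
          for t in range(360)]

full_trend = ols_slope(series)                               # positive overall
window_trends = [ols_slope(series[s:s + 60]) for s in range(300)]
# Windows on the falling phase of the sine fit flat or negative trends,
# even though the underlying temperature was never "flat".
```

So a run of flat sub-period trends is compatible with both a step change and a smoothly tilted oscillation; the flat fits alone cannot distinguish them.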

Re point 6: CM says “No one is ‘rejecting’ the models”, and then gives a long, detailed and accurate description of the models which demonstrates very clearly why the models should indeed be rejected. To CM’s analysis I would add that the method they use is incapable of remaining accurate more than a few weeks into the future, if that. The reason is that they use similar logic and processes to weather models, which will always magnify inaccuracies exponentially over time. It is for this reason that weather models are incapable of predicting weather more than a few days ahead. The same applies to climate models. [CM covers this obliquely in 6-4 and 6-5, but I would contend that prediction is manifestly impossible even with perfect knowledge of the initial conditions.] A few weeks is totally useless for climate prediction, so the climate models should indeed be rejected.

Basically, what I am saying is that you need to look at the entire picture, writ large, and create your input from there. And to hell with machinegun barrels. There is no way you can ever know if that mattered a tenth as much as the bad pastry Napoleon ate before the battle of Waterloo (Ligny, actually.) We are talking CHAOS, here.

Someone once asked Michelangelo how to make a sculpture of an elephant. He is said to have replied, by taking a block of marble and cutting away all the parts that don’t look like an elephant. There’s your man — he got it.

But these model bozos start out by trying to construct a mouse out of grains of sand (and sure enough, they wind up with a freakin’ elephant — then they cry wolf).

I designed a Civil War game (Blue vs. Gray) that I storyboarded down to the last relevant detail. Every card pick. Every die roll. Every step loss. No one else, in all the hundreds of ACW games out there, has even come close to that. (At least one college professor actually used it to teach the Civil War to his class, and that’s the best level of accuracy a wargame can ever cut. And I’m an old hand.)

How did I get there? FROM THE TOP. That’s how I got there.

It had a supremely simple combat results table. How did I design that? I took the 35 or so largest battles of the war (I can still rattle them off). I then noted down how many troops on both sides, what the losses were, and who “won” and why (outnumbering the enemy or superior generalship).

Ten pages of morale rules? Didn’t need ‘em. The morale factor was the players themselves. Best “simulation” of that — evah.

Losses based on the size of your own force, NOT largest force. (Yes, I know — but that’s what produced the historically accurate results and plausible possibilities. And that’s the point. I didn’t let “civil war consensus” destroy the accuracy of my game.)

And that’s it. And it gave plausible results for every single one of those battles (including a few two-turn jobs, such as Gettysburg, Shiloh, Chickamauga). I even worked out a simple metric for commanders getting killed, wounded, or sacked that fit seamlessly (Shiloh, Wilderness, etc.).

Now, I have seen incredibly complex Combat Results Tables for strategic-level Civil War games. And not one of them came close to accurately covering the actual results of any given battle, far less all of them.

Why? And why not? Because I let the war write the rules. Top-down. If the war didn’t fit, I altered the rules. But what did they, with their thousands of “accurate” parameters and massively complex sub-systems, do? They tried to make the rules write the war. And they would never alter one of their beloved subsystems, because then it “wouldn’t be accurate”.

“Accurate”, hell! They couldn’t make one lousy battle work out right, once the dust cleared. (But, by Jimminy, they got that canister round rate right! The fools.)

And even the outcomes were virtually predetermined: (The infamous “Lee always wins” syndrome.) But Lee didn’t always win. He got clean wins in only 3 of the 7 major attacks he made. Not only did my system reflect that, but the results Lee actually got were more likely — while preserving the — accurate — unpredictability of the results.

I won’t go into how I made the rest of the game work — systems, replayability, “fun factor”. But you get the idea.

But the point is that when you are confronted by controlled chaotic subject matter, you design from the top-down. That way you can control your path. But if you do it from the bottom up, you won’t ever come close. Not ever. And anyone who thinks they can do otherwise is just another garden-variety high-IQ fool — crippled by his own intelligence.

If I’m sounding brash and arrogant here, please understand, it’s only because I am.

Joe Johnson :”…about the sacred models since. Oh yeah, it [climate model] was obvious from the history comments that it was originally written by James Hansen — which probably explains why he was always so enamored of it.”

That probably also explains why, as head of GISS, he was able to ditch the physics-based estimates developed by his team (Lacis et al., 1992) and substitute a lower value that helped the model to better reproduce late-20th-century climate.

What he effectively did was to change the data to fit the model rather than to change the model to fit the data.

What they tried to do was to match volcanic forcing directly to the short-term change, ignoring the fact that the initial reaction was much stronger and in close agreement with the Lacis figure. The problem is that this implies a strong negative feedback to changes in radiative forcing.

That did not fit the agenda, so they ditched the solid science of Lacis et al. and arbitrarily rescaled the input instead of correcting the model to have a negative feedback.

And that, fundamentally, is why GCMs don’t work: they are not allowed to.

Hansen was a co-author on the Lacis paper, so he knew full well that what they were doing was contrary to what “the science” said about aerosol forcings.

Once they stop trying to rig the results, there’s a chance GCMs may get a lot closer to reality.

A well presented discussion of the good Doctor’s points raised in objection.

All points were satisfactorily refuted. Models of the climate have absorbed far too much money for far too little in benefit. If they can’t predict the general climate in terms of a temperature rise for which they claim the physics is well understood then the models should be assumed non-physical until proven otherwise.

If there are ice-age cycles, then to decontextualise from those is to present a meaningless snapshot. Taking 30-year snapshots and extending 100-year prediction lines is weak. If that is the basis for betting money, then the charts should be shown to financial traders [who would laugh at them].

If this is an interglacial warming period, why look for ‘a villain’ for natural warming and create a show trial of CO2?

Why are modern BSc climate-science courses 50% the study of sustainability? The subject looks misnamed.

Why does the IPCC prefer those who got their PhDs in the last 8 years? Because they would be soaked in the misnamed ‘climate science’?

I did ask on RealClimate if the code and design for the models was open source, so that anyone could examine the formulae used, and was told only 1 of the models [out of the 50 or so] was open. So it’s basically a ‘black box’. If they are publicly funded, all the designs and code should be online for anyone to inspect.

Konrad says:
April 2, 2014 at 3:13 pm
“When Callendar tried to revive the idea that adding radiative gases to the atmosphere would reduce the atmospheres radiative cooling ability in 1938, Sir George Simpson had this to say -”

But even Callendar (1938) outperforms today’s unverified, unvalidated computer models: http://climateaudit.org/2013/07/26/guy-callendar-vs-the-gcms/
So, when Dikranmarsupial calls today’s “climate science” the best there ever was, he is mistaken; climate science has regressed over the last 70 years, and we would have achieved MORE by not doing any “research” at all.

Mr Turner asks whether anyone understands the jet streams. The polar jet streams were discovered by accident in the Second World War. They have been much studied, since the recent displacement of the circumpolar vortex caused eddies in the northern polar jet stream, giving many places in eastern North America their coldest winter on record.

Mr Jonas queries whether the Singer Event – a step-change in global temperature in the late 1990s – occurred. Yes, it did, beginning with the rebound in global temperatures following the Pinatubo eruption, and ending with the Great el Niño of 1998. The Singer Event was self-evidently not caused by CO2. One would not necessarily expect CO2-driven warming to be entirely smooth. However – and this goes some way to answering a point by MattS – the profile of temperature change since the late 1970s does not fit well with the notion that CO2 was the main driver of warming.

Mr Jonas and others say I have, in effect “rejected” the models. No, I haven’t. They are not at all valuable in predicting CO2-driven global temperature change, for the empirical and theoretical reasons outlined in the head posting. However, they are useful in short-term weather forecasting because unpredictable chaos-driven bifurcations in the evolution of the climate object are less likely to occur in the short term than in the long. They are also useful in assisting with the understanding of climatic processes. The IPCC, however, abuses them.

Many have taken Dr Cawley to task for having stated that “no skeptic has made a GCM that can explain the observed climate using only natural forcings”. Actually, there are several simple models that can reproduce the temperature change of the instrumental era solely on the basis of the time-integral of solar activity, though – as far as I know – none has yet been peer-reviewed and published in a journal. I received a draft from a Norwegian group last year, for instance. I rewrote it for them to strengthen the English and to clarify the mathematics. There is also a TSI-integral model at woodfortrees.org.

“Bruce” lowers the tone by saying I have used big scientific and mathematical words that I do not fully understand. This is a breach of the Eschenbach Rule. What words did I use that he did not understand? And on what evidence, if any, does he consider that I did not understand them? If he is thinking of “heteroskedasticity”, for instance, he may like to read Dr Cawley’s published papers on the subject.

Frank and Evan M Jones discuss the question whether one should model from the top down or from the bottom up. The usual approach in modeling processes over time – such as climate – is to start at the beginning, known as t0, and include as much information on all scales as the model can handle. This information is called the “initial conditions”. And one should not dismiss the potential influence of apparently minor events. It is in the nature of chaotic objects that even the smallest perturbation in the initial conditions can cause drastic bifurcations in the evolution of the object. It is also worth recalling historical events. The Hyksos had war chariots and the Egyptians didn’t. Guess who won the war.

Mr Webb asserts, on no evidence, that the heteroskedasticity of the climate “ignores the laws of thermodynamics”. What I had actually written was that even the noise overlying the data is heteroskedastic [and particularly with respect to variations in the inputs]. Again, Mr Webb might like to read some of Dr Cawley’s papers before maundering on about these matters.

The untastefully pseudonymous “gymnosperm” says that “trend matching is foolishness”. Another breach of the invaluable Eschenbach Rule. I did not attempt to “match” the trends on different timescales: I merely pointed out, correctly, that at all timescales temperature exhibits aperiodic behavior. Nor is “gymnosperm” correct in saying “we have no meaningful way to evaluate … trends”. The IPCC has made specific predictions about the near-term trend in global temperature. It backdates those predictions to 2005, the last year for data included in the previous Fourth Assessment Report. The predicted trend can, therefore, be evaluated by comparison with the observed trend. The former is rising; the latter is not.

Mr Chang rightly says that GCMs have trouble predicting weather more than a few days out. That is an ineluctable consequence of the chaoticity of the climate. The longer one waits after making a prediction, the more likely it is that a bifurcation will take the climate object off in an unpredicted direction.

“Angech” asks whether a trend 20-50 years long has only a 10% probability of being correct. That is not how statisticians would look at a trend. They would be more concerned with the number of data points (which is why I use monthly rather than annual data in compiling my temperature graphs: the more data points, the more reliable the analysis). And they would be concerned with the measurement uncertainties. In the temperature datasets, the measurement uncertainties to two standard deviations sum to about 0.15 K, so that any trend less than this (up or down) over a given period cannot be statistically distinguished from a zero trend with 95% confidence.
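The 0.15 K figure in this reply invites a back-of-envelope check. A minimal sketch, assuming trends are quoted in K/decade; the function name and the crude decision rule (compare the implied total change against the quoted 2-sigma band, ignoring regression standard errors entirely) are my own simplification, not a full statistical procedure:

```python
def distinguishable_from_zero(trend_per_decade_K, years, two_sigma_K=0.15):
    """Crude screen: is the total change implied by a trend larger than the
    combined +/-0.15 K (two standard deviations) measurement uncertainty
    quoted in the reply above?"""
    total_change = trend_per_decade_K * (years / 10.0)
    return abs(total_change) > two_sigma_K
```

On these assumptions, a 0.05 K/decade trend over 17.5 years implies about 0.09 K of change, inside the band, while 0.17 K/decade over 35 years implies about 0.6 K, well outside it.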

Mr Oldberg says the models don’t make predictions. Yes, they do, and the predictions are wrong. Get over it.

Mr Marler says there is no intrinsic reason why a chaotic model cannot reasonably predict the weather 200 years into the future. Yes, there is. It’s the Lorenz constraint. In his 1963 paper, in which he founded what later came to be called chaos theory, he wrote: “In view of the inevitable inaccuracy and incompleteness of weather observations, precise, very-long-range weather forecasting would seem to be non-existent.” And “very-long-range” means more than about 10 days out. See also Giorgi (2005); IPCC (2001, para. 14.2.2.2).

Fred Berple says “The IPCC spaghetti graph clearly has some model runs that show no temperature increase, consistent with the pause.” Not the latest one. See Fig. 11.25a of the Fifth Assessment Report (2013). The trend is now below all models’ outputs in the spaghetti graph.

Mr Price says running a climate model again and again with the same initial conditions, getting different results each time, and averaging them to get a “projection” is unsound. It is also impossible. Models are deterministic: if two runs of a model have the same initial conditions and the same algorithms, they will produce identical outputs.

The paleozoically pseudonymous “thingadonta” says temperatures may fall. Good point. Dick Lindzen says there is an approximately equal chance of warming or cooling to 2050.

The electrostatically pseudonymous “Sparks” wanders off the reservation, asking why we have a monarchy which, he thinks, interferes with democracy. We keep our monarchy not only because we are proud of our history but also because we are proud of our Queen. The net profit in tourism from having a proper, old-fashioned monarchy greatly exceeds the cost of the monarchy itself. Our Queen is a lot less costly to run than your President. And she does not interfere in democracy: she is a constitutional monarch.

Mr Newton makes the profound point that “The Earth’s climate is remarkably stable, all things considered”. He notices that temperatures have varied little over the past 2000 years. In fact, absolute temperatures have varied by only 1% either side of the long-run average in 420,000 years. That is enough to take us in and out of ice ages, but it is not enough to allow us to imagine that strongly net-positive feedbacks are operating.

The acronymically pseudonymous “bw” says the current RSS anomaly is only 0.2 degrees higher than 34 years ago. That’s not how it’s done. One takes the trend on the data. That shows 0.44 degrees’ warming since 1979.

Mr Whitman kindly says Mr Storr did not persuade him I was “unpersuadable”. In a future posting I hope to answer the question I get a great deal from true-believers: “What would it take to convince you we must shut down the West to Save The Planet?”

“What would it take to convince you we must shut down the West to Save The Planet?”

That is their focus. The social ecologists see everything in terms of ecology. They say ‘ego-centric man’ [with his industrial ‘ecology’] must be transformed into an eco-centric man whose ecology will be one of joy. So their focus is one of transformation from ego-centric to eco-centric [which sounds very ego-centric to start with :)]. What does it take to turn one into the other? This is the subject of a recent research project:

‘Psychology could hold the key to tackling climate change…’ “Funded by a €1.5M grant from the European Research Council, Dr Lorraine Whitmarsh from the University’s School of Psychology will for the next five years lead an international team tasked with providing evidence to support this theory.” http://phys.org/news/2014-04-psychology-key-tackling-climate.html#jCp

This transformation, they say, should happen even if there were NO global warming, and one hears its echo in responses like ‘isn’t sustainability a good thing we should do anyway?’ and many more phrases along those lines.

For them the climate is just another tool in their toolbox of transformation to the ‘new man’ [a concept invoked in the 1930s to justify the collectivisation that resulted in massive famines, or the famous ‘killing of the sparrows’ under Mao].

So is it ego-centric to want clean water? Is it ego-centric not to want to live in a mud hut? What is an eco-centric man? We are told it means anything from not eating meat and ‘not washing to save the planet’ [social ecology] to the mass extermination of humans [deep ecology], whose numbers are to be reduced through famine and pestilence [thus reductions in CO2]. Social and deep ecologists hate each other [like the People’s Front and the Popular People’s Front in Monty Python’s Life of Brian].

So they will keep on asking ‘What would it take to convince you we must shut down the West to Save The Planet?’, and if you agree to that they will ask ‘What would it take to convince you that you must stop eating meat to Save The Planet?’, and so on until you are a good eco-man. It is a Taliban-style narrative.

They call on us to sacrifice everything in the name of ‘saving the planet’ except stupidity. Saving implies that something is under threat. It is no accident that in AR5 the term vulnerability [a term used to measure threat to ecosystems] has, compared with AR4, been totally decoupled from anything to do with climate. So they are not even pretending they need climate reasons any more to promote eco-man sustainability. It is now a good in itself. No need for climate reasons or proof.

It’s a shame that they deny the fact that basic flaws in their thinking lead to gross errors within their computer models.

Their models do not need to be totally accurate.

They need to be good enough.

For example, if you were to write an aeroplane simulator (to practise flying), you would not need to program ‘whole wing’ aerodynamics into the system to represent accurately how a wing gives more lift at a greater angle of attack; you would just create an equation that allows the system to give the ‘correct’ amount of lift for the airspeed and angle of attack. Other equations deal with stall speed.

However, if you were designing a new wing for manufacture and you wished to simulate the performance of the wing under all phases of flight, then you would need a very accurate model of how a wing generated lift and drag.
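The difference between the two levels of fidelity can be seen in a toy lift model of the sort a practice simulator might use. All the constants below are invented for illustration (very roughly light-aircraft-like); a design-grade model would interpolate tabulated wind-tunnel data instead:

```python
def lift_newtons(airspeed_ms, alpha_deg, wing_area_m2=16.2, rho=1.225):
    """Game-level lift model: linear lift slope up to a crude stall cutoff.

    All constants are illustrative; a real flight model would use
    measured aerodynamic data rather than a single linear coefficient.
    """
    cl_per_deg, stall_deg = 0.11, 15.0
    alpha = min(alpha_deg, stall_deg)   # past the stall angle, lift stops growing
    cl = cl_per_deg * alpha             # lift coefficient
    # standard lift equation: L = 1/2 * rho * v^2 * S * CL
    return 0.5 * rho * airspeed_ms**2 * wing_area_m2 * cl

# Same airspeed, doubled angle of attack -> doubled lift, until the stall cutoff
print(round(lift_newtons(50, 4)), round(lift_newtons(50, 8)))
```

This is “good enough” for practising approaches; it would be useless for designing a wing.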

It appears that climate modellers are trying to do a detailed simulation (GCM) without understanding all (or many) of the interactions between key variables.

If you do not understand all of the mechanisms, then increasing the area resolution of your simulation from 1000 km square down to 5 km square will not help you at all.

You have just dramatically increased the number of calculations with errors and / or unknowns.

Honestly, I think the use of the “forcing” approach is completely wrong. It assumes that there are only a FEW actions that affect temperature, and it nearly completely ignores convection and atmospheric tidal forces. It also neglects electromagnetic induction as a heat source, which is stupid, since you can see it almost every night up in Finland.

I would make the point to Lord Monckton and to Dr Cawley that there is a significant problem in that the models are “non-physical”: that is, the models assume or predict situations that are energetically impossible, and they ignore the energy costs of effects. For example, one paper forecasts a 20% increase in hydrological cycling (rainfall) when the actual imbalance is only capable of increasing cycling by 0.8% before all the imbalance energy is consumed in the additional hydrological cycling and further temperature increase is damped by the negative feedback of evaporation. Dr Cawley also fails to understand that a feedback is not a scalar: treating feedbacks as a resultant scalar sum is non-physical. Real feedbacks have an amplitude and a lag. One needs to introduce the square root of negative one.
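The point that a feedback has an amplitude and a lag can be sketched by writing each feedback as a complex number. The feedback strengths below are invented, purely to show that a lagged feedback sums differently from the naive scalar arithmetic:

```python
import cmath

def closed_loop_gain(feedbacks):
    """System gain 1/(1 - f), where f is the (complex) sum of the feedbacks:
    each feedback's magnitude is its strength, its phase its lag in radians."""
    return 1.0 / (1.0 - sum(feedbacks))

# Two invented feedbacks of equal strength 0.3: one instantaneous, one lagged 90 deg
instantaneous = 0.3
lagged = 0.3 * cmath.exp(-1j * cmath.pi / 2)    # same strength, quarter-cycle lag

scalar_sum_gain = 1.0 / (1.0 - 0.6)             # naive scalar treatment: 2.5
complex_gain = abs(closed_loop_gain([instantaneous, lagged]))
print(scalar_sum_gain, round(complex_gain, 3))  # the lag reduces the net amplification
```

Treating the two feedbacks as a scalar sum of 0.6 gives an amplification of 2.5; accounting for the lag gives roughly half that.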

I have a huge problem with models that presume to say something about what happens in the climate but ignore the need to establish a physical mechanism, and then to prove that the mechanism is actually possible by comparing the energy expenditure within that mechanism with the available driving energy.

For example, an average ocean wave of 3 m carries almost 30 kW per metre of wavefront. If we were to assume that waves are driven by wind, which is driven by temperature, then we would have a situation in which the effect contains more energy than the cause. Wave energy must therefore come predominantly from somewhere else.
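For readers who wish to check the wave-power figure, linear deep-water wave theory gives an energy flux per metre of wave crest of P = ρg²H²T/(64π). The 8-second period below is an assumption, since only the wave height is given above:

```python
import math

def wave_power_kw_per_m(height_m, period_s, rho=1025.0, g=9.81):
    """Deep-water wave energy flux per metre of wave crest (linear wave theory):
    P = rho * g^2 * H^2 * T / (64 * pi), returned in kW per metre of crest."""
    return rho * g**2 * height_m**2 * period_s / (64 * math.pi) / 1000.0

# A 3 m wave with an assumed 8 s period (the comment gives only the height)
print(round(wave_power_kw_per_m(3.0, 8.0), 1))
```

Note that the flux is per metre of crest length, not per square metre.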

1) The models are composed of formulas, each of which has NOT been proven. The formulas used in the models are conjectures, postulates, unproven theories. Each individual assumption needs to undergo rigorous testing and backtesting. People in this business try to conflate what is proven (that CO2 absorbs infrared radiation) with the theories in the models, which are UNPROVEN. A model of unproven formulas is unlikely to produce correct results. There are so many formulas in these models, many of which may not even have the signs correct in terms of how they affect things. The IPCC admits that large numbers of things are very uncertain, but then it says that the results are certain to 95%. That is simply an unsupportable assertion.

2) The models are gridded approximations of the earth’s surface that are iterated over millions or billions of times. The initial error in the data, which is large, is magnified millions of times, and the possibility that the result is at all meaningful is zero. The analogy to chaotic-system simulations such as wind tunnels is not sufficient, because there is not enough evidence that the formulas are at all correct. Even if they were correct, the initial data are not known well enough to trust the results. The errors in the results are greater than all possible outcomes. For instance, the error bars on the 2100 temperature are more than 30 degrees wide. Any 2100 temperature could be said to fit the models.

3) None of the models is any more predictive than the others. A model that works better than the others for one time period does not work any better in other time periods. They choose to average the models to take out this random effect, but this is itself indicative of a problem. If any of the models actually had correct physics in it, we would expect one to outperform the others. There would be evidence of efficacy. There isn’t, so they choose to average the models, because the models are really “fits” to the data. It would make sense, if you had 20 fits to the data, that averaging the fits would produce a better fit. However, if there were models that really were better, then averaging would produce a poorer fit, because you would be taking some poor models and averaging them with good ones. Since that isn’t the case, we can safely say the models are all just expensive fits. Much cheaper fits can be generated without all the machinery in these models.

4) “Fits to the data” means that there is no proof the models actually represent the physics. Therefore there is no proof that, outside the backtested and backfitted data used to fit them, the models will predict anything. The only way to test the models is to take NEW data, which have not been incorporated into the fitting, and see whether the models can predict them. Since the models were created from 1979 onwards, the only relevant data are recent data. Recent data do not match the models. That is disproof, because backtested and fitted data cannot be used to “prove” models that were constructed with those data. That is circular logic.

5) They claim that in some cases the data are not “fitted”, but experimenter bias is evident in all these results. The modellers all have a bias, and they do not consider all the reasons these models could be in error, or the data in error. They literally change the data to match the models and vice versa. For instance, temperature data for the US for the last 120 years have been adjusted by algorithms that are unpublished and unproven. The adjustments modify the historical record significantly, showing that the temperatures measured over the last 120 years were significantly cooler than we measured at the time. Yet they have not proven that these adjustments actually make sense. They have not gone to specific locations and shown why those locations were reporting erroneous data, to demonstrate the efficacy of the modifications they are making to the historical record. If the historical record is off significantly, then the models based on, and fit to, this record, which are said to be good matches with it, would be erroneous. In any case there is simply not enough good data to calibrate the models, owing to the large uncertainty in most of the data except the most recent (the last 30 years or so).

6) Large portions of the earth’s surface and oceans were, until the last 30 years, simply not known with enough accuracy to construct models; the oceans only in the 13 years that ARGO has been in operation. We do not have enough data to construct models. This should be evident. It is not to say we can’t eventually figure this out, just that it is genuinely evil to say that you know something you don’t. The modelers and climate scientists simply don’t know, and should admit that this science is still in its infancy and needs time to prove and refine its theories. There is nothing wrong with that. There is something wrong in saying you know something you don’t.

7) In 2007 the IPCC conflated the fact that its models showed a high correlation with the historical record (which was circular logic, as pointed out above) with the claim that it had therefore accounted for most if not all natural variability. On this analysis it concluded that the warming seen in 1979-1998 must, with 95% certainty, be because of CO2. Since 1998 temperatures have been flat. This means natural variability was not accounted for as they presumed. The models did not account for natural variability. Therefore their assertion of 95% certainty was an unjustified and erroneous conclusion based on poor thinking and poor mathematics. Now they say “likely” and refrain from giving a solid certainty, but it is more severe than that. The level of variability is such that it is not clear that any of the warming in 1979-1998 was caused by CO2, or perhaps only a small part, and therefore their ability to predict is zilch.

8) As Lord Monckton has pointed out, and as I have been saying for a long time, the historical rate of change is lower than they predict for the remaining period, requiring a sudden, unproven, nonlinear increase in the rate of temperature change higher than we have ever seen, sustained for an unbelievably long period (i.e. a 4x increase in the rate of change for 80 consecutive years without pause). This assertion needs to be proved: since it is beyond the experience and data we have, it is unlikely that we shall see this sudden change in the rate, or that it will be sustained for so long. Belief in a sudden change like this is more akin to a religious belief than a scientific one. They cannot show how or why it will happen, other than by pointing to models as if they were magic. We need to be shown how this sudden massive increase in the rate of warming is possible, because it seems ridiculous on the face of it.

9) The CO2 output of humans has only been significant since 1945. They must admit that any temperature increases from 1880 to 1945 were natural variability, which actually weakens the argument for CO2. If temperatures between 1880 and 1945 went up as much as between 1945 and 2013, then, since the changes before 1945 were not from CO2, it is possible that the changes, or most of the changes, after 1945 could be from things other than CO2 as well.

Since 1945 the record is confusing, because from 1945 to 1975 temperatures DECLINED during major CO2 production. And now, between 1996 and 2013, temperatures show a zero trend even with a massive CO2 increase. Therefore, during the period 1945-2013, while CO2 production has been consistently rising, 47 of the 68 years showed no increase, or even a decrease, in temperatures. Yet we are to believe that temperatures will now suddenly spike at a rate 2x or more that of 1979-1998, for 80 years continuously without pause, when the evidence seems to point to CO2 actually having a minor effect on temperature: it was increasing massively during this entire period, and for the vast majority of that time there was no increase in temperature. Something else is clearly at work. Why can they not admit this? It’s obvious to all but the stupidest person. It is certainly possible that CO2 has some effect, but clearly there are other things that have a huge impact, and until those are accounted for it is impossible to make the predictions they claim. Why is this not obvious to everyone?

10) Whatever increase in temperature is asserted, it is not at all proven that the consequences of an increase are negative. For the last 400 years temperatures have been increasing, and for that entire period humans and animals have generally benefited. It is extremely unlikely that we have just reached the exact inflection point at which rising temperatures cause a problem. In fact the IPCC does say, if you look closely, that there is actual benefit from temperature increases of up to a degree, or even two degrees. Therefore the net result of all this CO2 may be net positive, depending on the level of temperature increase, even by their own statements. However, even the two degrees is uncertain. Predictions such as “in 80 years, when temperatures hit 2 degrees, food production will decrease” are so ridiculous that it is impossible to understand how anyone takes them seriously. We have no idea what food production in 2080 will be, but given our growth in knowledge there is zero probability that it will be lower because of 2 degrees. These kinds of things in their models and computations show that the entire thing is complete hogwash.

I think it is worth studying climate. I think it is worth studying many of these things. I am simply saying we don’t know, and to say we know, and to make the assertions they do, is academic criminality in my mind, because it is so clearly not proven, not known.
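The error-magnification argument in point 2 can be illustrated with the simplest chaotic iteration, the logistic map (a stand-in for illustration, not a climate model): an initial error of one part in a million grows until it is as large as the signal itself.

```python
def trajectory(x0, r=3.9, steps=60):
    """Iterate the logistic map, a minimal chaotic system (not a climate model)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two starting states that differ by one part in a million
a = trajectory(0.500000)
b = trajectory(0.500001)
max_gap = max(abs(p - q) for p, q in zip(a, b))
print(max_gap)   # the tiny initial error grows to the size of the signal itself
```

More grid cells mean more iterated calculations of this kind, not fewer; refining the resolution does not by itself remove the sensitivity to initial error.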

I am, and so should everyone else be. All the reasons Lord Monckton gives for chaotic objects being unmodellable are correct, but another problem with the models is that they are computer programs. When developing software, it must be tested and validated: fundamentally, this means making sure the software does what its author expects it to do. Those expectations may be based on empirical data (GCMs clearly fail to match empirical data) or simply on what the author wants. If a GCM author expects CO2 to be the control knob, then, guess what, CO2 is the control knob. As I think Willis has observed, the rest is just tuning the model internals so that the output does the desired thing when the CO2 knob is twiddled.

Computer models, even ones which closely match the data (no known GCM does), do not necessarily tell you anything about the underlying physical mechanisms. I could probably use an n-th-order polynomial to give a decent fit to a temperature series, but the coefficients would contain no useful information on the reasons for the variation in that series.
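That polynomial point is easy to demonstrate. The sketch below fits a 9th-order polynomial to a synthetic series (an invented trend, cycle and noise) and obtains a tight fit whose coefficients say nothing about the physics that generated the data:

```python
import numpy as np

# Synthetic "temperature series": a small trend, an 11-year cycle, and noise
rng = np.random.default_rng(1)
t = np.arange(132) / 12.0                        # 11 years of monthly data
series = 0.015 * t + 0.1 * np.sin(2 * np.pi * t / 11) + rng.normal(0, 0.1, t.size)

x = t / t.max()                                  # rescale for numerical stability
coeffs = np.polyfit(x, series, 9)                # 9th-order polynomial fit
fitted = np.polyval(coeffs, x)
rms_err = np.sqrt(np.mean((series - fitted) ** 2))
print(round(rms_err, 3))    # small residual: a perfectly "decent fit"
print(coeffs[:3])           # yet the coefficients carry no physical meaning
```

The fit is good, but nothing in the coefficients identifies the trend, the cycle, or the noise that actually generated the series.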

@Robany – I agree. Perhaps what CM was saying is that models in general should not be dismissed. That I would agree with. Models do serve a purpose. Even bad ones tell you that you have to go back to the drawing board.

“We keep our monarchy not only because we are proud of our history but also because we are proud of our Queen.”

We have a monarchy here in the United States as well; we subjects just do not know who the monarch (he, she or they) is. In remembrance of the old, lost Constitutional Republic we still hold a traditional mock election every four years, where we choose who shall live in the White House and become the publicity officer for the royals.

In answer to Philip Marsh, “heteroskedasticity” is usually spelled “heteroscedasticity” in the UK, but, in deference to the majority of readers here, who are from the United States, I spell it, as they do, with a “k”, like the “k” in “skeptic”. OK?

Monckton of Brenchley says:
April 3, 2014 at 3:41 am
Not the latest one. See Fig. 11.25a of the Fifth Assessment Report (2013). The trend is now below all models’ outputs in the spaghetti graph.
============
Thank you for the reply. Question:

Is there a graph/table that shows the raw model runs? From what I can see, the IPCC spaghetti graph includes only one ensemble mean per model. Thus, by the time the model runs appear in the graph, they have already been averaged, which hides the variability within each model. All we see is the variability across models.

My point is that the IPCC report itself is hiding the variability in the individual models, and then further hides the variability across models by the use of the ensemble mean. So in effect the IPCC uses an average of averages.

I believe a very informative article for WUWT would be to plot the raw model data for all models, then draw a min-max boundary on the data. This would show the full variance the models are predicting, which is a reasonable measure of natural variability as predicted by the models.

The reason is to show that the models themselves are actually telling us that natural variability is high: that, based on a similar set of assumptions, the models predict a very large range of results. This range is not a result of forcings, because the models draw their forcing estimates from the same data, so the range must be a result of natural variability as predicted by the models.

This doesn’t mean that the models are correct in their measure of natural variability. Rather, the models are telling us that natural variability is high; but, by the process of averaging, not once but twice, the IPCC is hiding that variability, which may well be why scientists such as Dr Gavin Cawley believe variability is low.
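The double-averaging point can be illustrated with synthetic model runs (all numbers invented for illustration): taking one ensemble mean per model collapses most of the run-to-run spread before it ever reaches the graph.

```python
import numpy as np

rng = np.random.default_rng(2)
n_models, n_runs, n_years = 5, 10, 30
# Synthetic runs: a common forced trend plus large run-to-run "natural variability"
trend = 0.02 * np.arange(n_years)
runs = trend + rng.normal(0, 0.15, (n_models, n_runs, n_years))

raw_spread = runs[..., -1].max() - runs[..., -1].min()             # across all raw runs
model_means = runs.mean(axis=1)                                    # one ensemble mean per model
mean_spread = model_means[:, -1].max() - model_means[:, -1].min()  # across the model means

print(round(raw_spread, 2), round(mean_spread, 2))  # averaging shrinks the visible range
```

The final-year spread across the raw runs is several times wider than the spread across the per-model means, yet only the latter reaches the spaghetti graph.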

I am in the business of computer ‘modelling’. I can assure Lord Monckton that you cannot reasonably model something as complex and unknown as ‘climate’, whether for one day, one year, or 1000 years. It matters not. Sun and cosmic influences by themselves are impossible to correlate into code. The many-to-many relations among the variables in climate [about 1 million] make modelling both the global and the local [thermodynamic physics] aspects of climate impossible.

So this ‘computer modeller’ from state-funded East Anglia is just another troll, who would likely not be hired in the private market if he is this ignorant, or is just another propagandist with a business card bearing a ‘scientific’ job title. There is no computer-science validity to the cult of warm. See Mann et al. for computing-model/statistics fraud.

Lots of good material here. I’d like to comment on a few points of interest.

On Pauses:

No, I don’t. The significance of the long Pauses from 1979-1994 and again from 1996-date is that they tend to depress the long-run trend, which, on the entire dataset from 1979-date, is equivalent to a little over 1.2 K/century. In 1990 the IPCC predicted warming at 3 K/century. That was two and a half times the real-world rate observed since 1979. The IPCC has itself explicitly accepted the statistical implications of the Pause by cutting its mid-range near-term warming projection from 2.3 to 1.7 K/century between the pre-final and final drafts of AR5.

emphasis added
When we talk about a Pause, there’s an implicit assumption associated, specifically, that there is some trend that has paused. What trend are we talking about? As I understand it, Gavin correctly notes that there is no significant pause in observed trends. Lord Monckton correctly notes that there is a significant pause in IPCC projected trends. What do you mean when you say Pause? Pause in what, projected or observed trends?

For my part, I’m interested in this in the first place because climate science is the singular exception and curiosity I’ve encountered in my life where scientific predictions appear to be failing spectacularly. At least it’s the only exception I’m aware of. I care about IPCC projected trends. That’s what I mean when I refer to a Pause.

On Rejecting Models:
I think sloppy speech can be confusing. Rejecting the models and rejecting the idea that the models are currently good enough for a particular purpose may not be exactly the same thing. I don’t reject the models, I’m not even quite sure what that means. Would that mean I think we should throw them in the trash and quit trying to improve them? I certainly don’t think that. Would that mean I think they have no value whatsoever, in any context? No, I’d seriously doubt that.

All this being said, for any given model it’s critical to understand what the model is good for: what aspects of the system the model models, and what the model’s capabilities and limitations are. A VM (virtual machine), for example, can be a model that very exactly emulates a microprocessor’s execution of any program over its instruction set. A VM can be a very good model of a deterministic system within its scope, such a good model, in fact, that programs natively compiled for that processor will often execute on a VM without the slightest modification. Are VMs perfect models? Emphatically not! For example, often no effort is made to emulate the timing of the target processor; the VM might run much more quickly or much more slowly than the actual machine. So even a very good model of a predictable, deterministic system like a microprocessor need not be accurate in every metric to have value.

What are the GCMs good for? I don’t know for sure. I expect they are useful for some things. Analyses like those done by Lucia at the Blackboard, however, tell me that GCMs aren’t useful for projecting atmospheric temperature trends on decadal timescales, and that the models probably aren’t good for projecting atmospheric temperature trends, period. Can we improve them? Well, that’d be great if we can. If there are people who are trying, I say more power to them. But let’s not kid ourselves about the capabilities of GCMs as they stand today. I reject using these models for projecting atmospheric temperature trends when it’s been demonstrated that they model this poorly.

A final thought. My opinion is that a good model is the pinnacle of a thorough understanding. Not being able to model certain aspects of a system well does not demonstrate that we know nothing about that system. But being able to accurately model the aspects of a system we are interested in does demonstrate mastery, to my mind. I think that the failure of the GCMs to project atmospheric temperature trends accurately should be an alarm, a wake-up call to those who believe the science is settled and that we understand the Earth’s climate well enough to predict and control it via policy.

Lord Monckton says:
‘No one is “rejecting” the models. However, they have accorded a substantially greater weighting to our warming influence than seems at all justifiable on the evidence to date.’

Actually many of us are rejecting the models insofar as they are supposed to provide predictions of future climate.
Forget about the (possible) apparent fudging of the raw data and the dubious homogenisation of that data. Even if we accept the shaky foundations on which these models have been built, there are far too many variables (parameters, if you prefer) for any linear model, be it deterministic or stochastic, to be valid.
Has anyone ever counted the “forcings” that might be involved in climate change?
In a peer-reviewed paper in a professional journal, I once alleged that there were at least forty and, of the many letters and emails that were received after publication, none disputed this figure.
For those who might be tempted to argue, please note that there are at least six “greenhouse gases” alone. Start counting from there.

My apologies if this has been brought up, but I clicked your link to the WG1 report, and scanned through it.

It is very interesting!

MOST interesting is a small figure (blowing it up helped these old eyes some…) on page 46 (pagination from the .pdf file, not the ‘listed’ text page).

The second graph on that page is titled “Reconstructed (grey) and Simulated (red) NH Temperature”, right below a reconstruction of TSI. Unless I am mistaken, does this graph not contradict Mikey’s hockey stick? I see a MWP and a LIA, quite distinctly, somehow correlated with changes in TSI (above).

…I think sloppy speech can be confusing. Rejecting the models and rejecting the idea that the models are currently good enough for a particular purpose may not be exactly the same thing. I don’t reject the models, I’m not even quite sure what that means. Would that mean I think we should throw them in the trash and quit trying to improve them? I certainly don’t think that. Would that mean I think they have no value whatsoever, in any context? No, I’d seriously doubt that.

In the world of science, simplifications are used far more often than not, principally because the analytic workload is too high, or it is impossible to accurately qualify all of the significant variables involved.

Both are the case with climate simulations. As Chris accurately states, climate is inordinately susceptible in the short term to the butterfly effect, and in the longer term to a qualitative misunderstanding of how the variables applied in the simulations actually work in the real world of climate. Some variables are easy to understand, such as the Milankovitch variables associated with insolation. The variable of CO2 is really not well understood, nor is its relationship to other variables such as water vapor. We do understand the physics of CO2 emission and absorption at the quantum level, but how those fundamental underlying energy-transfer mechanisms interact with convection, conduction, and variable radiation from other sources is beyond where we are today.

I have yet to see a qualitative comparison between the CO2 absorption and emission spectra recorded several decades ago by the U.S. military and equivalent spectra today. That would give us a qualitative means of understanding exactly what that variable’s impact on the total energy budget of the planet is. This is real science: measuring variables and then estimating the resulting impact on climate. However, we would much rather use computer models based upon flawed assumptions about these variables’ behavior. These flawed assumptions spring at least somewhat from the destructive circle of the demands of federal funding versus what those who fund expect to see.

Models can only tell us so much, and when measurements conflict with models it is ALWAYS the models that must be modified; the observations must not be arm-waved away, as with the “missing heat” fallacy, or met with outright denial that the observed data conflict with the models.

Let me be the first to point out that this model predicts a non-existent 30 year hiatus starting ~1910 and misses one starting ~1945.

On a more general note, an integrating model is going to be very sensitive to thresholds and prone to runaway (unless the integration time is limited, in which case this little parlor trick doesn’t work). Constrained by Stefan-Boltzmann it won’t actually blow up, but it wouldn’t be pretty. It would be far preferable if the merely-logarithmic-forcing AGW guys were closer to the truth.

Attempts at modeling the climate at long range are hampered by the severe shortage of independent observed events; for example, there are no such events going back 200 years. I imagine that this is not a factor in studies of heartbeat or breathing.

That does not imply, as Lord Monckton wrote, that a chaotic model of a chaotic phenomenon can have no predictive value. It does imply that there is no realistic hope of basing model parameters on multiple cycles of the phenomenon, so the parameter estimates, if based on data, are necessarily more uncertain than with a periodic model observed over multiple cycles.

Mike Webb: The statement that climate variation is heteroskedastic will be as difficult to observationally disprove as the Lambda Cold Dark Matter theory, though both theories ignore the laws of thermodynamics.

Heteroskedastic means that the variance is not constant across time and location; e.g., temperatures near the Equator might be less variable than temperatures in Central Missouri at the same day of year and time of day. It is certainly subject to empirical verification or rejection.
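The claim is indeed checkable in principle. A minimal sketch (the two short series are invented stand-ins for an equatorial station and a Central Missouri station; a real test would use long records and a formal procedure such as Levene’s test):

```python
# Compare sample variances of two temperature series taken at the same
# day of year and time of day; heteroskedasticity = unequal variances.
def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

equator = [27.0, 27.4, 26.8, 27.1, 27.2, 26.9]    # invented, deg C
missouri = [18.0, 25.5, 10.2, 30.1, 22.4, 5.9]    # invented, deg C

ratio = sample_variance(missouri) / sample_variance(equator)
print(ratio)   # far above 1, consistent with heteroskedasticity
```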

Excellent piece! Is there much work out there which looks at the interplay between water vapor/humidity/cloud formation and TSI reaching various levels within the atmosphere vis-à-vis temperature? There would, of course, from what analysis I have seen, be significant multicollinearity amongst water vapor/humidity/clouds and TSI if used as independent (causal) variables for temperature.

Monckton of Brenchley: Mr Marler says there is no intrinsic reason why a chaotic model cannot reasonably predict the weather 200 years into the future. Yes, there is. It’s the Lorenz constraint. In his 1963 paper, in which he founded what later came to be called chaos theory, he wrote: “In view of the inevitable inaccuracy and incompleteness of weather observations, precise, very-long-range weather forecasting would seem to be non-existent.” And “very-long-range” means more than about 10 days out. See also Giorgi (2005); IPCC (2001, para. 14.2.2.2).

Lorenz, 1963, 51 years ago in an active research field, is not the last word. Note the vague “would seem to be”. A prediction of a functional, like the mean, over 3 periods is not impossible. They are not trying to predict the temperature of Central Missouri on June 11, 2025 at 2:30 pm; they are trying to predict the June 2025 afternoon mean temperature. Clearly they cannot do that now, but merely citing the “butterfly effect” and “chaos” is not sufficient to show that the goal is unachievable. A thorough overview of one field of modeling with dynamic models, including chaotic models, is the book “Dynamical Systems in Neuroscience” by Eugene Izhikevich. Of course, reasonable success in heartbeat, breathing rhythms, and neuronal modeling is no guarantee that the problems of modeling the climate will necessarily be overcome, but it is evidence that universal claims of the non-predictability of chaotic models are false.

“No skeptic has made a GCM that can explain the observed climate using only natural forcings.”

To which I reply with a quote from Dr. Saul Perlmutter: “Science isn’t a matter of trying to prove something – it is a matter of trying to figure out how you are wrong and trying to find your mistakes.”

Mr Marler has breached Eschenbach’s Rule by not quoting me accurately and completely. He says I wrote that “a chaotic model of a chaotic phenomenon can have no predictive value”. What I wrote was that modeling a chaotic object prevented the making of “policy-relevant” predictions – in other words, predictions accurate enough to be acted upon sensibly. And I explained why: the climate object’s “evolution is inherently unpredictable, even by the most sophisticated of models, unless perfect knowledge of the initial conditions is available. With the climate, it’s not available.”

I cited Lorenz on the inaccuracy and incompleteness of weather observations. In the absence of sufficiently precise and well-resolved data, one cannot predict the evolution of a chaotic object more than a few days out. And the science has indeed moved on in the half-century since Lorenz’s paper: it has confirmed the need for precise, well-resolved data before a chaotic object can be modeled reliably in the very long term.

It is startlingly evident that the models are not correctly predicting the one thing everyone wants them to predict: global temperature change. At present, they are not fit for their purpose, and chaos is one of the reasons. A chaotic object, being deterministic, is completely predictable under the condition that the modeler possesses perfect knowledge of both the initial conditions and the evolutionary processes. In the climate, we have neither, and can never acquire the first.

Monckton of Brenchley: Mr Marler has breached Eschenbach’s Rule by not quoting me accurately and completely. He says I wrote that “a chaotic model of a chaotic phenomenon can have no predictive value”. What I wrote was that modeling a chaotic object prevented the making of “policy-relevant” predictions – in other words, predictions accurate enough to be acted upon sensibly. And I explained why: the climate object’s “evolution is inherently unpredictable, even by the most sophisticated of models, unless perfect knowledge of the initial conditions is available. With the climate, it’s not available.”

Here is the second paragraph of my first post: There are models of chaotic phenomena, heartbeat and breathing for example, where forecasts are reasonably accurate several cycles in advance. If the climate has “cycles” of about 60 years, there is no intrinsic reason why a chaotic model cannot reasonably accurately predict the distribution of the weather (mean, variance, quartiles, 5% and 95% quantiles) 200 years into the future. That they don’t do so yet is evidence that they don’t do so yet, not that they can’t ever do so.

I acknowledged that the current climate models are not sufficiently accurate, and I directed attention to successful modeling of chaotic processes with chaotic models to show that universal assertions of the impossibility of usefully modeling chaotic processes are not true. What is a “universal assertion of the impossibility of modeling chaotic processes”?

Will this do?

Monckton of Brenchley: At present, they are not fit for their purpose, and chaos is one of the reasons. A chaotic object, being deterministic, is completely predictable under the condition that the modeler possesses perfect knowledge of both the initial conditions and the evolutionary processes.

All models predict only within a range of uncertainty, and only up to a point. The difference between chaotic models and non-chaotic models is that, with estimates of parameters and initial conditions instead of exact values (which is all we ever have), chaotic models become useless faster. However, there is no reason that GCMs of necessity will never be accurate enough for useful predictions of the functionals of the weather distribution (means, variances, quartiles, other percentiles).
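The distinction between predicting a trajectory and predicting a functional of the distribution can be illustrated with the simplest chaotic system going, the logistic map x → 4x(1−x). A sketch, not a claim about GCMs: orbits from nearby starting points diverge completely, yet their long-run means agree closely (the invariant distribution has mean exactly 0.5):

```python
# Logistic map at r = 4: pointwise prediction fails quickly, but the
# time-mean (a functional of the invariant distribution) is stable.
def step(x):
    return 4.0 * x * (1.0 - x)

def long_run_mean(x0, transient=1000, samples=200000):
    x = x0
    for _ in range(transient):
        x = step(x)
    total = 0.0
    for _ in range(samples):
        x = step(x)
        total += x
    return total / samples

# Orbits from starts differing by 1e-12 separate within ~60 iterations.
a, b, max_gap = 0.2, 0.2 + 1e-12, 0.0
for _ in range(100):
    a, b = step(a), step(b)
    max_gap = max(max_gap, abs(a - b))

m1, m2 = long_run_mean(0.2), long_run_mean(0.7)
print(max_gap, m1, m2)   # gap is order 1; both means are close to 0.5
```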

For the record, other chaotic models of chaotic systems are the multi-body gravitational problems that are addressed by the programs that guide satellites and space probes. In those cases, parameters and initial conditions are known with sufficient accuracy that the computations produce sufficiently accurate results (a forgiving standard, especially because course corrections are possible).

There is no guarantee that GCMs or other models will ever be accurate enough, but there is also no guarantee that they won’t be. On this I disagree with Lord Monckton of Brenchley, as much as I admire most of his work, and his extraordinary dedication.

Another example of a “universal denial” is the implicit assumption of this title: Why models can’t predict climate accurately

Unless there is also an implicit “yet” at the end, I think the implicit assumption is not demonstrated to be true. I think that is a little like predicting that polio (malaria, whooping cough, measles) will never be eradicated because of the difficulties encountered so far.

“In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”

Anyone who claims that it is possible to model such a system now or at any point in the future for more than a matter of days – possibly a few weeks at the most – is either utterly misinformed or a confidence trickster.

With reference to the question of whether there is the potential for successfully predicting the climate over more than a few days, it is pertinent that an existing model successfully predicts whether yearly rainfall will be wetter or drier than the median in the watershed east of Sacramento, CA over a forecasting horizon of one to three years. What makes this possible is that the model is the algorithm of an optimal decoder tuned to a message from the future. HDTV works by similar principles, but its decoder is tuned to a message from the past.

The particular problem that makes very-long-run reliable climate prediction unavailable is the enormous and irremediable information deficit as to the initial conditions at any chosen starting moment. I have modeled many chaotic objects myself, from the Verhulst population model to the Mandelbrot fractal to the oscillation of a pendulum (which, under certain conditions, behaves as a chaotic object). It is naive in the extreme to imagine that we can gather enough information to render the climate sufficiently predictable; as catweazle666 rightly reminds us, even the IPCC has accepted this. It can’t be done. Chaos, therefore, is one of the reasons why the models are doing so badly, and why they will continue to do badly.

Well said. Thirty year forecasts are presently impossible because the available information is insufficient. Six month to one year forecasts are a possibility because the need for information is much less.

Monckton of Brenchley: It can’t be done. Chaos, therefore, is one of the reasons why the models are doing so badly, and why they will continue to do badly.

That’s where I disagree with you, and why I cited examples of its being done in physiological systems.

catweazle666: “In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”

With all due respect for their efforts, I do not believe projections by the IPCC. A good example of a coupled non-linear chaotic system whose future states are reasonably predictable is given by Leloup and Goldbeter, “Modeling the molecular regulatory mechanism of circadian rhythms in Drosophila,” BioEssays 22:84–93, 2000 (John Wiley and Sons). The authors have a follow-up with a more complicated model applicable to mouse circadian rhythms. This work does not have the potentially civilization-shattering importance of modeling climate dynamics, but it does show that the quoted statement from the IPCC is not correct.

In fact, a chaotic quasi-periodic model of circadian rhythm with a capacity limited output transformation can produce an extremely stable circadian rhythm, with waking and sleeping falling within narrow time spans day after day. The principle is similar to the principle by which gradual input to the climate system can produce step-changes in temperature means, a topic of a previous interchange between Lord Monckton and me.

My modeling and Lord Monckton’s modeling (cited in his post just before mine) show that chaotic systems sometimes can and sometimes cannot support accurate predictions of the future. The range of possibilities in nonlinear dynamic modeling is astonishing.

Matthew R Marler says (April 3, 2014 at 4:40 pm): “but it does show that the quoted statement from the IPCC is not correct.”

I believe an alternative explanation is more likely. Circadian rhythm relies upon an outside clock, which makes the underlying chaotic system predictable. A similar mechanism is at work in predicting the earth’s ocean tides.

To the extent that orbital mechanics determines climate, climate is “predictable”. However, to try and accurately forecast climate from first principles using forcings and feedbacks is hopeless. It doesn’t work for the tides and it will not work for the significantly more complex problem of climate.

Further to ferdberple’s remark, a stable oscillator such as an orbiting planet supplies a carrier frequency to which a decoder of a message from the future can be tuned. A complete mechanistic understanding of the associated phenomenon is not required in order to tune into this message.

DirkH says (April 3, 2014 at 3:24 am): “But even Callendar 1938 outperforms todays unverified unvalidated computer models.”

This is true, but correlation is not causation; or rather, Callendar’s work appears to do better than later work, but this is not necessarily because he understood the role of radiative gases in our atmosphere.

In reading the exchange between Callendar and Simpson, it is sadly clear that you are correct in saying “Climate Science has regressed over the last 70 years; and we would have achieved MORE by not doing any “research” at all.”

One of the most interesting points about Callendar’s work is that he was well aware of the bias of urban heat islands – a point rarely raised by those seeking to use his work to defend AGW.

The usual approach in modeling processes over time – such as climate – is to start at the beginning, known as t0, and include as much information on all scales as the model can handle.

I know, I know. And if the figures have actual meaning, say, as in building a bridge, then yeah, I agree. As would any engineer. But Climate (and War) are a whole different bag of beans and require a more constrained approach.

This information is called the “initial conditions”. And one should not dismiss the potential influence of apparently minor events. It is in the nature of chaotic objects that even the smallest perturbation in the initial conditions can cause drastic bifurcations in the evolution of the object.

I think that with war as well as with climate there are too many “initial conditions” to consider from the bottom to top. So it has to start out very simply and from the top down. With Climate models, I’d start with “PDO flux + Everything Else” and take it from there. Keep it down to a half dozen factors at first, and add as you go.

But don’t try to feed it all into one end by complex formula and expect anything but meaningless insanity to come out the other end. And that’s what CMIP did, as you accurately describe — and accurately diagnose. But I think the whole approach is wrong (as in futile), though you don’t seem to.

It is also worth recalling historical events. The Hyksos people had stirrups and the Egyptians didn’t. Guess who won the war.

Ah, but the Hyksos were blooded veterans. Fully made. Hungrier for the win. The Egyptians at the time were soft, by comparison. My money says the Hickies would have hit the Egyptians for six if the lot of them had been on foot — and the operational boyz would be chalking it all up to anglefire.

Look, melord, I sympathize. Really, I do. I cut my teeth on operational and tactical wargames. My current jaundiced prejudice is the result of having walked many paths.

My philosophical basis here is inductive, not deductive — by necessity, not design. Call it a kind of reluctant intellectual “advance to the rear”.

We see a bit of T-34 syndrome in the Eastern Front buffs, too. Now, don’t get me wrong. The T-34/76 was the right tank in the right place at the right time. Best tank in history, in terms of effect (and let’s not forget the later /85). It gave Russians a big jump over the PzIII, even the H and J (50/L50) models, and it wasn’t until spring of ’42 that the Germans had upgunned the PzIV to the 75/L70, which brought them back to parity. (And KV 76-85 so on and Tiger I / Pz4H / Panther so forth. We could go on all night. Things shifted back and forth.) It was all very important. But I wouldn’t dream of ascribing the Soviet victory to tank design, as important as the T-34 was.

No, wait! We forgot the Optics and Radio Rules! And, OMG, what about the turret rings? It never ends. And before you can say “Advanced Squad Leader”, you wind up with a mishmash of seductive tactical porn at the unaffordable price of complete strategic disconnect.

One possible “model” of the climate – whatever the blazes the climate is – would be a complete list of all of the raw data values of whatever constitutes the climate, that have been measured over the period for which the model is purported to be valid.

Such a model, must be correct, by definition, since it exactly reproduces the observational data that is used to compare ANY model to.

The aim, of climate modeling, then becomes a matter of simply reducing the number of data entries in the model, while still being able to replicate all the observational measured values from the reduced element model.

A really good model of some process, would be a closed form equation, with some small number of derived parameters, from which the outcome of any experimental observation of the system, can be predicted, with some degree of certainty.

It would appear, that existing climate models, require more parameters, than the total number of experimentally observed real measured values of the “climate.” That is a really lousy bargain, and explains why you need a terrafloppy computer.

You are describing the method for construction of a model that features a number of adjustable parameters whose numerical values are extracted from the observational data. The method of extraction generally employs an intuitive rule of thumb known as a “heuristic”; minimization of the squared error is an example of one of them. In a circumstance in which one heuristic selects a particular value, a different heuristic extracts a different value. In this way, the method of heuristics violates non-contradiction, a logical principle.
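The point that different heuristics extract different parameter values can be made concrete. A toy sketch (the data and the one-parameter line through the origin are invented for illustration): least squares and least absolute deviations, applied to the same observations, select different slopes.

```python
# One adjustable parameter b in the model y = b*x, fitted to data that
# contain a single outlier, under two different heuristics.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.0, 2.0, 3.0, 4.0, 25.0]   # the last point is an outlier

# Heuristic 1: minimise squared error (closed form for y = b*x).
b_ls = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Heuristic 2: minimise absolute error (crude grid search over b).
b_lad = min((b / 1000.0 for b in range(0, 5001)),
            key=lambda b: sum(abs(y - b * x) for x, y in zip(xs, ys)))

print(b_ls, b_lad)   # roughly 2.82 versus 1.0
```

Neither slope is “the” value of the parameter; each heuristic defines its own answer, which is exactly the difficulty being described.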

I have for many years used computer modeling to predict the performance of various optical systems, that I have designed, and actually had manufactured; and in numbers that are counted in billions for some systems.

You could be using some of them right now.

If your computer mouse glows red underneath, the chances (today) are about 50:50 that it has one of my patented macro camera lenses in it (1:1 close up, of about 1.5 mm focal length) and it also has a sophisticated non-imaging optical illumination system of my design. At one time the chances were about 90:10, but now it is about 75% likely that you have my design, or some Hong Kong phooey knockoff from my patents. If your mouse glows blue underneath; forget it; I had nothing to do with that; but know who did it. The blue is strictly for sex appeal; no virtue whatsoever; but very cool.

Optical design has two major branches; imaging and non-imaging.

Imaging is self-explanatory: light from a point on an object is directed to a corresponding point on an image, no matter at what angle (within reason) it leaves the object.

In non-imaging optics, no point to point correspondence is required. One seeks to direct as much light from a given non-point source area, onto a non point sink area, and source and sink, could be shapes other than flat surfaces. Where the rays end up is not important; we call it photon herding; get all the horses in the corral. Well the reverse would be photon stampeding; get the cows out of the barn onto the pasture. That’s the aim of LED lighting.

Actually NIO is every bit as difficult as IO, but it has its own set of “you can’t get there from here.” rules. You can see one every day in front of you in the tail lights of the car in front. An array of bright spots, in a mostly dark field area.

The second law of thermodynamics, won’t allow you to uniformly illuminate the whole area over the range of viewing angles required.

Modelling these things on a computer, is a very hierarchical process.

You can start with something as simple as Newton’s thin lens formula (for imaging): xy = f^2 where x and y are the object and image distances measured along the axis from the two focal points. All kinds of pestilence are hidden, by that simple model, and you can complicate it some by changing to the Gauss equations for a thick lens, possibly with three different optical media for object space, lens, and image space.
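Newton’s thin-lens formula and the Gaussian form can be cross-checked in a couple of lines (invented numbers; x and y are measured from the two focal points, s and s′ from the lens itself, single medium assumed):

```python
# Newton's thin-lens formula: x * y = f^2, distances from the focal points.
# Gaussian form: 1/s + 1/s' = 1/f, distances from the lens itself.
f = 50.0          # focal length, mm (invented example)
x = 100.0         # object distance beyond the front focal point, mm

y = f * f / x     # Newton: image forms 25 mm beyond the rear focal point

# Convert to Gaussian distances and confirm both forms agree.
s, s_img = x + f, y + f                  # 150 mm and 75 mm
print(y, 1.0 / s + 1.0 / s_img)          # both consistent with 1/f = 0.02
```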

Sequential ray tracing programs can trace millions of geometrical optics rays through any number of optical surfaces, including refractive, reflective or even diffractive surfaces. The geometrical optical approximation assumes that rays have a wavelength of zero, so that there are no diffraction effects. Non-imaging (NIO) ray tracing can send millions of rays from hundreds of sources, to any number of objects, hitting some but not others, and just carrying on till they get stopped somewhere.

Then you can do real physical optics, where the programs can resort to Maxwell’s equations, or even propagate coherent Gaussian beam modes.

The point is, that at any point in the design, you have to ask; just how big a gun, do I need for this fight ?

If you launch into a physical optics battle right at the outset, you are likely to have no insight whatsoever, as to what optical structure has a chance of working at all.

So all of the models in the hierarchy, have their place, and you better start off with bare fists, before resorting to any firearms.

I’m sure this isn’t much different from designing a carbon fiber aero-plane on a computer.

But the designer has to know at what point in the design, you need to crank up the modeling horsepower a notch, to get closer to what the natural laws of the universe, will allow you to accomplish.

I’m happy to be able to say that at least 97% of my finished designs (of this and that) actually went into production. As far as I know, not one of them (designs that is) failed to perform within the expected range of behavior, with acceptable manufacturing yields.

Modelling real systems, is a standard part of engineering, and our customers expect our stuff to do what we claim it will do.

We’d all be unemployed, if our idea of par, was what passes for climate modeling.

3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.”

Okay, the next time I have a few hundred million dollars going spare I’ll give it a go.

Another excellent piece by Christopher Monckton.
Of course, it was computer models that gave us the BBQ summer, and computer models which predicted that the recent British winter would have slightly below average rainfall.

I would think that the IPCC’s claims that computer models can predict the climate in 50 or 100 years time are fraudulent.
I would think that, to successfully predict the future, at least three conditions must be fulfilled:
1. The physical laws that drive the system are perfectly understood.
2. The system is non-chaotic.
3. The initial conditions are known with perfect precision.

None of these conditions are met, even remotely.
On the other hand, computer models that predict the Earth’s position in 100 years are probably right. That’s because planetary dynamics models meet all three conditions. For example, the Newtonian and relativistic physical laws are very well understood, and can provide predictions to many decimal places of precision. Compared to this, climate models are laughably deficient.
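The contrast with planetary dynamics can be sketched in a few lines: a two-body orbit integrated with even a crude symplectic method conserves its energy and stays on track over many periods. A toy sketch in scaled units (GM = 1, circular orbit of radius 1), not an ephemeris-grade integrator:

```python
# Circular Kepler orbit, GM = 1, integrated with semi-implicit
# (symplectic) Euler for ten orbital periods of 2*pi each.
import math

GM, dt = 1.0, 0.001
rx, ry = 1.0, 0.0          # start on a circular orbit of radius 1
vx, vy = 0.0, 1.0          # circular-orbit speed for GM = 1, r = 1

steps = int(10 * 2 * math.pi / dt)
for _ in range(steps):
    r3 = (rx * rx + ry * ry) ** 1.5
    vx -= dt * GM * rx / r3          # kick ...
    vy -= dt * GM * ry / r3
    rx += dt * vx                    # ... then drift
    ry += dt * vy

radius = math.hypot(rx, ry)
energy = 0.5 * (vx * vx + vy * vy) - GM / radius
print(radius, energy)   # radius stays near 1, energy near -0.5
```

Unlike the climate case, the system is non-chaotic, the law is known exactly, and small initial errors stay small: the same crude method applied to a chaotic object would be useless at this range.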
But how to get this over to the Camerons and Obamas of this world….
Chris

It is not logically proper to reach a conclusion on the issue of whether computer models can predict the climate in 50 or 100 years time as the word “predict” is polysemic. Details on why this is so are available at http://wmbriggs.com/blog/?p=7923 .

Modelling real systems, is a standard part of engineering, and our customers expect our stuff to do what we claim it will do.

I heartily agree with everything you said, but I want to highlight this part emphatically. I don’t ‘model’ in any of the usual direct ways we think of modeling. But something both academic scientists and non technical folk seem to fail to grasp is that in engineering, the damn thing has to do what we claim it will. That’s what we do. I don’t have to be a master of the theoretical science to apply the principles, but the science had darn well better be correct, or the application will fail. This is why I harp endlessly on the failed projections. If you want me to accept your science and add from it to my engineering toolbox, you had better be able to demonstrate that your theory accurately and usefully describes something in reality that we care about. I got no use for wrenches that can’t be used to tighten bolts, and I’m not going to pretend I can tighten bolts if my wrenches don’t work.

It would appear, that existing climate models, require more parameters, than the total number of experimentally observed real measured values of the “climate.” That is a really lousy bargain, and explains why you need a terrafloppy computer.

And my premise being that a terrafloppy won’t do. All you get is a bunchload of entirely false precision — as the train flies off the track and plunges down the ravine.

It’s as bad as trying to reproduce the Eastern Front using “Advanced Squad Leader”. Or “Doom”.

You’ll do better with pencil and paper (using one side of the page), as crude as that may be.

“Your number is not only in error, it is a variable and not a constant”

Earth rotation rate is not constant. Furthermore, the rate of rotation you gave corresponds to a solar day and not a sidereal day, which would give you a much closer number to what is currently the Earth’s angular velocity of 1.160576…x10^-5 revolutions per second.

We could calculate that number more precisely, but there aren’t exactly 375.25 days per year. And that inexactness changes over time due to gravitational fluid torques on the Earth. These things, I think, account for the irregularity with which so-called leap seconds are applied.

With that, I have exceeded what I know. What I do know for certain is that the Earth’s inertial angular velocity is not well-characterized by calculations that use 1 rev/24 hours.
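The two rotation rates can be reconciled on the back of an envelope: over a year the Earth turns once more with respect to the stars than the count of solar days, which is where the quoted figure comes from. A sketch using the round numbers 86,400 s and 365.25 days:

```python
# A year of 365.25 solar days contains 366.25 sidereal rotations,
# so the sidereal day is shorter than the 86,400-second solar day.
solar_day = 86400.0                          # seconds
days_per_year = 365.25

sidereal_day = solar_day * days_per_year / (days_per_year + 1.0)
omega = 1.0 / sidereal_day                   # revolutions per second

print(sidereal_day, omega)   # about 86164.1 s and 1.160576e-5 rev/s
```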

ferdberple: Circadian rhythm relies upon an outside clock, which makes the underlying chaotic system predictable.

This is only semi true. Animals, including humans, maintain a circadian rhythm in the absence of external cues. The “circa” in circadian means “approximately”, and a human or hamster without external cues will maintain a rhythm with about a 24.5 hr period or 24.0 hour period, respectively. The rhythm depends on a feedback in the expression and transcription of genes, and the system has been well-studied in animals and plants whose genes and transcription factors can be directly manipulated. In humans the stability of the rhythm has been studied in at least one cave dweller, and in “forced desynchrony” routines where, for example, the light sequence is 90 min on and 30 min off for multiple weeks, and in submarine crews (4 hrs on, 4hrs off, and other unnatural schedules.) What the external cues, mainly sunrise and sunset, do is re-synchronize the natural oscillator to maintain the activity cycle appropriate to the animal in its niche: e.g., hamsters go to sleep near sunrise, and mosquito females forage for blood predominantly near sunrise and sunset.

For more information, start here: http://ccb.ucsd.edu/; check out the videos of the circadian rhythm machinery (SCN, etc.). For an example of a forced desynchrony study with measures of circadian rhythms in melatonin, core body temperature, cortisol, and visual acuity, try:

Starbuck: The correct description of Chaos is “Stochastic behavior in a Deterministic System” or lawlessness in a system governed entirely by laws.

There is no single short description of “chaos”. Characterizations include: a positive Lyapunov exponent; functions of two or more periods where the ratio of the periods is irrational; strange attractors; functions that go through a region of phase space with a perfect periodicity but no point with a perfect periodicity; and others.

I picked up both definitions from Ian Stewart in his book “Does God Play Dice?” (1989), page 17. His book (along with Mandelbrot’s and others’) became the basis for my studies of Chaos. Indeed, the word “chaos” itself has different connotations here than in the everyday world. The definition was proposed by an international conference held by the Royal Society in London in 1986 to address the distinction.

Stewart supplied the interpretation – “lawlessness in a system governed entirely by laws” – while the Royal Society proposed “stochastic behavior in a deterministic system”.

“Your number is not only in error, it is a variable and not a constant”

Earth rotation rate is not constant. Furthermore, the rate of rotation you gave corresponds to a solar day and not a sidereal day, which would give you a much closer number to what is currently the Earth’s angular velocity of 1.160576…x10^-5 revolutions per second.

We could calculate that number more precisely, but there aren’t exactly 375.25 days per year …”

Well I could just take a wild ass guess, and get closer than your number.

My first stab at it would most likely be 365.25 days.

And I don’t have a clue about the angular rotational rate of our galaxy, or any larger universal entity, so I cited the value I get in a rotating frame of reference, with the mean sun direction vector, as my zero angular reference.

So I’ll stick with my 86,400 second day, since the starting erroneous assumption was infinite; which is definitely incorrect.

But when you do your more accurate model, don’t forget to reference it as a sidereal model, so we can correctly assign any discrepancies.

Why would you want a model where the earth does not circumnavigate the sun? That’s as erroneous as an infinite day length.

Mr Oldberg continues to write complete nonsense, misdefining and misapplying “heuristic” and wittering on in his usual futile way about his notion that people should not use the word “predict” in the sense of “predict” because, he feels, it can mean something other than “predict”. Well, the IPCC uses “predict” when it means “predict”, and so do I, and so does everyone except Mr Oldberg. The fact is that the IPCC’s predictions of global temperature change have failed and failed and failed again, and trolling to the effect that their predictions were not predictions, or that one cannot make predictions because of Mr Oldberg’s barmy interpretation of the word “predictions” will not conceal that fact.

The RSS data for March 2014 are now available, and there has been no global warming for 17 years 8 months. The trend is flat. The IPCC did not predict that. The models did not predict that. They were wrong. Get over it.

Nevertheless, the descriptions that I wrote are those that appear in the mathematics.

Back to climate for a moment: because some of the drivers are themselves random (volcanoes, variations in the particles that provide nucleation centers for condensation, etc.), the climate really should be thought of as a random dynamical system, and these have characteristics a little different from their deterministic counterparts.

Monckton of Brenchley: Mr Oldberg continues to write complete nonsense, misdefining and misapplying “heuristic” and wittering on in his usual futile way about his notion that people should not use the word “predict” in the sense of “predict” because, he feels, it can mean something other than “predict”.

I am with you on that.

To extend the discussion a little, without reference to anything in particular said by anyone on this thread: a problem with statements like “GCMs predict” or “a chaotic system is not predictable” is that we fail to specify the error bounds that would make a prediction usable, and the conditions under which such error bounds are achievable. Clearly the GCMs are running too hot now. But all predictions are made, and all applications of mathematical modeling are carried out, with an error of approximation. Even linear and polynomial models, as well as trigonometric polynomials, have errors of approximation, and these may be severe outside the conditions in which the models have been fitted and tested. Chaotic models differ from the others only in that their error of approximation grows faster, not that the others are error-free.
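To make the point concrete, here is a minimal sketch (with invented numbers) of a polynomial model whose in-sample approximation error is small but whose error outside the fitted conditions is severe:

```python
import numpy as np

# Fit a cubic to sin(x) on [0, pi]; inside this interval the fit is decent.
x_fit = np.linspace(0.0, np.pi, 50)
coeffs = np.polyfit(x_fit, np.sin(x_fit), deg=3)

# In-sample approximation error: small, but never zero.
in_err = np.max(np.abs(np.polyval(coeffs, x_fit) - np.sin(x_fit)))

# Extrapolation error at 2*pi, well outside the fitted conditions.
out_err = abs(np.polyval(coeffs, 2 * np.pi) - np.sin(2 * np.pi))

print(f"max in-sample error: {in_err:.4f}")
print(f"extrapolation error: {out_err:.4f}")
```

The cubic is never exact even where it was fitted; outside the fitting interval its error is orders of magnitude worse, which is the sense in which every model, chaotic or not, carries an error of approximation.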

How accurate would a climate model have to be in order to be useful? I would propose, to start the discussion, that an RMS error of 0.25 C over 120 years, for the mean, the two quartiles, and the 5% and 95% points of the distribution (with some degree of temporal and regional specificity: say, Central Ohio in June, at noon and midnight), together with a much lower average bias (i.e. not a growing upward or downward bias such as current GCMs exhibit), would be useful. A model capable of that is not going to be developed any time soon, and certainly could not be tested any time soon. Nothing in the knowledge base of mathematical analysis or empirical science implies that it is intrinsically impossible; it simply hasn’t been done yet, and it will be hard to do.
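As a sketch of how such a usefulness criterion could be scored, the following uses entirely synthetic “observed” and “modelled” series; the 0.25 C RMS threshold is the one proposed above, and every other number is an assumption for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 120-year annual series: "observed" vs "modelled" temperatures
# for one region and season (illustrative numbers only).
years = 120
observed = 20.0 + 0.01 * np.arange(years) + rng.normal(0.0, 0.3, years)
modelled = observed + rng.normal(0.0, 0.2, years)  # a model with ~0.2 C scatter

rms_error = np.sqrt(np.mean((modelled - observed) ** 2))
mean_bias = np.mean(modelled - observed)

# The proposed criterion: RMS error below 0.25 C, with no large
# systematic (growing) bias.
useful = rms_error < 0.25 and abs(mean_bias) < 0.05
print(f"RMS error: {rms_error:.3f} C, bias: {mean_bias:.3f} C, useful: {useful}")
```

The point of scoring both RMS error and bias separately is that a model can have small scatter yet still drift steadily warm, which is the failure mode attributed to current GCMs above.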

Thanks for offering your views. I offer the following counter-examples in refutation of your claim that “…all predictions are made, and all applications of mathematical modeling are carried out, with an error of approximation.”

The claim that “the global temperature will be 15 Celsius at midnight GMT on January 1, 2030” is sure to be falsified by the evidence unless the position is adopted that what is meant is that the temperature will be “about 15 Celsius.” That it will be “about 15 Celsius” is an example of an equivocation, for the word “about” is polysemic. That the claim is an equivocation renders it non-falsifiable and thus non-scientific. In particular, there is no temperature value, however distant from 15 Celsius, that renders the claim false.

The approach to stating a claim that is illustrated above may be contrasted to the approach resulting in the claim that “the global temperature will lie between 15 and 16 Celsius at midnight GMT on January 1, 2030.” In this case, the claim is stated unequivocally. If the observed temperature is less than 15 or greater than 16, this claim is false. Otherwise it is true.
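The distinction can be made mechanical: an unequivocal claim is one that can be written as a predicate returning true or false for any observed value. A minimal sketch:

```python
def interval_claim(lower: float, upper: float):
    """An unequivocal (falsifiable) claim: true iff the observation
    falls inside the stated interval, false otherwise."""
    def check(observed: float) -> bool:
        return lower <= observed <= upper
    return check

# "The global temperature will lie between 15 and 16 Celsius ..."
claim = interval_claim(15.0, 16.0)

print(claim(15.4))  # True: observation inside the interval
print(claim(16.2))  # False: the claim is falsified
```

No such predicate can be written for “about 15 Celsius”, because “about” leaves the interval unspecified; that is precisely the equivocation complained of above.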

The entities that Monckton of Brenchley would evidently like to be free to call “predictions” are equivocations similar to the one stated above. It would be well if the terms of arguments about global warming were to be disambiguated such that decision makers were not misled through applications of the equivocation fallacy. To do so is my recommendation. For the person who favors a logical approach to scientific research this recommendation has no downside. It is, however, not currently being practiced.

That’s true for long-term predictions but not short-term predictions. Take the heart rate example, and assume for now that you know your heart rate is 70 bpm. Taking your most recent beat as time 0 (and assuming a reasonably accurate timekeeper), you can predict reasonably well when the next 3–10 heartbeats will occur, but not the 70th. The same is true of breathing, and of a spiking neuron: for the latter, if you know the dynamics and the time of the most recent spike, you can reasonably predict the times of the next 3–10 spikes, but not the 30th.

That’s to amplify my point earlier: when speaking of “predicting”, one should specify the time over which the prediction is intended to be accurate, and the accuracy needed for the prediction to be useful.
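A toy calculation illustrates why: suppose we predict beat times from a believed rate of 70 bpm while the true rate is 69 bpm (both numbers invented for illustration). The phase error of the n-th predicted beat grows linearly with n, so near-term beats are predictable and distant ones are not:

```python
# Predicting heartbeat times from an assumed steady rate.
assumed_period = 60.0 / 70.0  # seconds per beat, the rate we believe
true_period = 60.0 / 69.0     # seconds per beat, the actual rate

def timing_error(n: int) -> float:
    """Absolute error (s) in the predicted time of the n-th beat."""
    return abs(n * assumed_period - n * true_period)

for n in (3, 10, 70):
    print(f"beat {n:3d}: error {timing_error(n):.3f} s")
```

With these assumed numbers the error at the 3rd beat is a few hundredths of a second, but by the 70th beat it is roughly a full beat period, at which point the prediction carries no information about when the beat will occur.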

I’m with you on that. I studied my own heartbeats at the time I was following Chaos theory and saw that, as a synchronized free-running oscillator, it was not going to be a good timekeeper! I especially relied on that observation and the departures from regularity to illustrate to my physician that a drug he was prescribing did indeed interfere with that regularity.

Your comment to Monckton is quite sensible. The most important takeaway from this thread is the critique of modelling as a predictor. My gut response for a number of years has been to be cautious of the conclusions being drawn, not because I have the skills to produce or even analyze models, but because of Karl Popper’s falsification of scientific determinism in his book “The Open Universe”. Determinism seems to be driving the predictions from models.

What I never see mentioned is any concern over what the actions being contemplated might actually do to the climate. It seems most people feel that shutting down the use of carbon fuels will restore us to earlier, comfortable times. I am not so sure, not at all.

Maybe I am missing this as I don’t read each and every thread on this subject, let alone all the papers!

Personally, Mr Oldberg, and with respect to your opinion, anyone who even tries to predict future weather, or makes statements that can be challenged, should take it on the chin. There is one element in science that no one can predict, namely what the future holds, especially considering chaos theory. If someone is silly enough to swim in a lake full of sharks, man-eaters, and gets eaten, that is not chaos. It is a likely outcome. If the earth is suddenly hit by an asteroid coming from the direction of the sun, which cannot be spotted, that causes chaos. It was completely unexpected. If a volcano erupts suddenly, with no warning, it creates chaos. But the whole hypothesis, that doom for humans and living organisms will happen if we don’t cut CO2 down to acceptable levels (what are they, exactly?) and that doing so will prevent drastic climate change, is a dicey model to predict from. We don’t know. Personally, I think some North Americans would like their winter to end right now and would welcome an increase of 2 C. But a drop of 5 C would have more effect on precipitation and crop growing. Predicting future temperatures and weather is much like telling fortunes to gullible people. We don’t know, and the unpredictable is not science.

Lord Christopher, it would seem we have deserted you; it’s the timelines at fault. This latest PREDICTION that we are heading for a bad El Niño episode is a stupid prediction. The sun will have a say on the climate, and if you do wish to predict, there is a 50/50 chance that La Niña will come along again too. They are depending on swaying uneducated people to their point of view.

NB. In Australia you have my post dated April 5th, 9 pm. Actually, in my real time it is 2.48 pm on 6th April; we have just turned our clocks back one hour.

The multifarious experts in climate change differ principally in the way they choose to start their invalid linear extrapolations, which go off the rails as soon as a “trend” reverses or starts to curve. So far, they all have.

Well, I could just take a wild-ass guess and get closer than your number.

Your “wild-ass guess” would fail. “My number” is a fairly accurate representation of the Earth’s rotation rate as used in long-term, high-accuracy inertial navigation systems. In other words: you are disagreeing with a parameter that is directly measurable and has been extensively verified in practice.

Of course I don’t use revolutions per second, in practice, but rather radians per second. That value would be 7.2921150×10^-5 rad/s. It’s extremely widely used. You can Google it, if you like.
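For anyone wanting to check the arithmetic: that rotation rate corresponds to one rotation relative to the stars (the sidereal day), not the 86,400-second mean solar day, and the conversion is a one-liner:

```python
import math

OMEGA_EARTH = 7.2921150e-5  # rad/s, Earth rotation rate (the value quoted above)

# One full rotation relative to the inertial frame (the sidereal day).
sidereal_day = 2 * math.pi / OMEGA_EARTH
solar_day = 86400.0  # mean solar day, in seconds

print(f"sidereal day: {sidereal_day:.1f} s")              # ~86164.1 s
print(f"difference:   {solar_day - sidereal_day:.1f} s")  # ~235.9 s
```

The roughly four-minute difference is the extra rotation the Earth must make each day to face the sun again as it moves along its orbit, which is why the two day lengths must not be conflated in a model.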

If you made an error of that size in the navigation software of e.g. an F-16 aircraft (this is more of a medium-accuracy system), you would be introducing an error several times that permitted by the requirements spec.

Which is what I meant by “fail”.
