Are Climate Modelers Scientists?

For going on two years now, I’ve been trying to publish a manuscript that critically assesses the reliability of climate model projections. The manuscript has been submitted twice each to two leading climate journals, and rejected all four times, on the advice of nine of the ten reviewers. More on that below.

The analysis propagates climate model error through global air temperature projections, using a formalized version of the “passive warming model” (PWM) GCM emulator reported in my 2008 Skeptic article. Propagation of error through a GCM temperature projection reveals its predictive reliability.

Those interested can consult the invited poster (2.9 MB pdf) I presented at the 2013 AGU Fall Meeting in San Francisco. Error propagation is a standard way to assess the reliability of an experimental result or a model prediction. However, climate models are never assessed this way.

Here’s an illustration: the Figure below shows what happens when the average ±4 Wm-2 long-wave cloud forcing error of CMIP5 climate models [1], is propagated through a couple of Community Climate System Model 4 (CCSM4) global air temperature projections.

CCSM4 is a CMIP5-level climate model from NCAR, where Kevin Trenberth works, and was used in the IPCC AR5 of 2013. Judy Curry wrote about it here.

In panel a, the points show the CCSM4 anomaly projections of the AR5 Representative Concentration Pathways (RCP) 6.0 (green) and 8.5 (blue). The lines are the PWM emulations of the CCSM4 projections, made using the standard RCP forcings from Meinshausen. [2] The CCSM4 RCP forcings may not be identical to the Meinshausen RCP forcings. The shaded areas are the range of projections across all AR5 models (see AR5 Figure TS.15). The CCSM4 projections are in the upper range.

In panel b, the lines are the same two CCSM4 RCP projections. But now the shaded areas are the uncertainty envelopes resulting when ±4 Wm-2 CMIP5 long wave cloud forcing error is propagated through the projections in annual steps.

The uncertainty is so large because ±4 W m-2 of annual long wave cloud forcing error is ±114× larger than the annual average 0.035 Wm-2 forcing increase of GHG emissions since 1979. Typical error bars for CMIP5 climate model projections are about ±14 C after 100 years and ±18 C after 150 years.
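The growth of that envelope follows directly from root-sum-square accumulation. As an illustration (the per-step value below is back-computed from the quoted ±14 C figure, not taken from the manuscript), a constant annual uncertainty of about ±1.4 C reproduces both quoted error bars:

```python
import math

def rss_uncertainty(u_step, n_steps):
    """Root-sum-square accumulation of a constant per-step uncertainty:
    U(n) = sqrt(n * u_step**2) = u_step * sqrt(n)."""
    return math.sqrt(sum(u_step ** 2 for _ in range(n_steps)))

# Implied annual uncertainty, back-computed from the ±14 C envelope at 100 years:
u_annual = 14.0 / math.sqrt(100)            # 1.4 C per annual step

print(rss_uncertainty(u_annual, 100))       # 14.0 -> ±14 C after 100 years
print(rss_uncertainty(u_annual, 150))       # ≈17.1 -> ≈ ±18 C after 150 years
```

The square-root growth is why the envelope widens steadily even though the per-step error never changes.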

It’s immediately clear that climate models are unable to resolve any thermal effect of greenhouse gas emissions, or to tell us anything about future air temperatures. Climate models cannot ever have resolved an anthropogenic greenhouse signal; not now, and not at any time in the past.

Propagation of errors through a calculation is a simple idea. It’s logically obvious. It’s critically important. It gets pounded into every single freshman physics, chemistry, and engineering student.

And it has escaped the grasp of every single Ph.D. climate modeler I have encountered, in conversation or in review.

That brings me to the reason I’m writing here. My manuscript has been rejected four times; twice each from two high-ranking climate journals. I have responded to a total of ten reviews.

Nine of the ten reviews were clearly written by climate modelers, were uniformly negative, and recommended rejection. One reviewer was clearly not a climate modeler. That one recommended publication.

I’ve had my share of scientific debates. A couple of them not entirely amiable. My research (with colleagues) has overthrown four ‘ruling paradigms,’ and so I’m familiar with how scientists behave when they’re challenged. None of that prepared me for the standards at play in climate science.

I’ll start with the conclusion, and follow on with the supporting evidence: never, in all my experience with peer-reviewed publishing, have I encountered such incompetence in a reviewer. Much less incompetence evidently common to a whole class of reviewers.

The shocking lack of competence I encountered made public exposure seem a necessary civic corrective.

Physical error analysis is critical to all of science, especially experimental physical science. It is not too much to call it central.

Result ± error tells what one knows. If the error is larger than the result, one doesn’t know anything. Geoff Sherrington has been eloquent about the hazards and trickiness of experimental error.

All of the physical sciences hew to these standards. Physical scientists are bound by them.

Climate modelers do neither; by their own lights, they are not so bound.

I will give examples of all of the following concerning climate modelers:

They neither respect nor understand the distinction between accuracy and precision.

They understand nothing of the meaning or method of propagated error.

They think physical error bars mean the model itself is oscillating between the uncertainty extremes. (I kid you not.)

They don’t understand the meaning of physical error.

They don’t understand the importance of a unique result.

Bottom line? Climate modelers are not scientists. Climate modeling is not a branch of physical science. Climate modelers are unequipped to evaluate the physical reliability of their own models.

The incredible material that follows is verbatim reviewer transcript, quoted in italics. Every idea below is presented as the reviewer meant it. No quote is deprived of its context, and none has been truncated into something other than what the reviewer meant.

And keep in mind that these are arguments that certain editors of certain high-ranking climate journals found persuasive.

1. Accuracy vs. Precision

The distinction between accuracy and precision is central to the argument presented in the manuscript, and is defined right in the Introduction.

The accuracy of a model is the difference between its predictions and the corresponding observations.

The precision of a model is the variance of its predictions, without reference to observations.

Physical evaluation of a model requires an accuracy metric.

There is nothing more basic to science itself than the critical distinction of accuracy from precision.
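The distinction is easy to demonstrate with a toy calculation (the numbers are hypothetical, chosen only to make the point): an ensemble whose members agree tightly with one another is precise, but if they share a common bias against observations it is not accurate:

```python
import statistics

# Hypothetical observed anomalies:
observations = [0.0, 0.1, 0.2, 0.3, 0.4]

# A tight ensemble of model runs sharing a common +1.0 bias:
ensemble = [
    [1.00, 1.11, 1.19, 1.31, 1.40],
    [1.01, 1.09, 1.21, 1.29, 1.41],
    [0.99, 1.10, 1.20, 1.30, 1.39],
]

n = len(observations)
ens_mean = [statistics.mean(run[i] for run in ensemble) for i in range(n)]

# Precision: spread of ensemble members about their own mean,
# with no reference to observations at all.
precision = statistics.mean(
    abs(run[i] - ens_mean[i]) for run in ensemble for i in range(n))

# Accuracy: root-mean-square difference between the ensemble mean
# and the observations.
accuracy = statistics.mean(
    (m - o) ** 2 for m, o in zip(ens_mean, observations)) ** 0.5

print(precision)  # small: the members agree with one another
print(accuracy)   # ≈ 1.0: the shared bias against the observations
```

The ensemble spread (precision) says nothing about the 1.0-unit bias; only the comparison with observations (accuracy) reveals it.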

“[T]he author thinks that a probability distribution function (pdf) only provides information about precision and it cannot give any information about accuracy. This is wrong, and if this were true, the statisticians could resign.”

“The best way to test the errors of the GCMs is to run numerical experiments to sample the predicted effects of different parameters…”

“The author is simply asserting that uncertainties in published estimates [i.e., model precision – P] are not ‘physically valid’ [i.e., not accuracy – P]- an opinion that is not widely shared.”

Not widely shared among climate modelers, anyway.

The first reviewer actually scorned the distinction between accuracy and precision. This, from a supposed scientist.

The accuracy-precision difference was extensively documented to relevant literature in the manuscript, e.g., [3, 4].

The reviewers ignored that literature. The final reviewer dismissed it as mere assertion.

Every climate modeler reviewer who addressed the precision-accuracy question similarly failed to grasp it. I have yet to encounter one who understands it.

2. No understanding of propagated error

“The authors claim that published projections do not include ‘propagated errors’ is fundamentally flawed. It is clearly the case that the model ensemble may have structural errors that bias the projections.”

Rogelj (2013) concerns the economic costs of mitigation. Their Figure 1b includes a global temperature projection plus uncertainty ranges. The uncertainties, “are based on a 600-member ensemble of temperature projections for each scenario…” [5]

I.e., the reviewer supposes that model precision = propagated error.

Murphy (2007) write, “In order to sample the effects of model error, it is necessary to construct ensembles which sample plausible alternative representations of earth system processes.” [6]

I.e., the reviewer supposes that model precision = propagated error.

Rowlands (2012) write, “Here we present results from a multi-thousand-member perturbed-physics ensemble of transient coupled atmosphere–ocean general circulation model simulations,” and go on to state that, “Perturbed-physics ensembles offer a systematic approach to quantify uncertainty in models of the climate system response to external forcing, albeit within a given model structure.” [7]

I.e., the reviewer supposes that model precision = propagated error.

Not one of this reviewer’s examples of propagated error includes any propagated error, or even mentions propagated error.

Not only that, but not one of the examples discusses physical error at all. It’s all model precision.

This reviewer doesn’t know what propagated error is, what it means, or how to identify it. This reviewer also evidently does not know how to recognize physical error itself.

Let’s find out. Stainforth (2005) includes three figures; every single one of them presents error as projection variation. [8]

Here’s their Figure 1:

Original Figure Legend: “Figure 1 Frequency distributions of Tg (colours indicate density of trajectories per 0.1 K interval) through the three phases of the simulation. a, Frequency distribution of the 2,017 distinct independent simulations. b, Frequency distribution of the 414 model versions. In b, Tg is shown relative to the value at the end of the calibration phase and where initial condition ensemble members exist, their mean has been taken for each time point.”

Here’s what they say about uncertainty: “[W]e have carried out a grand ensemble (an ensemble of ensembles) exploring uncertainty in a state-of-the-art model. Uncertainty in model response is investigated using a perturbed physics ensemble in which model parameters are set to alternative values considered plausible by experts in the relevant parameterization schemes.”

There it is: uncertainty is directly represented as model variability (density of trajectories; perturbed physics ensemble).

The remaining figures in Stainforth (2005) derive from this one. Propagated error appears nowhere and is nowhere mentioned.

Reviewer supposition: model precision = propagated error.

Collins (2012) state that adjusting model parameters so that projections approach observations is enough to “hope” that a model has physical validity. Propagation of error is never mentioned. Collins Figure 3 shows physical uncertainty as model variability about an ensemble mean. [9] Here it is:

Original Legend: “Figure 3 | Global temperature anomalies. a, Global mean temperature anomalies produced using an EBM forced by historical changes in well-mixed greenhouse gases and future increases based on the A1B scenario from the Intergovernmental Panel on Climate Change’s Special Report on Emission Scenarios. The different curves are generated by varying the feedback parameter (climate sensitivity) in the EBM. b, Changes in global mean temperature at 2050 versus global mean temperature at the year 2000, … The histogram on the x axis represents an estimate of the twentieth-century warming attributable to greenhouse gases. The histogram on the y axis uses the relationship between the past and the future to obtain a projection of future changes.”

“To say that this error indicates that temperatures could hugely cool in response to CO2 shows that their model is unphysical.”

“[T]his analysis would predict that the models will swing ever more wildly between snowball and runaway greenhouse states.”

“Indeed if we carry such error propagation out for millennia we find that the uncertainty will eventually be larger than the absolute temperature of the Earth, a clear absurdity.”

“An entirely equivalent argument [to the error bars] would be to say (accurately) that there is a 2K range of pre-industrial absolute temperatures in GCMs, and therefore the global mean temperature is liable to jump 2K at any time – which is clearly nonsense…”

Or that the bars from propagated error represent physical temperature itself.

No sophomore in physics, chemistry, or engineering would make such an ignorant mistake.

But Ph.D. climate modelers have invariably done so. One climate modeler in the audience made the mistake verbally, during the Q&A after my seminar on this analysis.

The worst of it is that both the manuscript and the supporting information document explained that error bars represent an ignorance width. Not one of these Ph.D. reviewers gave any evidence of having read any of it.

5. Unique Result – a concept unknown among climate modelers.

Do climate modelers understand the meaning and importance of a unique result?

“[L]ooking the last glacial maximum, the same models produce global mean changes of between 4 and 6 degrees colder than the pre-industrial. If the conclusions of this paper were correct, this spread (being so much smaller than the estimated errors of +/- 15 deg C) would be nothing short of miraculous.”

“In reality climate models have been tested on multicentennial time scales against paleoclimate data (see the most recent PMIP intercomparisons) and do reasonably well at simulating small Holocene climate variations, and even glacial-interglacial transitions. This is completely incompatible with the claimed results.”

“The most obvious indication that the error framework and the emulation framework presented in this manuscript is wrong is that the different GCMs with well-known different cloudiness biases (IPCC) produce quite similar results, albeit a spread in the climate sensitivities.”

Let’s look at where these reviewers get such confidence. Here’s an example from Rowlands, (2012) of what models produce. [7]

The variable black line in the middle of the group represents the observed air temperature. I added the horizontal black lines at 1 K and 3 K, and the vertical red line at year 2055. Part of the red line is in the original figure, as the precision uncertainty bar.

This Figure displays thousands of perturbed physics simulations of global air temperatures. “Perturbed physics” means that model parameters are varied across their range of physical uncertainty. Each member of the ensemble is of equivalent weight. None of them are known to be physically more correct than any of the others.

The physical energy-state of the simulated climate varies systematically across the years. The horizontal black lines show that multiple physical energy states produce the same simulated 1 K or 3 K anomaly temperature.

The vertical red line at year 2055 shows that the identical physical energy-state (the year 2055 state) produces multiple simulated air temperatures.

These wandering projections do not represent natural variability. They represent how parameter magnitudes, varied across their uncertainty ranges, affect the temperature simulations of the HadCM3L model itself.

The Figure fully demonstrates that climate models are incapable of producing a unique solution to any climate energy-state.

That means simulations close to observations are not known to accurately represent the true physical energy-state of the climate. They just happen to have opportunistically wonderful off-setting errors.

That means, in turn, the projections have no informational value. They tell us nothing about possible future air temperatures.

There is no way to know which of the simulations actually represents the correct underlying physics. Or whether any of them do. And even if one of them happens to conform to the future behavior of the climate, there’s no way to know it wasn’t a fortuitous accident.

Models with large parameter uncertainties cannot produce a unique prediction. The reviewers’ confident statements show they have no understanding of that, or of why it’s important.
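The non-uniqueness is easy to see in a deliberately trivial model (everything here is hypothetical, constructed only to illustrate compensating errors): when the output depends only on a combination of parameters, distinct parameter sets, i.e., distinct internal physics, produce identical projections, so agreement with observations cannot select the correct one:

```python
# Hypothetical forcing series (arbitrary units):
forcing = [1.0, 2.0, 3.0, 4.0]

def toy_model(gain, loss):
    """A trivial 'climate model': the response at each step is
    (gain - loss) * F. Only the difference gain - loss is
    constrained by the output; the individual terms are not."""
    return [(gain - loss) * f for f in forcing]

run_a = toy_model(gain=3.0, loss=1.0)   # net response factor 2.0
run_b = toy_model(gain=4.0, loss=2.0)   # net response factor 2.0

# Different internal physics, identical projections: a match to
# observations cannot distinguish the two parameter sets.
print(run_a == run_b)   # True
```

A GCM has vastly more parameters, but the principle is the same: offsetting parameter errors can reproduce an observable while the underlying physics remains wrong.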

Now suppose Rowlands, et al., tuned the parameters of the HADCM3L model so that it precisely reproduced the observed air temperature line.

Would it mean the HADCM3L had suddenly attained the ability to produce a unique solution to the climate energy-state?

Would it mean the HADCM3L was suddenly able to reproduce the correct underlying physics?

Obviously not.

Tuned parameters merely obscure uncertainty. They hide the unreliability of the model. It is no measure of accuracy that tuned models produce similar projections. Or that their projections are close to observations. Tuning parameter sets merely offsets errors and produces a false and tendentious precision.

Every single recent, Holocene, or Glacial-era temperature hindcast is likewise non-unique. Not one of them validates the accuracy of a climate model. Not one of them tells us anything about any physically real global climate state. Not one single climate modeler reviewer evidenced any understanding of that basic standard of science.

6. An especially egregious example in which the petard self-hoister is unaware of the air underfoot.

Finally, I’d like to present one last example. The essay is already long, and yet another instance may be overkill.

But I finally decided it is better to risk reader fatigue than to not make a public record of what passes for analytical thinking among climate modelers. Apologies if it’s all become tedious.

This last truly demonstrates the abysmal understanding of error analysis at large in the ranks of climate modelers. Here we go:

“I will give (again) one simple example of why this whole exercise is a waste of time. Take a simple energy balance model, solar in, long wave out, single layer atmosphere, albedo and greenhouse effect. i.e. sigma Ts^4 = S (1-a) /(1 -lambda/2) where lambda is the atmospheric emissivity, a is the albedo (0.7), S the incident solar flux (340 W/m^2), sigma is the SB coefficient and Ts is the surface temperature (288K).

“The sensitivity of this model to an increase in lambda of 0.02 (which gives a 4 W/m2 forcing) is 1.19 deg C (assuming no feedbacks on lambda or a). The sensitivity of an erroneous model with an error in the albedo of 0.012 (which gives a 4 W/m^2 SW TOA flux error) to exactly the same forcing is 1.18 deg C.

“This the difference that a systematic bias makes to the sensitivity is two orders of magnitude less than the effect of the perturbation. The author’s equating of the response error to the bias error even in such a simple model is orders of magnitude wrong. It is exactly the same with his GCM emulator.”
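Before unpacking the mistakes, it is worth reproducing the reviewer’s arithmetic. The sketch below assumes an albedo a = 0.3, so that the quoted “0.7” is read as the factor (1 − a); otherwise the stated 288 K baseline does not follow. At full precision the two responses come out near the reviewer’s 1.19 C and 1.18 C, differing by a few hundredths of a degree:

```python
sigma = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S = 340.0            # incident solar flux, W m^-2
a = 0.3              # albedo, so S * (1 - a) = 238 W m^-2 absorbed

def surface_temp(albedo, lam):
    """Single-layer atmosphere energy balance:
    sigma * Ts^4 * (1 - lam/2) = S * (1 - albedo)."""
    return (S * (1.0 - albedo) / (sigma * (1.0 - lam / 2.0))) ** 0.25

# Choose the baseline emissivity lam0 so the model gives Ts = 288 K:
lam0 = 2.0 * (1.0 - S * (1.0 - a) / (sigma * 288.0 ** 4))

d_lam = 0.02  # emissivity increase; forcing = sigma*Ts^4*d_lam/2 ≈ 3.9 W m^-2

# Response of the unbiased model:
dT = surface_temp(a, lam0 + d_lam) - surface_temp(a, lam0)

# Same perturbation in a model carrying a 0.012 albedo error
# (a SW flux error of 340 * 0.012 ≈ 4.1 W m^-2):
a_err = a + 0.012
dT_err = surface_temp(a_err, lam0 + d_lam) - surface_temp(a_err, lam0)

print(round(dT, 3), round(dT_err, 3))   # ≈ 1.192 and 1.187
```

The difference between the two responses is of order 0.01 C, as the reviewer says; what that difference means is another matter.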

The “difference” the reviewer is talking about is 1.19 C – 1.18 C = 0.01 C. The reviewer supposes that this 0.01 C is the entire uncertainty produced by the model due to a 4 Wm-2 offset error in either albedo or emissivity.

But it’s not.

First reviewer mistake: If 1.19 C or 1.18 C are produced by a 4 Wm-2 offset forcing error, then 1.19 C or 1.18 C are offset temperature errors. Not sensitivities. Their tiny difference, if anything, confirms the error magnitude.

Second mistake: The reviewer doesn’t know the difference between an offset error (a statistic) and temperature (a thermodynamic magnitude). The reviewer’s “sensitivity” is actually “error.”

Fifth mistake: The reviewer is apparently unfamiliar with the generality that physical uncertainties express a bounded range of ignorance; i.e., “±” about some value. Uncertainties are never constant offsets.

Lemma to five: the reviewer apparently also does not know the correct way to express the uncertainties is ±lambda or ±albedo.

But then, inconveniently for the reviewer, if the uncertainties are correctly expressed, the prescribed uncertainty is ±4 W/m2 in forcing. The uncertainty is then obviously an error statistic and not an energetic malapropism.

For those confused by this distinction, no energetic perturbation can be simultaneously positive and negative. Earth to modelers, over. . .

When the reviewer’s example is expressed using the correct ± statistical notation, 1.19 C and 1.18 C become ±1.19 C and ±1.18 C.

And these are uncertainties for a single step calculation. They are in the same ballpark as the single-step uncertainties presented in the manuscript.

As soon as the reviewer’s forcing uncertainty enters into a multi-step linear extrapolation, i.e., a GCM projection, the ±1.19 C and ±1.18 C uncertainties would appear in every step, and must then propagate through the steps as the root-sum-square. [3, 10]
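Under that prescription, the reviewer’s own single-step value grows just as the manuscript’s uncertainties do. A minimal sketch, taking the reviewer’s 1.19 C as a per-step uncertainty:

```python
import math

u_step = 1.19   # the reviewer's single-step response, read as ±1.19 C

def propagated(u, n):
    """Root-sum-square of n identical sequential per-step uncertainties."""
    return u * math.sqrt(n)

print(propagated(u_step, 100))   # 11.9 -> ±11.9 C after 100 annual steps
```

±11.9 C after a century of annual steps: the same ballpark as the manuscript’s ±14 C.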

So, correctly done, the reviewer’s own analysis validates the very manuscript that the reviewer called a “waste of time.” Good job, that.

This reviewer:

doesn’t know the meaning of physical uncertainty.

doesn’t distinguish between model response (sensitivity) and model error. This mistake amounts to not knowing to distinguish between an energetic perturbation and a physical error statistic.

doesn’t know how to express a physical uncertainty.

and doesn’t know the difference between single step error and propagated error.

So, once again, climate modelers:

neither respect nor understand the distinction between accuracy and precision.

are entirely ignorant of propagated error.

think the ± bars of propagated error mean the model itself is oscillating.

have no understanding of physical error.

have no understanding of the importance or meaning of a unique result.

No working physical scientist would fall for any one of those mistakes, much less all of them. But climate modelers do.

And this long essay does not exhaust the multitude of really basic mistakes in scientific thinking these reviewers made.

Apparently, such thinking is critically convincing to certain journal editors.

Given all this, one can understand why climate science has fallen into such a sorry state. Without the constraint of observational physics, it’s open season on finding significations wherever one likes, and on granting indulgence in science to the loopy academic theorizing so rife in the humanities. [11]

When mere internal precision and fuzzy axiomatics rule a field, terms like consistent with, implies, might, could, possible, likely, carry definitive weight. All are freely available and attachable to pretty much whatever strikes one’s fancy. Just construct your argument to be consistent with the consensus. This is known to happen regularly in climate studies, with special mentions here, here, and here.

One detects an explanation for why political sentimentalists like Naomi Oreskes and Naomi Klein find climate alarm so homey. It is so very opportune to polemics and mindless righteousness. (What is it about people named Naomi, anyway? Are there any tough-minded skeptical Naomis out there? Post here. Let us know.)

In their rejection of accuracy and fixation on precision, climate modelers have sealed their field away from the ruthless indifference of physical evidence, thereby short-circuiting the critical judgment of science.

Climate modeling has left science. It has become a liberal art expressed in mathematics. Call it equationized loopiness.

The inescapable conclusion is that climate modelers are not scientists. They don’t think like scientists, they are not doing science. They have no idea how to evaluate the physical validity of their own models.

They should be nowhere near important discussions or decisions concerning science-based social or civil policies.

Not one of their journalists, as far as I can see, has a degree relevant to climate. The closest is Moonbat, who at least has a science degree, but I hardly count a zoologist as qualified to comment on atmospheric physics or renewable energy.

But does that stop them attacking us sceptics, who overwhelmingly have these qualifications? Take our host Anthony Watts. Clearly qualified to speak about climate and the legitimate scientific dispute, producing the world-class, well-researched articles that make him the mainstream media in this area. And who attacks him? The uneducated, scientifically illiterate cut-and-paste “journalists” of the Guardian.

Dana Nuccitelli is a BS artist promoted well above his pay grade. It’s a shame, but not unexpected, that the Guardian has handed itself over to him and Bob ‘fast fingers’ Ward to promote their egos and their paymasters’ financial outlooks. The paper has made it clear for years that only unquestioning support of ‘the cause’ is an acceptable stance. It tried to avoid even covering Climategate, and only did so when it became clear other newspapers would. It’s always been an oddity of the paper that when it gets obsessed with a subject, like AGW, it tends to take an absolute stance, and the quality of its coverage goes downhill the more it covers it.

First, Anthony, thank-you very much for posting my essay about climate modelers. I am grateful for the opportunity.

Next, Slywolfe, if you understand the first figure of the essay, or the fourth, or the linked poster, you’ll know that climate models can’t make any predictions at all and so, ipso facto, cannot “do a good job.” Unless making not-predictions is their job.

Pat,
Thanks for generating a very worthwhile discussion on the GCM failures and allowing WUWT readers a “peer review” of the sorry state of climate-science manuscript peer review. Bob Tisdale and Christopher Monckton (as you may be aware) regularly update WUWT readers with GCM external failures. Your elucidation of the internal reasons for those GCM failures (along with RGBatDuke, Ferdburple, Jimbo, and many others) is very much appreciated.

I understood most of what you presented and took away a very important refresher lesson on the importance of a “unique result” in any science-based model. I also remember that some months back someone at WUWT posted a comment that the GCM initializations used a single value for the enthalpy of evaporation, that of 4º C water, instead of the value for the 26º C water typical of most of the tropics. They mentioned that this enthalpy error would propagate through the hundreds of iterations of the GCMs, compounding until nothing was left but essentially a random noise signal. That made me realize that the GCMs of the IPCC are total crap, built with circular logic to deliver a politically-desired output.

Thanks, Joel. I’d never have thought of that water enthalpy error. One expects if all the physical errors of climate models were documented, their propagation would produce a centennial uncertainty envelope of approximately the size of North America.

Pat Frank
I’ll make it ultra-simple for you: Predicting the future (anything) is very difficult for humans. One might as well flip a coin.
.
The IPCC Report Summary is leftist personal opinions formatted to look like a real scientific study.
.
As you can see from the formerly beloved Mann Hockey Stick chart, ‘predicting the past’ is just as difficult for the “climate astrologers” as predicting the future.
.
It’s a climate change cult. — a secular religion for people who reject traditional religions.
.
The coming global warming catastrophe scam is 99% politics and 1% science.
.
You cannot debate a cult using data, logic and facts any more than you can debate the existence of god with a Baptist.
.
The long list of environmental boogeymen started with DDT in the 1960s, and as each new boogeyman lost its ability to scare people, a new boogeyman was created, and the old one was immediately forgotten.

If we are lucky, and it seems that we have been for two years so far, it will remain cold enough that the average person begins to doubt the coming global warming catastrophe predictions. Thank you, Mr. Sun and Mrs. Cosmic Rays, for riling up the leftists so they reveal their true bad character, with harsh character attacks on scientists who do not deserve them.

But consider that Maxwell’s equations do a darn good job predicting the future behavior of emitted electromagnetic waves. And Newton’s theory does a good job at predicting the future positions of the planets — at least out to a billion years or so. In my field, QM does a pretty good job of predicting the details of x-ray absorption spectra before any measurement.

So, physical science has a good array of predictive theories. Climate modelers have managed to convince people that they can predict future climate to high resolution. Their claim is supported only by the abandonment of standard scientific practice. Abandonment not just in climatology, but by august bodies such as the Royal Society and the American Physical Society.

In a way the modelers themselves are innocents, because my experience shows they’re not trained physical scientists at all. They couldn’t have abandoned a method they never knew or understood. The true fault lies with the physical scientists, especially the APS, who let climate modelers get away with their ignorance and scientific incompetence.

I agree with you that AGW alarm has been seized upon by progressives as their politically opportune proof positive that capitalism is inherently evil. The history of the 20th century has shown that their preferred alternative is manifestly monstrous. But as committed ideological totalitarians, finding a moral position in lying, cheating, and stealing to get their utopian way, remediative introspection has never been a progressive strong suit.

Stop wasting your time with “climate journals”. They continue their gate-keeping while your message is being missed in the climate policy debate. Why not try publishing your paper in a journal where peer review is not redefined in order to protect the AGW-hypothesis and the climate model industry?

He did a tremendous amount of authentic work on this manuscript. I’m guessing that he was confident that it would be recognized as such from anybody resembling an authority/expert and be most relevant in a climate journal………..even knowing the bias that exists.

Pat Frank,
I appreciate you taking the time to share this with us. It’s extraordinarily enlightening.

“Tuned parameters merely obscure uncertainty. They hide the unreliability of the model. It is no measure of accuracy that tuned models produce similar projections. Or that their projections are close to observations. Tuning parameter sets merely offsets errors and produces a false and tendentious precision.”

Back in college, in a synoptic meteorology lab class, we had to create a simple weather model. I was sort of overwhelmed by all the mathematical equations and some of the other stuff but did manage to get my model to work………..by repetitive trial and error.

I had little confidence that the equations/parameters were the best ones to represent the atmosphere, but I know that by continually tweaking and tweaking, I made the changes that moved my model in the right direction until it finally showed what it was supposed to show, to get the result we needed to pass.

I did think that in my early naive days, Mike. But no more. I didn’t realize that climate modelers don’t know the first thing about assessing accuracy. Plus I didn’t know that many editors apparently lack the courage to publish anything truly controversial in a poisonous atmosphere such as has been deliberately created in climate science.

Also, thank-you for your kind words. Your description of constructing a weather model sounds like a good experience. First, you learned to do it, second you overcame your fears, and third you gained a critical perception of models. Nothing to feel tentative about.

I worried about that very problem, Pethefin. So my first submission was to the journal Risk Analysis. After three weeks of silence, the manuscript editor came back and told me that the paper was not appropriate to the journal. So he declined to even send it out for review.

This decision was backed up by the chief editor who, in essence, said that the analysis would be of limited interest to their audience. That is, a paper showing there’s no knowable greenhouse warming risk is of small interest to risk analysis professionals. Incredible, but not worth arguing. I suspect they were relieved to have dodged a political bullet.

Australian CAGW sceptics have made the MSM, and the BOM has admitted that no Cat 5 cyclone passed over Queensland. Not that most of the MSM or the public have grasped this fact as yet, such has been the hysteria:

24 Feb: Courier-Mail: Climate researcher questions Cyclone Marcia’s category 5 status
Jennifer Marohasy said the bureau had used computer modelling rather than early readings from weather stations to determine that Marcia was a category 5 cyclone, not a category 3…
Systems Engineering Australia principal Bruce Harper, a modelling and risk assessment consultant who analyses cyclones, said it was often difficult to determine whether a storm was a marginal 3, 4 or 5.
What was important was that after the bureau conducted its post-storm analysis, it told people that they experienced category 3 impacts as it passed over the land.
It was dangerous for residents to be thinking they had survived a category 5 when it was a storm that degraded quickly…
http://www.couriermail.com.au/news/queensland/climate-researcher-questions-cyclone-marcias-category-5-status/story-fnkt21jb-1227236188297

But in folk memory this will be a Cat 5 cyclone from now until the end of time. That’s how the propagandists of CAGW work: shout exaggerated claims from the rooftops; by the time they withdraw the claim, the MSM have moved on.

Agree. Memorise some shit, suck up to your tutor and graduate. Then get a job at Bank of America where they don’t care about your qualifications as long as you graduated in something. It’s the same in most industries.

Very good, Alex. I would like to add a little to your observation. There are a few who are good at memorizing some shit but too dumb to realize that they should be working at Bank of America. They get hired by big businesses that have a very difficult time firing people who don’t have the abilities their diplomas say they should have.

This struck a chord with me. One of the best electronics engineers I ever knew (20 yrs USAF as engineer/project mgr + 26 yrs helping integrate hardware onto the ISS) had no degree whatsoever. He was entirely self-taught. He couldn’t get promoted beyond the grade he had when our company assumed a contract, because ‘company policy’ said you HAD to have a degree in ‘math, science, engineering or a related field’ to be an engineer. He taught me more about networks, software, hardware and how to solve integration problems than any of my four degrees. So, to answer the question: it depends on who you ask. Ask the journals or most academics, and it’s the degree that makes you a scientist. Ask anyone in the real world, and it’s your work that defines which category you should be placed in.

Based on the latter criterion, anyone publishing results of work that don’t match reality is something, but it ain’t a scientist!

Nobody gives a rat’s arse if you’re following the scientific method or not. Friedrich Kekulé discovered the structure of benzene by dreaming of a snake coiled and biting its tail. Did he follow the scientific method?

Not following the scientific method only becomes a problem when people later find out that your research was a useless waste of time.

A scientist is a person with common sense who is very skeptical about every conclusion (hypothesis) presented by scientists, including his own conclusions. A degree is not relevant — the quality of his scientific work determines whether he deserves to be called a “scientist”.
Predicting the future with computer games has nothing to do with science.

A scientist would never focus on ONLY one variable, CO2, probably a very minor variable with no correlation with average temperature, when there are dozens of variables affecting Earth’s climate … and then further focus only on manmade CO2 (only about 3% of all atmospheric CO2 can be blamed on humans), for political reasons: blaming humans is the goal of climate modelers, along with getting more government grants.

But Big Government, which wants a “crisis” that must be “solved” by increasing government power over the private sector, could not possibly influence scientists getting government grants and/or salaries, and of course such funding NEVER has to be disclosed as part of an article, white paper or other report by any scientist on the goobermint dole.

Perhaps grade inflation and lowered standards have combined with the world’s greatest fooling machine (computer+software) over the years to make peeple stoopids. It’s much easier these days to be a fraud and incompetent.

But if games reacted that poorly to input from the player and displayed a gaming “world” that was that whacked out from a “real” world… no one would buy or play them. You would need government to step in and mandate that everyone buy and force everyone to play those games… oh, we’re doing that now… never mind.

Good point, CP. I think of climate modeling, as presently done, as video game science. It’s like trying to understand the physics of explosions by studying the hotel lobby explosion scene in “The Matrix.”

Pat, I appreciate the effort and, without getting into the merits of your work, rejection is part and parcel of academic/scientific publishing. An author who has not been rejected many more than four times is an author who is either a genius or a hustler. My advice is that you keep on trying. Don’t let the malice of incompetent reviewers get in your way.

Help me out here, if I may ask a favor. I know 3D finite element analysis, have used it, have worked statistical control, have worked metrology, have worked reactor neutron flux curves and their shapes as the control rods are driven in and out at various levels of various poisons after shutdown at various times, have criticized answers (approximations) of stress-strain colored images from such models, and have worked in fluid dynamics problems with the solutions (approximations to the solutions) coming from such models. Fine. I know parts and pieces of the field fairly well. Others always should know more about their specialties.

In the context of the criticism of your paper, and of the problems and failures in global circulation models (now being called global climate models, by the way!), explain the different errors the global warming simulators are making, and the different assumptions about their errors and their error margins, using this example.

I need to calculate the value of (e/pi)^10001.0001. Assume I set this problem up like the climate scientists have.

If I ran this problem using 2.7 / (22/7) what error am I making? Is climate science making this kind of error, and not knowing they are making this kind of error of simply using too many approximations of real world variables (albedos, transmission losses, cloud reflections, and everything else) that are NOT simple one-point constants?

If I ran this problem once using 10002.0002, would I be duplicating their error? If not, what error am I making?

If I ran this problem, changing the accuracy of “pi” every time by 0.0001 percent, am I not propagating that error through every subsequent multiplication?

If I ran it 4000 times using 10002.0001 would I be more accurate (in their minds) even though I would never get the right answer?

If they ran this problem 300,000 times on a supercomputer using a different algorithm for both constants every time, could they use the average of the random errors of their results to (a) get a more accurate answer or (b) just be displaying random errors in their generation sequence of both “constants”?

If they ran this problem using a program that printed “40000.00001” every time, would today’s climate scientists claim they had greater accuracy than my sliderule?

I know absolutely that many FEA runs using exact “perfect” data on a “perfect” crystal or pure piece of metal machined exactly per the model dimensions under loads exactly as described by the modeled equations will yield (on average) results similar to the average of many model runs. Each model run under those circumstances “should” be exact and perfect, but each will be a bit different even in the ideal case of a simple stress-strain issue. But, is this what forms the CAGW “religion” ? A belief that they have described the problem exactly and perfectly so every run using the same “core equations” as its kernel can be averaged into today’s world?
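For concreteness, the 2.7/(22/7) substitution in the questions above can be checked with a short Python sketch (my own illustration, not anyone's model). It works in log space, because (e/pi)^10001.0001 is far too small for a double-precision float:

```python
import math

exponent = 10001.0001

# log10 of (e/pi)^10001.0001 using accurate constants
exact = exponent * math.log10(math.e / math.pi)

# the same power using the slide-rule approximations 2.7 and 22/7
approx = exponent * (math.log10(2.7) - math.log10(22 / 7))

print(round(exact, 1))           # about -628.6
print(round(approx, 1))          # about -659.7
print(round(exact - approx, 1))  # roughly 31
```

With accurate constants the result is about 10^-629; with the approximations it is about 10^-660. The tiny per-factor inaccuracy, compounded ten thousand times, shifts the answer by some thirty orders of magnitude, which is the "accuracy problem, increasing the error with every step" described in the reply below.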

RACook, the 2.7/(22/7) case would be an accuracy problem, increasing the error with every step. I have no specifications on the source of climate model error. I just assessed the error and calculated its consequence.

The FEA models you describe seem to be engineering models. These — the result of detailed physical experiments — accurately describe the behavior of their object within the model bounds. Outside those bounds is dangerous ground, as I’m sure you know.

Climate models are like engineering models. They can be made to describe the behavior of elements of the climate within the time bounds where tuning data exist. However, they’re being used to project behavior well outside those bounds. The claim is then made that they do this accurately, and that’s the problem.

“The FEA models you describe seem to be engineering models. These — the result of detailed physical experiments — accurately describe the behavior of their object within the model bounds.”
And those model bounds have to be obtained in the real world through control of critical parameters, such as maximum defect size and quantity, and alloy material composition, including unwanted contaminants that will reduce predicted performance. It is also necessary to understand to some degree how the defects will propagate under stress. To help ensure this, intensive physical inspection and testing are usually incorporated throughout the manufacturing process.
There are so many parameters, with so little accurate understanding of their function, covering such large areas of many differing kinds of surface, that there is no way even to begin to simulate anything similar with climate models.

It is only so small because the Stefan-Boltzmann law puts limits on how hot or cold the planet can get given constant insolation. If the error propagation continued unresisted by that (and by absolute zero at 0 K), it would be +/- 1000s of degrees C in 2100.

Hi Stefan — the (+/-) uncertainties are not temperatures. They are an ignorance width. When they become (+/-)15 C large, they just mean that the projection can’t tell us anything at all about the state of the future climate.

I was puzzled by how your uncertainty ranges increased without bounds when there’s no time-aspect in your equations. But I think I figured it out.

It looks like you additively increase the cloud forcing uncertainty at each timestep. You compute a new cloud forcing at the present timestep, and add it to the last.

IIUC, this means that you’re not treating this +/- 4 as the uncertainty in the cloud forcing, but uncertainty in the change of the cloud forcing. In other words, your equations act as if the change in forcing from one year to the next must be within +/- 4 W/m2. Propagating this through allows the actual cloud forcing uncertainty in your equation to grow without bounds.

Compared to the actual cloud forcing (in W/m2), what you’re using is a completely different metric: W/m2/year. These two metrics are as different as speed and location. Uncertainty in the derivative of forcing is verrrrry different from uncertainty in the forcing itself.

If the actual cloud forcing uncertainty is between +/- 4 W/m2, then that range is fixed. It doesn’t change, it doesn’t increase without end. It already represents the entire range of cloud forcing uncertainty.

And this is why your model produces nonsensical results. No, actual cloud forcing cannot grow or fall without bounds. You already established the actual cloud forcing uncertainty: +/- 4 W/m2. At any given timestep, the cloud forcing should be within these bounds 95% of the time.
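The two readings of the ±4 Wm-2 at issue in this exchange can be contrasted with a toy Monte Carlo (a neutral illustration, not either commenter's actual calculation): treat the error once as a level error that does not accumulate, and once as an independent annual error fed into a running sum.

```python
import random
import statistics

random.seed(42)
YEARS, RUNS, SIGMA = 100, 4000, 4.0  # sigma in W/m^2

def level_error():
    # Reading 1: +/-4 W/m^2 is the error in the forcing *level* itself.
    # Past errors do not accumulate; the final-year error is one draw.
    return random.gauss(0, SIGMA)

def accumulated_error():
    # Reading 2: each annual step injects an independent +/-4 W/m^2 error
    # into a running sum, so the errors compound step by step.
    return sum(random.gauss(0, SIGMA) for _ in range(YEARS))

spread_level = statistics.stdev(level_error() for _ in range(RUNS))
spread_accum = statistics.stdev(accumulated_error() for _ in range(RUNS))

print(spread_level)  # stays near sigma = 4
print(spread_accum)  # grows to near sigma * sqrt(YEARS) = 40
```

Which reading applies to the published ±4 Wm-2 is precisely what the rest of the thread disputes; the sketch only shows that the two interpretations diverge as the square root of the number of steps.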

Windchaser, there is an implied time aspect in the equation, found in the change in forcing over the time of the projection.

Error is propagated through a linear sum as the root-sum-square. That is the standard method.

I do not compute any new cloud forcings. I merely propagate the global annual average long-wave cloud forcing error made by CMIP5 climate models, in annual steps through a projection.

Sorry to say, YDUC. I am treating the (+/-)4 Wm^-2 as an error. It is injected into every annual modeled time step. The reason it is injected is that it is an error made by the models themselves; that is, it is intrinsic, a theory-bias error.

Every annual initiating state has a cloud forcing error, which is delivered to the start of the simulation of the subsequent state. The model makes a further long wave cloud forcing error when simulating that subsequent state. This sequence of error in, more error out is repeated with every step. Error is necessarily step-wise compounded.

However, we don’t know the magnitude of the error, because the simulated states lie in the future. But we can project the uncertainty by propagating the known average error. That’s what I’ve done.

Your “In other words, …” statement is not correct. Error in forcing is propagated, not the change in forcing. Every step in a simulation simulates the entire climate, including the cloud forcing. Whatever the change in cloud forcing, the average error in the total long wave cloud forcing is (+/-)4 Wm^-2. Every time.

It’s not the error in the derivative of forcing. It’s the error in the forcing itself.

You’re right that the statistical dimension is W/m^2/year. But the annual change in GHG forcing is also W/m^2/year.

Windchaser, you’re clearly an intelligent guy. Let me try and explain this. When there is an average annual (+/-)4 Wm^-2 error in long wave cloud forcing, it means the available energy is not correctly partitioned among the climate substates.

This means that one is not simulating the correct climate, for that total energy state. That incorrect climate is then projected forward, but projected incorrectly relative to its particular and incorrect energy sub-states because the error derives from theory-bias.

So an already incorrect climate state is further projected incorrectly into the next step.

The uncertainty envelope describes the increasing lack of knowledge one has concerning the position of the simulated climate in its phase-space relative to the position of the physically correct climate. That lack of knowledge becomes worse and worse as the number of simulation steps increases, because of the unceasing injection and projection of error.

The uncertainty grows without bound, because it is not a physical quantity. It is an ignorance width. When the width becomes very large, it means the simulation no longer has any knowable information about the physically true climate state.

Error is propagated through a linear sum as the root-sum-square. That is the standard method.

Root-sum-square is the standard method for combining independent sources of error. For instance, let’s say I move in a straight line, twice. The first time, I measure that I have moved 100 m, +/- 10 m. The second time, 200m, +/- 5m.
The final error will be the root-sum-square of the previous, independent errors: the square root of (5*5 + 10*10). This is because each measurement and its error are independent.

Or, here’s another example: say I am traveling at 10 +/- 1 meters per second. At each timestep, no matter how long this continues, the error in my velocity remains the same, +/- 1. However, the error in the *distance* I’ve traveled grows as the root-sum-square, because the error occurring in distance traveled at each timestep is independent of the error at any other timestep. Each second has its own error in distance traveled, of +/- 1 m.

After 1 second, I travel 10 meters. The error is +/- 1m. After 1 more second, I travel another 10 +/- 1m, so now I have travelled 20 meters, +/- 1.41m. After another second, I’ve traveled 30 meters, +/- 1.73m. Etc.

No matter how many seconds have gone by, my speed and its error are the same. 10 +/- 1. But the error in the distance grows with time, as [sqrt(t)*1].
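Both of the numerical examples above can be reproduced in a few lines (a sketch of textbook root-sum-square combination, nothing more):

```python
import math

# Two independent measurement errors combine as root-sum-square:
# +/-10 m and +/-5 m give sqrt(10^2 + 5^2).
combined = math.hypot(10, 5)
print(round(combined, 2))  # 11.18

def distance_uncertainty(n_steps, per_step_sigma=1.0):
    """RSS of n independent, identical per-step errors: sigma * sqrt(n)."""
    return per_step_sigma * math.sqrt(n_steps)

print(round(distance_uncertainty(1), 2))  # 1.0
print(round(distance_uncertainty(2), 2))  # 1.41
print(round(distance_uncertainty(3), 2))  # 1.73
```

The sqrt(n) growth holds whenever the per-step errors are independent and identically sized; that premise, not the arithmetic, is what the two commenters disagree about.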

Similarly, if you determine an error in the overall cloud forcing, that error stays fixed from one timestep to the next. If this forcing is (f0 +/- 4) at the beginning, it should be (f0 +/- 4) at every timestep. Without some explicit relation to time, the errors do not propagate forward through time in the manner that you describe.

On the other hand, if this error, +/- 4, represented the derivative of the cloud forcing with respect to time, then, yes, the total cloud forcing uncertainty would grow over time and without bound, just like how, in the example above, the uncertainty in the derivative of distance-traveled caused the uncertainty in distance-traveled to grow over time without bound.

For verification, please refer to Bevington and Robinson, or to Larsen and Marx, or to whatever other book on statistics and errors that you prefer. But unless I’m really missing something, the uncertainty you calculated has no relationship to time, so it cannot propagate forward through time like you describe.

windchaser, when you consulted Bevington and Robinson, or whatever, you found no time-dependence in the equations for propagating error.

All that’s required is a step-wise sequential calculation yielding some additive final result, with some error in every step. The error then propagates through the steps into the final result.

Those conditions are met in a projection of air temperature. The calculation is step-wise. The final state and each intermediate state is a linear sum of prior calculated terms. Each term has an associated error. Propagated error follows. Uncertainty grows with the number of steps.

No “explicit relation to time” is required.

You wrote, “No matter how many seconds have gone by, my speed and its error are the same. 10 +/- 1. But the error in the distance grows with time, as [sqrt(t)*1].”

Actually, your uncertainty grows with distance traveled. It’s right there in your own description. Time doesn’t enter your calculation at all.

I’ve told you the origin of the (+/-)4 Wm^-2 error term. You can find it in Lauer and Hamilton, my reference [1].

It is the average long-wave cloud forcing error derived from comparing against observations, 20 years of hindcasts made by 26 CMIP5 models.

The (+/-)4 Wm^-2 is not the derivative of cloud forcing with time. It’s the average annual long wave cloud forcing error in the simulated total cloud fraction.

In every simulation step, every simulated annual total cloud fraction will produce an incorrect LW cloud forcing, which will be (+/-)4 Wm^-2 further divergent from the already incorrectly simulated prior LW cloud forcing. As such, the error will be present in every single simulation step of a step-wise projection. Such error must propagate, and the uncertainty in air temperature must grow with the number of simulation steps.

Your analysis is not at all relevant, windchaser.

Climate modelers can be either scientists and get down to the hard gritty business of physics, or they can be gamers. But they can’t be gamers and pretend to be scientists.

All that’s required is a step-wise sequential calculation yielding some additive final result, with some error in every step. The error then propagates through the steps into the final result.

Nope. Your units are wrong for this to be passed forward through time. Nor does that make intuitive sense.

Actually, your uncertainty grows with distance traveled. It’s right there in your own description. Time doesn’t enter your calculation at all.

/shrug. Same difference. We have a relationship between time and distance travelled, and the uncertainty is in that relationship. So as either time or distance travelled grows, so does the uncertainty.

Where is your uncertainty here? Just in the cloud forcing, no? It’s not in the cloud forcing’s relationship to other forcings, nor is it in some relationship to time. So the total uncertainty does not grow. It cannot.

Pull the equations out of Bevington and Robinson, if you like: you’ll notice that they discuss uncertainties in terms of differentials. Here, that would be the timestep, since that’s what you’re compounding it over, which means that your uncertainty must be in terms of time. With no differential / no relationship to time, there’s no multiple, independent uncertainties to perform a root-sum-square on.

The uncertainty you provide is fixed; it doesn’t change with respect to anything else. So how can you possibly compound it?

Your units are just wrong, which means your math is wrong.

In every simulation step, every simulated annual total cloud fraction will produce an incorrect LW cloud forcing, which will be (+/-)4 Wm^-2 further divergent from the already incorrectly simulated prior LW cloud forcing.

No. Again, this is nonsensical: you’re saying that if we stepped through time half as quickly, say at 0.5 years instead of 1 year, then the uncertainty would grow twice as quickly. This is nonsense: uncertainty propagation does not depend on the size of the timestep. And hey, look! If we use really large timesteps, then all this uncertainty goes away, and the model projections are fine again!

Climate modelers can be either scientists and get down to the hard gritty business of physics

*cough*. I’m not the one messing up basic statistics here. But by all means, keep insisting that you’re right, and keep submitting your paper and getting it rejected.

For error propagation, go to page 39-41 of the book (page ~54 of the pdf). You can try to walk me through your math and its units, if you like, but I’ll be surprised if you can: your units are wrong; they don’t make any sense.

The differentials for propagating error, (sigma_x)^2 = (sigma_u)^2(dx/du)^2 +…, are generalized to any “x” and do not necessarily refer to time.

Your intuition is no test of correctness.

I already discussed your units argument: the error unit is W/m^2/year. The annual change in GHG forcing is also W/m^2/year. The head post figure is Celsius per year.

You’ve got no case.

You shrugged off the time/distance mistake in your own criticism. But you ignored the time/forcing equivalence I pointed out here. Your criticism is therefore illogical and self-servingly inconsistent, and you’ve ignored your own dismissal of your own prior criticism; a self-goal.

The propagation time-step is annual, because the long wave cloud forcing error is the annual average. The error is from GCM theory bias, putting it freshly into every single annual simulation step. That is why it must be compounded.

The annual error in long-wave cloud forcing is propagated in relation to annual greenhouse gas forcing. I should have thought that was obvious given the first head post figure and the linked poster.

Long-wave cloud forcing contributes to the tropospheric thermal flux. So does GHG forcing. The change in GHG emissions enters an annual average 0.035 Wm^-2 forcing into a simulated tropospheric flux bath that is resolved only to (+/-)4 W/m^2. And that’s a lower limit of error.

Semi-annual cloud error may be of different magnitude. GCM simulations can proceed in quite small time steps. A detailed and specific error estimate and propagation could produce very different uncertainty widths. Possibly much wider than those in the head post figure, because of the multiple sources of error in a GCM.

If GCMs are ever able to project climate in 100 year jumps, your ludicrous “really large timesteps” argument might have relevance. But then, of course, we’d have to apply a 100-year average error, not just an annual average. What would be the magnitude of that, I wonder.

You’re right that the statistical dimension is W/m^2/year. But the annual change in GHG forcing is also W/m^2/year.

Ahh, great! If that’s actually the case, and you’re calculating the error in the change in forcing over time, why do you present it in terms of W/m^2?

You can understand my confusion. Mathematically, you treat the number as the error in the derivative of cloud forcing with respect to time, but in terms of units, you present it as just a constant, flat error in the cloud forcing.

I haven’t looked very closely at your derivation, but it also seems to reflect just a flat (constant) error in cloud forcing, not something that changes from one timestep to the next, or that feeds back with any other terms in your equations. Is that incorrect? The poster suggests that you calculate the total average cloud forcing error over a block of time, not the error in the change in cloud forcing, for which the units would be W/m^2/year.

You seem to contradict yourself, as at other times in our discussion, you said: “The (+/-)4 Wm^-2 is not the derivative of cloud forcing with time. It’s the average annual long wave cloud forcing error in the simulated total cloud fraction.”

Which is it? Can you clarify this for me?

The head post figure is Celsius per year.

The head figure is in total change in Celsius. It’s not ambiguous: the right axis is labelled as delta-C, not delta-C per year. Likewise, the head equation in your poster is also in terms of the total forcing and total temperature change, not forcing per year or temperature change per year.

The one exception, of course, is the part of the equation where you sum over all the timesteps, summing the annual changes in forcing/year to get the total change in forcing. This is where you’d convert from W/m^2/year to W/m^2. But obviously, if you’re starting with an error in terms of W/m^2, you can’t integrate that over time to get W/m^2. It’d be like integrating speed over time and getting back your speed, instead of getting distance traveled.

Sorry, if this sounds simplistic, but I’m trying to explain it in the simplest terms possible. If you integrate a quantity over time, then your units must change.

windchaser, you wrote, “Ahh, great! If that’s actually the case, and you’re calculating the error in the change in forcing over time,…”

No, windchaser. I’ve told you over and over again, it’s the (rms) average annual long wave cloud forcing error.

A twenty-year rms average, yielding the average annual error in the total forcing. Why is that so hard for you to understand?

”… why do you present it in terms of W/m^2?” Because that’s what it is, windchaser.

“…you treat the number as the error in the derivative of cloud forcing with respect to time…” No, I do not. I treat it for what it is: the annual average error. It has nothing whatever to do with dynamics.

I’ve made no “contradiction,” but have been clear and consistent throughout. The mistake has been yours from the outset, given your insistence that a linear root-mean-square annual error is a derivative.

It appears your exposure to physical error is so lacking that you evidently have no grasp of its meaning.

You wrote, “The head figure is in total change in Celsius. It’s not ambiguous: the right axis is labelled as delta-C, not delta-C per year.”

In what unit is the slope of the line in that figure, windchaser?

You wrote, “Likewise, the head equation in your poster is also in terms of the total forcing and total temperature change, not forcing per year or temperature change per year.”

In the PWM equation, what does the subscript “i” represent?

You wrote, “…you can’t integrate that over time to get W/m^2.”

It’s a linear sum, windchaser. No units change.

You’re still getting nowhere.

I have to congratulate you though. At least you’re struggling with error propagation. That’s more than any of my climate modeler reviewers did.

Save the one reviewer who clearly was not a climate modeler, understood the error analysis, and recommended publication.

One does not add a total error to the same series, over and over again. It’s added once.

“It’s a linear sum, windchaser. No units change. …In the PWM equation, what does the subscript “i” represent?”

The ith timestep, of course.
What you’re doing in that equation is exactly a numerical integration: you take the change in greenhouse gas forcing at a given point in time. You multiply it by delta-t, a change in time: 1 year. You get delta-F, the amount of change over the timestep. Then you sum over all these delta-Fs, to get the total change in GHG forcing over all timesteps.

It’s just this.
change in F == Sum over t: [dF/dt * delta-t]

Sorry for the cludgy representation of the math, but that’s a textbook numerical integration. So if you’re including a cloud forcing error, which is in the same units as the LHS, it must either be in W/m^2, in which case it’s a constant, added to F. Or if the cloud forcing error is not constant with respect to time, it should be integrated over with respect to time, just like dF/dt.

Arright, the remedial calculus lessons are done. You said that I’m “still getting nowhere”, and I can indeed see that. Please: find a mathematician you trust and run this by him. Perhaps he can explain this to you better than I can.

Honestly, best of luck with publishing your manuscript. And thanks for the conversation. It’s been interesting.

windchaser, you wrote, “One does not add a total error to the same series, over and over again. It’s added once.”

It’s a theory bias error. It enters into every single simulation step.

“The ith timestep, of course.” There goes your argument that “there’s no time-aspect in your equations.”

“You multiply it by delta-t,…” where do you see a time delta-t anywhere in the equation?

The delta-F_i are the annual forcings recommended by the IPCC, e.g., for the standard SRES scenarios. Time enters only implicitly with the steps in forcing.

“Sorry for the cludgy representation of the math, but that’s a textbook numerical integration.” Not a problem. So you’d agree that numerical integration is just a linear sum. Subject to linear propagation of error.

“So if you’re including a cloud forcing error, which is in the same units as the LHS, it must either be in W/m^2, in which case it’s a constant, added to F.”

Correct. For any step and including error, the forcing is [delta-F_i(+/-)4] Wm^-2. As mentioned umpteen times so far, the (+/-)4 Wm^-2 is the rms average CMIP5 LWCF error. As an average, it’s necessarily constant at every step, as a theory bias error it enters into every step, and its propagation yields a representative physical reliability of the projection.

“Please: find a mathematician you trust and run this by him.” I’ve done that. No problems found.

Windchaser, look at your own analysis. You’ve described numerical integration as a linear sum. Linear propagation of error follows directly. All you need do now is recognize the serial impact of a theory-bias error on the growth of uncertainty in a step-wise simulation.

“Honestly, best of luck with publishing your manuscript. And thanks for the conversation. It’s been interesting.”

It’s a good question for another reason: when we consider what a climate scientist is, we find that there is in fact no agreed definition of what this means. Given it is a term that has been applied to failed politicians and railway engineers and a host of others who have had no formal academic training in the area, we can see that in practice it’s far from clear what actually makes a person a climate scientist.
From the alarmist perspective it’s simple: a climate ‘scientist’ is someone who works to support AGW. It’s a very useful way of looking at it, because they can claim that no climate ‘scientist’ disagrees with them, and therefore other ‘non’-climate ‘scientists’ can safely be ignored.

However, it’s also an entirely dishonest way of looking at it, because even those climate ‘scientists’ they like vary in how they view the situation; the consensus is no such thing. Secondly, there are clearly those who work in the area whose training and academic standing at least equal the others’, but who do not share the alarmist perspective, and who therefore should have the right to be called climate ‘scientists’ in any fair and honest system.

Always worth remembering, when the infamous 97% claim is pulled out, that in practice they simply have no idea how many scientists, climate or otherwise, there are, so they cannot know how many would be in the whole group of which a sub-group is supposed to be a percentage. So even setting aside the many problems of its methodology, the claim itself fails at a basic maths level, and its value is about the same as ‘nine out of ten cats prefer…’.

in a vague sense. As a meteorologist, I hold more comprehension of all that climate stuff by virtue of climate being front and center in forecasting. I can also call the bluffs of forecasters just by the wording they use.

Note that by posting his essay here, he got the attention of readers like you who can now pass it on. If he’d tried at SkS or Greenpeace, his article would never have seen the light—and you would never have become aware of it.

Besides WUWT isn’t an echo chamber, nor is it a closed site. New readers come in and learn something here.

A model is an output of science – generally used in Engineering. Take a comparatively simple airflow model*: Science creates and researches the methodology, variables, and uses, as well as constraints or error bars in conjunction with, the model. This model can also then be revised and revised over time. The model is then used by engineers to make cars or airplanes or sacks of peanuts flow better through the air. An engineer using this model is not a scientist – but an engineer improving this model (ideally publishing and spreading the revisions) is a scientist performing science.

I think I’m gonna have to disagree here. An aeronautical engineer will use a virtual model and simulator to design various shapes and test the desired performance parameters. But then they’ll build a physical scale model and test it in a wind tunnel. You don’t go from a virtual model to the 10mm to the cm scale directly.

To TomB,
You said:
"I think I'm gonna have to disagree here. An aeronautical engineer will use a virtual model and simulator to design various shapes and test the desired performance parameters. But then they'll build a physical scale model and test it in a wind tunnel. You don't go from a virtual model to the 10mm to the cm scale directly."

What allows them to get away with this, until the “PAUSE” occurred, is that the real results hopefully wouldn’t be known until after they retire and collect their govt pensions. In many cases it was after they were dead.

I wonder if they realize that what they are doing to collect a paycheck, and justify their existence, has little to no real validity? (Who has read the “Black Widowers” series by Asimov?)

I recently spent 3 hours with an engineer who created a program (model) that uses 15 minute interval data from smart meters to analyze energy use in buildings. Spent years creating it. Explaining to him how much he is potentially missing was emotionally draining. He had not even been onsite to survey the facility yet was willing to talk about potential energy savings.

Having used “models” for years to design structures, pipelines, water networks, sewerage and water treatment systems, project management systems, financial management, I would have to agree. Thing is with all those models I used, there were empirical tests to proof the model as well as real world tests to see if what we “modelled” actually gave us the projected results – often an iterative process to “tune” the models. But in every case – real world testing and proofs.

Meteorologists get to test their models every day/week.

So why are climate models whacky, and why does anyone accept them as anything but what they are: first-generation guesstimates?

Perhaps that is the difference between my engineering and “Climate Science”. Climate Models are still in the state of “Science” so real world proofing is unnecessary or beyond them.

“Science is about discovery of something previously not known or defined”
Reminds me of the problem I have with the word – research. I thought of myself as a researcher because I looked through old reports for answers that others had searched for and found.

Truthseeker, models that include sufficiently well-developed physics can make unique predictions about observables. That opens them to falsification.

Prediction/observation/falsification (or not) is the way of science. So, physical models do have a critical part to play. Climate modelers, however, have removed their models from science, and sealed them away from the ruthless indifference of observation.

In this connection, I would like to present my own experience in the early 90s. I submitted a paper to an international journal. One reviewer pointed out minor corrections and approved it for publication, and the second reviewer gave excellent marks, but at the end he made a statement saying the data could also be fitted to a linear curve. With this, the regional editor rejected the paper for publication. I then wrote a detailed letter to the Editor-in-chief of the journal. He sent this letter to three regional editors. All agreed with my observations and asked me to split the paper into three parts. They published these in 1995. All three related to papers by editorial committee members. One of the papers related to climate change. The abstract states: "Climate change and its impact on environment, and thus the consequent effects on human, animal and plant life, is a hot topic for discussion at national and international forums both at scientific and political levels. However, the basis for such discussions are scientific reports. Unless these are based on sound foundation, the consequent effects will be costly to exchequer". Here the authors tried to look into the impact of temperature and rainfall increases on ETp [evapotranspiration] and thus on crop factors. The percentage changes in ETp attributed to climate change can also be attributed [partly] to scientist-induced factors, such as (i) the choice of ETp model and ETp model vs environment; (ii) probable changes in meteorological parameters due to climate change, expressed as absolute change or percentage change; and (iii) ETp changes expressed in terms of absolute changes or percentage changes. All these were explained in the article using their own article. The second paper dealt with overemphasis on energy terms in crop yield models: three different groups working under three different country conditions came up with different conclusions on the impact of the energy term on crop yield.
Models, to be more meaningful in a physical and practical sense, and to be applicable in a wider environmental context, should be addressed as holistic systems, taking into account the abundant information available in the literature on all principal components of a model. With this, I presented an integrated curve that fits all three conditions.

“neither respect nor understand the distinction between accuracy and precision.”

Damn, we learned that in year 11 chemistry. Our chemistry teacher was, arguably, better than our physics teacher. He had strict standards but was rarely unnecessarily strict.

I’d like to make a point which perhaps might be as clarifying to you as it is to me (although I could be totally off, like an athlete who keeps on running even though the race is over). I say that there is a huge difference between EMULATION and SIMULATION.

Climate simulators are exactly that – they are superficially modelling a climate system. But they are not emulators, and an emulator behaves exactly like the original. And it is becoming obvious to me that one cannot in fact emulate a climate.

Is anyone here into retro computing? Then you’ll know that a computer emulator lets you perfectly imitate a different platform on a foreign host. For example, you can run an Atari ST emulator on a modern Macintosh. That emulator lets you run the system software and applications as if it was the original – there is no difference, save for bugs.

I once wrote an ST simulator in JavaScript. It merely resembled the ST’s desktop – it could not run software, save data, or anything. It just superficially resembled the desktop, with its drop-down menus and icons. Here is what it sort of looked like, though this is not mine:

As for statistics, lots of blowhards think they know statistics. I am a hack with some knowledge but I don’t pretend otherwise.

The psychologist Dr. Richard Wiseman thought he had ‘debunked’ a parapsychology meta-analysis but used incorrect statistics to do it.

Naomi Oreskes seems to think that p values of <0.05 are just 'convention' and apparently knows nothing about standard deviation. And because we apparently know that AGW is true, we don't need those high standards, so let's settle for <0.10 (which, when you think about it, makes no sense, because if AGW is so obviously true, all uncertainty about it would easily fall below 0.05).

And of course we have the usual bullshit PR nonsense that mammograms are 80%+ accurate, that HIV tests are 99% accurate, etc, etc.

On the name of Naomi: One of my favourite actresses is Naomi Watts. One Naomi I know is very gentle. Another is an in-law and is beautiful and happy. I guess it depends on geography!

The Base Rate Fallacy shows up in all sorts of places. False positives and false negatives quickly overpower our intuition, leading to overconfidence in our results. For example:

A group of policemen have breathalyzers displaying false drunkenness in 5% of the cases in which the driver is sober. However, the breathalyzers never fail to detect a truly drunk person. 1/1000 of drivers are driving drunk. Suppose the policemen then stop a driver at random, and force the driver to take a breathalyzer test. It indicates that the driver is drunk. We assume you don’t know anything else about him or her. How high is the probability he or she really is drunk?

Many would answer as high as 0.95, but the correct probability is about 0.02. (50 false alarms versus 1 actual drunk driver)
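The arithmetic can be checked directly with Bayes' theorem, using the numbers from the example above (a quick sketch, nothing more):

```python
p_drunk = 1 / 1000        # prior: 1 in 1000 drivers is driving drunk
p_pos_given_drunk = 1.0   # the breathalyzer never misses a truly drunk driver
p_pos_given_sober = 0.05  # 5% false-positive rate on sober drivers

# Total probability of a positive test, then Bayes' theorem:
p_pos = p_pos_given_drunk * p_drunk + p_pos_given_sober * (1 - p_drunk)
p_drunk_given_pos = p_pos_given_drunk * p_drunk / p_pos

print(round(p_drunk_given_pos, 3))  # 0.02, not 0.95
```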

Your analogy works to a point. But BAC analyzer measurements are not a yes/no binary output.

Assume the legal definition of DUI is 0.08% BAC. You state that "the breathalyzers never fail to detect a truly drunk person." That statement assumes high accuracy with some unstated precision, i.e., your analysis ignores measurement error. If the accused's measured BAC is 0.085%, the analyzer may round up to display 0.09%, thus legally DUI. But if the manufacturer says precision is +/- 0.01%, then the accused could be at 0.075%, under the legal limit. For anyone whose recorded BAC is, say, 0.10%, the probability that he or she is really drunk is quite high, and a DUI conviction is beyond a reasonable doubt.

The skeptics here at WUWT (myself included) often hammer the dishonest alarmists over their willful ignoring of thermometer measurement precision in temperature records, and the proclamations of "highest-ever" alarmism when the differences are being claimed to hundredths of a degree.

I had some clients of the company I worked for use a math co-processor emulator so that they could run a certain CAD program that required that instruction set. My computer, using an Intel 80386 processor, had the math co-processor, at an additional cost of several hundred dollars, and it ran the software 10 times faster than the clients' computers could. An emulator always runs slower than the real thing, but if you are using modern hardware to run old-hardware emulators, you are not going to notice. By the way, the next generation of Intel chips, the 80486 and then the Pentium series, had the math instruction sets built in.
A more specific term might be: computer-based simulation of coupled non-linear .?. by the Finite Element Method. When playing computer games, much of the graphics is simulated, because more realistic ray-tracing algorithms take many orders of magnitude more computing time.

Are you sure that’s not a 286 needing the co-processor? I recall having to install one in order to run AutoCAD on an IBM PS2 back in the 80s. My first computer was a 486DX66 which didn’t need a co-processor, so it couldn’t have been that one, and I never had a 386.

I recall reading that when the co-processor became available, ALL 386 chips were printed with it. The co-processor tended to be unstable, so if it failed the burn-in, they laser-cut the traces and voilà! a 386SX chip was born.

When I bought my third computer (the first was a Commodore 64 (64KB of memory), the second a Sanyo 8086 (or possibly 8088; I will have to look at the box, but it's currently in my garage attic)), it was an 80386. I bypassed the 286, unlike everyone else in my office who went cheap and bought an obsolete 16-bit system (though it was the biggest-selling computer worldwide that year), and my boss paid for the separate math co-processor, about $400 or so back then, that plugged into the available socket on the motherboard / main board.
As for the 386SX, it did not have a math co-processor.

“In 1988, Intel introduced the 80386SX, most often referred to as the 386SX, a cut-down version of the 80386 with a 16-bit data bus mainly intended for lower cost PCs aimed at the home, educational, and small business markets while the 386DX would remain the high end variant used in workstations, servers, and other demanding tasks. The CPU remained fully 32-bit internally, but the 16-bit bus was intended to simplify circuit board layout and reduce total cost.[13] The 16-bit bus simplified designs but hampered performance”

Meant to type “separate” before co-processor. It wasn’t even made by Intel.

386SX could only address half the RAM of a 386. When the 386 was first released, there were no 32-bit co-processors available, so machines were made with sockets to take existing 16-bit co-processors, but I never saw one.

I had a 286-based PC that I used to run a program I'd written to calculate the position of the moon over a month. It took about 30 minutes to complete all the calculations, with the occasional printer line being generated. When I eventually added a co-processor it was so 'fast' the printer couldn't keep up, and the speed seemed so wrong the first time that I couldn't even watch :)

Similar experience with downloads when moving from a 300 baud to 9600 baud modem.

Karim DG, agree about learning error propagation in Chemistry. I learned it big-time when taking Analytical Chemistry as a college sophomore. I can't speak to your distinction between emulation and simulation. Guess it's a matter of applied meanings. But congratulations on your Naomis, and you've got good taste in actresses.

The IPCC doesn't have any models. They just have black-box hindcast curve fitting. It is impossible to model anything when most of the science is unknown. Finite approximations are a joke when the cells are so big as to completely miss climate features as huge as thunderstorms.
The real test of a real climate model will be whether it can predict the weather next week. Don't hold your breath.

The modelers all hotly deny that they are overfitting, but the dramatic difference between the success of the hindcast and the forecast shows without a doubt that they are indeed overfitting, however much they wish to deny it.

Climate Science is based upon statistical analysis and computer modelling, neither of which is scientific. At best they may, and I say "may", be assumed to be some sort of loose engineering, but science they are not, never have been, and never will be.

Statistical analysis is a methodology for extracting information from raw data. The problem is that the supposed methodology has nothing to do with physical reality. It only deals with taking the statistician's biased assumptions, looking for the best mathematical construct to overlay on the data, and then extrapolating a physical reason. Real science works the other way around: it takes solid, reality-based physical observations and lets the mathematics evolve from the facts and data. In other words, science is about taking physical events and looking for the math, while statistics, and therefore climate science, is about taking theoretical models and looking for the physics, which is not science but religion by Ex Cathedra.

Yes, Climate Science is an oxymoron; there is no science in "Climate Science", it is just pure religion. And as I have been pointing out for quite some time now, Climate Science is not the only religion in science; we have the same problems in Physics (cosmology and astrophysics), Biology (evolution and BCS theory), Economics (macroeconomics), Archaeology (Egyptology and Sumerology), Anthropology, Paleontology and so on.

Focusing on the details of what is wrong with the reviewers will not change anything. Academia and academic practice are now largely determined by "Ex Cathedra". The true scientific method and the pursuit of knowledge have now broadly been usurped by religious attitudes, protected by priests, high-priests and godlike mentalities. You can thank tenure for this. Tenure was once supposed to protect the voice of reason and progress, but like everything, once it has been around long enough, it becomes corrupted. Tenure now largely serves to protect the criminal, the dishonest, the incompetent and the megalomaniacs.

For thousands of years society, knowledge and science evolved without the use of tenure. But now universities and schools are entering a new dark age of knowledge, where science will be determined by the priests, popes and gods of the academic church, not only backed but empowered by the might of arms of the police and military that governments wield through undemocratic legislation and the court system.

Science has only itself to blame for this disaster we are in. Complain all you like about the morons, idiots, incompetents, buffoons and megalomaniacs of academia, but just remember how they got there. They got there because everyone who went to university played along with the rise of these corrupt people. As much as you like to criticize these thieves and criminals, they too have Ph.D.s, they too have Chairs of distinction, they too control the journals of note (like Nature, Physics Letters, etc.), they too run the departments as Deans, they too win the accolades, they too have built up reputations on their armies of acolytes and sycophants, they too have the ears and attention of corrupt governments, and ABOVE ALL THEY, NOT YOU, HAVE CONTROL OF THE AGENDA… ALL THANKS TO YOU, FOR NOT DOING YOUR JOBS when you should have been doing them in the past.

And now everybody is crying "poor Science!" This has been a long time coming. I saw it during my years at university in physics: people lied in their papers about their results; they took one set of data and published multiple papers (in one case I remember well, a Ph.D. candidate got 7 papers from the same set of results in 7 different PRESTIGIOUS journals!); candidates who supported the right Professor got excellent funding, and those who didn't got hardship; researchers all around the world formed cliques of mutual interest and passed each other's papers through the review process; data was conveniently fudged to look good, and when the raw data was asked for in confirmation, it was always conveniently lost. Ah, how many papers out there in all these fields of endeavour are false? How many? 10, 20, 30% or more? You would be surprised: if you applied the same analysis as above to all the papers, you would probably fail some 80 to 90% of them! How do I know this? For I too once reviewed. Once! Not any more, for I too used to do the same analysis on every single paper, and barely 10% passed, and that only after revisions were made. Then the word came that I was failing too many papers, and that was the end of that. Standards must never be kept in Science; they must always be lessened, or so it appears.

What are you people all crowing about! This blog entry is RIDICULOUS!

Go to any, ANY JOURNAL. Take a random sampling of 10 papers and put them through a thorough analytical review, INCLUDING, checking all the references (and here is another sneaking perversion of science), you will find that 8 or maybe 9 of them FAIL!

Complain all you like, guys. But the truth is this: we have fields of endeavour that now belong more in science fiction movies than in academia. For example, we give Nobel Prizes in Economics; are you aware that NOT A SINGLE THEORY IN ECONOMICS HAS EVER WORKED? That's right, economics is pure poppycock, and yet we teach it. Evolution: absolutely no proof for evolution. Biological classification theory is totally based upon committee: NO SCIENCE. Cosmology is all based upon data and photos that nobody can verify. High-energy physics and String Theory have so many holes that they make a black hole look full. How about Psychology? There is no science in it; psychiatry and its bible the DSM are run by committee: no science, no evidence, no proof. How many people are declared ill and forced, even by the courts, to take meds that have ABSOLUTELY no proof of aid?

You guys are complaining about Climate Science! Bwahahahahahahah!

We have so many problems that are far, far greater. Our university systems need an overhaul. Our economies need to be restructured. Our democratic rights are being eroded by corrupt, incompetent governments. Our scientific journals are FILLED with rubbish.

You have lost sight of reality if you think that doing a paper review is going to change this problem, the problem of lying cheaters who never should have gotten a degree in science in the first place, for ….

THE LUNATICS ARE NOW RUNNING THE ASYLUM (aka university).

THE QUESTION YOU SHOULD ALL BE ASKING IS THIS….

HOW DO WE GET RID OF THE LUNATICS AND GET SOME SANITY BACK INTO THE SYSTEM?

Reviewing papers is not the issue. MORONS, LUNATICS AND CHARLATANS RUNNING AROUND MASQUERADING AS SCIENTISTS, PHYSICISTS, BIOLOGISTS, ECONOMISTS, POLITICIANS, JUDGES, AND NOBEL PRIZE WINNERS ARE THE PROBLEM.

How do we clean up the system? Forget about the toilet paper these people write; that's easy to fix, you flush it down the toilet. How do we stop creating more of these lunatics?

Yes Alex. In the good old days we used to allow the stupid people to kill themselves. Now we do our darnedest to stop them. I think that the reason we try so hard to stop them is that now more so than in the past they take many undeserving people with them.

Regarding your comments about economic "science": note that not one of their theories has been or can be subjected to any sort of controlled experiment. Theories promulgated by "influential" academics are considered sacrosanct and beyond refutation. Economists NEVER, EVER consider that their theories (actually, they are more akin to conjectures) are wrong, even when, actually applied in the real world, they produce results contrary to those intended; and this occurs MOST of the time. Coin flipping would produce the correct strategy more often than the garbage produced by academic economists.
Economists produce papers that are awash in formal mathematics buried under unintelligible econo-jargon. What matters is "the model"; the more math, the better.

Astrologers at least can tell you where the planets will be sometime in the future. Economists CANNOT TELL YOU when a recession has hit until AFTER it has started!!! What kind of science is it that has ZERO predictive ability?

As a result of this "science," based purely on opinion and the popularity of a particular individual or group of individuals, we have the farce, the joke, the scam of "liberal" vs "conservative" economists.
WHAT?!… the POLITICAL IDEOLOGY of the economist will determine the economic strategies that should be pursued?

If you are seeking a "science" more of a farce, a scam, a joke than climate "science," take a gander at economic "science." Unfortunately, just like the climate charlatans, the guinea pigs in their exercises are the citizenry, who get royally screwed over, once again, by the "elites."

John, I received an economics lesson from my father about fifty years ago that makes me agree with your observation. He was a farmer as a youngster. He explained how the prices, and more importantly the profits, in animal feed and animal production would cycle, and why. It was a problem they could live with until the government got involved and tried to fix it. All the government managed to do was lengthen the period of the cycles. This made matters much worse for the farmers, because it made the no-profit period longer, and thus more farmers had to throw in the towel. The more the government tried to fix things, the more fixing was required. We are still living with the fixes and requiring more.

You drilled into a nerve on economics “science”. RJ Gilbert in Tau Beta Pi “Bent” issue Spring 1993 summarized it nicely: “. . .Economics is a difficult subject because it is not about the control of a passive system. Rather, it is about the design of policies in pursuit of complex objectives in a system comprised of people who are at least as intelligent as the government that is attempting to influence their behavior. . .”

Neoclassical economics is based on numerous assumptions that are not true in the real world.
The two most ridiculous are 1. perfect information and 2. perfect scalability.
Think about how many industries rely on selling information, and the laws in place to protect information.
If we had perfect information, none of them would need to exist.
Think about a mine and mineral-processing plant. When the price drops they high-grade the orebody, which means the mine actually produces more mineral. If the price goes up, the opposite happens, so there is actually less mineral production. This is the opposite of perfect scalability.

I totally agree with you. The reason for this inaccuracy in our knowledge base is the inadequacy of our use of spoken and written language, i.e. "You keep using that word. I do not think it means what you think it means!" (S. Morgenstern). LOL
Misusing the word science to mean just about anything is a disservice to our gaining of knowledge. Indeed, misusing any word confuses the logical train of thought (see wealth, money and jobs). The general use of the word Science as a noun, or a verb, or an adjective will guarantee obfuscation of the real meaning of the thought conveyed. I prefer to use science as a process, not a noun.
I am reminded of when I was cutting steel to precise sizes: I could not just use a ruler, measure it 500 times, and compute the average to find the dimension to a ten-thousandth.
Accuracy is easier to obtain when precision is in the mix.
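The steel-cutting point can be demonstrated with a quick simulation (illustrative numbers only): averaging many readings shrinks the random scatter, but a systematic offset in the ruler survives no matter how many readings you average.

```python
import random

random.seed(42)
true_length = 100.0  # hypothetical true dimension
bias = 0.5           # systematic error: the ruler reads 0.5 units long
noise = 2.0          # random scatter of an individual reading

# 500 readings, then their average:
readings = [true_length + bias + random.gauss(0, noise) for _ in range(500)]
mean = sum(readings) / len(readings)

# The mean converges toward true_length + bias, not true_length:
print(mean)
```

Averaging improves precision (the scatter of the mean falls as 1/sqrt(n)) but does nothing for accuracy: the 0.5-unit bias is still there.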

Gotta say, Dorian, that my experience in Chemistry is not your experience in Physics — whatever branch it was. I review papers regularly, and most of them are competently done; possibly incomplete somewhere or perhaps not taking the analysis far enough. I’ve never been censured for being too critical. In fact, I’ve been thanked for being critical.

So, while science is certainly under vicious attack, mostly by Progressives these days, I tend to be long-term optimistic.

Tenure was a good idea, so long as academics honored their side of the contract. Their side is to speak as objectively as possible. The university's side is that no one can fire them for doing so.

But academics, especially in the Humanities, the soft sciences like Cultural Anthropology, and in any department with a name ending in “Studies” no longer speak objectively. They’ve become openly and loudly partisan and political. In my opinion, this violates and, indeed, abrogates, the tenure contract. University presidents have been grossly remiss in allowing this to continue. Politically partisan faculty should be let go, as having fatally violated their tenure contract.

There are three aspects of the global weather/climate system that are fundamental to its workings: the Pacific Decadal Oscillation, the North Atlantic Oscillation, and the El Niño/La Niña perturbations. Any coupled atmosphere/ocean model worth its salt should have phenomena similar to these emerge from its simulations (that is, with extents and time scales similar to the real thing). None of them does. Therefore some things very fundamental are not yet understood, let alone included in those models.

That climate modellers nevertheless think those models are good enough to base public policy on shows that they lack the self-criticism inherent in real science. They are therefore little more than glorified simulators. Their models relate to the real world as cartoon figures relate to real people.

“That climate modellers nevertheless think those models are good enough to base public policy on shows that they lack the self-criticism inherent in real science. . .”

If the agenda is really public policy (e.g. ‘global governance’, ‘climate justice’) then it doesn’t matter if the models have any basis in reality; they have been created to support the agenda with a specious ‘scientific’ legitimacy. They have the advantage of being so arcane that they are beyond the ken of ordinary people; only the high priests of climate scientism are admitted into their mysteries.

Clearly the author of this post, Pat Frank, has not been properly initiated, or he would have seen that so naive a concept as ‘error propagation’ does not apply to the sacred models, which inhabit a realm unblemished by mere empirical facts.

It is with deep sadness we note the passing of the Null Hypothesis in climate science. Born about 1925, Null has had a long and distinguished career testing the significance of an immense variety of theories and conjectures. In particular, Null brought to science a realization of the medical injunction to “First do no harm,” by not claiming the truth of a hypothesis without clear evidence. Unfortunately, in recent years Null fell into declining health, contracting a serious case of consensus from which Null never fully recovered. Finally, when complications set in from natural variability and other signs of “bad data,” Null finally expired. In lieu of flowers, Null’s estate asks that you donate to the statistician of your choice.

Oh, for the good old days of slide rules. With slide rules you had to think through the problem to get the right magnitude. Now with calculators and computers you can get ten digits of precision with absolutely no understanding of the problem.

At least with my old K & E Log Log Deci-Trig you had to mentally calculate the decimal point and check the reasonableness of the result. Most kids today, using hand-helds, haven't the foggiest idea whether the answer they get is even close to the right order of magnitude.

All computer models are wrong but some are useful. Climate “experts” fail to recognise or even accept this basic truth. There are many reasons why computer models should never be used to predict the future — and there are even more when they apply to a complex system such as climate — of which this is a very good example.

It is CRUCIAL that papers such as this be published and there must be a publisher somewhere who recognises the difference between PROPER, objective scientific review and what passes for this in today’s supposedly scientific media.

Truthseeker, chaotic non-linear systems are extremely difficult for classical physics to handle. The partial differential equations used are mathematics, and they do correctly describe particular phenomena, but they cannot be solved for a unique value, only estimated numerically. That estimate immediately evolves into a calculation error when numerous calculations involving small increments, as in a climate model, are made. After some limited number of steps the errors overwhelm any actual result. But chaotic systems can still be studied scientifically; they just involve a whole series of intractable mathematical problems that haven't been solved yet. Christopher Essex's several lectures on the problems with computer numerical modelling, and Lorenz's original article on discovering the "butterfly effect" (through a climate model), are pretty much still up to date as an intro.
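Lorenz's discovery can be reproduced with even the simplest chaotic system, the logistic map (a sketch of the idea, not one of Essex's examples): two starting values differing by one part in a billion become completely uncorrelated within a few dozen iterations.

```python
def logistic(x, r=4.0):
    """One step of the logistic map, fully chaotic at r = 4."""
    return r * x * (1 - x)

x, y = 0.2, 0.2 + 1e-9  # initial conditions differing by one part in a billion
max_gap = 0.0
for _ in range(60):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

# The 1e-9 difference is roughly doubled each step, so within
# ~60 steps the gap between the two trajectories reaches order 1:
print(max_gap)
```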

This essay on accuracy and precision, and how to handle the errors in each, shows that climate modelers haven't really grasped the ideas yet. My physical chemistry class spent the better part of a quarter (72 hours of class) just covering the very basic material on errors in measurement and how they ballooned even in very simple calculations.

As the Essex lecture points out, and many of us have also, there is no such thing as a global temperature, because the way it is constructed it doesn’t deal with observations but with statistical constructs from the data. Using the “GAT” as an input to any kind of simulation becomes a simplistic method of getting wrong answers, because the physics involved has nothing to do with the non-existent average temperature but with the particular temperature affecting a process in a particular place.

I always thought of this problem as what happens to two parallel lines when one end of one line is offset by a fraction of a degree. How long before the parallel lines are meters apart? Kilometers apart?

So yes tiny errors can result in huge errors down the processing line.
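The divergence in that analogy is just trigonometry. A quick sketch, with an assumed misalignment of a tenth of a degree:

```python
import math

# Sketch of the parallel-lines analogy: tilt one of two parallel lines
# by a tiny assumed angle and see how far apart they drift with distance.
tilt = math.radians(0.1)   # a tenth of a degree of misalignment

for distance_m in (10.0, 1000.0, 100000.0):
    separation = distance_m * math.tan(tilt)
    print(f"after {distance_m:.0f} m the lines are {separation:.2f} m apart")
```

At 100 km the lines are roughly 175 m apart: a tiny angular error becomes a large positional one, and it only grows with distance.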

Put another way, “To err is human; to really f**k it up you need a computer.”

The lesson being humans do make errors but computers can then replicate and compound those errors a thousand times a second.

There is no way to know which of the simulations actually represents the correct underlying physics. Or whether any of them do. And even if one of them happens to conform to the future behavior of the climate, there’s no way to know it wasn’t a fortuitous accident.

Seeing as the hundreds of other models certainly don’t conform to the future behaviour of the climate – as they don’t all track each other – it must be a fortuitous accident.

Because you do not get enough warming projected, very little warming actually, they force the models by a simple trick to generate extra warming.
The warming projected is not a warming due to the GHG effect only; it is an artificially inflated warming.
They know that, because it is done on purpose, not accidentally, even though it may be passed off as one of those accidental errors.
They are not interested in fixing that. Simply a conflict of interest.
They do not want to find the right model, the one that works, because then there would be no AGW projections.

That is why the beautiful and perfect work of Pat is rejected by these guys.

Brilliant piece! The “propagated error” point is a revelation to me. I have really learnt something here.
I read the Bank of England quarterly inflation reports and wondered why all their graphs had the same shape as the graph on the right of your figure. Now I know. They run economic models and clearly understand the uncertainties inherent in them and the effect of propagated error. Interestingly they add a probability function into their “fan charts” so that you can see that the chance that the errors are all in the same direction is lower than if they are more balanced, some positive, some negative. However the central point is that the actual result has a positive chance of being anywhere in the fan – and each quarter they critically compare their previous prediction with the actual outcome. Something environmental scientists seem reluctant to do.

I would really like to read this paper. Keep trying, perhaps you should try some Statistics journals rather than Environmental Science journals. They will have less of a vested interest in climate modelling and you might get reviewers who actually know what they are talking about. Maybe you could even get your paper accepted in an Economics journal, possibly rewritten to contrast the cleverness of economic modellers with the stupidity of climate modellers. Everybody responds to flattery!

The accuracy vs. precision problem is spot on. I have also noticed that some commenters in here have the same problem. When world temperatures are discussed, some people on both sides of the fence seem to have trouble telling them apart. They complain about error bars in world temperatures when the real problem is lack of accuracy. (Well, also that the concept of temperature of a system which is not in equilibrium is a messy one and does not equate to total energy of the system.)

Also, graph b is the type of graph one would expect when performing any kind of modelling that consists of taking the results of one iteration and using them as the starting point of the next iteration.

One very good question to ask is: if not models, what else?
In reality, when you ask this question you find that the evidence for the whole ‘we are doomed’ game is pretty much rubbish without the models. Given that, you can see why, despite their inabilities, the models have to be defended and promoted so heavily. There are a lot of careers, cash and political ambitions resting on their shoulders.

Quite simple really. We are in the age of virtual reality. Most people hate their lives and live in a virtual world. You can have virtual love, relationships, sex (there is an app and equipment for that). Soapies, tv series, movies of every kind to suit every taste. Models are no different to that. The MSM can make high drama out of this and most of the sheep lap it up.
I, for one, am hanging on to the toilet rim and refuse to be flushed down with the rest of the idiots.

well here in reality that 340W/m2 has moved (expanded?) my model thermo-meter from -30F to -20F since dawn and with a probably pretty good albedo given the whiteness of my view – but blue skies for the transparent greenhouse so nada in the backradiation scam
On the interior, firewood is oxidizing nicely.

EPA formally stated in the Endangerment Finding for GHGs that the attribution of warming to humans rests on 3 lines of evidence: 1. Temperature Records, 2. Physical Understanding of Climate, and 3. Models. They claimed >90% confidence based on these 3 lines of evidence. AR5 bumped that to 95%.

Nos. 2 and 3 are total crap.

Hot spot, anyone?

No. 1 – we are well within natural variability and so there is no basis for an inference that humans have caused an excursion beyond natural variability.

Lots of climate modelers have their degrees in mathematics, SanityP. Science is pretty grubby to them, what with all that messy observational stuff and materiality (dirt). My instinct is to avoid such journals.

Can you name the reviewers?
There are so-called “name and shame” campaigns that go after those who do not support the “consensus” position… why can we not know who it is that has zero understanding of their own models?

Close but no cigar. Yes you are allowed, indeed some journals now have “open peer review”. But these are exceptions, I will grant you this.

However, the best example of open peer review I can give in this AGW field is the original paper of the “Father” of AGW.
The title of the paper “The Artificial Production of Carbon Dioxide and its Influence on Temperature”, published in 1938 in Q.J.R.M.S (certainly a top scientific journal) by G.S. Callendar.

You can then read the comments of the reviewers, as well as their names, quite a few of them, under the Discussion of the paper. Then you can read the answers from Callendar to them. Surprise?

I certainly do not want to go off topic about peer review. But I also had a surprising experience.
In 1971, I submitted a review article and received comments from one reviewer in the typical anonymous fashion. When the article was published I was very surprised to see the name of the reviewer printed on the title page with the note that he was the reviewer of the article.
I am not sure why; when I wrote the article I certainly was not yet established as a scientist in this particular field. He was over 60 and well respected.
Then the article was and is still cited and became a “fixture” in that field. Another surprise: several authors when citing the article added his name (I think by honest mistake) as a co-author!
A few years later, I had the pleasure of meeting him as we served on an advisory committee. We had a few drinks, a nice dinner and he was still teasing me about a small part of the article he did not like. I teased him about being a false co-author. So much for peer review. Never perfect, but needed.

My impression now is that with the Internet, we are seeing major changes in scientific publishing and we will also see major changes in peer reviews and open comments.

By the way, if you read the paper, you will see that the Father loved the increase in CO2!

Basically, he wrote a paper that pointed out that accuracy of models (how close they are to reality) is not the same as precision (how much they wobble around – which is a function of the models, not the real world).
The peer reviewers got confused between the two ideas and thus, conveniently, rejected the paper.

In addition he points out that errors in the start values (or maybe the model assumptions) are iterative, they are repeated. As such they add up.
“You owe me a fiver ± a friendly pint” is fine. No-one keeps count of the friendly pint.
But if the same thing happens day after day post-work then you can feel the resentment growing. That “friendly pint” becomes significant.
Yet the peer reviewers seem to think that the wobbles around the start are a limit to the number of friendly pints, so they can be ignored. They are wrong – it repeats and adds up.
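The pint arithmetic can be made concrete. A hedged sketch, with purely illustrative numbers: independent daily errors accumulate as the root-sum-square (growing like √N), while a same-sign bias accumulates linearly (growing like N). Either way the total does not stay inside the single-day bound.

```python
import math

# Illustrative numbers only: ±1 "pint" of uncertainty per day.
per_step = 1.0

for n_days in (1, 10, 100):
    random_walk = per_step * math.sqrt(n_days)   # independent daily errors
    systematic = per_step * n_days               # same-sign bias every day
    print(f"{n_days:>3} days: from ±{random_walk:.1f} (random) "
          f"up to ±{systematic:.0f} (systematic) pints")
```

After a hundred days the friendly pint is worth somewhere between ±10 and ±100 pints, never the ±1 the reviewers assumed.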

He also pointed out that error boundaries (how far from physical reality the models are expected to be) are not the same as the range of wobbles (precision), as the wobbling is not wobbling about the real world; it wobbles about what the models are centred on. Again the reviewers get a little confused. Apart from the one he thinks isn’t a “Climate Scientist” and who he therefore thinks may be a competent scientist.

The rest was further illustrations of that theme. If I understood the author correctly. Hope that helps.

The modelers were told by Pat, in a very fine and clear way, that their models obviously break the very first Commandment for models… and the answer basically was that that does not matter at all… that is how they like their models, regardless of how wrong and perverse that could be.

Interesting new developments in the computing world, with people building supercomputers from cheap $35 computers arranged into computing nodes. Here is an example of a 32-node compute cluster using the Raspberry Pi version 1 (version 2 has 4 cores instead of only one, so it could total 128 computing cores for the same cost):
Imagine what we the skeptics might have available to us before the end of this decade to investigate (run) climate models on our own.

Nice toy, but the wrong approach for a number cruncher. The highest performance in TFlops/Watt – and price as well – can only be achieved with high concentrations of actual computing pipelines, SIMD arrays, like the Nvidia cards or SoCs like the Xilinx Zynq, which has 250 DSP slices embedded in an FPGA (and 2 ARM cores for controlling the thing).

These things have GPUs on them that could also be used for computing. It is an inexpensive way for a person like me who would want to try out code for climate models.
On the other hand, Intel has an 18-core hyper-threaded chip that can run 36 threads simultaneously, but it is rather expensive – in the thousand-dollar range.
Microsoft is building a Windows 10 variant for the Raspberry Pi. When that becomes available, I will seriously look into building my own compute cluster.

They are nice toys, but the problems raised in this post still hold good. It is straightforward Lorenz: the start data is inaccurate, and the models lack the capability to model everything in the chaotic climate system. Even the IPCC said: “The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.”
Small errors in the initial state propagate, but in a chaotic system they will not propagate uniformly. As the number of inaccurate variables is close to infinite and the chaotic system has “unknown unknowns”, anyone who thinks that it is possible to model the climate with any level of correctness does not understand the climate.

A very good and interesting article. To a large extent I share your despair, but would encourage you to find other possible places to publish. My overview is that those working on GCMs have developed ‘ignorant expertise’: they have become expert in their own paradigm and groupthink but divorced from the tenets of science as a whole.

Thanks, Jonathan. I apologize for communicating despair; didn’t mean to. The ms has been submitted again, and I remain cautiously optimistic. You’re right about the modelers. Some came across as quite upset that I should suggest a means of analysis standard in physical science, but not standard in their field.

As I have used the phrase ‘climate-models-can’t-predict-squat’ in C3 articles multiple times, it’s a guilty pleasure to read an article that addresses the issue head-on.

Being a retired biz executive, the climate model output has always reminded me of marketing managers spending way too much time devising Excel algorithms that provide “empirical” evidence, with the end result always being that a new marketing campaign means total domination of a given market within a few years.

And these fairly smart sales/marketing manager types would truly come to believe their simulated outputs were the probable future reality. (This type of simulation “science” was also used to fertilize the crazed tech boom frenzy that ended badly with the severe 2000 dot-com bubble bust – instead of sales projections, it was the grandiose simulated predictions of ‘eyeballs captured’ that fed the investors’ appetites.)

Alas, the climate modelers are no different than the self-deceived jokers in the marketing/sales departments, who made faulty sales projections based on complex Excel formulas without an understanding/appreciation of the underlying nuances and unknown macro, micro, behavioral and innovation economics at work, globally, 24/7.

Climate modelers as scientists? Nope. Instead, they’re the climate science community’s jokers, closely related to their always failing brethren in the business world.

Sometimes I am amazed that we as a society ever got complicated machines like cars and airplanes built on such a large scale with such craziness going on. It seemed to me that in my world those who could not do their technical job very well realized their inabilities and therefore turned their attention to becoming managers. Many of them succeeded. The problem was that they were not good with determining who were technically competent and who were not. Eventually you are part of a group with a rightfully deserved bad reputation.

Modelers live in virtual reality so they can’t see what’s really happening. Computer programs are under their control and so give the illusion of mastery. Your frustration is akin to that of all teachers whose pupils just don’t have the capacity to understand. Thank you, though, for putting this on the record rather than just letting it go.

As Frank suggests between the lines, in science the purpose of a model is to make predictions of real-world data. Science is a mapping from data to data.

The purpose of the climate modelers is to get published in an approved journal via peer review. To the latter, accuracy and precision mean no more than consistency in model results, and they have programs to make their models consistent, programs which have, by and large, been successful. GCMs predict future climate, but these predictions can never be, and never are, validated. They are near enough to require urgent funding, but far enough away to be untestable in our lifetimes.

In a brief, lucid moment, Richard Horton, Editor of Lancet, explained the modern publication process:

The mistake, of course, is to have thought that peer review was any more than a crude means of discovering the acceptability – not the validity – of a new finding. Editors and scientists alike insist on the pivotal importance of peer review. We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed [jiggered, not repaired], often insulting, usually ignorant, occasionally foolish, and frequently wrong.

Science is not about voting. It is not about peer-review, publication, or consensuses. These are subjective. It’s about predictive power. Science is the (strictly) objective branch of knowledge.

Fortunately for science and society, and unfortunately for the climate modelers, the GCMs contain one accessible, implicit prediction: Climate Sensitivity. Data from the last decade and a half invalidate that prediction. The toast fell jelly side up.

Climate models fail – not because they are computer models, but because they butcher the physics of climate. They are incompetent. These postmodern modelers talk about feedback, but then leave out the most powerful feedback in all of climate, total cloud albedo, the number nominally put at about 31%, and which is in fact variable, gating the Sun on and off. It is a positive feedback, amplifying solar radiation (the burnoff effect) and a negative feedback, mitigating warming from any cause (from the Clausius-Clapeyron effect).

These top level aspects of the climate story can be widely understood, even reaching the general public.

Good luck with that, Pat. You are still being way too nice to them. Additional points:

* They treat the PPE envelope as if it is error when, as you say, it is not. But they do not examine the structure of the individual traces, which themselves often have absolutely absurd variability and the wrong autocorrelation. I have remarked many times on what the wrong autocorrelation means physically via the fluctuation dissipation theorem. In a nutshell, if the autocorrelation times are not correct, then the physics of the open system is provably not correct, end of story.

* The models do not conserve energy per timestep. This means that at the end of every timestep the system has to be renormalized or it will run away. But they cannot fully renormalize it, or else the models would not run away the way they need them to. They therefore have to renormalize the energy balance enough to stabilize the model, but in a way that permits GHGs to force the solution to grow over time. I won’t say that it is impossible to perform this sort of numerical magic without introducing all sorts of human bias into the result — I’ll just say that I am deeply skeptical about the entire process. It’s like solving a stiff set of coupled ODEs (very much like it, in fact, almost identical to it) so that it sort of diverges but doesn’t really diverge. How can you be sure that the result is actually a solution and not your beliefs about the solution?

* The Multi-Model-Mean is an abomination, and all by itself proves utter ignorance about statistics in climate modeling.

* In the end, how are the models any different from a simple direct physical computation of GHG forcing? They are obviously set up to have a median output around the centroid prediction of the usual logarithmic climate sensitivity, and everything else is just model-induced noise around this obvious trend. I could (and have) produced the centroid line just fitting and extrapolating the climate data in a one-significant-parameter, purely statistical model fitting HadCRUT4. The PPE output is mere window dressing designed to make this fit somehow more plausible, or to emphasize that it COULD warm as much as 6 C — if there were no negative feedbacks in the system and all of the dice used in the model came up boxcars a hundred times in a row.

“The Multi-Model-Mean is an abomination, and all by itself proves utter ignorance about statistics in climate modeling.” You mean that 15 wrongs don’t make a right? :-( It would seem that if 15 models all give different results, and we consider differences of 0.02C to be significant, then at least 14 of them have to be wrong.

I’ve often wondered, rgb, why you don’t write a critical article. You’re so totally qualified, and you (unlike me) understand the physics and math right down to the bedrock. I’m still wondering. It would go nuclear. Why not do it? Think of the children. :-)

The Catastrophic Anthropogenic Global Warming (CAGW) theory has so many obvious flaws that in my opinion there are only two reasons someone might believe in it. Either they are being paid to or they’re not the sharpest tool in the box, i.e. they wear a polar bear suit to demonstrations.

@Pat
There is so much error in your position that I hardly know where to begin. I hope WUWT people don’t think this article is the last word on computer simulations. Simulations don’t sample a statistical distribution, unless that is directly programmed into the simulation. That’s why there is no “error propagation”. You can call this a fault of the simulation, but most simulations performed in all fields similarly lack a modeled error to the input parameters, and therefore do not and cannot propagate error.

The actual error of computer simulations is measured as propagation of truncation error due to the limited precision of the computer. The way to establish a statistical distribution of your outputs is through sampling parameter space and observing the outcomes. This error estimate is sort-of what the climate modelers are doing and what you have a problem with. I don’t think they’re any different than modelers in other fields. So, Pat Frank, meet windmills; windmills, Pat Frank. :)

Also, your writing style is opaque. I had a hard time parsing what you meant. This may have also been a problem for your reviewers.

Finally, I have always been skeptical of the dogmatic “propagated error” rules in physics. The underlying assumption of these rules is that the input parameters are normally distributed with a small but finite probability of being infinitely in error. This is total nonsense. Using the well-known rules of error propagation, you can generate errors that are absurd and also depend upon the number of mathematical operations you perform without changing the underlying reality of how uncertain our knowledge is. It’s very clearly a game.

But suppose that I want proof of your claim that error propagation rules conform to reality. Where is your proof that error propagation in physics is correct in the physical world and not some inept game? How can you know *empirically* that error propagation calculations are correct in all fields? You don’t cite any articles saying that error propagation rules were observed to be correct!!! Everything you say is utter dogmatism without proof.

JDN,
Your comments confirm the knowing refusal to accept the Null Hypothesis in Climate Science. Instead, climate modellers continue to redesign their bamboo control towers, adjust the layouts of their runways, and add bling to the controller’s headsets… and then wonder why the planes still do not land.

1. How or why does a lack of sampling of statistical distributions preclude error propagation? I may be a dumbass, but I have no idea why it is even a pertinent observation to say that simulations don’t sample statistical distributions. What difference does that make?

2. What does it mean to say that “most simulations” “lack a modeled error to the input parameters”? What difference does that make; how or why does this, whatever it is, preclude error propagation?

The actual error of computer simulations is measured as propagation of truncation error due to limited precision of the computer.

It seems to me that you cannot tell the difference between ‘measurement precision’ and ‘floating point precision’, and maybe that is the problem the reviewers have as well.

I’ll give you an example. If Earth’s albedo cannot be measured to more than 2 significant digits, there is no point in performing calculations or writing code in double precision or double extended precision format. Single precision will do. The third digit in your final calculation is not going to be significant anyway.
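That gap can be checked directly. A sketch, assuming an albedo of roughly 0.31 (the figure quoted elsewhere in this thread) known to two significant digits:

```python
import struct

albedo = 0.31                    # assumed known to ~2 significant digits
measurement_uncertainty = 0.005  # half a unit in the last quoted digit

# Round-trip through a 32-bit float to expose the representation error.
as_single = struct.unpack('f', struct.pack('f', albedo))[0]
representation_error = abs(as_single - albedo)

# The single-precision representation error (~1e-8) is roughly a
# million times smaller than the measurement uncertainty (~5e-3), so
# extra floating-point digits buy nothing here.
print(representation_error, measurement_uncertainty)
```

Floating-point precision and measurement precision are different quantities; here the measurement is by far the coarser of the two.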

JDN: The underlying assumption of these rules is that the input parameters are normally distributed with a small but finite probability of being infinitely in error.

that is not correct. The probability that a normally distributed random variable is greater than a particular large number is finite, but the probability goes to 0 as the large number is made larger. The mathematical fact that the normal distribution has infinite support has never prevented it being useful with lots of kinds of measurements that are physically bounded, such as Einstein’s model of Brownian motion.
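The vanishing of that tail is easy to quantify with the standard identity P(Z > k) = erfc(k/√2)/2 for a standard normal Z:

```python
import math

# Upper-tail probability of a standard normal via the complementary
# error function: P(Z > k) = erfc(k / sqrt(2)) / 2.
def upper_tail(k):
    return 0.5 * math.erfc(k / math.sqrt(2.0))

for k in (1, 3, 6):
    print(f"P(Z > {k}) = {upper_tail(k):.2e}")
```

Roughly 0.16 at one sigma, 1.3e-3 at three, and about 1e-9 at six: the “small but finite probability of being infinitely in error” is negligible in any practical sense.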

This is in reference to your concern over the compounding of uncertainty in model simulations. It is not an attempt to answer your epistemological challenge in the final paragraph.

From Wikipedia:

In statistics, propagation of uncertainty (or propagation of error) is the effect of variables’ uncertainties (or errors) on the uncertainty of a function based on them. When the variables are the values of experimental measurements they have uncertainties due to measurement limitations (e.g., instrument precision) which propagate to the combination of variables in the function.

Either climate modelers take proper account of inherent errors (in both measurements and in basic theory) and how they interact within each model or they do not. It seems like a legitimate area of investigation to me.

JDN, error propagation does not require sampling statistical distributions. I propagated systematic error, which need not follow any standard statistical distribution at all.

And it was not an input error propagated, but a theory-bias error; one made by the models themselves and therefore present in every simulation step. Such errors can be estimated, and can always be propagated through a simulation. And should always be propagated through a simulation.

Truncation error is a numerical error. My post spoke to physical error. Do you understand the difference? Your idea that, “The way to establish a statistical distribution of your outputs is through sampling parameter space and observing the outcomes.” shows exactly the confusion about accuracy vs. precision evidenced by all my climate modeler reviewers. Sampling parameter space is about precision. It tells one nothing about physical error. Physical error is determined by comparing model expectation values against the relevant observational magnitudes.

You wrote, “This error estimate is sort-of what the climate modelers are doing and what you have a problem with.” Correct; it is what they do. Their method says nothing about the physical accuracy or reliability of their model projections.

“I don’t think they’re any different than modelers in other fields.” If you’d like to see an actual model reliability analysis, consult Vasquez, VR and Whiting, WB, 2005. Accounting for Both Random Errors and Systematic Errors in Uncertainty Propagation Analysis of Computer Models Involving Experimental Measurements with Monte Carlo Methods. Risk Analysis 25(6): 1669-1681, doi:10.1111/j.1539-6924.2005.00704.x. They propagate error.

After reading that paper, I wrote to Whiting about it. His reply was illuminating: “Yes, it is surprisingly unusual for modelers to include uncertainty analyses. In my work, I’ve tried to treat a computer model as an experimentalist treats a piece of laboratory equipment. A result should never be reported without giving reasonable error estimates.” There’s the attitude of a practicing physical scientist for you. Rather diametrical to your modeling standards, isn’t it.

So, it seems you may be right, and that modelers elsewhere make the same mistake you do here. The mistake so obvious to a trained physical scientist and so ubiquitous among climate modelers: the inability or unwillingness to perform physical reliability analyses on their model results. An inability to understand the difference between physical accuracy and model precision.

So, accuracy, meet JDN, JDN, accuracy. I recommend you learn to know it well. If you want to do a physical science.

Sorry you found my writing style opaque. You seem to be the only one who has (so far as I’ve read). Maybe the problem has something to do with your immersion in modeler-thought.

Regarding the statistics of propagated error, when the error is empirical (made vs. observations) one doesn’t know the true distribution. The uncertainties from propagation of error are therefore always estimates. But so what? In physical science, one looks for useful indications of reliability.

Rules for propagating error are not “dogmatic.” That’s just you being rhetorically disparaging.

The results of error propagation are not absurd errors, but non-absurd uncertainties. Of course, the uncertainty increases with the number of step-wise calculations. Every step transmits its uncertainty forward as input into the next step and each step also has its own internal parameter error or theory bias error. That means one’s knowledge of state magnitudes must decrease with every calculational step. You may not like it, but it’s not mysterious.
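The step-wise compounding described here can be sketched in a few lines. This is only an illustration of the quadrature bookkeeping, not the paper’s actual calculation; the per-step uncertainty is an assumed, arbitrary number.

```python
import math

per_step = 0.1   # assumed, purely illustrative per-step uncertainty

u = 0.0
envelope = []
for step in range(100):
    # Each step inherits the previous step's uncertainty and adds its
    # own, combined in quadrature; knowledge can only degrade.
    u = math.sqrt(u**2 + per_step**2)
    envelope.append(u)

# After N steps the envelope is per_step * sqrt(N): it only ever widens.
print(envelope[0], envelope[-1])
```

The envelope grows monotonically with the number of steps, which is why the uncertainty fans in panel b widen over the projection period.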

Take a look at the derivations in Bevington and Robinson 2003 Data Reduction and Error Analysis for the Physical Sciences. Show me their derivational dependence on location in the physical world, or limitation by discipline or field. Their generalized derivations make it obvious that they are universally applicable wherever physical calculations are carried out. You’re just grasping at straws, JDN.

You’re welcome to carry out your climate modeling sealed hermetically away from physical error analysis, and from the threat of countervailing observations. But then don’t pretend that you’re doing science. And don’t pretend that your models have anything to do with physical reality.

I agree that propagating error in computer code is possible, and I think you agree that it’s not usually done. Should it be? The reason I brought up answering the question empirically is because, to my knowledge, our basic error propagation techniques were derived in fields that are suited to examining the results of these rules. For example, I’m reasonably sure the error estimate propagation techniques will work in atomic physics. I’m completely uncertain whether error should be propagated in the same way for “big world” simulations. The Monte Carlo approach to precision is what most people go with. With chemical simulations I’ve done, this is what I go with.

Your demand for “precision” makes you a character out of central casting. You would be the tragically flawed scientist who opposes the eventual hero. It’s just the “angry old man syndrome” talking. Just saying… “Precision” and “accuracy” are overloaded terms. They have been defined in so many ways, you can’t seriously expect people to *not* have cognitive dissonance reading these terms.

If you want to be understood, instead of just yelling at kids to get off your lawn, call these things something closer to what they are, maybe confidence interval of simulation output vs. variance from observation. See… that really clears things up. Demanding that people adhere to your jargon is unfriendly.

You can afford to be a friend because the climate simulations are completely bogus for so many other reasons.

And to answer the other commenters about my opinion of climate scientists, whether they’re scientists… they seem to be really bad ones. But my own field is gradually going this way as well. The corrosive effect of grant money means that the greatest scientific sin is to lack funding. It used to be an insult if someone said you would believe anything for money… now, you can put it on your CV. If you guys have so much time, stop messing around with criticizing bad stats and get control of the funding.

JDN, your supposition that I am ‘demanding precision’ makes me think you haven’t actually read anything. My entire post is about the importance of accuracy to science. Demur as you like, but it is both appropriate and standard to propagate error through any multi-step calculation.

If you’d care to read Bevington and Robinson you’ll find accuracy and precision carefully defined, and in the standard manner. Accuracy and precision have not been defined in “many ways” as you have it, but in one way only. You lose sight of that at the peril of your work.

Thank you for your fine article. I ask you to continue to press the issue because I have been pressing it with no success since 1999.

You summarise the problem when you write:

Every single recent, Holocene, or Glacial-era temperature hindcast is likewise non-unique. Not one of them validates the accuracy of a climate model. Not one of them tells us anything about any physically real global climate state. Not one single climate modeler reviewer evidenced any understanding of that basic standard of science.

You are not the first to observe that the climate model fraternity lacks a “basic standard of science”. For example, in my peer review of the draft IPCC AR4 I wrote the following which was ignored; i.e. my recommendation saying “the draft should be withdrawn and replaced by another that displays an adequate level of scientific competence” had no effect.

“ Page 2-47 Chapter 2 Section 2.6.3 Line 46
Delete the phrase, “and a physical model” because it is a falsehood.
Evidence says what it says, and construction of a physical model is irrelevant to that in any real science.

The authors of this draft Report seem to have an extreme prejudice in favour of models (some parts of the Report seem to assert that climate obeys what the models say; e.g. Page 2-47 Chapter 2 Section 2.6.3 Lines 33 and 34), and this phrase that needs deletion is an example of the prejudice.

Evidence is the result of empirical observation of reality.
Hypotheses are ideas based on the evidence.
Theories are hypotheses that have repeatedly been tested by comparison with evidence and have withstood all the tests.
Models are representations of the hypotheses and theories.
Outputs of the models can be used as evidence only when the output data is demonstrated to accurately represent reality. If a model output disagrees with the available evidence then this indicates fault in the model, and this indication remains true until the evidence is shown to be wrong.

This draft Report repeatedly demonstrates that its authors do not understand these matters. So, I provide the following analogy to help them. If they can comprehend the analogy then they may achieve graduate standard in their science practice.

A scientist discovers a new species.
1. He/she names it (e.g. he/she calls it a gazelle) and describes it (e.g. a gazelle has a leg in each corner).
2. He/she observes that gazelles leap. (n.b. the muscles, ligaments etc. that enable gazelles to leap are not known, do not need to be discovered, and do not need to be modeled to observe that gazelles leap. The observation is evidence.)
3. Gazelles are observed to always leap when a predator is near. (This observation is also evidence.)
4. From (3) it can be deduced that gazelles leap in response to the presence of a predator.
5. n.b. The gazelle’s internal body structure and central nervous system do not need to be studied, known or modeled for the conclusion in (4) that “gazelles leap when a predator is near” to be valid. Indeed, study of a gazelle’s internal body structure and central nervous system may never reveal that, and such a model may take decades to construct following achievement of the conclusion from the evidence.

(Having read all 11 chapters of the draft Report, I had intended to provide review comments on them all. However, I became so angry at the need to point out the above elementary principles that I abandoned the review at this point: the draft should be withdrawn and replaced by another that displays an adequate level of scientific competence).”

Yes, you are right. However, my post was not intended to introduce discussion of an illustration: I wrote in support of the assertion by Pat Frank that climate modelers lack an adequate “basic standard of science”.

Propagation of errors concerns the errors in the physical measurements, which can take the calculations off the rails. Each measurement, be it TSI, down-welling radiation in W/m^2, surface air temperature, etc., has an error range associated with it. When you model these measurements into the future, you need to keep in mind that they all have a range of accuracy. The first step has a range of accuracy. As a result, the second step does not have a fixed starting point. The error from the first step has to be included in the second step. At each subsequent step, the error of the previous step has to be added to that of the current step.

Example from the text: Using the root-sum-square, after 100 steps (a centennial projection) ±1.18 C per step propagates to ±11.8 C.
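That arithmetic can be checked directly. A minimal sketch of the root-sum-square rule, using the ±1.18 C per-step figure from the example:

```python
import math

per_step = 1.18  # degrees C of uncertainty contributed by each annual step
steps = 100      # a centennial projection

# Root-sum-square: independent per-step uncertainties add in quadrature,
# so the total grows as sqrt(N) rather than linearly in N.
total = math.sqrt(sum(per_step ** 2 for _ in range(steps)))

print(round(total, 1))  # 11.8
```

With equal steps this reduces to per_step × √steps = 1.18 × 10 = 11.8, matching the quoted centennial figure.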

Frederick, I read “Taken by Storm” a coupla-three years ago, thanks. It’s an excellent book. If consensus climatology were a real field of science, books like that, and other published work, would have changed its direction long ago. I’ve been in touch with Chris Essex. He knows my work.

“Kuhn’s book (William: “The Structure of Scientific Revolutions”) challenged the popular belief that scientists (William: particularly including ‘climate’ scientists, who are extraordinarily resistant to the piles and piles of logical arguments/observations/analysis results that indicate their theories are urban myths, due to the climate wars) are skeptical, objective, and value-neutral thinkers. He argued instead that the vast majority of scientists are quite ‘conservative’; they’ve been indoctrinated with a set of core assumptions (William: the core assumptions/theories become, over time, unthinkable to be incorrect) and apply their intelligence to solve problems within the existing paradigms.

Scientists don’t test whether or not they are operating with a good (William: valid) paradigm as much as the paradigm tests whether or not they are good scientists. (William: ‘good’ scientific behaviour is defined as a person who does not publicly question the group’s core beliefs, or imply that the group’s core beliefs are an urban legend.)”

If there are fundamental errors in the base theory, there will be piles and piles of observational anomalies and paradoxes. The fact that there are piles and piles of anomalies in almost every field of ‘pure’, non-applied science indicates there is something fundamentally incorrect with the methodology/approach and ‘culture’ of ‘pure’ science. It also explains why major, astonishing breakthroughs in pure science are possible. (Imagine decades and decades of research which provides the observations/analysis to solve the puzzles, and a weird irrational culture that stops people from solving the problem.)

An example of an in-your-face failure to solve a very important scientific problem/puzzle: what is the origin of the earth’s atmosphere, oceans, ‘natural’ gas, crude oil, and black coal? The competing theories (though there was/is no real theory competition) are: 1) the ‘late veneer theory’ (which is connected with the fossil fuel theory, where a tiny amount of CO2 is recycled in the upper mantle) vs. 2) the deep core CH4 theory (see the late astrophysicist Thomas Gold’s book ‘The Deep Hot Biosphere: The Myth of Fossil Fuels’, in which there is a large continuous input of CH4 and CO2 into the biosphere from CH4 extruded from the core of the earth as it solidifies, which explains Humlum et al.’s CO2 phase-analysis paradox and roughly 50 different geological paradoxes/anomalies) as to the origin of the earth’s atmosphere, oceans, and ‘natural’ gas/crude oil.

A standard, effective, structured approach to problem solving, as basic as listing and organizing the anomalies/paradoxes in a very long review paper or a short book, looking at the logical implications of those anomalies/paradoxes, and formally exploring/developing alternative theories, is not done, because it would highlight the fact that there are fundamental errors in the base theories and that some of the base theories are most certainly urban myths (i.e., cannot possibly be correct).

It is embarrassing, unthinkable, for a group of specialists to question their core science, to suggest that there are or could be fundamental errors in their base theory, that those errors could have gone unaddressed for decades, that their base theory could be, indeed obviously is, an urban myth. New graduates who want to ‘progress’ in the field, who want to get university teaching and research positions (imagine a research department made up of a couple of dozen professors, with seniority and a pecking order, and each specialty field having a few hundred teaching members and a thousand would-be teachers/researchers, again with a pecking order and benefits that can be controlled/changed to encourage the culture), have no logical alternative but to continue to support the incorrect paradigms and the ineffective approach to problem solving.

The IPCC climate models’ response to forcing changes is orders of magnitude too large: the general circulation models (GCMs) amplify forcing changes, a positive feedback, rather than suppress or resist them, a negative feedback. If the real world’s response to forcing changes were to amplify them, the earth’s temperature would oscillate widely in response to, say, a large volcanic eruption or other large temporary forcing change.

The justification for the claim that the planet amplifies rather than resists forcing changes is the presence, in the paleoclimatic record, of very large climate changes. These very large, very rapid climate changes are not, however, random; they are cyclic. The Rickies (rapid climate change events, RCCEs) correlate with massive solar magnetic cycle events and unexplained geomagnetic changes.

A climate model that has positive feedback can be ‘tuned’, by adjusting the inputs and internal model variables, to make the model produce a rapid, very large, abrupt temperature response to a small forcing change. As noted above, however, since the earth’s temperature does not oscillate widely when there are large temporary forcing changes, the explanation for cyclic abrupt climate change in the paleo record is not that the planet amplifies the forcing. The explanation for the Rickies is that the sun can and does change in a manner that causes very, very large changes in the earth’s climate, which is supported by the fact that there are cosmogenic isotope changes at each and every abrupt climate change event and at the slower, less large climate change events.

A few of the Climate Science practitioners have grudgingly admitted their discomfort at the use of higher-than-observed mid-latitude and tropical aerosols to balance the GHG forcing if the Arctic temperature rises are to be replicated by the GCM ensemble results.

That problem, in and of itself, if “climate sciencism” were a real science, should demand a major dumping of the inherent assumptions built into the models, and then of the models themselves.

Scientists are heir to all the foibles that infect humanity, William. There’s nothing new or revelatory about that.

The interplay of falsifiable theory and replicable observation, though, is the particular strength of science. It’s not present in any other field of study.

So long as this method is freely and honestly practiced, the problems you note will be only temporary, uncomplimentary and braking of progress though they may be.

The problem in climatology has been the deliberate subversion of free and honest scientific practice. Had the major scientific institutions — the APS and AIP especially — stood up against the politicization of climate science, we’d never be in this Lysenkoist-like oppressive mess.

Rather than “Are modelers scientists?” Dr. Frank might as well have asked, “Are people logical?”

Having dealt extensively with the physicists, chemists, and engineers that Dr. Frank contrasts with climate modelers, I can assure you that they are more than capable of similarly flubbing basic distinctions.

In my working life I saw a group of them fail repeatedly to comprehend the difference between the flow of a fluid and the propagation of a disturbance through that fluid. I saw a large sum of money wasted because highly regarded scientists failed to focus on the distinction between length resolution and angle resolution. I could go on.

In the blog world–actually, on this very site–I’ve seen scientists repeatedly fail to distinguish between the stated conclusion of Robert G. Brown’s “Refutation of Stable Thermal Equilibrium Lapse Rates,” which can be interpreted as correct, and the logic by which that conclusion was reached, which is a farrago of latent ambiguities and unfounded assumptions defended by gauzy generalities and downright bad physics.

In the latter context I succumbed as Dr. Frank did to the temptation to be provocative, in my case by saying that we lawyers are justified in viewing science as too important to be left to scientists; it sometimes seems that you have to undergo a logicectomy in order to become a scientist. The truth, though, is that failure to recognize basic distinctions is less a characteristic of any particular occupation than a general human shortcoming that in varying degrees afflicts us all.

But Dr. Frank’s response is undoubtedly salutary; venting is often better for the soul than a stiff drink when others blandly reject what you see with crystal clarity. That it will also serve as “a civic corrective good” in this case is devoutly to be hoped.

Please see my response to Willian, Joe. It perhaps applies as well to your experience, too. That scientists are fallible and given to their own brand of foolishness or failure is a constant across modern history. Recognizing that won’t help the situation, but it may allow you to let go, and help you feel a bit more optimistic about things.

The cure for alcoholics is not (1) more alcohol, nor (2)more money to buy better alcoholic beverages, nor (3) removal of life’s personal/family responsibilities to negate the negative effects in order to justify staying drunk.

Climate modellers and their climate scientist followers practice analogies of all three of the above. As with alcoholism, the cure for climate modelism will be the complete removal of the funds which support the dysfunction. Of course they will resist with screaming fits and tantrums.

While it is true that we teach engineering juniors propagation of error (I prefer propagation of uncertainty), I’m not so sure that undergraduates in physics or chemistry necessarily see the same. I don’t recall it from my physics undergraduate days of 40-44 years ago. I do not recall seeing it in any chemistry course I have taken. I made a presentation about this a few years ago to a group of community college educators (science and mathematics) and no one could recall seeing it before.

I am very disturbed by this quotation from one of the reviewers.

“[T]he author thinks that a probability distribution function (pdf) only provides information about precision and it cannot give any information about accuracy. This is wrong, and if this were true, the statisticians could resign.”

Not only is it badly worded (“could resign”?), the writer fails to comprehend systematic error, or bias, and that a PDF does not quantify such. In fact, if one considers that consistency means that collecting more data must lead to an outcome closer to truth, bias results in a method that lacks consistency. I think the reviewer “could resign.”
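The point about the PDF can be demonstrated with a toy measurement: a hypothetical instrument with a fixed offset produces a narrow, well-behaved distribution that says nothing about its distance from truth. The true value, bias, and noise figures below are invented for illustration:

```python
import random
import statistics

random.seed(42)  # reproducible sketch

true_value = 20.0  # the (normally unknown) truth
bias = 2.0         # systematic error: every reading offset the same way
noise = 0.1        # random (statistical) error

readings = [true_value + bias + random.gauss(0.0, noise) for _ in range(10_000)]

precision = statistics.stdev(readings)                   # width of the sample PDF
accuracy_error = statistics.mean(readings) - true_value  # offset from truth

# The PDF is narrow (high precision) yet centred ~2 units from truth:
# collecting more data never shrinks the bias.
print(precision, accuracy_error)
```

The sample PDF here quantifies only the 0.1 of random scatter; the 2.0 of systematic offset is invisible to it, which is exactly why a PDF alone cannot certify accuracy.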

Kevin Kilty: Not only is it badly worded (“could resign”?), the writer fails to comprehend systematic error, or bias, and that a PDF does not quantify such. In fact, if one considers that consistency means that collecting more data must lead to an outcome closer to truth, bias results in a method that lacks consistency.

Yeh. That’s bad. In my experience, reviews almost always included at least a few fundamental errors or misunderstandings of statistics. In my case, they were not the deciding factors in acceptance/rejection.

Here, I think the basic disagreement between Pat Frank and the reviewers is that the reviewers want to treat the value of ±4 W/m^2 as though it is really accurately known from considerations outside the modeling effort itself, and Pat Frank wants to show how much uncertainty in the modeling results is added when it is treated as any other poorly known parameter.

So, you are saying that the reviewers think there is no bias, and thus accuracy == precision. Fair enough. But they haven’t proven any such thing and have no basis for assuming such. I’ll repeat this again and again: even the metrologists measuring fundamental constants have trouble quantifying bias. Fischhoff showed that one of the experimentally determined speed-of-light values, from back when the speed of light was not the definition of the meter, was 42 standard deviations away from the best accepted value of later experiments.

Kevin Kilty: But they haven’t proven any such thing and have no basis for assuming such.

I don’t disagree. But the value is based on reasoning outside the specific climate model. It’s widely cited as the best available working value for the effect of doubling CO2 concentration. Treating it as a random variable adds to prediction uncertainty (Pat Frank’s point, I think) without elucidating sources of model uncertainty (a cost, imho.) If you add enough to the prediction uncertainty, you can’t have (in my opinion) as much confidence that the models are wrong.

If you add enough to the prediction uncertainty, you can’t have (in my opinion) as much confidence that the models are wrong.

Nor could there be any confidence that the models were right. Indeed, with iterative models where the previous ‘prediction’ is the source of data for the next model iteration, not only errors but also uncertainty will propagate. As the system is a coupled non-linear chaotic system, there is no way of telling in which direction or with what magnitude those errors and uncertainties will propagate. Indeed, the models are not very much better than random number generators. Presumably the modelers put in what to them is a reasonableness test for values in the model, which effectively means they have created a random number generator that is only allowed to show results that the modeler thinks are reasonable. Presumably those that meet the requirements of the funding agency.

So, it’s a physical error statistic. And it shows that advanced climate models do not correctly partition thermal energy among the various climate sub-states. I show that cloud error is highly correlated among CMIP5 climate models, which implies that the (+/-)4 W/m^2 reflects a theory-bias.

This error must then enter into every single step of a global climate simulation. I also show that global air temperature projections are just linear extrapolations of GHG forcing (any forcing, really). Hence the linear propagation of cloud forcing error.

The disagreement with climate modelers arises because, first they do not understand error propagation and so reject its diagnosis, and second they don’t understand the difference between a physical error statistic and an energetic perturbation, and so treat the statistic as though it impacts the model expectation values — in this case air temperature. That’s why several of the reviewers supposed that the error bars imply the model itself is oscillating between hot house and ice house conditions.

Ian, my ms has a section discussing exactly the point you raise — the unknown magnitude of the error in a futures simulation, and the entry of prior error into subsequent steps.

It also discusses the difference between growth of error and growth of uncertainty. The latter grows without bound in a futures simulation, but the former does not. As the magnitude of the error is unknown and unknowable in a futures simulation, all one has is uncertainty. And the greater the number of modeling steps, the greater the state of ignorance about the final state.
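The distinction between growth of error and growth of uncertainty can be made concrete with a toy random walk (a sketch, not the manuscript’s actual calculation): the root-sum-square uncertainty envelope grows monotonically as √n, while any single realized error trajectory, which in a futures simulation is unknowable, can wander back toward zero.

```python
import math
import random

random.seed(0)  # reproducible sketch
sigma = 1.0     # per-step uncertainty

envelope = []
realized_error = 0.0
for n in range(1, 101):
    realized_error += random.gauss(0.0, sigma)  # one (unknowable) error trajectory
    envelope.append(sigma * math.sqrt(n))       # RSS uncertainty bound after n steps

# The envelope grows monotonically, without bound; the single realized
# trajectory can cancel and wander back toward zero at any point.
assert all(b > a for a, b in zip(envelope, envelope[1:]))
print(envelope[-1])  # 10.0: after 100 steps of sigma = 1
```

The envelope is a statement of ignorance, not a prediction of the error itself; that is why it is the only quantity available for a simulation of the future.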

Kevin, you hit the nail right on the head. There isn’t one branch of consensus climatology where the practitioners take systematic error into account. Not one. Not the modelers, not the global air temperature people, and certainly not the paleo-temperature reconstructionists.

I use “practitioners” deliberately, because their level of practice doesn’t qualify as science.

I was exposed to propagation of error in my undergraduate classes that included measurement labs, most especially analytical chemistry. It was also emphasized in physical chemistry — all the courses where measurement data was important.

One of the physicists at work recommended Bevington and Robinson to me, when I asked about a good text on error analysis. He said he took a junior-level course organized around that book. That led me to believe that error analysis is a formal part of the usual undergraduate physics major. But I’ve made no survey and could be wrong about that.

As you clearly know, nothing could be more important than understanding and propagating error when evaluating the level of quantitative knowledge yielded by some experiment, observation, or sequential calculation — including use of a model to predict a result.

But as you saw in your chosen example, and probably recognized throughout, climate modelers evidently don’t have a clue. One suspects their education has a huge and scientifically fatal hole.

I’m guessing you’re really frustrated when even truisms are being dismissed as nonsense. The inability to distinguish between precision and accuracy is a telling one. I think in many fields however, uncertainty analysis and estimates of error propagation are conflated – particularly in engineering. But you are right.

On the issue of propagation, it seems like quite a hard thing to get right before the event. I’m guessing the only way to really measure this is to assess sensitivity to starting conditions: run many models with different data sets, even different floating-point precision, in order to see how these affect the model runs. But then they do this anyway when they collate the models from all the groups, or is this not the case? Am I wrong here?

I’ve actually been to a number of talks that presented the results of simulated and measured error propagation; the results showed that simulated propagation (under various assumptions) was actually far greater than measured error propagation (using sensitivity-type approaches). Most of the studies pertained to mechanical failure of rocks.

Thanks, cd, it’s been quite an experience. Let’s see, propagation of physical error is different from sensitivity to starting conditions or simulations with varying parameter sets or floating point precision. These modeling experiments test the variability of the model expectation values. They are a measure of model precision. They don’t say anything about whether the model is physically correct — whether it is accurate.

Evidence of physical accuracy comes with comparison against the relevant observable. Propagation of physical error takes the inaccuracy metric — the known observed physical error made by the model — and extrapolates it forward to produce a reliability estimate of model expectation values. The further out the model is pushed, the greater will be the uncertainty due to propagated physical error. The uncertainty is a measure of how much trust one can put in the expectation value magnitudes predicted by the model.
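A toy numerical contrast of the two measures (the ensemble values and the observation below are invented for illustration): the spread across runs gauges precision, while only comparison with the observable gauges accuracy.

```python
import statistics

# Hypothetical ensemble of model expectation values from runs with varied
# starting conditions, plus one observation of the same quantity.
ensemble = [2.9, 3.1, 3.0, 3.2, 2.8]  # e.g. degrees C
observed = 2.0

spread = statistics.stdev(ensemble)           # precision: run-to-run variability
error = statistics.mean(ensemble) - observed  # accuracy: offset from the observable

# A tight ensemble (small spread) can still sit far from the observation;
# spread alone says nothing about physical correctness.
print(spread, error)
```

Here the ensemble is internally consistent to about 0.16 yet a full degree away from the observation, which is the distinction between precision and accuracy in miniature.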

One could take the outcomes of different starting conditions, or use of different model parameters, and compare them against observations. This would give information about how accurate, or inaccurate, the model might be under those conditions.

“But this blog does not qualify as peer-reviewed literature” (according to the eminent we-don’t-know-what-he-knows warrenlb…). Please check “is ±114′ larger” above. Should that be 114% rather than a minute sign (′)?

RACook, it should have been 114×, i.e., times. The global annual average ±4 W/m^2 CMIP5 cloud forcing error is 114 times larger than the global annual average 0.035 W/m^2 increase in GHG forcing since 1979.

In the real world, all the laws of physics are in effect all the time. The most important reason to attempt to develop a complex computer model is to learn whether you understand the situation well enough to include all relevant physical processes. Only after demonstrating that a model agrees with real-world scenarios is the model useful for predicting any real-world “what if” outcomes. Clearly, Climate Science has failed to learn all the relevant processes.

The people who input the data into these models are not “modelers”, they are data entry clerks.
All they truly know is that they can “adjust” the program inputs to get the answers the managers want, and they get a paycheck every two weeks.

Forget the fact that the models can’t cope with reality, their financial situation can’t accept that concept.

Here is a thought for these modelers: Let’s have them submit their resumes to the top 5 aerospace firms to get a job modeling planes in flight. Do you think they would get hired?

……..The IPCC climate models are further incorrectly structured because they are based on three irrational and false assumptions. First, that CO2 is the main climate driver. Second, that in calculating climate sensitivity, the GHE due to water vapor should be added to that of CO2 as a positive feedback effect. Third, that the GHE of water vapor is always positive. As to the last point, the feedbacks cannot be always positive, otherwise we wouldn’t be here to talk about it.”
The climate models are built without regard to the natural 60-year and, more importantly, 1000-year periodicities so obvious in the temperature record. Their approach is simply a scientific disaster and lacks even average common sense. It is exactly like taking the temperature trend from, say, February to July and projecting it ahead linearly for 20 years or so. They back-tune their models over less than 100 years when the relevant time scale is millennial. This is scientific malfeasance on a grand scale.
In summary, the temperature projections of the IPCC – Met Office models, and all the impact studies which derive from them, have no solid foundation in empirical science, being derived from inherently useless and specifically structurally flawed models. They provide no basis for the discussion of future climate trends and represent an enormous waste of time and money. As a foundation for governmental climate and energy policy their forecasts are already seen to be grossly in error and are therefore worse than useless. A new forecasting paradigm needs to be adopted.
For forecasts of the timing and extent of the coming cooling, based on the natural solar activity cycles (most importantly the millennial cycle) and using the neutron count and 10Be record as the most useful proxies for solar activity, check my blog post linked above.
The most important factor in climate forecasting is where the earth is in regard to the quasi-millennial natural solar activity cycle, which has a period in the 960-1020 year range. For evidence of this cycle see Figs 5-9. From Fig 9 it is obvious that the earth is just approaching, just at, or just past a peak in the millennial cycle. I suggest that, more likely than not, the general trends from 1000-2000 seen in Fig 9 will repeat from 2000-3000, with the depths of the next LIA at about 2650. The best proxy for solar activity is the neutron monitor count and 10Be data. My view, based on the Oulu neutron count (Fig 14), is that the solar activity millennial maximum peaked in Cycle 22, in about 1991. There is a varying lag between the change in solar activity and the change in the different temperature metrics. There is a 12-year delay between the neutron peak and the probable millennial cyclic temperature peak seen in the RSS data in 2003. http://www.woodfortrees.org/plot/rss/from:1980.1/plot/rss/from:1980.1/to:2003.6/trend/plot/rss/from:2003.6/trend
There has been a declining temperature trend since then (usually interpreted as a “pause”). There is likely to be a steepening of the cooling trend in 2017-2018, corresponding to the very important Ap index break below all recent base values in 2005-6 (Fig 13). The polar excursions of the last few winters are harbingers of even more extreme winters to come, more frequently, in the near future.

Excellent Article! Relating model predictions to reality is indeed basic, but not frequent. Even Albert Einstein made this mistake. He suggested actual reality did not exist. He suggested perception, regardless of reference, was equally valid. But nothing can be stated as valid without relating it to a known absolute reality.

And thus were physics and astronomy led astray for decades and decades. Not that some interesting stuff hasn’t been found out since Einstein, but they went off into la-la land a bit too far. It will take some hard-headed thinkers to undo some of the silliness that has transpired.

“Are Climate Modelers Scientists?”
####
Good question. My impression is that they are heavy in mathematics and programming but only with a smattering of understanding of geophysics and radiative physics and utterly deficient in the other natural sciences. The study of climate is a multi-science affair that involves all fields of natural science.

I was educated as a physicist, but most of my career was in engineering. One thing that engineering taught me is that precious few, especially pure “scientists,” effectively understand the difference between accuracy and precision. This is orders of magnitude more true of the general population. And among engineers it is worst in those who came to the field after the introduction of digital displays on everything, including calipers. If you’ve struggled with an old caliper, or even a slide rule, you intuitively understand these things. If your only experience is using a caliper that some idiot has tacked a five-digits-past-the-decimal-point display onto, you’re less likely to understand.

I also did a great deal of computer simulation and modeling, but being as this was engineering not “pure” science, I was always constrained by reality. I wasn’t able to say, well, the data must be wrong the model is right! To do so would have been a quick trip to the unemployment office.

All of which explains why I have so little respect for “climate science” as practiced by the majority of its adherents. Feynman is probably spinning in his grave so fast that, if you could attach a shaft to him, he would power a small city.

As a wannabe physicist who opted out of school but ended up in engineering myself, I can only say, “Amen!” to that.

Engineering will quite rapidly put your feet on solid ground. And any science that doesn’t have its feet on solid ground – WTF is it, anyway? I’ve shaken my head and rolled up my eyes now thousands of times at the silliness that academics/ivory tower dudes can come up with. I have to ask if they have ever done anything practical in their lives – ever had to make something that actually WORKS.

I wondrously got to spend 7+ years in R&D, and THAT allowed me to learn how to assess problems, possible solutions, and to derive REAL empirical experiments to test out what sounded like reasonable explanations. I was the person running the experiments – and sometimes the results confounded my best “reasonable” thinking. THAT is a lesson in humility. People would be amazed at how even in hard-nosed industry that many times “reasonable” ends up being wrong – when the experiments are actually done in the real world. And then extend that to “the frontiers of science” instead of industry – “reasonable ideas” are only starting points. It is reality that tells you what is science and what isn’t.

I will give examples of all of the following concerning climate modelers:

1) They neither respect nor understand the distinction between accuracy and precision.
2) They understand nothing of the meaning or method of propagated error.
3) They think physical error bars mean the model itself is oscillating between the uncertainty extremes. (I kid you not.)
4) They don’t understand the meaning of physical error.
5) They don’t understand the importance of a unique result.

Thanks. I now have oatmeal in my sinuses and all over my computer. This should have been under lock and key until Friday Funny. WUWT, please handle dangerous material responsibly!

Well …

1) At least they are precise about misunderstanding accuracy.
2) They understand much about the meaning and method of propagating error. It’s called “socialism.”
3) There’s a physical error bar right down the street from me. It’s called: Mann. That place is oscillating every Friday and Saturday night.
4) Nor that the unformation is in the errers.
5) Of course they understand the importance of a unique result. Otherwise, why would they have so many of them?

The errors associated with accuracy and precision are lost in the idea that there is such a thing as a global average temperature that changes with time, and in the assumptions that go into the models that are supposed to explain why that global average temperature changes. One false assumption critical to the AGW argument is that anthropogenic emissions of CO2 are the sole cause of the accumulation of CO2 in the atmosphere. I have done statistical mass-balance analyses on different regions of the globe that strongly show this critical assumption to be false. http://www.retiredresearcher.wordpress.com. At the end of this blog I dared to project the confidence limits of these statistical models into the future. However, I did not consider propagation of errors. I would greatly appreciate reviews of this work by anyone; your knowledge of propagation of errors would be especially valuable in such a review. Anyone who wishes to review this study can comment on my blog. You can get there by clicking on my name.

You should write up your work formally and submit it to a journal, fhhaynie. A formal, systematic write-up will help your thinking, because the process will force you to think through every detail. You’ll discover any holes in your analysis. The entire exercise will give you the confidence to proceed to submission, and the knowledge to answer your reviewers.

Did you read it or just scan it? I have spent a lot of time going through the details, and I’m asking anyone to do a critical review and let me know where I missed something. I thought that with your expertise I could get a good review. I haven’t written for journal publication in over twenty years and have no desire to begin again. For the last few years, I have been studying the work of “climate scientists” because I recognized years ago that their models were on the wrong track, and they have gotten worse since. I hope that by openly presenting what I have learned, some minds will be changed and my great-grandchildren will have a better future.

As for journal publication, I don’t recall ever having one of my papers rejected. I have been advised by reviewers and editors to make changes and corrections (which I did). I maintained membership in three societies (not now), served as chair of committees and symposia, did many reviews of papers, and even served on the editorial board of a short-lived journal. You can find some of my published papers by googling Fred H. Haynie. My expertise was in atmospheric corrosion. If someone wants to write my blog post up for journal publication, I will help them.

Honestly, I just scanned it Fred. To do a proper review of your topic, which is outside of my professional field, would take days of effort. These, I presently do not have.

If you’ve written for publication, and have clearly already gone to significant effort, then it seems like a relatively small further effort to take the one last step and write it up formally. Energy and Environment is friendly to well-conceived critical papers about climate.

Like you I’m a veteran of peer-reviewed publication. All my professional manuscripts have been successfully published. It’s just that never have I run into such near uniformly poor-quality reviews as from these two top-flight climate journals. Like you, I’m in the fight for our future.

Thanks for being honest. Some journals have been hijacked, and I don’t consider them to be unbiased scientific journals. Pal review is well documented. As to my needing to formally publish in a journal, I would prefer some qualified scientist, engineer, statistician, or economist (who is twenty or thirty years younger) to take what I have done, improve on it, and get it published. When I retired over 20 years ago, I retired from formal publishing. My wife thinks I spend too much time commenting on blogs as it is. My “honey do” list is about five years long.

I had a similar experience long ago. Six months after final rejection, someone better known in the community published essentially the same result, though with less detail and utility. I do not lay all the blame on my lack of publishing credentials at the time: the paper was not written in a manner to engage the casual reader, or to make immediately clear why it was important and how it differed from what had come before.

But, the thing that rankled so much at the time was some of the reviewers’ comments betrayed a total lack of awareness of some basic subject matter. I didn’t mind getting rejected so much as being rejected for completely wrong reasons.

It is pretty much a given in any profession, I think, that only a small minority really know what they are doing. You’ve got to spoon feed it to them, and clearly state what you are doing and why it is important, or at least different from what has come before. And, it helps to have a long paper trail and a well established reputation in the field.

In climate science, you’ve also got to have… well, maybe it’s just impossible in climate science today. After all, for the reviewers to approve, you’re actually asking them to stand at the window where so many have been defenestrated, even previously well-respected authorities.

I used that when teaching. Rather than an arrow, I used the analogy of a rifle shooter with a good technique and sight set up correctly but a short barrel. The shots get spread over a wide range around the bulls eye so he is still accurate if not precise. The shooter with a longer barrel and a poorly adjusted sight is more precise but inaccurate.

Great visual message, that really captures the essence of the difference. Climate models are presently upper left, with the precision-centric efforts of modelers bringing them inevitably to the upper right. There, they’ll stay, so long as current practice remains in force.
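The shooter analogy above can be put in numbers. Here is a minimal sketch (all values invented for illustration): one simulated shooter is unbiased but scattered (accurate, imprecise), the other is tightly grouped around a biased point (precise, inaccurate).

```python
# Sketch of the rifle analogy: bias = sight misalignment (accuracy),
# spread = barrel scatter (precision). Numbers are purely illustrative.
import random
import statistics

random.seed(1)

def shots(bias, spread, n=10000):
    """Simulate n shot offsets from the bullseye."""
    return [bias + random.gauss(0.0, spread) for _ in range(n)]

accurate_imprecise = shots(bias=0.0, spread=5.0)   # short barrel, good sights
precise_inaccurate = shots(bias=4.0, spread=0.5)   # long barrel, bad sights

for name, s in [("accurate/imprecise", accurate_imprecise),
                ("precise/inaccurate", precise_inaccurate)]:
    print(f"{name}: mean offset {statistics.mean(s):+.2f}, "
          f"spread {statistics.stdev(s):.2f}")
```

The first shooter’s mean offset comes out near zero (accurate) with a wide spread (imprecise); the second is the reverse. A mean over many shots says nothing about which situation you are in unless you also report the spread and the bias.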

3.3.2 In practice, there are many possible sources of uncertainty in a measurement, including:
a) incomplete definition of the measurand;
b) imperfect realization of the definition of the measurand;
c) nonrepresentative sampling — the sample measured may not represent the defined measurand;
d) inadequate knowledge of the effects of environmental conditions on the measurement or imperfect measurement of environmental conditions;
e) personal bias in reading analogue instruments;
f) finite instrument resolution or discrimination threshold;
g) inexact values of measurement standards and reference materials;
h) inexact values of constants and other parameters obtained from external sources.

I’ve experienced somewhat the same in regards to archaeology. Yes, they follow solid methodology when laying out a grid on a dig site, and they carefully log things, giving the impression of accuracy and scientific rigor. But then 99% of what they write about is about qualitative things and about religious ceremonies and the meaning of artifacts – all of which is pulled out of their collective agreed-upon interpretations of the past societies, not on scientific quantification. YES, they can tell you relatively well enough about WHEN something was laid in the dirt – but when it comes to WHAT it means they pull out the interpretations of 1850. For those who say that they use C14 dating and OSL and such, you have to remember that they are sending all those samples off to LABS to do the actual science. It is those labs which do the science, not the archaeologists. In analogy, lawyers and police send off samples for lab testing, but no one would assert that a cop or lawyer is a scientist, merely because he used the OUTPUT from labs.

The next time you hear an archaeologist talk about ritual artifacts or ceremonial plazas or temples, ask yourself exactly what scientific basis there is for using those ideas. What you will find is an accepted paradigm that has been around for a LONG time and which is not allowed to be challenged from within. No matter HOW reasonable the terms “ritual” and “ceremonial” and “temple” SOUND to us as laymen, please be aware that there were many very practical people living in those old societies – carpenters, farmers, metal workers, bricklayers, architects, etc. – and the archaeological view that the societies were run predominantly by priest castes is only one point of view. The portion of people who live close to the land or who work out architectural plans and design buildings that last for thousands of years cannot have been a bunch of mumbo-jumbo. Mumbo-jumbo doesn’t work in OUR practical-minded society, so why should it work in societies in the past?

“The reviewers ignored that literature. The final reviewer dismissed it as mere assertion.”

I have recently run across the views of historians about assertions. If what I’ve found is typical (and I have no reason to think it is not), then basically they have the attitude that, “Everyone has an opinion and all opinions are equally valid.” This is so far from science that it shocked me. They LITERALLY think that in quantified science anyone’s opinion is worth listening to.

I have yet to ask them about 2+2=4, about the acceleration due to gravity, etc. Or if someone who thinks that 2+2=13.5 is worth listening to. BUT I WILL.

To answer the question as put, if a climate model designer believes his results reflect reality then no, he is not a scientist. If the modeler believes his results are data then he is not even a modeler. Written and shall be accepted as a gender and sexual orientation-neutral statement.

Well maybe when they get it to go more than 250 miles, take only 10 minutes to recharge, not require $$$$ of taxpayer subsidy, operate in conditions less intense than outback Australia, and oh yea, look half-way decent, give us a call.

Pat,
Thank You (!) for this tour de force on all of the ways ‘climate modelers’ do not adhere to scientific principles. There is just toooooo much info here to enjoy on my lunch break today. I’ll give it a thorough perusal this evening!

rgbatduke: * The Multi-Model-Mean is an abomination, and all by itself proves utter ignorance about statistics in climate modeling.

Strictly speaking, actual data have never been obtained from processes that satisfy the assumptions of mathematical statistics, including the assumptions that the data have all been sampled from the same defined population and that they have been sampled independently. The multimodel mean is not intrinsically worse in this regard than the mean of 5 realizations of a diffusion process, the mean time lapse of 4 atomic clocks (as was used in an experiment testing a prediction of general relativity), or successive measures on an industrial process.

A model with its parameters and their imprecisions defines a population, the population of possible realizations. If there are variations on the model, then the model, its variations, and all their parameter estimates and precisions define the population. If the imprecisions are represented by normal distributions (as with most other distributions, notably excluding the Cauchy distribution), then that population has a mean and standard deviation, and the mean and standard deviation can be estimated from the sample of realizations computed by randomly sampling and resampling values of parameters from their corresponding imprecision distributions. The point of doing this is that neither the model mean nor the model sampling distribution can be calculated analytically. Without doing the simulations to obtain a good estimate of the sampling distribution, you cannot infer much from the misfit between one or a few of the realizations and the actual temperature (or “mean temperature” trend): any such misfit might be due to a single poorly estimated parameter (and it is highly unlikely that all of them will have been estimated with the requisite accuracy). When, as now, almost all of the realizations are above the mean temperature trajectory, and the CI on the mean model trajectory clearly excludes the data, then you can have a lot of confidence that the population defined by the model does not include the “true” model.
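The resampling procedure described above can be sketched in a few lines. The “model” and the parameter imprecisions here are invented toys, not any actual GCM: parameters are drawn from their imprecision distributions, one realization is computed per draw, and the population mean and spread are estimated from the sample.

```python
# Minimal sketch: a model plus parameter imprecisions defines a population
# of realizations; estimate its mean and sd by Monte Carlo. All numbers
# and the model itself are illustrative only.
import random
import statistics

random.seed(0)

def toy_model(trend, wiggle, years=50):
    """A stand-in 'model': linear warming plus parameter-dependent curvature."""
    return [trend * t + wiggle * t * t for t in range(years)]

# Draw parameters from their (hypothetical) imprecision distributions.
draws = [toy_model(random.gauss(0.02, 0.005), random.gauss(0.0, 1e-4))
         for _ in range(2000)]

# Estimate the population mean and spread at the final year.
final = [r[-1] for r in draws]
mean = statistics.mean(final)
sd = statistics.stdev(final)
print(f"year-49 ensemble mean {mean:.3f}, sd {sd:.3f}, "
      f"95% interval ~[{mean - 2*sd:.3f}, {mean + 2*sd:.3f}]")
```

Note how the spread of the population is dominated by whichever parameter imprecision the model amplifies most at a given time, which is exactly why one or two realizations alone tell you little about the sampling distribution.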

The argument between Pat Frank and the reviewers seems to be that the reviewers are not interested in the results that come from sampling as well from the distribution of the “forcing” change of 4 W/m^2. In effect, they are regarding the value of 4 W/m^2 as reliable as any of the famous physical constants such as the Boltzmann constant or universal gas constant. Or it is the only value of the change in forcing that they want to consider for a while.

The modelers could consider varying the change in forcing in a long series of simulations/samples from the model population (4, 3.5, 3, 2.5, etc.) until they got some means consistently below the mean temperature. They could pick the value (my guess now is that it would be close to 1.5) that produced the ensemble mean closest to the observed trajectory; assuming the model to be correct, as a bunch of modelers do, that would then be the best estimate of the actual forcing change produced by the CO2 change. It’s equivalent to estimating the TCS for the model from the temperature trend.

“A model with its parameters and their imprecisions defines a population, the population of possible realizations. If there are variations on the model, then the model, its variations, and all their parameter estimates and precisions define the population.”

The contingency resolved by running a model is not the same as that resolved by a physical experiment. The only thing a model can teach us (i.e. its information gain) is about the model itself! All models are question beggars and remain so until the questions are answered by observing REAL data.

Jeff Patterson: The only thing a model can teach us (i.e. its information gain) is about the model itself! All model are question beggars and remain so until the questions are answered, by observing REAL data.

No disagreement here: that is why model outputs are always compared to data.

“…The multimodel mean is not intrinsically worse in this regard than the mean of 5 realizations of a diffusion process, the mean time lapse of 4 atomic clocks (as was used in an experiment testing a prediction of general relativity), or successive measures on an industrial process….”

The mean of the sample of realizations is an unbiased estimate of the mean of the population of possible realizations of the model. The population is well defined when the model is defined and the uncertainties of the parameter estimates are specified. What part of that do you think is in error? Are you asserting that the sample of atomic clock readings was a random sample of the possible atomic clock readings that might have been taken and weren’t? The population of possible model realizations is better defined than the population of possible atomic clock readings.

…Global warming (“good,” no more cold?)… Pouring greenhouse-effect pollutants into the air is melting the Poles. Besides, melting permafrost will free millions of tonnes of methane, a gas with a big greenhouse effect, into the atmosphere. This large amount of fresh water entering the ocean could stop the vertical deep-sea currents, which depend on a delicate balance, from the surface downwards, between fresh and salty water and their temperatures. Heat from the Sun reaches the equator and currents distribute it throughout the planet; then… goodbye to our warm climate. The horizontal oceanic currents, produced by winds and some by the rotation of the Earth, all turning with the Coriolis effect, will continue… but the vertical currents would stop: the ones produced by the sinking of horizontal currents of dense salty water that reach the Poles, where the water is fresher and less salty. (Why are the Grand Banks fisheries in cold latitudes? Because the polar ice is there: fresh water, a fresh/salty density difference, dense salty water arriving and sinking in a less salty environment, nutrients stirred up from the bottom to the surface, phytoplankton that feed on the nutrients, zooplankton that feed on the phytoplankton, fish that feed on the zooplankton.) With no polar ice there will be no vertical currents, which could reduce the rise of nutrients to the surface; therefore a shortage of phytoplankton may decrease its vital contribution of oxygen to the atmosphere (90%). In some warmer latitudes, winds carry off the hot surface water, permitting the upwelling of water and plankton from the cold bottom current coming from the Pole, forming other fishing banks. Without polar ice the sea could become almost stratified into horizontal layers, with little energetic vertical movement of the water masses, which is what stirs fertilizing nutrients up from the sea bottom. Besides, the lower salinity of the sea, from that contribution of fresh water as the Poles melt, will increase evaporation (ebullioscopy: the less salt water has, the more it evaporates), producing gigantic storm clouds such as we have never seen, which, together with the altered ocean currents, could cool areas of the planet and cause a new ice age. With warming, an invasion of tropical diseases carried by their transfer agents, no longer contained by the “general winter,” would fall upon the world like a plague. So it can produce a cooling, a new ice age, like living at the North Pole, and less oxygen in the atmosphere. It is not known which is worse. Go choosing.

The fundamental law of information theory shows definitively that computer models by themselves are incapable of increasing our knowledge about physical reality beyond that already known to those who programmed the model. Information gain can only occur when a range of possible outcomes (contingency) is resolved by observing the true outcome. Models can create contingency (i.e. a hypothesis) but cannot by themselves resolve it. Climateers seem to think they’ve found a loophole: just run more simulations (which they laughingly call experiments) and average the results! They really are an ignorant lot. They reject the very notion of accuracy, because they claim the models make no predictions (which could be tested against reality), only projections, which can never be falsified. Good work if you can get it.

Jeff Patterson: The fundamental law of information theory show definitively that computer models by themselves are incapable of increasing our knowledge about physical reality beyond that already known to those who programmed the model.

I think that you overstate the case. Much depends on how thoroughly the models have been tested and how accurate they have been shown to be. Three examples:

1. A lab tech runs a sample of your blood through a measuring device and reports your measured cholesterol as 327 or something. That number is the output of a model, computed from the area under a curve put out by the device and an equation relating such areas to actual (accurately but not perfectly known) test concentrations. Have you acquired information about your cholesterol level? What you wrote says no.

2. Above the Earth surface are interplanetary probes and orbiting satellites, whose locations 3 months from now have been computed from models, in this case really well-tested and accurate models. Do those calculations provide information about their actual future positions, information that might be used in a course correction for example? What you wrote says no.

3. Your GPS device says that you are 3.2 miles away from an unmarked street where you have to turn left to get to the next gas station. The message has been computed from a model, again a well-tested and accurate model. Is that information about where the next gas station is? What you wrote says no.

The point of making a model is to use it in a situation where the calculated output tells you something about the reality that you could not tell without the model.

Why not add the example of calculating the length of the hypotenuse of a right triangle from measurements of its sides?
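That hypotenuse example actually works nicely as a minimal worked case of propagated error: for c = sqrt(a^2 + b^2), the first-order propagated uncertainty is u_c^2 = (a/c * u_a)^2 + (b/c * u_b)^2. A sketch, with illustrative measurement values:

```python
# First-order (Taylor-series) error propagation through c = sqrt(a^2 + b^2).
# The partial derivatives are dc/da = a/c and dc/db = b/c.
# Side lengths and uncertainties are illustrative values only.
import math

a, u_a = 3.0, 0.1   # measured side and its uncertainty
b, u_b = 4.0, 0.1

c = math.hypot(a, b)
u_c = math.sqrt((a / c * u_a) ** 2 + (b / c * u_b) ** 2)
print(f"c = {c:.3f} +/- {u_c:.3f}")   # prints: c = 5.000 +/- 0.100
```

The measured hypotenuse carries an uncertainty inherited from both sides; the same logic, iterated step after step, is what propagation of error through a projection amounts to.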

Simple counts as per example 1 or simple geometry with known, well defined inputs as example 2 and 3 are horrendously simplistic views of what climate models are poorly attempting. Not to mention that these examples have been proven reliable, while climate models have repeatedly been shown as unreliable.

I think you’re going down the semantic rabbit hole. Those things that you are calling “models”, most of us would call “filters”. When the climate bunch use “models”, they don’t process data, they create it. It’s one thing for Hansen to use a “statistical model” to adjust data sets. It’s something entirely different to use something that purports to be physics to “simulate” a physical thing. That’s what most people mean by “modeling”.

I think we need to stop calling statistical models “models”, and start calling them “filters” to eliminate the confusion.

You are confusing the transfer of information with the creation of information. A calculation, any calculation no matter how complex can only transfer information. 2+2=4 whether you were aware of that fact or not. When you do the calculation by hand or by computer, _you_ may learn something but knowledge (the sum total of what is known (or thought) to be true, not by you necessarily, but by “us” corporately) is not increased. A model simulation is just a series of calculations, each of which can only transfer information. The die was cast, the outcome certain, before you pressed run.

To your examples:
1) Data analysis of physical data is not a simulation but here again, knowledge is not increased. The knowledge (about how to transfer the information contained in a blood sample to some human readable form) is encoded in the algorithm. If the algorithm has been tested and found to give reliable results AS COMPARED TO SOME INDEPENDENT REFERENCE, the test may prove useful as a time effective (or cost effective) substitute for the reference test.
2) The equations for planetary motion are well known. Doing the calculation can not result in information gain.
3) Your GPS device has transferred static information (the street map) to you and calculated your distance from a specified point. If you are surprised by the result (a necessary condition for information gain) it is only because _you_ didn’t have the information. The information was in existence before the GPS gave it to you, and it certainly did not create it :>)

Matthew R Marler February 25, 2015 at 12:20 pm
Jeff Patterson: 2) The equations for planetary motion are well known. Doing the calculation can not result in information gain.

Really? You can now calculate the proper thrusts for the course correction, whereas you could not do so before the calculation. Or is the knowledge of the proper thrusts not “information”?

Your answer is in the circularity of your reply: “You can now calculate the proper thrusts for the course correction, whereas you could not do so before the calculation.” You could always do the course-correction calculation, before or after; perhaps not fast enough to do you any good, but the speed at which you arrive at the answer has nothing whatsoever to do with information gain. Knowledge can only come from surprise, and arithmetic should never surprise us.

In defense of climate modelers, these concepts are difficult to understand for most people. Math people have no problem but word people struggle. I suspect that people who reviewed Pat’s manuscript went through two filters. First, people who got into climate science did so because they thought they could get by without as much math as the other sciences. Second, it is the climate scientists who lean more towards words versus math that end up reviewing manuscripts for journals. My opinion anyway.

There is a lot of truth to that, I could see it myself back in my college days. Sciency people who could not do math stayed away from physics and chemistry and gravitated to things like biology, geography, geology, climatology, and other science fields that have a lot more qualitative aspects. However, at the end of the day, if you’re gonna start doing things like modelling, then you need the proper math and the statistics with the proper treatment of errors, as this article so clearly shows.

In answer to your question, “Are Climate Modelers Scientists?” the answer is No. They are just highly skilled technicians assigned to a specific task that is supposed to verify the prejudices of their superiors. Well paying job if you don’t contradict your supervisor.

“To say that this error indicates that temperatures could hugely cool in response to CO2 shows that their model is unphysical.”

Oh, my. Well, this one has been conditioned to respond to “models can only be hotter.” That is how he gets access to the food bowl. This person thinks he is looking at a climate model, rather than at the uncertainty input to the model propagated through to the result.

From the comments of the reviewers it looks like the climate research field continues to grow more incestuous all the time. They refuse to look outside their own area for any relevant knowledge and apparently prefer to reinvent the wheel, jumping into the mud puddle with both feet more often than arriving at the desired ‘new and innovative solution’. They have done this consistently in everything from statistics to control theory, and now, as you have pointed out, computer modeling theory. They have made some pretty egregious errors in each, and, when these are pointed out, refuse to even admit such.

It’s pretty obvious why your paper has been rejected. If correct (it certainly appears so to me) it will nullify something like 80% to 90% of the climate papers published in the last 10 or 15 years. At least that many have been based either partially or entirely on results from the current crop of GCMs.

Please keep trying on the paper. It should be required reading for all scientists and engineers who currently accept the GCMs as proof or prophets of global warming.

Pat,
As a last resort you could always try the Chinese Science Bulletin. That’s the journal that published ‘Why models run hot: results from an irreducibly simple climate model’ for Monckton, Soon, & Legates. They apparently don’t have the CAGW baggage that infects most of the U.S. journals.

I agree with the Reviewers’ comments. This paper should not see the light of day in a reputable science journal. It’s unfortunate that the Author attributes his rejection to ‘bias’ instead of to the paper’s own failings, which are considerable.

Why? Why are you qualified to make that decision? “Your” opinion does not matter, because, according to your answers, YOU have no ability to make a judgement. Now, “you” can repeat somebody else’s judgement, and have over 300 times here!, but then again, that particular trait also means you have no business voting.

warrenlb: A hit-and-run troll comment. Evidence: nothing to back up the “which are considerable” remark. The failings of the climate models are far more considerable, and will one day be written about in science history as the following paragraph to Ptolemaic geocentrism and the Vatican circa 1615 AD.

I can’t claim I understand it all 100%, but if I understand it correctly, running a similar analysis on our current, relatively reliable weather models would lead to predictions such as “in three days the temperature will be thirty, plus or minus fifty degrees.” Any model of a chaotic system is going to propagate its uncertainty over the whole phase space very fast; that’s not the butterfly effect, that’s errors piling up in the most uncomfortable way over extended periods of time. But that does not tell us much about the accuracy of the model.

Referring to panel b of the first figure, it seems to me the error bounds presented there are highly unrealistic. I really don’t think falling into an ice age within 50 years is physically possible without a global nuclear war or a large asteroid hitting Earth.
At the point where the blue area starts in the figure, the climate model has already been running for ~150 years. Hitting the target result is a test of the accuracy of the model using known forcings. When switching from past to future, of course, known forcings are replaced by estimates. But assuming maximum deviation in these estimates from the very beginning does not sound like a realistic assumption to me.

Kasuha, your understanding needs a little work. Currently, weather modelling in SE Australia is 70% accurate at 7 days out from the date of forecasting. I find that impressive. Further out, the forecasting quickly loses accuracy; closer in is much better, though not by much, due to chaotic effects (mountain waves, for example).

I find that the best way of viewing this is to think of it as a picture on a computer screen, say 1024 x 768 pixels. Let’s imagine that you are looking at a picture of the pope waving at you from his balcony. Zoom back far enough and the pope and his balcony are now described by 4 pixels instead of 786,432 pixels. Now you can’t tell whether it’s the pope or Catherine the Great doing something unspeakable with a donkey.

Kasuha, “Referring to the b pane of the first image, it seems to me the error bounds presented there are highly unrealistic. I really don’t think falling into ice age within 50 years is physically possible without global nuclear war or large asteroid hitting Earth.”

You’ve made the same mistake made by virtually all the modelers, Kasuha. You suppose that the propagated error bars indicate the behavior of the model itself. They do not.

Look closely at panel b. Do you see the lines? Those lines show the behavior of the model: discrete expectation values. Discrete, by the way, does not mean physically unique.

The uncertainty envelope indicates how much information those expectation values have, as regards the future climate. In these particular cases, that information is: none. Uncertainty bars indicate an ignorance width. They do not indicate the behavior of the model.

The blue area shows propagated error as though the simulation started at the year 2000. When I calculated the uncertainty starting at 1850, the uncertainty envelope was already huge by the year 2000. So, I decided to be merciful and include only the uncertainty in the future portion of the simulation. Putting in the larger uncertainty envelope would only lead to the necessity of long explanations.
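For readers wondering how a fixed per-step error produces an envelope that keeps widening even while the model trajectory itself stays smooth: if each annual step contributes an (assumed independent) uncertainty, the totals combine in root-sum-square and grow as the square root of the number of steps. This is a generic sketch of that mechanism only; the per-year value is hypothetical, not a figure from the paper under discussion.

```python
# Root-sum-square growth of a compounded per-step uncertainty.
# u_step is a purely illustrative per-year temperature uncertainty.
import math

u_step = 0.4          # hypothetical per-year uncertainty, K
for n in (1, 10, 50, 100):
    u_total = u_step * math.sqrt(n)   # RSS accumulation over n annual steps
    print(f"after {n:3d} years: +/-{u_total:.2f} K")
```

With these illustrative numbers the envelope reaches about ±4 K after 100 years. The widening band is an ignorance width, not a prediction that the model output itself swings between those extremes.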

If you want to get something published — respond clearly and concisely to reviewer comments. If reviewers make mistakes (certainly not uncommon), then explain in clear terms and give simple examples to make your point. Also, do not use an indignant tone and insult them.

I had the experience of having a paper rejected because of one reviewer. He said (it was obvious who it was) that I hadn’t considered something. I let the editor know that the paper contained a whole section, with a title, discussing it. She gave it to a third reviewer, who wrote “I refer to my previous comments”; those were the comments of the original reviewer.

warrenlb – It was hard to write it politely but I refrained from using the f word when I pointed out to the editor that there was a section discussing it with a title, along with many other inane objections. I did write that the reviewer had clearly dismissed the paper before even reading through it.

I did get it published in a minor journal, but the same person spread lies about it being mathematically incompetent. It probably could have been done better, but the only actual fault I found was a subscript that was an i instead of a j and, knowing him, that was enough to slam the paper with a straight face.

My wife was telling me of one of her colleagues who had a manuscript reviewed for publication, made all of the required changes … and then it was summarily rejected by the “journal”. To say this researcher is unhappy is an understatement.

I did not read all of the comments, so I suspect this has been stated. But, for what it’s worth, I feel your pain. I do uncertainty work in experimental aerodynamics and continue to struggle to get similar work out of my CFD brethren. They demand my uncertainties so that they can prove that their answer is within my error bars, but don’t realize that they also have error bars.

In any case, I suspect one of the problems is how uncertainty propagation is now being accomplished. In the good ol’ days, we did a Taylor series expansion by hand. It meant you had to do a lot of hard math and you could easily miss cross-correlated factors. But, it also meant that you could (relatively) easily find the sensitivities of the overall uncertainty to individual uncertainties (in an experimental setting, this tells you which device it’s worth buying better versions of and which ones are just not going to bring you much benefit).

With the cheapness of computing power, everyone is moving to Monte Carlo methods. To find those individual sensitivities, you have to dither (as, I assume, you did with the +/-4W/m^2) a single component, then stop and do the same with another component.
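The difference between the old Taylor-series route and Monte Carlo dithering can be sketched in a few lines. This is a generic toy (the function, nominal values, and uncertainties are all invented; nothing here comes from a GCM), but it shows why the Taylor route hands you the per-input sensitivity budget essentially for free:

```python
import numpy as np

# Toy model y = f(a, b) with two uncertain inputs.
# Everything here is illustrative; nothing comes from a climate model.
def f(a, b):
    return a**2 * np.sin(b)

a0, b0 = 2.0, 0.5          # nominal inputs
sig_a, sig_b = 0.1, 0.05   # 1-sigma input uncertainties

# --- Taylor-series (first-order) propagation ---
# sigma_y^2 ~= (df/da)^2 sig_a^2 + (df/db)^2 sig_b^2
h = 1e-6
dfda = (f(a0 + h, b0) - f(a0 - h, b0)) / (2 * h)
dfdb = (f(a0, b0 + h) - f(a0, b0 - h)) / (2 * h)
sigma_taylor = np.hypot(dfda * sig_a, dfdb * sig_b)

# The squared terms are the sensitivity budget: the fraction of output
# variance contributed by each input, with no extra model runs.
share_a = (dfda * sig_a) ** 2 / sigma_taylor**2

# --- Monte Carlo propagation of the same input uncertainties ---
rng = np.random.default_rng(0)
samples = f(rng.normal(a0, sig_a, 100_000), rng.normal(b0, sig_b, 100_000))
sigma_mc = samples.std()

print(sigma_taylor, sigma_mc, share_a)
```

For a function this close to linear over the input range, the two estimates agree; the Taylor route additionally tells you which input dominates the variance, which, in an experimental setting, is the “which device is worth upgrading” information.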

Simultaneously, many of the models use Monte Carlo methods to deal with some other types of uncertainty than the ones we are talking about here (if there’s a volcano, how many hurricanes there are, and whatnot). This gives a range of values that it could output, given a single input.

To come to your uncertainty, given current methods, you are asking them to do a Monte Carlo simulation on the results of all of the Monte Carlo simulations (since you can’t just do it on the high and low lines, because the sensitivities may vary in other ways). They probably just think “I already did a Monte Carlo simulation, therefore, I have my uncertainty.”

But this is a case that is vastly different. You’re hitting the uncertainty of the uncertainty. Because I find that very different examples sometimes help illustrate something, consider an aircraft kill-chain. If a missile has a Probability of kill (Pk) of 0.9, we can do a Monte Carlo simulation on the engagement between an airplane and an attacker firing some number of those missiles. But, to add to the difficulty, we recognize that we don’t really know the Pk. It’s 0.9 +0.05/-0.15. Which means that the whole thing has a wide band on top of the wide band. The lazy way would be to just do a simulation with a Pk of 0.75 and one with Pk of 0.95, which at least gets you closer. But you still have to do a Monte Carlo of both.
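A minimal sketch of that double-layered calculation, using the Pk numbers quoted above. Treating the +0.05/-0.15 band as a triangular spread over [0.75, 0.95] with mode 0.90, and firing a four-missile salvo, are my assumptions for illustration:

```python
import numpy as np

# Nested Monte Carlo for the kill-chain example: Pk itself is uncertain
# (outer loop), and each engagement is random (inner loop). The triangular
# spread over [0.75, 0.95] with mode 0.90 is one reading of the
# 0.9 +0.05/-0.15 band; the four-missile salvo is also an assumption.
rng = np.random.default_rng(1)
n_missiles = 4

survival_rates = []
for _ in range(500):                        # outer loop: sample uncertain Pk
    pk = rng.triangular(0.75, 0.90, 0.95)
    hits = rng.random((10_000, n_missiles)) < pk   # inner loop: engagements
    survived = ~hits.any(axis=1)            # target survives only if all miss
    survival_rates.append(survived.mean())

survival_rates = np.array(survival_rates)
# The inner loops give the per-engagement spread; the outer loop gives
# the "band on top of the band": how the answer moves as Pk moves.
print(survival_rates.mean(), survival_rates.min(), survival_rates.max())
```

The spread across the outer loop is exactly the uncertainty-of-the-uncertainty that a single Monte Carlo run at Pk = 0.9 would never show.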

On the side of the modelers (for the sake of fairness), I would admit that the uncertainty community really has caused some confusion in its effort to reduce confusion. Eliminating the term “bias” and so forth has just confused some folks and they don’t know what is being referred to anymore.

Also, it’s always nice to remind people that no one knows the value of the systematic error!

Just skimming, I spotted what looks like a serious mistake at the core of the matter: Standard error-propagation methods are designed to help estimate the effect of a change in inputs on an output. However, the less linear the system, the worse the estimate. The climate is a system of such complex non-linearity that you cannot practically translate from results back to spaces of possible inputs. This is why standard error-propagation doesn’t work for strongly non-linear systems. More importantly, perhaps, when the sixth reviewer shifted the inputs, he actually looked directly at what the error-propagation was supposed to estimate. Using error-propagation the way it is done here shows precisely the same mistake that seems to appear in a lot of climate models, a false assumption of linearity, starting from some conditions in a system that is physically strongly non-linear and numerically chaotic.

The numerical stability is also important, but describes something other than accuracy or even precision themselves. Physically, a small change in inputs should not cause a dramatic change in outputs. However, there are approximations in the models for computational reasons which can introduce chaos. The most important of these simplifications is that the air is treated as a fluid and not an enormous collection of individual particles. The numerical stability, using the same model with slightly different initial conditions, is a measure of the impact of these approximations. That is not to say the system is non-chaotic: Hydrodynamics are always chaotic, so there are always some small changes, a few degrees in some lakes, or something like that, which will send temperatures off to crazytown, but it appears as though modelers have not stumbled onto them overwhelmingly often. However, looking at another model-output, like the (non-physical) linear sensitivity to a specific forcing, any given model could be suffering from chaos.
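The sensitivity-to-initial-conditions point is cheap to demonstrate with a standard toy, the logistic map. This is emphatically not a climate model, just an illustration of how fast a chaotic iteration forgets its starting point:

```python
import numpy as np

# Two logistic-map trajectories started one part in a million apart.
# The logistic map at r = 3.9 is a standard chaotic toy.
def logistic(x0, r=3.9, n=50):
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return np.array(xs)

a = logistic(0.400000)
b = logistic(0.400001)        # initial condition nudged by 1e-6

# After a few dozen steps the separation saturates at order one: the
# iteration has completely forgotten how close the two starts were.
print(np.abs(a - b)[-10:])
```

This is the sense in which “slightly different initial conditions” probes numerical stability rather than accuracy: the divergence says nothing about which trajectory, if either, is right.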

I really would like to see a proper analysis of physical sensitivities, but that would demand a rerun of the models many times for each parameter to map out the changes in sensitivity. Then I would want to see combinations of perturbations, requiring at absolute minimum 2^N runs of central models where N is the number of physical parameters being checked. While it may be a better use of computing resources than continued predictions of unknown reliability, the resources to do it right may not exist today.

I really would like to see a proper analysis of physical sensitivities, but that would demand a rerun of the models many times for each parameter to map out the changes in sensitivity.

Ayep. This is how you do a real test of errors in simulations: you look at how the output varies with changes in the inputs. The reviewers were right in that regard.

I find it kinda odd that almost no one here is actually discussing Pat’s work. Looking at his AGU poster from last year, we see that his calculated uncertainty grows with the square root of time, which means that in about 10,000 years his model of the uncertainty is supposed to include temperatures below absolute zero.

If your work includes the possibility of temperatures going below absolute zero, then it’s certainly wrong. No matter how low we set the cloud forcing in the models, they’re still not going to go below absolute zero, which is a pretty strong hint that there’s something wrong with Pat’s work. It’s “not even wrong”.

What do they represent, if not the uncertainty in the temperature anomaly in the models? It’s the right axis on the main chart. C’mon, there’s no need to be cryptic.

A statement like yours leads me to think that you don’t know the difference between a physical error statistic and an energetic perturbation.

In models, a physics perturbation is the right way to account for uncertainties in the underlying (physical) parameters. Perturb the physical parameter(s) across the true range of uncertainty, and analyze the resulting change.

Of course this is distinct from the overall subject of “error statistics”: in models, there are errors / uncertainties in both the inputs and outputs. This is necessarily so; all models are approximations.

There wasn’t a review comment in the OP that didn’t have me nodding along and saying “yep”. They pretty much nailed it: error propagation in models is best handled via physics perturbation.

I’d be interested in reading your paper, if you have it up on Arxiv or somesuch.

windchasers, uncertainty in temperature is not physical temperature. The uncertainty in temperature is a measure of how confident one can be in the accuracy of the temperature expectation value of the model.

The ordinate axis of the plots refers to the temperatures calculated from the models. The uncertainty envelopes refer to the reliability of those temperatures. The graphic is a standard way of representing a sequence of calculational results and their uncertainty.

In the case of the CCSM4 model, as in all CMIP5-level models, the propagated uncertainty says that no confidence can be put in the simulated temperatures. They convey no information about the possible magnitude of future air temperatures.

This is not being “cryptic.” It is the direct and unadorned meaning of physical uncertainty.

You wrote, “In models, a physics perturbation is the right way to account for uncertainties in the underlying (physical) parameters. Perturb the physical parameter(s) across the true range of uncertainty, and analyze the resulting change.”

All that tells one is how the model behaves, e.g., see the fourth figure in the head-post. That exercise reveals nothing about whether the model produces physically accurate predictions.

You wrote, “There wasn’t a review comment in the OP that didn’t have me nodding along and saying “yep”. They pretty much nailed it: error propagation in models is best handled via physics perturbation.”

Too bad. Physics perturbation has nothing to do with propagation of physical error. See Bevington and Robinson.

Agreement with those reviewer comments amounts to an admission that one understands nothing about physical error analysis, or about the meaning or method of propagation of physical error.

Still, the reason we use physics perturbation as the way to do error propagation is the highly non-linear effects in many models. If your model is linear, then sure, a direct propagation using either analytical equations or Monte Carlo is fine.

Climate models are not linear, though, so errors in cloud forcings will have other feedback effects, whether positive or negative. The linear approach is fine for a rough guess, but it’s just a starting place.

The uncertainty envelopes refer to the reliability of those temperatures. … This is not being “cryptic.” It is the direct and unadorned meaning of physical uncertainty.

Yet, it’s plainly wrong. It doesn’t pass a sniff test: if we actually inserted this range of cloud forcing into the GCMs, will we ever get model temperatures that are below absolute zero or hotter than the Sun? No.

Most people use sniff tests to figure out whether what they’re doing makes sense. You claim that what you’re doing shows the effect of cloud forcing uncertainty in the models, but if those results are plainly not really representative of what would happen in the models, then there’s a disconnect. Your model-of-the-models is plainly off.

Physics perturbation is not physical error propagation. It does not follow the mathematical form of physical error propagation. It does not propagate error at all. Physics perturbation merely shows the limits of model variability, given a range of parameter uncertainty. That is merely an exploration of model precision, because there is no way to tell which projection, if any, is physically more correct.

If you were following an error analysis protocol standard in physics, every single one of your perturbed physics projections would have its own uncertainty envelope derived from propagated error. The propagated error would, in each and every case, include the physical uncertainty widths of your parameter set. Uncertainty would grow as the step-wise root-sum-square of error through every step. The final uncertainty of the ensemble mean would be the root-mean-square of the uncertainties of the individual realizations.
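For what it’s worth, the arithmetic of that protocol fits in a few lines. The per-step uncertainty and the per-realization final uncertainties below are invented numbers, purely to show the mechanics:

```python
import numpy as np

# Sketch of the protocol: a per-step uncertainty compounds as the
# step-wise root-sum-square, and the ensemble-mean uncertainty is the
# RMS over the individual realizations. All numbers are invented.
u_step = 0.1                  # illustrative per-step uncertainty
n_steps = 100

# u_n = sqrt(u_1^2 + ... + u_n^2); a constant u_step gives sqrt(n) growth
u_cum = np.sqrt(np.cumsum(np.full(n_steps, u_step**2)))

# Final uncertainties of three hypothetical realizations, and their RMS
u_members = np.array([1.0, 1.2, 0.8])
u_ensemble = np.sqrt(np.mean(u_members**2))

print(u_cum[-1], u_ensemble)
```

Note the square-root-of-steps growth falls directly out of the root-sum-square rule whenever the per-step error does not shrink with time.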

That is the ruthless self-analysis employed within all of physics and chemistry. It is a tough standard and is singularly neglected in climate modeling. Your professional modeler progenitors have set up the system to be very easy on themselves. And your professional perception has suffered for it.

Whatever you think about climate model non-linearity, those same models linearly project air temperature. That is fully demonstrated, both in the ms and in the poster. The rest follows.

You wrote, “Yet, it’s plainly wrong. It doesn’t pass a sniff test: if we actually inserted this range of cloud forcing into the GCMs, will we ever get model temperatures that are below absolute zero or hotter than the Sun? No.”

That statement merely shows that you, like the climate modeler reviewers, have no concept whatever about the meaning of physical error.

Figure it out: physical error statistics are not energetic perturbations. They do not impact model expectation values. Repeat those sentences until you grasp their meaning, Windchaser, because your argument about “model temperatures” is complete and utter nonsense.

Look at panel b of the first figure: discrete model realizations embedded within error envelopes. Error was propagated there. Nevertheless, do you see any ‘absolute zero or sun surface temperatures’ anywhere within?

WC, this might help — sure, go ahead and assume the probability of zero or Sun-surface temperatures is arbitrarily low. But the accuracy of a prediction about a totally reasonable temperature can also be arbitrarily low. Imagine monkeys throwing darts at a board with temperatures — it doesn’t matter how reasonable the temperatures on the board are, you don’t have a robust predictive model.

And that’s why we have hilarities like Lamb predicting a “definite downhill trend for the next century.”

Stephan, standard error propagation is in fact not “designed to help estimate the effect of a change in inputs on an output.” It is designed to estimate the reliability of a final result, given the impact of calculational and/or measurement errors in the calculational terms. One finds that in any text on physical error analysis.

You wrote, “However, the less linear the system, the worse the estimate. The climate is a system …”

However, the analysis is about climate models, not about the climate. Climate models project air temperature as a linear extrapolation of GHG forcing. That point is demonstrated in the poster linked in the head-post, and is thoroughly demonstrated to be true in the manuscript. All that business about non-linearity is irrelevant.

So, when you wrote, “the same mistake that seems to appear in a lot of climate models, a false assumption of linearity…” you in fact made the mistake; one of supposing an assumption where there is instead a demonstration.

Climate model air temperature output is demonstrated to be linear. That makes it vulnerable to a linear propagation of error.
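To make the arithmetic concrete, here is a toy linear emulator, not the PWM itself. The sensitivity, the forcing ramp, and the assumption that the full ±4 W m⁻² enters every annual step are all invented for illustration:

```python
import numpy as np

# Toy linear emulator: if the projected anomaly is linear in forcing,
# T_i = k * F_i, a per-step forcing error u_F maps to a per-step
# temperature error k * u_F, compounding in quadrature over the run.
# The value of k, the forcing path, and the assumption that the full
# error enters each annual step are all illustrative choices.
k = 0.4                          # K per W m^-2, invented sensitivity
u_F = 4.0                        # W m^-2, the cloud-forcing error above
years = np.arange(2000, 2101)
F = 0.04 * (years - 2000)        # made-up, slowly ramping forcing

T = k * F                                              # projected anomaly
u_T = k * u_F * np.sqrt(np.arange(1, years.size + 1))  # step-wise RSS envelope

# The envelope does not predict what the model will output; it states
# how little confidence the single projected line deserves.
print(T[-1], u_T[-1])
```

Under these made-up numbers the envelope dwarfs the projected anomaly within a few steps, which is the qualitative point of panel b.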

Your discussion misses the point of my analysis, entirely.

You wrote, “I really would like to see a proper analysis of physical sensitivities, but that would demand a rerun of the models many times for each parameter to map out the changes in sensitivity.”

That exercise would tell one nothing about model physical accuracy. The effort you propose just measures model precision and is the wrong approach to learning how to model the climate. There needs to be a close collaboration with climate physicists, and a detailed interplay between prediction from theory and observation.

This needs to be done in a reductionist program, one that investigates smaller scale climate physical processes. Eventually small scale knowledge expands and can be collated into larger theoretical constructs. A global model would be the final outcome. It should never have been the first effort.

Global climate models are thoroughly premature. They have been leveraged into acceptance on the back of the pretty pictures available from realistic-like numerical constructs.

Visual realism is seductive, and convincing to the impressionable, the careless, and the fatuous-minded. But realism is not what science is about. It’s about physical reality — a much, much more difficult enterprise.

It is normal to be skeptical of one who criticizes a review board who rejects their work. Except in this case the criticisms build and build to a damning expose of peer review and those who practice it.

Holy Batman and Robin, a philosophical position? Does that peer reviewer really see accuracy vs. precision the same as pondering how many angels can dance on the head of a pin?

Unfortunately, the belief in climate science is that the more decimal places you have, the more “accurate” the model or its predictions are, and that if you calculate roughly the same answer enough times in different ways, ignoring error, compound or otherwise, it is impossible to be wrong. This is beyond stupid.

So, to be generous, I’ll just call climate modelers childish, because in a way they are right. You plug 1+2 into a calculator, or you plug in 2+1, or you plug in 1+1+1, and you always get 3; the calculator is always right. It is precise, but accurate only to the extent you don’t have an idiot manning the calculator who has no idea what “+” means or what 1, 2, or 3 represent.
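The calculator point is the precision/accuracy distinction in miniature. With made-up numbers:

```python
import numpy as np

# Made-up measurements of a quantity whose true value is 10.0.
rng = np.random.default_rng(2)

precise = rng.normal(10.5, 0.01, 1000)   # tight spread, wrong answer
accurate = rng.normal(10.0, 0.5, 1000)   # wide spread, right answer

# The first series has lots of repeatable decimal places (precision)
# while sitting 0.5 off the truth (inaccuracy); no amount of re-running
# or averaging fixes a biased instrument.
print(precise.mean(), precise.std())
print(accurate.mean(), accurate.std())
```

More decimal places and tighter agreement between runs sharpen the first number, but never move it toward the truth.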

Based on the peer review comments exposed in this article, I ran a model which showed climate modelers to be children allowed to play with big computers. Then changed some parameters and re-ran the model and found that climate modelers veered into delusional psychosis. Based on the error range compounded annually, climate modelers are somewhere between childish and delusionally psychotic.

Alx, there is no evidence that philosophers ever pondered “how many angels can dance on the head of a pin”; it’s a story made up by some scientist in the 19th century that subsequently went viral on the Internet of the day (written correspondence between scientists). Concepts such as accuracy versus precision are the stuff of the philosophy of science; i.e., how do you determine whether you are looking at science or pseudo-science?

PG, don’t like to contradict, but the distinction between accuracy and precision is what separates physical science from philosophy.

Stillman Drake made this point in his excellent book, “Galileo: a very short introduction.” Galileo was the first to attach his theories to the critical test of observation. As Drake notes, attachment to observation fully, completely, and ineluctably separated science from the essences and axioms of philosophy.

Galileo was really the first truly modern physicist; a scientist in the manner we recognize. That’s what got him in so much trouble, especially with the academic philosophs of his day.

Philosophical deductions can be perfectly precise. But their accuracy is a completely nother matter. :-)

Appreciate your understanding of the problem, Alx. As you note, it’s risky business openly criticizing one’s reviewers. I’ve never done it before, in years of peer-reviewed publication. But this experience was so abnormal, the reviews were so incompetent, and the subject is so widely important, that I just couldn’t keep quiet.

[E]lectronic publishing distinguishes between the phase where documents are placed at the disposal of the public (publishing proper) and the phase where ‘distinctions’ are being attributed. It used to be that being printed was ‘the’ distinction; electronic publishing changes this and leads us to think of the distinction phase completely separately from the publishing phase.

However, doing so changes the means by which distinction is imparted, and imparting distinction is a sure sign of power. In other words, those who now hold that privilege are afraid of losing it (‘gate keepers’) and they will [use] every possible argument to protect it without, if possible, ever mentioning it. — Jean-Claude Guédon and Raymond Siemens, “The Credibility of Electronic Publishing: Peer Review and Imprint”

I’m not sure that I understood example 6. Did the reviewer use the example that if you put in the same value on the RHS of the equation that you almost get the same value on the LHS? Is the point being made that depending on how you end up with 4W/m2 on the RHS you could get a 2% error in the uncertainty?

I hope someone can point out that I misunderstood for the sake of my sanity.

From the gist of what I have read, the “Climate Scientists” think the range of the GCM estimates is larger than the actual climate variation. Despite the actual measurements being completely out of the GCM projection range.

In the real world that is a complete falsification of the GCM’s. There is no way that they will ever admit that.

Really, if you look at the fourth head-post figure, Genghis, you’ll see that climate models cannot make unique predictions. That makes them inherently unfalsifiable, thus a-scientific. No matter what the observed temperature does, that wouldn’t change.

The justification for climate models that are physically incorrect/unrealistic is to enable the models to be tuned to produce an abrupt climate change. It is a fact that there are cyclic warming and cooling events (periodicity 1400 years plus or minus a beat timing of 500 years) in the paleo record and roughly every 8000 years to 12,000 years the cooling is abrupt and there is a larger magnitude change.

As noted above, if the planet resists forcing changes (negative feedback) rather than amplifies forcing changes (positive feedback), then the explanation (only physical possible explanation) for cyclic abrupt climate changes is there is a very, very, powerful forcing mechanism.

If I understand the mechanisms and what is currently happening to the sun, we are going to experience an abrupt Dansgaard-Oeschger cooling event (not too bad: say 0.6C cooling over two to three years, based on the earth’s response to a step forcing change such as a large volcanic eruption; the solar minimums last 100 to 150 years), which may be followed by what causes a Heinrich event. I will explain the mechanisms in detail, if and when, there is unequivocal observational evidence of global cooling.

Sudden climate transitions during the Quaternary
Abstract
The time span of the past few million years has been punctuated by many rapid climate transitions, most of them on time scales of centuries to decades or even less. The most detailed information is available for the Younger Dryas-to-Holocene stepwise change around 11,500 years ago, which seems to have occurred over a few decades. The speed of this change is probably representative of similar but less well-studied climate transitions during the last few hundred thousand years. These include sudden cold events (Heinrich events/stadials), warm events (Interstadials) and the beginning and ending of long warm phases, such as the Eemian interglacial. Detailed analysis of terrestrial and marine records of climate change will, however, be necessary before we can say confidently on what timescale these events occurred; they almost certainly did not take longer than a few centuries.

Various mechanisms, involving changes in ocean circulation, changes in atmospheric concentrations of greenhouse gases or haze particles, and changes in snow and ice cover, have been invoked to explain these sudden regional and global transitions. We do not know whether such changes could occur in the near future as a result of human effects on climate. (William: Come on man, the sun is causing what is observed) Phenomena such as the Younger Dryas and Heinrich events might only occur in a ‘glacial’ world with much larger ice sheets and more extensive sea ice cover. However, a major sudden cold event did probably occur under global climate conditions similar to those of the present, during the Eemian interglacial, around 122,000 years ago. Less intensive, but significant rapid climate changes also occurred during the present (Holocene) interglacial, with cold and dry phases occurring on a 1500-year cycle, and with climate transitions on a decade-to-century timescale. In the past few centuries, smaller transitions (such as the ending of the Little Ice Age at about 1650 AD) probably occurred over only a few decades at most. All the evidence indicates that most long-term climate change occurs in sudden jumps rather than incremental changes.

…According to the marine records, the Eemian interglacial (William: Eemain is the name of the last interglacial period, the current interglacial period is called the Holocene) ended with a rapid cooling event about 110,000 years ago (e.g., Imbrie et al., 1984; Martinson et al., 1987), which also shows up in ice cores and pollen records from across Eurasia. From a relatively high resolution core in the North Atlantic. Adkins et al. (1997) suggested that the final cooling event took less than 400 years, and it might have been much more rapid.

The event at 8200 BP is the most striking sudden cooling event during the Holocene, giving widespread cool, dry conditions lasting perhaps 200 years before a rapid return to climates warmer and generally moister than the present. This event is clearly detectable in the Greenland ice cores, where the cooling seems to have been about half-way as severe as the Younger Dryas-to-Holocene difference (Alley et al., 1997; Mayewski et al., 1997). No detailed assessment of the speed of change involved seems to have been made within the literature (though it should be possible to make such assessments from the ice core record), but the short duration of these events at least suggests changes that took only a few decades or less to occur.

The Younger Dryas cold event at about 12,900-11,500 years ago seems to have had the general features of a Heinrich Event, and may in fact be regarded as the most recent of these (Severinghaus et al. 1998). The sudden onset and ending of the Younger Dryas has been studied in particular detail in the ice core and sediment records on land and in the sea (e.g., Bjoerck et al., 1996), and it might be representative of other Heinrich events. (William: 75% of the Younger Dryas cooling occurred in less than a decade. The planet went from interglacial warm to glacial cold during the Younger Dryas period with cooling for around 1000 years. Heinrich events terminate interglacial periods.)

Timing of abrupt climate change: A precise clock by Stefan Rahmstorf
Many paleoclimatic data reveal an approx. 1,500-year cyclicity of unknown origin. A crucial question is how stable and regular this cycle is. An analysis of the GISP2 ice core record from Greenland reveals that abrupt climate events appear to be paced by a 1,470-year cycle with a period that is probably stable to within a few percent; with 95% confidence the period is maintained to better than 12% over at least 23 cycles. This highly precise clock points to an origin outside the Earth system; oscillatory modes within the Earth system can be expected to be far more irregular in period.

Abrupt tropical cooling ~8,000 years ago
“We drilled a sequence of exceptionally large, well-preserved Porites corals within an uplifted palaeo-reef in Alor, Indonesia, with Th-230 ages spanning the period 8400 to 7600 calendar years before present (Figure 2). The corals lie within the Western Pacific Warm Pool, which at present has the highest mean annual temperature in the world’s ocean. Measurements of coral Sr/Ca and oxygen 18 isotopes at 5-year sampling increments for five of the fossil corals (310 annual growth increments) have yielded a semi-continuous record spanning the 8.2 ka event. The measurements (Figure 2) show that sea-surface temperatures were essentially the same as today from 8400 to 8100 years ago, followed by an abrupt ~3C cooling over a period of ~100 years, reaching a minimum ~8000 years ago. The cooling calculated from coral oxygen 18 isotopes is similar to that derived from Sr/Ca. The exact timing of the termination of the cooling event is not yet known, but a coral dated as 7600 years shows sea-surface temperatures similar to those of today.”

“In climate research and modelling, we should recognize that we are dealing with a coupled-nonlinear chaotic system, and therefore that long-term prediction of future climate states is not possible.” IPCC Third Assessment Report (2001), Section 14.2.2.2, page 774

Anyone who claims that a purported computer game climate simulation of an effectively infinitely large open-ended non-linear feedback-driven (where we don’t know even close to all the feedbacks, and even of the ones we do know, we are unsure of the signs of some critical ones) chaotic system – hence subject to inter alia extreme sensitivity to initial conditions, strange attractors and bifurcation – is capable of making meaningful predictions over any significant time period is either a charlatan or a computer salesman.

Ironically, the first person to point this out was Edward Lorenz – a climate scientist.

You can add as much computing power as you like, the result is purely to produce the wrong answer faster.

“You can add as much computing power as you like, the result is purely to produce the wrong answer faster.”

No, No, No, it means you can produce many more answers (even though they are wrong) faster than ever before thus “providing” the desired “confirmation” of the “theory”.

The “models” say that ice on the Great Lakes will be rare/declining in the future, yet, as I type this (about 100 feet south of the southern shore of Lake Ontario) there is more ice on the water than I have seen in the last 3 decades.

The original post is correct, if we tried this nonsense in the engineering community people would be killed by falling airplanes, short circuiting electrical grids and exploding rockets. Of course the engineering community had a few of those in the past, but they LEARNED from their mistakes, a mark of intelligence.

Folks that keep repeating the same “modelling” mistakes decade after decade and deride anyone they deem “not as wise” as them are doomed to failure. And in the history books written in the future (aren’t they all ?) the topic of climate “modelling” will be seen as a TOTAL ABJECT FAILURE (my condolences to good meaning folks that chose that profession, but the clues were there if you looked hard enough).

Dr. Frank, thank you for your efforts, keep explaining reality to those that are severely challenged in that department.

Who is this Pat Frank? His AGU poster has no affiliation. I see he may be an x-ray absorption spectroscopist — certainly a scientist in that case, but stretching the definition of climate scientist, I would say (like engineers who call themselves climate scientists). And what’s to say he didn’t submit his paper 10 times, and just compile all the dumb reviewer comments for the sake of this blog? I still say if he can’t clearly explain his logic and respond to the comments of 9 out of 10 reviewers, then there’s no surprise he can’t get his paper published.

I did the work on my own time and expense, Barry. Therefore, I can’t claim my work affiliation.

My manuscript is about physical error analysis, not about climate. Here is a paper I published a couple of years back about sulfur in the wood of a recovered military bronze ram from an ancient Roman warship. The Supporting Information document is free access. Take a look and see if I know anything about error analysis.

Given your logic, and the obvious inexpertise shown by my reviewers, will you now agree that climate modelers are unqualified to review?

Your “…what’s to say…?” is just you making unfounded negative speculations.

As to “explaining logic,” every one of my responses started with a summary header, like this one …

• The reviewer has repudiated the distinction between accuracy and precision as a “philosophical rant,” when in fact the distinction is central to physics.
• The review evidences a lack of understanding of propagated error, item 3.
• The reviewer has mistakenly assumed that differencing between modeled climate observables is identical to differencing between modeled and physically measured climate observables, items 4 and 8.
• The reviewer is apparently unaware that the large measurement uncertainties vitiate attribution and validation, item 5.

… and then followed on with fully quoting the reviewer and a response. The “items” are review/response numbers. The summary was there for the editor, who could easily have gone to the mentioned items and checked the facts of the matter.

The reply to this particular reviewer included 10 pages of detailed responses to the individual criticisms, along with 24 citations to the relevant literature. Such content was pretty typical.

So don’t go on in ignorance about the process Barry. You’ll only look foolish.

Professor Patrick Frank of the Department of Chemistry, Stanford, has more than 50 peer-reviewed publications. And how do you define a “climate scientist”? Is being a railway engineer, a failed theology student, or a psychologist an appropriate qualification? Hubert Lamb didn’t gain his qualification as a climate scientist until he was awarded an honorary Doctorate of Science three years after he retired.

Hey Pat, maybe you’ll be awarded an “appropriate qualification” after you retire :-)

Let’s see. Pat Frank said
1) 9 of 10 reviewers rejected his paper.
2) He didn’t re-submit with the suggested changes.
3) He went into an elaborate defense, published on WUWT, but didn’t send his defense to the reviewers

Your number 2) is factually wrong, warrenlb. My responses to the reviewers were detailed, extensive, on-point, and went back to the editors as is the usual practice.

I neither want nor need your sympathy.

WUWT represented a place where the reviewer scientific incompetence could receive open exposure. The encounter with climate modelers has been unique in my experience publishing scientific manuscripts.

Finally, you have let us all know that you’re not shy about expressing confident opinions while operating from complete ignorance. That intellectual trait qualifies you to become a star in cultural studies.

Wow. I’m impressed. Four rejections? Was it eight reviewers, or 12, who checked this paper? It’s hard to believe that there’s not some problem with the writing or the results. Could you not use the early rejections to rewrite the paper? But I have to admit I have had only two rejections max before things get in, so what do I know.

It was 10 reviews in 2+2 submissions, Pippin. I did make requested changes. The basic problem is that climate modelers reject any physical error analysis. For them, error = only model variability.

Plus, modelers apparently think that all model errors are present in their base-state (e.g., 1850) simulation, so that by taking differences against the base state they remove all the physical error from their anomalies. Incredible but true. They live in physical fantasy land.
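For what it’s worth, the arithmetic of step-wise propagation is simple enough to sketch in a few lines of Python. This is my illustration with a made-up per-step uncertainty, not the PWM emulator itself: uncorrelated per-step errors accumulate in quadrature, so the envelope widens with the square root of the number of steps.

```python
import math

# Root-sum-square propagation of an uncorrelated per-step error through
# an N-step projection. The 0.1 K/step figure is a placeholder, not a
# number from the manuscript.
def propagated_uncertainty(n_steps, sigma_step):
    # Independent errors add in quadrature: sigma_total = sigma * sqrt(N)
    return sigma_step * math.sqrt(n_steps)

envelope = [propagated_uncertainty(n, 0.1) for n in range(1, 101)]
# After 100 steps the envelope is 10x the single-step uncertainty,
# even though each individual step's error is small.
```

This is only the growth law for uncorrelated step errors; it is meant to show why a propagated envelope keeps widening even when each annual error is modest, not to reproduce any particular figure.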

The more important question is whether climate modellers are even computer modellers. They seem to have no respect for the basic principles of computer modelling, right down to whether the source data they work with have been properly quality controlled, with regular certification of both the instruments and the environment in which they are used. They put in place a verification network and, when it does not give the answer they want, totally ignore it. On a daily basis their colleagues in weather forecasting tell us about the two-degree difference between cities and rural areas, but ignore this when it comes to climate change.
Here we all know about this and so many other deficiencies, yet in the UK both the mainstream Independent and Guardian, as well as the charter-obligated impartial BBC, manage to block comments by dissenters. So much for free speech. Makes China look positively tolerant.

I think peer reviewers vary in quality across all scientific disciplines.
I have had experience of publication being temporarily blocked by someone who wanted to “own the space”.
I have also struggled to get both peers and clients to understand the fragile and evolving nature of the modelling process.
It is not just a phenomenon limited to climate science.

Reblogged this on gottadobetterthanthis and commented:
–
Down into the tedious weeds, but exceptionally good article showing that there is no value whatsoever in climate models.
Automotive crash models do the things Dr. Frank indicates. Automotive crash models compare to physical, real crashes. Over and over we compare the model results to real crashed cars. Over and over! We don’t stop comparing the models to reality. Aren’t you glad? Anybody willing to drive a car whose design has never actually been crash tested, but only modeled? I hope not.
Note RGB’s comments below the article.

A good example of the reasons for peer-review:
1) Multiple reviewers reject the paper, but the author seems unwilling to make changes.
2) The author doesn’t list his educational accomplishments. Does he have education for advancing the state of knowledge on this topic?
3) Instead of pro-actively correcting his errors, or successfully explaining his analysis to the reviewers, he writes a long-winded defense. Not a good indicator.

If the author is certain that he is right, has he resubmitted to another journal? It would be useful for him, and for us, to see if the same criticisms are received, or if he is successful with another journal.

Still waiting for you to present the repeatedly requested evidence which has personally convinced you that AGW is real. And if so, then that it’s something about which the world should be concerned rather than happy.

warrenlb, how do you know any of your list is due to CO2 emissions? That knowledge would require a valid theory of the terrestrial climate, which does not exist.

Your “physics of the greenhouse effect” is just the Stefan-Boltzmann equation. The S-B equation is not a theory of climate. This, by the way, underlies the entire misperception of alarm: the mistaken but implicit presumption that the S-B equation describes how the terrestrial climate responds to some added tropospheric forcing.

Your last item is just enhanced radiative cooling due to increased CO2. This effect happens because gases are so dilute in the stratosphere that the radiative decay of CO2* is faster than the collisional decay. In the troposphere, the opposite is true. So, cooling of the stratosphere has nothing to do with the supposed mechanism of CO2 greenhouse warming.

“Your “physics of the greenhouse effect” is just the Stefan-Boltzmann equation.”

No, incorrect. The Stefan–Boltzmann law states that the total energy radiated per unit surface area of a black body across all wavelengths per unit time (also known as the black-body emissive power) is directly proportional to the fourth power of the black body’s absolute temperature.

The term ‘greenhouse effect’ refers to the absorption of thermal radiation from the planet’s surface by atmospheric greenhouse gases; that radiation is then re-emitted in all directions. Since part of this re-radiation is back towards the surface and the lower atmosphere, it results in an elevation of the surface temperature above what it would be in the absence of the gases.

In short, Stefan-Boltzmann refers to black-body radiation. The Greenhouse Effect refers to the absorption and re-radiation of IR thermal radiation by molecules including CO2, methane, water vapor, fluorocarbons, nitrous oxides, and SF6. These two ideas are entirely different.

Then you say:
“Your last item [Cooling of the Stratosphere consistent with operation of Greenhouse Effect] is just enhanced radiative cooling due to increased CO2. This effect happens because gases are so dilute in the stratosphere that the radiative decay of CO2* is faster than the collisional decay.”

Also wrong. Stratospheric cooling is caused by the increasing presence of CO2 and other Greenhouse gases in the lower troposphere. The increased absorption of IR energy by CO2 (and the other GHGs) in the troposphere reduces the thermal energy that reaches the stratosphere from the planet and lower troposphere.

Don’t take my word for it. Check any College-level physics book, or textbook in atmospheric science.

So, explain how CO2 is able to re-radiate absorbed energy toward the surface, when its collisional decay rate in the troposphere is more than 10x faster than its radiative decay.

The S-B greenhouse effect comes from the change in the emissivity of the atmosphere, due to the presence or increase of GHGs.

Your suggestion that stratospheric cooling is due to, “The increased absorption of IR energy by CO2 (and the other GHGs) in the troposphere reduces the thermal energy that reaches the stratosphere from the planet and lower troposphere.” is wrong.

The same amount of thermal energy always leaves the troposphere and passes through the stratosphere out to space. CO2 just retards the emission in the troposphere, like a dam retards the water in a river. The total water flow doesn’t change, and neither does the total IR emission from Earth.

The stratosphere cools with more CO2 because up there, the radiative decay rate is faster than the collisional decay. Stratospheric CO2 that is activated by collision decays by radiation. That radiation is lost to space. Hence the net cooling.

GHGs increase the amount of thermal energy of the atmosphere, without changing the total IR emission from Earth. Again, like a dam in a river. The central question is how the climate responds. If the main climate response channel is increased convection, or greater tropical rainfall, there could be no discernible change in air temperature at all from increased GHGs.
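The dam analogy can even be run as a toy numerical model. This is my sketch in arbitrary units with made-up coefficients, claiming nothing about real climate dynamics: fix the inflow, make the outflow proportional to the stored level, then cut the outflow coefficient and watch the level rise until outflow once again matches inflow.

```python
# Toy reservoir: constant inflow, outflow proportional to stored level.
inflow = 240.0            # arbitrary units
k = 1.0                   # outflow coefficient ("ease of escape")
level = inflow / k        # start at steady state: outflow == inflow

k = 0.8                   # "raise the dam": retard the outflow
for _ in range(10_000):   # relax to the new steady state
    outflow = k * level
    level += 0.01 * (inflow - outflow)

# The throughput is restored (outflow matches inflow again), but the
# stored level -- the analogue of thermal energy -- sits higher than before.
```

The point of the analogy survives the arithmetic: total flow through the system is unchanged at equilibrium; only the amount held behind the “dam” differs.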

@Pat Frank
My explanation of Stefan-Boltzmann, the Greenhouse Effect, and Stratospheric Cooling as driven by increases in the Greenhouse Effect can be found in any college-level textbook on those topics. Neither is controversial. But your reply is opaque gibberish, and goes a long way to explaining why your paper was rejected by the reviewers — your lack of understanding of science. Try a different vocation.

Arctic sea ice in rapid decline — no it isn’t, and global levels are rising
Global sea level rise is accelerating. — no it isn’t
Global deglaciation — since LIA
Mountain ice caps melting worldwide.– mostly land use changes
Climate zones shifting polewards and uphill.– since LIA
Migration of species to higher altitudes and colder latitudes — since LIA
Atmosphere becoming more humid. — since LIA
The Arctic warming 3 times faster than the global mean — no it isn’t
Snow cover is declining.– no it isn’t
Ocean heat content is rising — no it isn’t, model-based
The tropical belt is widening — since LIA
Storm tracks are shifting polewards. —
Jet streams are shifting polewards and becoming more erratic.
Permafrost all over the northern hemisphere is warming and thawing.
Difference between nighttime lows and daytime highs decreasing — no they aren’t
Warming of the planet since 1880 — same trend since LIA
40% rise in Atmospheric CO2 since ~1800 — has little effect
Underlying physics of the Greenhouse effect — you don’t appear to understand them, and neither do modellers, which is why their predictions have been so wrong

Note that Great Lakes ice is at record highs. The local raw temperatures say this has been a very cold couple of years. The GISS adjustments make them average. It’s becoming blindingly obvious that the data is unreliable.

Oh, the things one finds in college textbooks. My professors used to mock them so.

“The increased absorption of IR energy by CO2 (and the other GHGs) in the troposphere reduces the thermal energy that reaches the stratosphere from the planet and lower troposphere.”

This is hilarious. It reminds me of when Goddard was posting pictures of a simple electric circuit to mock the people who said CO2 could not warm the atmosphere.

Let’s see if I can explain this simply… imagine you build a very special dam. The dam is very special because it reduces the amount of water that flows past it (unlike normal dams, which merely change the equilibrium height of the water). What happens to the water level on the two sides of the dam? (Hint: you do NOT want to build this dam near your house.)

You said: “Let’s see if I can explain this simply… imagine you build a very special dam. The dam is very special because it reduces the amount of water that flows past it (unlike normal dams, which merely change the equilibrium height of the water). What happens to the water level on the two sides of the dam? ”

You miss the key point: as atmospheric Greenhouse Gases increase, less thermal radiation escapes the planet. To maintain the flow of energy to space, the planet warms so that thermal radiation leaving the surface increases again (in proportion to T^4, the Stefan-Boltzmann relationship), until energy leaving the planetary system once more equals energy in (the sun’s rays). Thus the planet warms – the entire point of the Greenhouse effect.
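The T^4 restoration argument can be made concrete with standard textbook round numbers. This is my sketch; the 240 W/m² absorbed flux and the effective-emissivity value are conventional illustrative figures, not numbers taken from either commenter:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp(absorbed_flux, emissivity=1.0):
    """Temperature at which emitted flux balances absorbed flux."""
    return (absorbed_flux / (emissivity * SIGMA)) ** 0.25

t_bare = equilibrium_temp(240.0)        # ~255 K: no greenhouse absorption
t_ghg = equilibrium_temp(240.0, 0.61)   # ~289 K: reduced effective emissivity
# Lowering the effective emissivity forces a higher surface temperature
# to push the same 240 W/m^2 back out to space.
```

The roughly 34 K gap between the two results is the usual back-of-the-envelope magnitude quoted for the greenhouse effect; the debate in this thread is over mechanism and response, not this arithmetic.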

Then you say: “Oh, the things one finds in college textbooks. My professors used to mock them so.”

My response: I used to say that those that reject AGW believe in science, but just don’t understand the Greenhouse effect. But your quote says you reject Science in general.

Here is the IPCC on stratospheric cooling: “When the CO2 concentration is increased, the increase in absorbed radiation is quite small and increased emission leads to a cooling at all heights in the stratosphere.”

So the point of peer review is to tell the reviewee to remove anything of value from the paper- just because the reviewers don’t understand or like what is presented. We sort of always knew that- basically peer review pre-publish has no critical analysis value.

Congratulations, warrenlb, you found a way to sustain your initial prejudice by embrace of self-serving and ignorant speculation. Factual neglect is hardly a quality to cultivate, but you’re working at it anyway.

warrenlb posted lots of assertions claiming that global warming is approaching lift-off. But as usual, warrenlb is flat wrong.

I could go down warrenlb’s list, starting with his false assertion that Arctic sea ice is “in rapid decline” [it isn’t], as I have done several times before, but why bother? He just cuts and pastes more misinformation.

The basic metric is this: global warming stopped, anywhere between ten and 18 years ago, depending on which measurement is used. In any case, global warming has stopped, and not just temporarily — it has been stopped for many years now.

That fact completely deconstructs warrenlb’s belief in CO2=CAGW [he says he doesn’t claim CAGW, but let’s get real. Of course he believes that. Why else would he argue incessantly?]

The fact is that there is nothing either unusual, or unprecedented happening. The climate Null Hypothesis has never been falsified. Thus, the alarmists’ Narrative is debunked. Rational folks understand that; only warrenlb is still clueless.

warrenlb, nothing at that site supports your denial of the S-B basis of climate alarm, supports your neglect of the significance of rapid collisional vs. slow radiative decay of CO2* in the troposphere, or supports your dismissal of CO2* radiative decay as the source of stratospheric cooling.

Let’s add that the entire NASA case, like that of the IPCC, rests solely upon the physical reliability of climate models; a position that lacks any scientifically valid foundation.

“The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.”

Is a nice sentence, but not true. I think it’s actually worse. The climate is influenced by unknown outside factors like asteroids, volcanoes, solar fluctuations and more. What was it again that triggered an ice age, or ended one?

Without those unpredictable factors, the climate might be predictable, if only we knew more…
Regards, lb

It’s worse than you say:
No one knows what a normal climate is, if there is such a thing, and no one can agree what a pleasant climate would be — the wimmenfolk are always too cold, so they turn up the thermostat, then the men are too hot, and they turn down the thermostat, and the worst are fat women going through menopause — fuggetaboutit — they’re cold, then they’re warm, and then they’re crying — never happy with the climate.
.
So even if we humanoids could figure out how to control the climate like a thermostat, so predictions would NOT EVEN BE NECESSARY, there would be world wars over what average temperature to set for Earth.

Welcome to the club. My paper was rejected too back in 2010 by Journal of Geophysical Research. Guess what. The sole reviewer was an IPCC climate modeler. By the way, your 2008 Skeptic paper was one of my references in my paper. Good work.

NO, they aren’t scientists. Worse than that, the climate modelers aren’t familiar with what’s needed to “draw” an algorithm to be used in computer system programming…
Please let us know where they spent their days while others listened to tutors and learnt. Dreaming?

Thanks for the interesting post. I understand exactly where you’re coming from and cannot believe your peer reviewers fail to, unless their deliberate misinterpretation is designed to delay publication until after they write the rebuttal and present it as original work.

“Now suppose Rowlands, et al., tuned the parameters of the HADCM3L model so that it precisely reproduced the observed air temperature line.

Would it mean the HADCM3L had suddenly attained the ability to produce a unique solution to the climate energy-state?”

It’s worse than you think, Pat. They would answer with a resounding YES.

In fact, the standard line (on SkS and RC, among others) is “if the hindcast is accurate, there’s no reason to think the forecasts aren’t accurate.” This in defiance of all empirical evidence, common sense, and modelling theory. Mind-boggling.

[Song to the tune of “I Am the Very Model of a Modern Major-General” from “The Pirates of Penzance”; music by Sir Arthur Sullivan, original words by W.S. Gilbert.] Sung by two Climate Modellers – M1 & M2.

(M1 & M2):We are the very model for all modern Climate Modellers
We forecast things for foolish kings and not precocious toddlers
By using tricks that would excite a high priest of the Aztecs
For example those subjective “priors” in Bayesian stitastecs?? (pauses)
(M1): Our models have been classified as “complex” (M2): Make that “very”
(M1&M2): That are built upon assumptions by default are arbitrary;
So we get a lot of flack about our “dubious” hypotheses, (thinks)
That others take to then promote their cataclysmic prophesies.
(Background chorus)
That others take to then promote their cataclysmic prophesies.
That others take to then promote their cataclysmic prophesies.
That others take to then promote their cataclysmic prophe-prophesies.
(Modelers)
It’s obvious that leaders hang on every word we utter we
Call anyone who disagrees a mal-contented nutter we
Forecast things for foolish kings and not precocious toddlers
We are the very model for all Modern Climate Modellers.
(Background chorus)
They forecast things for foolish kings and not precocious toddlers
They are the very model for all Modern Climate Modellers.
(Modelers)
Comparing obs to models we maintain without compunction:
That when obs aren’t in agreement “It is instrument mal-function”
It’s the only explanation our position’s categorical,
“The obs must match the models because models are The Oracle”.
We try to use a history so the curve-fit looks re’listic we mix
Celsius and Fahrenheit and other faults simplistic and we
Always have a choice of trends so we can choose the greater set.
(pauses to think)
We don’t archive our data; ‘case we change it for a later set.
(Background chorus)
So they don’t archive their data; they may want to change it later
So they don’t archive their data; they may want to change it later
No they don’t archive their data; they may want to change it late-it later.
(Modelers)
We’ll make dire pronouncements that some N.G.O. can seize upon
To pressurize a government for action it agrees upon
We forecast things for foolish kings and not precocious toddlers
We are the very model for all Modern Climate Modellers.
(Background chorus)
They forecast things for foolish kings and not precocious toddlers
They are the very model for all Modern Climate Modellers.
(Modelers)
We mix with peers and journos at expensive destinations
All go but we keep going as it has its compensations
We group at airport lounges; (M1): I’m the consummate jet-setter-er)
We’re constantly in transit flying business class or better-er.
(M2): At conferences I’m centre stage and eager to engage in chat
As long as my response can start “I’m really glad you asked me that”
(M1): On broader issues my colleagues agree on one essential, (thinks)
(M1&M2) It’s plain to us the masses have become too affluential.
(Background chorus)
It’s plain to them the masses have become too affluential
It’s plain to them the masses have become too affluential
It’s plain to them the masses have become too affluen-fluential.
(Modelers)
It’s in our job description that of all the things we have to do
The most important one by far is demonizing CO2
We forecast things for foolish kings and not precocious toddlers
We are the very model for all Modern Climate Modellers.
(Background chorus)
They forecast things for foolish kings and not precocious toddlers
They are the very model for all Modern Climate Modellers.

Great job. Brilliant lyrics. You made my day. Anyone who takes the coming climate change catastrophe boogeyman seriously should have his head examined. I can say for sure, based on your comedy, that you do not need your head examined. I’m sure they would find nothing.

Some consideration of privacy, a touch of personal sympathy, and a bit of ethical discretion keep me from naming editorial names. I don’t know who the reviewers were, though one became easy to identify from the content of the review.

Are climate modelers scientists?
Of course not.
Real science requires data.
Computer games are not data.
Therefore they are not real science.
I’ve been saying that since the late 1990s.
.
Leftists, who claim to be ‘environmentalists’, input whatever the ‘boogeyman of the year’ is — DDT — acid rain — hole in the ozone layer — Alar in apples — etc. — etc. — global warming — into a REALLY BIG COMPUTER, and then it makes humming and grinding noises, lots of lights flash, there’s a puff of smoke, and that REALLY BIG COMPUTER ejects a piece of paper that predicts the future results of that environmental boogeyman… but it seems the prediction is always the same: “Life on Earth will end as we know it,” and there’s a chart that looks like a hockey stick that proves it.
.
And then the computer gamer, with a science degree, applies for another government grant to play computer games for another year to “study” the catastrophe he just predicted.
.
This has worked for 45 years so far, thanks to the process developed in the 1960s by Roger Revelle — predict doom in a very serious voice — use a lot of hand gestures — claim 105% confidence in your prediction — and ask for a goobermint grant not for yourself, but to save the Earth.
.
Governments LOVE to have a crisis to “solve”: a real one, or imagined like the ‘global warming crisis’ … and they all “require” more government spending, more government taxes on corporations, more government regulations, more government power to micro-manage people’s lives … all supported by a foundation of climate astrology …er …I mean computer models.
.
If you were a nerdy scientist and could get a great salary for playing computer games in an air-conditioned office, get in the media by making a scary climate prediction, and possibly become famous (maybe even getting to fly to an overseas global warming conference in Al Gore’s private jet), and while doing all of this you could tell everyone you are working “hard” (9am to 5pm, heh heh) “to save the Earth” … as compared to being a real scientist getting a mediocre salary, having to write scientific books your wife wouldn’t even read, having to gather data and samples in the too-hot or too-cold field, and doing experiments in a warm, smelly laboratory on unobtanium, where you could accidentally set your tie on fire with a bunsen burner … well, which one would you choose?
.
Playing computer games to “save the Earth”, of course.
.
But think of the good news about the (never) coming climate change catastrophe:
No one has been, or will be, harmed by climate change, and since Earth’s climate is always changing, there can be a permanent “war” on climate (to keep goobermint bureaucrats busy).