In a comment on the WUWT article about the abject failure of UKMO weather forecasts, “Slingo Pretends She Knows Why It’s Been So Wet!”, Doug Huffman wrote, “Each forecast must be accompanied by the appropriate retro-cast record of previous casts” (January 6, 2013 at 7:06 am). I pointed out years ago that Environment Canada (EC) publishes such information. Those records expose a similarly horrendous story of absolute failure. This likely explains why other agencies do not publish such records, but it provides ample justification for significantly reducing the role of the agency.

As Richard Feynman put it: “It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.”

If your prediction (forecast) is wrong, your science is wrong. Unlike the IPCC, weather agencies cannot sidestep the problem by calling their predictions “projections.” They can, however, and do, avoid accountability.

Initially I thought EC was admirable for publishing results. Now I realize it only shows arrogance and a sense of unaccountability: we fail, but you must listen, act, and keep paying. It underscores the hypocrisy of what they do. More important, it shows why they and all national weather agencies must be proscribed. It is time to reduce all national weather offices to data-collection agencies. When bureaucrats do research it is political by default. The objective rapidly becomes job preservation: perpetuate and expand rather than solve the problem.

EC is a prime example of why Maurice Strong set up the IPCC through the World Meteorological Organization (WMO) and its member national weather agencies. EC participated in and actively promoted the failed work of the Intergovernmental Panel on Climate Change (IPCC) from the start. An Assistant Deputy Minister (ADM) of EC chaired the founding meeting of the IPCC in Villach, Austria, in 1985. Its involvement continues: they sent a large delegation to the recent Doha conference on climate change. Their web site promotes IPCC work as the basis for all policy on energy and environment. They brag about their role as a world-class regulator. All this despite the fact that their own evidence shows the complete inadequacy of their work.

They display their failures on maps. Pick any map or period and it shows that a coin toss would achieve better, or at least comparable, results. Here is their caption for the maps.

“The upper panel shows the seasonal air temperature or precipitation anomaly forecasts. The forecast[s] are presented in 3 categories: below normal, near normal and above normal. The lower panel illustrates the skill (percent correct) associated [with] the forecast.”

The maps are for temperature and precipitation for 12, 6 and 1-3 months.

Everyone knows that regional weather forecasts are notoriously unreliable, especially beyond 48 hours. This fact weakened the credibility of the IPCC predictions with the public from the start. Some supporters of the IPCC position tried to counteract the problem by saying that climate forecasts were different from weather forecasts. It is a false argument. Climate is the average of the weather, so if the weather science is wrong the climate science is wrong.

Some experts acknowledge that regional climate forecasts are no better than short-term weather forecasts. New Scientist reports Tim Palmer, a leading climate modeller at the European Centre for Medium-Range Weather Forecasts in Reading, England, saying, “I don’t want to undermine the IPCC, but the forecasts, especially for regional climate change, are immensely uncertain.” In an attempt to claim some benefit, we’re told, “…he does not doubt that the Intergovernmental Panel on Climate Change (IPCC) has done a good job alerting the world to the problem of global climate change. But he and his fellow climate scientists are acutely aware that the IPCC’s predictions of how the global change will affect local climates are little more than guesswork.” The IPCC have deliberately misled the world about the nature, cause and threat of climate change, and deceived about the accuracy of their predictions (projections), for a political agenda.

Some claim the failures are due to limited computer capacity. It makes no difference. The real problems are inadequate data, lack of understanding of most major mechanisms, incorrect assumptions, and a determination to prove instead of falsify the AGW hypothesis.

Einstein’s definition, “Insanity: doing the same thing over and over again and expecting different results”, applies. EC does the same thing over and over with results that indicate failure, yet fails to make adjustments as the scientific method requires. What is more amazing and unacceptable is that they use public money and are essentially unaccountable, yet demand the public and politicians change their energy and economic policies. On their web site they state: “The Government of Canada supports an aggressive approach to climate change that achieves real environmental and economic benefits for all Canadians.” They could begin by reducing EC to data collection. Their failures are more than enough to justify termination in any other endeavour. Another justification is their involvement in, and political promotion of, well-documented IPCC corruption.

Bureaucrats are, by definition, overpaid and generally overqualified filing clerks. They should not be allowed to do anything other than keep the files in order; under no circumstances should they be allowed to “manage” anything, and certainly not anything scientific.

Everyone knows that regional weather forecasts are notoriously unreliable, especially beyond 48 hours. This fact weakened the credibility of the IPCC predictions with the public from the start. Some supporters of the IPCC position tried to counteract the problem by saying that climate forecasts were different from weather forecasts. It is a false argument. Climate is the average of the weather, so if the weather science is wrong the climate science is wrong.
——————
This is a particularly clueless argument, in that it has been shown to be bogus many times over. Weather forecasts are an initial value problem. Climate projections are a boundary condition problem. This is well known.

As an imperfect analogy, predicting a toss of a coin from the angular velocity and momentum is difficult – small errors in the initial estimates of the parameters will rapidly escalate and cause errors. In contrast, estimating the distribution of heads and tails is much easier.
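The commenter’s analogy can be put to a quick numerical test. The sketch below is illustrative only (nothing here comes from EC’s data): calling individual tosses of a fair coin in advance is no better than chance, while the long-run fraction of heads is easy to pin down.

```python
import random

rng = random.Random(42)   # fixed seed so the run is reproducible

# Simulate 10,000 tosses of a fair coin (True = heads).
tosses = [rng.random() < 0.5 for _ in range(10000)]

# "Initial value" version of the problem: guess each toss in advance.
guesses = [rng.random() < 0.5 for _ in tosses]
per_toss = sum(g == t for g, t in zip(guesses, tosses)) / len(tosses)

# "Boundary condition" version: estimate the distribution of outcomes.
heads_frac = sum(tosses) / len(tosses)

print(f"per-toss hit rate: {per_toss:.3f}")    # hovers around 0.5: no skill
print(f"fraction of heads: {heads_frac:.3f}")  # close to 0.5, small error
```

Whether the analogy carries over to climate is exactly what the rest of the thread disputes; the simulation only shows that the two questions are statistically different in kind.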

[snip. Gratuitous insulting of Dr. Ball. Any more such insults will get your future comments deleted. — mod.]

An interesting piece. I would never have known EC did that! FWIW, I would like all the other agencies (Met Office, etc.) to do the same – but as Dr Ball suggests, this would reduce their public credibility to zero. Of course, their credibility (and that of the IPCC) with outside ‘real’ scientists is already zero, but that’s a different issue!
Here in the UK, the Met Office has a much smaller ‘region’ to forecast over, but they still can’t get it right!

I may be mistaken, but I think that is a Yogi Berra quote. Here are some others by the great New York Yankees catcher [who is still alive at 86, BTW]:

“It’s like deja vu all over again.”

“We made too many wrong mistakes.”

“You can observe a lot just by watching.”

“A nickel ain’t worth a dime anymore.”

“He hits from both sides of the plate. He’s amphibious.”

“If the world was perfect, it wouldn’t be.”

“If you don’t know where you’re going, you might end up some place else.”

Responding to a question about remarks attributed to him that he did not think were his, Yogi said:

“I really didn’t say everything I said.”

“The future ain’t what it used to be.”

“Predicting is hard, especially about the future.”

“I think Little League is wonderful. It keeps the kids out of the house.”

On why he no longer went to Ruggeri’s, a St. Louis restaurant:

“Nobody goes there anymore because it’s too crowded.”

“I always thought that record would stand until it was broken.”

“We have deep depth.”

“All pitchers are liars or crybabies.”

When giving directions to Joe Garagiola to his New Jersey home, which is accessible by two routes:

“When you come to a fork in the road, take it.”

“Always go to other people’s funerals, otherwise they won’t come to yours.”

“Never answer anonymous letters.”

On being the guest of honor at an awards banquet:

“Thank you for making this day necessary.”

“The towels were so thick there I could hardly close my suitcase.”

“Half the lies they tell about me aren’t true.”

As a general comment on baseball: “90% of the game is half mental.”

“I don’t know if they were men or women running naked (across the field). They had bags over their heads.”

“It gets late early out there.”

Carmen Berra, Yogi’s wife asked: “Yogi, you are from St. Louis, we live in New Jersey, and you played ball in New York. If you go before I do, where would you like me to have you buried?” Yogi’s answer: “Surprise me.”

Part two shows how our modern scientific idea of nature, the self-regulating ecosystem, is actually a machine fantasy. It has little to do with the real complexity of nature. It is based on cybernetic ideas that were projected on to nature in the 1950s by ambitious scientists. It is a static machine theory of order that sees humans, and everything else on the planet, as components – cogs – in a system. But in an age disillusioned with politics, the self-regulating ecosystem has again become the model for utopian ideas of human “self-organising networks”, with dreams of new ways of organising societies without leaders and in global visions of connectivity like the Gaia theory. This powerful idea emerged out of the hippie communes in America in the 1960s, and from counter-culture computer scientists who believed that global webs of computers could liberate the world. But, at the very moment this was happening, the science of ecology discovered that the theory of the self-regulating ecosystem wasn’t true. Instead they found that nature was really dynamic and constantly changing in unpredictable ways. But it was too late; the dream of the self-organising network had by now captured imaginations…

“The Government of Canada supports an aggressive approach to climate change that achieves real environmental and economic benefits for all Canadians.”

I would like them to list those environmental and economic benefits in detail. Specifically I want them to quantify what the benefit is, how it was derived, how it is measured and what the alternative was if the “aggressive approach to climate change” had not been followed.

Besides which, what is an “aggressive approach to climate change” anyway? Other than merely calling people who question the rate, extent and drivers of climate change insulting names? I would like to see if this “aggressive approach to climate change” has actually reduced the temperature of the climate at all and if so, how?

Extremely interesting. Shows that all efforts to make government more “transparent” are foolish. Even when a bureaucracy is perfectly “transparent”, admitting their own errors systematically, they still continue to make the same systematic errors.

It should be blazingly clear by now that “democracy” has failed. Only absolute dictators can rule competently.

“In contrast, estimating the distribution of heads and tails is much easier.”

And applying that less-than-perfect analogy to climate change, the many different global climate models ALL fail to even get that distribution correct. They ALL failed to project the last 20 years’ failure to warm. Now they are having to be rewritten with a fudge factor of “aerosols” to force the models to fit the evidence after the fact.

The fact is, the real world NOT doing what the models projected means simply that the models were WRONG! Climate scientists have been predicting a result of 60% heads and 40% tails. When the result has come in at 50/50, they are scratching their heads and refusing to admit error in the models.
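The 60/40-versus-50/50 complaint above can be made precise with a standard binomial check. A minimal sketch with made-up numbers, not any actual climate record: if a model predicts 60% heads and the observed record over many trials is 50%, the mismatch is far outside sampling noise.

```python
import math

def z_score(observed_heads, n, predicted_p):
    """Standard score of an observed count under a predicted success rate."""
    expected = n * predicted_p
    sd = math.sqrt(n * predicted_p * (1.0 - predicted_p))  # binomial std dev
    return (observed_heads - expected) / sd

# Illustrative numbers: 1000 trials, 500 "heads" observed, 60% predicted.
z = z_score(observed_heads=500, n=1000, predicted_p=0.6)
print(f"z = {z:.2f}")   # about -6.5: the 60/40 prediction is untenable
```

A |z| above roughly 2 is already significant at the usual 5% level, so a value near 6.5 leaves essentially no room for the prediction to be right by chance.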

The climate “scientists” are the worst sort of pseudo scientific charlatans.

It would be a start if Lord Lawson, or someone of his ilk, were to suggest a requirement for the Met Office to produce retro-casts similar to those provided by EC. The inability of the Canadians to improve their service, though, might preclude the possibility of getting the MO to pull their socks up.

richard telford wrote:
January 8, 2013 at 3:33 am
[blockquote] Everyone knows that regional weather forecasts are notoriously unreliable, especially beyond 48 hours. This fact weakened the credibility of the IPCC predictions with the public from the start. Some supporters of the IPCC position tried to counteract the problem by saying that climate forecasts were different from weather forecasts. It is a false argument. Climate is the average of the weather, so if the weather science is wrong the climate science is wrong.
——————
This is a particularly clueless argument, in that it has been shown to be bogus many times over. Weather forecasts are an initial value problem. Climate projections are a boundary condition problem. This is well known.

As an imperfect analogy, predicting a toss of a coin from the angular velocity and momentum is difficult – small errors in the initial estimates of the parameters will rapidly escalate and cause errors. In contrast, estimating the distribution of heads and tails is much easier.[/blockquote]

Some climate scientists argue, and I agree with them, that climate prediction is an initial value problem:

Canada’s inability to properly forecast the weather was noticed among my coworkers when I lived in Toronto in the ’90s. It came up in a discussion in the lunch room one day that all of us watched the American weather forecasts because we considered them more accurate.

At the turn of the century I moved to Ottawa and bought a cottage about 30 minutes west of town, and was constantly annoyed that I couldn’t rely on the Canadian forecasts to determine if I could go there on weekends and enjoy being outside. So I started a spreadsheet and for 6 months tracked the forecasts 7, 6, 5, 4, 3, 2 and 1 days ahead. I allowed a five-degree-C error for temperature, and the results were as follows: 7-day – 30% accuracy; 6-day – 37%; 5-day – 42%; 4-day – 66% (!!!?); 3-day – 50%; 2-day – 58%; day before – 66%.

No idea why the 4 day spiked to match the day before but I was just an observer. I also tried to track the rain forecasts but realized the area was so large that it wasn’t possible.
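The spreadsheet exercise described above is easy to reproduce. A minimal sketch, assuming the same ±5 °C tolerance the commenter used; the forecast and observation values below are invented for illustration, not the commenter’s data.

```python
def hit_rate(forecasts, observations, tolerance=5.0):
    """Fraction of forecasts landing within `tolerance` degrees of what occurred."""
    hits = sum(1 for f, o in zip(forecasts, observations)
               if abs(f - o) <= tolerance)
    return hits / len(forecasts)

# Hypothetical record: observed daily highs and the 7-day-ahead forecasts.
observed  = [21.0, 18.5, 25.0, 30.0, 15.0, 22.0, 19.0, 27.0, 24.0, 16.0]
seven_day = [14.0, 20.0, 31.5, 24.0, 21.0, 23.0, 12.0, 26.0, 30.5, 17.0]

print(f"7-day hit rate: {hit_rate(seven_day, observed):.0%}")  # 40% here
```

Repeating the same calculation per lead time (7-day, 6-day, …, 1-day columns) reproduces the kind of table the commenter kept.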

Adrian Kerton says:
January 8, 2013 at 3:15 am
Today, 8th Jan at 08.00 on UK Radio 4 news
“The met office says it does not believe global warming will be as severe as it had previously predicted.”
Nothing on the BBC website though.

Interestingly, they said something about no warming from 1997 until 2017, when it would start all over again. It reminded me of the “paper” from some British university a couple of years ago that had modelled a downturn in global temperatures, claiming to have predicted the downturn only a couple of years after it had started, and claiming that warming would resume in earnest in 2014! To paraphrase Shakespeare: “Is this an element of goalpost shifting I see before me?” That makes 20 years of no warming at least, so, petitio principii, just how long a period of cooling do they wish to see before somebody amongst them has the balls to ask, “have we got this global warming crap all wrong?”

Everyone knows that regional weather forecasts are notoriously unreliable, especially beyond 48 hours. This fact weakened the credibility of the IPCC predictions with the public from the start. Some supporters of the IPCC position tried to counteract the problem by saying that climate forecasts were different from weather forecasts. It is a false argument. Climate is the average of the weather, so if the weather science is wrong the climate science is wrong.

you wrote

This is a particularly clueless argument, in that it has been shown to be bogus many times over. Weather forecasts are an initial value problem. Climate projections are a boundary condition problem. This is well known.

As an imperfect analogy, predicting a toss of a coin from the angular velocity and momentum is difficult – small errors in the initial estimates of the parameters will rapidly escalate and cause errors. In contrast, estimating the distribution of heads and tails is much easier.

Your reply demonstrates that it is you that is “clueless”.

Clearly, “if the weather science is wrong” then “the climate science is wrong” when “Climate is the average of the weather”.

And the boundary conditions are not known when the science is wrong.
In the probably forlorn hope of helping you to understand why, I shall refer to your ‘coin analogy’.

Predicting the distribution of sides that a coin will fall on is possible. But the prediction will be wrong if it is not a coin but is a 6-sided die which is being tossed. In other words, the system under discussion needs to be adequately understood for a prediction of its behaviour to be valid and correct.

So, if the science of weather is wrong then the boundary conditions of climate (i.e. average weather) cannot be adequately defined (you may be predicting ‘coin tosses’ when you should be predicting ‘dice tosses’, or ‘pancake tosses’, or …).

Tim Ball says
Some supporters of the IPCC position tried to counteract the problem by saying that climate forecasts were different from weather forecasts. It is a false argument. Climate is the average of the weather, so if the weather science is wrong the climate science is wrong.
———–
Wrong Tim.

The difference between weather prediction and climate prediction could be said to be analogous to the difference between the ease of predicting the time of high tide at some port in ten years’ time and the difficulty of predicting the height of the waves on that same day. But whether the analogy is a good one remains to be shown.

I enjoyed Adam Curtis’ documentary. IIRC it also mentioned how “Nature in balance” and “Natural Order” were ideas derived from notions of Empire and Colonialism, and used to justify Apartheid.

The term “holistic” was invented by Jan Smuts.

Many environmentalists seem to still subscribe to this notion, although they stack the pyramid in a different sequence, putting “biosphere” at the top because it “encompasses” all species and hence humans.

richard telford says:
January 8, 2013 at 3:33 am
This is a particularly clueless argument, in that it has been shown to be bogus many times over.

The models are failing and we don’t know why? :-p

“The multimodel average tropospheric temperature trends are outside the 5–95 percentile range of RSS results at most latitudes. The likely causes of these biases include forcing errors in the historical simulations (40–42), model response errors (43), remaining errors in satellite temperature estimates (26, 44), and an unusual manifestation of internal variability in the observations (35, 45). These explanations are not mutually exclusive. Our results suggest that forcing errors are a serious concern.”
http://www.pnas.org/content/early/2012/11/28/1210514109.full.pdf
http://www.pnas.org/content/early/2012/11/28/1210514109

I’m afraid that these arguments can be applied to any dogmatists, not just ‘bureaucrats’.

We in the UK have one set of dogmatists who believe that any state-owned enterprise is intrinsically evil. We now have a 30 year track record of many privatisations and how they panned out.

On the plus side, there is the aerospace manufacturer, Rolls Royce, which is now a highly profitable, successful global company, continuing to employ and invest in the UK and a major regional employer. The British Airways privatisation has been equally successful, although the attractiveness of that sector for investors is probably less rosy than for Rolls Royce.

Where other manufacturing industry was concerned, you can probably say: ‘we no longer subsidise them but we have lost strategic ownership in these sectors’. We no longer have a significant UK-owned steel industry, although some specialist niche providers remain. Our car industry now assembles foreign cars for the European market (be that for Japanese, German, Chinese or American owners). The conclusions to be drawn from this are not a good judgement on UK management. UK workers are amongst the most productive in Europe working for foreign owners. They are well paid, so they are not exploited. This says that both unions and management from bygone eras were a crock of [snip].

The third arena concerns what we traditionally have called ‘public services’. This includes the energy and water utilities, railways and, increasingly, the National Health Service, parts of the police and prison services, etc. Where the railways are concerned a very clear conclusion can be drawn: public subsidies are now higher than under single ownership, competition does not exist in any meaningful form, and the privatisation has privatised profits whilst socialising liabilities. Prices have sky-rocketed for commuters and the verdict must be: ‘the public have been shafted to line the pockets of the rich’. The same conclusion can be drawn about gas, electricity and water, some of the most attractive investments globally for the mostly foreign owners we now have. Prices have escalated hugely, cartel price-fixing has been going on for years, and service quality has declined along with it. So these ones are very much along the lines of: ‘whether or not the privatisations have improved operations, the public is very, very sceptical as to the benefits and wants these essential elements of life (heating, lighting, water and mass transport to work) run primarily for the benefit of the users, not the shareholders’. There is of course a reality that tough decision-making will sometimes inflame public opinion, however.

In the main, the dogmatists are only interested in shovelling [snip] into their friends’/masters’ pockets.

Solutions will only come if private sector partners are rewarded in equal measure to beholden customers and their investors.

I doubt it will happen that way, as it would require a mass clear-out of the City of London’s money-driven [culture], and public decisions taken for the good of the majority, not the minority.

As you will find out, there are greedy, avaricious, private monopolists who are every bit as disgraceful as ‘bureaucrats’.

Whether it is acceptable to talk about them, hold them to account, without endangering your livelihoods is another matter.

If you can’t, you might like to ponder on whether you have unfair agendas…………

A very telling phrase: ‘Climate is the average of the weather…’. But that is not what climate scientists like Slingo believe. For them ‘climate drives weather’. This is the inversion of thought which distinguishes the traditional meteorological paradigm from that of the (new) climate scientist. The former is informed by a straightforward empiricist view of the world – the latter by a form of rationalism based upon the Platonic concept that there are certain ideal and immutable laws (in this case the laws of radiation physics and thermodynamics) from which the future of the climate can be deduced and modelled. The problem is that these methodological paradigms are quite incommensurable and the debate between the two sides is like ships passing in the night. For Slingo what is observed is the servant of theory, and quoting Feynman or Popper will not make the slightest bit of difference. Slingo isn’t pretending she knows why it has been wet – she really, really believes she knows.

Adrian Kerton says: Today, 8th Jan at 08.00 on UK Radio 4 news “The met office says it does not believe global warming will be as severe as it had previously predicted.”

I have transcribed BBC Radio 4 “Today” Programme 2013/01/08 08:05:30 from their web player:

Presenter: “The Met Office has revised downwards its projection for climate change through to 2017. The new figure suggests that although global temperatures will be forced above their long-term average because of greenhouse gases, the recent slowdown in warming will continue. More details from our environment analyst, Roger Harrabin.”

Harrabin: “Last year the Met Office projected that as greenhouse gases increase, the world’s temperature would be 0.54 degrees warmer than the long-term average by 2016. The new experimental Met Office computer model, looking a year further ahead, projects that the earth will continue to warm, but the increase will be about 20% less than the previous calculation. If the new number proves accurate, there will have been little additional warming for two decades. The Met Office says natural cycles have caused the recent slowdown in warming, including maybe changes in the sun and ocean currents. Mainstream climate scientists say that when the natural cooling factors change again, temperatures will be driven up further by greenhouse gases.”

You are plain wrong (as is usual with your posts).
To understand why you are wrong then read my reply to Richard T which I posted at January 8, 2013 at 4:32 am. He, too, has unthinkingly swallowed the same bollocks as you and which flows from warmist propaganda sites such as SkS.

Bureaucrats are, by definition, overpaid and generally overqualified filing clerks. They should not be allowed to do anything other than keep the files in order, under no circumstances should they be allowed to “manage” anything and certainly not anything scientific.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
And I will add:
The last thing Bureaucrats should be allowed to do is write laws , especially laws that harm their countries and increase their power.

In re “It should be blazingly clear by now that “democracy” has failed. Only absolute dictators can rule competently.(Polistra, January 8, 2013 at 4:14 am)”

I believe that it is our expectations of government and democracy, and our narrow definition of democracy that have failed. Please understand the sortition of the democracy that produced the giant civilization of Ancient Greece, upon whose shoulders we stand – tiptoe as we try to keep our noses above the rising waters of demotic ignorance.

Please also read Jonathan Zittrain’s The Future of the Internet – And How To Stop It, for his discussion of “rule competently.” http://futureoftheinternet.org/static/ZittrainTheFutureoftheInternet.pdf In a word, you and we do not want competent rule for its thoughtless zero-tolerance prior restraint. “Jonathan Zittrain is a Professor of Law at Harvard Law School, and faculty co-director of the Berkman Center for Internet & Society at Harvard University.”

This is a particularly clueless argument….
>>>>>>>>>>>>>>>>>>>>>>>>>>>>No it is not. Meteorological organizations and climate scientists have set themselves up as all-seeing prophets who see the future and warn governments that they MUST ACT NOW! or there will be CATASTROPHE! The USA is in the process of dismantling her economy completely based on these predictions.

The empirical proof of these models’ assumptions is seen in the short-term forecasts made by the bureaucratic member national weather agencies that belong to the World Meteorological Organization (WMO).

You cannot deny the link between the short-term forecasting models and the long-term models used by the IPCC.

An Assistant Deputy Minister (ADM) of EC chaired the founding meeting of the IPCC in Villach, Austria, in 1985. It continues, as they sent a large delegation to the recent Doha conference on climate change. Their web site promotes IPCC work as the basis for all policy on energy and environment. They brag about their role as a world-class regulator.

It is a Catch-22:
A.) If the member national weather agencies use the IPCC assumptions the weather forecasts are flat out WRONG.

B.) If they do not use the IPCC assumptions, they prove they think the IPCC is full of bovine feces.

So Richard, which is it, A or B? Those are the only two choices. Except for the third choice, of course: the IPCC and the national weather agencies don’t know what the frecking heck they are doing.

Typhoon says:
January 8, 2013 at 4:21 am
————————
If you take an ensemble of model runs, each initiated with different initial values, the output will be markedly different in the short term, but the mean state after several decades won’t vary much between runs. In the long term it is the uncertainty in the boundary conditions that dominates. Most IPCC climate model runs are not initiated from observations, but from a long run-in.

Only if you are interested in predictions for the next decade are initial values very important. These models currently have little skill.
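The ensemble behaviour described above can be illustrated with a toy chaotic system. This sketch uses the logistic map purely as a stand-in (an assumption for illustration; it is not a climate model): runs started from different initial values diverge step by step, yet their long-run time averages agree, while changing the map’s parameter r, loosely a “boundary condition”, shifts that average.

```python
import random

def time_average(x0, r, burn_in=1000, n=50000):
    """Long-run mean of x under the logistic map x -> r*x*(1-x), after a burn-in."""
    x = x0
    for _ in range(burn_in):      # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        total += x
    return total / n

rng = random.Random(1)
starts = [rng.random() for _ in range(5)]   # an "ensemble" of initial states

# Same parameter, different initial values: long-run means nearly coincide.
means = [time_average(x0, r=3.99) for x0 in starts]
spread = max(means) - min(means)
print(f"ensemble spread of long-run means: {spread:.4f}")  # small

# Different parameter (a changed "boundary condition"): the mean itself moves,
# whichever initial value is used.
shifted = time_average(starts[0], r=3.6)
print(f"long-run mean at r=3.6: {shifted:.3f}")  # noticeably higher
```

Whether real climate prediction behaves like this toy, as telford asserts, or remains an initial value problem, as the reply above argues, is exactly the point in dispute; the code only shows what the boundary-condition claim means.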

——–
Dr (or is it Mr?) Richard S Courtney, I get the distinct impression that your knowledge and understanding of climate does not progress past Genesis 8:22.

I have it on good authority that this year ‘temps will be well above normals’ and have no reason to doubt – so the MO getting a prediction 180 degrees wrong once more would be my main bet in 2013. The shame being that it will mess with our nice run of flat-to-down trends if it turns out that way and the shrill and scared will make it difficult to talk quietly amongst ourselves.

Then though, it starts to get colder.

I just read Réaumur’s transcript (thank you) and the creature Harrabin couldn’t wait to get his excuses in – just as we have been saying for years – if it gets colder or stays mild they will claim that ‘natural causes’ are merely delaying the return of CO2-geddon and there is absolutely no reason to stop being very afraid, believing your government departments and paying here, here and here.

Thanks for the transcript, Réaumur!
To paraphrase Roger Harrabin’s last sentence: ‘I’ve got to state that mainstream climate scientists say that “natural cooling factors” will change, driving temperatures up again, otherwise our theories are not worth a pitcher of warm spit…’

I don’t see a problem with trying to predict the weather or climate (even with government funding), but the problem is that such research must be (for now at least) akin to blue-sky research: “we can’t predict the weather now, and we may never be able to, but I know we won’t be able to unless we try”.

Further, I suspect that if such research were ever to succeed, truly accurate weather predictions would be a great boon to vast numbers of people all over the world.

So, I’m happy to pay for the Met Office to crunch its numbers, but less happy when it tells me how to live my life on the basis of bad number-crunching – to put it mildly.

To produce a valid model, one needs to understand what one is modeling. Climate modeling is an exercise in futility, and in the hands of the AGW crowd it is a public nuisance and threat to the well-being of humankind.

Marginally off-topic, but relevant
I’m intrigued by a notice alongside the stream below Bodiam Castle in East Sussex – presumably written by some well-meaning soul about 15 years ago at the height of the ‘global warming’ hyperbole.
It states that you are advised to enjoy the river bank and associated views, because ‘in 50 years time this will all be under water due to sea level rise..’
The stream, of course, still meanders serenely past…. (give or take the odd recent flooding of the associated floodplain)…

I think a comment similar to this got booted into the ether:
richard telford says:
January 8, 2013 at 3:33 am
….This is a particularly clueless argument….
>>>>>>>>>>>>>>>>>>>>>
No it is not.

Starting from the quote.

An Assistant Deputy Minister (ADM) of EC chaired the founding meeting of the IPCC in Villach, Austria, in 1985. It continues, as they sent a large delegation to the recent Doha conference on climate change. Their web site promotes IPCC work as the basis for all policy on energy and environment. They brag about their role as a world-class regulator.

It is obvious the climate models of the member national weather agencies are based on the same assumptions as the IPCC climate models. The short-term forecasts from these models therefore provide an empirical test of those assumptions. Those short-term forecasts FAIL, thereby providing real-world proof that the assumptions upon which all these models are based are incorrect.

This leads to a Catch-22 situation for you.

A.) All the models are based on the same assumptions, and real world testing shows they fail.

B.) Short term models are based on other assumptions, which means the member national weather agencies have acknowledged that the IPCC assumptions are bovine feces and are trying to come up with something that actually works.

So Richard, take your pick: which is it? A or B? Of course, there is a third option,

C.) None of them know what the freckin’ heck they are doing and are just playing with the computers at our expense.

This is a comment strictly on how the PC(%) numbers on the charts may be intended to be interpreted. The lowest (gray: 0-40%) areas are said to be “Not significantly better than chance”. Does that not mean that the forecasts in these places were of the 50/50 variety, with no benefit received from the EC forecast? That would mean that the higher (41-100%) areas are where the EC forecast was beating the 50/50 coin toss, i.e. 50% on their scale would mean 50% better than chance, or a 75% accurate forecast (or some percentage like that). This would also mean that they need to define how close an actual had to be to the forecast to be considered accurate, e.g. within 1 degree or within 2%, or some such allowance for a near miss.
Is this a case of ambiguous data presentation or is it just me?
Not that it should matter in a discussion on how to read these graphics, but I am in the 95% sceptic range on man made global warming being a “problem” and wish Dr. Ball all the best in his work.
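The arithmetic of that proposed reading can be written down directly (this is the commenter’s interpretation of the charts, not EC’s documented scoring):

```python
def raw_accuracy_from_skill(pc_above_chance):
    """Map a 'percent better than chance' score (0-100) onto raw forecast
    accuracy, assuming a 50/50 chance baseline: 0 maps to a coin toss
    (50% accurate) and 100 maps to a perfect forecast (100% accurate).
    This is the interpretation proposed above, not EC's own definition."""
    return 50.0 + pc_above_chance / 2.0

# Under this reading, a charted 50% would mean 75% raw accuracy:
assert raw_accuracy_from_skill(0) == 50.0
assert raw_accuracy_from_skill(50) == 75.0
assert raw_accuracy_from_skill(100) == 100.0
```

Whether EC’s charts actually use this scaling, or a defined “near miss” tolerance, is exactly the ambiguity the commenter raises.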

Adrian Kerton says: “Today, 8th Jan at 08.00 on UK Radio 4 news ‘The met office says it does not believe global warming will be as severe as it had previously predicted.’ Nothing on the BBC website though.”

I think a better analogy is to consider a sequence such as the following:

a(1) = 1, a(2) = 1, a(n+2) = a(n+1) + a(n):
1 1 2 3 5 8 13 21 …

To calculate, say, the 1000th number in this sequence you have to start at the beginning of the sequence, but your analogy is to just guess and then justify your guess by claiming something called “boundary conditions”.
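A minimal sketch of the point: the only way to reach a later term is to march the recurrence forward from its initial values (this toy sequence happens to admit a closed form, but a weather-like system does not):

```python
def term(n):
    """Walk a(n+2) = a(n+1) + a(n) forward from a(1) = a(2) = 1.
    Every later term depends on the initial values: an initial value
    problem in miniature."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

assert [term(k) for k in range(1, 9)] == [1, 1, 2, 3, 5, 8, 13, 21]
```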

Recall that in September 2009 UN SecGen Ban Ki-moon demanded an immediate transfer payment of $10-trillion (yes, trillion) to his ensconced kleptocrats, lest Planet Earth become a baking desert by January 2010.

Having excreted reigning U.S. munchkins by 2016 if not before, we suspect that by c. 2018 Moonies and Pachauri alike will see their RICO enterprise become roadkill on two-lane, unpaved, public-sector tracks. Can’t happen soon enough.

Re the bon mot attributed to Niels Bohr, “Prediction is very difficult, especially about the future,” and also credited above to Yogi Berra (my impression, too), I did a quick Internet search and came up with this letter to The Economist:

A letter attributes the following comment to Niels Bohr: Prediction is very difficult, especially about the future. It is said that Bohr used to quote this saying to illustrate the differences between Danish and Swedish humour.

Bohr himself usually attributed the saying to Robert Storm Petersen (1882-1949), also called Storm P., a Danish artist and writer. However, the saying did not originate from Storm P. The original author remains unknown (although Mark Twain is often suggested).

Bob Ryan says:
January 8, 2013 at 5:06 am
A very telling phrase: ‘Climate is the average of the weather…’. But that is not what climate scientists like Slingo believe. For them ‘climate drives weather’. This is the inversion of thought which distinguishes the traditional meteorological paradigm from that of the (new) climate scientist. The former is informed by a straightforward empiricist view of the world – the latter by a form of rationalism based upon the Platonic concept that there are certain ideal and immutable laws (in this case the laws of radiation physics and thermodynamics) from which the future of the climate can be deduced and modelled. The problem is that these methodological paradigms are quite incommensurable and the debate between the two sides is like ships passing in the night. For Slingo what is observed is the servant of theory and quoting Feynman or Popper will not make the slightest bit of difference. Slingo isn’t pretending she knows why it has been wet – she really, really believes she knows.

A fascinating observation, this difference in ‘methodological paradigms’ explains how ostensibly intelligent scientists can become intractably wedded to an overarching belief (where ‘belief’ is the right word) in the rightness of the Climatist ’cause’ (cf. Climategate II) and the utter irrelevance of the contrary evidence and arguments propounded by ‘deniers’.

It is easy enough to deride the Climatists as ‘true believers’ (viz. Eric Hoffer), but hard to understand how real scientists, buried in data-intensive research, can latch onto glib and easily falsifiable conclusions, simply on the strength of one idée fixe, namely the theoretical ability of one trace gas to ‘trap’ heat in the atmosphere. I have always assumed it was a result of ideological blinders, an overriding desire to right the wrongs of Western civilization and cure the ills wrought by mankind on the Planet. But that seemed an implausible leap of faith for real scientists to make. Bob Ryan has perhaps shown how the more thoughtful among the Climatists may rationalize that leap philosophically, by turning empirical science on its head.

I see climate as the average of the weather. From a statistical standpoint weather is akin to the prediction interval, climate is the confidence interval. It has nothing to do with initial values or “boundary” conditions. It’s just how we describe the individual data and summary statistics.

We all know the climate next summer will be hot and muggy in Philly. How hot and how muggy and specifically which days will be the hottest is anyone’s guess.
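The prediction-interval versus confidence-interval framing can be sketched numerically (the July temperatures below are invented, and the t-multiplier of 2 is a rough 95% approximation):

```python
import math
import statistics

def intervals(sample, t=2.0):
    """Approximate 95% confidence interval for the mean ('climate') and
    prediction interval for one new observation ('weather')."""
    n = len(sample)
    m = statistics.mean(sample)
    s = statistics.stdev(sample)
    half_ci = t * s / math.sqrt(n)          # uncertainty of the long-run mean
    half_pi = t * s * math.sqrt(1 + 1 / n)  # uncertainty of a single new day
    return (m - half_ci, m + half_ci), (m - half_pi, m + half_pi)

july_highs = [30, 33, 35, 29, 31, 34, 32, 30, 36, 28]  # hypothetical daily highs, deg C
ci, pi = intervals(july_highs)
# The single-day (weather) interval is always wider than the mean (climate) interval:
assert pi[0] < ci[0] < ci[1] < pi[1]
```

The climate statement (“next July will average around 32 degrees”) is far tighter than the weather statement (“a given July day could land anywhere in the wide interval”), which is the commenter’s point.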

You have alerted a bureaucrat that someone is watching. If you have not taken a photo and documented the date of placement of this sign, please do so immediately. The thing will be “disappeared” in 3, 2, 1 days.

For years I have been frustrated with the inaccurate forecasts from Environment Canada. In the very short range (24-48 hours) they are generally OK, but I found that forecasts from, for example, weather.com and WeatherBell are more reliable. On top of that most Canadian radio and TV stations and newspapers pick up the Environment Canada forecasts, making it difficult to hear an alternative unless you go digging for one.

richard telford says:
January 8, 2013 at 3:33 am
As an imperfect analogy, predicting a toss of a coin from the angular velocity and momentum is difficult – small errors in the initial estimates of the parameters will rapidly escalate and cause errors. In contrast, estimating the distribution of head and tails is much easier.
===========
Your understanding of statistics is incorrect. A coin toss is predictable in the long term because it has a constant mean and distribution. This type of distribution is not typical of time series data such as weather, climate or the stock market. Such data is not predictable using the law of large numbers or the central limit theorem, given our current understanding of mathematics.

In effect climate is a coin that is constantly changing shape, each time you toss it. All one can say with any certainty about climate is that it will get warmer, colder, or will stay about the same.
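The “coin that is constantly changing shape” can be made concrete with a toy simulation (the drift model below is invented purely for illustration, not a climate model):

```python
import random

random.seed(1)
N = 100_000

# A fixed, fair coin: the law of large numbers applies, and the
# fraction of heads settles near 0.5.
fixed_mean = sum(random.random() < 0.5 for _ in range(N)) / N

# A coin whose heads-probability drifts as a random walk each toss:
# there is no fixed underlying value for the average to converge to,
# so the long-run mean depends on the particular drift path taken.
p, heads = 0.5, 0
for _ in range(N):
    p = min(1.0, max(0.0, p + random.gauss(0, 0.01)))
    heads += random.random() < p
drifting_mean = heads / N

assert abs(fixed_mean - 0.5) < 0.01  # stationary coin: converges to 0.5
```

The drifting coin’s mean is perfectly computable after the fact but was not predictable in advance, which is the distinction being drawn against the simple coin-toss analogy.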

The massive downgrade of predicted temperature increase by the UKMO to 2017 is very worrying. Given their track record, one is forced to conclude that the UKMO forecast means that temperatures will rise spectacularly over the next 5 years.

“This is a particularly clueless argument, in that it has been shown to be bogus many times over. Weather forecasts are an initial value problem. Climate projections are a boundary condition problem. This is well known. ”

As someone who has worked professionally in computational fluid dynamics for over 25 years, this statement by Mr. Telford is quite incorrect. All of the GCMs use time-marching schemes for coupled parabolic-hyperbolic differential equations describing the basic conservation laws of atmospheric physics. Future surface conditions are not known a priori, as would be required for a boundary value problem – but then if you knew the future, why would you need to predict it? (True elliptic BVPs require knowledge of conditions at all system boundaries – e.g. heat conduction in a metal plate, modeled using Laplace’s equation. This is all basic mathematical physics).

Some scientists fool themselves into thinking climate is a boundary value problem because they “fix” the boundary conditions in the future (hence “projections” versus “predictions”), which of course is equivalent to pure guessing given the non-linearity of the coupled system of differential equations modeling the air and ocean.
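The contrast can be seen in miniature with a crude 1-D heat-conduction sketch (nothing like a GCM, just the textbook distinction): time-marching steps forward from an initial state, while a true boundary value problem is pinned entirely by conditions on all boundaries, with no time dimension at all.

```python
# Time-marching (initial value in time): start from a known temperature
# profile and step forward; only the initial state and the fixed edge
# values along the way are needed.
def march_heat(u, steps, r=0.25):
    u = list(u)
    for _ in range(steps):
        u = [u[0]] + [u[i] + r * (u[i+1] - 2*u[i] + u[i-1])
                      for i in range(1, len(u) - 1)] + [u[-1]]
    return u

# True boundary value problem (steady state): the answer is determined
# by the boundary data alone; in 1-D the steady heat profile between
# fixed ends is just the straight line.
def steady_state(left, right, n):
    return [left + (right - left) * i / (n - 1) for i in range(n)]

# Marching long enough relaxes any initial profile toward the BVP solution:
u0 = [0.0, 9.0, 1.0, 7.0, 10.0]   # arbitrary initial profile, ends fixed at 0 and 10
final = march_heat(u0, 2000)
target = steady_state(0.0, 10.0, 5)
assert all(abs(a - b) < 1e-6 for a, b in zip(final, target))
```

Note the asymmetry the commenter describes: the marching scheme needed an initial state; the steady-state solve needed none, but only because it asks a question with no time in it.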

For those who want to read further about this, the Dr. Roger Pielke Sr. reference cited earlier is a good one.

An interesting fellow. Wikipedia gives some details on the man. The most important details have to be picked out from between the lines.

Some facts: born in Manitoba, made money in the oil business of Alberta, somehow wound up at the UN. In 2005 he accepted a $989,000 bribe from the Korean Tongsun Park, and ensconced himself in Beijing when this became known, safe from prying eyes and bothersome questions. Still heads the IPCC. Strange world, hunh?

The historical Percent Correct (PC) charts cover the period from 1981 to 2010. What would be interesting is a study of the improvement in accuracy of successive forecasts. If someone were to obtain the dataset these PC charts are based on, then a study of whether each successive forecast became more accurate over time would be revealing.
If the technology and methods improved as expected, then obviously the forecasts would get more accurate. If there was no improvement, or even a degradation in forecasts, then it would be necessary to determine the cause. Not to completely anticipate that outcome, but determining when it got worse (if it did) would be revealing.
My slow dial up connection and not knowing where to get the data prevents me from doing it but perhaps others are interested.

I also found this to be quite insightful. It makes me think that we are confronted by a form of “Scientism” that seizes on some “laws” as eternally valid and determinative, and like Biblical literalists, the believers can be certain how the future will unfold.

Years ago, I had a discussion with a distinguished professor concerning research showing that proximity to electric fields was dangerous to human health. I asked if he ever took personal action based on such research. His response was that he knew the researchers, and they weren’t unplugging their refrigerators or turning off their AC. If the researchers didn’t see a threat, despite their own findings, then he saw none.

Tie the pay of climatologists to their predictions, including margins of error, and I suspect all discussion on global warming would grind to a halt. The margins of error would increase to cover a wide range of possible climates, and it would become clear that those who most understand the climate are not willing to risk their money on their own predictions. If they aren’t willing to risk their money on their own theories, then surely we shouldn’t risk ours.

I know there are examples of corrupt, inept, unconscionable people in the private sector; it is a function of the range of human characteristics. I chose not to join the debate in the article about the merits of government versus private sector involvement in different aspects of a society. We still don’t know what should be done by each. It is the central issue in all significant political changes, from the fall of Fascism, through the collapse of communism, to the emergence of State Capitalism in China, to the recent US election. Exploitation of climate and environment for total political control was the theme of Vaclav Klaus’s book, “Blue Planet in Green Shackles”.

The important difference between the two sectors is accountability. It is very unlikely any private sector endeavour could produce meaningless results, let alone publish them, as Environment Canada (EC) did, without some accountability, whether to a Board, the Shareholders or the Marketplace. It is more likely they would try to hide them knowing their jobs were on the line. If caught they would go to jail. Ironically, their behaviour is used to argue for more government involvement or control.

In government I don’t get to vote on who runs Environment Canada, or how. I can vote for politicians who may make bureaucrats accountable, but it is very difficult. Bureaucrats know that they can outlast a politician; that the politician will lose an election because of their failures; that they control the information going to the politician; and that even if a new set of rules and regulations is introduced by the politician, they determine how much and how effectively it is implemented. All these factors are more problematic when a science issue is involved, because very few politicians know or understand science. This was an issue I confronted when I appeared before the Canadian Parliament on the ozone issue.

All these are reasons why Maurice Strong worked through the WMO and the national weather agency bureaucracies. He exploited Mary McCarthy’s observation: “Bureaucracy, the rule of no one, has become the modern form of despotism.” The key was that he controlled some bureaucrats, who then appointed like-minded people to the IPCC.

Others have grappled with this problem of bureaucracies. The US Founding Fathers gave the public some control over senior bureaucrats, who are elected. They also realized the only control politicians have over bureaucracies is funding, so they gave control of the purse strings to the people through Congress.

In Canada, Prime Minister Harper cut funding that had been set up at Environment Canada (EC) by the same Assistant Deputy Minister who chaired the 1985 IPCC formation meeting. Prior to retiring, this person established the Canadian Foundation for Climate and Atmospheric Sciences (CFCAS) with $61 million from EC. In the month he retired he took over as Chair of the Foundation. Most of the money went to climate research by people producing ‘proof’ of the AGW hypothesis. My commentary in an article on this person’s actions drew another lawsuit. At that time I chose not to fight because of the cost. Interestingly, in newspaper articles he wrote protesting the cuts, he never mentioned that he was the person who organized the funding, or even that he had been an ADM.

Everyone must be accountable, but how do you do that without establishing a police state? As Juvenal asked, “Quis custodiet ipsos custodes?”
(It’s time to stop, I am starting to sound like Lord Monckton.)

“It does not matter who you are, or how smart you are, or what title you have, or how many of you there are, and certainly not how many papers your side has published, if your prediction is wrong then your hypothesis is wrong. Period.”

Adrian Kerton, Henry Galt and others:
The climate scientists saying that the pause in global warming (which they didn’t predict) is only temporary and it will take off again soon, reminds me of the old story of the man who saw someone walking along the road spreading a powder everywhere.
“What are you doing?”
“Putting down the Elephant Powder. It keeps the elephants away.”
“But this is the Highlands of Scotland. There aren’t any elephants here.”
“I know – works well, doesn’t it?”

richard telford, a specious argument. There will be growth of uncertainty due to initial value errors, combined with propagation of those errors through stepwise calculations that include serially re-initialized intermediate states, all of which employ a model also subject to theory-bias. When the propagated uncertainty exceeds the bounds of the model, the mean expectation value no longer has any physical significance; no predictive value whatever.

Using your coin toss analogy, a climate model prediction of future temperature is like attempting to predict the persistence runs of heads and tails in a sequence of coin tosses. The claim to know specifics of climate after a hundred years is like claiming to know that the last 100 tosses of 10,000 tosses will all be tails. All that, with initial value errors, propagated errors, imperfectly constrained bounds, and missing bounds due to an incomplete theory. Fat chance.
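The point about stepwise error propagation is the textbook behaviour of any chaotic iterated map; a minimal sketch with the logistic map (a stand-in for illustration, not a climate model):

```python
def logistic_run(x0, steps):
    """Iterate the chaotic logistic map x -> 4x(1-x) and record the path."""
    x, out = x0, []
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        out.append(x)
    return out

a = logistic_run(0.2, 60)
b = logistic_run(0.2 + 1e-10, 60)  # a 1e-10 error in the initial value

assert abs(a[4] - b[4]) < 1e-6                                 # after 5 steps: error still tiny
assert max(abs(x - y) for x, y in zip(a[40:], b[40:])) > 0.1   # later: fully decorrelated
```

Once the propagated error saturates, the model output carries no information about the specific trajectory, which is the “no predictive value whatever” condition described above.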

Tim Ball says: January 8, 2013 at 9:23 am
The important difference between the two sectors is accountability
==========================
With your finger you have touched the nub of the problem. Even elected officials are not held fully accountable.

Maurice Strong was smart enough to recognize the possibilities. He admits to having “a few millions”, but no one knows how much he is worth. The Tongsun Park affair erupted in 2005. The investigators had an $890,000 check traced back to Saddam Hussein, made out to and cashed by Strong, but Strong decamped to Beijing where he remains unmolested by any bothersome investigator. To those who understand politics, this means that Strong had the “right kind” of protection – and seemingly in two different administrations, and probably more.

Dr. Ball, “Climate is the average of the weather, so if the weather science is wrong the climate science is wrong.”

And this is why those who claim it will be warmer in Canada in six months are fools. If we can’t predict the weather three days from now, how can we know this summer will be warmer than this winter. Foolish people who try to tell us the future of the climate like that. Unbelievable that whole sectors of the economy such as agriculture and tourism rely on such nonsense.

Also, the WSJ gave out that Strong had deposited his papers at Harvard, some 685 boxes of documents. The article did not say anything about a contribution, but perhaps Harvard now feels a certain hopefulness toward Mr. Strong.

So it’s true, Strong is smart enough to recognize the possibilities, and he knows how to win friends and influence people. Interesting fellow.

Your post at January 8, 2013 at 10:58 am competes with Richard Telford’s woeful contributions for the daftest post in the thread. It says in total:

Dr. Ball,

“Climate is the average of the weather, so if the weather science is wrong the climate science is wrong.”

And this is why those who claim it will be warmer in Canada in six months are fools. If we can’t predict the weather three days from now, how can we know this summer will be warmer than this winter. Foolish people who try to tell us the future of the climate like that. Unbelievable that whole sectors of the economy such as agriculture and tourism rely on such nonsense.

What do you think “climate” is?

Oh, sorry, it was silly of me to have asked you that.
If you were able to think then you would recognise that average summer weather (i.e. summer climate) is warmer than average winter weather (i.e. winter climate).

Knowing what summer and winter climates are gives no information on how they are likely to change.

But, like Telford, your understanding of such issues is limited to what you can copy-and-paste from SkS or other warmunist propaganda sites.

“…Tim Palmer, a leading climate modeler at the European Centre for Medium-Range Weather Forecasts in Reading, England, saying, ‘I don’t want to undermine the IPCC, but the forecasts, especially for regional climate change, are immensely uncertain.’”

A key point: Palmer believes that what the IPCC achieves OVERALL is good, though their work to predict the changing climate is very poor. He doesn’t want to stop the machine because, though it may say it is headed to Brisbane while actually heading for Darwin, Palmer believes that it is a good idea to end up in Darwin.

Is this not the corruption of ‘the end justifies the means’?

I sympathize. But I do not agree. It is not legitimate to tell all those paying passengers that they are headed for one destination while you and your friends know otherwise. It is manipulative and dishonest.

If I were Palmer, I’d say, “I’m sorry that what I have to say undermines a large part of the IPCC, but …”.

As someone who has worked professionally in computational fluid dynamics for over 25 years, this statement by Mr. Telford is quite incorrect. All of the GCMs use time-marching schemes for coupled parabolic-hyperbolic differential equations describing the basic conservation laws of atmospheric physics. Future surface conditions are not known a priori, as would be required for a boundary value problem – but then if you knew the future, why would you need to predict it? (True elliptic BVPs require knowledge of conditions at all system boundaries – e.g. heat conduction in a metal plate, modeled using Laplace’s equation. This is all basic mathematical physics).

The boundary conditions that are prescribed are not “future surface conditions”. They are the basic top-of-the-atmosphere energy balance conditions involving radiation emitted and absorbed by the Earth system. The point is simply this: although the detailed evolution of the weather…and even of the climate (in the sense of whether it will be warmer or colder or wetter or drier than average) over, say, monthly to yearly timescales…is sensitive to the initial conditions, the future climatic conditions in response to increases in greenhouse gases are not. This is clearly seen in climate models: if you perturb the initial conditions, some things are sensitive to this and some things are not. Let’s make a list of what is and is not sensitive to initial conditions —

Sensitive:

(1) Weather, especially about a week or more into the future.
(2) Regional climate over monthly to yearly timescales relative to the local seasonal means.
(3) The evolution of various climate fluctuations such as ENSO.

Not Sensitive:

(1) The basic response of the climate system to the perturbations in the distribution of solar energy that occur on a yearly time scale, or in other words, the seasons. For example, we can predict with high confidence that the average temperature in Rochester, NY in January will be colder by somewhere in the neighborhood of 20 C than the average temperature in July. We can also predict with high confidence the difference in the average temperature in January between, say, Rochester, NY and Miami, Florida.

(2) The basic response of the climate system to perturbations in radiative forcing caused by things such as an increase in atmospheric greenhouse gases.

Admittedly, just because the predictions of climate change due to the increase in atmospheric greenhouse gases are insensitive to initial conditions does not mean there are not still important uncertainties associated with such predictions, such as uncertainties in the response of clouds to this radiative forcing…or uncertainties regarding thresholds for dramatic shifts in things like ocean currents that could lead to sudden climate shifts, as have occurred in the past.

However, these uncertainties are quite different in nature than the uncertainties due to chaos, i.e., the sensitivity to initial conditions. It is thus not particularly scientifically literate to make the claim that because we can’t predict the weather a few weeks in advance or because we can’t predict whether this spring will be unusually warm or cold or wet or dry in some region, it therefore follows that we can’t predict the seasonal cycle or the response of the climate system to a significant increase in greenhouse gas concentrations.
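The sensitive/not-sensitive distinction above can be illustrated with the classic Lorenz-63 toy system (a sketch only: three ODEs and a crude forward-Euler integrator, nothing like a real GCM): a tiny initial perturbation destroys pointwise predictability, yet a long-run statistic of the attractor barely moves.

```python
def lorenz_mean_z(x, y, z, dt=0.002, steps=500_000, spinup=50_000):
    """Integrate the Lorenz-63 equations with a simple forward-Euler step
    and return the long-run time-mean of z (a crude 'climate statistic')."""
    total = 0.0
    for i in range(steps):
        dx = 10.0 * (y - x)
        dy = x * (28.0 - z) - y
        dz = x * y - (8.0 / 3.0) * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if i >= spinup:
            total += z
    return total / (steps - spinup)

m1 = lorenz_mean_z(1.0, 1.0, 20.0)
m2 = lorenz_mean_z(1.0 + 1e-6, 1.0, 20.0)  # tiny perturbation of the initial state
# The trajectories themselves diverge completely, but the attractor
# statistic is nearly unchanged:
assert abs(m1 - m2) < 1.5
```

Whether the real climate’s statistics are as well behaved as this toy attractor’s is, of course, exactly what the rest of the thread is arguing about.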

This morning on BBC Radio 4’s Today programme, presenter John Humphrys released a highly misleading news bulletin stating, ‘The Met Office does not believe that global warming will be as severe as it had previously predicted.’ He later expanded that, ‘The Met office has revised downwards its projections for climate change through to 2017… (and) the recent slowdown will continue.’ Before we all breathe a collective sigh of relief, Roger Harrabin, the BBC’s Environmental…

sceptical says:
January 8, 2013 at 10:58 am
….And this is why those who claim it will be warmer in Canada in six months are fools…..
>>>>>>>>>>>>>>>>>>>>>>>>>
So tell me: on July 1, 2013, will it be warmer or colder than the average temperature for that day in Smithers, BC, and by how many tenths of a degree?

Environment Canada has a pitiful service record; weather prediction has been a joke in the Arctic since the 1990s.
The EC site brags about Environment Canada’s Science; go and see, it’s quite sad.
Not science, oh no, “EC’s science”, something they created themselves?
Sort of like the science archive they do not have when asked for the science supporting the policies they propose.
On the bright side, they have made it crystal clear how little the taxpayer needs their services.
It’s nice when bureaucrats are this helpful.

richard telford says:
January 8, 2013 at 3:33 am
“This is a particularly clueless argument, in that it has been shown to be bogus many times over. Weather forecasts are an initial value problem. Climate projections are a boundary condition problem. This is well known.

As an imperfect analogy, predicting a toss of a coin from the angular velocity and momentum is difficult – small errors in the initial estimates of the parameters will rapidly escalate and cause errors. In contrast, estimating the distribution of head and tails is much easier.”

If that is so easy, Richard, then why did the IPCC’s models fail even at the relatively simple task of modeling the past 16 years of stagnant temperatures? Did Hansen not adjust the “average global temperature” up enough?

Now, there are a lot of apologetic texts by IPCC climate modelers that explain how totally unimportant local details like precipitation are for the “boundary condition problem” that climate models “solve”…

Again, what do you and those apologists have to show for it? Nothing. Empty words. The models all fail. Do we have to pay you and the other rent-seekers up to the year 2100 until you admit scientific bankruptcy? Or will you, much rather, scurry off silently into the dark of night, never to be seen again?

” It is a false argument. Climate is the average of the weather, so if the weather science is wrong the climate science is wrong.”

What if I say that in six months daily temperatures at my place will be about 20 degrees Celsius higher than they are now?
I know nothing about what the weather will be, but I know enough about my climate to place this forecast with sufficient certainty.
Do you really insist that my 6-month forecast is completely wrong because I can’t run a weather model that long?

You don’t need to simulate the motions of individual particles to study thermodynamics.

No, the problem is not that we must be able to simulate weather with absolute precision to be able to make climate forecasts.
The problem is that we still don’t understand climate. And it may take hundreds of years of observation and research to get to some basic understanding, similar to how it took many decades to reach today’s level of reliability for weather forecasts.

Adrian Kerton says: January 8, 2013 at 3:15 am
Today, 8th Jan at 08.00 on UK Radio 4 news
“The met office says it does not believe global warming will be as severe as it had previously predicted.”
Nothing on the BBC website though.

Of course they have it. See the Met Office Decadal forecast. They don’t go as far as to say it; they are just showing it to you. Compared to their previous forecast, which is gone, but not quite yet. Tricky.

Even more tricky is that they have changed their old forecasts retrospectively to match the facts (white line). That’s what no scientist ever does.

“Who controls the past, controls the future: who controls the present controls the past”

Kasuha says:
January 8, 2013 at 1:26 pm
“What if I say that in six months daily temperatures at my place will be about 20 degrees Celsius higher than they are now?
I know nothing about what the weather will be, but I know enough about my climate to place this forecast with sufficient certainty.
Do you really insist that my 6-month forecast is completely wrong because I can’t run a weather model that long?”

Kasuha, the null hypothesis for the weather in your place is probably that summer is about 20 degrees warmer than winter, so in that regard, you’re predicting what the null hypothesis says.

Like you, climate modelers need to use a statistical approach for their models. The statistical approach breaks down when the number of process instances per grid box becomes small. That’s why their approach cannot correctly approximate the behaviour of large scale processes like large convective fronts.

Richard Telford will tell you that all is well nevertheless, never mind the details, it’s a “boundary condition problem” so you don’t need much accuracy anyway… Well then the science should be available cheaply, if any ole shoddy model can do it; let’s start with paying climate modelers half the minimum wage plus a few food stamps (bring your own X-box).

—————–
Richard, then why did the IPCC’s models fail even at the relatively simple task of modeling the past 16 years of stagnant temperatures?
—————–

Because, obviously, predicting the next few years’ weather is an initial value problem. To do that you need to initialize the models with observational data. Until recently, the models used by the IPCC haven’t even tried this.

I see climate as the average of the weather. From a statistical standpoint weather is akin to the prediction interval, climate is the confidence interval. It has nothing to do with initial values or “boundary” conditions. It’s just how we describe the individual data and summary statistics.

We all know the climate next summer will be hot and muggy in Philly. How hot and how muggy and specifically which days will be the hottest is anyone’s guess.
————–
Exactly. If you treat it as an initial value problem, you have no idea whether the end of next week will be warm or cold, and so it is impossible to predict as far ahead as next summer. As a boundary problem, it is much easier to predict that summer will be hot.

———————–
Pat Frank says:
January 8, 2013 at 10:46 am
richard telford, a specious argument. There will be growth of uncertainty due to initial value errors, combined with propagation of those errors through stepwise calculations that include serially re-initialized intermediate states, all of which employ a model also subject to theory-bias. When the propagated uncertainty exceeds the bounds of the model, the mean expectation value no longer has any physical significance; no predictive value whatever.

Using your coin toss analogy, a climate model prediction of future temperature is like attempting to predict the persistence runs of heads and tails in a sequence of coin tosses. The claim to know specifics of climate after a hundred years is like claiming to know that the last 100 tosses of 10,000 tosses will all be tails.
————————————————
That is exactly what it is not like.

That would be to try to predict the weather on the 8th January 2113. Nobody will be trying to do this for at least 99.9 years. What the IPCC is trying to do is predict what type of weather can be expected in 100 years. That is an entirely different and massively more tractable problem.

This is not to say that initial values are not important for the new decadal predictions, but that in the long term boundary condition uncertainty is much the bigger problem.

Bureaucrats are, by definition, overpaid and generally overqualified filing clerks. They should not be allowed to do anything other than keep the files in order; under no circumstances should they be allowed to “manage” anything, and certainly not anything scientific.

While government scientists are often referred to as bureaucrats, most of the field-science staff are unable to influence opinions in the “boss” level mind. They are also temporary hires with little or no benefits, low pay and little chance of improvement. Most temporary “seasonal” staff are hired at GS-5 or GS-7 rates, which range from about 33 to 41 thousand per year. Because the hire is seasonal, the actual take-home can be roughly a third of that, or in other terms close to the poverty level. Been there, done it, left ASAP. By and large the US Gov’s treatment of its lower echelon staff makes Walmart look like a kindly, charitable, caring company where employees are concerned.

One of the more gruesome traits of government bureaus is that they could be more efficient IF they actually hired staff rather than contracting out work. The work isn’t cheaper because it’s done “outside,” and the results are generally neither better nor worse than government employees would produce. In fact, because of the layers of overhead costs the contractor has to add on just to buffer slow business times, the same job done under contract costs more per hour and frequently is not as thoroughly done, since the contractor low-balled the estimate to begin with. Unhappily, once the work is completed, the analysis carefully done, and the results handed upstream for review, not infrequently comments come back such as “I don’t agree.” “Say [this] instead.” “Use the official government spelling for ….” [No joke there.] Arguing can lead to the door being indicated. Essentially, what happens inside a government bureau often resembles the alimentary process. What goes in is useful. What comes out is often only useful for spreading on gardens after a lengthy composting process.

Essentially Joel Shore claims the only thing you need to predict climate is the TOA forcing. So Joel, why do we need GCMs, why do we need massive computers? The fact is one wouldn’t even need a calculator. You could predict the climate with a slide rule.

Isn’t it nice to know we can fire all those people wasting their time and our money producing what Joel could do all by himself with a slide rule. When do we start?

Weather forecasts aren’t quite the same as climate forecasts because the latter are much worse.

Weather forecasts use models whose forecasts are verified each day; the global climate forecast hasn’t been verified even once yet.

Verified forecasts since the 1970s (weather v climate):

15,349 v 0 (zero)

Climate is weather over a 30-year period, and it is clear there is massive ignorance of how weather will behave in future. Weather patterns greatly depend on the jet stream, so this failure to predict weather includes a failure for climate too.

Joel Shore says: January 8, 2013 at 11:38
“This is clearly seen in climate models…If you perturb the initial conditions, some things are sensitive to this and some things are not. Let’s make a list of what is sensitive and not sensitive to initial conditions —”
===================================
No, no Joel , you have it backwards. AGW types always get confused at this point.
Your observations are supposed to come from nature, and you should test climate models against those observations. If the model fails this test, i.e., by not replicating nature, then you are supposed to modify the climate model. Think about this; it’s an important principle. Let me know if you need any more help in understanding this crucial aspect of scientific procedure.

There are two “redefinition lies” being accepted / used here. One is the assertion that “climate is the long term average of weather” or “the 30 year average of weather”. It is not.

Yes, the “climate science” folks have tried to hijack the term and turn it into that (for their own ends). It’s still not true.

Climate is a geography effect. Latitude. Altitude. Distance from water. Land form. Those geologic / geographic effects determine your climate. The Mojave Desert was still a desert when it was wetter and drier, hotter and colder over the years. The Mediterranean Climate zone was still a Mediterranean Climate zone during the Roman Warm Period and during the Dark Ages / Migration era cold period. (And the Greek cold period and the Modern Warm Period and…)

It is this basic confounding of what is really climate with the “30 year average of weather” that is the heart of the basic lie of “Climate Change”. As there are known 60 year cycles in weather (PDO / AMO etc.) a “30 year average of weather” will be ramping up for 30 years, then down for 30 years. Wash and repeat. That’s “weather change” folks, not “climate change”.
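The arithmetic behind this ramping claim is easy to check. Here is a Python sketch that reduces the 60-year cycle to a pure sine wave (an idealization; real PDO/AMO cycles are neither pure nor exactly 60 years): consecutive 30-year averages of a trendless series ramp up and down, while 60-year averages of the very same series sit at zero.

```python
import math

# Toy "weather": a trendless 60-year oscillation, one sample per year.
weather = [math.sin(2 * math.pi * year / 60) for year in range(240)]

def window_mean(series, start, length):
    return sum(series[start:start + length]) / length

# Consecutive 30-year averages alternate between warm-looking and cool-looking.
means_30 = [window_mean(weather, s, 30) for s in (0, 30, 60, 90)]

# 60-year averages of the same series all come out at (numerically) zero.
means_60 = [window_mean(weather, s, 60) for s in (0, 60, 120)]
```

With these numbers the 30-year means come out near ±0.64 while the 60-year means are zero to rounding error: a “trend” manufactured purely by the choice of averaging window.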

Don’t drink the Kool-Aid; don’t swallow the lie.

(One can make a case for adding the geologic change of precession, obliquity, etc. that causes the Ice Age cycle, which does shift climate zone enough to matter. I’ll worry about that fine point in about 2000 years… for all practical purposes it can be ignored during the lifetime of any one civilization.)

Once you realize that, and that the fractal nature of weather makes it ‘self similar’ at different scales, you see why shifting the ‘time scale’ of your weather prediction from 1 month to 1 decade doesn’t improve the accuracy.

The other “polite lie” is the idea that the TOA is a radiative boundary. This is enforced by the notion of a ‘pause’ in the word “tropopause”, as though things are quiet and only dead-air radiative transfer is possible. It isn’t.

That space is running a Cat 2 Hurricane force wind sideways between the troposphere and stratosphere. (It’s about half that just a few thousand feet to either side.) There is also a large descending air mass at the cold pole (from the stratosphere level of altitude) in the polar vortex. That air had to get “up there” somehow. Mass flow across the “pause” happens. A lot of it.

There are also intermediate-zone descending air masses where various weather bands have rise / fall interfaces (Hadley cells et al.).

There is no PAUSE at the tropopause, there is strong wind, mass transfer, and mixing. It is NOT a radiative only boundary. Arguing about how much of which light spectrum crosses the radiative boundary is a “MU!” question. “The question is ill formed!”

The first step to scientific accuracy (and freedom) is to recognize when the definitions have been cooked and get the more correct ones.

Once you buy into the ‘it is all radiative at that quiet stationary tropopause’ you end up sucked into endless useless dead end bickering over how that non-real place does that non-real thing and no one can ever be right about it. It’s an ‘angels and pins’ world then.

Once you realize it is a very very fast wind zone mass transport mixing zone, it becomes much easier to see why “IR transfer” doesn’t matter at that point.

So stick to Köppen climate zones for the definition of real climate and avoid the ‘pause’ trap. The world is then much easier to “get right”.

(1) The boundary conditions that are prescribed are not “future surface conditions”. They are the basic top-of-the-atmosphere energy balance conditions involving radiation emitted and absorbed by the Earth system.

NO. NO. NO. In addition to TOA, you must have surface boundary conditions for ocean and land boundaries (e.g. temperature or heat flux, velocities, mass/species fluxes etc.). How fast or slow the atmosphere/ocean heats up or cools down is critically dependent on this. As the numerical solutions are time marched, how are you setting these boundary conditions? Are they fixed? Do they change? If they change, what assumptions are being made? Do you know what they are decades out?

(2) The point is simply this: Although the detailed evolution of the weather…and even of the climate (in the sense of whether it will be warmer or colder or wetter or drier than average) over, say, monthly to yearly timescales…is sensitive to the initial conditions, the future climatic conditions in response to increases in greenhouse gases is not.

You are assuming that the differential equations and initial/boundary conditions that supposedly model the climate system are accurate representations of the system. The only evidence I’ve seen of any accuracy at all is in tuned hindcasts. Of course, some research organizations such as NASA/GISS don’t even document for us (1) what differential equations they are solving and (2) what numerical methods are being applied and their specific formulations (their code is pretty bad too). GFDL and NCAR are much better in this respect. I’m sure the European models are better too, although they won’t release their source code as far as I know.

(3) This is clearly seen in climate models…If you perturb the initial conditions, some things are sensitive to this and some things are not. Let’s make a list of what is sensitive and not sensitive to initial conditions —

Please cite references…do you have comparison of long term predictions for the temperatures at a specific place like Rochester, NY versus data? That would be interesting.

(4) It is thus not particularly scientifically literate to make the claim that because we can’t predict the weather a few weeks in advance or because we can’t predict whether this spring will be unusually warm or cold or wet or dry in some region, it therefore follows that we can’t predict the seasonal cycle or the response of the climate system to a significant increase in greenhouse gas concentrations.

Actually, even the Farmer’s Almanac can predict the seasonal cycle. But you can’t predict with any certainty the magnitudes of temperature or precipitation changes 1, 2, 3 years out, or for that matter decades out, with a computer – they simply have NOT demonstrated any skill.

In any case, my thesis is still valid, namely that the climate problem as formulated and solved is an initial value problem, and it was pretty dumb of Richard Telford to say otherwise.

Clearly, “if the weather science is wrong” then “the climate science is wrong” when “Climate is the average of the weather”.

And the boundary conditions are not known when the science is wrong.

I expanded on that by using the ‘coin toss’ analogy you had used in your post.

At January 8, 2013 at 5:49 am you mentioned but evaded my rebuttal of your fallacious assertion.

Subsequently, at January 8, 2013 at 2:48 pm, you repeated your assertion, saying:

That would be to try to predict the weather on the 8th January 2113. Nobody will be trying to do this for at least 99.9 years. What the IPCC is trying to do is predict what type of weather can be expected in 100 years. That is an entirely different and massively more tractable problem.

This is not to say that initial values are not important for the new decadal predictions, but that in the long term boundary condition uncertainty is much the bigger problem.

As my post – which you evaded – explained, it is not relevant whether or not “long term boundary condition uncertainty is much the bigger problem”. This is because the boundary conditions cannot be defined and, therefore, their change cannot be determined.

Simply, climate modelling is based on the false premise – which you promote – that the boundary conditions of an inadequately defined system are known. They cannot be known when the system is little understood.

No, no Joel , you have it backwards. AGW types always get confused at this point.
Your observations are supposed to come from nature, and you should test climate models against these observations….

How exactly do you think that Edward Lorenz originally discovered the phenomenon of chaos and the hypothesis that Earth’s atmosphere shows chaotic behavior? ( http://en.wikipedia.org/wiki/Lorenz_system ) Are you under the impression that Lorenz ran an experiment where he made an exact duplicate of the Earth, perturbed the initial conditions and then compared the weather on that Earth II to the weather on our Earth?

That is not in fact what he did. He wrote down some equations that, by modern standards, could at best be described as a cartoon of atmospheric dynamics (much, much, much cruder than modern climate models), and he found that when he perturbed the initial conditions in this model and ran the program for a while, he got very different results than with the original initial conditions.
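That experiment is easy to reproduce. The following Python sketch (a toy forward-Euler integration with the standard Lorenz 1963 parameters; the step size and run length are arbitrary choices) perturbs a single initial coordinate in the sixth decimal place and tracks how far the two trajectories drift apart.

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One crude forward-Euler step of the Lorenz (1963) system.
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-6)  # perturbation in the sixth decimal place

max_divergence = 0.0
for _ in range(3000):  # roughly 30 model time units
    a, b = lorenz_step(a), lorenz_step(b)
    max_divergence = max(max_divergence, abs(a[0] - b[0]))

# The initially indistinguishable trajectories end up on entirely
# different parts of the attractor.
```

This is the whole of Lorenz’s qualitative result in miniature: a perturbation far below any plausible measurement precision grows until it is as large as the attractor itself.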

And, since what we are interested in is whether perturbation of initial conditions in our models affects the result for the change in climate due to a rise in greenhouse gases, the best way to see if the results change is to run the models with perturbed initial conditions and see what happens. (We can also use our knowledge, gained through scientific study, of what sort of physical problems tend to show extreme sensitivity to initial conditions and what don’t.)

[I suppose one could hypothesize that in the real world, the final climate in response to a rise in greenhouse gases would depend very sensitively on the initial conditions even though it doesn’t in the models, but I know of no evidence to support that hypothesis…and the basic understanding that we have of under what situations chaotic behavior occurs leads us to believe otherwise.]

NO. NO. NO. In addition to TOA, you must have surface boundary conditions for ocean and land boundaries (e.g. temperature or heat flux, velocities, mass/species fluxes etc.). How fast or slow the atmosphere/ocean heats up or cools down is critically dependent on this. As the numerical solutions are time marched, how are you setting these boundary conditions? Are they fixed? Do they change? If they change, what assumptions are being made? Do you know what they are decades out?

We know in principle what sorts of relations have to be satisfied at the boundaries, e.g., because of conservation of mass, conservation of energy, etc. (And, of course, if we have a coupled atmosphere-ocean model then the atmosphere-ocean boundary is not an external interface anyway.) And, even if there are uncertainties associated with the boundary conditions or various processes, that does not mean that the problem in principle has the sort of sensitivity to initial conditions that is the characteristic of chaotic systems. My whole point is that there are two different questions:

(1) Are there uncertainties in regards to modeling the future of the climate system under an increase in greenhouse gases?

(2) Is the problem intrinsically intractable because of the chaotic nature of the system?

The answer to the first question is Yes. But, the answer to the second question is No.

You are assuming that the differential equations and initial/boundary conditions that supposedly model the climate system are accurate representations of the system.

First of all, that is not a binary question. The question is always one of degree, i.e., how accurate the representation is. Making it a binary question assures us the answer is always NO, as it is for any model of any system.

Second of all, we know from the success of numerical weather prediction and other scientific evidence that the basic equations are accurate representations of the system to a good degree.

Third of all, the model does not have to be completely accurate in all respects in order to answer some basic questions about the system like which sort of problems will show extreme sensitivity to initial conditions and which sorts of problems will not.

Actually, even the Farmer’s Almanac can predict the seasonal cycle. But you can’t predict with any certainty the magnitudes of temperature or precipitation changes 1, 2, 3 years out, or for that matter decades out, with computer – they simply have NOT demonstrated any skill.

The point is simply this: The seasonal cycle is an example of something that can be predicted that does not show extreme sensitivity to initial conditions. A problem like how the climate will change in response to increases in greenhouse gases (over periods of time long enough that the effect of this perturbation dominates over other fluctuations) is also of this type. And, indeed, when the modelers perform an ensemble of different runs with different initial conditions, they find that although the initial conditions affect the fluctuations up and down in the global temperature (and hence the climate over the next few years), they do not significantly affect the resulting change in global temperature over time periods long enough that the change is dominated by the radiative forcing due to the increase in greenhouse gases.
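Shore’s ensemble observation can be caricatured in a few lines of Python. This is a deliberately crude stand-in (an assumption-laden toy, not a GCM): a logistic map plays the role of chaotic internal variability and a linear ramp plays the role of radiative forcing.

```python
# Toy ensemble member: chaotic fluctuations riding on a prescribed forcing ramp.
def run_member(x0, steps=5000, forcing_rate=0.0002):
    x, temperature = x0, 0.0
    for t in range(steps):
        x = 3.99 * x * (1.0 - x)                          # chaotic in x0
        temperature = forcing_rate * t + 0.1 * (x - 0.5)  # ramp + fluctuation
    return temperature

# Five members whose initial conditions differ in the ninth decimal place.
ensemble = [run_member(0.2 + 1e-9 * k) for k in range(5)]
```

Each member’s final fluctuation is different (the logistic map has long since scrambled the initial perturbation), yet every member ends within the fluctuation amplitude of the forced value, about 1.0. That is the pattern Shore describes; whether real climate models earn the analogy is exactly what is in dispute in this thread.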

In any case, my thesis is still valid, namely that the climate problem as formulated and solved is an initial value problem, and it was pretty dumb of Richard Telford to say otherwise.

Do you think that the problem of determining the seasonal cycle is also an initial value problem? What evidence do you have to support your view in light of the evidence to the contrary, i.e., the evidence I have talked about above that the predictions that the models make for future warming are not sensitive to the initial conditions (whereas the predictions of the weather, say, a few weeks out or of the climate fluctuations over the next few years are seen to be sensitive to these initial conditions)?

I am really puzzled why these simple notions are so controversial to you guys. It is not as if saying that the predictions are not extremely sensitive to initial conditions means that the predictions are necessarily accurate. It just means that the response is not inherently unpredictable in the way that things that depend sensitively on the initial conditions are.

Andrew Weaver, Canada Research Chair in climate modelling at the University of Victoria says:
“Decadal predictability today is kind of what seasonal predictability was 15 years ago. Now we routinely look at El Nino forecasts, we routinely look at seasonal forecasts, and they’re very good…”

joeldshore says: January 9, 2013 at 6:32 am:
===============================
I, of course, am not a climate modeler. My scruples would not allow it. This Lorenz perhaps was fundamental to present day climate modeling, which has built on and improved what Lorenz presented forty years ago, presumably. Well, forty years and umpteen $ billion later, I think that we are in a position to draw conclusions about climate models. Time and again, one sees verification of the climate models put in terms of… what the models show, such as in your last comment to rebut me. Never before had I imagined that scientists were capable of such fatuity. The trouble is that you only talk to each other, and thus the illusion is maintained. You will doubtless go back to solicit more support from your modeling confraternity and return to add yet another card to the edifice. Look closely, Joel: that house of cards is leaning and on the verge of collapse.

Despite my assertions that climate is weather over a 30-year period, this was just the general idea and not necessarily the correct one.

I do agree that because 30 years is only half of the very noticeable 60-year cycle, it can only be weather, not climate. Roughly 30-year alternating cooling and warming periods are observed in the temperature records, so a span covering only the warming or only the cooling is half of a natural cycle and shouldn’t be called climate. The minimum period called climate should therefore be 60 years, covering both the warming and cooling parts of the shortest observed natural cycle.

For those interested in an executive level description of how climate models work, this is a great resource:

Climate Models: A Primer

William O’Keefe
Jeff Kueter
2004

I haven’t read that in detail and it may give some good information but readers should be aware that it is written by two people whose main credentials seem to be that they are the President and the Executive Director of an advocacy group, not scientists who actually work with climate models.

I, of course, am not a climate modeler. My scruples would not allow it. This Lorenz perhaps was fundamental to present day climate modeling, which has built on and improved what Lorenz presented forty years ago, presumably.

Lorenz’s work was not only fundamental to numerical weather prediction and climate modeling. It was the first real identification of chaos, i.e., extreme sensitivity to initial conditions in any system…period. See here http://en.wikipedia.org/wiki/Edward_Norton_Lorenz for Wikipedia’s bio on him.

[And, by the way, I am not a climate modeler either. I am a physicist who has done modeling of a variety of physical systems.]

Time and again, one sees verification of the climate models put in terms of… what the models show, such as in your last comment to rebut me.

No…The models are not verified in terms of what the models show. However, making models of physical systems is fundamental to our understanding of these systems. And, in this particular case, one can test directly whether or not the predictions that the models make are or are not extremely sensitive to the initial conditions by simply running the models with perturbed initial conditions. And, the answer is that certain things do show such sensitivity to the initial conditions and other things (like the seasonal cycle or the basic global temperature response of the climate over long enough periods of time to a significant radiative forcing due to increasing greenhouse gases) do not show such sensitivity.

And, our understanding of the general situations in which one sees extreme sensitivity to initial conditions and when one does not, gleaned by both experimental and theoretical studies of chaos, lead us to understand why this is true and why we think it is true not only within the models but also in the real world.

I also liked Sceptical’s post. (Mime casting a fishing line, getting a strike and reeling in.) Some people ain’t got no sensayuma.

If you “liked Richard Telford’s suggested solution” then your dabbling was inadequate and you failed to read my explanation (at January 8, 2013 at 4:32 am) of why Telford’s nonsense is NOT a solution.

Please note that Telford mentioned but evaded addressing my rebuttal (at January 8, 2013 at 4:32 am) of his nonsense.

Joel: “Do you think that the problem of determining the seasonal cycle is also an initial value problem? ”

Me: Yes it is if you use a climate model! Please look at the differential equations, Joel!! If you are not qualified to do that, please let us know. It really demonstrates that you know very little about basic mathematical physics and numerical methods.

Now, you do have external solar/planetary boundary conditions that will ensure that a time-marched numerical solution will behave “something like” the four seasons. But there is NO demonstrated predictive skill with the current crop of models for getting the magnitude and spatial variations of the climate right even a few years after being initialized. Then there are the numerical issues of error growth and nonlinear instability…

Joel: “What evidence do you have to support your view in light of the evidence to the contrary, i.e., the evidence I have talked about above that the predictions that the models make for future warming are not sensitive to the initial conditions (whereas the predictions of the weather, say, a few weeks out or of the climate fluctuations over the next few years are seen to be sensitive to these initial conditions)?”

You can certainly develop a simplified numerical model of the ocean and atmosphere that shows little sensitivity to initial conditions. That doesn’t make it a boundary value problem, because a boundary value problem in time assumes that you know the future, and you don’t – ESPECIALLY AT THE LAND AND OCEAN BOUNDARIES!

By the way, if the solutions are insensitive to initial conditions, why don’t you ask your modeling friends to initialize their models with an atmospheric temperature of 50 K. Or 5000 K. Let’s see how that goes…

Joel: “I am really puzzled why these simple notions are so controversial to you guys. It is not as if saying that the predictions are not extremely sensitive to initial conditions means that the predictions are necessarily accurate. It just means that the response is not inherently unpredictable in the way that things that depend sensitively on the initial conditions are.”

Me: It is my contention that if the models do not represent the system accurately (through poor numerics and/or physical modeling), then the solutions are of little use (outside of being pure money-wasting academic exercises). I’m glad you admit here that the predictions are not accurate.

Again – My thesis still holds (regardless of Joel Shore’s protests) – namely, that the current climate models are formulated and solved as initial value problems. Whether they are or aren’t sensitive to initial conditions depends on the models/numerical formulations in question.

When modeling weather or climate, “initial conditions” do not exist only in the model’s first iteration. Each subsequent iteration essentially re-initializes using the prior results as the new initial condition. Any inaccuracy in the initial conditions is thus propagated forward with a compounding effect. That is why models become progressively less accurate over time, until they are no more accurate than a random walk.

All models function within the preset constraints of the model’s boundary conditions, and not all “boundary conditions” are created equal. Some, like the Second Law, have a very low degree of freedom. Others, like albedo, lapse rates, clouds, and ocean heat capacity, are educated guesses at best (biased towards positive feedback) with substantial error bars. Thus weather and climate models can be ‘tuned’ to backcast beautifully, and yet lose any meaningful predictive value as the model runs pile up over time.

If error propagation were random, the compounding effect would be offset by self-canceling members within the set, and your model ensembles might be accurate for a pretty good long run. Not surprisingly, in climate models the output errors are predominantly in the direction of increasing temperature, because the models are built to presume this result. The ‘tipping points’ or ‘runaway warming’ scenarios in these models merely reflect compounded error propagation in the positive direction. Once the scenario appears sufficiently catastrophic, it’s time to turn off the computer and declare victory.
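The contrast drawn above between one-sided and self-canceling error propagation can be sketched with a trivial accumulation model in Python (the per-step error sizes are arbitrary assumptions, chosen only to make the scaling visible):

```python
import random

random.seed(1)  # fixed seed for reproducibility

n = 10_000
biased_drift = 0.0   # every step's error shares a small one-sided bias
random_drift = 0.0   # every step's error is zero-mean

for _ in range(n):
    biased_drift += 0.01 + random.gauss(0.0, 0.01)
    random_drift += random.gauss(0.0, 0.01)

# biased_drift grows roughly like n * bias;
# random_drift performs a random walk and grows only like sqrt(n) * 0.01.
```

The toy makes the commenter’s scaling point: a systematic per-step bias accumulates linearly, while zero-mean errors largely cancel; it does not, by itself, establish what the error structure of any real climate model is.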

By the way, if the solutions are insensitive to initial conditions, why don’t you ask your modeling friends to initialize their models with an atmospheric temperature of 50 K. Or 5000 K. Let’s see how that goes…

The question in regard to chaotic behavior is how the system behaves for SMALL perturbations to the initial conditions, not arbitrarily large ones. Systems that are not chaotic can still have multiple basins of attraction (and hence hysteresis) or even regions of instability.
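Shore’s distinction is easy to make concrete. The sketch below uses a deliberately non-chaotic toy system (a one-dimensional gradient flow, chosen purely for illustration) that nonetheless has two basins of attraction: a small perturbation settles back where it started, while a large one flips the system to the other attractor.

```python
def settle(x0, dt=0.01, steps=10_000):
    # Euler-integrate dx/dt = x - x**3: not chaotic, but it has stable
    # fixed points at x = +1 and x = -1, i.e. two basins of attraction.
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3)
    return x

near = settle(1.0 + 1e-6)  # small perturbation: settles back to +1
far = settle(-0.2)         # perturbation past the basin boundary at 0: goes to -1
```

So “insensitive to small perturbations” and “insensitive to any perturbation” are different claims, which is why initializing a model at 50 K or 5000 K tests the wrong thing.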

I’m glad you admit here that the predictions are not accurate.

That is not what I said. I said: “It is not as if saying that the predictions are not extremely sensitive to initial conditions means that the predictions are necessarily accurate.” My point is simply that one could still claim that the models’ predictions of future warming are inaccurate without clinging to the incorrect and demonstrably wrong notion that the predictions of this future warming show extreme sensitivity to initial conditions. I did not say this claim of inaccuracy would be correct…but it would at least have the virtue of being not so obviously wrong. The problem with the “AGW skeptic” movement is that there often seems to be a strategy of throwing as much mud as possible and hoping that some of it sticks, rather than picking your battles and restricting arguments to issues where there is at least some genuine uncertainty, in order to maintain some credibility.

Again – My thesis still holds (regardless of Joel Shore’s protests) – namely, that the current climate models are formulated and solved as initial value problems. Whether they are or aren’t sensitive to initial conditions depends on the models/numerical formulations in question.

And, here lies the gist of the whole disagreement: Yes, the models take initial conditions but the point is that some things are very sensitive to those initial conditions and other things are not but are instead sensitive to the “boundary values”, i.e., the radiative forcings. And, it is easy to see which is which simply by experimenting with the model. The seasonal cycle and the response of the climate system over multidecadal scales to increases in greenhouse gases are two things that are not very sensitive to the initial conditions.

Look, there is plenty of discussion of this both on blogs of actual climate scientists ( http://www.easterbrook.ca/steve/?p=1257 ) and in the scientific literature ( http://www.met.rdg.ac.uk/~mat/predict/ic.pdf ). That latter paper is particularly nice because it deals with the fact that there are some questions about climate (such as the evolution of climate on time scales of months to a few years, e.g., ENSO and the like) that are dominated by the initial conditions and there are other questions (like the evolution of climate in response to a significant change in greenhouse gas concentrations) that are not but are instead dominated by the boundary conditions (i.e., the radiative forcing).

“My point is simply that one could still claim that the models predictions of future warming are inaccurate without clinging to the incorrect and demonstrably wrong notion that the predictions of this future warming shows extreme sensitivity to initial conditions.”

OK. To summarize, Joel – it may be true that predictions of future “warming” using a particular climate model are insensitive to the initial conditions. I can agree with that. However, the amount of warming (or cooling) may be quite inaccurate. I agree with that as well. It is the latter that disturbs me most about the warmists’ fanatical insistence on, and certainty about, catastrophic global warming.

And it’s unfortunate that Collins, in the paper you link to above, repeats the misnomer “boundary value problem” to describe Lorenz’s predictability problem of the second kind (Lorenz never uses the term “boundary value problem” in his 1975 report).

And, so – again – my thesis stands, namely that climate models, as formulated, are solved as INITIAL VALUE PROBLEMS. It is mathematically incorrect to state otherwise.

“And, the answer is that certain things do show such sensitivity to the initial conditions and other things (like the seasonal cycle or the basic global temperature response of the climate over long enough periods of time to a significant radiative forcing due to increasing greenhouse gases) do not show such sensitivity.”
================================

Let me see if I understand you: When a climate model incorporates AGW theory into a properly conceived set of algorithms, the model demonstrates the premise that anthropogenic CO2 contributes to warming. Is this what you mean to say?

And, so – again – my thesis stands, namely that climate models, as formulated, are solved as INITIAL VALUE PROBLEMS. It is mathematically incorrect to state otherwise.

Nobody has claimed that the models are not run by starting them with certain initial conditions. The point is that certain things turn out not to be very sensitive to those initial conditions and hence the claim that the effect of changes in radiative forcings can’t be predicted for the same reason that you can’t predict the weather weeks in advance (or can’t predict if this particular spring will be above or below average in some particular region) is incorrect. That is what people mean when they say it is a boundary-value problem: that the results are dominated by the radiative forcings.

mpainter says:

Let me see if I understand you: When a climate model incorporates AGW theory into a properly conceived set of algorithms, the model demonstrates the premise that anthropogenic CO2 contributes to warming. Is this what you mean to say?

First of all, I think it is a mischaracterization to say that the models incorporate “AGW theory”. They simply incorporate the known laws of radiative physics and the known radiative properties of components of the atmosphere such as CO2. But the larger point is that we are not talking about demonstrating or not demonstrating that premise. What we are talking about is addressing the question of whether or not the predictions that the models make regarding the effect of increases in greenhouse gases are very sensitive to the initial conditions. And the answer, verified by actually perturbing the initial conditions a little bit in the model and running it again, is that the basic predictions are not sensitive to the initial conditions.
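The perturbation experiment described here can be sketched with a toy zero-dimensional relaxation model — purely illustrative (the forcing value and relaxation time are made-up numbers, not GCM physics), but it shows how a forced equilibrium can be insensitive to the initial state:

```python
def relax_to_forcing(T0, forcing=2.0, tau=5.0, dt=0.1, n_steps=1000):
    """Integrate dT/dt = (forcing - T) / tau from initial temperature T0.

    The equilibrium is set by the forcing (the 'boundary value');
    the memory of the initial condition decays away exponentially.
    """
    T = T0
    for _ in range(n_steps):
        T += dt * (forcing - T) / tau
    return T

# Two runs from very different initial conditions...
warm = relax_to_forcing(T0=10.0)
cold = relax_to_forcing(T0=-10.0)
print(warm, cold)  # both end up at the forcing-set equilibrium of 2.0
```

In this toy system the long-run answer is dominated entirely by the forcing term; whether a given quantity in a full GCM behaves this way or chaotically is exactly what the perturbed-restart experiments are meant to probe.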

“Thx for those posts. The boundary problem argument indeed assumes the mechanisms are given, known. No boundary analysis will either reveal or test them.”

Yes – for some bizarre reason, people like Richard Telford and Joel Shore insist on the misnomer “boundary value problem” when the governing differential equations, as documented by the code authors themselves, reveal otherwise. Unfortunately, this mischaracterization is rife in some of the climate literature, but I suppose they feel free to make up names even when they’re wrong.

Environment Canada supplies the raw weather data to The Weather Network to use in their various media presentations. One thing that has always annoyed me about their presentations is the excessive use of the term ‘NORMAL’, as in degrees above or below normal. Normal is merely that day’s average temperature from the years where, and if, accurate records exist. As Anthony has pointed out, the temperature record has been subjected to so much revision, often with a particular goal in mind, that its effective use has become somewhat subjective. The use of the word ‘normal’ has become a distraction and a prop in an attempt to sometimes show trends where none exist.

“That is what people mean when they say it is a boundary-value problem: that the results are dominated by the radiative forcings.”

This is incorrect terminology (from a strict mathematical definition) and it is unfortunate that some in the climate science community have adopted it. Moreover, the results of certain models may be dominated by radiative forcings, but that is only because the models have been formulated that way.

Again – all GCMs are solved as INITIAL VALUE PROBLEMS. To suggest otherwise is clearly incorrect (or an attempt to be deliberately misleading). [sigh]

I note your claim that the AGW theory is not incorporated into the GCM’s. Those of the climate modeling confraternity might be surprised to learn that.

For the record, please re-iterate that claim.

I think I stated what I meant clearly in that original post https://wattsupwiththat.com/2013/01/08/wrong-prediction-wrong-science-unless-its-government-climate-science/#comment-1195400 and don’t know how to state it more clearly. There is no special “AGW theory” to incorporate into the climate models. They simply incorporate what is known regarding the radiative behavior of greenhouse gases (and aerosols and …) and the laws of radiative transfer…and, of course, laws of convective transport and the like. (And, there are, of course, various approximations and parametrizations, most notably involving clouds, since these can’t be well-resolved at the resolution of the models.)

That increases in greenhouse gases lead to significant warming is a prediction of the models, that is, it is something that emerges from the models given the physics that goes into them.

Frank K. says:

This is incorrect terminology (from a strict mathematical definition) and it is unfortunate that some in the climate science community have adopted it.

Fine…Maybe it is slightly sloppy from a strict mathematical point-of-view. But, it makes the important point which has to be made, which is that there is a difference in what aspects of the problem dominate the results when you look at different things. For the seasonal cycle and the long-term response to increases in greenhouse gases, the boundary conditions dominate…and there is not great sensitivity of this to initial conditions. Therefore, statements like “We can’t predict the weather 2 weeks in advance, how can we possibly predict what happens to the climate a century from now?” or “We can’t predict whether this spring will be cooler or warmer than average in some regions, so how can we possibly predict what happens to the climate a century from now?” are misleading because they ignore this difference.

Moreover, the results of certain models may be dominated by radiative forcings, but that is only because the models have been formulated that way.

What does that mean? Are you saying that you think that the Earth’s climate is insensitive to the (top-of-the-atmosphere) radiative forcings?

“Fine. Maybe it is slightly sloppy from a strict mathematical point-of-view. But, it makes the important point which has to be made, which is that there is a difference in what aspects of the problem dominate the results when you look at different things.”

I’m glad, Joel, that you finally wish to use the correct terminology (and no it is NOT “slightly sloppy” – it is flat out wrong). Of course, mangling correct mathematical terminology in a sloppy manner like this is not unknown to climate science.

And, yes, boundary conditions can dominate an initial value problem if you formulate them to do so. You still need to prescribe the boundary conditions as functions of time, however. Some are well known (such as incoming solar radiation which controls the seasonal cycles) – and some not so much (albedo, surface absorption, cloud processes, interaction with the biosphere, etc.). Albedo, for example, is a function of the solution (and hence an unknown function of time).

“What does that mean? Are you saying that you think that the Earth’s climate is insensitive to the (top-of-the-atmosphere) radiative forcings?”

No, but the interaction of the greenhouse effect with other processes (clouds, atmospheric dynamics, ocean circulation) is not so simple, such that the formulation and coupling of these various processes in the GCM will have a definite impact on the solutions, and more importantly the magnitude of the response of, say, global SAT (among other parameters) to increases in CO2. And this has everything to do with models predicting a small amount of warming versus catastrophic warming. And catastrophic global warming is what the misguided (and highly compensated) scientists like Hansen and Trenberth are trying to peddle, like snake oil, to our society.

mpainter says:
January 8, 2013 at 10:51 am
Tim Ball says: January 8, 2013 at 9:23 am
The important difference between the two sectors is accountability
==========================
mpainter says: With your finger you have touched the nub of the problem. Even elected officials are not held fully accountable.
====================================================
Me: I’m reminded of what Al Gore said when he was caught violating campaign laws by (I think it was) making fund raising calls from the White House phone, “There was no controlling legal authority.”

Whatever your political leanings, and even if a candidate leans the same way, vote only for those who have their own internal “legal authority”. Integrity matters.

January 11, 2013 at 12:01 pm
There is no special “AGW theory” to incorporate into the climate models.
================================
But in fact, climate models are AGW theory elaborated into algorithms.

Thank you for your excellent post at January 11, 2013 at 1:03 pm, which details the issue I have repeatedly stated in this thread, first at January 8, 2013 at 4:32 am.

To provide emphasis to them, I copy this selection of your important points.

boundary conditions can dominate an initial value problem if you formulate them to do so. You still need to prescribe the boundary conditions as functions of time, however. Some are well known (such as incoming solar radiation which controls the seasonal cycles) – and some not so much (albedo, surface absorption, cloud processes, interaction with the biosphere, etc.). Albedo, for example, is a function of the solution (and hence an unknown function of time).

and

but the interaction of the greenhouse effect with other processes (clouds, atmospheric dynamics, ocean circulation) is not so simple, such that the formulation and coupling of these various processes in the GCM will have a definite impact on the solutions, and more importantly the magnitude of the response of, say, global SAT (among other parameters) to increases in CO2. And this has everything to do with models predicting a small amount of warming versus catastrophic warming.

Frank K. First, I want to remind you how this whole discussion started. Richard Telford noted that the following statement by Tim Ball was not correct:

Everyone knows that regional weather forecasts are notoriously unreliable, especially beyond 48 hours. This fact weakened the credibility of the IPCC predictions with the public from the start. Some supporters of the IPCC position tried to counteract the problem by saying that climate forecasts were different from weather forecasts. It is a false argument. Climate is the average of the weather, so if the weather science is wrong the climate science is wrong.

Richard then went on to explain:

As an imperfect analogy, predicting a toss of a coin from the angular velocity and momentum is difficult – small errors in the initial estimates of the parameters will rapidly escalate and cause errors. In contrast, estimating the distribution of head and tails is much easier.

And, in fact, Richard was right in this. One could nitpick about how one wants to discuss these two issues…in terms of boundary conditions vs initial conditions or in terms of predictability of the first and second kind. The fact remains that Tim Ball’s argument was completely bogus.
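Richard’s coin-toss analogy is easy to check numerically — a quick Monte Carlo sketch (the seed is an arbitrary choice for reproducibility):

```python
import random

random.seed(0)  # arbitrary seed so the run is reproducible

def fraction_heads(n):
    """Toss a fair coin n times; return the fraction of heads."""
    return sum(random.random() < 0.5 for _ in range(n)) / n

# Any single toss is unpredictable, but the *distribution* is easy:
# the fraction of heads converges toward 0.5 as n grows.
for n in (10, 1000, 100_000):
    print(n, fraction_heads(n))
```

This is the analogy’s content in miniature: predicting one toss from its initial conditions is hopeless, while the statistical aggregate is predictable to within sampling error of order 1/√n.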

Of course climate is an initial value problem. If it is 43 in Toronto today as compared to 10, this will make all the difference in determining if the average temperature will be warmer or colder in Toronto Canada during the summer months as opposed to January. The initial temperature today is much more important in determining seasonal averages than the known changing of the seasons.

Gail Combs, would you like to make a wager as to whether the average temperature in Smithers is higher in July 2013 or January 2013? I’ll wager on July being higher no matter the forecast for the next 5 days and no matter the initial conditions today.
REPLY: wagers can only be conducted on WUWT by two persons, both with verifiable names displayed on the blog. You’ll have to upgrade from anonymous coward status to qualify – Anthony

There is no special “AGW theory” to incorporate into the climate models.
<<<<<<<<<<<<<<<<<<<<<<<<<<<<>>>>>>>>>>>>>>>>>>>>>>>>>>
Joel, for the sake of expediency you flatly assert that there is no Theory of Anthropogenic Global Warming, simply to win a point in an argument. Not a smart move.

Your fellows in the movement might dispute such a sweeping negation of their beliefs, because here you have put all of the precious eggs into one basket: the failed climate models.

Joel, for the sake of expediency you flatly assert that there is no Theory of Anthropogenic Global Warming, simply to win a point in an argument. Not a smart move.

Your fellows in the movement might dispute such a sweeping negation of their beliefs, because here you have put all of the precious eggs into one basket: the failed climate models.

You risk being branded as a “revisionist”, and worse.

You are misunderstanding or misrepresenting what I said. What I have said is that the models incorporate the known physics and what emerges from the models is the prediction that there will be significant rises in global temperature from an increase in non-condensable greenhouse gases. I did not say that there is no theory of AGW…but what I am saying is that this theory is formulated / supported on the basis of both empirical evidence and evidence from climate models, which are mathematical representations of what we know about the atmosphere, oceans, land surface, … and their interactions.

Your claim is that for all ~20 major climate models that are out there, AGW theory is somehow put into them rather than that support for it emerges from them. So, why don’t you give us specific examples of things that are put into the climate models that you classify as “incorporat[ing] AGW theory” into the models?

To give you a concrete analogy, consider numerical weather forecasting models (which are close relatives of climate models). These models exhibit chaos, that is, extreme sensitivity to initial conditions, such that slightly perturbed initial conditions lead to very different weather predictions when you run the model out for a few weeks. However, I would not say that “chaos theory is incorporated into the models”. Rather, what I would say is simply that the understood physics is put into the models and what emerges supports chaos theory.

Frank K. First, I want to remind you how this whole discussion started. Richard Telford noted that the following statement by Tim Ball was not correct:

Everyone knows that regional weather forecasts are notoriously unreliable, especially beyond 48 hours. This fact weakened the credibility of the IPCC predictions with the public from the start. Some supporters of the IPCC position tried to counteract the problem by saying that climate forecasts were different from weather forecasts. It is a false argument. Climate is the average of the weather, so if the weather science is wrong the climate science is wrong.

Yes, Richard Telford did make that fallacious assertion and I demolished it in my rebuttal at January 8, 2013 at 4:32 am. I concluded that rebuttal saying

So, if the science of weather is wrong then the boundary conditions of climate (i.e. average weather) cannot be adequately defined (you may be predicting ‘coin tosses’ when you should be predicting ‘dice tosses’, or ‘pancake tosses’, or …).

At January 8, 2013 at 5:49 am, Telford mentioned my rebuttal but evaded any discussion and/or refutation of it.

At January 8, 2013 at 2:48 pm, Telford repeated his untrue – and plain wrong – assertion, so at January 9, 2013 at 3:41 am I reminded him of my rebuttal.

Since then, you and Telford have not made any attempt to discuss the reason why Telford’s assertion is plain wrong. But your post tries to pretend he is right despite the post from Frank K (at January 11, 2013 at 2:41 pm) which details why Telford’s assertion is plain wrong.

Your post concludes saying

And, in fact, Richard [Telford] was right in this. One could nitpick about how one wants to discuss these two issues…in terms of boundary conditions vs initial conditions or in terms of predictability of the first and second kind. The fact remains that Tim Ball’s argument was completely bogus.

I know you like to proclaim things you know to be plain wrong because you do it so often. In this case – as so often in the past – you are completely wrong and everybody can see you know you are wrong. In reality:
1. Telford was completely wrong.
2. I explained how and why Telford was completely wrong.
3. You and Telford have not addressed his being completely wrong.
4. It is not a nitpick to point out that Telford was completely wrong.
5. Ball made true and undeniable statements: n.b. not an “argument”.

Your claim is that for all ~20 major climate models that are out there, AGW theory is somehow put into them rather than that support for it emerges from them. So, why don’t you give us specific examples of things that are put into the climate models that you classify as “incorporat[ing] AGW theory” into the models?

Easy: each climate model uses a unique but grossly high climate sensitivity to atmospheric CO2 and balances that, forcing its hindcasts to fit twentieth century global warming by using a unique cooling from aerosols.
See

Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:
”Figure 2. Total anthropogenic forcing (W/m^2) versus aerosol forcing (W/m^2) from nine fully coupled climate models and two energy balance models used to simulate the 20th century.”
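The trade-off being described can be sketched in a line of arithmetic — the response factor and forcing values below are hypothetical numbers chosen only to show the cancellation, not values taken from any of the eleven models in the Figure:

```python
# Transient 20th-century warming crudely approximated as a linear
# response to the net forcing. All numbers are illustrative only.
def warming(response_factor, ghg_forcing, aerosol_forcing):
    """Warming (C) = response factor (C per W/m^2) x net forcing (W/m^2)."""
    return response_factor * (ghg_forcing + aerosol_forcing)

# High sensitivity offset by strong aerosol cooling...
model_a = warming(response_factor=0.5, ghg_forcing=2.6, aerosol_forcing=-1.4)
# ...or lower sensitivity paired with weak aerosol cooling:
model_b = warming(response_factor=0.3, ghg_forcing=2.6, aerosol_forcing=-0.6)
print(model_a, model_b)  # both produce the same simulated warming
```

Because the product, not the individual factors, is fitted to the temperature record, very different sensitivity/aerosol pairs can reproduce the same hindcast — which is exactly the compensation being alleged here.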

I did not say that there is no theory of AGW…
======================================
joeldshore says:

January 11, 2013 at 12:01 pm
There is no special “AGW theory” to incorporate into the climate models.
==============================================
So then, do we finally agree that there is what is called the Theory of Anthropogenic Global Warming, aka AGW?
Do we agree that AGW theory involves radiation physics and the greenhouse effect? And that the various aspects of AGW theory involve an increase of atmospheric humidity, which multiplies the incremental CO2 effect, i.e., positive feedback? There is more to it, but you get the idea, I am sure.

I say that climate models elaborate the above mentioned aspects of AGW theory, incorporating it as algorithms. You seem to disagree. What climate modeler would agree with you?
======================================================

Joel Shore says: “Your claim is that for all ~20 major climate models that are out there, AGW theory is somehow put into them rather than that support for it emerges from them.”
====================================================

Yes, that is my essential claim: climate models incorporate AGW theory and then the product of these models is presented as confirmation of AGW theory. I say that is the fallacy of circular reasoning.

mpainter: By your reasoning, any modeling to understand how our universe works would be circular (including my example about chaos). You simply define all of the science, no matter how well-understood and verified, that goes into the modeling as being part of the theory that comes out…and, voilà…you have circular reasoning.

It is not circular reasoning just because the well-understood physical laws governing the propagation of radiation in the atmosphere are among the ingredients that lead to the prediction of AGW. It is not circular reasoning just because modeling of the transport of water vapor in the atmosphere predicts that a warmer atmosphere has a higher (absolute) humidity and this is one of the ingredients in the prediction of AGW.

We model the atmosphere with the physics we know. We don’t just ignore that physics because it leads to results that are ideologically inconvenient to some.

Joel Shore: You seem like a cat chasing its tail. All the support you offer for AGW is AGW restated, i.e., the theorems of physics that constitute AGW theory.

“No matter how well-understood and verified”? This is hardly the case – in fact, it is a contentious and disputed area. The so-called “climate sensitivity” factors vary from model to model, and they are disputed from modeler to modeler. Everything is in dispute: feedbacks, saturation curves, clouds, etc.

And here you come and say that the product of the models verifies AGW theory, and that this is not circular. Perhaps you can believe that, but it fails to convince.

There is a well-known type: the theoretical physicist who has nothing to do with observations, never has, and cannot incorporate observations into his thinking, because theory is supreme and observations subordinate.

This is hardly the case – in fact, it is a contentious and disputed area. The so-called “climate sensitivity” factors vary from model to model, and they are disputed from modeler to modeler. Everything is in dispute: feedbacks, saturation curves, clouds, etc.

Of the things that you mention, the only one that is seriously in dispute in the scientific community is the value of the cloud feedback. And, yes, this leads to uncertainty about the climate sensitivity. But, that doesn’t mean that we know nothing about climate sensitivity. And, it also doesn’t mean that the models represent circular reasoning.

There is a well-known type: the theoretical physicist who has nothing to do with observations, never has, and cannot incorporate observations into his thinking, because theory is supreme and observations subordinate.

There is also another well-known type: The person whose evaluation of scientific evidence tends to reach conclusions that just happen to agree with their ideological preconceptions. (And, in fact, studies show that MOST people do this when they are not experts in the field that they are looking at.)

The basic physics that underlies the climate models is well-verified experimentally. We have an entire practical field of science and technology (“remote sensing”) based on radiative transfer in the atmosphere. (Ironically, some people who dispute this basic radiative physics nonetheless trust the UAH temperature data set that is fundamentally based on it!) We have satellite and radiosonde data that strongly support the basic idea and the magnitude of the water vapor feedback, i.e., how the atmosphere moistens with increasing temperature.

While the cloud feedback remains a significant source of uncertainty in the models, none of the modeling groups nor any of the “AGW skeptics” (who have access to several open source climate models) have been able to produce a parametrization of clouds that is physically reasonable and leads to a low sensitivity (below about 1.5 to 2 C per CO2 doubling). [I find it particularly interesting that none of those who claim that the cloud feedback is a significant negative feedback have been able to demonstrate that there is even a way to produce such a feedback in the climate models using any sort of physically-reasonable parametrization of clouds.]

Furthermore, the empirical evidence, including paleoclimate data (like the last glacial maximum), response to volcanic eruptions (like Mt Pinatubo), the instrumental temperature record, and the seasonal cycle provide evidence for a significant climate sensitivity (likely greater than 2 C and very likely greater than 1.5 C per CO2 doubling).

Yes, Richard Telford did make that fallacious assertion and I demolished it in my rebuttal at January 8, 2013 at 4:32 am.

No, you didn’t. Your rebuttal made an incorrect assumption that “the science of weather is wrong”. However, our understanding of chaos does not tell us that the science of weather is wrong. What it tells us is that the weather is inherently very sensitive to initial conditions so that even if you have the correct equations, you still cannot predict that far out into the future because of uncertainties in initial conditions. And, that same science of chaos tells us that although such weather predictions will fail, the equations will still produce weather with the right statistical properties.

And, in fact, numerical weather prediction these days is quite impressive, as was pointed out here in a recent post about how accurately the path of Sandy was predicted even 4 to 5 days in advance. And, the models are run in “ensembles” with slightly different initial conditions in order to tell the forecasters to what degrees the predictions are sensitive to this particular source of uncertainty.
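The ensemble practice described here can be sketched with the Lorenz (1963) toy system standing in for a weather model — member count, perturbation size, and lead times are arbitrary illustrative choices, not anything from operational forecasting:

```python
import numpy as np

rng = np.random.default_rng(42)  # arbitrary seed for reproducibility

def lorenz_x(state, n_steps, dt=0.01):
    """Forward-Euler integration of the Lorenz (1963) system; return final x."""
    x, y, z = state
    for _ in range(n_steps):
        x, y, z = (x + dt * 10.0 * (y - x),
                   y + dt * (x * (28.0 - z) - y),
                   z + dt * (x * y - (8.0 / 3.0) * z))
    return x

# An 'ensemble': the same model restarted from slightly perturbed analyses.
analysis = np.array([1.0, 1.0, 1.0])
short_lead = [lorenz_x(analysis + rng.normal(0.0, 1e-4, 3), 200) for _ in range(20)]
long_lead = [lorenz_x(analysis + rng.normal(0.0, 1e-4, 3), 3000) for _ in range(20)]
print(np.std(short_lead), np.std(long_lead))  # ensemble spread grows with lead time
```

The spread among members is the quantity forecasters read off: small spread at short lead times means the prediction is robust to initial-condition uncertainty; large spread at long lead times means it is not.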

But your post tries to pretend he is right despite the post from Frank K (at January 11, 2013 at 2:41 pm) which details why Telford’s assertion is plain wrong.

The statements by Frank K that you quote in that post don’t relate to Telford’s assertion. Instead, they attack a “strawman” argument, basically an argument that there are no uncertainties in predicting future global warming. That is an argument that nobody was making and is unrelated to the point about whether or not the inherent sensitivity to initial conditions makes prediction of future climate due to a significant change in radiative forcing impossible.

Science is essentially about understanding the universe in the face of the complications and uncertainties that always exist. Part of this is carefully identifying and quantifying these uncertainties. It is not helped by people making up uncertainties that don’t exist.

At January 13, 2013 at 7:05 am you use your usual fallback position when cornered in a discussion, and you provide a blatant falsehood: viz.

the only one that is seriously in dispute in the scientific community is the value of the cloud feedback.

Bollocks!
At January 12, 2013 at 1:33 pm I informed you – with link and reference – that
(a) each climate model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2
and
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.

In other words, the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5, and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
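The quoted factors follow directly from the ranges given in (a) and (b) — a quick arithmetic check:

```python
# Ranges quoted above for the models (units W/m^2):
total_forcing = (0.80, 2.02)      # total anthropogenic forcing, min and max
aerosol_forcing = (-1.42, -0.60)  # aerosol forcing, min and max

total_ratio = total_forcing[1] / total_forcing[0]
aerosol_ratio = aerosol_forcing[0] / aerosol_forcing[1]
print(total_ratio, aerosol_ratio)  # roughly 2.5 and 2.4, as stated
```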

I have already explained in my previous posts to you what my position is: What the models incorporate into them are the physics of the atmosphere, oceans, radiation, … as we currently understand them. They are not circular reasoning.

Now, you might object to certain particular elements of physics that are in the models, but then it is up to you to explain what your objections are and to provide evidence that the models are incorrect in their physical description of this process and that this lack of correctness has (or is likely to have) a significant effect on the basic predictions made by the models (such as the equilibrium climate sensitivity).

Richard: The sentence that you quote from me was a response to the list that mpainter presented, as is clear in the first part of my sentence that you failed to quote: “Of the things that you mention…”

I agree that the total radiative forcing of aerosols still has a large uncertainty…and this uncertainty is the main reason why the instrumental temperature record alone does not provide a very good constraint on the equilibrium climate sensitivity or transient climate response. However, this uncertainty in aerosol forcing is unlikely to have a significant direct impact on predictions of future climate because, unless we want to either choke ourselves with pollution (or try some risky geoengineering scheme where we continuously inject aerosols into the stratosphere), the accumulating effect of the non-condensable greenhouse gases dominates the temperature response. It just makes it more difficult to determine from the instrumental temperature record which of the range of possible climate sensitivities is the most realistic, which means that models with a fairly large range of climate sensitivities and aerosol forcings can all do a reasonable job of simulating the instrumental temperature record.

Your posts at January 13, 2013 at 10:07 am and January 13, 2013 at 11:58 am are each a classic fail.

re your post at January 13, 2013 at 10:07 am
I made no “assumption” in my rebuttal of Telford’s nonsense. Tim Ball said “if the science of weather is wrong the climate science is wrong.” In the context of defining boundary conditions for climate there is no known “science of weather”. Weather is a chaotic system with unknown strange attractors and unknown chaotic equations. This lack of knowledge means it is not possible to define the boundary conditions of the system. Hence, any definition of boundary conditions is ‘wrong science’ because the definition is any one of an infinite number of guesses.

re your post at January 13, 2013 at 11:58 am
You claim you said something other than you did. The context was – and is – clear. You claimed “cloud feedback” is the only parameter in the climate models which is “in serious dispute”. That is not true, and you knew it is not true because I had told you – with link and reference – that it is not true.

You were and are wrong on both issues. You should ‘man up’ and admit you were wrong on both issues, but I anticipate you will wriggle as you usually do when you are shown to be wrong.

Weather is a chaotic system with unknown strange attractors and unknown chaotic equations. This lack of knowledge means it is not possible to define the boundary conditions of the system. Hence, any definition of boundary conditions is ‘wrong science’ because the definition is any one of an infinite number of guesses.

Frankly, this is nonsense. The equations are not “unknown” except in the sense that one can say the equations governing anything are unknown. After all, the equations of gravity are unknown, since we know that the current picture of gravity is not compatible with quantum mechanics. And, yes, it is possible to define the boundary conditions, which are things like the concentrations of various greenhouse gases (at least in models that don’t have their own carbon cycle; if they do, then one defines emissions and the model computes the concentrations). There may be uncertainties in certain boundary conditions (such as solar forcing and volcanic eruptions) but, barring the sun doing something extremely dramatic or a super-eruption of the kind experienced only very rarely, these effects will be small compared to the effect of greenhouse gases. And, do you think this would be the first boundary-value problem in the history of science where the boundary conditions are not known with infinite precision?

You claim you said something other than you did. The context was – and is – clear. You claimed “cloud feedback” is the only parameter in the climate models which is “in serious dispute”. That is not true, and you knew it is not true because I had told you – with link and reference – that it is not true.

Richard, when you originally deleted the first part of the sentence that I had written as “Of the things that you mention, the only one that is seriously in dispute in the scientific community is the value of the cloud feedback,” I tried to interpret it as just an honest mistake on your part. Now that you are continuing to press the point, it is hard not to consider it as an indicative of intentional dishonesty.

At any rate, I have explained why the uncertainty in aerosol forcing is not that important to future predictions except in the sense that it makes it more difficult to narrow down the climate sensitivity (which, in an indirect way, means the cloud feedback) using the instrumental temperature record.

The modelers all confirm that the models essentially incorporate AGW theory.

I assume that you have a quote to that effect, presumably in a statement that says it in a way that they mean it in the same way that you do…i.e., that it is just an exercise in circular reasoning? I would be quite surprised to see a modeler claim that.