Monckton makes it up

If you look around the websites dedicated to debunking mainstream climate science, it is very common to find Lord Christopher Monckton, 3rd Viscount of Brenchley, cited profusely. Indeed, he has twice testified about climate change before committees of the U.S. Congress, even though he has no formal scientific training. But if he has no training, why has he become so influential among climate change contrarians? After examining a number of his claims, I have concluded that he is influential because he delivers “silver bullets,” i.e., clear, concise, and persuasive arguments. The trouble is that his compelling arguments are often constructed from fabricated facts. In other words, he makes it up. (Click here to see a number of examples by John Abraham, here for a few of my own, and here for some by Tim Lambert.)

Here I’m going to examine some graphs that Lord Monckton commonly uses to show that the IPCC has incorrectly predicted the recent evolution of global atmospheric CO2 concentration and mean temperature. A number of scientists have already pointed out that Monckton’s plots of “IPCC predictions” don’t correspond to anything the IPCC ever predicted. For example, see comments by Gavin Schmidt (Monckton’s response here), John Nielsen-Gammon (Monckton’s response here), and Lucia Liljegren. Monckton is still happily updating and using the same graphs of fabricated data, so why am I bothering to re-open the case?

My aim is to more thoroughly examine how Lord Monckton came up with the data on his graphs, compare it to what the IPCC actually has said, and show exactly where he went wrong, leaving no excuse for anyone to take him seriously about this issue.

Atmospheric CO2 Concentration

By now, everyone who pays any attention knows that CO2 is an important greenhouse gas, and that the recent increase in global average temperature is thought to have been largely due to humans pumping massive amounts of greenhouse gases (especially CO2) into the atmosphere. The IPCC projects future changes in temperature, etc., based on projections of human greenhouse gas emissions. But what if those projections of greenhouse gas emissions are wildly overstated? Lord Monckton often uses graphs like those in Figs. 1 and 2 to illustrate his claim that “Carbon dioxide is accumulating in the air at less than half the rate the UN had imagined.”

Figure 1. Graph of mean atmospheric CO2 concentrations contrasted with Monckton’s version of the IPCC’s “predicted” values over the period from 2000-2100. He wrongly identifies the concentrations as “anomalies.” Taken from the Feb. 2009 edition of Lord Monckton’s “Monthly CO2 Report.”

Figure 2. Graph of mean atmospheric CO2 concentrations contrasted with Monckton’s version of the IPCC’s “predicted” values over the period from Jan. 2000 through Jan. 2009. Taken from the Feb. 2009 edition of Lord Monckton’s “Monthly CO2 Report.”

It should be noted that Lord Monckton faithfully reproduces the globally averaged marine surface CO2 concentration taken from NOAA, and the light blue trend line he draws through the data appears to be legitimate. Unfortunately, nearly everything else about the graphs is nonsense. Consider the following points that detail the various fantasies Monckton has incorporated into these two graphics.

Reality #1.
The IPCC doesn’t make predictions of future atmospheric CO2 concentrations. And even if we ferret out what Lord Monckton actually means by this claim, he still plotted the data incorrectly.

The IPCC doesn’t really make predictions of how atmospheric CO2 will evolve over time. Rather, the IPCC has produced various “emissions scenarios” that represent estimates of how greenhouse gas emissions might evolve if humans follow various paths of economic development and population growth. The IPCC’s report on emissions scenarios states, “Scenarios are images of the future, or alternative futures. They are neither predictions nor forecasts. Rather, each scenario is one alternative image of how the future might unfold.” Lord Monckton explained via e-mail that he based the IPCC prediction curves “on the IPCC’s A2 scenario, which comes closest to actual global CO2 emissions at present” (2). In his “Monthly CO2 Report” he added, “The IPCC’s estimates of growth in atmospheric CO2 concentration are excessive. They assume CO2 concentration will rise exponentially from today’s 385 parts per million to reach 730 to 1020 ppm, central estimate 836 ppm, by 2100,” which is consistent with the A2 scenario. In other words, Monckton has picked one of several scenarios used by the IPCC and misrepresented it as a prediction. This is patently dishonest.
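As a sanity check on those A2 numbers, it’s easy to back out the average exponential growth rate they imply. In this minimal sketch, the ~2009 start year (around when the post was written) is my assumption; the concentrations are the ones quoted above:

```python
import math

# A2 central-estimate figures quoted above; the ~2009 start year is my assumption
c_start, c_end = 385.0, 836.0   # ppm
years = 2100 - 2009             # 91 years

# Average exponential growth rate r such that c_end = c_start * exp(r * years)
r = math.log(c_end / c_start) / years
print(f"implied growth rate: {100 * r:.2f}% per year")  # roughly 0.85% per year
```

A rate under 1% per year still compounds to more than a doubling over a century, which is why the same curve can look strongly exponential over 100 years and yet nearly linear over a single decade.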

Monckton’s misrepresentation of the IPCC doesn’t end here, however, because he has also botched the details of the A2 scenario. The IPCC emissions scenarios are run through models of the Carbon Cycle to estimate how much of the emitted CO2 might end up in the atmosphere. A representative (i.e., “middle-of-the-road”) atmospheric CO2 concentration curve is then extracted from the Carbon Cycle model output, and fed into the climate models (AOGCMs) the IPCC uses to project possible future climate states. Figure 3 is a graph from the most recent IPCC report that shows the Carbon Cycle model output for the A2 emissions scenario. The red lines are the output from the model runs, and the black line is the “representative” CO2 concentration curve used as input to the climate models. I digitized this graph, as well, and found that the year 2100 values were the same as those cited by Monckton. (Monckton calls the model input the “central estimate.”)

Figure 3. Plot of atmospheric CO2 concentrations projected from 2000-2100 for the A2 emissions scenario, after the emissions were run through an ensemble of Carbon Cycle models. The red lines indicate model output, whereas the black line represents the “representative” response that the IPCC used as input into its ensemble of climate models (AOGCMs). Taken from Fig. 10.20a of IPCC AR4 WG1.

Now consider Figure 4, where I have plotted the A2 model input (black line in Fig. 3), along with the outer bounds of the projected atmospheric CO2 concentrations (outer red lines in Fig. 3). However, I have also plotted Monckton’s Fantasy IPCC predictions in the figure. The first thing to notice here is how badly Monckton’s central tendency fits the actual A2 model input everywhere in between the endpoints. Monckton’s central tendency ALWAYS overestimates the model input except at the endpoints. Furthermore, the lower bound of Monckton’s Fantasy Projections also overestimates the A2 model input before about the year 2030. What appears to have happened is that Lord Monckton chose the correct endpoints at 2100, picked a single start point around the year 2000-2002, and then made up some random exponential equations to connect the dots, with NO REGARD for whether his curves corresponded to anything the IPCC actually projected in between.

Figure 4. Here the black lines represent the actual A2 input to the IPCC climate models (solid) and the upper and lower bounds of the projected CO2 concentrations obtained by running the A2 emissions scenario through an ensemble of Carbon Cycle models. This data was digitized from the graph in Fig. 3, but a table of model input concentrations of CO2 resulting from the different emissions scenarios can be found here. The red lines represent Monckton’s version of the IPCC’s “predicted” CO2 concentrations. The solid red line is his “central tendency”, while the dotted lines are his upper and lower bounds. Monckton’s data was digitized from the graph in Fig. 1.

Monckton responded to this line of criticism as follows:

[Nielsen-Gammon] says my bounds for the 21st-century evolution of CO2 concentration are not aligned with those of the UN. Except for a very small discrepancy between my curves and two outliers among the models used by the UN, my bounds encompass the output of the UN’s models respectably, as the blogger’s own overlay diagram illustrates. Furthermore, allowing for aspect-ratio adjustment, my graph of the UN’s projections is identical to a second graph produced by the UN itself for scenario A2 that also appears to exclude the two outliers.

It is fair enough to point out that Fig. 10.26 in IPCC AR4 WG1 has a plot of the projected A2 CO2 concentrations that seems to leave out the outliers. However, Monckton’s rendition is still not an honest representation of anything the IPCC ever published. I can prove this by blowing up the 2000-2010 portion of the graph in Fig. 4. I have done this in Fig. 5, where I have also plotted the actual mean annual global CO2 concentrations for that period. The clear implication of this graph is that even if the A2 scenario did predict atmospheric CO2 evolution (and it doesn’t), it would actually be a good prediction, so far. In Figures 1 and 2, Lord Monckton has simply fabricated data to make it seem like the A2 scenario is wrong.

Figure 5. This is a blow-up of the graph in Fig. 4 for the years 2000-2010. I have also added the annual global mean atmospheric CO2 concentrations (blue line), obtained from NOAA.

Fantasy #2.
Monckton claims that “for seven years, CO2 concentration has been rising in a straight line towards just 575 ppmv by 2100. This alone halves the IPCC’s temperature projections. Since 1980 temperature has risen at only 2.5 °F (1.5 °C) per century.” In other words, he fit a straight line to the 2002-2009 data and extrapolated to the year 2100, at which time the trend predicts a CO2 concentration of 575 ppm. (See the light blue line in Fig. 1.)
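It’s worth noting what slope that extrapolation implies. A minimal check, where the ~385 ppm starting concentration (around the time of the post) is my assumed value:

```python
# Slope implied by extrapolating from ~385 ppm (around 2009, my assumption)
# to Monckton's claimed 575 ppm at 2100
c_now, c_2100 = 385.0, 575.0    # ppm
slope = (c_2100 - c_now) / (2100 - 2009)
print(f"implied slope: {slope:.2f} ppm per year")  # about 2.1 ppm per year
```

That is close to the roughly 2 ppm per year seen in recent observations, so the 575 ppm figure amounts to freezing today’s growth rate in place for the next 90 years.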

Reality #2.
It is impossible to distinguish a linear trend from an exponential trend like the one used for the A2 model input over such a short time period.

I pointed out to Lord Monckton that it’s often very hard to tell an exponential from a linear trend over a short time period, e.g., the 7-year period shown in Fig. 2. He replied,

I am, of course, familiar with the fact that, over a sufficiently short period (such as a decade of monthly records), a curve that is exponential (such as the IPCC predicts the CO2 concentration curve to be) may appear linear. However, there are numerous standard statistical tests that can be applied to monotonic or near-monotonic datasets, such as the CO2 concentration dataset, to establish whether exponentiality is being maintained in reality. The simplest and most direct of these is the one that I applied to the data before daring to draw the conclusion that CO2 concentration change over the past decade has degenerated towards mere linearity. One merely calculates the least-squares linear-regression trend over successively longer periods to see whether the slope of the trend progressively increases (as it must if the curve is genuinely exponential) or whether, instead, it progressively declines towards linearity (as it actually does). One can also calculate the trends over successive periods of, say, ten years, with start-points separated by one year. On both these tests, the CO2 concentration change has been flattening out appreciably. Nor can this decay from exponentiality towards linearity be attributed solely to the recent worldwide recession: for it had become evident long before the recession began.

In other words, the slope keeps getting larger in an exponential trend, but stays the same in a linear trend. Monckton is right that you can do that sort of statistical test, but Tamino actually applied Monckton’s test to the Mauna Loa observatory CO2 data since about 1968 and found that the 10-year slope in the data has been pretty continuously rising, including over the last several years. Furthermore, look at the graph in Fig. 5, and note that the solid black line representing the A2 climate model input looks quite linear over that time period, but looks exponential over the longer timeframe in Fig. 4. I went to the trouble of fitting a linear trend line to the A2 model input line from 2002-2009 and obtained a correlation coefficient (R2) of 0.99967. Since a perfectly linear trend would have R2 = 1, I suggest that it would be impossible to distinguish a linear from an exponential trend like that followed by the A2 scenario in real, “noisy” data over such a short time period.
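The R² point is easy to reproduce. Here is a minimal sketch, using NumPy, with an exponential whose starting value and growth rate roughly mimic the A2 model input over 2002-2009 (both parameters are illustrative assumptions, not digitized IPCC values):

```python
import numpy as np

# An exponential rising from ~372 ppm in 2002 at ~0.85%/yr; both parameters
# are illustrative assumptions chosen to roughly mimic the A2 model input
years = np.arange(2002, 2010)
co2 = 372.0 * np.exp(0.0085 * (years - 2002))

# Least-squares straight line through the same 8 points
slope, intercept = np.polyfit(years, co2, 1)
fit = slope * years + intercept

# Coefficient of determination R^2
ss_res = np.sum((co2 - fit) ** 2)
ss_tot = np.sum((co2 - np.mean(co2)) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"R^2 = {r2:.5f}")
```

Even though the underlying curve is exactly exponential, the straight-line fit over this 8-year window yields an R² extremely close to 1, which is the whole point: over so short a period the two shapes are statistically indistinguishable, especially once real-world noise is added.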

Temperature Projections

Atmospheric CO2 concentration wouldn’t be treated as such a big deal if it didn’t affect temperature; so of course Lord Monckton has tried to show that the Fantasy IPCC “predictions” of CO2 concentration he made up translate into overly high temperature predictions. This is what he has done in the graph shown in Fig. 6.

Figure 6. Lord Monckton’s plot of global temperature anomalies over the period January 2002 to January 2009. The red line is a linear trend line Monckton fit to the data, and the pink/white field represents his Fantasy IPCC temperature predictions. I have no idea what his base period is. Taken from the Feb. 2009 edition of Lord Monckton’s “Monthly CO2 Report.”

Fantasy #3.
Lord Monckton uses graphs like that in Fig. 6 to support his claim that the climate models (AOGCMs) the IPCC uses to project future temperatures are wildly inaccurate.

Reality #3.
Monckton didn’t actually get his Fantasy IPCC predictions of temperature evolution from AOGCM runs. Instead, he inappropriately fed his Fantasy IPCC predictions of CO2 concentration into equations meant to describe the EQUILIBRIUM model response to different CO2 concentrations.

Monckton indicated to me (5) that he obtained his graph of IPCC temperature predictions by running his Fantasy CO2 predictions (loosely based on the A2 emissions scenario) through the IPCC’s standard equation for converting CO2 concentration to temperature change, which can be found here.
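For readers who want the explicit form: I can’t confirm precisely which version Monckton used, but the simplified expression usually cited in this context combines the Myhre et al. logarithmic forcing approximation, ΔF = 5.35 ln(C/C₀) W/m², with an equilibrium climate sensitivity. Here is a sketch, where the 3 °C-per-doubling sensitivity is an illustrative assumption:

```python
import math

def equilibrium_warming(c, c0=280.0, sensitivity_per_doubling=3.0):
    """Equilibrium warming (deg C) for CO2 level c (ppm) relative to c0.

    Uses the Myhre et al. logarithmic forcing approximation
    dF = 5.35 * ln(c/c0) W/m^2; the 3 C-per-doubling sensitivity
    is an illustrative assumption, not an IPCC prediction.
    """
    forcing = 5.35 * math.log(c / c0)                       # W/m^2
    lam = sensitivity_per_doubling / (5.35 * math.log(2))   # K per (W/m^2)
    return lam * forcing

# Equilibrium (NOT transient) warming for the A2 central estimate of 836 ppm
print(f"{equilibrium_warming(836.0):.1f} C")  # about 4.7 C
```

The crucial caveat is that this returns the warming after the system has fully equilibrated, so plugging year-by-year concentrations into it says nothing about the transient temperature path.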

The problem is that the equation mentioned is meant to describe equilibrium model response, rather than the transient response over time. In other words, they take the standard AOGCMs, input a certain stabilized CO2 concentration, and run the models until the climate output stabilizes around some new equilibrium. But it takes some time for the model systems to reach the new equilibrium state, because some of the feedbacks in the system (e.g., heat absorption as the ocean circulates) operate on fairly long timescales. Therefore, it is absolutely inappropriate to use the IPCC’s equation to describe anything to do with time evolution of the climate system. When I brought this up to Lord Monckton, he replied that he knows the difference between equilibrium and transient states, but he figures the equilibrium calculation comes close enough. But since the IPCC HAS published time-series (rather than just equilibrium) model output for the A2 scenario (see Fig. 7), why wouldn’t he just use that?

The answer is that if Lord Monckton had used the time-series model output, he would have had to admit that the IPCC temperature projections are still right in the ballpark. In Fig. 8, I have digitized the outer bounds of the model runs in Fig. 7, and also plotted the HadCRUT3 global annual mean temperature anomaly over the same period. The bottom line is that Monckton has put the wrong data into the wrong equation, and (surprise!) he got the wrong answer.

Figure 8. The blue and green lines represent the upper and lower bounds of the global average temperature anomaly from AOGCM output for the A2 emissions scenario during the 2002-2010 period. The black line represents the HadCRUT3 global temperature anomalies for that timeframe, normalized to the same base period.

Summary

I have shown here that in order to discredit the IPCC, Lord Monckton produced his graphs of atmospheric CO2 concentration and global mean temperature anomaly in the following manner:

He confused a hypothetical scenario with a prediction.

He falsely reported the data from the hypothetical scenario he was confusing with a prediction.

He plugged his false data into the wrong equation to obtain false predictions of time-series temperature evolution.

He messed up the statistical analyses of the real data.

These errors compound into a rather stunning display of complete incompetence. But since all, or at least nearly all, of this has been pointed out to Monckton in the past, there’s no scientifically valid excuse. He’s just making it up.

665 Responses to “Monckton makes it up”

I can’t find Monckton anywhere in the list of Nobel Prize Laureates. I see Al Gore there. Is the SPPI bio comment more akin to a rock band winning a platinum disc and a fan saying he won it, too, because he read the lyrics before the release and corrected a spelling error, I wonder?

“He blew his cred, he’s such a bore,
He didn’t notice that the climate’s changed,
A crowd of people stood and groaned,
They’d read his words before,
All but Monck were really sure that he’s not from the House of Lords”

Dr. Aiguo Dai was kind enough to point me to his team’s collected drought database online and explain to me how to use it. I have put together annual time series for what I’m calling F, the fraction of Earth’s land surface in “severe drought” by the Palmer Drought Severity Index (PDSI <= -3), and P, the mean global PDSI.

I've tentatively fit the first with a cubic [!] equation. Still have to correct for autocorrelation and so on, but given that human agriculture collapses when F hits 70%, my very preliminary estimate is that this happens in 2037 AD, give or take five years. The 70% number is of course debatable, as is my curve-fit (N = 136, R^2 = 0.55). I'm working on refining my statistical analysis in hopes of publishing a paper about this.

For those who care, F was 5% in 1870, 12% in 1970, hit a temporary peak of 31% in 2003, and is currently at 21%. It's a very variable series, which is NOT good news. The reason why is left as an exercise for the student.

I can’t help but notice the pride of having contributed to the report that he tries to destroy with all the weapons at his disposal. The pride of being part of an organisation he so much wants to wipe of the face of this earth. The pride of earning a part of a prize that, according to his beliefs, should have never been awarded in the first place.

The Peace prize refers to the Gore one, which is shared with anyone who participated in the IPCC process. I guess it is sort of humorous with prior knowledge. But I think a lot of people wouldn’t be aware of the background info.

[Response: The only official recipients of the IPCC Nobel Prize certificates were the coordinating and lead authors. It didn’t go down to the contributing authors (sob ;-( ), nor the expert reviewers, nor the proof readers. As to Monckton’s specific claim, that was a typo that many people spotted immediately. – gavin]

HAS suggests: “I think it would help you if you read a little physics and climate science so you understand better the place of uncertainty and parameter estimation in both. In the context of my comments about model skill have a look at Hargreaves and you will see there how some of these issues arise.”

Really? Do tell.

[Nonchalantly sharpens claws.]

Sweetie, I’ve been doing physics a VERY long time–certainly long enough to know the difference between a dynamical model and a statistical model. Both have parameters. All models do. What distinguishes them is how the values of the parameters are set. Climate models are dynamical models. Their parameters are set by the physics–or by independent data. As such, the fact that Hansen’s model reproduces late 20th century temperatures (and much else besides) illustrates that it is quite skillful. Certainly, the models can be improved. They are constantly being improved. However, if the physics were drastically off wrt climate sensitivity, say, they would not perform nearly as well. Now why don’t you extract that stick and actually bother to learn a little bit about the models.

I think it is reasonable to make a statement using the word catastrophic (in the English sense). My argument is that it doesn’t make political sense, especially if it is coming from a scientific source (to the lay public, which as we know is sadly not made up only of scientists). By trying to win political arguments by referring to catastrophic science, you are reinforcing the prior viewpoints of many right-leaning people. Yes, there are some who are already way over there on this issue, but that argument is only swelling their ranks. If the scientific advice is more policy neutral (or less policy prescriptive) then this is not happening.

This advice is purely political advice for how to communicate scientific advice to maximum effect. This actually comes through in “The Art of War” as well as “Strategy” by B. Hart. Never try a direct approach, since this only strengthens an opponent’s resolve. The best method is an indirect approach, in order to gently nudge an opponent in the desired direction.

Barton Paul Levenson (#253)

Can you post any links to the data regarding drought? I’m curious whether there has been an analysis of any biases (e.g., more coverage in later periods, thus more information in the database and a higher F)? Also, what statistical technique are you considering using since, as you mention, the cubic doesn’t apply very well (especially if it goes over 100% at some point)?

‘The House of Lords has stepped up its efforts to make Christopher Monckton – climate sceptic and deputy leader of the UK Independence party (Ukip) – desist in his repeated claims that he is a member of the upper house. The push comes as Buckingham palace has also been drawn into the affair over his use of a logo similar to parliament’s famous portcullis emblem …’

Following the link from Deech56, I actually had a thought just eyeballing the graph. In that short time span, Hansen could never have predicted the timing or severity of the Pinatubo eruption. Over a hundred years he could have factored in occasional volcanoes, sure, and things would average out…

But knowing that such an event lowered temperatures by blocking inbound radiation, it seems fair and proper (to me) to basically shift the actual temperatures over such that the observational record cuts out the Pinatubo event until the point where temperatures return to where they were when the eruption occurred.

If you do this Hansen’s skill appears to be not merely good, but instead shockingly good. Of course, this only comes from eyeballing a teeny tiny image of a graph, and then futzing with it in Photoshop.

Boiled down to its essence, the usual scenario: Monckton versus the world of facts:

Monckton argues his use of the portcullis emblem, which has appeared on his letterheads and lecture presentations, does not breach any rules: “My logo is not a registered badge of parliament, and is plainly distinct from parliament’s badge in numerous material respects. The Lords do not use the portcullis at all on their notepaper: they use the Royal Arms within an elliptical cartouche.”

A House of Lords spokeswoman said: “The emblem is property of the Queen, and Parliament has a Royal Licence granted for its use. Any misuse of the emblem by either members or non-members breaches this licence, and if a person refuses to stop using it the matter is drawn to the attention of the Lord Chamberlain, who is an Officer of the Royal Household. The Lord Chamberlain has been contacted regarding Lord Monckton’s use of the emblem, and it will fall to him to follow up on any misuse of the emblem.”

“It did, though, guide the Guardian towards a document on its website which says misuse of the emblem is prohibited by the Trade Marks Act 1994, meaning Monckton could potentially be liable for fines and a six-month prison term if the Palace pursued the matter and successfully prosecuted him.”

Monckton threatens frivolous lawsuits against those who argue against him.

The Crown may threaten Monckton with six months … pity the poor prisoners, though, if he were convicted and forced to serve! Being forced to listen to him prattle on for six months would be cruel and unusual punishment, without doubt.

Peter Tatchell is actually a good guy in many ways, but he should obviously steer clear of writing about science.

Anyway, Lord Monckton has inspired me to go into the bookmaking business. When anyone has a supposedly winning bet I will explain to them that I have done my own calculations and when they said they predicted horse x to win the race they actually meant that horse y would win and so unfortunately they have lost, regardless of what they actually wrote on their betting slip.

Why is it that on all sides, sceptics and here the warmists, we see fighting about prominent figureheads (be it the irresistible Mr. Gore, or the ever charming Mr. Monckton)? Shouldn’t we discuss arguments that have more merit?

Certainly the last years have shown that longer (time-wise) variability phenomena can prevail against an increase of CO2. There certainly is an influence by CO2; whoever denies that has never had a proper physics education. However, how large the effect is, and whether it is dangerous or not, is a totally different matter. Here, we have to look at sensitivities of different feedback effects, and on this point I am not convinced by the actual state of the science. I have done complex modelling, and you never get usable solutions if the uncertainty in your coefficients and conditions is as big as those feedback effects (that’s why we have large deviations between models and real-life year-to-year temperature calculations).

And if I now read that there were even higher CO2 levels 400 million years ago, then I am well aware that there is a lot we don’t know yet about past climate (as much was already acknowledged by Mr. Mann on the NAS panel).

259, Chris, by coincidence I just read that. Herein lies his un-House-of-Lords character with respect to everything: he stands by his beliefs despite their being false, illegal, or ludicrous. A typical spokesperson for contrarians, many of whom hold a similar philosophy. The substance has meaning only to those resonating with the same anti-AGW sentiments. Just as with the Lords logo, swiped from the Queen’s inventory, so by habit and transference he takes IPCC data and transforms it into his own graphs: heavily distorted, but slightly recognizable, similar enough to be used even by meteorologists on FOX “news” shows, meeting the no-peer-review presentation standard often used with fellow contrarians, just to spur further descent into nonsense arguments, discussions, and the waste of time contrarians clearly want. Obfuscation has always been their weapon of choice. Gavin handled him best when he was here; there is no debating the leader of a political movement who understands science about as well as he understands his role in the House of Lords.

If some of these papers have merit, I think it will change the global warming debate considerably.

No, they won’t. They’ve been seen before in the form of Rutledge’s work and others’. No new news here for anyone following energy issues closely. The relevance to PO discussions is only that the paper reinforces previous understandings. The relevance to climate is zero.

First, big “if.” Because someone writes a paper, doesn’t mean it is accurate.

Second, irrelevant. The changes we are seeing at 390 ppm were not expected until we hit even higher levels of CO2 and/or CO2e. More cannot be better. We have raised CO2 by 105 or so ppm. Burning the rest of the FFs we have, and we are at 50% used of each, or less, will raise that at least another 100 ppm, given a similar period of use. A recent paper indicated severe changes in climate and effects thereof at between 400 and 550 ppm. Taking them to 500ppm is gambling with the future at best, and suicide at worst.

Third, using the rest of the FFs at the same rate we already have ignores a number of feedback systems. Methane from the seabed and permafrost has already been noted to be occurring. With the oceans warming, there is no reason to think this will stop any time soon. A paper on the methane venting from the sea floor also noted that a very small portion of the methane held in the sea bed could trigger massive CO2 rises, thus temps. So, should a catastrophic failure, or even a slow failure over decades, occur, it would swamp the FF emissions. BTW, methane concentrations have gone from a steady 0.7 over the last hundreds of thousands of years to a current 1.7 (ppm, not ppb).

Fourth, some carbon and heat sinks appear to be slowing their uptake, indicating saturation. This is the equivalent of raising emissions even higher.

Fifth, it seems a degree or more of warming is already guaranteed, even if we ceased immediately.

Sixth, as with K. Aleklett and others, they do not consider the chaotic nature of climate, or that it can change rapidly.

The number is 350 or lower. Take no solace that we have achieved climate chaos burning only half of the FFs. As I posted on the non-peer reviewed paper by Rutledge, much-discussed on http://www.theoildrum.com, climate sensitivity, in the colloquial sense, at least, of the entire climate system, is higher than currently believed. Otherwise, we wouldn’t be seeing the changes we already are.

These are non-linear (climate) and chaotic (everything else) systems working in tandem. Rapid changes can, and almost certainly will, occur.

The real implications for FF reserves and climate come in the long term, as the planet should begin to cool at some point, and, in fact, likely already had prior to the Industrial Revolution, making those reserves valuable mitigation tools against cooling in the far future.

Also, oil, in particular, is extremely fungible and cannot be 100% replaced by any other substance. This makes it an extremely valuable asset in maintaining technology and in technological advancement.

Looking at single issues as proxies for complex systems is a good way to mess things up very badly.

Sambo, catastrophic is actually an adjective I use quite a lot in applied physics (specifically radiation effects in semiconductors). Systems do fail catastrophically, that is, irreparably and irrecoverably. Looking at environmental threats, we have:
1) potential catastrophic failure of aquifers
2) potential catastrophic failure of sensitive ecologies ranging from oceans to mountain tops
3) potential catastrophic failure of crops on a global scale

and on and on. I do not think it is a good strategy to become so afraid of being called “alarmist” that we cease to use the language as it was intended and as it has always been used. The fact is that many of the threats we face are alarming.

NASA global July temp not yet published, but thumbnail-map reveals 0.71C warmer than normal; would be hottest ever: http://bit.ly/Gisthom

[Response: No. Please do not jump the gun on issues like this because misunderstandings (the little gif is for the met-station index, not the land-ocean index) will end up being blasted around the web as if they were official press releases. They are not, so please be patient while the data are checked. – gavin]

I am also on the side of those who prefer to concentrate on the middle of the range of main stream projections rather than the worst of them…..BUT…

I am not sure that the Pakistanis will appreciate the delicate debate about the meaning* of the adjective ‘catastrophic’. They are just receiving a taste of what it is like, and that is just a meteorological event rather than a climatological trend.
————-
* Not including Rene Thom’s version.

Regarding #268 (potential catastrophic failure of aquifers)…I found this curious. Ray, I would think this requires an extremely substantial sea level rise in order to qualify as a potential catastrophic environmental threat. Where are you getting your information on this?

I’m not disputing that catastrophic is applicable, I’m just noting the political consequences of using it too often, as I think it is being used at the moment. For an example of what I’m talking about see

I don’t agree with what he says, but his impression is fed by seeing too many headlines that are “alarmist”, even if, in the strict sense, they are entirely factual and correct.

The end result is that political action is much harder to secure and takes longer; thus the consequences of the “catastrophic” events affect more people. I do realize that what I’m saying appears backwards, but I strongly believe it is correct.

BTW, I’m an engineer and I do use the term catastrophic in failure analysis. It is valid, although I would argue the context and scope of “catastrophic semiconductor failure” is much clearer and less open to interpretation than “catastrophic anthropogenic global warming/climate change”. Just my opinion though.

Thanks all for the insights on O2. Differences in scale often make things hard to understand until they are pointed out.

One question for Patrick–

You said, “Halting all marine photosynthesis and letting respiration/decay continue at the same rate (it would actually decay over time as less organic C would be available) would result in an O2 decrease at a rate of about 0.011 % per year, but it could only fall at that rate for about 3 weeks, with a total O2 decrease of about 0.000675 %”

Yes, the transition from fossil fuels to clean renewable energy sources is a job of work to do.

But once it’s done, it’s done, and maintaining a wind/solar energy infrastructure in perpetuity is pretty easy and extremely low-cost. And of course as the technology improves (which it is already doing rapidly), and that infrastructure gets periodically upgraded, it just keeps getting better and better.

And the transition is a far easier job than most people imagine, technologically and economically. In fact it will have enormous, far-reaching economic benefits. Except of course for the fossil fuel corporations, who will see their hundreds of billions of dollars in profits shift to other sectors of the economy.

The ongoing, extraordinary growth of renewable energy — both the rapid, widespread deployment of today’s mature wind and solar technologies, and the ongoing development of even better wind, solar, storage and distributed smart-grid technologies — is in itself a wonderful thing to behold. The solution to all of our fossil fuel problems is, in fact, at hand, with the technologies already in hand.

Unfortunately, it’s a race against time, and at this point, global warming is winning.

And what is slowing the phaseout of fossil fuels is not the lack of alternatives, or any technological or economic barriers to implementing them. The ONLY obstacle is the massive, entrenched wealth and power of the fossil fuel corporations.

SecularAnimist… Very well said. I think you are exactly right. We are at the cusp of a major paradigm shift in energy. It’s extremely encouraging that car companies are stepping up to the plate with new EVs.

The future of humanity could be incredibly bright once we get past this difficult phase. Like you, I only hope it’s not too little too late.

Thank you for pointing out the James Annan thread, but it doesn’t deal with this particular issue. I’ve noted belatedly that both the Blackboard and Pielke Jr’s blogs also have threads discussing this, the former in rather more detail around the particular issues I’d been looking at.

Ray Ladbury #257

I’m pleased we have now got to the point where you agree that climate models include parameter estimation. Now, while I didn’t raise the issue of statistical models and dynamic models, you seem to be at great pains to prevent the corruption of the latter by the former.

I’d just make two points you should perhaps think about.

First, in assessing skill you are comparing the forecast from the model in question with that of a naive model, which almost ipso facto will be a statistical model. I suspect it was your not understanding that it was the construction of naive models that was being discussed that got you rushing into print in the first place.

Second, parameter estimation and the study of variability in any model come down to studying statistical models. You can’t avoid it. They’re joined at the hip, right down to measurement problems at the most supposedly deterministic ends of physics. And as a consequence, any study of the utility of a model in terms of its fit with the real world also comes down to studying statistical models.

I’m not really sure where all this leaves us. Two reflections:

First, I think Hargreaves has made an error, and there is a strong argument that Hansen is less skillful than a naive model.

Second, I’ve had my view reinforced that a greater appreciation of statistics is needed, not just in climate science but apparently even in physics (and I say that as a non-statistician).
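For readers following this exchange, the "skill" comparison being argued about can be made concrete. A minimal sketch (my own illustration; the anomaly numbers below are made up for demonstration, not real GISTEMP or Hansen-1988 values): a forecast is conventionally called "skillful" if its mean squared error beats that of a naive reference model.

```python
# Minimal sketch of a skill score: skill = 1 - MSE_model / MSE_naive.
# Positive skill means the forecast beats the naive reference.
def mse(pred, obs):
    """Mean squared error between a forecast and observations."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def skill_score(model_pred, naive_pred, obs):
    """> 0 when the model outperforms the naive reference model."""
    return 1.0 - mse(model_pred, obs) / mse(naive_pred, obs)

# Made-up illustrative anomalies (C) for demonstration only.
obs   = [0.10, 0.20, 0.35, 0.40]   # "observations"
model = [0.12, 0.22, 0.30, 0.42]   # hypothetical dynamical-model forecast
naive = [0.10, 0.10, 0.10, 0.10]   # persistence (no-change) reference
print(skill_score(model, naive, obs) > 0)  # True: positive skill here
```

The whole argument then reduces to which naive reference (persistence, a fitted linear trend, etc.) is the fair one to use, which is exactly where the disagreement above lies.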

While I agree with your position in theory, I think it’s overstated. Transitioning to clean technologies is an extremely long term, investment-intensive effort. The world has converted infrastructures (telegraph and telephone lines, satellites, road surfaces, rail lines, airports) many, many times, but never on this scale. Every vehicle, motor, engine, tool, power plant, service station, transport line, utility machine, factory, etc., etc., in a now huge, massively interconnected world must be changed.

Beyond this, the only way to convert will be by using fossil fuels. One can’t bootstrap by using electric to convert to electric. We must use our FF tools to convert to clean-technology tools.

And then, there are still huge technological hurdles, not the least of which is the problem of fuel cells/power packs to be able to put energy into efficient, easily transportable forms, as well as the problem of how to efficiently store unused energy surplus.

Added to this is human nature, and the nature of human society. Soviet communism failed because people cannot work together, or rather, they cannot work with the efficiency with which a human brain directs the attached human body. There are fits and starts, pushes and pulls, advances and setbacks. And that would be within one corporation, or industry, or nation. Across multiple nations and ethnic groups and economies and technologies and industries, it’s daunting and destined to be slowed by extreme friction.

That’s a tough row to hoe. Yes, that makes it all that much more important to get started ASAP, but the lure of the rosy image at the end of the extensive labor will not make getting there that much easier, because it is still very, very far away.

SecAni
“advanced, elegant, low-environmental-impact technology” is a very narrow opinion. These words could also be used to describe fossil fuels, because they are more “advanced, elegant, low-environmental-impact” than energy sources that are not (whatever you could imagine those to be).

Once the transition from fossil fuels is complete, there will likely be movements against solar and wind, because every energy source has its positives and negatives. I know you would like to downplay the negatives, but they are still there. You can never get something for nothing. These movements may be beneficial, as long as they don’t demonize the current energy consumers or producers.

Organic based solar panels are just starting to reach viability. Nuclear is a poor choice to rely on and the transition to use biochar is slow and a bit expensive. I am not sure how anyone can believe this transition is easy, cheap or quick.

I believe that the difficulty of the renewable energy transition has been grossly exaggerated — indeed, the fossil fuel industry’s propaganda campaign to exaggerate the costs and “problems” of wind and solar and denigrate their potential, is on a par with the AGW denial campaign, and has in fact been going on longer. And of course they serve the same purpose: to perpetuate for as long as possible the fossil fuel industry’s billion-dollars-per-day profit stream.

Of course “investment” will be needed — investments that will pay off handsomely and beneficially.

Of course it will take time — although it need not be anywhere near as “long term” as some people think; well-thought-out plans for transitioning to 100 percent renewable energy within 10-20 years exist.

(BTW, if I’ve correctly extended Gavin’s test in the above link through 2009, the Scenario B trend in Hansen 1988 has now drifted just outside the error estimate of the lower GISTEMP record. But nature seems to be playing a bit of catch-up lately, so we’ll see how it goes in a year or two.)

SA, while I am encouraged by Worldwatch and Vital Signs data and information, many of the claims are still filled with overgeneralized filler and, at times, misleading numbers. That wind power and solar use are up in places in Germany, the Netherlands and China, along with parts of the US, is not a bad thing, but windmills are still terribly inefficient, and the battle against the initiative to place solar panels in the Mojave also slows down progress.

HAS,
We have something in common: Neither of us has a clue what you are talking about.

Yes, statistical inference is an element in a dynamical model. That does not make the model “statistical”. The dataset used to determine the parameters is different and distinct from the verification dataset. Indeed, the physical system may be different.

Your assertion that Hansen-1988 is less skilled than a naive model is absolute BS. You have zero basis for this assertion. In the first place, there is no reason why a naive model would have predicted a 32 year warming trend. It is pretty much inevitable given the physics in Hansen’s model. Hansen’s model and its successors also predict stratospheric cooling and get it pretty much right.

Now even more puzzling is why you are so desperate for the model to be “unskilled”. Do you think that if the models are wrong that you can simply call off the crisis and go on your merry way? Hardly. That the climate is warming is an established fact. That CO2 is a greenhouse gas is 100% certain. That CO2 sensitivity exceeds 2.1 degrees per doubling is a matter of 95% confidence. That implies a likely warming this century of more than 4 degrees C, and that without any fancy model. Without skilled models, we are flying blind, and have all the more reason to exercise extreme caution.

Particularly not when those denialists whom you claim are “following suit” claim to have been awarded a Nobel Prize when they haven’t. Claim to have had a peer-reviewed paper published in a scientific journal when in fact it was an un-reviewed editorial. Claim the earth is cooling when it isn’t. Claim that the sun is responsible for the observed warming when its output has diminished over the last several solar cycles. Claim that weather is climate, etc., etc., etc.

Last time I checked the dictionary, a ‘scenario’ is a prediction based on a set of assumptions. So Monckton is absolutely correct to call it an IPCC prediction. The IPCC may have made several different predictions based on several different sets of assumptions, but they are still predictions. You are picking nits. I don’t see anything qualitatively wrong with Monckton’s graphs [edit to remove gratuitous insults]

[Response: So let’s say I take your comment and edit it some more to say “Monckton is absolutely … wrong” and then claimed publicly that this was your opinion (when it clearly is not). Wouldn’t you be bothered that someone was claiming you said something that you didn’t say? That is what Monckton is doing to the IPCC report. A little consistency please. – gavin]

JacobMack – “primarily fantasy”? You’re pretty hair-triggered about something that has a working prototype paid for by a DOT “Small Business Innovation Research Award”. The more I consider it, the more I think: what took so long for this idea to appear? The “fantasy” is in believing there’s enough political intelligence to realize that the thousands of miles of asphalt we spread on the ground every year are made with petroleum that could be going to much more beneficial uses ... while it lasts.

Thanks for the link. I didn’t realize that was there. More reading to do.

But I think my statement still stands. While a smaller volcanic eruption was factored in for 1995, it was nothing near the size of the impact of Pinatubo, either in magnitude of negative forcing or duration. While it’s not entirely fair to remove a modeled event from the record, it’s also not at all fair to compare skill over such a short time frame, when volcanic and other effectively random events can skew either path (observational or modeled) so greatly.

Given that we know, however, that no Pinatubo event was modeled in, and the impact of the modeled ‘El Chichon’ sized volcanic eruption was comparatively small… I’m just saying that if you are going to try to compare, given the short time frame, Hansen’s model to a naive linear model, then it’s also fair to compare an observed-minus-volcanic event period to Hansen-minus-volcanic-event period.

Is it really correct to do? No, absolutely not, because a chain of events is interconnected and you can’t just remove a period. But if we’re going to compare trends in a thread of blog comments to a “seventh grader” naive model, then yes, in this type of forum it becomes valid … and Hansen’s skill becomes spot on. When compared to the Scharf/HAS/7th-grader model (i.e., a simple straight line with a slope defined by the 1979–1988 trend), it beats it in predictive skill, and it wallops it in hindcasting: the naive model, with a slope of 0.5C/30 years, predicts that in 1700 the temperature of the planet was 5C cooler than in 2000, that in 1400 it was 10C cooler, and so on back.
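The hindcast arithmetic above is easy to check. A minimal sketch (my own illustration of the "7th grader" model as described in this thread, not anyone's actual code): a straight line with slope 0.5 C per 30 years, anchored at the year 2000.

```python
# Sketch of the naive linear model described above: a straight line with
# slope 0.5 C per 30 years, anchored at the year 2000 (assumed anchor).
def naive_anomaly(year, slope_per_year=0.5 / 30.0, anchor=2000):
    """Temperature offset (C) relative to the year-2000 value."""
    return (year - anchor) * slope_per_year

# Extrapolating backwards gives the implausible hindcasts quoted above:
# roughly 5 C cooler in 1700 and 10 C cooler in 1400 than in 2000,
# far outside any paleoclimate reconstruction.
print(naive_anomaly(1700))  # about -5.0
print(naive_anomaly(1400))  # about -10.0
```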