A relative sent me the article, asking for my thoughts on it. Here’s what I said in response.

Hi [Name Removed],

I don’t have time to do a full reply, but I’ll take apart a few of their main points.

The WSJ authors’ main point is that if the data doesn’t conform to predictions, the theory is “falsified”. They claim to show that global mean temperature data hasn’t conformed to climate model predictions, and so the models are falsified.

But let’s look at the graph. They have a temperature plot, which wiggles all over the place, and then they have 4 straight lines that are supposed to represent the model predictions. The line for the IPCC First Assessment Report is clearly way off, but back in 1990 the climate models didn’t include important things like ocean circulation, so that’s hardly surprising. The lines for the next 3 IPCC reports are very similar to one another, though. What the authors don’t tell you is that the lines they plot are really just the average long-term slopes of a bunch of different models. The individual models actually predict that the temperature will go up and down for a few years at a time, but the long-term slope (30 years or more) will be about what those straight lines say. Given that these lines are supposed to be average, long-term slopes, take a look at the temperature data and try to estimate whether the overall slope of the data is similar to the slopes of those three lines (from the 1995, 2001, and 2007 IPCC reports). If you were to calculate the slope of the data WITH error bars, the model predictions would very likely be in that range.
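To make the "slope WITH error bars" point concrete, here is a minimal numerical sketch. It uses made-up numbers (a 0.02 deg/yr trend plus random noise), not real temperature data or any actual model output; the point is simply that a fitted trend comes with a standard error, and a projected slope should be judged against that range rather than against the wiggles.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "temperature" series (invented numbers, NOT real data):
# a true underlying trend of 0.02 deg/yr plus year-to-year noise.
years = np.arange(1980, 2012)
true_slope = 0.02
temps = true_slope * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

# Ordinary least-squares slope and its standard error.
n = years.size
x = years - years.mean()
slope = (x * temps).sum() / (x ** 2).sum()
resid = temps - temps.mean() - slope * x
se = np.sqrt((resid ** 2).sum() / (n - 2) / (x ** 2).sum())

# A projected long-term slope is consistent with the data if it falls
# within roughly two standard errors of the fitted slope.
lo, hi = slope - 2 * se, slope + 2 * se
print("fitted slope: %.4f +/- %.4f deg/yr" % (slope, 2 * se))
```

Run on noisy data like this, the fitted slope lands near the true value, and the question becomes whether a projection falls inside the (lo, hi) interval, which is exactly the comparison the WSJ authors skipped.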

Comparison of the spread of actual IPCC projections (2007) with observations of annual mean temperatures

That brings up another point. All climate models include parameters that aren’t known precisely, so the model projections have to include that uncertainty to be meaningful. And yet, the WSJ authors don’t provide any error bars of any kind! The fact is that if they did so, you would clearly see that the global mean temperature has wiggled around inside those error bars, just like it was supposed to.

So before I go on, let me be blunt about these guys. They know about error bars. They know that it’s meaningless, in a “noisy” system like global climate, to compare projected long-term trends to just a few years of data. And yet, they did. Why? I’ll let you decide.

The WSJ authors say that, although something like 97% of actively publishing climate scientists agree that humans are causing “significant” global warming, there really is a lot of disagreement about how much humans contribute to the total. The 97% figure comes from a 2009 study by Doran and Zimmerman.

So they don’t like Doran and Zimmerman’s survey, and they would have liked more detailed questions. After all, D&Z asked respondents to say whether they thought humans were causing “significant” temperature change, and who’s to say what is “significant”? So is there no real consensus on the question of how much humans are contributing?

First, every single national/international scientific organization with expertise in this area and every single national academy of science, has issued a statement saying that humans are causing significant global warming, and we ought to do something about it. So they are saying that the human contribution is “significant” enough that we need to worry about it and can/should do something about it. This could not happen unless there was a VERY strong majority of experts. Here is a nice graphic to illustrate this point (H/T Adam Siegel).

But what if these statements are suppressing significant minority views, say 20%? We could do a literature survey and see what percentage of published papers question the consensus. Naomi Oreskes (a prominent science historian) did this in 2004 (see also her WaPo opinion column), surveying a random sample of 928 papers that showed up in a standard database with the search phrase “global climate change” during 1993-2003. Some of the papers didn’t really address the consensus, but many explicitly or implicitly supported it. She didn’t find a single one that went against the consensus. Now, obviously there were some contrarian papers published during that period, but I’ve done some of my own not-very-careful work on this question (using different search terms), and I estimate that during 1993-2003, less than 1% of the peer-reviewed scientific literature on climate change was contrarian.

Another study, published in the Proceedings of the National Academy of Sciences in 2010 (Anderegg et al, 2010), looked at the consensus question from a different angle. I’ll let you read it if you want.

Once again, the WSJ authors (at least the few that actually study climate for a living) know very well that they are a tiny minority. So why don’t they just admit that and try to convince people on the basis of evidence, rather than lack of consensus? Well, if their evidence is on par with the graph they produced, maybe their time is well spent trying to cloud the consensus issue.

The WSJ authors further imply that the “scientific establishment” is out to quash any dissent. So even if almost all the papers about climate change go along with the consensus, maybe that’s because the Evil Empire is keeping out those droves of contrarian scientists that exist… somewhere.

The WSJ authors give a couple of examples, both of which are ridiculous, but I have personal experience with the Remote Sensing article by Spencer and Braswell, so I’ll address that one. The fact is that Spencer and Braswell published a paper in which they made statistical claims about the difference between some data sets without actually calculating error bars, which is a big no-no; if they had done the statistics, it would have shown that their conclusions could not be statistically supported. They also said they analyzed certain data, but then left some of it out of the results: the part that just happened to completely undercut their main claims. This is serious, serious stuff, and it’s no wonder Wolfgang Wagner resigned from his editorship–not because of political pressure, but because he didn’t want his fledgling journal to get a reputation for publishing any nonsense anybody sends in. [Ed. See this discussion.]

The level of deception by the WSJ authors and others like them is absolutely astonishing to me.

Barry

PS. Here is a recent post at RealClimate that puts the nonsense about climate models being “falsified” in perspective. The fact is that they aren’t doing too badly, except that they severely UNDERestimate the Arctic sea ice melt rate.

262 Responses to “Bickmore on the WSJ response”

1. Most importantly: a comparison over a short period (less than about 15 years, which applies at least to the last two IPCC curves shown) is not very meaningful. This is because short-term variability due to various factors (El Niño, volcanic eruptions, solar variability) has a magnitude similar to the global warming signal over such short periods (over longer periods the warming signal dominates, because it only goes upward rather than wiggling up and down). Due to these short-term wiggles you can get a 10-year period with a very strong warming trend or with very little warming trend, depending on which time interval you pick, but this has nothing to do with an under- or overestimated warming trend due to greenhouse gases. A proper comparison for a sufficiently long period (i.e. for the projections of the Third Assessment Report) is shown here: http://www.pik-potsdam.de/~stefan/update_science2007.html as an update to our Science paper of 2007.
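Point 1 can be illustrated with a toy simulation; every parameter below (trend, noise amplitude, persistence) is invented for the sketch and fitted to nothing. A steady warming signal plus persistent "weather" noise yields 10-year trends that scatter widely around the true slope, while 30-year trends cluster near it.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic series: steady 0.02 deg/yr signal plus AR(1) noise
# (all parameters invented for illustration).
n_years = 120
t = np.arange(n_years)
noise = np.zeros(n_years)
for i in range(1, n_years):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 0.12)
series = 0.02 * t + noise

def window_trends(y, w):
    """Least-squares slope in every length-w sliding window."""
    x = np.arange(w) - (w - 1) / 2
    denom = (x ** 2).sum()
    return np.array([(x * y[i:i + w]).sum() / denom
                     for i in range(len(y) - w + 1)])

short = window_trends(series, 10)    # 10-year trends
long_ = window_trends(series, 30)    # 30-year trends

# The 10-year trends scatter widely around the true 0.02 deg/yr;
# the 30-year trends cluster much more tightly near it.
print("10-yr trends: min %.3f, max %.3f" % (short.min(), short.max()))
print("30-yr trends: min %.3f, max %.3f" % (long_.min(), long_.max()))
```

The spread of the short-window trends is much larger than that of the long-window trends, which is why a decade of data cannot, by itself, falsify a projected long-term slope.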

2. In Foster & Rahmstorf 2011 (http://iopscience.iop.org/1748-9326/6/4/044022) we have shown that if you take the known short-term variability effects out, you’re left with a completely steady warming trend. The IPCC lines shown are projections that do not account for future solar variability, volcanoes or a specific sequence of El Niño or La Niña. The projections are the smoothed, averaged effect of the rising greenhouse gases. So they should be compared with the warming signal as computed by Foster & Rahmstorf, after the variability factors are removed. This actually allows comparison also for shorter periods, and then again you see good agreement of observed warming with projections.
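The Foster & Rahmstorf approach described in point 2 is, at heart, a multiple linear regression. Below is a hedged sketch with synthetic stand-ins for the ENSO, solar, and volcanic factors (all numbers invented, not the actual indices used in the paper): regressing temperature on those covariates plus a linear trend recovers the underlying trend, and subtracting the fitted covariate terms yields the "adjusted" series.

```python
import numpy as np

rng = np.random.default_rng(1)

# 30 years of monthly data; all numbers below are invented stand-ins.
n = 360
t = np.arange(n) / 12.0  # time in years
enso = np.sin(2 * np.pi * t / 3.7) + 0.3 * rng.normal(size=n)
solar = np.sin(2 * np.pi * t / 11.0)
volcanic = np.zeros(n)
volcanic[100:130] = -1.0  # one hypothetical eruption

true_trend = 0.017  # deg/yr, chosen arbitrarily for the sketch
temp = (true_trend * t + 0.12 * enso + 0.05 * solar
        + 0.25 * volcanic + rng.normal(0, 0.08, n))

# Multiple regression: design matrix [1, t, enso, solar, volcanic].
X = np.column_stack([np.ones(n), t, enso, solar, volcanic])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
print("recovered trend: %.4f deg/yr (true %.3f)" % (coef[1], true_trend))

# "Adjusted" series: data minus the fitted short-term pieces.
# What remains is essentially steady warming plus residual noise.
adjusted = temp - X[:, 2:] @ coef[2:]
```

With the short-term factors regressed out, the adjusted series is the thing that can meaningfully be compared to smooth greenhouse-gas projections, even over shorter periods.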

3. The WSJ graph cherry-picks the HadCRUT3 data, which show the least
warming of any surface temperature data set over the period shown. The
reason for the discrepancy with other data has been shown to be the
lack of coverage in the Arctic, which has warmed most strongly over
this period. In fact these data are being replaced now with the
HadCRUT4 data where the data coverage has been improved and the
warming now agrees better with the other data sets.

4. An even more minor point: the vertical positioning of the IPCC lines is wrong. They are tacked onto the years in the data when the IPCC reports appeared, which are not actually the years where the IPCC projections start. E.g. the projections of the 3rd assessment report start in 1990, and those of the 4th report in 2000. Also, because the projections do not account for interannual variability, they should not be tagged onto single years, no matter what year one chooses, but onto the smooth temperature evolution. Many readers of the WSJ would not realise that what counts here is not whether the red curve is below the IPCC lines (which depends entirely on what year you tag the IPCC line to), but whether the average slope agrees (and that only after subtracting short-term variability as explained above).

Any thoughts about where they get the idea that Ocean Heat Content is “perhaps not increasing at all”? Are they living in some kind of bizarro world where this (http://i.imgur.com/k3Rre.gif) is not an increase in OHC?

As a non-climate scientist but as someone who uses technical information professionally, I have been completely dismayed by the whole Gang of 16 affair and their follow-ups. From the Lysenkoism reference (heck, why didn’t we throw in the Piltdown Man hoax?) to the martyrdom of Chris de Freitas to the graph referred to above, this motley crew has made further serious policy discussion (are you listening, Rogers Sr. and Jr. and Judith C.?) even more unlikely, if that is possible. Can you imagine a Republican majority in the Senate and a continued House majority after the 2012 election? Thomas Jefferson, author of the American Declaration of Independence and our third president, wrote in his “Notes on the State of Virginia”, “I tremble for my country when I reflect that God is just, that His justice cannot sleep forever.” He was writing about the corrosive effects of the institution of slavery. I think if we reflect instead on physical processes that we understand reasonably well and the changes we have set in motion, the quote may have new meaning–and the Gang of 16 will have explaining to do down the road. Small comfort, that.

“The individual models actually predict that the temperature will go up and down for a few years at a time, but the long-term slope (30 years or more) will be about what those straight lines say.”

So, over any 30-year period going forward, roughly 15 of those years (give or take, depending on the magnitude of the departures) should have a global average temperature above the long-term slope line?

According to the IPCC AR4 individual-realizations graph above, the global average temperature has been below the ensemble average for about 6 years, by varying amounts. This is an unprecedented departure since 1980. Is there a graph that goes further back in the hindcast, so that we could see whether there were similar departure episodes that eventually “worked out” to the ensemble average over the long haul (a 30-year period)?
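One quick way to get intuition for this question is to simulate persistent internal variability and count how often multi-year runs below the trend line occur by chance. The sketch below uses an AR(1) noise model with invented parameters (not fitted to any model ensemble or observations), so it only illustrates the qualitative point.

```python
import numpy as np

rng = np.random.default_rng(7)

def longest_run_below(years=100, phi=0.7, sd=0.1):
    """Longest consecutive run of below-trend years in one synthetic
    century of AR(1) variability (parameters invented)."""
    noise = np.zeros(years)
    for i in range(1, years):
        noise[i] = phi * noise[i - 1] + rng.normal(0, sd)
    below = noise < 0  # years below the underlying trend line
    longest = run = 0
    for b in below:
        run = run + 1 if b else 0
        longest = max(longest, run)
    return longest

# Simulate many synthetic centuries and see how often persistence
# alone produces a below-trend run of 6 years or more.
runs = [longest_run_below() for _ in range(200)]
print("fraction of centuries with a >=6-year run:",
      np.mean([r >= 6 for r in runs]))
```

Under noise this persistent, long below-trend episodes show up routinely without any change in the underlying trend, which is the kind of hindcast comparison the commenter is asking about.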

I appreciate such a thoughtful and well crafted response. One of the problems many non-climate scientists like myself have in trying to follow authoritative arguments by people who make their living in the field, is that we get lost in the jargon.

In viewing the charts you provide and reading the accompanying explanations, the WSJ authors remind me vividly of Marty Feldman’s hunchbacked character Igor in Young Frankenstein. When offered help for his obvious deformity, he replied in innocent surprise, “What hump?”

Responding to a comment by AIC at 328 of February Unforced Variations, I tapped out a Blow by Blow Precis of this WSJ letter (Comment link – http://www.realclimate.org/?comments_popup=10823#comment-228620 ), cutting it down from 2,000 words to 200. Thus exposed, their argument is evidently laughable.
One point not mentioned in the post here is the figure given for the rate of warming. Despite their assertions about small surface temperature increases, they manage to come up with the rise being perhaps 0.2 deg C over ten years, a figure that is surely rather high. But that is the point in their argument when they summon up the Little Ice Age and its chum the MWP, when Eskimos exported wine to England or some such.
Maybe that is the same wine these retired space cadets have been imbibing coz with a letter as trashy as this they sure do need an excuse.

Copying part of my posting over from #334 in Unforced Variations: February 2012

When you get down to it, the WSJ letter needs a response on four levels:
1) AGW is happening.
2) If fossil fuel CO2 emissions continue at a rapid rate, it is going to be very bad for our civilization.
3) We can develop substitute sources of energy for our civilization at a cost less than the cost of continued AGW.
3a) The sooner we get working on developing non-fossil-fuel sources of energy for our civilization, the easier it will be.

“These people are not “skeptics”. They are not “contrarians”. They are deliberate, calculating liars.”

The fact that they must be aware that a reasonable level of education allows people to see the paucity of their argument shows another thing: They fully expect their target audience not to think critically about what they write. i.e. they’re hoodwinkers relying on the ‘one born every minute’ principle.

From the title (Wall Street Journal) I’d assume that people use the WSJ to help them make financial decisions. Would you take investment advice from an advisor who dissembles and thinks you’re stupid?

Your error bars comment is scary. I mark first year undergraduate lab reports, and if they had made those conclusions without reporting their error bars I would have put a big red circle around it and written ‘ERRORS!!!’ next to it, then knocked off marks.

Weren’t these scientists and engineers trained in uncertainties? I thought it was standard practice in undergraduate degrees.

It seems quite a common thing to do though: Spencer & Braswell got PUBLISHED without properly reporting uncertainties.

“From the title (Wall Street Journal) I’d assume that people use the WSJ to help them make financial decisions. Would you take investment advice from an advisor who dissembles and thinks you’re stupid?”

This would make a perfect letter to the editor of the WSJ. It cuts to the quick.

SA: Well, that’s the whole point — to make sure that any serious effort to address AGW is “off the table” for the foreseeable future.

It’s fun to imagine a world where tomorrow or a month from now the last remaining reserves of hydrocarbons were finished being monetized. Exactly how quickly would the contrarian universe collapse? How many “Fellows” from how many “thinktanks” would find themselves looking for their next opportunity for carpetbagging? How many unpaid volunteer chumps would be looking for another hobbyhorse to ride?

Prof. Bickmore asks why the authors of the letter compared long-term projections to short-term data, while omitting the error bars on the projections. He is too polite to answer the question, but I am not. Their intent was to mislead; in short, they were lying.

Prof. Bickmore further states that he is astonished by the level of deception exhibited by the authors in the letter. I am not, for the editorial page of the Wall Street Journal is a cesspit. The only advantage of reading it might be to educate the more naive among us in the interests and motives of the few thousand families who run the USA.

Naomi Klein has pointed out that denialist positions are not necessarily based on ignorance or misunderstanding of the science. Quite often they spring from the accurate perception that the measures to mitigate fossil carbon release must involve increased regulation, increased governmental power, increased costs of production as the costs of CO2 release are internalized, and redistribution of resources from the developed countries to the developing world. All these are anathema to a robber-baron’s soul, such as it is, to be resisted by every means possible. Some of these means include paying shills to mouth propaganda in exchange for their thirty pieces of silver. Some involve legal attacks on climate scientists. Others might be orchestrating sock puppets on web fora, such as this one.

@1…
Disagree with point 4 being “minor”. By starting the prediction line at a local maximum, even a prediction line with exactly the correct slope is assured of lying above the measured values. It gives the appearance, but not the reality, of “serious underprediction”.

Example R snippet showing what I mean:

# Oscillating data around a true underlying trend of 0.2 per unit x
x <- seq(0, 10, by = 0.01)
y <- 2 * cos(5 * x) + 0.2 * x
plot(x, y)
# A line with the CORRECT slope (0.2) but anchored at a local maximum
# (intercept 2): nearly all of the data fall below it
abline(2, 0.2)

Note the trend is "obviously" mispredicted.

This is one of those small deceits that actually fools people–as I think @8 is fooled if I understand that post.

The Arctic warming is so strong that it makes this WSJ op-ed look like the work of US isolationists not looking beyond the contiguous USA, again a dishonest presentation. For those who don’t know, I have shown on my website small glaciers melting since 2006 (scroll down, please). Taking the graph they presented at face value would mean that these glaciers should have remained in a quiescent state. Why? Because the High Arctic is mostly dark during winter and has a low sun during summer, with a sun elevation much like that at 45 degrees latitude North in midwinter. This means that Arctic heat is dominated by advection from the South, either from sea currents or from warmer northwards-migrating cyclones. The mock intellectual freedom posted by the WSJ here is the equivalent of propaganda during wartime. The only battle they want is for us to argue on their la-la-land version of a planet in no dire warming straits; the temperature graph they displayed looks similar to the real one while adding a deliberate illusion of deception. I strongly suggest we respond with the overwhelming evidence we have at our disposal. Exposing the truth in more than one way reveals the lies even better.

I just realized the humor in this: “…there really is a lot of disagreement about how much humans contribute to the total. ”
Given long-term geologic trends that say we should be entering a new ice age soon, easily 110% of the observed warming can be attributed to anthropogenic CO2. (I.e., without man-made greenhouse gases, the Earth might be in a natural cooling trend leading to an ice age; hence our GHGs trap enough heat to cause the observed temperature rise, plus the heat to compensate for the natural cooling.)

Look guys, you can pick apart any analysis of the temperature trends with a “it hasn’t been 30 years yet” or “we already fixed that problem in the latest model run”. But everyone can check out current performance and make some observations.

1. The measured temperature trends are below what was expected. It may or may not correct itself. But 12 years is enough to say “it is running low” at the moment. If you want to cover your eyes until the 30-year timer expires, go ahead. I sure hope the modelers aren’t doing so, or the model-update iteration loop will take forever.

2. The high-level theory of positive feedback expects that both sea level and temperature will begin to rise in an ACCELERATING fashion given linear or greater CO2 increases under the BAU expectation. Well, CO2 has been more or less BAU, and there is no evidence of an accelerating temperature rise, or sea level rise (satellite measurements), over the past several decades. In fact the trends are the opposite of this. I consider this the most important parameter measurement of all; am I wrong?

Things may change in the next 10 years, one way or the other. But to sit back and pretend that no-one but a climate scientist can correctly read and interpret a graph is an insult to an engineer’s intelligence. It’s very clear all things being equal that CS is being over-estimated given the actual measurements to date.

[Response: Lovely bit of spin – “things may change” to “it’s very clear that CS is overestimated”. It’s not clear at all in fact – if we had better data on aerosol forcings, if we had better coverage and less uncertainty in the OHC numbers, then perhaps the recent decade would be marginally useful in constraining CS, but we don’t. Your ‘very clear conclusion’ is only possible if you are not looking at the whole picture. – gavin]

Seriously, though, I am curious as to how the modelers make interim judgments on model quality. They must have some mechanisms; I would be very interested in seeing these internal variables posted against actual measurements.

[Response: Models are assessed based on matches to satellite-era climatologies for the most part – read Schmidt et al (2006) for a discussion. – gavin]

I suspect many will assume this type of data will only be used to attack the integrity of the models if it is released, and you would probably be right. However I think that most people (myself included) have little faith in these things to start with, so there is little ground to lose.

I’m also curious as to how the models hold up over time at the regional level and smaller. I assume these models must map out large weather systems, pressure systems, ocean currents, etc. We all know they are quite sensitive to initial starting conditions. So it seems to me that you can compare the actual measurements of the larger systems to the models and at some point the divergence becomes so large that running the model further becomes effectively meaningless against reality. The question is: How long does that typically take? 1 month, 1 year, 10 years? I have no idea, but it seems like the model that can go the furthest has the best chance of being correct.

I understand at a high level that the models really can’t be held to this standard to be “useful”, with chaos and probabilities taking their toll. However until the models get a track record with good prediction skill against a reasonable null model (1C per century) than they really aren’t telling us much.

[Response: You can tell yourself this as many times as you like, but it won’t make it true. – gavin]

The rough market valuation of the top one hundred petroleum companies together with the top one hundred coal companies totals something like eight trillion dollars. Double or triple this to include sovereign, nation-state reserves and you have something like $20 trillion. This market perception is based entirely on the future prospect of mining these substances and setting them on fire.

If you wanted to prevent a market revaluation like you wouldn’t believe (with geopolitical ramifications), wouldn’t you put forward supposed learned spokespeople (usually older white males) such as this Murdoch enabled group?

Then there is Heartland and countless other protectors of vested, invested and nation-state interests. They can all say what they want in our post-Fairness Doctrine world. Welcome to the fossil end-game; it’s going to be a bumpy ride. By the way, global fossil-fuel profits are on the same scale as the half-trillion dollars per year of public financial handouts (IEA, 2011).

Tom Scharf throws out an El Niño/La Niña dismissive. What he fails to see is that successive La Niñas (the cool ones) keep getting warmer. And that includes the 12 years or so where ‘warming stopped.’ If warming stopped, why do the cool years keep getting warmer?

#26 Gavin: “Your ‘very clear conclusion’ is only possible if you are not looking at the whole picture. – gavin”

Well you are right about that, it is admittedly clear as mud. If the trend is low, one has to include the possibility that CS is too high. Your other comments may be correct as well of course. It is scrambled eggs, and unscrambling them is no doubt difficult in the extreme.

I sometimes wonder if effectively reducing CS in the model is considered a “radioactive” change consciously or subconsciously. Consider that if we had versions of the 2000 models with effective lower CS, they would be forward tracking better against the current lower trend, and thus be seen as “more correct”. That would be a political liability.

Are there model runs with lower CS being performed for the next IPCC report?

Hansen made a comment about 40% changes in aerosol forcings, and that seemed like quite a lot. The large uncertainty in aerosols could be used as kind of get out of jail free card with regard to forcings.

[Response: GCMs don’t have their CS value set ahead of time – the CS is an emergent property of the model. You only get to diagnose it afterwards. Emulators like MAGICC can set their CS to whatever they like, and can be used to explore some other parts of phase space, but the uncertainty in the aerosol forcing precludes any real constraint on CS from the recent record. It is not as if we are not trying to do a better job with the aerosols – cf. Glory – but we have to work with the conditions that we have. – gavin]

The SkS article I linked to has some graphs, but they were made up by SkS themselves. They don’t deal with the FAR predictions at all. The prediction the FAR made (and the one that Barry Bickmore says, correctly, was way off) was that temperatures would rise by about 0.3 degrees per decade if few or no steps were taken to reduce greenhouse gases (their definition of BAU). The uncertainty limits are described as 0.2-0.5 degrees per decade, which corresponded to climate sensitivities of 1.5 and 4.5 degC/2xCO2. The main prediction was based on a sensitivity of 2.5 degC/2xCO2.

That’s it – that’s what the FAR told the world in its summing up of climate science at the time. I have no problem with that, and the fact that it was ‘way off’ (at least so far) is not the point. Much has been learned since then and science progresses by making mistakes.

However, there is something very wrong with even partisan advocates like those at SkS misrepresenting the whole report by failing to mention not only the central prediction but any prediction at all. Check it out – not a single mention of a prediction. It’s like the systematic revisionism in 1984 – which means that they can end their little fantasy piece by claiming that

“..even two decades ago, global climate models were making very accurate projections of future global warming”

The internet is a very strange place, [edit – no name calling please]

[Response: Note that projections are a function of two things – the scenario and the model. What was wrong in FAR was the scenario (too fast growth rate of GHGs, no aerosols, no ozone, no BC etc.), not the model (though the projections were with simple emulators not GCMs). Indeed, models today have similar sensitivities and with the same scenario will give the same temperature rise. – gavin]

So, you begin with the assumption that the design of climate models is politically motivated.

And from that assumption, you reach the conclusion that the models have a predetermined CS which is set to a politically correct value that will produce predetermined politically correct results.

However, as Gavin has just explained, the models don’t have a predetermined CS.

Might this cause you to reevaluate your starting assumption? Might you consider the possibility that the scientists who design the models are actually, in good faith, trying to figure out how the climate system actually works?

Might you consider the motives of whoever has been telling you that climate modeling is politically motivated?

Might you consider being a little more, shall we say, SKEPTICAL of such claims?

“From the title (Wall Street Journal) I’d assume that people use the WSJ to help them make financial decisions. Would you take investment advice from an advisor who dissembles and thinks you’re stupid?”

Weird, huh? Dating back to before Murdoch, the WSJ has long had a split and contradictory reputation between its reporting and its asinine fulminations in the op-ed section. It’s not just the WSJ of course, but I think what we’re looking at is the ossified world view of 19th century oligarchs.

I note that objections about the veracity of climate science often come from engineers, who in some cases seem less capable of seeing the bigger picture; but as Gavin mentioned, you can’t change reality with your beliefs just because you don’t have enough knowledge or understanding.

Mankind has emitted more GHGs, and they will trap more heat. The oceans are very good at absorbing that heat and turning it over to the deeper ocean (which, by the way, is a very good lead suspect in this case), but that does not help your case.

Natural variation of the ocean cycles can take heat down under, so to speak, but that does not change the fact that the forcing levels have been increased. Would you wait until a cancer has nearly killed you or grown to a point where recovery is extremely unlikely before you sought treatment, though you had clear knowledge of its presence, or would you upon learning early of the cancer seek treatment?

What you are saying is let’s wait till it gets worse.

I have but one question for you: can you prove that increased GHGs are not warming the planet?

The uncertainty limits are described as 0.2-0.5 degrees per decade, which corresponded to climate sensitivities of 1.5 and 4.5 degC/2xCO2. The main prediction was based on a sensitivity of 2.5 degC/2xCO2.

…

However, there is something very wrong with even partisan advocates like those at SkS misrepresenting the whole report by failing to mention not only the central prediction but any prediction at all.

Anteros needs to learn to read the graphs presented in the SkS piece and understand what they mean …

Don’t ya just love irony. Tom Scharf says, “But to sit back and pretend that no-one but a climate scientist can correctly read and interpret a graph is an insult to an engineer’s intelligence. ”

He then proceeds to show that he, as an engineer, cannot correctly interpret a graph. He then almost gets it right: “It’s very clear all things being equal…” Ah yes, ceteris paribus… But Tom, who says ceteris is paribus? It ain’t. That is what those error bars are for… or didn’t they teach you that in engineering school?

Tom, there’s really good documentation on the models. You could probably understand it with a few months effort at least well enough to find out why you were wrong.

Gavin – thanks for your response. Yes, the reason for the over-estimation in FAR is clear enough. I don’t think there’s much controversy over it [if that is possible in the climate debate..] but I suppose my reason for mentioning it is that to identify the reason for predictions being ‘wrong’ it is necessary to accept that they are ‘wrong’ in the first place – which SkS very much fail to do.

I’m surprised at the editing of my comment and the insistence on no ‘name calling’ [I think it was a defensible description]. I suppose consistency is what I’m looking for – I noticed this on my way here @6:

Great response to the intellectually dishonest hijinks in the WSJ

Incidentally, my reason for believing my comment was very mild was this description of working scientists @2

These people are not “skeptics”. They are not “contrarians”. They are deliberate, calculating liars.

– which is the kind of thing I read quite frequently here directed at dissenters from the IPCC position.

Dhogaza @38: It is not ‘my’ lexicon that prompts the use of ‘prediction’; it is the IPCC’s, as you’d discover if you read the FAR. Would you disagree with Barry Bickmore that the predictions were “way off”?

I think you have mixed up the future and the past. The predictions the FAR made were for the future. You linked to a graph going 110 years back into the past.

You’ll also find the graph is not a representation of the BAU scenario and the FAR predictions, because even the Gistemp data runs beneath the lower uncertainty bound [the 1.5 degC/2xCO2], which is why Barry Bickmore describes it as “way off”.

Speaking of the WSJ Op-Eds, Brian Angliss wrote an open letter to engineer signatory Burt Rutan in response to the first “Gang Of 16″ Op-Ed. It’s well worth a read.

It got even more interesting when Burt turned up in comments, apparently trying to defend the Op-Ed (and his 98-slide “there’s nothing to worry about, the concern is all due to ‘data presentation fraud'” slide deck).

Have a read for yourself and see if you think Burt’s claims are justifiable, whether his position in comments matches that of the Op-Ed (and slide deck), whether he demonstrates scientific understanding at a sufficient level to make the published claims that bear his name, and – given the critiques of his claims in comments – whether he demonstrates honesty in the light of additional evidence or engages in dismissal and avoidance.

It might also be interesting to see if any of the claims in the new WSJ Op-Ed have been refuted in comments made to Burt prior to that Op-Ed’s publication date.

Anteros, I’m having trouble understanding what you’re saying. You write, “I recommend open-minded readers have a look bearing in mind that they refuse to put up a graph showing the FAR predictions.”

I’m trying to be open-minded here, but when I look at the link you provided I see several graphs of predictions: a graph of predicted forcings due to CO2 emissions, a graph of predicted temperature rise, and another graph of the IPCC 1990 temperature predictions with the observed temperatures superimposed (Figures 4 & 5).

You also write, “I think you have mixed up the future and the past. The predictions the FAR made were for the future. You linked to a graph going 110 years back into the past.”

I think you need to look again at the graph. It only starts in 1880, it ends in the 2000s, probably around 2010-2020. Further, if you click on the link you provided, Figure 5 is a graph starting at 1990 and going forward only. I think it is simply a “zoom & crop” of the graph that dhogaza linked (which is Figure 4). So just scroll down in your link and you’ll see the “graph showing the FAR predictions” that you claim SKS refused to provide. What am I missing?

Sorry, yeah, I used the word “prediction” when really it’s “projection”, but I think that it fits with what Anteros is claiming to be a prediction. Figure 2 in the link provided by Anteros seems to me in line with the graph in IPCC Figure 6.11.

Anteros you wrote: “The SkS article I linked to has some graphs, but they were made up by SkS themselves. They don’t deal with the FAR predictions at all. The prediction the FAR made (and the one that Barry Bickmore says, correctly, was way off) was that temperatures would rise by about 0.3 degrees per decade if few or no steps were taken to reduce greenhouse gases (their definition of BAU). The uncertainty limits are described as 0.2-0.5 degrees per decade, which corresponded to climate sensitivities of 1.5 and 4.5 degC/2xCO2. The main prediction was based on a sensitivity of 2.5 degC/2xCO2.”