As Jeff Nesbit tweeted: “Being the last scientist to accept established climate science doesn’t make you Galileo.” Quite the opposite indeed.

The Galileo complex also suggests a rather simplistic view of how science progresses. Rather than a lone skeptic overthrowing a scientific (rather than a cultural) consensus, scientific progress is usually a gradual process. New evidence has to be reconciled with the existing mountain of evidence; it doesn’t simply replace it. Observing a bird in the air doesn’t disprove gravity. “Skeptics” and their supporters often bring up Galileo as an example that the scientific consensus can be, and has been, wrong. True enough, though as Carl Sagan said: “they laughed at Galileo, but they also laughed at Bozo the clown”.

“Yes, the hot spot is expected via the traditional view that the lapse rate feedback operates on both short and long time scales. (…) it [the hot spot] is broader than just the enhanced greenhouse effect because any thermal forcing should elicit a response such as the “expected” hot spot.”

So why is he claiming something in the WSJ that he knows to be untrue?

But rather than doing a careful analysis of the various potential explanations, McNider and Christy, as well as their colleague Roy Spencer, prefer to draw far-reaching conclusions based on a particularly flawed comparison: they shift the modelled temperature anomaly upwards to increase the discrepancy with observations by around 50%. Using this tactic, Roy Spencer showed the following figure on his blog recently:

So what did he do? Jos Hagelaars tried to reproduce the different steps involved. A comparison of annual data, using a 1986-2005 baseline, would look as follows:

Spencer used a 5 year running mean instead of annual values, which would (should) look as follows:

The next step is re-baselining the figure to maximize the visual appearance of a discrepancy: let’s baseline everything to the 1979-1983 average (far too short a period, and seemingly chosen very tactically):

Like this:


This entry was posted on February 22, 2014 at 16:36 and is filed under Climate science, English, Skeptics.

I’d say it’s both. It’s evidently true what you’re saying about the combination of the number of people and technology enabling us to do so, but it’s also true that the old belief was that humanity couldn’t possibly influence global climate, and that science actually showed, long before there were enough people to actually do so, that we can.

By the late 1800s it was clear that people could change the climate by clearing lots of land, as in the US west and Australia. Actually, this was one of RP Sr.’s first arguments about climate. The historical climatologists could spot stuff like that.

Is the point here that there is no significant disparity between models and observation, that Christy et al find a non-existing disparity solely due to a flawed statistical comparison; or, that they use questionable statistical methods to make the existing disparity look worse than it actually is?

The efficacy and value of climate models properly gets judged differently “within the laboratory” and outside it. For researchers, Gavin’s explanation changes the disparity to an opportunity to, through hindsight, learn something about the climate and improve the models. Outside the lab, much more of the value of the models is in their predictive capacity where foresight, not hindsight, is the criterion.

I’m reading the Royal Society, NAS climate change report. From page 19: “Comparisons of model predictions with observations identify what is well-understood and, at the same time, reveal uncertainties or gaps in our understanding. This helps to set priorities for new research.”

You are drinking deeply of the koolaid and it does not improve you in any way. Instead of dealing with what is apparent and widely admitted- that the models have failed- you go low class and call those pointing out the failures “flat earthers”. You used to be on the border of reasonable. You chose instead to plunge into the deep end of climate obsession.

It appears that this issue is more about how to show the data and how “bad” the graph looks. In fact, the rates of change are unchanged by these baselining methods, I think. As shown by a lot of people, the rates of change over even 30-year time periods are not even close.
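The commenter’s point is easy to check: re-baselining only adds a constant offset to a series, and a constant offset drops out of any trend calculation. A minimal sketch in Python (the series is invented purely for illustration):

```python
import numpy as np

# Hypothetical annual anomalies with a built-in 0.02 °C/yr rise (made-up numbers)
years = np.arange(1979, 2014).astype(float)
temps = 0.02 * (years - 1979) + 0.1

# Fit a linear trend to the original series and to a re-baselined (offset) copy
slope_orig = np.polyfit(years, temps, 1)[0]
slope_shifted = np.polyfit(years, temps - 0.37, 1)[0]  # constant shift, like re-baselining

# The two slopes are identical: the offset only moves the intercept
```

So a different baseline changes where the curves sit on the page, never how fast they rise.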

I am looking at the 5 year running mean graph, prior to any further adjustments.

Four observations follow:

One observes that the vast majority of the models lie under UAH start.
One observes that the vast majority of models lie above UAH end.
One observes that HADCRUT4 start lies in the middle of the models.
One observes that HADCRUT4 end lies below the vast majority of the models.

I know the models are reevaluated frequently, and improved. If current models are slightly overestimating the slope of the projected temperature increase, where is the first place you would look for root cause?

There need not be one root cause. It’s important to look at all possibilities:

– The exact trajectory of natural variability is different in real life than it is in the multi-model mean (in which it is averaged out to a large extent) and also than any one particular model run.
– Observations may be off (Cowtan and Way)
– Recent forcings may be smaller than expected in the models (Schmidt et al offer some evidence for that)
– ENSO has recently been more often in its cooler La Niña phase.
– Possibly connected with that, more heat has gone in the (deep) oceans recently.
– Climate sensitivity may be smaller than predicted by most climate models.

This list is probably not exhaustive, but these are the most important potential factors, I believe. The first point is evidently true (and is always at play when comparing modeled to observed temps). For points 2 to 5, evidence has built up that they have indeed contributed to a smaller increase in GMST than would otherwise have been the case. Point 6 is the “skeptics’” favorite for obvious reasons, but doesn’t seem supported in light of the whole body of evidence relevant to estimating climate sensitivity.

‘The next step is re-baselining the figure to maximize the visual appearance of a discrepancy: Let’s baseline everything to the 1979-1983 average ‘

I’m sorry but I don’t understand how this is done. Could you explain more fully the process of baselining involved or point me to some examples on the internet. I appreciate the trickery involved but don’t understand how it is achieved.

The second figure is a comparison of the annual data and for every line in that figure the average of 1986-2005 is set to zero. The third figure is based on the second figure but now every 5 points are averaged, e.g. the first blue dot in the third figure is the average of the first five dots in the second figure, the average of 1979-1983, and is given the x-value of 1983. The average of 1980-1984 is given the x-value of 1984 and so on, also for the green dots and the CMIP5 data.
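The 5-point averaging described here is a trailing running mean. A minimal Python sketch (with invented numbers) of how the first dot is computed:

```python
import numpy as np

# Toy annual anomalies for 1979-1985 (invented values, for illustration only)
annual = np.array([0.10, 0.05, 0.20, 0.00, 0.15, 0.10, 0.25])

# 5-point trailing mean: the first window averages the 1979-1983 values
# and, following the description above, is plotted at x = 1983
window = np.ones(5) / 5
running = np.convolve(annual, window, mode="valid")

# running[0] is the 1979-1983 average: (0.10 + 0.05 + 0.20 + 0.00 + 0.15) / 5 = 0.10
```

Each subsequent point slides the window one year forward, so the 1980-1984 average lands at x = 1984, and so on.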

The fourth and last figure is created by offsetting all the data points in the third figure, each series by a different value. The first blue dot, at 1983, is put on zero, a shift of -0.11 °C; all other blue dots are shifted by the same value. The green dots are shifted by -0.20 °C and the CMIP5 model-average data points by -0.37 °C. By doing this, all data points in the fourth figure with the x-value of 1983 will have a value of 0.
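The mechanics of that offsetting step can be sketched in a few lines of Python. The numbers below are invented to show the effect, not the actual values from the figures: pinning every series to zero at the first point pushes down whichever series started out warm, widening the apparent gap later in the record.

```python
import numpy as np

# Invented 5-yr-mean anomalies on a 1986-2005 baseline, 1983 onward
obs    = np.array([ 0.11, 0.00, 0.05, 0.15, 0.20, 0.25])  # observations start warm
models = np.array([-0.10, 0.05, 0.20, 0.35, 0.50, 0.65])  # model mean starts cool

# Re-baseline: subtract each series' own 1983 value so both start at exactly zero
obs_r    = obs - obs[0]        # shifted down by 0.11
models_r = models - models[0]  # shifted up by 0.10

gap_before = models[-1] - obs[-1]      # 0.40 °C at the end of the record
gap_after  = models_r[-1] - obs_r[-1]  # 0.61 °C: the gap grew by obs[0] - models[0]
```

The end-of-record gap grows by exactly the difference between the two 1983 values, even though neither series’ shape or trend has changed at all.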

As you will notice, the blue and green dots at the x-value of 1983 are above the average of the models, and the blue and green dots for 2005 and later are below the average of the models. The average of 1979-1983 was anomalously warm relative to the 20-year average (1986-2005), especially for UAH. The CMIP5 model average is really an average: all natural variation present in each model run is averaged out; only the influence of the large volcanic eruptions remains.
By setting everything to 0 in 1983, an image is created in which what was anomalously warm in 1983 against a longer average (1986-2005) is no longer anomalously warm, and the difference between the observations and the model average after e.g. 2005 is increased.

Spencer’s figure (the first figure here) is created to trick the reader: the difference between the model average and the observations is artificially exaggerated. That’s not science.

Nice mental gymnastics. Which proves nothing other than that Spencer’s graph is correct. With regard to the hot spot, it is not the sole requirement for global warming theory to be accurate, but it is a necessary condition. No hot spot, and the models are wrong. Full stop.

I think you missed the quote from John Christy in which he admitted that the hot spot is not specific to the greenhouse effect, and thus its relative presence/absence says nothing whatsoever about the cause of the warming:

[the hot spot] is broader than just the enhanced greenhouse effect because any thermal forcing should elicit a response such as the “expected” hot spot.

dbakerber said “Nice mental gymnastics. Which proves nothing other than Spencer’s graph is correct”

Err no… The mental gymnastics is all on Spencer’s part and proves that he will go to any lengths to mislead the general public.

Also dbakerber said: “No hot spot, and the models are wrong. Full stop”

He is showing his ignorance here. As has been stated previously, the so-called “hotspot” is predicted by well understood and undisputed atmospheric physics, not just the models, and any thermal warming should produce it. The fact that it has not been reliably measured does not mean that AGW is not happening, but that it is extremely difficult to conduct measurements in that portion of the atmosphere. Absence of evidence is not evidence of absence. It is indisputable that the atmosphere is warming, hence the “hotspot” will be there, no question.

MikeN and hunter, if you’re concerned that ‘flat earther’ is ‘low class’ rhetoric, best to submit your critique to Christy, McNider, and Spencer. You both fail to understand that ‘flat earthers’ is what Christy et al. were calling ‘consensus’ climate scientists in their WSJ article; meanwhile they cast themselves as Galileos. Bart V. took *their* words in *their* claim and asked whether the roles are accurate.

Reblogged this on Climate Denial Crock of the Week and commented:
Maybe best analysis so far of John Christy’s go-to magical graph that gets so much traction in the deniosphere.
There’s a reason, as Gavin Schmidt has noted, that it’s never been published.

Christy presents “tropical stratospheric” temperature data at a pressure of 150 mb, with no cooling trend. But that pressure level corresponds to the tropical tropopause, not the stratosphere, as shown in:
“Tropical Tropopause Layer” [doi:10.1029/2008RG000267]

So in order to avoid addressing stratospheric cooling (one of the hallmarks of CO2-induced global warming), Christy tries to pass off tropopause temperature as being stratospheric temperature. Isn’t that misleading?