In Paul Krugman’s May 29 column he wrote about Pat Michaels’s “fraud, pure and simple”: the claim that James Hansen’s 1988 prediction of global warming was too high by 300%. (Michaels’s fraud was described earlier by Hansen, Gavin Schmidt, Hansen again and me.)

Michaels has posted a denial, so I’m going to go back to the original sources so that everyone can see what Michaels did.

Ten years ago, on June 23, 1988, NASA scientist James Hansen testified before the House of Representatives that there was a strong “cause and effect relationship” between observed temperatures and human emissions into the atmosphere. …

At that time, Hansen also produced a model of the future behavior of the globe’s temperature, which he had turned into a video movie that was heavily shopped in Congress. That model was one of many similar calculations that were used in the First Scientific Assessment of the United Nations Intergovernmental Panel on Climate Change (“IPCC”, 1990), which stated that “when the latest atmospheric models are run with the present concentrations of greenhouse gases, their simulation of climate is generally realistic on large scales.”

That model predicted that global temperature between 1988 and 1997 would rise by 0.45°C (Figure 1). Figure 2 compares this to the observed temperature changes from three independent sources. Ground-based temperatures from the IPCC show a rise of 0.11°C, or more than four times less than Hansen predicted. …

The forecast made in 1988 was an astounding failure, and IPCC’s 1990 statement about the realistic nature of these projections was simply wrong.

Hansen et al.’s paper is not available online, but I’ve posted some extracts so you can check that I haven’t taken anything out of context. If you move your mouse over Michaels’ Figure 1, you can see the corresponding figure from Hansen’s paper. Michaels has erased scenarios B and C from his version of the graph. What did Hansen write about the scenarios?

These scenarios are designed to yield sensitivity experiments for a broad range of future greenhouse forcings. Scenario A, since it is exponential, must eventually be on the high side of reality in view of finite resource constraints and environmental concerns … Scenario C is a more drastic curtailment of emissions than has generally been imagined … Scenario B is perhaps the most plausible of the three cases.

So Scenario A was the worst case, Scenario C was the best case, and Hansen felt that both of these were unlikely, with Scenario B the most plausible. Hansen’s prediction was that the temperature would be between A and C. He wrote:

The model predicts, however, that within the next several years the global temperature will reach and maintain a 3σ level of global warming, which is obviously significant.

The 3σ level is 0.4 degrees above base line in the figure above. In the model this happened in 1998. In reality this happened in … 1998. OK, maybe he got lucky, but it is wrong to call it an “astounding failure”, and erasing B and C from the graph and presenting Hansen’s worst case scenario as his prediction really is “fraud, pure and simple”.

Michaels also cheated on his presentation of the results of Scenario A. First, he seems to have made a mistake when he measured the temperature rise under scenario A — it was 0.41, not 0.45. He also calculated the change from 1988 to 1997 but the last year of the observed data was 1987; so he should have started then. 1988 was 0.07 warmer than 1987 so the increase in observed temperatures should have been 0.18. Scenario A increased by 0.44 over that time. So scenario A was too high by 150%, not the 300% that Michaels claimed.
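The competing percentages reduce to simple arithmetic. Here is a quick Python sketch using only the figures quoted above (the function name is mine, for illustration):

```python
# Percentage by which a predicted temperature rise exceeds the observed rise.
# (Function name is illustrative; the figures are those quoted in the post.)
def overstatement(predicted, observed):
    return (predicted - observed) / observed * 100

# Michaels' comparison: his 0.45 reading of scenario A (1988-1997)
# against the observed 0.11 rise.
michaels = overstatement(0.45, 0.11)   # ~309%, i.e. "too high by 300%"

# Corrected comparison: starting from 1987 (the last year of observed
# data), the observed rise is 0.11 + 0.07 = 0.18 and scenario A rises 0.44.
corrected = overstatement(0.44, 0.18)  # ~144%, i.e. roughly 150%

print(round(michaels), round(corrected))
```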

Krugman was incensed with my July 27, 1998 testimony before the House Committee on Small Business. In it, my purpose was to demonstrate that commonly held assumptions about climate change can be violated in a very few short years.

One of those is that greenhouse gas concentrations, mainly carbon dioxide, would continue on a constant exponential growth curve. NASA scientist James Hansen had a model that did just this, published in 1988, and referred to in his June 23, 1988 Senate testimony as a “Business as Usual” (BAU) scenario.

BAU generally assumes no significant legislation and no major technological changes. It’s pretty safe to say that this was what happened in the succeeding ten years.

He had two other scenarios that were different, one that gradually reduced emissions, and one that stopped the growth of atmospheric carbon dioxide in 2000. But those weren’t germane to my discussion. Somehow, Krugman labeled my not referring to them as “fraud.”

The trick Michaels is using here is to use BAU to mean something different to what Hansen meant. Hansen did not use the term in his paper, but he did use it in his testimony:

The other curves in this figure [besides the observations] are the results of global climate model calculations for three scenarios of atmospheric trace gas growth. We have considered several scenarios because there are uncertainties in the exact trace gas growth in the past and especially in the future. We have considered cases ranging from business as usual, which is scenario A, to draconian emission cuts, scenario C, which would totally eliminate net trace gas growth by year 2000.

In his paper (which was attached to his testimony) Hansen said that scenario A was “continued exponential trace gas growth”. So by “business as usual” Hansen meant “continued exponential trace gas growth”. All he did was use simpler language to describe scenario A in his testimony. Nor is it accurate for Michaels to pretend that Hansen assumed that greenhouse gas concentrations would continue to grow exponentially since he stated that scenario A was on the “high side of reality” and that B was the “most plausible”. Even under his own interpretation of BAU Michaels is wrong since scenario A included exponential growth in CFC emissions, when in fact they fell dramatically as a result of significant legislation (because of the Montreal protocol).

Furthermore, if you go back and look at what Michaels said in his testimony, he wasn’t using scenario A to show that BAU increases in emissions hadn’t happened. He used it to argue that Hansen’s climate model was wrong, that is, that even if given the correct numbers for emissions, it would overestimate (by 300%!) the amount of warming. The fact is, and Michaels knew it at the time, that scenarios B and C were close to actual emissions and produced results close to the actual warming.

Michaels continues:

There’s also the nagging possibility that we haven’t yet figured out the true “sensitivity” of surface temperature to changes in carbon dioxide. Scientifically, that’s a chilling possibility.

But somehow, Hansen’s model came up with a good prediction. How does Michaels address this? He just ignores it.

On May 30, Roger Pielke, Jr., a highly esteemed researcher at University of Colorado’s Center for Science and Technology Policy Research, examined Hansen’s scenarios. Of the two “lower” ones, he concluded, “Neither is particularly accurate or realistic. Any conclusion that Hansen’s 1988 prediction got things right, necessarily must conclude that it got things right for the wrong reason.” (italics in original)

Pielke’s criticism of Hansen’s scenarios is badly misconceived. The important input to Hansen’s model was the total forcing from greenhouse gases, but Pielke ignores this to focus on the growth rate of emissions of each gas. For instance, he claims that scenario B was off by a factor of 2 on CO2. This sounds like a lot until you discover that it means emissions grew by 0.5% per year instead of 1% a year. And that works out to scenario B having the concentration of CO2 in the atmosphere within 1% of what has actually happened. Pielke is being much more than a little unfair in describing a prediction that got within 1% of the correct answer as not “particularly accurate or realistic”.
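To see why halving such a small growth rate barely changes the resulting concentration, here is a rough illustrative calculation. The ~351 ppm baseline and ~1.5 ppm/yr annual increment are assumed round numbers for 1988, not values taken from Hansen’s scenarios:

```python
# Illustrative only: not Hansen's actual scenario arithmetic. The point is
# that halving a small growth rate in the annual CO2 increment barely moves
# the resulting concentration. Assumed round numbers: ~351 ppm CO2 in 1988
# and an annual increment starting at ~1.5 ppm/yr.
def concentration(base_ppm, increment, growth, years):
    c = base_ppm
    for _ in range(years):
        c += increment
        increment *= 1 + growth
    return c

fast = concentration(351.0, 1.5, 0.010, 18)  # increment grows 1% a year
slow = concentration(351.0, 1.5, 0.005, 18)  # increment grows 0.5% a year

# The two concentrations differ by well under 1%.
print(fast, slow, (fast - slow) / fast * 100)
```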

Comments

You’ve badly mischaracterized my posts on Hansen’s scenarios. The point was to evaluate (not criticize) the inputs to the scenarios against what has happened, in order to better evaluate various claims made about them by various people. I did not discuss Pat Michaels’ testimony.

Clearly, with respect to emissions, Hansen’s Scenario C produced in 1988 did the best through 2000, when it was then held constant; thus Scenario B performs better than Scenario A since then. A thoughtful commenter on our blog and I discussed the merits of evaluating Scenario C after 2000, and you can see that discussion there, with the different perspectives well represented. Hansen’s Scenarios B and C were not far off on CO2, as you point out, though Scenario C was more accurate. And Scenario C was most accurate with respect to just about every other gas.

Hansen himself recognized this in a peer-reviewed paper in 1998, which clearly shows this result in Figure 5.

None of this has any relevance to Pat Michaels or the political debate over climate change. The issue is mischaracterized by both sides of the debate, and I for one am glad to have sorted through the history.

Roger, I have not mischaracterized your posts. I’m glad that you now concede that “B and C were not far off on CO2”, but in your post you wrote: “Neither [B nor C] is particularly accurate or realistic.” I think you should correct your post.

The claim here that Pat Michaels perpetrated “fraud” with his account of Hansen’s 1988 predictions by suppressing two of the latter’s scenarios does not seem to fit the facts.

First, Hansen’s 1988 Scenario A temperature prediction was clearly intended (according to Hansen et al 1998, 4117) to show the outcome of continuation of the then current (1963-1988) “exponential growth of emissions of CO2 and other gases” (by 1.5% pa in the case of CO2), and this is the “business as usual” scenario presented by Michaels as being falsified by the outcome to 1998. Scenario B assumed “slower and approximately linear growth” of CO2 at 1% (i.e. simple interest, not compound), while Scenario C had the closest temperature prediction to the outcome; that was fortuitous, as C had even slower emission growth to 1998 than B, at an absolute 1.5 ppm for CO2, i.e. a constantly declining growth rate from 0.4% p.a. to nil, with “assumed stabilization [i.e. zero increase] of GHG after 2000”.

Second, in practice world CO2 emissions from fossil fuel use grew by 1.51% p.a. from 1980 to 1987, and although growth slowed to 1.18% p.a. from 1987-1997, it was still exponential and not linear. Growth after 1997 rebounded and remained exponential, with annual compound growth from 1997 to 2003 at 1.56% p.a. (see IEA annual report 2005). So C’s warming prediction was correct but for the wrong emission reasons. Hansen et al. clearly conceded as much, by admitting that the fortuitous temperature increase implied by B and C at only 0.1-0.2 degrees C per decade is only “about half the rate that occurs in the ‘business as usual’ or equivalent 1% CO2 per year scenarios” of the IPCC, which is closer to the actual emission growth rate than are the assumptions of Hansen’s B and C. Michaels was therefore justified in disregarding Hansen’s B and C, since their basic assumptions were invalid even though by chance their predictions look right. In short, Hansen’s exponential A predicted warming of 0.9 by 1998, more than double the actual; his linear B and C predicted around 0.4 as observed, but emissions actually grew exponentially as in A and not linearly as in B or C.

Meantime there is a clearer case of wilful error (but NOT fraud) in the endorsement of the hockey stick by the likes of Nicholas Stern and John Quiggin, but I hope they keep it up as I find that their denial of the medieval warm period and the later Little Ice Age and Maunder Minimum immediately turns my ANU associates into AGW sceptics.

Ironically, Hansen et al. (1998) adopt much the same position as Michaels when they state that the already (i.e. pre-Kyoto) evident “slowdown of greenhouse climate forcing growth rates suggest that there is an opportunity to avoid the more rapid rates of climate change in the 21st century. Even the equivalent of doubled CO2 climate forcing (4.2 W/sq. m) is not inevitable” (p. 4119). They end with an admission of “our ignorance of many issues that influence predictions for the 21st century”, including “why has the CO2 growth rate leveled out in the past two decades despite increased emissions and deforestation” and “why has the growth rate of methane plummeted”. All that is consistent with Michaels’ position.

Finally, having been chastised myself, one would hope to see less use of inflammatory language like “fraud”. No court would convict Michaels on the evidence as presented: not showing Hansen’s B and C curves was valid since they referred to non-events, only A was pertinent in the context of Michaels’ submission, and as Hansen himself admits, its warming prediction was out by a wide margin. B and C were right about the modest warming but wrong about the level of emissions supposedly responsible for it.

Wow. Pat Michaels says he was justified in concealing B and C because emissions did not follow A, while Tim Curtin says Michaels was justified because emissions *did* follow A. If you [check the numbers](http://cdiac.ornl.gov/ftp/maunaloa-co2/maunaloa.co2), you’ll find that, as Pielke Jr states up thread, C was closest up till 2000 and B has been best since then. As I said in my post, and Curtin has failed to notice:

>The fact is, and Michaels knew it at the time, scenarios B and C were close to actual emissions and produced results close to the actual warming.

Show me actual emissions as per IEA and as per Hansen. Even Hansen admits B and C did not reflect actual emissions; A did but got the warming wrong. Do you know the difference between exponential and linear?

Hansen’s 1988 A Scenario projected CO2 emissions in 1997 at 24.0 billion tonnes of CO2 from fossil fuel usage (ignoring emissions from land clearing and deforestation); the actual was 22.9 billion, a difference of 64%. This A Scenario suggested an increase in ANNUAL mean global temperature change from around 0.3-0.4C to about 0.9C over that period.

Hansen’s 1988 B Scenario projected – if unspecified control measures were put in place – that CO2 emissions would increase from 21 billion tonnes of CO2 in 1988 to 22.9 in 1997 (again ignoring emissions from land clearing and deforestation) – spot on, except that B assumed CO2 emission controls that had not been put in place by 1997. The B temperature increase projection was from about 0.3 to 0.4 by 1997 (not easy to tell from Hansen’s scrubby Fig. 4).

Is it not fair to conclude from Hansen’s work that the dire predictions in his A were not fulfilled, as Michaels asserted, and that the more benign B was attained without Kyoto, i.e. by Business as Usual?

Tim Curtin, your initial claim was that emissions followed scenario A and not B and therefore Michaels was justified in erasing scenarios B and C. In your latest comment you concede that A was too high and B was spot on and therefore Michaels was justified in erasing scenarios B and C.

Do I have your position correct? Hansen said that B was most plausible, it was “spot on” for emissions, it did a good job of predicting the increase in temperature, but Michaels was justified in erasing it to make Hansen look bad?

Throughout the comments here you are confusing CO2 emissions with the total forcing in Hansen’s 1988 Scenarios. The 1988 Scenarios covered other important GHG emissions besides CO2.

So you are mistaken when you write, “I’m glad that you now concede that ‘B and C were not far off on CO2’, but in your post you wrote: ‘Neither [B nor C] is particularly accurate or realistic.’”

It is indeed the case that neither B nor C was realistic when one looks at the total emissions, and at the same time C was better than B on CO2 but not far off in ppm. However, have a look at the assumptions in each Hansen Scenario for methane, nitrous oxide and CFCs for the full picture.

Since Jim Hansen has discussed this exact same point in 1998 in the peer reviewed literature with the exact same conclusions as I’ve presented on my blog, there is obviously not much value in pursuing further here.

>Hansen’s 1988 A Scenario projected CO2 emissions in 1997 at 24.0 billion tonnes of CO2 from fossil fuel usage (ignoring emissions from land clearing and deforestation); the actual was 22.9 billion, a difference of 64%.

I make the difference 1.1 billion tonnes, or somewhere just over 4%. Is this a slip of the fingers, or what?

Tim Curtin says: “Do you know the difference between exponential and linear?” Actually, for quantities growing at 1-2% a year over, say, 15 years, the difference is remarkably small. If you take a growth rate of 1.5% over 15 years, then the compounded (i.e., exponential) result is a growth of 25% whereas the uncompounded (i.e., linear) result is 22.5%. (This essentially just states the fact that even an exponential curve is locally linear.)

By the way, as an addition to this whole argument, isn’t it also true that Scenarios B and C included a major volcanic eruption and Scenario A did not. And, of course, a major eruption did occur (Mt. Pinatubo)…which was another reason why B and C are more realistic than A in evaluating how well Hansen’s modeling predicted the climate sensitivity.

By the way, I just realized that the Michaels’ talk in question occurred in 1998, only 10 years after Hansen’s prediction. The difference between exponential and linear increase with a 1.5% growth rate over 10 years is just 16% vs. 15%.
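Joel Shore’s figures for both horizons are easy to verify; a minimal Python sketch:

```python
# Compound ("exponential") vs simple ("linear") growth at 1.5% a year,
# over the 15- and 10-year horizons discussed above.
rate = 0.015

compound_15 = ((1 + rate) ** 15 - 1) * 100  # ~25.0%
linear_15 = rate * 15 * 100                 # 22.5%

compound_10 = ((1 + rate) ** 10 - 1) * 100  # ~16.1%
linear_10 = rate * 10 * 100                 # 15.0%

print(compound_15, linear_15, compound_10, linear_10)
```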

Joel Shore hits an important point. Moreover if you look at the forcings for B and C you see that they only strongly diverge after ~2000 for CO2 and later for the other greenhouse gases. If the forcings matched what has happened, then the prediction is spot on, no matter what games are played between emissions, forcings and atmospheric concentrations.

Oh yes, Tim L, Roger is quite capable of posting a reference to something as evidence of what he says when in fact it is diametrically opposed to what he says. He tends to get cross when someone actually goes and reads the paper and calls him on it. It is part of his bag of tricks.

The message you get when posting without signing in via TypeKey seems inaccurate when it says that the post will normally appear within six hours after vetting by the moderator. Based on my experience Roger isn’t retrieving those messages. Comments made after signing in through TypeKey haven’t been a problem, for me anyway.

I’ve noticed that Michaels, Singer, Idso, et al. often seem to hide behind the word “Liberty.” It seems as though they are at liberty to be ignorant of the facts.

I acknowledge their freedom of speech, but if one is so blatantly oblivious to the truth, maybe we should have limits on who can proclaim they are an expert in a certain field. This would reduce the propaganda and brainwashing to which the general public is exposed.

Joel Shore and Eli R make good points about Hansen’s scenarios, but Pat Michaels’ entire premise is wrong as soon as he uses the word “prediction”.

In everyday usage, a model run or scenario is often called a prediction. But a prediction is usually unconditional. Scenario runs on a model are better called projections, and Hansen uses an even weaker term, “experiments”.

And Hansen 1998, which you cite here (and in a previous posting), tells us exactly how we can test the accuracy of the model today: “The forcing for any other scenario of atmospheric trace gases can be compared to these three cases by computing ΔT0(t) with formulas provided in Appendix B.” Plug in the actual trace gas levels for 1988-1998, plus the Mt. Pinatubo eruption.

Thanks Joel Shore, but Hansen’s B and C also had lower emissions to reflect policy interventions that did not eventuate. However I will in early November release about 20 scenarios which will definitely reveal the winner of the Melbourne Cup. Watch this space.

Hansen 1988: “Scenario B is perhaps the most plausible of the three cases.” Also, “Scenario C is a more drastic curtailment of emissions than has generally been imagined”.

We can quibble for the whole thread over whether Hansen’s B envisioned curtailments. Scenarios A, B, and C were clearly conceived as an envelope; he expected the forcings and the resulting global temperature to fall somewhere in the envelope. They did. Pretty good demonstration of cause and effect on a planetary system, done in 1988.

Hansen et al. turned out to be pretty accurate both with respect to the evolution of greenhouse gas concentrations and the resulting individual forcings. This has yielded a very accurate estimate of global temperature almost 20 years from when the prediction was made.

By the nature of such estimates, where forcings from some sources will be overestimated, while those from others will be underestimated, one may anticipate that the overall trend will pretty much follow their prediction into the future. Various people to the contrary, this cancellation of errors in individual estimates makes such modeling robust. (Unless of course you think you can beat the house at roulette)

“I’ve noticed that Michaels, Singer, Idso, et al. often seem to hide behind the word “Liberty.” It seems as though they are at liberty to be ignorant of the facts.”

Another example of the holistic nature of the radical right’s pathology. Consider Curious George’s recent speech in favor of a Constitutional Amendment against gay marriage, for instance, which includes this:

“America is a free society which limits the role of government in the lives of our citizens. In this country, people are free to choose how they live their lives. In our free society, decisions about a fundamental social institution as marriage should be made by the people.”

Freedom from the government interfering in your interference in your fellow citizens’ lives. Undoubtedly the very concept Thomas Jefferson had in mind.

TimB’s question reminds me of a really bad student who asks questions in a seminar that display his ignorance. It really is true that to ask a relevant question you have to have a large clue. Bad questions based on false assumptions and poor to no understanding of the subject have no answer (When did you stop beating your blog?). They have the effect of sucking information out of the room, as people who are aware of the situation struggle to understand the depth of stupidity that questions such as Blair’s display, and some of those not familiar with the fool or the subject focus on the ignorant certainty and braggadocio that our aggressive fool displays.

Behavior such as Blair’s validates ad hominem arguments. Anyone who has watched him operate and knows his limitations and methods can easily dismiss his feeble arguments without wasting time. On the other hand, when arguments made by others prove time after time to be correct, their arguments gain credence even before checking.

Let me offer this deal: let TimB open his blog to all comers for a month and we might choose to try and educate him. On the other hand, maybe not; agitating a bag of wind soon loses its charm.

Until then I would recommend Lincoln’s advice to TimB: It is better to remain silent and be thought a fool than to speak up and remove all doubt.

As Tim Lambert pointed out in his post above, simply comparing projected “growth rates” (scenarios) with actual “growth rates” over the period in question is NOT an accurate way of gauging how “close” (or far away) a particular scenario came to reality over a particular time period with regard to total accumulated emissions of each type (CO2, methane, etc) over that period.

One must compare the actual magnitude (in ppm, for example) of the difference between scenario and reality. To do this, one has to figure the ACTUAL growth of each greenhouse gas (eg, in ppm) and compare that to the projected total growth under the scenarios. It is especially important to do this in the case in question, since the growth rates involved are very small.

Not only that, in the case of CO2 (for example) the “growth rate” is actually that for the “annual increment” of CO2 (CO2 added to the atmosphere each year, in ppm), which is assumed to start with value 1.5ppm and change from one year to the next by a small percentage (eg 1%) of that base value.

In other words, we are talking about small changes in the “annual increment” from one year to the next. And the DIFFERENCE between two small changes — eg, the difference between the change for 1% growth (0.015ppm) and that for 0.5% growth (0.0075ppm) — is ALSO a SMALL change: 0.0075ppm per year.

Sure, the effect is cumulative, but the difference over the 15 year period that Hansen’s scenarios have been playing out does not add up to much: 0.11ppm over 15 years, for CO2 (difference between 1% and 0.5%). That’s ONLY about 0.4% of the total accumulated CO2 over that period.

Finally, Hansen evaluated — and adjusted — his original emissions assumptions in 1998 NOT because the difference between his initial assumed rates (eg, for scenario B) and the actual rates had made some dramatic difference in the accumulated emissions to date** but because he wanted to have the most accurate emissions assumptions to test his model going forward in time.

**The difference between ACTUAL accumulated CO2 and that projected under Hansen’s scenario B was about 1.26ppm over the first 12 years, or about 7% of the actual accumulated over that period. I’ll let you decide for yourself if this is a “dramatic” difference.

A. I think Mark Shapiro does a good job of trying to concisely bound the discussion of scenarios. IMHO the community has not done a good job of describing what the heck scenarios are.

I could prattle on for some time about this topic, but basically scenarios are used best for adaptive management, and subject to change as more information is available or the trajectory changes. You manage TO or AWAY from a scenario’s trajectory [e.g. 500 ppmv atm CO2], you use a trajectory to judge whether your management is on the right track [(viz. Laurence’s first para just above here) e.g. are our actions X, Y, Z preventing 500 ppmv atm CO2], or you manage to change the scenario trajectory itself [e.g. a combination of the two previous examples].

B. It really is true that to ask a relevant question you have to have a large clue. Bad questions based on false assumptions and poor to no understanding of the subject have no answer

I view this condition as a failure of parenting. That is: it is correct to tell a toddler that no question is a stupid question, but at some point in your child’s life you have to change that behavior and teach them something. By the time the child is, say, ten or eleven years old they shouldn’t be asking all kinds of dumb questions any more because you should have taught them how to think.

You mention Mt. Pinatubo, but fail to recognize the crash of the Soviet Union.

Their heavy industry crashed consequently in the early 1990’s, the hot water pipes froze in their cities, kolkhoz fields went unplowed and unsown, herds of cattle were eaten up, etc. Also their various accounting systems crashed so the statistics are unreliable, for a period of time at least.

Might well have flipped between Hansen’s scenarios. May happen again if China loses stability. Prediction of CO2 can be only partially successful.

Here everyone is trying to analyse the outcomes from prediction models using abstracted models nearly 20 years old.

Whilst the component parts of the models are relatively unchanged, the forcings and effects, such as radiative aerosols and the chemistry of the processes and mechanisms (including the constituent formation mechanisms), are not qualitatively understood.

As the commentators above note with regard to the lower CO2 outputs of the USSR-RF and the limits of our understanding of the severe climatic changes caused by Mt Pinatubo, this is where increased knowledge of the component parts of the systems involved, and their relationship to observation, comes in.

We also see it on another thread where commentators try to quantify the energy component of external sources whilst failing to observe the radiative cosmic component.

Mark Simovich misses an important point. For quite some time the ability to predict future emissions/concentrations of the greenhouse gases has been questioned. Here is an example where a relatively simple set of scenarios has accurately predicted forcings over an almost 20 year period. The importance can be easily gauged from the attacks launched by our friends from Cato.

Hank: apologies, I have been busy with other matters; obviously 64 was a dyslexic typo for the actual 4.5 or 4.7 depending on divisor; since I gave the relevant data I think it was obvious 64 was wrong. The more interesting issue is why Hansen’s A was only out for emissions by say 4.6% whilst A’s temperature projection was out by 100% from the actual, and his B was about spot on for emissions but largely because of the collapse of USSR, and for temperature only thanks to Pinatubo. That seems to be why Hansen et al 1998 were deliberately more circumspect than in 1988 and confessed to still large areas of ignorance as well as admitting to previous alarmism.

Eli misses an important point as well: the models showed some of the changes, but not all, correctly by observation. Indeed, the global cooling of the lower stratosphere over 1979-2003 shows the initial variables were incorrectly calculated.

It was not until 2002-2003 that the increased role of aerosols, the natural and anthropogenic changes of ozone, and the change in solar variation were incorporated:
J. E. Hansen et al., J. Geophys. Res. 107, 10.1029/2001JD001143 (2002).

A was higher because, as well as the exponential increase in emissions, it didn’t have any volcanic eruptions and it also threw in some extra forcings to allow for some uncertainties. It was, as is obvious from the paper, a worst case scenario. Presenting it as Hansen’s forecast was dishonest. Refusing to retract is also dishonest. Pat Michaels is dishonest.

Along with the usual nonsense (“But is carbon dioxide a “pollutant,” a harmless byproduct of human activity, or even an adjuvant?”) he gets scientific and offers us a testable hypothesis.

“That’s a testable hypothesis. Our cities have been warming for decades (with or without global warming) because the bricks, buildings and pavement retain the sun’s warmth. But as they have warmed, heat-related deaths have fallen. Indeed, in some large North American cities, there is no longer any significant association between hot weather and mortality. The same technology (electrically powered air conditioning) that emits “polluting” carbon dioxide also prolongs life and makes it more comfortable.”

Is there anybody who would be surprised to learn that ten seconds work on Google produced:

“Heat-Associated Mortality — New York City
The estimated annual death rate in New York City based on data collected during the week ending Friday, June 15, 1984, was 1,343 per 100,000 population, a 35% increase over the average rate for the preceding 4 weeks (Figure 7). This was the highest mortality rate recorded in New York City since January 1981 and was associated with a sudden and severe heat wave — mean daily temperatures* rose from 21.1 C (70 F) in the preceding week to 28.9 C (84 F).”
http://www.cdc.gov/mmwr/preview/mmwrhtml/00000380.htm