Who says you
can't "prove a negative"? In a recent issue of the refereed
scientific journal Geophysical
Research Letters (GRL) several
WCR colleagues published a paper that gets pretty close to showing
that the climate models being used as the basis for gloom-and-doom
projections are simply wrong over the balance of the lower atmosphere.

We obtained this
result months ago but have been impatiently sitting on it so as not to
jeopardize publication in GRL,
which, like most scientific journals, publishes only previously
unreleased results. Truth be told, an oversight by a third party led to
some of our results appearing on the Internet for a couple of days while
our submission was in review for the journal Science.
They rejected it, and rightly so, given their policy that a
submission must "remain a privileged document and...not be released to
the press or the public before publication."

(We find it
somewhat ironic that shortly after our paper was rejected on these
grounds, that same publication ran a piece describing some of the contents of the draft version of the yet-to-be-released IPCC Third
Assessment Report, even though each and every page of the IPCC document
was marked "Do not cite. Do not quote." Go figure.)

Still, we
understood. So we expanded the study and sent it to GRL.

Our paper is a
bullet through the heart of the global warming scare, which requires
that the computer models used as the excuse for the United Nations'
climate treaty match reality.

The well-known problem we examine stems from the satellite and weather-balloon records for the balance of the lower atmosphere, which appear to show very little warming over the 21-plus years the two records cover together (since Jan. 1, 1979). But the computer models all indicate that there should have been a dramatic warming. Literally billions of our tax dollars have now gone toward trying to explain away that discrepancy.

Finally, last
March, newspapers around the world trumpeted—some on the front
page—that new research by federal climatologist Ben Santer had
reconciled the difference. Along with several co-authors, he argued in Science
that computer models could account for the lack of warming after all,
mainly because of the cooling influence of the 1991 eruption of the
Philippine volcano Mt. Pinatubo.

Two things troubled us: Pinatubo wasn't the only big volcano in recent decades (El Chichón caused a cooling in the early 1980s about half the magnitude of Pinatubo's), and the paper's data ended precisely at a very hot point, the mega-El Niño of 1998. (If this sounds familiar, you recall correctly: these same researchers used a different, highly fortuitous dataset a few years ago to argue that the models were OK. See sidebar.)

The recent Santer Science paper argued that including Mt. Pinatubo's cooling effects reduced the difference between the GCM-predicted temperature trend and the trend measured by the satellites to only 0.045°C per decade, a difference the authors found statistically insignificant and therefore took as reconciling observed and modeled temperatures.

But when we add in all of the volcanic action (including the cooling from El Chichón) and allow for the fact that El Niños are pretty much random occurrences (in other words, a huge one just happened to occur in 1998, and just happened to produce this happy result), the difference between the models and the observed temperatures works out to a whopping 0.162°C per decade, or 360 percent of the amount Santer and colleagues published in Science (Figure 1).

Figure 1. The difference in temperature trends between the satellite observations of global temperature from 1979 to 1998 and the climate model output incorporating the Mt. Pinatubo eruption's cooling was reported by Santer and colleagues as only 0.045°C per decade. But when the effect of ending their study during a strong El Niño is factored in, the temperature difference increases to 0.081°C per decade. And when the other volcanic eruptions that occurred during this time are considered, the difference further increases to 0.162°C per decade, a value equal to 360 percent of the one reported by the Santer team in Science.
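For readers checking the arithmetic: 0.162 divided by 0.045 is 3.6, which is where the 360 percent figure comes from. The intermediate values in Figure 1 imply that the El Niño correction adds 0.036°C per decade to the published difference (0.045 + 0.036 = 0.081) and that the additional volcanic eruptions add another 0.081°C per decade (0.081 + 0.081 = 0.162).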

Interestingly, this is almost exactly the difference in warming between surface temperatures and those of the rest of the lower atmosphere, proving, as we have maintained in these pages for more than half a decade now, that recent warming is confined to the very lowest layers of the atmosphere and, as further research confirms, largely to the shallow, coldest air masses of winter that no one likes anyway.

In other words, the gloom-and-doom models don't work, which effectively leaves us with no scientifically based projection of 21st-century climate at all, except the one that results from observed data and the laws of physics, which together dictate that human-induced warming should be relatively constant rather than accelerate into some alarming exponential rise.

That leaves us about 0.65°C of warming to "worry" about over the next 50 years. That's the only conclusion we can take from the recent GRL paper. The models are wrong, and nature has told us the answer.
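The arithmetic behind that number is straightforward: 0.65°C over the five decades remaining works out to a constant 0.13°C per decade (0.13 × 5 = 0.65), exactly the kind of steady, linear rate just described.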

Will we ever have a climate model that works? We think so, and we think we know how to "make" that happen: do as NASA scientist James Hansen recently did and simply adjust the warming radiation in the models downward until they are consistent with reality.

But that's called
throwing in the towel, sending the champagne to World
Climate Report, and finding something else to do for a living. Not
very likely.

Longtime devotees of World Climate Report know of at least one other instance where data selection strongly influenced the conclusion that the computer models were correctly simulating global warming. It was in the July 1996 paper by Santer et al., in which they compared lower atmospheric temperatures from 1963 through 1988 and found a statistically strong match. We examined their result in light of the complete record available at the time of publication (1958–1995) and demonstrated (Figure 1) that the main region of strong warming in fact showed no warming when all the data were used.

Figure 1. In their 1996 paper, Santer and colleagues used only data from 1963 to 1988 (filled circles), although data were available from 1958 through 1995. A more complete record provides a clearer picture.
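To see how much the choice of period can matter, consider a short Python sketch (using numpy). The toy series below is purely hypothetical; it is not the radiosonde data Santer et al. used. It has no underlying trend, just a brief cool spell early in the record, yet a least-squares trend fit over a window that begins inside the cool spell shows roughly three times the warming of the full record.

    # Hypothetical illustration of endpoint sensitivity in trend fitting;
    # the numbers are invented and stand in for no real dataset.
    import numpy as np

    years = np.arange(1958, 1996)                    # full record: 1958-1995
    temps = np.zeros(years.size)                     # no underlying trend
    temps[(years >= 1963) & (years <= 1968)] -= 0.3  # brief early cool spell (°C)

    def trend_per_decade(yrs, t):
        """Ordinary least-squares slope, converted to °C per decade."""
        return 10.0 * np.polyfit(yrs, t, 1)[0]

    window = (years >= 1963) & (years <= 1988)       # the shorter analysis window
    print(f"1958-1995 trend: {trend_per_decade(years, temps):+.2f} °C/decade")
    print(f"1963-1988 trend: {trend_per_decade(years[window], temps[window]):+.2f} °C/decade")
    # The shorter window, which starts inside the cool spell, roughly triples
    # the apparent warming even though the underlying series never changed.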

Thomas Kuhn, the
late, great historian of science, wrote in his classic The
Structure of Scientific Revolutions that such actions are in fact
the norm in science when a "paradigm," or overarching logical
framework, is assaulted by inconvenient data.

Maintaining the paradigm, he argued, is the work of "normal science." In 1962, Kuhn wrote:

Closely
examined, whether historically or in the contemporary laboratory, that
enterprise [normal science] seems an attempt to force nature into the
preformed and relatively inflexible box that the paradigm supplies.

Then:

In
science…only the anticipated and usual are experienced even under
circumstances where the anomaly is later to be observed. Further
acquaintance, however, does result in an awareness of something wrong or
does relate the effect to something that has gone wrong before.

What this means for climatology: The reigning paradigm is that computer models can simulate the behavior of the atmosphere. When data appear showing that they can't, the scientists' natural response is to ignore reality or to contort the facts in a way that props up the paradigm. Thus the current tendency either to cite data selectively or to ignore inconveniences is, sadly, the real way science works, at least until the entire house of cards implodes, which is what the recent Geophysical Research Letters paper might have accomplished.