Reexamining The Anthropogenic Global Warming Issue

The following is an effort to reduce the AGW issue to the minimum number of claims that must be supported or rejected. When I read of “thousands of supporting papers” on the issue, I would like to hear what those papers show that refutes the following.

The issue of anthropogenic global warming and its consequences arose from several factors. The three main factors that prompted the issue were:

The global average temperature has been rising noticeably since about 1850 (with obvious results such as glaciers melting, sea level rising, etc.).

The atmospheric CO2 concentration has been increasing significantly since it was first accurately measured in 1958, and proxies indicate the main part of the rise started about 1850. This rise seems to be clearly related to human burning of fossil fuels.

There is a theoretical basis for the fact that CO2 is a major greenhouse gas and contributes to the surface temperature level.

It seemed an obvious step to correlate the temperature rise to the human activity. The next step that followed was an effort to estimate the long-term effect of continuing increases in the use of fossil fuels, and thus production of CO2. This was particularly important, since the developing nations of the world seemed headed to ever-greater uses of fossil fuels as their economies expanded.

The theoretical models show only a modest temperature increase from the direct effect of the increase in CO2, so additional factors had to be implicated before the increase could be considered a problem. Some studies indicated that there was a feedback effect from increases in the CO2 level that greatly magnified the resulting temperature increase. The mechanism was that the modest temperature increase from the increased CO2 also caused the water vapor content of the atmosphere to increase. Water vapor is the primary greenhouse gas and makes the largest contribution to the Earth's surface temperature. The increase in water vapor engendered by the increased CO2 then causes a further temperature increase, resulting in a net positive feedback. The estimate generally used (but not proven) is that there is a gain of a factor of 3X over the direct CO2 effect alone.

With the large positive feedback assumption and the projected increase in CO2 from growing worldwide fossil fuel burning, computer models were developed to estimate the worldwide increase in temperature over the next several decades. These models all showed significant temperature increases over the next 50 to 100 years and longer, and raised considerable concern.
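The "3X gain" figure can be illustrated with the standard linear-feedback arithmetic. This is only a back-of-the-envelope sketch: the relation dT = dT0 / (1 − f) and the illustrative no-feedback value are the usual textbook assumptions, not something established in this post.

```python
# Sketch of the linear-feedback arithmetic behind the "3X gain" figure.
# Assumption: the usual zero-dimensional relation dT = dT0 / (1 - f),
# where dT0 is the no-feedback (direct CO2) response and f the feedback fraction.

def amplified_response(dT0, f):
    """Temperature response with linear feedback fraction f (requires f < 1)."""
    if f >= 1:
        raise ValueError("f >= 1 implies a runaway response")
    return dT0 / (1.0 - f)

dT0 = 1.1              # illustrative no-feedback response to doubled CO2, in C
gain = 3.0             # the gain factor cited in the text
f = 1.0 - 1.0 / gain   # feedback fraction implied by a 3X gain: f = 2/3

print(round(f, 3))                           # 0.667
print(round(amplified_response(dT0, f), 2))  # 3.3
```

Note that the implied feedback fraction of 2/3 sits uncomfortably close to the runaway threshold of 1, which is part of why the gain estimate is contentious.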

In order for the models to be reasonable, two facts had to be established:

Much of the temperature rise since 1850 had to be shown to be not due to natural variability, but due to human activity, with the greatest activity from 1970 to the present.

The positive feedback with large gain factor due to water vapor had to be supported by measured data.

The first of these two facts required that the global temperature rise since 1850, with emphasis on 1970 to the present, be significantly larger and faster than at any other time in at least the last couple of thousand years, so that it would not be reasonable to attribute the recent rise to natural variability. This requirement seemed to contradict the previous best understanding of global temperature variation, which had indicated periods of warming and cooling every few hundred to thousand years comparable to the recent one. However, since direct temperature measurements only go back a few hundred years, and direct global measurements an even shorter time, the older global variation had to be determined indirectly, from proxies.

There are several proxies that can be used: ice cores from glaciers, sea floor cores, coral cores, tree ring data, borehole temperature distributions, etc. All of the proxies are limited in the number of locations from which they can be obtained or in temporal resolution, and many are sensitive to several parameters at once. As an example, tree ring data are affected by cloudiness, rainfall, CO2 fertilization, and surrounding conditions as well as by temperature, and in fact do not record nighttime or winter temperatures, so extracting average temperature alone is not easy and may not be possible in many cases.

Some data deduced from these proxies seemed to support the claim that even though some regions (such as Europe and Greenland) may have had warm periods such as the “Medieval Warm Period” and cooling periods such as the “Little Ice Age”, these were regional rather than worldwide as the present warm period is. If that were true, the present period of worldwide warming would be unique over the last several thousand years.

However, more recent and complete data, along with reanalysis of some of the previous data, contradict that conclusion of uniqueness of the present average temperature. In addition, the present temperature trend is not continuing upwards even though CO2 is still increasing. The comparison of the present temperature trend to previous trends falsifies the claim that the temperature rise is unique. The upper-troposphere absolute water vapor content has not increased with increasing CO2, and the temperature trend has continued flat or even down for the last few years; this tends to falsify the assumed positive feedback between CO2 and water vapor. In addition, the present low sunspot activity and PDO phase make it very likely that there will be more cooling over the next several years. With these two claims falsified, there is NO argument supporting AGW, and the computer models are clearly GIGO.

Neil Ferguson said

Here is an attitude test for anthropogenic “climate change” believers and skeptics. Put a check next to the items you believe are probably true.

(1) Carbon dioxide concentration in the atmosphere has been rising dramatically in the last hundred years,
(2) due to human activities,
(3) and global temperatures have been rising in the past fifty years
(4) at a much faster rate than in the century before
(5) because of the increased CO2,
(6) as shown by unequivocally validated computer models
(7) for which all input data and methodologies have been published and verified,
(8) and the rising temperature will cause terrible material damage to people
(9) but humans can reduce their CO2 generation dramatically
(10) before the damage is irreversible
(11) enough to make a significant difference
(12) without nuclear energy
(13) and without inflicting on humanity ghastly net reductions in prosperity and freedom
(14) or that the harm done is less than the harmful effects of the global warming,
(15) and that the expenditure of similar material resources on other more pressing issues wouldn’t benefit humanity far more.
(16) And the choice of “global warming” vs. “climate change” wasn’t made out of cynical political considerations.

I start hedging at about (4). I put it to you that skeptics will mark the questions truthfully. Believers will, at some point, start to fudge.

Jeff & Leonard
Thanks for a really clear and succinct post. As well as being eloquent it was without any of the vitriol that usually manages to infect posts from either side.
I would love to see a similarly reasoned and non-hysterical reply to this post from the Global Warming enthusiasts. I expect I would have to wait a long time.
Regards
Bill

Carrick said

• In addition to anthropogenic CO2 forcing you have sulfate emissions, which tend to cool the atmosphere.

• According to the climate models, until 1980 the two anthropogenic forcings were nearly equal. Thus, not only do we have to worry about two forcings, but we “know” that the two forcings historically have been nearly equal. You simply can’t neglect sulfate emissions compared to CO2 emissions.

• Since 1980, CO2 has overtaken and dominated over sulfate emissions, in part due to increased pollution control in the industrialized nations.

• Since 2000, there is a large-scale industrialization occurring over much of the globe in the developing nations. With that it’s very likely that sulfate emissions have again started to rise, causing a net cooling.

• Thus we can’t say definitively that the lack of warming post-2000 is a failure in the models, so much as an indication of increased global-scale sulfate pollution.

• Finally, the climate models typically don’t attempt to model short-duration climate fluctuations. 10 years is probably short on their time scale, and it’s very possible that other natural climate forcings can dominate over these short time scales yet result in no additional long-term secular drifts in global mean temperature.

JAE said

RB said

“Much of the temperature rise since 1850 had to be shown to be not due to natural variability, but due to human activity”

This is a strawman. See here. The blog author can drop the pretense of upholding some scientific standard.

But remember that the IPCC concludes only that some or most of the warming since 1957 was very likely (90% confidence) caused by humans. They do not conclude that the warming from 1850 to 1957 was human-caused, because they cannot rule out the possibility that it was a natural recovery from the Little Ice Age, perhaps caused by changes in the intensity of the Sun. It is important to recognize that when a politician or scientist claims that the warming prior to 1957 is human-caused, they are giving their own conclusion, not representing the IPCC scientific consensus.

Mark T said

I think you’re having a problem reading, RB. He did not say that warming from 1850 was due to humans. No, he said that “for the models to be reasonable,” which is quite a different statement.

Indeed, there is a strawman, but you wrote it, not Dr. Weinstein. Furthermore, your subsequent statement about some scientific standard would seem to be at odds with your own inability to comprehend normal language. In other words, you’re a hypocrite.

Leonard Weinstein said

Layman Lurker,
I was not attempting to explain why the temperature has increased over the last 150 years, just to show that it does not exceed the reasonable variation range based on past history. I do think that variations in solar activity (both sunspot activity affecting clouds and the direct flux level), along with long-period ocean currents, volcanoes, soot, aerosols, etc., are all factors. Even CO2 and methane are probably factors, but almost certainly at a low level.

Neil,
I would give 2 a partial, and cut off 4 and beyond

Carrick,
I do not know whether the combination of sulfates and CO2 results in a near balance. I tend to be skeptical without evidence (not model waving). Assuming that two large effects nearly cancel, and that the net difference can swing either way, is a convenient out for answers that do not track anything proposed. Your last statement is pure hypocrisy. Choosing 30 years as climate, but rejecting a trend that has gone on 10 years and seems very likely to continue at least several more, is absolutely arbitrary. Why not choose 70 years, over which the rise is only 0.04 C per decade? If you choose the full 150-year period, the average rise is also 0.04 C per decade. The rise appears to be well fit by a linear rate of 0.04 C per decade with a near-sinusoidal variation of 60-year period superimposed on it.
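The decomposition Leonard describes (a 0.04 C/decade linear trend plus a roughly 60-year sinusoid) can be sketched with ordinary least squares. The data below are synthetic, generated from those very parameters; the basis functions and the assumption that the 60-year period is known in advance are editorial illustrations, not his method.

```python
# A minimal sketch: fit "linear trend + 60-year sinusoid" to a synthetic
# temperature-anomaly series built from exactly those components plus noise.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1860, 2010)
trend = 0.004 * (years - 1860)           # 0.04 C/decade = 0.004 C/yr
cycle = 0.12 * np.sin(2 * np.pi * (years - 1860) / 60.0)
anom = trend + cycle + rng.normal(0, 0.05, years.size)

# Linear least squares with basis [1, t, sin(2*pi*t/60), cos(2*pi*t/60)],
# assuming the 60-year period is known (fitting the period too is nonlinear).
t = years - 1860
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / 60.0),
                     np.cos(2 * np.pi * t / 60.0)])
coef, *_ = np.linalg.lstsq(A, anom, rcond=None)
print(f"fitted trend: {coef[1] * 10:.3f} C/decade")  # close to 0.040
```

With only ~2.5 cycles of a 60-year oscillation in a 150-year record, the trend and cycle terms are only weakly separable, which is one reason such decompositions remain contested.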

Mark T said

Choosing 30 years as climate but rejecting a trend that has gone on 10 years and seems very likely to continue at least several more is absolutely arbitrary.

I’ve long held that the notion that some straight-line fit of any length has legitimacy is nonsense for data such as this, statistics be damned. For that matter, there’s no reason to assume there is any polynomial fit that will work for all times, which implies they are all nothing but overfitting.

Frank K. said

“Finally, the climate models typically don’t attempt to model short-duration climate fluctuations. 10 years is probably short on their time scale, and it’s very possible that other natural climate forcings can dominate over these short time scales yet result in no additional long-term secular drifts in global mean temperature.”

Oh really?? Please tell us what equations the AOGCMs are solving, and how they solve them (the specific algorithms, if you can). Hint – they’re very nearly the same as the short term weather forecasting codes (except they have a lot of numerical dissipation and other unphysical artifices so that they don’t blow up when integrated over years and years of computational time). And they can’t make accurate predictions on a 10 year time scale!? Heh…

Carrick said

Frank, it’s actually a slam-dunk that these models simply don’t contain high-frequency climate fluctuations. It’s not enough to say they are solving the same equations, since in practice the GCMs aren’t being run with fine enough temporal/spatial resolution to resolve “short duration” fluctuations associated with e.g. ENSO and other climate fluctuations.

Frank K. said

“Frank, it’s actually a slam-dunk that these models simply don’t contain high-frequency climate fluctuations. It’s not enough to say they are solving the same equations, since in practice the GCMs aren’t being run with fine enough temporal/spatial resolution to resolve “short duration” fluctuations associated with e.g. ENSO and other climate fluctuations.”

I believe they run with a time step of about 30 min. Is that not fine enough? Do you happen to know the stability requirements for the leapfrog time integration algorithm that is used (CFL number)? Of course, these days they can run these codes in parallel on thousands of cores of a Linux cluster, perhaps enough to provide the required resolution.

My belief is that the AOGCMs can’t accurately predict the evolution of the real climate because the solutions to the highly coupled, non-linear governing equations are inherently unstable/chaotic. To stabilize them, unphysical methods of controlling numerical errors are required (e.g. mass and energy “fixers”, filters, etc.).
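The CFL constraint Frank raises can be demonstrated on a toy problem. This is an editorial illustration, not a claim about any particular GCM: leapfrog time stepping with centered differences on 1-D linear advection is stable only when the Courant number satisfies |c·dt/dx| ≤ 1, and blows up rapidly otherwise.

```python
# Toy illustration of the CFL stability limit for the leapfrog scheme
# applied to 1-D linear advection u_t + c u_x = 0 on a periodic grid.
import numpy as np

def leapfrog_advection(courant, k=16, nx=64, nsteps=50):
    """Advect sin(k*x); return max |u| after nsteps (explodes if unstable)."""
    x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
    u_old = np.sin(k * x)
    # Start the two-level leapfrog scheme with one forward-Euler step
    u = u_old - 0.5 * courant * (np.roll(u_old, -1) - np.roll(u_old, 1))
    for _ in range(nsteps):
        u_new = u_old - courant * (np.roll(u, -1) - np.roll(u, 1))
        u_old, u = u, u_new
    return np.abs(u).max()

print(leapfrog_advection(0.5) < 2.0)   # True: Courant 0.5 stays bounded
print(leapfrog_advection(1.5) > 1e3)   # True: Courant 1.5 explodes
```

Real AOGCMs satisfy the CFL limit by construction; Frank's point is about the filters and "fixers" layered on top, which this toy does not model.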

It is interesting that you provided links to results from GISS Model E, since they are particularly bad about documenting their code. I don’t think even they know what equations they’re solving anymore…

DeWitt Payne said

Much more than simple water vapor feedback is necessary to get the climate sensitivity to 3X the first order effect of CO2. Plugging the assumption of constant relative humidity into MODTRAN only gets you from 0.87 to 1.29 C surface temperature offset for doubling CO2 (1976 standard atmosphere, 280 to 560 ppmv CO2, 100 km, looking down, clear sky) or less than a 50% increase. So where’s the other 150%? I’m betting on clouds as a positive feedback.
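DeWitt's arithmetic can be checked directly. The two MODTRAN numbers he quotes are taken from his comment as given; the rest is simple bookkeeping showing how far a constant-relative-humidity water vapor response falls short of a 3X sensitivity.

```python
# Checking the arithmetic in the comment above: the quoted MODTRAN numbers
# give roughly a 48% boost from constant relative humidity, far short of
# the 200% boost a 3X climate sensitivity would require.
no_feedback = 0.87       # C per CO2 doubling, clear sky (quoted above)
with_water_vapor = 1.29  # C per doubling, constant relative humidity (quoted)
target = 3 * no_feedback # what a 3X gain would require

wv_boost = with_water_vapor / no_feedback - 1
print(f"water vapor boost: {wv_boost:.0%}")                   # about 48%
print(f"still missing: {target - with_water_vapor:.2f} C")    # 1.32 C
```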

AMac said

Steve Easterbrook is a computer scientist who is developing a professional interest in climate modeling. Here’s a recent post of his, on attending a talk at the recent American Geophysical Union meeting by paleoclimatologist Richard Alley of Penn State [sic], AGU Day 2: The role of CO2 in the earth’s history.

Yesterday afternoon, I managed to catch the Bjerknes Lecture, which was given by Richard B. Alley: “The Biggest Control Knob: Carbon Dioxide in Earth’s Climate History”. The room was absolutely packed – standing room only; I estimated at least 2,000 people in the audience. And it was easy to see why – Richard is a brilliant speaker, and he was addressing a crucial topic: an account of all the lines of evidence we have of the role of CO2 in climate changes throughout prehistory.

Carrick said

My belief is that the AOGCMs can’t accurately predict the evolution of the real climate because the solutions to the highly coupled, non-linear governing equations are inherently unstable/chaotic. To stabilize them, unphysical methods of controlling numerical errors are required (e.g. mass and energy “fixers”, filters, etc.).

I actually write codes for solving a particular class of nonlinear 3-D fluid mechanics problems, so I have some experience with the stability, range of scale sizes, and other numerical issues here. The differences in scale sizes among the processes involved are, in my experience, the biggest issue.

The large scale size is planetary-scale circulation, which is on the order of 10^6 m. The smallest scale is the viscous limit of turbulent motion, which is 1 mm. That’s roughly 9 orders of magnitude in scale that one in principle has to encompass to do a “first principles” analysis. The shortest time scale is on the order of 0.01 seconds (set by the viscous limit), the longest on the order of 10^10 s. That’s 12 orders of magnitude in time.

30 minutes is roughly the turn-over time for the boundary layer. That’s going to be too long a time scale to capture weather, of course, so there is no chance at all of realistically modeling cloud physics with that, let alone realistic transport of moisture and aerosol particles via boundary-layer convection. I suspect that for the ocean currents that drive many natural fluctuations (assuming the models even have all of the physics in them), it’s more of a spatial resolution issue: most climate models have a resolution of about 200 km, which is entirely too coarse a scale to capture the relevant ocean dynamics.

That said, what I do know from experience is that one should look at the frequency content of the model output and compare it to measurement. As you can see from Bob Tisdale’s comparisons, there is no question that at least Model E fails to faithfully reproduce the high-frequency content of the experimental measurements.

Also, if you actually look at the frequency content of the global temperature record, it’s not really just noise; rather, it’s a series of modes with distinctive frequencies. This is the sort of feature that any realistic model (or temperature reconstruction) needs to be able to reproduce.
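The model-versus-data comparison suggested here can be sketched with spectral estimates. Both series below are synthetic stand-ins (AR(1) "red" noise, with an extra 5-year oscillation injected into the "observed" one), since no real data accompany the discussion, and a plain periodogram stands in for a proper segment-averaged Welch estimate.

```python
# Sketch: compare the spectra of a "model" series and an "observed" series
# and look for a mode the model misses. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(1)

def red_noise(n, phi=0.9):
    """AR(1) noise, a common null model for climate time series."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal()
    return x

def psd(x, fs):
    """Simple one-shot periodogram via the FFT."""
    X = np.fft.rfft(x - x.mean())
    return np.fft.rfftfreq(x.size, d=1.0 / fs), np.abs(X) ** 2 / (fs * x.size)

n = 1200                      # 100 years of monthly values
t = np.arange(n)
model = red_noise(n)
observed = red_noise(n) + 3.0 * np.sin(2 * np.pi * t / 60.0)  # 5-year mode

f, p_mod = psd(model, fs=12.0)          # fs = 12 samples/year
_, p_obs = psd(observed, fs=12.0)

band = (f > 0.15) & (f < 0.25)          # around 0.2 cycles/year
print(p_obs[band].max() > p_mod[band].max())  # the injected mode stands out
```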

Finally, chaotic isn’t the same thing as random, of course. Chaotic implies a sensitivity to initial conditions, which means that when you run the model with slightly different input parameters (including the historical forcings data), you’ll see differences in the details of the various ocean-atmospheric oscillations; but that will allow you to build statistical models of their variations, which in turn can be used to set realistic uncertainties in GCMs. (Another point: because the system is forced, and the forcings are somewhat constrained, you can get phenomena like mode entrainment, which can greatly reduce the sensitivity of the system to its initial state.)

As to Model-E versus other codes… well if you can find a code that gives a realistic PSD over time scales shorter than 30 years let me know. I’m interested.

Leonard Weinstein said

S. Geiger,
There is a need for specific (absolute) humidity to increase significantly in the upper troposphere in order to support the claim of positive feedback from water vapor. The argument is quite clear: lack of a significant increase clearly falsifies it. However, even if the specific humidity did increase as expected, that still would not prove positive feedback, because cloud variation may result in negative feedback. Thus the increase is necessary but not sufficient.

Leonard Weinstein said

DeWitt,
Increasing low clouds are generally a negative feedback source; higher ones may be a positive feedback. However, there is no indication that cloud variation has been a net positive feedback source. The correlation of sunspot variation = cosmic ray variation = cloud variation seems better supported than the correlation of CO2 variation = cloud variation.

Fran Manns said

(1) Carbon dioxide concentration in the atmosphere has been rising dramatically in the last hundred years,

[False – it has been over 400 ppm three times in the past 180 years of measurements]

(2) due to human activities,

[False – due to warming the oceans and the property of inverses solubility in water]

(3) and global temperatures have been rising in the past fifty years

[False – it fell from 1940 to 1960 and began to rise again. Temps stabilised in 1999 and have been unsteady since, but not correlated with CO2.]

(4) at a much faster rate than in the century before

[The twentieth century rate was determined by sunspot cycle length – peak frequency – until the Pinatubo eruption in 1991]

(5) because of the increased CO2,

[False – no evidence that CO2 is anything but an effect of warming.]

(6) as shown by unequivocally validated computer models

[Computer models are not science – only elaborate garbage-in, garbage-out calculators, far too primitive to predict a system as chaotic as weather forecasts, much less weather.]

(7) for which all input data and methodologies have been published and verified,

[true or false – who knows after the raw data have been cooked by CRU and Penn State.]

(8) and the rising temperature will cause terrible material damage to people

[Rising temperatures will be easier to adapt to than falling; climate is always changing.]

(9) but humans can reduce their CO2 generation dramatically

[False – look around at the prosperity we have.]

(10) before the damage is irreversible

[False – only hubris sees damage. I see plant growth.]

(11) enough to make a significant difference

[Measure it!]

(12) without nuclear energy

[We will go without if the NGOs have their way.]

(13) and without inflicting on humanity ghastly net reductions in prosperity and freedom

[False – the EPA – is a rogue agency.]

(14) or that the harm done is less than the harmful effects of the global warming,

[The friction on the credibility of the free economy has already been devastating.]

(15) and that the expenditure of similar material resources on other more pressing issues wouldn’t benefit humanity far more.

[True – What a concept]

(16) And the choice of “global warming” vs. “climate change” wasn’t made out of cynical political considerations.

[Only about 15% are true believers, but that is what scares the politicians.]

Remember the lesson of “The Boy Who Cried Wolf”. The assumption that there is or ever will be man-made Orwellian climate change is the big dead elephant in the room. Remember global warming? I think all the politicians on the bandwagon need to be replaced by public servants who can objectively evaluate scientific and technical data. The environmental protection agencies and the NGOs around the world have been dominated by extremists since the first Earth Day, and are part of the problem.

Frank K. said

I too have been professionally involved in computational fluid dynamics for over 20 years (either writing or using codes). While I haven’t worked on geophysical fluid dynamics (with the exception of writing a code to solve the barotropic vorticity equation for a class project in grad school :^), I do know that the more complex the system you’re modeling, the more important (really, essential) it is to adequately document your modeling procedures. If you don’t, it is very easy for unintended problems to show up as you keep adding or modifying physical models (turbulence models, Eulerian multiphase models, Lagrangian particle models, advection schemes, etc.). I have a very negative opinion of the GISS group simply because they choose to spend their time making press releases and blogging rather than documenting their code, with the best-known member of the modeling group (G.S.) famously stating that he’s not paid to document but “to do science”.

There are many other groups doing much better computational work: NCAR (CAM 3.0 and related), GFDL, MIT, ECMWF. For example, look at the CAM 3.0 documentation:

If I were doing this kind of research, I’d start there without question. Model E is a piece of junk by comparison.

Your plot here is interesting, but what does it represent (axes aren’t labeled, no units, tsk tsk :^)?

Finally, as for chaotic solutions, you are correct that one manifestation is sensitivity to initial conditions (which also speaks to the well-posedness of the differential equations + BCs + ICs), but you also leave out sensitivity to numerical errors, both temporal and spatial. This gets back to the stability issue – without knowing the stability constraints of your scheme (e.g. the leapfrog scheme for Model E), your solutions for all modes can become garbage after some period of computational time as errors grow. I’m sure there are “fixers” and filters in the code to keep temperatures, pressures, and velocities from becoming unphysical, but these fixes are themselves unphysical. Running simulations out to 100 years of computational time is fun for the analyst (creating colorful animations of an overheating earth) but of no practical use, given the errors that have likely built up by that point. The idea of constructing an ensemble of forecasts for statistical analysis is interesting, but it assumes that all solutions are “exact” (i.e., numerical errors are negligible) – how do you ensure this? Do you run the codes with a sequence of time steps and spatial resolutions to demonstrate mesh insensitivity (which is the standard validation approach in CFD)?
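The time-step refinement study Frank describes can be sketched on a toy problem. This is an editorial illustration on a simple ODE, not a GCM: halve the step repeatedly and confirm the error shrinks at the scheme's formal order (forward Euler here, order 1).

```python
# Minimal convergence study: forward Euler on y' = -y, y(0) = 1, where the
# exact solution exp(-t) is known, so the observed order can be measured.
import numpy as np

def integrate(dt, t_end=2.0):
    """Forward Euler for y' = -y from t = 0 to t_end."""
    n = int(round(t_end / dt))
    y = 1.0
    for _ in range(n):
        y += dt * (-y)
    return y

exact = np.exp(-2.0)
errors = [abs(integrate(dt) - exact) for dt in (0.1, 0.05, 0.025)]
orders = [np.log2(errors[i] / errors[i + 1]) for i in range(2)]
print([round(o, 2) for o in orders])   # both close to 1.0 (first order)
```

For a GCM the exact solution is unknown, so the same check is done against a highly refined reference run; the principle (demonstrated insensitivity to dt and dx) is identical.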

DeWitt Payne said

I should have been more specific. I’m betting that in the models the missing 150% comes from clouds as a positive feedback. In the real world, clouds, on average, appear to be a negative feedback, cirrus ice clouds (and jet contrails which are effectively cirrus clouds) being the exception.

Dan Pangburn said

Tens of billions of dollars (grants from the deep pockets of governments) have been spent in futile efforts to prove that added CO2 caused global warming, while an unpaid engineer with a desktop computer, using simple engineering analysis, has discovered what really determined the average global temperature history since 1895.

With this discovery, changes to ghg levels have been found to have no significant effect on climate and Natural Climate Change has been verified. This does not show that added ghgs have zero effect. It does show that the temperature anomalies for 114 years…and counting can be accurately calculated by ignoring any effect from changes to the level of ghgs.

Dan Pangburn wrote, “All average global temperatures since 1895 are predicted by a simple model. There was no need to consider change to the level of CO2 or any other greenhouse gas.” Your link was a discussion of a model that uses the PDO to recreate and predict global temperature anomalies.

#31, I just took a quick look at Dan’s sunspot numbers and integrated them myself. He’s right that by the numbers he’s presented, sunspot count is substantially higher since about 1930. By integrating (summing from month 1) and removing (subtracting) an average annual number, I found quite a hockey stick since that time. You may be already aware of the effect, but it’s not small.
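The integration described in the comment above can be sketched as follows. The series here is synthetic (lower counts in the early decades, higher afterward), standing in for the real sunspot record; the point is only to show how sustained above-average counts accumulate into a "hockey stick" when integrated.

```python
# Sketch of the procedure described above: cumulatively sum monthly sunspot
# numbers after removing a long-run mean, so persistent high counts build
# into a rising curve. The counts are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(2)
months = 12 * 100                                  # a "century" of monthly values
counts = np.where(np.arange(months) < 12 * 35,     # first 35 years quieter
                  rng.normal(40, 10, months),
                  rng.normal(70, 10, months))
anomaly = counts - counts.mean()                   # remove the average number
integrated = np.cumsum(anomaly)                    # running time-integral

# The integral dips while counts sit below the mean, then climbs steadily
print(integrated[12 * 35] < 0 and integrated[-1] > integrated[12 * 35])  # True
```

Note one caveat of the method: because the subtracted mean is computed from the whole record, the integral is forced back toward zero at the endpoint, so the shape (not the end value) carries the information.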

Leonard Weinstein said

Bob Tisdale,
Correlating warming with ENSO may be a sign that ENSO directly causes the warming, but it does not answer the question of why the levels in ENSO are as high as they are. In fact, the variation in energy in ENSO may be due to variations in solar effects (the actual source of the energy) changing what is absorbed into the sea and, after a period of lag, causing the air above the ocean to heat. Since this ocean-current lag does not affect direct heating of land, and since other factors such as volcanoes are included, the story is not quite that simple. The need is to explain why ENSO varies, as well as its effect.

RB said

In the lower portion of the figure are the results of additional simulations in which the model was operated with only one forcing factor at a time. A key conclusion of the Meehl et al. (2004) work is that the model response to all factors combined is, to a good approximation, equal to the sum of the responses to each factor taken individually. This means it is reasonable to talk about the temperature change due to individual aspects of the evolving man-made and natural influences on climate. The zeros on both plots are set equal to 1900 temperatures, and it is apparent that most of the 0.52 °C global warming between 1900 and 1994 should be attributed to a 0.69 °C temperature forcing from greenhouse gases, partially offset by a 0.27 °C cooling due to man-made sulfate emissions, with other factors contributing the balance. This contrasts with the warming from 1900 to 1940, for which the model attributes a net increase of only 0.06 °C to the combined effects of greenhouse gases and sulfate emissions.

Mark T said

Don’t get me wrong, I’m not advocating either Leif’s or tallbloke’s positions, just commenting on that thread (as well as others I’ve seen him post in). Leif is nothing if not… firm in his belief in his own work.

The heat released by an El Nino is the product of the prior La Nina, so there is little lag, a few years at most. The warm water in the Pacific Warm Pool is charged during the La Nina when trade winds increase. The increase in trade winds reduces cloud cover, allowing more downward shortwave radiation to warm the tropical Pacific. The stronger trade winds also help to force the warm water west to the Pacific Warm Pool where it accumulates (some slops over into the eastern Indian Ocean through the Indonesian Throughflow). The 1997/98 El Nino was fueled by the 1995/96 La Nina, not by some long-term build-up of heat. Refer to:

You wrote, “Correlating warming with ENSO may be a sign that ENSO directly causes the warming, but does not answer the question why the levels in ENSO are as high as they are.”

What high levels? The long-term linear trend of NINO3.4 SST anomalies is flat.
The short-term linear trend (since 1975) is negative:
The short-term linear trend (since 1979; i.e., the last 30 years) is even more negative:

And if we smooth the NINO3.4 SST anomalies with a 121-month filter, NINO3.4 SST anomalies in the latter warming period of the 20th Century are not significantly higher than they were during the first warming period, from the 1910s to the 1940s.

So there is no question as to “why the levels in ENSO are as high as they are.” NINO3.4 SST anomalies have been declining recently.
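The 121-month smoothing referred to above can be sketched as a centered moving average (an assumption about the filter used; the data here are synthetic noise with an ENSO-like ~4-year oscillation, not real NINO3.4 anomalies).

```python
# Sketch of a 121-month centered running mean, which suppresses individual
# El Nino / La Nina events and leaves only multidecadal behavior.
import numpy as np

rng = np.random.default_rng(3)
n = 720                                       # 60 years of monthly anomalies
t = np.arange(n)
enso_like = 1.0 * np.sin(2 * np.pi * t / 48.0) + rng.normal(0, 0.3, n)

window = 121
kernel = np.ones(window) / window
smoothed = np.convolve(enso_like, kernel, mode="valid")  # centered average

# The ~4-year oscillation is almost entirely removed by the 121-month window
print(enso_like.std() > 5 * smoothed.std())
```

The 121-month (about 10-year) window is far longer than the ENSO period, which is why only multidecadal differences, like those between the two 20th-century warming periods, survive the filter.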

You wrote, “In fact, the variation in energy in ENSO may be due to the variations in Solar effects (the actual source of the energy) changing what is absorbed into the sea, and after a period of lag, causing the air above the ocean to heat.”

RB said

PeterB, long-term is a matter of definition, but I second the viewing of the very entertaining video AMac (#22) links to (which I’d linked to over at CA earlier as well), discussing the current evidence for CO2 as a climate driver over the really long term, much of which hasn’t even made it into the IPCC yet because of its newness. Obviously CO2 cannot be blamed for the ice age cycles. As he says, over the really long term it is the only known explanation available to geophysicists; failing it, we have no alternative explanation for the last 420 million years. Very interesting also is that the really long-term, model-based sensitivity plausibly fits 2.8 C per doubling of CO2 concentrations, which is also the short-term sensitivity used by the IPCC (3 C over 100 years).

danpangburn said

Bob Tisdale – One of the discoveries in the research presented in the October 16 pdf at http://climaterealists.com/index.php?tid=145&linkbox=true is an effective oscillation consisting of a temperature uptrend and downtrend with amplitude 0.45 C and period 64 years, with no net energy change over a full period. This oscillation contributed to a model that accurately predicts average global temperatures since 1895. Calling this oscillation the PDO was a misleading mistake. The oscillation is certainly brought about by ocean circulation, with temperature varying along the circulation paths and different positions along the paths reaching the surface at different times. It would probably have been better to coin a new expression such as Effective Ocean Turnover (EOT) or maybe Effective Ocean Surface Temperature.

The time-integral of the (properly scaled) PDO Index from http://jisao.washington.edu/pdo/PDO.latest happens to match EOT quite closely and even better if the AMO Index (properly scaled) is subtracted (??). This is, however, only incidental to the research but may be of interest to others.

The research applied the first law of thermodynamics (called ‘energy balance’ in the paper).

The second discovery is that the time-integral of sunspot count (properly scaled), added to the oscillation, results in an excellent prediction of all average global temperatures since 1895, including the recent downtrend. The standard deviation of the difference between concurrent 1.1.1-smoothed measured and predicted temperatures since 1900 is 0.064 C. Note that others have looked at time or amplitude separately, but I have not found where anyone else has looked at the combination of time and amplitude that the time-integral captures.
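For readers trying to follow the structure of the model being described, here is a minimal sketch in Python: a fixed oscillation plus the running total ("time-integral") of sunspot count above a break-even level. All coefficients below (amplitude, period, phase year, scaling, break-even count) are illustrative placeholders, not the values from the paper:

```python
import math

def predicted_anomaly(year, sunspot_counts, base_year=1895,
                      amp=0.225, period=64.0, phase_year=1937.0,
                      scale=0.0005, break_even=30.0):
    """Sketch of the two-part model: a 64-year oscillation
    (0.45 C total amplitude, i.e. +/-0.225 C) plus the running
    total of annual sunspot count above an assumed break-even
    level. Coefficients are illustrative, NOT the paper's."""
    # Oscillatory part: repeats every `period` years
    osc = amp * math.sin(2.0 * math.pi * (year - phase_year) / period)
    # Integral part: running total of sunspots above break-even,
    # from base_year up to and including `year`
    n = year - base_year + 1
    integral = sum(s - break_even for s in sunspot_counts[:n])
    return osc + scale * integral
```

Note that if the sunspot count sits exactly at the break-even level, the integral term vanishes and only the oscillation remains, which is what makes the two parts separable.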

I have skimmed (and added to ‘favorites’ for study later) your Jan 25 paper. Where I merely discovered the net that works, it appears that you have dug much deeper and identified the pieces of the puzzle that fit together to make the net that works.

RB said

Carrick said

Thanks for the link RB. I don’t think he did a particularly great job in explaining clearly why the models fail to capture short term climate variability, and I think he still managed to overstate the agreement of models and data. But at least we have somebody who is an expert on climate models weighing in on this.

Layman Lurker said

WRT Bob Tisdale's link in #33 to Tallbloke's graph showing the correlation between SST and cumulative sunspot counts, here is a link to another post at WUWT by David Archibald which shows the relationship between the rate of change in sea level rise and sunspot activity. Note the relationship is noisy in the period between 1930 and 1950 – the same period of discrepancy noted by Tallbloke, which he attributed to "maladjustment for cooling water inlet sensors in military vessels"

Mark T said

Leif will often acknowledge that correlations exist but wants to reconcile with the physical mechanism.

I've never seen him admit that, and quite frankly it seems like I've seen him say there is no discernible correlation, but don't hold me to that since I'm going from memory (it has been a while since I've seen him heavily involved in any threads).

I agree, however, that I’d like to see the mechanism as much as I’d like to see the CO2 mechanism. IMO, the sun’s role seems more obvious simply from a common sense standpoint, but when discussing systems as complex as the earth’s climate, common sense may not be an appropriate tool to bring to the party.

The only statement I’ve ever heard from Gavin that I believe is that the ensemble mean is not the truth. What he does not mention, however, is that we have no reason to believe the “truth” is even representable by any of the models, or that the models are any reasonable representation of the population of possible outcomes. Any other statistical claims about their veracity are equally dubious. As someone in another thread noted, in other fields, any model that does not match observations has obviously failed and should not be trusted. For whatever reason, this is not true with climate models.

Sabretruthtiger said

The problems with proving Anthropogenic Global Warming seem to be many, not least of which is the fact that the Earth has been warming steadily for 300 years with no apparent response to the escalation in anthropogenic CO2 emissions.
For the past 8 years, atmospheric and sea temperatures have not increased; this may be too short a timescale to attach significance to, but the scientists at East Anglia appear to have been distraught by it.

Solar radiation in combination with oceanic oscillations appears to correlate much more with climate change than CO2 does. (I must admit I'm not a scientist and am not sure what causes the oscillations.)

A recent study shows the correlation http://www.mail-archive.com/fairfieldlife@yahoogroups.com/msg171465.html
An excerpt from the study, “That mean global tropospheric temperature has for the last 50 years fallen and risen in close accord with the SOI (Southern Oscillation Index) of 57 months earlier shows the potential of natural forcing mechanisms to account for most of the temperature variation.”
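The 57-month-lag claim in that excerpt is the kind of thing a layman can check in a few lines. Below is a minimal sketch of a lagged-correlation scan; the helper name and the synthetic data are hypothetical illustrations, not anything from the cited study:

```python
import numpy as np

def best_lag_correlation(driver, response, max_lag=120):
    """Scan lags 0..max_lag (months) and return (lag, r) where the
    correlation between `driver` (e.g. the SOI) and the later
    `response` (e.g. tropospheric temperature) is strongest in
    magnitude. Both inputs are equal-length monthly series."""
    best_lag, best_r = 0, 0.0
    for lag in range(max_lag + 1):
        d = driver if lag == 0 else driver[:-lag]
        t = response if lag == 0 else response[lag:]
        r = np.corrcoef(d, t)[0, 1]
        if abs(r) > abs(best_r):
            best_lag, best_r = lag, r
    return best_lag, best_r
```

Run against the real SOI and temperature series, a scan like this is what would (or would not) reproduce the 57-month figure; the caveat is that a lag scan alone proves correlation, not mechanism.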

Solar activity-wise, here is the abstract from the article in Energy & Environment, Volume 17, Number 1, January 2006, pp. 29-35(7):

“Projections of weak solar maxima for solar cycles 24 and 25 are correlated with the terrestrial climate response to solar cycles over the last three hundred years, derived from a review of the literature. Based on solar maxima of approximately 50 for solar cycles 24 and 25, a global temperature decline of 1.5°C is predicted to 2020, equating to the experience of the Dalton Minimum. To provide a baseline for projecting temperature to the projected maximum of solar cycle 25, data from five rural, continental US stations with data from 1905 to 2003 was averaged and smoothed. The profile indicates that temperatures remain below the average over the first half of the twentieth century.”

The climate models also appear to exaggerate upper-tropospheric water vapour. They seem to ignore sub-grid cumulonimbus convection, which leads to increased return-flow subsidence and thus less water vapour in the upper troposphere near the emission level. This means they understate outgoing longwave radiation and overstate warming.
Climate models also seem to ignore relative humidity.

The models also seem to ignore the low-level clouds that form due to humidity and reflect solar radiation back, creating a cooling effect. So there seem to be counterbalancing factors to a rise in temperature, as opposed to a runaway effect. Climate sensitivity / radiative forcing thus appears to be exaggerated in the models.

Also, global ice cover appears to be normal, the Arctic having recovered from its 1997 minimum. Antarctic ice has increased recently and was at a record maximum in '97, when the Arctic was at its minimum. NASA claimed the Arctic decrease at the time was due to unusual tropical winds and the volcanic activity documented beneath the Arctic.

I once again reiterate that I'm not a scientist; I want to get to the bottom of this and am merely voicing the problems I see with AGW from the point of view of a layman. I realise many if not all on this forum have far more knowledge than I do on the subject, especially the technical aspects, and I would appreciate any feedback, be it in support or counter.

Layman Lurker (52): You wrote, "Note the relationship is noisy in the period between 1930 and 1950 – the same period of discrepancy noted by Tallbloke which he attributed to 'maladjustment for cooling water inlet sensors in military vessels'"

RB said

Re: #54
“…but the scientists at East Anglia appeared to have been distraught by this.”
I'm not sure if you're referring to Trenberth's email, but there was an alternative explanation for it: his comments apparently were meant to draw attention to his paper, in which he argues that the planet is still warming but that the observation systems are inadequate to account for the energy flow.

Carrick said

Note the relationship is noisy in the period between 1930 and 1950 – the same period of discrepancy noted by Tallbloke which he attributed to "maladjustment for cooling water inlet sensors in military vessels"

This problem doesn’t explain the even larger discrepancy for land-based records compared to simulation results of course.

Leonard Weinstein said

Thanks for your details. That does help clarify my understanding of some of the details. The main point I was trying to make is that the oceans are not sources that heat the Earth; they are just storage that releases, at different times, the solar energy they absorb. When I mentioned why the levels in ENSO are as high as they are, I was not making a point about the anomaly of '98 or any other specific case; I was referring to how high the resulting recent temperature peaks were from an El Nino adding heat on top of an already-warming bias. I reread my comment, and I see how it was not clear. I am also aware of the present cooling trend and weakening variations. The comment on land refers only to the fact that the land heats and cools much faster, with much less storage and, except for wind transport, less movement of energy across ground level. As to the sunspot effect, I suspect a long-period lag effect, but have no supporting evidence myself. However, the PDO has a fairly good correlation with the longer-term temperature trend, and multi-decade Pacific and Atlantic cycles may also enter into the trend.

Frank K. said

“Good old Gavin. He never gives an inch on anything. Here he is defending FORTRAN!”

Thanks for the link, Ron! GS’s reply is simply too delicious not to reprint in full…

"[Response: You might think that, but it's just not true. Fortran is simple, it works well for these kinds of problems, it compiles efficiently on everything from a laptop to massively parallel supercomputer, plus we have 100,000s of lines of code already. If we had to rewrite everything each time there was some new fad in computer science (you know, like 'C' or something ;) ), we'd never get anywhere. – gavin]"

—

He forgot something though…

“…we have 100,000s of lines of *** UNDOCUMENTED *** code already.”

I suppose if they actually DID rewrite the code, they may have to think about the physics, the numerical algorithms, code structure and … well, let’s not go there.

You wrote, “I was referring to how high the resulting recent temperature peaks were from an El Nino adding heat on top of an already warming bias.”

But some of the "warming bias" is really the aftereffect of ENSO events. (The other part would be a function of the AMO.) The relationship between ENSO and global temperatures is not linear. There are "ENSO residuals" that cause an upward trend in global temperatures.

You wrote, “As to the Sunspot effect, I suspect a long period lag effect, but have no supporting evidence myself.”

And while I also have no supporting evidence, I suspect that the reason everyone has so much difficulty pulling the solar signal from the temperature record (with lags that vary all over the place) is because they first remove ENSO from the temperature record. This removal of ENSO “noise” seems self-defeating to me since the heat that is released and redistributed by an El Nino was added to the tropical Pacific OHC only a year or two before the El Nino.
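The "remove ENSO first" step being criticized here can be sketched concretely. The snippet below is an illustrative version of the common practice of regressing temperature on a lagged ENSO index and subtracting the fit; the function name and parameters are hypothetical, and the point is that only the linear, fixed-lag signature is removed:

```python
import numpy as np

def remove_enso_linear(temp, enso_index, lag=3):
    """Regress temperature on a lagged ENSO index and subtract
    the linear fit -- the usual 'remove the ENSO noise' step.
    Only the linear, fixed-lag signature is subtracted; any
    cumulative aftereffect of ENSO events (the 'residuals'
    discussed above) survives in what is left over."""
    n = enso_index[:-lag] if lag else enso_index
    t = temp[lag:] if lag else temp
    slope, intercept = np.polyfit(n, t, 1)  # least-squares line
    return t - (slope * n + intercept)
```

A quick synthetic check makes the limitation visible: if the temperature series contains a trend on top of a lagged ENSO component, the regression removes the ENSO component but leaves the trend in the residual, where it can then be attributed to something else.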

Leonard Weinstein said

Bob,
I have read more of your write-ups in detail and I defer to the expert. All of my added discussion was not critical to my two main points: that historical variations were as large as or larger than the present, and that we are presently in a flat-to-cooling trend which will likely continue. The rest was pure guesswork about why the variations occur, and your details are very clear. The only point you leave out is why there are very long-period trends such as the MWP, the LIA and others, and I think these have to be either Solar related, or similar to the concept of "super waves", i.e., semi-random processes that sometimes add up and sometimes do not.


Mark T said

Smiley or not, that "fad" began in 1978 with the publication of The C Programming Language, began ANSI standardization in 1983, and is still the de facto programming language in nearly every industry (though C++ is gaining).

It always amazes me how stupendously ignorant Gavin is about so many things.

DeWitt Payne said

I don't understand all the flap about which high-level language was used. Poorly written and minimally commented code is going to be just as hard to reverse-engineer whether it's in Fortran, C or whatever.

Leonard Weinstein (65): I have had little interest in the LIA and MWP. My goal over the past year has been to identify the natural basis for the warming since 1975. If and when the powers that be clean up the instrument temperature record for the warming period from 1910 to the 1940s, I’ll address that period too.

Frank K. said

Smiley or not, that "fad" began in 1978 with the publication of The C Programming Language, began ANSI standardization in 1983, and is still the de facto programming language in nearly every industry (though C++ is gaining).

It always amazes me how stupendously ignorant Gavin is about so many things.

Mark

—

When I was doing my Ph.D. research, I originally wrote my CFD research code in FORTRAN. After learning C/C++, I reimplemented it in C – it wasn't that hard! There are many advantages to using C, chief among them its availability on virtually every known computing platform.

However, DeWitt's comment above is right – regardless of the language, one can do a good job writing code that is commented and structured to the point of being self-documenting. The reason I rail on GISS so much is that their modeling group is actually PAID by us, the taxpayers, to do this, but for whatever reason they only do a minimal job of it (a pathetic job, if you ask me). The excuse that "it's only a research code" doesn't cut it either, given that their results are used in the IPCC reports and elsewhere to influence public policy. If they were interested in quelling any questions about the accuracy of their code, they would begin providing complete documentation (including equations, numerical algorithms, all models, etc.). Alas, it appears to me that they are more interested in blogging and appearing on TV than doing any real work…

Mark T said

I agree. My comment was not to point out that Gavin should be updating his code to C, just that he’s too ignorant to understand these very simple concepts. I’d guess this carries over into his work, too.

Mark T said

Blous79 said

Why do CO2 and other "greenhouse" gases like the predominant water vapour have to provide positive feedback? The natural stability of the earth's temperature over time surely suggests some negative feedback control in operation. All biological systems operate on negative feedback, and biological systems have significant interactions with the earth, so the influence of plants and animals on earth is to provide some negative feedback.
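The sign question can be framed with the standard linear feedback-loop formula; this is a generic arithmetic illustration, not a claim about any particular climate model. With output = input + f * output, the steady-state gain is 1/(1 - f), so the 3X amplification mentioned in the head post corresponds to f = 2/3, while a negative f (the kind of damping argued for here) gives a gain below 1:

```python
def feedback_gain(f):
    """Steady-state gain of a simple linear feedback loop:
    output = input + f * output  =>  gain = 1 / (1 - f).
    f > 0 amplifies (positive feedback), f < 0 damps (negative
    feedback), and f >= 1 means runaway (no finite steady state)."""
    if f >= 1.0:
        raise ValueError("f >= 1: runaway, no finite gain")
    return 1.0 / (1.0 - f)
```

For example, f = 2/3 gives a gain of 3, while f = -1 gives a gain of 0.5 (half the direct effect).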

The positive feedbacks attributable to CO2 and other GHGs have been assumed in Hansen's computer model and embedded in Trenberth's Global Energy Budget in a stunning piece of circular logic.

"There is a TOA imbalance of 6.4 W m-2 from CERES data and this is outside of the realm of current estimates of global imbalances (Willis et al. 2004; Hansen et al. 2005; Huang 2006) that are expected from observed increases in carbon dioxide and other greenhouse gases in the atmosphere. The TOA energy imbalance can probably be most accurately determined from climate models and is estimated to be 0.85±0.15 W m-2 by Hansen et al. (2005) and is supported by estimated recent changes in ocean heat content (Willis et al. 2004; Hansen et al. 2005). A comprehensive error analysis of the CERES mean budget (Wielicki et al. 2006) is used in Fasullo and Trenberth (2008a) to guide adjustments of the CERES TOA fluxes so as to match the estimated global imbalance. CERES data are from the Surface Radiation Budget (Edition 2D rev 1) (SRBAVG) data product. An upper error bound on the longwave adjustment is 1.5 W m-2 and OLR was therefore increased uniformly by this amount in constructing a "best-estimate". We also apply a uniform scaling to albedo such that the global mean increases from 0.286 to 0.298 rather than scaling ASR directly, as per Trenberth (1997), to address the remaining error. Thus the net TOA imbalance is reduced to an acceptable but imposed 0.9 W m-2 (about 0.5 PW). Even with this increase, the global mean albedo is significantly smaller than for KT97 based on ERBE (0.298 vs 0.313)."

What if Nordell and Gervet's thermal pollution explains a big chunk of the warming?
What if proper corrections for the airport heat-island effect on surface temperature measurements reduce the observed warming significantly, to be consistent with satellite data?
What if no greenhouse gas can exert a radiative greenhouse effect in the first place (Gerlich and Tscheuschner)?

And for a bit of fun, jet fuel burning causes global surface temperatures to rise:

Dan Pangburn said

All average global temperatures since 1895 are accurately predicted (the standard deviation of concurrent measured-minus-predicted temperatures since 1900 is 0.064 C) by a simple model using the first law of thermodynamics and the time-integral (the same as a 'running total' if the time steps are equal) of sunspot count.
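For anyone unsure what "time-integral" means here, the parenthetical says it: a running total. A trivial sketch of that definition (the helper name is mine, for illustration):

```python
def time_integral(series, dt=1.0):
    """'Time-integral' in the sense used above: a running total,
    which equals a discrete integral when the time steps are
    all equal (dt is the constant step size)."""
    total, out = 0.0, []
    for value in series:
        total += value * dt  # accumulate value-times-step
        out.append(total)
    return out
```

So a series of annual sunspot counts [1, 2, 3] integrates to [1, 3, 6]; scaling and offsetting that running total is what turns it into a temperature term.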

The effective sea surface temperature oscillation (zero net change over a full period) was discovered. There was no need to consider any change in the level of CO2 or any other greenhouse gas. Climate change is natural.

The SSTs are associated with a thermal capacitance, so the time-integral of temperature anomalies is proportional to energy. This makes it rational to plot the time-integral of temperature on the same graph as calculations using conservation of energy.

This model predicted the ongoing temperature decline trend. None of the 20 or so models that the IPCC uses do.