Planetary energy imbalance?

The recent paper in Science Express by Hansen et al (on which I am a co-author) has garnered quite a lot of press attention and has been described as the ‘smoking gun’ for anthropogenic climate change. We have discussed many of the relevant issues here before, but it may be useful to go over the arguments again here.

The key points of the paper are that: i) model simulations with 20th century forcings are able to match the surface air temperature record, ii) they also match the measured changes of ocean heat content over the last decade, iii) the implied planetary imbalance (the amount of excess energy the Earth is currently absorbing) which is roughly equal to the ocean heat uptake, is significant and growing, and iv) this implies both that there is significant heating “in the pipeline”, and that there is an important lag in the climate’s full response to changes in the forcing.

As we have discussed previously, looking in the ocean for climate change is a very good idea. Since the heat capacity of the land surface is so small compared to the ocean’s, any significant imbalance in the planetary radiation budget (the solar in minus the longwave out) must end up increasing the heat content of the ocean. This idea was explored by Levitus et al (long-term observations of ocean heat content) and Barnett et al (modelling of such changes) in a couple of Science papers a few years ago. Since then, the models have got better (for instance, coupled models generally no longer require ‘flux corrections’ to prevent excessive model drift), and the estimates of ocean heat changes, particularly over the last ten years, have improved enormously. The observational improvements come from incorporating satellite altimeter data (which mainly reflects changes in heat content) and, more recently, the Argo float network, which is providing unparalleled coverage of sub-surface waters (particularly in the southern oceans). This implies that the estimates of ocean heat content changes over the last 10 years are the most accurate we have had to date and thus provide a good target against which to compare the models.
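The bookkeeping behind this argument can be sketched in a few lines (a minimal mixed-layer calculation; the layer depth, ocean fraction and heat capacity values below are round illustrative numbers, not those used in the paper):

```python
# Rough bookkeeping: a planetary radiative imbalance sustained over time
# must show up (almost entirely) as ocean heat content change.
# All parameter values are round illustrative assumptions.

SECONDS_PER_YEAR = 3.156e7
EARTH_AREA = 5.1e14      # m^2, total surface area of the Earth
OCEAN_FRACTION = 0.71    # fraction of the surface covered by ocean
RHO_CP = 4.1e6           # J m^-3 K^-1, volumetric heat capacity of seawater

def ocean_layer_warming(imbalance_wm2, years, layer_depth_m):
    """Mean temperature rise of an ocean layer absorbing the full imbalance."""
    joules = imbalance_wm2 * EARTH_AREA * years * SECONDS_PER_YEAR
    heat_capacity = RHO_CP * layer_depth_m * EARTH_AREA * OCEAN_FRACTION
    return joules / heat_capacity

# A 0.6 W/m2 imbalance over a decade, absorbed in the top 750 m:
dT = ocean_layer_warming(0.6, 10, 750)   # roughly a tenth of a degree
```

The point is simply that a sustained sub-watt imbalance, integrated over the whole planet for a decade, corresponds to a small but measurable warming of the upper ocean.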

For their part, the model simulations that have been run for the IPCC AR4 have tried to simulate the climate of the last hundred or so years using only known and quantifiable forcings. Some of these forcings are well known and understood (such as the well-mixed greenhouse gases, or recent volcanic effects), while others have an uncertain magnitude (solar), and/or uncertain distributions in space and time (aerosols, tropospheric ozone etc.), or uncertain physics (land use change, aerosol indirect effects etc.). Given these uncertainties, modellers nevertheless make their best estimates consistent with observations of the actual forcing agents. The test for the modellers is whether they reproduce many of the elements of climate change over that period. Some tests are relatively easy to pass – for instance, we have discussed the model skill in response to the Mt. Pinatubo eruption in a number of threads.

The overall global surface temperature is also well modelled in this and other studies. While impressive, this match may be due to an error in the forcings combined with compensating errors in the climate sensitivity (2.7 C for a doubling of CO2 in this model) or in the mixing of heat into the deep ocean. Looking at the surface temperature and the ocean heat content changes together, though, allows us to pin down the total unrealised forcing (the net radiation imbalance) and demonstrate that the models are consistent with both the surface and ocean changes. It is still conceivable, however, that a different combination of the aerosol forcing (in particular (no pun intended!)) and climate sensitivity may give the same result, underlining the continuing need to improve the independent estimates of the forcings.

So how well does the model do? The figure shows the increase in heat content for 5 different simulations in the ensemble (same climate forcings, but with different weather) matched up against the observations. All lines show approximately the same trend, with the variability between the ensemble runs greater than their difference from the observations (i.e. this is as good a match as could be expected). The interannual variability, predominantly related to ENSO processes, is different, but that too is to be expected given the mainly chaotic nature of tropical Pacific variability, the short time period and the models’ known inadequacy in ENSO modelling. The slope of these lines corresponds to a net heat imbalance of around 0.60 +/- 0.10 W/m2 over 1993-2003, which the models suggest has now grown to around 0.85 +/- 0.15 W/m2. The spread in the distribution of heat in the ocean across the different runs is quite large (figure 3 in the article) but clearly spans the variations in the observations, which are of course just one realisation of the actual climate.

What does this imply? Firstly, as surface temperatures and the ocean heat content are rising together, it almost certainly rules out intrinsic variability of the climate system as a major cause for the recent warming (since internal climate changes (ENSO, thermohaline variability, etc.) are related to transfers of heat around the system, atmospheric warming would only occur with energy from somewhere else (i.e. the ocean) which would then need to be cooling).

Secondly, since the ocean warming is shown to be consistent with the land surface changes, this helps validate the surface temperature record, which is then unlikely to be purely an artifact of urban biases etc. Thirdly, since the current unrealised warming “in the pipeline” is related to the net imbalance, 0.85 +/- 0.15 W/m2 implies a further warming of around 0.5-0.7 C, regardless of future emission increases. These implications are similar to the conclusions discussed recently by Wigley and Meehl et al. Many different models have now demonstrated that our understanding of current forcings, long-term observations of the land surface and ocean temperature changes, and the canonical estimates of climate forcing are all consistent within the uncertainties. Thus, since we are reasonably confident in what has happened in the recent past, projections of these same models under plausible future scenarios need to be considered seriously.
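The “in the pipeline” arithmetic is just equilibrium sensitivity times imbalance (a sketch; the ~3.7 W/m2 of forcing per CO2 doubling used to convert the model’s 2.7 C sensitivity is an assumed canonical value, not a number taken from the paper):

```python
# Committed ("in the pipeline") warming = equilibrium sensitivity x net
# imbalance. 2.7 C per doubling is the model's sensitivity; the ~3.7 W/m2
# forcing per CO2 doubling used to convert it is an assumed canonical value.

FORCING_PER_DOUBLING = 3.7                # W/m2, assumed
SENSITIVITY = 2.7 / FORCING_PER_DOUBLING  # ~0.73 C per W/m2

def committed_warming(imbalance_wm2):
    return SENSITIVITY * imbalance_wm2

low = committed_warming(0.85 - 0.15)    # lower end of the imbalance estimate
high = committed_warming(0.85 + 0.15)   # upper end
```

Running the 0.85 +/- 0.15 W/m2 range through this conversion recovers the ~0.5-0.7 C quoted above.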

60 Responses to “Planetary energy imbalance?”

I have been reading William Kinninmonth’s article called “Climate Change – A Natural Hazard”. In it he tries to imply that climate researchers are not taking into account ocean warming and the effect on the Earth’s radiation balance.

Thank you for this summary of what the peer reviewed work is saying. Once again please never give up this site. As a non-scientist trying to convince others of the possible consequences of Global Warming I use Real Climate as a standard reference.

As an educator in a community college teaching Conservation of Natural Resources, I cover a comprehensive list of topics in the course including Climate Change. I would like to make the point that many people do not understand science, do not realize that their activities impact the Earth’s ecological sustainability, and are fine tuned to mainstream media. An issue that will have an impact on climate change is creeping up quietly and touted as alleviating our dependence on fossil fuels, ameliorating climate change, and creating jobs. It is the planned expansion of ethanol production from corn. This is a solution looking for a problem and contrary to fulfilling the expectations blatantly expressed by the industry and our government, will increase dependence on fossil fuel, negatively affect climate change, exacerbate economic hardship for small farmers, increase the planting of genetically modified corn, and bankrupt the natural resources we need to sustain life.

Since “ordinary” people make up the masses in our population, I would like to know how to make “ordinary” people understand that we breathe the same air, drink the same water and are all connected, and that the uninformed choices we make are escalating climate change and destroying the Earth’s systems that support all life. The media and culture of the 21st century Americans are more focused on an assortment of activities that rarely include science. The people I am describing are not necessarily ill intentioned, but they represent the uninformed masses who get their “knowledge” from TV and other mainstream media. In order to create a change in perception, these people need to be made aware. The people, not the politicians, will ultimately make the changes necessary to effectively bring about an ecological balance and global sustainability. I have excellent success in generating student awareness, and understand that as they become aware, they intentionally communicate their concerns with others, but this is insufficient in reaching the mainstream population. I believe from experience that when people understand the consequences of their activities, they change those activities. Is there anyone who would know how to reach these people so they too would be informed? Crichton has pandered to the people long enough; there has to be a better way for informed people to create a legacy for future generations that will allow health, liberty and the pursuit of happiness, and a sustainable ecological world. A commitment needs to be made, but how?

Jean, that’s a good post. There are, in fact, people who know how to reach these people so they too would be informed. They are the folks that are the reason this site was created (too bad the ‘informed’ part is not what you intend). They communicate via emotion to stimulate familiar things in people. Trouble is, things like humanist perspectives and living in boxes rather than on the land has taken away much of our familiarity with the natural world.

Science hasn’t done a very good job at communicating its findings in an understandable way. Activist groups have tried to communicate via emotion but, IMHO, have instead created crisis fatigue. Our science curricula should include communications courses so future scientists can either share their work with the public, or give their work to someone who can effectively communicate the implications to the public (and decision-makers).

This site is a good bridge to communication with the public (although I doubt Joe or Jane six-pack reads it) and I hope some of the communication bugs are being worked out here.

Does this mean the IPCC projections for 2100 are more valid, or does it mean the upper end & perhaps lower end projections have to be ratcheted up a bit? Or am I confusing apples with oranges again?

[Response: Well, the projections for 2100 are uncertain for two separate reasons: primarily, uncertainty in economic factors (growth, technology, politics), and only secondarily, model errors. As modellers we are trying to reduce the second kind of error, but there is little we can do about the first. – gavin]

Re: comment #2. I’ve thought about Crichton’s work and have come to the conclusion that Crichton was simply writing a melodramatic thriller which uses GW as a backdrop and that he doesn’t much believe his own premise. After all, he did provide footnotes to work that contradicted the work’s allegations. Crichton may be doing a grave disservice, but he isn’t an atmospheric scientist any more than Mary Shelley was an anatomist. When I read ghost stories when I was a kid, many of them made reference to non-existent newspaper reports. Writers take verisimilitude where they can get it.

which suggests that the model errors are much larger than the 0.85 W/m2 that is claimed to be predicted.

Vish

[Response: There are two different things here. One is the error in any particular flux (which can be large), and the second is the error in the overall balance (which is very small). To see why these things are different, think of a lake which is fed by a number of streams of uncertain flow, and drains into a river. If the lake level is steady, I know that the inputs are balancing the outputs, even if I don’t know their exact magnitude. If I now observe the lake level change, I can calculate very exactly the net imbalance regardless of the error in my estimate of the individual streamflows. -gavin]
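For the curious, the lake analogy can be made concrete in a few lines of code (all numbers are arbitrary illustrative units, not real streamflows):

```python
import random

# The lake analogy in code: individual flux estimates can be very noisy,
# but a storage (level) measurement pins down the net imbalance exactly.
# All numbers are arbitrary illustrative units.
random.seed(0)

true_inflow, true_outflow = 10.0, 9.4          # true net imbalance = +0.6

# Noisy estimates of each stream (large individual uncertainty):
inflow_est = true_inflow + random.gauss(0.0, 2.0)
outflow_est = true_outflow + random.gauss(0.0, 2.0)
net_from_fluxes = inflow_est - outflow_est     # can be off by several units

# The observed level change gives the net directly, independent of those errors:
net_from_level = true_inflow - true_outflow    # = 0.6 exactly
```

Differencing two noisy flux estimates compounds their errors; measuring the storage change sidesteps them entirely, which is why ocean heat content is such a powerful constraint.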

“We are likely seeing a decadal fluctuation here. Right now we are looking at what might lead to such a fluctuation,” says Chen. He believes that this is a natural climate anomaly much like El Niño, La Niña, or the North Atlantic Oscillation. It is a natural part of the rhythms of the Earth’s climate system. But unlike these other anomalies, the Hadley-Walker cell fluctuates over the course of decades instead of years. Chen feels that this phenomenon has no direct relation to global warming or any other hypothesis related to climate change, including the Iris Hypothesis. The Iris Hypothesis states that as sea surface temperature increases due to climate change, the increase will alter the extent of certain types of overlying clouds so that the excess heat is allowed to vent through the top of the atmosphere. Though the Iris Hypothesis may seem to jibe with what has occurred over the past 15 years in the tropics, Chen says that the thermal radiation leaving the top of the atmosphere has increased much too rapidly. Evidence has also shown that the clouds above the tropics are not changing in the ways predicted by the Iris Hypothesis.

“Continuation of the ocean temperature and altimetry measurements is needed
to confirm that the energy imbalance is not a fluctuation…”

If it were a fluctuation, could that mean the imbalance is just a short term thing that could reverse in a few years?

If so, does that invalidate the following claim just posted?:

“Firstly, as surface temperatures and the ocean heat content are rising together, it almost certainly rules out intrinsic variability of the climate system as a major cause for the recent warming”

[Response: The longer the record, the more sure we will be that what is seen is more than a fluctuation. However, given the match of the data and models using the most up-to-date forcings, we are reasonably confident that it is not a fluctuation. There is always a chance that nature is conspiring against us, and so continued monitoring is essential to strengthen the conclusions. -gavin]

Just how accurate are the temperature measurements made in the ocean? Working through the numbers, we’re only talking a few hundredths of a degree a year of temperature change. It looks like your reference 19, J.K. Willis, D. Roemmich, B. Cornuelle, J. Geophys. Res., 109, C12036, doi:10.1029/2003JC002260 (2004), would be the one with the data. Is it available on-line?

[Response: Try here. In general, ocean temperature measurements are very high quality and routinely reported to the third decimal place. The bigger problem is spatial coverage, but the addition of integrated constraints from the altimeter data goes some way to correcting for that. -gavin]

Gavin, I agree completely with the standard picture that you describe, but I don’t agree with the claim that “… as surface temperatures and the ocean heat content are rising together, it almost certainly rules out intrinsic variability of the climate system as a major cause for the recent warming”. Suppose that there has been a multi-century increase in the poleward heat transport in the oceans due to internal variability, which warms the poles, reduces ice extent and albedos, and thereby warms the planet. The land surface and atmosphere go along for the ride, having little heat capacity. There is no evidence for anything of the sort, but I don’t see any logical inconsistency — we need to be careful not to claim too much, thereby creating an opportunity for a critique that is besides the point. The consensus picture that you paint is convincing because the estimated forcings, the models, and the observations are all consistent, not because of an argument that internal variability has to cool one part of the system in order to warm another.

[Response: Isaac, I agree with your general point that a redistribution of heat in the system can alter feedbacks in a way that could affect the energy balance. The obvious example is collapse in the North Atlantic MOC, which leads to more sea ice etc. However, the impact on global (as opposed to local) temperatures of even large variations in the overturning is very small. Thus it doesn’t appear to be much of a practical effect. For the case at hand though, the distribution of heat anomalies in the ocean makes a redistribution+feedback effect very unlikely. -gavin]

What are the model assumptions about radiative forcing from clouds? Is Earth’s cloud cover increasing, decreasing? Are there any new observations about this and the types of clouds involved?

[Response: The model’s cloud radiative forcing is a diagnostic of the model, rather than an assumption that is built in. Clouds are however very important, and we (and other groups) are trying hard to analyse what they are doing. Comparison with data (such as ISCCP) is problematic because the records are noisy, short, and may still have systematic problems that affect the trends. As and when those analyses are done, we will report on them here. -gavin]

I’ll have to go to the primary references — but this raises a few questions for me:

Is the record detailed enough to understand 3-dimensional changes in sea temperature or is it mostly based on sea surface altimetry? In particular is there an observed or predicted change in the temperature or volume of mode waters (or rates of deep water formation)?

Can you use whole planet radiation budgets to calculate the planet’s net entropy gain? Is there a way to budget how much of that entropy gain is due to life?

since the current unrealised warming “in the pipeline” is related to the net imbalance, 0.85 +/- 0.15 W/m2 implies a further warming of around 0.5-0.7 C

I am interested as to when this ~0.6 C increase will become evident. I have had a quick look at the Hansen paper and in it he says something to the effect that the time lag is roughly proportional to the square of the sensitivity. So does this mean that if t is the time delay in years and s is the sensitivity, t can be expressed as

t = k.s^2

Dr Hansen also states that “time delay could be as short as a decade if climate sensitivity is as small as 1/4 deg C per W/m2”. Substituting this (i.e. t=10; s=0.25) into the above equation yields k=160 (very rough, I know).

Also in the paper the sensitivity resulting from the models appears to be about 2/3 deg C per W/m2, so if we maintain the current level of forcing indefinitely, this suggests the time delay before equilibrium is established is as follows

t=160 x 4/9

which is over 70 years – or have I got something wrong?

Also could you tell me if the increase in temperature towards equilibrium is expected to be linear. This would mean a steady increase of <0.1 deg/decade (0.6 in 70 years) if my interpretation is correct.

[Response: This is all discussed more thoroughly in Hansen et al (1985) (no online version unfortunately). The basic idea is that since feedbacks are a large part of the response, lags due to ocean thermal inertia slow down the full feedback response. For the current imbalance, an idea of how long it will take for the warming to come down the pipe can be seen in the figures in the Wigley and Meehl et al paper referenced above. In the GISS ‘committed climate change’ simulations, most of the additional warming has occurred by 2050, but there remains a slow increase for decades afterwards. -gavin]
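As a footnote, the commenter’s arithmetic can be checked in a couple of lines (the quadratic lag relation t = k·s² is the commenter’s reading of the paper, not an exact result from it):

```python
# Checking the arithmetic in the comment above: lag t ~ k * s^2, calibrated
# against Hansen's statement that t ~ 10 yr when s = 0.25 C per W/m2.
# The quadratic form itself is the commenter's reading of the paper.

t_ref, s_ref = 10.0, 0.25
k = t_ref / s_ref ** 2       # = 160

s_model = 2.0 / 3.0          # ~0.67 C per W/m2, the model's sensitivity
t_model = k * s_model ** 2   # = 160 * 4/9, just over 70 years
```

So the "over 70 years" figure does follow from the commenter’s own assumptions; whether the quadratic scaling holds that far from the calibration point is a separate question.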

Gavin,
Following up on my comment above, my concern is with the “smoking gun” language and whether it might be counterproductive. The idea is that there is a small set of observations and a theoretical argument that goes along with it that is essentially irrefutable, even if one is unfamiliar with or rejects the rest of the edifice of global warming science. I am not sure that this is a good tack to take, since the argument, in fact, is not irrefutable, and probably is not very convincing to someone who has no intuition for internal climate variability and how it manifests itself. (I wasn’t supporting the idea of the dominance of internal variability, but just addressing the logical consistency of the argument.) I think we need to emphasize the consistency of the whole picture, and not place too much emphasis on one line of argument, which is what the “smoking gun” language tends to encourage.

Mickey Crichton, Chief Climatologist for the New Confederate States Department of Coastal Defense, gazes towards the noon ferry arriving from the port city of Lakeland, fifty miles to the southwest. Although this is not an official visit, Disney has given Mickey permission to examine the feverish efforts to strengthen the eastern and southern portions of the seawall.

Hurricane Johnnie, now a devastating Category 5, is projected to arrive in the next four to five days. Although few people seriously expect it to breach the 120 foot high seawall surrounding this privately owned island city, it could seriously tax the pumping system that keeps the tourist mecca dry.

Since purchasing it from the NCS government right after the revolution, Disney has spent trillions on its defense system against the rising ocean, but now…

This prescient short story, Disney, the Fall of an Empire by John M. Crichton III, written in 2035 by the young great nephew of the famous novelist, finally galvanized the public and, with it the Federal Government, into action. Sadly, it was too late. Although scientists and foreign governments had been urging the US to cut back CO2 emissions for decades, the legacy of the second Bush administration and the influence of Crichton III’s great uncle had been too strong. The public had been in denial and the massive disinformation campaign by the fossil fuel industries had stymied any legislative effort to enforce cutbacks.

Now, although the draconian laws passed in 2037 have cut CO2 emissions in half, the US is in a state of near collapse. Federal money diverted to the emissions effort has wiped out Social Security. Most other unneeded programs such as environmental, transportation, health, education, housing etc. have been eliminated or cut to the bone. The Defense budget has tripled due to threats of invasion by the EU and the Asian Alliance.

Sea levels continue to rise at a rate never envisioned by climatologists and other scientists in the late 1990s and early 2000s due to the unforeseen effects of…

Thanks for the correction. It seems to me that deducing heat content from altimetry is less accurate than from direct measurements, as altimetry is influenced by wind speed and barometric pressure too…

IMHO, the increase in speed of the Hadley/Walker cells may be the result of higher ocean temperatures (or temperature differences over long distances), not the origin (or only to a lesser extent, as fewer clouds lead to some extra insolation, thus warming). Some indication for that can be found in several studies, one of them at http://hydro.jpl.nasa.gov/sst/sst.html
This is not the same as the Iris hypotheses, as that is on a much smaller scale, but over the whole tropics/subtropics.

The loss of energy to space, measured by Wielicki e.a. over the past 15 years, is of nearly the same magnitude as the theoretical increase in the greenhouse effect from the extra greenhouse gases since the beginning of the industrial revolution. That is a large impact. It remains to be seen whether the current 0.85 +/- 0.15 W/m2 imbalance found by Hansen e.a. is not simply a part of the natural variability found by Wielicki (if it is natural at all)…

Thank you, Gavin, for the link. I’ve sort of skimmed it so far. There’s a lot more to be looked at. But I’m afraid I’m unconvinced that the .85 +-.15 w/m2 figure is very believable. The problem is figure 6 which shows the spatial extent of the various storage values for each year. They show both high resolution and high variability over short distances. This wouldn’t seem to me to be a sign of a highly accurate system. Further, compare, say, 1997 and 1998 where we switched to the super-El Nino situation. Very striking, but also quite a problem. The same areas of the ocean went from something like +80 w/m2 to -80 w/m2 (or vice versa) in less than a year. And we’re not talking single pixels, but large sections of the ocean. With such gigantic variability common, expecting a decadal global average to be accurate within about 1% of the annual variability is asking a lot.

[Response: Well, the 0.85 number is the mean from the models, and so is not directly related to the uncertainty in the data. However, it’s easy to assess the error in the global mean ocean heat content based on the measurement error and spatial variability, and that is done in the Willis et al paper. Despite the variability, the global means are relatively well defined. The key is that the tropical Pacific is actually very well sampled (through the TOGA-COARE array) and the patterns you see in the annual means change slowly enough for the heat content anomalies to be well characterised. As Willis et al states, the errors due to spatial coverage (rather than variability) are more important. Given that, it’s clear that the longer the record, the more confident we will be. -gavin]
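The statistical point can be illustrated with a toy standard-error calculation (both numbers below are illustrative assumptions, not values from Willis et al):

```python
import math

# Why large local variability need not preclude a well-defined global mean:
# the standard error of a mean shrinks with the number of effectively
# independent samples. Both numbers are illustrative assumptions, not
# values from Willis et al.

sigma_regional = 20.0   # W/m2, assumed regional heat-storage variability
n_independent = 500     # assumed effective independent spatial samples per year

sem_one_year = sigma_regional / math.sqrt(n_independent)  # well under 1 W/m2
sem_decade = sem_one_year / math.sqrt(10)                 # smaller still
```

Even with local swings of tens of W/m2, averaging over many quasi-independent regions and years can bring the uncertainty on the global mean down to a few tenths of a W/m2, provided the spatial coverage is adequate, which is exactly the caveat the response flags.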

Thanks for another enlightening article. When looking at the total forcing in the second part of the first graph, which summarises forcings over the 1850-2000 period, there is at first sight a long-term rising trend, including in the 1940-70 period (well, there is a significant fall in the sixties though, due to solar irradiance). I am thus looking for an explanation of the 1940-1970 “cooling” (strong quotes). The global temperature fall in the sixties can be linked to solar irradiance, I assume. But is there a good and intuitive explanation for the 1940-1960 period (where it looks more like a global temperature stabilisation than a cooling), or is it just the result of the complex interaction of the climate system?

In comment 20, I mentioned solar irradiance as falling in the sixties and possibly explaining the low global temperature in the sixties. I actually meant stratospheric aerosols. I got confused by looking at a black and white printout of the figure.

OK, I see what the .85 is then, though I’m not sure it’s meaningful in that case. It’s very close to what the article you linked me to has, .85 +-.12 BTW.

Looking further at that article, I examined figure 6. It shows a graph of the global heat storage over time. I did a quick analysis of that, estimating the value by eyeball for each year (I ended up with 7.0 rather than 8.5 as I suppose it should be, giving you an idea of how my aging eyes are doing). I then put those figures in a spreadsheet, calculated the trend (y = .1213x), subtracted again by eye to get the difference (total off by .3) and then calculated the RMS of the differences, which amounts to, ironically, .85. Again, it doesn’t look real robust, in terms of the trend being real as opposed to a fluctuation. I’d have been real careful about letting words like “smoking gun” be associated with your findings.

BTW, since the data ends in 2002, have you gotten any preliminary data for 2003 and 2004? This would certainly help since the ending trend seemed to be down.

While rereading the paper on ocean heat content changes by Levitus 2005 at http://www.nodc.noaa.gov/OC5/PDF/PAPERS/grlheat05.pdf I noticed a remarkable sentence:
“However, the large decrease in ocean heat content starting around 1980 suggests that internal variability of the Earth system significantly affects Earth’s heat balance on decadal time-scales.”

Thus it may be that the 1993-2003 period of ocean warming used by Hansen e.a. is an entirely natural warming.

My question: The GISS climate model follows the 1993-2003 trend quite well. But does it follow the 1980-1990 cooling trend equally well?

I read in a school textbook that it takes as much energy to convert 1 g of ice at 0 C to 1 g of water at 0 C as it does to heat that gram of water from 0 C to 80 C. With recent talk of the extra heat the Earth’s system is absorbing ‘hiding’ here and there, is there a connection?
The energy taken from the system to cause the observed glacial melting worldwide, thinning and shrinkage of Arctic sea ice, melting collapsed Antarctic ice shelves etc. must be ‘hidden’ in the meltwater, unrecordable by thermometer. Does this amount of energy rate a mention in the Earth’s energy equation, or is it too insignificant to bother with?

[Response: The latent heat of melting for ice is indeed large, but given the amount of heat we are talking about and the (relatively) small amount of ice melt seen so far, it is negligible. Levitus et al (2001) did the calculation and the ocean heat content is by far the biggest term. -gavin]
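A rough version of that comparison can be done on one screen (the annual ice-melt volume here is an illustrative assumption, not a measured value; Levitus et al (2001) do the real accounting):

```python
# Order-of-magnitude check: latent heat absorbed by observed ice melt versus
# the ocean heat uptake implied by a 0.6 W/m2 global imbalance. The annual
# melt volume below is an illustrative assumption, not a measured value.

LATENT_HEAT_FUSION = 3.34e5  # J/kg for ice
ICE_DENSITY = 917.0          # kg/m^3
KM3 = 1e9                    # m^3 per km^3
EARTH_AREA = 5.1e14          # m^2
SECONDS_PER_YEAR = 3.156e7

annual_melt_km3 = 400.0      # assumed global ice loss, km^3/yr
melt_energy = annual_melt_km3 * KM3 * ICE_DENSITY * LATENT_HEAT_FUSION  # J/yr

# Ocean heat uptake implied by a 0.6 W/m2 imbalance over the whole globe:
ocean_uptake = 0.6 * EARTH_AREA * SECONDS_PER_YEAR  # J/yr, ~1e22

ratio = melt_energy / ocean_uptake  # of order a percent, hence negligible
```

Even allowing the assumed melt rate to be off by a factor of a few, the latent heat term stays a small correction to the ocean heat content term, which is the point of the response.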

It seems to me that “Earth’s Energy Imbalance” paper is not strictly a science paper; there are also policy warnings e.g. “this example [~0.6 C warming in the pipeline] … implies the need for near-term anticipatory actions”. This deserves some more comment since the situation could be even worse than the paper states (and this comment is not peer-reviewed!).

It is my understanding of the paleoclimate that only about half of the rise (during interglacials to ~280ppmv) and fall (during glacial maximums to ~190ppmv) of CO2 levels can be accounted for by climate models. The rest presumably comes from unknown feedbacks in the biosphere. The paper (and presumably the climate model used) does not deal with this issue but notes the “sawtooth” pattern. Also, it could be argued that the initial stages of ice sheet disintegration have already been detected (WAIS, Greenland) but admittedly no one knows what this looks like. Recent accelerated rates of ice sheet melting and our current ignorance is cause for alarm given that “the destabilizing issue of comparable ocean and ice sheet response times is apparent”. The inferred positive eustatic GLSR from the ice sheets is a recent result.

But the real problem as I see it is that the slow response times of the oceans and ice sheets create slow human response times with respect to policy to stabilize GHG levels. So, an equilibrium response is a convenient fiction which enables the science but is not a goal of policy makers and so cannot exist in the real world. Also, the GISS model with its 2.7 C sensitivity is on the low end. The Kerr (2004) Science “Three Degrees of Consensus” summary (cited in the paper but not online) gives the current range across models for climate sensitivity as 2.5 C to 4.0 C.

So, there is ample reason to worry. This paper dovetails nicely with Elizabeth Kolbert’s 3-part series in the New Yorker on climate change. Try here. I’ll let you know how it ends. This is the last sentence of part 3:

It may seem impossible to imagine that a technologically advanced society could choose, in essence, to destroy itself, but that is what we are now in the process of doing.

As a lay person to the climate sciences, the debate over anthropogenic global warming is very confusing since there appear to be so many contradicting opinions. For instance, a Canadian television program about anthropogenic global warming seems to say almost the exact opposite of what is being stated here and in some of the recently published articles in Science. Any opinion on this particular program and the bona fides of its experts?

The “friends of science” website you reference is a propaganda site designed to sow the kind of confusion you feel. Most such sites are sponsored by the fossil fuel industry. Here’s an introduction to such sites at Mother Jones called “As The World Burns”.

“Friends of Science” are an industry-funded group affiliated with the Fraser Institute in Canada, which receives funding from ExxonMobil to disinform and confuse (to obfuscate) the public. They are certainly not what their name says. (Because of this, I like to call them “Foes of Science.”)

As for the “Errr.. Mother Jones is not a propaganda site? It’s certainly far left.” Mother Jones is a centre-left newsmagazine, which features excellent journalism and analysis. Two of the three (I don’t know who Chris Mooney is) who wrote in the “As The World Burns” issue are very good. I have met Ross Gelbspan and he is a brilliant journalist (hence, his Pulitzer award). Bill McKibben is also brilliant. I would definitely recommend anyone read their work.

I could have added in my remarks in #26 that these new “global dimming” results point to the conclusion that negative forcing from sulfate aerosols masked the actual ongoing warming up to about 1990. I look forward to your post on this subject.

I find it interesting that a particular organization might be indicted for supposedly being funded by the fossil fuel industry when so many organizations and studies fueling the global warming alarmism are funded by governments and not surprisingly endorse MORE government (in the form of regulations, regulatory agencies, taxes, fees, etc…) as a supposed “solution” to the problem. Talk about conflict of interest!

For Stuart Hobbs: much of the information disputing climate change science is politically driven. There is scientific consensus about climate change (the earth is warming, this warming is mostly caused by humanity’s release of greenhouse gases, and this could be harmful). There is scientific consensus because the evidence is overwhelming. Look at the “just what is this consensus” Dec 22 post here on RealClimate. If the consensus is accepted, the next step is to take action. Action would mean enacting laws and regulations limiting the emissions of greenhouse gases. These regulations would probably be extensive and expensive.

A group of conservatives who favor small government and industries who could lose financially are adamantly opposed to these regulations and are the most vocal opponents. They are hostile to environmental regulation generally and climate change regulation especially. Current environmental regulations were passed after environmental groups ran political campaigns that reflected the public’s call for action addressing environmental problems. Conservatives and industry were stung by these political defeats. Noting the success of environmentalists, conservatives and industry founded their own political advocacy groups to oppose environmental regulation.

Being politically active for industry, environmentalists or any other group is not a bad thing. Any political advocacy group has an agenda and when evaluating their messages about climate change science it is important to examine how they advance this agenda. Every political group uses spin to try to persuade the public, but some of the groups that represent conservatives and industry use what can be called extreme tactics in the climate change science debate.

These groups, including the Fraser Institute and Friends of Science, are waging a public relations campaign against environmental regulation. Because climate change science, when objectively examined, supports environmental regulation, science, scientific institutions and scientists have also been subject to criticism. The goal is to publicly cast doubt on the science so the public will not support climate change legislation. See http://www.luntzspeak.com/graphics/NewYorkTimes.NewsStory.pdf
Without public support it is unlikely any climate change regulation will be enacted. Even conservative politicians who support climate change legislation have been targeted.

The claims about climate change science made by these conservative/industry groups have questionable scientific value. Most of these claims are political attacks that promote a partisan agenda and are not objective information about climate change science.
Concern about policies based on science is understandable and can be used to create better policies, but in many cases the concern about policies is prompting some to misrepresent the facts about climate change science.

Re comment #34: Consensus does not mean unanimity. You can find a few PhD biologists who dispute the theory of evolution too but that doesn’t mean there is a lack of consensus on the issue in the peer-reviewed scientific literature.

If you look around the “Friends of Science” site, you will see just how pathetic it is. Their motto probably should be, “No argument is too ridiculous for us.” They even try to sow doubt over whether the current rise in CO2 levels is primarily the result of humans!

I can’t tell from Table 1 what share of GHGs is contributed by volcanic greenhouse gases; is this an unknown part of the 2.75 W/m^2 figure? Or are volcanic GHGs known to be negligible? (I’m obviously a newcomer to these questions.)

Also, are undersea eruptions likely to contribute significant heat to ocean temperatures? If so, perhaps the argument is still that your model matches observed temperatures well enough, so adding this input is unnecessary. But assuming a number could be put to this heat source, would it draw down the other forcings proportionately, or would it all come out of GHGs?

I have no idea how frequent undersea eruptions are, or how constant their rate is from one century to the next. I’m guessing you probably don’t either. But you raise a good point that if the heat release per century is constant, it can’t account for rising temperatures. Maybe one can reasonably infer fairly constant undersea volcanism from on-land volcanism. I’m hoping I’ll find out here.

But the thing is, even if it’s constant, if there’s a lot of it I would think that might affect the man-made GHG argument. It seems like it would have to come at the expense of some other heat source for the model to work as well as it does. On the other hand, then it might take less GHG to cause the observed temperature increase. That might actually be good news, in that reducing GHG would have a correspondingly big effect too.

Re #34: there is only one climatologist in the video, and that person didn’t bother to correct the copy that allowed the non-climatologist to state that the UHI is contributing to the surface temperature rise. That very simple fact should raise alarm bells and cause one to scrutinize other statements in the video.

Do I understand this Ocean data correctly?
It was said above that the ocean is warming just like the land (& air and ice sheets/glaciers),
that the heat in the ocean dwarfs that in the land and air,
that the warming is due to the net solar imbalance (solar in, less LW out; no mention of CO2)?

[Response: The NYT quote from Wielicki is incomplete, he meant that the net imbalance measured by CERES (SW and LW) is consistent with the ocean heat storage data. I quote from an email posted on the climatesceptics group :

The times reporter got the sense of it, but not all the specifics. your confusion is about longwave flux forcing of CO2 vs shortwave solar albedo changes. my comment to the times was that if you put the shortwave reflected and thermal infrared emitted energy terms together and look at them over the last decade: for 1992 to 2004: you get variations in net radiation into the Earth that vary up and down with peak variations of about 1 W/m^2 but that have fluctuations very consistent with total ocean heat storage: as they should.

The net planetary imbalance has very little to do with the sun. -gavin]
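As a rough aside, the ~1 W/m^2 fluctuations mentioned in the quote can be related to ocean heat content with back-of-envelope arithmetic. Here is a toy sketch (all constants are approximate round numbers, not figures from the paper):

```python
# Back-of-envelope sketch: how a sustained planetary radiative imbalance
# translates into stored heat (which ends up mostly in the ocean).

EARTH_SURFACE_AREA = 5.1e14  # m^2, approximate
SECONDS_PER_YEAR = 3.15e7    # approximate

def absorbed_energy_joules(imbalance_w_m2, years):
    """Total energy absorbed by the planet for a given imbalance in W/m^2."""
    return imbalance_w_m2 * EARTH_SURFACE_AREA * SECONDS_PER_YEAR * years

# A ~0.85 W/m^2 imbalance sustained for a decade:
E = absorbed_energy_joules(0.85, 10)
print(f"{E:.2e} J")  # on the order of 1e23 J
```

The point is scale: a fraction of a watt per square meter, integrated over the whole planet for years, is an enormous amount of energy, and only the ocean has the heat capacity to absorb it.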

THE SUN IS CAUSING THE GLOBAL WARMING? and we can’t do a damn thing about it!
(makes sense to me since it is also the sun that supplies the heat to drive the greenhouse effect!!)

SO no matter what we do to CO2 (by Kyoto) we are at the mercy of the sun and the imaginary “Pipeline”. and when the sun decides to cool off and reverse its 300+ year warming trend (see IPCC solar irradiance history http://www.grida.no/climate/ipcc_tar/wg1/245.htm )
we will go into global cooling like we did from ~1300 to 1700? and 1940-1970?
It will cool off someday – it always has in the past, just look at the sawtooth ice core temperature data.

Which by the way will immediately eliminate the energy in the imaginary “Pipeline”- seems as if this happens just after noon every single day!

The next conclusion is that any prediction of (air) temperatures for the next 50 years is totally dependent on the assumption made for what the solar irradiance levels are? Which according to this study is +/- a factor of TWO? (total incredulity!!)
Excuse the dumb question but why are we wasting our research time on CO2 emissions rather than the cause of changes in solar irradiance?

BUT that if we continue to add CO2 to the air, the air has the added heat capacity to get warmer, IF and ONLY IF driven by the sun, but rapidly come to equilibrium with the ocean, by means of rain and the daily heating & condensation of the water vapor feedback mechanism.
BUT already the air never gets to maximum heat capacity anyway.

Other than the fact that CO2 is necessary to human survival (we breathe it out after processing hydrocarbon food) and for 98% of the energy that has allowed us to live longer and improve the standard of living over the last 300 years, just why exactly do we even care how much CO2 is in the air, especially if the temp of the ocean, which is dictated by the sun, will dictate the air temp? (I observe this daily in chilly San Francisco!)

All I can say is that in the northern US at least the temperatures in the outlying suburbs were always lower than those at the airport and higher than those in the countryside. And Columbus, OH was not that large a city in the 50s and 60s that it probably has had much of an adjustment made to its temperatures.

Chris Mooney is a fine journalist who covers science (and the politics of science)
Besides Mother Jones he’s written for the American Prospect, Skeptical Inquirer Online, The Boston Globe, etc. He also participates in various panels across the country.

1. Rural is rather loosely defined and often means fairly substantial cities.

2. A primary way UHI is going to affect temperatures is via reduced radiation to space from a horizon effect (i.e. close to buildings, etc., a goodly # of the sky will be blocked and there will be less radiation through the IR holes directly to space). This will not be well correlated with wind conditions or necessarily with lights visible from space.

[Response:
You are correct that blocking sky view is a major portion of the UHI. But you are wrong to say that effects from it wouldn’t correlate to wind speed – William]

3. As I said, it certainly doesn’t jibe with my experience growing up. A typical weather report would say something like “Lows about 35 in the city, chances of frost in the suburbs, heavy frosts in outlying areas.” Surely you’ve noticed the same thing?

[Response:You’re missing the point, which is that the UHI is accepted, of course, but the point at issue is whether it affects temperature *trends*, which a constant bias doesn’t. The research indicates a small-to-negligible effect on the global temperature record: see post 43 here – William]

A major problem in discussing UHI is that people are not aware of how surface temperature records are analyzed. Each station records temperatures. The difference between these temperatures and the average temperature over some arbitrary period AT THAT STATION is called a temperature anomaly. The data entered into the surface temperature database is the ANOMALY, not the temperature. Therefore the issue is not whether on average it is hotter in the middle of the city or the suburb or the airport, but rather whether the environment of the weather station has changed significantly in such a way as to affect the anomaly.

It is interesting to note that population density in the center of large cities and small towns has decreased significantly in many places over the 20th century.

A second point is that the surface temperature anomaly is calculated by dividing the earth’s surface into equal-area units. The anomaly in each area is found by averaging the anomalies from all the stations in that area, and finally the anomalies from all the equal areas are averaged.

(BTW, you have to carry out special procedures to compensate for movement of weather stations in the anomaly record, which is why looking at raw data can be very misleading if the station has moved or procedures were changed. The people who compile these records are quite aware of the difficulties.)

The above is my understanding of the situation. Continuing my tradition of making work for others, it would be excellent if there was an article on real climate from the people who actually compile these databases on their procedures.
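The anomaly idea described above can be illustrated in a few lines. This is a toy sketch (not the actual GISS or CRU procedure, and the station data are invented) showing why a constant local bias drops out of the anomaly record:

```python
import numpy as np

# Toy illustration: each station's record is converted to anomalies
# relative to that station's own baseline mean, so a constant offset
# (e.g. a steady urban heat island bias) cancels out; only *changes*
# in the station's environment affect the anomaly.

def station_anomalies(temps, baseline_slice):
    """Subtract the station's own baseline-period mean from its record."""
    temps = np.asarray(temps, dtype=float)
    return temps - temps[baseline_slice].mean()

# Two hypothetical stations: one rural, one with a constant +2 C urban bias.
rural = [14.0, 14.1, 14.3, 14.5]
urban = [16.0, 16.1, 16.3, 16.5]  # same trend, constant offset

a_rural = station_anomalies(rural, slice(0, 2))
a_urban = station_anomalies(urban, slice(0, 2))
print(np.allclose(a_rural, a_urban))  # True: the constant offset drops out
```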

I think you’re missing the point, William. Unless you’re claiming that any area where humans abide is going to have the same degree of UHI, you’re going to have to agree that there’s a curve relating population with amount of UHI. (Actually there are other factors of probable importance like degree of industrialization and energy use, but we can assume they’ve been allowed for.) Now the earth’s population has been increasing, and further the degree of urbanization has been increasing over the past century. Therefore, both the areas which need a UHI adjustment have increased and the amount of adjustment needed in a given area has been increasing.

You seem to be disagreeing with this, and I don’t see why or how.

[Response: No, I don’t have to agree that there is a curve relating UHI to population, because the real effect is far more complex. If the main effect is sky view (it often is) and your met station is in a park, then increasing the population may well make no difference to the UHI measured by that station. Check out the Peterson ppt, esp. slide 6 and the last slide (does that look like a curve?). In fact, if you’re interested in UHI, the ppt is worth close study. – William]

“After the records for the same location are combined into a single time series, the resulting data set is used to estimate regional temperature change on a grid with 2°x2° resolution. Stations located within 1200 km of the grid point are employed with a weight that decreases linearly to zero at the distance 1200 km (HL87). We employ all stations for which the length of the combined records is at least 20 years; there is no requirement that an individual contributing station have any data within our 1951-1980 reference period. As a final step, after all station records within 1200 km of a given grid point have been averaged, we subtract the 1951-1980 mean temperature for the grid point to obtain the estimated temperature anomaly time series of that grid point. Although an anomaly is defined only for grid points with a defined 1951-1980 mean, because of the smoothing over 1200 km, most places with data have a defined 1951-1980 mean.”
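The linear distance weighting in the quoted passage can be sketched as follows (a toy version with made-up station values, using straight-line distances rather than the actual great-circle geometry):

```python
import numpy as np

# Sketch of the weighting described in the quoted GISS text: stations
# within 1200 km of a grid point contribute with a weight that falls
# linearly from 1 at the grid point to 0 at 1200 km.

def station_weight(distance_km, cutoff_km=1200.0):
    return max(0.0, 1.0 - distance_km / cutoff_km)

def gridpoint_anomaly(anomalies, distances_km):
    """Distance-weighted average of station anomalies at one grid point."""
    w = np.array([station_weight(d) for d in distances_km])
    if w.sum() == 0:
        return np.nan  # no stations within range
    return np.average(anomalies, weights=w)

# Hypothetical stations at 0, 600, and 1500 km with anomalies in C;
# the 1500 km station (anomaly 9.9) gets zero weight and is ignored.
print(gridpoint_anomaly([0.5, 0.3, 9.9], [0, 600, 1500]))  # ~0.433
```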

GISS does correct for urban effects (I should have RTFR, I guess); you can read the details at the link. They adjust the temperature anomalies of urban stations by comparison with nearby rural ones. This limits most of the influence of the urban areas to short-term trends. GISS’s bottom line being:

“Why does the urban influence on our global analysis seem to be so small, in view of the large urban warming that we find at certain locations (section 5)? Part of the reason is that urban stations are a small proportion of the total number of stations. Specifically, 55-60% of the stations are rural, about 20% are small town, and 20-25% are urban, with some temporal variation. In addition, local inhomogeneities are variable; some urban stations show little or no warming, or even a slight cooling, relative to rural neighbors. Such results can be a real systematic effect, e.g., cooling by planted vegetation or the movement of a thermometer away from the urban center, or a random effect of unforced regional variability and measurement errors. Another consideration is that even rural locations may contain some anthropogenic influence [Mitchell, 1953; Landsburg, 1981]. However, it is clear that the average urban influence on the meteorological station record is far smaller than the extreme urban effect found in certain urban centers. “

I’m not sure whether this is exactly the right place to ask, but I have a fairly simple question…

As the increasing levels of anthropogenic CO2 used for climate prediction are essentially predicated on the increase in economic activity world-wide and its effects, has the IPCC’s SRES model been adjusted in light of the criticisms made by Castles and Henderson in 2002/3 and subsequently presented at the IPCC TGCIA meeting in Amsterdam, Jan 2003?

[Response: The models can of course use any scenario that people think is interesting, and that is clearly not limited to the IPCC suggested ones. Remember though that there is a wide range of scenarios, and whether the C&H criticisms actually lead to a substantially different set of ‘marker’ scenarios (the ones the modellers actually use) is unknown at this point. -gavin]

[Response:Also worth noting that the C+H critique may well be quite wrong: Quiggin or a more direct response – William]

Please excuse this high school physics level question but:
Why is it not accurate to assume that the Earth is a passive object sitting in an incoming energy stream, where the earth temperature has to be dictated by a balance of the energy in and the energy out? Since the energy in is at the moment the increasing solar irradiance, would you not expect the earth, air, water and land to be increasing in temperature in order to reach a new energy balance?

[Response: Because i) solar irradiance is not increasing appreciably, and ii) the bigger effect is the reduction in outgoing longwave radiation by increasing GHG concentrations. -gavin]
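The energy-balance reasoning in the question and the response can be illustrated with the standard zero-dimensional textbook model (a toy, not a GCM; the emissivity values below are illustrative, not fitted):

```python
# Zero-dimensional energy balance: equilibrium temperature T satisfies
#   S0 * (1 - albedo) / 4 = eps * sigma * T^4
# Reducing the effective emissivity eps (more greenhouse gases trapping
# outgoing longwave radiation) warms the planet with the sun held fixed,
# which is point (ii) of the response above.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0      # solar constant, W/m^2
ALBEDO = 0.3     # planetary albedo, approximate

def equilibrium_temp(eps):
    absorbed = S0 * (1 - ALBEDO) / 4.0
    return (absorbed / (eps * SIGMA)) ** 0.25

t_before = equilibrium_temp(0.62)  # illustrative effective emissivity
t_after = equilibrium_temp(0.61)   # slightly more greenhouse trapping
print(t_before, t_after)  # t_after > t_before: same sun, warmer equilibrium
```

Until the surface has warmed enough to restore the balance, the planet radiates less than it absorbs, and that transient difference is exactly the planetary imbalance the paper measures.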