Cuckoo Science

Sometimes on Realclimate we discuss important scientific uncertainties, and sometimes we try and clarify some subtle point or context, but at other times, we have a little fun in pointing out some of the absurdities that occasionally pass for serious ‘science’ on the web and in the media. These pieces look scientific to the layperson (they have equations! references to 19th Century physicists!), but like cuckoo eggs in a nest, they are only designed to look real enough to fool onlookers and crowd out the real science. A cursory glance from anyone knowledgeable is usually enough to see that concepts are being mangled, logic is being thrown to the winds, and completely unjustified conclusions are being drawn – but the tricks being used are sometimes a little subtle.

Two pieces that have recently drawn some attention fit this mold exactly. One by Christopher Monckton (a viscount, no less, with obviously too much time on his hands), which comes complete with supplementary ‘calculations’ using his own ‘M’ model of climate, and one on JunkScience.com (‘What Watt is what’). Junk Science is a front end for Steve Milloy, a long-time tobacco, drug and oil industry lobbyist who has been a reliable source for these ‘cuckoo science’ pieces for years. Curiously enough, both pieces use some of the same sleight-of-hand to fool the unwary (coincidence?).

But never fear, RealClimate is here!

The two pieces both spend a lot of time discussing climate sensitivity but since they don’t clearly say so upfront, it might not at first be obvious. (This is possibly because if you google the words ‘climate sensitivity’ you get very sensible discussions of the concept from Wikipedia, ourselves and the National Academies). We have often made the case here that equilibrium climate sensitivity is most likely to be around 0.75 +/- 0.25 C/(W/m2) (corresponding to about a 3°C rise for a doubling of CO2).

Both these pieces instead purport to show using ‘common sense’ arguments that climate sensitivity must be small (more like 0.2 °C/(W/m2), or less than 1°C for 2xCO2). Our previous posts should be enough to demonstrate that this can’t be correct, but it’s worth seeing how they arithmetically manage to get these answers. To save you having to wade through it all, I’ll give you the answer now: the clue is in the units of climate sensitivity – °C/(W/m2). Any temperature change (in °C) divided by any energy flux (in W/m2) will have the same unit and thus can be ‘compared’. But unless you understand how radiative forcing is defined (it’s actually quite specific), and why it’s a useful diagnostic, these similar-seeming values could be confusing. Which is presumably the point.

Readers need to be aware of at least two basic things. First off, an idealised ‘black body’ (which gives off radiation in a very uniform and predictable way as a function of temperature – encapsulated in the Stefan-Boltzmann equation) has a basic sensitivity (at Earth’s radiating temperature) of about 0.27 °C/(W/m2). That is, a change in radiative forcing of about 4 W/m2 would give around 1°C warming. The second thing to know is that the Earth is not a black body! On the real planet, there are multitudes of feedbacks that affect other greenhouse components (ice albedo, water vapour, clouds etc.) and so the true issue for climate sensitivity is what these feedbacks amount to.
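As a quick check (a minimal sketch of my own, not from the original post, using the standard Stefan-Boltzmann constant and Earth's ~255 K effective radiating temperature), the no-feedback sensitivity quoted above falls straight out of differentiating F = sigma*T^4:

```python
# No-feedback ("black body") climate sensitivity from the Stefan-Boltzmann law.
# F = sigma * T^4, so dT/dF = 1 / (4 * sigma * T^3).
SIGMA = 5.6704e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0       # Earth's effective radiating temperature, K

sensitivity = 1.0 / (4 * SIGMA * T_EFF ** 3)   # deg C per (W/m2)
print(round(sensitivity, 2))                   # ~0.27: so ~4 W/m2 gives ~1 deg C
```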

So here’s the first trick. Ignore all the feedbacks – then you will obviously get to a number that is close to the ‘black body’ calculation. Duh! Any calculation that lumps together water vapour and CO2 is effectively doing this (and if anyone is in any doubt about whether water vapour is a forcing or a feedback, I’d refer them to this older post).

As we explain in our glossary item, climatologists use the concept of radiative forcing and climate sensitivity because it provides a very robust predictive tool for knowing what model results will be, given a change of forcing. The climate sensitivity is an output of complex models (it is not decided ahead of time) and it doesn’t help as much with the details of the response (i.e. regional patterns or changes in variance), but it’s still quite useful for many broad brush responses. Empirically, we know that for a particular model, once you know its climate sensitivity you can easily predict how much it will warm or cool if you change one of the forcings (like CO2 or solar). We also know that the best definition of the forcing is the change in flux at the tropopause, and that the most predictable diagnostic is the global mean surface temperature anomaly. Thus it is natural to look at the real world and see whether there is evidence that it behaves in the same way (and it appears to, since model hindcasts of past changes match observations very well).

So for our next trick, try dividing energy fluxes at the surface by temperature changes at the surface. As is obvious, this isn’t the same as the definition of climate sensitivity – it is in fact the same as the black body (no feedback case) discussed above – and so, again it’s no surprise when the numbers come up as similar to the black body case.

But we are still not done! The next thing to conveniently forget is that climate sensitivity is an equilibrium concept. It tells you the temperature that you get to eventually. In a transient situation (such as we have at present), there is a lag related to the slow warm up of the oceans, which implies that the temperature takes a number of decades to catch up with the forcings. This lag is associated with the planetary energy imbalance and the rise in ocean heat content. If you don’t take that into account it will always make the observed ‘sensitivity’ smaller than it should be. Therefore if you take the observed warming (0.6°C) and divide by the estimated total forcings (~1.6 +/- 1W/m2) you get a number that is roughly half the one expected. You can even go one better – if you ignore the fact that there are negative forcings in the system as well (chiefly aerosols and land use changes), the forcing from all the warming effects is larger still (~2.6 W/m2), and so the implied sensitivity even smaller! Of course, you could take the imbalance (~0.33 +/- 0.23 W/m2 in a recent paper) into account and use the total net forcing, but that would give you something that includes 3°C for 2xCO2 in the error bars, and that wouldn’t be useful, would it?
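To make the arithmetic of that paragraph explicit (a sketch of my own, using only the numbers quoted in the post):

```python
# The "transient sensitivity" trick, using the figures quoted in the post.
observed_warming = 0.6        # deg C, observed 20th-century warming
net_forcing = 1.6             # W/m2, total forcing (positive + negative terms)
warming_terms_only = 2.6      # W/m2, if aerosols and land use are ignored
imbalance = 0.33              # W/m2, planetary energy imbalance (ocean lag)

print(observed_warming / net_forcing)                # ~0.38: half the expected 0.75
print(observed_warming / warming_terms_only)         # ~0.23: smaller still
print(observed_warming / (net_forcing - imbalance))  # ~0.47: error bars now reach 0.75
```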

And finally, you can completely contradict all your prior working by implying that all the warming is due to solar forcing. Why is this contradictory? Because all of the above tricks work for solar forcings as well as greenhouse gas forcings. Either there are important feedbacks or there aren’t. You can’t have them for solar and not for greenhouse gases. Our best estimates of solar are that it is about 10 to 15% the magnitude of the greenhouse gas forcing over the 20th Century. Even if that is wrong by a factor of 2 (which is conceivable), it’s still less than half of the GHG changes. And of course, when you look at the last 50 years, there are no trends in solar forcing at all. Maybe it’s best not to mention that.

There you have it. The cuckoo has come in and displaced the whole field of climate science. Impressive, yes? Errrr…. not really.

186 Responses to “Cuckoo Science”

I struggle to see the point you are trying to make Hank. Someone fired the 95% contribution by water vapour at me on the NAM (National Association of Manufacturers) site. I checked Gavin’s articles on this site, traced back to the EIA page that was given as a reference, asked Gavin about it (he hasn’t come up with anything), and emailed the Energy Information Administration’s Paul McArdle (no reply). So I asked “Framo” on the NZ CSC site. Unfortunately, after all that I still don’t know where the EIA got the 95% figure from.

You state that “it’s being asked about a lot of places” please could you advise me where all these other places are as perhaps someone asking the question at those places will find the answer.

As speculated above, the key reference for the DOE statement is Freidenreich and Ramaswamy (JGR, 1993). However, the appendix authors appear to have made an understandable but significant error. FR93 are discussing the absorption of downwelling SOLAR near-IR by H2O and CO2 – that is, the shortwave part of the spectrum (the 4.3, 2.7 and 2 micrometer bands). The key factor for the roles of H2O and CO2 as greenhouse gases is of course the long wave spectrum (i.e. 12-18 micrometers, 10 and 7.6 micrometers). The problematic statement regarding the relative roles:

Given the present composition of the atmosphere, the contribution to the total heating rate in the troposphere is around 5 percent from carbon dioxide and around 95 percent from water vapor. In the stratosphere, the contribution is about 80 percent from carbon dioxide and about 20 percent from water vapor.

refers ONLY to the solar radiation absorption, not the long wave absorption (which is much larger), and doesn’t take into account ozone in any case, which is by far the dominant term in the stratosphere (particularly between 15 and 20km). Thus, I am inclined to stand by my calculations in the water vapour post which concur with the Ramanathan and Coakley (1978) results. I will email the DOE website and see if I can’t get a correction made.

If one knows anything about the Stefan-Boltzmann theorem (total integrated radiated power is proportional to T^4), it is obvious that it does not apply to the Earth + atmosphere (E+a) system: the S-B equation is not a fundamental starting point, but an equation derived from the Planck blackbody radiation formula – and the derivation only works if the emissivity is independent of frequency. This is blatantly not true of the E+a, so S-B doesn’t apply.

What has taken me longer to realize is that, if you were to assert the S-B equation to be true without proof, as Monckton does in his M-model, you end up with a negative value of lambda, the climate sensitivity!

Why? According to the M-model, higher temperature is related to higher energy-flux, and likewise, lower energy-flux is related to lower temperature – no ifs, ands or buts. So if I’m considering the effect of CO2 on temperature, I would have to say: the extra CO2 reduces the outwards energy flux, so this corresponds to reduced temperature. Therefore, the more the CO2, the lower the temperature must be. So the M-model predicts a negative lambda.

What’s going on here? In a realistic analysis, we recognize the outgoing energy flux as a cooling mechanism, so a reduction in IR flux (from the pre-industrial equilibrium) has a heating effect: It results in net energy income to the E+a. This means that a negative delta in the energy flux is still a positive forcing. But in the M-model, a negative delta in the energy flux is tied to a negative delta in temperature, or else the equation is simply not true.

Could this mean instead that our standard understanding of E+a is wrong? No. What it rubs home again is that results that are true for systems in thermal equilibrium are not true for systems that are not in thermal equilibrium. In trying to apply the S-B equation to E+a, a system that is manifestly not in thermal equilibrium, Monckton has committed a fundamental error. Once that is done, anything can be derived: garbage in, garbage out.

Re “If one knows anything about the Stefan-Boltzmann theorem (total integrated radiated power is proportional to T^4), it is obvious that it does not apply to the Earth + atmosphere (E+a) system: the S-B equation is not a fundamental starting point, but an equation derived from the Planck blackbody radiation formula – and the derivation only works if the emissivity is independent of frequency. This is blatantly not true of the E+a, so S-B doesn’t apply.”

I think your discussion of the Stefan-Boltzmann law leaves out a crucial point — the role of emissivity. The full Stefan-Boltzmann law is:

F = epsilon sigma T^4

where F is the outgoing flux, epsilon the emissivity, sigma the Stefan-Boltzmann constant (5.6704 x 10^-8 in the SI, I forget the units but you can work them out from dimensional analysis), and T of course the absolute temperature (K in the SI). You can apply S-B to nearly any radiator if you assign the appropriate emissivity. Depending on what kind of spectrum you’re using, you can even alter the exponent. It’s a very flexible law.

I don’t agree with you about the flexibility. Certainly, for any spectrum, and any one temperature, you can define an epsilon such that:

flux = epsilon * sigma * T**4;

Just set: epsilon = flux/(sigma * T**4)

But it will only be true at that specific value of T ! There’s no significance to a “law” like that. The entire point of the S-B law is that it says that the flux is proportional to T**4, with a CONSTANT proportionality factor. This will simply not be true if the real emissivity, which is a function of frequency, is not a constant.

(There are other derivations, but this is the most explicit. He doesn’t use the emissivity, but you would just stick in a factor of epsilon(f).) The point is that, only if the emissivity is constant over frequency can you pull out the factors of T and get a definite integral for which the limits of integration don’t depend on T. Then you easily get T**4. Indeed, for a graybody (where epsilon(f) = constant), you get a numerical value for the constant. It’s all quite definite.

But if you’re willing to allow the “constant of proportionality” to vary with T, and even the exponent, why bother calling this a “law” at all? To the best of my knowledge, the fact that the blackbody flux (and therefore the blackbody energy density) is indeed T**4 is an important fact of the thermodynamics of radiation; the constant of proportionality could only be fixed after Planck’s radiation formula was found, in the context of quantum theory. I don’t at all agree that you could say that a statement such as:

“flux = constant * T**a
where a is about 4, but changes a little bit as you change T”

is the Stefan-Boltzmann law/theorem, or anyone else’s law either. I certainly wouldn’t want it named after me!
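A small numerical sketch (my own, not from the comment; all values below are illustrative assumptions) makes the point: take a toy body that radiates like a blackbody only below a cutoff frequency. The effective emissivity epsilon = flux/(sigma*T^4) then shifts with temperature, so no single constant rescues the T^4 law:

```python
import math

H, KB, C = 6.626e-34, 1.381e-23, 3.0e8   # Planck, Boltzmann, speed of light (SI)
SIGMA = 5.6704e-8                         # Stefan-Boltzmann constant, W m^-2 K^-4

def band_flux(T, f_cut, steps=20000):
    """Hemispheric flux from a blackbody at T, emitting only below f_cut (midpoint rule)."""
    df = f_cut / steps
    total = 0.0
    for i in range(steps):
        f = (i + 0.5) * df
        # Planck spectral radiance, integrated over frequency; pi converts to flux
        total += 2 * H * f**3 / (C**2 * math.expm1(H * f / (KB * T))) * df
    return math.pi * total

f_cut = 3e13  # Hz (~10 micrometres): block all higher frequencies
eps_255 = band_flux(255.0, f_cut) / (SIGMA * 255.0**4)
eps_288 = band_flux(288.0, f_cut) / (SIGMA * 288.0**4)
# The "emissivity" needed to rescue F = eps*sigma*T^4 changes with T:
print(eps_255, eps_288)  # eps_255 > eps_288, so no one constant works
```

As the spectrum shifts to higher frequencies with warming, a larger share of the emission falls above the cutoff, so the fitted epsilon drops.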

I think that Neil’s point is of interest but it goes too far. I have written it up on the other web site. Roughly speaking you get the ”(SB-lambda) model” result by assuming that an additional greenhouse gas = a small reduction in emissivity determined by the forcing. I shall also send in a rather long-winded critique of Monckton’s M1 etc. (Needs checking).

Have some of the comments above over-emphasised the significance of the errors which might be made in Monckton’s use of the SB grey body law?

Has anyone tried testing THE T^4 law with a climate model?
(here T= surface temp. and the dep. variable is OLR)
—————————————
Whatever the answer to that question I think that Gavin’s other criticisms are decisive on their own. This is why I think so:

Monckton uses a simple energy balance equation to estimate the so-called negative feedback from radiation (see e.g. Hartmann’s book p.231, but I’ll try to explain it). It is sometimes given this name because:
energy input > energy output leads to warming, which leads to more energy output, and hence involves a trend to restore energy balance.
Using this balance you can estimate one contribution to lambda.
The results provide zero information about the other feedbacks and the time delays because this model does not contain time and is only designed to estimate the one thing (negative feedback arising from extra radiative loss) and deliberately excludes all other effects. The actual estimate is of course based on an approximation to the radiation (single temperature = surface value), T^4 dependence and so on. But that is not the main point.

Monckton’s conclusions depend implicitly on claims he makes about the other feedbacks and about the time dependence. These claims are based on zero evidence and must be ignored. To find out about these phenomena you have to leave Monckton and return to the climatologists. This is more or less the drift of the lead article above which refers to the neglect of the delay and of the other feedbacks. The only difference is a reduced emphasis on black or grey bodies. Gavin has said sufficient anyway, (invalid comparison with observations because of neglect of time delays and aerosols; inconsistent treatment of solar).

According to Neil King there may be still more problems with Monckton (reported on a “Wiki” wherever that might be)

[Re #153 (negative lambda): I was surprised at first, but I now think this is just a faulty way of solving the energy balance equation, i.e.
R(T) = S
where R is the outgoing long wave radiation expressed as a function of surface temperature (there are many other variables which I have omitted). We can write the solution of this equation as
T=r(S).
The only thing we need to assume about the functions R and r is that they are monotonic (warm bodies radiate more than cold ones and conversely a body which radiates more is warmer).
An addition of greenhouse gas reduces the radiation by a factor f, say, where f<1.
The energy balance equation is now fR(T)=S . (A)
with solution T = r(S/f) > r(S),
demonstrating the warming. An alternative starting point is to add a bit of forcing to S. You get the usual answer, not the one in #153, which violates the equation we need to solve; I see no problem here]
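That argument can be made concrete with a toy example (my own illustration, with R(T) = sigma*T^4 standing in for any monotonic radiation law; the flux and f values are illustrative):

```python
# Toy energy balance: R(T) = S, with an extra greenhouse factor f < 1
# multiplying the outgoing radiation: f*R(T) = S, i.e. R(T) = S/f.
SIGMA = 5.6704e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def r(S):
    """Invert R(T) = sigma*T^4: equilibrium temperature for outgoing flux S."""
    return (S / SIGMA) ** 0.25

S = 240.0        # W/m2, roughly the absorbed solar flux
f = 0.98         # greenhouse addition cuts outgoing radiation by 2%

T0 = r(S)        # baseline equilibrium, ~255 K
T1 = r(S / f)    # new equilibrium solving f*R(T) = S
print(T1 - T0)   # positive: warming, not the negative lambda of #153
```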

Second thoughts about my #159.
I’m sorry but I am having second thoughts; I made it too simple last time.
The energy balance equation contains no time. That point remains. It only applies to the final temperature. Therefore it is impossible to anticipate anything about time delays from such an equation. So you cannot easily check the output against observed temperatures. Gavin’s other criticisms about aerosols and inconsistent method of treating solar forcing remain. BUT that leaves the question of other feedbacks. To answer that, you do need to go into the details of the radiation equation. If the exact equation is k(T)T^4 then an assumption that k is independent of T removes the other feedbacks (apart from the albedo). If you take k=constant you have no right to claim that your result includes the feedbacks. Monckton does make this claim (factor of 2.7).

I am somewhat curious about the figure for climate sensitivity of 0.75 °C/(W/m2). If we were to apply this to the UK one would expect that the seasonal variation would be slightly more than 100 degrees, whereas it is only 11 degrees.
In mid-summer the UK gets approx 88% of 340 W/m2 × 0.7 (albedo factor), and in mid-winter approx 25% of 340 W/m2 × 0.7 (albedo factor). Why doesn’t a calculation for forcing apply to the temperature changes that occur seasonally?
Moreover, given that from the seasonal data we find that a delta of 150 W/m2 causes a change of 11.5 degrees, why should 4 W/m2 give us a change of three degrees?
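For reference, the commenter's arithmetic laid out as a sketch (my own reconstruction of the numbers in the question; the replies that follow explain why naively applying equilibrium sensitivity to a single region fails):

```python
# The commenter's numbers: UK insolation in mid-summer vs mid-winter.
solar = 340.0          # W/m2, average top-of-atmosphere flux
absorbed = 0.7         # fraction absorbed (i.e. 1 minus an albedo of 0.3)

summer = 0.88 * solar * absorbed   # ~209 W/m2
winter = 0.25 * solar * absorbed   # ~60 W/m2
delta = summer - winter            # ~150 W/m2, as the comment says

naive_expectation = 0.75 * delta   # >100 deg C if sensitivity is applied naively
observed_seasonal_swing = 11.5     # deg C actually observed
```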

And then there is this heating lag I don’t understand. Would the authors of this site care to guess the amount of forcing a cold or mild winter will have on the following summer? If, for instance, a winter was 3 degrees hotter than normal, what do they predict, from their models, the following summer average temperature to be?

Re: 161: Temperature in a given locale is dependent on a lot more than local solar forcing. Look at temperature variations from day to day: the 10 day forecast for my region has high temperatures varying by almost 10 degrees C from one day to the next, with no solar changes at all. This shows how easily heat is carried from one region to another by air movements. And indeed, you get heat transport from the equator to the northern latitudes, larger in the winter, smaller in the summer.

The UK in particular is a very poor place to look at seasonal temperature variations because it is an island, and therefore the water mass around it serves to dampen temperature oscillations by a lot. It is well known that continental interiors display much larger temperature swings from summer to winter because of this effect.

Even on the global level, a change of 1 w/m^2 does not lead to an instant 0.75 degree temperature change: because of the inertial heat sink that is the ocean, plus slow feedbacks like arctic ice and glacial retreat, it takes decades to centuries to fully realize a temperature change from a forcing change.

I’m not sure about your heating lag question: I would think the correlation between a winter heating anomaly and a summer heating anomaly to be small, once you remove the overall climate trend. You can check this on a global level yourself by going to the giss temperature records – I’d be curious about the answer…

I have checked out the lag, for both the UK and for Alice Springs in Oz. I just wondered what the modellers think the answer would be.
I find the idea that heat is moved horizontally a little unconvincing, and the idea that the sea acts as a heat sink in summer, but a heat delivery system in winter, even less so, given the difference in delta T in both conditions.

Heck, you live in the UK – don’t you ever wonder why your winters are so much milder than, say, Canada at the same latitude? It is because you are next to an ocean, and the prevailing winds are blowing east!

And a final mental bone for you to chew on: think about the north pole. It has ZERO sunlight for weeks on end in the middle of winter. If there is no forcing, why isn’t it at zero Kelvin? In contrast, look at the Moon, and the temperature differential between the light side and the dark side. The atmosphere does a wonderful job of transferring heat from hot regions to cool regions.

Marcus — an interesting proxy for what you describe above are the plant zone charts of the U.S. included in all of the seed catalogs people get in the mail. My brother, for example, lives in Wareham, Mass. less than 10 miles from the ocean and is in a different plant zone from my mother, who lives approx. 30 miles from the ocean in North Easton, Mass.

What component of the air is transferring the heat?
I find it hard to believe that it is direct transfer of molecular motion.
Air inland is drier than coastal air, and water vapor in the air above the sea saturates the air; as this air moves inland it traps IR radiation.
Moving wet air from the sea to land transfers vaporized water molecules and associated latent heat inland; the IR trapping ability will also help heat the ground.
My guess is that it is the water vapor transfer, rather than the air (N2/O2) itself, which heats the ground.
This would explain why the South pole is so much cooler than the North, the south being a desert (altitude helps as well).
With regard to the winter temperature at the poles, the North pole drops to minus 35 and the south to minus 68. In your sarcastic question you wonder why the North pole does not fall to zero K. A better question would be: what is the expected temperature of the South pole, and how far off from that is it? It is very interesting that the summer peak is very short, yet the winter trough very shallow. This would suggest that the pole is very hard to heat and easy to cool. The heat transfer from the bedrock in winter shows the rate of heat transfer maximum is about 98.5 W/m2. Whether the heat transfer rates in the north and south determine the winter temperature I very much doubt; my guess would be that the difference is mainly due to IR trapping by water vapor.
Of course, if CO2-sensitized water trapping is going to have big effects, it would be on the poles. The winter temperature would be elevated. The effect would be greatest in the South, compared with the north. Even 1-2 watts/m2 would stand out like a sore thumb.

I don’t understand why you can’t believe that air can transfer heat. Have you ever used a blowdryer? How about forced air central heating for a house? Thermoses and dewars use vacuums for insulation, because air transfers heat.

And again, on the large scale, think about warm and cold fronts – you can get mid-day temperatures varying by 10 degrees C from one day to the next.

Calculate temperature effect of radiative forcing thus: sum all forcings; use SB eqn. to find dT(forcings); sum all feedbacks in W m-2 K-1 & allow for mutual amplification; multiply by dT to get feedbacks in W m-2; add feedbacks to forcings; use SB eqn. to find dT(forcings+feedbacks). CM does it this way. He’s right, surely? And he rightly doesn’t assume Earth as a blackbody. Also, he rightly allows for the difference between transient and equilibrium response. CM got it right: RealClimate got it wrong. Why has no prev. post pointed this out?

Because Monckton is wrong. The Stefan-Boltzmann equation simply doesn’t apply when the emissivity is a function of frequency, as is very much the case with the atmosphere-Earth system. So he starts off wrong, and builds from there.

Garbage In, Garbage Out.

(Interestingly, in one of his follow-up rebuttals to an article in The Guardian, he grudgingly says “The atmosphere is a badly-behaved graybody.” as a defense of his use of S-B. This is nonsense: it’s not a graybody at all.)

I’ve been debating with an AGW sceptic who brought up Monckton’s article. He (the sceptic) claims that this site (RealClimate) “is trying to dismiss the very notion of a mediaeval warm period”. Another link to a letter by Monckton is put forward, by this sceptic, to further reinforce his position. I’m not really qualified to argue but, as he directly accuses RealClimate of some kind of cover-up, I thought I’d try to get a RealClimate response.

[Response: The question to ask your friend is why he thinks the MWP is important. I imagine that he isn’t particularly interested in early medieval responses to climate anomalies, and so it’s probably related to whether he thinks the current warming is unprecedented or not. Well, in that case, it is completely moot. It was warmer than today 120,000 years ago (the Eemian), it was warmer in the Pliocene (3 million years ago) and much warmer during the Eocene or Cretaceous periods. The arguments for current climate change thus do not depend on any notion of ‘unprecedented’ warmth. The issue instead is whether the changes in the past can be explained and whether those explanations provide any insight into the current situation. For the older periods greenhouse gases and plate tectonics seem to have been key, for the Eemian it was orbital forcing – issues that either aren’t relevant on the current timescale or are supportive of the GHG forcing idea.

So the issue for the MWP is not how warm it was relative to today, but are the reasons why it was as warm (or not) as it was understandable and relevant to today? Unfortunately, our information about solar and volcanic forcings 1000 years ago is rather uncertain, and those uncertainties preclude any clear estimate of what we expect the MWP to have been like. For today of course, we have a very good idea about what solar and volcanic activity is doing (not very much), and so our expectations for what climate should be doing now are dominated by the GHG change. When you then go and look at the data (this paper was simple and straightforward), you find that medieval anomalies aren’t as regionally coherent as anomalies today, and different records have maxima at different times. This was noticed decades ago (and way before the ‘hockey stick’ paper), and continues to be true. So even if new data show that the MWP was clearly warmer globally than the late 20th century it wouldn’t affect our estimates for why temperatures are rising today. If this is a ‘cover-up’, we must be reading from very different dictionaries. -gavin]

Thanks, Gavin. I think this sceptic was saying that early IPCC graphs showed the MWP but later ones removed it. I can’t verify this, personally, but what you say makes a lot of sense. What drives some of this scepticism is the “argument” that today’s temperatures are the highest for the last x years, so something is going on. The sceptics take this statement, find data (or think they find data) that refutes the “argument”, and so AGW is wrong. But yours is a great point; it’s not so much the fact of temperature variations, but how those variations can be explained. Many sceptics take the fact that climate has always changed in the past (d’uh!) to mean that the current warming is natural (i.e. not induced by human behaviour).

However, it looks like Monckton even got the temperature record wrong, in that the MWP, on current data, was not as warm as present.

Mr. Wade, if your sceptic was talking about what they usually talk about, an early assessment report (can’t remember whether it was 1 or 2) had a SKETCH of what was thought to be previous global temperature. This was not a proxy or multiproxy study result, but an expert opinion, with all of the vagaries such things carry. One of the important areas of progress in the last decade has been the emergence of proxy/multiproxy based data allowing us to trace climate back through the centuries.

I remember this the same way Eli does, from the Barton hearings (the transcript for those is overdue, for now you’d have to watch the video to find where this came up). Someone held up an IPCC page and pointed to a bump on the line, I think it was Dr. Wegman, and said he’d digitized the line and worked backward to get numbers that must have been used to create the chart, then criticized the accuracy of the numbers. And someone else, I think Dr. North, said no, there were never any numbers, that was a picture of what we thought at the time was sort of what happened.

That whole exchange really illustrated the dangers of data mining, Dr. W’s area of expertise, where the great risk is being sure there _is_ something, and so convincing yourself you’ve derived proof of its existence by doing an analysis from spotty sources and coming up with a certain, but wrong, conviction that you have proved the existence of WMD, er, MWP.

Remember, someone says “MWP” they aren’t referring to any actual real _thing_ — they’re referring to someone’s publication, of their study of some selected group of temperature records as estimated by things written down in many places over many years by many people using many different criteria. Each person publishing something about temperature change over time and location has to go back to the original records.

What they found eventually was that there was no “MWP” worldwide — there are spots during the Medieval when some areas were warmer and others were not, around the world. That’s said in the hearings, explicitly.

I’m looking forward to listening to the hearings again with the Barton committee’s transcripts in hand. I want to see if they really do a verbatim transcription as they promise, with all the words as they were spoken.

Interesting, your piece about climate forcing. In a book by Phillip Eden (a British climatologist and not a greenhouse sceptic), he points out that, all things being equal, the total warming for a doubling of CO2 concentrations is 1°C. The Stefan-Boltzmann law must be where he gets that from. The greater forcings quoted in the models depend on feedbacks.

While nobody now doubts a man-made warming trend, a legitimate question is how much. How reliable are the models, given that the complexity of the climate is such that even supercomputers cannot comprehensively cover it?
One query I have always had is that the models assume that the feedbacks are net positive. However I would have thought that some negative feedbacks must be built in to climate, otherwise any change in global temperature either way would cause a runaway forcing. Can you enlighten me on this?

>all things being equal
That means — without considering any feedbacks. Magically double CO2, with no other change.

>how reliable are the models
You can look this up, for each model, but do it now, don’t rely on some old information because this changes. For one example, check how this one developed (picked out of a Google search, do your own for more examples): http://www-as.harvard.edu/chemistry/trop/ids/abstract.html

>feedbacks are net positive
That means — after adding and subtracting. Run the model out to a few hundred years in the future and feedbacks will differ. That’s why you don’t see the Venus scenario predicted here.

Can anybody point me to any known predictions made by a GCM and subsequent measurements of predictive accuracy? I’m not looking here for either “hindcasting” or holdout / validation sets, but actual forecasts made on date X for some date after X with subsequent measurements of accuracy (something like an empirical skill measurement).

In 1988, James Hansen testified in Congress about predictions of future global average temperature, based on GCM runs at GISS. His predicted temperature for 2005 is “right on the money.” You can find out about how denialists have *distorted* this to discredit global warming here.

Hansen’s is indeed a classic, as is the pointer to how it’s been misreported. You should always check anything you find someone claiming against the actual sources, there is a whole lot of “Public Relations/Advocacy science” available arguing only one side or story, that can fool you into thinking you’re learning facts when you’re being fed fiction. Look for original research articles, to support any claims made at ‘advocacy’ sites.

One problem is that over the ages climate forcing agents such as solar radiation, CO2 concentration, and the position of continents have changed greatly, yet the Earth’s temperature has varied only within a 10°C range. There seems to be some mechanism that acts against forcings. Something unknown to the modellers. If so, the warming from a doubling would be less than 1°C. Has anybody any theories on this?

While the 10°C covers all ranges of climates, on the warming side the upper limit appears to be 4 degrees above present, whatever the CO2 level. However, sources suggest that a doubling will give 3 degrees.
What has passed unnoticed is the weird assumptions of some doom-mongers. For one thing, more than a doubling of CO2 levels is impossible unless a vast new source of fossil fuels is discovered somewhere.
Also, I have seen it assumed that we will still be using fossil fuels in 2100. Surely by that time burning coal, oil etc. will be seen as incredibly primitive.

When I mentioned fossil fuels I was referring to all of them, not just oil. Even if we burn all the coal I doubt if a doubling of CO2 could happen. Reserves of coal are less extensive than is believed, and oil and gas will soon have passed their peak.
Within a few decades renewables could be made to work on a large scale. I doubt if we will burn something as dirty as coal if we have any alternative.

I’m only surprised he didn’t include Erich von Däniken in his list of ‘references’. The noble lord clearly has too much time on his hands (in fact he’s admitted as much) – and he’s clearly delighted at how clever he can make himself seem to his chums at the Telegraph, armed only with Google and a scientific calculator.

No doubt someone will challenge this as an ‘ad hominem’ attack – if so, they don’t understand the meaning of the term. Examination of Monckton’s ‘work’ shows him to be incompetent, unqualified, scientifically illiterate, and happy to use all sorts of tricks aimed to deliberately mislead a naive reader. He would never have been published if it hadn’t been for his personal connections with the newspaper involved.

No doubt the ‘Telegraph’ is highly delighted at all the rumpus it has caused – even if this was at the expense of lowering its journalistic standards.