The Global Cooling Bet – Part 2

Last week we proposed a bet against the “pause in global warming” forecast in Nature by Keenlyside et al. and we promised to present our scientific case later – so here it is.

This is why we do not think that the forecast is robust:

Figure 4 from Keenlyside et al. ’08. The red line shows the observations (HadCRUT3 data), the black line a standard IPCC-type scenario (driven by observed forcing up to the year 2000, and by the A1B emission scenario thereafter), and the green dots with bars show individual forecasts with initialised sea surface temperatures. All are given as 10-year averages.

Their figure 4 shows that a standard IPCC-type global warming scenario performs slightly better for global mean temperature over the past 50 years than their new method with initialised sea surface temperatures (see also the correlation numbers given at the top of the panel). That the standard warming scenario performs better is highly remarkable, since it includes no observed data. The green curve, which presents a set of individual 10-year forecasts and is not a continuous time series, starts each forecast close to the observed climate, because it is initialised with observed sea surface temperatures. By construction it cannot stray far from the observations, in contrast to the “free” black scenario. You would therefore expect the green forecasts to perform better than the black scenario. The fact that they do not shows that their initialisation technique does not improve the model forecast for global temperature.

Their ‘cooling forecasts’ have not passed the test for their hindcast period. Global 10-year average temperatures have increased monotonically during the entire time they consider – see their red line. But the method seems to have already produced two false cooling forecasts: one for the decade centered on 1970, and one for the decade centered on 1999.

Their forecast was not only too cold for 1994-2004, but it also looks almost certain to be too cold for 2000-2010. For their forecast for 2000-2010 to be correct, all the remaining months of this period would have to be as cold as January 2008 – which was by far the coldest month in that decade thus far. It would thus require an extreme cooling for the next two-and-a-half years.

Even for European temperatures (their Fig. 3c, not part of our proposed bet), the forecast skill of their method is not impressive. Their method has predicted cooling several times since 1970, yet the European temperatures have increased monotonically since then. Remember the forecasts always start near the red line; almost every single prediction for Europe has turned out to be too cold compared to what actually happened. There therefore appears to be a systematic bias in the forecasts.

One of the key claims of the paper is that the method allows forecasting the behaviour of the meridional overturning circulation (MOC) in the Atlantic. We do not know what the MOC has actually been doing, for lack of data, so the authors diagnose the state of the MOC from the sea surface temperatures – to put it simply: a warm northern Atlantic suggests a strong MOC, a cool one a weak MOC (though it is of course a little more complex). Their method nudges the model’s sea surface temperatures towards the observed ones before the forecast starts. But can this induce the correct MOC response? Suppose the model’s surface Atlantic is too cold, which would suggest the MOC is too weak. The model surface temperatures are then nudged warmer. But in doing so, you are making surface waters more buoyant, which tends to weaken the MOC rather than enhance it! So with this method it seems unlikely to us that one could get the MOC response right. We would be happy to see this tested in a ‘perfect model’ set-up, where the SST restoring is applied to try to make the model forecasts match a previous simulation (for which much more information is available). If it doesn’t work in that case, it won’t work in the real world.

When models are switched over from being driven by observed sea surface temperatures to freely calculating their own sea surface temperatures, they suffer from something called a “coupling shock”. This is extremely hard, perhaps even impossible, to avoid, as “perfect model” experiments have shown (e.g. Rahmstorf, Climate Dynamics 1995). This problem presents a formidable challenge for the type of forecast attempted by Keenlyside et al., where just such a “switching over” to free sea surface temperatures occurs at the start of the forecast. In response to the “coupling shock”, a model typically goes through an oscillation of the meridional overturning circulation over the following decades, of a magnitude similar to that seen in the Keenlyside et al. simulations. We suspect that this “coupling shock”, which is not realistic climate variability but a model artifact, could have played an important role in those simulations. One test would be the perfect-model set-up we mentioned above; another would be an analysis of the net radiation budget in the restored and free runs – a significant difference there could explain a lot.

To check how the Keenlyside et al. model performs for the MOC, we can look at their skill map in Fig. 1a. This shows blue areas in the Labrador Sea, the Greenland-Iceland-Norwegian Sea and the Gulf Stream region. These blue areas indicate “negative skill” – that is, their data assimilation method makes things worse rather than improving the forecast. These are the critical regions for the MOC, and this indicates that, for either of the two reasons discussed above, their method is not able to correctly predict the MOC variations. Their method does show skill in some regions, though – this is important and useful. However, it may be that this skill comes from the advection of surface temperature anomalies by the mean ocean circulation rather than from variations of the MOC. That would also be an interesting issue to research in the future.

All climate models used by IPCC, publicly available in the CMIP3 model archive, include intrinsic variability of the MOC as well as tropical Pacific variability or the North Atlantic Oscillation. Some of them also include an estimate of solar variability in the forcing. So in principle, all of these models should show the kind of cooling found by Keenlyside et al. – except these models should show it at a random point in time, not at a specific time. The latter is the innovation sought after by this study. The problem is that the other models show that a cooling from one decadal mean to the next in a reasonable global warming scenario is extremely unlikely and almost never occurs – see yesterday’s post. This suggests that the global cooling forecast by Keenlyside et al. is outside the range of natural variability found in climate models (and probably in the real world, too), and is perhaps an artifact of the initialisation method.
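The decade-to-decade comparison can be made concrete: given any annual global-mean temperature series, count how often one non-overlapping 10-year mean comes out cooler than the one before it. A minimal sketch with synthetic numbers (the 0.02 K/yr trend and 0.1 K noise are illustrative assumptions, not values from any CMIP3 model):

```python
import random

def decadal_cooling_fraction(annual_temps):
    """Fraction of consecutive non-overlapping 10-year means
    in which the later decade is cooler than the earlier one."""
    decades = [sum(annual_temps[i:i + 10]) / 10
               for i in range(0, len(annual_temps) - 9, 10)]
    drops = sum(1 for a, b in zip(decades, decades[1:]) if b < a)
    return drops / max(len(decades) - 1, 1)

random.seed(0)
# Synthetic warming scenario: 0.02 K/yr trend plus interannual noise.
temps = [0.02 * yr + random.gauss(0, 0.1) for yr in range(100)]
print(decadal_cooling_fraction(temps))
```

With these toy numbers the 0.2 K/decade trend swamps the noise in the decadal means, so decade-on-decade cooling essentially never occurs – the same qualitative behaviour seen in the archived scenario runs.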

Our assessment could of course be wrong – we had to rely on the published material, while Keenlyside et al. have access to the full model data and have worked with it for months. But the nice thing about this forecast is that within a few years we will know the answer, because these are testable short term predictions which we are happy to see more of.

Why did we propose a bet on this forecast? Mainly because we were concerned by the global media coverage which made it appear as if a coming pause in global warming was almost a given fact, rather than an experimental forecast. This could backfire against the whole climate science community if the forecast turns out to be wrong. Even today, the fact that a few scientists predicted a global cooling in the 1970s is still used to undermine the credibility of climate science, even though at the time it was just a small minority of scientists making such claims and they never convinced many of their peers. If different groups of scientists have a public bet running on this, this will signal to the public that this forecast is not a widely supported consensus of the climate science community, in contrast to the IPCC reports (about which we are in complete agreement with Keenlyside and his colleagues). Some media reports even suggested that the IPCC scenarios were now superseded by this “improved” forecast.

Framing this in the form of a bet also helps to clarify what exactly was forecast and what data would falsify this forecast. This was not entirely clear to us just from the paper and it took us some correspondence with the authors to find out. It also allows the authors to say: wait, this is not how we meant the forecast, but we would bet on a modified forecast as follows… By the way, we are happy to negotiate what to bet about – we’re not doing this to make money. We’d be happy to bet about, say, a donation to a project to preserve the rain forest, or retiring a hundred tons of CO2 from the European emissions trading market.

We thus hope that this discussion will help to clarify the issues, and we invite Keenlyside et al. to a guest post here (and at KlimaLounge) to give their view of the matter.

198 Responses to “The Global Cooling Bet – Part 2”

#113 Thank you, Barton Paul Levenson. An average, as I understand it, is a statistical artefact derived by dividing the sum of a set of numbers, each weighted according to purpose, by the count of the set. What interests me is why two statisticians get different averages from a common data set. This is relevant to the current “cooling equals warming” dialogue. GISS and UKmet presumably use the same raw thermometer readings and the same area-based weights. However, the former says that, on average from 2001 to 2007, the planet warmed while the latter said it cooled (the difference being 1.5 degrees C/decade http://sciencepolicy.colorado.edu/prometheus/archives/climate_change/001425how_to_make_two_deca.html). Would this measurement uncertainty not warrant climate scientists requiring the statisticians to agree a “data cleansing” methodology (to account for the heat island effect, for example – what else could explain the difference?) and to remove this measurement ambiguity before proclaiming the science to be settled? Moreover, spare a thought for the hapless modeller looking into the distant future starting now. How does he handle the ambiguity – toss a coin, halve the difference or play favourites?

[Response: An average is just a mathematical operation – not an artifact. Since we don’t have perfect information, all global mean averages are estimates. And like with all estimates there are uncertainties. For short time periods those small changes can lead to large swings in trends – because a linear trend is not a good fit for such periods. People in the future will be looking at much longer timeseries and so the ambiguity will be less. Not too hard to understand surely? – gavin]

#138 Thanks, Jim Eager.
Jim, following your first response I changed language from “solar activity” to “solar impacts” meaning that this would include variability in the earth’s orbit and behaviour. Despite the CO2 amplification of a warming originating in such variability, the system periodically reverses from warming to cooling. Those natural dynamics are still in play in the current warming period. I have no difficulty with relating increasing atmospheric concentrations of CO2 to man burning fossil fuels. And I can accept the radiative physics of CO2 (understanding it is another matter, particularly its quantitative effect). But while our understanding of the radiative physics of CO2 has only recently been acquired, the physics itself has been in play from the start. This is why I reject the scientific discovery as a peg to hang the AGW hat on, leaving the rate of temperature increase (related to the atmospheric concentration of CO2) as the only basis for the AGW hypothesis.

Jim, Winston Churchill famously said: “There are lies and damned lies. Then there are statistics”. Outliers in a data set can make trends unrepresentative of the body of data. Statisticians routinely trim outliers, not permanently, but to get a better understanding of what the data are telling us. They also play around with raw data; Churchill was on to something.

Jim, the oceans, covering 70% of the planet’s surface, not only contain most of the heat in the climate system but also contribute 70% of the average global surface temperature. Would it not be more plausible to see the lower-than-expected warming as natural causes dominating man-made ones?

Jim, to stabilise the system we have to either reduce the inflow rate or increase the outflow rate, irrespective of the elevation at which the outflow occurs. Since the backward radiation from the atmosphere to the surface tends to increase the inflow rate, there is an urgent need to increase the photon escape rate. The need is urgent because in a column of air one metre square energy is accumulating at the rate of 150 watts per second, according to step 1 of CO2 in 6 easy steps. It must be that as the atmospheric temperature increases photons escape at a faster rate, as well as from further up in the atmosphere?

#140 Thank you, Ray Ladbury.
As a photon leaving the surface with escape to space on my mind and confronted with a whole bunch of N2 and O2 molecules in my path I don’t really care whether they give me that permanent “have-a-nice-day” smile or ignore me. What I do care about is avoiding those H2O and CO2 guys – not many of them, but each with evil intent towards me. I take comfort from the laws of probability. For each baddie there are “n” good guys where “n” is a large number. I’m gonna make it!

Ray, I see a problem. “Energy in = energy out” at the top of the atmosphere would imply “energy in > energy out” within the atmosphere. This means that the surface temperature has to fall from an energy level of 390 W/m2 to 240 W/m2 to reach equilibrium. But back radiation from a warming atmosphere will prevent this.

Agreed and to finish off your point the energy has risen so the temperature has gone up even if there is no translational energy involved. But as far as definitions are concerned, you have just shifted the problem from defining temperature to defining entropy and as far as I remember (I’m a bit rusty now) that depends on whether there is equilibrium, or if not, whether you can imagine that there is.

I am not sure that you need to bring in single molecules. Remember that the point was raised in connection with very short times, before the excited CO2 had time to undergo collisions. To discuss this regime it might be good enough to consider a slightly different system, i.e. a large uniform set of CO2 molecules in a box with the same total energy, interacting with infra-red photons and nothing else. This slightly hypothetical system would reach thermodynamic equilibrium with a well defined entropy and temperature. This argument is not perfect, because some energy might eventually leak into the translational modes through the photons’ momentum, but it’s only a thought experiment.

Re 153. You can take comfort from the laws of probability but your reasoning is utterly flawed. The probability that “you” get absorbed by N2 or O2 is zero. If you’re an infrared photon the N2 and O2 might as well not be there. N2 and O2 play no role in the process. The chance that “you” get absorbed by CO2 or some other GHG is much closer to 1.

#156 Thank you, John E. Pearson. The probability of my dying in a plane crash is the product of a number of independent events occurring: one, that I am in a plane; two, that the plane crashes; and three, that the crash kills me. The probability that a crash would kill me is close to 1, but that doesn’t mean that I’m virtually certain to die in a plane crash, because the probability of a plane crashing is very low, as is the probability of my being in that particular plane rather than some other plane, or none at all. The product of these probabilities is very low indeed. Similarly, the probability of a photon “dying” in its passage through the atmosphere is the product, first, of its being intercepted by a GHG, which is a function of GHG concentration (no?) and therefore low, and second, of its being “killed” by the capturing GHG, which as you suggest is close to 1. The product of the two probabilities is low. However, other posts indicate that the word “similarly” is inappropriate. I would be grateful to be told why.

Gavin, with respect you haven’t got to the nub of my concern. An average, which is the end result of a mathematical process, is not an estimate with attached uncertainties unless the data being processed are estimates with attached uncertainties. Neither thermometer readings nor the areas of which they are deemed to be representative are estimates. The only part of the process involving estimates and attached uncertainties must be data cleaning. This uncertainty would be eliminated if statisticians employed the same methodology.

[Response: You forget that spatial sampling is not complete, and it is mainly the decisions on how to infill missing data that cause the differences in the products. And frankly you don’t want everyone to employ the same methodology. Different groups making different (but reasonable) assumptions tell you what is and what isn’t robust. – gavin]

Your logic is still flawed: Similarly, the probability of a photon “dying” in its passage through the atmosphere is the product, first of it being intercepted by a GHG, which is a function of GHG concentration (no?) and therefore low, and second of it being “killed” by the capturing GHG, which as you suggest is close to 1.

I am not a climate scientist so I don’t know the numbers. I do know the logic, and yours is incorrect.

NOTATION: [GHG] = concentration of the greenhouse gas under discussion
PA([GHG],f) = probability that the GHG present at concentration [GHG] absorbs a photon of frequency f

Given that PA([0],f) = 0.

So far we’re good. Then you fall into incoherency.

You are comparing the concentration of GHG’s to that of O2 and saying since [GHG]

I didn’t quite follow John E. Pearson’s comment 159 — I suspect he used a less than or greater than sign that the blog has interpreted as HTML — and perhaps he has expanded it in comments not published at the time of this writing, but the missing piece in John Millett’s thought-computation is simple. The concentration of CO2 in the atmosphere is indeed quite small, so if the photon encounters a molecule on its ascent through the atmosphere that molecule is very likely to be, say, N2 or O2 and have no effect. However, there is quite a lot of atmosphere that a photon must travel through: we need an additional factor for the number of molecules the photon can expect to meet, and this factor is very, very large, far more than adequate to bring our probability near to 1.

This was already explained by Ray Ladbury in comment 117 and Hank Roberts in comment 141 (and perhaps elsewhere?), but perhaps the third time will be sufficient?

John Millett, So you really think the probability of being absorbed by any of the 59 trillion CO2 molecules you encounter would be small, huh? Because that is the only event that stands between you and escape. The O2 and N2 are only relevant because they increase the range of wavelengths the CO2 can absorb. The only molecules you see as a 15 micron photon are the CO2–and 59 trillion between you and space doesn’t give you good odds. We know this is true, John, because we know CO2 is responsible for 20-25% of the greenhouse warming that keeps Earth from being an inhospitable ice-ball. We’ve just increased the CO2 by 38%, and there’s no reason to expect the physics to change above 280 ppmv (the preindustrial value).

John, if the probability of any single IR photon being intercepted by a CO2 or H2O molecule before reaching space is so low, then pray tell why satellite photos of Earth in the infrared bands absorbed by CO2 are opaque, i.e. the surface can not be seen?

Gavin, I take it that the term “robust” implies reliability of the data and confidence for users of the data. Using different technologies to measure temperature – instruments, balloons and satellites – provides a basis for assessing reliability: the closer the measurements are to each other, the more confidently the data can be used. But, within the instrument technology, how does giving statisticians discretion in the processes of cleaning data and extrapolating to unsampled areas – and consequently confronting their customers with the widely divergent results displayed for the period 2001-07 – improve users’ confidence? Isn’t there a scientific “best practice” for these processes?

[Response: You are confusing the extreme sensitivity of a trend estimate through a small number of points with the reliability of the points themselves. The correlation between HadCRU and GISTEMP is extremely high. – gavin]
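Gavin’s point about the sensitivity of short trends is easy to demonstrate numerically. The two seven-point series below are invented anomaly values, not actual GISTEMP or HadCRU data; they differ by only a few hundredths of a degree in individual years, yet their least-squares slopes have opposite signs:

```python
def linear_trend(ys):
    """Ordinary least-squares slope of ys against 0, 1, ..., n-1."""
    n = len(ys)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Two hypothetical 7-year anomaly series (degrees C), differing by
# at most 0.04 C in any single year.
a = [0.40, 0.45, 0.46, 0.47, 0.45, 0.43, 0.40]
b = [0.37, 0.44, 0.47, 0.48, 0.47, 0.46, 0.44]
print(linear_trend(a), linear_trend(b))  # opposite signs
```

Changes well within the measurement uncertainty of any single year are enough to flip the sign of a seven-year trend, which is why a linear fit over so short a period tells you little about the underlying climate.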

#160 Thank you, JBL. I think you may be hinting that the probability of interception is also a function of transit time? Perhaps that’s what John E. Pearson is in the process of telling me. Let’s wait and see.

John Millett, No. The probability of absorption has nothing to do with transit time. It has to do with the probability of absorption by each molecule encountered and the number of molecules encountered. Since N2 and O2 do not interact with IR photons (except when distorted), that leaves H2O and CO2, mainly.
Yes, climate science is complex. However, the basics of anthropogenic causation of current warming are fairly simple. CO2 traps radiation, generating about 20-25% of the 33 degrees C of known greenhouse warming on Earth. This is virtually indisputable. Increasing CO2 traps more radiation, so Earth must heat up until energy out = energy in again (definition of equilibrium).
The anthropogenic hypothesis explains the trends we are seeing in terms of known physics–something no competing hypothesis does. That is essentially the argument–and if you don’t accept it you really have a lot of ‘splaining to do, Lucy.
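Ray’s 33 degrees C of greenhouse warming, and the 390 and 240 W/m2 fluxes quoted elsewhere in this thread, are linked through the Stefan–Boltzmann law. A quick back-of-the-envelope check (the fluxes are the commonly quoted round numbers, not a precise budget):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_temperature(flux_w_m2):
    """Temperature (K) of a blackbody emitting the given flux."""
    return (flux_w_m2 / SIGMA) ** 0.25

surface = blackbody_temperature(390)  # ~288 K, mean surface temperature
top = blackbody_temperature(240)      # ~255 K, effective emission temperature
print(surface - top)                  # ~33 K of greenhouse warming
```

The difference between the two temperatures is the greenhouse effect: the planet radiates to space at about 255 K from high in the atmosphere, while the surface sits some 33 K warmer.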

Sorry I didn’t know that I couldn’t use a less than sign. I’ll try again.
First my confusion with the “less than” sign, then a power outage, and dead internet kept me from fixing this earlier.

NOTATION:

[x] = concentration of whatever x is

[O2] = oxygen concentration

[GHG]=greenhouse gas concentration

PA([GHG],f) = probability that the GHG present at concentration [GHG] absorbs a photon in the infrared portion of the spectrum and converts to heat via collisions

Given that PA([0],f) = 0.

You are comparing the concentration of GHG’s to that of O2 and saying since

[GHG] << [O2] that it then follows that PA([GHG]) <<< 1

The value of PA([GHG]) has nothing at all to do with how the concentration of greenhouse gases compares to the concentration of [O2] and [N2]. I believe, but might be wrong about this, that the probability of an IR photon getting absorbed by GHG’s and captured into heat is only slightly increased by a doubling of CO2. After all, we’re only talking about changes in the forcing of a few W/m^2 in comparison to the forcing which is a few hundred W/m^2. I would think that the change in the probability of absorption with increasing CO2 wouldn’t need to be enormous.

John Millett, As far as peer-reviewed work that gets cited in subsequent papers–and that is what matters–scientists do not differ on what is causing climate change. Nor do they differ particularly on the basic approach toward mitigation, since it is unlikely anything will work in the near term except cutting greenhouse gas emissions. The only aspects where there is still uncertainty–and some controversy, it is true–are how serious the effects will be–disastrous or catastrophic.

#166. Cussed lady that she is, Lucy argues thusly. Ray, you posit that the absolute number of GHG molecules in a parcel of air is the sole determinant of the probability of absorption by a GHG molecule of a photon passing through the parcel; and that the very much larger number of non-GHG molecules are irrelevant. That is, for the purposes of the exercise these molecules can be eliminated and, in effect, replaced by space. Imagine a column of air horizontally thinly sliced, say one molecule thick. Each layer would contain a small fraction of the total number of GHG molecules in the air column, and the distance between the molecules would be very much greater than their diameters. On your premise, the probability of capture of a photon passing through a layer would be a small fraction of the overall probability. Your premise therefore requires that the probabilities of capture in each slice be additive which, I would venture to suggest, would give an overall probability greater than 1 and must be rejected. QED, Lucy.

Ray: “….Earth must heat up until Energy out=energy in again (definition of equilibrium)”.
I see a problem. “Energy in = energy out” at the top of the atmosphere would imply “energy in > energy out” within the atmosphere. This means that the surface temperature has to fall from an energy level of 390 W/m2 to 240 W/m2 to reach equilibrium. But back radiation from a warming atmosphere will prevent this.

[Response: John, you’re getting silly, and I don’t feel inclined to let this pointless discussion go on much longer. Regarding your first remark, as Ray no doubt would have told you the absorption is proportional to the number of molecules encountered only when the total absorption in a layer is small. Each layer takes out photons from what would be absorbed by the next layer; when you multiply (1-epsilon) by itself, to lowest order the absorption is 2epsilon, but when you multiply enough of these together, the absorption approaches 1. Simple stuff. By the way, in saying that the other gas molecules didn’t count, Ray probably didn’t want to confuse you by giving too much information at once, but actually the others do count in the sense that collisions increase the absorption by the GHG molecules, so it does matter that the other molecules are there. Your second point is just plain wrong. In equilibrium, energy in = energy out at the top of the atmosphere, and also energy in = energy out at the bottom. However, at the bottom of the atmosphere, the energy budget includes not only radiative terms (both solar and infrared), but also turbulent heat fluxes of latent and sensible heat. –raypierre]
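Raypierre’s layer argument is worth making numerical: probabilities of capture in successive slices do not add (which would exceed 1, as John noticed), survival probabilities multiply, so the total absorption climbs towards 1 without ever passing it. A sketch with arbitrary illustrative numbers for the per-layer absorption and layer counts:

```python
def total_absorption(per_layer, n_layers):
    """Fraction of photons absorbed after crossing n_layers thin
    layers, each absorbing the fraction per_layer of what reaches it.
    Survival probabilities multiply: absorbed = 1 - (1 - eps)**n."""
    return 1.0 - (1.0 - per_layer) ** n_layers

eps = 1e-6  # tiny absorption in any one thin slice
for n in (10**4, 10**6, 10**7):
    print(n, total_absorption(eps, n))
```

Naive addition would give eps * n = 10 for the last case, an impossible probability; the product form instead saturates just below 1, which is why a gas that is dilute in any single slice can still be opaque over the full depth of the atmosphere.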

Re John Millett @169: “yes, Jim, that’s me – requesting more time for scientists to sort out their widening set of differences before economists are let loose on the project.”

John, It seems quite clear that you are doing rather more than that in your submission, and that you are using a good many unsubstantiated, ill-informed, and just plain wrong assertions and arguments, many of them repeated in this very discussion here at RealClimate, all of them repeated elsewhere on RC.

– that it will be cheaper to adapt than to mitigate [with no mention of economic studies arguing the opposite, such as Stern’s]
– that mitigation [by reducing dependence on carbon-based fuels] will not work because climate change is caused by natural phenomena
– that ocean circulation (ENSO, PDO) drives climate change [never mind conservation of energy]
– that the greenhouse gas hypothesis itself can not be correct since the gases are ‘trace’ components of the atmosphere [the dilution argument]
– that climate sensitivity to burning fossil fuels is a contested hypothesis [never mind the well-established physics]
– that the hypothesis of anthropogenic forced climate change “can only be tested by the predictive reliability of computer models derived from [the hypothesis]”
– that the temperature record is flawed, biased, and inconsistent
– that temperatures in the 1930s were just as high or higher, supported by Australian temperature records and the flat out wrong US figures that you used here earlier

John, far from striving to understand the science of climate change, as you state in your opening paragraph, you are actively seeking out evidence–no matter how slim or how incorrect–to refute the science of climate change. I see no point in indulging you further.

Raypierre: I have no desire to outstay my welcome. Before signing off, I need to clarify a few things. I am not an activist (#175) seeking to refute the science of climate change (#173). Rather, I am a loner lacking competence to refute received wisdom (as evidenced here over recent days) but attracted to those who aren’t so constrained (#173). Coming to RC, the centre of the AGW universe, was meant as a counter to that innate tendency. The differences (#174) among the scientific community, as I read the situation, are about the relative climate sensitivities to natural and man-made behaviour, not about the fact of greenhouse warming. AGW is the front runner offering us the choice between catastrophe or mere disaster (#170). The peer-review process (#170) – the thing that did for Galileo but not his science – left competing hypotheses trailing the field. Will peer-review of the current cooling and experiments relating to the cosmic ray hypothesis bring AGW back to the field? Getting a better fix on relative climate sensitivities is highly desirable since it determines how well we allocate resources between adapting to climate change and mitigating it.

What have I learned?
That the proportion of LW radiation absorbed by GHGs may be 38% (CO2 in 6 easy steps); or 26% (CO2 only, #113); or 10% (CO2 only, #92); or approaching 100% (…any photon will have a very high likelihood of interacting rather than escaping the planet, #141). That only those wavelengths between ~13 and 17 microns will be absorbed by CO2; in that band almost all will be absorbed within a few meters (#116). This implies that other bands in LW radiation escape directly to space; and that CO2 in the atmosphere above this height is largely redundant, no?

Any offered reconciliation of these inconsistencies would be helpful and gratefully received.

There is one other point. I visualise the atmosphere as a thin spherical shell with a heat source at its centre and under the variable influence of the distant sun. What influence does the internal heat source have on the temperature of the inside surface of the shell and on climate?

I will explain the interesting point that I want to remark on with that graph, and which supports the increase of photosynthesis in the NH as the main cause for the discrepancies between our carbon emissions and their effects on the Mauna Loa CO2 concentration measurements.

In the linked graph, with cyan squares, you have how much the CO2 concentration dropped during the NH summer (approx June to October), for every year on record. Do you notice something special about it? Yes, that’s it. It shows absolutely no trend. How can it be? No trend means that, during the NH summer, the amount of carbon emissions minus the CO2 absorption by natural causes has remained stable since the 50’s. But we know that carbon emissions during the summer didn’t remain stable, did they? This necessarily means that natural processes have improved the earth’s capability to absorb CO2 during the summer. That’s OK, we had already agreed on that; the only difference is that I said it was because of photosynthesis, and you claimed it was because of ocean absorption.

Here is the data that gets us out of any doubt: in the same graph, in purple triangles, you have the amount of CO2 concentration increase that took place every year on record, in winter (approx October to June). Hey! That really shows a trend! An increasing trend, indeed. The trend it shows is the reason why there is a trend in how much the CO2 concentration increases every year. So what this data says is that, in winter, nature is not able to counter our increase in emissions. It does in the summer, and quite perfectly, but it doesn’t in winter.

What would you expect, if ocean acidification were the main cause for nature’s improved response? Well, you should see more CO2 uptake by the sea in winter, for 1 main reason: the sea is colder, and it is able to hold more CO2. Do we see that? The answer is no.

Does this mean that the ocean is not increasing the CO2 it absorbs? No, of course not; CO2 absorption by the oceans is taking place. It is part of nature’s response. But it is not the main response. The main response is improved photosynthesis in the NH because of increased temperatures.

Now, the final proof: look again at the cyan squares. Although there is no overall trend, if we look at shorter time frames, we can easily distinguish an increasing trend between 1970 and 1998, followed by a decreasing trend later on. Now, what was happening between 1970 and 1998 for the planet to be able to increase the amount of CO2 taken from the atmosphere in the summer, in spite of our ever-increasing carbon emissions? Right. The planet was warming. Photosynthesis was improving. What has happened AFTER 1998? Right. The planet is no longer warming. As a result, photosynthesis is not improving. Because carbon emissions continue to increase, the overall result is a reduction in the amount of CO2 taken from the atmosphere in the summer.

I’d like to correct myself (#178): the ocean is colder in the NH summer because most of the ocean’s surface is in the SH. Still, the oceans’ uptake of CO2 is not as seasonally driven as photosynthesis. If ocean acidification were the most important thing going on, one would expect both the observed increase of CO2 in winter and a smaller CO2 reduction in the summer. But the CO2 reduction in the summer is stable.

Nylo,
I refer you to the counsel of H. L. Mencken: “Explanations exist: they have existed for all times, for there is always an easy solution to every problem — neat, plausible and wrong.”

First off, we know that there is more plant growth in the summer, but you will notice that the summer reduction is not trending significantly upward, while the winter increase is definitely trending upward. I don’t see evidence supporting your hypothesis.

Re #180: That’s exactly my point. Although emissions are increasing, also in the summer, the CO2 reduction in the summer remains the same. There is only one possible explanation for that: IN THE NH SUMMER, photosynthesis is improving as much as our emissions are increasing. And of course it doesn’t happen in winter. In winter there is little photosynthesis to increase, because there is less land in the SH, and also because SH temperatures are not rising as much as NH temperatures. So this constant CO2 reduction in the summer, with a growing CO2 increase in winter, is consistent with overall increased emissions and increased photosynthesis happening almost only in the NH summer. It’s the fingerprint that points to photosynthesis as the cause of the reduced growth rate of CO2 concentration between the ’80s and the ’90s.

Re #181 Ron: I’m glad that you provided that link; I was looking for it. I would like to mention the abuse of the term “acidification”, which is a WRONG term. More alarming, but wrong. Any pH value over 7 is not acidic; it is alkaline. And the lowest (most “acidic”) value they found was 7.5, therefore alkaline. So we could only say that the sea is turning, regionally, less alkaline. If the trend continued and the pH fell below 7, then it would begin to “acidify”.

Also, regarding that article, it has to be absolutely false and profoundly misleading to claim that “the scientists found regions where the water was acidic enough to dissolve the shells and skeletons of clams”. First, because the water was NOT acidic. And as long as the water is ALKALINE, any “acid” you add will counter the alkaline substances already dissolved in the water, making the water less alkaline, but without affecting the shells within.

You can tell me that clams depend on an alkaline sea to grow better, and therefore if the sea gets less alkaline, they grow less. Fine, I could buy that. But you cannot persuade me that an acid ocean is dissolving them. No way. For the sea to start to dissolve the shells and skeletons of clams, it would first need a pH below 7.

[Response: please use &lt; (with no spaces) for a < symbol. Plus your logic is flawed. Think of the analogy with warming or cooling – even if it is -40 deg, an increase of a couple of degrees is still warming, even though that temperature is not considered ‘warm’. Acidification (or de-alkalinisation) is a statement about the direction of change, not the state. And there are plenty of places in the ocean where carbonate dissolves even without a pH < 7 (look up lysocline). – gavin]
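The distinction gavin draws, that acidification names the direction of change rather than the state, can be illustrated with a trivial sketch (the numbers below are made up for illustration):

```python
def direction(before, after):
    """Classify the direction of a change, independent of the absolute state."""
    return "increasing" if after > before else "decreasing"

# Gavin's analogy: going from -40 degC to -38 degC is still warming,
# even though -38 degC is not "warm"...
print(direction(-40, -38))   # increasing

# ...and pH falling from 8.1 to 7.5 is still acidification,
# even though 7.5 is on the alkaline side of 7.
print(direction(8.1, 7.5))   # decreasing
```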

Nylo, what astonishes me is that you are accusing the scientists quoted in the article of incompetence or dishonesty. Are you genuinely qualified to make such a judgement? If not, then perhaps you should follow Hank Roberts’s suggestion and try to find out why the scientists are concerned.

Nylo, think about this. The term “denialist” comes up on this site, usually in a rather pejorative sense. But I think it is often more accurate to use the term for someone who is “in denial,” rather than someone who is intentionally trying to be damaging.

I knew a vibrant young woman who was diagnosed with childhood diabetes as a young girl. She craved sweets and carbs (as do all diabetics) and she could not see that they were doing any particular harm. She felt fine, so simply refused to listen to her doctor and follow her diet. Total denial. She would not accept that what she was doing was setting in motion terrible long-term damage to her body. She died in a nursing home last month, at age 36, blind, on dialysis, unable to walk. Long before she died, she realized she had been terribly wrong. But it was too late. Lethal damage was already in the pipeline.

AGW is very much like this, but on a longer timescale. If you remain in denial, by the time there is overwhelming evidence that even you can no longer deny, it may be too late to avoid extremely serious, even catastrophic consequences. Remember, temperatures will continue to increase for decades after we put a cap on emissions.

Stop grasping at any straw to argue for inaction. Study the problem, get informed, think. And if you refuse to trust the climate scientists, then so be it. But you will be taking the same approach to life as the young diabetic who refused to trust her doctors.

Re #184 Hank: Only the summary of that article is available to me. Anyway, I notice that they claim acidification will be a problem by 2050, not earlier. Then there is the report that was linked before, about scientists finding acidified water right now, “ahead of time”. What I am missing in that report is: 1) did they only go to that specific location, or did they take measurements everywhere? 2) What were the results of measurements in other parts of the ocean? Did they find an excess of alkalinity anywhere? 3) Do we have a history of the acidity of that specific part of the ocean that they claim to be unusually acidic? 4) Was it an upwelling part of the ocean, with cold water coming from below and getting warm and, as a result, having a temporary excess of CO2? 5) Why haven’t they published a peer-reviewed paper?

As for the article you link, I would like to know: 1) How much was the “notable dissolution” experienced by the pteropods? 2) At which level of acidity? 3) We know what happened after two days, but what happened to them in the longer term, say, one month? I guess that if you reference that article, it is because you read it and can answer those questions.

Oh, I forgot: do we have any prehistoric data about acidification of the oceans? If this acidification is real, is it unprecedented? If it is not, did pteropods survive the last acidification? I’m just guessing that during any ice age, the CO2 content of the sea must have been considerably higher, and this means it got more acidic, am I right?

“each time starts again close to the observed climate, because it is initialised with observed sea surface temperatures. So by construction it cannot get too far away, in contrast to the “free” black scenario.”

If the Keenlyside model is used without this re-initialisation, what would it come up with? Would the coupling effect send it off course? And would the course shift up or down between 1960 and 1990?

#32 – “Since such radiation is the only way energy escapes the climate system, that has to heat things up.”

Not completely true. The energy does NOT all have to radiate from the surface. When water condenses into clouds at altitude, it releases the energy that evaporated it at the surface. Therefore, the energy that evaporated the water at the surface was not simply radiated back up: whatever went into evaporation was first physically transported up several thousand feet and THEN radiated. The energy that goes into the latent heat of evaporation is thus carried up through, unimpeded by, any concentration of CO2 below the altitude of cloud formation (where most of it exists). It is my understanding that IPCC models do not account for that mode of heat transport.

[Response: Your understanding is wrong. You must have gotten it from some really rotten source. IPCC models, indeed even the simple radiative-convective models going back to Manabe’s work in the 1960s, account for, and indeed rely on, this mode of heat transport. –raypierre]

Even Keenlyside et al.’s green line shows a global (or just North Atlantic? That seems a bit unclear) warming of around 0.8 degrees C in the next fifteen years or so (until 2025). So why are they talking about global cooling?

Re #13, “media in comfortable lockstep with AGW proponents”: as far as I read the media, that’s pure nonsense. They give the impression that there exists a great scientific debate between the IPCC and the so-called “climate sceptics” (a silly expression, since being “sceptical of the climate” has no more meaning than being sceptical of the sun; better expressions would be “global warming deniers” for some and “greenhouse warming sceptics” for others, even if the two groups are normally almost identical). This media impression is pure propaganda, and some of it has been proven to be paid for by, e.g., ExxonMobil. The main reason for the global warming denial/ridicule industry (as I find it reasonable to call most of the media coverage) is of course not bribery but rather a weak spot in human nature, which led, for example, the Guatemalan president in 1902 to proclaim: “There are no active volcanoes in Guatemala” – right in the middle of a big eruption, even while a city was being destroyed by it! Some human beings will unfortunately do whatever it takes to deny for themselves the parts of reality they don’t like, often because they believe this reality prohibits their business and/or leisure activities, their so-called beliefs, etc.

I want to put it on the record that I agree with John Millett, albeit only on one tiny point:

John says: “I am a loner lacking competence to refute received wisdom (as evidenced here over recent days) but attracted to those who aren’t so constrained.”

However I would like to see John put a bit more effort into distinguishing between “those who aren’t so constrained” and those who, as it turns out, are. It would save everyone contributing to and (gratefully, like me) using this site a lot of time and effort.

I have not posted on RealClimate before, so I thought I could add something to this thread to “get my feet wet” without causing needless disruption if I haven’t quite got the hang of posting here. Let me add some comments on one of the main subthreads.

John Millett (#90 et seq.) demonstrates a common difficulty with problems involving conditional probability: the inability to formulate the problem correctly as a sequence of conditional events rather than a single event.

Millett (mis)formulates the probability of an IR photon radiating into space as the probability of “getting past” a *single* molecule, despite talking of an air column, as in #132:

“#117 Ray, what you didn’t tell me is that as well as the 59 trillion photon-gobbling CO2 molecules in the air column there are 98 quadrillion photon-friendly molecules of N2 and O2. The probability of the photon escaping to space is 0.9994, no?”

The proper way to formulate the problem is inductive: the probability of “getting past” the first N molecules in the air column is the probability of getting past the first N-1 molecules, multiplied by the probability of not being absorbed by the Nth molecule given that the photon was not absorbed by the previous N-1. For these sequential and (assumed) independent events, this becomes the *product* of the probabilities of not being absorbed by each of the N molecules individually.

Now the probability of not being absorbed by a non-GHG molecule is, by definition, unity, and if the probability of not being absorbed by a GHG molecule is (1-x), then the conditional probability of not being absorbed by the first N molecules is (1-x)^n, where “n” is the number of GHG molecules among the N. Since x is non-zero by definition, letting n increase to 59 trillion drives the escape probability to essentially zero.

The non-GHG molecules (N2, O2, etc.) have no effect on this conditional probability product, since for these molecules we have x=0. This is clear from the product formulation, a point that Millett has not yet understood.
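A short sketch makes the product argument concrete. The per-molecule absorption probability x below is an assumed illustrative value, not a physical cross-section:

```python
import math

def escape_probability(x, n_ghg):
    """Probability of passing n_ghg independent GHG molecules, each of
    which absorbs the photon with probability x. Non-GHG molecules
    (N2, O2) each contribute a factor of (1 - 0) = 1 and drop out."""
    # Compute (1 - x)**n_ghg in log space to avoid underflow for huge n.
    return math.exp(n_ghg * math.log1p(-x))

# Sanity check: three molecules, each passed with probability 0.5.
print(escape_probability(0.5, 3))        # 0.125

# Illustrative column: an assumed x of 1e-12 and 59 trillion GHG molecules.
print(escape_probability(1e-12, 59e12))  # ~2.4e-26, essentially zero
```

Adding the 98 quadrillion N2 and O2 molecules multiplies the product by 1 that many times and changes nothing, which is exactly the point the single-molecule formulation misses.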

The Sun is very quiet indeed at present — no sunspots. This has been the case for quite a while. In the past, such low solar activity has been associated with cold spells on Earth, like the Little Ice Age of the 17th century. It could be that (i) the greenhouse effect is stopping the Earth cooling now and (ii) the solar effect is partly canceling the greenhouse effect which is why global warming has slowed. The implication is that when “normal service” resumes on the Sun the world will get hotter, quicker.

John Gribbin,
The question is this: What is the mechanism by which a quiet sun cools Earth. If total solar irradiance has not changed, if the globe is not appreciably cloudier, then what possible mechanism could there be? Certainly, we cannot judge by the past year–a very deep La Nina year. What is more, the current solar cycle is not grossly out of the normal range of variability just yet.
Have you seen Tamino’s analysis?

I agree there are (seeming) correlations between grand solar minima and cooling (Usoskin 2007), and that such cooling lasts at most a few decades. So if we do enter a cool spell, it provides at best, temporary relief.