A significant part of the Met Office Report concerns what it refers to as a recent comprehensive study, being the 2013 Nature Geoscience paper “Energy budget constraints on climate response” (Otto et al). The Report contains several misrepresentations of the findings of the Otto et al study as well as a number of other misleading or erroneous statements.

The author team of Otto et al includes fourteen lead or coordinating lead authors of the forthcoming IPCC scientific report (AR5 WG1 report). The author of this memorandum, Nicholas Lewis, whilst being one of the authors of the Otto et al 2013 study, writes in a personal capacity, not as a representative of its author team.

The Met Office Report discusses in some detail estimates of transient climate response (TCR) and equilibrium/effective climate sensitivity (ECS). TCR, a measure of the rise in global surface temperature at the end of a 70-year period over which atmospheric CO2-equivalent concentrations grow at 1% per annum (and hence double), is closely linked to projections of human-caused warming several decades into the future. ECS measures the eventual surface temperature increase from such a doubling once ocean temperatures have fully adjusted. ECS largely determines TCR.
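As a quick arithmetic aside (an illustration, not part of the memorandum): the 70-year period in the TCR definition follows directly from the 1% per annum growth rate.

```python
import math

# Check the arithmetic behind the TCR definition: 1% per annum compound
# growth in CO2-equivalent concentration doubles it in roughly 70 years.
growth_factor = 1.01 ** 70
doubling_time = math.log(2) / math.log(1.01)

print(growth_factor)   # ~2.007: concentrations slightly more than double
print(doubling_time)   # ~69.7 years
```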

The Met Office Report considers whether climate model estimates of TCR and ECS need to be revised in the light of recent observational evidence, in particular the relatively slow increase in global surface temperature over the last 15 to 20 years. It concludes that, broadly, such estimates – and hence climate model projections of future warming – do not need to be reduced.

One of the Met Office Report’s main conclusions is that “the upper ranges of TCR and ECS derived from extended observational records … are broadly consistent with the upper range from the latest generation of comprehensive climate models” (CMIP5 models). This is contradicted by Otto et al, which stated “Our results match those of other observation-based studies and suggest that the TCRs of some of the models in the CMIP5 ensemble with the strongest climate response to increases in atmospheric CO2 levels may be inconsistent with recent observations”. Barely half of the CMIP5 models analysed (Forster et al. 2013) have TCRs below the 95% upper bound for TCR of 2.0°C given in Otto et al, using observational data for the latest decade.

These issues are of particular relevance to the Met Office. Both the TCR and ECS of its flagship HadGEM2-ES model, used for policy advice, are very near the top of the range for CMIP5 models (Forster et al. 2013). Its TCR exceeds the 95% bound derived from CMIP5 models other than HadGEM2-ES. The HadGEM2-ES model’s TCR also exceeds the 95% bound derived from previous generation (CMIP3) models. The Met Office HadGEM2-ES model’s TCR of 2.5°C is not only well above the upper 95% bound of 2.0°C given in Otto et al, but also above the 2.3°C bound given in Gillett et al 2013 – the only other study cited in the Met Office Report that derived TCR from observational records. Indeed, the HadGEM2-ES TCR is nearly double the Otto et al best estimate for TCR of 1.3°C. As for ECS, the HadGEM2-ES model’s ECS of 4.6°C lies well beyond the upper 95% bounds given not only by Otto et al but also by Aldrin et al 2012 (3.5°C) and Lewis 2013 (3.0°C). HadGEM2’s ECS also exceeds the 4.5°C top of the IPCC’s ‘likely’ range, and the 95% bounds both from CMIP3 models and from CMIP5 models excluding HadGEM2-ES (Murphy et al. 2009).

The TCR of the Met Office’s previous generation climate model, HadCM3, is at the 95% upper bound for TCR of 2.0°C given in Otto et al. A perturbed physics ensemble study based on adjusting the parameters in HadCM3, now set out in two major published papers (Sexton et al 2012 and Harris et al 2013), represents the techniques and climate model on which the official UK Climate Projections (Murphy et al. 2009) produced by the Met Office were based. HadCM3’s high TCR might not matter if, as the Report claims, “uncertainty in the response of the climate system to CO2 forcing is comprehensively sampled” in the study. However, it is not. Despite thorough attempts to make it do so by varying its parameters, HadCM3 is unable to sample the region where, according to several recent observational studies, key characteristics of the real climate system are most probably located. It is therefore unsurprising that incorporating observational data barely alters the Harris et al 2013 prior central estimate for TCR.

Misrepresentations relating to transient climate response (TCR)

The Met Office Report refers to three methods of estimating TCR: from simulations made with climate models, from observations, and by combining climate model and observationally-derived values. It makes the contentious claim that none of these methods can be said to be superior to the others. In science, it is standard to test the validity of theoretical models by comparing their predictions to observational data. Accordingly, it seems possible to say that estimates derived purely from simulations by climate models, without combination with observationally-derived values, are likely to be inferior to those from the other two methods.

Figure 1 and Table 1

Figure 1 in the Report, which gives the Otto et al TCR 5–95% range as 0.7 to 2.5°C, most likely value 1.4°C, based on 1970–2009 data, is objectionable on a number of grounds:

1. Figure 1 does not use the Otto et al primary TCR best estimate of 1.3°C and 5–95% range of 0.9 to 2.0°C, based on data for the decade 2000–09. Although caution is required in interpreting results for any short period, arguably – as stated in Otto et al – the estimate based on the most recent decade’s data is the most reliable since it has the strongest forcing and is much less affected by the 1991 eruption of Mount Pinatubo. Accordingly, showing in Figure 1 only the (wider) TCR estimated range based on 1970–2009 data for Otto et al for comparison with other estimates is misleading.

2. Classifying the Harris et al 2013 HadCM3-perturbed-physics-ensemble derived TCR estimates as based on model and observations gives a misleading impression of the relative influence of those two factors. Although highly sophisticated, the study would more appropriately be classified as primarily model based. As HadCM3’s parameters are perturbed, the resulting changes in ECS and aerosol forcing are closely linked. When significantly lower values for ECS – as suggested by recent observational studies – are obtained, HadCM3’s aerosol forcing takes on highly negative values. The observational data strongly contraindicate aerosol forcing being highly negative, so parameter combinations resulting in significantly reduced model ECS levels (and thus highly negative aerosol forcing) are heavily down-weighted. As a result, whatever the actual level of ECS, HadCM3-derived ECS estimates are bound to be high. See Box 1 [refer to full document Lewis UKMO]. The HadCM3-derived TCR estimates are very largely determined by its ECS estimates, so the same is true for TCR. Since the Harris et al 2013 results largely reflect the particular characteristics of the HadCM3 model, and are at variance with results from several recent fully observationally-constrained studies, no reliance should be placed upon them.

3. The central TCR estimate of 1.6°C given for Gillett et al 2013 is a mean, and is not comparable to the median estimates quoted for Otto et al and Harris et al 2013. Since TCR and ECS probability distributions are usually skewed, the median – the 50th percentile of the distribution – is a much preferable central estimate for TCR and ECS than the mean (the probability-weighted average value). Gillett et al 2013 used a standard detection-and-attribution method, giving regression coefficients which can then be multiplied by the model TCRs to produce observationally-based TCR estimates. The median TCR estimate based on individual model regression coefficients was 1.46°C.

4. The central TCR estimates of 1.8°C given for the multi-model CMIP3 and CMIP5 ensembles are also means rather than medians. The median TCR for the set of CMIP3 models referenced is lower, at 1.6°C. (The mean–median difference is negligible for the set of CMIP5 models.)
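The mean-versus-median distinction in points 3 and 4 matters because TCR and ECS distributions are right-skewed. A minimal sketch with a plain lognormal distribution (the parameter values are illustrative only, not drawn from any of the studies):

```python
import math

# For a lognormal distribution X = exp(N(mu, sigma)):
#   median = exp(mu)
#   mean   = exp(mu + sigma**2 / 2), always above the median when sigma > 0
mu, sigma = math.log(1.6), 0.35   # illustrative values only

median = math.exp(mu)
mean = math.exp(mu + sigma ** 2 / 2)

print(median)  # 1.6
print(mean)    # ~1.70: the right tail pulls the mean above the median
```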

Several of the TCR estimates given for Otto et al in Table 1 are wrong. The central TCR estimate based on the 2000s observational period is given as 1.4°C. It was actually 1.3°C. Several of the other TCR percentiles given are also wrong.

A revised version of Figure 1 in the Met Office Report showing the impact of the revisions discussed above, and a better idea of the distribution of probability, is shown in Figure 1.

Figure 1: Estimates of TCR from the same sources as in Figure 1 of the Met Office Report, with estimates based on 2000-2009 data added for Otto et al.

The flasks span the 5–95 % uncertainty ranges for the estimates from each source; the black horizontal lines mark their 50th percentiles (medians). The width of the flask shows probability density, in each case for the shifted lognormal distribution corresponding to the applicable 5th, 50th and 95th percentiles. Since flasks from all sources therefore have equal area, probability is proportional to area between as well as within flasks. Short white horizontal bars show the central estimate for each source given in Figure 1 of the Met Office Report. The red bar in the CMIP5 column shows the TCR of the Met Office HadGEM2-ES model. The colours denote type of source, with blue showing estimates based on observations, purple showing estimates based primarily on model but partly on observations, and salmon for model-only estimates.
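For readers wishing to reproduce the flasks: a shifted lognormal is fully determined by three percentiles, and the shift has a closed form because the percentile offsets form a geometric progression. This is one way to perform the fit, and may not match the memorandum's own code exactly; the percentile inputs are the Otto et al 2000s TCR figures quoted earlier.

```python
import math

def fit_shifted_lognormal(q5, q50, q95):
    """Fit X = shift + exp(N(mu, sigma)) to given 5th/50th/95th percentiles.

    Uses the fact that for a shifted lognormal the percentile offsets form
    a geometric progression: (q50 - s)**2 = (q5 - s) * (q95 - s).
    """
    z95 = 1.6449  # standard-normal 95th percentile
    shift = (q50 ** 2 - q5 * q95) / (2 * q50 - q5 - q95)
    mu = math.log(q50 - shift)
    sigma = math.log((q95 - shift) / (q50 - shift)) / z95
    return shift, mu, sigma

# Otto et al TCR percentiles based on 2000-09 data: 0.9, 1.3, 2.0 degC
s, mu, sigma = fit_shifted_lognormal(0.9, 1.3, 2.0)

# The fitted distribution reproduces the 5th percentile exactly:
q5_check = s + math.exp(mu - 1.6449 * sigma)
print(round(q5_check, 3))  # 0.9
```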

Physical considerations

The Met Office Report states that “To reach the very low values [for TCR] quoted in Otto et al. (2013) would require negative feedbacks to be acting quite strongly to counteract the well understood physics of greenhouse gas radiative forcing, water vapour feedback and surface albedo feedback”. That is misleading. The strong negative lapse rate feedback is very closely linked to the water vapour feedback (they are sometimes combined into a single feedback) and has a similar level of understanding. Therefore, it should be included along with greenhouse gas radiative forcing, water vapour feedback and surface albedo feedback.

A multi-model study of feedbacks, Soden and Held 2006, showed a median ECS for the model ensemble of 1.8°C after including the combined water vapour/lapse rate and surface albedo feedbacks. The median 1.3°C and 1.4°C estimates for TCR in Otto et al both correspond to ECS estimates above 1.8°C, so are consistent with the Soden and Held 2006 findings without requiring any additional negative feedbacks to be acting. Moreover, real-world water vapour feedback may not be as strong as in typical climate model simulations. Although the basic physics of these feedbacks may be well understood, there remains substantial uncertainty as to their magnitudes. Furthermore, cloud feedbacks are highly uncertain.
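The role of the lapse rate term can be made concrete with back-of-envelope feedback arithmetic. The feedback values below are round numbers of the magnitudes reported in Soden and Held 2006, used purely for illustration:

```python
# All feedback values in W m-2 K-1; round numbers of the magnitudes
# reported in Soden and Held 2006, for illustration only.
F2x = 3.7           # forcing from a doubling of CO2, W m-2
planck = 3.2        # no-feedback (Planck) radiative response
water_vapour = 1.8
lapse_rate = -0.84  # negative: partially offsets water vapour
albedo = 0.26

ecs_wv = F2x / (planck - water_vapour)                         # WV only
ecs_wv_lr = F2x / (planck - water_vapour - lapse_rate)         # + lapse rate
ecs_all = F2x / (planck - water_vapour - lapse_rate - albedo)  # + albedo

print(round(ecs_wv, 2))     # ~2.64: above 2 degC, as the Report claims
print(round(ecs_wv_lr, 2))  # ~1.65: below 2 degC once lapse rate is included
print(round(ecs_all, 2))    # ~1.87: close to the Soden and Held median
```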

Estimated warming at 2100

The Met Office Report miscalculates estimated warming at 2100 under the RCP8.5 scenario for all the studies shown, mainly by using inconsistent bases in the calculation. Figure 2 of the Report also contains an additional misrepresentation of Otto et al’s results. Both issues can be illustrated using the values for Otto et al. The 5–95% range given in Figure 2 of the Report for Otto et al, of 1.7–6.2°C with a median value of 3.5°C, is evidently based on multiplying the study’s TCR estimates based on 1970–2009 data by 8.5 Wm-2 and dividing by 3.44 Wm-2, the CMIP5 mean value in Forster et al 2013 of F2x (the radiative forcing from a doubling of CO2 concentration), as used in Otto et al. The RCP8.5 scenario was so named because the resulting forcing approaches 8.5 Wm-2 in 2100. However, that was based on a different, higher, F2x basis. In fact, the RCP8.5 indicative forcings dataset (Meinshausen et al, 2011) gives a level of 8.34 Wm-2 in 2100, not 8.5 Wm-2, and it is based on a canonical F2x estimate of 3.71 Wm-2, not 3.44 Wm-2. So the RCP8.5 forcing in 2100 should be divided by 3.71 Wm-2, not 3.44 Wm-2, in order to estimate warming in 2100. Substituting that divisor in the calculation, and taking the indicative RCP8.5 mean forcing over 2095–2105 to even out the solar cycle, reduces the 3.5°C median 2100 warming estimate from Otto et al using 1970–2009 data to 3.1°C.
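The correction can be checked with a few lines of arithmetic. This is a sketch using the forcing values quoted above; the memorandum's 3.1°C figure additionally averages forcing over 2095–2105, slightly below the 2100 value used here.

```python
# Back out the median TCR implied by the Report's 3.5 degC figure,
# then rescale it on a consistent F2x basis.
f2x_cmip5 = 3.44   # W m-2, CMIP5 mean F2x (Forster et al 2013)
f2x_rcp = 3.71     # W m-2, canonical F2x underlying the RCP8.5 forcings
f_2100 = 8.34      # W m-2, RCP8.5 forcing at 2100 (Meinshausen et al 2011)

tcr_median = 3.5 * f2x_cmip5 / 8.5              # ~1.42 degC (1970-2009 basis)
warming_report = tcr_median * 8.5 / f2x_cmip5   # reproduces the Report's 3.5 degC
warming_fixed = tcr_median * f_2100 / f2x_rcp   # consistent basis: ~3.2 degC

print(round(warming_report, 2))
print(round(warming_fixed, 2))
```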

Moreover, the Met Office Report does not reveal that the warming at 2100 shown for Otto et al is based on its TCR estimates using data for 1970–2009 rather than its primary, arguably more reliable, TCR estimates based on data for the 2000s. Using those estimates instead, the median 2100 warming estimate from Otto et al under the RCP8.5 scenario becomes 2.9°C, with a 5–95% range of 2.0–4.4°C. Both the median estimate and the top of the 95% range are substantially lower than the figures of respectively 3.5°C and 6.2°C given in Figure 2 of the Report. Note that the statement in the caption thereto that medians are marked is incorrect for 3 out of 5 sources; it is means that are marked, except for Otto et al and Harris et al 2013.

Values are calculated as in Figure 2 of the Met Office Report but on a corrected basis and using the TCR values from Figure 1 above rather than from Figure 1 of the Report. The flasks span the 5–95 % uncertainty ranges for the estimates from each source; the black horizontal lines mark their 50th percentiles (medians). The width of the flask shows probability density, in each case for the shifted lognormal distribution corresponding to the applicable 5th, 50th and 95th percentiles. Since flasks from all sources therefore have equal area, probability is proportional to area between as well as within flasks. Short white horizontal bars show the central estimate for each source given in Figure 2 of the Report. The red bar in the CMIP5 column shows the estimated warming at 2100 for the Met Office HadGEM2-ES model. The colours denote type of source, with blue showing estimates based on observations, purple showing estimates based primarily on model but partly on observations, and salmon for model-only estimates.

The use of inconsistent F2x bases and overstatement of RCP8.5 forcing at 2100 does not just affect the Otto et al study. All the estimated 2100 warming figures given for Gillett et al 2013, Harris et al 2013, CMIP3 and CMIP5 similarly need to be reduced by 10% in order to correct these errors. A revised version of Figure 2 in the Met Office Report showing the impact of these corrections is shown in Figure 2. As noted in the Report, RCP8.5 is the highest of all the RCP scenarios, which should be borne in mind when considering the projected warming to 2100 levels in Figure 2.

The uncertainty ranges shown both in the Met Office Report Figure 2 and Figure 2 here are in any case exaggerated, since 25–30% of the decadal-mean radiative forcing projected under RCP8.5 for 2100 relative to pre-industrial levels has already occurred. Moreover, according to observational records, by 2012 the global mean temperature had increased by about 0.8°C. That is in line, scaling pro rata, with 25–30% of the medians for the Otto et al and Gillett et al projected increases to 2100. But the non-observational studies’ central warming projections need to be adjusted down to reflect their overestimates of the temperature increase to date.

Misrepresentations relating to equilibrium climate sensitivity (ECS)

Estimation of equilibrium climate sensitivity is dealt with in Section 4 of the Met Office Report. It repeats the misleading claim that “Positive feedbacks in the physical climate system, the largest of which is the water-vapour feedback, increase this number [ECS] to over 2°C”, and compounds this distortion by stating that “the fundamental physics of climate sensitivity, involving black body radiation and water vapour feedbacks … alone give a climate sensitivity of at least 2.0°C”. As already pointed out, these claims ignore the negative lapse-rate feedback, which is intimately linked to the water-vapour feedback. After including lapse-rate feedback as well as water-vapour feedback, all the models analysed in Soden and Held 2006 had a climate sensitivity of below 2.0°C.

The Met Office Report also claims, concerning the Otto et al ECS estimate based on data for the 2000s, that “As for TCR the observations of most recent decade are not representative of the full observational record and so not expected to be representative of the longer term future.” That claim is not valid. Decadal-scale internal variability of global temperature was allowed for in Otto et al. Moreover, the slower increase in surface temperatures during the 2000s was, as one would expect, associated with a higher estimated rate of ocean heat uptake, which increases the ECS estimate. The Otto et al ECS estimates based on data for each of the 1980s, the 1990s and the 2000s, and for 1970-2009, measured in each case relative to estimates for 1860–79, were all very similar. Indeed, the Otto et al ECS estimate based on 2000–09 data was actually the highest of those four ECS estimates, as a result of the top-of-the-range ocean heat uptake estimate used.
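The point about ocean heat uptake follows directly from the energy-budget relation used in Otto et al, ECS = F2x·ΔT/(ΔF − ΔQ). The numbers plugged in below are illustrative only, not the study's actual inputs:

```python
def ecs_energy_budget(dT, dF, dQ, F2x=3.44):
    """Energy-budget ECS estimate of the form used in Otto et al:
    ECS = F2x * dT / (dF - dQ), with dQ the system heat uptake rate."""
    return F2x * dT / (dF - dQ)

# Same warming and forcing, two heat-uptake estimates (illustrative numbers):
low_uptake = ecs_energy_budget(dT=0.75, dF=1.95, dQ=0.40)
high_uptake = ecs_energy_budget(dT=0.75, dF=1.95, dQ=0.65)

print(round(low_uptake, 2))   # ~1.66
print(round(high_uptake, 2))  # ~1.98: higher uptake -> higher ECS estimate
```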

Figure 5 of the Met Office Report only gives ECS 5-95% bounds for Otto et al and CMIP3 and CMIP5 model ensembles. It is surprising that no other recent peer-reviewed observationally-based estimated ranges for ECS were shown. Examples are Aldrin et al 2012, Lewis 2013 and Masters 2013.

It is questionable whether valid estimates of probabilistic ranges can be constructed from an ensemble of simulations by different climate models. But even assuming so, Figure 5 of the Report does not show, as stated, the 5–95% ECS range for CMIP5 models given by Forster et al 2013. That range is 1.9–4.5°C, not the 2.1–4.6°C shown. Moreover, the CMIP3 and CMIP5 central estimates are means, whereas those for Otto et al are medians.

A revised version of Figure 5 in the Met Office Report that does include several recent observationally-based studies, and corrects the CMIP5 ECS 5-95% range, is shown in Figure 3. All central estimates are medians to provide consistency. The range from palaeoclimate estimates (1.2–5.2°C) has been omitted since it is not on a comparable basis to the remaining ranges and palaeoclimate estimates involve great uncertainties.

It is evident from Figure 3 that a variety of recent observationally-based studies all produce median estimates for ECS that are very substantially below the CMIP3 and CMIP5 model-based estimates, indicating that ECS in the more sensitive CMIP3 and CMIP5 models, at least, is likely to be out of line with reality.

Figure 3: Estimates of ECS from the same sources as in Figure 5 of the Met Office Report, with those from Aldrin et al 2012, Lewis 2013 and Masters 2013 added and the Palaeo range omitted.

The flasks span the 5–95 % uncertainty ranges for the estimates from each source; the black horizontal lines mark their 50th percentiles (medians). The width of the flask shows probability density, in each case for the shifted lognormal distribution corresponding to the applicable 5th, 50th and 95th percentiles. Since flasks from all sources therefore have equal area, probability is proportional to area between as well as within flasks. Short white horizontal bars show the central estimate for each source given in Figure 5 of the Report. The red bar in the CMIP5 column shows the ECS of the Met Office HadGEM2-ES model. The colours denote type of source, with blue showing estimates based on observations and salmon for model-only estimates.

Conclusions

The Met Office Report contained a number of misrepresentations of Otto et al 2013, misleading statements and outright mistakes. Correcting these and providing a fairer analysis reveals that, although some of the recent observationally-based estimates of TCR and ECS considered have upper bounds to their uncertainty ranges that are comparable to those for model-based estimates, they all give lower best estimates than model-based best estimates. Observationally-based median estimates for TCR and ECS are often comparable to the bottom of model-based uncertainty ranges.

The statement in the Met Office Report that “the upper ranges of TCR and ECS derived from extended observational records … are broadly consistent with the upper range from the latest generation of comprehensive climate models” has been shown to be questionable. The TCR and ECS characteristics of the Met Office flagship HadGEM2-ES model are of particular interest in this regard. None of the 95% bounds for TCR estimates from the observationally-based sources cited in the Report exceed the TCR of HadGEM2-ES. And HadGEM2-ES has an ECS that exceeds not only the 95% bound from Otto et al but also that from two other recent observationally-based studies. Moreover, both the TCR and the ECS of HadGEM2-ES exceed the 95% bounds derived not only from CMIP3 models but also from CMIP5 models other than HadGEM2-ES.

It has also been shown that estimates based on perturbing parameters in the Met Office HadCM3 model, as exemplified by Harris et al 2013, do not sample combinations of ECS and aerosol forcing located in the region most supported by several recent observationally-based studies, and so cannot be regarded as properly reflecting observational evidence. That is of concern since these same techniques, using the HadCM3 model, represent the basis on which the official UK Climate Projections were constructed.

JC note: The complete document with footnotes and references can be downloaded here [Lewis UKMO], including Box 1.

583 responses to “Nic Lewis on the UK Met Office on the pause”

Three cheers for climate auditors.
Thanks Nic for detailed quantitative rebuttals, corrections & clarifications. The evidence Nic cites is making its way into the IPCC AR5 and the public media, e.g. Matt Ridley in the WSJ, “Dialing Back the Alarm on Climate Change”:

A forthcoming report lowers estimates on global warming . . . leaks from this 31-page document . . . that “equilibrium climate sensitivity” (ECS)—eventual warming induced by a doubling of carbon dioxide in the atmosphere, which takes hundreds of years to occur—is “extremely likely” to be above 1 degree Celsius (1.8 degrees Fahrenheit), “likely” to be above 1.5 degrees Celsius (2.4 degrees Fahrenheit) and “very likely” to be below 6 degrees Celsius (10.8 Fahrenheit). In 2007, the IPCC said it was “likely” to be above 2 degrees Celsius and “very likely” to be above 1.5 degrees, with no upper limit. . . .
Most experts believe that warming of less than 2 degrees Celsius from preindustrial levels will result in no net economic and ecological damage. . . .The most plausible explanation of the pause is simply that climate sensitivity was overestimated in the models because of faulty assumptions about net amplification through water-vapor feedback.

I agree that there is value in Nic’s work – but it misses the main point: Since we do not know what natural factors were involved, we cannot use the observed temperature rise over that absurdly short period to estimate climate sensitivity to CO2.

Omanuel – I am not sure this from the NAS is really a movement away from AGW as much as it is an excuse to compensate for what they believe to be a rock solid absolutely certain effect of CO2 increase which temporarily has been nullified by the effect of humanity’s production of aerosols.
They are then certain that in the near future the effect of the aerosols will not increase but that of the CO2 will increase, at which time we will go back to the warming. That is why it is referred to as a (temporary) pause and not a stop in warming.
I am learning as much about human nature as I am about science.

The Met office report is a political document. Your arguments, while technically relevant and clear, seem to me orthogonal to the issues the Met document is trying to address; to wit, lower empirical estimates show that the CMIP ensemble, and the Met GCM in particular, are essentially without value for projecting future warming. This is a political problem for the Met (and the UK government), because they are absolutely committed to mandated draconian reductions in fossil fuel use in the UK, justified by projections of extreme future warming. Any reduction in the consensus best estimates of transient and equilibrium responses could undermine political support for those reductions in fossil fuels, and add to political support for extensive use of fracking technology to recover large reserves of natural gas in the UK.

The goal of the report seems to me only to cloud the water with very questionable claims which will delay public perception of a new scientific consensus on lower sensitivity; they simply want to delay changes in public opinion for as long as possible. While I think this sort of thing being done by scientists is reprehensible, it is certainly not unexpected for scientists with one foot in science and the other in political advocacy. I think you can count on several other climate science/political advocacy organizations doing much the same as the Met.

Come on Willard. Nic Lewis simply stated the facts of a study in which he was involved. Then he pointed to the fact that the findings of that study were misrepresented. Many of us are trying to be objective, but your statement goes beyond the 95% probable boundary of incredulity.
It honestly makes other statements you make less believable, and I am sure you do not intend to do that.

Willard, lots of people go on to have normal lives after they leave a cult;
a bit of deprogramming and you will be fine. However, the starting point is that you have to accept reality to be true and not some ‘mumbo-jumbo’ as the fundamental reality. The first step is the most important: admit you have a problem reconciling your belief system with reality, then all will follow. http://www.intervention101.com/

Willard: If Nic Lewis’s critique is correct and complete, then the MET made a number of errors and misrepresentations in its report, all tending in the direction of making the situation appear more alarming AND supporting the validity of their own climate model as Steve Fitzpatrick noted. A non-political document would be more likely to have errors pointing in both directions. So the one-sidedness of the errors is evidence tending to confirm the hypothesis that the MET document was biased in the direction of alarmism and supporting their own work product. (If Nic’s memo leaves out instances where they erred in the other direction, then the evidence would be less compelling.)

Thank you for making me realize I wrote “two” when I should have written “three”. I already acknowledged it. You are wrong to claim that I miscalculated, as I assure you that the fumble is a result of my Android tablet creating a wormhole in the text when I delete two or three times, which renders my comments uneditable.

It’s OK, I will own it.

***

You should stop, now, as you’re putting someone else’s business on the line. My first intention was to illustrate the dogwhistling that goes by underlining that the A’s “misrepresentations” are undetermined and by hinting at the A’s adverbs, some of which have little to do with a technical analysis. I will now follow through my reading of this op-ed thanks to you, among others.

> A non-political document would be more likely to have errors pointing in both directions.

I have no idea why this criterion would separate non-political documents from political ones, but here’s what James Annan concluded a while ago about a related analysis by the A:

I have some doubts about Nic Lewis’ analysis, as I think some of his choices are dubious and will have acted to underestimate the true sensitivity somewhat. For example, his choice of ocean heat uptake is based on taking a short term trend over a period in which the observed warming is markedly lower than the longer-term multidecadal value. I don’t think this is necessarily a deliberate cherry-pick, any more than previous analyses running up to the year 2000 were (the last decade is a natural enough choice to have made) but it does have unfortunate consequences. Irrespective of what one thinks about aerosol forcing, it would be hard to argue that the rate of net forcing increase and/or over-all radiative imbalance has actually dropped markedly in recent years, so any change in net heat uptake can only be reasonably attributed to a bit of natural variability or observational uncertainty. Lewis has also adjusted the aerosol forcing according to his opinion of which values are preferred – coincidentally, he comes down on the side of an answer that gives a lower sensitivity.

I’m not sure I would use one-sidedness as a criterion to measure the political content of a document, as it might have lukewarm consequences on the way we should interpret the constant fight to push down sensitivity.

1) How can you have “no idea” why independent errors all pointing in one direction suggest political intent? By definition, it shows bias. Now you just have to see what kind of bias is most plausible. In this case, the MET’s errors, if undetected, would work to defend its own modeling product and its past conclusions. Why did the chicken cross the road? Maybe it was a fit of inattention, but most likely it was trying to get to the other side.

2) James Annan does well to consider the possibility that Nic Lewis is shading his findings toward lower sensitivity. Of course, Lewis might say that Annan is shading his interpretation of the evidence toward higher sensitivity. The key difference is that in the quotation you offer, Annan does not identify any error or misrepresentation on Lewis’s part, merely differences of opinion about the quality of different pieces of evidence. In the post here, Lewis identifies specific misstatements about what others have actually claimed and errors in plugging in the wrong numbers. Horse of a different color.

One need not look at it as a political document. One could also suppose that they don’t want their models thrown out of the ensemble of models used – even though they arguably should be. The report can thus be looked at as more of a reflexive, self-protective measure. Just to annoy some people, I will paraphrase Feynman from his Cargo Cult science lecture: A scientist has to have a certain type of honesty. After all, it is easy to fool yourself. One needs to bend over backwards to look at your own data in a critical light and report all its shortcomings as well as its strengths.

They didn’t even have to bend over backwards to understand that the water vapor feedback was exaggerated. They’ve been told that for years. Furthermore, the pretzels they twisted themselves into (aerosols, deep missing heat) are now becoming much more excruciating than the simple bend backward needed at the outset.

> How can you have “no idea” why independent errors all pointing in one direction suggest political intent?

First, they’ve been called “misrepresentations”, not “errors”, because the author focuses on claims he considers misleading.

Second, they have not been shown to be independent, something that affects their accounting.

Third, there are lots of different biases, some of which are not ipso facto erroneous.

Calling these so-called independent errors “misrepresentations” shows a very big auditing bias, insofar as we see the same pea and thimble game where an auditor uses technical nits to dogwhistle his editorial.

***

Even you show this auditing bias when you say:

> Lewis identifies specific misstatements about what others have actually claimed and errors in plugging in the wrong numbers.

The fact that the Met messed up their Table 1 is quite independent from the dispute about which kind of estimate is best, whether negative feedbacks should be considered physical, whether we should use medians or means, and so on, which constitute most of the A’s criticisms.

I hope you do realize that playing this kind of pea and thimble game shows more political intent than making technical decisions which do not exclude possibilities some like you may lukewarmingly frown upon.

***

Just think about what your claim entails: that political intent is shown whenever there’s a systematic bias in one direction.

Thanks Nic! The audit states that “cloud feedbacks are highly uncertain.” I would be interested to learn to what extent cloud feedback is now better constrained using up-to-date observational data than it was, say, ten years ago.

The last thing the warmista want to do is better constrain cloud data. It would reveal that clouds absorb around 15-25W/m^2 more than the microphysics says they should, and that will invalidate all the models.

Let me know when main stream science realizes that the climate sensitivity of CO2 is indistinguishable from zero. Until then this is “all sound and fury, signifying nothing” (Macbeth, William Shakespeare).

Jim, you will be dead many times over before main stream science would ever admit to such an erroneous idea, or we’ll have returned to the dark ages ruled by wacko right-wing nutters who will teach that the Earth is only 7000 years old.

R.Gates, you write “or we’ll have returned to the dark ages ruled by wacko right-wing nutters who will teach that the Earth is only 7000 years old.”

By that I take it to mean you equate me with people who believe the world is only 7000 years old. I sort of resent that. I have stated over and over again, that I rely on empirical data, and empirical data only. There is ample empirical data on what the true age of the earth is. So how you can equate me with such an erroneous idea, I have no idea.

The little empirical data that we have gives a strong indication that the climate sensitivity of CO2 is indistinguishable from zero. Now if you can produce some empirical data which shows that this is wrong, I would be grateful. ALL the numbers associated with the climate sensitivity of CO2 are estimates, not measurements, since we cannot do controlled experiments on the earth’s atmosphere; or they are things like paleo data, where there is no proof that the alleged rise in temperature was, in fact, caused by the rise in CO2 levels, and not the other way around.

Those whacko nutters who believe God created the heavens and the earth 10,000 years ago are the central actors in Western civilization, and nutter headquarters, the United States of America, which boasts a higher percentage of said believers than any other western nation, is by far the greatest military and economic power on the planet.

The moral of my story is that believing in a young earth does not restrain science and technology. In fact it appears to accelerate it and I would put forth the reason as those people don’t waste time wondering if birds evolved from dinosaurs but rather wondering how they might help free the oppressed, feed the hungry, and do God’s good work in every way they can.

You gotta understand that RGates responds from his fervid religious belief that the world is overpopulated. Unfortunately, the religion is primitive: it doesn’t really believe in observational science, but rather in a narrative of fear and guilt, which narrative is leading them into the swamp.
=========================================

Thanks for decoupling lapse rate from specific humidity itself. Observationally, the CMIP3 and CMIP5 model ensembles overstate humidity (by an average of at least 10%) and understate precipitation (by as much as half, depending on which paper one chooses to believe most). The reasons are almost certainly owing to Lindzen’s adaptive iris hypothesis about tropical convection cells – T-storms. There is more rainout, so more top-of-cloud residual latent heat to radiate away, and less humidity available to the upper troposphere where it matters most for positive water vapor feedback. Put differently, a negative lapse rate feedback via a specific mechanism.
And AR5 SOD (leaked) chapter 7 (clouds) explicitly says that, given supercomputer power limitations, sufficiently small grid scales to resolve such convection cells and model this phenomenon are not possible, and may never be.
In other words, the essential negative lapse rate helping to explain oversensitivity cannot be adequately modeled in present GCMs, so any IPCC conclusions based on them are necessarily off.

They have a sensitivity range for a reason. The global temperature behavior is within their stated uncertainty, but I suspect the Arctic sea-ice loss was outside what they expected because ice-sheet modeling is a tough problem, given how much more susceptible it is to feedbacks than other parts of the system.

It may or may not rain tomorrow. Is that useful? Now look at that range of uncertainty, 1.5 to 4.5 C: up to 1.5 C per CO2 doubling, with up to 3.0 C of wild-ass-guessed water vapor feedback, and clouds assumed to be only positive, because one guy disagreed with the original consensus in order to inflate the impact. The only reliable information is that a CO2-equivalent forcing of 3.7 Wm-2 will produce a surface temperature increase of 0.8 to 1.2 C. There is zero reason to assume clouds are a globally positive forcing, or that water vapor will follow anyone’s script. Zero reason to assume 1951-1980 is “normal”. Zero reason to assume everything that disagrees with the models is wrong. Zero reason to assume that the guys in charge so far deserve to remain employed.

captd, so you have decided not to agree even with Nic Lewis, who thinks he has observational evidence for a 60% positive feedback. You are in a dwindling minority. Is there some aspect of the Lewis work that you think is weak?

“The overprediction of the tropical upper hot spot shows that the models are overdoing the negative lapse rate feedback, if anything.”
—–
You realize that the enhancement to the Brewer-Dobson circulation as predicted by some models (but not all) has been seen and is related to more vigorous tropical troposphere vertical motion. This would tend to decrease the “hot spot” effect over the tropics, and also decrease the wind speed of the QBO. All this relates back to increased GH gas forcing.

Jim D, that is the ultimate result. What is amazing is that MIT’s Lindzen explained why a decade ago. His paper was explicitly rejected by AR4. Nic Lewis re-identified it in another guise, since AR4 chose to treat specific humidity and its lapse rate as one; special box 8.1 (previous guest post here) treated them as one. So there is a negative lapse rate feedback to water vapor, which AR5 WG1 SOD says CANNOT be modeled at all, because of grid scale limitations. QED

The foundation of all climate science rests on our ability to explain the 33C discrepancy between the non-GHG atmosphere Earth and what we currently observe.

Skeptics always seem to massively fail when this is factored in. About 1 part in 3 is due to CO2, perhaps 1.5 parts in 3 is due to associated water vapor increases, and the rest is due to other GHGs and albedo changes.

When more CO2 is added, the 33C number is accordingly scaled upward. A doubling of CO2 adding only 3C to this number is the result of the log sensitivity.

Sceptics will need a theory to explain this better than the consensus has done.

The negative lapse rate feedback is well known and easily quantified thermodynamically, being a consequence of how the upper part of the moist adiabat changes with temperature. As long as GCMs can maintain realistic moist adiabatic lapse rates, and they can, they would have no trouble with this particular feedback.

The lapse rate feedback has a clear physical justification, but it seems to be the case that lapse rates are more uniformly close to 6.5 C/km than simplistic arguments would lead one to think. We would expect significantly higher lapse rates where humidity is well below 100% and somewhat lower ones in areas of high humidity. The actual variability appears to be less than such arguments predict. It seems that winds and other mechanisms that cause lateral mixing must be involved in that.

To me it appears plausible that something similar may affect the lapse rate feedback. If the lapse rate is laterally more uniformly close to a single “environmental lapse rate” than naively expected it might also be less sensitive to warming than expected.

Even if my speculation (and it’s pure speculation) is true, that doesn’t necessarily affect much the combined effect of water vapor feedback and lapse rate feedback.

Pekka, “To me it appears plausible that something similar may affect the lapse rate feedback. If the lapse rate is laterally more uniformly close to a single “environmental lapse rate” than naively expected it might also be less sensitive to warming than expected.”

The main issue is the difference between the situation below the atmospheric boundary layer and above it. You have an average of nearly 2.5 km of highly turbulent mixing, where convective mixing caps the ABL in the heat of the day and a moist-air-dominated temperature inversion caps it at night. It is referred to as the Water Vapor Greenhouse Effect. Above the ABL you will have the classic lapse rate. The models are basically defining the “surface” as some point ~3000 meters above sea level but using sea level temperatures to drive the CC (Clausius-Clapeyron) estimates of water vapor. It doesn’t work that way when the majority of the surface on Earth is sea level oceans.

A fact that helps here is that the tropical troposphere has a lapse rate that stays very close to moist adiabatic, and therefore the tropospheric profile is closely tied to the surface temperature there. This lapse rate is a consequence of the radiative-convective equilibrium that dominates in the tropics, where horizontal temperature gradients are weak. At tropical temperatures, as the surface end of the moist adiabat warms, the upper-tropospheric part of the adiabat warms slightly faster. This extra warming leads to more outgoing longwave radiation, which is a negative feedback compared to if the lapse rate did not change. Held and Soden have given good explanations of this going back to 2000 or so.
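The upper-tropospheric amplification described here can be illustrated numerically. Below is a minimal sketch (my own, not from any commenter) that integrates two moist adiabats whose surface temperatures differ by 2 K, using the textbook saturated-adiabatic lapse-rate formula and the Magnus approximation for saturation vapor pressure; the constants, surface pressure, and step size are assumed round values:

```python
import math

G, RD, CP, LV, EPS = 9.81, 287.0, 1004.0, 2.5e6, 0.622

def e_sat(T):
    """Saturation vapor pressure (hPa) at temperature T (K), Magnus approximation."""
    tc = T - 273.15
    return 6.112 * math.exp(17.67 * tc / (tc + 243.5))

def moist_lapse(T, p):
    """Saturated adiabatic lapse rate (K/m) at temperature T (K), pressure p (hPa)."""
    rs = EPS * e_sat(T) / (p - e_sat(T))          # saturation mixing ratio
    num = G * (1.0 + LV * rs / (RD * T))
    den = CP + LV**2 * rs * EPS / (RD * T**2)
    return num / den

def temp_at(z_top, T0, p0=1000.0, dz=50.0):
    """Integrate a moist adiabat upward from the surface to height z_top (m)."""
    T, p, z = T0, p0, 0.0
    while z < z_top:
        T -= moist_lapse(T, p) * dz
        p -= p * G / (RD * T) * dz                # hydrostatic pressure decrease
        z += dz
    return T

# Two tropical moist adiabats, surface temperatures 2 K apart
t_cool = temp_at(10000.0, 300.0)
t_warm = temp_at(10000.0, 302.0)
print(f"warming at surface: 2.0 K; at 10 km: {t_warm - t_cool:.1f} K")
```

Because the warmer adiabat carries more moisture and hence a smaller lapse rate at every level, the temperature gap grows with height, which is the upper-tropospheric amplification (and the negative lapse rate feedback) in miniature.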

JimD, “A fact that helps here is that the tropical troposphere has a lapse rate that stays very close to moist adiabatic, and therefore the tropospheric profile is closely tied to the surface temperature there.”

Right, and since the moisture in the tropical atmosphere at the ABL stays near supersaturation, you won’t see much change in the tropical lapse rate. Do you expect CO2 to create a new super duper saturation?

The argument is clear. According to this picture the profile appears, indeed, to be close to the moist adiabat in the tropics. (A constant moist potential temperature means that the lapse rate follows the moist adiabat.)

Some other data on the temperature profile of the tropical atmosphere indicates, however, a lapse rate close to 6.5 C/km at altitudes where the moist adiabat would lead to a considerably smaller lapse rate. I’m puzzled by this conflict between the various data sources that I have seen.

JimD, “captd, the tropical lapse rate is a function of surface temperature. I do expect a change in surface temperature, don’t you?”

Over a few hundred years, not much. If you check the TAO data, it has surface air and sea temperatures with relative humidity. When the humidity is at 85% and the SAT at 29C, the nocturnal atmospheric boundary layer is saturated at 25C. If you want that temperature higher but at the same altitude, you have to add pressure. If you add more water vapor to already super saturated air, you don’t need cloud condensation nuclei; it will just rain anyway. If you add heat (any type of forcing above) you just create deeper mid-level convection. CO2 does not change the properties of water.

Here is a page I found with an illustration of how the lapse rate varies with temperature.
Actually in very cold temperatures, the lapse rate is almost dry (near 10 C/km), but as you add moisture this lapse rate steepens, and this effect changes nonlinearly with temperature because of the nonlinear increase of saturated water vapor with temperature.
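That nonlinear temperature dependence is easy to check against the standard saturated-adiabat formula. A short sketch (mine, using the Magnus approximation for saturation vapor pressure and standard constants; surface pressure of 1000 hPa is an assumption):

```python
import math

G, RD, CP, LV, EPS = 9.81, 287.0, 1004.0, 2.5e6, 0.622
DRY_LAPSE = G / CP * 1000.0   # dry adiabatic rate, ~9.8 C/km

def e_sat(tc):
    """Saturation vapor pressure (hPa) at tc degrees C, Magnus approximation."""
    return 6.112 * math.exp(17.67 * tc / (tc + 243.5))

def moist_lapse_km(tc, p=1000.0):
    """Saturated adiabatic lapse rate (C/km) at tc degrees C and pressure p (hPa)."""
    T = tc + 273.15
    rs = EPS * e_sat(tc) / (p - e_sat(tc))        # saturation mixing ratio
    num = G * (1.0 + LV * rs / (RD * T))
    den = CP + LV**2 * rs * EPS / (RD * T**2)
    return num / den * 1000.0

for tc in (-30, 0, 30):
    print(f"{tc:>4} C: {moist_lapse_km(tc):.1f} C/km (dry rate {DRY_LAPSE:.1f} C/km)")
```

At -30 C the moist rate comes out near the dry rate (~9 C/km), near 6.5 C/km at 0 C, and around 3.5 C/km at 30 C, consistent with the point made above (and, incidentally, with the 6.5 C/km figure discussed earlier in the thread).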

> When the humidity is at 85% and the SAT at 29C, the nocturnal atmospheric boundary layer is saturated at 25C degrees.; If you want that temperature higher but at the same altitude, you have to add pressure. If you add more water vapor, to already super saturated air, you don’t need cloud condensation nuclei, it will just rain anyway.

As discussed earlier, that claim is simply wrong. The partial pressure of water vapor must be higher, but that is possible at any atmospheric pressure.

Thermodynamic properties change enough to modify the moist adiabatic lapse rate and the lapse rate depends also on the atmospheric pressure, but the consequences are nothing like what you claim.

Pekka, ” I’m puzzled about this conflict between various data sources that I have seen.”

Any change with current forcing would be feet or meters in one of the most turbulent regions of the atmosphere. One long-term radiosonde study indicated a slight decrease in the cloud base (ABL), but that is challenged because the data doesn’t jibe with model predictions, LOL. It doesn’t change the fact that supersaturation is a pretty firm limit.

The concept of steepening is confusing as that means that the lapse rate is reduced. In many ways it would be easier to discuss plots with a vertical temperature axis as we discuss the dependence of temperature on the altitude.

Pekka, “Supersaturation is certainly a strong limit, but that relates the absolute humidity to the temperature and has nothing to do with atmospheric pressure (except through temperature).”

No, it relates to the actual operating condition of the system. The difference between the SST and MAT is about 0.8C. There is not a lot of room for warming without changing the convection rate or the precipitation rate, and both are a negative feedback to surface temperature. That is what you have to work with. Now how do you change that so you have a warmer surface with more water vapor and not change convection or precipitation?

Check a psychrometric chart for 2250 meters altitude. In the tropics, if the lapse rate is continuous from the actual surface, the temperature at 2250 meters would be 14.65 C lower than the surface. With an average tropical SST of 29C that would be 14.35 C and supersaturated in the afternoon/night.

What gets tracked is the geopotential height H at 500 mbar and the surface temperature over the years.
If we select a lapse rate, L, for 1960 of 0.0051°C/m, we calculate H = 5647.9m for P(H)=500 mb.
If we select a lapse rate, L, for 2010 of 0.0050°C/m, we calculate H = 5668.4m for P(H)=500 mb.

There is enough information to calibrate the surface temperature and the geopotential height using the barometric formula. The lapse rate is subtly reduced over the years, by a value of only 2%, but the temperature differences at altitude do shrink.

The difference at sea level from the chart is 15.4°C - 14.65°C = 0.75°C, whereas the difference at the 500mb altitude, assuming the modified lapse rate, is 0.75°C - 0.46°C = 0.29°C. If the lapse rate didn’t change, then this sea-level difference would be maintained along a constant-pressure isobar aloft.

There is no substitute for trying to fit the numbers according to the long-standing model.
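The H figures quoted above can be reproduced with the standard barometric formula for a constant-lapse-rate atmosphere. A sketch, where the sea-level pressure (1013.25 hPa) is my assumption, and the surface temperatures (14.65 C for 1960, 15.4 C for 2010) are taken from the sea-level figures in the same comment:

```python
# Height of a pressure level in a constant-lapse-rate atmosphere (barometric formula)
G, M, R = 9.80665, 0.0289644, 8.31446   # gravity, molar mass of dry air, gas constant
P0 = 1013.25                             # assumed sea-level pressure, hPa

def height_at(p, t0_c, lapse):
    """Height (m) of pressure level p (hPa), given surface temp t0_c (C) and lapse rate (K/m)."""
    T0 = t0_c + 273.15
    return (T0 / lapse) * (1.0 - (p / P0) ** (R * lapse / (G * M)))

h1960 = height_at(500.0, 14.65, 0.0051)
h2010 = height_at(500.0, 15.40, 0.0050)
print(f"1960: {h1960:.1f} m, 2010: {h2010:.1f} m, rise: {h2010 - h1960:.1f} m")
```

With these inputs the 500 mb surface comes out within a meter or so of the quoted 5647.9 m and 5668.4 m, i.e. roughly a 20 m rise, consistent with a warmer surface and a slightly reduced lapse rate.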

I suspect the Arctic sea-ice loss was outside what they expected because ice-sheet modeling is a tough problem,

That’s very brave of you. The changes in the model (HadGEM1 to 2) are by using corrections to the physics from better observations, and albedo changes by parameterization schemes as proposed by the proprietor (Curry 2001).

I agree that the mixed layer is different from the rest of the troposphere, and that this difference should be taken into account. My reaction was specifically against your statement that “you have to add pressure”, as you appear to be using that as a strong constraint related to the ambient air pressure.

Changes in pressure are important when pressure differentials that affect circulation are concerned, but the overall ambient pressure levels have only a small influence on the relevant thermodynamics, because N2 and O2 remain by far the dominant constituents under all imaginable conditions, and because they are also nearly ideal gases under atmospheric conditions. Condensation and evaporation of water are important processes, but they are controlled by absolute humidity and temperature, and affected very weakly by the ambient pressure when temperature is taken as an independent variable.

This modern warm period has warmed much the same as the Roman and Medieval Warm Periods warmed. In the past ten thousand years there were warm and cold periods that have always alternated, warm then cold then warm then cold.

They tell us that alternating was natural variability then, but now, that has stopped and this Modern warming would have not happened without man-made CO2.

I have asked, “why should we have not warmed just like we have done for ten thousand years, and why will we not cool again like we always did?”

They tell us that it is warmer now than in those other warm periods. The data does not show that. Only model output shows that.

They have used their models to build a Hockey Stick that is ten thousand years long.

They must explain why natural warm and cold alternating periods should have stopped so that only CO2 can cause a warming just like happened after every cold period before and only a lack of CO2 can cause a cooling just like happened after every warm period before. CO2 has replaced natural variability in Climate Control and they can’t tell us what it was that was replaced.

If the A claims that each of the Met statements is erroneous AND misleading AND a misrepresentation, then he has more work to do than to claim that one statement is erroneous, another statement is misleading, and yet another is a misrepresentation.

Also note that showing that a statement is erroneous requires a different kind of analysis than showing that it is misleading or a misrepresentation.

I am going to assume most of us who have commented here have read the aforementioned work of Nic L. above. I am curious how many of us have done the due diligence of reading the entire document, including sources of information. I have not; I think I am too lazy, and also have been playing 500. Just curious!

You need to show some integrity and honor and thank me for pointing out your error. It’s pretty simple. Not a food fight. But as long as you want to deny the obvious, then food fights are fine. Someone who can’t even acknowledge others is hardly a good partner for dialogue.

My tablet’s behaviour shows a very nasty bug. The editor loses track of its cursor when a CR or LF gets deleted. This has the effect of deleting whole words and leaving the insertion point at a random place. Anyway.

No willard, your explanation doesn’t fly. We want a full accounting of your mistake. You know, the kind Joshua wanted from peter lang.

explain exactly how your tablet had anything to do with the miscounting?

no copy and paste was required. Ask yourself this: I read the document.
I read your description of the first two sentences. I knew it was wrong without even CHECKING. Why? Because I read the document. Maybe you should get checked for a memory disorder.

So you wrote your comment. Did you double check before posting?
Or double check after posting?

If your tablet has a problem with copy and paste, have you noticed it before? If so, then why didn’t you check? Perhaps you were in a rush to spam the thread.

> The Met Office Report discusses in some detail estimates of transient climate response (TCR) and equilibrium/effective climate sensitivity (ECS). TCR, a measure of the rise in global surface temperature at the end of a 70 year period over which atmospheric CO2-equivalent concentrations grow at 1% per annum (and hence double), is closely linked to projections of human-caused warming several decades into the future. ECS measures the eventual surface temperature increase from such a doubling once ocean temperatures have fully adjusted. ECS largely determines TCR.

I kind of like Willard. He’s intelligent and witty in a nasty kind of way. I see him as the other side’s Kim, though he’s not nearly the writer Kim is.
The real troll around here is lollywot. I’m surprised more don’t see it.

The Met Office Report tries to give weight to its erroneous conclusions about cause and effect by dreaming up a chimera: that warming in the later part of a warming period provides some special signal about the part of the total warming caused by humans. And, sure enough, we’re all doomed.

Who *is* this willard person, I ask myself. …..who crawls through marshy weeds looking for bugs. ….when there is a bright sun shining. Isn’t there a bright sun shining? I wonder why he doesn’t want to notice.
….Lady in Red

Last night Professor Judith Curry, head of climate science at Georgia Institute of Technology in Atlanta, said the leaked summary showed that ‘the science is clearly not settled, and is in a state of flux’.

She said it therefore made no sense that the IPCC was claiming that its confidence in its forecasts and conclusions has increased.

For example, in the new report, the IPCC says it is ‘extremely likely’ – 95 per cent certain – that human influence caused more than half the temperature rises from 1951 to 2010, up from ‘very confident’ – 90 per cent certain – in 2007.

Prof Curry said: ‘This is incomprehensible to me’ – adding that the IPCC projections are ‘overconfident’, especially given the report’s admitted areas of doubt.

As pointed out by Armour et al., these short-term studies are likely to underestimate sensitivity because short-term responses don’t represent the full longer term response, being skewed by fast responding areas like the land and Arctic that have less water vapor feedback. In the longer term, Armour says that the warmer ocean feedback plays more of a role in the ECS. The models tend to have the tropical ocean responding too fast (hot spot issue), and the Arctic not enough (sea-ice overestimate).
I would also note that with a TCR of 1.6 C, the warming since 1950 is 100% accounted for by CO2, but that is just a corollary of Lewis’s number. The skeptics have come a long way in accepting these types of sensitivities and the fact of a significant positive feedback, and Lewis is helping them to see the light.

Lewis defined TCR as the response to a 1% per year CO2 increase over 70 years, which is a doubling in 70 years. I doubt this is the only definition of TCR, but it is not ECS, which is what you get if you run the model beyond those 70 years, at fixed CO2, until it stops warming. A more observationally based TCR would use the fact that we have increased CO2 by 25% in the last 50 years at an accelerating rate.

> TCR is defined as the mean change in global mean surface temperature at the time of doubling of atmospheric carbon dioxide (CO2) concentrations from pre-industrial levels, in a scenario of a cumulative 1% increase in CO2 per year. Hence TCR describes the change in global mean surface temperature in response to reaching a doubling of CO2. ECS describes the equilibrium response of the climate system, as defined by the global surface temperature reached eventually after stabilisation of atmospheric concentrations at a doubling of CO2.
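The “1% per year, doubling” part of the definition quoted above can be sanity-checked in a couple of lines; a sketch (the 1.6 C TCR value is just the illustrative number used elsewhere in this thread):

```python
import math

# Years for CO2 to double at 1% per year compound growth
years_to_double = math.log(2.0) / math.log(1.01)
print(f"doubling time: {years_to_double:.1f} years")   # close to the canonical "70 years"

# Warming implied by a TCR at any concentration ratio (log scaling)
def warming(tcr, c_ratio):
    return tcr * math.log(c_ratio) / math.log(2.0)

print(f"TCR 1.6 C at exact doubling: {warming(1.6, 2.0):.2f} C")
```

At doubling the function returns the TCR itself by construction; the point of the log form is that it also applies to partial increases, as in the attribution arithmetic discussed below in the thread.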

Does anyone know whether the ECS derived from models takes closer to 150 years or 400 years? I assume it varies from model to model. Also, is the assumption of a deep ocean layer that does not mix with the upper layer fixed into the models such that the ECS ignores the lower ocean? If this is the case, would not Trenberth’s hypothesis that the heat is going into the deep ocean, change the ECS and/or the time it took to get there? Nic? Mosh?

> It concludes that, broadly, such estimates – and hence climate model projections of future warming – do not need to be reduced.

For this to change, someone will have to show how the land temperature signal — which is closer to a representation of the eventual ECS — is deviating significantly from the projected average 3C warming per doubling of atmospheric CO2 based on the pre-industrial average.

webster, “For this to change, someone will have to show how the land temperature signal — which is closer to a representation of the eventual ECS — is deviating significantly from the projected average 3C warming per doubling of atmospheric CO2 based on the pre-industrial average.”

Bullshi$t. The models estimate “global” sensitivity when sensitivity is obviously regional. The models miss the tropics, miss the Antarctic, miss Arctic amplification and miss the mid-latitude land amplification. It is time for the models to “prove” they are useful.

JimD, “Is it a comfort to you if the models are underestimating the land warming, or is it worse than you thought?”
It is called a clue. If a model predicts one thing and another happens, you find out why.

we just have problems with folks who tell us the models can do more than they really can.

Although it is not an exact analogy, it is similar to traveling salesmen pushing a cure-all tonic. The tonic isn’t useless (it will give you a nice buzz), but it doesn’t reduce swelling, improve eyesight, help grow hair, or make it stand up all night.

GCMs are a tool, one with rather narrow applicability. That makes it reasonable to ask: why are there 70 different versions of the same tool, considering the small number of uses? And where is the evidence the tool performs as advertised?

Jim D,
“I would also note that with a TCR of 1.6 C, the warming since 1950 is 100% accounted for by CO2”
I suspect you don’t mean just CO2, but all GHGs. But the TCR value which is consistent with warming since 1950 depends on what you assume for the change in GHG forcing since 1950, as well as how much ocean heat accumulation was taking place in 1950.

If you consider the whole of the instrumental period, the temperature increase is about 0.8C and the forcing increase (absent aerosol offsets) about 3.1 W/m^2. If you accept the AR5 SOD best estimate for net aerosol offsets of 0.7 W/m^2 and assume heat accumulation of ~0.6 W/m^2, then the estimated ‘effective sensitivity’ (heat balance sense) is about 0.8/(3.1 - 0.7 - 0.6) = 0.444 degree per watt (~1.65 degrees per doubling). The transient response would (of course) be considerably less than the effective sensitivity: 0.8/(3.1 - 0.7) = 0.333 degree per watt (~1.24 degrees TCR), and the equilibrium sensitivity a little higher. A transient response of 1.24C is not a particularly low value; Isaac Held seems to think it is most likely near 1.3C.
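The heat-balance arithmetic above can be laid out explicitly. A sketch: all the input numbers are taken from the comment itself, and 3.7 W/m^2 per CO2 doubling is the conventional conversion factor:

```python
F2X = 3.7           # forcing for doubled CO2, W/m^2 (conventional value)

dT = 0.8            # instrumental-period warming, C
dF = 3.1            # forcing increase absent aerosol offsets, W/m^2
aerosol = 0.7       # AR5 SOD best-estimate net aerosol offset, W/m^2
ocean_uptake = 0.6  # assumed ocean heat accumulation, W/m^2

# Effective sensitivity: warming per unit of forcing not yet absorbed by the ocean
eff = dT / (dF - aerosol - ocean_uptake)   # C per W/m^2
# Transient response: warming per unit of total net forcing
tcr = dT / (dF - aerosol)                  # C per W/m^2

print(f"effective sensitivity: {eff:.3f} C/(W/m^2) -> {eff * F2X:.2f} C per doubling")
print(f"transient response:    {tcr:.3f} C/(W/m^2) -> {tcr * F2X:.2f} C per doubling")
```

This reproduces the ~1.65 C per doubling effective sensitivity and ~1.24 C TCR figures; the only difference between the two is whether the ocean heat uptake term is subtracted from the forcing.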

You don’t need to know anything about aerosols or other GHGs to see the CO2 effect and compare it with the actual warming. At a 1.6 C TCR, a rise from 310 ppm to 385 ppm (the average over the last decade) and a temperature rise of 0.5 C from 1950 to the last-decade average show that these two numbers are the same. Yes, there are negative aerosol effects and positive effects due to other GHGs, but I am just looking at the CO2 attribution. Take away the CO2 effect, and you are left with no warming. This is 100% attribution.
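This back-of-envelope attribution can be checked directly; the 310 ppm, 385 ppm, and 1.6 C numbers are the commenter’s, and the log scaling is the standard CO2 forcing relation:

```python
import math

tcr = 1.6               # C per doubling, as assumed in the comment
c0, c1 = 310.0, 385.0   # ppm: 1950 vs. last-decade average

# Warming attributable to CO2 alone under log forcing scaling
dT_co2 = tcr * math.log(c1 / c0) / math.log(2.0)
print(f"CO2-attributed warming since 1950: {dT_co2:.2f} C")
```

The result lands almost exactly on the 0.5 C observed rise quoted in the comment, which is the arithmetic behind the “100% attribution” claim (whether the assumptions justify the claim is, of course, the dispute in the surrounding thread).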

Jim D,
“You don’t need to know anything about aerosols or other GHGs to see the CO2 effect and compare it with the actual warming.”
.
Sorry, no. Radiation is fungible. You really do need to include a reasonable estimate for all significant forcings, or your analysis is not meaningful.

This is precisely why the closest analog paleoclimate data needs to be used in combination with both the longest modern observations and the best models. The paleoclimate data will include all the feedbacks and full Earth system response to fill in the gaps of modern observations and model weaknesses. Lake El’gygytgyn in Siberia probably is our best single source for this paleoclimate data and confirms that major changes are ahead for the Arctic as CO2 goes over 400 ppm. Thinking about the Earth’s climate in mid-Pliocene terms is probably a good general analog if you want to see what the Earth system sensitivity is to a doubling of CO2 from pre-industrial levels.

R. Gates, what do you think the trigger might be?

Abstract: 2.8 Million Years of Arctic Climate Change from Lake El’gygytgyn, NE Russia

The reliability of Arctic climate predictions is currently hampered by insufficient knowledge of natural climate variability in the past. A sediment core from Lake El’gygytgyn in northeastern (NE) Russia provides a continuous, high-resolution record from the Arctic, spanning the past 2.8 million years. This core reveals numerous “super interglacials” during the Quaternary; for marine benthic isotope stages (MIS) 11c and 31, maximum summer temperatures and annual precipitation values are ~4° to 5°C and ~300 millimeters higher than those of MIS 1 and 5e. Climate simulations show that these extreme warm conditions are difficult to explain with greenhouse gas and astronomical forcing alone, implying the importance of amplifying feedbacks and far field influences. The timing of Arctic warming relative to West Antarctic Ice Sheet retreats implies strong interhemispheric climate connectivity.

http://www.sciencemag.org/content/337/6092/315.abstract

Matt Skaggs
“I would be interested to learn to what extent cloud feedback is now better constrained using up-to-date observational data than it was, say, ten years ago.”

I’m not sure it will fully answer your question, but there is an excellent, detailed discussion of clouds, their radiative effects, feedbacks and forcing in section 3 of the 2012 review paper “Observing and Modeling Earth’s Energy Flows” by Stevens and Schwartz. Available at http://www.bnl.gov/envsci/pubs/pdf/2012/BNL-96154-2012-JA.pdf

The Stevens and Schwartz paper also has an excellent section on aerosols. Regarding direct aerosol forcing (DARF), where estimates have come down over the last ten years (the AR5 best estimate is expected to be only -0.27 W/m2, down from -0.5 W/m2 in AR4), it concludes that “it would not be surprising if the sum of the various contributions to the DARF is much closer to zero than previously thought”. And it concludes about aerosol indirect forcing (cloud adjustments): “the variety of ways in which clouds adjust to aerosol perturbations (Stevens and Feingold. 2009), many of which are not possible to account for given the relatively crude description of cloud processes in climate models, lends weight to the argument that, after a full accounting, the radiative forcing attributable to cloud adjustments to aerosol perturbations is likely to be small”.

Thanks for the excellent post as well as this link to the Stevens and Schawartz paper for those who have not read it. This quote from that paper is particularly interesting, especially in light of discussions going on here about ocean heat content and TOA:

“Because the atmosphere has a relatively small heat capacity, an imbalance of energy flows at the TOA can be sustained only by an increase in ocean enthalpy, augmented to lesser extent by melting of the cryosphere and warming of the land surface and the atmosphere.”

What this gets at is that, considering the relative sizes of energy storage, thermal inertia, and the importance of ocean-to-atmosphere energy flow for maintaining tropospheric temperatures, small alterations in that flow have big repercussions for tropospheric temperatures, and a better assessment of the causes of those alterations is important if we really want to understand tropospheric sensitivity to longer term increases in GH gases. A more accurate measurement might well be a total Earth system sensitivity that captures energy-increase-related changes to the full system.

R.Gates, “A more accurate measurement might well be total Earth system sensitivity that captures energy increase related changes to the full system.”

You have that already if you like. The average effective energy of the oceans is ~334.5 Wm-2 approximately equal to the estimated DWLR value. Increasing CO2 will increase DWLR within the limits of the specific heat capacity of the atmosphere warming the “average” of the oceans by roughly the same amount. That is 0.8C per 3.7 Wm-2 of additional DWLR. If the lower atmosphere cannot hold more energy, then there would be an increase in BDC, SSW events, deep convection, etc. etc. etc. until a new steady state is reached between the oceans and atmosphere.

We already have a fairly good estimate of the rate of ocean heat uptake ~0.8C per 300 years

Ragnaar, “Would it be possible to recover from the LIA without the above increased OHC?” No, OHC and sea level would rise as the oceans recover the energy lost. Sea level rise has a better fit to a longer-term recovery than to CO2 forcing. OHC data are too short and too coarse to extend that far back, but since OHC and SST track well with sea level, tropical SST reconstructions can give you a good idea.

Without considering LIA depression, you get ~1.6 C; if you think CO2 does it all, about 3.2 C; but if you allow for LIA depression, ~0.8 C. That is remarkably close to Kimoto’s estimate of the Planck response once you include more current data.

captdallas:
I kind of followed that Kimoto paper, his approach and his general point I think, but not so much the math.
I follow the 0.8 C in the Oceans over the last 300 years as the land temperatures aren’t going far without the permission of the Oceans.
Why isn’t this stuff simpler?
That 0.8 C is an ore boat load of energy though. 800 C in the atmosphere I think. Would you use 0.8/264 K to figure the percentage increase?

Well, it is a planet. Kimoto has an interesting approach and, other than the dF/dT ~ 4F/T relation which gives some folks heartburn, it is pretty simple for a reasonable estimate. If that boatload of energy magically got transferred to the atmosphere it would have about 1000 times the impact, but it can’t.

I am not sure where you came up with the 264 K, but there are a couple of ways to use the Kimoto simplification. The gross ratio is 4(Fr + Fe + Ft)/Ts; using the Stephens budget estimates, that is 4(398+88+24)/289 = 7.05 Wm-2/K, which is how much the surface energy should change per degree.

That is the end of simple though, because only part of that is due to back radiation; solar provides energy at the surface and in the atmosphere that is not going to be changed much by atmospheric forcing. This is where the ocean and back radiation relationship comes in. Back radiation is ~340 Wm-2 per 277 K, or 4(340)/277 = 4.9 Wm-2 per K. Adding 3.7 Wm-2 to the back radiation produces ~0.75 C of effective temperature change, i.e. the average temperature of the oceans increases by that much. It takes 7.05 Wm-2 per degree of surface energy to generate 4.9 Wm-2 of back radiation, so surface energy will need to increase by 7.05*(3.7/4.9) = 5.35 Wm-2 to compensate for 3.7 Wm-2 of atmospheric forcing. You might recognize dT = 5.35*ln(Ci/Co) as the “no feedback” climate sensitivity. Unless increasing latent cooling by about 1 Wm-2 triggers some major water vapor feedback, that is about all she wrote.
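The arithmetic in that back-of-envelope chain is easy to check. Here is a quick sketch of it in Python, using the flux and temperature values quoted in the comment (the dF/dT ~ 4F/T relation is the commenter's simplification, not an established result, and the Stephens-budget numbers are taken at face value):

```python
# Back-of-envelope check of the Kimoto-style dF/dT ~ 4F/T arithmetic above.
# All values are those quoted in the comment; this only verifies the sums.
F_rad, F_evap, F_thermals = 398.0, 88.0, 24.0   # surface fluxes, W/m^2 (Stephens budget)
T_surface = 289.0                               # surface temperature, K

# Gross surface ratio: how much surface energy changes per degree
dF_dT_surface = 4 * (F_rad + F_evap + F_thermals) / T_surface
print(round(dF_dT_surface, 2))   # 7.06 W/m^2 per K (comment rounds to 7.05)

# Back-radiation ratio
DWLR, T_eff = 340.0, 277.0       # back radiation (W/m^2) and effective temperature (K)
dF_dT_dwlr = 4 * DWLR / T_eff
print(round(dF_dT_dwlr, 2))      # 4.91 W/m^2 per K

forcing = 3.7                    # W/m^2 for a CO2 doubling
dT = forcing / dF_dT_dwlr
surface_increase = dF_dT_surface * forcing / dF_dT_dwlr
print(round(dT, 2), round(surface_increase, 2))  # 0.75 K, 5.32 W/m^2
```

The last figure comes out ~5.32 Wm-2 rather than the comment's 5.35, a rounding difference from carrying 7.05 and 4.9 at two significant figures.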

So even in slightly dialing back their rhetoric against the “pause”, the MET’s still making every argument (and error) possible to keep the highest climate sensitivity possible. Well there’s a shocker.

Next thing you know, the revised summary for policy makers for the brand spanking new AR5 will argue that the pause, if it exists, is no evidence that AGW is any less C. Just like they have been told to.

“Paleo analysis is the means with which the slow feedbacks are estimated and how the long tails on the high side of ECS projections come about.”

Not so. The long tails come about because uncertainty in the fractional change in forcing net of ocean heat uptake is large relative to uncertainty in the fractional change in global surface temperature. The fat long tails come about when improper statistical methods are used.

“We simply do not have enough data from the modern-day observational records to estimate the slow feedbacks that occur over hundreds of years.”

Simulations by AOGCMs with dynamic oceans can be used to estimate the difference between ECS as estimated from data spanning 100 years or so and the eventual ECS when the model is run until equilibrium is reached. The result of one such test was reported in Andrews et al 2012, running an AOGCM for ~6,000 years – a very computationally demanding simulation indeed. The two ECS figures were within 10% of each other.

If we have both methods that have fat tails on the high side and methods that do not, the first group of methods is of little value in determining the likelihood of high values. The fat tails have persisted because every method has had them; having even one reliable method without such a tail is enough to remove it.

Uncertainty of the denominator is a much more likely reason for a fat tail than uncertainties in the numerator, just as Nic wrote. One well known paper that has discussed this issue is that of Knutti and Hegerl (Nature Geoscience, 2008) (Fig. 2 in the article). They refer further to Roe and Baker (Science, 2007).

What the guy has to do is tell us what the mean values of the Paleo data set that he removed are. If the mean values of the paleo data set are 4C, then of course removing these will drop the resulting mean.

This is not that hard. The questions are not difficult and the answers are presumably not difficult.

Webby, Annan and Schmittner among others have lower paleo estimates, as I recall 2.3 and 1.7. I know this is the latest ploy: if models are wrong, and they look very wrong, then data from 24,000 years ago is better. And recent paleo estimates are also trending down.

Fan, please pardon any typos, as my eyes are so full of tears of laughter that I can hardly see straight.

You do realize that none of your points apply here, right? Let’s go through your points:

1. Nic’s comments are based on physical measurements. It’s the Met that has gotten it wrong — and wrong twice: once because of disregard for physical evidence, and once because of misunderstandings of the data.

2. You did notice that Nic was mainly expanding upon Otto, et al, right? You do know what “et al” means?

3. Science determines what science will determine. Being “conservative” or “enthusiastic” (or whatever you’d call it) in your conclusions has nothing to do with science. You do know what “science” means?

4. I missed the part where Nic talks about a conspiracy. Please help me find it. Or perhaps that word doesn’t mean what you think it means?

(3) Conservatism enters when the implications of science are considered (individually and/or organizationally); this largely determines whether the relevant time-scales are annual (as in business), decadal (as in politics), or generational (as for ordinary humans).

(4) When Nic Lewis finds 12 flaws in a MET report, and all 12 flaws apply same-sign corrections to the climate-sensitivity, then with probability P<1/2048 we can infer either that (A) The Met Office is conspiring to slant the analysis, or (B) Nic Lewis is cherry-picking his critique to support a predetermined (but unconscious) denialist rejection of high-end climate-sensitivity.

Point (4) is decisive–unless Fan can show that Nic has missed OTHER mistakes by the MET that point in the other direction, we can assume that the MET non-conspiratorially slanted their findings to make their own model look good and maintain policy consistency with Urgent Mitigationism. Thanks, Fan!

Yes, the fan is into “Strong Trolling” to combat what he calls weak skepticism, but he’s a little weak on his calculations himself, as we have seen. Seems unable to calculate 7 x 40. His “weak calculation”? 400. LOL.

> The Met Office Report considers whether climate model estimates of TCR and ECS need to be revised in the light of recent observational evidence, in particular the relatively slow increase in global surface temperature over the last 15 to 20 years. It concludes that, broadly, such estimates – and hence climate model projections of future warming – do not need to be reduced.

Were there other observations than the global surface temperatures?

“Relatively slow” according to whom, the Met or the A?

“Broadly” according to whom, the Met or the A?

What were the arguments that warranted the Met Office to reach such conclusion?

Quoting the main conclusions one wishes to criticize may follow the best practices.

Glad you asked. Of course it can be replaced but it would sacrifice information in the act. “More than half” could be any number between 50% and 100%. “Barely half” tells us that the number is very close to 50%. If you had the verbal skills you think you do you wouldn’t have asked such a stupid question.

The word “half” carries enough information in both expressions, Big Dave. The main information that “barely” carries is that half is not a lot, according to the A of an op-ed. This appears to be an evaluation void of any objective criteria, an unnecessary subjective judgement in something some may claim is a technical analysis.

” As pointed out by Armour et al., these short-term studies are likely to underestimate sensitivity because short-term responses don’t represent the full longer term response, being skewed by fast responding areas like the land and Arctic that have less water vapor feedback. In the longer term, Armour says that the warmer ocean feedback plays more of a role in the ECS.”

IMO the Armour paper is suspect. It is based around a particular AOGCM that has a latitudinal pattern of climate feedbacks that is substantially different from those exhibited by most AOGCMs. The Andrews et al 2012 paper’s findings support the view that the Armour paper is mistaken. Indeed, it is now routine to estimate ECS for AOGCMs from what you call the short term. At 100–150 years, the period involved is long enough to activate all but very long term feedbacks, and appears to be sufficient to get a fairly accurate approximation to equilibrium sensitivity.

“I would also note that with a TCR of 1.6 C, the warming since 1950 is 100% accounted for by CO2, but that is just a corollary of Lewis’s number.”

I don’t think a TCR of 1.6 C is my number, if that is what you are saying.

Armour et al. offer a mechanistic explanation of why sensitivities based on a few decades always will underestimate actual ECS if there are fast-response components in the climate system. Land and Arctic areas are currently warming at twice the global average rate, which I would say rates as a fast-response component of the type Armour was talking about. The Otto et al. work acknowledged Armour’s paper and they clearly were aware of this limitation of their own assumptions.
On the other point, if you use 1.4 C for TCR, the CO2 rise alone accounts for 90% of the warming, still comfortably in the “most” range of the IPCC’s very likely most statement in AR4. I only mention this because many here seem to doubt even that statement.

I decided that you were making stuff up, but didn’t want to make an accusation without substantiation.
So I plotted the SH and NH temperatures of land and ocean, from 1971 to 2012.
Then I took the slopes.

The Southern Hemisphere land is warming at 50% the Northern Hemisphere land.
The Southern Hemisphere SST is warming at 30% the Northern Hemisphere land.
The Northern Hemisphere SST is warming at 50% the Northern Hemisphere land.

As the Northern hemisphere temperature, SST and sea level change bears no relation to its Southern hemisphere equivalent, you do wonder what the purpose is in coming up with a composite global average which bears no relation to any of its constituent parts.

The difference in surface temps above land and ocean is the result of a lack of water availability over land. If it warms at all this should reduce over time with more evaporation over oceans and rainfall over land. The oceans drive land temperatures and will drive land temperatures down.

CH, the lack of water availability is exacerbated by the growing temperature differential between land and ocean. Less relative humidity is less clouds is less rain is drier soil is more warming. A positive feedback of sorts.

It is pretty clear that warming in the northern hemisphere is greater than the southern because most of the aerosol emissions and large aerosol cooling effects are in the northern hemisphere….. um, no, wait…… never mind.

SteveF, “It is pretty clear that warming in the northern hemisphere is greater than the southern because most of the aerosol emissions and large aerosol cooling effects are in the northern hemisphere….. um, no, wait…… never mind.”

Yes, Smartsols. I think there is a knob on most of the models for those.

Think, people, the NH land masses are much larger while the ocean is 90% of the SH. It stands to reason the SH land is dominated by the ocean, while the NH land has large interiors far from the ocean. These are the areas that warm fastest too. The more poleward areas in the NH also are affected by Arctic warming. Taken together the NH and SH land warmed more than twice as much as the global ocean in the last few decades, as I showed.

CH, a warmer world would be wetter in equilibrium when the RH returns, but we are in a transient state with faster land than ocean warming. As you may know, a cooler ocean than land is not conducive to land humidity, which is why the summer is less cloudy than the winter over land.

‘A characteristic feature of global warming is the land–sea contrast, with stronger warming over land than over oceans. Recent studies find that this land–sea contrast also exists in equilibrium global change scenarios, and it is caused by differences in the availability of surface moisture over land and oceans. In this study it is illustrated that this land–sea contrast exists also on interannual time scales and that the ocean–land interaction is strongly asymmetric. The land surface temperature is more sensitive to the oceans than the oceans are to the land surface temperature, which is related to the processes causing the land–sea contrast in global warming scenarios. It suggests that the ocean’s natural variability and change is leading to variability and change with enhanced magnitudes over the continents, causing much of the longer-time-scale (decadal) global-scale continental climate variability. Model simulations illustrate that continental warming due to anthropogenic forcing (e.g., the warming at the end of the last century or future climate change scenarios) is mostly (80%–90%) indirectly forced by the contemporaneous ocean warming, not directly by local radiative forcing.‘

The difference is caused by differences in water availability – which are less in a warmer world.

RH, by the way, remains fairly constant – if not exactly so. Transition – smansition. There is always lots of moisture around – not limited at all over oceans – and warmer air holds more water. A physical fact, aye Jim?

Just remember that the RWP and MWP and LIA were NOT global. Only NH. Therefore they can be ignored. The melting of Arctic sea ice (-2.5% per decade for Winter Max.) and the warming of NH today however is a clear sign of GLOBAL warming. Got that?

“The land is not actually warming faster than the ocean. Land temperatures are driven by the oceans.”

Incorrect. Land is warming faster than ocean. Land temperatures are moderated by the ocean not driven by it. The moderation effect was named “continentality” in the 19th century when some astute individuals noticed that seasonal temperature change was larger the farther inland one was from the ocean.

“Think, people, the NH land masses are much larger while the ocean is 90% of the SH.”

You’re the one that needs to do some thinking and it needs to be preceded by fact checking. Land is 30% of the earth’s surface. There is twice as much land surface in NH vs. SH. Therefore 20% of the earth’s land surface is NH and 10% in SH. But SH is 50% of total surface area so land occupies 20% (not 10%) of the southern hemisphere and 40% of the northern hemisphere.
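The fact-check above is straightforward arithmetic; a minimal sketch confirming it, using only the rough figures given in the comment (land = 30% of Earth's surface, twice as much land in the NH as the SH):

```python
# Hemisphere land-fraction arithmetic from the comment above.
# Rough inputs: land covers 30% of Earth; NH holds twice the SH's land area.
land_fraction = 0.30
nh_land = (2 / 3) * land_fraction   # land share of whole Earth in the NH: 20%
sh_land = (1 / 3) * land_fraction   # land share of whole Earth in the SH: 10%
hemisphere = 0.50                   # each hemisphere is half of Earth's surface

print(round(nh_land / hemisphere, 2))       # 0.4  -> land is 40% of the NH
print(round(sh_land / hemisphere, 2))       # 0.2  -> land is 20% of the SH
print(round(1 - sh_land / hemisphere, 2))   # 0.8  -> ocean is 80% of the SH, not 90%
```

So the earlier claim that "the ocean is 90% of the SH" overstates it; by these round numbers the SH is about 80% ocean.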

The difference in temperature is the result of difference in water availability – and therefore of lapse rates. Water availability changes. The temperature in an ocean dominated world – ‘the big kahuna’ – is maintained indirectly by ocean heat content.

Nicholas Lewis “Independent climate scientist” ?
What does this mean? Is there some sort of qualification implied here? Some examination to be passed? Or, can anyone, even people like Wagathon, describe themselves in like fashion?

Used to be called in days of yore, a “gentleman scientist.” Someone without formal affiliation with academic institution or government. Charles Darwin was a gentleman scientist for most of his life I believe.

Wikipedia lists a bunch of independent scientists, including James Lovelock and Peter Mitchell, who won a Nobel Prize in chemistry despite what were regarded at the time as radical ideas, for which he likely would not have been able to get funding through the usual channels.

Peter was part of the Tarmac family, and married up.
He and his brother put the money up to buy the house in Bodmin and turn it into a research institute. He had worked in conventional academia, but had funding problems.
Peter hated reviewers; no imagination. He published his work via the vanity press; the Gray book and the Blue book.
He also had gold ear-rings, just like a pirate’s.

James Lovelock. Of Gaia fame? I think we should leave him out of it. Peter Mitchell looks to have followed a conventional academic pathway. C Darwin was born in 1809 when science was in its infancy. Is Nic Lewis being compared to him?
I’m sure NL must have some credentials. Uni degrees, work experience etc. Have I just missed seeing what they are, or is he reluctant to say?

As Steve McIntyre shows, you don’t need to know anything about climate to do statistics. You get a hold of the data somehow, maybe just asking around or doing FOIAs, and just treat it like numbers. You don’t have to know what the numbers mean.

McIntyre has been published, more than once. He found Hansen’s mistake on Y2K, and Hansen’s colleague acknowledged McIntyre and thanked him. It was important because they actually had to change a claim about the warmest year on record. McIntyre demolished the Hockey Stick. He figured very prominently in the Climategate emails, and that wasn’t because he didn’t have a clue; it was because he did have a clue and they were afraid of him. He’s not getting any dumber as time goes on either.

I’m not sure how much climate science SM knows. You think it isn’t much at all? You could be right. But, I’m asking for info on NL. He does claim to be a climate scientist. Is this a lifelong interest of his? If he is being compared to Charles Darwin I should point out that he was always genuinely interested in biology. He didn’t take up the study to bolster a particular preconception. Can NL say the same?

There is nothing magical about a climate scientist. Every one of the moderators at SkS is not a climate scientist. Yet that site gets referred to so often one might think it the bible of climate science. The chief reference book at least.

Look around and you find all sorts of non climate scientist actors in this drama. Peter Gleick? Lewandowski? An NGO of your choosing? How many contributors to the IPCC reports have/had ties to enviro groups?

Statistical analysis does not belong solely in the realm of climate science. If you have followed the debates, you might think it barely exists in that realm at all, based on the apparent lack of quality.

If you want to believe that Mann, Marcott and Gergis are superior SA practitioners to McIntyre and Lewis (among others), go ahead.

With the scare quotes that ‘climate science’ has assiduously earned, we are well on the way to trusting non echo-chambered critiques instead, by scientists from other realms, or ‘rooms’, not captured and dependent upon the ’cause’.
=================

Webby, Willard, Fan et al.
You only weaken your asserted position(s) by posting such as above.
Why not address the substantive points that Mr. Lewis made, in the memorandum Dr. Curry chose to post, about Met misrepresentations? Present alternative facts, or (just for grins) an alternative physics explanation to his negative lapse rate feedback explanation posted above. So far, you whimper in objection.
Assuming, of course, that you understand how GCMs actually work. If you don’t, try reading the eminently accessible technical documentation to the GCM NCAR CAM3. It is available free online at NCAR/TN-464+STR (2004); a simple Google will get you there.
Then get back here about whether GCMs can now (or ever) model micro phenomena on a macro scale, as required for Nic’s negative lapse rate feedback, the impossibility of which was admitted by AR5 WG1 SOD 7.2.1.2. (Leaked, so we will see if it stands or gets modified by posts such as these.)
On such a scientific post thread, one desirable requirement is to know or learn a minimum about the science being discussed. Opinions, beliefs, world views, empathies, and such count for ZERO in such situations. Just generally accepted fundamental facts (you know, like here the Clausius-Clapeyron equation of AR5 WG1 SOD (leaked) Chapter 2.5.6), plus additional data, and maybe even additional theory, as both Nic and I (in comments) posted above on this thread.
Just state your alternative theories and experiments with error bounds, and supporting alternative math. That would be really interesting to discuss, but I am guessing the sounds of silence will result.

see above. It’s 42. Or 56. Depending on whose count you want to go by.
Psst, I wouldn’t put much faith in 56. The rigor with which that number was arrived at was on par with the question it was purported to answer

“Sounds like experience talking and since you were never a science teacher we can reliably infer where your experience lies…”

From my own university experience, the toughest job is to grade assignments, lab reports, and exams. You want to give the students a fair shake by putting in the effort to interpret what they mean, but then you realize that to deconstruct each wrong answer is like another assignment on its own. So you do the best that you can, knowing that the students are really there to learn and have invested their money.

On the other hand, message board trolls like Springer deserve little empathy, and they can be roundly debunked or ignored, take your pick.

Remember, this is what Rud said:

” Opinions, beliefs, world views, empathies, and such count for ZERO in such situations. “

The most likely future for climate-change ain’t complicated Rud Istvan.

During the next ten years, increasingly sophisticated GCMs will agree with increasingly complete paleoclimate data to increasing accuracy. Meanwhile increasingly accurate global thermometry, gravimetry, and altimetry (ARGO/GRACE/JASON) will affirm Hansen-type global energy imbalance predictions with increasing accuracy. As an overall result, the GCMs, paleoclimate record, and energy-balance observations will all evolve into increasingly perfect accord.

During the same coming decade, the world’s denialist cohort will cherry-pick, quibble, spin, and slogan-shout with unabated fervor. And yet (for good common-sense reasons) fewer and fewer citizens will pay any attention to the increasingly desperate denialists.

Finally, the presence of vigorous climate variability presents significant challenges to near-term climate prediction (25, 26), leaving open the possibility of steady or even declining global mean surface temperatures over the next several decades that could present a significant empirical obstacle to the implementation of policies directed at reducing greenhouse gas emissions (27). However, global warming could likewise suddenly and without any ostensive cause accelerate due to internal variability. To paraphrase C. S. Lewis, the climate system appears wild, and may continue to hold many surprises if pressed. http://deepeco.ucsd.edu/~george/publications/09_long-term_variability.pdf

Ever considered you might be wrong, FOMBS? And the likely trajectory for climate – since the 1998/2001 climate shift – is cooling for decades more? No – I don’t suppose so.

But none of those reversals happened. When are we going to see these reversals? And why should we believe your decadal predictions of future reversals, when your decadal retrodictions of past reversals are dismally wrong?

Straddling the 2 periods – pre and post 1998/2001 – is difficult in both energy flux and ocean heat data. Different instruments and little in the way of intercalibration. A world of difference in data gathering.

I have downloaded this some time ago. Do you mind if I make a comment?

This model is of extreme complexity, although many of the components are based in fairly conventional mathematics. I have dealt with far simpler models in a different domain and it would take me months, if not years, to understand this model mathematically. I would have to do a vast amount of research to understand the basic physics that is incorporated in this model, let alone to the depth at which I could criticise it meaningfully. The point I am making is that this is a Herculean task for non-specialists.

Therefore, is there any way that a non-specialist can engage with these models? The only way is to examine their outputs, which is a more basic scientific approach. I agree that this is fraught with difficulties, but it is an approach that can be undertaken by people with a more general background in statistics and signal processing.

Thus there are three levels of scepticism. The first is highly specialised and can engage with the basic implementation of the models; the second is the criticism (very important IMO) of specific aspects of the modelling of the physical processes; and the third is comparison of outputs and observations. I would suggest that most non-specialised people with a scientific background fall into the latter.

Are all these valid lines of criticism? Yes, they all are, and they all contribute to uncertainty of model prediction, but the third is probably the most compelling for the majority, particularly for policy makers.

RC, you draw very good distinctions about levels of understanding. Certainly, comparison of model output to actual observations is the simplest and most accessible to all. My concern is that certain important phenomena, here negative lapse rate feedback (aka the adaptive iris hypothesis), inherently cannot be modeled in any GCM in the foreseeable future due to computing power limitations. Why then do we spend so much money trying? And why does the IPCC put so much faith in the modeled results?
AR5 SOD 7.2.4.4 literally says cloud feedback is positive. “This conclusion is reached by considering a plausible range for unknown contributions by processes yet to be accounted for, in addition to those occurring in the climate models.”
That is cargo cult science.

@Dave Springer,
I go along with your analogy up to a point. However, if you are trying to build a Formula 1 racing car, which is as ambitious as GCMs, you can either say that this car won’t work properly because the parameters of the suspension damping system are wrong … or that it is a lousy car because lots of different drivers have failed to get it to perform.

The distinction is that if one analyses the car computer model, one may see either that the car is fixable or that it cannot be fixed because its design is totally flawed. This is useful information. If one simply discards the car, someone else can come along with a different design and try it, although it contains the same flaws, because they hadn’t analysed why the earlier design didn’t work.

This is the problem with GCMs. Nobody knows, correct me if I am wrong, whether the differences between results and predictions are due to some fairly simple and analysable oversight, or whether GCMs are completely flawed because they are incapable of modelling climate with a feasible amount of computing power, their solutions are chaotic, or the physics model is just plain wrong. This is quite an important distinction, because if it can be shown that climate models are intrinsically flawed, as opposed to being wrong in one small detail, it directs thought into other, and possibly more productive, avenues.

The CMIP5 ensemble is clearly running too hot. You are correct that no one knows for sure where, or in how many places, the ensemble is flawed. The mere fact that an ensemble is used instead of a single model shows that no single model is deemed trustworthy. The hope in using an ensemble is that the error or errors will average out, i.e. you’ll get a result that’s a little wrong all the time instead of spectacularly wrong some of the time. The expected result of ensembling materialized, and over the course of 22 years the small all-the-time error gradually ran the projected temperature outside the 95% confidence bound when compared to measured temperature. Notably the measured temperature is lower than expected, which makes the skeptics and their back-of-the-envelope calculations the intellectual giants.
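The averaging hope can be sketched with a toy example (purely illustrative; the 22 "models", the noise level, and the shared bias are made-up numbers, not CMIP5 quantities). Independent member errors shrink roughly as 1/sqrt(N) when averaged, but an error common to every member does not average out at all, which is the "a little wrong all the time" outcome:

```python
import random

# Toy ensemble: each "model" = truth + a bias shared by all members + its own noise.
random.seed(1)
truth, shared_bias = 0.0, 0.3
n_models = 22
models = [truth + shared_bias + random.gauss(0, 0.05) for _ in range(n_models)]
ens_mean = sum(models) / n_models

# The independent noise averages down by ~1/sqrt(22); the shared bias survives intact,
# so the ensemble mean sits near truth + shared_bias, not near truth.
print(abs(ens_mean - (truth + shared_bias)) < abs(ens_mean - truth))  # True
```

This is why ensembling only helps if the members' errors are independent; a systematic error common to the whole ensemble, such as uniformly running hot, is untouched by the averaging.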

Notice how not one “skeptic” catches Rud’s attention as being so inferior to him so as to be unqualified to comment on this thread. And yet, Rud says:

“Opinions, beliefs, world views, empathies, and such count for ZERO in such situations.”

So that must mean that Rud’s failure to notice any “skeptics” that reach his elevated level of viewpoint must be purely coincidence, and not at all related to his “opinions, beliefs, world views, or empathies,” – or otherwise Rud would not have added that comment to this thread.

Well, either that, or all the “skeptics” here understand GCMs at levels as lofty as Rud’s understanding and (presumably for that reason) agree with his interpretation of their validity.

“It is interesting, RC, that in your response to my comment, you failed to address the substance of my critique.”

Your critique had no substance. It never does. You are a science illiterate posting on a science blog. Willard is in the same boat. You both try to be relevant but all you do is annoy others and make fools of yourselves. That in my opinion is your first best reason for remaining anonymous cowards.

That in my opinion is your first best reason for remaining anonymous cowards.

Here’s what’s interesting about that, David. You repeat the same opinion over and over even though it is completely illogical to begin with, and at least in my case not only proven wrong but also – when challenged to put some money behind that opinion – you instead chose to go running away with your tail between your legs.

It really takes a special kind of person to persist under such circumstances, David, and you are exactly that kind of special person.

Models are unable to predict anything – they are chaotic. There are many feasible solutions. At most a perturbed physics model can produce a probability distribution of future climate states. This is shown schematically here.

Moreover the physics of actual climate states can not be modeled at all with precision greater than tossing a coin. That climate shifts occur at decadal frequencies seems to put an impossible burden on expectations of precise – or even imprecise – forecasting.

Therefore – the only sensible answer for climate sensitivity is …. wait for it… γ in the linked diagram.

Now we need 1000′s of times more computing power to find out what the question is.

Sensitivity is more correctly sensitivity to initial conditions and can indeed be negative or positive at different times depending on the distance to a bifurcation point, the direction of approach and the nature of the resultant instability.

‘The climate system has jumped from one mode of operation to another in the past. We are trying to understand how the earth’s climate system is engineered, so we can understand what it takes to trigger mode switches. Until we do, we cannot make good predictions about future climate change… Over the last several hundred thousand years, climate change has come mainly in discrete jumps that appear to be related to changes in the mode of thermohaline circulation.’ http://www.earth.columbia.edu/articles/view/2246

Although THC seems significant in large scale shifts – there are of course other shifts that have been less persistent and long lived in the modern era. These seem related to ocean and atmospheric patterns associated with cloud changes that – from the data – seem the dominant forcing in the satellite era by far.

‘What happened in the years 1976/77 and 1998/99 in the Pacific was so unusual that scientists spoke of abrupt climate changes. They referred to a sudden warming of the tropical Pacific in the mid-1970s and rapid cooling in the late 1990s. Both events turned the world’s climate topsy-turvy and are clearly reflected in the average temperature of Earth.’ http://www.sciencedaily.com/releases/2013/08/130822105042.htm

“Moreover the physics of actual climate states can not be modeled at all with precision greater than tossing a coin.”

Nonsense. Physics explains why Venus is warmer than Earth and why Earth is warmer than Mars. Physics also explains why land surface temperature is more variable than SST. Physics explains the seasons, ocean currents, atmospheric circulation, and a whole raft of other phenomena. Your continual harping on chaotic boundary conditions is tedious, irrelevant, and defeatist.

“The winds change the ocean currents which in turn affect the climate. In our study, we were able to identify and realistically reproduce the key processes for the two abrupt climate shifts,” says Prof. Latif. “We have taken a major step forward in terms of short-term climate forecasting, especially with regard to the development of global warming. However, we are still miles away from any reliable answers to the question whether the coming winter in Germany will be rather warm or cold.” Prof. Latif cautions against too much optimism regarding short-term regional climate predictions: “Since the reliability of those predictions is still at about 50%, you might as well flip a coin.” http://www.sciencedaily.com/releases/2013/08/130822105042.htm

What happened in the years 1976/77 and 1998/99 in the Pacific was so unusual that scientists spoke of abrupt climate changes. They referred to a sudden warming of the tropical Pacific in the mid-1970s and rapid cooling in the late 1990s. Both events turned the world’s climate topsy-turvy and are clearly reflected in the average temperature of Earth.

Changes in the means and variance of the time series are chaos, Jarhead the Jabberwock.

Conclusion Thank you Nic Lewis — and James Hansen and Naomi Oreskes and Wendell Berry and Pope Francis and Samuel Locklear and Ronald Reagan — for adding your diverse conservative voices to the chorus of conservative voices in science, history, religion, the military, politics, business, and most of all, plain ordinary conservative citizens!

Nic Lewis, it is good that you recognize that among the greatest long-term threats to the maturation and prospering of *true* conservatism is the willful ignorance, cherry-picking, faux-science, statistical quibbling, and selfish short-sightedness that constitute the foundations of climate-change denialism!

It’s nice to see climate skeptics accept the IPCC statement that over half the warming since 1950 is caused by human greenhouse gas emissions.

For that IS what your figures show isn’t it Nic Lewis?

We’ve come a long way from denial that man is causing global warming, haven’t we?

Also our skeptic friends might like to take a look at figure 2. Look at the numbers on the Y axis and compare them to climate variability in the Holocene. Yep, that’s a hockey stick blade you are looking at.

To reiterate, surely the big news here is that climate skeptics are beginning to accept that man, not nature, dominates global temperature change in the modern era. These figures are closer to what the IPCC has been saying all along than to the “global warming is a hoax” James Inhofe-style denial.

It’s amusing to see climate skeptics trying to spin these developments backwards. As if they’ve been telling us to expect 2C+ warming from human greenhouse gas emissions all along!

lolwot,
Hmm… I don’t know what ‘skeptics’ you have been talking to, but I have always said that adding GHG MUST raise the average surface temperature. (People who claim GHG driven warming is either non-existent or trivial are not really skeptics, they are simply misinformed, in the same way that people who claim GM soybeans are dangerous to your health are misinformed.) The argument has always been about how much warming there will be in response to GHG forcing, and that argument continues. I have said for at least several years that the likely ECS is in the range of 1.8C-2C per doubling of CO2, and 1.2C to 1.3C for transient response (near the low end of the IPCC range). What I have consistently argued is that 3.2C (model ensemble mean) or 4.5C or more (some individual models) for ECS and 1.6C to 2.0C for transient response are simply not consistent with the observational data. The magnitude, rate, and most of all, downstream consequences of GHG driven warming all depend on the sensitivity to GHG forcing, so prudent public policy on GHG emissions depends critically on reasonably accurate sensitivity values. The disagreement is mainly about sensitivity, and not too much else.

Fan is going offside by falling back on the paleo data. The UM team’s response to McIntyre’s comprehensive destruction of the paleo reconstructions is that the paleo data aren’t relevant, we believe in UM based on fundamental physics, or the GCMs and don’t need no stinking ice-cores or tree-rings. Get with the party line, Fan!

stevefitzpatrick affirms his short-sighted faith: “What I have consistently argued is that 3.2C (model ensemble mean) or 4.5C or more (some individual models) for ECS and 1.6C to 2.0C for transient response are simply not consistent with the [short term] observational data.”

Catastrophic global warming is an ideologically inspired narrative which is being rapidly disproven by observation.

The big news is that actual climate response is trumping hysterical narratives and mainstream climate scientists are beginning to accept the idea of a lesser beneficial anthropogenic global warming instead of the hyperbolic visions of drowning in a rapidly rising ocean.

Thanks for playing. There’s a consolation prize waiting as you exit stage left. It’s an autographed copy of “An Inconvenient Truth”. The autograph however is by Richard Lindzen and the inscription is “Bullshiit!”

Lolwot
I see just the opposite. The establishment is having to admit there is a pause and based on very recent papers here, they are admitting to less sensitivity and a bigger role for natural variability. It will be more clear with the IPCC release. What I find fascinating is the great psychological investment by many warmists and their inability to even entertain the thought they might be wrong. It reminds me of some fund managers on Wall Street who are so enamored by their superior intellect that they refuse to believe they could have made a bad investment and they ride their position all the way to zero. And then Poof!

> The best estimate of TCR based on observations of the most recent decade is 1.3 °C (0.9–2.0 °C; dark red, Fig. 1b). This is lower than estimates derived from data of the 1990s (1.6 °C (0.9–3.1 °C); yellow, Fig. 1b) or for the 1970–2009 period as a whole (1.4 °C (0.7–2.5 °C); grey, Fig. 1b).

A recent comprehensive study, based on making a simple calculation of the global energy budget based on observational estimates of surface temperature rise and radiative forcing, estimated that the TCR ranged from 0.7 to 2.5°C using data over the period 1970-2009 (Otto et al, 2013, see Figure 1). The uncertainty range derives from uncertainties in the global surface temperature estimated from observations and uncertainties in the estimated radiative forcing.

[Figure 1 goes here.]

The upper estimate of the TCR is lower when using observations from the 2000s, and conversely higher when using the observations from the 1990s – a period of more rapid warming (see Figure 1 in the second report).

The emphasized sentence is still true if we correct the 1.4 in Figure 1 to 1.3. In other words, this error does not lead to any of the misrepresentations spelled out in the A’s memorandum.
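For readers who want to see where figures like 1.3 or 1.4 °C come from, the energy-budget method the Met report describes reduces to one line of arithmetic. The numbers below are illustrative round values of my own, not the exact inputs of Otto et al:

```python
# Energy-budget TCR estimate in the style of Otto et al. (2013):
#   TCR ≈ F_2x * ΔT / ΔF
# where ΔT and ΔF are the changes in global mean temperature and in radiative
# forcing between a reference period and a recent decade.
# All three inputs below are illustrative assumptions, not the paper's values.
F_2x = 3.44      # W/m^2, forcing from a doubling of CO2 (assumed)
delta_T = 0.75   # K, temperature change between the two periods (assumed)
delta_F = 1.95   # W/m^2, forcing change between the two periods (assumed)

TCR = F_2x * delta_T / delta_F
print(f"TCR ≈ {TCR:.2f} K")   # → TCR ≈ 1.32 K with these inputs
```

The published uncertainty ranges come from propagating the observational uncertainties in ΔT and ΔF through this same formula.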

***

The mediation that is needed concerns the claim that “the best estimate of TCR based on observations of the most recent decade is 1.3 °C”, which is stated in Otto et al. 2013 and quoted above.

This is how I see it. Most anthro. heating is in the northern hemisphere. The southern hemisphere temperature is mostly influenced by the surface temperature of the oceans. One of the unknown unknowns is the delay of the S hemisphere catching up with the N hemisphere’s rising temperature. The 1910–1940 temperature rise seemed to disappear sharply after 1940, but it did not. It produced the first ‘pause’ in history, starting in 1948 and extending to 1970. Now the S hemisphere resists temperature change for two reasons. First, because water is a poor conductor of heat, heat transport depends on Coriolis at depth and on wind- and haline-density-induced slow currents. This is not an inertial delay but a true transport delay. This difference is vital in climate models, but is largely unknown as a parameter. Second, the southern oceans’ huge heat storage means it will be slow to change – say, at a guess, about 30 years. My thesis is that the global temperature rise between 1970 and 1998 was just the second installment of the 0.5C atmospheric rise between 1910 and 1940. See my website linked above. Of course the IPCC missed all this because it mostly confined its investigations to post-1961.

The difference between ECS and TCR, discussed at some length above by Nic, is this largely unknown transport delay, which may be, at a guess, about 30 years. It may not even be a constant; we just don’t know what the currents are at the bottom of the oceans. Until this is better known, the ECS will always be in doubt.

Well, ECS is a concept ripe for conjecture, if not frankly imaginary. Equilibrium never occurs, and even were it to occur, the path there is unknown. Similarly, though less fabulously, TCR may never be separated conclusively from the turbulence of chaotic nature.

And, as previously demonstrated over and over again, any warming man can generate from the apparently low sensitivity being shown over and over again, will be beneficial in net.
=======================

While waiting for the mediation of the “it’s 1.4, no it’s 1.3” conversation the A might have with the Met, let’s follow our reading of the op-ed. Here are the sentences from the 7th paragraph that contain adverbs that would deserve due diligence:

the TCR and ECS of its flagship HadGEM2-ES model, used for policy advice, are very near the top of the range for CMIP5 models (Forster et al. 2013). […] The Met Office HadGEM2-ES model’s TCR of 2.5°C is not only well above the upper 95% bound of 2.0°C given in Otto et al, but also above the 2.3°C bound given in Gillett et al 2013 – the only other study cited in the Met Office Report that derived TCR from observational records. Indeed, the HadGEM2-ES TCR is nearly double the Otto et al best estimate for TCR of 1.3°C. As for ECS, the HadGEM2-ES model’s ECS of 4.6°C lies well beyond the upper 95% bounds given not only by Otto et al but also by Aldrin et al 2012 (3.5°C) and Lewis 2013 (3.0°C). HadGEM2’s ECS also exceeds the 4.5°C top of the IPCC’s ‘likely’ range, and the 95% bounds both from CMIP3 models and from CMIP5 models excluding HadGEM2-ES (Murphy et al. 2009).

I first read the Met Office report in an earlier thread and I thought at the time that the Met Office was emulating many sceptics by placing too much credence on short term data.

The defensive posture of the paper seemed to reflect a perceived need to reinforce their position on AGW and that their consistent advice to the UK government that CO2 emissions needed to be drastically reduced was justified.

The pause IMO does not preclude further spikes in the global warming metric, but neither does it preclude a continuation of the pause or even further cooling resulting from the PDO and the likely dominance of La Niñas over the next few decades.

Consequently, the paper by Nic Lewis is probably also falling into the same logical position as many AGW supporters and many sceptics, by accepting the premise that the current pause in the global warming metric is of significance, statistically or otherwise.

The AGW Union of Secular, Socialist Researchers have nothing left but for a city somewhere to simply burst into flames and burn to the ground (like a modern-day Sodom and Gomorrah) as a result of the flipping of the switch on the final incandescent bulb that sets off thermageddon.

Already happened, thermageddon. Great Lakes area, 1871 – October, of all times! For a rural thermageddon, Victoria, Australia, 1851. People blamed comets, someone’s cow etc for Chicago-Peshtigo. It was, of course, our old mate the climate. Never been nice, the climate. More snips and snails and puppy dog tails than sugar ’n’ spice. Climate’s a boy.

A plea to those clever people who say they can manipulate the climate through processing trillions in Other People’s Money in weird ways. If you really can perform this magic…

Please don’t dial us back to the 1930s! What with North America and Australia parching to dust, and China half washed away, I just don’t think I could take it. Choose some normal, stable era like…like…you know, like…um…

FAN, what would it take to make you decide the world had entered a cooling phase? A dip in global temperatures – for how long?
If it just goes back to zero anomaly for 2 months or 2 years?
Forget ice mass at the poles; that relies on a wonky algorithm that keeps being changed every time the sea ice extends out at the South Pole, and it has massive error margins.
Don’t know if they can do it up north, it being sea ice and not on land and thus even more unreliable.
They have PIOMAS, but it is even more wonky and is going up a little?
Draw a line in the sand for me, if you acknowledge having one.

angech, you ask: “FAN what would it take to make you decide the world had entered a cooling phase?” The answer is “Nothing.” The Fanster wouldn’t be able to be a funster if he gave up the belief basis on which he can yank people’s chains. Get a button-flush toilet, no more exposure to yanking.

“But none of those reversals happened. When are we going to see these reversals? And why should we believe your decadal predictions of future reverses, when your decadal retrodictions of past reverses are dismally wrong?”

Finally, the presence of vigorous climate variability presents significant challenges to near-term climate prediction (25, 26), leaving open the possibility of steady or even declining global mean surface temperatures over the next several decades that could present a significant empirical obstacle to the implementation of policies directed at reducing greenhouse gas emissions (27). However, global warming could likewise suddenly and without any ostensive cause accelerate due to internal variability. To paraphrase C. S. Lewis, the climate system appears wild, and may continue to hold many surprises if pressed. http://deepeco.ucsd.edu/~george/publications/09_long-term_variability.pdf

Ever considered you might be wrong, FOMBS? And that the likely trajectory for climate – since the 1998/2001 climate shift – is cooling for decades more? No – I don’t suppose so.

There is either a pause or there is not a pause — unless you are speaking in Trenberthian — and, statistics is irrelevant. For example, for a pause in elevation gain, you need a level not a probability table.

angech lives in Australia and has fallen asleep, not gone silent.
The question [unanswered, I like that] remains: “FAN what would it take to make you decide the world had entered a cooling phase.”
Not some rubbish about the past and 1998. The future. What values would make you decide on a cooling phase.
Simple
Easy
Values [Mathematical]

I’ll give my answer to the same question, but it may sound repetitive to my posts on the pause, but here goes.
If any of the major published data sets showed a trend that is negative and excludes zero – for example, if the trend calculated on the Skeptical Science trend calculator for GISS were -0.115 +/- 0.0114 C per decade – I would say we have entered a cooling period.

Since I have no idea what role the eighth chapter plays in the overall economy of the A’s argument, I’ll skip it for now and will read the ninth:

The Met Office Report refers to three methods of estimating TCR: from simulations made with climate models, from observations, and by combining climate model and observationally-derived values. It makes the contentious claim that none of these methods can be said to be superior to the others. In science, it is standard to test the validity of theoretical models by comparing their predictions to observational data. Accordingly, it seems possible to say that estimates derived purely from simulations by climate models, without combination with observationally-derived values, are likely to be inferior to those from the other two methods.

I have emphasized the adverbs, but what matters in that paragraph is the Met claim purported to be contentious: “none of these methods can be said to be superior to the others”. Let’s compare it with the excerpt from the Met report, which we can find at the beginning of section 3:

The TCR can be estimated in a variety of ways. These include estimates from simulations made with climate models, estimates made from observations, and estimates made by combining climate model and observationally-derived values. Estimates from each method are assessed in the following sections. Each method has its own assumptions, and so it is not possible to say that one method is superior to the others.

The A omitted to mention the reason behind Met’s claim. Neither has the A mentioned why this reason would be contentious. Nor has he shown how this “standard to test the validity of theoretical models by comparing their predictions to observational data” can evaluate estimates obtained via different assumptions.

In fact, the conclusion presented with the adverbs accordingly, purely, and likely should be more direct than that. If we accept that comparing predictions to observational data is the litmus test, then it follows quite immediately that observationally-based estimates should win.

If our argument is correct, then the A has hidden an a priori argument behind weasel words that make it look empirical.

I thank and congratulate Nic Lewis for his excellent contribution and for continuing to discuss his work with commenters on Climate Etc. and elsewhere.

Two thoughts occurred to me as I read this post:

1. It seems that skeptics from outside climate science have uncovered the major problems with the orthodoxy – e.g. Steve McIntyre and Nic Lewis. Why is this, given we are paying so much money to climate science organisations to do good science?

2. The BOM repeatedly selected or changed figures to make sensitivity higher and projected temperature in 2100 higher. How can this happen if scientists are not being biased and negligent?

More junk from the Daily Mail I see. David Rose doubling down on his errors and Judith Curry joining him.

Take this for example:

“She said it therefore made no sense that the IPCC was claiming that its confidence in its forecasts and conclusions has increased.

For example, in the new report, the IPCC says it is ‘extremely likely’ – 95 per cent certain – that human influence caused more than half the temperature rises from 1951 to 2010, up from ‘very confident’ – 90 per cent certain – in 2007.
Prof Curry said: ‘This is incomprehensible to me’ – adding that the IPCC projections are ‘overconfident’, especially given the report’s admitted areas of doubt.”

How can Curry justify this when this very post by Nic Lewis (which the Daily Mail even reports on!) presents figures that back up the IPCC attribution statement?

I agree. The color of her eyes goes very well with the blue sky in the background. Her eye color is indicative of highly evolved perspicacity. Not surprisingly my eyes are exactly the same steel blue color. :-)

With no further assumptions about the time constants involved it is not the case. The only thing we know (provided there is no overshoot in response) is that TCR can’t exceed ECS and can’t be negative.

It puts the ratio TCR:ECS in the [0,1] range, but that’s all.

Therefore the general form of the response function with all time constants specified is of primary importance. Without that, a small TCR is fully consistent with a large ECS.

Please note that ECS has no relevance to policies whatsoever if the full equilibration time is centuries or millennia. We do know for sure that a runaway warming is impossible because of the strongly negative Stefan–Boltzmann feedback (and also because of climate paleohistory).

What politicians may be concerned about is the maximum possible rate of warming on decadal scales and that seems to be severely limited by the long response time of oceans.
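The claim above, that a small TCR is fully consistent with a large ECS once a slow deep-ocean timescale enters, can be illustrated with a toy two-timescale response. The fractions and time constants below are made-up illustrative parameters, not a published model:

```python
import math

# Toy two-timescale climate response (illustrative parameters, not a published
# model): after a step forcing of one CO2 doubling, temperature approaches ECS as
#   T(t) = ECS * (1 - w_f*exp(-t/tau_f) - w_s*exp(-t/tau_s))
ECS = 4.0                    # K, deliberately large equilibrium sensitivity
w_f, tau_f = 0.45, 4.0       # fast (mixed-layer) fraction and timescale, years
w_s, tau_s = 0.55, 400.0     # slow (deep-ocean) fraction and timescale, years

def step_response(t):
    return ECS * (1.0 - w_f * math.exp(-t / tau_f) - w_s * math.exp(-t / tau_s))

# TCR is the warming at year 70 of a 1%/yr CO2 ramp; the forcing grows linearly,
# so the temperature is the step response convolved with a ramp of slope 1/70.
TCR = sum(step_response(u) for u in range(1, 71)) / 70.0
print(f"ECS = {ECS} K, TCR ≈ {TCR:.2f} K")
```

With these numbers the ramp warming at year 70 comes out below 2 K even though ECS is 4 K: the slow component has barely begun to respond, which is exactly the point about unknown time constants.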

“With no further assumptions about the time constants involved it is not the case. The only thing we know (provided there is no overshoot in response) is that TCR can’t exceed ECS and can’t be negative.”

A pendulum does not swing provided there is no overshoot.

Overshoot is very common in dynamic systems and it almost certainly exists in many forms in the climate system.

I posted this yesterday afternoon, but did not get a reply from R. Gates. Let me try again.

Jim Cripwell | September 14, 2013 at 4:40 pm |
R.Gates, you write “or we’ll have returned to the dark ages ruled by wacko right-wing nutters who will teach that the Earth is only 7000 years old.”
By that I take it to mean you equate me with people who believe the world is only 7000 years old. I sort of resent that. I have stated over and over again, that I rely on empirical data, and empirical data only. There is ample empirical data on what the true age of the earth is. So how you can equate me with such an erroneous idea, I have no idea.
The little empirical data that we have gives a strong indication that the climate sensitivity of CO2 is indistinguishable from zero. Now if you can produce some empirical data which shows that this is wrong, I would be grateful. ALL the numbers associated with the climate sensitivity of CO2 are either estimates, not measurements, since we cannot do controlled experiments on the earth’s atmosphere; or they are things like paleo data, where there is no proof that the alleged rise in temperature was, in fact, caused by the rise in CO2 levels, and not the other way around.
Do you have any measured, empirical data to prove that I am wrong?

To explain half the 0.6C warming since 1950, transient climate sensitivity would have to be no greater than 0.81C per doubling of CO2.

According to figure 2 above, only Otto 2013 1970-2009 (which we are told not to use!) barely covers this range.

Figure 2 therefore suggests it is very likely that climate sensitivity is greater than 0.81C per doubling of CO2.

In other words we could say “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations”

Hmm where have we heard that before?

So why is it that so many denizens and even our hostess have attacked the IPCC attribution statement?

Can we have an apology for wasting everyone’s time, wrongly giving people the impression that the IPCC attribution statement is too certain, and a statement that the IPCC attribution statement is, if anything, too conservative?
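For what it’s worth, the arithmetic behind a figure like 0.81 C can be sketched as follows; the CO2 concentrations and the 0.6 C warming below are rough assumed values, not lolwot’s exact inputs:

```python
import math

# Back-of-envelope behind a figure like "0.81 C per doubling": if only half of
# the ~0.6 C rise since 1950 were greenhouse-driven, what TCR would imply it?
dT_total = 0.6                      # K, approximate warming since 1950 (assumed)
co2_1950, co2_now = 310.0, 400.0    # ppm, rough CO2 concentrations (assumed)

doublings = math.log2(co2_now / co2_1950)   # ~0.37 doublings of CO2 so far
tcr_needed = (dT_total / 2) / doublings
print(f"{doublings:.2f} doublings -> TCR ≈ {tcr_needed:.2f} K")
```

With these rough inputs the implied TCR is about 0.8 K, which is where a number in the neighbourhood of 0.81 C comes from; any TCR estimate above that implies more than half the post-1950 warming is anthropogenic.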

Lolwot-
It doesn’t take more insight than a 5th grader’s to answer the question “what is the blue line showing at the far right end of the graph?” No trend, you say? We will see. The only thing I have in all these debates is the data. And I understand how to read a simple graph.

lolwot, will this do? From today’s Australian: “According to The Daily Mail, the IPCC draft report recognized the global warming ‘pause’, with average temperatures not showing any statistically significant increase since 1997.”
That’s 16 years, a lot more than 2 or 3.
Thanks for playing.

“But none of those reversals happened. When are we going to see these reversals? And why should we believe your decadal predictions of future reverses, when your decadal retrodictions of past reverses are dismally wrong?”

To have two decadal pauses in a row with CO2 rising strongly would be highly unusual, so my guess would be that there will be a significant change upwards or downwards, rather than maintaining the status quo.

Bearing in mind that temperatures have been increasing for 350 years, if that step were decisively downwards against the backdrop of rising CO2, I think that would cause widespread confusion in the climate community.

“The strong negative lapse rate feedback is very closely linked to the water vapour feedback (they are sometimes combined into a single feedback) and has a similar level of understanding.”

That’s what I’ve been saying. DWLIR doesn’t warm the ocean basin significantly. It drives evaporation in a 10 µm surface film where it is effectively completely absorbed. The DWLIR energy is instantly made latent. The greater evaporation rate reduces the lapse rate. Clouds condense higher in the atmosphere, where they have a less restricted radiative path to space and a more restricted radiative path back to the ocean surface. The result is a wash. Ocean heat content (OHC) is increasing from warmer runoff from the continents, where the GHG effect works as advertised to raise surface temperature. By that path it does not necessarily enter the oceanic mixed layer, as much of it is colder than the ocean mixed layer and it simply follows the continental shelf into deep water along the bottom. It’s colder than the mixed layer because much of GHG warming is in higher latitudes in the winter when the surface is frozen, resulting in winter beginning later and ending earlier.

The foundation of all climate science rests on our ability to explain the 33C discrepancy between an Earth with no greenhouse gases in its atmosphere and what we currently observe.

Skeptics always seem to massively fail when this is factored in. About 1 part in 3 is due to CO2, perhaps 1.5 parts in 3 is due to associated water vapor increases, and the rest due to other GHGs and albedo changes.

When more CO2 is added, the 33C number is accordingly scaled upward. A doubling of CO2 adding only 3C to this number is the result of the log sensitivity.

Sceptics will need a theory to explain this better than the consensus has done.
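The 33 C figure itself is just the gap between the observed mean surface temperature (~288 K) and the effective radiating temperature of a planet absorbing the same sunlight with today’s albedo. A quick check, using standard round-number constants:

```python
# Effective radiating temperature of an Earth that absorbs the same sunlight but
# radiates as a black body, with albedo held at today's ~0.30. The "33 C"
# greenhouse effect is the gap between this and the ~288 K observed surface mean.
sigma = 5.670e-8   # W m^-2 K^-4, Stefan-Boltzmann constant
S = 1361.0         # W m^-2, solar constant (approximate)
albedo = 0.30

T_eff = (S * (1.0 - albedo) / (4.0 * sigma)) ** 0.25
print(f"T_eff ≈ {T_eff:.0f} K, greenhouse effect ≈ {288.0 - T_eff:.0f} K")
# → T_eff ≈ 255 K, greenhouse effect ≈ 33 K
```

As the comments above note, this is an “all else equal” construction: it holds albedo fixed and says nothing by itself about how the 33 K is partitioned among CO2, water vapor, and clouds.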

It’s the ocean, stupid. Its albedo is close to zero. It is warmed by shortwave which penetrates hundreds of feet at the speed of light. The warming at depth by shortwave radiation cannot escape radiatively as the ocean is almost perfectly opaque to longwave. Thus the thermal energy at depth imparted by shortwave solar energy must be mechanically transported to the surface before it can escape. This is exactly the mode of operation that atmospheric greenhouse gases exhibit – transparent to shortwave and opaque to longwave. The difference is the ocean has thousands of times the heat capacity of the atmosphere. Just the top 10 meters of the ocean water has more thermal mass than the entire atmosphere above it.

In other words deep bodies of water are uber greenhouse fluids. The earth is warmer than the moon because of its oceans not its wispy atmosphere. The primary role of the atmosphere isn’t trapping heat it’s establishing a surface pressure so that liquid water can exist in a 100C temperature range enabling a liquid ocean in the first place.

Thanks for asking. I realize I don’t represent all skeptics but don’t say no skeptics can explain why the earth is warmer than the moon without the aid of greenhouse gases because I’m a skeptic and I just explained it.

Ocean surface temperature where most of the solar energy enters the system (the tropics) is about 30C, so your explanation of how 240 W/m2 average insolation cannot make the surface warmer than 15C is easily disproven. Because the warm tropical water is less dense, it spreads away from the equator towards the poles, remaining on the surface as it goes and warming the atmosphere along the way, mostly through evaporation and condensation. For the most part rain warms the troposphere. Write that down.
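Springer’s heat-capacity comparison is easy to check with round numbers. The inputs below are textbook-ish assumed values; the conclusion is insensitive to their exact choice:

```python
# Checking the claim that the top ~10 m of ocean holds more heat capacity than
# the whole atmosphere, per square metre of Earth's surface. Round-number inputs.
ocean_frac = 0.71                   # fraction of Earth's surface that is ocean
rho_w, cp_w = 1025.0, 3990.0        # seawater density (kg/m^3), specific heat (J/kg/K)
depth = 10.0                        # m of surface ocean considered
atm_mass_per_m2 = 1.013e5 / 9.81    # kg/m^2 of atmosphere (surface pressure / g)
cp_air = 1004.0                     # J/kg/K, air at constant pressure

ocean_cap = rho_w * cp_w * depth * ocean_frac  # J/K per m^2 of Earth's surface
atm_cap = atm_mass_per_m2 * cp_air             # J/K per m^2
print(f"top 10 m of ocean / whole atmosphere ≈ {ocean_cap / atm_cap:.1f}x")
```

Even averaged over the whole globe the shallow surface layer wins by a factor of roughly three; scaling to the full ~3700 m mean ocean depth is what yields the roughly thousand-fold figure behind “thousands of times the heat capacity”.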

SpringyBoy, For your theory to work, an expanse of ocean would need to deviate from a smooth Planck profile to one that shows notches in IR bands.

As it stands, the upper layer of water is so dense that it scatters ALL infrared radiation equally, thus leading to the smooth Planck response. And a smooth Planck response will not do what you want it to do.

Your problem is that you are a bully, and you think the lack of responses by well-qualified physicists allows you to run wild with your ill-thought-out theories. Well, the reason you don’t find responses is that these guys have better things to do with their time, and they don’t have an interest like I do in Voodoo Science and their kwack practitioners.

First, try the defaults at 70 km, looking down. Notice the notches in the spectrum. Those are due to GHGs.

Next, set the altitude at 0.1 km, looking down. Note how close it looks to the black-body spectrum.

SpringyBoy, your theory (aka liquid GHG) that the ocean differs from a black-body radiator is so bizarre and far removed from anything known in the research literature that it places you into the ranks of the kooks and krankpots.

Pekka, “Air pressure does affect thermodynamics in many ways, but not dramatically.”

I think you are missing what I am saying. C-C is based on a surface temperature that has nothing to do with the temperature of the air becoming saturated at 2250 to 3000 meters. The saturated and mixed-phase surface at that altitude is what is needed to determine the saturation pressure of that air. If you are using C-C with a surface temperature, you have to be assuming that there is a fixed lapse rate, and C-C does not deal well with supersaturation. The atmosphere above the atmospheric boundary layer is not well mixed; C-C is not going to be accurate across the atmospheric boundary layer, and it should predict too little water vapor and too little convection.

A better way is to consider the ABL as the “surface”; then, to determine the heat capacity, you need to consider the local temperature and pressure/altitude. Then at sea level, 26.5 C and saturation, you have ~80 joules per gram of dry air, while at 2250 meters, 26.5 C and saturation, you have ~105 joules per gram of dry air due to the difference in specific volume. I used 26.5 C because that is considered the convective triggering potential.
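The temperature dependence being leaned on here can be sketched with a Magnus-type fit to Clausius–Clapeyron. This is a standard empirical approximation; the 16.5 C value is my own assumed temperature a couple of kilometres up, not a number from the comment above:

```python
import math

# Magnus-type empirical fit to Clausius-Clapeyron: saturation vapor pressure
# roughly doubles per ~10 C of warming, which is why it matters whether you
# evaluate it at the surface or at the top of the boundary layer.
def e_sat(T_c):
    """Saturation vapor pressure (hPa) over water at temperature T_c in Celsius."""
    return 6.112 * math.exp(17.67 * T_c / (T_c + 243.5))

print(f"e_sat(26.5 C) ≈ {e_sat(26.5):.1f} hPa, e_sat(16.5 C) ≈ {e_sat(16.5):.1f} hPa")
# → e_sat(26.5 C) ≈ 34.6 hPa, e_sat(16.5 C) ≈ 18.8 hPa
```

Applying the surface temperature where the boundary-layer-top temperature belongs thus nearly doubles the implied saturation pressure, which is the size of the error being discussed.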

If Planck response were smooth on the ocean surface then SST would not vary by longitude along the same latitude line over the ocean. It inarguably does vary. The reason it varies, despite TSI being equal at the same latitude and well mixed CO2 is that evaporation and condensation varies and that in turn changes the vertical temperature distribution. Evaporation and condensation insensibly transports energy from the ocean surface and deposits it high up in the atmosphere.

Furthermore, you might have missed the memo that “global warming” is now happening in the ocean below the mixed layer with no Planck response at all on the surface. How’s that work, dopey?

Webster, “Cappy Dick, You have said in the past that the number is much larger than 33C and that the lapse rate feedback reduces the number.”

That was Manabe who said that, but he never said how much, IIRC. One estimate was about 90C, which would include the surface-warming to atmospheric-cooling range, since the tropopause is considerably colder than 255K.

As far as the value of the 33C goes, it is pretty limited because it assumes constant albedo and a constant land-to-atmosphere albedo ratio, and ignores latent heat transfer. It is one of those “if all things remain equal” kind of things.

But things don’t remain equal do they?

If you consider latent and sensible surface energy, the total “average” surface energy is 398+88+25 = 511 Wm-2 at a temperature of ~289K. At 240 Wm-2 and 255K, the difference is (511-240)/(289-255) = 271/34, or 7.97 Wm-2/K, instead of (390-240)/(288-255) = 150/33 = 4.55 Wm-2/K. There is a pretty big difference if you consider latent and sensible heat using current estimates.
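The back-of-envelope figures above can be reproduced directly (note that 398+88+25 = 511, so the numerator of the first ratio is 271):

```python
# Reproducing the W m^-2 per K figures from the comment above, same round inputs.
lw_only = (390 - 240) / (288 - 255)                # radiative-only surface budget
with_latent = (398 + 88 + 25 - 240) / (289 - 255)  # adding latent + sensible terms
print(f"{lw_only:.2f} vs {with_latent:.2f} W m^-2 K^-1")   # → 4.55 vs 7.97
```

Neither ratio is a feedback parameter in the formal sense; the point of the comparison is only how much the bookkeeping changes once latent and sensible heat are included.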

lolwot – in fact I did explain it. You not agreeing with the explanation does not negate the fact that I offered an explanation.

dallas – I made no mention of how much water the atmosphere will hold vs. pressure. I said that without atmospheric pressure there could be no liquid ocean. That is a statement of fact, as at 0 millibars water does not have a stable liquid phase.

So “offering an explanation” is all you need to do to get a scientific degree or a job at a research institution?

What world does Springer live on?

Ahhh, yes, the world where Intelligent Design is an appropriate answer to any question put forward.

So the game that Springer is playing is that you get 1 point if you offer an explanation and 0 points if you don’t. Springer will win every time this game is played, because most of us will say “don’t know” at some point.

How is it being extreme to say that the most important climate function of the atmosphere is providing an environment wherein a liquid ocean can exist? I consider that a simple statement of fact, and calling it “extreme” makes no sense to me at all.

So “offering an explanation” is all you need to do to get a scientific degree or a job at a research institution?

Not just any explanation. It doesn’t have to be a scientifically correct explanation, but rather a politically correct explanation, and then you’re in like Flint, provided it agrees with the politics of the institution in question.

Thanks for asking. If I can assist you again in the future don’t hesitate to beg for help.

” It doesn’t have to be a scientifically correct explanation but rather a politically correct explanation and then you’re in like Flint provided it agrees with the politics of the institution in question.”

Politically correct explanations do not help when you try to apply that science to a technology. You can’t engineer a product to work based on a political explanation for some scientific phenomenon.

Similarly, Springer’s idea that a sphere of water (enclosed by a transparent film and inside a vacuum to set up a boundary condition) will increase temperature beyond that expected for a black body (when irradiated by light until it reaches a steady state) is not correct.

The problem with any of these explanations is that there are an infinite number of them. And just because you have one doesn’t make you special.

David, because water exists in all three phases. If the pressure is low enough it can only exist as a solid or a gas. Earth is in the unique position where water can exist up to 100C and down to -18C at the surface thanks to salt and down to -40C in the atmosphere thanks to super saturation. Picking 0 mb kind of trivializes things.

Yes, when we have so little water that its partial pressure as gas is less than 6 mbar, it’s never liquid (except for a limited time as supercooled or superheated liquid). That depends only on the amount of water, not on ambient air pressure.
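
The ~6 mbar threshold is the triple-point pressure of water; a quick check using the Magnus approximation (an assumed empirical fit, accurate to within a few percent near 0 C):

```python
import math

def sat_vapor_pressure_hpa(t_c):
    # Magnus approximation for saturation vapor pressure over liquid water (hPa)
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

# At the triple point (0.01 C) the saturation pressure is ~6.1 hPa (mbar);
# below that partial pressure, liquid water is not a stable phase.
print(sat_vapor_pressure_hpa(0.01))  # ~6.1 hPa
```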

If there were no CO2, the quiescent state of the earth would be closer to snowball earth, as nothing could thermally activate the water vapor and thus prevent it from semi-permanently condensing out of the atmosphere.

Pekka, “That depends only on the amount of water, not on ambient air pressure.”

How much energy the air can hold depends on the local temperature and pressure. The air becomes supersaturated as it cools, and stays liquid if it can’t cool as fast as its surroundings. If you assume that Clausius–Clapeyron and surface temperature can estimate the moisture, then you have to be assuming a constant rate of convection. Mixed-phase clouds can form in the evening as the cloud base decreases in altitude. If you don’t know the local conditions, you can’t predict anything.

The water vapor becomes supersaturated, not the air. Whether water vapor is supersaturated or not depends only on temperature and absolute humidity (or, equivalently, on the partial pressure of water), not on air pressure.

Air pressure does affect thermodynamics in many ways, but not dramatically.

My contention is that if the atmosphere were nothing but nitrogen and water vapor, the surface temperature would not be substantially different, because the ocean is the big Kahuna as far as greenhouse warming is concerned. If the atmosphere were absent, like on the earth’s moon, there would be no ocean and no greenhouse warming. If we had an atmosphere but no ocean, there would be no significant greenhouse warming. The ocean is what makes the earth warm enough for life as we know it, not the atmosphere, except for the fact that a surface pressure of 14.7 psi allows an ocean to exist in a wide enough temperature range that it doesn’t boil away.

“Springer’s ideas of ocean acting as a “liquid H20″ is only plausible in the comical sense, good for a laugh when introduced as a topic in a climate science classroom.”

Presumably you meant to write “liquid GHG”. That would be close. Liquids and gases are both fluids. I fail to see why a fluid with the same properties as a greenhouse gas would not also be a greenhouse agent. Specifically those properties are opacity to longwave and transparency to shortwave. Explain to me, if you can, exactly why liquid water cannot be a greenhouse fluid.

JimD, “captd, you probably just have wrong terminology because a supersaturated boundary layer makes fog.”

Fog or clouds which can be water, ice or both and not necessarily thick enough to see well. The ABL creates a convective capping zone during the day near the saturation point which at night turns into the nice inversion layer that keeps conditions nice and muggy down here even on a clear night. The Atmospheric Boundary Layer (ABL) is called a boundary layer because it is a boundary layer. You don’t have a nice predictable lapse rate or constant specific humidity ratio across atmospheric boundary layers. So over the tropical oceans we have an ocean of water about 2000 to 3000 meters over our heads that has its own latent, radiant and convective thing going on.

Without an atmosphere there would be no global ocean.
[ ] true
[ ] false

My point was that the most important function the earth’s atmosphere performs is making a global ocean possible. Regardless of how anyone wants to change what I wrote this was my only assertion. I made no mention at all of how much water vapor the atmosphere can contain at various pressure levels as that has no relevance whatsoever to my point.

“Politically correct explanations do not help when you try to apply that science to a technology. You can’t engineer a product to work based on a political explanation for some scientific phenomenon.”

You mean like trying to limit CO2 emissions at a cost of trillions of dollars with the result being a reduction of 0.05C in the year 2100? That kind of application of politically correct ideas to engineering problems? If that’s what you meant then I agree but it raises the question of why you’d possibly support a costly engineering effort that yields no practical benefit. I assume the answer to that question is you’re a zealot with a chip on your shoulder and you don’t give a fig how stupid you look to an engineer who understands that the consensus demand for CO2 regulation is a hideously expensive boondoggle.

It’s obvious that the pressure that prevents boiling of water is essential, but there’s plenty of leeway in that.

Another obvious thing is that we need the GHE in the atmosphere to keep the surface warmer than the effective radiative temperature of the Earth. That’s the 33 C, up to a correction from the unavoidable change in albedo. Nothing that happens in the oceans can do the same for the surface.

“It’s obvious that the pressure that prevents boiling of water is essential, but there’s plenty of leeway in that.”

It’s not often you agree with the obvious. I wouldn’t describe the leeway as “plenty”. No telling what all would happen if the boiling point of water at sea level were, say, 50C instead of 100C. My guess is you didn’t put much if any thought into the implications, and neither did I, except inasmuch as I recognized it needs more thought before deciding there’s “plenty of leeway”.

“Another obvious thing that we need GHE in the atmosphere to keep the surface warmer than the effective radiative temperature of the Earth.”

No, this is not obvious. Liquid water has the requisite properties of transparency to shortwave and opacity to longwave that distinguish greenhouse gases from non-greenhouse gases. Instead of just waving your hands with unsupported assertions, I challenge you to describe why the ocean does not exhibit a greenhouse effect independent of the atmosphere. I do not believe you can make that case, but I welcome you to try.

“That’s the 33 C up to a correction from the unavoidable change in albedo. Nothing that happens in oceans can do the same for the surface.”

I disagree. Why can’t it? The ocean is warmed to depth at the speed of light as shortwave traverses the vertical column with impurities in the water gradually thermalizing the energy. The thermal energy cannot escape radiatively due to water’s opacity to longwave. The energy at depth must therefore be mechanically transported to the surface where it may shed the energy by the usual mechanisms including (in order of importance) latent, radiative, and conductive.

This unimpeded penetration of shortwave and impeded release of longwave is the very essence of greenhouse warming and you need to be very specific as to why it works in a gas but not in a liquid. Good luck.

” I fail to see why a fluid with the same properties as a greenhouse gas would not also be a greenhouse agent. “

So you fail in more ways than one. Your idea of the ocean acting like a “liquid GHG” is preposterous. Your belief that just because you can put words together it makes a valid argument is wrong-headed. Perhaps if you try to sketch out some math and show how a “liquid GHG” would work, we could feel some empathy for your efforts. But as it is, you are the usual pompous denier who pulls in the occasional sucker who will believe your FUD.

“Explain to me, if you can, exactly why liquid water cannot be a greenhouse fluid.”

I already did. A thick layer of water will block all infrared wavelengths. Since it is flat across the spectrum, the temperature cannot compensate to favor a particular wavelength band, and so it will adopt a Planck blackbody spectrum to balance the incoming radiation.

A thin film of water will stop IR. It is not really the point. The oceans are warmed by solar SW – they lose energy from the top millimetre by IR, evaporation and conduction. There is no blackbody here at all.

The Earth emits according to the Planck distribution. The peak of this distribution varies with temperature. At Earth temperatures this is enough to make a speck of difference.

What changes is scattering of IR photons in the atmosphere.

Webby’s understanding of the atmosphere is distorted along the usual space cadet paths. There is – they believe – less IR being emitted with more CO2 in the atmosphere. It may be, but only for as long as it takes for the oceans to equilibrate. It may be moot, as the oceans are never in equilibrium and the large changes in TOA flux from natural variability dominate.

” Liquid water has the requisite properties of transparency to shortwave and opacity to longwave that distinguish greenhouse gases from non-greenhouse gases. Instead of just waving your hands with unsuppported assertions I challenge to describe why the ocean does not exhibit a greenhouse effect independent of the atmosphere. “

Fascinating the way that the mind of a krank works. Here Springer makes a hand-wavy assertion in his first sentence, yet then accuses others of “waving your hands with unsupported assertions” when they are simply referencing black-body radiation physics.

If that ain’t a heavy dose of psychological projection, I don’t know what is.

Earth’s surface at the present temperature emits about 396 W/m^2 of energy as IR. Earth as a whole absorbs about 240 W/m^2. Nothing that occurs below the surface affects that. Without GHGs in the atmosphere, the Earth would cool at a net rate of over 150 W/m^2. That would lead to a very fast drop in temperature until the surface would be so cold that the radiation from the surface would be close to the absorbed solar radiation. At that point we would have a snowball Earth, if water is available for that.
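
Those flux figures map onto temperatures through the Stefan–Boltzmann law, T = (F/σ)^(1/4); a minimal check of the ~255 K effective radiative temperature:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiative_temperature_k(flux_w_m2):
    # Blackbody temperature that emits the given flux: T = (F / sigma)^(1/4)
    return (flux_w_m2 / SIGMA) ** 0.25

print(round(radiative_temperature_k(396)))  # 289 K, close to the mean surface temperature
print(round(radiative_temperature_k(240)))  # 255 K, the effective radiative temperature
```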

The gas atmosphere with GHGs does have its effect because the adiabatic lapse rate is strongly negative, making stable a situation where the temperature drops by around 70 C from the surface to the tropopause. That makes it possible for the Earth to have an effective radiative temperature much lower than the surface temperature.

A gas atmosphere has a highly negative environmental lapse rate, because its compressibility is large. When compressibility is large the gas does a lot of work when it expands, and cools from the conversion of heat to work.
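
A rough sketch of that reasoning: expansion work alone gives the dry adiabatic lapse rate, Γ = g/cp, while the ~70 C drop over a tropopause height of roughly 11 km (an assumed typical value) implies a smaller average environmental lapse rate, with moist processes accounting for the reduction.

```python
# Dry adiabatic lapse rate: cooling from expansion work alone, Gamma = g / cp
g = 9.81     # m/s^2
cp = 1004.0  # J/(kg K), dry air at constant pressure
gamma_dry_k_per_km = g / cp * 1000.0
print(round(gamma_dry_k_per_km, 1))  # 9.8 K/km

# A ~70 C drop from surface to a tropopause near 11 km implies an average
# environmental lapse rate of ~6.4 K/km
print(round(70.0 / 11.0, 1))  # 6.4 K/km
```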

Liquid water has a very small compressibility. A situation where the surface is much colder than the bottom is stable only in solar ponds where a strong salinity gradient is maintained artificially. In a solar pond “a greenhouse effect” is present as solar SW penetrates deep but no effective mechanism can bring heat from the bottom to the surface. In a solar pond the bottom is heated by the effect, not the surface. The oceans don’t have enough salinity gradient to act as solar ponds, they don’t have hot bottom layers, and we live at the surface.

The oceans act as large heat stores, they affect the climate in many ways, but without GHG’s in the atmosphere they could not keep the surface warm.

Pekka, I read your objection as: there is no mechanism in the ocean for a cold top layer to trap heat in a lower layer. You go on to explain that a solar pond defeats this by maintaining a strong salinity gradient, so the hot bottom layer stays denser than the cooler water above it.

Of course that’s quite correct.

The atmosphere over the ocean is the cold layer which traps the heat in the ocean. It doesn’t matter if the atmosphere is pure nitrogen; it will still be colder than the water and serve the purpose of trapping the heat in the warmer layer below, causing its temperature to rise until the Planck response restores equilibrium.

Dialing Back the Alarm on Climate Change
A forthcoming report lowers estimates of global warming

By MATT RIDLEY

Later this month, a long-awaited event that last happened in 2007 will recur. Like a returning comet, it will be taken to portend ominous happenings. I refer to the Intergovernmental Panel on Climate Change’s (IPCC) “fifth assessment report,” part of which will be published on Sept. 27.

There have already been leaks from this 31-page document, which summarizes 1,914 pages of scientific discussion, but thanks to a senior climate scientist, I have had a glimpse of the key prediction at the heart of the document. The big news is that, for the first time since these reports started coming out in 1990, the new one dials back the alarm. It states that the temperature rise we can expect as a result of man-made emissions of carbon dioxide is lower than the IPCC thought in 2007.

Dazed and confused, Western science staggers into a future about which they haven’t a clue. Michael Mann is the Karl Marx of Climatology.

Right now we have over 65% more Arctic sea ice area, a record high sea ice area around Antarctica, a record low tornado season, record late start hurricane season, 15 years of no global warming, a cooling tropical Pacific and a “strongly cooling Southern Ocean”.

The above are just some examples illustrating just how embarrassingly wrong climate science has been. ~P. Gosselin

I found this document by Nic Lewis to be troubling. Lewis appears to be intelligent as well as serious about climate issues. If he were to address these objectively, I think he has the potential to contribute substantially more to the field than as an advocate who creates false impressions in his zeal to emphasize those data that support low values for equilibrium climate sensitivity (ECS). Indeed, he has criticized the Met Office Report, which I haven’t read, for misrepresentations, and perhaps the Report is guilty on all counts, but if Lewis himself has made misrepresentations with the deliberate intention of creating a false impression, that action comes perilously close to dishonesty. I’ll let readers judge that possibility, starting with his own comments about a section of the report. He stated:

“Misrepresentations relating to equilibrium climate sensitivity (ECS).
Estimation of equilibrium climate sensitivity is dealt with in Section 4 of the Met Office Report. It repeats the misleading claim that “Positive feedbacks in the physical climate system, the largest of which is the water-vapour feedback, increase this number [ECS] to over 2°C”, and compounds this distortion by stating that “the fundamental physics of climate sensitivity, involving black body radiation and water vapour feedbacks … alone give a climate sensitivity of at least 2.0°C”. As already pointed out, these claims ignore the negative lapse-rate feedback, which is intimately linked to the water-vapour feedback. After including lapse-rate feedback as well as water-vapour feedback, all the models analysed in Soden and Held 2006 had a climate sensitivity of below 2.0°C.”

I expect that many readers knowledgeable about climate science and familiar with Soden and Held 2006 (SH06) may have compared that statement with their recollection of SH06. The questions for those and others are the following: (1) How many ECS values in SH06 were below 2.0C? Some? All? None? (2) How many values in SH06 does Lewis intend the reader to believe were below 2.0C?

Nic Lewis of course is entitled to his own opinions about various data in SH06, but that doesn’t touch on the question of honesty. Readers can revisit SH06 to draw their own conclusions on that question. I happen to believe Lewis has been wrong on many points in his long disquisition regarding TCR, ECS, aerosol forcing, the relationship between “effective climate sensitivity” and ECS, etc. (which is not to say that the Met Office was not itself wrong on many points). I won’t dwell here on why I disagree with his conclusions, but if he hopes to be taken seriously, he would do well not to commit the sins he imputes to others.

Many commentators appear more interested in advocating a preconceived conclusion than conveying an accurate understanding of climate phenomena, but most of them are neither very smart nor well informed. I think Nic Lewis is both. He can and should do better.

Is it me, or is this incredibly passive-aggressive? I see multiple implications of fault, a suggestion of dishonesty and implied statements of fact. What I don’t see is a single meaningful statement, a direct criticism or piece of evidence.

> Many commentators appear more interested in advocating a preconceived conclusion than conveying an accurate understanding of climate phenomena, but most of them are neither very smart nor well informed. I think Nic Lewis is both. He can and should do better.

This paragraph does seem to make sense. Its function is to exhort Nic Lewis to more prudence.

Some may even liken this paragraph to what seems to be called paraenesis:

Fred,
” I won’t dwell here on why I disagree with his conclusions, but if he hopes to be taken seriously, he would do well not to commit the sins he imputes to others.”

Rich, much too rich. You, Fred, would do well to actually lay out why you disagree with Nic’s conclusions, and let us evaluate the merit of your arguments (which I honestly think will be weak or nonsensical, but please show me I am wrong). Otherwise, some may think that you are acting simply as an advocate for a political POV. Your claims that Nic is being dishonest are risible when considered in light of the Met report, which is horribly biased, does indeed distort other people’s work, and does grossly and willfully misrepresent much.

Want us to take you and your comments seriously? If yes, don’t be shy, lay out your technical arguments so that we can evaluate them. I am betting you won’t… or can’t.

Steve – Thanks for your opinion. I haven’t participated much in climate blog discussions in the past half year mainly because it tends to degenerate into angry arguments rather than thoughtful discourse, and I don’t particularly enjoy arguing since I’m not very good at persuading people to change their firmly held convictions. My comments were intended primarily for bystanders who know something about climate and might want to form their own judgments rather than for individuals who want to defend or attack views expressed by others. I do hope those bystanders will read what I wrote, review the Soden and Held paper, and decide for themselves whether or not it supports my statements. I’ll leave it at that and regretfully decline your invitation to elaborate on my disagreements with what Nic Lewis has written regarding all the other climate phenomena.

That is how I figured you would respond. I will conclude that whatever factual arguments you have, they are not worthy of discussion.

“I’m not very good at persuading people to change their firmly held convictions.”

Nobody asked you to try to change anybody’s convictions. Just make a factual argument instead of the cowardly passive/aggressive rubbish you posted. Your comment was a blow far below the belt. I do not usually become angry about blog comments; yours was the rare exception.

Jim – At first, I too thought Nic Lewis was simply confused because climate sensitivity was described in terms of radiative restoring per unit temperature rather than temperature per radiative imbalance. However, on close reading, I realized he wasn’t confused, but rather that he created a false impression that the SH06 values for climate sensitivity were all below 2 C (when they’re actually all above it) by ignoring the SH06 cloud feedback terms in writing his comment. As I said earlier, he can hold any view he wants on cloud feedback, but honesty (in my view) requires him to report what the authors stated rather than misrepresent their conclusions. In any case, readers can look at SH06 and make up their own minds.

I haven’t participated much in climate blog discussions in the past half year mainly because it tends to degenerate into angry arguments rather than thoughtful discourse, and I don’t particularly enjoy arguing since I’m not very good at persuading people to change their firmly held convictions.

This effectively says he quit posting here regularly because the people who disagreed with him often are close-minded fools. He continues this depiction by saying:

My comments were intended primarily for bystanders who know something about climate and might want to form their own judgments rather than for individuals who want to defend or attack views expressed by others.

Which further implies anyone who would disagree with his whiny description of this post is also a close-minded fool. And this is all wrapped up with the beautiful comment:

I do hope those bystanders will read what I wrote, review the Soden and Held paper, and decide for themselves whether or not it supports my statements.

Where he basically says, “Look it up.” And then in a follow-up comment, he drops all pretenses and flat-out accuses Nic Lewis of dishonesty:

As I said earlier, he can hold any view he wants on cloud feedback, but honesty (in my view) requires him to report what the authors stated rather than misrepresent their conclusions.

Personally, I think anyone who writes comments like these has no room to talk about discussions “degenerat[ing] into angry arguments rather than thoughtful discourse.”

Like any other Denizen, Fred has the right to come here, state his opinion, and be done with it. Many do, and I can provide names on request. This right is as old as bulletin boards.

In return, others are also entitled to use ad superbiams to make him spell out his case, e.g.:

Want us to take you and your comments seriously? If yes, don’t be shy, lay out your technical arguments so that we can evaluate them. I am betting you won’t… or can’t.

On the other hand, piling on will always be.

***

Fred’s argument amounts to: read S&H06. If any of you find this argument abhorrent, I have similar arguments that I would like to have evaluated.

***

Speaking of S&H06, according to my PDF reader’s search function, the word “median” does not appear in it. So it might be fair to say that Soden & Held did not insist on the median ECS for the model ensemble they studied.

Joshua, I made a claim. Like any claim, it could be wrong. The fact I made the claim does not mean I am stating my interpretations are absolutely correct or that I get to dictate the meanings of words and sentences.

You’re welcome to dispute my criticisms of Fred Moolten’s passive aggressiveness with nothing more than passive aggressiveness of your own. You’re welcome to not even take a position, but merely imply I’m wrong.

So just like Springer elsewhere in this thread, all that is important is making a claim. Springer rationalizes his krackpot ideas by saying all that is important is claiming that he could provide an explanation for an observation. Doesn’t matter if it is wrong or right. The rhetorical point-scoring is what matters. Same for Shollenberger.

Must remind them of their high-school debate days.

George E.P. “Trick” Box once said: All claims are wrong, some may turn out useful.

The comments are broken because I called you a boor with delusions of grandeur who was eminently ignorable. Most of the other responses tend to agree but in more couched language. The directness of my reply earned it a disappearing by the hostess, and such disappearances cause what’s been described as orphaned comments further down in the thread which have no predecessor linked to them.

Thanks for asking. You should do more asking and less telling if you’d like to be taken seriously.

What has happened to you, freddie? You used to be civil and reasonable, like Pekka. Now you are in the sad position of having the boards’ nasty clowns defending your petulance. We want the old freddie back.

I thought what Nic wrote: “A multi-model study of feedbacks, Soden and Held 2006, showed a median ECS for the model ensemble of 1.8°C after combined water vapour/lapse rate and surface albedo feedbacks.” was clear enough, but if you disagreed, then why not simply note that he should have been more clear about that value not including positive cloud feedbacks? It was the accusation of deceit which was, IMO, neither appropriate nor fair.

> It was the accusation of deceit which was, IMO, neither appropriate nor fair.

Here was Fred’s accusation:

[I]f Lewis himself has made misrepresentations with the deliberate intention of creating a false impression, that action comes perilously close to dishonesty.

Since Fred also mentions that honesty is on the line, does that mean that when the A’s talk of errors, misrepresentations, and misleading claims, that too is a matter of honesty?

Seems that this is Fred’s reading.

Seems that this is also Steve Fitzpatrick’s reading above:

> The goal of the report seems to me only to cloud the water with very questionable claims which will delay public perception of a new scientific consensus on lower sensitivity; they simply want to delay changes in public opinion for as long as possible.

Joshua,
What I objected to in Fred’s comment is simple to understand. If he thought that Nic Lewis had not accurately represented the content of Soden&Held, he should just have stated clearly what he meant; something like:
“Nic, I think people might conclude (incorrectly) from your essay that Soden&Held showed the models they examined had a climate sensitivity below 2C, when in fact that was the sensitivity before the influence of positive cloud feedbacks. S&H show that the models they studied have an average ECS of well over 3C per doubling. You may want to clarify that the under 2C figure you used from S&H includes no positive or negative cloud feedbacks.”

That sort of comment would have stated clearly what Fred was objecting to, and not even caused a stir. Fred then makes the situation worse by suggesting that he thinks most of Nic’s technical arguments are wrong, but will not take the time to tell us why (and seems to be saying, ‘I would never waste my time or stoop so low as to actually engage Nic Lewis, or anyone else, on the technical substance, but trust me, Nic is wrong’). On top of all that, he implies that people who disagree with him (Fred) are mostly uninformed and stupid. This is exactly the kind of rubbish I expect from Joe Romm or the RealClimate team. Fred’s comment might be better received on those blogs, where obnoxious arrogance is the norm, and completely acceptable.

It seems a little hypocritical, Steve Fitzpatrick, for you to complain about tone, given the tone of your comments directed at the Met Office report. You seem quite happy to turn errors into accusations of deceit, but not when it is done back at you.

What I find fascinating about the concern trollery is that Fred Moolten has had a longstanding reputation on Climate Etc of being very measured and patient in his explanations, rarely getting wound up.

In fact, most of the skeptics here applaud him (and Pekka) and use Fred as a paragon of decency — as opposed to me, whom they consider rabid.

So one comment showing frustration is met with a huge backlash. There goes all the built-up goodwill. POOF !

It just goes to show you that the relevant watch phrase is: Don’t give them an inch, because they will take a mile.

> If he thought that Nic Lewis had not accurately represented the content of Soden&Held, he should just have stated clearly what he meant; something like: […]

then follows an example of what Nic Lewis did not do.

***

> That sort of comment would have stated clearly what Fred was objecting to, and not even caused a stir.

While that it would not have caused a stir is an empirical matter, what we can see is that Nic Lewis’ comment did cause a stir, e.g.:

Nic Lewis, a climate scientist and accredited ‘expert reviewer’ for the IPCC, also points out that Met Office’s flagship climate model suggests the world will warm by twice as much in response to CO2 as some other leading institutes, such as Nasa’s climate centre in America.

lolwot,
My tone? I said pretty clearly that I considered the Met report to be a political document, and that its technical content was a result of that. I did use the word ‘reprehensible’ for scientists who generate political documents in the name of science; it is a word I do not use lightly, and I used it because I think those scientists are damaging the public credibility of climate science by using it to try to advance a political agenda. IMO, the scientists at the Met should be focusing on figuring out what is wrong with their costly and publicly funded model (one of the ones that is furthest from reality, with comically high diagnosed sensitivity), not trying to avoid a reckoning with reality via political screeds. You are of course free to disagree.

It is the first thread where Nic posted here at Climate Etc. You will notice that he wrote a strongly implied accusation of willful misrepresentation – one which many “denizens” interpreted so as to support their presupposition of climate scientists lying and the like. Nic then goes on to offer an entirely implausible explanation (IMO) for how he went about investigating the questions he had about the work he was writing about. You will notice that in that thread, I offer a suggestion of what Nic could have done that would have been more productive than what he did – in a manner similar to how you just offered a suggestion to Fred.

My point in asking you about the uniformity of your standard was not to question whether Fred was guilty of a similar offense, or to question whether such accusations (usually not based on a scientific approach to quantifying and validating the supporting evidence) are justified, but to point out that the objections people raise about such accusations in these blogospheric threads, indeed, the “concern” that people express, is almost uniformly selective. It was also not to highlight that you, individually, might not be entirely uniform in how you apply standards. But if you take the time to look at that previous thread, you might find it interesting to reflect on how your standards should be applied to Nic.

My point is not to say that “they do it too” or “he did it first” in order to defend such argumentative and, in the end, IMO counterproductive bickering – but to go back to a brief conversation that you and I had about the prevalence of earnest conversation versus confirmation-bias-based bickering in these threads.

Perhaps you don’t know about Fred that he has a well-established record here at Climate Etc. of participating in long and extensive exchanges, at notable levels of technical depth, with people that disagree with him on a range of issues – and to persist in maintaining a measured tone even when his interlocutors direct much personal venom his way (without responding in kind).

You are certainly entitled to your conclusions about where he might be better off participating, but you might want to look through the archives a bit to gather more information, or otherwise you might not realize that you are forming conclusions that could be inconsistent with easily available evidence. Perhaps Fred’s expression of the reasons for his reluctance to engage further could be entirely accurate, and not the “rubbish” that you seem to think it to be.

With climate science that verges on being a compliment. After all it implies they actually knew what they were doing :)

Yes, it’s a good joke. But with a similar application of binary thinking, Cap’n, we could make any branch of science the butt of the same joke. For that matter, libertarians and “skeptics” could be the subject of the same joke, as no doubt could fishermen.

And don’t even get me started on how many HVAC contractors I’ve met that don’t have a clue what they’re doing.

Joshua, “And don’t even get me started on how many HVAC contractors I’ve met that don’t have a clue what they’re doing.”

It all boils down to when something goes wrong. Then you have that binary choice, “I listened to an idiot” or “I was taken in by a fraud.” Most of the time everything works out pretty well – that is the huge gray area people forget. But when someone says they have a 95% confidence, there is a line drawn and only a binary choice to make. Which would you rather be? Just another idiot or a cunning fraud?

Willard, “misrepresentation” is a common theme. Do you believe, yes believe, that there is any validity to a 95% confidence level that CO2 caused “most” of the warming from any point in time? Would you stake your reputation and career on it? Now do you believe that someone with their reputation and career on the line might fudge things?

Joshua,
“Perhaps you don’t know about Fred that he has a well-established record here at Climate Etc. of participating in long and extensive exchanges, at notable levels of technical depth, with people that disagree with him on a range of issues – and to persist in maintaining a measured tone even when his interlocutors direct much personal venom his way (without responding in kind).”

I am aware of Fred’s participation in the past. This is one of the reasons I was very unhappy with his comment, which was far from measured, and IMO, intended only to discredit without offering substance. He did not even explicitly state what he objected to until pressed by other commenters. I already wrote what I think would have been a fair and reasonable way to address the issue of Soden&Held with Nic Lewis, but most anything other than the comment he actually made would have been an improvement.

With regard to Fred refusing to engage in a technical exchange: Simply declaring that he thinks Nic is wrong without offering any reasoned explanation, even when that explanation is requested, is indeed rubbish, IMO. If Fred wants to say Nic is wrong, then he has a burden to show why. Consider the possible exchange:

Steve: “Joshua is evil and corrupt.”
Joshua: “Why do you say that?”
Steve: “I couldn’t be bothered to explain, it is obvious to all the smart people who comment here; only the stupid ones think otherwise, and I would be just wasting my time on them… and most of the people here are in fact stupid.”
Joshua: “You call me evil and corrupt, but won’t say why?”
Steve: ” If any smart bystanders want to confirm that Joshua is evil and corrupt, they can just read over all the things he has written in the past and confirm it.”
Joshua: “You are an obnoxious a$$hole.”

And your conclusion in this case would be perfectly justified. Feigned civility (“Thanks for your opinion.”) is not the same as civility.

This means that the Met report might very well be warranted to claim that:

> Each method has its own assumptions, and so it is not possible to say that one method is superior to the others.

3. I don’t believe Nic Lewis’ allegations of undetermined misrepresentations were meant to have the Met report issue a corrigendum for some nits he found on figures or tables.

4. And speaking of passive aggressive behaviour, I don’t believe Denizens (go team!) have anything to counter this, and just like Steve Fitzpatrick succeeded in ignoring my comments so far, will indulge in another concert of crickets.

Willard, “3. I don’t believe Nic Lewis’ allegation of undetermined misrepresentations were meant to have the Met report issue a corrigendum for some nits he found on figures or tables.”

They were his nits, plus the 68 other A’s. I doubt Nic expects much of anything from the MET, but he does have the right to defend his nits. In his opinion, his/their (the 69 or however many authors) nits were misrepresented but Nic is expressing his own opinion and not representing the 69. There are a lot of fields that have little but nits to defend, so don’t belittle Nic’s nits.

You may recall that there has been considerable discussion on whether there is or is not a pause in warming. The total warming from 1980 to 1998 was about 60 nits; since 1998 there have been maybe 6 nits, with a margin of error of about 12 nits. That is not a lot, but to some the fate of the world as we know it depends on those nits. If someone had not cared dearly for those nits and thought they could accurately measure them and predict future nits, we would all be doing something a lot more productive. Like possibly having a barbeque summer of nits :)

But albedo just makes it worse. Note that on its own it is positive, so adding this to the water vapor and lapse rate feedback is far above a 2 C sensitivity. Did he misread Figure 1 to assume the surface albedo column was cumulative of the columns to its left? We know surface albedo is a positive feedback. Where did 1.8 C come from?

Now I think I see. From SH06 Table 1, the effective sensitivity is 1-1.5 W/m2/K with all feedbacks, which is ~3C per doubling. But what he has done is zeroed out the cloud feedback to give something more like 2 W/m2/K which gives < 2 C per doubling. So he gets it by assuming cloud feedback is zero which is well below what any of the models give. He is complaining that the Met Office said you need a strong negative feedback for the 1.8 ECS claimed by Otto et al. After then claiming that the Met Office intended to put the lapse rate feedback with their positive feedbacks, and ignoring the positive cloud feedback, he can get the SH06 "positive" feedback to be 1.8 C per doubling, but only by redefining what they mean by positive.

The most common attribution is to assign 1.2 C of warming to CO2 by itself, about 1.5 C to water vapor which gets “control knobbed” by the CO2 and the remainder of 0.3 C to albedo and other GHGs. This gives about 3C for doubling of CO2.
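For anyone who wants to check the arithmetic behind the sensitivity numbers in the comments above, here is a minimal sketch of my own (not anything from SH06 or the Met Office) converting an effective feedback parameter into warming per CO2 doubling, using the standard forcing value F_2x ≈ 3.7 W/m²:

```python
# Sketch: converting an effective feedback parameter (W/m^2/K) into an
# equilibrium warming per CO2 doubling, using the standard forcing for
# doubled CO2. The lambda values below follow the SH06 numbers quoted
# in the comment above.

F_2X = 3.7  # radiative forcing for doubled CO2, W/m^2

def ecs_from_lambda(lam):
    """Equilibrium warming (C) per doubling for feedback parameter lam."""
    return F_2X / lam

# All feedbacks included: lambda ~ 1-1.5 W/m^2/K
print(ecs_from_lambda(1.25))   # -> ~3 C per doubling
# Cloud feedback zeroed out: lambda ~ 2 W/m^2/K
print(ecs_from_lambda(2.0))    # -> ~1.85 C per doubling
```

This is just the division quoted in the comments made explicit; the feedback parameters themselves are where all the real uncertainty lies.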

Hey Fred, is there any particular reason you keep mucking with your blog profile? In this thread you changed the link associated with your name from facebook to none to google +1. Just curious. The motivations of computer illiterates fascinate me.

The Met Office’s response to David Rose’s political hit job included a comment that is relevant to this thread:

The article also goes on to mention some of the claims made in a commentary published by Nic Lewis yesterday. This is a lengthy and technical commentary covering several topics and will require time to provide as helpful a response as possible, so further comment will be released in due course.

There are a couple of points raised in the Mail story which should be addressed now, however.

The article states that the Met Office’s ‘flagship’ model (referring to our Earth System Model known as HadGEM2-ES) is too sensitive to greenhouse gases and therefore overestimates the possible temperature changes we may see by 2100.

There is no scientific evidence to support this claim. It is indeed the case that HadGEM2-ES is among the most sensitive models used by the IPCC (something the Met Office itself has discussed in a science paper published early this year), but it lies within the accepted range of climate sensitivity highlighted by the IPCC.

Equally when HadGEM2-ES is evaluated against many aspects of the observed climate, including those that are critical for determining the climate sensitivity, it has proved to be amongst the most skilful models in the world.

Finally, in our aim to provide the best possible scientific advice to the UK Government, the Met Office draws on all the scientific evidence available to us. This includes many other physically based climate models from leading research centres around the world, which provide a range of climate sensitivities and a range of potential future warming.

Time for more reading of Nic Lewis’ “memorandum” (scare quotes ™ Auditing Sciences). We can analyze the whole Figure 1 and Table 1 section in one go.

***

First, we note some adverbs, one of which (“accordingly”) was already seen in the previous paragraph (not “chapter”, of course — we thank Moshpit even if he did not notice that nit):

Figure 1 does not use the Otto et al primary TCR best estimate of 1.3°C and 5–95% range of 0.9 to 2.0°C, based on data for the decade 2000–09. Although caution is required in interpreting results for any short period, arguably – as stated in Otto et al – the estimate based on the most recent decade’s data is the most reliable since it has the strongest forcing and is much less affected by the 1991 eruption of Mount Pinatubo. Accordingly, showing in Figure 1 only the (wider) TCR estimated range based on 1970–2009 data for Otto et al for comparison with other estimates is misleading.

What Nic Lewis does here is to repeat the main argument in Otto&al 2013 for the claim that the data from the last decade provided the best estimate. This argument does not respond to any of the arguments we can read in the Met Report.

Considering that the Met Report rejects Otto & alii’s claim that the data from the last decade provided the best estimate, and that Nic Lewis does not refute the Met Report’s arguments, Nic Lewis’ “misleading” claim would deserve more diligence on his part.

***

Second, Lewis’ point about the classification of Harris et al 2013 only matters for Lewis’ own position, and only insofar as the colours of his flasks matter in the grand scheme of things. Since the Met Report does not claim, as Lewis does, that observation-based estimates are superior to model-based ones, this classification does not matter much for the conclusions of the report.

This point is also important in the discussion that follows: without having Harris et al 2013 on its side, Otto & al 2013 may look alone in its corner.

***

Third, notice how Lewis distinguishes his third point from his fourth point. Watch the pea:

I’ll make it easy for you. They don’t answer Nic, and they defend their model by saying it is within the absurd range of the IPCC. willard, the model runs hot. Does it do so deliberately or ignorantly? Something worthwhile auditing, wouldn’t you say?
======================

willard, you are just making a fool of yourself auditing nits while British electricity rates unnecessarily skyrocket and old people die of the cold. This is the reality, not your sad sophistic ideal.
=========

The model is most probably running hot, because the historical process of its development has made it run hot. The present models are more strictly constrained by fundamental equations than earlier ones, but they do still contain semi-empirical parameterizations of subsystems that cannot be modeled from first principles.

I don’t believe that it has been built by purpose to run that hot, but it’s possible that some of the subjective choices made by the model developers during the development process have been biased by the expectations of people who made those choices.

While all models are tuned in many different ways, they are not tuned specifically to run hot or cold. That cannot be done without affecting other properties of the models. In case of a model that has been developed to reproduce many details correctly those virtues would be lost in tuning based on one specific feature.

Further development of the models should take all new knowledge into account and lead step by step towards models that agree more extensively with observations.

Willard, “Hmmm. Nic goes for the lower bound justified disingenuousness can buy. Why would that leave anyone lukewarm about the technical content of such comment?”

Lowest reasonable bound would be more accurate. The low end of the IPCC range 1.5C not only assumes positive feedback, but significant positive feedback. The “no feedback” response should be ~1C/3.7Wm-2 with a range of uncertainty, +/-0.2 making a reasonable low end 0.8 possibly even a little lower. Everything greater than “no feedback” is an assumption. Since there is a Planck response that does impose an upper limit plus cloud uncertainty to the point the sign is not known, there is nothing disingenuous about including a reasonable lower bound.

So when the MET mentions their models are in the IPCC “recommended” range, they are admitting their model is biased high just like the rest. Remember the Charney Compromise: 2C Manabe, 4C Hansen, with a wild-ass guess of 0.5C uncertainty :)

Webster, there is nothing to deny. Per ERSST, which also has land only data, 30N-60N land temperatures increased ~1.75C from the early 70s through the late 2000s. That is about twice as much as the little land area in the 30S-60S band. The worst warming is in the 30N-60N range, so you guys should pay the taxes.

Looking at the Otto et al paper, I don’t understand the methodological choice they have made in using the full 40 year period. Including 1970’s and 1980s with equal weight in the calculation makes the final result much more uncertain than it would be if the decades were combined taking into account the higher power of the latest decades in the combined estimate.

Comparing with their value 1.4 C (range 0.7-2.5 C) the alternative method that weights the decades based on the uncertainties of the decadal estimates would lead to a slightly higher best estimate (still 1.4 C when given in tenths of degree) and significantly narrower uncertainty range (perhaps 1.0-2.0 C). That much can be seen from Figure 1b without any calculations. The uncertainties concerning the reference period 1860-79 combine differently from those related to the final period. That should be taken into account in estimating the uncertainty of the combined estimate, but the effect of that is likely to be very small.
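The weighted combination described here is standard inverse-variance weighting, and it can be sketched in a few lines. The decadal numbers below are purely illustrative placeholders of my own, not values from Otto et al:

```python
import math

def inverse_variance_combine(estimates, sigmas):
    """Combine independent estimates, weighting each by 1/sigma^2."""
    weights = [1.0 / s ** 2 for s in sigmas]
    total = sum(weights)
    combined = sum(w * e for w, e in zip(weights, estimates)) / total
    combined_sigma = math.sqrt(1.0 / total)
    return combined, combined_sigma

# Hypothetical decadal TCR estimates (C), with wider uncertainty for the
# earlier, more weakly forced decades -- illustrative numbers only.
decades = [1.4, 1.4, 1.6, 1.3]   # 1970s..2000s best estimates
sigmas  = [0.8, 0.6, 0.5, 0.3]   # 1-sigma uncertainties

best, sd = inverse_variance_combine(decades, sigmas)
# The combined estimate is dominated by the best-constrained (latest)
# decade, and its uncertainty is narrower than any single decade's.
```

The point of the sketch is only the mechanics: the best-constrained decade carries most of the weight, so the combined range narrows, exactly as described above.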

Taking into account what the Otto et al paper contains, the choice of the Met Office to use the values based on the 40 year average is reasonable. I would not fault the Met Office for using the value with a wide error bar; for the Otto et al paper, on the other hand, picking the last decade when the previous one gives a different result would be a biased choice.

A recent comprehensive study, based on making a simple calculation of the global energy budget based on observational estimates of surface temperature rise and radiative forcing, estimated that the TCR ranged from 0.7 to 2.5°C using data over the period 1970-2009 (Otto et al, 2013, see Figure 1). The uncertainty range derives from uncertainties in the global surface temperature estimated from observations and uncertainties in the estimated radiative forcing.

These authors also estimated the dependence of their values of TCR on the period of assessment, using observations from each decade since 1970 (see Table 1, with values taken from Otto et al, 2013). The upper estimate of the TCR is lower when using observations from the 2000s, and conversely higher when using the observations from the 1990s – a period of more rapid warming (see Figure 1 in the second report).

The lower value for the 2000s is associated with the recent pause in global surface temperature rise combined with continued rise in CO2 concentrations. The lower bound of the TCR is around 0.7 – 0.9 for the most recent decades, whereas the uncertainty around the upper bound is much larger. Both exceed the bounds estimated from the models.

Part of the dependency of TCR on the observational period may well represent real decade-to-decade fluctuations in the strength of some climate feedbacks, such as rates of ocean heat uptake as discussed in the second report in this series. It raises the question of whether 10 years is sufficiently long to estimate the TCR without introducing substantial sampling errors. Otto et al (2013) state that, whilst the most recent decade may be better observed, “caution is required in interpreting any short period, especially a recent one for which details of forcing and energy storage inventories are still relatively unsettled”.

Despite the fact that the first decade of the 21st century was a period during which there was a pause in the global mean surface temperature rise, the upper range of the 40-year average TCR derived from observations, including this pause period, is broadly consistent with the latest model results (Figure 1). As was also shown in the second report, averages of at least 30 years in length are needed to detect global warming above internal variability.

If, as the results presented in the first and second reports suggest, the recent pause in global mean surface temperature rise is not representative of other aspects of the climate system, which still show warming, and that some of the warming may be hidden below the ocean surface, then the TCR estimated from the most recent decade may not be a useful estimate of the TCR for projections of longer-term future warming.

It is not anyone’s responsibility but Nic Lewis’s to argue that medians provided truths while means were vile misrepresentations, or something along those lines.

On advantages of median over mean as a robust central tendency estimator, when you have a skewed distribution, start here. It’s easily found and well known.

I suppose we all have different views coming from our different disciplines—as I see it, it is your obligation as a reader to familiarize yourself with these concepts. If it were an obscure point, difficult for me to substantiate on my own, I might agree with you over the need for the author to substantiate his work.

This is a rather trivial point though, easily checked. You not being an expert in statistics doesn’t suddenly make it somebody else’s responsibility to educate you.

Please leave the mansplaining to Moshpit, Carrick. You’re basically implying that climate scientists like Isaac Held, who can write papers in which the word “median” does not occur and that even Nic Lewis cites, should learn more about medians, because if they did they would see some kind of lukewarm light. I find this unconvincing.

Nic Lewis claims that using medians (among other things, but that seems his main object) provides “a better idea of the distribution of probability”.

Willard, I pointed out a readily obtainable link explaining the advantages of median over mean. I then gave my opinion, being careful to focus it on my own discipline. It’s churlish on your part to dismiss either of these as “mansplaining”.

As to the quote, the full quote in context is:

A revised version of Figure 1 in the Met Office Report showing the impact of the revisions discussed above, and a better idea of the distribution of probability, is shown in Figure 1.

This refers to the figure not to the use of median. It is an obvious point to anybody with a mote of statistical training.

> This refers to the figure not to the use of median. It is an obvious point to anybody with a mote of statistical training.

And yet the revised figure 1 shows medians instead of means. One of the revisions of this figure was the use of medians instead of means for the model studies. The other two points were to reclassify one study and to insert an estimate based on his favorite decade.

Your only way to back up your claim would be to tell us how only Nic Lewis’ last two points bear on a better idea of the distribution of probability. In other words, you have to show that Nic Lewis speaks of distributions of probability without referring to his newly added medians. Surely you are joking, Mr. Carrick!

While it may work at Lucia’s or Jeff’s, one does not simply use parsomatics to get to Mordor.

‘AOS models are therefore to be judged by their degree of plausibility, not whether they are correct or best. This perspective extends to the component discrete algorithms, parameterizations, and coupling breadth: There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior.’ http://www.pnas.org/content/104/21/8709.full

Pekka’s idea of models is fundamentally incorrect. Ultimate precision is well beyond us – and uncertainty in data, parameterizations and coupling create structural instability in the models.

‘Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic. The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.’ http://rsta.royalsocietypublishing.org/content/369/1956/4751.full
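The Lorenz point quoted above can be reproduced in a few lines. This is only a crude forward-Euler sketch of the classic Lorenz system (nothing resembling a climate model), showing how two trajectories a millionth apart become effectively unrelated:

```python
# Two trajectories of the Lorenz system with a tiny initial difference.
# Forward Euler with a small step is crude but enough for a sketch.

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run(state, steps=4000):
    for _ in range(steps):
        state = lorenz_step(state)
    return state

a = run((1.0, 1.0, 1.0))
b = run((1.0 + 1e-6, 1.0, 1.0))
separation = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
# After ~20 time units the initial 1e-6 difference has grown by many
# orders of magnitude: beyond that horizon only probabilistic forecasts
# make sense, which is exactly the point of the quote above.
```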

The situation is shown schematically here. Each independent solution is feasible – and the range of feasible solutions remains unknown.

If we understand James McWilliams in the top quote properly – there is no necessity for models to ‘run hot’. Merely bias in the subjective assessment of plausibility of the one possible solution.

The alternative is to evolve pdf’s from perturbed physics models. Even there the prediction of ‘climate shifts’ remains an utter impossibility – and therefore we should expect divergence of climate and models over the rest of the century.

‘Q2. What does showing a better idea of the distribution of probability even mean?’

well, since a PDF is defined and delimited by key metrics, showing more of those metrics gives you a better idea of the PDF.

lets do an example.

Suppose willard takes an intelligence test 100 times.

His scores range from 0 to 100, where 0 is total idiot and 100 is genius.

If I draw a bar from 0 to 100 I have only shown you ONE parameter of the PDF.

Now suppose I show you a bar like MET does, where 95% of the measures are between 5 and 95. Now I’ve shown you a second metric of the PDF.

But look at the ways this can happen.

Willard could have 5 scores at 0 and 45 scores of 5, and 45 scores at 95 and 5 at 100.

Or he could have 5 scores of zero, 1 score of 5, 89 scores at 95, and 5 at 100. That would be a totally different PDF although the two metrics would be identical.

So we want to know the shape. Is it uniform across the range? Peaked? Double peaked? Is it SYMMETRICAL about some value, or skewed? Is it platykurtic or leptokurtic?

To get this added information we want to know the shape OVER THE RANGE, and we want to know the measures of centrality. Is the mean equal to the median, or is it skewed?

Let’s suppose his tests are uniform: he gets 34 scores of zero, 33 of 100 and 33 at 50. Now imagine that instead he gets 100 scores at 50.

So a better idea of the PDF is one that shows more of the information that determines the PDF. It’s one that reduces the uncertainty in the reader’s/viewer’s mind. The uncertainty in a PDF is governed by its moments. When you show all the moments you have conveyed all the information bits. A better idea is one that shows all the bits.
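Mosher’s two score sets can be checked directly. A quick sketch: both sets have 95% of the values between 5 and 95, so a range bar cannot distinguish them, but mean and median together can:

```python
from statistics import mean, median

# Mosher's two hypothetical score sets: both have 95% of values
# between 5 and 95, so a "5-95% bar" cannot tell them apart.
scores_a = [0] * 5 + [5] * 45 + [95] * 45 + [100] * 5
scores_b = [0] * 5 + [5] * 1  + [95] * 89 + [100] * 5

for scores in (scores_a, scores_b):
    print(mean(scores), median(scores))
# Set A is symmetric (mean == median == 50); set B is heavily skewed
# (mean 89.6 vs median 95), which the range bar alone would never show.
```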

> So a better idea of the PDF is one that shows more of the information that determines the PDF.

Illustrating centrality does not answer how using medians instead of means would show more information that determines the PDF in our specific case.

Medians abstract away weights. There may be problems where weights matter more than protecting one’s judgement against outliers, e.g. Backgammon, Scrabble. Even outliers may provide information when estimating risks.

I don’t think we should interpret runs as potentially erroneous data points. Unless misspecified, model runs do not simply go beyond the problem space to reach Mordor. Oh, but look which kinds of studies use medians and which kinds are using means…

***

If medians do provide more information about the estimators of CS, an argument should be made to that effect. If the argument has already been done, it should be cited. And this argument should also apply to measurements.

Interestingly, Nic does seem to appreciate the use of means in that context:

On the basis of my simulations, during 2002-2011 Pinatubo would have depressed the mean global surface temperature by about 0.02 K, and increased mean OHU by about 0.04 W/m^2. Not a large effect. And there was a smaller volcanic eruption a decade or so before my starting 1871-1880 period, which would have had perhaps 25%-30% as much effect during that decade. So, if we took out the effects of both volcanoes, the change in mean global surface temperatures between the two decades would have been about 0.015 K (2%) higher, and the increase in the change in { forcing net of OHU } would have been about 0.03 W/m^2 (also 2%) higher. Lo and behold, these two effects cancel out.

So the decade earlier volcanic eruptions have no effect at all on my climate sensitivity estimate. Which is actually obvious from the physics involved, if you think about it.

The observation is that median is a more robust central tendency estimator than mean:

This relates to samples of a population associated with a given PDF. One useful metric of a PDF is the “mode”, the most probable value in the distribution.

In distributions that are symmetric (not climate sensitivity PDFs), the mean equals the median equals the mode. In distributions that are not symmetric (have nonzero skew), the median is closer to the mode than the mean.

A simple example of this is the Rayleigh distribution. For the sake of simplicity, let’s set sigma = 1. Then the mode is 1, the median is √(2 ln 2) ≈ 1.18, and the mean is √(π/2) ≈ 1.25, so the median lies closer to the mode.

Secondly, if we have a small sample of points, it is easy to demonstrate that the median is a more reliable estimator of this central tendency of the distribution than the mean for distributions with long tails (the PDFs associated with climate sensitivity are a nice example of this): a few outlier points (meaning low-probability here, not erroneous) will shift the mean far more than the median.

If the number of points in the sample is large, there isn’t much difference in terms of the reliability of the statistic. But the argument favoring median over mean has nothing to do with “potentially erroneous data points”, just distributions that have long tails.
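The Rayleigh example above can be verified in a few lines, using the standard closed-form expressions for sigma = 1:

```python
import math

# Rayleigh distribution with sigma = 1: a simple positively skewed
# distribution where mode, median and mean all differ.
sigma = 1.0
mode = sigma                                   # peak of the PDF
med  = sigma * math.sqrt(2 * math.log(2))      # ~1.177
mn   = sigma * math.sqrt(math.pi / 2)          # ~1.253

# For this (and any positively skewed) distribution the median lies
# between the mode and the mean, i.e. closer to the most probable value.
assert mode < med < mn
```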

That said, nobody, including Nic, argues that the mean is useless. The example you raise relates to his paper. An explanation is given for why he used means in his paper: to compare to other papers that only provided means. Another is to avoid confrontation with people like yourself, who seem more interested in attacking him than in admitting the narrow scope of your own knowledge of statistics.

> It’s easy to demonstrate that the mean will be more substantially affected by a few outlier points (meaning low-probability here, not erroneous) than the median.

Indeed. It would be even easier to show the Wiki entry on robust statistics, in which we can find that

> The median is a robust measure of central tendency, while the mean is not; for instance, the median has a breakdown point of 50%, while the mean has a breakdown point of 0% (a single large sample can throw it off).

It would perhaps be a little less easy to find a random paper with a simpler example than the speed of light data discussed in the Wiki, e.g.:

Consider a data set containing the following values:

1, 1, 1, 2, 2, 5, 5, 5, 6, 20, 40.

The mean of the values is 8. However, the mean is distorted by two outlying values (20 and 40). All of the other values in the data set are less than or equal to 6. Consequently, the mean does not accurately reflect the central values of the data set. Instead of using the mean as a measure of central tendency, we could instead use the median, which in this case is 5. The median is an extreme form of a trimmed mean, in the sense that all but the middle score is trimmed.

Interestingly, the discussion follows by underlining a non-negligible aspect of using medians:

However, calculating the median discards a lot of information, as every value above and below the middle point of the data set is removed. A compromise between the mean and the median is the 20% trimmed mean. To obtain the 20% trimmed mean, we remove the lowest and highest 20% of the values from the data set, leaving

1, 2, 2, 5, 5, 5, 6.

The mean of the remaining values is then calculated. In this case, the 20% trimmed mean is 3.71, which reflects the central values of the original data set more accurately than the untrimmed mean of 8. The trimmed mean is an attractive alternative to the mean and the median, because it effectively deals with outliers without discarding most of the information in the data set.
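The quoted numbers are easy to verify; a quick sketch using the same eleven values:

```python
from statistics import mean, median

data = sorted([1, 1, 1, 2, 2, 5, 5, 5, 6, 20, 40])

# Untrimmed: the two outliers (20, 40) drag the mean up to 8,
# while the median stays at 5.
print(mean(data), median(data))       # 8 5

# 20% trimmed mean: drop the lowest and highest 20% of the points.
# With 11 points that trims 2 from each end, leaving the 7 values
# quoted above.
k = int(0.2 * len(data))              # = 2
trimmed = data[k:len(data) - k]       # [1, 2, 2, 5, 5, 5, 6]
print(round(mean(trimmed), 2))        # 3.71
```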

If what the author says is true about medians, the claim that we get so much information out of medians that it lets the truth out might have been a bit over-enthusiastic.

***

I have no idea if using a 20% trimmed mean would make sense, but it would be nice to see how this looks. In fact, why not show all the estimators we can find? Here would be a robust motto:

The important point to remember is that for fat-tailed distributions that are integrable, the mean may diverge to infinity but the median will always be a finite number.

Thanks… that’s a nice example.

The fact that the mean isn’t finite for some integrable distributions, while of course the median is, is a very elegant demonstration of the non-robustness of the mean as a statistic.

My summary is that it is okay to use means with impunity when we have a Gaussian-like distribution, which is often. But we shouldn’t use the mean when the distribution is defined only for positive values and has a long tail.
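The divergent-mean point mentioned above can be made concrete with a Pareto tail. A minimal sketch (illustrative shape parameter, scale x_m = 1): for shape alpha ≤ 1 the density still integrates to 1, yet the mean integral diverges while the median stays finite.

```python
# Pareto distribution on [x_m, inf) with shape alpha:
#   median = x_m * 2**(1/alpha)            (always finite)
#   mean   = alpha * x_m / (alpha - 1)     (only when alpha > 1)

def pareto_median(alpha, x_m=1.0):
    return x_m * 2 ** (1.0 / alpha)

def pareto_mean(alpha, x_m=1.0):
    return alpha * x_m / (alpha - 1) if alpha > 1 else float("inf")

# A heavy tail (alpha = 0.9): the PDF is perfectly well-defined,
# the mean is infinite, but the median is a plain finite number.
print(pareto_mean(0.9), pareto_median(0.9))
```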

Your median versus mean income is a nice example of the distortion caused by the (mis-)use of the mean statistic. Another simple example: suppose you are a water manager in a desert area.

You’ve just been hired and you need to plan for typical water usage in your area. (Suppose it’s a new position, so you are responsible for establishing a new policy.) You look at the last five years total rain fall, and you see:

1″, 1″, 11″, 1″, 1″

The mean is 3″, the median is 1″.

Which should you use?

[This example is based on a real case from California in the 1980s…. don’t have time to hunt it down. The manager in that case was terminated.]

willard I don’t have much time to spend on this. I do apologize for my tone yesterday–it was needlessly snitty.

I don’t claim that a cursory look at the web is going to reveal all that is known about the relative robustness of medians over means, just that you can establish that this is “well known” to be true. The why—you learn that from years of application. (“The essence of knowledge is application.”)

But regarding this point:

I have no idea if using a 20% trimmed mean would make sense, but it would be nice to see how this looks.

An example where this is useful (not necessarily 20%, it varies) is when you have a signal (say sinusoid) plus stationary (Gaussian-like) noise plus intermittent episodic noise.

This is a long example, but I think a good one, if you follow the narrative all the way through:

Playing a tone in a person’s ear and recording it with a microphone (as is done when an audiologist obtains a DPgram in a hearing assessment test) is an example of this. The microphone is very small, so it has lots of self-noise. There is also noise associated with heart-beat (typically 1–1.2 beats per second in a resting position), breathing (typically 1 breath per five seconds in a resting position), and irregular shifts of the subject that transmit noise into the ear via the mechanical coupling associated with the wire for the microphone.

Anyway, here’s an example from a real ear (no external stimulation though), showing the level of the signal over time. The “level” is the level in a particular (fixed-duration) window in the ear and the “time” is the central value for that window.

And here are the spectra associated with retaining a given fraction of the data. These are generated on a frequency-bin-by-frequency-bin basis by taking the mean value of the square of the amplitude [i.e. power] for that frequency bin. (See “Welch periodogram”.)

Explaining the threshold choices: choosing a threshold for averaging where you keep 81% of the windows retains 90% of the original statistical power, keeping 56.25% retains 75%, and keeping 25% retains 50% (the fraction of power retained scales as the square root of the fraction of windows kept).

The peaks here are “spontaneous emissions”. They are sounds produced by the subject’s inner ear in the absence of external stimulation. Reducing the number of windows in this fashion increases the variance of the mean (the line gets “thicker”), but for low frequencies the line thickens at a slower rate than the noise floor drops.

Note that the height of the peaks relative to the mean noise floor gets larger when more episodic noise is removed. This is evidence that the peaks are associated with the “signal + stationary noise” rather than with the intermittent noise that is being removed by the threshold cut. If the signal were instead associated with a whistling noise caused e.g. by “stopped-up” sinuses, the signal would get smaller as you remove more episodic noise (I have examples of that too).
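
The threshold-averaging procedure described above can be mimicked on synthetic data. This is only a sketch of the idea, not the commenter’s actual pipeline: a steady tone plus stationary Gaussian noise, with strong episodic noise added to a random minority of windows; windows are ranked by broadband level and only the quietest ~81% are averaged.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_win, win_len = 1000, 200, 256
t = np.arange(win_len) / fs

# Each window: a steady 100 Hz tone + stationary Gaussian noise;
# roughly 20% of windows also get a strong episodic noise burst.
windows = []
for _ in range(n_win):
    x = np.sin(2 * np.pi * 100 * t) + 0.5 * rng.standard_normal(win_len)
    if rng.random() < 0.2:
        x += 5.0 * rng.standard_normal(win_len)   # intermittent noise
    windows.append(x)

# Per-window power spectra (Hann window), as in a Welch-style average.
spectra = np.array([np.abs(np.fft.rfft(w * np.hanning(win_len))) ** 2
                    for w in windows])

levels = spectra.sum(axis=1)                  # broadband level of each window
keep = levels <= np.quantile(levels, 0.81)    # keep the quietest ~81%

full = spectra.mean(axis=0)                   # plain average over all windows
trimmed = spectra[keep].mean(axis=0)          # threshold-cut average

peak_bin = full.argmax()                      # FFT bin of the 100 Hz tone
def peak_to_floor(spec):
    return spec[peak_bin] / np.median(spec)   # crude peak-to-noise-floor ratio

# The peak stands taller above the floor once the episodic noise is cut.
print(peak_to_floor(full), peak_to_floor(trimmed))
```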

Using “median” spectra is an alternative, and part of my standard arsenal. One might be tempted to say: just use the median. Well, there’s a catch: for spontaneous emissions the median actually works worse, because the emission’s frequency and amplitude vary over time, so the median effectively treats it like noise:

That is, for this case, the median attenuates the signal as well as the noise.
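
The catch with median spectra can be reproduced with the same kind of synthetic data. Here the “emission” wanders in frequency from window to window, so in any single frequency bin it is present only a minority of the time; the median then discards it as if it were noise, while the mean keeps it. Again a sketch under stated assumptions, not the commenter’s actual data:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n_win, win_len = 1000, 200, 256
t = np.arange(win_len) / fs

spectra = []
for _ in range(n_win):
    f0 = rng.uniform(80, 120)   # emission frequency wanders window to window
    x = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(win_len)
    spectra.append(np.abs(np.fft.rfft(x * np.hanning(win_len))) ** 2)
spectra = np.array(spectra)

mean_spec = spectra.mean(axis=0)
median_spec = np.median(spectra, axis=0)

def contrast(spec):
    # Height of the biggest in-band peak relative to the overall noise floor.
    band = slice(round(80 * win_len / fs), round(120 * win_len / fs) + 1)
    return spec[band].max() / np.median(spec)

# The mean spectrum keeps the wandering emission; the median largely erases it.
print(contrast(mean_spec), contrast(median_spec))
```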

The bottom line is there are multiple statistics out there. We use the one most appropriate for a problem, but making the right decisions is as much an art as a science.

You can objectively demonstrate that a particular choice is better once you’ve made the choice, but there are no first-principle methods that a priori tell you to make that choice.

In return, I will agree with you that we’re discussing choices that in the end may turn out to be mostly an empirical matter, and will concede that however best practices turn out to be settled in our case, showing some studies with medians and others with means in one and the same graph, as we can see in the Met report, does look suboptimal to me.

> ideally one would display both, but if you have to choose, choose the median when the distribution is skewed.

Please go tell the Met Office that they fail Stat 101, Moshpit.

Then report.

Sometimes, an argument that is too powerful only looks dubious.

***

Let’s cut to the chase. Either there’s a formal answer to my questions, in which case a decision can be obtained and some roundhouse kicks may rejoice followers of Chuck Norris, or there is none. I have in mind this argument by Carrick:

Yes, using a flat prior for climate sensitivity doesn’t make sense at all.
Subjective and objective Bayesians disagree on many things, but they would agree on that. The reasons why are repeated in most text books that discuss Bayesian statistics, and have been known for several decades. The impact of using a flat prior will be to shift the distribution to higher values, and increase the mean, median and mode. So quantitative results from any studies that use the flat prior should just be disregarded, and journals should stop publishing any results based on flat priors. Let’s hope the IPCC authors understand all that.
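
Carrick’s point about the direction of the shift can be illustrated with a toy calculation (nothing to do with the actual Otto et al machinery). Suppose the observable is the feedback parameter λ = 1/S, measured with Gaussian error, and compare the posterior median for S under a prior flat in S against one flat in λ:

```python
import numpy as np

S = np.linspace(0.1, 20, 20000)   # sensitivity grid
y_obs, sigma = 1 / 3.0, 0.1       # toy observation of lambda = 1/S, with error

# Likelihood of the observation as a function of S.
like = np.exp(-0.5 * ((y_obs - 1 / S) / sigma) ** 2)

def median_of(posterior):
    cdf = np.cumsum(posterior)
    cdf /= cdf[-1]
    return S[np.searchsorted(cdf, 0.5)]

flat_in_S = like                  # uniform prior in S
flat_in_lambda = like / S ** 2    # uniform prior in lambda (Jacobian 1/S^2)

# The flat-in-S prior pushes the posterior median to a higher sensitivity.
print(median_of(flat_in_S), median_of(flat_in_lambda))
```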

This morning I received a gracious email from Nic Lewis responding to my comments here yesterday, and elaborating on some points he made in his post here. I thanked him in reply, and agreed with him that the adversarial atmosphere that sometimes prevails here and in other blogs is not the best environment for reaching an accurate understanding of climate phenomena. I told him I hope the Met Office will consider his comments seriously.

We can all hope so, but the Met Office is off on the wrong foot defending their hot-running model by saying it is within the range of all the other hot running models. Fred, do the models run hot from ignorance or deliberation?
========================

well, my experience is that most people who post on a blog will be open to an email exchange if things get too nasty on the blog. In contrast, if you try to ask the Met Office questions about their white papers, you’ll have to get a friend in parliament to ask the questions. Or maybe use FOIA.

If it gets adversarial one has a choice: either join in, ignore it, or move on. I am still looking for a way of blocking out some commenters in WordPress, as you can do on an emailing list, Facebook, etc.

> If it gets adversarial one has a choice to either join in, ignore it, or move on. I am still looking for a way of blocking out some commenters in WordPress as you can do on a emailing list and facebook etc

Good idea. I will program one up and post a feed on my semantic web server. The nice thing about RSS feeds is that they use standard schema, in this case Dublin Core for the creator of the comment.

No. Fred Moolten raised an issue in public. If the issue has been resolved, he should state that resolution in public. If the issue hasn’t been resolved, nobody should pretend it has been. Either way, it’s perfectly reasonable to ask him what the status of the issue is.

It all seems a bit odd. Models are unable to predict anything – they are chaotic. There are many feasible solutions. At most a perturbed physics model can produce a probability distribution of future climate states. This is shown schematically here.

Moreover the physics of actual climate state shifts can not be modeled at all with precision greater than tossing a coin. That climate shifts occur at decadal frequencies seems to put an impossible burden on expectations of forecasting.

Therefore – the only sensible answer for climate sensitivity is …. wait for it… γ in the linked diagram.

Global temperature obviously follows SST. Land does as well, with the difference being different lapse rates – which change all the time over land with water availability. Starting in the last decade, the difference is more pronounced. Is this NH drought?

Background warming is at most 0.1°C/decade – 0.12 in BEST. But that is not independent of rainfall especially. Some of it is solar TSI and some is changes in cloud cover – which is independently associated with variations in ocean and atmospheric circulation. Negatively correlated with SST, it seems.

Climate shifted chaotically four times last century – the last in 1998/2001. These decadal regimes are a robust feature of climate. They are associated most clearly with changes in the state space of the Pacific Ocean. The Pacific Decadal Variation – ENSO, the PDO, the Pacific gyres, the trade winds, clouds, SST all change – shifts modes over 20 to 40 years.

Following up on our reading of the op-ed, we get to the section Physical Considerations:

The Met Office Report states that “To reach the very low values [for TCR] quoted in Otto et al. (2013) would require negative feedbacks to be acting quite strongly to counteract the well understood physics of greenhouse gas radiative forcing, water vapour feedback and surface albedo feedback”. That is misleading. The strong negative lapse rate feedback is very closely linked to the water vapour feedback (they are sometimes combined into a single feedback) and has a similar level of understanding. Therefore, it should be included along with greenhouse gas radiative forcing, water vapour feedback and surface albedo feedback.

A multi-model study of feedbacks, Soden and Held 2006, showed a median ECS for the model ensemble of 1.8°C after combined water vapour/lapse rate and surface albedo feedbacks. The median 1.3°C and 1.4°C estimates for TCR in Otto et al both correspond to ECS estimates above 1.8°C, so are consistent with the Soden and Held 2006 findings without requiring any additional negative feedbacks to be acting. Moreover, real-world water vapour feedback may not be as strong as in typical climate model simulations. Although the basic physics of these feedbacks may be well understood, there remains substantial uncertainty as to their magnitudes. Furthermore, cloud feedbacks are highly uncertain.
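
As a sanity check on the numbers, the standard feedback arithmetic can be written out. The feedback values below are my recollection of approximate Soden and Held 2006 multi-model means and should be treated as illustrative assumptions, not quotations:

```python
# All feedbacks in W/m^2/K; forcing in W/m^2. Values are illustrative,
# loosely following Soden & Held 2006 multi-model means.
F_2x   = 3.7     # forcing from a doubling of CO2
planck = 3.2     # Planck (no-feedback) response
wv     = 1.8     # water vapour feedback
lapse  = -0.84   # lapse rate feedback (negative, tied to water vapour)
albedo = 0.26    # surface albedo feedback

ecs_planck_only = F_2x / planck
ecs = F_2x / (planck - (wv + lapse) - albedo)
print(round(ecs_planck_only, 1), round(ecs, 1))   # about 1.2 and 1.9
```

Cloud feedbacks, deliberately left out here, are roughly what lift typical model ECS values toward 3°C, and they carry most of the inter-model spread.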

Only two adverbs, one by Nic Lewis and one from the Met report.

An interesting case of “this is misleading”. The “this” refers to a complex sentence. Let’s try to simplify it.

***

There is a claim that Otto & al require negative feedback to reach the very low values. Nic Lewis seems to counter this claim where he says that the median estimates for TCR in Otto et al are consistent with Soden and Held 2006 findings without requiring any additional negative feedbacks to be acting. If this claim is true, it may contradict what the Met report states, which in turn may be false.

“This” may be misleading, but only because “this” would be false. There would be no such requirement as the Met report claims. Is that the case?

This is a clear case where getting into the misrepresentation business does not help clarify technical matters.

***

There is also a claim that negative feedbacks counteract the physics of greenhouse gas radiative forcing, water vapour feedback and surface albedo feedback. I don’t think this claim is false. This claim has not been disputed by Nic Lewis. Nor has it been discussed.

As Joshua would say, a negative feedback is negative.

***

There is also the presupposition that the physics of greenhouse gas radiative forcing, water vapour feedback and surface albedo feedback is well understood. This has been accepted by Nic Lewis, not without the usual caveats “but uncertainty” and “but clouds”. We can suppose that this modulation does not matter much in the argument. It may matter for Denizens, who require that we lukewarmingly state all uncertainties imaginable at the risk of overselling them.

***

There is also the implication from the Met report that negative feedbacks are not “well understood”. To this, Nic Lewis contends that the strong negative lapse rate feedback is very closely linked to the water vapour feedback and has a similar level of understanding. Here we reach another mediation point. Nic Lewis implies that not including negative feedback would be “misleading”, since it is as well understood as other well understood feedbacks. What’s up with that?

Some citations for the parenthesis “they are sometimes combined into a single feedback” might be of help here. The notion of “level of understanding” may also deserve due diligence: to clarify what is conveyed by this expression, it might be good practice to justify this claim. Finally, some mediation might be needed to adjudicate if both including negative or excluding negative feedbacks are warranted.

This implication shows another case where speaking of misrepresentation (“misleading”) distracts us from the resolution of a technical point. Levels of understanding are somewhat more factual than the choice to include or exclude feedbacks, which may in turn be distinct from the soundness of the physical considerations found in the Met report.

One should not simply conflate what, how, and why questions and expect to reach Mordor.

Reading with care what the Met Office writes, we see they are not discussing in that paragraph the central values of the Otto et al paper; they discuss first the upper limits, and then turn to the lower limits. Concerning the lower limits they note that values quoted in Otto et al, i.e. values close to 0.7°C, would require strong negative feedbacks. Who would disagree with that? On this scale the lapse rate feedback is very weak; something much stronger would be needed.

Reading the paper we see also that picking out water vapour feedback and surface albedo feedback was done by the Met Office. Nic added the lapse rate feedback to that, but Fred was wrong in faulting Nic on this point, as Nic was discussing the Met Office paper, not the Soden and Held paper in more general terms.

All in all my interpretation of this all is that Nic overreacts to some features of the Met Office paper. He was imposing on the paper requirements that could not be met in a paper of its length and general nature. The Met Office could not represent the Otto et al estimates of TCR better, because the original paper didn’t offer the results in a form suitable for that. They didn’t discuss the models from a point of view that would have satisfied Nic, because that would have required going into much more detail and discussing models individually, not as a group. The Met Office has acknowledged that HadGEM2-ES has a rather high climate sensitivity. If they hide that when the model is used for policy advice and present projections as if the model were middle-of-the-range and also agreed well with all recent data, then they can be criticized, but not for having the model or for not discussing it separately in this particular paper.
