A major problem with the Resplandy et al. ocean heat uptake paper

Obviously doubtful claims about new research regarding ocean heat content reveal how unquestioning Nature, climate scientists and the MSM are.

On November 1st there was extensive coverage in the mainstream media[i] and online[ii] of a paper just published in the prestigious journal Nature. The article,[iii] by Laure Resplandy of Princeton University, Ralph Keeling of the Scripps Institute of Oceanography and eight other authors, used a novel method to estimate heat uptake by the ocean over the period 1991–2016 and came up with an atypically high value.[iv] The press release[v] accompanying the Resplandy et al. paper was entitled “Earth’s oceans have absorbed 60 percent more heat per year than previously thought”,[vi] and said that this suggested that Earth is more sensitive to fossil-fuel emissions than previously thought.

I was asked for my thoughts on the Resplandy paper as soon as it obtained media coverage. Most commentators appear to have been content to rely on what was said in the press release. However, being a scientist, I thought it appropriate to read the paper itself, and if possible look at its data, before forming a view.

Trend estimates

The method used by Resplandy et al. was novel, and certainly worthy of publication. The authors start with observed changes in ‘atmospheric potential oxygen’ (ΔAPOOBS).[vii] In their model, one component of this change (ΔAPOClimate) is due to warming of the oceans, and they derived an estimate of its value by calculating values for the other components.[viii] A simple conversion factor then allows them to convert the trend in ΔAPOClimate into an estimate of ocean heat uptake (the trend in ocean heat content).

On page 1 they say:

From equation (1), we thereby find that ΔAPOClimate = 23.20 ± 12.20 per meg, corresponding to a least squares linear trend of +1.16 ± 0.15 per meg per year[ix]

A quick bit of mental arithmetic indicated that a change of 23.2 between 1991 and 2016 represented an annual rate of approximately 0.9, well below their 1.16 value. As that seemed surprising, I extracted the annual ΔAPO best-estimate values and uncertainties from the paper’s Extended Data Table 4[x] and computed the 1991–2016 least squares linear fit trend in the ΔAPOClimate values. The trend was 0.88, not 1.16, per meg per year, implying an ocean heat uptake estimate of 10.1 ZJ per year,[xi] well below the estimate in the paper of 13.3 ZJ per year.[xii]
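For anyone who wants to reproduce the mental arithmetic, a few lines of Python, using only the figures quoted above, are sufficient:

```python
# Sanity checks on the trend figures, using only values quoted in the text.

total_change = 23.20        # stated ΔAPO_Climate change over 1991-2016, per meg
years = 2016 - 1991         # 25 years elapsed

# Crude endpoint rate: well below the paper's stated 1.16 per meg per year
print(round(total_change / years, 2))   # 0.93

# Converting per meg per year trends to ocean heat uptake using the
# paper's conversion factor of 0.087 per meg per ZJ
conv = 0.087
print(round(0.88 / conv, 1))   # 10.1 ZJ/yr (trend computed from Table 4 data)
print(round(1.16 / conv, 1))   # 13.3 ZJ/yr (trend stated in the paper)
```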

Resplandy et al. derive ΔAPOClimate from estimates of ΔAPOOBS and of its other components, ΔAPOFF, ΔAPOCant, and ΔAPOAtmD, using – rearranging their equation (1):

ΔAPOClimate = ΔAPOOBS − ΔAPOFF − ΔAPOCant − ΔAPOAtmD

I derived the same best estimate trend when I allowed for uncertainty in each of the components of ΔAPOOBS, in the way that Resplandy et al.’s Methods description appears to indicate,[xiii] so my simple initial method of trend estimation does not explain the discrepancy.

Assuming I am right that Resplandy et al. have miscalculated the trend in ΔAPOClimate, and hence the trend in ocean heat content (OHC), implied by their data, the corrected OHC trend estimate for 1991–2016 (Figure 2: lower horizontal red line) is about average compared with the other estimates they showed, and below the average for 1993–2016.

I wanted to make sure that I had not overlooked something in my calculations, so later on November 1st I emailed Laure Resplandy querying the ΔAPOClimate trend figure in her paper and asking for her to look into the difference in our trend estimates as a matter of urgency, explaining that in view of the media coverage of the paper I was contemplating web-publishing a comment on it within a matter of days. To date I have had no substantive response from her, despite subsequently sending a further email containing the key analysis sections from a draft of this article.

How might Laure Resplandy[xiv] have miscalculated the ΔAPOClimate trend as 1.16 per meg per year? One possibility is that the computer code for the trend computation somehow only deducted ΔAPOFF and ΔAPOCant from ΔAPOOBS when computing the 1991–2016 trend, thereby in fact obtaining the trend for {ΔAPOClimate + ΔAPOAtmD}, which is 1.16 per meg per year.[xv]
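This hypothesis is easy to state precisely, because ordinary least squares trends are additive: the trend of a sum of series equals the sum of the individual trends. So omitting the deduction of one component offsets the computed trend by exactly that component's trend. A sketch with synthetic data (the slopes 0.88 and 0.28 per meg per year are simply the figures implied above, since 0.88 + 0.28 = 1.16; the noise is arbitrary):

```python
import numpy as np

# OLS trend estimates are linear in the data: trend(A + B) = trend(A) + trend(B).
# Synthetic stand-ins, not the paper's actual data.

rng = np.random.default_rng(0)
t = np.arange(26)                      # years since 1991

def ols_trend(y):
    """Slope of an ordinary least squares straight-line fit."""
    return np.polyfit(t, y, 1)[0]

climate = 0.88 * t + rng.normal(0, 0.5, t.size)   # stand-in for ΔAPO_Climate
atmd = 0.28 * t + rng.normal(0, 0.5, t.size)      # stand-in for ΔAPO_AtmD

# Failing to deduct AtmD shifts the trend by exactly AtmD's own trend
lhs = ols_trend(climate + atmd)
rhs = ols_trend(climate) + ols_trend(atmd)
assert abs(lhs - rhs) < 1e-9
```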

Uncertainty analysis

I now turn to the uncertainty analysis in the paper.[xvi] Strangely, the Resplandy et al. paper has two different values for the uncertainty in the results. On page 1 they give the ΔAPOClimate trend (in per meg per year) as 1.16 ± 0.15. But on page 2 they say it is 1.16 ± 0.18. In the Methods section they go back to 1.16 ± 0.15. Probably the ± 0.18 figure is a typographical error.[xvii]

More importantly, it seems to me that uncertainty in the ΔAPOClimate trend, and hence in the ocean heat uptake estimate, is greatly underestimated in Resplandy et al., because of two aspects of the way that they have treated trend and scale uncertainties affecting ΔAPOOBS. First, they appear to have treated corrosion, leakage and desorption errors in ΔAPOOBS as fixed errors that have the same influence each year.[xviii] But these are annual trend errors, so their influence is proportional to the number of years elapsed since the 1991 base year.[xix] Secondly, they appear to have treated both these trend errors and the scale systematic error in ΔAPOOBS as uncorrelated between years.[xx] However, each year’s error from each of these sources is simply a multiple of a single random trend or scale systematic error, and is therefore perfectly correlated with the same type of error in all other years.
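The difference these two error treatments make to trend uncertainty can be illustrated with a small Monte Carlo sketch (the error magnitude used here is illustrative, not taken from the paper):

```python
import numpy as np

# A trend-type systematic error is a single random slope applied to every
# year, so it feeds through to the fitted trend essentially one-for-one.
# Treating the same per-year error magnitudes as independent noise instead
# lets them partly average out, understating the trend uncertainty.

rng = np.random.default_rng(42)
t = np.arange(26)                  # years since 1991
n_sim = 5000
sigma = 0.3                        # illustrative 1-sigma slope error

def trend(y):
    return np.polyfit(t, y, 1)[0]

# Perfectly correlated: one slope draw per simulation, scaled by years elapsed
corr = np.array([trend(s * t) for s in rng.normal(0, sigma, n_sim)])

# Independent: same per-year standard deviations, but fresh draws each year
indep = np.array([trend(rng.normal(0, sigma, t.size) * t)
                  for _ in range(n_sim)])

print(corr.std(), indep.std())     # correlated ≈ sigma; independent is smaller
assert corr.std() > 1.4 * indep.std()
```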

On a corrected basis, I calculate the ΔAPOClimate trend uncertainty as ± 0.56 per meg yr−1, more than three times as large as the ± 0.15 or ± 0.18 per meg yr−1 values in the paper.[xxi] This means that, while Resplandy et al.’s novel method of estimating ocean heat uptake is useful in providing an independent check on the reasonableness of estimates derived from in situ temperature measurements, the estimates their method provides are much more uncertain than in situ measurement-based estimates, and are consistent with all of them.

Effect on climate sensitivity and carbon budgets

Resplandy et al. point out that a larger increase in ocean heat content (a higher ocean heat uptake) would affect estimated equilibrium climate sensitivity. That is true where such sensitivity estimates are derived from observationally-based analysis of the Earth’s energy budget. However, after correction, the Resplandy et al. results do not suggest a larger increase in ocean heat content than previously thought. In fact, using the corrected Resplandy et al. estimate of the change in ocean heat content over the relevant period (2007–2016) in the recent Lewis and Curry (2018)[xxii] energy budget study would slightly lower its main estimate of equilibrium climate sensitivity. Moreover, a larger estimated increase in ocean heat content would principally affect the upper uncertainty bound of the equilibrium climate sensitivity estimate. Contrary to what Resplandy et al. claim, the lower bound would be little affected and would remain well below 1.5°C,[xxiii] providing no support for increasing the lower bound of the IPCC’s range for equilibrium climate sensitivity to 2.0°C.

Resplandy et al. also make the bizarre claim that increasing the lower bound of the IPCC’s equilibrium climate sensitivity range from 1.5°C to 2.0°C would have the effect of “reducing maximum allowable cumulative CO2 emissions by 25% to stay within the 2°C global warming target”. In fact, that cumulative carbon emissions budget is very largely determined by a combination of carbon-cycle characteristics and the transient climate response.[xxiv] Observational estimates of the transient climate response are unaffected by the level of ocean heat uptake. Therefore, increasing the lower bound of the equilibrium climate sensitivity range would have little or no impact on the cumulative carbon emissions budget to stay within 2°C global warming.

Conclusions

The findings of the Resplandy et al. paper were peer reviewed and published in the world’s premier scientific journal and were given wide coverage in the English-speaking media. Despite this, a quick review of the first page of the paper was sufficient to raise doubts as to the accuracy of its results. Just a few hours of analysis and calculations, based only on published information, was sufficient to uncover apparently serious (but surely inadvertent) errors in the underlying calculations.

Moreover, even if the paper’s results had been correct, they would not have justified its findings regarding an increase to 2.0°C in the lower bound of the equilibrium climate sensitivity range and a 25% reduction in the carbon budget for 2°C global warming.

Because of the wide dissemination of the paper’s results, it is extremely important that these errors are acknowledged by the authors without delay and then corrected.

Of course, it is also very important that the media outlets that unquestioningly trumpeted the paper’s findings now correct the record too.

[iv] A value of 13.3 zetta Joules (ZJ) per year, or 0.83 Watts per square metre of the Earth’s surface. ZJ is the symbol for zetta Joules; 1 ZJ = 10²¹ J. 1 ZJ per year = 0.0621 Watts per square metre (W/m² or W m⁻²) of the Earth’s surface.
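The conversion can be verified directly from standard values for the length of a year and the Earth’s surface area:

```python
# Verifying the ZJ/yr to W/m^2 conversion from standard constants.

ZJ = 1e21                       # joules in a zettajoule
seconds_per_year = 3.156e7      # ~365.25 days
earth_area_m2 = 5.101e14        # total surface area of the Earth

w_per_m2 = ZJ / seconds_per_year / earth_area_m2
print(round(w_per_m2, 4))            # 0.0621 W/m^2 per ZJ/yr

# The paper's headline 13.3 ZJ/yr figure:
print(round(13.3 * w_per_m2, 2))     # 0.83 W/m^2
```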

[vi] However that is in comparison with an IPCC estimate for 1993–2010; estimates for 1991–2016 are higher.

[vii] ΔAPO is the change in ‘atmospheric potential oxygen’, the overall level of which has been observationally measured since 1991 (ΔAPOOBS). It is the sum of the atmospheric concentrations of O2 and of CO2 weighted respectively 1× and 1.1×.

[viii] The authors break the observed change in ΔAPOOBS into four components, ΔAPOFF, ΔAPOCant, ΔAPOAtmD and ΔAPOClimate, deriving the last component (which is related to ocean warming) by deducting estimates of the other three components from ΔAPOOBS. ΔAPOFF is the decrease in APO caused by industrial processes (fossil-fuel burning and cement production). ΔAPOCant accounts for the oceanic uptake of excess anthropogenic atmospheric CO2. ΔAPOAtmD accounts for air–sea exchanges driven by ocean fertilization from anthropogenic aerosol deposition.

[ix] 1 per meg literally means 1 part per million (1 ppm); however, ‘per meg’ and ‘ppm’ are defined differently in relation to atmospheric concentrations and are not identical units.

[xi] Dividing by their conversion factor of 0.087 ± 0.003 per meg per ZJ. ZJ is the symbol for zetta Joules; 1 ZJ = 10²¹ Joules.

[xii] I used ordinary least squares (OLS) regression with an intercept. That is the standard form of least squares regression for estimating a trend. Resplandy et al. show all APO variables as changes from a baseline of zero in 1991, but that is an arbitrary choice and would not justify forcing the regression fit to be zero in 1991 (by not using an intercept term). Doing so would not in any event raise the ΔAPOClimate estimated trend to the level given by Resplandy et al.

[xiii] I took a large number of sets of samples for each of the years 1991 to 2016 from the applicable error distributions of ΔAPOOBS, ΔAPOFF, ΔAPOCant, and ΔAPOAtmD given in Extended Data Table 4, and calculated all the corresponding sample values of ΔAPOClimate using equation (1). I then computed the ordinary least squares linear trend for each set of 1991–2016 sampled values of ΔAPOClimate, and calculated the mean and standard deviation of the trends.
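A minimal sketch of this sampling procedure, with hypothetical component means and 1-sigma errors standing in for the Extended Data Table 4 values (which are not reproduced here):

```python
import numpy as np

# Monte Carlo sketch of the procedure described above. All inputs are
# hypothetical; the component slopes are chosen so the implied
# ΔAPO_Climate trend is 0.88 per meg/yr (1.5 - 0.4 - 0.12 - 0.1).

rng = np.random.default_rng(1)
t = np.arange(26)               # years since 1991
n_samples = 5000

mean = {                        # hypothetical best-estimate series
    "OBS":  1.50 * t,
    "FF":   0.40 * t,
    "Cant": 0.12 * t,
    "AtmD": 0.10 * t,
}
sigma = {"OBS": 1.0, "FF": 0.3, "Cant": 0.2, "AtmD": 0.2}   # hypothetical

trends = np.empty(n_samples)
for i in range(n_samples):
    draw = {k: rng.normal(mean[k], sigma[k], t.size) for k in mean}
    # Equation (1), rearranged: Climate = OBS - FF - Cant - AtmD
    climate = draw["OBS"] - draw["FF"] - draw["Cant"] - draw["AtmD"]
    trends[i] = np.polyfit(t, climate, 1)[0]

print(round(trends.mean(), 2), trends.std())   # ≈ 0.88, and its 1-sigma
```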

[xiv] Laure Resplandy was responsible for directing the analysis of the datasets and models.

[xv] This fact was spotted by Frank Bosse, with whom I discussed the apparent error in the Resplandy et al. ΔAPOClimate trend.

[xvi] All uncertainty values in the paper are ± 1 sigma (1 standard deviation). Errors are presumably assumed to be Normally distributed, as no other distributions are specified.

[xvii] The statement in their Methods that “ΔCant′ cannot be derived from observations and was estimated at 0.05 Pg C yr−1, equivalent to a trend of +0.2 per meg−1, using model simulations” is presumably also a typographical error. The correct value appears to be +0.12 per meg yr−1, as stated elsewhere in Methods and in Extended Data Table 3.

[xviii] On that basis, I can replicate the Extended Data Table 4 ΔAPOOBS uncertainty time series values within ±0.1. Note that all the values in that table, although given to two decimal places, appear to be rounded to one decimal place.

[xix] The overall uncertainties given in Table 3 in Resplandy et al.’s source paper for its errors in ΔAPOOBS support my analysis.

[xx] When using the Resplandy et al. Extended Data Table 4 ΔAPOClimate total uncertainty time series and assuming that each year’s errors are independent, despite the trend and scale systematic errors being their largest component, the estimated ΔAPOClimate uncertainty reduces to between ± 0.20 and ± 0.21 per meg yr−1. That is still slightly higher than the ± 0.15 and ± 0.18 per meg yr−1 values given in the paper. The reason for the small remaining difference is unclear.

[xxi] It seems likely that the same non-independence over time issue largely or wholly applies to errors in ΔAPOCant, ΔAPOAtmD and probably ΔAPOFF. If the errors in ΔAPOCant and ΔAPOAtmD (but not in ΔAPOFF) were also treated as perfectly correlated between years, the ΔAPOClimate trend uncertainty would be ± 0.60 per meg yr−1.

[xxiii] Even if the 2007–2016 ocean heat uptake estimate used in Lewis and Curry (2018) were increased by 3 ZJ yr−1 to match Resplandy et al.’s (incorrect) estimate for 1991–2016, the 1.05°C 5% lower bound of its HadCRUT4v5-based estimate of effective/equilibrium climate sensitivity would only increase to 1.15°C. Moreover, Resplandy et al.’s ΔAPOClimate data imply a lower ocean heat uptake estimate for 2007–2016 than for 1991–2016.

If Lewis’ error analysis is correct, then I congratulate him on catching that error, because I didn’t. He can feel free to submit his post as a comment or a response, so that it can undergo peer review.

However, I don’t think this reflects badly on Nature at all, or shows some conspiring on the part of media and climate scientists. After all, I remember a previous time a high-tier scientific journal published an incorrect result that was trumpeted by the media (and then years later by much of the Internet). It had something to do with members of UAH under-estimating tropospheric warming due to their poor homogenization:

[from: “Correcting Temperature Data Sets”]

“Although concerns have been expressed about the reliability of surface temperature data sets, findings of pronounced surface warming over the past 60 years have been independently reproduced by multiple groups. In contrast, an initial finding that the lower troposphere cooled since 1979 could not be reproduced. Attempts to confirm this apparent cooling trend led to the discovery of errors in the initial analyses of satellite-based tropospheric temperature measurements.”
http://science.sciencemag.org/content/334/6060/1232

“However, I don’t think this reflects badly on Nature at all…..”
Others may disagree with you.
“… a quick review of the first page of the paper was sufficient to raise doubts as to the accuracy of its results.”

Not really. Last I checked, you rely on the fruits of peer-reviewed research every day. Unless you’ve never taken medical treatment, eaten processed food, drunk treated water, worn manufactured clothing, etc.

Is peer review perfect? No. Is peer review at top-tier journals good enough to be relied upon? Yes, as shown by your daily life and the scientific advances peer-reviewed research has brought us for centuries. Complaining that peer review is unacceptable because it sometimes lets mistakes through is like complaining that medicine is unacceptable because doctors were not perfect in the past, or that flying in planes is unacceptable because pilots have crashed planes in the past.

I also love the selective outrage and special pleading of some faux “skeptics”. They complain that peer review is horrid when it lets through some flawed papers that present evidence in support of the mainstream evidence-based consensus. But those same “skeptics” will trumpet peer review that lets through Lewis and Curry’s papers. Or they trumpet peer review that let through the (later debunked) work of Spencer+Christy at UAH, work that challenged the mainstream evidence-based consensus. Or peer review that let through Lindzen’s paper that contained (in his own words) stupid mistakes:

“Dr. Lindzen acknowledged that the 2009 paper contained “some stupid mistakes” in his handling of the satellite data. “It was just embarrassing,” he said in an interview.”

Peer review is not some plot to feed the media “alarmist” (whatever that is) material. Peer review is done better at some journals than at others. It is an imperfect but relatively reliable process that has served us well for centuries, and there’s always room for improvement in it. Respect it.

“Peer review is not some plot to feed the media “alarmist” (whatever that is) material.”
.
Absolutely right. Feeding the media with sensational, alarming material is done via blaring press releases at the same time as the paper’s publication.

It’s OK to make mistakes and then admit and/or fix them. This is what Christy did. This is what Lindzen did. This is what Michael Mann refused to do with his hockey stick. This is what Sherwood refused to do when he first used a deceptive color graph to “prove” the existence of a tropical hot spot, and when he later wanted us to believe that we should throw out balloon and satellite data and instead depend on his derivation of tropospheric temperatures through wind shear and an assumption of natural variability.

Hughes blamed the 2016 GBR coral bleaching on global warming; Jim Steele, and later Wolanski in a published paper, showed that the bleaching was due to lowered sea levels from El Nino and natural current mechanics. Did Hughes admit any error?

Lister recently published a paper purportedly showing that insects in the Luquillo Forest of Puerto Rico died because of global warming – i.e., an (alleged) temperature rise from 26°C to 28°C!!! Lister ignored the widespread use of insecticides in Puerto Rico as a potential cause. It’s absurd to imagine that insects are dying because of 82°F, but all we’ll hear about is how that’s the future we’re facing.

Consensus climate science has all the fingerprints of advocacy science and maybe even a touch of downright dishonesty.

Atomsk: The important question is whether the peer-reviewers of this paper would have given similar scrutiny to a paper that concluded that ocean heat content was rising more slowly than expected rather than faster. As far as I can tell, faster ocean heat uptake reduces the discrepancy between observation-based estimates of ECS (from energy budget models, EBMs) and estimates from climate models. So the natural tendency of supporters of the consensus constructed around AOGCMs would be to scrutinize a paper which enlarged this discrepancy more carefully than one which shrank it.

The rivalry between UAH and RSS has ensured that the work of both sides has always been carefully scrutinized – and it has probably resulted in both parties scrutinizing their own work more carefully than they might otherwise have. On the other hand, the discrepancy between observation-based and model-based ECS remained unrecognized for more than a decade before Nic Lewis’s contributions. Unfortunately for climate science, there are relatively few skeptics, and even fewer willing to deliberately “audit” the details.

Then there is the unwillingness (acknowledged by some) to publicly discuss doubts or problems that might be picked up by the conservative press and skeptical blogs.

IMO, the fact that auditing by Nic Lewis and Steve McIntyre has turned up so many problems (real problems, as best this biased individual can tell) suggests that you and the whole climate science community should be deeply concerned about confirmation bias during peer review. However, that is another subject that can’t be publicly discussed without it reaching the conservative press and skeptical blogs. In many areas of research, there is a crisis of confidence in published work. Ioannidis, for one example. Pharmaceutical companies find their laboratories are unable to reproduce about 75% of key studies claiming to have validated particular molecular targets (enzymes, receptors, etc.) for new drug discovery.

Yes, Franktoo, well said. I would add, however, that some of this widespread problem in climate science (as in all fields) can simply be chalked up to a dysfunctional culture of peer review. There are huge numbers of papers to be reviewed, and top researchers are very busy generating their own papers and results. Virtually no peer reviewers have time to do more than a cursory check of the work, reading it for obvious problems and conflicts with already published papers. If peer reviewers were paid and expected to devote at least a couple of weeks to each review, the quality would be higher. The real problem here is that 90% of what is published is not worth the paper it’s printed on.

Re: “Atomsk: The important question is whether the peer-reviewers of this paper would have given similar scrutiny to a paper that concluded the ocean heat content was rising slower than expected rather than faster.”

This case does nothing to support the line of reasoning you’re going towards. For example, I know of plenty of research that made it through peer review and which reduced estimates of climate trends. Take the following paper on the topic of altitude-dependent warming, a topic I’m interested in:

“Artificial amplification of warming trends across the mountains of the western United States”

And, of course, Curry herself co-authored research that later needed to be corrected:

So no, you can drop the insinuations of slanted bias in favor of showing more warming in the context of ocean heat content.

Re: “On the other hand, the discrepancy between observation-based and model-based ECS remained unrecognized for more than a decade before Nic Lewis’s contributions”

Oh, come on.

First, Lewis’ work uses an energy budget model. And you conveniently left out the paleoclimate observations that show a higher climate sensitivity than Lewis’ model-based approach. So your dichotomy between “observational-based and model-based ECS” here is a false one, if you’re acting as if Lewis doesn’t use a model:

“These studies employ observations but still require an element of modeling to infer ECS.
[…]
Forster & Gregory (2006, p. 39) overstated the benefits of such an approach by claiming, “Importantly, the [ECS] estimate is completely independent of climate model results.” As Equation 1 derives directly from conservation of energy, the Forster & Gregory (2006) claim would appear valid. But it in fact makes the assumption that the α derived from a particular observational period is the same as the α applicable under long-term climate change. Another way of stating this assumption is saying that the effective climate sensitivity (the apparent ECS diagnosed from a specific α) is the same as the true ECS. Uncertainties around the derivation of ECS from an energy budget approach can be attributed to two causes: the model used to translate α into an ECS estimate and the quality of the observation-based data sets.”
https://www.annualreviews.org/doi/abs/10.1146/annurev-earth-060614-105156?casa_token=vRrWzUdKwDEAAAAA%3AUMXASkMMfYggx6oKU4oOgDux4qh0qRfqgNYSuaj_sYPDCjHwsFgkCpNVZi6gWQc6sNAXQHSakfnCdME&journalCode=earth

“Proxies for CO2 and temperature generally imply high climate sensitivities: ≥3 K per CO2 doubling during ice-free times (fast-feedback sensitivity) and ≥6 K during times with land ice (Earth-system sensitivity). Climate models commonly underpredict the magnitude of climate change and have fast-feedback sensitivities close to 3 K.”
https://www.annualreviews.org/doi/full/10.1146/annurev-earth-100815-024150

Second, energy-budget-model-based and energy balance approaches were used long before Lewis’ work. And observations were compared to models. You were wrong when you claimed otherwise:

“To answer that question, starting in the 1960s scientists have used energy balance arguments combined with observed changes in the global energy budget, evaluated comprehensive climate models against observations, and analysed the relationship between external forcing and climate change over different climate states in the past (see Methods for a list of early publications).”
https://www.nature.com/articles/ngeo3017

I’ve told you this many times before, frank: please do not make false claims for which you have no cited evidence. It’s annoying.
Maybe, instead of making baseless insinuations about peer review, you should spend more time reading the peer-reviewed literature? That would help stop you from making the sort of false claims you made above.

Re: “In many areas of research, there is a crisis of confidence in published work. Ioannidis, for one example”

And now you round things out with the usual abuse of Ioannidis’ work. Amazing. This is getting too predictable.

Ioannidis notes that the evidence (and level of certainty) on anthropogenic climate change is on par with the evidence (and level of certainty) that smoking kills people. He made this comparison because he recognizes that scientific hypotheses become more reliable (and more likely to be true) as more and more research groups test the hypothesis using different lines of evidence, methodologies, etc., and keep finding that the hypothesis passes the tests:

Re: “It’s OK to make mistakes and then admit and/or fix them. This is what Christy did. This is what Lindzen did. This is what Michael Mann refused to do with his hockey stick. This is what Sherwood refused to do when he first used a deceptive color graph to “prove” the existence of a tropical hot spot, and when he later wanted us to believe that we should throw out balloon and satellite data and instead depend on his derivation of tropospheric temperatures through wind shear and an assumption of natural variability.”

The so-called “hot spot” has been shown to exist multiple times, both in research Sherwood co-authored and in research from other groups. I’ve read his IUK papers; they were not deceptive. The color-scale was clearly defined in the papers. Feel free to show otherwise. Here’s a sampling (I suspect that figure 6 of the 3rd paper is what you’re complaining about):

Moving on: Mann doesn’t need to admit to all the mistakes you claim he made, especially if he didn’t make them. He also re-did the hockey stick analysis without tree ring data and with multiple different analysis techniques. Other people replicated the hockey stick result as well. Not my fault if you don’t accept it. I’d suggest you read papers such as:

“Proxy-based reconstructions of hemispheric and global surface temperature variations over the past two millennia”
“Robustness of the Mann, Bradley, Hughes reconstruction of Northern Hemisphere surface temperatures: Examination of criticisms based on the nature and processing of proxy climate evidence”
“A global multiproxy database for temperature reconstructions of the Common Era”
“A Reconstruction of Regional and Global Temperature for the Past 11,300 Years”

Re: “Consensus climate science has all the fingerprints of advocacy science and maybe even a touch of downright dishonesty.”

And pigs fly.

I remember when some conservatives made the same baseless claim on consensus medical science on smoking causing cancer. Or consensus climate / chemical science on CFC-induced ozone depletion. Or consensus biological science on human evolution. Or…

Franktoo wrote: “On the other hand, the discrepancy between observation-based and model-based ECS remained unrecognized for more than a decade before Nic Lewis’s contributions”

Atomsk wrote: “Oh, come on. First, Lewis’ work uses an energy budget model. And you conveniently left out the paleoclimate observations that show a higher climate sensitivity that Lewis’ model-based approach. So your dichotomy between “observational-based and model-based ECS” here is a false one, if you’re acting as if Lewis doesn’t use a model …”

The history of the failure of the IPCC to recognize the discrepancy between observation-based and model-based ECS is documented in great detail below. There is no reason (except confirmation bias) why recent and current efforts to understand the origins of this discrepancy shouldn’t have started a decade earlier. If the IPCC published reports that met Schneider’s standard for ethical science (the whole truth, with all of the caveats), the discrepancy would have been candidly discussed in AR5.

An energy balance model simply divides the current radiative forcing into two parts: 1) the current imbalance that is causing warming right now – mostly in the ocean; and 2) the increase in outgoing longwave radiation (OLR) plus outgoing shortwave radiation (OSR, i.e. reflected SWR) associated with a warmer planet. d(OLR+OSR)/dTs is the reciprocal of ECS expressed in K/(W/m²) rather than K per doubling. As best I can tell, saying that EBMs are merely models is functionally equivalent to saying that applying conservation of energy to our climate system is “merely a model”. There are acknowledged uncertainties in the forcing and warming data, but I see no reason not to trust this “model” as much as I trust the law of conservation of energy.
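In code, the energy budget relation being described is a one-liner. The numbers below are illustrative round values, not the inputs of Lewis and Curry or any other particular study:

```python
# Energy budget estimate: ECS = F_2x * dT / (dF - dN), where dN is the
# planetary radiative imbalance (mostly ocean heat uptake). All inputs
# here are illustrative round numbers.

F_2x = 3.7   # W/m^2, forcing from a doubling of CO2
dT = 0.8     # K, surface warming over the analysis period
dF = 2.5     # W/m^2, change in radiative forcing over the period
dN = 0.5     # W/m^2, change in imbalance (ocean heat uptake)

print(round(F_2x * dT / (dF - dN), 2))    # 1.48 K per doubling

# A larger ocean heat uptake estimate raises the inferred ECS:
print(round(F_2x * dT / (dF - 0.8), 2))   # 1.74 K per doubling
```

This also makes concrete why a higher ocean heat uptake pushes energy-budget ECS estimates upward, as discussed in the article above.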

Estimates of ECS from paleoclimatology similarly depend on conservation of energy. However, we have far more accurate information about the planet from 1970–2010 (the period used by Otto 2013) and the instrumental period used by others (and hindcast by AOGCMs). Where EBMs and paleo disagree, the more reliable answer should be obvious, and the central estimate for EBMs is within the confidence interval for paleo. Citing paleo is misdirection.

You quoted a 2016 review by Forster. The full quote is:

“As Equation 1 derives directly from conservation of energy, the Forster & Gregory (2006) claim would appear valid. But it in fact makes the assumption that the α derived from a particular observational period is the same as the α applicable under long-term climate change. Another way of stating this assumption is saying that the effective climate sensitivity (the apparent ECS diagnosed from a specific α) is the same as the true ECS. Uncertainties around the derivation of ECS from an energy budget approach can be attributed to two causes: the model used to translate α into an ECS estimate and the quality of the observation-based data sets.”

I interpret this to mean that past surface temperature change has been forced by known phenomena that affect the radiative balance across the TOA, and by unforced chaotic fluctuations in ocean currents that control mixing between the surface and the deeper ocean (internal variability not arising from the TOA). EBMs certainly assume that all warming is forced warming. The last sentence (the one you quoted) adds nothing new. The model used to “translate α into an ECS estimate” involves only F2x and conservation of energy, and F2x is only needed if you want to express ECS in terms of CO2 (K/doubling) instead of forcing (K/(W/m²)). The latter works for any forcing and is far more general. The quality of the observation-based data sets includes the confidence intervals around the inputs, which produce the confidence interval around ECS, and the possibility of systematic errors in forcing.

Atomsk finishes with: “And now you round things out with the usual abuse of Ioannidis’ work. Amazing. This is getting too predictable.”

I may have been misunderstood, but I didn’t intentionally abuse Ioannidis’ work. The point I was trying to make was that there is somewhat of a crisis of confidence in the reliability and meaning of published work in many areas of science. Standards are tightening, and independent replication of important findings is getting more attention. What does it mean when five labs independently test a hypothesis, four don’t find a statistically significant effect (and don’t publish), while a fifth does find a statistically significant effect and does publish? In contrast, climate science charges forward with no public doubts despite: model parameterization and “ensembles of opportunity”, AR5’s revision of the lower limit for ECS after AR4 raised it, decreased confidence in the MWP, the effect of GW on hurricanes, Climategate, misuse of extreme weather …

Re: “The history of the failure of the IPCC to recognize the discrepancy between observational-based and model-based ECS is documented in great detail below”

You’re citing a 2014 GWPF document as your source of information, despite GWPF’s long history of publishing misleading reports for ideological reasons. That is why you’re confused on this topic. Please try reading better sources.

Instead of believing their claims about how the evil IPCC is suppressing science, actually read what the IPCC said. If you had, then you’d know that IPCC AR4 was already discussing observational estimates back in 2007. For example:

And you’re simply repeating the mistakes I already addressed, without addressing my points. Again. To reiterate: you have already been shown cited work comparing models and observational estimates, dating back to before the IPCC even existed. So you were wrong when you claimed this issue was ignored before Nic Lewis.

Re: “I interpret this to mean that past surface temperature change has been forced by known phenomena that affect the radiative balance across the TOA, and by unforced chaotic fluctuations in ocean currents that control mixing between the surface and the deeper ocean (internal variability not arising at the TOA). EBMs certainly assume that all warming is forced warming. The last sentence (the one you quoted) adds nothing new.”

First, I didn’t just quote the last sentence. Second, your interpretation is wrong. The quote is actually pointing out the assumption that the estimate for the recent historical record is the same as the estimate that will apply at later periods of time. To quote the relevant portion again:

As Forster notes, this assumption is not just conservation of energy; it’s an assumption one can reject without rejecting conservation of energy. Thus you’re wrong when you claim that the EBM-based approach is just using conservation of energy. There are plenty of papers that call the assumption into question by pointing out how effective climate sensitivity increases with time, such that the recent effective climate sensitivity need not be equivalent to the true ECS. For instance, see the following and the references cited therein:

You’re also assuming that EBM-based approaches necessarily show lower climate sensitivity than models. And that’s not the case either. For example:

“Reconciled climate response estimates from climate models and the energy budget of Earth”

Re: “Citing paleo is misdirection.”

Nope. It’s citing evidence that you’re trying to avoid, by acting as if the EBM approach doesn’t use a model that depends on challengeable assumptions. In fact, the higher climate sensitivity estimates from paleoclimate support rejecting the claim that “the α derived from a particular observational period is the same as the α applicable under long-term climate change”. For example:

“A better characterization of feedbacks in warm worlds raises climate sensitivity to values more in line with proxies and produces climate simulations that better fit geologic evidence. As CO2 builds in our atmosphere, we should expect both slow (e.g., land ice) and fast (e.g., vegetation, clouds) feedbacks to elevate the long-term temperature response over that predicted from the canonical fast-feedback value of 3 K.”
https://www.annualreviews.org/doi/full/10.1146/annurev-earth-100815-024150

Re: “The point I was trying to make was that there is somewhat of a crisis of confidence in the reliability and meaning of published work in many areas of science. […] In contrast, climate science charges forward with no public doubts despite”

Of course there are public doubts, which are often baseless. If there weren’t such doubts, then websites like this wouldn’t exist. Similarly, there are often baseless public doubts about whether Earth is round, HIV causes AIDS, humans evolved from non-human animals, etc. Public doubt isn’t necessarily rational.

You also make claims about a “crisis of confidence”. But you ignored why such a “crisis” does not apply in the scientific community to various topics, like humans causing most of the recent global warming, and smoking having caused at least hundreds of thousands of cases of cancer. As your own source Ioannidis notes, these scientific hypotheses become more reliable (and more likely to be true) as more and more research groups test the hypothesis using different lines of evidence, methodologies, etc., and keep finding that the hypothesis passes the tests. This has occurred for both the science on smoking killing people and the science on anthropogenic climate change. So your misuse of Ioannidis’ work as applying to climate science is as misguided as someone using his work to cast doubt on the medical science regarding smoking causing cancer.

Frank wrote: “The history of the failure of the IPCC to recognize the discrepancy between observational-based and model-based ECS is documented in great detail below” and referenced “A Matter of Sensitivity”.

Frank replies: No, I’m citing an article by Nic Lewis and Marcel Crok (who ran climatedialogue.com, which hosted debates between both skeptics and supporters of the consensus). And the report was endorsed in a foreword by the host of this blog. Pick any evidence cited in this article and check to see that it is factually correct. For example, you provided a link to what AR4 said about ECS. If you had bothered to read the history section of Lewis and Crok, you would know that Forster and Gregory (2006) was the only pre-AR4 study to report a low central estimate for ECS (1.6 K) that is in reasonable agreement with most recent publications using energy balance models. Forster and Gregory (2006) got the “right answer” found later by Otto (2013, with 11 co-authors supporting the consensus and one skeptic, Lewis) and many others. The consensus vehemently disagrees that this is the correct value for future climate change. However, the AR4 figure you linked reprocessed the pdf from F&G (2006) using a Bayesian methodology so that the median was greater than 2 K, consistent with AR4’s conclusion that ECS must be above 2.0 K. AR5 had to reverse this conclusion and return the lower limit to 1.5 K. Numerous studies seeking to identify the reason why EBMs and AOGCMs are somewhat inconsistent didn’t begin until after Otto (2013).

Of course, you haven’t read the report or audited any of this report’s evidence. If it was published by the GWPF, it must be wrong. That is confirmation bias. I certainly don’t believe everything the GWPF publishes, but experience has taught me to have respect for these particular authors.

This statement is correct. However, the IPCC chooses to ignore the possibility of high internal variability when they say (with inappropriately high certainty) that at least 50% of warming since 1950 must be anthropogenic. In that case, they rely upon climate models to estimate that natural variability likely contributes less than +/-0.1 K to warming and state that the best estimate for anthropogenic warming is observed warming. So they dismiss the possibility of high internal (unforced) variability when making attribution statements, but cite high internal variability as an excuse for ignoring EBMs. Unforced variability of +/- 0.1 K would cause little impact on the conclusion from EBMs.

Even worse, EBMs give low estimates for climate sensitivity when applied to a great variety of periods. Otto (2013) analyzed each decade between 1970 and 2010 individually, and the four decades as a whole, and obtained similar best estimates for ECS. This was a period with little change in aerosol forcing, the greatest source of uncertainty. Lewis and Curry have studied 65- and 130-year periods to negate the effects of the AMO. They averaged over starting and ending periods long enough to average out any effects from ENSO, and avoided volcanoes. Others have studied the entire 20th century and obtained low climate sensitivity. It seems almost absurd to suggest that unforced variability had a similar distorting influence over all of these periods. I say almost, because most of the signal in terms of forcing and warming comes from the period after 1970.
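As a concrete illustration of the energy-budget arithmetic behind these studies, plugging in round numbers of the sort reported for 1970–2009 (illustrative values only, not any particular paper’s exact inputs) gives a best estimate near 2 K:

```python
# Energy-budget ECS with illustrative round numbers (assumed, not any paper's exact inputs)
F_2x = 3.44   # W/m^2 forcing per CO2 doubling (assumed value)
dT = 0.75     # K, observed warming over the period
dF = 1.95     # W/m^2, change in forcing
dQ = 0.65     # W/m^2, change in system heat uptake
ecs = F_2x * dT / (dF - dQ)
print(round(ecs, 1))  # roughly 2 K
```

The calculation is a one-liner once the inputs are fixed; the real work, and the real uncertainty, lies in estimating dF and dQ.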

Logically, you have two choices: a) You can say AOGCMs are correct when they say that unforced variability is low; in that case, the high climate sensitivity of AOGCMs is inconsistent with observations from EBMs. b) You can say that unforced variability can be high enough to badly distort the results from EBMs, in which case AOGCMs must be wrong about unforced variability being low. One way or the other, AOGCMs appear to be inconsistent with observations.

Atomsk cited Richardson (2016) as a refutation of low ECS from EBMs: “Reconciled climate response estimates from climate models and the energy budget of Earth”. He doesn’t cite Lewis’s rebuttal to that paper:

Nic has some substantial counterarguments. With dozens of defenders of the models, it is impractical for Lewis to respond to each with formal journal articles. If these papers went through an appropriately skeptical peer review process, many of the discrepancies and complications Nic reports would be dealt with in those papers. At the moment, Richardson’s paper shouldn’t be taken as the definitive work on this subject.

Furthermore, Richardson is saying that AOGCMs produce 24% more warming than we expect to have observed – which is one way of saying that models are wrong. AOGCMs (with huge, parameterized grid cells on either side of the surface) are completely incapable of properly modeling the micro-environments where we measure GMST, or the heat transfer between them. In normal science, observations are “right” and inconsistent hypotheses are wrong. In the crazy world of climate science, it is perfectly acceptable to say the observations are wrong. An AOGCM is a hypothesis about how our climate system responds to forcing – or, more accurately, a large family of possible hypotheses created by ad hoc tuning incapable of identifying an optimum set of parameters.

Atomsk writes: “You also make claims about a “crisis of confidence”. But you ignored why such a “crisis” does not apply in the scientific community to various topics, like humans causing most of the recent global warming, and smoking having caused at least hundreds of thousands of cases of cancer. As your own source Ioannidis notes, these scientific hypotheses become more reliable (and more likely to be true) as more and more research groups test the hypothesis using different lines of evidence, methodologies, etc., and keep finding that the hypothesis passes the tests. This has occurred for both the science on smoking killing people and the science on anthropogenic climate change.”

There have been numerous articles over the last decade in Science and Nature about the reproducibility of important scientific work and the limitations of peer review.

There is no doubt in my mind that rising GHGs slow radiative cooling to space, and therefore must cause the planet to warm. These are the consequences of applying conservation of energy and the physics of radiation (quantum mechanics) to the planet’s radiation balance. (Thanks to the central role AOGCMs play in IPCC reports, there are a lot of misinformed people who don’t understand this “settled science”.) IMO, climate science takes over from physics with the climate feedback parameter – how much does the planet need to warm to emit or reflect enough radiation (W/m2/K) to restore a steady state – given all of the feedbacks associated with warming. Climate science has made no progress on this subject since Charney in 1979! This post supports the current lower end of this range. (The apparent absence of progress may be due to earlier over-confidence.)

There is no crisis of confidence in climate science – as there is in many areas of science – because it is politically impossible for climate scientists to discuss this subject publicly. Given that IPCC SPMs must be unanimously approved by all delegates (mostly political appointees), there is no chance that any of what Schneider called the “ifs, ands, buts and caveats” characteristic of ethical science is going to appear in SPMs.

“Peer review is not some plot to feed the media “alarmist” (whatever that is) material. ”

It can be. It depends on whom the editor chooses for peer review, and the quality and nature of the review those persons produce. In most cases the reviewers are anonymous, but that’s not necessarily a good idea since it keeps the process opaque.

In his landmark book “The Structure of Scientific Revolutions,” Thomas Kuhn discusses at great length how a scientific field can be captured by a system of false ideas, to the point where these beliefs determine what the legitimate scientific questions are. He called such belief systems “paradigms,” an example being the Earth centered model of the solar system.

Kuhn had no name for the way that a paradigm shields the field from hard questions and contrary evidence, so I call it “paradigm protection.” Paradigm protection is rampant in the field of climate science, where the controlling paradigm is the idea that humans are causing dangerous climate change.

The US National Science Foundation has just produced a remarkably clear example of alarmist paradigm protection. It is a multi-million dollar research funding program cleverly titled “Navigating the New Arctic.”

First things first: scientists should always question results. It seems the climate change promoters didn’t do their due diligence; hence the egg on their faces, compounded by not responding to the pre-publication disclosure – indicative of a bias, perhaps?

Whoah. Good work Nic.
We’ve seen estimates of the OHC trend get adjusted up and up over the last 8 years. The purpose is clearly to bootstrap and justify higher sensitivity estimates that observed surface temperature changes won’t support.

“But perhaps that is too much to hope for.”
Of course it is. If it bleeds, it leads. “New paper is found in error, and when corrected is consistent with earlier estimates” doesn’t get a bit of press coverage.

It is very good you found this apparent error, but it does mean two things:
1) The Nature reviewers didn’t actually review the paper (except maybe for typos), which is unfortunate. I would suggest confirmation bias, but maybe that is too harsh.
2) You are not going to be the authors’ favorite person in the world.

The interesting question is whether Nature will ever publish a correction. My bet: not an ice cube’s chance in Hell.

I didn’t see the paper, only the press releases, but my immediate reaction was: “Well, if that is right, then thermal expansion has to have been the dominant cause for measured sea level rise, and ocean mass increases (melting of land supported ice) estimated from Grace data have to be WAY wrong. That seems unlikely.”

Just a thought, but with higher CO2 levels driving increased photosynthetic activity, and O2 being a direct byproduct of photosynthesis, would one not expect O2 levels to rise in the atmosphere? After all, this was the mechanism that generated oxygen in the first place. This increase in O2 would, on the surface, appear to correlate well with the increased crop production and greening shown by Dr. Spencer and NASA studies.

VTG,
I think it is falling somewhat less than what would be expected from combustion of fossil fuels, due mostly to greater overall plant growth and oxygen production. You can also see it in the slight increase in the magnitude of seasonal fluctuation of oxygen near the end of the record compared to the early part of the record.

Which claim? “60% more heat” or “settled science”? The 60% more heat claim certainly was made, but no claim of settled science. A paper which claims earlier work was way off also implicitly says the science is not settled.

Clearly it doesn’t have anything to do with settled science. But considering that it looks like it has serious (freshman physics level) mistakes, it might be better considered ‘erroneous science’. I hope all the blaring press releases and frightening MSM headlines get walked back, something like say “Sorry, we were mistaken, it really isn’t worse than we thought”….. but I won’t hold my breath.

The cheap goal was all the incorrect information that has been fed to the public from this paper, and that will never be taken back. If it weren’t for the blaring press releases and frightening MSM headlines that followed, a paper with problems like this could be (and likely would be) handled very differently. Maybe an exchange of email messages, followed by a low-key joint correction published in Nature. That is not possible when a paper makes dramatic claims which are then breathlessly pitched to the MSM about how dire things are, accompanied by comparably breathless quotes from the authors about the ‘seriousness of the new findings’. When a scientific paper instantly becomes a bludgeon used to advance political goals, it had damned well better be bulletproof and perfect. And the truth is, very few are. If all press releases about new papers in climate science were withheld for 4 months, civility in resolving errors would be a lot more likely.

Nice work Nic. This fits with my contention that a lot of climate science deals with noisy data and uses poor methods (and poor quality control) to reach conclusions that are dubious. Mann has a new post at RealClimate that perfectly illustrates these points. The bulk of the post is about how “simple physics” explains worsening weather. “simple physics” is another way of saying vague verbal formulations that lack quantification.

We are all in your debt for doing the homework so called “peer reviewers” seem disinclined to do.

Well done. Short, succinct, to the point. Clear and irrefutable.
In immortal praise even.
“ Atomsk’s Sanakan (@AtomsksSanakan) | November 6, 2018
I congratulate him on catching that error”
On page 1, Atom, worth rereading Nic’s logic.
I am hopeful you have done a McIntyre on this paper’s conclusion.

Before everyone gets too excited, John Kennedy tweeted something 3 days ago that suggests there might be a legitimate reason for the difference. It has to do with weighting data by uncertainty. Weighting by uncertainty is a legitimate thing to do, but it is something that would need to be called out in the paper. Also, the difference between weighted and unweighted results should be called out.

Playing with data from the OHC paper. I get their gradient for the regression (1.16) by weighting the data by 1/unc^2. This forces gradient to match the early data and overshoot the later, less certain, data. Other weightings give lower gradients. 1/2 https://t.co/NNP5kyMOMJ pic.twitter.com/bPsXrl65Eh — John Kennedy (@micefearboggis) November 3, 2018

They say more in the methods section. They do a million simulations for trend, presumably varying the numbers according to the stated uncertainties. For each one, they do an OLS regression, so the terminology is correct. The uncertainty has an effect because the earlier terms are more likely to be close to their more highly trending values in each sample, so the overall effect is uncertainty-weighted.

John Kennedy’s weighted regression method (which could indeed be that used by Resplandy) forces the regression fit through zero in 1991. There is no justification for doing so, as I point out in note [xii]. The uncertainties aren’t really zero in 1991; it is simply that the 1991 data value has been deducted from all years’ data. The uncertainty only appears to be low in the early years because 1991 is used as the base year. One could instead have deducted the 2016 data value from all years’ data. That would result in zero uncertainty in 2016 and maximum uncertainty in 1991. Using weighted regression would then force the fit through the 2016 data point and produce a very low trend, as the slow growth in the (then lower uncertainty) later years dominates the fit.

The underlying issue is that the error in dAPO_Climate values is dominated by trend and scale systematic errors in its components. Those errors give rise to irreducible uncertainty in the trend in dAPO_Climate. It is arbitrary which year is used as a base to measure trend errors from. Whichever year is chosen, the trend error magnitude will be zero in that year and grow in both directions away from it. If the method used results in the estimated dAPO_Climate linear trend depending on the arbitrary choice of base year for measuring the trend error, it must be wrong. Where data error ranges arise due to trend uncertainty that affect all years’ data in proportion to distance from a base year, it is not appropriate to weight the data values inversely with their error variance, as Kennedy’s method does.
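A toy weighted regression illustrates this base-year dependence (synthetic, concave data invented for the sketch, not the paper’s values; `np.polyfit` with `w = 1/sigma` plays the role of the inverse-variance weighting). Pinning the fit at 1991 raises the slope, while re-basing the same series to 2016 and pinning there lowers it, even though nothing about the underlying data has changed:

```python
import numpy as np

# Synthetic concave series loosely mimicking dAPO_Climate: faster early
# growth, slower late growth (illustrative numbers only).
x = np.arange(26.0)                  # years since 1991
y = 1.3 * x - 0.016 * x**2           # hypothetical per meg values

def weighted_slope(xv, yv, sigma):
    # np.polyfit weights are 1/sigma, applied to the residuals
    return np.polyfit(xv, yv, 1, w=1.0 / sigma)[0]

# Trend-type uncertainty grows linearly away from the chosen base year;
# a tiny floor stands in for the "zero" uncertainty at the base year itself.
sig_from_1991 = np.maximum(x, 0.01)
sig_from_2016 = np.maximum(x[-1] - x, 0.01)

slope_plain = np.polyfit(x, y, 1)[0]                      # ordinary OLS
slope_1991 = weighted_slope(x, y, sig_from_1991)          # pinned near 1991: higher
slope_2016 = weighted_slope(x, y - y[-1], sig_from_2016)  # re-based to 2016: lower

print(slope_1991, slope_plain, slope_2016)
```

The ordering slope_1991 > slope_plain > slope_2016 is the arbitrariness being described: the answer moves with the choice of base year, so weighting the data inversely by these error variances cannot be right.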

The uncertainties aren’t really zero in 1991; it is simply that the 1991 data value has been deducted from all years’ data.

Fair enough. I haven’t read the papers. So I assumed these were some sort of published measurement uncertainties – similar to what we would do with instrumentation. (HadCRUT has published uncertainties that exist external to any individual’s decision to fit the data.)

I agree the uncertainty is defined as zero for 1991, and then that’s not legitimate. (If one does want to do that, at least define it as zero at the midpoint of the time range! Even then that’s not quite right.)

“estimate the rate of change in the original quantity”
The original quantity is APO. But they are looking at trend in APO_climate. That involves attribution, which you can do for a change. But not necessarily for the original quantity.

It’s like where you measure temperature of a place since 1991, and during that time the location has shifted up a hill. You can talk about the part of the change that was due to climate, and the part that was due to location change. But it doesn’t make sense to ask what fraction of the temperature in 1991 was due to climate, and what fraction due to location. So there is no absolute definition of T_climate.

I can reproduce John Kennedy’s numbers. The R code is here, with lewis.csv being derived from the linked xlsx file:
v = read.csv("lewis.csv")
x = v$year
y = v[, 2]
wt = v[, 3]
wts = 1 / wt^2       # inverse-variance weights
wts[1] = 100         # 1991's stated uncertainty is zero, so give it a large finite weight
h = lm(y ~ x, weights = wts)
print(summary(h))

That gives the same slope of 1.162, and the std error 0.0514, which JK notes is lower than the paper. The essential thing, as Nic says, is that the regression is constrained to pass through 0 at 1991. I agree with Nic that this may not be the right thing to do, even though it is the defined reference value. If you don’t, but assign an uncertainty same as 1992, the slope is 1.014.

“Wow! Did they really do that?!”
No, they did nothing like that, or like what Nic did either. It’s just my shortcut for a regression with fixed intercept. What they in fact did was create a statistical model for the uncertainties of the data, create a million ensembles (realizations) and derive the statistics of that.

It takes a lot more analysis than what is given here to work out the effect of what they did. What John Kennedy did was to show that an uncertainty weighting with fixed intercept was adequate to explain the trend. It mimics some of the effect of what they did, but it isn’t the same.

“Assign the uncertainty for all the data points to be the same as 1992? I”
No, assign the uncertainty of 1991 to be the same as 1992. At present it is artificially zero because 1991 is chosen as the reference value.

Nick Stokes,
OK, so assigning the starting point some (small) uncertainty moves the resulting calculated slope about half way to Nic’s result of 0.88. But that doesn’t answer Nic’s question of the validity of having ever growing uncertainty the further away in time from 1991. Do you think this is reasonable, and if so, why?

“Do you think this is reasonable, and if so, why?”
It may well be. It is, as said, ΔAPO, the difference between a given year and 1991. That becomes more uncertain as time proceeds, and the difference becomes larger.

Nick Stokes,
” It is, as said, ΔAPO, the difference between a given year and 1991.”

But could it not just as easily be calculated based on the differences between a central year and both earlier and later years? (the signs for the ΔAPO’s before and after would be different, of course) Seems to me what date you choose as a reference point ought not substantially change the calculated slope.


There are 2 further problems to discuss.
One is the magical pudding result of this paper.
It is all very well to say we have been underestimating the amount of heat that went into the oceans but where is it?
(Hansen).
If it really truly occurred, it has to be detectable by Argo, the satellites and (choke) the models.
But it is not there.
Are our observations so unbelievably wrong? Rhetorical.
It is one thing to make a grandiose claim.
You have to be able to show proof of the result of that claim.

Second is the concept.
Of using O2.
The authors start with observed changes in ‘atmospheric potential oxygen’ (ΔAPOOBS)
And CO2.
the recent paper that [quantifies] ocean heat uptake from changes in atmospheric O2 and CO2 composition, by Resplandy et al
Reliably.
For the world.
I understand they have figures. Nic has found the figures. But seriously.
We are talking about these levels at sites all over the world and yet using a presumed aggregate of levels at a few sites possibly not even related to the oceans themselves.
The potential standard deviations are enormous.
CO2 level estimation is only done at a couple of places, with very poor correlation to levels at sea level over the oceans, which are substantially unmeasured.
O2 measurement is even more fragile and unreliable.
While the concept is great, it is one of those measures that should help to provide backup and reassurance for our observations, not refute or challenge them.
A glaring difference like this raises two possibilities. The measurements of CO2 and O2 potential have serious limitations or errors (do more measurements in different places).
Or worse, confirmation bias of the worst sort.
The desperate effort to find some way to refute observations which question AGW (climate change) makes people do strange things. Mann, Gergis and the holy grail (hot spot) searchers come to mind.

Keeling agrees with me, Atomski
“In addition, we realized that the uncertainties in the assumption of a constant land O2:C exchange ratio of 1.1 in the calculation of the “atmospheric potential oxygen” (APO) trend had not been propagated through to the final trend.”

Just curious: I tried to do a back-of-the-envelope calculation. If I did the math right, this paper says that we’ve increased the average temperature of the *entire ocean* by about 0.065 degrees in the past 25 years.

First, did I do the math right?

Second, assuming I did, if the ocean is warming more slowly than the atmosphere, would that mean that over time, the ocean will dampen the effect of CO2 warming?

And an aside: they said that they found an error of 60% in the estimate of ocean heat uptake. It’s a pretty big error. Is there any reason that the error couldn’t have been “the other way”? As in: could we have found that the oceans absorbed 60% less heat? And, if so, what does that say about our ability to understand the climate at all?
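The back-of-the-envelope can be checked with standard round figures (assumed values: ocean mass ≈ 1.4 × 10^21 kg, seawater specific heat ≈ 3990 J/(kg K)), using the paper’s 1.33 × 10^22 J/yr over the 25-year span:

```python
heat_per_year = 1.33e22   # J/yr, the paper's central estimate
years = 25                # 1991 to 2016
ocean_mass = 1.4e21       # kg, approximate mass of the global ocean (assumed)
c_p = 3990.0              # J/(kg K), approximate specific heat of seawater (assumed)

# Whole-ocean average warming implied by the stated heat uptake
delta_T = heat_per_year * years / (ocean_mass * c_p)
print(round(delta_T, 2))  # about 0.06 K
```

That is within rounding of the commenter’s 0.065 degrees, so the arithmetic looks right; the physical caveat is that the heat is not spread uniformly through the whole ocean.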

Most of the heat should stay in, and warm, the upper ocean, so the SST should go up more than your estimate. Secondly, the temperature changes are minute, whereas what we are fed about OHC sounds massive even for 0.01 C of warming. I imagine your calculations could be good.


“We show that the ocean gained 1.33 ± 0.20 × 10^22 joules of heat per year between 1991 and 2016, equivalent to a planetary energy imbalance of 0.83 ± 0.11 watts per square metre of Earth’s surface.”

An energy (sic) imbalance of 0.83 W/m2 is consistent with both Argo heat and the CERES raw monthly power flux imbalance. Thus the first hurdle is passed, although I suspect that the result may at least unconsciously be tuned to that.
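The unit conversion in the quoted sentence can be verified directly (assuming Earth’s surface area ≈ 5.1 × 10^14 m²):

```python
heat_per_year = 1.33e22   # J/yr, from the quoted abstract
earth_area = 5.1e14       # m^2, total surface area of Earth (assumed round figure)
seconds_per_year = 3.156e7

# Convert heat uptake per year into a mean power flux per unit area
imbalance = heat_per_year / (earth_area * seconds_per_year)
print(round(imbalance, 2))  # 0.83 W/m^2, matching the quote
```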

Natural variability continues to be underestimated. There is clearly some not so simple physics – Rayleigh–Bénard convection – in cloud formation over oceans that is related to ocean surface temperature. From observation cloud variability over the eastern and central Pacific is the dominant source of global cloud variability (Clement et al 2009).

The cloud radiative effect is the result of a non-linear relationship of rain to the internal kinetic energy of water vapor molecules in the presence of nucleating aerosols. Closed convection cells persist for longer over cool ocean surfaces before raining out from the center. It leads to modulation of TOA power flux by inter-annual to millennial variability in the Pacific state. The rate of ocean heating changed in the late 1990’s as a result of low frequency climate variability – largely to do with shifts in the Pacific state. That will shift again within the decade.

In a practical sense the estimated components on the RHS of this equation have uncertainties greater than +/- 20%. And I suspect that the biological component is radically underestimated through neglect of the terrestrial system.

These estimates are unlikely to be significantly improved leaving CERES and Argo as the most refined energy observing systems. Nonetheless – this study is a refreshingly new and plausible idea in Earth system science – one that opens up a different perspective – and they get serious scientific points.

The noise around it is simply incredulous hyperbole from the climate tribes as usual – sound and fury signifying nothing.

Robert I. Ellison
If CERES provided robust absolute values for energy going in and out, there would be nothing more to debate – the energy budget would be seen to either track CO2 levels or not.
But since there is still a debate, I’ll take it CERES isn’t up to that vital measurement ?

CERES and SORCE it is. And the mismatch is a little more than 1% – some 4W/m2. But of course the EEI is less than that – somewhere around 0.8 W/m2 in the most recent decade – it is subject to a one off adjustment to close the budget.

The result is most intriguing.

Power flux imbalances change from negative to positive on an annual basis. The average is 0.8W/m2 – consistent with the rate of ocean warming. Adjusted to be so. The trend over the period of record is negative as the planet tends to maximum entropy. The large swings in imbalances are due to the current orbital eccentricity. Nor is carbon dioxide the major source of change in precisely and stably measured outgoing energy anomalies. There is no substantive ‘debate’.

Robert I. Ellison
Thanks for your comments.
So the earth system is supposedly warming at 0.8 w/m^2. But is that energy budget something that is
– directly measured, in absolute energy units ? or
– something that is merely modelled from relative radiation changes ?

But even if we grant that figure, why do you then go on to say it isn’t consistent with the overall CO2 trend ?

And what exactly do you maintain is there “no substantive debate” about ?

Besides this, Nic needs to write a letter to Nature if he wants his objection to be considered seriously, where the original authors can reply and the science community can read. Few scientists read blogs.

Looks like you are not being blocked.
“Few scientists read blogs.” I wonder if that includes all the scientists who actually host blogs. Is that an ‘appell’ to your personal authority, or do you have some published citations?

I don’t know how many scientists host blogs, but there obviously are some who work in climate science. But I didn’t make the claim of “few scientists read blogs”; Appell did. He ought to be able to substantiate that claim if he thinks it is true.

Figure 1 is amazing. Is it correct that there are only about 25 data points, and that you can see their mistake just by graphing and eyeballing the data? And that it looks like a linear trend doesn’t fit right and the shape is concave, which is even worse for their conclusions? It may be an honest mistake, but it looks inexcusably sloppy, especially when there are 10 authors who presumably have at least read the paper but none of them noticed an obvious major error. Not to mention the referees and editor!

Nic, “On a corrected basis, I calculate the ΔAPOClimate trend uncertainty as ± 0.56 per meg yr−1,”
I think your approach is overly simplistic here, and doesn’t take account of the Monte Carlo aspect of what they have done. As I read it, the steps are
1. They form, in effect, a statistical model of the data
2. They input this to a million OLS regressions
3. The result trend stated is an average of these, and their σ is deduced from the distribution of results.

I think that the σ’s in Table 4 col 3 are also from these million realisations of the stat model.

They are clearly aware of the non-independence of some of the error terms, as mentioned in their methods text. I think the way they have done the Monte Carlo is where there is effective uncertainty weighting. I don’t know if the underlying stat model is a good one, but the key thing will be whether they vary the terms like corrosion independently from year to year.

As John K said, a simple uncertainty-weighted regression gives a much smaller standard error than theirs, but with a similar trend. It is possible that their Monte Carlo gives an error that is larger because of correlation, but is more discriminating than your idea of a single regression. Of course, it may also be wrong.

Nick
I had thought that they did what you say. I consider that to be an appropriate method, provided the statistical model used for the data is realistic. It is what I did, initially using, in effect, their statistical model. (I took 200,000 rather than 1,000,000 samples for faster computation.) See my note [xiii]. This method gives a dAPO_Climate trend estimate of 0.88 per meg per year.

However, the wording in their Methods section is not very clear, and I now think it likely that they did something different, as follows:

1. They form, in effect, a statistical model of the data.
2. They sample from it a million times to obtain a million realisations of the dAPO_Climate time series.
3. They compute the standard deviation sd[i] of the million realisations for each year i.
4. They carry out a million weighted least squares (not OLS) regressions, using weights w[i] = 1 / sd[i]^2
5. The result trend stated is the mean of the regression slopes, and their σ is their standard deviation.
6. Alternatively they might have carried out a single weighted least squares regression on the best-estimate dAPO_Climate time series (Col. 2 of their Extended Data Table 4) using the above-mentioned weights. That is what John Kennedy did.

Both variants of this procedure give a slope estimate of 1.16 per meg per year, but the error is much smaller in the second case. Unfortunately neither variant is appropriate, given the nature of the data errors.
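The two variants can be contrasted with a minimal numerical sketch. This is not the authors' code: it uses synthetic data in place of their statistical model, invented per-year standard deviations, and 10,000 rather than a million draws, purely to show why both variants recover the same slope while only the first yields a sampling spread.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the dAPO_Climate statistical model: a linear
# signal plus independent Gaussian errors that grow with time, so that
# 1/sd^2 weighting down-weights the later years.
t = np.arange(26)                  # years since 1991
true_slope = 1.0                   # illustrative, per meg per year
sd = 0.5 + 0.1 * t                 # assumed per-year standard deviations
n_realisations = 10_000            # the paper reportedly used 1,000,000

def wls_slope(y, w, t):
    """Weighted least squares slope of y on t with weights w."""
    W = np.sum(w)
    tbar = np.sum(w * t) / W
    ybar = np.sum(w * y) / W
    return np.sum(w * (t - tbar) * (y - ybar)) / np.sum(w * (t - tbar) ** 2)

w = 1.0 / sd ** 2

# Variant 1: many weighted regressions on sampled realisations; report
# the mean slope and take sigma from the spread of slopes.
samples = true_slope * t + rng.normal(0.0, sd, size=(n_realisations, t.size))
slopes = np.array([wls_slope(y, w, t) for y in samples])

# Variant 2: a single weighted regression on the best-estimate series.
best_estimate = true_slope * t
single_slope = wls_slope(best_estimate, w, t)

print(slopes.mean(), slopes.std(), single_slope)
```

Both routes return essentially the same slope; the difference lies entirely in how the uncertainty is generated, which is the point at issue.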

I, like you, think that the σ’s in Table 4 col 3 are also from their million realisations of the stat model.

Interesting though the question is, how Resplandy et al actually derived their estimated trend and uncertainty is not the key issue. Rather, the key issue is how the trend and uncertainty should have been derived, and what their resulting correctly-derived values are. I consider that the approach I used, as set out in the Uncertainty analysis section of my article, is a reasonable method (albeit it could be improved slightly), and that the results of Resplandy’s method – whatever its details – cannot possibly be justified.

Their paper includes the following: “Code availability. ESM codes are available online for IPSL-CM5A-LR (cmc.ipsl.fr/ipsl-climate-models), GFDL-ESM2M (mdl-mom5.herokuapp.com/web/docs/project/quickstart), UVic (climate.uvic.ca/model) and CESM (www.cesm.ucar.edu/models/).”
and on data: “Data availability. Scripps APO data are available at http://scrippso2.ucsd.edu/apo-data. APOClimate data, contributions to APOOBS and ocean heat content time series are available in Extended Data Figs. 1–4 and Extended Data Tables 1–5. Model results are available upon reasonable request to R.W. (IPSL anthropogenic aerosol simulations), L.B. (IPSL-CM5A-LR), M.C.L. (CESM-LE), J.P.D. (GFDL-ESM2M) or W.K. (UVic).”
Requests for code seem to be a kneejerk response from people who really have no idea what is involved. They didn’t just do a regression; they did a million-member ensemble. They used several GCM’s. So what code do you want?

Of course, it is also very important that the media outlets that unquestioningly trumpeted the paper’s findings now correct the record too.

Kevin Trenberth is father to the idea that humanity’s heat is there but it’s hiding from our detection in the deepest of the deep, deep ocean. AGW alarmism is a sort of extended Trenberthianism that takes us right up to the edge of the cliff, where nothing is left but the simple reality that we can’t see humanity’s heat amidst all that pesky natural variability.

Any other weighting, or extensive statistical processing and/or selection of a base year, is subjective without sound first-principle reasoning. As done, it is susceptible to cherry-picking. And, importantly, it is not fully described (whatever was done) in Methods by the authors. There can be no excuse for this without invoking an intent.

Richard
Thanks for confirming. That is how I also had read the methods section when I wrote my article. As my Uncertainty analysis section and endnote [xx] indicates, I effectively did almost the same as you, and obtained the same trend and trend uncertainty. (I took their dAPO_Climate values and uncertainty values from columns 2 and 3 of their data table, which had already had the subtraction and adding of errors in quadrature performed).

However, as I wrote in my Uncertainty analysis section, this procedure is clearly wrong since it treats errors as being uncorrelated between years, whereas in fact the largest components of the error are perfectly correlated across all years – the same error just scales with time elapsed since 1991. When one allows for this, the mean trend estimate doesn’t change, but its uncertainty becomes much larger.
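That distinction can be checked with a small Monte Carlo sketch. The error magnitudes below are invented for illustration (the real components are in the paper's Methods); the point is only that a systematic error scaling with elapsed time leaves the mean trend estimate unchanged while greatly widening its distribution, exactly as described above.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(26)                  # years since 1991
true_slope = 0.88                  # per meg per year, illustrative
n = 20_000

def ols_slope(y, t):
    """Ordinary least squares slope of y on t."""
    return np.polyfit(t, y, 1)[0]

# Case A: independent errors each year, which is what treating the
# Table 4 sigmas as uncorrelated implicitly assumes.
indep = true_slope * t + rng.normal(0.0, 1.0, size=(n, t.size))

# Case B: add a systematic error that is perfectly correlated across
# years because it scales with time elapsed since 1991 - one random
# slope-like term per realisation (magnitude assumed here).
sys_err = rng.normal(0.0, 0.05, size=(n, 1))
corr = true_slope * t + sys_err * t + rng.normal(0.0, 1.0, size=(n, t.size))

slopes_a = np.array([ols_slope(y, t) for y in indep])
slopes_b = np.array([ols_slope(y, t) for y in corr])

# The mean slopes agree; the spread is much larger in case B.
print(slopes_a.mean(), slopes_a.std())
print(slopes_b.mean(), slopes_b.std())
```

With a time-proportional systematic component, the sampling distribution of the trend inherits that component's variance directly, which is why ignoring the correlation understates the trend uncertainty.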

“The dependence of the prediction skills of ENSO on its phase is linked to the variation of signal-to-noise ratio (SNR). This variation is found to be mainly due to the changes in the amplitude of the signal (prediction of ensemble mean) during different phases of the ENSO cycle, as the noise (forecast spread among the ensemble members), both in the Niño3.4 region and the whole Pacific, does not depend much on the Niño3.4 amplitude. It is also shown that the spatial pattern of unpredictable noise in the Pacific is similar to the predictable signal. These results imply that skillful prediction of the ENSO cycle, either at the initial time of an event or during the transition phase of the ENSO cycle, when the anomaly signal is weak and the SNR is small, is an inherent challenge.” https://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-18-0285.1

A sophisticated understanding of the limitations of models – and of the system dynamics – is possible; the alternative is naively waving them about like a magic talisman.

Someone first has to ask “How Does CO2 and LWIR between 13 and 18 microns warm the oceans?” That is the first basic question that has to be asked. Everything has to tie back to CO2’s contribution. CO2’s only contribution is through the radiation or thermalization of 13 to 18 micron LWIR. That is the only mechanism defined by which CO2 can affect climate change.

The oceans are warmed by shortwave/high energy visible blue light, not LWIR. CO2 is transparent to visible radiation. The warming oceans are the greatest evidence that CO2 is not the cause of global warming. What is warming the oceans is also warming the atmosphere above it.

What I take from this is that the raw data in Figure 1 supports Argo very well even to the apparent break in the gradient around 2005.
If you do the conversion this line would agree with the black line in Figure 1, both averaging 10 ZJ/yr for 1991-2016.
The controversy is about the red line fit, not the raw annual data. I regard Argo as a more direct measure anyway, with perhaps more uncertainty in the early years, so here is an independent dataset that also supports those early years. Useful plot to have in addition to Argo.
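As a consistency check on the numbers quoted in the head post (not a factor stated by the paper itself), the implied conversion from APO trend to ocean heat uptake is the same whichever pair of figures one divides:

```python
# Conversion factor implied by the head post's numbers: a 1.16 per meg/yr
# trend corresponds to 13.3 ZJ/yr, and 0.88 per meg/yr to 10.1 ZJ/yr.
factor_paper = 13.3 / 1.16     # ~11.47 ZJ per (per meg)
factor_lewis = 10.1 / 0.88     # ~11.48 ZJ per (per meg)

# Applying the factor to the ~0.88 per meg/yr raw-data trend gives
# roughly the ~10 ZJ/yr average mentioned in the comment above.
ohc_trend = 0.88 * factor_paper

print(factor_paper, factor_lewis, ohc_trend)
```

The two implied factors agree to within about 0.1%, so the disagreement really is about the fitted APO trend, not the conversion to heat content.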

It is called calibration – standard for instruments of all kinds. It is needed not for drift – that can be compensated for and the satellites these days are made to be as stable as humanly possible – but because there is no known absolute value of power flux with which to compare the instrument reading. Once it is calibrated you can go on your merry way.

And I answer only because #jiminy’s cites are as rare as hen’s teeth. I like to encourage him.

I am tempted to just write Mark Twain – rather than say that Argo commenced in 2004 with about 1,000 floats and what seems a bit of a glitch. XBT coverage doesn’t come close to that coverage. And I have given the 1990s Josh Willis data many times.

Any anthropogenic signal there is lost in the ‘noise’ of low frequency climate variability.

And despite Resplandy et al being a fun idea with promise – well yes – that’s what it is – a fun idea with promise. Stop sucking the life out of science to buttress your silly, superficial partisan rhetoric.

#jiminy is one who will reject science – bizarrely, without reading it and on the basis solely of what is happening in his head at the time – and then claim that it validates the warmer members of a CMIP opportunistic ensemble.

“In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of (perturbed physics) ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles. The generation of such model ensembles will require the dedication of greatly increased computer resources and the application of new methods of model diagnosis. Addressing adequately the statistical nature of climate is computationally intensive, but such statistical information is essential.” IPCC, as long ago as 2001.

I am inclined to think that nothing can validate opportunistic ensembles.

“A one-time adjustment to shortwave (SW) and longwave (LW) TOA fluxes is made to ensure that global mean net TOA flux for July 2005–June 2015 is consistent with the in situ value of 0.71 W m−2”.
In situ – that’s Argo. One-time adjustment assumes a linear drift. Maybe it is, maybe it isn’t. Maybe they don’t know so they do what they can. However it tells you this is not an independent measure of the OHC.

“The press release [v] accompanying the Resplandy et al. paper was entitled “Earth’s oceans have absorbed 60 percent more heat per year than previously thought”,[vi] and said that this suggested that Earth is more sensitive to fossil-fuel emissions than previously thought.”

I think the interesting question here is what would have happened to the paper if the results, as Lewis describes them, had been described by the authors themselves? I suspect they wouldn’t have tried to publish it, or, if they had, Nature and the referees would have declined it. Does anyone really doubt this?

One should focus on the facts. In the main post Nic rolled out a “major problem” in the methods which questions the basic findings of the Resplandy et al. (2018) paper. Even a hasty look at fig. 3 of the paper shows that an increase in APOclimate of more than 1 per meg/year is very unlikely for the 25-year span after 1991:
A more stringent peer review should have stepped in to avoid further confusion.
On the other hand it was also mentioned by Nic:
“Moreover, even if the paper’s results had been correct, they would not have justified its findings regarding an increase to 2.0°C in the lower bound of the equilibrium climate sensitivity range and a 25% reduction in the carbon budget for 2°C global warming.”
The conclusions of the paper are not justified at all by its results. This was shown by other scientists too, e.g. Thomas Stocker and James Annan.
Both findings are reason enough that this paper (as it currently stands in Nature) should never have been released. IMO this should also lead to some questions about the editorial processes of the journal.

Was really disappointed in how Curry advertised this blogpost as “Nature strikes out again”. If she really thinks that, PNAS (which is about as high-tier a journal as Nature) also struck out when it published error-containing research Curry co-authored. The double standard in Curry’s position is amazing.

As I said before, if Lewis analysis is correct, then I congratulate him on catching that error, because I didn’t. He can feel free to submit his post as a comment or a response, so that it can undergo peer review. But this doesn’t reveal some deep unreliability in Nature, unless people also accept that Curry’s past mistakes mean she’s unreliable. And I doubt people would be willing to accept that.

Seriously, Atomsk? You’re comparing the actual error in the Nature paper (if Nic is correct, which he appears to be given that the authors agreed with him that the uncertainty was too small) with Dr. Curry’s error, which was:

“The authors note that the legends for Figs. 1, 2, and 3 appeared incorrectly.”

Other than the fact that this is a totally trivial error, it’s not even clear who made it, Dr. Curry or the Journal. Nor does it matter. It’s as important as a typo, which is to say, not at all. Unlike the error Nic found, it doesn’t change any of her claims or conclusions in the slightest.

Re: ““The authors note that the legends for Figs. 1, 2, and 3 appeared incorrectly.”
Other than the fact that this is a totally trivial error, it’s not even clear who made it, Dr. Curry or the Journal. Nor does it matter. It’s as important as a typo, which is to say, not at all. Unlike the error Nic found, it doesn’t change any of her claims or conclusions in the slightest.
How petty can you get? Sheesh”

JC SNIP: This is irrelevant. The lead author on this paper was my postdoc. He corrected a minor error in the journal.

Re: “Other than the fact that this is a totally trivial error, it’s not even clear who made it, Dr. Curry or the Journal. Nor does it matter. It’s as important as a typo, which is to say, not at all. Unlike the error Nic found, it doesn’t change any of her claims or conclusions in the slightest.
How petty can you get? Sheesh …”

JC SNIP: this is a guest post and technical thread. Your specious criticisms about things that I have previously written that are irrelevant to this thread are worse than irrelevant.

It looks like you messed up, Willis. Tell me what the error was in the legends. Or in other words: tell me how the figure changed between before the correction and after the correction.
You would need to know this in order to make the claims you did (such as that the error was “totally trivial”, “as important as a typo”, etc.).

Not true at all. If the error required her to retract even one of her claims, it would have had to be mentioned in the correction.

It was not. Not one of her ideas, claims, or conclusions were required to be changed, corrected, or retracted based on the error in the legends.

Ergo, unlike the problem that Nic identified, it was a trivial error, not something that affected her scientific claims.

Re: “If the error required her to retract even one of her claims, it would have had to be mentioned in the correction”

Please read more closely, Willis. I didn’t say anything about retracting her claims. I said it would be difficult to use the uncorrected figures alone to support informed acceptance of the conclusions, because the uncorrected figures are difficult to interpret. That’s why the figure legends needed to be corrected.

Re: “Ergo, unlike the problem that Nic identified, it was a trivial error, not something that affected her scientific claims. But you knew that …”

And you failed to meet my challenge, as expected, because you made your claims without knowing what the actual error in the figure legends was. This is why you’ve been warned before to actually read scientific sources before you comment on them, instead of relying on press pieces or your gut feeling:

You are comparing a blueberry to a grapefruit and claiming they are the same because both are roughly spherical. The central claims of the paper appear to be just wrong, and the stated uncertainty clearly so. That is bad for any journal… even Nature. The question people should be asking is how the review process failed in this case, and how it could be improved. My guess is that had Nic been a reviewer the paper would not have been published in the form it was. Maybe that points toward how reviewers of ‘groundbreaking’ papers should be selected.

There should be massive opportunities for future generations for case studies of how well reviewers were prepared academically to fully grasp methods and approaches that were “novel” or untried. But, if interviewed, I wonder how many would be candid and say “I had no idea what they were doing”, just out of embarrassment at such an admission.

To be around in 2050 for such a retrospective. And not just this issue but being able to analyze the entire decision making process of the establishment for the preceding 70 years. Oy!!!

Re: “You are comparing a blueberry to a grapefruit and claiming they are the same because both are roughly spherical. The central claims of the paper appear to be just wrong, and the stated uncertainty clearly so”

The central claim of the paper was using a novel, proxy-based method to show that ocean heat content increased. That central claim is true. At best, Lewis showed the OHC increase was over-estimated and the uncertainty under-estimated. And people are now disputing his claim on that, which is why I recommended he submit it for peer review.

Once again:
This isn’t a black eye for Nature, unless you want to claim that the journals that published Spencer and Christy’s debunked UAH work, are also garbage. And note that in those cases, unlike this case, the central claim was wrong. Spencer and Christy’s central claim was that satellite-based MSU analysis did not show tropospheric warming. They were wrong.

Re: “The question people should be asking is how the review process failed in this case, and how it could be improved.”

People have been asking how peer review can be improved since before you or I were born. This blogpost offers nothing novel for that discussion.

Atomsk’s Sanakan,
“The central claim of the paper was using a novel, proxy-based method to show that ocean heat content increased. That central claim is true.”

Please. The central claim of the published paper, and the one that was blared across every publication from Scientific American to the NY Times, is that the new method indicates with *high confidence* that actual warming of the oceans is ~60% greater than earlier temperature-based estimates. The paper goes on to say that the new analysis is directly policy-relevant: the lower limit for EBM estimates of ECS should be increased to 2.0°C per doubling, and cumulative fossil fuel use needs to be 25% lower to limit warming to 2°C above pre-industrial.

All that was simply wrong.

The central estimate for the corrected analysis (http://www.realclimate.org/images//resplandy_new_fig1.png) is only as high as it is because the authors simultaneously changed another part of their original calculation: the “oxidative ratio (OR) of land carbon” was 1.1 in the original paper, but in the revised calculation this was reduced to 1.05. This change had the effect of increasing the estimate of the trend by 0.15, yielding a net of 1.05 ± 0.62 per meg/yr. Absent the change in oxidative ratio, the revised calculations would have produced a trend of 0.90, very close to what Nic suggested (0.88), and a central estimate of ocean warming BELOW earlier estimates. I take the authors at their word that the value of 1.05 for the oxidative ratio on land is “more appropriate” in the calculations. Some may think that change was made mostly to “save” the central estimate from the paper. But in any case, the now extremely broad uncertainty makes the results overlap the entire uncertainty ranges of earlier temperature-based estimates… the new study in fact can’t say much of anything about the rate of ocean warming, only that it is warming (but we knew that already). Earlier studies clearly provide better estimates of ocean warming.

I am sure you will continue to say the errors in the paper in no way reduced its “scientific importance”. You will continue to be mistaken about that.

Steve, the authors can’t have it both ways. If 1.05 is a more appropriate value for OR, why did they use 1.1 in the original paper? The circumstantial evidence is that they used this parameter to make the result more credible.

I am not sure that the ‘error’ has been demonstrated even close to conclusively, or that it is not utterly trivial as well. We shall see. On the other hand, the personal, the politics, the motivated sociology, the self-lauding verbosity and the prominence of trifling debating points from #atomski are most certainly inconsequential.

I make one or two comments a day – instead of a morning paper with my coffee. And respond to people as they respond. And this sort of nonsense from you is contrary to blog rules – so unless you have something interesting to say… and that would be novel for you.

I have been wondering about the connection between O2 depletion, N2 and CO2 increase. I found some interesting stuff.
“While no danger exists that our O2 reserve will be depleted, nevertheless the O2 content of our atmosphere is slowly declining–so slowly that a sufficiently accurate technique to measure this change wasn’t developed until the late 1980s. Ralph Keeling, its developer, showed that between 1989 and 1994 the O2 content of the atmosphere decreased at an average annual rate of 2 parts per million. Considering that the atmosphere contains 210,000 parts per million, one can see why this measurement proved so difficult.

This drop was not unexpected, for the combustion of fossil fuels destroys O2. For each 100 atoms of fossil-fuel carbon burned, about 140 molecules of O2 are consumed. The surprise came when Keeling’s measurements showed that the rate of decline of O2 was only about two-thirds of that attributable to fossil-fuel combustion during this period. Only one explanation can be given for this observation: Losses of biomass through deforestation must have been outweighed by a fattening of biomass elsewhere, termed global “greening” by geochemists. Although the details as to just how and where remain obscure, the buildup of extra CO2 in our atmosphere and of extra fixed nitrogen in our soils probably allows plants to grow a bit faster than before, leading to a greater storage of carbon in tree wood and soil humus. For each atom of extra carbon stored in this way, roughly one molecule of extra oxygen accumulates in the atmosphere.

At first glance, this finding appeared to be good news to those worried about the climatic effects of the ongoing buildup of anthropogenic CO2 in the atmosphere, for it suggested that during this five-year period an amount of carbon equal to one-third of that burned for energy production had taken up residence in the biosphere. As another third was taken up by the ocean, this meant that between 1989 and 1994 only one-third of the CO2 we produced by burning fossil fuels accumulated in the atmosphere. However, this enormous biospheric storage is likely an anomaly reflecting an unusual climate, perhaps related to persistent El Niño conditions or emissions by the volcano Pinatubo. A burst of plant growth during this period allowed carbon storage to exceed respiratory losses temporarily, but once climate conditions return to normal the products of this burst will be eaten up, releasing this carbon stored in organic matter back into the atmosphere as CO2 gas. Thus, we can’t use Keeling’s observation as evidence that the biosphere will serve as a major sink for the CO2 we generate. But through Keeling’s O2 measurements we now have a reliable means to monitor the ongoing changes in global biomass. Eventually his record will allow us to diagnose the response of the Earth’s biomass to changing climate and nutrient availability.” by Wallace S.Broecker http://www.columbia.edu/cu/21stC/issue-2.1/broecker.htm#footnotes
I think this questions the methodology of using the CO2 level and O2/N2 ratio as an exact thermometer.
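The arithmetic behind the quoted Broecker passage is simple enough to lay out; all numbers below are his round figures, not measurements, and the "one O2 per stored C" ratio is the approximation he states:

```python
# Rough O2 budget following the quoted Broecker passage (round figures).
o2_per_c_fossil = 1.4        # ~140 O2 consumed per 100 fossil C burned
observed_decline = 2.0       # ppm O2 per year, 1989-1994 (Keeling)
atmospheric_o2 = 210_000     # ppm O2 in the atmosphere

# The observed decline was only about two-thirds of that attributable
# to fossil-fuel combustion, so the combustion-only expectation was:
expected_decline = observed_decline * 3 / 2        # 3.0 ppm/yr

# The shortfall is O2 returned by net carbon storage in the biosphere,
# at roughly one O2 molecule per extra C atom stored:
land_o2_source = expected_decline - observed_decline   # 1.0 ppm/yr

# Fractional change per year - why the measurement was so difficult:
fractional = observed_decline / atmospheric_o2         # ~1e-5 per year

print(expected_decline, land_o2_source, fractional)
```

A shift of order one part in 10^5 per year in such a budget term illustrates how sensitive any APO-based inference is to the assumed land-exchange ratios.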

The greater issue that the likes of Atomski don’t confront is: How much can the consensus and its peer review be trusted? Have they really ceased practicing the advocacy science so clearly revealed by their whitewashing of Climategate, or have they just got better at it?

No scientists read blogs, someone above said. Well, blogs are how peer review itself can now be reviewed and kept honest. Without blogs, would a paper with the above content have any chance of seeing the light of day? Would Resplandy et al have deigned to respond? Or would the gatekeepers of Mannian “redefined” peer review have buried it?

2) There is a 333 W/m^2 up/down/”back” energy loop consisting of the 0.04% GHG’s that absorbs/”traps”/re-emits per QED simultaneously warming BOTH the atmosphere and the surface. – Good trick, too bad it’s not real, thermodynamic nonsense.
And where does this magical GHG energy loop first get that energy?

3) From the 16 C/289 K/396 W/m^2 S-B 1.0 ε ideal theoretical BB radiation upwelling from the surface. – which due to the non-radiative heat transfer participation of the atmospheric molecules is simply not possible.

Maybe, maybe not. But it doesn’t matter; the obvious errors have been corrected. The claim that cumulative emissions must be reduced by 25% doesn’t now seem supportable, but Keeling didn’t address that in his post at RealClimate.

But it DOES matter. The scientific world and the public need to study ALL of Nic’s arguments and ALL the errors he found in Keeling, learn from them, and build on them – not just an indirect reference that he identified one smaller problem. Study how Richard Feynman (1974) described the researcher Young systematically studying and successfully identifying ALL the significant problems in his rat-maze study – only to have the rest of that branch of science (psychology via rat mazes) ignore his discoveries and continue to publish irreproducible results! Cargo Cult Science http://calteches.library.caltech.edu/51/2/CargoCult.htm

He finally found that they could tell by the way the floor sounded when they ran over it. And he could only fix that by putting his corridor in sand. So he covered one after another of all possible clues and finally was able to fool the rats so that they had to learn to go in the third door. If he relaxed any of his conditions, the rats could tell.
Now, from a scientific standpoint, that is an A‑Number‑1 experiment. That is the experiment that makes rat‑running experiments sensible, because it uncovers the clues that the rat is really using—not what you think it’s using. And that is the experiment that tells exactly what conditions you have to use in order to be careful and control everything in an experiment with rat‑running.
I looked into the subsequent history of this research. The subsequent experiment, and the one after that, never referred to Mr. Young. They never used any of his criteria of putting the corridor on sand, or being very careful. They just went right on running rats in the same old way, and paid no attention to the great discoveries of Mr. Young, and his papers are not referred to, because he didn’t discover anything about the rats. In fact, he discovered all the things you have to do to discover something about rats. But not paying attention to experiments like that is a characteristic of Cargo Cult Science.

Willis, as the climate grows colder due to weakening magnetic fields, I wonder when you will admit to being wrong on your climatic theories, which do not and have not held up when viewed against the historical climatic record.

I am more confident than ever that those of us who agree solar is the main driving climatic force will be proven right over the next few years.

Thank you for the reference Willis.
What is the meaning of “saving one’s ass”?
Here is a good example:

“It isn’t clear whether the authors agree with all of Lewis’s criticisms, but Keeling said “we agree there were problems along the lines he identified.”
Paul Durack, a research scientist at the Lawrence Livermore National Laboratory in California, said promptly acknowledging the errors in the study “is the right approach in the interests of transparency.”
But he added in an email, “This study, although there are additional questions that are arising now, confirms the long known result that the oceans have been warming over the observed record, and the rate of warming has been increasing,” he said.
Gavin Schmidt, head of the NASA Goddard Institute for Space Studies, followed the growing debate over the study closely on Twitter and said that measurements about the uptake of heat in the oceans have been bedeviled with data problems for some time — and that debuting new research in this area is hard.
“Obviously you rely on your co-authors and the reviewers to catch most problems, but things still sometimes slip through,” Schmidt wrote in an email.
Schmidt and Keeling agreed that other studies also support a higher level of ocean heat content than the Intergovernmental Panel on Climate Change, or IPCC, saw in a landmark 2013 report.
Overall, Schmidt said, the episode can be seen as a positive one.
“The key is not whether mistakes are made, but how they are dealt with — and the response from Laure and Ralph here is exemplary. No panic, but a careful reexamination of their working — despite a somewhat hostile environment,” he wrote.
“So, plus one for some post-publication review, and plus one to the authors for reexamining the whole calculation in a constructive way. We will all end up wiser.””

I, with the other co-authors of Resplandy et al (2018), want to address two problems that came to our attention since publication of our paper in Nature last week. These problems do not invalidate the methodology or the new insights into ocean biogeochemistry on which it is based, but they do influence the mean rate of warming we infer, and more importantly, the uncertainties of that calculation.
We would like to thank Nicholas Lewis for first bringing an apparent anomaly in the trend calculation to our attention. We quickly realized that our calculations incorrectly treated systematic errors in the O2 measurements as if they were random errors in the error propagation. This led to under-reporting of the overall uncertainty and also caused the ocean heat uptake to be shifted high through the application of a weighted least squares fit. In addition, we realized that the uncertainties in the assumption of a constant land O2:C exchange ratio of 1.1 in the calculation of the APO trend had not been propagated through to the final trend.
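The practical consequence of the error propagation mistake can be illustrated with a small Monte Carlo sketch. This is not the authors' actual code, and all the error magnitudes below are hypothetical placeholders: the point is only that a calibration-like drift shared by the whole series feeds straight into the fitted trend, so treating it as independent year-to-year noise badly under-reports the trend uncertainty.

```python
import numpy as np

# Illustrative sketch (NOT the paper's code; sigma values are assumed):
# systematic vs random errors in a least-squares trend over 1991-2016.
rng = np.random.default_rng(0)
years = np.arange(1991, 2017)
t = years - years.mean()
true_trend = 1.16            # per meg/yr, the paper's original central value
sigma_rand = 1.0             # assumed independent per-year error (per meg)
sigma_drift = 0.5            # assumed 1-sigma systematic drift (per meg/yr)

def trend_spread(systematic, n=10_000):
    """Std dev of fitted trends over n simulated realizations."""
    trends = np.empty(n)
    for i in range(n):
        y = true_trend * t + rng.normal(0.0, sigma_rand, t.size)  # independent noise
        if systematic:
            # one drift shared by the whole series: it maps directly onto the trend
            y += rng.normal(0.0, sigma_drift) * t
        trends[i] = np.polyfit(t, y, 1)[0]
    return trends.std()

print(trend_spread(systematic=False))  # random only: spread ~0.03 per meg/yr
print(trend_spread(systematic=True))   # with shared drift: spread ~0.5 per meg/yr
```

With 26 annual points, independent noise averages down to a small trend uncertainty, while a fully correlated drift does not average down at all, which is why the corrected uncertainty below is several times the original one.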

As the researcher in charge of the O2 measurements, I accept responsibility for these oversights, because it was my role to ensure that details of the measurements were correctly understood and taken up by coauthors.

We have now reworked our calculations and have submitted a correction to the journal.
Details

In our definition of ΔAPO, we used a default value of 1.1 for the O2:C oxidative ratio (OR) of land carbon. However, a lower ratio is probably more appropriate. Specifically, Randerson et al. (2006) argued for a ratio of around 1.05, based on the composition of stems and wood, given that woody biomass dominates long-term carbon sources and sinks on land. Other recent studies have suggested similar ratios, e.g. Clay and Worrall (2015). Our previous calculations did, in fact, allow for a range of 1.05 ± 0.05, consistent with the above estimates and typical uncertainty ranges. However, we applied this range only to the ΔAPOClimate-to-ΔOHC ratio and neglected its impact on the APO budget itself, which used a fixed ratio of 1.1. If the actual OR were lower than 1.1, the observed APO decrease (ΔAPOOBS) would include a contribution from the global land carbon sink, because the ΔO2 term then imperfectly cancels the 1.1 ΔCO2 term.

In the updated calculations we now also apply the OR range (1.05 ± 0.05) to the APO calculation, which by itself increases the ΔAPOClimate trend by 0.15 ± 0.15 per meg/yr relative to an estimate using 1.1.
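The imperfect cancellation can be made concrete with a few lines of per meg bookkeeping. The land-sink size used here is a made-up illustrative number, not a value from the paper: a land carbon sink of size B changes O2 by +OR·B and CO2 by −B, so its contribution to APO (defined with the fixed 1.1 factor) is (OR − 1.1)·B, which vanishes only if the true OR equals 1.1.

```python
# APO is defined so that land exchange cancels exactly when OR = 1.1:
# a land C sink of size B changes O2 by +OR*B and CO2 by -B,
# so its APO contribution is OR*B + 1.1*(-B) = (OR - 1.1)*B.
OR_true = 1.05   # oxidative ratio suggested by Randerson et al. (2006)
B = 30.0         # hypothetical cumulative land sink, in per meg of CO2 (illustrative)

residual = (OR_true - 1.1) * B
print(residual)  # ≈ -1.5 per meg left uncancelled in the observed APO decrease
```

Any such uncancelled land term sits inside ΔAPOOBS and would otherwise be misattributed to the climate component.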

Bottom Line

We recomputed the ΔAPOClimate trend and its uncertainty based on the distribution of unweighted least-squares fits to each of the 10⁶ ensemble realizations of ΔAPOClimate generated by combining all sources of uncertainty, with correlated errors now treated as systematic contributions to the trend. The resulting trend in ΔAPOClimate is 1.05 ± 0.62 per meg/yr (previously 1.16 ± 0.18 per meg/yr), which yields a ΔOHC trend of 1.21 ± 0.72 × 10²² J/yr (previously 1.33 ± 0.20 × 10²² J/yr), as summarized in the updated Figure 1:
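As a quick consistency check on these figures, the conversion factor from the ΔAPOClimate trend to the ΔOHC trend can be backed out of both the corrected and the original value pairs; it comes out near 1.15 × 10²² J per per meg either way, so the correction changed the trend and its uncertainty but not the conversion itself.

```python
# Back out the per meg -> heat conversion factor implied by the quoted pairs.
factor_new = 1.21e22 / 1.05   # corrected: J/yr per (per meg/yr)
factor_old = 1.33e22 / 1.16   # original
print(factor_new, factor_old)  # both ≈ 1.15e22 J per per meg
```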

The revised uncertainties preclude drawing any strong conclusions with respect to climate sensitivity or carbon budgets based on the APO method alone, but they still lend support to the implications of the recent upward revisions in OHC, relative to IPCC AR5, based on hydrographic and Argo measurements.

“Hughes blamed 2016 GBR coral bleaching on global warming and Jim Steele, and then later Wolanski in a published paper, showed that bleaching was due to lowered sea levels from El Nino and natural current mechanics. Did Hughes admit any error?”

It’s important to try to stay in reality. Nic’s (laudable) finding of the error here does not retroactively endorse or prove every flight of fancy questioning a mainstream paper that has been posted on a contrarian blog (and we all know there are many).

Better to talk to an expert about a specific epoch. The most obvious reasons:

* shorter term adaptation – under the more gradual rates of change that appear more normal in the geologic record, reef locations can more easily shift over time (see last glaciation sea level changes – the current Great Barrier Reef is only ~20K years old)

* longer term adaptation/evolution. Reefs receding or disappearing for a time and coming back thousands of years later under better conditions, or with different groupings of coral species, is a blink in geologic time. But it’s not a blink in the timeframe of human civilization and economies.

In general adaptation and evolution aren’t magic – they happen over longer periods of time. This is the crux of the entire “abrupt climate change disrupts ecosystems” discussion.

Importantly, it is just an error in reasoning to think this sort of observation (corals have been around for a while) contradicts the directly observed mass coral mortality we’re seeing in the real world. This is basically rejecting observations in favor of theory. Any implied theory of coral super-resilience has been falsified by observations in nature…

Flora and fauna adapt rapidly to climate changes. Species that were struggling at one temperature thrive at the warmer temperature, while others are disadvantaged. Some die out. Species move rapidly to new habitats on ocean currents and wind.

the current Great Barrier Reef is only ~20K years old

In which case it began at the LGM, when sea temperatures were some 5 °C colder than now, and yet it is still thriving. Coral reefs in much warmer waters than the GBR are also thriving. Coral atolls have grown up 140 m as temperatures rose from the LGM and sea levels rose.

There is ample evidence that life thrives as temperatures warm up from the severe coldhouse conditions Earth is presently in. And ample evidence that life thrives much better than now at substantially warmer temperatures.

It was a response to David A. and I made a mistake in placing it. Sorry
In your blogpost you write: ” So only two days after he pointed out what he thought was an error, Nic Lewis was already castigating Resplandy et al for not acknowledging his analysis. He gave them no-to-little time for analysis, no time to figure out what he was saying or to address the subtleties involved…”
This is not true. The lead author Resplandy was informed about the issues on the 1st. of November as Nic wrote in his post:” …so later on November 1st I emailed Laure Resplandy querying the ΔAPOClimate trend figure in her paper and asking for her to look into the difference in our trend estimates as a matter of urgency.” Indeed the given time was much more than two days. At RC you introduced the wording “gentleman scientist”, one should introduce also the wording “gentleman commenter” and it’s first duty: not to lie!

Why are coral dying en masse all over the world as waters warm, instead of thriving, Peter?

Mostly (a) you have no idea, (b) you confuse ~millennial-scale geologic time with short-term ~decadal human-experience timeframes, and (c) your “rapid adaptation” theory has been falsified in the real world, but you’re more interested in philosophy than in what’s real.

We all get to choose whether we put more faith in empirical, scientific reasoning or in such personal philosophy. Knock yourself out. Your theorizing is off-topic for this thread, I just wanted to point out the earlier whopper of an error in somebody believing heat stress-based bleaching of coral has been debunked (by alternative amateur/blog theory). That’s an obviously and dangerously incorrect thing to believe.

We in the US are extremely proud of Nic Lewis. He has proved and validated that President Trump was always right, and that climate change isn’t for real. Nic Lewis has become a star among us Trump supporters to help tell people the real truth about climate. Thank you for this great gift. Jesus has said warmth is in the hearts of people. May God bless you Nic Lewis. You have done America a great favor.

I propose there are numerous flaws in the entire global warming situation, simply because no one is willing to consider other factors, beyond the greenhouse effect, that may be contributing to the problem.

Theory one. They lied to us. The Earth is not traveling around the sun in a perfect orbit, but is rather being sucked into the sun, and as we get closer each moment, storms are becoming more violent, skin cancers are rising, the polar ice cap is melting, and ocean temperatures are rising. This theory of mine will undoubtedly be scoffed at, until someone in the scientific community claims it as their own, and studies are conducted.

We keep discovering new planets at the edge of our solar system. They’re being sucked in also… like a bath drain.

Mars is suddenly sprouting water as it settles into our planet’s former orbit, while our climate becomes more like that of Venus each day.

No? Look at a whirlpool. Look at a hurricane’s eyewall. Look at a twister. To believe we are in some sort of perfect orbit is insane… if the sun can capture Jupiter, it can certainly draw us in as well.

We all know what kind of heat source is needed to heat the ocean waters… Which brings me to theory 2, and another obvious, potentially overlooked contributor to the problem…

Engines require oil and coolant to operate. So does the Earth.

We live on tectonic plates… masses of dirt and rock that float on a sea of molten lava at the Earth’s core. The oceans are the Earth’s cooling system… and the oil is the only thing that separates the plates under our feet and the magma below them.

That layer of oil is what we pump out of the ground every day. Oil companies like to say the oil is a natural byproduct of the cooling magma… not that it matters. It is what keeps the magma from engulfing all the lands of the earth.

We have plenty of alternative fuel possibilities, yet the world will not be satisfied until, like locusts, we consume every last drop of a limited resource.

Either or both of the above theories, alone or in combination with the greenhouse effect, could be contributing factors, and if so, there isn’t anything we can do about it. No one is willing to see a bigger picture. No one will consider other causes. Except me. Now you all can mock me for being right about it.