It then, however, descends into a discussion of the so-called “pause” and finishes with a claim that

The 2015-16 El Nino has been one of the strongest on record, temporarily elevating global temperatures by a significant margin. This means that their case rests on the El Nino temperature increase and will be destroyed when the El Nino subsides….

However, what he leaves out is that the very link he cites to support his claim that the 15/16 event was one of the strongest El Ninos on record goes on to say that it is “comparable with the 1997-98 and 1982-83 events. It is too early to establish conclusively whether it was THE strongest.”

The figure on the right shows GISTemp monthly data from November 1966 to November 2016 (blue), with a 12-month running average (red), and an OLS trend (black). You can clearly see spikes corresponding to El Ninos in 82/83, 97/98, and one that is not complete in 15/16. You can also clearly see an underlying warming trend. The reason it is warmer now than it has been in the past is clearly not because of the El Nino, the impact of which is currently waning. It also seems pretty clear that once it has waned, we will not be back in some period that can reasonably be regarded as a “pause”.
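For anyone who wants to reproduce this kind of figure, here is a minimal sketch. It uses synthetic monthly anomalies as a stand-in for the actual GISTemp series (which would be downloaded from NASA GISS); the trend and noise values are illustrative only.

```python
import numpy as np

# Hypothetical stand-in for GISTemp monthly anomalies: a linear warming
# trend plus white noise (the real data come from NASA GISS).
rng = np.random.default_rng(0)
months = np.arange(600)                        # 50 years of monthly values
anoms = 0.0015 * months + rng.normal(0, 0.1, months.size)

# 12-month running average (as in the red curve of the figure)
running12 = np.convolve(anoms, np.ones(12) / 12, mode="valid")

# OLS trend (as in the black line): slope in degrees per month
slope, intercept = np.polyfit(months, anoms, 1)
decadal_trend = slope * 120                    # degrees per decade
```

With real GISTemp data, the running average smooths out the El Nino spikes enough that the underlying trend is obvious by eye.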

Consider, also, the figure on the left which shows the 0-2000 m ocean heat content. It’s clear that even over the period when there was meant to be a “pause” in surface warming, there was no indication of a pause in overall warming. This is probably a key point. Most of the excess energy goes into the oceans (about 93%) while only a small fraction heats the surface. Given intrinsic variability, we don’t expect the surface to warm smoothly and at a constant rate, even if the system as a whole is undergoing unequivocal warming. Using periods when surface warming happens to be slow (despite overall warming continuing) to argue that global warming has paused, is extremely disingenuous.

So, I have no idea what will happen once the impact of the 15/16 El Nino has ended, but even if we do enter another period of slower surface warming, it will almost certainly be at a higher level than the previous period of slower surface warming. One should bear in mind (in my view, at least) that surface warming acts to reduce the energy imbalance and that there is probably a limit to the magnitude of the planetary energy imbalance. Therefore, how much we warm in the long term will likely depend on how much we emit. Even though surface warming will almost certainly be variable, we really should be careful of using this short-term variability to infer things about overall anthropogenic global warming.

When I posted David Whitehouse’s article on Twitter (with a rather exaggerated – unfair? – assessment, I will admit) some responses pointed out that there was much to agree with (as, I will admit, there was). The impression I got was that some regarded this as an indication that maybe there was some common ground to work with. Maybe, but I have a slightly different interpretation. That the beginning of the article actually presented a reasonable description of anthropogenic global warming would seem to indicate that David Whitehouse knows enough to know that his conclusions are disingenuous. That suggests, to me at least, that there is little chance of reaching a common ground; his article seems much more an attempt to regenerate claims of a “pause” than as an attempt to improve public understanding of this topic. I am, however, more than happy to be proven wrong.

Yes, he is being disingenuous. He is avoiding talking about the global warming that has occurred since 1998, and is trying to blame the last couple of warm years solely on the El Nino. And not a temperature chart or trend line to be seen.

If Whitehouse accepts the underlying science behind global warming, as suggested, then he must also accept that as long as annual atmospheric CO2 concentrations continue to rise without pause, then the overall warming of the planet will continue unabated and in step—albeit influenced by natural variations as some of the heat sloshes back and forth between sea, land and atmosphere.

“The only way is up”, as some songster once sang. Anything else can only be an attempt to deceive the public for political ends.

That the beginning of the article actually presented a reasonable description of anthropogenic global warming would seem to indicate that David Whitehouse knows enough to know that his conclusions are disingenuous.

Bingo. He also knows enough to know that the average conservative reader does NOT know that his conclusions are disingenuous. The anti-intellectualism of this scares me, as it applies not only to climate science, but to most aspects of our civil society.

Consider, also, the figure on the left which shows the 0-2000 m ocean heat content. It’s clear that even over the period when there was meant to be a “pause” in surface warming, there was no indication of a pause in overall warming. This is probably a key point.

It is crucial, IMO. The key point is that the climate system is not the troposphere – it’s *mostly* ocean, and energy continued to accumulate in the ocean throughout the so-called ‘pause’. The typical response from deniers is that ARGO data are unreliable. There’s no pleasing some people.

BBD said essentially what I was going to. I’ll only add that while I have respect for the arguments that the GMST record does NOT show a statistically significant trend difference over Teh Paws era, it is visually discernible. Thus, in the increasingly infamous musings of Teh Dilbert, I don’t find the significance argument as *persuasive* as the unabated OHC trend over the same interval.

The strength of the AGW argument is in the coherence of theory, consilience of *multiple* lines of evidence, and the consensus of experts who interpret both in refereed literature published by reputable journals.

Some people are right to be displeased. The evidence against them is overwhelming and unsettling.

To combine a number of ‘skeptical’ conditions I’ve read over the years, we won’t be able to draw any firm conclusions about global warming until we have several hundred years of data from thermometers accurate to 0.01 °C and evenly spaced on a 1 km² grid across the planet’s surface.

I can’t tell whether those making such arguments are doing so in bad faith or ignorance. Maybe a mix of both.

“The strength of the AGW argument is in the coherence of theory, consilience of *multiple* lines of evidence, and the consensus of experts who interpret both in refereed literature published by reputable journals.”

brandonrgates says: “I’ll only add that while I have respect for the arguments that the GMST record does NOT show a statistically significant trend difference over Teh Paws era, it is visually discernable.”

Make your period short enough and you will find a trend that is not statistically significant. In addition, the statistical tests for whether a series has a significant trend assume a random series, not a cherry-picked period.
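Victor’s point is easy to demonstrate numerically. A hedged sketch with purely illustrative numbers: impose a genuine trend on white noise, then run the standard trend t-test on short windows of it.

```python
import numpy as np

# Illustrative only: a series with a real, built-in trend of 0.02/yr.
rng = np.random.default_rng(42)
years = np.arange(100)
series = 0.02 * years + rng.normal(0, 0.15, years.size)

def trend_t_stat(y):
    """t-statistic of the OLS slope, under a white-noise assumption."""
    x = np.arange(y.size)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    se = np.sqrt(resid.var(ddof=2) / ((x - x.mean()) ** 2).sum())
    return slope / se

# Over the full record the trend is unambiguous (|t| well above 2)...
t_full = trend_t_stat(series)

# ...but many 10-year windows fail the |t| > 2 rule of thumb,
# even though the underlying trend never stopped.
t_short = [trend_t_stat(series[i:i + 10]) for i in range(0, 90, 10)]
insignificant = sum(abs(t) < 2 for t in t_short)
```

The short windows aren’t evidence the trend went away; they are just too noisy to detect it.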

If you start conceding such talking points, you would also need to concede the anti-hiatus of the last 10 years, which is just as much bunk.

Those with long memories or time to search may recall Whitehouse has been banging this drum for a while. From The New Statesman, December 2007 (courtesy of desmogblog.com):

“The fact is that the global temperature of 2007 is statistically the same as 2006 as well as every year since 2001. Global warming has, temporarily or permanently, ceased. Temperatures across the world are not increasing as they should according to the fundamental theory behind global warming – the greenhouse effect. Something else is happening and it is vital that we find out what or else we may spend hundreds of billions of pounds needlessly.” — David Whitehouse

@Brandon – thx. credit and reference noted. … interesting Rabett post … Oh gosh. Popper keeps popping up! I suppose because in its naive form it’s a kind of binary truth detector. Popperianism is a comfortingly simple(listic) explanation of a complex process (science). I have a section marked up in my copy of Kuhn’s ‘The Structure of Scientific Revolutions’ that perfectly refutes this naive form. When commenting on falsification in science (Chap. XII, p.146), Kuhn writes:

“… no theory ever solves all the puzzles with which it is confronted at a given time; nor are the solutions already achieved often perfect. On the contrary, it is just the incompleteness and imperfection of the existing data-theory fit that, at any time, define many of the puzzles that characterize normal science. If any and every failure to fit were ground for theory rejection, all theories ought to be rejected at all times.”

> If you start conceding such talking points, you would also need to concede the anti-hiatus of the last 10 years, which is just as much bunk.

I’ve been following your Tweets along the same lines, and I think it’s an elegant argument, Victor. Even so, one big problem I have with it is that as soon as I begin to talk about statistical significance, somebody is going to ask what significance test I’ve applied.

Your blog post illustrates your point using (I assume) Gaussian noise. I’ve been around this conversation long enough to know the usual rebuttal:

1) GMST is random only in the sense that it’s not reliably predictable on an annual basis because
2) we’re dealing with a non-linear deterministic system which implies
3) some degree of persistence in the GMST signal which calls for
4) a statistical model which takes autoregression into account when testing for trend significance.
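Point 4 in that list can be sketched with the standard AR(1) effective-sample-size correction, n_eff = n(1 − r1)/(1 + r1), where r1 is the lag-1 autocorrelation of the residuals. Synthetic data again; nothing here is a real temperature record.

```python
import numpy as np

# Illustrative series: modest trend plus AR(1) noise with coefficient 0.6.
rng = np.random.default_rng(1)
n = 60
noise = np.empty(n)
noise[0] = rng.normal()
for t in range(1, n):
    noise[t] = 0.6 * noise[t - 1] + rng.normal(0, 0.8)
y = 0.01 * np.arange(n) + noise

x = np.arange(n)
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)

# Lag-1 autocorrelation of the residuals, then the effective sample size.
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
n_eff = n * (1 - r1) / (1 + r1)

# Naive vs AR(1)-adjusted standard error of the slope: the adjusted
# uncertainty is larger, so trend *changes* are harder, not easier, to claim.
sxx = ((x - x.mean()) ** 2).sum()
se_naive = np.sqrt(resid.var(ddof=2) / sxx)
se_adj = se_naive * np.sqrt((n - 2) / max(n_eff - 2, 1))
```

Note the direction of the effect, which matters for the discussion below: accounting for persistence widens the error bars.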

I think that’s a compelling rebuttal. I’m also aware that a number of consensus researchers are not themselves content with the “no significance” argument, and are actively researching (and publishing!) *physical* explanations for the slowdown. I like physical explanations over purely statistical arguments pretty much always.

Thus rather than get into a bunch of statistical concepts with which I’m only passingly fluent when talking Hiatus with most contrarians, it’s a lot simpler (and I think valid) for me to acknowledge the visually apparent GMST slowdown *and* then throw OHC data at them, point to the 1998-present interval and say, “Paws, what Paws?”.

In communicating with the public, the statistical argument basically amounts to telling Mark Twain – damn lies… statistics – that the reason he’s wrong about what he sees in the data, plain as the nose on his face, is statistics. It has largely failed before it starts.

I believe I started the “Paws”. It comes from my days trooping around with my father, who was a large animal veterinarian back in the glory days of large animal veterinary medicine. After an animal dies, it gradually bloats with gas, and the carcass sometimes sort of rolls over on its side or back and the feet stick up in the air. It was a death pose I saw thousands of times as a kid. So when it became clear to me the pause was dying, I would say the pause was going paws up.

brandonrgates says: “some degree of persistence in the GMST signal which calls for
4) a statistical model which takes autoregression into account when testing for trend significance. I think that’s a compelling rebuttal. ”

If you take autocorrelations into account, the uncertainties become even larger. I had almost added them to the post, but it would have made it longer and more complex. The autocorrelations are quite modest for annual average temperatures, so it is not a big deal, and it goes in the wrong direction for people claiming to see a trend change.

brandonrgates says: “I’m also aware that a number of consensus researchers are not themselves content with the “no significance” argument, and are actively researching (and publishing!) *physical* explanations for the slowdown. I like physical explanations over purely statistical arguments pretty much always.”

I am a big fan of studying physical explanations for the fluctuations.

The study of climate variability used to be nearly all of climatology, and it is still one of the largest subfields. If I have one main peeve against “hiatus” claims, it is that all their proponents have is their lying eyes and no physical explanation for what caused it.

Many of these papers are fine, they just use the wrong framing. Unfortunately, if you want to get into Science or Nature you have to give your fine study this terrible framing.

brandonrgates says: “it’s a lot simpler (and I think valid) for me to acknowledge the visually apparent GMST slowdown *and* then throw OHC data at them, point to the 1998-present interval and say, “Paws, what Paws?”.”

Why not just show OHC data?

You could alternatively point the people to the “visually apparent” 10-year global warming explosion and make the case that this urgently needs action. If they do not agree, use the same arguments for the fake “hiatus”.

>The 2015-16 El Nino has been one of the strongest on record, temporarily elevating global temperatures by a significant margin. This means that their case rests on the El Nino temperature increase and will be destroyed when the El Nino subsides….

Since ‘The Pause’ also relies upon the El Nino of 97/98, does it not fall victim to this same issue in reverse? It relies upon an outlier value to define its very existence. If he wants to toss out the 2015/16 El Nino, then it only seems fair to toss out all the El Nino values, or at least smooth them. Poof! Pause gone.

Stephan,
In a sense he should indeed be consistent. However, I think the real issue (as others have been discussing earlier) is that trying to determine trends over relatively short time periods is problematic because of the intrinsic variability. See, for example, Victor’s post about short-term trends being uncertain.

> If you take autocorrelations into account the uncertainties become even larger.

Ahhhh, thanks … now it’s clicked for me. As you allude, that means an AR test will be *less* likely to find a significant trend change. That makes this argument bollocks (my emphasis):

Since CPA is designed to answer the question, “has something changed” (we use it where I work to monitor the defect rate of electronic assemblies as a process control metric), one can forgive the naive application to global temperature undertaken in the aforementioned post. The author’s basic thesis is that since a CPA analysis detects no significant recent change in the slope of the GISS dataset there is no pause. Unfortunately, the analysis is of no value because, as is commonly known, the CPA cannot be used on auto-regressive time series. This can be easily demonstrated. Here’s a random sample of an ARIMA[3,1,1] process (This is not to infer the climate can be modeled as an ARIMA process. CPA fails for any integrative process, a class which in all likelihood the climate falls within.)
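Whatever one makes of that quoted argument, its central claim about integrative processes is easy to reproduce. A pure random walk, the simplest integrated series (ARIMA(0,1,0)), triggers “significant” trends under a naive white-noise OLS test far more often than the nominal 5%, even though it has no deterministic trend by construction. Illustrative simulation:

```python
import numpy as np

rng = np.random.default_rng(7)

def naive_trend_t(y):
    """OLS slope t-statistic assuming white-noise residuals."""
    x = np.arange(y.size)
    slope, b = np.polyfit(x, y, 1)
    resid = y - (slope * x + b)
    se = np.sqrt(resid.var(ddof=2) / ((x - x.mean()) ** 2).sum())
    return slope / se

# 200 random walks of length 100; none contains a deterministic trend.
walks = rng.normal(size=(200, 100)).cumsum(axis=1)
t_vals = np.array([naive_trend_t(w) for w in walks])

# Far more than 5% of the walks exceed |t| = 2: spurious "significance".
spurious_rate = (np.abs(t_vals) > 2).mean()
```

This is why change-point or trend tests calibrated for independent noise can’t simply be pointed at a strongly autocorrelated or integrated series.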

> I am a big fan of studying physical explanations for the fluctuations.

A DISTURBING reality confronts us: the deliberate creation of a double standard, with one set of facts for internal scientific discourse and another for public consumption.

h/t Willard, who informs me, “that double standard exists since Plato, it’s probly older, as communication should be adapted to the audience”.

I would only add that it may be more accurate to think about this as a double bind, not a double standard, and further note that it isn’t only science communicators who perpetuate it. It may not even be *mostly* science communicators who amplify the bind.

> Unfortunately if you want to get into Science or Nature you have to give your fine study this terrible framing.

Thanks for that perspective, I’ll keep it in mind.

> Why not just show OHC data?

If you’re not asking rhetorically, but are testing my understanding I’ll answer in a later post.

VV has nailed it. The only arguments made for the pause/not-pause are statistical, including how it is defined. So the pause is Schrödinger’s pause, alive or dead at any given moment depending on what answer a Climate Wars proponent is pushing. A physical explanation is needed. I have a revised paper in review at Earth System Dynamics that contains one, so we’ll see how that goes.

I am less sanguine than VV about autocorrelation. If the ocean-atmosphere system’s normal behaviour is to maintain a steady state, then any perturbation will take it back to its mean state. The only way that warming can occur is if the atmosphere warms independently (which I think is a physical impossibility, because it requires natural and anthropogenic GHGs to behave differently and means the atmosphere has a heat storage capacity independent of the surface), or the ocean absorbs everything and releases it gradually as the sea surface warms gradually. (I don’t think that is the case either, but at least it’s physically plausible.)

If the latter is the case, autocorrelation will mean that the error probabilities within whatever statistical tests are used become problematic, because of the role of the ocean in messing with serial independence. There is no way these issues can be solved through statistical induction alone. The smarter operators in the contrarian world know this and are gaming a gullible scientific community who continue to argue statistics, thus maintaining the he-said/she-said debate.

Testing needs a combination of theoretic-mechanistic reasoning that proposes scientific hypotheses, matched by statistical-inductive reasoning that proposes statistical testing capable of distinguishing between different hypotheses. If internal variability and external forcing are interacting, then ordinary least squares trend analysis will not cut it. And pauses, hiatuses, regimes or however periods of little surface warming are defined, may be very real, but so would be the abrupt warming that punctuates such periods.

Brandon Gates too often illustrates that the trouble with “the deliberate creation of a double standard, with one set of facts for internal scientific discourse and another for public consumption”
is less hermeneutics than folks like himself citing the latter to cast doubt on the former.

THEREIN LIES the political paradox: what we can perceive, we can endeavor to put right. That scar on the Soviet landscape, the vanishing Aral Sea, bears witness to the deranged power of central planning like the mark of Cain. Yet the diverted rivers that caused it can be swiftly returned to their courses. But the action of the invisible hand of energy economics upon the world is imperceptibly slow.

Bear in mind the beaver. Without benefit of godhood, its mindless industry acting over eons has transformed the Canadian landscape into a wilderness of lakes. Likewise, creating a brave new world with an atmosphere transformed by the total depletion of fossil fuels is a labor of generations yet unborn.

We cannot govern the actions of posterity, but we can teach by our example. We can plant trees and stay the hand of mindless deforestation. We can value the richness of biological diversity and recognize the intellectual poverty of sullen indifference to the majesty of nature. But any pretension to oracular foreknowledge of how, over the next quarter century, the earth will respond to our presence lies in the realm not of science but of intuition.

If only our collective arms could stretch so our invisible hands could grasp all the low-hanging fruits around.

“It would be fair to say that most climate scientists think the ‘hiatus’ exists and is a fascinating phenomenon that deserves study. There have been hundreds of research papers about it and over 30 explanations proffered.” D Rose.
Meatloaf, a la Brandon, same problem in both comments.
“The strength of the AGW argument is in the coherence of theory, consilience of *multiple* lines of evidence, and the consensus of experts who interpret both in refereed literature published by reputable journals.”
–
Victor Venema said January 8, 2017 at 6:15 pm: “Short term trends are highly uncertain.
Make your period short enough and you will find a trend that is not statistically significant. In addition, the statistical tests for whether a series has a significant trend are for random series,”
–
Significance tests could be run on any series, I would have thought.
A significant trend could vanish in periods that are both too short, as you state, and too long, as you are aware.
Defining the period to be studied, the phenomenon to be studied, and what counts as statistically significant for that period is arbitrary anyway and can cause cherry-picking to occur.

Victor talks about the length of the data in his linked post. One comment is:
“With ‘at least 17 years’ Santer does not say that 17 years is enough, only that less is certainly not enough. So also with Santer you can call out people looking at short-term trends longer than 17 years.”
He then states: “Sometimes it is argued that to compute climate trends you need at least 30 years of data, that is not a bad rule of thumb and would avoid a lot of nonsense”,
but it is a rule of thumb only.
30-year periods are ideal for claiming global warming: just long enough to see “significant” trends but too long for anyone to ever claim a “pause”.
So game over. Define a length of time longer than your opposition’s argument and you cannot lose.
Pause, what pause?
But using the same logic one could say we need 100 years, what then of global warming?
By the same implacable logic [see the anti-hiatus of the last 10 years], the trends are now too short and become statistically insignificant.

brandonrgates, I just wanted to ask: why do you need to make any statement on the visual appearance of the temperature curve? That is anyway just subjective opinion. I would suggest ignoring it and going to the OHC.

Roger Jones, even without a statistically significant trend change, someone can say: I want to study this and find a physical reason; this is so important that I do not care that there is a high probability that it is noise. That is high-risk research, and the ones that tried either did not find anything (and did not publish) or found only a minor effect (volcanoes). Statistics shows no need to study it, but everyone is welcome to spend their precious short lifetime on the research they fancy.

VV, I don’t understand what you’re getting at in your comment – could you elaborate? I think your NOAA post is very good, btw.

I’ve been spending a lot of time looking at the philosophy of science and statistics in the past 12 months trying to get our research accepted. The whole area of significance is highly contested and I’m beginning to think that statistical climatology needs a major rethink. Some areas would not change much but other assumptions don’t hold water. If a system is storing energy and releasing it in a process independent of the storage mechanism with a lag effect then ‘significance’ using ordinary least squares analysis is not a reliable guide to the underlying statistics of the system – the appropriate test(s) would need to be identified.

Yes, in the end you have to compute the uncertainty of a trend, like I wrote in the post just after the part you opted to quote.

You can compute the statistical significance of any series, but not all series have a significant trend, including long ones.

A significant trend just means a significant trend: that there is likely something interesting to study and explain physically. It does not automatically mean man-made global warming, as Angech suggests.

If a system is storing energy and releasing it in a process independent of the storage mechanism with a lag effect then ‘significance’ using ordinary least squares analysis is not a reliable guide to the underlying statistics of the system – the appropriate test(s) would need to be identified.

I agree that ordinary least squares (OLS) is not a reliable guide to the underlying processes associated with some system. However, isn’t this really the difference between inferential and descriptive statistics? OLS can answer simple questions like “has it been warming” and “how much has it warmed” and “what are the uncertainties in our estimates”. It can’t, however, say much about why it’s warming. That would seem to require some understanding of the system being studied – i.e., some understanding of the underlying physical processes.
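Those three descriptive questions can each be answered in a few lines of OLS. A sketch on a synthetic series (the numbers are illustrative, not real temperature data):

```python
import numpy as np

# Illustrative warming series: 0.018/yr trend plus white noise.
rng = np.random.default_rng(3)
years = np.arange(40)
temps = 0.018 * years + rng.normal(0, 0.1, years.size)

slope, intercept = np.polyfit(years, temps, 1)
resid = temps - (slope * years + intercept)
se = np.sqrt(resid.var(ddof=2) / ((years - years.mean()) ** 2).sum())

warmed = slope > 2 * se                  # "has it been warming?"
total = slope * (years.size - 1)         # "how much has it warmed?"
ci = 2 * se * (years.size - 1)           # rough 95% band on that amount
```

None of this says anything about *why* it warmed; that, as noted above, needs physics rather than curve fitting.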

If there is one topic that sends a small subset of climate scientists’ temperature into the stratosphere, it’s the topic of the global warming ‘pause’ or ‘hiatus’. This is the idea that global surface temperatures haven’t changed much for almost 20 years.

[…]

But around 2007 it began to be noticed by so-called sceptics (usually scientists from other fields) that for a few years, global temperatures had not gone up.

[…]

When taken into account, this new effect makes the oceans warmer in recent years and so obliterates the ‘hiatus’.

[…]

Well, not quite. The 2015-16 El Nino has been one of the strongest on record, temporarily elevating global temperatures by a significant margin.

==>

Another lesson in how being a climate change “skeptic” means always being right.

So if you conflate “global warming” and “global surface temperatures,” and pick and choose between those phenomena even as you talk of OHC, you can selectively point to discrete measurements of surface warming, use them as the measure of “global warming”, and thereby dismiss OHC as a meaningful measurement for evaluating global warming as opposed to surface warming.

My point was that some “skeptics” frequently reference surface temps to describe a “pause in global warming,” as if describing a trend in global warming (or a lack thereof) wouldn’t, necessarily, include measurements of OHCs. That is what Judith did in her Congressional testimony (where she compounded her irresponsible advocacy by ignoring uncertainty about OHCs to protest that “denying” a “pause in global warming” was ignoring uncertainty).

That is what I see David doing in the linked article, where he alternately speaks about surface warming, global warming, and ocean warming without any consistency.

Again, he says:

==> But around 2007 it began to be noticed by so-called sceptics (usually scientists from other fields) that for a few years, global temperatures had not gone up. ==>

Those “skeptics” were not “noticing” OHC changes, but were “noticing” a pattern in surface temps (only).

And then here, he says:

==> When taken into account, this new effect makes the oceans warmer in recent years and so obliterates the ‘hiatus’.

Well, not quite. The 2015-16 El Nino has been one of the strongest on record, temporarily elevating global temperatures by a significant margin. ==>

Where he links to an article about El Nino that describes surface temps so he could say “not quite” in response to an observation about warming oceans.

So maybe it’s me that’s confused (wouldn’t be the first time, for sure), but as I see it, David is using inconsistent terminology. As far as I am concerned, anyone who wants to talk about a “pause,” should be specific – they should make it clear that the “pause” (to the extent that there is/was one) is a short-term slowdown in a longer-term increase in surface temperatures only.

> why do you need to make any statement on the visual appearance of the temperature curve. That is anyway just subjective opinion.

Because The Hiatus is discussed explicitly in AR5 so it’s a subjective opinion which carries weight with me, and my interlocutors tend to know both things. I also try to find ways to agree with people when I can, even if it’s something trivial or subjective.

> I would suggest to ignore it and go the OHC.

I’m sure I’ve tried that before. I can’t recall specifically, but probably the reason I don’t ignore it is that skipping straight to OHC got me accused of being a Paws Den!er, chasing squirrels, or some other silly thing like that.

I generally find it doesn’t work well *for me* to gloss over an opponent’s main argument, even if it is ridiculous. 🙂

This should not be a partisan issue. It is good business and good economics to lead a technological revolution and define market trends. And it is smart planning to set long-term emission-reduction targets and give American companies, entrepreneurs, and investors certainty so they can invest and manufacture the emission-reducing technologies that we can use domestically and export to the rest of the world.

I might have written the first sentence differently: This can be and needs to be a bipartisan issue.

I think it’s good and important to seek out and find positive messages and appeal to things upon which political conservatives can agree.

> If internal variability and external forcing are interacting, then ordinary least squares trend analysis will not cut it.

Of course, but I don’t think OLS was ever meant to be the ultimate reality cutter. Failing OLS testing implies less than passing it. This is why Da Paws peddlers rely on double negatives and other ploys for their plausible denial.

I have a question about that interaction between internal variability and external forcing, RogerJ. Is this interaction symmetric? I can understand that external forcing could influence internal variability, but I’m not sure how the converse would work. My intuition is that if we blend internal variability and external forcing, then the classical way to look at the attribution problem could be very conservative.

ATTP, I would think a bet that the next La Nina will be the warmest La Nina on record is pretty safe, and any bet that the trend in average temperatures for La Nina years will continue to head upwards would be a shoo-in.

I still think the single most informative single image in the whole debate about climate change is the one created by John Nielsen-Gammon, which separates out the trends in La Nina, neutral and El Nino years, which is featured here, as it seems the link to the original has expired.

Willard says:
“a question about that interaction between internal variability and external forcing. Is this interaction symmetric? I can understand that external forcing could influence internal variability, but I’m not sure how the converse would work.”
–
Symmetric is not the best choice of word, perhaps.
Equivalent might be better.
A bit like those snowflake Christmas domes people shake.
The amount of reaction in the system is equivalent, not symmetrical, to what is put in.
Due to the seeming chaos [natural variability], where the snowflakes end up, or in the real world where the clouds form, will affect how future forcings act.
Natural variability is totally dependent on the external forcing and feeds back on it, but is beyond full prediction; otherwise there would be no variability.
–

The problem with the paper, which seems to be a reaction to the pause that doesn’t exist, is that it proves, for the period it considered, that there was no pause.
Not for the period in which the pause actually existed, when it wasn’t existing, except in Brandon’s fevered optics and the imaginations of the people listed below.
“The NOAA group’s updated estimate of warming formed the basis of high profile paper in Science (Karl et al. 2015), which joined a growing chorus of papers (see also Cowtan and Way, 2014; Cahill et al. 2015; Foster and Rahmstorf 2016) pushing back on the idea that there had been a “pause” in warming”.
If only they had been told that 18 years is too short to see a pause, they could have saved a lot of time and effort.
On the other hand, one could always reverse-engineer it and say that the studies do not mean anything [warming] because their time span is also too short.
Warning: a pause, or spurious pause, can only be defined from a recent time going backwards where there is no trend.
Showing that a pause no longer exists does not make it non-existent in the records.
Changing the records does not make a pause not exist in the original records, even if it is too short to be statistically significant.
Changing the records to achieve a result is not “disingenuously leaving out crucial information to deceive gullible readers.”

“If candor prevails, climate professionals will realize once again that laymen too can recognize cant when they hear it and cartoons when they see them. Scientists would do well to recall that insight’s inevitable corollary – the neutrality of scientific institutions must first exist if it is to be respected.”

The reason that David Whitehouse can revive the pause zombie is probably best explained by the guest post from L Hamilton:
post-factual-perceptions-of-weather
Memory is dependent on belief and just 6 months is sufficient to remodel the past to conform to preference.

So by this July, the fact that it is colder than last July will weigh more than the fact that it is warmer than the July 1999 post-El Nino dip.

In fact, as mentioned up-thread, it is likely it will be warmer than the 1998 peak. Anyone over the age of puberty is never going to experience a year as cold as the year of their birth. That is probably a first in the last ~6000 years of human civilisation. But memory seems to follow an inverse power law: the further back, the less force it exerts.
M=1/T^4 ?

Yes, but the error probabilities change; if you’re arguing the 5%/95% split, autocorrelation and lag effects blow that out of the water quickly. That gets back to the significance argument – whether a result meets certain criteria or not. Whether a test is being used inferentially or descriptively makes no difference – the error probabilities are the same.

There’s no doubt now that ocean heat content (OHC) shows the world is warming. When a signal was first emerging (late 80s), there were no reliable OHC measurements. The debate at that stage was focused on surface warming and its likelihood. Least squares was used at the 5% threshold. Using the same argument now makes no sense.

Yes, I mostly agree. The OHC is a much better source for assessing AGW than surface temperatures and is one reason – I think – why many dislike the “pause” narrative because it confuses this issue (the surface could undergo a period of slow warming while the OHC rises rapidly, for example). However, we do live on the surface and most of the projections are based on what the surface temperatures will do, so there is still merit in considering how the surface is warming even if the OHC is a better metric for understanding the overall process.

There could be a certain retired professor – who is highly political – who might argue that OHC can’t do anything bad to surface dwellers. He even helped write a paper about stadium waves, full of waving it all away… like magic spoon bending.

“I have a question about that interaction about internal variability and external forcing, RogerJ. Is this interaction symmetric? I can understand that external forcing could influence internal variability, but I’m not sure how the converse would work. My intuition is that if we blend internal variability and external forcing, then the classical way to look at the attribution problem could be very conservative.”

All complex systems have one direction, so are not symmetric; they are governed by entropy. The classical view is conservative. This is the problem with climate warriors who work in the classical mode – they are promoting the understatement of risk just to win a stupid argument that should be debated on different grounds.

I published a paper in 2012 that did nonlinear attribution for south-eastern Australia and we looked at the economic impacts of that nonlinearity in a 2013 report. The widespread impacts in 2015-16, especially in coastal and marine ecosystems mark another shift to a warmer regime. If that regime is influencing current polar temperatures (i.e., those warmer temps are likely to be sustained), we’re in trouble.

I have seen that some climate scientists are mentioning that there is a pattern in this blue curve with periods of warming followed by periods of cooling or little variation. Each period seems to be around 30 years. However, before it’s possible to see that such a period has changed to a new period of the other kind in the blue curve, more than ten years must have passed.

There is a bend at the end of the reliable part of the blue curve (around 2005) but many more years are needed before it can be seen how this will develop. On the other hand this bend is about thirty years from the previous change, so if changes come after about 30 years it is time for a change.

As can be seen from other parts of the blue curve, within a period with little variation of the temperature there may be hills; perhaps such hills have something to do with El Ninos. It may also be that in a new period of thirty years with diminished variation, there will still be an upward trend, but slower than before the change, because the radiative forcing is changing at a faster rate now than it was more than sixty years ago.

Does this mean that any hypothesis that there are changes between warming and cooling/standstill/slowdown periods every thirty years has been rejected by climate science? Could you elaborate on that?

As I said, I have seen this before from climate scientists, but I did not connect it with what you call the stadium wave, and I cannot remember the details. A name coming to mind is Professor Tsonis; he and a coauthor published a hypothesis of a similar kind. The concept of the hiatus was proposed by Kevin Trenberth, as I remember, but I am not sure.

Given a choice between strong theory and questionable statistics the wise bunny always goes with the theory.

The problem with social science is that there is no good theory. The problem with climate and medical science is that the theory is newly won and the fossils are still around. Eli saw this happen in chemistry in the 1970s.

I’ve been looking into the question of relative strength of the 2016 El Niño. Those ranking it alongside the 1998 event seem to be solely referencing Nino3.4 SST variability. The problem with that metric is contamination by the underlying long-term trend in SSTs. I believe NOAA try to correct for this by comparing to the recent decadal average or some multi-year average. The problem with that approach for the 2016 El Niño is that the prior decade featured unusually persistent La Niña behaviour, hence the comparison would perhaps artificially enhance the apparent strength.

An alternative approach of detrending using the CMIP5 mean for Nino3.4 region SSTs indicates that the 1983 and 1998 events are out on their own in a class of “Super El Niños”, whereas 2016 was merely a very strong event, ranking alongside 1973, 1988, 1992.

That conclusion is supported by looking at the Nino1+2 record, which has larger high frequency variability, so less need for detrending. The SOI record indicates the same.

The RSSv3.3 and UAH TLT datasets found a new record in the global annual average for 2016. However, 2016 was not a new record in either for the tropics, where the largest El Niño effect would be expected. The RSS TTT v4 dataset does find a new annual average tropics record in 2016, though by about half the margin of the global average record. The 2016 peak in the tropics was also still just below the 1998 peak, which would tend to indicate that the record was mostly caused by trend rather than fluctuation.

So, I think there’s a consilience of evidence that the 2016 El Niño was a very strong event, but not at the same level as 1983 or 1998.
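For concreteness, the detrending approach described above can be sketched in a few lines. Everything here is illustrative: the series, the event sizes and the stand-in for a "CMIP5 mean" are invented, not the real datasets.

```python
import numpy as np

def detrend_with_model_mean(nino34_obs, cmip5_mean):
    """Remove an externally forced baseline (e.g. a multi-model mean
    for the Nino3.4 region) from an observed SST series, leaving the
    internal ENSO variability to be ranked."""
    return np.asarray(nino34_obs) - np.asarray(cmip5_mean)

# Toy example: a linear warming trend plus two "events" of similar
# absolute SST; after detrending, the earlier event is the larger anomaly.
years = np.arange(1980, 2020)
forced = 0.02 * (years - 1980)   # hypothetical forced trend (deg C)
obs = forced.copy()
obs[3] += 2.3                    # a 1983-like event
obs[36] += 2.0                   # a 2016-like event
anom = detrend_with_model_mean(obs, forced)
print(anom[3] > anom[36])        # True: the 1983-like event ranks higher
```

The point of the sketch is only that ranking raw SSTs conflates trend and fluctuation, while ranking detrended anomalies compares like with like.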

Roger Jones says: “VV, I don’t understand what you’re getting at in your comment – could you elaborate?”

Only noticed your response now. Maybe my response to yours was only loosely related to your comment. I guess what I wanted to say is: statistics is just a first step on the path to a better understanding. Even if there were statistical evidence for a change in the long-term trend, I would only start talking about global warming stopping or slowing down when we would understand why. I hope we will one day find that global warming stopped due to a reduction in CO2 emissions.

izen says: “Apart from the rise in sea level.”

I guess you know, but for innocent readers: The sea level graph by itself is not sufficient evidence of an OHC increase, there are multiple contributions to sea level rise, glacier melt, ice sheet melt, ground water depletion. But you are right that after accounting for these other contributions, there is something left and the ocean heat content must have increased.

JCH, yes, fluctuations exist; that does not mean that there was a decrease in long-term warming due to global warming. Studying the natural variability is very important. There is a famine in East Africa at the moment due to El Nino. If this climate mode changes, that is important for agriculture there. Predicting El Ninos is thus important so that people can prepare: ship food there in time, grow drought-resistant crops that year. Similarly for other climate modes. The Climate Variability group CLIVAR, which studies these kinds of things, is the largest group in the World Climate Research Program. There is currently much research on making decadal climate predictions to help people adapt. It is a very important topic, but not a reason for a wrong claim that a change in the long-term trend due to global warming can be seen.

paulski0, the observed temperature increase is in the lower half of the projected increase. Thus using the mean of the projections as background is also not fully fair. You could maybe correct the temperature curve for La Ninas (and other known causes of short-term variability). Or you could have a look at ENSO indices that are based on pressure, wind or precipitation. In the end every El Nino is unique and a ranking will be somewhat subjective.

There was also a higher degree of uncertainty about the amount, rate and start date of sea level rise in the 80s; the graph is a product of improved hindsight, I think. I did briefly consider calculating how much more energy is required to raise sea levels by 1 mm due to expansion compared to a 1 mm rise due to melting land ice, but I think(?) it is more than an order of magnitude so did not bother…
The data may not have been as reliable, but even in the 80s there was sufficient information to convince most that AGW was real and potentially disruptive.
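For what it’s worth, that back-of-envelope is quick to do. All the constants below are round textbook values (the thermal expansion coefficient in particular varies strongly with temperature and depth), so treat the ratio as order-of-magnitude only.

```python
# Rough energy needed to raise global sea level by 1 mm, two ways.
OCEAN_AREA = 3.6e14        # m^2, global ocean surface area
DV = OCEAN_AREA * 1e-3     # m^3 of extra volume for a 1 mm rise

# (a) Thermal expansion: dV = alpha * V * dT and E = rho * cp * V * dT,
#     so E = rho * cp * dV / alpha (independent of the total volume V).
rho, cp, alpha = 1030.0, 4000.0, 2e-4   # seawater; alpha is a rough mid-ocean value
E_expansion = rho * cp * DV / alpha      # ~7e21 J

# (b) Melting land ice: latent heat to melt the equivalent water mass.
L_fusion = 3.34e5                        # J/kg
E_melt = 1000.0 * DV * L_fusion          # ~1e20 J

print(E_expansion / E_melt)
```

With these round numbers the ratio comes out around a factor of 60, i.e. between one and two orders of magnitude.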

But the bottom line has always been: is the physics correct – are we gaining energy at the surface because of increased CO2?
To a first approximation all the energy goes into the oceans. With a very large thermal capacity, many joules can accumulate without much temperature change. Little heat is stored in the atmosphere; the movement of energy from ocean to atmosphere generates larger temperature changes there because of its smaller thermal capacity. What we want to know is how fast, and how much, energy in joules is accumulating. Looking at the part of the system with the largest variation – the lowest thermal capacity – has the advantages that the changes are large and that it is where we live, but the drawback of being most susceptible to internal variation.

The enthusiasm for looking for waves, cycles and quasi-periodic oscillations in the most variable data – satellite, local land surface and ice cores – and then deriving conclusions about the amount or rate of energy accumulation seems misplaced. If a medic wants to detect a fever, it is best to measure the bulk core temperature, not the variation in the extremities. This can involve sticking the thermometer… where the sun don’t shine?

What some of that analysis does seem to show is that any detection of PDO/AMO stadium waves in the past data is unlikely to indicate future behaviour because the warming trend has overwhelmed the unforced patterns of internal variation that can be discerned in historical data.

Victor Venema says:
izen says: “Apart from the rise in sea level.”
you know, but for innocent readers: The sea level graph by itself is not sufficient evidence of an OHC increase, there are multiple contributions to sea level rise, glacier melt, ice sheet melt, ground water depletion.
izen says:
“Yes, I’m not that innocent;(grin)” concerning
“When a signal was first emerging (late 80s), there were no reliable OHC measurements. ” “Apart from the rise in sea level.”
Not to mention using a graph that stopped in 2000.
This is the point about genuine discussion, instead of trickery or alarmism.
You have a good argument, and you have up-to-date graphs that confirm your argument, so why try for cheap points on innocents – as in claiming that sea level is reliably related to OHC when there are a number of other causes which make it less than reliable?

On OHC and its usefulness we have the three problems of reliability, reproducibility and measurability.
We all agree that if we knew it accurately it would be the best measure of energy retained in our system.
But the “Yes, I’m not that innocent (grin)” concerning OHC applies to most of the comments above.
If it is so good, and it is, why is it not used? Why is the ship not floating?
Measurability is the key.
There are two main ways to measure: satellite measurement of the sea surface [satellite systems cannot penetrate the oceans in ways that measure temperature at depth], combined with Argo or other submerged thermometers and volume estimates, with correlation against ship measurements to help refine the surface data.
Problems here, as Victor might say, are the limited number of Argo devices available and in working order, not to mention that the temperature of water in the upper ocean can vary by as much as 2 °C in minutes due to currents. Because the OHC change is measured in thousandths of a degree C, the error margin is so great as to make this measure almost worthless.
10^23 joules equals 0.04 °C?
Convincing people of a change of 0.01 °C in a year is a very hard argument to sell, even when you are certain you are correct.
–
The other was ocean acoustic tomography, a technique used to measure temperatures and currents over large regions of the ocean, stopped due to concerns about damage to sea creatures. The data from this study did not support the view that the OHC was increasing in the areas where it was used.
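Angech’s “10^23 J ≈ 0.04 °C” figure above is easy to sanity-check with round numbers, assuming the heat is spread over the 0-2000 m layer (the layer choice, area and specific heat below are all rough assumptions).

```python
# Sanity check of "1e23 J ~ 0.04 C": warming of the 0-2000 m ocean layer.
# Round numbers; the real ocean area shrinks with depth, so this slightly
# overestimates the layer's mass and hence underestimates the warming.
AREA = 3.6e14          # m^2, global ocean surface area
DEPTH = 2000.0         # m, layer considered
RHO = 1030.0           # kg/m^3, seawater density
CP = 4000.0            # J/(kg K), seawater specific heat

mass = AREA * DEPTH * RHO
dT = 1e23 / (mass * CP)
print(round(dT, 3))    # ~0.03 C, the same ballpark as the 0.04 C quoted
```

So the quoted conversion is about right for the 0-2000 m layer; a shallower layer would give a proportionally larger temperature change for the same energy.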

JCH says:
“Data derived from tide-gauge stations throughout the world indicate that the mean sea level rose by about 12 centimeters in the past century.”
Using tide gauge stations – i.e. inconsistent, unstable starting points which move up or down depending on crustal shift, earthquakes and glacial isostatic rebound – is an interesting argument.
At 0.3 mm a year you have to take 3 cm off that figure, which gives you 9 cm.
As Victor said, there are multiple contributions to sea level rise: glacier melt, ice sheet melt, ground water depletion. Strangely he did not mention the variable GSAT increase, preferring the much higher retained energy of the OHC as the cause.
“The sea level change has a high correlation with the trend of global surface air temperature.”
So there should be a discernible yearly pattern, under the noise, due to the closeness of the sun in the European summer. Interesting.
BTW, we are always heading for an El Nino or La Nina. Seeing that it is a constrained, semi-random walk crossing the baseline, the odds are always that it will head in the other direction from where it is, so I agree the possibility of an El Nino has increased; however, being down there already gives it an edge on going further down still [less to go].

Problems here, as Victor might say, are the limited number of Argo devices available and in working order, not to mention that the temperature of water in the upper ocean can vary by as much as 2 °C in minutes due to currents. Because the OHC change is measured in thousandths of a degree C, the error margin is so great as to make this measure almost worthless.

Angech, are you familiar with the Law of Large Numbers? It’s quite relevant here. It explains how, though the error margins of individual measurements may be large, the error margin of the global average is much smaller.
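A minimal simulation makes the point. The numbers below are invented for illustration (a few thousand "floats" in the spirit of the Argo array, with ~2 °C of noise per reading): the average is pinned down to a few hundredths of a degree even though no single reading is.

```python
import random

random.seed(42)
true_temp = 10.0
n_floats = 3800                  # roughly Argo-scale, purely illustrative
# Each "float" reads the true temperature plus ~2 C of noise
# (currents, turbulence, internal waves).
readings = [true_temp + random.gauss(0.0, 2.0) for _ in range(n_floats)]
mean = sum(readings) / n_floats

# Standard error of the mean: sigma / sqrt(N) ~ 2 / sqrt(3800) ~ 0.03 C,
# i.e. nearly two orders of magnitude tighter than any single reading.
print(abs(mean - true_temp))
```

The individual 2 °C noise does not go away; it averages out, which is exactly what the Law of Large Numbers says.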

The original claim I was responding to was that there was no reliable OHC measurement before the late 80s.
That is why I used a graph of the pre-satellite sea level rise.
However, as VV points out, the rise in sea level is not only due to a rise in energy content. In the late 80s the uncertainty around sea level rise was larger than it is now.

In the late 80s there was enough indication of a rising OHC to confirm to mainstream science that AGW was real. To still be quibbling about such evidence after another couple of decades, with the measurements increasing in accuracy and magnitude and further confirming the conclusion, seems perverse.

the observed temperature increase is in the lower half of the projected increase.

I suspect that conclusion is baseline-dependent. This chart attempts a like-for-like comparison between HadSST3 and CMIP5mean over the full historical record. Both are clipped to 50S – 50N in order to mitigate against coverage and sea ice issues. There is no clear divergence over time from the 1861-1910 baseline and the linear trends are basically identical.

It’s fair to say that the trend of the past ~30 years is below the CMIP5 mean, though it would be somewhat circular to take this into account in detrending since the difference will have been influenced by the same internal variability we’re trying to identify.

Of course, there’s no clear reason why the CMIP5 mean would represent the “true” forced response but it does seem the most reasonable start point for detrending. I’ve also tested sensitivity to scaling CMIP5 mean by 0.8 and it doesn’t alter the conclusion.

The pressure-based SOI record also puts 2015/16 at a lower level than 1983 and 1998.

It’s possible there are simply different types of El Nino and it’s difficult to judge their relative impacts, but all indicators I’ve seen point to it being a smaller event than 1983 and 1998, which is not saying much because those seem to be outliers on at least a centennial timescale.

I made the mistake of not copyrighting my coining of Mr. Uncertain T. Monster, and Judith has been exploiting that lapse ever since…(in her self-sacrificing endeavors to promote true science in the private sector – despite her personal cost for doing so. Of course, the fact that her work in academia put her in the position to make money in the private sector is just a coincidence).

Angech: “Convincing people of a change of 0.01 C in a year is a very hard argument to sell when certain you are correct.”

There is at least a group of people in the USA who will claim not to buy the argument. It is much easier to measure the temperature of the ocean than the temperature of the air over land. Water is a good conductor, thus the temperature of the thermometer is the temperature of the water. Over land it is easy for the temperature of the thermometer to be different because air is a good insulator. The temperature of the water is a very smooth field; over land, changing the position of the thermometer or its screen can have large effects, while in the ocean it does not.

It is a beautiful feat of engineering, but I have no trouble believing dedicated scientists can do this.

Victor: “the observed temperature increase is in the lower half of the projected increase.”

paulski0 says: “I suspect that conclusion is baseline-dependent. This chart attempts a like-for-like comparison between HadSST3 and CMIP5mean over the full historical record. ”

Yes, making a fair comparison is hard given how small the deviations are. One of the sources of uncertainty is the selection of the baseline. Others are the selection of the area and the dataset.

But even in your graph only the last value, I presume 2016, is above the model mean. Next year will likely be a bit cooler again and below the model mean. Thus I believe it is fair to say that we are currently in the lower half; 2016 is just a one-time peak above.

That is fine. Reality is just one realisation; that one realisation could happen to have been above, below, or, like in your example, sometimes above for long periods and then below for long periods. It should just be within the model uncertainties (which are larger than the model spread), and it is.

But even in your graph only the last value, I presume 2016, is above the model mean.

Both 2015 and 2016 are above the model mean – they’re about the same level. Sure, 2017 will very likely be lower than the model mean but isn’t that what we would expect if it is an accurate representation of the forced response – El Nino above, La Nina below?

The preceding years (say 2007-2013) are persistently below the model mean, but again we have good reason to think that was a period of strongly negative decadal variability and so would expect those years to be below trend.

Time will tell, to some extent at least. In any case, a lower forced response doesn’t appear to alter the conclusion that the 1983 and 1998 El Nino events were a level above the 2016 one.

> Symmetric not the best choice of word perhaps. Equivalent might be better.

An equivalence is symmetric, reflexive, and transitive. Reflexivity adds nothing and transitivity may be too strong.

***

> All complex systems have one direction, so are not symmetric, they are governed by entropy. The classical view is conservative. This is the problem with climate warriors who work in the classical mode – they are promoting the understatement of risk just to win a stupid argument that should be debated on different grounds.

I’m not talking about the symmetry between the inputs and the outputs of the atmospheric system, RogerJ, but about the interaction between the two sets of processes we use to settle questions of attribution. Let’s call them A (for anthro) and N (for natural). If the A processes influence the N processes, then we ought to subtract that influence from our attribution. OTOH, if the N processes also influence the A’s, then we need to balance both.

Examples to illustrate the direction from A to N are easy to find. Take Russell’s Canadian beaver, who helps renew forests. We know that controlling the beaver population influences our forests. I don’t think we should go as far as saying that forests are anthropocentric sinks. Transitivity is problematic from an ecological view.

The other direction is less intuitive to me. I suppose we could say that the beaver population influences our own control of it, like we could say that warming of the Arctic influences our drilling habits. The unintuitiveness may come from the fact that N processes lack the agency humans have. I doubt it’s just that: suggesting that some A process may influence big swings like (say) ENSO also looks farfetched. Perhaps we’re just stuck in man-nature metaphysics or something. The alternative may not be more palatable: systems theory still doesn’t sell very well.

So I don’t think the classical view tries just to win a stupid argument. It is simple and informative. Abstracting away the interaction between A and N processes makes sense, insofar as we don’t try to solve everything with attribution. As far as I am concerned, it’s not the silver bullet to beat contrarians. (It’s unnecessary and perhaps impossible.) Nor is it the One Single Proof we need, like contrarians sometimes portray attribution.

Victor Venema (@VariabilityBlog) says:
” It is much easier to measure the temperature of the ocean than the temperature or the air over land. Water is a good conductor, thus the temperature of the thermometer is the temperature of the water. Over land it is easy for the temperature of the thermometer to be different because air is a good isolator.”
?
Not sure what you mean.
A thermometer heats to the temperature of the medium it is in. It measures the heat of the medium it is in. If the medium has energy, this must transfer to the thermometer. The conductivity might slow or hasten changes in heat moving through the medium but will not affect the temperature being measured at the measuring site itself.
Offhand I would say the temperature of the thermometer is always at, or very close to and trying to reach, the temperature of the medium it is in.
Perhaps you meant changes in temperature.
“The temperature of the water is a very smooth field, over land changing the position of the thermometer or its screen can have large effects, in the ocean it does not”.
The Wikipedia article on Ocean Acoustic Tomography states that measurements by thermometers (i.e., moored thermistors or Argo drifting floats) have to contend with this 1-2 °C noise, so that large numbers of instruments are required to obtain an accurate measure of average temperature, and that the ubiquitous small-scale turbulent and internal-wave features of the ocean usually dominate the signals in measurements at single points.
The fact is that currents in water have a much greater effect on the rate of change in temperature.
I think this contrasts with your claims.
Windchaser says: January 12, 2017 at 5:14 am
“Angech, are you familiar with the Law of Large Numbers?”
“Yes, I’m not that innocent (grin)”. You and ATTP are right for large numbers. I would contend that ARGO in particular has far too few floats to give meaningful results. It is not suitable for a Law of Large Numbers application.

Why surface temperatures and not ocean heat content as the most significant measurement? I suspect that’s because they were the most comprehensive and long running data set within which people could look for and identify climatic changes – the measurement that is also the most familiar and directly perceived by people. If there was a deliberate choice to make surface temperatures the main reference measure I’m not aware of it although I think the argument has been made that it is more appropriate than OHC for climate policy development (realclimate if I recall correctly had a thread on this). I suspect that for communication purposes OHC has a valuable role – by more directly measuring the physical evidence of ongoing warming than surface temperature and with shorter term and less “ambiguous” variability.

As Foster and Rahmstorf and others have shown, a significant amount of the variability (in surface temperatures) can be attributed to specific climate system processes – ENSO being the biggest. I recall asking Tamino how much of the variability that was left might be similarly attributed to specific processes and adjusted for – and would that leave the AGW trend standing naked and undisguised? Tamino suggested (IIRC) that was for more qualified people to answer. It’s worth asking again, more widely.

One point that emerged was that there is going to be a residue of random variability. I would expect that every bit of variability has physical processes underpinning it, but some may well be too chaotic to allow predictability, or they average out over periods short enough to make their effects effectively moot for the practical purpose of understanding human contributions to climate change. So (realising it’s undoubtedly not straightforward or without its issues) can we make more use of this kind of approach – quantifying known influences on internal variability in order to subtract them from the seemingly chaotic variability, to reveal the underlying trend? What other climate-relevant processes besides ENSO, solar intensity and volcanic aerosols might be included in such an approach?

I think that in truth we have more than enough sound reasons to take the climate problem seriously and, valuable as it is to refine our understanding of how the heat balance changes will play out at smaller and shorter scales, it seems we are still stuck arguing, with people determined not to accept it, that it is a problem at all. In that respect the value of pinning down cause and effect at shorter and geographically smaller scales is more about communicating with confidence and effectiveness that there is real and demonstrable cause for alarm than about providing any kind of fundamental scientific evidence that the problem is real.

angech writes: “Offhand I would say the temperature of the thermometer is always at or very close to and trying to reach the temperature of the medium it is in.”

Thermometers try? No, I don’t think so.

Every thermometer has a response time which we can gauge by how quickly/slowly it responds to changes. That response time is also dependent on the heat capacity and thermal conductivity of the medium it is in – as VV has already said.

That your ‘offhand’ analysis disagrees merely indicates that you are thinking incorrectly. I doubt that many here are surprised by that. I am surprised that after all these years of being consistently wrong you actually believe your opinion on *anything* is worth posting. That may sound harsh, but it’s more a serious question on my part of how people can continue without any apparent self-reflection or realistic judgement of their own skill, knowledge, or lack of same.

oneillsinwisconsin says: ” I am surprised that after all these years of being consistently wrong”
“Every thermometer has a response time which we can gauge by how quickly/slowly it responds to changes.” Good. That’s the thermometer.
“Response time is also dependent on the heat capacity and thermal conductivity of the medium it is in.”
Why?
Isn’t the whole point of a thermometer that it measures heat whatever medium it is in, regardless of its heat capacity and thermal conductivity? You may be right but it is hard to see how. Convince me.

Isn’t the whole point of a thermometer that it measures heat whatever medium it is in, regardless of its heat capacity and thermal conductivity? You may be right but it is hard to see how.

The thermometer measures nothing other than the temperature of itself. So you need to consider a heat balance on the thermometer to understand what that means.

In air, there is a radiative heat gain (a large one in the sunshine) or loss. In any fluid, the rate of heat transfer to the thermometer will depend on the fluid dynamics and heat transfer properties. In formal terms, the heat transfer, characterised by the Nusselt number, is a function of the flow (Reynolds number) and heat transfer properties (Prandtl number) of the fluid under forced convection. The Prandtl number is a function of viscosity, thermal conductivity and heat capacity.

In practical terms, this means that heat transfer is rapid and unconfounded by radiation in water.

In air, heat transfer is slow, and radiation may be significant. That results in both potentially significant lags *and* the need to account for radiation (eg with screens).


Convince me.

Anyone foolish enough to try to convince a climate “sceptic” of anything is doomed to disappointment.
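Nick’s heat-balance argument above can be illustrated with a first-order lag model, T(t) = T_env + (T0 − T_env)·exp(−t/τ). The time constants below are invented purely to contrast fast coupling in water with slow coupling in air; a real sensor’s τ depends on the Nusselt-number physics he describes.

```python
import math

def sensor_reading(t, t_env, t0, tau):
    """First-order lag: the sensor relaxes toward the medium's
    temperature with time constant tau (seconds)."""
    return t_env + (t0 - t_env) * math.exp(-t / tau)

# Step change in the medium from 15 C to 20 C; sensor reading after 60 s.
in_water = sensor_reading(60.0, 20.0, 15.0, tau=5.0)    # fast heat transfer (illustrative tau)
in_air = sensor_reading(60.0, 20.0, 15.0, tau=120.0)    # slow heat transfer (illustrative tau)
print(round(in_water, 2), round(in_air, 2))             # 20.0 16.97
```

The same instrument has effectively equilibrated in water but is still more than 3 °C away from the true value in air, which is why lags (and radiation screens) matter for air measurements.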

On temperature in the ocean and on land: I think you’re missing something when you say that temperature in the ocean is easier to measure than temperature on land. I have in the past had to adjust sea surface temperatures and atmospheric surface temperatures to create homogeneous time series. I know which I would rather.

The atmosphere is well mixed and the Stevenson screen has been shown by a wide range of experiments to be pretty close to the actual temperature at 1.5 m (Neville Nicholls has done a great deal of work on this). Night and day processes combined with this mixing mean that air temperature has a wide regional homogeneity when climate is measured. This is critical. Mixing and spatial homogeneity are the key to reliability.

SST is all over the place, because it is at the mercy of shortwave radiation, mixing (including tides and winds) and upwelling, and has a memory due to high rates of storage. Pulses of warm/cool water being transported into a location mean that artificial and natural inhomogeneities cannot easily be told apart. The physicality of the ocean means that sensors are being washed out all the time. Ships of opportunity have their own problems because they provide spot measurements and are difficult to standardise.

The historical atmospheric record is way more reliable than the historical SST record for these reasons. Perhaps you are speaking of the future when the perfect measurement system comes in. But I don’t think so.

I get really distressed when I see comments from people with a lot of skill and experience that suggest they aren’t working from the basic knowledge set that they should be.

Roger Jones says: “temperature in the ocean and the land. I think you’re missing something when you say that temperature in the ocean is easier to measure than temperature on land. I have in the past had to adjust sea surface temperatures and atmospheric surface temperatures to create homogeneous time series. I know which I would rather.”

Roger, we were talking about the ocean heat content, not about the sea surface temperature. We were talking about the OHC in the last decades, during which some people are unfortunately apparently able to see a “hiatus” in the surface temperature.

As an aside, maybe I know too much about the land surface temperature and all the economic and social transitions that change the surroundings and the observations, but I would not dare to say whether the sea surface temperature or the land surface temperature is better. I understand why people may feel the land surface temperature is better, because you can remove large inhomogeneities well statistically, but that is not enough: you also need to remove the influence of the small ones very accurately. The marine temperature people have done an impressive job in trying to understand the biases in historical measurement methods. They have the large advantage that, because their platforms are mostly moving, they can compare these with each other, and not just with a few direct neighbours that mostly have similar measurement methods.

You and ATTP are right for large numbers. I would contend that ARGO in particular has far too few floats to give meaningful results. It is not suitable for a Law of Large Numbers application.

Alright, if you’re going to argue that, then argue it. ^_^

The Law Of Large Numbers always applies for this type of problem; the question is how closely the results have converged to the correct answer. So, presumably you’re saying that there aren’t enough data points to get good convergence.

If you’re correct, you should be able to show it mathematically. Would you mind doing that?
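For what it’s worth, the scaling being pointed at here is easy to demonstrate numerically. Below is a minimal sketch (all numbers are assumed for illustration, e.g. a made-up per-reading error of 0.5 °C) showing that the scatter of the mean of n readings falls off like sigma/sqrt(n), with no threshold at which this “starts” to apply:

```python
import random
import statistics

# Toy illustration: the scatter of a mean of n readings shrinks like 1/sqrt(n).
# The per-reading error of 0.5 C is a made-up figure, purely for scale.
random.seed(42)
sigma = 0.5

for n in (30, 300, 3000):
    # Repeat the "survey" 1000 times and see how the survey mean scatters.
    means = [
        statistics.fmean(random.gauss(0.0, sigma) for _ in range(n))
        for _ in range(1000)
    ]
    spread = statistics.stdev(means)
    print(f"n={n:5d}  scatter of mean ~ {spread:.4f}  "
          f"theory sigma/sqrt(n) = {sigma / n ** 0.5:.4f}")
```

At n = 3000 the scatter of the mean is already about fifty times smaller than the scatter of a single reading, which is the sense in which 3000 can be a perfectly “large” number for this purpose.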

Whoa… why is everyone arguing about all that military data? Ocean heat content is navy data, gathered by the navy, shared between all allies, tested and verified. We use those databases (sparse in space and time) to predict the temperature profile anywhere, any time. We know exactly how accurate the data is, and trillions of dollars of global naval budgets have been spent based on our understanding of it and its accuracy.

Here is where ocean heat data is used:

Ocean heating isn’t theory; it’s old, well understood applied engineering, involving collaboration between all western allied nations. To think they are faking it for some reason is downright crazy. That’s like multinational treason for… reasons? I’d need to know how and by what means all the navies have been intentionally rendering us defenseless.

Windchaser says: January 13, 2017 at 7:29 pm
“‘It is not suitable for a Law of Large Numbers application.’ Alright, if you’re going to argue that, then argue it. ^_^ The Law Of Large Numbers always applies for this type of problem; the question is how closely the results have converged to the correct answer. So, presumably you’re saying that there aren’t enough data points to get good convergence. If you’re correct, you should be able to show it mathematically.”
Exactly. The results converge to answers.
Currently, there are roughly 3000 floats producing 100,000 temperature/salinity profiles per year. Basically, though, each set of readings is for 3000 numbers. The law of large numbers applies to numbers somewhere north of this mathematically speaking.
Large numbers [law of] approach infinity. 3000 is far short of infinity. This sort of data can never be judged as adequate on a Law of Large Numbers basis.

verytallguy says: January 13, 2017 at 8:39 am
“The thermometer measures nothing other than the temperature of itself. … In air, heat transfer is slow, and radiation may be significant. … heat transfer is rapid and unconfounded by radiation in water.”
Thermometers measure the heat of the air and the water at the time said medium is in contact with the thermometer. They do not care about how the medium is being heated, slowly or quickly, just how hot it actually is, not how quickly or slowly the water or air is heated.
Radiation through or not through a medium is also irrelevant. The thermometer is designed to measure the medium’s temperature without the radiation. That is why there is a screen for air temperatures.

The law of large numbers applies to numbers somewhere north of this mathematically speaking.

You still haven’t demonstrated this. As I understand it, it really relates to whether or not you can – in this case – make a suitably large number of measurements so that they have a Gaussian distribution. If you then repeat the measurements at a later time and also get a Gaussian distribution you can then estimate how the system has changed. If you think that there aren’t enough measurements to do this, then I think you really need to try and demonstrate it, not just continue to assert that it doesn’t apply.

Owen Paterson on Any Questions yesterday stated that global temperatures have only risen 0.5C in the last 50 years – which appears to be a lie (with the caveat that it is presumably possible to find a data set and use uncertainties to demonstrate that this is within those bounds) – and also that the pause continues. There seems to be remarkably little reaction to this. Is everyone so inured to it that it no longer seems worthy of response?

Angech: “Thermometers measure the heat of the air and the water at the time said medium is in contact with the thermometer.”

No, thermometers measure the temperature of the bulb/sensor of the thermometer. We want to know the temperature of the medium, and the problem is when they are different.

The problem of the sun shining is not that the sun warms the air, the medium. If the sun warms the air that is the warmer temperature we want to measure. The problem is that the sun warms the thermometer more because the thermometer is solid and the air transparent.

If the medium conducts heat well (like water), it will cool the thermometer and the radiation error will stay small. If the medium is a good insulator (like air), the radiation error will be larger.

Thus you cannot say that ocean temperature measurements cannot see 0.01°C differences just because that would be hard to do for land air temperature. Feel free to show it is impossible and to show where the error in the OHC computations is (not holding my breath), but your argument is completely wrong: air and water are different.

Reflections about unskeptical “skepticism” of climate science and its not-so-distant cousin fake news lead to questions about how language is used to communicate. Increasingly, assumptions about using language with goodwill and for common welfare are falling to attacks that undermine meaning itself.

In the academic world (and my world) we have come to take for granted the imprecision of words, and we are willing to accept common definitions in order to build knowledge. People wishing to break down those assumptions have only to use the built-in symbolic nature of language to attack it. Once the trick is learned (using words out of context, with alternate meanings that sound real but in fact attack meaning), it’s easy for even the shallowest arguer to attack the integrity of meaning.

Knowledge is work. Tearing things down is easy. We find it baffling that the simplest premises we have agreed to make sense and progress are attacked. We cannot force people to pay attention to assumptions they wish to attack, which is sadly harmful to our common progress.

Language is necessary for communication. Common agreements as to meaning are necessary to move on. People who don’t want to move on, and people who don’t want others to move on, need only emphasize the basic nature of language itself and attack the agreement that meaning is useful.

[Slightly but not entirely unrelated, the meaning of temperature measurement has been attacked. As soon as people refuse to acknowledge that proxy, anecdotal, thermometer, and satellite measurements can be checked and calibrated and insist that any attempt to do so is “cheating”, they are able to convince the credulous that temperature is not temperature unless it is measured by the same means. They find one exception and hold fast to what they want rather than what is true.]

Willis, 3000000000000000000000000 or so very good points. As numbers tend to infinity, accuracy improves. This is short by infinity, but still good enough for human understanding.
ATTP, the Gaussian distributions made on 3000 or so numbers apply to one data set at one time period. The next Gaussian set applies to another 3000 numbers at a different time period, with a change in some of the measuring devices.
You are not flipping the same coin.
Each Gaussian distribution is extrapolated out to a model ocean whose parameters do change seasonally.
3000 observations in a set is not normally regarded as a large number, sorry.
If adding Gaussian sets for comparison, you only get one every 10 days, and what you are measuring has changed.
OHC is measured in millidegree changes.
I am glad we are doing it. It gives a good rough approximation we need.
Miles better than nothing.
Changes are so small and error will always be so great that it cannot be the panacea people want.
Susan Anderson, it may be a shock, but I agree with you completely above.

ATTP, the Gaussian distributions made on 3000 or so numbers apply to one data set at one time period. The next Gaussian set applies to another 3000 numbers at a different time period, with a change in some of the measuring devices.

If you have the measurements from two different times and at both times the results are close to being Gaussian then you can estimate the change more accurately than you might be able to estimate the actual measurements.
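The point about changes being easier to pin down than levels can be illustrated with a toy calculation (all numbers invented: a fixed per-instrument bias with 0.2 °C spread, read-to-read noise of 0.05 °C, and a true change of 0.01 °C). When the same instruments report at both times, each instrument’s bias cancels in the difference:

```python
import random
import statistics

random.seed(1)
n = 3000
true_change = 0.01                                    # assumed true warming (10 millidegrees)
biases = [random.gauss(0.0, 0.2) for _ in range(n)]   # fixed per-instrument bias (invented spread)
noise = 0.05                                          # independent read-to-read noise (invented)

# The same n instruments report at time 1 and at time 2.
t1 = [b + random.gauss(0.0, noise) for b in biases]
t2 = [b + true_change + random.gauss(0.0, noise) for b in biases]

# The absolute level is polluted by the biases; the paired difference is not.
level_error = abs(statistics.fmean(t1))
change = statistics.fmean(b - a for a, b in zip(t1, t2))
print(f"error in absolute level: {level_error:.4f}")
print(f"estimated change: {change:.4f} (true value {true_change})")
```

A millidegree-scale change survives instrument biases twenty times larger, which is the sense in which the change between two surveys can be estimated more accurately than either survey’s absolute value.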

angech says: “Victor, we are not putting a thermometer in the sun when we measure air temperature.”

Exactly, for the reasons I described. You were thus wrong when you claimed:

“Thermometers measure the heat of the air and the water at the time said medium is in contact with the thermometer. They do not care about how the medium is being heated, slowly or quickly, just how hot it actually is. not how quickly or slowly the water or air is heated. Radiation through or not through a medium is also irrelevant.”

Even with a screen, radiation gets onto the thermometer bulb/sensor; you cannot close it fully, because you need ventilation. The screen has the same problem as the thermometer: it heats up in the sun and warms the air flowing in. Then there are still the infra-red radiation fluxes between the sensor and the screen, and the heating by the electronics. All of this needs ventilation to reduce errors, and this ventilation is better in the conductor water than in the insulator air.

It is also fine by me if you now want to claim the observations are perfect. That would, however, undermine your original claim that ocean heat content cannot be measured with sufficient accuracy.

This was a comment for the other readers. Also in the light of your “discussion” on the law of large numbers, I will assume that you are trolling and aim to waste other people’s time.

Basically though each set of readings is for 3000 numbers. The law of large numbers applies to numbers somewhere north of this mathematically speaking.

That doesn’t really make sense. I mean, convergence in the LLN means that as the number of samples goes up, the error gets smaller and smaller. This applies no matter how many samples you have. There isn’t a threshold number of samples above which it starts applying. It always applies.

So, using statistics, you can calculate what you expect the uncertainty to be, given the actual measurements. What you seem to be saying is that the actual uncertainty is higher than the expected uncertainty, but I can’t understand why. It’s not because of the LLN. You can take subsamples and test those, and they’ll reproduce the same (expected) result, showing convergence.
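The subsampling check described here is straightforward to sketch (synthetic readings with an assumed per-reading spread of 0.5 °C, standing in for real profiles): split one survey into disjoint subsamples and compare the scatter of their means with what sigma/sqrt(n) predicts.

```python
import random
import statistics

random.seed(7)
sigma = 0.5                                            # assumed per-reading spread
readings = [random.gauss(0.0, sigma) for _ in range(3000)]

# Ten disjoint subsamples of 300 readings each.
sub_means = [statistics.fmean(readings[i:i + 300]) for i in range(0, 3000, 300)]
expected = sigma / 300 ** 0.5                          # predicted scatter of a 300-reading mean

print("subsample means:", [round(m, 3) for m in sub_means])
print(f"scatter {statistics.stdev(sub_means):.3f} vs expected ~{expected:.3f}")
```

If the actual uncertainty really were much larger than the expected uncertainty, the subsample means would disagree by more than this prediction; when they reproduce it, that is direct evidence of the convergence being claimed.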