Natural internal variability: sensitivity and attribution

There is growing evidence to support the hypothesis that the pause cause is tied to a change in tropical Pacific Ocean circulations. What are the implications of this for climate sensitivity and attribution of warming in the latter part of the 20th century?

The paper by Kosaka and Xie (discussed on the previous thread Pause tied to equatorial Pacific cooling) is generating substantial discussion in the blogosphere and twittersphere. I focus in this post not so much on the pause, but on the warming in the last quarter of the 20th century. I reproduce here the following figure from the paper:

In my previous post, I argued that POGA-C (fixed external forcing) showed substantial warming from 1975 through 1998, accounting for a substantial fraction of the observed warming (possibly 50% or more, based on an eyeball estimate). The significance of this lies in the context of the IPCC AR4 attribution statement, whereby most (>50%) of the warming in the latter half of the 20th century is anthropogenic.

I’ve had email and twitter discussions on this with Ed Hawkins and John Nielsen-Gammon. One major issue is exactly how Kosaka and Xie conducted their simulations in terms of forcing the equatorial Pacific temperatures; another is how to interpret the implicit external forcing that exists in the POGA-C simulation.

Nielsen-Gammon’s arguments

I have been emailing with John Nielsen-Gammon this past weekend, who has two posts on this paper:

Nielsen-Gammon has digitized the results from the paper. He computes trends for 1975-2002 and 2002-2012. He comes up with a trend of 0.19C for POGA-C and 0.51C for observations, which might lead you to infer that ~37% of the observed trend can be explained by Pacific variability for this period (note: if the end point is 1998, as in my initial analysis, the trend is higher). He then goes on to infer that:

So according to the model, the tropical Pacific by itself was responsible for 0.19 C of warming, and all of that was due to the response of the tropical Pacific to radiative forcing. The effect of natural variability in the tropical Pacific on the linear trend over that period was very small and negative, a mere -0.04 C. So contrary to Curry’s mind-blowing first impression, the results of Kosaka and Xie imply that natural variability in the tropical Pacific did not contribute at all to the rapid warming from 1975 to 2002.

Nielsen-Gammon justifies the bolded statement as TPNV = POGA-H − HIST, which he interprets as the natural-only changes in the central and eastern tropical Pacific. This doesn’t make sense to me, since it looks like the 1998 El Nino had a cooling effect on the climate. I find the bolded statement to be unjustified.
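For concreteness, the percentage arithmetic above and the TPNV = POGA-H − HIST decomposition can be sketched in a few lines. This is an illustrative back-of-envelope sketch using the round numbers quoted in the post; the HIST trend value below is a hypothetical chosen only so that the difference reproduces the quoted -0.04 C, not a digitized result:

```python
# Back-of-envelope sketch of the attribution arithmetic (numbers are the
# round figures quoted above, not Nielsen-Gammon's digitized series).

# Total 1975-2002 warming (degrees C):
obs_trend = 0.51      # observations
poga_c_trend = 0.19   # POGA-C (fixed external forcing, Pacific prescribed)

# Fraction of the observed trend attributable to Pacific variability
# under the POGA-C reading:
fraction = poga_c_trend / obs_trend
print(f"POGA-C share of observed warming: {fraction:.0%}")  # ~37%

# Nielsen-Gammon's decomposition: tropical Pacific natural variability
# estimated as POGA-H minus HIST.
poga_h_trend = 0.19   # hypothetical value for illustration
hist_trend = 0.23     # hypothetical forced-only trend (chosen to give -0.04)
tpnv = poga_h_trend - hist_trend
print(f"TPNV (POGA-H minus HIST): {tpnv:+.2f} C")  # -0.04 C, per JNG
```

The point of the sketch is only that the two interpretations divide the same 0.51 C very differently: ~37% natural under the POGA-C reading, slightly negative under the POGA-H minus HIST reading.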

JC: Yes, the way that fixing that 8% influences the global trend is through setting the global atmosphere/ocean circulation, i.e. natural internal variability.

JNG: Also it’s not just 8% of the ocean that’s affected. I expect the temperature signal is advected to the west tropical Pacific by ocean currents and from there spreads north and south. The strong atmosphere-ocean feedback in the area will also cause the winds to be altered and in turn affect ocean temperatures upstream.

JC: Agreed, that’s how it works, through circulations. But to have the global temperature raised in a substantial way, something is going on other than specifying the temperature in this region. If you specified 8% of land temperature, it wouldn’t make much difference at all to global temperature. If you specified 8% in the Indian Ocean, not much would happen either. The point is that specifying the temp in this particular region synchronizes the global network of ocean/atmospheric circulations in a realistic way, with the natural internal variability of the coupled ocean/atmosphere system providing the major signal to the global climate.

JNG: Agree.

So, we agree on the basics of how the equatorial Pacific can influence global climate. Where we disagree is the extent to which the specification of surface temperature for 8% of the global surface area influences the global trend through radiative forcing. A simple analysis might say less than 10%, simply by virtue of the small area, and by virtue of understanding that, say, the 1998 El Nino event was not directly triggered by radiative forcing. But JNG seems to think otherwise.

He makes the following point that I agree with:

the GFDL model is too sensitive to external forcing (the period 1970-2010 warms by 0.2C too much).

If we take the difference between the POGA-H models (with ENSO constrained to follow historical data) and the HIST models, we see the estimated influence of ENSO on global temperature history:

This is, according to the new research, how ENSO has modified global temperature since 1950. The influence is clear: a pronounced recent ENSO-induced cooling which has cancelled the continued global warming due to man-made CO2, leading to the “hiatus” in the increase of global temperature.

I assume that Tamino also got his numbers by eyeballing Kosaka and Xie’s graphs. At least in Tamino’s version of the diagram, the 1998 El Nino shows a small warming anomaly, although the magnitude seems unrealistically small.

Tamino then criticizes my analysis:

Her first mistake — quite an embarrassing one really — was to assume that this [POGA C] was the influence of ENSO on global temperature history. This quite misses the point, that one of the strengths of the new approach is that it allows climate forcing and ENSO to interact in a nonlinear manner. The actual estimate of the influence of ENSO, according to the new research, is shown in the graph labelled “POGA-H minus HIST.”

Oooooh, I’m just blushing with embarrassment. The interesting thing about POGA C is that it uses FIXED external forcing. External forcing has some small influence via the specification of the 8% surface temperatures in the central Pacific, which includes the effects of PDO/ENSO as well as external forcing.

So the question du jour is: Does POGA C or POGA-H minus HIST provide a better estimate of the impacts of ENSO/PDO on the global climate?

I say it is POGA C. There are nonlinear interactions between the forced and unforced variability, and we have seen that the GFDL model is too sensitive to external forcing. So I don’t think much of Tamino’s and JNG’s interpretation of POGA-H minus HIST. But that said, all this is not easily untangled.

Tamino closes with this howler:

As the graph labelled “POGA-H minus HIST” shows, the influence of natural variation, at least that part of it from ENSO, has been cooling, not warming, and if we want to assign a percentage we should say that natural variation has been responsible for about negative 25% of global warming. Not only did Judith Curry execute one of the most blatant, most obvious, and most ludicrous examples of cherry-picking, she couldn’t even get the sign of the influence right. That’s what I’ve come to expect from her.

Pay attention, Tamino. The PDO/ENSO had a warming effect during the period 1976 to circa 2000, then a cooling effect since about 2002. The question du siècle is: How much of the warming in the last quarter of the 20th century was caused by natural internal variability? Looking at unforced simulations such as POGA-C provides important clues. The conclusion that Tamino draws, that natural variability has a uniformly cooling effect, and JNG’s analysis that it has no effect (or maybe up to 30%, per the uncertainty analysis in his second post), are not convincing, particularly given that they allow the PDO to be the pause cause since 2002.

The bottom line is that natural internal variability and forced variability are very difficult to disentangle. IMO the natural internal variability is of intrinsic importance to global climate on multidecadal time scales, and this needs to be considered in an integrated way; not just a forced signal with natural variability noise.

Tsonis et al. provide some insights regarding how to think about this. Below is an additional paper of relevance.

A Mathematical Theory of Climate Sensitivity or, How to Deal With Both Anthropogenic Forcing and Natural Variability?

Abstract. Recent estimates of climate evolution over the coming century still differ by several degrees. This uncertainty motivates the work presented here. There are two basic approaches to apprehend the complexity of climate change: deterministically nonlinear and stochastically linear, i.e. the Lorenz and the Hasselmann approach. The grand unification of these two approaches relies on the theory of random dynamical systems. We apply this theory to study the random attractors of nonlinear, stochastically perturbed climate models. Doing so allows one to examine the interaction of internal climate variability with the forcing, whether natural or anthropogenic, and to take into account the climate system’s non-equilibrium behavior in determining climate sensitivity. This non-equilibrium behavior is due to a combination of nonlinear and random effects. We give here a unified treatment of such effects from the point of view of the theory of dynamical systems and of their bifurcations. Energy balance models are used to illustrate multiple equilibria, while multi-decadal oscillations in the thermohaline circulation illustrate the transition from steady states to periodic behavior. Random effects are introduced in the setting of random dynamical systems, which permit a unified treatment of both nonlinearity and stochasticity. The combined treatment of nonlinear and random effects is applied to a stochastically perturbed version of the classical Lorenz convection model. Climate sensitivity is then defined mathematically as the derivative of an appropriate functional or other function of the system’s state with respect to the bifurcation parameter. This definition is illustrated by using numerical results for a model of the El Niño–Southern Oscillation.
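The definition in the abstract’s penultimate sentence can be written compactly. The notation below (μ for the bifurcation/forcing parameter, X(μ) for the system state on the random attractor, Φ for the chosen functional, e.g. global-mean temperature) is mine, not the paper’s:

```latex
% Climate sensitivity as the derivative of a functional of the
% system state with respect to the bifurcation parameter \mu:
S(\mu) \;=\; \frac{d}{d\mu}\,\Phi\!\bigl[X(\mu)\bigr]
```

The contrast with the usual equilibrium definition is that X(μ) here is a statistical object (the random attractor), not a fixed point, so sensitivity is defined even when the system never equilibrates.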

I have seen mention of this paper pop up several times on different threads, here is an opportunity to discuss this paper in context of this specific issue.

JC summary

The results of the Kosaka and Xie simulations can be interpreted in numerous ways. Trying to filter out the ENSO from the PDO signal seems to me to be an erroneous thing to do, given their intrinsic relationship. Using these simulations to attribute the pause (since 2002) to the cooling effect of ENSO/PDO has a corollary that the warm phase of the PDO in the last quarter of the 20th century also contributed to this warming.

The focus for the last two decades has been on the forced climate response. Natural internal variability has been regarded as noise. The pause has stimulated research into the contribution from natural internal variability, which is a very welcome development. How can we proceed to better understand the role of natural internal variability on climate change? More climate model simulations are needed along these lines, with different experimental designs and using different climate models. More insights are needed from observational analyses. And better theoretical frameworks are needed for understanding climate sensitivity to external forcing in a system with substantial natural internal variability.

Without knowing how he chose to smooth/average it, I don’t know what it means. For all I know, he tried ten different “smooths” until he got one that looked the way he wanted. Then he plotted it with the red line 20 times as thick so it grabs the attention of the weak-minded. As Judith points out, 1998 somehow does not add much to heating, which is strange.

As long as we insist on defining global warming on the basis of ATMOSPHERIC temperatures, we will need to deal with the issue of internal variability. Global warming, however, should correctly refer to the increase in the total heat content in the climate system (atmosphere, ocean, land, ice), and defined in those terms global warming has not paused, even in the slightest, in the past four decades.

The Cornwall Alliance, a coalition of clergy, scientists, and others, touches on the subject of natural variation in its preamble
“We believe Earth and its ecosystems—created by God’s intelligent design and infinite power and sustained by His faithful providence —are robust, resilient, self-regulating, and self-correcting, admirably suited for human flourishing, and displaying His glory. Earth’s climate system is no exception. Recent global warming is one of many natural cycles of warming and cooling in geologic history.”
______

So if the earth’s design makes it self-regulated and admirably suited for human flourishing, I guess there’s nothing to worry about.

‘The Gaia hypothesis, also known as Gaia theory or Gaia principle, proposes that organisms interact with their inorganic surroundings on Earth to form a self-regulating, complex system that contributes to maintaining the conditions for life on the planet. Topics of interest include how the biosphere and the evolution of life forms affect the stability of global temperature, ocean salinity, oxygen in the atmosphere and other environmental variables that affect the habitability of Earth.

The hypothesis was formulated by the scientist James Lovelock[1] and co-developed by the microbiologist Lynn Margulis in the 1970s.[2] While early versions of the hypothesis were criticized for being teleological and contradicting principles of natural selection, later refinements have resulted in ideas highlighted by the Gaia Hypothesis being used in subjects such as geophysiology, Earth system science, biogeochemistry, systems ecology, and climate science.[3][4][5] In 2006, the Geological Society of London awarded Lovelock the Wollaston Medal largely for his work on the Gaia theory.[6]’ Wikipedia

‘Using a new measure of coupling strength, this update shows that these climate modes have recently synchronized, with synchronization peaking in the year 2001/02. This synchronization has been followed by an increase in coupling. This suggests that the climate system may well have shifted again, with a consequent break in the global mean temperature trend from the post 1976/77 warming to a new period (indeterminate length) of roughly constant global mean temperature.’ Swanson, K. L., and A. A. Tsonis (2009), Has the climate recently shifted?, Geophys. Res. Lett., 36, L06711, doi:10.1029/2008GL037022.

Max – time is the dimension in which entropy occurs. Conceptually time has been considered an arrow. It moves from a past that is no longer through the evolving moment to the unformed future.

Close enough for government work. We will go with it for now – although Einstein has literally added another dimension to the problem inextricably linking time with space.

Time itself is almost irrelevant except as a measure of the increase in entropy in the universe. As the universe evolves – small changes in Earth system control variables – solar intensity, orbit eccentricities, atmospheric composition – drive nonlinear change to the system through changes in subsystems – ice, snow, biology, winds, dust, ocean currents, etc. This causes abrupt change in what we presume is an ergodic, chaotic system. Ergodic because it seems there are certain preferred states. In the last 2.58 million years – states at and between glacials and interglacials.

It makes sense to run the climate models using the entire ocean temperature as an input. In this run, ocean grid cells would simply track the ocean temperature in a given grid; there would be no feedbacks into the grid from entities external to the grid.

Perhaps only the last 30 years would be doable due to the sketchy nature of ocean temperature measurements. In fact, that is probably the biggest drawback in this approach. Nevertheless, it would be an interesting exercise.

That’s been done with the GFDL models and is summarized in a few posts by Isaac Held. They are much more faithful to the observations than the fully coupled models. Right, it is only for about the last 30 years.

There is a lot there, lots of pages, lots of links. I haven’t been able to find the run using SST as the independent variable. Should I be looking for a chart, a dataset (I can do a little R), or an interactive page somewhere?

Thanks, billc. It is interesting that the model still overshoots/undershoots the measured temp. Since we know the ocean is the primary driver of atmospheric temps, it seems it would make sense to tune the model to match the modeled atmospheric temp to the measured atmospheric temp as closely as possible. Then, re-couple the ocean with the rest of the climate and see what happens.

I don’t have access to the literature, so this might have been done already, as far as I know.

Well the model explored in that post was a “high resolution” atmospheric model, so it may have different parameterizations than coupled models at a lower resolution. Dunno. Probably still an excess of tunable parameters from which to choose…

“The whole CO2 argument is tiresome and absurd. The unmoved mover is the sun. The system below responds to the variations in the sun with a lag. The “warming” was because when the oceans went into their warm cycle (remember there is far greater “energy” in warm water than in cold dry air), the natural response had to be a warming in the north – where there is more land- because equatorial ocean warming increases the transport of warm humid air northward. That air has to then warm areas where there is dry air. If you dry out warm humid air, then you warm the air temperature if the wet bulb is constant. So there is warming over the continents, BUT IT ONLY CONTINUES UNTIL THE OCEANS HAVE ADDED THEIR INPUT. The leveling off of temps is completely consistent with an atmosphere that has absorbed the heat from the warming cycles of the oceans that occurred in tandem from 1995-2007 (warm PDO was 1978-2007, warm AMO 1995 till present, but it will shift).”

Judith Curry’s present post amounts to “The ‘best available science’ tells us that the above foundational principles all are true, and our next task is to fill in the scientific details, by proceeding to a better quantitative understanding of the role of natural internal variability.”

No, nature is a generalization. A cause would need to be specific. God could will the magic of nature to overwhelm man’s activities among other things, but climate variability is due to fluid dynamics.

‘Finally, Lorenz’s theory of the atmosphere (and ocean) as a chaotic system raises fundamental, but unanswered questions about how much the uncertainties in climate-change projections can be reduced. In 1969, Lorenz [30] wrote: ‘Perhaps we can visualize the day when all of the relevant physical principles will be perfectly known. It may then still not be possible to express these principles as mathematical equations which can be solved by digital computers. We may believe, for example, that the motion of the unsaturated portion of the atmosphere is governed by the Navier–Stokes equations, but to use these equations properly we should have to describe each turbulent eddy—a task far beyond the capacity of the largest computer. We must therefore express the pertinent statistical properties of turbulent eddies as functions of the larger-scale motions. We do not yet know how to do this, nor have we proven that the desired functions exist’. Thirty years later, this problem remains unsolved, and may possibly be unsolvable.’ http://rsta.royalsocietypublishing.org/content/369/1956/4751.full

Looks like a bit of circular logic by Tamino et al: take the difference between the models’ predictions and observation, and attribute it to natural factors. But the models’ predictions were themselves obtained by coding the late 20thC warming into the models. Ergo, the only effect of natural factors is the recent interruption in warming. Using this circular logic, no other result is possible.

There is growing evidence to support the hypothesis that the pause cause is tied to a change in tropical Pacific Ocean circulations–i.e., natural causes.

There also is a voluminous amount of historical evidence to support the hypothesis that past pauses as well as the previous warmings all are tied to natural causes. “The fact is that the ‘null hypothesis’ of global warming has never been rejected: That natural climate variability can explain everything we see in the climate system.” ~Dr. Roy Spencer

The implication is that we need to stop wasting money on the fabrications of Western academics. Otherwise, the next time you hear Climatists use their GCMs to attribute climate sensitivity, and warming in the latter part of the 20th century, to America — as if a change in the weather is tied to the Left’s hatred of capitalism and the country’s Judeo/Christian heritage — realize that you are only getting what you are paying for.

Xie 2001:
“In fact, much of the recent Northern Hemisphere warming can be accounted for by the phase-shifting events of the PDO and NAO in the 1970s. It suggests that the global warming, most likely forced by the increasing green house gases in the atmosphere, projects strongly onto the modes of natural climate variability (Palmer 1998), giving another good reason for studying PDO.” http://iprc.soest.hawaii.edu/~xie/pdo.doc

” It suggests that the global warming, most likely forced by the increasing green house gases in the atmosphere, projects strongly onto the modes of natural climate variability…”

——-
When altering atmospheric chemistry to the extent humans have, you cannot easily separate anthropogenic forcing from “natural variability”. There is not a clean break where one ends and one begins. The human influence may “project strongly” onto natural modes, and increasingly so as GH concentrations increase. We do not have a control planet that is clean of human influence, and so we can only look to the paleoclimate record and models to guide us. The Pliocene record would suggest big changes ahead as CO2 soars above 400 ppm.

But what is this “extent” you speak of? With regards to CO2, we are a tiny, tiny source compared with other sources. And, CO2 levels have been much much higher at past times where we had no influence at all. Any great “extent” of alteration of the CO2 component of atmospheric chemistry by humans exists only in your mind, not in reality. Not to mention that just because CO2 is a greenhouse gas does not mean that earth’s temperatures are driven solely by its concentrations, or even partially by its concentrations. As I am learning, there are many many other factors at work here.

Judith, you write “Natural internal variability has been regarded as noise.”

This I regard as THE key issue. From Tamino’s paper I quote

@@@@@
While the ENSO phenomenon has a potent impact on global temperature, it’s one of those phenomena which doesn’t create a long-term trend. It can and does cause temperature to go up and down and up and down and down and up and down and up, so that short-term (a decade or even longer) trends are profoundly affected, but on longer timescales (30 years or more, which we usually associate with “climate”) the ups and down mostly cancel each other and the long-term trend impact is minimal.
@@@@@
This strikes me as completely wrong. I cannot see why “the ups and down mostly cancel each other and the long-term trend impact is minimal.”

The PDO has a cycle time of around 60 years. For half this time, El Ninos tend to predominate, and for the other half, La Ninas. So we expect a warming for the first half, and a cooling for the second half, which MAY cancel out over the full cycle of 60 years. But we do not know whether all PDO cycles follow the same pattern and intensity. To take an analogy from solar cycles, there are 11 year Schwabe cycles, which combine into 22 year Hale cycles. But each Schwabe cycle has an Rz value which can vary between 0 and over 150.

I can see no reason why we can assume that the effects of the PDO, El Nino and La Nina cancel out on a time period as short as 30 years. So far as I can see, there could be residuals showing up in global temperatures on a time scale of centuries from the effects of the PDO. And then there is the AMO, which might beat with the PDO. And then there is…
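A toy calculation (my construction, not a model of the actual PDO) illustrates the point: if the warm half of a 60-year cycle is even slightly stronger than the cool half, the two 30-year halves do not cancel, and a residual accumulates over the full cycle:

```python
import numpy as np

# One idealized 60-year "PDO-like" cycle, with the warm half-cycle assumed
# 20% stronger than the cool half-cycle (the asymmetry is for illustration).
years = np.arange(60)
raw = np.sin(2 * np.pi * years / 60.0)
temp = np.where(raw >= 0, 1.2 * raw, raw) * 0.1  # anomaly, degrees C (arbitrary)

warm_mean = temp[:30].mean()  # mean anomaly of the warm half (positive)
cool_mean = temp[30:].mean()  # mean anomaly of the cool half (negative)

print(f"warm half: {warm_mean:+.3f} C, cool half: {cool_mean:+.3f} C")
print(f"full-cycle mean anomaly: {temp.mean():+.4f} C")  # ~ +0.006 C, nonzero
```

No energy is created here; the asymmetry simply means the 30-year ups and downs leave a residual, and a run of such cycles with varying asymmetry would leave residuals on century scales.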

The issue is the multitude of different time scales of natural internal variability; many important modes have longer time scales than the 1-3 decades that are the focus of pause and AGW attribution analyses. The other issue is the focal interest on surface temperature, which allows for a lot of redistribution of heat in the atmosphere and ocean while still conserving energy.

I don’t see that “internal variability” must cancel over the long term. These variations are driven by, most likely, the interaction of humidity, clouds, the Sun, and large scale wind – along with continental topography, our position in the Galaxy, the long term trend of the Sun, which change over the very long term. We have probably never had constant conditions. If you believe these variations to be chaotic, there is no a priori reason to believe they will cancel out on any given time frame. Chaotic isn’t the same as random. A driven chaotic system conserves energy just like any other.

Just because the Holocene has been warming naturally for 20,000 years doesn’t mean energy is being created. It just means the cycles are really, really long.

(Of course energy from the sun can be captured when it would not normally be if the albedo is darker, and energy can be lost if the albedo is brighter. And then there is the waste heat from 7,000,000,000 people and their industry)

Yes, the multiple time scales complicate the problem, as does only having one time series to look at. But if folks want to argue that internal variation can actually create energy, they need to do something other than merely assert it.

Take note Steven: Dr Curry thinks like a man and a geologist. That’s why the “Team” and their minions get all hysterical over her blog. You need to take your aircraft simulation experience and remember that the climate instrument record compares with a few micro-seconds of flying time on a $100 hamburger run.

The recent spate of interesting papers primarily reveals that we know much less about climate and human influences than we thought. In regards to sensitivity, hopefully you might begin to understand that it is not measurable; rather, it is a weak signal trapped and constrained by a cacophony of strong noise.

Geologists know that noise is just a cop-out. Noise is signal you don’t understand.

If they dont cancel then you have an internal unforced variation that either creates or destroys energy. not a good thing.

Huge amounts of energy are entering and leaving the system, with no direct causal relationships between the size of the flows. “[I]nternal unforced variation” doesn’t need to create or destroy energy, just retain a (very) slightly smaller or larger amount of what’s flowing.

‘Just because the Holocene has been warming naturally for 20,000 years doesn’t mean energy is being created. It just means the cycles are really, really long.”

Yes, really long unicorns.

Here is the task.

Given: GHGs cause warming; how much, you don’t know.
Given: Humans add a tiny amount of waste heat; how much, you can calculate.
Given: There are unforced “cycles” of varying amplitude and period that recharge and restore.

Solve for the contribution of each.

Positing long “cycles” of unicorn length does not solve the problem; it merely gives a name to your ignorance.

Steven, you write “If they dont cancel then you have an internal unforced variation that either creates or destroys energy. not a good thing.”

You miss the point. They do cancel out in the end. The question is, how long does it take for them to cancel out? Tamino claims it is short compared with 30 years. I claim it is long compared with 30 years. That is the difference.

Steven Mosher: If they dont cancel then you have an internal unforced variation that either creates or destroys energy. not a good thing.

Over what time span do you claim the ups and downs have to cancel? Internal unforced variation could cause a long-term increase in daytime cloud cover, producing cooling for a “long” time; and that could alternate with long-term decreases in cloud cover, producing warming for a “long” time. There is no reason (on present evidence) to conclude that there has to be canceling out within any particular decade, 30 year period, century, or millennium.

Drought cycles could be the result of less clouds and more sunshine. This would “create energy” from sunshine that would normally be reflected back into space.

“The new study reports that northern Great Plains droughts have recurred at roughly 160-year intervals. As in forest ecosystems, fire was a key player in the drought cycle and an important factor in regenerating plant life on the Plains.

“As it starts to get dry, grass cover is lost,” Clark explained. “There’s no more fuel for fires, and as a result some pretty dramatic erosion occurs, which lasts for decades.””

“Well, what can happen is changes in cloud distribution or snow/ice cover, which can change the amount of radiant energy entering/leaving the earth system.”

and

“Internal variation could, for example, change the earth’s albedo for decades. A darker albedo would capture more solar energy and lighter albedo would reflect more solar energy.”

yes, and monkeys could fly out of my butt.

You can construct all manner of suppositions that would result in a net negative OR net positive outcome. You can imagine all sorts of things that could be the case other than a net neutral outcome.

The problem is that these suppositions lead nowhere.

There are two approaches.

1. An approach that assumes net neutral, and the problem becomes tractable.
2. An approach that says there could be positive unicorns or negative unicorns, we don’t know; but if the assumption of net neutral shows X, we will assert the possible existence of unicorns of size -X, and if the assumption of net neutral shows -X, then we will assert the possible existence of positive unicorns.

So, you want to argue that there might be positive or negative unicorns. You can’t say which, you can’t say how big. On the other hand, I’ll assume no unicorns, and state that the solution is subject to this assumption and revisable and falsifiable when and if you show up with a unicorn. Meanwhile, you provide no direction on how to find the unicorns, whether they are big or small, or how I can know that I’ve found one.

One approach moves forward with the possibility of being wrong. The other throws up its hands and says “you can’t assume net neutral”, but doesn’t provide any insight, even a provisional insight, into alternative assumptions. It just renames ignorance.

Well, yes, there is a lot of ignorance, largely because people have been ignoring natural internal variability. Pointing out these possibilities provides targets for people to investigate. There is no a priori reason to think that the warm phase and cool phase of the PDO are symmetrical in terms of warming/cooling trend, since there are different circulation patterns (and cloud patterns) associated with each phase.

You are being trapped by the Italian flag; sort out the red, green and white and then we can get somewhere; no need to assume no white or all white.

It’s called a unicorneutrino, an unobserved entity introduced to reconcile observations with conservation laws. Why would you expect the earth’s albedo to remain constant in the face of temperature changes and all the other stuff they affect (clouds, whitecaps, ice, biological ground cover, etc.)? That would be weird.

Mosher, “If they dont cancel then you have an internal unforced variation that either creates or destroys energy. not a good thing.”

No one has a real clue whether the oscillations are oscillations or just damped recovery patterns. The realistic range of error in the sensible portion of the energy budget is 7 Wm-2 and latent is 8 Wm-2; the average northern hemisphere to southern hemisphere energy imbalance is on the order of 18 Wm-2, and the ITCZ and westerlies are known to shift on centennial time scales. Tamino assuming he “KNOWS” the relevant time scale for zeroing out, or even what “normal” might be, is a tad nonsensical.

What is really happening is that a model fudged with central Pacific SSTs is kicking the butts of models fudged with aerosols. The SST-fudged model indicates that this can be happening.

As the oceans approach a preferred state, sensitivity decreases, just like charging a battery.

Your point relies on a system of natural climatic variation cycles of equal amplitude over decadal time-frames. If that were true, it would be the only complex geologic process to operate in that way.

“the change of albedo of 0.01 is comparable to a global energy balance change of 3.4 Wm-2 (average incident solar radiative flux is 341 Wm-2), which is similar in magnitude to the impacts of doubling carbon dioxide in the atmosphere”.

“This paper says that there is a decrease of ~2 Wm-2 in shortwave reflected flux, or an albedo decrease of 0.006, for 2000 through 2003. Palle et al. say that there is an increase of 6 Wm-2 in the reflected flux, or an albedo increase of 0.017, for the same time period.”

“The fact that two different observations give totally different trends in the change of albedo implies that people need to put more effort into not only qualifying but also quantifying the change in the albedo.”
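The quoted figures are mutually consistent under a simple conversion: a change in planetary albedo maps to a change in reflected shortwave flux of roughly ΔA × 341 Wm-2, using the mean incident flux stated in the quote above. A minimal sketch of the arithmetic:

```python
# Convert an albedo change to an approximate global-mean shortwave
# forcing, using the mean incident solar flux quoted above (341 W/m^2).
MEAN_INCIDENT_FLUX = 341.0  # W/m^2, global-average top-of-atmosphere value

def albedo_change_to_forcing(delta_albedo):
    """Approximate change in reflected shortwave flux (W/m^2)."""
    return delta_albedo * MEAN_INCIDENT_FLUX

# Checks against the numbers in the quotes:
print(albedo_change_to_forcing(0.01))   # ~3.4 W/m^2, comparable to 2xCO2
print(albedo_change_to_forcing(0.006))  # ~2 W/m^2
print(albedo_change_to_forcing(0.017))  # ~6 W/m^2 (Palle et al. figure)
```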

I disagree on the first two; in this case it’s the set of NCDC surface stations for the US, the best-sampled location on the planet, so it’s consensus data, without the butchering. The processing is something different from the annual average temperature series that’s been done for the 1,000th time: a daily anomaly. It does need error bars, though.
But it is the same NCDC data that is the only empirical full-resolution surface temperature data that’s available. The same data that BEST starts from, I think.

You can always make whatever simplifying assumptions you want to. Those of us who asked you how you know the period of time over which natural oscillations balance out are still waiting for you to tell us. So far, you have given us that you assumed the arbitrary epochs that you wanted in order to get tractability. The mechanisms that we directed your attention to that might produce non-neutrality over some time spans are known, and can be studied. Your assuming them not to matter over any time spans does not advance understanding at all.

There is this 700 to 1000 year cycle that is evident in the Roman Warm Period, the cold period after that, the Medieval Warm Period after that, the Little Ice Age after that, and the modern warm period after that. They ignore all of this natural variability and consider anything above or below the average as not normal. It is normal to go above and below a long-term average; that is how you get an average, by averaging the temperatures that are above and below. The average is not the normal. The normal goes up and down, and the average is not the normal.

You are clear as a bell. Remarkable, really. I understand that your critics might disagree with you but I cannot understand how they always manage to misinterpret your position.

My guess is that quite a few of them fail to appreciate that there is a world of natural phenomena that exists apart from our theories, which exist to describe that world. It is a very widespread problem. Once upon a time, youngish engineers could not win their wings in computer modeling until they were able to explain just how the world modeled is distinct from the model.

“If they don’t cancel then you have an internal unforced variation that either creates or destroys energy. Not a good thing.”

That’s not the case, though. You are looking at the 60-year PDO cycle as the fundamental frequency, but that is not necessarily, or even likely, the case. You should talk to some electronics or sound engineers about the behaviour of perturbed bodies. You get all kinds of artefacts whereby the fundamental pitch drifts and modulates. If you accept that there were warm and cold periods in the Holocene, such as the Roman and Medieval warmings (and the intervening colder periods), then there is a signal on centennial scales which could contribute to a trend. That’s not a unicorn.

It’s not a closed system. You never reach an equilibrium. You have multiple lags involved and a chaotic system that can come to various “steady states” for a few years or (best we can tell) for hundreds of thousands of years. Not a system that can be easily modeled. The error bars in all the energy balance calculations are large, so I have no confidence in the amount of energy trapped by additional CO2 or how much of it is available to heat the part of the atmosphere we humans occupy. If the heat is efficiently mixed into the oceans, how much will a 0.003 degree increase in the oceans affect us?

After mulling this for a bit, I am really gobsmacked that Steven Mosher said this. Prof. Curry’s rejoinders (long time scales, narrow focus on surface temps, clouds) clearly show that a very clever dude is way behind the curve in understanding the basis for climate modeling. I can’t help wondering if, say, Richard Muller suffers from similar weakness of thought regarding internal variability. If so, we may have a long wait before Muller realizes variability is not just statistical noise.

“While the ENSO phenomenon has a potent impact on global temperature, it’s one of those phenomena which doesn’t create a long-term trend. It can and does cause temperature to go up and down and up and down and down and up and down and up, so that short-term (a decade or even longer) trends are profoundly affected, but on longer timescales (30 years or more, which we usually associate with “climate”) the ups and down mostly cancel each other and the long-term trend impact is minimal.”

This is a priori reasoning in all its glory. The short-term trends have to average to zero because “radiation-only” theorists demand it. Heaven forbid that anyone should investigate (empirically) the short-term behavior.

Why don’t “radiation-only” theorists state their results and evidence in joules rather than temperatures?

“While the ENSO phenomenon has a potent impact on global temperature, it’s one of those phenomena which doesn’t create a long-term trend.” – Tamino.
Which is to say it cancels itself. Do we have a problem of definitions? If variability is cycles around a mean, is variability the underlying long-term trend as well? I sense, at times, a hard line against the long-term trend being up. Say we accept that the PDO in the long run cancels itself out. Is Tamino saying the long-term trend is all CO2 plus water vapor?

I agree with JIM2 above on the uselessness of the CO2 sensitivity discussion. This is really a continuation of the earlier discussion here on the Xie paper and the post on JNG’s site. It is irrational to talk about climate sensitivity to CO2 when natural CO2 levels follow temperature, and what’s more, not in a simple way. It is worth reposting here a comment made on the earlier thread and also on the JNG site.

“The problem is that the IPCC models are structured incorrectly and are actually totally useless as a basis for discussion. In the case of Xie, he completely ignores the possibility that the underlying trend is also natural, i.e. that it reflects a solar millennial cycle. I recently posted a comment on the Curry site on this matter; I think it appropriate to repost it here.
Dr Norman Page | August 31, 2013 at 4:55 pm | Reply

In the last couple of years the climate science establishment has been forced to deal with the fact that there has been no net warming since 1997, that the warming trend peaked in about 2003, and that the earth has been in a cooling trend since then. See Figs 1 and 4 in my blog post at http://climatesense-norpag.blogspot.com/2013/07/skillful-so-far-thirty-year-climate.html
Figs 1 and 4 are but two examples of an ever-increasing number showing the growing discrepancy between model outputs and reality. This disconnect has been acknowledged by the establishment science community, which is now busy suggesting various epicycle-like theories as to where the “missing” heat went. Some say it’s in the oceans (Trenberth), some say it’s due to Chinese aerosols (Hansen), but all the main actors still persist in the view that it will appear, Lazarus-like, at some unspecified future time. This is like the Jehovah’s Witnesses recalculating the end of the world each time a specified doomsday passes.
In Britain, the gulf between the Met Office expectations for the last several years and the actual string of cold and snowy winters and wet summers which has occurred has made the Met Office a laughing stock, to the point of recently holding a meeting of 25 “experts” to try to figure out where they went wrong.
The answer is simple. Their climate models are incorrectly structured because they are based on three irrational and false assumptions: first, that CO2 is the main climate driver; second, that in calculating climate sensitivity the GHE due to water vapour should be added to that of CO2 as a feedback effect; and third, that the GHE of water vapour is always positive. As to the last point, the feedbacks cannot be positive, otherwise we wouldn’t be here to talk about it.
Ms Curry found a recent paper mind-blowing. What is really mind-blowing is that the IPCC-Met Office model outputs were ever accepted as having any useful connection to the real world.
Much of the discussion on this site, however, still accepts the model outputs as a basic framework for rational discussion and policy guidance. They are not.
A completely different approach to forecasting is required. One such is outlined in the link above. Others have exposure mainly in the blogosphere, e.g. Scafetta and Easterbrook. It is the forecasts of these types of approaches which should be the topics of discussion. The IPCC modellers can only advance by scrapping their basic assumptions and restructuring their models completely. For most of them this is psychologically and professionally impossible.
Here are the conclusions of my approach based on the recognition of quasi cyclic – quasi repetitive patterns.
“To summarise- Using the 60 and 1000 year quasi repetitive patterns in conjunction with the solar data leads straightforwardly to the following reasonable predictions for Global SSTs

1 Continued modest cooling until a more significant temperature drop at about 2016-17
2 Possible unusual cold snap 2021-22
3 Built-in cooling trend until at least 2024
4 Temperature (HadSST3 moving-average anomaly) 2035: -0.15
5 Temperature (HadSST3 moving-average anomaly) 2100: -0.5
6 General conclusion: by 2100 all the 20th-century temperature rise will have been reversed
7 By 2650 the earth could possibly be back to the depths of the Little Ice Age
8 The effect of increasing CO2 emissions will be minor but beneficial; they may slightly ameliorate the forecast cooling, and more CO2 would help maintain crop yields
9 Warning!!
The Solar Cycle 2, 3, 4 correlation with cycles 21, 22, 23 would suggest that a Dalton minimum could be imminent. The Livingston and Penn solar data indicate that a faster drop to Maunder Minimum (Little Ice Age) temperatures might even be on the horizon. If either of these actually occurs, there would be a much more rapid and economically disruptive cooling than that forecast above, which may turn out to be a best-case scenario.”
Y’all need to break free of the modellers’ mindset and their mathematical approach to climate forecasting. Judith in particular: as a geologist, consider that putting together past climates is much more like putting together the Geological Timescale (see Gradstein 2012).
You cobble together bits here and there from different disciplines in different times and regions to come up with a narrative. Then look for patterns which can be projected forward for some limited time period.

lolwot, you don’t have to guess what I think. See conclusion 8 in the original post. Also note that the sensitivity equation is logarithmic, and note these key points from the comment:
“climate models are incorrectly structured because they are based on three irrational and false assumptions: first, that CO2 is the main climate driver; second, that in calculating climate sensitivity the GHE due to water vapour should be added to that of CO2 as a feedback effect; and third, that the GHE of water vapour is always positive. As to the last point, the feedbacks cannot be positive, otherwise we wouldn’t be here to talk about it.”

lolwot, don’t copy the modellers and throw out basic simple observations and common sense in looking at climate change. Here’s a quote from the blog post linked in the original comment:
“b) A Simple Rational Approach to Climate Forecasting Based on Common Sense and Quasi-Repetitive, Quasi-Cyclic Patterns.

How then can we predict the future of a constantly changing climate?

When, about ten years ago, I began to look into the CAGW-CO2-based scare, some simple observations immediately presented themselves. These seem to have escaped the notice of the Climate Establishment. (See the post of 5/14/13, Climate Forecasting for Britain’s Seven Alarmist Scientists and for UK Politicians.)
a) Night is colder than day.
b) Winter is colder than summer.
c) It is cooler in the shade and under clouds than in the sun
d) Temperatures vary more widely in deserts, and hot humid days are more uncomfortable than dry hot days; humidity (enthalpy) might be an important factor. We use sunscreen against UV rays; can this be a clue?
e) Being a Geologist I knew that the various Milankovitch cycles were seen repeatedly in the Geologic record and were the main climate drivers controlling the Quaternary Ice Ages.
f) I also considered whether the current climate was unusually hot or cold. Some modest knowledge of history brought to mind frost fairs on the Thames, the Little Ice Age, and the Maunder Minimum without sunspots during the 17th century. The 300 years of Viking settlements in Greenland during the Medieval Warm Period, and viniculture in Britain, suggested a warmer world in earlier times than at present, while the colder Dark Ages separate the MWP from the Roman climate optimum.
g) I noted that CO2 was about 0.0375% of the atmosphere and thought, correctly as it turns out, that it was highly unlikely that such a little tail should wag such a big dog.
I concluded, as might any person of reasonable common sense and average intelligence given these simple observations, that solar activity and our orbital relations to the sun were the main climate drivers. More specific temperature drivers were the number of hours of sunshine, the amount of cloud cover, the humidity, and the height of the sun in the sky at midday and at midsummer. It seemed that the present day was likely not much, or very little, outside the range of climate variability for the last 2000 years, and that no government action or policy was required or would be useful with regard to postulated anthropogenic CO2-driven climate change.

These conclusions, based on about 15 minutes of anyone’s considered thought, are at once much nearer the truth and certainly would be much more useful as a guide to policymakers than the output of the millions of man-hours of time and effort that have been spent on IPCC-Met Office models and the global warming impact studies and emission control policies based on them.

It’s like the guy on the street corner preaching that Jesus saves.
No point in talking to him, since he refuses to acknowledge his own assumptions and construes all objections to his position as a pathology.

The external cycle is the 61-year Scafetta cycle, which produces (or, as named here, “is tied to”) a PDO cycle. The PDO is only a product or effect and does not “cause” a warming/cooling period. Everyone can download the Alley GISP2 (2000) Greenland temps, stretch the graph horizontally at any point of the Holocene, and see those 61-year upticks or downticks for the past 10,000 years. The 61-year PDO is just superimposed upon the general temperature trendline, which is governed by five external forcings: http://www.knowledgeminer.eu/eoo_paper.html Cheers JS

‘Her first mistake — quite an embarrassing one really — was to assume that this [POGA C] was the influence of ENSO on global temperature history. This quite misses the point, that one of the strengths of the new approach is that it allows climate forcing and ENSO to interact in a nonlinear manner. The actual estimate of the influence of ENSO, according to the new research, is shown in the graph labelled “POGA-H minus HIST.” ‘

Excuse me, but who is making the assumption? Tamino is assuming that natural regularities such as ENSO cannot be investigated empirically and independently of the standard assumptions of the “radiation-only” crowd. If Mr. Tamino thinks that I am wrong, then I am sure that he will explain how empirical investigation of ENSO could not reveal a role for ENSO that is greater than what he estimates and that is independent of a “radiation-only” account.

“There is growing evidence to support the hypothesis that the pause cause is tied to a change in tropical Pacific Ocean circulations.”

Scafetta & Loehle 2011, I think, does the best job: just two oscillations, of 20 and 60 years (Pacific and Atlantic respectively) in 3:1 phase synchronicity, tracking observations well given two linear residuals. One is an underlying 0.1C-per-century linear trend since 1840, which coincides with the end of the Little Ice Age and the subsequent natural warming associated with it. A second linear trend of 0.1C per decade, beginning in 1950, makes the fit perfect so far.

Notably the data used in S&L 2011 ends in 2010. The data since 2010, which has been a comparatively rapid drop in global average temperature, fits their prediction perfectly. I know of no mainstream climate models that have done this well. Every one of them missed the pause but not S&L 2011. And every one of them that missed the pause from 2000-2010 really missed the period from 2010-2013 by more.
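For readers who want to experiment, the kind of model described here, a few fixed-period oscillations plus linear trend terms, can be fit by ordinary least squares. A minimal sketch on synthetic data; the periods, amplitudes, and noise level below are illustrative, not the actual Scafetta & Loehle 2011 parameters:

```python
import numpy as np

def fit_cycles_plus_trend(years, temps, periods=(60.0, 20.0)):
    """Least-squares fit of fixed-period sinusoids plus a linear trend.

    Each sinusoid is represented as a sine/cosine pair, so its phase
    and amplitude are free parameters while the fit stays linear.
    """
    cols = [np.ones_like(years), years - years.mean()]
    for p in periods:
        w = 2 * np.pi / p
        cols.append(np.sin(w * years))
        cols.append(np.cos(w * years))
    X = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(X, temps, rcond=None)
    return coeffs, X @ coeffs

# Synthetic series with known structure (illustrative numbers only)
years = np.arange(1850, 2011, dtype=float)
truth = (0.005 * (years - 1930)                    # 0.5C/century trend
         + 0.10 * np.sin(2 * np.pi * years / 60.0)  # 60-yr oscillation
         + 0.04 * np.sin(2 * np.pi * years / 20.0)) # 20-yr oscillation
rng = np.random.default_rng(0)
temps = truth + rng.normal(0, 0.02, years.size)

coeffs, fitted = fit_cycles_plus_trend(years, temps)
print("recovered trend (C/yr):", coeffs[1])
print("RMS residual:", np.sqrt(np.mean((temps - fitted) ** 2)))
```

Note that a fit like this demonstrates only that the basis can describe the data; whether the cycles are physical is exactly what is in dispute in this thread.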

The real knee-slapper (the usual suspects won’t see the humor) is that Loehle and Scafetta have the Pacific and Atlantic oscillations aligned with orbital parameters of the sun, earth, and gas giants. I won’t be so quick to diss astrology in the future. LOL

David, you write “I won’t be so quick to diss astrology in the future. LOL”

It is not astrology, but astronomy. The solar system is a closed Newtonian system. The sun has 99% of the mass, and the planets have 99% of the angular momentum. Both have to be conserved. The sun makes periodic cycles around the barycenter, and is now in the process of making two loop-the-loops.

Can’t believe that Tamino criticizes Judith Curry for not assigning PDO/ENSO as NEGATIVE warming! He seems to be blissfully unaware that these natural oceanic cycles have positive and negative phases, and that if one arrives quantitatively at an estimate of PDO/ENSO’s effect upon climatic cooling, it is probably logical to conclude that, when PDO/ENSO are in the opposite relative phase, they will contribute a very similar amount of warming.

But I suppose that is symptomatic of the obsessive one-sidedness apparent in the thought processes of so many people who argue for an overriding man-made influence upon our climate and – to be fair – in some of those who argue the complete opposite, though I would say that it is probably far less prevalent in the latter group of individuals.

It is absolutely imperative that reliable estimates of the magnitudes of anthropogenic and natural forcings upon our climate are arrived at soon and agreed upon by both parties; otherwise the climate wars will just rumble on ad infinitum, or until such time as the real world provides conclusive evidence showing that either side’s arguments are untenable, which may in fact happen within the next year or so, or may take decades.

In observing the work of Tsonis and Co., I see a desirable (IMO) focus on what I can only call “concentrations of causality”: Parts of the system where influences concentrate to determine powerful factors that then, in turn, produce widespread influence over the entire system.

The Eastern Pacific, as exemplified by sea surface temperature, would appear to be such a concentration. The question is, what are the factors that determine that EPSST, and how much of the influence is caused by increased pCO2 (or other greenhouse gases)?

Perhaps modellers should concentrate on tweaking their parametrization to produce the observed EPSST, then try to work out what’s different about the resulting model runs, such as how they differ from observations in other parts of the world.

The Earth’s sustained (and astonishingly stable?) radiative energy imbalance can variously appear (as Judith Curry says) “as redistribution of heat in the atmosphere and ocean while still conserving energy.” In consequence, it is natural that the Earth’s various local heat reservoirs are observed to individually warm or cool, even as the net global heat energy increases steadily.

Observation: Young scientists, especially, appreciate that scientific progress is sustained by “repetitive thumping” on the strongest available climate-change science, not by quibbling over details of mediocre climate-change science, or denialistically cherry-picking the weakest climate-change science.

I see this as an opportunity for the science to progress, as we can begin to break away from the myopic viewpoint of confining climate change and AGW to the troposphere and look at the full Earth system. The external forcing of increasing GH gases adds to the overall energy of the system by altering the character of that system, just as a volcano does, but in reverse. Increase GH gases and the system retains more energy (most of which would naturally go into the oceans). It would be worthwhile to begin to discuss a new viewpoint of climate sensitivity: how sensitive the overall system is to gaining energy from a doubling of CO2, with natural variability dictating the flow of that energy within the system.

From a government policy perspective the question(s) really have not changed. Do we understand the net long-term impacts (positive and negative) of adding more CO2 into the atmosphere? Unless and until there is clear evidence that there are net long-term negatives, there will be no large-scale move to not emit CO2.

“It would be worthwhile to begin to discuss a new viewpoint of climate sensitivity: how sensitive the overall system is to gaining energy from a doubling of CO2, with natural variability dictating the flow of that energy within the system.”

That is pretty much the meat. A big problem, though, is that natural variability affects direct forcings (solar and albedo) differently than it affects response forcings (CO2-equivalent), which vary with surface energy.

Solar forcing in the tropics has a huge impact, and due to near water-vapor saturation, CO2 forcing at the tropical surface is nearly negligible. CO2 forcing has a greater impact in the mid and upper latitudes, where there is less water-vapor competition, while solar impact decreases with incidence angle. We have already had a discussion or two on regional sensitivity, but it keeps reverting back to “internal variability or heat transport is insignificant”, which is totally wrong.

@martyn
“why can the Earth respond to changes in energy flux induced by distance, equal to 12 W/m2 (328-340), with no lag, but slow changes of less than 0.1 W/m2 per year have lags of a decade or more?”

It’s possible that natural variability in solar insolation also has a lag effect. It could be a cause of observed variability in the ocean such as ENSO, PDO, and AMO. We just don’t understand it well enough to model it. That’s my guess.

“It would be worthwhile to begin to discuss a new viewpoint of climate sensitivity”

‘Well he would, wouldn’t he?’
Mandy Rice-Davies

So as the CS drops to an extent that threatens not catastrophic climate, but slightly better weather for spring break, you wish to focus on something else, like the possibility that in future fish will be caught pre-cooked.

Well, Fan of more trolling, I can’t believe that you’re saying that because Eschenbach has pointed out the extreme stability of the earth’s climate system, it is therefore a closed system. But if you are, please note that energy can enter a system or leave it without necessarily knocking it out of balance to the extent that, for example, runaway warming must occur.

If a good martial artist stands in balance, you can shove him or her pretty hard, without creating much movement; so external energy can enter their “system” without affecting their “extreme stability.” The earth is a very good martial artist.

Why on earth would anyone who matters care what Tamino has to say? He doesn’t even rate a mention in Wikipedia. His affiliation on the one paper with anything to do with climate is “Tempo Analytics”, which appears to be a non-entity he probably made up out of thin air. An actual professor of statistics at Columbia University all but baldly states that Foster doesn’t know his ass from his elbow.

Worse for Tamino a.k.a. Grant Foster is that the statistician at Columbia who disses him is a member of scienceblogs.com and did it on his blog. Scienceblogs.com is top shelf for consensus science blogs.

From that link “It helps to know the context. Anthony Watts has been particularly reprehensible in making fact-free claims and ignoring refutations. If you have never had to deal with climate change denialists, creationists and anti-vaxxers before let me tell you it is only a matter of time before you snap when dealing with them. Tamino was quite mild given the relentless ignoring of data practised by Watts (Tamino has just demonstrated a central claim of Watts was false, and had several people independently replicate his finding).”

Some say that they admire him because his science comes from the gut. Others say that he is the purest propagandist pushing a scientific position today. Others say that they are really impressed by his insults directed at genuine scientists. The list is quite long.

In zoology, the gut, also known as the alimentary canal or alimentary tract, is a tube by which bilaterian animals transfer food to the digestion organs. In large bilaterians the gut generally also has an exit, the anus, by which the animal disposes of solid wastes. Some small bilaterians have no anus and dispose of solid wastes by other means, for example through the mouth.

Matthew R Marler, please recognize that the stability and precision of satellite energy-balance measurements can be high even if the absolute calibration of the radiative energy flux is uncertain. That is why James Hansen and colleagues affirm what Willis Eschenbach also affirms:

Willis Eschenbach says: “The additional surprise [of the Earth’s radiative energy balance] was that neither upwelling solar nor upwelling longwave varied by more than ± 0.1% year-to-year. Remember that the variable portion of the albedo is controlled by things as ephemeral as ice and clouds and wind, all of which are changing daily … and yet every year, they average out to within a tenth of a percent.”

from the same post by Willis Eschenbach: Finally, the idea that we have sufficiently accurate, precise, and complete observations to determine the TOA imbalance to be e.g. 0.85 watts per square meter is … well, I’ll call it premature and mathematically optimistic. We simply do not have the data to determine the Earth’s energy balance to an accuracy of ± one watt per square metre, either from the ocean or from the satellites.

He further shows, in his fig 5, that changes in ocean heat content are unrelated to the changes or consistency of the TOA imbalance, at least from 2001 – 2005.

A lot of the debate is about whether the scientific knowledge and measurements are amazingly complete and accurate, or whether they remain too incomplete and inaccurate to answer important questions about (CO2 contributions to) climate change. As the CO2 is increasing and the steady-state or equilibrium has not been achieved, the TOA imbalance should change along with the CO2 increase. Are the measurements sufficiently accurate that we could determine whether that is happening? Or has happened?

A couple of things, all “pause”-related.
First, should anyone be surprised that the Eastern Pacific has a big influence on NA surface temps? I’m in my 50s and I’ve known as long as I can remember that the Gulf Stream keeps the UK warmer than it should be. Should we be surprised that there are probably other places around the world where the same thing happens?
Second, once the air is over the continent and has its water vapor reduced, CO2 does not restrict night-time cooling. On a 35F clear day, a hardware-store IR thermometer reads zenith temps as ~-40F. And this was the middle of the day.
Lastly, I think a lot of the thermal transport is due to living on a planet whose temperature is about in the middle of water’s freezing and boiling points. Lots of energy gets moved around just in the different phases of water. The Earth doesn’t get cold enough to liquefy CO2. Which, BTW, doesn’t actually create any energy, just stores it.

Oh, I forgot to point out that surface water has a low albedo only when the Sun is at an angle greater than 20 or 30 degrees above the horizon (think glare). Actually, glare is why the Arctic doesn’t have runaway melting. There’s only a fairly small area with the Sun nearly overhead; most of the open ocean just radiates into space when it gets a chance, and warm water (compared to ice) cools much more than ice does. http://www.iwu.edu/~gpouch/Climate/RawData/WaterAlbedo001.pdf
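The albedo-versus-sun-angle point can be quantified with the Fresnel equations for a flat air-water interface. Real sea surfaces are rougher and waves change the picture, so this is an idealized sketch; the refractive index of 1.33 is the standard value for water:

```python
import math

def water_reflectance(incidence_deg, n_water=1.33):
    """Fresnel reflectance of unpolarized light at a flat air-water
    surface, as a function of incidence angle from the vertical.
    """
    ti = math.radians(incidence_deg)
    # Snell's law: refraction angle inside the water
    tt = math.asin(math.sin(ti) / n_water)
    # s- and p-polarized reflectances, averaged for unpolarized sunlight
    rs = ((math.cos(ti) - n_water * math.cos(tt)) /
          (math.cos(ti) + n_water * math.cos(tt))) ** 2
    rp = ((math.cos(tt) - n_water * math.cos(ti)) /
          (math.cos(tt) + n_water * math.cos(ti))) ** 2
    return 0.5 * (rs + rp)

# Sun overhead vs. sun 10 degrees above the horizon (80 deg incidence)
print(round(water_reflectance(0.0), 3))   # ~0.02: dark water
print(round(water_reflectance(80.0), 3))  # ~0.35: strong glare
```

So open water under a low Arctic sun reflects an order of magnitude more sunlight than water under an overhead sun, which is the commenter’s point about glare.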

Good point, Mi Cro. In the winter the air is dry over land, seemingly pushing water-vapor feedback down. Now, while the summer hemisphere is loaded with humidity, there is still a gap. The more heat there is in the summer hemisphere, the more pressure there is to push it right out of the winter hemisphere. Have I missed something? Is the water vapor still there doing its insulating thing when the humidity seems to be as low as it gets?

Tinfoil hat comment follows: So if the North Polar season is pushing humidity South each Summer, is that humidity falling out on the South Polar regions? North would push to South as the North would seem to be at its highest energy, temperature, humidity and insulating phase.

The amount of water air holds changes a lot with temperature. At saturation, at -40C a kg of air holds 0.1 g of water; at 0C, 3.8 g; at +40C, 49.8 g.
There’s just very little water, even at 100% humidity below freezing, to create any amplification. And once the air is dry, the clear sky is plain cold, even during the day.
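The saturation figures quoted above can be roughly reproduced from the Magnus approximation for saturation vapor pressure, a standard empirical fit; the coefficients below are one common choice, and surface pressure is assumed to be 1013.25 hPa:

```python
import math

def saturation_mixing_ratio(temp_c, pressure_hpa=1013.25):
    """Saturation mixing ratio (g of water per kg of dry air).

    Uses the Magnus approximation for saturation vapor pressure
    over liquid water: es = 6.112 * exp(17.67*T / (T + 243.5)) hPa.
    """
    es = 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))
    # 0.622 is the ratio of molar masses of water vapor and dry air
    return 1000.0 * 0.622 * es / (pressure_hpa - es)

for t in (-40, 0, 40):
    print(t, round(saturation_mixing_ratio(t), 1))
# Close to the quoted ~0.1, 3.8, and ~49.8 g/kg
```

The roughly 500-fold difference between -40C and +40C is the commenter’s point: below freezing there is simply very little vapor available for a water-vapor feedback.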

“On a 35F clear day, a hardware store IR thermometer reads zenith temps as ~-40F. And this was the middle of the day.”
And you conclude that CO2 must have little effect? Were it not for greenhouse gases, your IR thermometer should read -455°F (if this were in its measuring range). This is the temperature of the cosmic microwave background radiation. CO2 and other greenhouse gases, including residual water vapor, account for the temperature of the sky being over 400°F warmer than that.

“And you conclude that CO2 must have little effect? Were it not for greenhouse gases, your IR thermometer should read -455°F (if this were in its measuring range). This is the temperature of the cosmic microwave background radiation. CO2 and other greenhouse gases, including residual water vapor, account for the temperature of the sky being over 400°F warmer than that.”

Actually, this is a very good point. But it is still -40F that the surface radiates to in the S-B equation. As I noted earlier, though, there isn’t a lot of water vapor at 35F: ~4 g of water per kg of air.
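The S-B arithmetic behind this exchange can be sketched directly. Treating the surface at 35F and a sky with the -40F effective radiating temperature quoted above as black bodies (unit emissivity, an idealization for illustration) gives the net upward longwave flux:

```python
# Net longwave exchange between a surface at 35F and a sky whose
# effective radiating temperature is -40F, via the Stefan-Boltzmann
# law with emissivity 1 (an idealization for illustration).
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def fahrenheit_to_kelvin(f):
    return (f - 32.0) * 5.0 / 9.0 + 273.15

def net_longwave(t_surface_f, t_sky_f):
    ts = fahrenheit_to_kelvin(t_surface_f)
    tk = fahrenheit_to_kelvin(t_sky_f)
    return SIGMA * (ts ** 4 - tk ** 4)

print(net_longwave(35.0, -40.0))  # roughly 150-160 W/m^2
```

A sky reading near the cosmic-background temperature instead of -40F would make this net loss several times larger, which is the greenhouse-gas point made in the comment above.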

True. There is little water vapor. And so, a large part of the effect (the 400°F temperature increase of the downwelling radiation from the atmosphere) is an effect of CO2, methane, etc. Doubling the concentration of those gases would increase that effect significantly.

“True. There is little water vapor. And so, a large part of the effect (the 400°F temperature increase of the downwelling radiation from the atmosphere) is an effect of CO2, methane, etc. Doubling the concentration of those gases would increase that effect significantly.”

Dry air is a good insulator; I didn’t mention that on that same 35F day the asphalt I was standing on was ~50F.
But it still doesn’t show up in overnight cooling. Across 109 million worldwide surface-station records, the average daily rising temperature and falling nighttime temperature for 1950 to 2010 are 17.4654F and 17.4656F, basically identical. Based on NCDC’s summary-of-days data set.

Those are raw tmin and tmax data uncorrected for time of observation bias? Are those 109 million continuous records? If you use BEST data instead, you will find a sharp reduction in diurnal range since 1900, and a smaller increase since about 1987. I am unsure what caused the recent increase, but it might be ENSO related.

@Pierre,
This isn’t DTR, and BEST doesn’t preserve it. My name in the above posts has a link with details.
Sample time of day doesn’t really matter, as long as most pairs (since it takes readings from two days) are consistent.
They aren’t all in a row; there are very few stations with 60 straight years of reporting.

Thank you for this post Judith. This topic is probably among the most important in climate science right now, and your approach of really going into detail on the various perspectives does a great service to all who are interested, both professional and layman. I am among those who think that anthropogenic factors are beginning to play a bigger and bigger role in modulating internal variability, but this is a topic that will be debated for some time. Again, outstanding job.

1. Gaseous CO2, and the gaseous phase of H2O, both are interactive media with respect to radiative energy transport in the wavelengths of interest to the earth’s climate system.

2. The earth’s climate system is an open system relative to radiative energy transport: the system can both gain and reject radiative energy. Some portion of the planet is rejecting energy out of the earth’s climate system for some portion of every day.

3. The earth’s climate system has never been in the past, is not now in the present, and will never be in the future in thermodynamic equilibrium. In particular, radiative energy transport at the top of the atmosphere has never been in exact balance between the incoming and outgoing energy. Thermodynamic equilibrium between components within the climate system is an impossibility.

4. The liquid phase of water, and, to lesser extent, the solid phase, and various other radiatively interactive solid particulate matter are present in the earth’s atmosphere. Some of the non-gaseous matter in the atmosphere reflect a portion of the incoming radiative energy back out of the earth’s climate system.

5. Relative to the postulated energy imbalance assigned to increases in CO2 concentration, convective transport of energy into the atmosphere from the surface and energy transport / exchange issues associated with the phase changes of water are not minor.

Let me expand on 3 above.

The GHG hypothesis relative to earth’s climate system also includes an equally important concept that is seldom mentioned in the bumper-sticker / press-release version of climate change: an equilibrium state for the radiative energy transport at the top of the atmosphere (TOA), in which the incoming SW radiative energy is always equal to the outgoing LW radiative energy. Here, equilibrium means incoming = outgoing. Balance is a better word, I think.

The radiative energy budget at the TOA cannot be in a state of radiative balance. There will always be time periods for which the incoming energy is greater than the outgoing and time periods for which the out-going is greater than the incoming. This behavior is not due solely to ‘feedbacks’, but instead is a function of the daily, seasonal and yearly cycles of the earth’s climate system. And the fact that the sub-systems within the earth’s climate system are always and forever changing on a wide range of temporal and spatial scales. The subsystems can never be in equilibrium, steady or stationary states, either within a subsystem or between subsystems.

We are assured that weather is chaotic. This means that climate is chaotic. (Given that climate seems to be defined as some kind of unspecified average of weather over some unspecified, but extensive, period of time.) Because weather affects some of the interactive-media interactions with radiative energy transport, the chaotic nature of weather means that the radiative energy transport will be chaotic. The chaotic nature of climate then means that the radiative energy transport of climate is also chaotic. (Note that every single aspect of radiative energy transport interactions with the earth’s climate system is described by parameterizations, up to and including the motion, composition, and positions of clouds.)

The Question.
Disregarding any and all possible effects due to human activities other than addition of CO2 into the earth’s climate system, on what basis is it known with absolute certainty that the energy content of the earth’s climate system shall increase over time as the concentration of CO2 increases? That is, what aspect(s) of the earth’s climate system ensure with certainty that over time the energy content of the system must remain above the level associated with a previous state having lower CO2 content?

The Earth is a water planet. Equilibrium, steady or stationary states are not possible. Variability is the norm. Engineers have known this for centuries. Why are so many people so excited?

5. Relative to the postulated energy imbalance assigned to increases in CO2 concentration, convective transport of energy into the atmosphere from the surface and energy transport / exchange issues associated with the phase changes of water are not minor.

In point of fact latent heat is the majority transport mechanism globally and almost exclusively in the tropics where most of the energy enters the system.

The following should be understood very well before writing anything about surface heat budget:

Equilibrium is still very important in understanding the surface heat budget. Although the system, or components thereof, only crosses the equilibrium point momentarily, like a swinging pendulum at the bottom of its arc, the system is still driven towards a theoretical equilibrium point. The point itself may be a moving target, but it can still be estimated, and the force driving toward it evaluated at any instant. This is especially important as the system is forced farther from equilibrium, since the restoring force increases with the distance from it. This often manifests as what’s commonly called a negative feedback.
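The pendulum picture above corresponds to simple linear relaxation: the restoring flux grows with the distance from a (possibly moving) equilibrium. A minimal sketch, with an arbitrary time constant and a drifting target chosen purely for illustration:

```python
# Linear relaxation toward a moving equilibrium: dT/dt = -(T - T_eq(t)) / tau
TAU = 5.0   # relaxation time constant, years (arbitrary)
DT = 0.1    # time step, years

def t_eq(time):
    """Hypothetical slowly drifting equilibrium temperature (C anomaly)."""
    return 0.01 * time   # drifts upward at 0.01 C/yr

temp, time = 1.0, 0.0    # start displaced 1 C above equilibrium
for _ in range(1000):    # integrate 100 years with forward Euler
    temp += DT * (-(temp - t_eq(time)) / TAU)
    time += DT

gap = temp - t_eq(time)
print(f"after 100 yr the gap to equilibrium is {gap:.3f} C")
```

The initial displacement decays away, but the system never actually sits at the moving equilibrium: it tracks it with a persistent lag of about drift rate times tau (here -0.05 C), which is exactly the "driven toward a moving target" behavior described above.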

Dan Hughes, if something as small as the 11-year solar cycle is detectable in the global surface temperature with a forcing change of 0.2 W/m2, you might also expect a doubling of CO2 with 20 times this forcing change to be detectable. The system is not so noisy that 0.2 W/m2 can’t be detected when it is a regular forcing cycle, and papers have been written on its detection with a signal of 0.1-0.2 C.

‘Since irradiance variations are apparently minimal, changes in the Earth’s climate that seem to be associated with changes in the level of solar activity—the Maunder Minimum and the Little Ice age for example—
would then seem to be due to terrestrial responses to more subtle changes in the Sun’s spectrum of radiative output. This leads naturally to a linkage with terrestrial reflectance, the second component of the net sunlight, as the carrier of the terrestrial amplification of the Sun’s varying output.’ http://bbso.njit.edu/Research/EarthShine/literature/Goode_Palle_2007_JASTP.pdf

The sensitivity to solar irradiance changes is almost 1 K/(W/m2) which is quite high, certainly helping its detectability and helping to explain how the LIA could be connected with the Maunder Minimum. Certainly there have to be positive feedbacks of some sort because the Planck response is 0.25 K/(W/m2).
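The Planck (no-feedback) response quoted above can be checked directly: linearizing the Stefan-Boltzmann law about the effective emission temperature gives a response of 1/(4*sigma*T^3). The 255 K emission temperature is the standard textbook value, not a figure from this thread, and the 1 K/(W/m^2) sensitivity is the commenter's claim.

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
T_EFF = 255.0      # effective emission temperature of Earth, K (textbook value)

# Planck response: temperature change per unit forcing, no feedbacks
planck_response = 1.0 / (4.0 * SIGMA * T_EFF**3)   # K per (W/m^2)

# Implied net feedback gain if the observed sensitivity were ~1 K/(W/m^2)
observed_sensitivity = 1.0                         # K/(W/m^2), as claimed above
gain = observed_sensitivity / planck_response

print(f"Planck response ~{planck_response:.2f} K/(W/m^2), implied gain ~{gain:.1f}x")
```

The no-feedback value comes out near 0.27 K/(W/m²), close to the 0.25 quoted; a 1 K/(W/m²) sensitivity would then imply an amplification of roughly 3.8x, which is the positive-feedback argument being made.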

I was agreeing with you. Solar irradiance forcing changes are detectable and important and as we see from the LIA there must be a big feedback. That was your message, wasn’t it? The numbers are what they are from measurements.

1976 was the year that the Met Office introduced a temperature factor to account for UHI. I do maintain that CET shows a reasonable, but by no means perfect, correlation with global temperatures; that it seems to specifically relate to PDO/ENSO is quite interesting.

No, Jim, it won’t. I agree. And I suppose I’m overstating the case when I say I’d prefer they don’t back off, because of course it would be better if they did. But we both know that’s not going to happen in any serious way. All that said, there’s no question the past year has seen many peer-reviewed papers that are friendly to the skeptical point of view. In concert with the ever more CAGW-subverting pause, this is all very good news…

To tell you the truth lollywot, being “well respected” ain’t what it’s cracked up to be these days, anyway. The supposedly big journals are publishing all kinds of crap. Any reasonable person already knew this paper was utter garbage (leaving you out of course). It’s just nice to see the reasons why laid out so tidily, and with the imprimatur of peer review for a tasty cherry on top.

Picking up on what Judith said earlier, that there is no a priori reason to suppose that PDO/ENSO warming/cooling is symmetrical, I do agree. However, averaged over hundreds of years, and controlling for the effects of other long term variable forcings which might affect PDO/ENSO, one might reasonably suppose the average effects of oscillating ENSO/PDO cycles to be neutral otherwise they would, in themselves, contribute either to sustained warming or cooling. Of course, asymmetry in PDO/ENSO warming/cooling might of itself explain underlying trends in multi-decadal climate variability, but I suspect long term trends will turn out to be dominated by solar forcings and, dare I say, not anthropogenic effects, contributory as they may be.

You are scientifically correct lolwot, and John Nielsen-Gammon speaks sensibly to the same scientific point:

Nielsen-Gammon remarks (sensibly): “I don’t like that term [“hiatus”], because it’s only the atmospheric part of the globe that is enjoying a hiatus from global warming. The oceans continue to take up lots of extra heat, and the glaciers continue to melt.

Judith says, at the end of her post: “The focus for the last two decades has been on the forced climate response. Natural internal variability has been regarded as noise.”

Look at the natural variability ramping up to, and down from, the Medieval Warm Period, or the earlier Roman Warm Period. Look at the ramp down to, and up from, the Little Ice Age. How can a serious scientist then say that natural variability in the Holocene, so obvious in previous time frames, has now been relegated to meaningless noise?

I’ve been taking mainstream climate scientists seriously for a very long while, disagreeing with their Church’s exaggerations but still viewing the Establishment on the whole as science-minded (with the corollary that it is the PR people who do the worst damage to science).

But if the science folks don’t get a grip and not just understand but publicly agree that natural variability isn’t something you can pretend doesn’t exist, it is going to get harder and harder to give them the respect I thought they deserved.

I agree with the thrust of your comments. Most of my moving toward a skeptical view in the last 5 years was not due to skeptics’ data or reasoning but rather the establishment’s intransigence to any view but theirs. At some point common sense has to prevail.

As has been pointed out to you numerous times, the very coarse paleo proxy sieve of the hockey stick and its spaghetti derivatives bears no relation to real-world annual and decadal instrumental temperatures. Their lack of variation also fails to reflect changes in glacier movements caused as temperatures fluctuated.

“In my previous post, I argued that POGA-C (fixed external forcing) showed substantial warming since 1975 (through 1998), and a substantial fraction of the observed warming (possibly 50% or more, based on eyeball estimate). The significance of this is in context of the IPCC AR4 attribution statement, whereby most (>50%) of the warming in the latter half of the 20th century is anthropogenic.”

Having stated the IPCC statement concerns the latter half of the 20th century, why does Dr Curry continue to look exclusively at the period from 1975, ie the latter quarter of the 20th century?

The same method from 1950-2000 appears to show over 80% of the observed warming to be anthropogenic. Should not Dr Curry be announcing that this study backs up the IPCC statement rather than vaguely implying it contradicts?

The danger is that to outside observers this post by Dr Curry looks like a deliberate cherrypick of data to spin a study as showing the opposite of what it actually shows. The picking of 1998 especially is troubling, as it is a very specific choice that just so happens to maximize such spin. I of course know this isn’t the case, but I am concerned how this might appear to the wider public and affect the credibility of climate science.

Fake skeptics and deniers have no help as far as I can tell.
If they ever try to do the math, they get debunked real quick.
I am waiting for the first skilled scientific analyst to make an appearance on behalf of the deniers in these comments.

Do you ever worry about being so trivial and irrelevant webster? Perhaps if you tried saying something that wasn’t just a whine – when not preening about idiot blog science. But that would require having a some knowledge of climate science.

I love the unashamed passive aggression. Your argument reminds me of how often doctors still insist that their failure to diagnose illness in women is explained by “it’s all in your pretty little head”. Maybe we should ask the denizens to diagnose lolwot’s deep rooted psychological pathology responsible for it’s latent sexism. My guess is unrequited Oedipal yearnings.

It has held true for 90 some percent of humans who have ever drawn breath. That 97 percent or so of the people who comment here have never experienced the hard part is a fact one should always keep in mind.

lolwot, I also made a similar (probably unnoticed) comment on the first thread. The POGA-C trend with fixed climate forcing over the whole period from 1950-2010 is indistinguishable from zero contrasting with both the runs with changing climate forcing. From this I would conclude that their whole trend is from changing climate forcing.

What do the “skeptics” conclude from comparing the whole POGA-C trend since 1950 with either POGA-H or HIST? It is an easy question similar to what Judith asked. I think they are shielding their eyes from this and its implications.

Both John N-G and Tamino have emphasized (in agreement with the authors of the paper as far as I understand) the fact that the area controlled is only a small fraction of the oceans and an even smaller fraction of the Earth surface. That’s a relevant point but by itself does not tell, how strongly the authors must perturb the model to get their POGA results.

A better measure of the strength of the perturbation would be the size of the forced energy flux. If this flux is only a small fraction of the energy imbalance at TOA, comparable to the fraction of the control area to the total area of oceans, I would consider the argument of the above paragraph rather strong. In the opposite case that the flux is much larger than the average net energy flux thorough such an area, and perhaps not very much smaller than the total energy imbalance of the Earth, then I would consider the results of the paper to have little evidential power.

I haven’t found that essential piece of information in the paper; perhaps I just overlooked it, or perhaps it has been reported elsewhere. If so, I would like to learn what the strength of this artificial additional forcing of heat transport is.

Assuming that the topic is energy, you make a reasonable case. Perhaps you can answer a question for me? Why is all the observable evidence for global warming stated not in joules but in temperatures? If a natural regularity that redistributes temperatures across the Pacific has been described in terms of temperatures then why is empirical investigation of the phenomenon opposed by all who insist that the topic is energy?

Measuring energy contents is much more difficult, and is actually done by measuring temperatures and multiplying by the heat capacities that are relatively easy to determine. Another possibility is to measure heat fluxes, which is possible in many cases, but not accurately enough.

One further reason to consider temperatures is that we and the ecosystems are affected by temperatures.

Heat contents measured in joules are more significant in many ways, but the above factors have presently a stronger influence on the practices.
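Pekka's point that heat content is in practice derived from temperatures can be made concrete with a back-of-envelope conversion. The ocean area, layer depth, density, and specific heat below are round textbook numbers, and the 1e23 J heat gain is purely illustrative, not a value from this thread.

```python
# Convert a hypothetical upper-ocean heat gain into a mean temperature change
OCEAN_AREA = 3.6e14      # m^2, total ocean surface (round number)
LAYER_DEPTH = 700.0      # m, the commonly reported 0-700 m layer
DENSITY = 1025.0         # kg/m^3, seawater
SPECIFIC_HEAT = 3990.0   # J/(kg K), seawater

mass = OCEAN_AREA * LAYER_DEPTH * DENSITY          # kg in the 0-700 m layer
heat_gain_j = 1.0e23                               # J, hypothetical heat gain

delta_t = heat_gain_j / (mass * SPECIFIC_HEAT)     # K, implied mean warming
print(f"1e23 J warms the 0-700 m layer by ~{delta_t:.2f} K")
```

An enormous 1e23 J corresponds to only about 0.1 K of mean warming in that layer, which is why heat content estimates stated in joules inherit all the uncertainty of the underlying temperature measurements.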

OK, so it is difficult to state observable evidence in terms of joules. The rest I knew.

But my underlying question is this: Why do the “radiation-only” theorists rule out empirical investigation of natural regularities that are described in terms of temperatures and rule them out by declaring that according to the energy calculations the short periods of temperature change must balance in the long run?

Do they really mean that the observed temperatures must balance or do they mean that the energies must balance?

If the latter, do observed temperatures have no independence from energies? If yes, then why can you not use joules in stating your evidence?

I’m sure every scientist knows that the conserved quantity is energy, all such statements about the temperatures must be based on expectation that the conservation of energy leads under typical conditions also to something close to conservation of average temperature. That’s not accurate, but often close enough. There are also cases where the deviations are large, but usually only when the reasons for that are obvious.

But surely you realize that your observed evidence, temperature readings, are in conflict with your theory. The natural regularity that is ENSO can be described very well in terms of observed temperatures but is rejected on the theoretical grounds that variations in ENSO must average to zero as a matter of energy balance. Scientists face such problems all the time. Why do you think that dismissing the description of ENSO in terms of temperature evidence is not a priori reasoning that should be suspect?

If you have such little regard for observed temperatures then why do you use them as the basic evidence for climate theory?

Pekka Pirilä posts “Theo, I’m sure every scientist knows that the conserved quantity is energy, all such statements about the temperatures must be based on expectation that the conservation of energy leads under typical conditions also to something close to conservation of average temperature.”

Pekka Pirilä is posting good scientific common-sense (in which by “typical conditions” every scientist understands “in the absence of phase-changes such as melting/freezing or evaporation/condensation.”)

Thank you (again) Pekka Pirilä.

Whereas Theo Goodwin is posting ideology-driven rhetorical gibberish (as nearly as I can understand it). What scientific point are you attempting to establish, Theo Goodwin?

“I’m sure every scientist knows that the conserved quantity is energy, all such statements about the temperatures must be based on expectation that the conservation of energy leads under typical conditions also to something close to conservation of average temperature. That’s not accurate, but often close enough. There are also cases where the deviations are large, but usually only when the reasons for that are obvious.”

For emphasis:
“…must be based on expectation that the conservation of energy leads under typical conditions also to something close to conservation of average temperature.”

Of course, by conservation of average temperature you can only mean the averaging to zero of natural cycles in, for example, the AMO which has a warm phase and cold phase.

I am so very glad that we reached this point. You have just stated the absolutely unjustified rule that “radiation-only” theorists use to reject any suggestion to the effect that ENSO, the AMO, or related phenomena are natural regularities just as planetary orbits are natural regularities and that empirical investigation of those phenomena is an important pursuit in climate science. Your reasoning is entirely a priori, based on no empirical research on the phenomena in question.

The albedo of the tropical eastern Pacific changes quite a lot between El Niño and La Niña phases due to changes in cloud cover, and IIRC, so does upwelling LW in the same region. If specifying the TEP eliminates errors in albedo and/or upwelling LW by essentially adding or subtracting heat as needed to match the specified temperature, then perhaps looking at the amount of added (or removed) heat in the TEP region to maintain the specified temperature would give an indication of where the model is deficient, and how much heat is involved compared to, for example, estimates of global GHG forcing. The area of the TEP may be only 8% of the total, but the average insolation for the TEP is considerably greater than the global average, so its influence ought to be greater than 8%.

Pekka, maybe we see from the POGA-C run that the net influence of such forcing was not to warm or cool the global mean given a constant prescribed forcing, so that might mean that its impact on the global energy over the 60 year period is minimal.

If we pull out the sine wave, that 8 percent in the South Pacific Ocean, whatever the exact wording is, we should continue to pull out other sine waves until all that is left is the long-term trend, which would be natural plus forcings. It would seem that each sine wave affects all others, in effect communicating with them. All part of a network.

Since 1840, a linear trend of 0.1C/century, which is recovery from the Little Ice Age. Since 1950, an additional linear trend of 0.1C/decade, which is probably the effect of one or both of AGW and the Modern Solar Maximum (Svensmark GCR hypothesis).
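The two-component sketch above (a slow recovery trend plus a faster post-1950 trend) is easy to write down explicitly. This just restates the commenter's claim as arithmetic, not an endorsement of it.

```python
def sketch_anomaly(year):
    """Commenter's toy model: 0.1 C/century since 1840 plus 0.1 C/decade since 1950."""
    recovery = 0.001 * (year - 1840)       # 0.1 C per century, LIA recovery
    modern = 0.01 * max(0, year - 1950)    # 0.1 C per decade after 1950
    return recovery + modern

print(f"2000 vs 1840: +{sketch_anomaly(2000):.2f} C")
```

By 2000 this toy model gives about 0.66 C relative to 1840, of which 0.5 C comes from the post-1950 term, so under this sketch nearly all the modern warming is attributed to whatever drives the second trend.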

Using the BEST forcing estimate, you can assume either 1.6C as the “real” approximate sensitivity or 0.8C as the approximate no-feedback sensitivity, as references. That doesn’t include solar, land use, etc.

Then looking at just the 1936 to present period,

That pretty much brackets the sensitivity range other than land only which has something else, land amplification whatever going on.

The 1976/00 cycle was smaller than the 1910/40 cycle and volcanic impacts seem to get less intense. The ocean battery is getting charged.

Pretty much like the old days, before novel paleo statistical methods smoothed out the past.

I used sine wave as an approximate shape, not a plot of y = sin x fercrisakes. Yeah, maybe it’s decaying (damped) and maybe it isn’t. It might be ringing from an asteroid punch to the gut. The crux is that the period lines up with astronomical phenomena, so it’s likely to be driven, and is all bent out of shape because there’s a bunch of beat-frequency oscillators in the system due to myriad subsystems with different lag times in response to the driving forces. A buttload of jumbled harmonics that looks like noise at first blush.

Solar and the tropical stratosphere. No surprise, the sun impacts weather.

Solar versus the difference between the north Atlantic and north Pacific, they have different areas and heat capacities so they have different time constants. That implies there will be thresholds which would impact the amplitude and timing of the equalizing curve between the two basins. Since the north Atlantic is “boxed in” hurricanes, tropical storms etc. would be the relief valves. Those would be some of the monkeys that Mosher likes up his butt.

Once again: I’ve never seen the cloud data you talk about Stephen. Please point directly to the data.

So far as I can tell, you assume the cloud data must do what you imagine in order to be consistent with your narrative.

I mean no offense. I really want to explore the data (if they exist) upon which you base your claims (if they are based on reality, not yet another abstract model based on false assumptions), so please point directly to the data (if they exist).

Frankly, I think (a) you have a poor grasp on aggregation criteria and (b) you’re wrong. The evidence I’ve seen firsthand does not allow the specificity (clouds alone) that you claim. The well-constrained evidence I’ve seen points only to coupled mechanical processes as a whole collection, not to clouds alone. A superior grasp on aggregation criteria will empower everyone to lucidly realize this.

‘The top-of-atmosphere (TOA) Earth radiation budget (ERB) is determined from the difference between how much energy is absorbed and emitted by the planet. Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.’ http://meteora.ucsd.edu/~jnorris/reprints/Loeb_et_al_ISSI_Surv_Geophys_2012.pdf

If we start with the absolute understanding that ocean and atmospheric circulations induce large changes in TOA radiative flux – the global climate equation changes considerably.

This paper – however – misses the locus of ocean variability by a great deal. The 2008 La Niña is shown in the following link – but also upwelling in the northern Pacific in the PDO pattern and in the Southern Ocean. Change propagates across the surface of the oceans changing the pattern of upwelling especially and therefore the energy budget of the planet. This seems driven by patterns of solar variability – especially UV. Heating and cooling of the stratosphere driving variability in sea level pressure, winds and currents. The proxy records moreover show variability over centuries to millennia.

The decadal pattern is most obviously associated with warming and cooling in the atmosphere and oceans. At a very minimum this reduces the rate of warming from greenhouse gases in the most recent period of warming to about 0.1 degrees C/decade. This was done by Kyle Swanson for instance – in a realclimate post – by excluding the 1976/1977 and 1998/2001 ENSO dragon kings and assuming the warming between 1979 and 1997 was the greenhouse signal. – http://www.realclimate.org/index.php/archives/2009/07/warminginterrupted-much-ado-about-natural-variability/ – You can get a similar result by simply averaging over a full warming/cooling period – 1945 to 1998 for instance.
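The "average over a full warming/cooling period" idea can be demonstrated on synthetic data: a linear trend plus a 60-year oscillation, with all numbers below illustrative rather than fitted to any real record. Fitting over one complete cycle recovers the underlying trend; fitting over only the warming half roughly doubles it.

```python
import numpy as np

TREND = 0.01    # underlying trend, C/yr (illustrative)
AMP = 0.15      # amplitude of a 60-yr oscillation, C (illustrative)
PERIOD = 60.0

t = np.arange(0, 61)
# trough-to-peak-to-trough oscillation riding on a linear trend
y = TREND * t - AMP * np.cos(2 * np.pi * t / PERIOD)

full_slope = np.polyfit(t, y, 1)[0]            # fit over one complete cycle
half_slope = np.polyfit(t[:31], y[:31], 1)[0]  # fit over the warming half only

print(f"full cycle: {full_slope:.3f} C/yr, warming half: {half_slope:.3f} C/yr")
```

Fitting a trend only over 1976-1998 is analogous to the second case; averaging over 1945-1998, a full warming/cooling period, is analogous to the first, which is why the end-point choice matters so much in this debate.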

If that was all there was to it, such a slight rate of increase would have little impact for a hundred years at least. Dragon kings – however – suggest that there is something happening that is much more fundamental to climate than slow changes to the system. Dragon kings are defined as extreme events at times of chaotic bifurcation. The system changed abruptly in 1976/1977 to a warming pattern and in 1998/2001 to a cooling pattern. As this seems driven by solar UV variability – there is much to suggest that a shift to a yet cooler state is entirely possible. A shift driven by CO2 forcing to conditions that are problematic for modern societies is mathematically a finite probability – and this can take as little as a decade.

There is much to suggest also that natural variability was the dominant factor in the 1976 to 1998 warming. The satellite records show substantial cooling in IR and warming in SW. The highest-resolution pre-ARGO ocean heat data shows consistency between the ocean heat content and ERBS. Indeed, it shows ocean heat content peaking in 1998 and then cooling to an extent that is not reversed by the moderate rise in ARGO.

This is governed by changes in cloud over longer periods than mere ENSO. Indeed, if the focus is on ENSO, it was shown in CERES that El Niño cools the planet and La Niña warms it. The decadal pattern seems more critical for the energy budget.

Here can be seen the decrease in cloud to the late 1990’s, the 1998/2001 climate shift and the lack of much change since. A moderate warming in SW consistent with the moderate increase in the ARGO period. It combines ISCCP-FD and MODIS and both are validated with SST.

Now we need 1000′s of times more computing power to find out what the question is. We get back to the nonlinearity of the system. If a change in warming driven by CO2 results in cooling by a large amount – something entirely possible, or at least not provably impossible – then sensitivity is very large and negative.

A sensitivity of γ might certainly include 1 degree or 6 degrees. However, it seems credulous in the extreme to suggest that this is a definitive range.

‘In each of these model–ensemble comparison studies, there are important but difficult questions: How well selected are the models for their plausibility? How much of the ensemble spread is reducible by further model improvements? How well can the spread be explained by analysis of model differences? How much is irreducible imprecision in an AOS?

Simplistically, despite the opportunistic assemblage of the various AOS model ensembles, we can view the spreads in their results as upper bounds on their irreducible imprecision. Optimistically, we might think this upper bound is a substantial overestimate because AOS models are evolving and improving. Pessimistically, we can worry that the ensembles contain insufficient samples of possible plausible models, so the spreads may underestimate the true level of irreducible imprecision (cf., ref. 23). Realistically, we do not yet know how to make this assessment with confidence.’ http://www.pnas.org/content/104/21/8709.full

W&H is work and heat – power over a period is energy. The terms power in and power out can’t be directly compared. It is somewhat a matter of measuring apples and oranges and not having an appropriate intercalibration. The changes in each however are more precise and instructive.

What we have is ARGO. Which shows ocean warming – so d(W&H)/dt is positive and power in – power out was positive.

Power in is measured by SORCE. In the period covered by ARGO – TSI decreased. To estimate the decrease – divide the TSI change by 4 to get the geometric projection onto the illuminated portion of the Earth. There is about a 0.25 W/m^2 average decrease in the period.

Power out is measured by 2 CERES instruments – AQUA and TERRA. To balance ARGO and SORCE we would expect an increase in CERES of about 0.75 W/m^2 in the period 2005 to 2010. Globally this was pretty much the case.
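The geometry behind "divide the TSI change by 4" and the implied budget closure can be written out explicitly. The -1 W/m² TSI decline and +0.5 W/m² ocean uptake below are illustrative round numbers chosen to be consistent with the 0.25 and 0.75 W/m² figures in the comment, not measured values.

```python
# Geometric projection: a sphere intercepts sunlight over a disc (pi r^2)
# but radiates from its full surface (4 pi r^2), hence the factor of 4.
def tsi_to_global_mean(delta_tsi):
    """Convert a change in total solar irradiance to a global-mean flux change."""
    return delta_tsi / 4.0

delta_tsi = -1.0                          # W/m^2, illustrative solar-cycle decline
delta_in = tsi_to_global_mean(delta_tsi)  # -0.25 W/m^2, as in the comment

# Budget: d(heat storage)/dt = power in - power out.
# Starting from balance, if the oceans gained ~0.5 W/m^2 while power in fell,
# power out must have changed by the incoming change minus the uptake.
ocean_uptake = 0.5                        # W/m^2, illustrative ARGO-era uptake
delta_out = delta_in - ocean_uptake       # -0.75 W/m^2

print(f"power in: {delta_in:+.2f}, required power out: {delta_out:+.2f} W/m^2")
```

The 0.75 W/m² magnitude matches the CERES figure quoted above; whether one calls that change in outgoing flux an increase or a decrease depends on the sign convention chosen.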

This one shows annual resolution ocean heat content from 1993 to mid 2003. It seems a little difficult to join earlier data and ARGO – but the decline in ocean heat content after 1998 is quite steep and it doesn’t seem likely to be reversed by the modest increase in ARGO.

Nice post Chief. “If we start with the absolute understanding that ocean and atmospheric circulations induce large changes in TOA radiative flux – the global climate equation changes considerably.”
The above key was one I was looking for.

Funny Y in your link, increases its slope as we move to the tipping point. It exhibits positive feedback until we leave Kansas, then it returns to a more normal level. I think that to accept the regime change theory, positive feedbacks are needed.

Just to make things more interesting as to what the true value of the forcing factor is, let’s have it be variable too.

Don’t bother, Willis. Mosher painted himself into a corner and Dr. Curry called him on it. Now he will have to either figure out some contortionistic rhetoric or admit he was wrong. Based on his past antics, I expect he will show up on a future thread and claim he coined the phrase “natural variability”.

I agree with Mosher on this. Sea-ice, yes, it can have long-term effects from one year to the next. Clouds/snow cover, no, there is no memory from one year to the next and too much variability that an anomaly one year is gone within months. The changes of seasons tend to erase anomalies. A drought one year can be gone the next. Can it self-sustain for a decade (without climate forcing)? No, the weather is too fickle for that. Oceans are where the internal variability lives but that is fairly random and self-canceling too if you average over a few decades.

CH, some people think clouds even drive changes in ocean circulations, but I don’t see how that could happen. I can see how oceans affect clouds, but then the oceans change to a different internal mode, and those clouds change along with them. It is not the other way around, and it is important to know what drives what in climate, otherwise you can get hopelessly confused.

‘Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.’

As I mentioned somewhere above to Dan Hughes, even weak changes in forcing are detectable and can explain the LIA, for example, and we can clearly see the effect of Pinatubo forcing. This means the interannual noise is not hiding these signals, let alone hiding the even bigger effect from CO2.

‘With this final correction, the ERBS Nonscanner-observed decadal changes in tropical mean LW, SW, and net radiation between the 1980s and the 1990s now stand at 0.7, -2.1, and 1.4 W m^-2, respectively, which are similar to the observed decadal changes in the High-Resolution Infrared Radiometer Sounder (HIRS) Pathfinder OLR and the International Satellite Cloud Climatology Project (ISCCP) version FD record but disagree with the Advanced Very High Resolution Radiometer (AVHRR) Pathfinder ERB record. Furthermore, the observed interannual variability of near-global ERBS WFOV Edition3_Rev1 net radiation is found to be remarkably consistent with the latest ocean heat storage record for the overlapping time period of 1993 to 1999. Both datasets show variations of roughly 1.5 W m^-2 in planetary net heat balance during the 1990s.’

Thank you for this thread Judith. It contains a wealth of information and discussion, especially Mosher and Dr Page, and of course the usual contributions from denizens of both sides of the AGW debate.

The head post seems to accentuate the need for disentanglement of anthro forcings from natural forcings, or is it true that such forcings are, in reality, endogenous over periods in excess of 30 years?

‘Doing so is vital, as the future evolution of the global mean temperature may hold surprises on both the warm and cold ends of the spectrum due entirely to internal variability that lie well outside the envelope of a steadily increasing global mean temperature.’

Reading a single sentence does not take you very far in understanding the nature of sensitivity in a chaotic climate. As Michael Ghil says – the answer for climate sensitivity is …. wait for it… γ in the linked diagram.

I looked at the paper, but I couldn’t figure out if climate sensitivity was γ or tan γ. It is one in the figure but the body of the paper says the other.

And what of fig 1.6 in the paper, what say you to that?

And you are right that one sentence doesn’t take you very far, but it takes you farther than ignoring it, as many who claim warming is caused by PDO and ENSO cycles (not that you are one of those) are wont to do.

“The question du siecle is How much of the warming in the last quarter of the 20th century was caused by natural internal variability?”

This is how I see it. Most anthro heating is in the northern hemisphere. The southern hemisphere temperature is mostly influenced by the surface temperature of the oceans. One of the unknown unknowns is the delay of the S hemisphere catching up with the N hemisphere’s rising temperature. The 1910–1940 temperature rise seemed to disappear sharply after 1940, but it did not. It produced the first ‘pause’ in history, starting in 1948 and extending to 1970. Now the S hemisphere resists temperature change for two reasons. First, because water is a poor conductor of heat, heat transport depends on Coriolis effects at depth and on wind- and haline-density-induced slow currents. This is not an inertial delay but a true transport delay. This difference is vital in climate models, but is largely unknown as a parameter. Second, the southern oceans’ huge heat storage means they will be slow to change – say, at a guess, about 30 years. My thesis is that the global temperature rise between 1970 and 1998 was just the second instalment of the 0.5C atmospheric rise between 1910 and 1940. See my website underlined above. Of course the IPCC missed all this because it mostly confined its investigations to post-1961.

Judith Curry posts sensibly: “Well, what can happen is changes in cloud distribution or snow/ice cover, which can change the amount of radiant energy entering/leaving the earth system.”

Willis Eschenbach posts sensibly: “The additional surprise [of the Earth’s radiative energy balance] was that neither upwelling solar nor upwelling longwave varied by more than ± 0.1% year-to-year. Remember that the variable portion of the albedo is controlled by things as ephemeral as ice and clouds and wind, all of which are changing daily … and yet every year, they average out to within a tenth of a percent. Amazing!”

How can Judith Curry and Willis Eschenbach *both* be correct?

First, Judith Curry is entirely correct that some global-scale radiation-balance changes *DON’T* average to zero over the course of a year. Massive volcanic eruptions are a well-studied case (see for example, Harries and Futyan, On the stability of the Earth’s radiative energy balance: Response to the Mt. Pinatubo eruption, 2006).

But Willis Eschenbach is entirely correct in teaching us that weather-related albedo changes *do* average to zero over the course of one year, to a staggering accuracy of order ±0.1%.

As a helpful climate-change Fermi calculation, let’s estimate whether Willis’ figures are statistically credible. If we divide the earth into “cells” and assume that the albedo of each cell varies by ±25% over the course of two days, then for the yearly variation in radiative balance to be as low as ±0.1% (the value that Willis reports), the number of independent cell-samples in a year must be of order 250^2, which in turn implies that there must be, at any given moment, about 340 active weather cells/ocean-current cells around the world.
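The Fermi estimate can be checked numerically. The ±25% per-cell fluctuation, the two-day decorrelation time, and the ±0.1% yearly figure are the comment's own assumptions; the only physics used is 1/sqrt(N) averaging of independent fluctuations:

```python
# Fermi check: how many independent "weather cells" are needed for ±25%
# per-cell albedo fluctuations to average down to ±0.1% over a year?

per_cell_sigma = 0.25      # assumed per-cell albedo fluctuation per independent sample
yearly_sigma = 0.001       # observed year-to-year variation (±0.1%)
decorrelation_days = 2.0   # assumed decorrelation time of each cell's albedo

# 1/sqrt(N) averaging: need N = (0.25 / 0.001)^2 = 250^2 independent samples/year.
independent_samples = (per_cell_sigma / yearly_sigma) ** 2   # 62,500

samples_per_cell = 365.0 / decorrelation_days                # ~182 samples per cell per year
n_cells = independent_samples / samples_per_cell

print(round(n_cells))      # ~342 cells active at any given moment
```

This reproduces the "about 340" figure, so the estimate is at least internally consistent.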

Conclusion: The “best available science” teaches us that Judith Curry, Willis Eschenbach, and James Hansen are all three of them entirely in accord, regarding the CO2-driven secular increase in the Earth’s energy balance, accompanied by (relatively small, relatively local) fluctuations in local energy balance.

Acknowledgement: It has been my great pleasure to so naturally unite your respective scientific understandings of climate-change, Judith Curry, Willis Eschenbach, and James Hansen! Aye, Climate Etc lassies and laddies — that’s the enlightening power of “best available climate-change science” for yah!

The downward trends in POGA-C before 1975 and after 1990 cancel the upward trend between, so I conclude that natural variability cancels itself out over this whole period, and you have to be selective to either see a net rise or a net loss in a sub-period. It makes sense that given long enough (60 years) here, the natural variability cancels out. POGA-C shows the same downward trend after 1990 as POGA-H minus HIST, so I don’t think these two ways of looking at it are telling us anything different. Some care is needed in looking at POGA-C because the specified POGA area must have a global warming signal in it too, so it is not a clean natural variability.

Also I don’t think it is correct to say that any of these are isolating ENSO because POGA-H and HIST both would have ENSOs in different phases, which would lead to short-term variation between them, so their difference is more likely a longer term thing like PDO that POGA-H represents and HIST apparently does not. My conclusion is that PDO is the main natural variability being tested here, not ENSO.

‘This study uses proxy climate records derived from paleoclimate data to investigate the long-term behaviour of the Pacific Decadal Oscillation (PDO) and the El Niño Southern Oscillation (ENSO). During the past 400 years, climate shifts associated with changes in the PDO are shown to have occurred with a similar frequency to those documented in the 20th Century. Importantly, phase changes in the PDO have a propensity to coincide with changes in the relative frequency of ENSO events, where the positive phase of the PDO is associated with an enhanced frequency of El Niño events, while the negative phase is shown to be more favourable for the development of La Niña events.’ http://onlinelibrary.wiley.com/doi/10.1029/2005GL025052/abstract

The Eschenbach article – which I finally got to by Googling – seems unmitigated blog science fantasy. Interannual variabilities are an order of magnitude greater than 0.1% of the incident TSI, especially in the tropics.

AMO has a quasi-cycle of 50-90 years. PDO has a 50-70 year cycle, reconstructed historically. The THC takes 1,000 years to cycle; it is presently in the warm mode. Incidentally, the Medieval warm period occurred circa 1000 AD. Given these long-term natural variations, it is entirely possible that observed global temperature trends since 1850 are largely driven by them.

The answer to the question “Why do Alarmist climate scientists who embrace the ‘radiation-only’ account of global warming not state their evidence in joules rather than temperatures?” is: because if they talked about joules, the public would not have raised an eyebrow at talk of global warming. In Alarmist climate science, nothing is more important than what the public can be made to believe.

The key finding is that the tropical regions (in contrast to extra-tropical ones) restore the degree-day integral in response to such events.

This implies a strong non-linear negative feedback. Since climate science largely seems intent on representing (or approximating) most of climate as linear system responses, this NON-linear behaviour leaves a residual tropical “input”.

The need to introduce ENSO as an additional driver comes from the deviation of the non linear reality from the assumed linear modelling.

While there are enormous theoretical and computational advantages to modelling as additive linear systems, somewhere the non linearity in the tropics and the approximation has to be recognised explicitly and the resulting residual behaviour needs to be viewed in that light.

The non-linearity is very likely a result of tropical storms, whose self-maintaining nature leads to an effect that is not a linear response to the triggering conditions. These storms are a key element of tropical climate, yet they are below the resolution of climate models and are substituted by parametric assumptions/guesses at cloud cover amounts.

Now that interest is focussing on the region, it is time this was given more rigorous and appropriate modelling efforts.

I wandered non-linearly as a cloud
That floats on high o’er warming ills,
When all at once I saw a crowd,
A host, of shrunken climate shills;
Baying “Feedback!”, senselessly,
Ignoring natural variability.

Are there any previous suggestions (before mine) that consider how solar variation could skew the balance between El Ninos and La Ninas for centuries at a time to produce MWP, LIA et al?

Given that the recent pause coincided with a quieter sun and the past warming coincided with an active sun that issue is at the heart of this thread especially since a similar relationship goes back to at least the MWP going by Jetstream tracks at the time as revealed by ships logs and contemporary weather reports.

Has anyone else ever suggested that that is what is going on ?

Outside the major transitions between ice ages and interglacials that would account for the millennial scale climate cycling that we see in response to solar variability.

Yeah. But solar variation alone doesn’t explain all the wiggles. Different ocean basins have different time constants that set up the internal oscillations, which decay as a setpoint is reached. You need to know the setpoint, 4 C / 335 W m^-2, which results in the “sensitivity” that matches the specific heat capacity limit of the atmosphere in the tropics. Then you know enough to realize there is a lot more to learn.

Stephen, I don’t disagree that you have the concept; the issue is isolating a primary settling response and the rough limits in a meaningful way. Solar forcing, for example, averaged TOA over the surface doesn’t have the magnitude required to drive most of the longer-term pseudo-oscillations. At the sub-surface, solar provides enough energy and a large enough gradient to drive most of the oscillations. To show that, you have to get people to understand why they have to consider different layers and different frames of reference. When you include that differential forcing with mechanical mixing, tides and Coriolis, it is pretty easy to explain the 150-year settling times that Toggweiler et al. at the GFDL have modeled for years.

It is not like the work has not been done, it is just getting people to recognize the people with the best understanding of the system dynamics. You do that by isolating a system response and making a reliable prediction with reasonable uncertainty.

@WebHubTelescope (@WHUT) | September 3, 2013 at 11:29:
“Unlike some alarmists, skeptics don’t make something up just so they can say, ‘This is the best we have,’ when we know it isn’t good enough.”

You aren’t going to pull a fast one. Your little helpers are people like Chief who repeatedly assert that the world will cool for the “next decade or three”. Or Cappy, who creates all these bizarre concordances between sets of data to prove who knows what.

My point is that your team lacks any skill at all, but think they do, and then they make a mess of the entire situation. This is what the deniers and fake skeptics are all about. FUD

Not really.
The side of science has laws of physics that they have applied over the years such as Planck’s law, the heat equation, Stefan–Boltzmann law, Wien’s law, Henry’s law, the diffusion equation, and on and on.

These laws build on each other and they all have to make sense, otherwise an artifice will collapse. Nothing is close to collapsing from the realist side of climate science. On the other hand, krank science constantly goes through collapses and revivals, with the most stubborn krackpots holding on to their theories like a dog with a bone.

But the simple laws alone do not model climate. They have to be combined in complex ways. Simple laws govern an internal combustion engine. But a pile of parts do not an engine make. We don’t understand how the climate “engine” is constructed. That is the problem. The problem is not, as you would like to imply, that we don’t understand fundamental physics.

You are calling out the wrong area for competition, putting the cart before the horse, so to speak. First, you have to have a good concept of how something works. Only then do you model it. Writing differential equations based on a wrong-headed concept is worse than no model at all, since it misleads at best and distracts from the truth, which is worse.

“And better theoretical frameworks are needed for understanding climate sensitivity to external forcing in a system with substantial natural internal variability.”
_ _

For sure the conceptual framework is inadequate. Too much of what everyone says about “internal” & “sensitivity” is wholly at odds with observations. I’m interested in having a dialogue about this with someone who’s a climate “science” “insider”. In landscape ecology there was a conceptual framework paradigm shift when it was realized that pre-existing conceptual frameworks were totally & thoroughly inadequate. The climate discussion appears unable to advance due to misguided preconceptions tied to a primitive conceptual framework that cannot be reconciled with reality. Given the constraints on my time, I barely know how to begin this dialogue (which will have to be efficient), so I’m just stating intent. Let me start like this: How on Earth is it justified to call it “internal”?? It makes no sense whatsoever in light of observation.

Actually, internal denotes that it is unforced variability, associated with nonlinear, chaotic dynamics of the coupled ocean-atmosphere system. Whether there is some solar or other extraterrestrial forcing say for the PDO remains unknown (Scafetta has hypothesized external forcing). Even if the variation is truly internal, it may change the earth’s energy balance through surface albedo and cloud changes.

I like the general approach. I have a new paper that is hopefully coming out soon that links together a lot of the known modes of variability. We tried to identify a solar signal in all this, but it is really ambiguous because of the solar forcing uncertainties.

Thanks for the reply Judy, but the second sentence seems to contradict the first. Are you saying that it is assumed to be unforced “internal” despite that this is not known to be true?

I thought the main thrust of the IPCC ‘consensus’ was that internal variability is stochastic variation with no long-term effect, and thus it does not really “matter” if models don’t correctly model it; it is sufficient to add a bit of noise to make it look climate-like.

Current models seem to effectively be a linear relaxation response to (CO2+volcanism)*3 + noise.
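To make the "linear relaxation + noise" caricature concrete, here is a minimal one-box sketch of the kind of model the comment describes. All parameter values (heat capacity, feedback parameter, forcing) are my own illustrative assumptions, not taken from any GCM:

```python
import random

# One-box linear relaxation with stochastic noise:
#   C * dT/dt = F(t) - lam * T + noise
# Illustrative parameters only.

C = 8.0        # effective heat capacity, W yr m^-2 K^-1 (assumed)
lam = 1.25     # feedback parameter, W m^-2 K^-1 (~3 K per 3.7 W/m^2 forcing)
F = 3.7        # step forcing, W m^-2
dt = 0.1       # time step, years
n_steps = 2000 # 200 years

random.seed(0)
T = 0.0
for _ in range(n_steps):
    noise = random.gauss(0.0, 0.05)      # small stochastic "weather" term
    T += dt * (F - lam * T + noise) / C  # forward Euler step

print(T)  # relaxes toward the equilibrium F/lam ~ 2.96 K
```

The e-folding time is C/lam ≈ 6.4 years, so after 200 years the temperature sits essentially at F/lam regardless of the noise; this is what makes such models "linear relaxation + noise" in behaviour.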

ENSO is almost certainly unforced. There are almost certainly longer-term unforced variations. Whether the PDO in particular is forced/unforced is not known. My hypothesis is that these are truly internal, but external forcing can project onto the internal modes, help change phase/amplitude, synchronize, etc. We don’t know, all this is at the knowledge frontier

All the ocean cycles have to be forced. Imagine if the Sun goes dark. The Earth would freeze and there would be no ocean cycles at all. If the Sun turns back on, then there would be ocean cycles once again. The use of “unforced” isn’t logical to me. If you mean, instead, that some of the cycles will average to zero over some time period, then why not just state it that way?

Forcing is a myth. However, as it’s usually (roughly) defined, it refers to the effect of anomalies in external factors, that push a system from its “normal state”. In complex non-linear systems, that “normal state” can (usually will) involve a lot of variation that’s not driven by such external anomalies. Such variation is typically called “unforced”. By contrast, when variation is seen as resulting from an external anomaly, it’s called “forced”.

I call it a myth because the basic assumptions involved in shoehorning descriptions of complex non-linear systems into the Procrustean Bed of linear assumptions are invalid, and well known to be invalid. Words such as “forcing”, “equilibrium”, “feedback”, etc. are used as useful metaphors, and don’t really mean what they do in simpler, more linear, systems.

These discussions make me think that mathematical descriptions of each person’s hypothesis would clarify things, because the verbal analysis seems plagued by equivocations of meaning. (I’ve seen the same thing in the economics of industrial organization where there are long-standing disputes about the proper definition of “barriers to entry” even among people who all agree on the appropriate mathematical model and its predictions.)

Is “internal” a well-defined term? Apparently, per Prof. Curry, it is “unforced variability, associated with nonlinear, chaotic dynamics of the coupled ocean-atmosphere system.” But couldn’t there be unforced variability present in a periodic or other non-chaotic system, such as a pendulum? It seems that the answer to this question must be yes, and so the “associated with non-linear, chaotic dynamics” phrase is not actually part of the definition of “internal” but rather an empirical statement about how the earth happens to work. So we’re left with “unforced” as the sole defining criterion for internal versus external variability. It seems then, that the internal vs. external terminology is superfluous when what we really mean is forced vs. unforced.

OK, but a) “forced” as used in climate science discussions seems to mean four different things and b) people don’t seem to agree on whether something can both be forced and forcing.

On a), sometimes “forced” means 1) “proximately caused by a change in the earth’s energy balance,” sometimes 2) “ultimately caused by a change in the earth’s energy balance,” sometimes 3) “proximately caused by something which is itself unaffected by climate,” and sometimes 4) “ultimately caused by something which is itself unaffected by climate.” (The need to distinguish the first two from the last two arises because i) some hypothetical ultimate causes might affect the climate by changing how energy is used and where it goes rather than how much is trapped, and ii) changes in the energy balance may be triggered by variables that themselves are caused by energy-balance shifts, which is what I think people mean by “feedbacks” in the climate system.)

In versions 2) and 4), except for “unforced forcers” such as volcanos, the sun, and human CO2 combustion, all variables are “forced,” hence “internal.” So under these versions, if we had the causal chain (more CO2 leads to increased air and ocean temp leads to more shiny clouds leads to higher albedo leads to less insolation), then insolation, as well as temperatures, clouds, and albedo would be “forced” or “internal” variables. Under versions 1) and 3), however, looking at the same chain, only the temperatures would be called “forced;” insolation, for example, would still be “unforced” because it is proximately driven by albedo, not energy balance or some unforced variable.

On b), consider this quotation from Prof. Curry, “Even if the variation is truly internal, it may change the earth’s energy balance through surface albedo and cloud changes.” It will confuse people who believe that a forced variation of any kind can never be a forcing itself. They believe that the “credit” for the forcing should go back up the causal chain to the ultimate unmoved mover.

So we have a problem where some people want to combine something like version 2) or 4) with the view that nothing can be both forced and forcing. That leads them to simply discount what Prof. Curry is saying because the words don’t make sense to them. I believe that Prof. Curry is better characterized as falling into version 1) or 3) as well as the forced-is-compatible-with-forcing linguistic community. But I’m not sure that there is an actual disagreement about the math or the concepts–it seems like there is a substantive empirical judgment dispute unfortunately mixed together with an idle linguistic dispute.

We should also not forget Sudden Stratospheric Warming. So far as I am aware, there is no explanation as to what causes this phenomenon. And unlike other things like the PDO, it ONLY causes cooling; there is not a reverse phase which causes warming. Again, so far as I am aware, we have no idea whether the frequency and intensity of this phenomenon, which has been observed for only around 50 years, is typical for “normal” conditions, whatever that means.

Ryan, that is generally true: variability is an indication of inefficiency. The problem is that variability improves mixing efficiency, so the deep oceans take up more energy during higher variability while the surface, close to the atmosphere, loses more energy. One system feeds the next. Tamino is wrong because variability does not have a “uniform” cooling effect; it is much more complex than that.

Stephen, “The primary settling response is the behaviour of the jets (zonal or meridional) and the latitudinal positions of the climate zones.”

No, the primary settling response would be related to the highest specific heat capacity, the oceans. Changes in the jet stream, SSW events, the Brewer-Dobson circulation, etc. are responses. A warmer tropical ocean increases the height of the Hadley cells, increasing both loss to space and heat transfer poleward. The ITCZ shifts north or south, forcing the westerlies, jets and precipitation zones to shift.

However, I see that as a modulation of the initial top down solar influence emanating outward from the configuration of the polar vortices.

The sun alters those temperature gradients around the poles by working on the ozone creation / destruction balance differently from above the equator which alters the vertical temperature profile of the atmosphere and particularly the gradient of tropopause height between equator and poles.

That then changes global cloudiness and albedo.

In a sense the entire global air circulation is a complex combined response to ANY forcing whether internal or external and its function is to maintain energy balance at ToA and so enable the atmosphere to be retained.

The air circulation changes as necessary to balance incoming solar shortwave with the rate of energy release from the oceans, and as you realise, the latter is variable.

“ENSO is almost certainly unforced. There are almost certainly longer-term unforced variations. Whether the PDO in particular is forced/unforced is not known”

I would go as far as saying that ENSO is internal (unforced) and arises from uneven heating either side of the equator due to the clouds of the ITCZ being mostly in the northern hemisphere. That is a consequence of the landmass distribution.

I would say that the PDO (or rather the Pacific Multidecadal Oscillation) of around 60 years is possibly externally forced by lunar effects on the oceans.

Then, crucially, I would say that the background trends on millennial time scales are solar induced hence the upward stepping of temperatures from one positive PMO phase to the next as noted by Bob Tisdale amongst others.

The problems of diagnosis then become pronounced because solar activity can vary significantly within the solar millennial cycle such as the period of more active sun in the 1700s which was associated with a warmer spell untypical of the LIA as a whole.

However I suggest jet stream behaviour as the best indicator for short term diagnostics because that is affected by the intensity of the polar vortices and they seem to vary with solar activity on a short time scale.

It is no coincidence that the recent record negative AO was associated with the recent very low level of solar activity.

The secret must lie in the stratosphere especially above the poles so as to produce the observed shifting jets and climate zones.

To get an equatorward shift when the sun is inactive there must be relative warming of the polar stratosphere as compared to the equatorial stratosphere.

We can see from sudden stratospheric warming events that it has to be that way round. Such events mimic the longer term solar effects on the global air circulation whereby the warmer stratosphere pushes polar air masses equatorward.

So, watch jet stream behaviour to determine the current direction of temperature change and the more extreme that behaviour the faster the trend.

We are currently on the cooling side of thermal balance due to the solar influence on global cloudiness and albedo.

How important is the influence of wind on ocean albedo? If I understand the paper of Jin et al. (2004) in Geophys. Res. Lett. correctly, at high solar zenith angles differences in waves caused by wind can alter the ocean albedo by some 0.1 or so. OTOH, areas where the solar zenith angle is high have lower weight in global albedo, as they get less power per unit area. Are there any long-term wind speed/direction trends or cycles that could make such changes matter?

It occurs to me that I don’t remember reading any discussion of the paper last year by Karnauskas of Woods Hole. The authors did 1000-year climate model runs and found a 100-year oscillation pattern in the Pacific. They concluded that “Unforced variability and trends on the centennial time scale therefore need to be addressed in estimated uncertainties, beyond more traditional signal-to-noise estimates that do not account for natural variability on the centennial time scale.” The oscillation affects temperature and precipitation around the globe. For a news release on their results see: http://www.whoi.edu/oceanus/feature/pco

A couple of quotes from the paper:
“Simulated internal centennial variability yields overall changes in the equatorial Pacific zonal SST gradient of roughly half a degree Celsius. Such changes are equivalent to trends that have been estimated over the modern instrumental era since 1880.”

“If nature exhibits such strong natural variability of tropical Pacific SSTs on centennial time scales, then assumptions that the observed trend over the past century to a century and a half is a response to radiative forcing are tenuous.”

A mechanism contributing to centennial variability of the Atlantic Meridional Overturning Circulation (AMOC) is tested with multi-millennial control simulations of several coupled general circulation models (CGCMs). These are a substantially extended integration of the 3rd Hadley Centre Coupled Climate Model (HadCM3), the Kiel Climate Model (KCM), and the Max Planck Institute Earth System Model (MPI-ESM). Significant AMOC variability on time scales of around 100 years is simulated in these models. The centennial mechanism links changes in the strength of the AMOC with oceanic salinities and surface temperatures, and atmospheric phenomena such as the Intertropical Convergence Zone (ITCZ). 2 of the 3 models reproduce all aspects of the mechanism, with the third (MPI-ESM) reproducing most of them. A comparison with a high resolution paleo-proxy for Sea Surface Temperatures (SSTs) north of Iceland over the last 4,000 years, also linked to the ITCZ, suggests that elements of this mechanism may also be detectable in the real world.

I don’t understand how they reconcile this with any claimed 95% certainty.

Climate scientists have attributed changes in the westerlies over the past 50 years to the warming from higher CO2. The changes predicted by climate models in response to higher CO2 are fairly small, however, and tend to be symmetric with respect to the equator. The observed changes have been quite asymmetric, with much larger changes in the Southern Hemisphere than in the north (3). The results of Anderson et al. (7) suggest that in the past, the westerlies shifted asymmetrically toward the south in response to a flatter temperature contrast between the hemispheres. The magnitude of the shift seems to have been very large. If there was a response to higher CO2 back then, it paled in comparison. Changes in the north-south temperature contrast today are not going to be as large as they were at the end of the last ice age, but even small changes could be an additional source of modern climate variability.

That is the Law Dome CO2 with the Eastern (Marchitto) and Western (Stott) Pacific and the Indo-Pacific warm pool (Oppo). Marchitto has a paper on solar dynamics and ENSO during the mid-Holocene. As best I can tell, CO2 is as or more dependent on solar forcing in the SH as it is on temperature, and ENSO is mainly dependent on solar. What do you think?

I have a hard time thinking that CO2 physics can be such a variable influence: a forcing at one time and a response at another, depending on circumstances?

When I have seen a system like this, i.e. variability tied to a particular agent such as CO2, the most important components of the system were the ones impacted, not the CO2 per se. Sometimes the system is quite sensitive to small changes in CO2, and other times sluggish, almost irrelevant, as the levels of CO2 rose or fell outside a very narrow band. It seems that outside a very specific band, and this may be true of the atmosphere and oceans, CO2 is a minor player. Within that specific band, CO2 is the prime regulator, but its effects can be overcome, and CO2 varying widely and wildly doesn’t amount to a hill of beans.

If I translate what I observe in one system and apply it to atmosphere and oceans, CO2 as a control knob becomes a “yes, but..” and a “it all depends on the other players” who are more important and objects of interest and study.

RiH008, yeah, with so much going on it is hard to tell who is doing what. Solar impact appears to be underestimated quite a bit, with CO2 overestimated by about the same amount. I think the GFDL guys are getting ready to kick butt and take names with their ocean models.

Why set climate sensitivity to 3 K?
All models, nonlinear or stochastic, must be based on scientific deductions supported by experimental evidence.
Climate sensitivity in Charney’s report (1979) was based on the 4·σ·T³ formula.
Has anyone asked Gregory Flato or Jochem Marotzke (the lead authors of IPCC AR5 WG1 Chapter 9, on the evaluation of climate models) on what scientific evidence they base their models for obtaining the actual values of their climate sensitivities?
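For context, the 4·σ·T³ expression is the Planck restoring term obtained by differentiating the Stefan–Boltzmann law F = σT⁴. A minimal sketch of the arithmetic; the 255 K effective radiating temperature and the ~3.7 W/m² forcing for doubled CO2 are standard textbook values, not figures taken from the Charney report itself:

```python
# Planck (no-feedback) response implied by the 4*sigma*T^3 formula.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0     # Earth's effective radiating temperature, K (assumed)

# Differentiating F = sigma*T^4 gives the Planck restoring strength:
planck_lambda = 4 * SIGMA * T_EFF**3   # roughly 3.8 W m^-2 K^-1

# No-feedback warming for a doubling of CO2 (~3.7 W/m^2, assumed):
delta_F = 3.7
delta_T = delta_F / planck_lambda      # roughly 1 K

print(f"Planck feedback: {planck_lambda:.2f} W/m^2/K")
print(f"No-feedback 2xCO2 sensitivity: {delta_T:.2f} K")
```

With no feedbacks this gives roughly 1 K per doubling; the 3 K figure comes from multiplying in assumed amplifying feedbacks, which is where the disagreement lies.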

Ms Curry, the notion of unforced variations is a nonsense on which most of the IPCC modelling is based. Noise is simply a signal that we don’t understand. What the modellers didn’t bother to investigate or understand (principally the effects of solar variations on climate) they simply ignored. They were asked by the IPCC not to investigate climate but to investigate anthropogenic effects on global warming. To keep the grants and jobs coming, that is what they did, and built models which, lo and behold, regurgitated the assumptions of high climate sensitivity to CO2 that they had built in.
Furthermore, the modelling approach is inherently of no value for predicting future temperature with any calculable certainty, because of the difficulty of specifying the initial conditions of a large number of variables with sufficient precision prior to multiple iterations. There is no way of knowing whether the outputs, after the parameterisation of the multiple inputs, merely hide compensating errors in the system as a whole. The IPCC AR4 WG1 science section actually acknowledges this fact. Section 8.6 deals with forcings, feedbacks and climate sensitivity; the conclusions are in section 8.6.4, which deals with the reliability of the projections. It concludes:
“Moreover it is not yet clear which tests are critical for constraining the future projections; consequently a set of model metrics that might be used to narrow the range of plausible climate change feedbacks and climate sensitivity has yet to be developed.”
What could be clearer? The IPCC in 2007 said itself that we don’t even know what metrics to put into the models to test their reliability, i.e. we don’t know what future temperatures will be and we can’t calculate the climate sensitivity to CO2. This also begs the further question of what mere assumptions went into the “plausible” models to be tested anyway.
This quoted statement was necessarily ignored by the editors (censors) who produced the AR4 Summary for Policymakers, where predictions of disaster were illegitimately given “with high confidence,” in complete contradiction to several sections of the WG1 science section where uncertainties and error bars were discussed. Almost all the world’s politicians, media and eco-activist organisations uncritically accepted and used these predictions as infallible guides to the future and acted on these delusions of certainty, which are now, six years later, seen to be just that: delusions.
In summary, the projections of the IPCC and Met Office models, and all the impact studies which derive from them, really have no useful place in any serious discussion of future climate trends and represent an enormous waste of time and money.

Capt. D: “The observed changes have been quite asymmetric, with much larger changes in the Southern Hemisphere than in the north (3). The results of Anderson et al. (7) suggest that in the past, the westerlies shifted asymmetrically toward the south in response to a flatter temperature contrast between the hemispheres.”

A provisional look at power spectra of Arctic ice and the meridional component of west Pacific trade winds shows notable similarities.

Steve is taking a more atmospheric approach, which is fine, but I don’t think any approach will do much good without an exceptionally good ocean model. A good ocean model can use the ocean paleo data, which is getting better, to explain some of the hitches where atmospheric models are more limited by instrumental records.

Although mechanisms underlying the current pause in surface warming remain to be sorted out, the role of internal climate variability in the warming since 1975 is almost certainly minimal. Conversely, the role of external forcing (changes in anthropogenic GHGs and aerosols, or in natural volcanic and solar forcings) can be shown to be by far the dominant factor in the warming. The evidence, while reinforced by GCM simulations reported in AR4, does not depend on these models. The relevant physics applies to the post-1950 attribution of climate warming to anthropogenic forcing as well, but I’ll focus here on the post-1975 data.

The following links will be useful in understanding the basis for these conclusions: Ocean Heat Content, and Heat Uptake and Internal Variability.
As seen in the first article, ocean heat content (OHC) has increased by about 10^23 joules since 1975. To assess the relevance of this increase to the relative roles of internal vs forced variability, readers should consult the second article, by Isaac Held, for a detailed analysis. For anyone who doesn’t wish to wade through Held’s mathematics, however, I’ll present here a simplified but reasonably accurate summary.
The quantity of heat stored in the oceans dwarfs that stored elsewhere in the climate system. Therefore, any substantial quantity of heat that warms the surface after transfer from somewhere else within the system must come from the ocean; there’s not enough internal heat anywhere else in the system to produce more than trivial warming over any extended interval. To summarize this element of the argument: surface warming from internal variability is almost entirely equivalent to heat subtracted from the ocean.
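To put rough numbers on that, here is a back-of-envelope comparison of the two reservoirs; the masses and heat capacities are round textbook figures, not values taken from the linked articles:

```python
# Rough comparison of heat reservoirs, illustrating why sustained surface
# warming from internal variability must draw on the ocean. Round numbers;
# a back-of-envelope sketch, not a budget.
ocean_mass = 1.4e21   # kg (approx. total ocean mass)
ocean_cp = 4000.0     # J kg^-1 K^-1 (seawater, approx.)
atmos_mass = 5.1e18   # kg (approx. total atmosphere mass)
atmos_cp = 1004.0     # J kg^-1 K^-1 (dry air at constant pressure)

ocean_capacity = ocean_mass * ocean_cp   # total J per K of ocean warming
atmos_capacity = atmos_mass * atmos_cp   # total J per K of atmospheric warming
ratio = ocean_capacity / atmos_capacity  # ocean holds ~1000x more heat per K

print(f"Ocean/atmosphere heat-capacity ratio: ~{ratio:.0f}")

# The ~1e23 J OHC rise since 1975 would warm the full ocean by only a few
# hundredths of a degree, but the whole atmosphere by roughly 20 K if the
# same energy were delivered there instead.
print(f"1e23 J spread over the atmosphere: {1e23 / atmos_capacity:.0f} K")
```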

On the other hand, surface warming from external forcing is not equivalent to heat added to the ocean. The reason for this is that when surface temperature rises from external forcing, only a fraction of the added heat enters the ocean. The rest (more than half) is radiated away to space as a result of the temperature rise without ever adding to ocean heat content. In other words, the same extent of warming adds much less heat to the ocean than is lost from the ocean during ocean-to-surface transfer.

What this means is that if external and internal heat transfer contribute equally to surface warming, the ocean will exhibit a substantial loss of heat – its OHC will decline significantly. By the same reasoning, if OHC does not change during a warming interval, most of the warming must have been externally forced. A fortiori, if OHC actually increases, as it has since 1975 (and since 1950), the external forcing contribution must be even more dominant.
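A toy bookkeeping sketch of that sign argument; the 0.4 ocean-uptake fraction and the unit heat amounts are purely illustrative stand-ins for the "less than half" figure above, not measured quantities:

```python
# Toy bookkeeping for the argument above. Surface warming is split between a
# forced part and an internal (ocean-to-surface) part. Forced warming adds
# only a fraction of its heat to the ocean (the rest is radiated to space);
# internal warming removes heat from the ocean roughly one-for-one.

def ohc_change(forced_heat, internal_heat, ocean_uptake_fraction=0.4):
    """Net ocean heat change (arbitrary units) for given contributions.

    forced_heat: heat associated with externally forced surface warming
    internal_heat: heat moved ocean -> surface by internal variability
    ocean_uptake_fraction: share of forced heat ending up in the ocean
                           (illustrative; <0.5 per the text)
    """
    return ocean_uptake_fraction * forced_heat - internal_heat

# Equal contributions: the ocean loses heat on net.
print(ohc_change(1.0, 1.0))   # negative
# Forcing dominant: OHC can rise, as observed since 1975.
print(ohc_change(1.0, 0.2))   # positive
```

The sign logic, not the particular numbers, is the point: an observed OHC rise is hard to square with a large internal contribution.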

Is there any physical mechanism that would eliminate this disparity and allow for a larger or even a dominant role for internal variability? In principle yes, but no such mechanism is plausible or supported by theory or physical evidence. What it would require is a climate sensitivity to ocean-to-surface warming enormously greater than the climate sensitivity to external forcing. Current estimates place a likely upper bound on forced climate sensitivity of about 4.5 C, and it would require a climate sensitivity to internal heat transfer well above 10 C, and probably above 20 C, to reduce or eliminate the disparity. Physical evidence fails to support such a mechanism; indeed, climate sensitivity estimates based on some internal phenomena such as ENSO tend to be lower than those estimated for CO2-based external forcing.

The above only applies to intervals during which we can be confident of the sign and magnitude of changes in OHC. Shorter intervals, even as long as a decade or slightly longer, are more problematic with our current methods of OHC assessment. It is likely that there have been subintervals since 1950 or 1975 when internal variations accounted for most temperature change. Over the longer intervals, the average role of internal variability has been very small.

Readers can refer to the linked articles and form their own judgments.

I do recall vaguely some of your posts from a long time ago, and that I thought them very informative. One in particular involved aliasing, which was a phenomenon I wasn’t very familiar with. In the case of internal climate variability, I think the evidence speaks for itself. It’s not unequivocal – that’s rare in science – but it seems pretty convincing.

Just to elaborate a little bit, I haven’t posted here much recently because of other commitments and because the atmosphere here is sometimes too adversarial. However, regarding the internal variability/forcing ratio, I probably should emphasize that the principles I described above didn’t originate with me. Most of the concepts can be found, in far more mathematically rigorous fashion, in Isaac Held’s article (my second link). Anyone with questions about the conclusions would probably do better to visit his blog. If he is approached respectfully, he will certainly respond in a respectful and thoughtful manner, and with more credibility than I can muster, given his expertise in this area. The issue is important, which is why I brought it up, but a visit to Held’s blog will be a more efficient way for you or other knowledgeable individuals to delve into it further.

Re Ghil’s article you refer to below, I found it fascinating. I was thrown by an apparent typo early on regarding the insolation parameter, but the material certainly seems worth pursuing in more depth than I’ve done so far. I should add that the described oscillations don’t negate the principle that external forcing and internally induced warming do not balance out in terms of ocean heat content change.

Held as well proceeds from the false assumption that internal variability does not change global energy dynamics. It seems that it should be difficult to maintain the delusion given the evidence. There seems, however, to be a propensity for cognitive dissonance.

‘In summary, although there is independent evidence for decadal changes in TOA radiative fluxes over the last two decades, the evidence is equivocal. Changes in the planetary and tropical TOA radiative fluxes are consistent with independent global ocean heat-storage data, and are expected to be dominated by changes in cloud radiative forcing. To the extent that they are real, they may simply reflect natural low-frequency variability of the climate system.’

We know of course that there is low frequency variability in patterns of ocean and atmospheric circulation. That these cause changes in cloud cover and result in decadal patterns of global warming and cooling.

Oh…’and µ is an insolation parameter, equal to unity for present-day conditions. To have a closed, self-consistent model, the planetary reflectivity or albedo and grayness factor m have to be expressed as functions of T; m = 1 for a perfectly black body and 0 < m < 1 for a grey body like planet Earth.’

m is a greyness factor, obviously, and µ the insolation factor, as each is referred to consistently elsewhere in the text.

If you are going to point to a patently obvious typo – at least have the courtesy to explicate. It should have thrown anyone for all of a second and a half.

‘Suppose that most of the global mean surface warming in the past half century was due to internal variability rather than external forcing…’

Suppose that ocean and atmospheric circulation variability causes large change in the global energy budget and the whole rotten edifice collapses at the first assumption.

‘Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.’

I prefer data to opinion. The almost exact 1st order differential global energy equation is:

d(W&H)/dt = power in – power out

W&H is work and heat; power integrated over a period is energy. The terms power in and power out can’t be directly compared; it is somewhat a matter of measuring apples and oranges without an appropriate intercalibration. The changes in each, however, are more precise and instructive.

What we have is ARGO, which shows ocean warming – so d(W&H)/dt is positive, and power in – power out was positive.

Power in is measured by SORCE. In the period covered by ARGO – TSI decreased. To estimate the decrease – divide the TSI change by 4 to get the geometric projection onto the illuminated portion of the Earth. There is about a 0.25 W/m^2 average decrease in the period.
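The divide-by-4 step is just sphere-versus-disc geometry (sunlight is intercepted over the cross-section πR² but averaged over the full surface 4πR²); a one-line check with an illustrative 1 W/m² TSI change:

```python
# Globally averaged forcing change from a TSI change: the Earth intercepts
# sunlight over its cross-section (pi*R^2) but the average is taken over
# the whole sphere (4*pi*R^2), hence the factor of 4.
tsi_change = -1.0                     # W m^-2 at normal incidence (illustrative)
global_avg_change = tsi_change / 4.0  # -0.25 W m^-2 globally averaged
print(global_avg_change)
```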

Power out is measured by 2 CERES instruments – AQUA and TERRA. To balance ARGO and SORCE we would expect an increase in CERES of about 0.75 W/m^2 in the period 2005 to 2010. Globally this was pretty much the case.

This one shows annual resolution ocean heat content from 1993 to mid 2003. It seems a little difficult to join earlier data and ARGO – but the decline in ocean heat content after 1998 is quite steep and it doesn’t seem likely to be reversed by the modest increase in ARGO.

Fred, no, Held knows that a change in OHT can cause both warming and increases in OHC, and that this can occur from internal variability. He knows it, the models show it, and it makes common sense. It isn’t a question of whether it can happen. The question to answer is whether it has happened. Reconstructions of the Gulf Stream transport say it has.

Hi Steven – Other than minor transient variations, I’m not aware of evidence that substantial parallel increases in both OHC and surface temperature can result from internal variability. If you have a reference, I’d like to read it. Held’s article suggests otherwise, so I’m not convinced that “he knows it”. His main point is the opposite.

Steven – Thanks. Held apparently changed his perspective between 2005 (your first link) and 2011 (the paper I cited). In 2005, he acknowledged the theoretical possibility you refer to, although stating he saw “no evidence for anything of the sort”. By 2011, he had judged that the kind of horizontal heat redistribution mechanism leading to the hypothesized albedo changes would only be seen under conditions where the forced component dominates – i.e., most of the warming would still be externally forced even if the transport changes were inducing warming feedbacks with the potential to add heat to the ocean. In fact, back when I read his 2011 article, I remembered his 2005 comment, and was struck by the fact that he now saw the theoretical possibility as one very unlikely to be realized, at least during the past half century (see below).

Your other two links are consistent with the mechanism you cite, and thanks for including them. They don’t, however, state the direction in which OHC is changing. After all, increased transport of warm ocean water poleward would lend itself to OHC loss, and it’s unclear this would be outweighed by any feedback that tended to put heat back into the ocean.

As far as I know, there is no evidence for a substantial increase in OHT poleward since 1950 or 1975, and so I tend to see all the above as not very relevant to the role of internal variability during those intervals.

Fred, I don’t see anything in his post about changes in OHT. Perhaps he was referring only to internal variation regarding exchanges between the atmosphere and the ocean. It is an increase in OHT unless that model is completely backwards from the rest. As the reconstruction shows, there was still acceleration of OHT as of 1950. I suppose it could have suddenly stopped at exactly that time, but even if it had, how long before the changes in albedo came to equilibrium with the new level of OHT?

Steven – Among Held’s relevant statements are these. The first refers to the past half century. The second one refers to a control model run examining internal variability in the absence of external forcing.

“If one accepts that the forced response dominates, one can consistently free up the horizontal structure of the internal component, potentially producing a dramatically different, and possibly much weaker, radiative restoring for the internal component”…

“Heat is being lost from the oceans to space in this period, but at a much slower rate than in the forced response to CO2, due in large part to positive feedback from polar ice and snow (and low clouds over the oceans) in the model. ”

You’re right that with no data on OHT since 1950, I can’t say it hasn’t changed, although your second reference suggests that OHT changes were more important before 1950 than after 1980. In either case, though, Held’s point seems to be that this mechanism would reduce but not reverse OHC loss from internal variability-based warming.

Fred, my reference doesn’t suggest anything after 1950. It ends in 1950. ARGO says there is no current trend, however. No current trend and no warming. Does Held think increases in OHT cause a loss of OHC? If this is the case he must believe that a positive AMO causes a loss of OHC. It should be easy enough to find a correlation without having to do too much guesswork. The Atlantic should have acquired much less OHC compared to other oceans during the time period the AMO was heading positive.

Fred, apologies, you were referring to the model paper and not the reconstruction. I should know by now that answering past my bedtime is a bad idea. It should be clear to you that their speculation was based upon the hypothesis of the time that GHG warming and other sources of warming produced different water vapor feedbacks. Water vapor feedback would be the major source of feedback under either scenario with current thinking. Water vapor also shows no recent trend.

I was rather encouraged by Ghil’s paper.
1) It introduces some sensible mathematics into the feedback debate.
2) It shows clearly that “equilibrium” in climate is a fundamental misunderstanding.
3) Following from 1 it discusses the idea that feedback can lead to oscillatory behaviour.

This study uses proxy climate records derived from paleoclimate data to investigate the long-term behaviour of the Pacific Decadal Oscillation (PDO) and the El Niño Southern Oscillation (ENSO). During the past 400 years, climate shifts associated with changes in the PDO are shown to have occurred with a similar frequency to those documented in the 20th Century. Importantly, phase changes in the PDO have a propensity to coincide with changes in the relative frequency of ENSO events, where the positive phase of the PDO is associated with an enhanced frequency of El Niño events, while the negative phase is shown to be more favourable for the development of La Niña events.

Unlike El Niño and La Niña, which may occur every 3 to 7 years and last from 6 to 18 months, the PDO can remain in the same phase for 20 to 30 years. The shift in the PDO can have significant implications for global climate, affecting Pacific and Atlantic hurricane activity, droughts and flooding around the Pacific basin, the productivity of marine ecosystems, and global land temperature patterns. “This multi-year Pacific Decadal Oscillation ‘cool’ trend can intensify La Niña or diminish El Niño impacts around the Pacific basin,” said Bill Patzert, an oceanographer and climatologist at NASA’s Jet Propulsion Laboratory, Pasadena, Calif. “The persistence of this large-scale pattern [in 2008] tells us there is much more than an isolated La Niña occurring in the Pacific Ocean.”

That there’s some connection between ENSO and PDO should be evident on physical grounds, given the circulation pattern in the N. Pacific. That connection, however, seems to be stronger at subdecadal frequencies, where the squared coherence is quite consistently significant. At transdecadal frequencies, that metric rises from insignificant values at the Nino3.4 secondary spectral peak, but never exceeds the max seen at the higher frequencies. Indeed, Nino3.4 leads PDO throughout the energetic frequencies. It’s hard to argue against what the empirical record shows!
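For anyone wanting to reproduce that kind of squared-coherence comparison, here is a sketch using scipy's coherence estimator on synthetic monthly stand-ins for the Nino3.4 and PDO series; the shared 5-year cycle, noise levels, and frequencies are invented for illustration, not fit to the real indices:

```python
# Squared-coherence sketch: two series sharing a subdecadal cycle show high
# coherence at that frequency and only noise-level coherence elsewhere.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
n = 1200                                # 100 years of monthly values
t = np.arange(n)
shared = np.sin(2 * np.pi * t / 60)     # common 5-year (subdecadal) cycle

nino_like = shared + rng.normal(0, 1, n)
pdo_like = 0.8 * shared + rng.normal(0, 1, n)

# fs=12 samples/year puts frequency in cycles per year
f, Cxy = coherence(nino_like, pdo_like, fs=12, nperseg=256)

peak = Cxy[np.argmin(np.abs(f - 0.2))]   # near the shared 5-yr period
low = Cxy[np.argmin(np.abs(f - 0.05))]   # near a quiet 20-yr period
print(f"coherence near 5-yr period: {peak:.2f}; near 20-yr period: {low:.2f}")
```

With real indices one would also want significance levels, since short records make low-frequency coherence estimates unreliable, which is part of the point being argued above.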

John S., the problem with ENSO and the PDO is that both are defined as oscillations when they are not, and the PDO is a very noisy area of the northeastern Pacific. The main effect of both is that they change atmospheric circulation.

The AMO is a real live SST based oscillation for a fixed region. The AMO has a higher correlation with “Global” surface temperature because it provides energy to a larger portion of the global land mass that is not under a couple of thousand meters of ice. Berkeley has recently noted “Land Amplification” where the mid-northern latitude land mass tends to amplify forcing.

If you compare just the North Atlantic deep ocean temperature anomaly (0-2000 meters in temperature instead of Joules) you can see how well they track.

Then, if you like, you can compare the entire ENSO region SST (15S–15N, 80W to 140W) with global ocean temperature and see another very good correlation.

PDO has a major impact on North American temperature and precipitation, but the interaction with the North Atlantic through the Arctic Oscillation makes it difficult to determine a “Global” impact because of all the noise. The big Kahuna is going to be the AMO/NAO oscillation shift.

Captn:
You omit the relevant part of the citation: “Because its [ENSO’s] spectrum has a long low frequency tail, fluctuations in the timing, number and amplitude of individual El Nino and La Nina events, within, say, 50-yr intervals can give rise to substantial 50-yr trends…”
“…It [The Pacific decadal oscillation or the interdecadal Pacific oscillation] is strongly reminiscent of the low-frequency tail of ENSO and has, indeed been argued to be such in previous studies (e.g. Alexander et al 2002, Newman et al 2003, Schneider and Cornuelle 2005, Alexander et al 2008)…”

The “wrong conclusions” warning that you cite refers to defining variations in terms of linear regression, not cross-spectrum analysis.

John S., I have seen a few attempts to estimate the PDO global impact, without much luck making anything work with just about any method, except for Tsonis’s neural network, which included quite a few of the known “oscillations”. Their work even indicated that the known “oscillations” had more unknowns, like a Mongolian impact on the NAO and the Indian Ocean Dipole. So I looked at changes in the northern hemisphere sudden stratospheric warming events. They have a bit of an oscillation to them as well that is related to the Tibetan Plateau, likely Tsonis’s Mongolian influence. So a more westward ENSO changes the PDO response through the AO shifts that cause the SSW events. I believe that qualifies as a chaotic relationship.

All that indicates to me that they are not oscillations, but weakly damped responses to perturbations overshooting a long term secular trend.

Makes sense with the huge ocean thermal mass taking hundreds of years to charge and discharge. If we are approaching a peak, semi-stable operating point, the typical oscillation relationships will break down, then there will be different “oscillations” to define.

“The PDO has two phases – a positive (warm) phase, and a negative (cool) phase. In the +PDO phase, we see below normal sea surface temperatures extending from northeast Asia across the North Pacific and to the waters well offshore Western Canada and southern Alaska. Immediately offshore of these regions, we see above normal SSTA values. The opposite set-up occurs in a -PDO. In the negative PDO, we find above normal SST anomalies stretching from east Asia across much of the northern Pacific and into the Northeast Pacific. Below normal SST values are found just offshore the western coast of North America. In the positive PDO, the Southeast tends to experience above normal precipitation, while the southern Ohio Valley will see slightly drier than normal conditions. In the temperature department, just about everyone east of the Front Range experiences below normal temperatures, except the Northern Plains, which sees above normal temperatures. The negative PDO sees the opposite of all this, with much of the US warm in a -PDO, and precipitation trends finding the Southeast in a dry area, but the lake effect snow belts in the Great Lakes at above normal precip levels. Right now, the PDO appears to be trying to switch phases from negative to positive. At the moment, I’m still expecting a negative PDO for the winter, and the chances of a neutral to slightly positive PDO this winter are low, but not completely zero.”

John S., I am sorry you feel that way, but as I said, I would not expect to find a high-confidence correlation of any kind, since the PDO is a poorly defined “oscillation”.

The Indo-Pacific warm pool would be a better reference region than ENSO as far as climate goes, because it stays put. ENSO is up to region 4 and may need a few new regions because it moves. The PDO is related to ENSO, but with hot tongue/cold tongue, Modoki, etc., what would you really expect?

Chief:
What part of “Alas, cross-spectrum analysis does not show high coherence between PDO and Nino3.4 at low (transdecadal) frequencies” do you fail to understand? It is a straightforward statement about an empirical time-series relationship, which no competent analyst would call “specious.” The point of interest is the CONTRA-indication it provides regarding conjectures that ENSO, which has negligible power in the low-frequency bands, is driving the multidecadal PDO.

I think the Xie paper and Judith are saying two different things and looking at different periods.

Xie is trying to explain the recent hiatus in surface temperatures over the last ~15 years and tracks ENSO changes from the 1950s. A glance at any ENSO dataset shows that for the last ~15 years ENSO has had a negative slope, and as ENSO is well correlated with global temperatures, this might explain the apparent hiatus.

Judith’s starting point is the IPCC claim of >50% anthropogenic warming since the 1950s, and to make her point she selects (!) a period (1975 – 1998) which begins with very low ENSO values, ending with extremely high ENSO values, subtracts the difference between them, and uses this value for her calculations. This method deals with neither the period from the 1950s, nor the ‘recent’ effects of ENSO on global temperatures, and so says little about either.

I asked in the other thread why Judith chose these exact years (is there a physical reason, or is it that the data produces the best support for her analysis). I also asked if her analysis was based on the difference between the end points, which would be non-statistical.

At this point it seems to me that Judith has simply cherry-picked her data to support her conclusion. As others have pointed out, one need only shift the end points by a few years to get wildly different results, even a negative influence. Judith’s explanation that she chose the period due to IPCCs claim (?) that anthro warming signal emerging strongly from the 1970s doesn’t work if one chooses slightly different periods under the same rationale (from somehwere in the 1970s to….?). A proper test of the IPCC claim would include all data from 1950 to 2005, the period for which the IPCC first claimed in AR4 that the anthro contribution was >50%. This has the added benefit of fairly neutral solar temps, evening out other fluctuations, and therefore being able to better isolate ENSO effects.

Out of interest I went to woodfortrees and ran some OLS regressions on global temperature and PDO data from 1950 to 2005 and 2012, and then from 1998 to 2012 (inclusive). PDO warming for the longer periods has the same sign as global warming, but for 1998 to 2012, the PDO gives a very strong cooling of 0.5C/decade, while global temps have been flat (HadCRUT4). The correlation breaks down for recent temps.

For this period global warming is 0.17C/decade, and the PDO trend is -0.28C/decade: opposite signs, with the PDO bringing strong cooling for this period.

Although this is an extremely naive procedure, it suggests that PDO correlations with global temps are very weak. But if PDO is a strong driver of temps, there is a significant force acting against PDO in the last 37, or 15 years.
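The woodfortrees-style trend numbers above are just least-squares slopes; here is a sketch of the computation on a synthetic monthly anomaly series (the 0.17 C/decade rate is borrowed from the figure quoted above, while the 15-year length and noise level are invented):

```python
# Naive OLS trend of a monthly anomaly series, expressed in C per decade.
import numpy as np

def trend_per_decade(anomalies, months_per_year=12):
    """Least-squares linear trend of a monthly series, in C per decade."""
    t_years = np.arange(len(anomalies)) / months_per_year
    slope_per_year = np.polyfit(t_years, anomalies, 1)[0]
    return slope_per_year * 10.0

# Synthetic series warming at 0.17 C/decade plus noise (illustrative):
rng = np.random.default_rng(1)
months = np.arange(15 * 12)                     # 15 years, monthly
series = 0.017 * (months / 12) + rng.normal(0, 0.05, len(months))
print(f"recovered trend: {trend_per_decade(series):.2f} C/decade")
```

Note how sensitive such slopes are to the chosen endpoints, which is exactly the cherry-picking concern raised above.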

Ragnaar – yes, the long-term surface warming trend is removed. Same for ENSO, which seems proper in order to isolate in-system fluctuations. These patterns move heat between the ocean and atmosphere. I don’t think they are detrended to make the long-term fluctuations zero. That’s simply a consequence of removing the warming signal, which is not deduced from long-term trends in oscillatory patterns. That would be putting the cart before the horse.

Some have argued that removing the long-term trend assumes that the long-term trend is not caused by PDO/ENSO. But I can’t see how an ocean/atmosphere pattern would generate energy of itself, nor have the comments purporting to explain that been convincing. (E.g., if it’s the sun causing changes in the PDO, then why is the correlation so poor?)
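The detrending step being debated can be made concrete; a sketch on synthetic data (all numbers invented) showing that subtracting the least-squares line zeroes the overall trend while leaving the multidecadal swing in place:

```python
# Detrending sketch: subtract the least-squares line from an index before
# examining its oscillation (scipy.signal.detrend does the same job).
import numpy as np

def remove_linear_trend(x):
    """Subtract the least-squares linear fit from a series."""
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)

t = np.arange(600)                          # 50 years, monthly (invented)
oscillation = np.sin(2 * np.pi * t / 360)   # a 30-year swing
warming = 0.001 * t                         # secular warming signal
index = oscillation + warming

residual = remove_linear_trend(index)
slope_after = np.polyfit(np.arange(len(residual)), residual, 1)[0]
print(f"residual trend: {slope_after:.2e}")
```

Note the fit removes any linear component, including whatever the oscillation itself projects onto a straight line over the record, which is exactly the assumption being questioned above.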

The first graph here shows the PDO and I am kind of seeing it as only recently gaining traction, spending most of its time as red (warm). So while the trend is negative, it would bend more negative towards the present. Maybe with some more time, if it stays blue?

It’s interesting that NOAA would talk about regimes which I take to mean chaos theory. And they have the recent regime as a split regime.

As far as the correlation of temperatures and PDO, I am a novice and just trying to get some understanding here, but I’d say it’s at least a medium correlation. I suppose my current idea is Sun + I don’t know > ENSO > PDO. The Pacific gyres just seem to me to be some big players.

And then, as I am supposed to remember, ‘I don’t know’ above includes about every other thing, as all things affect every other thing.

Not between solar flux and PDO, which was the point I was discussing. See my post below.

“Generate energy?”

Yes, what physical mechanisms could explain a long-term (centennial) forcing of surface temperatures from the PDO? How does a system which swaps heat between oceans and atmosphere over 20–30 year timescales generate energy to cause a centennial trend? I can see this happening with volcanic forcing, with atmospheric changes, or with solar forcing, but not with ENSO, PDO, AMO or other oscillating systems that are analogous to long-term ‘seasons’, albeit not as regular.

My mistake, CH. I commented on the correlation between solar flux and PDO, but it wasn’t the main part of the conversation.

However, if I take Judith’s start-point of 1975, which is when AGW is supposed to have emerged most strongly, then I get a poor fit using linear trends.

I’m not saying that there is no correlation between global temps and the PDO, but I’m unsure how the PDO could be the cause of long-term trends, and when and how lags occur between the PDO and global temps. Could the apparent regime changes actually be a residual effect of global temps on PDO data (as it appears to be regarding the AMO: 2–3 month lag), rather than the other way around?

CH, that may cover decadal fluctuations, but the long-term trends for clouds, like PDO, are minimal. My point is that the oscillating nature of PDO may account for 20- to 30-year variation, but not for the overall warming trend (eg, centennial).

(Ragnaar pointed out that the long-term (centennial) warming trend is removed from PDO data, hence my question as to how PDO might create heat energy over that timescale that would make the removal of the trend inappropriate)

Timescale is an important factor here, of course.

Xie’s point is that for the last ~15 years, ocean/atmosphere fluctuations (ENSO) have made a negative contribution to surface temperatures, masking any warming that might be underway. ENSO and PDO trends agree for that time-scale. They’re both negative. PDO linear trend is -0.5C/decade, 1998 – 2012. That’s a strong negative forcing, suggesting that an equally strong positive forcing must have occurred to keep surface temps flat.
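A per-decade linear trend like the PDO figure quoted above is typically computed as an ordinary least-squares fit on the monthly index, with the slope rescaled to per-decade units. A minimal sketch (the series below is synthetic, not the real PDO index; the -0.05/yr slope and noise level are made-up values):

```python
# Compute a linear trend in "per decade" units via ordinary least squares.
# Synthetic monthly index, NOT the real PDO data.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(180)                     # 15 years of monthly samples
years = 1998 + months / 12.0                # 1998 through 2012
# planted decline of -0.05 per year, plus noise
index = -0.05 * (years - 1998.0) + rng.normal(0.0, 0.5, years.size)

slope_per_year = np.polyfit(years, index, 1)[0]   # degree-1 fit, slope first
trend_per_decade = 10.0 * slope_per_year
print(f"linear trend: {trend_per_decade:+.2f} per decade")
```

With a planted slope of -0.05/yr the recovered trend comes out near -0.5 per decade, though the noise moves it around a little from run to run.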

‘The global climate system is composed of a number of subsystems – atmosphere, biosphere, cryosphere, hydrosphere and lithosphere – each of which has distinct characteristic times, from days and weeks to centuries and millennia. Each subsystem, moreover, has its own internal variability, all other things being constant, over a fairly broad range of time scales. These ranges overlap between one subsystem and another. The interactions between the subsystems thus give rise to climate variability on all time scales.’ Michael Ghil – http://www.atmos.ucla.edu/tcd/PREPRINTS/Math_clim-Taipei-M_Ghil_vf.pdf

ENSO and the PDO are part of a basin wide system known as the Pacific Decadal Variation. As ENSO is known to vary over decades to centuries to millennia – there is little to suggest that the decadal pattern is all there is.

Yes, it has a number of components, including temperature patterns, which seem to have impacts thousands of kilometers away. I doubt that it can provide a forcing for long-term global warming (or cooling). Similar to the latest centennial warming, the millennial PDO reconstruction doesn’t correlate well to millennial temp reconstructions either.

Fluctuations are somewhat correlated (1950 – 1980) as much as anti-correlated (1980 to early 2000s: [eyeballometer]), and lag appears to vary from one to the other. Opposite sign over the whole period. I wonder how it is supposed to work.

As far as the Sun driving it, that’s just my assumption, but I should have added something: Gatekeeper clouds and water vapor, having to do with their varying albedo and insulating qualities.

When I try to visualize what’s going on with the Oceans, the PDO seems big and stable, but not necessarily driving. Maybe some moderating, indicating and being a resultant and a driver. A North Pacific hub:

Connected to the South Pacific hub, and sharing the El Nino/La Nina equatorial lane. And with the South Pacific hub connected to the Antarctic Circumpolar hub.

At the end of the day, I have a generalized picture at best, with a lot of unknowns.

My main question/concern is that, if I am interpreting the figure correctly, Earth’s average surface temperature would drop by 120 K if TOA insolation decreases by a very small amount below what it is at present. If my understanding is correct, it means the planet is in a precarious position when CO2 concentration is at the low levels it was at pre-industrial revolution, and only a little better now (at 400 ppm).

This is my interpretation of the figure:

1. Assuming that the rate of change of insolation is slow enough so that equilibrium is maintained, and

2. we begin at temperature 287.7 K (marked on the vertical axis) and at insolation as at present (1.0 on the horizontal axis), then

3. if TOA insolation increases (e.g. GHG concentrations increase?), the average surface temperature would increase along the upper blue line.

4. However, if the fractional TOA insolation decreases to slightly less than 1.0 (e.g. by just 1% to 0.99), the planet would sink into an ice ball and the average surface temperature would sink ~100 K (from ~278 K to 175.4 K).

5. To get out of iceball Earth and return to a greenhouse phase, insolation would have to double (to 2.05 on the horizontal axis) so that temperature increases along the lower blue line to point Td.

6. If TOA insolation >0.99 and temperature is between the upper blue line and red line, the temperature will increase to the upper blue line.

7. If TOA insolation >0.99 and temperature is between the lower blue line and red line, the temperature will sink to the lower blue line.

8. If TOA insolation drops to 0.99 or below, the average surface temperature will sink to the lower blue line at 175.4 K.

Is my understanding correct?
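The bistability and hysteresis described above can be reproduced with a toy zero-dimensional energy-balance model with ice-albedo feedback. This is my own illustrative construction, not Ghil’s actual equations: the emissivity, albedo thresholds and the resulting branch temperatures (the cold branch here lands well above the 175.4 K in the figure) are all assumed round numbers, chosen only to show the warm branch, the cold branch, the unstable branch between them, and the disappearance of the warm branch when insolation drops:

```python
# Toy 0-D energy-balance model: mu*Q0*(1 - albedo(T)) = EPS*sigma*T^4.
# All parameter values are assumed; this only illustrates the bifurcation.
import numpy as np

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
Q0 = 342.0        # global-mean insolation, W m^-2
EPS = 0.61        # effective emissivity, a crude greenhouse proxy (assumed)

def albedo(T):
    """Ice-albedo feedback: 0.7 when frozen (T < 240 K), 0.3 when warm
    (T > 280 K), linear in between."""
    return np.clip(0.7 - 0.4 * (T - 240.0) / 40.0, 0.3, 0.7)

def net_flux(T, mu):
    """Net heating (W m^-2) at temperature T and insolation fraction mu."""
    return mu * Q0 * (1.0 - albedo(T)) - EPS * SIGMA * T ** 4

def equilibria(mu, grid=np.linspace(150.0, 350.0, 20001)):
    """Temperatures where net_flux changes sign, i.e. equilibrium states."""
    f = net_flux(grid, mu)
    crossings = np.where(np.sign(f[:-1]) != np.sign(f[1:]))[0]
    return [float(grid[i]) for i in crossings]

print(equilibria(1.0))   # three states: cold stable, unstable, warm stable
print(equilibria(0.8))   # insolation reduced: only the cold branch remains
```

The point the list above makes falls out directly: at present insolation there are two stable states, and once insolation drops below the fold, the warm branch simply ceases to exist and the only equilibrium left is the cold one.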

If my interpretation of the figure is correct it strains credulity to think that the planet’s temperature could drop by 100 K. The planet’s temperature has stayed within a range of about 15 K for the past billion years. My understanding is that even snowball earth seems to have stayed within this range (roughly), and not dropped by 100 K. Therefore, it seems to me that Ghil’s mathematical analysis is not consistent with empirical evidence.

The energy-balance model is too simple to realistically mimic climate. Although the snowball earth temperature seems about right.

These particular nonlinear equations have a couple of degrees of freedom. Earth climate has hundreds at least. They are one of the class of deterministically chaotic systems – of which there are many and of which climate is one. It conceptualizes the behaviour of nonlinear systems more generally at tipping points. It is a toy model in other words – intended only to illustrate the principle. Other than that you are perfectly correct in your description of chaotic bifurcations.

1. The system is complex, the model is simple, so cannot take the output as meaningful but it does help to explain the concept of tipping points;

2. my interpretation of Figure 1.1 is basically correct

3. “Although the snowball earth temperature seems about right.”

Point 3, I don’t understand. We’ve had about three snowball Earth phases between 1 billion and 600 million years ago. My understanding is that average surface temperature was around 10 K colder than now, not 100 K colder. Do you have a reference that suggests snowball earth was around 100 K colder than now?

Chief thanks for that link. It does give a similar temperature figure but it is for the case with no greenhouse gases (188 K) and no atmosphere at all (185 K). The Ghil case is with the atmosphere and GHG concentrations as they are now (or pre-industrial?). So a very different case. So, I still believe the 100 K temperature drop to snowball Earth is way off the mark and challenges the credibility of the Ghil paper. (I do recognise it is a ‘toy’ model as you said in your first comment).

Another reason to question a 100 K temperature drop is that we know life did survive through the several occurrences of snowball Earth. Life would not have survived if the temperature had dropped by 100K. As the Barrett-Balamy post says:

Without the greenhouse gases the reduced temperature would lead to the freezing of the oceans, and clouds would be minimal if there were any at all. This would imply a much higher albedo, since snow and ice are very good reflectors of sunlight [they are white or transparent]. There would be no life, no greenery.

[my emphasis]

So I find the Ghil Figure 1.1 interesting in showing the concept of the tipping points. But I don’t accept the figures.

What it also does is scare the hell out of me about the risk of cooling. It suggests we are close to catastrophe on the cold side, but there is no sign of a major risk on the warm side. It further reinforces my conviction that I don’t want us to waste money on mitigation policies to reduce GHG emissions without a high level of certainty they will succeed and low risk of negative consequences. That is not the case with the mitigation policies that have been pushed in the UN climate conferences to date (like Kyoto Protocol, carbon pricing and renewable energy).

Pole to pole ice by definition. No water vapour to speak of and little carbon dioxide. The temps are a little cooler with no greenhouse gases – 183 K – than with no atmosphere – 188 K. I haven’t done the calcs and it is such a hypothetical.

Warming may actually drive cooling – open water increasing snow and ice, reducing MOC. A complex system holds no guarantees.

I accept water vapour concentration would be lower, but not CO2. All the land surface is covered, so there is no removal of CO2 from the atmosphere by weathering of rocks. And the oceans are covered, so CO2 cannot be absorbed by the oceans. So CO2 released by volcanoes increases CO2 concentration.

However, this is irrelevant to interpreting Figure 1.1, because the vertical line at 0.99μ is constant insolation (i.e. same CO2 and water vapour concentration as at the tipping point). At constant 0.99μ the temperature drops 100 K.

Again, I realise this is a simple toy model, and just a diagram. But it seems to me the Ghil analysis says that once insolation gets down to 0.99μ, the temperature drops from ~278 K to 175.4 K with no change in insolation (so GHG forcing is constant). Am I still interpreting this figure correctly?

Fred Moolten : “What it would require is a climate sensitivity to ocean-to-surface warming that is enormously greater than climate sensitivity to external forcing. Current estimates place a likely upper bound on forced climate sensitivity of about 4.5 C, and it would require a climate sensitivity to internal heat transfer well above 10 C and probably above 20 C to reduce or eliminate the disparity. Physical evidence fails to support such a mechanism, and indeed climate sensitivity estimates based on some internal phenomena such as ENSO tend to be lower than those estimated for CO2-based external forcing.”

The whole concept of “climate sensitivity” assumes a linear system, and very little of climate is linear. Some linear approximations work reasonably well, but start calculating climate sensitivity estimates for things like ENSO and you are spinning a web of linear fantasy around a nonlinear system.

As I pointed out above (though no one commented on it), the whole idea that ENSO is a driver is very likely an illusion created by just this obsession with treating as linear everything which is not.

Tropical climate is dominated by tropical storms, which are strongly non-linear negative feedbacks capable of maintaining the degree-day integral stable across major eruptions (i.e. a significant change to radiative input to the region).

If we insist on modelling the global response as linear there will be a residual response which will _appear_ to be an ENSO “forcing”.

Now it may be possible to once again estimate the nonlinear response as two linear actions (or maybe not!), but it is first necessary to recognise and measure the true response, then try to approximate it.

Current attempts to insert cloud ‘parameterisations’ as inputs to a linear system will only work over the calibration period.

Yes, well as you say that’s a trivial model with two parameters when we need hundreds. A nice way to conceptualise the flip between glacial and interglacial, for example.

We could regard CS as the tangent to such a curve but the real one will have a lot of kinks and nobbles and we don’t learn anything before we get the full picture. Until we can model the chaotic system.

It seems to me that the answer to the current question of ENSO “driver” is to recognise that it is not a driver but a residual from a linear approximation to a nonlinear part of climate.

Get a more realistic model that represents tropical storms, i.e. much higher spatial resolution over limited latitudes, and link this to global models as is done with Arctic ice models.

Use that model to create realistic cloud parameters to provide GCM inputs.

It seems that we are still at the stage of making excuses for the divergence rather than addressing the fundamental inadequacy of the linear model / fixed CS paradigm.

I found the first lengthy comment of Fred Moolten more confusing than clarifying. That’s related to the many ways the word “warming” can be interpreted. It may refer either to a continuing supply of heat that’s enough to maintain some temperature, or to a change in the heat balance that leads to a rise of temperature.

The main energy flux of the Earth system goes from the sun to the ocean, from the ocean to the atmosphere, and from the atmosphere back to the space. Continental areas contribute, but less, and part of the solar radiation stops in the atmosphere.

In the oceans, layers below the skin heat the skin, and layers more than a few meters deep provide some of that heat. A very small fraction of the energy absorbed by the oceans seems to cause slow warming of the bulk of oceans, as the OHC has a rising trend according to the measurements. All the rest is released from the surface to the atmosphere or, to a much lesser extent, radiated directly to space.

Based on the above the ocean from below the skin is always the main heat source for the thin surface layer of the ocean, whose temperature is the SST.

Most of the continental areas are heated by the sun to the extent that they are net cooled by the oceans through the atmosphere, not net warmed, but some areas are also net warmed by the oceans.

All of the above applies whether the Earth has been warming or cooling. How warming is discussed there should not be confused with what happens when temperatures are changing, but my impression is that Fred was to some extent doing just that. Therefore I consider his comment confusing.

I’m sorry I confused you, Pekka. I was referring to the change in heat balance responsible for a rise in global surface temperature.

Consider a situation in which the climate system is in balance to start with – it is neither gaining nor losing energy. In an “internal variability” scenario over a subsequent specified interval, an excess of energy from within the ocean will be transferred to the surface, raising the surface temperature of the ocean (and indirectly contributing to land warming as well). Because of the higher temperature, heat will escape to space faster than it is being absorbed in the climate system. The system will lose a quantity of energy over the interval, and that loss will be reflected in a more or less equivalent loss of OHC, since there is no other heat repository large enough to supply more than a small amount.

In an external forcing scenario, the surface temperature will also rise as the system gains excess energy. Once again, heat will escape to space at faster rate than it did before the interval. In this case, however, the excess in the rate of heat escape represents energy that does not go back into the ocean. In other words, in the internal scenario, energy radiated to space in excess of the rate prior to the interval is energy subtracted from the ocean, but in the forcing scenario, the increase in energy radiated to space is not the converse – it is not energy added to the ocean but energy that fails to be added. This is why equal warming (rise in temperature) from an internal vs a forcing scenario implies a net loss of OHC. If an actual rise in OHC is observed (as is the case since 1950), the same principle dictates a predominant role for forcing.
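The bookkeeping in the two scenarios above can be checked with a minimal one-box energy budget. This is a sketch of the principle, not Held’s actual model; the restoring strength and heat capacity are assumed round numbers:

```python
# One-box sketch: equal surface warming from an external forcing vs. an
# internal ocean-to-surface heat flux implies opposite signs of OHC change.
# LAMBDA and C_S are assumed values, not observational estimates.
LAMBDA = 1.2   # radiative restoring, W m^-2 K^-1
C_S = 10.0     # effective surface heat capacity, W yr m^-2 K^-1

def run(years, forcing=0.0, internal=0.0, dt=0.01):
    """Integrate dT/dt = (forcing + internal - LAMBDA*T) / C_S.
    `internal` is heat handed from the deep ocean to the surface: it warms
    the surface but adds nothing to the system, so total stored heat (~OHC)
    integrates only the TOA imbalance, forcing - LAMBDA*T."""
    T, ohc = 0.0, 0.0
    for _ in range(int(years / dt)):
        T += dt * (forcing + internal - LAMBDA * T) / C_S
        ohc += dt * (forcing - LAMBDA * T)
    return T, ohc

T_f, ohc_f = run(30, forcing=0.5)    # externally forced warming
T_i, ohc_i = run(30, internal=0.5)   # internally driven warming
# identical surface warming, but OHC rises when forced, falls when internal
print(T_f, ohc_f)
print(T_i, ohc_i)
```

Because the two heat inputs enter the surface equation identically, the temperature histories are the same, yet the forced run accumulates heat while the internal run steadily drains it – which is the whole point of the OHC diagnostic.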

Some of the caveats and assumptions surrounding this principle are mentioned in earlier exchanges of comments, but the entire concept is described in more quantitative detail in Isaac Held’s blog that I linked to in my original comment. It would probably be more informative for anyone interested in the details to visit that blog, and if necessary, submit a question to Held rather than for me to transmit the various points here, perhaps not always as precisely as Held would wish. This is particularly the case for the quantitative aspects. Rather than copy them from his blog, I’d simply suggest a visit to the blog.

In the external forcing scenario, I should have stated that because of the temperature rise, heat will escape to space faster than it would have without the change in temperature. Whether that is faster or slower than before the forcing depends on the nature of the forcing.

You essentially divide the Earth system in two parts: oceans and the rest.

By energy conservation we know that the rest can warm from energy added to the whole system as well as energy from the oceans, or any combination of the two where the amounts from both either add or partially cancel.

That’s not controversial, but that alone is not particularly useful either.

The paper discussed here presents a model that is not in agreement with any of those alternatives, as it’s based on a third energy flux between the surface ocean and a mysterious black hole. The question is: does a model based heavily on that tell us something about an Earth system that is in agreement with the laws of energy conservation?

Pekka – The First Law is of course critical to the point I’m making. It is more rigorously described in Held’s blog article, which I recommend to you. It tells us, in essence, that energy from a forcing that raises global temperature includes a component that is radiated to space without entering the ocean, and so only a fraction adds to OHC – typically less than half. On the other hand, energy from internal variation that raises temperature consists almost entirely of energy that is subtracted from the ocean. That is why an equal contribution to a temperature rise from internal and forced contributions will result in a net OHC loss, an unchanging OHC will signify a predominant forced component, and an increasing OHC will signify even greater predominance of forcing as opposed to internal variation.

Based on the above, combined with OHC data and the observed geographical patterns of post-1950 warming, it seems unlikely that internal variation has contributed more than a small fraction of post-1950 warming, and even more implausible that it has contributed anywhere close to half. Again, I would recommend Held’s article for a more ample description.

I realize that the current post here is based on a different paper, but since Dr. Curry raised the subject of post-1950 or post-1975 warming, I thought it worth reiterating that only a minor fraction at most can be attributed to internal variability unless some unidentified process is operating that conforms to the constraints that appear to exist – conservation of energy, energy balance equations relating temperature change to radiative restoring, observed changes in OHC, and the observed distribution of global warming. To my knowledge, no-one has yet proposed an alternative explanation that contradicts these conclusions. If any reader here is aware of one, he or she should describe it. What I don’t think is at all persuasive would be claims that internal variability may have contributed substantially during that interval that don’t address the points I’ve tried to convey here.

Addendum:
To expedite any further discussions, I thought it might be worthwhile to again cite the link to Held’s article, so that interested readers can visit it and address points it makes. It’s at Heat Uptake and Internal Variability.

The “money quote” summarizing his conclusions at the end of the article is the following, where ξ is the fraction of a temperature rise that is forced:

“I see no plausible way of arguing for a small- ξ picture. With a dominant internal component having the structure of the observed warming, and with radiative restoring strong enough to keep the forced component small, how can one keep the very strong radiative restoring from producing heat loss from the oceans totally inconsistent with any measures of changes in oceanic heat content?”

Held concludes that around 25% of the driving force from internal variability would roughly cancel the increase in OHC, if the share of energy stored as OHC is 30% when no internal variability is involved. To me that’s obvious: those two numbers (25% and 30% above) must be essentially equal. His argument is just an unnecessarily roundabout way of deriving this fact that could be stated directly from energy conservation and assumed linearity.

What I’m trying to say is that these issues are actually much simpler than one might think from Held’s article and your comments.

Fred, if you compare the north Atlantic OH uptake with northern hemisphere temperatures, there is an atmospheric warming while there is slow ocean heat uptake, with a step starting ~2000 and peaking ~2005, then a gradual decline to present, with northern hemisphere surface temperatures leveling off. In the south Atlantic, OH uptake is neutral from 1980 until ~2000, then it starts to ramp up. Different responses in the two hemispheres. Along with Arctic sea ice reduction and Antarctic sea ice increase.

J. R. Toggwieler has a nice little article on the shifting westerlies that are a response to changing temperature gradients.

While the overall increase in total heat content is undoubtedly due to a change in forcing, internal variability on longer time scales changes the rate of heat uptake. Since there was a little ice age and more volcanic activity in the 20th century, that lack of negative forcing needs to be considered, just like the negative volcanic forcing prior to 1998 would need to be considered. If 50% of the imbalance is due to longer term centennial scale recovery, 50% of the warming would be due to natural causes unless humans are responsible for volcanoes and plate tectonics.

The issue is more that natural variability, internal variability and anthropogenic forcing are difficult to separate. You can’t assume one forcing fits all.

Pekka – Perhaps the main principle is “obvious” if one thinks about it, yet we are still seeing claims that internal variability may have contributed more than a small part of the post-1950 and post-1975 warming – claims that are contradicted by the principle and the evidence at hand. Of course, Held’s article goes beyond the basic principle and considers possible confounding factors, concluding that they might plausibly modify the conclusion to a small extent, but not likely to a large extent. For that reason – both the evidence and the caveats – I recommend it to readers interested in assessing the relative importance of forced vs internal variability in the multidecadal warming. My own sense is that it won’t put an end to the claims. That would be acceptable if plausible refutations of the evidence for only a minor role of internal variability are presented, but not if the evidence is ignored.

Barring something new and significant, I’m inclined to leave it at that and let readers visit the preceding comments and the Held article to draw their own conclusions.

To the extent there are weaknesses in the argument, they are in the assumptions. The most essential is that changes in the albedo do not play a major role.

Lindzen and other skeptic scientists are competent enough to know that the only place where their arguments are not contradicted by very strong physical arguments is there. The most plausible form of dominance of natural variability and low climate sensitivity is based on assuming that the role of oceans is not so much to take or release energy as to influence albedo. The oceans could bring in the long-term memory, while the changes in the energy fluxes are due to albedo effects. That kind of view is consistent with negative feedback from clouds and low sensitivity.

We have data on OHC and on many other relevant variables. That data supports the main stream views, but as far as I can see, there are still so large uncertainties everywhere that drawing strong quantitative conclusions is not possible. That’s most directly visible in the difficulty of determining the climate sensitivity with reasonable accuracy (limits like 1.5 – 6 do not really represent reasonable accuracy). This is acknowledged by main stream scientists, although many of them try to find ways to avoid stating that directly.

Could it be that we are on an energy plateau with a 1998 or so peak? Could it be that CO2 is much less effective than assumed? Could it be that climate shifted in 1998/2001?

There are a couple fundamental concepts in modern climate science without which understanding is impossible. Climate – and models – are chaotic. Patterns of ocean and atmospheric circulation shift every few decades. The last shift was in 1998/2001. We are currently in a cool global mode and these last for 20 to 40 years. The global surface temperature – at the very least – is not increasing for another 10 to 30 years. This is mainstream and leading edge climate science.

The importance of the chaotic features of the Earth system is not really known. In many ways the system is more dominantly stochastic than chaotic, and stochasticity leads to dissipation that may reduce the importance of the chaotic properties. This is by no means certain, but this is a real alternative.

(The influence of a butterfly in the Amazonas is a prime example of wrong claims made based on neglecting the stochastic nature of the Earth system and the related dissipation.)

The multidecadal variability may be either quasi-periodic or a series of totally non-periodic state transitions.

A stochastic system is often more predictable than a chaotic one, but again generic rules should not be given too much weight.

Climate science does lead to improving understanding of the Earth system. Based on that projections can be made, but their reliability is difficult to assess.

The situation would be hopeless for policy conclusions, if they would require accurate knowledge, but that’s not the case. Reasonably well justified likelihood estimates are enough in many cases.

‘Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic. The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.’ http://rsta.royalsocietypublishing.org/content/369/1956/4751.full

Let’s call it abrupt climate change then – as ‘chaos’ appears to cause confusion with ideas that emerge from physics but have no well defined application in climate. Abrupt climate change is defined as emergent behaviour of the self organising climate system that is faster than the forcing. Abrupt climate change is the central climate process on all relevant scales. It is apparent that people like Tim Palmer and Anastasios Tsonis have little problem identifying this as deterministic chaos.

“There are a couple fundamental concepts in modern climate science without which understanding is impossible. Climate – and models – are chaotic.”

As I understand it, weather is chaotic and climate less so – like the seasons.

“Patterns of ocean and atmospheric circulation shift every few decades. The last shift was in 1998/2001. We are currently in a cool global mode and these last for 20 to 40 years. The global surface temperature – at the very least – is not increasing for another 10 to 30 years. This is mainstream and leading edge climate science.”

I’ve seen little in the literature to suggest we are in a “cooling mode.” How would you demonstrate this is ‘mainstream’? It’s not, for example, a conclusion shared by the IPCC. What is the rationale for a ‘shift’ 1998/2001? Is it correlation with the PDO index?

Eg, the AMO lags global temps, apparently, so the oscillation may reflect rather than be responsible for the global temperature record (other processes notwithstanding). I’m curious to know if anyone has attempted to assess any lag between PDO and global temperatures.
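One simple way such a lag is usually assessed is lagged cross-correlation: shift one index against the other and find the shift that maximizes the correlation. A sketch on synthetic data (an AR(1) red-noise “driver” and a copy shifted by a planted lag – not the real PDO or temperature series):

```python
# Estimate the lag between two series via lagged cross-correlation.
# The data are synthetic; the 3-month lag is planted, not observed.
import numpy as np

rng = np.random.default_rng(1)
n = 600                                  # e.g. 50 years of monthly values
driver = np.zeros(n)
shocks = rng.normal(0.0, 1.0, n)
for t in range(1, n):                    # AR(1) red noise, phi = 0.5
    driver[t] = 0.5 * driver[t - 1] + shocks[t]

LAG = 3                                  # months; the lag we plant
follower = np.roll(driver, LAG) + rng.normal(0.0, 0.3, n)

def best_lag(x, y, max_lag=24):
    """Shift of x (in samples) that maximizes its correlation with y."""
    lags = list(range(-max_lag, max_lag + 1))
    corrs = [np.corrcoef(np.roll(x, k), y)[0, 1] for k in lags]
    return lags[int(np.argmax(corrs))]

print(best_lag(driver, follower))        # recovers the planted 3-month lag
```

One caveat worth flagging for real indices: both PDO and global temperature are strongly autocorrelated, so the correlation peak is broad and a lag estimated this way comes with substantial uncertainty.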

You are more than several years behind the science on natural variability, Barry – and arguing from a position of intellectual inflexibility. This post is about natural variability.

“Unlike El Niño and La Niña, which may occur every 3 to 7 years and last from 6 to 18 months, the PDO can remain in the same phase for 20 to 30 years. The shift in the PDO can have significant implications for global climate, affecting Pacific and Atlantic hurricane activity, droughts and flooding around the Pacific basin, the productivity of marine ecosystems, and global land temperature patterns. This multi-year Pacific Decadal Oscillation ‘cool’ trend can intensify La Niña or diminish El Niño impacts around the Pacific basin,” said Bill Patzert, an oceanographer and climatologist at NASA’s Jet Propulsion Laboratory, Pasadena, Calif. “The persistence of this large-scale pattern [in 2008] tells us there is much more than an isolated La Niña occurring in the Pacific Ocean.”

Natural, large-scale climate patterns like the PDO and El Niño-La Niña are superimposed on global warming caused by increasing concentrations of greenhouse gases and landscape changes like deforestation. According to Josh Willis, JPL oceanographer and climate scientist, “These natural climate phenomena can sometimes hide global warming caused by human activities. Or they can have the opposite effect of accentuating it.” http://earthobservatory.nasa.gov/IOTD/view.php?id=8703

We construct a network of observed climate indices in the period 1900–2000 and investigate their collective behavior. The results indicate that this network synchronized several times in this period. We find that in those cases where the synchronous state was followed by a steady increase in the coupling strength between the indices, the synchronous state was destroyed, after which a new climate state emerged. These shifts are associated with significant changes in global temperature trend and in ENSO variability. The latest such event is known as the great climate shift of the 1970s. We also find the evidence for such type of behavior in two climate simulations using a state-of-the-art model. This is the first time that this mechanism, which appears consistent with the theory of synchronized chaos, is discovered in a physical system of the size and complexity of the climate system.

Citation: Tsonis, A. A., K. Swanson, and S. Kravtsov (2007), A new dynamical mechanism for major climate shifts, Geophys. Res. Lett., 34, L13705, doi:10.1029/2007GL030288.

What happened in the years 1976/77 and 1998/99 in the Pacific was so unusual that scientists spoke of abrupt climate changes. They referred to a sudden warming of the tropical Pacific in the mid-1970s and rapid cooling in the late 1990s. Both events turned the world’s climate topsy-turvy and are clearly reflected in the average temperature of Earth. Today we know that the cause is the interaction between ocean and atmosphere. http://www.sciencedaily.com/releases/2013/08/130822105042.htm

The Earth’s climate system is highly nonlinear: inputs and outputs are not proportional, change is often episodic and abrupt, rather than slow and gradual, and multiple equilibria are the norm. While this is widely accepted, there is a relatively poor understanding of the different types of nonlinearities, how they manifest under various conditions, and whether they reflect a climate system driven by astronomical forcings, by internal feedbacks, or by a combination of both. http://www.unige.ch/climate/Publications/Beniston/CC2004.pdf

Here’s a few to start with – come back when you have a little more depth and a little less of the climate partisan.

Chief Hydro mixes just enough science with a whole lot of inaccuracy to make himself sound plausible to the unsuspecting. Tread with caution. His views on ocean heat content are especially more fiction than fact.

‘Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic. The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.’

This contains two points that I object to:

The first point is that the word “even” on the first line is misleading, as many of the conclusions are stronger for a simple set than for a complex system like the Earth system.

The second point is that the text refers specifically to deterministic systems, while I emphasize that it’s wrong to consider the atmosphere a deterministic system. It’s not only seemingly random, but genuinely random (stochastic) for all practical purposes. If the Earth system is deterministic at all, it is so only as part of the whole universe. No single part of the universe is deterministic even if the whole is.

A complex system like the Earth has many properties. In some ways it may behave similarly to the simple deterministic systems that Lorenz studied mathematically, but how important those features are for the Earth system is an open question.

Pekka – I think that you have mistaken unpredictable for truly random.

Finally, Lorenz’s theory of the atmosphere (and ocean) as a chaotic system raises fundamental, but unanswered questions about how much the uncertainties in climate-change projections can be reduced. In 1969, Lorenz [30] wrote: ‘Perhaps we can visualize the day when all of the relevant physical principles will be perfectly known. It may then still not be possible to express these principles as mathematical equations which can be solved by digital computers. We may believe, for example, that the motion of the unsaturated portion of the atmosphere is governed by the Navier–Stokes equations, but to use these equations properly we should have to describe each turbulent eddy—a task far beyond the capacity of the largest computer. We must therefore express the pertinent statistical properties of turbulent eddies as functions of the larger-scale motions. We do not yet know how to do this, nor have we proven that the desired functions exist’. Thirty years later, this problem remains unsolved, and may possibly be unsolvable.

Perhaps if we had the wondrous intellect of Laplace’s demon we could describe each turbulent eddy. We can’t, but each eddy has a cause.

But the essential point is that abrupt climate change is the central organising principle of climate at all relevant scales.

I add some further explanation to the above comments I addressed to Fred Moolten.

The point that Held made in his article can be explained without any formulas.

He was comparing two cases (and their mixture).

In case one a change in external forcing has caused warming. Adding CO2 to the atmosphere is considered such a forcing. Based on the estimate that a doubling of CO2 causes a forcing of 3.7 W/m^2 (I use 3.7 here although 3.4 might now be preferred), the rise from 270 ppm to 400 ppm leads to a forcing of 2.1 W/m^2. The present imbalance is usually estimated as less than 1 W/m^2. Based on the observed warming rate of the oceans, it could be around 0.8 W/m^2, i.e. about 35% of the calculated forcing; most of that goes to the oceans, but a fraction to heating continents and melting ice. The rest is no longer part of the imbalance, because the outgoing IR (OLR) has also increased from the warming (the Planck response).
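The 2.1 W/m^2 figure follows from the standard logarithmic dependence of CO2 forcing on concentration, F = F_2x · ln(C/C0)/ln 2. A quick check (Python; 3.7 W/m^2 per doubling and the 270 and 400 ppm concentrations are the values used in the comment above):

```python
import math

F_2x = 3.7            # forcing per CO2 doubling, W/m^2 (value used above)
C0, C = 270.0, 400.0  # pre-industrial and present CO2 concentrations, ppm

# Logarithmic forcing formula: F = F_2x * ln(C/C0) / ln(2)
forcing = F_2x * math.log(C / C0) / math.log(2)
print(round(forcing, 1))  # -> 2.1 W/m^2
```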

The other case is warming by extracting heat from the deep ocean to the surface. If the resulting warming were the same, the Planck response would also be the same. It would have been necessary to extract not only the heat needed to warm the surface and melt the ice, but also the extra heat radiated to space from the warmer surface and atmosphere. That last observation is all that Held was telling us, and what Fred has been referring to.

As I wrote above, this argument requires that everything else is kept the same. Held listed some of the issues related to that, but he didn’t discuss what happens to this argument if we allow long-term variability in the albedo. Allowing that makes the whole argument moot.

The old travesty of Trenberth has not disappeared. We still do not know why the observed rate of increase in OHC is lower than what would correspond to the TOA imbalance of typical climate models. The model runs described in the paper of Kosaka and Xie lead to a significantly faster rise in OHC than observed in the HIST simulations. In POGA-H an additional imbalance is introduced that breaks energy conservation, as even the model has no place for that extra energy.

There’s still much to learn about the heat flows of the oceans, and the heat balance of the Earth as a whole. ARGO has not provided better explanations; its effect has rather been the opposite, as the observed rate of warming of the top 700 m has been very low over the short period of good ARGO measurements.

My main point all along has been that the Earth system is to a very significant degree truly random. It’s an error to claim that it’s only unpredictable.

The system has in many ways properties that would make a deterministic system unpredictable, but that’s only part of the truth. In addition it’s stochastic, i.e. truly random. The stochastic disturbances are also amplified by the same features that make a deterministic system unpredictable, so even small stochastic disturbances are very significant. The stochastic inputs have, however, another influence as well: they lead to dissipation, and through that they improve predictability.

The relative importance of the factors that tend to make the system unpredictable and of those that improve predictability is a big question that has not yet been answered for the large-scale behavior of the Earth system.

‘We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.’
—Pierre Simon Laplace, A Philosophical Essay on Probabilities[3]

It would seem vastly unlikely that cause and effect is dispensed with in the physical system of the Earth. You would need to enumerate effects without causes to convince anyone Pekka. As opposed to merely asserting truth sans a foundation in rational discourse.

It is moreover irrelevant to the real and fundamental behavior of the real climate system. Deterministically chaotic to the core.

On a philosophical level we can argue about the role of determinism in physics. That’s, however, of little relevance for the present question.

Every subsystem of the Earth system is affected by its surroundings. That involves effects that are truly random from the point of view of the subsystem, as they are determined by something external to it. The mechanisms that make the subsystems unpredictable even when these stochastic external influences are not included also mean that a small external influence may soon have a much larger influence on the state of the subsystem.

When we extend the limits of subsystems, more and more becomes internal, but at all levels external influences do contribute. For the Earth as a whole the sun is such an external system. Variations in the influence of the sun are stochastic perturbations for the Earth.

Looking in further detail and more quantitatively, it’s clear that stochasticity is a very important factor, in many cases totally essential, in some others less so. The large-scale decadal and multidecadal variability of the system are cases where the role of stochasticity is not known.

When the system is complex even the deterministic chaos may lead to results much closer to those of a stochastic system than those of simple chaotic systems. The large number of variables leads often to a system that agrees extremely well with predictions done assuming that it’s stochastic. Much of statistical thermodynamics is useful for exactly that reason.
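A small numerical illustration of deterministic chaos behaving statistically: the logistic map x → 4x(1−x) is fully deterministic, yet its long-run time averages agree with the predictions of its invariant (arcsine) distribution, which has mean 1/2 and variance 1/8. Those facts about the map are textbook results; the iteration count and starting value below are arbitrary choices.

```python
# Iterate the fully chaotic logistic map (r = 4) and compare its time
# averages with the moments of the invariant arcsine distribution.
x = 0.123456789          # arbitrary (non-special) initial condition
n = 200_000
total = total_sq = 0.0
for _ in range(n):
    x = 4.0 * x * (1.0 - x)
    total += x
    total_sq += x * x

mean = total / n
var = total_sq / n - mean * mean
print(mean, var)   # close to the stochastic predictions 0.5 and 0.125
```

The deterministic orbit is unpredictable in detail, but its statistics match what one would compute by treating the variable as a random draw from the invariant distribution, which is the sense in which a complex deterministic system can agree extremely well with a stochastic description.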

Pekka Pirilä asserts (lucidly) “When the system is complex even the deterministic chaos may lead to results much closer to those of a stochastic system than those of simple chaotic systems. The large number of variables leads often to a system that agrees extremely well with predictions done assuming that it’s stochastic. Much of statistical thermodynamics is useful for exactly that reason.”

This lucid conclusion is beautifully stated. Thank you, Pekka Pirilä!

The observed monotonic rise of global energy imbalance measures (such as sea level), coupled with the observed stability of annualized measures of radiative energy balance, is strong evidence that the Earth’s aggregate climate system has the dynamical characteristics that you describe.

Fred, in his comments Held states that he is assuming the spatial structures of warming are the same. This is obviously not going to be the case in situations of changes in heat transport. You are confusing yourself and others.

Pekka and Steven – I believe you’ve both misread Held (or possibly I’m the one who misread him but I don’t think so).

Pekka – your claim that Lindzen et al could argue for low climate sensitivity and a predominant role for natural variability post-1950 involves a logical contradiction. The lower the climate sensitivity, the smaller the role of natural variability (see Held for the reasons, but I think they are fairly obvious). Indeed, with current mid-range sensitivity estimates, natural variability cannot easily account for more than a minor part of post-1950 warming, but with the low sensitivity estimates Lindzen, Spencer, and others have claimed based on natural variability (ENSO) data, natural variability would become even more trivial. They do indeed invoke albedo, but unless albedo can change radically and persistently with no preceding cause (no evidence for this long term post-1950), their invocation of albedo as a negative feedback weakens rather than strengthens a role for natural variability. A predominance of natural variability requires very strong amplifying feedbacks for warming – i.e., very high climate sensitivity.

Steven – Held did not assume identical spatial structures for natural and forced variability. Rather, he correctly pointed out that the observed spatial structure of post-1950 warming is inconsistent with a predominant role for natural variability, based on its having a different spatial structure from forced variability. It could only have such a different structure if it played a role small enough not to distort the observed pattern.

I realize that no conclusion in science is immutable. Nevertheless, unless the principles discussed here and in many other venues by Held and others are shown to be invalid based on new and unexpected evidence, it’s hard to escape the conclusion that for the post-1950 and post-1975 warmings, more than a minor role for natural variability is untenable (the same can’t be said for other intervals where we have no OHC data). The actual proportion is uncertain, of course – 20 or 25 percent is possible, a value close to zero is possible, but a value approaching 50 percent is not possible by any reasonable accounting, even allowing for uncertainties. There are many uncertainties in climate phenomena, but this doesn’t appear to be one of them, absent surprising new revelations that would compel us to rethink basic principles of geophysics. This is one example of a strong expert consensus that is fully justified by the evidence, even though consensus is not equally justified in various other areas. Interested readers should review all of the preceding discussions and linked articles for a useful perspective on this.

Fred – as I understand it, sensitivity is a measure of feedback. You can have a low positive feedback and a large forcing from “internal” variability and end up with an increase in global temp that is mostly due to the forcing. I don’t follow why any variation can’t be large when the sensitivity, or feedback, isn’t well characterized.

“You are more than several years behind the the science on natural variability Barry – and arguing from a position intellectual inflexibility.”

You must think many questions have been disingenuous. Perhaps that was what prompted the desultory ad hom.

“Here’s a few to start with – come back when you have a little more depth and a little less of the climate partisan.”

Thanks for the delicious invitation.

I have a moderate knowledge of some ocean/atmosphere systems and their impacts on global and regional climate, particularly ENSO, AMO, AO and PDO (there are many other indexes).

As far as I understand it, PDO is mainly (but not limited to) changes in temperature patterns in the Northeastern Pacific, almost certainly teleconnected to ENSO, with impacts thousands of kilometers away, such as drought and precipitation changes in various locations. Causes are uncertain, potentially including ocean gyres, winds, and/or ENSO. The periodicity of the PDO is very much in doubt, as there have been only 2 cycles in the last ~100 years. Prior to these two cycles there appear (from proxy records you yourself have linked to) to have been multi-centennial periods of one phase (e.g., cool from 1000 to 1300), and in the last 15 years the PDO has fluctuated on sub-decadal scales several times.

While PDO is unlikely to be responsible for centennial-scale warming, it is quite possible that it has had influence on 20 – 30 year global surface temperatures for the last 100 years.

I disagree that it is predictable, and you have provided no studies indicating that global climate is in a cool mode for the next 10 – 30 years (mainstream, cutting-edge science?). Most recent research that I have read indicates that it is unpredictable.

Thank you for the link to the article on the recent study. I was gratified to have one of my opinions supported by the leading researcher.

“…the climate is far less chaotic than the rapidly changing weather conditions…”

You could not make your ‘cool mode’ prediction if climate was chaotic, no?

I read the other cites – thanks again. There wasn’t much new there. The full text of the Ding/Latif hindcast paper, when it becomes available, should be interesting.

If you haven’t already checked these out, they may be worth your acquaintance. NOAA glossary on PDO, a couple of other pages from them, and a few pertinent studies from the last few years (full versions).

Causes for the PDO are not currently known. Likewise, the potential predictability of this climate oscillation is not known.

However, these decadal cycles have recently broken down: in late 1998, the PDO entered a cold phase that lasted only 4 years followed by a warm phase of 3 years, from 2002 to 2005. The PDO was in a relatively neutral phase through August 2007, but abruptly changed in September 2007 to a negative phase that lasted nearly 2 years, through July 2009. The PDO then reverted to a positive phase in August 2009 (Figure 5) because of a moderate El Niño event that developed at the equator during fall/winter 2009–2010. This positive signal continued for 10 months (August 2009–May 2010) until June 2010, when persistently negative values of the PDO initiated and have remained strongly negative through autumn 2012.

Whether the argument of Held can be applied to the case where the albedo is varying and is the main factor driving the warming can be argued both ways. On the one hand he’s not discussing at all the possibility that a lower albedo from natural variability would have been the main driving force. On the other hand it’s true that if it is, it must also provide the imbalance driving the warming, even in situations where earlier warming has led to larger emissions.

In all cases forcing must be compared to forcing, not to the remaining imbalance after the Planck response has taken its share and left only a fraction of the full forcing to add to the warming.

To me the warming from the lows of the period 1950-75 to the present level seems clearly to be due to additional CO2. My best guess is that over the 60-year period the human contribution is close to the total warming, i.e., approximately 100%. Over shorter periods of rapid warming the share has been less, but now 100% seems a fair first estimate. That would correspond to the assumption that we were close to the minimum of the natural variability both 50 years ago and now.

The above conclusion is influenced by what I know about research results, but it cannot be proven by those results.

It appears to me too common on both sides of the argument to attack the weaker arguments of the opposing side while mostly avoiding the stronger ones. That’s natural in a public debate, but it’s not optimal for the advancement of the science.

Jim2 – You should probably review the extensive prior discussion and links for a good perspective on this. It’s certainly possible for internal variability to induce significant warming in the absence of high climate sensitivity, but in that case, ocean heat content (OHC) would show a substantial decline rather than the observed rise since 1950.
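The reasoning here can be made quantitative: if an unforced internal rearrangement had warmed the surface, the extra outgoing radiation would have to be paid for out of stored ocean heat. A rough sketch (Python; the 0.5 K sustained warming, 2 W/m^2/K feedback parameter, and 30-year window are illustrative assumptions, not numbers from the thread):

```python
# Back-of-envelope: heat that an internally driven surface warming
# would drain from the ocean through the extra outgoing radiation.
delta_T = 0.5            # assumed sustained unforced surface warming, K
lam = 2.0                # assumed net feedback parameter, W/m^2/K
area = 5.1e14            # Earth's surface area, m^2
years = 30.0
seconds = years * 3.156e7

extra_olr = lam * delta_T                 # extra outgoing flux, W/m^2
ohc_loss = extra_olr * area * seconds     # energy drawn from storage, J
print(ohc_loss / 1e22)   # roughly 48 (in units of 10^22 J) -- a large OHC decline
```

Under these assumptions the ocean would have had to lose on the order of tens of 10^22 J, whereas observed OHC rose over the period, which is the contradiction pointed to above.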

Fred, he is assuming they are the same spatial structures – that of the observed warming. He thinks it more likely that a forced response would create such a pattern. If he is wrong about that, and it is actually the internal variability of OHT that would create the pattern and not that of additional GHGs, then his calculations become moot. I’m not sure that he is including the possibility of a long-term change in OHT in his thinking, but in general I would say you are reading him correctly, since he does mention the AMO. I have to admit I’m surprised. He is not normally such a renegade from the consensus view, and that view is that increasing heat transfer warms the world. I’d also like to see whether he figures out if the Indian Ocean could really supply enough additional energy to both warm the Atlantic in OHC and make up for the reduction in OHC the Atlantic should have had as the AMO went from negative to positive during the same time period, using his calculations. That increase in Atlantic OHC, considerably above that experienced in other oceans during the phase change of the AMO from negative to positive, seems to indicate his calculations are mistaken.

Professor Curry,
There is no question that your interpretation is more intelligent than those of JNG and Tamino.

However, I suspect that your eyeball estimate of the amplitude of variation of the POGA-C was a little over the top. Fourier analysis, empirical mode decomposition or even best-fitting sinusoids to any of the modern temperature series (all of them) suggest that the amplitude of variation of the multidecadal cyclic signal (periodicities greater than 20 years) is around plus or minus 0.16 degrees K. This should provide us with an approximate upper limit on the low-frequency contribution of “natural variation”, if one accepts a loose assumption of approximate regularity in period and amplitude. My own eyeball estimate of POGA-C suggests (remarkably) that Xie’s variation is quite compatible with this level of estimate, once drift and higher-frequency content are removed. The very low frequency trend in the data remains unchanged over the period 1976 to 2013 and amounts to slightly less than 0.1 deg K per decade. Once again it provides indirect evidence that the AOGCMs are on average overestimating climate sensitivity by at least 60%, and this seems to be perfectly compatible with all recent energy balance studies based either on matching ocean heat content or on direct estimation of radiative flux changes.

Although this observation is not important scientifically, it is perhaps important politically, since it could still leave open the possibility that “more than 50% of the late 20th century warming” is explained by external radiative forcing including GHGs.

A clarification: my eyeball estimate was designed to provide a reasonable ratio of the trends, not so much a reasonable trend. In this regard, using the same method for both trends, and then taking their ratio, substantially reduces the bias from my simple eyeball estimate.

Some idea of the changes that Kosaka and Xie have forced in their calculations can be gained from their figure Extended Data Figure 3 | Net radiative imbalance and ocean heat content increase in POGA-H and HIST.

From that we can learn that the average net radiative imbalance since 1998 has been about 1.2 W/m^2 in their HIST simulations, and about 1.6 W/m^2 in their POGA-H simulations. Thus the overall warming of the Earth system has been about 30% higher in POGA-H, while the average surface temperature trend has been flat in POGA-H but has risen by about 0.3C in 15 years in HIST. Thus 25% less warming has led to a rather rapid rise in GST, while the stronger warming has resulted in zero change in temperatures. The radiative imbalance of HIST is high in comparison with most estimates, while that of POGA-H is even higher.

The reason for that paradoxical development is that they remove a lot of energy from the surface ocean of Eastern Pacific. A natural conclusion is that they remove more heat than most estimates present as the long term average heating rate of the whole Earth system, and they do that by forcing that deviation.

In further figures they also show the changes in OHC. The total increase over the whole period 1950-2012 is slightly larger in POGA-H than in HIST, but the change over the period 1998-2012 is essentially the same in spite of the 30% larger radiative imbalance. That seems to indicate that they do not conserve energy, but just remove the extra heat in POGA-H (or add it at some earlier periods).

Not particularly convincing, and certainly worth some explanations by the authors.

Some more numbers based on digitizing that same figure I have discussed above. All these numbers refer to 15 years from 1998 to 2012.

In HIST simulation the average TOA net imbalance is 1.16 W/m^2. That leads to annual net flux of 1.87 10^22 J/a.

In POGA-H simulation the average TOA net imbalance is 1.52 W/m^2. That leads to annual net flux of 2.45 10^22 J/a.

The average total OHC increase is in both scenarios 1.50 10^22 J/a. Thus in HIST 80% of the net flux goes to the oceans and 20% to continents, melting ice, and the atmosphere. That’s reasonable. In POGA-H we have an extra imbalance of 0.58 10^22 J/a that just disappears, as it has no place to go. That’s 30% of the imbalance in HIST.
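The conversion from a TOA flux in W/m^2 to an annual energy in 10^22 J used in these numbers is just flux × Earth's surface area × seconds per year. A quick check (Python; 5.1×10^14 m^2 and 3.156×10^7 s are the standard round values, assumed here rather than taken from the comment):

```python
AREA = 5.1e14       # Earth's surface area, m^2
YEAR = 3.156e7      # seconds per year

def annual_joules(flux_wm2):
    """Convert a global-mean TOA imbalance (W/m^2) to joules per year."""
    return flux_wm2 * AREA * YEAR

print(annual_joules(1.16) / 1e22)  # HIST:   ~1.87 (10^22 J/a)
print(annual_joules(1.52) / 1e22)  # POGA-H: ~2.45 (10^22 J/a)
```

The difference, about 0.58×10^22 J/a, is the extra imbalance discussed above.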

Furthermore the recent ARGO measurements indicate a rate of OHC increase of 0.63 10^22 J/a from 2005 to 2012 for depths 0-2000 m. That’s only 40% of the OHC warming of the model. In the ARGO data the 0-2000 m rate of warming differs somewhat from the estimated longer trends, but not nearly as much as the 0-700 m data, where the warming has been very weak over the whole period of ARGO data. Estimates over longer periods tell us that most of the warming has occurred in the top 700 m, but the ARGO data behave differently. (My estimates, based on the error bars shown for annual data by NOAA/NODC for the years 2005-2012, are 0.27±0.25 10^22 J/a for 0-700 m and 0.63±0.29 10^22 J/a for 0-2000 m.)

The discrepancies between the data and the model, as well as the size of the imbalance of the energy budget in POGA-H, make me wonder what we can really learn from this exercise. Getting surface temperatures right doesn’t prove much when the energy balance is seriously broken.

Judith in the first post on the subject you came to a conclusion that 50% or more of the 1970s-1990s warming might be due to internal variability. In this post your conclusion seems to be it’s difficult to disentangle forcing from variability. Does that mean you are backing away from your first statement?

The Kosaka and Xie paper has a very good model for the natural variability. It is somewhat circular at its core, but with ENSO as the source it has even Tamino saying “the agreement is outstanding”, and he is competitive about such things.

The residuals left from the fit are small, with the bigger deviations explainable by volcanic eruptions.

What is left is the significant 0.8C warming and 1.2C warming over land. Are they really going to try to pin that on the even longer term Pacific decadal oscillations?

“The IPCC AR4 attribution statement, whereby most (>50%) of the warming in the latter half of the 20th century is anthropogenic.”

That statement was 100% based on the assumption that the current pause would not happen, because the assumption about natural variation made then was that it was in decline and that therefore manmade warming was required to make up the difference. Once you let natural variation increase, there is no possibility of attribution using the models, and hence no reason not to say that all the warming is caused by nature, because you cannot tease out the manmade contribution any more.

Neilson-Gammon, Tamino and all the other self-righteous, overly-pessimistic blowhards either don’t want to understand this basic point or they are using the famous “gut feeling” argument of Dessler, Trenberth and lately Von Storch. Well, we on planet Earth prefer facts to fantasy, and those tell us there is nothing to suggest we are in anything other than a continuation of the recovery from the Little Ice Age, causing very mild, beneficial warming.

The tone from N-G and Tamino is typical of these eco-warriors, none of whom have ever done diddly for planet Earth. Maybe it’s time skeptics got nasty too. Current energy policy will lead to energy scarcity. Without energy, life is “brutal and short”. These faux-greens just assume they are somehow superior because they believe in a thermageddon that is entirely speculative and pessimistic, along with an energy policy that is anti-industry and anti-human. This despite most of them hypocritically having just as large a carbon footprint as the rest of us. In short they need to closely examine their own morals rather than passing judgement on others.

Pekka: The second point is that the text refers specifically to deterministic systems, while I emphasize that it’s wrong to consider the atmosphere as a deterministic system.

I have already recommended you several good textbooks on non-linear dynamics (called chaos theory by the general public). You apparently still haven’t educated yourself, which leads to embarrassingly stupid comments like the one above.
I generally always read your comments because you show the right ability to quantitatively evaluate the consistency of different statements. And consistency is of course the basis of any science.
But when it comes to non-linear dynamics you strangely show an ignorance which makes you forget any consistency or even logic. OF COURSE the atmosphere is deterministic! ALL of classical physics is deterministic. Only quantum mechanics is fundamentally probabilistic, but we are not concerned with QM here.
The atmosphere (and the oceans as well, for that matter) obeys Navier-Stokes. Its dynamics is fully described by Navier-Stokes. Even the GCMs try to obey Navier-Stokes, because they must.
Indeed if any system, theory or model of fluids violated Navier-Stokes, it would violate energy conservation, momentum conservation, mass conservation, or some combination thereof.
And Navier-Stokes is a fully deterministic system of PDEs. The solutions of N-S are obviously and necessarily deterministic. How you can say that Navier-Stokes is not deterministic is beyond me.
As the Chief rightly says, you confuse deterministic and predictable. If you took a couple of days to study the Lorenz equations, you would realize that they solve a 2D fluid-dynamics setting with a few approximations. So you are of course totally wrong – the Lorenz equations are just 2D Navier-Stokes. That means both soundly physical AND deterministic.
So yes, clearly, the atmosphere is deterministic, and it was precisely for such cases, and to keep uninformed people from misunderstanding everything, that the term deterministic chaos was coined. This is of course a pleonasm, because a chaotic system is always deterministic.

Careful Fanny, that is easy to verify. If there is a +/- 5 Wm-2 range of latent heat uncertainty, you just run models with the two limits and compare results. There is a +/-17 Wm-2 range of uncertainty in the surface imbalance at any given time, try that range of uncertainty. Once you find model sensitivity is +/- 4C, tell me how much is due to CO2.

Stochastic models rule because they do a better job of fitting the variability with the fewest free parameters, thus meeting the information-criteria measures. That’s why Pekka is correct in saying that stochastic analysis is much easier to apply – and we can actually use it for designing stuff!

Wouldn’t the outcome depend on the amplitude of the stochastic process? If the amplitude of a sine wave is 1000 peak to peak, and you superimpose noise with an amplitude of 0.001, the sine wave is dominant. Swap the amplitudes and noise is dominant.
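The point is easy to demonstrate: which component dominates a mixed signal is set entirely by the relative amplitudes. A minimal sketch (Python/NumPy; the peak-to-peak amplitudes of 1000 and 0.001 are the ones given in the comment, the uniform noise model and sample counts are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 10_000)

def mix(sine_p2p, noise_p2p):
    """Sine wave plus uniform noise, each specified peak-to-peak."""
    sine = (sine_p2p / 2.0) * np.sin(2.0 * np.pi * t)
    noise = rng.uniform(-noise_p2p / 2.0, noise_p2p / 2.0, t.size)
    return sine, noise

# Case 1: large sine, tiny noise -> the sine carries nearly all the variance.
s, n = mix(1000.0, 0.001)
print(np.var(s) / np.var(s + n))   # ~1.0: the sine dominates

# Case 2: swap the amplitudes -> the noise dominates instead.
s, n = mix(0.001, 1000.0)
print(np.var(n) / np.var(s + n))   # ~1.0: the noise dominates
```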

I think they are just talking past each other. With a non-linear system you can have both dissipation and absorption from one boundary to the next with indeterminate time scales. If you know the time scales, it is deterministic; if you don’t, you just have a probability.

Take atmospheric and surface absorption, the true surface absorbs 330 Wm-2 and the atmosphere absorbs 150Wm-2. Changing the atmospheric absorption changes the surface absorption both with respect to external energy and dissipated energy. If you assume a reference energy balance at one surface you will get a different response at the other surface. Classic Thermo frame of reference 101.

The internally stored heat would have to supply the thermal energy to support surface warming. Yet the internal heat is rising as measured by OHC studies.

Where is this extra heat coming from that is warming both the depths of the ocean and the surface?

Navier-Stokes can’t answer that unless you introduce a forcing function. Navier-Stokes is from the class of continuity equations that require an external forcing to generate a response. With dissipation, and no forcing, the response will eventually damp out.

Cappy, I asked where it came from. You said it came from “time”, which is not where.

So according to your logic, since we have had about 1C of land warming in the last 100 years, then in the last 300 years, we should have had 3C. Also since you say time is responsible for warming, we will have another 1 C in the next century, bringing it to 4C.

Webster, as I said, the heat is coming from the recovery. There was a decrease in ocean energy from ~1400 to 1700, followed by a recovery or recharge of that energy from 1700 to the present. It does not take much of an imbalance to cause a degree of change in average ocean temperature in that amount of time. Time is definitely a factor. As the oceans recharge, the rate of ocean heat uptake decreases, so there is an increase in surface temperature. With that long a time frame, solar is as good a “cause” as any, with volcanoes and albedo feedback changing the rate of ocean recharge.

There is of course a lot of controversy about past solar TSI, but Be10 and C14 do indicate that there was a significant change in solar forcing at the surface which matters. The Bard TSI is a good fit with the Oppo 2004, but not perfect. There is internal variability plus volcanoes etc. etc.

The biggest problem is the quality and length of the instrumental data. Land surface is an average of Tmax and Tmin, while SST is an average at various depths with poor spatial coverage.

You may not find it all that compelling, but that Oppo Indo-Pacific warm pool appears to be one of the best ocean paleo reconstructions available. Like it says in the post, you can find anything your heart desires, but the timing tends to fit the nominal rate of ocean heat uptake.

No one knows whether the universe is deterministic or not. We can, however, rely on the fact that we have insufficient information about the initial state of the system and insufficient processing power to predict the weather or the climate from a model even if we had perfect information.

So it’s really just woolgathering to discuss whether it’s deterministic or not because either way we’re phucked when it comes to a predictive model based on first principles. Climatology is the way to go not climate modeling.

Its dynamics is fully described by Navier Stokes. Even the GCMs try to obey Navier Stokes because they must.

NS is always problematic; in all models it must be remembered that the “experiment” is not on the NS equations, but always on the program representing them (Gallavotti, Fluid Mechanics).

GCMs are not fully representative of the NS equations, as the state-of-the-art GCMs substitute the Coriolis parameter for the last full equation of motion, reducing the dimension of the equations to 2.5d (where no theory of statistical mechanics exists).

I’m not discussing fundamental mathematics, I try to concentrate on what’s relevant for the issues being discussed.

If we go into the nitpicking that you seem to choose, we should recognize that Navier-Stokes equations are not fully accurate for the real physics, because they are equations for a continuum gas or fluid, not for one formed by individual particles. Deriving the Navier-Stokes equations requires exactly the kind of approach that I have been promoting. You don’t even have your “exact” equation without that assumption. At that point your argument is self-contradictory.

You could continue to say that both the equations of classical mechanics and the Schrödinger equation are deterministic, when we discard the Quantum Mechanical issues related to state preparation and measurement (Heisenberg’s uncertainty principle). That’s true also for the equations behind QED although solving them accurately has been impossible.

But all that is irrelevant. The value of methods based on stochasticity varies from case to case. Generic statements cannot be given on that, but an analysis of each case must be performed taking into account the specific assumptions made in framing the question.

Consider that Navier-Stokes is used often to figure out the amount of “sloshing in the bucket”, to put it somewhat crassly. Next consider that if CO2 didn’t exist, there wouldn’t be anything to slosh (except for the atmosphere). See Lacis for the snowball earth mechanism.

So what happens when we add more and more CO2 into the mix?

The bizarre notion of people like The Chief is to suggest that this extra CO2 will reverse the process, simply because some possibility exists to move in a cooling direction. That would take some exceedingly unlikely “alignment of the planets” type of scenario for that outcome to transpire. Yet these same people refuse to entertain the possibility of catastrophic warming.

Only Quantum Mechanics is fundamentally probabilistic but we are not concerned with QM here. The atmosphere (and oceans as well for that matter) obeys Navier Stokes. Its dynamics is fully described by Navier Stokes.

I have a lot of respect for your understanding of non-linear dynamics, but we have to always remember that, in science, the application of any mathematical model to the real world is a simplification, and may be subject to reconsideration.

My specific objection involves the question of scale, when the Reynolds number becomes so low that flow is constrained to be laminar. This seems to me to contradict the assumption you made in your post:

The best way to imagin [sic] a full spatio-temporal chaos theory is to imagine that there is a different chaotic oscillator like the Lorenz butterfly) at every point of space (so there is an infinity of them) and that they are all coupled strongly with each other in a non linear and time dependent way.

Because, when the Reynolds number is sufficiently low, the behavior of the atmosphere at any point is not independent of its behavior at nearby points, I cannot see how the number of “chaotic oscillators” can be infinite. Of course, the number may be so large that it’s effectively infinite, but is that the same thing from a perspective of spatio-temporal chaos theory? (AFAIK the usual term for such a number is “semi-infinite”, is this correct in this context?)

The question regarding QM has to do with the relative scales of the smallest possible vortex (due to low Reynolds number) and the breakdown of the wave function introducing true quantum indeterminacy into the “random” noise that controls the growth of parasitic vortexes at their smallest scale. Is the difference in scale really so great that we can say it’s fully deterministic?

Of course, this is a different issue from the one regarding which sources of “random” noise that control the growth of parasitic vortexes are actually within the system being modeled. For instance, the flapping of a butterfly’s wings can be “random” WRT the atmosphere, since the control of its wings is vested in a nervous system that responds to many things outside the atmosphere. The same could probably be said for the behavior of leaves on branches, the location of trees, buildings, hills, etc. Even these should probably be considered as stochastic perturbations to the atmospheric system.

When we are looking for effectively random influences, we should include everything from individual particles of cosmic radiation, and probably much more importantly effects of solar origin like those related to solar activity.

But even without all that, the behavior of the atmosphere dominantly follows the laws of statistics. The effects that can be considered in that way include all dissipative processes. In the totally unrealistic world that Tomas seems to be advocating we have no real dissipation. Nothing could be further from the truth. We have dissipation because the atmosphere consists of a really huge number of molecules. Each of them has, at a specific time, its own state (or an equivalent role in the wave function in the Quantum Mechanical approach). All molecules are affected by others in a way that can very well be described by a statistical approach.

For all practical purposes the calculation can be done assuming that these influences are stochastic. In some cases it may be necessary to consider conservation laws at the level of particles, usually that can be done at a higher level.

Whenever some turbulence is created or dissipated away, the processes behave as stochastic. Therefore dissipation effectively destroys the signs of the butterfly flapping its wings in the Amazon.

It’s difficult for me to understand how Tomas can be so ignorant of real physics, and willing to replace that with a formal mathematical theory that cannot be solved, and is therefore worthless in drawing conclusions about the real world in the way he tries to do. It’s the role of physicists to understand, which of the possible approaches are valuable in solving each problem, and which might be of value when some totally different questions are asked.

My point is that there are quite a few interpretations and that quantum mechanics began as a statistical trick to make things work. To that point it was the only fundamentally probabilistic application in physics. If there are no unknowns or hidden energies/particles, it could be deterministic, eventually, but until then you have got to consider its limits.

I believe the reverend is correct. The evolution of the wave function is unitary and time reversible. Statistical (classical) mechanics is probabilistic, not QM. The usual source of confusion about QM uncertainty stems from the fact that position and momentum cannot be simultaneously known by an observer thus one or the other is “uncertain”. This is not fundamental property of matter/energy IMO but rather a fundamental restriction on the observer. The universe has perfect information with regard to both position and momentum we simply can’t measure both at the same time. In Bohmian Mechanics position is a hidden variable. I tend to think the best place to look for the hidden variable is in so-called dark energy, which is thought to compose some 70% of the stuff that makes up the universe. Dark energy reveals itself only through gravitational interaction with baryonic matter, and that’s about all we know about it. The distribution of dark energy is thought to be homogeneous, but I posit that it isn’t quite homogeneous, just as the cosmological constant isn’t quite zero. Inhomogeneity in the distribution of dark energy is the hidden information. We can call that “The Springer Interpretation”. ;-)

The usual source of confusion about QM uncertainty stems from the fact that position and momentum cannot be simultaneously known by an observer thus one or the other is “uncertain”.

Correct.

This is not fundamental property of matter/energy IMO but rather a fundamental restriction on the observer. The universe has perfect information with regard to both position and momentum we simply can’t measure both at the same time.

Wrong.

In QM the particles do not have well defined positions, when they are not observed, they have only probabilities of being found at various positions, if and when the position is determined by a measurement.

Probabilities enter when the system being studied interacts with its surroundings, as it must do when the state is prepared or observed. The Copenhagen interpretation of Bohr gives an idealized definition of how measurements are performed. This interpretation has turned out to be very useful in practical use of QM. It doesn’t answer all philosophical questions related to the concept of measurement (like that about Schrödinger’s cat), but the approach is good enough for most physicists that use QM in their work.

Springer, ” In Bohmian Mechanics position is a hidden variable. I tend to think the best place to look for the hidden variable is in so-called dark energy which is thought to compose some 70% of the stuff that makes up the universe. ”

It is still working under an assumption that it is deterministic. It may be, maybe not. As they say down here, there are several ways to skin a catfish.

My criticism was fully justified and totally correct.
Everybody can notice how you avoid admitting that it was your statement that I quoted which was totally wrong.
So I repeat, do you still consider that Navier Stokes is NOT deterministic ?

Or do you want to nitpick and maintain that Navier-Stokes is wrong and doesn’t describe the atmosphere (or oceans for that matter) anyway, because we should recognize that Navier-Stokes equations are not fully accurate for the real physics, because they are equations for a continuum gas or fluid, not for one formed by individual particles?

So which one is it Pekka ? Because if it is neither then the statement that the atmosphere is not deterministic is trivially wrong.

I cannot see how the number of “chaotic oscillators” can be infinite. Of course, the number may be so large that it’s effectively infinite, but is that the same thing from a perspective of spatio-temporal chaos theory?

It is not only infinite but uncountably infinite.
In any field theory (like QFT, N-S or spatio-temporal chaos), the state of the system is given by the state of the field in every point of the space.
As the space of all possible fields is a Hilbert space (square integrable functions) and the number of points is uncountably infinite, the number of local “oscillators” is uncountably infinite.
For instance, Feynman’s path integral in QFT must be taken over an uncountably infinite number of functions.

Now supposing a finite or countably infinite number of “oscillators” (the right term is the local dynamics of the field) is an approximation which is used to better understand how spatio-temporal patterns appear and vary when a large number of local “oscillators” interact. This is fully relevant for oceanic oscillations which are just an example of spatio-temporal interacting patterns.
This is exactly what Tsonis does in his paper – taking a small number (4) of oscillators (PDO, AMO etc. indexes) and studying their interactions.
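For a concrete toy version of this idea (purely illustrative; Tsonis’s actual analysis uses observed climate indices, not Lorenz equations, and the coupling strength here is an arbitrary choice), one can couple a handful of Lorenz oscillators through their x components and let them interact:

```python
import numpy as np

def lorenz_deriv(state, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def coupled_step(states, eps, dt=0.01):
    # Nudge each oscillator's x toward the ensemble-mean x (mean-field coupling)
    mean_x = states[:, 0].mean()
    new = np.empty_like(states)
    for i, s in enumerate(states):
        d = lorenz_deriv(s)
        d[0] += eps * (mean_x - s[0])
        new[i] = s + dt * d     # forward Euler step
    return new

rng = np.random.default_rng(0)
states = rng.normal(0.0, 1.0, size=(4, 3))  # four oscillators, echoing Tsonis's four indices
for _ in range(5000):
    states = coupled_step(states, eps=5.0)
```

Depending on the coupling strength eps, the oscillators drift between episodes of near-synchronization and independent wandering, which is the qualitative behavior the network-of-oscillators picture is meant to capture.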

As an aside, here is another ignorant statement of yours: “The effects that can be considered in that way include all dissipative processes. In the totally unrealistic world that Tomas seems to be advocating we have no real dissipation.”

If you had the slightest notion about non-linear dynamics (or were ready to learn something new, which you apparently are not) you would know that dissipation is a necessary condition to observe chaos in fluids.
This is because in a dissipative system volumes in the phase space are shrinking (unlike in conservative systems, where Liouville’s theorem preserves them), while the existence of at least one positive Lyapunov exponent leads to expansion in at least one phase-space direction.
It is these 2 phenomena that cause chaos in physical systems like the atmosphere.
For conservative systems the volumes in the phase space are preserved and the cause of Hamiltonian chaos (KAM theory) in such systems is different.
But as we speak here about the atmosphere, this case is irrelevant.
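This stretch-plus-contraction mechanism is easy to probe numerically. A minimal sketch (illustrative only, not from the thread) estimates the largest Lyapunov exponent of the Lorenz-63 system by the standard Benettin renormalization trick; for these parameters the accepted value is roughly 0.9:

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4(s, dt):
    # One classical Runge-Kutta step
    k1 = lorenz(s)
    k2 = lorenz(s + 0.5 * dt * k1)
    k3 = lorenz(s + 0.5 * dt * k2)
    k4 = lorenz(s + dt * k3)
    return s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

dt, d0 = 0.01, 1e-8
a = np.array([1.0, 1.0, 1.0])
for _ in range(1000):                 # discard transient: settle onto the attractor
    a = rk4(a, dt)

b = a + np.array([d0, 0.0, 0.0])      # infinitesimally separated twin trajectory
log_growth, n = 0.0, 20000
for _ in range(n):
    a, b = rk4(a, dt), rk4(b, dt)
    d = np.linalg.norm(b - a)
    log_growth += np.log(d / d0)
    b = a + (b - a) * (d0 / d)        # renormalize the separation (Benettin's method)

lyap = log_growth / (n * dt)          # largest Lyapunov exponent, roughly 0.9
```

A positive result is the signature of exponential divergence along at least one phase-space direction, coexisting with the overall volume contraction of the dissipative flow.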

So Poincaré, Lorenz and more generally all the fathers of ergodic theory, like Boltzmann, share with me this totally unrealistic world in which dissipation is understood as one of the causes of deterministic chaos.
One wonders what the “realistic world” in which you live looks like.

Naturally I know very well what all the issues are that you are discussing, and naturally I have great respect for the pioneers of knowledge.

It should also be clear by now that I have no respect for the way you draw conclusions from their work and from later developments in mathematical physics. The first step is in this case to understand the mathematics, the second is to understand the relevance of each mathematical fact for specific questions about the real world. On this second step our ideas don’t meet at all.

In any field theory (like QFT, N-S or spatio-temporal chaos), the state of the system is given by the state of the field in every point of the space.

This is where I have a problem. The Navier Stokes equation may require “the state of the field in every point of the space“, but I would question whether the real atmosphere does, when, at a sufficiently small scale, the state at one point is not independent of that at another.

Moreover, let’s suppose it does, i.e. that the state of the field requires independent definition at arbitrarily small distances, then when those distances become smaller than the mean free path, wouldn’t the breakdown of the wave functions would be continually introducing noise from actual quantum indeterminacy?

Moreover, let’s suppose it does, i.e. that the state of the field requires independent definition at arbitrarily small distances, then when those distances become smaller than the mean free path, wouldn’t the breakdown of the wave functions would be continually introducing noise from actual quantum indeterminacy?

Argument by assertion. I could have done that, in fact I started out doing it and changed my wording, as can be seen by the double “would”. I don’t have anything beyond intuition to support my assertion. Do you?

Assignment: (answer here) “Prove that no matter how clever you are, and no matter how much money you spend on the newest, cutting-edge quantum pencil-balancing equipment, you can never get a quantum pencil to balance on its point for more than about four seconds.”

You are welcome AK. Definitely we both respect knowledge-seeking … and neither of us thinks it’s easy. Your questions regarding the triple interface of thermodynamics, classical dynamics, and quantum dynamics are among the toughest that 21st century science (and engineering) grapples with.

Advice: study new subjects from the viewpoint of at least three texts: one text that you admire greatly, another text that you disdain utterly, and a third text that you find unbearably tedious. Critique each text from the viewpoint of the other two. Then chart your own path.

Your question is clearly related to the fact that Navier-Stokes equation is not strictly an equation of any physical system but an equation of a mathematical idealization of a physical system. It’s an equation for a continuum, not for matter that consists of particles.

One might imagine a true continuum equation for quantum field theories, where particles do not exist in the same sense they do in classical physics and also in Quantum Mechanics of a fixed number of particles. Such an equation would, however, very likely show different behavior at very small distances in comparison with distances where the derivation of the Navier-Stokes equations from statistical mechanics can be considered accurate.

I think that the fundamental mathematical problems of the N-S equation that have led to its inclusion in the set of Millennium Problems come from the mathematical idealization that extends the equation to arbitrarily small distances, but this is an issue that I’m not at all certain about.

Thank you for your response. It was partly related to that, and partly to the fact that I can’t figure out exactly how quantum uncertainty is supposed to resolve in real-world situations. I’m assuming that the “break-down of the wave function” in classical thought experiments would translate to some sort of local entanglement between e.g. colliding air molecules, but I’m not sure exactly how to represent it.

I don’t see the actual issues as really determining anything in climate modeling, given the enormous amount of “random” noise entering the system at a local level anyway (e.g. butterfly wings), but it’s a question that certainly interests me, and may well be relevant to how the atmosphere should be treated as a chaotic system.

One thing, AFAIK, is that chemical expectations depending on statistical thermodynamic calculations appear to work even down to the level of single molecules isolated in single vesicles, although I’m not sure how much experimentation has actually been done to test this. (If any!) If so, that might mean that interacting wave functions can produce chemical results without needing the “breakdown”. Or something. I see resolving this question of quantum mechanics as critical in supporting the next few generations of bio-tech, which IMO represents one of the most likely avenues for solving problems of rising atmospheric pCO2.

AK “I can’t figure out exactly how quantum uncertainty is supposed to resolve in real-world situations.”

AK, textbooks that discuss these issues concretely include Nielsen and Chuang’s Quantum Computation and Quantum Information, in particular chapters 2,4,7,8; also Howard Carmichael’s Statistical Methods in Quantum Optics, volumes 1 and 2. As long ago as 1966, Laszlo Tisza wrote:

It is a noncontroversial statement that thermodynamics is a theory in close contact with measurement.
Yet the specific implications of this statement have changed beyond recognition since the formative years of the classical theory.

Today this is more true than ever.

Sincere best wishes are extended to you and your quest for a deeper understanding of these tough dynamical issues AK!

Pekka, I suspect that the reason NS is considered a grand challenge problem is just the chaos phenomenon. Numerical methods by their very nature cannot follow these details so that in the L2 or any other “normal” norm, the errors grow to large values. Statistical norms with time averaging are a frontier for research. I recently saw a book on this that proved the existence of statistical averages in some measure. Unfortunately, there is no statement about the dimension of this measure, so it may not have practical significance.

Reading through the detailed description of the challenge tells that it’s not about the chaos. That alone would not be enough to make it a grand challenge.

The problem is related to the observation that the solutions appear to blowup in a way that makes the solution “not smooth” in a mathematical sense. In other words divergent behavior involving values that exceed any preset limit tend to build up.

It’s common in physics that divergences are due to the behavior of the equations in the limit where the distances approach zero. In an infinite system they may also develop when distances grow without any limits, but my impression is that the first alternative is the key in this case.

If what I write is true, the solutions might be better behaved if the N-S equations were modified at distances comparable to intermolecular distances. That might well lead to additional practical problems in applying the equations. On the other hand, all numerical methods based on grids or finite elements do modify the equations at small distances. Thus the numerical models would not have the same problems of blowup, but might have their own serious problems in situations where the exact equations lead to divergent behavior (if the exact equations, indeed, do that).

The N-S problem relates to both the existence and smoothness of analytical solutions – rather than the behaviour of numerical solutions. Numerical solutions of course exist, are very approximate and instability can be addressed by changing the grid size.

There are many equations that have a physical basis – as does N-S – but that have no analytical solution. The simple first order differential equation of hydrological storage has an analytical solution for a rectangular and infinitely long basin – but for real problems is solved numerically.

dS/dt = I – Q where S = storage, I = inflow and Q = outflow
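A minimal numerical sketch of that storage equation, assuming a linear outflow closure Q = S/k (the k, inflow values and time step are hypothetical illustrative choices, not from the comment):

```python
# Forward-Euler solution of dS/dt = I - Q, closing the system with a linear
# reservoir Q = S/k. Inflow, k and dt are arbitrary illustrative values.
def simulate_storage(inflow, k=5.0, s0=0.0, dt=0.1):
    s, outflow = s0, []
    for i in inflow:
        s += dt * (i - s / k)     # dS/dt = I - Q with Q = S/k
        outflow.append(s / k)
    return outflow

steps = int(100 / 0.1)
outflow = simulate_storage([2.0] * steps)   # constant inflow of 2 units
# outflow relaxes toward the inflow; at steady state S = k*I and Q = I
```

The linear case happens to have an analytical solution too; the point is that for an arbitrary real basin geometry and inflow record, only the numerical route remains available, which is the situation described above for N-S.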

The challenge to find smooth, analytical solutions of N-S is an order of magnitude more difficult, is expected to yield insights into turbulence and – as they say – ‘we probably need some deep, new ideas.’

I have downloaded the Mumford paper – which seems a lot of fun. There are practical uses for stochastic statistics in hydrology in creating flood and rainfall pdfs. Chaos appears here as shifts in the means and variance of the time series of the data. It has been argued that this can be addressed using stratified data – giving much better probabilities for dry and wet regimes.

AK, I suppose in principle it’s possible for quantum fluctuations to filter up to the continuum scales. But generally that’s thought not to be the case. At least at very small scales as far as we can measure, there is an energy cascade to smaller scales, not the other direction. There is quite a bit of experimental evidence for this.

Similarly, it is possible for heat to flow from a colder object to a warmer one. The odds are just weighted against it (with them growing worse as the time period or difference in temperatures increases).
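That weighting of the odds can be made concrete with a toy Ehrenfest-style urn model (entirely invented for illustration: energy "quanta" hopping between a hot and a cold box, starting from a 75/25 split). The chance of a net cold-to-hot flow is real for a tiny system and collapses as the system grows:

```python
import random

def cold_to_hot_prob(n_quanta, trials=20000, steps=50, seed=1):
    """Toy Ehrenfest urn: n_quanta energy units start split 75/25 between a
    'hot' and a 'cold' box; each step one quantum, picked uniformly at random,
    hops to the other box. Returns the fraction of trials where the hot box
    ends with MORE energy than it started with (net flow cold -> hot)."""
    rng = random.Random(seed)
    hot0 = 3 * n_quanta // 4
    wrong_way = 0
    for _ in range(trials):
        hot = hot0
        for _ in range(steps):
            if rng.random() < hot / n_quanta:   # chosen quantum was in the hot box
                hot -= 1
            else:
                hot += 1
        if hot > hot0:
            wrong_way += 1
    return wrong_way / trials

p_small = cold_to_hot_prob(8)     # tiny system: wrong-way flow shows up a few percent of the time
p_large = cold_to_hot_prob(200)   # larger system: the odds collapse toward zero
```

Scaled up to the ~10^23 molecules of a macroscopic object, the same arithmetic makes the "wrong-way" outcome astronomically improbable rather than impossible, which is the point of the analogy.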

At least at very small scales as far as we can measure, there is an energy cascade to smaller scales, not the other direction. There is quite a bit of experimental evidence for this.

Yes, indeed. However, in the fields I’m really familiar with, there’s no reason to suppose that the flow of information follows, or is even in the same direction as, the flow of energy. This is certainly the case in biological systems, especially the neuron. And, of course, the digital information systems we use in our civilization.

I see no reason to model the climate, or other aspects of the biosphere, in terms of energy flows. Rather, model the flow of information, which (AFAIK) is generally more along the surface, while the primary energy flow is vertical (in and out).

My own (cerebral) modeling of turbulent systems, in terms of intuitive pictures, has the primary flow of information in the opposite direction from energy: energy flows primarily from larger-scale vortexes (and potentially other, more complex, meta-stable structures) to smaller ones, while information flows mostly upwards, primarily in terms of where and when the energy is extracted from larger structures. The actual behavior of structures at any scale is primarily a function of information rather than energy. This picture is, of course, based on analogies derived from the cell.

As I understand it, weather is chaotic and climate less so – like the seasons…

Thank you for the link to the article on the recent study. I was gratified to have one of my opinions supported by the leading researcher.

“…the climate is far less chaotic than the rapidly changing weather conditions…”

You could not make your ‘cool mode’ prediction if climate was chaotic, no?

Oddly – the article was about predicting multi-decadal climate shifts in the Pacific. I also linked to a NASA page on 30 year patterns in the Pacific. In the proxy data these last anywhere from 20 to 40 years – with significant changes over much longer periods in means and variance. The system is statistically nonstationary over millennia that we know of. As in the modern era – it is presumed that global temperature responds to the state of the Pacific. It is certainly the case that hydrological extremes – far in excess of anything seen in the modern era – in the Holocene followed the extremes of the Pacific state.

In a philosophical sense it is difficult to argue that the universe is anything other than deterministic – this philosophical argument was resolved long ago. As Tomas said – all of physics is fundamentally deterministic. Even quantum indeterminacy just includes the observer in the frame of cause and effect. The Schrödinger wave equation projects a probability function through time – although the probability density function itself seems more a convenience than a fundamental description of wave/particle duality. Even the apparent random motion of molecules is not truly stochastic as each interaction obeys Newtonian physics of force and reaction. Laplace’s demon might foresee every pathway but statistical mechanics relies on average properties of simple systems. Like the pdf for the location of a photon in space – statistical mechanics is a mathematical convenience that allows solution of otherwise intractable problems.

Complex dynamical systems such as Earth’s climate resist reduction to simple statistical mechanical analogies. ‘Atmospheric and oceanic forcings are strongest at global equilibrium scales of 10^7 m and seasons to millennia. Fluid mixing and dissipation occur at microscales of 10^−3 m and 10^−3 s, and cloud particulate transformations happen at 10^−6 m or smaller. Observed intrinsic variability is spectrally broad band across all intermediate scales. A full representation for all dynamical degrees of freedom in different quantities and scales is uncomputable even with optimistically foreseeable computer technology. No fundamentally reliable reduction of the size of the AOS dynamical system (i.e., a statistical mechanics analogous to the transition between molecular kinetics and fluid dynamics) is yet envisioned.’ http://www.pnas.org/content/104/21/8709.full

What we should remember fundamentally is that climate shifts are far from theoretical. Data exists in both paleoclimate and the modern era for abrupt climate change. ‘The Earth’s climate system is highly nonlinear: inputs and outputs are not proportional, change is often episodic and abrupt, rather than slow and gradual, and multiple equilibria are the norm.’ http://www.unige.ch/climate/Publications/Beniston/CC2004.pdf

Dynamical complexity is the fundamental operating mode of the system. It requires new methods of investigation of mode shifts – slowing down and dragon kings. The modes are stable – multiple equilibria – until the control variables change sufficiently to cause another climate shift. Fundamental periodicity includes decadal modes. We have been in a cool mode since the 1998/2001 climate shift – and these last for 20 to 40 years. Non-warming – or even cooling – over the next 10 to 30 years seems the obvious hypothesis.

Beyond that the system is entirely unpredictable but entirely deterministic. Solar variability is a control variable along with orbital eccentricities – and it should be said greenhouse gases. When a threshold is passed – the system fluctuates madly – dragon kings – and then settles into an emergent pattern as tremendous energies cascade through sub-systems. We may posit long term cooling as solar decline is amplified through the Pacific mechanisms. Perhaps larger scale cooling as an open Arctic increases snow and subsequent melt resulting in an abrupt shift in MOC. It has happened before – so the potential must be there. We may posit runaway warming but this seems really unlikely.

Pekka argues that an impossible calculation of a statistical mechanics of climate is in principle simpler than a deterministic solution. He may be right in principle – but the methodology doesn’t exist. Webster, FOMBS and Barry don’t have informed positions and so can be safely ignored on the science and regretted on the policy.

Neither deterministic nor statistical predictions are yet possible. Deterministic prediction fails because of uncertainties in data, processes and coupling – and because computer power is insufficient by orders of magnitude. Statistical prediction fails because the functions are unknown and possibly unknowable.

‘Finally, Lorenz’s theory of the atmosphere (and ocean) as a chaotic system raises fundamental, but unanswered questions about how much the uncertainties in climate-change projections can be reduced. In 1969, Lorenz [30] wrote: ‘Perhaps we can visualize the day when all of the relevant physical principles will be perfectly known. It may then still not be possible to express these principles as mathematical equations which can be solved by digital computers. We may believe, for example, that the motion of the unsaturated portion of the atmosphere is governed by the Navier–Stokes equations, but to use these equations properly we should have to describe each turbulent eddy—a task far beyond the capacity of the largest computer. We must therefore express the pertinent statistical properties of turbulent eddies as functions of the larger-scale motions. We do not yet know how to do this, nor have we proven that the desired functions exist’. Thirty years later, this problem remains unsolved, and may possibly be unsolvable.’ http://rsta.royalsocietypublishing.org/content/369/1956/4751.full

The chasm of ignorance surrounding this new climate paradigm is unfortunate if not surprising. It is perhaps the case – Tomas – that progress will be made one death at a time. Can we afford to wait that long? It seems we will find out.

As far as I am concerned – this is the worst of all possible concatenation of events in the political sense. The world is not warming for a decade or three – yet there exists a finite risk of catastrophic climate change in as little as a decade.

What I’m saying is that no generic principle can tell on the difficulty of the practical problems. No existence or non-existence proof of some specific type of chaos is of practical help. Analyzing a complex system like the Earth system is certainly very difficult. The solutions may involve both quasi-periodic oscillations and non-periodic state shifts. Some formally defined requirements for considering the system chaotic may be satisfied or not, but there are certainly features that have a chaotic nature.

It may be impossible to tell what is stochasticity and what is chaos in the behavior of the Earth system, but that doesn’t matter. Both may manifest themselves strongly or be of lesser significance. In practice the best answers to the relevant questions are obtained by studying the real world empirically and by studying quantitative results from theories and models.

My main point is that no proof that tries to shortcut a full quantitative analysis can tell us the quantitative accuracy and reliability of obtainable understanding and of projections into the future. Such knowledge can be obtained only through actual research on the real system and realistic models.

An additional point that I made is that adding stochasticity may affect predictability in both ways: it may add to the uncertainty, or it may reduce the amplitude of variations and thereby make the system more predictable. We have some understanding of what kind of stochasticity acts in each direction, but here again only case-specific research can lead to quantitative conclusions.

‘In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles. The generation of such model ensembles will require the dedication of greatly increased computer resources and the application of new methods of model diagnosis. Addressing adequately the statistical nature of climate is computationally intensive, but such statistical information is essential.’ TAR WG1 14.2.2.2

More and better observations are certainly essential. Perturbed model ensembles may at least provide the possibility of generating pdfs for the future state of the climate. However, recognising the similarity of climate behavior to that of the broad class of deterministic chaotic systems suggests other approaches as well to predicting the timing of tipping points.
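The ensemble strategy described in the TAR quote can be sketched with a toy example (my illustration, not a climate model): run many copies of a chaotic map from slightly perturbed initial states, and summarize the distribution of final states rather than trusting any single trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic_run(x0, r=3.9, n=100):
    """Iterate the chaotic logistic map n times from x0 (r=3.9 is chaotic)."""
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

# An ensemble of slightly perturbed initial states: individual members
# diverge unpredictably, but the ensemble statistics remain a meaningful
# description of the system's possible future states.
members = 0.5 + 1e-6 * rng.standard_normal(500)
finals = np.array([logistic_run(x0) for x0 in members])
print(finals.mean(), finals.std())
```

The point of the exercise: after enough iterations the members have forgotten their near-identical starting points, so the only well-posed prediction is the distribution (mean, spread, percentiles), which is exactly the “statistics of such ensembles” the TAR passage refers to.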

“In the proxy data these last anywhere from 20 to 40 years – with significant changes over much longer periods in means and variance”

Proxy data show no regular periodicity in PDO events. The same holds for very recent data. Your assumption for the next 10 – 20 years rests on two recent cycles, the persistence of which (20 – 30 years) appears to be anomalous in the longer record. Most research has concluded that the system is unpredictable. If you are so well-read, why do you persist in projecting a cycle that has only two iterations against the mainstream understanding of the non-predictability of the PDO?

Yes, there are significant climate shifts – Milankovitch cycles being the most obvious, but their basic cause is well understood. Not so for the PDO. The article on the recent paper suggests that Pacific patterns may be predictable (the abstract of their paper specifically mentions the PDO), but no prediction is given for the coming decades. Do you have any support for your cool-mode hypothesis? It certainly isn’t mainstream. If there is more to it than merely supposing the periodicities of the last two cycles will be repeated, do you have a better reference that specifically supports your thesis? Or any reference for a multi-decadal Pacific ocean–atmosphere system purportedly tied to global (not regional) temperature?

‘While the correlation displays decadal-scale variability similar to changes in the interdecadal Pacific oscillation (IPO), the LDSSS record suggests rainfall in the modern instrumental era (1910–2009 ad) is below the long-term average. In addition, recent rainfall declines in some regions of eastern and southeastern Australia appear to be mirrored by a downward trend in the LDSSS record, suggesting current rainfall regimes are unusual though not unknown over the last millennium.’ http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-12-00003.1

I linked to the graph earlier.

‘This study uses proxy climate records derived from paleoclimate data to investigate the long-term behaviour of the Pacific Decadal Oscillation (PDO) and the El Niño Southern Oscillation (ENSO). During the past 400 years, climate shifts associated with changes in the PDO are shown to have occurred with a similar frequency to those documented in the 20th Century. Importantly, phase changes in the PDO have a propensity to coincide with changes in the relative frequency of ENSO events, where the positive phase of the PDO is associated with an enhanced frequency of El Niño events, while the negative phase is shown to be more favourable for the development of La Niña events.’ http://onlinelibrary.wiley.com/doi/10.1029/2005GL025052/abstract

Thanks for playing barry – but I think you are a persistent twit with a penchant for BS space cadet science and no real interest in dialogue or the natural philosophy of climate science.

Feel free to come back with a whine about how rude I am – and then do me the favour of not wasting my time further.

There have been some recent developments (since ca. 2005) in reconciling the continuum and particle (not quantum) approaches to fluid motions. This seems to have started with Brenner, Dadzie and Reese. An early introduction by Brenner is here

The following Google Scholar searches will give you related papers, including validation of the modifications to the established Navier–Stokes–Fourier model.