Understanding multi-decadal climate changes

An in-press article in the Bulletin of the American Meteorological Society reports on a 2012 Workshop in Taiwan that focused on understanding natural internal variability and multi-decadal climate changes:

The 2012 National Taiwan University International Science Conference on Climate Change focused on two of the most difficult challenges in the study of climate change: 1) delineating the multidecadal and longer time scale variations in historical records that extend back only ~150 years, and 2) distinguishing between anthropogenically forced and natural variability.

ANTHROPOGENIC FORCING AND INTERNALLY GENERATED VARIABILITY

That there exists a large amount of free climate variability in the climate models suggests that significant portions of the observed multidecadal variability in the climate record could be inherently stochastic, i.e., attributable to sampling fluctuations associated with naturally occurring modes of variability. This is true even for ENSO, where coupled interactions enhance the variability at a distinct timescale, leading to a peak that stands out above the red background spectrum.

Regardless of the mechanisms that give rise to it, multidecadal climate variability modulates the rate of global-mean surface air temperature rise. Applying a “dynamical adjustment” to remove (or at least reduce) the contribution of these circulation changes to the global-mean temperature trend simplifies the space-time structure of the surface air temperature record and renders it more spatially and seasonally coherent. Results presented at the workshop suggested that the enhanced wintertime warming over high northern latitudes from 1965 to 2000 was mainly a reflection of unforced climate variability. Disregarding this dynamically induced component of the 20th century warming leads to around a 10% reduction in the inferred global climate sensitivity.

MULTIDECADAL TO CENTURY-SCALE CLIMATE VARIATIONS

Analyses of observed climate records during the 20th century, the last ice age, and the Holocene, in conjunction with climate modeling results, suggest that pronounced multidecadal to century-scale variability can be produced internally by a number of different mechanisms.

The observed sea-surface temperature (SST) variations and proxy climate records in the Southern Ocean (50°S-70°S) suggest the existence of pronounced global scale centennial variations, with the most recent maxima around the mid-1870s and mid-1970s. In one climate model, these long time scale variations originate from the slow accumulation of North Atlantic Deep Water in the Weddell Sea at mid-depth, which destabilizes the water column from below and eventually stimulates deep convection there. The accumulation of heat during the quiescent regime and its subsequent release to the atmosphere during the convective regime acts as a recharge oscillator in that model.

The Atlantic Meridional Overturning Circulation (AMOC) is another important source of global and regional scale multidecadal climate variability. Numerical simulations show that these variations are advected along interior pathways in the extratropical North Atlantic, reaching the subtropics several years later. Several independent fingerprints of AMOC variability were proposed at the meeting and new evidence was brought to light that the North Atlantic multidecadal SST mode known as the observed Atlantic Multidecadal Oscillation (AMO) may be linked to AMOC variations.

Modeling studies indicate that the AMOC weakens most at northern high latitudes in response to increasing greenhouse gas concentrations. The simulated AMOC weakening under anthropogenic forcing cannot be distinguished from natural AMOC variability in the record extending through just the first few decades of the 21st century, but the free and forced variability should become separable toward the middle of the century. Analysis of the 350-year-long Central England historical temperature record is suggestive of pervasive multidecadal variability that climate models suggest could be associated with variations in the strength of the AMOC.

The recent decrease of Arctic sea ice has attracted widespread media attention. Superimposed on this decrease is a rich spectrum of variability in Arctic sea ice. Strong variations with time scales of 50-120 years have been reported. It was suggested that the AMOC might be capable of influencing Arctic sea ice on this time scale through the inflow of Atlantic Water into the Arctic Ocean. It should be kept in mind, however, that while Arctic sea ice exhibited a record low in the last decade, Antarctic sea ice featured a record high. The role of global-scale unforced variability needs to be quantified in this context.

MATHEMATICAL THEORY RELATING TO CLIMATE CHANGE

Since the climate system may possess multiple equilibria, the stability of these equilibria and the transition dynamics between them matter. A systematic stability and transition theory for the oceanic thermohaline circulation has been developed. It is found that the transitions are crucially determined by basin size and geometry, as well as by the thermal and salinity Rayleigh numbers. Numerical results across a hierarchy of models suggest that both jumps and continuous transitions between climate equilibria are possible. In particular, the jump transitions may be associated with hysteresis phenomena.

The presentations included a review of fluctuation-dissipation theory (FDT) from statistical mechanics. This theory and its various generalizations allow one to calculate a system’s mean response to external forcing through the knowledge of appropriate correlation functions of its internal fluctuations. Such an approach may help one interpret the results derived from ensembles of numerical integrations with climate models.

The combined dynamics of low-frequency climate variability and forced climate change may be simplified by treating the faster processes as random noise, superimposed upon and interacting with the slower, nonlinear processes. Several presentations described how the theory of nonlinear, stochastically forced dynamical systems can provide insight into a wide range of time-varying climate phenomena.

An analysis based on observations and experiments with an atmospheric model coupled to a mixed layer ocean suggested that teleconnections are substantially stronger at multidecadal versus interannual timescales. In effect, the former is dominated by a global-scale “hypermode” which exhibits an equatorially-symmetric structure reminiscent of the El Nino-Southern Oscillation (ENSO).

The fact that climate variability involves such a wide range of time scales renders the separation of trends and cycles difficult. Several innovative spectral-analysis methods—including Empirical Mode Decomposition, the Multi-Taper Method, and Singular Spectrum Analysis—are being more widely used to study the trends and cycles on different time scales in climate records. When applied properly, these methods behave like data-adaptive temporal filters, and thus facilitate the differentiation between century-long trends and multidecadal cycles. Their application to the historical record of global mean surface temperature and to several proxy records of local and regional temperatures substantiates the existence of human-induced global warming over the past century and also highlights the role of Atlantic Multidecadal Variability (AMV) in modulating the rate of global warming.
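As a rough illustration of the data-adaptive filtering these methods perform, here is a minimal Singular Spectrum Analysis sketch on a synthetic trend-plus-cycle series (the window length and number of retained components are arbitrary illustrative choices, not values from the article):

```python
import numpy as np

def ssa_reconstruct(x, window, n_components):
    """Reconstruct a series from its leading SSA components.

    Embeds x into a trajectory (Hankel) matrix, takes the SVD, keeps
    the first n_components singular triples, and diagonal-averages
    back to a series, acting as a data-adaptive low-pass filter.
    """
    n = len(x)
    k = n - window + 1
    # Trajectory matrix: lagged copies of the series as columns.
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    # Diagonal averaging (Hankelization) back to a 1-D series.
    recon = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        recon[j:j + window] += approx[:, j]
        counts[j:j + window] += 1
    return recon / counts

# Synthetic 150-year record: trend + multidecadal cycle + noise.
rng = np.random.default_rng(0)
t = np.arange(150)
series = 0.006 * t + 0.2 * np.sin(2 * np.pi * t / 65) + 0.1 * rng.standard_normal(150)
smooth = ssa_reconstruct(series, window=40, n_components=2)
```

Keeping only the leading singular components acts like a low-pass filter whose shape is learned from the data itself, which is the sense in which these methods are "data-adaptive".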

CONCLUDING REMARKS

Most climate change meetings have tended to focus on the forced, thermodynamically induced variability of the climate system. In contrast, this meeting featured scientists who think outside of that box. The climate response to external forcing—especially on regional scales—is strongly influenced by dynamical processes in both the ocean and the atmosphere. Moreover, the existence of strong natural multidecadal to centennial variability makes the detection of anthropogenic climate change a challenge. The presentations at the workshop dealt with the full range of processes that contribute to forced and free (also referred to as unforced or internal) multidecadal climate variability. This broader framing of climate change science is required for quantifying the societal risks of future climate change, and for properly assessing the extent to which today’s weather, specifically the statistics of extreme weather events, is changing in response to human-induced climate change.

John Kennedy sent the Workshop link, which includes all of the presentations. This looks so rich, I expect I’ll be doing full posts on some of these.

JC comments: Well I had a tough time deciding what NOT to include in my excerpts, since all of this is music to my ears. Kudos to the National Taiwan University for hosting this workshop; dare I hope that this topic will be trending for workshops in the U.S. and Europe?

I do disagree with the following statement however:

Disregarding this dynamically induced component of the 20th century warming leads to around a 10% reduction in the inferred global climate sensitivity.

I regard this as THE key unknown, and I would not be surprised if it were significantly higher than 10%.

I’m sure you’re incapable of thinking about lots of things, Jim, whether they are outside or inside the so-called box.
But you aren’t the worst on this blog in this regard. If that’s any consolation!

The issue of “CAGW” with respect to how Hansen relates it is that the fat tails on AGW estimates have to do with the slow feedbacks in the climate system [1]. Unfortunately, these are almost impossible to measure without having longer timescales of observation, and we are only 100+ years into this experiment.

So we are solidly entrenched on a 3C value for ECS, but do not know if the ECS will creep up as the slow feedbacks of albedo changes, etc., take effect.

Girma, that’s meaningless. By imposing a 756-month mean from 1959 to the present you turn the temp record into a linear trend that’s chopped off in 1985, because there aren’t enough subsequent samples to calculate the mean beyond that point. CO2 with a 12-month mean is a linear trend. So the equation you give simply converts one linear trend to another. It is absolutely meaningless.
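For what it’s worth, the truncation effect described here is easy to demonstrate: a centered running mean is only defined where the full window fits inside the record, and a window that long flattens a series toward a line. A quick sketch (synthetic data, purely illustrative):

```python
import numpy as np

def centered_running_mean(x, window):
    """Centered moving average, defined only where the full window fits."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

monthly = np.arange(1200, dtype=float)   # 100 years of a purely linear series
smoothed = centered_running_mean(monthly, window=756)

# A 756-month window discards (window - 1) points in total, split
# between the two ends of the record, so the smoothed series ends
# roughly 31.5 years before the last observation.
```

Here `len(smoothed)` is 1200 - 756 + 1 = 445, and the smoothed linear series is still exactly linear, which is the point about converting one trend into another.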

Interesting. You say these slow feedbacks can’t be measured yet somehow they manifest in the sensitivity estimate as positive feedbacks. How exactly do you determine the polarity of a signal that has yet to be measured? Please produce a couple paragraphs of gibberish about that to amuse me. Thanks in advance Web.

“How exactly do you determine the polarity of a signal that has yet to be measured?”

Answer the question if you can, with something more credible than waving your hands and saying all slow feedbacks must be positive. Data we can glean from ice cores, like the slow downward slide from interglacial maximum temperatures to ice age minima, indicate that slow feedbacks must, if anything, be negative, not positive.

You’re a real piece of work Pukite. Does no one at BAE object to you spending your time at work trolling climate blogs because if you actually got involved in a paying project you’d just screw it up and make more work for people who aren’t imbeciles?

Symmetries are a good thing.
To think inside a box:
– take a dual representation of space where inside is out and out is inside
– think out of the box! as usual! deeply…
– revert the result by exchanging in and out

It is not a joke. I have a tendency to think inside the box, so to think out of the box, I just play Devil’s Advocate, using the usual method but defending the opposite position.

Another symmetrical method, to defend your own position, is to translate the argument of your opponents into a symmetrical world, where they are the accused, and not you.
They quickly see that their arguments are absurd. Works well against hypercritical arguments.
It may also be good for knowing when you are saying stupid things…

Many factors contribute to noise. When all these factors are small and of the same order of magnitude in their impact, it may no longer make sense to deconstruct the individual contributions. At that point statistical characterization of the noise makes sense.

The application of the signal-to-noise idea comes in when one realizes that one of the contributors to the dynamics starts to rise above the background din. In the case of the climate, the signal emerging is that of AGW.

(BTW, I actually took graduate courses from a pioneer in modern noise analysis, Aldert van der Ziel, and learned how to characterize various noise terms, so this is not something that I am dreaming up. )

What is most exciting about the current situation is that we have struck gold by having this one measure, the Southern Oscillation Index (SOI), that essentially isolates one of the significant noise sources. This source is still much smaller than the AGW signal (see the first figure in the following post, where the dynamic range is less than the total rise observed so far): http://contextearth.com/2013/10/04/climate-variability-and-inferring-global-warming/
So the obvious approach is to remove this fluctuation from the global temperature time-series so that the Signal-to-Noise Ratio (SNR) is even further improved.

This results in a significant reduction in uncertainty, and one that has gone unnoticed since the original excitement of the work by Kosaka & Xie [1] faded, since it doesn’t support the denier’s position any longer.
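The adjustment being described can be sketched as an ordinary least-squares removal of the SOI-congruent part of a temperature series. Everything below is synthetic and illustrative (the coefficients are made up, and a real analysis would also handle lags and autocorrelation):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 480                                   # 40 years of monthly data
soi = rng.standard_normal(n)              # stand-in for an SOI index
t = np.arange(n)
trend = 0.0015 * t                        # stand-in underlying signal
temp = trend - 0.05 * soi + 0.02 * rng.standard_normal(n)

# Fit temp ~ a + b*soi + c*t, then subtract only the fitted SOI term,
# leaving the trend and residual noise in place.
X = np.column_stack([np.ones(n), soi, t])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
adjusted = temp - coef[1] * soi
```

Because the regressor has (near) zero mean, subtracting its fitted contribution changes the variability about the trend but not the long-term trend itself, which is why a mean-reverting index is attractive as a correction term.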

So you subtract the SOI from the temperature series. This removes a random perturbation which can make the global temperature appear warmer or colder than it otherwise would be. Having done this (and some other perturbations as well) you find that the “pause” in global warming is an illusion – the unperturbed series increases with no pause.

You say that the SOI is a good candidate for a perturbation because it has a mean of zero and because it is merely a pressure difference: “..the SOI reverts to is zero (because atmospheric pressures cannot maintain an imbalance for long)” How long?

If you are right the “pause” should end soon as the SOI changes sign again. So there is a bound on how long a pause your explanation can tolerate. Do you have any idea what that bound is?

Actually, even if the SOI continues to make a negative contribution that masks the underlying increase, if that negative contribution remains roughly the same the trend should sort of emerge again behind it, like the sun from behind a cloud.

One hint on how long the “PAWS” will last before going paws up is the fact that it is warming during the La Nina lean of ENSO Neutral. The only time it actually cools is when we experience a snot-knocker La Nina (2011). It even warms in a gentle La Nina (2012). So out of the two phases and the two leans of ENSO neutral, it’s warming in 3.5 of them.

You guys are banking on snot-knocker La Nina episodes holding this thing down, and it’s a bad bet.

“But most skeptics, in fear that AGW might be true” – Do you really think that? Don’t most skeptics accept the premise but question the amount of warming, the consequences that are feared will result, or whether the proposed mitigation actions make economic sense?

They don’t say “its both” they say “enhanced wintertime warming over high northern latitudes from 1965 to 2000 was MAINLY a reflection of unforced climate variability”.

Looking at your Berkeley Earth Greenland temperature and AMO, I would say “mainly” is about 70% or more, if there is unaccounted-for UHI in recent temperature data.

Then there is still soot to consider, with about 2/3 of CO2 forcing globally but much more in the Arctic, because 1) soot occurs mainly in the northern hemisphere and 2) the large direct forcing of “soot on snow and ice” (large because the global value of 0.1 W/m2 has to be adjusted for the fact that only a fraction of the global surface is covered by snow and ice).

Rather an unfair analysis Steve, many of us have long believed that the current warming trend is due to both CO2 emissions and natural variability. What we care about is the temperature change due to 2xCO2; should it be 3.0, then we need to invest a sizable fraction of Global GDP into nuclear and completely change third world governments; can’t see the current bunch being allowed to play with the full nuclear cycle can you?

Still waiting for you to show us that “underlying AGW climate trend” Moshpup. Oh, and in the meantime, maybe you can show us that magic you use to separate out the multiple forcings with inadequate models??

Steve,
Self evident until you said, “But most skeptics, in fear that AGW might be true, think that they are forced into a position of total knee jerk reflexive denial. I say white, skeptic thinks he must say black”. Now you are pushing a rope up hill (we can say anything but sometimes with no benefit other than the self satisfaction).

It is hard not to agree that the observed data argue for:
“Not just one ( its all AGW)
Not just the other ( Its all natural)
But rather and open minded look at both”.

Dr. Curry seems to take the latter approach to the facts every week. Without the extreme positions, there remain enough gray shades for great parsing?

Steven,
You are making the classic mistake of lumping all skeptics in one box. By some definitions you would be considered a skeptic. Also Judy. However, neither of you or I or most honest scientific skeptics are like you portray. Most would be classed as lukewarm on AGW and skeptical of CAGW. Quit throwing rocks when you are ignorant on a subject.

… you cant have a null hypothesis about natural variability without
first defining it numerically.

But to define it numerically you must fully understand the extent of natural variation. Oops – we don’t have that degree of understanding yet.

I see where you are going with this. That would mean there is no null hypothesis because we don’t have enough information to build one. So the null hypothesis is rejected on the grounds that it doesn’t exist. QED.

Second, it’s not logically disconnected from the physics of CO2. Heat transfer (multimodal) at the surface is the right physics for the Earth’s surface temperature. The bulk of the atmosphere, N2 and O2, are the real GHGs – they gain most of the heat from the surface non-radiatively, but cannot significantly radiate to space, only via the radiatively active gases (and clouds).

It’s a stupid argument and stupid doesn’t interest me. We have a paleo temperature record (of questionable quality) established by various means such as tree rings, ice cores, sedimentation, sea level change, etc. These extend back in time before humans were doing anything potentially climate-changing, thus they are implicitly records of natural variation. Insofar as these temperature reconstructions are reliable, they give us a numerical record of the duration and magnitude of natural variation. The null hypothesis is then that natural variation occurs with the duration and magnitude shown in the reconstruction. Constraining the null hypothesis by trying to show things like the Little Ice Age wasn’t global is an AGW cottage industry in and of itself.

From my reading, they seem to be saying that taking into account natural variability actually increases estimated climate sensitivity. “DISregarding…leads to a 10% REDUCTION in the inferred climate sensitivity.” So regarding it should lead to a 10% INCREASE in the inferred climate sensitivity.

“Thinking outside the box” is a good start, but it’s still models all the way down. I’m with you, Judith, on the 10% – my instant thought. A simple eyeballing of the global temperature graph suggests something more like 1/3 [NB. suggests, nothing stronger].

Cappy D, You don’t even realize that the SOI is a simple measurement of the difference between two barometric pressure readings at displaced locations in the Pacific, Darwin and Tahiti.

That is all it is, a perturbation in atmospheric pressure — that whatever the cause — manifests itself as a disturbance in the climate that raises and lowers the global temperature above a normally stable value.

The SOI has a reversion to the mean property which means it has zero effect on the long term climate change. And because this average value that the SOI reverts to is zero (because atmospheric pressures cannot maintain an imbalance for long), it makes a PERFECT correction term for temperature records such as GISS.

Webster, “That is all it is, a perturbation in atmospheric pressure — that whatever the cause — manifests itself as a disturbance in the climate that raises and lowers the global temperature above a normally stable value.”
Don’t your arms ever get tired?

” we propose that the role of the sun in modifying Southern Hemisphere tropospheric circulation patterns has probably been underestimated in model simulations of past climate change. More investigations are yet to be carried out to shed light on possible mechanisms that explain the relation between solar activity and westerly wind variability.”

The pressure differential measured for the SOI is affected by those stationary highs/lows that can vary over remarkably long time frames, centuries and even millennial time scales. Should you ever realize that the atmosphere is not a milquetoast homogenized layer, you may realize you can’t leap thermodynamic boundary layers willy-nilly with your diffusion-cures-all approach.

“‘Bottom-up’ and ‘top-down’ mechanisms are not mutually exclusive,” meaning there are various thermodynamic boundary layers that have to be included in any reasonable approach: “shells” or envelopes tailored to dominant thermodynamic properties, where ya keep track of energy transfer and work done. That is Thermo 101 stuff, actually.

WebHubTelescope (@WHUT), you have once again expressed your misunderstandings of ENSO. With respect to an ENSO index, you wrote, “That is all it is, a perturbation in atmospheric pressure — that whatever the cause — manifests itself as a disturbance in the climate that raises and lowers the global temperature above a normally stable value.”

ENSO is a chaotic, sunlight-fueled recharge (La Niña)-discharge (El Niño) oscillator–not simply a “whatever the cause” perturbation in atmospheric pressure, as your limited grasp of the subject portrays. During multidecadal periods when El Niño events discharge more heat than “normal” into the atmosphere and cause more warm water than “normal” to be distributed from the tropical Pacific, global surface temperatures must warm. I’m not sure why that’s so hard for you to grasp.

WebHubTelescope (@WHUT), you continued, “The SOI has a reversion to the mean property which means it has zero effect on the long term climate change.”

Since November 1981, that is true in only one location, the sea surface temperatures of the East Pacific Ocean (90S-90N, 180-80W):

But it does not hold true for the rest of the global oceans (90S-90N, 80W-180):

As I’ve been presenting for almost 5 years now, the major El Niño events have to be accounted for separately. How and when was the warm water created that fueled the El Niño? (For the 1997/98 El Niño it was created by an increase in downward shortwave radiation associated with the 1995/96 La Niña. Everybody knows that.)

Where did all of that warm water go after the strong El Niño? (Obviously, it was returned to the surface of the East Indian and West Pacific Oceans via a number of coupled ocean-atmosphere processes—otherwise the sea surface temperatures of the East Pacific would have warmed.)

What was the long-term impact on global surface temperatures of the warm water that was redistributed after the strong El Niño? (See the second graph above.)

Where did all of that warm water go after the strong El Niño? (Obviously, it was returned to the surface of the East Indian and West Pacific Oceans via a number of coupled ocean-atmosphere processes—otherwise the sea surface temperatures of the East Pacific would have warmed.)

Don’t forget the mixing layer is a 3-dimensional phenomenon. IIRC there are substantial (or at least significant) changes to the thermocline depth in the West Pacific associated with ENSO. (A recent quick search for references didn’t provide clear links, but AFAIK they’re there, if anybody wants to dig.)

And, for what it’s worth, changes to the thermocline depth could well be involved in changes to the rate of heat transport to the depths (700-2000m) in that area.

Webster, “Cappy D, pressure changes are the result of gas laws and this includes a temperature term.” The only technical paper I ever bothered to publish was on velocity pressure considerations in Variable Air Volume system design. It takes a pressure differential to move air, and velocity changes pressure. Check out Venturi and orifice plates some time.

If you want to figure out the real pressure difference, you consider the static pressure, velocity pressure, temperature and specific humidity of the air. In the atmosphere you can simplify things by looking for those semi-permanent highs and lows associated with the Hadley, Ferrel and polar cells. If the southern westerlies shift north, that means the high between the Hadley and Ferrel cells shifted north. That changes the average air temperature, surface wind velocity, direction of average winds and average humidity of the air. Since Darwin and Tahiti are not exactly on the same latitude, you can have a very long period of relative pressure difference. Look for those big Hs and Ls on the weather charts.

Wayman is worried that Kosaka & Xie, Tamino, and others have completely marginalized his theory that the Pacific is the control knob for the climate, as opposed to a perturbation that will eventually get swamped by the relentless forcing of the AGW signal.

If you can download that spreadsheet, find out just how well you can match high CO2 forcing to the oscillations. Since I have linked to a number of papers that indicate solar forcing is grossly underestimated, you probably won’t get it, but solar gets amplified a touch and there are a few significant heat transport lags to watch. That also has volcanic forcing, which is stronger in the NH, creating a bit more exotic lag relationship. If you try for accuracy better than +/- 0.5 C, you are going to have to figure out each and every one of those lags and weakly damped decay curves.

WHUT – you say “the SOI has reverted to the mean over the last 130+ years“. I haven’t checked your statement, but let’s suppose it is correct. That opens the possibility that the SOI can have an influence on global temperature over periods significantly less than about 130 years. The ~1970-2000 warming, for example.

Webby points to 130 years of the southern oscillation index to justify it varying about a mean of zero (no trend). This 130 years represents just two of the 60-year cycles that are evident in more reliable continental temperature records such as CET going back several centuries.

Be that as it may, two cycles is scant evidence to establish the lack of any residual linear trend, and it gets worse when we read this:

The data also shows that there have been more frequent and severe El Nino events since the mid 1970s than the previous periods. Furthermore, La Nina events are less pronounced.

So for the reliable length of the record (since 1935), it has not centered on a mean of zero; it was neutral for the first half and then skewed towards El Nino (globally warming) in the second half, as can easily be seen here:

In other words, Webby is the quintessential AGW apologist, squinting his eyes at the data and making up narratives that don’t hold water upon even modestly close examination.

“Where did all of that warm water go after the strong El Niño? (Obviously, it was returned to the surface of the East Indian and West Pacific Oceans via a number of coupled ocean-atmosphere processes—otherwise the sea surface temperatures of the East Pacific would have warmed.) ”

No, that’s not obvious at all, Bob. By my reckoning we can see the 1999 El Nino translated into latent heat of melting in Arctic sea ice beginning about 18 months later, which happens to be about equal to the transit time by oceanic conveyor belt from north tropical latitudes to north polar latitudes. As the name implies, latent heat of melting causes no change in temperature but rather a change in phase.

If one were to roughly calculate the anomalous BTUs in the Pacific warm pool for the strong 1999 El Nino and the latent heat of fusion in the 10% reduction in Arctic sea ice since the year 2000, one would find the values roughly the same.

Thus a pulse of anomalously warm water from the 1999 El Nino headed up to the Arctic via conveyor belt, arrived about 18 months later and over the course of a few years caused a step-change reduction in Arctic sea ice and a step-change increase in global average temperature.

Moreover, Arctic sea ice functions like a thermostat. When there is less of it, the open ocean loses heat far faster, because a layer of sea ice is an excellent insulator for the water below. As the reduction in Arctic sea ice played out, global average temperature stabilized at a new level. The stabilization we now call “the pause”.

That’s my story and has been my story for several years having found nothing whatsoever in the way of contrary data or more elegant explanation in that time.
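The back-of-envelope comparison proposed above is easy to set up; the sketch below just computes the phase-change energy for a given loss of sea ice. The volume figure is a placeholder, not the commenter’s number, and the unit choice (joules rather than BTUs) is mine:

```python
# Latent-heat-of-fusion back-of-envelope: phase change only, no warming
# of the resulting meltwater. The 2000 km^3 loss figure is purely
# illustrative, not an observed value.
L_FUSION = 3.34e5   # J/kg, latent heat of fusion of ice
RHO_ICE = 917.0     # kg/m^3, density of ice

def melt_energy_joules(ice_volume_km3):
    """Energy required to melt a given volume of sea ice."""
    volume_m3 = ice_volume_km3 * 1e9
    return volume_m3 * RHO_ICE * L_FUSION

energy = melt_energy_joules(2000.0)   # on the order of 6e20 J
```

Comparing a number like this against an estimate of the anomalous heat content of the warm pool is the kind of rough equivalence the comment describes.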

springyboy had better run over to the most recent thread, where they are talking about reducing uncertainty by including natural variability. I did this and the TCR went from 1.7C to 2C while the r2 went from 0.9 to 0.96!

They are just saying that the winter warming in the Arctic could have enough natural variation to reduce global climate sensitivity by 10%. That seems like a fairly large amount for a small area during one season of the year to me.

Ah corrections! Who can resist the opportunity to make the data tell the story it is supposed to tell! Not most climate scientists apparently.

Q: UHI could result in spurious warming trends being measured. There is a need for a correction; how should we alter the data?
A: Let’s lower the recorded temperatures of surrounding rural stations in the past, to obtain an even greater warming trend!

Q: The weather station in Marysville is now located in a parking lot right next to a bank of air conditioning outlets. It might not be measuring the right temperature. There is a need for a correction; how should we alter the data?
A: Let’s raise the temperature record of the adjacent pristine rural site to match, to obtain an even greater warming trend!

Q: Natural variation might have caused part of the ramp-up in temperatures in the late 20th century. There is a need for a correction; how should we alter the data?
A: Let’s adjust the temperature record by adding on a term for natural variation, to obtain an even greater warming trend!

“Atlantic multidecadal SST mode known as the observed Atlantic Multidecadal Oscillation (AMO) may be linked to AMOC variations.”

The mode is present in global and other hemispheric/regional/land/SST temperature indices. Detrend them and they look exactly like the AMO. Try it out at wft. So it’s actually a Global Multidecadal Oscillation.
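For anyone who doesn’t want to fire up wft, here is a minimal sketch of what its linear detrend option effectively does, on a synthetic index (an assumed linear trend plus an assumed 65-year AMO-like sine, purely illustrative, not real data):

```python
import numpy as np

# Synthetic stand-in for a global temperature index: a linear trend plus
# a ~65-year oscillation (a crude AMO-like mode). Illustrative only.
years = np.arange(1900, 2014)
trend = 0.007 * (years - 1900)                       # deg C/yr, assumed
amo_like = 0.15 * np.sin(2 * np.pi * (years - 1900) / 65.0)
index = trend + amo_like

# Linear detrend: subtract the best-fit straight line
coeffs = np.polyfit(years, index, 1)
detrended = index - np.polyval(coeffs, years)

# The residual tracks the oscillatory mode almost perfectly
r = np.corrcoef(detrended, amo_like)[0, 1]
print(f"correlation with AMO-like mode: {r:.3f}")
```

With a real index the residual is noisier, of course, but the mechanics are the same: remove the straight line and what remains is dominated by the multidecadal wiggle.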

Judith Curry adds her own emphasis “Most climate change meetings have tended to focus on the forced, thermodynamically induced variability of the climate system. In contrast, this meeting featured scientists who think outside of that box.”

In conclusion, kudos to you, Judith Curry, for encouraging young researchers to acquire the high-level mathematical and large-scale computational skills that are required to increase our understanding of the dynamical “decadal box”, and to link that emerging dynamical/decadal understanding to the existing family-scale “centennial box“ understanding and especially the culture-scale “millennial box“ understanding.

I wonder if they received permission to creep out of the box? After all, their careers may be at risk. Next thing you know they will be shifting more time to observations and less to models. Then they will call for model validation.
Then they will try to understand clouds, the water cycle, the sun, and the long list of other unknowns. Good grief, history in the making!

“Their application to the historical record of global mean surface temperature and to several proxy records of local and regional temperatures substantiates the existence of human-induced global warming over the past century and also highlights the role of AMV in modulating the rate of global warming.”

There’s a reason they use the analogy of a box: you’re either inside or outside. The amount (10%) isn’t what’s key, it’s the sources of the adjustment. Once sources outside the “traditional” are allowed, the number can wander all over the place.

The “skeptics” like this concept of “dynamical adjustment” even without knowing exactly what it is, as long as it is perceived to go in their direction. It is very telling what their “skepticism” includes or doesn’t. Did we see anyone ask for further details of this “dynamical adjustment”? No. I would, because it looks like a statistical trick of some sort. Is it objective? How is it shown to work? What is the uncertainty?

The global sea level this summer is a quarter of an inch lower than last summer, according to NASA scientists… ~Juliet Eilperin, Weather cycles cause a drop in global sea level, scientists find, The Washington Post, August 25, 2011

Hi Matt. The original sentence is easy to confuse, as it reads like a double negative. Consider the recent sequence of La Niñas. These have depressed surface temperatures (as WebHubTelescope is tirelessly trying to point out). According to the quote, a sensitivity estimate that fails to take such internal variability into account needs to be increased by 10% or it will prove too low going forward.

That may be reasonable, but it would be equally unsurprising to have a smaller number, or even the opposite sign, for the influence.

As more long-term influences are identified, more “adjustments” will have to be added to the “inferred global climate sensitivity“. Some will probably be positive, others negative. Many will probably be conditional, i.e. the value of the “adjustment” depends on other, variable, factors, perhaps such as the nature of vegetation cover in key areas. The possibility of feedbacks where the results of “global climate sensitivity” drive such factors cannot be ruled out. Overall, these facts push the PDF towards the 0-10C value sometimes mentioned here.

They also tend to deprecate the whole meaning of “global climate sensitivity“. As I’ve said before “global climate sensitivity” is a myth.

In the past I’ve written a brief history of how the position came to be.

Short version:

Climate Audit: somebody suggested that we each assign a percentage to explain how much warming is due to humans. There were basically three classes, with lukewarmers being in the middle.

Discussion then picked up at Lucia’s about who came up with the term and what it meant. At that time, because we were looking at model comparisons, I think I floated a number like 0.15C/decade warming as a lukewarmer position… as opposed to 0.2C.

Some more fights later and we settled on a definition that was tied to ECS.

1. The fundamental (Andy Lacis) physics argues for an ECS of no less than 1.2C.
2. Paleo etc. suggests a mean of 3C.

3. Observations suggest the majority of the PDF lies below 3C.

So: it’s highly unlikely that the ECS is less than 1.2C, and it’s more likely than not that it falls below 3C. Put another way, more than half of the PDF falls below 3C.

If you look very carefully at this you will see that it’s not very different from the IPCC position, except for this: we look at the median. Alarmist types focus on the mean and upper tail of the ECS distribution.

Now explain why folks who believe in AGW say horrible things about lukewarmers.

S/B “folks who believe in CAGW“. And the reason is obvious: the political solutions they’re pushing can only be justified by the tacit assumption that the “catastrophic” outcomes are essentially certain. Calling attention to the comparatively low probability of catastrophic outcomes boils down to a political attack on their cause.

“Now explain why folks who believe in AGW say horrible things about lukewarmers.”
For the same reason they say horrible things about the scientists who’ve examined GMOs, nuclear power, and fracking for gas and found the exaggerations to be silly. This is a political fight between progressives who cling to the Club of Rome and conservatives, independents and progressives who don’t.
It’s being sold as a fight between conservatives and liberals, but it never has been. James Inhofe couldn’t care less if you build nukes or frack for gas, as long as you do it.
No “C” in AGW means no luddite fantasy of depopulation, high carbon taxes, and a future of “sustainably” shivering in the dark with your federal ration of organic gruel.

This skeptic is in the camp that no one knows what the ECS is, because no one understands the climate well enough to model it. If you can’t model the climate, how the hell do you calculate how it will respond to a doubling of CO2?

“This skeptic is in the camp that no one knows what the ECS is, because no one understands the climate well enough to model it. If you can’t model the climate, how the hell do you calculate how it will respond to a doubling of CO2?”

You don’t need to model the climate to estimate the ECS. The best approach is to look at observations.

Think of it this way: you’re running down the highway at 55 mph.

You apply an additional forcing of 100 hp.

A couple minutes later you note that you are going 105 mph.
You can now calculate the response to that forcing. You don’t need to build a simulation model of the car to do this. You don’t need to understand how a car works to do this. You could build a stupid model of a car and come close to the expected response. That model wouldn’t give you any more information; it would just tell you that your modelling was on the right track and in the ballpark.

Put another way, ECS was calculated LONG LONG before any climate model was ever built.

Mosh,
I would consider that the ECS may be a little under 3C based on current observational evidence, but the unknowns of the slow feedbacks would push it up at least a couple of tenths, back to 3C.
The paleo is the only evidence for slow feedbacks.

“You’re running down the highway at 55 mph.
You apply an additional forcing of 100 hp.
A couple minutes later you note that you are going 105 mph.
You can now calculate the response to that forcing.”

Climate science watches the bike do this down a steep hill, calculates the forcing as 50 mph and asks for a ban on motorcycles that go 100+ mph.
The bike goes 75mph uphill and David Springer said only an idiot would ban this bike. The bike goes 80 on the flat. David Appell certifies the 50 mph forcing as “settled” and angrily blames Republican “greed” for the failure to impose the 100+ mph ban on a 80 mph bike.
Webby produces a chart proving that the 80 mph is really 105 if you assume the rider was driving into a hurricane; Lolwot graphs the downhill run and asks “what speed drop?”
Judy tears out her hair in frustration. Joshua accuses her of bias while asking us to ponder whether our politicized definition of “motorcycle” prevents us from recognizing the need for a ban.
Mosher says we’ve always known exactly what the forcing was: 12.2 +/- 37.8, duh!
Peter Lang can’t figure out why the rider doesn’t drive a truck instead and Wagathon suggests the truck could be used to run over the socialist radar gun operator.
Fan says the young know that 80 is the new 105 and that’s good enough!
Kim says: “two wheels or four, pop a wheelie and on one tire go out the door.” Pekka says 50 is probably too high but shouldn’t be criticized until we know more.
Willard writes: “Steve McIntyre said he rode a bike once, but it may have been a scooter. That’s all”
Jeffn checks in three days later and chuckles that nothing’s changed.

“Put another way, ECS was calculated LONG LONG before any climate model was ever built.”

Yes, ECS was calculated before those calculating it knew anything about the myriad forcings and feedbacks of which we are only now coming to have some basic understanding; before we had any way of accurately measuring the myriad initial conditions; before we even knew natural oscillations like the PDO existed.

Not to mention it seems bizarre to me to claim there is just a one size fits all ECS. It seems to me that a doubling of CO2 during an ice age might have a different impact on global temperature than during a period of deglaciation.

If the response of the climate to a doubling of CO2 were such a simple, back of the envelope computation, that did not vary based on initial conditions and forcings, how come the GCMs suck so bad?

I don’t see how ECS could be anything other than the end result, the output of a climate model. How do you know how the climate will respond to a doubling of CO2 if you don’t know the initial conditions, forcings and feedbacks? Accurately. And since we can’t accurately model the climate, I don’t see how anyone can come up with an accurate ECS.

Jeffn, That was perfect! New article, same arguments. I find myself just scanning the names and looking for someone I have not read before. Sad but the challenge to thought on this blog is waning. Each subject simply devolves into personalities and politics. Trolls to the right of me, trolls to the left, here I am! Dr. C, any chance you can bring in some of your science buds to debate and just let us watch?

Cars, I like them too. A great analogy translating added horsepower to increased speed to determine an equilibrium sensitivity.

In my SUV pulling a trailer, going from Denver, Colorado (5,280+ ft above sea level) to Vail Pass (10,662 ft above sea level), I set my speed control to 55 mph with a full tank of gas. My SUV has a miles-per-gallon calculator. I go from 13 mpg going around Denver on I-70 to 9 mpg as I cross Vail Pass. My engine RPM goes from 1800 to 2400 (I’m using more horsepower).

I fill up near Vail Pass and head east through Denver towards Limon, CO (5,370 ft). I stop at Limon. My mpg is 26 and I have traveled 120+ more miles. At no time did my engine RPM go above 1800 heading east.

To me, it would be pretty hard to determine an equilibrium MPG for my trip unless I knew which direction I was going; and then, for the round trip, some sort of “assembled means” are being invoked. I get a number; it’s just nonsensical. Kinda like climate sensitivity and climate models using CO2 as the prime metric.

I think the same thing applies to the climate. If we are coming out of a Little Ice Age there is one climate sensitivity, and if we are heading into a so-called Maunder Minimum, then we have another “equilibrium sensitivity”, using your automotive illustration.

The problem, in part, is that the climate is not in equilibrium, nor has it ever been. So the calculated ECS is spurious. Of course, a transient climate sensitivity is an even worse problem to calculate, and, more likely than not, all climates are transients.

So, we don’t know the direction we are going, for how long, and whether the weather is climbing out of a minimum or coming down from a maximum.

Observing a car for which we know all we need to know about how it works and why, has absolutely nothing to do with observing a climate, about which we know so little.

Adding horsepower to a car is meaningless in this context. You are equating CO2 not with a thermostat, but with the engine of the car.

And “a couple minutes later you note that you are going 105 mph.
you can now calculate the response to that forcing.” Forget the truncated time between cause and effect, and forget the assumption of attribution. (Well, don’t forget them, they are additional gaping holes in your analogy, but not my main point.)

You have just contradicted your main point that ECS is a product of simple physical calculation.

But I understand the logic. Show the warmists the weaknesses in their models and they tell you its about the paleo record. Show the huge holes in the paleo record, and they tell you its about observations. Show them that the observations contradict the models and paleo records, and they tell you its about the basic physics. Remind them that the basic physics ignore initial conditions, feedbacks, and forcings, let alone unknown unknowns, and they tell you its about the models.

I will attempt to repair Mosher’s analogy and see if it helps you get where I think he is coming from.

You start out in NYC in car with a 200 HP engine and a lot of gas. Drive to San Fran running all lights and stop signs, flooring it all the way. Let’s say it took 35 hours total driving time. Ship the car back to NYC. Take off the cat converter, change the air intake, or whatever it takes to get another 25 HP, without changing the weight or aerodynamics of the car. Drive to S.F again. Will the trip take 1)more time 2)less time 3)about the same ?

That’s a worse analogy. We are not comparing current climate with a past climate where we know everything was the same except for CO2.

A more accurate analogy would be making the drive the first time, then taking a different route the second time, not knowing topography, road conditions, the type of fuel or distance to be driven, weather and whether there is construction along the later route.

And then believing you can predict within seconds the amount of time the second trip will take.

You failed to answer the question, Gary. Let’s forget the analogies. Pick whatever climate you want, or 400 climates. If you add 3.7 W sqm of positive forcing-let’s say it’s increased energy from the sun-do you expect it to get warmer, or not?

I answered your question by demonstrating that it was based on a false premise.

Your problem, and Mosher’s, is that you treat ECS as a constant that can be calculated independent of initial conditions, forcings and feedbacks.

“In IPCC Reports, equilibrium climate sensitivity refers to the equilibrium change in global mean surface temperature following a doubling of the atmospheric (equivalent) CO2 concentration. More generally, equilibrium climate sensitivity refers to the equilibrium change in surface air temperature following a unit change in radiative forcing (degrees Celsius, per watts per square meter, °C/Wm-2). In practice, the evaluation of the equilibrium climate sensitivity requires very long simulations with Coupled General Circulation Models (Climate model). The effective climate sensitivity is a related measure that circumvents this requirement. It is evaluated from model output for evolving non-equilibrium conditions. It is a measure of the strengths of the feedbacks at a particular time and may vary with forcing history and climate state.”

Talking about ECS without reference to initial conditions, forcings and feedbacks is nonsensical.

Your question is the same as asking if an increase in CO2 would also cause a rise in temperature. The answer to both is, all other things being equal, yes. But ECS is not based on all other things being equal. It is supposed to be the actual reaction of an actual climate to an actual doubling of CO2. In fact, the ECS values you warmists preach with such certainty require strong feedbacks to get to the right number.

So asking hypotheticals or making analogies in which a single factor is isolated has no bearing on the issue.

I know Mosher knows he is being an obscurantist again, but I am not sure you get the point.

“Yes, ECS was calculated before those calculating it knew anything about the myriad forcings and feedbacks of which we are only now coming to have some basic understanding; before we had any way of accurately measuring the myriad initial conditions; before we even knew natural oscillations like the PDO existed.”

1. You are assuming that initial conditions matter.
2. You are assuming that there are a myriad of forcings and feedbacks.

The point is rather simple. These early estimates are on the order of 1.5 to 5. The factors you imagine are important are 2nd- and 3rd-order effects. Now, these are important, perhaps, for policy, but they are not important in calculating a good estimate. That is, they get you within an order of magnitude.

“Not to mention it seems bizarre to me to claim there is just a one size fits all ECS. It seems to me that a doubling of CO2 during an ice age might have a different impact on global temperature than during a period of deglaciation.”

1. Doubling CO2 gets you 3.7 more watts per square meter.
2. Nobody claims one size fits all; in fact, the literature shows there may be some dependency on initial conditions.
3. Merely noting the POSSIBILITY of dependence on initial conditions doesn’t prevent you from doing the calculations.

“If the response of the climate to a doubling of CO2 were such a simple, back of the envelope computation, that did not vary based on initial conditions and forcings, how come the GCMs suck so bad?”

1. They don’t suck so bad, as far as complex physics models go; they are far better than the physics models we used to design aircraft.
2. The biggest issues with models are not related to their sensitivity. You can see this by plotting RMS error versus sensitivity. Hindcast accuracy is closely tied to sensitivity.

“I don’t see how ECS could be anything other than the end result, the output of a climate model. How do you know how the climate will respond to a doubling of CO2 if you don’t know the initial conditions, forcings and feedbacks? Accurately. And since we can’t accurately model the climate, I don’t see how anyone can come up with an accurate ECS.”

Very simple.

Look at the temperature back in 1850.
Look at the temperature today.
Take the difference.
Estimate the increase in all forcings from then to now.

Look, I believe you are between 3 and 7 feet tall. My uncertainty doesn’t mean that your height doesn’t exist. The fact that I make assumptions (Gary is not a giant) doesn’t keep me from making an estimate.

Now, to estimate ECS, you have to note that doubling CO2 gives you 3.7 watts: 3.7 × 0.5 ≈ 1.8C. If you want to be technically correct you have to account for heat storage, so 1.8 is probably closer to the transient response.

See how easy? Now, all along this path there are several assumptions one makes. You don’t like those assumptions?

A) Propose different assumptions.
B) Provide a rationale for those assumptions.
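Mosher’s recipe above reduces to a few lines of arithmetic. A minimal sketch, with every input an assumed round number chosen to reproduce his “3.7 × .5” figure, not an authoritative dataset value:

```python
# A minimal sketch of the observational estimate outlined above. Every
# input is an assumed round number, not an authoritative dataset value.
dT = 0.8          # K, assumed surface warming since 1850
dF = 1.6          # W/m^2, assumed net forcing increase since 1850
F_2XCO2 = 3.7     # W/m^2, canonical forcing for a doubling of CO2

sensitivity = dT / dF                 # K per (W/m^2); 0.5 with these inputs
per_doubling = F_2XCO2 * sensitivity  # K per doubling of CO2
print(f"{per_doubling:.2f} K per doubling")  # 1.85, i.e. the ~1.8C above
```

As the comment notes, without accounting for ocean heat storage this number is closer to a transient response than a true equilibrium sensitivity; change the assumed dT or dF and the answer moves accordingly.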

Steven Mosher: 1. You are assuming that initial conditions matter.
2. You are assuming that there are a myriad of forcings and feedbacks.

Other people assume that initial conditions don’t matter and assume that there are not a myriad of forcings and feedbacks.

All the studies of high-dimensional non-linear dissipative systems display the importance of initial conditions (sometimes called the “sensitive dependence on initial conditions”, and variations on that.) Studies of the climate system have documented a myriad of forcings and feedbacks. If you have to choose (I merely assert possibilities), the italicized set of assumptions has more support than the unitalicized set.

Thank you for admitting that ECS is dependent on forcings and feedbacks, thus admitting that your and Don Monfort’s analogies were non sequiturs.

As for why I believe initial conditions also impact ECS, I got my understanding of that from the same place I have gained my understanding of any other climate science topic: from climate science consensus advocates.

The definition I quoted above came from the EPA, and “It is a measure of the strengths of the feedbacks at a particular time and may vary with forcing history and climate state” sure sounds like initial conditions to me.

“Earth’s climate history potentially can yield accurate assessment of climate sensitivity. Imprecise knowledge of glacial-to-interglacial global temperature change is the biggest obstacle to accurate assessment of the fast-feedback climate sensitivity, which is the sensitivity that most immediately affects humanity. Our best estimate for the fast-feedback climate sensitivity from Holocene initial conditions is 3 ± 0.5°C for 4 W/m2 CO2 forcing (68% probability). Slow feedbacks, including ice sheet disintegration and release of greenhouse gases (GHGs) by the climate system, generally amplify total Earth system climate sensitivity. Slow feedbacks make Earth system climate sensitivity highly dependent on the initial climate state and on the magnitude and sign of the climate forcing, because of thresholds (tipping points) in the slow feedbacks.”

The person writing there that climate sensitivity is “highly dependent on the initial climate state” was that famous skeptic, James Hansen.

“Observing a car for which we know all we need to know about how it works and why, has absolutely nothing to do with observing a climate, about which we know so little.”

Wrong gary.

Your claim was this.

“This skeptic is in the camp that no one knows what the ECS is, because no one understands the climate well enough to model it. If you can’t model the climate, how the hell do you calculate how it will respond to a doubling of CO2?”

That amounts to this claim:

1. You can only know what the ECS is if you can model the climate.
2. Nobody understands the climate well enough to model it.

What my analogy should show you is that you don’t have to understand how to model a car to calculate a system metric. You can just measure it.
And you could also estimate it from first principles. And FURTHER, even when you have a model, the model doesn’t add to your estimate; the estimate confirms your model, not the other way around.

1. ECS was estimated from first principles, crudely, but nobody needed to understand all the details to make an estimate.
2. ECS can also be estimated from observations. No GCM required.

Therefore, you don’t need to know how to model the climate to do these calculations. To be sure, the more you know, the better your answer will be. But you don’t need to know everything to actually do the math.
And yes, you will have to make assumptions. We make assumptions all the time.

Steven Mosher: The factors you imagine are important are 2nd and 3rd order effects.

The AGW theory asserts that a tiny increase in the net absorptive capacity of the atmosphere will produce a tiny increment in the spatio-temporally averaged temperature, tiny in % terms (about 1%) and small compared to natural variation. The entire discussion is about 3rd-order effects.

fwiw, the Tacoma Narrows Bridge was rendered useless by third order effects. Calling an effect third order does not make it negligible.

If Gary won’t admit that he would expect adding 3.7 W/sqm forcing to any realistically conceived climate would cause warming, there isn’t any point in continuing to argue with him about estimating ECS.

ECS is dependent on forcings and feedbacks, but we don’t need to actually know those forcings and feedbacks to determine what ECS is.

And we will pretend the consensus climate science community isn’t uniform in recognizing that ECS is also “highly dependent” on initial conditions.

For those of us not living in Obscurantland, I think it is obvious that ECS can only be the output of a climate model that includes initial conditions, feedbacks and forcings. And since we can’t create an accurate model of our climate with those parameters, we can’t know what ECS is.

Steven Mosher, shouldn’t be much to show. The 30N-60N SST has warmed by about 1.2C since 1900 and started below normal if you use 285 ppm as a baseline. The absolute temperature of that band is currently around 14.2C, so a 1.2C rise would be the regional equivalent of a no-feedback doubling. I think you noticed the 30N-60N land amplification; well, the 30N-60N oceans appear to be the signal that got amplified.

That has the 0.8C CO2 sensitivity as a baseline, with estimates of volcanic forcing from BEST and Kopp TSI plus the TSI composite, using a 27-month and 70-month lag. That is one of those multi-decadal variability things: solar lagging just enough to amplify the impact of the next cycle. Schwartz has the “global” lag at 8.5 years, but it appears to vary regionally.

It is a shame that solar variation is FUBAR, what with the larger-than-expected spectral variation and sunspot numbers sucking like they do. Who knows, Lean 2000 may have been right on time.

“What my analogy should show you is that you don’t have to understand how to model a car to calculate a system metric. You can just measure it.”

That is precisely MY point. It is not a metric. Or if it is, it sure as hell shouldn’t be.

We are assured that the IPCC’s predicted future temperature is accurate because it is the result of numerous runs of climate models that can predict our climate’s reaction to increased CO2.

If one of the “metrics” input into the GCMs is that our climate as it currently exists will react to a doubling of CO2 by a certain amount, you have programmed your answer in at the beginning.

I don’t throw the word fraud around much, but if that is how the GCMs are actually constructed, they are not just wrong, they are a fraud.

If I sold a stock based on my claim that I have a model showing the stock will increase 100% based on our current economy, but concealed the fact that I programmed an assumption of such a rise as a “metric” of the model, I would expect to eventually spend a fair amount of time as a guest of the government.

Don Monfort,

You are asking me to make an estimate that I have just argued at length no one can make. I don’t mind responding to questions, but they have to make a modicum of sense.

Don, “Capt. dallas made his own crude estimate. Looks plausible. Now you try, Gary.”

I am known for crude :) That is actually a transient estimate, and if solar has as much influence as it appears to, there is no meaningful ECS. The lower the CO2 impact gets, the more all those itty-bitty 2nd- and 3rd-order effects get to play. I believe it is called chaotic.

Can you at least admit that you would expect that ECS (if there is such a thing) would be a positive number, if 3.7 W/sqm were added to just about any climate that we are likely to experience in our freaking lifetimes?

I don’t know. I would think it definitely would initially, all other things being equal. And the longer the period of the increase, the longer I might expect an increase in temp would last. But that is nothing more than a guess, not science. And I have no idea how much of an increase. Nor do you.

And it is entirely possible that the Earth’s climate is such now that it responds to increases and decreases in various forcings in such a way as to remain in a relatively narrow range of temperatures. So admitting that all other things being equal an increased forcing will result in an increase in temp isn’t terribly relevant to the debate.

You underestimate yourself, Gary. Your answer was not a guess. It was science. An application of your knowledge of physics. If you were more confident in your scientific abilities, you might even try your hand at a crude estimation of ECS, or whatever CS. Between .8C and 2.2C seems plausible, don’t it? Just to get you started.

Even crude estimates of our climate’s sensitivity to a doubling of CO2 might prove to be relevant, because it is entirely possible that the Earth’s climate is such now that it responds to increases and decreases in various forcings in such a way as to result in at least 3C of warming. There are a lot of smart people who think that could be a serious problem. I don’t know enough about the physics to prove them wrong, so I am keeping an open mind that tends towards hopeful low-grade lukewarmism. Think about it, Gary.

And Mosher is not trying to pull the wool over your eyes, Gary. His hat size has increased since he fell in with that BEST crowd and at times he seems to be running a little hotter than lukewarm, but he is honest. He is also smarter than he looks.

7 assume equilibrium — i.e. uniform temperature at all levels of the atmosphere

With respect to your modifier “crudely”, How crude is it? +/- 10C? I am always referring to the inaccuracy of the estimate, and to the fact that the inaccuracy isn’t known.

You and others frequently refer to “the physics” or “first principles” and such without listing the propositions that you mean. What you call “first principles” could equally be called “counterfactual conditionals” of unknown inaccuracy.

Not necessarily, Matt? It’s a theory, ain’t it? If you accept that there is a TCS in their theory, won’t you also allow them the ECS? If there is climate sensitivity in the short term that is “transient”, doesn’t it make some sense that in the long term there is also a climate sensitivity that they can call ECS, if they want to? Some of you people argue over the most mundane things.

Don Monfort: If there is climate sensitivity in the short term that is “transient”, doesn’t it make some sense that in the long term there is also a climate sensitivity that they can call ECS, if they want to?

Yes, it does “make some sense”, but it does not follow logically that the ECS exists, or what size it is if it does exist, or how long it takes to arrive if it does exist. This occurs all the time in a dynamic system with a “strange attractor”, for example: the short-term effect of an impulse may be to knock the trajectory out of the attractor, but the long-term effect may be that the trajectory returns to the attractor. There are lots of possibilities exemplified by results in other fields: an electric defibrillator knocks the heart from one attracting basin into another, and there is no long-term equilibrium following the transient response.

And that’s if the TCS is non-zero or non-negligible, given the state of the system now.

A few trillion dollars of investment/damage and corresponding amounts of human effort and misery ride on getting the best possible answers to the questions.

Don Monfort: Matt, if the earth started receiving 3.7 W per sq m more energy from the sun, do you have any idea what effect that might have on our climate?

I wrote out some ideas when Prof Curry posted the heat transport diagram of Graeme Stephens. The effects will certainly be different over water, forest, and desert; different in the tropics, temperate zones, and poles; different in different seasons; and different in daytime and nighttime.

The water surface would probably produce more water vapor than it does under current conditions, and might warm very little. Desert will warm more than under current conditions.

In effect, an experiment has been performed on the Earth during the past half-century – an experiment that includes all of the complex factors and feedback effects that determine the Earth’s temperature and climate. Since 1940, hydrocarbon use has risen 6-fold. Yet, this rise has had no effect on the temperature trends, which have continued their cycle of recovery from the Little Ice Age in close correlation with increasing solar activity.

“Results presented at the workshop suggested that the enhanced wintertime warming over high northern latitudes from 1965 to 2000 was mainly a reflection of unforced climate variability. Disregarding this dynamically induced component of the 20th century warming leads to around a 10% reduction in the inferred global climate sensitivity.”

The quoted sentences have no clear meaning. On the one hand, they attribute most of the warming in the “high northern latitudes” to natural variability but, on the other hand, they refer to a 10% reduction in “inferred GLOBAL climate sensitivity.”

Clearly, they are headed in the right direction, as stated in their conclusion:

“The climate response to external forcing—especially on regional scales—is strongly influenced by dynamical processes in both the ocean and the atmosphere. Moreover, the existence of strong natural multidecadal to centennial variability makes the detection of anthropogenic climate change a challenge.”

They endeavor to improve the ability of climate models to show natural variability and they recognize that such improvements will make the anthropogenic signal harder to detect.

But let us back away and look at the bigger picture. The “forcings and feedbacks” calculation remains a mystery. The role of clouds alone could swamp everything discussed above. Yet everyone agrees that models have nothing approaching an acceptable representation of cloud behavior. And we all know that there are no well confirmed physical hypotheses that describe cloud behavior. In conclusion, while I applaud the approach to models described above, the deficiencies of climate science and of existing computer models are so great that they overwhelm this effort.

Our industrial and technological civilization depends upon abundant, low-cost energy. This civilization has already brought unprecedented prosperity to the people of the more developed nations. Billions of people in the less developed nations are now lifting themselves from poverty by adopting this technology. (Ibid.)

Modeling studies indicate that the AMOC weakens most at northern high latitudes in response to increasing greenhouse gas concentrations. The simulated AMOC weakening under anthropogenic forcing cannot be distinguished from natural AMOC variability in the record extending through just the first few decades of the 21st century, but the free and forced variability should become separable toward the middle of the century. Analysis of the 350-year-long Central England historical temperature record is suggestive of pervasive multidecadal variability that climate models suggest could be associated with variations in the strength of the AMOC.

Ah, man. And here I have been projecting that we’ll have sufficient information by about 2030. Bummer.

Energy-intensive hydroponic greenhouses are 2,000 times more productive per unit land area than are modern American farming methods [and]… if energy is abundant and inexpensive, there is no practical limit to world food production… With plentiful inexpensive energy, sea water desalination can provide essentially unlimited supplies of fresh water [and]… human ingenuity in the use of energy has produced many technological miracles [that]… have markedly increased the quality, quantity, and length of human life. Technologists of the 21st century need abundant, inexpensive energy with which to continue this advance. (Ibid.)

Wagathon, got bad news for you. About 60% of direct human food calories, and almost 80% if you include indirect from meat converted from grain (poultry, pork, a portion of beef, farmed fish), come from five crops: wheat, rice, maize (corn), soybeans, and potatoes. There are many good reasons those are inherently not greenhouse crops. Most diet is not hydroponic hothouse vegetables. And the green revolution (dwarfing, rust resistance, now GMO corn and soy against pests and weeds) can happen once, not twice.
Got more bad news for you. Energy is not cheap, and will only get more expensive since the cheapest stuff always gets used first, and is now going or gone. At current rates of production even mighty Ghawar will be fully depleted by about 2035; water cut is already 55% in the north Ain Dar sector, and approaching 45% in the reworked Haradh sector. The temporary price depression in North American gas will self-correct by 2015-2016 without LNG exports as demand grows (CCGT electricity) and oversupply contracts (fracked well decline curves, 2/3 of rigs shifted to oil from gas).
CAGW has focused attention on energy policies that manifestly do not work, since the sun does not always shine and the wind does not always blow, and hasty support of such ‘renewables’ has detracted from work on innovations that could really help in the future (like better nuclear fission).
You can check out Gaia’s Limits for many supporting factual details.

The article does not address how best to utilize abundant energy to provide carbohydrates (although obviously energy avoids manual threshing, etc.), but it does address energy production, e.g. the example of 50 nuclear installations at a cost of $1 trillion that would repay the construction costs “in just a few years,” based on a $60 per barrel cost of oil.

If all of the sun’s energy impacting the Earth were dedicated to the sustenance of humans @ 100W/human, then the theoretical maximum population of the Earth is in the quadrillions, approximately a million times more humans than are presently sustained.
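As a back-of-envelope check of the “quadrillions” claim (this is my sketch of the arithmetic, not the commenter’s actual calculation; the solar constant and Earth radius are standard round values):

```python
import math

SOLAR_CONSTANT = 1361.0    # W/m^2, total solar irradiance at top of atmosphere
EARTH_RADIUS = 6.371e6     # m, mean radius
POWER_PER_HUMAN = 100.0    # W, the figure used in the comment

# Earth intercepts sunlight over its cross-sectional disc, pi * R^2
intercepted_power = SOLAR_CONSTANT * math.pi * EARTH_RADIUS ** 2
max_population = intercepted_power / POWER_PER_HUMAN

print(f"Intercepted solar power: {intercepted_power:.2e} W")   # ~1.7e17 W
print(f"Max population at 100 W each: {max_population:.2e}")   # ~1.7e15, i.e. quadrillions
```

The total works out to roughly 1.7 quadrillion people at 100 W each, which is consistent with the “quadrillions” figure in the comment.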

A tiny increase in our use of that energy would sustain a human population many times the present one, and in a style to which we would all like to become accustomed.

Of course, a human population in the quadrillions is practically impossible but the comparison illustrates that what the Malthusian Doomsayers most lack is creative imagination. Destructive imagination is their milieu.

We’re not even talking of the use of extra-terrestrial energy or of the use of potential energy resources already available.

Living creatures increase according to the energy available, but I suspect humans will never reach the limit of using all the energy in the universe. We’ll not get that far, but just how far, even kim doesn’t know.
=====================

Kim, I admire almost all of what you post. Am very grateful for your support, as on my intro goof in the catastrophic sea level guest post titled By Land or By Sea. But theoretical insolation does not translate to useful energy to humans. Much only causes (gasp) weather–rain, wind, … Much more is already used for photosynthesis. The remainder???

I used creative imagination plus hard objective facts to imagine Gaia’s Limits. Those are hardly a catastrophe as Paul Ehrlich or others have previously wrongly asserted (including Malthus himself). But there must be eventual soft and hard limits. You might find the ebook an interesting read. Soft limits involve food. Hard fossil fuel limits are more worrisome. They are reached within (for sure) the lifetimes of my children and hoped for grandchildren. Evidence here per several previous guest posts thanks to Dr. Curry. One problem with CAGW is it distracts from reasoned dialog on more pressing (even if a decade or two away) actionable issues.
Highest Regards

All physical resources on our planet have a finite limit (by definition).

Fossil fuels are no exception. WEC 2010 estimated in 2008 that we had remaining total inferred possible recoverable fossil fuel resources equal to around 85% of the original amount (i.e. we had used 15% of the total by 2008). So, while “peak oil” (or, rather “peak fossil fuel” – since they are interchangeable with existing technology) will happen some day, it does not appear that it will happen very soon.

The one resource, which does not have a limit, is human ingenuity. And kim points out that this resource may make totally new physical resources available to us, of which we are not even aware today.

From 1970 to 2010, global yields of the major crops you cite increased by 2.4 times, while human population increased by 1.7 times, CO2 increased by 20% and global temperature increased by a smidgen over the same time period.

At the same time starvation rates decreased and (despite the sub-Saharan HIV/AIDS epidemic) global average life expectancy at birth increased by around 10 years.

Yes. The higher CO2 levels probably helped the crop yield increase; and possibly even the slightly warmer temperature. But the main cause of the increase in crop yields was “human ingenuity” (improved technology, etc.).

This is the factor that Malthusian doomsayers always overlook – and that’s the principal reason their dire prophesies never come true.

“In the late 1950s and 1960s, a longstanding inclination among some members of the upper class was about to become a national issue. This inclination was to redefine achievements in science and technology as either evil actions threatening to nature or as futile attempts to reduce human suffering that was said to be the result of overpopulation. This tendency, partly articulated as a worldview in the writings of Thomas Malthus, takes what might be reasonable concerns over issues
such as air and water quality and embeds them in an ideology deeply hostile to economic progress and the majority of human beings… The overall thrust was still clear: the U.S. and the world should move in the direction of ending population growth, and protection of the environment should be given an importance equal to or greater than that of improving the standard of living… Economic growth and technology were portrayed as problems…” ~Dr. Donald Gibson, Battling Wall Street: The Kennedy Presidency (1994)

It is a matter of ethics. If we are talking about the ethical thing to do, certainly the Left should consider the benefits of energy to mankind as well as whatever risks there may be due to humanity’s release of CO2 into the atmosphere.

For example, consider an analogous situation: the benefits of UV exposure versus the risk of skin cancer. A recent initiative from a government agency, the CDC, has focused on the idea of increasing public health by reducing our exposure to the Sun.

Is this a good idea or just an idea that sounds good and makes government feel needed and important? Let’s look at the science: “According to the American Cancer Society, in the U.S. skin cancer deaths make up just 2% of all cancer deaths,” says Tom Weishaar of the Vitamin D Council. Weishaar continues, “In this data set**, a heart attack is about 15 times more likely than melanoma. The policy experts who insist that UV should be limited have no evidence showing that limiting UV is beneficial to health in any way other than reducing skin cancer. Meanwhile, here we have a study that shows that even in a population where 90% of the individuals have a light skin color, and even among individuals who have had skin cancer, increased levels of UV exposure are related to better health.”

Wagathon, your introduction of risk weighted cost benefit is welcome rationality here and elsewhere.
Sorry if my previous reply to you did not catch your drift. Much more of this multidiscipline, eclectic global perspective is needed in policy oriented climate change debates.
Regards

The multidecadal to century-scale climate variations are sufficiently regular and correlated to solar/astronomical indexes to make unlikely an interpretation based only on internal unforced variability.

This paper contains a detailed analysis of all CMIP5 models used by the IPCC, and demonstrates that they do not reproduce the decadal and multidecadal patterns since 1850 (not just the temperature standstill since 2000; the failure is nearly total). The paper extensively discusses my astronomical based model since the Medieval Warm Period and demonstrates its far better performance than the CMIP5 models.

Abstract:
Power spectra of global surface temperature (GST) records (available since 1850) reveal major periodicities at about 9.1, 10–11, 19–22 and 59–62 years. Equivalent oscillations are found in numerous multisecular paleoclimatic records. The Coupled Model Intercomparison Project 5 (CMIP5) general circulation models (GCMs), to be used in the IPCC Fifth Assessment Report (AR5, 2013), are analyzed and found not able to reconstruct this variability. In particular, from 2000 to 2013.5 a GST plateau is observed while the GCMs predicted a warming rate of about 2 °C/century. In contrast, the hypothesis that the climate is regulated by specific natural oscillations more accurately fits the GST records at multiple time scales. For example, a quasi 60-year natural oscillation simultaneously explains the 1850–1880, 1910–1940 and 1970–2000 warming periods, the 1880–1910 and 1940–1970 cooling periods and the post 2000 GST plateau.

This hypothesis implies that about 50% of the ~0.5 °C global surface warming observed from 1970 to 2000 was due to natural oscillations of the climate system, not to anthropogenic forcing as modeled by the CMIP3 and CMIP5 GCMs. Consequently, the climate sensitivity to CO2 doubling should be reduced by half, for example from the 2.0–4.5 °C range (as claimed by the IPCC, 2007) to 1.0–2.3 °C with a likely median of ~1.5 °C instead of ~3.0 °C. Also modern paleoclimatic temperature reconstructions showing a larger preindustrial variability than the hockey-stick shaped temperature reconstructions developed in early 2000 imply a weaker anthropogenic effect and a stronger solar contribution to climatic changes. The observed natural oscillations could be driven by astronomical forcings.

The ~9.1 year oscillation appears to be a combination of long soli–lunar tidal oscillations, while quasi 10–11, 20 and 60 year oscillations are typically found among major solar and heliospheric oscillations driven mostly by Jupiter and Saturn movements.

Solar models based on heliospheric oscillations also predict quasi secular (e.g. ~115 years) and millennial (e.g. ~983 years) solar oscillations, which hindcast observed climatic oscillations during the Holocene. Herein I propose a semi-empirical climate model made of six specific astronomical oscillations as constructors of the natural climate variability spanning from the decadal to the millennial scales plus a 50% attenuated radiative warming component deduced from the GCM mean simulation as a measure of the anthropogenic and volcano contributions to climatic changes. The semi-empirical model reconstructs the 1850–2013 GST patterns significantly better than any CMIP5 GCM simulation. Under the same CMIP5 anthropogenic emission scenarios, the model projects a possible 2000–2100 average warming ranging from about 0.3 °C to 1.8 °C. This range is significantly below the original CMIP5 GCM ensemble mean projections spanning from about 1 °C to 4 °C.

Future research should investigate space-climate coupling mechanisms in order to develop more advanced analytical and semi-empirical climate models. The HadCRUT3 and HadCRUT4, UAH MSU, RSS MSU, GISS and NCDC GST reconstructions and 162 CMIP5 GCM GST simulations from 48 alternative models are analyzed.
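The kind of spectral analysis the abstract describes can be sketched as follows. This is a generic periodogram on synthetic data with a built-in 60-year cycle, not the actual HadCRUT record or the paper’s method; it only illustrates how a multidecadal peak shows up in a power spectrum.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years = 180                                     # roughly 1830-2010, annual means
t = np.arange(n_years)
# Synthetic series: a 60-year cycle of 0.2 degC amplitude plus white noise
series = 0.2 * np.sin(2 * np.pi * t / 60.0) + rng.normal(0, 0.1, n_years)

# Periodogram via the real FFT; frequencies are in cycles per year
spectrum = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(n_years, d=1.0)

peak_freq = freqs[1:][np.argmax(spectrum[1:])]    # skip the zero frequency
dominant_period = 1.0 / peak_freq
print(f"Dominant period: {dominant_period:.1f} years")
```

Note that with only ~160 years of data, a 60-year cycle completes fewer than three periods, which is why its reality in the observed record is contested.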

It’s now accepted that 17 years’ worth of data are required to confirm a climate change. If this two month swing continues for 17 years, then we’ll be able to say that the pause ended, or never occurred at all.

I was countering the claim that the warming in Australia indicated that the “pause” in global warming was over. I missed the most up-to-date UAH data.

People who claim global warming has increased or something pick the most recent hot spot as evidence, as fomd did there. I merely countered with a simultaneous instance of a cool spot. It’s always unusually warm somewhere and unusually cool somewhere else. There was an unusual cattle die-off due to cold a couple weeks ago. These isolated events don’t mean anything. Your UAH graph showed that the Australia hot event had not been exactly balanced by cold elsewhere, but the swing was well within what has been recorded since 1998.

I don’t think either you or Fan are correct, but I think Fan is less wrong, or he is playing devil’s advocate.

The warmth in Australia doesn’t indicate the end of the “pause” because no one has defined what the “pause” is and the recent trends have uncertainty too high to distinguish between the hypothesis of continuing warming and the null hypothesis of zero warming.

Even if you are using someone else’s published data, citing start and stop points, you are performing an experiment, and you must show that your method shows distinction between your hypothesis and your null.

For example, using Dr Roy’s data, and now as an end date, you have to go back a full 20 years before the uncertainty in the trend allows you to distinguish between a warming trend and no trend.

How would you want to publish that result?

1. No statistically significant warming for the last 19 years.
2. No significant warming for 19 years
3. About .2 C per decade warming for the last 20 years.
4. An observed trend of 0.176 +/- 0.162 C per decade for the last 20 years.

One of the four is wrong; all the others are true.
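The trend-plus-uncertainty framing above can be sketched in a few lines. This uses synthetic monthly anomalies, not Dr Roy’s actual UAH record, and a plain OLS standard error; a real analysis would also correct for autocorrelation, which widens the interval considerably (hence the large +/- 0.162 quoted above).

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(240) / 12.0                    # 20 years of monthly time steps
true_trend = 0.02                                # degC per year (0.2 per decade)
anomalies = true_trend * years + rng.normal(0, 0.25, years.size)

# Ordinary least squares slope and its standard error
slope, intercept = np.polyfit(years, anomalies, 1)
resid = anomalies - (slope * years + intercept)
se = np.sqrt(resid.var(ddof=2) / ((years - years.mean()) ** 2).sum())
ci95 = 1.96 * se

print(f"Trend: {10 * slope:.3f} +/- {10 * ci95:.3f} degC per decade")
# If the interval spans zero, the record cannot distinguish warming from
# no trend -- the distinction items 1-4 above are arguing about.
```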

I never pick short term trends as evidence of warming.

Some may say I deny the “pause” but until they define it, I’m just denying spin.

Bob Droege: The warmth in Australia doesn’t indicate the end of the “pause” because no one has defined what the “pause” is and the recent trends have uncertainty too high to distinguish between the hypothesis of continuing warming and the null hypothesis of zero warming.

On that we agree. I expect lots of information in the upcoming 20 years, but 2 recent posts by Dr Curry are saying not to expect much clarity before mid-21st century.

Simulations by Santer support the idea of taking 17-year long epochs. In two years it will be reasonable to compare the 1998-2014 epoch to the 1981-1997 epoch. The idea that you needed to wait some years before identifying a change in climate emerged long after James Hansen and others started the alarmism about warming.

There is no agreement about what the “null hypothesis” is, because all of the hypotheses have been formulated post-hoc. Before there was “warming” there was “cooling”, and the alarmism was based on no hypothesis test of any kind. If we take as our null hypothesis that all recent change is unrelated to CO2 and is an extension of “background” variation, we are faced with the problem that we do not have enough data for a really good null distribution for any test: that is, there is substantial debate about the “background variation”, such as the debate about where the mean temperature reaches relative max and relative min with a period of about 1000 years: climate “optimum”, “Minoan warm period”, “Roman Warm Period”, “Medieval Warm Period” and all that. If that period exists in the “CO2-independent” natural variation, then the current relative maximum is approximately on schedule, and about the right size as the height of the temperature maxima has been declining.

I repeat, the demand for careful statistical analysis has arisen mostly since a small vocal group of activists convinced a large segment of the population, including politicos like Al Gore and the UN, that potentially catastrophic warming was occurring. According to the kind of criterion advocated by Santer et al (and you in your last post), the potentially catastrophic warming never even got started.

Paraphrasing a famous aphorism; you can’t expect a statistical argument to persuade people to abandon a belief that they formed without statistical reasoning in the first place.

After reading the first presentation, it occurs to me that perhaps we could find some recent (1-3000 years) proxies and do some PCA work, finding variations at these time scales. Might this provide a longer-term baseline for comparison with the models?

Judith, you are hoping for this sort of workshop to occur in N. America or Europe. I have bad news on that score. It is exceedingly unlikely to happen. Learned societies in our part of the world dare not put their heads above the parapet. It goes back to the desire expressed by Prof. Darriulat: to see a discussion between academics from both sides of the CAGW issue. The warmists will ensure that this NEVER happens.

I would note that when the Royal Society organized a 2 day conference to discuss the AR5, they ensured that not one single, solitary skeptic/denier was invited to speak.

Results presented at the workshop suggested that the enhanced wintertime warming over high northern latitudes from 1965 to 2000 was mainly a reflection of unforced climate variability. Disregarding this dynamically induced component of the 20th century warming leads to around a 10% reduction in the inferred global climate sensitivity.

I’m a bit rusty on this whole “mainly” thing. Is that 50%+1 +/- X?

Is it 60% +/- 10%? +/-20%? +/-30%? What is the forced fraction?

And what about the rest of the world the rest of the time, not just in winter in high northern latitudes? (By the way, how high? North of 70 degrees? North of 60 degrees? North of 80 degrees? North of 45 degrees? Because there’s a huge difference in area for those.)

IPCC AR5 suggests that forced variation due to CO2 emissions may exceed observed warming by up to 10%, and that’s for the whole world all the time; your music apparently only plays at the North Pole at Christmas: is it Jingle Bells?

It is a high-coal, high forcing, high end climate change case on steroids that is billed as the worst case scenario reflecting what would happen if “policymakers” do not begin implementing “mitigation” actions NOW (all other RCP scenarios assume some mitigation actions will be taken).

By 2100:
CO2 will increase to 806-936 ppmv
Temperature will increase by 2.6 to 4.8C (mean = 3.7C) above today
(an average decadal warming rate of 0.425C for the rest of this century!)
SL will rise by up to 84 cm by 2100
(an average rate of SL rise of close to 10mm/year!)
Ocean pH will drop from ~8.1 to ~7.7
NH sea ice will disappear (late summer) before 2050
Dry regions will get drier; wet regions will get wetter
Bad stuff generally will increase dramatically while good stuff decreases or disappears
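The implied average rates in the list can be checked with simple arithmetic. This sketch assumes the comment measures from roughly 2013 (my assumption; the original baseline year is not stated):

```python
# Arithmetic check of the implied average rates in the RCP8.5 list above
years_remaining = 2100 - 2013                    # ~87 years

mean_warming = 3.7                               # degC above today by 2100 (mean)
warming_per_decade = mean_warming / years_remaining * 10
print(f"Average warming rate: {warming_per_decade:.3f} degC/decade")

sea_level_rise_mm = 840                          # "up to 84 cm" by 2100
slr_per_year = sea_level_rise_mm / years_remaining
print(f"Average sea level rise: {slr_per_year:.1f} mm/year")
```

This reproduces the 0.425 °C/decade and close-to-10 mm/year figures quoted in the list.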

Christopher Booker (The Telegraph) has labeled, “Climate change scientists,” as “just another pressure group.” If the IPCC was a cartel and not a political entity supported by Left wing governments, charges would be brought based on the IPCC’s ‘Santorizing’ of the underlying research to run a scam on the public.

“In years to come,” Booker says, “this will be looked back on as the most astonishing example in history of how the prestige of ‘science’ can be used to promote a particular belief system… All this would not be so serious if the IPCC had not been so successfully sold to the world as an objective scientific body rather than as just a political pressure group.”

As Richard Lindzen reminds us, “Even the text of the IPCC Scientific Assessment agrees that catastrophic consequences are highly unlikely, and that connections of warming to extreme weather have not been found.”

One of you alarmists kindly explain to me why you’re still setting your hair on fire, when from the very mouth of the beast, we’re hearing such reassuring news.

Well Fan, perhaps. But since you likely read and commented on the very recent Climate Etc. post on this very subject, I simply assumed you’d concede that in fact, the IPCC has dialed back most if not all of the catastrophic scenarios, or more accurately perhaps, the likelihood thereof.

There is high confidence that sustained warming greater than some threshold would lead to the near-complete loss of the Greenland ice sheet over a millennium or more, causing a global mean sea level rise of up to 7 m.

Current estimates indicate that the threshold is greater than about 1°C (low confidence) but less than about 4°C (medium confidence) global mean warming with respect to pre-industrial.

Abrupt and irreversible ice loss from a potential instability of marine-based sectors of the Antarctic Ice Sheet in response to climate forcing is possible, but current evidence and understanding is insufficient to make a quantitative assessment.

So disingenuous, Fan. You just got finished asserting that the IPCC is packed with too-timid Panglosses to explain the lack of alarmism in the latest report. I’m not going to play an unwinnable game whereby I spend an hour digging up links to charts and wading through nearly impenetrable IPCC language when it can do no good. You seem to have very little else to do with your time.

You know, or should know damn well they’ve dialed much of their previous alarmism back substantially…Reader’s digest version as per Barry Brill:

“Few of the large number of previously accused items are expected by the IPCC to cause “substantial disruption”. Some excluded favourites are:

And the other good news is that every one of the “substantial disruption” possibilities are seen as “unlikely” by the IPCC except* Arctic Sea Ice melting. This is mainly positive in opening up new sea lanes – while albedo effects have low significance in a slow-warming world.”

Go ahead and argue about arctic sea ice if you like. It’s just about all you have left. Regards to the Pope.

Most encouraging to see more climate scientists pay attention to multi-decadal and even century scale internal variability. It would be a nice start to set some real baseline understanding so that we can see if and how anthropogenic forcing might begin to impact this variability.

Rather than being chastened by having been caught red-handed promoting pseudoscience and anti-Americanism, Penn State’s continued defense of Michael Mann, fabricator of the ‘hockey stick’ (the very antithesis in stick–figure art of the real nature of the Earth’s climate in all its, multi-decadal and even century scale internal variability), is proof academia remains contemptuous of the public, reason and of the most basic of all Judeo-Christian ethics: sincerity and honesty.

I just read a “skeptical” post by Dr. Mike Stopa, displayed on his personal homepage. He’s suggesting that enhanced CO2 might plausibly fail to have *any* warming effect on the climate; that there isn’t any evidence for a positive water feedback; that there has not been any significant 20th century warming apart from a small 20 year period (that can entirely be explained by cosmic rays, internal variability, or any such); that our fossil fuel burning contribution to atmospheric CO2 increase is somehow refuted by the fact that the airborne fraction is significantly less than 100%; and that atmospheric CO2 increases can’t have any effect anyway since the atmosphere already is saturated with it in the infrared band. He suggests that the “greenhouse effect” is just a 200 year old speculation by Joseph Fourier, which late 20th century scientists have dogmatically revived and propped up with phony evidence. He also likens mainstream climate science to Lysenkoism.

Stopa is a Harvard physicist. He must be a force of nature to survive in that environment. The Daily Caller has an article on him.

It seems that he is “Director of the National Nanotechnology Infrastructure Network Computation Project and a nationally recognized expert on nanoscale electronics and computation — predicts a change in climate science in the near future.”

“I just read a “skeptical” post by Dr. Mike Stopa, displayed on his personal homepage. He’s suggesting that enhanced CO2 might plausibly fail to have *any* warming effect on the climate; that there isn’t any evidence for a positive water feedback; that there has not been any significant 20th century warming apart for a small 20 year period (that can entirely be explained by cosmic rays, internal variability, or any such); ”

It’s plausible that quantities of CO2 higher than 300 to 400 ppm may cause no measurable increase in global average temperature.
It’s a fact that the warming effect of the increase in global CO2 over the last century (more than 100 ppm) has not yet been measurable.

We know without any doubt that Urban Heat Islands increase the average temperature of a region, but we have not been able to measure the increase in global average temperature which is due to UHI effects.
In terms of regional effects there aren’t any factors which cause more increase in average temperature than Urban Heat Island effects and/or other types of changes to land area, whether human modification or naturally occurring changes [if a forest grows in a region which did not have forest, this will have an effect on the region’s average temperature].

So it’s well known, and can be measured, that land changes made by humans, or land changes which are due to natural changes, change regional climate and average temperatures. But these changes can’t be measured on a global scale. Likewise, it’s possible that a global change in CO2 will not be measurable.
Now, people/scientists have stipulated that CO2 levels and land changes have increased global temperature, but they have not actually found the fingerprints [despite some claiming they have]. It used to be fashionable to claim humans have changed global average temperatures due to land changes. Likewise Jim Hansen at one time claimed that global methane levels were causing 3 times more warming than the rising CO2 levels, and that also became unfashionable.

Well now, maybe that could actually be so. After all, many climate deniers, climate skeptics, and denizens do often go bananas and get hysterical when discussing the problem of global warming.

Oh, but wait. Mike Stopa (in his earlier postings) is also making the claim that it is “possible” that CO2 has no influence on global climate, that AGW depends on a feedback mechanism between increase in CO2 and increase in atmospheric water vapor (about which there supposedly is scientifically justified doubt), that the atmosphere was already nearly opaque in the wavelengths that are absorbed by CO2, and that the only apt comparison of AGW was to the anti-genetics theory of Trofim Lysenko that was bought wholesale by Stalin.

Wow! What does this guy know that we don’t?

Turns out that Mike Stopa is a Harvard physicist and a nationally recognized expert on nanoscale electronics and computation. So how is it that somebody with a PhD in physics, who understands nanoscale electronics, and teaches graduate level chemistry, can be so clueless when it comes to understanding the basic nature of global warming, and the role of atmospheric CO2 and water vapor in climate.

My first guess would have been that despite his academic credentials, Mike Stopa has actually no more knowledge about atmospheric physics, radiative transfer, or climate modeling, than the typical high school dropout. Or perhaps it was that Mike Stopa had somehow gotten himself duped and deceived by the fossil fuel disinformation lobby into believing the erroneous points that they try to make as part of their ongoing disinformation campaign.

But then I see that Mike Stopa is now running for Congress. Now he seems to be saying that there are many excellent scientists doing great work on climate science, that carbon dioxide is a greenhouse gas, that carbon dioxide has been increasing due to human activities, and that through most of the twentieth century the temperature of the planet went up. He then sounds knowledgeable by expounding on the scientific objections to AGW: that AGW depends crucially on a feedback mechanism between carbon dioxide and water in the atmosphere; that the physics of water in the atmosphere, in particular the physics of cloud formation, is really not well understood; and that there are natural variations in temperature which the IPCC has not fully taken into account, including the forcing factor of solar radiation and temperature variations due to ocean currents. He simply doesn’t believe that the IPCC takes into account solar forcing and other forms of natural climate variation.
(He should go and read the IPCC report to get his facts straight.)

Despite trying, his exposé still comes across as unsure and in need of further tutoring. It is almost to the point of sounding semi-plausible now, though way off target with regard to any real understanding of the climate role of atmospheric CO2, of water vapor feedbacks, or of accounting for solar radiation forcing.

From all this, would you really consider buying a used car from this guy?

When I see someone bring up ‘radiative transfer’, I think this person has no knowledge of the right physics for the Earth’s surface temperature. It’s heat transfer at the surface and it’s multimodal with the non-radiative fluxes dominating.

Hoax, hysteria, fraud, panic, all mischaracterize the problem because each touches only a bit of the elephant. This has been an ‘Extraordinary Popular Delusion and a Madness of the Crowd’.
=======================

“By definition, horizontally directed energy transports must average to zero when globally integrated.”

A literal reading of this sentence would be obviously true but also irrelevant and one that could only have the purpose of deception. I wouldn’t buy a used car from someone that used this sentence in the literal sense. That is why I assume you were actually trying to convey the meaning that horizontal heat transport can’t affect the energy budget. Would you care to expand on your belief that heat transport can’t affect the energy budget? Is there a time period involved and if so how long and on what do you base your estimate?

A Lacis, “So how is it that somebody with a PhD in physics, who understands nanoscale electronics, and teaches graduate level chemistry, can be so clueless when it comes to understanding the basic nature of global warming, and the role of atmospheric CO2 and water vapor in climate. ”
Andy, do you consider that he might be the clueless one? As I recall, you and North still defend the notorious Mann.

I hesitate to comment because so many contributors here are better qualified to do so than I am, but I would like to make the following observations about the original post.

How is it possible to hypothesise about the causes of climate change in short periods such as centuries or millennia without first agreeing, or at least stating a position, about the causes of climate change over longer periods? There seem to me to be three periods which it is important to hypothesise about before homing in on the last 200 years: (i) the current ice age, (ii) the glacial/interglacial cycles within it and (iii) the Holocene.

The first two periods are the dominant influences on our current climate: the current ice age started about 2.6 million years ago and might have been triggered by changes in ocean heat transport caused by tectonic plate movements – the closing of the Panama Isthmus. Ice ages do not occur cyclically – there have been at least 5 at irregular intervals in the 4.5 billion year life of the Earth and for the majority of the time, the Earth has not been in an ice age.

The most likely cause of the glacial/interglacial cycles during the current ice age is changes in the amount and location of solar insolation caused by the Earth’s orbital cycles, the Milanković cycles.

Any climate scientist worthy of the name should, in my opinion, have a clear view, based on evidence and theory, on what caused these changes before he/she hypothesises about modulators of shorter term climate change.

The third period worth talking about is the current interglacial, the 12,000 year old Holocene. Anyone who discusses the causes of climate change over periods of a few hundred years must surely first have stated a clear theory of what has caused the climate changes seen during the Holocene, and to the extent that their history is known, during the previous interglacial periods.

When I read the comments on this site, from all sides of the debate about CO2, I wonder if the contributors:
1. Agree that the warmest period in the Holocene was about 6,000 years ago and have an explanation of the causes of that.
2. Agree that there appears to have been cyclical warming & cooling since then with a period of about 1,000 years, with each 1,000 years colder than the last, and have an explanation of the causes of that.

For example, those who talk about the “recovery” from the Little Ice Age as if that recovery were inevitable seem not to be articulating a theory of the causes of actual climate changes during the Holocene – there must have been a cause of the recovery. It may be a result of a natural human desire for narrative to assume that identifying natural phenomena as cyclical is an explanation of them in itself, but it is not; that night follows day is not a reason for a warm period to follow a cold one.

This post from the Taiwan conference says “Since the climate system may possess multiple equilibria, the stability of these equilibria and the transition dynamics between them matter.” What is the evidence that the climate system possesses any equilibria, let alone stable ones? The actual dynamic climate which exhibits both cyclical and secular changes surely does not have an equilibrium state on any time scale and mathematical climate models which are predicated on the assumption that there is an equilibrium climate state cannot be correct. To quote Roy Clark, author of “The Dynamic Greenhouse Effect and the Climate Averaging Paradox”: “There is no climate equilibrium on any time scale and all of the energy transfer processes are dynamic, not static.”
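For concreteness, “multiple equilibria” in the modeling literature usually refers to constructions like the following toy zero-dimensional energy-balance model with an ice-albedo feedback. This is a sketch with illustrative, untuned textbook-style numbers, not a claim about the real Earth; it only shows what the workshop statement means by the term.

```python
import numpy as np

# Toy 0-D energy-balance model with ice-albedo feedback.
# All parameter values are illustrative, not tuned to the real climate.
S = 1361.0        # solar constant, W/m^2
SIGMA = 5.67e-8   # Stefan-Boltzmann constant
EPS = 0.58        # effective emissivity (a crude stand-in for the greenhouse effect)

def albedo(T):
    # High albedo when frozen (T < 260 K), low when warm (T > 290 K), linear ramp between.
    return np.clip(0.6 - 0.3 * (T - 260.0) / 30.0, 0.3, 0.6)

def net_flux(T):
    # Absorbed shortwave minus emitted longwave; equilibria are its zeros.
    return S * (1.0 - albedo(T)) / 4.0 - EPS * SIGMA * T**4

T_grid = np.linspace(200.0, 350.0, 15001)
f = net_flux(T_grid)
equilibria = T_grid[:-1][np.sign(f[:-1]) != np.sign(f[1:])]
print(len(equilibria))  # 3: a cold "snowball" state, an unstable state, a warm state
```

Two of the three roots are stable (the net flux decreases through zero there) and the middle one is unstable, which is exactly why the stability and transition dynamics mentioned in the quoted sentence are worth asking about.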

The Taiwan conference may have been a step in the right direction if its participants were setting out to agree first the facts about the Earth’s climate which they are trying to explain the causes of, and then consider the possible explanations. But it looks to me that it was looking mainly at climate changes over 200 years or so without first setting out a position on the causes of longer term changes described above, and therefore while it may not have obsessed about CO2, it may not have been terribly profound.

To get you up to speed, let’s start with your assumption that all causes are known. That assumption rules out all empirical investigation. Until just recently, that assumption was a very effective tool for the Alarmists as it allowed them to treat ENSO, the AMO, the PDO, you name it, as something other than natural regularities worthy of investigation. Fortunately, through the good offices of WUWT and especially Bob Tisdale, we managed to get the Alarmists to take seriously the plethora of unexplored natural phenomena relevant to climate science. Our first big score in the Alarmist world came when Trenberth realized that he would have to actually investigate the ocean mechanisms that transport heat from near the ocean surface to the deep oceans. Then he realized that if the oceans sequester heat then the standard top-down assumptions about Earth’s energy balance used by Alarmists are wrong. They are wrong because part of the energy that, according to Alarmists, leaves Earth’s surface only to be sent back by naughty little CO2 particles is actually resident in the deep oceans. Oh well, back to the drawing board.

The ocean heat content was incorporated in models by Hansen in 1981 [1] and considered by Manabe [2] even earlier than that.

“Our first big score in the Alarmist world came when Trenberth realized that he would have to actually investigate the ocean mechanisms that transport heat from near the ocean surface to the deep oceans. Then he realized that if the oceans sequester heat then the standard top-down assumptions about Earth’s energy balance used by Alarmists are wrong. “

The realization isn’t new. Isn’t it bizarre how the deniers have to lie through their teeth to support their position?

Anyone who believes the BOM’s endlessly adjusted, patchwork and very short (in historical terms) temperature records signify anything but the sound and fury of the CAGW crowd has not been paying attention.

As an example, I have the BOM’s latest readings permanently running on my computer. According to them, it was 6.6 degrees C here about 3 hours ago (at midnight), and it is currently about 16.9 degrees. According to my thermometer, my body, and just plain common sense, both of these numbers are nonsense. I was here, although I realise that empirical evidence is scorned by some people in these discussions.

At midnight-ish, when I put out some garbage, it was maybe 10-12 degrees. Currently, according to my thermometer, it is about 11 degrees. But this is the crap that gets fed into the BOM database, which was one of the things that made poor harry, of harryreadme fame, tear out his hair. The BOM data is rubbish.

Judith: You omitted the most important section explaining what “internally-generated variability” or “unforced variability” is:

“Attribution of the causes of climate change may be approached by carefully inspecting model outputs from ensembles of “historical” and “future” climate simulations. If the number of ensemble members is sufficiently large, the ensemble mean response to a prescribed external (e.g., anthropogenic) forcing can be considered as a measure of the “forced” variability and the departures of the individual realizations from the ensemble mean response the “free” variability. For small incremental changes in the climate system, the simulated climate changes can be further divided into dynamically and thermodynamically induced signals (Fig. 1). That there exists a large amount of free climate variability in the climate models suggests that significant portions of the observed multidecadal variability in the climate record could be inherently stochastic, i.e., attributable to sampling fluctuations associated with naturally occurring modes of variability.”

Unforced variability in real climate is assumed to be the difference between individual model runs and the mean of all runs. We know that the climate sensitivity of models varies with the exact values of the roughly two dozen parameters that are used by each model. Therefore, what is assessed to be “unforced natural variability” will depend on the precise parameters chosen. We know that different models give different projections for regional climate change and most examples of unforced natural variability will be regional in nature.
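The forced/free split the quoted passage describes is mechanical once you have an ensemble: the ensemble mean estimates the forced response, and each run’s departure from it estimates the free variability. A minimal sketch with synthetic data (all numbers invented for illustration, not from any real model):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)

# Synthetic "historical" ensemble: one shared forced signal plus
# run-specific noise standing in for free (internal) variability.
forced_true = 0.007 * (years - 1900)                 # deg C, invented forced warming
ensemble = np.stack([
    forced_true + rng.normal(0.0, 0.1, years.size)   # each run adds its own noise
    for _ in range(40)
])

forced_est = ensemble.mean(axis=0)   # ensemble mean ~ "forced" variability
free = ensemble - forced_est         # departures ~ "free" variability
print(bool(np.abs(forced_est - forced_true).max() < 0.1))  # True: mean recovers the signal
```

The caveat raised in the comment above applies directly: a model with different parameters yields a different ensemble mean, and hence a different estimate of what counts as “unforced.”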

I know you believe that unforced natural variability has been ignored for far too long – but that doesn’t mean you should get excited about naive attempts to understand the problem with models that you presumably believe aren’t fit for this purpose.

The most important fact to determine about multi-decadal internal variability is its magnitude. Last I heard, when the PDO and AMO are averaged to decadal time scales they may contribute of order 0.1 C to global mean temperature in either direction, averaging out over time. Have any of these papers quantified them, or something else internal, to be significantly higher? Because, lacking that, internal variability is just meaningless noise against a background of several degrees of warming. At best, these can only confuse short-term trends, not long-term ones, where the 60-year running trend is curving upwards.
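The magnitude argument can be checked with a toy superposition: a ~0.1 C multidecadal oscillation riding on a steady warming trend shifts short trends noticeably but distorts a 60-year trend much less. A sketch with invented illustrative numbers:

```python
import numpy as np

years = np.arange(1900, 2021).astype(float)
background = 0.008                                             # deg C/yr, ~0.8 C/century
oscillation = 0.1 * np.sin(2 * np.pi * (years - 1900) / 60.0)  # 0.1 C, ~60-yr mode
temp = background * (years - 1900) + oscillation

def fitted_slope(t, y):
    # Least-squares linear trend, deg C per year.
    return np.polyfit(t, y, 1)[0]

slope_15yr = fitted_slope(years[-15:], temp[-15:])  # inflated by the oscillation's phase
slope_60yr = fitted_slope(years[-60:], temp[-60:])  # the oscillation largely averages out
print(abs(slope_60yr - background) < abs(slope_15yr - background))  # True
```

The 15-year trend here is roughly double the background rate while the 60-year trend stays much closer to it, which is the sense in which a 0.1 C mode can confuse short-term trends but not long-term ones.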

OK, but humans should not be living anywhere north or south of 40 degrees latitude. We derive from monkeys in the tropics and our body temperature is 37C, so warming would be very welcome there LOL. Unfortunately, because life was so easy in the tropics, we humans did not really develop mentally unless we had hardship; for this reason the northerners did develop massive brains just to survive the cold HAHAHA

The multi-decadal oscillations you mention may actually be playing a larger role, according to Dr. Syun-Ichi Akasofu – “On the present halting of global warming”: http://www.mdpi.com/2225-1154/1/1/4/pdf

Abstract: The rise in global average temperature over the last century has halted since roughly the year 2000, despite the fact that the release of CO2 into the atmosphere is still increasing. It is suggested here that this interruption has been caused by the suspension of the near linear (+ 0.5 deg C/100 years or 0.05 deg C/10 years) temperature increase over the last two centuries, due to recovery from the Little Ice Age, by a superposed multi-decadal oscillation of a 0.2 deg C amplitude and a 50 ~ 60 year period, which reached its positive peak in about the year 2000—a halting similar to those that occurred around 1880 and 1940.

Because both the near linear change and the multi-decadal oscillation are likely to be natural changes (the recovery from the Little Ice Age (LIA) and an oscillation related to the Pacific Decadal Oscillation (PDO), respectively), they must be carefully subtracted from temperature data before estimating the effects of CO2.

Taking the underlying linear warming trend of 0.05C per decade since 1850 (which began long before human GHG emissions amounted to anything), i.e. 0.15C over the three decades of the late-20thC warming cycle, plus the “multi-decadal oscillation of a 0.2 deg C amplitude and a 50 ~ 60 year period, which reached its positive peak in about the year 2000”, we end up with 0.35C out of the observed total linear warming of 0.5C attributed to causes not related to human GHG emissions.
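The arithmetic in this attribution claim is worth making explicit (these are the comment’s own figures, reproduced for bookkeeping, not an endorsement of them):

```python
# Akasofu-style bookkeeping, using the figures quoted in the comment above.
trend_per_decade = 0.05   # deg C/decade, claimed LIA-recovery trend
decades = 3               # three decades of late-20thC warming
oscillation = 0.20        # deg C, claimed multidecadal oscillation contribution

natural = trend_per_decade * decades + oscillation   # 0.15 + 0.20
observed = 0.50                                      # deg C, total linear warming
residual = observed - natural                        # what is left for GHGs
print(round(natural, 2), round(residual, 2))         # 0.35 0.15
```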

So why should the recovery from it have been caused by a sudden increase in CO2?

Duh!

Use your head, Bob!

Max

Jim D

“You think” Akasofu’s paper is “bunkum”.

Howdat?

He simply points out that 70% of the observed 1970-2000 warming cycle (which has now stopped as PDO has reversed) can be attributed to natural (pre-human GHG) factors plus the ~30-year warming PDO cycle.

Since both events started long before humans were emitting any appreciable quantities of GHGs (primarily CO2), and since concentrations of these GHGs did not start to change appreciably until after WWII, after a good part of the recovery had already occurred, it must be something else.

Right?

So we’ve ruled out CO2.

How about the same Mother Nature that has caused all the many climate fluctuations of our planet over the past?

Makes sense. But now let’s try to identify the mechanisms involved.

Well there’s ENSO, PDO, AMO, variations in solar activity, etc., but it’s not so easy, since there are many possible mechanisms (and we probably don’t even know most of them yet).

I am asking what specific natural processes are causing the 0.5 C per century trend.

You seem to be saying you don’t know, which means you aren’t supporting your argument, which means it’s just because you say so, which is what someone else said. Bunkum.

1. The “0.5 C per century trend” IS the ” recovery from the little ice age”.

2. The person who stated this is the author of the paper I cited.

3. It makes sense to me that this is a natural phenomenon rather than AGW, because it started long before there were any appreciable human GHG emissions.

4. I do not know the underlying mechanism for this observed trend; nor do you. (Nor does IPCC.) But since it started long before humans started emitting significant GHG quantities, it is clear to me that the author is correct in deducing that it could not have been caused by human GHGs.

This statement: “That there exists a large amount of free climate variability in the climate models suggests that significant portions of the observed multidecadal variability in the climate record could be inherently stochastic, i.e., attributable to sampling fluctuations associated with naturally occurring modes of variability” demonstrates a profound misunderstanding of the term “stochastic”. It is the antonym of “deterministic” and means “governed by the laws of probability”. It is not necessarily associated with sampling and relates to the underlying physical processes themselves, processes such as wave breaking, vortex shedding and turbulence. Physicists and signal processing engineers have been comfortable with this concept for some time but it appears that fluid dynamicists continue to be locked in the deterministic head-set of the nineteenth century. This is why fluid dynamical models such as OAGCMs, do not work. Deterministic models cannot adequately describe stochastic behavior. Multidecadal variability is a symptom of this; it is a measure of the failure of the models to account for the underlying stochastic nature of climate processes.
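The distinction the comment draws can be made concrete: even a simple stochastic process with no deterministic cycle at all, a red-noise AR(1) series, produces multidecadal excursions that look like trends. A sketch in which the parameter values are arbitrary illustrations, not fitted to any climate record:

```python
import numpy as np

rng = np.random.default_rng(1)
phi, sigma = 0.9, 0.1   # year-to-year persistence and shock size (illustrative)
n_years = 150

# AR(1) "red noise": no forcing, no cycle, pure stochastic persistence.
temp = np.zeros(n_years)
for t in range(1, n_years):
    temp[t] = phi * temp[t - 1] + rng.normal(0.0, sigma)

# Persistence amplifies the wander well beyond the annual shock size:
stationary_std = sigma / np.sqrt(1.0 - phi**2)
print(round(stationary_std, 2))  # 0.23
```

Whether deterministic model cores can adequately represent such behavior is exactly the commenter’s complaint; the sketch only shows what “inherently stochastic” variability looks like.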

This research from Taiwan is yet another example of how we were misled by the ‘science is settled’ statement at the beginning of the IPCC work. How can we correct such an historical error? Unfortunately there is no credible explanation for the 0.5C rise between 1910 and 1940 other than the CO2 greenhouse gas theory, even though that theory flies in the face of the carefully measured specific heat of CO2. The only crack in this paradox comes from the fact that CO2 is not a gas with singular properties: it has many different vibration modes, all of which absorb energy. So, in effect, the specific heat of CO2 depends greatly on the temperature at which it is measured. That is why I claim that the effect of CO2 depends not on the total CO2 in a well-mixed atmosphere, but on the proportion of new hot CO2.

“What does the science say about the temperature of the oceans – which, after all, constitute about 70% of the Earth’s surface? The oceans store approximately 80% of all the energy in the Earth’s climate, so ocean temperatures are a key indicator for global warming.” ~ Roger Pielke, Sr.

It isn’t just logical, it’s Physics 101: “If ocean cooling does occur, it DOES mean global warming has stopped during that time period.” (ibid.)

“The predicted temperature in 2100 by the IPCC is simply an extension of the warming trend between 1975 and 2000… As a result, the IPCC prediction during the first decade of the present century has already failed.” ~Syun-Ichi Akasofu

Where does the calculation for ECS come from? Since CO2 has risen consistently since the 40’s or so while the temperature has fallen, climbed, and then flattened, which part do we attribute to CO2? If we just average the whole period, we still have to explain the 80 or so years prior to that, when temperature climbed, peaking in the 30’s, when anthropogenic CO2 was not a factor.
Can we say for sure right now that CO2 is not causing a massive increase in temperature while natural factors not well understood are dropping temperature, causing a flat result?
For every CO2 rise or fall there’s a reaction in temperature to support anything from CO2 will cause catastrophe to CO2 causes cooling. Without identifying the natural factors, quantifying them, and then understanding how they react together, I am unconvinced that anyone knows ECS beyond a guess.

“The ratio of the total climate response to the no-feedback response is commonly known as the feedback factor, which incorporates all the complexities of the climate system feedback interactions. For the doubled CO2 (a) and the 2% solar irradiance forcings, for which the direct no-feedback responses of the global surface temperature are 1.2° and 1.3°C, respectively, the ~4°C surface warming (b) implies respective feedback factors of 3.3 and 3.0 (5).” – Andrew A. Lacis, Gavin A. Schmidt, David Rind, Reto A. Ruedy, – 2010

The result is the log-sensitivity of temperature rise per CO2 doubling. The land is close to an ECS of 3C, as it does not depend as much on the slow feedbacks and long response time.
The global TCR of 2C is less because it includes the lagged ocean response.
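The “log-sensitivity” form referred to here is usually written ΔT = S · log2(C/C0). A minimal sketch, where the sensitivity values per doubling (3 C ECS-like, 2 C TCR-like) are taken from the comment and everything else is an assumption for illustration:

```python
import math

def warming(co2_ppm, co2_ref_ppm=280.0, sensitivity_per_doubling=3.0):
    # Logarithmic response: each doubling of CO2 adds the same temperature increment.
    return sensitivity_per_doubling * math.log2(co2_ppm / co2_ref_ppm)

print(warming(560.0))  # one full doubling: returns the sensitivity itself, 3.0
print(warming(560.0, sensitivity_per_doubling=2.0))  # TCR-like value: 2.0
```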

But you don’t know what the feedback factors are. You can’t take a known number and multiply it by an unknown number and get a known number. Feedback factors are unknown, the IPCC admits this.

According to this analysis, because we are 80% to a CO2 doubling we would have warmed approximately 3.2C over the last 80 years or so. We warmed .8C, so the only thing we can say for sure is that that analysis is wrong.

Yeah, you got me. I’m talking about the warming, so log2(402/280) = .52.
.52 × 4 = 2.08. The earth has not warmed 2.08C, so that figure cannot be supported unless you can calculate natural factors creating cooling over the last 15 years.
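The numbers in this reply check out under its own assumptions (402 ppm now, 280 ppm preindustrial, Hansen’s older 4 C per doubling):

```python
import math

fraction = round(math.log2(402.0 / 280.0), 2)  # fraction of a CO2 doubling realized
implied = fraction * 4.0                       # deg C, at 4 C per doubling
print(fraction, round(implied, 2))             # 0.52 2.08
```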

So this is my point. To get ECS you need the response to a doubling of CO2 (about 1.2C per doubling with no feedbacks), then you need the feedback effects.
So to get the feedback effects you take the increase in temperature in a given year and subtract natural variation. What is left is the effect of CO2 plus feedback effects.
So if you are telling me you know ECS and you know the feedback effects, then you would have to know all of the factors affecting the climate that are not related to those two things.
So then what are the factors for natural variability right now?
Please list all the natural factors and tell me how much each one is adding to or subtracting from the earth’s temperature. If you can tell me that, I can go back or forward to any point in time and calculate the exact temperature.

Since this is what models attempt to do, and they fail to do, I’m sure the world is interested to know the answer so we can build the perfect model.

You can’t just look at a small piece of time, average it and say the rest is just noise from natural factors. Natural factors have brought the earth in and out of ice ages and everything in between. They do not follow a continually recurring oscillation. All of those patterns break down over enough time so the only way to know those factors is to quantify what is causing the oscillations.

What about the equivalent of a CO2 doubling (including methane, etc.)?
Serious question; I’m too lazy to find out. Lindzen seems to think it’s 80%, although he could be (and has been) wrong on this in the past. Everyone makes mistakes. What’s the answer?

No, I think you have that wrong as well. That 0.8 you are referring to is not a dimensionless number but a conversion from the CO2-doubling radiative forcing to a temperature rise (including the feedbacks):
3 C ≈ 0.8 C/(W/m^2) × 3.7 W/m^2

“The remaining uncertainty is due entirely to feedbacks in the system, namely, the water vapor feedback, the ice-albedo feedback, the cloud feedback, and the lapse rate feedback”;[9] addition of these feedbacks leads to a value of the sensitivity to CO2 doubling of approximately 3 °C ± 1.5 °C, which corresponds to a value of λ of 0.8 K/(W/m2).”
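The conversion the two comments above are discussing, under the quoted values (λ ≈ 0.8 K per W/m², forcing ≈ 3.7 W/m² per doubling):

```python
lam = 0.8         # K per (W/m^2), climate sensitivity parameter from the quote
forcing_2x = 3.7  # W/m^2, radiative forcing for doubled CO2

ecs = lam * forcing_2x
print(round(ecs, 2))  # 2.96, i.e. the ~3 C per doubling in the quoted passage
```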

And of course, if you don’t agree with the Wikipedia entry, you can try to modify it.

No, that has nothing to do with anything; its only relation is that it’s also .8, but many numbers can be calculated to equal .8.

If a doubling of CO2 will create 3.7 watts/m2 of forcing, and 3.7 watts/m2 corresponds to 3C (with feedbacks), then we can figure out what warming we should have seen.

If AR4 figures that we are receiving 3.0 watts/m2 from all greenhouse gases (and they did), then we can calculate that we have received a forcing equivalent to 81% of a doubling of CO2. And if we have received an equivalent forcing of 81% of a doubling, then we should have warmed 81% of their temperature increase as calculated from 3.7 watts/m2: .81 × 3 = 2.43C. (The earlier post was figured on Hansen’s earlier work, which said 4C per doubling.)

But we have not warmed 2.43C, and they know we have not. So rather than rethink their conclusions about feedback effects they have invoked yet another unknown variable in aerosol forcing to balance the checkbook.
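The arithmetic behind this objection, reproduced as stated (note that it treats the full equilibrium response as if already realized, which is the step disputed elsewhere in the thread):

```python
ghg_forcing = 3.0       # W/m^2, the commenter's reading of AR4's all-GHG forcing
doubling_forcing = 3.7  # W/m^2 per CO2 doubling
ecs = 3.0               # deg C per doubling, the canonical central value

fraction = ghg_forcing / doubling_forcing
expected = fraction * ecs
observed = 0.8          # deg C, rough observed warming
print(round(fraction, 2), round(expected, 2))  # 0.81 2.43
print(round(expected - observed, 2))           # 1.63, the gap the comment points to
```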

So if they don’t know the natural variability factors at any moment then they can’t know the feedbacks, and if they don’t know the feedbacks they can’t know the ECS, and if they don’t know the ECS they can’t know the aerosol forcing effects which is simply the difference between the observed temperature anomaly and (unknown natural factors + known CO2 forcing + unknown feedback factors).

The authors seem to be suggesting that unforced variability can be a significant player on centennial time scales (good). But I do think their threshold for what constitutes a significant deviation will be on the order of 0.2 deg C. In other words, the researchers are still suffering from the condition known as ‘the cause fallacy’. Their acknowledgment of known modes such as the AMOC reaffirms this position. It’s extremely unlikely that you would be able to explain 20th century warming with a natural cycle if it 1) exhibits shorter variability than a century, and 2) only has a limited impact globally. The cause fallacy has its roots in the interpretation of variability in proxy records with regard to mainstream estimates of climate sensitivity. Was the last glacial maximum 5 or 10 deg C colder? 5 deg C sits well with current estimates of climate sensitivity, so it is considered more likely, even though other proxies have trouble ruling out the 10 deg C estimate (circular reasoning). I’ll do a post on this if I get the chance.

It’s extremely unlikely that you would be able to explain 20th century warming with a natural cycle if it 1) exhibits shorter variability than a century, and 2) only has a limited impact globally.

But if one or both of the conditions you list are not valid?

The early 20thC warming cycle cannot be explained by AGW, so something caused it which has more than “a limited impact globally”.

The statistically indistinguishable late 20thC warming cycle can be – but only if one specifically excludes the reasons for the earlier warming cycle.

The mid-century cycle of slight cooling also does not fit the AGW explanation.

Nor does the current cooling, which has lasted at least a decade so far.

A correlation between global average temperature and AGW is contrived (and certainly not statistically robust). It fits well for a 30-year period out of the 160+ year temperature record, but not for the remainder of the record.

What about ESPC — the Economic Sensitivity to the Political Climate (i.e., what we see is the reality of Eurocommunism) –e.g., “the present trend toward all-round government control clearly points toward more underground activity… [as] the regulated economy stagnates or withers away…” ~Hans F. Sennholz

Depends on which models. In general, the models that are supposed to advise the IPCC spit out an ECS. ECS is simply the no-feedback greenhouse sensitivity times a feedback fudge factor. It doesn’t matter how accurately we know the greenhouse number; the range of uncertainty in the feedback gain is so great that, for practical intents and purposes, it is a knob. Thus the wide IPCC range.

CO2 isn’t a knob, it’s an input. Feedback isn’t an input, it’s a knob. Lots of people get this exactly backward.

An action plan incorporating an open-door climate policy coupled with win-win blue sky thinking, can push the envelope of proactive processes to provide an empowerment and ownership of knowledge transfer that can clearly show on the public’s radar screen, ultimately transitioning towards a shifting paradigm scenario.
So, at the end of the day, there’s lessons to be learned and not all can be conceptualised by thinking outside the box!

Judith
. The climate response to external forcing—especially on regional scales—is strongly influenced by dynamical processes in both the ocean and the atmosphere. Moreover, the existence of strong natural multidecadal to centennial variability makes the detection of anthropogenic climate change a challenge.
.
Indeed I agree with you that it is very satisfying to see that this approach to the climate, which takes the dynamical problems seriously and tries to tackle the climate with its full spatio-temporal variability, is more and more developed.
It refreshingly breaks with the eternal masturbation over a single, statistically meaningless variable that has been so filtered that nothing is left anymore (I refer of course to the GMT), and with the billionth “paper” about ECS, which is a theoretical construct with no relevance to real dynamics.
I have just printed all the talks and will spend a pleasant weekend reading them.
Perhaps to save you some time, I already selected 2. Of course the order of my reading is biased by my preference for themes with a robust mathematical formulation but this workshop didn’t fail my expectations.
.
First I rushed to M. Ghil’s talk, because M. Ghil’s presence is a guarantee of quality: http://www.tims.ntu.edu.tw/download/talk/20120918_2293.pdf
It amused me to find the name of A. Bracco, who was unknown to me until you told me that she was your colleague and spoke to you about oceanic circulation. You have good intellectual taste.
I quote: there are several natural modes of variability internal to the climate system. It is the chaotic interaction of these modes that is forced by us, not some dumb “equilibrium”.
This talk is a highly readable jewel introducing nonlinear dynamics in the climate.
.
Then I recommend http://www.tims.ntu.edu.tw/download/talk/20120920_2295.pdf
I quote slide 3: Major theoretical challenges for complex non-equilibrium systems.
– Mathematics: Stability properties of the time-mean state say nothing about the properties of the system. We cannot define a simple theory of the time-mean properties relying only on the time-mean fields.
– Physics: We have no fluctuation-dissipation theorem for chaotic dissipative systems. There is no equivalence of internal/external fluctuations, from which it follows that climate change is hard to parameterize. (My remark: this is a euphemism!)
– Complex systems feature multiscale properties; they are stiff numerical problems, hard to simulate “as they are”. (My remark: again a euphemism)
Despite being a bit technical, this talk explains nicely why the ergodic hypothesis (time mean = phase-space mean) is necessary and why it is a problem that we have no FDT (fluctuation-dissipation theorem).
.
Even if it is not in the talks but would belong there, I add a recent paper : http://adsabs.harvard.edu/abs/2012EGUGA..14.8591V
This paper deals with a mystery that the masturbators of averages either neglect or are not even aware of: namely, that while the surface albedo shows a significant hemispherical asymmetry, the global albedo shows none. This is doubtless one of the major subjects, because the energies considered here are vastly above second-order effects like CO2 forcing. So there is apparently a natural mechanism which modulates hemispherical albedos so as to suppress hemispherical asymmetries.
.
Of course, like the most beautiful girl, the paper can only give what it has: it uses coupled circulation models for an aquaplanet with a shallow slab ocean. The conclusions depend on the chosen convection scheme, which moderates my enthusiasm, but it is an interesting and worthy attempt anyway.

There is high confidence that sustained warming greater than some threshold would lead to the near-complete loss of the Greenland ice sheet over a millennium or more, causing a global mean sea level rise of up to 7 m.

Current estimates indicate that the threshold is greater than about 1°C (low confidence) but less than about 4°C (medium confidence) global mean warming with respect to pre-industrial.

Abrupt and irreversible ice loss from a potential instability of marine-based sectors of the Antarctic Ice Sheet in response to climate forcing is possible, but current evidence and understanding is insufficient to make a quantitative assessment.

“Even the text of the IPCC Scientific Assessment agrees that catastrophic consequences are highly unlikely, and that connections of warming to extreme weather have not been found.”

There is high confidence that sustained warming greater than some threshold would lead to the near-complete loss of the Greenland ice sheet over a millennium or more, causing a global mean sea level rise of up to 7 m.

What’s catastrophic about this? In any sense of the word? A “millennium or more“?!?!?!!

I think the suit is in Canada. Defamation is a lot easier to prove there. Steyn will still win, but the game is played very differently from how it would be in the US. In the US, the case would be tossed on the grounds that Mann is a public figure.

No, the suit is in the D.C. federal District Court. They filed there because it has cumulatively the most progressive judges in the country.

And Steyn’s lawyers are from one of the country’s mega-firms, Steptoe & Johnson, which ranks consistently in the AmLaw Top 100. The lead lawyer is apparently Shannen Coffin, who, besides being a partner at Steptoe, is a fellow contributor at National Review with Steyn.

I got off the fast track long ago. They need no help from a small town guy like me.

I’m not a lawyer, but I almost spent a night once at a Holiday Inn Express, and I have a hard time seeing how Mann is going to do anything with this but burn through a lot of his sugar daddy’s money, and possibly end up on the losing end of an anti-SLAPP motion. But he may have a lot of short-term success before his final spanking.

Last thing I read, Steyn and NRO were trying to appeal the trial court’s denial of their motion to dismiss the complaint under the D.C. anti-SLAPP act. Plus there is apparently a new judge, which can’t hurt.

Thanks Gary. Actually, a new judge might well hurt, if the judge is smarter than a fifth grader (unlike the previous one) but has the same views.

I don’t blame you for jumping off the “fast track.” I have a few friends from university days who stayed on it, and while they are rich, for the most part they are stunted personalities, usually with a trail of broken marriages and bewildered children behind them. One or two have managed to have a decent life, but the rest are pretty miserable human beings, despite the baubles. My contemporaries who became suburban solicitors or low-key corporate lawyers are much happier and better people, IMO.

The meme of the small town/suburban lawyer who can mix it with the big boys if required is a very powerful one, and has some basis in truth.

Anyway, good to hear that Steyn has excellent legal representation. Watching the progress of this case is going to give me great pleasure.

Despite fluctuations down as well as up, “the sea is not rising,” he [Swedish geologist and physicist Nils-Axel Mörner] says. “It hasn’t risen in 50 years.” If there is any rise this century it will “not be more than 10cm (four inches), with an uncertainty of plus or minus 10cm”. And quite apart from examining the hard evidence, he says, the elementary laws of physics (latent heat needed to melt ice) tell us that the apocalypse conjured up by Al Gore and Co could not possibly come about.
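For what it’s worth, the latent-heat point can be put in numbers. Here is a rough sketch with my own illustrative inputs (the 0.5 W/m² planetary energy imbalance is an assumed round figure; this is not Mörner’s calculation):

```python
# How long would it take to melt enough ice for 1 m of sea level rise,
# if ALL of an assumed planetary energy imbalance went into melting?
L_FUSION       = 3.34e5   # latent heat of fusion of ice, J/kg
OCEAN_AREA_M2  = 3.61e14  # global ocean surface area, m^2
EARTH_AREA_M2  = 5.10e14  # total Earth surface area, m^2
IMBALANCE_W_M2 = 0.5      # assumed top-of-atmosphere imbalance, W/m^2
SECONDS_PER_YR = 3.156e7

# Meltwater mass for a 1 m layer over the ocean (density ~1000 kg/m^3).
melt_mass_kg = OCEAN_AREA_M2 * 1.0 * 1000.0

energy_needed_J = melt_mass_kg * L_FUSION
energy_per_yr_J = IMBALANCE_W_M2 * EARTH_AREA_M2 * SECONDS_PER_YR
years = energy_needed_J / energy_per_yr_J

print(f"{years:.0f} years")
```

The answer is on the order of a decade or two even under the extreme assumption that the entire imbalance goes into melting ice; in reality most of it warms the ocean, which is the quantitative content of the latent-heat argument.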

“The choicest detail is when the lady explains that, during the hours they were stuck in the hotel and prevented by armed guards from walking next door to see Old Faithful, every hour-and-a-half throughout the day, just before the geyser was due to blow, your supposedly “closed” government dispatched a fleet of NPS SUVs to ring the site just in case any of those Japanese or Canadian tourists had managed to break out and was minded to take a non-commissar-approved look at it.

Oh, and stay tuned to the end when she recounts how the Park Service, on the two-and-a-half hour bus journey out of the park to Checkpoint Charlie at the Yellowstone Wall, forbade the seniors from using any of the bathroom facilities en route. If you did that to foreigners you’d captured on the battlefield, it would be in breach of the Geneva Conventions. But, if you seize them in an American park, you can do what you want.”

Don’t worry. Locking down tourists and keeping the elderly from using a toilet is “for the children.”

It’s grossly understated, by the way. There are off-budget obligations that weren’t even conceived of over the period the chart covers. The Keynesian “consensus” is the core intellectual support for these results: the original junk-science political meme.

Yes, tea party style conservatives are trying to blackmail the employed-for-life political class into not bankrupting the entire country just to ensure their continued employment, pensions and, most of all, power.

Sadly, it is not true that it would never happen in Australia. Causing public inconvenience is the classic response of public sector agencies when they are faced with budget cuts.

For example, a few years ago when the Australian War Memorial (plug: fantastic place, well worth a visit) was asked to trim expenditure along with every other agency, what did they do? Why, they deleted the daily sunset ceremony where the Last Post is played at the Eternal Flame. They have heaps of paperclip-counters in the back room, but funnily enough they couldn’t be bothered to share the sacrifice.

I tell you what, though, when I am one of those oldsters on a tour and they won’t let me use the toilets, I will pee anyway. Assertive peeing is called for here.

More generally, the US is already turning tourists off with its horrible post-9/11 “security” procedures at airports and the concomitant absurd, expensive and intrusive pre-approval requirements. You practically have to offer your first-born child as security, plus provide fingerprints and your life history. As someone who had a wonderful six-month holiday in the US before all this happened, I’m not at all sure it’s an experience likely to be repeated.

There’s a crucial distinction between internal variability and externally forced behavior that is not brought into proper focus here. The former can only redistribute energy; the latter can change its magnitude. Inasmuch as multidecadal variations dominate the aggregate global surface temperature record, it is difficult to explain this solely in “internal” terms, despite the rich array of theoretical possibilities proffered by nonlinear dynamical systems. The more likely driver seems to be an “external” mechanism that modulates the thermalization of insolation by regulating cloud cover.

a) an underlying warming trend that started long before there were any significant human GHG emissions
b) multi-decadal strong warming and slight cooling cycles lasting around 30 years each
c) the beginning of such a cycle of slight cooling starting in 2001.

Citations of data series that are more the product of statistical massage and “homogenization” than of measurement are for lemmings. If you were capable of recognizing bona fide time series and performing proper power spectrum analysis, you would find that well over half the total signal variance is concentrated in multidecadal and longer spectral components.
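For readers who want to try the exercise, here is a minimal periodogram sketch on synthetic data (the 50-year cycle and noise level are invented for illustration; this shows the variance-fraction technique, not a result about the real record):

```python
import numpy as np

rng = np.random.default_rng(1)
n_years = 150
t = np.arange(n_years)

# Synthetic annual series: a 50-year oscillation plus white noise
# (illustrative stand-in for a "bona fide" temperature time series).
series = 0.3 * np.sin(2 * np.pi * t / 50.0) + 0.2 * rng.standard_normal(n_years)
series = series - series.mean()

# One-sided periodogram from the FFT; Parseval relates it to the variance.
spec = np.abs(np.fft.rfft(series)) ** 2
freqs = np.fft.rfftfreq(n_years, d=1.0)  # cycles per year

total_var = spec[1:].sum()               # skip the zero-frequency (mean) bin
multidecadal = spec[(freqs > 0) & (freqs <= 1.0 / 20.0)].sum()
frac = multidecadal / total_var

print(f"{frac:.0%} of variance at periods >= 20 years")
```

With these invented inputs, well over a third of the variance lands in the multidecadal band; on a record of only ~150 years, of course, such low-frequency estimates rest on very few independent spectral bins, which is the crux of the debate.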

The “internal” narrative is total BS, strictly ruled out by hard-constrained properties of the earth-rotation & atmospheric angular momentum records. So, John S., we’re up against widespread dark ignorance &/or deception. I wish you stamina, penetrating insight, and ruthless efficiency.