Chaos

There’s yet another mathturbation post at WUWT. This one, by Andy Edmonds, argues that because weather is chaotic (in the mathematical sense), it’s impossible to model climate. In fact that’s the whole argument — a lot of words, but it boils down to nothing more.

The idea that a chaotic system can’t be predicted is right — and it’s wrong. It certainly can’t be predicted, at least not long-term, because of the phenomenon of “extreme sensitivity to initial conditions.” The slightest change in the starting conditions eventually (usually sooner rather than later) leads to a drastic change in the state of the system. Even if you know the exact “equations of motion” and the exact starting condition, to predict long-term you’d need infinite computing power and to calculate with an infinite number of significant digits. That can’t be done. And that makes prediction, frankly, impossible.

At least, detailed prediction over the long term is impossible. Weather is like that: long-term detailed prediction is beyond our ability and probably always will be. We can predict it short-term, up to about a week or maybe two at the most, but we have no hope of predicting, with any accuracy, whether or not it’ll rain in Boston on April 1st, 2095. In fact the study of mathematical chaos was jump-started by Edward Lorenz when he studied computer models of weather.

But even though a chaotic system can’t be predicted, its statistical properties often can be. The statistical properties of weather — its long-term average and variation — are referred to as climate. Those who believe that chaos in weather makes climate modeling, or climate prediction, impossible, have failed to comprehend the difference between weather and climate.

Allow me to illustrate. Let’s use a simple chaotic system: the logistic map. It’s a simple function of a single argument

$x_{n+1} = r \, x_n (1 - x_n)$.

I’ll actually use an equivalent form

$x_{n+1} = r \left( \tfrac{1}{4} - x_n^2 \right) - \tfrac{1}{2}$.

This is the same as the previous equation, except I’ve replaced the old x variable by a new one, equal to the old one minus one half.

For $r$ values between 0 and 4, if we start with an $x$ value between $-\tfrac{1}{2}$ and $+\tfrac{1}{2}$, then apply the logistic map to get a new value of $x$, the new value will also be between $-\tfrac{1}{2}$ and $+\tfrac{1}{2}$. We’ll then apply the logistic map to that new value to get an even newer value of $x$, etc., repeating the process as many times as we wish. By doing so, we can generate a time series of values.

The logistic map with $r$ large enough is chaotic (in fact chaos first sets in at about $r \approx 3.57$), so the time series can’t be predicted long-term. It’s not random — it’s perfectly deterministic — but the values will certainly seem random and will indeed defy prediction. Weather is like that. With such a value of the parameter, the map is

$x_{n+1} = r \left( \tfrac{1}{4} - x_n^2 \right) - \tfrac{1}{2}, \qquad r \gtrsim 3.57$.

I’ll use the same parameter value Andy Edmonds used in his post to generate a time series of $x$ values which we’ll pretend are monthly temperature anomaly. They don’t really behave like temperature anomaly in detail (they follow a different distribution) but they’ll illustrate the point. Then we’ll compute annual averages to get a time series of yearly average temperatures. This time series will also be chaotic.
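This procedure can be sketched in a few lines of Python. (The value $r = 3.7$ and the starting value $x_0 = 0.1$ below are assumptions for illustration, since the post’s exact choices aren’t shown; any starting value and any parameter in the chaotic regime behave similarly.)

```python
import statistics

def simulate_annual(years=1000, r=3.7, x0=0.1):
    """Iterate the shifted logistic map x -> r*(1/4 - x**2) - 1/2
    twelve times per simulated year and record each year's mean."""
    x = x0
    annual = []
    for _ in range(years):
        months = []
        for _ in range(12):
            x = r * (0.25 - x * x) - 0.5
            months.append(x)
        annual.append(statistics.mean(months))
    return annual

annual = simulate_annual()
# Every monthly value stays inside [-1/2, +1/2], so the annual means
# do too, yet the series is fully deterministic and looks random.
print(round(min(annual), 3), round(max(annual), 3))
```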

I did so for 1000 simulated years, and got this:

At least visually, it’s plausible as a time series of annual average temperature. It appears to exhibit random year-to-year variation, but it’s truly chaotic. But what about the climate? Here are 10-year averages (with error bars) of this temperature time series:

Although the simulated weather is unpredictable, the simulated climate is not. It’s stable. Almost all of the 10-year averages are within their error limits of zero, and those that aren’t are no more frequent nor more extreme than we’d expect from random fluctuations. If you test this series for significant change (using either the analysis of variance, or the non-parametric Kruskal-Wallis test since they don’t follow the normal distribution), there’s no climate change — not even close. The “weather” in this system is unpredictable, but the climate is not: it’s stable.
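A rough stand-in for that check, using only the standard library (this is not ANOVA or Kruskal-Wallis, just a simpler two-standard-error test of each decade’s mean; $r = 3.7$ and the seed value are illustrative assumptions, and the decade means are tested against the series’ own long-term mean):

```python
import statistics

def simulate_annual(years=1000, r=3.7, x0=0.1):
    # Annual means of 12 monthly values from the shifted logistic map.
    x, out = x0, []
    for _ in range(years):
        months = []
        for _ in range(12):
            x = r * (0.25 - x * x) - 0.5
            months.append(x)
        out.append(statistics.mean(months))
    return out

annual = simulate_annual()
grand = statistics.mean(annual)
anoms = [a - grand for a in annual]  # anomalies about the long-term mean

decades = [anoms[i:i + 10] for i in range(0, len(anoms), 10)]
within = 0
for d in decades:
    m = statistics.mean(d)
    se = statistics.stdev(d) / len(d) ** 0.5
    if abs(m) <= 2 * se:  # decade mean consistent with zero anomaly
        within += 1
print(within, "of", len(decades), "decade means lie within 2 SE of zero")
```

Roughly nine out of ten decade means should pass such a test by chance alone, which is just what a stable climate looks like.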

Computer models which simulate climate do so by simulating weather. Nobody seriously expects them to get the weather right, because nobody seriously maintains that weather isn’t chaotic. But predicting its long-term average and variation — the climate — is not in vain. That’s why climate models are usually repeated, doing as many “runs” as possible (within the limits of computing time), so that we can see as many weather simulations as practical and get a better handle on the long-term statistical properties — the climate.

The logistic map is chaotic, but its long-term statistics are not. They’re stable. The real point of climate change is not the apparently random fluctuations due to the chaotic dynamics of weather, but the fluctuations of climate due to changes in the dynamics of weather. When you increase greenhouse gases and therefore inhibit heat loss, you change the dynamics — the “equations of motion” as it were — and that will change the climate. In ways that are predictable.

Allow me to illustrate. Let’s write the logistic map in a more general form:

$x_{n+1} = \alpha \left( \tfrac{1}{4} - (x_n - \mu)^2 \right) + \mu + \beta$.

If we set $\alpha = r$, $\beta = -\tfrac{1}{2}$, and $\mu = 0$, we get the previous version of the logistic map. Let’s use this to simulate the weather, but this time we’ll allow the dynamics (the parameter values) to change over time. We’ll keep $\alpha$ and $\beta$, but we’ll let $\mu$ start at zero and increase linearly at a rate of 0.017 deg.C/year (just like the climate is doing presently). Simulating 100 years, I got this:
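For concreteness, here is one way to run that experiment in Python. The generalized map used here, $x \to \alpha(\tfrac14 - (x-\mu)^2) + \mu + \beta$ with $\mu$ drifting upward, is a reconstruction consistent with the description (the post’s own images aren’t available), and $\alpha = 3.7$ is an assumed chaotic parameter value:

```python
import statistics

def simulate_trended(years=100, alpha=3.7, beta=-0.5, rate=0.017, x0=0.1):
    """Shifted logistic map whose center mu drifts upward by `rate`
    per year, applied in monthly increments.  The generalized form
        x -> alpha * (1/4 - (x - mu)**2) + mu + beta
    reduces to the fixed chaotic map when mu = 0."""
    x = x0
    annual = []
    for year in range(years):
        months = []
        for month in range(12):
            mu = rate * (year + month / 12.0)
            x = alpha * (0.25 - (x - mu) ** 2) + mu + beta
            months.append(x)
        annual.append(statistics.mean(months))
    return annual

annual = simulate_trended()
first20 = statistics.mean(annual[:20])
last20 = statistics.mean(annual[-20:])
# The chaotic "weather" now rides on a slowly rising "climate": the
# last-20-year mean exceeds the first-20-year mean by roughly
# rate * 80 years, despite the year-to-year chaos.
print(round(first20, 3), round(last20, 3))
```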

Lo and behold — with stable dynamics we got a stable climate, but with changing dynamics we got a changing climate.

Ironically, the “example” which Edmonds gives of a chaotic system exhibiting drift is this one:

Here’s something else to stimulate thought. The values of our simple chaos generator in the spread sheet vary between 0 and 1. If we subtract 0.5 from each, so we have positive and negative going values, and accumulate them we get this graph, stretched now to a thousand points.

… The point I’m trying to make is that chaos is entirely capable of driving a system itself and creating behaviour that looks like it’s driven by some external force. When a system drifts as in this example, it might be because of an external force, or just because of chaos.

All he’s doing is taking a pseudo-random time series and accumulating it, to create a pseudo-“random walk.” His “drift” has nothing to do with chaos; I could get the same behavior by accumulating truly random numbers in a random walk. It’s just more mathturbation.
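That equivalence is easy to demonstrate: accumulate mean-zero chaotic values and you get a drifting pseudo-random walk, and accumulating truly random numbers gives the same kind of behavior. (A sketch with assumed parameters $r = 3.7$ and an arbitrary random seed; nothing here comes from Edmonds’ spreadsheet.)

```python
import random
import statistics

# Chaotic "weather": shifted logistic map, centered to mean ~zero.
r, x = 3.7, 0.1
vals = []
for _ in range(1000):
    x = r * (0.25 - x * x) - 0.5
    vals.append(x)
m = statistics.mean(vals)
centered = [v - m for v in vals]

# Accumulate the centered chaotic values -> a pseudo-"random walk".
walk_chaos, total = [], 0.0
for v in centered:
    total += v
    walk_chaos.append(total)

# Do the same with truly random numbers: the same sort of drift appears.
rng = random.Random(42)
sigma = statistics.stdev(centered)
walk_rand, total = [], 0.0
for _ in range(1000):
    total += rng.gauss(0.0, sigma)
    walk_rand.append(total)

# Each single value is bounded by 1/2, but both accumulated series
# wander far beyond that bound.
print(max(abs(w) for w in walk_chaos), max(abs(w) for w in walk_rand))
```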

I’m sure Andy Edmonds is a pretty smart guy. I’m also sure he’s pretty dumb — it’s all too common to find both in the same person. As for when he manages to be one rather than the other, I’m not sure whether that’s predictable, random or chaotic.

Correct. A tank-slapper has nothing to do with chaos. He’s an ignoramus who doesn’t realise how ignorant he is. Of course, we know how dishonest (or delusional) he is because he’s already said

While much of the global warming case in temperature records and other areas has been chipped away, they can and do, still point to their computer models as proof of their assertions. This has been hard to fight, as the warmists can choose their own ground, and move it as they see fit.

I haven’t seen the reactions yet, andrew, but I can imagine. I think of my brother’s very confident prediction (as it started to be clear that cycle 24 was having a slow takeoff) that 2009, 2010, 2011, 2012 would be the start of a cooling trend. One wonders how many wrong predictions one can make before one starts to question one’s assumptions. A lot, it seems.

The variety of the reporting is astounding. For instance, the AP notes that the researchers were hesitant about connecting the sunspot prediction to climate change and there could be at most a “little less” global warming.

Then you get the Register, which was picked up by Fox News: Scientists now say we’re heading into another Ice Age!

Just another idiot with a PhD who thinks the entire world can be modelled by his thesis. Funnily enough, he mentions Bayesian inference somewhere in the bowels of that post but neglects to mention that we have pretty good priors to begin with.

Another gem from WTFUWT. So computer models utilizing the laws of physics to simulate the climate are bogus. But some simplistic model easy enough to implement in a spreadsheet is sufficient to overthrow climate science. Understood.

“We can predict it short-term, up to about a week or maybe two at the most, but we have no hope of predicting, with any accuracy, whether or not it’ll rain in Boston on April 1st, 2095.”

Surely it’s the seasonal cycle that is the real death of the argument that, because one can’t predict weather one can’t predict climate?

It’s quite hard to predict whether June 21st will be warmer than June 14th – a bit more than half the days will be, and the forecasts are getting better, but this is a difficult thing to do. However, in England, one can be very confident that December 21st will be colder, even though it’s more than six months away.
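That intuition is easy to check with a toy model: a seasonal sinusoid plus independent daily “weather” noise. (The mean, amplitude, and noise level below are made-up illustrative numbers, not measured English climate values.)

```python
import math
import random

rng = random.Random(0)

def temp(day_of_year, noise_sd=3.0, amplitude=8.0, mean=10.0):
    """Toy mid-latitude daily temperature: a seasonal cycle peaking
    near day 172 (about June 21st) plus Gaussian weather noise."""
    seasonal = amplitude * math.cos(2 * math.pi * (day_of_year - 172) / 365.25)
    return mean + seasonal + rng.gauss(0.0, noise_sd)

trials = 10_000
jun21_vs_jun14 = sum(temp(172) > temp(165) for _ in range(trials))
dec21_vs_jun21 = sum(temp(355) < temp(172) for _ in range(trials))
print(jun21_vs_jun14 / trials)   # near a coin flip
print(dec21_vs_jun21 / trials)   # near certainty
```

The weather noise swamps the one-week seasonal difference but not the six-month one, which is exactly the weather/climate distinction.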

Unfortunately, they interpret this as imbuing “cycle” with some sort of mystical significance, over and above what we understand about the physical mechanisms that drive these cycles.

The really weird thing about the whole ‘random walk’ argument is that it is patently obvious that the climate has never ‘randomly walked’ into the ‘Planet frozen solid’ or ‘Oceans Boiling’ regimes; and it’s fairly obvious why.

Shalizi has a good online lecture PDF on the logistic equation – 1st lecture at the bottom – making much the same point. Used that to make a toy one to play with (drag with space bar to slow the parameter sweep.) Should get round to sticking these stats points in there at some point.

Maybe he thinks casinos don’t really exist. After all, how the next hand of blackjack or spin of the roulette wheel will turn out is totally unpredictable. So by Andy-kun’s logic, there’s no way to know if a casino will make a profit or not, and you’d be a fool to invest in one.
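The casino analogy can even be simulated: individual spins are unpredictable, but the house take converges on the edge. (A sketch of even-money red bets on an American wheel; the seed is arbitrary.)

```python
import random

rng = random.Random(1)

# American roulette: 18 red, 18 black, 2 green pockets.  A $1 bet on
# red pays even money.  Each spin is unpredictable, but the house
# edge of 2/38 (about 5.3%) makes the long-run take predictable.
spins = 100_000
house = 0
for _ in range(spins):
    pocket = rng.randrange(38)   # 0..17 red, 18..35 black, 36..37 green
    house += -1 if pocket < 18 else 1
print(house / spins)  # close to 2/38
```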

Ah but there’s almost a ‘cooling trend’ if you cherrypick 9 of the last 10 points at the end of the ‘chaotic with trend’ graph! Careful, or certain people will suggest that the rising trend has stopped and is about to reverse…

More seriously, nice post once again. I always feel like I’m on a learning trip when I visit Open Mind.

The moment someone argues this I know it’s a bunch of bunk. Just because a system is chaotic, doesn’t mean it’s not deterministic. That’s something taught on the first day of chaos theory and anyone making scientific claims or calculations better know that. *sigh*.

“The statistical properties of weather — its long-term average and variation — are referred to as climate.”

When this “it’s too chaotic to be predictable” meme comes up, I always end up comparing weather fluctuations to seasonal cycles, since the seasons are such an intuitively easy way to convey climate boundary conditions (and the fact that it’s possible to have cold spells in Spring without that meaning Summer isn’t coming.)

So would it be fair to say that climate isn’t quite just “the long-term average of weather” (though averaging weather does get you to climate-related numbers), but that it’s also defined by its boundary conditions? (And that the seasons are a nice intuitive example of one such boundary condition, with forcing coming from the angle of the Earth to the sun?) It’d be regional climate, not global, of course.

Mainly just wondering if I’m making any daft errors making the weather-seasons comparison.

Thanks for this – it helped me realize the mathematical similarities between chaotic weather vs. climate and information theory (you can’t know what the next bit of data will be, but you can estimate what the distribution of bits will be, and then you can do all sorts of neat stuff with the information).

In my mind, chaos in climate is much like the Devil’s staircase. You can’t predict when you’ll move up by one step, but on average you will go up at a given speed. Abrupt climate shifts will happen, but the average will go up.

By the way, roulette is a chaotic system. You can’t predict where the ball will fall, but any shift in the average position would be suspect.

One issue that is mentioned only in passing in this post and is often ignored; but which I think might fruitfully be factored in to other discussions – the variation around your stable or forced climate is not gaussian.
The result of that must be that you get more “non conformant” weather (e.g. runs of flat, upward or indeed downward trends against the background of the overall dynamics) than with, e.g. normally distributed random systems.
Indeed; someone who really really believed that weather is chaotic and wanted to illustrate that point in terms of statistical distributions might like to consider The plight of Phil Jones…

My wife teaches high school physics, and often poses the following problem :
A horse is harnessed to a cart. The horse refuses to try to pull the cart, explaining that, according to Newtonian mechanics, any force he applies to the cart will be exactly matched by an equal and opposite force that the cart applies to him. Since the net force is zero, it is impossible to pull the cart, so he won’t try.

A disconcertingly large portion of students conclude from this that a horse cannot pull a cart. Sometimes they propose that two horses are required.

Likewise, one would have thought that the existence of climate models which accurately reproduce many features of the climate would constitute evidence that climate modelling is possible.

“But even though a chaotic system can’t be predicted, its statistical properties often can be.”

Weather arises from the random cycling of sticky variables (mean temperature/precipitation/etc. values on some date) around their attractors. The acute sensitivity to initial conditions exhibited by Earth’s atmosphere is what makes forecasting individual weather events far into the future computationally intractable. However, prediction regarding climate, as Tamino states in his post, is not concerned with the exact values of the aforementioned variables but instead with the distributions of these variables. The distributions depend on the location of the attractors, which shouldn’t change absent an imbalance in the ingoing and outgoing fluxes of energy to the Earth. There is a measured global heat imbalance of 0.75 W/m^2 which is causing various attractors to change location. Accurate predictions of what effects these will have are in principle doable.
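The stability of those distributions is easy to see in the logistic-map toy system from the post: two orbits from very different starting points disagree wildly value-by-value, yet build essentially the same histogram. ($r = 3.7$, the two seeds, and the bin count are illustrative assumptions.)

```python
def orbit(n, r=3.7, x0=0.1, burn=100):
    """Iterates of the logistic map x -> r*x*(1 - x), after a burn-in
    so the orbit settles onto the attractor."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(x)
    return out

def hist(vals, bins=10):
    """Fraction of values landing in each of `bins` equal slices of [0, 1]."""
    counts = [0] * bins
    for v in vals:
        counts[min(int(v * bins), bins - 1)] += 1
    return [c / len(vals) for c in counts]

# Two orbits from very different seeds: point-by-point they disagree
# completely, but their distributions (the "climate") nearly coincide.
h1 = hist(orbit(100_000, x0=0.1))
h2 = hist(orbit(100_000, x0=0.7))
print(max(abs(a - b) for a, b in zip(h1, h2)))
```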

I think the issue is a little more elaborate than the simple predictable/unpredictable debate. The basic conservation laws assure that the trajectories are confined to a finite volume of the phase space. And it is true that, over a very long time, all averaged quantities will converge towards a constant value. But this does not say precisely which characteristic times and which amplitudes are really observable. A chaotic system will be characterized by a number of limit cycles and spatio-temporal intermittency that are generally impossible to predict “from first principles,” precisely because they do not arise from first principles (which govern only the average values) but from very peculiar features and non-linearities of the physical system. And unfortunately, numerical simulations have very low skill at reproducing the precise features of these oscillations. Solar models do not predict the 11-year periodicity, for instance, and El Niño/La Niña oscillations cannot be precisely reproduced by the models.

I don’t think we can so easily dismiss the issues raised by spontaneous variability, because models aren’t verified over a very long period – just a few decades. And it is almost impossible to know precisely what amount of variability is due to a variation of forcings and what is due to spontaneous cycles. The climate dogma says that “climate starts after 30 years,” but there is actually no precise reason to say that. We KNOW that there are oscillations with 60 years periods, we KNOW that climate has fluctuated in the past without precise reason (no strong variations of forcings), so it is quite difficult to be sure that spontaneous variations at the century scale don’t exist. The argument that “we don’t see them in the models” is very weak, since, as I said, models are generally very bad at reproducing them. And even worse: we DO see them in models – but they are called “drifts,” which have persistently poisoned climate simulations, and which are carefully avoided in the simulations, which select precisely the initial conditions corresponding to the most steady preindustrial conditions. In some sense, climate scientists carefully hide from the beginning what they don’t want to exist. The issue is far from being settled….

[Response: First, this post does not claim that “the issue” is settled. It’s about Andy Edmonds making such a claim, explicitly stating that chaos made climate modelling impossible. His thesis is nothing but hand-waving.

Second, it’s fine to talk of chaos and spatio-temporal intermittence and the implied impossibility of predicting (or modeling) anything, but the fact is there are constancies which arise from first principles. Namely, those which follow from conservation laws. Conservation of energy puts some serious boundaries on the variability of things like temperature. That’s why energy-balance models (even the simplest zero-dimensional kind) do a surprisingly good job predicting the global mean temperature on this, and a host of other planets too.

Third, your suggestion that the time scale defining climate is too short for us to know its true variation is faulty. Not only do you make explicitly false claims like “We KNOW that there are oscillations with 60 years periods” (NO we don’t) and “we KNOW that climate has fluctuated in the past without precise reason (no strong variations of forcings)” (sorry but there ARE forcings), but your conclusion that “it is quite difficult to be sure that spontaneous variations at the century scale don’t exist” doesn’t follow. The point is that there are limits on the magnitude, and the statistics, of such variations, and we can be very confident of some of those limits because we have reasonably good climate information going back nearly a million years from ice cores. If the climate (esp. global temperature) were as naturally variable as you suggest, then we wouldn’t see the strong correlation between temperature, Milankovitch forcing, and yes, greenhouse-gas forcing that has persisted for at least 800,000 years and probably a lot longer.

Fourth, when you say that climate scientists “carefully hide from the beginning what they don’t want to exist” I don’t believe you. I think you’re describing yourself, not the climate science (or climate modeling) community.

Frankly, the idea that global temperature change such as is expected in the next century happens by spontaneous variability (either random or chaotic), without physical climate forcing, is flatly contradicted by paleoclimate — and I’m referring to very deep time, not just a millennium or so. The idea that natural variation will somehow prevent a response to very real, and very large, man-made climate forcing is even more ludicrous.]

Tamino, if you read me carefully, you’ll see that I never stated that “global temperature change such as is expected in the next century, happens by spontaneous variability”: I said that it was impossible to ascertain precisely the amount of natural variability in the currently observed warming. Now if you think you can, please tell me which upper limit you take, and how you determine it.

Concerning your “when you say that climate scientists ‘carefully hide from the beginning what they don’t want to exist’ I don’t believe you”:

Then tell me how they determine the state of the Earth at the beginning of the century – since there were no precise measurements of things like the heat content of oceans, the transport by oceanic circulations, and so on … how do they fix them in the models?

Now, I would eyeball a ‘baseline’ of roughly -0.4K; there is the occasional ‘blip’ of about +- 0.4K around this, with sustained averages of perhaps +- 0.2K; but bear in mind that not all of this could be attributed to internally generated variability – there are certainly some volcanic influences, probably solar influences, and possibly some anthropogenic changes (i.e. land-use).

From this, it would be very hard to find room for more than 0.1K of the global average temperature being attributable to internal variation on timescales of more than a decade.

Then tell me how they determine the state of the Earth at the beginning of the century – since there were no precise measurements of things like the heat content of oceans, the transport by oceanic circulations , and so on … how do they fix them in the models ?

I don’t think you understand how climate models work. The things you mention are GCM outputs, not inputs. Unlike weather prediction models, which are necessarily initialized with real-world conditions, climate models can be initialized with dynamic conditions set at some fixed value or even a set of random numbers and run until they stabilize, at which point they do a pretty good job of simulating most natural variations. They can then be inputted with known past external forcing changes (volcanic, solar, greenhouse gases, land use changes, etc.), with such forcings derived from various realistic future scenarios, or, for the sake of experimental validation, with totally unrealistic changes in one or more inputs or parameters.

As for your ‘oscillations with 60 years periods’, this is a rough average of fluctuations with no apparent periodicity, which correspond with fair agreement between temperature reconstructions and models inputted with GHG, tropospheric aerosol, solar and volcanic dust veil reconstructions over the last millennium.

“…. the figures (maximum trend, minimum trend, average trend, square root of variance in trend) all stabilize once the data length used is 20-30 years. And, conversely, that for periods of 3-13 years, the figures all depend sensitively on how long an averaging period you choose….”
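That stabilization can be illustrated with the chaotic toy series from the post: fit least-squares trends over short and long windows and compare the spread. ($r = 3.7$ and the seed value are assumptions for illustration.)

```python
import statistics

def simulate_annual(years=1000, r=3.7, x0=0.1):
    # Annual means of monthly values from the shifted logistic map.
    x, out = x0, []
    for _ in range(years):
        months = []
        for _ in range(12):
            x = r * (0.25 - x * x) - 0.5
            months.append(x)
        out.append(statistics.mean(months))
    return out

def slope(y):
    """Ordinary least-squares trend of a series against 0..n-1."""
    n = len(y)
    mx, my = (n - 1) / 2.0, statistics.mean(y)
    num = sum((i - mx) * (v - my) for i, v in enumerate(y))
    den = sum((i - mx) ** 2 for i in range(n))
    return num / den

annual = simulate_annual()

def max_abs_trend(window):
    # Largest apparent trend over non-overlapping windows of this length.
    return max(abs(slope(annual[i:i + window]))
               for i in range(0, len(annual) - window, window))

# Short windows show large spurious "trends"; long windows don't.
print(max_abs_trend(5), max_abs_trend(30))
```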

okatino: Then tell me how they determine the state of the Earth at the beginning of the century – since there were no precise measurements of things like the heat content of oceans, the transport by oceanic circulations , and so on …

BPL: There aren’t now, either.

o: how do they fix them in the models ?

BPL: Same way as for now–from the available observations and the constraints imposed by physical law.

Okatiniko,
The literature is there for you to read–I would suggest that it would be more profitable for you to do so than for us to tell you. Suffice to say that we do have very good data for land temperatures and a significant amount of ocean data going back that far. Your slander makes clear that you haven’t even bothered to look at the data that are available.

As to “background” vs. human causation, we can characterize each by looking at the time series–as Tamino has done multiple times on this site and as the climate science community has done repeatedly.

A real scientist does not merely throw up his hands in the face of complexity. He works until he understands it–a task to which you clearly are not equal.

Thanks for the answers. It seems that you disagree among yourselves, since some of you think that we don’t need any initialization at all, and others think that we have all the data to initialize the computation correctly.

Luminous beauty, you say: “climate models can be initialized with dynamic conditions set at some fixed value or even a set of random numbers and run until they stabilize, at which point they do a pretty good job of simulating most natural variations.” But that’s exactly what I’m claiming: it is assumed that the “right” state is the equilibrium state obtained after relaxation – an assumption that is not proved, and that is not correct if you’re in the middle of a limit cycle, for instance. Now concerning the “good job” of simulating most natural variations: I’m curious to know when and how this has been verified, since essentially the precise measurements have been obtained when the climate was supposed to be already disturbed by human influence. Which period do you use to test the “good job”?

That’s precisely why I say that climate science checks only its own assumptions. You may well know that GCM models are quite unable to compute accurately the absolute value of the average temperature – they just compute anomalies with respect to baseline periods, which is precisely the beginning of the century. So the agreement between the average value is granted automatically – zero anomaly. Now concerning the agreement on the details , again : on which period, and with which accuracy, do you think it has been verified ?

[Response: You really miss the point. Several, in fact. The most important being: initialization establishes that the climate model is *stable*. Changes in forcing establish that it responds by changing temperature. As for computing the exact mean temperature to the nearest milliKelvin (I doubt anything less would satisfy you), that can’t be done simply because we don’t know the forcings with sufficient accuracy — but we know them with sufficient precision, i.e., we have a good handle on their changes.

And frankly, your whole argument is a red herring. We hardly know the total volume of the oceans with stunning accuracy — but we can measure *changes* to that volume (via changing sea level) with impressive precision. Likewise we know, from basic physics, that if we add water to the oceans the volume will increase, if we remove it then its volume will decrease, a result which would be undeniable even if our absolute knowledge of its volume were in error by a factor of 2 (or more).

Yours is just a fancy version of “there’s a lot we don’t know so we can’t know anything.” Which illustrates that what you’ve accused climate scientists of — inherent bias which prevents real knowledge because of wearing blinders — is what you yourself are guilty of.]

Ray Bradbury : I would be glad to read any reference where numerical computations have been initialized by detailed physical measurements made at the beginning of the century, except of course the normalization of anomalies on the same baseline as I said, but that’s not physics.

Tamino, I’m not advocating that we don’t know anything and that it is useless to try to simulate natural systems. I’m only saying that if one claims to have a given accuracy, he has to prove it. So if climate scientists claim that they are certain that the natural variability cannot exceed X °C during T years, they have to seriously substantiate this claim. And running a computer code and saying “I don’t see it in my code” is – in my opinion – *not* a reliable argument; there are plenty of examples where numerical simulations are unable to reproduce observed data. Already the uncertainty on climate sensitivity proves that the laws of physics are not enough to simulate a system perfectly – so I don’t see how you can claim that there is no doubt that GCMs simulate the natural variability “well enough.” Again, it may be true – I just need evidence for that.

[Response: Again you miss the point. Climate computer simulations aren’t needed to establish limits for natural variability. Paleoclimate does that. The claim that evidence for dangerous man-made global warming depends on computer models is total bull.

The real usefulness of climate models is for forecasting the future progress of global warming. Certainly they’re imperfect, and I expect them to get some things very wrong — and others very right. But they’re the best tool we have for prognostication.

Considering that the expectation of very dangerous warming follows from basic physical principles and multiple lines of evidence — most not involving computer models — I’d say we’d be fools to ignore (and ideologues to denigrate) the best forecasting tool we’ve got.]

okatiniko,
Climate models provide some of the most stringent constraints on the high side of climate sensitivity–without them the 90% CL goes up to about 5.5-6 degrees per doubling. That hurts the case for complacency badly.

So make up your mind: Either the models are pretty good, OR we need to act immediately.

“Again you miss the point. Climate computer simulations aren’t needed to establish limits for natural variability. Paleoclimate does that. The claim that evidence for dangerous man-made global warming depends on computer models is total bull.”
I think you missed my points: when you compare paleoclimates with “dangerous man-made” global warming, you need to be sure of the amount of man-made warming, and therefore to have good confidence in climate models.

[edit]

[Response: Total bull, and downright dishonest — which is offensive.

You don’t need computer models to estimate the amount of man-made warming precisely enough to know that it’s dangerous. If you want to dispute the estimate, that’s one thing — but for you to claim that it can’t be made without computer models is just a lie.

And again with the “you have to be sure of the amount” bullshit. You only have to restrict it to a range within which we can be sure the consequences are dangerous. And that has been done. Again, you’re just dressing up the “we don’t know enough” argument, and considering that you *tried* to disown that, you’re being dishonest again.]

Really, Tamino, we don’t need computer simulations to know what amount of fossil fuel is dangerous for mankind?
I’m surprised, but it’s good news: so what is the answer, if you know it?
And why did we spend money to fund climate scientists, since the answer is known, and was known before computer simulations?

[Response: What amount of cigarette smoking is dangerous? Why do we continue to spend money to fund lung cancer research, since the answer is known?

That pretty well sums up how sensible you are. You do not discuss, you babble.]

First, we have several independent derivations of the temperature response to a given change in forcing, based on observations, not modeling. Not least, the observed changes in the paleorecord, as a function of solar forcing. These are straightforward – forcing changes by x, temperature changes by y +/- z. They are all in good general agreement.

Second, using only the line-by-line codes, we can derive a delta-forcing for a given delta-CO2. The line by line codes are very mature and tested – they are used for missile tracking, tracking pollutant plumes, and on and on – all cases where the results derived by observations of radiation analyzed by referring to the line-by-line codes can be tested against other empirical testing, and they do very, very well.

Given JUST THOSE TWO facts, we can derive a CO2 sensitivity, and therefore look at anthropogenic contributions to warming. No climate models required.

You really should learn some of the science, before you embarrass yourself ranting about it.

Tamino: I’m not an epidemiologist, but I assume we know pretty well quantitatively the risk caused by smoking; otherwise how would we know that smoking is dangerous? And research is done more to cure the cancers than to ascertain the risk, I presume. Now the initial question was: what was the possible influence of spontaneous variations in the observed warming? I still didn’t get a quantitative answer, such as “it can’t be more than xxx °C during yyy years”: if there is nothing like that, how can one be so affirmative?

Lee : “These are straightforward – forcing changes by x, temperature changes by y +/- z. They are all in good general agreement.”
Do you really think that we have precise measurements of forcing changes, and temperature changes, millions of years ago? Waooow, so why don’t we measure them currently with the same (or conceivably much better) accuracy? This would for sure fix the factor-of-3 uncertainty published by the IPCC!
BTW, I see no precise reason why the ratio of temperature changes to forcing changes would be constant. Do you know its value between boreal and austral summer? It is actually negative (the Earth is warmer on average during boreal summer, although it is farther away from the sun), so there is certainly nothing like a universal, constant sensitivity.

“Second, using only the line-by-line codes, we can derive a delta-forcing for a given delta-CO2”

You mean, without feedbacks, I presume? But feedbacks do a lot in the story!

I believe I did give some basic numbers. From paleoclimate data, there is no real possibility of spontaneous variability greater than 0.2 K, and little probability that it is greater than 0.1 K. If you disagree, feel free to show us the data.

okatiniko… I don’t believe you’re understanding the cigarette analogy… at all. YOU are saying it’s required to know exactly how many cigarettes per day will cause cancer (exact climate sensitivity to CO2 perturbation) which is a tactic common to disinformers from both the tobacco industry and the climate change denier crowd.

If we know cigarettes cause cancer we don’t need to know that 18 cigarettes a day doesn’t but 19 a day does. The proper response is to just stop smoking because we know that it can cause cancer.

How much certainty do you need to convince you to stop smoking? Or, say, insure your home or car? We act based on small chances of a negative outcome every day of our lives. But people like you are telling us that we MUST have high degrees of certainty (usually levels of certainty that are unattainable) in order to take action… which is fundamentally an argument to NOT take action.

In other words, you’re telling us that since we don’t know if it’s 18 or 19 cigarettes a day that will cause cancer then we can continue to smoke 2 packs a day.

Do you know its value between boreal and austral summer ? it is actually negative (the Earth is warmer on average during boreal summer , although it is farther away from the sun) , so there is certainly nothing like a universal, constant sensitivity.

“Another problem with climate models is that they rely on a theory, GHG effect, that does not exist. There are other explanations for Earth’s average surface temperature being ‘too high’ without need of a theory that violates the laws of thermodynamics.”

Chris : I know perfectly well that the sensitivity is computed over a many-year average and that intra-annual variations are irrelevant, but the fact that temperature can increase although flux is decreasing shows that the average temperature is not a single-valued function of the incoming flux, because it depends also on other factors, such as the spatial distribution of forcings, of albedo, and temporal variations throughout the year, which have all probably changed since paleoclimate. So “the” sensitivity ∆T/∆F is by no means a definite, precise value characterizing the Earth and valid for any kind of forcing irrespective of its localization. Do you agree with that?

The flux is not decreasing. Please google “why do we have seasons?” Really, I strongly suggest that you pick up an intro meteorology textbook. I’m not trying to be mean or arrogant or what have you, but your questions display a very poor knowledge set.

Of course I agree that sensitivity varies by location (e.g. Arctic amplification) and that local energy budgets require non-radiative terms like advection into an area, moisture terms, etc. I also agree that the response to different forcings of the same magnitude can be different, although this is probably of second-order importance (see Hansen’s efficacy papers). None of this is very surprising to the community, but there are certainly questions left in this area, like the regional response to global warming. I think we need to be on the same page with some basics though.

I realize that there may be some misunderstanding in the discussion. If your point is that if the sensitivity is, for instance, 3 °C per doubling and if the CO2 reaches 1000 ppm, there is no way that spontaneous variability could prevent a warming of 6 °C, because it is excluded that it reaches a few degrees over one century, I fully agree with you of course. There is no indication that climate can vary so much in such a short time without a change of forcings. My point was only that it is not excluded, on the other hand, that it represents a sizeable fraction of a 0.5 °C change in 30 years – I don’t see where measurements exclude it. So it is not excluded that it contributes significantly to the uncertainty on the current sensitivity, therefore lowering for instance the “real” sensitivity estimate. That’s why I think that spontaneous variability is an important piece of the puzzle that cannot be easily dismissed.

I could be bothered to correct your poor maths and worse science if it weren’t for your ad hominem attacks.
When you spend so much time attacking the messenger and not the message, you clearly don’t have much of an argument. You’re on the wrong side of history, belong to the wrong tribe, think tribally, and choose your views on that basis, not on science.
Your blog is peppered with nastiness, and with some seriously derivative thinking.
I intend to follow up on this deviously incorrect “attractors are limited so can be ignored” argument, do keep reading my blog.
Oh, and while I may need to get a sub-editor, you need to get some manners.

His list of published papers includes a couple of symposium presentations on data mining of XML representations of trees, locating similar documents in large datastores in O(log(n)) time, and a paper on the effectiveness of marketing tools used by universities in the UK.

And, of course, in his spare time he’s overturned most of climate science.

I just reread Tamino’s post, just to be sure. There isn’t a single ad hom in it – not one. He only actually mentions you – as opposed to your article and its content – in the first and last paragraphs. In the first, he simply names you, refers to THE ARTICLE as an example of mathturbation, previews the problems in your argument – and then launches into a technical analysis of the issues with your article.

Not ad hom – analysis.

The last paragraph is an insult. A clever and awfully innocuous one. But it isn’t an ad hom – his analysis attacks your argument, not you.

If you choose to get this butthurt at this kind of response, I’m not sure how you survived grad school. No one ever went after your ideas and analysis in a lab meeting, or a graduate seminar? Really? Maybe they thought you weren’t worth it.

… but the fact that temperature can increase although flux is decreasing shows that the average temperature is not a single valued function of the incoming flux.

Don’t equate radiative forcing with incoming flux or even a change in incoming flux.

For a definition of radiative forcing, please see:

The definition of RF from the TAR and earlier IPCC assessment reports is retained. Ramaswamy et al. (2001) define it as ‘the change in net (down minus up) irradiance (solar plus longwave; in W m^–2) at the tropopause after allowing for stratospheric temperatures to readjust to radiative equilibrium, but with surface and tropospheric temperatures and state held fixed at the unperturbed values’.

The idea here is that increased solar radiance or increases in CO2 concentration affect the balance of radiation entering/leaving the climate system — and will result in a response at the “top of the atmosphere”, or TOA, which is typically taken to be at the tropopause separating the troposphere and the stratosphere. Feedbacks are in response to this change.

As for a definition of climate sensitivity, please see:

The long-term change in surface air temperature following a doubling of carbon dioxide (referred to as the climate sensitivity) is generally used as a benchmark to compare models.

The real importance of time to climate sensitivity isn’t that it has to be computed as some sort of average over time but that feedbacks that eventually achieve equilibrium and go into climate sensitivity take time. The climate’s response to a forcing isn’t instantaneous. And importantly, some feedbacks are faster than others, and some feedbacks that we thought were slow aren’t as slow as we thought.
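The point that the response to a forcing takes time can be made concrete with a minimal one-box sketch. All parameter values below are illustrative assumptions, not fitted numbers, and the single time constant is a stand-in for the mix of fast and slow feedbacks just described.

```python
import math

# A one-box energy-balance sketch: C dT/dt = F - T/lam relaxes toward the
# equilibrium dT_eq = lam * F with time constant tau = C * lam.
# Parameter values are illustrative assumptions only.

def response(t, F=3.7, lam=0.8, C=8.0):
    """Temperature anomaly (K) at time t (years) after a step forcing F."""
    tau = C * lam                       # relaxation time, years
    return lam * F * (1.0 - math.exp(-t / tau))

dT_eq = 0.8 * 3.7                       # equilibrium response, ~2.97 K
frac_realized = response(6.4) / dT_eq   # after one time constant
print(round(frac_realized, 3))          # ~0.632
```

After one time constant only about 63% of the eventual warming has appeared, which is why the equilibrium sensitivity cannot be read directly off a short instrumental record.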

The equilibrium value in the above definition of climate sensitivity is for the Charney Climate Sensitivity that takes into account the fast feedbacks, e.g., water vapor, clouds, sea ice, ice shelves, but it omits the so-called slow feedbacks associated with changes in vegetation, feedbacks due to the carbon cycle, and ice sheets — the latter of which are land-based.

Trouble is, though, that whether it is the melting permafrost forming methane-bubbling thermokarst lakes as far as the eye can see in part of Siberia and Northern Canada, methane bubbling up from shallow methane hydrates along the north continental shelves of Siberia at a rate equal to the rest of the Earth’s oceans, or the saturation of parts of the ocean’s carbon sink, some of these “slow feedbacks” don’t seem so slow any more. So people are beginning to pay more attention to Earth System Sensitivity that incorporates the slow feedbacks.

…it depends also on other factors, such as spatial distribution of forcings, of albedo, and temporal variations throughout the year, which have all probably changed since paleoclimate. So “the” sensitivity ∆T/∆F is by no means a definite, precise value characterizing the Earth and valid for any kind of forcing irrespective of its localization.

Even if one sets aside your repetitive use of the term “precise value” you are still arguing against a strawman. When climate sensitivity is defined, it is defined in terms of a doubling of carbon dioxide. By definition it has an “efficacy” equal to 1. Other forcings have different efficacies where the long-term, equilibrium change in temperature will be the climate sensitivity of the climate system times the forcing times the efficacy.

For a definition of “efficacy” please see:

Efficacy (E) is defined as the ratio of the climate sensitivity parameter for a given forcing agent (λi) to the climate sensitivity parameter for CO2 changes, that is, Ei = λi / λCO2 (Joshi et al., 2003; Hansen and Nazarenko, 2004). Efficacy can then be used to define an effective RF (= Ei RFi) (Joshi et al., 2003; Hansen et al., 2005). For the effective RF, the climate sensitivity parameter is independent of the mechanism, so comparing this forcing is equivalent to comparing the equilibrium global mean surface temperature change.

Efficacies exist because different forcings act on the climate system in different ways:

The efficacy primarily depends on the spatial structure of the forcings and the way they project onto the various different feedback mechanisms (Boer and Yu, 2003b). Therefore, different patterns of RF and any nonlinearities in the forcing response relationship affects the efficacy (Boer and Yu, 2003b; Joshi et al., 2003; Hansen et al., 2005; Stuber et al., 2005; Sokolov, 2006). Many of the studies presented in Figure 2.19 find that both the geographical and vertical distribution of the forcing can have the most significant effect on efficacy (in particular see Boer and Yu, 2003b; Joshi et al., 2003; Stuber et al., 2005; Sokolov, 2006)…
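The bookkeeping in the quoted definitions reduces to a few lines. The numbers below are made up purely for illustration – they are not assessed values for any forcing agent.

```python
# E_i = lam_i / lam_CO2; effective forcing = E_i * RF_i; and the
# equilibrium temperature change is lam_CO2 * E_i * RF_i.
# All numeric values here are illustrative assumptions.
lam_co2 = 0.8                     # K per (W/m^2), illustrative
forcings = {                      # agent: (RF in W/m^2, efficacy E_i)
    "CO2":          (3.7, 1.0),   # efficacy 1 by definition
    "black carbon": (0.3, 0.7),
    "solar":        (0.2, 0.9),
}
effective_rf = {k: e * rf for k, (rf, e) in forcings.items()}
dT_eq = sum(lam_co2 * erf for erf in effective_rf.values())
print(round(dT_eq, 3))
```

Once each agent’s forcing is converted to an effective forcing, a single sensitivity parameter suffices – which is exactly the role efficacy plays in the quoted passage.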

In any case, calculations performed by climate models do not involve the concepts of forcing, climate sensitivity or efficacy. The calculations of climate models are based upon the physics. Analysis in terms of forcings, climate sensitivity and efficacy only come afterward — as a means of conceptualizing the results for the ease of our understanding. When you argue as if climatologists believe that climate sensitivity is a single number that doesn’t change over time as the result of the motion of the continents or other factors, that it doesn’t take into account the differences in the radiative forcings, or as if it means anything in terms of the validity of the climate models themselves, you are arguing against a strawman.

Chris, I know perfectly well why we have seasons! The *local* flux per unit area is increased in summer of course – but the flux *averaged* over the Earth is decreasing during boreal summer, because the Earth is close to aphelion and the distance to the Sun has increased, by around 3 % with respect to perihelion, making a 6 % difference in average flux. A 6 % decrease is a huge one, more than 10 W/m2, and yet the Earth is warmer on average, because the impact of warmer lands in the Northern hemisphere overwhelms the cooling of the Southern one. So a global “sensitivity” ∆T/∆F is meaningless, or it would be negative. You cannot define a single-valued “derivative” for a function of several variables – there is nothing like that in mathematics. And given the changes in spatial distributions and time dependence of insolation since paleoclimatic times, I don’t really see the relevance of this notion.
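For what it’s worth, the orbital arithmetic in this comment does check out – the sketch below verifies only the numbers (with rounded inputs), not the inference drawn from them.

```python
# Check of the eccentricity figures: ~3% distance swing, ~6-7% flux swing,
# and a bit over 10 W/m^2 one-sided change in global-mean insolation.
e = 0.0167                        # Earth's orbital eccentricity
S0 = 1361.0                       # solar "constant" at mean distance, W/m^2

f_peri = S0 / (1 - e) ** 2        # flux at perihelion
f_aph = S0 / (1 + e) ** 2         # flux at aphelion
flux_swing = (f_peri - f_aph) / S0          # ~6.7% peak-to-peak
global_swing = (f_peri - f_aph) / 4 / 2     # one-sided, spread over sphere
print(round(100 * 2 * e, 1), round(100 * flux_swing, 1), round(global_swing, 1))
```

Because flux goes as 1/r², the ~3.3% peak-to-peak distance variation roughly doubles to a ~6.7% flux variation; dividing by 4 (sphere vs disk) and by 2 (one-sided) gives a swing of about ±11 W/m² around the mean.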

Timothy :
“That’s part of it, but only a small part of it. The larger part is not with it being some sort of “average” over time but with climate sensitivity being an equilibrium concept. See below.”

This doesn’t change the issue: even the equilibrium value is sensitive to other factors than the global forcing.
”
For a definition of radiative forcing, please see:
The definition of RF from the TAR and earlier IPCC assessment reports is retained. Ramaswamy et al. (2001) define it as ‘the change in net (down minus up) irradiance (solar plus longwave; in W m^–2) at the tropopause after allowing for stratospheric temperatures to readjust to radiative equilibrium, but with surface and tropospheric temperatures and state held fixed at the unperturbed values’.”

This definition has a big problem: it doesn’t define a state function in the thermodynamic sense, because it is defined by a differential quantity that isn’t an *exact* differential (much like work and heat in thermodynamics). For a given “trajectory”, the reference state varies all along the trajectory, so the “unperturbed value” is not defined for a non-infinitesimal change. So you can’t really compute a “total forcing” which wouldn’t depend on the path between two states.
”
The idea here is that increased solar radiance”

So solar radiance is indeed considered as a “forcing”, and as I said, solar radiance decreases in boreal summer whereas averaged temperature increases.

”
Even if one sets aside your repetitive use of the term “precise value” you are still arguing against a strawman. When climate sensitivity is defined, it is defined in terms of a doubling of carbon dioxide. By definition it has an “efficacy” equal to 1. Other forcings have different efficacies where the long-term, equilibrium change in temperature will be the climate sensitivity of the climate system times the forcing times the efficacy.”

But there is nothing like THE climate sensitivity, independently of anything else. Even a doubling of CO2 has no reason to provoke the same change in temperatures for different spatio-temporal distributions of energy input – this is absolutely not an “absolute” quantity like the Earth’s density or anything like that. “Efficacy” is just another way to recognize that – and you cannot measure independently the “efficacy” and the “sensitivity” for paleoclimates.

“The calculations of climate models are based upon the physics. Analysis in terms of forcings, climate sensitivity and efficacy only come afterward — as a means of conceptualizing the results for the ease of our understanding. When you argue as if climatologists believe that climate sensitivity is a single number that doesn’t change over time as the result of the motion of the continents or other factors, that it doesn’t take into account the differences in the radiative forcings, or as if it means anything in terms of the validity of the climate models themselves, you are arguing against a strawman.”

then if you recognize that, you recognize also that any determination of “sensitivity” millions of years ago is fairly useless to say anything about the current “sensitivity”.

Okatiniko,
This is such horsecrap. Dude, most of the folks here are actual scientists. We aren’t going to be taken in by your technobabble. Many of us have done scientific modeling. It is very clear you have not.

… speaks of efficacy and effective radiative forcing and distinguishes the latter from radiative forcing. This is why I quoted them, and I presume this is why you chose to ignore the passages I quoted from. Specifically regarding your “other factors than the global forcing,” I quoted what IPCC AR4 WG-1 states:

The efficacy primarily depends on the spatial structure of the forcings and the way they project onto the various different feedback mechanisms (Boer and Yu, 2003b). Therefore, different patterns of RF and any nonlinearities in the forcing response relationship affects the efficacy (Boer and Yu, 2003b; Joshi et al., 2003; Hansen et al., 2005; Stuber et al., 2005; Sokolov, 2006).

I quoted the definition for radiative forcing from IPCC AR4 WG-1 2.2 Concept of Radiative Forcing:

“the change in net (down minus up) irradiance (solar plus longwave; in W m^–2) at the tropopause after allowing for stratospheric temperatures to readjust to radiative equilibrium, but with surface and tropospheric temperatures and state held fixed at the unperturbed values”.

This definition has a big problem: it doesn’t define a state function in the thermodynamic sense, because it is defined by a differential quantity that isn’t an *exact* differential (much like work and heat in thermodynamics). For a given “trajectory”, the reference state varies all along the trajectory, so the “unperturbed value” is not defined for a non-infinitesimal change. So you can’t really compute a “total forcing” which wouldn’t depend on the path between two states.

In any case, calculations performed by climate models do not involve the concepts of forcing, climate sensitivity or efficacy. The calculations of climate models are based upon the physics. Analysis in terms of forcings, climate sensitivity and efficacy only come afterward — as a means of conceptualizing the results for the ease of our understanding.

Radiative forcing, climate sensitivity, efficacy. These concepts and the calculations that make use of them aren’t meant to be a substitute for the actual physics.

So solar radiance is indeed considered as a “forcing”, and as I said, solar radiance decreases in boreal summer whereas averaged temperature increases.

When I use the phrase “increased solar radiance” this refers to a change in solar radiance, and from the definition that I quoted, forcing refers to “the change in net (down minus up) irradiance (solar plus longwave; in W m^–2) at the tropopause…” If you let the climate system equilibrate (which would take decades) under a boreal summer (that in reality is only three months), what would be relevant is where I quoted:

The efficacy primarily depends on the spatial structure of the forcings and the way they project onto the various different feedback mechanisms (Boer and Yu, 2003b).

but there is nothing like THE climate sensitivity, independently of anything else. Even a doubling of CO2 has no reason to provoke the same change in temperatures for different spatio temporal distribution of energy input

When you argue as if climatologists believe that climate sensitivity is a single number that doesn’t change over time as the result of the motion of the continents or other factors, that it doesn’t take into account the differences in the radiative forcings, or as if it means anything in terms of the validity of the climate models themselves, you are arguing against a strawman.

this is absolutely not an “absolute” quantity like the Earth density or anything like that.

I have repeatedly pointed out that climatologists recognize the fact that climate sensitivity to a forcing is dependent upon other factors. So why do you continue to flog a strawman that was never alive in the first place?

The calculations of climate models are based upon the physics. Analysis in terms of forcings, climate sensitivity and efficacy only come afterward – as a means of conceptualizing the results for the ease of our understanding.

When you argue as if climatologists believe that climate sensitivity is a single number that doesn’t change over time as the result of the motion of the continents or other factors, that it doesn’t take into account the differences in the radiative forcings, or as if it means anything in terms of the validity of the climate models themselves, you are arguing against a strawman.

then if you recognize that, you recognize also that any determination of “sensitivity” millions of years ago is fairly useless to say anything about the current “sensitivity”.

Is it?

We then compared model calculations against an independent proxy data set for atmospheric CO2 over the Phanerozoic [the past 420 million years] (Fig. 1). The best fit between the standard version of the model and proxies occurs for ∆T(2x) = 2.8 °C (blue curve in Fig. 2a), which parallels the most probable values suggested by climate models (2.3–3.0 °C)

Of course, if you are interested in a period that would be most comparable to our current circumstances you probably won’t want to look back 251 million years ago. At the time there was only a single continent, Pangaea. It was in the process of breaking up, giving rise to the supervolcano in Siberia. This elevated carbon dioxide levels and temperatures, resulting in the Permian-Triassic extinction that nearly wiped out all multicellular life on Earth.

That is part of the Phanerozoic, but a much better parallel would be the Early Eocene during the formation of ocean floor by means of what was essentially an undersea supervolcano. 55 million years ago this pumped carbon dioxide into the atmosphere, resulting in the Paleocene-Eocene Thermal Maximum and mass extinction. At that point in the Earth’s history the continents were roughly where they are now. But that may be a better parallel for later in this century, assuming we continue along our present path towards 1000 ppm.

Looking at just where we are now with current carbon dioxide levels:

The research, published in Csank et al. 2011, uses two independent methods to measure Arctic temperature during the Pliocene, on Ellesmere Island. They find that Arctic temperatures were 11 to 16°C warmer (Csank 2011). This is consistent with other independent estimates of Arctic temperature at the time. Global temperatures over this period are estimated to be 3 to 4°C warmer than pre-industrial temperatures. Sea levels were around 25 metres higher than current sea level (Dwyer 2008).

In the preceding I have pointed out how you continue to flog a strawman that was never alive in the first place. I have pointed out how, when you respond to someone, you repeatedly ignore whatever they have stated that you find inconvenient. Frankly your tactics remind me of the Young Earth Creationists I argued with years ago. They would engage in quote mining, argue against strawmen and somehow continue to make the same mistakes over and over again, sometimes recycling the same argument only a few days after the previous time it had been debunked.

I have had patience, enough patience to write eighty page papers that critique The Critique of Pure Reason and that critically analyze early twentieth century empiricism. I had enough patience to argue with Young Earth Creationists for weeks at a time. But beyond a certain point, arguing with them was pointless given the nature of their motivation, something that I analyzed at some length in an essay for the British Centre for Science Education. I suspect I am dealing with something similar here.

I am getting older. I haven’t the time or the patience to argue like that any more, to deal at length with those who keep bringing back debunked arguments or ignore points made by others that they find inconvenient, and both repeatedly and deliberately misrepresent the science.

Timothy and Barton, I’m honored that you spent so much time to answer me, but that’s mainly wasted time. I never attacked any “strawman”, and particularly not IPCC. I just said that there is nothing like a single sensitivity, and you seemed to agree, so what is your point? And concerning the comparison with creationists: in my opinion, creationists defend opinions without real scientific support – that life could have evolved with mysterious influences different from normal physical laws. It’s unscientific because it contradicts the foundation of science: that physical laws are universal and apply to any system. But what is at stake here is the accurate determination of spontaneous variability; it is just a matter of the amplitude of well-known and well-recognized features of chaotic systems. There is nothing like a negation of physical laws, and I don’t understand your comparison.

okatiniko, your theory of intelligent climate does truly amaze. One really wonders how the climate system knows if a watt comes from IR or UV.

Dude, chaotic systems can have well behaved average behavior, and that average behavior can exhibit trends. Do you eschew investing in stocks merely because no one can with certainty say where a given stock will trade on any single day?
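That point can be demonstrated with the post’s own example, the logistic map: a tiny perturbation destroys point prediction, yet the long-run statistics barely notice. (This is a sketch of the general principle, not of any specific climate calculation.)

```python
# The logistic map x -> r x (1 - x) at r = 4 is chaotic: nearby trajectories
# diverge exponentially. But its long-run mean is stable near 0.5 -- the
# "climate" of the map is predictable even though its "weather" is not.

def trajectory(x0, n=100_000, r=4.0):
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-10)       # almost identical starting condition

# Point prediction fails: the 1e-10 difference has blown up after ~40 steps.
print(abs(a[100] - b[100]) > abs(a[0] - b[0]))
# But the statistics are robust: both long-run means sit near 0.5.
print(round(sum(a) / len(a), 2), round(sum(b) / len(b), 2))
```

The same asymmetry is the weather/climate distinction in miniature: trajectories are unpredictable, distributions are not.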

Ray, “intelligent climate” is a perfect strawman. The fact that the Earth is warmer in boreal summer whereas the input flux has significantly decreased is an objective fact. But it has nothing to do with any “intelligence”, except that you need to understand simple physical laws: this is only due to the fact that the input power determines the effective temperature, related to the average of T^4, and not the averaged temperature (average of T). And there is nothing preventing the former from changing without the latter changing, or inversely – it is even possible that both vary in opposite directions, as the example of boreal vs austral summer shows. It is enough to change the spatial distribution of forcings or the transport of heat through oceanic circulation, for instance. Now it is another fact that the spontaneous variability of chaotic systems, even if bounded, is very difficult to determine and to model, because it depends very sensitively on non-linear phenomena that are very difficult to catch. Explaining the amplitude and frequency of the 11-year solar cycle or describing an ENSO event is far beyond our capabilities, for instance. So as I said, the exact amount of natural variability in the recent warming is far from being settled.

[Response: Warmer temperature in boreal summer has nothing to do with any T^4 “effective temperature.” It has to do with the vastly greater thermal inertia of the oceans and the dominance of ocean over land in the southern hemisphere. Hence whereas temperature is greater in northern summer, total heat content is greater during southern summer — it follows forcing just as it should, and even a simple zero-dimensional energy-balance model will mimic both the greater heat content and lesser temperature during austral summer.

In fact your use of this example as though it indicates any incompleteness in our understanding of climate dynamics, energy balance, or climate sensitivity, shows how little you really understand what’s going on. Yet you’ve attempted to use it as an actual straw man by pretending that it has anything to do with the issue of climate sensitivity.

As for temperature as a metric of global warming, the average of temperature anomaly is an imperfect but still very useful measure of trends in total heat content, and it eliminates the seasonal variation in both temperature and specific heat which is really irrelevant to the global warming issue.

And please get off your high horse about how ignorant we are of chaotic climate variability, and learn something about natural climate variability before you embarrass yourself further. Global temperature paleo reconstructions show that the limits of natural (chaotic) variability are much less than what we’ve experienced in the last century — especially since much of the variation in the paleo record is attributable to known forcings (volcanic, solar, and yes greenhouse gas changes) not to inherent variability. The actual scientific evidence is that inherent variability, however chaotic it may be, is quite small.

I get the impression that “natural variability through chaos” is your excuse to pretend that global warming isn’t a monster.]

Well, I never said that there wasn’t any physical explanation! You may be right for heat content (I didn’t check), but this doesn’t change the fact that the sensitivity defined by a global change of temperature over a global change of solar flux is negative, in this case.
“Yet you’ve attempted to use it as an actual straw man by pretending that it has anything to do with the issue of climate sensitivity.”

But it obviously does, since it shows that things other than the value of the forcing control the average temperature, and your argument of thermal inertia is precisely something else to add to the forcing.

[Response: No it doesn’t. Climate sensitivity is not the *instantaneous* temperature change due to an *instantaneous* forcing change. It’s the equilibrium temperature change due to a sustained forcing change. And it necessarily requires averaging out the annual cycle. You don’t get to redefine sensitivity. The truly revealing fact is that clearly, you don’t know what it is.]

” the average of temperature anomaly is an imperfect but still very useful measure of trends in total heat content, and it eliminates the seasonal variation in both temperature and specific heat which is really irrelevant to the global warming issue.”

I didn’t state that seasonal variations were important for global warming! It just shows that other factors are important, and that these other factors have changed in the past, so measuring a “sensitivity” by comparing two different periods when other factors have changed is simply worthless.

“Global temperature paleo reconstructions show that the limits of natural (chaotic) variability are much less than what we’ve experienced in the last century”

That’s a very strong assertion that needs to be substantiated! Do you mean that paleo reconstructions have the ability to measure variations of a few tenths of a degree in the global averaged temperature at 30-year resolution? I’m really amazed, I never saw things like that, do you have a reference?

[Response: The 20th-century global warming is about 0.9 deg.C — considerably more than the paleo (last thousand years and probably two) variation, and yes it’s sufficiently precise to know that. We’ve even had 0.6 deg.C warming since 1975 — again more than in the paleo record. AND not all of the paleo variation is “natural due to chaos” — much of it is due to external forcing (volcanoes, solar variation) so the chaotic variation is even less. As for references, if you haven’t found ’em already then why are you so full of … pontification?

I think you’re just being obstinate. You simply refuse to believe, and no amount of reason or evidence will budge you from your “chaotic variation is unknown and possibly too large” trench.]

So what is the upper limit of natural variations over 30 years, without changes of forcings, if it is so well constrained?

[Response: Tell ya what. Prove to me that if I answer your question you won’t just continue to argue because you’d rather argue than learn. Then I’ll do a post on that very topic.]

He went on to say:
“Ugh. “Equivalent form” needs to match after stated substitutions. Just admit that Tamino screwed up.”

and again:
“I can only work with the evidence I am given. The evidence, to date, suggests he sucks at math.
Since you are here, I posed several questions to you in a conversation last week. Is there a reason you chose to break off the conversation?”

Oh well, what can I say? It appears he is unwilling to come to your forum to complain directly about a simple error, and instead makes a bold assumption based on that one error, or so it would seem.

“Prove to me that if I answer your question you won’t just continue to argue because you’d rather argue than learn. ”
Tamino, if you provide me with a correct, scientifically substantiated answer, I will of course accept it. Now I expect you to do an honest estimate, including:

* the fact that comparing two different measurement methods (instrumental vs. proxies) should include a proper estimate of possible systematic errors between them (based, for instance, on comparing simultaneous readings);
* the possible (probable) loss of variance in indirect proxies relative to direct measurements;
* the fact that chaotic systems may be intermittent and non-Gaussian. A Maunder minimum of solar activity, for example, would be highly improbable under Gaussian statistics fitted to the following five centuries; the current low solar cycle would likewise be highly improbable compared to previous ones. Yet they happened, and certainly not because of anthropogenic influences.

[Response: I haven’t even responded, and you’re already arguing. As I thought: you’d rather argue than learn.]

Barton: yes, I know what a variance analysis is. And you probably know that it doesn’t imply a causal link. At the least, you should properly define independent calibration and verification periods. Which ones do you choose?

“I could get the same behavior by accumulating truly random numbers in a random walk.”

Well, if graph 4 is just an accumulation of truly random numbers, it looks an awful lot like many of the graphs of temperatures from 1850 on. How do you differentiate the behavior of the temperature record from a random-walk sequence? That random walk also seems to correlate fairly well with the rise in CO2 over the same period. Does that correlation mean anything?
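As an aside (not from the original thread), the commenter’s observation is easy to reproduce: a random walk will very often show a strong but entirely spurious correlation with any steadily trending series, which is exactly why correlation alone cannot settle the question. A minimal stdlib-only sketch:

```python
# Sketch (illustrative, not the actual records): a random walk frequently
# correlates strongly with a monotonic trend, purely by accident.
import random

random.seed(42)

n = 160  # roughly the length of the instrumental record, in years
walk = []
s = 0.0
for _ in range(n):
    s += random.gauss(0.0, 1.0)  # accumulate truly random steps
    walk.append(s)

t = list(range(n))  # stand-in for a steadily rising series such as CO2


def pearson(x, y):
    """Pearson correlation coefficient, computed by hand."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5


r = pearson(walk, t)
print(f"correlation of random walk with time: {r:.2f}")
```

The practical answer to “how do you differentiate?” is physics, not correlation: a pure random walk has unbounded variance, while a forced physical system like climate does not.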

So I hear you would like me to admit my error. I will happily do so, provided you admit yours. My error was my inability to read your mind. For that, I truly apologize. Your error was withholding the fact that you defined f(x) recursively until I called you out on your process.

For what it is worth, I see how you got what you got now. Please be more clear in the future.

How is it dishonest to check your math? Surely you can see how anyone with a math background would question your result. If it weren’t for your strange recursive method, you would be wrong. Please be more clear in the future. Moreover, please point out any errors in my method, if you can find any.

[Response: When shown that your criticism was mistaken, rather than “man up” you chose to lie about how I made a mistake. That’s dishonest. As for “strange recursive method” — that’s what the logistic map is *about*, it’s a “first-return map”. You’re not honest enough to admit that because then you’d have to admit that you were just plain wrong.

And by the way, your substituting x-1/2 for x amounts to a failure to see that if new equals old minus 1/2, then old equals new PLUS 1/2. Error.]

“And by the way, your substituting x-1/2 for x amounts to a failure to see that if new equals old minus 1/2, then old equals new PLUS 1/2. Error.”

Alright, I’ll admit my error. I read your explanation wrong. You still can’t get from your first equation to your second using that (you still have a constant). So I apologize for my error.
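For reference (this reconstruction is not part of the original exchange, and it assumes the standard r = 4 logistic map from the post, since the post’s equations are not reproduced here), the substitution the Response describes works out as follows; note that a constant term does indeed remain:

```latex
% Assume the r = 4 logistic map: x_{n+1} = 4 x_n (1 - x_n).
% Define y_n = x_n - \tfrac12, so that x_n = y_n + \tfrac12.
\begin{aligned}
y_{n+1} + \tfrac12
  &= 4\left(y_n + \tfrac12\right)\left(\tfrac12 - y_n\right)
   = 4\left(\tfrac14 - y_n^2\right)
   = 1 - 4 y_n^2, \\
y_{n+1} &= \tfrac12 - 4 y_n^2 .
\end{aligned}
```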

Will you apologize for not including anything about your recursive method in your original post? Or is it too difficult to admit error in front of the sycophants?

[Response: I described the recursive method explicitly. Yet you claim that I didn’t include anything about the recursive method. That’s because you still want to blame your own error on me. How sad.]

Show me the word “recursively” or “recursive” in the original article.

[Response: Let’s see …

we start with an x value between 0 and 1, then apply the logistic map to get a new value of x; the new value will also be between 0 and 1. We’ll then apply the logistic map to that new value to get an even newer value of x, etc., repeating the process as many times as we wish.

But I didn’t use the word “recursive” — I only described exactly what it means.]
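The recursion just described is short enough to sketch directly (this example is mine, not the thread’s, and it assumes the standard r = 4 logistic map). It shows both halves of the post’s argument: trajectories starting a hair apart diverge completely, yet the long-run statistics stay put.

```python
# First-return (recursive) iteration of the r = 4 logistic map,
# as described in the quoted passage: feed each output back in as input.

def logistic(x):
    return 4.0 * x * (1.0 - x)

# 1) Sensitivity to initial conditions: two starts differing by 1e-10
#    become completely decorrelated within a few dozen iterations.
a, b = 0.3, 0.3 + 1e-10
max_gap = 0.0
for i in range(100):
    a, b = logistic(a), logistic(b)
    if i >= 50:  # look after the divergence has had time to develop
        max_gap = max(max_gap, abs(a - b))

# 2) Statistical stability: despite the chaos, the long-run mean of the
#    iterates settles near 0.5 for a generic starting value.
x = 0.3
total = 0.0
n = 100_000
for _ in range(n):
    x = logistic(x)
    total += x
mean = total / n

print(f"max gap after 50+ steps: {max_gap:.3f}, long-run mean: {mean:.3f}")
```

The divergence is the “weather” part of the analogy; the stable mean is the “climate” part.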

Now, if we have reasons from physics to think more CO2 => higher temperatures, and CO2 accounts for 76% of the variance over the period in question, and the period in question is 130 years, and it only takes 30 years to establish a climate trend, and if adding in more known-to-be-causal variables accounts for yet more of the variance… it means unknown “natural causes” or “cycles” or “chaotic variation” aren’t having much of an effect. Doesn’t it?
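For readers unfamiliar with the “fraction of variance explained” language in that comment, here is a stdlib-only sketch of how such a number is computed (the data here are synthetic, invented purely for illustration; the 76% figure in the comment comes from the actual records, not from this):

```python
# Sketch: fraction of variance explained (R^2) from an ordinary
# least-squares fit of "temperature" on a "forcing" series.
# All data below are synthetic stand-ins.
import random

random.seed(1)
n = 130  # years, matching the period length in the comment

forcing = [i / n for i in range(n)]  # stand-in for a rising CO2 forcing
temp = [0.9 * f + random.gauss(0.0, 0.15) for f in forcing]  # trend + noise

# OLS slope and intercept, computed by hand.
mf = sum(forcing) / n
mt = sum(temp) / n
slope = (sum((f - mf) * (t - mt) for f, t in zip(forcing, temp))
         / sum((f - mf) ** 2 for f in forcing))
intercept = mt - slope * mf

# R^2 = 1 - (residual sum of squares) / (total sum of squares)
ss_res = sum((t - (intercept + slope * f)) ** 2 for f, t in zip(forcing, temp))
ss_tot = sum((t - mt) ** 2 for t in temp)
r2 = 1.0 - ss_res / ss_tot
print(f"fraction of variance explained: {r2:.2f}")
```

Whatever variance the fitted forcing leaves unexplained is the budget available for “chaotic variation,” which is the accounting argument the comment is making.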

I struggle to explain this in my own field. The behavior of an individual vehicle is chaotic (as anyone who has driven can attest), but the aggregate behavior of traffic is quite predictable within certain statistical bounds.

Likewise the behavior of individual water molecules, or even small elements within a stream, is quite chaotic. But the aggregate behavior can be quite well characterized by various empirical equations.

Whoa! I have been putting together a short series of articles on complexity, chaos, and climate modeling. I do a similar analysis, using the Lorenz equations since they are the canonical example of chaos in the weather. I come to exactly the same conclusion: the bulk properties of a system can be meaningfully described even when the details are extremely sensitive to initial conditions. My article is here: http://topologicoceans.wordpress.com/2011/08/30/did-chaos-theory-kill-the-climatology-star

I was curious about the analysis of the Italian winter temperature time series, so I left a comment on the article at Dr. Edmonds’s personal blog. I asked where I could find the presented data and which publications had used the data or his analysis.

My comment is, some time later, still Lost In Moderation so I emailed him the other night. He responded, among other things, by telling me that he was unsure if he had the rights to send the data to third parties.

THIS IS A CLIMATE AUDIT! DOESN’T HE KNOW WHAT AN AUDIT IS?? HE BETTER GET ME THAT DATA BEFORE I FILL OUT AN FOIA REQUEST