
Global Warming a la Mode

The top pane shows the decimated Hadcrut3 data series we've been using. The second pane shows mode 2 of the SSA decomposition. The third pane reconstructs the series from all 35 modes, excluding mode 2. The last pane shows that if we add mode 2 back in, we can reconstruct the original series exactly (both are overlaid on the plot).
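For anyone who wants to reproduce the decomposition, here is a minimal SSA sketch in Python; the toy series and function names are stand-ins, not the actual Hadcrut3 processing. Embed the series in a trajectory matrix, take the SVD, and diagonal-average each rank-1 term back into a series. By construction the modes sum back to the original exactly, which is the additivity the last pane illustrates.

```python
import numpy as np

def ssa_decompose(x, L):
    """Basic SSA: embed x in an L-window trajectory (Hankel) matrix,
    take the SVD, and diagonal-average each rank-1 term back into a
    length-N series (one series per mode)."""
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[j:j + L] for j in range(K)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])             # rank-1 piece
        # Hankelize: average each anti-diagonal back into a series value
        modes.append(np.array([Xi[::-1].diagonal(k).mean()
                               for k in range(-L + 1, K)]))
    return np.array(modes)

# Toy stand-in for the 113-point annual series (not the real data)
x = np.cumsum(np.random.randn(113))
modes = ssa_decompose(x, L=35)            # 35 modes, as in the figure
assert np.allclose(modes.sum(axis=0), x)  # the parts sum back to the whole
```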

Focusing on pane 3, where we've removed the mode 2 contribution, we see a linear trend with a slope of ~0.5 degC per century, which has remained constant across the entire 113-year span, with no sign of acceleration due to anthropogenic effects. Thus, if there is any signature of AGW to be found, it must be found in the contribution we've excluded: mode 2 of the SSA decomposition.

Note that this conclusion is completely independent of the method of decomposition. If I claimed an angel from on high appeared suddenly and handed me the extracted mode plotted in pane 2, the conclusion above would still be scientifically valid, as long as you accept the laws of addition. This is shown in the last pane, which demonstrates that pane 3 (the reconstruction excluding mode 2) + pane 2 (mode 2) = the observed temperature record, as it must. Pane 3 shows no AGW signature; therefore, if there is one in the observed temperature record at all, it must be in mode 2.

The conclusion that any sign of AGW must appear in the mode 2 component is also free of any assumptions about the nature of mode 2 or of the underlying climate system as a whole. It depends only on the observed temperature record.

Looking at mode 2 (SSA[35,2,t], plotted as a time series in pane 2), we see a periodic, sine-like signal tilted upward. I've shown a linear regression with a slope of about 0.25 degC per century, but the slope of this line depends on where we choose to start and stop the record. In fact, it is just the average value of the instantaneous slope (the year-to-year difference) plotted below.

First difference of SSA mode 2
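As a quick toy check of that endpoint sensitivity (the series below is a made-up stand-in for mode 2, not the real extracted mode):

```python
import numpy as np

t = np.arange(1901, 2014)  # 113 annual points (assumed span)
m2 = 0.1 * np.sin(2 * np.pi * (t - 1901) / 65) + 0.0025 * (t - 1901)

ols_slope = np.polyfit(t, m2, 1)[0]   # least-squares trend, degC/yr
mean_diff = np.diff(m2).mean()        # average year-to-year change
# mean_diff telescopes to (m2[-1] - m2[0]) / (len(t) - 1), so it is set
# entirely by the endpoints; shift the start or stop year and it moves.
print(ols_slope * 100, mean_diff * 100)  # both in degC per century
```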

Note that the maximum slope, achieved in 1993, barely exceeds that of 1923. That alone disproves the AGW effect (remember: if there's AGW, it has to be seen in mode 2). But look closely at 1980. There is a suspicious step change in the slope that is undoubtedly something hinky with the dataset. If one were to look back at the history of Hadcrut3, I'd bet dollars to donuts there was an "adjustment" made to the data from 1980 onwards, probably because the data wasn't matching the GCM models.

Hinkiness aside, if somehow the climate should assume the maximum temperature slope ever seen in the past 113 years (actually the past 163 years, since the slope from 1850 to 1900 was flat to downward) and stay constant at this extreme, the most we'd see is 1.85 degC/century (0.5 degC/century from the overall trend in pane 3 plus the worst-case 1.35 degC/century above).

Of course that's not going to happen. The mode 2 slope is negative, as is the mode 1 slope (see the trend analysis in part 2), but future trends aside, the main point is this: there is absolutely no AGW signature to be found in the observed surface temperature record. There is nothing to indicate the last half of the last century warmed any faster than the first half, and if my suspicions about the dataset are correct, the maximum rate in the second half was substantially lower. This obviates the need for forecasts altogether, since there is no evidence mankind is having any effect whatsoever on the global surface temperature.

Note to SkepticalScience readers: The foregoing is not a model; it is a deconstruction. It makes no predictions, projections or divinations. It cannot be validly backcast or forecast (though it should be broadcast, widely). It makes no assumptions about forcings, debunked or otherwise, and is valid whether the extracted mode is internal, external or a fluke. It depends only on observed data and the Law of Addition.

Correcting the Record

Just for fun, I've taken the step out of the mode 2 first difference (it turns out it was between 1979 and 1980), reintegrated to create a "corrected" mode 2, and used the corrected mode 2 to reconstruct the dataset. The difference is significant: the adjustment overstates current warming by 0.12 degC.
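Roughly, in code, the correction amounts to the sketch below; the names are illustrative, and replacing the suspect jump with the average of its neighbours is just one reasonable repair rule:

```python
import numpy as np

def remove_step(mode2, years, step_year=1980):
    """Take a one-year step out of the mode-2 first difference and
    reintegrate. `mode2` is the extracted mode sampled at `years`."""
    d = np.diff(mode2)
    i = np.flatnonzero(years[1:] == step_year)[0]  # diff ending at step_year
    d[i] = 0.5 * (d[i - 1] + d[i + 1])             # smooth out the jump
    return mode2[0] + np.concatenate(([0.0], np.cumsum(d)))  # reintegrate
```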

12 thoughts on “Global Warming a la Mode”

What you have is mathematics. You decompose a signal into 35 components. You show the sum of 34 of those components. And then you show the sum of all 35. It is a mathematical truism that you can do this; therefore the fact that you can do this has, by itself, no empirical implications, and hence is not science.

For it to be science, you have to claim that one or more of the components represent actual physical signals – ie, are the direct consequence of some identifiable causal factor. From that identification, you then need to make empirical predictions to test your identification. But you refuse to do this. In fact, you explicitly state the signal must not be backcast or forecast. Thus, you eschew all possible empirical tests of your mathematical games and preclude any possibility of it being science.

If you want to try actual science, make the identifications and the predictions that follow, and test them by a backcast. Or instead, try seeing how actual predictions from best estimates of historic forcings plus ENSO stack up: http://diyclimate.x10.mx/responsemodel/nbox.html

But please do not waste our time by pretending that tautologies are science.

On the contrary, this is pure science based on nothing but empirical observation. It is the advocates of the theory of AGW that have created the hypothesis. The burden of validation is on them. No predictions are necessary, and neither are physical explanations, in order to refute the hypothesis. The analysis reveals there is no signal in the empirical data to sustain their claim. End of story.

And since the data shows no AGW during the past 100+ years, a period in which CO2 has been rising exponentially, why on earth bother with predictions based on a model of a system which defies modeling and a correlation which isn’t there?

The “advocates of AGW”, by which you mean climate scientists, created a hypothesis that temperature is a lagged response to forcings plus ENSO plus a small component of inter-annual variation. The way you test that hypothesis is by modelling forcings with lag, plus ENSO as per the 2 box model linked above. Doing so yields an r^2 of 0.925 and a very close match, suggesting the theory is sound.

There is nothing in the theory of AGW suggesting that forcings will exhibit a simple wave form. Therefore there is no basis in the science to presume that SSA decomposition, which returns simple wave forms, will uncover the forcings responsible for changes in global temperature. Indeed, it would be an extraordinary coincidence if they did. Consequently your mathematical manipulations are in no way a test of AGW. If you pretend otherwise, you are simply erecting a very fragile straw man.

SSA used correctly in this context is an exploratory tool only. It identifies possibly significant components of the temperature signal. To turn that identification into science, you have to identify the possibly significant component with a physical feature of the climate system, and make empirical predictions based on that identification. There is no way around it, and as you know everybody who has done so has found their empirical predictions fail in hindcast.

Sorry Tom, but that is just silly. That a total is composed of its parts is, as you point out, a truism, and remains so whether the parts were obtained by SSA or out of my ASS. If we then examine each part in turn and find no detectable AGW, then it is equally a truism that no detectable AGW exists in the total. That you refuse to draw this inescapable conclusion shows which of the two of us is in denial.

Jeff, as you say, it is a truism regardless of where you get the components from. I could model the temperature series as a square wave plus other components. I could then show that I can obtain the temperature series by adding the square wave back in to the other components. Does that mean the square wave represents a real physical phenomenon? In fact, I can model the temperature sequence with any curve such that for all x values it has only one y value, plus other components. Does that mean every such curve represents a real physical phenomenon? Clearly the notion is absurd.

Nor do the specific components of SSA have any privileged position in this. First, there are other equally good ways to mathematically decompose a signal, so no privileged status exists for SSA. More fundamentally, if a tautological procedure such as SSA inevitably produces the real physical phenomenon underlying any signal as its primary components (ie, the components containing the most information about the signal), then science would not be a contingent endeavour, and you would have solved the problem of induction. We don’t need falsifiability, apparently – just SSA.

The whole idea that SSA will magically produce the actual physical phenomenon underlying a signal every time without fail, and without need for empirical checking is an absurdity. It completely misunderstands the nature of science, and the relationship between mathematics and science.

Finally, regardless of what magical properties you are inclined to ascribe to SSA, it is not the theory of AGW that the first n components of SSA are the physical signals that drive temperature. Rather, the theory is that forcings (plus ENSO) drive temperature. Therefore you can only test the theory by checking to see if forcings (plus ENSO) do indeed predict temperature with reasonable accuracy.

You are arguing in circles. If you choose to represent the data as a square wave plus a residual, then by definition, in toto, they represent the data. Any conclusion drawn by examining each part in turn is applicable to the whole, whether each part represents a physical process or not. This is not an argument from superposition; it's arithmetic.
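The point is a few lines of Python (any series and any arbitrary "component" will do):

```python
import numpy as np

x = np.cumsum(np.random.randn(100))      # any series at all
part = np.sign(np.sin(np.arange(100)))   # any arbitrary "component"
residual = x - part                      # the residual absorbs the rest
assert np.allclose(part + residual, x)   # the parts always sum to the whole
```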

It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.
Richard P. Feynman

I am puzzled by your response. You seem to state my point as though it were an argument against me. To paraphrase your response:

“If you choose to represent the data as a sine wave plus a linear trend plus a residual, then by definition in toto they represent the data. Any conclusion drawn by examining each part in turn is applicable to the whole, whether each part represents a physical process or not.”

Of course, if the conclusion follows regardless of whether each part represents a physical process, no inference can be made from that conclusion to the claim that any particular part represents any particular physical process. Nor can any inference be made to the effect that any particular physical process must be represented by any particular part.

In particular, you are not entitled to assume that any particular physical process is represented by just one part, nor that any single part represents (parts of) just one single physical component. It follows that your entire argument in the first three paragraphs of this post is invalid.

Of course, if the conclusion follows regardless of whether each part represents a physical process, no inference can be made from that conclusion to the claim that any particular part represents any particular physical process. Nor can any inference be made to the effect that any particular physical process must be represented by any particular part.

The question is not whether the components represent a physical process; they must, because some physical process produced the data. The question is whether it is necessary to know what the process was. If you're trying to build a model to predict the future (impossible, by the way) or to examine what-if scenarios, then you need to figure that out. But if you're only trying to detect a signal, then that signal is in the data! I don't need to know anything about the atmospheric processes that affect the transmission of radio waves in order to detect the transmitted information at the receiver.

As to your assertion about arbitrarily assuming a square wave and examining the residual, you could do that as I stated. What I left unstated is the obvious fact that by doing so, you’ve simply moved all the information to the residual (which is now the original unaltered data), and so my assertion about conclusions stands.

In particular, you are not entitled to assume that any particular physical process is represented by just one part, nor that any single part represents (parts of) just one single physical component.

But we are entitled to assume the data represents a signal, containing the non-ergodic information we seek to detect, plus short term stochastic noise which by definition cannot contain a persistent signal of interest. The SSA algorithm is an adaptive filter that allows the separation of the two. You can examine the results yourself in pane 3 of figure 1. Do you see anything in that residual that could be attributable to a long-term increase in slope due to AGW? Me either. Is it your position that adding that residual back in would magically create the AGW signature that I’ve somehow cancelled through decomposition?

The link sends you to a page where you can twist the knobs on a climate model. We are truly through the looking glass when the claim is made that real science can only be done with models, and models cannot be falsified with real data unless and until one proffers an alternate model!

Climatologists should be barred from research until they can recite the history of Trofim Lysenko and explain why his approach was bad for science (and the world)

Thank you for the gratuitous (and ridiculous) insult. Your incomprehension does not mean climatologists follow Lysenko in either their philosophy or their practice of science. On the contrary, it shows that you would rather vilify than discuss.

On to more substantial matters:
You are confused as to the nature of, and role of models in science. A model is a means of predicting from theory. It is no different in theoretical terms than if you are given the velocity and mass of two bodies, along with Newton’s laws of motion and gravitation, and asked to calculate the further evolution of the system. Computer models are used for such calculations in climate science only because of the very large number of calculations involved in predicting the evolution of something as complex as climate; or in this case simply for convenience.

The two box model I linked to is also a means of predicting from theory. As a tool of exploration, it allows you to quickly vary the theory you are using (the control knobs) so that you can compare different theories with the observations, ie, the actual temperature record. Thus, you can set all forcing values except solar and volcanoes to 0 to test the theory that the current temperature rise is due to listed natural forcings only.

The simplest version of AGW is represented by the default settings, and therefore it is the default settings I use for comparison with reality. The forcing data is taken from the two best-known sources of such data, and hence represents the best available estimates. The time constants represent the time, at constant forcing, for the atmosphere and land surface (tau 1) and the surface ocean (tau 2) to reach equilibrium in isolation. The default settings are based on physically realistic values as determined by observation.
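For concreteness, the response calculation in a two box model of this kind looks roughly like the sketch below; the parameter names and default values here are illustrative stand-ins, not the linked model's actual settings:

```python
import numpy as np

def two_box(forcing, dt=1.0, tau1=2.0, tau2=30.0, w=0.6, k=0.5):
    """Illustrative two-box lagged response: each box relaxes
    exponentially toward k * forcing with its own time constant, and
    the modelled temperature anomaly is a weighted blend of the boxes."""
    T1 = np.zeros(len(forcing))
    T2 = np.zeros(len(forcing))
    for t in range(1, len(forcing)):
        T1[t] = T1[t - 1] + dt / tau1 * (k * forcing[t] - T1[t - 1])
        T2[t] = T2[t - 1] + dt / tau2 * (k * forcing[t] - T2[t - 1])
    return w * T1 + (1 - w) * T2

# Toy CO2-only forcing (3.7 W/m^2 per doubling); zeroing every forcing
# except solar and volcanoes, as described above, is the same exercise.
co2 = np.linspace(285.0, 395.0, 163)       # assumed ppm ramp, 1850-2012
temp = two_box(3.7 * np.log2(co2 / 285.0))
```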

Ideally the Transient Climate Response would also be independently tweakable. As it happens, this model calculates the best-fit TCR given the other parameters, which is useful for some purposes but less than ideal for testing a theory. However, I have performed similar calculations using a set TCR based on IPCC estimates (2 C per doubling of CO2) and got similar correlation coefficients.

So, to summarize, my claim is that science is done by testing the predictions of the theory against observation. The predictions of the theory, however, must be calculated, and that is what the model does on default settings. Your alternative procedure is to independently determine a curve, claim that that curve constitutes the predictions of the theory without regard to the actual content of the theory, and proceed to claim the theory falsified because the curve doesn't match well.

“You are confused as to the nature of, and role of models in science.”

I doubt it. I have run computer simulations virtually every day of my 35-year professional career and have written numerous models, including one beastly coupled non-linear system that included an oscillator, a thermal loop for gross frequency tuning, a PLL for phase stability and an acoustic loop for vibration cancellation. I know full well the uses and limitations of such systems. We spent nearly a thousand man-hours on that last one, most of it in the lab making meticulous measurements on the individual components in order to parameterize the model. And it still exhibited a critical dependency on the initial state.

[climate scientists] created a hypothesis that temperature is a lagged response to forcings plus ENSO plus a small component of inter-annual variation.

As Trenberth stated candidly, "natural variation is just another way of saying we don't know," so the hypothesis is unscientific from the outset. If model divergence is chalked up to natural variation, then the hypothesis can never be falsified. You might as well say God did it. And since the models show no skill in recreating ENSO or the other multi-decadal quasi-periodicities apparent in the data, nor apparently the thermal uptake in the deep ocean (the latest excuse for the "pause"), why on earth do we think they have predictive power?

Let's go back to basics. What can a model teach us? Or in other words, what knowledge is gained by running a simulation? Knowledge is gained from analysis of information. Where does information come from? From the resolution of contingency (if that last sentence doesn't make sense, consult any introductory text on information theory). What contingency, then, can running a simulation resolve? Its output trajectory, unknown before the run. On what does that result depend? The model's equations plus its initial state. Thus a model can only teach us about itself! Models can create contingency (i.e. what-if scenarios showing a range of possible outcomes) but can never resolve it, and therefore cannot provide information gain if the metric is information about the real world. Models can only provide a hypothesis. Information gain cannot occur unless the hypothesis is tested by comparing the model's predictions to the observed data, something the IPCC refuses to acknowledge. They claim models do not make predictions. But a model is only as good as its predictive power.

I'm all for building climate models. But unless or until they can be completely parameterized with physical constants, they are under-constrained, and therefore just glorified curve-fitting algorithms, and so useless for extrapolation. Making trillion-dollar policy decisions based on their output is beyond foolish.

We have the data. The claim is that there is an AGW signature (i.e. information) contained in the record. Prove it! As I said, this is a small-signal detection problem, not a modeling problem.