Tilburg University sacked high-profile social psychologist Diederik Stapel after he was outed as having faked data in his research. Stapel was director of the Tilburg Institute for Behavioral Economics Research, a successful researcher and fundraiser, and as a colleague expressed it, “the poster boy of Dutch social psychology.” He had more than 100 papers published, some in the flagship journals not just of psychology but of science generally (e.g., Science), and won prestigious awards for his research on social cognition and stereotyping.

Tilburg University Rector Philip Eijlander said that Stapel had admitted to using faked data, apparently after Eijlander confronted him with allegations by graduate student research assistants that his research conduct was fraudulent. The story goes that the assistants had identified evidence of data entry by Stapel via copy-and-paste.

Robert Smithson raises some interesting issues and questions. Regarding means and opportunity:

Let me speak to means and opportunity first. Attempts to more strictly regulate the conduct of scientific research are very unlikely to prevent data fakery, for the simple reason that it’s extremely easy to do in a manner that is extraordinarily difficult to detect. Many of us “fake data” on a regular basis when we run simulations. Indeed, simulating from the posterior distribution is part and parcel of Bayesian statistical inference. It would be (and probably has been) child’s play to add fake cases to one’s data by simulating from the posterior and then jittering them randomly to ensure that the false cases look like real data. Or, if you want to fake data from scratch, there is plenty of freely available code for randomly generating multivariate data with user-chosen probability distributions, means, standard deviations, and correlational structure. So, the means and opportunities are on hand for virtually all of us. They are the very same means that underpin a great deal of (honest) research. It is impossible to prevent data fraud by these means through conventional regulatory mechanisms.
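To make the point concrete, here is a minimal Python sketch of exactly this kind of fabrication. Every mean, standard deviation, and the correlation structure below is an arbitrary illustrative choice, not drawn from any real study:

```python
import numpy as np

# Draw fake "cases" from a multivariate normal with user-chosen means,
# standard deviations, and correlation structure, then jitter them so
# they look like real measurements. All values here are illustrative.
rng = np.random.default_rng(42)

means = np.array([100.0, 50.0])          # chosen means for two "variables"
sds = np.array([15.0, 10.0])             # chosen standard deviations
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])            # chosen correlation structure
cov = np.outer(sds, sds) * corr          # covariance matrix

fake = rng.multivariate_normal(means, cov, size=200)
fake += rng.normal(0.0, 0.5, size=fake.shape)   # jitter the fake cases

observed_r = np.corrcoef(fake[:, 0], fake[:, 1])[0, 1]
print(round(observed_r, 2))  # close to the chosen 0.6
```

A few lines of freely available code, and the fabricated sample passes casual inspection with whatever correlational structure the faker wanted.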

Regarding motive:

Cognitive psychologist E.J. Wagenmakers (as quoted in Andrew Gelman’s thoughtful recent post) is among the few thus far who have addressed possible motivating factors inherent in the present-day research climate. He points out that social psychology has become very competitive, and

“high-impact publications are only possible for results that are really surprising. Unfortunately, most surprising hypotheses are wrong. That is, unless you test them against data you’ve created yourself. There is a slippery slope here though; although very few researchers will go as far as to make up their own data, many will “torture the data until they confess”, and forget to mention that the results were obtained by torture….”

I would add to E.J.’s observations the following points.

First, social psychology journals exhibit a strong bias towards publishing only studies that have achieved a statistically significant result. This bias is widely believed in by researchers and their students. The obvious temptation arising from this is to ease an inconclusive finding into being conclusive by adding more “favorable” cases or making some of the unfavorable ones more favorable.

Second, the addiction in psychology to hypothesis-testing over parameter estimation amounts to an insistence that every study yield a conclusion or decision: Did the null hypothesis get rejected? The obvious remedy for this is to develop a publication climate that does not insist that each and every study be “conclusive,” but instead emphasizes the importance of a cumulative science built on multiple independent studies, careful parameter estimates and multiple tests of candidate theories. This adds an ethical and motivational rationale to calls for a shift to Bayesian methods in psychology.

Third, journal editors and reviewers routinely insist on more than one study per article. On the surface, this looks like what I’ve just asked for, a healthy insistence on independent replication. It isn’t, for two reasons. First, even if the multiple studies are replications, they aren’t independent because they come from the same authors and/or lab. Genuinely independent replicated studies would be published in separate papers written by non-overlapping sets of authors from separate labs. However, genuinely independent replication earns no kudos and therefore is uncommon.

The second reason is that journal editors don’t merely insist on study replications; they also favor studies that come up with “consistent” rather than “inconsistent” findings (i.e., privileging “successful” replications over “failed” replications). By insisting on multiple studies that reproduce the original findings, journal editors are tempting researchers into corner-cutting or outright fraud in the name of ensuring that the first study’s findings actually get replicated. E.J.’s observation that surprising hypotheses are unlikely to be supported by data goes double (squared, actually) when it comes to replication: support for a surprising hypothesis may occur once in a while, but it is unlikely to occur twice in a row. Again, remedies are obvious: Develop a publication climate which encourages or even insists on independent replication, that treats well-conducted “failed” replications identically to well-conducted “successful” ones, and which does not privilege “replications” from the same authors or lab of the original study.

Most researchers face the pressures and motivations described above, but few cheat. So personality factors may also exert an influence, along with circumstances specific to those of us who give in to the temptations of cheating. Nevertheless if we want to prevent more Stapels, we’ll get farther by changing the research culture and its motivational effects than we will by exhorting researchers to be good or lecturing them about ethical principles of which they’re already well aware. And we’ll get much farther than we would in a futile attempt to place the collection and entry of every single datum under surveillance by some Stasi-for-scientists.

JC comments: I find this case and Smithson’s comments interesting and of relevance to climate science for several reasons. The research culture and motivational factors in the field of social psychology have arguably contributed to rewarding behaviors that are not in the best interests of scientific progress, in the same way that I have argued that the IPCC and the culture of funding, journal publication, and recognition by professional societies have not always acted in the best interests of scientific progress in climate field.

I was particularly struck by the “data torturing” concept. Consider a chemistry experiment conducted in a controlled laboratory environment, where the raw data are used for the analysis, with fairly clear procedures for determining the uncertainty of the measurement. Testing hypotheses using climate data is much more challenging from the perspective of the actual data. In climate science, uncertainty associated with observations can arise from systematic and random instrumental errors, inherent randomness, and errors in analysis of the space-time variations associated with inadequate sampling. Applications of climate data in hypothesis testing may require either the elimination of a trend or of high-frequency “noise.” Hence for any substantive application, climate data needs to be “tortured” in some way, in the sense of applying some sort of manipulation to the data and making some assumptions. The problem occurs when the data is “tortured” to produce a desired “confession.” Seemingly objective manipulations of the data can inadvertently produce “confessions” beyond what is objectively contained in the original data set.

Documented manipulations of the data can be reproduced if the data and metadata are available, and sufficient information (preferably code) is provided so that independent groups can evaluate the objectivity and technical implementation of the method used in the analysis. It is because of complexity of the climate system and the inherent inadequacy of any measurement system that complex data manipulation methods are used. It is essential that we better understand the limitations of the methods and how to assess the uncertainty that they introduce into the analysis.

205 responses to “On torturing data”

No problem, I have heard that stated before. If you filter data you are throwing away information. But there are certain obvious things you can do. For example, if you know that there is likely a seasonal influence with a period of one year and you have monthly data, then it is okay to apply a 12-month sample mean.
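A minimal sketch of that 12-month averaging, on synthetic monthly data with a built-in annual cycle (the trend and amplitude are arbitrary):

```python
import numpy as np

# Illustrative: monthly data = linear trend + 12-month seasonal cycle.
months = np.arange(240)                      # 20 years, monthly
series = 0.01 * months + 2.0 * np.sin(2 * np.pi * months / 12)

# A 12-month boxcar mean cancels the seasonal cycle exactly,
# because a sinusoid sums to zero over one full period.
smoothed = np.convolve(series, np.ones(12) / 12.0, mode='valid')

# What remains is the (window-averaged) trend alone.
expected_trend = 0.01 * (np.arange(len(smoothed)) + 5.5)
print(np.allclose(smoothed, expected_trend))  # True
```

The point is that this particular filter throws away only the component you have independent reason to expect, which is what makes it a defensible manipulation rather than torture.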

So you notice that the derivative of CO2 concentration with respect to time seems to match the global temperature variations. The next obvious thing to do is to run a cross-correlation to check for obvious lags, and you don’t see any. Then try a Proportional model on the actual CO2 and see how much the variance is reduced by running a Proportional-Derivative model.
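A rough sketch of that lag check; the two series here are synthetic stand-ins constructed to be correlated at zero lag, not the actual d[CO2]/dt and temperature records:

```python
import numpy as np

# Scan Pearson correlation over a range of lags and find the peak.
rng = np.random.default_rng(0)
n = 50                                          # e.g. 50 years of annual data
temperature = rng.normal(size=n)
dco2 = temperature + 0.3 * rng.normal(size=n)   # correlated at zero lag by construction

def lagged_corr(x, y, lag):
    """Pearson correlation of x[t] with y[t + lag]."""
    if lag > 0:
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]
    if lag < 0:
        return np.corrcoef(x[-lag:], y[:lag])[0, 1]
    return np.corrcoef(x, y)[0, 1]

lags = list(range(-5, 6))
corrs = [lagged_corr(temperature, dco2, k) for k in lags]
best = lags[int(np.argmax(corrs))]
print(best)  # the correlation peak sits at zero lag, as constructed
```

On real records the interesting question is whether the peak sits at zero or at some physically meaningful delay.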

Then you wonder what the likelihood is of having a zero-lag, high correlation agreement between temperature and a substance theorized to have an effect on global temperature. Seems small, likely less than 1:25 compared to completely randomized data.

You next wonder if the fluctuating carbon fossil fuel emissions somehow relate to the fluctuating CO2 measurements.
Not quite as striking, but there is still zero lag between the two time series. I also did a long-term trend of FF emissions against CO2 and found agreement there by applying a convolution of historical FF data prior to 1900 with a fat-tailed impulse response function: http://mobjectivist.blogspot.com/2010/05/how-shock-model-analysis-relates-to-co2.html

So next you wonder what the likelihood is of finding agreement of a time-series correlation between CO2 and Temperature, and of a time-series correlation between FF Carbon emissions and CO2, over the last 50 years. Conservatively I would think it would be 1:25 × 1:25, or 1:625, for the agreement to be just coincidental. And I also wonder, if it is indeed an agreement, how dominating is the effect?

I am just as curious as everyone else as to what this all means, but I am not going to wait for a statistician to come around and do it for me. Instead, I retrieved the data and did the signal processing myself, and hope for one of the stat auditors to emerge and tell me what I did wrong. Maybe it is the premature normalization, maybe I shouldn’t have averaged over 12 months? Would that have made any difference in the final interpretation?

You haven’t really explained the cross correlation of atmospheric CO2 with anthropogenic emissions. Monthly values should really be used here given the usual response times of the systems – and it seems annual values were used? So you have what is assumed to be a peak correlation of 0.4 at 0 years. So this explains 16% of the variance?

Thanks for the references. That NOAA version of the data appears to the eye even more correlated to the temperature. I will do the actual cross-correlation with the NOAA data and see how it differs.

We know that CO2 varies with temperature –

That said, say we don’t care about the causality. So if CO2 varies with temperature in this predictable way then aren’t we verifying the hockey stick rise in temperature? CO2 then becomes a perfect proxy for temperature records and we don’t have to worry about the kriging interpretation of spatial temperature records anymore (which is much of what McIntyre and the auditors complain about). All we need to do is look at these incredibly sensitive CO2 records from sites like Mauna Loa, and from the mixing of CO2 we have a better interpretation of what is going on.

Sounds a bit high to me – and is probably the data smoothing.

Not much we can do about that. I have been studying peak oil topics for several years, and the best we get in terms of records for oil and fossil fuels is the yearly amounts.

Bottom line, I am just trying to understand what is happening from a systems perspective. I have put in a lot of work over the last few years understanding oil depletion, and think I can provide a fresh interpretation that the climate scientists may have been missing. We’ll see how far I can go with it.

This is what the cross-correlation looks like between the yearly d[CO2] data that you referenced from the NOAA site and the hadcrut3 Temperature data.
I notice how much of the fine structure disappears because of a wider window but the strong correlation at 0 lag is still there.

And this is what the correlation looks like with a Proportional-Derivative model of [CO2] against Temperature.
This has zero lag and a strong correlation of 0.9.
The model is a linear combination of a Proportional term in [CO2] and a Derivative term in d[CO2]/dt. I chose the coefficients to minimize the variance between the measured Temperature data and the model for [CO2]. In engineering this is a common formulation for a family of feedback control algorithms called PID control (the I stands for integral). The question is what is controlling what.
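For what it’s worth, that coefficient choice can be sketched as an ordinary least-squares fit. The series and the “true” coefficients below are synthetic, purely to show the fitting step, not the actual Mauna Loa / HadCRUT3 numbers:

```python
import numpy as np

# Fit the Proportional-Derivative form  T(t) ~ k_p*C(t) + k_d*dC/dt
# by least squares on synthetic data with known coefficients.
rng = np.random.default_rng(1)
t = np.arange(100.0)
c = 300.0 + 0.5 * t + 2.0 * np.sin(0.3 * t)        # synthetic [CO2]-like series
dc = np.gradient(c, t)                              # derivative term
T = 0.002 * c + 0.4 * dc + 0.05 * rng.normal(size=t.size)  # synthetic "temperature"

# Least-squares choice of k_p and k_d minimizes the residual variance.
X = np.column_stack([c, dc])
(k_p, k_d), *_ = np.linalg.lstsq(X, T, rcond=None)
print(round(k_p, 3), round(k_d, 2))  # recovers roughly 0.002 and 0.4
```

The fit itself is symmetric in the two series, which is exactly why it cannot by itself settle which variable is the error signal.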

When I was working with vacuum deposition systems we used PID controllers to control the heat of our furnaces. The difference is that in that situation the roles are reversed, with the process variable being a temperature reading off a thermocouple and the forcing function being power supplied to a heating coil as a PID combination of T. So it is intuitive for me to immediately think that the [CO2] is the error signal, yet that gives a very strong derivative factor which essentially amplifies the effect. The only way to get a damping factor is by assuming that Temperature is the error signal, and then we use a Proportional and an Integral term to model the [CO2] response. Which would then give a similar form and likely an equally good fit.

It is really a question of causality, and the controls people have a couple of terms for this (I know because I got grilled on it for my qualifiers). There is the aspect of Controllability and that of Observability.

Controllability: In order to be able to do whatever we (Nature) want with the given dynamic system under control input, the system must be controllable.
Observability: In order to see what is going on inside the system under observation, the system must be observable.

So it gets to the issue of two points of view:
1. The people that think that CO2 is driving the temperature changes have to assume that nature is executing a Proportional/Derivative Controller on observing the [CO2] concentration over time.
2. The people that think that temperature is driving the CO2 changes have to assume that nature is executing a Proportional/Integral Controller on observing the temperature change over time, and the CO2 is simply a side effect.

What people miss is that it can be potentially a combination of the two effects. Nothing says that we can’t model something more sophisticated like this:

What makes your approach difficult is that, according to the present mainstream views (as I have understood them), CO2 is driving the temperature in PI fashion with an effective delay of a couple of years, while the temperature is driving shorter-term fluctuations of CO2 over one or two years, but in a reverting manner, because no persistent storages of CO2 of a size that would allow for effects extending over several years are involved in this process.

You think you have linear functions in a feedback loop, and that is how you model it. If the data is up to it, you could try forming the complete lagged covariance set. Its matrix is

C(t) = [A(0)A(t), A(0)B(t),
        B(0)A(t), B(0)B(t)]

t = time step.

I think I really do mean covariance not correlation.

Then convert to the impulse response function matrix R(t).

As I recall, using the inverse matrix of C(0), denoted C(0)^-1:

R(t) = C(t) x C(0)^-1

I think it is that way round.

In theory (the fluctuation-dissipation theorem) it is sort of hey presto! You have the response function as initiated by A or by B. Which I think is all you need to know.

If not, it might show up some additional complexity or that the data is simply not up to it.
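A sketch of that construction on a synthetic linear system where the answer is known in advance (the transition matrix M and the noise are illustrative choices):

```python
import numpy as np

# Synthetic bivariate AR(1) process x(t+1) = M x(t) + noise.
# Build the lagged covariance C(lag) and form R = C(1) @ inv(C(0));
# for a linear process this recovers M, including the asymmetry that
# tells you which series is driving which.
rng = np.random.default_rng(7)
M = np.array([[0.8, 0.1],    # series A is nudged by B...
              [0.0, 0.9]])   # ...but B ignores A entirely
n = 100_000
x = np.zeros((n, 2))
for i in range(1, n):
    x[i] = M @ x[i - 1] + rng.normal(size=2)

def lagged_cov(x, lag):
    """C(lag)[i, j] = average of x_i(t + lag) * x_j(t)."""
    if lag == 0:
        return x.T @ x / len(x)
    return x[lag:].T @ x[:-lag] / (len(x) - lag)

R = lagged_cov(x, 1) @ np.linalg.inv(lagged_cov(x, 0))
print(np.round(R, 1))  # close to M: the zero in the lower-left survives
```

The recovered zero entry is the part that speaks to causality: it says A never appears in B’s response, however correlated the two series look.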

So temperature causes changes in CO2 and the changes in anthropogenic CO2 explain (perhaps optimistically) 16% of the atmospheric variability. So what other factors independently cause temperature variability?

‘Change in SOI accounts for 72% of the variance in GTTA for the 29-year-long MSU record and 68% of the variance in GTTA for the longer 50-year RATPAC record. Because El Niño Southern Oscillation is known to exercise a particularly strong influence in the tropics, we also compared the SOI with tropical temperature anomalies between 20S and 20N. The results showed that SOI accounted for 81% of the variance in tropospheric temperature anomalies in the tropics. Overall the results suggest that the Southern Oscillation exercises a consistently dominant influence on mean global temperature, with a maximum effect in the tropics, except for periods when equatorial volcanism causes ad hoc cooling.’

There is little doubt that CO2 drives temperature, but this is certainly minor on interannual to decadal timescales. There are other more important drivers on these scales, and probably longer. These include clouds, ice, dust and ocean thermohaline circulation.

I am still playing around with first-order perturbations so will keep that in mind if I decide to go deeper into the analysis.
From the best cross-correlation fit, the perturbation is either around (1) 3.5 ppm change per degree change in a year or (2) 0.3 degree change per ppm change in a year.

(1) makes sense as a Temperature forcing effect as the magnitude doesn’t seem too outrageous and would work as a perturbation playing a minor effect on the 100 ppm change in CO2 that we have observed in the last 100 years.
(2) seems very strong in the other direction as a CO2 forcing effect. You can understand this if we simply made a 100 ppm change in CO2, then we would see a 30 degree change in temperature, which is pretty ridiculous, unless this is a real quick transient effect as the CO2 quickly disperses to generate less of a GHG effect.

Perhaps this explains why the dCO2 versus Temperature data has been largely ignored. Even though the evidence is pretty compelling, it really doesn’t further the argument on either side. On the one side interpretation #1 is pretty small and on the other side interpretation #2 is too large, so #1 may be operational.

One thing I do think it helps with though is providing a good proxy for differential temperature measurements. There is this baseline increase of Temperature (or CO2), and accurate dCO2 measurements can predict at least some of the changes we will see beyond this baseline.

Also, and this is far out, but if #2 is indeed operational, it may give credence to the theory that we may be seeing the modulation of global temperatures over the last 10 years because of a plateauing in oil production. We will no longer see huge excursions in fossil fuel use as it gets too valuable to squander, and so the big transient temperature changes from the baseline no longer occur. That is just a working hypothesis.

I still think that understanding the dCO2 against Temperature will aid in making sense of what is going on. As a piece in the jigsaw puzzle it seems very important although it manifests itself only as a second order effect on the overall trend in temperature.

What really outs the global warming alarmists as deceivers and not simply unconscious incompetents is the absolute loss of the ‘official’ raw data upon which the AGW True Believers’ faked snapshot of the world rests. The original data has gone missing. The best examples of the missing data can be seen in the foi2009.pdf CRUgate disclosures and the information contained in the ‘Harry Read Me’ file. But, it doesn’t stop there. NASA dropped the number of ‘approved’ temperature stations altogether.

And on top of all this corruption, manipulation and incompetence is the fact that the process is wholly unscientific at the outset. The locations and numbers of ‘approved’ temperature stations are in no way representative of the entire surface of the Earth. That, and the fact that the oceans have been cooling, and it is oceans that cover most of the Earth’s surface, have made a joke out of climate science.

The CRU data may be AWOL, but NCDC data is readily available. The raw bits are difficult to digest, but I worked up 1900-2009 using a 1×1 degree grid, keeping only sectors with the full 110 years of data (http://justdata.wordpress.com). Temperatures are “land only”.

Regarding climate science, I think we can assume that most published results are not tortured, although it is probable that more are than the field would like to admit. However, it depends on exactly what kind of science is being done. When you do an experiment that is not logistically difficult to replicate, others in your field can (and if it’s worthwhile, will) do so. This is the check on fields like molecular biology. If your lab can do it, then a hundred other labs can as well. How often is this true in climate science?

As an aside, I am familiar with a case in which a graduate student (non-climate science) came up with no statistical significance on her data – and thus no paper for her dissertation. Her advisor asked a post-doc for statistical help, and he advised running the data through logistic regression instead of ANOVA. It worked – the results under regression showed statistically significant differences. The work was submitted and accepted. And no one reading the paper would ever know that THIS data had been tortured under pressure to publish. It was an insignificant paper in a scientific backwater subject, so no one would care, either. This experience makes me skeptical of ALL published science – I know how things work behind the scenes.

I have also seen very similar instances with graduate students who are writing their dissertations.

It reminds me of when I talk to graduate students in the life sciences about their experiments; sometimes they describe their experiments as either “working” or “not working,” or being useful or not useful, or being a success or a failure, contingent upon whether their results proved their hypotheses. They look at me cross-eyed when I ask them whether or not even results that don’t prove their hypotheses should be considered useful.

Anecdotally, I find that phenomenon most prevalent with international graduate students (in particular with Asian students). They sometimes feel it is their personal responsibility to provide data that prove the hypotheses of their professors, and that they have personally let their professors down if they don’t provide such results from their experiments. It can be a real problem – sometimes resulting in fairly significant psychological harm resulting from the emotional stress of letting down their professors. And sometimes professors exploit that attitude on the part of international graduate students to get those students to work harder than what should reasonably be expected (and what might typically be expected from American graduate students).

I assume that you’re mocking political correctness, but in case you aren’t:

I spoke anecdotally, from rather extensive experience.

In fact, there are data that back up my anecdotal experiences – such as data on different cultural attitudes towards authority.

Obviously, distinctions between individuals are greater than between different groups, but that doesn’t mean that you can’t make valid generalizations about how culture of origin affects the attitudes of graduate students.

If you don’t think that some broad, cultural generalizations are valid with respect to how cultural differences manifest among graduate students in American academia, I would suggest that you haven’t worked with a diverse group of graduate students in American academia.

If you have any data that disprove the validity of my generalizations, I’d love to see them.

Joshua:
You may well be right, I have no first hand experience working in such a context. However, I have conducted global surveys for many years, and invariably Asian respondents treat rating or response scales in a far more considered way than their US and European counterparts. This generally led to lower Halo effects and more complete use of the scales. My conclusion is that precision and accuracy are also more pronounced cultural norms among Asian respondents. (Note that background and discipline also play a role. Geert Hofstede did a lot of work in this area.)

The scientific method requires first a hypothesis (including a null hypothesis), with the data to be collected and the statistical tests to be applied determined before embarking on the experiment. Determining your statistical testing beforehand (and asking a statistician about the form of the data and the tests to apply) is de rigueur.

Applying enough statistical testing to a set of data post hoc invariably will eventually generate a statistically “significant” result. It’s perfectly legitimate to undertake an exploratory study like this so long as one recognises the limited validity of one’s findings.

It’s of course useful heuristically to collect data and massage it statistically and use the correlations thus elicited to generate hypotheses, which can then be tested by a properly designed experiment. The ensuing results and the probabilities pertaining to them are far likelier to be valid.
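The arithmetic behind that warning is easy to demonstrate: run 20 independent tests at the 5% level on pure noise and, about 1 - 0.95^20 ≈ 64% of the time, at least one comes up “significant”. A small Monte Carlo sketch (the test count and level are arbitrary):

```python
import numpy as np

# Simulate many "studies", each applying 20 independent null-hypothesis
# tests at alpha = 0.05 to pure noise. Under the null, each p-value is
# Uniform(0, 1). Count how often at least one test looks significant.
rng = np.random.default_rng(3)
n_experiments = 10_000
tests_per_experiment = 20
alpha = 0.05

p_values = rng.uniform(size=(n_experiments, tests_per_experiment))
any_significant = (p_values < alpha).any(axis=1).mean()

print(round(any_significant, 2))   # about 0.64, i.e. 1 - 0.95**20
```

This is why a post hoc “discovery” only earns its probability statement when re-tested in a properly designed follow-up experiment.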

We need to be careful here. Different tests have different sensitivities. It is perfectly possible for a statistical test with a low sensitivity to fail to find statistical significance while a more sensitive test may quite legitimately find significance. I’m not saying that this happened in the case you quote because I don’t understand the details. But equally without understanding the details it is not obvious that there is a problem.

“MarkB | September 15, 2011 at 1:28 pm | Reply
Regarding climate science, I think we can assume that most published results are not tortured”

Unless there has been very careful experimental design to eliminate cognitive bias, that assumption is likely incorrect. We ALL torture the data every day, without being aware we are doing it.

Scientists are no different. We believe we are without bias, while the opposite is the case. Human beings are not capable of acting without bias, because it is subconscious. Intellectually we know we have it, but we can’t consciously control it. Thus it must be eliminated by experimental design.

How many climate researchers use double blind methods for example, when collecting and measuring samples or compiling results? I’d hazard a guess the answer is none.

So, it should be no surprise if the results confirm your expectations. Your subconscious is working to make sure they do, inserting small errors that you will not catch, that will skew the result in the direction you desire. And you should not be surprised when you lose the original data accidentally. Your subconscious knows there are errors that would best remain hidden, even if you don’t.

I’ve tried waterboarding data before, but that just tends to make the papers soggy. And with electronic data … let’s just say, don’t try this at home unless you’re heavily insured against electrical fires. ;-)

Again, remedies are obvious: Develop a publication climate which encourages or even insists on independent replication, that treats well-conducted “failed” replications identically to well-conducted “successful” ones, and which does not privilege “replications” from the same authors or lab of the original study.

Indeed…can you imagine Jones and Mann giggling about McIntyre’s inability to replicate a result under such a regime?

Fund it or not. Politically, there isn’t going to be any change in the US until climate science starts replicating studies. The stakes are too high, the investment too small to do otherwise. The alarmists should try to make their case with responsible science.

Of course, the alternative is to try to succeed with the tactics that Algore and Trenberth are trying. giggle. bigger giggle. ROTFLMAO.

You don’t need to torture data to get the results you want. You just have to leave the disagreeable info out of your original collection. Just find the stuff you want and don’t find the stuff you don’t want.

I give you Climate Skepticism. By far the worst torturing of data I’ve seen is done by them, when they fit straight lines or exponentials to data when the theory predicts that those will give a dreadful fit. They do it anyway then turn a blind eye to the bad fit.

Vaughan, I know you are feeling tortured by HADCRUT3 going down over the last 10 years and sea level going down … but imagine how skeptics feel when you keep calling them names when it’s YOUR side’s data going down, even with the bogus adjustments!

There is far more independent due diligence on the smallest prospectus offering securities to the public than on a Nature article that might end up having a tremendous impact on policy. . . .
Ferson et al (2003) had observed that the problem of spurious regression is exacerbated by data mining – something that should be of profound concern in this field, given the proven recycling of the same proxies over and over. . . .
In the IPCC Third Assessment Report, they did worse than simply ignoring the problem. They deleted the declining portion after 1960, thereby giving a false sense of coherence between the proxies. . . .
there are serious and probably fatal problems with the main proxies used as supposed evidence against a warm MWP . . .
engineers, of all people, know that, even if the “science is settled”, the engineering work may have just begun . . .
climate scientists typically report their results in highly summarized form in journals like Nature, rather than in the 1000-page or 2000-page engineering studies that an aerospace engineering enterprise would produce

The Vaganov network has 400-500 cores over most of the 20th century. In contrast, the living tree portion of the Yamal “network” in Briffa (2000) and Briffa et al 2008 had only 17 cores (10 in 1990; 5 in 1995). The somewhat expanded Limited Hangout network of Briffa 2009 still contained only 11-12% of the number of cores of the Vaganov network and thus falls far short of using “all” the data. (It didn’t even use the Polar Urals data.)

The effect of using “all the data to hand” is potentially quite dramatic. . . . The discrepancy becomes very pronounced from the 1970s on – the Vaganov network shows the characteristic “decline” in the late 20th century that also characterized the large Schweingruber network, while the Briffa Limited Hangout network surges to new records.

I don’t think you want to bring up the problem with Dave Clarke’s (Deep Climate’s) analysis by linking to this. Nor do you want to point to Tamino’s (Grant Foster’s) difficulties with decentered PCA, and oneuniverse’s findings. That’s a very interesting discussion. Even Don Baccus (dhogaza) shows up.

This is the overlay of the two (the vertical shift does not matter for cross-correlation): http://img194.imageshack.us/img194/2985/jonmax.gif
Someone put in a bad data point for 1988 (guess which one is bad). It is enough to influence the cross-correlation, so that Max’s data is much less correlated than Jon’s with CO2.
The moral is that one bad data point is enough to make the time-series cross-correlation meaningless. One can argue that we don’t have many points in the data set, but you have to use what you have.
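A toy illustration of that moral, with a synthetic series standing in for the Jon/Max data (the series, the noise level, and the size of the bad point are all arbitrary):

```python
import numpy as np

# Two transcriptions of the same short series, one with a single
# mis-entered value, correlated against a common reference.
rng = np.random.default_rng(5)
n = 30                                        # e.g. 30 annual values
reference = np.cumsum(rng.normal(size=n))     # random-walk stand-in series
good = reference + 0.2 * rng.normal(size=n)   # faithful transcription
bad = good.copy()
bad[15] += 50.0                               # one bad data point

r_good = np.corrcoef(reference, good)[0, 1]
r_bad = np.corrcoef(reference, bad)[0, 1]
print(round(r_good, 2), round(r_bad, 2))  # the corrupted copy correlates far worse
```

With only 30 points, a single outlier dominates the variance, which is why screening the data before cross-correlating is not optional.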

Thus there would be two inputs:
Solar/cosmic to clouds to temperature to CO2
and Fossil Fuels to CO2 to Temperature.

Both need to be sorted out in detail to distinguish the effects.
Stockwell’s evidence appears to strongly say that solar/cosmic to clouds to temperature to CO2 is a dominant factor. Conversely, a near zero lag between SST and CO2 strongly says that CO2 is tied to ocean temperatures as a consequence.

I would also be interested in your evaluation of Fred Haynie’s “Future climate change,” where he has a wealth of fascinating CO2 variations and analyses.

Thanks David,
That indeed is an interesting analysis. In terms of a poker hand, a sharp cross-correlation peak at lag = 0 usually beats a broad correlation peak at a finite lag. They also have to put the TSI on a sloped line to get the cross-correlation to make sense, which means there is still an overall forcing function not accounted for. TSI might be an additional effect, and I would try to aggregate it with the FF forcing function in a variational fashion to see what comes out. Perhaps TSI gives some of the slow, lagged dynamics and FF gives the sharp, spiky fast dynamics and the overall incline.

One other thing: the observation that Temperature has a 90-degree phase shift with the TSI cycle means that it is behaving a lot like a time derivative of TSI. This would suggest that Temperature is reacting very sensitively to the rate of change of TSI. What they really should do is take the time derivative of TSI and overlay that curve on average Temperature. Then they can show a lag = 0 on the derivative, and people can ponder what exactly that means.
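The 90-degree point can be illustrated with a purely synthetic sinusoid (a toy, not the actual TSI record): the derivative of a sine is a cosine, so a response that is 90 degrees out of phase with TSI lines up at lag zero with d(TSI)/dt.

```python
import numpy as np

t = np.linspace(0.0, 8.0 * np.pi, 2000)   # four idealized "solar cycles"
tsi = np.sin(t)                            # toy TSI cycle
temp = np.cos(t)                           # toy response, 90 degrees out of phase
dtsi_dt = np.gradient(tsi, t)              # numerical time derivative of TSI

# At lag 0, temp is uncorrelated with TSI itself but almost perfectly
# correlated with the rate of change of TSI.
print(np.corrcoef(temp, tsi)[0, 1])       # near 0
print(np.corrcoef(temp, dtsi_dt)[0, 1])   # near 1
```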

Very interesting stuff.
I also like the Feynman quote which I have never seen before:

“When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.”

That’s essentially why you can never give up when you start down this path. Everything has to fit together like a jigsaw puzzle.

Definitely. Thanks.
I am working with this form at the moment to improve my understanding:

I have the temperature as an integral function on the left-hand side, but I know that [CO2] has the potential to generate immediate positive feedback, so I have the derivative on the right-hand side of this as well. This can converge from both directions. It’s interesting to play around with these numbers (and I don’t consider that torture :)

Re: “I know that [CO2] has the potential to generate immediate positive feedback”
Do we empirically “know” that?
Or could it be that Temperature increase causes an immediate release of CO2 (via the Clausius-Clapeyron equation)?
Can we statistically distinguish between d(CO2)/dt vs. d(ln(CO2))/dt (i.e., an extra factor of 1/CO2)?
(i.e., I am concerned that with small changes in CO2, a series expansion coupled with noise may produce simplistic results that may be misleading.)
Thus I am interested in the potential for Fred Haynie’s evidence to help distinguish some of these issues.

Or could it be that Temperature increase causes an immediate release of CO2

That’s exactly what I was implying: the Clausius-Clapeyron or Arrhenius rate law. The positive feedback then arises because, as a result of this release, we will have more GHG, which will then potentially increase temperature further. It is a matter of quantifying the effect. It may be subtle or it may be strong.
According to the best-fit cross-correlation, the average derivative rate of change is about 0.3 ppm change for every degree change in a month. As a feedback term for Temperature driving CO2 this is pretty small, but if we flip it and say it is 3.3 degrees change for every ppm change of CO2 in a month, it looks very significant. I think that order-of-magnitude effect more than anything else is what is troubling.

Can we statistically distinguish between d(CO2)/dt vs d(ln(CO2))/dt? (i.e. 1/(CO2))

d(ln(CO2))/dt is (dCO2/dt)/CO2 by the chain rule, so using the logarithm or not may be a fairly subtle effect to first order.
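A quick numerical check of that point, using synthetic monthly CO2-like numbers (not real data): since d ln(CO2)/dt = (dCO2/dt)/CO2, and CO2 changes by only a modest fraction over the record, the two derivative series are nearly collinear and hence hard to distinguish statistically.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical monthly CO2-like series: slow rise plus measurement noise
months = np.arange(600)
co2 = 350.0 + 0.15 * months + rng.normal(0.0, 0.3, months.size)

d_co2 = np.diff(co2)              # discrete d(CO2)/dt
d_lnco2 = np.diff(np.log(co2))    # discrete d(ln CO2)/dt = (dCO2/dt)/CO2

# The logarithm merely rescales the derivative by a slowly varying 1/CO2,
# so the two derivative series are almost perfectly correlated.
print(np.corrcoef(d_co2, d_lnco2)[0, 1])
```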

Haynie’s stuff is good insofar as he is really looking at deconvolving the seasonal pieces out of the puzzle. I am just not sure if he is going about it in the most efficient way.

“… study of three independent records, the net heat flux into the oceans over 5 decades, the sea level change rate based on tide gauge records over the 20th century, and the sea surface temperature variations. . . . We find that the total radiative forcing associated with solar cycle variations is about 5 to 7 times larger than just those associated with the TSI variations, thus implying the necessary existence of an amplification mechanism, though without pointing to which one.”

That demonstrated amplification has not yet been incorporated by IPCC.

On the other hand, there is the opposite problem. Call it data coddling, the refusal to make the data work very hard. This is not common in actual science but is very common on the fringes, where people are actually eager for a null result.

How this relates to the spectacularly odd summer here in Texas and its attribution is left mysterious. It is almost as though Lubos wants to convince me that my lawn isn’t dead and my trees aren’t dying by wielding a noisy graph he scratched together.

I’m not sure there is any rule of thumb for avoiding these sorts of error except for this one: A scientist acting constructively will always bend over backwards to consider the interpretation least favorable to their point of view.

MT: I have read a lot of journal articles and never seen this back-bending done. Maybe I just don’t know what to look for. Do you have an example? How about in the latest issue of Science, which most of us have access to?

Oh it’s mostly blog science and its ilk. (I said that in the first draft but accidentally deleted it before posting.) Getting null results into the literature is hard enough when they actually mean something.

David: MT: You seem to have misunderstood me. You claim the following: “A scientist acting constructively will always bend over backwards to consider the interpretation least favorable to their point of view.”

And in any case, a blog post, whether on Real Climate or a physicist’s blog, is not intended as a summa theologiae of climate science; it would take a brain-dead monkey not to realize that Lubos is claiming the 2011 temperature is an outlier (among other, greater outliers) in an overall downward trend in temperature.

It’s too bad, though, that he didn’t do precip also, because as I recall that is flat to up as well in Texas, which would have further supported his position that summer 2011 is an outlier, not a growing trend influenced by 40, 50, 60 years of steadily increasing CO2.

Oh, and I think I found some cases of your “bending over backwards to present adverse results”: the hiding of 20+ years of decline in the tree-ring temperature signal, in different forms and fashions, by no fewer than 3 leading climate researchers. Oh wait…

Oh, and burying quite low verification statistics deep in the SI, or just not reporting them at all.

Oh, or brutally attacking a paper which merely added new data to Santer 08, showing that his borderline-valid results were invalidated by 2 more years of data (showing they were never very robust).

MT: You seem to have misunderstood me. You claim the following: “A scientist acting constructively will always bend over backwards to consider the interpretation least favorable to their point of view.”

I am asking for an example of this bending over backwards, as I have not seen it. Every journal article I have read has the authors presenting their findings in a positive light. I have never seen the “least favorable interpretation” discussed.

Most scientists want to find their errors themselves rather than be caught having attempted to publish an erroneous paper. There are also people who prefer publishing without such self-criticism. This phase of trying to find the errors is not visible in the papers, because it’s done at an earlier stage.

But you can see something like that in the papers as well. They contain all kinds of reservations, and it’s not at all uncommon for papers to state openly where the argument has weaknesses, as the authors feel (correctly) that it’s much better to state that than to claim certainty and be shown by others to have erred.

Trying to get a paper published often appears to involve a careful game: giving, at first sight, an impression of strongly justified and important results while including all the caveats “in fine print.” The reviewers and journals are often ready to accept this kind of bias in the emphasis.

Certainly, brave Sir Robin, you mean to point out that WUWT points out that AGW promoters do things like assign “global warming” as the middle name of hurricanes, or claim that drought will be the case in Australia, until it rains?

One interesting recent example is when we heard that the sun may be entering a cooler period. Scientists reacted to that with interest, and none tried to deny it could happen. This shows the neutrality of the scientific mainstream when genuine science is shown to them.

Perhaps what Lubos is pointing out is that if you live in Texas long enough that there is a good likelihood of trees and lawns dying from drought?
But that would fly in the face of Dessler’s claim that this drought is caused by CO2, and one cannot face that from the faithful’s point of view, can one?

I looked at Lubos’s plot and it does have a weird asymmetry according to the season. He doesn’t seem to have much intellectual curiosity for being such a smart guy. If it was me, I would look into that and try to figure it out.

One place to look would be other examples of the phenomenon. Stephen Wolfram comes to mind. Like Lubos, he has been a physics professor at a distinguished physics department (Caltech vs. Harvard), though unlike Lubos he doesn’t need to fund his website with ads. But he’s been trying to reduce physics to cellular automata for over three decades now, and still doesn’t have a compelling cellular-automaton explanation of Maxwell’s equations, thermodynamics, quantum mechanics, relativity, or just about anything else that would establish a connection between cellular automata and physics. That didn’t stop him from writing a 1,200-page book on his theory.

Motl and Wolfram both seem unable to debug their respective theories of how the world works. They have that in common with a great many people, 99.99% of whom however have not been admitted to the faculty of a top-ranked physics department.

Vaughan,
BTW, I liked your RC-circuit analogy describing the thermal mass of ocean versus land from the other day. That is the kind of imaginative stuff that I really appreciate and gets one thinking from a different perspective. You see that analogy and it locks in your brain and then you might be able to use it somewhere else. These are the cross-disciplinary patterns that lead to new insights.

At a time before powerful digital computers became readily available, people built circuits of that kind, only more complex, called them analog computers, and used them both in research and in engineering.

Integrating was easy, but determining the derivative was more difficult. For that, electromechanical components were also used, namely tachometers similar to those used in cars to indicate speed.
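For what it’s worth, the RC analogy is easy to play with numerically. Below is a minimal sketch (the time constants are invented for illustration) of a first-order low-pass response tau * dT/dt = F - T, driving the same forcing through a small “land” and a large “ocean” time constant; the large thermal mass damps and lags the cycle.

```python
import numpy as np

def rc_response(forcing, dt, tau):
    """First-order (RC low-pass) response: tau * dT/dt = forcing - T,
    stepped forward with explicit Euler."""
    temp = np.zeros_like(forcing)
    for i in range(1, forcing.size):
        temp[i] = temp[i - 1] + dt * (forcing[i - 1] - temp[i - 1]) / tau
    return temp

dt = 1.0                                    # one month per step
t = np.arange(0.0, 600.0, dt)
forcing = np.sin(2.0 * np.pi * t / 12.0)    # idealized annual forcing cycle

land = rc_response(forcing, dt, tau=2.0)    # small thermal mass: fast response
ocean = rc_response(forcing, dt, tau=60.0)  # large thermal mass: damped, lagged

# The ocean's large "capacitance" strongly attenuates the annual swing.
print(land.max(), ocean.max())
```

The same few lines, with different time constants, mimic what the old analog computers did with actual resistors and capacitors.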

MT: Regarding Texas (as I am living there now also), yes, it was a hot summer. Wouldn’t you agree that the data do not show any long-term warming trend in Texas, even though it was a hot summer?

I thought Dessler was the one who stated that the hot summer in TX was further evidence of AGW?

After taking a shot at Perry calling for Texans to pray for rain: “I know that climate change does not cause any specific weather event. But I also know that humans have warmed the climate over the past century, and that this warming has almost certainly made the heat wave and drought more extreme than it would otherwise have been.” http://www.theeagle.com/columnists/Paying-the-price-for-climate-change
So Dessler said squat other than his personal belief based on his knowledge.

Dallas-
How do you know that humans made the heat wave and drought more extreme than they would otherwise have been? Please consider that this year’s summer was weather and not climate. Please consider that this summer’s weather was driven by shorter-term events that would completely overwhelm the impact of any potential change humans could have made.

I would agree that it is possible that the summer was made hotter or drier by humanity, but it is not necessarily true. It is also possible that, if CO2 were lower, wind patterns would have been different and TX would have been cooler. You are drawing conclusions with insufficient information—do not be so sure.

This sort of behaviour, although uncommon, is not unheard of in science. It arises not from any intrinsic problem with scientific methodology, but from those who practise it; i.e., human beings.

Ironically (especially so given my usual ‘drum’ on this forum), increased regulation (or QA) cannot solve this problem. If someone is determined to torture data, they will, and there’s very little you can do to spot it, save performing all the available calculations (AND their alternatives) yourself.

There are steps that can be adopted, however, to make this far less likely AND easier to spot: it all comes down to the materials and methods section :-)

If someone is submitting a paper/research which has used a specific statistical technique or trick, it must be detailed in the materials and methods section. The stats used should be outlined, but also the others considered/used, with the rationale for the final selection and for discounting the other methods. Any additional but unused data (from ‘discarded’ statistical methods) should be included in the appendices for reference.

Forcing people to explain the reasons for their particular methodological choice will greatly reduce instances of data torturing. Putting it in the opening section (materials and methods) also puts it front and centre for all to see.

Finally, raw data and methods must be included WITH the research or paper submission (it’s technically required now, but let’s be honest, it’s hardly a well-kept rule), or the paper/research is rejected.

Now of course this will not stop those 100% determined to be duplicitous, but it would certainly stop those trying to twist the outcomes of their research toward a pre-defined outcome.

Incidentally, one of the first questions I ask at work when presented with a project’s results is ‘why did you analyse it that way?’. It can often be VERY illuminating.

Good to see that you acknowledge that there is a debate to be had, rather than the usual assertion that the ‘Science is Settled’.

In contrast to how my posts are frequently characterized, I have never posted anything that resembles “the science is settled.”

What’s interesting is that my posts are so frequently mischaracterized by people who have no data on which to support their conclusions. And they are frequently people who write posts decrying unsupported conclusions, no less.

Good to see that you acknowledge that there is a debate to be had, rather than the usual assertion that the ‘Science is Settled’.

The only explanation I can come up with is that you were comparing my posts to some unspecified group of people that you are associating with me – without specifying what evidence you use to confirm such an association?

And while you’re at it – maybe you can specify just a bit about who has actually said that the “science is settled?” Just so I can know who I’m being compared to?

The unspoken rule is that at least 50% of the studies published even in top-tier academic journals – Science, Nature, Cell, PNAS, etc. – can’t be repeated with the same conclusions by an industrial lab. In particular, key animal models often don’t reproduce. This 50% failure rate isn’t a data-free assertion: it’s backed up by dozens of experienced R&D professionals who’ve participated in the (re)testing of academic findings.

“… it is practically impossible to replicate or verify Dr. Mann’s work… Could it be that this particular work violates the principles of the scientific method…?”

EXCERPT

“Now, after some independent analysis it seems that all scientists could possibly be misled on some of their issues. Both the National Academy of Sciences and Dr. Wegman’s committee analyzed the hockey stick report by Dr. Mann that has become the poster child for proof of global warming. The committees came to the conclusion that Dr. Mann’s hockey stick report failed verification tests and did not employ proper statistical methods.

“Also, it appears that Dr. Mann is part of a social network… of climate scientists who almost always use the same data sets and review each other’s works. There is a contention that they would dismiss critics who had legitimate concerns, rarely used statistical experts for the data they used in their reports, and make it very difficult for reviewers to obtain background data and analysis.

“These revelations point to the lack of independent peer review and how it is practically impossible to replicate or verify Dr. Mann’s work by those not affiliated with the network of scientists, so we are looking forward to hearing about that work today. Could it be that this particular work violates the principles of the scientific method and should be dismissed until it meets the basic qualifications?

“Could that have been some of what happened to the Ice Age return theory of the 1960s?”

(Excerpt from the prepared statement of Tammy Baldwin, Committee on Energy and Commerce, 109th Congress Hearings, Second Session, July 19 and July 27, 2006)

Does this explain the serious reluctance of many researchers to let anybody else look at their raw data? That they know it has been tortured within an inch of its life and that some other worker might blow the gaff to the scientific world?

Seems to be a very good explanation of the Hokey Stick and much of the subsequent anti-scientific behaviour of ‘The Team’. Like the priests in the 16th century who didn’t want Tyndale to publish an English Bible rather than a Latin one, since it would take away much of their power and influence over the gullible public. Being the sole interpreter of the Truth is a very advantageous position. But not so much if others use the same sources and aren’t afraid to come up with a different position.

“… McIntyre & McKitrick (2003, 2005a,b,c,d), the NAS report (2006) and the Wegman report (Wegman et al., 2006) have all independently ascertained that Mann’s PC method produces spurious hockey-stick shapes from a combination of (i) inappropriate centring of the data series, (ii) non-random or biased selection of small data samples, and (iii) inclusion in the calculation of proxies that are known not to reflect a reliable temperature signal (notably, bristle cone pine datasets) …”

“The main inferences that we drew in our letter to parliamentarians of July 21 remain. They are:

“(i) that the magnitude of likely human-caused global climate change cannot be measured, has not yet been shown to have a high risk of being dangerous, and remains under strong dispute amongst equally qualified scientific groups;

“i.e. the science of climate change is far from settled;

“(ii) that the benefits of NZ having signed the Kyoto accord, or of the institution of any other policies intended to avert global climate change (such as a carbon tax), are entirely unclear, and under strong challenge;

“i.e. the economics and likely effectiveness of climate change mitigation measures are far from settled; and

“(iii) that because of the many special interests involved (amongst which number energy and mining companies, environmental consultants, environmental and other NGOs, scientists employed to research climate change, government bureaucrats and departments, local and regional councils, and national politicians), the best and perhaps only way to get dispassionate advice on this vexed issue is to convene a Royal Commission of enquiry.

“i.e. New Zealand’s participation in Kyoto will cost at least $1 billion more than originally estimated; seeking impartial advice as to the benefit seems only wise.

Here’s what I don’t get. If I were a young Turk just coming out of graduate school, I’d look at this global warming as a giant opportunity to make a splash. Another dozen or so phony-baloney AGW-supportive papers aren’t going to do all that much these days, since that’s what everyone’s doing. You’d think there’d be a bunch of these young guys champing at the bit to punch holes in this thing.

How would they ever get such apostasy or heresy published? Or work again in climatology? It is only those approaching the end of their careers who can afford to put their ‘cojones’ on the line against the Forces of Consensus.

And I seriously doubt whether today’s bright young things would choose this as a subject. Their elders have hardly set them a great example of scientific greatness. Only the mediocre need apply, lest they too easily eclipse the Old Guard.

FWIW, I would imagine that class and income are very positively correlated with levels of post-graduate degree attainment. In fact, I would imagine that it would be hard to find variables that are more strongly positively correlated.

OK so you need to find a graduate whose parents are rich and are perfectly willing to give him/her all the money he/she will need for the rest of his/her life…rather than, say, the usual parents who expect their children to do something with their lives, like having a career and achievements in any area including science.

Here’s torturing data: you can draw a flat line for temperature from 1983 to 1993. Someone in 1993 could have claimed warming had stopped based on that. A similar thing could have been done in the early ’80s. Does this sound familiar? Has it been explained why this one is different? The fact that we didn’t hear such claims back then is, I think, because people were generally more sensible about extrapolating variability.
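This kind of window cherry-picking can be sketched with a synthetic series (all numbers invented for illustration): a steady trend plus a multidecadal oscillation yields flat, even negative, decade-long slopes, while the full-record fit still recovers the underlying trend.

```python
import numpy as np

years = np.arange(1950, 2011)
t = years - 1950

# Steady 0.015 deg/yr trend plus a ~60-year oscillation (illustrative only)
temps = 0.015 * t + 0.2 * np.cos(2.0 * np.pi * t / 60.0)

def slope(x, y):
    """Least-squares linear trend."""
    return np.polyfit(x, y, 1)[0]

full = slope(years, temps)  # recovers the underlying trend

# Slopes of every 11-year window: on the oscillation's downswing,
# short windows go flat or negative despite the steady trend.
win = np.array([slope(years[i:i + 11], temps[i:i + 11])
                for i in range(len(years) - 10)])
print(full, win.min(), win.max())
```

Pick the flattest window and you can announce that warming has stopped; pick the steepest and you can announce acceleration. Neither slope says much about the underlying trend.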

Dr. Curry,
Do you alert your grad students to the minefield that published results may contain? Can you identify with the grad student well into her dissertation who can’t make some component of her study “work,” who then calls the authors only to find out (assuming they are honest with her) that they fudged a bit in the area in which she is interested? You would hope they’d tell her.

A friend of mine was bombed two-thirds of the way through his Caltech doctorate by a phony paper fundamental to his work. He discovered by replication that its results didn’t work, but by then the time had been invested, and he lateraled out to Stanford B-School.

There should be a very high value placed on write-ups by smart people who know the area showing that some approach cannot be made to work, and hopefully why.

“… the global temperature graph for the past century. Notice how, after rising steadily in the early 20th century, in 1940 the temperature suddenly levels off. No — it goes down! For the next 35 years! If the planet is getting steadily warmer due to Industrial Age greenhouse gases, why did it get cooler when industries began belching out carbon dioxide at full tilt at the start of World War II?

“Now look at the ice in Antarctica: Getting thicker in places!

“Sea level rise? It’s actually dropping around certain islands in the Pacific and Indian oceans.

“There are all these . . . anomalies.

“… computer programs can’t even predict the weather in two weeks, much less 100 years … They sit in this ivory tower, playing around, and they don’t tell us if this is going to be a hot summer coming up. Why not? Because the models are no damn good!”

–From Joel Achenbach’s now famous May 2006 interview with Bill Gray (the “World’s Most Famous Hurricane Expert”)

As if this were not enough, even if the science were validated, there would still be complex software engineering issues associated with the GCMs – namely code and calculation verification. Fortunately, a substantial body of knowledge has been developed for CFD. For an introduction with references to much more information see http://www.variousconsequences.com/2011/09/separation-of-v-and-v.html. Unfortunately, the climate modelers have mostly ignored this established verification technology.

“We may quote to one another with a chuckle the words of the Wise Statesman, lies, damn lies, and statistics….”
J.A. Baines on ‘Parliamentary Representation in England illustrated by the Elections of 1892 and 1895’ in the Journal of the Royal Statistical Society, No. 59 (1896)

No one outside the West is going to commit suicide by starving their economies of energy. And, it really doesn’t matter what dead and dying Old Europe does or does not do. Furthermore, the IPCC probably will be defunded in 2012. Please make a note of it.

This is obvious and has been pointed out many times and ignored in climate science. It’s very unnerving to find out, after fighting for the data used in published papers, that the raw data is “gone” and only “adjusted” data is available; and this is before any “torture”.

It is absurd to accept the premise of doom derived from a few tenths of a degree here and a few tenths there without examining the data yourself. A casual glance at many of the published papers makes obvious the number of statistical tricks used to manifest a signal.

Any “scientist” should be embarrassed at being associated with the current climate community, especially after having this moron running around and telling children that killer storms are going to come for you because your parents won’t drive an electric car (not that he is a scientist, but he is aided and abetted).

After countless attempts to show the flaws in the data and its interpretation, those that have tried are bullied, ignored, and slandered.

Ever heard of Climate Audit? Its whole purpose was to show the subject of this post.

Try to tell me what the temperature was to the tenth of a degree 200,000 years ago and I’ll sign you up for a $500,000 mortgage at 0% interest. Later you can pout because you were “tricked” by somebody.

I was going to suggest that a productive thing to do when there is a low signal-to-noise ratio … is to go fish elsewhere … and, by the way, how about looking over in these places? The big picture offers tantalizing possibilities.

Then again I am living in a culture of science where the acceptable credo is “remove the doubt, reveal the deniers” and “peer review makes science credible”.

Peer review accomplishes the following …

1) Guaranteed Readership: It ensures that some others have actually taken the bother to carefully read and consider what you have done and will offer some sincere comments as a result of doing so.

This is a considerable improvement on having no one carefully read and consider your effort and/or receiving no opinion in response to it.

2) Standardized Presentation: There is something to be said for presenting items in a common way. If each article of a collection gathered into a ‘Journal’ had its own individual layout, typeface, and conventions of meaning, it would be somewhat messy and confusing.

In a very big sense, having reviewers make comments about the quality of the ‘science’, errors or oversights in the method, or a lack of supporting statistical evidence is NOT about excellence and credibility in scientific research … it is about conforming to common group standards of presentation.

Of course peer review and group standards ensure a modicum of completeness and rigor. But is the consumer of such a packaged product to trust that its contents are worthy because it has met minimum standards of durability: a statistical calculation has been made and a specific number is provided; a method is used which is also known to be used by others; certain contextual information is provided; the author cites other authors who are involved with pertinent aspects?

Peer-reviewed articles that I’ve read range from rubbish to wonderful. Regardless, I seldom have any easy way of peering behind what is being reported to ascertain its credible worth.

I certainly have been browbeaten and snowed by generous flourishes of impressive and incomprehensible words and claims.

Frankly all that prim and proper style is more apt to mislead than reassure. Spit and polish, shiny and neat, umpteen testimonials.

Who isn’t going to trust the credibility of such a document? Surely not the very same scientist who is going to use that ‘Academy Certified’ authenticity in the course of producing a new and improved alternate tome of certified reliability and correctness.

Style impresses. It does convey achievement, in a secondary and residual manner. That’s why considerable expense used to be incurred to produce very impressive, trustworthy-looking financial certificates.

People trust the appearance. People know that minimal money, minimal skill, and minimal effort have been provided to produce that tangible evidence. Just don’t take it further than that …

The instant any scientist uses the “It’s credible because it’s peer reviewed” argument, they are saying very, very little more than “It’s credible because it has the Good Housekeeping Seal of Approval.”

Are you, as a credible scientist, going to buy it because it’s grade-A certified ‘peer reviewed’? Do you automatically assume that everything you read in Nature is credible, important, unassailable, settled science because it passed peer review and was published in Nature?

I guess so.

Only deniers, nutters, shills, and riff-raff dare to protest the great god Good Housekeeping Seal of Approval of ‘peer reviewed’ science.

There are other reasons for peer review …

3) Peer review serves to be selective about the contents of a Journal.
Fair enough. No problems there …

Pity that the power to shape journal content is more often used to play games of researchmanship, support “in-group” interests, and maximize grant awards. Don’t hear much talk about how incredibly political research and science happen to be. That wouldn’t do in the climate of “remove the doubt, reveal the deniers.”

It utterly baffles me how otherwise extremely capable researchers could be so foolish as to defend themselves with the predominantly and deliberately misleading claim that ‘peer review makes it credible’.

I don’t recall hearing that the wonderful benefit of peer review is that it ensures that someone carefully considers and responds to the author’s effort. THAT’S JUST TAKEN FOR GRANTED, ISN’T IT? … whole hordes of readers eager to relish every word of the scientist’s important work.

I apologize. How silly of me.

Don’t hear much mention of where or how credibility IS ENSURED in science. It comes from places such as research reviews, search committees, awards committees, and perhaps most important of all, from peers recognizing and appreciating each other’s work and offering support.

Finally, there is one very, very crappy side to ‘peer review’ that doesn’t seem to get mentioned much either, even though it’s spot on this topic.

“On torturing data”: only an idiot, a career climber, or someone mesmerized by the endless variety of results would play the pump-the-data-for-all-it’s-worth-and-then-twenty-times-more-again game.

It’s the easiest way there is to win the “Look how many publications I have and how important I am!” game. With ‘peer review’ on your side it makes for an assured win. Learn a demonstrably effective production technique. Automate and mass produce with every conceivable context.

Peer review ruins science because …

ONE) It discourages, impedes, and rejects efforts that cannot be easily tailored to the code-of-presentation.

TWO) It encourages work that is specially intended to slip as quickly and easily as possible between the greased rolling pins of the code-of-minimal-mediocrity-in-presentation.

Problem is, they have no clue how the planet works.
The solar system is a thousand times more complex than current science can grasp.
Following temperatures to the exclusion of all other factors is fiction at the height of stupidity, driven by economic funding that has generated like-minded zombies. 4.5 billion years is a vast amount of data being ignored in favor of a few hundred years of science that has no value.
Many areas have been missed because they do not fall into established categories of consensus laws, which missed the simplest of measurements: differences in circumference sizes and speeds of rotation.

E = mc². There is far more than one type of energy, and speed changes density.

Financial economist Andrew Smithers, writing in the Financial Times, said

“Data mining is the key technique for nearly all stockbroker economics. There is no claim that cannot be supported by statistics, provided that these are carefully selected. For this purpose, data are usually restricted to a limited period, rather than using the full series available. Statistics, it has been observed, will always confess if tortured sufficiently.”

Judith Curry recently presented data at a Boulder conference showing that precipitation in the Pacific NW varied with the PDO. This explains the heavy snowfall in that area in the 1950s and in recent years. She used the full series of data available.

A few years ago, some climate scientists showed snowfall declining from 1950 to the mid-1990s, demonstrating that global warming was reducing snowfall. They used data restricted to a limited period, rather than the available data from 1914-2004, and the data confessed.
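The window-dependence Don describes is easy to reproduce on synthetic data. The sketch below uses illustrative numbers only (not the actual snowfall record): a series with a multidecadal cycle and no long-term trend, fitted over the full record and over a restricted window.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1914, 2005)                     # full record, 1914-2004
# synthetic "snowfall": a ~60-year cycle with NO long-term trend, plus noise
data = 100 + 10 * np.sin(2 * np.pi * (years - 1914) / 60) \
       + rng.normal(0, 2, years.size)

def trend_per_decade(yr, y):
    """Least-squares slope, expressed in units per decade."""
    return 10 * np.polyfit(yr, y, 1)[0]

sel = (years >= 1950) & (years <= 1995)
full = trend_per_decade(years, data)              # uses all available data
sub = trend_per_decade(years[sel], data[sel])     # restricted window only
print(f"trend 1914-2004: {full:+.2f} per decade")
print(f"trend 1950-1995: {sub:+.2f} per decade")
```

With the full series the fitted trend is near zero; the restricted window alone “confesses” to a large trend that is pure cycle.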

Don, that is an excellent example. Part of the reason that many climate scientists limit some of their data to the 1950s onward is their firm belief that CO2 began to prevail as “THE” forcing circa 1950. The period 1910 to 1940 is nearly taboo in climate circles. That is kind of scary to me.

The paleo reconstructions spent some time on the waterboard too. Opting for high-frequency proxies with pretty heavy smoothing and a best-instrumental-fit selection pretty much guarantees a subdued past climate record.

Yes, we must trust the expert economists. They are the cream and superstars of those who do trend analysis, dynamical systems analysis, and numerical simulations. In private economic forecasting there are no limits on research funds.

Economic forecasters do a wonderful job.
Climate change scientists are losers who couldn’t hack it in the big leagues of economic forecasting.

Sure I am as cynical as stink.
Academics are losers
Failed academics are even bigger losers.

You have hit the nail on the head.
You will find our creativity in generating new products is stifled by the restrictions imposed by policies, now that the debt level is too high for investors to take the chance and other countries have very little restriction.
Highest profits are more important than lives or survival. R and D is being devastated, as there is always some risk.

So in a post entitled “On Torturing Data” with almost 100 comments, there is no definition at all of what it means to torture data. Everybody just piles on, each with his or her own unspecified definition, and tut-tuts over this noxious practice that, of course, must be curtailed. This kind of discussion is really just literary criticism. It is neither quantitative nor based on some sort of conceptual framework from which we might draw some useful conclusions. It is, rather, an extended metaphor concerning the pitfalls of doing competitive science. What’s the point? Seriously.

If you are saying it is all purely anecdotal postulating, I agree. Some research scientist could win a lottery and would that be news? No, because in a world of billions of people there are always weird statistical outliers. Anecdotes don’t really prove anything quantitatively.

Definition? No. Examples, yes. http://www.nytimes.com/2009/01/22/science/earth/22climate.html This is a NYTimes article on what is the most tortured data in all of climate science. Instead of 0.2 degrees warming per decade, which completely blew NASA’s estimate of maybe some warming out of the water, a follow-up paper by a bunch of non-science types, silly bloggers, showed that the methodology used was a bit too novel.

Even Kevin Trenberth, grand poobah of climate science said, “It is hard to analyze data where there is none.”, or words to that effect, in reference to this paper.

The statistical consultant for the Antarctic Warming paper, is a serial data torturer, er… world famous climate science novel statistical expert.

If people don’t know what they are doing, we call it “smoothing”.
If they know exactly what they are doing, we call it filtering.

Come on people, we can pick up something like femtowatt signals buried in noise from the Voyager spacecraft, we can uncorrelate Doppler signals out of fast-moving GPS satellites buried in noise with sophisticated Kalman filters to phase-lock the clock drift and get positional accuracy to the meter, we can pick the license plate number out of a fuzzy camera shot from hundreds of feet away based on maximum entropy spectral analysis and then hunt the perp down, and we can do all this other wondrous stuff. This is completely dependent on modern digital signal processing techniques and advanced filtering algorithms. A filter is simply a way of removing everything you don’t want to see, so that you can pick out the information that you think is important. That is what smoothing does too, but it is simply the tip of the iceberg of what is possible.

Yet we marginalize what some scientists can potentially do by calling it “smoothing”.

I think there are talented scientists and naive scientists and a few troubled scientists, but we can’t really lump everyone together with broad brush strokes. Based on what I have seen done with filtering, we have to give the scientists a chance to see what they can do.
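The point that a “smoother” is just a humble member of the filter family can be made concrete. In this sketch (synthetic data, with an assumed signal period and noise level), a plain moving average, which is nothing but a boxcar low-pass FIR filter, pulls a slow oscillation out of noise:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 200)              # slow oscillation of interest
noisy = signal + rng.normal(0, 1.0, n)            # buried in comparable noise

# A moving average IS a low-pass FIR filter: convolution with a boxcar kernel.
window = 21
recovered = np.convolve(noisy, np.ones(window) / window, mode="same")

def corr(a, b):
    """Pearson correlation between two equal-length series."""
    return np.corrcoef(a, b)[0, 1]

print(f"corr(noisy,     signal) = {corr(noisy, signal):.2f}")
print(f"corr(recovered, signal) = {corr(recovered, signal):.2f}")
```

The boxcar passes the slow signal nearly untouched while cutting the noise variance by roughly the window length, which is all any “smoother” is ever doing.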

If people don’t know what they are doing, we call it “smoothing”.
If they know exactly what they are doing, we call it filtering. …

… but we can’t really lump everyone together with broad brush strokes. Based on what I have seen done with filtering [*], we have to give the scientists a chance to see what they can do.

[*] For example: Wolfram and his use of Cellular Automata

Idiots smooth things over …
Intelligent people filter things out …

Many intelligent people (who know exactly what they are doing) hold a deeply seated belief in a continuous reversible universe and are profoundly mistrustful of Wolfram’s idiotic discrete filtering of reality.

The problem resides in transiting between coherence and incoherence.

Neither linear smoothing nor nonlinear filtering can provide assistance in traveling into and out of discontinuity (… and confusion)

WHT, I am surprised at the hostility to CAs.
The mistrust seems to arise from the discrete representation and its implication of irreversibility.

It was a delight to start reading through your comments on fat tailed distributions. Although the maths is a bit much for me, I was reminded of percolation theory, coalescing random walks, convolution, criticality, renormalization and such things which I banged my head against many years ago. A shame Wikipedia wasn’t around back then.

Not sure what to say about your stuff, except that if you are at some considerable disagreement with the IPCC’s estimation then I cringe at their ineptitude or their deliberate misdirection, as the case may be.

Elsewhere you wrote …

I have written extensively on fat-tails so I look at outliers as the most significant piece of the equation. … in diffusional systems, where the velocity is highly variable, the time characteristic will always show a fat-tail

There is more than diffusion and percolation. Flow can be everything.

What intrigues me is the flow process as it relates to variable velocities and outliers. It would seem possible to sail icebergs through that loophole.

For me the problem is in making a transformation between coherence and incoherence.

Let’s just say that the outliers are everything, as thinning out by reiterated decimation is but a means of moving out from the continuous across to the discontinuous.

Everything has to make sense like a jigsaw puzzle, and the fat-tail is a piece in the puzzle.

Not trying to be clever with words here … but incoherence and discontinuity can make a mess of jigsaw puzzles.

For sure, convolution, renormalization, criticality and other tools can span that ‘incoherent messiness’. You know more about this than I do. …

Nevertheless incoherence/discontinuity remains as a formidable problem. If an appropriate narrow bandpass filter cannot be constructed, for whatever reason … and thinking of the problem of pattern acquisition here … the filtering is ineffective.

You frequently mention normal distributions and outliers. It reminds me of the Law of Large Numbers. An idea which I saw recently in CA research involved a Law of Small Numbers, where N was fractional and much less than 1. (Aside: Sorry, but I don’t have a reference. I understand it because I came at the concept by way of my own work.) That has to do with outliers too. In this case N is much less than unity. Not quite sure how to explain what that means or why it is important (significant effectiveness).

The terms incoherence and discontinuity are used interchangeably here. It might be said that it has something to do with irrational numbers, but even that misleads.

Chaos, strange attractors, infinite dimensions, bifurcations, dynamical systems … all have something to do with it. Trouble is that there isn’t something yet(?) that is practical in transiting from coherent definiteness to wobbly stretchable incoherence and beyond. There is no correspondence.

Formal mathematical ‘measure’ doesn’t apply to incoherent space. It really is about crossing over the boundary to something alluded to in Gödel’s incompleteness theorem.

This is a real problem. It’s the problem that preoccupied Von Neumann.

Not only is this incoherence problem ‘real’, it is also very ordinary … and damnable to understand and describe.

Just to share an anecdote from my own discipline, ancient studies: I’ve heard from a junior scholar in a position to know the truth (and whom I trust absolutely) that a prominent archaeologist, faced with a deadline, essentially sketched out a fabricated city plan for the site he had been digging. The job was convincing enough that one would have to visit the site and study it carefully to realize that large chunks of the published map were not based on reality. So dishonesty among top scholars can go on in just about any discipline. (Not that this will come as a surprise…)

From the stories I read, it happens far too often and then becomes a reference for future scholars.
You have no idea how bad the current science is. 80% is pure fabrication, with four times as much still unexplored or in massive confusion.

If data is good, there is no need to torture.
North Atlantic SST (and its de-trended derivative – AMO) is a favourite hunting ground for cycles, or for explaining the 20th-century up-slope to justify global temperature variability, despite the claim that ‘the nature and origin of the SST/AMO remains uncertain’.
No torture needed, both nature and origin look certain. http://www.vukcevic.talktalk.net/SST-NAP.htm

Week after week, people like Briggs enlighten people about the SIMPLE errors that occur with statistics all the time (e.g. applying averages to averages). For an introduction to basic statistics (HS-College), Matt Briggs has a very good primer: http://wmbriggs.com/blog/?page_id=2690

Briggs doesn’t have all the answers …and unfortunately he has written some nonsense about averaging (along with some things that are true). Equipped with a good handle on the base spatiotemporal structures of a system, smoothing operators can be used as POWERFULLY illuminating resonators.

Tip: Don’t look to the abstract assumptions of statisticians for climate answers. Look to practical, sensible data explorers. It’s stat inference based on UNTENABLE assumptions – whether classical OR Bayesian – that’s the SERIOUS problem in a context where there are FUNDAMENTAL spatiotemporal sampling issues of which the MAJORITY of investigators are THOROUGHLY unaware.

The ONLY sensible option is to DROP the untenable assumptions – i.e. do data exploration (which should absolutely NOT be confused with statistical inference based on untenable assumptions). However, as Dr. Curry points out, academic culture FOOLHARDILY DEMANDS tests based on UNTENABLE assumptions.

We already know that most of the p-values are meaningless (whether classical or Bayesian) and yet the wholesale indoctrination of students continues. This is a fundamentally ugly issue facing our society & civilization. Only the few very brightest statistical leaders with extraordinarily rare lucid awareness aren’t hoodwinked, and most of them have a smug grin on their faces, coyly opting to defend their TRIBE (unacceptably selfish) rather than admit the truth to a society & civilization that DEPENDS on them to not mislead innocents with untenable assumptions.

Perhaps Bayesian snake-oil salesmen, some of whom perhaps don’t yet realize the LIMITED utility of their product for climate exploration, will understand an analogy. The Metropolis-Hastings algorithm sat on a shelf collecting dust for 4 decades. The same thing might happen with LeMouel, Blanter, Shnirman, & Courtillot (2010). Even the people actually taking the time to read it don’t understand it. [Worse: Some diss it from a base in PATENT MISconception.] This indicates a FUNDAMENTAL lack of awareness of the spatiotemporal sampling framework, aliasing, integration across harmonics, and the effect of aggregation criteria on summaries.

Perhaps a larger concern should be the consequences of thus-highlighted deficiencies in our education systems, including perceptive delays measured in decades, if not centuries.

Now I’m going to tell you the great truth of time series analysis. Ready? Unless the data is measured with error, you never, ever, for no reason, under no threat, SMOOTH the series! And if for some bizarre reason you do smooth it, you absolutely on pain of death do NOT use the smoothed series as input for other analyses! If the data is measured with error, you might attempt to model it (which means smooth it) in an attempt to estimate the measurement error, but even in these rare cases you have to have an outside (the learned word is “exogenous”) estimate of that error, that is, one not based on your current data.

If, in a moment of insanity, you do smooth time series data and you do use it as input to other analyses, you dramatically increase the probability of fooling yourself! This is because smoothing induces spurious signals—signals that look real to other analytical methods. No matter what you will be too certain of your final results!
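Briggs’ warning about feeding smoothed series into other analyses is easy to demonstrate numerically. In this sketch, two series of pure, independent noise are each smoothed with a running mean; the sampling distribution of their correlation fattens dramatically, so impressive-looking “relationships” appear by chance:

```python
import numpy as np

rng = np.random.default_rng(2)

def running_mean(x, w):
    """Centered running mean of width w (a boxcar smoother)."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

n, w, trials = 200, 10, 2000
raw_r, smooth_r = [], []
for _ in range(trials):
    a, b = rng.normal(size=(2, n))                # two INDEPENDENT noise series
    raw_r.append(np.corrcoef(a, b)[0, 1])
    smooth_r.append(np.corrcoef(running_mean(a, w), running_mean(b, w))[0, 1])

raw_r, smooth_r = np.array(raw_r), np.array(smooth_r)
# True correlation is zero in both cases, but smoothing fattens the sampling
# distribution of r, so large spurious "correlations" turn up by chance.
print(f"raw:      |r| > 0.3 in {np.mean(np.abs(raw_r) > 0.3):.1%} of trials")
print(f"smoothed: |r| > 0.3 in {np.mean(np.abs(smooth_r) > 0.3):.1%} of trials")
```

Smoothing reduces the effective number of independent points from n to roughly n divided by the window length, which is exactly why the smoothed series look “too certain” to any downstream analysis.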

Perhaps we can get the Bayesians & the Classicals to take a break from their tiresomely chronic feud for long enough to realize what they have in common in the climate context: patently untenable assumptions. The attack should be on ALL camps that base statistical inference on untenable assumptions. I remind all statisticians who may be reading here that data are meaningless without adequate context; what we have in the climate context is SEVERE sampling design misconceptions. Investigators have jumped straight to patently untenable assumptions without ever having explored the data properly – i.e. with a sound handle on the spatiotemporal framework outlined by EOP (Earth Orientation Parameters), which points CLEARLY to differential spatiotemporal aliasing.

I consider statisticians much like I do politicians. I don’t trust any of them on their own. I can see benefits of Bayesian methods, but they need to be verified against other methods. I need to read up on EOP, but with what little I do know, I am in agreement.

Methods are a dime a dozen and any of them (including Classical & Bayesian) can be applied sensibly, but there’s no method that compensates for basing inference (hypothesis tests & confidence intervals) on untenable assumptions. Regards.

A burden I once bore for several years was indoctrination of students. My specialty was introducing the statistical inference paradigm to newcomers. Stat inference is founded on assumptions, some of which are contextually indefensible.

Perhaps not clear to all, but the evidence is beyond all shadow of a doubt that rafts of standard mainstream climate inference assumptions are untenable – (same thing happens in other disciplines). We’ve reached harmoniously efficient disagreement. Cheers!

If a study was properly designed and conducted, and fails to reject H0, it is MORE informative than one that does reject it in most cases (especially rejections at the pathetic 95% “Climate Science Gold Standard interpreted as ‘highly likely’ “). Yet its odds of getting published are vanishingly small.

I guess it comes down to “positive results only” being a form of cherry-picking.
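The “positive results only” filter can be simulated directly. In this sketch (the effect size, noise level, and sample size are arbitrary assumptions for illustration), many studies estimate a small true effect, but only those reaching one-sided p < 0.05 get “published”; the published literature then systematically overstates the effect:

```python
import numpy as np

rng = np.random.default_rng(3)
true_effect, sigma, n, studies = 0.1, 1.0, 50, 20_000

# Each "study" reports the mean of n noisy observations and a one-sided z-test.
se = sigma / np.sqrt(n)
means = rng.normal(true_effect, se, studies)
published = means[means / se > 1.645]             # keep only p < 0.05 results

print(f"true effect:           {true_effect}")
print(f"mean published effect: {published.mean():.3f}")
print(f"fraction published:    {published.size / studies:.1%}")
```

Every published study is individually “significant”, yet the published average is roughly triple the truth: cherry-picking by p-value, with no individual author doing anything visibly wrong.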

At least you see some concept of how science in its current form works.
The planet is at least 100 times more complex than what our forefathers (who generated the generalized theories) thought it was.
They missed simply measuring the differences in the circumference sizes against the different speeds of rotation in the 24-hour period.
This means any generalized mathematical formula is highly incorrect.

I think what he is saying is that we should look to the motivations. For example, dead and dying Old Europe tried through the UN to use the global warming hoax to take down America. Now, the secular, socialist agenda to consolidate power over the productive in America includes keeping Europe afloat even while Leftist ideology is dragging Western civilization down like a stone. Capiche?

The “reply” function seems to be placing posts randomly. Here’s another try:

I see the EU “leadership” and other leftist AGW-pusher-pols as bedazzled by the prospect of nearly unlimited power over the economics and demographics of the planet — in their hands, and very soon! To that end they are prepared to sacrifice any quantity of others and their rights and well-being.

The extent of this individual’s fabrication or deliberate and significant abuse of research methods is not yet known. But since his public career behaviour has been to publish compulsively and to groom himself to be something of a media celebrity, the personality involved seems at least as meaningful to understanding this particular story as the research climate (which, quite frankly, has not by this situation been shown to lead the majority of research scientists in social psychology or in climate science to fabricate their research data).

The bad work was almost certainly evident over time, so why did people follow him? Why, indeed. And yet we see it all the time, mostly outside of climate science. He was confident, charming, and able to get people to believe him and support him even though at least some of his research conclusions defied or were not supported by the evidence, and also apparently defied logic and common sense. If reports within the social science community are accurate, he ‘confessed’ immediately and almost playfully, suggesting that he was not ethically troubled. He is cooperating fully, yet there have been no public statements of remorse. It is hard not to notice that a real comparison can be made to the personality and behaviour of e.g. Ian Plimer. Too bad some people have so much trouble understanding that an ad hominem argument (i.e., an argument that rejects an individual’s opinion based on the person’s demonstrated inconsistency, insincerity, incompetence or self-serving behaviour) is not always fallacious. :-(

Of course, these kinds of cases aside, there are manifold issues at the institutional level in Academe and especially science research, that are top of the list of skeptical concerns. However, I’m not sure this particular case is instructive since something is so clearly wrong with this guy’s fitness.

At the institutional level, it may be worth considering whether it is easier to ‘fabricate’ data in social psychology than in climate science. The two disciplines have mechanisms to support integrity and good work and to reflect standards, but they are methodologically different and the ethical role of the researcher while equally important is challenged in some uniquely different ways.

Most scientists want to find their errors themselves rather than being caught in an attempt to publish an erroneous paper.

This may be true in many cases, but there may be some who think there is a third alternative: publish their errors (without publishing the source code, etc.), get some buddies to do the peer review, and hope no one will challenge the results and conclusions.

These are the ones that need to be watched.

And it would be totally naive to think they are not out there in climate science today (especially after Climategate).

Whether the climate science is plagued more by these problems than other fields of science which study issues with similar complexity and uncertainties is not at all clear. What is clear, is that the problems in climate science have been publicized much more than in most other fields up to the point where false accusations easily outnumber real problems.

I do, however, see very large problems in WG2 type research on the impacts of climate change. The motivation, funding and publishability of research on impacts are all seriously biased. Much of that research is not good science at all, and survives only because of these biases. There is of course also good research on impacts, but in that field the problems are severe.

You may be correct in saying that climate science has no greater percentage of “data torturers” out there than other sciences.

I believe the problem comes primarily when there are extremely large sums of money at play and a particular branch of science becomes politicized, as is the case for climate science today.

This opens the door for a corrupted process, which demands scientific results to support a political agenda, as appears to have happened with IPCC, whose sole raison d’être is to determine human-induced climate changes, their impacts on society and suggested mitigation or adaptation strategies.

IOW if there are no potentially serious human-induced climate changes there is no need for IPCC to continue to exist.

This existential problem has contributed to creating the corrupted IPCC process we witness today, to which our host here has alluded in earlier threads.

So, yes, other fields of science may indeed have a similar problem (particularly if large sums of money are involved), but that should not detract us from our concentration on “data torturing” in climate science.

It’s totally clear that IPCC exists only because many people had concluded that climate change may have severe consequences, but IPCC is a special kind of forum of collaboration, not a major organization by itself. Its budget is tiny.

You mention “extremely large sums of money”, but I see that to be true only in some areas, none of which is directly linked to climate science itself. Certainly there’s significantly additional funding for the climate science, but not at a level even approaching “extremely large sums of money”.

The extent of German funding of solar energy might reach that level (more than 50 billion euros committed so far), but mostly the extremely large sums of money are related to the potential costs of mitigation measures that would mean economic losses to everybody, or to the losses that many businesses fear to face.

The motivation that may have led to exaggeration of the risks is not economic, but either sincere belief in the views or a combination of that and prestige, which many people value even more than money.

Yes. IPCC’s only reason to exist is the premise that human-induced climate changes could represent a potentially serious threat to humanity. If it cannot demonstrate this, it has no further reason to exist.

Yes. There are extremely large sums of money involved. Climate science/research today involves a few billion dollars annually, but this is just peanuts compared to the trillions of dollars that would be involved in global carbon taxes or cap and trade schemes, not to mention the potential subsidies or grants of taxpayer funds to support “green” industries.

As you mention, these large sums of money are more related to “mitigation” schemes than to climate research, itself (I agree).

No. I do not agree with your statement:

The motivation that may have led to exaggeration of the risks is not economic, but either sincere belief in the views or a combination of that and prestige, which many people value even more than money.

On the part of individual scientists, who have “tortured the data” to get the “desired” result, I’d say you might be correct in saying economic interests were not the prime motivation (rather “prestige” and “belief”), but IMO anytime extremely large sums of money are involved, there will be those who are motivated by the prospect of getting a piece of the action.

I believe our host here has referred to the basic problem of “data torturing” as it relates to climate science more eloquently than I just did:

I have argued that the IPCC and the culture of funding, journal publication, and recognition by professional societies have not always acted in the best interests of scientific progress in climate field.

The problem occurs when the data is “tortured” to produce a desired “confession.” Seemingly objective manipulations of the data can inadvertently produce “confessions” beyond what is objectively obtained in the original data set.

It is because of complexity of the climate system and the inherent inadequacy of any measurement system that complex data manipulation methods are used. It is essential that we better understand the limitations of the methods and how to assess the uncertainty that they introduce into the analysis.

It is impossible to read “The Hockey Stick Illusion” and not think about torturing the data until it confesses!

I wonder if research programs in areas like climate change need to go through a public “design phase”, where for example an experimenter would be asked (and expected to answer!) how he/she proposes to detect CO2 warming when the global temperature varies for other reasons.

Perhaps this would have elicited the response that only tropospheric excess warming was diagnostic of CO2, and scientists would never have started watching global temperature data in the same frenzied way that people watch the stock markets!

The “design phase” could also cover such issues as what if any corrections should be applied to raw data, and how this should be documented, and also just how accurate the raw data would be.

Such questions would avoid the descent into confused thinking that seems to characterise climate science.

I feel there are also lessons to be learned from parapsychology research!

Someone reviewing a parapsychology paper would be free to think the unthinkable:

1) Maybe the experimenter was guilty of wishful thinking.

2) Maybe he actually cheated.

3) Maybe he discarded a lot of ‘failed’ experiments, only keeping those that succeeded by chance.

4) An experiment would be considered more convincing if it is done ‘blind’ – so that the experimenter can’t influence the outcome, because he never knows whether he is handling the experimental or control data until the coding of the data is revealed. This approach is also often used in medical trials.

Well if you leave climate change aside, I’d support Obama in preference to Bush any day – he just seems a much safer pair of hands! Does that mean I want to trash America – of course not – I like the place, and will be over in your country shortly!

The Republicans seem a lot more savvy about climate change, but not much else!

It isn’t unpatriotic to vote for another party in an election! If you really believe that, you need to live in a one-party state!

“Old Europe” (in particular, that majority, which now constitutes the EU) is burdened with “muddled thinking” at present. The “motor” (Germany) is being criticized by the EU leadership for being too small-minded in its thinking by being skeptical of the long-term solvency of Greece.

It doesn’t help that Timothy Geithner comes over here to lecture Europe on how to run things, while his own nation has just lost its top credit rating (after he promised this would never happen) and is going broke while spending like a drunken sailor.

At the same time China is slowly moving in to “help” Europe with its solvency problems. Will the USA be next?

But Europe is not about to destroy the USA.

The climate zealots that used to run the UK might have had such aspirations, but they are gone at the top and will gradually be replaced down the line with more reasonable politicians. IMO it is only a matter of months until the rest of Europe shelves the unilateral carbon reduction plans, as they realize that these will not change our global climate one iota.

Smoothing
A recorded observed data series is exactly what it is, i.e. “what you see is what you get”.

A 10-year running smoothed data series is by definition a manipulated data series. The “smoothing” does not remove “error”, but is designed to remove “noise”. But who can decide what is “noise” and what is “signal”?

Cherry-picking
Removing “outliers” from the data series (because they do not fit the hypothesis?) or carefully selecting data series, which help to prove the desired hypothesis, are examples of “cherry-picking”, a practice that is not unknown in climate science.

A 10-year running smoothed data series is by definition a manipulated data series. The “smoothing” does not remove “error”, but is designed to remove “noise”. But who can decide what is “noise” and what is “signal”?

Smoothing can also remove something that you know exists and want to get rid of. For example you can smooth the alternating current component to get a stable source of direct current. That is the gist of the Mauna Loa CO2 data in that one wants to get rid of the yearly cyclical effect. Otherwise a cross-correlation of d[CO2] with Temperature looks like this:
You might as well get rid of that yearly component beforehand and then you get something like this:
The second one is a bit easier to reason about, while the first is “corrupted” by what is commonly referred to as a nuisance term. We have to ignore this because Mauna Loa partly measures global conditions but the nuisance term is due to seasonal latitude locality, while the temperature is strictly a global average.
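The Mauna Loa point can be sketched with synthetic numbers (the trend, cycle amplitude, and noise level below are illustrative, not the actual record). A 12-month running mean has an exact spectral null at the annual frequency, so it removes the seasonal nuisance term while leaving the growth trend intact:

```python
import numpy as np

rng = np.random.default_rng(4)
months = np.arange(600)                           # 50 years of monthly data
# synthetic Mauna-Loa-like series: growth trend + annual cycle + noise
co2 = (315 + 0.125 * months                       # ~1.5 ppm/yr growth
       + 3 * np.sin(2 * np.pi * months / 12)      # seasonal cycle, 3 ppm
       + rng.normal(0, 0.2, months.size))

# A 12-point boxcar has an exact frequency-response null at a 12-month period,
# so the running mean removes the seasonal term but keeps the trend.
deseason = np.convolve(co2, np.ones(12) / 12, mode="valid")

def annual_amplitude(x):
    """Amplitude of the 12-month harmonic, after removing a linear trend."""
    t = np.arange(x.size)
    x = x - np.polyval(np.polyfit(t, x, 1), t)
    c, s = np.cos(2 * np.pi * t / 12), np.sin(2 * np.pi * t / 12)
    return 2 * np.hypot((x * c).mean(), (x * s).mean())

print(f"annual amplitude, raw:      {annual_amplitude(co2):.2f} ppm")
print(f"annual amplitude, smoothed: {annual_amplitude(deseason):.2f} ppm")
```

This is smoothing used deliberately, to remove a component you know exists and want gone, rather than smoothing as a reflex.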

Cherry-picking
Removing “outliers” from the data series (because they do not fit the hypothesis?) or carefully selecting data series, which help to prove the desired hypothesis, are examples of “cherry-picking”, a practice that is not unknown in climate science.

When people start accepting fat-tail statistics for modeling behaviors, they will stop throwing out outliers. I read an interesting paper that said most outliers are thrown out because they don’t fit a Normal distribution, based on people’s preconceived notions that everything has to follow a normal.

This is obvious in the case of climate change skeptics who can’t understand that the adjustment/residence time of CO2 can have a significant fat-tail. It’s not exactly cherry-picking, but they can’t see that fat tails are possible, and so assume those all have to be outliers that should be thrown out.

I have written extensively on fat-tails so I look at outliers as the most significant piece of the equation. For example, in oil discoveries, only a few supergiant reservoirs exist as outliers, and the rest follow a fat-tail distribution without a mean value. This characteristic comes up over and over in natural sciences.

More specifically, it invariably comes up in systems with disorder and where the dimension of the measurement is inverted. For example, in diffusional systems, where the velocity is highly variable, the time characteristic will always show a fat-tail. The easy way to think about it is that time is the denominator in velocity, and when a PDF in velocity gets reciprocated, a fat-tail power law comes out. That is why you see a few supergiants; the velocity at which supergiants grow is disordered in different regions, and the point at which we tap the supergiants is dependent on time. Then, bingo, a fat-tail comes out of the distribution and the occasional supergiants start to make sense.
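The reciprocation argument can be checked numerically. In this sketch, velocities are drawn from a thin-tailed (uniform) distribution; the transit times t = L/v nevertheless come out with a power-law survival tail, precisely because time sits in the denominator:

```python
import numpy as np

rng = np.random.default_rng(5)
L = 1.0                                           # fixed distance to traverse
# thin-tailed "disordered" speeds; clip away from zero to avoid division issues
v = np.clip(rng.uniform(0.0, 1.0, 200_000), 1e-12, None)
t = L / v                                         # transit times

# Empirical survival P(T > tau): for uniform v this is P(v < 1/tau) = 1/tau,
# a power law, even though the velocity distribution has no tail at all.
taus = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
surv = np.array([(t > tau).mean() for tau in taus])
slope = np.polyfit(np.log(taus), np.log(surv), 1)[0]
print(f"survival: {surv}")
print(f"log-log slope: {slope:.2f}   (theory: -1, i.e. pdf tail ~ t^-2)")
```

The resulting time distribution has no finite mean, which is the same qualitative behavior described for the supergiant reservoirs.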

All you have to do is look at the Stefan-Boltzmann Law to see this. The wavenumber distribution is damped exponential, while the wavelength distribution is reciprocal power law.
This is not a hypothetical premise, as it describes the way things exist mathematically in nature. It’s really not my fault that no one can put two and two together and just explain it this succinctly.
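The reciprocation step can be made explicit with a one-line change of variables (a generic sketch; the exponential is chosen purely as an example of a thin-tailed density):

```latex
f_T(t) \;=\; f_V\!\left(\tfrac{1}{t}\right)\frac{1}{t^{2}},
\qquad\text{e.g.}\quad
f_V(v)=\lambda e^{-\lambda v}
\;\Rightarrow\;
f_T(t)=\frac{\lambda}{t^{2}}\,e^{-\lambda/t}\;\sim\;\lambda\,t^{-2}
\quad (t\to\infty).
```

So even a thin-tailed velocity density acquires a power-law (fat) tail once it is reciprocated into the time domain — the same contrast as between the wavenumber and wavelength forms of the blackbody distribution.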

In no way do I want to belittle your work on “fat tails” as regards oil reservoirs. This is very likely a significant part of the calculation on the long-term economic viability of a reservoir.

I was referring to the discussions on the CO2 thread here about the “fat tail” in CO2 residence time in our climate system.

Admittedly, all of the estimates of the long-term CO2 residence time in our climate system are based on hypothetical deliberations, rather than empirical data based on physical observations, so all conclusions should be taken with a large grain of salt.

IPCC doesn’t help us much here, with its estimate of a long-term CO2 residence time of “5 to 200 years”.

But the generally accepted suggestion is that, even if we stop all human CO2 emissions at current levels, we will see atmospheric CO2 concentrations (and hence temperatures) remain essentially constant for decades before starting to decline (or temperatures even rising initially, if one accepts the “hidden in the pipeline” postulation of Hansen et al., as IPCC apparently does). And it would take centuries to get back to the “pre-industrial” CO2 level, as a result of a long “tail”.

For me the more practical consideration concerns the instantaneous rate of decay of CO2 in the climate system. If one accepts the “half life” estimate of 120 years (as has been suggested by one study), one would arrive at a decay rate of around 0.578% of the concentration per year, or around 2 ppmv per year.

In effect, this would mean that if the net input of CO2 to the climate system (from wherever it comes) were reduced to 2 ppmv per year, the concentration would stop rising. If it were reduced to less than 2 ppmv, CO2 concentration would begin to diminish.
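The arithmetic here is easy to check (a minimal sketch; the 120-year half-life and a ~390 ppmv concentration are taken as given assumptions):

```python
import math

half_life = 120.0                      # assumed half-life of excess CO2, years
decay_rate = math.log(2) / half_life   # instantaneous decay rate, per year

c_atm = 390.0                          # rough present-day CO2 level, ppmv
removal = decay_rate * c_atm           # ppmv removed per year at that level

print(f"decay rate: {decay_rate:.3%} per year")   # ~0.578% per year
print(f"removal:    {removal:.2f} ppmv per year")  # ~2.25 ppmv per year
```

That is where the “around 2 ppmv per year” figure comes from.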

I realize that these are all simplifications based largely on hypothetical deliberations, but I think they are more pertinent to our climate over the next several decades than “fat tail” deliberations.

The fat tails of probability distributions and the nonexponential nature of the persistence of CO2 in the atmosphere are totally different issues. It’s better to use the term only for the former, where it has long been in use.

As with many other phenomena related to climate science, the basic qualitative fact is well known. There is no reasonable basis to doubt the non-exponential nature of the persistence of CO2, but the quantitative details are not known accurately at all. The dynamics of the carbon cycle of the oceans is certainly not known accurately, and all the estimates of the long-term part of the persistence are dominated by those effects, which include the transfer of carbon between the surface ocean and the deep ocean as well as the “final” removal of carbon through mineralization.

But, as I just wrote in the other thread referring to a sentence in Padilla et al, does this really matter? Is there really any reason to worry about what’s going to happen more than 200 years from now, given that high CO2 releases cannot continue even nearly that long, and that the CO2 concentration will therefore turn to a significant decline much earlier (unless the releases are rapidly reduced to a low level that can be maintained longer)?

The fat tails of probability distributions and the nonexponential nature of the persistence of CO2 in the atmosphere are totally different issues. It’s better to use the term only for the former, where it has long been in use.

“Probability Theory is the Logic of Science”

I derive the fat-tail in the time domain from the solution to a Fokker-Planck equation with suitable boundary conditions. This is essentially a master equation describing how probability flows away from a region. It really is just a matter of putting the right conceptual abstraction on the problem domain and one can see how this falls out. Perhaps it is not the static PDF that you have in mind but it certainly classifies as a probability density function over space and time, since the rules of probability have to be obeyed.

I really do think this is the quantitative solution to what you describe as the qualitative reality. A while ago I looked at the IPCC Bern curves and fitted those to a reciprocal sqrt(time) dependence, and I think that there is little doubt that a diffusional boundary layer is the fundamental basis for their simulation results.
(look at the following curves where I overlay empirical fat-tail curves over the IPCC results for impulse response function)
As with a lot of scientific results, the modelers didn’t care to give the short-hand interpretation that I recall my physics instructors always working out. Since that is the way I was taught to solve problems, I continue to do it that way.

It’s happened before, with Burt’s psychological ‘work’ on twins: total fabrication, but because of the person no one asked questions about what were clear errors, and this ‘research’ became the orthodoxy, which led to it being cited a great deal and virtually never questioned. Until, that is, someone from outside the area, who was a statistician, got to work on it. Does this sound familiar to anyone?

Agree there is no point worrying about what will occur 200 years from now.

That is why I suggested to WHT that it might be more interesting from a practical standpoint to determine the instantaneous decay rate of CO2 in our atmosphere at today’s conditions than to deliberate what will happen more than 200 years from now.

AFAIK this rate has not been determined, although an upper limit could be today’s difference between human emissions (assuming these are the sole net input to the climate system, which has been questioned) and the measured increase in atmospheric concentration, which would calculate out to around 2 ppmv per year.
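As a rough sketch of that upper-limit arithmetic (the emission and conversion figures below are ballpark assumptions for illustration, not measured values from this thread):

```python
GTC_PER_PPMV = 2.13       # ~2.13 GtC of carbon raises atmospheric CO2 by 1 ppmv
emissions_gtc = 9.5       # assumed annual human emissions, GtC/yr (ballpark)
observed_rise = 2.0       # measured atmospheric increase, ppmv/yr (ballpark)

emissions_ppmv = emissions_gtc / GTC_PER_PPMV   # emissions expressed in ppmv/yr
net_sink = emissions_ppmv - observed_rise       # upper limit on net uptake, ppmv/yr

print(f"emissions: {emissions_ppmv:.1f} ppmv/yr, net uptake: {net_sink:.1f} ppmv/yr")
```

With these round numbers the net uptake works out in the neighborhood of 2 ppmv per year.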

As you say, there will very likely be drastic changes in human CO2 emissions over the next century regardless of any mitigation strategies that may or may not be implemented.

I don’t know exactly what you mean by “instantaneous”. In the 4-exponential parametrization of Maier-Reimer and Hasselmann the shortest component has a time constant of 1.9 years and represents 9.8% of the removal. The other three exponentials have corresponding values (17.3a / 24.9%), (73.6a / 32.1%) and (362.9a / 20.1%), leaving 13.1% to processes so slow that the parametrization treats them as permanent.

These values mean that the decay rate changes rapidly during the first years and remains to an essential degree non-exponential at all times. Using “instantaneous” with such a temporal behavior is not really reasonable.

While I consider 200 years to be clearly too long to have much weight in decision making, and 100 years to be also on the long side, I do think that the rate of decay must be considered at least 50-100 years into the future. That cannot be described by one rate of decay.

I haven’t checked from original sources, but I think that volcanic events tell about the persistence over the first 10 years or so giving evidence for the applicability of the first two terms of the parametrization. I think that there’s reasonably strong evidence on the validity of the formula also for somewhat longer periods, but the behavior over periods of 100 years or more must be based almost solely on models of ocean circulation and other related phenomena.

The parametrization is not supposed to represent four separate processes, but it’s only a practical tool for calculating the persistence of CO2 released to the atmosphere.
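The four-exponential parametrization quoted above is simple to write down (coefficients as given in the comment upthread; the 13.1% “permanent” fraction is a convention of the fit, not a separate physical process):

```python
import math

# Maier-Reimer & Hasselmann impulse-response fit: fraction of a CO2 pulse
# remaining airborne t years after release, as (amplitude, time constant) pairs.
TERMS = [(0.098, 1.9), (0.249, 17.3), (0.321, 73.6), (0.201, 362.9)]
PERMANENT = 0.131  # fraction the parametrization treats as never removed

def airborne_fraction(t):
    return PERMANENT + sum(a * math.exp(-t / tau) for a, tau in TERMS)

# Decays quickly at first, then ever more slowly toward the 0.131 floor.
for t in (0, 10, 100, 1000):
    print(t, round(airborne_fraction(t), 3))
```

Evaluating it at a few times shows directly why no single decay rate describes the behavior: the effective rate at year 10 is much larger than the effective rate at year 100.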

I haven’t checked from original sources, but I think that volcanic events tell about the persistence over the first 10 years or so giving evidence for the applicability of the first two terms of the parametrization.

Warning, Pekka: I got ridiculed by the resident hydro man for offering up this suggestion, i.e. volcanic events.

I still think the time behavior is a diffusion-related phenomenon of transport into deep stores. This comes up repeatedly in the natural sciences when there is not a strong force component, such as in the dispersion of contaminants, anomalous transport in amorphous semiconductors, etc. The force component is not there because the carbon cycle keeps recycling the material, and we are really looking at the diffusion into the deep stores. With diffusion the initial transient looks like an exponential drop, but then it slows down because of the length of time it takes a random walk to get anywhere.
I like this animated GIF from the Fokker-Planck wikipedia page:
You can see the drift component in this one; imagine that not being there.
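The slow-down can be illustrated with the standard survival function for one-dimensional diffusion toward an absorbing layer (a toy sketch in arbitrary units; d and D are illustrative parameters, not fitted values):

```python
import math

def surviving_fraction(t, d=1.0, D=1.0):
    # Fraction of diffusing particles, started a distance d from an
    # absorbing boundary, that have not yet been captured by time t:
    #   S(t) = erf(d / sqrt(4 D t)).
    # For large t this behaves like d / sqrt(pi * D * t): a 1/sqrt(t) tail.
    return math.erf(d / math.sqrt(4.0 * D * t))

# Quadrupling the elapsed time only halves the surviving fraction:
print(surviving_fraction(400.0) / surviving_fraction(100.0))  # ~0.5
```

An exponential decay would shrink by the same *factor* per fixed time interval; here each halving takes four times as long as the last, which is the random-walk sluggishness described above.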

I don’t know exactly, what you mean by “instantaneous”. In the 4-exponential parametrization of Maier-Reimer and Hasselmann the shortest component has the time constant of 1.9 years and represents 9.8% of the removal.

Interesting that they use that approximation for diffusion. Like I said elsewhere, they might be just empirically fitting a diffusional drop off.
I found the Maier-Reimer and Hasselmann parameters and compared it to a modified 1/sqrt(time) drop-off and this is what it looks like:

Note that they stuck in an unphysical constant to get the tail really fat at long times.
I am really convinced that they simply tried to heuristically model a diffusional drop-off with 4 exponentials and then tried to explain it with different rates of absorption. I am sure this came out of a simulation and they didn’t realize how simply they could have used a 1/sqrt(t) profile.

The basic idea of the multicompartment model is sound. The atmosphere reaches a balance with the most strongly mixed layer of the ocean on a time scale of something like 2 years, and this is behind the shortest time constant of Maier-Reimer and Hasselmann. The deeper ocean gets its carbon from this first “compartment”, and land areas form their own compartments, both directly and indirectly coupled to the atmosphere. For small and modest increases that system can be modeled by a set of coupled linear differential equations, which have as variables the amounts of CO2 in each compartment.
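A minimal two-compartment version of such a set of coupled linear equations might look like this (the rate constants are illustrative, not calibrated carbon-cycle values):

```python
# Two-box sketch: atmosphere <-> well-mixed surface ocean, linear exchange.
# k_ao and k_oa are illustrative rate constants (1/yr), not calibrated values.
k_ao, k_oa = 0.5, 0.25
atm, ocean = 1.0, 0.0      # unit pulse of excess CO2 placed in the atmosphere
dt = 0.01

for _ in range(int(50 / dt)):          # integrate 50 years with Euler steps
    flux = k_ao * atm - k_oa * ocean   # net flow from atmosphere to ocean
    atm -= flux * dt
    ocean += flux * dt

# The pulse relaxes toward the equilibrium airborne share k_oa/(k_ao+k_oa) = 1/3.
print(round(atm, 3))  # prints 0.333
```

With only two boxes the decay is a single exponential; it is the chain of many coupled compartments, discussed below in the thread, that produces the non-exponential tail.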

The above-mentioned relationship between that one term and one real mechanism is not exact, but they are pretty certain to be strongly related. The biosphere contributes to that as well, and there are no strict separations between layers of the ocean. Thus there is no exact correspondence, but rather one dominating effect modified by others.

The longer time constants are really only fits to a non-exponential curve without any similarly clear dominant mechanisms. For the time scale of tens of years turbulent mixing in a thicker layer of the oceans is important, but so are biological processes both on land and in the oceans. For even longer time scales of centuries up to 1000 years or so the mixing with deep ocean both through thermohaline circulation and through sinking and dissolution of marine biota must be the most important factor. On the scale of several thousands of years sedimentation to ocean floor starts to dominate.

Your approach makes sense in that it emphasizes the non-separability of times scales. We do not have well defined separate compartments, but the compartment model of a few compartments is only a discrete approximation of the reality with a continuous spectrum of time scales. Even so the question is more about the complexity of the Earth system than about probability distributions. The long tail is due to the fact that the ocean cannot absorb all the CO2 even in full equilibrium together with the long time scales of first reaching all parts of ocean and then of the sedimentation. The effect is most definitely present in the best deterministic description of the processes. This is the reason for my dislike of the use of the same word “fat tail” as with probability distributions.

Yes, agree entirely and that is the reason that a series of multi-compartment models will reduce exactly to a Fokker-Planck diffusion solution. I have shown this before, but this is a calculation of 50 of these layers concatenated:

This is worked out for CO2 diffusion into deeper sequestering sites but it is essentially the same argument for heat diffusion, as the heat equation is just another form of Fokker-Planck. I chose 50 slabs because you can see how the continuum can play out.
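A concatenation of identical slabs can be sketched as a nearest-neighbour exchange chain; the occupancy of the first slab then falls off like 1/sqrt(t), the diffusional signature (illustrative units and rates, not a calibrated model):

```python
# Chain of 50 identical slabs with nearest-neighbour exchange (rate k = 1).
# A pulse starts in slab 0; its remaining occupancy decays ~ 1/sqrt(t),
# the discrete analogue of a diffusional (Fokker-Planck) impulse response.
N, k, dt = 50, 1.0, 0.05
p = [0.0] * N
p[0] = 1.0

def step(p):
    q = p[:]
    for i in range(N - 1):
        f = k * (p[i] - p[i + 1]) * dt   # net exchange between slabs i and i+1
        q[i] -= f
        q[i + 1] += f
    return q

samples = {}
for n in range(1, 8001):          # 8000 Euler steps of 0.05 = 400 time units
    p = step(p)
    if n in (2000, 8000):         # snapshots at t = 100 and t = 400
        samples[n] = p[0]

# Quadrupling the time roughly halves the first-slab occupancy:
print(samples[2000] / samples[8000])  # ~2, the 1/sqrt(t) signature
```

As the number of slabs grows this chain converges to the continuum diffusion equation, which is the sense in which the multi-compartment picture and the Fokker-Planck picture coincide.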

The long tail is due to the fact that the ocean cannot absorb all the CO2 even in full equilibrium together with the long time scales of first reaching all parts of ocean and then of the sedimentation.

And I just consider this as describing the random-walk conundrum as a random walker can get deeper and deeper but it will take longer and longer to reach that point according to a long-tail (or fat-tail) formulation.

Describing it as long-tail vs fat-tail is a matter of taste. Perhaps I can see your well-reasoned point if we consider that the integral of the 1/sqrt(t) over all time will not integrate to 1, yet the spatio-temporal density function has to integrate to 1. Therefore, a long-tail does describe the temporal behavior better.
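The normalization point can be made explicit:

```latex
\int_{1}^{T} t^{-1/2}\,dt \;=\; 2\left(\sqrt{T}-1\right)\;\longrightarrow\;\infty
\quad\text{as } T\to\infty ,
```

so a pure 1/sqrt(t) profile cannot serve as a normalized density over time alone; it can only describe part of a density that is normalized over space and time together, which is why “long-tail” suits the temporal behavior better.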

I see the EU “leadership” and other leftist AGW-pusher-pols as bedazzled by the prospect of nearly unlimited power over the economics and demographics of the planet — in their hands, and very soon! To that end they are prepared to sacrifice any quantity of others and their rights and well-being.