We already know how that distribution is going to turn out: the average of 3.5 is not even a possible outcome, and every throw gives one of the 6 faces and nothing else can happen.

On the other hand, models fitted beforehand to resemble some statistic, with a model range of 11.5 to 16.5, will of course almost always fit within the SD, so what is the point of using the SD when we already know the answer? Likewise, the SEM test is almost always going to fail, which is just what this test tells us.

I don't see why those arguing against the SEM want to use the SD to give us an answer we already know, or why they use the SEM only to show that it is too stringent, just as the SD is too lax. The SEM test is simply proving that, as the SD test would. But how many papers prove the SEM? Whatever, not important.
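As a toy illustration of this asymmetry (all numbers invented, nothing from the actual papers): the SD test stays equally easy to pass no matter how many models you add, while the SEM test, with its sqrt(N) shrinkage, becomes nearly impossible for any observation to pass.

```python
import math

# Invented numbers: a model ensemble whose trends have spread sd = 1.0,
# and an observation sitting 0.5 away from the ensemble mean.
sd = 1.0          # assumed standard deviation of model trends
obs_offset = 0.5  # assumed |observation - model mean|

results = {}
for n_models in (5, 20, 100):
    sem = sd / math.sqrt(n_models)  # standard error of the ensemble mean
    results[n_models] = (obs_offset <= 2 * sd, obs_offset <= 2 * sem)
    print(f"N={n_models:3d}: within 2*SD: {results[n_models][0]}, "
          f"within 2*SEM: {results[n_models][1]}")
```

With these numbers the observation passes the 2-SD test at every ensemble size, but fails the 2-SEM test as soon as N reaches 20.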

So in that respect, they are similar to a die; they really tell us nothing that is not, as Geoff put it, a bleeding obvious conclusion.

Unless you are talking about some new Douglass et al. paper, that is not what they did. They used standard errors, not standard deviations, and they did not ratio the temperature trends at the various heights relative to the surface. The first of these is an error in what they did, while the second is not an error…but it points to a way they might have been able to get a stronger result correctly using the standard deviation rather than incorrectly using the standard error.

The surface temperature trends in the Douglass et al. paper for the model results and the observed data are essentially the same, so the trends at height in the troposphere would be the same whether expressed as a difference or as a ratio.

In fact, the Douglass et al. calculation using the SE is simply another version of what Santer et al. (2005) did for the temperature trend ratios when they regressed the T2LT and TFU trends against surface temperature trends. One need only use the points from the observational data to calculate whether these values fall outside the confidence limits for the regression line from the model outputs. (See my Post #134 above and graphs C and D at the bottom of the post.)
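As a sketch of that calculation (all numbers invented for illustration; these are not points from Santer et al. or Douglass et al.): fit a least-squares line to the model points, then ask whether the observed point lies outside rough 2-sigma limits around the line.

```python
import math

# Hypothetical model points: x = surface trend, y = tropospheric trend
models = [(0.10, 0.14), (0.12, 0.17), (0.15, 0.21), (0.18, 0.26), (0.20, 0.29)]
obs = (0.14, 0.10)  # hypothetical observed (surface, troposphere) trends

# Ordinary least-squares fit through the model points
n = len(models)
mx = sum(x for x, _ in models) / n
my = sum(y for _, y in models) / n
slope = (sum((x - mx) * (y - my) for x, y in models)
         / sum((x - mx) ** 2 for x, _ in models))
intercept = my - slope * mx

# Residual standard deviation of the fit (n - 2 degrees of freedom)
resid_sd = math.sqrt(sum((y - (intercept + slope * x)) ** 2
                         for x, y in models) / (n - 2))

pred = intercept + slope * obs[0]
outside = abs(obs[1] - pred) > 2 * resid_sd
print(f"predicted {pred:.3f}, observed {obs[1]:.3f}, outside 2-sigma: {outside}")
```

A proper prediction interval would widen the 2-sigma band slightly away from the mean of x, but the idea is the same: the observation is tested against the scatter of the model regression, not against the standard error of the ensemble mean.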

I find it curious that, on the one hand, Santer et al. in their 2005 paper use a bar graph (a format that makes proper visual comparison difficult) and range limits to suggest that the observed and model results can overlap within the wide range limits of the models, and then, on the other hand, turn around and use a weighted average of the troposphere model results, regressing it on a line to show how well the models agree.

Joel, perhaps you can explain how to reconcile the uppermost graphs A and B in Post #134 with the bottom graphs there. The ratios of temperature trends and standard deviations of temperatures appear to vary greatly among individual model results over the troposphere heights shown, yet when regressed, as in the bottom graphs of Post #134 using the T2LT and TFU weighted averages, they seem to agree very well.

The high-frequency argument that Santer et al. make is merely an observation that the ratios of temperature standard deviations, surface to troposphere, are similar between model results and observed data; in reality it says little about decadal trends or contradictions between the two.

That there exists a contradiction between the model results and the observed data, acknowledged by all sides of this issue, is rather obvious — no matter how they choose to measure it. As one who tends to be skeptical in these matters, I await climate scientists coming to better grips with understanding this discrepancy. I think the observed data have a better chance of being examined and analyzed by all parties concerned than the climate models do.

By: Nylo
Mon, 16 Jun 2008 13:11:25 +0000
https://climateaudit.org/2008/06/07/march-2008-radiosonde-data/#comment-150693

The average of the models makes sense only when you assume that one of the models is correct, and that the chances of being "the chosen one" are similar for all of them. Picking the average of the models, or the average of the models' predictions, is then like trying to guess by probability which of the models is true. Not very scientific. But that is, really, what the IPCC is doing: "Someone must have got it right, we don't know who, so let's make a safe bet that the true one will more or less agree with most of our models". Corporatism?

Funniest of all is that the authors of the models which are farthest from the average of the models seem to defend the average more than their own "creature". This says something about how being part of the consensus is more important for scientific survival than defending your own work. Not good days for science. Climate science, anyway. How can any scientist rely more on the average prediction than on their own prediction? Are they defending each other as a group against any possible incompetence? "Let's make it sound like all of us are right".

By: Geoff Sherrington
Mon, 16 Jun 2008 06:27:15 +0000
https://climateaudit.org/2008/06/07/march-2008-radiosonde-data/#comment-150692

But to sum up the large variability of model projections, using the unscientific, first-approximation 'eyeball' method, a skeptic accustomed to viewing numerical data would have to say of the various models that

(a) at best, one model can be correct; and

(b) at worst, all are wrong.

To risk a storm, do we really need a few hundred learned posts to reach this bleeding obvious conclusion?

What is in contention here is the ratio of temperature trends in the tropics, from the surface to various heights in the troposphere, and not the absolute outputs of climate models or observations. Why would internal variability come into play when comparing ratios?

Well, you may be right that internal variability would tend to largely cancel out if you look at ratios…In fact, this may be why, as Santer et al. discovered, the ratios are the better thing to use. However, that is not in fact what Douglass et al. used. They used the absolute temperature trend at each level…That is what their plots show and that is the data that they took the standard deviation and then the standard error of.

I think, in effect, this is what Douglass et al. have done in their recent paper.

Unless you are talking about some new Douglass et al. paper, that is not what they did. They used standard errors, not standard deviations, and they did not ratio the temperature trends at the various heights relative to the surface. The first of these is an error in what they did, while the second is not an error…but it points to a way they might have been able to get a stronger result correctly using the standard deviation rather than incorrectly using the standard error.

The die analogy sucks, because there are only 6 discrete outcomes, one per throw, in that range. On average, each number comes up 1/6th of the time.

I don't see why that makes the analogy wrong. Of course, analogies are not exactly the same as the actual case you are making the analogy to…that is why they are analogies. But this is not a material difference. The advantage of using an analogy to this discrete case is that it makes clear how silly it would be to predict that any particular die throw would likely give 3.5 to, say, within +-0.1 (what this actual +- value is depends on how many throws you use to compute the standard error).

However, you could make the argument work, for example, for the case of a random number generator that generates a value uniformly between 0 and 1. In that case, if you take, say, 1000 values then you will be able to compute the average and standard error as being something like 0.5 +- 0.01. However, you know that any particular value that you get from this generator will in fact very likely lie outside of this range (in fact, it will do so 98% of the time).
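That claim is easy to check numerically; a minimal Python sketch of the uniform-generator case described above:

```python
import math
import random

# 1000 draws from Uniform(0, 1): the sample mean is pinned down to about
# 0.5 +/- 0.01 (the standard error), yet the vast majority of individual
# draws fall well outside that narrow band.
random.seed(42)  # fixed seed so the numbers are reproducible
n = 1000
xs = [random.random() for _ in range(n)]

mean = sum(xs) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
se = sd / math.sqrt(n)  # standard error of the mean, roughly 0.009

outside = sum(1 for x in xs if abs(x - mean) > se) / n
print(f"mean = {mean:.3f}, SE = {se:.4f}, "
      f"fraction of draws outside mean +/- SE = {outside:.2f}")
```

The fraction outside comes out near 0.98, matching the 98% figure: the standard error describes the precision of the mean, not the spread of individual values.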

What I find difficult to interpret is what a ratio of standard deviations means with regard to higher-frequency trend correlations or agreement between model output and observed results, since in theory one could have the same variances for both groups at the surface and in the troposphere while the trends had a high-frequency negative correlation. As a layperson in this area, I do not think the authors have demonstrated anything telling about the higher-frequency and decadal differences between the models and observations.

Well, I suppose it is true that standard deviation alone would not in general be enough to tell you how two time series were correlated. However, in this case, if you look at plots of the surface temperature and of T_{LT}, I think you will find that they are in general quite well-correlated (at least in the plots I have seen on the global scale; I assume, but don't know for a fact, that this also holds when you look only at the tropics). This is probably why they decided it made sense just to compare the standard deviations of the detrended data, rather than to do something more complicated.

By: David Smith
Sun, 15 Jun 2008 21:49:53 +0000
https://climateaudit.org/2008/06/07/march-2008-radiosonde-data/#comment-150688

Here's an interesting reference chart, which gives the order of magnitude of the mass flux of radiatively cooling, sinking tropical air:

The red line indicates a typical descent of 20 to 30 mb/day due to radiative cooling. The magnitude of rain/cloud downdrafts is significant, too.
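For a rough sense of scale (my own back-of-envelope conversion, assuming an air parcel near the 500 mb level at about 260 K), the hydrostatic relation dz = dp / (rho * g) turns 20 to 30 mb/day into a few hundred meters of descent per day:

```python
# Convert a pressure-coordinate descent rate (mb/day) to meters/day using
# the hydrostatic relation dz = dp / (rho * g), with density from the
# ideal gas law at assumed mid-troposphere conditions (500 mb, 260 K).
R = 287.0    # J/(kg K), specific gas constant for dry air
g = 9.81     # m/s^2
p = 50000.0  # Pa (500 mb), assumed level
T = 260.0    # K, assumed temperature

rho = p / (R * T)  # about 0.67 kg/m^3

rates = {dp_mb: (dp_mb * 100.0) / (rho * g) for dp_mb in (20, 30)}
for dp_mb, dz in rates.items():
    print(f"{dp_mb} mb/day is roughly {dz:.0f} m/day of subsidence")
```

The exact figure shifts with the level and temperature chosen, but the order of magnitude (hundreds of meters per day) is robust.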

By: David Smith
Sun, 15 Jun 2008 13:48:15 +0000
https://climateaudit.org/2008/06/07/march-2008-radiosonde-data/#comment-150687

Re #141 DeWitt, I imagine that any expansion remains adiabatic but, as you note, the radiative heat losses become significant. The chart in #42 shows the magnitude of the radiative cooling.

Geoff, my guess is that even at -80C the amount of CO2 is well below saturation, so the CO2 should be present as gas only.

I’ll have some time later today to search for an article which carries an air parcel through the Hadley-Walker cycle, for both 280ppm and 560ppm CO2 cases.