Berkeley Earth Update

It has been a while since we’ve done an update and there is much to report on, including an update to the web site, some additional memos/papers to discuss and an update on the papers. Let’s start with the web site.

Website update

Updated data and code drop. The process for automatically updating our temperature record has progressed somewhat. Since we rely on 14 different datasets that each have different updating processes our updates are not to a point where new numbers can be produced on a monthly basis but that is the goal. The code and documentation has been improved so that dedicated end users can download it and get it running without too much outside help. Still it is not a beginner project. Over the course of the next few months we will be working with researchers who have expressed interest.

Gridded data has been posted. The gridded data is in 1 degree grids and equal area grids.

State and Province data. Since we create a temperature field we can use Shape files to extract the average temperature field for irregular areas. Data for states and provinces of the largest countries is provided. Theoretically, one could specify any arbitrary polygon and extract that from the field which may be of use for certain applications such as reconstructions.

Memos

There are four new memos that we are posting for people to comment on. Two of the memos relate to Hansens PNAS paper (H2012) on extremes. Hansen’s PNAS paper was read by some to be an argument for a more variable climate. Here is an example of how some people understood Hansen’s paper. Both of these memos have been reviewed and improved by Hansen and Ruedy so we thank them for their contributions. The 1st memo was written by Sebastian Wickenburg and the 2nd memo was written by Zeke Hausfather.

The the PNAS paper does not establish “if you put more energy into a system variability increases.” This is shown in two ways. In Wickenburg, we show that the widening of the distribution can be a mere methodological artifact. Hausfather makes the same point and illustrates a different methodology that challenges assertions of increased variability. The primary insight of the H2012 remains, in a warming climate we expect to see more warm extremes. However, H2012 did not establish or aim at establishing that the distribution of temperatures has widened. Showing a change in distribution probably requires different statistical tests than those that were applied.

The 3rd and 4th memos are extensions of our Methods paper. The 3rd memo is a simple exercise to help people visualize the difference between the Berkeley method, the CRU method and the GISS method. To illustrate the difference we use visual data rather than temperature data.

The 4th memo stems from reviews of the method paper. A reviewer of the methods paper requested that we use GCM data to establish that our method was superior to CRU’s method. As the methods paper was already rather long, we decided to write up a separate memo focused on this test. The approach is straightforward. A 1000 year GCM simulation is used as ground truth. Since this data exists for every place and time we can calculate the “true” average at any given time. This “ground truth” is then sampled by using the GHCN locations as a filter. The experiment is repeated using sub samples of the 1000 year run. The results show that if you use a limited spatial sample ( GHCN locations ) with temporal gaps ( not every station is complete ) that the Berkeley method has the lowest prediction error. This should come as no surprise. As far as I know this is the first time any rigorous pure methodological test has been performed on CRU or GISS and is one of the benefits of having code posted for the various methods.

Papers

The results paper has been published by Geostatistics and Geoinformatics, available online. The Methods paper and UHI paper are under going final review prior to being submitted.

What would the diagram look like if we where to use a 24 hr heating/cooling cycle, observe the average coiled state of protein and then measure energy as (Tmax + Tmin)/2?
The odd thing is that in thermodynamics we know that ‘Energy’ is real, as is ‘Temperature’, however, ‘average’ energy and ‘average’ temperature is unreal when describing dynamic systems.
We can of course talk about a system being in local thermal equilibrium, and indeed these are the only locals we can describe using classical thermodynamics, but we cannot describe the thermodynamics of a thermally oscillating system using equilibrium thermodynamics.

Well Steve, the link is here Your first statement is correct, it is how Eli thought about the Hansen Sato and Ruedy paper, BUT Eli never said that was what HSR were trying to show, merely what the Rabett saw in the data. HSR did look different baseline periods (why Zeke used 0 – 0 baselines instead of 1 – 0 is a minor annoyance)

As Eli recalls reading HSR there is a major issue that Zeke has not confronted. For this type of analysis, a baseline period which includes a significant change in the controlling variable (temperature extremes) may not be suitable..HSR chose 1951-1980 because it was a period when the global temperatures were relatively constant. HSR state:
——————————-
The question then becomes, what is the most appropriate base
period. Our initial choice, 1951–1980, seems to be nearly optimum.
It was a period of relatively stable global temperature and
the earliest base period with good global coverage of meteorological
stations, including Antarctica. . . . .

The 30-y period 1951–1980 with relatively stable climate is
sufficiently long to define a climatological temperature distribution,
which is near normal (Fig. 9, Left), yet short enough that we
can readily see how the distribution is changing in subsequent
decades. This exposes the fact that the distribution is becoming broader and that there is a disproportionate increase of extreme
hot outliers. In contrast the 60-y base period, 1951–2010, and the
1981–2010 base period, which include the years of rapidly changing
climate within the base period, make it more difficult to discern
the changes that are taking place.
——————————-

Were Eli not a wonderful bunny he would call you and manacker out for telling untruths about him(Damn, just can’t do the McIntyre spit step). OTOH, Eli suggests that Steve RTFR (Maybe also Zeke:)

I am sorry Eli, but I read your comment as your interpretation of the main points of Hansens paper and not your analysis of the data

You linked to this:

“There are three simple points in the Hansen paper

First if there is an increasing/decreasing linear trend on a noisy series, the probability of reaching new maxima or minima increases with time

Second, if you put more energy into a system variability increases.

Third, if you put more energy into a system variability increases asymmetrically towards the direction favored by higher energy

Comment by Eli Rabett — 21 Aug 2012 @ 9:23 AM”

#########################

The way I read that you are stating that Hansen’s paper made these points. It didnt say that you thought the data showed this, you clearly stated that there were three points IN HIS PAPER. If you meant to write that Hansen missed something in his data that you saw, then I apologize. So, when I went back to your source where you quoted yourself I took the principle of charity with me. I assumed that when you wrote the paper made these points, that the paper actually made these points and that your words meant what they plainly mean.

Given that Hansen has reviewed our memos and made helpful suggestions and sees them as complimentary to his work, I’m at a loss to explain your displeasure. Of course you are free to say that when you wrote ” There are three simple points in the hansen paper” that you meant something other than the plain meaning of those words, I’m happy to accept your translation of rabbit. It might be easier just to say we didnt fully understand each other. Where is Willard?

By all means willard spend time with the young ones.
I suppose I can just say that I misunderstood Eli and leave it at that.
Not that anything turns on it. Plus the niners are in the super bowl so charity abounds.

————————–
This exposes the fact that the distribution is becoming broader and that there is a disproportionate increase of extreme hot outliers.
————————

and you did LOOK at Figure 9 which shows exactly what Zeke did with the change in base period? And you did FOLLOW their discussion about why the 1951-1980 base period was the appropriate one and not the others

Your argument is with Hansen not Eli. The Rabett does see the disconnect btw what Hansen wrote you and what is in the Hansen, Sato and Ruedy paper, but that is NOT Eli’s problem with respect to something he wrote before Hansen wrote to you.

Nice willard. I suppose I should come defend my friend ravetz. Funny the old school european leftists and Mosh got along pretty well in Lisbon.
As for the rabbett I think I’ve offered him the peace pipe and he is still off in his warren thumping his foot, oh well maybe zeke can have some luck

(I tried posting this earlier. I apologize if this ends up a duplicated post.)

Since some people have been taking umbrage with Eli’s comment:

Second, if you put more energy into a system variability increases.

I thought I’d weigh in on this while I’m over here. What Eli says here is actually true.

A van der Pol oscillator is a good example (it’s a simple physical system that shares many characteristics with more complex systems).

x”(t) + (-r1 + r2 x(t)^2) x'(t) + w0^2 x(t) = 0

A few statements about the interpretation of this equation:

Note that the linear damping term (-r1) is negative, this is a way of parametrizing an energy source that is external to the system being modeled. The Sun would act as that energy source for the Earth.

The term r2 x(t)^2 is referred to as the “nonlinear damping term”. If r2 = 0, for initial x(0) and/or x'(0) ≠ 0, the system is unstable and x(t) -> infinity as t-> infinity, that is it is physically unrealizable.

Increasing r1 is equivalent to increasing the amount of energy in the system.

Given any initial nonzero values for x(0) and/or x'(0) it will tend to the limit cycle given approximately by x(t) = 2 sqrt(r1/r2) sin (w0 t + phi).

(You can verify the general form of this by noting that r1/r2 has dimensions of length-squared.)

Note that if you increase r1, the amplitude sqrt(r2/r1) gets largre, meaning as I increase the amount of energy available to the system, the amplitude increasese.

Recognizing that this is a very simple system and that multiphase systems like the Earth’s climate needn’t follow this general prescription, it should still be noted that if increased variabilty is a generic feature of simple models, it isn’t an immediate conclusion that more complex models should behave in the opposite manner. To me, the burden of proof here seems to be on people claiming that the variability should be diminished.

That said, simply because the variability is increased, does not mean you’d expect to resolve it with measurement at this point. (In fact, you’d probably not to be able to resolve this effect at the moment, and if you could, things really would be “worse than we expected.”)

My first approach (lowess-based detrending) was a more explicit attempt to remove the need for baseline dependence. It didn’t completely work (1951-1980 baseline still had slight broadening, while 1981-2010 did not), but it eliminated most of it. The main point is that the method chosen by Hansen is not particularly appropriate to discern changes in variance in temperatures over time. That doesn’t change his conclusions per se, as an increasing mean with constant variance would still lead to considerably more extreme heat events.

Steven,
The same amount of energy entering the system, coupled with less energy leaving the system, equates to less overall energy flux.
If, otoh, you’re talking about the increase in potential energy, you have to compare that with the total amount in the system (the amount it took to increase the earth system from 0K)

Eli’s observations were simple restatement of the Boltzmann distribution as shown in this typical figure. if you do the integrals the variance in (width of) the distribution of likelihood that the system has energy E is ([3/2]^1/2) kT. The shape of the distribution becomes asymmetric in the direction of higher E with increasing T. That’s the easy part (actually trivial) and is exactly what Eli said.

Second, if you put more energy into a system variability increases.

Third, if you put more energy into a system variability increases
asymmetrically towards the direction favored by higher energy

The tougher one is seeing why this is true of temperatures, The temperatures that Hansen was talking about were local temperatures each of which was simply (ok, fairly simply) proportional to the local average energy. Since these are measurements of different things (situations) the central limit theorem does not apply.

FWIW, Zeke and Hansen are arguing about a data set, Eli is trying to find a basic principle underlying the data set.

Da pwobwem wit da wabbit’s second and turd laws is enewgy distwibution. The extra energy is preferentially going to the higher northern latitudes. This decreases the temperature differential between tropics and polar zone. A decreased delta T means less work can be accomplished across the gradient. Carnot’s law not Boltzmann’s. The result is more calm not more chaos.

I , steven mosher, was wrong when I thought Hansen was trying to prove that the distribution had widened. His charts show that, but he was not trying to prove it. See simple.
In some way people who thought vaughan Pratt was trying to forecast temps in 2100 made the same kind of mistake. Simple mistake. perfectly understandable, not a huge deal. not a ‘science mistake’ a failure to communicate.

The Sudden Stratospheric Warming event that began in late December and is only now fading is the direct cause of much of the cold outbreaks in the NH so far this winter, including the extreme snowfall in Moscow and the current cold period in Britain. That SSW event had its roots in a heat wave and high pressure area over India and Pakistan back in mid-December.

Amazing how this planet is so connected that heat in one area/region can lead to cold in another, isn’t it?

Gates
Careful analysis of NOAA’s animation for period at the 30 hPa (which is lower altitude than 10 hPa, on which you base your statement) shows no sign of anything over India and Pakistan but conclusively shows a plum of hot air originating from Kamchatka volcanic eruption and it is connected to Kamchatka for the period of about 10 days.http://www.vukcevic.talktalk.net/SSW2012-13.htm
Simple logic says that an observation at a higher altitude can not indicate the source accurately if it is not evident at the lower altitudes.

Bruce, last time you used EC data you screwed up by not applying the quality flags. So, I will take a pass on evaluating anything you have to say about that data. The gridded data is on the site. you need to understand ncdf to read it.

Bruce. You still need to apply qc to the monthly summaries. [ slaps forehead]

why .nc?

well because its a standard and because end users request that a standard be used rather than inventing an ad hoc format that requires coding. You are basically asking a question like ‘why did you use pdf?”
trust me, asking why we use ncdf does not make you look entirely credible

Respectfully, your Volcanic origin for this latest SSW just does not fit with observed facts. Remember the 10 hPa chart showed the warming staring in southern Asia in late December? Here is the vertical velocity averaged over the whole troposphere for that period,covering both Southern Asia and your volcanic area far Northeast Asia:

You can physically see the huge upward vertical velocity in the troposphere in S. Asia, far away from your volcanoes. This represents a massive vertically directed Rossby wave breaking on the tropopause. It compresses the air from the upward push of air into the stratosphere. Which you clearly see having the same origin in S. Asia at the 10 hPa level. What you see much further north and east of this near your volcanoes a few days later at the 50 hPa level is the downward falling warmed air recoil from the initial upward push shown above. This downward compression warming recoil is associated with the high pressure anomaly over the Arctic during SSWs.

Just for fun, I plotted the vertical velocity across the whole troposphere 1000 to 100 hPa just prior to the big SSW In 2009. Interestingly, here’s the location on Earth that had a huge vertical velocity pushing right into the stratosphere in the days prior to the SSW. Turns out to be the same region as this year (which was surprise to me):

This region is an area where sometimes the jet stream comes down across a high desert in Asia and then hits a range of 20,000 foot peaks, deflecting the stream to vertical. Probably more research is warranted…

Hi Gates
You are a scientist and possibly an expert in these matters. If you are studying the SSW, writing or contributing to a paper, then it is wise to consider all known factors.
I have drawn your attention to one, which could be a trigger by opening a ‘funnel’ in the tropopuse, for the warm Pacific air to flow into stratosphere.
I suggest to carefully study NOAA’s animation, the jet-stream effect on the SSW flow with it’s 10 days holding link to the Kamchatka area.
Your vertical velocity graph, I would say is at wrong latitude for the SSW and more likely to be associated with the Hadley/Farrel cells boundary around 30N.
Also consider tropopause (red line) height
in relation to the latitude. Kamchatka is at 55N, at the latitude where tropopause is much lower than 25-30 N you are suggesting as the origin.
Worth another look.

BEST’s obsession with warming for 1960 is so 20th century. The important question today is the 16 year pause. but your group is still obessesed with ignoring the 1930s/40s and proving 1998 was warmer than 1960.

Whether or not a globe that is warming by fractions of a degree will show significantly higher extremes is the subject of Hansen et al. (2012) “Perception of Climate Change”.

If the increase of the “extremes” is also only measured in fractions of a degree, there is not much to be concerned about. If, however, there is also an increase in the variance with warming, this could be of concern.

As you point out, some people erroneously interpreted Hansen 2012 to conclude that the variance in extremes would increase with warming.

In his analysis of the Hansen et al. paper, Zeke Hausfather addresses the question: “is variance increasing as the globe warms?”

Using the 1951-1980 baseline, he concludes, ”anomalies plot shows slightly less variance in the current decade than in prior decades”

Using the 1991-2010 baseline, Hausfather writes, ”the variance was greater in the past and is smaller now!”

So the “variance” is actually decreasing with warming, which means there should be less extreme hot days.

Sebastian Wickenburg identifies a “pinch effect”, which creates a spurious wider variance at later times.:

Therefore, the number of ‘hot’ events, i.e. much higher local anomalies than during the base period, will always increase in such a system, especially if there also is a global average increase in T. The results of comparing the number of 3 sigma events at later times to the number of 3 sigma events during the base period, therefore are difficult to interpret, because the ‘pinch’ effect will increase the number of such events at later times.

Hausfather and Wickenburg did a statistical analysis using “actual records” and concluded that there is no increase in variance with higher temperature (as Rabett claimed).

I simply took the “actual record” of “record” high temperatures by US state (a very limited data base, to be sure) and also came to the same conclusion as the statistically much more relevant memos of Hausfather and Wickenburg.

manacker, as I pointed out yesterday when you mentioned this, apples and oranges. Hansen’s paper was looking at 3-month average temperatures, not daily records. This would be a much smoother variation over time and would show climate change more clearly. I wish someone would do an analysis of summer records like this to compare properly with Hansen’s statistics.

The memos by Hausfather and Wickenburg answered the key question, which was left open by Hansen (but apparently misinterpreted by some):

The variance has NOT increased with warming (in fact, it looks like it may have decreased).

The statewide daily peak values I cited are obviously measuring top values differently, but suggest that there has been no real increase in record hot days recently.

It looks like the 1930s was the decade in the USA with the most statewide record hot days. Realize this is simply one set of data points in a puzzle that is much more complicated, but every bit of data tells us something.

manacker, yes, a lot of people misinterpreted the 3-sigma statements as being about a change in variance, when actually it is about the properties of a shifted bell curve which make what was once an extreme summer, much more common.

It’s still fundamentally wrong and highly misleading to define 3-sigma events in relation to some past average. That approach contains the same fundamental error as does the interpretation that the widening would be proof of increased variability. When the latter is dismissed the former should also be dropped.

Perhaps Steven might ask what Hausfather and Wickenburg think about this issue.

Pekka, Hansen made it very clear how he defined his terms. He showed that events that used to occur half a percent of the time, now occur nearly ten percent, or equivalently the area increased by this factor on average in a given year. These are useful things to point out, and they arise from his definitions that he used as a way of visualizing climate change.

Five states are different from that USA list. Old record and year are shown in brackets.
Colorado 114 1954 [118 1888]
Minnesota 115 1915 [114 1936]
S. Carolina 113 2012 [111 1954]
Vermont 107 1912 [105 1911]
Wyoming 115 1988 [116 1983]
Since 1951 there were 18 highs and 24 lows set or tied with previous record.

Not too many folks at temperate latitudes are deeply concerned about a warmer January, so your argument really is worthless, as you say.

Come back with data showing a warmer July and (even though the data is still meaningless) folks will at least listen to you (as they did to the recent silly Washington Post article forecasting more DC heat waves in 2050).

This is actually a good data point for more extreme hot temperatures against more extreme low temperatures, but still that may not address the topic, as it is still more a snapshot than a detailed analysis.

The shrinking variance example was an illustration of how the results are highly dependent on the baseline period used. All we establish is that this method is inappropriate to determine changes in variance over time, so I wouldn’t claim that variance is increasing OR decreasing at this point.

The Hansen memos. I hope, settle the question about whether of not H2012 established that the distribution has widened. H2012 didnt establish that , and Hansen communicated that it was not his aim to establish that. In short, many folks mis interpreted that aspect of his paper. So, he was happy to see us clarify that. Robert’s memos, I hope, should make people pause before they hang serious claims about “the pause” on the CRU record.

It is a shame that the statistically superior BEST record only covers land area rather than global temperature. Let’s hope that gets extended one of these days, so we’ll know if BEST also sees a “pause” or not.

SST has been added in. The release of that has been gated by various other projects. I hestitate to give a date on the release cause there are a couple other balls in the air.
However, now that we’ve posted a gridded land product, folks can add any SST they want. The trick is combining the two and there are limited number of ways to do that.

Steven, “At one point there were a couple of us who thought we should do a MAT plus SAT product since icoaads has as many MAT records as SST records.” The MAT is a PITA, but from what little I have looked at, it would indicated considerably more warming as Tave, but the Tmin would reflect the 1985 shift in diurnal temperature and more closely follow the ocean oscillations.

I think it would be worth the effort, but then I am just a fishing guy :)

I agree that is an interesting and robust-looking result. I think the diurnal range increases as the land area dries. Drier areas have larger diurnal cycles. We should expect land areas to dry because the ocean is not warming as fast as the land. It may have been decreasing before 1980 due to increasing aerosol effects decreasing surface solar radiation. My two cents.

If you mean marine surface air temperatures globally are warming faster than the global sea-surface temperature, I would be skeptical of that statement until I see the data. Upper air, possibly, because of the land warming effect spreading over the oceans.

“We should expect land areas to dry because the ocean is not warming as fast as the land.”
______

Please define your terms. The use of SST’s or even near surface temperatures over the oceans is of course not a good metric for how much the ocean itself is warming. The best metric for that would be ocean heat content down to the deepest levels you can consistently measure. Second to ocean heat content for measuring how much the oceans are warming would be measuing the effects of ocean warming from heat that is being advected away from the warmer tropical waters toward the polar regions via ocean currents and these effects would be expected to be seen primarily in the reduction of sea ice. Given that substantially more heat is naturally advected toward the N. Pole than the S. Pole on this planet, and the fact that ocean currents can actually reach the N. Pole, it is reasonable to expect that the N. Pole would warm faster and more than the S. polar region.

TSW, OK, I mean ocean surface temperature, because that determines the atmospheric water vapor. This part is not warming due to a multi-decadal cooling phase and has impacts on the global water vapor which impacts the relative humidities over the warming land, i.e. they go down. This may also impact cloud cover over land (clouds have been decreasing) further amplifying the differential land heating and drying out the soil more. Drier soils have larger diurnal cycles.

I was thinking at some point to address the “boundaries” on UHI contamination by looking at trends in MAT and Tropospheric trends. Hmm, lots of work left undone. Most folks think that this is un interesting..

You joke about Ship Heat Islands, but marine air temperatures measured during the day are affected by solar heating of the ship. For long term studies, most analyses have used only night marine air temperatures. The National Oceanography Centre at Southampton have done some work on daytime heating biases. NOCS also worked on a new NMAT data set and paper, which has some preliminary comparisons with land temperatures.http://onlinelibrary.wiley.com/doi/10.1002/jgrd.50152/abstract

Having seen the GIGS paper, I can see how it might not be considered an atmospheric science paper as its results are mostly land statistics. The things they to do relate the results to the atmosphere are interesting but not ground-breaking. They subtract a volcano signal and a log(CO2) expectation (as Vaughan Pratt did), and find that what is left correlates with AMO but leads it, indicating both AMO and land temperature change as a result of another forcing that can only be speculated on. Interestingly, from the perspective of Vaughan’s work, the BEST paper finds a sensitivity of 3.1 C per doubling fits the log(CO2) part best. This is with no delay, as would be consistent with land where none would be expected, but they have ignored variations in sulphates and other GHGs, effectively assuming it is part of the CO2 signal, so it is not truly isolating CO2 just some aggregate proportional forcing.

Although BEST offers no sea data, WoodForTrees has collected several sources for land and sea separately. This chart fits a trend line to each of CRUTEM3 (land, the red plot) and HADSST2 (sea, the blue plot).

Click on the “Raw data” link at the bottom and search for “slope” to see that the trend lines have respective slopes of 0.251 C/decade and 0.152 C/decade. That is, during that period, the sea is warming at 152/251 = 61% of the rate at which the land is warming.

Calculated naively, 61% of a land climate sensitivity of 3.1 C/doubling could therefore be expected to correspond to a sea climate sensitivity of 0.61*3.1 = 1.9 C/doubling.

Assuming zero Hansen delay in my spreadsheet (i.e. setting HanDelay (my new name for GWDelay) to 0 and refitting accordingly), climate sensitivity should be 2.1 C/doubling. (My revised spreadsheet can be seen here, many thanks for the geophysical if not astronomical volume of feedback at Climate Etc. over the past six weeks or so.)

Since the sea is 70% of the Earth’s surface, one might naively compute the global climate sensitivity as 0.7 * 1.9 + 0.3 * 3.1 = 2.26 C/doubling as the zero-Hansen-delay climate sensitivity.

This is a tad higher than my 2.1 C above.

As Yogi Berra said, when you come to a fork in the road, take it.

One branch of this fork says BEST is right about the 2.26 figure and I’m wrong.

The other says that I’m right.

If so it should be possible to calculate what BEST should have obtained for the land climate sensitivity. Call this x.

Then 0.61 * x would be the sea CS (climate sensitivity), Weighting these by the 70/30 ratio of sea to land in area, we would expect 0.7 * (0.61 * x) + 0.3 * x for the global climate sensitivity, or 0.727 * x.

Since I claim 2.1 C/doubling globally, BEST should have obtained 2.1/0.727 = 2.89 C/doubling.

That would make their 3.1 C/doubling figure high by 0.2 C/doubling.

Given that Richard Lindzen once seriously suggested 0.5 C/doubling, and some calculations have come up with 8 C/doubling, what’s a mere 0.2 C/doubling between friends? (Berkeley is only 60 miles by car from Stanford.)

Vaughan Pratt, your entire comment is based on BEST’s value of 3.1 degrees being for a doubling of CO2. It isn’t. BEST used CO2 as a proxy for all greenhouse gases.* That means you’re comparing a doubling of CO2 to a doubling of all greenhouse gases. That means those results are meaningless.

*Actually, it’s serving as a proxy for all anthropogenic forcings. That makes things even more complicated.

@Brandon Shollenberger: BEST used CO2 as a proxy for all greenhouse gases.*

While I’m flattered that you think I could be doing something else, I have no idea how I could be so clever as to do such a thing.

Like BEST, and like everyone else estimating climate sensitivity observationally, I’ve been using CO2 as a proxy for all anthropogenic forcings. If there’s an alternative I have no idea what it could possibly be.

My apologies. You did use CO2 as a proxy. I guess your results and BEST’s are apples and apples. Neither is a measure of sensitivity to a doubling of CO2, but they are both a measure of the same thing.

Vaughan Pratt, “Then 0.61 * x would be the sea CS (climate sensitivity), Weighting these by the 70/30 ratio of sea to land in area, we would expect 0.7 * (0.61 * x) + 0.3 * x for the global climate sensitivity, or 0.727 * x.”

Because of the Antarctic and Greenland, the “land” percentage would be closer to .25 than .3 I would think. I was playing with 0.68 ocean, 0.26 land and 0.06 icebox. That puts me in the range of 0.8 CO2eq with ~ 0.6 to 1.6 amplification. The 0.8 is based on the impact of ~ 4 Wm-2 on a 334 WM-2 near ideal radiant surface that happens to only cover about 68 to 70 percent of the total surface which makes a good thermal reservoir by not so great a heat sink. :)

@BS: Neither is a measure of sensitivity to a doubling of CO2, but they are both a measure of the same thing.

How do you define “sensitivity to a doubling of CO2?” Presumably holding aerosols constant (tricky in the case of aerosols from jet engines, which increase in proportion to increasing CO2 since it’s very inconvenient to have to scrub jet exhausts). But are you also holding water vapor constant? If so that would be a lot closer to the notion of no-feedback sensitivity, generally accepted to be considerably less.

I’ve been interpreting “business as usual” to mean not just that CO2 stays on the curve it’s been on but that its concomitants such as other GHGs (especially water vapor) and aerosols do so as well in full detail (same light-to-dark color ratio, low-to-high altitude ratio, etc.). As soon as you relax any of those assumptions you open a whole can of worms as to the meaning of “climate sensitivity.” While it’s perfectly reasonable to open that can of worms, one must be aware of when one is doing so and not speak naively of the climate sensitivity but instead state all your assumptions.

Furthermore the precise definition of “curve CO2 has been on” is up for grabs. Max Manacker has been insisting for a long time now, recently joined by Greg Goodman, that the (seasonally corrected) Keeling curve follows an exponential. That assumption would give atmospheric CO2 a CAGR of somewhere between 0.4% and 0.5% depending on how you fit it to its evidently Procrustean bed. The mauve curve in these plots is the function 1.00411^(y − 556.9) (i.e. a CAGR of 0.41%) when naively fitted to the endpoints and 1.00452^(y − 588.8) (CAGR 0.45%) when fitted to 1975 and 2005, with R2 respectively 99.3% and 99.0%. The graphs on the right show that this models CO2 as increasing at around 1.7 ppmv/yr lately, about 0.5 ppmv/yr lower than its actual 2.2 ppmv/yr (taken from the green curve), but in 1960 higher by an equal amount.

My poster used the whole of HadCRUT3 to infer a preindustrial level of 287.4. Taken in conjunction with the Hofmann-Butler-Tans exponential model of anthropogenic CO2 and fitting to 1975 and 2005 gives 287.4 + 1.02518^(y − 1823.6) for an R2 of 99.56%, the red curve at upper left in the above graph. With an annual increase today of 2.5 ppmv (green curve in top right chart), this errs on the high side, being about 0.3 ppmv/yr higher than actual.

If the goal is just to model the Keeling curve it is hard to see how to improve on the Hofmann et al function based on a preindustrial level of 270, the green curve in these graphs. This gives a CAGR of about 1.89% for the anthropogenic part of atmospheric CO2, with an R2 of about 99.83% (for the seasonally corrected Keeling curve) however fitted. (260 has a very slightly better R2 but a considerably less plausible preindustrial level. Hofmann et al used 280 which is midway between the green and red curves.)

My take on the Keeling curve is that Hofmann’s formula (which I believe he developed as an AMU poster before Butler and Tans joined him as coauthors on the journal version) is too naive as a model of how anthropogenic CO2 influences the atmosphere. The natural component should not be held constant but should decrease following a suitable law reflecting the details of the whole carbon cycle when our contribution to it is taken into account. This would improve both our understanding of the carbon cycle in general and (from my standpoint) my SAW+AGW model of multidecadal climate as well.

@cd: Because of the Antarctic and Greenland, the “land” percentage would be closer to .25 than .3 I would think.

Very interesting point, captd. I would think that would depend on whether your analysis was static or dynamic. Certainly the icecaps cool the atmosphere, presumably even more effectively than the oceans. Dynamically however the question is whether they warm faster, which is what matters when it comes to global warming.

When well below freezing, as in winter, they warm to at least the same extent as land since the icepack is a great insulator (whence igloos).

But at 0 C, as in summer, the latent heat of fusion of water kicks in and suddenly they hold the line on temperature even more effectively than the oceanic mixed layer. Getting above 0 C, which involves first melting the ice, requires an amount of heat equivalent to raising water from 10 C to 90 C! (Ice is such a great regulator that people use a mix of ice and distilled water to calibrate thermometers.)

Seeing as I’m the only one who commented on that issue, you’re presumably referring to me in this backhanded insult. If so, that’s silly as I do get using CO2 as a proxy. In fact, I’ve discussed the matter in some detail to correct people’s misuse of it as a proxy on multiple occasions. Even on this blog. Look at my exchanges with manacker for an example.

I’ve been interpreting “business as usual” to mean not just that CO2 stays on the curve it’s been on but that its concomitants such as other GHGs (especially water vapor) and aerosols do so as well in full detail

Seeing as other GHGs and aerosols haven’t followed the same curve as CO2, I don’t know why you’d interpret it that way. The overall anthropogenic forcing may have largely followed the CO2 curve (the uncertainty in aerosol forcings is so large any number of curves could be “right”), but the individual components are known to have diverged.

While it’s perfectly reasonable to open that can of worms, one must be aware of when one is doing so and not speak naively of the climate sensitivity but instead state all your assumptions.

Which is why it is important to be clear you’re using CO2 as a proxy. Talk of sensitivity to a doubling usually refers to CO2 itself, and not CO2 as a proxy. Both values can be calculated, but they are not (directly) comparable, and they should not be confused. In the case of the comment I responded to, you should have made it clear you were not referring to sensitivity as it is normally used.

How do you define “sensitivity to a doubling of CO2?”

I’d hope you already know my answer. I’d define it the same way the IPCC defines it. That is, a doubling of CO2 will cause a certain change radiative forcing. That change in forcing will cause a certain amount of warming. There will then additionally be some feedbacks that affect that amount.

Vaughan Pratt, “Very interesting point, captd. I would think that would depend on whether your analysis was static or dynamic.” I am still in static though I have attempted a little dynamic.

In the Arctic, latent heat of fusion does hold the line, in the Antarctic , not so much. So there is at least 10 to 17 Wm-2 of slop that I see no way around at the true surface. The 4C (334.5 Wm-2 per S-B) and the 334 Joules per gram latent heat of fusion does tend to create a stable point though in both the fancy radiant transfer and my old school HVAC stuff :)

Looking down south, the Antarctic is anti-phase in Tmin like I expect due to radiant forcing changes with Tmax drifting up as I expect due to OHC, but explaining the shift in diurnal temperature range to everyone’s satisfaction is pretty elusive.

@VP: I’ve been interpreting “business as usual” to mean not just that CO2 stays on the curve it’s been on but that its concomitants such as other GHGs (especially water vapor) and aerosols do so as well in full detail

@BS: Seeing as other GHGs and aerosols haven’t followed the same curve as CO2, I don’t know why you’d interpret it that way.

Sorry, I wasn’t clear as to my meaning. By “do so as well” I meant that each contributor to AGW stays on the curve it has been on, not the curve CO2 has been on.

If you’re arguing that forecasting based on a complex situation like past behavior of multiple curves is an imprecise art, I’m with you 100% on that. While it’s easy to extrapolate any given analytic model (merely knowing the exact values of its derivatives at a single instant in time is enough for an exact extrapolation arbitrarily far into the modeled future), the real future is fickle and frequently unfaithful to analytic models of the past.

@BS: In the case of the comment I responded to, you should have made it clear you were not referring to sensitivity as it is normally used.

My comment concerned the value of climate sensitivity obtained by the BEST team. I’m sorry if it wasn’t clear I was using the same understanding of the notion as theirs. As you say it would be nonsense to use some other meaning than theirs. I do spout nonsense sometimes, but I hope I managed to avoid doing so on this occasion.

“* Climate sensitivity depends on the prevailing circumstances, which we take here to be what obtained “on average” during 1850-2010. The profound disequilibrium of modern climate makes its circumstances very different from those of the deglaciations of the past million years, in which CO2 changed two orders of magnitude more slowly.”

There is no guarantee that the prevailing climate sensitivity, PCS, for the 21st century will turn out the same as that for the 20th century. On the other hand I have so far seen no suggestion from any quarter that climate sensitivity however defined is either increasing or decreasing, making past PCS as good a predictor of future PCS as any (which should not be taken to mean that it actually is a good predictor, merely the best we have for the moment).

Many people seem to be viewing climate sensitivity as a constant of nature like the speed c of light or Newton’s universal gravitation constant G instead of the ill-defined construct it really is. Giving it a value is like saying that Florence Colgate’s beauty is 987 millihelens, or that Helen of Troy’s beauty has been estimated at 1013 milliflorences bearing in mind that unlike the standard kilogram in Paris the former beauty metric is no longer available for comparison.

Judith Curry posts: “The primary insight of the [Hansen’s] H2012 remains, in a warming climate we expect to see more warm extremes. However, H2012 did not establish or aim at establishing that the distribution of temperatures has widened. Showing a change in distribution probably requires different statistical tests than those that were applied.”

Neil Plummer and colleagues have posted a very nice analysis What’s Causing Australia’s Heat Wave? that makes this same point: it’s the (definite!) global-warming trend that’s driving the increasing incidence of extreme temperatures, not the (plausible but still-unverified) increasing variance of weather-related fluctuations.

Mathematically minded Climate Etc readers may enjoy the following reflection on why increasing variance is thermodynamically plausible. The following reasoning is due originally to Boltzmann, and was made explicit by Onsager, and is accessibly discussed in many textbooks (see for example Charles Kittel’s Elementary statistical physics (1958), chapter 33, “Thermodynamics or Irreversible Processes and Onsager Reciprocity Relations). A terrific auxiliary reference is Zia, Redish, and McKay “Making Sense of the Legendre Transform” (2009).

Let be the time-dependent spatial energy density of a system in thermodynamic system, and let be the entropy density state-function of that system, and assume is the sole conserved quantity of that system. Then the variance of the time-dependent fluctuations in is related to the entropy function by a simple Boltzmann/Onsager relation:

For typical fluid-dynamical systems , and so for fluid dynamical systems the energy variance does generically increase with temperature, increasing specifically as . Because the global heating observed so far is rather small on an absolute temperature scale (however significant it may be to us living organisms!) this simple model predicts an increase in climate variance that is so small (of order one percent) as to be very difficult to observe.

This derivation is non-rigorous for two reasons. The first is the common-sense reason that the earth’s climate-system is not a dynamical system in thermodynamical equilibrium (mainly because the earth is continually radiating the sun’s heat into cold space). Nonetheless, to the extent that entropic ideas apply even approximately to the earth’s climate system, the above Boltzmann/Onsager relations provide reasonable grounds to anticipate that long-term analysis may show increasing climate variance.

The second reason is more sobering. The magnitude of this thermodynamical effect depends on how close we are (or aren’t) to the lethally hot temperatures of a planet-killing climate-phase transition. If we were to observe substantially increasing climate variance, that would be a very concerning thermodynamical warning flag, that our planet is approaching a devastating climate-change tipping-point!

That is why variance is well-worth careful study, and the entire Berkeley Earth community is to be congratulated for their foresighted and careful attention to this issue. Thank you, Berkeley Earth Surface Temperature (BEST) Group!

What part of (Tmax+Tmin)/2 isn’t temperature and that the temperature of the system isn’t a measure of the heat in that system?
Why do you insist that (Tmax+Tmin)/2 is directly proportional to energy density? It isn’t.
Moreover you cannot use a description of equilibrium thermodynamics and apply it to an open thermodynamic system, a complete description of an electrically heated boiler and the contents of a thermos flask are quite different.
Stop talking crap your points are very, very stupid. Reflect on Judy’s post on etiquette and stop trolling.

@DocMartyn: Moreover you cannot use a description of equilibrium thermodynamics and apply it to an open thermodynamic system, a complete description of an electrically heated boiler and the contents of a thermos flask are quite different. Stop talking crap your points are very, very stupid. Reflect on Judy’s post on etiquette and stop trolling.

By this criterion, had it turned out Newton was in competition with Methusaleh for longevity and was posting here, his corpuscular theory of light would be deemed “crap” given Thomas Young’s compelling double slit experiment in 1802. Newton would be judged a troll in violation of netiquette and Einstein (whose only Nobel prize was for the photoelectric effect he discovered in 1905) would have had to come to his rescue. Heisenberg and Schroedinger, whose work was done in 1925-6, would in turn have shot Einstein down for not believing in pure probability.

On my to-do list is a defense of Einstein against Heisenberg and Schroedinger and a proof that quantum computing is not what it’s cracked up to be. (Quantum security is fine however.) That in turn will be shot down by someone else, though not immediately. God only knows (by definition) where this all headed.

Increased variance with warming may well have been “plausible”, but as the two memos showed, it just didn’t work out that way in real life (in fact, it seems like variance may have decreased with warming).

Many “plausible” ideas turn out not to be real.

It is very good that Steven Mosher has posted this interesting new information, which clears up some earlier misconceptions.

when I looked at the temporal evolution of the standard deviation of tmax and tmin, the answer was decreased variability.
problem there is perhaps related to changing spatial coverage .. that is changes in the number of coastal stations. A station by the coast has a markedly different SD than one 50km inland. so, I scratched my head and went to look at other stuff. i should prolly go back and have another go at the data

Plummer et al state, accurately enough, that the higher temperatures in early January were the result of a moderately delayed (2 weeks) monsoon season, allowing the huge but quite stationary air bubble over the central Australian desert region to continue heating until shape-shifted by normal monsoonal influences, albeit a little late in the season

After that statement, there is no further discussion of any possible or likely causes of the 2-week monsoonal delay, just the usual “planet is slowly frying” attributions

Until a reasonable explanation of that moderate monsoonal delay is connected to CO2 levels, I’ll remain sceptical.

Winter months in Australia can and have had a mirror image. A huge cold air bubble sits over the Aus central desert, essentially unmoving and feeding cold air off its edges. On occasion (eg. late 68-69), for over 3 months, giving over 90 successive days of cool, cloudless sunshine. Aus winters can be absolutely marvellous times to be alive. I have not seen anyone blaming CO2 levels for them, even though it does seem a mirror image to summer

‘Our analysis does not rule out long-term trends due to natural causes; however, since all of the long-term (century scale) trend in temperature can be explained by a simple response to greenhouse gas changes, there is no need to assume other sources of long-term variation are present.’

Well that’s nice to know, we can cancel all the research and save a bit of money by laying off all those climate scientists.
Trouble is I was taught to question and, yes be skeptical about such definite claims. So sorry, with the paucity of reliable data I’ll just give it a few more years of gathering decent data. Sorry if that might offend the modellers out there but garbage in still means garbage out, no matter how much you polish it…

This is the crux of the debate, isn’t it. They say that simple ideas on climate sensitivity due to GHGs that go back to Arrhenius a century ago and the Charney sensitivity of 1979 (just refined a little recently) explain the century-scale warming without modification. People disputing that have to show why the simple ideas are wrong and what else happened instead. Two tough barriers, and a case for Occam’s Razor if ever there was one.

yup. That particular part of the paper has irritated folks on all sides. Climate engineers ( folks who want to understand every last detail ) argue its too simple and overlooks all sorts of nice details. C02 skeptics, well, they mostly sling mud and dont address the fundamental claim. Gimme C02 and Volcanoes and I can explain the rise in temperature. Of course it could be the ln(leprachuans) that is really doing the work, but like unicorns leprachuans are known to avoid direct observation.

I think the “crux of the argument” as beesaman has described it, is something else than you have concluded.

The cited paper concludes:

‘Our analysis does not rule out long-term trends due to natural causes; however, since all of the long-term (century scale) trend in temperature can be explained by a simple response to greenhouse gas changes, there is no need to assume other sources of long-term variation are present.’

This as a classical “argument from ignorance” (i.e. “we can only explain X if we assume Y” and “no need to do more work on other explanations; the science is settled”), which of course is invalid in a situation where there are still many unknowns.

If we truly knew everything there is to know about why our climate behaves the way it does, we could make such assumptions.

But we obviously don’t, so we can’t.

Just one example:

The CLOUD experiment at CERN has confirmed that the cosmic ray cloud nucleation mechanism works under controlled conditions when certain natural aerosols are present. It has, however, not been able so far to confirm that this mechanism will work in our climate system, nor what the magnitude of its impact could be.

Let’s assume further reproducible experimentation confirms that the mechanism does, indeed, work in a controlled environment simulating our climate system and that it is sufficiently strong to explain essentially all of the warming we have seen over the 20thC.

This would be new empirical scientific evidence, which we do not yet have today, which would completely change our conclusions on the relative impact of natural versus anthropogenic (greenhouse) forcing.

I am not predicting that this will happen in that way, of course, but it cannot be excluded.

And since we do NOT have empirical evidence to support the model-derived magnitude of GH forcing from human GHGs, we cannot rule this out yet.

manacker, it is a case of when the simplest explanation explains things, why look further, unless you can first prove that the simple explanation doesn’t work which becomes increasingly difficult when datasets like BEST come along.

The problem with Manackar’s argument is actually deeper. If you consider a number of drivers, x,y and find that they can explain your observation z based on good science and someone comes along and says, well w is more important, they not only have to demonstrate how w affects the system, but also that you got x and y wrong.

WRT cosmic rays, Scafetta cycles, Landscheidt astrology, whatever, at best you have correlations that are questionable. With CO2 and volcanoes you have things that are comparatively well understood.

manaker its not an argument from ignorance.
The structure goes like so.

1. Given: Gghs cause warming
2. Given: Volcanoes cause cooling.

If you take these two givens and fit the data your residual looks like AMO with natural variability being about .17C per decade.

Of course it could be something else. it could be ln(unicorns) works better to explain the data.

The point is you dont need to appeal to anything else to explain the the data. Take an example from evolution.
Evolution explains what we see in terms of life forms on the planet.
Of course it doesnt rule out a sneaky deity that really controls everything, but explanatory parsimony suggests that adding entities that are not necessary, is well, not necessary.

Volcanics are still poorly understood in terms of climate,ie there is no uniqueness theorem,the test being in the data eg Krakatau and Tarawera seem to have made little difference in the BEST data,yet the former had a forcing twice the size of Pinatabo

…volcanism, combined with a simple proxy for anthropogenic effects (logarithm of the CO2 concentration), reproduces much of the variation in the land surface temperature record; the fit is not improved by the addition of a solar forcing term. Thus, for this very simple model, solar forcing does not appear to contribute to the observed global warming of the past 250 years; the entire change can be modeled by a sum of volcanism and a single anthropogenic proxy. The residual variations include interannual and multi-decadal variability very similar to that of the Atlantic Multidecadal Oscillation (AMO).

Yikes!

I can “make it fit” when I ASS-U-ME there are only two forcing factors (volcanoes and CO2) plus variability from AMO.

Duh!

I could also “make it fit” by reducing CO2 impact and increasing solar forcing.

What the hell, I could “make it fit” by simply “making it fit” (check Vaughan Pratt’s “millikelvin” model).

manacker, so you are accepting AMO but not volcanoes in terms of valid contributions, or both AMO and volcanoes, but not understanding the ramping up part that is left over, and is their third component.

manaker. I make the analogy to evolution merely to draw attention to the structure of the argument and the case for parsimony. Parsimony is not an epistemic ground but a pragmatic one. of course you need not be swayed by appeals to parsimony.

@manacker: Evolution has been validated by empirical data from actual observations. The CAGW premise (=high 2xCO2 ECS) has not. The difference is very simple. One is a corroborated hypothesis, which has become an accepted theory and reliable scientific knowledge; the other isn’t there yet – it is still an uncorroborated hypothesis.

Only in your mind, Max, only in your mind. Switzerland would seem to have translated you into a vacuum.

Speaking as a scientist accosted from time to time in cocktail parties on various controversial topics like second-hand smoke, which races or species were made in God’s image (maybe God most closely resembles a chimp), evolution, global warming, etc., I have a much harder time with evolution than global warming. How could a velociraptor evolve into a bird, for example? What “empriical data from actual observations” are you talking about? We simply don’t have such a thing! If you believe otherwise I have a very valuable bridge I can let you have for pennies on the dollar.

Compared to global warming, evolution as a theory is totally nuts. If you can’t see that then you’re just blindly accepting what other people tell you about evolution.

Steven Mosher: Parsimony is not an epistemic ground but a pragmatic one.

Well said.

“Occam’s razor” has also been called “Occam’s lobotomy” and “Occam’s Guillotine.” A sticking point has been the definition of “beyond necessity” — for example, is it already “beyond necessity” if numerous correlations and past oscillations have still not been accounted for? Another sticking point has been “entities” — is it an “entity” to assume that something not known well can not possibly matter, something like variations in solar output?

As others have said, thank you for the work on BEST. I really liked this: The approach is straightforward. A 1000 year GCM simulation is used as ground truth. Since this data exists for every place and time we can calculate the “true” average at any given time. This “ground truth” is then sampled by using the GHCN locations as a filter. The experiment is repeated using sub samples of the 1000 year run. The results show that if you use a limited spatial sample ( GHCN locations ) with temporal gaps ( not every station is complete ) that the Berkeley method has the lowest prediction error.

Thanks mathew. yes, the GCM data for testing the methods is elegant and simple. I’ll probably add Nick stokes method to the pile which I think should get very close to berkeley accuracy with computation times than are sub 1 minute. I use it for massive sensitivity testing. basically a least squares approach.

You do know that we can measure cosmic rays, and those measurements tell us how many charged particles they can form in the atmosphere?
And we can measure the amount of charged particles in the atmosphere and compare the two.
And the theory is that a drop in the amount of cosmic rays causes less clouds which causes warming, right?
We measure them by the amount of charged particles they form.

How much warming would they cause by decreasing to 0?

Or come by one of the labs I work at in the US, I can make some cosmic rays for you!
And antimatter.

Jim D | January 20, 2013 at 1:20 pm | ReplyThis is the crux of the debate, isn’t it. They say that simple ideas on climate sensitivity due to GHGs that go back to Arrhenius a century ago..

It’s an answer from gobbledegook, the ignorance not of the data but of the scientists who continue to use unproven premises. Arrhenius didn’t have the faintest idea of what Fourier really said and imagined the atmosphere as being what Fourier said it wasn’t:

The “Greenhouse Effect” was originally defined around the hypothesis that visible light penetrating the atmosphere is converted to heat on absorption and emitted as infrared, which is subsequently trapped by the opacity of the atmosphere to infrared. In Arrhenius (1896, p. 237) we read:

“Fourier maintained that the atmosphere acts like the glass of a hothouse, because it lets through the light rays of the sun but retains the dark rays from the ground.”

This quote from Arrhenius establishes the fact that the “Greenhouse Effect”, far from being a misnomer, is so-called because it was
originally based on the assumption that an atmosphere and the glass of a greenhouse are the same in their workings. Interestingly, Fourier doesn’t even mention hothouses or greenhouses, and actually stated that in order for the atmosphere to be anything like the glass of a hotbox, such as the experimental aparatus of de Saussure (1779), the air would have to solidify while conserving its optical properties (Fourier, 1827, p. 586; Fourier, 1824, translated by Burgess, 1837, pp. 11-12).

So your arguments from authority, Arrhenius, are worthless, you have not established that there is any such thing as this “Greenhouse Effect” concept you claim exists.

And, this is regardless whether you go with the “classic CAGW Greenhouse Effect” which has clouds and carbon dioxide blocking longwave infrared from the Sun then magically absorbing longwave infrared from the Earth, or the variation that the Sun doesn’t produce any longwave infrared, heat radiation, which further idiocy supposedly answers the “classic” gobbledegook.

Your, generic, other argument from authority waving your hands in Fourier’s direction is also gobbledegook – Fourier said nothing about radiated heat apart from heat flow – do read further about this on Timothy Casey’s “The Shattered Greenhouse” from which I’ve quoted: http://greenhouse.geologist-1011.net/

Steven Mosher | January 20, 2013 at 5:22 pm |manaker its not an argument from ignorance.
The structure goes like so.

1. Given: Gghs cause warming
2. Given: Volcanoes cause cooling.

If you take these two givens and fit the data your residual looks like AMO with natural variability being about .17C per decade.

Of course it could be something else. it could be ln(unicorns) works better to explain the data.

The point is you dont need to appeal to anything else to explain the the data. Take an example from evolution.
Evolution explains what we see in terms of life forms on the planet.
Of course it doesnt rule out a sneaky deity that really controls everything, but explanatory parsimony suggests that adding entities that are not necessary, is well, not necessary.

If you want to object then deny #1 and be a skydragon

Now, you can’t even get that right, you’re the one’s believing in skydragons..

“1. Given: Gghs cause warming”

The greenhouse gases “causing warming” are predominately nitrogen and oxygen, the heavy voluminous ocean of real greenhouse gases, Air, which is our atmosphere. It is these which act as a blanket around the Earth slowing the rate of cooling and so avoiding the extreme lows of heat loss from the surface as seen for example on the Moon without any atmosphere.

AGW/CAGW have misattributed this effect to their own fiction “The Greenhouse Effect greenhouse gases” and have done so on the science fraud that the minus 18°C temperature is only without them, when a) this temp figure is for the Earth without any atmosphere at all, without nitrogen and oxygen, and b) the temp without the “the Greenhouse Effect greenhouse gases” would be around 67°C.

Taking out the Water Cycle but with the rest of the atmosphere of mainly nitrogen and oxygen in place the temp of the Earth would be 67°C.

So, let’s get this straight, your “AGW/CAGW Greenhouse Effect greenhouse gases and their effects” are based on an out and out lie.

This garbage in explains the garbage out we get from you, generic AGW/CAGW “climate scientists”. You’ve created a fictional sky dragon, and some of us in the real world can see how you’ve manipulated physics to come up with your fictional claims.

It is simply no longer possible for me to take any of you seriously as scientists who believe in such a silly fictional skydragon at complete odds, and fraudulently so, with the real physical world around us – less of the sarcasm from you would be in order..

“2. Given: Volcanoes cause cooling.”

Of interest here:
“”Scientists are convinced these plumes contain so many cooling sulfate particles that they may be masking half of the effect of global warming,” noted the July 20 Wall Street Journal.

Assumption Was Wrong

However, a team of researchers from NASA and the University of California at San Diego reported in the August 2 issue of the British science journal Nature that they sent instruments into “brown clouds” of aerosols over Asia to measure their effect on temperature. To their surprise, the researchers discovered the common assumption that aerosols lower temperatures was wrong.

The discussion in this thread has been particularly valuable. I am sorry to say this, but it seems clear (and very little is clear in the GW area), that the warmists are in a bind. They must make the case that the probability that there are other relevant (meaning, they would contribute enough) forcings beside CO2 and methane, is so low that the science is “settled”. But that case is far from strong enough, and the holes are too big. Cosmic radiation cannot be dismissed right now. It is a scientifically reasonable that variations in radiation can affect the earth’s temperature, and there is no credible argument to the contrary. Further work may or may not show that the effect of radiation is negligible. But to discount it because it is not well enough understood is a form of scientific head-in-the-sand. And this is only one of many areas in which the warmists must overstate the degree of confidence one can have based on known science.

One of the things that bothers me the most is the violation of the dictum that “correlation does not imply causation.” As good as the BEST work may be from a statistical and data analysis perspective, it completely ignores any arguments pro or con the mechanisms that may affect the earth’s temperature. For me, it is profoundly unscientific to declare that because their models correlate well with the concentration of greenhouse gasses in the atmosphere, anything is “proven”.

Occam’s razor is not relevant here. It is a heuristic, meaning that it is sometimes right, and sometimes not. You can tell from my handle that it is one of my favorite metaphors, but only in the right context. It has to do with complexity, and when a less complex answer might be better than a more complex one. But arguing complexity in the GW case is a red herring. Is cosmic flux a more complex explanation than CO2 and other man-generated greenhouse gas injections into the atmosphere? I think not, nor have I seen any good argument to that effect. It is rather the case that greenhouse gasses were deemed to be the culprit 3 decades ago, and very little effort has gone in to evaluating proposed alternative, or looking for other ones. Don’t blame Occam’s razor for three decades of betting on one horse to the relative exclusion of others. In this regard, I am particularly concerned that we have no credible ideas about the large variations in the earth’s temperature over, say, the last 100,000 years that would suggest why what we now observe is not driven by the same forces that existed before we started burning large amounts of coal and oil a century and a half ago.

Occam37,
Very neat summary. “In this regard, I am particularly concerned that we have no credible ideas about the large variations in the earth’s temperature over, say, the last 100,000 years that would suggest why what we now observe is not driven by the same forces that existed before we started burning large amounts of coal and oil a century and a half ago”. This latter missing point of explanation remains a starting point in understanding the climate puzzle .

Steve, thank you for the work on BEST and the fairness of your explanations. You are mellowing! BEST is a step in the right direction.

The thing is, it is the skeptics who look to cosmic rays to explain climate, but who don’t understand cosmic rays, look very foolish to those of us who do understand cosmic rays.

Simple enough really, just count the cosmic rays and calculate the total number of ions they could possibly produce and compare that number to the number of charged particles already present in the atmosphere, if the two numbers are close, you may have a point, but since they are not, it is time to reject the cosmic rays affect climate theory.
If you think I am hand waving, I suggest swinging a meter and doing some math.

DocMartyn, it is my pleasure to further illuminate for Climate Etc readers in general, and for you in particular, the crucial role of fluctuations in thermodynamics, particularly in regard to the crucial question of whether fluctuations might provide advance warning of CO2-driven bistable phase-changes in earth’s climate system.

This work further illuminates BEST’s vital role in studying fluctuations … there’s much more to climate-change science than just statistical analysis of data! Because if we start to see climatological “critical opalescence”, in the form of increasing fluctuations in climate measures, then that is very substantial scientific cause for concern, eh DocMartyn?

Keep trying to inform them, Fan. “Skeptics” tend to dismiss actual published papers with a wave of the hand, but they are intent on reading blog articles for their information because those are the tabloids of science information. I will add another thought here regarding this and the connection to policy-making. Surely when things like paleoclimate show evidence for GHG effects being important, it is just due diligence for skeptics not to ignore it when making their arguments to the politicians and public, otherwise they may give the appearance of being either (a) selective with their facts, (b) ignorant of relevant science, or (c) just lacking scientific curiosity. When paleoclimate studies are converging on the fact that 50 million years ago CO2 levels exceeded 1000 ppm and ocean temperature were more than 10 C warmer, they should think and wonder why, read those papers, just out of scientific curiosity, not close their minds off.

You appear not to like sceptics, perhaps undestandably if you’ve gotten yourself into a blue funk about the world coming to an end. But 50 million years ago the world’s climate was, according to Stanley, extemely mild, with temperatures at the equator being about the same as they are today and th poles being at the temperature of the Pacific Northwest. having said that it was a different world with India still to collide with Asia and the continents in general being in different places than today, so the resultant climates cannot be compared. It was a different world.

Learn to love your opponents, if you’re wrong then it’s easier to apologise. And you’re wrong.

Geronimo, please be aware that rational climate-change concerns are not assuaged by arguments based upon cherry-picking, willful ignorance, and Panglossian optimism … no matter how confidently skeptics assert these arguments, or how often they are repeated, or what cherished ideological authorities are cited!

geronimo, who is Stanley, and why do you quote this as an authority? Do you know, or does he know, that 50 million years ago was the Eocene epoch, which, if it is known for one thing, that is being very warm.

On all paleotemps on Wikipedia I see 6C at the peak. But I expect the error bars are rather large. And how do you know the CO2 concentration. Last source I checked gave a +-50% error bar. Hargreaves and Annon just published a paper about the last glacial maximum that estimated temps to be substantially less cold than Hansen and others and claimed at least that theirs was more reliable. This new estimate of course lowered the climate sensitivity estimate by a factor of 2. Surely, its best to look at work like Nic Lewis’ recent rework using IPCC AR5 aerosol forcings or its predecessors.

Beerling and Royer (2011, Nature Geosciences commentary) available from Beerling’s website, is an up-to-date look at CO2 and temperature for the Eocene. Hansen’s “Target atmospheric CO2” 2008 paper also is a multi-expert review, easily found with a Google search.

A fan of *MORE* discourse: This work further illuminates BEST’s vital role in studying fluctuations … there’s much more to climate-change science than just statistical analysis of data!

I personally would like to see a much more thorough account of all the actual heat and energy transfers through the climate system than what is available now, and a clear account of how CO2 changes those transfer processes. I share your earlier skepticism regarding the thermodynamic equilibrium approximation to the Earth climate system, and I am skeptical that a computed change in the “equilibrium temperature” has a relation to any change that might actually occur in the climate. It would make a great deal of difference whether there was more of a change in the temperature of the upper troposphere than at the surface and lower 100 feet; all kinds of changes are within the margin of error of the equilibrium approximation.

But what do you mean by “just statistical analysis of data”? That is plenty hard, especially when “statistical analysis” includes vector autoregressive modeling and fitting of high dimensional dynamical models to the data.

This empirical approach implicitly assumes that the spatial relationships between different climate regions remain largely unchanged, and it underestimates the uncertainties if that assumption is false.

As I’ve pointed out multiple times in the last year or so, BEST’s results don’t fit this assumption. The correlation structure of land temperature changes in BEST’s data over time. Either the assumption must be wrong or the BEST methodology is introducing biases with its spatial weighting.

The correlation structure of land temperature changes in BEST’s data over time.

Your assertion* above has also been a nagging thought here too. To say the correlation structure is unchanging of time would seem to be inconsistent with expectations of how the climate (and weather) might change with warming, e.g., more extremes, shifting of precipitation patterns, etc. If the climate is changing and has changed over the last few decades with demonstrable effects at the scale of the representative correlation length (100’s to 1000’s of kilometers) would it not be fortuitous that the correlation structure–a statistical concept–is not changed? Certainly there are suggestive regions (American Midwest?) where sufficient data exist and the idea can be tested. This can be examined with the data available.

The impact of the assumption, however, may not be too great*** in the BEST calculations as local kriging estimates can be relatively insensitive to the correlation structure (or semi-variogram) in comparison to kriging’s local error estimates–and BEST does not report the latter.** (Commercial contour packages, e.g., SURFER(tm), often just employ a fit linear variogram as the default, probably assuming most people will not be interested in the local error estimates.)

* I don’t recall your specific comments. Did you already go back and computationally look at the time dependence or is your thought present more like mine–intuitive.
** In the context of kriging this almost seems likely throwing the baby out with the dishwater, since BEST then turns around and tackles uncertainty in an ad hoc manner. Very curious…at a minimum one would like to the the kriging error (maps) vis-a-vis additional data needs.
*** I have no idea. That is why such loose threads can be frustrating, and compounded by premature PR efforts.

To say the correlation structure is unchanging of time would seem to be inconsistent with expectations of how the climate (and weather) might change with warming, e.g., more extremes, shifting of precipitation patterns, etc.

Indeed. At the very least, some areas are expected to warm more quickly than others. One would expect that to change the correlation structure BEST’s assumption says is unchanging.

I don’t recall your specific comments. Did you already go back and computationally look at the time dependence or is your thought present more like mine–intuitive.

I can find my earlier comments on the topic if you’d like, but what I did was a very simple test. I checked how well different regions correlate to each other (and to the globe) in different periods of time. It’s easy to do since BEST’s website has data for different regions available directly from it.

I have no idea. That is why such loose threads can be frustrating, and compounded by premature PR efforts.

Likewise. I’ve never formed an opinion on how important this issue is. I’ve just been bothered by the fact it hasn’t been addressed. Some time back Mosher responded to me by basically hand-waving it away. Zeke responded to me in a more meaningful manner, but he didn’t get back to me like he said he would (I assume he forgot).

I’d think people would be worried if a fundamental assumption in their error calculations was called into question. I certainly wouldn’t think they’d continue to rely on it without doing any checking.

…I’ve just been bothered by the fact it hasn’t been addressed. Some time back ….

I’d think people would be worried if a fundamental assumption in their error calculations was called into question. I certainly wouldn’t think they’d continue to rely on it without doing any checking.

That really just about sums it up.

I appreciate your answering in detail–likely time dependence of the correlation function has bothered me for a while and I have been surprised that the topic had not been broached. (I obviously had missed your comments–but missing comments is easy to do.) I was surprised when I first read Rohde’s papers (2011 then 2012).

Before looking at the BEST papers based on experience with geostatistics I anticipated a likely effort with kriging for each year or each year from a representative set of year, e.g., 1940, 1950, etc.–a lot of computation. I further anticipated that kriging error maps would be part and parcel of the routine output. Thus armed one could make comments on the time dependence of the correlation function/semi-variogram, changes in error magnitude and distribution over space and time, etc. All of this is well established ‘science’ and using parts of it would put it in a familiar form, easing review/acceptance. The paper and an approach of estimating error outside of the kriging thus strikes me as idiosyncratic, but that reflects my bias, i.e., is not necessarily a flaw. Still I am disappointed those topics to date have not be developed in BEST documentation and that no expert reviews have yet surfaced in public or posted at the BEST site on the BEST kriging.

Thanks for the briefing on how you arrived at your assertion. (I assume they are here or at Lucetia’s and can track them down.) Your back-of-the-envelope approach seems quite reasonable (and informative). I had toyed with a stripped down version of what I describe in the preceding paragraph, but got side-tracked with a wrinkle with the data as posted at BEST–not all of the posted material is QAed. (BEST has neither used nor QAed all of the data that they posted as processed, and so it is a modest emptor caveat when you use their data.) Murphy the All Powerful assured a conflict between that and my approach and I decided it was not worth the effort to retrench as BEST has the big Mo and bad ears anyway.

Finally, to me establishing the time behavior of the correlation would be paramount. It is essential to identify that which is demonstrated by the data and that which must be assumed in the calculations, if the calculations are to be meaningful in context.

I appreciate your answering in detail–likely time dependence of the correlation function has bothered me for a while and I have been surprised that the topic had not been broached. (I obviously had missed your comments–but missing comments is easy to do.) I was surprised when I first read Rohde’s papers (2011 then 2012).

I’ve been disappointed by the lack of response on a number of points. Even when I raised basic issues like station counts (there were mismatches in papers), I was seemingly ignored. I’ve tried figuring out how seasonality was handled, and again, nothing.* Then when I saw a key assumption underlying their calculations of errors is apparently wrong…

Thanks for the briefing on how you arrived at your assertion. (I assume they are here or at Lucetia’s and can track them down.) Your back-of-the-envelope approach seems quite reasonable (and informative)

I think those are the only two sites I discussed it at. I don’t remember just which posts it was on, but I do remember when I brought up the concerns again later. Zeke said he’d ask one of the authors about it (while acknowledging its importance), but I think he forgot. And then Mosher gave a non-responsive response. I never heard back when pointed out he hadn’t addressed my concern. Ultimately, the only useful thing to come from that exchange was Mosher saying:

The ASSUMPTION is that this structure stays the same going back in time. under that assumption you can calculate an uncertainty due to spatial coverage. without that assumption you have bupkiss.

If someone raises a concern that could mean “you have bupkiss,” would you really go five months without doing anything to address it?

*It’s possible I just missed it in the code BEST released, and if not, that it is included in the new code dump. I can’t tell because the new code is packaged with data in a 500+ MB file I can’t download on my current connection.

Thats really a question for Robert Rohde. I’d suggest emailing him. I’ve been a tad less involved with the Berkeley project over the last 6 months due to work, unfortunately (unlike those lucky academics, I don’t get paid to do climate science :-p).

Thats really a question for Robert Rohde. I’d suggest emailing him. I’ve been a tad less involved with the Berkeley project over the last 6 months due to work, unfortunately (unlike those lucky academics, I don’t get paid to do climate science :-p).

I’ll probably try getting in touch with him once I’ve examined things more. I always thought this would be something that got cleared up (a bit) before any papers got published. Now I guess the only way I’ll get an answer is to do some work myself.

But for the moment I’m a bit sidetracked. I need to download the newest code/data dump and see if I can figure out why BEST seems to have a more a notable seasonal cycle than any other major temperature series.

Which reminds me, is there an updated version of the BEST global record? When I go to the Results Summary page, I get directed to this file which only goes to November of 2011. All the regional files go to July 2012. I’d like to use the most up-to-date data, but it’s hard when I can’t find it for one series.

Thanks Zeke. Do you know why the two files have notably different values? I don’t recall seeing anything posted about a change in methodology, but there is a significant non-random difference in the two. To show what I mean, here is the difference from 1900 on. The two files do say they use different amounts of time series (36,853 vs. 37,145), but an increase in series of less than one percent shouldn’t cause a difference like that.

And it’s not just the temperature values which have changed. The uncertainty ranges have changed quite a bit as well. The combined effect of this is a full 13% of the current values since 1960 falls outside the 95% uncertainty ranges of the old series (as opposed to the 0.1% before 1960). I’d consider that a pretty big change, and it’s disturbing to see both files on that website at the same time with no apparent explanation for the difference.

Incidentally, those uncertainty ranges are screwy. I challenge anyone to explain to me why there should be a huge step change around 1955. Or why the new data file has huge uncertainy excursions shortly after that weren’t present in the earlier version. Seriously, these are the uncertainty values of 1968 in the most up-to-date file:

Since I was talking to mwgrant about the subject, I decided to do a quick examination of the correlation of some BEST data. I picked North America, South America and Europe as my regions to examine. I decided to focus on data post 1900 because of coverage, and I choose continents since they should have the most consistent correlations. Here are graphs showing correlation for ten year periods:

Similar variations are seen if you use different period lengths or examine all periods (increasing period length with time). Larger ones will be found if you compare smaller regions (or include earlier years), including ones that are distinctly non-random.

Speaking of which, something caught my eye. A while back I noticed some series still had clear seasonal cycles after BEST “removed” them. That made me wonder how BEST handles seasonal variations. I tried but failed to find out (and got no response from Mosher when I asked him). I now wish I had pursued the matter further as this is an ACF graph for BEST’s North America temperature series. It looks like there is an obvious seasonal cycle.

(None of this is conclusive. I put it all together in about 20 minutes. Still, I think it’s enough to merit concern.)

This just a quick response to the graphs. On first blush, yes, I would think register on a Bupkiss Sensitivity scale. That said a couple of observations from this quarter.

First, the range and correlation length in Figure 1 of the methodology supplement are on the order of 3000 meters and a little more than 1000 meters, respectively. I expect most of the pair distances in the intercontinental correlations to exceed that or lie in the upper part of the range, i.e., they do not shed a lot of light directly on the interesting part of the spatial structure. However, the figures do show changes between the intercontinental correlations changing in time and this is enough to merit attention in the paper.

Working with states data in the US, e.g., the Midwest, one could examine the time behavior of the correlation function/semi-variogram on a regional scale. In addition one could look at effects of decreased sample density in time and clustering on the correlation. If local kriging errors were a part of the effort one also could start to get a read on the spatial distribution of the error over time–an insightful handle on on just what can we say when we go back in time, at least from one established documented perspective. I would hope that there are things like this to varying degrees of completion in project files. BTW note the following text lifted from p. 9 of the May 2012 ‘Averaging Process Paper’:

NOAA also requires the covariance matrix for their optimal interpolation method; they estimate it by first constructing a variogram during a time interval with dense temperature sampling and then decomposing it into empirical spatial modes that are used to model the spatial structure of the data (Smith and Reynolds 2005). Their approach is nearly ideal for capturing the spatial structure of the data during the modern era, but has several weaknesses. Specifically this method assumes that the spatial structures are adequately constrained during a brief calibration period and that such relationships remain stable even over an extended period of climate change.

I’m not sure what I’d make of that. On the surface it suggests considerable thought was spent on time behavior, but there is not clear stans-alone development of the topic.

Regarding attention to time dependence of correlation in the papers : 1.) As Mathew R Marler says below “alternative models for the spatial autocorrelation could be the topic of someone’s PhD dissertation”, and I am reasonably confident that has also been evident for a while to BEST [e.g., see above]; 2.) Mosher acknowledged bupkiss thoughts regarding the importance of the time independence assumption; 3.) you raised the issue as an outsider to members of the BEST team earlier; and 4.) Marler shares some doubt on the topic below. These alone suggest that as a matter of due diligence at least some additional qualitative discussion should appear in the final public BEST documentation record, but a some point before that question needs airing before the scientific community. Hints are there, but development is incomplete.

The canned response of course is ‘well it is a work in progress.’ Unfortunately the chosen approach to publishing has been the modern paradigm: one of PR first, throw the incomplete preliminary work out for eliciting response second, then third, clean up. To me it doesn’t seem to work very well: ‘it is a work in progress’ too easy of an out that encourages creeping rationalization. Practically, step 3, cleanup, is a killer for all parties involved because by that point PR issues have sufficiently roiled the waters. IMO this approach this really tends to undermine healthy skepticism.

First, the range and correlation length in Figure 1 of the methodology supplement are on the order of 3000 meters and a little more than 1000 meters, respectively. I expect most of the pair distances in the intercontinental correlations to exceed that or lie in the upper part of the range, i.e., they do not shed a lot of light directly on the interesting part of the spatial structure.

Indeed. The last time I did a similar test I used countries instead of continents. In some cases, the results were notably more disturbing. Even tests between countries and the continents they’re part of showed similar results. I don’t think there’ll be a change at finer resolutions that negates the problem. I’d test myself but without Matlab, I’m not sure how much work it’d be to do.

I would hope that there are things like this to varying degrees of completion in project files.

I’m cyncial, but I don’t think there are.

Practically, step 3, cleanup, is a killer for all parties involved because by that point PR issues have sufficiently roiled the waters. IMO this approach this really tends to undermine healthy skepticism.

That step is especially difficult here given BEST’s bad PR decisions. I’ve highlighted exaggerations in BEST’s website and at least two of Muller’s op-ed pieces. If you consistently exaggerate your work, how can you be expected to address issues that diminish your work?

Anyway, BEST has published their results paper with a stated assumption underlying their uncertainty calculations that is known not to work. Either it isn’t true or their results aren’t in line with it. I don’t get that.

Indeed. The last time I did a similar test I used countries instead of continents. In some cases, the results were notably more disturbing. …

Yes one would expect that given the longer range time variation. It is just nice to exercise things at the same scale as the correlation length and range–and you had done so.

…Even tests between countries and the continents they’re part of showed similar results. I don’t think there’ll be a change at finer resolutions that negates the problem.

Tests with multiple states or countries (Europe) give you information where pair separations fall in the range and should be most telling. In the simple case go out much further and you are essentially in a distance regime where there is no spatial correlation and drift might be evident. The game for kriging is in closer. That’s where you chase detail.

I’d test myself but without Matlab, I’m not sure how much work it’d be to do.

Well, again, I think what you have done is sufficient for raising a flag at this time. Zeke suggests you contact Rohde. Well that might clear up* things in a hurry, or at least remove some chaff from the mix. Otherwise it is slog out the calcs–with no guarantees on whether it goes anywhere. With Zeke checking in on it today, it does seem like Rohde or calcs. One can empathize with “don’t get paid to…”, huh?

Well, again, I think what you have done is sufficient for raising a flag at this time. Zeke suggests you contact Rohde. Well that might clear up* things in a hurry, or at least remove some chaff from the mix. Otherwise it is slog out the calcs–with no guarantees on whether it goes anywhere. With Zeke checking in on it today, it does seem like Rohde or calcs. One can empathize with “don’t get paid to…”, huh?

That’s what I’m worried about. This isn’t my job. It’s not even in my field of expertise. The idea of working through the calculations to figure out how this issue impacts their uncertainty estimates is… daunting. The fact I’d have to rewrite nearly everything they’ve done in another programming language just makes it worse.

I don’t hold much hope for what could come from contacting Rohde. I can’t imagine anything he could say that would explain why he hasn’t tested this before. And if he has tested it before, I can’t see why it would never have been discussed. That, plus the fact I’m really not the right person to pursue this matter, makes me hesitant to e-mail him.

I probably will anyway. I just want to figure out what’s with the seasonal cycle in BEST before I do. The fact there’s a clear seasonal cycle at continental and global scales shocks me. Kriging requires correlation calculations. A failure to remove seasonal cycles will necessarily impact those correlations. I doubt it would change the overall results, but it could change area/regional trends, and it would certainly impact the uncertainty calculations.

(To be honest, I’m focusing on that issue because it’s simpler. I don’t feel quite as overwhelmed by it.)

I just want to figure out what’s with the seasonal cycle in BEST before I do. The fact there’s a clear seasonal cycle at continental and global scales shocks me.

Good choice. For now an assumption on the time independence of the correlation function just may be one of the things that people have to accommodate–ideally BEST eventually will get upfront with a discussion about that aspect. When tackling time dependence we’re talking about a lot of calculations and postprocessing–even when clever. Also as you are undoubtedly aware there are a number of practical issues that have be ‘resolved’ if one has to go by a route other than using the BEST code. A number of folks will be interested in what you have to say on the seasonality question.

[As an aside, I still hope to get back to my geostatistics regional exercise with the US data or a subset. However, that part-time effort can’t be considered for about 3 months–other plans and motivation.]

Kriging requires correlation calculations.

This is not a problem when using canned geostatistics software or even in R–but it should not be treated as plug and chug. There is some art in it, and one should have a certain amount of comfort with geostatistical concepts before diving in. I recommend R or an off-the-self-code such as GSLIB (a suite of command-line codes). To me there is no need to duplicate the BEST approach in detail–the objective is to further study the spatial correlation and related issues with BEST data, and not to verify the BEST results. BEST will produce verification in time as a part of their QA or they won’t. That’s their problem/joy.

For me a study of spatial correlation is much more easily handled using documented non-custom codes, e.g., GSLIB. [There could be dataset size limitations, but if one is looking at things on regional scale and having modest goals in the study that is likely not a problem.]

A failure to remove seasonal cycles will necessarily impact those correlations.

[Shudder.]

I doubt it would change the overall results, but it could change area/regional trends, and it would certainly impact the uncertainty calculations.

[Shudder, again.] I don’t think results from a ‘routine’ kriging exercise would be worth the cost of the electrons used in the calculations. Maybe clever minds can extract something from it but I suspect that quite a bit of conceptual underpinnings would have to be developed and/or laid out–much more than seen to date.

Again, thanks for sharing your thoughts on the time dependence. Bon chance.

I’m starting to wish I hadn’t even started looking into BEST. It seems I keep running into obvious problems. Even ignoring the difference in files I raise there, there’s a huge step change in BEST’s uncertainty ranges. How in the world could those shrink by more than half in the space of a few years? It’s not like there’s a notable change in station density around that point or anything.

A number of folks will be interested in what you have to say on the seasonality question.

I’m still dumbfounded at the idea BEST has a stronger seasonal cycle than any other record I look at. If it’s new and improved methodology, how can it do a worse job on such a (relatively) simple part? And how is it nobody ever noticed?

To me there is no need to duplicate the BEST approach in detail–the objective is to further study the spatial correlation and related issues with BEST data, and not to verify the BEST results.

I don’t want to duplicate the BEST approach. The problem is I can point out problem after problem, but eventually somebody will have to look and see how much they matter. And if nobody on the BEST team cares about the points I make…

I’m starting to wish I hadn’t even started looking into BEST. It seems I keep running into obvious problems.

I have felt that way a number of times, but keep coming back. I happen to think that the application of geostatistical method could really bring into to focus and facilitate the consideration of uncertainty. But BEST does not actually apply any geostatistics to the uncertainty, instead taking more of an add-on approach (outside of the kriging). I suspect flaws there this may well lead back to the issues you are poking. Or maybe it goes back to the ‘real’ data, i.e., the QC ‘nonseasonal’ stations numbers input into the BEST averaging process or the raw data underneath those ‘data’. It certainly seems some substantive changes have occurred between March and December of 2012.

One thing that comes to mind for me is that you are looking at the average of the gridded values in the case of the of the anomaly and post-kriging processing for the estimated uncertainties. Unwinding the cause of what you have picked up may require a little determination and luck. (Hence , your comment?!)

For kicks and giggles, I pulled the numbers you looked into R and played around a little–old habits die hard. Here are some observations from that:

1.) For the record I reproduced your figures.

2.) The number of series and observations have increased from March to December. However, this could well be a ‘net’ increase with a number of observations in the March compilation removed. Also maybe some ‘outliers’ have now been culled from the additions. Who knows, just got to assume it is documented.

3.) I plotted the new anomalies (December) versus the old (March) anomalies. Scatter was very low and the plot is very linear as one would hope. [The adjusted r^2 of the regression is 0.9888 with 1341 degrees of freedom.]

4.) I also plotted the new uncertainties versus the old uncertainties. This has much more scatter which is also a little lumpy in appearance. [The adjusted r^2 of the regression is 0.7495 with 1341 degrees of freedom.]

5.) Scatterplots of uncertainty versus anomaly for the two data sets look similar. One interesting feature in both is that the data appear to be clustered into two diffuse but distinct regions. High uncertainty has some tendency to be associated with lower anomalies. Also there is much more variation in the uncertainty for the higher uncertainty cases. Upon reflection these plot provide essentially the same information as your time plots–where one see two clusters in contrast to the step change. Color scheme can enhance the contrast.

So in my own meandering way I arrived at the same point as you and grok what you are pointing out. Everytime I look at something there is a glitch or problem. Instinctively or thru boneheadedness I can not shake the idea that the best thing about BEST is an emphasis on geostatistical methods and the the worst thing is not following through with them regarding the treatment of uncertainty–instead patching a composite scheme on top of gridded data.

Unwinding the cause of what you have picked up may require a little determination and luck.

I suspect it will take more than a little of both. A step change in the uncertainty levels suggests a design issue or bug, and finding those in a system you’re not familiar with is difficult. The seasonality issue will require going through BEST’s procedure for removing seasonal cycles as BEST doesn’t describe how it’s done. As for the files changing… I have no idea. I might find an explanation on my own, but if I do, it will be purely because of luck (or because it was documented).

1.) For the record I reproduced your figures.

That’s always good to hear. I’d hate to find out an “issue” I discovered was really just a screw up on my part.

Who knows, just got to assume it is documented.

I remember seeing tons of comments by Steven Mosher about documenting changes. I hope that means there’s documentation on this.

One interesting feature in both is that the data appear to be clustered into two diffuse but distinct regions. High uncertainty has some tendency to be associated with lower anomalies.

That’s an interesting observation. I hadn’t noticed that before. I’ll have to give some thought as to why that happens. I know I’ve seen that sort of thing arise from improper uncertainty calculations, but there are other possibilities.

Instinctively or thru boneheadedness I can not shake the idea that the best thing about BEST is an emphasis on geostatistical methods

I think they definitely have a good idea here. I’m just starting to become more and more convinced they implemented that idea poorly.

The last figure for anomalies shows pretty good consistency in the two sets of anomalies–ultimately calculated from the kriging part of the exercise, and the next to last figure (uncertainties) suggests the the uncertainty calculations changed more between March and December. Certainly the use of the smoothed density nicely shows where clumpiness is lurking. But it is risky to speculate.

Oh well, end of exercise. [I hope the links work after I hit the post reply button.

Finally,

I think they definitely have a good idea here. I’m just starting to become more and more convinced they implemented that idea poorly.

That is where I am at and going more in that direction as time passes. The PR stuff is just so unfortunate, as it may eventually preclude or hinder shaking out the bugs.

mwgrant, those are nice graphs. I’ve always been terrible with making things look good, and sometimes I hate it because a good graph can make things very easy to see.

The PR stuff is just so unfortunate, as it may eventually preclude or hinder shaking out the bugs.

For all the PR stuff, I have to wonder how much feedback BEST has gotten. The entire purpose of the pre-releases was to get more eyes to examine the work. I don’t see how things like strong seasonal cycles could slip by any real examination. Heck, Steve McIntyre made a post (or two?) questioning BEST’s handling of seasonal cycles when the first pre-releases were made.

I guess the question is, did they not get good feedback, or did they not listen to what they got?

” Muller has nothing to do with the journal. Muller didn’t even know about the journal until it was presented as an option. “

So tell us all please — Why was this paper published in this shockingly obscure, brand-new journal? Was it actually Peer-Reviewed (notice the initial caps please) — was it really sent out in its entirety to at least three world-class respected experts in the necessary fields, let’s say climate and statistics and computer modelling for instance, and thoroughly vetted, revised, etc before publication? Or was it reviewed by a single editor? and if so, whom?

Please only reply with what you known for certain for yourself from your actual personal experience. If you are going to relate “what you’ve been told” — please tell us what you have been told and by whom….supply quotes, please.

So tell us all please — Why was this paper published in this shockingly obscure, brand-new journal? Was it actually Peer-Reviewed (notice the initial caps please) — was it really sent out in its entirety to at least three world-class respected experts in the necessary fields, let’s say climate and statistics and computer modelling for instance, and thoroughly vetted, revised, etc before publication? Or was it reviewed by a single editor? and if so, whom?

1. Why was it published? The editor and the reviewers thought it was important work and good work.
2. Was it Peer Reviewed. Yes. There were three reviewers. I read the reviews and then checked our final draft to make sure that we addressed the points that we thought needed to be addressed.
3. Was it sent out in it’s entirety? Yes. I prepared the final draft.
4. Was it sent to 3 world class experts in climate and stats? The reviewers identities are not revealed so that I can only infer from their comments. They understood what we were doing and made helpful suggestions. This was in contrast to previous reviewer comments at other journals who seemed to struggle with kriging, so a geostats journal seemed the better fit.

I can assure you from personal first hand knowledge that “Muller didn’t even know about the journal until it was presented as an option.”

Mosher — Just to clarify for us — you state “geostats journal ” — yet there was nothing ‘geostats’ about this journal at the time of your submission except the name – which is the only thing that existed. It had no prior publication – your paper is the ONLY paper it has EVER published. Your paper is published in Volume 1 Issue 1.

Why in the world would a group such as BEST want to publish this paper of which they are so proud in a journal with ZERO prestige, ZERO history, ZERO recognition, gee….ZERO anything?

Who does BEST think is going to just accept that this until-this-moment-nonexistent journal can be trusted to have peer-reviewed this paper to the full satisfaction of the greater field of Climate Science?

How does this move, publishing in a brand-new infant journal, forward BEST’s purpose of putting all the doubts and controversy about the temperature record finally to rest.

“2. Was it Peer Reviewed. Yes. There were three reviewers. I read the reviews and then checked our final draft to make sure that we addressed the points that we thought needed to be addressed.”

It would be interesting to know why the reviewers from journals that had actually published scientific papers in the past rejected BEST. It seems that it was submitted to multiple journals and received multiple thumbs down, before going to G&G.

sher — Just to clarify for us — you state “geostats journal ” — yet there was nothing ‘geostats’ about this journal at the time of your submission except the name – which is the only thing that existed. It had no prior publication – your paper is the ONLY paper it has EVER published. Your paper is published in Volume 1 Issue 1.
############
Somebody has to be the first. There was a choice between 2 journals where we could be assured that the reviewers did not require tutorials in kriging.

#################

Why in the world would a group such as BEST want to publish this paper of which they are so proud in a journal with ZERO prestige, ZERO history, ZERO recognition, gee….ZERO anything?
##################
1. prestige didn’t matter to guys who have nobel prizes already.
2. history? we enjoyed the idea of setting a standard. being first was an honor.
3. Recognition? only seems to matter to skeptics who argued that peer review wasnt important anyway.

Basically, we liked the idea of being judged on the quality of the science by people not tainted by the kind of nonsense we have seen in other places.
####################

“who does BEST think is going to just accept that this until-this-moment-nonexistent journal can be trusted to have peer-reviewed this paper to the full satisfaction of the greater field of Climate Science?”

The researchers already using the data are happy to cite the journal. They didnt need peer review to tell them the result was sound. They needed peer review for the check box that it is. The only folks who object are people who dont like the answer. Actual folks who want to use the data and have a citation are happy. go figure.
#####################
“How does this move, publishing in a brand-new infant journal, forward BEST’s purpose of putting all the doubts and controversy about the temperature record finally to rest”

The doubts were put to rest when the data and code was published. That is the acid test. From my perspective ( see what Ive said elsewhere) peer review is a check box. Now that we have the check box, of course skeptics change their tune and suddenly think that being the 1000th paper is somehow most important. In short, when skeptics had no peer reviewed papers, they said peer review didnt matter. Now that we have peer review, the same folks want to move goal posts they thought were unimportant to begin with.

In fact, you might think that a new journal was selected merely to point out to people that skeptics will change their tune at the drop of a hat.

Rather amusing to watch “skeptics” who can’t say enough how much they discredit peer review, the way the system is based in impact factor, the way that the CAGW-cabal controls the science by only recognizing studies published in “elite” journals, run themselves in circles to discredit this publication.

The whole BEST initiative is a Rorschach test – as are opinions about those involved: Muller, Mosher, etc. are viewed as completely different people on the basis of the product of their science. It’s a good thing that so many “skeptics” are engineers – it gives them the skills to “reverse engineer” their opinions based on how they feel about the outcomes of science.

Those of you who are publishing CliSci researchers –> do his answers seem at all reasonable to you on a professional basis?

Dr. Curry — care to weigh in on this point about the decision of BEST to publish this particular paper in this particular journal?

Does anyone else have any more questions to place before Mr. Mosher on this issue?

PS: I wish to thank Mr. Mosher for taking the time to answer these questions here. The greater CliSci community, of course, will be the judge of whether or not this paper as published in this journal has been actually and factually properly peer-reviewed and whether or not it will be considered part of the accepted literature.

The failure to get the BEST paper published in a climate science journal goes against the skeptical view that pro-AGW papers always get published because of inside deals. It is not that simple. This speaks well of the filtering process in those journals and against bias. This paper was about as pro-AGW as they get, and wasn’t published in a climate science journal. Why it wasn’t published there, we can only speculate, but I think the climate science journals are not so interested in new statistical approaches, and the signal extraction was quite basic not adding any novel scientific insights.

Kip Hansen @ 7.19: even the most august and ancient of journals had a first issue and a first published paper. The issue is surely the quality of the published work, which I am not qualified to judge, but which from the discussion here seems to have merited publication. In years to come, the new journal might be highly valued and revered for its first published paper. Or not, let’s deal with the paper on its merits.

I will help you Jim D. It did not get published in a Team climate science journal, because the Team does not like the publicity seeking Muller, trying to hog their spotlight. Wouldn’t you have a good laugh if Spencer, or Lindzen published something in the G&G, aka journal of last resort? It’s amazing how you people can spot hypocrisy everywhere, but in your own house.

“Steve, you say that you prepared the final draft. Why is it that you were not an author? Also, do you care to mention how many journals rejected Best before settling on the present journal?”

I am not an author because I didn’t do any of the writing. The paper was written by the time I joined the team. Zeke and I basically sat in meetings and made suggestions. one of us ( or maybe it was robert ) suggested the c02 fit to temperature. In my book just making a suggestion or wordsmithing here and there doesnt mean you are an author. For the final draft my role was pedestrian. First I looked at all the reviewer comments and made sure that the valid questions got answered and that errors got corrected. Then making sure that the paper met the guidelines, citations etc, abstract, Secretaries are not authors in my mind.

Journals. It was submitted to one journal as I recall. That journal wanted to have the methods paper published first before they would consider the results paper. Since no other surface temp results paper even has a methods paper that seemed a bit odd, it was also odd since the results paper confirmed what was already known. go figure. confirms known science. extends the record back beyond 1850. ..

manaker. you are welcomed to disregard the bits about the c02 fit. For me that is an minor part of the paper. As i recall it came about from a discussion we had about the data prior to 1850. One of us( me or zeke or robert ) suggested looking at the relationship between volcanoes and that data. In short, is that early data confirmed by anything else. that grew into the c02 fit exercise. Obviously some people think its too simple. others dismiss it as curve fitting. In steves world the results paper would just be the results.. the DTR stuff was way more interesting.. the c02 and amo stuff.. not that interesting. but this is not burger king and i dont get things my way.

sure kim. I just look at the structure of the arguments. attribution is a thorny messy epistemic briarpatch. its the area where you are most likely to see fundamental epistemic issues raised. circumstantial evidence at best. but its evidence.. kinda like the glove in OJ

Twisted logic, Don. Publicity seeking won’t have disqualified this paper. Hansen and Trenberth have no trouble getting published. No, it is that the content probably didn’t advance the science, only confirmed it.
capt.d. doesn’t see it as pro-AGW, but we just had a debate here about the log(CO2) fit and that they say there is no need to assume other sources of long-term natural variation which looks like a pro-AGW comment that JC would never approve (hence not being an author).

JimD, “capt.d. doesn’t see it as pro-AGW, but we just had a debate here about the log(CO2) fit and that they say there is no need to assume other sources of long-term natural variation which looks like a pro-AGW comment that JC would never approve (hence not being an author).”

I don’t think they said CO2 and aerosols done it, game over. They did a fit. There are others things that can cause that fit. Remember since that fit was made, the magnitude of the aerosol forcing has been questioned, new stratospheric impacts discovered and the odd diurnal trend thingy has appeared. There is plenty of puzzle left to finish.

capt. d., they also had no qualms about the 3.1 C sensitivity that came out of the fit, which would have been a red flag to a non-AGWer. Regarding aerosols, yes, they probably assumed that those and the other non-CO2 GHGs roughly cancel, which agrees on the whole with the IPCC AR4 forcing.

Your logic is absent, Jim D. Hansen and Trenberth have no trouble getting published, because they are leaders of the Team. The Team do not like Muller. They don’t like publicity seeking dabblers in their climate science coming along and claiming to have done it better than they have been doing it all their lives. Hansen, Trenberth, Mann, Jones et al could have easily gotten this paper published. They would not have had to go to the journal of last resort to get their box checked. And references to the G&G paper in future climate science papers will be very few and far between, Jim D.

All of the above is exactly what the skeptics mean when they say it’s pal review, not peer review. Mosher and his BEST team have a beef with the big Team, not the skeptics. It wasn’t the skeptics who rejected their paper and caused them the humiliation of shopping their paper around, until they landed in the journal of last resort.

Don, I will agree with you on the fact that the authors are not climate scientists, which makes it hard for them to publish in a climate science journal. They haven’t really broken new ground in that area with this paper. Reviewers who are in the field would probably have seen some naivety in those aspects of the paper. Sometimes you may give the lack of background or breadth a pass if it is a new Ph. D. student, but here they probably figured that the material would have a good chance elsewhere.

Don, dataset-introducing papers tend to get very high citation counts. I am sure they won’t have any problem whatever the journal. The journal’s impact factor will become disproportionately large if this is one of their only papers.

That’s some halfway quais-plausibe spin Jim, but the reality is that G&G has zero credibility. And Mosher’s story that the BEST team is proud that their rejected paper landed there is ludicrous. And everybody knows it.

fwiw, probably not much, I like the decision to be the first to publish in a new journal. This paper will be highly cited, and represents really good work on a massive scale. Now everyone will want to publish in that journal.

One paper can not possibly address every technical criticism, and I agree with Brandon Schollenberger and mwgrant that the assumption of constant spatial correlation across years and season is dubious. I expect that alternative models for the spatial autocorrelation could be the topic of someone’s PhD dissertation. Based on my limited experience, trying to estimate some seasonal variations in the spatial autocorrelations of a large set of spatial domains would increase the computation time by a factor of about 100, and produce coefficients in the model with large standard deviations — if the algorithm converged to a solution at all.

@HR: Kip and others: Rather than trying to find dirt in the publication process look at the paper and give us your assessment of the science. It would be far more useful.

Unless “Kip and others” are “peers” in the sense of “peer review,” one could argue that it is far less useful.

The Internet has made it possible for those who are clueless about how climate works to generate reams of unscientific garbage about climate. The point of peer review is to weed out all that garbage so that what remains doesn’t drown those who are trying to “assess the science” as you put it.

This comes with the risk of monopolization of ideologies. With that in mind I’m all for allowing competing ideologies to express their views. But unless they provide their own competent peer reviewers these alternative perspectives are going to be lost in the vast unrefereed space of alternative theories. Who has time to evaluate every one of those when most of them are junk science?

I believe Mosher’s point. Kriging may be a relatively unknown method in the geosciences whereas its becoming a lot more common in other fields and is better known to statisticians. Kriging does seem rather good at providing reliable surrogates in the presence of noise in our applications anyway.

Geostatistics have been applied in the geosciences for decades, and this includes many peer-reviewed publications in AGU journals and elsewhere. My first exposure, a Battlele determination of areal radon fluxes, was 1980 and the practice was established by then. Much of the foundational was in the geosciences—mining. One thinks of Krige, Matheron, Journel, Delhomme, Clark. … Soviet work (Gandin) in 50’s and 60’s meteorological work. Working both from theoretical and practical perspectives Journel developed FORTRAN code for wider use in the 70’s. BYW I also believe the Soviets are also now credited with some leadup work work in the 1940’s, but that may be bad memory on my part.

Geostatistics, while a well established and mature discipline, is still a specialized niche and it may be appropriate reviewers just were not selected. [This is how I read Mosher’s comment.] Another possibility is that Rohde’s treatment of it, I suspect, is not what those who use it–meaning competent users but not researchers in the discipline–would expect. Also the writing style is dense and a little ambiguous in places compounding the effort, say if the reviewer was a user/applier as opposed to a more mathematical researcher. In any case you really should dismiss the idea that kriging is new in the geosciences That just isn’t the case.

So Kriging is unknown to JGR? And if Mosh is correct, is it bad etiquette for a journal to ask for methods?
As I asked near the beginning of this thread, Why did JGR (or everyone else) reject this paper?

No, i.e., Kriging is actually quite well known to those folks and many other for decades. To suggest otherwise in a gentleman’s discussion is to invite pyrrhic defeat sans Phoenix options. It is better not to go there. More important, evaluation of BEST’s merit/contributions should be judged on what it brings to the table and not what it is perceived* to bring to the table. This includes where it is and where it is not innovation.

For example below in italics is a snippet from an online search of volume 80 1975 random 70’s vintage article I turned up by is less than a minute. The literature is laden** with articles of that vintage and earlier.

For the record all I trying to get straight in peoples’ minds here is the simple fact that kriging is not new and don’t be distracted by the bauble–kick the geostatistical tires and remember that BEST’s value drops once you drive it off of the Berkeley lot. (We have to live with it.) A deeper point is that if you want to assess the work then one perspective is to obtain reviews by qualified practitioners who are who comfortable in this established niche. Don’t speculate and or buy in to speculation. [Maybe such will be part of an entire BEST public record some day. This would include JGR and GandG reviewer comments.]

The BEST work [choose one answer, a-h]:

a.) BEST does have merit
b.) has been and will continue to be hyped some
c.) has some loose threads/odd ends in regard to ‘published’ data, theoretical development, implementation, and documentation
d.) has promise to improve estimation of global temperature(s)
e.) has promise to improve understanding of errors in the estimation of temperature
f.) is a work in progress, so despite the hype some slack* is warranted
g.) is not as good as sliced bread
h.) all of the above

Note that my comment above mentions even earlier work in the field. HTH

[Note to JC – can’t leave the poppy fields, right now.]
_____
* Perceptions are shaped both by the work/papers and by comment/review. Not casting dispersions in any one direction, lust noting that perception is important and has many contributors.
** Well, has quite a few.

Your bit –> “It would appear that Kip hansen wants to engage in ‘re defining the peer reviewed literature’ Gosh, You actually get to see skeptics doing the very thing that Jones was criticized for.” is not professional.

You and the rest of the crew at BEST must have known that the decision to publish your paper in SciTechnol’s as-yet-unheard-of journal, Geoinformatics and Geostatistics, would raise questions, and it has…..There is no sense pretending to be upset by something you expected all along, or to try and blame the questioners for having those questions. It was your (BEST’s) decision as you have explained. Striking out at those who question the decision is uncalled for.

I asked the questions plainly and outright so that you (here representing BEST) would have a chance to publicly answer them. I asked politely, and for the most part, you have answered politely (with the exception of this lapse).

It is nothing new to have a paper questioned based on where it is published….this happens every day and it sometimes perfectly valid. Quite frankly, not all journals are equally trusted to perform rigorous peer-review or trusted to utilize peer-reviewers of the same quality. They obviously don’t, they can’t, it is not anyone’s opinion — simply an acknowledged fact in the scientific world.

Time will tell if this decision of yours works for you or not. Certainly not up to me.

Tom.
the climate technically speaking does not exist. the results paper gives you an estimate of the past. It isnt a forecast. If you want to use it to make a forecast or a prediction then you will have to make various assumptions. A naive prediction would be to say that feb 2013 would be like the average feb plus a boatload of error. feb 2033? add .3C give or take.. feb 2103. add 1-3C.

Steve, you say, “Kip. I’m not upset. im amused. amused because skeptics have responded as predicted.” Steve, given your history in climate data issues, you level of certainty is only partially justified. Do you understand that CAWG skeptics are equally amused because you responded exactly as predicted.

‘Do you understand that CAWG skeptics are equally amused because you responded exactly as predicted.”

yes, I bet that each an every one of them said “wait! if we move the goal posts on peer review, mosher will throw the “re defining peer reviewed literature” quote at us. Yup, I believe each and every one of them saw that coming and stepped on that rake.

Skeptics are treating you better than the Team has. They like you less than they like Muller. They stuffed your paper and you ended up with the least reputable journal that you could have found. Aren’t there other geostats journals, Steven? Do I have to Google it for you? Why don’t you take this up with the boys on realclimate? I bet they would find you amusing and have quite a bit fun, at your expense.

Mr. Mosher — Will BEST agree to share the anonymous reviewers comments with the world? Will you put them up on the BEST site along with the final published version of your paper?

Such an action might help support your claims of three independent truly peer-reviewers and that “.. I looked at all the reviewer comments and made sure that the valid questions got answered and that errors got corrected. “

Kip again why does that matter. If they were to do that what’s to stop you labelling it Pal Review and then demanding names and so on and so on. We get it you don’t trust climate scientist with a different opinion to you. Let’s move on to something more broadly enlightening.

yes. its trivially true. The important bit is that H2012 doesnt attempt to establish that the distribution has widened and that using an anomaly step in your method can mislead you and give a result that is an methodological artifact. The K-S test on daily data would probably be better suited. just a hunch

I’m not sure H2012 did that trivial thing. What it showed was if you decadally average the past half dozen decades then the last three decades have more warm extremes. Those three decades happen to be the three warmest decades of of the six but there are other features of those decades that might explain the difference in heat extremes in those decades. Hansens simple decadal averaging has actually hidden all opportunity to identify what’s causing those extremes and allows the simple acceptance of assumptions.

My reflex reaction was ” he didnt prove that the distribution widened” so, like, duh, mosher, he wasnt trying to prove that. Once that light bulb went on, the whole discussion becomes much more fruitful.

They only said it, and had a figure showing that it was dependent on the choice of base period and then explained why the base period they chose 1951-80 was the appropriate one. So no, they didn’t “prove” it. No way, but they did demonstrate that this happened given the surface temperature records from 1951 to 2011.

Whatever Hansen may or may not have said, this appears to be his position now:

One must be careful not to misinterpret the Hansen et al. [2012] paper as indicating a change (broadening) of the variance of the temperature distribution with time. According to the authors of that paper [private communication], it was not their intent to study that issue and their paper did not address it, although it has been misinterpreted to do so by some who did not closely examine their mathematical approach.

While Eli can argue what the paper said, I don’t think he should try and argue with the author’s intent, nor with the author’s opinion about what their paper says in this matter in light of Zeke’s analysis.

Rather than argue over issues relating to literary interpretation and authors intent that the original author appears to disagree with Eli on, I’d suggest Eli spend time looking at substantive issues.

One of these IMO is whether the distributions in Fig. 7 are varying in a statistically significant fashion. Zeke says they don’t, but he doesn’t put this to number. I would have been interested in seeing the Kolmogorov–Smirnov test applied here, for example, as a more quantitative way of looking at his results.

There might still be significant differences, but they were orders of magnitude smaller than the variance changes using the original method. I didn’t do a rigerous test, but merely stated that “If we take this approach for every decade, we get the frequency density plots shown below in Figure 7, which shows little change in variance over time.”

To be perfectly honest, this was mostly just a fun exercise for me to learn how to properly create frequency density plots :-p

Carrick, Eli is not arguing those points (although he may take it up with HSR) but with those who said that he mistated what was in the HSR paper including Mosher and Manakar. The text speaks for itself. You are correct.

No. there are basically two people who can do the work and they are over loaded to the max. The work is done, a write up takes time and the other balls in the air are
1. the data paper
2. the monthly update process
3. end user support
4. Station data and charts showing scalpeled data.

So, oceans is pretty much done. Folks who want to roll their own have everything they need. Gridded land, pick the gridded ocean you like and
combine. Combining requires you to.

A) align the datasets if you work in anomaly
B) decide how to handle ‘coastal’ grid cells.

dont forget the spring. The increased snow in the winter ( moist air hitting a region cold enough to produce snow) melts earlier in the spring.
its not that hard to understand. As a theory AGW is easy to defeat if you ignore what it says.

You must have been playing hooky from climatology class on the day they explained the asymetrical nature of global warming. NH winter temperatures rise more than summer temperatures. Combined with moister air that makes greater snowpack in winter and less melt in summer. Hence Mt. Shasta, the canary in the coal mine. Milankovich cycles do exactly the same thing on the way to ending interglacials. Orbital mechanics line up such that more sunlight is received in NH winter and less in NH summer. Net energy received remains unchanged yet despite no change in average annual insolation the end result is glaciers a mile thick approaching Washington, DC, which, all things considered, is an improvement.

This enigma gives people like BBD, and probably you too, a serious case of cognitive dissonance because ice ages begin and end with no change in forcing producing an infinite climate sensitivity number.

Pekka JC SNIP Pirila, remarkably enough, was able to think long enough and hard enough to figure out that climate sensitivity is non-linear. How much time it takes you to realize that is anyone’s guess but I’m going to guess ‘never’.

“And the data showing that the melting season starts earlier when the snow is heavier is where?”

Jim is dead, Scotty. Please don’t spoil the fun I’m having making the corpse twitch by asking the hardest questions right off the bat.
===================
made me spit my coffee this morning! Thanks for the laugh David!

I would like to see the reviewer comments from the climate journals that rejected the paper before it was submitted to this brand new commercial engineering journal. As far as I can see the journal has published no papers. There are two in press, this one and and introduction to the journal by one of the editors!

“To provide some physical basis for the ongoing controversy focused on the U.S. surface temperature record, an experiment is being performed to evaluate the effects of artificial heat sources such as buildings and parking lots on air temperature. Air temperature measurements within a grassy field, located at varying distances from artificial heat sources at the edge of the field, are being recorded using both the NOAA US Climate Reference Network methodology and the National Weather Service Maximum Minimum Temperature Sensor system. The effects of the roadways and buildings are quantified by comparing the air temperature measured close to the artificial heat sources to the air temperature measured well-within the grassy field, over 200 m downwind of the artificial heat sources.”

sure K scott. go download modis urban land cover, you will have to request access from the PI. Then get our station data. And from there its a piece of cake. To do it really right I would suggest using NLCD 30 meter data if you cant get modis. That requires more work but I show you how to do it on my blog.

Depending on the source a station is considered “very rural” if its ditance from a MOD500 urban region is at least 0.1 degree (11.1 km) or 10/25 km.

Anyway, presence or absence of buildings is not directly involved in the definition. Only if we assume there is no building farther from any MOD500 urban region than 10 km/11.1 km/25 km can we conclude no stations were used close to buildings.

Is that assumption reasonable? What fraction of land surface is “very rural” under each definition? What percentage of the world population lives in that area? Are those people all homeless?

In the year 2000 world population was 6.09 billion and 2.83 billion lived in areas considered “urban” by MOD500. That leaves us with 3.26 billion who should either live closer than 25 km to an urban pixel or be homeless. Is it the case?

If not, how can we make sure there were no building in the vicinity of any “very rural” station?

Been there. Read that. I have this image in my right brain of Ayla as a real fur coat covered hottie but the left side informs the right that she must’ve been a lice ridden stinking scraggle toothed hoe bag. Sucks to be me sometimes.

Jean Auel’s “Ayla” saga is a good read (the author obviously did some “homework” and added in a lot of imagination). The Neanderthal part (book #1) was the most interesting for me, but I enjoyed them all.

Springer’s left brain has got it wrong: this was a cool chick (in her prime).

What the hell, she domesticated both dogs and horses (plus a lion – but that one was just temporary).

I’m waiting for the next book – maybe she’ll invent the wheel (or the precursor of the internal combustion engine) – who knows?

Max,
The first book was waaaaay the best. Strong research and
imaginative insights into Neanderthal culture. Later books got
too indulgent, one woman discovering flint fire lighting, not ter
mention domesticating animals … and Jondalar, spunk as he
was, was a little too PC :: grin::
Beth.

I was hoping that this latest post would deal with the adnissions by both Berkeley Earth and Hansen that temperature change had been at ‘standstill’ or ‘pause’ for the last decade. However the present pause would be entirely lost within a thousand year simutation such as was used for the comparison.

As for the comparison, what can one say when we have a paper from just one of the protaginists. Of course climate is all about averaging because around the world at a particular place temperature can vary by 30C in a single day. The first thing is to agree on is a definition of climate. Climate is simply smoothed weather, but there are many different smoothing formulas and periods. Because of the this variability I suspect Judith has decided not to take a lead in this. It starts at the place of measurement when integrating temperature fot the day could provide an exact measure, they take the mean of the max. and min. and call that the average. This is the first stage of smoothing but it also introduces sampling error. I personally favour 11 year central averging with the disadvantage that you are always 5 1/2 years behind today’s true average, but it is easy to correct for that.

I suspect Judith would just as soon place as much distance between herself and Richard Muller as possible. Speaking of laying down with dogs and getting up with fleas, she should have known better than to get in cahoots with a UC Berkelety clown. Anthony Watts should have known better too. The day the BEST project was announced some of us warned Watts but he wouldn’t listen. Had to learn the hard way he did. The force is weak in that one.

@Alexander Biggs: I was hoping that this latest post would deal with the adnissions by both Berkeley Earth and Hansen that temperature change had been at ‘standstill’ or ‘pause’ for the last decade.

Has either Muller or Hansen evaluated the three decades prior to 2000-2010 for “standstill?”

The BEST data shows that all four of those decades have individually been at a standstill, as can be seen on the left side of this graph. All four decades look the same.

The right side shows the same data collected in one four-decade batch. Again all four decades look the same..

Certainly the past decade has been “at a standstill,” as have each of the preceding decades. Zeno observed the same thing millennia ago when he pointed out that in an instant an arrow travels no distance.

In climate terms Zeno would judge a decade to be an instant. How is this news?

Actually 1990-2000 wasn’t a standstill. Maybe it looks that way to the untrained eye?

Did you like totally miss the late 1960’s and early 1970’s when global cooling was the big scare? It wasn’t as engaging with the public back then of course as climate disaster had to compete with nuclear armegeddon and Mai Lia massacres and stuff for the hearts and minds of liberal nutcases like you. Tell us your story about how you dodged the draft, Vaughn? Did you bail out like Clinton in school or were you more creative like Bush in the National Guard?

That’s a little too cute (and I don’t mean the little trick of using 11 years for each decade to create a better looking effect). There is no need to arm wave and say a decade is an instant. A little exercise for you. Assuming global temperatures are an AR process with a 0.2 degree C trend and a standard deviation based off real world observations, what is the probability that we would observe 16 years without statistically significant warming?

Thanks Vaughan Pratt but No, they don’t look the same. As Springer says they look the same to the ‘untrained eye’ A problem we have is there is no agreed definition of climate. People tend to think of climate as a constant, but even in a constant climate temperatore can change by 30C in a day. Climate has to be an average and the only question is: over what period? The period is not critical so long it is long enough to elimanate the random fluctuations. I favour 11 year central averaging to cancel sunspot effects.

@DS: Actually 1990-2000 wasn’t a standstill. Maybe it looks that way to the untrained eye?

Excellent point. It was an odd decade (one whose years have a third digit which is odd). Why should that be relevant? Well, ever since 1870, when saloons were unlicensed and pantaloons therein licentious, the even decades of HadCRUT3 have invariably risen slower than the odd decades on each side, as judged by trend lines fitted to each of the 14 decades since 1870 (woodfortrees can supply those). This has been one of the most predictable things about the otherwise unpredictable decadal climate.

@DS: Tell us your story about how you dodged the draft, Vaughn? Did you bail out like Clinton in school or were you more creative like Bush in the National Guard?

Even more creative than that, David. Clairvoyantly forecasting the draft years in advance, I arranged to be born overseas (from your perspective) in a HUAC, a Horribly Unamerican Australian Community whose ancestors had never set foot in the Americas. That did the trick quite nicely, some might claim unintentionally.

@cynp: (and I don’t mean the little trick of using 11 years for each decade to create a better looking effect).

Huh? I used exactly 120 monthly datapoints in each decade, totaling 480 datapoints. The right hand side uses the same 480 datapoints. How does that come to 11 years per decade?

Try plotting any dataset for the period From: 1980 To: 1981 at woodfortrees.org. According to you that should be 24 months. If you click on Raw Data at the bottom you’ll see that it’s actually 12 months. This is because 1980 and 1981 mean January 1980 and January 1981 respectively (for July 1980 use 1980.5), and while From: 1980 is inclusive of January 1980, To: 1981 is exclusive of the month January 1981. I’m using the exact same definitions.
(WoodForTrees.org was up earlier today but seems to be down just now.)

I ‘fess up to little tricks when I use them, such as my ingenious little trick above to avoid the draft. Sadly I have no 11-year trick to ‘fess up to in the first place. I wish it had been otherwise so I could have congratulated you on spotting it. (Congratulations David Springer.)

@David Springer: Austria’s great. I’ve been to Austria.

Congratulations again, David Springer. Had you said Austria’s the BEST you’d even have been on topic.

@cynp: A little exercise for you. Assuming global temperatures are an AR process with a 0.2 degree C trend and a standard deviation based off real world observations, what is the probability that we would observe 16 years without statistically significant warming?

If you accept the RSS MSU records since 1979 as “real world climate,” someone more ignorant about climate than I am worked this out on February 10, 2010. His result, more than a year before the Santer et al paper to essentially the same effect, was that whereas 10 years barely hits two sigma, 16 years reaches several sigma. (But perhaps that particular little calculation was what you were referring to.)

That ignoramus knew nothing, nothing I tell you, about climate. Though he’d taken honours statistics in college he was even more ignorant about climate than I am today, however impossible that may seem.

Having come out of my millikelvin post badly scathed, I’m with you 100% on that. 11-year averaging was the basis for my 2011 AGU poster, which I ill-advisedly cranked up to double that period for my 2012 AGU poster last month. The rewrite I’m now working on backs off to your recommended 11 year averaging, albeit with the Greg-Goodman-approved filter F3 in place of the more simple-minded central averaging filter F1 you’re recommending which has bad side lobes in its frequency response as can be seen in Figure 5 of my poster.

The graphs are of 1970-79, etc. and oddly there is a jump in temp. from 79 to 80, 89 to 90, etc. So each decade does look fairly flat but if the 11th point was plotted then each would show a larger change. Interesting. So the person who said you did plot 11 years, possibly meant the opposite, that it would have shown the increase more clearly? Anyway, does not change your point that what is important is the long term trend. I still am not concerned about CAGW as I don’t think the evidence for the larger temp. increases is proven and I think humans can cope with a few degrees in 100 years. I think the next 10-15 years will be very interesting.

@Bill: The graphs are of 1970-79, etc. and oddly there is a jump in temp. from 79 to 80, 89 to 90, etc. So each decade does look fairly flat but if the 11th point was plotted then each would show a larger change.

What do you mean by the “11th point,” Bill? Each of the four 120-point decades on the left is smoothed with a 3-month moving average, so that for example 1970-1980 covers the 118 months February 1970 (which averages the 3 months January-March of 1970) to November 1979. The “11-th point” would therefore be December 1970.

The omission of December 1979 and (in the next decade) January 1980, and likewise for the other interdecade jumps, does create discontinuities, though the only sizable such is at 1990 which jumps sharply from 0.425 C in November 1989 to 0.833 C in February 1990.

@Bill: I still am not concerned about CAGW as I don’t think the evidence for the larger temp. increases is proven and I think humans can cope with a few degrees in 100 years.

I have no quarrel with that as it’s well above my pay grade. ;)

I think the next 10-15 years will be very interesting.

If 2010-2020 doesn’t rise sharply I’ll have to send my understanding back to the drawing board. I expect it to be similar to both 1990-2000 and 1970-1980 on account of the 20-year cycle in the upper plot in this chart.

Austria’s great. I’ve been to Austria. In the summer. Never seen somewhere where brazzieres are so universally eschewed by so many busty blondes. My head was still bobbing up and down along the Autobahn all the way back to Munich and it wasn’t because the BMW 730i rental car I was driving had a poor suspension, if you get my drift.

I was in Salzberg the summer of 1991AD. Ayla was in Salzberg the summer of 20,000BC. The crystal shops probably weren’t there back then and just as unlikely neither were there the pampered pieces of feminine pulchritude which maketh male cups everywhere runneth over.

Ever since UKMO told Tony B that there would be no more snow in England (because of AGW) he has succumbed to a snow fixation, a near-neurotic condition fueled by fear of changing climate. Makes him want to slip and slide on the stuff. Good news is it only tortures him in the winter months.

Sorry. I don’t get paid for any of the work I do for Berkeley. I probably started working on their stuff months before I ever got invited to sit in on the staff meeting. For now its a labor of love. weird hobby I know.

The small size, and its negative sign, supports the key conclusion of prior groups that urban warming does not unduly bias estimates of recent global temperature change.

“Negative sign?”

Hmmm…If my study ended up with a negative UHI effect (after all the studies out there showing just the opposite on a local basis), I’d toss the study results, rather than drawing any conclusions from them.

Beth,
Yep, just like the last big cold spell, the next is going to shake things up quite a bit. Especially since we are squandering ALL of our resources (physical and intellectual) for the hysteria du jour.
Skin color bias, I believe in part, stems from a behavior bias that all life practices in the day to day course of survival (if you don’t bother me I won’t bother you)..
The Sun that has helped provide a climate does not make that distinction. Life adapts or it does not. Ask 99% of the species that existed before us.

1. Shouldn’t blondes have a higher albedo than brunettes? In which case wouldn’t blondes get less out of a faint sun?

In a hot climate I’d buy a white car, in a cold climate a black one. Why does pigment work the other way?

2. Possible answer: unlike your car when all the shady spots are taken, you can always step into the shade, and then a white pigment will radiate less heat than a black one. In a desert with no other shade, the burnous can provide the shade.

Pigment serves to regulate cooling, not to protect against the Sun, for which other remedies are available for people, if not for cars.

Buy that? Then I have a bridge to sell you.

3. Dark pigment blocks sunlight from reaching the part of your skin where it activates vitamin D production. Light pigment lets more of the limited sunlight through for that purpose.

While 3 is the official answer in every accredited education program, I don’t recall anyone ever having even hinted at 2.

Yet both seem perfectly plausible.

Once both hypotheses are in play, how would you go about choosing between them?

Maybe the vitamin D answer is 75% and the other 25%. Or vice versa. Beats me.

In the case of polar bears I’d guess the competition would be mainly between 2 and 4 since for 3 polar bear hair has been shown to be essentially opaque to UV, which is the part of the spectrum relevant to vitamin D. Polar bear skin is black but the “white” (more precisely translucent like snowflakes) hair presumably acts simultaneously as an excellent insulator and as camouflage allowing them to get closer to their prey.

When you consider the effect of increases in human-caused carbon dioxide on global warming is virtually zero we can only stare with wonder at the amazing sensitivity of the government’s global warming models. Keep up the good work.

Let us assume an increase in the concentration of atmospheric CO2 as an avoidable consequence of modernity. Let’s work with 300 going to 400 ppm because of industrialization. A 33% increase. Sounds like a big jump, right? But, in going from 0.03 to 0.04%, I guess we’d all have to be idiots to believe that extra 0.01% of CO2 in the air really makes any difference.

“I guess we’d all have to be idiots to believe that extra 0.01% of CO2 in the air really makes any difference.”

Estimate, if you will, how many PhD man-hours alone it took over the past 30 years to make a half-assed case for it. A model, with poor and worsening skill as shorter term projections fail (cough cough pause cough cough), is as good as it gets. So much for Feynman famously saying if you can’t get a cocktail waitress to believe your theory you need a new theory. People like Richard Lindzen, Roy Spencer, and John Christy just to name a few are pretty far from cocktail waitresses too. I’m still deciding about Curry.

Parasite load is a measure of the number and virulence of the parasites that a host organism harbours. Quantitative parasitology deals with measures to quantify parasite loads in samples of hosts and to make statistical comparisons of parasitism across host samples.

In evolutionary biology, parasite load has important implications for sexual selection and the evolution of sex, as well as Openness to experience.

So you see, as a responsible member of my species concerned about the direction of evolution going forward, I can’t really help having a great deal of concern about parasite load. I’m a prisoner of my biological programming.

I didn’t look to see why wickedpedia capitalized Openess, btw. Is there a movement of some sort by that name that has escaped my attention thus far?

Steve, with due respect, I suggest you were more than a secretary as a reason you were not an author. You were part of the curve-fitting Co2/Temp thought, you were part of the volcanism claim, and you checked data and prepared the final draft. C’mon, Steve. You had a bigger role than Muller. Your explanation doesn’t fit. I don’t know the journal that rejected Best, but chances are it was a better known “climate” journal and it stretches credulity that they would have asked to have the methods published first. Important work gets attention by well known journals. Openness here is as important as open code, unless of course one believes in situational ethics. Not meant to sound harsh.

SubGenius members believe that those in the service of the conspiracy seek to bar them from “Slack”,[19] a quality promoted by the Church. Its teachings center on “Slack”[4] (always capitalized),[15] which is never concisely defined, except in the claim that Dobbs embodies the quality.[2][20] Church members seek to acquire it and believe that it will allow them a free, comfortable life without hard work or responsibility, which they claim as an entitlement.[9][21] Sex and the avoidance of work are taught as two key ways to gain “Slack”.[15] Davidoff believes that “Slack” is “the ability to effortlessly achieve your goals”.[19] Cusack states that the Church’s description of “Slack” as ineffable recalls the way that Tao is described,[6] and Kirby casts “Slack” as a “unique magical system”.[22

sorry Bob. Ive been an author on many things. and for the results paper, well in my mind I wouldnt call what I did authorship. I suppose if I were an author on it you’d complain that reformating the citatations didnt count as author. However, since zeke and I suggested to Anthony that he had an issue with TOBS data, and since in your book offering ideas counts as authorship, please go to WUWT and demand that zeke and I be listed as authors on Anthonys latest paper. That is a fair test of your sincerity.

Sorry Bob. You failed the sincerity test. If your intention was to get me to say I was more than a secretary, then claiming I was an author is an odd way to do that. Will you or won’t you demand that Zeke and I be added to Anthonys author list because we offered an idea?

Steve, I am happy to do that, but you should not argue equivalency, unless of course you know better.. My opinion, based on your statements of your involvement in BEST is that your role in BEST was substantially greater than TOBS. You equate the efforts, not me. Also, you failed to answer the questions above, namely, were there other journals, beside JGR, that rejected BEST.

Bob, I was pretty clear. The paper was submitted ( as far as I know ) to one journal prior to G&G. That journal editor wanted the methods paper published first. The decision was made to select a different journal. geostats seemed the right option given some of the comments mad eon the methods paper. There were two journals where it fit. one was selected

Bob. Im not arguing equivalency, I’m testing your sincerity and you failed.
Also, as I’ve stated before the paper was submitted to one journal prior to G&G. Now, lets see you go off to WUWT and demand that by your rules I should be an author on Anthonys paper

Steve Mosher, you say, ” Will you or won’t you demand that Zeke and I be added to Anthonys author list because we offered an idea?” I will email Anthony and make a suggestion that you should have been an author(and Zeke). But I have to say Steve, that is just your way of moving the pea. You, not I, made the equivalency argument, i.e. the amount of effort you gave to TOBS was equivalent. If you deny this, you are the one who failed the sincerity test. Understand!

Yes. As you show in the link, depending on arithmetic choices Canada is either warming by ~0.05C/decade or cooling by same.

As I’ve pointed out a million times there is no warming or cooling in the instrument record until arbitrary choices are made about adjusments. NASA documents the charts the choices it makes. BEST makes equivalent choices and from the same instrument record produces similar results. I’m shocked. Shocked I tell you. So shocked that on Watts Up With That on the day BEST project was announced I said the results would be essentially no different than past instrument record results.

“BEST makes equivalent choices and from the same instrument record produces similar results. I’m shocked. ”

actually not. go ahead and dump each and every GHCN monthly station ( about 7000 ) and the answer doesnt change. You acan, as we did, use unadjusted data ( like daily data which isnt adjusted ).. same answer.
You can use hourly data warts and all. same answer. You can pick only rural stations with long daily records, un adusted. same answer.
You can pick 100 random stations. same answer. you can pick 5000, and predict the temperature at other locations.. works.

Mosher says, actually not. go ahead and dump each and every GHCN monthly station ( about 7000 ) and the answer doesnt change. You acan, as we did, use unadjusted data ( like daily data which isnt adjusted ).. same answer.
You can use hourly data warts and all. same answer. You can pick only rural stations with long daily records, un adusted. same answer.
You can pick 100 random stations. same answer. you can pick 5000, and predict the temperature at other locations.. works.
You sound like an author, not a secretary. C’mon Steve, come clean.

Bob, given the nature (as opposed to Nature, haha I kill me) of the journal calling the named authors “authors” is a stretch. What they did is more like blogging. Comparing it to Principia Scientific in that regard is right on target. That paper wasn’t published it was blogged.

Mebbe he doesn’t like the attribution even more than he says he doesn’t like the attribution. Speaking for you, moshe. Egads, isn’t that against some rule posted or filed somewhere? Where’s willard.
==============

david.
The US is 2% of the land mass. If you change 2% by 50% how does the global answer change? perhaps you are a different page than I am when I say the answer doesnt change. there is the same issue with UHI since the land is only 30%. Throw out TOBS, throw SHAP and you have changed less than 2% of the data. large N is a bitch.

I need to start taking a shot every time someone trots out that old USHCN v1 chart. SHAP no longer exists, as of the switch to v2 circa 2009. They really need to take that old v1 page offline, as it confuses pretty much everyone.

Also, Berkeley uses the raw data (pre TOBS or any other documented adjustments) for both the U.S. and the globe. You also get similar results when you run the NCDC’s PHA using no TOBS or any other non-PHA adjustments (see our poster for a direct comparison).

Random changes are fine and relatively easy to detect, as long as they are not both temporally and spatially correlated. If all the local observers get together and in the course of drunken revelry all decide to change the obs time the next day, it would cause problems.

I don’t know why you ask what I have in mind. There is autocorrelation in BEST’s global temperature record similar to what I showed for BEST’s North America’s temperatures upthread: Peaks around ~12 months and dips around ~6 months. That’s a clear seasonal cycle.

I’m fairly sure you could pick any 30 year period (at least from 1900 on) and it’d be enough to see a seasonal cycle.

The “clear cycle after Pinatubo” (1991) that I was referring to in this graph has a period of close to four years. That’s one reason the human eye is able to see a steady rise throughout the period 1991-2010, including the decade 2000-2010, even though the variance is so large as to suggest this rise would be impossible to see.

Context is everything: the four plots on the left don’t give sufficient context to see that intriguing effect.

Manacker, I’ve been a barmaid, and Feynman (hopefully) implied that they are not necessarily stupid.

But back on topic, Tallbloke’s thread on adjustment to the Alice Springs record (in the middle of a desert in central Australia) is the kind of thing that makes me very wary of the shapeshifters who adjust the records:

It seems to me that not only is temperature record adjustment a Wild West where not even consistency is required (see the Alice Springs story), it matters a lot because we are mostly only taking about fractions of a degree.

I’ve tended bar in a past life, too – but Feynman’s point was really that one does not need to have a lot of formal degrees to be able to think logically and especially to be able to differentiate between hype and substance, as your question to Steven Mosher demonstrates.

I can’t say that I have done my best work in a bar, however, the ability to make a complex issue “make sense” to someone who knows nothing of the topic has been the “acid test” for me re: my pet notions. There have been many a time I have left a bar with my metaphorical tail between my legs wondering if it were the alcohol or the idea just stank. Upon reflection the subsequent morning, most times, it was a bad idea. I had let my confirmation bias add and subtract important numbers to get the result that I thought made my idea look great, when it didn’t.

So, back of the envelop calculations in a dimly lit stale beer smelling bar
netted me more confusion, which upon reflection and recalculation in the light of day, made the answers on the test the next day make more sense to me and the prof.

If only climate science had people who reflected and recalculated in the light of day.

johanna, as zeke explains below the adjustments in question ( TOBS and SHAP) do not make any significant difference to the global average.
There are two reasons for this.
1) the adjustments are not made to every record in the US
2. the US comprises 2% of the sample.

If you change 2% of the sample by 50% well, do the math.
Bottom line, it was cooler in the LIA

SHAP no longer exists, even in the U.S. (it was replaced by the PHA). Now NCDC only does TOBS and homogenization (PHA). TOBS itself isn’t really needed as a separate adjustment, since it tends to get picked up pretty easily as a breakpoint in automated homogenization.

Thanks for your responses. Just to clarify – are similar adjustments made uniformly across the global records, bearing in mind that the US records are only 2% of the sample? If so, who does this and how, and If not, what are the implications of variable application of adjustments?

The adjustments are largely the same both in the U.S. and worldwide with one exception: the U.S. records are subject to a separate time-of-observation adjustment, while global temperatures are not. Both are adjusted using the pairwise homogenization algorithm by the National Climate Data Center at NOAA. You can find more details about their approach here: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/menne-williams2009.pdf

The AGU poster (linked in the original post) is an analysis we did comparing the results of a blinded study of the Berkeley method and the NCDC method both on actual U.S. temperatures and on synthetic temperature data with different types of added bias. Also worth reading is this paper by Williams et al from last year: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/williams-etal2012.pdf

sure thing joanna. dial the clock back to 2007 and I doubted everything that CRU and GISS and NOAA did. and with good reason, I thought. 5 years later after actually using the code I asked for, and after reading the papers, and after trying hundreds of ways of find problems I came to the conclusion that those doubts while sincere were misplaced. but I had to see for myself. An open mind is all that is required. If you have that then guys like me will help you.

No shiit, Sherlock. It’s lack of rise that isn’t explained. Hello? Earth to Steven. We haven’t gotten warmer in 15 years despite bumping CO2 from 370 to 395ppm during that time. So which volcano do you think will bail you out of this travesty?

Steve, I really wish you would jettison the idea of you be a “lukewarmer”. It is unbecoming and untrue unless you mangle or fabricate a definition (which I know you have). The hypothesis is that added carbon dioxide will cause X degree of warming. You are on board with that or not. There is no such thing as a “luke hypothesis” unless you fabricate a definition, which as I say you have done and are entitled to do so. But it is still somewhat juvenile. By the hypothesis above I am a warmer. See, felt good. Try it, and while at it come clean on what other journals beside JGR rejected BEST. After all, you are the Dean of Openness.

Steve, what exactly is a luke-warmer? I don’t know really, Dr. Muller seems to be what I’d describe as a luke warmer i.e. he clearly believes putting CO2 into the atmosphere will cause some warming, but he doesn’t believe that anyone can foretell the effects of this warming. The climate is a coupled non-linear chaotic system and it is impossible to forecast a future state. But what makes you lukewarm? And are there “tepids”?

” … In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the
long-term prediction of future climate states is not possible.”
IPCC TAR, Section 14.2 “The Climate System”, page 774.

And 0.52C/decade not 0.51C. I almost excluded ten whole millikelvins there. An unpardonable sin in the eyes of Mr. Millikelvin hisself no doubt. I wonder how he’ll forgive himself for missing a 520 millikelvin trend in a single decade? Enquiring minds want to know.

@DS: I wonder how he’ll forgive himself for missing a 520 millikelvin trend in a single decade?

I wonder how David Springer will forgive himself for missing the statement “the even decade 2000-2009 didn’t trend up as strongly as the odd decade 1990-1999” to the right of Figure 3 in my AGU poster.

The big print fooleth and the fine print schooleth. ;)

(I wrote 2000-2009 and 1990-1999 for the benefit of people like cynp who might otherwise mistake 2000-2010 and 1990-2000 for 11-year periods. In both cases I meant 120 month periods.)

It does when you replace “your preconceived belief it’s due to” by “the hypothesis of.” That’s because detrending by that even-odd behavior (which is phase-locked to the solar cycle) in addition to detrending by SAW as per Figure 2 of my poster leaves behind a steady increase after filtering out noise with a period below 9 years instead of the 21 years I used in the poster. This more than doubles the number of dimensions of the image of the filter, namely to over 20, while keeping the number of parameters essentially unchanged. This in turn decreases the concern about over-fitting.

Oh I’m sorry I said you had a pre-conceived belief that greenhouse gases cause warming. Logically there’s only belief or disbelief. So if you don’t believe then you disbelieve. That’s good. We’re both in disbelief.

The positive atheist professes belief in a godless universe. The weak athiest believes the question is not answered and disbelieves both i.e. has no beliefs in the matter.

So, to the question “do greenhouse gases positively raise the earth’s average surface temperature”? I neither believe they do or believe they do not. I’m an experimentalist not a theorist. Show me the well controlled repeatable experiments. So far I can’t find a simple experiment using a ~12um laser showing that downwelling radiation on a body of water has any insulating effect at all or if it just causes more water to evaporate with no change at all in the bath temperature. It that can’t be demonstrated then AGW has no theoretical underpinning across 70% of the earth’s surface and nearly all the thermal inertia.

Is the typical computer science emeritus at Stanford connected well enough to get access to some equipment better than scotch tape and saran wrap to perform the experiment I’d like to see?

@DS: So, to the question “do greenhouse gases positively raise the earth’s average surface temperature”? I neither believe they do or believe they do not. I’m an experimentalist not a theorist. Show me the well controlled repeatable experiments. So far I can’t find a simple experiment using a ~12um laser showing that downwelling radiation on a body of water has any insulating effect at all or if it just causes more water to evaporate with no change at all in the bath temperature.

To quote the first of my three posts to Climate Etc, about a year and a half ago, “I’d like to propose a strengthening of the skeptic argument that downward longwave radiation or DLR, popularly called back radiation, cannot be held responsible for warming the surface of the Earth.”

Please pose your question to those like retired Canadian meteorologist Alistair Fraser who still push the old-fashioned idea that the atmosphere is heating Earth’s surface twice as strongly as the Sun.

A more meaningfuly description of the greenhouse effect is as follows. Increasing CO2 shuts off some 60 absorption lines for each doubling. (And that’s just the dominant species, C-12 O-16.) Each line not already dominated by a stronger line controls approximately 2.3 GHz of spectral bandwidth. Those lines that are still “open” permit Earth to cool by direct radiation from the surface to space at the frequency of the line.

As any given line starts to close, the radiation to space at that frequency comes less and less from the Earth’s surface and more and more from the atmosphere. Initially the latter kind tends to come from low, hence warmer, altitudes, but as the line closes off it comes from progressively higher, hence cooler, altitudes.

Every doubling of the GHG in question adds around 5-6 km to that altitude for each absorption line of that GHG that is in the process of closing. And every extra km of altitude reduces the effective temperature at that altitude by up to 10 C (but no less than 5 C, and then only with air that is at 100% relative humidity).

That’s all there is to it. Downwelling radiation certainly exists but after subtracting out the variation resulting from the above mechanism, downwelling and upwelling radiation (which from hour to hour can be way out of equilibrium in either direction depending on insolation, surface temperature, cloud cover, etc.) are in essentially perfect equilibrium when averaged over periods much longer than a year. As such they have no bearing whatsoever on the greenhouse effect, which is far too slow to be detectable in less than a decade.

In short global warming is a very slow process resulting from a very gradual closing of the so-called atmospheric window, like the extra warmth you would get from zipping up your jacket very very slowly.

Quite apart from the difficulty of locating a practical 12 um laser (all the long-wave lasers in my basement are 10.6 um though I might be able to get them to lase at 9.6 um and with more effort up to maybe 11 um), I don’t see how beaming one down on water could tell you much if anything about the greenhouse effect.

Increase in CO2 increases downwelling IR. On dry land it results in a surface temperature increase (through slower rate of radiative cooling) which in turn increases upwelling IR and restablishes equilibrium in that manner. Over water DWIR is absorbed in the first few microns and water molecules peel off to become vapor with no rise in temperature if the air is not saturated and it rarely is or you’d see fog all the time. The vapor rises until it condenses from adiabatic cooling releasing the energy and warming the atmosphere.

A surface is warmed in both cases to reestablish equilibrium but in the case over the ocean the warmed layer is the top of the cloud layer. The cloud layer itself will rise to a higher altitude which should be about 100 meters for each CO2 doubling displacing what was previously cold dry air with a warm cloud. This isn’t disputed either and is why the signature of CO2 warming is a hot spot in the middle troposphere in the tropics. Maybe you’ve heard of that signature. Maybe not. The new cloud deck, being 100 meters higher, now has more greenhouse gases beneath it shielding the ground from the influence of the cloud while at the same reducing the amount of greenhouse gas between the cloud top and space. So over water the totality of the effect may be no more than no change in surface temperature, a reduced lapse rate from surface to cloud, and a greater lapse rate from cloud to space.

Without experimental confirmation or refutation of DWLIR’s effect on evaporation rate and temperature of a water body no one can know. The fact that sensitivity empirically obtained has mostly sharp probability peaks between 1C and 2C with only fat tails causing the average to go higher than that I leans towards AGW being a land-only effect for the most part. This is reinforced by all observations which find recent warming is greatest where there’s the least water available for evaporation – i.e. deserts more than ocean, frozen surface more than thawed, and so on. Follow the water.

1. You’re making the same mistake as Alistair Fraser, imagining that there’s more long wave radiation coming down than going up. As Kiehl & Trenberth’s Figure 7 from 1997 should make clear, the opposite is true: surface water emits more long wave radiation than it absorbs. This makes it irrelevant what happens to long wave radiation that is deeper than a few microns: it just bounces around inside the water. In the top few microns the net radiative flux is upwards.

2. Even if (counterfactually) there were more downward than upward long wave radiation, evaporation is only from the top nanometer of water while DWIR penetrates more than a thousand times that far. The remaining 99.9% of water molecules below the layer evaporating cannot evaporate and would therefore be warmed by this hypothetical excess DWIR, even if it did exist which it doesn’t.

Sure, there’s net lw emission from the ocean. Not much as percentage of sw absorption. Most of the solar energy leaves the ocean in latent form. This is very unlike land and there’s a simple reason for it. If there’s water available to evaporate that’s the path of least resistance. By a goodly amount too judging by the disparity.

Mebbe you should take an oceanagraphy class so you’d know what ocean heat budget looks like. I did.

Opens up with Trenberth’s cartoon heat budget then gets into a whole buttload more detail. Get back to me when you understand why the global maps showing incoming and outgoing energy in appear they way they do and when you understand what it means when the average oceanic heat loss to radiation is about 50W/m2 in the tropics and subtropics and latent loss is about 150W/m2.

Which process would you say is the more important to understand especially over the ocean?

Vaughan DS, isn’t this where you should think about what you are averaging?

The bulk of the oceans are radiating ~334 Wm-2, 24-7-365. To remain static, they would have to be balanced by a gain equal to their loss. The solar energy absorbed is ~330 Wm-2 (165 Wm-2 if you consider “average” but the sun don’t work at night.)

Most of the surface energy loss by the oceans is related to evaporation. Using a little more reliable estimate, Stephens et al, rather than the out dated and erroneous K&T comics, latent loss is ~176 Wm-2 (~88 Wm-2 average) with a sensible heat ratio of roughly 0.59 for a total 12 hour loss of ~300Wm-2 (150Wm-2 average). There is ~30Wm-2 not accounted for that could be measurement error or a variety of other factors. Stephens et al. indicate a surface uncertainty of +/- 17Wm-2.

The percentage of “surface” that is ocean (more accurately “moist”), ranges from 75% to ~65%. The moist surface would transfer energy to the lower thermal mass remaining surfaces. Approximately 30% of the 330 Wm-2 or about 100Wm-2 is advected from the moist to other surfaces. The average “apparent” energy absorbed is ~0.7*330=231Wm-2 reasonably close to the 236Wm-2 TOA.

Since the “moist” surface is transferring an estimated 100 Wm-2 to the other surfaces and the “average” radiant energy of the moist surface is ~334Wm-2 plus the 100Wm-2 being transferred or ~434Wm-2. Remarkably, the average radiant energy of the ocean surface is ~425 Wm-2, about 9Wm-2 lower, but within the +/-17 Wm-2 uncertainty indicated by Stephens et al.

That 9 Wm-2 BTW could easily be associated with that other latent where ice is formed at about -1.9 C (~307Wm-2) and thaws at 0C (~316Wm-2) degrees.

It would seem that if you happen to mix the wrong “averages” to establish initial conditions for a dynamic model, that perhaps more time should have been spent on the “static” model before taking that leap.

Also with roughly 100Wm-2 poleward energy transfer, the oceans as a heat SINK instead of the thermal reservoir providing the energy to maintain the atmospheric effect would seem a tad bassakwards :) With 100 to 120 Wm-2 internal or “Wall” energy transfer the norm, thinking that internal variations in ocean mixing and sea ice extent don’t have a significant impact on long term climate is insane. Grab the butterfly nets gang :)

Note: the “moist” area or “moist” envelop is a simple way to divide and conquer the confusing transition from a near “radiantless” portion of a system (liquid oceans) to a nearly pure “radiant” portion of the system (dry air at very low pressure). If you want to get fancy, you could use Helmholtz free energy and a roughly -25 C isothermal boundary layer and keep track of the dissipation of energy and mass. I am sure there are other ways to make the problem even more complicated.

Don’t know whether or not you realize it, but you have fallen into David Springer’s “logic trap” with your response.

DS states:

to the question “do greenhouse gases positively raise the earth’s average surface temperature”? I neither believe they do or believe they do not. I’m an experimentalist not a theorist. Show me the well controlled repeatable experiments.

You respond with several paragraphs of eloquent prose describing various aspects of the hypothesis, but do NOT cite any “controlled repeatable experiments”, which would provide empirical evidence to support the hypothesis, as David has requested.

This should be easy for someone (pardon me if this sounds snarky) who can predict the average temperature for the year 2100 to within “a millikelvin”.

@manacker: You respond with several paragraphs of eloquent prose describing various aspects of the hypothesis, but do NOT cite any “controlled repeatable experiments”, which would provide empirical evidence to support the hypothesis, as David has requested.

David will have to “disbelieve” many more geophysical hypotheses than just global warming if he’s going to make “controlled repeatable experiment” his criterion. I bet we both could name a great many such. I’ll start with plate tectonics, which took many years to be accepted and for which there is still no “controlled repeatable experiment.” Your turn.

Huh? How could repeating a GPS measurement tell you anything at all about plate tectonics?

GPS has only been available to the public since 1983. In that time the average plate moved 30 cm. GPS accuracy at the start of that period was two orders of magnitude worse, and only recently has been reduced to a few meters. Under those conditions repeating a GPS measurement would tell you nothing about plate movement unless you waited half a century between repetitions. Having to wait nearly a lifetime to repeat the experiment isn’t what I thought you had in mind by a “controlled repeatable experiment.”

Kind of makes you wonder what planet people have been living on for the last 30 years, huh? There were 10 GPS birds aloft by 1985. You could get millimeter accuracy if you wanted to wait a couple of days to refine the fix and the receiver wasn’t cheap. Back then if you needed it faster there were ground stations at known locations sending out correction signals so if you were in range and subscribed to the service (can you spell land surveyors and earth scientists) you were good to go for fast accurate fixes.

Millimeter accuracy for GPS requires a longer period of time to make the reading. If you’re not in hurry millimeter accuracy has been available for decades. Californian scientists should be especially aware of this since you boy have about a zillion of them employed watching every tiny motion across fault lines.

Sorry, I realized after making that comment that it was unclear. It was not intended as sarcasm.

In asserting “Logically there’s only belief or disbelief” DS appeared to be identifying “disbelieve” with “believe not” without actually doing so, confirmed in his next comment in the traditional manner of this particular debate. Exploiting the appearance of this identification has a long history in connection with the trichotomy theism-agnosticism-atheism, recounted here. By appearing to identify “disbelieve” with “believe not” DS was similarly attempting to dilute the definition of “believe not” to “not believe”, which logically are not the same thing (as DS is more than happy to point out) but which ordinary discourse rarely distinguishes (as DS does not like to point out).

Dawkins himself has been represented as being unclear on his position, on the one hand declaring himself an agnostic, e.g. in The God Delusion and in recent interviews, while on the other rabidly staking out a clearly atheist position in his writings. In a “public dialogue” with the Archbishop of Canterbury a year ago chaired by the noted philosopher Sir Anthony Kenny, Dawkins clarified his position by denying that an agnostic has to be someone that attaches equal probability to the existence or non-existence of God. At 00:52 in the video Dawkins declares “I’m a 6.9” (on a scale where 1 = “I know God exists” and 7 = “I know God does not exist”).

My own objection to Dawkin’s reasoning is that he appears to have uncritically bought into the Judaeo-Christian belief that there is only one god. This makes it easier for Dawkins to make 6.9/7 agnosticism sound reasonable. Applying Bayesian statistics to religions with a wider and more diverse range of gods, the probability that none of them exists is surely much less than the conditional probability, taking the prior (by fiat) to be that n − 1 of the n putative gods don’t exist, that the sole surviving god doesn’t either.

On statistical grounds polytheism is much more plausible than monotheism. And on logistical grounds a single god hearing each and every prayer is about as implausible as a single Santa Claus hearing each child’s request after they’ve waited in line at the store for quarter of an hour (requiring the fortitude of a soccer mom to be leavened with the patience of Job). Even a year would not suffice without multiple Santas.

Monotheism is childlike in its spiritual outlook. The high priests who invented it were treating their flock the way parents treat their young children, relying on the flock to respond with childlike enthusiasm. Had Paul followed his own advice in 1 Cor. 13:11 to “put away childish things” he would not have embraced monotheism.

But I digress. My main point was that DS was exploiting a tactic that has a long history, namely insisting on a logical distinction that is not made in ordinary discourse, much like Max Manacker’s insistence that this century started in 2001, contrary not only to the ordinary understanding but also to ISO 8601.

So, if there if someone proposed that there are 47, or 312 Santas, then that’s more believable than the traditional tale? Can you name a religion with multiple gods, that makes any more sense than the Judeo-Christian monotheism? What’s the mostest gods anybody has ever had, Vaughan? Personally, I don’t think 4 or 5 gods for every man woman and child would do it for me.

Can you name a religion with multiple gods, that makes any more sense than the Judeo-Christian monotheism?

That’s like asking whether you can name a comic genre with multiple heroes that makes any more sense than Superman comics. I can name them, and they may well make financial sense.

That wasn’t Dawkins’ point. Dawkins was addressing whether God actually exists, as distinct from merely being a central fictive character in some spiritual genre. He wasn’t denying the latter, he was merely giving the former 69:1 odds against.

Personally, I don’t think 4 or 5 gods for every man woman and child would do it for me.

With or without sharing? When I was a child Elvis and the Beatles did it for me. I would have given at least 69:1 odds against their being fictional characters. (For comparison, never having handled or even seen a firearm shorter than a rifle or shotgun in those days—this was the early 1960s in Australia—I gave six-shooters even odds of being fictional creations invented for the purposes of having exciting shoot-outs in cowboy movies, much like ray-guns in SF movies.)

Personally I would think any “real” gods “out there” would be well advised to keep a low profile when traveling anywhere near Earth. If two or more popped up on the radar at the same time they’d likely not receive a warm welcome from any monotheistic religion. Even one would at the very least be obliged to produce Mt. Olympus’s counterpart of an original birth certificate.

This is way off-topic so I’m not going to delve into it, but I can’t ignore this comment from Vaughan Pratt:

Monotheism is childlike in its spiritual outlook. The high priests who invented it were treating their flock the way parents treat their young children, relying on the flock to respond with childlike enthusiasm. Had Paul followed his own advice in 1 Cor. 13:11 to “put away childish things” he would not have embraced monotheism.

Calling monotheism childish is silly and offensive. There is nothing about polytheism that is more or less childish than monotheism. In fact, there is nothing about polytheism that makes it more statistically or logistically plausible as Pratt claims. Pratt’s criticism of monotheism is completely baseless.

Given this is not the appropriate forum for such a discussion, I won’t say anything more.

@BS: Given this is not the appropriate forum for such a discussion, I won’t say anything more.

Fine by me, I’ll follow suit, except to point out that “childish” was used only in the King James Version of Paul’s advice. I used “childlike,” not “childish,” (twice in fact).

So if you’re going to accuse me of being offensive please (a) at least quote me accurately and (b) don’t single me out when Christian Science literature itself promises that childlike trust in God quickly heals illness. Medically unsound perhaps, but how does that make it “offensive?” The first sentence of that article draws the very distinction you’ve glossed over:

The word childish brings to thought images of fussiness, selfishness, and stubbornness, but when the suffix is changed to create the word childlike, we get a very different mental image–one of trust, innocence, and joy.

Vaughan Pratt, you’re now engaging in the worst kind argument, argument by misrepresentation:

Fine by me, I’ll follow suit, except to point out that “childish” was used only in the King James Version of Paul’s advice. I used “childlike,” not “childish,” (twice in fact).

So if you’re going to accuse me of being offensive please (a) at least quote me accurately

You used the word “childlike” then quoted a source using “childish” as referring to the same thing. You conflated the two. That means I was perfectly justified in using either word. It’s true I shouldn’t quote you as using “childish,” but I didn’t do that. The lack of quotation marks around the word means it is not set aside as a direct quote. That means you’re criticizing me for using a legitimate paraphrase by falsely claiming it was a quote. And then you go on to say:

This makes no sense. You portray me as singling you out, but you are the only other person in the exchange. It’s impossible to single you out when you were the only person being talked about in the first place. There is no reason I would randomly start criticizing Christian Scientists, and that’s even ignoring the fact what the article you linked to doesn’t use “childlike” to refer to the same thing you did.

Just like I cut off the religious argument, I’m now done with this stupid semantic parsing you’ve forced upon me. If you want the last word, you can have it, but I would ask you not to level false accusations against me again.

> Monotheism is childlike in its spiritual outlook. The high priests who invented it were treating their flock the way parents treat their young children, relying on the flock to respond with childlike enthusiasm. Had Paul followed his own advice in 1 Cor. 13:11 to “put away childish things” he would not have embraced monotheism.

Dawkins is a much stupider thinker than you have pictured him as.
When push comes to shove, he declares that nobody could get their morals from the bible, and asked where , he replies “the same way atheists get theirs…from the zeitgeist…news reports , court decisions, dinner party conversations”

Dawkins therefore gives more influence in the zeitgeist to a single dinner conversation than to all the prayer meetings conducted, all the services, , architecture and art, music and song, volunteer work and so on.

Dawkins forgets that all one needs to have done is to have read a verse on a tract and have been affected by it. One good thought or goal.
He forgets that the news reports were affected by the religion, and courts have been affected too. As well as dinner party converstions.

He’s bonkers.

He said that it was worse to bring a child up Catholic than for to be molested by a churchman.

As well, he said that the word “Atheism” has a bad connotation, and so he went subjunctive mode, saying he pleads to have it called “Rationalism”.

Willard, I do have quotes.
e.g.
The Dubliner article “The God-Shaped Hole” by Dawkins.

“Regarding the accusations of sexual abuse of children by Catholic priests, deplorable and disgusting as those abuses are, they are not so harmful to the children as the grievous mental harm in bringing up the child Catholic in the first place.”

” I can’t speak about the really grave sexual abuse that obviously happens sometimes, which actually causes violent physical pain to the altar boy or whoever it is, but I suspect that most of the sexual abuse priests are accused of is comparatively mild – a little bit of fondling perhaps, and a young child might scarcely notice that. The damage, if there is damage, is going to be mental damage anyway, not physical damage. Being taught about hell – being taught that if you sin you will go to everlasting damnation, and really believing that – is going to be a harder piece of child abuse than the comparatively mild sexual abuse.”

um…just go to youtube and see the part 2 for the rest.
His reply said that taking just one moral lesson from the text is cherry picking.

What a lamebrain. If you only read a verse or two and adopted the sentiments, it’s not cherry picking at all. Supposing that it were a case of cherry picking. Still does not say that the moral was not taken.
So there he changes to the basis for the choice.
Then he gets into where we “really” get morals from.
News reports. Parties. But not . NOT. NOT EVER…texts from you know where.

Here’s a beauty from Dawkins on The Jewish Lobby. The man cannot distinguish a lobby group from a religion an ethnicity …population stuff, for Dawks sake

“When you think about how fantastically successful the Jewish lobby has been, though, in fact, they are less numerous I am told — religious Jews anyway — than atheists and [yet they] more or less monopolize American foreign policy as far as many people can see. So if atheists could achieve a small fraction of that influence, the world would be a better place.”

Do you have any kind of causal connection to propose for a cycle with such a large change in trend magnitude as ~1990-2012?

Sorry, not following. BEST (the subject of this thread) shows a trend for that period of

1990-2012: 0.141 C/decade

If you’re allowed to pick any dataset and any period then you can prove anything you want, as I illustrated with examples just now in response to Max.

If you stick to the data that this thread is about, namely BEST, and stick to honest decades, not your cherry-picked periods some of which aren’t even decades, then you get these trends:

1970-1980: 0.060
1980-1990: 0.034
1990-2000: 0.264
2000-2010: 0.268

All that these trends show in conjunction with the following further decadal trends

2000-2010: 0.268
2001-2011: 0.030
2002-2012: −0.062
2003-2013: −0.004

is that decadal trends are meaningless.

The longer the time series the more significant. Those who complained in connection with my poster that 160 years is not long enough to be significant for estimation of multidecadal climate can hardly turn around and claim significance for a mere 10% of that amount!

Do you have any kind of causal connection to propose for a cycle with such a large change in trend magnitude as ~1990-2012?

While I’m still not following, it might be worth pointing out that absence of a trend does not prove absence of a cycle.

Consider a 20-year cycle constructed as a sine wave from -10 years to +10 years, with its positive-going zero-crossing at 0 years. The trend from -5 years to +5 years is quite pronounced, going from -1 to +1. The trend from -10 years to +10 years, even though twice as long, is exactly zero, going from 0 to 0.

So I’m not at all clear as to how you propose to use a 15-year period to prove that there’s no 20-year cycle.

Do you have any kind of causal connection to propose for a cycle with such a large change in trend magnitude as ~1990-2012?

You apparently misunderstood the question. As I understood it the question asks about the “large change in trend” in global temperature from the first half of this period (strong warming) to the second half (slight cooling).

The curve below shows what is meant here (starting in 1993 instead of ~1990).

You are correct, Max. Pratt misunderstood the question. That you understood it is testament to the question being clear enough and the creation of a straw man then seemingly intentional. Misunderstanding the question is a tool in the artful dodger’s toolbox.

So, aside from the fact that the odd/even cycle fell apart (15 years since any warming not 10 and a reversal), in the past two decades as I showed with woodfortrees links we went from 0.52C/decade warming to -0.05C/decade cooling (1992-2002 and 2002-2012 respectively).

The 11-year solar cycle is easy to pick out of the record but it has nowhere near that much effect. In order to made some kind of case that this approximate 22-year cycle anything other than coindence (only 6 cycles in total since 1880 and the others were very weak in comparison to the the most recent). Do you have any kind of causal connection to propose for a cycle with such a large change in trend magnitude as ~1990-2012?

@DS: In order to made some kind of case that this approximate 22-year cycle anything other than coindence (only 6 cycles in total since 1880 and the others were very weak in comparison to the the most recent).

Are we talking about the same graph? I was looking at this graph, in which the dates of the 15 solar cycles from 9 to 23 are expressed by those 15 numbers placed at their respective dates. Each of the eight odd-numbered cycles is very well aligned with a peak of the upper curve; and the first two and last two such peaks are somewhat weaker than the middle four.

Do you have any kind of causal connection to propose for a cycle with such a large change in trend magnitude as ~1990-2012?

None whatsoever. The quite precise alignment of those eight peaks with the odd-numbered solar cycles may well be just one of those odd coincidences that Nature seems to love to create in order to lead scientists up a garden path.

@manacker: You apparently misunderstood the question. As I understood it the question asks about the “large change in trend” in global temperature from the first half of this period (strong warming) to the second half (slight cooling).

DS’s question started out by referring to 6 cycles of a 22-year cycle. How does your obviously cherry-picked graph (all those strange dates are a dead giveaway) relate to a 22-year cycle?

Max’s vision is 20-20 when you count just the letters on the eye chart that he got right. Had they counted the others he’d be judged legally blind.

By cherry-picking his letters Max can claim any degree of visual acuity he wants. Likewise for decades. Were one to tie Max’s hands by insisting that he define a decade to consist of years whose first three digits are the same, he would find all the evidence working against him.

Max’s ingenious sleight-of-hand is easily illustrated by looking at two sets of data concerning the slope of trend lines of specific decades expressed in degrees per decade.

1. Decades as one might think naively to define them as above, namely 1970-1980, 1980-1990, 1990-2000, and 2000-2010; and

Note in particular that according to the BEST data the most recent decade is climbing even more steeply than the previous decade, and way steeper than the two before.

Now let’s look at what options are available to Max when he’s allowed to pick any year he wants for the start of his preferred decade.

2000-2010: 0.268
2001-2011: 0.030
2002-2012: −0.062
2003-2013: −0.004

Obviously Max would be well advised to steer clear of 2000-2010 as it greatly undermines his point, however reasonable it might seem to some as a choice of “first decade of this century.”

But given that 1980-1990 is barely distinguishable from his three remaining choices, it’s hard for him to argue that any supposed “pause” in global warming is any different from the evident pause we saw in 1980-1990. The untrained eye might see a pause in that decade, but the trained eye would be naive to claim that 1980-1990 climbed at all steeply.

Moreover if we are allowed to pick our “decades” with the same freedom as Max, it suffi9ces to point to this decade:

1977-1987: −0.066

to make complete mincemeat of Max’s argument that after several decades of rising we’re now in a cooling period.

Brr, 1987 must have been freezing in this BEST of all possible worlds.

My theory is that Max doesn’t really believe all this stuff he comes up with, he’s too smart for that. He’s just having fun seeing whose leg he can pull.

They do not, in any way, invalidate the decadal temperature trends which I cited – they simply supplement them.

The global temperature records show warming over three decades from around 1971 to 2001 and cooling for a bit more than one decade since then.

This is true for BEST (land only) and, as I pointed out also for the Hadley SST record, as well as most of the global records.

Prior to 1971 the record shows three decades of slight cooling.

This was preceded by three decades of warming starting in 1911, which was statistically indistinguishable from the late 20th century warming period.

From this one can conclude (as Girma has) that there are cyclical forces at work, which drive short-term oscillations, with an underlying warming trend that has gone back to the early 19th century (as we have been emerging from a naturally occurring colder period, called the LIA).

Or one can make the observations “fit” forcing by CO2, for example, by removing everything else as “noise” (as you have done on the earlier thread).

Both analyses are equally valid IMO, and the next few decades will tell us which one is closer to being correct, with the eventual “truth” probably lying somewhere between the two.

Max, you show a trend line 2002 to 2013 that shows cooling. The BEST data on WfT, imo, ends at 2010.17. The two months after that are, imo, are screwed up for graphing purposes and are hopefully now fixed, but WfT has not updated BEST since the first release.

The “first decade of this century” started January 1, 2001 (not 2000).

You probably think the day begins at 1 am too, Max. ;)

Geneva (which you presumably live closer to than most of us) is the home of the International Organization for Standardization, popularly abbreviated ISO. ISO standard 8601 begs to differ from you. That standard specifies that time of day starts from 0, days of the month and months of the year start from 1, and years of the decade, century, and millennium start from 0.

A cockamamie scheme to be sure, but one that the public has grown so accustomed to that in Australia the media nominated Prime Minister John Howard as “party pooper of the century” for making your quaint argument when the rest of the world was joyously celebrating 1999-12-31T24:00 as the start of the first second of the new millennium.

Why not 2000-01-01T00:00? Actually that’s equally fine too according to ISO 8601, which recommends the former mainly in conjunction with phrases like “at the end of the day” (however overused that might be) or its clumsier synonym “at the end of the last second of the day.” Like the Morning Star and the Evening Star, in Geneva the end of the day is the same instant as the start of the next day.

Efforts to regularize verbs, spelling, and temporal indexing conventions can reliably be expected to continue for the foreseeable future by a vocal minority put here on Earth to amuse the peasantry with their pedantry.

Wikipedia is full of inconsistent information, Max. The bald and unsourced statement you quoted doesn’t even mention the long-running controversy that is discussed at considerable length in this section of the Wikipedia article “Millennium.”

Being unsourced, the statement you quote is by Wikipedia’s own definition Original Research. In contrast a great many sources are given in the Millennium article concerning the debate over this controversial topic. I take ISO 8601 as having settled that debate by creating an international standard that agrees with the majority view. However there are also other reasons such as given by the Wikipedia article on the proleptic Gregorian calendar:

For these calendars [Julian and Gregorian] we can distinguish two systems of numbering years BC. Bede and later historians did not use the Latin zero, nulla, as a year (see Year zero), so the year preceding AD 1 is 1 BC. In this system the year 1 BC is a leap year (likewise in the proleptic Julian calendar). Mathematically, it is more convenient to include a year zero and represent earlier years as negative, for the specific purpose of facilitating the calculation of the number of years between a negative (BC) year and a positive (AD) year. This is the convention used in astronomical year numbering and in the international standard date system, ISO 8601. In these systems, the year 0 is a leap year.[2]

The proleptic Gregorian calendar is sometimes used in computer software to simplify the handling of older dates. For example, it is the calendar used by MySQL,[3] SQLite,[4] PHP, CIM, Delphi, Python[5] and COBOL.

In other words those arguing that 1/1/1 is the first day of the first millennium are assuming a world in which The Venerable Bede’s system is still in use. With the modern replacement of Bede’s 1 BC with a year 0, 2 BC with -1, and so on, we now have a rational system in which the millennium begins at 1/1/0. If we think of this as Christ’s logical birthdate (his physical birthdate has been estimated as a year or so earlier) then logically Christ turns 1 on 1/1/1 and for the next twelve months he is one year old (thereby justifying calling this the year 1 AD) even though he is in his second year following the usual convention by which a baby is not deemed one year old until the end of his or her first year. This makes much more sense than Bede’s clumsy system, and is easier to work with besides.

But anyway, Max, congratulations on finding an observatory expressing the nonstandard minority view Evidently you’ve found a kindred spirit in Hong Kong. In the US in 1998, astronomers David Palmer and Samar Safi-Harb wrote similarly here, “I expect that, around February, 2000, people will start coming around to the belief that the millennium does indeed start with 2001, and plan their next party accordingly.” This expectation would appear to have gone largely unmet: it seems to have been merely an exercise in wishful thinking at the time.

As I said above, “Efforts to regularize verbs, spelling, and temporal indexing conventions can reliably be expected to continue for the foreseeable future.” You should have no trouble finding more such isolated examples to back you up. Human nature being what it is, I bet there’s quite a few out there.

There are two numbering systems in common use, the one standardized in England in the 8th Century by the Venerable Bede and popularized in Europe by Charlemagne, and the modern one standardized by ISO 8601 as well as by astronomers, adopted by MySQL, SQLite, PHP, CIM, Delphi, Python, COBOL, etc., and implicitly assumed by the majority of the public, which except for a few pedants accepted 2000 as the start of this millennium and this century. Sure there were parties in 2001, but the big money was in those in 2000.

Where Bede writes 2 BC, 1 BC, 1 AD, 2 AD the modern system refers to the same years as -1, 0, 1, 2. That is, n BC becomes 1 − n. Fractional years between 1 January 1 BC and 1 January 1 AD are expressed as 0.25, 0.5, 0.75 in the modern system. Bede’s system makes no provision for fractional years: should 1 July, 1 BC be written 0.5 BC or 0.5 AD, or do you toss a coin? There is no ISO standard for Bede’s system that would answer this.

If you take 1/1/1 as the logical date of Jesus’s first birthday, his n-th birthday is on the first day of n AD and he is n years old throughout n AD, and 3 months old in 0.25 AD. Makes sense to me, YMMV as they say.

Given that the GCM and BEST both produce smooth temperature fields while the other surface statistical models do not it makes sense that BEST performs best with the GCM. The question is whether this has anything to do with reality, which the GCM is not.

you have not looked at the data in question. I’ll suggest that you could also take empirical parameters describing the weather field, generate samples from that structure and get the same result. You could also take re analysis data and generate the same result. It should come as no surprise that an BLUE method performs better. Its pretty simple. Give me any dataset and the method will do better. That’s one of the points of memo 3. For folks who dont look at data and dont get math, we did a pretty picture. have a look

I didn’t read properly the memos of Hausfather and Wickenburg until a while ago. There’s always in a way a good feeling when more careful work of others is in complete agreement with own expectations. Fig 4. of the Hausfather memo did even implement a test that I proposed in a comment at Tamino’s during the early discussion – and it gave exactly the result I expected, i.e. reversing the time order make the apparent variability larger for the early periods in a similar way as the original figure made the later distributions wider.

In my view these memos confirm that no result that’s affected by the broadening of the distribution is what it looks to be. In particular that applies to the probability of exceeding some limit like 3-sigma in the temperatures. Numbers obtained from the figure for that do not represent that. Correct probability for exceeding 3-sigma is significantly less and could be estimated by replacing the widened distribution by an unwidened one.

Thanks Pekka. I’m really proud of sebastian. He worked very hard on that memo and faced some pretty good internal reviewers. I’ll let him know you liked it ( he doesnt read blogs ). Kid will make a great scientist someday

In the mathematical normal distribution, a shift of one standard deviation increases the probability of exceeding 3-sigma by nearly 20, the ratio of a 2-sigma exceedance probability to a 3-sigma one. This is not far off what Hansen suggested, because the climate shift is about a standard deviation, and the distribution is near normal, so I think it won’t be far from this if every effort is made to remove artificial broadening.

Just have a look at the Fig 3. of the recent paper. Try to figure out, what the figure would look out when the original PDF is shifted to the new location. It appears clear that what’s is 9.3% or 9.6% in the figure will drop to a fraction of that, perhaps to something like 2% or 3%. That’s not a minor change.

That is still about a factor of 20 over the 0.1% originally in the 3-sigma category. The numbers at the tail are not precise enough to guess the factor accurately. For a Gaussian it should be 0.13% above 3 sigma (one in 750 years to put it another way), while 2 sigma is 2.28% (one in 45 years). The extent to which these real curves are not Gaussian has to be evaluated, however.

I agree fully that the likelihood of a far tail increases by a big factor when a Gaussian distribution is shifted. I made a check by copying the graphics to a graphics application. There I redraw the original Gaussian to a curve I could move around. That way I could estimate that the share drops approximately to a third or a fourth.

While the relative change to 3% is much larger than from 3% to 9%, the absolute change is twice as large in the latter.

I continue to maintain that graphics that presents results that are misinterpreted by almost everyone to mean something that they don’t mean is a serious error that’s not acceptable. For a lay reader 9% is really much more serious than 3% (and not only for a lay reader).

It may seem that I’m more strict with science than with skeptics. That’s a true impression. I do, indeed, expect more from the scientists. Mostly they provide that, but unfortunately not always.

I do argue also against skeptics but it would be hopeless to do that every time it’s justified.

I would also note that attention was paid to the summer, not the winter, and that is because the winter has a broader distribution making the climate shift effect less dramatic on the frequencies of extreme events, even if the temperature change is just as large. This shows that Hansen had a particular message he wanted to emphasize, which was extreme events and frequency.

A 2008 study – “Oceanic Influences on Recent Continental Warming”, by Compo, G.P., and P.D. Sardeshmukh, (Climate Diagnostics Center, Cooperative Institute for Research in Environmental Sciences, University of Colorado, and Physical Sciences Division, Earth System Research Laboratory, National Oceanic and Atmospheric Administration), Climate Dynamics, 2008)
[http://www.cdc.noaa.gov/people/gilbert.p.compo/CompoSardeshmukh2007a.pdf] states: “Evidence is presented that the recent worldwide land warming has occurred largely in response to a worldwide warming of the oceans rather than as a direct response to increasing greenhouse gases (GHGs) over land. Atmospheric model simulations of the last half-century with prescribed observed ocean temperature changes, but without prescribed GHG changes, account for most of the land warming. … Several recent studies suggest that the observed SST variability may be misrepresented in the coupled models used in preparing the IPCC’s Fourth Assessment Report, with substantial errors on interannual and decadal scales. There is a hint of an underestimation of simulated decadal SST variability even in the published IPCC Report.”

Correct me if I’m wrong, but it seems to me that Steven Mosher believes that curve fitting implies conclusive attribution.
If that is true, he fails to understand what Prof. Curry has been trying to communicate. In any case, thanks to Mosher for posting here, as it makes the gap more obvious.

A: “Yes. There were three reviewers. I read the reviews and then checked our final draft to make sure that we addressed the points that we thought needed to be addressed.”

[ Mosher read the reviews. The authors did not read the reviews? Mosher checked the final draft….not necessarily the reviewed version, but the final draft. Not the after-review corrected version, but the final draft. Not ‘we corrected points needing correcting’, but “checked our final draft to make sure that we addressed the points that we thought needed to be addressed” — note: not the points the reviewers thought needed to be addressed, not the points the editor thought need to be addressed, but only the points we (I take it this means only Mosher himself, as there is never a mention of any of the authors being involved in this publishing process) thought need to be addressed. ]

Q: Was it sent out in it’s entirety?

A: ” Yes. I prepared the final draft.”

[ The meaning of the question was really ‘Was the entire paper, the whole kit-and-kaboodle, all the supplements, links to the data files, etc sent out to the reviewers?’ Mosher says only that he prepared the final draft (which should have only happened after the reviewers comments and subsequent corrections). ]

Q: Was it sent to 3 world class experts in climate and stats?

A: “The reviewers identities are not revealed so that I can only infer from their comments. They understood what we were doing and made helpful suggestions. This was in contrast to previous reviewer comments at other journals who seemed to struggle with kriging, so a geostats journal seemed the better fit.”

[ What? Does he simply mean they dared not question the authors? Does he mean not like those stupid fellows over the the Journal of Geophysical Research? Who had the audacity to want (quoting Mosher here) ‘… to have the methods paper published first before they would consider the results paper’?” ]

[ An aside here: As I understand it, the reason we are in this continuing endless controversy about surface temperatures, which the BEST Project was set up to settle, was that previous studies (such as Mann) were found to use a ‘method’ the produced a certain result independent of the data itself. Can BEST really wonder why JGR asked for a methods paper first? ]

I have asked Mosher in Comments above if BEST would make the reviewer’s comments available to the rest of the world, so we too can assure ourselves that the reviewers “understood what [BEST was] doing and made helpful suggestions” and weren’t just along the lines of — as I can imagine in my most cynical of moods — ‘oh so thrilled to be actually reviewing a paper by authors that included a Nobel prize-winner…and yes, of course, I love it and thanks for the opportunity and maybe someday when I get my degree maybe I’ll understand all that stuff you did with the data too.’ In my more sensible moments, I do wonder about the effect of ‘offering’ this paper to G&G (well, offering to pay them to publish it, same thing, almost). How could this ‘never had a paper to publish before’ journal refuse….an important paper, with important authors, to get their journal started on the right foot. Given this, I wonder if they wouldn’t have published it if it was written backwards in pig-latin and suspect the same effect on the volunteered or conscripted reviewers.

Well, maybe its just cynical old me….maybe its the coming thing for teams of important, prize-winning authors to publish in the vanity-science-press.

“[ Mosher read the reviews. The authors did not read the reviews? Mosher checked the final draft….not necessarily the reviewed version, but the final draft. Not the after-review corrected version, but the final draft. Not ‘we corrected points needing correcting’, but “checked our final draft to make sure that we addressed the points that we thought needed to be addressed” — note: not the points the reviewers thought needed to be addressed, not the points the editor thought need to be addressed, but only the points we (I take it this means only Mosher himself, as there is never a mention of any of the authors being involved in this publishing process) thought need to be addressed. ]

1. The authors, the entire team was given the reviewers comments. Then people gave there opinion on which comments were relevant and which were not. Since all reviewers approved the paper, the questions were which comments were the most important.
2. Authors then made a final version.

That final version was handed to me for final processing.

check box number 1. did the final version address the items the team thought were important. for example. Reviewer asks to add an explantion. team agrees. final draft conatins the explanation. simple

checkbox number 2. Are the publishing guidelines met.

Kip you can ask questions all day long and in the end you need to raise an issue with the paper. cause peer review in my mind is a check box. code and data are the acid test.

Yes, the BEST paper passed peer review in your mind, Steven. You know that Kip and others are raising an issue with the paper: has it passed legitimate peer review? You can’t send a paper to Marvel Comics to get their approval and expect anybody but a clown to believe that your box has been checked. You know that G&G has zero credibility. And on this subject, neither do you.

Kip.
” [ What? Does he simply mean they dared not question the authors? Does he mean not like those stupid fellows over the the Journal of Geophysical Research? Who had the audacity to want (quoting Mosher here) ‘… to have the methods paper published first before they would consider the results paper’?” ]”

No, I dont mean that they didnt dare question the authors. They read the paper. they approved it. had some suggestions. Some of those were worth adopting. Nothing major.

Well, I didnt characterize the other reviewers as stupid. The problem was pretty simple. The results paper was being held back subject to the approval of the methods paper. The reviewers of the methods paper ( in one case) didnt know what a nugget was. There were other personal criticisms in those reviews that really had no place in a peer review. meh?
Finally, audacity is you word. My word was odd. According to the physicists on the team, their experience was results papers typically came first and when the results depend on a known method there is no reason for a methods paper period. Since there is no GISS methods paper, and no CRU methods paper I could not argue with their logic. perhaps you can.

Your BEST paper has not passed legitmate peer review, Steven. Everybody knows that, including you. Why don’t you just admit that the BEST team decided to circumvent the peer review process? The reason being obvious and requiring no admission.

” [ An aside here: As I understand it, the reason we are in this continuing endless controversy about surface temperatures, which the BEST Project was set up to settle, was that previous studies (such as Mann) were found to use a ‘method’ the produced a certain result independent of the data itself. Can BEST really wonder why JGR asked for a methods paper first? ]

Yes. i can wonder. I wonder because kriging is a method known to be BLUE. yet, there is no paper showing that CRUs method is BLUE or that GISS method is BLUE. So, it seemed odd. However, if you have questions about the method you can read cressie. or you can look at Roberts methods memos. they are aimed at a audience that might not be interested in all the math details.

I think you are right in one sense. The problem is that there is no GISS methods paper, there is no CRU methods paper. There has generally been no in-depth explanation of methods used, no validation of methods, in some cases, not even careful records kept of methods, used in determining a global surface temperature dataset.

That’s what BEST was supposed to do — resolve all these ‘lack of’s.

If JGR editors were asking that your methods paper be peer reviewed first — and by this I assume we are talking about a paper that discloses, discussing, validates, proves, etc ALL the methodological components of BESTs novel approach, individually and as a total-combined ‘method’ — then I also assume that it was for this very reason — all previous papers and attempts failed to fully validate their methods leaving the world with a mess and confusion. That’s was BEST’s job — do it right — don’t make the same mistakes — and this should include testing and validating, proving the method used itself — and I would say first, then run it to get a result the world could have banked its future on.

Instead, for reasons that seem inexplicable, at least to me, you have not done so. You did not run a methods paper through the system first, in fact, instead you took your ball and went home…for a while…then have shown up putting it in play somewhere where the referees won’t be so strict.

I suspect that this would not satisfy you because you would then want to know if they were authentic or if I made them up. Plus they would not enlighten folks very much. 3 positive reviews with minor requests for changes dont get you what you want. Also, I’m not entirely clear that the reviews belong to berkeley earth. unclear on who controls that document from a copyright perspective. In any case, you are free to write to Muller and ask him. You do need to rememember that I am an unpaid volunteer there, so fire up your email and ask the boss.

Mr. Mosher –> FYI: I have requested GIGS release the Reviewers Comments or give Rohde permission to do so.

I have also requested confirmation that neither Rohde, nor the other authors, nor the BEST Project itself has been or will be charged publication fees and that SciTechnol or GIGS granted a waiver of fees for this paper.

Monfort –> It is part of OMICS/SciTechnol/GIGS’s corporate model to either demand full payment (they have been accused of not fully disclosing the amounts in advance), a discounted sum, or at their whim, granting a waiver of fees. Some of these discounts are based on country of origin — basically poor scientists or projects get a break.

When I enquired about fee structures this afternoon, they offered me a full waiver of fees for UVI students wishing to publish in their journal Marine Biology and Oceanographics.

I would not say either case is unethical. Nothing ethically wrong with paying for a paper to be published, anymore than it is to pay to have your wife’s poetry published. But then one can’t blame the world for having an opinion about why one had to take one’s science paper or wife’s poetry to a vanity press.

Actually , after hitting the button it occurred to me that I should have said “unseemly” instead of “unethical”. I didn’t correct it with a follow up comment, cause Judith got me on moderation, and I am not going to try correct something that has a good chance of not appearing anyway.

I think it looks worse that they didn’t pay. Smacks of favorable treatment, to get some Nobel Laureate business for a “journal” that was nothing but a title, until the BEST opportunity turned up to fill volume 1 issue 1. Mosher said he was assured that the reviewers would not need a kriging tutorial. It would really be interesting to see their approving reviews. My guess is short and sweet.

I really feel that some people are being hypercritical here. I believe Muller and Mosher to be rather honest and trustworthy. None of the caviling here about the paper has any direct evidence to support it. Can we just stop the conspiracy theories and address the paper content itself?

What are you talking about, David? Do you realize that the validity of the paper and the ersatz “journal” that allegedly reviewed the paper are different issues? Do you believe that G&G is a credible science journal? Do you believe that the BEST team, including a distinguished Nobel Laureate, are happy that their paper landed all the way down at the bottom of the barrel, after failing to pass review from a real journal?

I dont think they are thinking too clearly. We show in our methods paper that the other methods have larger bias. They dont care about that. one minute they want to bash CRU and when you show them something demonstrably better, they forget feynman, they forget the scientific method, they forget open data and code and they get personal. Oh well. F

You are whining about the wrong issue, Steven. The criticisms of the content of the paper are not your problem. You can handle that. The paper is really OK, as far as estimates of global temp go. If it were authored by Jones, Mann, Schmidt et al, it would have been a slam dunk.

It’s the phony peer review and your lame, dishonest cover story that is causing you grief. There are more than two geostats journals, Steven. It’s hard to imagine that the one you did not choose was less credible than G&G. We won’t ask you to name it. You would not have been any worse off, if Muller had created his own journal to review the paper. That would have had zero credibility, which is exactly where you are with G&G.

You could have found a legitimate venue for the paper that the Team does not control. What were you people thinking?

How about E&E? That’s one that the AGW proponents tend to bash and the skeptics tend to defend.

Regardless, the paper, data and code will be judged on their merits. Possibly some econometricians like McShane and Wyner will publish an alternate analysis in Annals of Applied Statistics, with comments and rejoinders. The BEST team are taking the advice of John Tukey and Frederick Mosteller: do not make an entire career out of one data set. Tukey also wrote that whatever is worth doing is worth doing badly.

You are missing the point. But I will answer your question. I haven’t thought much about E&E, but my impression is that it is a second or third-tier journal. But it is a journal. G&G is the equivalent of a Dominican Republic faux med-school diploma mill. A guy goes down there on a fishing trip and comes back two weeks later with a medical degree.

Why did they put the paper in G&G, Matt? And don’t you think it is too hilarious that it landed in volume 1 issue 1 of a pay-for- play journal of last resort, after the big media splash they made about their Greatest of All Time dataset?

But Mosher is amused at the skeptics, because they are inconsistent on peer review, or whatever. The truth is this incident is just more evidence of corrupt climate science pal review. If Muller had not pissed-off the powers that be, or if the paper had the right names on it, it would have sailed through.

Don Monfort: And don’t you think it is too hilarious that it landed in volume 1 issue 1 of a pay-for- play journal of last resort, after the big media splash they made about their Greatest of All Time dataset?

I already answered that: I applauded their decision to publish in the first issue of a new journal, and I wrote that the paper, data and code will be judged on their merits, and I mildly implied that purported defects may be addressed in subsequent publications.

If it were necessary that all potential reviewers agree that a paper be worthy of publication in order to be published, nothing would ever appear in published format.

“I can’t tell whether that is a serious comment. Its implication is that you consider the paper to be worthy of publication, in which case your harping on the particular journal is a red herring.”

I have said that I believe the paper would have been published, probably in JGR or similar, if Muller had not pissed off the Team, or if the paper had been written by Team members. Seems that Mosher’s little leaks of reviewers comments supports that. Is that clear now?

If you don’t think that publishing in the initial issue of a trash journal makes any difference, then I guess you would think it is a red herring. My guess is you wouldn’t publish your paper in one of OMICS stable of hundreds of pay-to-play “journals”. Or maybe I got you wrong.

” …already answered that: I applauded their decision to publish in the first issue of a new journal, and I wrote that the paper, data and code will be judged on their merits, and I mildly implied that purported defects may be addressed in subsequent publications.”

It is not a new journal with credible people behind it, Matt. Google OMICS. Unless you don’t care to know what you are talking about.

Don Monfort: I have said that I believe the paper would have been published, probably in JGR or similar, if Muller had not pissed off the Team, or if the paper had been written by Team members.

Then we agree that the paper was worth publishing. You provide a legitimate reason to avoid established journals and go with a new one: personal pique of the editors of the established journal. Limitations of established journals, such as the limitation you highlight in that comment, are the reasons that new journals are established.

OK, but you did say that the paper is worth publishing. Your only objections were that (a) the journal was not prestigious enough and (b) Muller may have offended the editors of the prestigious journals.

Yes, according to Steven and Matt if you don’t make it into JGR, then the default journal of second choice is G&G. Call me crazy, but I think G&G is more like the last resort. The fact that only one paper has ever been published there should tell one something.

CO2 is 0.04% of the atmosphere. If you change 0.04% by 50%, how does the global answer change?
Steven Mosher | January 21, 2013 at 11:36 am |
david.
The US is 2% of the land mass. If you change 2% by 50% how does the global answer change? perhaps you are a different page than I am when I say the answer doesnt change. there is the same issue with UHI since the land is only 30%. Throw out TOBS, throw SHAP and you have changed less than 2% of the data. large N is a bitch.

Instead of bashing the messenger, let’s give a tip o’ the hat to Steven Mosher, who had the guts to post this stuff.

The Hausfather and Wickenburg memos were both excellent, as they cleared up a misconception many people (like Eli Rabett) apparently had concerning Hansen2012, i.e. there has been no increase in variability, so a minor increase in globally and annually averaged land and sea surface temperature will likely cause only a minor increase in the new extreme high temperature (this is comforting news, especially for those that had gotten their knickers all twisted by misinterpreting H2012).

The BEST stuff is more dicey. While it appears that the data collection is an improvement over other land-only records, the “make it fit” paper by Rhode, Muller et al. published at G&G sort of “blew it” by causing more uproar than bringing real information (others have come to the same conclusion).

This paper tries to fit the land only temperature record to CO2 and volcanoes alone, with a residual multi-decadal oscillation from AMO. A bad idea IMO, since it ignores 70% of the globe (oceans) and could have been tied to anything else, just as well.

Steven has sort of half-heartedly defended this paper, but it is apparent that his heart really isn’t into it.

But, anyway, thanks to him for posting this all for us to read and discuss.

Actually, Steven has made the most bold and substantive contribution to climate science in the past two decades. The introduction of leprechauns into the mix is a stroke of genius solving many problems across the board. It adds a new powerful forcing into the scheme, the pot of gold promises to resolve many if not all funding controversies, and the rainbow will bring everyone together. Steven Mosher | January 20, 2013 at 1:32 pm |

Mosher said:”Somebody has to be the first. There was a choice between 2 journals where we could be assured that the reviewers did not require tutorials in kriging.”

How was Steven assured that 3 anonymous reviewers did not require tutorials in kriging? Especially reviewers that are assigned by the editor of a shady (that is a generous characterization) journal with zero track record.

I don’t believe that Steven will answer. I wish that he would go back to being the Steven Mosher he was, before he got involved with the BEST publicity hounds.

“How was Steven assured that 3 anonymous reviewers did not require tutorials in kriging? Especially reviewers that are assigned by the editor of a shady (that is a generous characterization) journal with zero track record.”

How you can possibly infer the journal is shady when it has zero track record? Well Don, it seems like you have made a conclusion based on no data. I thought that was frowned upon by ‘skeptics’. Guess not when your making the conclusion.

Don, the link to Venters comment is dubious and in no way represents evidence that a new journal at issue 1, page 1 is ‘shady’ despite what publishing group it is associated with. Did you read the comment by Vaughan Pratt immediately after? Did you reserve the same amount of skepticim for Venter that you presumably did for Vaughan?

As for you inferring from my comment that I called you a ‘skeptic’….sorry about that. But if Venters comment was enough ‘evidence’ for you to be convinced the journal is ‘shady’, then I’m a bit skeptical about how you evaluate information (even after reading all the other provided links). I mean if Venter quoting Jimbo’s comment at WUWT on his bad experience with some other unnamed journal in the same publishing group (OMICS) isn’t rock solid information that a totally different new journal with no track record isn’t also shady….. then I don’t know what is, right?

Hey Don, if Mosher comes back after awhile to report the new G&G journal ripped off BEST somehow and they were sorry they ever submitted to them, then I would consider that better evidence of the sort Jimbo if refering to. OK?

I am not going to get into the habit of spoonfeeding you information that you could just-watch this-GOOGLE IT! But I left a link for Matt M earlier today that you might find more informative than the Venter thing, which just happened to be handy. I am not going to look, but there may have been some links in there that you could have followed to get more of the story. Of course, you would have to actually read it.

I read VP’s comment on OMICS. Did you? Do you think he was praising their practices and credibility, or exonerating them? Why don’t you ask Dr. Pratt if he would submit one of his papers to an OMICS so-called journal, even as a last resort?

OMICS has created willy-nilly hundreds of so-called journals over the last few years. They are taking in over a million bucks a month in publishing fees and charging admissions to their cheesy conferences. They are journal spammers. G&G is just one little sardine from the OMICS kettle of fermenting fish. Smells bad to me.

As Mosher has explained, he is a volunteer…. He has no authorship or authority over the paper…. Your barking up the wrong tree…. Your requesting services he cannot perform…. quit being dense about the simplicity of his situation. He shared information here and explained more about whats going on at BEST. He’s inside watching and helping. If you got problems with where the paper was published or want to know details of what the reviews were…. Like he suggested, send an email to Muller.

Oh please, John. Mosher is here updating, talking about our paper, we do this, we say so and so, we do that. The fact is that he has done a lot of hard work on the paper much to its improvement from the original underachieving effort. It is a nice estimation of land temps, as estimations of land temps go.

Mosher and Zeke are carrying the BEST load here and there. They are the only ones to show their faces in public, since the paper was rejected by JGR and landed in the whatchyamacallit. And nobody is forcing them to do any of the above. If Mosher had nothing to do with the selection of G&G, one would think that he would have said so by now, unless I missed it.

The deal with G&G looks like an end run around the peer review process to some of us, and we got some questions. They can answer, or not.

When you are comparing different climate models by comparing their outputs as time series, how do you ensure that their initial conditions are the same? I suspect the 1000 year run is an attempt to provide stable initial conditions, but for that to work, we know that the rate at which heat percolates through the oceans is critical. Because of the variability of ocean currents at depths, this part of the model must be hard to validate and how much of the total delay can be regarded as inertial and how much is transport delay? It is important to get this right because even in linear systems of differential equations the two kinds of delay give significantly different results. Also to cater for the truelly random parts of the climate system, you could use either truely random ot pseudo-random series. Either is acceptable, but if comparison between models is desired then clearly it is facilitated by all parties using the same pseudo-random series. This may require a higher level of cooperation between the parties concerned.

Over on the Hansen on the standstill thread Arno Arrak wrote a comment including the following statement.

“This totally ignores the fact that there was a step warming thanks to the super El Nino of 1998 that raised global temperature by a third of a degree in four years and then stopped.”But my question is unrelated to the issue of whether there is a “standstill” or what the implications of that would be. Rather, my question has to do with the concept of “global average temperature” and the BEST (and other) temperature series in general.

It seems to me that if global average temperature means anything, it is the temperature of land, sea and air combined. Arrak’s comment was that the ’98 El Nino “raised global temperature by a third of a degree….” And both skeptics and consensus advocates speak of the effect of El Nino on GAT series, including BEST.

My question is – how can a weather phenomenon increase GAT? El Nino is not a heat source. It does not add heat to the climate system. A brief search of “what causes an El Nino” suggests that it is the result of an accumulation of heat in the Pacific. But whether that heat is concentrated in the Pacific, or diffused throughout the climate system, shouldn’t the average global temperature be the same?

And if the easy answer is…El Nino doesn’t raise GAT, then why do the temp series all seem to reflect an increase in GAT during El Nino years?

I guess my question is – is El Nino like a Pacific Ocean version of the urban heat island effect? A local physical phenomenon that causes a concentration of already present heat, that somehow skews temperature measurements to show an increase in the average? If not, why do global average temperatures (assuming there is such a thing) increase with an El Nino?

I understand how an El Nino can affect weather and temperature elsewhere, but how does it increase global average temperature?

The El Nino modus operandi is a slowdown in trade winds which reduces the amount heat upwardly removed from the water by evaporation and convection and also reduces heat downwardly removed into the colder water below. There would be no net gain or loss of energy in the downward mix but there would be in the upward decline as the energy removed upwards is heading out to space and now instead hangs around to raise readings on thermometers and angst in warmists.

Interesting idea. Fundamentally wrong, but interesting. The rise in tropospheric temperatures that accompany an El Niño is from the additional net heat released by the ocean to the troposphere. You can trace this energy easily by studying the Kelvin wave associated with El Niño as it traverses the equatorial Pacific, releasing additional heat to the troposphere as it crosses many thousands of miles of open ocean over a period of several months.

“It seems to me that if global average temperature means anything, it is the temperature of land, sea and air combined.”
________
Actually, I agree with you Gary, but what we need to define very carefully is exactly what is being measured on “land, sea, and air” combined and what we are really after.
GHG induced “global warming” is about a long-term energy imbalance in the Earth system, thus, if we want to see this energy imbalance, we need to be measuring the broadest and largest range of non-tectonic energy accumulating in the Earth System, right?
Measuring simply tropospheric temperatures at 2 meters over “land, sea, and air” gives you a very fickle, low thermal inertia region to measure a long-term gain in energy. Overall a very poor way to spot an energy imbalance. Thus, if we really care to measure a true “global” energy imbalance, we would want to actually measure net energy across the entire Earth system, which would include:
1. Troposphere temperatures near surface over land
2. Troposphere temperatures near surface over ocean
3. Ocean heat content (at various depths, near surface, 700m, 2000m, abyssal, etc.)
4. Cryosphere energy content (less ice= more energy in system etc.)
This would be the best measurement as to whether there is any GHG induced (or other forcing) energy imbalance in the Earth system.

Gates, the apparent lack of consistent surface data and confusing “surface” definition is a problem. Comparing the different data sets with lack of standardized “regions”, large variations in estimated absolute temperatures, mixing T ave for land with a more likely T min for oceans and major error potential in the critical regions, the poles is quite a challenge.

I have to give kudos to BEST for having taken at shot at Tmin, Tmax, Tave and a more realistic T absolute.

Thanks for the response. I must admit I thought “global average temperature” was supposed to represent temperatures throughout the troposphere, all land and ocean heat content at all levels.

When I looked at the results for a search of “global average temperature” trying to figure out what I am missing, I get a lot of hits that are really just “surface” temperature records. The headlines and titles of the graphs speak of global average temperature, then the limitations of the coverage area are dropped in the 4th or 5th paragraph.

But even if you limit yourself to surface temperatures, doesn’t the fact that an El Nino causes a spike in that average suggest a problem with the modern temperature records?

Every consensus advocate who wants to talk about whether global temperatures are rising as a result of AGW (C or not), wants to include 1998 in their graph. Most skeptics like to start theirs in 1999 or later. And both admit that the “super el nino” of 1998 skews the global average. But why should it?

qbeamus,

Maybe I’m wrong, but I don’t think your explanation below really answers the question. I understand the difference between the specific heat of the ocean vs. the surface air, but that explains HOW an El Nino could skew the global average temperature. That does not deal with the question of – SHOULD IT? Doesn’t your answer also suggest the reported “global average temperature” series are not properly accounting for the very disparity you point out? Otherwise, why would an El Nino have any impact on the average?

I doubt that GaryM can accept this, but Trenberth’s travesty had nothing to do with not enough warmth to affirm CAGW; it had to with the fact that a highly technical world lacks the technology to measure the temperature of the globe. ARGO is new. The land surface measurement system for weather allowed them to cobble together a usable backstop, and the space stuff appears to be a mess. Meanwhile, somebody invents a table saw that locks up the moment it detects contact with human skin. That’s the travesty.

“But even if you limit yourself to surface temperatures, doesn’t the fact that an El Nino causes a spike in that average suggest a problem with the modern temperature records?”
____
Natural internal variability such as we get from ENSO or even the AMO doesn’t create a “problem” per se with the modern temperature records, so long as we account for it in trying to find an external forcing signal amongst this natural variability. The problem arises in trying to identify the different time frames that this natural variability might operate on so that it can be accurately filtered out. The other complicating factor is not knowing if or how an external forcing may alter the character of something like ENSO or even the AMO, such that ENSO or the AMO might behave differently on an Earth at 450 ppm CO2 versus 280 ppm CO2.

I agree with the “mess” statement. It is a mess from the perspective that clearly defined terms are not present, nor is a consistently stated goal of measuring the change in energy of the Earth system that might be caused by an external forcing from accumulating GHG’s.

I once even quibbled (unsuccessfully of course) with NOAA on the use of the term “ocean heat content”. I really hate that term because what they are actually trying to measure is ocean thermal energy content, which keeps everyone focused on the goal of measuring the energy content of the Earth system and not just temperatures. Heat (I thought) was a measurement of thermal energy in flux or being transfered (i.e. leaving one area and moving to another) which would be measured in degrees C, F, or K whereas that is not the metric they give for ocean “heat” content, which they always of course give in Joules. What they are actually giving is ocean thermal energy content if they use Joules. They do of course do the nice conversion of turning ocean temperature into Joules of energy, so the measurement is valid, but I think confusing, especially when measuring temperature at the skin layer of the ocean, this is not necessarily energy in the ocean, but is often energy on the way out of the ocean into the troposphere, and thus, should not be included in ocean energy content.

Anyway, just one of my little pet peeves on trying to measure and accurately talk about the non-tectonic energy CONTENT of the Earth system.

I wasn’t discussing Trenberth or his missing heat. My question is about the impact of El Nino on global average temperatures. Since we are being told to make trillion dollar changes to the global economy based on reported temperature increases of tenths of a degree per decade, I just wondered why a localized weather event (albeit a over a large locale) like El Nino would cause a rise in GAT.

El Ninos apparently raise the GAT by 2 to 3 tenths of a degree from one year to the next, without adding any heat to the climate system. So it seems to me that either there is some external phenomenon that coincides with El Ninos, or the means of calculating GAT are out of whack.

Pekka Pirila says that the GAT temp series are the best we have, and they’re not very good. My question is, does the impact of El Ninos on reported GAT suggest that the flaws in the measurements are…worse than we thought?

Gates, Sadly, the Joules is probably the best way to go all around. I am trying to set up a static model in just Joules since converting latent energy into Wm-2 tends to cause eyeballs to glaze over almost as much as the concept of thermal envelopes and energy “walls”. When BEST gets the some standard regional data sets based on lat-lon bands, I am going to compare some of the SSW events to see if I can fine tune my guestimates with Fmax and Fmin values.

If the system we lack were in place, then global average temperature would be made up of precisely measured global values. It’s a travesty it ain’t.

In terms of ENSO, several climate scientists have taken 1998 out of their graphs, As well as 1978 through present. One skeptic here, one who says he’s looking for signals, called it “cooking the books”.

If one read that the average ocean temperature rose by X thousandths of a degree over time period Y, no one would get excited. Many would even ask: “how in hell could they measure such a small increase?”

But if one read that it increased by Z zillion joules, one could get real worried.

But you should “report” it in degC (as well as joules), so normal people can see that it is an extremely small amount of warming, unless, of course, you are trying to frighten them with “zillions” of joules.

Here we are discussing the BEST land-only record. But, as we know, the ocean is the real heat sink (as Pielke Sr. has advised us), so we should also look at what is happening there to get a full picture.

Why am I rationally skeptical of the OHC results being reported?

Prior to 2003 the measurements are next to worthless, so I will ignore these. Since the air above the ocean surface was showing warming, it is logical to assume that the water at the surface was also warming, but the actual measurements are so spotty and inaccurate, that they should be taken with a large grain of salt.

Once ARGO got set up in 2003, the first results showed net cooling to 2008 (Josh Willis’ “speed bump”). Everyone was amazed.

“Errors” were found in the ARGO measurements; when these were “corrected”, the results were almost flat, showing very slight warming.

Isn’t it reasonable to assume that if the air just above the ocean surface warms (or cools) over an extended time frame, such as a decade or more, that the upper ocean water will also warm (or cool) over this time frame?

The quality of the pre-ARGO measurements is questionable for several reasons: just prior to ARGO, the expendable XBT devices were used with very spotty coverage – these were found to “introduce a spurious warming error”. Prior top this the measurements were even spottier. One can probably draw the general conclusion that there was ocean warming prior to 2003 (what the hell, everything else was warming, including the air just above the ocean surface!). But to conclude anything quantitative or more granular would be a stretch involving a major “leap of faith”.

My question for the AGW convinced:
Why do you expect the Earth’s surface to have the idealized effective temperature (for the amount of outgoing IR at TOA) when the surface is not in vacuum and is cooled predominantly by non-radiative fluxes?

Why do you expect the Earth’s surface to have the idealized effective temperature (for the amount of outgoing IR at TOA) when the surface is not in vacuum and is cooled predominantly by non-radiative fluxes?

No mainstream theory makes such assumptions or gets such results as far as I know.

The effective radiative temperature is not a property of the surface but of the whole Earth with all the atmosphere including clouds as seen from the space.

The surface temperature is connected to the atmospheric temperature trough the constraints set by convection. The temperatures of the surface and the troposphere change together with rates that are not independent.

Atmosphere determines largely the net energy flux to the Earth system. The surface is more important in determining the temperature change that this heat flux will produce.

“The effective radiative temperature is not a property of the surface but of the whole Earth with all the atmosphere including clouds as seen from the space.”

Pekka, yes that’s the effective radiative temperature I mean. Why do you expect the Earth’s surface to have this temperature and involve the radiative GHE to explain the discrepancy between the actual surface temperature and the effective temperature of the planet (surface and atmosphere)?

The actual Earth’s surface temperature is higher than the effective terrestrial temperature and the explanation is the radiative GHE. The consensus expect the surface to have the effective temperature without the radiative GHE.

Yes the real Earth’s surface is cooled predominantly by non-radiative fluxes and cannot have the effective radiative temperature (for the total outgoing power at TOA!), exactly because it’s not cooled by radiation exclusively. The radiative GHE explanation is unnecessary.

Edim said, “Yes the real Earth’s surface is cooled predominantly by non-radiative fluxes and cannot have the effective radiative temperature (for the total outgoing power at TOA!), exactly because it’s not cooled by radiation exclusively.”

Actually, the real moist portion of the Earth is cooled by evaporation. That portion is roughly 70% of the total surface. That energy is transferred to the not moist portion of Earth. Since the average energy of the moist portion is 334 Wm-2 +/- a bit and the area of the moist portion is roughly 70%, then the total for all surfaces would be about 334*0.7=233.8 Wm-2 +/- a touch.

It is likely one of those Russian doll thingies. Layer in layers or envelopes inside of envelopes :)

@Edim: Yes the real Earth’s surface is cooled predominantly by non-radiative fluxes and cannot have the effective radiative temperature (for the total outgoing power at TOA!), exactly because it’s not cooled by radiation exclusively. The radiative GHE explanation is unnecessary.

Edim’s reasoning here is that because the GHE is now inhibiting radiation from the surface it is therefore no longer necessary to invoke the radiative GHE explanation to explain why Earth’s surface is hotter than Earth’s effective temperature (at TOA).

Hmm, let’s think about that a bit….

Ok. In place of heat escaping to space from Earth, consider cash flowing to Bashar al Assad from US banks holding some of his assets.

If the US were to freeze al Assad’s assets in US banks, the absence of cash flow from those banks to al Assad would make it no longer necessary to mention the US in explaining why al Assad is running low on cash.

Pekka was so stunned by the novelty of Edim’s line of reasoning that all he could say at first after catching his breath was “Do you have any idea what the words mean that you read and write?”

“Edim’s reasoning here is that because the GHE is now inhibiting radiation from the surface it is therefore no longer necessary to invoke the radiative GHE explanation to explain why Earth’s surface is hotter than Earth’s effective temperature (at TOA).”

Vaughan, the discrepancy between the actual Earth’s surface temperature and the effective terrestrial temperature is at the center of the GHG hypothesis. The GHG and the radiative heat exchange between surface and atmosphere are invoked to explain the discrepancy. My point is that the surface (with the atmosphere above) is cooled multimodally and it cannot be expected to have the effective terrestrial temperature – there’s no anomaly.

GHE is not inhibiting radiation from the surface, it’s allowing the non-radiative heat loss to the atmosphere, which cannot lose this energy effectively (the bulk of it anyway) to space. The atmosphere takes the energy from the surface ‘before’ it’s radiated directly to space.

And it’s also true that the surface exchanges heat with its environment (including space and the oceanic mixed layer) multimodally in the sense that radiation, conduction, and convection are all involved to various degrees depending on the immediate circumstances.

So if I’ve understood you we seem to be in agreement there.

GHE is not inhibiting radiation from the surface,…

True. Regardless of what’s in the atmosphere above, the Stefan-Boltzmann law tells us how much heat is radiated upwards from one square meter of surface of albedo A at temperature T in units of hectokelvins (100 K), namely 5.67 * T^4 * (1 − A).

So if A = 0 and T = 2.88 hectokelvins (i.e. 288 K) then that’s 5.67 * 2.88^4 * 1 = 390 watts being radiated upwards by the surface. (These values for A and T are what Kiehl and Trenberth use for Figure 7 of their 1997 paper “Earth’s Annual Global Mean
Energy Budget”. Obviously A is much less for snow and ice, and T at 273 K or less is too.)

it’s allowing the non-radiative heat loss to the atmosphere…

Assuming “allowing” and “not inhibiting” mean the same, I’d agree with that too. K&T’s Figure 7 shows 24 + 78 = 102 W/m2 of non-radiative heat loss to the atmosphere. Adding that to the 390 obtained above, that’s a total of 492 W/m2 of heat leaving the surface.

…which cannot lose this energy effectively (the bulk of it anyway) to space.

While “which” reads as though it refers to the atmosphere, it would make more sense if it referred to the surface. In any event Figure 7 breaks down the loss to space per square meter of Top-Of-Atmosphere as 40 W from the surface, 30 W from clouds, and 165 W from the rest of the atmosphere, for a total of 235 W to space.

The atmosphere takes the energy from the surface ‘before’ it’s radiated directly to space.

If by “takes” you mean “absorbs” (whether as radiation or via convection) then the above numbers mean that the atmosphere (including clouds) absorbs 452 W/m2 from the surface while allowing the remaining 492 − 452 = 40 W/m2 to radiate directly to space from the surface.

Since the atmosphere (including clouds) radiates 165 + 30 = 195 W/m2 to space, that leaves 452 − 195 = 257 W/m2 absorbed by the atmosphere from the surface that is not radiated to space. The atmosphere absorbs an additional 67 W/m2 from the Sun bringing the total amount of heat absorbed by the atmosphere and not radiated to space up to 257 + 67 = 324 W/m2. Figure 7 shows the whole of that amount as being radiated back down to the ground.

(My preference is to talk about radiation being exchanged between the surface and everything above it since the upwards and downwards components are happening in parallel, and to say that in the exchange of radiation between the surface and everything above it, there is a net loss from the surface of 66 W/m2 of radiation plus 102 W/m2 of non-radiative loss, totaling 168 W/m2, exactly balanced by a net gain from the Sun of 168 W/m2. However the math is equivalent to K&T’s Fig. 7. A subsequent Trenberth-et-al paper adjusted all this slightly to show a 1 W/m2 imbalance constituting the disequilibrium due to global warming.)

BBD, “No attempt is made to include an ice-albedo feedback due to snow cover or ice on the polar islands. There is no sea ice on the ocean.”

Yes, they were developing a model and it was idealized. I used my own little static model, very simple, to estimate the impact of changing ice conditions. I think. I look at implications of papers and try to verify to see what makes sense.

Toggweiler’s et al. 3C NH warming at the expense of 3C SH cooling with a total possible “global” cooling impact of up to 4C due to the opening of the Drake Passage which what change the mixing efficiency of the oceans, make sense.

If you are incapable of thinking on your own, needing “peer reviewed” papers to rely on to the exclusion of your own reasoning, they you and Jim Cripwell are in the same boat.

BTW, that paleo seesaw paper you love to cite, is missing a mechanism. “every second or third obliquity cycle” does not explain WHY every of third obliquity cycle. There is also a precessional cycle recurrence, ~4.3ka that also doesn’t FIT based on solar forcing alone and a 1470 +/- Bond event cycle that does FIT based on forcing alone. There is a missing link that hand waving doesn’t cut.

I think the problem you’re having arises from the distinction between temperature and heat. El Nino is not a heat source, as you say, but that does not mean that it can’t effect the average temperature, as that term is defined by climatologists. Because the specific heat of air is extremely low, in comparison to the specific heat of water, a given amount of heat energy can produce massively different average temperatures. According to numbers I just Googled, the specific heat of water is roughly four times greater than that of air, per unit mass, so if you reduce the temperature of 1 ton of water by one degree, you will increase the temperature of four tons of air by 1 degree.

The problem is compounded by the fact that climatologists have defined “average temperature” as a spatial average (by map grid blocks). This definition means that the enormous difference in the density of air compared to the density of water further exacerbates the distinction between temperature and heat. According to more numbers I just Googled, a column of air 1cm square, from sea level to the top of the atmosphere weighs about 1kg. In contrast, the average ocean depth is roughly 4.3km, meaning the average column of water (ignoring lakes and rivers) weighs over 430 times as much as the average column of water. So, if the Earth is 70% ocean, if you lowered the temperature of the oceans by 1 degree and dumped all that heat energy into the atmosphere, it would increase the temperature by roughly 300 degrees.

So the “average temperature,” as that term is used by climatologists, is not controlled by how much heat energy the Earth system contains. It is very largely a function of how that heat energy is distributed. The underlying assumption is that nothing really changes the proportion of energy stored in the oceans vs. the energy stored in the atmosphere. That assumption is not crazy, but it’s just an assumption–one that the El Nino effect appears to contradict.

Your response raises more question than it answers. No matter what you call it, the entire CAGW premise is being sold on the argument that the Earth’s average temperature is rising because the atmosphere is retaining more heat, longer, as a result of CO2 emissions.

If you want to suggest that the differential between the “specific heat” of water and air (let alone land) can cause a change in “average temperature” without the addition of any heat to the system, that undermines the entire CAGW argument. That is saying the global average temperature can rise without respect to any net increase or decrease in retained energy.

If true, that undermines the credibility of the temperature series even more than if El Nino is just skewing the averages every five years or so. What good is the term “global average temperature” if the term does not take into account the difference in the amount of heat retained by the various components that make up that average based on their heat as a result of their different densities?

The idea of AGW is not “that the Earth’s average temperature is rising because the atmosphere is retaining more heat, longer, as a result of CO2 emissions.” The idea is that the atmosphere with more CO2 makes the oceans and the surface retain more of the heat they receive from the sun. That makes the surface temperatures rise and the temperature of the troposphere rises automatically by roughly the same amount. To warm by that amount the atmosphere takes only a small fraction of all the extra heat retained.

The average surface temperature is for many reasons not a very good measure of the extra heat, but it’s the measure we have most knowledge about. It’s important also, because most of the consequences of global warming are more directly linked to the surface temperatures than to the heat content of the oceans. The sea level rise is perhaps the only important consequence that depends directly on the change in the ocean heat content.

I won’t quibble, I think your rephrasing is fine. I do understand that the oceans are claimed to be retaining more heat as a result of anthropogenic atmospheric CO2, the increased heat of the atmosphere means more heat is retained by the ocean that would otherwise radiate into the atmosphere and ultimately out of the climate system.

Your second sentence is more to the point.

“The average surface temperature is for many reasons not a very good measure of the extra heat, but it’s the measure we have most knowledge about.”

Exactly. And my question is, and has been, whether the effect of El Ninos on the reported averages mean the reported surface temperatures are even less reliable than most skeptics think.

If an El Nino can have such a drastic effect on “global average temperature,” even though it just represents concentration of heat in one element of the climate system, from other elements of that system, what does that say about the method used to calculate the average?

And trying to answer my own questions, it seems that El Ninos do have an outsized effect on reported global averages.

There were El Ninos in 1997-98, 2006 and 2009-10.

And the global average temp reported by NOAA spiked each of those years.

Either El Nino causes or represents an increase in global average temp or it doesn’t. If it doesn’t, doesn’t that make the reported averages even less reliable as far as determining whether there is actual global warming? (Forgetting for purposes of this discussion the claims to accuracy within tenths of a degree.)

Changes in the surface air temperature may not be the most important part of the scientific puzzle, as you point out, but it certainly is the most important part as far as the AGW effect on humans and our environment is concerned.

A potential 2C rise in surface temperature (especially above land, where we all live) is perceived to be much more important than an equivalent 0.002C average increase in the ocean temperature, even though the same number of joules are involved.

[BTW, that’s also why OHC is reported in joules, rather than in thousandths of a degree C.]

“It’s important also, because most of the consequences of global warming are more directly linked to the surface temperatures than to the heat content of the oceans. The sea level rise is perhaps the only important consequence that depends directly on the change in the ocean heat content.”
____
Not generally true. Heat content of the ocean and its rather massive advection to the polar regions via ocean currents has a great impact on sea ice, permafrost, and the cryosphere around the polar regions in general. The ocean is very efficient at getting heat from the tropics to the poles, and if (as seems very likely) we seen a seasonally ice-free Arctic in the next decade or so, this will have huge impact on everything from the potentially accelerated heat retention by the Earth system to weather patterns around the world.

The oceans are the main energy reservoir for the planet. Change them, even a little, and you will affect the entire system.

Max, think about what you wrote…a small increase in ocean heat energy can equate to a large increase in the atmosphere. Considering that net heat flow is always from ocean to atmosphere, I think you have proven the point about atmospheric warming being “in the pipeline”.

Considering that net heat flow is always from ocean to atmosphere, I think you have proven the point about atmospheric warming being “in the pipeline”.

Let’s analyze that BIG word, “always”.

Since 2001:
– the atmosphere above the ocean (Hadley SST record) has been cooling slightly
– the upper ocean water (ARGO) has been warming very slightly

So since 2001 there has been no net heat flow from the ocean to the atmosphere.

Oops!

The “pipeline” bit is derived from circular logic. [Models said past warming since industrialization should have been X. However it was only X/2. Ergo X/2 warming is still “in the pipeline”.]

Whether or not the added “mega-joules” that have warmed the ocean by a small fraction of a degree are ever coming back out to warm the atmosphere by several degrees (i.e. “climate-carbon cycle feedback”) is a hypothetical deliberation much like the question of how many angels can dance on the head of a pin.

“Since 2001:
– the atmosphere above the ocean (Hadley SST record) has been cooling slightly
– the upper ocean water (ARGO) has been warming very slightly

So since 2001 there has been no net heat flow from the ocean to the atmosphere.

_______
Max,

Not to question your basic knowledge of thermodynamics, but of course there has been net heat flowing from ocean to atmosphere since 2001. Do you understand what would happen to the atmosphere if this were not the case? The atmosphere would get cold Max…very cold. The ocean is the great thermal energy buffer for the atmosphere and the planet, and if there was not net heat flowing from that buffer to the atmosphere, constantly, that atmosphere would see far more diurnal variation than it does.
The oceans warming while the lower troposphere cooling slightly would be exactly what you would expect if the RATE of energy flowing from ocean to atmosphere was reduced. It is still flowing, but less readily as the thermal gradient is reduced at the ocean skin layer—exactly what would be expected by increased downwelling IR from increased GHG’s.

Am I right in thinking that it is *conduction* across the ocean skin layer that is inhibited as the thermal gradient is reduced? And that this is the mechanism by which ocean cooling is reduced by DLR?

Am I right in thinking that it is *conduction* across the ocean skin layer that is inhibited as the thermal gradient is reduced? And that this is the mechanism by which ocean cooling is reduced by DLR?

_____

Exactly so, and as such, the effect is rather small as well, but just enough to give a slight downward tilt to lower tropospheric temperatures near the ocean surface or simply flat if increasing GHG forcing is taken into long-term account. Conduction across the skin layer is of necessity reduced by increased downwelling of IR, as the very top of the skin layer is warmed by the IR, and thus, the thermal gradient across the skin layer is made less steep. What one must keep in mind is that the water just below the skin layer is generally warmer than the air above the skin layer when averaged over the planet, and this water below the skin layer is warmed by short wave sunlight passing through the skin layer. A warmer top to the skin layer means that conduction across the skin is reduced thereby allowing the ocean to accumulate more net energy.

The net effect of all this is that the oceans retain the vast amount of energy accumulating in the Earth system, while the atmosphere (through GHG’s) act as a governor or regulator dictating how readily that energy flows from ocean to atmosphere to space. As the oceans are very good at advecting that energy toward the poles through currents (especially the N. Pole as the currents can reach all the way to 90 degrees N), this excess energy in the oceans is passed to the rather sensitive cryosphere and we see the associated decline of Arctic Sea ice over time which is declining more from a warming ocean than a warming atmosphere.

When I wrote that the surface is what affects us I essentially divided the system to surface and deep ocean. In that division sea ice belongs largely to the surface.

In a situation where OHC goes up but atmosphere does not warm the heat must go to “deep ocean”, which in this case is everything below the well mixed layer. The mixed layer cannot deviate much from the atmosphere in its behavior. The coupling is too strong for that.

Thanks for those excellent references. The subtle, but essential, thermodynamics of the ocean skin layer is often overlooked by many. Many “skeptics” rightly point out that downwelling LW can’t penetrate the
ocean skin layer. Unfortunately, they fail to see that it does not have to penetrate beyond to alter that skin layer enough to regulate the flow of energy out of the ocean. The fact that LW primarily heats the top of the skin layer is exactly enough to alter the thermal gradient of that layer.

Gates, true, under calm or stable conditions, the surface skin layer does a wonderful job as a regulator. Change the average velocity of the surface winds though and the roughly 100:1 water:air heat exchange coefficient takes over creating the large variations in heat exchange with ocean/atmosphere oscillations. Not having a good long term indication of the variations in the “True” average SST, puts us at a deficit.

That is the main reason I focus on 45S-65S which limits that skin effect on surface temperatures and my estimate of the impact of 4Wm-2 atmospheric forcing is in the low range. I guess that makes me a tepid warmer?

The changes in the skin layer cannot have significant effect on the temperature difference of the lowest atmosphere and water clearly below the skin (like a few meters). To get significant variability it’s necessary to go deeper in the ocean.

There’s certainly local variability at all times but that averages out.

“IOW the net flow of energy was not from the ocean, but to the ocean.”
____
Not physically possible for the net flow of energy to be from atmosphere to ocean. Just like a jacket you wear does not add energy to your body, the jacket of the atmosphere cannot add energy to the ocean but only can serve to slow down the rate at which the ocean loses energy that it primarily gains from solar SW.

Sorry Max, you need to rethink your physics. The net flow of energy goes like this: Sun to Ocean to Atmosphere to Space. It’s been this way for hundreds of millions of years, only pausing during “Snowball Earth” episodes, in which so much ice covered the planet that the the net flow of energy was from Sun to Ice to Space as there was more reflected SW and much less LW and a very cold dry atmosphere getting mostly left out in between (much like Antarctica today), but the upside during those periods was that the ice kept most of the heat in the ocean.

A question for you: how many more years of “pause” would it take (despite unabated human GHG emissions) in your opinion to falsify the CAGW hypothesis?

When the radiative physics underpinning the ‘greenhouse effect’ is falsified and an alternative explanation for why the average surface temperature isn’t below freezing is established, then we can talk about this further.

Now that “the facts” show no warming (= slight cooling) since the turn of the millennium, it’s all too short a period to be meaningful.

But these are not ‘the facts’. This is the point here. You *cannot assert* that there has been no warming, let alone slight cooling since 2001. The data are contradictory (several reconstructions show warming, as we know) and the period *too short* for definitive pronouncements.

But your entire argument with R. Gates rests on exactly such definitive pronouncements. What I am attempting to explain to you is that you have no argument.

“When the radiative physics underpinning the ‘greenhouse effect’ is falsified and an alternative explanation for why the average surface temperature isn’t below freezing is established, then we can talk about this further.”

This is simple. Radiative physics is only a part of the problem, which is a multi-modal heat transfer problem – the surface is cooled by evaporation predominantly. The surface cooling fluxes in annual and global average are:
– evaporation (latent heat transfer) – 45%
– radiative cooling surface/atmosphere – 29%
– convective cooling – 14%
– direct surface radiation to space – 12%

The surface isn’t below freezing because the atmosphere is insulating it. Expecting the Earth’s surface to have an effective temperature according to Stefan-Boltzmann law is mind-boggling. The bulk of the atmosphere (N2, O2) insulates the surface.

Agree with the several points you mention, but there is also reflection of incoming radiation back to space by surface albedo, but even more importantly by clouds.

This impact and its causes are poorly understood (IPCC concedes it is “the largest source of uncertainty”).

A 10% change in low cloud cover has a greater impact on our climate than a doubling of atmospheric CO2.

And we (including IPCC) do not know what causes changes in cloud cover, although studies have shown that cloud albedo decreased in the 1990s and increased in the 2000s, concurrent with late 20thC warming, followed by early 21stC slight cooling of the global average temperature.

Negative carbon isotope anomalies in carbonate rocks bracketing Neoproterozoic glacial deposits in Namibia, combined with estimates of thermal subsidence history, suggest that biological productivity in the surface ocean collapsed for millions of years. This collapse can be explained by a global glaciation (that is, a snowball Earth), which ended abruptly when subaerial volcanic outgassing raised atmospheric carbon dioxide to about 350 times the modern level. The rapid termination would have resulted in a warming of the snowball Earth to extreme greenhouse conditions. The transfer of atmospheric carbon dioxide to the ocean would result in the rapid precipitation of calcium carbonate in warm surface waters, producing the cap carbonate rocks observed globally.

“350 times?”

[In 1998, atmospheric CO2 was at 366 ppmv.]

= 350*366 = 128,100 ppmv

Whew!

Had me scared for a minute, but since an increase to this concentration would involve over 200 times the amount of carbon in all the remaining fossil fuels on our planet, I guess I don’t need to worry about these ”extreme greenhouse conditions” happening again (at least not from human GHG emissions).

Hoffman’s original estimate is probably too high; see Abbot & Pierrehumbert (2010). But that isn’t the point at all (you invariably manage either to miss the point or mangle it until it is unrecognisable).

The point is that radiative forcing from CO2 was eventually sufficient to overcome the massive positive albedo feedback maintaining the Snowball Earth state. This is – or should be – an indication that the radiative physics you apparently doubt is both real and efficacious. I wasn’t for an instant suggesting that anthropogenic emissions could match the hypothesised concentrations of CO2 that overcame the albedo of a Snowball Earth. Simply that the forcing from increasing CO2 can and will cause energy to accumulate in the climate system more or less as expected.

Those who argue for a very low value for S to 2 x CO2 or that the brief warming hiatus ‘falsifies’ the scientific consensus on CO2 forcing have an enormous amount of paleoclimate behaviour to explain away before they can make a convincing case.

The mechanism whereby the Earth broke free of the snowball state is not certain, but seems to involve volcanic activity that both deposited ash on snow, thus raising the melting potential of that snow and also injected lots of CO2 into the atmosphere. Certainly nothing is definitive on this yet.

The altered albedo hypothesis is explored in Abbot & Pierrehumbert. A very compelling piece of evidence for extremely high concentrations of CO2 at the end of the Snowball phase comes in the form of the cap carbonates immediately above the Neoproterozoic glacial deposits.

And by the way, for what it’s worth, the responses of R. Gates, qbeamus and Pekka Pirila are a good example of why I like this site so much. I am pretty much a confirmed troglodytic conservative skeptic. Yet even the more, shall we say, committed, consensus advocates are willing to answer my questions civilly. It is just more entertaining to engage in discussion with people with whom I disagree, than simply echo back and forth agreement among those of my own “tribe.”

“And if the easy answer is…El Nino doesn’t raise GAT, then why do the temp series all seem to reflect an increase in GAT during El Nino years?”

—-
El Niño represents a net transfer of energy from ocean to troposphere, but does not increase energy of the Earth system as both ocean and atmosphere are obviously part of that same system. It takes an external forcing on the system to cause a net increase or decrease in energy of the system. ENSO is internal variability but not an external forcing.

The way to picture an El Nino is as follows. It starts with a deep pool of warm water in the West Pacific. However when the easterly wind lets up this pool spreads east and gets shallower exposing more warm water to the surface. The total energy has not changed, but more is exposed to the atmosphere which it proceeds to warm faster, hence El Nino, until the East Pacific warm water has cooled off and the easterly winds and currents resume to keep the warm water in the West Pacific where it recharges as a deep warm pool.

“…peer review in my mind is a check box. code and data are the acid test.”

Mosher at least, no idea about the paper authors, thinks that peer-review IS just a “check box” item — just a rubber stamp from someone having no particular importance.

So the issue of whether or not this paper has been rigorously peer-reviewed is moot….BEST (represented by Mosher) couldn’t care less about the quality of the peer-review as long as they “checked the box” and the BEST Team can proclaim (while disdaining) ‘peer-reviewed’.

So, who is ‘‘re defining the peer reviewed literature”? –>> Mosher and BEST — peer-review is just “a check box”.

With this attitude, it may well have been three undergrads at PoDunk U.

What a waste of the trust and faith of the people who supported this project over the last couple of years. What a waste of funding.

Now we have what once must have been proud professionals “gaming the system” of peer-review and instead of doing the right and hard thing and working through the peer-review system at JGR (or another reputable well-respected journal in their field) they pay-to-publish in a vanity-press journal.

Now the controversy will continue to rage — the Team will proclaim a “peer-reviewed result” and the skeptics will [rightly] cry foul (at least on this point).

The paper that was to be the “Answer to Life, the Universe, and Everything regarding the surface temperature record of the last 150 years”, from the project that was established to put this controversy finally, once-and-for-all, to rest, will just itself be the center of another unending controversy.

BEST will never get past the questions of why when the paper was rejected at JGR, they did not just do the work necessary to get it accepted. There will always be the suspicion, even amongst the Consensus Team, that the paper would not have survived rigorous peer-review at a major journal.

Thus, the paper will not settle any part of the issue BEST was established to settle. A near total waste. A scientifically suicidal publishing decision.

This the same Mosher who posted last month in this blog ” Don’t quote me that E&E Crap” when somebody referred to a paper from E&E in one of the hreads. Note the crtiticism. It was not about the paper. It was about E&E, a disdainful remark. And he has made such remarks many times

Now he’s defending a peer review and publication in volume 1 article 1 of a unheard new journal from India run by a dodgy group, who have been documented to show shady practices in publication. See below post frm Jimbo in WUWT yesterday

QUOTE

Hello.
I had a serious problem with one of the journals of OMICS Group. After receiving a lot of emails offering me publish my works in their journals, I asked them about the possibility of publishing a research paper. They asked me to read the paper and I made ​​the mistake of sending it. I did not hear anything about this editorial, until three months later they told me that they had accepted the job and would publish them if I were paid to them $ 2,700. Then the manuscript has not been published yet and I told them to publish in their magazine not interested me. I did not receive any review of the manuscript and I saw that the data on the web magazine about impact index were false. I only asked for information and I never authorized the publication of my work. Two months later, they published it without my permission. The published paper is full of errors. Since then I have sent a dozen emails urging the withdrawal of my work on their site. However, they did not withdraw and would require payment of $ 2700. What do you recommend I do? No doubt this is a fraud, and I do not know how to get them to withdraw the work and they stop sending payment requirements.http://poynder.blogspot.co.uk/2011/12/open-access-interviews-omics-publishing.html

Perhaps because I work in a broader range of disciplines than many, I get more than my fair share of solicitations for paper submissions from OMICS—at least one every week, some even in areas I have never worked in.

The OMICS business model is the same reasonable one as used by Springer-Verlag, Elsevier, Kluwer, etc, namely to provide publication venues in return for conference fees, page charges, etc.

The differences are (i) that these other publishers have been around for a great many decades whereas OMICS’s first journal was in 2008; and (ii) while these other publishers use their page charges to finance the hard copy versions of their publications as well as their large operations, OMICS has a tiny staff (AFAIK) and is completely electronic and therefore has no need for page charges on anything like the usual scale. Nevertheless their page charges are at the level one would expect for hard copy distribution by a large publishing entity with offices in many countries.

To my thinking OMICS has stumbled on a brilliant business model. Here’s their mission statement.

“OMICS Group through its Open Access Initiative is committed to make genuine and reliable contributions to the scientific community. OMICS Group hosts over 250 leading-edge peer reviewed Open Access journals and has organised over 100-150 world class scientific conferences all over the world. OMICS journals have over 2 million readers and the fame and success of the same can be attributed to the strong editorial board which contains over 20000 eminent personalities and the rapid, quality and quick review processing. OMICS Conferences make the perfect platform for global networking as it brings together renowned speakers and scientists across the globe to a most exciting and memorable scientific event filled with much enlightening interactive sessions, world class exhibitions and poster presentations.”

If you were applying for a job in marketing and they asked you for a sample of your work, do you think you could come up with something like that?

Perhaps not, but those who sold way more Girl Scout cookies in their youth than you surely could. With numbers like “2 million readers” and “20,000 eminent personalities,” the word “metrics” comes to mind. What does OMICS define as a “reader,” and what is their criterion for “eminent?” This has all the earmarks of what it takes to convince a gullible public that the emperor is wearing the finest of clothes.

I’m not saying these OMICS conferences and journals don’t exist. Science funding can usually be counted on to spring for a trip to a conference where your paper has been accepted (the criterion when I was a junior faculty in the 1970s), including registration fees however exorbitant. Likewise page charges have until recently not been critically examined, so that an “Open Access” journal with minimal overhead can extract the same page charges as a hard copy journal from either the taxpayer in the case of an already funded researcher or someone willing to pay in order to make progress towards tenure or some other career goal.

Rather, what we have here is a market that is as badly broken as the textbook market, if not even worse.

Its easy. E&E publish crap. having reviewed E&E papers in my area and seeing for myself that they are crap, is the acid test. When I read a technical comment on the paper that has some merit I’ll let you know.
Here is the primary claim of the paper.

1) using the berkeley method we extend the coverage back before 1850 and we reduce the error bands.

“BEST will never get past the questions of why when the paper was rejected at JGR, they did not just do the work necessary to get it accepted. ”

skeptics will never be satisfied and will always have questions. Fortunately people who want to use the data and need a reference are happy. And you have questions that they don’t care about. The technical work to get it accepted was done. i will say that one reviewer at JGR wanted to know what muller would do to apologize for criticizing CRUTEM. Seemed like an odd request for a science paper.

That request fits well what I have considered likely. Some climate scientist were irritated by the entry of outsiders “to do better” what they considered already good enough. The validity and sufficiency of the earlier temperature time series had been defended repeatedly. A large number of tests had been done to confirm that the remaining problems were not too severe.

Having that basic attitude it appeared unlikely that BEST was able to produce significant new scientific results. Assuming that the reviewers belong to those scientists I speculate about, it’s not surprising that they required good evidence of genuinely new scientific value, and that such a value would be dependent on the details of the methodology.

Another approach is equally possible and probably more correct. That starts from the observation that the temperature time series appear to have long-lasting value and therefore should be produced using systematically analyzed and best possible methods. Doing that is perhaps rather professional application of statistical methods and database engineering than climate science. The outcome does, however, serve climate science and is worth presenting to the scientific community through journals.

One problem with the present BEST appears to be that it’s not maintained by a similarly permanent organization as the other time series. If it is the methodologically best, as I tend to believe, it should perhaps be taken over by an organization that has an assured long-term interest in maintaining it. Are there any ideas of, how that could come out?

Mr. Mosher –> I am not worried about whether ‘skeptics’ will be satisfied or not.

My worry is that BEST has thrown away its scientific credibility, the trust that the scientific community, supporters and funding groups placed in the project, the hopes that it would achieve the objective of:

“The Berkeley Earth Surface Temperature project aims to help resolve criticisms of the temperature record and lower the barriers to entry into climate science.”

Your publishing decision in and by itself will result in the failure of this effort to “resolve criticisms of the temperature record” which was the main objective of the BEST project.

BEST’s objective to “… inviting comment at the earliest stages of the peer review process” has been turned to a mockery by 1) dodging the peer review process nearly entirely by the decision of which journal to publish in and 2) the almost total disdain you have exhibited here for the peer review process itself.

Now, anyone who needs a cite to support their cAGW leanings will happily use this paper and its data….and those who don’t take that side will point to the dodgy, questionable status of the paper and the doubts of the professionalism of the peer review itself and the attitudes exhibited here about peer review.

In other words, as they say in fencing, Nothing Done.

BEST has failed on its first paper out of the gate. It has just fueled the controversy with a incredibly poor choice of publishing journal. You couldn’t have done worse than this. It would have been a better choice not to publish at all.

Simple Don. I like watching you guys step on rakes.
And, there are nothing but rakes in the file.
Like I said substantive technical issues were addressed. when a reviewer tells you to “get off your american high horse”,
then, I’d say that moving to a different journal is probably a good idea.
there are more tidbits or rakes if you like. go ahead step on one

I sympathize with you on the rough treatment you got from the Team, Steven. It was entirely predictable and it ain’t the mystery here. Try to watch the football, Steve. What some of us want to know is why you landed in the G&G. Was that really your second choice? Do you really believe that it don’t matter where a scientific paper get’s its box checked? That sounds suspiciously like some kind of rationalization, sour grapes do they call it.

Why don’t you just show us, or tell us in your own honest words, what the alleged reviewers from G&G had to say about the paper? I promise to believe you, like I used to:)

What happened to Mosher? The guy who always demanded attribution and links from others refuses to tell us who the reviewers are. If he will not respond to Don or Kip maybe Judith can help since this information is so crucial. Otherwise the entire blog is a waste of everyone`s time.

This is a pointless discussion. The reviewers are anonymous, which is the standard practice for peer review of journals and research proposal. Even online journals that post reviews do not include the names of the reviewers, only the name of the editor that handled the paper

There are no charges to publish. If that is your criteria then the journals you worship are worse. maybe you should read the paper. When you find something wrong, you’ll understand why peer review is necessary ( check box) but not sufficient

“India-based OMICS Publishing Group has just launched a new brand of scholarly journals called “SciTechnol.” This new OMICS brand lists 53 new journals, though none has any content yet.”

[note to readers: G&G: Overview is listed on this page as #23 ]

“OMICS Publishing group has exploited many young researchers by inviting them to submit article manuscripts, leading them through the editing and review process, publishing the article and then invoicing the author. …. In most cases, the authors have no idea that an author fee applies until they receive the invoice.”

In a 2011 interview, founder and managing director of OMICS Dr.Srinu Babu, states: “Yes, OMICS operates an author-pays business model and author s are invoiced in relation to the funding available to them . In practice, this means that w e provide complete waivers , or discounts of up to 90% , for some articles — dep ending on the request/research , and the effort the author has put into the respective article. Right now out of every ten articles, two will get a waiver , and another four will get a discount.”

I personally confirmed this business model for SciTechnol, publisher of G&G, by calling their business office in Henderson, Nevada this afternoon. They confirmed their author-pays business model, but offered to waive fees for students in the USVI wishing to publish (which is part of the business model explained by Dr. Babu above.)

The fact that BEST didn’t pay (yet?) doesn’t change the fact that G&G is an author-pays, pay-to-publish journal. The climate science community will decide whether that is significant to them. So will the general public when it hits the MSM.

There are no charges to publish. If that is your criteria then the journals you worship are worse. maybe you should read the paper. When you find something wrong, you’ll understand why peer review is necessary ( check box) but not sufficient”

Are you saying that G&G does not charge for publishing, Steven? We know that not to be true. There business model is pay-for-play. They are taking in over a million bucks a month, mostly from publishing fees. Why didn’t they charge BEST? Did you plead poverty? Did you people know anything at all about the OMIC empire, before you submitted your paper to them? Did you care to know? Any old journal will do?

Had Berkeley Earth ravaged the other temperature series, the journals would have gone to war to be the publisher. Many people have said the BEST results are unsurprising. Meaning in part, already in the literature. Their findings so far are not sexy. Why would a mainstream journal republish the results?

It is not established that the new journal is the abysmal chithole E&E is. RIght now, it’s merely new. Is its first paper better than E&E’s standard fare? Appears so.

I wish Wood for Trees would get their data updated. There is apparently no reason for not doing so.

1. I don’t know who the reviewers are.
2. the reviews are not my property to release.
3. If you want the reviews released contact the journal.It is their document.
4. The reviews were all positive, so logically, if you have an issue with the paper, you would obviously have issues with the reviews which disagree with you.
5. Since none of you have raised a substantive issue with the results, you fundamentally agree with all the reviewers.

An aside, I think the PLoS model of review is fine for most scientific papers, ensure some basic competence and then let the major gate-keeping be done over time as subsequent researchers decide which papers are worthy of citation and further examination:

I think BEST is getting grief from various directions for putting the publicity machine ahead of the peer review, but but I do not see why narrow funnels into select ‘prestige’ journals should be allowed to hold up timely peer review and publication. Glad to see this data paper out there — I would have been happy to see it appear in PLoSOne (I have no connection with them but I applaud what they are doing).

Don, no, OMICS does not sound like a good or reputable outfit. I was mentioning the PLoS journals as a point of comparison about more rapid peer review and publication online. The OMICS group seems to take something of the PLoS model which was meant to serve rapidly changing fields such as microbiology, but then tries to milk it with traditional journal fees or higher, deceptive and predatory practices, etc., at least if a couple of disturbing anecdotes should prove accurate. Anyway, I have no brief for OMICS journals but don’t know of anything similar to be said against the PLoS journals, which I understand are run very much by scientists so far.

Don Monfort: All journals created equal and shall remain equal. Thank you Rt. Rev. David Young.

Journals acquire their prestige from the papers that they publish; the papers that they reject are largely unknown to the readers. The BEST paper will be highly cited, in praise and in criticism, and lend its prestige to the journal.

Didn’t you used to be Mattstat? Mattstat and Steven Mosher were two of the handful of guys I looked to when I first got interested in these climate blogs to help me sort out the BS from the plausible. It is not plausible that G&G will gain prestige by publishing the BEST paper. You can’t make a silk purse out of a sow’s ear, no matter how much lipstick you put on it.

Thank you for being courteous and rational, Skip. I don’t know why it is so hard for many of these people to admit that G&G is not a good place to get your science paper’s box checked. It’s really weird.

Imagine if JGR had published this and it got a press release announcing that a sensitivity of 3 C per doubling fits the land record, which also shows no sign of a pause. A lot of the same people will have been complaining about how such a paper with that much AGW signal could have made it through except for pal review. But it didn’t, so pal review is not in evidence at JGR, and their objectivity should be the message, yet we don’t see any of the skeptics supporting JGR on this decision. Why? Because it blows a hole in the peer-review conspiracy theory.

Hardly.Even though it’s true that people would be saying that, nothing about it says they are not corrupt. Nothing says it was not personal animus.
And Mosher’s Moon Landing only proves at long last, that it is made of cheese. .

OK, so you are thinking that JGR were looking at the credentials of the authors rather than the content, and for sure not at the political benefits of publishing such a paper. I agree with others here that lack of novelty in the results might have been the problem. At some point in time confirmation papers don’t matter so much which is a symptom of a by now well established temperature record.

Jim D said
“OK, so you are thinking that JGR were looking at the credentials of the authors rather than the content[/quote]I think there may have been personal animus involved, but that does not rule out political maneuvering, and does not rule out that they looked at the content.

Nothing is ruled out, so there is no reason why a skeptic would not be able to interpret things one way but in a different situation interpret things a different way.

“I agree with others here that lack of novelty in the results might have been the problem.[/quote] I agree with that too. It might have been.

“Imagine if JGR had published this and it got a press release announcing that a sensitivity of 3 C per doubling fits the land record, which also shows no sign of a pause.”

Well. It didn’t.

And, you know as well as I do, that the land record alone cannot lead to a sensible estimate of ECS, with all the uncertainties involved.

And there IS a ”pause”, so that would have been a silly conclusion.

You continue:

A lot of the same people will have been complaining about how such a paper with that much AGW signal could have made it through except for pal review. But it didn’t, so pal review is not in evidence at JGR.

So, whether or not people would have complained, such a paper would be ridiculous in any case, with or without “pal review”.

And your hypothetical case with silly assumptions doesn’t prove anything, one way or the other, about the “pal review” process at JGR.

How could it?

And, Jim D, whether you understand it or not, there is no “conspiracy theory” (except in your own mind).

Further to my point about explicitly modeling the 20-year octave of HadCRUT3 (as opposed to obtaining it by filtering as in Figure 9 of my AGU poster), I’ve become very interested in looking at that octave from various perspectives.

The land-vs-sea dichotomy gives one dimension to vary. I looked at this octave of BEST, CRUTEM4GL, and HADSST2GL, respectively land, land, and sea (all global), using WoodForTrees.org.

I extracted this octave using a wavelet constructed with three Isolate’s and three Mean’s, serving as respectively high-pass and low-pass filters bracketing that octave. The latter blocks over 99% of periods shorter than in that octave. The former is more gentle about blocking what’s immediately above but there doesn’t seem to be much in there in the first place, and both SAW and AGW are sufficiently far above that octave to be essentially completely blocked. The three Isolates in conjunction do not attenuate what they pass at all, while the three Means collectively attenuate the middle of the octave to 0.4 whence the Scale factor of 2.5 to correct for that.

Reassuringly BEST and CRUTEM4GL are in excellent agreement in phase. Unsurprisingly (since heat takes time to flow between land and sea) HADSST2GL show some phase variation.

Most of those variations when they occur at all have HADSST2GL leading. This would be consistent with this 20-year oscillation originating in the ocean with the land subsequently tracking it. That in turn tends to undermine my hypothesis that this oscillation comes directly from the Sun (to be precise the polarity of the heliospheric magnetic field): it could instead have a similar origin to whatever is driving the AMO and PDO, which would therefore presumably be phase locked to the Sun only by coincidence (correlation is not causation).

What I find really interesting however (YMMV) is the dip at 1920, which (a) doesn’t seem to be part of the overall cycle and (b) has the sea lagging instead of leading, suggesting a land origin.

That 1920 region is also where BEST and CRUTEM4GL differ most in amplitude (while still agreeing in phase). Elsewhere they’re in good agreement with both.

I have no idea what exceptionally chilling terrestrial event happened around then. There’s no other dip just like that one in all three datasets. It’s not part of the regular cycle, which hits bottom roughly every 20 years. The dip seems to push the regular troughs on either side slightly apart but otherwise the regular troughs maintain a good 20-year schedule.

@cd: Thanks for those. I didn’t think to look at the AMO in connection with that 1920 dip. In retrospect it looks like the 50-year cycle that I was taking to be the third harmonic of my alleged sawtooth (arguably identifiable with the AMO) is nicely sync’d at 1970 with SAW but 180 degrees out of phase 50 years (2.5 cycles of HALE) earlier.

Whereas the upper portion of Figure 9 of my poster around 1920 is clean as a whistle (though the lower 11-year-period portion is clearly weird), the corresponding period with BEST has a big dip there that is completely invisible in Figure 9.

So what’s your take on what if anything happened around 1920? My SAW already accounts for the 75 and 50 year cycles as best it could, and nothing slower than TSI gives any hint of trouble at 1920.

Edim, the AMO should match solar very well. Part of the AMO is drive by southern hemisphere circulations so there would be a 60 plus year delay for that and part of the AMO is surface driven in the northern hemisphere. You end up with a nice confusing seeSAW, which does BTW have a long term positive trend :)

Captain, I kinda agree that the correlation is not perfect, but considering that there must be other factors influencing global climate, it is very remarkable. The cycles are not constant (~60 years for example). Solar cycle has slowed down, we can test it. I predict the 60 year cycle to shorten and AMO to go negative soon.

BBD, “Energy is accumulating in the global ocean and DSW doesn’t appear to be the cause.”

Depends on where you look. The ENSO region of the tropical oceans have tracked solar fairly well with a reasonable lag. The mixing of the deep oceans is complex, so it is difficult to see any correlations on a short time scale. The deep ocean lag which is mainly transport from SH to NH could be on the order of 60 to 250 years for solar with longer pseudo-cycles or damped recurrences of 1500, 4300 and 5000 years.

Since the atmosphere only has about 1/1000th the thermal mass of the oceans, there is no reason to assume that long term surface temperature is related to any combination of short term forcing changes.

Since the equatorial Pacific receives the vast majority of the incident solar irradiation, watching the energy input looks reasonable. Some time down the road, the shift will be felt in the deeper ocean heat content. How long down the road, I don’t know, but for net next decade or three, I doubt there will be a significant increase in OHC. With the deeper oceans lagging the surface, it is not surprising that thermal inertia is causing a decaying rate of total ocean heat content.

With the Pacific ESNO region cooling to neutral, it is about time for an Indian Ocean shift. Some have noticed that the ESNO region has tended to shift to the west. The most recent SSW event originated over India. The energy is imbalanced more to the Indian ocean. Time for a little bit different regime.

CO2 actually provides a useful reference, if you were actually curious, or a useful scapegoat if you are not. Since CO2 should be producing a warming of 0.8 to 1.5 C depending on the surface you use as a reference, the regional variation from that range should be an indication of other influences.

Unfortunately, you have to dig out a variety of “surfaces” to compare since the original “surfaces” are not very good choices. It seems that Planck and Kirchoff had to use approximations of black bodies with infinitely small thickness in order to get the math to work, but an infinitely small thickness “shell” can’t provide the constant energy required to be considered a black body.

That kinda throws a monkey wrench in the 33C GHE range that is the assumption your parsimonious explanation is based on. A no greenhouse gas Earth would have a radiant “surface” of 0 to 4 C depending on the orientation of the land masses. Location, location, location. What the actual sea level surface temperature would be depends on the internal energy transfer, or advection.

Advection, BTW, above and below a poorly selected less than ideal radiant reference “surface” would tend to cause issues. I hope your parsimonious reasoning included that little issue.

I try to avoid arguments about supposed errors in the fundamental physics. The brevity of life, and all that. All I will say is that I’m sceptical of claims that fundamental errors remain undetected or ignored.

Since I have no prior commitment requiring me to reject CO2 as the most probable cause of much modern warming, I’ll stick with parsimonious reasoning.

BBD, I know, like is just to short too review the basics. The fundamental physics though are quite sound, the reference “shell” was just poorly selected which requires too many adjustments to make any accurate predictions.

The average effective radiant energy of the oceans is roughly 334 Wm-2 at 4C with an emissivity of approximately 0.982 making the oceans and near idea black body. Only rub is that the oceans only cover 70% of the planet. That gives the total planet an apparent radiant emissivity of 0.7*334 or ~235 Wm-2. Remember that the true surface receives ~330Wm2 and the atmospheric lens covering the the true surface ~150 for a total solar applied of ~480Wm-2. That energy is applied mainly in the equatorial region and has to advect toward the poles. There is no true 240Wm2 radiant “shell”. Since there is no true radiant “shell” until roughly -90C (67Wm-2), internal heat transfer, “advection” has the same impact as scattering would have in a true radiant model.

The physics is fine, the selection of 255K without considering the less than ideal surface orientation is the problem. Kinda like Trenberth’s missing 20 Wm-2, his physics were fine, it was his addition that was questionable :)

It is not a big deal, the mid-troposphere hot spot will not develop, Solar will have roughly twice its estimated impact, natural variability will have a much larger impact, but there are still plenty of other things to worry about. Just CO2 is not as large a treat as once estimated.

Solar does seems to impact the stratosphere more that CO2 and we all know that solar does fuel the oceans. That would be a plus one for my choice of theory and a minus one for the consensus.

Do you perhaps recall that stratospheric cooling is supposed to be a ‘fingerprint’ of AGW? And what do we have here?

Let’s take another look at the big three: TSI (Svalgaard composite, same as you used); OHC 0 – 700m; surface GAT.

Just as the real scientists have been saying for a long time now, something starts to happen around the mid-1970s. The point at which the CO2 signal begins to emerge from the noise of internal variability.

BBD, your problem is you have no concept of the time constants involved. The lag between solar and stratospheric response is 3 years +/- a year. That is the “rapid” response. Since you are comparing to surface temperature, think about that for a second. 70% of that “surface” temperature is based on ocean mixed layer, ~10 meter to 50 meter temperatures and 30% is based on land “surface” temperature from about 2 meters above some surface above sea level or it would not be land in most cases. Because of the huge difference in the specific heat capacity of those two “surfaces” there is an apples and oranges blend of temperatures that in no way represent the conditions of either surface. You have to use a consistent “surface” or you get nonsense.

If you want to compare solar to actual sea surface temperature, you would need to use 15 plus year smoothing on the solar data with the appropriated lag to tease out the solar impact. Comparing the satellite lower troposphere to solar only requires a few month lag for the “rapid” response, but that is just the upper meter or so of the ocean which is not an indication of the rate of ocean heat uptake. Average Tmin, is a better indication, but for some reason the powers at be prefer to add more noise than is required.

The stratospheric temperatures provide a better natural “averaging” of the impact of changes in heat content. While the stratospheric temperatures will change due to CO2 forcing, they would also change due to natural changes in the heat capacity. Having the rate of stratospheric cooling decrease while CO2 forcing is increasing is not a verification of CO2 forcing in any way shape or form. This would be the reason that Sanders, Solomon, Stephens and unbelievably even Trenberth are reevaluating the situation.

Since you have issues connecting these logical dots, it is a waste of time having a discussion with you.

“However, stratospheric cooling while the surface and troposphere are warming does indicate that CO2 and other “greenhouse” gases are likely influencers.”

Yes it would and the dramatic shift in the rate of stratospheric cooling starting in 1995 along with the “pause” while CO2 increased would indicate what Einstein?

“TSI has been falling for about three decades. OHC *increased* over the same period. Even lags of three years cannot explain this.”

A three year lag would not explain that. How long would it take the oceans to respond to a small but persistent reduction in solar forcing? That would depend on the initial condition of the oceans and the amount of the persistent reduction in forcing. Approximately 0.5 Wm-2 peak TSI reduction in the ocean input (~0.25 Wm-2 average or about 0.8 times 10^22 Joules per year) would take a while to have a measurable impact. Something like 20 to 40 years. Since the oceans have a heat content on the order of 10^26 Joules, they likely have a tad of thermal inertia. The stratosphere doesn’t have that much heat capacity, it would respond more “rapidly” now wouldn’t it?

Who knows, we may be recovering from something that happened centuries ago, perhaps even 1470 years ago?

How long would it take the oceans to respond to a small but persistent reduction in solar forcing? That would depend on the initial condition of the oceans and the amount of the persistent reduction in forcing. Approximately 0.5 Wm-2 peak TSI reduction in the ocean input (~0.25 Wm-2 average or about 0.8 times 10^22 Joules per year) would take a while to have a measurable impact. Something like 20 to 40 years. Since the oceans have a heat content on the order of 10^26 Joules, they likely have a tad of thermal inertia. The stratosphere doesn’t have that much heat capacity, it would respond more “rapidly” now wouldn’t it?

The upper ocean responds directly to DSW. How could the increase in OHC *lag* a direct SW forcing? Any more than OHC could continue to increase over several decades while TSI *fell*. The thermal inertia of the ocean would only determine the lag in decreasing OHC, not cause OHC to rise as the direct SW forcing fell.

BBD, “The upper ocean responds directly to DSW. How could the increase in OHC *lag* a direct SW forcing? Any more than OHC could continue to increase over several decades while TSI *fell*. The thermal inertia of the ocean would only determine the lag in decreasing OHC, not cause OHC to rise as the direct SW forcing fell.”

If you look at the ENSO region which has the greatest solar insolation, the heat capacity of the top 300 meters has shifted to neutral to slight cooling. The rate of heat content increase in the 0-700 meter depth has slowed more than the 0-2000 meter range. SW only penetrates to about 100 meters, so the deeper oceans would be responding to the deep ocean mixing, THC. Can you say 60 to 250 year cycles? I didn’t think so. Since thermo is obviously not your strong suit, perhaps you should joint Willard in solving all the other problems in the world.

There deep ocean changes in OHC can be reponses to both wind forcing (increased mixing in the subsurface) due to ozone forcing (unep chapter 10) or a millennial scales by the strong salinity forcing and th strong braking effects eg Ghil 2012

The problematic problem is that there is also uncertainty in the Southern Ocean (which captures around 50% of the AGW heat) due to natural variation greater then the recent trends of high latitude wind forcing in the SO eg Rodgers 2011.http://www.clim-past.net/7/1123/2011/cp-7-1123-2011.html

The foremost problem with TSI and the recent deep minima that is used by Kaufmann and Hansen to explain part of the pause in recent warming is that the observations are at odds with the solar physics As this is more a case of instrumental bias due to degradatrion,the minima of the previous cycle may be underestimated by 0.2wm^2 and hence it is not a get out of jail card,and the amplitude is also now greater eg Krivova 2011

You aren’t addressing the problem: OHC cannot lag a direct SW forcing. OHC has risen in all major basins since the 1970s. TSI has fallen over the same period. You only have to look at the graph to see the divergence. Something else is happening, and the vast majority of real scientists accept that the most likely (parsimonious) explanation is CO2.

I have no prior commitment obliging me to reject the most likely explanation, so remain persuaded that it is probably correct.

That’s why I don’t have to go into convolutions and contortions every time this comes up. I can say: increasing OHC and decreasing TSI over >30 years is clear divergence.

BBD, “You aren’t addressing the problem: OHC cannot lag a direct SW forcing. OHC has risen in all major basins since the 1970s. TSI has fallen over the same period. You only have to look at the graph to see the divergence. Something else is happening, and the vast majority of real scientists accept that the most likely (parsimonious) explanation is CO2.”

Oh really? If you unplug your freezer everything thaws within seconds? Now about this head up your arse, “the vast majority of real scientists accept that the most likely (parsimonious) explanation is CO2.” Yes, CO2 is happening and CO2 would have an impact. About the only freaking thing we really know is that CO2 will have an impact. You can use CO2 to help determine the magnitude of the other impacts dipstick. You really do have some learning disability don’t you? The question is the magnitude and the lag or timing of the various influences. You have to figure out the other factors and lags before you can determine the magnitude of AGW. CO2 is a good tracer for all of man’s activities. Pratt extended CO2 back to the 1800s for a reason. You can look at the EPICA CO2 data, it turns up while temperature turned down starting over 5000 years ago.

BBD, Since the concept of lags is so elusive to you. Perhaps you should consider a practical example. Like learning how to not cook a dry turkey.

If you stick a half full Foster’s beer up its arse, (I don’t care for Foster’s but the can fits) and cook it at 350F for about 20 minutes per pound until the internal temperatures reaches 155-160 F, then remove from the oven or grill and cover with foil. Let it rest for about 20 minutes, the internal temperature will rise by about 5 to 7 degrees F. Now how much it warms and how long it keeps warming internally depends on what? If the bird is 7 pounds will it warm as much as a 30 pound bird? If I use the broiler, top down heat, instead of bottom up heat, will that make a difference? What about if I have one of those convection ovens, would the time per pound change? If I wrap it up in foil, would it ever cook?

See there are all sorts of examples of how you can change the rate of heat transfer. Now just pretend that the oceans are a big turkey.

capt. d., where is the analog of your heated beer can in the ocean? Surely the OHC is all the heat. There is nowhere (like a beer can) for it to hide. Or maybe you are saying because the land will warm so much more, it seeps into the ocean?

JimD, “capt. d., where is the analog of your heated beer can in the ocean? Surely the OHC is all the heat. There is nowhere (like a beer can) for it to hide. Or maybe you are saying because the land will warm so much more, it seeps into the ocean?”

That is Iceland Tmin in absolute temperature. Notice how it stayed near 0 C from 1930 to 1985? From about 1985 or so until 1995ish, the magnitude of northern hemisphere sudden stratospheric warming events declined. There is ideally, about 120Wm-2 transferred from the tropics to each pole. Because of the Gulf Stream and THC, there is more energy transferred north. Now there are more NH SSW events and the larger events release on the order 10^22 Joules in about a month or two. That is like turning the oven to convection.

The heat content of the World Ocean for the 0–2000 m
layer increased by 24.0 1.9 1022 J (2S.E.) corresponding to a rate of 0.39Wm2 (per unit area of the WorldOcean) and a volume mean warming of 0.09C. This warming corresponds to a rate of 0.27 W m2 per unit area of earth’s surface. The heat content of the World Ocean for the 0–700 m layer increased by 16.7 1.6 1022 J corresponding to a rate of
0.27 W m2 (per unit area of the World Ocean) and a volume mean warming of 0.18C. The World Ocean accounts for approximately 93% of the warming of the earth system that has occurred since 1955. The 700–2000 m ocean layer accounted for approximately one-third of the warming of the 0–2000 m layer of the World Ocean.

There is no possible way ‘the Gulf Stream’ could be responsible for this.

Now, please explain, without further reference to turkeys and beer cans, how energy can continue to accumulate in the global ocean over three decades when TSI fell.

Not to get into your exchange with the Cap’n, except to note that the Levitus 2012 data you cite covers the period since 1955.

Up until 2003 it is reasonable to assume that the ocean warmed, but there are no conclusive quantitative data. So the 0.18C warming (0.036C per decade) over the 0-700m upper ocean is a highly questionable “guess-timate”.

After 2003, the corrected ARGO data show much slower warming of the top 700m of 7.68×10^21 Joules per decade = 0.008C per decade (~one fourth the rate from the dicey pre-ARGO data cited by Levitus).

BBD, If you had experimented with the turkey you might be in a better mood.

“Explain why OHC increased while TSI fell.” Fell to what? If I am charging a battery or a turkey to some energy level, the energy in the battery doesn’t reduce until either the charger voltage falls below the battery voltage or I increase the load on the battery.

If you are charging a 12V battery with a cheap charger that provides 12.5V RMS with 11.7V min and 13V max, the battery will charge, but the rate of charge decreases as the battery reaches 12V. Since the “average” temperatures are in anomaly instead of absolute values, you don’t know if the oceans should be charging or not. That means BBD, you are making a false assumption which tends to cancel your Parsimonious reasoning.

The ~5 year delay in surface temperatures versus solar is just an indication of the RMS value of the charger and the charge capacity of the battery. Batteries don’t care about “average”.

So if you want to charge a planet, consider the RMS value of the charger, the charge of the planet and the resistance of the load. Sticking to all Joules is a better way if you can’t separate the physics.

The RMS value of the Charger BTW is ~500/1.414=354 The charge of the oceans is ~334 and there is about 20 drain. You can pick your units.

You can’t give me a physical mechanism by which, over several decades, OHC increases while TSI falls. The reason you can’t is because none exists and once again, you have revealed yourself to be dealing in fairy dust.

I am reminded of previous mind-boggling errors. I’m struggling to be charitable here, but it gets harder and harder every time you do something like this.

Now, let’s return to reality. DSW increases OHC without significant lag. So we can say with reasonable certainty that your original contention is wrong. Recent warming is not the sun. OHC *cannot* continue to increase over several decades when direct SW forcing is decreasing.

Up until 2003 it is reasonable to assume that the ocean warmed, but there are no conclusive quantitative data. So the 0.18C warming (0.036C per decade) over the 0-700m upper ocean is a highly questionable “guess-timate”.

You are a serial misrepresenter and denier of evidence. Nothing anyone here can say seems to have any effect (witness Pekka’s exasperation on this topic in the past).

So when you do this, I am going to point out what you are doing – we can call it rejectionism or denial – I don’t care and it makes no difference.

BBD, I already gave that to you. You just need to let it sink in, literally.

Vaughan found a SAW tooth in his model and is looking for a cause.

Since the Earth is a fairly complex battery, you have to considered the charger and loads separately. See, the cells are different sizes and there is always one that is slower to charge.

If the tropics are the charger, the mid-latitudes would charge fairly quickly and fairly uniformly. The lower part of the high latitudes would take longer and may not charge at the same rate. By subtracting the loads from the tropical charger you can see Vaughan’s SAWtooth. The AMO is fine for weather, but climate would require a more “global” index, like the hemispheric imbalance. All that internal “Wall” energy transfer that someone forgot to take seriously.

I am not going to mention any names, but the Faint Young Sun paradox is horse hockey. You need to consider the Peak and RMS values when charging a battery, but then most dope smoking telescope jockeys have never actually worked in the real world. Not that I have anything against dope smoking or telescopes, but there is a time and place for everything.

Oh, that upturn in the blue curve is your CO2 amplified by land use, snow field reduction and land mass. You can estimate CO2 only by subtracting the Blue from the Orange, or you can go to the actual “average” charge on the battery, 334 Wm-2 that just happens to be limited by the 334 Joules per gram latent heat of fusion of the electrolyte.

Until you come to grips with the reality that there is not A climate sensitivity, stick to wine and whining.

More windy nonsense. All I want from you is a coherent explanation of why OHC has increased over three decades when TSI fell, if, as you argue, TSI is the principal driver of modern warming.

You aren’t going to manage that because there is no mechanism. The thin amusement to be gleaned from watching your evasive gyrations has dissipated, although the turkey and can comment deserves wider recognition. Perhaps it will come back to haunt you one day.

I know you know you are talking rubbish again because you have been really quite abusive throughout the exchange. A sure sign that you are under pressure.

Anyway, enough time wasted on this. As always, believe what you like; the rest of us will carry on with parsimonious reasoning and CO2.

BBD, “I know you know you are talking rubbish again because you have been really quite abusive throughout the exchange. A sure sign that you are under pressure.”

Really? I thought it was more a sign of boredom. Since long term, millennial scale recurrent fluctuations are beyond your ability to grasp along with the concept of a bistable system, there would be no satisfying your request any more than convincing Jim Cripwell. The deep oceans though tend to lag surface temperature by 1700 years +/- a half millennium. That lag is due to complex mixing of an ocean that has a variety of stratification conditions that are possible. More turbulent mixing reduces the lag, less turbulent extends the lag.

There is the pump that regulates the deep ocean mixing by pumping cold, dense water deep in the oceans. If the rate is slow, the sinking flow is more laminar. If it is fast, not as laminar. It is like pouring a B-52 Cocktail, a steady hand is a plus. Since the Antarctic Convergence zone has several layers above and below the 4 C, Which layer dominates would control the rate of heat uptake. Under some conditions, the SST can be much warmer with less total OHC, others the surface colder with a higher OHC. All depends on the mixing efficiency, which is way over your head.

Toggweiler et al. have a pretty good grasp on the mixing ratio influences. I assume they have less hours behind telescopes.

Oddly since sea ice extent and the sea ice balance between hemispheres is a major factor in the long term mixing, things that might not just jump out at you have an impact on climate. Like lunar and solar orbital influence on tides that tend to fix or free sea ice. Atmospheric circulation patterns that change the shape of the sea ice growth in the SH and the distribution of the THC flows. Farmer spreading peat over winter wheat fields to speed the snow melt. Ash and black carbon speeding ice melt. All that ice and snow is energy.

It is a marvelously complex puzzle and it is fun to watch all the geniuses explain how it works :)

You never did understand that the increase in OHC all the way down to 2000m and beyond means the meme that ancient OHC is responsible for modern upper ocean warming is nonsense. It’s another example of parsimonious reasoning. Deep ocean OHC would have to fall. It’s rising. QED.

You just don’t know when to give up, Cap’n. Possibly because you don’t have quite the iron grip on the topic that you imagine.

They have better pictures in this one of the Antarctic convergence zone. WOW! From 10 C to 0 C in just a couple of degrees of latitude!

Now if you can’t afford Bailey’s and Kahlua, try a dark stout and a pale ale and see if you can make a proper Half and Half. Can you change the level of the level between the dark and light? If the dark represents 12% alcohol and the light represents 6% C, can you change the alcohol content?

You really should stop waving around papers you haven’t understood in context and invest in several paleoclimate textbooks.

Birds roasted without beer cans inside them cool from the moment they are removed from the oven just like anything else does once it is removed from the oven. Nothing ‘just gets hotter’ once the energetic forcing ceases. What happens then is that things cool. It’s the truly bizarre stuff like this that makes me wonder about you Cap.

I am sorry, I didn’t realize that the sun got turned off! I thought it just got turned down. Silly me. The way I look at it, the charger stitched from fast charge to trickle charge.

Now with the turkey, the sun got turned off but the insulation increased. The “average” temperature of the bird increased without adding more energy. Of course that “average” would depend on what we are measuring and where. In both cases, “something is still getting hotter once the energetic forcing ceases.” Since you obviously can’t cook or maintain your own stuff, you might have problems grasping the ramifications.

So let’s take that nonsense that Toggwielder wrote about. If the rate of mixing is slow, the difference in temperature between stratification layers can increase. The surface temperature can be much warmer while the total OHC lower. If the rate of mixing increases, the total energy can be higher and the surface temperature lower. There can be a large variation in “surface” temperature and heat capacity with no change in applied energy. I assume you think they are whack jobs since they disagree with your favorite paleo gang. You might want to reconsider that.

Now thinking I am a whack job, buffoon, fraud, duffous or whatever is fine, since I share a similar opinion of you :). As I have said, I am just hanging out waiting to see all of the face plants as the “range of comfort” has to be left behind. It could get embarrassing for a few.

I may dig out a few more paleo recons for you, since I know how much you like my pretty pictures :.

I am sorry, I didn’t realize that the sun got turned off! I thought it just got turned down. Silly me. The way I look at it, the charger stitched from fast charge to trickle charge.

Now with the turkey, the sun got turned off but the insulation increased. The “average” temperature of the bird increased without adding more energy.

Stop playing the fake-sceptic nit-pick game. I’ve asked you repeatedly to explain to me how OHC can increase over three decades of decreasing TSI. You haven’t because you can’t.

Now more nonsense about turkeys. If we remove the bird from the oven and cover it with foil, it will not continue to heat up; it will cool at a reduced rated compared to a bird of the same weight and temperature left uncovered.

Cap’n, you are arguing for the creation of energy. And to compound the crazy, you can’t even seem to see that you are doing it.

Stop conflating *surface temperature* with OHC down to 2000m and beyond. That is definitely a problem here. Start reading what I write instead of an imaginary version of it in your head and we might actually get somewhere. And please, stop banging on about Toggweiler (NOTE SPELLING) – it’s irrelevant in this context and I’m sick of hearing about it.

And btw, I’m a good cook. Sod false modesty. Some other time, we can discuss la technique if you wish.

Wondered what was happening in the backblocks of CE. Blah blah being his usual luvable self it seems.

The role of clouds in modualting climate seems one of those space cadet blind spots.

‘The overall slight rise (relative heating) of global total net flux at TOA between the 1980’s and 1990’s is confirmed in the tropics by the ERBS measurements and exceeds the estimated climate forcing changes (greenhouse gases and aerosols) for this period. The most obvious explanation is the associated changes in cloudiness during this period.’

We all live and die by Newton’s 4th rule of natrual philosophy of course.

In experimental philosophy, propositions gathered from phenomena by induction should be considered either exactly or very nearly true notwithstanding any contrary hypotheses, until yet other phenomena make such propositions either more exact or liable to exceptions. This rule should be followed so that arguments based on induction be not be nullified by hypotheses.’

We have ERBS and ISCCP-FD, ICOADS cloud observations in the central and northern Pacific at the important ‘Great Pacific Climate Shift’ of 1976/1977 and the 1998/2001 climate shift, we have Project Earthshine at the 1998/2001 shift and CERES since 2000. All show secular change in cloud and – importantly – the earlier observations show abrupt change at times of climate shifts. These don’t seem to be able to process the language even let alone the data. It is a little odd don’t you think.

BBD, “Cap’n, you are arguing for the creation of energy. And to compound the crazy, you can’t even seem to see that you are doing it.”

Nope, energy is conserved. What I am arguing is that you have be careful how you average. The larger the thermal mass, the longer it takes for energy to accumulate. So you have to consider what is and isn’t penetrating that skin layer and where. The oceans absorb more energy than land and transfer energy to land to give you the “global” average. Each layer of the oceans would have a decay rate that depends on its heat capacity and the rate of heat loss to bounding layers. It is like an onion, lots of layers.

The crazy part is using “global” average temperature and “global” average forcing.

BBD, Since this is the Berkeley Earth update thread. They estimate the “global” land surface Tmin to be ~4.2 C, the Southern Hemisphere they estimate is about 2C cooler than the Northern Hemisphere because of the Antarctic. The estimated “average” ocean temperature is about 4C. Since 1985, Tmin has been increasing less than Tmax, that diurnal temperature range shift issue.

There you go again, Cap’n. Ignoring the really important point. Something you have been doing ever since this sub-thread got going. All the while claiming to ‘just follow the energy’. Try looking at OHC 0 – 2000m since the mid-1970s and TSI over the same period. The contortions you will go through to deny the effects of GHG forcing are dizzying to watch.

Re Toggweiler & Bjornsson (2000). Let’s sort this mess out once and for all. You say:

So let’s take that nonsense that Toggwielder [SIC] wrote about. If the rate of mixing is slow, the difference in temperature between stratification layers can increase. The surface temperature can be much warmer while the total OHC lower. If the rate of mixing increases, the total energy can be higher and the surface temperature lower. There can be a large variation in “surface” temperature and heat capacity with no change in applied energy. I assume you think they are whack jobs since they disagree with your favorite paleo gang. You might want to reconsider that.

What did T&B really say? They said this:

The effect of Drake Passage on the Earth’s climate is examined using an idealised[*] coupled model. It is found that the opening of Drake Passage cools the high latitudes of the southern hemisphere by about 3°C and warms the high latitudes of the northern hemisphere by nearly the same amount.

Did you miss that? T&S says: no net global change. But that’s not what I hear coming from you. You indulge in exactly the same class of misrepresentation that Ellison uses to claim that the YD was massive, global ~10C climate shift. It is the same as his (and others’) incorrect assertion that D-O events were likewise significant, global climate fluctuations.

These hemispherical redistributions of energy are quantitatively and qualitatively different from the increase in global ocean heat content down to 2000m and beyond over the last three decades.

This lack of *net global change* also makes hay of your previous argument that T&B somehow invalidates the now widely accepted view that reduced CO2 levels were the main driver of the ~50Ma general cooling trend since the Eocene Optimum. Going back to T&B again leaves me convinced that you haven’t understood it at all.

One more thing. What you derisively term ‘my favourite paleo gang’ is actually the emerging majority view, best supported by the evidence and employing the most coherent arguments.

Once again, I suggest that you take a break from rebroadcasting your muddle here and spend a few weeks working through some paleoclimate textbooks. Perhaps then we might actually get somewhere.

* ‘Idealised’ model is putting it mildly:

The model used in this paper describes a water-covered earth in which land is limited to two polar islands and a thin barrier that extends from one polar island to the other.

That’s… idealised. And look what else is missing:

The model does not have interactive winds or an interactive hydrological cycle. Instead, a latitudinally varying wind stress field is imposed on the ocean along with a latitudinally varying salt flux. No attempt is made to include an ice-albedo feedback due to snow cover or ice on the polar islands. There is no sea ice on the ocean.

(ERBS) Nonscanner Wide Field of View (WFOV) instrument Edition3 dataset. The effects of the altitude correction are to modify the original reported decadal changes in tropical mean (20°N to 20°S) longwave
(LW), shortwave (SW), and net radiation between the 1980s and the 1990s from 3.1, -2.4, and -0.7 to 1.6, -3.0, and 1.4 W m2, respectively. In addition, a small SW instrument drift over the 15-yr period was
discovered during the validation of the WFOV Edition3 dataset. A correction was developed and applied to the Edition3 dataset at the data user level to produce the WFOV Edition3_Rev1 dataset. With this final correction, the ERBS Nonscanner-observed decadal changes in tropical mean LW, SW, and net radiation between the 1980s and the 1990s now stand at 0.7, -2.1, and 1.4 W m2, respectively, which are similar to the observed decadal changes in the High-Resolution Infrared Radiometer Sounder (HIRS) Pathfinder OLR and the International Satellite Cloud Climatology Project (ISCCP) version FD record but disagree with the Advanced Very High Resolution Radiometer (AVHRR) Pathfinder ERB record. Furthermore, the observed interannual variability of near-global ERBS WFOV Edition3_Rev1 net radiation is found to be remarkably consistent with the latest ocean heat storage record for the overlapping time period of 1993 to 1999. Both datasets show variations of roughly 1.5Wm2 in planetary net heat balance during the 1990s. Wong et al 2006

People are well aware of adjustments in ERBS, ISCCP and the limitations of Earthshine. But clearly the latest versions represent hard won data documented in peer reviewed science. There are of course ICOADS observations in particular regions of interest and consistency with ocean altimetry where there is data. A consilience of evidence emerges that – remarkably – shows only one thing. Clouds change – duh.

As I say CERES shows large interrannual changes in cloud and the suggestion of a trend in SW that is consistent with ARGO. It seems obvious nonsense to think that clouds do not change with large scale changes in ocean and atmospheric conditions.

BBD, “The effect of Drake Passage on the Earth’s climate is examined using an idealised[*] coupled model. It is found that the opening of Drake Passage cools the high latitudes of the southern hemisphere by about 3°C and warms the high latitudes of the northern hemisphere by nearly the same amount.”

“The model experiments carried out in this paper indicate that the separation of Australia and South America from Antarctica was associated with a ca. 3°C cooling of the air and seas around Antarctica. This level of cooling comes about because of the transequatorial overturning circulation set up by Drake Passage and the westerly winds over the open channel. The upwelling branch of this circulation, forced directly by the westerlies in the south, takes up solar heat that would have been available to warm the polar regions of the Southern Hemisphere. This heat is carried across the equator into the North Atlantic where it is given back to the atmosphere, effectively warming the Northern Hemisphere at the expense of the Southern Hemisphere. The results here suggest that much of the full thermal effect of Drake Passage could have been realised well before the channel was very wide or very deep. This is because the mere presence of an open gap introduces an asymmetry into the system that is amplified by higher salinities in the north and lower salinities in the south. This kind of haline effect, and the possibility of increased Antarctic sea-ice and land-ice, lead us to conclude that the thermal response to the opening of Drake Passage could have been fairly abrupt and quite large, perhaps as large as the 4–5°C cooling seen in palaeoceanographic observations.”

The Drake Passage opening increased the ocean mixing efficiency and the ability of the southern pole to sink heat. The deep oceans do not cool from the tropical surface, they cool at the poles. With lower solar, they just gain less energy. Why you want to avoid the obvious is beyond me.

Now since the Sun warms the oceans in the tropics and mid latitudes which are at an average surface temperature ~20C, dropping that temperature to 19.8C will have what impact on the rate of heat uptake in oceans with an average temperature of 4C degrees?

Additional cloud cover and latent cooling could reduce the surface temperature to 19C, what impact would that have on the rate of heat uptake of oceans with an average temperature of 4 C degrees? Not much right? The poles are the ocean heat sinks and the ACC is a major factor in ocean mixing/temperature control.

Consider that the ACC has a flow of about 100 Sverdrups, 100,000,000,000,000,000 grams per second at roughly 3C average surface temperature. You can figure out the area of the ACC and with the average temperature of the Antarctic SUMMER of -30C about how much energy do you think is being transferred to the Antarctic. With no Drake Passage, that wasn’t there.

Now since you ASSUME that CO2 is cause “MOST” of the warming and that CO2 finally overcame natural variability in the last half of the 20th century, why is this?

BBD, “You are continuing to ignore everything I write in favour of conducting a discussion with yourself, as you have done throughout the thread.”

Pretty much. This thread started with my giving Vaughan Pratt a link to graphs I had made of Iceland Tmin temperature based on the Berkeley Earth data – which happens to be the topic of the post – and the non-detrended Kaplan AMO, which is somewhat interesting "new" stuff. So I have ignored your repetitive nonsense. That really shouldn't surprise you, since you really have nothing to offer on the subject.

The discussion is on "surface" temperatures and the dip in "surface" temperatures pre-1920. Since you confuse 0-700 meters with the upper ocean, and the "surface" temperature data is based on the ~"ocean mixing layer", which is roughly 0-50 meters, you are basically comparing the "skin" to the "body". The turkey analogy is applicable for both your question and you :)

For some reason you keep implying that I am misinterpreting Toggweiler et al. You don't care to discuss the implications of Toggweiler, only your thought that I am misunderstanding the Toggweiler papers. Well, BBD, no one really cares much for your thoughts or mine. It's a tough internet out there. I think, though, that my paraphrase of Toggweiler is pretty accurate. Now, you cutting and pasting a portion of Toggweiler and accusing me of misrepresenting it, when just below is the 4-5°C abrupt impact they think is possible, is typical BBD form. That is laughable IMHO. I think I will nominate you for a Steig.

Still ignoring what I say and still failing to acknowledge everything that you have so far failed to acknowledge on this thread. Quite an impressive act of sustained denial.

The discussion is on "surface" temperatures and the dip in "surface" temperatures pre-1920. Since you confuse 0-700 meters with the upper ocean, and the "surface" temperature data is based on the ~"ocean mixing layer", which is roughly 0-50 meters, you are basically comparing the "skin" to the "body". The turkey analogy is applicable for both your question and you :)

The ‘discussion’ is about anthropogenic global warming. You are doing this. I’m not. That’s the difference. That and the fact that I’m not desperately trying to misrepresent my way out of a tight corner.

For some reason you keep implying that I am misinterpreting Toggweiler et al.

You keep claiming that this paper demonstrates that *global cooling* resulted from the thermal isolation of the Antarctic, when even the bloody abstract states clearly that it didn't. Either you haven't even read the abstract, or you don't understand this paper, or you do understand it and are deliberately misrepresenting it.

Here it is again:

The effect of Drake Passage on the Earth’s climate is examined using an idealised coupled model. It is found that the opening of Drake Passage cools the high latitudes of the southern hemisphere by about 3°C and warms the high latitudes of the northern hemisphere by nearly the same amount.

See it now? No net global cooling. The ‘4-5C cooling’ you are fond of quoting refers to hypothesised Antarctic cooling rather than modelled results:

The results here suggest that much of the full thermal effect of Drake Passage could have been realised well before the channel was very wide or very deep. This is because the mere presence of an open gap introduces an asymmetry into the system that is amplified by higher salinities in the north and lower salinities in the south. This kind of haline effect, and the possibility of increased Antarctic sea-ice and land-ice, lead us to conclude that the thermal response to the opening of Drake Passage could have been fairly abrupt and quite large, perhaps as large as the 4–5°C cooling seen in palaeoceanographic observations.

How do we know? Because we read this earlier:

The bold solid curve in Fig. 11 shows the departure of zonally averaged sea-surface temperatures (SSTs) in simulation 8 from the mean SST calculated by averaging the northern and southern hemispheres at each latitude. As with the air temperature differences in Fig. 10, SSTs poleward of 50° latitude are about 3°C warmer in the north and 3°C colder in the south than the interhemispheric mean.

Once again, you have made a right old mess. As I keep suggesting, you need to stop muddling around and read some paleoclimate textbooks. Perhaps you should take that suggestion on board.

BBD, you are flailing around, sinking in your own ignorance. The Earth is heated in the equatorial region and energy is transferred north and south. The Drake Passage opening reduced the southern polar temperature by up to 4 to 5 degrees. That increased the temperature differential between the equator and the southern pole by, wait for it, 4 to 5 degrees.

Now would more energy flow south after that change?

The Earth is 70% covered by oceans. The southern hemisphere has 206 million square kilometers of ocean versus the northern hemisphere's 154 million square kilometers. That is 81% ocean in the SH and about 61% ocean in the NH.
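
A trivial check of those fractions, assuming each hemisphere is half of Earth's ~510 million km² surface:

    # Quick sanity check of the quoted ocean fractions.
    hemisphere = 510.0 / 2.0                 # million km^2, half of Earth's surface
    sh_ocean, nh_ocean = 206.0, 154.0        # million km^2, from the comment above
    print(f"SH ocean fraction: {sh_ocean / hemisphere:.0%}")   # ~81%
    print(f"NH ocean fraction: {nh_ocean / hemisphere:.0%}")   # ~60%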

Of the land mass in the SH, 14% is covered with ice kilometers deep, and it is relatively stable because of the Drake Passage opening and because it is "fixed" to a land mass. That changes the internal energy transfer from the heat source, the equator, to the heat sink, the Antarctic.

Since the energy advection to the SH differs considerably from the advection to the NH, "average" global temperature is misleading. Regional internal heat transfer has to be considered, whether you use a radiant or a more applicable moist-air thermodynamic model. You can't "average" "surface" temperature. You can average Joules, but not "surface" temperature.

Since you can average the energy in Joules, which can be reasonably approximated by the "total" ocean heat content, that is the more stable thermodynamic frame of reference. The "average" heat content is asymmetrically distributed between hemispheres with quite different heat sinks, and the distribution of that energy impacts the rate of energy transfer internally and to space.

Now, since the real discussion was the internal variability, or the SAW, in Vaughan Pratt's model – not some tired nonsense you think is or should be discussed – determining the impact of that internal variability will allow a better estimation of "ALL" the anthropogenic influences on climate.

Why don’t you roast a turkey, drink some wine and draw a model of the Earth with your crayons, pointing out the directions of isotropic heat flux. Once you figure out where the Thermal Equator is and how its movement impacts climate, you may actually enjoy the conversation.

BTW, the “Steig” award is for the most out-of-context argument. Lucia has a humorous post on the subject.

I had mentioned to you before that there is a three-to-four-year lag in the impact from solar TSI, which could appear to be a 15-year lag depending on which cycle you started with. This appears most clearly in the lower stratosphere and is likely contributing to most of the current stooping and fetching in the climate science community. Sudden Stratospheric Warming events tend to be impacted by solar fluctuations, and the reduced overall NH SSW intensity correlates rather nicely with the 1985-to-2000 temperature rise, the diurnal temperature trend shift, and fluctuations in stratospheric water vapor and ozone – also, whether it is involved or not, with Cosmic Ray variation. :)
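
The lag ambiguity is easy to demonstrate on synthetic data: against a quasi-periodic forcing of period P, a lag of L and a lag of L + P are nearly indistinguishable. A sketch (the 11-year toy cycle and 3.5-year lag below are assumptions for illustration, not fitted values):

    # Synthetic demonstration that lags differing by one full cycle period
    # correlate almost identically with the response series.
    import numpy as np

    p = 11.0                                 # toy solar cycle period, years
    t = np.arange(0.0, 150.0, 1.0 / 12.0)    # monthly steps over 150 years
    true_lag = 3.5                           # assumed "real" lag, years
    response = np.sin(2.0 * np.pi * (t - true_lag) / p)

    for lag in (true_lag, true_lag + p):     # 3.5 yr vs 14.5 yr
        candidate = np.sin(2.0 * np.pi * (t - lag) / p)
        r = np.corrcoef(candidate, response)[0, 1]
        print(f"lag {lag:4.1f} yr -> correlation {r:+.3f}")

Both lags correlate at +1.000 with the response, so periodic forcing alone cannot tell a 3.5-year lag from a 14.5-year one; you need extra information, such as the shape of non-periodic excursions.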

I’m not going to be reading anything addressed to Blah Blah Duh. That is a calculated insult. Obviously I cannot respond in kind or I will be returned to moderation, so you are being gutless in the extreme.

Hi Cap’n. The 3 solar cycles covered by the satellite data are 21 through 23. I don’t see why they can’t be considered almost synchronous, 1 year delay at most.

I have no guesses as to why 23 doesn’t give as strong a response as 21 and 22.

Meanwhile I’ve been reanalyzing my spreadsheet and have revised my estimate of the Hansen delay downwards to 11 years (with ClimSens reduced accordingly to 2.66 C — type ^D in cells Q26 and V26 of the revised spreadsheet), based on the same least-squares-fit approach but done more carefully this time. That’s 1 year more than the (current) 10-year solar cycle, so conceivably the satellite peaks could be those for cycles 20-22 instead of 21-23, delayed by 11 years. But I think it’s unlikely, since I would expect the Hansen delay to have a massive amount of damping that would iron out such oscillations. I think it’s more likely that they’re synchronized to TSI to within a year. But nonetheless it’s worth looking into. Thanks! I owe you at least n beers. ;)
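
For readers without the spreadsheet, here is the generic shape of such a least-squares delay scan on purely synthetic data – the toy forcing, noise level, and planted 11-year delay below are all illustrative assumptions, not Vaughan’s actual analysis: shift the forcing by each candidate delay, fit a line, and keep the delay that minimizes the residual sum of squares.

    # Sketch of a least-squares delay scan on synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(140.0)                     # years
    forcing = (t / 140.0) ** 2               # toy accelerating forcing

    true_delay, scale = 11, 2.66             # planted values, illustrative only
    delayed = np.empty_like(forcing)
    delayed[:true_delay] = forcing[0]        # pad the start before the delay
    delayed[true_delay:] = forcing[:-true_delay]
    response = scale * delayed + 0.02 * rng.standard_normal(t.size)

    best = None
    for d in range(31):                      # candidate delays, years
        x = forcing[: t.size - d]            # forcing shifted by d years
        y = response[d:]
        slope, intercept = np.polyfit(x, y, 1)
        rss = float(np.sum((y - (slope * x + intercept)) ** 2))
        if best is None or rss < best[0]:
            best = (rss, d, slope)

    _, d, slope = best
    print(f"best-fit delay ~ {d} years, fitted scale ~ {slope:.2f}")

On this toy data the scan recovers the planted 11-year delay and the planted scale; on real series the minimum is broader, which is why small reanalyses can move the estimate by a year or two.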

Too bad that both RSS and Argo are much too recent to be useful for the ~21-year Hale cycle.

Vaughan, I think the three year lag is closer to the norm. There is a change in synchronization following the 1998 El Nino that seems to produce the weak response, but there is still a small peak with the longer lag.

The first two peaks were likely amplified by volcanic aerosols, but there is a shift as I see it.

Congratulations on the publication of the BEST land temperature paper. I am sure there are bean counters going over this with a fine-tooth comb, which is how post-mortem science is supposed to work in the internet epoch. The paper provides red meat for everybody. The catty climate establishment says “meh, we told you so.” The conspiracy-loving skydragoons are now even more suspicious and having phantasmagorical hallucinations while speaking in tongues. No snakes required.

Is there going to be a press blitz, as there was for the unveiling of the preliminary work product?

Is this post in lieu of press release?

And if it’s an update on the status of the papers, shouldn’t it have mentioned the rejection by JGR?

By the way, I think it is probably a fine dataset as far as estimations of land temps go, + or – 1C. But to condition participation here on the basis that one must discuss the merits of the paper is inappropriate. The subject matter of the paper has been discussed extensively since its much-ballyhooed premature revelation.

I happened to notice that the paper was published on December 7, according to the online version (via the link in Steven’s post). Look at the fine print. I wonder what took them so long to update us. Shhhhh!

Sorry. We send out the location to the researchers who have signed up for our news list. If we were reluctant to mention it I wouldn’t blog about it.
What you don’t get is that the folks who want to cite the data can now cite it. If you want news, sign up for the list. Pretty simple.

Don, the paper was accepted on Dec 7th. Then the proofs come to us. You have the holidays; we had a couple of changes to the proofs, one being the order of the authors’ names. Then we had several other things we wanted to bundle into the website update: the memos, the new data, beta testing the site... did the astronauts really land on the moon?

“Don, the paper was accepted on Dec 7th. Then proofs come to us. You have the holidays, we had a couple changes to the proofs blah…blah…blah…” and you asked me if we landed on the moon. Yes, we have landed on the moon.

Now maybe you will explain why you picked that particular inconsequential issue to address while you duck and dodge the more interesting questions. I will help you, Steven: because it would be very embarrassing for you to provide responsive, truthful answers to the other questions. You should read the Crutape Letters. It’s about people who are snitty and like to hide things, all the while pretending to be transparent. People like that are transparent, all righty.

BEST has limited its study to the “globally and annually averaged land surface temperature anomaly”, ignoring 70% of the globe’s surface (the oceans) and any part of the troposphere, which is above the surface.

This is where we humans live – so the day-to-day temperature there is the temperature we are concerned with and measure.

The “globally and annually averaged land and sea surface temperature anomaly” is a construct, which has no real meaning to anyone, but it is used to measure longer-term changes.

If we accept the information posted by Steven Mosher, the BEST information is a more accurate (or less inaccurate?) indicator of land-only temperatures than other records, such as Hadley or GISS, etc.

So far, so good.

Where the BEST team strayed from factually reporting an improved and extended (if geographically more restricted) data set was in the Rohde, Muller et al. paper on attribution, which was published in G&G.

IMO this was unfortunate, because it detracted from the value of the BEST team’s contribution in improving and extending the land-based record itself by adding some conjectural statistical sleight of hand, which added nothing to our knowledge.


The BEST data is unfortunately tackled the wrong way round. The important information is what is happening to SSTs.

See:
quote
A 2008 study – “Oceanic Influences on Recent Continental Warming” by Compo, G.P., and P.D. Sardeshmukh (Climate Diagnostics Center, Cooperative Institute for Research in Environmental Sciences, University of Colorado, and Physical Sciences Division, Earth System Research Laboratory, National Oceanic and Atmospheric Administration), Climate Dynamics, 2008
[http://www.cdc.noaa.gov/people/gilbert.p.compo/CompoSardeshmukh2007a.pdf] – states: “Evidence is presented that the recent worldwide land warming has occurred largely in response to a worldwide warming of the oceans rather than as a direct response to increasing greenhouse gases (GHGs) over land. Atmospheric model simulations of the last half-century with prescribed observed ocean temperature changes, but without prescribed GHG changes, account for most of the land warming. … Several recent studies suggest that the observed SST variability may be misrepresented in the coupled models used in preparing the IPCC’s Fourth Assessment Report, with substantial errors on interannual and decadal scales. There is a hint of an underestimation of simulated decadal SST variability even in the published IPCC Report.”
unquote

I like that ‘even’!

Then, when you have a good idea of what is happening to SSTs, you could explain:

Well, I’m glad it has been established that the European snow is from stratospheric warming, i.e. exactly the opposite of what we expect of carbon dioxide. Now if the boffins at the Met Office like Peter Stott can just get their heads out of their arses and into looking at actual data, then they might start to claw back some of the lost respect.

3. Given that aerosols cause warming.
4. Given aerosols cause cooling
5. Given clouds cause warming.
6. Given clouds cause cooling.
7. Given land use changes cause warming.
8. Given land use changes cause cooling*.
9. Given that the sun modulates cosmic rays.
10. Given that ocean pollution changes albedo and cloud cover.
11. Given that black carbon modifies ice albedo.
12. Given that cloud feedbacks are unquantified.
13. Given that biological feedbacks are unquantified.

what would you like the answer to be?

quote
Of course it doesn’t rule out a sneaky deity that really controls everything, but explanatory parsimony suggests that adding entities that are not necessary, is well, not necessary.
If you want to object then deny #1 and be a skydragon
unquote

You don’t need a sneaky deity; you just (just!) need to quantify 3 to 13. Deny that and be a believer – be among those who see a new forcing appear and incorporate it into a theory which is so capacious, so baggy, that anything, any data at all, can be squeezed in and will fit. See black carbon. See the decline. See lots of bad science.

BTW, there’s a 14.

14. Something is going on which has not yet been considered.

JF
*I made that one up, but considering what hoops some papers have jumped through to make data fit theory, I wouldn’t be surprised if it has been invoked somewhere.

Which brings us back to the “argument from ignorance” (“we can only explain X if we assume Y…”).

Until we can say, without batting an eye, that we understand everything there is to understand about how and why our climate behaves the way it does, we cannot use that argument.

It is the argumentation used by the IPCC:

1. Our models cannot explain the early 20th century warming.
2. We know that the statistically indistinguishable late 20th century warming was caused by human GHGs.
3. How do we know this?
4. Because our models cannot explain it any other way.

I read Zeke’s defense of NCDC, taking exception to Anthony Watts’ characterization of the process of adjusting raw data in the surface temperatures. I am delighted that you have taken an interest in this issue of adjusting raw data and the utility of such data in further research.

I would be interested in whether you consider Zeke’s defense worthy. I don’t mean to imply a dichotomy, i.e., that either Zeke is right or Anthony is right. Rather: does Zeke’s paraphrasing of Watts’ statements as pejorative do justice to what Watts asks in the last 4 queries at the end of Lucia’s piece?

I did read one thing A. Watts wrote with some amusement. He said (about Zeke):

“He’s mad, and people don’t often think clearly when they are mad.”

Hmmm, seems I remember a certain one-and-the-same A. Watts canceling a family vacation and working feverishly all weekend to put out a “groundbreaking” paper in an attempt to upstage Richard Muller, who he was mad at over the BEST release, as Muller was about to put out an editorial in the NY Times about the conversion of a climate change skeptic. Muller’s editorial was widely read; Watts’ paper (still a work in progress?) was not so groundbreaking.

I can’t believe something I read in that article. Zeke actually repeated that old canard:

If you disagree with someone’s approach and methods, the proper way to respond is to create your own approach and demonstrate that it is superior.

It’s the barely veiled reversal of the burden of proof in the form of, “If you can’t prove me wrong, I’m right.” The idea that disagreeing with “someone’s approach and methods” requires me to do a better job is ridiculous. And it’s ridiculous in two different ways.

First off, it directly implies having any answer, no matter how bad or wrong, is better than having no answer. That’s silly. In reality it is perfectly okay to say, “We don’t have a way to solve this problem.” We don’t have to find a better answer to say one answer is wrong.

Second, it places an incredible burden upon anyone who disagrees. Even if someone knows what a better approach would be, they may not be able to create and implement it. It is absurd to say a single, unfunded individual must do as much, if not more, work than teams of individuals that get paid for what they do.

If Zeke wants to criticize Anthony’s standards, he shouldn’t promote standards that are as absurd, if not more so.

If you make a comment as pointed as Watts’, you have to back it up with at least one reason why you think the adjustments are all wrong. His comment was unfortunately fact-free, and so were his follow-up and his comments at WUWT. Zeke, on the contrary, has provided facts and references by which to judge Watts’ comment.

Mosher, I described two ways in which it was wrong. Why would you respond by saying:

Its actually not a canard. its the way science works.

This is nothing more than, “You’re wrong.” You don’t give any explanation as to how I’m wrong. You don’t attempt to show any flaw with my descriptions or reasoning. In fact, you don’t do anything to address anything I said.

What could you possibly hope to contribute to a conversation with a comment like yours?

Brandon, your response could be applied exactly to Watts’ remarks. Not even a hint of what he thinks they did wrong. Completely valueless, and it just makes him look quite bitter. Zeke asked for an improved method, but he could equally have asked for just one thing they did wrong that would invalidate the method. Nothing forthcoming from Watts on that front either.

Brandon, your response could be applied exactly to Watts’ remarks. Not even a hint of what he thinks they did wrong.

That may be true, but I was linked to Zeke’s post, and I was discussing something Zeke said in that post. I don’t see why I should randomly start discussing something else.

Zeke asked for an improved method, but he could equally have asked for just something that they did wrong which would invalidate the method.

Zeke didn’t “ask” for an improved method. He said “the proper way to respond” was to provide an improved method. The former is fine. The latter is not. The latter says you can’t respond if you don’t provide an improved method.

The standard Zeke promoted is unacceptable. Nothing about Anthony Watts has any bearing on this issue.