Well, it’s fitting – homeland of Arrhenius, Ekholm, Bolin & Erikson, and Rossby (and for a time home to Bjerknes). Sweden is disproportionately well-represented in the development of modern meteorology, climatology and modelling, it seems to me.

This has been up a few weeks and I haven’t seen any comment on my science blogs of choice. The author states, “There are high trends from GHCN, so high in fact that anyone who questions Phil Climategate Jones temp trends will need to show some evidence. Certainly Phil is an ass, but it no longer seems to me that he has ‘directly’ exaggerated temp trends one bit. ”

Some of the subtext in the comments thread is deeply conflicted and amusing to work through. But of course not an inch is given. The trend may show up, but it’s all the UHI effect apparently.

I gather RomanM is behind this and I know there is scant respect for his work here. But I thought some of you might be interested. And I’m always interested in your take on such things.

Over at Bart Verheggen’s blog some funny outraged contrarian is claiming that the buoys are deliberately placed in the shipping lanes and are therefore subject to the UHI (aka OSL*) effect as well. No proof is given, of course, because who needs it, don’t ask stupid questions like that…

Crispy, there have been a bunch of these lately. One by Tamino, one by Zeke Hausfather at Lucia’s place, one by Ron Broberg (rhinohide.wordpress.com), one by Nick Stokes (moyhu.blogspot.com), and probably others that I’m forgetting. Plus of course the exact replication of GISSTEMP by Clear Climate Code.

Many of the above also examined the station dropout issue. Unsurprisingly, the decrease in the number of stations over the past couple of decades doesn’t seem to have had any impact on the trends.

Yes, thanks J, I’ve seen some of these and followed them with interest, but Zeke and Ron and Nick and Mr T are all working either neutrally or in support of the science. The graph at The Air Vent is the first I’ve seen from the denialist side, and the blogger who executed it seems to be a couple of pink pills short of making himself a tinfoil hat, if his other posts are indicative. So it kinda jumped out at me when I spotted it.

Since this is an open thread, I’d like to vent a bit. I have followed discussions at Real Climate and got frustrated with its moderation policy. To my mind, it’s simply wrong not to allow regular commenters to shoot down denialists there. The moderators do not have time to check everything, and it is a real loss to science if the products of the BS machine are left there with no response.

It’s good to see this kind of stuff come out. The problem with denialist work is that it generally never reaches much of a finished state (in fact a lot of preliminary snide accusations and leading remarks are made, and then never followed up); that the “extent” of issues in the methodology is never estimated (just the “PR” gotcha soft story); and that different issues are mushed together rather than disaggregated (again useful for PR, but bad for logical scientific insight). McI is very, very prone to all these flaws (but at least is good at math and programming and reading papers, and has a high IQ). Watts has these flaws but lacks the smarts to do hard-core science. Id is sort of in between.

Now I understand they did some monthly anomaly adjustments to the data when they switched versions, but those shouldn’t affect the trends. The global trend is indeed the same, but the Land & Ocean trends have changed dramatically, as has the USA48 trend. It appears the Land adjustments have been significantly upward and the Ocean adjustments significantly downward, which would also explain the greater USA48 warming.

What could have warranted these adjustments? Is there any documentation on why this was done? And how consistent is this with Land & Ocean warming trends of other datasets?

Thought experiment: what happens if CO2 is held at 0 ppm?
What is the lowest-energy carbon compound at 255 K at the Earth’s surface? While it’s true the excess of carbon over oxygen in the universe makes an atmosphere for such planets as Jupiter (methane), I don’t remember if gas giants have much CO2 in them (possibly as liquid or solid?). I’m not so certain about relatively large terrestrial planets. I’d guess the rarer oxygen in space would aggregate in the atmosphere of such planets and (via lightning or auroras) combust any excess carbon produced by stars to monoxide or dioxide; the stuff is not so easy to get rid of entirely (not to mention weathering, which may go both ways, but I don’t know of a reducing atmosphere currently on this planet (though some dead zones in the ocean may come close to reducing the carbonates…)). (I blame hangover, Alastair Reynolds and spring for this post.)

Now that the Oxburgh Report into CRU backs Jones et al, the deniers will obviously be pinning all their hopes on Russell.
What will they say when all three reports are in and the deniers are left empty-handed? Conspiracy whitewash!!!

Actually, I reckon that then the only deniers left will be the troofer kind, who believe that EVERYTHING is a conspiracy.

Here is my simple-minded analysis: CO2 + H2O keep the surface, at 288 K, about 33 K warmer than it would be without those gases. Each component is credited with about 50% of the effect.

Now eliminate the CO2 to lower the average temperature by about 16.5 K to 271.5 K, only 0.5 K above the freezing point of salt water. From CC, H2O varies about 6% per 1 K. 16.5 × 6 = 99%, so the ocean completely freezes over and the temperature drops to 255 K.
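For what it’s worth, that back-of-envelope arithmetic can be sketched in a few lines. All numbers are the comment’s own round figures, and note one caveat: compounding the ~6%/K Clausius–Clapeyron change, rather than applying it linearly, gives a smaller (though still large) reduction in water vapor.

```python
# Back-of-envelope for the "remove CO2" thought experiment above.
T_current = 288.0        # K, current mean surface temperature
greenhouse_total = 33.0  # K, total greenhouse warming
co2_share = 0.5          # assumed: CO2 credited with ~50% of the effect
cc_rate = 0.06           # ~6% change in water vapor per K (Clausius-Clapeyron)

dT = co2_share * greenhouse_total  # 16.5 K of direct cooling
T_no_co2 = T_current - dT          # 271.5 K, just above seawater freezing

# Linear estimate (as in the comment): 16.5 K x 6%/K ~ 99% less water vapor.
linear_reduction = cc_rate * dT

# Compounding 6% per K instead gives a smaller but still large reduction:
compounded_reduction = 1 - (1 - cc_rate) ** dT

# T_no_co2 -> 271.5 K; linear_reduction -> ~0.99; compounded_reduction -> ~0.64
```

Either way the qualitative conclusion (a large loss of water vapor, hence an ice-albedo runaway toward 255 K) is the same; only the “99%” figure is sensitive to how the scaling is applied.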

“This was a rather small and peaceful eruption but we are concerned that it could trigger an eruption at the nearby Katla volcano, a vicious volcano that could cause both local and global damage,” said Pall Einarsson, a geophysicist at the University of Iceland’s Institute of Earth Science, the Associated Press news agency reported.

Kevin, I think that previous story may be referring to the first eruption less than a month ago. This one’s much bigger, with thousands of flights having been cancelled and many northern European airports closed at the moment.

Interesting paper in Phys. Rev. Lett. by Rypdal and Rypdal (one is a physicist, the other a statistician – sometimes you feel like a nut, sometimes you don’t). For those of you who have access, the link is http://prl.aps.org/abstract/PRL/v104/i12/e128501

The paper is in reply to an earlier Phys. Rev. Lett. by Scafetta and West that claimed the Global Temperature Anomaly and the Solar Flare Index tracked with each other and followed Lévy walk statistics with the same waiting-time exponent. The new paper says the older paper is in error, and that the older paper’s claim of sun-climate complexity linking is wrong. The new paper says the SFI can be described as a Levy flight, but the GTA cannot, and is instead described as a persistent fractional Brownian motion.
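For readers unfamiliar with the distinction, here is a toy illustration of why the two process classes look so different — this is emphatically not the paper’s method, and a plain Gaussian random walk stands in below for a (persistent fractional) Brownian motion. The signature of a Lévy flight is a heavy-tailed step distribution, so occasional enormous jumps dominate the path:

```python
import random

def levy_flight(n, alpha=1.5, seed=1):
    """Toy Levy flight: steps drawn from a heavy-tailed Pareto
    distribution with a random sign, so rare huge jumps dominate."""
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(n):
        x += rng.paretovariate(alpha) * rng.choice([-1.0, 1.0])
        path.append(x)
    return path

def gaussian_walk(n, seed=1):
    """Ordinary random walk with Gaussian steps (Brownian motion;
    a persistent fBm would additionally correlate successive steps)."""
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(n):
        x += rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# The heavy tail shows up in the largest single step of each path:
lf = levy_flight(10000)
gw = gaussian_walk(10000)
max_levy_step = max(abs(b - a) for a, b in zip(lf, lf[1:]))
max_gauss_step = max(abs(b - a) for a, b in zip(gw, gw[1:]))
```

Over 10,000 steps the largest Gaussian step stays around a few standard deviations, while the largest Lévy step is typically orders of magnitude bigger — which is roughly the kind of distinction the statistical tests in the paper are formalizing.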

I’m a physicist and not a statistician, so many of the paper’s statistical arguments are unfamiliar to me. Because the site’s owner and most readers are statisticians, I’d be interested in anyone’s take on the paper, if they’ve read it.

The last paragraph of my previous post is the abstract from the Rypdal paper, and probably shouldn’t be there because of copyright restrictions. I apologize for that, and hope the site moderator can remove that paragraph.

[Response: I believe quoting the abstract constitutes “fair use.” However, at your request I removed it.]

Probably depends on whether or not your expectations are based on the last few years (i.e. Goddard’s selective comparison with a German source that has only been tracking extent for a few years, all record low years compared to the 1970–2000-ish time frame, leading him to claim that sea ice has returned to “normal”), or on a longer timeframe.

Yes, Goddard’s right, sea ice extent has “recovered” to being kinda normal for the 2006, 2007, 2008, 2009 time frame.

But normal people might suggest that this is a really silly POV since these years represent record lows in the satellite record …

Thanks for the links Hank. Perhaps I should have said “more than what should be expected given the time of year”. ‘Tis the season to anticipate massive arctic melting.

Don’t get me wrong. I believe summer Arctic ice is not long for this planet. But I would still give it a decade or so, barring unusual winds like we had a couple years ago.

I guess my point is that speculating on anticipated dramatic short term sea ice changes seems a bit of a folly, but then again perhaps Petro (or others here) know more about it than I do (that’s not hard to imagine).

According to the folks at NSIDC, there actually is slightly more >1-year-old ice this year than there was last year. Check out fig. 6 here: http://nsidc.org/arcticseaicenews/

>Goddard’s selective comparison
>Look at the amount of first-year ice, this year
>If winds set up to transport ice south in any part of the arctic, a lot of melting might result.
We’ll see. One thing’s for certain, claims of “recovery” are just crap.
>barring unusual winds like we had a couple years ago.
>The sorry state of the Arctic sea ice is, that once the conditions similar in 2007 happen next time, it is all gone.

However, to credit anything other than “noise” for the (very minor) increase of older ice would be a mistake at this point IMHO.

Since the NSIDC graph shows only percentages and not absolute values, you could argue that IF the total area had actually decreased much over the last couple of years, the change in percentage might NOT reflect an overall increase in multiyear ice but rather a decrease in first-year ice. I don’t think that is the case, however.

Maybe this is already common knowledge, but in case it isn’t I thought I should share something I just learned.
According to the latest issue of Trains Magazine the first rail delivery of coal in 50 years has been made to Pennsylvania State University.
Not just a little bit…90 cars worth.

It’s been known for a while that there’s a correlation between the seasons and volcanic eruptions in Alaska. The latest I read was that scientists had identified a weather pattern that pushed water around (don’t recall if it was more or less water – sorry) and that flexed the magma chamber below the volcanoes and catalyzed the eruptions.

That there could be geosphere adjustments as a result of climate disruption isn’t exactly a surprise: the sea is receding north of Juneau because glacial melt reduced the loading on the coastline. Whether the adjusting geosphere will cause catastrophic change (on a human scale, not an earthly scale) is the question.

“The Great Global Warming Blunder: How Mother Nature Fooled the World’s Top Climate Scientists”. He claims to have found THE truth about what drives climate: internal variability in the form of clouds.

Well, from all what I have seen as evidence for that hypothesis he must come up with something very new and unknown to all other scientists…

As best as can be determined, there has been no change in cloudiness for the past several decades (in which considerable warming has happened). Moreover, the Eemian interglacial was about 2 K warmer than now, so “clouds” didn’t prevent that either.

One question I have: when this 2 K figure is quoted, does it refer to (for instance) average 20th-century temperatures? It would be interesting to work out how far away from Eemian temperatures we were during the first decade of this century.

These higher temperatures were concentrated at high latitudes, apparently due to astronomical forcing. But that’s enough to raise sea level, as that’s where the big ice sheets were. How this relates to cloudiness, I don’t know.

“It’s not nice to fool Mother Nature”–how about the converse? Or, more exactly, alleging the converse?

The level of rhetoric in this title certainly leaves no doubt about whether it has scholarly pretensions or is avowedly a popular book. Which raises the implied question: if Spencer has indeed “overturned the science,” where are the technical articles in the professional literature laying out this radical new understanding?

After all, Spencer has demonstrated an ability to publish in the past.

Hmm, a pygmy elephant I might accept (after some discussion of evidence), but I think a shrew is rather snarky… are all your posts that detailed and well argued?

I’ve been interested in this for some years, having read several of McGuire’s books and followed his work. This isn’t that difficult, because he pops up on TV pretty regularly in these parts.

When I posted about this on RC (2006/2007? about the time he published the Alaska work that Brian refers to), I was ignored by the mods (which is fair enough really: the evidence was thin, they really don’t need to open another front, they are mainly US-based, etc.), but was flippantly dismissed by several regular posters.

I think one of them was you, David! :-) Certainly a quick search of RC reinforces my feeling. Like our elephants, the internet doesn’t forget!

How splendid that we should discuss it again (or more precisely: I post, then you snipe), because the evidence is now far stronger, because Prof. McGuire has been very busy.

He has virtually stood this up single-handedly (although he has had many co-authors on the numerous papers he has published), and I am delighted to see that his efforts are being received with a great deal less skepticism this time round, a function of the weight of the evidence, I guess.

In what we tell ourselves is an age of reason, we are behaving increasingly irrationally. More and more people are signing up to weird and wacky cults, parapsychology, séances, paganism and witchcraft. There is widespread belief in ludicrous conspiracy theories, such as the 9/11 terrorist attack being an American plot.

So far, not much to complain about. I’m concerned by such observations, too.

What to conclude? The answer is glaringly obvious, apparently:

The basic cause of all this unreason is a steady loss of faith in God…

Well, I suspect they will keep coming up with that (“cloudiness has decreased”) argument, because the uncertainties in cloud cover measurements are (as far as I know) so large that there is a lot of room for speculation, and speculation is enough to “prove” AGW wrong (because in their view, that’s based on even less than speculation). For the rest, I agree with you David.

As for increased absolute humidity, CC says close to 6% per 1 K of warming; indeed absolute humidity has been increasing, as measured (see some NOAA site). But clouds readily turn into precipitation, and depending upon the global precipitation product there has been either no increase in precipitation in 29 years or a slight increase in 52 years. In either case I conclude no increase in cloudiness, so far.

To precipitate, not only must the air parcel reach 100%+ relative humidity, but also there needs to be CCNs to condense the water/ice onto. Turns out there is a superabundance of CCNs everywhere but possibly the interior of Antarctica, where it doesn’t matter as the air is hyper-dry. As warm moist air will certainly continue to rise in the future, it won’t hang around to increase low cloudiness.

On elephant shrews: while a few extra earthquakes and volcanoes would certainly make life difficult in a few localities, the actual elephants, pygmy or otherwise, are regional climate shifts (too wet, too dry) for agriculture and ocean deterioration, acidification, etc. [I’m a Mt. St. Helens downwinder, by the way.]

Condensation is also very much determined by the surface solar radiation (SSR). This has shown substantial decadal variations on global but especially on local scales. SSR has been shown to vary greatly (on the order of 10 W/m^2!) with the aerosol concentrations from human emissions (I can REALLY recommend a review article: Wild, 2009, “Global dimming and brightening: a review”). Decreased evaporation and heating of the atmosphere by aerosols could have led to a decrease in precipitation (a slowing of the hydrological cycle), which goes against the AGW trend.

The Earth radiation balance as driver of the global hydrological cycle

(Note, that means “one of the drivers” not “the only driver”)

—-excerpt follows—
Solar forcings may be even more efficient in modifying the intensity of the hydrological cycle than thermal forcings, as indicated by a higher hydrological sensitivity (e.g., Allen and Ingram 2002, Liepert et al 2004). The hydrological sensitivity, defined as change of precipitation per unit temperature change, is found to be 2–3 times larger under solar forcings than under thermal forcings (Liepert et al 2004, Andrews et al 2009). This is related to the fact that solar forcings apply at the surface directly because of the high solar transparency of the atmosphere compared to thermal radiation. Solar forcings thus effectively alter the surface radiation balance and the associated imbalance between the surface and atmospheric energy contents, which needs to be compensated for by convective fluxes and related evaporation/precipitation. Greenhouse-gas-induced thermal forcings, on the other hand, heat the atmosphere directly through radiative absorption and the surface indirectly through downward thermal radiation. Thermal forcings are therefore less effective in strengthening the imbalance between the surface and atmospheric energy contents. Hence the required changes in the compensational convective fluxes and associated evaporation/precipitation are smaller (equation (4) in Liepert and Previdi 2009). The different effects of solar and thermal forcings become particularly evident in the direct (fast) response of the hydrological cycle to them, while the subsequent longer-term response of the hydrological cycle, including all feedbacks induced by these forcings, is similar between the two forcing mechanisms (Andrews et al 2009, Lambert and Webb 2008).
—end excerpt —

It’s another look at filling in the details, and it’s about the contribution of solar to the short-term variability; I’d watch for it to be picked up and misinterpreted by the usual “anything but the IPCC” groups.

That makes intuitive sense to me, in that growing up in “Snow” Ste. Marie you could very directly observe the effects of a sunny afternoon even when the air temps weren’t all that high. Every southwest-facing snowbank became its own little microclimate.

(For that matter, you could also directly observe the albedo effect, as dark inclusions in the snow–usually clumps of dirt from the roads–melted much more rapidly than uncontaminated zones did.)

I’m afraid of that too, because there are signs of renewed global dimming, which (besides a possible tempering of the warming trend) might lead to a decrease in evaporation, and thus possibly to a discrepancy between model projections of humidity in the atmosphere and the measured humidity. It can also lead to increased droughts and other strong local effects. Aerosols are something to keep an eye on, also because it can change cloud properties.

Hank,
Interesting contribution. I would point out that this would be an extremely important consideration for any mitigation that used aerosols to reduce warming. We are in effect exchanging visible light for IR. I suspect this affects the biosphere significantly as well as the hydrosphere.

In both the original Wegman report and a subsequent follow-up paper by Yasmin Said, Wegman and two others, the background sections on social network research show clear and compelling instances of apparent plagiarism. The three main sources, used almost verbatim and without attribution, have now been identified. These include a Wikipedia article and a classic sociology text book by Wasserman and Faust. But the papers rely even more on the third source, a hands-on text book that explores social network concepts via the Pajek analysis software package – the same tool used by the Wegman team to analyze “hockey stick” author Michael Mann’s co-author network.

Not only that, but the later Said et al paper acknowledges support from the National Institutes on Alcohol Abuse and Alcoholism, as well as the Army Research Laboratory, raising a host of new issues and questions. And chief among those questions is this: Will George Mason University now finally do the right thing and launch a complete investigation of the actions and scholarship of Wegman and Said?

I have a list of 18 successful predictions of climate models, and I’ve amassed references for all but one of them, both for the prediction and for the later empirical confirmation. But I don’t have one for “expanded range of hurricanes and cyclones.” I can’t seem to find anything relevant in Google Scholar. Can anyone help me out here?

I’d love to see someone rewrite it at about the 7th grade reading level (the US national average).

It begins thus:

The Bootstrap

Statisticians can reuse their data to quantify the uncertainty of complex models

Cosma Shalizi

Statistics is the branch of applied mathematics that studies ways of drawing inferences from limited and imperfect data. We may want to know how a neuron in a rat’s brain responds when one of its whiskers gets tweaked, or how many rats live in Manhattan, or how high the water will get under the Brooklyn Bridge, or the typical course of daily temperatures in the city over the year. We have some data on all of these things, but we know that our data are incomplete, and experience tells us that repeating our experiments or observations, even taking great care to replicate the conditions, gives more or less different answers every time. It is foolish to treat any inference from only the data in hand as certain.

If all data sources were totally capricious, there’d be nothing to do beyond piously qualifying every conclusion with “but we could be wrong about this.” A mathematical science of statistics is possible because, although repeating an experiment gives different results, some types of results are more common than others; their relative frequencies are reasonably stable. We can thus model the data-generating mechanism through probability distributions and stochastic processes—random series with some indeterminacy about how the events might evolve over time, although some paths may be more likely than others. When and why we can use stochastic models are very deep questions, but ones for another time. But if we can use them in a problem, quantities such as these are represented as “parameters” of the stochastic models. In other words, they are functions of the underlying probability distribution. Parameters can be single numbers, such as the total rat population; vectors; or even whole curves, such as the expected time-course of temperature over the year. Statistical inference comes down to estimating those parameters, or testing hypotheses about them….
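Since the excerpt stops before the bootstrap itself, here is a minimal sketch of the nonparametric bootstrap for a simple parameter (the mean), using only the standard library. The data below are made up purely for illustration:

```python
import random

def bootstrap_se(data, estimator, n_resamples=2000, seed=42):
    """Estimate the standard error of `estimator` by resampling the
    data with replacement and recomputing the estimate each time."""
    rng = random.Random(seed)
    n = len(data)
    estimates = []
    for _ in range(n_resamples):
        resample = [data[rng.randrange(n)] for _ in range(n)]
        estimates.append(estimator(resample))
    mean_est = sum(estimates) / n_resamples
    var = sum((e - mean_est) ** 2 for e in estimates) / (n_resamples - 1)
    return var ** 0.5

# Made-up data for illustration (say, ten annual temperature anomalies).
data = [0.2, 0.5, -0.1, 0.4, 0.3, 0.0, 0.6, 0.1, 0.2, 0.4]
se = bootstrap_se(data, lambda xs: sum(xs) / len(xs))
```

The point of the trick is that `estimator` can be anything — a median, a trend slope, a whole fitted curve — and the recipe doesn’t change, which is what makes it useful for the “complex models” in the article’s subtitle.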

A piece of trivia about bootstrapping. Originally, one of the proposed names for the technique was “shotgunning” in analogy with jackknifing–another important nonparametric technique. The reason was that you could crack just about any problem as long as you could clean up the mess afterwards.

A National Science Foundation (NSF) video series that features many of the world’s top scientists discussing climate change and why humans are responsible for today’s changes. Using plain language combined with stunning video this series is a must-see for anybody interested in the issue.

Topics include: How Do We Know?, IPCC, Carbon Cycle, Water Cycle, Earth’s Heat Balance, Climate Modeling, What Americans Believe about Climate Change, and History of Climate Change Research.

Horatio also does not see topics like “What Do We Know about Hacking Into Email Servers?”, “NIPCC”, “Spin Cycle”, “Fair and Balance”, “Bikini- Modeling”, or “What Rupert Murdoch Believes about Climate Change”.

Thanks Hank. Hopefully my congratulatory post will make it through, as well as the link to Deltoid’s takedown of David Rose’s big story on Mojib Latif, in response to a post of Rose’s Daily Mail original. There are a number of denialists posting there, but a number of sane rebuttals too.

I’m trying to put together the sequence of events that led to the abandonment/barring of the work being done by John Vliet, Steve Mosher and others when they were crunching the numbers at CA in late 2007 as surfacestations.org data was coming in. Preliminary results were that the time series from good USHCN stations closely matched the official record. A couple of months after this analysis started, it suddenly stopped. I’m keen to know exactly how this happened. Did Watts make his data unavailable, or did the participants agree not to post any more?

I want to get that straight before a follow-up with Judith Curry, who just emailed Watts about it. His reply email can be seen at comment 149 here:

He’s claiming that he stopped releasing data after Menne et al, but I wondered if he’d done so earlier. surfacestations.org hasn’t been updated since last July. And I’m curious about the events surrounding the demise of the CA analysis. What prevented John V et al from continuing the work?

Watts said that the analysis, based on something like 40% photo coverage by the surface stations project, was meaningless because there wasn’t enough data.

IIRC.

My reading of the situation is that eventually John V wised up and realized that Watts wasn’t chasing the truth, but rather a predetermined “answer”, i.e., the surface station record is meaningless and therefore it’s not warming, blah blah.

Is there a linkful blog page on the demise of the CA analysis etc? If not, I might put one together – any useful links would be appreciated. Wasn’t able to find a contact for John Vliet at opentemp.org – I’d like to find out what happened from him.

[Response: It’s bad enough that you seriously believe such a childish theory. Worse that you appear to be ignorant that you’re not even the originator of this infantile idea. Worst, you didn’t even do it as well as Klyashtorin and Lyubushin.]

Thanks for the pointers, people. According to Watts, he shut down access to surfacestations.org data after NOAA’s ‘talking point’ pdf in July last year. I’m still not sure why the CA analysis was abandoned, and wonder if Watts had shut down access earlier. I’m trying to figure out if Watts was genuinely put out by the NOAA article, or if he didn’t like the message that was emerging in late 2007 on CA. The timing of his shutting down access is something to resolve on that point. I’ll keep trying to reach John Vliet. If I get a fix, I’ll post it here by way of thanks for the suggestions.

I will put my money where my mouth is by betting $1000 AUD that my prediction of global mean temperature anomaly for 2015 of about 0.3 deg C will be closer to the observation than the IPCC value of 0.7 deg C. Any takers?

[Response: There’s no such “IPCC value of 0.7 deg.C” for 2015 — certainly not on the scale of HadCRUT3 data. You just made that up. How despicable.]

For 2005, since the global mean temperature anomaly (GMTA) from CRU was 0.47 deg C, for 2015 the IPCC projection for GMTA would be about 0.47+0.2=0.7 deg C.

[Response: The long-term MEAN value for 2005 is 0.423 (value at 2005 of trend line fit to 1975-2009 data). Add 0.2 and you get 0.623, much closer to 0.6 than 0.7. But you use the 2005 value including the noise, because it allows you to exaggerate.

And for your model, you didn’t even have the common sense to estimate the amplitude of your sinusoid by least squares (like Klyashtorin and Lyubushin did). Instead you just “eyeballed” it, and exaggerated the hell out of it as a result — because that’s what you wanted.

And you didn’t bother to validate your model by withholding any verification data, either earlier or later data. If you had done so you’d have found that your model is a gigantic FAIL.

You really, seriously, are utterly clueless. Which is why you belong on Anthony Watts’ blog.

What’s easily predictable is that fools will exaggerate every chance they get.]
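The calculation described in the response can be sketched as follows. The `annual` series below is a synthetic placeholder with a built-in 0.017 K/yr trend — NOT the actual HadCRUT3 data — so the numbers only illustrate the method: fit a least-squares trend to 1975–2009, read off the trend value at 2005, and add ~0.2 K for a decade of projected warming.

```python
def ols_trend(years, values):
    """Ordinary least-squares fit of values ~ a + b*year; returns (a, b)."""
    n = len(years)
    ybar = sum(years) / n
    vbar = sum(values) / n
    b = sum((y - ybar) * (v - vbar) for y, v in zip(years, values)) \
        / sum((y - ybar) ** 2 for y in years)
    a = vbar - b * ybar
    return a, b

# Placeholder anomalies: a 0.017 K/yr warming trend plus a fixed
# alternating "noise" term, standing in for the 1975-2009 annual
# values (NOT real data).
years = list(range(1975, 2010))
annual = [0.017 * (y - 1975) - 0.1 + 0.05 * ((-1) ** y) for y in years]

a, b = ols_trend(years, annual)
trend_value_2005 = a + b * 2005          # the "long-term mean" for 2005
projection_2015 = trend_value_2005 + 0.2  # add ~0.2 K/decade, per the response
```

Using the trend value at 2005 rather than the single noisy 2005 observation is exactly the distinction the response is making: the projection baseline should be the fitted line, not one year’s weather.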

Girma,
“Cyclicity” is only one of many types of patterns that SEEM to oscillate. Global temperatures would oscillate if only because radiative forcing represents a negative feedback. It is not predictive of the long-term energy balance of the system. If you do not have a candidate for a periodic forcing mechanism (and no, PDO, AMO, etc. won’t cut it), looking for “periodicity” in noisy data is a recipe for fooling yourself.

which is in good agreement, methodologically and otherwise, with the simplest model in Tol, R.S.J. and A.F. de Vos (1998), ‘A Bayesian Statistical Analysis of the Enhanced Greenhouse Effect’, Climatic Change, 38, 87-112.

Chris S. — Thanks. I hadn’t seen that thread before. The 3.7-year cycle is definitely due to a North Pacific Kelvin/Rossby wave, part of El Niño. There are several papers about it. The 7.75-year cycle is probably another such, and also part of El Niño, but I’m less certain of that and haven’t (yet) noticed any papers regarding that effect.

Over at WUWT, Steve Goddard is at it again. (Apparently, Watts has forgiven him for the “CO2 snow in Antarctica” thing…)

Now he’s claiming that the surface temperature on Venus has nothing to do with either the Sun or the Greenhouse Effect — apparently it’s purely an artifact of the atmospheric pressure. According to Goddard, if 90% of the CO2 in Venus’s atmosphere were replaced by nitrogen, it would only change T by “a few tens of degrees.”

Dang it, you made me look. I only got as far as the article about the letter signed by members of the NAS, which they called the AAAS. Big difference. I can be an AAAS member by subscribing to Science. The NAS, maybe a little more difficult. So far, nobody has caught on. I’d write in, but a) my posts do not appear, and b) my work IP address would show up.

I think this one is WAY worse. The CO2 snow just had one stupid mistake, and one that a NASA scientist also made. This one is just one long-debunked rambling, with loads of errors. Most interesting is to see the more ‘well-known’ commenters reacting with great joy (Hu McCulloch, Willis Eschenbach, in particular).

“This is further evidenced by the fact that there is almost no difference in temperature on Venus between day and night. It is just as hot during their very long (1400 hours) nights, so the 485C temperatures can not be due to solar heating and a resultant greenhouse effect.”

He also provides percentages of the atmosphere referenced to 1. The Venusian atmosphere is 95% CO2! But hey, since H2O vapor is a much more powerful greenhouse gas (even though it makes up only 0.002% of the atmosphere)… it must be H2O!

I explained why Goddard’s conjecture was wrong in a post on RealClimate yesterday. In brief, his static atmosphere generating heat through its high pressure is a perpetual motion machine of the first kind, creating energy out of nowhere. A compression event would heat up Venus’s atmosphere (and would require a power source), but the added heat would have radiated away by now.

If you wonder what all the squawking about the cost of avoiding global warming is, this may help.
Or not.

——
“We’ve lost almost $11 trillion of household wealth in the last 17 or 18 months,” lamented Senator Christopher J. Dodd, the Connecticut Democrat, on last Sunday’s “Meet the Press,” as he urged Congress to proceed with speedy deliberations on a finance reform bill.
Eleven trillion dollars! That’s over three-quarters of our current gross domestic product.
Where did all this wealth go? Did other folks get it? Or did it just go up in smoke?
For that matter, what precisely is “wealth”? Is it something tangible we can see, or is it something intangible – something merely imagined?
In an illuminating paper on asset values and wealth, the economist Michael Reiter defined wealth in a way that makes sense to economists:

“Wealth” is the present value of the expected stream of future utility [human happiness] that an “infinitely lived individual or a dynasty” [or a nation] could hope to extract from the real resources available now and in the indefinite future, assuming these real resources are allocated and managed now, and over time, so as to maximize that present value of future utils (at the “proper” discount rate).

Two practical points can be extracted from this abstract definition.

First, economists think of wealth not just in monetary terms — as cash, stocks, bonds and real estate — but in terms of human well-being.

Second, and most importantly, the wealth a nation believes itself to possess is based strictly on the citizenry’s expectations about the future. It is in good part a figment of the citizens’ imagination.
To be sure, these imaginations are anchored in the tangible and intangible resources a nation has at any moment and hopes to have in the future. Among these resources are patents and blueprints that represent the current technological state of the art.
But the same set of current resources can trigger vastly different levels of imagined “wealth,” depending on the citizens’ mood.
….

This comment on the Goddard thread is too hilarious to be kept hidden there …

David Bailey says:
May 9, 2010 at 4:26 am
Anthony,

I think this could be a really big nail in the coffin of climate change because it demonstrates more clearly than anything else that the normal critical processes that operate in science haven’t worked in “climate science” for some time! Can you get this published in a journal somewhere?

Arch, thanks for that link to the Mohonk study. It brought back a flood of fond memories as I spent a lot of time climbing, hiking, swimming and x-country skiing in the ‘Gunks when I was an undergrad at SUNY New Paltz.

I’m glad you appreciated it Jim. There must be some very dedicated folks out there.

For me – “the Gunks” were a mythical place I read about in climbing magazines back in the early 70’s when I spent my college summers in Yosemite waiting breakfast and dinner tables and tearing up my hands on granite faces in between.

The VSC is prominently linking a Fraser Institute document that is filled with errors and misleading/missing information. Although alerted to this over 6 months ago, the link is still there! Story detailed in link above.

Written about economics, but really about belief, skepticism, and learning new things:

— excerpt follows —

“… I run into a lot of spurious arguments by people who sound like they don’t understand the accounting.

Or maybe they just feel threatened on some strange existential level – as if what I am writing threatens their core belief system. I think that is a lot of what is going on. So I am writing this post to explain how the human brain processes information. And then I will make a few remarks about how this applies to the present day situation.

Suspension of disbelief

The core of my argument will come from James Montier, now at the fund manager GMO. As a strategist at Dresdner Kleinwort Benson in 2005, he wrote a timeless piece on the debate between two 17th century philosophers René Descartes of France and Baruch de Spinoza of the Netherlands. Descartes was of the view that people process information for accuracy before filing it away in memory. Spinoza made the opposite claim, that people must suspend disbelief in order to process information. The two competing ideas were put to the test; and it appears that Spinoza was right about the need for naïve belief, something that has grave implications for investing, the subject of Montier’s essay.

Here is a long excerpt of what Montier wrote. The article is available online via John Mauldin (the link is at the bottom). This is a fantastic look into how people process information….”
—- end excerpt—

Arguing from a cherry-picked selection of quotes from the “Climategate” emails, McIntyre has claimed that IPCC authors Chris Folland and Michael Mann pressured Briffa to submit a reconstruction that would not “dilute the message” by showing “inconsistency” with multi-proxy reconstructions from Mann and from Briffa’s CRU colleague Phil Jones. Briffa “hastily re-calculated his reconstruction”, sending one with a supposedly larger post-1960 decline than before. According to McIntyre, Mann resolved this new “conundrum” and simply “chopped off the inconvenient portion of the Briffa tree-ring data”.

But a review of the emails – including some that have never been quoted before – clearly contradicts McIntyre’s version of events:

* Jones and Briffa were concerned that Mann had an outdated version of the Briffa reconstruction, and both urged the adoption of the newer “low frequency” one, more appropriate for comparison with other multi-century reconstructions.
* Far from pressuring Briffa to change his reconstruction right away, Mann questioned whether an immediate change was required, or even possible, and counselled waiting for the next revision.
* CRU colleague Tim Osborn advised Mann that he and Briffa “usually stopped” the “low frequency” reconstruction in 1960, and went one better in his later “resend” to Mann, by explicitly removing the post-1960 data.

I’ll also show how McIntyre has changed his narrative along the way, in an effort to prove that the true “context” of the famous “trick” to “hide the decline” is somehow an indictment of the IPCC.

There’s a linked story over at Rabett Run about how a single Pacific gray whale landed in the Mediterranean… it’s thought it swam there via the Arctic. Interestingly, gray whales disappeared from the Atlantic around the 1700s.

Tamino,
It appears my post got lost in the ether… I guess it was because of the link … check out Climategate Country Club (one word) -dot- com … an article by a “physicist/astronomer” called: Sea Level Rise: What the Data Actually Shows … wherein he accuses the CU folks of doing something “fishy” with the data.

Tamino, I recommend you don’t look at that place without zipping yourself up in some type of anti-stupid armor. To give a hint of the ‘science’ on that site: it links to Monckton, Morano, and Jo Nova (and Climate Depot as the best skeptics’ site, urrrgh). Maintained by one Mark E. Gillar, Monckton-groupie.

I am an astronomer by training, I am currently with the Dept. of Physics & Astronomy, University of Hawai’i, Hilo.

I will admit to being a former believer in “Global Warming”; but after the infamous article in Grist in 2005, which called for “Nuremberg-style trials” for “Global Warming Deniers”, I decided to check out the science involved myself, not relying on any published research.

What I discovered has appalled me- the “science” behind GW is not science at all. I hope to share some of what I have found with members here.

I have experience in planetary, stellar, and extragalactic astronomy; I have published research in these fields. I have experience in planetary-scale modeling, and have worked with NASA, and many of the observatories on Mauna Kea. Therefore, although I am not a researcher in the field of climatology, I have the background in physics and planetary science that allows me to analyze and judge research in that field.

[Response: Unfortunately he doesn’t say where he got his sea level data. It certainly doesn’t agree with the Topex/Jason data provided here.

Actually, it is Topex/Jason data, as provided at that link. Specifically, it appears to be the data for the Pacific Ocean only, with the inverted barometer correction applied, and the last 11 months chopped off, to give 52 years of data.

I particularly like the way Purves somehow manages to confuse the correlation coefficient with standard deviation.

[Response: By golly, it looks like you’re right — he’s using the Pacific-Ocean-only data. Too bad he didn’t say so. Is he so foolish that he didn’t realize this? Or is he being deliberately deceptive? You make the call.]

That’s interesting, since in my experience any comment that Watts actually disapproves of will get responded to nearly instantaneously. Quite a few of us here have had our comments edited or deleted, or been banned, just for asking Anthony uncomfortable questions about his ethical lapses in the past.

Interesting statement, but means nothing on its own.
Is this unusual? Unexpected?

Without knowing the answer to these questions, the first statement carries no particular significance.

So our aim is to find out just how unusual the lack of global warming in recent years really is.

Slide 11 summarizes the paper’s conclusions and then adds one not in the paper:

Observed trends in the global average temperature, from length 5 to 15 years, lie out in the lower tails of the probability distributions from the collection of climate model projections under the SRES A1B emissions scenario

Typically the probability of occurrence of the observed trend values lies between 5% and 20%, depending on the dataset and the trend length.

In the HadCRUT, RSS, and UAH observed datasets, the current trends of length 8, 12, and 13 years are expected from the models to occur with a probability of less than 1 in 20.

Taken together, our results call into question the consistency between the observed evolution of global temperatures in recent years and the climate model projections of that evolution.

Global warming has stopped (or at least greatly slowed) and this is fast becoming a problem.
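The frequentist/histogram tail computation the slides describe is conceptually simple. Here's a sketch using entirely hypothetical numbers (neither the ensemble nor the observed trend below is taken from the paper): collect the short-term trends from many model runs, then ask what fraction of them fall at or below the observed trend.

```python
import numpy as np

# All numbers here are hypothetical illustrations, NOT from the paper.
rng = np.random.default_rng(1)

# Pretend ensemble of 10-year trends from many model runs (K/decade)
model_trends = rng.normal(loc=0.2, scale=0.12, size=1000)

observed_trend = 0.03   # hypothetical observed short-term trend (K/decade)

# One-sided tail probability: the fraction of model-run trends that are
# at or below the observed value
p = float(np.mean(model_trends <= observed_trend))
print(round(p, 3))
```

A tail probability between 5% and 20%, as the slides quote, means the observation is unusual but hardly impossible under the models, which is one reason the "global warming has stopped" headline overreaches.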

Yeah? Slide 4 on data sources is munged at Heartland’s powerpoint. Wait to see if the paper gets through peer review and let’s see if it gets revised before publication. We’ve seen so many press releases that go way beyond even actually published papers, let alone ‘submitted’ papers.

Dhogaza, I thought Annan’s focus was more on the upper bounds of climate sensitivity? I think his (their) earlier work had a mean climate sensitivity that was close to the mean of the Charney sensitivity.

But I agree – interesting mix of authors, indeed. I hope James has a post up soon.

I find it very peculiar that a third party (Patrick Michaels’s sidekick Paul Knappenberger) is releasing details and drawing exaggerated conclusions from a paper that no one can even look at.
…

(a) Knappenberger’s headline conclusion “Global warming has stopped” is unsupported by the paper’s analysis (since it doesn’t look at the evolution of trends, only at short-term trends of various lengths, but all up to the end of 2009).

… [T]he observational record does not support the assertion that global warming “stopped” or even “greatly slowed”. Not even close.

(b) There appears to be no treatment of the various uncertainties in the temperature record itself. For example, how wide are the confidence intervals for the observed trends? Are they wider than those of the models?

…

Frankly, I find the paper’s analysis less than compelling. But the real problem is the gross exaggeration of the paper’s implications.

BTW, it’s purely speculative, but I think James Annan’s contribution probably relates to the frequentist/histogram approach to assigning cumulative probabilities in the model ensemble.

I was disgusted to see Roger Harrabin defending retired mining engineer Steve McIntyre, and giving him a completely unjustified platform for his bad science.

McIntyre has a terrible track record on climate science, and giving an unqualified blogger like McIntyre equal footing with actual climate scientists is just reprehensible.

BBC climate reporting is becoming truly abysmal.

If McIntyre was genuine, then he would “audit” all the nonsense spouted by deniers, as well as work done by real scientists. But he does not, because he is just a cog in the disinformation campaign against global warming. He has had every opportunity to engage in the science, and make a useful contribution. Instead, he has niggled away at the corners, and failed to change the science in any useful way.

Roger Harrabin, if you want to be credible, then you have to stop treating blogs as reliable sources.

You say “The deep irony is that critics like Mr McIntyre profess themselves to want to take part in the science, not to destroy it.”

You *must* know this isn’t true. As I said, he has had opportunities to be constructive. Instead, he has done nothing useful. Nothing. The trivial errors he has found have not been significant, but the disinformation campaign he has waged is having a disproportionate effect, thanks largely to gullible or lazy journalists repeating the nonsense as gospel.

Willis confuses ice area and anomaly (in the y-axes of his Figs. 2 & 3), but that’s a minor issue. He does, however, notice something that had caught my own interest over the past year. Since 2007 there seems to have been a real change in the temporal variability in the ice area anomaly, with a very pronounced seasonal cycle. To me, this indicates that there’s been some kind of a regime change. The winter ice area doesn’t change all that much, because the whole Arctic Ocean freezes over every winter, but much more of the basin is ice-free in the summer now. This winter-summer divergence creates a seasonal cycle in the anomaly data. As more and more multi-year ice gets lost, this pattern will presumably continue.

Willis, of course, misinterprets this all as evidence of a problem with the satellites or the data processing. But if you ignore the usual WUWT stupidity, there’s a scientifically interesting topic here.

There’s fascinating stuff coming out about Don Easterbrook’s misleading presentation at the Heartland conference. Easterbrook’s talk was enthusiastically promoted by Fox News as proof of the imminent threat of “global cooling”. Unfortunately, it’s very clear that Easterbrook’s central graphic was a fake, created by copying a graph from Wikipedia showing a reconstruction of Greenland summit temperatures for the past 10,000 years, drawing a line at the level of the 1905 temperature, and labeling it as “current temperature”. (Temperatures at the nearest station with data from 1900-present show that current temps are about 2 to 2.5C warmer than in 1905…)

Easterbrook has denied plagiarizing the graph and is avoiding commenting on the fact that he labeled 1905 as “current temperature”. But close examination of his graphic and the Wikipedia/Global Warming Art version makes it crystal-clear that he did steal (and then doctor) the graph … and the Greenland temperature reconstruction in question clearly shows 1905 as its last date. It’s kind of ironic that this isn’t getting more attention, considering that unlike the fake controversies spun up by the denialist blogosphere, this is a pretty obvious case of actual fraud, and one that was highly touted in the media just last month.

A comment was posted on Watts the other day asking whether anyone has checked the GISS methodology for extrapolating over the Arctic: select a limited number of stations across North America (to match the sparse coverage available for the Arctic), extrapolate between them, and compare the result against the available measurements.

Or at least, that’s how I interpreted the comment…

Anyway, this seems like the kind of test that may have already been done a number of times but I’m having trouble catching the scent on google scholar.

I wasn’t exactly surprised by the reply the original comment was met with over at Watts; can someone here point me in the right direction?

The original comment:
————
TFN Johnson says:
June 3, 2010 at 9:47 am
Has anybody checked the GISS methodology by taking, say, the US data for just a dozen or so stations and then using the GISS homogenising algorithm to generate the temperature at all the other stations? If that shows results very similar to the actual measurements then it is probably true that Eskimos are buying bikinis.

Not necessarily an answer to your question, Andrewo, but there have been several scientists who’ve made the argument you can quite accurately measure the trend with just a few hundred (well-chosen) stations. In the world, that is! I know Phil Jones has made that argument, and I think it’s been tested in the blogosphere, but I’m afraid I can’t find the links.

Details have fogged in my noddle now, but IIRC it was a Hansen et al. paper that suggested temps could be interpolated out to around 1200 km from a (any) weather station. Is that what they are on about?

Anyway, I’d say, why not just look at conditions on the ground for evidence? Whilst they ain’t buying bikinis (unless they’re off on holiday to warmer climes), they certainly seem to be moving, or entertaining the thought of doing so.

Of course, I could say look at the Arctic ice data, but of course that has recovered to climatology this year, hasn’t it? Oops! Maybe not!

Has anyone seen Roy Spencer’s latest at WUWT? I ask because I think he might have something right, but this is the place to ask about statistical validity.

Dr. Spencer fits HadCrut temp data against the solar cycle and finds (a) a time lag of about 1 year, (b) a sensitivity to radiative forcing equivalent to 1.7C / double CO2 (and the commenters there are not pleased).

But here’s my reaction – the Charney feedbacks can take longer than 11 years. So Dr. Spencer is seeing the amplitude of an oscillator with a long time constant responding to a (relatively) fast forcing – meaning the response is lower than it would be for a slow forcing. So if we had a good estimate of an effective time constant for the Charney feedbacks, we could establish the factor by which to multiply Dr. Spencer’s 1.7 to get the Charney sensitivity his result really implies. (It would be pretty funny if that turned out to be right in the center of the IPCC range.)

But of course such a re-analysis would only have a hope of being valid if the original analysis was statistically valid.

Another example is the question of how many stations are needed to measure the global average annual mean surface temperature. Researchers previously believed that an accurate estimate required a large number of observations. Jones et al. (1986a,b), Hansen and Lebedeff (1987), and Vinnikov et al. (1990) used more than 500 stations. However, researchers gradually realized that the global surface temperature field has a very low dof. For observed seasonal average temperature, the dof are around 40 (Jones et al. 1997), and one estimate for GCM output is 135 (Madden et al. 1993). Jones (1994) showed that the average temperature of the Northern (Southern) Hemisphere estimated with 109 (63) stations was satisfactorily accurate when compared to the results from more than 2000 stations. Shen et al. (1994) showed that the global average annual mean surface temperature can be accurately estimated by using around 60 stations, well distributed on the globe, with an optimal weight for each station.
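The quoted result, that a smooth field with few degrees of freedom can be averaged accurately from a few dozen stations, is easy to illustrate with a toy simulation (this is my own 1-D sketch with made-up parameters, not a reproduction of Shen et al.'s method): build anomaly fields from a small number of large-scale patterns, then see how the error of a randomly-sampled station average shrinks with station count.

```python
import numpy as np

rng = np.random.default_rng(42)
n_grid = 1000          # points on an idealized 1-D "globe"
n_modes = 40           # deliberately small number of spatial degrees of freedom

x = np.linspace(0, 2 * np.pi, n_grid, endpoint=False)

def make_field():
    """An anomaly field built from n_modes random large-scale patterns,
    so the field is smooth and has low effective dof."""
    field = np.zeros(n_grid)
    for k in range(1, n_modes + 1):
        field += rng.normal() * np.cos(k * x + rng.uniform(0, 2 * np.pi)) / k
    return field

def rms_error(n_stations, n_trials=200):
    """RMS error of the n-station average versus the true global average."""
    errs = []
    for _ in range(n_trials):
        field = make_field()
        stations = rng.choice(n_grid, size=n_stations, replace=False)
        errs.append(field[stations].mean() - field.mean())
    return float(np.sqrt(np.mean(np.square(errs))))

for n in (10, 60, 500):
    print(n, round(rms_error(n), 3))
```

With only ~40 effective modes, a few dozen stations already pin down the global mean to a small fraction of a typical anomaly; going from 60 to 500 stations buys comparatively little, which is the qualitative point of the Jones (1994) and Shen et al. (1994) results.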

I’m thinking the same thing, FWIW, but there’s a lot of variability in the sea ice extent/area, so nothing should really be counted on yet.

That said, we are at all-time low numbers for this date according to IJIS, and have been for a few days. A rough indicator: we’re about three days ahead of the June 6 value from 2006–that year having posted the lowest extent for the late May-early June period of the year.

They find a transient climate response of at least 2.5K for the HadCRU data (and other data sets) and a minimum equilibrium climate sensitivity of 3.8K. They do not account for any lag in the response to the solar cycle although an earlier paper did hypothesize about possible lags:

My reaction was the same. I’m surprised that he now allows comments– anyhow, a positive development.

Interestingly, his very next post claims that the instrumental temperature record for the N. Hemisphere can be mostly explained by natural variability (SOI, PDO and AMO) – that is, internal climate modes. So in one post he estimates CS to be 1.7 C, and in the next he chalks the observed warming up almost entirely to natural cycles?

IMO these two claims seem to contradict each other, and the title of Spencer’s most recent post (“Warming in Last 50 Years Predicted by Natural Climate Cycles”) is misleading b/c it gives the impression that he is referring to global SATs, when he is not. Also, I found it odd that his predicted temps have an offset of about 0.2–0.4 C (too low since about 1980).

Any thoughts Tamino or BPL or others in the know?

His results concerning the role of internal climate modes seem to contradict those made by Swanson et al. (2009), namely:

“Global mean temperature at the Earth’s surface responds both to externally imposed forcings, such as those arising from anthropogenic greenhouse gases, as well as to natural modes of variability internal to the climate system. Variability associated with these latter processes, generally referred to as natural long-term climate variability, arises primarily from changes in oceanic circulation. Here we present a technique that objectively identifies the component of inter-decadal global mean surface temperature attributable to natural long-term climate variability. Removal of that hidden variability from the actual observed global mean surface temperature record delineates the externally forced climate signal, which is monotonic, accelerating warming during the 20th century.”

No useful rebuttal from my side, just an observation: Spencer’s blogposts are often not very well correlated to his scientific papers.

I also noted that Spencer is pretty good at making ‘models’ that somehow fit the data, but which contain a number of huge flaws. For example, he could ‘model’ the sea surface temperature and the observed CO2 increase, such that the observed increase could mainly be linked to the SST. ‘Hence’, the observed increase was mainly from the ocean….(where’s the sink, Roy, if not the ocean?).

Aha! We conclude that Roy is very cleverly taking the mickey by doing a half-McLean-et-al. followed by a reverse-McLean-et-al. and waiting to see if anybody notices!

He differentiates just the T series, thereby removing any trend, and finds a (surprise!) good correlation between the T-difference values and the absolute values of the climate indices. Then he puts the trend back in again by “add[ing] up the temperature change rates over time” and pretends to be amazed that ‘model’ follows data – brilliant!

In response, we hope to see Foster et al. publish half of their original comment on McLean et al., backwards.

Spencer’s model doesn’t “predict” a long-term temperature change, only annual differences based on three climatic indices, and as I’ve shown above, it predicts them poorly (red line is Spencer’s model, black line is for CRUTEM3NH) — AFAIR the R2 coefficient for his data was about 0.07.

In fact, you could fit gaussian noise to the data from the training period and in ~25% of cases get better correlations than Spencer does. So the similarity between the summed temperature differences predicted by the model and the CRUTEM3NH time series doesn’t mean much. I suspect that Spencer tried several different temperature datasets and training periods before he found the particular set that gave the most convincing results.
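The differentiate-then-integrate trick described above is easy to demonstrate with synthetic data (everything below is made up; it's a toy illustration of the statistical artifact, not Spencer's actual computation): a white-noise "index" explains essentially none of the temperature differences, yet the integrated "model" appears to track the temperature series.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Synthetic "temperature": a linear trend plus red (AR(1)) noise
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.7 * noise[t - 1] + rng.normal(0.0, 0.1)
temp = 0.01 * np.arange(n) + noise

# A climate "index" that is pure white noise, unrelated to temperature
index = rng.normal(0.0, 1.0, n)

# Half-McLean: differentiate the temperature, which removes the trend
dT = np.diff(temp)
x = index[1:]

# Regress the differences on the index: the fit is terrible
slope, intercept = np.polyfit(x, dT, 1)
fit_diff = slope * x + intercept
r2_diff = np.corrcoef(dT, fit_diff)[0, 1] ** 2   # near zero

# Reverse-McLean: integrate the fitted differences back up. The intercept
# (which is just the mean warming rate) reinstates the trend, so the
# "model" now appears to track the data impressively.
model = np.concatenate(([temp[0]], temp[0] + np.cumsum(fit_diff)))
r2_level = np.corrcoef(temp, model)[0, 1] ** 2   # looks impressive

print(round(r2_diff, 3), round(r2_level, 3))
```

The apparent skill of the integrated series comes entirely from reinstating the trend that differencing removed; the index contributes essentially nothing.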

Then I will highlight the reasoning for Pinker’s relevance to you with a rather breathless minimum of explanatory notes:

The IPCC provides a formula to calculate the total warming due to external forcings – we can apply this to both the global brightening and the increase in CO2 to calculate their respective theoretical impact:

The warming caused by the 2.9 Wm-2 global brightening is calculated as follows:

2.9 Wm-2 x 0.1845 K/Wm-2 x 0.8 x 2.813 = 1.2 K

The value 0.1845 is the first differential of the fundamental equation of radiative transfer at the Earth’s surface, assuming a mean surface temperature of 288 K. This differential converts surface flux changes to equilibrium surface temperature changes in the absence of any temperature feedbacks.

The value 0.8 converts equilibrium to transient warming (transient warming is what is likely to be observed over the period, rather than the warming that will finally result once the climate has settled to equilibrium at a later date).

The value 2.813 is the UN’s central estimate of the feedback multiplier.

Accordingly, the warming that the UN would predict in response to the 2.9 Wm-2 increase in surface solar radiation noted by Pinker is 1.2 K.

To this one must add the warming that the UN would predict as a result of the increase in CO2 concentration from 342 to 370 ppmv (parts per million by volume) over the period, which is calculated as follows:

5.35 ln(370/342) Wm-2 x 0.3125 K/Wm-2 x 0.8 x 2.813 = 0.3 K

Thus, total warming to be expected from the global brightening and CO2 increase combined, using the UN’s method, would be 1.5 K. However, the actual warming from 1983-2001 was less than a quarter of this, at just 0.37 K.

Dr. Pinker was using data some of which had not yet been corrected for orbital decay in the ERBE satellite. Even allowing for this, it appears that the UN has overstated climate sensitivity at least threefold.

This of course leads to a lower sensitivity estimate for CO2 of around 1 °C, rather than the 3.26 °C central estimate used by the IPCC (IPCC, 2007, box 10.2).
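For anyone checking the quoted arithmetic, it does reproduce as stated (the script below simply re-runs the excerpt's own numbers; reproducing the multiplication is not an endorsement of the factors, which are Monckton-style values, not standard IPCC usage):

```python
# Reproduce the arithmetic quoted in the excerpt above. The inputs are
# the excerpt's own figures, taken at face value.
import math

transient = 0.8    # quoted transient/equilibrium ratio
feedback = 2.813   # quoted feedback multiplier

# Surface solar brightening term: 2.9 W/m^2 times the quoted 0.1845 K/(W/m^2)
dF_solar = 2.9
dT_solar = dF_solar * 0.1845 * transient * feedback

# CO2 term: note the excerpt's own formula uses 342 ppmv as the start value
dF_co2 = 5.35 * math.log(370 / 342)     # standard simplified CO2 forcing expression
dT_co2 = dF_co2 * 0.3125 * transient * feedback

print(round(dT_solar, 1), round(dT_co2, 1), round(dT_solar + dT_co2, 1))
# 1.2  0.3  1.5
```

The arithmetic checks out; the problems lie elsewhere (in treating Pinker's surface-flux trend as a forcing, and in the uncorrected satellite data the excerpt itself concedes).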

Thanks Kevin. Since posting that I think he’s going by 18 years for the Pinker study (times 0.16 Wm-2), and is therefore only counting Pinker’s short-wave radiation forcing without taking the long-wave and net forcings into account; I’ve asked him to clarify.

My maths are abysmal too, but I suspect he makes the same mistake as Monckton did with the Pinker (2005) study (the same one for which Tim Lambert played a recorded correction from Pinker during his debate with Monckton). I’m probably completely wrong, though, and trust the opinions of the more expert.

I think he’s taking Pinker’s 0.16 Wm-2 and multiplying by the 18 years of the Pinker 2005 paper’s analysis period, which I believe only refers to downwelling SW. To get the proper figure there needs to be an accounting for LW and/or different conditions (cloudy, clear sky).

Here’s the message from Pinker to Tim Lambert explaining what the correct interpretation is.

Tamino, if it’s OK with you I’d like to point out that I tentatively have started a small blog that is entirely devoted to the Arctic sea ice. I’m not sure how interesting it will be for me to do or for others to follow, but I’m hoping some of the smarter guys can come over and comment every once in a while, so all interesting news, data and knowledge concerning the Arctic Sea Ice gather in one place.

(3) Interprets the high correlation as proof the overall increase in CO2 is caused by warming oceans, not by fossil fuel emissions.

Identification of the problem with this argument is left as an exercise for any reader with even a modicum of an understanding of mathematics.

You’d think that, given how many many many times Watts and his guest bloggers have had their statistical incompetence exposed over here, that Anthony might get a little more cautious and ask a friendly statistician to review that kind of post before making it public.

Of course, it’s also possible that Watts did ask for an “external review” before publishing … but the person he asked was McLean. That would explain a lot.

I’m embroiled in something between a discussion and an argument with an “Al Tekkhaski” over at Bart Verheggen’s blog (here on down).

The gist of his argument is that, given some stations at separations of 50 km show diverging temperature trends, Nyquist’s theorem requires sampling every 25 km (200,000–300,000 stations in total) to reliably construct a global average. Tekhasski seems to know some signal processing, but I’m not convinced his argument holds up.

I think the discussion/argument has run its course. But I’d like to know if there are any substantive points (pro and contra) that have not been considered.

More interestingly; I’d like to understand how, if at all, Nyquist’s theorem relates to sampling requirements for constructing an average, when the signal is not periodic and sampling points are irregularly spaced.

Even more interestingly; Wang & Shen (1999) estimated that global annual average temperature has 45 degrees of freedom. The average can be constructed from 45 independent random variables. 45 stations picked randomly from the map will in all probability not all be statistically independent. So, assuming that station locations are randomly distributed and that record lengths are not an issue, how do you estimate the minimum number of randomly sampled stations that will reliably produce an accurate estimate of the global average, with 45 as a lower bound…? Jones (1994) writes…

What methods are available for estimating the effective number of independent stations? A common method of dimension reduction is principal components analysis. There are a number of rules (see, e.g., Preisendorfer et al. 1981) for deciding how many components contain significant information. The number retained would be their effective number, and rotation of the components might indicate where the sites might be located. Another means of estimating the effective number would be to use the correlation decay-length concept discussed by Briffa and Jones (1993). The correlation decay-length l is the distance at which the correlation between one station and another falls to a value of 0.37 (1/e), estimated from the formula

r=e^(-d/l),

where r is the correlation and d is the distance between the stations. With l at a typical value of 1500 km for midlatitudes, a station would have to be less than 520 km away to maintain half the variance (r=0.71) in common with its neighbor.
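The quoted decay model is simple enough to check numerically (a direct sketch of the formula above, with l = 1500 km as the quoted midlatitude value):

```python
import math

def decay_correlation(d_km, l_km=1500.0):
    """Correlation between two stations separated by d_km,
    under the exponential decay model r = exp(-d / l)."""
    return math.exp(-d_km / l_km)

def distance_for_r(r, l_km=1500.0):
    """Invert the model: the separation at which correlation falls to r."""
    return -l_km * math.log(r)

# At what separation do two midlatitude stations still share half their
# variance (r = 0.71, so r^2 = 0.5)?
print(round(distance_for_r(0.71)))   # roughly 514 km, matching the quoted "less than 520 km"

# And the decay length itself is where r falls to 1/e:
print(round(decay_correlation(1500.0), 2))   # 0.37
```

So the quoted figure checks out: with a 1500 km decay length, stations ~500 km apart still share half their variance, which is why a surprisingly sparse network can capture large-scale trends.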

Isn’t Nyquist making VERY general assumptions about the noise? I think that in the climate system, the variation from station to station exhibits correlation over a much larger scale. The fact that there are exceptions only means that there is no guarantee in extrapolating to points within the system, not for the global mean as a whole.

That is the impression.
I cited some of your work showing good agreement between averages constructed from mutually exclusive subsets of GHCN (‘GHCN: preliminary results‘ and ‘Dropouts‘)… if the sampling rate is a priori incapable of producing reliable averages, as Tekhasski claims, then the averages should disagree. He first responded that the agreement occurs by chance; when pressed with more examples, he claimed the agreement is inevitable because the data are ‘preselected’… I think he really is claiming that because the subsets are drawn from the existing GHCN dataset, the comparisons are invalid :-(

[Response: He hasn’t thought things through at all. Nor has he applied any statistics. He just thinks he knows everything.

He gives the example of different warming rates for Albany, TX and Haskell, TX. Ask him if that difference is statistically significant, and at what level of significance. After he answers, ask him how he compensated for red noise.]
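For anyone who wants to try the suggested exercise, here is a generic sketch of the usual quick-and-dirty red-noise adjustment (my own toy example with synthetic data, not anyone's published method): estimate the lag-1 autocorrelation of the trend residuals, shrink the sample size accordingly, and watch the confidence interval widen.

```python
import numpy as np

def trend_ci(y, adjust_for_ar1=True):
    """OLS trend with a rough 95% CI; optionally inflate the CI for
    lag-1 autocorrelated ("red") noise via the effective sample size
    n_eff = n * (1 - r1) / (1 + r1)."""
    n = len(y)
    t = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
    n_eff = n * (1 - r1) / (1 + r1) if adjust_for_ar1 else n
    se = resid.std(ddof=2) / (t.std() * np.sqrt(max(n_eff, 3.0)))
    return slope, 1.96 * se

# A synthetic "station" record: a modest trend buried in AR(1) noise
rng = np.random.default_rng(7)
n = 50
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0.0, 0.3)
y = 0.02 * np.arange(n) + noise

slope, ci_naive = trend_ci(y, adjust_for_ar1=False)
_, ci_red = trend_ci(y)
print(round(slope, 3), round(ci_naive, 3), round(ci_red, 3))
```

The red-noise-adjusted interval is substantially wider than the naive one, which is exactly why two nearby stations can show "diverging" OLS trends whose difference is nowhere near statistically significant.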

“Ali Tekhassi” is all over the amazon.com global warming threads in the “science” forums. In addition to not believing global warming is even happening, he doesn’t believe acid rain was ever a problem, that CFCs hurt the ozone layer, that discretization is a valid mathematical technique, and, interestingly, that the Big Bang happened. His degree is in engineering. He’s convinced he knows much more than climate scientists about the climate because he’s good at fluid mechanics.

Re Al, I have had the same issues with him. He has had comments removed from Atmoz for being argumentative. IMHO, Bart needs to lower the amount of noise he currently has on his blog; it really does appear it has been hijacked by some very determined contrarians, and that detracts from the science.

Interesting revelations by Barton (hi there). I have no reason to doubt Barton, but if what he says about Al’s stance on things such as CFCs etc. is true, then it is pointless engaging him. Appears to be a classic case of D-K. Maybe someone should let Bart know….

Tamino, is there I way for me to contact you by email?

[Response: If you want to send a private message, post it as a comment and indicate it’s not for posting (everything goes into the moderation queue).]

Gawd, this is embarrassing! Even a naive plumber like myself can figure out why anomalies are used to describe temp variation! Who cares if it is 10° hotter/cooler 25 km away, it’s the anomalies that show the trend.

Lazar, you’re right. This race was finished eons ago and all the fans have left!

Hi Tamino
I was reading some post of yours about inversion of borehole temperatures. You mentioned that you had been working on alternative methods of reconstructing surface temperatures from borehole measurements. But I can’t find any posts on those methods. Can you (or somebody else) point me to them?
Thanks.

I’m aware that Al T blows a lot of smoke, and his first comment at my site irked me for having a slant of conspiracy thinking to it.

However, I like keeping the discussion as open as possible, although that clearly has downsides as well. I’m not moderating very tightly at all, although I’ve recently banned someone from my blog for being extremely obnoxious.

Definitely could be a market for it; if only there was a recognized need.

Admittedly, Watts has been entertaining lately, but it is so hard to wade through the knee-jerk “brilliant,” “final nail” and “Gore is fat” type bot comments that it gets depressing for anyone who has hope for the continued success of the democratic process in an increasingly technical world. (BTW that was a patriotic American comment and not an anti-American comment, just in case anyone [tries to] misinterpret it.)

I just wanted to share a very funny comment from a denier on a separate forum. He trotted out the classic “It’s warming on Mars” argument to claim that AGW was bunk. I linked to John Cook’s site debunking that one, of course not expecting him to accept and admit he was wrong. His response was good for a laugh, though:

“Mars’ climate is driven by dust? That’s hysterical. They have 2 inches of atmosphere and the source for all democrats climate science says it’s dust – so it’s dust.

And this guy is peer reviewed. I thought all members of the First Church of Anthropogenic Global Warming worshipped at the alter of peer review. ”

Astronauts aboard the International Space Station spotted an enormous circle on the frozen surface of Siberia’s Lake Baikal this April. Around the same time, NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS) satellite captured an image revealing a second circle in the lake’s middle […] a natural occurrence every few years on Earth’s largest (by volume) and deepest freshwater lake. Experts explain the circles are methane gas that rises from the lake floor. The emissions can create a mass of warm water that, if large enough, begins to swirl under the influence of the Coriolis force. “Once the water mass reaches the underside of the ice on the surface of the lake, the warm water melts the ice in a ring shape,” says Marianne Moore, a marine ecologist at Wellesley College in Massachusetts who has studied Lake Baikal with Russian researchers. The latest ring patterns included a circle of thin ice with a diameter of 2.7 miles. […] SOURCE: LiveScience.com

January 16, 2010:The urban adjustment, previously based on satellite-observed nightlight radiance in the contiguous United States and population in the rest of the world (Hansen et al., 2001), is now based on nightlight radiances everywhere, as described in an upcoming publication. The effect on the global temperature trend is small: Based on the 1900-2009 period, that change reduces it by about 0.005 °C per century.

GISS makes a decent effort to adjust for UHI in the U.S. (outside the USA, its efforts are risible.) […] These adjustments are pig’s breakfast. […] At the end of the day, there is no evidence that Hansen’s “UHI” adjustments outside the U.S. even begin to deal with the problem. […] GISS uses hopelessly obsolete population meta-data to supposedly identify “rural” stations, but GISS “rural” is all too often small city (or even large city). Unlike the US, GISS methods don’t find sure ground and thus their adjustments end up being essentially random, mostly reflecting random site relocations and having nothing to do with UHI adjustment.

SM assumes that the 0.1°C / century difference between NOAA and GISTEMP must be due to UHI adjustments;

A few days ago, I showed the notable difference between the GISS (UHI-adjusted) version in the US and the NOAA unadjusted version, where the difference is much more than 0.1 deg C/century asserted by CRU/NOAA. […] It shows that the CRU and NOAA failures to make UHI adjustments along the lines of GISS are introducing a substantial bias in these records.

I’m now on moderation at Keith Kloor’s blog (http://www.collide-a-scape.com/) for pointing out, in an “inappropriate manner”, that Judith Curry cited Oppenheimer et al in support of the denialist position that the drive for “consensus” causes the IPCC to exaggerate science. Of course, Oppenheimer et al are complaining that the politically-driven consensus requirements of the IPCC have led AR4 to the opposite, specifically, to underestimating possible sea level rise over the next century.

Wouldn’t hurt for people to gang up there. Curry really needs to be held to account, regardless of “manners”.

Hocker’s an idiot. He doesn’t even respond when the absurdities of his position are pointed out (e.g. that the ocean is acidifying, that C12/C13 ratios are rising and that human emissions are more than twice what we see in the atmospheric increase). Paging Doctors Dunning and Kruger!

Mr Hocker has shown up from WUWT to defend his claims, and doesn’t seem to be conceding anything.

It appears that Mr. Hocker and Steve Goddard share common ancestry, one with an extremely strong tendency towards D-K combined with a blatant disregard of reality.

Jenn, even some of the people at WUWT pointed out Hocker’s error. I assume you understand what he did wrong?

The funny thing is that Hocker’s on to something interesting – the *rate* of accumulation of CO2 is modulated by temperature, presumably because it affects the rate of uptake of excess CO2 by the ocean. The other funny thing is that someone – on WUWT IIRC – has dug out the IPCC reference to the very same observation.

But he seems to think that he’s proven that the growth in CO2 comes from the oceans outgassing CO2, not from our burning of fossil fuel. That’s crazy.

I thought he’s a lawyer living in Hawaii, active in Republican politics there.

Read that somewhere, but could be wrong.

He also claims to be friendly with a few of the Mauna Loa research staff. Odd, that, you’d think he’d run his Nobel-winning debunking of Newton (due to his abuse of simple calculus) through some of those so-called friends.

Speaking of Steve Goddard, every day when I look at the CT maps, I am amazed. It seems pretty clear that the ice is anything but “concentrated” for the most part, and that we could be in for some very interesting numbers. We’ll see.

Such an irony that SG has his eyes fixed so firmly on PIPS for the “real picture” of the sea-ice–given that it’s (gasp!) a *model!*

You’re on moderation because of your hostile tone and your affection for ad homs, not because of your actual criticism. I also emailed you personally to tell you which comment put you in moderation and that wasn’t it.

Why don’t you take a lesson from Bart Verheggen on how to disagree with someone while also remaining polite?

There will be no more thread jacking over at my site by the likes of you and any other nasties on the opposite side of the climate spectrum.

Ah, we now have a climate “spectrum”. Funny, I don’t think one side of that “spectrum” publishes much. Judith Curry, for instance. Nothing published since 2008, and you have to go back to 2006 to find a paper on which she was first author–and that was more a political document than a scientific one. Indeed, you have to go back all the way to 2004 to find a time when she was publishing technical material with any regularity.

“Nobody knows why these dramatic climate changes occurred in the ancient past. Ideas that commonly surface include perturbations to the earth’s orbit by other planets, disruptions of ocean currents, the rise and fall of greenhouse gases, heat reflection by snow, continental drift, comet impacts, Genesis floods, volcanoes, and slow changes in the irradiance of the sun. No scientifically solid support has been found for any of these suggestions.”

Really? Milankovitch cycles have no solid support? Genesis floods are on par with the rest of the proposed causes? Was Laughlin joking when he posted that?

[Response: Don’t know whether he was joking, but I do know he’s wrong. Milankovitch cycles are a near-certainty — you don’t find *all* those periods in paleo records by accident.]

I’d disagree, he seems to completely disregard most of geology. If a geologist wrote a rambling piece in which he claimed that ‘no one knew’ what powered the sun, or that physicists had some theories about gravity, but they couldn’t be tested because of air resistance, but they *really* claim that feathers accelerate as fast as rocks (snigger!), he’d be laughed out of town. Yet apparently it is permissible the other way around.

The main thrust of the argument seems to be that the earth will ‘recover’ (itself a loaded term) in 100k years or so from anything we do – which is both technically correct and completely missing the point – you could use the same argument as a murder trial defense (‘The victim would be dead in 100 years anyway, hence it does not matter if he was shot at the age of 30’). He also seems to think that you can’t detect a global warming signal because of weather.

Anybody else notice the similarity here to the “god of the gaps” arguments? They’re fighting like hell to preserve and even exaggerate ignorance so they’ll have space to cram their deity/philosophy/politics into those gaps without causing it severe bodily harm. That’s the thing about gaps in knowledge: they keep shrinking, and if you have to keep pushing back on the walls of increasing knowledge, it’s a pretty good indication that your deity/philosophy/politics are wrong.

I made a comment where I noted that the trend estimates for the two datasets were within one standard deviation of each other, and thus there is no significant difference in trend.

Somebody (anonymous, nickname Sky) then made this comment:
“Confidence intervals are meaningful only in the the context of statistical sampling from an ensemble of time-series or a population. When there’s no missing data, the “trend” fitted by linear regression over a set period of time at a particular station is not a statistical sample, but an entirely deterministic calculation! It is EXACT in the same way that an exhaustive census of the population is exact. No matter their range, confidence intervals based on the usual iid ASSUMPTION of regression analysis are meaningless in that context. They obscure the issue at hand, which is that any “homogenization” MANUFACTURES a time-series with a SYSTEMATIC bias. It is a transparent attempt to mislead with pseudo-statistical nonsense.”

Any thoughts on that?

[Response: Well, I’ve already used “LOL” and “ROTFLMAO” — what more is there to say?]

FYI, Poptech (he/she/it/they of the lists of anti-AGW scientific “peer reviewed” papers) is taking a bit of a pasting from KingInYellow over at The Guardian, in comments on the Spall PNAS consensus paper. Poptech’s trying his usual “assertive means right” method, but it’s not even denting KIY.

Worth a look, as Poptech appears all over the place promoting his list, and KIY’s arguments may be of use.

the question of how many stations are needed to measure the global average annual mean surface temperature […] the dof for the observational temperature field is […] 45 for the global temperature field

Presumably that means 45 (approximately) surface stations, given appropriate locations and weightings, could reconstruct global average annual temperatures accurately. But you’d first need more than 45 stations to determine what the weightings should be (spatial correlations)… ?

“The research reports on our reconstruction of climate for Ellesmere Island in the Canadian high Arctic where we show that temperatures 4 to 5 million years ago were ~19C higher than today, at a time when atmospheric CO2 levels were very close to those today…
[…]
The implication of our research is that we may already have passed a tipping point for major increases in Arctic temperatures due to increasing levels of atmospheric CO2.”

Here are some recent papers coming up with similar findings in terms of temperature and CO2 levels.

Yup. Can’t say I was too thrilled myself. What’s the order of things on how we learn about how the climate works? “Paleoclimate first, observations second, models third.” Or something like that. If these papers had been from the 1980’s I’d be a bit happier.

Kevin McKinney,
Indeed, the uncertainties on the high side are the most worrying aspect from a risk-management perspective. They make it impossible to bound risk. If we are at the lower end of the 90% confidence interval of sensitivity, ~2.1 degrees per doubling, we are still in trouble if we continue BAU another few decades. If we’re at the upper end, we’re in the soup already. And even though the probability of even higher sensitivity is only 5%, this range dominates the risk calculus, because the consequences are so severe.

I think this is why James Annan has been so insistent on restricting consideration to the 3 degree range. I think the fact that this range is favored by the overwhelming majority of estimates really is telling us something, so there may be something to his Bayesian prior. However, a Bayesian analysis that is dominated by choice of prior is never comfortable.

Unfortunately, there’s nothing very “comfortable” about any of this, is there?

That’s the one sense where I feel an occasional twinge of sympathy for the denialists: while I do enjoy learning about these issues & interacting with these bright & knowledgeable minds, it’d be pretty cool to be able, with a clean conscience, to take a bunch of these hours and put them back into my artistic and professional life.

I could even put some of them back into household maintenance, which would make my wife happy, too!

But the weight of the evidence says to me that we are collectively being really, really dumb, and I feel compelled to be a small part of doing something about it. So I keep coming here and RC to stay up on what’s happening, and keep going to my news sites to insist, again, that basically, we know what we know and reality is what it is.

I’ve been reading an ongoing discussion at Keith Kloor’s whose main theme is advocating engagement between scientists and ‘skeptics’ whilst serving a garnish of collegiality over warmed-over FUD. Frankly, I find the contrast rather nauseating. (And I like Keith Kloor, think his heart is in the right place, and I even agree with some of Judith’s criticisms pertaining to activities described in the CRU emails… as most of you probably know.)

… the “significant concerns” being a blog rant about PMOD vs. ACRIM by a Czech translator with an M.A. in history whose post is so incoherent that it’s impossible for me to work out what his objections to PMOD are, except that he accuses Lean and Frolich of fraud. From which Judith concludes;

Probably the most important conclusion from the IPCC AR4 is:
“Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations”

This statement was used as the “litmus test” for the PNAS paper. IMO ”very likely” (>90% certainty) is WAY too much confidence for this statement, given the above points.

Never mind that the differences in TSI between PMOD and ACRIM reconstructions are around 0.5 W/m^2, which translates to a forcing of less than 0.1 W/m^2, compared to a GHG forcing over 1950-2000 of around 1.5 W/m^2. Or that PMOD is very likely to be much more reliable.

Judith continues;

We won’t have high confidence levels in the IPCC statements until these issues are challenged from every direction by skeptics of whatever stripe. Scientists should welcome challenges from skeptics as an opportunity to either demonstrate the robustness of their data/analysis/model/theory or to improve same. Instead we see snarky dismissals of skeptical arguments and hassles that try to slow down publication. This is not the way to build confidence in the science.

Is she really claiming that Joe public will disbelieve AGW until scientists sit down with cranks to discuss PMOD vs ACRIM? Seriously? How many Joe publics have these guys spoken with?

Ok I agree with the, erm, consensus at Kloor’s that it would be nice if the bile in online debates between ‘consensus’ scientists and ‘skeptics’ was turned down or off. But this has nothing to do with science; the very collegial social kibitzing at Kloor’s is not producing any real insights, on the other hand James Annan is frequently sarky and dismissive but spots big and interesting problems and nails them down. Public opinion? Are they again really claiming that Joe public would believe AGW if only Gavin Schmidt were more polite to Lucia Liljegren (actually that should be the other way round)? This blog world exists in an alternative universe…

Judith continues to underwhelm. She seems to have dedicated the rest of her career to becoming a one-woman demonstration that even PhDs (and maybe especially PhDs) can succumb to the Dunning-Kruger effect when they venture outside their own field. I think for Judith, that is hurricanes. However, she hasn’t published in it for, what, 4-5 years now?

I really think she started out trying to build a bridge to the skeptics unaware apparently, that she’d be the only one to use it.

Judy, can you hear me. Blink twice, honey, if you’re being held against your will.

Lazar,
If you have the stomach to engage, more power to you. My own experience is that it is often best to wait for a teachable moment, and teachable moments rarely come in dens of stupid.

Debates between science and anti-science cannot help but become personal. If we restrict them to evidence, they become rather boring because only one side has any evidence in its favor. Eventually, especially when the anti-science types cannot understand the evidence, they will question the integrity or competence of the scientists who gather and interpret the evidence. And the scientists, being human, will respond in kind. There is no basis for dialogue. We are doing science according to the rules of science. They are playing calvinball.

As for the public, most of them are more interested in catfights on television or the next American Idol than anything resembling truth. When they pay attention at all, they make the mistake of thinking that the scientists can be right only if they are saintly. WRONG! The scientists are right because they cleave to the evidence and because they follow the proven procedures of the scientific method. That’s the truly remarkable thing here: The scientific method yields reliable understanding even when practiced by fallible human beings. People are missing the real story here because they cling to the myths of the saintly scientist and the mad scientist. The reality is much more interesting.

Ray, as I posted on KK’s blog, I think that there is only an extremely small number of people who are interested enough in AGW to actually spend the time to learn, or bitch, about this issue. For the vast majority of people, it is something akin to civic politics. We know something is going on in the neighbourhood, but really don’t have time for whatever it is.

It’s official! Those noise-makers behind the vexatious CRU FOI requests are incompetent, according to the Russell review.

Asked whether it would be reasonable to conclude that anyone claiming instrumental records were unavailable or vital code missing was incompetent, another panel member, Professor Peter Clarke from Edinburgh University, said: “It’s very clear that anyone who’d be competent enough to analyse the data would know where to find it.

“It’s also clear that anyone competent could perform their own analysis without let or hindrance.”

WUWT has put up another nonsense post, in which Dr. R. A. Keen argues there has been no increase in heatwaves in Philadelphia based on the “analysis” that the annual maximum temperature shows no trend. This is an attempt to discredit Mann once again. Given that heatwaves are defined as periods of time where the temperature exceeds a certain level, this is obviously rubbish.

However, I’d be interested in the result of a proper analysis. I’d like to categorize days as hot days (anomaly greater than 9°F and temperature above 90°F, adapted from wikipedia) and then get annual stats of how many x-day periods of hot days one has. Is there a function in R that detects periods above a certain threshold in a time series and counts the duration? Otherwise it’ll be easier for me to just write a program from scratch.

So far, I’ve only turned the anomalies (1961-1990 base period) into a graphic representation. See here:

Columns correspond to days May 1 to September 30 (I’m not interested in winter heat waves), rows R1 to R127 are years 1872 to 1999. Hot days have positive values. It looks like there are more hot days today and longer consecutive runs as well, that’s why I want to look into this.

Thanks in advance for any help you can render. If you should find this topic interesting yourself, feel free to take over this little project.
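On the R question: base R’s `rle()` applied to the logical vector of hot days gives run lengths directly, which should do the job. For what it’s worth, here is a sketch of the same run-detection idea in Python/numpy, using the thresholds proposed above (both of which are of course adjustable assumptions):

```python
import numpy as np

def hot_runs(temps, anomalies, temp_thresh=90.0, anom_thresh=9.0):
    """Durations of consecutive runs of 'hot days'.

    A hot day is: anomaly > anom_thresh AND temperature > temp_thresh
    (the definition proposed in the comment above).
    Returns one entry per run: its length in days.
    """
    hot = (np.asarray(anomalies) > anom_thresh) & (np.asarray(temps) > temp_thresh)
    # pad with False so every run has both a start edge and an end edge
    padded = np.concatenate(([False], hot, [False])).astype(int)
    edges = np.diff(padded)
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    return ends - starts

temps = [88, 91, 95, 96, 89, 92, 93]
anoms = [ 8, 10, 12, 11,  7, 10, 10]
hot_runs(temps, anoms)  # hot runs on days 2-4 and 6-7 -> lengths [3, 2]
```

From there, annual stats of x-day periods are just a matter of binning the run lengths per year.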

Obviously WUWT will get nowhere with annual data! Heatwaves don’t last a whole year. Once again, their brains fell out or they are deliberately peddling lies. My money is on both. It’s quite scary watching people like Goddard be stupid and mendacious at the same time. Do we add Keen to that illustrious list of know-nothing know-it-alls?

The Union of Concerned Scientists have decided to get proactive on the subject of Murdoch’s NewsCorp allowing the likes of Fox News to get away with all sorts while the parent claims to be concerned about the environment. They’ve set up an online letter which you can sign and automatically send to NewsCorp demanding they get their house in order. Commenters here may want to give it a go.

Yeah, I found that out, too, sorry. I might pop them an email to see if they can expand it to worldwide. That kind of thing can be very effective. Scott Mandia’s Tweeting and Facebooking it so hopefully it’ll get around and go viral.

“What is it with you that you cannot even report a simple message such as that without falsifying it? What is the point of making a statement that is so obviously, blatantly false. Is it incompetence or deliberate mendacity that leads you to utter such falsehoods?”

I cannot believe that it is incompetence. It seems that we have got to a stage when people like Watts know that they can lie with impunity. It no longer matters how false his message is, because his followers have long since abandoned any interest in the truth.

wanted to say thanks for keeping us posted on RC
and for your data links..
1. how is publishing your global temperature going?
2. any ideas on why mbh98 wanted their matrix overdetermined? –actually what does ‘overdetermined’ mean?
thank you for your time.

jl // July 31, 2010 at 12:01 am — A system of n (linear) equations in n unknowns is exactly determined and (usually) uniquely solvable.

More accurately, the system can be written as a square matrix of coefficients, A, a column vector of the n unknowns, x, and a column vector of constants, b. The system of equations then is
Ax = b
and the solution is
x = A^(-1)b
provided the inverse matrix A^(-1) exists. (For just one equation, this amounts to a fancy way of saying A is different from zero, as A^(-1) = 1/A.)

If there are more equations than unknowns the matrix of coefficients is no longer square and the system
Ax = b
is said to be overdetermined. In that case there is a form of solution which minimizes the square of the difference between the left side of each equation and the associated right-side constant, in the least squares sense. One wants the vector x such that
|| Ax – b ||
is as small as possible.
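A minimal worked example of the overdetermined case (Python/numpy; the numbers are made up for illustration): four equations, two unknowns, solved in the least squares sense by `np.linalg.lstsq`:

```python
import numpy as np

# Four "equations" y_i = m*t_i + c, but only two unknowns (m, c):
t = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([1.1, 2.9, 5.2, 6.8])
A = np.column_stack([t, np.ones_like(t)])  # one row per equation

# lstsq finds the x minimizing || A x - b ||
x, residual, rank, sv = np.linalg.lstsq(A, b, rcond=None)
m, c = x
print(m, c)  # m ~= 1.94, c ~= 1.09
```

No A^(-1) exists here (A isn’t square), yet the least squares solution is unique because A has full column rank.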

Ladies, Gentlemen and germs. A thought.
For years the deniers have kindly and persistently tried to educate us, they have told us clearly and repeatedly that CO2 is plant food and the more we have the better. They have also told us that the resulting rising temperatures are a good thing™, and that we shall benefit by being able to expand our arable land to higher latitudes.
If you should encounter one of them in the near future, would you mind asking them from me, what effect those higher temperatures have had on the 2010 Russian grain harvest?
Many thanks.

Unquoted from Watts’ cited source, “Carmi times”:
“…U.S. soybean production is forecast at a record high 3.43 billion bushels, up two percent from last year. Based on Aug. 1 conditions, yields are expected to average 44.0 bushels per acre, unchanged from last year’s record high yield….”

In other words the US soybean “yield” (production per acre) is unchanged and the increased “production” (as trumpeted as evidence of some kind of benefit of ACC) is actually due to an increased number of acres planted. I wonder why Nelson/Watts didn’t mention this?

Funny what you can do if you want to “cherry quote” an article (particularly a relatively obscure article). Example:

“…[USDA] U.S. soybean crop for 2010 estimated to be down slightly from 2009…”
“…Most farmers Wallaces Farmer editors talked to at the Iowa State Fair this week think USDA will have to lower its corn and soybean crop estimates in its September Crop Production Report, which will be released September 10…”
“…Other crops…alfalfa down a bit…”

Anthony obviously counts on the blind faith and/or the absence of critical thinking among his devoted regulars, many of whom are quick to apply the label of “RELIGION!” to those of differing opinion…

Didactylos | August 19, 2010 at 9:17 pm — Yes, lots of variation and nothing so refined as change over a mere lifetime of a tree can possibly be resolved. I’ve seen some studies of reconstructing CO2 concentrations over the Miocene by this method; indeed d18O temperature proxy largely agrees with the forcing provided by changing CO2. I’ve also seen a recent study (done @ UNM, first author’s PhD and he is now @ UT-Austin) strongly suggesting that leaf stomata counts become unreliable for CO2 concentrations above 1000 ppm.

I haven’t seen any for the Quaternary, which seems to be the period of interest to you. I suggest sticking with Antarctic ice cores.

Roger Pielke Sr. is in full-denial mode over at Skeptical Science (full thread devoted to debunking him there).

“There does not need to be years of record to obtain statistically significant measures of upper ocean heat content. This is the point of using heat. We just need time slices with sufficient spatial data. A trend is unnecessary, and indeed can be misleading when the signal is substantially nonlinear. Moreover, if global annual average cooling occurs, such as from a major volcanic eruption, the global warming “clock” is reset regardless of the long term trend.”

I was unaware of Pielke Sr’s street cred in time-series analysis.

PS: Maybe time for a new Open Thread; this one’s getting long.

The Yooper
