Friday round-up

Two items of interest this week. First, there is an atrocious paper that has just been published in JGR by McLean, de Freitas and Carter that is doing the rounds of the denialosphere. These authors make the completely unsurprising point that there is a correlation between ENSO indices and global mean temperature – something that has been well known for decades – and then go on to claim that all trends are explained by this correlation as well. This is somewhat surprising since their method of analysis (which involves taking the first derivative of any changes) eliminates the influence of any trends in the correlation. Tamino has an excellent demonstration of the fatuity of the statements in their hyped press release, and Michael Tobis deconstructs the details. For reference, we showed last year that the long-term trends are still basically the same after you account for ENSO. Nevermore let it be said that you can’t get any old rubbish published in a peer-reviewed journal!

Second (and much more interestingly) there is an open call for anyone interested to contribute to setting the agenda for Earth System Science for the next couple of decades at the Visioning Earth Science website of the International Council for Science (ICSU). This is one of the umbrella organisations that runs a network of committees and programs that prioritise research directions and international programs, and they are looking for ideas. Let them know what your priorities are.

533 Responses to “Friday round-up”

Hmm. Simon, I agree. It is probably more like 99.7%. In more than twenty years of trying, no one has come up with an acceptable alternative explanation. In the meanwhile, the multiple lines of evidence for AGW continue to grow.

Any climate scientist would give almost anything to be able to show that global warming is natural, cyclical, and will soon reverse itself. Alas, the laws of physics stand in the way of such wishful thinking.

It would be interesting to know if any members of congress have indicated that they need the political cover of plants in audiences to vote against these things. I’ve heard some say they need to hear from constituents to stand up to lobbyists. Are any saying they need fake constituents to stand with lobbyists? Is any of this solicited? A number of members have insisted on delay so they can hear from constituents and now there seem to be fake ones speaking very loudly.

The Senate Ethics Committee is very hard to contact but I wonder what position they would take if one member were sending fake constituents to another member’s constituent meeting? What tangled webs….

John, the research is easy to find; how many can you find by searching for yourself? Did you look at the topics here at RC?

Without looking in the FAQ here or the topic list or the site search, I can point you easily to 8 here, plus the analysis the paper describes, which might be the ninth.

Not saying this is the answer you’re looking for, just noting how easily this kind of information is found.

It will always help all of us if you show what you can find on your own first, and how you looked — rather than just asking others. That helps us get an idea of how people coming new to this material do try to find it.
It would be a kindness if you’d make the effort and say how it works for you.

Yeah, I saw a reproduction of a “doorhanger” encouraging people to turn out for meetings on these particular two topics. Perfectly acceptable in ordinary circumstances but it appears the objective is intimidation as opposed to discussion.

Along those lines, once the two topics are intertwined it becomes difficult to discuss one in isolation and of course it brings out the trolls. A poster here appeared in recent days muttering about brawls and other violence around climate change. Coincidence? Maybe.

NY Times covers Pentagon’s concern w/climate change. They’ve been prodded, a bit, but apparently have found plenty of knock-on effects.

“The changing global climate will pose profound strategic challenges to the United States in coming decades, raising the prospect of military intervention to deal with the effects of violent storms, drought, mass migration and pandemics, military and intelligence analysts say.

…

Recent war games and intelligence studies conclude that over the next 20 to 30 years, vulnerable regions, particularly sub-Saharan Africa, the Middle East and South and Southeast Asia, will face the prospect of food shortages, water crises and catastrophic flooding driven by climate change that could demand an American humanitarian relief or military response.

…

A changing climate presents a range of challenges for the military. Many of its critical installations are vulnerable to rising seas and storm surges. In Florida, Homestead Air Force Base was essentially destroyed by Hurricane Andrew in 1992, and Hurricane Ivan badly damaged Naval Air Station Pensacola in 2004. Military planners are studying ways to protect the major naval stations in Norfolk, Va., and San Diego from climate-induced rising seas and severe storms.

Another vulnerable installation is Diego Garcia, an atoll in the Indian Ocean that serves as a logistics hub for American and British forces in the Middle East and sits a few feet above sea level.

Some of the prodding has been internal. CNA came out with a report two years ago http://www.npr.org/templates/story/story.php?storyId=9580815 on this subject. We took this to Steny Hoyer as part of Stepitup. He has a couple of vulnerable bases in his district and one close by in VA where constituents work.

It is interesting that the military is starting to do its own climate modeling.

Here’s a simple (O level) question for you Mark. Interpret this analogy in the context of climatology by ascribing appropriate values to the parameters A B C D E F so as to make a coherent and relevant statement. (Clue: A = climatologists).

How about considering reading comprehension in your future education needs?

There are two points:

1) That science didn’t know how bees flew, despite the plain fact that they do, yet engineers could still design planes that flew.

2) Despite not knowing how bees flew, you didn’t care about the risks of putting your life directly in the hands of such people’s results. Quite unlike when you place your children’s future in the hands of climate scientists who don’t have clouds nailed down as well as you’d like.

#1 shows how you don’t have to have it 100% right to get it right.

#2 shows how you’re two-faced when it comes to science you don’t like.

fn.5
In general, uncertainty ranges for results given in this Summary for Policymakers are 90% uncertainty intervals unless stated otherwise, that is, there is an estimated 5% likelihood that the value could be above the range given in square brackets and 5% likelihood that the value could be below that range. Best estimates are given where available. Assessed uncertainty intervals are not always symmetric about the corresponding best estimate. Note that a number of uncertainty ranges in the Working Group I TAR corresponded to 2 standard deviations (95%), often using expert judgement.

fn.6
In this Summary for Policymakers, the following terms have been used to
indicate the assessed likelihood, using expert judgement, of an outcome or
a result: Virtually certain > 99% probability of occurrence, Extremely likely > 95%, Very likely > 90%, Likely > 66%, More likely than not > 50%, Unlikely < 33%, Very unlikely < 10%, Extremely unlikely < 5% (see Box TS.1 for more details).

…
[Text box in red]:

The understanding of anthropogenic warming and cooling influences on climate has improved since the TAR, leading to very high confidence7 that the global average net effect of human activities since 1750 has been one of warming, with a radiative forcing of +1.6 [+0.6 to +2.4] W m–2 (see Figure SPM.2). {2.3, 6.5, 2.9}
…
fn.7:
In this Summary for Policymakers the following levels of confidence have been used to express expert judgements on the correctness of the underlying science: very high confidence represents at least a 9 out of 10 chance of being correct; high confidence represents about an 8 out of 10 chance of being correct (see Box TS.1).

—–
Look, Simon, there are plenty of places where people get encouraged to argue with exaggerated statements, rather than encouraging people to actually state the sources so people can look them up in the science. Try, please.

Scientists have only found enough evidence to state with 90% confidence that the mass inside your lungs is cancer. Therefore, I will take the 10:1 odds that it is not. What you are ignoring is that there is zero evidence that it is NOT cancer. None. In science you either go with the theory that best fits the available evidence or you produce evidence that contradicts the hypothesis. You are insisting there is some middle ground. There isn’t. Science or Anti-science. Pick.

#463 Hank. Thanks for keeping patient. I wonder if you can help me with this?

My journey to work takes about 50 minutes: never possibly less than 40 minutes, but sometimes as much as two and a half hours. It’s not within the same % of 50 minutes plus or minus. It can be as much as +200% despite never being better than −20%.

Whenever I see “+ or − x%” in a report, it always raises a doubt in my mind as to whether these are numbers off the top of the head, or have really been properly thought out.

Oh, and, Simon, yes, researchers nitpick this stuff extensively because it’s not simple, not easy, and often novel methods are tried and compared to older methods.

A guy at RC in the last day or so posted an abstract from a discussion draft (submitted paper). Gavin pointed out inline that the paper has been withdrawn; the statistics didn’t survive review; follow the link to see the reviews, they’re public.

Your driving time example indicates you haven’t taken statistics, or you’d recognize the question of skew and distribution as a familiar one in Statistics 101. Not that that means it’s easy: as the IPCC doc says, “… uncertainty intervals are not always symmetric about the corresponding best estimate.” It’s a common problem with data.

simon abingdon (465) — With so many fine statistical tools available it is easy to calculate the error bars around the trend line from the available data. If there is enough data, all this is even statistically significant at an appropriate level for a scientific paper.

As for your trip to work, it has never taken more than 2.5 hours yet. Gather more data. :-)

Your concern seems to be addressed at least partially in Hank’s quote:

“Assessed uncertainty intervals are not always symmetric about the corresponding best estimate.”

In general, uncertainties reflect the probability function and its shape. Many times this will be symmetrical, but, as your example points out, not always. I suspect that this issue will have been considered carefully in judgments such as the ones we are discussing.

Caution is in order. There are indeed many fine tools which calculate probable errors in trends, but most of them operate under the assumption that the noise superimposed on the signal is white noise. In many cases in geophysics this is not the case. In particular, temperature time series exhibit non-white noise. In that case the probable errors estimated by most programs will be too small.
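To illustrate the red-noise point numerically, here is a small Monte Carlo sketch (the series length, trend, and AR(1) coefficient are my own illustrative choices, not from the comment): the true scatter of fitted trends is considerably larger when the residuals are autocorrelated than when they are white, which is exactly why white-noise error bars come out too small.

```python
# Compare the scatter of OLS trend estimates under white noise vs
# AR(1) ("red") noise with the same innovation variance.
import numpy as np

rng = np.random.default_rng(0)
n, trials, phi = 50, 2000, 0.7   # series length, Monte Carlo runs, AR(1) coefficient
t = np.arange(n)

def fitted_slopes(red):
    slopes = np.empty(trials)
    for i in range(trials):
        e = rng.standard_normal(n)
        if red:                          # AR(1): x[k] = phi * x[k-1] + e[k]
            x = np.empty(n)
            x[0] = e[0]
            for k in range(1, n):
                x[k] = phi * x[k - 1] + e[k]
        else:                            # white noise
            x = e
        y = 0.02 * t + x                 # trend of 0.02 per step plus noise
        slopes[i] = np.polyfit(t, y, 1)[0]
    return slopes

sd_white = fitted_slopes(red=False).std()
sd_red = fitted_slopes(red=True).std()
print(sd_white, sd_red)   # red-noise scatter is clearly larger
```

A program that assumes white noise would report something close to `sd_white` as the trend uncertainty even when the data behave like the red case.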

Re: #465 (simon abingdon)

Even when the data themselves are not symmetric about their mean (or about the trend), it is often true that averages, slopes, and other summary statistics have sampling distributions that are nearly symmetric about the true value. This is because of the most powerful theorem in statistics: the central limit theorem. So don’t be surprised when the error range is symmetric about the estimate; this is usually correct.
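A quick numerical sketch of that central-limit-theorem point (the exponential distribution and sample sizes are my own choices): the raw data are strongly right-skewed, but averages of 50 draws are much closer to symmetric.

```python
# Show that means of skewed data are far less skewed than the data.
import numpy as np

rng = np.random.default_rng(1)

def skewness(x):
    # Standardized third moment: 0 for a symmetric distribution.
    x = np.asarray(x)
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

raw = rng.exponential(scale=1.0, size=100_000)                  # strongly right-skewed
means = rng.exponential(scale=1.0, size=(20_000, 50)).mean(axis=1)

print(skewness(raw), skewness(means))  # roughly 2 vs much closer to 0
```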

Simon #465. You are getting good advice, but because of what you said — “doubt in my mind as to whether these are numbers off the top of the head, or have really been properly thought out” — I think it is important to state specifically that errors (+ or − whatever) are not guesses. These numbers are standard components of statistical computations and are not arbitrary in any way. They typically represent one standard deviation or the standard error of the mean (the latter sometimes used inappropriately with small sample sizes). It is not a value judgment. See the statistical references suggested by others.

Do you understand the difference between a normal (Gaussian) distribution, and for example, a lognormal?

The latter is right-skewed, often where there is some bound at the left, but no bound at the right. There is some minimum time for a trip, but maxima can be long. We once had a 5-hour trip to South Lake Tahoe take ~3 days.

For example, given the same {age, gender, ethnic group} height is probably normal, but weight may well be better modeled by lognormal. I.e., if the mean weight were 150 pounds, the chance of someone weighing 225 is a lot higher than weighing 75.
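The weight example can be checked numerically with a small simulation (the lognormal parameters below are my own illustrative assumptions, chosen only so the mean comes out near 150): the chance of being 75 pounds above the mean is several times the chance of being 75 pounds below it.

```python
# Sample a lognormal "weight" distribution with mean ~150 and compare
# the tail probabilities at 225 and 75.
import numpy as np

rng = np.random.default_rng(2)
sigma = 0.25
mu = np.log(150) - sigma ** 2 / 2        # so E[W] = exp(mu + sigma^2/2) = 150
w = rng.lognormal(mean=mu, sigma=sigma, size=1_000_000)

p_heavy = (w >= 225).mean()              # ~75 lb above the mean
p_light = (w <= 75).mean()               # ~75 lb below the mean
print(p_heavy, p_light)                  # p_heavy is several times p_light
```

A normal distribution with the same mean and standard deviation would give equal probabilities for the two tails; the skew is what breaks that symmetry.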

Simon, any measurement is subject to error, and those errors will be distributed according to some distribution. In many cases, that distribution is symmetric; in many it is not. In the case of your trip to work, you are bounded on the low side by some constraint — e.g. the speed limit, if you are law abiding, or the speed of traffic or the top speed of your car if you are not. On the high side, there are many sources of delay, and they are additive. All this shows is that in any problem, you have to understand the error calculus to accurately bound the confidence intervals. This is hardly profound. What is of more interest is 1) why you seem to think it is; and 2) why you think that climate scientists don’t already know this.

#466 Hank I’ve now read Grumbine’s blog as you suggested. (I hadn’t when I replied to your #474 BTW). It’s the usual stuff about how to define climate and needing 20-30 years to identify the underlying trend. Seems to me this all assumes there is an underlying trend which obediently stays the same while we look for it. What does a graph of the last 25 years of the FTSE tell us? Consistent upward trend (with just a couple of dips). Would you rely on that as a predictor of the future? Are you prepared to rely on the UK Met Office when they tell you what temperatures you can expect 70 years from now region by detailed region across a country as small as England?

No, Simon, you’re not reading it carefully. I’d suggest you go through Grumbine’s several threads on detecting trends more carefully, then ask a question there if you still haven’t understood it.

You need to get past the argument from incomprehension and do the work to understand this very basic thing about statistics. It’s the stuff taught in the first six weeks — and then retaught endlessly because it’s not easy to understand.

The claim was made earlier that there were exactly “nine pieces of evidence.” Or maybe “nine main pieces.” You took a long time to tell me to “do research.” I am not in that business. I can be fooled by the bogus stuff and see some strong arguments as very weak. This is the case for a lot of the “interested public.”

If you — or anybody here — wants to help me on this, I’d be grateful. If not, I’ll probably bumble my way along somehow.

as to your question “So why does any reasonable person think that the FACTS actually provide CONFIRMATION for a theory of AGW?”

because there are multiple levels of correlation in the attribution.

– the amount of GHGs added to the atmosphere is equal to the amount of GHG output.
– the physics of the sensitivity to increased GHGs is well known.

Yes, you are arrogant (unwarranted confidence). It’s not about how you feel. It’s not even about how you think. It is about the science, maths, physics. It seems to me that you are not as smart as you are attempting to appear. If you can’t even put the basic pieces of understanding together in a cogent manner, I am afraid you may actually be incapable of learning — unless this is a handicap you may be able to overcome, but it’s not looking good at this point in time.

“There are none so blind as those who choose not to see.”

BTW, this is not ad hominem, merely an observation. You have been pointed to ample real science and had many things explained to you in a ratiocinative manner. You just don’t seem to have the capacity to learn.

I take it from these equations that the estimated trend for 1960-2008 is 0.0326°C/year (from the 1960-2008 dataset), and for 1980-2008 is 0.0424°C/year (from the 1980-2008 dataset).

This compares to the IPCC [2007, WG1 SPM, p5]
“The linear warming trend over the last 50 years” (0.013°C/year [0.010°C to 0.016°C/year] ) [decadal changed to annual for clarity]

So the time trend found from the microwave data when accounting for SOI correlation is over double that quoted in the IPCC.
“Study shows warming trend is more than double what was previously expected, after accounting for El Nino effects”

Of course, there is no way of knowing whether this is significant since there was no treatment of statistical significance and error bars in the paper.
It probably is not significant but it just shows that this paper could have been ‘spun’ in a completely different way.

Steve #489 LOL! Yes, Tamino noticed this too. Actually, if you want to determine the long-term trend of a time series, just about the worst way to go about it, accuracy-wise, is to average its year-upon-year differences…
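The reason averaging year-on-year differences works so badly is easy to see: the mean of the first differences telescopes to (last value − first value)/(n − 1), so every interior data point is thrown away. A small Monte Carlo sketch (my own toy setup, not from any of the papers discussed) makes the accuracy penalty concrete:

```python
# Compare two trend estimators on a noisy linear series:
# (a) ordinary least squares; (b) the mean of the first differences,
# which only ever uses the two endpoints.
import numpy as np

rng = np.random.default_rng(3)
n, trials, true_slope = 50, 5000, 0.02
t = np.arange(n)

ols, diffs = np.empty(trials), np.empty(trials)
for i in range(trials):
    y = true_slope * t + rng.standard_normal(n)
    ols[i] = np.polyfit(t, y, 1)[0]       # least-squares trend
    diffs[i] = np.diff(y).mean()          # mean year-on-year change

# Both are unbiased for the true slope, but the differencing
# estimator is far noisier.
print(ols.std(), diffs.std())
```

With these parameters the scatter of the differencing estimator comes out roughly three times that of the least-squares slope, and the gap widens as the series gets longer.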

Not interesting, Simon. Just wrong. That’s why 9 people are correcting you.

You seem to be claiming that we cannot analyze anything unless it is simple. Not true. Do you have a retirement account? Did you sell everything when the market fell? This is no more complicated than what most people already bet their life savings on.

There’s a letters war in The Australian. Yesterday, they published a denial letter, which attracted a flood of pro-denial comments.

Today I have a reply in; if anyone sees this in time to join in, please get a comment in. The denial bunch is bound to go after this in a big way judging from yesterday’s (too late to respond there: the paper doesn’t close comments but they stop posting them towards the end of the day of publication — time zone GMT +10).

Simon Abingdon — You need to look at enough data. Here are the decadal averages from the HadCRUT3 global temperature product: http://tamino.files.wordpress.com/2008/04/10yave.jpg
By eye alone I believe you will note the linear trend over the centennial-scale data, with some wobbles around it.

… We calculate the effects of alternative emission profiles on atmospheric CO2 and global temperature change over a millennial timescale using a simple coupled carbon cycle-climate model. For example, if it takes 50 years to transform the energy sector and the maximum rate at which emissions can be reduced is −2.5% [per year], delaying action until 2020 would lead to stabilization at 540 ppm. A further 20 year delay would result in a stabilization level of 730 ppm, and a delay until 2060 would mean stabilising at over 1,000 ppm. If stabilization targets are met through delayed action, combined with strong rates of mitigation, the emissions profiles result in transient peaks of atmospheric CO2 (and potentially temperature) that exceed the stabilization targets. Stabilization at 450 ppm requires maximum mitigation rates of −3% to −5% [per year], and when delay exceeds 2020, transient peaks in excess of 550 ppm occur. Consequently tipping points for certain Earth system components may be transgressed. Avoiding dangerous climate change is more easily achievable if global mitigation action commences as soon as possible. Starting mitigation earlier is also more effective than acting more aggressively once mitigation has begun….”

#482 Mark (quoting my #481) “Are you prepared to rely on the UK Met Office when they tell you what temperatures you can expect 70 years from now region by detailed region across a country as small as England?”