More PR related confusion

It’s a familiar story: An interesting paper gets published, there is a careless throwaway line in the press release, and a whole series of misleading headlines ensues.

This week, it’s a paper on bromine- and iodine-mediated ozone loss in marine boundary layer environments (see a good commentary here). This is important for the light that it shines on tropospheric ozone chemistry (“bad ozone”), which is a contributing factor to global warming (albeit one which is only about 20% as important as CO2). So far so good. The paper contains some calculations indicating that chemical transport models without these halogen effects overestimate ozone near the Cape Verde region by about 15% – a difference that certainly could be of some importance if it can be extrapolated across the oceans.

Why is this confusing? Because the term ‘climate models’ is interpreted very differently in the public sphere than it is in the field. For most of the public, it is ‘climate models’ that are used to project global warming into the future, or to estimate the planet’s sensitivity to CO2. Thus a statement like the one above, and the headline that came from it, are interpreted to mean that the estimates of sensitivity or of future warming are now in question. Yet this is completely misleading, since neither climate sensitivity nor CO2-driven future warming will be at all affected by any revisions in ozone chemistry – mainly because most climate models don’t consider ozone chemistry at all. Precisely zero of the IPCC AR4 model simulations (discussed here for instance) used an interactive ozone module in doing the projections into the future.

What the paper is discussing, and what was glossed over in the release, is that it is the next generation of models, often called “Earth System Models” (ESMs), that are starting to include atmospheric chemistry, aerosols, ozone and the like. These models may well be significantly affected by increases in marine boundary layer ozone loss, but since they have only just started to be used to simulate 20th and early 21st Century changes, it is very unclear what difference it will make at the large scale. These models are significantly more complicated than standard climate models (having dozens of extra tracers to move around, and a lot of extra coding to work through), are slower to run, and have been used much less extensively.

Climate models today are extremely flexible and configurable tools that can include all these Earth System modules (including those mentioned above, but also full carbon cycles and dynamic vegetation), but depending on the application, often don’t need to. Thus while in theory, a revision in ozone chemistry, or soil respiration or aerosol properties might impact the full ESM, it won’t affect the more basic stuff (like the sensitivity to CO2). But it seems that the “climate models will have to be adjusted” meme is just too good not to use – regardless of the context.

June 26th, 2008 at 12:45 PM

A Reuters article is even more blatant:

“Climate models need tuning, study finds”

In particular with respect to CH4, a by-product of the bromine-driven ozone destruction process. I wonder if this methane component has been included in current models, and whether or not there has been, as the article cites, “greater warming” with fewer GHGs….

Ike Solem:

June 26th, 2008 at 1:31 PM

The paper should have used the phrase “biogeochemical model” instead of “climate model”. Such models might improve the longer-term climate predictions (100-1000 years, say), and help with understanding climate change over the past glacial cycles, but they won’t really have a whole lot of effect on the shorter-term (say, 25-100 yr) predictions.

Biogeochemical models are intended to understand and predict the cycling of molecules and elements through the various “pools” – biomass, atmosphere, lakes and rivers, soil, oceans, ice sheets, etc. That has applications to many subjects – radiocarbon dating, tropospheric ozone chemistry, etc. – and such models might also be able to predict some of the external forcings that are currently set by hand in climate model forecasts. A major difficulty is getting good rates for the various processes that move elements from one pool to another, many of which are still poorly understood, and many of which vary widely with temperature, moisture and nutrient supply.
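Ike’s pool-and-flux description is essentially a box model. As a minimal sketch (entirely illustrative: the pool sizes and rate constants below are invented, not taken from any model discussed in this thread), carbon exchanged between two pools with first-order transfer rates might look like:

```python
def step(atm, ocean, k_ao=0.02, k_oa=0.01, dt=1.0):
    """Advance both pools one time step with first-order exchange fluxes."""
    flux_ao = k_ao * atm    # flux from the atmosphere pool into the ocean pool
    flux_oa = k_oa * ocean  # flux from the ocean pool back to the atmosphere
    atm += dt * (flux_oa - flux_ao)
    ocean += dt * (flux_ao - flux_oa)
    return atm, ocean

# Invented starting inventories (loosely GtC-scale, purely illustrative).
atm, ocean = 800.0, 38000.0
total = atm + ocean

for _ in range(1000):
    atm, ocean = step(atm, ocean)

# Mass is conserved exactly by construction, and the pools relax toward the
# equilibrium set by the ratio of the (invented) rate constants.
print(round(atm))  # → 12933 (= total * k_oa / (k_ao + k_oa))
```

The difficulty Ike points to lives entirely in the rate constants: in a real biogeochemical model they are not fixed numbers but poorly constrained functions of temperature, moisture and nutrient supply.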

June 26th, 2008 at 2:30 PM

I think it’s true that saying “climate models will have to be adjusted” adds to PR-related confusion on global warming.

I also think that NOAA’s statement (below) exemplifies to the public that there is much confusion on how much warming will occur by 2100.

NOAA statement: “According to the range of possible forcing scenarios, and taking into account uncertainty in climate model performance, the IPCC projects a best estimate of global temperature increase of 1.8 – 4.0°C with a possible range of 1.1 – 6.4°C by 2100, depending on which emissions scenario is used.”

helvio:

June 26th, 2008 at 3:47 PM

“Climate models will have to be adjusted” just means that theoretical climatology is as fragile as its models. A science that has to be readjusted every time a discovery is made is closer to philosophy than to science, in my opinion. Particle physics or astrophysics, two robust science subjects, do not change their big picture every time a new discovery is made, which is a sign of their maturity. In theoretical climate science, especially in climate modeling, I see the opposite, and that’s why I have to be a skeptic! And so should everybody!

[Response: Being a sceptic is fine. Ignoring the complexity of the world around you is not. – gavin]

David B. Benson:

June 26th, 2008 at 3:56 PM

pat neuman (6) — What isn’t well communicated is just how very bad even 1.1 K will be.

June 26th, 2008 at 4:20 PM

If helvio is a skeptic, he should be more eager to stop greenhouse gas emissions, since he would have to admit that reality might be worse than science suggests (uncertainty goes both ways). And there is reason to think the uncertainty may be skewed toward greater change, since the IPCC only models things that are decently well understood, and there are known phenomena (e.g. “dynamic processes”) that make things worse but are too difficult to model. Thus the establishment projections have an element of understatement to them. Similarly, James Hansen has recently suggested that the paleo record predicts that long-term feedbacks will eventually produce twice the warming of the short-term feedbacks modeled by the IPCC. So it seems to me someone skeptical of the science should be even more worried. Also, it seems to me that astrophysics did recently see a big-picture change, with the supernova observations.

Guenter Hess:

June 26th, 2008 at 4:23 PM

1. I agree: in communications, the sender is responsible that the message gets across, not the receiver. To me the message got across that the earth is getting warmer and that anthropogenic greenhouse gas emissions are the likely root cause, since warming and emissions correlate. However, I also understood that ozone concentration has an influence as well, since absorption of UV photons warms the atmosphere. If new observations are made, of course the model has to be adjusted. If I am wrong, Gavin, maybe you can enlighten me.
2. I learned from my Ph.D. advisor that in science you observe and measure first, and then you try to explain the data and observations with a model. Then you try for predictions or formulate a hypothesis. Then you measure and observe again. If model and measurements or observations disagree, of course you have to adjust or discard your model.
For models there are certain restrictions of good practice. Firstly, in addition to fitting the data, ideally every parameter or term has to be individually supported by observations or measurements if you want to validate your model. Secondly, if you cannot fulfill this first point because of natural restrictions, you have at least to strive for Occam’s razor and limit your model to a minimum set of parameters. So adjustment of a model is just part of the scientific method, and this needs to be explained to the general audience.
3. If you publish a paper and postulate a hypothesis, you have to face the questions of your scientific peers. The task of your peers is to be skeptical. Maybe Gavin or somebody else can explain to me the hype in climate blogs about skeptics and non-skeptics.
4. Shouldn’t every scientist be a skeptic until the model is validated?

Harold Pierce Jr:

June 26th, 2008 at 4:24 PM

RE: A few interesting ozone factoids.

Oceans release dimethyl sulfide (DMS), which is a very good scavenger of ozone. Ozone reacts rapidly with DMS to produce dimethyl sulfoxide (DMSO) and oxygen. DMSO is miscible with water and is washed out of the air by rain. DMSO can react with ozone to produce dimethyl sulfone, which is also miscible with water.

Over the land, ozone will react with DMS from decaying vegetation. In pine forests, ozone reacts with terpenes released by the trees, usually as a result of some type of injury, to form ozonides. These are solid compounds that scatter light. The result is the blue haze often seen in these forests, such as in the Smoky Mountains.

Ozone also attacks rubber and a great many other substances. Plants have built-in anti-oxidants and ozone scavengers. Anti-oxidants are added to rubber and other materials exposed to air and ozone to suppress and arrest the attack of these highly-reactive molecules.

JohnLopresti:

June 26th, 2008 at 4:30 PM

I wonder whether increased Arctic ice melt these past two years, and ocean-borne iodide, relate to the paucity of near-coastal rainfall where I live. I recall iodides are part of the chemistry of “cloud seeding” to make rain. My hypothesis would be that the temperature shift from polar cap melt, plus the dilutive effect of some increment of fresh water, might shift the halide equilibrium sufficiently to decrease onshore precipitation in areas like ours, which rely on ocean-formed rain and snow cloud systems.

June 26th, 2008 at 4:53 PM

It also appears to be an interesting journal article, indirectly explaining a significant polar spring ice break-up effect: the release of cloud-seeding gases creates greater albedo, delaying the warming otherwise driven by the higher rising sun. The commonly known destruction of ozone during that period further adds to cooling, as the article cites. This means that without the usual sudden great release of bromines stored under Arctic ice in April, Arctic surface temperatures get warmer, as they did a few months ago despite a cold winter; the contradiction of the sudden jump in temperatures this past April becomes a little less confusing. Current Arctic ice cap melt is still fierce despite the later, cooler 2007-08 winter over the Canadian side of the Pole. Climate is complicated, but the warming continues.

Jim Galasyn:

June 26th, 2008 at 6:17 PM

In 7, Helvio calls climate science “immature” and “fragile,” when compared with astrophysics and particle physics.

So tell me, Helvio, most of the universe comprises what kind of matter?

sidd:

June 26th, 2008 at 6:23 PM

Guenter Hess writes at 26 June 2008 16:23:
“Shouldn’t every scientist be a skeptic until the model is validated?”

Yes. And beyond. For the model cannot ever be proved true. Just “not yet falsified.”

Thus, if I step off a precipice, the result is a test of the models from the days of Newton and Einstein. As we stuff CO2 into the atmosphere, the result is a test of the models from the days of Fourier and Arrhenius.

I submit that the experiment is imprudent in both cases.

In the first case, my broken body below would validate the models and affect a handful. In the second, the result affects the entire world.

sidd

June 26th, 2008 at 6:35 PM

#10

1) Well, this is journalism. Journalists have the responsibility to get stories right. 9 times out of 10, it’s the primary literature that is taken out of context in climate-change-related news stories.

Ozone changes have had a warming effect in the troposphere, and depletion of the ozone layer (stratosphere) has a slight cooling effect (ozone also affects upwelling radiation). See this chart. But CO2 and methane have been more important.

2) The point of this article is that ozone revisions do not affect model estimates out to a doubling of CO2, or 2100, or what have you. Chemical models would need revision, and this story is interesting, but I don’t take it to be especially meaningful in the context of modern and future climate change.

3,4) The word “skeptics” in the blogosphere does indeed have a wrong tone to it. We should say “denialists” or “noise makers.” Skepticism is a good thing. Unfortunately, we see much more of the former.

June 26th, 2008 at 7:40 PM

Thank you RC for your work discussing models.

As a non-scientist citizen, I struggle to wade through the IPCC and other sources. Though informative, all seem to be carefully prepared science designed for scientists’ eyes. That is fine. Most require real work to comprehend. Sometimes I wish I could see clear, simple and comprehensible presentations that I could grasp quickly. Edward Tufte is the Master of Visualizing Information http://www.edwardtufte.com/tufte/

Does one web site offer simplified descriptions of climate models for lay citizens, the press, and students? The public needs valid sources for easier-to-understand model descriptions and the changes. Best would be a carefully tended web site devoted to climate models.

The RC site on models offers excellent background: http://www.realclimate.org/index.php/archives/2008/05/what-the-ipcc-models-really-say/ Now, for instance, when there is new data such as the recent significant ice shelf loss, where do we see this reflected in adjustments to models? Or, in this case, where no change to the model is warranted? Somehow I envision a giant dynamic graphic for each model that reacts to input from current data changes. It might be the right time for an ambitious visualization effort. Rigorous sourcing would exclude messages constructed by the Denialist PR Industry.

Of course humans prefer selective choice in what to believe. And the mainstream press often obscures before it reveals. Physical sciences cannot change that. But elegant presentation of science can help educate the press and the public.

RC reminds us to step up and embrace scientific technique and the complexity. But the world needs good, understandable and current information in easy to understand reports. It is hard work to construct such. Is this too much to expect in our rapidly changing world?

Thomas:

June 26th, 2008 at 7:41 PM

Conceivably, CO2 sensitivity could be affected by inclusion of better ozone (or other) chemistry. The issue for this sort of scaling is whether, integrated over the globe, such effects are positive or negative feedbacks on the initial perturbation. I think this paper said that more ozone than had been expected is being destroyed; this is quite different from more being destroyed as temperatures increase. The latter statement would suggest a previously unknown negative feedback; the former suggests that the overall heat balance is not as currently modeled, but says nothing about how that error would evolve with time or CO2 concentration.

One headline ran “Poll: most Britons doubt cause of climate change” when the poll did not actually find this. Rather, most respondents thought that “many scientific experts still question if humans are contributing to climate change”, and an even bigger majority thought that individuals SHOULD be expected to do things to curb climate change, given the options of responding that it was not individuals’ responsibility or that climate change was natural/humans had little impact.

Alex J:

June 26th, 2008 at 8:47 PM

Here’s another great headline for ya:
“Destruction Of Greenhouse Gases Over Tropical Atlantic May Ease Global Warming”: http://www.sciencedaily.com/releases/2008…
Ohhh, that’s such a nice word, “ease”. So this means we can shut up about reducing fossil carbon output, right? :-|

It seems unthinkable, but for the first time in human history, ice is on course to disappear entirely from the North Pole this year.

June 26th, 2008 at 10:37 PM

Better source:

Abstract

Ensemble predictions of arctic sea ice in spring and summer 2008 have been carried out using an ice-ocean model. The ensemble is constructed by using atmospheric forcing from 2001 to 2007 and the September 2007 ice and ocean conditions estimated by the model. The prediction results show that the record low ice cover and the unusually warm ocean surface waters in summer 2007 lead to a substantial reduction in ice thickness in 2008. Up to 1.2 m ice thickness reduction is predicted in a large area of the Canada Basin in both spring and summer of 2008, leading to extraordinarily thin ice in summer 2008. There is a 50% chance that both the Northern Sea Route and the Northwest Passage will be nearly ice free in September 2008. It is not likely there will be another precipitous decline in arctic sea ice extent such as seen in 2007, unless a new atmospheric forcing regime, significantly different from the recent past, occurs.

Received 10 January 2008; accepted 21 March 2008; published 22 April 2008.

cat black:

June 26th, 2008 at 10:40 PM

#17 (simple explanations) The challenge with simplifying complex science is that you have to leave out some of the fundamental background. I could say “increasing CO2 is sure to warm the atmosphere and that is a problem” but that would leave me open to attack: How do you measure CO2? What makes you so sure it is increasing? How much increase is too much? And why is warming a bad thing anyway? Last winter was cold/hot/dry (pick one) and that disproves your thesis. Blah blah. A simple statement is easily made the butt of jokes.

No easy solution: Understand the science, trust the scientific method, teach others what you learn, don’t be fooled again.

June 26th, 2008 at 10:56 PM

Yes, it really is an uphill PR battle out there. Some editors are good, some editors are not so good. And headline writers often write what they wish, without really reading the story or knowing the science. So keep up the PR pressure, yes. Shine a light on all of this.

Meanwhile, why are there no contingency plans (that we know of) in case global warming spins out of control?

Imagine reading this in the future. Note date: 3008:

“US has no plan to cope with collapse of civil society in distant future”

By Staff Reporter
WEBPOSTED: June 25, 3008

A former senior official in the Northern White House has dropped a proverbial bombshell by asserting that the U.S. and other major industrial countries have no coordinated plan to cope with a collapse of civil society in the future due to climate change and global warming, leading to possible mass migrations to northern regions of the globe.

Edward Lendner, who was director of climate issues in a previous White House administration, wrote last week: “In what would be the single most important contingency that could impact civil society in the United States and other nations around the world, there is no agreed upon plan for how to deal with a collapsing world in the distant future if climate change and global warming get out of control and mass migrations northward create chaos in both wealthy and poor countries.”

In response to an email asking: “Why was no plan drawn up when you were in the climate issues office in the White House?” Lendner replied: “There was no lack of planning on the US side. We did our part. There are several secret documents that have been drawn up in response to the collapse of civil society in North America and Europe — in addition to Africa, Asia and South America — but these plans have never been made public and most likely never will be.”

“The main issue is that there is no agreed-upon mechanism for bilateral and multilateral planning (including with China and India, with their huge billion-plus populations), which obviously should be done in advance of such a contingency,” he added.

Pete H.:

June 26th, 2008 at 11:46 PM

Thanks for this post. This is not my field of study and yet I knew enough to feel an intense amount of frustration when I read the headlines and then read the original Nature paper. It must drive you nuts.

Since you seem to be in a debunking mood, would you be interested in commenting on the latest mischief at ClimateAudit regarding GISS dataset corrections? I think it is best you nip that stuff in the bud.

June 27th, 2008 at 5:23 AM

Helvio (7): This ozone paper is one more example (amongst many) where “skeptics” and people who are genuinely confused claim or think that the big picture needs changing, whereas in reality it doesn’t. In reality, the big picture has been remarkably stable for decades (see, e.g., “The Discovery of Global Warming” by Weart, or work by Oreskes). Observing a flying bird doesn’t disprove gravity. Claiming that it does, because there’s something about gravity that you don’t like, can hardly be described as skeptical (hence the quotation marks).
Your eagerness to change the big picture stems from your “skepticism”, rather than you being skeptical because the big picture needs changing.

June 27th, 2008 at 6:10 AM

helvio writes:

Particle physics or astrophysics, two robust science subjects, do not change their big picture every time a new discovery is made,

Neither does climatology. Where did you get the idea that it did?

Nylo:

June 27th, 2008 at 6:30 AM

If zero of the IPCC AR4 climate models use an interactive ozone module, then this is WORSE, not better, than claimed by the author of the paper. It is not that they considered changes in ozone to be too weak, but that they didn’t consider ozone fluctuations at all. The 50% claimed by the author then becomes 100%. We have a negative feedback that has been ignored by the current models, which however include positive feedbacks from other gases like water vapour. How anyone can claim that this shouldn’t affect predictions about the future climate certainly escapes me.

[Response: Much does. The AR4 models did not have an interactive ozone component, but they did have estimates of changes in ozone over the 20th Century – which were calculated using estimates of changes in ozone precursors (CO, VOCs, NOx etc.) from industrial sources. Ozone is a GHG and so this adds to the forcing (as I stated, about 20% of the CO2 amount over that period). But there is no negative feedback here – where did you get that idea? And of course changes in ozone will affect future climate – who said they didn’t? But they will do so as part of the evolving changes in all sorts of forcings – including aerosols and vegetation – that were assumed fixed in the AR4 scenarios. – gavin]

Ray Ladbury:

June 27th, 2008 at 9:18 AM

Helvio, I’ve done experimental particle physics – and you are naive if you think every aspect of it is immutable. Yes, the standard model is pretty solid – at least if they find the Higgs Boson. However, particle masses, strengths of interactions, branching fractions, etc. all affect parameters in the model. The results here are akin to such minor modifications, but the basic structure of the models – along with the main forcers and their strengths – does not change. Reporters talk in terms of “revolutionary discoveries” because it sells. It provides a narrative that makes the story readable. It’s not how science generally works.

Guenter Hess, your presentation of the idealized model of scientific inquiry is basically correct, though I find things never move in quite so orderly a fashion in the real world. What you are missing is the probabilistic/information-theoretic nature of scientific truth. At some point a model becomes sufficiently validated that it becomes “Truth”. The consensus model of Earth’s climate is sufficiently well established, with so many different strands of evidence, that it falls into this category. We will learn much more about climate because there is still much more to learn. Still, the basic structure of the model is unlikely to change dramatically.

Anonymous:

June 27th, 2008 at 10:44 AM

gavin@28: [… And of course changes in ozone will affect future climate – who said they didn’t? But they will do so as part of the evolving changes in all sorts of forcings – including aerosols and vegetation – that were assumed fixed in the AR4 scenarios. …]

How was this omission (“evolving changes … assumed fixed”) justified in order to avoid falsifying the AR4 scenarios?

[Response: Huh? The AR4 scenarios cover a range of forcings from small to very large. Nothing that ozone will do in the future will affect that range appreciably. That’s why we run a range of scenarios. – gavin]

Guenter Hess:

June 27th, 2008 at 11:56 AM

Ray Ladbury, I agree that the big picture about global warming is pretty well established. But it is my opinion that the probabilistic/information-theoretic nature of scientific truth requires Occam’s razor to proceed towards more knowledge.

pod:

June 27th, 2008 at 12:33 PM

In response to Nylo:

Gavin has pointed out that ozone forcings are included in the AR4 scenarios. Ozone feedbacks are excluded because of the computational demands.

If anything, tropospheric ozone chemistry is going to provide a positive climate feedback, at least in the short term. Isoprene, one of the main biogenic VOCs responsible for much of the background ozone production over the continents, is emitted from trees that become stressed by heat and drought. In a warmer world, isoprene emissions look set to increase. This will result in increased ozone production. Add to this the fact that ozone photochemistry becomes more active at warmer temperatures, and this reinforces my point. However, I make no mention of aerosols here…specifically, the production of aerosol associated with oxidation of VOCs and ozone photochemistry, which may offset this positive forcing. Future changes in NOx emissions due to expanding industry/tighter emission controls and/or changes in vegetation distributions may well override this effect, though.

Mark:

June 27th, 2008 at 2:57 PM

Guenter Hess,

How can Occam’s razor be the best way to improve science? Most people think it means “The simplest explanation is probably the right one” and then jump to “it’s not us” as the simplest explanation, forgetting:

a) their version only says “probably”
b) the real (near enough) version is something along the lines of “do not multiply the number of entities in an explanation needlessly” which really doesn’t mean the same thing at all.

(b) is even odder in that most skeptics/deniers claim extra entities as what’s making the problem (“How do you know it’s us? It could be some longer-term change not covered in your models”).

Your message is in any case so lacking in information that what you really mean is left obscure.

Ray’s point can be illustrated with gravity. We KNOW our model is wrong. There are some elements it cannot explain (but should). That’s how we know it to be wrong. However, in all the places we wish to use it, it answers every question to an accuracy that is in retrospect astounding. Even more so the (much more incorrect) classical Newtonian gravity. It was still enough to get our probes billions of miles, passing precise points (and changing path in a way that depends with great sensitivity on the PRECISE arrangement) and getting within thousands of miles of the spot we wanted (maybe hundreds; I don’t think anyone wondered enough to answer the accuracy).

So there we have used a theory we KNOW to be wrong. However, the differences between the better (and theoretical differences from what we hope an even more accurate model would produce) and this wrong theory do not change the outcome.

To bring it back to climate change: it doesn’t matter whether it takes 50 years or 100 years to get to irreversible change if we don’t do something about it. It may change when we can stop. So inaccuracies don’t change what must be done now.

That’s my reading of it, anyhow.

June 27th, 2008 at 4:01 PM

re: #17 & #23 Simple Explanations

A presentation can be a simplified view of the underlying science. Just as a scholarly paper is well footnoted and processes well defined, any presentation can be well linked to the source data.

I would love to see a climate model that can be easily explored by its data definitions. We can do that now, it is just that a site visitor takes on the extra work of searching and following links.

Any simplified presentation could be linked to its source material. This means authoring scientific content should conform to accepted Web style guidelines.

Guenter Hess:

June 27th, 2008 at 4:07 PM

Mark:
Sorry for my bad communication. I don’t think we disagree; the misunderstanding is only that I learned the quote of Occam’s Razor differently:
“If you have to choose between two theories or models that both explain the data or the observation and have not been proven wrong by independent measurements, choose the one with the least amount of independent parameters.”
Newton’s theory of gravity fits that point exactly: you do not need the general theory of relativity in many practical cases, and I hope the same applies to our current best climate models.
I think that can be applied very well to models, theories and simulations, and doesn’t imply right or wrong.
But I admit it usually gets interpreted very wrongly, in the way you described.

Mark:

June 27th, 2008 at 4:47 PM

Guenter,

No worries. I think the version I had is closer to the original wording. The reason I do not think you should be using this is that the denialist will ALWAYS be looking for the guesses and using them to say “see! you don’t know”. So the models have to continue to add more entities to model (though they are not being *needlessly* multiplied, which is where the version I used is different). The issue becomes whether they have a significant effect.

At the moment, nothing is known one way or the other, but the spread used in the IPCC reports should still cover the likely outcomes.

Occam isn’t needed until at least *something* is done to see whether more work is warranted.

Ta.

Guenter Hess:

June 27th, 2008 at 5:07 PM

Mark,
maybe both of us are right. The excerpt from Wikipedia:
“Occam’s razor (sometimes spelled Ockham’s razor) is a principle attributed to the 14th-century English logician and Franciscan friar William of Ockham. The principle states that the explanation of any phenomenon should make as few assumptions as possible, eliminating those that make no difference in the observable predictions of the explanatory hypothesis or theory. The principle is often expressed in Latin as the lex parsimoniae (“law of parsimony” or “law of succinctness”): “entia non sunt multiplicanda praeter necessitatem”, roughly translated as “entities must not be multiplied beyond necessity”.
This is often paraphrased as “All other things being equal, the simplest solution is the best.” In other words, when multiple competing theories are equal in other respects, the principle recommends selecting the theory that introduces the fewest assumptions and postulates the fewest entities. It is in this sense that Occam’s razor is usually understood.”

The “all other things being equal” is the important part. Simplicity alone is of course of no value. Let’s agree to disagree: I think this principle is needed and provides good guidance for scientists and engineers not to stray into the purely speculative. It works, at least in my organisation.

Ray Ladbury:

June 27th, 2008 at 7:00 PM

Mark and Guenter, Actually, there are several versions purporting to be what William of Occam actually said–and of course he said it in Latin, but I think the closest to the recognized form (translated) is: “Entities should not be multiplied beyond necessity.” Information Theory encompasses this rather nicely in the form of the various information criteria (e.g. Akaike Information Criterion), with the likelihood term ensuring fidelity to the data and the term in the number of parameters penalizing excessively complicated models.
Unfortunately, General Relativity isn’t a particularly good example, as it is a dynamical model (motivated by actual physics), not a statistical model (where goodness of fit determines parametric values). General relativity is always the “correct” theory given our current knowledge. Newtonian gravitation is an excellent approximation except in a few notable cases.

Climate models, too, are dynamical models motivated by physics, with parameter values determined independently of the fit to the data. The physical model is then validated.

David B. Benson:

June 27th, 2008 at 7:02 PM

Guenter Hess (37) — Here is a modern interpretation of Ockham’s razor:

For those of us who are scientists in less politicized fields, can you explain the following items? I don’t have the resources to verify the data and can’t find the answers on the internet.

[Response: This is not a sensible reading of the data. All surface temperature and lower atmosphere satellite records (look up MSU 2LT, two versions UAH and RSS) show upward long term trends. Individual year anomalies are highly correlated but short term trends are very noisy and of limited significance. 2008 has been a relatively cool year, due in part to the La Nina event in the Pacific. – gavin]
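Gavin’s point that short-term trends are very noisy while long-term trends are robust can be illustrated with a toy calculation. The numbers below are entirely synthetic (a made-up 0.02 K/yr trend plus made-up interannual noise, not real MSU data), but they show how 8-year trends scatter widely around a stable 30-year trend:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2009)
# Illustrative synthetic series: steady 0.02 K/yr warming plus
# interannual noise (all numbers are assumptions for demonstration).
temps = 0.02 * (years - years[0]) + rng.normal(0.0, 0.15, years.size)

def trend(y, t):
    """Least-squares slope in K per year."""
    return np.polyfit(t, y, 1)[0]

long_term = trend(temps, years)  # full 30-year slope
# Slope of every 8-year window in the record
short = [trend(temps[i:i + 8], years[i:i + 8])
         for i in range(years.size - 8)]

print(f"30-yr trend: {long_term:+.3f} K/yr")
print(f"8-yr trends range from {min(short):+.3f} to {max(short):+.3f} K/yr")
```

The long trend recovers the underlying signal closely, while the short windows swing over a range several times larger than the signal itself — which is why single cool years like 2008 say little about the long-term picture.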

Can you explain the historic changes to nasa temperature data I have seen plotted in other places which demonstrate an obvious increase in slope from past to present as compared to the previously published data.

[Response: I’m not sure what you are referring to, but the source data used by NASA GISTEMP come from NOAA. In the US those data have been extensively corrected to deal with various biases in station location moves, time of observation etc. which has had the net effect of increasing the trend over that seen in the unadjusted data. These corrections are a) well accepted, b) nothing to do with NASA, and c) fully discussed in the literature. – gavin]

I am asking because this type of thing is what creates a significant amount of the denial your readers comment on and I am honestly curious about your reply.

I am a first-time visitor to your website. I have to say that while most of what is presented seems very scientific and carefully thought out, there is an article claiming Al Gore’s movie isn’t politically motivated, which really, really, REALLY stretches your credibility on what is a very important issue.

[Response: I’m sure that Al Gore’s motivations are political. He is after all a politician. But you can have politicians who pay attention to what scientists say, and you can have politicians who stick their fingers in their ears and scream ‘la-la-la I can’t hear you’. Gore is the former, Inhofe (for example) the latter. An Inconvenient Truth was a pretty good popular summary of the science, even if it wasn’t perfect. – gavin]

Still if you can help me understand the discrepancies I would be most appreciative.

June 28th, 2008 at 12:17 AM

Thanks Ray. I didn’t know the origin of Occam’s Razor, but, wow, “Entities should not be multiplied beyond necessity” sounds like a really weird way of saying an explanation with the least number of assumptions is probably closest to the truth. :) It sounds more like one should limit how many kittens they get or something! But maybe I have Occam’s razor all wrong… Anyway, that’s very interesting, thanks.

CoJon:

June 28th, 2008 at 12:39 PM

Thanks Gavin, you are helping me. My first question was more related to a sudden physical separation between the satellite and ground data. The upward trend is without question. I was just curious if anyone had an explanation of the current separation in the data: ground data showing a top-ten warmest year and satellite data showing a bottom-ten coolest year. Sorry about the generality. The discrepancy seems quite large.

[Response: Different range of years I expect. – gavin]

[edit for clarity]
———

Your response below is also helpful but leaves me with another question. I always thought that development of cities and infrastructure around temperature sensors would create an artificially high measurement resulting in a downward correction. Can you tell me the primary mechanism(s) which results in the sensors reading cooler than actual temperature necessitating the upward correction?

Again, this is quite difficult information to locate for someone not immersed in the field on a daily basis.

Thank you again.

[edit for clarity]

[Response: If you move a station from a town center to the airport, it generally imparts a cool bias. If you used to take temperature readings in the afternoon, and then you started to take them in the morning, that would be a cooling factor as well. I recommend the paper I linked above for more details – it really is quite clearly written. – gavin]

CoJon:

June 28th, 2008 at 6:35 PM

[Response: Different range of years I expect. – gavin]

I haven’t seen the latest ground measurement data but it seems that a measurement difference from the coolest to the warmest on record deserves some form of analysis. FYI, I have a complete and at times unfortunate understanding of the noise in datasets. However the current situation seems extreme to me. After all we are talking about ending our way of life. Fossil fuels power this planet and if I am to teach those around me about the truth of our world I want to know it is true.

[Response: What society does (or does not) do about climate change is irrelevant to the issues of observational accuracy. And no policies are being decided upon based on one year of data. The ground data can be seen here or here – both products show the last decade to have been the warmest decade in over a hundred years. Exact year rankings are not very robust due to the uncertainty in the estimate for single years – 1998 and 2005 are roughly statistically tied for warmest year (followed by 2003, 2002, 2006 or 2007, 2002, 2003 depending on the product). In the satellite data (seen here for instance), 1998 is the top year, followed by 2005, 2002, 2003, 2007. Where is the big discrepancy? – gavin]

After all, this issue is highly politicized and we in the public need to question those working for the governments (Hansen) of the world about their motives as you yourself would question a climatologist working for an oil company. For those of us on the outside, we cannot discount either opinion without careful review.

Therefore, I wonder if the most recent ground data is being unduly influenced by politics. I think this is a reasonable question, which may be difficult to answer.

[Response: The data are what they are. – gavin]

If the difference is just data noise on top of the warming signal, I wonder how many times in the satellite record the lower-atmosphere data has deviated below ground measurements by this magnitude.

Regarding the second answer, I can completely agree with your basic statements about temperature; however, the article from Dr. Hansen is missing the supporting data for his conclusions. I understand now that on the surface he is stating the reasons for the corrections; however, the magnitudes of the corrections and the start and end dates for the linear corrections are a bit obfuscated.

I have written papers in another field which have been peer reviewed, only to find mistakes in them at a later date. The acceptance of this Hansen article doesn’t give it accuracy. The universe is a cold mistress that way. Is there something you can add to give me comfort.

Help:

June 28th, 2008 at 6:50 PM

Michael Tobis is engaging over at CA. It looks to me that he could do with a bit of a helping hand.

Martin Vermeer:

June 29th, 2008 at 4:49 AM

#43 CoJon writes:

I have written papers in another field which have been peer reviewed, only to find mistakes in them at a later date. The acceptance of this Hansen article doesn’t give it accuracy. The universe is a cold mistress that way. Is there something you can add to give me comfort.

Ah yes, peer review is only a first test, like a spam filter: there will be false negatives. Still, spam filters are useful. There are many more compilations of surface or lower-tropo temperatures than GISTemp, and many more sources than Hansen, and the overall picture isn’t in question.

A second layer of review is provided by the IPCC process, where the whole body of climate science literature is reviewed and summarized in a unified way, something only someone well at home in the field can do in a meaningful way. It is not perfect either, but it’s the best we have. Think of doctors: there are bad doctors, and good doctors have bad days. Still you don’t want to second-guess the whole medical profession when your health is at stake. At most you would get a second opinion – from another doctor.

Decision making under uncertainty is the norm, in politics as in daily life. You base policy on the best information on offer, uncertainties and all. Waiting for certainty may mean waiting too long.

I feel your pain. If you do what I did, and spend two years on and off getting familiar with the current state of climatology (and if you are teaching this you owe that to yourself), you will finally see who is responsible for the current state of confusion of the public. Hint: it’s not the community of practicing climatologists :-)

Ray Ladbury:

June 29th, 2008 at 6:46 AM

CoJon, It would help if you could tell us your sources for some of the contentions you are making. As I am sure you are aware, it is quite easy to manipulate data to demonstrate or insinuate a lie. Likewise, an analyst inexperienced with the data and the field may inadvertently take data out of context (and here the context is “global” and “climate” [>=30 year trends] and change [again, how things are trending]).

If you listen to the wrong sources, you would think there is even controversy over whether the planet is warming–a contention belied by the data as well as all that melting ice.

trrll:

June 29th, 2008 at 12:34 PM

Thanks Ray. I didn’t know the origin of Occam’s Razor, but, wow, “Entities should not be multiplied beyond necessity” sounds like a really weird way of saying an explanation with the least number of assumptions is probably closest to the truth.

This is a popular misunderstanding of Occam’s Razor. In fact, there is no reason to believe that the simplest model is “likely” to be “closest to the truth” in any real probabilistic sense. If anything, the historical trend is in the other direction, with simple models being discarded in favor of more complex ones.

So Occam’s Razor cannot be seen as any kind of guide to truth. A better way to think of it is as a heuristic rule of thumb for efficiently ordering hypotheses for investigation. The amount of data required to exclude a simple hypothesis is generally less than for a more complex one, so this strategy allows one to progress through the possible hypotheses most efficiently.

Tim McDermott:

June 29th, 2008 at 1:12 PM

CoJon:

The paper with meat (89 references) is here. The four page testimony paper is just a summary of the one linked.

Ray Ladbury:

June 29th, 2008 at 2:22 PM

trrll, I think that information theory provides a good way of interpreting Occam’s razor – increasing complexity is justified only if the more complex model (in terms of number of parameters) better explains the data by an amount exponential in the difference in complexity. Again, as I pointed out, this really only applies to statistical models. However, complicated dynamical models always require a lot more data – both for evaluating parameters and for verification – than simple ones. If you don’t have enough data, here, too, you will probably find that the simpler model has better predictive power.
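The trade-off Ray describes can be sketched numerically. For a least-squares fit with k parameters, AIC reduces to n·ln(RSS/n) + 2k: higher-degree polynomials always shave a little off the residual sum of squares, but the 2k penalty usually outweighs that when the true relationship is simple. This is a toy example on made-up synthetic data (the particular degrees and noise level are arbitrary choices, and any single random draw could pick a different winner):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, x.size)  # data with a truly linear signal

def aic(y, yhat, k):
    """Akaike Information Criterion for a least-squares fit with k parameters."""
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

# Compare polynomial fits of increasing complexity
scores = {}
for deg in (1, 3, 7):
    yhat = np.polyval(np.polyfit(x, y, deg), x)
    scores[deg] = aic(y, yhat, deg + 1)  # deg+1 coefficients to estimate

best = min(scores, key=scores.get)
print({d: round(s, 1) for d, s in scores.items()}, "-> preferred degree:", best)
```

With well-behaved noise the linear model typically scores lowest: the improvement in fit from the extra terms is not “exponential in the difference in complexity,” so the penalty wins.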

trrll:

June 29th, 2008 at 4:12 PM

Ray, the information theoretic approach provides a quantitative criterion for the “necessity” part of Occam’s Razor. Of course, it is impossible in principle to prove that the simpler model is better when the simple model is included as a special case of the more complex one. At best, one can establish confidence limits on the additional parameters of the more complex model.

Ray Ladbury:

June 29th, 2008 at 5:56 PM

Trrll, you have to start with a criterion for “better,” and depending on which criterion you choose, you will get AIC, BIC or some other metric/evidence function for deciding between theories. The nice thing about the information theoretic methods is that they start with concepts that are in some way fundamental – e.g. the Kullback-Leibler distance for AIC, or an optimal posterior probability distribution for BIC.

Tom Dayton:

June 29th, 2008 at 6:57 PM

RE: 51

Ray is right on with having to start with “a criterion” for “better” theory, but I imagine he wouldn’t disagree with me rephrasing that as “… a set of weighted criteria.”

Theories are evaluated against multiple criteria, which include not only the ability to predict, but also the ability to explain in a way that feels satisfying to a person (“explanatory power”); the ability to generate other, (possibly) better theories (“fruitfulness”); and so on.

There is diversity among scientists in how much they weight each of the criteria. Even a given scientist will weight the criteria differently at different times. There is no single, objectively correct, final-for-all-time set of weighted criteria. Often there is no single, objectively correct, evaluation of a theory’s degree of meeting even one criterion. Even the criterion of predictive power depends on precisely how you define the theory and precisely which observations you choose to compare to the theory.

Examples abound, of scientists whose personal judgment of a theory was doggedly more optimistic than the judgments of other scientists, but who turned out eventually to be correct. An example is plate tectonics. But the more (well-informed) scientists who agree in their evaluation of a theory, the higher the probability of that theory being correct.

People who are not actively engaged in the practice of science often react badly to what I’ve just written, because it makes science seem so subjective, arbitrary, random, and untrustworthy. The missing piece is what you can understand only by actively practicing science. The processes, rules, and rules of thumb all work together quite well, in a sometimes unnervingly non-waterfall way, to converge the conclusions on good theory and good observations. An example is plate tectonics.

June 30th, 2008 at 10:52 AM

Well, not exactly spot on the subject, but I have a question about Joyce E. Penner, who recently somehow made a science journalist at Swedish Radio believe that chapters 3 and 9 of the IPCC report say that increased solar irradiance, not greenhouse gases, has contributed most of the energy to the climate system since 1990. I have looked through the chapters, and briefly through her publications, but really can’t find support for their claims. Anyone know what they are on about? (Changes in low cloud cover?)

(and that the recent “dip” in temperature could be due to low sun activity or changes in clouds)

(apparently she also suggested that the sensitivity could probably be less than 1.5)

Best Regards

[Response: This sounds confused. Joyce Penner is one of the top aerosol people and I doubt that she would have got the IPCC report so wrong. I would look to a mistake by the journalist in interpreting her remarks. Without a transcript it’s hard to say what was meant. – gavin]

July 2nd, 2008 at 7:59 AM

Re 53, Solar irradiance or GHG’s being responsible for warming since 1990’s: My guess would be that what was meant is the global dimming – global brightening issue (but I could be way off the mark here):
The hypothesis is that an increase in aerosols (and subsequent decrease in solar radiation reaching the earth’s surface, hence “global dimming”) has masked some of the warming up to the 1980’s, but since then the aerosol loading has largely stabilized or even decreased, thereby having added to the warming by allowing more sunlight to reach the earth’s surface (global brightening). At some point the aerosol loading will stop decreasing further, and then the extra warming that the decrease in aerosol has caused will also stop. There were some interesting presentations about this at the EGU in Vienna. See eg this recent paper: http://www.agu.org/pubs/crossref/2008/2008GL034228.shtml. I do expect some discussion about these results though…

July 2nd, 2008 at 11:11 AM

Hi again,

Yes, that is mostly what it is about, I guess; I only have second-hand information from a mail exchange with the journalist. I have mailed Gavin some details. I don’t know if he has time to look at it, but it would be interesting to hear what he has to say about it… I have seen earlier comments from him in that area. http://www.giss.nasa.gov/research/news/20070315/

Well the Journalist claims:
That chapter 3, pages 277 and 278, of the IPCC report and chapter 9 (3?), page 277 (278), say that the forcing from fewer particles is bigger than the forcing from greenhouse gases.

Chapter 9, page 674 and beyond: page 676, figure 9.3, shows that the decrease in reflected radiation is significantly less than what the models show. The numbers for this forcing, he says, are significantly bigger than the forcing from CO2 (during that time, I guess), and the numbers are in the text; he thinks it is weird that they did not make it into the summary.

He further states that Penner had a poster at AGU 2007 about the fact that the models did not consider the increase in radiation.

Then he mentions Pinker et al. in Science 2005, which supposedly shows that radiation at the surface (fewer particles) has a bigger forcing than the greenhouse gases. Wild et al. had an article on the same theme… there.

He also says that Chylek et al. published a report after the IPCC saying that the greenhouse effect should be at most half of what is supposed today.

He also mentions the “problem” with the models for the tropics, which “should” warm about 3 times more “a bit up” than at the ground if it is greenhouse gases that account for the increased temperature… and that chapter 9, page 267 and onward, doesn’t show that.

I just got his reply to my mail so I haven’t had time to look at it but will try to do it later… thanks for the response.

July 3rd, 2008 at 3:13 AM

Well, just to let you know: I sent a short reply to the reporter when I got his mail, mainly just stating that if you read the IPCC report you also see that there are lots of uncertainties around this, and I also sent the article from NASA. I also said that I can’t find any published support for the claims made in the program. He then sent me this reply late in the evening.

“If you don’t find support for it, blame the IPCC; they only use published data”

“the satellite measurements are global; the increase in radiation mainly comes in 20N to 20S, where most of the energy comes in, but this is also evident for 60S to 60N; see the figure in chapter 9”

“An interesting thing is that this is mainly evident over the ocean, which could point to another explanation than clean air.”

“the ‘radiation balance’ (sorry for the bad translation, but I think you understand): there are a number of articles that back up that it increases, and there are no equally direct measurements that couple greenhouse gases to the increased energy in the system.”

“It is also interesting that the LW ‘radiation’ has increased, not directly what you would expect from an increase in greenhouse gases.”

He then says that he wants to take a break in the communication… (I made sure I had his boss on the mailing list).

But my complaint is not that they talk about the uncertainties of clouds or aerosols, but about the claims that were made in the program. This is one of maybe three really trusted science news sources in Sweden, so I think they should try to build the news on published articles…

1. The higher degree of radiation that got through in the 90s made it hotter, and a decrease now lowers the temperature.

2. It’s not the last years’ lack of warming that is most troublesome; satellite measurements show that it’s not the greenhouse gases that have mainly increased the energy in the system, it’s the increased radiation from the sun (that gets in due to clouds or fewer aerosols). The IPCC report, chapters 3 and 9, says this, but it is not mentioned in the summary.

3. The models predict the recent warming, but the large increase in radiation is not accounted for in the models.

4. The models say that it is more or less the greenhouse gases alone that account for the increased temperature, when they really have had a lot of help from increased radiation.

5. If it is reported in the IPCC report that the climate models are wrong, and that it is increased radiation and not greenhouse gases that could have the biggest effect on the last decades’ warming, should it not have said so also in the summary that was presented to the politicians and public?

6. Among other things, the summary says that the increase in temperature during the 90s was 0.2, which lies within the modelled range of 0.15–0.3 per decade; but according to David Parker at the British weather service, the last years’ cooling has brought the rate of change down to 0.13, i.e. lower than what the models show. Add to this that if about half of the increase in the 90s is due to a change in radiation, the carbon dioxide effect is far smaller than what the models are programmed for.

They then speculate about how low the sensitivity could be.

So I objected to the jump to the conclusion that this is well supported, and said I think it is up for debate what effect this has on the temperatures, and that there are also other things pointing in the other direction, towards higher sensitivity; we just don’t know. And that it is then wrong for a respected science channel to extrapolate and guess what might happen, and then present it as very probable… So I asked for published articles that supported what he said, and I really can’t say that I got any… I will read them more closely later.

So anyone who can help me with what conclusions can be drawn from what the published literature says on the subject is most welcome. For example, how much lower could the sensitivity or forcing be expected to be, or is it lower at all?

Best Regards

Ray Ladbury:

July 3rd, 2008 at 7:31 AM

Magnus, where on Earth (or elsewhere?) is your journalist getting his information? Chapter 3 doesn’t even address cosmic rays! It sounds as if he is just spouting denialist talking points. And for God’s sake, the radiation actually reaching the ground has decreased by nearly 3%. This is what happens when you let ignorant food tubes report on science!

July 3rd, 2008 at 8:44 AM

Yes, well, that is one of his points too, I think: that it increased during the 90s and is now decreasing. So he thinks that fewer aerosols in the 90s contributed to the warmer climate; then, for the 2000s, he says that the radiation decreased (due to a less active sun?) and that this is why we see a “lack” of warming now. This would lead to the conclusion that the CO2 forcing is much smaller than in the models, because they “follow” the temperature without accounting for the “increased inlet of radiation” (clouds or aerosols)…

Now, I could not understand where he got the info from, and it seems to me that most of this is cherry-picking and extrapolating. I don’t know what Penner actually told him, but he said that she said that the sensitivity to a doubling of CO2 could be much less than 1.5…

So this is why I want to know whether there exist any publications that put any strength behind such a low sensitivity…

I also want to apologise for my sometimes bad translations; I hope they don’t confuse you too much…

I’m also sorry that I haven’t had time to look closer into this myself… for example, looking up whether the decrease in radiation during the 2000s could explain lower temperatures (as I understood it before, it was more likely to come from La Niña or some combination), why such a decrease would occur in 2000 if aerosols really have that big an effect, whether it could all be clouds and if so why, how much of the aerosol effect is in the models… and so on.

Ray Ladbury:

July 3rd, 2008 at 11:04 AM

The changes in the 90s were tiny – way too small to explain the temperature change unless you have significant positive feedback. And if there is a significant feedback that applies to insolation, it must also apply to CO2. Thus, trying to explain the warming with solar changes alone would – if it were possible at all – actually result in a higher sensitivity to CO2.
There are >20 GCMs out there. Not one of them has a sensitivity of less than 2 degrees per doubling – that ought to give an indication of the difficulty. Now consider that if someone actually accomplished this, they would probably become the most famous climate scientist on Earth! It must be damn hard!
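The logic above can be made concrete with a zero-dimensional back-of-envelope calculation. All the numbers here are rough illustrative assumptions (not measurements): the point is simply that in an energy-balance picture, equilibrium warming = sensitivity parameter × forcing, and the same sensitivity parameter applies to every forcing, solar or CO2.

```python
# Zero-dimensional energy-balance sketch (illustrative numbers only).
observed_warming = 0.6      # K, rough late-20th-century warming (assumption)
solar_forcing = 0.3         # W/m^2, generous solar-only forcing (assumption)
co2_doubling_forcing = 3.7  # W/m^2, standard value for doubled CO2

# Sensitivity parameter needed if solar changes alone explained the warming:
lam = observed_warming / solar_forcing       # K per (W/m^2)

# The same parameter must then apply to the CO2 forcing:
implied_2xco2 = lam * co2_doubling_forcing   # K per CO2 doubling

print(f"implied sensitivity parameter: {lam:.1f} K/(W/m^2)")
print(f"implied warming per CO2 doubling: {implied_2xco2:.1f} K")
```

With these (deliberately solar-friendly) inputs, a solar-only explanation demands a sensitivity parameter of 2 K per W/m², which in turn implies over 7 K per CO2 doubling – far above, not below, the model range.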

July 3rd, 2008 at 11:54 AM

It sounds like the journalists are definitely extrapolating too much, but the research they seem to base their opinions on is not necessarily from the skeptical point of view. The effects of aerosols are very uncertain, and the global dimming – global brightening issue is one that could indeed have a significant effect, and it is undertaken by serious scientists. The journalists taking such research out of context and extrapolating wildly are off the mark, though.
Some of the points you raise, Magnus: the SPM is a synthesis, and thus not all individual pieces of research need or should be mentioned there. Climate models take all known forcings into account, not just greenhouse gases. See e.g. Hansen et al., Science 2005.

David B. Benson:

July 3rd, 2008 at 1:18 PM

Magnus Westerstrand (59) — In the second paragraph of

there are links to two important (IMO) papers on the determination of climate sensitivity and a discussion paper. I encourage you to start there.

Mark:

July 4th, 2008 at 10:28 AM

[Ray Ladbury Says:
27 June 2008 at 7:00 PM

Mark and Guenter, Actually, there are several versions purporting to be what William of Occam actually said–and of course he said it in Latin, but I think the closest to the recognized form (translated) is: “Entities should not be multiplied beyond necessity.”]

Aye, my sister did Latin and Greek, but that’s the closest I got to it.

And I’d peg my version fairly close to that one:

“do not multiply the number of entities in an explanation needlessly”

Which is (as far as I can see) just saying WHAT constitutes an entity.

So if you have your pink unicorn theory and someone says “well, what about pink unicorn poo?” and you then have to say “it evaporates”, you’ve added another entity to your explanation.

Having to add more and more codicils to your explanation is often a sign you have it wrong. Which is why “Dark Matter” is a hypothesis (at best): because we keep having to fiddle what it does to keep it fitting.

Problem is, we don’t have anything better, which is a downer. But that may show some more basic assumption is untrue (and that constitutes the reasoning for MOND et al).

(see, I’m not a climatologist, I did physics with astrophysics, but I’ve had to do numerical modelling using fluid dynamical equations, so I know how hard it is to get even an answer, never mind manipulate it so it gets a pre-determined answer…)

Hank Roberts:

July 4th, 2008 at 12:01 PM

> “If you don’t find support for it blame IPCC
> they only use published data”
and
> … he said that she said …

I think this journalist has just illustrated _why_ the IPCC use only published data. Because gossip isn’t easy to peer review. Duh.

I hope you contact Dr. Penner directly and invite a response, now that you have the journalist on record in email about what he attributed.

Hank Roberts:

July 4th, 2008 at 12:46 PM

Back on topic: this press release continues to pop up as news at various places I read.

They make the point that direct observation studies are useful — in later news I see mentions of science budget decisions being made. Were the timing and PR language related to

I thought I could blame the journalists but the actual press release says “chemicals … attack the ozone, breaking it down. As the ozone is destroyed, a chemical is produced that attacks and destroys the greenhouse gas methane.”

Chemically imprecise: “breaking … destroyed … attacks ….”
Is the paper any better about mechanisms and results? Or did they just observe less ozone and methane and say it is going missing?