KTH, Stockholm Conference

On Sep 11-12, 2006, KTH (Royal Institute of Technology) in Stockholm, Sweden hosted an international seminar on climate variability (seminar website here). The seminar had 16 speakers from 14 countries and was attended by 120 people. It was organized by Peter Stilbs and Fred Goldberg, who extended great hospitality to the presenters. Anders Flodström, President of KTH, agreed to the seminar and was an impressive figure as convener of the closing panel.

The seminar arose as one of a series of Pro-and-Con seminars sponsored by KTH. In this case, the balance of presenters and audience was non-IPCC. This was not through the fault of the organizers who made diligent efforts to obtain IPCC-types. However, in the end, von Storch, Bengtsson and Kallen ended up being the only "IPCC" presenters. Bert Bolin, former IPCC chairman, attended for part of the Monday session. (He refused to pay a conference registration of about $25 despite being asked for payment – I guess he’s used to expense accounts.)

The purpose of the seminar was not to present new results, but for the speakers to summarize their views for a non-specialist audience. The following notes are not intended to be anything more than a rough impression, and no slight is intended to those whose presentations are treated summarily.

Fred Goldberg is a materials scientist who has been an active "skeptic" in Sweden. He presented an account of historical information on the MWP and Little Ice Age. He showed some results on cloudiness that I had not seen before, illustrated by some interesting paintings. (Fred travels to Svalbard every year and is familiar with the Arctic.)

Wibjörn Karlén is a prominent paleoclimatologist who has published dozens of peer-reviewed articles (curiously, he's a coauthor of Moberg et al 2005). He presented information on variability in the Holocene. He showed the Briffa 2000 reconstruction – which, as I've pointed out here, is much influenced by the Yamal substitution. We chatted afterwards; he's very concerned about the integrity of CRU temperature data and stated that no article involving Philip Jones could be relied on; I asked him if I could quote him on that and he said yes.

Bob Carter is an Australian geologist, who has professionally collected important deep sea cores from sediments offshore New Zealand showing climate variability in Deep Time. He presented on variability over Deep Time emphasizing that variability existed on every scale imaginable, showing what “trends” looked like over 10,000 years, 1000 years, 100 years and 10 years. He described the collection and interpretation of cores investigating northward flows of Antarctic water offshore New Zealand. He closed his presentation with an alarming quote from a reviewer for a grant who stated that grants should not be given to scientists who make public comments of the type that Carter has made. When I see the variability in Carter’s cores, in which centennial variability of the scale of the past century is routine, assertions that the variability over the past century requires anthropogenic influence seem rather over-confident. His PPT is online here.

Then moi. I explained how I got interested in the climate debate and how our present analysis evolved, beginning with the 2003 exchanges. Some aspects of our dialogue with Mann make more sense in this context. I presented a couple of new graphics — one showing the impact of one contaminated proxy on a von Storch-Zorita pseudoproxy network; one on Wahl and Ammann, but I’m finding that this sort of detail doesn’t play very well – not just with this sort of audience, but even highly specialized audiences.

Having said that, my pseudoproxy graphic did get understood immediately by von Storch. Von Storch and Zorita had sent me benchmark pseudoproxy results in the spring. They had reported last year (GRL) that Mannian PCA made "no difference" in a VZ pseudoproxy network in which the pseudoproxies were gridcell temperature plus white noise. In our Reply, we had argued that the pseudoproxy network was an irrelevant test for the impact of Mannian PCA on MBH proxies, but von Storch wasn’t convinced by the reply. A few months ago, they sent me a pseudoproxy run for me to reconcile. I replicated the VZ result on a pseudoproxy network of 55 series (their "region 1") agreeing that Mannian PCA did not have an impact on such a network. However, I showed a graphic illustrating the impact of the introduction of one synthetic nonclimatic hockey stick series on the network – Mannian PCA latched onto the nonclimatic HS and, under some circumstances, even flipped over the actual signal.
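For readers who want to see the effect for themselves, here is a minimal toy sketch of my own construction (not the actual VZ network or the MBH code): 55 white-noise pseudoproxies plus one nonclimatic hockey-stick series, compared under conventional full-period centering and calibration-period-only ("Mannian") short centering. All sizes and amplitudes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_years, n_cal = 581, 79          # e.g. AD 1400-1980 with a 79-year calibration period
# 55 "pseudoproxies" that are pure white noise, plus one nonclimatic hockey stick
X = rng.standard_normal((n_years, 55))
hs = np.zeros(n_years)
hs[-n_cal:] = 2.0                 # flat series with an uptick in the calibration period
X = np.column_stack([X, hs + rng.standard_normal(n_years)])
hs_idx = X.shape[1] - 1

def pc1_weights(data, short_center):
    # short centering ("Mannian") subtracts the calibration-period mean only;
    # conventional PCA subtracts the full-period mean
    mean = data[-n_cal:].mean(axis=0) if short_center else data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    return vt[0]

w_short = pc1_weights(X, short_center=True)
w_full = pc1_weights(X, short_center=False)

print(f"HS share of PC1: short-centered {w_short[hs_idx]**2:.2f}, "
      f"full-centered {w_full[hs_idx]**2:.2f}")
```

Under short centering, the decentered hockey stick carries a large mean offset over the whole pre-calibration period, so PC1 latches onto it; under full centering it is just one noisy series among many.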

We had a nice chat in the afternoon – it was a beautiful sunny day in Stockholm. This is the third occasion this year that we've co-presented: at the National Academy of Sciences, at the House Energy and Commerce Committee and now the KTH Seminar. He thought that my presentation was more relaxed than the previous ones. He got the point of the new graphic instantly; he suggested that I publish it without mentioning bristlecones – as a purely mathematical exercise – although bristlecones will be the unmentioned elephant in the room. I guess one can be prudent in climate science from time to time.

Although my presentation was obviously very critical of the hockey stick, neither Bert Bolin nor Kallen had any questions or comments. The only critical question came from Bengtsson, who claimed that Mann’s error bars covered any problems. I replied that Mann’s error bars were meaningless as they had been calculated on calibration period residuals on an overfitted model and were “not worth the powder to blow them to hell”. Bengtsson did not pursue the matter.
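For readers who want the statistical point made concrete, here is a toy example of my own (not Mann's actual procedure): when a model with many free parameters is fit over a short calibration period, the calibration-period residuals are systematically too small, so error bars built from them understate out-of-sample uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)

n_cal, n_ver, n_pred = 50, 50, 20
# a target series with no real relationship to the predictors at all
y = rng.standard_normal(n_cal + n_ver)
X = rng.standard_normal((n_cal + n_ver, n_pred))

# fit by least squares on the "calibration" period only
beta, *_ = np.linalg.lstsq(X[:n_cal], y[:n_cal], rcond=None)

rmse_cal = np.sqrt(np.mean((y[:n_cal] - X[:n_cal] @ beta) ** 2))
rmse_ver = np.sqrt(np.mean((y[n_cal:] - X[n_cal:] @ beta) ** 2))
print(f"calibration RMSE {rmse_cal:.2f}, verification RMSE {rmse_ver:.2f}")
```

With 20 predictors fit over 50 points, the calibration residuals are deflated by roughly a factor of sqrt(1 - 20/50), while verification errors are inflated – which is why calibration-period error bars on an overfitted model mean so little.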

Von Storch made a classroom-type presentation on detection and attribution. He had little invested in the presentation and pretty much mailed it in, but it was nice of him to show up. He challenged skeptics who thought that solar influence or any other influence was capable of explaining modern warming to do so in the context of a structured climate model — which seems fair enough to me although I wonder what sort of funding and support would be available for such an enterprise.

Carbon Cycle
On Monday afternoon, presentations were broadly on the carbon cycle – Segalstad from Norway, Peter Stilbs of KTH, Jaworowski from Poland, Sakalos from KTH and Richard Courtney from England.

I haven’t thought about the carbon cycle very much, but, thinking back, it was one of the first things that I wondered about and the IPCC answers are not necessarily obvious. The KTH scientists tended to come from a physical chemistry background and to approach carbon cycle issues from that viewpoint.

The connection of increased CO2 in the atmosphere to the burning of fossil fuels is "obvious" to climate scientists. About 6 Gt of carbon are emitted each year as CO2 from burning fossil fuels, and the carbon content of the atmosphere increases by about 3 Gt, with the natural system absorbing roughly half and being unable to absorb the remainder. This has gone on for the entire history of the Mauna Loa measurements, commencing in the 1950s. Increased CO2 levels provide a plausible explanation for modern warming.

What troubles the physical chemists about this simple scenario is that the "natural" CO2 flux in a given year is about 193 Gt, so that the anthropogenic contribution is a relatively small share of the annual flux. Now it's obviously conceivable that the natural sinks can't accommodate the extra few percent, but there's surely no harm in wondering about this topic. Further complicating the situation for the physical chemists is that CO2 solubility in the oceans decreases as the oceans warm. Thus increased warmth would lead to increased atmospheric CO2 levels through a process that is readily understood in physical chemistry. Robert Essenhigh of Ohio State has written about this problem.
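A back-of-envelope tally of these round numbers (the 2.13-Gt-of-carbon-per-ppmv conversion is a standard figure, not something from the presentations, and the exact percentages depend on which flux base one uses):

```python
# rough carbon-cycle bookkeeping with the round numbers quoted above
emissions = 6.0      # Gt of carbon per year from fossil fuels (approximate)
atm_growth = 3.0     # Gt of carbon per year retained in the atmosphere
natural_flux = 193.0 # Gt of carbon per year gross natural exchange (figure quoted above)

airborne_fraction = atm_growth / emissions   # fraction of emissions staying in the air
perturbation = emissions / natural_flux      # anthropogenic share of the gross flux

# standard conversion: ~2.13 Gt of carbon per ppmv of atmospheric CO2
ppm_per_year = atm_growth / 2.13

print(f"airborne fraction {airborne_fraction:.2f}, "
      f"{perturbation:.1%} of gross flux, ~{ppm_per_year:.1f} ppmv/yr")
```

The last figure lines up with the observed Mauna Loa growth rate of roughly 1.5 ppmv per year, which is at least a consistency check on the bookkeeping.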

Here there's an interesting connection with ice core results. In interglacial warm periods, temperature increases have preceded CO2 increases, rather than the opposite. The IPCC position is that the CO2 increases on the Milankovitch scale are a feedback necessary to amplify the very small Milankovitch forcing. On the theory that the physical chemists are pushing towards, one would expect that CO2 levels in ice cores in (say) the warm Eemian interglacial would be similar to modern levels, rather than substantially less, as observed in the Vostok core. With this editorializing, on to the afternoon presentations.

Segalstad of Oslo University led off the afternoon with a general and, as far as I can tell, uncontroversial exposition of the carbon cycle, but closed with the well-known cartoon showing the correlation between the change in bathing suits from bloomers to thongs and global warming. At this point, Bert Bolin exploded at Segalstad (who apparently is a former student of Bolin's), saying that he needed to read a textbook. Bolin announced that he was leaving the conference because it was such garbage. After some efforts to restore order, Bolin sat down for a few minutes and then left, still without paying his entrance fee.

Jaworowski discussed problems measuring CO2 in ice cores. He's been severely criticized and I do not pretend to be familiar with the controversy. He discussed problems of contamination in ice cores, pointing out, for example, downhole penetration of Pb and Zn. He said that ice cores showed sheeting fractures and that CO2 could easily be lost in such fractures. If so, measured CO2 levels in ice cores could have a downward bias (though presumably the higher-frequency results would have some meaning). Afterwards, in conversation with Fred Goldberg, he thought that a useful experiment for a technical university would be to test CO2 behavior in ice core under various high pressure situations. That seemed like a sensible experiment to me — and one worth doing — although I'd be surprised if the results were dramatic. But you never know, and the experiments are either worth doing or worth replicating.

This physical chemistry approach of KTH was well exemplified in an original paper by Sakalos, a young scientist at KTH. He showed that quartz sand has a very large surface area and acts as a catalyst in the oxidation of methane. Because of the enormous amount of quartz sand, he argued that this previously undescribed effect has a material impact on the methane balance.

Richard Courtney, a well-known English skeptic, closed the day with an impassioned presentation about modeling the carbon cycle.

Models
On Tuesday, discussion was broadly on climate models and forcings: two presentations by Willie Soon (one on behalf of Sallie Baliunas), plus Kallén of Sweden, Bengtsson, Marcel Leroux of France and Fred Singer, followed by a panel discussion (von Storch, Bengtsson, Carter and Singer) led by questions from the President of KTH, who was very poised. Hans Erren discussed Arrhenius from a historical point of view.

Willie Soon emphasized the newness of quantitative measurements of the sun and the infancy of our knowledge. There is a common graphic synchronizing the various measurements of solar irradiance; Willie showed an interesting graphic with the unstitched measurements – which are all over the place. He discussed the differing estimates of the amount of change in solar irradiance from the Maunder Minimum to the present – ranging from 0.2% to 0.6%. The $64 question for people seeking to attribute climate change to solar variations is how relatively small changes in irradiance can impact changes in climate. One certainly emerged with the view that understanding of solar flux was, so to speak, in flux.
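For orientation, here is the standard back-of-envelope conversion from an irradiance change to a global-mean radiative forcing (textbook sphere-versus-disc geometry; the albedo value is my assumption, not a figure from the talk):

```python
# convert a Maunder-Minimum-to-present irradiance change into a
# global-mean radiative forcing
S = 1366.0      # total solar irradiance, W/m^2 (approximate)
albedo = 0.3    # planetary albedo (approximate)

for frac in (0.002, 0.006):                  # the 0.2%-0.6% range quoted above
    forcing = frac * S * (1 - albedo) / 4.0  # /4: intercepted disc vs full sphere
    print(f"{frac:.1%} irradiance change -> ~{forcing:.2f} W/m^2 forcing")
```

Even the high end of the range yields well under 2 W/m^2, which is exactly why proposed solar attributions lean on amplification mechanisms rather than irradiance alone.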

He made a second presentation on the impact of solar changes on climate. He's interested in the impact of solar variability on the ITCZ – an extremely interesting issue IMHO. We chatted about this – he'd noticed my post on Kim Cobb's corals, in which I interpreted her MWP results as showing northward ITCZ movement rather than a "cold" Pacific.

Kallen made a presentation on the Arctic Impact Assessment Study. He showed an interesting graphic in which warmth in the Arctic in the 1930s was recognized, but differentiated the post-1970 warming as being much broader. Comparing Arctic warmth in the 1930s with modern warmth was a recurrent theme. He didn’t discuss the Antarctic situation.

Bengtsson presented on climate models starting with their role in numerical weather prediction. Meteorologists take considerable pride in extending prediction accuracy from a couple of days to a week or so. He showed a graph showing the decay in correlation as weather prediction models moved out. I was going to return his confidence interval question by asking him what confidence interval he would place on a model once the correlation got to zero. However, I thought that too many presenters were showing up too often and held my peace.

Marcel Leroux presented on Mobile Polar Highs. While these seemed to be a plausible meteorological phenomenon, I didn't understand what was controversial about them or how they tied in to global warming controversies. Leroux has a new book and I'll read it some time. Marcel Crok has talked to Leroux, and his understanding is that Leroux's view was that some parts of the Arctic were warming and others weren't, and that this could be explained through a change in wind circulation involving the Mobile Polar Highs. I'll send Leroux the δ18O graphic from Mount Logan, which its proponents argue signifies a change in wind circulation.

Hans Erren made a very nice historical presentation on Arrhenius, providing strong evidence that Arrhenius had both fudged his data and misinterpreted effects related to salt prisms as evidence of CO2 absorption. Hans’ talk could definitely be developed into a publication in a history of science journal or even for a more general magazine like NWT or Scientific American.

Fred Singer discussed the recent CCSP report (we discussed the statistical appendix here recently), pointing out what appeared to be a pretty alarming statistical manipulation by Wigley and associates. I'll take a look at this at some point but, if true, it is the type of manipulation that is unfortunately all too common in this field.

A Swedish scientist made a presentation on impacts, but I missed most of this presentation.

The closing panel consisted of von Storch, Bengtsson, Carter and Singer, with Flodström, President of KTH, asking some questions and then turning questions over to the audience. Flodström handled this well. I won't summarize it, other than to mention von Storch's complaint about non-climate scientists butting into climate science – something that doesn't happen in, say, biochemistry or particle physics. He pointed out that KTH was a technical university without any tradition in climate science, but still acted as host to a climate science seminar. Von Storch thought that this phenomenon deserved the interest of sociologists. It's a fair question, which I think can be answered, but won't comment on at this time.

Anyway, I found the conference interesting and appreciated the hospitality.

166 Comments

RE: “At this point, Bert Bolin exploded at Segalstad (who apparently is a former student of Bolin’s), saying that he needed to read a textbook. Bolin announced that he was leaving the conference because it was such garbage. After some efforts to restore order, Bolin sat down for a few minutes and then left, still without paying his entrance fee.”

1. Which presenters raised the hair on the back of your head as needing auditing? Which seemed like cranks or people making arguments without the intellect or knowledge to back it up?

2. If your topic was too difficult for VS to get right away it was likely too technical for the general audience or poorly argued. Or both.

3. I hope you listen to VS on things like the bcps versus math. It is not an issue of avoiding controversy so much as a failure on your part to disaggregate issues: you want to cite multiple problems (for instance methodology versus data input), then you confound them together when one gets to the analysis stage.

The 6 GT of CO2 produced annually – how is that calculated? I have read sources stating that 12% of petroleum products go into “non-fuel” uses, mostly plastics, solvents, lubricants, and other products which remain sequestered. So if 6 GT is calculated from the gross amount of petroleum sold, 5.28 GT is closer to the truth. (720 BILLION tons less, in Dr. Evil-speak)
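For what it's worth, the commenter's arithmetic checks out, taking the 12% figure at face value (and noting that the non-fuel share properly applies only to petroleum, not to coal and gas, so the real correction would be smaller):

```python
gross = 6.0          # Gt per year attributed to fossil fuel burning (figure quoted above)
nonfuel_share = 0.12 # commenter's figure for plastics, solvents, lubricants, etc.

combusted = gross * (1 - nonfuel_share)
print(f"{combusted:.2f}")  # -> 5.28
```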

Another source of argument is that decreasing oceanic pH is caused by increasing carbonic acid, in turn caused by atmospheric diffusion of increasing atmospheric CO2. But warming oceans decrease in CO2 solubility. So where is the increased acid coming from, if not from atmospheric CO2?

Afterwards, in conversation with Fred Goldberg, he thought that a useful experiment for a technical university would be to test CO2 behavior in ice core under various high pressure situations. That seemed like a sensible experiment to me — and one worth doing — although I’d be surprised if the results were dramatic.

I think you both are right here. Just to scratch this off the list as a potential source of error makes it worth doing. Particularly because it doesn’t seem to be too onerous an experiment.

So much for “The science is settled. There is consensus amongst scientists that AGW is happening”.

Thank you for reporting on this seminar. It is interesting how seminars and panels of this nature (eg NAS, Wegman) give opportunity for the issues to be surveyed by a range of contributors, and the areas where there is a certain lack of consensus can be readily seen.

It seems to me that it would be a very good idea to hold a similar seminar to discuss IPCC4 when it is released. Had one been organised after the release of TAR, we might have been able to see the holes in it, particularly the egregious politicisation of the Summary for PolicyMakers.

Of course, there can be problems in securing attendance from representatives from both the AGW and skeptic camps, but the KTH Seminar appears to have been at least partially successful in that. Of course, if the status of the seminar is sufficiently substantial, an invitation to present is a potentially reputation enhancing event difficult for members of either camp to decline.

One of the most interesting points from your discussion (for me at least) was the emphasis on the CO2 cycle, and the evidence that the solubility of CO2 in the oceans decreases as water temperatures rise.

Another very interesting comment was Karlén's comment re Phil Jones and CRU. How Jones et al get away with this is totally beyond me.

Bob Carter’s work also seems very pertinent.

Overall, a rich body of science that poses real questions for the AGW crowd.

Dr Theodor Landscheidt is the man when it comes to very long range forecasting. Too bad he is dead and I don't know who is taking up his work. He correctly forecast the last four El Niños, including the present one, which seemed quite unlikely back in the spring when a La Niña seemed to be coming. He also predicted the last La Niña correctly. These forecasts, and many others made years in advance, are all based on solar activity. He has proved that solar activity (eruptions) has a huge effect on weather and long-term climate. He says a mini ice age is coming based on the Gleissberg cycle of solar activity. He expects a grand minimum of the Maunder type, which would result in a new mini ice age by 2030. I would listen to this man; he has been right so many times it is not by chance.

He has written many papers. One paper even discusses Solar effects on CO2. Isn’t that a kick.

See the link below for forecasts of ENSO events; it is based on Solar System Torque cycles and Golden Sections of the sun's eruptional activity:

Steve, the solar material you cite seems to deal only with irradiance. There is a lot of published (and peer reviewed) material on the idea of the solar magnetic field and solar wind modulating penetration to earth’s atmosphere of cosmic rays, which in turn modulates cloud cover and potentially has a much larger forcing factor than irradiance. Svensmark et al were the original contributors back in the early 90s, and the theory has also drawn a considerable degree of refutation, but I have never seen enough refutation to justify discarding it. I have not followed these arguments for the last couple of years, so was wondering if there was any discussion at Stockholm. Murray

About the carbon cycle, we have already had a few discussions. IMHO the skeptics are wrong on this specific item, and I think the skeptic camp loses scientific credit by denying that the increase of CO2 in the atmosphere is caused by increased burning of fossil fuels.

To recapitulate:

– The close correlation of CO2 and temperature (where CO2 lags temperature) in pre-industrial times gives an 8-10 ppmv change for every change of 1 K in temperature. That means that the recent increase of ~1 K since the LIA can give an increase of at most ~10 ppmv. The measured increase is 90 ppmv since 1850.
– Indeed the previous interglacial (the Eemian) had higher temperatures than current and lower CO2 levels (around 300 ppmv).
– The increase in air concentrations is in lockstep with the use of fossil fuels, though approximately half of the emissions is absorbed by plants and oceans.
– Absorption/release of CO2 by the oceans is not only a matter of temperature, but also of the partial pressure of CO2 in the air. An increase shifts the dynamic equilibrium between absorption and release towards absorption, and is followed by CO2-sequestering biological activity in regions which are normally too cold. But still, a change in ocean temperature (like an El Niño period) is visible as an increased rate of change of CO2 levels (some 6 months after the onset).
– No other explanations are available which show the same smooth increase since 1850. Volcanic CO2 emissions are far too low and too irregular to give such an increase, and there are no known increases in volcanic eruptions or emissions in recent times (increased volcanic CO2 emissions surround eruptions in many cases).
– Last but not least, as CO2 from burning fossil fuels is strongly 13C-depleted (and volcanic CO2 is mostly not), it must (slowly) reduce the 13C/12C ratio of CO2 in the air. Which is what is measured. See Keeling et al. and, e.g., the Mauna Loa graph.
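As a sanity check on two of these bullets, here is a toy calculation. All the numbers are rough and my own, and the simple two-reservoir mixing deliberately ignores exchange with the ocean, which is why it overshoots the observed δ13C shift (to roughly -8 per mil) rather than matching it:

```python
# 1. temperature-driven outgassing alone: ~8-10 ppmv per K
warming_since_LIA = 1.0  # K (figure quoted above)
ppmv_per_K = 10.0        # upper end of the quoted range
t_driven = warming_since_LIA * ppmv_per_K
print(f"T-driven bound: ~{t_driven:.0f} ppmv vs ~90 ppmv observed")

# 2. carbon-13 mass balance: mixing isotopically light fossil CO2 into the air
d13C_atm, d13C_fossil = -7.0, -28.0  # per mil; approximate pre-industrial and fossil values
atm_C = 600.0   # Gt of carbon in the pre-industrial atmosphere (approximate)
added = 200.0   # Gt of fossil carbon retained in the air (rough, ~100 ppmv x 2.13)

d13C_new = (atm_C * d13C_atm + added * d13C_fossil) / (atm_C + added)
print(f"naive mixing shifts d13C from {d13C_atm} to ~{d13C_new:.1f} per mil")
```

The direction of both results is the point: temperature alone explains only a small fraction of the observed CO2 rise, and adding 13C-depleted carbon can only push the atmospheric ratio downward, as measured.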

In terms of the CO2 increase lagging the temperature increases at the end of the ice ages, how much CO2 gets locked up in all that ice?

There is certainly enough ice to lower sea levels by 100 metres. Is it not possible that a lot of CO2 gets frozen into all those glaciers as well? When the ice melts, the CO2 gets released. Just a question I have had for some time.

While I agree that the “fossil-fuel”/CO2 link is a perfectly reasonable theory, I still think the area merits investigation and we should attempt to understand the “carbon cycle” better than we do right now, on the principle that it’s better to understand the specific mechanism of the connection if at all possible. There may be nuances that we don’t realize that could affect other areas.

#6 and Steve M.’s comment — After reading Jaworowski’s objections about ice-core CO2, and going on to read some papers by other authors on the likely chemistry of CO2 in ice, including UV-induced radical reactions, it became obvious that people have blindly gone and treated CO2 in ice as though it were potassium in basalt. I.e., trapped and inert.

Virtually none of the really basic experiments have been done to determine the stability, or its lack, of CO2 in ice under pressure. One part of the deep environment includes veining of highly concentrated brine along with sulfuric acid. The same zone-refining-like processes that concentrate brine and acid also concentrate soluble organics like formaldehyde. There may be a witches brew (dog’s breakfast?) of chemistry that can occur in deep ice, especially over geological time. I don’t think any of that has been properly explored in the lab. Once again, we see climate science rushing in where properly trained scientists should fear to tread.

For example: P. Buford Price (2000) “A habitat for psychrophiles in deep Antarctic ice” PNAS 97, 1247-1251 “Microbes, some of which may be viable, have been found in ice cores drilled at Vostok Station at depths down to approx 3,600 m, close to the surface of the huge subglacial Lake Vostok. Two types of ice have been found. The upper 3,500 m comprises glacial ice containing traces of nutrients of aeolian origin including sulfuric acid, nitric acid, methanosulfonic acid (MSA), formic acid, sea salts, and mineral grains.“

Jaworowski also talked quite a bit about what happens when you drill ice under pressure – arguing that pressure differences are created between the host ice and the core which affect the core, and also that the drilling fluids impact the core – that was the point of the Pb, Zn discussion.

– The close correlation of CO2 and temperature (where CO2 lags temperature) in pre-industrial times gives a 8-10 ppmv change for every change of 1 K in temperature. That means that the recent increase of ~1 K since the LIA at most can give an increase of ~10 ppmv. The measured increase is 90 ppmv since 1850.

This is something I still have trouble understanding in the climate argument: if we do not have a reasonably accurate historical temperature record, how can we claim a correlation between CO2 concentration and temperature? Is it the case that we do have an accurate historical temperature record? (And that’s before getting on to any causal relationship between CO2 and temperature.)

Umm, Jarworski is generally condisdered a nutter by glaciologists. His theories have been looked at an discarded. Like Ou. Or the solar people, until they can get better evidence together or just give up.

Steve M. is considered a “nutter” by a whole raft of climate scientists, as am I (by a smaller raft), and Wegener was considered a nutter by geologists, and the Australian scientists were considered nutters by the medical establishment for daring to suggest that ulcers were caused by bacteria …

I’m sure you get the point. All you’re doing is appealing to consensus, which just makes folks here consider you a nutter … you might take a look at http://www.answers.com/topic/zbigniew-jaworowski for a bit more info. Anybody that Stephen Schneider thinks is terrible gets my vote … and I loved the irony of this quote from Schneider:

Jaworowski is perhaps even more contrarian than most, claiming that he can prove the climate is going to get colder through his work excavating glaciers on six different continents, which he says indicates what we should really be worrying about is ‘The approaching new Ice Age…’.

w.

PS – you also say:

Oh, and Pat – what is the second type of ice? Link please.

Pat himself didn’t say anything about a second type of ice, his carefully cited quote did … per his citation, said quote was P. Buford Price (2000) “A habitat for psychrophiles in deep Antarctic ice” PNAS 97, 1247-1251. If that’s not enough link for you, go to Google and do a Google search on “How to use Google” …

“If that’s not enough link for you, go to Google and do a Google search on “How to use Google” … ”

Funny off topic story.

A few years ago when I was working for Segway I had a call from Sergey Brin’s personal secretary. There were a few issues to clear up, did that. Then she asked me about a new model, E-Machine, and what it looked like.

In the area of reconstruction statistics, I do not think that Steve is considered a nutter anymore. He has shown that Mann made an error. He has not shown that the error actually made a difference. The BCP stuff is debatable, the incorrect centering of the PCA is not.

So, let me make this plain: I do not consider Steve a "nutter" on the issue of how Mann did his PCA. Wegman showed not that it mined for hockey sticks, but that it mined for signal. In the end this did not make a difference, because a lot of the proxies he used had a valid signal, as demonstrated by the NRC report. As far as Faraday was concerned, he was making an, as it turned out, accurate conjecture. I watched the same show tonight, and Einstein was a horny dude who also postulated theories which were initially thought crazy. But physics and climatology are very different subjects, and statistical mistakes are not quite enough to disprove a theory which is backed up by physics: see Tyndall, since Hans thinks Arrhenius screwed up badly.

In Steve's summary of the C cycle, I wondered why, if you were a denier, you would embrace the (known) fact that CO2 absorption goes down as the acidity of the oceans goes up? This makes things worse!! I've seen other deniers say that the recent Siberian results (sorry, don't have a link – they have been all over the news lately) about the increased rate of CH4 release from permafrost make AGW any less valid. It would seem to me that this is a feedback which is proceeding at a rate 5x higher than we predicted.

Just what is your alternative theory? The data on solar forcing does not seem to work, and volcanic forcing has been well accounted for. What else is left? Please let me know, because the natural forcings which we are aware of do not account for the observed anomalies.

Re # 3 MichaelS
Your point about economists being enthusiastic about AGW is well made. We have a particularly enthusiastic one here in Australia. Despite predicting 11 out of the last 3 recessions, and not being able to predict next year's interest rates or commodity prices (oil price, anyone?), they have complete faith in climate model predictions many years into the future.
Now that the science is settled(!), let's talk about solutions (carbon tax).

“This makes things worse!! I’ve seen other deniers say that the recent Siberian results (sorry, don’t have a link – they have been all over the news lately) about the increased rate of CH4 release from permafrost make AGW any less valid. It would seem to me that this is a feedback which is proceeding at a rate 5x higher than we predicted.”

To reiterate a question I posed a few days ago in a separate context.

First 13-C and 12-C
The 13C isotope is stable and heavier than the more common form of carbon (12C), and plants tend to selectively assimilate the lighter isotopes during the photosynthetic process. This results in the following features of the 13C/12C ratio in the atmosphere: (1) a seasonal cycle occurs with the heavier isotope at relatively high concentrations during the summer, as plants selectively remove the lighter isotope from the atmosphere, and (2) a general decrease with time, as more fossil carbon (which originally was plant material, and consequently biased toward the lighter isotope) is injected into the atmosphere from the combustion of fossil fuels. So as more and more fossil fuels are burned, there should be a measure of relative proportion available from isotopic abundances.

Now 14-C and 14-CO2
One of the longest isotopic records available supports the overall decline in atmospheric 14CO2 reported by most groups. The 14C record for Wellington in 1965 shows a peak occurring roughly 1 year later than that observed in the Northern Hemisphere (Nydal and Lovseth 1983; Levin et al. 1985). Manning et al. (1990) reported a seasonal component for the Wellington record and a cycle of decreasing amplitude for the seasonal component. Manning et al. (1990) reported that up to 1980, the cycle had a maximum in March and a minimum in August along with a negative anomaly in December. The amplitude of the cycle for this period decreased steadily from a peak-to-peak range of 20 per mil in 1966 to 3 per mil in 1980. From 1980 onwards, Manning et al. (1990) found that a different cycle emerged: the new cycle had an amplitude of ~5 per mil, a maximum in July-August, and a minimum in January. Of course the half-life of 14-C is long for us but not long for fossil fuel timescales, this needs taking into account but could be hacked easily enough. Unless…

We Have a Problem (Houston)
The problem with all of this is that the basis for the selection of carbon isotopes (presumably the 3% difference in reaction rate due to a primary kinetic isotope effect, this figure for 13-C and 12-C) and the existence of relative amounts of each form of carbon, can only be relied on in this way if there are no other processes interfering with the abundances. And there are, or rather there is. Cosmic ray flux, hello! This is assumed, even in carbon dating, to be constant, whereas it isn’t and not by a long white piece of fossilised chalk, or even a short green true believer. I have not, as yet, seen anywhere, any analysis of how the changes in cosmic ray flux impact quantitatively on the processes we’re discussing and the abundances being measured – those used to draw conclusions about carbon in the atmosphere. So the whole enchilada is still open as far as I’m concerned. However if it’s been analysed quantitatively by a group somewhere and someone has the reference, please point me in the right direction.

Anyway, the effect of doubling CO2 to 560 ppmv is less than 0.2C without a positive feedback via increased water vapour – there is peer-reviewed evidence that there is no positive feedback:

#26 — As Willis pointed out in #27, I mentioned no second sort of ice. The second sort of ice is mentioned in the rest of the abstract, if you care to read it. It was irrelevant to the point, but concerned ice closer to the Lake Vostok boundary.

I’m also very tired of personal attacks (“Jarworski is generally considered a nutter” — the least you could do is get his name right: Jaworowski). Try refuting his arguments for once. Note that, as Willis also pointed out, I didn’t even cite Jaworowski.

Abstract: The presence of snow greatly perturbs the composition of near-surface polar air, and the higher concentrations of hydroxyl radicals (OH) observed result in a greater oxidative capacity of the lower atmosphere. Emissions of nitrogen oxides, nitrous acid, light aldehydes, acetone, and molecular halogens have also been detected. Photolysis of nitrate ions contained in the snow appears to play an important role in creating these perturbations. OH [radical] formed in the snowpack can oxidize organic matter and halide ions in the snow, producing carbonyl compounds and halogens that are released to the atmosphere or incorporated into snow crystals. These reactions modify the composition of the snow, of the interstitial air, and of the overlying atmosphere. Reconstructing the composition of past atmospheres from ice-core analyses may therefore require complex corrections and modeling for reactive species.

Do you think it’s possible that halide radicals and organic matter incorporated into snow crystals can become part of the deep ice and affect the chemistry of trapped CO2 across millennia? Apart from the sulfuric acid and mineral brines, I mean.

As a more general caution regarding your dismissal of anything-but-CO2 forcing (the ABC of AGW), you do understand, don’t you, that in a chaotic climate system such as that of Earth, climate changes can occur without any changes at all in external forcings? Earth climate probably has a large variety of quasi-stable states — both deeper and more shallow strange attractors — among which it can migrate without any jumps or non-zero-slope trends in external forcing.

I have not posted to here before, but I write to comment on #14 where it says;
“IMHO the skeptics are wrong on this specific item and I think that the skeptic camp looses scientific credit by denying that the increase of CO2 in the atmosphere is not caused by increased burning of fossil fuels.”

I am willing to provide a copy of the paper I presented at KTH to those who want it (it is ~3MB so I only want to send it once to a distribution list). It is based on the paper by Rorsch, Courtney & Thoenes, E&E (2005) and begins with;
1. Introduction

It is commonly assumed that the rise in atmospheric carbon dioxide (CO2) concentration during the twentieth century (approx. 30% rise) is a result of anthropogenic emissions of CO2 (1,2,3). However, the annual pulse of anthropogenic CO2 into the atmosphere should relate to the annual increase of CO2 in the atmosphere if one is causal of the other, but their variations greatly differ from year to year (4) (see Figure 1).

Figure 1. Annual human emission (Fem) and the measured flow of carbon dioxide into the atmosphere (Fa) in GtC/y (4)

For some reason, the Figure does not print here.

If we had wanted merely to show that there is no causal link between the anthropogenic emission and atmospheric CO2 concentration then this graph would have been sufficient. In some years almost all the anthropogenic emission is sequestered into the sinks and in other years almost none.
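The year-to-year comparison being described amounts to an “airborne fraction” calculation: the share of each year’s emission that shows up as an atmospheric increase. A sketch with invented numbers, purely to illustrate the variability claimed; the only real constant is the standard ~2.13 GtC per ppmv conversion:

```python
# Sketch of the year-by-year comparison described above.  Emission and
# concentration numbers are invented, for illustration only.

PPM_TO_GTC = 2.13  # ~2.13 GtC of atmospheric carbon per 1 ppmv of CO2

def airborne_fraction(emission_gtc, delta_co2_ppm):
    """Share of a year's anthropogenic emission that remains in the air."""
    return (delta_co2_ppm * PPM_TO_GTC) / emission_gtc

# The same emission in three hypothetical years gives wildly different
# airborne fractions when the measured annual CO2 rise differs:
for dppm in (0.5, 1.5, 2.5):
    print(round(airborne_fraction(6.0, dppm), 2))
```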

But, as I explained in Stockholm, we wanted to determine if there could be such a causal link and we found such a way.

In Rorsch et al. (2005) we suggest that – for some reason – the equilibrium of the carbon cycle may have changed. If so, then the observed rising atmospheric CO2 concentration is an indication of the carbon cycle adjusting to the new equilibrium state. The short term sequestration processes of the carbon cycle can easily adapt to sequester the anthropogenic emission in each year (as our paper demonstrates). But some processes of the carbon cycle have rate constants of years and decades and, therefore, the carbon cycle would take decades to stabilise at a new equilibrium condition. And these rate constants are very likely to be affected by temperature. Hence, the rate of the adjustment to the new equilibrium would depend on the mean global temperature. Therefore, the rise of the atmospheric CO2 concentration in a year would depend on the mean global temperature of that year (as is observed, i.e. Calder’s ‘CO2 thermometer’).

The ‘CO2 thermometer’ of Nigel Calder calculates the temperature anomaly of each year from the increase of the atmospheric CO2 concentration over each corresponding year. On face value, this ‘CO2 thermometer’ is very strange: one could expect the concentration of CO2 in the air to be related to the mean global temperature, but the ‘CO2 thermometer’ indicates that for each year the annual rise of the concentration relates to the mean global temperature, and this rise is not affected by the anthropogenic emission. However, the suggested adjustment to a new equilibrium provides an explanation of Calder’s ‘CO2 thermometer’. And it also provides an explanation of why the accumulation of CO2 in the atmosphere continues when in two subsequent years the anthropogenic flux into the atmosphere decreases (this happened, for example, in the years 1973-1974, 1987-1988, and 1998-1999). The short term sequestration processes of the carbon cycle can easily adapt to sequester the anthropogenic emission in each year, and the adjustment (by other parts of the carbon cycle) to the new equilibrium is not affected by this sequestration.
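The ‘CO2 thermometer’ amounts to a linear fit of each year’s CO2 rise against that year’s temperature anomaly, which can then be inverted to read a temperature off an observed rise. A minimal sketch with synthetic data; this is not Calder’s actual calibration, just the form of the calculation:

```python
# Sketch of the 'CO2 thermometer' idea described above: regress each year's
# rise in CO2 concentration on that year's global mean temperature anomaly.
# All data here are synthetic, for illustration only.

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

temp_anom = [0.1, 0.3, -0.1, 0.4, 0.2]   # degC anomalies, invented
dco2      = [1.2, 1.8, 0.6, 2.1, 1.5]    # annual CO2 rise in ppmv, invented

slope, intercept = linear_fit(temp_anom, dco2)
# The fitted slope is the 'thermometer' calibration (ppmv per degC);
# inverting it turns an observed annual CO2 rise back into a temperature:
est_temp = (2.0 - intercept) / slope
```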

It should be noted that no explanation for either of these observed effects existed before Rorsch et al. (2005), and the explanation in Rorsch et al. (2005) is still the only explanation evinced for either of them. Importantly, these observed effects are confirmatory evidence for our suggestion of a change having occurred to the equilibrium state of the carbon cycle.

As I said in Stockholm, using this suggestion we used six different models of the carbon cycle that all match the observed increase of atmospheric CO2 concentration on a year-by-year basis very well. It is important to note that we did not use any ‘fiddle factors’ such as the 5-year-averaging used by the IPCC (that cannot be justified because there is no known physical mechanism that would have such an effect). Three of the models assumed the anthropogenic emission was solely responsible for the increase in the concentration, and the other three assumed a natural cause was mostly responsible. Hence, there can be no certainty as to the actual cause.

Why should such a change to the equilibrium state of the carbon cycle have happened?
Well, there are several possible reasons. The anthropogenic emission of CO2 is one possibility, but other possibilities are more likely. In Rorsch et al. (2005) we suggest that the change is most likely to be a result of the rise in mean global temperature (the delay of ~50 years in the response of the carbon cycle to temperature change is not surprising). And I also repeated this suggestion in Stockholm.

“In Rorsch et al. (2005) we suggest that the change is most likely to be a result of the rise in mean global temperature (the delay of ~50 years in the response of the carbon cycle to temperature change is not surprising).”

Am I missing something? Atmospheric CO2 started to increase around 1850; with a 50-year lag in the response of the carbon cycle, that would put the disturbance to the cycle around 1800, well before global temperatures started to increase. So how can global temperature increase be the cause of the rise in CO2 which preceded it?

“In Rorsch et al. (2005) we suggest that the change is most likely to be a result of the rise in mean global temperature (the delay of ~50 years in the response of the carbon cycle to temperature change is not surprising).”

Shouldn’t this mean that the rise of CO2 levels should stop very soon, as the mean temperature (according to any reconstruction) is (at least) roughly flat in the period 1940-1980?

Dear Richard, your effect only shows up for Mauna Loa, where it is clearly the effect of the CO2 thermometer as discovered by Jarl Ahlbeck.
Have a look at other sites where the oceanic temperature effect is absent. Also there is a clear gradient from land to sea indicating that the CO2 source is on land.

As for the InterGOVERNMENTAL Panel on Climate Change – the clue is in the name.

Words belonging to someone else below, but I couldn’t say it better:

The IPCC is a propaganda exercise for the supporters of the theory that
greenhouse gases have harmful effects on the climate. The Editors and
Lead Authors are carefully chosen for their known advocacy, and just to
make sure, the “Summary for Policymakers” which is the only part most
people read, is agreed line-by-line by Government representatives.
Despite all these precautions they have never made a firm commitment to
their theory. A typical statement is the notorious “The balance of the
evidence suggests a discernible human influence on the global climate”.
Note, this is only a “suggestion” (by whom?) and it does not mention
greenhouse gases.

The “balance of the evidence” is distorted throughout the volumes, and
their treatment of solar influences is typical. Most of the relevant
published papers are mentioned, but any that suggest that the sun’s
influence is important are marginalised or deflated. When in doubt they
leave really challenging papers out altogether.

They have the support of most of the important Journals in this
exercise. Papers which emphasize the importance of the sun are sometimes
published, but they often insert a phrase in the title which discourages
readers from finding evidence in favour of the sun’s influence. Many of
the editors are environmental activists.

Not many people seem to bother with the actual IPCC reports. They are
voluminous and require hard work to oppose. However, they are the source
of most scientific argument in favour of “climate change” and they
deserve more attention from scientists.

Earth climate probably has a large variety of quasi-stable states – both deeper and more shallow strange attractors – among which it can migrate without any jumps or non-zero-slope trends in external forcing.

This is a terrific point, Pat. If anyone thinks this sounds outlandish, consider that the human heart, the most well-behaved organ of all, has such quasi-stable states. It is only within recent decades that we recognize them for what they are, but they exist and are important to understand. It is reasonable to expect a far more open chaotic system like Earth’s climate contains many more such states. The arrogance of the warmers on this issue (as with feedbacks & trends …) is remarkable.

Re #55
Paul, #56 was intended as an invitation to JMS in #30 to reply to Pat in #35 on this specific point. Fact is: there are plenty of warmers who don’t know what quasi-stable states are. Similarly, there are plenty of people who don’t realize the human heart can be plunged into quasi-stable states other than the one we enjoy most of the time. There are warmers who will understand the human circulatory system, but will be unfamiliar with Earth’s circulatory system, or will not recognize where there may be similarities.

My point is that in both systems there is a lot of interpretible complexity that is hidden to the casual observer, but which is accessible to the careful analyst. Nothing more.

Thank you for the interest in my posting to this blog. I answer each point in turn.

#43 and #44. Steve my paper is on the disc supplied by KTH at Stockholm so I strongly suspect you do have a copy. However, if not and if you did want me to post it up at “esnips.com or some such” then I would like to be informed.

#48. I think it does imply that the system will soon adjust to the lower temperatures between ~1940 and 1970. If it does not then that will be a proof that the suggestion of the temperature rise with ~50-year-lag being the cause of the increased atmospheric CO2 concentration is not correct.

#52. Perhaps it does only show for Mauna Loa. Indeed, we only modeled the Mauna Loa data. As I said at Stockholm:
“Determination of the constants in the three models
It should first be noted that there are few available empirical data that can be trusted. In fact, these are limited to the observed increase of the concentration of CO2 in the atmosphere, well recorded at Mauna Loa since 1958 (see Figure 2). The annual flow Fa into the atmosphere can be derived from this (see Figure 1). Second best are the data collected by cdiac.ornl on the human emission (Fem ), but they may be an underestimate if nations have not provided the correct figures.”
Other time series also exist, and I showed some in a Figure; i.e.
Figure 2. Rise and fall of carbon dioxide concentration in the atmosphere at four sites, Mauna Loa Hawaii, Estevan Canada, Alert Canada, Shetland Islands.
Here three years are selected from the long term graph 1991-2000, C.D. Keeling and T.P. Whorf, “On line trends”, cdiac.ornl (4)
However, Mauna Loa is the longest and generally most respected by those who believe in AGW (e.g. IPCC). Either variability of land sources of CO2 tends to confuse the data from other measurement sites, or the ocean ‘smooths’ the data at Mauna Loa.

However, the lack of correlation between the annual anthropogenic CO2 emission and the annual increase in atmospheric CO2 concentration is seen everywhere. Indeed, this is why the IPCC uses its unjustifiable 5-year-running-mean to make an apparent fit between these two parameters. Our model requires no such fiddle-factors.
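The objection to the running mean can be shown with a toy example: smoothing two series that share only a slow trend raises their correlation, averaging away exactly the year-to-year mismatch at issue. A sketch, with deterministic wiggles standing in for the unrelated annual variability:

```python
import math

def running_mean(x, w=5):
    """Centered w-point running mean (series shortened at the edges)."""
    return [sum(x[i:i + w]) / w for i in range(len(x) - w + 1)]

def corr(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

# Two series sharing only a slow trend, plus unrelated year-to-year wiggles:
t = list(range(30))
a = [i + 5 * math.sin(1.7 * i + 0.3) for i in t]
b = [i + 5 * math.sin(2.3 * i + 1.1) for i in t]

r_raw    = corr(a, b)
r_smooth = corr(running_mean(a), running_mean(b))
# Smoothing suppresses the unrelated wiggles, so r_smooth > r_raw: the
# apparent fit improves even though the annual mismatch is unchanged.
```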

#12/Landscheidt’s work: Amazing papers. It looks like an example of overfitting that came up with a model that is working (bender?). How can someone look at that work and say that solar forcings aren’t major?

Re #64
Thanks for the invitation to comment on that pile of work, jae. As Bloom suggests in #37, it would take quite some time for me to do an evaluation of all that, especially as I’m no “astrologist”. [I’m still wading through the backlog of requests on Emanuel.]

What I will say as a prelude is that if you follow pro or college sports in the US, you know that every few years a new guru emerges who appears to have the magic/genius insight that everyone is craving. He gets national exposure. He makes a surprising number of incorrect predictions, and is tossed out by the media, with someone else to conveniently replace him. Same thing in the business world. Same thing among political pundits. Convince me that Landscheidt and his models are not just another instance of this: overfit models that are dredged up from a veritable ocean of possibilities, and thus ripe for failure.

Does he possess special insight that nobody else does, or has he been lucky? That is the question. You can guess what my “priors” are.

bender, how many events does the model have to predict correctly before you start getting comfortable with it? He has correctly predicted four El Ninos and at least one La Nina, so far. That does not seem to be very probable, without some solid truth behind the model.

Re #66
Tell me this, jae. Why are so few of these “amazing” Landscheidt works published in the primary literature? And don’t tell me it’s the evil AGW cabal that’s been suppressing them, because a lot of these are from the early 1980s – long before the alleged cabal allegedly got hold of power. Also, could you tell me where I could get a turnkey script showing exactly how Landscheidt’s forecasts are generated? I’d like to see exactly how it works.

As for your question: “How many events?” Very good question. I will say this: whatever that number is, it is proportional to the number of people out there trying to generate such forecasts. Bonferroni again. In the case of sports prediction or political or economic punditry, you have thousands of these clowns (why do they wear bow-ties? and coloured glasses?), so I would want to see very long strings of correct and unlikely (in terms of Bayesian priors) predictions before I bothered listening very closely to their predictions. In the case of climate science, there are significantly fewer pundits out there willing to go out on a limb. Am I dismissing Landscheidt through an ad hom attack? No. I am merely providing an explanatory prelude as to my skeptical “priors”.

Jeff, a rough calculation of total ice volume during the glacials (~70 million km3) yields a volume of ~7 million km3 of air (10% at near surface pressure). Or at ~1 Mt/km3 (surface pressure) a total of 7×10^12 ton of air. The total air weight of the atmosphere is 5×10^15 ton or 3 orders of magnitude higher. Further it is air with some traces of CO2 which is included in the ice, not CO2 alone. Thus any difference in CO2 levels of old air enclosed in the ice vs. ambient air would induce a negligible difference…
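The back-of-envelope numbers above can be spelled out directly, using only the figures given in the comment (not independently sourced):

```python
# Checking the rough calculation above: air trapped in glacial ice vs. the
# total atmosphere.  All figures are as stated in the comment.

ICE_VOLUME_KM3 = 70e6    # glacial-maximum ice volume, ~70 million km3
AIR_FRACTION   = 0.10    # ~10% enclosed air at near-surface pressure
AIR_T_PER_KM3  = 1e6     # ~1 Mt/km3 = 1e6 tonnes of air per km3 at the surface
ATMOSPHERE_T   = 5e15    # total mass of the atmosphere, tonnes

air_in_ice_km3 = ICE_VOLUME_KM3 * AIR_FRACTION      # ~7e6 km3
air_in_ice_t   = air_in_ice_km3 * AIR_T_PER_KM3     # ~7e12 tonnes
ratio          = ATMOSPHERE_T / air_in_ice_t        # ~700x, i.e. ~3 orders
print(f"air trapped in ice: {air_in_ice_t:.1e} t, "
      f"{ratio:.0f}x less than the whole atmosphere")
```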

RE: # 64 – or are they, filtered solar forcings. Filtered by a semi chaotic filter where both the spectral characteristics and attenuation factor at any given frequency are ever changing. And what is the filter? I’ll leave it with that riddle.

Nicholas, I agree that much needs to be investigated in the carbon cycle, like the biogenic processes which are responsible for a huge part (50%) of the current sink of the extra carbon released by humans. And the real impact of CO2 on temperature. The past 420,000 years (and beyond) in the ice core records are not a good guide, as these show the impact of temperature on CO2 levels, not the impact of CO2 on temperature…

Re #69
“The filter” would be all that which lies between earth and sun. JMS, however, seems to assert in #30 that there is nothing unknown or unsurprising that could be produced by this filter. Steve S, don’t leave us with a riddle. Give us a concrete statement. A list of papers. Anything is better than nothing.

The Earth receives a continuous influx of energy from the Sun. Some of this energy is absorbed at the Earth’s surface or by the atmosphere, while some is reflected back to space. At the same time, the Earth and its atmosphere emit energy to space, resulting in an approximate balance between energy received and energy lost. Knowledge of the natural and anthropogenic processes that affect this energy balance is critical for understanding how Earth’s climate has changed in the past and will change in the future.

In order to advance understanding of this issue, the U.S. Climate Change Science Program asked the National Academies to examine the current state of knowledge of how the energy balance regulating Earth’s climate is modified by “forcings” including gases and aerosols, land use, and solar variability and to identify relevant research needs (see Appendix B for the full statement of task). This report provides the consensus view of the committee that was formed to undertake the study. In this report, the committee presents specific recommendations for expanding current radiative forcing concepts and metrics and outlines research priorities for exploiting these concepts and metrics as tools for climate change research and policy.

………..

2 State of Scientific Understanding

In this chapter the state of understanding of radiative forcing from individual agents is reviewed. Over the past 15 years, the Intergovernmental Panel on Climate Change (IPCC) has produced assessments in which at least one chapter has been devoted to a thorough review of current understanding about radiative forcings. The discussions here summarize the findings of the IPCC’s Third Assessment Report (IPCC, 2001) and scientific advances since it was published.

The Third Assessment Report (IPCC, 2001) includes a summary figure of the global and annual mean radiative forcings from 1750 to 2000 due to a range of perturbations (Figure 2-1), including the well-mixed greenhouse gases, ozone, aerosols, aviation effects on clouds, land use, and the Sun. The largest positive forcing (warming) since 1750 is associated with the increase of the well-mixed greenhouse gases (carbon dioxide [CO2]; nitrous oxide [N2O]; methane [CH4]; and chlorofluorocarbons [CFCs]) and amounts to 2.4 W m-2. The greatest uncertainty in Figure 2-1 is associated with the direct and indirect radiative effects of aerosols. If the actual negative forcing from aerosols were at the high end (most negative) of the uncertainty range, then it would have offset essentially all of the positive forcing due to greenhouse gases (see also Boucher and Haywood, 2001).

According to the IPCC definition, applied to the data in Figure 2-1, “The radiative forcing of the surface-troposphere system due to the perturbation in or the introduction of an agent is the change in net irradiance at the tropopause after allowing for stratospheric temperatures to readjust to radiative equilibrium, but with the surface and tropospheric temperatures and state held fixed at the unperturbed values.” This definition of forcing is

…………..

4 Rethinking the Global Radiative Forcing Concept

The current global mean top-of-the-atmosphere (TOA) radiative forcing concept with adjusted stratospheric temperatures has both strengths and limitations. The concept has been used extensively in the climate research literature over the past decades and has also become a standard tool for policy analysis endorsed by the Intergovernmental Panel on Climate Change (IPCC). The concept should be retained as a standard metric in future climate research and policy. However, it also has significant limitations that have been revealed by recent research on forcing agents that are not conventionally considered and by regional studies. Also, it diagnoses only one measure of climate change (equilibrium response of global mean surface temperature). The committee believes that these limitations can be addressed through the introduction of additional forcing metrics. Table 4-1 gives a list of these metrics and summarizes their strengths and limitations. Detailed discussion of each is presented below.
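For reference, the 2.4 W m-2 well-mixed greenhouse gas figure quoted above is dominated by CO2, whose contribution is usually estimated with the simplified expression of Myhre et al. (1998), dF = 5.35 ln(C/C0). A sketch; the 1750 and 2000 concentrations are illustrative round numbers:

```python
import math

# Simplified CO2 radiative forcing expression (Myhre et al. 1998):
#   dF = 5.35 * ln(C / C0)  [W/m2]
# 278 and 368 ppmv are illustrative round numbers for 1750 and 2000.

def co2_forcing(c_ppm, c0_ppm=278.0):
    """Radiative forcing in W/m2 relative to the reference concentration."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(co2_forcing(368.0), 2))       # ~1.5 W/m2: CO2's share of the 2.4
print(round(co2_forcing(2 * 278.0), 2))   # ~3.7 W/m2 for a doubling of CO2
```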

So help me out, here, Steve S. The atmosphere is a source of solar filtering that is changing in its filtering properties by jumping chaotically among quasi-stable strange-attractor states as we recover from the LIA? And these physico-chemical changes are so poorly understood that attempts to “fix” the GCMs by overfitting forcing coefficients are absurd? And we would be better off spending our billions on basic research in solar-atmospheric physics?

Ralph, of course the “temperature” in ice cores is deduced from changes in (deuterium and oxygen) isotopes, which are related at worst to adjacent sea surface temperatures (for near-seashore ice cores like Law Dome) and at best to SH ocean temperatures (for the near polar Vostok ice core). But indeed these are only approximations of temperatures. What is interesting is the very nice correlation (with lags) of CO2 levels with temperature changes. Even if the real (global) temperature changes were much higher or lower, that wouldn’t change the correlation.

Further, all high-accumulation ice cores in Antarctica (thus enclosing the air bubbles quite fast) show a decrease of ~10 ppmv CO2 for the period MWP-LIA. If the correlation CO2-temperature holds for more recent (pre-industrial) times, this was induced by some 1 K cooling after the MWP. Which is in line with the highest estimates by other types of proxies (Esper, Moberg). That means that the same rule still governs for the LIA-current period and that the ~1 K increase in temperature since the LIA can’t give more than 10 ppmv CO2 increase (at least in ice cores, but see further the comment by/on Jaworowski)…

Did you read or try to understand the papers? They tell you how he does it. By the way, you do try to mock him. Did you ever hear of Solar Torque Cycles or Golden Cycles? Here’s a quote from one of his earlier papers:

“If my El Niño forecast proved correct, this would be the third successful El Niño forecast in a row. The second one had a lead time of 2 years. There are other successful long-range climate forecasts exclusively based on solar activity: End of the Sahelian drought 3 years before the event; the last three extrema in global temperature anomalies; maximum in the Palmer Drought Index around 1999; extreme River Po discharges around 2001.1 etc. (Landscheidt 1983-2001). This is irreconcilable with IPCC’s allegation that it is unlikely that natural forcing can explain the warming in the latter half of the 20th century. In declarations for the public, IPCC representatives stress that taxpayer’s money will be used to develop better forecasts of climate change. What about making use of those that already exist, even if this means to acknowledge that anthropogenic climate forcing is not as potent as alleged.”

This was written before his third correct El Niño, and now it is four. I had been waiting for this fourth El Niño for years. There were many false alarms from NOAA about coming El Niños and La Niñas that made me wonder at times. Alas, he was right again. This is real Physics and I am a physicist.

#73 Sounds to me like a good summary of the current state of things and a prescription for the way forward. The only difference I would make is that we should only spend millions, not billions, until the problem is correctly understood. I’m even willing to bet that the prediction of climate will turn out to be what the computer scientists call NP-complete, essentially impossible on the time scales they now claim, due to the complexity of the problem.

About 13C/12C, the relative abundances are measured (see my last link in #14) at 10 stations between 80N and 90S. The change in abundance of O2 (decreasing by burning fossil fuels) and the 13C/12C ratio were both independently used to calculate the ratio of extra uptake by oceans vs. terrestrial biomass. See Battle et al. in Science.

I haven’t studied 14C/12C ratios, because the open-air nuclear tests of the 1940s-1960s increased its level quite impressively (see here). The levels are decreasing now, but it is IMHO hard to know the difference in attribution for the use of fossil fuels vs. return to the lower (and variable) cosmogenically created levels (by 14C sequestering via the carbon cycle)…

As far as I know, cosmic rays create 14C, not 13C or 12C. The equilibrium production/depletion is in the order of 10^-12 14C/C. Even if similar (or much higher) quantities of 13C or 12C were produced by cosmic rays, that wouldn’t have any measurable influence on the 13C/12C ratios…

The observed decrease in 13C/12C ratio can’t be explained by increased outgassing from the oceans due to higher temperatures, because the oceanic 13C/12C ratio is slightly higher than the 13C/12C in air.

The only alternative theory (as volcanic degassing shows the same problem) is that there is more biomass decay than growth. But that is rather in contradiction with the “greening earth” observed by satellites…

Re #79
First glance: looks interesting. However I clearly lack the background necessary to formulate an assessment that is worth publicizing. Will have to do a lot more reading before I feel comfortable commenting.

If it is true that (1) the physical behavior of the sun is not known with certainty and is not properly included in the GCMs, and (2) the filtering effects of the atmosphere are not adequately represented in the GCMs, then I would have to say there’s quite a bit of room here for the alternative explanations that JMS in #30 is seeking.

The issue of solar-atmospheric interactions would seem to require a thread of its own (like “Clouds and Water Vapor”). It’s certainly OT for this thread.

PC/8 in 2007.2 has El Niño potential. As the date 2007.2 is closer to 2006/2007 than to 2007/2008 it is to be expected that El Niño will already emerge around July 2006 and last at least till May 2007 (Probability 80 %). The alternative to this early date is a release of the expected El Niño around April 2007; it should last till January 2008 (Probability 20 %).

That’s one of the problems with the claim that someone predicted events X1,X2,X3, etc, when what you are really dealing with is a continuous fluid dynamic process. Temperature anomalies do not come in discrete pulses that are easily classified as El Nino or La Nina. Allowing a forecaster the liberty to predict “events” leaves him a lot of wiggle room in deciding the threshold used to infer the occurrence of an event. A subjective test of forecasting skill is not a test at all.

That aside, let me get this straight. Landscheidt is said to have forecast 4 such events in row, each with lead times of 1-2 years? All using the same model? Where is the code for this model? Words are meaningless when it comes to fluid dynamics. And forecasts that were not published in hard copy don’t count either.

Also, if these “events” are truly cyclical, then why such crappy lead times? Shouldn’t he be able to forecast them with lead times of 5+ years?

I don’t know, guys. This smells like crank science to me. Very little of it peer-reviewed. No code. Where do the probability estimates in this “forecast” come from anyways? Don’t send me a linkie. Copy and paste a coherent explanation, or use your own words.

“According to Neelin and Latif (1998) weather noise and deterministic chaos, representing the internal variability of the climate system, set the fundamental limits to the lead time. This emphasis on the exclusively internal character of ENSO events is in accordance with the tenet of climatology that ENSO phenomena are the most spectacular example of a free internal oscillation of the climate system not subjected to external forcing.”

If ENSO is mostly internal, then what is so improbable about predicting the last 4 El Ninos with leads times of ~1 year? It’s not like he predicted them all way back in 1980. That would be a little more interesting.

If I start trying to predict coin tosses I’ll get quite a few long strings of “successes”. In fact, in team sports I’ve rarely lost picking “tails” every time!
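The coin-toss point is just the Bonferroni arithmetic raised earlier: the chance that somebody in a field of forecasters hits a streak grows quickly with the size of the field. A sketch with illustrative numbers:

```python
# Sketch of the coin-toss point: with enough independent forecasters, long
# streaks of correct calls appear by chance alone.  Numbers are illustrative.

def p_at_least_one_streak(n_forecasters, n_events, p_correct=0.5):
    """Probability that at least one of n_forecasters calls all n_events
    right, if each call is an independent guess with success p_correct."""
    p_one = p_correct ** n_events
    return 1.0 - (1.0 - p_one) ** n_forecasters

# One forecaster calling 4 ENSO events right by pure luck: 1 in 16.
print(p_at_least_one_streak(1, 4))        # 0.0625
# A field of 50 forecasters: someone almost certainly manages the streak.
print(round(p_at_least_one_streak(50, 4), 2))   # ~0.96
```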

Re#90, you ask; “Copy and paste a coherent explanation, or use your own words”.
I will try (briefly), but I stress that I do not know whether or not the late Theodor Landscheidt’s method has merit. It certainly deserves proper investigation instead of the flaming rejections it gets from people who have not read his arguments.

1.
Landscheidt says the Earth’s ocean/atmosphere/climate is not a closed system but is driven by interactions with solar activity (i.e. radiations, particles and disturbance of cosmic ray flux into the atmosphere).

2.
He says that although the system is chaotic/stochastic its gross behaviours can be deduced by determining the variations to solar activity that affect those behaviours (including ENSO).

3.
He claims that gravitational interactions with the giant planets affect the Sun’s behaviour and he calculates the variations to torque this applies to the Sun. (He makes no comment on why these interactions should have such effect, but it is certainly true that they will affect fluid flows within the Sun).

4.
He correlates past climate behaviours (e.g. ENSO) to the torque cycle of the solar system acting on the Sun.

5.
He calculates the future torque cycle and assumes the observed relation between climate and the torque cycle will be sustained.

The major problem with this is that it assumes observed correlation indicates causality in the absence of any known causative mechanism. The observation could be merely chance coincidence. However, the repeated predictive performance of the assumption does indicate that there is a need to determine whether a causal mechanism exists and is operating.

Re #94: so, the idea is that Jupiter’s gravity affects the Sun so much that the changes in the Sun then trigger El Niños here on Earth? So why doesn’t Jupiter affect our weather directly? Or Mars? Or Venus? Or perhaps these planets do control our lives…

Per my new resolution, Peter, I’m going to consider your question as being serious and attempt to answer it.

The sun moves in what is called a “barycentric orbit”, which means it doesn’t sit still at the center of the solar system. Instead, it is pulled by the gravity of the planets into an orbit around the center of mass of the solar system. The net action of the planets (mainly the huge ones, Jupiter and Saturn, but also including the others) is to speed up or slow down the sun’s rotation in this orbit. When this happens, the solar magnetic dynamo (which produces the solar magnetic field) is either strengthened or weakened.
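For scale, the barycentric displacement described above can be roughed out with a back-of-the-envelope calculation. The planetary masses and mean orbital radii below are standard approximate values; summing the per-planet offsets is my simplification (the real geometry depends on where the planets actually are at any moment):

```python
# Rough estimate of how far the planets pull the Sun from the solar-system
# barycentre. Each planet is treated independently and the maximum offsets
# are summed -- a crude upper bound, not the actual time-varying orbit.
M_SUN = 1.989e30   # kg
AU = 1.496e11      # m
R_SUN = 6.957e8    # m, solar radius

planets = {  # mass (kg), mean orbital radius (AU) -- approximate values
    "Jupiter": (1.898e27, 5.20),
    "Saturn":  (5.683e26, 9.58),
    "Uranus":  (8.681e25, 19.2),
    "Neptune": (1.024e26, 30.1),
}

# Two-body barycentre offset of the Sun: r_sun = m_p * a / (M_sun + m_p)
offsets = {name: m * a * AU / (M_SUN + m) for name, (m, a) in planets.items()}
total = sum(offsets.values())

print(f"Jupiter alone: {offsets['Jupiter'] / R_SUN:.2f} solar radii")
print(f"All four giants aligned (crude max): {total / R_SUN:.2f} solar radii")
```

Jupiter alone displaces the Sun by about one solar radius, and an (unrealistic) alignment of all four giants by roughly two, which is why the Sun's barycentric motion is real and measurable; whether it modulates the dynamo is the contested step.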

This change in the solar magnetic field affects the earth in a couple of ways. One is indirect, through the magnetic field’s effects on cosmic rays, and through them, on clouds. The other is through the CMEs, the coronal mass ejections. These are millions of tonnes of matter ejected from the sun and thrown out into space. When they hit the earth, they affect the magnetosphere, the auroras, and (it is believed) the weather.

The reason that Jupiter doesn’t affect the Earth directly in this manner is that we don’t have a thermonuclear fusion reaction going on at the core of the earth …

This matter is getting very off-topic because it does not directly relate to Soon’s presentation at KTH on Solar effects. However, you make an important observation at the end of your posting #96.

Prejudice and irrationality are functions of all people (including scientists) and not only those who believe in nonsense pseudo-sciences such as astrology.

Landscheidt’s arguments relate to positions of the planets and their possible effects on Earthly phenomena. Such arguments hint at astrology, and so serious consideration of them is inhibited by a gut response from scientists. Indeed, years ago when he was preparing his paper for inclusion on Daly’s web site he sent me a draft for comment that included an influence diagram in the form of a pentagram. I strongly recommended that he replace the diagram with something else: its shape smacks of irrational mysticism, so some people would see the shape of that diagram, not study it, and not read his words. He replaced it with his “five fingers” diagram, which is much less clear, and I congratulated him on the change despite the reduction in clarity of his paper.

It is always necessary to assess what people are saying and not what a first impression suggests they may be saying. And I think the feelings engendered by Landscheidt’s work illustrate the great need for more events such as the KTH Meeting between AGW-believers and AGW-agnostics. Each of these groups has false assumptions concerning the other that may be reduced by such meetings.

Peter, I am not sure if Landscheidt was right or wrong with his method to predict El Niños. Just a few remarks:

– Contrary to the earth, the sun is a boiling mass of material. A small change in gravity, caused by changes in the distances of the planets, can have a profound influence on coronal mass ejections.
– I have looked at the frequency of El Niño events during the solar cycle. About 80% of the events are in the rising and falling flanks (first and third quarters) of the cycle, only 20% near maximum or minimum. Which is what Landscheidt expects.
– While the origin of ENSO events may be mostly internal to the oceans, the solar influence just may synchronise the event with a small change in energy/electrical/magnetic field at the right moment…

I’m going to call a halt to discussion of Landscheidt’s theory of sunspot formation on this thread. Maybe we can return to it another time on a separate thread, but it’s not a topic that I want to get into right now.

I’ve been watching the El Nino/La Nina phenomenon for several years. The El Nino area warms for a while and then it cools. Sometimes it looks like a strong El Nino is developing and then it suddenly reverses and nothing comes out of it.

The system just looks chaotic to me.

Here is a good animation that covers the last year of SST anomalies in the Pacific. Right now it looks like an El Nino is developing. But last winter, a La Nina seemed to dominate. North America’s winter was the opposite of what would be expected in a La Nina event.

This is my comment on the Executive Summary of the U.S. Climate Change Science Program Synthesis and Assessment Product 2.2, The First State of the Carbon Cycle Report (SOCCR), which I sent today to the coordinating team. This comment is relevant to the discussion on CO2 in the ice cores.

Sirs,

Your report is misleading uninformed people. In the Executive Summary, page ES-5, line 23, it is stated that “The primary source of carbon in North America is the release of CO2 during the combustions of fossil fuels (Figure ES-1)”. In fact, the primary source of atmospheric carbon in North America, and elsewhere, is not fossil fuel burning but natural sources. The annual natural flux of CO2 (expressed as carbon) from the ocean into the global atmosphere is about 106 Gt, and from the lands 63 Gt, summing to a total of about 169 Gt. To this natural flux of CO2, fossil fuels, land use, and cement production add about 6.3 Gt per year, i.e. about 3.7%. The North American contribution of 1.6 Gt per year adds a trifling 0.95% to the natural flow of CO2 into the global atmosphere. This can hardly be defined as a “primary source”. Your Figure ES-1 distorts reality by not showing the total oceanic CO2 flux, but only the flux from the coastal ocean. However, the air and CO2 over North America come not only from the coastal ocean but from its whole body. Ignoring in your Table ES-1 the natural flux of CO2 into the global atmosphere misguides the public and authorities.
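For what it is worth, the percentages quoted in the letter follow directly from its own figures. A quick arithmetic check, using those figures as given (without endorsing them; the letter also leaves out that the gross natural fluxes are quoted one-way):

```python
# Sanity check of the percentages quoted above, using the letter's own numbers.
natural_ocean = 106   # GtC/yr, ocean -> atmosphere (as quoted)
natural_land = 63     # GtC/yr, land -> atmosphere (as quoted)
natural_total = natural_ocean + natural_land          # 169 GtC/yr

anthropogenic = 6.3   # GtC/yr, fossil fuel + land use + cement (as quoted)
north_america = 1.6   # GtC/yr (as quoted)

print(f"{anthropogenic / natural_total:.1%}")   # prints 3.7%
print(f"{north_america / natural_total:.2%}")   # prints 0.95%
```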

On page ES-1, lines 26-27, there is a statement: “The concentration has increased by 31% since 1750, and the present concentration is now higher than at any time in the past 420,000 years, and perhaps the past 20 million years”. This statement, based in part on studies of CO2 in polar ice cores, is false.

Ice core data do not provide a reliable reconstruction of the composition of the past atmosphere, because polar ice is not a proper matrix for such studies. It does not fulfill the absolutely essential closed-system criterion, partly because polar ice contains liquid water at temperatures even below -50°C. Due to many fractionation processes in the ice sheets and in the ice cores, about 30% of CO2 is depleted from the air inclusions trapped in the ice. One of the main factors is the high solubility of CO2 in liquid water (73 times higher than the solubility of nitrogen and 35 times higher than that of oxygen). With increasing pressure in the ice caps, CO2 starts to enter a solid clathrate form at about 5 bars (~100 meters depth). Nitrogen enters the clathrate form later than CO2, at 80 bars, and oxygen at 100 bars (at a depth of about 900-1200 meters, where all air bubbles disappear in the ice). This depletes CO2 from the entrapped air. Drilling the ice cores is a brutal method, leading to extreme contamination of even the innermost parts of the cores with heavy metals present in the drilling fluid. All ice cores suffer distortion caused by the “sheeting phenomenon”, which leads to dense horizontal cracking of the cores and a loss of fractionated gases during decompression. To date, ice core studies have not been able to provide a reliable reconstruction of the CO2 level in the pre-industrial atmosphere. Obviously, you may ignore the evidence on the unreliability of ice core studies, which constitute the very foundations of the man-made climatic warming hypothesis. But then how can you escape the trap of being biased?

Why mention 20 million years and not 35 million years, when the CO2 concentration was 1500 ppm and the global temperature was slightly lower than now? Why not say that 450 million years ago the atmospheric concentration of CO2 was >6000 ppmv and the temperature was about 3 degrees lower than now?

On page ES-3, lines 8-12, you cite IPCC, 2001, stating that CH4 and CO2 together contribute up to about 80% of the greenhouse effect. This is incorrect, as water vapor alone is responsible for >90% of the greenhouse effect, and man-made CO2 emissions contribute about 0.05 to 0.25% to this effect.

I think that discussing or dissecting or even skewering cranks that may have been at your meeting is on topic, Steve. The thread is about the meeting. Please allow that to go forward, despite the discomfort it causes you and/or any damage it does to “the big tent” of skeptics. Open discussion is more important than “helping your side”. The thread is about the menagerie meeting; let’s discuss the various animals in the zoo.

No, it is very, very, very far from sufficient. (For starters, an equation somewhere would be nice.) Willis says the explanation was “clear”. How can something that’s opaque be clear?

This topic, like “Water Vapor and Cloud Feedbacks”, is far more important and interesting than any on “animals in the zoo”. The link to proxy reconstruction and climate auditing is clear:
1. Where is Landscheidt’s code? (transparency)
2. What is an overfit model? (robustness)
3. Are the white vs. gray literatures equally suitable for policy-making?

Note that I did not flame anybody’s theory. What I did was openly disclose my relatively uninformed prejudices. But as Steve M has asked for a halt to the discussion, we should respect that. So I will end my contribution here.

Landscheidt was not at the conference, nor were his theories discussed there. I don’t want the bandwidth of this blog consumed with this sort of issue. There are many venues where interested parties can do so.

TCO, I’m sure that dissecting carbon cycles issues is an interesting topic for someone, but I can’t do everything in the world. I’ve got many topics that are much higher in my personal queue; that doesn’t stop other people from investigating these matters.

Richard, in #41, you discuss possibilities other than increased human emissions for the increase in CO2 levels, where a change in equilibrium is one possibility. But…

– Higher sea surface temperatures indeed change the equilibrium between outgassing and absorption of the oceans. But as already mentioned, the ocean’s surface is richer in 13C than ambient air, thus any degassing of the oceans would increase 13C/12C ratios, not decrease them as observed.

– The only alternative should be an extra decay of terrestrial plant organics. There are indications that this indeed increased, due to deforestation and melting of permafrost. But extra decaying of organic material, due to higher temperatures, also happened in the Eemian (and is part of the overall correlation between CH4, CO2 and temperature), when temperatures e.g. in Alaska were substantially higher. Despite the higher temperatures, CO2 values in the Eemian (measured in ice cores) were only slightly higher than in the pre-industrial part of the current interglacial.

Deforestation indeed induces emissions of 13C depleted CO2, due to induced forest fires and decay of organics buried in the upper soil layer. The amounts are substantial: near 2 Gt/yr, compared to an estimated near 7 Gt/yr for burning fossil fuels. But this is part of the human contribution, not related to a temperature induced equilibrium…

What is clear is that year-by-year (season-to-season, month-by-month, day-to-night) variations in (local, ocean, global) temperatures have a rapid effect (thanks Hans Erren for #34) on CO2 levels (and of course the photosynthesis with sunlight). One needs to take that into account if one compares human-made emissions with global (regional, local) CO2 levels in the atmosphere. Where one measures is also important: on an island in the middle of the ocean, or during the day or night in a forest deep inland. Even measuring CO2 levels near the ground in the neighbourhood of decaying organic material gives firmly increased CO2 levels (as every -organic- gardener knows: mulching activates growth)…

So IMHO, there is only one viable explanation left: the increase of CO2 levels since the industrial revolution is due to human activity (by burning fossil fuels and deforestation)…

Hans, I prefer to discuss a few topics (the origin of the CO2 increase and the ice cores per Jaworowski) here, as there are more persons here who can comment with substantial points… But if everybody agrees…

According to IPCC TAR Figure 3.1a there are 38,000 PgC of carbon in the ocean and 730 PgC in the atmosphere. There is a two way flux of carbon between ocean and atmosphere of 90 PgC/year. According to Figure 3.1b a further 5.4 PgC is added to the atmosphere each year from anthropogenic sources.

CO2 moves between ocean and atmosphere primarily by processes of solution and diffusion and the rate of transfer therefore depends on the partial pressures of CO2 on both sides of the boundary layer. Partial pressure is dependent on both CO2 concentration and water temperature.

Assume the ocean-atmosphere system is in quasi-equilibrium and will always revert to equilibrium when it is disturbed. A back-of-the-envelope calculation indicates that if the 5.4 PgC of anthropogenic CO2 were added to the atmosphere in a single pulse, the system would revert to equilibrium with a time constant of 5.4 / 90 years, i.e. in about 3 weeks. At the new equilibrium level the 5.4 PgC would be distributed between atmosphere and ocean in the ratio 730:38,000, so that 0.1 PgC would be added to the atmosphere and 5.3 PgC would be added to the ocean. Atmospheric CO2 would be expected to increase by 1 part in 7300 in a year. In fact the CO2 mixing ratio has increased from around 330 to 380 umol/mol in 30 years, an increase of 1 part in 200 per year.
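That back-of-the-envelope calculation can be reproduced directly. This sketch only restates the commenter's arithmetic and his assumption of a single well-mixed ocean reservoir (the assumption that the replies below dispute), not an endorsed carbon-cycle model:

```python
# Reproducing the back-of-the-envelope arithmetic above (IPCC TAR Fig. 3.1 values).
atm = 730.0      # PgC in the atmosphere
ocean = 38000.0  # PgC in the ocean
flux = 90.0      # PgC/yr two-way ocean-atmosphere exchange
pulse = 5.4      # PgC anthropogenic addition per year

tau_years = pulse / flux                  # the time constant claimed above
atm_share = pulse * atm / (atm + ocean)   # pulse partitioned by reservoir size

print(f"time constant ≈ {tau_years * 52:.1f} weeks")         # about 3 weeks
print(f"airborne part of the pulse ≈ {atm_share:.2f} PgC")   # about 0.1 PgC
print(f"annual increase ≈ 1 part in {atm / atm_share:.0f}")  # about 1 in 7200
```

The comment's "1 part in 7300" comes from rounding the airborne share to 0.1 PgC; carrying full precision gives roughly 1 in 7200, which changes nothing in the argument.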

This increase is too large to be accounted for simply by the addition of anthropogenic CO2, and therefore there must be some other source. One such source is the ocean itself. Temperature increases of oceanic water masses (0.5 deg in 30 years) are in fact observed (e.g. Bindoff and McDougall, 2000, J. Phys. Oceanogr. 30, 1207-1220). An increase in the temperature of deep ocean water masses will increase the partial pressure of CO2 in the ocean and so change the equilibrium distribution between ocean and atmosphere.

Increased atmospheric CO2 concentration is then the symptom rather than the cause of global warming.

This argument is too simple and obvious to have been missed by a generation of climatologists. So what is wrong with it and why is it never discussed?

John, your viewpoint is almost exactly the same as Fred Goldberg’s or Peter Stilbs’s of KTH in Sweden. Bert Bolin’s retort was to tell them that this was garbage and that they should read a textbook – hardly the most articulate argument in the world. I guess he mentored the Team.

Note Dardinger’s #159 crosspost in Road Map. The problem, John, is that mixing in the ocean is very slow except in the “mixed layer”, which is only about 10 to 100 meters or so thick (thicker near the poles, thinner in equatorial waters). This is only a small fraction of the total ocean, so while the amount which mixes from the surface to the deeper ocean is similar to the amount which mixes from the atmosphere into the ocean, it will take a long time for the total to mix. Mixing by diffusion would take practically forever, so most of the mixing is via deep-sea currents, which upwell in various places, such as along the Chilean coast, and downwell in places like the Arctic Ocean. And the funny thing is that the amount of CO2 which upwells is greater than that which goes down, at least so far, because most of the CO2 in solution gets used by plankton for photosynthesis. The difference is made up as the remains of dead creatures drop quickly into deep waters. But if we continue to have CO2 rise in the atmosphere and thereby in the ocean, then eventually more CO2 will go down via currents than comes up, in addition to all that falls, and this will keep the amount of CO2 sequestration high. Comment by Dave Dardinger, 21 September 2006 @ 9:52 pm

re: #115. I hope John’s view isn’t “almost exactly the same as Fred Goldberg or Peter Stilbs of KTH in Sweden.” It’s fine to throw things out here on a blog and get feedback about where you’re wrong, but you shouldn’t be doing it in an international conference. If they did then Bolin had a right to be mad.

Fred Goldberg and Peter Stilbs presented on different topics and I am reporting more on conversations here; however, as Richard Courtney observed, this is the line of discussion that he presented. When Bolin got mad, only Segalstad had presented on the carbon cycle, and he made a pretty general presentation. Prior to that, the presentations had been on variability: Carter, myself, Karlen – he may not have liked the presentations, but there was nothing that he could grab hold of as being wrong. He didn’t say boo after I presented, and I was pretty critical of the Team and their works.

Thank you for your comments at #112. I am genuinely grateful because this is the kind of debate I enjoy. My prejudice is in favour of science and I regret that climate science is bedeviled by pro- and anti-IPCC positions (the IPCC exists to support the UN Framework Convention on Climate Change and I could not care less about any political policy).

But, to deal with your excellent substantive points, I first need to make clear that we disagree on our interpretations.

You say, “as already mentioned, the ocean’s surface is richer in 13C than ambient air, thus any degassing of the oceans would increase 13C/12C ratios, not decrease them as observed”.

I agree, but that is a starting point.

Over the period of measurement (i.e. the short period between 1980 and 2003) the 13C/12C ratio has decreased steadily. The regression equation for the data since 1981 is:
δ13C = -0.025 × (year) + 41.9, with a correlation coefficient of -0.97.
The common explanation (which I understand you to be presenting in #112) is that the atmosphere is being diluted with CO2 produced by burning fossil fuels. The δ13C values for coal, petroleum and methane are around -26, -25 and -28 respectively.

In the 23-year period the CO2 content of the atmosphere [Mauna Loa data] has risen from 723 to 805 Gt C, an increase of 82 Gt C in an atmosphere with an average of 764 Gt C. So, if 82 Gt C came from anthropogenic sources with a δ13C value of -25, and the original value in 1980 was -7.5, this would make the value in 2003 equal to -9.3.

This change is more than sufficient to explain the observed value of -8.1 and advocates of AGW stop there and say “See the change is more than enough!” But, in fact, that is the problem: it is much, much more than enough.

An assumption of an anthropogenic cause predicts a change from -7.5 to -8.1; i.e. a change of 0.6.
But the observed change is from -7.5 to -9.3; i.e. a change of 1.8.
This discrepancy demonstrates that – if the data are adequate – then the anthropogenic emission is responsible for no more than a third of the isotope change at most. And if something other than the anthropogenic emission is responsible for most of the isotope change, then that something may be responsible for all the observed change.
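The -9.3 figure in the calculation above is a simple two-component isotope mass balance. A minimal sketch using exactly the numbers given in the comment (the naive assumption, stated there, is that all emitted CO2 stays airborne and nothing exchanges with other reservoirs):

```python
# Two-component mixing: 723 GtC of 1980 air at delta-13C = -7.5 permil
# plus 82 GtC of fossil-fuel CO2 at about -25 permil, with no exchange.
c0, d0 = 723.0, -7.5        # GtC in atmosphere and its delta-13C, 1980
added, d_add = 82.0, -25.0  # anthropogenic addition 1980-2003 and its delta-13C

d_new = (c0 * d0 + added * d_add) / (c0 + added)
print(f"predicted 2003 delta-13C ≈ {d_new:.1f} permil")  # prints -9.3
```

Whether the gap between this no-exchange prediction and the observed -8.1 is a "discrepancy" or simply evidence of exchange with the oceans and biosphere is exactly what the following comments dispute.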

In normal science, such a finding would induce an investigation of
a) the adequacy of the data
or
b) the major cause of the change to the ratio.
But, in climate science, this problem is ignored and statements (such as yours) that it changes in the right direction are used to justify ignoring the problem.

I do not know what causes the major part of the change to the ratio. And I would like to know. Suppositions such as those you list in #112 are a useful starting point for investigation. However, I unequivocally state that I cannot agree with your conclusion that,
“IMHO, there is only one viable explanation left: the increase of CO2 levels since the industrial revolution is due to human activity (by burning fossil fuels and deforestation)…”.
This is not a “viable explanation” because it does not agree with the data on which it is based.

Also, John Reed in #114 explains why degassing seems to be the most likely source of the increased atmospheric CO2 concentration. And Dave Dardinger in #117 explains why ocean degassing takes time. These are the reasons why my paper at Stockholm concludes that:
“the relatively large increase of CO2 concentration in the atmosphere in the twentieth century is likely to have been caused by the increased mean temperature that preceded it. The main cause may be desorption from the oceans. The observed time lag of half a century is not surprising.”
But I stress that my paper and our original paper (Rorsch et al. (2005)) also said:
“Assessment of this conclusion requires a quantitative model of the carbon cycle, but such a model cannot be constructed because the rate constants are not known for mechanisms operating in the carbon cycle.”

The claim by Segalstad was that CO2 doubling is not possible because Henry’s law tells us that for equilibrium 51 times that amount is needed, which is larger than the reserves.

The reason why this is incorrect is because – as is observed – it is not a process in equilibrium, but an injection that adheres to diffusion laws with a half-life of 50 years. So yes, it is indeed possible to get CO2 doubling, because the diffusion process into the sinks is slower than the speed with which it is added.
/home.casema.nl/errenwijlens/co2/co2fick.xls
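A minimal sketch of the "injection with a 50-year half-life" picture Hans describes: each year's emission pulse decays into the sinks exponentially, and the airborne total is the sum of what remains of all past pulses. The constant 6 GtC/yr series and 100-year horizon are purely illustrative choices of mine, not his figures:

```python
import math

HALF_LIFE = 50.0             # years, as claimed in the comment above
k = math.log(2) / HALF_LIFE  # corresponding decay constant per year

def airborne(emissions):
    """Airborne carbon remaining after the last year of a list of annual pulses,
    each pulse decaying exponentially from the year it was emitted."""
    n = len(emissions)
    return sum(e * math.exp(-k * (n - 1 - i)) for i, e in enumerate(emissions))

# e.g. 100 years of a constant 6 GtC/yr emission:
pulses = [6.0] * 100
print(f"airborne after 100 yr: {airborne(pulses):.0f} GtC of {sum(pulses):.0f} emitted")
```

Under these assumptions a bit over half of the cumulative emission remains airborne after a century, which is how a slow sink can coexist with a large atmospheric rise.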

Nicholas, I think he is saying that if you do the pure numbers you end up with a 2003 value of -9.3. That is, a pure theoretical calculation. The -8.1 figure is from observations, so there is a discrepancy of 1.2 between a simple calculation without confounding factors and the observations, which of course include all confounding factors.

Well, I understand that he is saying that the theoretical calculations based upon the (naive) assumption that all CO2 emitted into the atmosphere stays in the atmosphere do not agree with actual measurements.

Further, I interpret what he is saying as suggesting that the content of 12C in the atmosphere is rising faster than can be explained by this scenario. However, his numbers seem to suggest the opposite, unless I am misunderstanding something – that there is some process removing excess 12C from the atmosphere.

I really do think if you re-read what he says, you will find he has reversed the theoretical calculations and the observations in the middle. Otherwise I don’t think his conclusion is backed up by his figures.

It would indeed be interesting if the rate of increase of 12C in the atmosphere was higher than could be explained by burning 13C-depleted fuels. However, he says the observation is that the concentration has changed from -7.5 to -8.1, whereas his calculations suggest it should change from -7.5 to -9.3. Surely the most obvious theory to account for this would be that some of the 12C/CO2 emitted into the atmosphere is absorbed by plants/oceans/etc. and plants/oceans/etc. emit much 13C/CO2 in exchange for it?

Most of us are pretty sure there’s a great deal of CO2 flux going on absent that which comes from burning, so we don’t expect the CO2 which is increasing in the atmosphere to be the SAME CO2 which is being emitted, necessarily. Emitted CO2 changes the balance, but there’s still a lot of flux going on.

Perhaps this question has been answered already, but here goes. In the past century lots of areas with perennial vegetation have been replaced with farms that cultivate annual vegetation. Each year there is more and more vegetative waste that must emit more CO2 as it rots. Couldn’t farming be a significant source of increased CO2? Perhaps it is as large as or even larger than the source of CO2 from burning fossil fuels? How many Gtons of CO2 are emitted each year from rotting agricultural wastes, and how has this varied over time?

Thank you for your query #122. I meant what I said. Simply, that is how the numbers come out. Anders explains the matter correctly in his post #123. Anyway, the point is moot because the discrepancy is 300% in either case.

As you say, it is naïve to assume that the calculation could be correct, because the carbon cycle is not adequately understood. And, as I said, the correct calculation would require a quantitative model of the carbon cycle that cannot be built because the rate constants are not known for mechanisms operating in the carbon cycle.

And I said that when the discrepancy in the naïve calculation is pointed out, those who advocate AGW arm-wave that the change is in the right direction (as you do in your final paragraph of #124). But this is not good enough. Nothing is constant in nature. Everything changes all the time in nature, and the ratio must increase or decrease if it does not stay constant.

I repeat that these observations are a spur to research. One should not “cherry-pick” parts of the data that fit one’s case and ignore the others (with arm-waving excuses). In this case, the direction of the change fits the hypothesis of an anthropogenic cause but the magnitude does not. You claim to know the cause of the change to the isotope ratio with no evidence for your claim. I am ignorant of the cause and would like to find out what it is.

There must be much more area of the Earth covered by annual vegetation now than in the past. For example, the US great plains has thousands of square miles with corn plants and wheat and other grain plants going to waste. In the 1800s, this would not have occurred. Similarly in Brazil and Africa. Perennial grassland and forest have been replaced with annual plants. It must be having some effect.

Comment on #124 and earlier on 12C/13C issues. Please note that well-known isotope effects affect separation processes (including evaporation and outgassing); biological (and chemical) processes/reactions likewise operate in such a way that those involving the lighter isotope are faster/favored. Therefore, e.g., CO2 outgassed from the seas contains less carbon-13 than was proportionally present in the seawater. Similarly, degradation processes on organic material, or respiration (of plants, animals or simple organisms) that forms CO2 as an end-product, are slower for carbon-13-containing molecules.

Doug, indeed there is a difference between old-growth forests (like the Amazon) and new growth and/or annual vegetation. Old-growth/mature forests are more or less in equilibrium for CO2 uptake and release (though with increasing CO2 levels they act as a carbon sink, due to extra growth). Some of the carbon is accumulated in trees, roots and decaying material, part of which is buried relatively stably in the soil. Once the forest is cut, the carbon present in the soil will rapidly oxidise, which is a net release to the atmosphere. Agricultural use at first sequesters CO2, but that is either eaten (and biologically burned), burned (directly or indirectly) or composted. Within a few months to a few years this returns to CO2 again, so only the first months/years act as a CO2 sink. After that period, there is again an equilibrium between sequestration and release.

For newly planted forests, where the wood (after some 30-50 years, or 200 years for oak) is used for short-term (paper) or long-term (furniture) items, there is a net sink, which comes into equilibrium with emissions after some 30-250 years, depending on growth speed and use.

As far as agriculture is concerned, you can pretty much ignore what is happening above ground. Aside from the fact that, as Peter mentioned, in temperate areas perennial vegetation dies back in the winter, most of the carbon is in the soil anyway. That has been disappearing from cultivated lands for a long time. After an initially rapid fall, C continues to be lost slowly for a long time, up to 100 years. That will have a depleting effect on the 13C/12C ratio, as the carbon is plant-derived. However, over the last 100 years there has been a large increase in the area of maize/corn grown, which is less 12C-depleted (delta about -12 compared with about -27 for wheat), so this will have had a counter-effect. How much, who knows.

#132,
bender, you left out a sarcastic reference to Galileo, a challenge to publish their findings somewhere so that all the real scientists will finally know the truth and some type of schoolyard wager.

If you’re going to ‘ask for him’ at least give us the Dano bells and whistles we’re accustomed to.🙂

Surely the conversion from, broadly, forest to farmland took place long ago in many parts of the world? Did this also cause climate/CO2 effects?

I’ve wondered about that too. Before the European settlers came in numbers, the North American Indians routinely set burns every fall to destroy the underbrush and dead leaves. The fires spread over large areas and burned for days. This was done in the Northeast, the Great Plains, and the far West. There are reports of early settlers traveling up the Mississippi to Cahokia (modern St. Louis) and seeing mile after mile of fire for days. Our modern heavily wooded North American forests are a recent result of a different wilderness management philosophy. This had to have a significant impact on the CO2 in the atmosphere.

125/Douglas: Permanent deforestation would be expected to increase CO2 levels, since the trees are storing it. However, according to Lomborg (who backs up his statements well), the total amounts of forest area have remained quite stable since 1950. Thus, probably not a lot of CO2 from this source. Annual vegetation is probably fairly neutral, since it sucks CO2 out of the air in the growing season, then puts it back when it decays, is eaten, etc. Logging with replanting is also relatively neutral, but the CO2 is locked up and released on much longer time scales.

Your figures seem to be right: there is more 13C left in the atmosphere than may be expected from the increase in 13C-depleted burning of fossil fuels. That points to either release of 13C-rich CO2 by some source, or depletion of 12C by some sink.

– The release of relative 13C rich CO2 might be from the oceans, induced by surface warming, but that is not very plausible:
1. the total amount in the air should increase much more than observed, as induced by fossil fuel burning + outgassing of the oceans (unless absorption by land biota exceeds both), but the observed increase is less than 50% of what is expected from fossil fuel burning alone.
2. observed ocean pH decreases (be it at the brink of accuracy of the measurements), which points to increased CO2.
3. Böhm et al. (published in Geochem. Geophys. Geosystems, 3/3, 2003) show fast-decreasing 13C/12C ratios in coralline sponges, coinciding with the start of the industrial revolution.

This indicates (without a good explanation of other 13C sinks or 12C sources) that oceans are sinks for the excess CO2 released by fossil fuel burning, and therefore follow the 13C/12C change in air since the industrial revolution.
Note: the aragonite (calcium carbonate) tested in the sponges above preserves the 13C/12C ratio of the CO2 in the surrounding water at the time it was incorporated. This is a straightforward chemical process, not a biological one (albeit biologically controlled), so there is little or no shift in the 13C/12C ratio.

– Biogenic activity in the oceans and on land acts as a 12C sink. This is well established, and it explains much (not all; there is still a missing part) of both the difference between emissions and CO2 levels in air on the one hand, and the difference in 13C/12C ratio between theory and observations on the other…

Overall, it seems to me that there is no real explanation other than the burning of fossil fuels by humans for the increased CO2 levels and the 13C/12C ratio change (in air and oceans) since the industrial revolution…

Peter, you are right that there is a change in 13C/12C ratios with degassing/absorption and biogenic processes (though less so for chemical processes). That would be important if there were more degassing of CO2 from the oceans than absorption. But as explained in #138, the equilibrium has shifted the other way. One remarkable point, according to the findings in sponges (see the fig. at page 15 of the reference in #138), is that oceans originally had a lower 13C/12C ratio than air, but that this reversed in the past 30 years. Probably the result of increased 12C releases into the air and the lag in uptake and distribution by the oceans.

Thank you for your points in #138 and #139, especially the thought-provoking ones in #138. And I appreciate Hans’s provision of his graph in #137 (I really wish I could figure out how to post graphs here, too).

This all brings me back to where I was. As you say, “Your figures seem to be right; there is more 13C left in the atmosphere than may be expected from the increase in 13C-depleted burning of fossil fuels. That points to either release of 13C-rich CO2 by some source, or depletion of 12C by some sink.” So, we now agree to that degree. But – as is always useful – our interpretations then diverge.

I say I do not know the reason for the change in the isotope ratio being 300% greater than predicted by an assumption of burning of fossil fuels, and I want to know. You say that we do not know the reason for the 300% discrepancy, so fossil fuel usage must be the cause.

This echoes matters at the KTH Meeting. Bolin took the floor (uninvited) and provided a graph of atmospheric CO2 and O2 concentrations that he claimed showed the atmospheric CO2 increase “must” be the result of the burning of fossil fuels. Of course, they prove no such thing, any more than the isotope ratio change does (as our discussion demonstrates).

Later that afternoon I presented my paper that concluded:
“Attribution studies have used three different models to emulate the causes of the rise of CO2 concentration in the atmosphere in the twentieth century. These numerical exercises are a caution to estimates of future changes to the atmospheric CO2 concentration. The three models used in these exercises each emulate different physical processes and each agrees with the observed recent rise of atmospheric CO2 concentration. They each demonstrate that the observed recent rise of atmospheric CO2 concentration may be solely a consequence of the anthropogenic emission or may be solely a result of, for example, desorption from the oceans induced by the temperature rise that preceded it. Furthermore, extrapolation using these models gives very different predictions of future atmospheric CO2 concentration whatever the cause of the recent rise in atmospheric CO2 concentration.

The above findings cast severe doubt on any “projections” of future atmospheric CO2 concentration and on any projections of future climate that consider atmospheric CO2 concentration to be significant.”

At the KTH Meeting I repeatedly stressed (both before and after my presentation) that I want the “above conclusions” to be shown to be wrong. They prevent advance of the science, because nobody can know in which direction to advance. Nobody attempted to suggest a flaw in my presentation (except to observe its unfortunately necessary excessive speed of delivery!), but the following morning Bengtsson reported – in his presentation – that his team is to construct a combined ‘climate and carbon cycle model’. Until the “above conclusions” are shown to be wrong, such a model can only be an expression of science fiction.

In my opinion, the science of global climate cannot properly progress until everybody stops pretending that we know the cause(s) of changes to atmospheric CO2 concentration and starts serious research to find out what they are.

Richard, I understand and agree with everything you say, except (once again) where you say there is a “.. change to the isotope ratio being 300% greater than predicted ..”.

I understand (from your own figures) that the change is 1/3 of what is predicted, not 3 times what is predicted. Obviously 1/(1/3) = 3. But I interpret your statement that it is “300% greater than predicted” as meaning this:

ratio_now = ratio_then + expected_delta x 3

Whereas I think what is observed is:

ratio_now = ratio_then + expected_delta / 3

Am I wrong? Perhaps the negative values are what is confusing me. (What are the units?) Let me substitute the values to clarify:

Richard, I interpreted your figures as saying that there is less 12C found in the atmosphere than what is emitted by burning fossil fuels, not the other way around. To be sure of what you meant, here again is what you wrote:

In the 23 year period the CO2 content of the atmosphere [Mauna Loa data] has risen from 723 to 805 Gt C, an increase of 82 Gt C in an atmosphere with an average of 764 Gt C. So, if 82 Gt C came from anthropogenic sources with a delta 13-C value of -25 and the original value in 1980 was -7.5 this would make the value in 2003 to be -9.3.

This change is more than sufficient to explain the observed value of -8.1, and advocates of AGW stop there and say, “See, the change is more than enough!” But, in fact, that is the problem: it is much, much more than enough.

An assumption of an anthropogenic cause predicts a change from -7.5 to -8.1; i.e. a change of 0.6.
But the observed change is from -7.5 to -9.3; i.e. a change of 1.8.

The last paragraph is inconsistent with the first two: the observed values go from about -7.6 around 1980 to -8.1 in 2002 for all stations. See the graphs/data of the different stations in Keeling et al.

The theoretical decrease in 13C/12C, if the full 82 Gt C were all from fossil fuel (and land use change), gives, as you calculated, -9.3. That means there is not a threefold decrease of 13C/12C in the observations vs. the calculations; rather, only about 30% of the theoretical change is found in the air. Thus about two-thirds of the decrease is mitigated through selective processes, for which increased biogenesis, both on land and in the oceans, may be responsible. In fact, Battle et al., in the full article in Science, used the change in 13C/12C and the change in O2 to calculate how much of the mitigation was by land and how much by the oceans…
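The mass-balance arithmetic in this exchange is easy to check. A minimal Python sketch, using only the figures quoted in these comments (723 and 805 Gt C, delta-13C of -7.5 per mil for 1980 air and -25 per mil for fossil carbon):

```python
# Back-of-envelope isotope mass balance, using the figures quoted in the
# comments above (atmospheric carbon in Gt C, delta-13C in per mil).
C_1980 = 723.0        # atmospheric carbon in 1980, Gt C
C_2003 = 805.0        # atmospheric carbon in 2003, Gt C
d13_air_1980 = -7.5   # delta-13C of atmospheric CO2 in 1980
d13_fossil = -25.0    # delta-13C of fossil-fuel carbon

added = C_2003 - C_1980   # 82 Gt C

# Predicted 2003 delta-13C if all added carbon were fossil and none exchanged:
d13_pred = (C_1980 * d13_air_1980 + added * d13_fossil) / C_2003
print(round(d13_pred, 1))   # -> -9.3, as calculated in the comment above

# Observed 2003 value and the fraction of the predicted shift actually seen:
d13_obs = -8.1
fraction = (d13_obs - d13_air_1980) / (d13_pred - d13_air_1980)
print(round(fraction, 2))   # -> 0.34: only about a third of the predicted shift
```

This reproduces both readings of the discrepancy: the predicted shift is roughly three times the observed one, i.e. only about a third of the theoretical 13C depletion shows up in the air.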

Ehm, Richard, there was no time for questions. You held a sermon, not a presentation. You did not explain; there was far too much text and too many formulae on the viewgraphs, which went far too quickly to read. Even I – in possession of the manuscript of your paper – could not follow your reasoning.

Didn’t see your message until I sent mine… But I suppose that you are right and that Richard mixed up observations and theoretical values (right, Richard?). If it were the other way around, that would indeed be hard to explain from any practical or theoretical point of view.

I might say that it’s interesting that on this blog there is quite a bit of argument among those united under the umbrella of “skeptics”. I think this is healthy, though prone to cause some hurt feelings; I hope Richard recognizes this. A site such as RealClimate, where a basic unanimity of message is enforced, may be clearer in what the observer takes away, but is much more prone to propagate error. A site such as this, while it may have more error presented, isn’t as likely to have an observer go away thinking an issue is settled.

Those demanding certitude won’t care for this, but those demanding truth will.

Re ##76 & 78
P. Linsay speculates “that the prediction of climate will turn out to be what the computer scientists call NP complete…” and Bender is “inclined to agree…”

A pedantic correction: NP-complete problems are ‘decision problems’. That is, they accept data as input and expect a Yes/No answer as output. For example, “Does the word ‘nerd’ occur in the book ‘Podkayne of Mars’?” is a decision problem. Climate prediction, as currently practiced, involves running a simulation of the evolution of an initial/boundary value problem. As such, the problem is not posed as a decision problem, so the notion of NP-completeness does not apply. Indeed, running a simulation for a number of time-steps generally requires resources that are merely a polynomial function of the size of the grid used in the simulation and degree of the system of difference equations.

A computer science idea more applicable to climate prediction is the notion of Kolmogorov complexity wherein you ask “What is the size of the smallest computer program able to generate the answer to some given problem?”

It may well be that some climate simulations would properly require computer programs with storage requirements or time-step requirements that are exponentially related (or worse) to the accuracy desired from them. However, until the climate modelers are willing to specify exactly what they are trying to calculate and the means they are trying to use to calculate it, it won’t be possible to make such analyses. [For an example of useful introspection into modeling chaotic systems, see this paper.]

At the extreme, when a person is trying to calculate a future state of some analog system to some desired degree of accuracy, it is an interesting question whether the smallest, fastest computation capable of simulating the evolution of that system to the desired degree of accuracy isn’t performed by the world itself. That is, it is quite possible that every ‘accurate’ simulation of the evolution of a given system (for example, the climate) is slower than simply letting the universe run its course and measuring the result. Such a system would be non-predictable in a formal sense.
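The decision-problem distinction drawn at the top of this comment can be made concrete with a toy Python sketch (the text strings and cost figures are invented for illustration):

```python
# Toy decision problem from the comment: does the word 'nerd' occur in a text?
# This takes a Yes/No answer, computable in time linear in the text length.
def contains_word(text: str, word: str) -> bool:
    return word in text.lower()

print(contains_word("A young nerd travels to Mars...", "nerd"))  # -> True

# By contrast, a grid simulation is not posed as a decision problem at all;
# its cost is simply polynomial in the grid size and number of time steps.
def simulation_cost(grid_points: int, steps: int, ops_per_point: int = 10) -> int:
    return grid_points * steps * ops_per_point

print(simulation_cost(10**6, 1000))  # -> 10000000000
```

The point of the sketch: NP-completeness is a property of Yes/No questions, while time-stepping a model is a computation whose cost scales polynomially with problem size, so the two notions live in different frameworks.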

I must confess this hasty proclamation was based on a hazy recollection of NP-completeness based on a lecture some fifteen years ago as an undergraduate. I’m very far from an authority on the subject, and a more nuanced appraisal of the issue is appreciated. That’s why I said I’m “inclined to agree”, not that I “fully agree”. Anyone who understands anything about computing science is welcome to explain how GCMs work, in theory and in practice.

At the time of my posting I wasn’t so sure Steve M wanted his blog to span the topic of GCMs. Also, jae was prompting me to read a half dozen papers on solar forcing. Hence my hasty reply.

#152. Thanks for the clarification. I always thought the essential NP-complete problem is the traveling salesman, where you want to find the shortest route to visit a bunch of cities. The difficulty increases exponentially with the number of cities.

In any case, you captured exactly what I was thinking, that is, in the end the climate will be its own best model and no computer program will be able to calculate it faster than the climate actually evolves. Since it’s a nonlinear dynamical system it’s chaotic and will display sensitive dependence on initial conditions. The end result is that even with a perfect model imperfect inputs will result in predictions that are increasingly inaccurate the longer the forecasting time. It’s also true for chaotic systems that exponential increases in accuracy, hence computing time, are required for linear increases in forecasting time.

One of these days the mechanisms of climate will be understood and, like the weather, there will be a well-understood upper bound on how far into the future one can make valid predictions.
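The sensitive dependence on initial conditions described above can be illustrated with the logistic map, a standard toy chaotic system (this is only an analogy, not a weather model; the starting values are arbitrary):

```python
# Two trajectories of the chaotic logistic map x -> 4x(1-x), started one
# millionth apart, separate to order-one distance within a few dozen steps.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.400000, 0.400001
diverged_at = None
for step in range(1, 41):
    x, y = logistic(x), logistic(y)
    if abs(x - y) > 0.1:
        diverged_at = step
        break

print(diverged_at)  # the millionth-sized error is amplified past 0.1 within ~40 steps
```

Since the error roughly doubles per step on average, halving the initial error buys only about one extra step of forecast: accuracy must improve exponentially for a linear gain in forecast time, which is exactly the commenter's point.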

Is weather beyond X days predictable in polynomial time within an error rate of e?

This is an example of a yes/no decision question that for certain (X, e) is surely not P but might be NP. Is that getting closer to the flavor of it, Neil Haven?

I’m not sure that the weather forecasting problem can be phrased in terms of (i.e. reduces down to) the TSP, so I’m not sure it’s NP-complete. It’s a tough problem. It’s a nonlinear problem. But surely that is not enough to admit it to the class of NP problems? [I’m not even sure it makes sense to talk about complexity as a function of “problem size” when the problem size is effectively bounded by the size of the planet, which is fixed (in practice, though not necessarily in theory).]

Trying to extract myself gracefully from a problem about which I know virtually nothing…

I note that processes that exhibit nonlinear sensitivity to initial conditions at local spatial & daily time-scales do not necessarily do so at global/decadal time-space-scales. Weather prediction and climate response to variable inputs are therefore two different classes of problems that are often conflated. Which is why Lorenz (1963) does not refute AGW as deduced from GCMs.

Since climate is the average of weather, are you saying that the average of chaotic processes is non-chaotic? Or am I misunderstanding your point? I can see that it might be bounded, but then the initial chaotic process is bounded as well.

Re #156 Thanks, as always, for your challenges.
1. I don’t think you misunderstand my point. It’s probably the lack of transparency in my reasoning.
2. No, I am not saying that the “average” of several chaotic processes is non-chaotic.

I think your question to me is ill-posed because nonlinear processes can’t be linearly averaged to yield a first-moment expectation. The real question is “how can nonlinear chaos at one space-time scale give rise to linear predictability at another, macro-scale?” I will not try to answer that one tonight. But the short answer is that the nonlinearities giving rise to chaos are local in origin. As soon as you change your frame of reference to something much larger than local, those nonlinearities can (I think) be treated as giving rise to random departures about some global mean field. Thus you are no longer interested in internally-driven local nonlinear dynamics, but externally driven global linear dynamics. i.e. The frame of reference is the problem.

I don’t actually know how GCMs work, and so I am already at the limit of my knowledge. The next time Isaac Held is in the room we should ask him if my interpretation is full of beans. It’s a very important question.

Just think of a boiling pot of liquid. The dynamics are highly nonlinear, chaotic, at the microscale of bubble formation. At meso-scales circulations can form as emergent phenomena, which can behave somewhat linearly before exhibiting nonlinear bifurcations. While at the macroscale, despite all the chaos & complexity, the temperature rises linearly and predictably as the liquid is heated.
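The boiling-pot analogy can be mimicked with the same toy chaotic map: each trajectory is individually unpredictable, yet the ensemble mean is stable (again only an analogy, not a claim about how GCMs actually work):

```python
import random

# 50,000 chaotic logistic-map trajectories from random starting points:
# each one is individually unpredictable, but their mean settles near 0.5.
random.seed(0)
ensemble = [random.random() for _ in range(50_000)]
for _ in range(100):
    ensemble = [4.0 * x * (1.0 - x) for x in ensemble]

mean = sum(ensemble) / len(ensemble)
print(round(mean, 1))  # -> 0.5: a stable macro-quantity from chaotic micro-states
```

This is one concrete sense in which a macroscale statistic can be predictable even though every underlying trajectory is chaotic, though it does not settle whether climate averages behave this way.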

Re #153
Bender, muddling the definition of NP-complete after 15 years is nothing to apologize for. I characterized my clarification as pedantic in all seriousness.

Re #154
Paul, to compound my sins, I will just mention that the optimization version of the traveling salesman problem you describe (find the shortest route…) is actually NP-hard. The decision version (is there a route shorter than x…) is NP-complete. Complexity theory gets out of hand quickly…

Re #155
Bender, now you’ve formulated a decision problem. That is definitely more the flavor of it. If your computation model allows me to take measurements, I can predict the weather of a day X days from now by waiting X days and then taking my measurements. But maybe that’s cheating? :-)

More seriously, even assuming that you know what equations and initial/boundary conditions to use in your predictions, there is no general analytical way that I know of to figure out whether a set of difference equations evolved from a set of initial and boundary conditions (such as a climate or weather model) is going to give you an accurate-enough answer using only polynomial time or polynomial space. It depends on topological characteristics of the phase space of the system (Lyapunov exponents and such) which generally must be estimated numerically. It’s no picnic. It seems that such analyses are not common in climate modeling. Physicists have done some good work on figuring out how far to trust computer simulations of the classical gravitational many-body problem. I don’t know whether the climate modelers read any of the relevant physics literature.
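The decision/optimization distinction for the TSP discussed above can be seen side by side in a brute-force sketch on a made-up 4-city distance matrix (brute force is exponential in the number of cities, which is why it only works at toy sizes):

```python
from itertools import permutations

# Toy symmetric distance matrix for 4 cities (values invented for illustration).
D = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 8],
     [10, 4, 8, 0]]

def tour_length(tour):
    # Total length of a closed tour visiting the cities in the given order.
    return sum(D[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

# Fix city 0 as the start and enumerate all closed tours.
tours = [(0,) + p for p in permutations(range(1, 4))]

# Optimization version (NP-hard): find the shortest tour.
best = min(tours, key=tour_length)
print(tour_length(best))  # -> 23

# Decision version (NP-complete): is there a tour shorter than x?
print(any(tour_length(t) < 25 for t in tours))  # -> True
```

The optimization form asks for the best tour; the decision form only asks a Yes/No question about a threshold, which is the form complexity classes like NP-complete are defined over.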

I agree that whether a system looks chaotic or not depends on the time-space-scale of the quantities you use to describe it. However, it seems to me that the distinction between weather and climate is pretty arbitrary. Weather tends to concern itself with observable quantities at smaller time-space scales than climate, but isn’t this a distinction more of degree than of kind? Do you really want to say that weather prediction and climate prediction are different classes of problems? Or just that weather and climate are different aspects of the same problem allowing for different approximations?

bender, thanks for the example. Your point is well taken, that there are linear emergent phenomena that can come out of chaotic, non-linear processes. However, it seems that the emergent phenomena can be chaotic as well.

Look at wind. It’s blowing outside my house now, and it comes in gusts and puffs, and is nowhere constant. Its motion is non-linear, chaotic, turbulent.

Now, I’m living in Hawaii, and the wind is a “trade wind”, part of the huge Hadley cell circulation. But that circulation is chaotic and non-linear as well, with the ITCZ wandering north and south in an unpredictable fashion.

Similarly, the Mobile Polar Highs drift around the poles, and we cannot predict their movement for more than a few days out. The same is true of hurricanes. In fact, weather, whether local or regional, is not predictable very far into the future.

But the great air motions of the world, which are non-linear, are built up of the small air motions of the world, which are also non-linear. So there seems to be no a priori reason to assume that as we increase the scale, that things will become linear.

Finally, one of the most interesting things that you said is the following:

I’d like you to expand on that, if you will, because we use averages of such non-linear things as wind speed and temperature all the time. It may be that our normal averages are not optimum estimators for the first moment of non-linear processes … but do they not have a first moment?

I’m pretty sure that part of the natural carbon cycle includes sequestering, including via calcium carbonate (sea shells), clathrates, oil, natural gas, and coal. I’m not an expert in the field, but these processes did not stop happening in the distant past; they still happen today, especially for the carbonates, as you notice from the seashells on the beach. Shells of mollusks, and especially phytoplankton, which die and fall to the deep, stay there for millions of years. The ocean is a big place, and this adds up to a tremendous amount of carbon. As I understand it, natural carbon sinks do not play a role in most climate models, despite the massive amounts of CO2 removed thereby. Although burning fossil fuels adds to CO2, so does outgassing from oceans, volcanoes, and so on. The oceans may be a major source, as CO2 solubility goes down as temperature goes up (the ‘warm Coke experiment’). Therefore, and I’m sure I’ll be corrected here, it is not immediately clear where the bulk of the ‘new’ CO2 comes from.

The outgassing ‘warm Coke experiment’ only accounts for about 10 ppm per degree, which is far too little to explain the current rise in CO2. Furthermore, we observe a net uptake by the oceans and a net uptake by the biosphere, so we have a net sink, not a source. Straightforward bookkeeping shows that the sinks are increasing: of the 7 Gt of human emissions per year, on average 45% is sequestered.
It is very clear where the bulk of the ‘new’ CO2 comes from: the burning of fossil fuels.
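The bookkeeping is simple enough to write down. A sketch using this comment's figures (7 Gt C/yr emitted, 45% sequestered) plus the commonly used conversion of about 2.13 Gt C per ppm of CO2:

```python
# Straightforward bookkeeping sketch using the comment's numbers.
emissions_gtc = 7.0      # human emissions per year, Gt C (the comment's figure)
sink_fraction = 0.45     # fraction sequestered by oceans + biosphere

airborne = emissions_gtc * (1 - sink_fraction)  # Gt C staying in the air per year
ppm_per_gtc = 1 / 2.13   # ~2.13 Gt C per ppm of CO2 (standard conversion)
print(round(airborne * ppm_per_gtc, 1))  # -> 1.8 ppm/yr rise
```

The resulting rise of roughly 1.8 ppm per year is in line with the Mauna Loa trend discussed elsewhere in this thread, which is the bookkeeping argument in a nutshell.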

Which implies that 55% isn’t sequestered, and hence the enormous size of the problem of stabilizing atmospheric carbon dioxide concentration at any level. Considering that China accounts for about 25% of total CO2 emissions (about the same as the US) and is expected to double its emissions in the near future, the rest of the world would have to go to negative emissions to stabilize the concentration. Smaller cuts only lengthen the time it takes to reach a given concentration. In the case of Kyoto, the delay, IIRC, is no more than 6 years, even if all signatories actually met their 2012 goals.
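The stabilization arithmetic in this comment can be sketched with its own round numbers (expressed as shares of current global emissions; the 45% sink figure is taken from the bookkeeping comment above):

```python
# Rough illustration of the comment's stabilization arithmetic,
# in shares of current world emissions (all figures are the comment's).
total = 100.0                  # current world emissions, arbitrary units
china_now = 25.0               # China ~25% of total
china_future = 2 * china_now   # "expected to double"

# To merely hold total *emissions* flat, everyone else must cut from 75 to 50:
others_needed = total - china_future
print(others_needed)                                      # -> 50.0
print(round(1 - others_needed / (total - china_now), 2))  # -> 0.33 (a third cut)

# Stabilizing *concentration* requires emissions at or below sink uptake
# (~45% of today's total, per the bookkeeping comment); China's doubled
# emissions alone would already exceed that, hence "negative emissions"
# for the rest of the world.
print(china_future > 0.45 * total)                        # -> True
```

This is only an illustration of the comment's own reasoning, not a carbon-cycle calculation; it ignores how sinks themselves change with concentration.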

Right now, from their vantage point, fossil fuels are their path to a fully developed first-world economy. Like the West, however, the Chinese have nuclear fission technology readily available and have a lot less difficulty dealing with the environmental lobby when it comes to nuclear waste issues, so their current heavy dependence on fossil fuels for power-station energy generation may be only a short-term phenomenon.

Re Hans Erren in #160: I’ve heard this claim: the oceans have heated 1-1.5 degrees. Put into Henry’s law, this would result in a decrease in the water’s ability to absorb CO2 of about 60 ppm. Is it not possible that this release of CO2 could be the main source of the increase of CO2 in the atmosphere?

How is the drop in CO2 between Antarctic proxies and Mauna Loa explained? Why don’t we have direct measurements taken from Antarctica rather than Mauna Loa? An increase in temperature in the Pacific would affect CO2 in the surrounding areas. My guess would be that the cold Antarctic would be much less sensitive to this kind of influence. Measuring CO2 in Antarctica is also more in line with the AGW-alarmist “colder is better” philosophy ;P

#163 Tons of nuclear waste is no problem. As long as it ain’t CO2 😉

#All: Are there no proxies other than ice cores to determine CO2 fluxes? Didn’t anybody take any samples of the atmosphere?

Hans Erren: I’m catching up on CO2 concentrations and just read about the “CO2 thermometer” at Daly’s site. You mentioned in this thread that in places other than Mauna Loa, the relationship looks different. Can you point me to a link to this information? Thanks.

“Water vapor is the most important “greenhouse gas”. Man’s contribution to atmospheric CO2 from the burning of fossil fuels is small, maximum 4% found by carbon isotope mass balance calculations. The “Greenhouse Effect” of this contribution is small and well within natural climatic variability. The amount of fossil fuel carbon is minute compared to the total amount of carbon in the atmosphere, hydrosphere, and lithosphere. The atmospheric CO2 lifetime is about 5 years. The ocean will be able to absorb the larger part of the CO2 that Man can produce through burning of fossil fuels.”

Bob, it seems to be quite difficult to see the difference between the residence time of a particular molecule of CO2 (man-made or not) in the atmosphere and the time needed to reduce any excess CO2 (in mass) back to the dynamic equilibrium that held before the addition of the extra CO2. That is where Segalstad makes his mistake…

The residence time of any molecule of CO2 is governed by a) the amount of 13C-depleted CO2 added to the atmosphere by fossil fuel burning and b) the seasonal/continuous turnover of about 20% of the atmosphere’s CO2 per year: 13C-rich CO2 degassing from the oceans and 13C-depleted atmospheric CO2 being absorbed into them. This gives a half-life residence time of about 5 years, leaving a residual of only 4% of the initially emitted human CO2 molecules. The seasonal/continuous turnover doesn’t add or subtract any net amount of CO2 from the atmosphere over a year (except for temperature variations); it is only an exchange of about the same quantity of molecules into and out of the atmosphere.

The increase/decrease of total CO2 is governed by a) the total amount of CO2 added by humans (no matter what type) and b) the net amount of total CO2 (no matter what type) absorbed by the oceans and vegetation (the net difference between natural sources and sinks). The emissions were in total about 70% of the original amount in the atmosphere, causing an increase of about 30% in atmospheric CO2; the rest of the emissions is dissolved in the oceans and incorporated into vegetation. The half-life for any excess CO2 is somewhere between 30-40 years (several estimates exist)…

In the past 50 years, in every year, the emissions were larger than the increase in the atmosphere. That means that in every year, there were more natural CO2 sinks than sources. Except for some influence of temperature changes (about 10 ppmv/K), almost all of the increase of CO2 in the atmosphere is thus from human emissions…
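The distinction between the two timescales is easy to see numerically. A sketch using the half-lives quoted above (~5 years for molecule exchange; ~35 years taken here as a mid-range value for the 30-40 year excess-mass decay):

```python
# After 50 years, almost none of the *original molecules* remain airborne,
# yet a substantial fraction of the *excess mass* of CO2 still does.
def remaining(half_life_years, years):
    # Fraction left after the given time, for a simple exponential decay.
    return 0.5 ** (years / half_life_years)

years = 50
molecules_left = remaining(5.0, years)   # individual-molecule residence (~5 yr)
excess_left = remaining(35.0, years)     # excess-mass adjustment (~35 yr assumed)

print(round(molecules_left, 3))  # -> 0.001: original molecules almost all exchanged
print(round(excess_left, 2))     # -> 0.37: much of the added mass still in the air
```

This is why a ~5-year residence time for individual molecules says nothing against a multi-decade decay time for the excess concentration: exchange swaps molecules without removing mass.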
