U.S. CCSP Recommends Audit Trails

The U.S. CCSP report on temperature trends includes the following remarkable recommendations on audit trails:

The independent development of data sets and analyses by several independent scientists or teams will serve to quantify structural uncertainty and to provide objective corroboration of the results. In order to encourage further independent scrutiny, data sets and their full metadata (footnote 1) should be made openly available. Comprehensive analyses should be carried out to ascertain the causes of remaining differences between data sets and to refine uncertainty estimates.

In their text, they say:

To ascertain unambiguously the causes of differences in data sets generally requires extensive metadata for each data set (C4; NRC, 2000b). Appropriate metadata, whether obtained from the peer-reviewed literature or from data made available on-line, should include, for data on all relevant spatial and temporal scales:
• Documentation of the raw data and the data sources used in the data set construction to enable quantification of the extent to which the raw data overlap with other similar data sets;
• Details of instrumentation used, the observing practices and environments and their changes over time to help assessments of, or adjustments for, the changing accuracy of the data;
• Supporting information such as any adjustments made to the data and the numbers and locations of the data through time;
• An audit trail of decisions about the adjustments made, including supporting evidence that identifies non-climatic influences on the data and justifies any consequent adjustments to the data that have been made; and
• Uncertainty estimates and their derivation.
This information should be made openly available to the research community.

Nice to see the term “audit trail” in a climate science publication. But how do these guys reconcile these pieties with acquiescence in total obstruction by Phil Jones and the Hockey Team? Now Folland himself is an important originator of SST data – maybe someone should see how they do with requesting data and metadata from him?

1. I agree with the ethos of the proclamation. Perhaps a more useful test than seeing how an individual author does in his practice is to ask for an example of best practice to be highlighted.

2. I hate people not sticking to the topic, Lubos. Is this place a general place for like-minded skeptics to just post random anti-AGW blather as inverses of RC-Lynne? Or is this a place for really exploring the issues and talking real science?

They speak about adjustments to data, but don’t mention archiving computer code, although they mention “data-processing procedures” in footnote 1.

It’s also striking, to me, anyway, that in the recommendations on testing the validity of GCMs they nowhere suggest that someone propagate the parameter uncertainties through a model calculation in order to generate confidence limits for the projections. This seems fundamental to me, and is done in every other branch of science, especially in physics, in order to put boundaries on a predicted value. That this standard method of validity-testing seems invisible to climate physicists is little short of astonishing.
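For readers unfamiliar with the technique, here is a minimal sketch of what “propagating parameter uncertainties” through a model means. Everything below is hypothetical and chosen purely for illustration (a toy one-line “model”, made-up parameter values and error bars); a real GCM has far more parameters and nonlinearity, but the first-order quadrature idea is the same.

```python
import math

def toy_climate_model(sensitivity, forcing):
    """Toy 'model': equilibrium warming = sensitivity * forcing (illustrative only)."""
    return sensitivity * forcing

# Hypothetical parameter values and 1-sigma uncertainties (not real estimates)
s, ds = 0.8, 0.2   # sensitivity, K per (W/m^2)
f, df = 3.7, 0.4   # forcing, W/m^2

# First-order propagation: numerically estimate the partial derivatives,
# then combine the independent uncertainties in quadrature.
eps = 1e-6
dT_ds = (toy_climate_model(s + eps, f) - toy_climate_model(s - eps, f)) / (2 * eps)
dT_df = (toy_climate_model(s, f + eps) - toy_climate_model(s, f - eps)) / (2 * eps)

T = toy_climate_model(s, f)
dT = math.sqrt((dT_ds * ds) ** 2 + (dT_df * df) ** 2)
print(f"projection: {T:.2f} +/- {dT:.2f} K")
```

The point is simply that a projection without the second number is incomplete: the same machinery that produces the central value can produce the confidence limits.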

Luboš, you’re hanging around this thread somewhere — can you comment on how the scientists in astrophysics would react to model predictions offered with no error analysis?

Dear Steve, I apologize for the off-topic comment. Now you reminded me that I have read the recommendation to post general comments to the Road Map, and I will try to follow this recommendation from now on. Sorry again. Also, 18,000 years should have been 12,000 years – whether or not it matters.🙂

Dear Pat Frank #3, while I am a theorist working with formulae that are typically exact, we interact with phenomenologists and cosmologists very intensely (and are interested in all these things ourselves), and I know how they would react. Let me start with the “other” answer. When they or we speak, we often quote a number without an explicit error margin, to get a rough idea. The reason why we often don’t mention the error is that we usually know, from the context, what the order of magnitude of the error is. (The corresponding estimate in climate science is that the error is as big as the effect.) If you inform someone about the newly measured top quark mass, everyone knows that the error is about one percent or 2 GeV from those 175 GeV. In QCD, the errors are several percent. Fourteen significant figures are measured from the electron’s magnetic moment – the most accurate measurement agreeing with theory that we have. And so on.

In particle physics and especially cosmology, people would express the errors by graphs with the confidence level indicated by different colors in the plot.

When actual quantities that consist of a single number are measured or predicted, the error is always listed in the papers. This is of course especially the case for experimental physicists, whose main pride is to measure something precisely. For these physicists, the error is of course more important than the average value of the quantity they measure itself😉, and they want to minimize it.

At the same time, one must say that there are various methods to estimate the errors, and some of the most naive methods can lead to a huge overestimate. For example, if you sum 1 million independent numbers, each of which is determined to plus or minus one, the sum won’t be uncertain by plus or minus one million but only by about plus or minus one thousand (the square root of one million). So one must be careful not to overestimate the errors propagating in the models. As long as there are many stochastic processes around, the errors may be smaller than one could imagine.
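The √N scaling is easy to check numerically. The toy Monte Carlo below (purely illustrative, not anyone’s actual code) sums n independent errors of exactly ±1 and measures the spread of the sum over many trials; the spread comes out near √n, not n.

```python
import random
import statistics

def spread_of_sum(n, trials=500):
    """Standard deviation of the sum of n independent errors, each exactly +1 or -1."""
    sums = [sum(random.choice((-1, 1)) for _ in range(n)) for _ in range(trials)]
    return statistics.pstdev(sums)

for n in (100, 10_000):
    # Naive worst case is +/-n; independence brings it down to about sqrt(n).
    print(n, round(spread_of_sum(n), 1))
```

So a naive "add up the worst cases" bound on 10,000 terms is 100 times too pessimistic, which is exactly the overestimate being warned against.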

Of course, a simple way to estimate the errors is to run the model many times, with different settings of the random generators and perhaps different initial values of parameters within their error margins. One obtains many different results that will have some variance, and one can estimate the error from this ensemble. Such an ensemble spread does not capture every source of error, of course, because the model itself might be insufficiently good.
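As a sketch of that ensemble idea (everything here is a made-up toy, not a real climate model): vary both the random seed, standing in for internal variability, and an uncertain parameter drawn from its hypothetical error distribution, then read the error off the spread of the results.

```python
import random
import statistics

def toy_model(sensitivity, seed):
    """Stand-in for one model run: a deterministic response plus seed-dependent noise."""
    rng = random.Random(seed)
    forced_response = sensitivity * 3.7       # hypothetical forcing of 3.7 W/m^2
    internal_noise = rng.gauss(0.0, 0.15)     # hypothetical internal variability
    return forced_response + internal_noise

param_rng = random.Random(0)
ensemble = []
for seed in range(200):
    # Draw the uncertain parameter from its (hypothetical) error distribution.
    sensitivity = param_rng.gauss(0.8, 0.1)
    ensemble.append(toy_model(sensitivity, seed))

print(f"ensemble mean  {statistics.mean(ensemble):.2f}")
print(f"ensemble sigma {statistics.stdev(ensemble):.2f}")  # crude confidence estimate
```

A single run of such a model, quoted without the sigma, would convey almost nothing about how much the answer depends on the seed and the parameter choices.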

Every physicist in the canonical portions of the physics departments – condensed matter physics, high energy physics, cosmology, atomic physics and optics and a few others – would be shocked to see some results from computer models that are known to generate random outcomes in a broad range of values without any information about the error because everyone would realize that the error is the main thing to be worried about in this case. Without such information, it is natural to expect that someone ran the program several times and picked a result she liked. This is not science: these are computer games.

Feynman has said: “There is a computer disease that anybody who works with computers knows about. It’s a very serious disease and it interferes completely with the work. The trouble with computers is that you ‘play’ with them!” I think that he was primarily worried about the loss of contact with reality during such games: the computer models are not necessarily the same thing as reality even if someone really enjoys this game! They only become a description of reality if their assumptions and results are sufficiently accurately verified against reality.

#6 — “Every physicist in the canonical portions of the physics departments – condensed matter physics, high energy physics, cosmology, atomic physics and optics and a few others – would be shocked to see some results from computer models that are known to generate random outcomes in a broad range of values without any information about the error because everyone would realize that the error is the main thing to be worried about in this case.”

I am not sure why these authors are being implicitly criticised. They have set out clearly the importance of audit, and emphasised that audit must be comprehensive.

They may not be in a position of power where they can force that sort of behaviour on other people, and it may be that bluntly criticising other people may be viewed as counter-productive, but they have made a clear public statement of how one should behave. That sort of thing serves as a marker, both for them, and it can be used as a lever to force these things on other people.

Who knows, they may even have some of our favourite individuals in mind ?

My personal opinion is that best practice will spread if individuals start using datasets with complete and open audit trails preferentially. Let’s be realistic though: this ain’t going to happen overnight.

Ref #3 – archiving code is a red herring unless you expect scientists to all code platform-independently, which is unrealistic in the short term. Anyway, replication of code constitutes an extremely valuable test. So long as the method is sufficiently detailed and the individual suitably qualified, this will constitute solely extra (valuable) work on their part to replicate the results. Good science will not exist by taking short-cuts that could lead to replication of common mistakes through the common use of bad code. In many cases bugs are not “blindingly obvious”. Bottom line: scientists are scientists and not software engineers, so code replication is good and code sharing at best dubious in many cases.

re #10: Thank you for taking part in discussions here! There are plenty of other topics around here, where your opinion would be highly appreciated!

Good science will not exist by taking short-cuts that could lead to replication of common mistakes through the common use of bad code. In many cases bugs are not “blindingly obvious”. Bottom line: scientists are scientists and not software engineers so code replication is good and code sharing at best dubious in many cases.

Although I do agree that code replication is an extremely good thing, I have to disagree about code sharing. In my opinion, the key point is the access to the code. If code sharing is only done among a closed circle of “experts”, it is true that this may lead to the situation where everyone has the same nasty bugs in their code. However, if the code access is open, the bug-sharing problem will be outweighed by the fact that there are so many people checking the code: someone will notice the bug at some point. I think we have plenty of examples from the software world that this indeed works. “Given enough eyeballs, all bugs are shallow.”

Peter, thank you for your commitment to best practices. My own experience has been almost exclusively with the paleo data rather than the surface data, but I’ll try to take a look at what you’ve done.

If the methodology is described in sufficient detail, then the code becomes redundant. My own experience in the paleo field – where the data sets are not large – is that the methods are not described either in sufficient detail or even correctly. In fields like econometrics where similar difficulties are faced and there seems to be a longer history, the conclusion has been that methodological descriptions are virtually always unsatisfactory and there is no alternative except to archive code in order to ensure that all the little methodological steps are recorded.

In the paleo field, there seem to be innumerable little adjustments and selections that are done ad hoc and are never justified formally, so the situation may be different than in your field.

In my world, a combination of machine driven and human performed code reviews are considered a best practice and in some cases are a requirement prior to general availability / first volume shipment of a product. It is interesting that even someone who appears here to be a relatively reasonable / reason driven scientist / peer reviewed author would find it normal to maintain “private” code. This is a testament to the difference in mindsets between the high tech world and the scientific research world. Of course, the high tech world still struggles with attaining this goal. Years ago, “private” code was also the norm here. But what happened was, as applications became more complex and mission critical, software quality became a “critical to quality” item and customers raged for reform. So, by the by, the reform is taking place. Not perfect yet, but now it is considered bad form to not subject code to an appropriate level of transparency within whatever constraints there are from an IP perspective. Of course, open source, no hiding allowed, ever, under any circumstances.

Peter: say you write your code in Fortran and archive it, and that Fortran will only work under QNX on a PA-RISC chip (or some other contrived combination).

I can still take that code—knowing virtually nothing about Fortran—infer what it’s doing and port it to my language of choice on my platform of choice. It’s a lot more work than taking portable code and just running it, but at least I know I can replicate it 100% given sufficient documentation about the language and library/libraries you have used. In the case of Fortran, there’s plenty of documentation on the internet.

And besides, most scientific code doesn’t rely on anything too bizarre and will likely compile on anyone’s platform with some small modifications.

So I don’t think archiving code is a red herring at all, unless you’ve used some language which is so obscure that nobody understands it and for which there are no references available.

#10. I’ve done a quick browse through the URLs provided by Peter Thorne on their radiosonde data. My impression is that they’ve made a real effort to provide thorough documentation of their decisions. Obviously I don’t have the time or resources to personally verify the audit trail and I’ve not had occasion to apply this data set, but it’s nice to see a genuine effort of this type – a refreshing change from the Team.

archiving code is a red herring unless you expect scientists to all code platform independently which is unrealistic in the short term. Anyway, replication of code constitutes an extremely valuable test. So long as the method is sufficiently detailed and the individual suitably qualified this will constitute solely extra (valuable) work on their part to replicate the results.

There are two problems with this.

First, the code allows an auditor to examine every step of the process. This can avoid the “right answer, wrong method” problem.

Second, if problems are found, having the code can save immense struggle with the original author because your “replicated” program works differently than theirs. With the code in hand, the problem can be clearly identified and pinpointed, saving lots of time and fruitless discussion.

I think this should probably be the subject of a separate thread, but let’s give it a go …..

Let’s assume that much of what Steve M says on this site is true (but note that I am personally a fair way from that position). What Steve seems to be saying is that the uncertainties in what climate scientists say about climate change are larger than they claim. I don’t see a lot of claims by Steve of total bias regarding the claims of these scientists. Also, it seems that, after taking a lot of what Steve has said into account, the NAS Committee does not seem to have suggested that there should be any major changes in the “consensus” position. So we seem to be left with the “pre-Steve” claims of climate scientists plus some extra uncertainty due to Steve’s work. Right?

So now we come to the question of due diligence – a subject which we hear much about on this site. How does climateaudit and its posters suggest that the “man (or woman) in the street” should deal with the issue of climate change in his or her professional life? Let’s take a hypothetical example. Suppose I am a developer and I want to build a resort on the waterfront – say in Fiji. Should I apply normal risk assessment procedures and take account of the claimed effect of sea-level rise (however unlikely it is deemed to be)? Should I design the locations of dwellings and other buildings so that the perceived risk of flooding (based on current knowledge) is within some “reasonable” limit? Or should I, on the other hand, simply say “the sea is not rising – I know it isn’t – I won’t take any account of it”?

I think this is a useful discussion to have here – it could go beyond the usual carping at the evil climate scientists and could possibly provide a useful and practical indication of how professionals should react to the “threat” (actual or perceived) of anthropogenic global warming. This is also a discussion which many people can’t cop out of – they have professional responsibilities which require that they make decisions now, using admittedly limited data regarding AGW, about issues of considerable importance (in this case – coastal planning issues).

P.S. let’s not get bogged down with pedantry here. I know “climate change” is different from “anthropogenic climate change” is different from “global warming” – but that is not the issue – the question is: what do I do NOW, in a REAL-WORLD situation, when confronted by a situation which may be impacted by a phenomenon such as sea-level rise?

First of all, as I’m sure you’re aware, any contractor will have to allow for storms, typhoons, tsunamis, etc. when building a seafront hotel or resort. Therefore there will be plenty of allowance made for differences in ocean level built in to begin with. The question then is: what are the combined chances of ocean-level rise plus extreme weather exceeding what I’m building in?

Now this is a good question, but it immediately butts up against the media hype. Even the worst of the warmer scientists will admit, when forced into a corner, that the alarmists are needlessly pessimistic. But a non-science-trained investor, banker or even tourist may think that what the “whacko-environmentalists” say is gospel. So the question is whether this will enter into their calculations. What I think happens is that a bit more cush is built into the plans (which makes things safer anyway against the storm-of-the-century, which is going to occur sometime anyway) and then the PR flacks proclaim that now we’re safe against 500 years of AGW sea-level rise, which is true in one sense, but which wouldn’t matter if we had a rise like the fringe predictions claim and a storm to boot.

Thus an outright lie by the kooks is countered by misleading verbiage by the builder. When an idealistic but honest guy from the insurance company sees the shell game and complains, the builder’s guys can then show him that the extreme sea-level rises have been based on obviously wrong assumptions and can be ignored.

The CCSP statement is important, IMO. CCSP is Bush’s program (well, formally his initiative), and he sort of seems to be paying attention to the synthesis and assessment reports being produced by CCSP. I should make it clear that I am not defending Bush, since IMO he has been the worst “science” president certainly in my lifetime. But the CCSP is important, and the CCSP statement has some chance of making inroads into the transparency of climate science and actually being enforced. I suspect that Steve and Ross had a major (indirect) influence on this.

Actual enforcement of this is another issue. It needs to be enforced by the government funding agencies first and foremost. It then needs to be more thoroughly integrated into our professional ethics and practice, and this is accomplished through professional societies and the institutions that employ scientists and also the institutions that educate scientists (the forthcoming AGU session on the integrity of science shows that our professional society is trying to grapple with this issue). It is time for a culture change in the climate research community, now that our science has become policy relevant.

One further comment on the CCSP temperature trends report. IMO, this is an assessment process/product that actually “worked”. I have a little bit of an insider’s view on what contributed to this assessment actually working. NOAA was assigned as the lead government agency on this, and I serve on the NOAA Climate Working Group where this assessment process and progress was discussed extensively. I also sit on the NAS Climate Research Committee (CRC), which had responsibility for reviewing the draft report from the NOAA-led group (i.e. picking the review committee), and I also landed on the NAS review committee. As a member of the CRC, I made a strong push for Lindzen to be on the review committee, since I thought it important to get a diversity of viewpoints and if Lindzen was going to criticize this, I thought it would be far better for him to be part of the process that might constructively improve the report. The NAS review of the draft assessment report can be obtained online from http://newton.nap.edu/catalog/11285.html.
The review was probably longer than the text of the original draft assessment report. The NAS review was then reviewed by an additional independent group, and revised, before being published and given to the NOAA-led committee writing the assessment report. The main criticism we had of the assessment report was that the “insiders” were merely summarizing their own work, discounting work by others. The only dissenting voice on the assessment team was RP Sr, but his dissents were mostly off on a tangent from the assigned issues. The revised (final) version of the assessment report was totally changed, and the group actually did work to identify the main cause of the discrepancy between the different satellite data sets, although I’m sure the scientific debate will continue on this topic.

In pondering the issues that made this assessment successful, I have come up with the following. These assessment reports are a colossal amount of work, so it makes sense for govt employees to do most of the work. But there is danger in having people doing the assessments who were the major contributors to the products being assessed; however if this had not been the situation, the outcome whereby work was done to identify and rectify a discrepancy would not have been accomplished. The NAS review was obviously critical, and the inclusion of unbiased people with a variety of expertise and perspectives was essential. The final layer of review (reviewing the review) in this instance did not make a huge difference, but it is generally a good practice.

I note that NAS review of the CCSP synthesis and assessment reports is not mandatory, but was requested by NOAA. Such review should be mandatory, and hopefully will be done on each of the forthcoming synthesis and assessment reports.

RE: #17 – I’ll bite – this seems like a decent allegory to encompass a critique of the precautionary principle.

RE: Suppose I am a developer and I want to build a resort on the waterfront – say in Fiji. Should I apply normal risk assessment procedures and take account of the claimed effect of sea-level rise (however unlikely it is deemed to be)? Should I design the locations of dwellings and other buildings so that the perceived risk of flooding (based on current knowledge) is within some “reasonable” limit? Or should I, on the other hand, simply say “the sea is not rising – I know it isn’t – I won’t take any account of it”?

Here would be my approach. I would do a risk-reward analysis that would include 50-, 100- and 500-year storm surges / high surf events. At some point, you have to let nature be nature. I probably would not seek to mitigate against the 500-year surge, but would against the 100-year and all lesser ones. I would imagine that doing this would readily encompass any realistic sea-level change scenarios, be they due to either tectonic or other subsidence, or gross sea-level rise.

So we seem to be left with the “pre-Steve” claims of climate scientists plus some extra uncertainty due to Steve’s work. Right?

I think it is a whole lot worse and significant than this. I think we seem to be left with a basic mistrust in much of what the most vociferous of the climate scientists are saying, because of their failure to release data and methods; their secrecy; their outrageous attacks on those they regard as skeptics/contrarians/amateurs; their down-playing of uncertainties; and their very unscientific exaggerations of the possible effects of AGW. We have the shameful situation of real scientists acting like Al Gore! There is just too much junk science in the field, right now. It is very refreshing to see people like Judith Curry and the Pielkes engaging the uncertainties.

#23. I think that part of the problem with these assessment reports is that they are “volunteer” work from people who are typically on the public purse but paid out of other pockets. I think that there is a need for a big-budget assessment of some key topics by independent people who are paid to assess. I’d prefer that the assessment be done by non-climate specialists, but highly competent in technical areas. Yes, it might subtract funds available for other work, but that happens all the time in other areas and the value of independent verification is worth it. That’s the only way that the taint of alarmism is going to be resolved.

Dave (your post 18): sorry but that isn’t much of an answer. It’s not a matter of EITHER “typhoons, tsunamis, etc.” OR sea-level rise. Sea-level rise would be added on to everything else. I assume a diligent planner would take account of ALL effects – and not overdesign for ANY of them. So ….. what account would you recommend that he/she takes for sea-level rise this century?

Steve Sadlov – in your post 22, you seem to fall into the same trap as Dave (see 18 and my reply 26) – a diligent designer does not overdesign for any one effect, and so must incorporate ALL effects into his/her risk assessment. So I repeat my question to Dave: what account would you recommend that he/she takes for sea-level rise this century?

Incidentally, you seem to have the misconception that planning for a “100-year” event somehow makes a development “safe”. Suppose the development is only planned to last 50 years (which is probably a pretty low estimate); then the probability of the 100-year event happening during that time is about 40% — I don’t call that “due diligence”!
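The 40% figure is straightforward to verify. Assuming independent years, the chance of at least one T-year event during an L-year design life is 1 - (1 - 1/T)^L. A quick sketch, purely illustrative:

```python
def exceedance_prob(return_period, design_life):
    """Chance of at least one T-year event in an L-year design life (independent years)."""
    return 1.0 - (1.0 - 1.0 / return_period) ** design_life

for t in (50, 100, 500):
    # The 100-year case comes out near 40%, as stated above.
    print(f"{t}-year event over a 50-year life: {exceedance_prob(t, 50):.0%}")
```

The same formula shows a 50-year event is more likely than not (about 64%) over a 50-year design life, while even the 500-year event has roughly a 1-in-10 chance of occurring.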

jae – you make a number of statements in 24 which many would find quite outrageous. For example, as for “downplaying uncertainties”, I’d quote the projected sea-level rise for the 21st century from the IPCC Third Assessment Report — a rise of between 0.09 to 0.88 metres between 1990 and 2100 – these don’t seem like “downplayed uncertainties” to me – do you want the range to be any larger (in which case, remember that the upper limit would have to go up)? I’ll also ask you: what account would you recommend that a planner should take for sea-level rise this century?

Steve McIntyre – it would be nice to have your input into this discussion of “due diligence and the planner” (it started at 17). So far, the responses from two posters seem to me to be prime examples of “due recklessness” — basically, their advice is to ignore sea-level rise in planning decisions and assume that the risk will have been covered by some other considerations (e.g. the allowance for storm surges). Now, I don’t think that it is beyond the bounds of possibility that some planners actually read this blog and believe that there are some credible statements made in it. Suppose they take the “advice” they get from climateaudit and apply it to real-world planning decisions (i.e. they take no account of sea-level rise)? Suppose they then have some sea-level related disaster in a decade or so’s time? I think many would accuse them of lack of due diligence in taking the “climateaudit” advice instead of the “conventional wisdom” (i.e. as given in the IPCC reports). Or perhaps the planners would regard climateaudit as lacking due diligence in failing to put caveats on statements included in this blog.

Don’t you think it might be time to consider putting some instruments in place (e.g. carefully worded caveats) to cover yourself for future litigation if/when some of the “advice” included here fails to deliver (and I think such failures could be pretty spectacular!)?

And there are probably equally as many who would not consider what jae has posted as outrageous, but in fact as an accurate statement on the current state of affairs in the global warming arena.

JB, are you familiar with the ALARP principle? Do you understand that risk is essentially the likelihood of a hazard (high sea-level rise) multiplied by its possible consequences (flooding of large areas of inhabited land)? The problem with the ALARP principle is the ‘as low as reasonably PRACTICABLE’ bit. This is particularly true for low-probability hazards (high sea-level rises) which could have major consequences. From an ALARP perspective, when it comes to mitigation, such low-frequency hazards tend to be given the same weight as high-frequency hazards with low consequences. Mitigation (risk reduction) can be achieved in three different ways, i.e. through reduction of the hazard frequency, reduction of the consequences, or both. From an ALARP perspective, the problem with the AGW debate is that the alarmists are claiming a likelihood for, for example, high sea-level rise (and an unjustifiable level of certainty) which justifies their claims for immediate mitigation (significant reductions in CO2 emissions – the Kyoto Protocol). The monies involved in mitigating ‘dangerous global warming’ are enormous. The problem is that the ALARP principle is essentially a cost-benefit trade-off and so should be equally applied to high-frequency, low-consequence hazards. There are many other hazards to which mankind is subjected that could benefit from the application of the ALARP principle more than global warming.

Re: #29 ditto jae too…and hello! sea-level has been rising since the last ice age, what 11,000 yrs? I didn’t need an AGW theory to know that. (Why are most folks not interested in such things until they are presented in an Al-gorian manner with fear/guilt/alarm/ & drama?)

Every year, more and more folks in the USA build houses or resorts on shorelines or on mountains: places that are pretty and prone or subject to natural changes, disasters and fire. There is a price to pay sometimes for that. (is it mostly paid by FEMA?)😉

On a personal note: my father-in-law lives on the North Shore of the island of Oahu (his house is right on the beach – sooo nice). He says his beach has been growing, not shrinking, over the last decade. It’s coming up and covering the steps. IOW, there are fewer steps down to the beach than when he moved in.

welirocks – with reference to what you said in post 33, you should read the literature and also try and be just a little quantitative. To say that “sea-level has been rising since the last ice age, what 11,000 yrs” omits to say BY HOW MUCH it has been rising during the Holocene – and HOW MUCH MORE IT HAS BEEN RISING during the 20th century. The evidence shows the sea-level recovery from the last ice age slowed considerably during the Holocene such that the level now is virtually where it was 2000 years ago. The remaining slow rate of rise from this source is only around 0.25 mm/year, compared with a present sea-level rise of around 3 mm/year – quite an increase!

You can also cherrypick beaches as much as you like but the fact remains that about 70% of the world’s beaches are receding while only about 10% are prograding (i.e. going seawards).

KevinUK (32) – you skirt around my question admirably. I asked nothing about mitigation – I asked what account planners should take of future sea-level rise when planning a development. You talk about things like “low-probability hazards (high sea-level rises)” and “an unjustifiable level of certainty” but don’t seem comfortable to put your own figures on the table and say what you think the 21st-century rise will be (with uncertainty estimates).

jim: There may be a sea level rise and AGW may actually exist. I think the “taking it into account” by investors in seaside property can be rather minor, even if it does exist!

1. The trend (so far) is not happening fast.
2. Even when it starts to speed up, it will still be relatively slow compared to the time required to build seawalls or such.
3. Year-to-year variance of storm surge is much greater than the trend of sea rise per year.

I really don’t expect investors to take much account of the AGW issue in their decisions. They are not going to stop coming to the beach, for instance. They still go to California even though it has earthquakes.

welirocks (37) – I’m afraid I miss both your points. I never said we weren’t still living in the Holocene – what I objected to was the implication that present sea-level rise is just a recovery from the last glaciation.

Barrett: what is the quantitative measurement that you have? Can you calculate to what extent seaside property is having its price depressed? Talking to planners is one thing, but not a market view. Look at oil: I can talk to peak oil panickers who think the implied price should be $200/bbl based on soon running low, and can talk to oilfielders who worry about busts and being back at 10 or 20. But the futures market is fluid and 5 year futures are 55 (or whatever).

#39 Jim Barrett, sure he can speak for himself. I will have him address you when he wakes up. I am up with the chickens here, so to speak.
You said:
To say that “sea-level has been rising since the last ice age, what 11,000 yrs” omits to say BY HOW MUCH it has been rising during the Holocene – and HOW MUCH MORE IT HAS BEEN RISING during the 20th century.

Then you say:
“I never said we weren’t still living in the Holocene”

Ok, so then it seems to me you are highlighting the 20th Century only because you believe humans are the cause of sea level rise. Let me clarify, because I am not out of left field. I suggested the continuing of a melt started thousands of years ago, did I not? And by “0” sea level I mean: what level is the sea supposed to be at in geologic terms? (we are talking about the Earth) What is the “correct” sea level position then? Who says the current rate of melt in geologic terms is “unprecedented”?? Is the Sun’s influence ever considered? And the tilt/wobble of the Earth? How about groundwater addition/influence? Wind? Tectonics? Is there really a “correct” place to argue from here? (I just hate the alarm stuff, so this is how I think and I do read ;))

And I apologise in advance if this talk is too OT and if it gets closed for that reason; it’s okay.

Obviously you’re missing an ironameter. My message was purposely, and I thought obviously, ironic. The point is that the sea level height rise to be considered is small compared to the surge heights to be considered, so the additional amount of protection to be added will be small. You mention that the estimated sea-level rise in the 21st century was something between essentially zero to about a meter (a bit less); this means the rise would likely be about 1/3 meter (given the biases in the calculations). And how much of this would be from AGW vs the historic rise in sea levels? I’d say 10 cm or less. So, I’d expect our contractor would have to allow for surges of several meters. (I’m no expert on the matter, but I’d expect people building on the shore would expect to allow for near misses by hurricanes at the least.) Given that, adding 10 or even 20 cm additional wouldn’t be a major cost, even given that protection isn’t totally linear.

Getting back to the irony, the builder will know he only needs to tweak things but also knows the public is being propagandized and therefore needs a PR campaign which shows he “cares.” Thus, he throws the chicken littles’ tactics back in their faces. If the general public is considered to be too stupid to recognize exaggeration then so be it. The point is you shouldn’t come around to people who DO recognize the propaganda and complain that they’re exaggerating (or minimizing as the case may be) too. Either stick to the facts totally or …. well, no need giving Steve something to snip.

My husband is presenting at the FOP field trip taking place all next week. (his thesis is being published/cited again too) If you are really into being spooked by the sea, come join us at Pt. St. George, Crescent City, in Northern California, this coming Sunday. He’s a tsuami expert.🙂

Have you warned RC or other warmers of “future litigation” if the world spends one or two trillion dollars to cap CO2 emissions and the science eventually indicates that if AGW has occurred at all it was at worst insignificant and possibly even beneficial on balance?

So far, the responses from two posters seem to me to be prime examples of “due recklessness”: basically, their advice is to ignore sea-level rise in planning decisions and assume that the risk will have been covered by some other considerations (e.g. the allowance for storm surges). Now, I don’t think that it is beyond the bounds of possibility that some planners actually read this blog and believe that there are some credible statements made in it. Suppose they take the “advice” they get from climateaudit and apply it to real-world planning decisions (i.e. they take no account of sea-level rise)? Suppose they then have some sea-level related disaster in a decade or so’s time? I think many would accuse them of lack of due diligence in taking the “climateaudit” advice instead of the “conventional wisdom” (i.e. as given in the IPCC reports). Or perhaps the planners would regard climateaudit as lacking due diligence in failing to put caveats on statements included in this blog.

To answer this question, we need to look at a few things. These are:

1. How can we determine the average sea level?

2. How much has sea level changed over the last century?

3. How much of a problem was the last century change?

4. How much is it changing now, and is the change accelerating?

5. Is there any way to predict future changes?

Let me take these issues one at a time.

1) How can we determine the average sea level? “Average” sea level is harder to determine than you might think, because of the constantly changing nature of the sea. The confounding factors are:

a. Tidal oscillations, caused by the gravitational pull of the moon and the sun, cause changes in sea level with a major period from 12 to 24 hours, but which also have longer periods ranging up to half a century or more.

b. Barometric pressure changes from weather systems depress or increase sea levels, with time scales from hours to weeks.

c. The “El Nino” effect, on a time scale of years, strongly affects sea levels in the South Pacific.

d. Winds can cause sea levels to pile up against the land or push them away from the shore, on the time scale of hours to weeks.

e. Seasonal barometric variations have the same effect as those due to weather systems, on the scale of months.

f. The “sloshing” of tides, especially in enclosed basins such as atoll lagoons, can increase or decrease sea levels on a time scale of days to years, depending on the size of the basin.

g. Changes in seawater temperature, on a time scale from years to centuries, can increase or decrease sea levels.

h. The land on which the tide gauge is situated may be rising or falling.

You can see the difficulty. Since 1993, we have had satellite radar altimetry measurements of sea level. However, these suffer from difficulties as well. To measure sea level from space, the combined error (satellite altitude error plus instrument calibration error plus satellite drift error plus instrument drift error plus atmospheric correction error plus sea state error plus unknown error) has to be less than one part, not per million, but per billion.
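The scale of that error budget can be sketched numerically. Assuming the error sources are independent, they combine as a root sum of squares; note that the magnitudes below are invented purely for illustration (they are not published TOPEX values), and the orbit height is only approximate.

```python
import math

# Hypothetical, illustrative error sources for a satellite altimeter (metres).
# These magnitudes are invented for the sketch, not published mission values.
error_sources = {
    "orbit altitude": 0.020,
    "instrument calibration": 0.015,
    "instrument drift": 0.010,
    "atmospheric correction": 0.012,
    "sea state bias": 0.010,
}

# Independent errors combine in quadrature (root sum of squares).
combined = math.sqrt(sum(e ** 2 for e in error_sources.values()))

orbit_height_m = 1_336_000.0  # approximate TOPEX/Poseidon orbit altitude
print(f"combined error: {combined * 100:.1f} cm")
print(f"as a fraction of orbit height: {combined / orbit_height_m:.1e}")
```

Even with these optimistic, made-up numbers the combined error is ~3 cm, a few parts per hundred million of the orbit height, which gives a feel for why mm/yr trends from altimetry are hard-won.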

2. How much has sea level changed over the last century? Due to the difficulties listed above, and the dearth of long term tidal records, this question is hard to answer. The generally accepted value is on the order of 150-200 mm, with large uncertainty. The IPCC puts it at 150 ±100 mm. In San Francisco, it has risen ~200 mm. In parts of the world where the land is slowly sinking, including river mouth deltas such as Bangladesh, the rate is much higher. For example, there has been 400 mm of rise in Bangladesh since 1980, and 600 mm of rise in Calcutta since 1935. (Data from PSMSL)

3. How much of a problem was the last century change? Well … not much. I mean, when people go to list the top ten disasters in the Twentieth Century, the two foot sea level rise in Calcutta doesn’t even come up on the radar …

4. How much is it changing now, and is the change accelerating? There is absolutely no evidence that the change is accelerating. Here is the record from the TOPEX satellite:

As I mentioned, the longer term trend of this data is not all that accurate, because of slow drift. However, it shows no evidence of any increase in the rate of sea level rise. It shows a rise of about 300 mm per century, which is above that of San Francisco, and below that of Calcutta.

Nor is there any evidence of acceleration in the tidal records. Mitchell et al., in their extensive study of Pacific tidal records (Sea Level Rise in Australia and the Pacific), state that:

… it is difficult to find evidence of any change in the sea level trend (acceleration) in the whole region, as might have been expected, based upon the output of the climate models.

5. Is there any way to predict future changes? Well, in a word … no. The climate models are worse than useless, they are deceptive. We do not know whether the earth’s temperature is being affected by human actions, or if so, how much and in which direction. Climate science is in its infancy, as are the climate models, and asking infants for predictions is an exercise in futility.

What would I do if I were a planner? Given the lack of any evidence for change in the rate of sea level rise, I’d assume that there would be a continuation of the current rise in my region, whatever that might be. In some areas of the world, the sea level is dropping … go figure.

The prediction of oceanic damage is problematic at best. Ask any coastal processes engineer, they’ll tell you that it’s a SWAG …

w.

PS: SWAG? Well, there are two kinds of guesses, your WAG, and your SWAG. A WAG is a “wild-assed guess” … and a SWAG, which is much more reliable, is a “scientific wild-assed guess” …

#46, thanks. Your SWAG points are a much better way to put it than my “what is 0 sea level?” angle. Believe it or not, I knew your points, just don’t have the same way with words. Also, typo alert! I do know how to spell tsunami, despite my last post.
I am not quite representative of the population that’s too stupid, but I am close.😉 That’s why all this climate alarm being fed to people, plus the lack of geologic understanding of earth’s age and history among most regular folks, and the instant-gratification generation’s “computers and the internet know and do everything for me” mentality, makes for great propaganda.

re 46: Lake Michigan/Lake Huron monthly average record high and record low are about two meters apart (6.30 feet is the official number) over the period 1918 to present making the centimeter changes in ocean levels seem insignificant. And the lake level oscillates over that range on a ten to 20 year cycle.

Fluctuations in Great Lakes water levels have occurred continually since the Great Lakes formed at the end of the Ice Age. Lake levels can affect the extent of shoreline erosion and shoreline property damage, riparian interests (beach widths and public access), dredging and shipping (depth of navigation channels), construction of marinas and other water dependent facilities, drinking water intakes, cooling water intakes for steel mills and electric generating stations, wetland acreage, and coastal flooding. Lake level records have been kept for “Lake Michigan/Huron” at various gage stations around these lakes since 1860.

A few years ago when Lake Michigan was quite high, property owners were building seawalls to protect their yards and dumping sand in the water to rebuild their beaches. With levels once again near record lows, some are pleased with their new broad sand beaches while others have expensive docks and pilings sitting high in the air far from the water’s edge. And of course, we’ll be spending millions to study the “problem.”

Canada and the United States are launching a $17.5-million study to determine why water levels in the upper Great Lakes have declined to near-record lows.

What could be causing this, you ask … “‘One possible explanation is that global warming has changed rainfall patterns,’ said Ralph Moulton at the Canadian Hydrology Service.”

If we don’t understand what causes the water levels to change in a system that can be completely surveyed by a man on a horse, it’s not likely we’ll understand ocean levels any time soon.

Thank you Willis (46). I’m sure that investors in the Taunovo Bay Resort in Fiji (for which you are apparently the “Construction Manager”) will be delighted to know that you would make no future allowance for sea-level rise other than for “a continuation of the current rise in my region, whatever that might be”. Good old “status quo” stuff ….. and “due diligence” indeed!

And thanks also for the lecture on sea level. It would help if you could get the numbers right though. The current (Third Assessment Report) IPCC estimate for 20th century sea-level rise is 100 mm-200 mm. That’s 150±50 mm, and not “150±100 mm”. Also, it is absolutely incorrect to say that “there is absolutely no evidence that the change is accelerating”. The pre-20th century rise was estimated to be 0.25±0.25 mm/year (IPCC TAR); the 20th century rise was estimated to be 1.5±1.5 mm/year (IPCC TAR); your plot in (46) shows a rise from 1993-2005 of 3.03 mm/year. If that’s not an acceleration then I don’t know what is.

Also, please provide a reference to “Mitchell et al.” and let us know over what period he was talking.

“The point is that the sea level height rise to be considered is small compared to the surge heights to be considered”

Wrong – it depends very much where you are in the world. In many places the surge height is only around a metre, so we are talking about possible sea-level rise of the same order as the surge.

“You mention that the estimated sea-level rise in the 21st century was something between essentially zero to about a meter (a bit less), this means the rise would likely be about 1/3 meter (given the biases in the calculations).”

And now you misunderstand risk analysis. You shouldn’t just plan for the mean but for a proper range of possibilities. The IPCC TAR projections of 21st century sea level were based on the results of seven models. The maximum and minimum of the model projections, for all 35 SRES scenarios, were the basis for the “0.09-0.88 m rise from 1990-2100”. If you look at the projections, the choice of scenario makes little difference to the result, so the number of degrees of freedom in the projections is probably only around 10. Trying to estimate a “full range” from only 10 values is pretty hard. In fact, if you do the sums, you’ll find that the “limits” of the projections shown in the IPCC TAR are probably only about 1.5 standard deviations of the distribution – so there is a quite significant probability that the sea-level rise will be outside these limits (e.g. 9% that it will be greater than 0.88 m).
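The tail-probability argument can be checked with a quick calculation. The sketch below assumes the projection spread is normally distributed and that the quoted 0.09-0.88 m limits sit at ±1.5 standard deviations (the comment’s own interpretation); under exactly those assumptions the upper tail comes out nearer 7% than 9%, and the figure is quite sensitive to the assumed width.

```python
import math

def normal_tail(z: float) -> float:
    """P(Z > z) for a standard normal variable, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

low, high = 0.09, 0.88   # IPCC TAR 1990-2100 projection limits (metres)
k = 1.5                  # assumption: the limits span +/- 1.5 standard deviations

mean = (low + high) / 2
sd = (high - low) / (2 * k)

p_above = normal_tail((high - mean) / sd)   # equals normal_tail(k)
print(f"mean {mean:.3f} m, sd {sd:.3f} m, P(rise > {high} m) = {p_above:.1%}")
# P(Z > 1.5) is about 6.7%
```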

And I think your answer to my original question (even though, thank goodness, you admit to being “no expert on the matter”) is that you would allow “10 or even 20 cm additional” to allow for 21st century sea-level rise.

Interesting. I don’t think I’ll be coming to climateaudit when I’m looking for expert advice on coastal planning.

Jim Barrett #50 and 51:
You are really wrong. I am somewhat of an expert in this field.
My MS degree thesis dealt with this specific issue. Willis’ explanation is absolutely correct. As someone with real field experience in the arena, and living along the So. Cal coast line my entire life, 42 yrs, I can tell you the California sea levels have not come up noticeably more than one cm during that time. Using the IPCC as an example is ridiculous. The organization is political with a political agenda and does not follow scientific methods.

Sea level is a relative term. So for you to imply the gloom and doom scenarios of up to 0.88 of a meter in the next 100 yrs is based on what data from where?
That information from the IPCC doesn’t tell you anything because it depends on where you are measuring it. Many places in the world are going through relative sea level drop. If you know anything about this, then explain to me why that happens? (I already know)

If you spend any time on this board at all, you will realize the IPCC and their reports are not highly respected. I used the IPCC 1995 report as my text book for a graduate level climate change course, in 1997. Even our extremely liberal teacher commented on several of the problems with the report, including the references to anthropogenic influences. And in fact, he would not discuss the issue further, saying only that the science was not there to support the IPCC claims. Since that time, the issue of AGW and potential sea level rise has only become more political and less scientific. Garbage in and garbage out.

As a consulting geologist for an environmental geotechnical company, I do this for a living. Risk analysis is how I make my money. And we are doing fine, thank you very much. If I took your approach to the problem, my clients would be broke and I would be fired.

#51. Jim Barrett, I have never purported to give any advice on coastal planning. I urge due diligence and proper care and this site has never purported to provide any advice on coastal planning. The allegation is ludicrous.

My objection to many climate science articles and publications is their lack of care. For example, the recent NAS panel recommended that bristlecones (strip-bark) be avoided in temperature reconstructions and then failed to check whether the reconstructions that they illustrated contained bristlecones, because they did not bother checking what trees were used. In North’s words, they “winged” it. Unfortunately, this is all too common in climate science. There also appears to have been a lack of care and due diligence in planning for Katrina-type events. While this is an important issue, many other well-qualified people are commenting elsewhere on the issue and I do not have the time or resources to add to this debate.

Abstract: The paper suggests a sequential procedure which gives a fixed-width confidence interval for the stationary availability E[X]/(E[X]+E[Y]) of a repairable system with lifetime X and repair time Y. (5 refs.)

Re #52: “As someone with real field experience in the arena, and living along the So. Cal coast line my entire life, 42 yrs, I can tell you the California sea levels have not come up noticeably more than one cm during that time.”

Ah, rocks, rocks, rocks. It’s not smart to just pull a number out of the air when it’s so very easily checked. CA sea level has gone up 19 cm in the last century, measured at both San Francisco and San Diego (Cayan et al [2006], quoted on page 32 of this report), and it’s been a fairly constant rate, so the real number for 42 years is more like 8 cm. IOW, you’re off by a factor of eight. I’ll be sure to apply that error factor to any future pronouncements from you.

Thank you Willis (46). I’m sure that investors in the Taunovo Bay Resort in Fiji (for which you are apparently the “Construction Manager”) will be delighted to know that you would make no future allowance for sea-level rise other than for “a continuation of the current rise in my region, whatever that might be”. Good old “status quo” stuff ….. and “due diligence” indeed!

And thanks also for the lecture on sea level. It would help if you could get the numbers right though. The current (Third Assessment Report) IPCC estimate for 20th century sea-level rise is 100 mm-200 mm. That’s 150±50 mm, and not “150±100 mm”. Also, it is absolutely incorrect to say that “there is absolutely no evidence that the change is accelerating”. The pre-20th century rise was estimated to be 0.25±0.25 mm/year (IPCC TAR); the 20th century rise was estimated to be 1.5±1.5 mm/year (IPCC TAR); your plot in (46) shows a rise from 1993-2005 of 3.03 mm/year. If that’s not an acceleration then I don’t know what is.

Also, please provide a reference to “Mitchell et al.” and let us know over what period he was talking.

I’ll take these one at a time.

1) I’m no longer with Taunovo Bay. I was working as a sport salmon fishing guide on the Kenai River in Alaska, and I’m currently retired, following my life-long motto, “Retire early … and often”. I’m also not the W. Eschenbach mentioned in 55.

When I was with Taunovo Bay, however, I worked closely with a coastal processes engineer setting the level of the villas above mean sea level. These were fixed at 3.65 m (about 12 feet) above MSL, which, since the tide there is ±3 feet, puts them about nine feet above the highest possible tide. During this process, I learned that much of coastal process engineering is more art than science. The engineers deal with that uncertainty by including very large safety factors to cover all possible contingencies, including possible sea level rise greater than the expected rise. In fact, the uncertainties in things like storm surges and wind speeds are much larger than those of excess sea level rise (above the historical rise, which is directly included in the calculations).
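The siting figures quoted above can be checked with simple arithmetic. This sketch uses only the numbers in the comment (3.65 m floor height, ±3 ft tide) and nothing else:

```python
# Freeboard check using only the figures quoted in the comment above.
FT_PER_M = 3.28084

floor_above_msl_m = 3.65   # villa floor height above mean sea level (metres)
tide_amplitude_ft = 3.0    # tide quoted as +/- 3 feet about MSL

floor_ft = floor_above_msl_m * FT_PER_M       # about 12 ft, as stated
clearance_ft = floor_ft - tide_amplitude_ft   # height above the highest tide
print(f"floor: {floor_ft:.1f} ft above MSL; "
      f"clearance above highest tide: {clearance_ft:.1f} ft")  # about 9 ft
```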

Now, since you didn’t know any of this history, or the coastal process engineer who made the final decisions, or how high the Taunovo Bay Villas are sited above the sea, or the days of work that went into making the decision … with you being completely and totally ignorant of all of that, how dare you accuse me of a lack of due diligence?

2) The IPCC said “Based on tide gauge data, the rate of global mean sea level rise during the 20th century is in the range 1.0 to 2.0 mm/yr, with a central value of 1.5 mm/yr (the central value should not be interpreted as a best estimate)”. This is a typical IPCC statement, no central value, no error estimate. Because of the IPCC habit of clearly saying “two standard errors” when they mean that, and saying nothing when it is one standard error, I have increased the width of their estimate to two standard errors instead of one. Clearly, if the central value is not the best estimate, their error range must be wider than their interval.

3) Mitchell was talking about the last century, during which we would expect to find acceleration due to AGW if any existed. He found none, which agreed with the IPCC TAR conclusion, which was “No significant acceleration in the rate of sea level rise during the 20th century has been detected.”.

Finally, regarding the location of the Mitchell paper, you have the title, you go look it up. You can’t insult me and then ask me to do you any favors.

#56 — Strange thing, but the reference given in that report of the plot of San Francisco tidal gauge history, “Cayan et al., 2006,” leads to a mystery.

The reference list offers two choices: a 2004 PNAS article (vol. 101, 12422-12427) describing GCM-predicted CO2-driven climates over California or a more cryptically-alluded 2006 Journal of Climate article (vol. 19, 1407-21), each of which has Cayan as a co-author, but neither one of which includes tidal gauge data. I wonder where they actually got it.

On the other hand, I’m not disputing the data themselves. It’s well-known, however, that local tidal gauge data can be distorted by local subsidence or uplift, or basin oscillations. E.g. R. E. Flick et al. (2003) “Trends in United States Tidal Datum Statistics and Tide Range” J. Water. Port Coast. Eng. 129, 155-164 show the tidal gauge at San Francisco has recorded a mean sea level increase of 14.5 cm/c (since 1855), but right across the bay, the Alameda gauge has recorded 8.1 cm/c (since 1939).

Which one do you want to believe? The Crescent City gauge on the northern CA coast has recorded a rate of -5 cm/c (since 1933). Is that one more trustworthy?

1. If you are no longer “with Taunovo Bay”, might it not be a good idea to get them to update their web site? It may save them a bit of grief too!

2. If you want to know what the phrase “Based on tide gauge data, the rate of global mean sea level rise during the 20th century is in the range 1.0 to 2.0 mm/yr …..” means, then just read Box 11.1 of Chapter 11 of the IPCC TAR – that says quite explicitly what a “range” means. Arbitrarily multiplying that range by two makes no sense. If you look at my comment 51, you will see that (with a little bit of statistics) it looks as if the “range” quoted by the IPCC is probably around 1.5 standard deviations – but even then there is a fair “uncertainty in the uncertainty”. And if you or anyone else understands what you mean by “if the central value is not the best estimate, their error range must be wider than their interval”, then I’d be glad to hear an explanation – it is news to me that there is a relationship between the confidence interval and the distance between the “central value” and the “best estimate”.

3. As for Mitchell et al., it was never published in a refereed journal. I presume you picked it off the web and now unfortunately it has disappeared. I’ll give you a clue – it was dated 2000. Perhaps you could instead start quoting from the peer-reviewed literature since that time and from papers that don’t just bolster up your flimsy case – I think you might find that a few things have happened. And you don’t seem to have addressed my point about how a pre-20th century rise of 0.25 mm/year, a 20th century (averaged) rise of 1-2 mm/year, and a late-20th century rise of over 3 mm/year doesn’t amount to an acceleration!

Steve McIntyre (54): “….. many other well-qualified people are commenting elsewhere on the issue and I do not have the time or resources to add to this debate.”

A wise decision, Steve. It is much easier and safer to carp from the background rather than to make constructive comments, and possibly even to make a few statements about where you think the future might lead (come on Steve, you aren’t stupid; you must have a view on this, based on all that you have studied in the last few years).

As for your comments about “lack of care” – don’t you agree that MANY of the off-the-cuff comments and judgements on this site have been made with NO CARE WHATSOEVER – many are just plain garbage, and as you point out frequently, this site has a large readership. Doesn’t that worry you just slightly?

The paper itself is a conference paper and I can’t find it online. I otherwise have no knowledge of it or its authors.

What strikes me about this table is the bewildering variation in trend (estimated from tide gauges) between sites. Consider the trend (mm/y) at these three sites:

Williamstown +0.26
Geelong +0.97
Point Lonsdale -0.63

The first two are within Port Phillip Bay, and only 50km or so from each other. Yet the trend at Geelong is three times that at Williamstown. At the head of the bay, Point Lonsdale (closer to Geelong than Geelong is to Williamstown), the sea level is falling!

Another is Adelaide (2.08) vs Port Pirie (-0.19) again proximate.

The uncertainty associated with these estimates is huge. Even John Hunter, on sea-levels at Tuvalu, (to his great credit) assigned an uncertainty larger than the magnitude of his estimate of sea level rise (i.e. the sea level there could be falling).

The last time I looked at tide gauge sea level data it scared me away. How anyone can claim to acceptably estimate the trend in the 19th century is a complete mystery – we can’t even do it for the 20th. I’m sure if the time-series experts like bender and Jean S turned their steely gaze to the sea-level literature we would find all the familiar problems associated with other areas of climate science.

The satellite data looks like a welcome improvement, as long as the instrumental error is minuscule (as mentioned by Willis). Personally, the idea that you can detect mm-scale variations in sea level from a satellite boggles my mind, but I have no expertise in this matter.

Re 60, Jim, first, could I ask you to tone it down a bit? I did nothing, and you accused me of a lack of due diligence. Now you continue your sniping. Why? If I inadvertently insulted you, I apologize … can we move on from there in a more collegial tone?

You point out, quite correctly, that the IPCC claims that the 1-2mm range contains all of the data, with no central value. However, in the same chapter we find the following (my emphasis)

From Table 11.9 one can see that there are six global estimates determined with the use of PGR corrections derived from global models of isostatic adjustment, spanning a range from 1.4 mm/yr (Mitrovica and Davis, 1995; Davis and Mitrovica, 1996) to 2.4 mm/yr (Peltier and Tushingham, 1989, 1991). We consider that these five are consistent within the systematic uncertainty of the PGR models, which may have a range of uncertainty of 0.5 mm/yr depending on earth structure parametrization employed (Mitrovica and Davis, 1995). The average rate of the five estimates is 1.8 mm/yr. There are two other global analyses, of Gornitz and Lebedeff (1987) and Nakiboglu and Lambeck (1991), which yield estimates of 1.2 mm/yr, lower than the first group. Because of the issues raised above with regard to the geological data method for land movement correction, the value of Gornitz and Lebedeff may be underestimated by up to a few tenths of a millimetre per year, although such considerations do not affect the method of Nakiboglu and Lambeck. The differences between the former five and latter two analyses reflect the analysis methods, in particular the differences in corrections for land movements and in selections of tide gauges used, including the effect of any spatial variation in thermal expansion. However, all the discrepancies which could arise as a consequence of different analysis methods remain to be more thoroughly investigated. On the basis of the published literature, we therefore cannot rule out an average rate of sea level rise of as little as 1.0 mm/yr during the 20th century. For the upper bound, we adopt a limit of 2.0 mm/yr, which includes all recent global estimates with some allowance for systematic uncertainty. As with other ranges (see Box 11.1), we do not imply that the central value is the best estimate.

Now, they claim that the 1-2 mm range includes “all recent global estimates” … but right above that, they cite and show in their table an estimate with a result of 2.4 mm per year, another with a result of 1.4 mm per year and a “range of uncertainty”, whatever that means, of ±.5 mm/year. Let’s be conservative and assume that this is a 95% confidence interval. Since the first study gives a 95% confidence interval of 1.9 to 2.9 mm/yr for the sea level rise, and the other gives 0.9 to 1.9 mm/yr … how on earth can they claim that the “range” is 1 to 2? … Typical IPCC non-science.
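The coverage complaint is easy to make concrete. The sketch below takes the two central values quoted from Table 11.9 and applies the ±0.5 mm/yr systematic uncertainty as a symmetric band (the comment’s reading, not necessarily the IPCC’s), then checks whether the 1.0-2.0 mm/yr summary range contains the resulting intervals.

```python
# Does the summary range cover each estimate's uncertainty band?
summary_lo, summary_hi = 1.0, 2.0   # IPCC TAR summary range (mm/yr)

# Central values from the quoted Table 11.9 discussion; the +/- 0.5 mm/yr
# systematic uncertainty is applied symmetrically, as the comment reads it.
estimates = {
    "Peltier & Tushingham (1989, 1991)": (2.4, 0.5),
    "Mitrovica & Davis (1995)": (1.4, 0.5),
}

for name, (centre, unc) in estimates.items():
    lo, hi = centre - unc, centre + unc
    inside = summary_lo <= lo and hi <= summary_hi
    print(f"{name}: {lo:.1f}-{hi:.1f} mm/yr, inside 1.0-2.0: {inside}")
```

Under this reading both intervals spill outside the 1.0-2.0 band, which is the comment’s point; a different interpretation of the “range of uncertainty” would change the conclusion.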

Regarding the acceleration question, both Mitchell and the IPCC report no acceleration in the 20th century, which is where we would find it if it were caused by CO2. Rates of rise have been both higher and lower in the preceding centuries and millennia … your point being? Perhaps you could point us to some citation that shows acceleration of rise in the 20th century before you start saying that people are not practising “due diligence” if they don’t take this so-far-unseen acceleration into account. It sure doesn’t show up in the TOPEX satellite record …

My main point in all of this is, as usual in climate science, the true answer is “we don’t know to that degree of accuracy”. We don’t know how fast the oceans rose in the last century. From the TOPEX data, it now appears that the ocean, over a period of a decade or so, can rise in one part and sink in another … where does that fit with “due diligence”? Will those rises even out? Why do the steric contribution and the meltwater contribution to the sea level rise not equal the total rise? Once again … we don’t know …

Finally, before you get any more abusive about the Mitchell paper, let me point out that it is both cited and referenced in the very Chapter 11 of the IPCC TAR that we have been discussing. It was published by the Australian National Tidal Facility as part of the Proceedings of the Pacific Islands Conference on Climate Change, Climate Variability and Sea Level Rise, Linking Science and Policy, and has been cited in 16 other papers plus the IPCC since that time. Bill Mitchell is the head of the Australian National Tidal Facility. I’ve put a copy of the paper here, just for you.

James Lane (63,64): Taking a simple average of the trends of sea-level records of different lengths and from different places is rather dumb. We know that the variability of single records is such that at least 50 years of record is generally required to estimate the long-term trend. This is not to say that short records are useless. The trick is to put together the records in an objective way – it’s called (heaven forbid) a “reconstruction”. An alternative way to proceed is to look only at the “long” records, as Douglas did a fair while ago, coming up with an estimate (1.8 mm/year for the 20th century) which is close to the present best estimate of about 1.7 mm/year.
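The 50-year rule of thumb quoted here can be illustrated with a back-of-envelope calculation. The sketch below uses the textbook formula for the standard error of an OLS trend through n equally spaced annual values with independent noise; the ~50 mm interannual noise level is an assumed, illustrative figure (real gauge records are autocorrelated, which makes matters worse, so treat these as lower bounds):

```python
import math

def slope_se(n_years, sigma_mm):
    """Standard error of an OLS trend fitted to n_years equally spaced
    annual values with independent noise of std dev sigma_mm.
    Classical result: SE = sigma * sqrt(12 / (n * (n**2 - 1)))."""
    return sigma_mm * math.sqrt(12.0 / (n_years * (n_years**2 - 1)))

# Assumed interannual sea-level noise of 50 mm, purely for illustration.
for years in (10, 20, 50):
    print(years, "yr record:", round(slope_se(years, 50.0), 2), "mm/yr")
```

With these assumptions a 10-year record has a trend standard error of about ±5.5 mm/yr, which swamps a ~1.7 mm/yr signal, while a 50-year record gets down to roughly ±0.5 mm/yr.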

The paper by Mitchell et al. has long been used by contrarians to support whatever happens to be their current fad – however, a collection of tide gauge records (many quite short) of varying length does not, on the surface, tell us very much – but such a paper is ripe for cherrypicking!

Due diligence requires that all the literature on sea-level rise in this region should be considered – not just one unrefereed and hard-to-find paper from 6 years ago.

(but I still think that a pre-20th century rise of 0.25 mm/year, a 20th century (averaged) rise of 1-2 mm/year, and a late-20th century rise of over 3 mm/year is a pretty good indication of an acceleration – it fits the above paper pretty well also.)

Thanks for the Mitchell paper, but of course I had it already.

Unfortunately, I may have misled you earlier in (60) where we were talking about the limits (1-2 mm/year) for the 20th century. In an attempt to clarify what I was saying, I referred back to my comment 51, which of course referred to the IPCC PROJECTIONS for the 21st century – so ignore that bit of my posting 60 – sorry.

Finally, you criticise the “1-2 mm/year” estimate of 20th century rise from the IPCC TAR. You must realise that one cannot simply combine results from a diverse range of papers and workers with simple statistics. For one thing, how do we apply weights to the different estimates? It is necessary to apply some value judgements about which are “good” results and which are “bad”. In fact, the IPCC TAR estimate of 1-2 mm/year was little changed from previous IPCC estimates and nicely brackets the present best estimate of 1.7 mm/year. And, if you don’t think the IPCC did it right, why don’t you come up with a better estimate and publish it?

“Ah, rocks, rocks, rocks. It’s not smart to just pull a number out of the air when it’s so very easily checked. CA sea level has gone up 19 cm in the last century, measured at both San Francisco and San Diego (Cayan et al [2006], quoted on page 32 of this report), and it’s been a fairly constant rate, so the real number for 42 years is more like 8 cm. IOW, you’re off by a factor of eight. I’ll be sure to apply that error factor to any future pronouncements from you.”

Bloom Bloom Bloom, hubby is still snoozing. However I downloaded the report and looked at it. I want to try and get this reply by myself. It’s fun.

Ok, you might want to look at that report again after this. I didn’t read the whole thing, but I found that the 19 cm you are quoting is GLOBAL sea level rise from a model, not California’s actual sea level rise. This is what I see for the chart (on my page 35, not 32):

I quote the description:

“Source: Cayan et al., 2006. Global sea level rise is projected to range from 4-33 inches during 2000-2100. This compares to a rate of approximately 7.6 inches (19 cm) per century.”

The other chart alongside it only shows San Francisco going along with that supposed trend, not the whole coast of California. There is no description for this graph, but in the paragraphs above the chart it states:

“So far there is little evidence that the rate of global sea level rise has accelerated (the rate of California tide gauges has actually flattened during the last several years) but climate models suggest strongly that this may change.”

There go those wily climate models again…

So where do you get “and it’s been a fairly constant rate” in regards to the California coast?

Here’s an EPA page with San Francisco sea level going down in 2000, plotted in mm/yr. Also there is some fairly balanced information in the text, for a global warming page, that is:

re: #56 & 58
San Francisco Bay sits within a trough, and most of it is going down. It’s sinking. Read a book from an introductory California geology course. The bay is an extension of the Great Valley geomorphic province. It has been sinking for the last several hundred thousand years. That’s why it is full of water.

Just up the coast a short distance, the sea level is actually dropping because of uplift. Remember, Steve Bloom, sea level is a relative term that depends on where you are, as Pat Frank is showing as well. That paper you cited is complete nonsense and not based on any real geology, from what I have seen in its references, and without my having read the thing yet.

Read:
California’s Changing Landscapes: A Guide to the Geology of the State, by Gordon B. Oakeshott, 1978. (This is the bible for California geology.)

Or try:
California Geology, by Deborah Harden, 1995

Or:

Geology of California, by Robert M. Norris and Robert W. Webb, 2nd edition, 1990

Therein you will find a real understanding of San Francisco Bay. All three will tell you the sea is not rising there; the bay is sinking.

The tide gauge data needs to be corrected for uplift and subsidence, and the data has not been corrected. This data is worthless unless this is taken into account. Tide gauges measure tides, not tectonics, hence the difference between the San Francisco tide gauge and the Crescent City tide gauge.
BTW, San Diego Bay is also undergoing subsidence, hence the apparent sea level rise. It’s all relative.
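The correction being asked for here is a one-liner in principle, though measuring the land-motion rate is the hard part. A minimal sketch; the rates below are made-up illustrative numbers, not measured values for San Francisco or San Diego:

```python
def geocentric_rate(gauge_rate_mm_yr, land_uplift_mm_yr):
    """A tide gauge measures sea level RELATIVE to the land it sits on.
    If the land moves vertically, the gauge trend mixes the two effects:
        gauge_rate = sea_rate - land_uplift
    so the land-motion-corrected ("geocentric") sea rate is the sum."""
    return gauge_rate_mm_yr + land_uplift_mm_yr

# Illustrative (assumed) numbers only:
# a gauge on subsiding ground (uplift = -1.5 mm/yr) reading +2.0 mm/yr
print(geocentric_rate(2.0, -1.5))   # 0.5 mm/yr of actual sea rise
# a gauge on rising ground (uplift = +3.0 mm/yr) reading -1.0 mm/yr
print(geocentric_rate(-1.0, 3.0))   # 2.0 mm/yr of actual sea rise
```

Two gauges a short distance apart can thus show opposite relative trends while the ocean itself does the same thing at both.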

On a personal note, 30 years of surfing the local breaks in So Cal has shown no evidence of sea level rise.

In the last 100 years, my understanding from my education is that sea level has actually only come up a few cm in California. All natural. Since the beginning of the Holocene, sea level in California has risen several hundred feet. All natural.

Mr and Mrs welirocks (68-71): all you are doing by concentrating on San Francisco is illustrating the common strategy of the contrarians of cherrypicking – just go and look at the literature on GLOBAL sea-level rise. You’ll also find in that literature information on how sea-level rise varies AROUND THE GLOBE, why it varies and how results from one station are NOT measures of global sea-level.

Also, if you are going to claim that the present global sea-level rise is simply a continuation of the recovery from the last glaciation, then it might be an idea to take a look at the literature on sea-level reconstructions over the last glacial cycle (and especially during the Holocene). And make sure you come up with some actual numbers and not just qualitative generalisations!

Jim Barrett #72
What is this, the twilight zone?
We are discussing a link that did that very thing: concentrated on San Francisco. It was provided by Steve Bloom; the PDF from the California EPA in #56. Address your complaint to him.

Also we know perfectly well how sea level varies around the globe. Re-read.

you say:

“Also, if you are going to claim that the present global sea-level rise is simply a continuation of the recovery from the last glaciation, then it might be an idea to take a look at the literature on sea-level reconstructions over the last glacial cycle (and especially during the Holocene). And make sure you come up with some actual numbers.”

No, we don’t. We don’t need actual numbers to comment on the well-known geological conditions of the San Francisco and San Diego Bay areas and the Holocene. We cited sources. Look them up yourself in the library. And for that matter, Gov. Arnold should too.

Do you understand the difference between tidal gauge measurements and sea level rise?
Do you understand how local geological conditions play a major role in this discussion?
Look up what subsidence means.
Go over Willis’ and Frank’s points again.
You are wasting everyone’s bandwidth and time.

#66 — “Due diligence requires that all the literature on sea-level rise in this region should be considered – not just one unrefereed and hard-to-find paper from 6 years ago.”

And yet your comrade in arms, Steve B., in #56 employed two sites from an unpublished, unrefereed internal California government report to suggest that sea levels have been rising all along the West Coast of North America.

Is that not egregious cherry-picking, too? And why haven’t you protested it?

“Mr and Mrs welirocks (68-71): all you are doing by concentrating on San Francisco is illustrating the common strategy of the contrarians of cherrypicking – just go and look at the literature on GLOBAL sea-level rise. You’ll also find in that literature information on how sea-level rise varies AROUND THE GLOBE, why it varies and how results from one station are NOT measures of global sea-level.”

Jim, we all understand that. The issue is the uncertainty surrounding the trends. Examples have been given of widely divergent trends from sites that are only miles apart. Taking a large number of sites from across the globe and averaging them doesn’t solve that problem.

Added to the measurement error is the problem of subsidence or otherwise as mentioned by the Wellirocks, which can apparently be important on quite small scales.

I think anyone that pretends that sea-levels in the 19th century can be reliably estimated or reconstructed is out of their mind, and in the 20th century, prior to satellite observations, I don’t think the situation is much better.

(but I still think that a pre-20th century rise of 0.25 mm/year, a 20th century (averaged) rise of 1-2 mm/year, and a late-20th century rise of over 3 mm/year is a pretty good indication of an acceleration – it fits the above paper pretty well also.)

Thanks for the Church and White reference, Jim, but of course I had it already. The problem is, I don’t put much stock in it.

Why not? Because the errors are greatly underestimated, there has been inadequate adjustment for autocorrelation, and the analysis results disagree with other datasets.

DISAGREEMENT WITH OTHER DATASETS

Let’s look first at the problems with estimating sea level from coastal gauges. Here’s what the TOPEX satellite radar says the change over the last ten years has been like …

Now you tell me, Jim … how accurately could you predict this rise from 10 sea level gauges on the coast in 1870, mostly in Europe and the US? Or even from the 170 current gauges? Most of the coast world-wide hasn’t seen much rise at all in ten years, and sea level’s dropping around San Francisco … but it’s rising where there are no gauges.

Church and White claim that from 10 tide gauges in 1870, we can tell the relative global sea level within ±44 mm (95% confidence) … that’s an inch and three quarters accuracy for the sea level a century and a quarter ago, from 10 tide gauges … hmmm. Given the map above, do you think that’s possible?
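Whether ±44 mm is plausible comes down to how much regional sea level scatters about the global mean and how many effectively independent samples 10 clustered gauges provide. A minimal sketch, with every number assumed purely for illustration (the ~100 mm regional scatter is eyeballed from the kind of decadal swings a TOPEX map shows; the effective-sample counts are hypothetical):

```python
import math

def mean_se(spatial_sd_mm, n_gauges, n_effective=None):
    """Standard error of a global-mean estimate from n gauges, assuming
    point values scatter about the global mean with std dev spatial_sd_mm.
    Spatial correlation between nearby gauges can make the effective
    number of independent samples far smaller than the gauge count."""
    n = n_effective if n_effective is not None else n_gauges
    return spatial_sd_mm / math.sqrt(n)

# Assumed illustrative numbers, not fitted to any dataset:
print(round(mean_se(100.0, 10), 1))                  # if all 10 independent
print(round(mean_se(100.0, 10, n_effective=3), 1))   # if only ~3 effectively are
```

On these assumptions, even ten fully independent gauges give a standard error above 30 mm, and clustering (mostly Europe and the US in 1870) pushes it higher still, which is the nub of the objection to ±44 mm at 95% confidence.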

ERROR UNDERESTIMATION

To see if their errors were reasonable, I decided to compare their error estimates with the error estimates from another dataset for which we have 95% confidence intervals, the HadCRUT3 surface air temperature dataset. Here’s the comparison …

Now, air temperature is easy to measure. We have now, and have had historically, many more air temperature measuring sites than tidal stations. In addition, the sea level must be measured for at least 50 years before we can have any confidence in the results, because that is the length of the tidal cycle. Air temperature, on the other hand, has no such restriction. Sea levels must be measured at high tide and low tide to be of any use, and these times change constantly. Early records are often quite inaccurate, because the levels were not measured at the highest or lowest times. Again, air temperatures do not have this restriction. Automated air temperature measurements of daily highs and lows, using mercury thermometers that measure the extremes, have been around for half a century. Automated tide gauges are a recent development. Sea level measurements are affected by the local rise or fall of the ground level, whereas air temperatures are absolute. Finally, air temperature can be measured anywhere on the globe, while sea levels can only be measured on the coast.

Because of all of these factors, it is totally unreasonable to say that we can measure historical sea levels with greater accuracy than historical air temperature levels, and yet that is what they are saying.

AUTOCORRELATION

Here’s the real problem in all of this. C&W say that they have adjusted for autocorrelation, viz:

For the 20th century, the rise is about 160 mm and the linear least-squares trend is 1.7 ± 0.3 mm yr⁻¹ (95% confidence limits). This error includes allowance for the serial correlation of the time series (four years of data per degree of freedom), uncertainties in GIA corrections (0.09 mm yr⁻¹ from the rms difference between GMSL trends calculated using three different GIA models [Church et al., 2004]) and uncertainties in the EOFs (0.1 mm yr⁻¹, see below).

However, the problem runs much deeper than that. The lag(1) autocorrelation of their yearly data is so extreme (0.90) that:

1) We cannot say that there is a trend in the data (p>0.05, so we cannot reject the null hypothesis).

2) We cannot say whether the quadratic they propose as a fit to the data is any better than a straight line. Thus, we cannot claim any acceleration.
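To put a rough number on the gap: C&W’s quoted allowance of “four years of data per degree of freedom” yields about 33 effective points for an annual series of roughly 132 years (my assumption for 1870-2001), while the standard Quenouille-style AR(1) adjustment at the lag-1 autocorrelation quoted above (0.90) gives far fewer. A minimal sketch:

```python
def effective_n(n, lag1_r):
    """Effective number of independent observations in an AR(1) series,
    using the standard first-order adjustment:
        n_eff = n * (1 - r) / (1 + r)"""
    return n * (1.0 - lag1_r) / (1.0 + lag1_r)

# ~132 annual values (assumed span 1870-2001); lag-1 r = 0.90 as quoted.
n_eff = effective_n(132, 0.90)
print(round(n_eff, 1))  # roughly 6.9 effective degrees of freedom
```

With only about 7 effective degrees of freedom instead of 33, the trend’s standard error inflates by roughly a factor of sqrt(132/6.9) ≈ 4.4 relative to the naive estimate, which is why neither the trend’s significance nor the quadratic (acceleration) fit survives the adjustment.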

For all of these reasons, as I said above, I don’t put much stock in this study …