Ups and downs of sea level projections

The scientific sea level discussion has moved a long way since the last IPCC report was published in 2007 (see our post back then). The Copenhagen Synthesis Report recently concluded that “The updated estimates of the future global mean sea level rise are about double the IPCC projections from 2007”. New Scientist last month ran a nice article on the state of the science, very much in the same vein. But now Mark Siddall, Thomas Stocker and Peter Clark have countered this trend in an article in Nature Geoscience, projecting a global rise of only 7 to 82 cm from 2000 to the end of this century.

Siddall et al. use a semi-empirical approach similar to the one Stefan proposed in Science in 2007 (let’s call that R07) and to Grinsted et al. (2009), which we discussed here. What are the similarities and where do the differences come from?

For short time scales and small temperature changes everything becomes linear and the two new approaches are mathematically equivalent to R07 (see footnote 1). They can all be described by the simple equation:

dS/dt = a ΔT(t) + b (Eq 1)

dS/dt is the rate of change of sea level S, ΔT is the warming above some baseline temperature, and a and b are constants. The baseline temperature can be chosen arbitrarily since any constant temperature offset can be absorbed into b. This becomes clear with an example: Assume you want to compute sea level rise from 1900-2000, using as input a temperature time series like the global GISS data. A clever choice of baseline temperature would then be the temperature around 1900 (averaged over 20 years or so, we’re not interested in weather variability here). Then you can integrate the equation from 1900 to 2000 to get sea level relative to 1900:

S(t) = a ∫ΔT(t’) dt’ + b t (Eq 2)

There are two contributions to 20th C sea level rise: one from the warming in the 20th Century (let’s call this the “new rise”), and a sea level rise that results from any climate changes prior to 1900, at a rate b that was already present in 1900 (let’s call this the “old rise”). This rate is constant for 1900-2000 since the response time scale of sea level is implicitly assumed to be very long in Eq. 1. A simple matlab/octave code is provided below (2).
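For readers who want to play with Eq. 2 directly, here is a self-contained Python sketch, equivalent in spirit to the matlab snippet in footnote 2. The temperature series is an illustrative linear warming ramp, not the actual GISS data; a and b are the R07 values discussed below:

```python
# Integrate Eq. 1 (dS/dt = a*dT + b) with annual time steps over 1900-2000.
a = 0.34   # sea level sensitivity, cm/ºC/year (R07 value)
b = 0.077  # "old rise" rate in 1900, cm/year (R07 value)

years = list(range(1900, 2001))
# Illustrative warming ramp: 0 ºC in 1900 rising linearly to 0.7 ºC in 2000.
dT = [0.7 * (y - 1900) / 100.0 for y in years]

S, sea_level = 0.0, []
for t in dT:
    S += a * t + b            # one forward-Euler year of Eq. 1
    sea_level.append(S)       # sea level relative to 1900, in cm

print(round(sea_level[-1], 1))  # total rise for this illustrative ramp
```

With the real GISS series (century-mean warming of 0.25 ºC rather than the 0.35 ºC of this ramp) the same loop reproduces the 16.2 cm discussed below.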

If you’re only interested in the total rise for 1900-2000, the temperature integral over the GISS data set is 25 ºC years, which is just another way of saying that the mean temperature of the 20th Century was 0.25 ºC above the 1900 baseline. The sea level rise over the 20th Century is thus:

S(1900-2000) = 25 a + 100 b (Eq. 3)
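Eq. 3 turns the model comparison into simple arithmetic. A quick sketch, using the parameter values collected in the table below:

```python
# "new rise" = 25*a and "old rise" = 100*b (Eq. 3); results in cm.
models = {
    "Rahmstorf (R07)":       (0.34, 0.077),
    "Grinsted 'historical'": (0.30, 0.141),
    "Grinsted 'Moberg'":     (0.63, 0.085),
    "Siddall et al":         (0.17, 0.04),
}
for name, (a, b) in models.items():
    print(f"{name}: new {25*a:.2f} cm + old {100*b:.1f} cm = {25*a + 100*b:.2f} cm")
```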

Compared to Eq. 1, both new studies introduce an element of non-linearity. In the approach of Grinsted et al., sea level rise may flatten off (compared to what Eq. 1 gives) on time scales as short as a century, since they work with a single equilibration time scale τ for sea level, with estimates ranging from 200 years to 1200 years. It is a valid idea that part of sea level rise responds on such time scales, but this is unlikely to be the full story given the long response time of big ice sheets.

Siddall et al. in contrast find a time scale of 2900 years, but introduce a non-linearity in the equilibrium response of sea level to temperature (see their curve in Fig. 1 and footnote 3 below): it flattens off strongly for warm temperatures. The reason for both the long time scale and the shape of their equilibrium curve is that this curve is dominated by ice volume changes. The flattening at the warm end is because sea level has little scope to rise much further once the Earth has run out of ice. However, their model is constructed so that this equilibrium curve determines the rate of sea level rise right from the beginning of melting, when the shortage of ice arising later should not play a role yet. Hence, we consider this nonlinearity, which is partly responsible for the lower future projections compared to R07, physically unrealistic. In contrast, there are some good reasons for the assumption of linearity (see below).

Comparison of model parameters

But back to the linear case and Eq. 1: how do the parameter choices compare? a is a (more or less) universal constant linking sea level to temperature changes, one could call it the sea level sensitivity. b is more situation-specific in that it depends both on the chosen temperature baseline and the time history of previous climate changes, so one has to be very careful when comparing b between different models.

For R07, and referenced to a baseline temperature for the year 1900, we get a = 0.34 cm/ºC/year and b = 0.077 cm/year. Corresponding values of Grinsted et al. are shown in the table (thanks to Aslak for giving those to us!).

For Siddall et al, a = s/τ, where s is the slope of their sea level curve, which near present temperatures is 4.8 meters per ºC, and τ is the response time scale. Thus a = 0.17 cm/ºC/year and b = 0.04 cm/year (see table). The latter follows from the fact that their 19th Century sea level rise, with flat temperatures (ΔT(t) = 0), is 4 cm. Thus, in the model of Siddall et al, sea level (near the present climate) is only half as sensitive to warming as in R07. This is a second reason why their projection is lower than R07.
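In code, the conversion of Siddall et al's parameters into the notation of Eq. 1 is a one-liner each (a sketch, using the values just quoted):

```python
s = 4.8 * 100   # slope of their equilibrium curve near present: 4.8 m/ºC -> cm/ºC
tau = 2900.0    # their response time scale in years

a = s / tau     # sea level sensitivity, cm/ºC/year
b = 4.0 / 100   # 4 cm of 19th C rise over 100 flat-temperature years -> cm/year
print(f"a = {a:.2f} cm/ºC/year, b = {b:.2f} cm/year")
```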

Model                         | a [cm/ºC/year] | b [cm/year] | “new rise” [cm] (25a) | “old rise” [cm] (100b) | 25a+100b [cm] | total model rise [cm]
Rahmstorf                     | 0.34           | 0.077       | 8.5                   | 7.7                    | 16.2          | 16.2
Grinsted et al “historical”   | 0.30           | 0.141       | 7.5                   | 14.1                   | 21.6          | 21.3
Grinsted et al “Moberg”       | 0.63           | 0.085       | (15.8)                | (8.5)                  | (24.3)        | 20.6
Siddall et al                 | 0.17           | 0.04        | 4.3                   | 4.0                    | 8.3           | 8.3 (?) 7.9

Performance for 20th Century sea level rise

For the 20th Century we can compute the “new” sea level rise due to 20th Century warming and the “old” rise due to earlier climate changes from Eq. 3. The results are shown in the table. From Grinsted et al., we show two versions fitted to different data sets: one fitted only to “historical” data, using the Jevrejeva et al. (2006) sea level record from 1850, and one using the Moberg et al. (2005) temperature reconstruction with the extended Amsterdam sea level record starting in the year 1700.

First note that “old” and “new” rise are of similar magnitude for the 20th Century because of the small average warming of 0.25 ºC. But it is the a-term in Eq. (2) that matters for the future, since with future warming the temperature integral becomes many times larger. It is thus important to realise that the total 20th Century rise is not a useful data constraint on a, because one can get this right for any value of a as long as b is chosen accordingly. To constrain the value of a – which dominates the 21st Century projections — one needs to look at the “new rise”. How much has sea level rise accelerated over the 20th Century, in response to rising temperatures? That determines how much it will accelerate in future when warming continues.

The Rahmstorf model and the Grinsted “historical” case are by definition in excellent agreement with 20th Century data (and get similar values of a) since they have been tuned to those. The main difference arises from the differences between the two sea level data sets used: Church and White (2006) by Rahmstorf, Jevrejeva et al. (2006) by Grinsted et al. Since the “historical” case of Grinsted et al. finds a ~1200-year response time scale, these two models are almost fully equivalent on a century time scale (e^(−100/1200) = 0.92) and give nearly the same results. The total model rise in the last column is just 1.5 percent less than that based on the linear Eq. 3 because of that finite response time scale.

For the Grinsted “Moberg” case the response time scale is only ~210 years, hence our linear approximation becomes poor already on a century time scale (e^(−100/210) = 0.62; the total rise is 15% less than the linear estimate), which is why we give the linear estimates only in brackets for comparison here.
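The exponential factors quoted in the last two paragraphs are easy to verify (a quick Python check):

```python
import math

# How much of the equilibrium response is still pending after 100 years:
# e^(-t/tau) close to 1 means the linear Eq. 1 is still a good approximation.
for tau in (1200, 210):
    print(f"tau = {tau:4d} yr: e^(-100/{tau}) = {math.exp(-100 / tau):.2f}")
# prints 0.92 and 0.62
```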

The rise predicted by Siddall et al is much lower. That is not surprising, since their parameters were fitted to the slow changes of the big ice sheets (time scale τ=2900 years) and don’t “see” the early response caused by thermal expansion and mountain glaciers, which makes up most of the 20th Century sea level rise. What is surprising, though, is that Siddall et al. in their paper claim that their parameter values reproduce 20th Century sea level rise. This appears to be a calculation error (4); this will be resolved in the peer-reviewed literature. Our values in the above table are computed correctly (in our understanding) using the same parameters as used by the authors in generating their Fig.3. Their model with the parameters fitted to glacial-interglacial data thus underestimates 20th Century sea level rise by a factor of two.

Frosty legacy: We cannot afford to lose even a few percent of the land ice on Earth, which in total would be enough to raise global sea levels by 65 meters. (Calving front in Svalbard, photo by S.R.)

Future projections

It thus looks like R07 and Grinsted et al. both reproduce 20th Century sea level rise and both get similar projections for the 21st Century. Siddall et al. get much lower projections but also strongly under-estimate 20th Century sea level rise. We suspect this will hold more generally: it would seem hard to reproduce the 20th Century evolution (including acceleration) but then get very different results for the 21st Century, with the basic semi-empirical approach common to these three papers.

In fact, the lower part of their 7-82 cm range appears to be rather implausible. At the current rate, 7 cm of sea level rise since 2000 will be reached already in 2020 (see graph). And Eq. 1 guarantees one thing for any positive value of a: if the 21st Century is warmer than the 20th, then sea level must rise faster. In fact the ratio of new sea level rise in the 21st Century to new sea level rise in the 20th Century according to Eq. 2 does not depend on a or b and is simply equal to the ratio of the century-mean temperatures, T21/T20 (both measured relative to the 1900 baseline). For the “coldest” IPCC scenario (1.1 ºC warming for 2000-2100), with temperatures in 2000 already about 0.74 ºC above the 1900 baseline, the 21st Century mean works out to roughly 0.74 + 1.1/2 ≈ 1.3 ºC, so this ratio is 1.3 ºC / 0.25 ºC = 5.2. Thus even in the most optimistic IPCC case, the linear semi-empirical approach predicts about five times the “new” sea level rise found for the 20th Century, regardless of parameter uncertainty. In our view, when presenting numbers to the public, scientists need to be equally cautious about erring on the low side as on the high side. For society, after all, under-estimating global warming is likely the greater danger.
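That this ratio is independent of a and b can be checked directly against Eq. 2 (a small Python sketch; any positive sensitivity gives the same answer):

```python
# "new rise" over a century = a * (century-mean warming) * 100 years (Eq. 2),
# so the 21st/20th Century ratio collapses to T21/T20 for every a.
T20, T21 = 0.25, 1.30   # century-mean warming above the 1900 baseline, ºC

for a in (0.17, 0.34, 0.63):                  # any positive value of a
    ratio = (a * T21 * 100) / (a * T20 * 100)
    assert abs(ratio - T21 / T20) < 1e-12
print(f"{T21 / T20:.1f}")  # 5.2
```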

Does the world have to be linear?

How do we know that the relationship between temperature rise and sea level rate is linear, also for the several degrees to be expected, when the 20th century has only given us a foretaste of 0.7 degrees? The short answer is: we don’t.

A slightly longer answer is this. First we need to distinguish two things: linearity in temperature (at a given point in time, and all else being equal), and linearity as the system evolves over time. The two are conflated in the real world, because temperature is increasing over time.

Linearity in temperature is a very reasonable assumption often used by glaciologists. It is based on a heat flow argument: the global temperature anomaly represents a heat flow imbalance. Some of the excess heat will go into slowly warming the deep ocean, some will be used to melt land ice, and a tiny bit will hang around in the atmosphere to be picked up by the surface station network. If the anomaly is 2 ºC, the heat flow imbalance should be double that caused by a 1 ºC anomaly. That idea is supported by the fact that the warming pattern basically stays the same: a 4 ºC global warming scenario has basically the same spatial pattern as a 2 ºC global warming scenario, only the numbers are twice as big (cf. Figure SPM6 of the IPCC report). It’s the same as for the heating requirement of your house: if the temperature difference to the outside is twice as big, it will lose twice the amount of heat and you need twice the heating power to keep it warm. It’s this “linearity in temperature” assumption that the Siddall et al. approach rejects.

Linearity over time is quite a different matter. There are many reasons why this cannot hold indefinitely, even though it seems to work well for the past 120 years at least. R07 already discusses this and mentions that glaciers will simply run out of ice after some time. Grinsted et al. took this into account with a finite time scale. We agree with this approach – we merely have some reservations about whether it can be done with a single time scale, and whether the data they used really allow this time scale to be constrained. And there are arguments (e.g. by Jim Hansen) that over time the ice loss may be faster than the linear approach suggests, once the ice gets wet and soft and starts sliding. So ultimately we do not know how much longer the system will behave in an approximately linear fashion, and we do not know yet whether the real sea level rise will then be slower or faster than suggested by the linear approach of Eq. 1.

Is there hope that, with a modified method, we may successfully constrain sea level rise in the 21st Century from paleoclimatic data? Let us spell out what the question is: How will sea level in the present climate state respond on a century time scale to a rapid global warming? We highlight three aspects here.

Present climate state. It is likely that a different climate state (e.g. the glacial with its huge northern ice sheets) has a very different sea level sensitivity than the present. Siddall et al. tried to account for that with their equilibrium sea level curve – but we think the final equilibrium state does not contain the required information about the initial transient sensitivity.

Century time scale. Sea level responds on various time scales – years for the ocean mixed layer thermal expansion, decades for mountain glaciers, centuries for deep ocean expansion, and millennia for big ice sheets. Tuning a model to data dominated by a particular time scale – e.g. the multi-century time scale of Grinsted et al. or the multi-millennia time scale of Siddall et al. – does not mean the results carry over to a shorter time scale of interest.

Global warming. We need to know how sea level – oceans, mountain glaciers, big ice sheets all taken together – responds to a globally near-uniform forcing (like greenhouse gas or solar activity changes). Glacial-interglacial climate changes are forced by big and highly regional and seasonal orbital insolation changes and do not provide this information. Siddall et al. use a local temperature curve from Greenland and assume there is a constant conversion factor to global-mean temperature that applies across the ages and across different mechanisms of climate change. This problem is not discussed much in the paper; it is implicit in their non-dimensional temperature, which is normalised by the glacial-Holocene temperature difference. Their best guess for this is 4.2 ºC (as an aside, our published best guess is 5.8 ºC, well outside the uncertainty range considered by Siddall et al). But is a 20-degree change in Greenland temperature simply equivalent to a 4.2-degree global change? And how does local temperature translate into a global temperature for Dansgaard-Oeschger events, which are generally assumed to be caused by ocean circulation changes and lead to a temperature seesaw effect between the northern and southern hemispheres? What if we used their amplitude to normalise temperature – given that their imprint on global mean temperature is approximately zero?

Overall, we find these problems extremely daunting. For a good constraint for the 21st Century, one would need sufficiently accurate paleoclimatic data that reflect a sea level rise (a drop would not do – ice melts much faster than it grows) on a century time scale in response to a global forcing, preferably from a climate state similar to ours – notably with a similar distribution of ice on the planet. If anyone is aware of suitable data, we’d be most interested to hear about them!

Update (8 Sept): We have now received the computer code of Siddall et al (thanks to Mark for sending it). It confirms our analysis above. The code effectively assumes that the warming over each century applies for the whole century. I.e., the time step for the 20th Century assumes the whole century was 0.74 ºC warmer than 1900, rather than just an average of 0.25 ºC warmer as discussed above. When this is corrected, the 20th Century rise reduces from 15 cm to 8 cm in the model (consistent with our linear estimate given above). The 21st Century projections ranging from 32-48 cm in their Table 1 (best estimates) reduce to 24-32 cm.
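In our understanding, the effect of that time-step error can be illustrated with the linear Eq. 3 (a rough Python sketch using the Siddall et al parameters from the table above; the exact numbers from their full nonlinear model differ slightly):

```python
a, b = 0.17, 0.04   # Siddall et al in the notation of Eq. 1 (cm/ºC/yr, cm/yr)

# Correct: the 20th Century was on average 0.25 ºC warmer than 1900.
correct = a * 0.25 * 100 + b * 100
# Erroneous: applying the end-of-century warming (0.74 ºC) to the whole century.
wrong = a * 0.74 * 100 + b * 100

print(f"correct ~{correct:.0f} cm, with the time-step error ~{wrong:.0f} cm")
```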

Martin Vermeer is a geodesist at the Helsinki University of Technology in Finland.

Footnotes

(1) Siddall et al. use two steps. First they determine an equilibrium sea level for each temperature (their Eq 1, and shown in their Fig. 1). Second, they assume an exponential approach of sea level to this equilibrium value in their Eq. 2, which (slightly simplified, for the case of rising sea level) reads:

dS/dt = (Se(T) – S(t)) / τ.

Here S is the current sea level (a function of time t), Se the equilibrium sea level (a function of temperature T), and τ the time scale over which this equilibrium is approached (which they find to be 2900 years).
Now imagine the temperature rises. Then Se(T) increases, causing a rise in sea level dS/dt. If you only look at short time scales like 100 years (a tiny fraction of those 2900 years response time), S(t) can be considered constant, so the equation simplifies to

dS/dt = Se(T)/ τ + constant.

Now Se(T) is a non-linear function, but for small temperature changes (like 1 ºC) this can be approximated well by a linear dependence Se(T) = s * T + constant. Which gives us

dS/dt = s/τ * T + constant, i.e. Eq (1) in the main post above.

R07 on the other hand used:
dS/dt = a * (T – T0), which is also Eq. (1) above.
Note that a = s/τ and b = –a*T0 in our notation.
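To see numerically why this linearization is harmless on a century time scale, compare the full exponential relaxation with the linearized rate (a Python sketch with the near-present values s = 4.8 m/ºC and τ = 2900 years, and an illustrative 1 ºC step warming):

```python
import math

s, tau, T = 480.0, 2900.0, 1.0   # cm/ºC, years, ºC (step warming)

# Exponential relaxation toward Se = s*T (the Siddall et al form above),
# starting from S(0) = 0, evaluated after 100 years...
full = s * T * (1.0 - math.exp(-100.0 / tau))
# ...versus the linearized rate used in the main post, dS/dt = (s/tau)*T:
linear = (s / tau) * T * 100.0

print(f"full: {full:.1f} cm, linearized: {linear:.1f} cm")
```

The two differ by under 2% here, which is why, for small changes and a millennial τ, the linear Eq. 1 is an excellent approximation over a century.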

(2) Here is a very basic matlab/octave script that computes a sea level curve from a given temperature curve according to Eq. 2 above. The full matlab script used in R07, including the data files, is available as supporting online material from Science.
% Semi-empirical sea level model - very basic version
% tempg: annual temperature anomalies from 1880 on, so indices 11:30 = 1890-1909
T1900=mean(tempg(11:30)); T=tempg-T1900;
S=cumsum(a*T+b);  % integrate Eq. 1 with annual steps (Eq. 2) to get sea level

(4) We did not yet receive the code at the time of writing, but based on correspondence with the authors conclude that for their values in Fig. 3 and table 1, Siddall et al. integrated sea level with 100-year time steps with a highly inaccurate numerical method, thus greatly overestimating the a-term. In their supporting online information they show a different calculation for the 20th Century with annual time steps (their Fig. 5SI). This is numerically correct, giving an a-term of about 4 cm, but uses a different value of b close to 0.12 cm/year to obtain the correct total 20th Century rise.

415 Responses to “Ups and downs of sea level projections”

Presumably it’s been thought through already, but I’ll mention it anyway. To obtain a closer agreement with the observed data, would it be more appropriate to consider the Arctic, Greenland, and Antarctica as separate compartments for the purpose of using the linearised equations? Since the rate of temperature increase in the Arctic region is much greater than in nonpolar latitudes, treating it as a single compartment with its own parameters would allow a better fit without losing the rationale of using linearised equations. If the compartments use the same ΔT(t), namely the global ΔT(t), then there is presumably a different “a” parameter for each compartment, but the “b” is the same parameter for all 3 compartments, giving a total of 4 parameters to tune, instead of 2. Any feedback?

Thanks for a nice review. I have some questions (perhaps from misunderstanding…):

– To what extent is the parameter “b” (the old sea level rise that existed before human perturbation) constrained by data? It can not be arbitrarily adjusted it seems to me. The better constrained “b” is, the more useful the 20th century record would be to pin down the value of “a”.

– The assumed linearity of sea level rise with temperature holds for short timescales and small temperature changes, you write. Said linearity makes sense for thermal expansion and pure melting, but not for mechanical instabilities speeding up the process. Does this mean that this approach doesn’t include the potential effects of ice sheet dynamics and how they could speed up sea level rise? Does that mean that this approach gives a lower limit of sea level rise, excluding any bad surprises?

– As to your last question, are data from the Eemian (last interglacial period) not sufficiently accurate to provide stronger constraints?

– How valid of an analogue is the Eemian for the eventual sea level rise (as opposed to the rate) we’ll experience at a given temperature? If it is anywhere near valid, it would be a strong argument to limit future warming to remain below the Eemian value (when the global average temperature was 2 degrees warmer and sea level was 6 metres higher than now) or at least keep the overshoot as short and limited as possible.

[Response: How do you know it was 2 ºC warmer globally? In our paper on the Eemian we concluded otherwise. -stefan]

Stefan,
I’ve read different numbers about the global average temperature during the Eemian: between 1 and 2 deg C. Here’s an article pointing to a value of 2 degrees:
High rates of sea-level rise during the last interglacial period
E. J. Rohling et al., Nature Geoscience 1, 38-42 (2008). http://www.nature.com/ngeo/journal/v1/n1/abs/ngeo.2007.28.html
“During MIS-5e, such rates of sea-level rise occurred when the global mean temperature was 2 °C higher than today,”

In 2007, the IPCC noted that “Global average sea level rose at an average rate of 1.8 [1.3 to 2.3] mm per year over 1961 to 2003”, and concluded that “No significant acceleration in the rate of sea level rise during the 20th century has been detected”.

[Response: See the title of Church and White (2006) in the reference list above. And note that Grinsted et al. get a consistent number for a from a very different sea level data set. -stefan]

Though I appreciate the general comparison of the studies, it would seem rather simplistic when considering expansion, evaporation, eustasy and isostasy, versus melt water being added. Maybe it would help if you outlined the assumptions behind the formula and works contained in the comparison for us laymen?

How complicated would this cause the simple formula, you provided, to become? Would the effects of these additional processes only be a small percentage when compared to added runoff or thermal expansion?

Cheers!
Dave Cooke

[Response: The basic idea of this approach is not that individual physical processes are modelled (that’s actually the more common and traditional approach), but rather that we learn from data about past sea level rise how sea level responds to a temperature change. Those data of past sea level rise of course tell us only one thing: the grand total. So with this approach you cannot easily break down sea level rise into contributions from different processes. -stefan]

@6 (John) – A meltwater lake is clearly a surface phenomenon. For it to be caused by a geologic heat source, there would have to be a truly vast amount of melt at the ice/rock level. We’d have noticed that by one means (radar) or another (jökulhlaup).

I have a number of questions on this topic, but most are not sufficiently thought through by me yet to credibly pose. One simple basic question: How can sea level rise (17cm since 1900 e.g.) be measured with any (or much) confidence? In earlier times it relied on dubious tide markers. Even the uncertain tide markers had a very limited scope compared to the entire ocean(s). The steady state sea level varies by tens of meters around the oceans which swamps individual measurements in the cm, let alone mm, range. (Granted the sparse individual measurements can produce their own comparisons – forgetting their own uncertainty, but how can that be projected to an average overall sea level?) Even current satellite measurements — theoretically highly accurate — seems like the old ‘measure with a micrometer, mark with a chalk, cut with an ax’ problem. When a satellite picks up a 1mm (let alone 0.1mm) change, how is the instantaneous tide level, the wave height, even the satellite’s orbit, etc., taken into account to build confidence in the 1mm measurement. Satellites also have the problem with projecting over the entire ocean (what’s the baseline per latitude, longitude, time of day, time of year, lunar phase, etc), though theoretically (again) they could do a much more thorough job than the tide markers.

How do we know with even the slightest confidence the 17cm rise since 1900 (maybe easier than the others), the 5cm rise since 1990, etc? The relative consistency among all of the measurements points in the right direction; but it’s still a long row to hoe. Or are there other metrics and justifications?

Apparently then I am confused, if you were establishing the shift in levels based on warming trends and according to my review of the GHCN data, that the period between 1900 and 1930 would have been slight and the change between 1920 and 1950 should have been dramatically different. Matter of fact, it should appear that the amount of expansion between 1920 and 1950 should be nearly 80% of the change between 1970 and 2000.

This then begs a question: if the evaluation is of change in a variable and the observed effect, should not the evaluations be marked by the extremes or outliers in the series, to try to establish the slope of the relationship between temperature and sea level? It would seem that if you were to attempt a comparative study of slopes you should be able to establish a correlation, if a minimum of 10 events can be established.

The benefit is once this is done you could go back to the data sets and establish the contributor sets and begin the isolation of the participation of each to the change (Hmmm, …this seems to be the top down versus a traditional bottom up approach. Not that it is wrong, only that where one begins with the causes and tries to suggest the ends, the other begins with the ends and tries to suggest the cause. This is similar to economic systems, most times both miss major deviations to the hoped for steady state…)

How does this help in the long run? Does this then provide a planning model and a suggestion for the rate that action must be taken? If so then would it not seem that changes that happened between 1920 and 1950 and later again between 1970 and 2000 would be something to investigate to suggest the actions that would be necessary in the future? If true it generally required the simple removal of formerly habitable land from the municipality Zoning and Planning Board and higher expense planning for beach replenishment.

Is there any suggestion that the results of the present data would require any special response or preparedness for disasters or insurance company outlays, that would exceed what has been required in the past? Is it possible that this could lead insurance companies to have a voice in what they are willing to insure based on these models? Sorry for the questions, just I see many interrelated things and worry about the data sets underlying the planning that these models could feed.

Again my undying thanks for your time and effort. I cannot wait to see the outcome of this round of comments. I will check back in a couple of days to see where this has gone.

““Coastal erosion” is and has been an ongoing process for as long as an interface between the oceans and land that is susceptible to displacement / movement has existed.

The process is independent of all physical phenomena and processes discussed in this post.”

Yes, but this historical self-regulation is not narrow enough to ensure civilization’s survival. I assume a reference to the calcium-carbonate weathering cycle. When sea level rises, less CO2 goes into space. When it falls, more land is exposed and more is rock-sequestered. But AFAIK all this means is that before you hit runaway cooling at -67C or so, when CO2 would be permanently sublimated to space, this cycle has historically kicked in. On the warming side, obviously this cycle has historically self-cooled before 100C boiled the oceans. My question is, I’ve read there is a temperature around 60C-70C where a runaway forcing occurs, maybe from molecules evaporating to space; I can’t remember or track the reference down. Does anyone know what this process is that someone hypothesized to trigger a runaway Venus at around 65C? It is bugging me.

If I understand what you are proposing, combining different rates for different components in a linear manner is degenerate so there is no “better” fit that can be obtained. (m1*x+b1)+(m2*x+b2)+(m3*x+b3)=m*x+b Shifting stuff between m1,m2,and m3 does not make any difference to what m turns out to be.

The more interesting idea is that the actual equation is seriously non-linear on century time-scales. Hansen discusses a scenario where sea level rise could be 5 meters by the end of the century. In terms of policy decisions, one needs to know about that kind of risk.

“Understanding future sea-level rise is one of the most pressing concerns for climate scientists. Recourse to modern observations and modeling are the principal techniques applied to this problem. However, these approaches are not without controversy. Modern observations of ice sheets are of short duration, making it difficult to distinguish variability from secular trends. Contemporary ice sheet models often ignore rapid, dynamic processes such as the collapse of ice sheets and the acceleration of ice streams in response to warming conditions. It is of critical importance to establish whether the specific examples of rapid ice sheet response observed over the last decades represents a trend or a short-term anomaly that will not impact the large expanses of ice sheets distant from ice streams in the longer term. Projections of future sea-level rise require insights into ice-sheet dynamics on centennial timescales. The record of sea-level rise during the termination offers just such insights and can be used as a means to better understand the integrated ice-sheet response to climate change outside the ‘noise’ of the last decades of observations, and the uncertainty of the impact of specific dynamic processes on centennial timescales. We are not considering the termination as a quantitative analog to the future—special consideration must be given to the fact that sea-level rise will not be as rapid today as during the termination because ice sheets are dramatically smaller today compared to the termination.”

This gentleman suffers from a serious case of tunnel vision, which I won’t even bother to explain, since it’s so obvious.

Take a moment longer and explain how a rising sea level would -not- accelerate coastal erosion rates. Can you do that for us, describing how coastlines will magically become immune to erosion where isostatic rebound is not available as an offset against a rising sea?

1) In Pfeffer et al. (Science v321, pp1340 et seq., 2008) an estimate of maximum glacier velocities is used to calculate contributions from WAIS and GIS to sea level rise. Are there more recent attempts in that direction?

3) Lastly, can anyone tell me the correct reference to the Wingham/Shepherd paper supposedly in GRL about Pine Island thinning far quicker than previously imagined? I still can’t find it. Or was this BBC article merely making it all up?

In defense of the much-maligned Dan Hughes, allow me to point out that coastal erosion and coastal submersion are two different things. In the limit of 10,000 meters of SLR there will be no coastal erosion at all because there will be no coast at all.

When the advanced search is restricted to Wingham or Shepherd as authors, GRL as the journal, and 2009 as the relevant year, I find it not. I ask again, does anyone have the reference to the publication mentioned in the BBC article?

And further, the ice is colder below the surface, it’s not melting at the top because of heat from below!

Yes, there’s meltwater at the base during the warm weather (Mauri has commented that in the cold season it refreezes and the ice closes up the openings). There’s frictional heating when the ice moves against the base rock, and maybe along internal fractures/discontinuities.

But there are temperature records you can look up that track the temperature of the rock and of the ice that was above it at different points in time.

The assumption of “linearity in temperature” is based on observations of systems (Earth & model) near equilibrium. That assumption only holds as long as the system remains near the observed equilibrium state. Current climate forcing perturbs the system, causing it to seek a new equilibrium. The transition to the new equilibrium involves reorganization of ocean and air currents, and such reorganization involves significant changes in heat flows compared with the original system. Models may not reflect all of these changes. These changes result in rapid changes in the climate of various regions relative to the global mean temperature. One such example is “Polar Amplification”. A second example is the warming of the water at the bottom of Disko Bay and in the Spitsbergen Current by more than the global average. Or, heat may be transferred to ice, resulting in ice melt but no increase in temperature. The suddenness of the effects of these changes is shown by the breakup of the Larsen Ice Shelf. Thus, we are already outside the useful range of a linear temperature model.

Again, the assumption of linearity in time was based on observations of systems near equilibrium, or forced by orbital mechanics. In a forced system, the behavior of ice is not (generally) a function of time. It is a function of total energy. In a system near equilibrium (or forced by orbital mechanics), energy flows at fairly constant rates over time, and the system behavior appears to be a function of time. However, the melting of ice really is a function of energy, not time. If the heat is transferred to the ice over centuries, then the ice melts over centuries. If the heat is transferred in seconds, then the ice melts in seconds. It is the accumulated heat, not the elapsed time, that determines the state of the system.

The assumption that large ice sheets have long response times is based on short-term observations of systems near equilibrium. In view of geologic evidence of rapid ice sheet decay under orbital forcing, we have to consider the possibility of very rapid disintegration of ice sheets under our large and rapidly increasing AGW forcing.

Today our oceans have more available heat than they had last year. (Despite substantial heat going to melt sea ice.) Next year, they are likely to have more heat than they have now. And, with increasing greenhouse gas concentrations, the rate of heat accumulation in the oceans is likely to be greater. Thus, heat flow to the ice is not likely to be linear with respect to time. Total heat content of the ice is the integral of the area under the heat flow curve, and it will increase in a very nonlinear manner with respect to time.

Sea level is an interesting case because only a small fraction of the ice must actually melt in order to discharge large volumes of ice into the ocean. The structural strength of ice declines as a very nonlinear function of contained heat. While it is difficult to imagine a solid glacier moving at 70 km/year down a fjord, that volume of ice, broken up into a slurry, can easily be flushed through a deep fjord. See for example (http://vulcan.wr.usgs.gov/Glossary/Glaciers/IceSheets/description_lake_missoula.html )

Semi-empirical models are most useful for systems that are near equilibrium and are expected to remain near equilibrium. We have a climate system that is increasingly forced. Semi-empirical models are best for interpolation, but in this case they are being used to extrapolate on matters critical to public policy. Semi-empirical models require domains that have been carefully scrutinized for discontinuities, such as melting points of materials in the system. In this application, we do not know how the melting points of salt water (anchor ice) and fresh water will affect the results of the model.

My estimate is that these models will significantly underestimate sea level rise just as semi-empirical models missed the decline of Arctic Sea Ice. It is really just the same system and the same models. It is the same problem. The system is in transition to a new equilibrium.

Thank you, Mr. Roberts, for the NERC article. In light of the comments there, would someone knowledgeable care to speculate on the results of floating the main trunk of Pine Island? Naively, I would imagine a great increase in ice velocity and mass export as basal drag vanishes, but that is probably incorrect…

Yes. Three remarks (Stefan may offer a more considered opinion):
1) It used to be believed that the two contributions were roughly equal, but it seems that the mass part is clearly larger.
2) We haven’t had GRACE and ARGO for many years yet; if this same situation has persisted in the past, we wouldn’t necessarily know about it
3) That said, these figures are undoubtedly also affected by natural variability. “Recent years” may be atypical.

Mr. Lewis writes:
“Today our oceans have more available heat than they had last year. (Despite substantial heat going to melt sea ice.)”

The phrase “substantial heat” might need to be qualified here. I see from Levitus (2008) that the oceans are warming at about 4e21 J/yr. GIS and WAIS, melting at about 500 Gt/yr, absorb about 1.5e20 J/yr, roughly thirty times less than the amount of heat warming the oceans.
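For anyone who wants to check this ratio, a quick back-of-envelope sketch. The 4e21 J/yr and 500 Gt/yr figures are taken from the comment above; the latent heat of fusion of ice (~3.34e5 J/kg) is a standard physical constant:

```python
# Back-of-envelope check of the ocean-warming vs ice-melt heat ratio.
# Input figures are assumptions quoted from the comment above.
LATENT_HEAT_FUSION = 3.34e5          # J per kg of ice melted (standard value)

ice_loss_kg_per_yr = 500e9 * 1000    # 500 Gt/yr expressed in kg/yr
melt_heat = ice_loss_kg_per_yr * LATENT_HEAT_FUSION  # J/yr going into melt
ocean_heating = 4e21                 # J/yr warming the ocean (Levitus-style)

print(melt_heat)                     # ~1.7e20 J/yr
print(ocean_heating / melt_heat)     # a couple dozen: same order as "roughly thirty"
```

The exact ratio depends on the rounding of the melt-heat figure, but either way the heat going into melting the ice sheets is tens of times smaller than the heat warming the ocean.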

These changes result in rapid changes in the climate of various regions relative to the global mean temperature. One such example is “Polar Amplification”.

Aaron, it’s amusing you should bring up this example, as it is precisely what I proposed as an example of positive nonlinearity in preparing for this post, when Stefan corrected me that this is not true: if the ratios between the warming rates at different latitudes remain the same, the warming pattern doesn’t change, and the behaviour is strictly linear…

I do have a few questions, but these relate to a more “classical” type analysis.

The first has to do with changes in the amount of unfrozen water stored on land. Both surface and ground water storage in some regions are changing dramatically: for instance, the Aral Sea is drying up, while a lot of new water is being impounded behind the Three Gorges Dam. Quite a few regions have had water table drops of tens or hundreds of meters (including the San Joaquin Valley, the Ogallala Aquifer, and northern India). I suspect that these “land use” changes in total water stored on land are non-trivial in amount. Does anyone have an order of magnitude estimate of how this compares to the rate of change of ice volume?

I worry somewhat that, even at a temperature held fixed in time, ice sheet melt rates could increase over time. The mechanism for this increase would be a postulated accumulation of surface dirt from year to year in the ablation zones of ice sheets. I don’t think comparing one or two warm seasons to obtain melt-rate fits to temperature would capture this effect. If it does occur, it could mean that future rates of sea level rise are significantly underestimated.

Not too sure just where you discern tunnel vision. To me, the idea that sea level rise can’t be faster than at termination does not seem justified. Is that what you mean?

[Response: The question whether future sea level rise can be faster than that during deglaciation at the end of the last Ice Age is an interesting one. There are two competing effects. (i) There was much more ice back then, hence a greater surface area on which any warming could act to melt ice – this argues that melting during deglaciation would have been faster than in future. (ii) The forcing that drove deglaciation changed over a much slower time scale. The increase in insolation that melted those ice sheets took about 10,000 years. Given that Siddall et al. found a 2,700 year response time for the ice sheets, one has to conclude that the rate of melting was limited by the slow change in forcing. In other words, had global temperature increased by 5 ºC within a century (that’s like a step function for the ice sheets with their 2,700-year response time), deglaciation would have occurred much faster. In fact, had the forcing changed in a step-function manner from LGM to Holocene conditions, so that sea level from that point would have started to approach the new equilibrium on a time scale of 2,700 years, the first century would have seen a sea level rise of over 4 meters in the framework of the Siddall model (the initial rate is 120 meters divided by 2700 years = 4.4 meters/century).
Given these two competing effects, it is hard to say without detailed calculations which one would dominate, and I don’t think one can conclude a priori that future sea level rise cannot be faster than that during deglaciation. -stefan]
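The arithmetic in the response above is easy to reproduce. A minimal sketch of the step-function thought experiment, assuming the simple exponential relaxation implied by a single response time (the 120 m and 2,700 yr figures are from the response; the exponential form is the standard behaviour of such a model, not something from the Siddall paper itself):

```python
import math

S_EQ = 120.0   # metres: LGM-to-Holocene equilibrium sea level change
TAU = 2700.0   # years: the fitted ice-sheet response time quoted above

def rise_after(t_years):
    """Sea level rise t years after a step change in forcing,
    relaxing exponentially toward the new equilibrium."""
    return S_EQ * (1.0 - math.exp(-t_years / TAU))

# Initial (linear-limit) rate: 120 m divided by 2700 yr, per century
initial_rate = S_EQ / TAU * 100.0
print(initial_rate)        # ~4.4 m per century
print(rise_after(100.0))   # ~4.4 m actually realized in the first century
```

The first century of the exponential is nearly indistinguishable from the linear initial rate, because 100 years is short compared with the 2,700-year time constant.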

Thomas L. Elifritz, re #15: – “a serious case of tunnel vision, which I won’t even bother to explain, since it’s so obvious.” Please do spell it out so people like me, who don’t spot the obvious, can learn something.

Aaron above mentions and gives a pointer to a page on the glacial Lake Missoula/channeled scablands event(s). A while back I dropped a bunch of links in this Stoat thread and pointed back to an even older Prometheus thread arguing for a “quarter inch” sea level rise.

Seems some geological structures we used to think formed slowly may have occurred during events too fast to resolve in the paleo information (e.g. drumlins, recently observed to form very rapidly in glacial outflow under ice in Antarctica, apparently something of a surprise…). Maybe pingoes too? http://scienceblogs.com/stoat/2007/02/why_do_science_in_antarctica.php

The profile and supporting text notes that the 3 to 6km wide discharge channel is still 1500 metres below sea level at a point 40km from the existing grounding line.

So every kilometre the glacier face pulls back adds another two kilometres to the face, and eventually, when the face is back 40 km from the coast, the discharge will be over a face 80 kilometres long (40 km on either side) and 2500 metres high (1000 metres above sea level) falling into the discharge channel. At what rate? I shudder to think!

Thanks for your reply. I had been thinking that the change in available surface area would play a role but also the different rates of warming as a function of latitude. So, faced with the same rate of global warming, the higher rate of temperature increase at the poles could lead to quicker melting of less surface area and so compensate.

Your invocation of faster warming for the whole globe is more important.

Another question is to what degree continental ice sheets are propping themselves up. The GIS is domed, so that in some sense all of it could slide right now. Were continent-scale ice sheets flat for most of their extent, and thus had nowhere to slide to and thus no quick dissipation path? The ratio of steep edge surface area to flat central surface area could be important, since the (self-made) altitude of an ice sheet plays a role in its preservation. This might also counteract the effects of greater surface area to some degree. If the timescale for gravity-assisted disintegration is ten times shorter than the timescale for melt from the middle of a continent, then sea level rise could be faster even with less available ice, so long as that ice is more susceptible to sliding.

It strikes me also that the shape of the curve for sea level rise might be affected by prior climate history. We’ve experienced a period of stable temperatures, allowing ice creation to come into equilibrium with ice destruction, so that ice sheet slopes are probably at a critical gradient. Thus we might experience more rapid initial sea level rise than if we were transitioning directly from a time of ice sheet growth (by accumulation) to ice sheet loss. The total sea level rise would be governed by the amount of available ice, but some of it would be front-loaded compared to the termination, perhaps.

So, I think that the statement that sea level rise will be slower now needs more support or qualification, but I don’t discern tunnel vision in it.

Though their projections of sea level rise are on the low end, the extra ocean they project is nothing to sneeze at, as Siddall tells AFP:

“Fifty centimetres (20 inches) of rise would be very, very dangerous for Bangladesh, it would be very dangerous for all low-lying areas. And not only that, the 50 centimetres (20 inches) is the global mean. Locally, it could be as high as a metre (3.25 feet), perhaps even higher, because water is pushed into different places by the effect of gravity.”

> How can sea level rise (17cm since 1900 e.g.) be measured with any (or much)
> confidence?

Good question… I suggest you try to read Church and White, linked to under the post.

Tidal variation on the open ocean is sub-meter, not “tens of meters”, and can be well modelled, being periodic. It is true that the global geometry of tide gauges is very, very poor, with most of them in the Northern hemisphere and all of them (except some islands) on the coast. And it gets worse going back in time. But OTOH, just like with the surface met stations, there is a lot of long-range correlation in the field measured, especially when averaging over time. That makes a seemingly hopeless job feasible.

Altimetric satellites are a lot better: they scan the whole ocean over their orbital repeat period, not just coast lines adjacent to human habitation. But… they have been around only since about 1992 (the type of altimetric satellite good for this work, i.e., Topex-Poseidon and successors). Their radar footprint on the sea surface is several km across, so waves are averaged out. The orbit is tracked by on-board carrier-phase GPS.

Church and White use the trick of “bootstrapping” the principal modes of sea level variation found from the altimetric era back to the tide-gauge-only era, mostly eliminating the biases caused by the poor tide gauge geometry. The technique used is our good old loved/hated PCA/EOF also used in proxy temperature reconstructions, and in the Steig et al. Antarctica paper ;-)

Summa summarum, it is hard work. One tricky thing to get right, e.g., is the effect of ongoing post-glacial isostatic adjustment, which affects measurements well outside the once-glaciated areas.

I understand your (and others) scepticism, but there is only one remedy for that: dig in the literature. It really holds up.

> a lot of long-range correlation in the field measured,
> especially when averaging over time…. makes a seemingly
> hopeless job feasible.

This is the key. I wish there were an illustration of it somehow to convey to people how you can get strong evidence of tiny changes by using a large number of relatively inaccurate instruments like old glass-bulb thermometers, or yardsticks nailed to piers, inspected by ordinary folks who take notes on them every day.

Take a huge number of instruments. Spread them out very widely. Have them checked by fallible human beings; some get placed poorly, some get damaged, and some aren’t exactly perfect. Any one of them can be wrong in some way, but each of them, day to day and year after year, can be watched, and each seems in its own small way to be giving believably consistent results for where it is, more or less, although you wouldn’t trust any one of them to tell you some _tiny_little_change_ was real for sure. Whether you’re looking at the daily temperature or the tide height.

But step back and look at the entire field of them. If you notice that same _tiny_little_change_ occurring broadly and consistently across the entire collection, you have good reason to say it’s detecting that little change.
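This averaging argument is easy to demonstrate with a toy simulation. All the numbers below are made-up assumptions for illustration (500 gauges, 5 cm single-reading noise, a 17 cm/century true trend); none of them come from the post:

```python
import random

random.seed(42)          # reproducible toy example

N_GAUGES = 500           # crude instruments, widely spread
NOISE_SIGMA = 0.05       # metres: 5 cm error on any single reading
TRUE_TREND = 0.0017      # metres/yr: ~17 cm/century, far below one gauge's precision

def network_mean(year):
    """Average of all gauges in one year, each with independent noise."""
    level = TRUE_TREND * year
    return sum(level + random.gauss(0.0, NOISE_SIGMA)
               for _ in range(N_GAUGES)) / N_GAUGES

est_rise = network_mean(100) - network_mean(0)
print(est_rise)          # close to the true 0.17 m, despite 5 cm per-gauge noise
```

The standard error of the network mean is roughly NOISE_SIGMA divided by the square root of N_GAUGES, a few millimetres here, which is why a centimetre-scale signal emerges cleanly from decimetre-scale instruments. (Real gauges have correlated errors, which is exactly why techniques like the EOF bootstrapping described above are needed.)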

Linearity in the values would hold only if the rate of change, or the flow rate, were also linear. However, when you have a greater source energy level and a steady-state flow capacity, either the rate of flow increases or the energy content of the flow has to be higher; hence there would be large amounts of inter-zonal mixing.

Going further, if you increase the specific and latent energy content near the equator, then it is likely the transport path would have to expand to maintain equilibrium. If, as we have seen, there appear to be multiple paths toward the polar emission, then it would seem likely that both effects should occur in the attempt to re-achieve equilibrium.

Instead the empirical measurements regarding the oceanic current flow rate in the N. Atlantic appear to be nearly stable. What has happened is that the ocean currents carry a higher temperature instead.

Apparently in the atmosphere the Hadley cell may have a greater rate of flow toward the North, interrupting the Walker circulation and the Jet Stream. It would appear that the Walker cell interruption and the Northern Jet Stream deviations may result in “sloshing” in the ocean basin related to changes in the wind currents. It would seem imperative, if the intent were to ensure clear evidence of expansion, to perform a simultaneous zonal comparative analysis on opposite shores. I suspect nonlinearity will be observed near the Hadley, ITCZ, and Ferrel cell interfaces.

‘… for correcting … myths. The article suggests that, rather than repeat them … one should just rephrase the statement, eliminating the false portion altogether so as to not reinforce it further (since repetition, even to debunk it, reaffirms the false statement)….”
—-end excerpt—
Read that linked page. It’s important if you want to be effective teaching facts instead of the controversy.

Re the billboards, I’d say something like this if I were talking to your neighbors:
————

Valero — a Texas company that will suffer greatly from sea level rise (Texas is so darned flat along the coast):

They’re overstating — by about triple — how the Waxman bill will change the cost of gasoline, _and_ they are making it sound like an immediate change — leaving out the 20-year time span, for a far smaller increase.

Duh. You buy gasoline from people who lie to you? Why?

The EPA’s estimate is a cost “25 cents per gallon higher by 2030” — and you can look this up easily for yourself.

40 Martin Vermeer: “Fifty centimetres (20 inches) of rise would be” irrelevant to almost all Americans and other people who are burning the most fossil fuel. Hence, fifty centimetres (20 inches) of rise would be ignored by the decision makers in Washington, D.C. who are watering down H.R. 2454 even more at this moment. Who cares about the people in Bangladesh when the coal industry in the US stands to lose $100 billion per year in cash flow if an effective bill passes? Not the stockholders of the coal industry! I stand by my statement.

> Take a huge number of instruments. Spread them out very widely. Have them checked

It’s more than that, Hank. The instruments check each other. It’s called redundancy. If an instrument suddenly shows a skip compared to all its neighbours, it is safe to assume that “something” happened. And that you can fix it.

It’s due to the long-range correlation situation that we have massive redundancy. It’s only when you have large scale systematic error patterns — like the glacial isostatic adjustment that I mentioned — that redundancy doesn’t help, and you have to get the correction right.

“The EPA’s estimate is a cost “25 cents per gallon higher by 2030″ — and you can look this up easily for yourself.”

Which is, I think, nuts.

It was much more expensive than that a little more than a year ago. (Stuck button alert) Peak Oil passed in 2005. They aren’t pulling anymore out of the ground now than they did then and, if and when the recession eases, demand WILL go up.

Peak Oil and increased demand, eheu. Write down what you can remember about 2007-2008 so you can tell your grandkids what The Good Old Days were like.