Kaufman's Stick: Iceberg Lake Varves

In the first post on Kaufman et al, I observed that, like other Team multiproxy studies, its HS-ness is contributed by only a few series. As shown below, a composite of 19 of the 23 Kaufman proxies does not yield an “unprecedented” late 20th century (though it does yield an elevated late 20th century). A composite consisting only of ice cores shows nothing unusual about the 20th century. However, four proxies (1: Blue Lake (Alaska) varves, 4: Iceberg Lake (Alaska) varves, 9: Big Round Lake (Baffin Island) varves and 22: Briffa’s Yamal tree ring chronology) have a very pronounced HS, and in the Kaufman CPS these 4 series are the “active ingredients” in the Kaufman HS.
Figure 1. Left – composite of 19 Kaufman proxies; right – composite of 4 Kaufman proxies. (The three Finnish sediments are used in native orientation, rather than the Kaufman orientation, which is inverted from the original orientation e.g. the Tiljander series discussed in the previous post.)

Briffa’s Yamal series is almost as notorious at CA as Graybill’s bristlecone pines and the consistent Team selection of this series rather than the nearby Polar Urals series has been noted unfavorably on many occasions.

Rather than dealing with the tired Yamal series one more time, today I want to discuss one of the new “ingredients” – Loso’s Iceberg Lake reconstruction, which, in Kaufman’s rendering, also has a notable HS shape – in this case, limited to the last 4 decades of the 20th century.

Loso’s original article was in the form of an actual temperature reconstruction. Here is the Loso reconstruction (decadally averaged) in the original units – left scale – and as re-scaled by Kaufman into SD Units. Note the impact of changing from deg C to SD Units. Variation in the original reconstruction was very small – the step change in the 1960s was a couple of tenths of a degree. But in Kaufman’s SD Units, this small step change becomes a step change of 4 sigma – one which, together with Yamal and a couple of others, ends up having an impact on the overall reconstruction. (Kaufman’s Yamal version closes the 20th century at an astonishing 7 sigma.)
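To see how this sort of rescaling can magnify a small step, here is a toy illustration (made-up numbers, not the actual Loso or Kaufman series): a flat record with a step of a couple of tenths of a degree in its last few decades, standardized over its pre-step portion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative numbers only: a flat record with a ~0.2 deg C step in its
# last few decades, plus low-variance noise, standardized over the
# pre-step portion (analogous to rescaling over 980-1800).
series = np.concatenate([np.zeros(96), np.full(4, 0.2)])  # deg C anomalies
series = series + rng.normal(0.0, 0.03, 100)              # small noise

base = series[:96]
sd_units = (series - base.mean()) / base.std()

print(round(float(series[-4:].mean()), 2))    # step in deg C: a few tenths
print(round(float(sd_units[-4:].mean()), 1))  # same step, now several sigma
```

The point is simply that when the calibration-period variance is tiny, a small step in native units becomes a multi-sigma excursion in SD units.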

While Kaufman’s re-scaling obviously warrants attention, I’d prefer that readers not dwell on this step at this time, as there are some very interesting aspects to the Loso data that shed light on the properties of varve thicknesses as a temperature proxy. Here is the underlying Loso varve “chronology” (using this term as in tree ring networks), plotted from original data. A couple of points here. First, the varve chronology doesn’t look much like a plot of temperature data – it’s far too spiky; the distribution is clearly not a normal distribution and visually looks like a fat-tailed distribution (which proves true). Secondly, there seems to be a step change in 1958, with an actual discontinuity in the original data in 1957. One wonders whether there is some sort of inhomogeneity. Also worrying is what seems to be a sort of “divergence problem”: the trend since the 1960s seems to be down, even though temperatures have been going up, with the HS-ness of the series perhaps resulting from some sort of 1957 inhomogeneity. I looked at data from individual cores to assess this troubling visual appearance.

Figure 3. Loso Varve Chronology

Given the visual appearance of a non-normal distribution, I did a qqnorm plot of all the varve width data (left panel) and, on the right, a similar plot for the logged varve widths. (Loso’s temperature reconstruction is a log-transform of his varve width chronology.) As you can see, the original varve widths are remarkably fat-tailed; indeed, even the log-transformed varve widths are far from normal and remain fat-tailed. This creates major complications for simplistic efforts to average a few measurements in making a varve chronology or to “standardize” data as we shall see below. The combination of wildly non-normal fat tails and probable inhomogeneity makes this a very problematic raw material for construction of a temperature index, as I’ll further show below.
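For readers who want to experiment, here is a small sketch (using a made-up Pareto-type variable, not the actual varve data) showing how a log transform tames, but need not normalize, fat tails:

```python
import numpy as np

rng = np.random.default_rng(1)

# A made-up fat-tailed stand-in for varve widths (a Pareto-type draw),
# purely to show that logging helps but need not normalize fat tails.
widths = 1.0 + rng.pareto(2.5, 5000)   # heavy right tail, widths >= 1

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return float((z ** 4).mean() - 3.0)   # zero for a normal distribution

print(excess_kurtosis(widths))           # raw widths: wildly fat-tailed
print(excess_kurtosis(np.log(widths)))   # logged: better, still fat-tailed
```

Here the logged widths are exactly exponential, so they remain visibly non-normal on a QQ plot even after the transform.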

Loso Cores
One advantage of mineral exploration experience is that one understands the importance of examining individual cores. Fortunately, Loso provided some raw information on this. The modern portion of the Loso reconstruction is calculated from only 1-3 cores (A,K,M), shown below over the period 1000-2000 in both linear and log scales. Core M has a discontinuity of nearly 400 years – I haven’t examined the cross-dating of this core, but I wonder how they established this discontinuity, which seems troublingly long. You can see that Core A and Core K have very different contributions to HS-ness: Core K shows no modern HS-ness. The HS-ness of the Loso reconstruction, one of the two largest contributors to the Kaufman HS, comes entirely from Loso Core A, where there seems to be an inhomogeneity around 1957.

Figure 5. Loso Cores A,K and M

I noted above that it was not easy to make an average when confronted with wild distributions such as the one observed here. Loso attempted to mitigate the wild non-normality by the expedient of simply deleting some of the larger excursions in his calculation of the chronology average. (In 1957, all three values were deleted and that’s why there is no value for that year.) This is explained as follows:

Scattered among the other well-dated sections are isolated strata that record episodic density flows (turbidites), resuspension of lacustrine sediment by seismic shaking and/or shoreline-lowering events, and dumping of ice-rafted debris. The case for excluding such deposits from climatologically-oriented varve records has been made elsewhere (Hardy et al., 1996), and we accordingly removed measurements of 82 individual laminae from these other sections. Those removed (mostly turbidites) include many of the thickest laminae, but sediment structure (not thickness) was in all cases the defining criterion for exclusion from the master chronology.

I examined the calculation of the “average” varve thickness from 1860 to 2000 and identified excluded varves (by figuring out which varves, if any, were excluded in order to yield the reported average as opposed to the average using all the varves). In the figure below, I’ve plotted varve widths for the 3 cores from 1860-2000 in both linear and log scales, marking the excluded varves in red. Loso says that “sediment structure” rather than thickness was the basis for exclusion, but one can’t help but wonder how solid this classification really is. Regardless, the high values in Core A clearly result from some sort of inhomogeneity in the Core A data starting around 1957-8 – and seemingly settling down in more recent values. If this site were re-cored in a few years, I wonder whether the values would have reverted to more average levels.

Figure 6. Showing excluded varves in chronology calculation.
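The identification of excluded varves can be sketched as a small subset search: given the core measurements for a year and the reported chronology value, find the subset whose mean reproduces it (made-up values below, not Loso’s measurements):

```python
from itertools import combinations

def infer_exclusions(core_values, reported, tol=1e-6):
    """Infer which measurements were excluded: find the largest subset of
    core values whose mean reproduces the reported chronology value, and
    return the complementary (excluded) values. Illustrative only."""
    for k in range(len(core_values), 0, -1):       # prefer keeping most cores
        for keep in combinations(core_values, k):
            if abs(sum(keep) / k - reported) < tol:
                excluded = list(core_values)
                for v in keep:
                    excluded.remove(v)
                return excluded
    return None  # no subset matches (e.g. all values deleted, as in 1957)

# Example year: three cores measured; the reported average matches the two
# thin varves, so the thick 9.0 mm varve must have been the one excluded.
print(infer_exclusions([1.2, 1.6, 9.0], 1.4))  # -> [9.0]
```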

The construction of a sediment “chronology” has much in common with a tree ring chronology. Indeed, it looks quite a bit harder to me than for tree rings, since there seems to be considerably more inhomogeneity between cores, with local sedimentation conditions having a substantial impact. The number of cores used in the varve chronology (1 to 3) is far fewer than the minimum required for construction of a tree ring chronology under far less trying circumstances. To my knowledge, this is not confronted by the varvochronologists.

After excluding these values, Loso constructed a chronology by averaging the remaining measurements. This is done at the native value stage (pre-logging). Since non-excluded outliers from the extreme fat-tailed distribution are not “cut” (a precaution common in mining exploration to mitigate the “nugget” effect), even after averaging with 1-2 other values, such outliers can still have a dramatic impact on a chronology. The Loso chronology is still fat-tailed. Loso’s temperature reconstruction is a re-scaling of the log of the varve chronology.

This partly mitigates the non-normality, but, at first glance, this seems both a step too late and, given the non-normality of the logged varve widths, not necessarily an adequate precaution. I haven’t pondered all the issues of how to deal with such refractory raw ingredients, but it would be worth examining the effect of a non-parametric standardization of the actual distribution to a normal distribution (with a relatively low ceiling – maybe 2 sigma – on the contribution of any one varve).
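A minimal sketch of the mining-style “cutting” precaution mentioned above (made-up data; a cap of mean plus 2 sd on the logged values is one arbitrary choice, not Loso’s method):

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up fat-tailed "varve" values (roughly lognormal with a few wild
# outliers injected) -- illustrative, not the Iceberg Lake data.
x = np.exp(rng.normal(0.0, 1.0, 1000))
x[::97] *= 50.0

def cut(values, ceiling=2.0):
    """Cap values at mean + ceiling*sd computed on the logged data --
    a crude analogue of the 'cutting' used against nugget effects."""
    lx = np.log(values)
    cap = lx.mean() + ceiling * lx.std()
    return np.exp(np.minimum(lx, cap))

print(float(x.max() / np.median(x)))            # raw: outliers dominate
print(float(cut(x).max() / np.median(cut(x))))  # cut: influence bounded
```

The cap bounds how much any single extreme value can contribute to a subsequent average, at the cost of some bias.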

This would still not mitigate the apparent inhomogeneity of Core A. Here one would welcome a far more expansive exposition by Loso than the one actually provided. One would also welcome the adoption by varvochronologists of some of the precautions developed by dendros over the years – which Loso’s chronology doesn’t meet.

As matters stand, the second ingredient to the Kaufman Hockey Stick (after the Yamal substitution) is the Loso Iceberg Lake varvochronology – where, unfortunately, there is evidence that the HS-ness of this series is a result of an inhomogeneity in Core A (one not shared by Core K).

I too was struck by the divergence issue (the article’s figure 6) and how it seemed correlated with the shrinking local tree-ring anomaly. A bonus point to Loso for saying “Smoothed versions of both records are shown, but correlation was calculated with raw (annual) data.”

Have you tried a log-logistic distribution? I have no theoretical basis for suggesting it, other than it looks like it might fit well.

Then that would be a good reason not to do it. If there’s no theoretical reason to treat data like that, then don’t do it.

This is the problem I’m having with the statistical treatments in climate science – using statistical metrics to suggest mathematical treatments of the data in order to announce statistical results which show correlation. It’s circular reasoning leading to results which are meaningless and misleading.

Statistics is a branch of mathematics, not of witchcraft or sorcery.

If statistics in climate science is to be treated like magic, then this blog should be renamed “Defence against the Dark Arts”

I have to disagree with you here, John A. Using transformations to ensure adherence to model assumptions (IID Normal) and to yield a more parsimonious model has a long history with some of the greatest modern statisticians, including George E. P. Box.

John A, I’m with Mike B and Aslak here. If one series is normally distributed and another series is log-normally distributed, then it is not only reasonable but recommended to transform the latter series so that it is normally distributed prior to calculating a correlation – regardless of whether it improves the Pearson correlation or not. Indeed, it’s objectionable not to do so – a point that I intend to make regarding some other varvochronologies.

It would be salutary for applied scientists to plot distributions at an early stage. If there is no obvious analytic transformation, I’d recommend a non-parametric transformation to a normal distribution prior to calculating a correlation (this can be done by mapping a net of quantiles from one distribution to the other.)
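A sketch of such a non-parametric transform (the “net of quantiles” here is simply the ranks, mapped to standard-normal quantiles; illustrative data, not the varve series):

```python
import numpy as np
from statistics import NormalDist

def normal_scores(x):
    """Non-parametric transform to normality: map each value to the
    standard-normal quantile of its rank ('normal scores')."""
    x = np.asarray(x, dtype=float)
    ranks = x.argsort().argsort() + 1               # ranks 1..n
    nd = NormalDist()
    return np.array([nd.inv_cdf(r / (len(x) + 1.0)) for r in ranks])

# Wildly fat-tailed made-up data (exponential of a t-variate).
rng = np.random.default_rng(3)
fat = np.exp(rng.standard_t(3, 2000))
z = normal_scores(fat)
print(round(float(z.mean()), 4), round(float(z.std()), 2))  # ~0 and ~1
```

The transform preserves rank order exactly while forcing the marginal distribution to be (discretely) normal, so a subsequent Pearson correlation is not dominated by a handful of extreme values.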

If one series is normally distributed and another series is log-normally distributed, then it is not only reasonable but recommended to transform the latter series so that it is normally distributed prior to calculating a correlation – regardless of whether it improves the Pearson correlation or not. Indeed, it’s objectionable not to do so – a point that I intend to make regarding some other varvochronologies.

My point about a statistical model was not “piling on” any more than your frequent references to MBH98 or Yamal or R2 are “piling on”.

I don’t get any sense in which the authors actually sought to do pre-analysis of their data to transform it into meaningful (or hopefully meaningful) data sets which can be put into a statistical model whose behaviour is well understood (as you are now suggesting). Did they plot any of these distributions before transformation to justify their steps? Did they tell us how they calibrated varve thickness or density with temperature?

No. It’s just like medieval alchemy compared to modern chemistry.

Instead, Kaufman2009 appears to be doing Mannian PCA all over again – chop data of obvious “outliers” (by eye it appears), put data in statistical meat grinder, whisk in a few weightings, output Hockey Stick graph, publicize then publish. (I’m pretty sure that Gerry North will be asking you to “move on” any time now).

When constructing varve chronologies, inhomogeneities can actually be helpful. In Swedish late glacial varve chronologies “tappningsvarv” (draining varves) are a well-known phenomenon. These are abnormally thick varves caused by the sudden draining of ice-dammed lakes (the whole Baltic in one case). Such varves are often useful as stratigraphic and chronological check-points over a considerable area.
Another factor that can mess up varves is seismicity, which can cause plastic deformation of wet sediments, liquefaction, slumping and turbidity currents. Alaska is a very tectonically active area, so this seems a quite likely explanation for chronologic gaps in deposits, the odd extreme varve, and zones of abnormal varve thickness. There are many instructive images of such structures on p. 45-64 of this publication (“Early Holocene faulting and paleoseismicity in northern Sweden”) from the Swedish Geological Survey:

I’ve read the preliminary paper from #1 but not the final. This seems to set a new sort of record because a temperature reconstruction is made in the instrumented era but instrumental temperatures are not reported. Unless I missed them.

Whether the ice blockage broke just before the MWP depends not only on the climate of the time, but on previous climate which might have allowed the blockage to grow hugely thick. Another example of concentrating on some variables but not on others.

General: The correlation of 0.6 reported between tree rings and varve properties would seem to be less than that if the kick up at the recent end were removed. I’d even guess that the correlation would become insignificant. Once again, there are so many variables influencing varve properties other than temperature that it’s a bit pointless to compare them to dendro, which has multiple confounding properties of its own.

Finally, the work seems to be an honest and careful effort to do a good job with the available physical material, but the author appears to have blinkers on when externalities should be brought into the scheme of things.

Re: Steve McIntyre (#9), Also noted but hard to read on my copy so I did not comment in case it was scribal.

Re: tty (#16), However the author did report a careful search for unconformities of the gross kind and found none. There are differences between a “deflation” of intact layers and a removal of them, such as angular unconformities etc.

A reader has emailed me observing that Figure 2 of Loso et al shows a major reconfiguration of this transient proglacial lake in…. 1957. The caption to Figure 2 shows that the lake level fell by 26 meters in 1957!!

You don’t suppose that this might have introduced the inhomogeneity in Core A??

You need look no further. Such a lowering of a proglacial lake would expose a large expanse of easily eroded bottom sediment. The result would be exactly what is seen in Core A, a sudden large increase in sedimentation which then slowly decays as the most easily eroded material is exhausted and vegetation establishes itself on the exposed bottom.
In fact we have a new Korttajärvi here, but with natural forces rather than man causing the increased erosion. I am very surprised that Loso even tried to extend the series after 1957.

Re: ianl (#20),
…and this is why eco-minded Master Gardeners laughed themselves silly when it was suggested that temperature, not precipitation, is a primary driver of growth in the nearby Rocky Mountains. And that differential growth in strip-bark pines is attributed to climate rather than the strip-bark process itself.

Some things are simple physical observables to those who understand the physical reality. I will never understand why such observations are discounted because the observers are not climate scientists.

We have no record of that glacier’s behavior during the MWP, but the lake itself provides another form of evidence for how nearby glaciers responded to MWP warming. We examined dozens of outcrops throughout the muddy bottom of Iceberg Lake, and the varve record was in all cases uninterrupted by signs of large-scale erosional unconformities. This continuity of sedimentary layers in Iceberg Lake precludes the possibility that catastrophic lake drainage events—which would have resulted in widespread lakebed erosion comparable to that seen since 1999—occurred at any time during the last 1,500+ years. Contemporary jökulhlaups reflect climatically induced thinning of the large glacier that impounds Iceberg Lake; the absence of evidence for similar events in the varve record strongly argues that the MWP was not warm enough to prompt similarly extensive glacier thinning and retreat, suggesting that contemporary glacier retreat is unprecedented over the last 1,500 years.

I haven’t parsed this statement yet, but this seems to me to be a useful sort of observation and one that might survive parsing. It’s not all that much help in making a multiproxy reconstruction, but it’s an obstacle to someone arguing the opposite – that the MWP was warmer than the present.

Re: Steve McIntyre (#11),
Seems like there may be a stronger anecdotal streak in paleolimnology, possibly the product of the richness of the material they study. Rich in signal, but ultimately buried in noise.

the absence of evidence for similar events in the varve record strongly argues that the MWP was not warm enough to prompt similarly extensive glacier thinning and retreat

The underlying assumption apparently being that the glaciers were similarly extensive 1500 years ago as they are today, to enable “similarly extensive glacier thinning and retreat”.
.
Why is the cutoff at 1500+? Technical limitations, or because an “event” occurred shortly before then, muddying the water?

On downloading some of the proxy data directly from the server, I started reading into how these varve data are handled. Does anyone know if varve thickness is typically deemed linear with temp, or does that have to do with the reason for looking at log relationships? It’s all new to me.

—-

Nice job on the proxy sorting BTW. The depth and time involved in looking at the critical details of so many proxy papers is daunting on a good day. Loso is just one of many.

I wonder if Kaufman took logs before or after decadal averaging? Ordinarily this wouldn’t make much difference, but this is not an ordinary case.

Steve: Loso took the logs. He averaged the (up to 3) cores for each year, then made a temperature “reconstruction” by taking the log of that average and doing a linear rescaling. Kaufman then took a decadal average of the temperature reconstruction and rescaled to (0,1) over 980-1800. Lots of transformations, but none that deal appropriately with the form of non-normality, let alone the inhomogeneity.
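For what it’s worth, the order of operations matters for skewed data because of Jensen’s inequality: the log of an average exceeds the average of the logs, and the gap grows with the spread. A toy example with made-up core values:

```python
import numpy as np

# Made-up core values for one year: one thick varve among three cores.
cores = np.array([1.0, 2.0, 16.0])

log_of_mean = np.log(cores.mean())    # average first, then log
mean_of_logs = np.log(cores).mean()   # log first, then average

print(round(float(log_of_mean), 3))   # log(19/3) ~ 1.846
print(round(float(mean_of_logs), 3))  # (0 + log 2 + log 16)/3 ~ 1.155
```

With symmetric low-variance data the two orders are nearly identical; with a fat-tailed distribution like this one, they are not.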

Contemporary jökulhlaups reflect climatically induced thinning of the large glacier that impounds Iceberg Lake; the absence of evidence for similar events in the varve record strongly argues that the MWP was not warm enough to prompt similarly extensive glacier thinning and retreat, suggesting that contemporary glacier retreat is unprecedented over the last 1,500 years.

Suggests. What does that mean exactly? Some will take that to mean “definitely does”. To me it’s an assumption, based on more assumptions, that the proxies are telling them what they think they’re telling them. I don’t buy it.

This is likely too simplistic, but anyone looking at the data for Loso post 1956 would see it is out of all proportion both with the trend in Loso prior to 1956, and actual temperatures recorded in that area. So why wasn’t an alarm bell rung?

Another question – why do they need proxies when reliable temperature records exist? Sure calibrate it, but the temperature record itself ought to take priority over a spurious leap in the so-called proxy record. Or, as has been suggested, the proxy data for that period ought to have been excised as unreliable.
Steve: Relax about this particular issue. No one is substituting varvochronology for temperature measurements. It’s not for the recent history; it’s to compare the recent results to past history. Whether the comparisons make any sense is what we discuss here.

I do wonder why they didn’t align the core M spike at 1300 with the 1650 spike on core A. There do seem to be similarities in the two leftmost spikes in M and A, and the intervening curves. But I also know we humans are good at finding or inventing similarities by eyeball. There probably was something obvious in the geology which indicated the proper alignment.

I would say temperature is definitely not linear with varve thickness. Sediment pickup and transport upstream are related to turbulence. Meltwater inflow to the lake is related to temperature and is likely roughly linear with it, assuming the same exposed melting area in the drainage basin (which probably is not true over time). However, I don’t think turbulence is linear with flow. And even if turbulence is linear with flow volume, it’s possible that cool but wet summers might look the same as warm but dry ones, the overall volume of water/sediment into the lake being the same. I suspect most of the volume of the varve is laid down in the spring melt, however, so it’s kind of a one-season thermometer.

Have you tried a log-logistic distribution? I have no theoretical basis for suggesting it, other than it looks like it might fit well.

If Aslak had instead said:

Have you tried a log-logistic distribution? I have no theoretical basis for suggesting it, other than it looks like it might help satisfy the assumptions of correlation analysis.

then I doubt John A would have made any remark.
.
His complaint is a generic one regarding a posteriori analysis. While it is true that post-hoc fitting of distributions to satisfy a statistical model’s assumptions is acceptable, it is equally true that any expert truly familiar with their data type will typically have an understanding why a process follows one distribution (e.g. log-normal) and not another (e.g. normal). John A is simply asking for evidence that this sort of understanding exists in paleolimnology. This is a fair request. If these processes are as inhomogeneous (syn. heterogeneous) as Loso and others suggest – including some of Kaufman’s co-authors! – then the distributions may be compound/complex. So let’s hear some experts address John A’s concern. What kinds of distributions do paleolimnology data typically follow? And why?

Piling on? May I rephrase? Is the PCA approach that they use (yet again) clear and appropriate? What does it yield that an arithmetic mean would not? (All the questions asked of Steig et al. could be asked here. Seems to me John A was merely seeking to initiate that process.)

Was thinkin’ of writin’ country&western lyrics as a hobby. The line “Mama, don’t let your son grow up to be a varvochronologist” came to mind in a trial run. It has that certain lack of meter. [Snip away, if you must!]

Since the underlying data is a thickness, so bounded below by zero, I think of the gamma family and the many distributions one can get from sums and ratios of gammas. Too, the integer shape parameter part of the family is generated from sums of exponentially distributed variables, which in turn are waiting times for Poisson events. If one thought of varve thicknesses as coming from sums of events over time, then, the gamma family might be a pretty reasonable model, at least to begin with.
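A quick simulation of this construction (purely illustrative parameters): the integer-shape gamma really is a sum of independent exponentials, so the two ways of generating it agree in distribution.

```python
import numpy as np

rng = np.random.default_rng(4)

# The integer-shape gamma is a sum of independent exponentials (waiting
# times for Poisson events) -- e.g. a thickness built up from deposition
# "events" within the season. Illustrative parameters only.
k, n = 3, 200_000
sum_of_exps = rng.exponential(1.0, (n, k)).sum(axis=1)
direct_gamma = rng.gamma(k, 1.0, n)

# The two constructions agree in distribution: compare a few quantiles.
for q in (0.25, 0.5, 0.75):
    print(round(float(np.quantile(sum_of_exps, q)), 2),
          round(float(np.quantile(direct_gamma, q)), 2))
```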

I would like to know if the varvologists of the climate world ever get together with the sedimentologists of the hydro world.

We (of the latter persuasion) know, as a result of day and night half-hourly sampling during the rise and recession of major (and minor) floods, that the 5% highest inflows typically transport 95% of the sediment load by mass. We have developed procedures for approximately estimating the % of mass retained/mass transported in a storage (lake or reservoir). We have even developed means for estimating the rate of change of density of deposited material vs time, and the patterns of deposition over the total area of the storage. The relationship with temperature (whether daily, seasonal or annual) is, at best, extremely tenuous.

I would be grateful if someone could explain to me the scientific correlation between varve thickness and temperature. I’m not being snide. I would really like to know if we have a disconnect here.

You need to see this in a historical perspective. Varve chronology was developed in Sweden by De Geer and his school about a century ago. They worked on glacial clays deposited in front of the receding ice-sheets. These clays were deposited on a flat bottom in fairly deep and calm waters in the proto-Baltic sea and can be easily correlated over relatively large distances (several tens, sometimes hundreds of kilometers). This is because they mirror the amount of melting over a fairly large area of ice-cap. It seems likely that in this context the variations in varve thickness actually, to some extent, do indicate summer temperatures.
This idea was then carried further by means of “teleconnections” (yes, this was how the word was invented), and efforts were made to correlate the Swedish varve chronology with varve chronologies from New England and even Patagonia. This failed miserably and “teleconnections” are usually mentioned as a quaint historical curiosity in Quaternary Geology textbooks. It is weird to see them resurrected in climatology, rather like if phlogiston should show up in chemistry.

We (of the latter persuasion) know, as a result of day and night half-hourly sampling during the rise and recession of major (and minor) floods, that the 5% highest inflows typically transport 95% of the sediment load by mass. We have developed procedures for approximately estimating the % of mass retained/mass transported in a storage (lake or reservoir). We have even developed means for estimating the rate of change of density of deposited material vs time, and the patterns of deposition over the total area of the storage. The relationship with temperature (whether daily, seasonal or annual) is, at best, extremely tenuous.

Let me see if I can cut to the chase: the width of varves is determined primarily by volumetric flow of the input river and that the largest floods produce the thickest varves. Close? Does the density of the varve have any climatic meaning as far as you can see?

Re: jlc (#47), Well, the simplest explanation of the relationship of varve thickness and temperature comes from one of our old friends:

Observations from modern lakes containing varved sediment indicate a logarithmic relation between varve thickness and early summer temperature, which influences meltwater, and suspended sediment discharge

Steve, in your qqplots above (your fig 4), are the straight lines the result of some kind of robust regression procedure that is downweighting or trimming the nonlinear tail data? Just wondering… I’ve been doing some simulations to see what sort of relatively simple distributional family (that makes some theoretical sense for this kind of data) might generate such distributions. I think Beta-primes (ratio of two gammas) look workable, but it is hard to tell for sure without knowing how you drew those lines.

Re: Steve McIntyre (#52),
Ok, qqline delivers a line through the first and third quartile points of the qq plot.

A simple two-parameter distribution that can give the look in Steve’s figure 4 is the Beta prime family. I tend to think of this distribution as the ratio of two gamma distributions. But it is also isomorphic to the F distribution (Chi-square is a gamma, and the F is a ratio of Chi squares).

For an example, draw one thousand pairs of independent gamma variates, NUM and DEN for numerator and denominator, with shape parameters .5 and .4 respectively, and create V = NUM/DEN from each pair. Then V and LNV will give you qq plots and qq lines that look much like Steve’s pair of figures in his figure 4. I would post the images but I’m not sure how to do it.

Two interesting points. When a betaprime variate’s denominator parameter is less than 1 (it is .4 in this example), the mean of the distribution doesn’t exist. Steve remarked:

“This creates major complications for simplistic efforts to average a few measurements in making a varve chronology or to ‘standardize’ data as we shall see below…wildly non-normal fat tails…makes this a very problematic raw material for construction of a temperature index.”

Uh, yup.

Second, there may be a physical model that would rationalize a ratio of gamma variates. Gamma distributions have been used, among other things, to model accumulated rainfall, water in dams, etc. But they are also models of waiting time for events (e.g. “death” or “failure”), as with exponentials but more general. Gammas with shape parameters less than unity have declining failure rate in t+dt as t increases. If there was a physical model of sedimentation which depended on the relative timing of two roughly independent seasonal or annual events, with the declining rate mentioned above for both events, then the beta prime would be a reasonable model of the distributions Steve displayed.

I am not a varvowhatchamacallit or a sedimentsophist. But maybe someone around here knows enough to say whether this bears any relationship to anything real.
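A sketch of the simulation described above (same made-up shape parameters, 0.5 over 0.4; illustrative only):

```python
import numpy as np

rng = np.random.default_rng(5)

# Beta-prime variates as ratios of independent gammas, with the shape
# parameters suggested above (0.5 over 0.4).
n = 1000
v = rng.gamma(0.5, 1.0, n) / rng.gamma(0.4, 1.0, n)
lnv = np.log(v)

# With the denominator shape 0.4 < 1 the theoretical mean is infinite,
# so the sample mean is erratic while the median stays well-behaved.
print(round(float(np.median(v)), 2), round(float(v.mean()), 1))
```

Feeding `v` and `lnv` to qqnorm-style plots reproduces the look of the pair of panels in figure 4: wildly fat tails raw, still fat-tailed after logging.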

I can appreciate that for ancient deposits (equilibrium density) and with no other processes involved (geothermal, summer precip, no other inflow sources, etc.), it may be appropriate to try to develop a correlation between relative summer temperature and varve thickness.

For “recent” deposits, it would be necessary to develop age-density relationships to extend series. It would seem to me that the combination of uncertainties makes it pretty improbable that any mean summer temperature series derived from varves could have a precision of better than ± a couple of degrees.

It is evident that scientists get away with stuff that would never be acceptable for engineers. In the past, this was not a problem as researchers were trying to develop interpretations of interesting data.

Today, much more is dependent on proper understanding of orders of precision, data validity and uncertainty, identification and evaluation of alternative explanations for physical phenomena.

Ya picked a fine time to leave me, my friend.
Four hundred years now, tempture’s unprecedented.
The crops are a-wiltin’. The planet’s a-tiltin’.
Soon we’ll all wish for the end.
Ya picked a fine time to leave me, my friend.

Not an R guy which makes talking turkey with this community more difficult, but rather a SAS guy. Getting SAS to draw the equivalent of the qqline function was clumsy since it doesn’t deliver a canned option for that particular line within its qqplot procedures. Here is the first part of the SAS batch code:

This gives me the data set “one” with the thousand draws of a betaprime “v” and its log “lnv”. Then I sort and select out the first and third quartiles and run a regression of v and lnv on qx (inverse standard normal on 1/4 and 3/4). This prints the intercept and slope between the first and third quartile observations. Call these INT and SLOPE. The second step program is then:

…where I actually provide the constants INT and SLOPE. In certain procedures, certain optional parameters can’t be easily read from a dataset and this is unfortunately true of PROC CAPABILITY for the line graphing, though it could internally use the sample mean and s.d. for drawing the line.
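For reference, the qqline construction described above (the line through the first- and third-quartile points of a normal QQ plot, as in R’s qqline) can be sketched compactly in Python:

```python
import numpy as np
from statistics import NormalDist

def qqline_params(x):
    """Intercept and slope of the line through the first- and
    third-quartile points of a normal QQ plot (R's qqline)."""
    q1, q3 = np.quantile(x, [0.25, 0.75])
    z1, z3 = NormalDist().inv_cdf(0.25), NormalDist().inv_cdf(0.75)
    slope = (q3 - q1) / (z3 - z1)
    intercept = q1 - slope * z1
    return intercept, slope

# Sanity check: for standard normal data the line should be near y = x.
rng = np.random.default_rng(6)
b, m = qqline_params(rng.normal(0.0, 1.0, 100_000))
print(round(float(b), 2), round(float(m), 2))  # near 0 and 1
```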

Sediments in measured time,
accruing thick and thin.
The link to climatology’s
so weak that it’s a sin.

Bender, we all appreciate your wit and rhyming skill, but a blanket dismissal of all varves as irrelevant to climate without even looking at them crosses the line from healthy skepticism to knee-jerk denialism, and only reinforces negative stereotypes of CA.

Varves undoubtedly tell us something about the history of local climate, even if sometimes the message has been too much complicated by ice-dam breaks, human intervention, etc. to justify identifying it as temperature.

But is not the question/point: what is the message that is being presented?

What is the relationship of varve properties to temperature? How is it calibrated? Can a case be made for it a priori, with arguments that are more than mere arm waving? Is a regression model (not sure that I have seen one here) showing that the trend of some varve property against temperature, during some part of the year, is significantly different from zero sufficient, by itself, to justify using that varve property in a temperature reconstruction reaching far back in time?

How seriously should we be taking these models to avoid the denialist label (not that that label or others drives anything important with me)? When I see these models as the same old, same old, with the necessary caveats apparently missing, my first tendency has been to give them a quick look and then ignore them. This is not to say that I do not enjoy the analyses of these works.
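The minimal calibration check being questioned above can be sketched on synthetic data (everything below, including the assumed weak temperature-thickness link, is hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical calibration data: 50 years of instrumental summer
# temperature and measured varve thickness, with an assumed weak
# linear link plus noise.  Not data from any actual lake.
years = 50
temp = rng.normal(10.0, 1.0, years)
thickness = 1.0 + 0.1 * temp + rng.normal(0.0, 0.3, years)

res = stats.linregress(temp, thickness)
print(f"slope = {res.slope:.3f}, p-value = {res.pvalue:.3g}, r = {res.rvalue:.2f}")

# A slope "significantly different from zero" over a 50-year calibration
# window says nothing, by itself, about whether the relationship was
# stationary over the centuries one wants to reconstruct.
```

A significant slope in a short window is a necessary condition at best; the stationarity assumption carrying it back a thousand years is the part the regression cannot test.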

I’m not too familiar with the proto-Baltic (tty #54 above), but I reckon I know the Andes better than 99.99% of people responding on climate blogs. I would have to say that the tranquil deposition mechanism suggested by tty does not exist in the Andes, at least not below 5000m. It may possibly exist elsewhere.

It’s worth keeping in mind that we haven’t had a sudden flowering of climatological geniuses. We have had a massive jumping on the bandwagon by the mediocre. You know who they are and I know who they are. They have no serious background in applied or theoretical science. They are defensive, resentful and scared. Because of this they come across as arrogant.

Anyway, in keeping with the poetic theme, I offer the following:

Oh, Mary, this London’s a wonderful sight,
With people all working by day and by night.
These people they rant and they rave and they shout
They talk about things they know f**k all about
But I’ll tell ya Mary, I’d much rather be
Sending my thoughts off to WUWT

I, for one, would grieve the loss of perspective provided by a true professional in the brave field of sedimentology. Don’t abandon us, who are muddling through the many issues, completely to our own amateurish devices.

Sorry if I sound a bit grumpy here, but is it not facile, and a bit like that other blog’s errors, to dismiss work in a field before you have brought yourself up to speed? There might be some innate proxy value for temperature in varve properties. My inclination is to think not, because of the amount of stationarity one needs to assume, and because of the difficulty of calibrating varve properties against local temperature. But I do not dismiss it out of hand. One still has to go through the processes of reason and learn from the research of others.

For a direct linkage, temperatures taken in the instrumental era have to be compared against extremely recent sediments. It is well known in geology that it can take many orders of magnitude more time for the normal effects of dewatering, compaction, metamorphism (if any), shearing, folding, intrusion by other bodies, alteration of provenance, thixotropy and so on to run their course before a sedimentary pile can be regarded as having completed its formation – if it ever does complete its formation. Even 5000 feet down in the Mississippi mud there are active processes at work.

To derive temperature properties from recent varves, one has either to go via an intermediate (like dendrothermometry or some types of isotope work), or determine the rather exact progression from a sediment layer put down this year to one put down thousands of years ago. Where does the varve under study sit in this spectrum of change? The answer is generally not known to adequate accuracy.

Sedimentology deserves to be called a brave field, because of the formidable complexity of nonlinear problems that it faces. Even the classic problem of deposition of sediment load in a river delta, where persistent density currents are observed in a slowing, semi-steady, quasi-laminar flow, is by no means simple hydrodynamics. In limnological cases one faces the added complexity of load development (i.e., sediment pick-up from the runoff basin) from episodic rainfall events. All of this involves viscous, turbulent flow that interacts with the flow-channel boundaries in unpredictable ways. And strong winds (even staid Fenno-Scandia is raked occasionally by gales of 50 knots or more) may disturb the unconsolidated stratigraphy through wave action in shallow lakes.

For any proxy to be reliable, it must be coherent over a wide range of frequencies with the signal of interest. Varves may indeed provide some useful indication of precipitation intensity over decadal time-scales or longer. But with the latter only tenuously coherent with similar-scale temperature variations, it seems difficult to make a rigorous scientific case for any supposed thermometry by varves.
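The coherence requirement can be made concrete with a short sketch using scipy’s magnitude-squared coherence on two synthetic annual series; the shared-signal weighting of 0.3 is an assumption for illustration, not a measured quantity:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)

# Two hypothetical annual series: a "temperature" record and a "varve"
# record sharing only a weak common component (the 0.3 weighting is an
# illustrative assumption).
n = 512
common = rng.normal(size=n)
temperature = common + rng.normal(size=n)
varve = 0.3 * common + rng.normal(size=n)

# Magnitude-squared coherence; fs=1 gives frequency in cycles per year.
f, cxy = signal.coherence(temperature, varve, fs=1.0, nperseg=128)
print(f"mean coherence across frequency bands: {cxy.mean():.2f}")
```

When the shared component is this weak, coherence stays low across the whole band, which is the quantitative version of the “tenuous” link described above.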

Hope this thread is not too dead yet!
Mr. Rocks yesterday emailed his thesis adviser (and co-author), a recently retired sedimentologist, on the question of thermometry and varves. This is what he said:

Hi. I’m in New England at the moment, but can give you a quick answer [if you google ‘varve’ you’ll be overwhelmed with info]. By definition varves are annual cycles. Most commonly the term is applied to glacial lake seds. If marine, oxygen isotopes can be used to get paleotemperatures [perhaps in lakes too if any calcareous plankton are found]. The Greenland ice cores are varved in that they have annual cycles with data going back 15K+ years and trapped gas can yield O-isotope paleotemp data. Hope this helps.

Re: jlc (#48),
Is there a good doc online to get a sense of the basics of current modeling practices amongst you? In light of the recent conversation, it would be interesting to know what sorts of stochastic assumptions (say, about the underlying joint waiting times and size distributions of inflows) give you guys pretty good predictive models.