State of Antarctica: red or blue?

A couple of us (Eric and Mike) are co-authors on a paper coming out in Nature this week (Jan. 22, 09). We have already seen misleading interpretations of our results in the popular press and the blogosphere, and so we thought we would nip such speculation in the bud.

The paper shows that Antarctica has been warming for the last 50 years, and that it has been warming especially in West Antarctica (see the figure). The results are based on a statistical blending of satellite data and temperature data from weather stations. The results don’t depend on the statistics alone. They are backed up by independent data from automatic weather stations, as shown in our paper as well as in updated work by Bromwich, Monaghan and others (see their AGU abstract, here), whose earlier work in JGR was taken as contradicting ours. There is also a paper in press in Climate Dynamics (Goosse et al.) that uses a GCM with data assimilation (and without the satellite data we use) and gets the same result. Furthermore, speculation that our results somehow simply reflect changes in the near-surface inversion is ruled out by completely independent results showing that significant warming in West Antarctica extends well into the troposphere. And finally, our results have already been validated by borehole thermometry — a completely independent method — at at least one site in West Antarctica (Barrett et al. report the same rate of warming as we do, but going back to 1930 rather than 1957; see the paper in press in GRL).

Here are some important things the paper does NOT show:

1) Our results do not contradict earlier studies suggesting that some regions of Antarctica have cooled. Why? Because those studies were based on shorter records (20-30 years, not 50 years) and because the cooling is limited to the East Antarctic. Our results show this too, as is readily apparent by comparing our results for the full 50 years (1957-2006) with those for 1969-2000 (the dates used in various previous studies), below.

2) Our results do not necessarily contradict the generally-accepted interpretation of recent East Antarctic cooling put forth by David Thompson (Colorado State) and Susan Solomon (NOAA Aeronomy Lab). In an important paper in Science, they presented evidence that this cooling trend is linked to an increasing trend in the strength of the circumpolar westerlies, and that this can be traced to changes in the stratosphere, mostly due to photochemical ozone losses. Substantial ozone losses did not occur until the late 1970s, and it is only after this period that significant cooling begins in East Antarctica.

3) Our paper — by itself — does not address whether Antarctica’s recent warming is part of a longer term trend. There is separate evidence from ice cores that Antarctica has been warming for most of the 20th century, but this is complicated by the strong influence of El Niño events in West Antarctica. In our own published work to date (Schneider and Steig, PNAS), we find that the 1940s [edit for clarity: the 1935-1945 decade] were the warmest decade of the 20th century in West Antarctica, due to an exceptionally large warming of the tropical Pacific at that time.

So what do our results show? Essentially, that the big picture of Antarctic climate change in the latter part of the 20th century has been largely overlooked. It is well known that it has been warming on the Antarctic Peninsula, probably for the last 100 years (measurements begin at the sub-Antarctic island of Orcadas in 1901 and show a nearly monotonic warming trend). And yes, East Antarctica cooled over the 1980s and 1990s (though not, in our results, at a statistically significant rate). But West Antarctica, to which hardly anyone has paid much attention (as far as temperature changes are concerned), has been warming rapidly for at least the last 50 years.

Why West Antarctica is warming is just beginning to be explored, but in our paper we argue that it basically has to do with enhanced meridional flow — there is more warm air reaching West Antarctica from farther north (that is, from warmer, lower latitudes). In the parlance of statistical climatology, the “zonal wave 3 pattern” has increased (see Raphael, GRL 2004). Something that goes along with this change in atmospheric circulation is reduced sea ice in the region (while sea ice in Antarctica has been increasing on average, there have been significant declines off the West Antarctic coast for the last 25 years, and probably longer). And in fact this is self-reinforcing (less sea ice, warmer water, rising air, lower pressure, enhanced storminess).

The obvious question, of course, is whether those changes in circulation are themselves simply “natural variability” or whether they are forced — that is, resulting from changes in greenhouse gases. There will no doubt be a flurry of papers that follow ours, to address that very question. A recent paper in Nature Geoscience by Gillett et al. examined temperature trends in both the Antarctic and the Arctic, and concluded that “temperature changes in both … regions can be attributed to human activity.” Unfortunately our results weren’t available in time to be used in that paper. But we suspect it will be straightforward to update that work to incorporate our results, and we look forward to seeing that happen.

Postscript
Some comment is warranted on whether our results have bearing on the various model projections of future climate change. As we discuss in the paper, fully-coupled ocean-atmosphere models don’t tend to agree with one another very well in the Antarctic. They all show an overall warming trend, but they differ significantly in the spatial structure. As nicely summarized in a paper by Connolley and Bracegirdle in GRL, the models also vary greatly in their sea ice distributions, and this is clearly related to the temperature distributions. These differences aren’t necessarily because there is anything wrong with the model physics (though schemes for handling sea ice do vary quite a bit from model to model, and certainly are better in some models than in others), but rather because small differences in the wind fields between models result in quite large differences in the sea ice and air temperature patterns. That means that a sensible projection of future Antarctic temperature change — at anything smaller than the continental scale — can only be based on the mean and variation of ensemble runs, and/or the averages of many models. As it happens, the average of the 19 models in AR4 is similar to our results — showing significant warming in West Antarctica over the last several decades (see Connolley and Bracegirdle’s Figure 1).

128 Responses to “State of Antarctica: red or blue?”

Eric, I think you and Hank are being too negative about this. I often listen to SciFri, and I just listened to the podcast of your segment. By comparison with others I have heard, I thought it was actually pretty good. I was concerned when you seemed late in the segment to get diverted into an explanation of the cooling of Eastern Antarctica, but you got back on track near the end, emphasizing the essential point, which is that West Antarctica is warming significantly. This is a general audience with limited time available, so things cannot be explained at a level that would satisfy a scientific conference.

[edit] How does the fact that satellite data were not collected prior to 1982, which was during a professed cooling period (1969 – present), support the argument that warming was being shown by anything but the surface stations?

Further how does this compare with the study:

Twentieth century Antarctic air temperature and snowfall simulations by IPCC climate models. Andrew Monaghan, David Bromwich, and David Schneider. Geophysical Research Letters, April 5, 2008

“We can now compare computer simulations with observations of actual climate trends in Antarctica,” says NCAR scientist Andrew Monaghan, the lead author of the study. “This is showing us that, over the past century, most of Antarctica has not undergone the fairly dramatic warming that has affected the rest of the globe. The challenges of studying climate in this remote environment make it difficult to say what the future holds for Antarctica’s climate.”

The authors compared recently constructed temperature data sets from Antarctica, based on data from ice cores and ground weather stations, to 20th century simulations from computer models used by scientists to simulate global climate. While the observed Antarctic temperatures rose by about 0.4 degrees Fahrenheit (0.2 degrees Celsius) over the past century, the climate models simulated increases in Antarctic temperatures during the same period of 1.4 degrees F (0.75 degrees C).

The error appeared to be caused by models overestimating the amount of water vapor in the Antarctic atmosphere, the new study concludes. The reason may have to do with the cold Antarctic atmosphere handling moisture differently than the atmosphere over warmer regions.

That shows that, based on physical evidence, there was only 0.2C of warming for the century.

[Response: What the study you are writing about shows is that some models overestimate the amount of warming relative to one data-based estimate of it. That data-based estimate is from a paper of mine, showing about 0.2C warming on average for the entire continent; the West Antarctic warming is greater. It is based on ice cores, which give an inherently conservative estimate because there is generally a seasonal bias due to non-uniform snowfall rates. Borehole thermometry results I’ve seen suggest greater warming than that, in at least two locations in East Antarctica. Watch for publications on this in the next year. I think the Monaghan paper’s conclusion that Antarctic warming is overestimated by the models is premature.–eric]

I don’t quite get this focus on the last 50 years. Aren’t we most concerned with the latest trend: the one that relates to the steep part of the hockey stick graph, the one that confirms man’s influence on climate, the one that justifies alarm, the one that shows consistency with the greenhouse gas effect? Shouldn’t we therefore focus on the period for which we have identified a clear manmade global warming signature, hence the period 1975 – present? To attribute the cooling in East Antarctica to a local effect indirectly caused by man-induced ozone loss, and hence as atypical for the region, ignores the fact that MSU data for the troposphere of the polar and extratropic regions of the Southern Hemisphere also show no warming and even some cooling.

[Response: What you say doesn’t make any sense. If the MSU data show cooling in the recent past over East Antarctica, that’s entirely consistent with the surface temperature data and the ozone-related interpretation. For the pre-ozone-hole period, the MSU data agree well with our assessment, as shown in the paper linked to my post (click on ‘troposphere’)–eric]

Another episode of Drew Shindell’s “Who should you trust, the models or the data?”

The winning answer is neither alone but both together, which brings me to my point. Here Eli descends to speculation: As has been emphasized in the paper under discussion, there is a warming from global climate change going on in Antarctica and a cooling from ozone depletion. The latter has a HUGE annual cycle which kicked in in the early 80s. Therefore, we can a) trust the modeling of the cooling effect after 1980, when we actually have good, correlated data on the depletion depth and ground-level temperatures (well, more than anything else); b) take it as a given that there was only a small cooling effect before, say, 1975 or so; and c) use the observed annual cycle as a measure of the contribution of the two effects.

This means that we can model what the Antarctic warming would be WITHOUT springtime ozone depletion and will be as the effect of chlorine/bromine loading of the stratosphere decreases.

Ron Taylor, I’d like to think you’re right. I listen to SciFri regularly.

I’d bet they tried to squeeze in five minutes for this but hadn’t read it. Before the news break Ira said the surprise is that it’s cooling, not warming; after the news he said he’d gotten it backward. Neither was correct. As Eric said: Cooling or warming? Yes. That much, Ira’s staff should’ve gotten for Ira. Forgivable, likely not much harm done, missed chance. Felt like it dropped on his desk too late.

Just sayin’ — it’s cautionary. Radio may need preplanning of “elevator talks” — the short phrases that educate.

[Response: Well said. Exactly: neither of the two ways that Ira put it were right!–eric]

Overall the media coverage of this paper was excellent. I’m not completely sure why, given that lots of papers of equal or greater significance don’t get this degree of attention, but perhaps the difference was that it was the Nature cover story. Interestingly just a day later came the paper on the decline in western North American forests, which got similar attention. That was no surprise given its subject matter, but notably I didn’t see any signs of the two papers crowding each other out.

Re SciFri, I suspect that the two 20-minute segments were originally planned to be a half-hour and that the two breaking news climate segments (the forestry paper had the following 10-minute chunk) were squeezed in at the last minute. Eric says that Ira wasn’t “on his game” yesterday, but I have to say my impression from listening to many shows is that Ira just isn’t very comfortable with climate stories. I’m not sure why, although part of it may be that when they allow call-ins during climate segments (which they didn’t for either of these) they tend to get lots of the sort of troll with which we are all too familiar in the climate blogosphere.

The paper notes that radiometric surface temperature “Tir” is different from 2-m shelter height air temperature.

Was any attempt made to correct for this difference? It’s not clear from the paper that there was, and the text for Figure 2 simply says the two temperatures actually ‘agree well.’ Physically I don’t see why one would expect them to agree well.

[Because we are looking at changes through time, there is no need to correct for this difference unless the difference itself changes with time. You can’t easily change the surface temperature without changing the temperature above the surface, due to mixing, so of course they are related. The relationship doesn’t have to be constant, because the amount of mixing can change. But our results indicate that the relationship doesn’t change much: the trend is in actual temperature, not in mixing. If there were a trend in mixing (which would be very remarkable, and a story in itself), the satellite-based results and the AWS-based results would differ. They don’t differ significantly.–eric]

I agree they should **correlate** over time and trends may be similar, but ‘agree’ meaning ‘approximately equal’ is a stronger condition. As a general rule, of course, surface/skin temperatures don’t equal 2-m air …

[Response: Eric already explained this once above. We are not working with temperatures but, rather, temperature anomalies. The mean is subtracted off. So a constant offset e.g. such as one might expect between ice surface temperature and a 2m air temperature, would have no influence on the result of the analysis. By showing that we get the same result using both AWS and satellite ice surface temperatures, we have indeed demonstrated that to be the case. –mike]
[It’s worth adding that this simple point gets overlooked all the time, including by some colleagues of ours that really ought to know better (I won’t name names). It is akin to misunderstanding the difference between precision and accuracy.–eric]
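The offset point above is easy to demonstrate numerically. The following is a minimal sketch with synthetic data (not the paper’s actual series): a constant difference between skin temperature and 2-m air temperature disappears entirely once each series is converted to anomalies, so the fitted trends are identical.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(50.0)

# Synthetic 2-m air temperature: a small warming trend plus weather noise.
t_air = 0.01 * years + rng.normal(0.0, 0.5, years.size)
# Synthetic ice-surface (skin) temperature: same variability, constant -2 K offset.
t_skin = t_air - 2.0

def anomalies(x):
    # Subtract the series mean, as is done before the reconstruction.
    return x - x.mean()

# The constant offset cancels in the anomalies, so the fitted
# linear trends of the two series are the same.
trend_air = np.polyfit(years, anomalies(t_air), 1)[0]
trend_skin = np.polyfit(years, anomalies(t_skin), 1)[0]
```

This is the precision-vs-accuracy point in miniature: the skin temperature can be systematically “wrong” as an absolute number while carrying exactly the same trend information.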

“However, the trends in our results (when we use the AWS) don’t depend significantly on trends in the AWS data (in fact, the result changes little if you detrend all the AWS data before doing the analysis). –eric]”

What method did you use to detrend the AWS data? I have looked at a number of East Antarctic stations in GISTEMP and all of them show a statistically insignificant warming trend using linear models. I would like to detrend the data to see what impact that has.

[Response: Errrr… none. Detrending means removing the linear trend, and if the trend is small and not significant, then detrending won’t change much. – gavin]
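For concreteness, “detrending” here just means subtracting the best-fit line. A minimal sketch (illustrative only; the actual station series would come from GISTEMP or READER):

```python
import numpy as np

def detrend(y):
    # Remove the best-fit (least-squares) linear trend from a series.
    y = np.asarray(y, dtype=float)
    x = np.arange(y.size, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# A pure linear ramp detrends to (numerically) zero; conversely, a series
# whose fitted slope is already near zero is barely changed by detrending,
# which is Gavin's point.
line = 3.0 + 0.2 * np.arange(100.0)
flat = detrend(line)
```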

Incredibly warm for this time of the year. Which makes it obvious: the clear air is warming the surface! The tropospheric weighted temperature is 244 K, probably 5 K above a tentative, especially small database.

Finally, if there were a survey of Arctic people throughout the world, there would be a near-unanimous result: it’s getting so warm Up Here, the sky is changing…
It’s simply too bad we don’t hear much from the Antarctic long-term transient population, from the plumber to the scuba diver; there should be a likewise response just about starting there.

I have not read the study but I have read the SI and abstract and I have a couple of questions:

1. If I read the SI correctly, the satellite data for clouds was removed when it did not fall within 1 SD of the climate mean. How did you tell the clouds from the ice? Did you develop your model and then remove any cold outliers, since they would have to be clouds?

[The cloud masking is done with *daily* data, and is based on multiple channels to identify clouds. The details of the method are in Comiso, 2001, Journal of Climate. Cloud masking is done *before* anything else. The clouds actually tend to be warmer than the surface, so warm outliers would be removed more often than cold.]

2. It was not stated in the SI, but did you test your model on any well-documented area to see if the satellite/SST data collected over the 82-2006 period would then give the correct trend line for the previous 25 years using just the SST data?

[Not sure what you mean about SST data, which we didn’t use in the analysis. If you are referring to the general circulation modeling, all we did for this paper was look at the already-published results from a 2007 paper in JGR. As for the satellite data, yes, of course we compared it with other data in well-documented areas. If you look at Comiso’s 2001 paper you’ll see the demonstration that it is extremely high fidelity.]

3. Why is it more important that you start measuring from 1958, a cold point for the century, and not from ’40, or ’69, ’79, or ’82, which all show cooling trends for the century?

[Response: If you can find me comprehensive data from Antarctica going back to 1940, I’d be delighted to hear about it. Almost all the data start in 1957. And of course we do show the results for starting in ’69 (Figure 3b), ’79 (Figure 4), and ’82 (Supplementary Information). A very clear point in the paper (if you bother reading it) is that West Antarctica is warming based on any of these starting points. How do you know 1958 is a “cold point” for this century, since there are virtually no data going back prior to 1957? (There are in fact some data going farther back, but just in a few isolated places. The only continuous long record is from the sub-Antarctic island of Orcadas, showing pretty much monotonic warming since 1901. That doesn’t help much though since that is near the Antarctic Peninsula, which everyone already knew was warming. An analysis of the rest of the available data by Jones, 1990 in Journal of Climate shows overall warming since the early 20th century. Those data are simply too discontinuous and sparse for us to have used them in our analysis. None of these data suggest anything special about 1957. Although some people seem bent on suggesting that we “chose” 1957 for some nefarious reason, 1957 was the start of the International Geophysical Year, when most of the weather stations were established; that’s why the weather records generally start then.–eric]

Booker’s latest piece in The Telegraph is riddled with errors and misinformation, in common with all of his pieces on global warming. Most of the comments following the article are supportive of Booker, with no comments critical of his statements. The reason appears to be that The Telegraph is censoring any such critical comments, which now appear far less frequently than a few months ago.

I posted a comment that pointed out and corrected eight errors in Booker’s article (see below), but the Telegraph has not published it. Misinformation and errors, followed by censorship: that is the state of one of the main newspapers in the UK at present.

Errors, briefly:
1. New evidence contradicts “all previous evidence”.
2. E. Antarctic cooling “major embarrassment to the warmists”
3. Antarctica is “source of all the meltwater which will raise sea levels by 20 feet”
4. Antarctic Peninsula “tiny” and only part that is warming.
5. “The study relied ultimately on pure guesswork”
6. “Dr Kenneth Trenberth”
7. “hockey stick” rewrote the scientific evidence.
8. “well-established fact that the world was significantly warmer in the Middle Ages than it is now”.

[Response: This appears to be par for the course for the Telegraph. The last time they messed up they didn’t even allow a correction from the main interviewee (see Ben Goldacre’s column). – gavin]

[Response: A reminder of why leading environmental journalist George Monbiot (who writes for the Guardian in the UK) has aptly termed Booker the Patron Saint of Charlatans – mike]

“1957
Launch of Soviet Sputnik satellite. Cold War concerns support 1957-58 International Geophysical Year, bringing new funding and coordination to climate studies.
Revelle finds that CO2 produced by humans will not be readily absorbed by the oceans.”

Any plans to make the reconstructed data (AVHRR, AWS, and combined) publicly available? I checked your Web site as well as the paper and SI but couldn’t find a link…

[Response: I’ll have the reconstructed data available shortly on my web site at U. Washington. All of the data that go into this work have always been available on line, through NSIDC (National Snow and Ice Data Center) (for the AVHRR) and through the READER site at the British Antarctic Survey (for the weather stations, including the AWS). These raw data were already available before we published the paper. The RegEM code — in the form we used — is (and was) available at T. Schneider’s web site at Caltech. I’ll put up links to all of this when I put our reconstruction on line. We’ll send the reconstruction to NSIDC, a reliable archive, when we have time.–eric]

“At 25 deg N ‘snapshot’ measurements over the past 50 years suggest that the MOC has slowed by 30% and the structure of the overturning circulation has changed so that the southward transport of lower NADW has halved and the southward recirculation of upper waters in the subtropical gyre has doubled.” http://www.noc.soton.ac.uk/rapid/rw/docs/RAPID-WATCHscience.pdf

Re: 43. Perhaps I was too ambiguous with my question. Huff states that it is worth giving statistical material a “sharp second look before accepting”. The maps above give only trend data, my ‘second look’ (looking primarily at the 1969-2000 map) is to enquire whether West Antarctica is warming towards the mean temp of the East (with the Eastern temp cooling from a higher point) i.e. convergent trends, or whether both sides of the continent were previously closer in their mean temps and are now warming/cooling away from each other (i.e. divergent).

My feeling is it is the latter but I would welcome clarification (in the full knowledge that there is not likely to be a simple yes/no answer to the question).

Hope that’s clearer.

[Response: Chris: West Antarctica is getting warmer, and it is already much warmer than East Antarctica, because it is much lower in elevation. So to the extent that cooling will continue in East Antarctica, I suppose you could say “divergent”. I’m not sure anything is learned by such terminology though: it doesn’t tell you anything about the underlying mechanisms.–eric]

I don’t understand why any infilling technique is needed, RegEM or otherwise. Can’t you just regress the AVHRR data on the available years of the occupied station data after 1982, apply the relationship (with noise, and accounting for spatial autocorrelation in the AVHRR data) continent-wide to the usable occupied station data, and call it good? (And still test for cloud and inversion biases by incorporating the AWS data as you did) How does the infilling actually generate any necessary information? I’m assuming it’s because the temporal change in the set of usable occupied station data makes the regression process very messy and your method eliminates that?

Also, it’s not clear to me–is RegEM also used in spatial filling for the masked-out (cloud-covered) areas in the AVHRR data, or only for infilling of the instrumental T data? Or is it using the covariance matrix between AVHRR and occupied stations to fill both simultaneously. What exactly does it do? And the T Schneider paper says it’s appropriate for conditions where the number of variables exceeds the number of records. How is that the case here?

[Response: I’m unclear about what you’re suggesting. The ‘infilling’ is nothing other than estimating missing values based on available values, i.e. it is the sort of regression you allude to. However, such multivariate regression problems need to be carefully regularized to avoid overfitting, hence the use of methods such as RegEM. Note that, as shown in the Supp Info, we get the same result using the more conventional approach of employing EOFs to infill missing data (this has been done to infill gappy instrumental climate records by the UK Met Office, NOAA, and many others for more than a decade). Note also that the conditions you cite Schneider for correspond to his observations using ‘ridge regression’ as the regularization scheme in RegEM. There are a number of papers (including the one we cite in the paper) showing that this does not work well with the infilling of sparse data, but that the alternative use of truncated total least squares (TTLS) as a regularization scheme does work quite well, based on independent tests using model simulation data (of course, this is what the cross-validation tests in the paper are all about as well). That is what was used, as described in the paper. All infilling was done with the final AVHRR product (i.e. after any cloud masking had already been done). -mike]
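To make the expectation-maximization idea concrete, here is a toy, deliberately unregularized sketch of EOF-based infilling: fill the missing entries with a first guess, then alternate between reconstructing the field from its leading modes and overwriting only the missing entries. This is only the skeleton of the approach; RegEM proper replaces the naive truncated-SVD step with a regularized regression (ridge or TTLS), which is what guards against overfitting with sparse data.

```python
import numpy as np

def eof_infill(X, n_modes=1, n_iter=200):
    """Toy EM-style infilling of NaN entries using a truncated SVD
    (the unregularized skeleton of RegEM-type methods)."""
    X = np.array(X, dtype=float)
    missing = np.isnan(X)
    # First guess: column (e.g. per-station) means of the observed values.
    col_means = np.nanmean(X, axis=0)
    X[missing] = np.take(col_means, np.nonzero(missing)[1])
    for _ in range(n_iter):
        mean = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
        recon = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes] + mean
        X[missing] = recon[missing]   # overwrite only the missing entries
    return X

# A rank-1 synthetic "field" (5 times x 3 stations) with one missing value:
truth = np.outer(np.arange(1.0, 6.0), np.array([1.0, 2.0, 3.0]))
gappy = truth.copy()
gappy[4, 2] = np.nan              # true value is 15.0
filled = eof_infill(gappy, n_modes=1)
```

With real, noisy, gappy data this naive version overfits badly, which is exactly the motivation for the regularization discussed above.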

Lastly, what about just using the microwave data, avoiding the AVHRR/cloud masking issue, like Shuman and Stearns, except with the full suite of stations. Unacceptable spatial resolution?

[Response: I agree with Mike’s points above. Also, we did use the microwave data, but it has the huge problem that it doesn’t see the snow surface — it is effectively seeing about 1 m into the snow, so changes in snow properties create spurious non-temperature noise. Shuman and Stearns had to adjust for this on a case-by-case basis using the automatic weather station data. The results don’t actually differ much from what we obtained with the AVHRR, but the statistics were terrible.–eric]

Yes I understand that the infilling is for estimating the missing instrumental records. But I don’t see how the process is multivariate in the first place. Because of the multiple bands in the AVHRR data, or the multiple stations in the record? I was assuming the multiple channels get synthesized into surface temperature values via some standard physics equations, in which case you would then just regress the time-averaged AVHRR temperatures against the co-located, non-missing, occupied station data, and apply that relationship (with noise etc) to provide the continental and regional estimates. Or is it the number of occupied stations that makes it multivariate, each considered one variable? But why would one do that? Something basic is not clicking.

[Response: Have you read the paper? I really think this is very very clear as we wrote it. If it isn’t, read the papers we cite: notably Comiso 2001 and the two or three by Mann and/or Rutherford. Not meaning to be dismissive here; I just don’t understand what you don’t get. –eric]

Eric, do you think I could ask the type of questions I did without reading the paper and the supplemental information? I’ve also looked at 3 of the references (Schneider, Mann et al, Shuman and Stearns) as much as time would allow me. These are complex, less than familiar techniques that are germane to the results, and most of us don’t have time to trace the background and details of them. If I did, I would. That’s why we ask questions.

I tried to be as clear as I could. To re-state: (1) what exactly makes the data multivariate, such that a multivariate procedure (a modified RegEM using TTLS instead of ridge regression) is needed, and (2) why is filling in the missing data even necessary in the first place, i.e. why can’t you just regress the co-occurring AVHRR data and the non-missing occupied station data, and then apply that relationship continent-wide, using the spatial info in the satellite data, to get your large-scale T estimates? What purpose does the data filling serve?

[Response: Jim. Thanks for trying again, especially when I may have seemed dismissive. Now I see what you’re asking. (1) The point of using RegEM, instead of conventional PCA, is that it allows one to account for spatial covariance information both in the predictor data (weather stations) and the predictand data (the satellite data). Any PCA analysis of climate data is just a snapshot of reality. The patterns one gets hopefully reflect the climate dynamics, but the longer the time series you use, the more likely they are to reflect a meaningful, representative average picture. As we say in the paper, the point of TTLS is to solve the general matrix inversion A = bx where both A and b may be approximations. More typical regressions assume all the wiggle-room (in the least-squares sense) is in b, and that A is perfect. That’s very rarely the case. A = bx is a model. ~A = ~bx is almost always a better model. (2) You could do exactly what you say and the results would be identical. Some temporal infilling would still be necessary in the time series, because they are discontinuous — there are gaps in *all* the records, even the weather stations. But ignoring that, you are right: you don’t really need to do the spatial infilling we did. However, doing it makes the calculation of large-scale averages simple arithmetic, and it has the advantage of providing a picture of the spatial weighting of the large-scale averages. I hope that helps more! –eric]
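The “errors in both A and b” setup is the classical total-least-squares problem (written here in the conventional A x ≈ b orientation). As a hedged illustration of the underlying idea, here is plain TLS via the SVD of the augmented matrix, following Golub and Van Loan; note the paper uses the *truncated* variant (TTLS), which additionally discards small singular values as a regularization step, and this toy version is not that.

```python
import numpy as np

def tls(A, b):
    """Classical total least squares for A x ~ b, allowing errors in
    both A and b. The solution comes from the right singular vector of
    the augmented matrix [A | b] with the smallest singular value."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    _, _, Vt = np.linalg.svd(np.hstack([A, b]))
    v = Vt[-1]                    # numpy orders singular values descending
    return -v[:-1] / v[-1]

# With exactly consistent synthetic data, TLS recovers the true coefficients.
rng = np.random.default_rng(2)
A = rng.normal(size=(20, 2))
x_true = np.array([1.5, -0.5])
b = A @ x_true
x_est = tls(A, b)
```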

[Response: Hmmm. I must confess that I, myself, don’t follow. We’re interested in getting the best estimate we can of the spatiotemporal evolution of Antarctic temperatures over the past 50 years. This belongs to a class of problems in atmospheric science/climatology/oceanography that several different groups have been working on for well over a decade. Namely, given a set of sparse, but long-term climate/atmospheric/oceanic field data and a spatially complete (or at least far more widespread), but short set of data describing the same (or equivalent) field, how do we use the combined information in both datasets to get a best estimate of the full spatiotemporal history of the instrumental field in question? The Mann et al paper cites 18 studies of this type dating back more than a decade by Smith, Reynolds, Kaplan, Rayner, Folland, and other leading researchers interested in this question. All use some variant on the basic principle we’re using, i.e. eigenvectors of the data covariance matrix (whether it be simple PCA, Regularized Expectation-Maximization, or some other variant). What is it, Jim, that you think you have come up with that solves the problem in a much simpler way and avoids having to work with data covariances and other messy entities? I can promise you there are literally teams of scientists around the world who would like to learn. So please spell it out for us, if you would. Thanks! –mike]

Thanks Eric, I appreciate your efforts to explain and clarify. I’m interested in these techniques because of their wider possible applicability to other types of ecological/environmental analysis where missing data are common and satellite imagery can help fill in the blanks.

Mike, [edit] you’re reading things that aren’t there. These are QUESTIONS, not accusations, about the methods used. I am simply trying to understand what was done, not argue that I have a better way, because I’m highly interested in methodological issues in general, including how they can potentially be used in ecological research where spatial variability and missing values are common issues. [edit]

[Response: Jim, perhaps time to tone this down. It did objectively seem to me that you were implying that the climate research community has somehow missed a trivial solution to a difficult problem, and instead has pursued unnecessarily technical and overly complex approaches to the problem at hand. I feel as if Eric and I provided you with a lot of information, and all of the relevant literature, hence both of us expressed some frustration with your continued questioning along seemingly similar lines. If I am mistaken in my interpretation, then I most definitely apologize. I would still however suggest you consider phrasing things in such a way so as to make such a misinterpretation less likely. We’re quite open here to honest questioning when it doesn’t come across as leading or overly aggressive. –mike]

Mike, I’m aware that there is a high degree of sophistication in some climate science methods–it’s clear just by reading any random part of the literature–which is why I am even interested in them in the first place. You can be 100% sure there was no ill intent in my questions, however they may have come across to you. I’ve looked back at my original post and cannot honestly see how this could have come across as accusatory, so don’t know where to go with that. You may feel the methods are obvious because you have been working with them for a while–I can relate to that–but they are less than obvious to many of us who have not been, including those with a decent statistical background. [edit]

From the innumerate gallery, I appreciate y’all working to get past the raised-hackles (very human) responses. And text conveys what, five or maybe ten percent of meaning, the rest we get from body language or when it’s missing, our brains, er, interpolate the missing data (grin).
Which is hard to do.

Pray keep talking. I won’t understand the math but I seriously do understand the effort it takes competent people to make progress in an area like this with this medium. And greatly appreciate the effort.