
Are surface temperature records reliable?

What the science says...

The warming trend is the same in rural and urban areas, measured by thermometers and satellites, and by natural thermometers.

Climate Myth...

Temp record is unreliable

"We found [U.S. weather] stations located next to the exhaust fans of air conditioning units, surrounded by asphalt parking lots and roads, on blistering-hot rooftops, and near sidewalks and buildings that absorb and radiate heat. We found 68 stations located at wastewater treatment plants, where the process of waste digestion causes temperatures to be higher than in surrounding areas.

In fact, we found that 89 percent of the stations – nearly 9 of every 10 – fail to meet the National Weather Service’s own siting requirements that stations must be 30 meters (about 100 feet) or more away from an artificial heating or radiating/reflecting heat source." (Watts 2009)

Temperature data is essential for predicting the weather. So, the U.S. National Weather Service, and every other weather service around the world, wants temperatures to be measured as accurately as possible.

To understand climate change we also need to be sure we can trust historical measurements. A group called the International Surface Temperature Initiative is dedicated to making global land temperature data available in a transparent manner.

Surface temperature measurements are collected from about 30,000 stations around the world (Rennie et al. 2014). About 7000 of these have long, consistent monthly records (Fig. 1). As technology gets better, stations are updated with newer equipment. When equipment is updated or stations are moved, the new data is compared to the old record to be sure measurements are consistent over time.

Figure 1. Station locations with at least 1 month of data in the monthly Global Historical Climatology Network (GHCN-M). This set of 7280 stations is used in the global land surface databank. (Rennie et al. 2014)
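The equipment-update comparison described above can be sketched in a few lines. This is only an illustration of the general idea, not any agency's actual processing code; the function name and all numbers are invented.

```python
# Hypothetical sketch: when a station's equipment is replaced, old and new
# instruments can be run side by side for a while, and the mean offset
# between them used to keep the record consistent over time.
# All numbers here are invented for illustration.

def overlap_offset(old_readings, new_readings):
    """Mean difference (new - old) over a parallel-observation period."""
    if len(old_readings) != len(new_readings):
        raise ValueError("overlap periods must align")
    diffs = [n - o for n, o in zip(new_readings, old_readings)]
    return sum(diffs) / len(diffs)

# Suppose the new sensor reads about 0.3 C warmer than the old one:
old = [14.8, 15.1, 15.4, 14.9]
new = [15.1, 15.4, 15.7, 15.2]
offset = overlap_offset(old, new)

# Removing the offset splices the new series onto the old record:
spliced = [t - offset for t in new]
```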

In 2009 some people worried that weather stations placed in poor locations could make the temperature record unreliable. Scientists at the National Climatic Data Center took those critics seriously and did a careful study of the possible problem. Their article "On the reliability of the U.S. surface temperature record" (Menne et al. 2010) had a surprising conclusion. The temperatures from stations that critics claimed were "poorly sited" actually showed slightly cooler maximum daily temperatures compared to the average.

In 2010 Dr. Richard Muller criticized the "hockey stick" graph and decided to do his own temperature analysis. He organized a group called Berkeley Earth to do an independent study of the temperature record. They specifically wanted to answer the question: is "the temperature rise on land improperly affected by the four key biases (station quality, homogenization, urban heat island, and station selection)?" Their conclusion was no: none of those factors biases the temperature record. The Berkeley conclusions about the urban heat effect were nicely explained by Andy Skuce in an SkS post in 2011. Figure 2 shows that the U.S. network does not show differences between rural and urban sites.
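A toy version of the rural/urban check can clarify what "no difference" means here. This is not Berkeley Earth's actual method, just a minimal least-squares comparison on two invented station series:

```python
# Sketch: fit a linear trend to rural-only and urban-only averages and
# compare the slopes. The yearly values below are invented for illustration.

def linear_trend(years, temps):
    """Ordinary least-squares slope, in degrees per year."""
    n = len(years)
    my = sum(years) / n
    mt = sum(temps) / n
    num = sum((y - my) * (t - mt) for y, t in zip(years, temps))
    den = sum((y - my) ** 2 for y in years)
    return num / den

years = list(range(1980, 1990))
rural = [0.01 * (y - 1980) for y in years]           # 0.01 C/yr trend
urban = [0.01 * (y - 1980) + 0.05 for y in years]    # same trend, warm offset

rural_trend = linear_trend(years, rural)
urban_trend = linear_trend(years, urban)
```

The point of the example: an urban heat island can raise the *level* of urban readings, but if rural and urban *trends* match, urbanization is not what is driving the warming trend.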

Temperatures measured on land are only one part of understanding the climate. We track many indicators of climate change to get the big picture. All indicators point to the same conclusion: the global temperature is increasing.

Comments

IanC, thank you for your detailed analysis. I note that your plotted difference is a close match to that provided by Kevin, but what a difference in perspective the inclusion of the original data makes. In this case it is worth noting that the GISTemp trend is 0.64 C per century. The overall change in trend is, therefore, less than 10% of the total.

Your claim that "ALL changes in the direction that support his belief?" is demonstrably false. In Jan 2010, an adjustment resulted in a decrease in the global trend of 0.005°C/century.

So, my point is discounted because the changes are only 10%, yet I am proven false because of one adjustment that is less than 3% of the overall adjustments? That doesn't seem right.

My overall point though, and the thread topic, is the reliability of the data. I'd say a 10% adjustment is rather large, considering everyone "thought" the data was correct before the adjustment.

I don't have enough info regarding the algorithm to say anything more about it, except the general observation, again, that the chances of all the adjustments being on "the correct side of the belief paradigm" can't be 100% (sorry - 99%).

Kevin, if everybody thought the data were right before the adjustment they would have stopped working on it and the adjustment would not have been made. The adjustment was made precisely because of the research of climatologists who work to understand the limitations of the data.

Your argument is a straw man; the climatologists know that the data were collected for purposes other than climatology (i.e. weather forecasting, which has differing requirements), and research on dealing with these issues is ongoing (perform a Google Scholar search on "homogenisation" of station data).

Now just because the data are not perfect, that does not imply that they are unreliable, as the uncertainties are quantifiable, even if they are not displayed in every graph you see.

IMO your point is discounted because you have presented no evidence to support it. Only suspicions based on your perception of the adjustments and a graph from Climate4You.

There is no a priori reason to expect that adjustments to NASA GISS historical temperature data must be "fair and balanced". Only that they (a) address identifiable problems with the data and (b) are methodologically sound.

If you have evidence that one or both of (a) or (b) is not the case, or can link to someone else who does, then by all means bring it to the attention of the pros here (and even better, bring it to NASA's attention).

But you are going to need more than your personal suspicions as expressed in:

If the data is/was so accurate, why does Hansen keep changing it? And why are ALL changes in the direction that support his belief? You would think that at least some "mistakes" were made in the other direction, no?

or

I'd say a 10% adjustment is rather large, considering everyone "thought" the data was correct before the adjustment.

I don't have enough info regarding the algorithm to say anything more about it, except the general observation, again, that the chances of all the adjustments being on "the correct side of the belief paradigm" can't be 100% (sorry - 99%).

(By the way, can you please provide some kind of substantiation that "everyone thought" the historical data was correct before the adjustment? There's a rather large difference between thinking that data is 100% correct, and thinking it is correct enough.)

Kevin - Regarding your complaints on adjustments, I'll just restate something I posted on one of the "skeptic" blogs on those very adjustments:

It could be argued that it’s better to look at raw temperature data than data with these various adjustments for known biases. It could also be argued that it’s worth not cleaning the dust and oil off the lenses of your telescope when looking at the stars. I consider these statements roughly equivalent, and (IMO) would have to disagree.

If you don't agree with adjustments for various biases, you're going to have to address them directly - regarding the particular adjustment, with support for your opinion - before such criticism can be taken seriously.

Kevin: "considering everyone "thought" the data was correct before the adjustment."

"Correct" is not a binary choice (yes, no) in science. No data are perfect. They don't have to be perfect in order to be useful. Even when they are already good enough to be useful, it is possible to get greater utility by improving the analysis.

You seem to be falling into the "if we don't know everything, we know nothing" mindset where certain individuals in the fake skeptic camp play the uncertainty monster. If you waited until your knowledge was perfect before doing anything, you wouldn't even be able to get out of bed in the morning.

Kevin, if everybody thought the data were right before the adjustment they would have stopped working on it and the adjustment would not have been made. The adjustment was made precisely because of the research of climatologists who work to understand the limitations of the data.

These adjustments were made in 2008. This thread was started in 2007. Therefore there was confidence that these were accurate before they were. That is my point.

There is no a priori reason to expect that adjustments to NASA GISS historical temperature data must be "fair and balanced". Only that they (a) address identifiable problems with the data and (b) are methodologically sound.

You are correct. I do not have a priori reason to expect that, just logic, common sense, and probability.

A new algorithm is used that can find abnormalities better. It stands to reason that the probability of finding data that "needs corrective action" only on "one side of the argument" would be rather small.

It is just a thought-provoking exercise. Do you really believe that all those adjustments were needed, but that there was only the one adjustment the other way? I just read a piece by Dr. Sanford (Union of Concerned Scientists) the other week where he was arguing that the fact that there have been MORE high temp records than low temp records lately proved AGW theory. That level was something like 75-25, not 99-1.

You seem to be falling into the "if we don't know everything, we know nothing" mindset where certain individuals in the fake skeptic camp play the uncertainty monster.

This is the nature of the thread here: how reliable is the data? I am not naive enough to believe it can ever be 100% accurate, nor does it have to be. Again, this thread started in 2007, saying how reliable that data was, then there is a correction that adjusts the data in such a way as to increase the warming trend by 10% in 2008, so how reliable was it in 2007?

Kevin wrote "These adjustments were made in 2008. This thread was started in 2007. Therefore there was confidence that these were accurate before they were. That is my point."

If that is your point, you are labouring under a misapprehension. Most SkS regulars are well aware of the fact that there are homogenisation issues with the data, and that there will continue to be adjustments as the science improves. That is the nature of science. However, that does not mean that the data are unreliable; even with the adjustments, the uncertainties are small enough to be confident of the conclusions being drawn on the basis of those data.

Kevin wrote "Again, this thread started in 2007, saying how reliable that data was, then there is a correction that adjusts the data in such a way as to increase the warming trend by 10% in 2008, so how reliable was it in 2007?

That's all I'm asking."

However, Kevin earlier wrote "If the data is/was so accurate, why does Hansen keep changing it? And why are ALL changes in the direction that support his belief? You would think that at least some "mistakes" were made in the other direction, no? How much cooler are the 30's going to get?"

It seems to me that your purpose has changed somewhat!

If you want to ask scientific questions, then ask them, rather than imply scientists have been disingenuous. All that achieves is to create a combative atmosphere that rarely helps much.

These adjustments were made in 2008. This thread was started in 2007. Therefore there was confidence that these were accurate before they were. That is my point.

Unfortunately, your point appears to rest on a false dichotomy: that data are either accurate or they are not. As Bob Loblaw noted, data are actually on a continuum of more or less accurate and there is almost always room for improvement. If the accuracy of GISTemp improved due to the 2008 adjustments, it does not follow, of necessity, that it was not accurate before, only that it was less accurate.

I do not have a priori reason to expect that, just logic, common sense, and probability.

A new algorithm is used that can find abnormalities better. It stands to reason that the probability of finding data that "needs corrective action" only on "one side of the argument" would be rather small.

This is an argument from personal incredulity, not an appeal to "logic", "common sense", or "probability". In addition, with regards to treating the data there are no "sides of the argument". There are only identifiable, quantifiable uncertainties & biases (of the methodological/numerical kind, not the political kind) in the data and adjustments to correct them.

It is just a thought-provoking exercise. Do you really believe that all those adjustments were needed, but that there was only the one adjustment the other way? I just read a piece by Dr. Sanford (Union of Concerned Scientists) the other week where he was arguing that the fact that there have been MORE high temp records than low temp records lately proved AGW theory. That level was something like 75-25, not 99-1.

Are those Dr Sanford's exact words? Is there a link? Based on what you have written it appears Dr Sanford noted that high temperature records exceeded low temperature records in the given timeframe by a ratio of 3:1. How is this pertinent? Insofar as you are tying this back to a ratio of adjustments performed on NASA GISS, this appears to be a non sequitur.

How reliable is the data? I am not naive enough to believe it can ever be 100% accurate, nor does it have to be. Again, this thread started in 2007, saying how reliable that data was, then there is a correction that adjusts the data in such a way as to increase the warming trend by 10% in 2008, so how reliable was it in 2007?

That's all I'm asking.

The false dichotomy identified at the start remains in play here. Just because the data was made more accurate/more reliable in 2008 does not mean it wasn't accurate or reliable at all in 2007. It just means it was not as accurate. If you suspect otherwise, can you provide some sort of calculation or other analysis to support your suspicion (or a link to someone else doing so)?

Kevin - You have (incorrectly) posed the question of accuracy as binary; that if current data is accurate then previous data cannot be accurate, cannot be trusted, and you then attempted to use that as a Reductio ad Absurdum argument against corrections.

That is simply wrong.

The real state of affairs is a continuum:

Estimates pre-2007 were and are accurate, within some uncertainties.

Current estimates are more accurate, with fewer uncertainties.

Again, if you disagree with any particular correction(s), you are going to have to present data demonstrating an issue with that. Not logical fallacies and arm-waving, which is all you have presented to date.

" ...a 10% adjustment is rather large, considering everyone "thought" the data was correct before the adjustment."

In fact, if you look at the SkS Trend Calculator you will see that the trend for GISTemp is 0.064 C per decade +/- 0.007 C per decade (11%). So his point is that the temperature record is not as accurate as advertised because a change smaller than the advertised accuracy has been made.

Even more bizarre is the claim that:

"I don't have enough info regarding the algorithm to say anything more about it, except the general observation, again, that the chances of all the adjustments being on "the correct side of the belief paradigm" can't be 100%"

It has already been established that the change in GISTemp is primarily because of changes in the Global Historical Climatology Network, whose algorithm Kevin claims ignorance of. Here are the actual adjustments from raw data made by that algorithm:

(Note: Darwin is highlighted because it comes from a discussion of a frequent denier cherry pick used to suggest the GHCN adjustments are wrong.)

The key point for this discussion is that the adjustments are not 99% in one direction. They are very close to being 50/50. In another discussion of adjustment bias, it was found that the mean adjustment was 0.017 degrees C/decade. This data is for GHCNv2 rather than v3, but no doubt the statistics of the latter will be similar.
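The tally being described can be sketched in a few lines. The adjustment values below are invented, chosen only to mirror the near-50/50 split and small positive mean quoted above; they are not the actual GHCN adjustments:

```python
# Sketch: tally the sign of per-station trend adjustments (C/decade) to
# check whether they all push one way. Values are invented for illustration.
adjustments = [0.03, -0.02, 0.01, -0.01, 0.04, -0.03, 0.02, -0.02, 0.05, 0.10]

positive = sum(1 for a in adjustments if a > 0)
negative = sum(1 for a in adjustments if a < 0)
mean_adj = sum(adjustments) / len(adjustments)
# Close to an even split, with a small positive mean - not 99 to 1.
```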

I think the expectation that adjustments should be even is also misplaced. If measurements today are taken in the morning, but past measurements at the same station were taken in the afternoon, then the past temperatures have to be adjusted down. It's a change of practice. Likewise, comparing a modern screened electronic thermometer against a past unscreened glass thermometer also requires the past readings to be adjusted down.

I would certainly not expect any of the temperature records to be beyond improvement. It's a case of methodological advancement and available funding. What is also clear, though, is that you can't blame global warming on adjustments.

scaddenp is correct. Most of the historical changes that have introduced inhomogeneities into the temperature record have tended to cause recorded temperatures to suddenly go down. Station moves from built-up locations to more rural locations (e.g. Darwin, Port Hedland); switching to Stephenson screens; changing Time of Observation; changing the method sea surface temperatures were recorded after WWII; heck, in the very earliest part of the Central England Temperature record, the temperatures are not comparable because the thermometers were placed inside to avoid having to go out in the cold to read them!

So we should expect corrections to often be increasing recent temperatures or decreasing older temperatures as we become more able to isolate and correct for various effects.

However, suppose that in spite of the facts:

That these effects have actually been measured in order to create formulas to correct for them.

That GISS has been releasing all source code and data for years, and publishing all algorithms in the peer-reviewed literature for decades, and independent groups have replicated their results, and not one single fake sceptic has ever published a criticism of any of the algorithms used.

That using raw data, and even tiny subsets of the raw data, gives almost identical results.

In spite of all those facts, you just don't trust any form of correction? Not because you can actually identify anything wrong in all that publicly-available information, but just because your gut tells you it must be so?

Well, in that case you can completely avoid all corrections by simply detecting when a discontinuity in a temperature station's record occurs, and then simply break the record in two at that point. Pretend it's actually two completely different records, and make no effort to quantify the effect of the discontinuity so that you can correct for it.
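That break-the-record-in-two idea (the approach BEST is described as taking elsewhere in this thread) can be sketched minimally. The detection rule and all numbers here are invented; real homogenization uses statistical changepoint tests, not a simple threshold:

```python
# Sketch of the "scalpel" idea: detect a step change in a station record
# and split the record into two at that point, instead of applying an
# explicit correction. Threshold and data are invented for illustration.

def split_at_step(series, threshold=1.0):
    """Split a series wherever consecutive values jump by more than threshold."""
    segments, current = [], [series[0]]
    for prev, cur in zip(series, series[1:]):
        if abs(cur - prev) > threshold:
            segments.append(current)
            current = []
        current.append(cur)
    segments.append(current)
    return segments

# A station move introduces a sudden ~2 C drop mid-record:
record = [15.0, 15.1, 15.2, 13.1, 13.2, 13.3]
segments = split_at_step(record)
# Two segments, each internally consistent; the step never enters a trend.
```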

I think the expectation that adjustments should be even is also misplaced. If measurements today are taken in the morning, but past measurements at the same station were taken in the afternoon, then the past temperatures have to be adjusted down. It's a change of practice.

The algorithm was put in to pick up disparities, not a change in when temp was measured. So this argument does not apply.

In fact, if you look at the SkS Trend Calculator you will see that the trend for GISTemp is 0.064 C per decade +/- 0.007 C per decade (11%). So his point is that the temperature record is not as accurate as advertised because a change smaller than the advertised accuracy has been made.

That is not what I was saying. For the century, there was a 10% increase in the rate of temp increase, solely due to these adjustments.

Tom Curtis,

I didn't state that all the temp adjustments were positive; I stated that all of them made the temp increase rate change in a positive fashion. Lower the early temps, increase the later temps. When you look at the chart I gave, that is exactly what happened.

The algorithm was put in to pick up disparities, not a change in when temp was measured. So this argument does not apply.

Really? I'd say there'd be a disparity, almost by definition, between temperature readings a weather station makes at one time of day and those it makes at another.

With regards to the remainder of your comment #266, the bottom line is that you have articulated suspicions (yours and others') that something is wrong with GISTemp following adjustments made in 2008.

However, and this is the critical part, you have not provided, either directly in the comments or by link to another site, any actual criticism of the adjustments. What you have instead provided is an extended argument from personal incredulity and allegations of bias.

If you can furnish any sort of methodological critique of GISTemp's processes, I am sure that the knowledgeable commenters here would be quite happy to discuss them. Until then, however, it seems to me that you are wasting your time - I rather doubt you will convince those skeptical of your claims as long as you limit your arguments to the above.

Kevin, I am giving you reasons why a disparity would suddenly appear. A change in TOBS, move from city to airport, change of thermometer, and change of screen will all create a discontinuity in the record that the algorithm will pick up, and they will all result in temperatures taken after the change being lower than the ones before, so adjustments will increase trend.

Kevin @266, first, the chart I showed @263 is not of adjustments to individual temperature readings, but of adjustments to station trends. That is not a matter of adjusting down early and up late, but of a differing adjustment for each station; these just happen to have a mean value slightly above zero, even though nearly half make negative adjustments to the trend.

Second, it is not a 10% adjustment, but an 8.9% adjustment in a record with an 11% error margin. Further, it was not an adjustment of the temperature at all, but an adjustment of an index of temperature, which you have done nothing to show makes that index less accurate. For all you know, and most probably, it has made it more accurate.

I have 3 questions. (1) Since the oceans store the same heat per ~20' of depth as all the air and all the relevant land (to ~20' depth), why is average global temperature used in the publicly discussed graphs rather than ocean heat energy? (2) Is average global temperature a simple average of all readings, or weighted? (3) Can I find a proxy historical global temperature set (600 Ka? 600 Ma?) at fine time resolution (a millennium? a century?)? Anybody?

2- No, it is absolutely not a simple average. First, "global average temperature" is tough to define (and measure), so what is usually calculated is the global average anomaly. Second, all the temperature records use area weighting. However, there are a lot of differences in the detail (and a lot of detail). The advanced section of this article gives you good pointers to more information.
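The area-weighting point can be illustrated with a minimal sketch. Real products use much more involved gridding schemes; the anomaly values below are invented:

```python
import math

# Sketch of area weighting: grid cells shrink toward the poles, so a global
# mean anomaly weights each latitude band by cos(latitude) rather than
# averaging stations directly.

def area_weighted_mean(anomalies_by_lat):
    """anomalies_by_lat: list of (latitude_degrees, anomaly) pairs."""
    wsum = sum(math.cos(math.radians(lat)) * a for lat, a in anomalies_by_lat)
    wtot = sum(math.cos(math.radians(lat)) for lat, _ in anomalies_by_lat)
    return wsum / wtot

# A warm high-latitude band counts for less than the same anomaly nearer
# the equator (invented values):
cells = [(0.0, 0.5), (45.0, 0.5), (80.0, 3.0)]
global_anom = area_weighted_mean(cells)
```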

3- Any proxy of use has to have two attributes: a way to tell the time accurately and a way to tell the temperature. The best long-term proxy is ice core bubbles. The "lock in" time for a bubble is short, and where you have annual snow layers you have a very good clock. The thermometry is also very good compared to most other proxies. Resolution degrades as you go back in time for all proxies. Ice cores get you 600 Ka, but only for very selected places on Earth (Greenland and Antarctica). Speleothems are probably next best as far as I know, but more problematic for absolute dating and thermometry, though with wider global coverage. Going back beyond these you lose time resolution badly, as you become dependent on radiometric dating resolution. Resolution will depend on the particular technique. In something like benthic forams from marine cores you can get good relative time but not absolute time (and a lot of fun interpreting the thermometry). In short, all proxies have issues of one sort or another, and paleoclimate studies are best when integrating multiple lines of evidence.

Go to http://www.ncdc.noaa.gov/paleo/recons.html for data but read metadata about limitations before leaping to any wild conclusions.

Air temperature measurements were not started with the monitoring of climate in mind. The concept of "climate" probably didn't even exist in those terms until after people started accumulating data. As experience accumulated, methods of measuring temperature improved (and changes need to be accounted for in looking at long-term trends).

Even though historical air temperature records are an incomplete view of historical global conditions, they are useful. Extensive land surface air temperature records go back much further than ocean temperature records. We understand many of the linkages between ocean and land temperatures, and we can account for much of the differences in patterns. Air temperatures are but one part of the jigsaw puzzle, but they do help.

I'm doing another AGW debate and was wondering if someone can give me a quick response to the following claim, or refer me to where I can read up on it myself:

"To start with the "global warming" claim. It is based on a graph showing that "mean annual global temperature" has been increasing.

This claim fails from two fundamental facts

1. No average temperature of any part of the earth's surface, over any period, has ever been made. How can you derive a "global average" when you do not even have a single "local" average?

What they actually use is the procedure used from 1850, which is to make one measurement a day at the weather station from a maximum/minimum thermometer. The mean of these two is taken to be the average. No statistician could agree that a plausible average can be obtained this way. The potential bias is more than the claimed "global warming.

2. The sample is grossly unrepresentative of the earth's surface, mostly near to towns. No statistician could accept an "average" based on such a poor sample.

It cannot possibly be "corrected" It is of interest that frantic efforts to "correct" for these uncorrectable errors have produced mean temperature records for the USA and China which show no overall "warming" at all. If they were able to "correct" the rest, the same result is likely."

dvaytw, I suggest you ask the person how s/he would, ideally, determine whether or not global energy storage was increasing via the enhanced greenhouse effect. That will either push the person toward an evasive rejection of the greenhouse effect (which you can counter with directly measured surface data that confirm model expectations) or push the person into giving you their answer to the question. If you get that answer, then you can compare it with what scientists are actually doing.

It's an odd complaint anyway, since satellite data--even the raw data--confirm the surface station trend, and stratospheric cooling can only be partially attributed to other causes. Then there's ocean heat content data (an invitation to weasel via Pielke and Tisdale, though), global ice mass loss data (harder to deal with, but the move will probably be "it's happened before."), changes in biosphere, thermosteric sea level rise, and the host of other fingerprints.

The first criticism, that there is no "global average" temperature, is hardly "fundamental." The word "average" has many meanings. That it is used in a way of which the critic disapproves is of no fundamental importance, except perhaps to the critic himself, who is evidently "no statistician."

Given what was said, I guess the criticism is confined to land temperature measurements. It is true that on land the daily maximum and minimum temperature is all that is recorded. It is the standard practice and dates back to 1772 with the CET. The average of these two readings would then be "the mean recorded temperature", which makes it an average.

The critic appears to be suggesting that an average minute-by-minute daily temperature would yield a result with no global temperature rise. Quite how that could be so is unclear. Both the maximum and the minimum averages have been rising in recent decades. And that the minimums have been rising more steeply than the maximums is symptomatic of increased atmospheric insulation - or an enhanced greenhouse effect.

As for the second criticism, it is pure nonsense. As DSL reminds us, the assertion that urban heat islands have significantly distorted the temperature record is difficult to maintain when the satellite record provides essentially the same result.

Figure 5 in the advanced version of this post compares raw data with corrected data, putting the lie to the last claim.

The idea that the average cannot be determined accurately due to sparse samples is disproven by the fact that the same temperature trend can be derived by using approximately 60 rural-only stations (e.g. Nick Stokes' effort referred to in the OP; caerbannog also posts regularly about his downloadable toolkit e.g. comment #8 on this post, which itself is about Kevin C's tool). Anybody attempting to cast doubt on the basis of point 2 really has to explain how the reconstructed record is so robust and insensitive to the particular stations used.

The average = (min+max)/2 temperature issue is irrelevant; all that matters is whether it creates a bias. In the US, where temperatures were recorded by volunteers and the time of day of observation (TOBS) changed over time, it actually does create a bias (a step change to cooler readings at a given station when the change occurs, which caused a reduction in trend over time as the change rolled out), but that can be corrected for, and if it's not, it demonstrably doesn't make much difference. BEST's approach, of simply splitting the station when a step change is detected, deals with this without any correction required.
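The distinction drawn above - a constant estimator bias is harmless, a mid-record change is not - can be shown with a small sketch (all series invented):

```python
# Sketch: a *constant* bias in the (min+max)/2 daily-mean estimator cancels
# out of anomalies, so it cannot create a spurious trend. A mid-record
# change (like a TOBS switch) is different: it introduces a step that
# biases the trend unless corrected for, or the record is split there.

def anomalies(series):
    base = sum(series) / len(series)
    return [t - base for t in series]

true_means = [15.0, 15.1, 15.2, 15.3]
biased = [t + 0.4 for t in true_means]   # constant estimator bias

# Constant bias: anomalies are unchanged, so the trend is untouched.
same = all(abs(a - b) < 1e-9
           for a, b in zip(anomalies(biased), anomalies(true_means)))

# Mid-record switch: last two readings suddenly 0.3 C cooler (TOBS-like
# step), which flattens the apparent trend until it is handled.
stepped = [15.0, 15.1, 15.2 - 0.3, 15.3 - 0.3]
```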

Finally, a global average is not difficult to work out, but it's also not necessary to compute in order to detect global warming — the issue is the change in temperature not the temperature itself, which is why "temperature anomaly" is always used, and the change is easy to detect. One of the reasons why so few stations are required is that anomalies are strongly correlated over large distances (demonstrated empirically by Hansen et al way back in the 80s) even while the actual temperatures between nearby stations can vary widely (e.g. with altitude and surrounding environment).

I should probably also point out that the "global warming claim" isn't based on "a graph" that shows that "mean annual global temperature" has been increasing. For a start, it goes back over 100 years, with the calculations of Arrhenius that showed increasing atmospheric CO2 concentrations would increase global temperatures, coupled with the fact that we have increased atmospheric CO2 concentrations by about 40% and continue to do so; the graph merely provides evidence to support the theory. Secondly, there's an awful lot more evidence out there than just "a graph". Tell them to look at what's happening in the Arctic sometime.

1. No average temperature of any part of the earth's surface, over any period, has ever been made. How can you derive a "global average" when you do not even have a single "local" average?

What they actually use is the procedure used from 1850, which is to make one measurement a day at the weather station from a maximum/minimum thermometer. The mean of these two is taken to be the average. No statistician could agree that a plausible average can be obtained this way. The potential bias is more than the claimed "global warming. [emphasis added]

This "argument" breaks down as soon as it can be shown that there is even one location where high-quality measurements have been made over a long time span. In fact, many institutes and harbours all over the world are sentimentally (up to and including irrationally) proud of having exactly such measurement series.

Invite your opponent to search the internet and s/he might find them. You could even help her/him. For just one example, the "Long-Term Meteorological Station Potsdam Telegraphenberg" has a very nice, explanatory website, and s/he may even find some series there that are very frustrating to her/his worldview. As anyone can see from the "annual mean" graph, there is a clear trend over time, and it is well above the often-cited worldwide figure of 0.8–0.9°C. The difference reflects one of the predicted outcomes of the enhanced greenhouse effect: higher latitudes warm more than the global mean.

But do not expect to convince deniers with facts — here you will (perhaps) encounter something like "but it is NOT since 1850!", or, if somebody shows another time series beginning around 1840, the goalposts will shift. Or anything else ;-)

Thanks guys for all the tips. My initial tactic was to point out to him that there are so many temperature records showing the same basic pattern; if the measurement system were flawed, errors would go in all directions and there wouldn't be such obvious similarity between them. I also pointed him to a very useful pair of charts on Wikipedia, 'temperature records by countries': a quick glance at the 'hottest temperature records' vs. the 'coldest temperature records' shows that the former outnumber the latter by a large margin over the last couple of decades. So even just looking at that, the trend is pretty obvious.

I have another question, but I don't want to keep bothering y'all for answers, so maybe you could just direct me to the most pertinent article, in response to this point of his:

// ...if anyone wants to claim that CO2 levels in the upper atmosphere are causing ground level increases in temperature, there would need to be much greater warming there, which is demonstrably not happening. //

PS - Moderator, please feel free to delete any of my "please help me with debate" questions to the forum if you feel they are off-topic or don't contribute to the discussion! Thanks in advance!

Response:

[JH] We welcome your posts and others like them. The comment threads should, in an ideal world, function as a classroom where honest questions are asked and honest answers are given.

If you have a question and cannot find an appropriate thread to post it on, feel free to post it on one of our "open threads", i.e., the Weekly Digest or the Weekly News Roundup.

"... if anyone wants to claim that CO2 levels in the upper atmosphere are causing ground level increases in temperature, there would need to be much greater warming there, which is demonstrably not happening"

I would point out that, first, "skeptics" greatly exaggerate the expected amount of warming due to CO2 (and other anthropogenic factors); second, scientists have always expected short-term factors to cause fluctuations in the rate of temperature increase, so that over short periods it may be much less than the long-term expectation, or even negative; and third, a very powerful short-term factor is known to be depressing the rate of temperature increase, and in fact accounts for nearly all of the discrepancy between the observed temperature increase and that predicted by the models.

With regard to the exaggeration of the expected rate of warming, this is typically done with graphs such as this one by Murry Salby:

Such graphs may be created in ignorance, by simply scaling the (smoothed) CO2 and temperature curves to a common standard deviation. That scaling ignores the fact that annual fluctuations in CO2 concentration are too small to significantly affect global temperature, so on short timescales variations in CO2 are not expected to match variations in temperature. As a result, the scaling does not reproduce the expected temperature increase. The mismatch is exaggerated further if annual (or worse, monthly) temperature variations are matched against a smoothed CO2 curve, as done above.
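The effect of common-standard-deviation scaling can be demonstrated with synthetic data. All numbers below are invented for illustration, including the assumed "physical" warming per ppm; the point is only that std-ratio scaling asserts a far larger temperature response to CO2 than the physical relationship built into the very same data:

```python
import math

PHYSICAL_SLOPE = 0.01  # assumed C of warming per ppm CO2 -- an invented illustrative value

# ten years of data: smooth CO2 rise, temperature = physical response + "weather"
co2 = [340 + 2.0 * y for y in range(10)]                  # ~2 ppm/yr
weather = [0.12, -0.20, 0.05, 0.25, -0.15, 0.30, -0.10, 0.18, -0.22, 0.08]
temp = [PHYSICAL_SLOPE * (c - co2[0]) + w for c, w in zip(co2, weather)]

def std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

# Plotting the two curves scaled to a common standard deviation implicitly
# asserts this C-per-ppm ratio:
implied_slope = std(temp) / std(co2)

print(f"physical slope assumed : {PHYSICAL_SLOPE:.4f} C/ppm")
print(f"std-ratio scaling slope: {implied_slope:.4f} C/ppm")
```

On a short window the weather noise dominates the temperature variance while the CO2 curve stays smooth, so the implied slope comes out several times larger than the relationship actually present in the data, and the "prediction" it represents is correspondingly inflated.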

In some instances, however, including Salby's, the exaggeration must be deliberate. That is because the same authors show graphs of the expected temperature increase as a function of CO2 concentration over the coming century. So when they show the short-term "prediction", they must know they have changed the relative scales of CO2 concentration and temperature, thereby misstating the increase in temperature predicted from the increase in CO2. This can be seen by comparing the prediction at the scale used for centennial projections with that used over the last few decades:

As can be seen, with an honest scaling, recent temperature increases have closely matched those predicted by the IPCC Fourth Assessment Report (AR4). To avoid any misunderstanding, the "prediction" above is produced simply by applying the same scale ratio between CO2 and temperature as is used in Salby's centennial comparison, and it slightly understates the actual AR4 short-term prediction, which was for 0.2 C per decade. Salby's graphic manipulations are discussed in more detail here.

With regard to the expected short-term fluctuations, these can be seen in the temperature record up to 2005 (by which time the short-term temperature trend met or exceeded IPCC predictions). Within that period there are nonetheless many short intervals with zero, or even slightly negative, trends:

Climate scientists are not utter fools. They can read temperature graphs as easily as anyone else, and could see that a prediction of temperature increase without faltering (i.e., monotonic increase) was already falsified; they would not be so foolish as to frame their predictions in a fashion that was already falsified. The assumption that a low short-term trend somehow falsifies AGW, however, tacitly assumes that they were such fools, for it assumes that a "hiatus" not greatly different from the "hiatuses" that occurred before the predictions were made will falsify AGW.

Nor do climate scientists predict short-term fluctuations merely to save appearances. In the CMIP5 model intercomparison for IPCC Assessment Report 5, using the scenario with the strongest warming (RCP 8.5), over 8% of 15-year trends with a start year of 1970 or later and an end year of 2015 or earlier are smaller than the HadCRUT4 trend since 1998. Indeed, 4.48% are negative, and there is one 15-year trend of negative 0.15 C per decade. The prediction of short-term fluctuations and hiatuses comes from the models themselves; they are not ad hoc afterthoughts. They do not typically show up in statements about predictions because they represent short-term chaotic factors that have no influence on the long-term trend. Consequently they are not coordinated in time across the models of the ensemble, and so do not appear in the ensemble mean. Indeed, the lowest 15-year trend in the ensemble mean over that period is more than twice the HadCRUT4 trend since 1998 — but that is because the ensemble mean has averaged out the short-term unforced fluctuations, while the real world has not. Climate scientists know this, indeed insist upon it. So-called "skeptics", however, blur the distinction whenever possible.
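The point that unforced wiggles survive in individual runs but vanish from the ensemble mean can be sketched with a toy synthetic ensemble (the trend and noise magnitudes are invented, and this is not CMIP5 output): every "run" shares the same forced trend, yet its own unforced variability produces 15-year windows with very low trends, while the ensemble mean does not.

```python
import random

random.seed(42)

FORCED_TREND = 0.02   # C/yr, assumed forced signal
NOISE_SD = 0.25       # C, assumed ENSO-scale interannual variability
N_RUNS, N_YEARS, WINDOW = 20, 50, 15

def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

runs = [[FORCED_TREND * y + random.gauss(0, NOISE_SD) for y in range(N_YEARS)]
        for _ in range(N_RUNS)]
ensemble_mean = [sum(r[y] for r in runs) / N_RUNS for y in range(N_YEARS)]

def window_trends(series):
    xs = list(range(WINDOW))
    return [ols_slope(xs, series[s:s + WINDOW])
            for s in range(N_YEARS - WINDOW + 1)]

run_trends = [t for r in runs for t in window_trends(r)]
mean_trends = window_trends(ensemble_mean)

print(f"lowest {WINDOW}-yr trend, any single run : {min(run_trends):+.3f} C/yr")
print(f"lowest {WINDOW}-yr trend, ensemble mean  : {min(mean_trends):+.3f} C/yr")
```

Because the OLS slope is linear in the data, the trend of the ensemble mean over any window equals the average of the runs' trends over that window, so the mean can never show a lower extreme than the runs themselves do.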

In this regard, it is worth noting that the peak temperature of the 1997/98 El Nino was 0.6 C warmer than the La Nina years on either side of it (see first graph). That is the equivalent of three decades of global warming. With ENSO introducing such large fluctuations into short-term temperatures, it is impossible for trends of less than thirty years to consistently stay near the long-term trend.

Finally, there is, in fact, a known short term non-forced factor that accounts for nearly all of the discrepancy between predicted and observed short term trends. Given the comment in my last paragraph, it will come as no surprise that it is ENSO:

Very clearly, ENSO has had a strong negative influence on the temperature trend since 2006, and arguably since 1998; ENSO is the major driver of the recent temperature "hiatus". In fact, three very clear lines of evidence demonstrate that beyond reasonable doubt, IMO: first, if you examine trends only between years in equivalent ENSO states, all trends are nearly the same and close to those predicted by the models; second, if you adjust temperatures for known ENSO states, the result is a trend close to that predicted by the models; and third, if you constrain a model to match the historical ENSO pattern, it reproduces the historical temperature record. I discuss these points in detail here.

It should be noted that ENSO is not the only known factor that helps explain the reduced recent trends. Tropical volcanism is known to have increased the aerosol load, a factor that would induce cooling were it not for the countervailing warming trend. We are also experiencing unusually weak solar activity, which should have the same effect. Other factors may also contribute, and scientists are examining these and others to determine their relative importance. But ENSO is the main factor, without doubt. It is a sufficiently strong factor that, if CO2 forcing did not have a significant warming effect, we should be experiencing a significantly negative short-term trend in global temperatures, not the weakly positive trend we currently observe.

As I understand the question, the questioner expects the upper atmosphere to warm. This is a misunderstanding of how the greenhouse effect works: the stratosphere is in fact predicted to cool. You might like to look at the Science of Doom article on why, though there are a number of other resources. However, most deniers are looking for a convenient excuse to ignore science and are unlikely to put in the effort needed to understand this.


Sorry, hope this question isn't too far off topic. Today I've read several articles about the moss found on Baffin Island. They said they had determined it was 44,000 years old by carbon dating. I was just told that when the moss died it would have stopped accumulating C-14, and that there is no way to differentiate between moss that simply died and moss that died because it was covered by ice. So the researchers can say they have "old" dead moss, but they can't say anything about ice unless they can show that without the ice the moss would have (mysteriously) come back to life and started accumulating C-14 again.

I have no idea of how carbon dating works on moss so I thought I'd enquire.

I'm not quite sure I understand your question, Stranger. Does it help to consider that neither dead moss nor moss that is in a complete metabolic stasis will replenish C14 from the atmosphere? As well, it's not likely that dead moss exposed to weathering would endure for 44,000 years, not even in a very cold, dry climate.

Maybe if you could point to where you read about this. Was it something to do with this research?

Stranger @23, I assume you are referring to this research, a popular account of which is given here. The same research is detailed more briefly in the link provided by Doug Bostrom.

Given that, all that is required for moss not to accumulate new C-14 from the atmosphere is that it be either dead or unexposed to the atmosphere (i.e., covered in ice). There is a slight twist to that: if new, living moss grows in the same location as old, dead moss, it will potentially contaminate the age signal, making the older moss appear younger. If you look at this image from the popular report, you will see by the green colour the areas in which new plants are growing:

In fact, looking closely, it appears that the new plants are grass rather than moss. That is important for two reasons. First, it makes it easier to distinguish between the old moss and the new growth, avoiding cross-contamination. Second, it is my understanding that moss will grow in conditions too cold for grass, suggesting the possibility that Baffin Island is now warmer than when the moss was growing, not just warmer than when it was ice covered. Of course, that latter point depends critically on the species of moss involved, and as the original research is behind a paywall, I cannot confirm it.

Although C-14 does not distinguish between a merely dead plant and one covered by ice, the conclusion AGW "skeptics" apparently want to draw from that does not follow. If a soil is not ice covered and is above freezing for at least part of the year, new plants will grow in it, and those new plants will then show up with relatively young carbon-dated ages. Thus, for the "skeptic" scenario to make sense, the ice would have had to melt away without temperatures ever rising above freezing. Quite apart from the conundrum in that, temperatures in the area are definitely above freezing for at least part of the year now, so even in that scenario temperatures are still warmer than they have been, likely in the last 110,000 years.

I should note that I am no expert in Arctic biota, so there may be some contrived way in which temperatures were briefly warmer in the interval without showing up in the biology alone. However, the only time since the end of the last ice age in which temperatures may have been warmer is shown, by the younger C-14 ages across much of the transect, to have also been a period when the ice cap was growing. (Those younger ages also illustrate my point in the preceding paragraph.)

In the linked descriptions of the research, Dr. Miller, the scientist doing the work, describes the ice caps on Baffin Island as retreating at a rate of 1-2 meters a year. They collect all their samples from the very edge of the ice, less than a meter away. It follows that if the ice retreats a meter a year and you collect your sample within 0.5 meters of the ice, the sample was covered by ice last year. Plants grow very slowly in these conditions, so contamination by fresh growth can be eliminated by careful sampling.

It is simple enough for the scientists to collect data several years in a row to confirm that the samples were ice covered in the past: next year you can collect from areas that you documented were ice covered this year, and for exceptional samples you can return to the site the next year to confirm the previous result. Tom's picture is of the scientists' camp, not the collection site. In this area the ice does not flow over the ground, so old samples have not been disturbed (in most locations flowing glaciers destroy plant samples; that did not happen here).

Denier claims that Dr. Miller does not know the samples were ice covered in the past are easily shown to be ignorant of the facts. In general, you should question claims that professionals make simple errors that are easily checked; scientists ensure that their claims are substantiated by the data. Is it likely that Dr. Miller would spend months camping on Baffin Island, thinking about the data every day, and make a mistake that could be recognized in one minute by an untrained eye? It is much more likely that the deniers have not read the paper and are making up the problems.

The samples are reported variously as older than 40,000 years and as 120,000 years old. This is because it is not possible to date samples older than about 40,000 years using carbon-14: by then essentially all the C-14 is gone (some scientists claim they can date to 50,000 years). The climate 40,000 years ago was much colder than today, so the most plausible age is 120,000 years, the time of the last interglacial. (Note that older ages cannot be excluded — the samples could be much older, but not younger, than 120,000 years.)
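The ~40,000-50,000 year ceiling is just the arithmetic of exponential decay. A short sketch, using the C-14 half-life of 5,730 years:

```python
import math

HALF_LIFE = 5730.0  # years, the C-14 half-life

def c14_fraction(age_years):
    """Fraction of the original C-14 remaining after age_years."""
    return 0.5 ** (age_years / HALF_LIFE)

def c14_age(fraction):
    """Age implied by a measured remaining fraction (inverse of the above)."""
    return -HALF_LIFE * math.log2(fraction)

for age in (5_000, 40_000, 50_000, 120_000):
    print(f"{age:>7} yr: {c14_fraction(age) * 100:.4f}% of C-14 remains")
```

By 40,000 years (about seven half-lives) under 1% of the original C-14 remains, close to the method's blank/contamination floor, so anything older simply reads as "greater than ~40,000 years"; a 120,000-year sample is indistinguishable from an even older one.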

Most of the samples are only about 5,000 years old (easily carbon dated). It is known from other work that it has been getting cooler on Baffin Island for the past 5,000 years (until the start of AGW). This work indicates that climate models have substantially underestimated the warming in this area. That suggests that it will warm more in the future than currently predicted. Those crazy alarmist models, underpredicting warming again!

Thanks Michael and Tom. The article linked was much more informative than the ones I read at Yahoo and other news outlets. The information on the C-14 and the moss was very helpful.

The claim now is that the Arctic has cooled about 1 degree over 5,000 years (with several shorter warm periods between). But the Baffin Island weather station doesn't even show warming from 1970 to present so how can this study claim otherwise?

All the global temperature records show strong warming in the Arctic. This yearly GISS report shows about 1.5 C increase over baseline for the Baffin area in 2011. 2012 is similar. Perhaps you could cite your record that states no warming from 1970 to the present on Baffin Island? I found a reference on WUWT that claims that. Since the sea ice has collapsed in that area the past decade, it is clear that it has been warmer than it used to be. Perhaps WUWT has been cherry picking their data stations again.

Michael, thanks for your comments. I was engaged with someone who most likely saw the WUWT postings. I've hardly ever gone there except when someone at this site links to it.

It seems like when new issues arise I find myself unable to expand on them with someone from the skeptic side who seems to have more experience than I have. The good thing is that when they confound me I'm able to ask you guys to help me see it in the proper context.

So as new issues arise that I'm not familiar with, I'll make my occasional request for help.

Stranger @288, if you click on Baffin Island on the map at the GISS Station Data page, you will see a list of nearby stations, and clicking on one of those gives a graph of the annual temperature data for that station. One example is this, from Frobisher Bay (extreme south of Baffin Island), which definitely shows a trend. So do Clyde, Coral Harbour, Hall Beach, Fort Chimo, and Godthab Nuuk, all selected because they have a complete or almost complete record to 2013 and are within approximately 800 km of Frobisher Bay. Many other stations in the area are seriously incomplete, and show apparently no trend. Given that all the stations with nearly complete records, and the GISS temperature index for the region, show positive trends, the apparent lack of trend in the incomplete records is likely an artifact of the time period covered or of missing data. However, it is quite possible that you could be shown a temperature series for a station on Baffin Island with little or no trend. There are cherry-picking opportunities everywhere ;)
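A small illustration (invented station values, not GISS data) of why incomplete records can mislead: the same underlying series gives a clear warming trend when complete, and a flat or negative "trend" when the surviving years happen to be warm early ones and cool late ones.

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

years = list(range(20))
# warming of 0.03 C/yr plus an alternating year-to-year wiggle
temps = [0.03 * y + (0.25 if y % 2 == 0 else -0.25) for y in years]

full_trend = ols_slope(years, temps)

# an "incomplete record": only warm early years and cool late years survive
kept = [0, 2, 4, 6, 8, 11, 13, 15, 17, 19]
sparse_trend = ols_slope(kept, [temps[y] for y in kept])

print(f"complete record : {full_trend:+.3f} C/yr")
print(f"sparse record   : {sparse_trend:+.3f} C/yr")
```

This is why a trend from a heavily gap-ridden station should always be judged against its coverage, and against nearby complete records.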

Thank you for posting the link to GISS. I wanted to look up that data and did not know the right page.

On WUWT they post only the Clyde data, and they cut off the data after 2009 so that it looks flatter. I noticed that you linked all the relevant stations and kept all the data points. Why don't you also link only to the data that appears to support your position best? ;)

I post rarely now because I think your responses are better than mine. Keep up the good work.

Michael, actually, the WUWT graph only extends to 2008, while the data available from GISS extends to 2010. That is probably only because they copied a graph from a 2009 post on the World Climate Report. The greater contribution to the flatness of the WUWT graph is that they show only summer temperatures, which have not risen as fast as annual temperatures. That is probably because excess summer energy goes into melting Arctic ice rather than into raising temperatures, as can be seen in this plot of seasonal variation in Arctic temperatures based on DMI data:

Thank you for the compliment, by the way. However, I also enjoy your posts and would like to see more of them.

The origin of the Wattsupian graph dates back to the 2009 Axford et al. paper on the work at Lake CF8. At the time Wattsupia simply re-posted the World Climate Report nonsense. There is a debunk from 2009 by Dale Husband noting that a quick look at the GISTEMP data shows the graph is bogus: "That's not even remotely the same chart!" I am presently unable to expand on this as the GISTEMP station page isn't working for me.

MA Rodger @294, the difference between the GISS graph and the WUWT/World Climate Report graph is that the former shows annual values, while the latter shows summer (JJA) means only. I have downloaded the data and plotted the graphs myself, and can verify that the WUWT graph is the summer data, as claimed. The claim that the flatness of the summer graph means there has been no warming, however, is simply false. Arctic summer temperatures near ice fields are very stable because of the large amount of ice in the vicinity: the ice sits at the freezing point, which prevents air temperatures from rising more than about 3 C above zero. Excess energy that would have raised temperatures in the absence of ice melts the ice instead.
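A synthetic seasonal cycle shows the mechanism (all values invented): when summer temperatures are capped near the melting point by nearby ice, the JJA mean barely moves even as the annual mean climbs.

```python
def annual_mean(months):
    return sum(months) / 12

def jja_mean(months):
    return sum(months[5:8]) / 3    # June, July, August (0-indexed months)

def year(winter_base, summer_cap=3.0):
    """Crude seasonal cycle: cold winters, summers capped near the melt point."""
    shape = (0.0, 0.05, 0.2, 0.45, 0.75, 0.95, 1.0, 0.95, 0.7, 0.4, 0.15, 0.02)
    cycle = [winter_base + (summer_cap - winter_base) * s for s in shape]
    return [min(t, summer_cap) for t in cycle]

early = year(winter_base=-30.0)   # a typical year from a colder period
late = year(winter_base=-24.0)    # winters 6 C warmer; summers still capped

print(f"annual mean change: {annual_mean(late) - annual_mean(early):+.2f} C")
print(f"JJA mean change   : {jja_mean(late) - jja_mean(early):+.2f} C")
```

In this sketch the annual mean rises by over 3 C while the summer mean changes by only a fraction of a degree, which is exactly why a summer-only series from an ice-capped region says little about annual warming.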

I should note that WUWT and the World Climate Report each correctly identify their graph as showing summer temperatures, so there can be no suggestion that they have passed off summer temperatures as annual temperatures. They have merely misinterpreted the significance of the stable summer temperatures over time.

Hoping this is the right argument/thread for this - as a rank amateur in climate science who logs time in the trenches of conservative message boards trying to engage skeptics on the fence in rational conversation, I continue to run up against folks holding up Mr. S. Goddard's accusations of NASA data fudging courtesy of Dr. Hansen. (I'm not referring to the WUWT-related sea ice debacle, this is brand-new 2013 stuff.)

I won't sully these pages with a link to the nonsense; Google "Goddard Hansen tampering" or just go to his site and you'll find it easily enough. The problem is that I'm unfortunately not statistician enough to refute these charges on a technical level, so I'm stuck with supplying admittedly ad hominem responses pointing out his abysmal track record and the like.

Could someone provide a concise answer as to where specifically Goddard has been misreading or miscalculating temperature data in the last year? More specifically, is the adjustment of the temperature record valid and due to the dropout of poorly sited or obsolete stations, or has there been no adjustment and SG is just misrepresenting the data via improperly constructed graphs, or both?

I've found plenty of explanations for the 2012 debacle, but little re his latest round of histrionics, and would love to know if this is the same deal or some new angle he's adopted. Thanks in advance for any thoughts.

rivetz - It's the same deal, the same nonsense, as has been pushed by climate denialists and conspiracy theorists before. They are claims that corrections for known errors and biases are somehow the result of an Evil Plot, usually tied to Agenda 21 or some other fever dream. And they consist of ignoring known biases, cherrypicking single stations, and other errors.

The adjustments made to the US temperature data, most of which consist of time of observation (TOBS) bias correction, are clearly and publicly documented (see here and here) - the TOBS issue has been a known bias for over 150 years, and its correction entirely justified if you want accurate data.

Here are the various adjustments, along with a link to their public description, documentation, and reasoning. Note the similarity to the data adjustments Goddard and others claim as sinister and underhanded, while congratulating themselves for the discovery! If the people making such claims ever cared to read the documentation they would realize their mistakes.

As I've said in previous discussions on the topic, looking at temperatures without correcting for these known and well quantified errors is as foolish as looking for stars without cleaning the dust and oil off your telescope lenses. The results will, in both cases, contain errors.

Bar the 2012 post with Gobhard spouting off about US temperatures, discussed @297, I note that in a later post from March 2013 it is the GISS global temperature that the cretin is getting in a huff over. As examining the ravings of a lunatic is not my favorite pastime, I cannot guarantee that Gobhard is totally out of his tree, but I see no evidence to suggest that he is in it.

GISS do not "tamper"; they make documented amendments. The only significant amendment since February 2012 is the change from using HadOISST to ERSST in January 2013. When I plot the data-copy Goddard shows against the latest GISS data, I get the same 1880-2012 graph as Sato did for ERSST versus Had+OISST. It is not greatly dissimilar to the plot Gobhard presents for 1910-2011.

If there are other posts by the cretin, I would hazard a guess that they are similarly well grounded on another planet (probably the planet Wattsupia).

Except that stations and practices vary worldwide and over time. Furthermore, you are claiming subjectivity when the algorithms and methods are published. You asked for consistency — what is your objection to the BEST methods? What about all the other issues (of which station changes are the major one)? What about the urban heat effect (UHE)? Urbanization is not a uniform process.

Oh, and you can use the raw data — see above. It's just that no serious researcher would try to draw conclusions from data series that are not comparable. Note also the agreement with proxies such as sea level.