All eight international models surveyed by the Bureau of Meteorology now suggest tropical Pacific Ocean temperatures are likely to remain ENSO-neutral for the second half of 2017.

It’s a tough business to be in when your CO2-driven “climate change” can’t get there unless a natural ENSO event pushes the temperature up for you. Meanwhile, Justin Gillis at the New York Times claims “Earth Scorching CO2” is higher than ever, while temperatures stabilize at a value about the same as in 1980 (0.27°C), according to NASA’s GISTEMP:

Land-ocean temperature index, 1880 to present, with base period 1951-1980. The solid black line is the global annual mean and the solid red line is the five-year lowess smooth. The blue uncertainty bars (95% confidence limit) account only for incomplete spatial sampling. [This is an update of Fig. 9a in Hansen et al. (2010).]

Looks like that big El Niño driven peak in GISTEMP of 0.98°C for 2016 could be coming down in 2017 if the current values hold and ENSO neutral conditions remain.

Just look at the sea surface temperatures, there’s not a lot of warm water:

We live in interesting times.

NOTE: I expected some complaints about comparing GISS and NCEP graphs, and there were plenty. I did it to illustrate a point.

Which one is the RIGHT temperature anomaly? Anomalies are all products of their baselines, and baselines are a choice of the publisher.

If NASA GISS is to be believed as the world’s most cited source for global temperature, then 0.27C is correct for 1980.

Unfortunately, they have been living in the past, and refuse to update their baseline. UAH did it, RSS did it, NOAA/NCEP did it….why not GISS? The answer: Gavin Schmidt.

This is one advantage of absolute temperatures: they don’t suffer from the choices researchers make for the anomaly baseline. There’s no musical chairs with anomaly baselines.

It would be nice if GISS got with the program and used the 1981-2010 baseline like other data sets, or if all the climate data publishers agreed on a single baseline. For example, here’s a BEST plot with all the baselines adjusted to NASA GISS’s 1951-1980.
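Adjusting several series to one baseline, as in that BEST plot, is mechanical: subtract each series' own mean over the chosen reference window. A minimal sketch, using an invented toy anomaly series (the function name and data are illustrative, not from any real dataset):

```python
# Re-referencing an anomaly series to a new baseline period: subtract the
# series' own mean over the new reference window. The data below are an
# invented toy series, not any real dataset.

def rebaseline(years, anoms, new_start, new_end):
    """Re-express anomalies relative to the mean over new_start..new_end."""
    ref = [a for y, a in zip(years, anoms) if new_start <= y <= new_end]
    offset = sum(ref) / len(ref)
    return [a - offset for a in anoms]

years = list(range(1951, 2011))
anoms = [0.01 * (y - 1965) for y in years]      # toy anomaly series

# Shift the series onto a 1981-2010 reference:
shifted = rebaseline(years, anoms, 1981, 2010)
```

The shifted series averages zero over 1981-2010 by construction; every value moves by the same constant, so trends and year-to-year differences are untouched.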

The general public really doesn’t care or know about anomaly baselines – they just want to know what today’s temperature is relative to the past.

Standardizing on one baseline for all climate data sets would make public consumption easier. I’m sure that call for standardization of baselines will fall on deaf ears at NASA GISS, where their lead researcher, Gavin Schmidt, is so petty he can’t even appear on the same TV set with another researcher.

Let the squawking begin.

UPDATE: To further illustrate the point about different baselines giving different results to the public, here is the HadCRUT4 data, which uses a 1961 to 1990 baseline:

According to their data (which is mostly the same raw GHCN data used by NASA GISS, plus some others), their 1980 temperature anomaly was somewhere around 0.1°C (see the green lines’ intersection), whereas GISS says 0.27°C.

Again, why can’t climate science do a simple thing like standardize on a baseline period?

I’ve added a caveat in the title to reflect this: (depending on who you ask)

Yes I realize that, and expected complaints, in fact I counted on them…but here’s the deal: which one is the RIGHT temperature? hmm? Anomalies are all products of their baselines, and baselines are a choice of the publisher.


For example, many parts of the world have a poor record of absolute temperature due to factors such as elevation diversity, while the measurement points in such regions tend to be particular spots, such as cities with weather records in mountainous areas. So much of a given region of the world is better known in terms of its anomaly with respect to some baseline than in terms of its absolute temperature. And surface temperature can’t be measured as accurately by satellites as that of regions of the atmosphere, because surface emissivity varies more than that of the satellite-measured parts of the atmosphere, and for other reasons besides.

Yes, right. Such a basic mistake. 2017 temperatures are so far similar to 2015 and clearly above average for the 2000-2016 period. Temperatures are going down, but remain elevated after the 2015-16 El Niño. Significantly warmer than 1980 temperatures.

Eye-balling, not calculating (so ballpark, not precise, but close enough for guvmint work), it looks as if the anomaly was ~0.05 for the first half of 1980 (and for the year) vs. ~0.25 for 2017 to date, with the trend heading down. Is ~0.2°C of warming in 37 years significant, especially measured in the year following a super El Nino? IOW, 0.54°C per century.

Anomaly variability in the satellite record is about 1.3°C, so two percent of that, 0.026°C, would be within measurement error. Where I live, annual variation can run from -37 to +47 degrees. Yesterday’s was from 7 to 36.

The temperature difference between 1980 and 2017 is way above 2% and therefore significant. The warming is real.

As the earth is a spheroid with an inconstant orientation towards the sun, local conditions can vary hugely. Average changes for the entire surface are however much much smaller. The difference between the Last Glacial Maximum and the Holocene is believed to have been of only 4-5 degrees C.

Whether or not we are due for cooling, you should remember that cooling has been a consistently failed prediction for the past 15 years.

And the same can be said about warming. What we are seeing is the downwind impact of tropical water vapor, and as the ocean warm pools move from place to place over the decade(s), it alters the land surface temperature average on decadal time frames.
Watch those SST anomalies. Large areas of the US are having cooler-than-average temps. It’s 63.5°F here, and the days are already getting shorter.

If you are comparing the average temperature for two different years, the interval between them is irrelevant.

According to UAH the temperature difference between 1988 and 2017 is significant. The question is what that means. 1988 was a strong El Niño year, and 2017 is not. And saying that there has been cooling since 1988 would be an obvious mistake.

I haven’t predicted that it would start 15 years ago. Maybe someone else did. But 15 years ago, I expected that the late 20th century warming cycle would end about 30 years after it started.

The earliest I would have expected cooling was 2006, i.e. 30 years after the dramatic PDO flip of 1977, which caused whatever warming actually has been observed since then. It’s too soon to say that cooling hasn’t indeed begun, since the super El Nino may have masked the signal.

No, not really; it’s done the opposite way on a regular basis in newspaper articles, i.e. today’s temp is x degrees warmer than on some date in the past.

It’s like the song, “Does anybody really know what time it is?” With anomalies and differing baselines presented to the public, does anybody really know what the temperature is, or was? Some standard for baselines in the climate community would solve this. That’s my point.

That depends on what you are talking about, the measured average temperature at a location, or the calculated average temperature at that particular spot?

Because if it’s measurements, remember that weather is a tiny part of climate. And when weather is controlled by long-period features, such as decadal ocean cycles, weather averages into climate, and that “climate” is going to have a decadal cycle.

Agreed, I can do that. But the public can’t... and they shouldn’t have to; that’s the point of making this comparison: to show how different baselines give people different answers. Why can’t the climate community standardize on a single baseline for public presentations? If they did, there would never be arguments over comparing one graph with another.

Picking any arbitrary year is a mistake, as with 1980. But Warmunistas point to 2015 and 2016 despite their being super El Nino years.

The trend in UAH since 1979, despite our just coming off a super El Nino, is barely positive. Hence, I agree with you that no worrisome warming is occurring. However, I’d go further and say that no significant warming has happened, since the past warming cycle is in no way different from prior warming cycles within prior centennial-scale warmings, such as the Medieval WP, where “significant” means attributable to human activity, outside of natural variability. IOW, there is no human signal in the data.

They weren’t carefully picked, as in cherry picked. One is this year and the other is 30 years ago, a traditional climate interval. The trend during that interval would also be about flat, although I haven’t computed it.

I can’t center a 30-year interval on 2017. I can only end one then.

The interval centered on 1988, ie 1974-2003, may well prove warmer than 2003-32.

you should compute the 30-year averages centered on 1988 and 2017 and compare those.
=========
That only works if the temps are in terms of a common baseline, such as Celsius. As soon as you use anomalies based on the past, where you are also adjusting the past, your results are not to be trusted.
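The suggestion above, comparing 30-year averages centered on 1988 and 2017, can be sketched directly; as the thread notes, a window centered on 2017 needs data through 2031, so it cannot be computed yet. Data and function names here are invented for illustration:

```python
def centered_mean(series, center, half=15):
    """Mean of the 30 years centered on `center` (center-15 .. center+14).
    `series` maps year -> annual value; returns None if any year is missing."""
    years = range(center - half, center + half)
    if not all(y in series for y in years):
        return None   # can't center a full 30-year window here yet
    return sum(series[y] for y in years) / (2 * half)

# Toy annual anomalies, 1950-2017:
series = {y: 0.01 * (y - 1950) for y in range(1950, 2018)}

m1988 = centered_mean(series, 1988)   # uses 1973-2002: available
m2017 = centered_mean(series, 2017)   # needs data through 2031: None
```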


Whether the US continues subsidizing windmills matters very much. His district has more of them than any other in the country. A lot of his campaign contributors have gotten rich off them. If he votes against subsidies, so might other members.

So the fact that he is convinced that earth is liable to cool again, as it did twice before since the end of the LIA, is significant for US “climate” policy.

Some climatologists have better records predicting than others, by relying upon climate history, rather than tarot. Or extrapolation of the latest trend indefinitely.

The one thing we can be sure about climate is that it will change. Hence, cooling is certain sooner or later. I could be wrong about when it will start, or has started. I won’t be wrong that climate will cool, if not in a decade, then a century or millennium or three.

Reality has certainly not matched the bias of Hansen, who in 1988 predicted runaway global warming. Here we are 30 years later, and UAH finds global temperature cooler than in 1988, despite CO2 rising at or above his highest estimated rate.

Yes, the vast majority, if not all the warming observed since the depths of the LIA, c. AD 1690, has been natural. Obviously same goes for prior such fluctuations in the Holocene and previous interglacials.

Earth has not yet enjoyed in the Current Warming Period, since c. AD 1850, a single 50-year interval as warm as at least three such during the Medieval WP, and more during the Roman, Minoan and Egyptian WPs, to say nothing of the long Holocene Climatic Optimum.

Until and unless an important human signal be teased out of genuine climatic data, then there is no reason to worry, let alone dismantle the global economic system which feeds, clothes, houses, educates, warms, cools and provides work and play for going on eight billion people.

Javier: “The temperature difference between 1980 and 2017 is way above 2% and therefore significant. The warming is real.”

The warming is real but here is a different perspective on it.

Here’s Hansen’s 1999 U.S. surface temperature chart:

On the Hansen chart you can see that 1998 is the hottest point on the chart with the exception of the 1930’s, which were 0.5°C hotter than 1998; this also makes the 1930’s 0.4°C hotter than 2016.

So, yes there has been warming from 1980 to 2017. 1980 is one of the colder years on record, so it’s no wonder we have warming. But we had even more warming from 1910 to 1940, and the 2017 temperatures are about 0.7C cooler than the 1930’s.

The warming from 1910 to 1940 is considered to be natural variability, and there is no reason to assume the similar warming from 1980 to today is not also natural variability.

If you want to argue that the Hansen 1999 U.S. temperature profile does not represent the Global temperature profile, I would say you are wrong. All unmodified charts from around the world resemble the Hansen U.S. chart temperature profile. They definitely do not resemble the bogus, bastardized Hockey Stick charts the Alarmists have dishonestly created (see Climategate) to sell the CAGW narrative.

According to the Hansen U.S. 1999 chart, in combination with the UAH satellite chart, which Gabro reproduced above, we have been in a temperature downtrend since the 1930’s, and we will have to go at least 0.7C higher from here to break the downward trendline.

Here everyone goes again – averaging averages of intensive variables. This is like comparing the average telephone numbers in two different telephone books to 3 places of decimals. Mathematically perfectly correct, logically worthless.

Average temperatures give no information on the amount of energy in the lower atmosphere. They cannot even indicate whether the amount of energy in the lower atmosphere is going up or down.

1951-1980 for GISS
=========
GISS adjusts the past every day or two, which makes a mockery of using the past as a baseline. In effect, the baseline is constantly changing. One might as well redefine the value of zero every couple of days: hopefully in the right direction when paying bills, and the opposite when depositing the paycheck.

It will never happen as long as “consensus climate science” rules; how else could the gatekeepers keep cooking the books with constant adjustments, in order to keep up the scare and keep the funding taps flowing?

The caveat is not good enough. The headline is still grossly misleading. Perhaps this would be better:
“Why can’t climate scientists standardize on a common baseline?”
That makes your point right up front.

Meanwhile, Justin Gillis at the New York Times claims “Earth Scorching CO2” is higher than ever while temperatures stabilize at a value that is the same as about 1980 (0.27°C), according to NASA’s GISTEMP

Not so, I’m afraid: Both graphs show anomalies, not temperatures. Both have different base periods.

GISTEMP at 0.27°C and NCEP CFSR/CFSv2 at 0.264°C are an apple and an orange. The 1980 GISTEMP value of 0.27°C means 0.27°C warmer than its 1951-1980 average. The NCEP CFSR/CFSv2 value of 0.264°C means 0.264°C warmer than its 1981-2010 average.

Just on the face of it, this makes no sense. First you say that temperatures are leveling off at 0.26 C degrees above 1981-2010 levels using NCEP data. Then you say that temps are leveling off near 1980 levels. This is completely contradictory. How can temps be both above 1981-2010 levels but at 1980 levels? Reading on, it becomes clear how you do it: You are comparing the NCEP anomaly with the GISS anomaly. This, of course, is totally illegitimate. The GISS anomaly is based on a 1951-1980 baseline while the NCEP anomaly is based on a 1981-2010 baseline. You are comparing apples and oranges. Were you unaware of your mistake or were you hoping we wouldn’t notice?

Curiosity question, I realize that the WMO and friends consider 30 years to be suitable for anomaly baselines, but does anybody have a longer term baseline, i.e. 60+ years? And if not, is there a “how-to” on how to create one?

I agree “that we do not know the average temperature of the earth with an acceptable degree of precision.” Doesn’t this also mean that we can’t calculate a global temperature anomaly with an acceptable degree of precision?

Regarding climatology. Rather than comparing the change in temperature anomalies relative to a climatology baseline, shouldn’t we be looking at changes in the climatology baseline over time?

Doesn’t this also mean that we can’t calculate a global temperature anomaly with an acceptable degree of precision?

Theoretically, no: calculating the anomaly only requires the station data and a consistent methodology. In principle you can calculate the difference between two unknowns with great precision, because you are measuring the changes, not the absolutes. Obviously I am not going to defend the methodology behind the temperature data; I am just talking in general.
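That point, that an anomaly needs only station data plus a consistent method, can be sketched: build a monthly climatology per station over the baseline years, then subtract it from each observation. A constant instrument bias cancels in the subtraction. The records and values below are invented:

```python
from collections import defaultdict

def station_anomalies(records, base_start, base_end):
    """records: list of (year, month, temp_C) for one station.
    Returns (year, month, anomaly) relative to a monthly climatology
    computed over base_start..base_end. A constant instrument bias
    cancels out; only changes relative to the baseline survive."""
    by_month = defaultdict(list)
    for y, m, t in records:
        if base_start <= y <= base_end:
            by_month[m].append(t)
    clim = {m: sum(v) / len(v) for m, v in by_month.items()}
    return [(y, m, t - clim[m]) for y, m, t in records if m in clim]

# Invented station records: two baseline Januaries and one later one.
recs = [(1951, 1, 10.0), (1952, 1, 10.2), (1981, 1, 11.0)]
anoms = station_anomalies(recs, 1951, 1980)
```

Note that the absolute level of the station never appears in the output; shift every reading by the same amount and the anomalies are identical.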

Regarding climatology, there is an absurd reductionism of climate to temperature changes, and of these to anomaly changes. It doesn’t make much sense, but that is the way humans are. We need a number to anchor our thoughts, even if it is totally meaningless, like the famous two degrees that we should avoid, which is a totally made-up number.

The real problem is that we do not know the average temperature of the earth with an acceptable degree of precision.
================
It is worse than that. Much, much worse. There is no international standard for calculating the average temperature, and depending upon the algorithm you choose, it is possible to show the earth on average is both warming and/or cooling at the same time.

In other words, global warming may be as much a product of the method used to calculate global average temperature as anything else.

Regarding climatology. Rather than comparing the change in temperature anomalies relative to a climatology baseline, shouldn’t we be looking at changes in the climatology baseline over time?
=====================
Indeed, and what effect does this have on the 1951-1980 baseline? It looks like the baseline itself is a moving target:

The issue is that whatever warming has occurred since CO2 took off after WWII, which is slight, is well within normal bounds, so there is no detectable human footprint. What is detectable is the observation that CO2 released by human activity has greened the earth, especially in arid regions. Warming effect, not so much.

So you’re comparing anomalies from different datasets and you’re doing so, according to your addendum, “to illustrate a point”. But the point of your original article was completely clear: That 2017 temperatures are just about the same as 1980 temperatures. It’s right there in the title of your article. But now you seem to be abandoning that point completely.

Please clarify the point of your article. Are you maintaining that there has been little if any warming over the past 37 years? If so, your method is completely illegitimate and your conclusion is misguided. If your point is something about the desirability of standardizing baselines, then why didn’t you say so in the original article? And if your point is now the latter, that certainly is ironic, because you specifically did not standardize the baselines of the two datasets in the comparison that you made.

(By the way, baselines don’t need to be “updated”. They are what they are, and once established, they don’t change.)

The crooked “data” gatekeepers keep changing past temperatures, so while the baseline years remain the same, the alleged temperatures for those years do change. All the time. To whatever the book cookers want and need them to be to maintain their mendacious “series”.

Your comment reminded me of this: about 50 years ago (+/- 10), a meteorology textbook was published in which the temperatures were converted from C to F. The conversion process was also applied, erroneously, to the latitude and longitude markings on the maps. The degree symbol (°) is not often such a problem, but percentages and nominal, ordinal, interval, and ratio scales are.

Javier, as you say, predicting is difficult, especially about the future. The same holds for this notion by the IPCC that we are going to see a 2-6°C increase in temp. Like you said, they’ve been predicting warming for 15 years, and save for a few El Niños, it just hasn’t happened.

Correct. Contrary to models and climastrologists’ predictions since at least 1988, temperature was flat between the two super El Ninos, despite the steady rise in CO2. The only possible “warming” remotely plausibly attributable to humans is the accidental fact that the super El Nino peak of 2016 was ever so slightly warmer than that of 1998. So, essentially, no warming for 17 years. And it’s looking as if the zero trend will continue, if there isn’t cooling in the offing.

I agree that future temperature predictions are as likely to fail whether they are towards the warming side as towards the cooling side. And the more extreme the predictions are, the more likely they will fail.

In my opinion it is very likely that future temperatures will fluctuate, showing some warming or some cooling at times.

Leif and others spoke well on the problem of using comparisons that have different baselines. So my question to those commenters is: “What is the best baseline?” “There is none” is a possible answer.

Right. So why DON’T THEY? (the publishers of the data and choosers of the baseline)

Imagine if monthly sunspot counts were expressed in anomalies using different baselines by different researchers. NOAA might use the 20th century average as a baseline, NASA might use the last 10 solar cycles as a baseline, SIDC might use a baseline from 1800 to 2000.

It would get pretty ridiculous pretty quickly in reporting “sunspot anomalies” to the public. Just look how much trouble you had getting the recent correction to sunspot numbers accepted; now you have people referring to old and new sets.
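To make the thought experiment concrete: the same hypothetical monthly count, reported against three different baselines, yields three different headline numbers. Every value here is invented purely for illustration; only the arithmetic is the point.

```python
# One hypothetical monthly sunspot number expressed as an "anomaly"
# against three made-up baseline means. All numbers are invented.

count = 25.0                      # this month's (hypothetical) sunspot number
baselines = {
    "20th-century mean": 60.0,    # invented baseline values
    "last 10 cycles":    75.0,
    "1800-2000 mean":    50.0,
}

anomalies = {name: count - base for name, base in baselines.items()}
# One observation, three different "sunspot anomalies".
```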

Imagine if monthly sunspot counts were expressed in anomalies using different baselines by different researchers.
This is actually what has happened, with different observers defining different baselines. The difficulty is in ‘harmonizing’ the baselines, and the difficulty in that is the assumption that the definition of solar activity [e.g. “what is a group?”] does not vary with time [which it actually does, in poorly known ways]. The analogous problem with global temperature is the changing distribution and density of stations, as well as the changing environment [less rural].

All that said, there is really no excuse for not using the same baseline [recognizing the uncertainty when going back in time].

There are offsets that you can add or subtract to compensate for baseline changes. It varies a little from month to month and supplier to supplier. I posted a table here. GISS land/ocean for January is reasonably typical. It goes:

1951-80 0
1961-90 0.102
1971-00 0.242
1981-10 0.428

IOW, if you compare NCEP with an 81-10 baseline to GISS without conversion, you are adding in a 0.428 difference.
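Those offsets can be applied mechanically. A small sketch using the January values quoted above (the function name is mine; the table values are the ones posted):

```python
# offsets[b]: the quoted January offset of baseline b relative to 1951-80.
offsets = {
    "1951-80": 0.0,
    "1961-90": 0.102,
    "1971-00": 0.242,
    "1981-10": 0.428,
}

def convert(anom, from_base, to_base):
    """Re-express an anomaly from one baseline to another.
    anom_vs_to = anom_vs_from + offset(from) - offset(to)."""
    return anom + offsets[from_base] - offsets[to_base]

# A 1980 GISS anomaly of 0.27 C (1951-80 base) on a 1981-2010 base:
giss_1980_on_8110 = convert(0.27, "1951-80", "1981-10")
```

With this conversion, the 0.27°C figure becomes roughly -0.16°C on a 1981-2010 footing, which is why comparing the two anomalies without converting builds in the 0.428 difference.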

There is no need for, or even justification for, a moving baseline. The concern expressed by alarmists is that industrialization is responsible for increased release of CO2 and consequent warming. Thus, the appropriate baseline for the argument is any pre-industrialization 30-year period.

A moving baseline is much like the infamous shell game. It becomes difficult to know which shell the pea is under. That is, it is difficult to make comparisons and predictions when different baselines are used routinely. But, maybe that is the intent!

Finally, the calculated baseline is actually an artificial construct. The global standard deviation is quite large for a 30-year period. Therefore, currently, an average is calculated and it is assigned a precision that is essentially the same as the annual/monthly average that is used to compute anomalies. If the data analysis isn’t going to be rigorous, one might as well pick some arbitrary number such as 14.000 deg C and compute anomalies from that and drop the pretenses. That is, say, “Assuming a pre-industrial global average temperature of exactly 14 deg C, it is defined as the baseline temperature for computing anomalies.” It won’t make much difference in the reported results, but one can then easily make comparisons between reports from different times and authors without needing to know what baseline was used for the particular report.

What I have said above is still valid if anomalies are computed at the station level instead of at the global level. If one adds (or subtracts) a constant to the baseline, a computed anomaly will differ only by that constant. Any subsequent operations, such as calculating trend lines or converting back to actual temperatures, will not be affected by the constant.
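That claim, shifting the baseline by a constant shifts every anomaly by that constant and leaves the trend untouched, can be checked directly on toy numbers:

```python
# Verifying that a constant baseline shift leaves the trend unchanged.
# The temperature series below is invented toy data.

def trend(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

years = list(range(1980, 2018))
temps = [14.0 + 0.015 * (y - 1980) for y in years]   # toy absolute temps

base_a = 14.0          # one choice of baseline value
base_b = 14.0 + 0.3    # the same baseline shifted by a constant

anoms_a = [t - base_a for t in temps]
anoms_b = [t - base_b for t in temps]
# Same slope either way; the anomalies differ by exactly the constant.
```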

The powers of ten alternation breaks down at 300 Ka, since that was also during a glaciation, so was colder than now. However 3 Ma was warmer than now.

Climate constantly changes, has usually been warmer during the Phanerozoic Eon (last 540 million years), and nothing the least bit out of the ordinary or worrisome is happening as a result of a fourth molecule of vital plant nutrient (photosynthesis fuel) in 10,000 dry air molecules.

LOL, due to methodology changes, the “global” temperature for (hmmm, I think it was 1997) as stated by NASA was over 1°C warmer than current temperatures. You have to work it out from their anomaly and baseline, since they don’t state the temperature directly. It’s amusing to see how much wiggle room there actually is in the processing.

13.3 Defining the Baseline
A baseline period is needed to define the observed climate with which climate change information is usually combined to create a climate scenario. When using climate model results for scenario construction, the baseline also serves as the reference period from which the modelled future change in climate is calculated.

13.3.1 The Choice of Baseline Period
The choice of baseline period has often been governed by availability of the required climate data. Examples of adopted baseline periods include 1931 to 1960 (Leemans and Solomon, 1993), 1951 to 1980 (Smith and Pitts, 1997), or 1961 to 1990 (Kittel et al., 1995; Hulme et al., 1999b).

There may be climatological reasons to favour earlier baseline periods over later ones (IPCC, 1994). For example, later periods such as 1961 to 1990 are likely to have larger anthropogenic trends embedded in the climate data, especially the effects of sulphate aerosols over regions such as Europe and eastern USA (Karl et al., 1996). In this regard, the “ideal” baseline period would be in the 19th century when anthropogenic effects on global climate were negligible. Most impact assessments, however, seek to determine the effect of climate change with respect to “the present”, and therefore recent baseline periods such as 1961 to 1990 are usually favoured. A further attraction of using 1961 to 1990 is that observational climate data coverage and availability are generally better for this period compared to earlier ones.

Whatever baseline period is adopted, it is important to acknowledge that there are differences between climatological averages based on century-long data (e.g., Legates and Wilmott, 1990) and those based on sub-periods. Moreover, different 30-year periods have been shown to exhibit differences in regional annual mean baseline temperature and precipitation of up to ±0.5ºC and ±15% respectively (Hulme and New, 1997; Visser et al., 2000; see also Chapter 2).

13.3.2 The Adequacy of Baseline Climatological Data
The adequacy of observed baseline climate data sets can only be evaluated in the context of particular climate scenario construction methods, since different methods have differing demands for baseline climate data.

There are an increasing number of gridded global (e.g., Leemans and Cramer, 1991; New et al., 1999) and national (e.g., Kittel et al., 1995, 1997; Frei and Schär, 1998) climate data sets describing mean surface climate, although few describe inter-annual climate variability (see Kittel et al., 1997; Xie and Arkin, 1997; New et al., 2000). Differences between alternative gridded regional or global baseline climate data sets may be large, and these may induce non-trivial differences in climate change impacts that use climate scenarios incorporating different baseline climate data (e.g., Arnell, 1999). These differences may be as much a function of different interpolation methods and station densities as they are of errors in observations or the result of sampling different time periods (Hulme and New, 1997; New, 1999). A common problem that some methods endeavour to correct is systematic biases in station locations (e.g., towards low elevation sites). The adequacy of different techniques (e.g., Daly et al., 1994; Hutchinson, 1995; New et al., 1999) to interpolate station records under conditions of varying station density and/or different topography has not been systematically evaluated.

The discussion comparing anomalies with different baselines is germane, however the big take away here, to me, got lost in this matter. I have been talking (here) about the lack of warm water back when the 2015-16 El Nino was rising. I was a bit surprised (and suspicious) of how high it got, but then, not surprised at how fast it dropped (record decline) in 2017. I suggested to Tisdale at the time that he, or someone more knowledgeable than I CALCULATE the bounds of likely temperature from the thin warm layer to see where the temps are likely to go (remember the experts were thinking a continuing or a repeat El Nino was in the offing).

I’ve also more recently taken up this lack of warm water, and the disconnect between surface temperatures and the (restricted) ENSO zone, as an indicator of where temperatures are heading. Cold water is not so much welling up at the eastern end of the equator as slanting down into the equatorial zone from cold blobs in the NH and SH, with an impotent W. Pacific Warm Pool: cool at both ends. Also, the rather quick change from persistent warm blobs in the El Nino development period to persistent cold blobs in the temperate zones since then made it look as if world temperatures were going to follow these cooling effects and ignore the equatorial band. These too might have been calculated by the specialists to give a forecast (as I did a year ago by eyeball).

I checked to see if I was typing in Russian, because my entreaties didn’t seem to interest a generally argumentative, sharp crowd here at WUWT. I was even beginning to think that only Ben Santer and Michael Mann saw my offerings, noting that the latter at least tweets instantly after a controversial blog post appears on WUWT, so he’s watching. Hey, I’m only a geologist and engineer, so what do I know? Anyway, thanks to Ryan Maue, my analyses have been belatedly independently corroborated. Maybe now some PhD nouveau climate student will do the calculations.

Anthony, I understand the point you are trying to make, but only after reading your comments further down. The article was not clear, I honestly thought your vacation was causing you to go senile.

Was there something in particular that triggered this post? I have never seen the general public comparing anomalies, most of the alarmists use only their favored temperature set, GISS, not specific anomaly values, except in reference to the ‘scary’ 2 degree threshold, but that at least does have a quasi-standard baseline.

As to your point, yes, it would be nice to see a standard baseline used in the climate science community. Either that or attempts at absolute temperatures, but that is a much more difficult, if not impossible task.

FERDBERPLE
there is no international standard to calculate the average temperature, and depending upon the algorithm you choose, it is possible to show the earth on average is both warming and/or cooling at the same time.
HENRY
you are right. My results show it is already cooling, if you look at a globally balanced sample of maxima or minima, in degrees C/annum.

Forrest Gardener
Because it fits 100%? To define a function you need at least 4 points.
Admittedly, the period I looked at (1973-2015) is approximately half a Gleissberg cycle.
So, the whole wave is a sine wave, wavelength 87 years.
Still, I think for the half GB the parabola proves my point, i.e. all warming and cooling is natural.
Man made warming either does not exist or is too small to even make a dent in what nature gives us,
hence my correlation of 100% for the speed of warming/cooling.
they had that already more or less figured out before they started with the CO2 nonsense:

Humans contribute to local climate change, but it doesn’t add up to enough to noticeably affect the global average. Unless your only data come from urban heat islands which used to be cool, dark forest.

1. Place the thermometer 5 feet above the ground (+/- 1 ft.). A thermometer too low will pick up excess heat from the ground and a thermometer too high will likely have too cool of a temperature due to natural cooling aloft. 5 ft. is just right.

2. The thermometer must be placed in the shade. If you put your thermometer in full sunlight, direct radiation from the sun is going to result in a temperature higher than what it should be.

3. Have good air flow for your thermometer. This keeps air circulating around the thermometer, maintaining a balance with the surrounding environment. Therefore, it is important to make sure there are no obstructions blocking your thermometer such as trees or buildings. The more open, the better.

4. Place the thermometer over a grassy or dirt surface. Concrete and pavement absorb and retain much more heat than grass. That is why cities are often warmer compared to suburbs. It is recommended to keep the thermometer at least 100 ft. from any paved or concrete surfaces to prevent an erroneously high temperature measurement.

5. Keep the thermometer covered. When precipitation falls, you do not want your thermometer to get wet as that could permanently damage it. A Stevenson screen is a great place to store thermometers and other instruments as they provide cover as well as adequate ventilation. If you can’t get one, a simple solar radiation shield is adequate.
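For what it’s worth, the five rules above amount to a simple checklist, and as a toy illustration they can be encoded in a few lines of Python. Everything here (the function name, the parameters) is invented for the sketch, not taken from any real siting standard:

```python
def siting_ok(height_ft, shaded, obstructed, dist_to_pavement_ft, covered):
    """Rough encoding of the five siting rules:
    5 ft height (+/- 1 ft), shade, open airflow, >=100 ft from
    pavement, and covered against precipitation."""
    return bool(
        4 <= height_ft <= 6        # rule 1: 5 ft +/- 1 ft
        and shaded                  # rule 2: out of direct sun
        and not obstructed          # rule 3: good airflow, no blockages
        and dist_to_pavement_ft >= 100  # rule 4: away from concrete/pavement
        and covered                 # rule 5: Stevenson screen or shield
    )
```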

… and wondering how a person places a thermometer in the shade and out of direct sunlight AND keeps it in the open and positioned for good air circulation at the same time?

I still see the perfect temperature-measuring spot as elusive. How does anybody agree that they are even measuring the same thing consistently from one location on Earth to the next ?

Forget whether experience agrees with a scientific guess or not, Mr. Feynman (yeah, THAT Feynman) — just tell me how the heck do I even take a blasted temperature measurement to help confirm a scientific guess or not !

Measuring in precisely the same way, at the same times of day at exactly the same spot will tell you about the changes there over time. But how many such good locations are there? And how can they represent the planet?

Hence, satellites and balloons are the only even remotely good enough data for scientific purposes. Floating temperature gauges in the oceans move around too much. Even balloons aren’t sampling exactly the same volumes of the atmosphere.

Measuring in precisely the same way, at the same times of day at exactly the same spot will tell you about the changes there over time. But how many such good locations are there?

Probably not many. But as long as they in general do the same bad thing, the day to day change will be as good as it can get.
This is one of the reasons I follow the day-to-day change at a single station: yesterday’s Tmin subtracted from today’s Tmin as the difference, the intra-day change Tmax-Tmin, and the average of Tmin and Tmax.
You can look at all three and get a good idea of what’s happening where we have surface stations, and how they change from day to day over long periods https://i2.wp.com/micro6500blog.files.wordpress.com/2017/01/1980-series1.png
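The three per-station series described above take only a few lines of Python. This is a rough sketch with an invented function name and data layout (a chronological list of (Tmin, Tmax) pairs), not the commenter’s actual code:

```python
def station_series(daily):
    """daily: list of (tmin, tmax) tuples in chronological order.
    Returns the three series described above."""
    # day-to-day change: today's Tmin minus yesterday's Tmin
    tmin_change = [daily[i][0] - daily[i - 1][0] for i in range(1, len(daily))]
    # intra-day change: Tmax - Tmin
    intraday = [tmax - tmin for tmin, tmax in daily]
    # daily mean: average of Tmin and Tmax
    daily_mean = [(tmin + tmax) / 2 for tmin, tmax in daily]
    return tmin_change, intraday, daily_mean
```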

NASA (the National Adjusting the Science Association) CAN’T use 1981-2010 as a baseline period, because they haven’t yet decided how hot or cold that period was. Cooling the past can’t begin while you’re still using part of that period in your hottest evah decade claims.

The low solar activity cooling regime is in place & will continue until SC25 starts, 2019-20.

Using my tried and true F10.7-TSI-SST model, which is based on an intimate perfect working knowledge of the temporal relationship between these three measures, I’ve estimated Had3SST will drop a further 0.27C to 2020 from the Dec 2016 Had3SST value of 0.447C, to 0.178C (+.05/-.1), putting the end of solar cycle value (if it were to end at the year-end) of somewhere between just above to just below the cycle SST yearly starting value of 0.141C in 2008.

CFSv2 2m will drop along with it, possibly going to “zero” or negative by 2019-20.

This estimate will be updated in January 2018 after the 2017 numbers are all in. It could go lower.

The very best way to keep up on ocean warming/cooling is the daily 7-day SSTa change:

Using my tried and true F10.7-TSI-SST model, which is based on an intimate perfect working knowledge of the temporal relationship between these three measures
Only in religion and cults does one find ‘intimate perfect working knowledge’. Not in science…

The perfect knowledge of these relationships and principles is based on science, research, & work.

Cult? At least I’m not involved in the continual promotion of your cult of personality. ;)

You have no idea what I’ve done, so the attitude towards me is wholly unwarranted.

A few days ago you were trying to tell me there is no such F10.7-TSI-SST relationship. Instead you presented the rather false and pathetic formula, as you did here today, that you use to describe the sun-earth temperature relationship to support your theory that the sun only warms by 0.1C. The formula this mathematical theory is based on has no predictive power through a solar cycle, unlike my work.

My work is powerfully predictive and had immediate application to my knowing the timing of the 2015-16 ENSO. Three years ago I said on an ENSO blog post here,

“…Climate change comes from solar changes. Solar activity ramped up late last year and has since tapered off. The “recharge” of the oceans from that rampup is now dissipating. If and only if there is another spike in solar activity this year will there be an El Nino.

I said that then because I knew at that time that all the ENSOs at the top of the solar cycle occurred above the 120 sfu level AND are delayed due to the temporal relationship of F10.7cm to TSI.

The green arrow in the image below signifies the time when I first plotted and realized this relationship.

The rest of my model involved first smoothing the daily data and finding more confirmation of the temporal relationship, and creating a very nice low error TSI predictor based on the SWPC monthly SSN/F10.7cm forecast. I used the SWPC 2016 F10.7 forecast in late 2015 to forecast the Had3SST change over the year based on a second empirically derived regression formula of TSI-SST. I was less than 3% off.

The SC24 TSI rise & maximum drove the whole 0.6C 2008-2016 SST spike, and is now cooling us off.

The solar cycle influence for SC24 was 0.6C, not 0.1C. You’re way off Leif. My stuff works.

I thought you would know this. The WMO defines climatological periods of reference ending in the last complete decade, while for comparison purposes also establishes a fixed reference period. Then it is up to research institutions to adhere to this standard or not.

4.8.1 Periods of calculation

Under the current WMO Technical Regulations, recognising the realities of a changing climate, climatological standard normals are defined as averages of climatological data computed for successive 30-year periods, updated every ten years, with the first year of the period ending in 1, and the last year, with 0. That is, consecutive 30-year normals include: 1 January 1981 to 31 December 2010, 1 January 1991 to 31 December 2020, and so forth. Countries should calculate climatological standard normals as soon as possible after the end of the decennium. Climatological standard normals periods should be adhered to whenever possible in order to allow for a uniform basis for international comparison.

Also under the WMO Technical Regulations, recognising the need for a stable base for long-term climate change and variability assessment, a fixed reference period is defined as the 30-year period 1 January 1961 to 31 December 1990. This period should be used to compare climate change and variability across all countries relative to this standard reference period. It will remain fixed in perpetuity, or until there is a sound scientific reason to change it.

So now you know. Some are using the climatological standard normal 1981-2010, that will be changed to 1991-2020 in less than 3 years, while others are using the fixed reference period 1961-1990. Both are doing it in accordance with WMO guidelines.
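The quoted WMO rule for standard normals (successive 30-year periods starting in a year ending in 1 and finishing in a year ending in 0) is mechanical enough to compute. A small sketch, with a function name of my own invention:

```python
def standard_normal_period(year):
    """Most recent *complete* 30-year climatological standard normal
    as of 1 January of `year`. Periods run e.g. 1981-2010, 1991-2020;
    a period ending in year E is only complete after 31 Dec of E."""
    end = ((year - 1) // 10) * 10   # last year ending in 0 before `year`
    return end - 29, end
```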

“that will be changed to 1991-2020 in less than 3 years”
There are simple practical considerations that cause people to make different choices, which the WMO recognises here. GISS uses 1951-1980 because that was the most recent 30 year period when they started. And they now have a large base of published numerical data. If they changed, then whenever you saw GISS data you’d have to look up which anomaly they were using.

On the other hand, the anomaly base temperature is supposed to be your best estimator of present data. That ensures that you don’t have to worry about whether the sample for a given month includes the right balance of warm and cold places, because you have subtracted out the difference. If there has been significant drift since the base period, this works less well. That is why, when HadCRUT (using 1961-90) included more Arctic stations in V4, the anomaly (and trend) went up.

Then there is the issue that the anomaly base is first used for individual stations, so you have some work to do if they don’t have data in the period. That is another reason why 1961-90 is popular with people using GHCN; they can include more stations. It’s also why they don’t use more than 30 years. However, the issue is manageable – BEST uses the same least squares system I do, which doesn’t need a fixed period at station level. But you do need eventually to decide on a reference period.

So NOAA uses 1961-90 for its initial average calculation. Once the data has been aggregated, you can convert to any other base just by subtracting the average for that period. So NOAA converts to 20th century for a lot of reporting.
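The conversion Nick describes is just an offset subtraction: compute the series mean over the new base period and subtract it from every value. A minimal Python sketch, with invented names and made-up data:

```python
def rebase(anomalies, new_base):
    """Shift an anomaly series onto a new base period.

    anomalies: {year: anomaly} dict
    new_base: (start_year, end_year) inclusive
    """
    start, end = new_base
    # mean anomaly over the new base period becomes the zero point
    base_vals = [v for y, v in anomalies.items() if start <= y <= end]
    offset = sum(base_vals) / len(base_vals)
    return {y: v - offset for y, v in anomalies.items()}
```

Relative differences between years are unchanged by this; only the zero line moves, which is the whole point of the baseline discussion.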

Nick, I know you are being serious with this post, but try to separate the effect of a different time period for calculating the base for the anomalies from the effect of changing the stations used for calculating the base.

You can’t talk about changing the stations as justification for NOAA using a particular time period.

The 30 year periods of which you write were decided on back in the mid-1930s.
That was a time of printed materials, before computers (as we know them), and before the United Nations. Moreover, the issue of interest was more about meteorology, and less about using scary climate to justify “social justice” — or whatever it is that is going on.

As he who is about to go on vacation says “why can’t climate science do a simple thing like standardize on a baseline period?”
But I would add, it doesn’t have to be 30 years.
I vote for 73, it is a nice prime number.

Rather than a step up, as during the switch from 1971-2000 to 1981-2010, Warmunistas are liable to get a nasty surprise after the switch to 1991-2020. For that matter, if the rest of this year, 2018 and 2019 drop back under the present baseline, as after the last super El Nino, the dreaded Pause will be on again.

Not sure you’re familiar with the real world, but the surprise I have in mind is that the next 30 years are liable to be cooler than the past 30 years, as demonstrated by the past 300, 3000, 30,000, 300,000, 3 million, 30 million, 300 million and three billion years of climate history.

And, yes, that presentation makes it look as though there is very little change, but sometimes even small changes can have important consequences. Going from -0.5C to 0.5C – just one degree – means that my ice lolly drops off the stick and turns to slush. Probably ruins my trousers. This is the sort of horror that Margaret Thatcher was warning us about, and why we have to hang everyone who breathes out.

This post should be deleted and rewritten. It should be arguing that temps are increasing at similar rates when looking 37 years apart. In particular, looking at two ENSO-neutral years we have:

1980 +0.27°C warmer than the 1951-1980 baseline

2017 +0.26°C warmer compared to 1981-2010 baseline

And the later baseline was also warmer.

I hate to say it but that sounds like an argument that temperatures are continuing to increase with a near linear trend. That’s not what most WattsUpWithThat readers, including myself, were expecting 3 years ago.

The ±0.46 C lower limit of uncertainty shows that between 1880 and 2000, the trend in averaged global surface air temperature anomalies is statistically indistinguishable from 0 C at the 1σ level. One cannot, therefore, avoid the conclusion that it is presently impossible to quantify the warming trend in global climate since 1880.

“I’ve added a caveat in the title to reflect this: (depending on who you ask)”

It doesn’t reflect it very well. There isn’t any data source saying that the temperature now is similar to 1980. If you don’t take account of base differences, you can mix them to get any number you like. GISS then with NCEP now, it’s a small difference. If you compare GISS now with NCEP then, the difference is huge.

“a 0.2 degree C difference in 37 years huge”
ERA-Interim, using a 1981-2010 base, says the 1980 anomaly was negative. GISS says the 2016 average was about 1°C. That shows the boost you can get with base period fiddling.

If you use the same base, as I did above, for UAH, you get a gain of only 0.3 degree C for 2017 over the same part of 1980, for which year the average monthly anomaly was indeed very slightly negative.

Gabro,
You’re missing my point, which is the difference you can make using different base periods in a comparison. As Joe Bastardi points out below, a reasonable estimate of warming from 1980 to now is about 0.45C. As I point out above, a reasonable estimate of the difference between a 1951-80 and a 1981-2010 base is 0.42. So if you use the 1951-80 base for 1980, and the 1981-2010 base for now, you get something like 0.45-0.42 = not much change. But if you do it the other way, you get 0.45+0.42 = 0.87C, which would be a large change in 37 years.
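The mixed-baseline arithmetic above is trivial but worth spelling out, using the toy numbers from the comment (the 0.45 and 0.42 figures are the comment’s estimates, not measured values):

```python
warming = 0.45       # estimated true change 1980 -> now, same baseline throughout
base_offset = 0.42   # estimated 1981-2010 mean minus 1951-80 mean

# Mixing baselines in opposite directions skews the apparent change:
understated = warming - base_offset  # old base for 1980, new base for now
overstated = warming + base_offset   # new base for 1980, old base for now
```

Same underlying data, apparent change anywhere from near zero to nearly 0.9°C, which is exactly why comparing anomalies across different baselines is meaningless.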

Mine is that in UAH, 2017 has been only 0.3 degrees C warmer than the first five months of 1980. Hardly anything to get worked up about.

Of course you’re right that the 1950s, ’60s and ’70s were cooler, so you’d get a bigger difference with an earlier baseline. That is, until you go back to the 1920s, ’30s and ’40s.

If you compared the past 30 years with the 30 years around 1690, we would indeed be importantly warmer. But the same interval would be cooler than the 30 years from 1181 to 1210, or many other such intervals in the Medieval WP.

Simon, you cannot convert one baseline to another unless the baselines are calculated the same way.

If you want to say that your baseline was the temperature as measured on your thermometer between 1980 and 2010 but you are now updating your baseline to be the temperature as measured on your thermometer between 1985 and 2015 then you can legitimately do so providing you are suitably cautious about things which have changed over that period (for example housing which has encroached on your thermometer).

If you want to say that your baseline was the temperature as measured on your thermometer between 1980 and 2010 but you are now updating your baseline to be the temperature as measured on thermometers 100km away (for example because your old thermometer broke) then you are unwittingly playing the pea and thimble trick.

Try not to be so condescending until you understand what you are talking about.

I much prefer to see the actual temperatures, rather than contemplate anomalies to the second decimal.

NOAA reported: “The combined global average temperature over the land and ocean surfaces for April 2017 was 0.90°C (1.62°F) above the 20th century average of 13.7°C (56.7°F)—the second highest April temperature since global records began in 1880, trailing 2016 by 0.17°C (0.31°F) and ahead of 2010 by 0.07°C (0.13°F).”
Mmm, so approaching a searing 15°C, boy the world is steaming /sarc.

And for May, NOAA reported: “Averaged as a whole, the global land and ocean temperature for May 2017 was 0.83°C (1.49°F) above the 20th century average of 14.8°C (58.6°F) and the third highest May in the 138-year global records, behind 2016 (+0.89°C / +1.60°F) and 2015 (+0.86°C / +1.55°F). ”
But to keep up their hottest “evar” meme, they also noted: ‘May 2017 was characterized by warmer- to much-warmer-than-average conditions across most of the world’s land and ocean surfaces. However, near- to cooler-than-average conditions were present across the eastern half of the contiguous U.S., eastern Europe, western and north-central Russia, as well as parts of the northern and southern Atlantic Ocean, northern and southern Pacific Ocean, and the tropical Indian Ocean.”
“The global land-only surface temperature was the coolest May land temperature since 2011 and the seventh highest since global records began in 1880 at 1.15°C (2.07°F) above the 20th century average of 11.1°C (52.0°F).” Wow, the land temperature averaged only 12°C in May, no wonder I have the heater on.

Robber, I agree with your concerns. If NOAA and the other keepers of the sacred data had set out to obfuscate they could not have done a better job. If they had set out to inform they could hardly have done a worse job. Layer upon layer of fudging and gobbledegook cannot hide the fact that the global average temperature is one of the stupidest ideas of our time.

What would be useful is to describe what the climate was at each particular location (eg my home town of Cairns) 30 years ago, describe what the climate is now and identify and analyse the differences. Then and only then some form of aggregation of data would be informative.

“Wow, the land temperature averaged only 12°C in May, no wonder I have the heater on.”
That is why it is very foolish of NOAA to quote such an average in these reports (they explain here, S 7). Not only is it almost impossible to measure properly, but it is meaningless. Most places were nowhere near 12°C in May. But if you say that the average anomaly was 1° (it wasn’t, in May), then there is a reasonable chance that it was warmer than usual where you are, whatever usual is.

No Nick. The use of anomalies is one of the major steps in throwing away useful information. That is doubly so when people try to compare anomalies in one location with anomalies in another location or worse try to combine all of the anomalies into a single global anomaly.

You are so misguided on this subject that it is hard to believe you are serious. Why troll?

Gavin says that he can accurately and precisely take Earth’s surface temperature with 50 stations. If they all have continuous records since 1880, stations uniformly maintained during that time, with no switch to electronic thermometers, and cover the land surface uniformly, to include elevation differences and are all in areas which have been rural all that time, away from pavement, then, yes, maybe, theoretically, but for the land only. But there are few if any such sites.

Antarctica’s fringes only started getting measured continuously in the 20th century, and at the South Pole only since 1957, IIRC. There has been no warming at the SP, which is precisely where it should be most evident, according to AGW theory.

Oops, wrong Paul in previous comment. Anyway, the woodfortrees blog has a lengthy discussion of baselines, including BEST’s, and shows all the different data set trends baseline shifted to the UAH baseline. http://www.woodfortrees.org/notes

Anth@ny has clarified that the point he was making was precisely about the problems with using T anomalies from different baseline periods. He maybe should have made the point more explicitly rather than relying upon sub rosa satire.

We calculate absolute temperature.
You can choose any base period you like.
For display purposes and consistency with the most widely used series (hadcrut) we use 1951-80.
U can play with periods to your hearts content if you first do the real series in absolute as we do.
It’s not an issue worth discussing.
Waste of time.
Not scientifically relevant.

Oh Mossshhher the once Great and Powerful, when you declare that an issue is not worth discussing, a waste of time and not scientifically relevant, one thing is almost certainly true. In this case you seek to deflect from the inhomogeneities that have been inflicted on ALL of the temperature “records”.

Your conduct suggests that whatever you are against is almost certainly right on target. Yes, you have utterly destroyed your credibility.

If you use the baselines of different time periods, you get nonsense. We should be more patient waiting for the climate to change. Climate is slow, like evolution. It’s no use to watch evolution from one month to the next.

Well there certainly has been lots of squawking, both here and elsewhere. Some of the elsewhere squawking was downright mean, but that’s OK; it’s part of the rigid mindset those people have: they fear change, and they fear different ways of looking at things.

Joe Bastardi and Leif Svalgaard (among others) point out that the comparison is ridiculous, and that is indeed the point; It is. But how is the public supposed to be able to interpret these differing surface temperature presentations done on different baselines? The answer is: they can’t, unless there is a standardized baseline.

All the squawking elsewhere has shown me that there’s really no interest in the climate science community in coming up with a standardized baseline for public surface temperature presentations; they’d rather defend their own work and declare “Watts is an idiot for saying so”. It’s pretty typical, and exactly what I expected. They don’t like change, and they really don’t like anyone else suggesting that the way they present temperature data might not be in the best public interest, because after all, they are saviors of the planet and who are you to question us.

As Steve Mosher likes to say: Too Funny!

On the plus side, I’ve been given a marvelous gift from all this squawking. Watch this space after I return. For now I’m closing the thread, as I’m heading out, but there will be a new post sometime after I return on this very topic.