At the time of posting, 4 of the 5 monthly data sets were available through February 2017. HadCRUT4 is available through January 2017. The NCEP/NCAR re-analysis data runs 2 days behind real-time, so real daily data from February 28th through March 29th is used, and the 30th is assumed to have the same anomaly as the 29th.

The projections for the surface data sets (HadCRUT4, GISS, and NCEI) are derived from the previous 12 months of NCEP/NCAR anomalies compared to the same months’ anomalies for each of the 3 surface data sets. For each of the 3 data sets, the slope() value (“m”) and the intercept() value (“b”) are calculated. Using the current month’s NCEP/NCAR anomaly as “x”, the numbers are plugged into the high-school linear equation “y = mx + b” and “y” is the answer for the specific data set. The entire globe’s NCEP/NCAR data is used for HadCRUT, GISS, and NCEI.
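The arithmetic above can be sketched in a few lines of Python. The 12 anomaly pairs below are purely hypothetical placeholders, not actual NCEP/NCAR or HadCRUT4 values:

```python
# A minimal sketch of the surface-data-set projection described above.
# slope_intercept() is the equivalent of the spreadsheet slope() and
# intercept() functions; all anomaly values are hypothetical.

def slope_intercept(xs, ys):
    """Ordinary least-squares fit: returns (m, b) for y = mx + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - m * mean_x
    return m, b

# Previous 12 months: NCEP/NCAR global anomalies vs. HadCRUT4 anomalies
ncep    = [0.42, 0.51, 0.63, 0.88, 0.95, 0.81, 0.70, 0.66, 0.72, 0.60, 0.55, 0.58]
hadcrut = [0.47, 0.55, 0.68, 0.93, 1.01, 0.86, 0.74, 0.71, 0.77, 0.65, 0.59, 0.63]

m, b = slope_intercept(ncep, hadcrut)

# Plug the current month's NCEP/NCAR anomaly in as "x"; y = mx + b is
# the projection for that surface data set.
x = 0.61
projection = m * x + b
print(f"projected HadCRUT4 anomaly: {projection:.3f}")
```

The same fit is repeated independently for GISS and NCEI against the full-globe NCEP/NCAR series.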

For RSS and UAH, subsets of global data are used, to match the latitude coverage provided by the satellites. I had originally used the same linear-extrapolation algorithm for the satellite data sets as for the surface sets, but the projections for RSS and UAH have been consistently too high over the past few months. Given that the March NCEP/NCAR UAH and RSS subset anomalies are almost identical to February’s, while the linear extrapolations are noticeably higher, something had to change. I looked into the problem and changed the projection method for the satellite data sets.

The Problem

The next 2 graphs show recent UAH and RSS actual anomalies versus the respective NCEP/NCAR anomalies for the portions of the globe covered by each of the satellite data sets. The RSS actual (green) anomaly tracked slightly above its NCEP/NCAR equivalent through November 2016 (2016.917), but from December 2016 (2017.000) onwards it has been slightly below. Similarly, the UAH actual anomaly tracked its NCEP/NCAR equivalent closely through November 2016, but fell and remained below it from December 2016 onwards. I’m not speculating why this has happened, but merely acknowledging the observed numbers.

https://wattsupwiththat.files.wordpress.com/2017/03/rss1.png

https://wattsupwiththat.files.wordpress.com/2017/03/uah.png

The Response

Since the switchover in December, the actual satellite anomalies have paralleled their NCEP/NCAR subsets, but with a different offset than before. So I take the difference (current month minus previous month) in the NCEP/NCAR subset anomalies, multiply by the slope(), and add to the previous month’s anomaly. E.g. for the March 2017 UAH projection…

subtract the February 2017 UAH subset NCEP/NCAR anomaly from the March number

multiply the result of step 1 by the slope of Mar-2016-to-Feb-2017 UAH anomalies versus the NCEP/NCAR subset anomalies for the UAH satellite coverage area.

add the result of step 2 to the observed February UAH anomaly, giving the March projected anomaly
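The three steps above can be sketched as follows; all anomaly values and the slope are hypothetical placeholders, not actual UAH or NCEP/NCAR numbers:

```python
# A sketch of the revised satellite projection: previous month's observed
# anomaly plus slope * (month-over-month change in the NCEP/NCAR subset).

def project_satellite(prev_actual, prev_ncep, curr_ncep, slope):
    """Project this month's satellite anomaly from last month's observed
    anomaly and the change in the matching NCEP/NCAR subset anomaly."""
    return prev_actual + slope * (curr_ncep - prev_ncep)

# Step 1: March minus February NCEP/NCAR UAH-subset anomaly (hypothetical)
feb_ncep, mar_ncep = 0.30, 0.31
# Step 2: slope of Mar-2016..Feb-2017 UAH anomalies vs. the NCEP/NCAR
# subset anomalies for the UAH coverage area (hypothetical value)
slope = 0.9
# Step 3: add the scaled change to the observed February UAH anomaly
feb_uah = 0.35
mar_projection = project_satellite(feb_uah, feb_ncep, mar_ncep, slope)
print(f"projected March UAH anomaly: {mar_projection:.3f}")
```

Because the projection is anchored to last month's observed satellite anomaly rather than to an absolute fit, a constant offset shift like the one seen since December drops out of the calculation.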

The graph immediately below is a plot of recent NCEP/NCAR daily anomalies, versus a 1994-2013 base, similar to Nick Stokes’ web page. The second graph is a monthly version, going back to 1997. The trendlines are as follows…

Black – The longest line with a negative slope in the daily graph goes back to early July, 2015, as noted in the graph legend. On the monthly graph, it’s August 2015. This is near the start of the El Nino, and nothing to write home about. Reaching back to 2005 or earlier would be a good start.

Green – This is the trendline from a local minimum in the slope around late 2004, early 2005. To even BEGIN to work on a “pause back to 2005”, the anomaly has to drop below the green line.

Pink – This is the trendline from a local minimum in the slope from mid-2001. Again, the anomaly needs to drop below this line to start working back to a pause to that date.

Red – The trendline back to a local minimum in the slope from late 1997. Again, the anomaly needs to drop below this line to start working back to a pause to that date.

If your projection of 1.03°C for GISS is right, and it looks good to me, that will make the 2017 average for the first quarter 1.02°C. That contrasts with the record 0.98°C for 2016. That gives 2017 a real shot at being the fourth consecutive record year. NOAA, at 0.95°C for Q1 2017, would be just below 2016’s 0.99°C.

On RSS, I think you really need to switch to RSS TTT V4. They have been issuing warnings about V3.3 for a while now, so it isn’t really worth analysing the ups and downs.


Is 3.3 going to be cancelled soon? I see that V4 global coverage is better than V3.3 (V4 ==> 82.5S to 82.5N; V3.3 ==> 70S to 82.5N). One thing I couldn’t find in a quickie Google session… what elevations (or pressure levels) do the 2 versions use?

I suggest you draw your graphs so that they are at least an approximation to a valid continuous-function reconstruction from sampled data.

An appropriate method would be to draw perfectly HORIZONTAL lines through each plotted data point, extending from the time-axis center of the preceding time cell to the center of the following cell, and then drawing verticals to connect the horizontals. The result is a modification of the common “sample and hold” method of reconstructing a continuous function from its validly sampled data points, which simply places the horizontal segments across each cell from one sample to the next. A simple low-pass filter can render the result as a reasonably respectable replica of the original band-limited signal.
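For illustration, a minimal sketch of the centered sample-and-hold construction the comment describes, using hypothetical monthly anomaly samples:

```python
# Centered "sample and hold": a horizontal segment through each sample,
# spanning from the midpoint of the preceding interval to the midpoint of
# the following one. Consecutive segments abut, so the verticals that
# connect them are implied by the shared midpoint times.

def centered_hold(times, values):
    """Return (t, v) polyline points for a step plot whose horizontal
    segments are centered on each sample time."""
    pts = []
    for i, (t, v) in enumerate(zip(times, values)):
        left = times[i - 1] if i > 0 else t
        right = times[i + 1] if i + 1 < len(times) else t
        pts.append(((t + left) / 2, v))   # start of horizontal segment
        pts.append(((t + right) / 2, v))  # end of horizontal segment
    return pts

# Hypothetical monthly samples
t = [1.0, 2.0, 3.0, 4.0]
v = [0.2, 0.5, 0.4, 0.3]
pts = centered_hold(t, v)
print(pts)
```

Feeding these points to any line-plotting routine yields the staircase; a low-pass filter applied afterwards would smooth it toward the band-limited original, as the comment suggests.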

Connecting the plotted data points by straight lines is mathematically invalid and simply demonstrates an ignorance of sampled-data system theory.

It gets tiresome observing this level of basic ignorance among so-called climate scientists.

Remember the US non satellite records have an algorithm that recalculates the past records regularly. As Mark Steyn remarked at a Senate hearing, we still can’t predict what 1950’s temperature will be in 2100.


That is the most amazing sentence I have read in ages. The utter lunacy of arguing over a few hundredths of a degree when the entire data set is constantly changing argues persuasively that mankind exhibits very little intelligence.

“That gives 2017 a real shot at being the fourth consecutive record year.”
Nick, I think science should have nothing to do with records as they are often meaningless or even misleading. What matters is the trend.
Consider the drunkard’s walk, where each increment is random.
Clearly the overall trend is zero, as it is random, and yet our drunkard will magically generate one record after another.
Chris
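The drunkard’s-walk point is easy to check with a small simulation: a trendless ±1 random walk still sets new records over and over, so a run of “record years” by itself proves little about trend.

```python
# Count the records set by a symmetric (zero-trend) random walk.
# Despite having no trend at all, the walk keeps setting new highs;
# the expected record count after n steps grows roughly like sqrt(n).
import random

random.seed(42)
walk = 0.0
high = float("-inf")
records = 0
for _ in range(10_000):
    walk += random.choice((-1.0, 1.0))
    if walk > high:       # a new all-time record
        high = walk
        records += 1
print(f"records set in 10,000 trendless steps: {records}")
```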

[“If your projection of 1.03°C for GISS is right, and it looks good to me, that will make the 2017 average for the first quarter 1.02°C. That contrasts with the record 0.98°C for 2016. That gives 2017 a real shot at being the fourth consecutive record year. NOAA, at 0.95°C for Q1, 2017, would be just below 2016 0.99C.“]

“Where did you get 0.98C from?”
The average for the whole of 2016. The question is whether 2017 is on track to exceed that annual average. We knew 2016 would go down after March; we don’t know that for 2017.


One thing we can be relatively certain about is that surface temperature adjustments will cool down the next 9 months, versus Jan/Feb/Mar. I did a post back in 2014 https://wattsupwiththat.com/2014/08/23/ushcn-monthly-temperature-adjustments/ where I graphed USHCN temperature adjustments separately for January, February… etc. The results for 1970-2013 are in the following graph (click to view original)

You’ll see the peak is about 4km. One of the things that happened going from UAH 5.6 to 6 is that the quoted level went from 2km to 4km. But it’s rather an arbitrary figure. As you see from the RSS diagram, it’s actually just a continuous weighting function, and takes in a wide range. The key issue is to avoid stratosphere (which behaves quite differently) and to avoid the large obscuring signal from the surface. That isn’t easy with TLT. You’ll generally see John Christy quoting TMT nowadays.

I’m at a loss as to what value-add this provides. Guessing what next month’s anomalies will be seems strange. There isn’t a variance analysis, which would be meaningless on a month-to-month basis, so I come back to: what value does this provide?

I do mention the method, and the fact that the first 29 days of March NCEP/NCAR data are available.

Thanks for the tip about the last 2 images not auto-displaying. I’ll know better next time. I did them yesterday early afternoon to include March 29th data. They looked good in the staging area. When I selected “Preview”, WordPress gave me the blank “Beep beep boop” page with a spinner. Half an hour later, it was still spinning, and the browser status bar was madly updating about “contacting/connecting-to/waiting-for/reading/receiving-data-from” a zillion adservers. I had to go with it “sight unseen”.

I understand what you’ve said, but my point is month-to-month (or month-over-month) variance is not climate – it’s weather. As such, trivial variances are meaningless. The variances are so insignificant that any small number selected as the “forecast” is as good as any other small number (most seem to be within the range of instrument error).

I certainly do not mean to be rude to Walter, but I seriously wonder what the value is.

The people who make up this fake data need to keep feeding their families, so they just have to keep making new stuff up, even though it conveys no information. But as you can see, they can make up this fake data to three or four significant digits.

If they didn’t do that, just think of all the taxpayer’s grant money that would go unclaimed.

…. But as you can see, they can make up this fake data to three or four significant digits.

To my way of thinking, it is a shame to only have three or four significant digits when you are making up data. Hell, if you go to all the trouble to fake the data you should use at least 7 significant digits!

I have been saying for a few months that the cold ‘blobs’ that have replaced warm blobs in the Pacific (and developed in other oceans) would decouple global Ts from ENSO. The end of the California drought was vindication of this idea. Your forecasts are running too hot because of this.

Here in SW France we still have the heating on at times. All but two years in last fifteen the heating has been turned off by early – mid March. Not a warm year so far.
It seems to me that all the alleged warming is where there is no one living and therefore no one to disprove the warm temps.

SW France and much of Spain was slightly cooler than average for the date over recent days but seems to be settling now. Most of the unusual warmth is in the far north of Europe, Asia and the Arctic; though Central Europe and China are also well above average. Antarctic is cooler than average.

“Since the switchover in December, the actual satellite anomalies have paralleled their NCEP/NCAR subsets, but with a different offset than before. I looked into the problem and changed the projection method for the satellite data sets. the projections for RSS and UAH have been consistently too high the past few months.”
In Step 1 you say to subtract the February 2017 UAH subset NCEP/NCAR anomaly from the March number. By “March”, are you referring to March 2016, or have you added in an offset?

Nick Stokes
“If your projection of 1.03°C for GISS is right, and it looks good to me, that will make the 2017 average for the first quarter 1.02°C. That contrasts with the record 0.98°C for 2016. That gives 2017 a real shot at being the fourth consecutive record year.”
Seeing as how El Nino has gone and the pacific warmth is unlikely to be a patch on 2016, the only way 2017 GISS could be warmer than 2016 is a very heavy thumb on the GISS algorithms. Bates showed that this may not be hard to do, but in a Trump-monitored, Lamar Smith-overseen world this will be extremely difficult to achieve. Not impossible, given M Mann’s recent chutzpah, but very difficult all the same.

To get from the La Nina that ended in Jan-Feb 2017 to the forecast 2017 El Nino, the ONI has to transition from negative numbers to positive numbers. Meanwhile the GMST for the first quarter of 2017 already exceeds the record annual mean of 2016, in what could easily turn out to be the coldest quarter of 2017.

Please explain how one can take temperature data measured to the tenths of a degree and extract anomalies down to the thousandth of a degree? The CRUTEM4 temperature data set states the following about their measurements: “Year followed by 12 monthly temperatures in degrees and tenths (with -999 being missing)” Yet the anomalies show an accuracy down to the third decimal point. This is a statistical impossibility starting with measurements of only one decimal point.

I’ve heard the argument that using many measurements allows one to get better accuracy, but this is incorrect. Using multiple measurements allows one to determine the uncertainty in the mean to a finer precision than in the original measurements, but those measurements have to be of the same thing and at the same time and place. The uncertainty in the mean is also affected by the range of the measurements, and since the range in these measurements has to be more than ten degrees, there is no way to get the uncertainty in the mean to such precision.

I would love to see the equation that is used to determine these average anomalies, and hear the justification for using it.

Well, that’s the reality take. I’d like to hear the logic of those who think there’s some statistical feat of magic that can make measurements with one decimal point be accurate to three decimal points.

Please explain how one can take temperature data measured to the tenths of a degree and extract anomalies down to the thousandth of a degree?

Average family size in the US is said to be 2.6 people. Everyone (I hope) understands that no one is claiming each household contains 2 people plus 0.6 of a person. Most people get it that the 0.6 is a statistical artefact arising from the averaging process.

Likewise, no one is taking temperature measurements to tenths of a degree, nor is anyone claiming to do so. The precision comes from the averaging of different temperatures, many of which are measured to 0.5 of a degree.

You only need a few measurements to 0.5 C accuracy to get an average that extends to many more decimal places, never mind the thousands of such measurements those who make these estimations have at their disposal.

As for anomalies, since these are just differences from long term averages exactly the same principle applies.

“Likewise, no one is taking temperature measurements to tenths of a degree, nor is anyone claiming to do so. The precision comes from the averaging of different temperatures, many of which are measured to 0.5 of a degree.

You only need a few measurements to 0.5 C accuracy to get an average that extends to many more decimal places, never mind the thousands of such measurements those who make these estimations have at their disposal. ”

And yet time and time again this gets trotted out. Take DSD audio. DSD is 1-bit, but has a sampling rate of 2.8224 MHz. 1 bit, two possible values, and yet can reproduce high quality audio.
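A toy sketch of the oversampling principle the DSD analogy relies on: readings of a fixed true value, each perturbed by independent noise and then rounded to the nearest 0.5, still average out close to the truth. The numbers here are illustrative only, and the key caveat matches the skeptics’ point: this works only when the errors are independent and unbiased.

```python
# Dithering/oversampling demo: quantize noisy readings to 0.5-degree steps,
# then average. Independent noise lets the mean recover finer precision
# than any single reading; a shared systematic bias would NOT average out.
import random

random.seed(1)
true_value = 20.13
n = 100_000
readings = []
for _ in range(n):
    noisy = true_value + random.gauss(0.0, 0.7)   # independent sensor noise
    readings.append(round(noisy * 2) / 2)          # quantize to 0.5 steps
mean = sum(readings) / n
print(f"mean of {n} half-degree readings: {mean:.3f}")
```

With independent noise the standard error of the mean shrinks like 1/sqrt(n), which is where the extra decimal places come from; it says nothing about errors common to all the instruments.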

That turns out to not be the case. Considering the “average size of a US household” calculation, to claim an accuracy of 2 significant digits when only one is in each measurement is incorrect. If you’re counting whole people, you can’t give an average of 2.6 people. You can calculate that figure, but to be statistically accurate it must be reported with the same number of significant digits, and so should be rounded up to 3. As you said, there can’t be 0.6 people in a household somewhere.

As for the thousands of measurements, to be valid for their claimed use, they must be a measurement of the same thing at the same time and place. One can measure the length of a board a thousand times with a ruler marked in millimeters, and the mean of the measurements will be +/- 0.5 mm. The multiple measurements allow one to calculate the uncertainty in the mean to more significant digits, but the uncertainty in the measurement remains the same +/- 0.5mm.

Thus, one could take a thousand measurements of a board and reduce the uncertainty in the mean to say its length was 47.7cm +/- 0.003mm, but the board would still only be measured to the +/- 0.5mm accuracy. The precision of the mean can be known to be within 0.003mm of that 47.7cm figure, but you can’t claim to have measured the board down to 47.700cm.

Finally, the same thing isn’t being measured anyway. A thousand measurements are being taken at a thousand different locations, and the mean is claimed to be “the average US (or global) temperature.” This is like measuring a thousand different boards in different places, and claiming you have the “average length” of a “board.”

Finally someone understands this BASIC mathematical principle—-besides me. I once had a Township civil engineer insist I design an earthen retention basin to four decimal places when the soil coefficient was 0.74. Seems he had this computer program…need I say more. LMFAO he didn’t like it.

It seems like a similar question arises in satellite measurements of ocean surface heights. The Jason 1, 2, and 3 satellites claim to measure distance to the ocean surface with an accuracy of 3 centimeters (after adjusting for orbit variability from center of Earth to an accuracy of about 1 cm), and from this to determine global annual rise in sea level of about 3 millimeters (+ or -). It’s hard to see how that can be ten times more precise than the original measurements.

What is interesting is that GISS is literally the only data set that will reportedly tick upwards. UAH and RSS meanwhile show a sizable downtick.

Why such a discrepancy? You would either have to throw out GISS when it pertains to temperature, or see it as literally the only record that can be trusted.

That’s not to mention the growing cold blobs in the oceans (anomalies anyway) and ENSO being neutral (the sea level anomalies so far also don’t show much promise for another big El Nino event this year). The anomaly charts of the WXMaps site also showed, for the first time in a long time it seems, long-lasting negative winter temperature anomalies in the North American arctic instead of a massive red blob. It could be that global cooling is starting or is about to start, but it will depend on what the next year of data brings.

I think the January drop in UAH and RSS in comparison to the surface indices had something to do with something that was specific to this past winter. Maybe the anomalously warm tropical/subtropical NE Pacific was causing deep convection before January but not since, or maybe the issue was where and in what direction snow cover anomalies were. Or maybe the issue was the warm blob in the North Pacific being replaced by a cold one, and air from that area was uplifted into the satellite-measured troposphere by storms. Whatever it is, I think it will continue through April and then fade. If the issue is snow cover, I think this will fade sooner. If the issue is tropical/subtropical ocean temperature patterns, I think this will continue through May and change while the ITCZ is moving northward in June. In any case, the February low satellite readings look like some sort of downward spike that I expect not to continue into March; maybe they were related to temperature and snow cover anomalies in North America that were less anomalous in March. So, I expect March UAH v6 to be around +.38-.39, and March RSS to be around +.47.

I was way off, and I wonder why. UAH is in, and it was +.19, which was .19-.2 degree less than I expected. It was .16 degree less than Walter Dnes expected, after he made a downward shift in his method of predicting UAH.

I wonder if the three surface figures will also be about .16 degree less than predicted by Walter Dnes. If they are close to his prediction instead, then there is a recently rapidly widening divergence between the Big 3 surface datasets and those of the satellite-measured lower troposphere. And if this happens, I wonder if all 3 of the surface datasets will be in such a rapidly widening divergence, or if HadCRUT4 will be close to .16 degree cooler than Walter Dnes’s prediction while the 2 American ones turn out close to his prediction.

Almost all the reanalysis results for surface temperature in March were about the same as Feb. My NCEP/NCAR (same as Walter’s) was down by just 0.01 °C, and others reported in the comments there were a little above. I’d expect a somewhat lower GISS; others probably closer to Feb.

Since the NCEP/NCAR re-analysis is quite close to the surface (995 mb pressure level), I expect HadCRUT4 and GISS and NCEI to track reasonably close to it. The satellite data represents the lower troposphere, which may not be an “apples-to-apples comparison” with the surface data sets.

This article is an example of looking at the leaves on the trees in the “forest” of climate change.

Two and three decimal places are nonsense for these data.

Even one decimal place is of questionable value, since most instruments used for surface measurements have a margin of error of at least +/- 0.5 degrees C.

This article is an example of unimportant issues that global warming believers want skeptics and “deniers” to focus on … while they are busy brainwashing the public and teaching children with their wild guess predictions of a coming climate change catastrophe.

One year is meaningless in the big picture of 4.5 billion years of continuous climate change.

One month of one year is meaningless too.

Even less meaningful, if that’s possible, is projecting monthly anomalies … rather than just waiting for the final data to become available.

I ask author Walter Dnes to consider if there is anything else he could be doing to advance the fight against climate scaremongering — something that would have more value than projecting monthly temperature anomalies …. when final data are almost complete … and before the regular after-the-fact “adjustments” begin changing the data !

Mr. Dnes’ article is a poster child for how to waste time and energy writing about climate change, since it only tells us what we already know — the climate is always changing, and humans can’t predict the changes (even very short-term changes).

Below are three statements of the most basic climate science knowledge, Mr. Dnes,
hopefully to guide your next article towards more important aspects of climate change:
(1) The climate is always changing,
(2) Humans can’t predict climate changes, and
(3) Climate predictions are a waste of time unless you KNOW what causes climate change
(although predictions can be useful to scare people, and control them).
