2016: The Warmest Year on Record, with a Dip in the Second Half of the Year

2016 was the warmest year since humans began keeping records, by a wide margin. Global average temperatures were extremely hot in the first few months of the year, pushed up by a large El Nino event. Global surface temperatures dropped in the second half of 2016, yet still show a continuation of global warming. The global warming “pause”, which Berkeley Earth had always stressed was not statistically significant, now appears clearly to have been a temporary fluctuation.

Robert Rohde, Lead Scientist with Berkeley Earth, said “The record temperature in 2016 appears to come from a strong El Nino imposed on top of a long-term global warming trend that continues unabated.”

In addition, 2016 witnessed extraordinary warming in the Arctic. The way that temperatures are interpolated over the Arctic is now having a significant impact on global temperature measurements. Zeke Hausfather, Scientist at Berkeley Earth, said, “The difference between 2015 and 2016 global temperatures is much larger in the Berkeley record than in records from NOAA or the UK’s Hadley Centre, since they do not include the Arctic Ocean and we do. The Arctic has seen record warmth in the past few months, and excluding it leads to a notable underestimate of recent warming globally.”

Elizabeth Muller, Executive Director of Berkeley Earth, said, “We have compelling scientific evidence that global warming is real and human caused, but much of what is reported as ‘climate change’ is exaggerated. Headlines that claim storms, droughts, floods, and temperature variability are increasing, are not based on normal scientific standards. We are likely to know better in the upcoming decades, but for now, the results that are most solidly established are that the temperature is increasing and that the increase is caused by human greenhouse emissions. It is certainly true that the impacts of global warming are still too subtle for most people to notice in their everyday lives.”

Richard Muller, Scientific Director of Berkeley Earth, said: “We project that continued global warming will lead us to an average temperature not yet experienced by civilization. It would be wise to slow or halt this rise. The most effective and economic approach would be to encourage nuclear power, substitution of natural gas for future coal plants, and continued improvement of energy efficiency.”

Griff, please keep in mind that “recorded history” is, any way you slice and dice it, a very insignificant fraction of the earth’s complete history. Let’s assume 4 billion years for the planet, and I will be terribly generous and give our species 200 years of accurate temperature records.

Compress those 4 billion years into a single calendar year, and “recorded history” starts about 1.6 seconds before midnight, between the “2” and the “1” on the way to “Happy New Year!”
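That scaling is easy to verify; here is a quick sketch using the assumed 4-billion-year age and 200-year record from above:

```python
# Scale 4 billion years of Earth history onto one calendar year
# and find where 200 years of "recorded history" lands.
EARTH_AGE_YEARS = 4e9   # assumed planet age, per the comment above
RECORD_YEARS = 200      # generous span of accurate records, per the comment above

SECONDS_PER_YEAR = 365.25 * 24 * 3600

# Fraction of Earth's history covered by the record, expressed as
# seconds before midnight on December 31 of the compressed year.
seconds_before_midnight = (RECORD_YEARS / EARTH_AGE_YEARS) * SECONDS_PER_YEAR
print(round(seconds_before_midnight, 1))  # -> 1.6
```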

Therefore we are assuming that the final 1.6 seconds of the year are representative of the entire year. This is where there is a significant FAIL with regard to the alarmists’ point of view.

The pre-civilized Neolithic demographic transition (sometimes referred to as the agricultural revolution) was around 12,000-12,500 BCE, so well into the Holocene. It was only when the cold receded and the planet/climate changed into one of modest warmth that man started rapidly advancing.

But in reality, more recognizable civilizations began in Mesopotamia and Egypt some 4,000 to 5,000 BCE, at which time the Northern Hemisphere was warmer than it is today.

If one traces the cradle of civilizations, it will be seen that it is temperature dependent, with warmer areas advancing earlier than colder climes.

Everything we know about the rise of mankind and life on Earth suggests that warmth is good, and cold is bad.

Wrong on both counts.
The Minoan, Egyptian, Roman and Medieval warm periods were all warmer than today.
It’s been explained to you many times that you can’t compare rates between proxies which are 50+ year averages with modern records that record each year.
But then again, being accurate and telling the truth has never been your objective.

@Darrell Demick January 18, 2017 at 8:31 am
‘Historic’ knowledge indeed seems sorely missing with the Greenhouse believers.
Some 84 million years ago temperatures peaked at 15-20 K higher than today.
No ice to be found on earth, except perhaps on mountaintops near the poles.
We have been cooling down since then, with an ice age starting some 2.5 million years ago, when the deep oceans had cooled down sufficiently to allow ice to form.

Don’t know what the dinosaurs were using to run their air conditioners…

Pretty clear: “We project… will lead…” meaning so far, even 2016’s warmth hasn’t yet exceeded what has been “experienced by civilization.” In other words, they estimate it will get warmer than all of civilized history, but it hasn’t yet.

Uhhh, no … we only have reliable temperature records for just a handful of locations going back to the late 19th century. Civilization started sometime around 5,000-7,000 years ago. Therefore it is impossible to make any statement at all about temperature records for most of civilized history.

We can infer temperatures in various but limited locations based upon analogs that are assumed to reflect local temperatures, such as ice core data from Greenland. That’s not a recorded temperature, however.

Civilization doesn’t “experience” the average global temperature. The average global temperature is just a mathematical exercise. Only actual people experience actual temperature, and it is local and changes throughout each day and is not yet outside the normal range of experienced temperatures from around 130 F to -120 F.
Besides, if the average temperature went from around 57 F up to 59 F, I guarantee you nearly everyone has experienced that actual temperature.

Yes, because Greenland has never ever been ‘green’ in the past, and Viking farmers never farmed there. Grape vines never grew in England or eastern Canada in the past, and the latter was never called Vinland. /sarc

Civilisation does not experience, as its average temperature, the average temperature of the globe. I am going to remember that.

Historically humanity experienced the temperature where they lived which was way above the global average. I experience an average temperature of about 22 degrees C. The global average is closer to 16. The reason is both where I choose to live and that I both heat and cool my surroundings.

Almost no one in the world chooses to live at an average temperature as is experienced in Antarctica.

The average temperature the average human experienced during the Minoan optimum was far above the global average. People, crops, game, stock animals, all thrive at temperatures above the global average. Even the Southern ocean is barely below zero C. The Arctic ocean too. Life loves warmth.

That, Griff, is humbug. The earliest “civilization” – as marked by literacy in some form, “urban” communities in some form, monumental architecture in some form, and agriculture – the Bronze Age, shall we say – appears in climate regimes that were warmer than the present, the Minoan Warm Period for instance, and then cools somewhat before reaching a slightly less warm peak during the Roman Warm Period. Things then cool again before cresting once more in the Medieval Warm Period, which was still not as warm as the Roman Warm Period. The present might be as warm as the Medieval, but that has not been established satisfactorily to anyone lacking a political agenda, and thanks to political agendas and poorly documented “adjustments” of data may never be satisfactorily established. In fact, if you relax the criteria (drop literacy) and toss in the Neolithic ca. 8,000 years ago as “civilization,” then not only was that the warmest period in the last 12,000 years, sea levels were about 1.5 meters higher than at present (lots of data about that from the Pacific, Texas, Brazil and other places if you look for it).

Civilization began during the Holocene Optimum, when globally and regionally the cradles of civilization were warmer than now. The oldest writing dates to c. 5200 years ago. The peaks in civilization thereafter also occurred during intervals warmer than now, namely the Egyptian, Minoan and Roman Warm Periods. The High Middle Ages also happened during centuries warmer than now.

Every proxy shows the Holocene Optimum, Minoan, Roman and Medieval Warm Periods warmer than now. Clearly you have not been paying attention or ignoring all actual evidence, such as for higher sea levels during those intervals, for starters.

Hi Griff,
Ever been to Reykjavik? A great place, I recommend it.
The Icelanders are fiercely proud of their history, it seems to be a national hobby of sorts. There are numerous historical displays concerning their now defunct colonies in Greenland. There are many fascinating tales told.
Remember, it is not ancient history, it is modern history. An abundance of records and artifacts from those days exist and are in numerous museums.

Of course, a hundred times more can be said for the Medieval Warm Period all over Europe and the world.
You said: “if its an average never experienced by civilisation”
That word:
IF, If, if, if ….
Go to Iceland if you can ever manage it. It is a truly remarkable place.

0.02 C is at least a factor of 25 smaller than the measurement error alone, Mark.

The BEST message is brought to us by people who apparently don’t understand even the concept of instrumental resolution.

Elizabeth Muller, Executive Director of Berkeley Earth (and daughter of Richard Muller, perhaps?), said: “the results that are most solidly established are that the temperature is increasing and that the increase is caused by human greenhouse emissions.” Both declarative clauses are quantitatively insupportable.

I know nothing about you running away from anything, but you are correct that the ‘hotter’ claim provides a number that is statistically indistinguishable from the temperature of the preceding year. That is a fact.

It surprises me that so many readers don’t grasp the concept that underscores all numerical claims arising from measurements: that all measurements have a margin of error known as the uncertainty. Making multiple measurements of the same thing with the same instrument doesn’t reduce the uncertainty of the measurements, but it does provide a better guess as to where the centre point of the range of measured values lies.

The oceans, which dominate the global temperature, are NOT measured with the same device a number of times, so the centre of the range is not ‘better known’ than any one measurement. Averaging the measurements made with different instruments, each with its own uncertainty, is done using standard mathematical tools, and all the errors propagate through to the final number. The final number cannot possibly be known with greater accuracy than the most accurate contributing instrument.

In any case almost all the numbers come from platinum resistance temperature devices (PT100 RTDs). New and freshly calibrated, they can be read to a precision of 0.01 C and are accurate to 0.02 C. After a year without recalibration, depending on the electronics, they can be accurate to 0.03 C, then about 0.06 C after five years, though I would not trust such readings myself.

I have checked PT100 RTDs using a very accurate device and found that they are stable to 0.005 C; better than that, in fact: the result drifts around 0.002 ± 0.002 C. So the 0.01 C claim is certainly correct.

The claim that 2015 was ‘hotter’ than 2014 by 0.001 C is laughable to anyone working in a science job. The claim that 2016 was ‘hotter’ than 2015 has to be based on the differing methods: with or without the Arctic. Arctic in? Warmer. Arctic out? Meh…

Hi Crispin, the problem with the temperature sensors is less their calibration, although that too is critical, but rather the systematic measurement errors that arise from uncontrolled environmental variables.

Even well-calibrated PRTs, when in a gilled shield or a Stevenson screen, suffer from radiant warming and insufficient wind-speed. This typically puts a warm bias of unknown magnitude into their measurements. Radiant cooling under a clear sky can introduce a cool bias. These errors are variable across all time-spans, from hours to annual, are not random, and are not known to average away.

Random errors with a mean of zero that are uncorrelated across different instruments will average away in a multi-instrument mean. The people making the surface air temperature record assume all measurement errors are random and average away. They’re wrong.
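The distinction between random and systematic error can be shown with a small toy simulation (a sketch only; the true temperature, spread, and bias values are invented for illustration): zero-mean uncorrelated errors shrink in the mean roughly as 1/sqrt(N), while a shared warm bias survives averaging untouched.

```python
import random

random.seed(42)

TRUE_VALUE = 15.0   # hypothetical true temperature, deg C
N = 10000           # number of independent instruments
SIGMA = 0.5         # random error of each reading, deg C
BIAS = 0.2          # shared systematic (e.g. radiant-warming) bias, deg C

# Case 1: purely random, zero-mean, uncorrelated errors.
random_only = [TRUE_VALUE + random.gauss(0, SIGMA) for _ in range(N)]
mean_random = sum(random_only) / N   # lands very close to TRUE_VALUE

# Case 2: the same random errors plus a common systematic bias.
with_bias = [x + BIAS for x in random_only]
mean_biased = sum(with_bias) / N     # offset by BIAS, no matter how large N is

print(abs(mean_random - TRUE_VALUE))   # small, shrinks roughly as 1/sqrt(N)
print(abs(mean_biased - TRUE_VALUE))   # stays near BIAS; does not average away
```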

If you like, here is the debate at tAV that Steve Mosher referenced. The evidence is there about the truth of my behavior and Steve’s description of it.

“If you like, here is the debate at tAV that Steve Mosher referenced.”
I have many criticisms of Pat Frank. I think his maths is off the planet. But running away is not something he does.

But it was interesting to see that thread, in the light of another WUWT thread in which he complains that nine out of ten journal reviewers (who were rejecting his papers) just don’t know anything about measurement error and are not scientists. Apparently the then scientific luminaries of the sceptic blogosphere who commented are in the same boat. It becomes clearer in the follow-up thread:

Jeff Id: “Everything you have written to date simply confirms my initial points. This is not an accurate representation of ‘uncertainty’ in anything.”

Steve Fitzpatrick: “Stop wasting your time Pat. You still are confusing internal variability with uncertainty about the true state of the system.”

RomanM: “I am currently traveling away from home and don’t have time right now to address all of your points, however I suggest that you go wrong right from the start.”

DeWitt Payne: “I have read your papers, they are wrong and Jeff and Lucia have not been refuted at all, much less thoroughly.”

Carrick: “Pat, I feel that i owe you a bit less of a cryptic explanation of my concerns about E&E. The comment you make about judging a paper regardless of its source is a good one, my comment was more about how E&E has done you a disservice by allowing the paper to be published without as thorough of a vetting as it deserves. As a result, I think you have a paper that has substantive flaws in it.”
and more succinctly: “Pat, your error bars, regardless of how you obtained them, do not pass the smell test”

But Pat Frank stood his ground. There are just so many non-scientists in the world. If it weren’t for Pat …

Nick, picking out and quoting a few critical comments is not an appraisal of the debate, is it? But then you knew that, didn’t you?

If you’d proceeded to the end of the thread, you’d have found that both JeffID and Lucia were claiming that I had confused weather noise for measurement error. That charge is evident in the post you reproduced from Steve Fitzpatrick.

When I challenged them to point to the Figure or Table displaying weather noise, they couldn’t do it. Because there aren’t any.
Jeff eventually grudgingly admitted as much, “I get that Pat hasn’t included weather noise in his final calculations for Table 1,2 and the figures.” Lucia attempted to deny she’d ever made the claim. Steve Fitzpatrick never seemed to grasp the difference between random and systematic error. That problem ought to be familiar to you, Nick.

It finally appeared that both Jeff and Lucia made the weather noise criticism without having read through the paper.

The measurement errors I assessed were the systematic errors reported in Hubbard and Lin, “Realtime data filtering models for air temperature measurements,” Geophys. Res. Lett., 2002, 29(10), 1425. The use of those data is made abundantly clear throughout my paper. But not clear enough for my critics, apparently.

Here is a summary reply to their claims of mistake, and why they’re wrong. Also here.

And Nick, if you support the standing as scientists for climate modelers who think that an error statistic is an energetic perturbation, and that uncertainty bars imply ever wilder model oscillations, well, then, good luck to you, and do stick to managing grocery accounts with your math.

Steve Mosher, by the way, has never once substantiated his accusations.

What debate? There’s no debate when one side is controlling and fixing the data. Usually it’s called fraud. Now it’s called saving the planet. It’s a legal way of defrauding people via their tax dollars. By the way, send me your money; I can get you a 10% per year return, year after year. As long as you don’t take the money out, we’re good.
Politically, I’m sure they can get more mileage out of this. The stern reprimands from bureaucrats that aren’t accountable to anyone. Shadow puppets that can make it appear as if world consensus is against you. One lies and the other swears to it. It’s like cutting the heads off a ten-headed monster: two grow back. They can produce an endless supply of papers on minutiae.
Scientifically, AGW is dead.

Let’s notice that, after accusing me of running away from debates (false), Steve Mosher disappeared after I challenged him to substantiate his claim that he “showed [my] claims were testable and false.”

Better yet, you could produce a graph that shows the number of cell phones and CO2 rising at the same rates. Obviously the unprecedented number of cell phones is caused by CO2. We have to stop producing CO2 immediately!

Nick, how cold should we make earth and how? Also, why should we make it colder. Of course we are in control now so we-can-do-it, right? Please present your dissertation on how it should be done. Thanks.

Leave the earth’s temperature ALONE, Nick–who are you to say what the temperature ought to be?

My gosh, you Progressives think you know better than everybody else what we should eat, how we should behave, what earth’s thermometers should read, and even how to THINK!

It’s time to leave well enough alone–so when my garden or orchard fails from cold I won’t be suing the pants off your “Climate Control Organization” for damages (and since frost is the major reason for crop failure, I doubt you’ll have sufficiently deep pockets ever if your goal is to reduce temperatures!)

Here’s an idea: Why don’t you guys just reverse homogenize the temperature record the other way (this time, do it correctly), and you’ll be happy as clams.

“openly displayed and justified adjustments”
Can you point to where the satellite adjustments are openly displayed? I can certainly point to where the adjustments for surface data are. At the GHCN site, you will find data both before and after adjustment. And for satellites? And if you really want, you can download the code and run the adjustment yourself. Or, as I have long urged, do your own calculation with the unadjusted data to see how little difference it makes.

“Nick demands that others do the work for him”
I’ve done the work. I have followed the temperature indices in some detail. And I don’t believe those “openly displayed and justified adjustments” for satellite data exist. But readers might like to hear your version.

When I was collecting data, I don’t think the statistician changed my results to make them ‘better’ or to suit his own expectations and “get it right”. I think he used my data, and I was there to answer questions if anything looked funny!

There’s plenty of data in the surface record we have to show air temps are not going up in any measurable way from CO2, and the oceans are so poorly measured, who knows. But I do know that if you swap warm water in the Southern Hemisphere oceans with cold water in the Northern Hemisphere, the global average temp will go up.
Even though the energy content didn’t change.

I have never seen a chemistry equation adjusted. You know, the one that claims so much warming from so much CO2. Is it or isn’t it? You shouldn’t have to adjust anything. It may be the warmest evah, but the warmest is so far below the lowest projection that it nullifies the theory. Obviously, it’s wrong. However, the CO2 levels have been adjusted. For what purpose? To make the science better, or to disprove that CO2 follows temperature? For what purpose? To make the science better, or to make some really big missing numbers in CO2 concentrations fall into line with the models? Did the instruments on CO2 recordings suddenly change? I know something is wrong; I’ve been keeping track. The record (not the most on record) for 2005 stood at 2.52 ppm for years, then poof, it went to 3.10 ppm. For 17 years the record CO2 rise was 1998 at 2.93 ppm. So from 2005 a BMT added each and every year on top of the previous, and the ppm rise for 2016 was 3-point-something? (or so I saw) You don’t think something is wrong with your story? Variations is your answer, isn’t it? 60 years of nearly precise variations?
Magic numbers.

Then it is strange that the producers of the satellite series sell their series as surface temperatures. Even stranger, however, is that the trend of the series of land and water measurements (not to be forgotten: water measurements) is increasingly moving away from the satellite measurements. Is the CO2 no longer well-mixed, but creeping along the ground to 2 meters high, and in the water to 2 meters deep, while in the rest of the atmosphere, especially in the tropical hotspot, it does not occur so frequently? Perhaps one should mute the producers of the satellite series and their series. Well, it’s a pity that Clinton did not win, Sir.

Hans-Georg, the guys at BEST, NOAA, etc measure UHI and tell us how that is doing. I live in a 6,000 person town and we have some big UHI days here in the winter, but ask the guys at BEST or NOAA and they will say UHI is minimal at best in a 6,000 person town. That’s funny, because every winter it gets down to 3 degrees Fahrenheit in the middle of town and just 1 mile away once you get out of town it is -9 degrees Fahrenheit; that is a 12 degree Fahrenheit difference. But Stokes and the rest will say UHI is adjusted for and it’s mainly only in cities and that a little town of 6,000 doesn’t really get any UHI. Tell that to the 12 degrees I see every year when it gets really cold outside in the winter. This 6,000 person town went from 1,000 people around 1900, to 2,000 people to 3,000 people, all the way up to 6,000 people. What these guys are measuring is how much UHI has increased over the years and passing it off as something else.

How do they adjust for UHI? In the winter it can be 12 degrees different on a Monday, 10 degrees different on a Tuesday, and then only 3 degrees different on a Wednesday. What formula is used to adjust for that UHI in a tiny town of 6,000 people? Oh that’s right, it’s considered rural and no UHI adjustment is needed. lol

This is all well-known to me and well, my contribution was a somewhat ironic answer to Nick Stokes. The trend has long been no longer a friend of the various techniques for measuring the temperatures. Either one or the other is wrong. And I personally point to Best, GISS, Hadcrut and the others for some reasons.

And that is the problem Nick. There aren’t nearly enough surface thermometers by several orders of magnitude, to properly capture a valid sample of the earth’s global Temperature map.

So they are just a few scattered Yamal Christmas trees trying to masquerade as a properly sampled band-limited continuous function. And they can’t even capture temporal samples on a valid basis either, so all of your magic code is prestidigitation on total rubbish.

To AndyG55 – right. The whole Average Global Temperature is a construct. Maybe even a useful construct once it gets sorted out some day. But right now, it is scientific and mathematical foreplay with all the gridding and smearing. It is a CONSTRUCT. It is not measuring temperature. Recall how it is done and how adjustments are made, such as TOBS, to homogenize the data. Ok, my language may not be exact, but for those that are interested, go read how the Average Global Temperature is constructed. It can now be done very quickly with auto downloads and computer programs. But as I was told by my computer instructors back in the ’60s – First instruction: “A computer is just a fast idiot. It does exactly what you tell it quickly.”
Second instruction: “GIGO.”

Give the Mullers and Mosher credit. They have a very fast idiot. It may even be useful. But it isn’t temperature. It’s a CONSTRUCT.

They break each individual station into 8 different stations on average using “breakpoints”. Then they somehow stitch everything back together and have record temperatures.

It is a superior method to remove any cooling trends or just normal temperature trends.

Any time a cooling period is observed in a station, a “breakpoint” is detected and they split the record and remove the cooling trend. If a station just has a normal temperature zero-change trend, a “breakpoint” is detected because all the other adjusted stations nearby have a warming trend. Split that record, build in warming trend and away we go.

At least they seem to recognize that the solution to the problem (if in fact there is a problem) is nuclear, natural gas and energy efficiency. No mention of renewables like solar, wind and biomass that, in spite of the hundreds of billions per annum spent on them to date, contribute a paltry 2.6% to the world’s energy needs.

Also, I was almost shocked to see him criticizing the tendency of media to attribute any slightly unusual weather pattern (a storm, a hurricane, etc) to GLOBAL WARMING, and conceding that “It is certainly true that the impacts of global warming are still too subtle for most people to notice in their everyday lives.”

I thought it was bad to use an El Nino year to prove something. Isn’t that why contrarians get accused of cherry picking if they use 1998? Oh, wait, I’m sorry, I forgot that the global warming people are exempt from their own rules and can make up whatever they want. My bad.

Yes, but it is quite fair to compare temperatures of the El Nino years 2016 and 1998 (although the 1998 El Nino was slightly stronger than the recent one). The difference is +0.36 C, suggesting a trend of 0.20 C/decade for such peak events.
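The 0.20 C/decade figure follows from simple arithmetic on the two El Nino peaks:

```python
delta_c = 0.36        # difference between the 2016 and 1998 El Nino peaks, deg C
years = 2016 - 1998   # 18 years between the two events
trend_per_decade = delta_c / (years / 10)
print(round(trend_per_decade, 2))  # -> 0.2
```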

Depends on the dataset, no? There isn’t a difference of 0.36 when using any of the satellite datasets, so you are then left with surface temperatures, which have poor coverage, and for some reason need to keep having adjustments made to them. 0.02 warmer using the UAH dataset. After 18 years of increased CO2, I’m absolutely terrified of that rise.

There isn’t a difference of 0.36 when using any of the satellite datasets…

Sure there is somewhere! O R probably means the ENSO peaks in the monthly record of some satellite data. Your 0.02 °C concerns the difference between the yearly UAH6.0 averages, within which all the monthly peaks vanish.

But it depends also on where you compare. Here is a chart with two UAH plots
– Globe
– Tropics Ocean
You see here that in the Tropics, ENSO 1997/98 remains the King.

Surprisingly, when you compute the time series for Nino3+4 (5N-5S–120W-170W) out of UAH’s 2.5° grid data, the peaks are less strong than for the Tropics and the Globe.

You gotta love the “relative to 1951-1980 average”!!! Or when we had sparse buoy data and incomplete land thermometers in much of the Southern Hemisphere. Or that during that time span the earth was cooling, the entire time. Well it is Berkeley. My how they’ve fallen. Poor snowflakes.

Steven Mosher, so what was the temperature in my town 2 weeks ago? I live in a small 6,000 person town. In the middle of town it was 3 degrees Fahrenheit; just outside of town, about 1 mile from the middle, it was -9 degrees Fahrenheit. Are you guys using the 3 degrees from the UHI that you say doesn’t exist in such a small town of 6,000 people, or the -9 degrees? Hmmm, looks like you guys need to actually go do some field work. UHI is big even in small towns of 6,000 people; I bet you didn’t know UHI can cause up to a 12 degree difference in a little town of 6,000 people from a site 1 mile away. This town is in the middle of farm country, flat, flat, flat, and not next to water. How much does your algorithm adjust for UHI for towns of just 6,000 people? I’m guessing 0.00, which means you are measuring UHI and not what you think you are measuring.

My guess is that with all those millions of records (and of course, further millions just waiting to be ‘digitized’), it would be awfully hard to break down temperatures accurately to the hundredth of a degree that has been used to separate one year from another. Yet that seems to be what the ‘warming’ is based on.
And nothing changes the fact that there is no causality proven by any of these records, nor is the ‘end-of-the world’, or even the ‘disaster’ scenario supported.
And perhaps, most importantly, you’re TRYING to get there.

Steven,
When we do data rescue on official Australian records, we find at most about 0.5 deg C warming for roughly the 120 years after 1880.
We show this to you and you duck shove it.
Then, you seem to issue figures of your own, adjusted, and claim about double that rate.
Is it a wonder that thinking people have reservations about your methodology?
We write articles like the one on Sydney Observatory and show a good case that the data are unfit for the purpose to which you now are putting it in BEST. http://joannenova.com.au/2017/01/sydney-observatory-where-warming-is-created-by-site-moves-buildings-freeways/
Is it strange that people question your methods?
We look for the most suitable ‘pristine’ stations and select 40 in Australia, looking for a baseline that UHI can be set against. We find so much noise in these pristine records that they are useless to set a baseline, so we give up on that method to assess UHI. You invent another, but somehow the noise problem is resolved.
Again, I would hope that you realise you have a high probability that BEST is giving rather ‘wrong’ answers.
I applaud your initial effort to make BEST the best, I agree with a lot of your concept methodology and I admire your persistence in promoting wrong answers. But sooner or later you will find why you are wrong. At this stage I think, but do not know, that it is because unconstrained variables affect T more than you can correct. Whenever we look at a site in some detail we find more often than not that errors of bias in particular make the data unfit for purpose, no matter how much you alter it as well. This is because you can’t know what is an accurate figure to use to make an adjustment. You make some reasonable guesses, but you do not and cannot know how much to adjust, without having some form of mental preconception of what is correct, e.g. by reference to global data. (Which is also replete with bias errors).
Cheers
Geoff

Bore holes going back to the Renaissance, from an Australian perspective.

‘The full geothermal reconstruction also shows excellent agreement with the low-frequency component of dendroclimatic reconstructions from Tasmania and New Zealand. The warming of Australia over the past five centuries has been about two-thirds that experienced by southern Africa, and only about half that experienced by the continents of the Northern Hemisphere in the same time interval.’

2. I don’t have their regional data for Australia, but a look at http://berkeleyearth.lbl.gov/regions/australia
might convince you that the warming rate after 1880 should be somewhere between the post-1860 rate (0.61 ± 0.23 °C) and the post-1910 rate (0.94 ± 0.08 °C). Maybe you will agree on, say, 0.75 ± 0.10 °C/century.

First off, there isn’t enough land in the southern hemisphere for there to be plenty of data, even if the vast majority of the land wasn’t uncovered by sensors.
It really is sad the way the trolls insist that one sensor covering tens of thousands of square miles is adequate.

Cat666
Colleagues and mates did leading-edge mathematics on geostatistics, of which Kriging is a part. Months in France with Journel, visits to Australia by Agterberg and others. Among other exercises, we applied geostatistics to the ore resource calculations and pit designs for the uranium deposits we found at Ranger One, then by far the largest and richest globally (but not for long)!
Cut to chase, reconciliation of estimated tonnes and grade before mining matched what was actually mined over the next 20 years or so to within +/- 10% or better. Satisfactory indication that Kriging as we did it was validated.
Ore resource calculation has much in common with climate exercises like global temperature mean. There are 2 substantial differences. In mineral work, it will usually be possible to take further samples if one finds a space with data too sparse. Cannot do this for pastvwearher OBS. Second, there is only detriment to mineral work from subjective adjustment of raw, measured values like grade. You cannot sweet talk the ore deposit into being better than it is, ore more like another similar one elsewhere. In climate work, as soon as subjectivity arises, you might as well pack up and go home with your bags of adjustments. Think about it.
Geoff

” Satisfactory indication that Kriging as we did it was validated.
Ore resource calculation has much in common with climate exercises like global temperature mean. There are 2 substantial differences.”
Yes, it does. You’re trying to assess a continuous distribution from sampling. And you’re always “making up” the numbers in between. That is the basis of spatial sampling. The issues are well understood in geostats (Fontainebleau, Geoff? Serra?). The main difference with climate is that it changes, so yes, you can’t resample. But you have a lot more data points.

And the maths used in global temperature works in geostats. It’s harder to check the outcome in climate, but that doesn’t mean the maths is wrong.
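The claim that the maths used for global temperature works in geostats can be illustrated with a toy ordinary-kriging interpolator. This is only a sketch under assumed conditions (a 1-D transect, an exponential covariance with made-up sill and range values), not Berkeley Earth's actual algorithm:

```python
import numpy as np

def ordinary_krige(x_obs, z_obs, x_new, sill=1.0, rng=50.0):
    """Toy 1-D ordinary kriging with an exponential covariance model.

    C(h) = sill * exp(-h / rng); the weights are forced to sum to 1
    via a Lagrange multiplier, so the estimator is unbiased even when
    the mean of the field is unknown.
    """
    n = len(x_obs)
    h = np.abs(x_obs[:, None] - x_obs[None, :])
    C = sill * np.exp(-h / rng)
    # Augmented system enforcing sum(weights) == 1.
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = C
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    c0 = sill * np.exp(-np.abs(x_obs - x_new) / rng)
    b = np.append(c0, 1.0)
    w = np.linalg.solve(A, b)[:n]
    return float(w @ z_obs)

# Hypothetical stations along a 1-D transect (km) with anomalies (°C).
x = np.array([0.0, 40.0, 100.0, 180.0])
z = np.array([0.3, 0.5, 0.2, 0.6])

print(ordinary_krige(x, z, 70.0))   # interpolated value between stations
print(ordinary_krige(x, z, 40.0))   # at a station: reproduces the datum
```

With no nugget term, kriging is an exact interpolator: evaluating at a station location returns the observed value, which is one of the reconciliation properties mine geologists rely on.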

First off, there isn’t enough land in the southern hemisphere for there to be plenty of data…

In fact you are simply right, MarkW, at least when we compare the GHCN stations in SH and NH.

But…
You see here that though there is some rather sparse station population in the SH, the GISS land-only record relying on them nevertheless manages to show, for 1979-2015, a trend lower than that of UAH6.0.

Geoff, when you were Kriging your geostat data, and there was a slip fault (I hope that’s the right term) through the terrain, don’t you manually add that into your kriging? Because it’s a nonlinear fault, and linear infilling makes nonsense out of it?

Well, I trust sparse sampling and infill.
One example: throw away 99.83 % of the spatial information in UAH v6 TLT, and sample only in 18 uniformly distributed points worldwide. Can anyone really tell the graphs apart?

The global warming signal is so clear that it has no practical chance to hide between those 18 points (ie in 99.83 % of the area).
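The 18-point thought experiment above can be imitated with a synthetic field. This is a crude stand-in (a shared trend plus independent cell noise; real temperature fields are spatially correlated, which makes sparse sampling work even better), not the actual UAH v6 data:

```python
import numpy as np

rng = np.random.default_rng(0)

years = np.arange(38)                  # a 1979-2016 style span
nlat, nlon = 72, 144                   # a 2.5-degree style grid
true_trend = 0.018                     # °C per year (0.18 °C / decade)

# Field: shared warming trend + independent cell noise (std 0.5 °C).
field = (true_trend * years[:, None, None]
         + rng.normal(0.0, 0.5, (len(years), nlat, nlon)))

full_mean = field.mean(axis=(1, 2))

# 18 roughly uniform sample points (3 latitude rows x 6 longitudes).
lat_idx = np.array([12, 36, 60])
lon_idx = np.array([0, 24, 48, 72, 96, 120])
sparse_mean = field[:, lat_idx[:, None], lon_idx[None, :]].mean(axis=(1, 2))

trend_full = np.polyfit(years, full_mean, 1)[0] * 10    # °C / decade
trend_sparse = np.polyfit(years, sparse_mean, 1)[0] * 10
print(round(trend_full, 3), round(trend_sparse, 3))
```

With these toy numbers the 18-point trend lands within a few hundredths of a degree per decade of the full-field trend, because the common signal dominates once the independent noise is averaged down.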

What warming of the last couple of decades?
The only warming in the last 20 years was due to the recent El Nino and is pretty much gone by now.

Have you calculated the trend for the last 2 decades? It’s about 0.18 C/decade. That’s a long way from zero. And the El Nino years keep getting warmer and warmer, with 2016 quite a bit warmer than 1998.

Actually the adjustments in GISS for example are not small relative to the trend they show. And the adjustments never seem to be final. New temperature values appear even on recent temperatures on an almost continuous basis. Nick Stokes hand waves the fact that ground level temperatures are not the same as atmospheric temperatures as measured by satellite. But when one looks at the dramatic difference in trends between the two, how does one not dredge up a bit of skepticism?

“But when one looks at the dramatic difference in trends between the two, how does one not dredge up a bit of skepticism?”
Well, try some skepticism here. Below is a plot showing that dramatic difference in trends. But notice that the upper group includes UAH5.6, still being produced. So if surface measures are consistent, and UAH is on both sides of this “dramatic difference”, why would a skeptic prefer UAH6?

The hottest year on record has no credibility with all the adjustments that have been made.

Yeah, sure, it allowed the NCDC and GISS to have a big news conference and get headlines on left-leaning websites/newspapers, …

… But they had to achieve this by continually making up temperature adjustments and have lost all credibility with people on the skeptical side.

Give up your integrity and credibility for a few headlines???

Nobody should be using their numbers anymore and just use the satellite temps.

When Congress and Trump go about “adjusting” climate research budgets, you know what would be a good idea? Let’s take $1.0 billion of that and award it to John Christy and Roy Spencer so they can set up a brand-new institute that has credibility. This is probably the best idea I have seen about what to do with the climate change budgets.

That’s only true if you measure to the top of the recent El Nino.
If you measure from the top of the previous El Nino to the top of the recent one, it’s closer to 0.02C/18 years.

Completely, horribly incorrect. You don’t measure trendlines by drawing lines between two points; you use linear regression, which uses *all* of the data in the series.

For instance, look at the temperatures from 1998 to 2014. Calculate the trendline. It’s positive. Why? Because even though the highs were about the same (1998, 2004, 2010), the lows kept getting higher. The overall temperature kept rising, and this is shown mathematically.
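The point about rising lows can be seen with a schematic, made-up series shaped as described (equal highs in 1998, 2004 and 2010, lows climbing steadily): the OLS slope is positive even though the last value is below the first, which is exactly why endpoint-to-endpoint comparisons mislead.

```python
import numpy as np

years = np.arange(1998, 2015)
# Schematic anomalies (°C): lows climbing ~0.025/yr, with equal
# 0.50 °C spikes in 1998, 2004 and 2010 -- made-up numbers, not data.
temps = 0.05 + 0.025 * (years - 1999.0)
for spike in (1998, 2004, 2010):
    temps[years == spike] = 0.50

slope = np.polyfit(years, temps, 1)[0]   # regression uses *all* points
endpoint_diff = temps[-1] - temps[0]     # naive two-point "trend"
print(slope > 0, endpoint_diff < 0)      # True True
```

The regression slope is pulled up by the rising lows; the two-point difference sees only the 1998 spike and the unremarkable 2014 value.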

PatFrank:

Let’s see … that’d be 0.18±0.5 C.

Over the last 30 years, the trend is about 0.18C +/- 0.05C per decade. Quite a bit smaller uncertainty.

Over short time periods, though, yeah, the uncertainty is high. Which is why climate is defined over 30 years; 10 or 15 years is just too short to generally draw conclusions. Not statistically significant.
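The scaling of trend uncertainty with record length can be sketched under an idealized white-noise assumption (real monthly anomalies are autocorrelated, which widens these error bars further; the 0.15 °C residual spread is purely illustrative):

```python
import numpy as np

def slope_se(n_months, resid_std=0.15):
    """Standard error of an OLS slope for n equally spaced monthly
    points, assuming independent residuals of the given std (°C)."""
    t = np.arange(n_months) / 12.0            # time in years
    return resid_std / np.sqrt(np.sum((t - t.mean()) ** 2))

se_10yr = slope_se(120) * 10   # °C per decade, 10-year window
se_30yr = slope_se(360) * 10   # °C per decade, 30-year window
print(round(se_10yr, 3), round(se_30yr, 3))
```

Tripling the window shrinks the slope's standard error by roughly a factor of five (it scales like n^(-3/2)), which is the statistical reason short windows can't settle the question.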

When you pretentiously call your dataset B.E.S.T before it’s even released, it’s pretty clear it’s a sales pitch and not science.

BEST was supposed to be a land-only record, then they got into the BS “global average” game. Last time I looked at their site I could not even find the land-only record any more. Total propaganda and hype now, unfortunately.

Betty Muller says:

“We have compelling scientific evidence that global warming is real and human caused, but much of what is reported as ‘climate change’ is exaggerated.

Sure, the warming of the globe is real, but you don’t have any “compelling scientific evidence” it is human caused; you have a temperature dataset. The early 20th c. rise was no different from the late 20th c. rise. That is “compelling evidence” that the human effect is minimal.

That’s part of the eco-religion – that humans are somehow not natural, and – combined with concepts of the ‘fragile ecosystem’, ‘stasis’, and ‘equilibrium’ – any effect at all, no matter how trivial, is by definition judged to be destructive.

It’s ironic – trying to scare us with a pseudo-phenomenon that will supposedly ‘kill us all’, yet the solution is the population control, segregation, or even eradication of the human animal – a ‘pestilence’ I believe the esteemed ‘SIR’ Attenborough, called us.

Why is the 98 spike not showing in their graph ? If averaging has smoothed it out should they not wait the appropriate period and smooth out 2015-16? I made this point about the GISS record in another post without backup evidence. I was challenged to provide evidence. Well here it is, in this graph.

I am pretty sure that this is caused by the use of Karl et al. 2015, which reconfigured the way aspects of the raw data were being interpreted. That paper significantly reduced the 1998 spike, amongst many other changes, such as the removal of the pause from existence.

For anyone who doesn’t know what the “1998 spike” is all about, take a look at this UAH satellite chart which shows how the 1998 heat spike compares to previous and subsequent years. As you can see, 1998 is the hottest year in the satellite record (1979 to present) with the exception of the year 2016, where the highest temperature was measured to be 0.1C higher than the highest point in 1998. The temperatures have subsequently fallen from that high.

Any temperature chart you see that doesn’t have the same profile as the UAH satellite chart, with 1998 being hotter than any subsequent year except 2016, is a bogus, bastardized, politicized chart that was created to promote human-caused global warming/climate change.

Exactly! Scientific evidence means they have identified a physical condition that can be measured. This doesn’t mean some concocted data series that supposedly indicates a global temperature. Scientific evidence also means the mathematical basis for how that physical condition works. Neither of these appear to be available.

Agreed! Actually discussing this “compelling scientific evidence” is what we should be doing rather than volleying pithy retorts back and forth. Nick Stokes has a very fine mind and appears to be as informed as anyone as to the actual existence of ANYTHING that supports the Climate Alarm meme. I feel he very much improved the discussion at Climate Audit challenging statistical methods/conclusions there. I would equally much welcome a guest posting from him on any of the science that he considers a compelling reason for alarm and have him defend it. Starting with the most compelling. But for a lay person who has really tried to get to the “truth” regarding human CO2-caused climate change, every time I learn more I get more skeptical and more distrustful of the “science”. And all you need is your own common sense to question the integrity of the “leading lights” promoting CAGW alarm, and recognize the flashing neon signs of conflict of interest. But, like that old hamburger commercial: Where’s the Science!

Agreed Scott, but the “flashing neon signs of conflict of interest” of the (elite and powerful) forces arguing against also need to be recognized. Big Carbon and its business interests, for example.

Most reasonable people, including Anthony Watts it seems, would agree as a starting point that:
-CO2 is rising quickly (on geological time frames)
-that is probably contributing to an overall planetary warming,
– there is at least a correlation between CO2 and temperature behavior in the atmosphere, and
-that humans are partly responsible for the increasing CO2 (and other GHG like CH4) levels.

But they can see that there is uncertainty about how much warming, how fast and will it matter.

If that is a reasonable position, it would be helpful if Anthony jumped in right here to confirm that is a correct reading of his position of uncertainty, or otherwise.

Nick Stokes: I don’t think Trump will be able to silence BEST. They are not reliant on Federal funding.

But, hello: “Berkeley Earth has been supported in part by the Director, Office of Science, of the U.S. Department of Energy. The Lawrence Berkeley National Laboratory (Berkeley Lab) has administered the financial support provided by the Department of Energy (Contract No. DE-AC02-05CH11231), and Berkeley Lab is a participating institution. Many of the participants work for Berkeley Lab.”

“how can we have had four consecutive hottest-year-ever records when the satellite and balloon data show no global warming for the past ~20 years?”
Satellites measure (not very reliably) a different place. But both indices reported a record for 2016. As for balloons, RATPAC A posted a huge record anomaly of 1.31°C, clearing the previous record in 2015 by 0.25°C.

“What has happened is that NASA/NOAA significantly reduced temperatures in the 1930s to erase the hottest decade.”
I’m surprised to find Don Easterbrook staging this familiar switcheroo between global and US data. Globally, the 1930’s were never close to being the hottest decade.

“You know Ratpac A is a Tom Peterson “fabrication™™””
I don’t believe it has anything to do with Peterson. But the post said: “when the satellite and balloon data show no global warming for the past ~20 years”
If not RATPAC, what is he talking about?

“Changes from 2011 ballon data (Angell) to 2016 Ratpac A”
Yes, that’s how it goes:
DE: “satellite and balloon data show no global warming for the past ~20 years?”
NS: “RATPAC A posted a huge record anomaly of 1.31°C
AG: “Ratpac A is adjusted to follow the AGW meme, just like the BEST fabrication is.”

So what are we left with?
UAH6.0 had adjustments, but those are very good adjustments. UAH5.6 had adjustments but they were very bad adjustments. RSS had very good adjustments, but now has very bad adjustments. And so on.

With daily and even hourly temperature swings of 20, 30, or more degrees F, it will be difficult to sell the prophecy of Catastrophic Anthropogenic Global Warming to people who have lived all (or most) of their life on Earth.

However, if Al Gore and his Nobel Prize film “An Inconvenient Truth” were to be believed, every coastal inhabitant of the United States, whether on the west or east coast, would by now feel the effects of climate change. Yet just the opposite is the case: the situation has calmed so far that since Katrina no heavy hurricanes have hit the shores at all. So, the opposite of what was predicted. When a comparatively lukewarm breeze named Sandy hit New York, the hurricane-starved Liberals pounced on the catastrophe. All of which says something about the state of mind in times of a hurricane drought.

Yeah. Unfortunately, for the Profits… I mean, prophets, the system is incompletely, and, in fact, insufficiently, characterized, and unwieldy (i.e. chaotic). Thus the establishment of a scientific logical domain, scientific philosophy, and scientific method, that are an implicit acknowledgement that accuracy is inversely proportional to the product of time and space offsets from an observer’s frame of reference.

Well, people want to believe. People will have to choose; but, the conflation of logical domains promises progressive corruption.

nn: Very fortunately, many human beings have the attention span of a gnat. If it wasn’t colder or hotter in their lifetime, then it never happened. How they remember this is greatly influenced by the news and climate prophets. Actual evidence is rarely sought; few ever bother to check.

This is always stated like an indictment and calamity. In the absence of humans, what “would” the global temperature be? Is Muller telling us the planet would remain in a permanent Mini-Ice Age? That trends in climate before say 1950 would have ended?

What does the BEST climate model look like, and what temperatures is BEST predicting a century from now? Is BEST predicting a straight shot of non-stop warming?

Elizabeth Muller: “…. for now, the results that are most solidly established are that the temperature is increasing and that the increase is caused by human greenhouse emissions.”

Santer et al’s most recent paper (“Comparing Tropospheric Warming in Climate Models and Satellite Data”) reviews the “discrepancy” between models and observations. In particular, the prediction that an Enhanced Greenhouse Effect requires amplified warming aloft in the tropical troposphere (a scaling factor of about 1.6). This should be the source of increased “back radiation”. Santer’s paper discusses and evaluates the discrepancy, but it also confirms that the discrepancy exists and describes this as a matter of “scientific concern”.

If Elizabeth Muller is correct, the above chart should be accompanied by a chart for the tropical troposphere which shows circa 1.6 degC anomaly in 2016 (for same reference period). Does anybody know if this exists?

Something is very wrong with that extra-high El Nino of 2016 you highlight in this paper. For that matter, something is wrong with your entire temperature curve starting with 1980.

First, that El Nino of 2016. Its peak stands half a degree Celsius above the super El Nino of 1998. That is an impossibility. 1998 was a super El Nino, not part of the preceding group of five that belong to the ENSO. Satellites prove it was twice as high as these five preceding El Ninos were. You arbitrarily demoted it to the same height as the five that preceded it. But to reach a half a degree Celsius difference in height in only twenty years is nonsense.

Let me remind you that changing actual temperature curves without publishing an explanation is falsification of scientific data, a scientific crime. And a large-scale crime exists here, because a horizontal stretch of flat global temperature between 1979 and 1997 was changed into warming. A new warm period called “late twentieth century warming” was created thereby, and the hiatus of the eighties and nineties that was there was erased. Fortunately, I was able to use the original data in my book “What Warming”, and the hiatus that was later over-written by warming can be seen as figure 15 of my book. Fortunately, they still do not control satellites, and the original hiatus data can still be obtained from satellite archives.

But this is not the only wrong thing that was done. To change the official temperature curve took three participants, at least. I have spotted NCDC (NOAA), GISS, and the Met Office as three of them. Having changed their raw data to a common curve, they next decided to make their corresponding data curves identical. It was done by using the same computer program on all three data sets. And it did a fine job, except that it left its presence to be known to all who use these data, because the computer left sharp spikes all over the three publicly accessible falsified data curves.

Most spikes are placed near the ends of temperature segments, but one sits right smack on top of the 1998 super El Nino and gives it a boost of 0.1 degrees Celsius. In your graph you have demoted the super El Nino of 1998 to the same height as the five previous El Ninos that belong to the ENSO group. I have not checked directly, but it looks like the warming introduced in the eighties and nineties was continued after the 1998 super El Nino left. If so, it would explain how the 2016 El Nino reaches its unbelievable height.

Now let’s look at the bottom part of the super El Nino of 1998. In satellite view you can see that the two temperature dips on the two sides of it are of equal depth. But this is not true of your current temperature chart. In your chart the right-hand dip next to 1998 is shallower than the left-hand dip by 0.1 degree Celsius. That makes it look like the El Nino is climbing a hill. Furthermore, the beginning of the twenty-first-century warming is shown as having equal height with the 1998 super El Nino. Satellite data that are accurate show that the twenty-first-century warming only reaches half the height of the 1998 super El Nino if correctly represented. On the satellite scale that means a temperature jump of 0.45 degrees Celsius in only three years, an unbelievable rise at the rate of 15 degrees per century.

I recommend that you withdraw this temperature chart from public use and get to work on all the unsatisfactory aspects of this data set and any similar ones you may be stuck with. If you for any reason should hand out a wrong temperature chart, other scientists using it can come to wrong conclusions.

I still subscribe to the old analogy that, as a x48 year-old :-), I am probably at the tallest in my life I have ever been – and ever will be. Therefore, the last ten years can be described as being the tallest ten years of my life. I now worry that I need to buy a new bed and raise the doorways in my home as it is beyond doubt that I shall be over seven feet tall pretty soon (according to the logic of climate statisticians).
That said, I note that my mother-in-law achieved her tallest in her 70s and then fell back by a few inches due to spinal problems, so nature seems to have a remedy, and she didn’t become a seven-foot ogre!

The likes of Nick Stokes and Mosher are wed to the theory of AGW because they can work with the numbers. They love that bitch so much that they don’t want to have her dissed, so no matter what, she will always be right, and always respected. Anyway, satellites, shatellites. Huh!

2016 was the warmest year since humans began keeping records, by a wide margin.

Satellite measurements–the only geographically complete data available for the globe–show 2016 to be only trivially warmer than 1998.

The global warming “pause”, which Berkeley Earth had always stressed was not statistically significant, now appears clearly to have been a temporary fluctuation.

Fluctuations, by definition, are short-term phenomena. It’s not the 18-year pause, but the temperature spike of the last El Nino that is a proper fluctuation.

“The difference between 2015 and 2016 global temperatures is much larger in the Berkeley record than in records from NOAA or the UK’s Hadley Centre, since they do not include the Arctic Ocean and we do. The arctic has seen record warmth in the past few months, and excluding it leads to a notable underestimate of recent warming globally.”

Including data-deficient regions such as the Arctic Ocean in the global average results in the substitution of theoretical projections for hard data–scarcely the stuff of “compelling scientific evidence,” claimed by Elizabeth Muller. It’s little more than an attempt to put a positive face on a convenient fiction–a common posture in post-modern academia.

What is the value used as the GAT? I don’t care about the anomalies unless there’s a numeric value for what the anomalies are measured from. Otherwise, anything can be shown to be true and correct. What is the GAT VALUE in degrees for 1980-2009 period shown, not the anomalies? Where can I find the values for GAT for other periods? Is there any rule about which period to use? I see 1880-2015, 1961-1990, etc, etc and all seem to be used to create the prettiest, scariest graph? Who decides what period to use?

(I can’t do the math because I need raw values, the weighting system and maybe a bigger computer, at least according to most warmists. It’s too complicated for mere mortals to actually calculate.)

Sheri, “I can’t do the math because I need raw values, the weighting system and maybe a bigger computer, at least according to most warmists.” This post and its linked predecessors tell you where to get the code, how I weight it, and the R code I use. I have a very ordinary PC.

The essential thing to realise about anomalies is that you take anomalies before averaging. Each data value has an expected value (usually that base average) subtracted before averaging. That gain in homogeneity is what it is all about. The period itself doesn’t matter much. Usually they were the most recent three decades when the index began.

“TTT is not the surface.”
Nor is TLT – glad you noticed. The thing is, satellites can’t measure at the surface – there is a blinding glare of microwave from the surface itself. So they do various higher levels and subtract, which is a dicey business numerically. I think there is a general realisation that TLT went too far. You won’t see John Christy citing TLT now – it’s always TMT. And I don’t think RSS will release a new TLT; TTT as in the release is their preferred version. It isn’t surface, but neither was TLT; it’s just more reliable.

And remember, too, Sheri, when you take an anomaly, the uncertainty in the result goes up by the sqrt[sum of (systematic measurement errors in the base and datum)^2]; a nicety that seems to systematically escape the notice of these many practitioners.
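A worked instance of the quadrature rule above, with purely illustrative error magnitudes; note it applies as written when the two error terms are independent (a common systematic error shared by base and datum would partly cancel in the subtraction instead):

```python
import math

# Quadrature sum of independent errors in the base and the datum
# (0.2 and 0.3 °C are illustrative magnitudes, not measured values).
sigma_base, sigma_datum = 0.2, 0.3
sigma_anomaly = math.hypot(sigma_base, sigma_datum)
print(round(sigma_anomaly, 3))  # sqrt(0.2**2 + 0.3**2) ≈ 0.361
```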

“And remember, too, Sheri, when you take an anomaly, the uncertainty in the result goes up by the sqrt[sum of (systematic measurement errors in the base and datum)^2]; a nicety that seems to systematically escape the notice of these many practitioners.”

To improve this uncertainty, my anomaly is based on that station’s prior-day value. And as you move forward, tomorrow cannot be both plus and minus relative to subsequent days, so the error term for longer strings is ~error/2.
Now I have no specific temp reference point, but that’s less important than change.
Sheri you can find my stuff here http://wp.me/p5VgHU-13
The actual area reports are here http://sourceforge.net/projects/gsod-rpts/

Okay, you have the math covered and I’ll try. I still want to know the exact value of the 1980-2009 GAT used on the graph. NOT the anomalies. The real, calculated value. You’re using the graph, you should know the value.

“The essential thing to realise about anomalies is that you take anomalies before averaging. Each data value has an expected value (usually that base average) subtracted before averaging. That gain in homogeneity is what it is all about. The period itself doesn’t matter much. Usually they were the most recent three decades when the index began.”

This does not make sense. You can’t subtract the difference from the average before averaging. “An expected value”—expected because?

The period certainly does matter. Using the entire period or colder periods gives a lower value for the average—which still is hidden, and no one seems to want to give an actual number. Otherwise, why would it matter if one starts with 1998 and goes forward? Yet I’ve seen hysterics (not from you) over that practice. It must matter, or the many, many warmists who are screaming “cherry picking” are lying.

Sheri,
The anomaly arithmetic goes like this. You have a whole lot of stations with histories, and you want to get a global average for Dec 2016.
Step 1. For each station, calculate its average 1981-2010, or whatever. These are called normals.
Step 2. For each station, subtract its normal from the Dec temperature. That is the anomaly.
Step 3. Average the anomalies as if you were calculating a numerical spatial integral – ie area-weighted. That is when you are combining things.

So you never compute a global average temp, nor should you. The reason is that the anomalies are integrable, and the normals often not. Suppose you live in a hilly state. If it has been a warm month, it has probably been warmer than average everywhere. But the normals vary up hill and down dale. This matters particularly if you have different data in each month (some goes missing). For abs temp, that really matters, but not for anomaly, because of that homogeneity. That is why proper index makers only average anomalies.
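Steps 1-3 can be sketched with hypothetical station numbers. Equal weights stand in for the area weighting of Step 3, and the last lines show the hilly-state point: dropping the cold hilltop station shifts the absolute average by over 3 °C, but leaves the anomaly average untouched because every station had the same departure from its own normal.

```python
import numpy as np

# Hypothetical December temps (°C) for 3 stations over 5 years:
# a valley, a mid-slope site, and a cold hilltop.
temps = np.array([
    [10.0, 10.2, 10.1, 10.4, 10.6],   # valley
    [ 6.0,  6.2,  6.1,  6.4,  6.6],   # mid-slope
    [-2.0, -1.8, -1.9, -1.6, -1.4],   # hilltop
])

normals = temps.mean(axis=1, keepdims=True)   # Step 1: per-station normals
anoms = temps - normals                       # Step 2: anomalies

# Step 3: average across stations (equal weights here for simplicity).
global_anom = anoms.mean(axis=0)

# Final year with the hilltop station missing:
abs_full = temps[:, -1].mean()
abs_missing = temps[:2, -1].mean()       # jumps by ~3.3 °C
anom_full = anoms[:, -1].mean()
anom_missing = anoms[:2, -1].mean()      # essentially unchanged
print(abs_missing - abs_full, anom_missing - anom_full)
```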

Sheri,
I’ve written a reply on anomaly arithmetic which disappeared, but will probably show up in a few hours. Meanwhile: “Otherwise, why would it matter if one starts with 1998 and goes forward?”
I think that is mixing up the period of graph that you should show and the anomaly base. In the end, a different anomaly base only adds a constant. It’s a bit like whether you use F or C; the graph still looks the same. Some say that a base might be scarier, but is 100F really scarier than 40C?
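That a different anomaly base only adds a constant is easy to check with a synthetic series (made-up numbers, not a real index): the two anomaly series differ by a fixed offset, and their trends are identical.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2017)
temps = 14.0 + 0.01 * (years - 1950) + rng.normal(0, 0.1, years.size)

# Anomalies against two different base periods (illustrative choices).
base_a = temps[(years >= 1961) & (years <= 1990)].mean()
base_b = temps[(years >= 1981) & (years <= 2010)].mean()
anom_a = temps - base_a
anom_b = temps - base_b

# The series differ by a constant offset; the fitted trend is untouched.
offset_spread = (anom_a - anom_b).std()
slope_a = np.polyfit(years, anom_a, 1)[0]
slope_b = np.polyfit(years, anom_b, 1)[0]
print(offset_spread, slope_a - slope_b)   # both ~0 up to rounding
```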

Any existing systematic error at the start of record collecting is either fixed or it isn’t. If it’s fixed, then since I started at zero it’s gone, and it’s not changing for tomorrow’s measurements. If it’s not fixed, how is taking an anomaly from an average that carries the same if not more systematic error any better, when it is then compared against an average with far larger uncertainty?

If I take the day-to-day change for an area and average them together, is that not the best possible data showing how that collection of surface station records changed? If I sum a year of those changes, they should sum to zero, right? What about taking the slope of that day-to-day change? Is there not an undeniable seasonal change that we can then pull from a half year of data?

And we can see what has actually been measured. You know, none of the published temperature series are free of infilling with linearized data over a nonlinear field (temperature is not spatially linear). It’s junk that they can make do whatever they think is supposed to be happening. Oh, and they are righteous in their series being the best humans have to offer.
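One caveat on summing day-to-day changes: the sum telescopes to (last value minus first value), so it is zero only when the year ends at the same temperature it started with, not identically. A quick check on synthetic daily data (a seasonal cycle plus noise, not a real station):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic daily temps: seasonal cycle (°C) plus weather noise.
daily = 10 + 8 * np.sin(2 * np.pi * np.arange(365) / 365) \
        + rng.normal(0, 2, 365)

day_to_day = np.diff(daily)          # the station's daily changes
total = day_to_day.sum()             # telescopes to last - first
print(np.isclose(total, daily[-1] - daily[0]))
```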

micro6500, your argument has a chance if you’re using only aspirated sensors.

Otherwise, the systematic measurement error is environmentally determined by sun and wind, and varies hour-by-hour, day-by-day, and annually. It’s not constant, its magnitude is unknown, and taking anomalies doesn’t subtract it away.

“micro6500, your argument has a chance if you’re using only aspirated sensors.
Otherwise, the systematic measurement error is environmentally determined by sun and wind, and varies hour-by-hour, day-by-day, and annually. It’s not constant, its magnitude is unknown, and taking anomalies doesn’t subtract it away.”

Unfortunately the sensors are what they are.
I only use daily summaries (NCDC GSoD), far from perfect, but I think there is info to be found.

I like seeing more than a dozen stations for an area; as long as they respond the same to the same input, within reason, systematics are easier to ignore. And for many calculations I use a 30-day running mean to get the slope as the length of day changes, and measure the response of the atmosphere to a well-defined amount of solar input.

Thanks to Nick and everyone else who added to the anomaly/averaging answer. More research on my part will be needed, as well as testing of the mathematics and how changes affect it. I realize that to Nick and others, this is “obvious”, but nothing is obvious to me until I can work through the math and the reasoning. Again, thanks for the input. I appreciate it and will study the information you provided.

1sky1: “Try selling the notion that 2016 was significantly higher than 1998 to the keepers of satellite indices”
RSS needs no convincing. Their news release is headed: “Analysis of mid to upper tropospheric temperature by Remote Sensing Systems shows record global warmth in 2016 by a large margin.”

Analysis of mid to upper tropospheric temperature by Remote Sensing Systems shows record global warmth in 2016 by a large margin.

Now you’re shuffling the pea to indices at other levels and no longer talking about TLT, which is the level most strongly coherent with the surface. As Werner Brozek reported here a week ago for TLT: “In 2016, RSS beat 1998 by 0.573 – 0.550 = 0.023 or by 0.02 to the nearest 1/100 of a degree.” A nearly identical TLT comparison was reported by Roy Spencer of UAH.

1sky1, “Now you’re shuffling the pea to indices at other levels and no longer talking about TLT, which is the level most strongly coherent with the surface.”
The other levels are all just as global. And the usual complaint here is that TLT is not coherent with the surface. In fact, V5.6 TLT was weighted to peak at 2km above the surface; V6 is 4 km above. And V5.6 showed a big rise.

But RSS has more to say on this. They have for some time had an advisory on V3.3 TLT, saying that it had known problems and to use with caution. In the latest announcement, they say: “RSS TLT version 3.3 contains a known cooling bias. We are working to eliminate the bias in the new version of TLT. Even with these known cooling biases, 2016 was a record warm year in TLT v3.3. In fact, 2016 was a record warm year in all RSS tropospheric temperature products (TLT v3.3, TMT v3.3, TTT v3.3, TMT v4.0 and TTT v4.0).”

The other levels are all just as global. And the usual complaint here is that TLT is not coherent with the surface.

On a yearly average basis, I get R^2 > 0.8 for UAH TLT vs. my own estimate of surface temperature using only thoroughly vetted station records. To what unvalidated GAT indices RSS compares its results is of little scientific interest.

2. We all know that the GHCN station population is sparse at such latitudes, especially at 82.5N-80N: there are no more than 3 stations in the stripe.

But nevertheless you can compute a monthly time series out of that data, showing a trend of 0.692 °C / decade.

3. Additionally, you may compute a time series out of those 3 UAH grid cells encompassing the stations; you obtain 0.446 °C / decade.

4. If now you compute the trend over 1979-2016 for each of the 9,504 UAH grid cells, you see that of the 100 cells with the highest trends, 96 are in the Arctic, within 82.5N-80N… 2 are in the Antarctic, and 2 in Kamchatka.

So yes, 1sky1: the Arctic is warming a very little bit. I know: it did in the Golden Thirties as well. But is that the point here?

And there – yes, yes – you see exactly what Elizabeth Muller is telling us about. But… not only at the surface; in the troposphere as well.

There’s a woeful lack of appreciation here of thermodynamic basics in attaching great significance to latitude-dependent temperature variations. Since the fourth power of surface blackbody temperature is proportional to internal kinetic energy density, it should come as no surprise that fairly uniform poleward transport of heat produces “polar amplification.” Of course high latitudes are more variable than lower zones! But temperature is merely an intensive variable, and the much more densely sampled sub-polar latitudes provide far more reliable estimates of the variability of globally thermalized solar energy than the sparsely sampled polar regions that thermalize very little while acting as exhaust pipes for terrestrial heat.
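The fourth-power dependence behind “polar amplification” can be put in back-of-envelope form. Linearizing the Stefan-Boltzmann law, the blackbody temperature change needed to radiate away an extra flux dF is roughly dT = dF / (4*sigma*T^3), which is larger at cold polar temperatures than at warm tropical ones. A minimal illustration (idealized blackbody numbers, not a climate model):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def dT_per_flux(T, dF=1.0):
    """Blackbody temperature response (K) to a dF W/m^2 flux perturbation."""
    return dF / (4 * SIGMA * T ** 3)

for T in (300.0, 250.0):   # roughly tropical vs. polar surface temperatures
    print(f"T = {T:.0f} K: dT = {dT_per_flux(T):.2f} K per W/m^2")
```

The same 1 W/m² perturbation moves a 250 K surface by about (300/250)³ ≈ 1.7 times as much as a 300 K surface, which is the sense in which uniform poleward heat transport shows up as amplified polar temperature swings.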

Cliffhanger – Really? Why? Governments use taxes to pay their cronies. What good will a carbon tax do? Look where they have them and look at who benefits and who gets exemptions. It’s POLITICAL. Remember that.

In Middle Ages England they had a tax on window size. Guess what happened: people started building houses with mini windows and lived in semi-darkness. Tax income didn’t go up too much.
Blind carbon tax had its blind predecessors.

This question brings to mind a story I sometimes share from when I was a personal counselor in the 1970s — counseling people about improving their lives by improving their behaviors. I had a client from Las Vegas — annual income above $250 K. Profession? Professional poker player. His life was a mess, just couldn’t seem to move forward. I worked with him for a couple of weeks and just couldn’t find out what he was up to that was wrecking his life — under the philosophy that bad personal behaviors make one’s life miserable. I finally gave up on sorting him out. But I had one last question — just out of curiosity. I said “Poker is a game of chance and skill — I know — but over time, even the best players go through periods of bust and boom. How is it that you win big so consistently?” He answered, surprised by the question, “Oh, that’s easy. I cheat.”

So many delusional ideologues here. How many times does it need to be said? 2016 was the warmest year since humans began keeping records – 3 years in a row. The global warming “pause” appears to have been a temporary fluctuation. 2016 witnessed extraordinary warming in the Arctic that is having a significant impact on global temperature measurements. The next trench to overcome is “it’s a good thing”.

Speaking of delusional ideologues, here comes tony again.
2016 was only a few hundredths of a degree warmer than 1998, the previous 2 years came nowhere close to 1998.
2017 is already shaping up to be way cooler than 2016. Which according to you is impossible.
CO2 has gone up 30% since 1998, and the temperature only went up by 0.02C, this despite the AMO and PDO both being in their warm phases.

Humans started keeping records during the depths of the Little Ice Age. Of course it has warmed since then. Thank God it has.

The Arctic is warming because the El Nino dumped its heat there.
Such warming is normal, has happened before and will abate shortly, just like it did after every other El Nino.

Warming is better than cooling. Only a total moron would deny that. So much for your “trench”. That fact is completely separate from whether or not we are currently warming, which itself is dependent on what time period you want to cherry-pick, what regions of the earth you want to cherry-pick, and which charts you want to cherry-pick. With you Warmists, it’s always cherry-picking season.

It’s the warmest year since humans began keeping records—less than 150 years ago. So 150 years out of 4.5 billion is very, very significant it seems.

The answer to your question? It can be said over and over and over, but the reality is 2016 may have been the warmest and 2014 and 2015 next warmest, but that means nothing except they were the warmest. Until a trend line becomes reality (as in NEVER), it matters not. Three record years in a row are just that—three record years in a row. Means nothing more than 3 record cold years in a row or 15 cold years in a row. Nature and reality are not bound by trend lines.

Tony, no scientist is saying that there hasn’t been warming. It’s the cause of that warming that is in question. Further, the warming is so far below projected models as to render the AGW theory null and void. Adjusting the data hasn’t helped much in that regard either. You’re screaming about 0.01 C of warming? Record crops and less severe weather: that’s the reality. Let’s revisit the predictions made in the year 2000. Who put the C in catastrophic? You realize not one prediction/projection by the IPCC and associates has happened (outside of a slight warming trend). The only way you can get every prediction wrong is using the wrong math.

1) I remember the 1970s. 2) CO2 follows temperature. If it weren’t for that, actual temperature might have fallen, in my view. And that could still be the case. The perception of warming may be long-term weather related and not climate.

2017 is going to be several tenths of a degree cooler than 2016. This will be an order of magnitude more cooling than the amount of warming from 1998 to 2016.
I’m wondering how our various trolls are going to spin that?

I’ll have a go.
A few tenths less than last year but several tenths more than nearly every other year since 1980. As in, it will still be true to say the 9 hottest years in the satellite record have occurred in the last 10.

MarkW January 18, 2017 at 2:02 pm wrote: “2017 is going to be several tenths of a degree cooler than 2016. This will be an order of magnitude more cooling than the amount of warming from 1998 to 2016.
I’m wondering how our various trolls are going to spin that?”

tony mcleod January 18, 2017 at 9:18 pm replied: “I’ll have a go.
A few tenths less than last year but several tenths more than nearly every other year since 1980. As in, it will still be true to say the 9 hottest years in the satellite record have occurred in the last 10.”

Wrong on both points if you go by the UAH satellite chart, shown below:

Why so much emphasis on El Nino years? These temperature spikes are caused by short-term climate events not necessarily associated with the cause of the longer-term rise in temperature that has occurred periodically in the post-1880 measurements. The current pause in warming appears to have begun in 2002. If a horizontal line is drawn forward to the present from the reading for 2002, and if the unrelated El Nino temperature reading for 2016 is ignored, then the pause is evident.
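How much a fitted trend depends on including or excluding a single spike year is easy to illustrate. The numbers below are made up for illustration (a flat series with one El Nino-like spike at the end), not actual anomaly data:

```python
import numpy as np

def ols_trend(years, values):
    """OLS slope, converted to units per decade."""
    slope = np.polyfit(years, values, 1)[0]
    return slope * 10

# Synthetic anomaly series: flat 2002-2015, with a spike in 2016.
years = np.arange(2002, 2017)
values = np.full(years.size, 0.40)
values[-1] = 0.75                    # El Nino-like spike year

print(round(ols_trend(years, values), 3), "C/decade with 2016 included")
print(round(ols_trend(years[:-1], values[:-1]), 3), "C/decade with 2016 excluded")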

2016 was the warmest year since humans began keeping records, by a wide margin.

“Since humans began keeping records” sure sounds like temperature records (thermometers and stuff), NOT recorded history.
Yet we then have this:

Richard Muller, Scientific Director of Berkeley Earth, said: “We project that continued global warming will lead us to an average temperature not yet experienced by civilization.”

That phrase sure sounds more like recorded history (Roman, Minoan, Medieval etc.) rather than just the tamperedature records…..unless he means “civilization” didn’t begin until somebody noticed what happens when you put mercury in a glass tube.

It is difficult to understand why the proponents of man-made global warming would get so excited about the recorded temperature spike during an El Nino year. The cause for the El Nino temperature spike is known and it is not man-made.

“2016 witnessed extraordinary warming in the Arctic that is having a significant impact on global temperature measurements”.

Tony, Tony, Tony. What 2016 witnessed was an extraordinary cooling effect in the Arctic, as warmer air went there to have its heat radiated into space. Check DMI for the 80-degree-latitude average temperatures for 2016. Temperatures were above normal in the past winter (some 5 degrees C), but still were around -20C. As spring and summer came on, they went back to the long-term average, and remained that way until the fall, when they dropped below zero C and are now back around -20C again. That is, the heat anomaly radiated into the blackness of space, but managed to inflate the overall global anomaly in the process (i.e., the global average temperature for winter was elevated by the “heat wave” last winter in the Arctic). However, the effect was to cool the planet (the heat had to come from somewhere, and was not replenished in the Arctic darkness).

Net, net, the planet is venting the El Nino spike via one of the only viable mechanisms it has, moving it to the poles to go right up that cold “chimney”. The big Arctic lows (like the extraordinary one experienced this last week) are sucking warm air up to the pole and consigning the heat to space (and, in the process, building record snow/ice pack on Greenland). That huge storm (below 960 mb) is also crushing and compacting the ice – it’s lowering the extent, but building the volume – look for rapid ice growth, and an average-to-better ice volume in the next few months.

First, I think you mean Anthony, Anthony, Anthony. Those words were cut and pasted from the article.

Second, “the planet is venting the El Nino spike via one of the only viable mechanisms it has, moving it to the poles to go right up that cold ‘chimney’” is just an incorrect guess, because,

Third, as far as radiating heat to space goes, the tropics win that race by a country mile.

Fourth, Arctic sea-ice volume (as well as area and extent) are all falling off the charts.

So if you were wanting “average to better ice volume” you’re a bit late, mate. What’s left of the ice area is a thin (mostly <1m) sheet of extremely vulnerable 1st-year ice which will be very lucky to survive next summer. In fact it won’t.

If you are looking for signs of "extraordinary cooling", you might want to look elsewhere.

The arctic has been ice free many times in the past and will become ice choked again in the future. Ice free arctic means absolutely nothing over geological time periods. There is nothing happening with the climate that has not happened before. Remember there were Hippos and Alligators living on Ellesmere Island in the past and they don’t like the cold much.

Net, net, the planet is venting the El Nino spike via one of the only viable mechanisms it has, moving it to the poles to go right up that cold “chimney”.

I’m very impressed!

But…

– it seems that only the Arctic’s cold “chimney” is “consigning the heat to space”. The Antarctic experiences even bigger lows, all equally able to “suck warm air up to the pole”. Is the South on strike?

– that heat “consigned to space”: doesn’t it, last but not least, have to reach the TOA? So shouldn’t all these pretty satellites in circumpolar orbits notice that heat coming off the upper end of the “chimney”?

– and if they really did: wouldn’t you be proud enough to show us lots of data about that?

There is no global surface instrument record. The earth’s land mass covers about 30% of the globe and is sparsely covered as far as thermometers go. The ocean surface temperature record is even worse. And why is ocean temperature data combined with atmospheric thermometer records? They do not relate to one another as far as climate is concerned.

Arguing about temperatures and climate drivers has become just as emotional as arguing about sports or politics or religion. Very little is said about what we should actually do. This position from Berkeley Earth, “The most effective and economic approach would be to encourage nuclear power, substitution of natural gas for future coal plants, and continued improvement of energy efficiency.”, seems reasonable. What do you think?

Those who have something to lose in the political science or climate science game will refuse to say, “BINGO!”.
Those in those games who have, never the less, said, “BINGO!”, are those of integrity.
What they say should be considered and evaluated.
(Keep the rest away from the trough.)

I wonder if anyone would pay as much attention if Muller etc. were based in Little Whinging, rather than Berkeley. I suppose nobody would be inclined to associate “Little Whinging Earth” with an academic institution.

On the other hand, I suppose some might accidentally associate “Berkeley Earth” with the University of California.

At least they keep good company, tax wise, by paying none, as do –
“Religious, Educational, Charitable, Scientific, Literary, Testing for Public Safety, to Foster National or International Amateur Sports Competition, or Prevention of Cruelty to Children or Animals Organizations”

Elizabeth Muller, Executive Director of Berkeley Earth, said, “We have compelling scientific evidence that global warming is real and human caused, but much of what is reported as ‘climate change’ is exaggerated. Headlines that claim storms, droughts, floods, and temperature variability are increasing, are not based on normal scientific standards. We are likely to know better in the upcoming decades, but for now, the results that are most solidly established are that the temperature is increasing and that the increase is caused by human greenhouse emissions.

If I’m not the first to mention this, I apologise, I only skimmed through the comments. We all recall how “global warming” was somehow subsumed into something new called “climate change” when the “pause” started and it looked as if warming was over for the time being. There was endless repetition of the theme “climate change is real, it’s happening now, just look out your window blah blah blah”

That is a clear statement (my bolding) from a prominent warmist that climate change was a fiction based on repetition of untruths; and she feels comfortable saying it now that it looks like warming is back. Really – how transparent can you get?

I wonder what the message would be if we have a couple of years of cooling too rapid to be adjusted away? Will “warming” disappear again and “climate change” come back?

They (the climate establishment) must think that the general public is made up of people who are really stupid and unable to perceive logical inconsistencies. That’s probably not too far from the truth. Sigh.

Appears? That’s the best the “Lead Scientist” has figured out? Like maybe the strong El Nino and long-term warming trend may not have been the reason… it could have just been a coincidence?

“…The most effective and economic approach would be to encourage nuclear power, substitution of natural gas for future coal plants, and continued improvement of energy efficiency…”

Is Muller saying that as the Scientific Director of Berkeley Earth, or is he saying it as President and Chief Scientist of Muller & Associates, “an international consulting group specializing in energy-related issues”? LOL

Just curious. Although I’m somewhat familiar with Berkeley Earth from WUWT and certain commenters, it’s not something I follow closely. I’m familiar with the name Richard Muller, but does this border on the edges of grant-troughing nepotism?

From the article, “Elizabeth Muller, Executive Director of Berkeley Earth, said, “We have compelling scientific evidence that global warming is real and human caused…” I must have missed something. Even if I accept all the adjusted temperature records wholesale, that says absolutely nothing about the cause of any level of warming.

Sure, CO2 can theoretically decrease planetary cooling by reducing the amount of radiant energy leaving the atmosphere (reasonable mechanism based in physics), and hypothetically the multitude of feedback mechanisms responding to an increase in CO2 could amplify the effect (highly uncertain and incomplete hypothesis due to lack of adequate knowledge about known and unknown feedback mechanisms), and result in significant warming, but at what point in the last 30 years did anyone attempt to falsify that hypothesis (i.e., to demonstrate that natural variability cannot explain the apparent warming)?

CO2 concentrations are rising, and the GAT (whatever that means – different discussion) appears to be increasing, but correlation does not imply causation. (For some entertaining examples, see http://www.tylervigen.com/spurious-correlations.) So the fact that CO2 concentrations are rising and GAT appears to be rising is not evidence in any scientific sense that the CO2 is causing the GAT to rise. What is the “compelling scientific evidence” that any global warming in the last 50 years is human caused?
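The correlation-is-not-causation point can be illustrated numerically: two causally unrelated series that both happen to drift upward will show a high correlation coefficient simply because they share a trend. A minimal sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(100)

# Two causally unrelated series that both happen to drift upward.
a = 0.02 * t + rng.normal(0, 0.2, t.size)
b = 0.05 * t + rng.normal(0, 0.5, t.size)

r = np.corrcoef(a, b)[0, 1]
print(f"r = {r:.2f}")   # high despite no causal link
```

This is exactly why attribution arguments have to rest on physical mechanisms and falsifiable predictions rather than on the co-movement of two trending series.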

In my first comment on January 28th I pointed out falsifications present in your global warming data. Now let’s look at the erroneous interpretations your paper is full of. In your opinion, the 2016 El Nino wipes out the ‘…global warming “pause”, which Berkeley Earth had always stressed was not statistically significant…’ That is abject nonsense. The first six years of the 21st century can easily be interpreted as a pause in satellite views as well as in HadCRUT3, as can be seen in figure 24 of my book. However, if additional data that came after 2008 is used, we find a La Nina in 2008 and an El Nino in 2010. And the temperature after this is distinctly lower than at the turn of the century. This temperature curve eventually turns upward after 2012. But the two ENSO components in between make it hard to see the true temperature trend, which is actually downward, indicating cooling. An ENSO oscillation does not normally contribute to the background warming because El Ninos and La Ninas are created in pairs (pp. 17-21) and effectively restore the original background level after having passed through. In view of this fact we can ignore the trend shown by ENSO peaks/valleys here and simply draw a straight line through from 2002 to 2012. This is the most likely path of the background temperature we cannot see because of ENSO. After 2012 that temperature curve begins to turn up in preparation for creating the 2016 El Nino. But the straight line itself from 2002 to 2012 has a negative slope, and this indicates cooling, not warming. That cooling is due to the fact that the warmth at the beginning of the twenty-first century cannot be replenished because its source, the super El Nino of 1998, has left by this time. This leaves the straight-line segment from 2002 to 2012 as a pointer to future temperature changes. If you extend it past the 2016 El Nino peak, it reaches baseline sometime in the year 2017.

This points to further cooling, quite possibly down to the level that existed in the eighties and nineties. But here is what Robert Rohde of Berkeley Earth has to say about it: “The record temperature of 2016 appears to come from a strong El Nino imposed on top of a long-term global warming trend that continues unabated.” First, neither El Ninos nor La Ninas should be used to determine the global baseline temperature. That is abject nonsense. Second, that “long-term global warming trend” is nothing of the sort. It is a remnant of warm water, now cooling, that was left behind by the super El Nino of 1998, as I explained. After the 2016 El Nino is over, this will become the dominant factor controlling global temperature. Which, in the absence of additional ENSO activity, means that temperatures as low as the eighties and nineties become possible.

From there Rohde goes on to comment on Arctic warming and bring in Zeke Hausfather’s opinion. It is obvious that neither one knows much about the Arctic, but nevertheless both are true believers in its greenhouse origins. I solved the Arctic warming problem in 2011 and proved that it was caused by a rearrangement of the North Atlantic current system at the turn of the twentieth century [E&E 22(8):1069-1083 (2011)]. Prior to that there was nothing in the Arctic except slow, linear cooling for 2000 years. The warming first started in the early 20th century but temporarily halted in mid-century. That halt was a thirty-year cool period, but it was over by 1970 and warming has been steady since then. The actual warming is caused by the change of the Gulf Stream path across the North Atlantic Ocean. The Gulf Stream leaves the Gulf of Mexico and turns north through the Straits of Florida. It keeps going north, parallel to the east coast, and then turns east toward the North Sea, as Ben Franklin knew. This is what warms the climate of Europe. What the rearrangement of currents did was to redirect part of the Gulf Stream directly into the Arctic Ocean.

Spielhagen et al. took an Arctic cruise to check it out. They discovered that warm water was entering the Arctic Ocean from the Atlantic and that its temperature was higher than ever known for the Arctic. That is the source of Arctic warming, not another fairy tale about greenhouse warming.

These people are a perfect example of what happens with intellectual inbreeding.

The globe has warmed, true… caused by humans… ummm… just where is the empirical evidence for that? There is none outside the virtual world of computers. WE DON’T LIVE in the computer.

ANOTHER INTERESTING touch of hypocrisy…. the MWP and LIA are alleged by these morons to be “regional events”…. but then they go on to allow the Arctic… another “regional event”… to dictate “global” temperature.

Our government should transfer ALL of the funding of Climate Change to the study of Mental Health … so maybe they can figure out why these people are so dang stupid. … and come up with a stupid pill to fix it.

Notice the 1997/8 strong El Nino peak has been adjusted so much over recent years that there are now 9 little peaks above it, where before there were none (not including the past 2 years).

What pause? We adjust it and change it to help get rid of it.

I estimate changing the data sets again has added about 0.1C to 0.2C more warming due to adjustments alone. The same data stations don’t show this warming difference; different cherries do, thanks to comparing apples with oranges.

The error is up to 0.4C in just the changing data sets, since these cause a swing of up to this value for one month alone.

Recently it may have been as warm as 1997/98, but there is no evidence it has been any warmer.

In a foggy severe viral/bacterial upper respiratory infection state of mind (they put me on antibiotics because I am that snot-nosed sick and it is after midnight in the Mid-east section of Oregon), buried in way-taller-than-me snow drifts that harken back to the turn of the past century, I came upon a brilliant (or stupid) idea. If I wanted to find an interstadial peak/stadial trough or slide up/down aaaachoooo signal in oceanic/atmospheric oscillations that would be germane to the past 800,000-year ice core record, which of the present indices currently in use would I choose to investigate? I would be looking for fast-rising and jagged-falling oscillations that show a loooooong-term segment of that kind of pattern. Given the proposed idea that Bob Tisdale has offered/expounded on related to net oceanic discharge/recharge of solar heat, I may want to stick to season-delimited oscillations (such as the Arctic Oscillation and PDO, etc.). Any thoughts? Since we are at an interstadial peak I would be looking for whatever pattern demonstrates net belching out of warming to the atmosphere. But even better, what indices would I use to detect an interstadial peak “knee” that sends us back down the jagged slide to “brrrrrrrr”?