Wednesday, January 21, 2015

Tricks used by David Rose, denier "journalist", to deceive

This is just a short article to show the journalistic tricks that professional disinformers use. It's excerpts from an article by denier David Rose, who is paid to write trash for the Mail, a UK tabloid of the sensationalist kind. He'd probably claim that he's just "doing his job". His job being to create sensationalist headlines and not bother too much about accuracy, but to do it in such a way as to stop the paper ending up in court on the wrong end of a lawsuit. Just. (The paper probably doesn't mind so much getting taken to the Press Complaints Commission.)

The Nasa (sic) climate scientists who claimed 2014 set a new record for global warmth last night admitted they were only 38 per cent sure this was true.

First of all notice the use of the word "admitted" - as if it was something that the scientists were forced into, whereas in fact they provided all the information in their press briefing. Notice also that David doesn't even know how to spell NASA. Then notice his straight up lie. It's not true. David has taken one number and used it out of context. The 38% number is the probability that 2014 is the hottest year, compared to the probability that 2010 or any other hot year is the hottest. 2010, the next hottest year, only got a 23% probability by comparison. Here is the table showing, out of 100%, what the different probabilities are:

You can see how David misused the 38% number. In fact the odds of it being the hottest year on record are the highest of the lot.
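To see why "38%" does not mean "probably not a record", here is a minimal Monte Carlo sketch. This is illustrative only - it is not NASA's actual method, and the anomaly values are rough stand-ins, not official GISS figures. The idea is that the 38% is the chance that 2014's true anomaly beats every other candidate year's, once measurement uncertainty is allowed for:

```python
import random

# Illustrative stand-in anomalies (deg C) - NOT official GISS figures.
# sigma = 0.025 so that 2*sigma is roughly the quoted 0.05 deg C uncertainty.
anomalies = {2014: 0.68, 2010: 0.66, 2005: 0.65, 1998: 0.63}
sigma = 0.025
trials = 100_000

random.seed(42)
wins = {year: 0 for year in anomalies}
for _ in range(trials):
    # Draw a plausible "true" anomaly for each year and see which is hottest.
    draws = {year: random.gauss(mean, sigma) for year, mean in anomalies.items()}
    wins[max(draws, key=draws.get)] += 1

for year, count in sorted(wins.items(), key=lambda kv: -kv[1]):
    print(year, round(100 * count / trials, 1), "%")
```

The shares sum to 100% across the candidate years, and the nominally hottest year takes the largest share even when that share is well under 50% - which is exactly the point David Rose's "only 38 per cent" framing obscures.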

In a press release on Friday, Nasa’s (sic) Goddard Institute for Space Studies (GISS) claimed its analysis of world temperatures showed ‘2014 was the warmest year on record’.

The claim made headlines around the world, but yesterday it emerged that GISS’s analysis – based on readings from more than 3,000 measuring stations worldwide – is subject to a margin of error. Nasa (sic) admits this means it is far from certain that 2014 set a record at all.

See how David Rose distorts things. How he uses rhetoric, abusing words like "emerged" and "claim" and "admits". He is flat out lying about the "far from certain". He just made that one up. It may not be "certain", but it is much more certain than "far from". And it is more "certain" that 2014 was the hottest year than that any other year was the hottest year.

If David Rose were arguing that you beat your wife, even though you don't, he'd probably write it up as:

The so-called scientist claims that he doesn't beat his wife. He admits that he cannot prove he doesn't beat his wife. However this journalist can show that it has emerged that his claim is subject to a margin of error. 95% of wife-beaters deny beating their wives.

And I doubt he'd add the confidence limits to the 95% number!

David Rose continues his deception writing:

Yet the Nasa (sic) press release failed to mention this, as well as the fact that the alleged ‘record’ amounted to an increase over 2010, the previous ‘warmest year’, of just two-hundredths of a degree – or 0.02C. The margin of error is said by scientists to be approximately 0.1C – several times as much.

That section by David Rose contains the same misprint of NASA (as Nasa), plus the same journalistic tricks of rhetoric, as well as a lie. The margin of error of the annual averaged global surface temperature is described in the GISS FAQ as ±0.05°C:

Assuming that the other inaccuracies might about double that estimate yielded the error bars for global annual means drawn in this graph, i.e., for recent years the error bar for global annual means is about ±0.05°C, for years around 1900 it is about ±0.1°C. The error bars are about twice as big for seasonal means and three times as big for monthly means. Error bars for regional means vary wildly depending on the station density in that region. Error estimates related to homogenization or other factors have been assessed by CRU and the Hadley Centre (among others).

If the press release didn't include any confidence limits, then where, you ask, did David Rose get his numbers from? That's a very good question. It turns out that NOAA and NASA held a press conference, during which they showed some slides and explained the confidence limits, among other things. So David Rose was being very deceitful, wasn't he? Which isn't a surprise.

What bit of deception does he swing to next? Well here it is. You be the judge:

As a result, GISS’s director Gavin Schmidt has now admitted Nasa thinks the likelihood that 2014 was the warmest year since 1880 is just 38 per cent. However, when asked by this newspaper whether he regretted that the news release did not mention this, he did not respond. Another analysis, from the Berkeley Earth Surface Temperature (BEST) project, drawn from ten times as many measuring stations as GISS, concluded that if 2014 was a record year, it was by an even tinier amount.

More rhetorical tricks using words like "admitted". More deception by David Rose, tabloid denier extraordinaire. When and how and where did David Rose ask Gavin Schmidt the question? I don't know. It looks as if it was via an accusatory tweet of the "have you stopped beating your wife" type, like this one:

@ClimateOfGavin why didn't you mention the size of the 2014 "record" and the uncertainty in the GISS press release? Do you regret this?
— David Rose (@DavidRoseUK) January 17, 2015

Yet Gavin Schmidt did respond to David Rose, so it was David Rose who told the lie:

The "Nasa" isn't a mistake, it's just the Daily Mail's in-house style. A lot of British newspapers will avoid using all caps for government bodies. See this elsewhere on the site. Or this from a New Zealand news site.

David Rose's list of alleged sensationalist buzz words is longer than that of Anthony Watts. Anthony mainly only uses one word: "claim". David, being a professional denier writer, has collected more: "emerged", "alleged", "claim", "admitted".

He'd probably be forced to admit that he overuses his alleged claims though.

PS You can always archive anything iffy and provide a link to that. Links to mainstream newspapers are okay by me - even the gutter press that David writes for.

It might even be the default behaviour of their word processor. In Microsoft Word if you type in "nasa" as the first word on a line, it changes it to "Nasa". You have to type in "NASA" for it to remain all caps.

The convention is that if an acronym is spelled out when spoken, ie U-S-A, then caps are used, whereas if it is pronounced phonetically, ie Nasa, then only the first letter is capitalised. I think it's silly and that caps should always be used to denote that it's an acronym, but that does seem to be common to most newspapers in the UK now.

The Daily Mail (also Daily Fail, Daily Heil) is an atrocious 'newspaper' full of racist, sexist, homophobic, child fantasy, right-wing lies and hatred. I implore everyone not to visit their website. Sometimes, though, it can be accidental. I have a Chrome plugin called "Kitten Block" which redirects you to the Tea and Kittens page. It's also available for Firefox :) http://www.theguardian.com/media/mediamonkeyblog/2011/mar/28/kitten-block

Are the error bars for any given year completely independent of all other years, or do the issues that lead to error margins point in the same direction all the time (but we don't know which)? Or is it a mix of both?

To give a (rather lengthy) example of what I'm talking about, suppose the anomaly for Year X is +0.7 +/- 0.05 and the anomaly for Year Y is +0.61 +/- 0.05: If the errors are independent, Year X could be as low as +0.65 and Year Y could be as high as +0.66, so there is some small probability that Year Y was hotter than Year X.

If not independent, Year X could be as low as +0.65, but if so then all other results are likely to be at the bottom end of their error range (ie the factors that created the error are more or less constant across the whole data set), so Year Y is very unlikely (in this case) to be hotter than +0.56. Alternatively, if we consider that Year Y might be at the upper end of its error range (+0.66), then it is likely X is also at the top end of its range (+0.75), and Year X is definitively, 100% guaranteed to be the hotter year, even if we don't know exactly how hot it was.

The discussion I see on this sort of thing suggests it's the former (or at least always assumes it), but in my work (business analysis, not very sciency I'm afraid) I see a lot of cases where the latter is a better representation of what is going on - while the ranges of uncertainty overlap, the same factors are at work throughout the dataset, and if they push one data point high or low, they do it for all.
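The two cases above can be sketched with a quick simulation, using the numbers from the example (+0.70 and +0.61, each +/-0.05, treated here as a 2-sigma range). This is a deliberately extreme illustration: case 2 models the "not independent" situation as a single bias shared fully by both years.

```python
import random

# Year X = +0.70, Year Y = +0.61, each +/-0.05 taken as 2 sigma (sigma = 0.025).
x_mean, y_mean, sigma = 0.70, 0.61, 0.025
trials = 100_000
random.seed(1)

# Case 1: independent errors - Y occasionally comes out hotter than X.
indep = sum(
    random.gauss(y_mean, sigma) > random.gauss(x_mean, sigma)
    for _ in range(trials)
)

# Case 2: a fully shared error - the common bias cancels when the two years
# are compared, so Y can never come out hotter than X.
shared = 0
for _ in range(trials):
    common = random.gauss(0, sigma)
    if y_mean + common > x_mean + common:
        shared += 1

print("P(Y hotter), independent errors:", indep / trials)
print("P(Y hotter), fully shared error:", shared / trials)
```

In the independent case there is a small but non-zero chance Y was hotter; in the fully-shared case the ranking is certain even though neither absolute value is, which is the heart of the question about year-to-year comparisons.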

Interesting question, Frank. A similar thought flitted through my head when I was writing these articles, but I didn't know the answer, so the thought disappeared just as quickly :)

Nick Stokes would be a good person to ask (or Gavin Schmidt or someone else from NASA, or the Hadley Centre, or Kevin Cowtan and Robert Way). The other day Nick wrote about the main sources of year to year difference. He plays with the data a lot, so he might have thought about that very issue - or not.

Hmm, good question. I think in that context it makes sense to capitalise GISS because people will not generally be familiar with the organisation and it is pointing out that it is an acronym. But that kind of inconsistency is exactly why I would capitalise all acronyms.

And in those references 9-13 are two satellite data series and three surface data series (well, not really - the HadCRUT4 is just a paper, not a link). Now it even says "mean of the RSS, UAH, NCDC, HadCRUT4 and GISS monthly global anomalies", but guess what. It's all a fraud. Only the RSS and UAH data are used in the chart. (There is a clue, as the dope forgot to actually change the friggin graph. At the top in blue it says RSS + UAH.)

I actually downloaded all the datasets, used R to consolidate the data and create a mean. I then plotted them, and guess what. It doesn't look the same. When I plot just the RSS and UAH the graph I get is exactly the same as what is in the paper.
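The consolidation step is simple enough to sketch (here in Python rather than the R I actually used). The monthly values below are synthetic stand-ins, not the real RSS/UAH/GISS anomalies - the point is only the mechanics of comparing a satellite-only mean with an all-dataset mean:

```python
# Synthetic stand-in monthly anomalies (deg C) - NOT real dataset values.
rss  = [0.26, 0.17, 0.21, 0.25]
uah  = [0.29, 0.17, 0.17, 0.19]
giss = [0.70, 0.47, 0.72, 0.73]

# Mean of the two satellite series only (what the chart actually plotted).
mean_sat = [(r + u) / 2 for r, u in zip(rss, uah)]
# Mean across all series (what the chart claimed to plot).
mean_all = [(r + u + g) / 3 for r, u, g in zip(rss, uah, giss)]

print("satellite-only mean:", [round(v, 3) for v in mean_sat])
print("all-dataset mean:   ", [round(v, 3) for v in mean_all])
```

Because the surface series runs warmer than the satellite series in this sketch, the satellite-only mean sits below the all-dataset mean everywhere - which is why a chart labelled as the five-dataset mean but actually plotting only RSS + UAH looks cooler than it should.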

(1) Well that figure should show the 95% confidence intervals of the satellite derived data set trend line (with flared/tapered ends). It does not.

(2) The FAR trend line should also start with the 95% confidence interval bounds associated with the uncertainties of the underlying time series (in other words, not a point start but an interval start). It does not.

The uncertainty is calculated for the year in question and expressed as a range. It usually represents the 95% (2 sigma) certainty range, ie there is a 95% probability that the actual value falls within the calculated range.
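As an aside, "2 sigma" and "95%" are two ways of saying nearly the same thing. A quick check against the normal distribution:

```python
import math

# For a normal distribution, the probability of landing within k standard
# deviations of the mean is erf(k / sqrt(2)).
def coverage(k):
    return math.erf(k / math.sqrt(2))

print("within 2.00 sigma:", round(coverage(2.00), 4))  # about 0.9545
print("within 1.96 sigma:", round(coverage(1.96), 4))  # about 0.95
```

Strictly, 2 sigma covers about 95.45%; the exact 95% point is about 1.96 sigma, which is why the two get used interchangeably.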

Here is what the first figure in the Monckton paper SHOULD have looked like.

http://i57.tinypic.com/i5ub2r.png

It uses a mean of the surface temperature datasets compared to the IPCC FAR projections. Given that the reality is so different to the fantasy that Monckton is trying to portray, it's a wonder how the paper was even published. Which explains why a second-rate Chinese publication was chosen.

Also, the FAR projection used by Monckton is the wrong one. The report used 4 scenarios A-D, Monckton cherry-picks the highest, scenario A and misrepresents it as the IPCC projection. We now know that the forcings in Scenario A were an over-prediction. By 2011 we had not reached the 2000 value in Scenario A for CO2 forcing. Scenarios B-D were closer and their associated temperature trends were between 0.1 and 0.2C/decade. In other words, even in 1990 the models used by the IPCC were making accurate predictions.

I saw your reply, thanks - I misspoke in linking it to spatial sampling error. What I meant to say is that what I'm thinking of is similar to your reasoning about why spatial sampling error is not really a factor (since 2014 is built from essentially the same set of stations as 2010).

I suspect that in this case of comparing like-to-like the probability that 2014 was the hottest is rather higher than the 38% - 48% (or 1.5 to 3 times as likely as any other year) that is being advanced.

Harry, to explain a bit further, suppose we have a station that complies with all standards, but because of some local topographic anomaly or the type of paint used on the screen or whatever, tends to read fractionally high or low (but still within the standard measurement uncertainty for that equipment). If you want to compare it to another instrument, the errors of the two instruments are independent (as per my case 1), but if you are comparing one instrument's results from one year to the next, in the simple case, the factors that led to its error last year are the same as this year (as per my case 2), and the measurement uncertainty in relative terms should be much lower.

That's just a simple one-instrument case, but I'm wondering if there are larger scale factors that cannot be ignored in absolute terms, and are thus embedded in the margin of error (how hot was this year?), but that can in fact be ignored when looking at the data in relative terms (was this year hotter than that year?). Nick identifies (at Moyhu) spatial sampling as one of these possibles (at least for comparing years that are very close together); I was being more generic, having less of a grasp of the detail.

"At last, some genuine and valid criticism. I had not recalled that IPCC had made its 1 k by 2025 prediction under Scenario A. However, Scenario A was its business-as-usual scenario, and it had incorrectly predicted a far greater rate of forcing, and hence of temperature change, than actually occurred."

So, his paper plotted a cherry-picked temperature prediction even though he must have known that it was not based on real-world data, and presents this as 'empirical evidence that the models run hot', when, as he now concedes, it is evidence of no such thing.

"I had not recalled that IPCC had made its 1 k by 2025 prediction under Scenario A."

That's really, really bad science - to add a line to a figure where you do not know the constraints.

Now, he has confirmed that it was wrong, but it looks like he is trying to get around it at the same time, too, by claiming this means the IPCC Scenario A was wrong and therefore the prediction is still wrong.

It's classic Monckton. I guess he's been using this chart so long, he's forgotten it is bogus. The IPCC published 4 scenarios in the first Assessment Report, labelled A-D. Nobody knows in advance how forcings will develop, so the IPCC run their models against a range of possible scenarios, low to high, and publish the predicted temps from each. They labelled Scenario A as 'business as usual', meaning high coal usage and low emission controls. In fact we now know that the actual forcings ran some way below Scenario A, closer to B-D, which all had similar values in the early decades. This was due in some measure to the collapse of the Soviet Union, rather than emissions control - an event few people could have foreseen, and the reason the IPCC use scenarios! The actual 2011 forcing number for CO2 is in the paper, but this did not prevent Monckton from presenting Scenario A - and only Scenario A - as the IPCC projection.

So, one of the IPCC scenarios turned out to be an overestimate compared to real-world observations; if the range was correctly chosen, this will always be the case. But the topic of the paper was not IPCC projections, it was 'why models run hot' and the figure appeared in Section 2 'Empirical evidence of models running hot'.

In fact, Monckton is so pleased with his FAR prediction, it appears again, no more legitimately, in Fig 2.

Phil, this misrepresentation of the FAR is quite serious. Since you've managed to get some responses out of Monckton, implicitly acknowledging it is wrong, perhaps you may want to inform the journal of this misrepresentation.

I know some have suggested writing a rebuttal, but the journal apparently has page charges, and do people really want to go through that effort?

Marco - there's so much wrong with the paper - this is just the facet that I investigated in detail - that I suspect a correction would be several times larger than the original, so, while I may drop the journal a line, I think the best course is just to let the article sink without trace.

I currently find myself "debating" with someone over there who claims that "According to recent Met Office data average global temperature decreased very slightly during this decade". What planet do you suppose (s)he is living on?

New Look

G'day. HotWhopper is having a facelift. Do let me know if you find anything missing or broken.

When you read older articles on a desktop or notebook, you may find the sidebar moves down the page, instead of being on the side. That can happen with some older articles if your browser is not the full width of your computer screen. I am not planning to check every previous post, so if you come across something particularly annoying, send me an email and I'll fix it. Or you can add your thoughts to this feedback article.

You can use the menu up top to get to the blogroll or whatever it is you might be looking for on the sidebar.

When moderation shows as ON, there may be a short or occasionally longer delay before comments appear. When moderation is OFF, comments will appear as soon as they are posted.

All you need to know about WUWT

WUWT insider Willis Eschenbach tells you all you need to know about Anthony Watts and his blog, WattsUpWithThat (WUWT). As part of his scathing commentary, Wondering Willis accuses Anthony Watts of being clueless about the blog articles he posts. To paraphrase:

Even if Anthony had a year to analyze and dissect each piece...(he couldn't tell if it would)... stand the harsh light of public exposure.

Definition of Denier (Oxford): A person who denies something, especially someone who refuses to admit the truth of a concept or proposition that is supported by the majority of scientific or historical evidence.
‘a prominent denier of global warming’
‘a climate change denier’

Alternative definition: A former French coin, equal to one twelfth of a Sou, which was withdrawn in the 19th century. Oxford. (The denier has since resurfaced with reduced value.)