More Deception in the Lewandowsky Data

As CA readers are aware, the Lewandowsky survey was conducted at stridently anti-skeptic blogs (Deltoid, Tamino etc.), and numerous responses purporting to be from “skeptics” were actually from anti-skeptics fraudulently pretending to be skeptics. To date, most of the focus has been on fake responses in which respondents, posing as “skeptics”, professed belief in conspiracies that they did not actually hold. In today’s post, I’ll discuss another style of (almost certain) deception, in which Lewandowsky respondents gave fake/deceptive answers to the Free Market questions.

In today’s post, I’m going to compare answers from the Lewandowsky survey to answers from the WUWT survey carried out by A. Scott. For the purposes of today’s post, I’m using the version that I downloaded (1554 responses); more responses came in later. I’m not convinced that the additional responses would ensure more reliability. At some point, I’ll take a look at the tranche of additional responses, but there are enough responses for the purposes of today’s comparison to the Lewandowsky survey (1145 retained responses).

The Lewandowsky survey (at anti-skeptic sites) predictably showed greater incidence of “zealous” anti-free market sentiment, but, much less predictably, it also showed significantly greater incidence of super-zealous pro-free market sentiment. My diagnosis is that these super-zealots are fake responses by warmists acting out their caricature of skeptics, much along the lines of the Bat Cave villain caricature in Gleick’s forged strategy memo.

First of all, multiple strong disagreement with the six free market propositions is noticeably more prevalent in the Lewandowsky sample of anti-skeptic blogs than in the WUWT survey: this can be seen in the graphic below (normalized to counts per 1000 respondents), where the stronger right-hand tail in the Lewandowsky survey counts respondents with multiple strong disagreements with the FM propositions. This result is unsurprising, since the Lewandowsky survey is primarily of anti-skeptics; I suspect that all parties would hypothesize that anti-skeptics are likely to be more pro-big government and more pro-public sector unions (another formulation of Lewandowsky’s “free market”).
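The tallying behind these per-1000 histograms is straightforward. As a minimal sketch (with toy responses, not the actual survey data, and assuming the 4-point coding 1 = strongly agree … 4 = strongly disagree), counting each respondent’s strong disagreements across the six FM items and normalizing per 1000 respondents looks like:

```python
from collections import Counter

def strong_disagree_histogram(responses, strong_code=4, per=1000):
    """responses: list of 6-item Likert tuples (1=strongly agree .. 4=strongly disagree).
    Returns, for each number of strong disagreements, the count of respondents
    normalized per 1000."""
    n = len(responses)
    # count how many of the six items each respondent answered with the strong code
    tallies = Counter(sum(1 for item in r if item == strong_code) for r in responses)
    return {k: per * v / n for k, v in sorted(tallies.items())}

# toy sample of 4 respondents
sample = [(1, 1, 2, 2, 1, 1), (4, 4, 4, 3, 4, 4), (2, 3, 3, 2, 2, 3), (4, 4, 4, 4, 4, 4)]
print(strong_disagree_histogram(sample))  # -> {0: 500.0, 5: 250.0, 6: 250.0}
```

The same function with `strong_code=1` tallies strong agreements, which is the statistic used in the next graphic.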

The next graphic shows something against this “expected” pattern. It shows the count (per 1000) of strong agreements with the FM propositions in the two surveys. Unsurprisingly, the count of respondents who did not strongly agree with any of the FM propositions (the zero count on the left) was larger in the survey at the anti-skeptic blogs than at WUWT. However, take a look at the right-hand tail: this is the important feature. Zero WUWT respondents strongly agreed with 5 or more FM propositions, whereas 35 Lewandowsky respondents purported to strongly agree with 5 or more (with a further disproportionate number of Lewandowsky respondents purporting to strongly agree with 4 or more). In my opinion, all 48 of these Lewandowsky responses are fake.

The discrepancy becomes even more pronounced when expressed in terms of the “skeptic” population in each survey as shown below. The incidence of ‘skeptics’ purporting to strongly agree with 5 or more FM propositions is 133 per 1000 in the Lewandowsky population, but zero in the WUWT population.

The sample sizes are large enough that the differences are statistically significant. (With a little thought, this could be expressed as a formal statistical test, but such a test would merely put a number on the stark difference in normalized counts.)
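One plausible formalization (my choice of test, not one specified in the post) is a one-sided Fisher exact comparison of the strong-agree-with-5+ counts: 35 of 1145 retained Lewandowsky responses versus 0 of 1554 WUWT responses. Because the WUWT count is zero, the one-sided p-value reduces to a single hypergeometric term:

```python
from math import comb

n_lew, n_wuwt = 1145, 1554   # retained responses in each survey (per this post)
k_lew, k_wuwt = 35, 0        # respondents strongly agreeing with 5+ FM propositions

# One-sided Fisher exact test: if the 35 "superzealots" were scattered at
# random across the pooled 2699 respondents, the chance that ALL of them
# land in the Lewandowsky sample is a single hypergeometric probability.
n_total = n_lew + n_wuwt
k_total = k_lew + k_wuwt
p_value = comb(n_lew, k_total) / comb(n_total, k_total)
print(f"one-sided p = {p_value:.1e}")  # vanishingly small
```

Whatever test one prefers, the conclusion is the same: a split this lopsided does not arise by chance.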

Which is more plausible? That the Lewandowsky survey attracted 48 free-market superzealots, none of whom showed up in the WUWT survey? Or that these 48 responses were fake responses by anti-skeptics holding a Bat Cave caricature of skeptics? In my opinion, the latter is vastly more likely. Unfortunately for Lewandowsky, his failure to ensure data integrity renders him unable to give any assurance on the matter.

This pattern of deceptive response does not overlap with the high-conspiracy-count responses (very incompletely analysed in Lewandowsky’s blog post). Only 5 of the 48 free market superzealots were also conspiracy superzealots, so the impacts of the two classes of (almost certainly) fake responses are additive.

In Lewandowsky’s criticism of the Bray-von Storch survey, Lewandowsky told his fellow SkS insiders that its results were worthless because Bray and von Storch had been unable to ensure data integrity. That criticism applies even more forcefully to the Lewandowsky survey, which is clearly contaminated with fake/fraudulent responses to the free market questions as well as the conspiracy questions.

As I’ve said before, I do not believe that Lewandowsky was personally complicit in the initial submission of fake/fraudulent responses, though his decision to survey skeptics at anti-skeptic blogs was unwise, if not reckless. However, in my opinion, once the problem with fake/fraudulent responses was forcefully drawn to Lewandowsky’s attention (by Tom Curtis as well as me), Lewandowsky himself should have notified the journal and asked that the article be re-reviewed with particular emphasis on whether he had adequately ensured data integrity. Had he done so, Lewandowsky would have an answer to criticism that he had failed to act properly once he was aware of potential problems. I think that Lewandowsky’s decision to sneer at criticism will prove unwise.

256 Comments

To play devil’s advocate, could Watts’ respondents, already knowing the outcome of the Lewandowsky survey and the reason for the Watts survey, have hidden their true pro-free-market opinions when completing the survey?

But even without the Watts’ results, I agree that the long tail (and in your last graph an upswing for 5 and 6) in the Lew data is very suggestive of faked responses.

Steve: most of the discussion leading into the WUWT survey was about the conspiracy questions. Also, as others observe below, Lewandowsky’s FM questions as phrased were so over-egged that even opponents of big government were unlikely to register strong agreement on 5 or 6 of them.

Even the devil wouldn’t believe that one. For one thing, the focus was on climate, not economics. I know when I filled out the survey, worrying about how my own economic views would be perceived was not foremost in my mind. Second, we’re not talking one or two, but 48 “missing” zealots. Zealots are zealots not because they hide what they believe in, but because they are unafraid to proclaim it. You’d have to believe every zealot is also dishonest.

What I would believe is that the vast majority of respondents here honestly and faithfully recorded, to the best of their ability, their true opinions and positions, because we are sick and tired of the dishonesty, dissembling and prevarication all around us.

SCheesman: I disagree. A survey was put up on websites that are vociferously, stridently, one might even say viciously anti-skeptical. Websites that have traditionally put AGW skeptics in the same bin as UFO believers, flat-earthers, and kooks. Do you honestly believe that none of these folks, having seen what the questions were, would find it great fun to pretend to be a whacko who is a skeptic?

Especially when the questions themselves and the collection of questions were so heavy-handed and over-the-top.

Especially when these websites which consider people who lied for the cause (Gleick, etc) to be heroes.

Especially when those who run those sites are close to the fellow running the survey, and when the goals of the survey leaked out?

Sure, people might’ve focused on the tin-foil-hat questions and science and thought little of the economics questions. Maybe. Possibly. But considering the rest of the negatives, that’s rather naive.

That’s possible. But the questions on free market were worded the way someone who does not understand a free market would word them and were worded very dogmatically. I doubt most skeptics OR free market zealots really think the free market never has any bad results.

–1. An economic system based on free markets unrestrained by government interference automatically works best to meet human needs.

I pride myself on being a ‘free marketeer’ but I don’t think I could strongly agree with that statement. Actually, I think I would have to strongly disagree.

At the very least I need the bureau of weight and measures so that if I enter into a contract for ‘a ton of bricks’ I have an enforceable contract for an amount of bricks equal to the legal definition of ‘a ton’.

I do not see any government “interference” in your example. A Bureau of Weights and Measures would be no different than a Bureau of the Treasury establishing a uniform monetary system; such activities “facilitate” trade. The key word is “interference” — hindering or obstructing.

When I answered the question in the survey, my concept of government “interference” was on the order of: regulations, restrictions and prohibitions on trade unrelated to the general public safety, price controls, tariffs, price supports, subsidies, special taxes, quotas, etc.

See, Dave L., that’s where the ambiguity of the question comes in, and why it is a badly constructed one. People are allowed to interpret “interference” in completely different ways. Terrible survey design.

In my layman’s opinion, based on: (1) voluntary comments left in the survey, (2) the fact it was communicated to potential respondents only in a single guest post of mine at WUWT, (3) that it was posted late on a Friday night, (4) that the majority of responses were made in the following couple of days over the weekend, and last, (5) looking at an overview of the response comparisons between the two surveys and seeing the kind of results I – again as a layman, and a skeptic – would have expected between the two …

I believe; (a) respondents generally made a good faith effort to complete the survey without bias, and (b) the quality of the survey is relatively high, considering the above and as there was not sufficient notice or time for any significant effort to game the survey to be undertaken.

Last – Steve can far better address this, but my, again, layman’s view is that the vast majority of respondents were skeptic-leaning. There are a couple of the CO2-related questions that offer, in my opinion, a pretty good indicator, and they registered 72% and 78% responses pointing to “strong” skeptic-leaning answers.

As to my point (5) above, I saw much more nuanced responses to some key questions, as I personally would have expected, as opposed to the largely sharp black-and-white answers to the same questions in the Lewandowsky data.

To me that was another “gut” indicator of the overall quality … when you see answers across the spread, it seems to tell me more thought was given to answering, whereas a sharply defined Strongly Agree vs Strongly Disagree split on the same question in Lewandowsky leads me towards Steve’s belief that there was a whole lotta ‘gaming’ going on.

Re: A.Scott (Sep 23 15:31), As an add-on to the above – Lewandowsky collected responses from August through October 2010, ultimately obtaining approx. 1,300 total.

The responses Steve is using for now (N=1554) were collected in less than the first 24 hours – with the 1st recorded 9/8/2012 at 8:23pm and number 1554 recorded 9/9/12 at 4:30pm.

Lewandowsky contacted only strongly pro-AGW sites to disseminate the availability of this survey; those sites had several months at minimum to learn about the study and that it was targeted at skeptics, and for some, as we know occurred, to conspire to submit fake results.

In the WUWT instance there was a single guest post – from myself – on a Friday night. All of the responses Steve is using, which came from across the globe, were collected in that first 24 hours. There was virtually no time for non-skeptics to learn about the survey, let alone have time to try to subvert it.

Coupled with the high rate of voluntary contact info provided and the large number of often well-thought-out and detailed comments, my gut feel is that the quality is quite high and the prevalence of fake responses should be low.

And having the Lewandowsky data as at least a nominal validation should allow a fairly decent ability to notice attempts to fake responses.

I think DR_UK makes a good point. The WUWT survey is a useful exercise. However, responses may be skewed by what people perceive to be the purpose of the survey. It’s simply worth considering this in the data analysis. Cheers James

Steve: of course, skewing is an issue. Just as it was with Lewandowsky.

That is what I would have thought was a basic starting point here (along with leaving out “skeptics as oil shills” as a conspiracy, of course). To a ‘lay observer’ – I don’t normally comment, as I am too busy just soaking it all up since ‘Look what the cat dragged in’ burst upon the scene, and I am certainly no statistician – that does hit you in the face (as does the small number of responses). From my earliest days just reading the Adorno et al. work “The Authoritarian Personality”, when all I was doing was creating an (amateurish, really) questionnaire, it was made quite clear, and obviously so, that this kind of bias makes a survey meaningless. To get a valid/reliable survey (one measuring what it is claimed to measure, across the field, over time, etc.) you can’t ask a question like “a regular lynching is good for them” and expect to measure the attitude of the whole population to prejudice! And so it is with the population of ‘skeptics’, even without going into the strange ways they decided to contact those skeptics.

“Only 5 of the 48 free market superzealots were also conspiracy superzealots, so the impact of the two classes of (almost certainly) fake responses are additive.”

This seems counterintuitive. Wouldn’t a fake skeptic pose as a superzealot for both?

Steve: a respondent wishing to game the original survey would not necessarily assume that Lewandowsky would credulously count every wacko conspiracy response – for all he knew, the analyst was planning to screen out wacko conspiracy respondents and therefore, overegging the responses wouldn’t work. Unexpectedly, Lewandowsky didn’t screen out wacko responses, so such precautions proved unnecessary, but this wasn’t known in advance.

The British Psychological Society provides its members with guidelines on good practice in the conduct of internet mediated research. This includes a comment that ensuring data integrity is an ethical issue.

I don’t know whether there are similar guidelines for Australian psychologists, but I’d be surprised if there weren’t.

Steve didn’t say it was the only place it got complicated. There are questions to be asked about the Watts/Scott survey. But the graphs Steve shows here are amazing. Are you saying the 48 maniac pro-free-marketeers who were also ‘skeptics’, that Lewandowsky carefully included, were all genuine? How come absolutely nobody out of 1554 in the WUWT survey came out like this?

Re: tlitb1 (Sep 23 13:11), More to the point — he seems to be referring to the framing of the questions. My understanding is that A. Scott attempted to provide the questions exactly as asked by Lewandowsky. Perhaps the reader’s frustration with the questions — if that is the issue — should be expressed to Dr. Lewandowsky.

@Tony Mach: The latest anti-skeptic hero is Gleick, who is lauded for faking credentials to a competing firm in order to steal some documents. Having not found a smoking gun in those documents, he forged another document, included it with the genuine documents and “leaked” them to some of the websites involved in the Lewandowsky survey pretending to be “an insider”.

Yeah, I’d say that those websites would be crawling with folks who would be more likely than most to fake survey responses.

I liked the A Scott survey, tlitb. Considering the Lewandowsky survey, I think it was a valuable attempt to provide counterpoint. I don’t think it was meant to be compared to Lew’s survey; it was an attempt to do better and get a better understanding of how actual skeptics might answer the questions. It also tried to add some nuance by adding the “don’t know” or “neutral” option.

Steve: I disagree (as Scott knows). The Lewandowsky questions are stupid, so there is zero point in trying to “improve” them with the Don’t Know option. An exact comparison, question for question, from WUWT respondents facilitates direct comparisons and should have been the objective. But it’s water under the bridge, and better to have what we have than nothing.

The “neutral” decision was a conscious one, and one I struggled with. It was based on a lot of reading that strongly indicated the “neutral” option was important to data quality, and that Lewandowsky’s use of a 4-point scale forced answers that did not necessarily represent the respondent’s true position.

And it was a “neutral” response, not a “don’t know” response … this was explicitly addressed in my design. I purposely did not label each possible choice for this very reason.

On the form you see:

“Strongly Agree o o o o o Strongly Disagree”

… with the numbers 1 thru 5 above each answer choice. No “neutral” was specifically identified. The same was included in directions at top:

Responses are ranked on a scale of 1 to 5 – with 1 being “Strongly Agree” or “Absolutely True” and with 5 being the opposite, “Strongly Disagree” or “Absolutely False”

The item responses were designed to represent your position on a scale of 1 to 5 between Strongly Agree and Strongly Disagree, specifically to avoid, to the extent possible, the neutral option being used as “don’t know.” Some respondents DID state they used the #3 as a “don’t know”, which is inherent in any survey unless you also include “don’t know” as a choice.

The original intent of this work was to provide a full standalone set of more robust, better quality data, with less biased method of obtaining. To capture the true skeptic response that Lewandowsky failed to obtain.

My intent was to have this data analyzed independently of Lewandowsky and compare the end conclusions between the two. Adding the neutral choice provides a higher-quality overall response by nearly all accounts, and as such the analysis and conclusions should be more robust.

I also felt – and still believe – that for individual item-level comparisons (comparing responses between the two surveys at an individual question level) it was simple to just disregard the “neutral” responses and compare the 4 comparable responses between the two: “Strongly Agree – Agree – Disagree – Strongly Disagree.”

This reduces the total response sample from the new survey, but it already had well over 100 times the “skeptic”-leaning responses that the Lewandowsky survey had, and this loss of responses was not an issue – we are still comparing at best maybe 200 total skeptic responses from Lewandowsky with 1000+ skeptic responses in the new survey.
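The item-level comparison described above can be sketched as a one-liner. This is an illustration only; the 1→1, 2→2, 4→3, 5→4 mapping is my assumption from the scale descriptions above (1 = Strongly Agree … 5 = Strongly Disagree, with 3 as the unlabeled neutral):

```python
def collapse_to_four(responses):
    """Drop neutral (3) answers from the 5-point scale and map the rest
    onto the original 4-point codes: 1->1, 2->2, 4->3, 5->4."""
    mapping = {1: 1, 2: 2, 4: 3, 5: 4}
    return [mapping[r] for r in responses if r != 3]

print(collapse_to_four([1, 3, 5, 2, 3, 4]))  # -> [1, 4, 2, 3]
```

The resulting codes are then directly comparable, item by item, to the Lewandowsky 4-point responses.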

As to tlitb1 – what “priming” was done? The pre-information provided to respondents was that this was an attempt to re-create the original survey – and it was: same questions, same order, etc. – plus the single clarifying instruction that follows, which was in response to the complaints about the poor sentence composition and structure:

Please make the choice that most closely reflects your response to the question as written, even though you may not agree with how the question is framed.

Last – yes, the questions are largely junk. It would be much better to start from scratch. But there are two reasons not to here. First, it would not then be a remotely relevant comparison. Second, the reason those questions were used is that they are standardized “conspiracy”, “free market”, etc. question sets from prior work, with prior response data behind them – that is why Lewandowsky chose them.

As the survey folks here I think can attest, that is one of the things a good survey should have – prior work, where possible, at its base. This allows a level of validation that the responses received are in the same ballpark as previous versions. For example, the conspiracy questions were originally developed in Swami et al. (2009) … he still has a version of the survey with the base questions (and others) online and active at:

If you complete it, at the end it will provide the responses (at least in a % format). Note this survey (as did the original in Swami 2009) uses a 9-point Likert range.

So yes – the questions are garbage, and I cannot believe they were written that way – with all the flaws – by supposed experts – but they are somewhat of a “standard” which was why they were used, and in part why I re-used them.

Re: A.Scott (Sep 23 16:16), A. Scott, I think you did the best you could. I for one appreciated the neutral response as a way of saying — I can’t believe that people really use questions like this and pretend to run a professional survey…

Since, if you search the UN site, the exact phrase New World Order appears 364 times, I really did have trouble with the first question…

A powerful and secretive group, known as the New World Order, are planning to eventually rule the world through an autonomous world government, which would replace sovereign government.

I would have been able to give a 100% affirmative — as opposed to a weak (2) if you had rephrased thus:

A powerful and open group composed of bumblers, known as the United Nations see themselves as a New World Order, capable of controlling everything from Climate to Social Systems through Secretive Sub-Organizations such as the IPCC are planning to eventually attempt to rule the world through an autonomous world government, the UN, which would replace sovereign government.

I am sure you can understand the point… So when it appears as the first question it somehow translates in my mind to — the second point and my pen (mouse) freezes when I am asked to respond.

The questions were as many (most all) have noted, often terribly constructed with multiple points, making a specific answer difficult.

The neutral option was intended to help slightly with this, and seemed to me to have done so.

That said – I don’t disagree with Steve at all – it would have been better to stick with 4 choices, as in the original. I intended the results of my re-created survey to stand, and be analyzed, on their own. In hindsight, had I known the depth of the apparent problems with the original survey’s data, I may have chosen differently.

I think the responses to conspiracy theory will certainly be skewed in Scott’s survey. I don’t see how that would apply to the FM questions.

I think it would be interesting to run Lewandowsky’s analysis on this data set (if anybody can figure out what he’s done, and assuming there aren’t methodological errors in it, which still isn’t clear to me) or somebody else’s and see how the results compare. In particular, is there still a FM stratification between skeptics and warmers?

To be clear, I’d like to see if that portion of his study was reproduced with Scott’s survey. If it wasn’t, it probably doesn’t say anything good about Lewandowsky’s results. If it does, it doesn’t demonstrate validity, just consistency, which is still a good thing to know about.

Steve: Roman is close to replicating the SEM part (as am I.) It basically is an analysis of the correlation matrix. Since the correlation of many variables is materially impacted by “outliers” i.e. fraudulent/fake responses, as shown in my post a few days ago, SEM is not a magic bullet. Very much GIGO.

I was thinking last night that, like climate science, using PCA muddles the issues just enough that the fuzzy-headed can mull over ‘latent’ variables from bad data rather than address data quality. Like so many papers, they are just digging patterns from a mess without any understanding of the mess from which they came.

I was listening to NPR a couple of weeks ago and some high school girl was going on about how she wanted people to stop investing their entire lives in sports, because not everyone can be a football star (pick your continent for the definition of football) but everyone CAN be a scientist or engineer. I remembered back to the mass dropouts from engineering school and laughed at her. Maybe she was right though, and anyone can be a ‘research’ psychologist, because, like climate science, psychology doesn’t seem to hold correctness above the result.

Also to be clear, this is the real problem I have with faustus and Paul Vaughn and their pithy little comments. Simply regurgitating something that you’re told works or is “standard practice” in a field, neither makes what you say defensible nor correct.

The debacle of MBH and the 12(?) or so repetitions of this erroneous methodology come to mind:
Whether newer reconstructions are valid or not is a fair question, but there is no question that they are not consistent with these previous “hockey sticks.”

Lack of consistency is an example of “fails to validate across repetition.”

To be clear my point is just because a group of people decide something is “OK to do” doesn’t automatically make it OK nor necessarily defensible. How many decades did people practice data mining in epidemiology (or has it really even stopped?).

With social science instruments, unless you’re going to polygraph or something, you really have no objective method for determining the accuracy of the responses of an individual in a survey. (You might be able to catch some spoofers, but that’s not the same issue as accuracy of responses relating to introspectively obtained information.)

Steve: there’s a very high overlap between Loehle and Moberg proxies – something that I pointed out at the time. I haven’t commented on the new Esper paper yet, but it’s interesting as being a post-Mann paper i.e. it does not feel obliged to coerce its results to a Mannian framework. It also makes some very interesting observations of secular change in high-latitude JJA forcing.

Carrick,
I’m looking at robust statistical methods to see whether relatively simple algorithms can detect the fraudulent responses. There are some patterns in the fraudulent responses.

One of the problems in trying to apply robust methods is that the underlying variables depart so greatly from the assumptions of normality that underlie PCA, factor analysis and SEM. For example, things like CYMoon and CauseSmoke have only a few non-standard (fake) responses, so the first step in robust methods – calculating a robust sd – fails. I’m looking at workarounds. For many of the key Lewandowsky variables, there aren’t robust correlations or covariances, because nearly everyone rejects the conspiracies. So in addition to the first-order violations of survey methodology, it looks like Lewandowsky violated nearly every assumption of his statistical method.
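The degeneracy described here is easy to reproduce with hypothetical data (not the actual survey file): when nearly all answers to an item are identical, the median absolute deviation, a standard robust scale estimate, collapses to zero, so any method that standardizes by a robust sd fails at the first step.

```python
import statistics

def mad_sd(xs):
    """Robust sd estimate via the median absolute deviation (MAD),
    scaled by 1.4826 for consistency with the normal-distribution sd."""
    med = statistics.median(xs)
    return 1.4826 * statistics.median(abs(x - med) for x in xs)

# Hypothetical conspiracy-item responses: nearly everyone answers 1
# ("strongly disagree"), with a handful of outlier 4s.
xs = [1] * 95 + [4] * 5
print(mad_sd(xs))  # -> 0.0: the robust scale estimate is degenerate
```

With a zero robust scale, robust standardization divides by zero, which is exactly why workarounds are needed for items that nearly everyone answers the same way.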

It’s pretty amazing. If things like this regularly get published in psychology, it’s even worse than Mannology.

“I think that Lewandowsky’s decision to sneer at criticism will prove unwise.” Why? What do you believe will be the consequences to him?

Steve: I find it hard to believe that his reputation will be enhanced by publishing an article based on fake/fraudulent data, but the climate world is very strange. For example, Mann et al 2008 relied for its key results on contaminated and upside-down data and has not been retracted, though, by rights, it should have been.

In the long run, his effort will be viewed as a cheap little propaganda trick, even among those who won’t admit it publicly. The appearance is of a man purporting to be a scientist who has become utterly undone by his biases.

The motivation for this is the same motivation that drove Gleick to extremes.

Yeah, and we know how mightily Gleick has suffered for his actions. Oh, wait…
A pattern of Mann, Gleick, and Lewandowsky blithely standing truth on its head without suffering any repercussions should be far more disturbing than the details of their mendacity.

I think that Lewandowsky’s decision to sneer at criticism will prove unwise.

Personally I don’t think this will ever be “proven”.

It might be seen though ;)

Lewandowsky constructed a system of proof that is unassailable – a system that you guys have been caught out being enlisted to play in. It goes like this –

I see people I don’t like and I hypothesize that they have characteristic A. Characteristic A is hardly understood by anyone, but it annoys people of characteristic L (i.e. me).

No one understands my annoyance.

I know there are many more people who don’t like characteristic B, whom I wish I could enlist to be on my side on this subject.

If I can show that characteristic A has some tendency to overlap characteristic B,

then who gives a crap if they are equivalent – I will have done my job!

This possibly has been shown by this study.

Lew is the worst, most common kind of polemicist that has been seen down the ages. Ironically, the fact that Australia is a nexus breeding this pseudo-academic idiocy at a greater rate today should itself be a focus of academic study; their politics is a better subject for any ambitious psychologist/psephologist, I say ;)

Steve: I don’t think that it would materially impact Strongly Agree or Strongly Disagree. That’s why I constructed the analysis in this post using the Strong versions. IMO, it is evident that the FM superzealots in the Lewandowsky survey are fakes. It’s too bad that Lewandowsky failed to establish data integrity, is it not?

It is Eli’s hobbit, when confronted with a question without a “don’t know”, if that were the best answer, simply to leave it blank. If the machine insists on an answer, one goes on to something else.

Now some, not Eli to be sure, might think accusations of fraud are serious matters, so the question is: are people here accusing Prof. L. of fraud, or simply saying that some of the people who filled out the survey were writing porkies?

Steve: why don’t you read the post where I stated, as I’ve said on several previous occasions, that there is no reason to believe that Lewandowsky was involved in submitting the fake/fraudulent responses. E.g. in this post I stated:

As I’ve said before, I do not believe that Lewandowsky was personally complicit in the initial submission of fake/fraudulent responses, though his decision to survey skeptics at anti-skeptic blogs was unwise, if not reckless. However, in my opinion, once the problem with fake/fraudulent responses was forcefully drawn to Lewandowsky’s attention (by Tom Curtis as well as me), Lewandowsky himself should have notified the journal and asked that the article be re-reviewed with particular emphasis on whether he had adequately ensured data integrity.

How does it affect a survey if persons decline to participate because of ambiguous questions?

I would have abandoned the Lewandowsky survey when I encountered a dilemma such as the Iraq War question. Four specific reasons were given for going to Iraq; only one was listed in the survey, so how do I answer?
It is possible others were involved in any of the assassinations but how do I know? So what would be my survey answer?
Who knows what Coke was thinking etc…

So is it believed there would be an equal number of mild skeptics like myself and anti-skeptics who don’t participate because of the questions that it balances out? Or are the questions more likely to cause a group who think similarly to abandon the survey?

Good points, Keith. I did click away from continuing the survey when it was posted recently at WUWT, because the questions were ill-formed, biased, and/or simply stoopid. Even the “don’t know” option (which was not in the original survey) could not redeem the ignorance and carelessness embodied in quite a few of those questions.

I would think that “skeptics” are much more likely to have that reaction, since a lot of the biased/loaded questions about climate, free market, etc. are less likely to irritate all the “Alarmists” who share Lewandowsky’s preconceptions.

Yeah, make me a third who would never have taken the survey because of the stoopidity. And as I believe sceptics are more sceptical, in an intelligent way, I agree this would deter more of us than those happy to receive the truth about climate from higher authority. The fakers eager to play their part in smearing us were the icing on the cake.

(When I say skeptics I include lukewarmers and the rest. We need your intelligence for my account to be true :))

I don’t believe that anybody with a sceptical bone in their body would have completed that survey when it was originally issued, as it would not have taken them long to start wondering who is pulling the strings and why they are pulling them – mistrust being at the heart of scepticism. At which point, I suspect most would have quit. Or, if not, the temptation to try to screw up the results by providing fake responses would have been great, so perhaps that is exactly what they got from sceptics and alarmists alike.

The survey struck me as an amateurish attempt to pigeonhole an entire group of people into the fruitloops category. The assumption that a representative population of on-line sceptics would assist in that goal by answering every question honestly, is frankly … nuts.

Don’t make the mistake of believing Lewandowsky is stupid. He isn’t. He has an agenda though. The Lew-paper isn’t about science, it isn’t about math, it isn’t about facts. It’s about marginalizing opponents, challenging their moral worth as human beings and making them fall into line.

The psychopathologization of dissent is a powerful political technique. What he basically says is: “You don’t agree with me? You won’t comply? Then you must be a lunatic and thus you should be medicated and placed in an asylum.” And he is a psychologist, even a professor, so he must know these things, right? And who would ever want to be a lunatic?

Yes Keith, as science, the survey and the paper are total and utter crap. But it isn’t about science, it is about politics, it’s about him appealing to his own authority, and about enshrouding his views in something that looks like science. He can probably bedazzle many people that way.

What I wonder is: how many of the other 140 papers he claims to be a (co)author of sink to the same standard as this one. He has probably been successful in pulling off similar stunts before.

You are likely correct in your assumption about motivation, but I believe that in fact Lewandowsky is practicing far above his level of competence. His bluster is not fake, however. He appears incapable of realizing that opponents may be far more knowledgeable and competent than himself. The existence of intelligent, logical skeptics just doesn’t fit his world view. At a deep visceral level he believes that the people he and SockPuppetScience clash with are all truly drooling morons and none of us can see the truth that glows brightly in front of us because we are so stupid/confused.

He is incapable of logical debate. He is not very bright. He simply floated up the ladder of soft science into his current tenure. His worldview is so skewed that even when a knockout argument is made, he can’t perceive it, like the story of the Native Americans who couldn’t see Columbus’s ships because they didn’t fit in their world view. (I believe that story about the Native Americans is bullshit, but it fits as a metaphor here).

Pathological Myopia would be a potential diagnosis for him. Maybe that can be added to the DSM-IV list.

With apologies to A. Scott for a sincere effort, I must say I would not turn over the results of either survey to a client. Lewandowsky’s for a variety of reasons–in fact I cannot remember an instance of sloppier and more ham-handed fielding. It really seems as if he set out to break every rule in the book and I hope reporting of this episode doesn’t creep beyond the climate discussion bubble, for fear of the loss of credibility for online data collection in general.

A. Scott, as mentioned above, the possibility that respondents were primed for the exercise is too great for me to be confident in your results. It would have been really interesting to conduct the survey using WUWT respondents concurrently–with a good survey, of course.

I will call the attention of other readers to one other unexploded mine–Lewandowsky said that his results are consistent because of responses to happiness with life questions. However, I have not seen reporting on those questions, nor the standards to which Lewandowsky compared them.

Tom:
Good catch on the happiness with life questions.
As to the issue of respondents being primed – I am not sure that I understand how they were primed. Is it not better for respondents to know the purpose of a survey rather than having them guess? There are different models for designing surveys – my preference is always to be very clear and explicit about what you are trying to find out.

subsets of our items have been used in previous laboratory research, and for those subsets, our data did not differ in a meaningful way from published precedent. For example, the online supplemental material shows that responses to the Satisfaction With Life Scale (SWLS; Diener, Emmons, Larsen, & Griffin, 1985) replicated previous research involving the population at large, and the model in Figure 1 exactly replicated the factor structure reported by Lewandowsky et al. (2012) using a sample of pedestrians in a large city.

I was wrong about the “Lewandowsky (2012)” reference elsewhere here as well … I should have looked at the “References”. Here is the paper they refer to:

Eli I read that nonsense blog you linked to. Short summary: “I already had this bias. I like this guy, and not being a numbskull like me, he must be really smart and anyway he confirmed my preconceived bias, therefore the science is good.”

Liking somebody isn’t much of a basis for selecting good quality scientific research. Not liking somebody isn’t much of a basis for dismissing their work either. I just sent in a paper with substantial correction that I reviewed, even though the guy is a FBB and an all around nice guy.

Thanks Eli for putting forward that link. I usually assume that people put forward the best arguments they have (left). And I think it has been quite telling to see how dearly quite a few have wished for the Lewandowsky study to contain some little grain of truth or even ‘science’ about those evil ‘deniers’

It also confirms my observation that nobody who feels that he needs to use the term ‘denier’ when debating climate, neither its science nor policy, has a shred of substance to contribute to any such debate. Just cheering for the ‘home team’ from the sideline ..

You need to put warnings about potential IQ damage up when linking to those sorts of things. It must be some kind of legal assault to click on a link and have your logic circuits damaged by goofballs with keyboards.

Oh, yes, the Idiot Tracker – that’s the guy with the blog about the GPS unit attached to his forehead. We ran into him demonstrating his statistical knowledge the other day. Funny that, he hasn’t been back since.

Now some, not Eli to be sure, might be amused by Prof. Lewandowsky’s incompetent smear job, but then Eli does not seem to have the prerequisite capabilities to understand just how laughable it was.

Alternatively a short survey like this could be run on Amazon’s Mechanical Turk for about 50 cents. It certainly is a different audience but chances are they would be the least primed by all of this debate.

I will call the attention of other readers to one other unexploded mine–Lewandowsky said that his results are consistent because of responses to happiness with life questions. However, I have not seen reporting on those questions, nor the standards to which Lewandowsky compared them.

Has anyone?

Not me.

But no matter: I think Lewandowsky is lazy, but he may be correct in his unseen reference’s broad conclusion. I.e. referencing unseen papers as a “control” is suspicious in its sloppiness, but *correct* in *correct*-society Australia.

If you have a paper that shows something that “everyone just knew darling why bother !” (pretty much the default position of the defenders) then why is his “control” not even in press?

Notice the “control” is the variation of his “counterbalancing” technique where he varies the SWLS questions across his survey.

As I said before.

You don’t need a pathological *ideation* you need to focus on what the assumptions are.

Why did Lewandowsky give the nod to only certain blogs and only then latterly offer the survey to “Skeptic” blogs with a certain variation (or flavour) of survey?

Experimenter bias is obvious.

The effort by Lewandowsky is laughably obvious here. I think SteveMc is doing a good job showing the wooliness of the terminology but there is enough in other realms to show this as shite – or as some have more subtly said – it is just a Daily Mail OMG survey writ large.

… their perception of consensus and their endorsement of scientific findings about the climate reflected in part a content-independent disposition to perceive scientific consensus, and a correlated disposition to accept scientifically well-established propositions. This finding replicated the factor structure reported by Lewandowsky et al. (2012).”

One presumes his reference to “Lewandowsky (2012)” would mean to this publication which just appeared online:

I noted on the earlier thread that if, as L. et al (2012) state, “… the model in Figure 1 [of L. et al] exactly replicated the factor structure reported by Lewandowsky et al. (2012) using a sample of pedestrians in a large city”, then there should have been no reason to extract factors or perform SEM to develop a model.

The new data should have simply been tested against the model from L. et al (2012).
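That check is straightforward in principle: hold the prior study’s loadings fixed, compute the model-implied correlation matrix, and compare it against the new sample, rather than re-extracting factors. A minimal numpy sketch of the idea (the loadings and the simulated "new survey" data here are invented for illustration and are not taken from either paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical loadings from a prior one-factor model (invented numbers).
prior_loadings = np.array([0.7, 0.65, 0.6, 0.55, 0.5])
implied_corr = np.outer(prior_loadings, prior_loadings)
np.fill_diagonal(implied_corr, 1.0)  # standardized items have unit variance

# Simulated "new survey" responses consistent with that model; in practice
# this would be the new sample's item correlation matrix.
n = 2_000
latent = rng.normal(size=n)
items = np.column_stack(
    [l * latent + np.sqrt(1 - l**2) * rng.normal(size=n)
     for l in prior_loadings]
)
sample_corr = np.corrcoef(items, rowvar=False)

# Root-mean-square residual between observed and model-implied correlations.
# A formal test would use an SEM chi-square statistic, but the point stands:
# the prior model is held fixed and confronted with the new data.
rmr = np.sqrt(np.mean((sample_corr - implied_corr) ** 2))
print(round(rmr, 3))
```

With data actually generated by the prior model, the residual is small (on the order of sampling error, roughly 1/sqrt(n) per correlation); a poorly fitting prior model would show a much larger residual.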

It appears I used your intended purpose for the middle position. Namely, if the question was so worded as to make any of the other 4 answers not in accord with my thinking, I chose that middle position. I was under no illusion that I was neutral on that question; rather, it served as a trashcan for that question.

A psychological observation from an engineer completely untrained in psychology.

I find it interesting that Lewandowsky, the dogmatist, did not allow a neutral middle ground in the individual spectra of responses while Scott, the skeptic attempting to replicate Lewandowsky, insisted on including a neutral middle ground even though he knew this would jeopardize his replication.

It seems to me that many skeptics approach the issues of CAGW from the POV of: “I did not know that, based on my training I do not think that anyone can know that, show me how you know that please”.

No amount of hand waving or pea thimbling will convince a skeptic who does not know that they should know, just because someone says so.

It seems to me that those who believe in immediate action to handle CAGW see only white and black with no grey areas. That is why Roger Jr. was seen to be a skeptic.

That’s a fantasy. Unfortunately, pseudoskeptics are frequently “super-zealous pro-free market.” In fact, as anyone who has spent five minutes on a climate comment thread knows, many pseudoskeptics will eagerly denounce the super-zealous pro-free market position as creeping socialism.

You have now spent more than a dozen posts attacking Lewandowsky without, as far as I can tell, articulating a single substantial problem with his study. This is indicative of Stage III Lewandowsky madness. Don’t be too proud to seek help.

Robert: You mean not counting the fact that Lew started with a survey done by convenience sampling. (In fact, he pretty much created a new category which we might call “super-convenience”, since he not only did a sample of convenience but did it only on sites that were guaranteed to be hostile to those he did not properly survey.)

And he intermingles PCA and CFA without distinguishing the two, which make a difference in a variety of ways, including what eigenvalues are significant. And he included responses that are obviously faked. And he withheld inconvenient data. And…

Yeah, other than that kind of Survey 101 stuff, I guess there’s no substantial problem with the study. He’s an expert at pushing buttons in Mplus, SPSS, etc.
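The PCA/CFA point above is concrete, not rhetorical: principal components analyze the full correlation matrix, while factor methods analyze only the shared variance, so the eigenvalues (and hence any "retain eigenvalues greater than 1" decision) differ. A minimal numpy sketch on simulated survey items (the loadings are invented; this is not Lewandowsky's data, and principal-axis factoring stands in here for the factor-analytic side):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# One latent factor driving five observed "survey items" plus item noise.
latent = rng.normal(size=n)
items = np.column_stack(
    [0.6 * latent + 0.8 * rng.normal(size=n) for _ in range(5)]
)

R = np.corrcoef(items, rowvar=False)

# PCA: eigenvalues of the full correlation matrix (unit diagonal).
eig_pca = np.sort(np.linalg.eigvalsh(R))[::-1]

# Principal-axis factoring: replace the diagonal with estimated
# communalities (squared multiple correlations). This lowers every
# eigenvalue, so eigenvalue-based retention rules give different answers.
smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
R_reduced = R.copy()
np.fill_diagonal(R_reduced, smc)
eig_fa = np.sort(np.linalg.eigvalsh(R_reduced))[::-1]

print(eig_pca.round(2))
print(eig_fa.round(2))
```

The first PCA eigenvalue here comes out near 2.4, while the corresponding reduced-matrix eigenvalue is well below that and the remaining ones drop toward zero or below, which is why treating the two methods interchangeably can change which factors appear "significant".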

Your website has a most appropriate name – keep on tracking. But unless you can see that approaching a particular activist group is not the best way of conducting a survey about attitudes among their opponents, that asking leading questions invalidates the responses, that failing to disclose one’s methods disqualifies work from being called science, and that psychologists should use statistical methods in the same way as statisticians or psephologists rather than inventing their own without a shred of justification other than “that’s how we do it”, you won’t find much.

“You mean not counting the fact that Lew started with a survey done by convenience sampling.”

That’s not a substantial problem. You want to do a different survey, with a different methodology, go right ahead. Disagreeing with how a scientist set up a study is NOT the same thing as finding a problem with it.

“And he included responses that are obviously faked.”

Nope, try again.

“And he withheld inconvenient data.”

Another claim you failed to demonstrate.

“And he intermingles PCA and CFA without distinguishing the two, which make a difference in a variety of ways, including what eigenvalues are significant.”

So the fact he knows more about statistics than you is a flaw? Funny.

“Yeah, other than that kind of Survey 101 stuff, . . .”

That’s the wonderful thing about fake expertise; it’s so versatile. You take a fake climate expert, they’re automatically qualified as a fake social science expert, fake economist, and so on.

I’m going to have to reject your argument from authority and ask that you demonstrate your claims. ;)

————————
“But unless you can see that approaching a particular activist group is not the best way of conducting a survey about attitudes among their opponents . . .”

You’ve made a number of mistakes here. First of all, being a normal person that accepts the realities of the physical world does not make you an “activist” (and, as the survey reveals, many of the people at SkS et al were, in fact, “skeptics.”) Second, if you are conducting a survey looking at the correlation of attitudes as expressed by other parts of the survey, where you get your sample has nothing to do with the correlation. That is, to borrow a phrase, “Survey 101.” Third, deniers had every opportunity to post the survey, but, no doubt fearful of what it would reveal, refused to do so. Their cowardice is no fault of Lewandowsky’s.

Lewandowsky used inappropriate statistical methods on data compromised by fraudulent responses, that had been collected with no attention to data integrity. If that doesn’t bother you, it’s hard to carry on a rational discussion.

Nope, he didn’t. You didn’t like the choices he made, which is something completely different. This is a pattern with you. You take research you find threatening, look for another way to analyze the data (there’s always more than one way to process data), proclaim your method unquestionably superior and start whining about incompetence and fraud.

As a short con, you’ve had some success among the gullible, but you needn’t waste your time selling that dreck over here.

“compromised by fraudulent responses”

Another claim you’ve failed to demonstrate. Any survey can be affected if people choose to lie. Lewandowsky very clearly laid out what he did to weed out fraudulent responses. It’s in the paper. If you think he should have done something else, the solution is to DO YOUR OWN STUDY, rather than engage in endless public mathturbation with the data of others.

“collected with no attention to data integrity”

Deniers talking about integrity . . . that one always makes me smile. It’s like the Syrian Army giving a lecture on human rights.

I wonder if with all your strong opinions as to how scientists should conduct themselves you’ll ever get off your comfortably padded tush and do some new science. You know, like an actual replication of a study, rather than the faux “replication” by mathturbation.

Steve: Lewandowsky’s methods are inappropriate because the data does not satisfy the assumptions of the methodology, not because I don’t “like” them. Nor did I accuse Lewandowsky of being complicit in the submission of fraudulent responses. However, I do think that Lewandowsky should have been concerned once the problem of fraudulent responses was drawn to his attention, and that his subsequent inattention to the issue is problematic. Lewandowsky himself criticized the Bray-von Storch survey for its failure to ensure data integrity; I am doing no more than applying Lewandowsky’s professed standards to his own work.

Robert, setting up a survey IS a science, and one of the more difficult at that. Lewandowsky, being who and what he is, should know that. That fact is what makes his decision to publish something so appallingly framed so questionable. If he really wanted proper data for a proper analysis he would have ensured that it was available. It might have required actually logging on to sites like WUWT and CA and clearly and explicitly asking for participants (which still carries a self-selection bias), but would have at least expanded his universe to unequivocally include known “sceptical” venues.

As it is, it is quite possible, contrary to Steve’s conclusion, that rather than the fraud responses being faked by AGW evangelists, the responses were actually “skeptic” leg pulling. Leg pulling has a long and embarrassing history in science and history from Piltdown Man to the Drakes Bay Plate. In EITHER case, the individual who quite obviously failed to maintain any integrity or quality in the study was Lewandowsky. He framed a patently biased “survey” with a clear agenda that a five-year-old would have seen through. He then posted his survey only in a biased venue (and yeah, “he” didn’t really do the grunt work such as posting the notices, but it is his study and the buck stops with him) where the sole respondents were likely to be individuals of his own persuasion.

“It might have required actually logging on to sites like WUWT and CA and clearly and explicitly asking for participants (which still carries a self-selection bias), but would have at least expanded his universe to unequivocally include known “sceptical” venues.”

It’s amazing that you still don’t understand why the source of the responses is totally irrelevant. Speaking of a five-year-old level of understanding.

If the source of the responses is irrelevant, why do those professionals involved in surveys, and the professional literature, spend so much time on the topic?

Not just professionals. This is the sort of thing we teach to school children.

This is not hyperbole in my case. I would use Lewandowsky’s method of sourcing replies as an example to 14 year olds about selection bias. What’s more, they would get it almost immediately, because it is pretty obviously incompetent.

Robert is thread bombing, and knows it. No-one actually believes that the sources of samples are irrelevant. By his logic it would not matter if the people of China were allowed to pick the president of the US because, hey, where people hang out makes no difference to their political views.

Second, if you are conducting a survey looking at the correlation of attitudes as expressed by other parts of the survey, where you get your sample has nothing to do with the correlation. That is, to borrow a phrase, “Survey 101.”

I didn’t know that. This is an astounding result! Having taught Sampling Theory on numerous occasions at both the undergraduate and graduate level, you would think that I would be aware of such an astounding piece of statistical knowledge.

Do you have a reference for it or was it perhaps something you made up on the spur of the moment to suggest that you knew what you were talking about?
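Whether the sampling venue can distort within-sample correlations is not a matter of opinion; it can be simulated in a few lines. A toy numpy sketch of classic range restriction (all numbers are invented; selecting respondents on a variable related to the measured attitudes attenuates the observed correlation, even though the survey "only" compares attitudes within the sample):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two attitudes correlated in the full population (true r = 0.6).
x = rng.normal(size=n)
y = 0.6 * x + 0.8 * rng.normal(size=n)

r_full = np.corrcoef(x, y)[0, 1]

# "Convenience sample": only people above a cutoff on x respond, as when
# respondents self-select at ideologically homogeneous blogs.
mask = x > 1.0
r_restricted = np.corrcoef(x[mask], y[mask])[0, 1]

print(round(r_full, 2), round(r_restricted, 2))
```

In this setup the full-population correlation of about 0.6 drops to roughly half that in the restricted sample; selection that induces rather than attenuates a correlation (e.g. conditioning on a common cause of both attitudes) is equally easy to construct. Either way, where the sample comes from plainly affects the correlation you measure.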

Robert: I had suspected that you were a smart guy who was just blinded by your extreme prejudices.

[quote]“You mean not counting the fact that Lew started with a survey done by convenience sampling.”

That’s not a substantial problem. You want to do a different survey, with a different methodology, go right ahead. Disagreeing with how a scientist set up a study is NOT the same thing as finding a problem with it.[/quote]

This proves that you’re 100% faking it. The design of a study is the foundation upon which everything else is based. No amount of statistical wizardry will rescue a convenience-sampled survey. Or your now-smashed reputation.

I agree, I wish ‘Robert’ would continue posting and demonstrate the level of SkSc followers, and of those who think they are arguing science through references to a ‘consensus’ that has to be achieved by exclusion and by erasing comments.

Actually, I must commend you Robert, for the bravery of posting such a stupid comment here. In the closed echo chambers of Deltoid, Tamino and SkSc you would probably get approval by the regulars who even might reinforce your ‘analysis’. But finding out things for real was never the motive there. And here you fully display the result of hanging there and believing that you are intellectually arguing a position ..

Robert: “as survey reveals, many of the people at SkS et al were, in fact, ‘skeptics.'” Um, the survey reveals many of the people at SkS in fact SAID they were skeptics. SkS is actively hostile towards skeptics, just like you, viewing themselves as the fact-loving ubermensch, like you. Only a skeptic in an over-the-top Street Corner Preaching phase of belief would actually participate there.

I’d encourage you to keep posting your naive rants here, so skeptics don’t have to drop by SkS or other zoos to see the animals.

I think there’s a legitimate case to be made that the bigoted and fanatical deniers, like you, are unlikely to expose themselves to real science at places like SkS. But of course such a confounder would tend to weaken the association between climate denial and conspiratorial thinking, not strengthen it. The fact that a messy pile of ignorance and hate like you avoids science sites strengthens the result, rather than weakening it. ;)

This line of reasoning depends on there being no fakes. But the fakes are obvious, for the reasons Steve has given. Lewandowsky should have been acutely aware of the possibility and should have acted upon reading Steve’s critique.

These are two very different views of the situation. But we also note that you are not able to express yours without calling contributors here that you hardly know ‘bigoted and fanatical denier’ and ‘messy pile of ignorance and hate’.

Just for clarification – the Robert you are all arguing with isn’t the same one who writes for SKS.

Steve: i.e. the Robert Way who told his fellow SkS editors that I was right on Tiljander, on Mann et al 2008, on Steig et al 2009 (but who did not acknowledge this online at SKS.) Who said that many had tried to bring me down and “not proven to be terribly successful”, that I was a “tough person to target” and who confessed that he didn’t “want to go up against that group [Climate Audit], between them there is a lot of statistical power”, provoking another SkS editor to confess “Real data and statistics are a subtle subject. I try to stay away from both, as far as possible.”, but who nonetheless accused me without evidence of being a “conspiracy wackjob”

Uh, no. It doesn’t. The claim that people faked their responses is independent of where the survey was published.

The belief in a conspiracy to rig the results of the survey through an elaborate ruse of pretend responses is amusing, and tends to reinforce Lewandowsky’s point. What it does not do is show Lewandowsky erred. Since any survey — any survey at all — inevitably suffers from sampling bias, reporting bias, and probably a half-a-dozen other problems as well, the response of social scientists to these complaints that people can lie to surveys and the population is not random would doubtless be “Duh. Welcome to human subjects research.”

Steve: I did not allege that there was a “conspiracy” to rig the results through fake responses. The very low number of persons purporting to believe in the wacko conspiracies means that a relatively small number of people could affect the results.

In years of writing posts at Climate Audit, I never alleged conspiracies, nor even used the word. However, SkS is rife with conspiracy theories and conspiracy allegations.

If you wish to discuss conspiracies, you will find fertile ground at SkS, but not here where we’re more interested in statistics.

“Lewandowsky collected responses from August thru October 2010.”
This seems most unlikely. Looking at the two blogs with a significant number of comments, all relevant comments were made by 1 September, i.e. within 48 hours of the posts going up. People were commenting on their opinion of the survey, the difficulty of responding etc. The absence of comments after 1 September suggests that practically all the responses were received by then.
The big unknown is the number of responses from Skeptical Science. In the long, long discussion I had, first with moderators at SkS, then with John Cook, it was never mentioned that Cook had been tweeting about the survey. When did SkS post it? When did responses come in? SkS won’t say. Doubtless the dates of responses will be part of the supplementary information that Lewandowsky has been promising us for the past two months.

Regardless, we know it was first offered in August, and was discussed as having most of the 1300 total responses collected when he presented at the end of September. We also know he was still soliciting more responses into October.

Or went through the motions of soliciting. We have no idea how many responded to the later efforts and, having elicited and massaged the data to find the answer he wanted, as reported publicly, it’s not at all clear Lew wanted any more or accepted any more.

“Discussion of survey objectives while in the field”
The medical example would be telling participants in a drug trial if they are getting the drug or the placebo.
In the case of the Lewandowsky survey, there was ongoing discussion about proving that skeptics are nutters while the survey was live, plus the questions were so ham-handed that anyone could guess the POV of the survey designer and his goal just from that.

Fantastic Robert! As well as trying to argue by analogy, seldom an impressive tactic but for some reason favoured by many of your group, you have just shown you know as little about medical research as you do about any other form! The gold standard of clinical research is the double-blind controlled trial, which is designed specifically to prevent either clinicians or patients being able to influence the outcome by skewing the data. Lewandowsky on the other hand created these opportunities both by the nature of the questions and by his comments during the survey period, and it is not unreasonable to think he did so intentionally. The alternative is that he was so biased or stupid that he did not realise that these were catastrophic procedural flaws. It appears he was not alone.

Hmmm, at work they seem to insist on the analysts being blinded (as well as the docs and the pts), which would seem to make it triple-blinded. A missed endpoint is a full restart, on any trends in the data that don’t meet threshold, even unexpectedly good trends. No post-hoc moving of the goal posts. No exploiting of identifiable subsets who are high% responders.
It seems possible that a 3 letter gov’t agency required this.
I recall a recent (book? article?) suggesting a lot of preventable mortality on a study that was not allowed to be unblinded, despite conspicuous evidence of unexpected bad outcomes that were not part of the official study design.
So, structured ‘blinding’ of studies is not without cost.
RR

What beats me though, is how anyone could imagine it requires a “conspiracy” for any person or group to act so as to further its own interests.

Government stands to benefit richly from public acceptance of CAGW, and government selects and pays for virtually all climate science. So what could be more obvious and unsurprising than government climate scientists declaring CAGW to be true?

Indeed, it would suggest a conspiracy (of integrity) if they declared it false or doubtful.

“What beats me though, is how anyone could imagine it requires a “conspiracy” for any person or group to act so as to further its own interests.”

First of all, Lewandowsky did not say that climate deniers believe that there is a conspiracy to promote global warming. He found that people who denied global warming were statistically more likely to express belief in unrelated conspiracy theories like the belief that the US government faked the moon landings. Now, you could argue that this implies that climate deniers are more likely to be gullible or that on average they suffer from an irrational fear of government. The paper itself, however, merely demonstrates the association.

But you should realize that your assertion that “Government stands to benefit richly from public acceptance of CAGW” is not self-evident and itself has a conspiratorial flavor. The argument that “government stands to benefit” usually rests on silliness like “It’s an excuse to raise taxes” or “It’s an excuse for government to expand.” The argument also assumes a degree of government control over the findings of scientific research that also involves a level of paranoia and motivated reasoning associated with conspiracy theorists.

I would suggest that to avoid the unfavorable comparison with conspiracy theorists, that you stick closely to what you can prove and avoid sweeping assertions like “government stands to benefit.”

Robert – you state that “[Lewandosky] found that people who denied global warming were statistically more likely to express belief in unrelated conspiracy theories like the belief that the US government faked the moon landings.”

But that’s not what he found at all … he found that people who filled out his online survey filled it out in a certain way, that is all. As it was an online survey no conclusion can be drawn. (If you believe online surveys can accurately reflect reality then I’d like to get your opinion on whether online surveys could replace voting at government elections.)

Uh, no. I’m going to have to reject your assertion on that score, sorry.

I suggest you read the paper and see how the survey responses were used. There’s a difference between using a sample as a proxy for the general public and comparing responders’ views on one subject to their views on another subject.

The question a lot of people are fumbling around with regarding Lewandowsky is generalizability. Which is always a question in any form of human subjects research. Yet no one is asking the only relevant question, which is: “Can these deniers, who were more prone to conspiratorial ideation, be considered representative of the broader community of deniers in that respect?”

The trouble with saying “They posted at Skeptical Science!” is that there is no reason to think that deniers comfortable at a pro-science website are MORE distorted in their thinking than the self-segregated. The trouble with claiming “People could have lied!” is that that is always the case in social sciences research, online or offline.

Overall, it’s a familiar pattern: people who don’t know much about the science criticizing people who do because of a result they don’t like.
Steve: again, you do not confront the issues. There is substantial evidence that Lewandowsky used fake and/or fraudulent responses, though there is no evidence that Lewandowsky was directly implicated in submitting the fake responses. However, Lewandowsky did misrepresent that the surveys were posted at “pro-science” blogs with “diverse readership”, when they were actually only posted at stridently anti-skeptic blogs with very small skeptic readership. This is a serious misrepresentation that, in itself, should cause the editor of the journal to re-consider the article.

It’s not simply a matter of saying that the respondents “could” have lied. There is evidence that fraudulent responses were submitted. In my opinion, as I’ve said before, Lewandowsky should voluntarily ask that his paper be re-reviewed with the reviewers specifically considering the issue of fraudulent responses. By not doing so, Lewandowsky is leaving himself open to criticism that he proceeded with the paper even after the fraudulent/fake responses had been drawn to his attention.

Update Sep 29: I’ve added the adjective “pro-science” used in the original article to the above inline comment at the suggestion of Tom Curtis. Readers here are aware that Lewandowsky did not claim to post at skeptic blogs; my objection is Lewandowsky’s representation that these anti-skeptic blogs had a “diverse audience”. I am not aware of any evidence that the anti-skeptic blogs have a “diverse audience”, as opposed to an audience that is nearly all hard-core anti-skeptics. Lewandowsky’s claim that these blogs have a diverse audience was, as far as I can tell, fabricated.

By using the wrong assumptions in constructing your analysis, you have also shown that you know nothing about how social scientists work, and probably should desist from your feeble attempts to audit them.
____________________________________________

Actually, Lewandowsky says he is a cognitive psychologist, not a sociologist. The two disciplines are quite dissimilar and use different approaches to their research. Sociologists typically rely on qualitative (interview) and textual analysis. A good example, on this very topic, is the article by Hoffman.

If you contrast Hoffman’s article with Lewandowsky’s, the differences are clear, both in style and in dogmatism. I also note that Lewandowsky fails to cite Hoffman.

Psychologists tend to use techniques like PCA and EFA to extract something “meaningful” out of “soft” data, typically from questionnaires. Unfortunately they have great difficulty in avoiding the truism “garbage in, garbage out”.
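The garbage-in/garbage-out point can be made concrete. Below is a minimal, purely illustrative sketch (the data are random numbers, not the survey’s): a PCA-style eigendecomposition run on pure-noise 4-point Likert responses still produces a “leading factor” accounting for the largest share of variance, which is why factor output alone cannot certify data quality.

```python
import numpy as np

# Purely illustrative: eigendecompose the covariance of pure-noise
# Likert data and a "leading factor" still emerges, which is why factor
# output alone says nothing about data quality (garbage in, garbage out).
rng = np.random.default_rng(0)
X = rng.integers(1, 5, size=(1000, 6)).astype(float)  # 6 items, 4-point scale
cov = np.cov(X, rowvar=False)                # item covariance matrix
eigvals = np.linalg.eigvalsh(cov)[::-1]      # component variances, descending
explained = eigvals / eigvals.sum()          # share of variance per component
print(explained[0])  # largest share is roughly 1/6 plus sampling noise
```

The point is not that PCA/EFA are invalid, only that a clean-looking factor structure is no evidence that the underlying responses were honest or representative.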

The validity of sociologists’ findings is qualitative (at best) and their results are almost always ungeneralizable, but they do not claim universal truths either.

The problem for individuals like Faustusnotes is that they have no appreciation of the critical “down field” problems associated with the data handed to them. Here I am not referring to “soft” but rather “hard” data, which forms the foundation of all AGW predictions. We can turn “data into information”, but only if the data is valid and the methods appropriate.

Faustusnotes is obviously a competent statistician, and his analysis is concise and well written. However, when he attempts to distance himself from the obvious implications by saying (about Lewandowsky):

“I think it’s a defensible decision to construct a factor set based on previous research and theory”

he neglects the point on “truism”. Faustusnotes, as appropriate, analysed the data using the precept of “unsupervised learning” – that is, we let the data disclose what information it contains. Lewandowsky, however, “tortured” the same data to tell him exactly what he wanted to know. I think people would recognise this as a significant bias. It is a variant of the same problem often referred to in this forum, when researchers fail to report data analysis which is “inconsistent” with their agenda. Does Faustusnotes seriously believe that Lewandowsky would still publish the article if it disproved his intent?

Faustusnotes, as a “warmist”, has done us all a great service by highlighting that, beyond the constant reality of “garbage” data, researchers like Lewandowsky do not analyse data without a bias, or, even worse, do not understand how to analyse data correctly. Either way, it casts a long shadow of doubt over the value of such findings and illustrates that their research is ‘agenda’ rather than ‘data’ driven.

“Visitors to climate blogs voluntarily completed an online questionnaire between August and October 2010 (N = 1377). Links were posted on 8 blogs (with a pro-science science stance but with a diverse audience); a further 5 “skeptic” (or “skeptic”-leaning) blogs were approached but none posted the link.”

Truncating the quote by omitting the qualifier “pro-science”, or suggesting that Lewandowsky suggested that the survey was posted at “skeptic” blogs, is straightforward misrepresentation, and difficult to justify as a memory lapse. Lewandowsky has not misrepresented the case, and certainly has nothing to fear from the editor reconsidering the article on the basis of that claim. As it happens, Lewandowsky was incorrect on two points. No link to the survey was posted at Skeptical Science, although John Cook did post the link in a tweet. Further, a link was posted on a “skeptical” site, although not, apparently, one that was contacted by Lewandowsky. The number of responses received after the posting at Junk Science appears to have been around 200, increasing the number of respondents from the N=1100 reported in a talk at Monash University on the 23rd to the N=1377 at the end of data collection.

Steve: Tom, I made no misrepresentation. The stridently anti-skeptic blogs at which the survey was posted do not have a “diverse audience”. That seems to me a matter of fact and a clear misrepresentation on Lewandowsky’s part, which would have wrongfooted a reviewer unfamiliar with the turf. Climate Audit is a “pro-science” blog, as are the other “skeptic” blogs. I am very reluctant to dignify the appropriation of this term by anti-skeptic blogs. Readers of this blog know that Lewandowsky did not claim to have published the survey at skeptic blogs, as this has been much discussed in previous posts. So I don’t think that my wording here made any actual readers think that Lewandowsky had claimed to have published the survey at skeptic blogs – though, as you point out, a link was published at Junk Science. To avoid confusion, however, I’ll clarify this in my remark.

You observe that some 200 responses were received subsequent to the Junk Science link, implying that these were associated with the Junk Science link. Lewandowsky also asked for responses from UWA students and faculty and it is unknown what happened to them. The survey link posted at Junk Science was a different link than at the anti-skeptic blogs. Why don’t you get the number of respondents at each link from Lewandowsky?

I am unaware of any EVIDENCE that Tamino, Deltoid and the other anti-skeptic blogs have a “diverse audience”, as opposed to an audience almost entirely consisting of hard-core anti-skeptics (judging from the tenor of comments and the identity of regular commenters). If there is evidence to the contrary, I’d be interested and will dial back on this point. Otherwise, based on present information about their audience, it appears to me that Lewandowsky’s claim that they had a “diverse audience” was fabricated. Again, if Lewandowsky was in possession of information about the audience of these blogs showing a “diverse audience” – information that is not available to the rest of us – I will dial back this comment. Otherwise, that’s my opinion.

Tom: Yes, Lew did totally misrepresent what he did by saying that the rabidly anti-skeptical blogs he posted the survey on were “pro-science” and had a “diverse audience”. As you should know, people come here routinely after having been insulted and censored at those blogs.

As you should also know, those blogs glorified Gleick, as one example, who lied to steal some documents from a competing, skeptical organization, then forged an incriminating document, then released the forgery with the stolen documents under the misrepresentation that he was a disgruntled insider at the competing organization. He became an instant hero. So how likely is it that vociferously anti-skeptical sites that view skepticism as an oil-industry-funded conspiracy and glorify lying would have a “diverse audience” as opposed to a bunch of folks who would fabricate fake skeptical surveys proving what Lew — an anti-skeptic — obviously wanted to prove?

Steve, all the “pro-science” blogs have a diverse audience in that their audience is drawn from a range of specializations and competence levels within the sciences, and indeed, from outside the sciences as well. As such, they are demographically diverse except that they are heavily weighted in favour of acceptance of mainstream climate science and (apparently) left wing politics. That is what I understood Lewandowsky to be saying. More importantly, for his claim to be false it is necessary not only that they not be diverse with respect to opinion on climate science, but also that they not be diverse by other demographic measures.

As it happens, the audience of at least some of the sites is also diverse with respect to opinion on climate science. The audience of Skeptical Science ranges from a man who believes that radar causes global warming, and another who believes the latitudinal differences in temperature are a consequence of the properties of geothermal energy from the Earth’s core (rather than the angle of incidence of sunlight), through to people even I would call alarmists. Such “skeptics”, along with far more reasonable “skeptics”, make up about 20% of active commenters at SkS, and more than 20% of active posts by my estimate. Heavily weighted is not the same as not diverse. To use an analogy, the former refers to the location of the mean, the latter to the magnitude of the variance.

Steve: as you are aware, Lewandowsky’s claim that the survey was posted at SkS was untrue (it was only mentioned in a tweet by Cook.) My impression of the comments at Tamino and Deltoid – where the survey was posted – is that the audience is almost entirely strident anti-skeptics. For a survey that purports to measure “skeptic” responses, this is not relevantly diverse.

As both of us have discussed, there are many hallmarks of fraudulent responses, including the implausible conspiracy beliefs. As I’ve noted elsewhere, it seems very implausible to me that nearly 50 Lewandowsky respondents hold FM views more extreme than any of the WUWT respondents – a pattern far more suggestive of fraudulent responses according to a caricature of skeptic beliefs.

Because Lewandowsky failed to ensure data integrity, it is impossible to “prove” that the responses are fraudulent. However, there is no reason why Lewandowsky should benefit from his failure to ensure data integrity. Lewandowsky objected strongly at SkS to the failure of the Bray-von Storch survey to ensure data integrity, and you have given no reason why a lower standard should be applied to his own work.

Again, I assert that Climate Audit is a “pro science” blog. I defy you to find any statement from me that departs in any respect from supporting scientific standards. In my opinion, efforts to withhold data – on which SkS has been conspicuously silent – are “anti-science”. In addition, Climate Audit is “pro statistics” and “pro numeracy”. I notice that SkS moderators acknowledge their own weakness and lack of experience when it comes to numeracy and statistics. Without numeracy, you can hardly claim to be “pro science”, other than in the sense that you are cheerleaders without the pom-poms.

Tom: it looks like you’re living in Candyland and wearing rose-colored glasses. SkS is a hostile place to skeptics. Hostile. Heavy-handed censorship, painting of skeptics as being shills in the employ of Big Oil, readers and moderators alike piling on, accusations of crimes against humanity by skeptics. It’s not a place where you’re going to find reasonable skeptics participating. Only newbies who didn’t know what they’ve stumbled into, and street-corner “preachers” who go there to convert who they might. The rest of the reasonable skeptics who might stop by from time to time don’t participate, they’re just there to see what’s happening in the zoo.

It’s also a place that glorifies guys like Gleick, who is a hero because he lied to a competing organization to steal documents, then forged a document and released the forged and stolen documents under the guise of an “insider”. Unfortunately for him, he was outed by bloggers who were smarter than he, who saw that the forged document was obviously written by him, and it was only then that Gleick fessed up — to everything except the forgery that revealed his deception in the first place.

So, yeah, you’re not going to have reasonable skeptics there who participate enough that they’re going to take a survey. And you’re going to have a fair number of anti-skeptics who want to be just like Gleick and deceive for the cause.
snip

Tom:
How could you know this, when the majority of people who visit these sites would normally not comment or post? Of those who do, are personal demographics recorded and summarised? I doubt it.
Anecdotal evidence, as represented by your several single-case studies, is hardly appropriate.
I also note that Lewandowsky says he collected ‘age’ and ‘income’ data, but does not present it (perhaps it is in the SI).
If I were a reviewer, I would require that this information be presented upfront to give me some indication of what type of audience was responding. How can you claim an audience is “diverse” when you present no information on its characteristics and do not even qualify what you mean by “diverse”?

Steve: the relevant issue is whether the Lewandowsky skeptic sample was contaminated by fake/fraudulent responses. While the demographics should be reported, it seems to me to be a separate issue and a distraction from the fraudulent response issue.

This would be relevant if this ‘study’ had been a serious attempt to investigate how various individuals position themselves in relation to a number of issues. But so far, nothing has indicated, much less established, that this big ‘if’ is fulfilled.

I would consider this study to be junk science even if all the answers were honest ones, based on how obviously it is trawling for a desired outcome.

And also because Lew pretends both to know the strength of what he calls established (climate) science and that the issues pursued by skeptics are baseless.

It looks like he was hoping to get some nutters voicing climate scepticism to confirm his prejudice, while (apparently?) unaware of the numbers of nutters on the climate-scare side, their obsession, and how happily they air it.

GPhill
How could you know this, when the majority of people who visit these sites would normally not comment or post?

Obviously Tom cannot prove that, as you know. However, I believe the onus of proof is on Lewandowsky (or his supporters) to prove that they did get genuine sceptics. There is no burden of proof on Tom at all to defend reasonable grounds for finding fault with the selection procedure.

I would suggest that the sort of sceptics who lurk at those places (Deltoid, Tamino etc) are not going to enter a survey. It takes a very special person to hang out there being insulted, and then co-operate with a survey obviously intended to demonise oneself.

I am exceedingly thick skinned. I teach Junior Maths, so it’s practically a job requirement. Yet I cannot bring myself to visit those sites other than to fact check what someone else says they said.

Steve: the real problem is that the journal peer-review mechanism accepted it as valid, despite the major design flaws and faulty assumptions in its logic.
Your point on the potential fake/fraudulent responses is correct, but how can you prove it? While reasonable, this is a claim the journal editors will always dismiss instantly, since they have already ignored all the basic standards necessary for such an article. No doubt Lewandowsky is using such claims of “fake/fraud” to show colleagues how his conclusions have already been corroborated. I can see the next paper already.

I don’t think that anyone would argue with you that an 80%–20% split of “believers–skeptics” in the survey need be a problem, provided that the two portions of the sample are properly selected. Stratified sampling has long been a staple of statistical methodology. You are right that the effect of this split would be to increase the uncertainty levels in the analysis. Where I strongly disagree with your position is in your justification that the sample of “skeptics” meets the “properly selected” criterion.
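The effect of the split on uncertainty is easy to quantify. A minimal sketch (assuming, purely for illustration, equal unit standard deviations in both groups, and using the N=1145 retained-response figure mentioned in the post): the standard error of a subgroup mean scales as 1/sqrt(n), so the 20% side of an 80/20 split is exactly twice as noisy as the 80% side.

```python
import math

# Illustrative sketch only: assume (hypothetically) equal unit standard
# deviations in both groups. The standard error of a subgroup mean scales
# as 1/sqrt(n), so the minority side of an 80/20 split is noisier.
def subgroup_se(sd, n_total, fraction):
    """Standard error of the mean for a subgroup of size fraction * n_total."""
    return sd / math.sqrt(n_total * fraction)

n_total, sd = 1145, 1.0                    # retained responses (from the post); unit SD
se_majority = subgroup_se(sd, n_total, 0.80)
se_minority = subgroup_se(sd, n_total, 0.20)
print(se_minority / se_majority)           # sqrt(0.80 / 0.20) = 2.0 exactly
```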

There are two aspects to the type of sampling used here: convenience sampling and self-selection.

The convenience sampling aspect arises because the samples were selected solely through the “pro-science” (Lewandowsky’s terminology) blogs. The failure to actively pursue placing the survey at the “skeptic” sites (frequented by the majority of skeptics) created a sub-population of skeptics consisting of those individuals (whether they comment or not) who access the sites used in the survey with sufficient regularity to be aware that the survey was there. The second aspect, self-selection, further filters the skeptic responders to those who not only decided to access the survey, but who also actually sat through the complete process of filling out the questionnaire.

In order for the sample to be representative of the broader population, this final group needs to have characteristics that reflect those of the wider population in roughly the same proportional magnitudes. Otherwise, the results will be biased (in your words, the locations of the means of the two will not be the same). Because the results in many cases in the paper are dependent on exceptionally small frequency counts, the effect of such bias is an extremely important factor in the overall picture.
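The small-frequency-count concern can be illustrated with a toy calculation (the numbers below are hypothetical, not the actual survey data): when two conspiracy items are each endorsed by only about 1% of 1000 respondents, as few as five fabricated “endorse both” responses are enough to turn a near-zero correlation into a substantial one.

```python
import numpy as np

# Hypothetical illustration of the small-count problem: two binary items,
# each genuinely endorsed by only 10 of 1000 respondents, with no overlap.
n = 1000
item_a = np.zeros(n)
item_b = np.zeros(n)
item_a[:10] = 1.0        # 10 genuine endorsements of item A
item_b[10:20] = 1.0      # 10 genuine endorsements of item B (disjoint)
r_before = np.corrcoef(item_a, item_b)[0, 1]   # ~ -0.01: essentially no signal

item_a[990:995] = 1.0    # five fake respondents endorse both items
item_b[990:995] = 1.0
r_after = np.corrcoef(item_a, item_b)[0, 1]    # ~ 0.32: a "finding" from 5 fakes
print(round(r_before, 3), round(r_after, 3))
```

With rare items, the Pearson (phi) correlation is driven almost entirely by the handful of joint endorsements, which is exactly why a few fake responses can contaminate such results.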

Your justification is in the statements:

As it happens, the audience of at least some of the sites is also diverse with respect to opinion on climate science.

…

Such “skeptics”, along with far more reasonable “skeptics” make up about 20% of active commenters at SkS, and more than 20% of active posts by my estimate.

“Diversity” is not a sufficient justification for the sample to be reasonable. Unless, as mentioned above, the sample is genuinely representative of the overall population of “skeptics”, then the entire picture falls apart. From my own personal experience on the climate blog scene, I would suggest that most of the “reasonable” skeptics would be very unlikely to be among those who would end up completing such a survey. Either way, the burden of justification is on Prof. Lewandowsky’s shoulders.

His justification is:

Our respondents were self-selected denizens of climate blogs. One potential objection against our results might therefore cite the selected nature of our sample. We acknowledge that our sample is self-selected and that the results may therefore not generalize to the population at large. However, this has no bearing on the importance of our results – we designed the study to investigate what motivates the rejection of science in individuals who choose to get involved in the ongoing debate about one scientific topic, climate change. As noted at the outset, this group of people has demonstrable impact on society and understanding their motivations and reasoning is therefore of considerable importance.

This is a particularly unscientific explanation which translates to “whoever responded is who we are trying to describe”.

RomanM, there are two distinct issues here. The more important with reference to the paper is whether there were sufficient “skeptics”, and whether those “skeptics” that responded were sufficiently representative of the “skeptic” community in general for the data to be useful in analysing the beliefs of “skeptics”. There is some reason to doubt that. In particular, “skeptics” who frequent “pro-science” sites are probably not typical, and they make up 50%-plus of the “skeptic” sample. On that basis, the sample is biased and the results apply not to internet debaters of climate science in general, but only to those who frequent “pro-science” blogs. This is so even if there were significant responses from the posting at Junk Science, although the more responses from Junk Science, the more we can generalize the results.

The second issue, which I actually addressed in my comments, is what Lewandowsky means when he refers to a “diverse audience” of pro-science blogs.

When conducting a survey, it is desirable that the participants be representative in areas not explicitly examined by the survey. If we were to discover that the audience of the “pro-science” blogs consisted almost entirely of Rastafarians, for example, then it becomes possible that acceptance or rejection of Rastafarian doctrines is the cause of certain correlations found in the survey, rather than items actually examined in the survey, such as acceptance or denial of the IPCC consensus on climate change. So far as I can see, Lewandowsky advises us in the paper that the denizens of “pro-science” blogs are not unusual except in terms of beliefs about climate change and other issues directly assessed by the survey. Combined with the admission that no “skeptic” blogs posted the survey, it amounts to a claim that the sample is not biased with respect to social status, race, religion, etc., but is likely to be biased with respect to acceptance of the climate change consensus.

In this respect, it is important to note that any reasonable psychologist (or social scientist in general) would, on learning that samples were drawn only from blogs on one side of an internet debate, expect that the sample will be biased.

Steve McIntyre has taken a statement indicating the bias of the sample, and the reasons for it (ie, a bias towards the “pro-science” side of the debate) along with a statement that that is the only known bias in the sample and insisted it amounts to a claim that there is no bias in the sample. On the basis of his misrepresentation of Lewandowsky, he accuses Lewandowsky of misrepresentation.

Steve: oh puh-leeze. The most important issue is whether the Lewandowsky survey was contaminated by fraudulent responses by anti-skeptics. It was. Second, misrepresentations include statements that are materially misleading. The only reasonable interpretation of Lewandowsky’s statement about “diverse audience” in the context was that the anti-skeptic blogs in question had a “diverse audience” that included skeptics, not that they were gender and age diverse. I do not believe that there is any evidence that Tamino and Deltoid have a “diverse audience” in respect of the issue that matters. Accordingly Lewandowsky’s statement in his article was a material misrepresentation.

It amazes me how often warmist defences rely on wordsmithing – that a materially misleading statement was not an actual lie. That’s no way to persuade anyone of the strength of your position.
If Lewandowsky had been concerned about full, true and plain disclosure – as he should have been – he would have stated that the survey was posted at seven anti-skeptic blogs at which skeptic participation, as measured by comments, was sparse. He should have clearly stated that the survey methodology carried an unusual risk for an online survey: that some respondents at the anti-skeptic blogs would submit false responses in which they pretended to be skeptics holding a variety of wild conspiracy beliefs (that the respondent did not actually believe) in order to make skeptics seem absurd. He should have noted that his hypothesis that skeptics were associated with absurd conspiracy theories had been previously publicized at one or more of the blogs, and that some of the anti-skeptic blogs had not posted the survey anonymously, but had associated it with Lewandowsky, whose conspiracy theory had been widely publicized at such blogs. He would have noted very clearly that the number of people purporting to hold wacko conspiracy beliefs was very small overall and that findings concerning the small conspiracies could easily be contaminated by a few fake responses.

Once one gets to the statistics – and I’ve not commented much on this yet – Lewandowsky is abysmal. His methodology requires normal distributions. However, the distribution of answers to the small-conspiracy questions is about as non-normal as it gets. Nearly everyone disbelieves the conspiracy; there are a few (probably fake) outliers. A distribution less compliant with the assumptions of the Structural Equation Modeling method can hardly be imagined.
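The degree of non-normality is worth seeing in numbers. A hypothetical sketch (the counts below are invented for illustration, not taken from the survey): an item where nearly everyone picks “strongly disagree” and a handful of outliers pick “strongly agree” has enormous skewness and excess kurtosis, whereas maximum-likelihood SEM fitting assumes values near zero for both.

```python
import numpy as np

# Hypothetical response counts on a 4-point item: 980 "strongly disagree",
# 12 "disagree", 8 "strongly agree". A normal distribution has skewness 0
# and excess kurtosis 0; this distribution is nowhere close.
responses = np.array([1] * 980 + [2] * 12 + [4] * 8, dtype=float)
m, s = responses.mean(), responses.std()
skew = np.mean(((responses - m) / s) ** 3)        # ~9: extreme right skew
kurt = np.mean(((responses - m) / s) ** 4) - 3.0  # ~89: extreme excess kurtosis
print(round(skew, 1), round(kurt, 1))
```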

In this respect, it is important to note that any reasonable psychologist (or social scientist in general) would, on learning that samples were drawn only from blogs on one side of an internet debate, expect that the sample will be biased.

Then why proceed with the study at all if your purpose is to denigrate the side that showed no interest in participating?

a) I have a comment stuck in moderation. If you don’t intend to pass it through, can you delete it so it disappears.
b) In my browser your inline comment above is not bold after the first paragraph. It gives the impression at first that Tom Curtis wrote some of those things. Very confusing.

I think this comment from Steve bears repeating – it’s one of the most succinct descriptions of the issues I’ve seen:

If Lewandowsky had been concerned about full, true and plain disclosure – as he should -he would have stated that the survey was posted at seven anti-skeptic blogs at which skeptic participation, as measured by comments, was sparse.

He should have clearly stated that the survey methodology carried an unusual risk for an online survey: that some respondents at the anti-skeptic blogs would submit false responses in which they pretended to be skeptics holding a variety of wild conspiracy beliefs (that the respondent did not actually believe) in order to make skeptics seem absurd.

He should have noted that his hypothesis that skeptics were associated with absurd conspiracy theories had been previously publicized at one or more of the blogs, and that some of the anti-skeptic blogs had not posted the survey anonymously, but had associated it with Lewandowsky, whose conspiracy theory had been widely publicized at such blogs.

He would have noted very clearly that the number of people purporting to hold wacko conspiracy beliefs was very small overall and that findings concerning the small conspiracies could easily be contaminated by a few fake responses.

That said, Steve also took Tom Curtis to task for supporting Lewandowsky – which is, overall, a fair comment. However, Tom has also been highly critical, and I would note that the vast majority of this post by Tom was, if not highly critical, at least “critical.”

My personal feeling is that we need more people like Tom and Steve trying to work together – as is, albeit haltingly, occurring here. When you have someone like Lewandowsky – who is purposely and actively polarizing – it’s all the more difficult, but the problems must be addressed. I think we need to agree to disagree, but then try to keep the lines of communication open, and work together where there is agreement.

Steve: I agree that Tom has made constructive comments. I thought that he somewhat regressed today into the annoying wordsmithing that is all too characteristic of Nick Stokes: rather than conceding that highly misleading comments are misleading, Stokes typically argues in diminishing circles that the comment cannot be proven to be an out-and-out lie.

The reason why I regularly urge readers to “watch the pea” when reading Team articles is that, from time to time, we encounter statements which fall far short of full, true and plain disclosure, statements that may not be actual lies, but which are misleading unless parsed carefully. We’ve discussed this in connection with Gavin Schmidt on a number of occasions.

As someone that tries to write accurately, if someone tells me that one of my comments is misleading, I try to understand why the comment appeared misleading to him and will go back and clarify the point so that future readers are not misled. I did so recently in response to criticism by Tom of an inline comment that I’d written. Even writing carefully, one can inadvertently write unclearly; I take such criticisms seriously and try to make any required amends as promptly as possible.

In this case, I wish that Tom would look into the same mirror and try to understand why the Lewandowsky assertion is misleading, rather than go into Stokesian arguments.

And Tom – you made valid and well considered points in most of your statement – but then this:

Steve McIntyre has taken a statement indicating the bias of the sample, and the reasons for it (ie, a bias towards the “pro-science” side of the debate) along with a statement that that is the only known bias in the sample and insisted it amounts to a claim that there is no bias in the sample. On the basis of his misrepresentation of Lewandowsky, he accuses Lewandowsky of misrepresentation.

Either you are talking around the issue to create a defense, or the two of you are on different pages. Lewandowsky has proclaimed the sites “diverse” while directly noting their “pro-science” bias. Either you are diverse or you are biased.

The results he obtained showed, at very best, that around 17% of the responses were skeptic-leaning. Regardless of the further question of the reliability of those skeptic responses, 17% skeptic vs 83% non-skeptic is not by the wildest stretch “diverse.”

Lewandowsky showed he understood the problem by noting he also tried to solicit responses from “skeptic-leaning” sites, with no success. Why even bother to include skeptic-leaning sites if the “diverse” audience he had ready access to at the “pro-science” sites was acceptable?

That he had no success was because he made no effort – any rational person would expect a strong outreach would be required, but he made none. He did not even ensure that all of the recipients got, or understood, the emails.

Another, I think, excellent question, given that he understood the need to obtain skeptic-leaning responses: why did he try only 5 skeptic-leaning blogs versus the 8 “pro-science” ones he had already posted at?

There are many tools available to obtain at least a nominal sense of reach and draw. Where is his overall power analysis? And more specifically, where is the power analysis justifying his site selection process? I personally used Alexa rankings – not perfect, but they show a direct worldwide ranking for most websites. How did Lewandowsky make his choices?

And the big question – if he cared about obtaining a sample from “skeptic-leaning” sites, why did he fail to even try to reach out to the monster of all climate discussion sites – highly awarded, and with a well-known skeptic-leaning readership – Watts Up With That?

That he failed to even try to engage the largest climate discussion site overall, let alone one with a strong skeptic-leaning readership, proves Steve’s point. The only logical conclusion – when you consider the 5 skeptic vs 8 non-skeptic sites, the lack of any meaningful attempt at outreach to engage the skeptic sites, and finally the failure to contact and include the largest climate discussion site, with its well-known skeptic-leaning audience – is that there was no desire to obtain a representative sample of skeptic-leaning blog denizens.

And since the study was about those who are skeptical – about the beliefs of those who are motivated to reject climate science – this omission is so significant that it cannot have been made unknowingly. Thus Steve’s claim that Lewandowsky misrepresented that he obtained responses from a “diverse” pool is accurate. He did not.

Lewandowsky knew that obtaining samples from only the pro-science sites was not sufficient, but he made a half-hearted effort at best to obtain a sample from legitimate skeptics, while purposely and knowingly ignoring the largest skeptic-leaning site. Yet he claimed his sample was diverse nonetheless.

Tom – I’ll also concede upfront that, from a purely definitional standpoint, “diverse” (differing from one another; composed of distinct or unlike elements or qualities) technically could describe these “pro-science” sites if they had even a handful of participants with skeptic-leaning views.

We could, but we shouldn’t. Even simple common sense tells us that would not be a relevant definition of “diverse” in this case. The claim would be technically accurate, but functionally false.

I also believe that no intelligent person could visit those “pro-science” sites, spend any time reading the comments, and say that a sample of “skeptic” rated responses collected through those sites would be free from concerns over bias.

Lewandowsky is, as you note, an expert. He acknowledges the sample is lacking diversity by his attempted inclusion of “skeptic-leaning” sites. That he claimed “diversity” regardless, and proceeded based on samples obtained only through the “anti-skeptic” sites often openly hostile to skeptic thought or belief, despite his tacit acknowledgement that such a sample would not be sufficiently representative, is a misrepresentation.

You even spend the majority of your post agreeing – concluding:

“any reasonable psychologist (or social scientist in general) would on learning that samples where drawn only from blogs on one side of an internet debate, expect that the sample will be biased”

By proceeding with a sample he knew was biased, whose data integrity was questionable at best, and which his own actions acknowledge was insufficient to accurately represent the overall group being studied, Lewandowsky, I think, clearly misrepresents that the sample and his data integrity were sufficient.

To me this is a perfect example of the “technically accurate but still false” claim Steve describes. There was a lack of diversity, which Lewandowsky acknowledges both tacitly and by his actions. But even though he was unable to obtain a sample that was, by his own word and deed, representative, he proceeded regardless.

A question – would you accept the results of a survey about the opinions of “liberal-leaning” people if all of the data was collected only through strictly, and at times fervently, conservative sites? Or, more extreme yet, would you accept results of a survey about conservative values if all of the data was collected through a site such as the rabidly liberal Daily Kos?

I second the importance of engaging in civil dialogue with anyone who posts here with contrary opinions and provides some rationale for them. Even faustusnotes added to the discussion although his tone was inappropriate.

Conversely I think it would be best to avoid feeding trolls – such as the one whose best response to arguments he doesn’t like is “sorry, you’re wrong!”. Many posts arguing against a position put forth by Steve are so bizarre I question whether even the posters believe them.

“Lewandowsky could have easily mitigated the problem if he had followed my original suggestion that he ask the journal to re-review the article with a particular eye on the problem of fraudulent responses. You should have encouraged him to do so as well. Shame on both of you.”

I think those are unfair comments. Tom strongly critiqued the survey on those grounds when it came out. He did that on his home turf at SkepSci starting here: http://www.skepticalscience.com/news.php?n=1540#84394 and was attacked by Michael Sweet and others. After that he disappeared (nb: I am not a conspiracy theorist and do not believe Tom’s disappearance is evidence of any kind of conspiracy).

I believe (or speculate) that, given Tom’s demonstrated determination to understand and defend science from the fundamentals of physics to the softer economic sciences, he is probably discussing the survey and analysis one-on-one in emails with someone, and will come to a supportable conclusion in time.

I suspect Tom Curtis’ seemingly bipartisan criticism of Lewandowsky’s polling analysis is more damage control than a totally objective criticism. I also think he can project that image because some skeptics, and those suspected of being skeptics, have given the analysis some credence by analyzing the statistics used. What Lewandowsky has done is what anyone could easily do, legitimately or otherwise, in such polls: obtain responses from the less reasonable portion of the group that identifies with a position antithetical to one’s own. The implied conclusion – that the position is somehow weakened by the responses of its less reasonable adherents – is a political one and certainly not scientific. For the Lewandowsky analysis to have any of its claimed importance vis-à-vis a reasonable view of AGW and related policies, it would have to conjecture that the science and policy formation are the result of consensus thinking, and that would include the uninformed and less reasonable part of any side of that consensus.

The Lewandowsky analysis is nothing more than a transparent attempt to affect the political stage of the AGW debate. I would think that Tom Curtis realizes what Lewandowsky is attempting here and if he were truly objective in these matters would be discussing the essence of the analysis and not the peripheral issues of who is playing nice and who is not.

“The most important issue is whether the Lewandowsky survey was contaminated by fraudulent responses by anti-skeptics. It was.”

Yeah, but you haven’t shown that. You’ve simply asserted it. You have yet to prove it.

snip

Steve: untrue personal accusations snipped.

There is convincing evidence of fraudulent responses, as I’ve asserted. As Lewandowsky observed in the case of Bray-von Storch, an author seeking to publish results from an online survey has to show data integrity. The onus is on Lewandowsky to demonstrate data integrity – he hasn’t done so. I made the very moderate suggestion that Lewandowsky request re-review of this article, asking the reviewers to pay attention to the possibility of fake responses. You should support that request.

Such a reviewer, if diligent, would need to examine IP addresses and the unreported duplicate responses, among other things. Even then, I don’t believe that an objective reviewer could conclude that Lewandowsky had ensured his results were not contaminated by fraudulent data, and such a reviewer should therefore recommend rejection of the article.

This is very well put. Throughout my career I have found it extremely difficult to explain to laypeople the concept of taking a random sample to obtain population estimates. There are two misconceptions that are very hard to break through:

1. As long as the population size is large, the percentage of the population sampled is not relevant to the accuracy of the population estimate. Many times I’ve heard – “But you’re only looking at 0.01% of the data!”

2. No matter how big the sample size is, if the sample is not a truly random selection from the population we wish to estimate (as in a convenience sample), the answer will probably be wrong. Lewandowsky has not done the work to even define the population he’s trying to estimate. In fact a good question to add to the survey would be “I voted for George McGovern” (ok, most readers are probably not old enough to have voted in ’72).

I sometimes cite the Gallup poll, which looks at only a very small percentage of the electorate, and the disastrous Literary Digest poll of the 1936 election, which surveyed 2.4 million people and predicted a landslide victory for Alf Landon over FDR. Landon carried two states.
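Misconception 1 above is easy to verify directly: the standard margin-of-error formula for a proportion from a simple random sample contains the sample size n but not the population size at all. A quick sketch:

```python
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an estimated proportion p from a
    simple random sample of size n. The population size does not
    appear anywhere in the formula."""
    return z * sqrt(p * (1 - p) / n)

# A Gallup-sized sample of 1,000 – whether the electorate is
# 1 million or 130 million – gives roughly a 3-point margin.
print(round(margin_of_error(0.5, 1000), 3))  # 0.031
```

The Literary Digest failure illustrates misconception 2: 2.4 million responses were worthless because the sampling frame (magazine subscribers, car and phone owners) was not representative of 1936 voters.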

When Tom Curtis and Stephen Lewandowsky speak of “pro-science” blogs without a hint of irony, I think they are referring to “pro-post-normal-science” blogs.

That’s the only way it makes sense.

And to suggest that the so-called “pro-science” blogs are any more “diverse” when they censor posts with such annoying and dialogue-squelching regularity is also a stretch. They could be more diverse if they were more tolerant of opposing views, but they aren’t. Most skeptics I know would love to engage the proprietors of the warmist blogs, but they are usually denied that privilege after a short exchange – if they get that far. From what I can see, Lewandowsky’s blog is even worse than RealClimate.

Lewandowsky’s complete phrase “with a pro-science science stance” is awkward. Instead of using “pro-science” I would use the whole phrase at every opportunity with the quotes.

But more importantly it was completely unqualified and disconnected from his results. None of the bloggers have revealed their responses to the survey. So how does he qualify pro science and skeptic? Personal bias perhaps? How pro-scientific.

Anderegg et al 2010, “Expert credibility in climate change” co-authored by Dr. Stephen Schneider provides some guidance on how to understand one’s ranking of scientists in the climate arena and stance vis-a-vis the science. I will extend their thesis to bloggers in the climate arena that were asked to host Dr. Lewandowsky’s survey.

Using Google Scholar’s h-index gadget I collected the rankings of the principals of the blogs.*

Spot the “pro-science science stance” side Lewandowsky’s own score (37) and they would still lose by a landslide (total = 87, average = 12).

Conclusion

By the Anderegg standard skeptic bloggers are decidedly more scientific than “pro science science stance” bloggers.

* The ranking tool often confuses authors with similar names. I did my best to filter the results (e.g., with qualifying terms like “climate”, authors’ first initials, etc.). That said, this is the climate biz: most of my results are right side up, and the error bars on my averages are +/- a bristlecone.

[quote]“You mean not counting the fact that Lew started with a survey done by convenience sampling.”

That’s not a substantial problem. You want to do a different survey, with a different methodology, go right ahead. Disagreeing with how a scientist set up a study is NOT the same thing as finding a problem with it.[/quote] (Robert)
———————————–
Quite possibly the silliest assertion I have seen in years. Research methodology is the essence of science, and if there is a problem with the methodology, then there is a problem with the study.

Or are you suggesting that a scientist could just set up a dartboard with answers instead of numbers, invite some friends around and lay on beer, and let them provide a result for the study? No problem, according to you. Someone else could use a roulette wheel if they didn’t like the results.

There are recognised methodologies for designing opinion surveys, and while they are not perfect, they certainly ensure much more accurate and unbiased results than leaving it to chance.

It is safe to say that this paper broke pretty much every rule in the book.

Umm. Robert, you’re kind of looking terminally foolish on this thread. Perhaps you should be forthcoming with your CV, so we can understand the infallible logic underlying statements that seem close to imbecilic…

Start with a poll question asserting “grey hair is attractive” – yes or no? You can’t build a poll on this because “grey hair” is not defined in the poll, attractive to whom is not defined, there are only two responses possible, and it’s based on subjective responses that could change as fashions change.
So, you approach the makers of hair colouring and ask how many shades they make to disguise grey (40) and what the sales of each shade are each year, by tint number. You ask the Government statistician for the number of people with grey hair, and you then have the beginnings of a quantitative basis for answering the first poll question.
Although this is oversimplified, it’s an example of whether it is appropriate or valid to use traditional statistics for the evaluation of touchy-feely things. In my blithe ignorance, I had assumed that a separate subset of statistics had been developed for beliefs and intangibles and that there was good cause for caveat emptor.
Only the latter two words seem applicable.
……………..
I should have known this. After college I was selected for the Air Force Academy after a week of pre-screening. An arrogant European psychologist had the unstated task of ensuring that a group of 20 post-adolescent men, to be housed together for 4 years, did not contain a candidate whose interest was more in the other guys than in study. He took us one by one, and for half an hour he shouted questions in the form “Here is the first part of a sentence, finish it immediately with your first thought”. When he asked me “He dreams about….” I answered “Once a year”.
Even verbal polling needs some controls.
The pollster can get quite angry.

Steve, as agreed in the previous thread, I’ve completed an analysis of the data as I would have done it, which can be viewed here. It’s a long post but I’ve tried to put some simple explanations at the end for people who don’t understand some of the detail.

My simple conclusions are:

1. Lewandowsky’s free market vs. AGW findings are correct (as I understand them) though I got to them by a slightly different route
2. AGW skeptics are more likely to endorse only one conspiracy theory: the AGW one. They are no more likely to endorse the others than are warmists, and in my model conspiracy theories are uncorrelated with the AGW/Free market factors
3. Some assumptions Lewandowsky made at the start of his analysis about which constructs some variables are associated with led to the AGW conspiracy theory being forced into the same factor as all the other conspiracy theories, which led to the association between skepticism and conspiratorial thinking. In my opinion this is a spurious conclusion that was inevitable given the assumptions he made, but unfortunate given its consequences

I think HAS (who was commenting on the last thread) might have some constructive things to say – it seems to me that Lewandowsky’s factor analysis was more confirmatory than exploratory.

If anyone has questions about the revised analysis or the implications, I’ll be taking them over at my blog. I’m going to write a follow up post on the use of online surveys for studying online communities which I hope will rebut some of the criticisms being made here about that aspect of the data collection. Also, I would be very surprised if any skeptics really saw anything unusual or antagonistic about the results of my analysis: I’ve written the conclusion from a warmist perspective, but I don’t think there’s anything controversial in the results of my analysis. What a difference one correlation can make!!!

Thanks for the efforts, but you realize that your results will be totally ignored by all involved?

The Raving Denizens of SkS (like Robert) will say that you’re a poser and Lew is a world-renowned statistical expert (even though he incorrectly applies rules of thumb for PCA to CFA). Skeptics will continue to point to the fact that the original survey was not only a convenience sample, but was hosted at sites that are vociferously anti-skeptic, which glorify lying for the cause (Gleick, etc.) because their cause is to save the world.
snip

A convenience sample that is very thoroughly done, with a vetted and thorough mixture of questions might be salvageable. (Though never as good as a proper survey.) A self-selected convenience sample where it’s trivial for anyone from anywhere in the world to represent themselves in any way they want? Offered on websites where it’s very obvious that fraud will occur?

You seem reasonable and reasonably skilled, so why would you tar and feather yourself by defending something like that?

-snip –
Obviously your crusade against angriness still has a long way to go.
Steve: all references to religion are automatically snipped or deleted. With regular commenters, I would have deleted the entire exchange, but I’m giving you more latitude than regular commenters.

I do not moderate in advance and sometimes people break blog rules. I ask regular commenters not to respond to comments that break blog rules.

You are correct that it is difficult to make inroads against angriness, but I still try. It is too bad that you don’t make the attempt.

“AGW skeptics are more likely to endorse only one conspiracy theory: the AGW one.”

More likely than whom? Those involved in the conspiracy?

Couldn’t resist.

There are different kinds of conspiracies. The manipulation of results and findings at the IPCC is clearly organized and results from confirmation bias, in my view. I don’t think one needs to be a paranoid conspiracy theorist to come to that conclusion. It’s reasonable to think that way, given the evidence. It’s also reasonable to think that it’s not, and to defend the process.

Whether there is a vast, worldwide conspiracy to convince the human population that CAGW is about to overwhelm us, and whether it is tightly organized and villainous, is beside the point. The point is that belief in CAGW is widespread and not conclusively supported by evidence, in my view – not by a long shot. If that makes me a conspiracy-monger, so be it.

“the data should be examined without constraints on which groups of variables should be associated with factors”

If I understand that correctly, I heartily endorse it. One trick of correlation mining in psychology is to force really long surveys down the throats of your undergrads, slice and dice the data, and then present a survey to your target population that is designed to show the correlation you want (and never reveal the messy undergrad results!). Since Dr. Lew apparently surveyed his own university, chopped questions out of the analysis, played games with the factors, etc., there is some evidence of this type of correlation mining.
When Steve commented that:

“FN, if you look at Lewandowsky Table 2 footnote, he says that he excluded CYClimateChange from his conspiracy factor. Thus, I don’t think that your analysis on this issue is on point.”

In respect of L. et al’s use of CFA, they simply identified three sets of questions (Free Market, Climate Science and Conspiracy) and thought each could be reduced to simpler forms if they shoved them into the statistical sausage machine.

Why they did this is unclear (Heath et al just summed the free market responses when they used the same questions in their scale – I think SM also asked somewhere in this saga why not do this), and while they cite studies they based some of their questions on (Heath et al, Swami et al), they don’t import any understanding from that prior work into the constructs used in this study.

So any reduction in dimensionality is derived de novo from this study’s data set. It’s EFA coupled with assumptions (not hypotheses) about the questions forming a set.
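For readers who want to see the distinction concretely, here is a minimal sketch of exploratory factor analysis on synthetic data (invented items, not Lewandowsky's actual questions): when the latent structure is real, EFA with a varimax rotation recovers it without any pre-imposed assignment of questions to sets.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 2000

# Two independent latent traits (stand-ins for "free market" and
# "conspiracist ideation" -- purely illustrative names).
free_market = rng.normal(size=n)
conspiracist = rng.normal(size=n)

# Four observed items: the first two load on trait 1, the last two
# on trait 2, each with measurement noise.
X = np.column_stack([
    free_market + rng.normal(scale=0.4, size=n),
    free_market + rng.normal(scale=0.4, size=n),
    conspiracist + rng.normal(scale=0.4, size=n),
    conspiracist + rng.normal(scale=0.4, size=n),
])

# Exploratory fit: no assumption about which items belong together.
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0).fit(X)
print(np.round(fa.components_, 2))  # loadings show the two-item blocks
```

The loadings matrix pairs items 1–2 on one factor and items 3–4 on the other, derived purely from the data, which is the sense in which the reduction is "de novo".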

In doing that they throw away information about interactions between individual questions in different sets (e.g. some of the relationships f. unearthed). But if you want to reduce your data down, them’s the breaks – it doesn’t help an investigative study though.

I too wonder about the questions that got excluded (climate change is a hoax, AIDS is US govt created). If the experiment had been designed before the questionnaire, this should have been identified as an issue. Further, the claimed contamination goes both ways: what is the impact of having asked these questions on the key variables measuring rejection of climate science and HIV-AIDS? Having asked both questions, it is too late to eliminate one and say the contamination of the other can be ignored.

Perhaps the exclusion was a last minute thing, due to an overactive peer reviewer desperate to find something in the study to find fault with.

HAS … Lew et al do claim their work “parallels” and “replicates” Heath & Gifford 2006 etc. Which at minimum implies they matched their results with the others.

Except that Heath & Gifford do not provide sufficient information in their paper to support Lewandowsky’s claims.

There is no definitive evidence in Heath & Gifford to determine the scale they used for the FM questions. One might assume, from other suggestions and inferences, that they likely used a 5-point scale. Lewandowsky used a 4-point scale. At minimum, if Lewandowsky did contact the authors and verify the scale, there should have been a notation about how he balanced his 4-point responses against the Heath et al 5-point system.

The difference in scales can very clearly have an effect. Lewandowsky notes that, compared to this other work, his results showed a larger association than any other existing study. He passes this off as possibly arising from his use of the magical SEM.

However it would seem highly plausible – considering he did not mention this significant issue or any adjustment, along with the other issues raised and the general sloppiness – that this difference was the result of comparing 4-point to 5-point scale derived data.
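For illustration, one common (though crude) adjustment when comparing instruments is to map both onto a common range with a linear rescaling. A hypothetical sketch (my own, not anything Lewandowsky describes); note that a linear map does not remove the coarseness difference between 4 and 5 response categories, which is the harder problem:

```python
def rescale(score, old_min=1, old_max=4, new_min=1, new_max=5):
    """Linearly map a response from one Likert range onto another,
    e.g. a 4-point scale onto a 5-point scale."""
    span_old = old_max - old_min
    span_new = new_max - new_min
    return new_min + (score - old_min) * span_new / span_old

# Endpoints map to endpoints; interior points fall between categories.
print(rescale(1))  # 1.0
print(rescale(4))  # 5.0
print(rescale(3))  # ~3.67 -- no exact 5-point equivalent exists
```

The last line is the point: a 4-point "3" has no exact 5-point counterpart, so even a documented adjustment involves judgment calls that should be reported in the methods.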

There are other possibly significant issues with Lewandowsky’s claims regarding the Heath & Gifford 2006 results as well.

Paralleling previous work, we find that endorsement of a laissez-faire
conception of free-market economics predicts rejection of climate science

Those historical analyses mesh well with empirical results which show that people’s rejection of climate science is associated with the embrace of laissez-faire free-market economics (Heath & Gifford, 2006; Kahan, 2010).

Rejection of climate science was strongly associated with endorsement of a laissez-faire view of unregulated free markets. This replicates previous work (e.g., Heath & Gifford, 2006) although the strength of association found here (r ≈ .80) exceeds that reported in any extant study. At least in part, this may reflect the use of SEM, which enables measurement of the associations between constructs free of measurement error (Fan, 2003).

Hi over here. My feeling in writing my earlier comment was that the Heath et al data set would be more useful to compare with on this dimension (but I hadn’t noticed that it too might have the potential issue around 4 or 5 categories). The weighting issue needs to be checked too – L. et al eliminate one question from the set and weight others unevenly (because of the PCA), and that will have an impact.

Heath et al is a better piece of work than L et al (despite its more limited sample) simply because it addresses the experimental design and sampling issues up front. Probably this is because, I see, Yuko Heath believes in quantum theory. I have never quite been able to go that far, but have always felt a sound understanding is an essential skill in the 21st century.

HAS … read the results in Heath & Gifford 2006 – from beginning to end – especially as they relate to Lew’s findings. See what you see.

And yes, it is pretty much certain Heath used a 5-point scale. We know Lew used a 4-point scale. We don’t know that Lew knew, or made any adjustment – if he did, it seems that should have been a significant step requiring description in the methods.

Steve, I would now like to put a little time into repeating the analysis I conducted using the data A. Scott collected. Can you tell me where to find it, or could A. Scott?
Steve: I’m taking a look at it and plan to report on it at some point. I’ve asked A. Scott to hold off distribution until then.

As Robert points out on his own blog, there have been an amazing number of responses amongst climate skeptics to the Lew paper.

Would Scott consider running the survey again, this time, exactly as set up by Lewandowsky so Steve can have a better comparison? We already knew what the survey was designed for, before we answered it – as did the fakers – so on that score, another rerun would change little.

Lucy, rerunning the survey for a third time would produce more junk. The methodology for the first version was fatally compromised; the second was an amusing but statistically irrelevant exercise; a third would just be farce.

All open internet surveys are meaningless. Rerunning an open survey which has been widely discussed for a third time would produce results worth less than zero.

It would be worth running a proper survey about the relationship between people’s attitudes about a range of issues and climate change, but (i) it would not be an open internet survey and (ii) it would look very different to the caricature of a survey that Lewandowsky promoted.

“It would be worth running a proper survey about the relationship between people’s attitudes about a range of issues and climate change, but (i) it would not be an open internet survey and (ii) it would look very different to the caricature of a survey that Lewandowsky promoted.”

The Six Americas study does a lot of polling on climate change. Perhaps after all the attention “skeptics” have brought to Lewandowsky et al, they’ll add some conspiracy theory questions.

They do ask questions about trust, which have consistently shown that climate “dismissives” (their term for deniers) consistently express mistrust of virtually everything: http://bit.ly/Srgb1N. That global mistrust is certainly consistent with, though not identical to, Lewandowsky’s findings.

I’ve just had an email conversation with John Cook. He thinks that the idea that anyone faked their responses on the survey is itself conspiracy thinking on the part of Steve et al. This reveals a lot. FWIW, such a survey practically begs for scamming since it is so obviously advocacy-oriented. If it gets posted on alarmist blogs where group-think and advocating “action” to save the planet are the norm, it does not require any concerted action for fake answers to be submitted. Individual action by like-minded people is all it takes. Being able to predict this merely requires a modicum of understanding of motivations and human nature.

John Cook … thinks that the idea that anyone faked their responses on the survey is itself conspiracy thinking

“Conspiracy” is steadily losing any meaning. First it was widely abused to describe acting out of obvious vested interest (government-funded climate alarmism), now even a single respondent can apparently “conspire”. (With himself…?)

I took little interest in the SkS private forum at the time but Geoff Chambers, bless his cotton socks, drew attention to the following on 27th March on Bishop Hill, highlighted almost six months later by Anthony Watts. This was the conclusion to Glenn Tamblyn’s desperate musings in response to the outing of Peter Gleick:

In a smoke filled room (OK, an incense filled room) we need a conspiracy to save humanity.

Seeing conspiracies against them everywhere, even in the most honest and open criticism, then saying this to themselves. History says it’s not a good place to be.

Cotto: Do partisan politics play a large role in contemporary science education?

Dr. Berezow: Possibly indirectly. There are very few conservative scientists. One survey showed that only 6% of US scientists are Republicans, while 55% are Democrats. In the social sciences, the ratio can be as lopsided as 30 Democrats for every 1 Republican. Obviously, a discipline that is so ideologically skewed in one direction is going to produce research that reflects that internal bias. Partisan politics probably plays little to no role in the objective “hard” sciences (biology, physics, chemistry, etc.), but it’s hard to believe ideology doesn’t affect the quality of the research that comes out of the more subjective social sciences.

Also, teachers’ unions — which are allied with the Democratic Party — refuse to accept any reasonable reforms in education (such as merit-based pay and charter schools).

I was wondering if anyone here could recommend a good website where climate is discussed. I’ve heard that there has been a new record low set for ice in the arctic but I can’t seem to find any mention of that here. Would melting trends of the polar cap be considered off-topic at Climate Audit? All I can find here is a lot of complaining about someone’s polling methods, or something like that.

There’s been a lot about the Arctic sea ice situation on Watts Up With That and many other sites, as I’m sure you know. Right now an Australian psychologist has taken it upon himself to smear, in a particular gross way, every single kind of person who questions global warming dogma, science and policy: skeptics, lukewarmers and Roger Pielke Jr alike. Survey and statistical garbage has already led to ridiculously credulous articles in the mainstream media. I’m grateful Climate Audit has turned its attention to such gutter scholarship – because in times past people averted their eyes from similar academic extremes and not far down the track real horrors took place. It’s ugly and you don’t have to read about it. That’s real freedom for you. But if you do look even for a moment it raises a question: why don’t the elite of climate science ever distance themselves from such malignant claptrap? Or is this the only way the whole edifice can prop itself up these days?

I do wonder: how many ideologues are prepared to debase their profession to blindly support AGW propaganda?

How many disciplines are content to let those professionally associated with them drag that discipline into the public gaze and let shoddy work stand as representative, without explanation, criticism or censure?

I suspect those answers are sensitively dependent on the point at which AGW-related funding begins to dry up in earnest.

Making things up again, are you, Robert? Oh, but I forget that in fact you are simply parroting Lewandowsky.

Have you thought about the basis for his measure of “rejection of science”?

The sum total of information on this aspect in his study comes from the responses regarding these two statements:

The HIV virus causes AIDS.

Smoking causes lung cancer.

Now, these two items may look simple on the face of it, but they must be the most powerful in the world, on a par with the Yamal tree. Why? Because from the 16 possible pairs of responses, Prof. Lewandowsky is capable of peering deep into our minds and extracting the latent factor which encapsulates the “rejection of a range of scientific propositions” or, in your words, “rejection of multiple scientific disciplines”. Imagine what he could do with three items. Chosen appropriately, perhaps those items could accurately measure intelligence levels or the propensity for accepting pseudo-scientific propositions without question.

But, of course, one should not overlook Prof. Lew’s statistical tools. In his words (emphasis his, not mine):

SEM is a technique that estimates latent constructs—that is, the hypothesized psychological construct of interest, such as intelligence or personality or conspiracist ideation. SEM does this by considering multiple items, thereby removing the measurement error that besets individual test items.

We cannot get into the details here, but basically SEM permits computation of the error-free associations between constructs, such as one’s attitudes towards science and one’s conspiracist ideation. It is because measurement error has been reduced or eliminated, that correlations between constructs are higher in magnitude than might be suggested by the pairwise correlations between items.

Imagine that. “Removing the measurement error”! Allowing the “error-free associations between constructs, such as one’s attitudes towards science and one’s conspiracist ideation” to shine through! All this with a method that involves factor analysis – a method which some statisticians consider highly subjective, and for which the criteria for judging the quality of any factor analysis have not been well quantified.
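The mechanism being invoked here is, at bottom, the classical "correction for attenuation": divide an observed correlation by the square root of the product of the two measures' reliabilities. A small sketch with illustrative numbers (mine, not from the paper) shows why the "error-free" correlations between constructs always come out larger than the raw item correlations:

```python
from math import sqrt

def disattenuate(r_xy, rel_x, rel_y):
    """Classical correction for attenuation: estimate the correlation
    between two latent constructs from the observed correlation r_xy
    and the reliabilities of the two measures (each between 0 and 1)."""
    return r_xy / sqrt(rel_x * rel_y)

# An observed correlation of 0.55 between two noisy scales, each with
# reliability 0.7, is inflated to a latent correlation of about 0.79.
print(round(disattenuate(0.55, 0.7, 0.7), 2))  # 0.79
```

Since reliabilities are always below 1, the corrected value is always larger, and the estimate is only as good as the reliability assumptions feeding it.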

Robert, maybe a survey should be done looking at the mistrust of activist scientists whose agenda driven work needs to be constantly audited to separate the propaganda from any real science which might accidentally slip in. If we ask nicely, we might even get whoever came up with the two gems above to design the questionnaire items and have Prof. Lew wave his mathemagical wand over the data to get the results we want to see.

@RomanM: That’s a critical point in all of this: He uses two statements and asserts that those who disagree with them are subject to “the rejection of science more generally”, and “rejection of a range of scientific propositions”. He refers to these two questions as “a range of other scientific propositions” that indicate “acceptance of … of other sciences”.

One might stretch things and say rejecting those two statements is a rejection of *medical* science, though that’s a big stretch. But a rejection of multiple scientific disciplines and science itself? I mean, there really isn’t much choice here except to call him stupid or deceitful.

I agree with Wayne that until Roman brought this to our attention – thanks to Robert, as always – the non-sequiturs in Lew’s own rejection of a range of scientific propositions were only latent, not fully computable.

One of the defining characteristics for me of the ‘consensus’ is how little thought is needed to comply. To have a genuinely scientific outlook, in all areas of human experience, is not trivial. To think one can boil it all down to answering the ‘right way’ – and the easy way – on two propositions from medicine (which I assume are correct but have never had the interest to look into) is patently absurd.

This brings to mind a little web synchronicity that occurred two days ago as I mentioned Glenn Tamblyn’s musings “in response to the outing of Peter Gleick”. Gleick’s ever-so-humble role in producing, on 5th January, “The 2011 Climate B.S.* of the Year Awards” in Forbes Magazine came back to mind. As I clicked to read the great man’s guidance to the simple folk as to the worst perpetrators of Bad Science in the previous year Forbes interrupted me with an online ad and this quote from Mahatma Gandhi:

One must become as humble as the dust before he can discover truth.

The trouble was, that ruled out listening to Gleick. My own BS detector had gone super-sensitive, just as it has as I reflect on what Roman says. But a BS detector that works reliably in every single area in which we come across a truth claim in this life which has a scientific component … such a quality I would call Wisdom. Seek her, yes, as the ancient wrote, but never assume you have fully possessed her.

If Lewandowsky were truly interested in rejection of science, he would have included in the poll propositions such as “Under a business-as-usual approach, the global average surface temperature will increase by 6 °C by 2100”, or “At the current pace of greenhouse gas production, the Greenland Ice Sheet will disappear by 2200”. It’s possible to “reject the science” in ways other than disbelief in the greenhouse effect.

So it would take expertise in clinical psychology to deduce that this paper is a blatant, completely bogus piece of smearology? On the other hand climate scientist Michael Mann felt he had the expertise to endorse it. Are you planning to complain to Professor Mann? Or do the disciplinary boundaries only provide protection one way?

Early on in this series I also suggested that there are people that hold the extreme positions recorded in the survey. What I objected to was Lew. suggesting that the extreme positions were representative of the majority of skeptics. I suggested I would even be happy to supply some names of people that just might have responded to the survey.

I have thought about this and recommend that people google this term — with the quotes: “Toronto Street News”. This is one group of people that seem to revel in conspiracies — I happen to have met some of them. I do not share their views on 99% of what they publish — and speculate about the rest. Did I find them crazy? Well — not really — except about the conspiracies. It’s not my cup of tea.

After perusing that site, people can make their own decisions as to whether some might answer “Strongly Agree” to a lot of conspiracies.

It did occur to me that had L. et al. used established measures of {free market}/{state control} and {conspiracy theorist}/{coincidence theorist}, rather than just making them up, we’d have had a better handle on these issues.

Haven’t got the faintest idea whether such measures exist and have been operationalised and standardised on any populations (and if they haven’t, psychologists should probably be circumspect about using the constructs), but if they have been, and they had been estimated in this survey, we would then be able to start to answer the question “how normal was the sample on these measures?”

What a complete waste of everyone’s time. The Lewandowsky paper should have just been ignored – an online survey in the field of the softest of soft sciences, psychology. Everyone knows that online surveys are a complete waste of time – do you think you’d get an accurate indication of the upcoming elections in the US using online surveys? What a joke. Instead of investigating the reduced Arctic ice cover data we have this nonsense (both by Lewandowsky and McIntyre).

Lewandowsky did some interesting scientific work. He’s not a climate scientist, so there’s no reason he should be studying the Arctic ice. He’s a psychologist who studies memories and popular myths, so he’s right where he’s supposed to be. Psychology is, of course, a “soft” science, but that doesn’t make it inferior. It’s still an interesting and important field of study.

Perhaps there is a desire, conscious or unconscious, to exploit the familiarity effect Lewandowsky writes about in the “debunking handbook”: the more we hear a myth, the more likely those already predisposed to it are to believe it. In this case, the myth is that there are serious problems with the paper. Despite failing to identify any real problems, deniers repeat the assertion over and over in dozens of blog posts until a large part of their audience believes major problems have been found.

In other words, perhaps the overreaction is partly motivated by the desire to create a belief that there’s a fire by blowing a lot of smoke.

The bar for being a denier has now been brought down lower than in living memory. One only has to think that one has identified a real problem in the Lew paper and one qualifies. The way this is going there’s going to be a handful of guardians of the truth, a bit like the men around General Custer, surrounded by seven billion deniers. Poignant. Or I suppose the madness of crowds will make up the numbers. But it’s a dangerous crowd that not only sees no problem in this paper but uses denial as a synonym for common sense.

The problem is that online surveys are non-scientific by their very nature and are completely unreliable. You don’t have to be a psychologist or a statistician to know that. Much better would be to get a survey done by a professional polling company that would include questions on conspiracies, AGW skepticism, political beliefs etc., applying the usual polling safeguards. This of course would cost money, so instead we get this lazy survey. Are Rasmussen or Gallup using online surveys when trying to predict the US presidential election? Online surveys are notoriously open to abuse. One well known case from a few years back: when the BBC asked people to choose online their favourite song of all time, the winner was “A Nation Once Again”, an Irish rebel song.

My point is that any online survey cannot be relied upon, and Lewandowsky’s use of such a survey demonstrates the ‘softness’ of psychology as a science. Obviously I do not mean that Lewandowsky should be studying Arctic ice, but if he thinks he can get meaningful data out of an online survey then perhaps he would be as well checking that ice data. At least the data would be worth checking.

[Still can’t understand why everyone hasn’t just dismissed this survey as worthless garbage in / garbage out. You can’t improve the quality of the original data by refining the analysis.]

Robert – simply making assertions (yes, no or whatever) is not an argument. You don’t answer the point that no online survey can be relied on. We can argue forever whether psychology is good or bad (I’d say its predictive capability is not good, although its post hoc explanations are interesting), but in relying on an online survey to make his point Lewandowsky does the field of psychology no favors.

[For the record I reckon that AGW is happening, the question is how much. Steve McIntyre, in an ideal world, would be working with Mann and other climate scientists checking and rechecking the data. Unfortunately that’s not the way it’s working out, and the climate change debate is dominated by ‘deniers’ and ‘warmists’. Lewandowsky is not helping, just adding more noise – and this noise is neither pro-AGW nor anti-AGW, it’s just noise, i.e. a useless online survey. It doesn’t matter what analysis Lewandowsky does or Steve does – it’s still just another useless online survey.]

In calling iskoob an idiot I take it you agree with the moderator at STW who said:

The theory of global warming is robust, on a par with the Theory of Gravity.

In your mind not only is this statement valid, but to question it is the action of an idiot. And in asking this question I too presumably am an idiot. Eureka! Thanks for the belated education in advanced science and logic – parts Cambridge couldn’t reach. It’s intimidating to be in the presence of such intellect.

The simplest way to deal with any wrong assumption I may have made was to say where I made it. For instance, you don’t think the theory of global warming is robust on a par with the Theory of Gravity. Your use of the word idiots suggested otherwise. Stop playing games and tell us what you think.

Second, Robert, here’s what I feel about the use of ‘denier’ by you and your ilk, as expressed in the HuffPost Comments yesterday:

Dallas, here’s why I find it loathsome to interact with posters like you on such things. Instead of saying “So Loehle has a warmer MWP,” which is fair comment, you say “So Loehle has a warmer MWP, a period inexplicably beloved by the denier set.” It isn’t beloved by me – I’m totally open minded on the subject. But I also take very seriously the use of denier as an allusion to holocaust denier. I hate the word with a passion. It makes me want to give up any interaction on the spot. Got that? … To have this disgusting epithet thrown around in the climate debate should make some people hang their heads in shame.

I was very careful in that thread, headed up by a lengthy critique by Michael Mann of Nate Silver’s recent book, not to get involved in insults against him or anyone else on the thread. Extending the same courtesy here would mean this word was deleted from your lexicon.

You are welcome to take offense at the label “denier.” I don’t care. You should realize, however, that it’s a broadly accepted term, and becoming more broadly used all the time. For example, polling denialism: http://bit.ly/QAjxTR.

Being a denier implies, obviously, that you’re in denial. Deniers have chosen to take that as a veiled allusion to Holocaust denial, which doesn’t really help you when you’re trying to refute the idea that you see hidden conspiracies all over the place.

Steve: please consult Paul Bain’s work on the function of dehumanizing language. Increased use of dehumanizing terms by one group against another is often not helpful, as Bain pointed out.

Clearly because people like you, Robert, are increasing their use of derogatory statements to describe people that disagree with their world view, it must be OK. I cannot begin to understand how anyone could fault that logic.

Most of the interest expressed above is not about the reality of global warming as broadcast 24/7. It is in the validity of the science used to evolve that hypothesis.
Problematically, there is still no definitive ‘engineering quality’ paper that quantitatively links GHG to temperature change in the global atmosphere. Can’t make a silk purse from a sow’s ear.

One other thought. With 20 years in nuclear power, 4 years in medical devices, 3 years in fossil fuel design, and 3 engineering degrees … I don’t think I’m “stupid” or “atypical”.

In fact, I think I’m a pretty much normal “Engineer/Nerd”.

WOULD I EVER TAKE THE L. SURVEY, or ANY SURVEY?

Absolutely not.

My life is too short.

I don’t trivialize it.

Therefore, depending on how many “me’s” there are out there, ANY survey has a LARGE SEGMENT OF UNCOUNTED PEOPLE who have views that WILL NEVER BE FOUND OUT..except at the ballot box, and then with NO attribution to MOTIVATION..

Okay, Phil, if you want to fixate on arctic ice at the moment at the expense of antarctic ice because it’s politically expedient and strengthens the AGW narrative, here you go:

Rob
Posted Sep 28, 2012 at 4:51 PM

sorry – OT

I was responding to theduke – you snip me for rebutting his nonsense OT but not him?

Steve: the OT issue was raised by another commenter. theduke referred the issue to another blog where the topic was being discussed. If you’re interested in discussing Arctic ice, that would be a place to do it. However, you’re right that he did add an editorial dig on the matter and I’ll snip that.

Your concern about readability for those who do not comment is one of the things that makes this blog stand out – that and the determination to let critics have their say, even those as courtesy-challenged as a recent visitor. I speak as someone who barely commented for my first four years or so here. One reason I always try and take snips (which happen frequently) like a man, sobbing silently into my keyboard.

I’m not sure this point has been made here, so I will make it: This Lewandowsky fellow is, apparently, a PSYCHOLOGIST–in fact, has a PhD in the field. Now psychologists are supposed to know a thing or two about human nature, and yet this Dr. L., a psychologist, apparently did not even consider the possibility that respondents might fake their responses, scamming the survey, even though even non-psychologists can appreciate this fact. Didn’t even consider it, didn’t check for it, didn’t make any attempt to structure his survey in a way to detect, easily, fraudulent responses. You would think that a psychologist would be the person MOST LIKELY to consider such things, and Dr. L. did not. Apparently, none of the peer reviewers–presumably also psychologists–considered such things either.

Steve, the arrogant belief that only your most negative construal of an opponent’s words can be accurate is characteristic of pseudo-scientists everywhere. It is only by holding such beliefs that they can persistently characterize their opponents as being incorrect on everything, as they feel the need to do. It makes rational debate with such people, including you, impossible.

Steve: oh puh-leeze. I absolutely do not extrapolate from someone being wrong on something to them being wrong on everything. I’ve emphasized this to readers on numerous occasions. This sort of extrapolation is completely foreign to me. I challenge you to produce an iota of evidence for this sort of extrapolation on my part.

Further, if there is a plausible interpretation of someone’s representations, I have no objection to adopting it. However, I’ve grown weary of wordsmithing. In the case at hand, from comments at Tamino and Deltoid, there is NO evidence that their supposedly “diverse audience” includes a material number of skeptics.

In my opinion, it is not a “most negative” interpretation of Lewandowsky’s representation of a “diverse audience” to interpret him as representing that the audience at these blogs, while anti-skeptic, had sufficient diversity to include representative skeptics. Far from being harsh, it seems to me the most reasonable interpretation. I’m trying very hard to be objective here. Other forms of diversity are simply not relevant here. Unfortunately, I think that you, like Nick Stokes does so often, are simply wordsmithing on this point.

There is convincing evidence that Lewandowsky’s survey contained fraudulent responses, that he made no attempt to ensure data integrity and that he has failed to act proportionally once the problem of fraudulent responses had been brought to his attention.

Lewandowsky could have easily mitigated the problem if he had followed my original suggestion that he ask the journal to re-review the article with a particular eye on the problem of fraudulent responses. You should have encouraged him to do so as well. Shame on both of you.

Tom – it is not “the most negative construal” of an opponent’s words. It is a plain review of their words and deeds in their entirety.

As I note below – Lewandowsky tacitly admits the sample of only the “pro-science” sites is not sufficiently diverse by including “skeptic-leaning” sites in his plan.

You and I debated over the Likert scale issue in Heath 2006. You felt it was sufficient that there was enough not-directly related other information in the paper to make an educated guess that the authors used a 5 point scale.

I don’t disagree the clues might suggest that answer.

But this is supposed to be science. And science, and/or other “professional” work, should be held to a minimum standard. To say we should consider it in the light most favorable to the author is simply wrong. Some of the findings of scientific and professional research and papers have far reaching, often life and death, consequences.

Should we accept the results of a drug research paper that did not have an absolutely clear definition of every scale and calculation used? I think the answer is absolutely not. What if the educated guess you make – however strongly implied – is wrong?

It is a demonstration of a serious lack of rigor and professionalism. Yes the one sentence in Heath is probably simply misplaced – should have been a line higher. But what if that was a drug study and that assumption turned out to be wrong?

You seem to want to excuse poor practices because they are probably ok. And here they may well be. But if the industry as a whole feels as you seem to, and as they demonstrate here – in both Heath and Lewandowsky, that professional standards are not of utmost importance … if publications and their peer review are insufficient to capture glaring issues apparent to a layman years later, then how can we trust any work?

You acknowledge the deficiencies. But then you seemingly want to excuse them. You say we should not construe in the most negative light. I strongly, 100% disagree. Strong and critical review and challenge is the cornerstone of research and science. There should be zero tolerance for errors, omissions and sloppiness, period.

Every author should welcome the strongest challenge and critics. If their work is honest and robust these challenges will only make the work stronger. If they cannot meet the challenge their work will be proven wrong.

There should be zero mulligans, no free passes, no “that’s good enough”s … no excuses. Either your work can pass all review, or it cannot.

FWIW, many people who favor one side or the other visit the other. The two largest blog traffic sources for Rabett Run are Real Climate and Climate Audit (about 3:1 for RC). Bishop Hill is also kind to Eli’s stats.

Impossible to judge that sort of thing without a lot more detailed website traffic info on pageviews, time and interactions on site, etc. I have indeed clicked to Rabett Run and SkS (and other Alarmist sites) from “skeptical” sites but hardly spent any time at either site (sorry Eli, the ratio of sneer to content is too high at both). Certainly would not have been at SkS or the others to see a link to the Lewpaper survey had it been posted there. Recurrent, regular visitors to either SkS or RR (or similar sites) would more likely come from RC at a far higher ratio than the 3:1 suggests. Maybe more like 30:1 ha ha (just a wild guess).

[quote]In a survey of more than 2,000 psychologists, Leslie John, a consumer psychologist from Harvard Business School in Boston, Massachusetts, showed that more than 50% had waited to decide whether to collect more data until they had checked the significance of their results, thereby allowing them to hold out until positive results materialize. More than 40% had selectively reported studies that “worked”8. On average, most respondents felt that these practices were defensible. “Many people continue to use these approaches because that is how they were taught,” says Brent Roberts, a psychologist at the University of Illinois at Urbana–Champaign. [/quote]
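The “waited to decide whether to collect more data until they had checked the significance” practice in that quote has a well-known statistical consequence that a short simulation can illustrate. This is an entirely synthetic sketch (the sample sizes, number of interim “looks” and simulation count are arbitrary choices, not anything from John’s survey): peeking at the result at several interim sample sizes, and stopping the moment it crosses the significance threshold, inflates the false-positive rate well above the nominal 5% even when no effect exists at all.

```python
import random

random.seed(1)
ALPHA_Z = 1.96                   # two-sided 5% cutoff for a z-test with known unit variance
LOOKS = [20, 40, 60, 80, 100]    # interim sample sizes at which the researcher "peeks"
SIMS = 2000                      # simulated studies; the null hypothesis is true in all of them

def z_stat(data):
    """z statistic for H0: mean = 0, sigma = 1 known."""
    n = len(data)
    return (sum(data) / n) * n ** 0.5

fixed_hits = peek_hits = 0
for _ in range(SIMS):
    data = [random.gauss(0, 1) for _ in range(LOOKS[-1])]  # pure noise: no real effect
    # Honest analysis: a single test at the planned final sample size.
    if abs(z_stat(data)) > ALPHA_Z:
        fixed_hits += 1
    # Optional stopping: test at every look, declare success at the first "significant" one.
    if any(abs(z_stat(data[:n])) > ALPHA_Z for n in LOOKS):
        peek_hits += 1

fixed_rate, peek_rate = fixed_hits / SIMS, peek_hits / SIMS
print(fixed_rate, peek_rate)  # peeking rate comes out well above the fixed-n rate
```

The fixed-sample analysis rejects at roughly the nominal 5%; the peek-and-stop strategy rejects substantially more often, which is exactly why the practice described in the quote lets “positive results materialize” from nothing.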

As someone who does object to the word, because of how it began, I agree with your question. One feature of the Lewandowsky phase of climate propaganda is the complete obliteration of any distinctions outside of The Cause as defined by The Cause.

Re: tlitb1 (Sep 28 18:01),
OK, you probably should have deleted this comment along with my original that it was correcting and the one of Roberts I directed my original question to.

In the hopes my paraphrase of Robert’s original statement (which didn’t seem too different in content and style from his others) doesn’t trigger the spam filter again: Robert said something that implied he thought Roger Pielke Jr, who runs one of the 5 “Skeptic” blogs, was in a sphere of “denial”. In the weak hope that Robert would even answer now, I genuinely would be interested in Robert’s answer to the restating of my question below.

@Robert
I’m not one who takes offense at the “denier” label/modifier, but I am always bemused by its myriad applications that make it seem meaningless when used near the word climate. In this case I think you must be including Roger Pielke Jr. within the group “denier”. If so can you tell me what you think he denies that justifies his inclusion?

[…] there is no further doubt about the connection of John Cook’s Skeptical Science effort to the advocacy disguised as science going on at the University of Western Australia with Stephan Lewandowsky. Since this was sent using […]

[…] has not posted anything on his blog. Steve McIntyre has posted two further articles (here and here). Has someone had a quiet word to Lewandowsky? Plus, we note, Psychological Science must be due to […]