29 January 2017

This post is mainly addressed to all of my online friends, acquaintances, contacts, etc etc. Whether we interact via Twitter, Facebook, Skype, e-mail, or some combination of these. (Everyone says I have to be on WhatsApp and Instagram too, but I already waste way too much time online as it is.) Of course, people I've never heard of are welcome to read and comment on this too.

These are strange times. Since about 24 June 2016 I have had this constant strange feeling of unease. It's faint, but real. And since 9 November 2016, it has become a bit less faint.

I don't think I've ever had any problems with mental health. That is, if I complete a measure of depression, I don't think there has ever been a point in my life where I would have scored above 0 on more than a couple of the items, and even on those I would only have scored 1. About once every three years I go through a little phase where I feel strangely lethargic for a couple of days (after controlling for hangovers), but that's about it.

I just looked at the Beck Depression Inventory and today I scored 6 out of 63. Probably my highest ever, but I didn't score more than 1 on any item, and a score of 1 to 10 is classed as "These ups and downs are considered normal". So apparently I'm fine at that level. This is more like how I feel:

My "work", such as it is (I don't have a job that involves leaving the house and going to an office with a boss where I have to do stupid shit), involves quite a lot of being critical of other people's work. I try to do this in as civilised a way as possible. I prefer to write my critiques of scholarly work in the form of manuscripts that are at least intended for publication in journals (except when something really pisses me off and I dash off a blog post about it, which I usually regret shortly afterwards when it turns out I didn't do my due diligence). When you work this way, you need a valve to release the pressure, because it's very, very slow and tedious to work your way through a series of articles, with almost innumerable statistical errors in them, about how people consume pizza (shameless pimping of our new preprint there). For me, that valve is mostly Twitter, and sometimes Facebook. But that brings me back face to face with... well, the causes of "that feeling". 80% of the tweets in my feed and every second Facebook post seem to be about what the whole world (or at least, my blinkered, woolly-liberal(*) section of it) is talking about.

I'm starting to think that this feeling of unease may be affecting my interactions. People who used to be up for stupid, nerdy banter about stuff that doesn't matter seem to be a little bit more sensitive. Stuff doesn't get discussed that probably ought to. Or, perhaps worse, stuff that shouldn't be discussed does come up. I've witnessed people whose fundamental views on a particular question differ by about one hair's width from each other having fights --- well, not quite fights, but exchanges of snarkiness --- over utterly trivial details. People seem to be a little bit on edge. I find myself wondering if I ought to drop that bit of banter into a tweet when the only people who will read it are people I've been happily bantering with for a couple of years.

I have been wondering whether I'm alone in experiencing this "gnawing feeling" in the form of (what I presume is) low-level stress. Today, as I wondered whether to publish this draft (which I've been working on occasionally for a few days now, not that it shows from the quality of the writing), I saw that my occasional co-conspirator James Heathers --- for whom the words "irrepressibly upbeat" are normally a mere pastiche of an understatement --- seems to have had something similar going on. So maybe it's not just me.

And I'm lucky. I'm white and male and all of the other things that place me above the midpoint of luck and privilege on every scale ever. Just after the US election result, I saw a tweet from a Black person that basically said, "Hey, liberal white folks. That feeling in your stomach right now? Welcome to our world, every day of our lives". So I'm conscious that this is probably just me having a whine about how I don't feel as good as I think I'm entitled to feel.

Currently I don't have many ideas for cheering myself up. Silly, over-the-top prog-rock wigouts work a bit, for a few moments. My slow acquisition of the documents I need to apply for Irish nationality provided a couple of moments of light relief last Friday, as one certificate arrived in the post and I got e-mail confirmation that another was on its way. But these are small consolations.

Anyway, back to the first paragraph (all the professional writers seem to have learned at writing school that you have to finish with a quirky point that ties back into your first quirky point). To my online friends, acquaintances, etc: If I am being "differently annoying" right now --- i.e., not in the normal "Nick, we get it, just shut up now" way :-) --- then I apologise, but things are, well, not normal.

PS: Normally I allow comments on my posts, but it doesn't feel right in this case. That seems to fit in with my theme here. Heh.

(*) I don't think I'm very political. I mean, yes, I don't like racism, and I think that multinationals probably ought to pay more tax, and the state in some countries should probably help poor people more, but I do find a lot of "progressive" ideas to be just sloganising. I think that there are real biological differences between the sexes, and I don't want it to be impossible to start a business because you might make a lot of money from it. I just wish the view was better from on top of this pile of fences.

14 January 2017

Amid all the stories of bad behaviour by researchers confronted with demonstrations of errors and other problems with their work --- I'm sure many readers have their own favourite examples of this --- I thought I'd start the year off with a story of somebody doing the right thing.

You may be familiar with our (that's James Heathers and me) GRIM article, in which we demonstrated a technique for detecting certain kinds of reporting errors in journal articles, and showed that there are a lot of errors out there. The preprint was even picked up by The Economist. GRIM has caused a very small stir in skeptical science circles (although nothing compared to Michèle Nuijten's statcheck and Chris Hartgerink's subsequent bulk deployment of it with reporting on PubPeer, a project that has been immortalised under the name of PubCrawler). Some people have started using the GRIM technique to check on manuscripts that they are reviewing, or to look at older published articles. Even the classic 1959 empirical demonstration of cognitive dissonance by Festinger and Carlsmith succumbed.

Round about the time that we were finalising the GRIM article for publication, I came across Mueller and Dweck's 1998 article [PDF] in JPSP, entitled "Praise For Intelligence Can Undermine Children's Motivation and Performance". I'm quite skeptical of the whole "mindset" area, for a variety of reasons that don't matter here, but I was especially interested in this article because of the tables of results on page 38, where there are no fewer than 50 means and standard deviations, all with sample sizes small enough to permit GRIM testing.

This looked like a goldmine. Unlike statcheck, GRIM cannot be automated (given the current state of artificial intelligence), so running one or two checks typically requires reading and understanding the Method section of an article, then extracting the sample sizes and conditions from the description ("Fifty-nine participants were recruited, but three did not complete all measures and were excluded from the analyses" is often what you get instead of "N=56"; if anyone reading this works in an AI lab, I'd be interested to know if you have software that can understand that), and then matching those numbers to the reported means in the Results section. So the opportunity to GRIM-check 50 numbers for the price of reading one article looked like good value for my time.

So I did the GRIM checks, taking into account that some of the measures reported by Mueller and Dweck had two items, which effectively doubles the sample size, and found... 17 inconsistencies in the means, out of 50. Wow. I rechecked --- still 17. And a couple of the standard deviations didn't seem to be possible, either. (I have some code to do some basic SD consistency checks, but the real expert here is Jordan Anaya aka OmnesRes, who has taken the idea of GRIM and done some smart things with it.)
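For readers who haven't seen the technique before, the core of a GRIM check is simple enough to sketch in a few lines. This is my own minimal version, not the exact code we used for the article; the function name, the "check the neighbouring sums too" belt-and-braces step, and the handling of multi-item measures are all choices of this sketch:

```python
def grim_consistent(mean, n, items=1, decimals=2):
    """Check whether a reported mean is arithmetically possible.

    A mean reported to `decimals` places, from `n` participants each
    answering `items` integer-scored items, must equal some integer
    total divided by (n * items), rounded to the same precision.
    """
    eff_n = n * items
    target = round(mean * eff_n)  # nearest candidate integer sum
    # Also try the sums on either side, to guard against
    # floating-point edge cases near rounding boundaries.
    for total in (target - 1, target, target + 1):
        if round(total / eff_n, decimals) == round(mean, decimals):
            return True
    return False
```

For example, with 25 participants on a single-item measure, a mean of 3.48 is possible (87/25), but 3.49 is not: no integer divided by 25 rounds to 3.49, so a reported 3.49 would be a GRIM inconsistency. With a two-item measure the effective denominator becomes 50, which is why getting the item counts right from the Method section matters so much.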

What to do? I got James to have a look, and he found the same problems as me. We decided to contact Dr. Carol Dweck, the senior and corresponding author on the article. Would she want to talk to us? Would she even remember what happened back in 1998?

To our slight surprise (given some other recent experiences we have had... more to come on that, but probably not any time soon), Dr. Dweck wrote back to us within 24 hours, saying that she was going to look into the matter. And within less than four weeks, we had an answer, in the form of a 16-page PDF document in which Dr. Dweck and her co-author, Dr. Claudia Mueller, had brought in Dr. David Yeager to help them. They had gone through the entire article, line by line, and answered every one of our points.

For several of the inconsistencies that we had raised, there was a conclusive explanation. In some cases this was due to some degree of unclear or omitted reporting in the article, some of which the reader (me) ought perhaps to have caught, others not. (To our amazement, two of the study datasets were still available after all this time, as they are being used by a teacher at Columbia.) A few other problems had no obvious explanation and were recorded as probable typos or transcription errors, which is a little unsatisfying but perhaps not unreasonable after 18 years. And in one other case, outside the table with 17 apparent inconsistencies, I had highlighted a mean that was (rather obviously) not wrong; getting a long sequence of precise measurements right is hard for everybody.

So for once --- actually, perhaps this happens more often than we might think, and the skeptical "literature" also suffers from publication bias? --- the process worked as advertised. We found some apparent inconsistencies and wrote a polite note to the authors; they investigated and identified all of the problems (and were very gracious about us calling out the non-problems, too). With Dr. Dweck's consent, I have written this story up as an example of how science can still do things right. I'm still skeptical about mindset as a construct, but at least I feel confident that the main people researching it are dedicated to doing the most careful reporting of their science that they can.

10 January 2017

Double-blind peer review (hereafter, DBPR) has quite a few supporters. I imagine that people who suspect that their manuscripts have been unfairly treated (say, by a reviewer who is a rival or just doesn't like them personally) are likely to be among this group. But I've seen other credible arguments that DBPR will level the playing field in science. Some research suggests that the identity of the author, or even just the prestige of their institution, can affect the likelihood of a manuscript being accepted. There are also issues about the fair treatment of women and other groups who have been traditionally disadvantaged within science. If it's all about the quality of the research and not the reputation of the big-name authors, then the science ought to be judged independently of its origin, and since we're only human, eliminating whatever relationship the reviewers might have with the author looks like it has to be a good thing.

The concept of preprints --- that is, putting a draft of your article somewhere online to get feedback from the community before you submit it for publication in a journal --- also has quite a few supporters. In the last couple of years we have seen the launch of several new preprint servers for the biological and social sciences, and the open access journal PeerJ has its own preprint section. I was recently a co-author on a preprint for which it wasn't quite clear where the best journal to submit it would be; this problem went away (give or take the article processing charges, but one of my co-authors had some funding) when an open access journal contacted us and offered to publish it. (Exactly what conflicts of interest this might create for the peer-review process is left as an exercise for the reader; in view of my general skepticism about OA journals maybe I am being a little hypocritical here, but I will claim that I didn't want to let my co-authors, most of whom are enthusiastic proponents of OA, down here.)

The biggest advantage of preprints is that you can get your research out there quickly. The GRIM article that I published with James Heathers is a good example of this. Within a month of us posting the preprint, it had received close to a thousand downloads and been featured in The Economist. Even with a quick review turnaround at Social Psychological and Personality Science (SPPS) --- which might have been expedited by the action editor or reviewers having been exposed to the preprint --- it took five months for this article to be published online.

However, there seems to be a problem when you mix these two good ideas. The whole point of a preprint is to get people talking about your new ideas, and give you feedback --- presumably in a less formal way than the leaden tone of a decision letter, and the subsequent obsequiousness of your reply ("We thank Reviewer 2 immensely for his extremely helpful comments on section 2.3, although we suspect that they might have been even more extremely helpful if he had read section 2.4 where we anticipated and addressed, with individually-numbered bullet points, every one of these extremely helpful comments"). This is generally going to involve you generating some publicity for your preprint. Now of course, you could create an egg account on Twitter, and a sock puppet on Facebook ("Danielle Kahnewoman", for example) and a Gmail address for correspondence, and spam the world with links to your anonymised preprint. But in practice, everyone is going to know who wrote it. And that means that when the manuscript gets to the reviewers at the journal that offers (or, in some cases, mandates) DBPR, those reviewers won't even have to resort to the standard techniques that they might use to identify the authors (e.g., seeing which author is the most cited in the References section); there is a high chance that they will already have read the preprint. Even if they haven't, they will just need to put the first sentence of the manuscript inside quotes into Google and they will find the preprint in seconds.

I discovered today that Personality and Social Psychology Bulletin (PSPB) --- a stablemate of SPPS where we published the GRIM article --- is introducing a policy of mandatory DBPR from March 2017. That's a decision for the Editorial Board, but it makes me wonder what their policy is on preprints. (Wikipedia has a list of journals and publishers whose preprint policy is known --- generally, it seems, preprints are fairly well accepted --- but the word "psychology" doesn't appear anywhere on that page.) It seems to me that by mandating DBPR, a journal is essentially committing itself to refusing to consider manuscripts that have previously been posted as preprints, because anonymity is essentially impossible --- or rather, it's untenable to pretend that anonymity is possible --- under such circumstances.

A related problem with mandatory DBPR, if the journal wants to actually attempt to enforce it (in my experience, many problems in any form of professional life start when someone creates a rule and then tries to be consistent in enforcing it, despite the messiness of the world), is that in addition to the assumption that the manuscript is not available through Google, it also assumes, more broadly, that it has not previously been seen by the reviewers in an unblinded state. That seems like a rather untenable assumption, especially in specialised fields. PSPB is a well-respected journal by any measure, but like any journal ("Cell wouldn't take it? Let's try Nature!") it may not always be the first port of call for the authors who submit there. Should the reviewer who has already seen the manuscript unblinded on behalf of another journal recuse herself because she knows who the author is, thus depriving the editor of an expert opinion (which, as a bonus, could presumably be provided very quickly)?

For what it's worth, I don't have a solution to this. I like preprints, but I also like the idea of DBPR (although here are some short counterarguments, and here is some pro-and-con discussion). I suspect that mandatory DBPR may be incompatible with the realities of the scientific world (even without preprints), because reviewers are human; as mentioned elsewhere in this post, they may have strong suspicions or even outright knowledge of the authors' identities, and it could place them in a morally ambiguous situation to impose a requirement that they declare such suspicions or knowledge. But I'm loath to criticise this decision by PSPB --- which is by no means the only journal to impose DBPR --- because it was presumably taken for good reasons and after considerable thought. Short of introducing peer review by AI robots (insert your own joke here about the last terrible review you received), it looks like we're going to be stuck with at least some of the problems associated with scientists being human for a while yet.

[ Update 2017-01-10 15:37 UTC: Thanks to Stepan Bahnik for pointing out that the new, mandatory DBPR policy at PSPB also applies to SPPS and their other stablemate, Personality and Social Psychology Review. I would be very interested to hear from any members of the Editorial Board of any of those journals about how they see the relationship between that decision and their policy on preprints. ]