29 June 2018

The website Rate My Professors is getting rid of its “hotness” rating. Which means you won’t see stuff like this any more:

The idea of getting rid of the “chili pepper” has been floating around for a while, but fellow neuroscientist Beth Ann McLaughlin hit a nerve on Twitter this week. Her tweet drew almost 3,000 retweets, and many professors chimed in to say, “Get rid of this appearance rating.”

And to their credit, the website owners did.

This is a good thing for people in higher education. Rate My Professors is well known in higher education, to both faculty and students. I’ve encouraged students to use Rate My Professors, because I have a long record of teaching, and people have a right to hear other students’ experiences. But it matters when a site tacitly suggests it’s okay to ogle professors.

It’s nice to have a little good news. And to be reminded that sometimes, faceless corporate websites – and the people behind them – do listen to reason, and can change.

But frankly, every little bit of legwork just makes me less inclined to post preprints. I’ll still do it if I think I have some compelling reasons to do so, but doing this routinely as part of my regular publication practices? Maybe not.

11 June 2018

But I am a great believer in the saying, “Never try something once and say, ‘It did not work.’” (Sometimes attributed to Thomas Edison, I think.) I submitted another manuscript over the weekend that I thought might be a little better suited to preprinting, so after I submitted it to the journal, I uploaded it to bioRxiv. It was the weekend, so it sat until Monday. Today, I received a reply. My preprint was rejected.

bioRxiv is intended for the posting of research papers, not commentaries.

How interesting.

I like that this demonstrates that preprint servers are not a “dumping ground” where anyone can slap up any old thing.

My paper is not a research paper. I don’t deny that. Following that rule, bioRxiv made a perfectly understandable decision.

And while she is the only one to have “National Academy of Sciences” listed in the authors’ affiliations, the rest of the author list is nothing to sneeze at. It boasts other people with “famous scientist” credentials, like Nobel laureate and eLife editor Randy Schekman. Most of the authors are involved in big science journals.

One of my criticisms of preprints is that they would make the Matthew Effect for publication worse. People who are in well-known labs at well-known institutions would receive the lion’s share of attention. People who are not would have just another expectation with minimal benefits.

But this feels even worse. This feels like there’s one set of rules for the rank and file scientists (“No commentaries!”) and another set of rules for scientists with name recognition (“Why yes, we’d love to have your commentary.”).

I like the idea of preprints, but this is leaving a sour taste in my mouth.

Update, 12 June 2018: The manuscript found a home at a different preprint server, PeerJ Preprints.

04 June 2018

Making the rounds in international news in the last couple of days is a viral video that is typically described in headlines this way:

"Heroic crayfish cuts off own claws to escape the pot!"

Crayfish behavior, heat, pain, claws... This is right on target with some of my research. But so far, nobody has called me to break down what is going on in this video.

What the crayfish is doing is probably autotomy, not desperate self-mutilation. A crayfish dropping a claw is not like a person ripping off an arm. But the narrative is so good that nobody cares about the science.

03 June 2018

Years ago, while listening to CBC's Morningside, I heard this description:

"Canada is a country that works in practice, but not in theory. The United States is a country that works in theory, but not in practice."

I was reminded of this over the weekend reading a thread about data sharing (https://twitter.com/danirabaiotti/status/1002824181145317376). Universal data sharing between scientists is one of those ideas that sounds great in theory. So great that people tend to gloss over how it will actually work in practice.

Another example I was thinking about recently is post-publication peer review. In theory, it might be nice to have a single centralized site that included all post-publication comments. In practice, blogs have a pretty good track record of bringing important critical comments to a broader audience.

I see this over and over again with people putting forward ideas about how we should do science, the meta-science, so to speak: publication, peer review, career incentives, statistical analysis. I’ve been guilty of this. There are old posts on this blog about open peer review that I still think were fine in theory, but not grounded in practice.

I think we scientists often get very enamoured of solutions that work in theory, and undervalue things that work in practice.