28 October 2015

The tub’s been dirty for a while, but you finally have a quiet Sunday morning. So you pull on the cleaning gloves, get out a scrubber or rag for one hand, and put a bottle of cleaner in the other. You fully intend just to get rid of the soap scum in your tub.

But then you think, “I’ve already got the gloves on, and I’ve got the rag and cleaner, so I might as well do the bathroom sink.”

By the time the gloves come off, you’ve done the tub, bathroom sink, toilet, and kitchen sinks and counters.

For that reason, anytime you start a scientific project, open up your writing program and save a file. Even if it’s just a title, a few headings, your address, and maybe a few lines of introduction. If you have enough of an idea to start collecting data, you have enough of an idea to know, in broad strokes, some of the stuff you need to say in the introduction.

26 October 2015

Consumer Reports recently looked into homeopathic remedies. It was a pretty timid repudiation, but it did contain this critical line:

That makes no scientific sense, our experts say.

In this regard, Consumer Reports seems to be ahead of the giant science publisher Elsevier. Elsevier publishes an entire journal titled Homeopathy.

I was reminded of Homeopathy when Jane Hu commented that Elsevier’s journal Medical Hypotheses is “The X-Files division of Elsevier.”

Medical Hypotheses was intended to be a journal that would let people put out ideas that would be hard to publish in more conservative journals. This mostly meant ditching peer review, and a lot of crazy stuff got into its pages. I have never heard anyone praise a paper from Medical Hypotheses as demonstrably advancing a field. But I am at least sympathetic to the idea that maybe some speculative ideas need a home.

But if Medical Hypotheses is Elsevier’s X-Files, Homeopathy is its Area 51.

I hate to be blunt, and I expect someone will call me names, but... Homeopathy is crazy. Homeopathy is pseudoscience. It has no plausible mechanism of action. It has failed test after test after test. It does not work.

Not only is this journal published by Elsevier, it is indexed in the Thomson Reuters Web of Knowledge and has an Impact Factor.

But whenever anyone talks about the importance of gatekeepers and reputation and the value of traditional publishers... ask if a major scientific publisher should have a journal like Homeopathy on its roster. Any publisher that claims to value scientific rigor should not only be embarrassed by a journal like this; it should shut it down.

Scientific publishers should be judged not only on their highest quality products, but by what crap they keep around and can’t be bothered to get rid of.

So for every topic that someone thinks is pseudoscience on wikipedia, STM pubs aren’t allowed to publish studies on it? What?

Sure, cast doubt on Wikipedia rather than addressing the question. Say it’s just “someone” on Wikipedia, as though it’s the work of a lone troublemaker instead of a page edited by (at last count) 2,299 different users. Ignore that the Wikipedia article has 297 references, many of which go to peer reviewed journal articles. Ignore that homeopathy does not have credibility in the scientific community.

If I had time, I would love to create a list of all the articles in Elsevier journals with titles like, “Homeopathy cannot even be used to replace placebo”. Or maybe this article in the Elsevier journal The Lancet, which concludes in the abstract, “This finding is compatible with the notion that the clinical effects of homeopathy are placebo effects.”

I’m not arguing that academic publishers should outlaw papers on a topic. The point is that having a journal dedicated to bunk looks bad, and provides an easy outlet for bunk.

Indeed, Homeopathy may be one of the best arguments for keeping research articles bundled in journals: it provides a quick signpost that reads, “You can ignore this.”

Not only does Arachnothelphusa merarapensis have a pretty shade of purple going for it, it has some darned interesting habits, too:

All three individuals collected by the authors were found in trees, often quite high off the ground. Three is obviously not a huge number, but the team looked for burrows on the ground and didn’t find any, suggesting this species is a climber by nature.

Because this seems to be a tree dweller, it could easily be affected by logging. The good news is that this animal was found at Merarap Hot Spring Resort (which gives it its name), so its habitat seems safe right now. But the fact that the team could find only three individuals suggests it might be a rare species.

Instead of a bunch of short little football shapes, we see long, stringy networks of mitochondria within cells.

I’m embarrassed by how many times I’ve shown the artistic representations of mitochondria in my classes. They’ve mostly been provided by textbook suppliers, but those suppliers are far from alone in getting it wrong, as a Google image search shows. My colleague down the hall, Robert Gilkerson, who works on mitochondria, tells me that researchers have known that mitochondria form these long, connected networks since the 1990s.

Weirdly, Wikipedia shows no fewer than four of the wrong “bean-like” pictures, then goes on to say:

Although commonly depicted as bean-like structures they form a highly dynamic network in the majority of cells where they constantly undergo fission and fusion.

Why do we keep perpetuating the wrong image? The artists’ renditions are very helpful in showing the double membrane structure of mitochondria, which is very relevant to function. But that could be shown in a more realistic representation of the structure, instead of copying from other textbooks.

I say “copy” deliberately, because there is clear evidence that undergrad biology textbook creators do copy from existing texts, sometimes for generations (Gould, 1991).

01 October 2015

I completely agree that the problem the badges are trying to address is one that needs addressing: clarifying author contributions. The article describes efforts to come up with a standardized list of tasks that people might perform in a scientific study. I’ve done similar exercises in my biological writing classes. Usually, we end up with about five categories, something like this:

Concept

Experimental design

Data collection

Statistical analyses

Writing

The taxonomy the badges are working from is more elaborate, with 14 categories, although the article mentions another group that recorded over 500 reasons (!) someone might be an author on a paper.

The Nature article links out to four papers with badges, each badge signifying an author’s contribution. The badges are standardized, appearing with the same design in both journals.

In neither journal do the badges appear in the PDF of the papers. To me, this immediately limits the usefulness of badges. I save papers as PDFs, and I consider that to be the most “official” version of the paper. If the goal is to clarify authorship, it needs to be as integral a part of the paper as author affiliations or contact information.

Turning these contribution categories into badges seems like needless gamification. The article notes that software firms have used badges, which is probably why the only place I’ve heard badges discussed on my campus is our online learning center. That’s been about it.

I’m hesitant about adopting trendy things from software companies. Too often, you run the risk of investing a lot of time and effort into something that nobody uses and that is abandoned a few years later. For example, see this article about how universities bought into Second Life, and where that effort stands now:

I decided to travel through several of the campuses, to see what’s happening in Second Life college-world in 2015.

First, I didn’t see a single other user during my tour. They are all truly abandoned.

Second, the college islands are bizarre. They mostly are laid out in a way to evoke stereotypes of how college campuses should look, but mixed in is a streak of absurd choices, like classrooms in tree houses and pirate ships. These decisions might have seemed whimsical at the time, but with the dated graphics, they just look weird.

The work on standardizing the contributions seems very valuable to me. It moves us closer to the movie credit model, which I think scientific authorship will ultimately evolve towards, particularly with kiloauthored papers. But I am trying to imagine having “writing,” “acting,” and “special effects” badges go by at the end of a movie. It wouldn’t deepen my understanding of who did what.

I do not understand how contribution badges add value that you don’t get by simply writing out the contributions in words.

DrugMonkey asks how much it costs to generate a publication in something like Science or Nature or Cell. This was probably prompted by Steve Ramirez’s estimate that it took $3 million to generate one of his papers.