Thursday, July 25, 2013

The Gains and Pains of Joint Authorship

There are many differences between the scholarly cultures of
the humanities and the natural sciences. One seemingly superficial but striking
difference is in the number of authors per paper. A significant number of
papers in the humanities are single-authored whereas in the natural sciences
multi-authored papers are the norm; for some articles, the author list is longer
than the article itself. For example, this paper on an
experiment performed at the CERN Large Hadron Collider sports an army of 2,926
authors, enough to fill two concert halls.

Publications in the humanities, such as essays in philosophy, linguistics,
and literary criticism, are typically single-authored. Such scholarly
endeavors are by nature individualistic. The
author’s style of writing and argumentation play an important role. References
to these essays are therefore often accompanied by quotes rather than by a dry summary
of findings. It is apparently not only important what
the author said but also how he or
she said it.

In the humanities, the author can be held responsible for the entire content of a paper; "all remaining errors are my own" is a common expression in the
acknowledgements of such scholarly contributions. In the natural sciences, complementary types
of expertise are essential to carry out a project. (I haven’t tried it yet but I’m
pretty sure you cannot single-handedly conduct an experiment in a particle
accelerator.) As a result, among the thousands of authors there probably isn't a single one who oversees the entire paper.

So what is the lay of the land in the social sciences? In
psychology—the field I am focusing on—multi-authored papers have become the
norm. Often co-authorships are student-mentor partnerships but especially with
the advent of neuroimaging techniques, the complementary-expertise model of the
natural sciences has become common.

Multi-authored papers raise all kinds of issues regarding
credit. How much credit should go to the first author relative to the other
authors and what is the status of the last author? Various journals, including Psychological Science as of this year,
are now requiring authors to specify their respective contributions. Who
designed the experiment? Who analyzed the data? Who wrote the paper? Who went
along for the ride? And so on. This is a good idea. Moreover, it is not only a
good idea for assigning credit but also for assigning responsibility.

To what extent should a co-author be held accountable for the
entirety of a scientific article? One view, which I would call the shared-responsibility view, holds that by signing on as co-author, a
researcher is responsible for the entire paper. The rationale for this
assumption is that if you want the credit then you should also accept the
responsibility. And if there is blame to throw around, it should fall on everyone.
As the Dutch expression goes: "If you burn your behind, you're going to have to sit on the blisters."

Another view assumes divided
responsibility. Authors are only responsible for the part that is covered
by their domain of expertise. The rationale for this view is that you cannot
hold people responsible for things they have no control over. This would seem
obvious in the physics example I just gave. Forgive my profound lack of
knowledge on the topic, but I would assume that the guy who cranks up the
particle accelerator and the guy who touches up the images in Photoshop have no
overlapping expertise, so it would seem unfair to rake the former over the
coals if the images are artistically subpar.

Which view should we adopt in psychology, shared or divided
responsibility? The case of Barbara Fredrickson, which I described in my previous
post, provides a poignant illustration of the issue. Fredrickson had
co-authored a paper with Marcial Losada in which they presented, among other
things, a mathematical model of emotional dynamics based on fluid dynamics. A recent paper
convincingly and eloquently showed this model to be a mathematical shambles.

In a response to this
critique Fredrickson radically disowned the model. She argued that the modeling
was entirely Losada’s work and that it, all things considered, was not even
relevant to the rest of the research, so that it could be safely expunged from
the record.

Many people find Fredrickson’s response inadequate (see for
example the comments on this Neuroskeptic post)
and it is easy to see why. On multiple occasions Fredrickson has embraced the
model and touted its virtues, for example in a popular book and in this talk (starting at
12:35). Her own website until very recently
displayed the butterfly image produced by the mathematical model. The image is
gone now, but at the bottom of this post is a screen shot.

By washing her hands of the model now, Fredrickson has
shifted from a shared model of credit to a divided model of responsibility. All
gain, no pain, in other words.

There may not be a good solution to the problem of assigning credit and responsibility, but "all gain, no pain" doesn’t seem like the right model. It might be a start to require authors to indicate not only which components of the work they want to receive credit for but also which ones they are willing to be held responsible for. And wouldn’t we want a perfect match between credit and responsibility?

10 comments:

Great post. Since you are an editor yourself, I have to ask, what do you think about the role of a journal that gives its imprimatur to a paper such as Fredrickson & Losada? Journals essentially define the academic record by choosing which papers to publish. So the issue of credit/responsibility for a good or bad paper goes beyond the authors. Journals act as though their curatorial role ends once a paper is accepted for publication. The barriers to publication can be obscenely high, as if polluting the record (or the journal's reputation) with a bad paper would be unconscionable. But when a paper is later discovered to be bad, journals just let the paper stand, essentially assuming that the academic record can straighten itself out through further publication and discussion. It's incoherent! It's like a gated city whose only defense strategy is to place its entire army outside of its walls.

About the post: You know, I'm always surprised that people don't exploit the loophole that multiple authorship provides. What's the cost of throwing your buddy onto a paper as a middle author? Virtually nothing. But for him/her, the benefits are huge: professional advancement and grant funding, to begin with. Indeed, shared responsibility might be the only check on this. Given the huge numbers of authors that I've seen on some papers in some disciplines, and the sneaking suspicion that at least some of the authors are added for no good reason, I've often thought that the amount of credit one gets for a publication should be divided by the number of authors of the paper.

Dale: Journals are supposed to retract articles that are fraudulent. Cognition retracted Hauser's tainted articles, and when I was at JML, we worked to retract a particular article. Articles that just end up being wrong, though, aren't retracted, and I don't think they should be. The academic record ought to include not only things we think are correct, but things that could have been correct but we now think are incorrect. (Indeed, if you're a real Popperian, then all we can ever know are things that are incorrect, yes?!)

I also think that what you describe is common: some co-authors seem to be along for the ride. Often these free riders are senior people rather than buddies (this may be more of an issue in Europe than in the States). In such cases, dividing the amount of credit by the number of authors would be hugely unfair to the junior author who may have done most of the work. Forcing authors to be clear about their contributions is a better solution. It is not airtight, of course, because some co-authors may be powerful and brazen enough to claim a bigger role for themselves than is warranted.

Hi Vic, just to be clear, I agree with you that articles shouldn't be retracted just because they are later found to be wrong. A journal editor should look at the reasons why something is wrong. Were the assumptions flawed, but reasonable at the time? Keep. Were the authors the unlucky victims of a Type I error? Keep. Was a clerical error discovered that completely invalidates the findings? Retract. Was there a conflict of interest that was not properly disclosed? Retract.

The F&L paper is an interesting case: the paper isn't wrong, it's intellectually vacuous, and that's putting it nicely. It's not even that there is an error in the math. The math is simply irrelevant; it's there just to make the paper seem more sciency.

My point was that journals are actually in an even better "heads I win, tails you lose" situation than the authors of a multiauthor paper, because on the one hand, they get credit (in terms of reputation and impact) for their role in curating the best papers, and on the other, when a paper turns out to be deeply flawed, it's the authors who get thrown under the bus, and the journal just washes its hands of the situation by publishing an opinion piece (and getting even more citations and a higher impact factor!). On Google Scholar the citation counts for the four main papers in vol. 60(7) of American Psychologist are 970, 92, 231, and 33. Guess which one is F&L?

Dale, your characterization of the Fredrickson & Losada paper is on the money. Accepting this paper was clearly an editorial error of epic proportions. You would think that the journal cannot simply let this stand.

Thoughtful as always, Rolf. I would suggest a hybrid model of limited shared responsibility. Its contours are blurry, but I would argue this is what we already have in place, at least roughly speaking.

Any time you put your name on an article, you are assuming some responsibility for it. Even if having a mathematical model attached to her research helped get Fredrickson more attention and citations, having the model discredited is presumably, on the whole, a net negative, even if she shoulders none of the blame. So there is an indirect cost to her, because her paper is undermined.

Where I think it's appropriate that there be some cost is in avoiding "willful ignorance": cases in which co-authors don't do their basic due diligence to ensure that their co-authors' work meets standards that they are comfortable with.

On the other hand, assigning too much blame would lead to two problems: first, a deficit in trust -- I think it's healthy to have an academic community in which you're not worried that your adviser has pulled a Stapel on you, or is otherwise doing bad science. I think if they can explain to you what they did and how they did it, unless there's reason to be skeptical or suspicious, you should be able to trust your co-authors.

The second issue is that you don't want to discourage inter-disciplinary collaboration. In fact, I think that we should be doing everything we can in the opposite direction. However, when you collaborate with people who don't share your expertise, while you gain access to a whole new universe of scholarship, you lose the ability to critically assess it to the same degree as you could in your own discipline. I certainly wouldn't want to discourage a psychologist from working with an economist, or a geneticist, or a mathematician simply because she could not easily verify her collaborator's work.

Thanks, Dave. I agree. In the case of Fredrickson, I feel that because she has used the model so much to promote the research, she cannot absolve herself from it so easily now that it has been proven to be flawed beyond repair. In interdisciplinary collaborations, it would be easy to specify who did what. The geneticist would then not claim credit for the mathematician's work and vice versa. In collaborations within fields it is harder because of overlapping competencies.

I have been dismayed by how little regard some scientists have for the scientific process. I recently refuted one conclusion of an oft-cited paper on the economic impact of invasive species (Pimentel et al., Ecological Economics, 2005).

http://www.sciencedirect.com/science/article/pii/S0921800913000785

Among many basic errors, the estimate regarding the impact of feral cats didn't even multiply two numbers together correctly. People were just interested in the provocative headline that feral cats caused $17 billion in economic damage per year via predation on native bird species. No one cared (or cares) that the underlying math, science, and logic fall short of a middle-school level. Many defenders of the flawed estimate have objected to my criticisms not on a scientific level, but because they like the larger point that the estimate supports. This strikes me as a very precarious position for scientists to take.

A recent Tufts Veterinary School article cited the discredited estimate in such a way as to reflect a clear lack of interest in how the estimate was calculated.

http://now.tufts.edu/articles/cat-eat-bird-world

The Tufts authors were entirely uninterested in the fact that the same journal that published the estimate they cited had also published my refutation. They simply wanted to cite a big number to support the idea that their research is important (as I believe it is).