Is there too much coauthorship in economics (and science more generally)? Or too little?

Economist Stan Liebowitz has a longstanding interest in the difficulties of flagging published research errors. Recently he wrote on the related topic of dishonest authorship:

While not about direct research fraud, I thought you might be interested in this paper. It discusses the manner in which credit is given for economics articles, and I suspect it applies to many other areas as well. One of the conclusions is that the lack of complete proration per author will lead to excessive coauthorship, reducing overall research output by inducing the use of larger than efficient-sized teams. Under these circumstances, false authorship can be a response to the warped reward system and false authorship might improve research efficiency since it might keep actual research teams (as opposed to nominal teams) from being too large to produce research efficiently. One of the questions I rhetorically ask in the paper is whether anyone has ever been ‘punished’ for having their name included on a paper for which they did not perform their share of the work (I think the answer is “no” in economics).

Liebowitz may well be right when it comes to the incentive structure. There is very little cost to including collaborators. Much of my most successful research has been collaborative (with varying percentages of contributions from the collaborators). Many times I've added a collaborator after most of the work had already been done, just because I think it makes the work stronger.

But . . . Liebowitz argues that excessive collaboration can harm research, if people would get more done working on their own. I have a different perspective in that I think even a small collaboration by a coauthor can make an article or book much stronger. Given that this seems to hold in statistics, where we publish dozens of papers a year, I’d think it would be even more the case in economics, where researchers take years to publish a single article.

True, collaboration did not save economists Carmen Reinhart and Kenneth Rogoff from making three serious errors in a single article, but—no joke—I think that adding a third collaborator might have made a difference and allowed them to avoid their errors.

Look at it this way. Either Reinhart and Rogoff had a research assistant do their calculations or they did them on their own.

– If it was a research assistant who ran the numbers, it seems quite possible to me that being a coauthor would’ve been a big deal for the RA, and he or she would’ve been super-careful to get the spreadsheet calculation right and (if the person was a high-quality RA of the sort that Reinhart and Rogoff could certainly afford) might even have caught the other problems with the analysis. (True, Reinhart and Rogoff still don’t seem to want to admit there are other problems, but of course they’re in defensive mode now. They might have been more open during the research process.)

– If Reinhart and Rogoff did the analysis . . . well, we’ve already seen that they didn’t have full control over their data. Maybe they were too busy to take care to do things right. Having a coauthor with real responsibility could’ve made all the difference.

Liebowitz also writes:

As a measure of research success for senior faculty, department chairs, on average, rely most heavily on the journal of publication, with a weight of 40%. Citations, by way of contrast, were in third place, receiving a weight of 26% . . . This is analogous to a sports team picking its starting lineup based on the ex ante performance of players on athletic tests intended to predict success on the field, instead of using the ex post actual performance in games.

This is an issue too, highly relevant to the question of incentives for untenured academic researchers, but to me somewhat distant from the larger concerns about research quality.

Don’t get me wrong—I don’t think coauthorship is a panacea for the problems of sloppy research. But for an important project, I see real advantages to having multiple people taking responsibility for the result.
P.S. Here’s Table 1 of Liebowitz’s paper:


On the flipside of collaboration—which I vastly prefer and think makes much better work in general—is diffusion of responsibility, where everybody thinks someone else “has that problem” and thus things don’t get solved. http://en.wikipedia.org/wiki/Diffusion_of_responsibility Another downside is that the paper may end up being the product of a committee where everybody’s hobby horse has to get ridden.

In the case of Reinhart and Rogoff, I suspect there was a lot of ex post "analysis" going on. They had a pro-austerity story they wanted to tell, come hell or high water. Yes, I just accused them of motivated reasoning and ideological analysis, but I've seen it happen enough, and their weighting scheme was so unbelievably odd that it's hard to make sense of it otherwise.

Moral of the story: choose your collaborators carefully, but make sure they don't all share your ideological preconceptions.

Medical research may be an example of the dark side of co-authorship, with diffusion of responsibility as Verkuilen points out: Some years ago, a Norwegian researcher (Jon Sudboe) was caught having invented data for articles published in top-level medical journals. Further fraud was found in other work he had done. There was a public report from an investigative commission that interviewed 60 (!) co-authors. The report said that the co-authors usually had roles similar to sub-contractors with highly limited responsibilities (and a limited overview of the total research work being done), or else they were senior researchers who had primarily played a major role at a "higher level" (contributing the original idea, being involved in planning or polishing the paper, and contributing their gravitas to give the papers credibility within the profession). The Vancouver rules for co-authorship appeared to be poorly understood and not followed in the medical research community, and as a result all co-authors (except one, who had been Sudboe's thesis advisor and closest collaborator) were absolved of any guilt. Basically, they were just doing what everyone else was doing.

A related anecdote: A couple of years later I heard about a PhD student finishing up her paper for submission who was told by her advisor that a senior researcher at the institution should be added. “But he hasn’t even seen the paper,” the PhD student protested, and was told that “this is how we do it.” The added name belonged to a researcher who around the same time protested that it was unfair to criticize the culture of the medical profession, as such problems were in the past and Vancouver rules were now well known by all and strictly followed.

“If Reinhart and Rogoff had a research assistant do their calculations,” then that person should have been an author. The problem starts right there: not valuing the contribution of the data analyst, who has a great deal of responsibility for the validity of the results.

Many years ago (as a pretty senior grad student) I did some statistical analysis for someone. He was using a standard, respectable data source—the name of which eludes me now—to study noncustodial father involvement. He had firm preconceived notions of what he wanted to see, and wanted me to do things like analyze the data with and without the survey sampling weights, or jigger the sample inclusion rules to see if things came out better for his desired story. He was a highly unethical person, and I profoundly hope he is no longer in academia, but I have no intention of looking to see.

I'm afraid the specific tinkering you describe is almost benign (not that I'm defending it!). I've heard a story from a grad student I consider trustworthy about how she caught her adviser manually increasing an effect size on a graph, presumably for more dramatic results. I recommended she blow the whistle, but she didn't feel like it.

In economics, one must be involved from beginning to end to be a co-author (exceptions for very big names perhaps, who can get away with only helping with the theory, or only with the empirical results, say).

A student who contributes only to one small part of the paper does not warrant co-authorship (in economics, that is). Part of the reason is that papers can take 4-5 years to complete, and undergraduate or Masters students do not stick around long enough. PhD students often focus on their own individual research, as they usually cannot include a co-authored paper in their dissertation.

I am not saying it is *right* for research assistants to be denied co-authorship but I am saying it is the custom in economics and finance.

Why do econ papers take so long, especially the ones without actual field work? I wonder what the trajectory looks like: is 80% of the final paper done in 20% of the time, with the polishing taking the remaining 4 years?

Or is it just tardy reviewers? I really think something ought to be done to speed up the publication cycles.

I see different reasons:
1) The referee process can be really lengthy. I submitted a paper to a decent journal; after 8 months I got feedback: one positive report, one negative. Paper rejected.

I resubmitted to the journal suggested by the editor of the first submission. It has been under review for 9 months now. I enquired about the status and got an answer along the lines of, "we will now ask new referees…"

Even if all goes well: suppose I hear something within 3 months (short), I get an opportunity to revise (an outright accept is extremely rare), I revise very quickly (1 month), and afterwards there is no further hassle. Then it would have taken 8+9+3+1=21 months. That would be considered an extremely fast publication.

[Although it is said that things go faster once you have acquired a certain status.]

2) It is expected that you present at some high profile conferences (AEA etc.) to increase your chances… not sure whether this really is important.

3) In contrast to some fields, you can't just do some analysis, write things up, and be done. You are expected to spend a lot of time crafting a well-written essay. A bit ludicrous, and it rarely works out well. Especially since (as Andrew has noted) some really like a mathy sauce mixed through the analysis.
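The best-case timeline arithmetic in point 1 can be sketched as a small toy calculation; the stage labels below are my own illustrative names, not anything from the comment:

```python
# Best-case review timeline from point 1 above, in months per stage.
# Stage labels are hypothetical summaries of the commenter's steps.
stages = {
    "first submission until rejection": 8,
    "second journal, still under review": 9,
    "hoped-for fast referee turnaround": 3,
    "quick revision by the author": 1,
}

total_months = sum(stages.values())
print(f"Best-case time to publication: {total_months} months")  # prints 21
```

Even under these optimistic assumptions the total is 21 months, which is why the commenter calls it "an extremely fast publication."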

————————————————

With respect to coauthorship:
Some advisors (some, at least) demand that their names go on articles by their PhD students even though they have no clue what is in them. I know examples in different fields, and especially in econ, where I have first-hand experience.
Hard to imagine (or believe) for some, but I have seen this even at top Ivy League institutions.

A short story: Once I wrote a paper, and when the first draft was finished I had a meeting with an advisor. I expected some feedback. I got two remarks. 1) The affiliations of the advisor were incomplete. 2) Where are we going to send the paper?

I swore that if I ever became an advisor, I would be compassionate, caring, and interested. We'll see what happens.

Yes, I concur about publication delays: typically, 1 year to get a working draft (2 yrs if a new project with new datasets), then 1 year to present the paper around, "market" it, and polish, and then 2 yrs of submitting and resubmitting. Total: 4 yrs for a paper based on an ongoing research program, and 5 yrs if new. And that's optimistic, because if you aim high (5% acceptance rate journals) then you might need 4 yrs to publish (6-7 yrs total). Not unusual.

Re: unethical advisors, I have never seen this firsthand, but I believe it. In economics and finance, PhD students have some protection in that it is often frowned upon for a PhD thesis to contain chapters co-authored with an advisor. So the advisor has less power to get a freebie article, because of accepted norms. But this is not always true.

Why should I bother to read your research if you can't be bothered to package it into something that's as understandable as possible and decent to read? If anything, I think there's too much math and not enough decent writing. Unless you're Mr White, the fact that a paper is borderline incomprehensible means that you can't communicate your findings well.

The most brilliant findings in the world are useless if you can’t communicate them. Quality writing, however, can also make the most mundane findings come across as centrally relevant.

A failure on your part to fully account for the importance of crafting quality communication, in order to deliver your findings to your target audience(s), is probably a reflection of excess confidence that you are such a genius that you don't need to bother to make your results as readable as possible.

If all you have is some neat math, and you can’t produce an essay which convinces me that the research is socially and methodologically relevant within the first few lines, I will probably conclude that all you have is some neat math.

I had meant to give this example in a comment a few posts back about experiences with peer review journal articles and all the ways the system is gamed.

As a freshman/sophomore undergrad physics major, I double-checked a nuclear physicist's derivative calculation (complicated but standard freshman calculus stuff) and found a small mistake in it. They offered to put me as one of many coauthors on the article, most of whose contributions were no greater than mine or, for that matter, than those of the janitor who mopped the floor every week. With a gleam in their eyes, they said things like "see, isn't doing real science exciting?"

In my heart I knew I hadn't done real science, not even a little bit, and so I swore I'd never specialize in nuclear physics, or ever read a paper with more than 3 authors. I've seen nothing since to suggest either prejudice was unjustified.

There are totally legit reasons for long author lists in some fields. I have a few articles where there are many authors, all of whom contributed important work. For instance, one was an intervention study on immune response in the elderly I worked on as a grad student. I did the statistical analysis, there were three other people who ran the study (one of whom was the primary author), and two faculty members who oversaw the lab and were the responsible project investigators. The article indicated clearly who did what.

It's not arbitrary. In the fields I'm familiar with, the best research typically had a difficult birth. It's based on the kind of new ideas and risky research that make it very difficult to attract massive funding or other researchers, both of which work against having large numbers of coauthors. Moreover, if they're playing publication games with the authorship, it makes me wonder what other games they're playing that I don't know about.

Back when nuclear physics meant "foundations of quantum mechanics" you didn't see papers with 20 authors. Now that it's a stale, dead field, you do. Life's too short to read the boring tripe turned out by these massive research teams, but I'm glad someone does, just in case.

I’m curious, what parts of physics do you think are not “stale dead” now? Trying to see if your correlation of quality with few author papers holds.

In some ways, as a field matures you need ever more effort for lesser incremental development. And that sort of makes big teams more likely. Complexity increases too. It's much less likely that any one person has all the mental and physical apparatus necessary to advance understanding of a problem (my knowledge is more in the applied sciences).

Well, most people would say Nuclear Physics is a stale backwater now, so that’s not just my opinion. Big picture wise, I think all of physics has been dead in the water for about half a century. Despite massive increases in funding and exponential growth in research papers the 1850-1950 period really did achieve a lot more than anything that’s come since.

Your second paragraph is the standard comforting viewpoint. I don't buy it at all. Not even a little bit. Complex but safe, pedestrian research with big research teams is not inevitable. The real stuff has a history more like the lasso's: Gelman described how he didn't initially think it was wrong, but didn't think it was going anywhere big. That's the kind of resistance important new work meets, even from people favorable to it, and it doesn't allow for big research teams. People do the big-science stuff because it's a much easier way to make a living. Indeed, it's the only way for most run-of-the-mill scientists to make a living.

The kind of big science you're talking about is not new. Most of the stories about big research projects lampooned in Gulliver's Travels are so ridiculous that most people think they were completely made up. But they weren't. They were based on real projects by some of Europe's well-funded royal societies. Swift was satirizing real life.

If Einstein's 1905 papers cost a few thousand dollars to produce, then some enterprising administrator will eventually get the idea that they can create a thousand Einsteins by securing $2 million in funding. But you don't get a thousand Einsteins with those funds; you just get a big, safe, forgettable project that costs $2m.

Well I’m not in nuclear physics, and in my experience this is field dependent. In areas like psychology, medicine, or bench sciences, empirical papers are quite likely to be research team type things, whereas theoretical papers tend to have fewer authors. If you only think theoretical papers are likely of value, that will obviously skew towards them. In other areas of research the line between empiricist and theorist isn’t nearly so crisp.

That said, I think you’re knocking Kuhnian “normal science” too much. I totally agree about the fact that a lot of normal science involves grantsmanship and playing “small ball” to make sure the funders stay happy, and that has had a distorting effect on the scientific enterprise. However, there are ways of doing that right, such as putting in more theoretically interesting parts into a larger applied project.

Really revolutionary work, which is quite rare and hard to predict, absolutely depends on there being a lot of normal science going on. If Michelson and Morley hadn’t done their work, if lots of other empirical anomalies hadn’t been in the air, and if a lot of new mathematical tools hadn’t been invented in the 19th Century, Einstein would have stayed a Swiss patent clerk.

1. Very little research has any lasting value. So I question the utility of arguing about the utility of co-authors. If that co-authorship assists careers and some of those careers then contribute meaningful material, I can argue that is worth the cost.

2. We often barely see the real work in the research process. In medical studies, for example, we barely see the process by which the study design became focused in this manner. We don’t see the qualitative and early quantitative analysis process. We see instead an outline of a decision, with minimal or no discussion of alternatives that were rejected. In other words, much of the process in the article is selling the veracity of the article through the presentation of this particular design and process as though it must be correct. So I don’t see how in many studies it would be possible to say this or that person should or should not be listed.

Unfortunately, there isn’t an easy way to weight citations as supporting, general literature review or, sadly, “we couldn’t replicate the result”. I’m thinking of a widely cited BIS paper on global output gaps that everyone got excited about for a while.
