Early to press is best for success

This paper is bound to piss off a few people. So be it. This is what we found, regardless of what you want to believe.

Led by the extremely prolific Bill Laurance, we have just published a paper (online early) that looks at the correlates of publication success for biologists.

I have to preface the main message with a little philosophical discussion of that loaded word – ‘success’. What do we mean by scientific ‘success’? There are several bucket loads of studies that have attempted to get at this question, and several more that have lamented the current system that emphasises publication, publication, publication. Some have even argued that the obsession of ever-more-frequent publication has harmed scientific advancement because of our preoccupation with superficial metrics at the expense of in-depth scientific enquiry.

Well, one can argue these points of view, and empirically support the position that publication frequency is a poor metric. I tend to agree. At the same time, I am not aware of a single scientist known for her or his important scientific contributions who doesn’t have a prolific publication output. No, publishing shitloads of papers won’t win you the Nobel Prize, but if you don’t publish, you won’t win either.

So, publication frequency is certainly correlated with success, even if it’s not the perfect indicator. But my post today isn’t really about that issue. If you accept that writing papers is part of a scientist’s job, then read on. If you don’t, well …

There are a few possibilities here, with some mechanisms well known and others only suspected. Using the CVs of 1400 biologists in various disciplines (excluding medical) from four different continents, we measured the number of publications they had written by the time they had completed their PhD, and again ten years later. We also collected information on the scientists’ gender, whether English was their first language, and the international ranking of the university where they obtained their PhD.

Combining the data into a series of linear models, we asked the following questions:

Given that our sample included only people who stayed in science for at least ten years (i.e., we didn’t include people who gave up their scientific careers in the interim), do males publish more than females?

If you went to a highly ranked university for your PhD (e.g., Cambridge, Oxford, Harvard, etc.), were you likely to publish more than someone who had received theirs from a lower-ranked institution?

Most scientific results are published in English these days, so if English is your first language, do you have an advantage and therefore publish more than someone for whom English is a second (or third, fourth, etc.) language?

If you start publishing early in your career, does that set the pace for the rest of it?
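As a rough sketch of the kind of analysis described above — entirely hypothetical variable names and simulated data, not the paper’s actual dataset or model specification — fitting a linear model of ten-year output against these predictors might look like this:

```python
import numpy as np

# Hypothetical illustration only: simulated data standing in for the
# paper's predictors (papers at PhD completion, gender, first language,
# PhD-university ranking) and response (papers after ten years).
rng = np.random.default_rng(0)
n = 200
phd_pubs = rng.poisson(3, n)        # papers written by PhD completion
male = rng.integers(0, 2, n)        # 1 = male
english = rng.integers(0, 2, n)     # 1 = English as first language
uni_rank = rng.uniform(1, 500, n)   # international ranking of PhD university

# Simulated response: early output dominates; gender and language
# effects are weak; university rank contributes nothing.
pubs_10yr = 4 * phd_pubs + 0.8 * male + 0.5 * english + rng.normal(0, 2, n)

# Ordinary least squares via the normal equations (design matrix with intercept)
X = np.column_stack([np.ones(n), phd_pubs, male, english, uni_rank])
coef, *_ = np.linalg.lstsq(X, pubs_10yr, rcond=None)
for name, b in zip(["intercept", "phd_pubs", "male", "english", "uni_rank"], coef):
    print(f"{name:>10}: {b:+.3f}")
```

Run on this simulated data, the fitted coefficient on early publications is large while the coefficient on university ranking sits near zero — mirroring the pattern the paper reports.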

The results? Drumroll, please.

Most will be happy to read that the most important determinant of your ‘long’-term (10-year) publication success is how many papers you’ve written by the time you’ve completed your PhD. This effect increases markedly if we take the number of papers you’ve published three years after PhD completion as a predictor. To make the point again that publication output is a reasonable metric of ‘success’, we also found that it was highly correlated with the ten-year h-index of the scientists for which we had data.
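For readers unfamiliar with the metric mentioned above: the h-index is the largest h such that an author has h papers each cited at least h times. A minimal sketch of the computation from a list of per-paper citation counts:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank       # this paper still clears the threshold
        else:
            break          # sorted descending, so no later paper can
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # 3: only three papers reach 3+ citations
```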

But there were other effects, albeit of lesser importance. Yes, even after removing the well-known ‘attrition’ effect of female scientists (i.e., leaving their careers earlier than males), men tended to publish a little more than women. There are many potential reasons for this, including still largely male-dominated academic and publishing systems, misogyny and the extra constraints of child rearing. We still have a long way to go here.

English as a first language also gave scientists a publication advantage as hypothesised, although the effect was weak.

Possibly one of the most interesting results was that PhD-university ranking had absolutely no discernible effect on publication output, regardless of which ranking metric one uses.

There are a few take-home messages in all of this. First, if you are a PhD student and/or early-career researcher, make sure you put the effort into getting those first papers out. Second, if you’re considering people to hire for a new position and you’re taking a gamble on their potential to publish, you should perhaps place strong weight on their publication output to date (all other considerations being equal).

However, employers should NOT choose men over women, nor should they blindly hire people with English as a first language. Case in point: most of my lab’s best and brightest are early-career women from non-English-speaking countries. The gender and language effects were weak at best, and nearly disappeared once we considered the data three years after PhD completion.

Finally, if an employer is considering choosing one of two recently completed PhD students for a postdoctoral position, and the one from the higher-ranked university has fewer publications than the other from the lower-ranked institution, my advice would be to choose the latter (all other things being equal, of course). Maybe students (and their parents) should also put less emphasis on university ranking and more on the people with whom they will be working when considering where to do their postgraduate studies.

19 responses

[…] Then there is the issue of language capability. Many scientists complain that it isn’t their role to tutor a student or charge in the subtleties of the English language, and that poor writing skills in English hinder their capacity to produce good scientific publications. I agree that it can be an additional challenge, but I disagree entirely that having English as a first language these days provides much in the way of an advantage for learning scientists. It is no secret that the writing capacity of even native English speakers is on the wane, so in my experience there has been no more effort to get a non-native speaker’s English prose up to scratch relative to the native English speaker’s. In fact, some of my best writers have hailed from countries where English is not the native tongue. In addition, our analysis from 2013 showed clearly that neither first language nor gender explained much variation at all in the publication output of biolo…. […]

[…] Here are a few things I learned after publishing my own research papers: I recommend these to students who are ready to embark on a research career. Studies have shown that publishing research articles is the only route to success in academia (see “Publish or Perish”). […]

[…] submitted our work. Being rejected is part of the process. Aiming high is necessary for academic success, but when a negative decision is made on the basis of (often one) appalling review, it’s a […]

The conclusion for people who are still in a gap year, like me, is to practise writing as soon and as much as you can. Of course the definition of success differs from person to person, but there’s nothing wrong with using this finding as motivation to write and publish more.

I think some of the conclusions in this paper are a bit over-reaching: a student essentially chooses how he/she wants to tell their PhD story (academically or non-academically). Some wait longer, wanting to tell one big story; others want to share all the specifics of many small stories; others a mix of the two, or neither (many go on to become successful science writers having never written a paper). And these choices are tied to an individual’s perspective on storytelling and motivation to write, irrespective of whether a professor can actually encourage them to publish (especially if a student doesn’t want to). But even if we were to consider a hypothetical, highly persuasive professor who actually managed to convince an unmotivated student to write in the early stages of their career, in the long run these students would still fall behind students who are consistently motivated (irrespective of the encouragement provided by their supervisor).
Cutting across academia (a heterogeneous group that includes people who publish and, as importantly, people who don’t), I don’t think we have a single metric of success, or need one. Further, is there really a correlation between people who are perceived to be successful and their h-index? The jury is still out on that.

One thing I think we should have emphasized more in our paper is that writing ability varies greatly among individuals and might well be a key determinant of long-term scientific productivity, in addition to other attributes like personal motivation.

I guess the implication is that we need to focus not only on teaching young scientists to be better at science per se, but also on helping them become better writers.

Thanks to Ilkka Hanski for mentioning this. It was a point that was somewhat implicit in my thinking but just didn’t get emphasized clearly enough in our paper.

Seriously? Are you guys still counting “success” by the number of papers published and how quickly? Citation counts and h-index? Is this really the best advice you can give young(er) scientists out there?

I’m thinking of John Ioannidis and his work on why most published research findings are false. I am thinking of Daniel Kahneman and his extraordinary work on the heuristics and biases that impinge on our perceptions, too. I am thinking of the number of scientists who agree on the limitations of citation counts and the h-index (see, for example, the list here: http://www.csid.unt.edu/Files/citation_analysis_annotated_bibliography.pdf), but still keep using them, as you do, here, with all the usual lame disclaimers. This always surprises me. Is it because it is easily available out there, a few clicks away, generating quick data for another publication on publication success? Or because the fundamental currency of science shall forever remain the lonely, peer-reviewed publication: convert everything to USD before you trade? How many papers did Gregor Mendel publish in his lifetime, and how many times were they cited, I wonder? We stand on the shoulders of giants and stake authorship as if we were peerless.

Or is it that the glaze of objectivity and quantitative competence we like around our scientific selves will not allow us to think of more relevant but as yet intractable measures? Such as those related to actual and substantive contributions to knowledge rather than incremental, self-promoting pieces cast as publishable units, to participating in extending that knowledge beyond the narrow confines of academia, to making research and data and images open access, to working with a range of other scientists and non-scientists in true collaboration rather than an authorship haze, to publishing and communicating good science in un-indexed regional and local journals and magazines and blogs and even pamphlets, to taking science to the streets, to merging science with sentience, reason with affect? Did you even scan the 1400 CVs in your “sample” for anything else that reflected their success as biological scientists? Did you ask the biologists what they felt was ‘success’ in their own work? Imagine submitting Rachel Carson or Aldo Leopold or Salim Ali, or even Tim Flannery, to a peer-reviewed paper-and-citation counting match with the likes of Bill Laurance (no offense meant, just for some perspective!).

I could go on about publication success, but it already sounds like a rant, so I hesitate to even propose pausing to find ways to reflect on one’s own scientific accomplishment: what have you really achieved in the wide wide world out there?

Have you ever heard of problem solving? That is, of course, problem solving using science. There’s no question problem solving in the private sector (i.e., not academic) may get the benefit of printed peer discussion, but approval is not necessarily required and in many cases publishing is not allowed. For us, scientists in the private sector, the mantra of publish or perish is no more than a busload of crap. We have a different mantra, which is: make this work in a way that is most efficient to achieve practical goals, or get fired. For those of you who already have the words Exxon, Pfizer or Monsanto circling around your heads, you can wipe them away easily. There are lots of scientists working for green companies and NGOs who are committed to producing alternative ways to achieve sustainable development, and while we may like to publish our results, it just isn’t a priority. For us, the measure of success is to solve problems, such as finding a balance among the extraction of natural resources, minimising ecological footprint, economic and social support for local communities, and political stability. You can argue all you want about acceptable standards of success, but to us, no matter how many papers are published, science detached from the human dimension is simply unsuccessful.

Hi Corey, interesting paper. I thought I’d post the question I sent Bill here as well.

Why weren’t the type of institution where people work (R1, liberal arts college, etc.) and the proportion of FTE devoted to research included as predictor variables? I’m not sure about many of the other countries you included, but at least in the US how much you produce (or rather, the expectation of how much faculty should produce) will depend on the type of institution that hires you. In addition, even at a primarily research institution like mine the amount of time faculty are hired to devote to research varies from 10-100%.

My sense is that these factors (institutional category and % of FTE devoted to research) would be at least as important as pre-hire publication record, at least in the US.

Corey, isn’t it that you are measuring success by… success? (“the most important determinant of your ‘long’-term (10-year) publication success is how many papers you’ve written by the time you’ve completed your PhD”). So success, in terms of publication output (no arguing there), is predicted primarily by the number of papers you published early on?
It makes sense that the two are correlated, but what are the determinants of early success?
Regarding your advice to students, I’m not sure that making the effort to publish early is going to be sufficient; what I think is that those who have whatever skills (or curses) that make them successful scientists will show it early on, and will publish early on (and continue so, ten years after).
It might be misleading to think that if you do your best to get your first publications out, even if you are not made for it, then you’re safe and the rest will follow.
Am I right or am I right? ;)

While I generally agree, I am aware of at least one scientist known for his important scientific contributions who did not have a strong publication record.

Hugh Everett III was an American physicist who first proposed the many-worlds interpretation (MWI) of quantum physics (wikipedia).
3 publications in Google Scholar, in two different fields, and almost 3500 citations. Plus he changed quantum physics forever.

He is an extreme case and in no way undermines your argument. My point is that with people living in society, it’s difficult to know. Everett would be someone you probably wouldn’t hire for a post-doc and in fact he went on to work as a military consultant.

This is a really interesting post and I look forward to reading the paper. But my first thought is that if employers base their decisions on which postdocs to employ on the applicants’ publication success to date, then this will become a self-perpetuating cycle. There is no opportunity for early-career researchers to improve their publication record if they don’t have a job (or aren’t even given the opportunity to prove themselves at interview, because of a judgement on their future publication ‘success’ based on their CV) …