Hey scientists, how much of your publication success is due to dumb luck?

What makes some scientists’ careers take off whereas others’ stagnate? There are personal factors, of course: Some run clever experiments, have good collaboration skills, and are eloquent in communicating their work. But there’s also just dumb luck. Sometimes doing the right experiment at the right time makes all the difference in publishing a paper that wins lots of attention.

The most time-consuming step in attacking that question wasn’t creating the computer model or crunching the numbers, says Roberta Sinatra, a statistical physicist at Central European University in Budapest and the lead author of the new paper. Instead, it was the “dirty data cleaning.” Those data came from scouring the journals of the American Physical Society as well as Web of Science, a citation database. After disambiguating the names of more than 10,000 scientists who had conducted at least 20 years of research and published 10 papers, they had a list of 514,896 papers. Then they mapped out the millions of citations to those papers, and searched for a statistical model that best predicted scientists’ future success based on their early publication history.

The first surprise to pop out was the randomness of success. You might guess that, over time, a scientist matures and produces better work, with later papers earning more citations. But no such trend emerged. Instead, a scientific paper looks more like a lottery ticket, Sinatra says, with the number of citations a paper receives mostly due to luck. “So publishing more papers is like buying more tickets,” she says. “And that’s why you have a bigger impact during your more productive years” as a scientist.
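The lottery analogy can be made concrete with a quick simulation. This is a minimal sketch, not the authors' actual model: it assumes each paper's citation count is an independent heavy-tailed (lognormal) draw, and checks two things that follow from pure luck — a scientist's biggest hit is equally likely to land at any point in the career, and publishing more papers ("buying more tickets") raises the expected size of the biggest hit.

```python
import random

# Sketch of a "random-impact" career: every paper's citation count is an
# independent lognormal draw (an illustrative assumption, not the study's
# fitted model). Under this rule, career stage carries no information.
random.seed(42)

def career(n_papers):
    """One simulated career: i.i.d. lognormal citation counts per paper."""
    return [random.lognormvariate(1.0, 1.0) for _ in range(n_papers)]

n_careers = 10_000
n_papers = 30

# Where in the career does the biggest hit land?
hit_positions = []
for _ in range(n_careers):
    cites = career(n_papers)
    hit_positions.append(cites.index(max(cites)))

# Under pure luck, roughly a third of biggest hits should fall in each
# third of the career — no late-career "maturity" bump.
early = sum(1 for p in hit_positions if p < n_papers // 3) / n_careers
print(f"share of biggest hits in first third of career: {early:.2f}")

# More tickets, bigger expected jackpot: the best paper of a 60-paper
# career beats the best of a 15-paper career on average.
best_of = lambda n: sum(max(career(n)) for _ in range(2000)) / 2000
print(f"avg best paper, 15 papers: {best_of(15):.1f}")
print(f"avg best paper, 60 papers: {best_of(60):.1f}")
```

The first number should hover near 0.33, and the second pair should show the 60-paper careers winning — luck alone reproduces both patterns the study describes.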

But not all scientific careers are alike. Some people who publish the same number of papers—even in the very same journals—get more citations than chance alone can explain. All of those nonrandom differences between people—eloquence, team-building skills, and creativity—boiled down to a parameter in the model called Q. The authors found that calculating a scientist’s Q-factor requires at least 20 papers and 10 years of citations. With that in hand, however, they could predict the number of citations earned by that scientist’s 40th paper with 80% accuracy.
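The intuition behind Q can be sketched in a few lines. Assume, for illustration only, that a paper's citations factor as c = Q × p, where Q is fixed per scientist and p is a lognormal "luck" draw — the parameter values below are made up, not fitted from the study's data. On a log scale the luck averages out, so a stable Q emerges from even a modest early record:

```python
import math
import random
import statistics

# Illustrative Q-model sketch (assumed form c = Q * p with lognormal luck p;
# not the authors' fitted parameters).
random.seed(0)

def simulate_scientist(Q, n_papers):
    """Citations for each paper: constant skill Q times a random luck draw."""
    return [Q * random.lognormvariate(0.0, 1.0) for _ in range(n_papers)]

def estimate_Q(citations):
    """Since log c = log Q + log p and log p averages to zero here,
    the mean of log c estimates log Q."""
    return math.exp(statistics.mean(math.log(c) for c in citations))

true_Q = 5.0
early_papers = simulate_scientist(true_Q, 20)  # the ~20-paper threshold
Q_hat = estimate_Q(early_papers)
print(f"true Q = {true_Q}, estimated Q = {Q_hat:.2f}")
```

With only 20 papers the estimate is noisy but centered on the true value, which is why the metric only becomes reliable partway into a career — exactly the limitation Carey raises below.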

The finding that luck plays a big role in citations makes sense to Lucas Carey, a systems biologist at Pompeu Fabra University in Barcelona, Spain, who was not involved with the study. “To get a hit [paper], you have to publish often, even for high Q(uality) authors,” he wrote in an email. “This is a great and most likely correct take home message.” But universities probably won’t be using Q-factors anytime soon for hiring decisions, he says, considering that it isn’t very predictive until later in a scientist’s career.

Oren Etzioni, a computer scientist at the Allen Institute for Artificial Intelligence in Seattle, Washington, also not involved with the study, wants to see more validation of the power of Q for “addressing the age-old question: What’s a scientist’s influence now, and in the future?” But he calls it “a valuable addition” to the already crowded toolbox of metrics for sizing up scientists, such as Semantic Scholar, an artificial intelligence tool for analyzing scientific careers that he debuted earlier this year.

Sinatra, meanwhile, says she hasn’t calculated her own Q-factor. “I’m not that old yet,” she points out. “I only have 14 papers.” And when the time comes, she vows she still won’t calculate it. “I don’t like attaching numbers to humans.”