Derek Lowe's commentary on drug discovery and the pharma industry. An editorially independent blog from the publishers of Science Translational Medicine. All content is Derek’s own, and he does not in any way speak for his employer.

Big Pharma And Its Research Publications

A longtime reader sent along this article from the journal Technological Forecasting and Social Change, which I’ll freely admit never having spent much time with before. It’s from a team of European researchers, and it’s titled “Big Pharma, little science? A bibliometric perspective on Big Pharma’s R&D decline”.
What they’ve done is examine the publication record for fifteen of the largest drug companies from 1995 to 2009. They start off by going into the reasons why this approach has to be done carefully, since publications from industrial labs are produced (and not produced) for a variety of different reasons. But in the end:

Given all these limitations, we conclude that the analysis of publications does not in itself reflect the dynamics of Big Pharma’s R&D. However, at the high level of aggregation at which we conduct this study (based on about 10,000 publications per year in total, with around 150 to 1500 publications per firm annually) it does raise interesting questions on R&D trends and firm strategies, which can then be discussed in light of complementary quantitative evidence such as the trends revealed in studies using a variety of other metrics such as patents, as well as statements made by firms in statutory filings and reports to investors.

So what did they find? In the 350 most-represented journals, publications from the big companies made up about 4% of the total content over those years (which comes out to over 10,000 papers). But this number has been dropping slightly but steadily over the period: there are now about 9% fewer publications from Big Pharma than there were at the beginning. This effect might largely be explained by mergers and acquisitions over the same period, though – in every case, the new firm seems to publish fewer papers than the old ones did as a whole.
And here are the subject categories where those papers get published. The green nodes are topics such as pharmacology and molecular biology, and the blue ones are organic chemistry, medicinal chemistry, etc. These account for the bulk of the papers, along with clinical medicine.
The number of authors per publication has been steadily increasing (in fact, even faster than the other baseline for the journals as a whole), and the organizations-per-paper has been creeping up as well, also slightly faster than the baseline. The authors interpret this as an increase in collaboration in general, and note that it’s even more pronounced in areas where Big Pharma’s publication rate has grown from a small starting point, which (plausibly) they assign to bringing in outside expertise.
One striking result the paper picks up on is that the European labs have been in decline from a publication standpoint, but this seems to be mostly due to the UK, Switzerland, and France. Germany has held up better. Anyone who’s been watching the industry since 1995 can assign names to the companies who have moved and closed certain research sites, which surely accounts for much of this effect. The influence of the US-based labs is clear:

Although in most of this analysis we adopt a Europe versus USA comparative perspective, a more careful analysis of the data reveals that European pharmaceutical companies are still remarkably national (or bi-national as a result of mergers in the case of AstraZeneca and Sanofi-Aventis). Outside their home countries, European firms have more publications from US-based labs than all their non-domestic European labs (i.e. Europe excluding the ‘home country’ of the firm). Such is the extent of the national base for collaborations that when co-authorships are mapped into organisational networks there are striking similarities to the natural geographic distribution of countries. . .with Big Pharma playing a notable role spanning the bibliometric equivalent of the ‘Atlantic’.

Here’s one of the main conclusions from the trends the authors have picked up:

The move away from Open Science (sharing of knowledge through scientific conferences and publications) is compatible and consistent with the increasing importance of Open Innovation (increased sharing of knowledge — but not necessarily in the public domain). More specifically, Big Pharma is not merely retreating from publication activities but in doing so is likely substituting more exclusive direct sharing of knowledge with collaboration partners for the more general dissemination of research findings in publications. Hence, the reduction in publication activities – next to R&D cuts and lab closures – is indicative of a shift in Big Pharma’s knowledge sharing and dissemination strategies.
Putting this view in a broader historical perspective, one can interpret the retreat of Big Pharma from Open Science, as the recognition that science (unlike specific technological capabilities) was never a core competence of pharmaceutical firms and that publication activity required a lot of effort, often without generating the sort of value expected by shareholders. When there are alternative ways to share knowledge with partners, e.g. via Open Innovation agreements, these may be attractive. Indeed an associated benefit of this process may be that Big Pharma can shield itself from scrutiny in the public domain by shifting and distributing risk exposure to public research organisations and small biotech firms.
Whether the retreat from R&D and the focus on system integration are a desirable development depends on the belief in the capacities of Big Pharma to coordinate and integrate these activities for the public good. At this stage, one can only speculate. . .

14 comments on “Big Pharma And Its Research Publications”

I would disagree with “science…was never a core competence of pharmaceutical firms” because I’ve seen a lot of excellent science alongside the technological aspects of drug discovery. Unfortunately, a lot of that science was never published for a variety of reasons.
With regards to publications in general, much of the journal paper-counting can be correlated with who is in charge and the changing policies within discovery organizations, good or bad.
For example, in Large Pharma Company X during the ’90s, publication was encouraged, but much more emphasis was placed on moving projects forward and getting well-characterized compounds into the clinic. Opportunities for advancement and rewards were based on meeting project goals and doing good med chem and biology, regardless of how much you published. Getting patent applications filed was more critical.
Then in the ’00s, management change throughout Company X installed a very pro-publication policy with very specific numbers for how many papers you needed (as well as on how many you had to be first author) to be promoted to the next level. What resulted, some would say, was a culture where you designed compounds so the SAR made a nice paper, or made compounds in such a way to maximize the number of different papers you could write. What also resulted was scientists taking an inordinate amount of time out of the lab, not working or thinking about the project, but worrying about the details of yet another set of papers. Programs suffered.
Now, in the ’10s, Company X, rebounding from the “what have we got for all these papers your staff wrote?” backlash, has instituted a policy that only cutting-edge, “quality” papers will come out of projects. Because the promotion requirement for publications is still in place, this complete 180 has now produced a generation of scientists who are conflicted and whiplashed.
So, drawing implications about Open Science or Open Innovation from the number of journal publications a company produces is probably just a highly imperfect analysis (vis-à-vis correlation is not causation). Sometimes the whims of a mystical leader are the root cause of significant change, good or bad.

Methinks the authors are overthinking this.
Except for a few organizations, publications, while nice, are not a key measure of job performance. In an era of contracting R&D staffing, you are not highly motivated to spend the time and effort to produce a publication when it will only minimally impact your job security.
In essence a bottom-up rationale (keeping your job) for decreasing publication rate is more likely than a top-down rationale (core competencies, shareholder value, open innovation, system integration, blah, blah, blah), IMO.
Not to mention, I’m not even sure that the publication rate is decreasing if measured on a per-scientist basis rather than the per-firm basis the authors used.

Publications are meaningless in the discovery and commercialization of drugs, and their trends support this as companies wake up and stop wasting their time with grandiosity by publication. The journal rags are dying like their newspaper brethren as well, and the sooner the better for humanity. They have been feeding the egos of scientists and sucking their lifeblood long enough.
They should have analyzed the patent literature, which tells a completely different story.

It’s also hard to publish data that says you can’t repeat the wonderful science that was published last year in “The Prestigious Journal of Whatever”, which is unfortunately where a lot of the effort and brain-power has gone. I think industry is also less willing to publish something that is perhaps “sketchy” in terms of seeing consistent results. Maybe that’s just my bias coming from a very good industry-based scientific team…

I have heard that at Genentech promotions can be held up if you don’t publish (even if you are otherwise an excellent scientist). I agree that publications are not too important for drug discovery success; at the same time, in this era of short and ever-changing job stints, publications are important for showcasing your talents to the outside world.

There used to be a notion that working in science was service to a more noble mission than simply corporate metrics. Publication is important because it enriches the broader advance of science. Now people in industry are more likely to publish because they feel they need something on their CV for when their job vaporizes, or because they need a box checked for their own advancement within the company. And the quality of the publications shows it! It would sure help if companies would publish more of their results when they find that the dogma of the day in the literature is unsupportable. It used to be that people felt that sharing important findings was something of a professional obligation — when did this business become so pedestrian?

@3 and @8: The proliferation of publications in recent years may have had the negative result of more low-quality manuscripts polluting the literature, but the fact remains that publications themselves are essential. Science is a cumulative process, and new discoveries build on previous knowledge which is disseminated largely via publication. Pharma contributes to this knowledge and needs to continue doing so.
The question of “how many new drug ideas succeeded from Journal X?” should be asked knowing (a) “how many drug ideas succeed?” (overall success rate) and (b) “how many new drug ideas succeeded without having drawn on the scientific literature?” (other group).
One recent example of publications having an impact: since Bradner’s Nature paper describing the BET inhibitor JQ1 (http://www.ncbi.nlm.nih.gov/pubmed/20871596), a slew of pharma research programs have been initiated and venture funding has been secured for startups focusing on BET inhibitors. While a drug has not yet made it to market, it is a nice example of the impact of a high quality publication.

@9 – that’s a nice paper, but it’s from an academic group who might see their mission as you describe. I would imagine that if a pharma company discovered a new target/inhibitor, they would keep as much as possible internal and only publish in a journal once they had mapped out as much IP space as possible, making it difficult for other groups to pursue the same target.
By the way, do I know your brother Jack?

Will, indeed Jack is my bro…
We need to clarify which question we are discussing. One question (from the creatively named “anon” in comment #8) is “how many new drug ideas succeeded from ‘The Prestigious Journal of Whatever’?”. That question fails to recognize the complexity of the endeavor of drug discovery; there’s rarely a single paper (or single scientist) you can point to as being responsible for a drug. This, in turn, makes it difficult to define the value of publications explicitly because, rather than delivering a new target with a single paper, the value can be unanticipated and may manifest itself as enabling contributions. One example might be learnings from SAR that could be transferred to a distinct chemical series for the same target. As to your question, paraphrased as “how many papers from Pharma have helped competitors?”, experience tells me that pharma papers can provide very useful knowledge to their competitors.

The pharma-clinicians and the pharma-chemists probably have quite different opinions about the merits of publishing, which might account for the two distinct points of view emerging in the comments.
There is no surprise in the European ‘national’ pharma image that emerges from the survey. If you look at the annual summaries of new drug approvals (e.g. in Pink Sheet), up to about 2010 almost all new products were approved first in their country or region (for EU) of origin.

How were “publications by pharma” defined? This study seems to skip over the fact that pharmaceutical research has not always been published under the name of the company — if you made sure to include investigator-initiated trials related to specific products, or simply studies that were produced with the help of publication management companies, you’d have a clearer picture of what was actually put into the literature from these companies.
Debates about the appropriateness of practices that effectively hid the role of companies in certain publications didn’t really get started until the mid-’00s; that’s going to skew these results downward…