a blog about genomes, DNA, evolution, open science, baseball and other important things

Last week news broke of a pair of lawsuits filed by two prominent female scientists alleging they had been subject to persistent gender discrimination by The Salk Institute, the storied independent research center in La Jolla, California, where they both work.

I obviously can’t speak to the validity of these specific charges – it’s not a trivial task to dissect the basis for the successes and failures of small numbers of individuals. But the accounts of Lundblad and Jones sound all too familiar: case studies of a system that we know from myriad individual stories and a bevy of rigorous studies to be systematically biased against women.

Lawsuits are complicated, obviously, and tend to bring out the worst in institutions. But, even given this, the responses of the Salk and its leaders to these charges have been incredibly disappointing.

In an initial statement issued on July 14th, the Salk coupled anodyne verbiage about their commitment to equality and diversity with a document listing “issues” with the careers of both Lundblad and Jones.

Amongst the Salk’s complaints were that, in the past decade, both Jones and Lundblad had “failed to publish a single paper in any of the most respected scientific publications (Cell, Nature and Science)” and that their annual productivity (measured in numbers of papers per year) was below the median of their colleagues.

There are so many things wrong with this statement it is hard to know where to begin. First, counting publications is a horrible way to measure someone’s contributions to science – many fantastic scientists publish slowly and carefully, and a lot of highly “productive” labs publish a large number of worthless papers. Even worse, attempting to equate a scientist’s value to the number of papers they have in Cell, Nature and Science (CNS) is pure bullshit. Everyone in science knows that getting papers into these journals is a brutally competitive lottery, based on a highly flawed system for projecting the quality and impact of a work and heavily influenced by the perceived sexiness of the topic (hence the referral by many scientists to these as “glam journals”). There are plenty of people – myself included – who think that the system of review and editorial selection at these journals does not lead to their publishing the best science – and to use this as the primary way of judging someone’s career is absurd.

There is also a deeply political aspect to getting papers into these journals, and many serious and outstanding scientists simply choose not to play the game. Crucially, exactly the same kind of “old boys club” effect that Jones and Lundblad cite as affecting their careers at the Salk also plays a role in selecting papers for these “top” journals. I will put aside for now the fact that this obsession with top journals perpetuates the toxic culture of the impact factor that many top Salk scientists (including its president) have derided. More directly relevant to this issue, in citing a poor record of CNS publications as the primary reason that Jones and Lundblad have not been rewarded as much as their male colleagues, the Salk is not strengthening its case – rather, it is confessing that it relies on a biased system to judge its scientists, which is precisely what Jones and Lundblad are alleging.

The Salk was pretty harshly – and justifiably – trashed for their stance over the ensuing few days, leading to a second statement from the Salk’s president Elizabeth Blackburn, which I repeat here in full:

I’m saddened that an institute as justly revered as the Salk Institute is being misrepresented by accusations of gender discrimination. Our stellar scientists, both female and male, hail from 46 countries around the world and all bring their unique and valuable perspective to our efforts to unravel biological mysteries and discover cures.

I am a female scientist. I have been successfully pursuing scientific research with passion and energy for my entire career. I am not blind to the history of a field that has, unfortunately and sometimes unconsciously, favored males. But I would never preside over an Institute that in any way condoned, openly or otherwise, the marginalizing of female scientists. The Salk Institute and some of the greatest female scientific minds in the world have always worked together for their mutual benefit and the benefit of humanity.

At every place where I have had a leadership voice—the World Economic Forum, the President’s Council, the American Society for Cell Biology, the American Association for Cancer Research, our nation’s prestigious universities, and many committees—I have emphasized diversity and inclusion. That’s an undebatable tenet of mine. Important biological research that is going to impact humanity and improve the condition of our people and our planet is difficult work. Thus we need the best minds in the world— regardless of race, gender or nationality—to help us discover solutions.

This is what we do at the Salk Institute and what we will continue to do: work together to help people live longer, healthier lives.

I have tremendous respect for Blackburn as a scientist and a person, and her words passionately defending diversity are nice. But, to be blunt, this statement is pathetic.

First of all, the fact that Blackburn emphasized diversity and inclusion in Davos or anywhere else is of no consequence. She is now the leader of an academic institution and what matters now is not words but tangible steps to eliminate discrimination at her institution. And the idea that she would “never preside over an Institute that in any way condoned, openly or otherwise, the marginalizing of female scientists” is risible. The marginalization of female and many other types of scientists is not a rare, isolated facet of specific institutions – it is an endemic, universal problem in science.

Almost by definition every leader of every institute is presiding over an organization that participates in the marginalization of women in science, because it is intrinsic to operating in the world we live in today. The Salk would have to be an unprecedentedly remarkable place if it were free of gender and other forms of discrimination. The question for Blackburn and other scientific leaders is not whether they condone discrimination, it is whether they are willing to confront the fact that it unequivocally does exist AT THEIR INSTITUTION, whether they endeavor to recognize the specific ways it manifests AT THEIR INSTITUTION, and whether they use their leadership position to take tangible action to eliminate it AT THEIR INSTITUTION.

Instead of doing any of this, Blackburn would have us believe that any assertion of discrimination must be false simply because she would never be the leader of such an institution. Instead of dealing with the problem and instead of recognizing the bravery it took for Jones and Lundblad to put themselves forward in this way, Blackburn has publicly called them bad scientists and liars. And in doing so Blackburn joins a long list of institutional leaders who, when presented with evidence of discrimination at their institution, attack the messengers, valuing the short-term interests of their institution over the long-term interests of science and the people who carry it out.

For all her lofty rhetoric about the value of diversity, Blackburn has failed the acid test of promoting it.

The soul of academic science is being destroyed, one patent at a time.

Nowhere is this more evident than in the acrimonious battle between the University of California and The Broad Institute of Harvard and MIT over who owns the rights to commercialize gene and genome editing systems based on the CRISPR immune system of bacteria. There are a dizzying number of patents involved in this dispute, and many more players staking claims to what has the potential to be billions of dollars in royalties down the road. But the heart of the matter is rather simple.

UC claims it should own broad rights to CRISPR-based gene editing because UC Berkeley’s Jennifer Doudna and colleagues were the first to show how a protein (Cas9) from the bacterium Streptococcus pyogenes could be weaponized to permit the easy editing of DNA. (Full disclosure: I am a professor in Doudna’s department). The Broad counters that they should get the rights to the application of CRISPR-based gene editing in humans and other eukaryotes (which include all animals, fungi and plants – i.e. most of the organisms where there is money to be made) because, they assert, The Broad’s Feng Zhang was the first to demonstrate the use of Cas9 in eukaryotic cells.

Last week a panel of judges of the U.S. Patent Trial and Appeal Board sided with The Broad, finding that the application of CRISPR-based gene editing to humans and other eukaryotes was not an obvious extension of demonstrating the basic utility of the system, and hence constitutes a separate patentable invention.

I encourage you to read the judges’ decision. Far from being a descent into an arcane warren of patent law, as most people seem to expect, this case is very straightforward, resting on the simple question of whether the extension of CRISPR from bacteria and a test tube to human cells would have been expected to work by someone of ordinary skill and experience in the field. I don’t agree with the ruling, but the judges offer a lucid and very accessible account of what was presented to them and how they arrived at their decision.

While on the surface this case seems like a fairly mundane “I invented it first! No, I did!” dispute, albeit with unusually large financial stakes, to me it represents something far more important: a battle for the very soul of academic science and the principles upon which it is based.

When I first heard, in 2012, that scientists in the Doudna lab had discovered that the Cas9 protein cuts DNA at a specific point based on instructions in a small piece of RNA, and that they had invented a way to simplify its application, I didn’t give a moment’s thought to patents. Instead I marveled that evolution, through the never-ending fight between organisms and the viruses that plague them, had created a protein whose key properties were just what was needed to allow molecular biologists to easily edit the DNA of their favorite organism.

If academic science worked like it should we all would have spent the subsequent five years focused only on figuring out all the cool things we could do with this new toy – and there are a lot of cool things. But where we should have seen nothing but scientific opportunity, many saw dollar signs, and the flurry of CRISPR activity beginning in 2012 has become as much a patent gold rush as a journey of discovery.

The academic quest for patents is no longer the side story. Where once technology licensing staff rushed to secure intellectual property before scientists blabbed about their work, patents now, in many quarters, dominate the game. Experiments are done to stake out claims, new discoveries are held in secrecy, and talks and publications are delayed so as not to interfere with patent claims. This is bad enough. But the most worrying trend has been the willingness of some researchers and research institutions to distort history, demean their colleagues and misrepresent the scientific process to support these efforts.

And while all of academia is complicit, The Broad Institute has taken the game to a new level. In 2015, as the patent fight was heating up, The Broad published a “CRISPR timeline” which defined history as ending with Feng Zhang’s demonstration of CRISPR gene editing in human cells. It also demoted the work from Berkeley to playing second fiddle to the work of Virginijus Siksnys’s group who, conveniently, did not have a competing patent claim.

The Broad also set up a website describing their patent claims.

Then there was The Broad Institute Director Eric Lander’s widely derided “Heroes of CRISPR” essay in Cell which further rewrote history. Written under the conceit of giving credit to forgotten scientists, Lander wove a sweeping story of scientists toiling in obscurity until The Broad stepped in and made their work important. Doudna and her close collaborator Emmanuelle Charpentier were once again reduced to bit players in this narrative.

This was all clearly done as part of a public relations strategy to support their patent case, in which the assault on reality continued.

We can agree or disagree whether or not it was obvious that Cas9 would ultimately work in eukaryotic cells. The expert testimony shows that cogent arguments can be made on either side. But The Broad chose not to rely on cogent arguments. They had a trick up their sleeves. They scoured Jennifer Doudna’s public statements about the process of getting CRISPR to work in human cells, and found some where she appeared to be making The Broad’s case for them, which they submitted as evidence for their case and trumpeted on their website.

For example, The Broad highlighted Doudna saying she experienced “many frustrations” in getting CRISPR to work in eukaryotic cells. But one can believe that it was obvious that CRISPR would work in eukaryotic cells, and still not expect that it would work the first time someone tried it or that the process would be free of frustration. Because that’s how science works! It is often difficult and frustrating – indeed it almost always is – even when you’re working on something that is obvious. Lander knows this. He was once a scientist. And yet he and The Broad are perfectly happy to misrepresent the scientific process to bolster their legal case.

A second quote featured by The Broad has Doudna saying “it was not known whether such a bacterial system would function in eukaryotic cells.” But this is an absolutely true statement that any good scientist would say even if they believed CRISPR would work in eukaryotic cells. In science something is not known until you demonstrate it. This is what any good scientist should say when they have yet to prove that something is true. By pretending that this is a statement about the patentability of CRISPR in eukaryotic cells, The Broad is once again misrepresenting the scientific process and condemning Doudna for little more than being a careful scientist and speaking honestly about it.

Is this the lesson we really want to learn from CRISPR? That scientists working in fields with commercial potential should never speak honestly about their work and the scientific process? That if they do they will get screwed over by someone unscrupulous who prioritizes winning patents and trains their scientists to behave like clandestine operatives rather than the public servants they really are?

By making the scientific process itself party to their legal case, Lander and The Broad are doing more than just securing victory in court; they are willfully undermining science for personal and institutional gain. If academics – including one of the most prominent academic scientists in the world – are willing to lie to promote their own and their institution’s financial interests, why should anyone believe anything they say? If there’s one thing that’s more dangerous than fake news, it’s fake science.

And it’s not just The Broad. While in this case UC’s defense of their CRISPR intellectual property could rely on a truthful account of its discovery, I have no doubt that they would be willing to resort to unsavory tactics and falsehood to secure victory (see their history of trying to cover up cases of sexual misconduct).

Sadly, there will always be venal and weak people in science – it is, after all, a human endeavor. But we do not need to feed them. As we decry The Broad’s behavior, we have to recognize its source – the transformation of academic science from an engine of discovery into a source of institutional and personal riches. And there is a simple way to reverse this trend: deny academic institutions intellectual property in their research and inventions.

Academic science is, after all, largely funded by the public. By all rights, discoveries made with public funds should belong to the public. And not too long ago they did. But legislation passed in 1980 – the Bayh-Dole Act – gave universities the right to claim patents on inventions made by their researchers on the public dime. Prior to 1980 these patents belonged to the federal government and many languished unused. The logic of Bayh-Dole was that, if they owned patents in their work, universities and other grantees would be incentivized to have their inventions turned into products, thereby benefiting the public.

But this is not how things worked out. Encouraged by a small number of patents that made huge sums, universities developed massive infrastructure to profit from their researchers. Not only do they spend millions on patents, they’ve turned every interaction scientists have with each other into an intellectual property transaction. Everything I get from or send to a colleague at another academic institution involves a complex legal agreement whose purpose is not to promote science but to protect the university’s ability to profit from hypothetical inventions that might arise from scientists doing what we’re supposed to do – share our work with each other.

And the idea that this system promotes the transformation of inventions made with public funding into products is laughable. CRISPR is a perfect case in point. The patent battle between UC and The Broad is likely to last for years. Meanwhile companies interested in actually developing CRISPR into new products are stymied by a combination of a lack of clarity about with whom to negotiate, and universities being difficult negotiating partners.

It would be so much easier if the US government simply placed all work arising from federal dollars into the public domain. We have a robust science and technology industry ready to exploit new ideas, and entrepreneurs and venture capitalists eager to fill in where existing companies are uninterested. Taxpayers would benefit by allowing the market, and not university licensing offices, to decide whose ideas and products make the best use of publicly funded inventions.

And most importantly, we all would benefit from returning academic science to its roots in basic, discovery-oriented research. We see with CRISPR the toxic effects of turning academic institutions into money-hungry hawkers of intellectual property. Pursuit of patent riches has transformed The Broad Institute, which houses some of the most talented scientists working today, into a prominent purveyor of calumny.

We have to fix this problem now or there will be countless other Jennifer Doudnas slimed by colleagues, their contributions to science attacked not for their validity or importance, but for their impact on some other institution’s patent portfolio. The soul of academic science is at stake.

For decades the NIH has been the premier funding agency in the world, fueling the rise of the US as the undisputed powerhouse of global science. But in his eight years in charge of federal efforts to understand, diagnose and cure disease, current NIH Director Francis Collins has systematically undermined the effectiveness of the institution and overseen a decline of American science.

Biomedical research in the US has been driven by the creativity and industry of individual investigators and their trainees. Collins has systematically diverted funds from investigator initiated projects in favor of “big science” projects conceived in and managed from inside the Beltway.

The model for these initiatives is the well-regarded Human Genome Project. However Collins, who headed this project in its final years, learned all the wrong lessons from this effort, focusing on central planning and control, and the generation of massive datasets, while ignoring the importance of technology development. Hence his signature projects as NIH director have been ill-conceived and wasteful of precious research funds.

The NIH has always aimed to fund scientists based on their ideas and accomplishments, but under Collins’s big science paradigm, money is increasingly doled out based on researchers’ willingness to sacrifice their autonomy and creativity to Bethesda’s plans. Scientists are herded into consortia and spend endless hours on conference calls to produce data that are of fleeting value.

Collins has further corrupted the process of peer review by becoming too close to the leaders of major research institutions, who have had an outsized role in shaping billions of dollars of NIH initiatives and then benefited disproportionately when funds from these projects were distributed.

The US has led the world in training biomedical scientists, attracting many of our most talented minds into science. Central to this was the expectation that they could build stable careers based on NIH funding. But under Collins this system has collapsed. “Young” PIs generally do not receive their first grants until they are in their 40s, spend an increasing amount of time seeking funds, and no longer feel they can count on NIH funding.

American science has always enjoyed strong support from Congress and the public. This support depends on a high degree of trust. But Collins has repeatedly made unrealistic promises to Congress and the public to secure support for his signature initiatives. There is almost certain to be a public backlash against the NIH when these projects fail to deliver.

Scientific progress almost always begins with basic discoveries. But in his efforts to curry favor with Congress, Collins has consistently promoted translational research with a dubious track record over basic biomedical research. He has involved the NIH in massive translational projects that are either premature or that the NIH is ill-prepared to carry out.

Finally, science as an endeavor involves building on the research of others. However Collins’s NIH is mired in a serious reproducibility and reliability crisis. Confidence in NIH funded research is at an all-time low, and Collins has responded with bureaucratic measures that have little hope of correcting the problems, while leaving untouched the perverse incentives that lead to the production of unreliable research.

Fortunately, destroying the greatest scientific engine humanity has ever created takes time. The US remains the global leader in biomedical research, with a talented and creative scientific workforce eager to tackle pressing problems in basic science and public health, and a diverse array of commercial enterprises ready to turn their discoveries into products that improve the health and well-being of our citizens. There are many thousands of talented and dedicated people at the NIH. But more time with Collins at the helm would be a disaster.

The National Institutes of Health is an invaluable resource for the American people and our economy. But it is in serious need of reform if we are to benefit optimally from the opportunities of 21st century biomedicine. It’s time to replace Francis Collins and name a talented physician scientist with real vision and wisdom as NIH Director.

Last week there was a brief but interesting conversation on Twitter about the practice of “co-first” authors on scientific papers that led me to do some research on the relationship between author order and gender using data from the NIH’s Public Access Policy.

I want to note at the outset that this is my first foray into analyzing this kind of data, so I would love feedback on the data, analyses and findings, especially links to other work on the subject, as I know some of these issues have been addressed elsewhere.

A long post follows, but here are some main things I found:

The number of female authors falls off as you go down the list of authors of a paper, with fewer than 30% of senior authors female.

Contrary to my expectation, there doesn’t seem to be a bias to put the male author first when there are male-female co-first author pairs.

There are, however, far fewer male-female co-first author pairs than there should be based on the number of male and female first and second authors.

The same thing holds true more generally for first-second author pairs. There is a deficit of cross gender pairs and a surplus of same gender pairs.

Part (and maybe most) of this effect is due to an overall skew in gender composition of authors on papers.

If you are female, there is a 45% chance that a random co-author on one of your papers is female. If you are male, there is only a 35% chance that a random co-author on one of your papers is female.

Before I explain how I got all this, let me start with a quick explainer about how to parse the list of authors on a scientific paper.

By convention in many scientific disciplines (including biology, which this post is about), the first position on the author list of a paper goes to the person who was most responsible for doing the work it describes (typically a graduate student or postdoc) and the last position to the person who supervised the project (typically the person in whose lab the work was done). If there are more than two authors an effort is made to order them in rough relationship to their contributions from the front, and degree of supervision from the back.

Of course a single linear ordering cannot do justice to the complexity of contribution to a scientific work, especially in an era of increasingly collaborative research. One can imagine many better systems. But, unfortunately, author order is currently the only way that the relative contributions of different people to a work are formally recorded. And when a scientist’s CV is being scrutinized for jobs, grants, promotions, etc., where they are in the author order matters A LOT – you only really get full credit if you are first or last.

Because of the disproportionate weight placed on the ends of the author list, these positions are particularly coveted, and discussions within and between labs about who should go where, while sometimes amicable, are often difficult and contentious.

In recent years it has become increasingly common for scientists to try and acknowledge ambiguity and resolve conflicts in author order by declaring that two or more authors should be treated as “co-first authors” who contributed equally to the work, marking them all with a * to designate this special status.

But, as the discussion on Twitter pointed out, this is a bit of a ruse. First is still first, even if it’s first among equals (the most obvious manifestation of this is that people consider it to be dishonest to list yourself first on the author list on your CV if you were second with a * on the original paper).

Anyway, during this discussion I began to wonder about how the various power dynamics at play in academia played out in the ordering of co-equal authors. And it seemed like an interesting opportunity to actually see these power dynamics at play, since the * designation indicates that the contributions of the *’d authors were similar, and therefore any non-randomness in the ordering of *’d authors with respect to gender, race, nationality or other factors likely reflects biases or power asymmetries.

I’m interested in all of these questions, but the one that seemed most accessible was to look at the role of gender. There are probably many ways to do this, but I decided to use data from PubMed Central (PMC), the NIH’s archive of full-text scientific papers. Papers in PMC are available in a fairly robust XML format that has several advantages over other publicly available databases: 1) full names of authors are generally provided, making it possible to infer many of their genders with a reasonable degree of accuracy, and 2) co-first authorship is listed in the file in a structured manner.

I downloaded two sets of papers from PMC: 1,355,350 papers from their “open access” (OA) subset, which contains papers from publishers like PLOS that allow the full text to be redistributed and reused, and 424,063 papers from the “author manuscript” (AM) subset, which contains papers submitted as part of the NIH’s Public Access Policy. These papers are all available here.

I then wrote some custom Python scripts to parse the XML, extracting from each paper the author order, the authors’ given names and whether or not they were listed as “co-first” or “equal” authors (this turned out to be a bit trickier than it should have been, since the encoding of this information is not consistent). I will comment up the code and post it here ASAP.
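In the meantime, here is a minimal sketch of what such a parser might look like. The tag and attribute names (`contrib`, `equal-contrib`, `given-names`) follow the JATS schema that PMC files use, but as noted above, equal contribution is encoded inconsistently in real files – this sketch only catches the simplest attribute form, and is a starting point rather than the actual scripts.

```python
# Sketch: pull author names and equal-contribution flags from a PMC
# JATS XML file. Real PMC files vary; this handles only the case
# where equal contribution is an attribute on the <contrib> element.
import xml.etree.ElementTree as ET

def parse_authors(path):
    tree = ET.parse(path)
    authors = []
    for contrib in tree.iter("contrib"):
        if contrib.get("contrib-type") != "author":
            continue
        given = contrib.findtext("name/given-names", default="")
        surname = contrib.findtext("name/surname", default="")
        # Equal contribution is sometimes an attribute, sometimes a
        # footnote reference -- this only catches the attribute form.
        equal = contrib.get("equal-contrib") == "yes"
        authors.append({"given": given, "surname": surname, "equal": equal})
    return authors
```

The returned list preserves author order, so downstream analyses of position and co-first status can work directly from it.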

I looked at several options for inferring an author’s gender from their given name, recognizing that this is a non-trivial challenge, with many potential pitfalls. I found that a program called genderReader, recommended by Philip Cohen, worked very well. It’s a bit out of date, but names don’t change that quickly, so I decided to use it for my analyses.

I parsed all the files (a bit of a slow process even on a fast computer) and started to look at the data. I’m going to focus on the AM subset here first, because these are all NIH funded papers and thus mostly from the US, so intercountry differences in authorship practices won’t confound the analyses, and because the set is likely more representative of the universe of papers as a whole than is the OA subset. I will try to note where these two datasets agree and disagree.

Of the 424,063 papers in AM, there are 2,568,858 total authors, with a maximum of 496 authors on a single paper and a wide distribution of author counts.

There are 219,559 unique given names (including first name + middle initials), of which about 75% were classified successfully by genderReader as male, mostly male, female, mostly female or unisex. About 25% were not in their database. For the purpose of these analyses, I treated mostly male as male and mostly female as female. I’m sure there are some errors in this process, but I looked over a reasonable subset of the calls and the only clear bias I saw was that it didn’t do a good job of classifying Asian names – treating most of them as unisex, and thereby excluding them from my analysis. All together there were 1,206,616 male authors, 737,424 female authors and 624,818 who weren’t classified. Of the authors who were classified, 62% were male.

Of the 424K papers, 32,304 contained co-equal authors, and 28,184 contained two or more co-first authors (assessed by asking whether the co-equal authors were at the beginning of the author list). Of these, 85% (24,087) had exactly two co-first authors and 12% (3,285) had three co-first authors (one had 20 co-first authors, which I’m just going to leave here for discussion). I decided to use only those with exactly two co-first authors for the next set of analyses.
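The “at the beginning of the author list” test can be sketched as a small helper. The input representation here – one boolean per author, True where the author carried an equal-contribution mark – is my own assumption for illustration, not the format used in the actual scripts.

```python
def cofirst_count(equal_flags):
    """Return the number of co-first authors, or 0 if the
    equal-contribution marks are not all at the front of the list."""
    k = sum(equal_flags)
    return k if k >= 2 and all(equal_flags[:k]) else 0

# A paper whose first two authors carry the mark has two co-firsts;
# equal-contribution marks elsewhere in the list don't qualify.
```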

There were a total of 11,340 papers with exactly two co-first authors both of whose genders were inferred. Of these, the author order counts were as follows:

               Count   Percent
Male-Male       4286     37.8
Male-Female     2479     21.9
Female-Male     2399     21.1
Female-Female   2176     19.2

I will admit I expected to see a lot more papers with Male-Female than Female-Male orders amongst two co-first authors. That is, however, not what the data show.

However, that doesn’t mean there’s not something interesting going on with gender here. First, there’s obviously a lot more male authors than female authors. In this set of papers, only 40.3% of authors in position 1 and 41.0% in position 2 are female. Given this you can easily calculate the expected number of MM, MF, FM and FF pairs there should be.
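Under the assumption that the genders at positions 1 and 2 are drawn independently, the expected counts come straight from multiplying the marginal fractions. A quick check using the numbers quoted above:

```python
# Expected co-first pair counts under independence, using the female
# fractions quoted above (40.3% at position 1, 41.0% at position 2)
# and the 11,340 papers with two gender-inferred co-first authors.
n = 11340
p1_f, p2_f = 0.403, 0.410

expected = {
    "Male-Male":     n * (1 - p1_f) * (1 - p2_f),
    "Male-Female":   n * (1 - p1_f) * p2_f,
    "Female-Male":   n * p1_f * (1 - p2_f),
    "Female-Female": n * p1_f * p2_f,
}
for pair, e in expected.items():
    print(f"{pair}: {round(e)}")
```

These round to the expected values in the table.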

               Expected   Observed
Male-Male        3994       4286
Male-Female      2776       2479
Female-Male      2696       2399
Female-Female    1874       2176

Although there doesn’t seem to be a bias in favor of M-F over F-M, there are significantly (p << .0000000001 by Chi-square) fewer mixed gender co-first author pairs than you’d expect given the overall number of male and female co-first authors.
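The chi-square statistic behind that p-value is easy to reproduce from the counts in the table (using the rounded expected values introduces a small error, but nothing that matters at this magnitude):

```python
# Chi-square statistic for observed vs. expected co-first author
# pair counts from the tables above; the statistic is large enough
# that the corresponding p-value is vanishingly small.
observed = {"MM": 4286, "MF": 2479, "FM": 2399, "FF": 2176}
expected = {"MM": 3994, "MF": 2776, "FM": 2696, "FF": 1874}

chi2 = sum((observed[k] - expected[k]) ** 2 / expected[k] for k in observed)
print(round(chi2, 1))  # roughly 134.5 with these rounded counts
```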

What can explain this? Are young scientists less likely to collaborate across gender lines than within them? Are male and female pairs better able to resolve their authorship disputes, and are thus underrepresented amongst co-first authors? Or are there fewer opportunities for them to collaborate because of biased lab compositions?

First, I wanted to ask whether there was a similar bias if we looked at all papers, not just the relatively rare co-first-author papers. Here is the fraction of female authors by position in the author list for all papers (excluding the last author for now).

Female authors are most common in the first author position and become increasingly less represented as you go back in the author order. Maybe this has to do with the well-documented problem of academia driving out women between graduate school and faculty positions. So next I asked what fraction of senior authors are women.

Yikes. Only 28% of senior authors of NIH author manuscripts are female compared to 46% of first authors. That’s horrible.

So what about the question from above: are mixed-gender first- and second-author pairs less common across all papers, not just co-firsts? The answer is yes.

                Expected   Observed
Male-Male          60052      66807
Male-Female        48874      42120
Female-Male        51623      44869
Female-Female      42014      48769

Again, there are lots of possible explanations for this, but I was curious about the effect of biased lab composition (if the gender composition of labs is skewed away from parity, you’d expect more same-gender author pairs). It’s hard to look at this directly with these data, but if one were going to guess at a covariate for skewed lab gender, it would be the gender of the PI, which I can examine here.

So, I next broke the data down by the gender of the senior author.

And in tabular form since the data are so striking.

Position (% female)   PI Female   PI Male
1st                      56.3 %    41.0 %
2nd                      53.0 %    40.6 %
3rd                      50.6 %    40.0 %
4th                      48.5 %    39.0 %
5th                      45.1 %    37.1 %

These data strongly suggest that women are more likely to join labs with female PIs and men are more likely to join labs with male PIs. But they don’t say why. It could be that people simply choose labs with a PI of their own gender, or that PIs select people of their own gender for their labs. This could reflect direct gender bias, or lab style, or many other things. Or there could be a hidden field effect: different fields have different gender balances, which on average would drive the gender composition of labs away from parity.

But whatever the reason, it’s a clear confounding factor in looking at gender and authorship. Interestingly, the bias against mixed-gender first and second authorship is still there (p << 1e-10) even after controlling for the gender of the PI.

Next I asked whether we could detect a skew in the gender composition of the entire author list of papers. So I took sets of papers with between 2 and 8 authors (the range for which we have enough data), filtered out papers where one or more authors didn’t have an inferred gender, and compared the distribution of the number of female authors to that expected from the frequency of male and female authors at each position. There is a very consistent skew towards the extremes, with a significant excess in every case of papers whose authors are all one gender.
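The expected distribution referred to above is Poisson-binomial: each position contributes a female author independently with its own probability, and the distribution of the total can be computed with a small dynamic program. A minimal sketch, with illustrative per-position probabilities rather than the actual fitted values:

```python
# Null expectation for the number of female authors on a paper, given
# per-position female probabilities p[i] (Poisson-binomial distribution),
# computed by dynamic programming.

def n_female_dist(p):
    """Distribution of the number of successes among independent
    Bernoulli trials with probabilities p."""
    dist = [1.0]                        # probability 1 of zero females so far
    for pi in p:
        new = [0.0] * (len(dist) + 1)
        for k, prob in enumerate(dist):
            new[k] += prob * (1 - pi)   # this position is male
            new[k + 1] += prob * pi     # this position is female
        dist = new
    return dist

# e.g. a 4-author paper; probabilities are illustrative only
dist = n_female_dist([0.46, 0.41, 0.40, 0.28])
print([round(x, 3) for x in dist])
```

Comparing this null distribution to the observed counts of 0, 1, …, n female authors is what reveals the excess of single-gender papers at both extremes.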

So there’s a pretty systematic skew in the gender composition of authors on papers, but where that skew comes from is unclear. Let’s look at the gender mix of all of the other authors on a paper as a function of the gender of the last author.

Again, there’s a pretty strong skew. But is this due to the PI’s gender or to a more general gender imbalance? It’s a bit hard to tell from this data alone. It turns out the skew you see after dividing based on the gender of the last author is roughly the same if you divide based on the gender of any other position in the author order. Here, for example, is what you get for papers with six authors.

There’s a lot more one could and should do with this data, and I will come back to it later, but for now I will end with this observation. If you are female, there is a 45% chance that a random co-author on one of your papers is female. If you are male, it goes down to 35%. That’s a pretty big and striking difference, and I’m curious if anyone has a good explanation for it.

President Obama published an article in the Journal of the American Medical Association today discussing the current state of his health care reform initiatives. Fortunately, the article is not behind a paywall. But JAMA nonetheless asserts their ownership and right to control the article’s use, as they do on all articles they publish, by attaching the following to the article’s PDF.

Unfortunately for JAMA, they have no right to do this. Section 105 of US Copyright law makes clear that works of the US government – and POTUS is a government employee last time I checked – are not eligible for copyright protection in the US (and JAMA is in the US).

17 U.S. Code § 105 – Subject matter of copyright: United States Government works

Copyright protection under this title is not available for any work of the United States Government, but the United States Government is not precluded from receiving and holding copyrights transferred to it by assignment, bequest, or otherwise.

It is completely inexcusable for journals to assert a right they know they do not have, thereby undoubtedly leading people to refrain from using the article in ways they are clearly legally entitled to, such as redistributing and reusing the content, as I am doing here.

Special Communication | July 11, 2016

United States Health Care Reform: Progress to Date and Next Steps

Barack Obama, JD

President of the United States, Washington, DC

ABSTRACT

Importance The Affordable Care Act is the most important health care legislation enacted in the United States since the creation of Medicare and Medicaid in 1965. The law implemented comprehensive reforms designed to improve the accessibility, affordability, and quality of health care.

Objectives To review the factors influencing the decision to pursue health reform, summarize evidence on the effects of the law to date, recommend actions that could improve the health care system, and identify general lessons for public policy from the Affordable Care Act.

Evidence Analysis of publicly available data, data obtained from government agencies, and published research findings. The period examined extends from 1963 to early 2016.

Findings The Affordable Care Act has made significant progress toward solving long-standing challenges facing the US health care system related to access, affordability, and quality of care. Since the Affordable Care Act became law, the uninsured rate has declined by 43%, from 16.0% in 2010 to 9.1% in 2015, primarily because of the law’s reforms. Research has documented accompanying improvements in access to care (for example, an estimated reduction in the share of nonelderly adults unable to afford care of 5.5 percentage points), financial security (for example, an estimated reduction in debts sent to collection of $600-$1000 per person gaining Medicaid coverage), and health (for example, an estimated reduction in the share of nonelderly adults reporting fair or poor health of 3.4 percentage points). The law has also begun the process of transforming health care payment systems, with an estimated 30% of traditional Medicare payments now flowing through alternative payment models like bundled payments or accountable care organizations. These and related reforms have contributed to a sustained period of slow growth in per-enrollee health care spending and improvements in health care quality. Despite this progress, major opportunities to improve the health care system remain.

Conclusions and Relevance Policy makers should build on progress made by the Affordable Care Act by continuing to implement the Health Insurance Marketplaces and delivery system reform, increasing federal financial assistance for Marketplace enrollees, introducing a public plan option in areas lacking individual market competition, and taking actions to reduce prescription drug costs. Although partisanship and special interest opposition remain, experience with the Affordable Care Act demonstrates that positive change is achievable on some of the nation’s most complex challenges.

INTRODUCTION

Health care costs affect the economy, the federal budget, and virtually every American family’s financial well-being. Health insurance enables children to excel at school, adults to work more productively, and Americans of all ages to live longer, healthier lives. When I took office, health care costs had risen rapidly for decades, and tens of millions of Americans were uninsured. Regardless of the political difficulties, I concluded comprehensive reform was necessary.

The result of that effort, the Affordable Care Act (ACA), has made substantial progress in addressing these challenges. Americans can now count on access to health coverage throughout their lives, and the federal government has an array of tools to bring the rise of health care costs under control. However, the work toward a high-quality, affordable, accessible health care system is not over.

In this Special Communication, I assess the progress the ACA has made toward improving the US health care system and discuss how policy makers can build on that progress in the years ahead. I close with reflections on what my administration’s experience with the ACA can teach about the potential for positive change in health policy in particular and public policy generally.

IMPETUS FOR HEALTH REFORM

In my first days in office, I confronted an array of immediate challenges associated with the Great Recession. I also had to deal with one of the nation’s most intractable and long-standing problems, a health care system that fell far short of its potential. In 2008, the United States devoted 16% of the economy to health care, an increase of almost one-quarter since 1998 (when 13% of the economy was spent on health care), yet much of that spending did not translate into better outcomes for patients.1– 4 The health care system also fell short on quality of care, too often failing to keep patients safe, waiting to treat patients when they were sick rather than focusing on keeping them healthy, and delivering fragmented, poorly coordinated care.5,6

Moreover, the US system left more than 1 in 7 Americans without health insurance coverage in 2008.7 Despite successful efforts in the 1980s and 1990s to expand coverage for specific populations, like children, the United States had not seen a large, sustained reduction in the uninsured rate since Medicare and Medicaid began (Figure 1; refs 8–10). The United States’ high uninsured rate had negative consequences for uninsured Americans, who experienced greater financial insecurity, barriers to care, and odds of poor health and preventable death; for the health care system, which was burdened with billions of dollars in uncompensated care; and for the US economy, which suffered, for example, because workers were concerned about joining the ranks of the uninsured if they sought additional education or started a business.11–16 Beyond these statistics were the countless, heartbreaking stories of Americans who struggled to access care because of a broken health insurance system. These included people like Natoma Canfield, who had overcome cancer once but had to discontinue her coverage due to rapidly escalating premiums and found herself facing a new cancer diagnosis uninsured.17

Figure 1.

Percentage of Individuals in the United States Without Health Insurance, 1963-2015

Data are derived from the National Health Interview Survey and, for years prior to 1982, supplementary information from other survey sources and administrative records. The methods used to construct a comparable series spanning the entire period build on those in Cohen et al8 and Cohen9 and are described in detail in Council of Economic Advisers 2014.10 For years 1989 and later, data are annual. For prior years, data are generally but not always biannual. ACA indicates Affordable Care Act.

In 2009, during my first month in office, I extended the Children’s Health Insurance Program and soon thereafter signed the American Recovery and Reinvestment Act, which included temporary support to sustain Medicaid coverage as well as investments in health information technology, prevention, and health research to improve the system in the long run. In the summer of 2009, I signed the Tobacco Control Act, which has contributed to a rapid decline in the rate of smoking among teens, from 19.5% in 2009 to 10.8% in 2015, with substantial declines among adults as well.7,18

Beyond these initial actions, I decided to prioritize comprehensive health reform not only because of the gravity of these challenges but also because of the possibility for progress. Massachusetts had recently implemented bipartisan legislation to expand health insurance coverage to all its residents. Leaders in Congress had recognized that expanding coverage, reducing the level and growth of health care costs, and improving quality was an urgent national priority. At the same time, a broad array of health care organizations and professionals, business leaders, consumer groups, and others agreed that the time had come to press ahead with reform.19 Those elements contributed to my decision, along with my deeply held belief that health care is not a privilege for a few, but a right for all. After a long debate with well-documented twists and turns, I signed the ACA on March 23, 2010.

PROGRESS UNDER THE ACA

The years following the ACA’s passage included intense implementation efforts, changes in direction because of actions in Congress and the courts, and new opportunities such as the bipartisan passage of the Medicare Access and CHIP Reauthorization Act (MACRA) in 2015. Rather than detail every development in the intervening years, I provide an overall assessment of how the health care system has changed between the ACA’s passage and today.

The evidence underlying this assessment was obtained from several sources. To assess trends in insurance coverage, this analysis relies on publicly available government and private survey data, as well as previously published analyses of survey and administrative data. To assess trends in health care costs and quality, this analysis relies on publicly available government estimates and projections of health care spending; publicly available government and private survey data; data on hospital readmission rates provided by the Centers for Medicare & Medicaid Services; and previously published analyses of survey, administrative, and clinical data. The dates of the data used in this assessment range from 1963 to early 2016.

The ACA has succeeded in sharply increasing insurance coverage. Since the ACA became law, the uninsured rate has declined by 43%, from 16.0% in 2010 to 9.1% in 2015,7 with most of that decline occurring after the law’s main coverage provisions took effect in 2014 (Figure 1; refs 8–10). The number of uninsured individuals in the United States has declined from 49 million in 2010 to 29 million in 2015. This is by far the largest decline in the uninsured rate since the creation of Medicare and Medicaid 5 decades ago. Recent analyses have concluded these gains are primarily because of the ACA, rather than other factors such as the ongoing economic recovery.20,21 Adjusting for economic and demographic changes and other underlying trends, the Department of Health and Human Services estimated that 20 million more people had health insurance in early 2016 because of the law.22

Each of the law’s major coverage provisions—comprehensive reforms in the health insurance market combined with financial assistance for low- and moderate-income individuals to purchase coverage, generous federal support for states that expand their Medicaid programs to cover more low-income adults, and improvements in existing insurance coverage—has contributed to these gains. States that decided to expand their Medicaid programs saw larger reductions in their uninsured rates from 2013 to 2015, especially when those states had large uninsured populations to start with (Figure 2; ref 23). However, even states that have not adopted Medicaid expansion have seen substantial reductions in their uninsured rates, indicating that the ACA’s other reforms are increasing insurance coverage. The law’s provision allowing young adults to stay on a parent’s plan until age 26 years has also played a contributing role, covering an estimated 2.3 million people after it took effect in late 2010.22

Figure 2.

Decline in Adult Uninsured Rate From 2013 to 2015 vs 2013 Uninsured Rate by State

Data are derived from the Gallup-Healthways Well-Being Index as reported by Witters23 and reflect uninsured rates for individuals 18 years or older. Dashed lines reflect the result of an ordinary least squares regression relating the change in the uninsured rate from 2013 to 2015 to the level of the uninsured rate in 2013, run separately for each group of states. The 29 states in which expanded coverage took effect before the end of 2015 were categorized as Medicaid expansion states, and the remaining 21 states were categorized as Medicaid nonexpansion states.

Early evidence indicates that expanded coverage is improving access to treatment, financial security, and health for the newly insured. Following the expansion through early 2015, nonelderly adults experienced substantial improvements in the share of individuals who have a personal physician (increase of 3.5 percentage points) and easy access to medicine (increase of 2.4 percentage points) and substantial decreases in the share who are unable to afford care (decrease of 5.5 percentage points) and reporting fair or poor health (decrease of 3.4 percentage points) relative to the pre-ACA trend.24 Similarly, research has found that Medicaid expansion improves the financial security of the newly insured (for example, by reducing the amount of debt sent to a collection agency by an estimated $600-$1000 per person gaining Medicaid coverage).26,27 Greater insurance coverage appears to have been achieved without negative effects on the labor market, despite widespread predictions that the law would be a “job killer.” Private-sector employment has increased in every month since the ACA became law, and rigorous comparisons of Medicaid expansion and nonexpansion states show no negative effects on employment in expansion states.28– 30

The law has also greatly improved health insurance coverage for people who already had it. Coverage offered on the individual market or to small businesses must now include a core set of health care services, including maternity care and treatment for mental health and substance use disorders, services that were sometimes not covered at all previously.31 Most private insurance plans must now cover recommended preventive services without cost-sharing, an important step in light of evidence demonstrating that many preventive services were underused.5,6 This includes women’s preventive services, which has guaranteed an estimated 55.6 million women coverage of services such as contraceptive coverage and screening and counseling for domestic and interpersonal violence.32 In addition, families now have far better protection against catastrophic costs related to health care. Lifetime limits on coverage are now illegal and annual limits typically are as well. Instead, most plans must cap enrollees’ annual out-of-pocket spending, a provision that has helped substantially reduce the share of people with employer-provided coverage lacking real protection against catastrophic costs (Figure 3; ref 33). The law is also phasing out the Medicare Part D coverage gap. Since 2010, more than 10 million Medicare beneficiaries have saved more than $20 billion as a result.34

Figure 3.

Percentage of Workers With Employer-Based Single Coverage Without an Annual Limit on Out-of-pocket Spending

Data from the Kaiser Family Foundation/Health Research and Education Trust Employer Health Benefits Survey.33

Before the ACA, the health care system was dominated by “fee-for-service” payment systems, which often penalized health care organizations and health care professionals who find ways to deliver care more efficiently, while failing to reward those who improve the quality of care. The ACA has changed the health care payment system in several important ways. The law modified rates paid to many that provide Medicare services and Medicare Advantage plans to better align them with the actual costs of providing care. Research on how past changes in Medicare payment rates have affected private payment rates implies that these changes in Medicare payment policy are helping decrease prices in the private sector as well.35,36 The ACA also included numerous policies to detect and prevent health care fraud, including increased scrutiny prior to enrollment in Medicare and Medicaid for health care entities that pose a high risk of fraud, stronger penalties for crimes involving losses in excess of $1 million, and additional funding for antifraud efforts. The ACA has also widely deployed “value-based payment” systems in Medicare that tie fee-for-service payments to the quality and efficiency of the care delivered by health care organizations and health care professionals. In parallel with these efforts, my administration has worked to foster a more competitive market by increasing transparency around the prices charged and the quality of care delivered.

Most importantly over the long run, the ACA is moving the health care system toward “alternative payment models” that hold health care entities accountable for outcomes. These models include bundled payment models that make a single payment for all of the services provided during a clinical episode and population-based models like accountable care organizations (ACOs) that base payment on the results health care organizations and health care professionals achieve for all of their patients’ care. The law created the Center for Medicare and Medicaid Innovation (CMMI) to test alternative payment models and bring them to scale if they are successful, as well as a permanent ACO program in Medicare. Today, an estimated 30% of traditional Medicare payments flow through alternative payment models that broaden the focus of payment beyond individual services or a particular entity, up from essentially none in 2010.37 These models are also spreading rapidly in the private sector, and their spread will likely be accelerated by the physician payment reforms in MACRA.38,39

Trends in health care costs and quality under the ACA have been promising (Figure 4; refs 1,40). From 2010 through 2014, mean annual growth in real per-enrollee Medicare spending has actually been negative, down from a mean of 4.7% per year from 2000 through 2005 and 2.4% per year from 2006 to 2010 (growth from 2005 to 2006 is omitted to avoid including the rapid growth associated with the creation of Medicare Part D).1,40 Similarly, mean real per-enrollee growth in private insurance spending has been 1.1% per year since 2010, compared with a mean of 6.5% from 2000 through 2005 and 3.4% from 2005 to 2010.1,40

Figure 4.

Rate of Change in Real per-Enrollee Spending by Payer

Data are derived from the National Health Expenditure Accounts.1 Inflation adjustments use the Gross Domestic Product Price Index reported in the National Income and Product Accounts.40 The mean growth rate for Medicare spending reported for 2005 through 2010 omits growth from 2005 to 2006 to exclude the effect of the creation of Medicare Part D.

As a result, health care spending is likely to be far lower than expected. For example, relative to the projections the Congressional Budget Office (CBO) issued just before I took office, CBO now projects Medicare to spend 20%, or about $160 billion, less in 2019 alone.41,42 The implications for families’ budgets of slower growth in premiums have been equally striking. Had premiums increased since 2010 at the same mean rate as the preceding decade, the mean family premium for employer-based coverage would have been almost $2600 higher in 2015.33 Employees receive much of those savings through lower premium costs, and economists generally agree that those employees will receive the remainder as higher wages in the long run.43 Furthermore, while deductibles have increased in recent years, they have increased no faster than in the years preceding 2010.44 Multiple sources also indicate that the overall share of health care costs that enrollees in employer coverage pay out of pocket has been close to flat since 2010 (Figure 5; refs 45–48), most likely because the continued increase in deductibles has been canceled out by a decline in co-payments.

Figure 5.

Out-of-pocket Spending as a Percentage of Total Health Care Spending for Individuals Enrolled in Employer-Based Coverage

Data for the series labeled Medical Expenditure Panel Survey (MEPS) were derived from MEPS Household Component and reflect the ratio of out-of-pocket expenditures to total expenditures for nonelderly individuals reporting full-year employer coverage. Data for the series labeled Health Care Cost Institute (HCCI) were derived from the analysis of the HCCI claims database reported in Herrera et al,45 HCCI 2015,46 and HCCI 201547; to capture data revisions, the most recent value reported for each year was used. Data for the series labeled Claxton et al were derived from the analyses of the Trueven Marketscan claims database reported by Claxton et al 2016.48

At the same time, the United States has seen important improvements in the quality of care. The rate of hospital-acquired conditions (such as adverse drug events, infections, and pressure ulcers) has declined by 17%, from 145 per 1000 discharges in 2010 to 121 per 1000 discharges in 2014.49 Using prior research on the relationship between hospital-acquired conditions and mortality, the Agency for Healthcare Research and Quality has estimated that this decline in the rate of hospital-acquired conditions has prevented a cumulative 87 000 deaths over 4 years.49 The rate at which Medicare patients are readmitted to the hospital within 30 days after discharge has also decreased sharply, from a mean of 19.1% during 2010 to a mean of 17.8% during 2015 (Figure 6; written communication; March 2016; Office of Enterprise Data and Analytics, Centers for Medicare & Medicaid Services). The Department of Health and Human Services has estimated that lower hospital readmission rates resulted in 565 000 fewer total readmissions from April 2010 through May 2015.50,51

Figure 6.

Medicare 30-Day, All-Condition Hospital Readmission Rate

Data were provided by the Centers for Medicare & Medicaid Services (written communication; March 2016). The plotted series reflects a 12-month moving average of the hospital readmission rates reported for discharges occurring in each month.

While the Great Recession and other factors played a role in recent trends, the Council of Economic Advisers has found evidence that the reforms introduced by the ACA helped both slow health care cost growth and drive improvements in the quality of care.44,52 The contribution of the ACA’s reforms is likely to increase in the years ahead as its tools are used more fully and as the models already deployed under the ACA continue to mature.

BUILDING ON PROGRESS TO DATE

I am proud of the policy changes in the ACA and the progress that has been made toward a more affordable, high-quality, and accessible health care system. Despite this progress, too many Americans still strain to pay for their physician visits and prescriptions, cover their deductibles, or pay their monthly insurance bills; struggle to navigate a complex, sometimes bewildering system; and remain uninsured. More work to reform the health care system is necessary, with some suggestions offered below.

First, many of the reforms introduced in recent years are still some years from reaching their maximum effect. With respect to the law’s coverage provisions, these early years’ experience demonstrate that the Health Insurance Marketplace is a viable source of coverage for millions of Americans and will be for decades to come. However, both insurers and policy makers are still learning about the dynamics of an insurance market that includes all people regardless of any preexisting conditions, and further adjustments and recalibrations will likely be needed, as can be seen in some insurers’ proposed Marketplace premiums for 2017. In addition, a critical piece of unfinished business is in Medicaid. As of July 1, 2016, 19 states have yet to expand their Medicaid programs. I hope that all 50 states take this option and expand coverage for their citizens in the coming years, as they did in the years following the creation of Medicaid and CHIP.

With respect to delivery system reform, the reorientation of the US health care payment systems toward quality and accountability has made significant strides forward, but it will take continued hard work to achieve my administration’s goal of having at least half of traditional Medicare payments flowing through alternative payment models by the end of 2018. Tools created by the ACA—including CMMI and the law’s ACO program—and the new tools provided by MACRA will play central roles in this important work. In parallel, I expect continued bipartisan support for identifying the root causes and cures for diseases through the Precision Medicine and BRAIN initiatives and the Cancer Moonshot, which are likely to have profound benefits for the 21st-century US health care system and health outcomes.

Second, while the ACA has greatly improved the affordability of health insurance coverage, surveys indicate that many of the remaining uninsured individuals want coverage but still report being unable to afford it.53,54 Some of these individuals may be unaware of the financial assistance available under current law, whereas others would benefit from congressional action to increase financial assistance to purchase coverage, which would also help middle-class families who have coverage but still struggle with premiums. The steady-state cost of the ACA’s coverage provisions is currently projected to be 28% below CBO’s original projections, due in significant part to lower-than-expected Marketplace premiums, so increased financial assistance could make coverage even more affordable while still keeping federal costs below initial estimates.55,56

Third, more can and should be done to enhance competition in the Marketplaces. For most Americans in most places, the Marketplaces are working. The ACA supports competition and has encouraged the entry of hospital-based plans, Medicaid managed care plans, and other plans into new areas. As a result, the majority of the country has benefited from competition in the Marketplaces, with 88% of enrollees living in counties with at least 3 issuers in 2016, which helps keep costs in these areas low.57,58 However, the remaining 12% of enrollees live in areas with only 1 or 2 issuers. Some parts of the country have struggled with limited insurance market competition for many years, which is one reason that, in the original debate over health reform, Congress considered and I supported including a Medicare-like public plan. Public programs like Medicare often deliver care more cost-effectively by curtailing administrative overhead and securing better prices from providers.59,60 The public plan did not make it into the final legislation. Now, based on experience with the ACA, I think Congress should revisit a public plan to compete alongside private insurers in areas of the country where competition is limited. Adding a public plan in such areas would strengthen the Marketplace approach, giving consumers more affordable options while also creating savings for the federal government.61

Fourth, although the ACA included policies to help address prescription drug costs, like more substantial Medicaid rebates and the creation of a pathway for approval of biosimilar drugs, those costs remain a concern for Americans, employers, and taxpayers alike—particularly in light of the 12% increase in prescription drug spending that occurred in 2014.1 In addition to administrative actions like testing new ways to pay for drugs, legislative action is needed.62 Congress should act on proposals like those included in my fiscal year 2017 budget to increase transparency around manufacturers’ actual production and development costs, to increase the rebates manufacturers are required to pay for drugs prescribed to certain Medicare and Medicaid beneficiaries, and to give the federal government the authority to negotiate prices for certain high-priced drugs.63

There is another important role for Congress: it should avoid moving backward on health reform. While I have always been interested in improving the law—and signed 19 bills that do just that—my administration has spent considerable time in the last several years opposing more than 60 attempts to repeal parts or all of the ACA, time that could have been better spent working to improve our health care system and economy. In some instances, the repeal efforts have been bipartisan, including the effort to roll back the excise tax on high-cost employer-provided plans. Although this provision can be improved, such as through the reforms I proposed in my budget, the tax creates strong incentives for the least-efficient private-sector health plans to engage in delivery system reform efforts, with major benefits for the economy and the budget. It should be preserved.64 In addition, Congress should not advance legislation that undermines the Independent Payment Advisory Board, which will provide a valuable backstop if rapid cost growth returns to Medicare.

LESSONS FOR FUTURE POLICY MAKERS

While historians will draw their own conclusions about the broader implications of the ACA, I have my own. These lessons learned are not just for posterity: I have put them into practice in both health care policy and other areas of public policy throughout my presidency.

The first lesson is that any change is difficult, but it is especially difficult in the face of hyperpartisanship. Republicans reversed course and rejected their own ideas once they appeared in the text of a bill that I supported. For example, they supported a fully funded risk-corridor program and a public plan fallback in the Medicare drug benefit in 2003 but opposed them in the ACA. They supported the individual mandate in Massachusetts in 2006 but opposed it in the ACA. They supported the employer mandate in California in 2007 but opposed it in the ACA—and then opposed the administration’s decision to delay it. Moreover, through inadequate funding, opposition to routine technical corrections, excessive oversight, and relentless litigation, Republicans undermined ACA implementation efforts. We could have covered more ground more quickly with cooperation rather than obstruction. It is not obvious that this strategy has paid political dividends for Republicans, but it has clearly come at a cost for the country, most notably for the estimated 4 million Americans left uninsured because they live in GOP-led states that have yet to expand Medicaid.65

The second lesson is that special interests pose a continued obstacle to change. We worked successfully with some health care organizations and groups, such as major hospital associations, to redirect excessive Medicare payments to federal subsidies for the uninsured. Yet others, like the pharmaceutical industry, oppose any change to drug pricing, no matter how justifiable and modest, because they believe it threatens their profits.66 We need to continue to tackle special interest dollars in politics. But we also need to reinforce the sense of mission in health care that brought us an affordable polio vaccine and widely available penicillin.

The third lesson is the importance of pragmatism in both legislation and implementation. Simpler approaches to addressing our health care problems exist at both ends of the political spectrum: the single-payer model vs government vouchers for all. Yet the nation typically reaches its greatest heights when we find common ground between the public and private good and adjust along the way. That was my approach with the ACA. We engaged with Congress to identify the combination of proven health reform ideas that could pass and have continued to adapt them since. This includes abandoning parts that do not work, like the voluntary long-term care program included in the law. It also means shutting down and restarting a process when it fails. When HealthCare.gov did not work on day 1, we brought in reinforcements, were brutally honest in assessing problems, and worked relentlessly to get it operating. Both the process and the website were successful, and we created a playbook we are applying to technology projects across the government.

While the lessons enumerated above may seem daunting, the ACA experience nevertheless makes me optimistic about this country’s capacity to make meaningful progress on even the biggest public policy challenges. Many moments serve as reminders that a broken status quo is not the nation’s destiny. I often think of a letter I received from Brent Brown of Wisconsin. He did not vote for me and he opposed “ObamaCare,” but Brent changed his mind when he became ill, needed care, and got it thanks to the law.67 Or take Governor John Kasich’s explanation for expanding Medicaid: “For those that live in the shadows of life, those who are the least among us, I will not accept the fact that the most vulnerable in our state should be ignored. We can help them.”68 Or look at the actions of countless health care providers who have made our health system more coordinated, quality-oriented, and patient-centered. I will repeat what I said 4 years ago when the Supreme Court upheld the ACA: I am as confident as ever that looking back 20 years from now, the nation will be better off because of having the courage to pass this law and persevere. As this progress with health care reform in the United States demonstrates, faith in responsibility, belief in opportunity, and ability to unite around common values are what makes this nation great.

Additional Contributions: I thank Matthew Fiedler, PhD, and Jeanne Lambrew, PhD, who assisted with planning, writing, and data analysis. I also thank Kristie Canegallo, MA; Katie Hill, BA; Cody Keenan, MPP; Jesse Lee, BA; and Shailagh Murray, MS, who assisted with editing the manuscript. All of the individuals who assisted with the preparation of the manuscript are employed by the Executive Office of the President.

Congressional Budget Office. Federal subsidies for health insurance coverage for people under age 65: 2016 to 2026. https://www.cbo.gov/publication/51385. Published March 24, 2016. Accessed June 14, 2016.

A recent post on the GOAL mailing list by Heather Morrison alerted me to the following sneaky aspect of Elsevier’s “open access” publishing practices.

To put it simply, Elsevier have distorted the widely recognized concept of open access – in which authors retain copyright in their work and give others permission to reuse it, and in which publishers are a vehicle authors use to distribute their work – into "Elsevier access", in which Elsevier, and not authors, retain all rights not granted by the license. As a result, despite Elsevier's highlighting of the "fact" that authors retain copyright, authors have ceded all decisions about how their work is used and about if and when to pursue legal action for misuse of their work. Crucially, authors who choose a non-commercial license are making Elsevier the sole beneficiary of commercial reuse of their "open access" content.

For some historical context, when PLOS and BioMed Central launched open access journals over a decade ago, they adopted the use of Creative Commons licenses in which authors retain copyright in their work, but grant in advance the right for others to republish and use that work subject to restrictions that differ according to the license used. PLOS and BMC and most true open access publishers use the CC-BY license, whose only condition is that any reuse must be accompanied by proper attribution.

When PLOS, BioMed Central and other true open access publishers began to enjoy financial success, established subscription publishers like Elsevier began to see a business opportunity in open access publishing, and began offering a variety of “open access” options, where authors pay an article-processing charge in order to make their work available under one of several licenses. The license choices at Elsevier include CC-BY, but also CC-BY-NC (which does not allow commercial reuse) and a bespoke Elsevier license that is even more limiting (nobody else can reuse or redistribute these works).

At PLOS, authors do not need to transfer any rights to the publisher, since the agreement of authors to license their work under CC-BY grants PLOS (and anyone else) all the rights they need to publish the work. However, this is not true of more restrictive licenses like CC-BY-NC, which, by itself, does not give Elsevier the right to publish works. Thus, if either CC-BY-NC or Elsevier's own license is used, the authors have to grant publishing rights to Elsevier.

However, as Morrison points out, the publishing agreement that Elsevier open access authors sign is far more restrictive. Instead of just granting Elsevier the right to publish their work:

Authors sign an exclusive license agreement, where authors have copyright but license exclusive rights in their article to the publisher**.

**This includes the right for the publisher to make and authorize commercial use, please see “Rights granted to Elsevier” for more details.

This is not a subtle distinction. Elsevier and other publishers that offer it routinely push CC-BY-NC to authors on the premise that authors don't want to allow people to use their work for commercial purposes without their permission. Normally that would be the case with a work licensed under CC-BY-NC. But because exclusive rights to publish works licensed with CC-BY-NC are transferred to Elsevier, the company, and not the authors, determines what commercial reuse is permissible. And, of course, it is Elsevier who profits from granting these rights.

It’s bad enough that Elsevier plays on misplaced fears of commercial reuse to convince authors not to grant the right to commercial reuse, which violates the spirit and goals of open access. But to convince people that they should retain the right to veto commercial reuses of their work, and then seize all those rights for themselves, is despicable.

Any sufficiently convoluted explanation for biological phenomena is indistinguishable from epigenetics.

Use of the word “epigenetics” over time

Epigenetics is everywhere. Nary a day goes by without some news story or press release telling us something it explains.

Why does autism run in families? Epigenetics.
Why do you have trouble losing weight? Epigenetics.
Why are vaccines dangerous? Epigenetics.
Why is cancer so hard to fight? Epigenetics.
Why is a cure for cancer around the corner? Epigenetics.
Why might your parenting choices affect your great-grandchildren? Epigenetics.

Epigenetics is used as shorthand in the popular press for any of a loosely connected set of phenomena purported to result in experience being imprinted in DNA and transmitted across time and generations. Its place in our lexicon has grown as biochemical discoveries have given ideas of extra-genetic inheritance an air of molecular plausibility.

Biologists now invoke epigenetics to explain all manner of observations that lie outside their current ken. Epigenetics pops up frequently among non-scientists in all manner of discussions about heredity. And all manner of crackpots slap “epigenetics” on their fringy ideas to give them a veneer of credibility. But epigenetics has achieved buzzword status far faster and to a far larger extent than current science justifies, earning the disdain of scientists (like me) who study how information is encoded, transferred and read out across cellular and organismal generations.

This simmering conflict came to a head last week around an article in The New Yorker, "Same but Different" by Siddhartha Mukherjee, that juxtaposed a meditation on the differences between his mother and her identical twin with a discussion of the research of Rockefeller University's David Allis on the biochemistry of DNA and the proteins that encapsulate it in cells – biochemistry that he and others believe provides a second mechanism for the encoding and transmission of genetic information.

Although Mukherjee hedges throughout his piece, the clear implication of the story is that Allis's work provides an explanation for differences that arise between genetically identical individuals, and it even suggests that this work opens the door to legitimizing the long-discredited ideas of the 19th-century naturalist Jean-Baptiste Lamarck, who thought that organisms could pass beneficial traits acquired during their lifetimes on to their offspring.

The dispute centers on the process of gene regulation, wherein the levels of specific sets of genes are tuned to confer distinct properties on different sets of cells and tissues during development, and in response to internal and external stimuli. Gene regulation is central to the encoding of organismal form and function in DNA, as it allows different cells and even different individuals of a species to have identical DNA and yet manifest different phenotypes.

Mark Ptashne has studied the molecular basis of gene regulation for fifty years. His and John Greally's critique of Mukherjee – or really of Allis – is rather technical, and one could quibble about some of the specifics. But his main points are simple and difficult to refute:

There is essentially no evidence to support the idea that chemical modification of DNA and/or its accompanying proteins is used to encode and transmit information over long periods of time.

Rather than representing a separate system for storing and conveying information, a wide range of experiments suggests that the primary role of the biochemistry in question is to execute gene expression programs encoded in DNA and read out by a diverse set of proteins known as transcription factors that bind to specific sequences in DNA and regulate the expression of nearby genes.

In one way this debate is incredibly important because it is ultimately about getting the science right. Mukherjee's piece contained several inaccurate statements and, by focusing on one aspect of Allis's work, gave a woefully incomplete picture of our current understanding of gene regulation.

Any system for conveying information about the genome – which is what Mukherjee is writing about – has to have some way to achieve genomic specificity so that the expression of genes can be tuned up or down in a non-random manner. Transcription factors, which bind to specific DNA sequences, provide a link between the specific sequence of DNA and the cellular machines responsible for turning information in DNA into proteins and other biomolecules. Small RNAs, which can bind to complementary sequences in DNA, also have this capacity.

But there is scant evidence for sequence specificity in the activities of the proteins that modify DNA and the nucleosomes around which it is wrapped. Rather, they get their specificity from transcription factors and small RNAs. That doesn't render this biochemistry unimportant – the broad conservation of proteins involved in modifying histones shows that they play important roles – but ascribing regulatory primacy to DNA methylation and histone modifications is not consistent with our current understanding of gene regulation.

Something is, however, getting lost in this back-and-forth, as one might come away with the impression that this is a disagreement about whether cells and organisms can transmit information in a manner above and beyond DNA sequence. And this is unfortunate, because there really is no question about this. Ptashne and Allis/Mukherjee are arguing about the molecular details of how it happens and about how important different phenomena are.

Various forms of non-Mendelian information transfer are well established. The most important happens in every animal generation: eggs contain not only DNA from the mother, but also a wide range of proteins, RNAs and small molecules that drive the earliest stages of embryonic development. The particular cocktail left by the mother can have profound effects on the new organism – so-called "maternal effects". These effects can be the result of the mother's genotype, the environment in which she lives, and, in various ways, her experiences during her life. (Such phenomena are not limited to multicellular critters – single-celled organisms distribute many molecules asymmetrically when they divide, conferring different phenotypes on their genetically identical offspring.)

Many maternal effects have been studied in great detail, and in most cases the transmission of state involves the transmission of different concentrations and activities of proteins (including transcription factors) and RNAs. That is, the transmitted DNA is identical, but the state of the machinery that reads it out is different, resulting in different outcomes.

However, there are some good examples in which modifications to DNA play an important role in the transmission of information across generations – most notably "imprinting", in which an organism preferentially utilizes the copy of a gene it got from one of its parents, to the exclusion of the other, in a way that appears to be independent of the sequence of the gene. Imprinting, a relatively rare but sometimes important phenomenon, appears to arise from parent-specific methylation of DNA.

Could the histone modifications that Allis studies and Mukherjee focuses on also carry information across cell divisions and generations? Sure. Our understanding of gene regulation is still fairly primitive, and there is plenty of room for the discovery of important inheritance mechanisms involving histone modification. We have to keep an open mind. But the point the critics of Mukherjee are really making is that given what is known today about mechanisms of gene regulation, it is bizarre bordering on irresponsible to focus on a mechanism of inheritance that only might be real.

And Mukherjee is far from the only one to have fallen into this trap. Which brings me to what I think is the most interesting question here: why does this particular type of epigenetic inheritance involving an obscure biochemical process have such strong appeal? I think there are several things going on.

First, the idea of a "histone code" that supersedes the information in DNA exists (at least for now) in a kind of limbo: enough biochemical specificity to give it credibility and a ubiquity that makes it seem important, but sufficient mystery about what it actually is and how it might work that people can imbue it with whatever properties they want. And scientists and non-scientists alike have leapt into this molecular biological sweet spot, using this manifestation of the idea of epigenetics as a generic explanation for things they can't understand, as a reason to hope that things they want to be true might really be, and as a difficult-to-refute, almost quasi-religious argument for the plausibility of almost any idea linked to heredity.

But there is also something more specifically appealing about this particular idea. I think it stems from the fact that epigenetics in general, and the idea of a “histone code” in particular, provide a strong counterforce to the rampant genetic determinism that has dominated the genomic age. People don’t like to think that everything about the way they are and will be is determined by their DNA, and the idea that there is some magic wrapper around DNA that can be shaped by experience to override what is written in the primary code is quite alluring.

Of course DNA is not destiny, and we don't need to invoke etchings on DNA to get out of it. But I have a feeling it will take more than a few arch retorts from transcription factor extremists to erase epigenetics from the zeitgeist.

Several people have noted that, in my previous post dealing with PLOS’s business, I didn’t address a point that came up in a number of threads regarding the relative virtues of PLOS and scientific societies – the basic point being that people should publish in society journals because they do good things with the money (run meetings, support fellowships and grants) and that PLOS is to be shunned because it “doesn’t give back to the community”.

I agree that many societies do good things to build and support their communities. But sponsoring meetings and fellowships is not the only way to give back to the community. PLOS was founded to make science publishing work better for scientists and the public, and we are singularly devoted to that goal. This means publishing open access journals that succeed as journals. This means demonstrating to a skeptical publishing and funding community that it's possible to run a successful and stable business that publishes exclusively open access journals. This means working to change the way peer review works and the ways scientists are assessed. This means lobbying to promote laws and policies that increase access to the scientific literature.

Because of PLOS and other open access pioneers, around 20% of new papers are immediately available for people around the world to access without paywalls. PLOS's success as a publisher has served as a model for other publishers and journals to adopt open access. PLOS's promotion of open access and our lobbying helped make funder "public access" policies that make millions of papers freely available a reality. And PLOS is now working to promote instant publication, open peer review and other publishing changes that will not only make science more open, but also get science out more quickly and make the ways we evaluate papers and each other more effective. This is what we give back to science. People are, of course, free not to value these things, to question whether PLOS's role in them was significant, or to conclude that we've achieved our goals and are no longer essential. But it's ridiculous to say that PLOS doesn't give back to the community just because we don't sponsor meetings.

Now none of this should be construed as my saying people shouldn't publish in society journals – provided they are open access, of course. One of the reasons we started PLOS was that, back in the late 1990s, most scientific societies rejected the idea that they could take advantage of the Internet's power to make their work more widely available by using a different business model. We felt they were wrong, and one of PLOS's main goals has always been to demonstrate that an open access business model could work for them – and I'm thrilled that in many cases it has. See open access society journals like G3 and mBio, journals that I wholeheartedly and unambiguously support.

However, a lot of society journals – most – are not open access. And no matter how many meetings and fellowships the revenue from paywalled journals supports, they are not worth it. I've yet to see a society whose good works were so good that they outweighed the harm of paywalling the scientific literature; using meetings as an excuse to paywall the literature is completely unacceptable.

The reliance of so many societies on journal revenues has often made it hard to distinguish them from commercial publishers in their public stance on important issues in science publishing. You would think that, on first principles, scientific societies would support improving access to the scientific literature. Indeed, several societies recognized this early on and pioneered open access and other open publishing business models before PLOS came along. However, they are the exception. The most powerful societies have for decades not only been trading meetings for access to the literature, they have been using the profits they get from their journals to openly fight open access. Opposition from scientific societies was one of the major reasons for the scuttling of Harold Varmus's 1999 eBioMed proposal, which would have created an NIH-managed pre-print server with a full system of post-publication peer review. And for years major scientific societies were THE loudest voices on Capitol Hill arguing AGAINST the NIH public access policy and other moves for better access to the scientific literature.

I also have long wondered whether it’s good for societies in a more general sense when they are reliant on publishing revenues for their funding. Societies are supposed to be organizations that represent their members, and yet the concept of being a member of a society has been weakened by the fact that few people actively choose to become a member of a society to support their activities and have a voice in their policies. Rather people become society members because it gets them access to journals and/or discounts to meetings. I love the Genetics Society of America, but they and many other societies do this weird thing where, if you go to one of their meetings, the cost of attending the meeting as a non-member is greater than the cost of attending as a member plus the cost of membership, so of course everyone “joins” the society. But this kind of membership is weak. And I wonder whether people wouldn’t feel more engaged in their societies, and if societies wouldn’t be more responsive to their members, if they became true membership organizations once again.

Finally, I want to return to the issue of finances. One of the threads in Andy Kern's series of Tweets about PLOS finances that triggered this series of posts was his surprise that PLOS had margins of ~20% and had ~$25m in assets. In response I encouraged him to look at the finances of scientific societies. I think it's good that Andy has triggered a conversation about PLOS's finances – most people are unaware of how the publishing business works – something that's important if we're going to change it for the better. And similarly I think it would be great to learn more about the finances of the scientific societies that people support – most of which not only file required Form 990s, but also offer more detailed financial reports. Some of the stuff you find is disturbing (like the fact that the American Chemical Society, long one of the fiercest opponents of open access, is sitting on $1.5b in assets) but most of it is just enlightening. I've compiled a list of Form 990s from the member societies of FASEB, and will be adding more information in the coming days.

Last week my friend Andy Kern (a population geneticist at Rutgers) went on a bit of a bender on Twitter prompted by his discovery of PLOS’s IRS Form 990 – the annual required financial filing of non-profit corporations in the United States. You can read his string of tweets and my responses, but the gist of his critique is this: PLOS pays its executives too much, and has an obscene amount of money in the bank.

Let me start by saying that I understand where his disdain comes from. Back when we were starting PLOS we began digging into the finances of the scientific societies that were fighting open access, and I was shocked to see how much money they were sitting on and how much their CEOs get paid. If I weren’t involved with PLOS, and I’d stumbled upon PLOS’s Form 990 now, I’d have probably raised a storm about it. I have absolutely no complaints about Andy’s efforts to understand what he was seeing – non-profits are required to release this kind of financial information precisely so that people can scrutinize what they are doing. And I understand why Andy and others find some of the info discomforting, and share some of his concerns. But having spent the last 15 years trying to build PLOS and turn it into a stable enterprise, I have a different perspective, and I’d like to explain it.

Let me start with something on which I agree with Andy completely: science publishing is way too expensive. Andy says he originally started poking into PLOS's finances because he wanted to know where the $2,250 he was asked to pay to publish in PLOS Genetics went, as this seemed like a lot of money to take a paper, have a volunteer academic serve as editor, find several additional volunteers to serve as peer reviewers, and then, if they accept the paper, turn it into PDF and HTML versions and publish it online. And he's right. It is too much money.

That $2,250 is only about a third of the $6,000 a typical subscription journal takes in for every paper they publish, and that $6,000 buys access for only a tiny fraction of the world’s population, while the $2,250 buys it for everyone. But $2,250 is still too much, as is the $1,495 at PLOS ONE. I’ve always said that our goal should be to make it cost as little as possible to publish, and that our starting point should be $0 a paper.

The reality is, however, that it costs PLOS a lot more than $0 to handle a paper. We handle a lot of papers – close to 200 a day – each one different. There’s a lot of manual labor involved in making sure the submission is complete, that it passes ethical and technical checks, in finding an editor and reviewers and getting them to handle the paper in a timely and effective manner. It then costs money to turn the collection of text and figures and tables into a paper, and to publish it and maintain a series of high-volume websites. All together we have a staff of well over 100 people running our journal operations, and they need to have office space, people to manage them, an HR system, an accounting system and so on – all the things a business has to have. And for better or worse our office is in San Francisco (remember that two of the three founders were in the Bay Area, and we couldn’t have started it anywhere else), which is a very expensive place to operate. We have always aimed to keep our article processing charges (APCs) as low as possible – it pains me every time we’ve had to raise our charges, since I think we should be working to eliminate APCs, not increase them. But we have to be realistic about what publishing costs us.

The difference in price between our journals reflects different costs. PLOS Biology and PLOS Medicine have professional editors handling each manuscript, so they're intrinsically more expensive to operate. They also have relatively low acceptance rates, meaning a lot of staff time is spent on rejected papers, which generate no revenue. This is also the reason for the difference in price between our community journals like PLOS Genetics and PLOS ONE: the community journals reject more papers and thus we have to charge more per accepted paper. It might seem absurd to have people pay to reject other people's papers, but if you think about it, that's exactly what makes selective journals attractive – they publish your paper and reject lots of others. I've argued for a long time that we should do away with selective journals, but so long as people want to publish in them, they're going to have this weird economics. And note this is not just true of open access journals – higher impact subscription journals bring in a lot more money per published paper than low impact subscription journals, for essentially the same reason.
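The relationship between acceptance rates and article-processing charges can be sketched with a toy calculation. To be clear, this is not PLOS's actual cost model, and the $300 per-submission figure below is a made-up illustrative number; the only point is the inverse scaling.

```python
# Toy model, not PLOS's actual cost structure: assume every submission
# costs roughly the same to handle, whether or not it is accepted.
# Rejected papers generate no revenue, so the break-even charge per
# *accepted* paper scales inversely with the acceptance rate.

def break_even_apc(cost_per_submission: float, acceptance_rate: float) -> float:
    """Charge per accepted paper needed to cover handling of all submissions."""
    return cost_per_submission / acceptance_rate

# Hypothetical handling cost of $300 per submission:
for rate in (0.7, 0.3, 0.1):
    print(f"acceptance rate {rate:.0%}: break-even APC ≈ ${break_even_apc(300, rate):,.0f}")
```

Under these assumptions, halving the acceptance rate doubles the required charge, which is why a selective journal must charge more per accepted paper than a high-acceptance journal even if the cost of handling each individual submission were identical.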

Could PLOS do all these things more efficiently, more effectively and for less money? Absolutely. We, like most other big publishers, are using legacy software and systems to handle submissions, manage peer review and convert manuscripts into published papers. These systems are, for the most part, expensive, outdated, and difficult to customize. We are in a challenging situation since, until very recently, we weren't in a position to develop our own systems for doing all these things, and we couldn't just switch to cheaper or free systems since they weren't built to handle the volume of papers we deal with.

That said, it’s certainly possible to run journals much, much more cheaply. It costs the physics pre-print server arXiv something like $10 a paper to maintain its software, screening and website. There are times when I wish PLOS had just hacked together a bunch of Perl scripts, hung out a shingle and built new features as we needed them. But part of what made PLOS appealing at the start is that it didn’t work that way – for better or worse it looked like a real journal, and this was one of the things that made people comfortable with our (at the time) weird economic model. I’m not sure this is true anymore, and if I were starting PLOS today I would do things differently – and, I believe, much less expensively. I would love it if people would set up inexpensive or even free open access biology journals – it’s certainly possible with open source software and fully volunteer labor – and for people to get comfortable with biomedical publishing being basically no different from just posting work on the Internet, with lightweight systems for peer review. That has always seemed to me to be the right way to do things. But PLOS can’t just pull the plug on all the things we do, so we’re trying to achieve the same goal by investing in software that will make it possible to do everything PLOS does faster, better and cheaper. We’re going to start rolling it out this year, and, while I don’t run PLOS and can’t speak for the whole board, I am confident that this will bring our costs down significantly and that we will ultimately be in a position to reduce prices.

Which brings us to issue number two. Andy and a lot of other people took umbrage at the fact that PLOS has margins of 20% and ~$25 million in assets. Again, I understand why people look at these numbers and find them shocking – anything involving millions of dollars always seems like a lot of money. But this is a misconception. Both of these numbers represent nothing more than what is required for PLOS to be a stable enterprise.

I’ll start by reminding people that PLOS is still a relatively young company, working in a rapidly changing industry. Like most startups, it took a long time for PLOS to break even. For the first nine years of our existence we lost money every year, and were able to build our business only because we had strong support from foundations that believed in what we were doing. Finally, in 2011, we reached the point where we were taking in slightly more money than we were spending, allowing us to wean ourselves off foundation support. But we still had essentially no money in the bank, and that’s not a good thing. Good operating practice for any business dictates that the company have money in the bank to cover a downturn in revenue. This is particularly true for open access publishers, since we have no guaranteed revenue stream – in contrast to subscription publishers, who make long-term subscription deals. What’s more, this industry is changing rapidly, with the number of papers going to open access journals growing, but also many new open access publishers entering the market. So it’s very hard for us to predict what our business is going to look like from year to year, while a lot of our expenses, like rent, software licenses and salaries, have to be paid before the revenue they enable comes in. The only way to survive in this market is to have a decent amount of money in the bank to buffer against the unpredictable. If anything, I am told by people who spend their lives thinking about these things, we’re cutting things a little close. So, while 20% margins may seem like a lot, given our overall financial situation and the fact that we’ve been profitable for only five years, I think it’s actually a reasonable compromise between keeping costs as low as we can and ensuring that PLOS remains financially stable, while also allowing us to make modest investments in technology that will make publishing better and cheaper in the long run.
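One way to make sense of a reserve number like this is to ask how many months the company could operate if revenue stopped tomorrow. The expense figure below is a hypothetical placeholder, not PLOS’s actual budget – the point is only the shape of the calculation:

```python
# Hypothetical "runway" arithmetic: reserves divided by monthly burn.
# Neither number is PLOS's real figure; they only illustrate the metric.

def months_of_runway(reserves, annual_expenses):
    """Months a company could operate on reserves alone with zero revenue –
    a standard buffer metric for businesses with no guaranteed income."""
    return reserves / (annual_expenses / 12)

# e.g. $25M in assets against a made-up $40M/year in expenses:
print(f"{months_of_runway(25e6, 40e6):.1f} months")  # 7.5 months
```

Seen this way, a reserve measured in single-digit months of expenses is a cushion, not a windfall.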

Just to put these numbers in perspective for people who (like me) aren’t trained to think about these things, I had a look at the finances of a large set of scientific societies – primarily the members of FASEB, a federation of most of the major societies in molecular biology. Many of them have larger operating margins and far larger cash reserves than PLOS, and I haven’t found one yet that doesn’t have a larger ratio of assets to expenses than PLOS does. And these are all organizations with far more stable revenue streams than ours. So I just don’t think it’s fair to suggest that either PLOS’s margins or reserves are untoward.

Indeed these numbers represent something important – that PLOS has become a successful business. I’ll once again remind people that one of the major knocks against open access when PLOS started was that we were a bunch of naive idealists (that’s the nicest way people put it) who didn’t understand what it took to run a successful business. Commercial publishers and societies alike argued repeatedly to scientists, funders and legislators that the only way to make money in science publishing was to use a subscription model. So it was absolutely critical to the success of the open access movement that PLOS not only succeed as a publisher, but that we also succeed as a business – to show the commercial and society publishers that their principal argument for refusing to shift to open access was wrong. Having been the recipient of withering criticism – both personally and as an organization – about being too financially naive, it’s ironic and a bit mind-boggling to all of a sudden be criticized for having created too good a business.

Now despite that, I don’t want people to confuse my defense of PLOS’s business success with a defense of the business it’s engaged in. While I believe the APC/service business model PLOS has helped to develop is far superior to the traditional subscription model, because it does not require paywalls, I’ve never been comfortable with the APC model in an absolute sense (and I recognize the irony of saying that) because I wish science publishing weren’t a business at all. When we started PLOS, the only way we had to make money was through APCs. But if I had my druthers we’d all just post papers online to a centralized server funded and run by a coalition of governments and funders, and scientists would use lightweight software to peer review published papers and organize the literature in useful ways – and no money would change hands in the process. I’m glad that PLOS is stable and has shown the world that the APC model can work, but I hope that we can soon move beyond it to a very different system.

Now I want to end on the issue that seemed to upset people the most – the salaries of PLOS’s executives. I am immensely proud of the executive team at PLOS – they are talented and dedicated. They make competitive salaries – and we’d have trouble hiring and retaining them if they didn’t. The board has been doing what we felt we had to do to build a successful company in the marketplace we live in – after all, we were founded to fix science publishing, not capitalism. But as an individual I can’t help but feel that’s a copout. The truth is the general criticism is right. A system where executives make so much more money than the staff they supervise isn’t just unfair, it’s ultimately corrosive. It’s something we all have to work to change, and I wish I’d done more to make PLOS a model of that change.

Finally, I want to acknowledge a tension evident in a lot of the discussion around this issue. Some of the criticism of PLOS – especially about margins and cash flow – has been simply unfair. But other criticism – about salaries and transparency – reflects something important. I think people understand that in these ways PLOS is just being a typical company. But we weren’t founded to be just a typical company – we were founded to be different and, yes, better, and people have higher expectations of us than they do of a typical company. I want it to be that way. But PLOS was also not founded to fail – that would have been terrible for the push for openness in science publishing. I am immensely proud of PLOS’s success as a publisher, an agent for change, and a business – and of all the people inside and outside the organization who helped achieve it. Throughout PLOS’s history there were times we had to choose between abstract ideals and the reality of making PLOS a successful business, and I think, overall, we’ve done a good, but far from perfect, job of balancing this tension. And moving forward I personally pledge to do a better job of figuring out how to be successful while fully living up to those ideals.

Michael Eisen

I'm a biologist at UC Berkeley and an Investigator of the Howard Hughes Medical Institute. I work primarily on flies, and my research encompasses evolution, development, genetics, genomics, chemical ecology and behavior. I am a strong proponent of open science, and a co-founder of the Public Library of Science. And most importantly, I am a Red Sox fan. (More about me here).