PeerJ Preprints: Science Policy
https://peerj.com/preprints/index.atom?journal=peerj&subject=7800
Science Policy articles published in PeerJ Preprints

NCBI will no longer make taxonomy identifiers for individual influenza strains on January 15, 2018
https://peerj.com/preprints/3428 (2017-11-22)
Eneida Hatcher, Yiming Bao, Paolo Amedeo, Olga Blinkova, Guy Cochrane, Nadia Fedorova, William Gruner, Detlef Leipe, Yasukazu Nakamura, Yuri Ostapchuk, Vasuki Palanigobu, Robert Sanders, Conrad Schoch, Catherine Smith, David Wentworth, Linda Yankie, Sergey Zhdanov, Ilene Karsch-Mizrachi, J. Rodney Brister
Currently the National Center for Biotechnology Information (NCBI) assigns an individual taxonomy identifier to each distinct influenza virus isolate submitted to GenBank. To support this practice, individual flu isolates must be manually added to the NCBI taxonomy database and unique taxonomy identifiers generated. This added layer of manual processing is unique to influenza virus and prevents automation of the flu sequence submission process. Here we outline a new NCBI policy that normalizes influenza virus taxonomy processing but maintains features supported by the previous approach. This change will reduce the amount of manual handling necessary for flu submissions and pave the way for increased automation of the submission process. While this automation may disrupt some historic practices, it will better align influenza virus data processing with that of other viruses and ultimately lower the submission burden on data providers.

Manipulating the alpha level cannot cure significance testing – comments on "Redefine statistical significance"
https://peerj.com/preprints/3411 (2017-11-14)
David Trafimow, Valentin Amrhein, Corson N. Areshenkoff, Carlos Barrera-Causil, Eric J. Beh, Yusuf Bilgiç, Roser Bono, Michael T. Bradley, William M. Briggs, Héctor A. Cepeda-Freyre, Sergio E. Chaigneau, Daniel R. Ciocca, Juan Carlos Correa, Denis Cousineau, Michiel R. de Boer, Subhra Sankar Dhar, Igor Dolgov, Juana Gómez-Benito, Marian Grendar, James Grice, Martin E. Guerrero-Gimenez, Andrés Gutiérrez, Tania B. Huedo-Medina, Klaus Jaffe, Armina Janyan, Ali Karimnezhad, Fränzi Korner-Nievergelt, Koji Kosugi, Martin Lachmair, Rubén Ledesma, Roberto Limongi, Marco Tullio Liuzza, Rosaria Lombardo, Michael Marks, Gunther Meinlschmidt, Ladislas Nalborczyk, Hung T. Nguyen, Raydonal Ospina, Jose D. Perezgonzalez, Roland Pfister, Juan José Rahona, David A. Rodríguez-Medina, Xavier Romão, Susana Ruiz-Fernández, Isabel Suarez, Marion Tegethoff, Mauricio Tejo, Rens van de Schoot, Ivan Vankov, Santiago Velasco-Forero, Tonghui Wang, Yuki Yamada, Felipe C. Zoppino, Fernando Marmolejo-Ramos
We argue that depending on p-values to reject null hypotheses, including a recent call for changing the canonical alpha level for statistical significance from .05 to .005, is deleterious to new discoveries and to the progress of science. Given that blanket and variable criterion levels are both problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and the determination of sample sizes much more directly than significance testing does; but no statistical tool should replace significance testing as a new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple independent studies. When evaluating the strength of the evidence, we should consider, for example, auxiliary assumptions, the strength of the experimental design, and implications for applications. To boil all this down to a binary decision based on a p-value threshold of .05, .01, .005, or anything else is not acceptable.
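The threshold objection can be made concrete with a toy calculation (the z statistics below are invented for illustration): two studies with nearly identical results fall on opposite sides of the .05 line, and even pooling their evidence with Stouffer's method yields a p-value that would still fail the proposed .005 cutoff.

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

# Two hypothetical studies with nearly identical effects: z = 1.97 vs z = 1.95.
p1, p2 = two_sided_p(1.97), two_sided_p(1.95)
print(f"study 1: p = {p1:.3f}")   # just under .05 -> "significant"
print(f"study 2: p = {p2:.3f}")   # just over .05  -> "not significant"

# Pooling the evidence (Stouffer's method for two equal-weight studies):
z_combined = (1.97 + 1.95) / math.sqrt(2)
print(f"combined: p = {two_sided_p(z_combined):.4f}")  # under .01, yet above .005
```

The point is not the particular numbers but that any fixed threshold turns a smooth evidential continuum into an arbitrary dichotomy.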

If funders and libraries subscribed to open access: The case of eLife, PLOS, and BioOne
https://peerj.com/preprints/3392 (2017-11-04)
John Willinsky, Matthew Rusk
Following on recent initiatives in which funders and libraries directly fund open access publishing, this study works out the economics of systematically applying this approach to three biomedical and biology publishing entities by determining the publishing costs for the funders that sponsored the research, while assigning the costs for unsponsored articles to the libraries. The study draws its data from the nonprofit biomedical publishers eLife and PLOS, and the nonprofit journal aggregator BioOne, with this sample representing a mix of publishing revenue models, including funder sponsorship, article processing charges (APC), and subscription fees. This funder-library open access subscription model is proposed as an alternative to both the closed-subscription model, which funders and libraries no longer favor, and the APC open access model, which has limited scalability across scholarly publishing domains. Utilizing PubMed filtering and manual-sampling strategies, as well as publicly available publisher revenue data, the study demonstrates that in 2015, 86 percent of the articles in eLife and PLOS acknowledged funder support, as did 76 percent of the articles in the largely subscription journals of BioOne. Twelve percent of the articles identified the NIH as a funder, and 8 percent identified other U.S. government agencies. Approximately half of the articles were funded by non-U.S. government agencies, including 1 percent by the Wellcome Trust and 0.5 percent by the Howard Hughes Medical Institute. For the 17 percent of articles that lacked a funder, the study demonstrates how a collection of research libraries, similar to the one currently subscribing to BioOne, could cover publishing costs.
The goal of the study is to inform stakeholder considerations of open access models that can work across the disciplines by (a) providing a cost breakdown for direct funder and library support for open access publishing; (b) positing the use of publishing data-management organizations (such as Crossref and ORCID) to facilitate per-article open access support; and (c) proposing ways in which such a model offers a more efficient, equitable, and scalable approach to open access than the prevailing APC model, which originated with biomedical publishing.
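The allocation arithmetic the study works out can be sketched in a few lines; every number below (article count, the flat per-article cost, the funder share, the number of subscribing libraries) is an invented assumption for illustration, not a figure from the study.

```python
# Hypothetical corpus: 1,000 articles at a flat per-article publishing cost.
TOTAL_ARTICLES = 1000
COST_PER_ARTICLE = 1500  # USD, assumed

# Assumed split between sponsored and unsponsored articles.
shares = {
    "funders": 0.83,     # articles acknowledging a funder, billed per article
    "libraries": 0.17,   # unsponsored articles, pooled across libraries
}

bills = {payer: round(TOTAL_ARTICLES * share) * COST_PER_ARTICLE
         for payer, share in shares.items()}
print(bills)  # {'funders': 1245000, 'libraries': 255000}

# If, say, 200 libraries split the unsponsored pool equally:
print(f"per-library contribution: ${bills['libraries'] / 200:,.0f}")  # $1,275
```

The design point is that funders pay only for the articles they sponsored, while the residual pool is small enough to resemble a modest collective library subscription.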

Becoming more transparent: Collecting and presenting data on biomedical Ph.D. alumni
https://peerj.com/preprints/3370 (2017-10-25)
Christopher L Pickett, Shirley Tilghman
For more than 20 years, panels of experts have recommended that universities collect and publish data on the career outcomes of Ph.D. students. However, little progress has been made. Over the past few years, a handful of universities, including those in the National Institutes of Health’s Broadening Experiences in Scientific Training consortium, and organizations, including the Association of American Universities and the Association of American Medical Colleges, launched projects to collect and publish data on biomedical Ph.D. alumni. Here, we describe the outcome of a meeting, convened by Rescuing Biomedical Research, of universities and associations working to improve the transparency of career outcomes data. We were able to achieve consensus on a set of common methods for alumni data collection and a unified taxonomy to describe the career trajectories of biomedical Ph.D.s. These materials can be used by any institution, with little or no modification, to begin data collection efforts on their Ph.D. alumni. These efforts represent an important step forward in addressing a decades-old recommendation, and will improve the ability of trainees to plan for their careers and of universities to tailor their training programs.

Industry payments to physician journal editors
https://peerj.com/preprints/3359 (2017-10-20)
Victoria S S Wong, Lauro N Avalos, Michael L Callaham
Objective: To assess industry payments to physician journal editors, and to determine how their rate of financial conflicts of interest compares to that of all physicians within the same specialty.
Study Design and Setting: Open Payments is a United States federal program that mandates reporting of medical industry payments to physicians. We performed a retrospective analysis of prospectively collected data, reviewing August 1, 2013 to December 31, 2016 payments using the Open Payments search tool. We collected payments data on “top tier” US-based physician-editors of highly-cited medical journals including 1) total general payments from industry, 2) total “direct” research payments, and 3) associated “indirect” research funding. We compared payments to physician-editors and payments to physicians-by-specialty using existing published data.
Results: In 35 journals, 333 (74.5%) of 447 “top tier” editors met inclusion criteria as US-based physician-editors. Of these, 212 (63.7%) received industry-associated payments in the study period. In an average year during the study period, 141 (42.3%) of physician-editors received payments directed to themselves (rather than their institutions); 120 (36.0%) received payments >$50; 66 (19.8%) received payments >$5,000 (the threshold designated by the National Institutes of Health as a Significant Financial Interest); and 51 (15.3%) received payments >$10,000. The mean annual "total general payment" was $55,157 (standard deviation $561,885; range $10 to $10,981,153), with a median of $3,512. Median general industry payments to physician-editors were mostly higher than those to all physicians within their specialty.
Conclusions: A substantial minority of physician-editors receive direct payments from industry within any given year, though most editors received payment of some kind in the study period. There were significant outliers. More robust and specific editor financial COI declarations may be appropriate given the extent of editors’ influences on the medical literature.
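The large gap between the reported mean ($55,157) and median ($3,512) is what a heavily right-skewed payment distribution looks like; a toy example with invented payment values shows how a single outlier dominates the mean while leaving the median untouched.

```python
import statistics

# Invented payment values, not the study's data: six modest payments plus one outlier.
payments = [10, 500, 1_200, 3_500, 4_000, 9_000, 2_000_000]

print(f"mean:   ${statistics.mean(payments):,.0f}")    # pulled far up by the outlier
print(f"median: ${statistics.median(payments):,.0f}")  # middle value, unaffected
```

This is why the abstract's "significant outliers" remark matters: the mean alone would badly misrepresent the typical editor's payments.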

A Quality Management System for scientific research activities and its related management software
https://peerj.com/preprints/3316 (2017-10-04)
Luca Caruana, Alessandro Pensato, Loredana Riccobono, Giovanna Liguori, Antonella Lanati, Annamaria Kisslinger, Marta Di Carlo, Antonella Bongiovanni
Quality disciplines have been widely used for decades in industrial and business fields. Only in recent times, however, have Quality management principles and approaches received proper attention in the life sciences. The aim of the present work is the development, optimization, validation and dissemination of an innovative way of planning and organizing research activity, inspired by Quality and Project Management principles. Hence, we have built a Quality Management System for a pilot life science research laboratory that deals with the housing and handling of marine organisms. Based on our Quality system, we have also created the modular software prototype Help4Lab. We demonstrate that a proper and accurate transfer of Quality culture and methodologies to intellectual and scientific production can facilitate and strengthen scientific research.

A methodology for malaria programme impact evaluation
https://peerj.com/preprints/3263 (2017-09-18)
Emilie Pothin, Luis Segura, Katya Galactionova, Leah Bohle, Barbara Matthys, Olivier J.T. Briet, Thomas A Smith
This document describes a methodology for continual assessment of the impact of malaria interventions, and the efficiency of the malaria programme. The methodology is designed to be implemented recurrently on a cycle of 2–5 years, with the involvement of stakeholders, including National Malaria Control Programmes, development partners and other organizations active in the programme. Their participation should inform the impact and efficiency assessment, so that it is linked to subsequent decision making defining the nature and scope of malaria control interventions. The methodology is designed in a modular way, providing some flexibility with regard to which elements are implemented at any given time. Some modules require technical capabilities usually not available in a regular monitoring and evaluation (M&E) team, and will require contributions from other national and/or international partners.

Imagining the ‘open’ university: Sharing scholarship to improve research and education
https://peerj.com/preprints/2711 (2017-09-14)
Erin C McKiernan
Open scholarship, such as the sharing of articles, code, data, and educational resources, has the potential to improve university research and education, as well as increase the impact universities can have beyond their own walls. To support this perspective, I present evidence from case studies, published literature, and personal experiences as a practicing open scholar. I describe some of the challenges inherent to practicing open scholarship, and some of the tensions created by incompatibilities between institutional policies and personal practice. To address this, I propose several concrete actions universities could take to support open scholarship, and outline ways in which such initiatives could benefit the public as well as institutions. Importantly, I do not think most of these actions would require new funding, but rather a redistribution of existing funds and a rewriting of internal policies to better align with university missions of knowledge dissemination and societal impact.

Can editors save peer review from peer reviewers?
https://peerj.com/preprints/3005 (2017-09-06)
Rafael D'Andrea, James P O'Dwyer
Peer review is the gold standard for scientific communication, but its ability to guarantee the quality of published research remains difficult to verify. Recent modeling studies suggest that peer review is sensitive to reviewer misbehavior, and it has been claimed that referees who sabotage work they perceive as competition may severely undermine the quality of publications. Here we examine which aspects of suboptimal reviewing practices most strongly impact quality, and test different mitigating strategies that editors may employ to counter them. We find that the biggest hazard to the quality of published literature is not selfish rejection of high-quality manuscripts but indifferent acceptance of low-quality ones. Bypassing or blacklisting bad reviewers and consulting additional reviewers to settle disagreements can reduce but not eliminate the impact. The other editorial strategies we tested do not significantly improve quality, but pairing manuscripts to reviewers unlikely to selfishly reject them and allowing revision of rejected manuscripts minimize rejection of above-average manuscripts. In its current form, peer review offers few incentives for impartial reviewing efforts. Editors can help, but structural changes are more likely to have a stronger impact.
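The headline result can be illustrated with a toy simulation; this is not the authors' actual model, and the quality distribution, thresholds, and acceptance probability below are all invented. An "indifferent" reviewer who accepts at random drags the mean quality of the published set down far more than a "selfish" one who rejects strong work as competition.

```python
import random

random.seed(0)

def published_quality(reviewer, n=20_000):
    """Mean quality of the manuscripts a given reviewer policy accepts."""
    accepted = [q for q in (random.gauss(0, 1) for _ in range(n)) if reviewer(q)]
    return sum(accepted) / len(accepted)

# Three stylized reviewer policies; the thresholds are invented for illustration.
diligent    = lambda q: q > 0                  # accept above-average manuscripts
selfish     = lambda q: 0 < q < 1.5            # also sabotage perceived competition
indifferent = lambda q: random.random() < 0.5  # accept half at random, ignoring quality

for name, policy in [("diligent", diligent), ("selfish", selfish),
                     ("indifferent", indifferent)]:
    print(f"{name:12} mean published quality: {published_quality(policy):+.2f}")
```

Under these assumptions the diligent and selfish policies both publish above-average work (roughly +0.8 and +0.6), while the indifferent policy hovers near 0, the population mean; this mirrors the paper's finding that indifferent acceptance, not selfish rejection, is the bigger hazard.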

The prehistory of biology preprints: a forgotten experiment from the 1960s
https://peerj.com/preprints/3174 (2017-08-22)
Matthew Cobb
In 1961, the NIH began to circulate biological preprints in a forgotten experiment called the Information Exchange Groups (IEGs). This system eventually attracted over 3,600 participants and saw the production of more than 2,500 documents, but by 1967 it had been effectively shut down by journal publishers’ refusal to accept articles that had been circulated as preprints. This article charts the rise and fall of the IEGs and explores the parallels with the 1990s and with the biomedical preprint movement of today.
