Several online forums exist to facilitate open and/or anonymous discussion of the peer-reviewed scientific literature. Data integrity is a common discussion topic, and it is widely assumed that publicity surrounding such matters will accelerate correction of the scientific record. This study aimed to test this assumption by examining a collection of 497 papers for which data integrity had been questioned either in public or in private. The papers were divided into two sub-sets: a public set of 274 papers discussed online, and a private set comprising the remaining 223 papers that were not publicized. The sources of alleged data problems, the criteria for defining problem data, and the communication of problems to journals and appropriate institutions were similar between the sets. The number of laboratory groups represented in each set was also similar (75 in public, 62 in private), as was the number of problem papers per laboratory group (3.65 in public, 3.54 in private). Over a study period of 18 months, public papers were retracted 6.5-fold more, and corrected 7.7-fold more, than those in the private set. Parsing the results by laboratory group, 28 laboratory groups in the public set had papers that received corrective action, versus 6 laboratory groups in the private set. For laboratory groups in the public set with corrected/retracted papers, the fraction of their papers acted on was 62% of those initially flagged, whereas in the private set this fraction was 27%. Such clustering of actions suggests a pattern in which correction/retraction of one paper from a group correlates with more corrections/retractions from the same group, with this pattern being stronger in the public set. It is therefore concluded that online discussion enhances levels of corrective action in the scientific literature. Nevertheless, anecdotal discussion reveals substantial room for improvement in the handling of such matters.

John Bohannon wrote a news article in Science that either shows that many open access journals that charge APCs have sloppy (or no) peer review…or shows almost nothing at all. This story discusses the article itself, offers a number of responses to it—and then adds something I don't believe you'll find anywhere else: a journal-by-journal test of whether the journals involved would pass a naive three-minute sniff test as to whether they were plausible targets for article submissions without lots of additional checking. Is this really a problem involving a majority of hundreds of journals—or maybe one involving 27% (that is, 17) of 62 journals? Read the story; make up your own mind.

Evaluation of scientific research is becoming increasingly reliant on publication-based bibliometric indicators, which may result in the devaluation of other scientific activities—such as data curation—that do not necessarily result in the production of scientific publications. This issue may undermine the movement to openly share and cite data sets in scientific publications because researchers are unlikely to devote the effort necessary to curate their research data if they are unlikely to receive credit for doing so. This analysis attempts to demonstrate the bibliometric impact of properly curated and openly accessible data sets by attempting to generate citation counts for three data sets archived at the National Oceanographic Data Center. My findings suggest that all three data sets are highly cited, with estimated citation counts in most cases higher than 99% of all the journal articles published in Oceanography during the same years. I also find that methods of citing and referring to these data sets in scientific publications are highly inconsistent, despite the fact that a formal citation format is suggested for each data set. These findings have important implications for developing a data citation format, encouraging researchers to properly curate their research data, and evaluating the bibliometric impact of individuals and institutions.

The Higher Education Funding Council for England and three other UK funding bodies (the Scottish Funding Council, the Higher Education Funding Council for Wales and the Department for Employment and Learning) have enacted an open access mandate.

Here's an excerpt:

5. The core of this policy is as follows: to be eligible for submission to the post-2014 REF, outputs must have been deposited in an institutional or subject repository on acceptance for publication, and made open-access within a specified time period. This requirement applies to journal articles and conference proceedings only; monographs and other long-form publications, research data and creative and practice-based research outputs are out of scope. Only articles and proceedings accepted for publication after 1 April 2016 will need to fulfil these requirements, but we would strongly urge institutions to implement the policy now. The policy gives a further list of cases where outputs will not need to fulfil the requirements.

LIBER believes that the right to read is the right to mine and that licensing will never bridge the gap in the current copyright framework as it is unscalable and resource intensive. Furthermore, as this discussion paper highlights, licensing has the potential to limit the innovative potential of digital research methods by:

restricting the tools that researchers can use

limiting the way in which research results can be made available

impacting on the transparency and reproducibility of research results.

ARL has been awarded a $1 million grant for the Shared Access Research Ecosystem (SHARE).

Here's an excerpt from the announcement:

The Association of Research Libraries (ARL) has been awarded a joint $1 million grant from the Institute of Museum and Library Services (IMLS) and the Alfred P. Sloan Foundation to develop and launch the Shared Access Research Ecosystem (SHARE) Notification Service. SHARE is a collaborative initiative of ARL, the Association of American Universities (AAU), and the Association of Public and Land-grant Universities (APLU) to ensure the preservation of, access to, and reuse of research findings and reports.

SHARE aims to make research assets more discoverable and more accessible, and to enable the research community to build upon these assets in creative ways. SHARE's first project, the Notification Service, will inform stakeholders when research results—including articles and data—are released.

Prior studies demonstrate the shocking unavailability of most books published in the 20th century, prompting The Atlantic Monthly headline "How Copyright Made Mid-Century Books Vanish." The unavailability of new editions of older works would be less problematic, however, if little consumer demand existed for those works. In addition, the lack of new editions would be much less troubling if the works were easily available in alternative forms or markets. Newly collected data provides evidence of the demand for out-of-print books and then charts the availability of out-of-print works in digital form (eBooks and .mp3), in used book stores, and in public libraries. The situation with books remains dismal, although music publishers on iTunes seem to be doing a much better job of digitizing older works and making them available than do book publishers. Some theories for this discrepancy are offered.

This study presents and analyzes the findings of a 2012 survey of member libraries belonging to the Association of Research Libraries (ARL) on publishers' large journal bundles and compares the results to earlier surveys. The data illuminate five research questions: market penetration, journal bundle construction, collection format shifts, pricing models, and license terms. The structure of the product is still immature, particularly in defining content and developing sustainable pricing models. The typical "bundle" is something less than the publisher's full list. Neither market studies nor market forces have produced a sustainable new strategy for pricing and selling e-journals. Finally, a complex history of managing license terms is revealed in the data.

In their report, published in March 2014, Björk and Solomon set out a series of scenarios for how funders might develop their approaches for supporting APCs. These cover both full open access journals (which operate exclusively by this model) and so-called hybrid journals (which offer this service for individual articles, while continuing to operate via the subscription model). The authors appraised three combined scenarios, which they conclude to be the most promising for further consideration.

The MOOC Content Licensing Solution uses the current per-page or per-article academic-based pricing rightsholders have established through CCC's Electronic Course Content pay-per-use service. CCC offers digital rights from over 5,000 rightsholders around the world to public, private not-for-profit, and private for-profit U.S.-based institutions of higher education that conduct academic MOOCs.

In the previous post, and also on our site for PLOS ONE Academic Editors, an attempt to simplify our policy did not represent the policy correctly and we sincerely apologize for that and for the confusion it has caused. We are today correcting that post and hoping it provides the clarity many have been seeking. . . .

Two key things to summarize about the policy are:

The policy does not aim to say anything new about what data types, forms and amounts should be shared.

The policy does aim to make transparent where the data can be found, and says that it shouldn't be just on the authors' own hard drive.

Correction

We have struck out the paragraph in the original PLOS ONE blog post headed "What do we mean by data", as we think it led to much of the confusion. Instead we offer this guidance to authors planning to submit to a PLOS journal.

What data do I need to make available?

We ask you to make available the data underlying the findings in the paper, which would be needed by someone wishing to understand, validate or replicate the work. Our policy has not changed in this regard. What has changed is that we now ask you to say where the data can be found.

As the PLOS data policy applies to all fields in which we publish, we recognize that we'll need to work closely with authors in some subject areas to ensure adherence to the new policy. Some fields have very well established standards and practices around data, while others are still evolving, and we would like to work with any field that is developing data standards. We are aiming to ensure transparency about data availability.

Global digital media company Getty Images today announces, for the first time, the ability for people to easily embed and share its imagery—at no cost—for non-commercial use on websites, blogs and social media channels through a new embed tool. . . .

This is the latest in a series of moves by Getty Images to harness technology and social media to drive broader exposure and usage of its content. Recent initiatives include a unique partnership with Pinterest, the fastest growing content sharing channel, announced in October 2013, whereby Pinterest pays Getty Images a fee in exchange for metadata. Getty Images then shares these fees with its contributors, who also receive attribution when their content is used.

This is the first of a trio of essays: two related to fairly specific situations, one covering a range of ethical discussions. Depending on how you define "ethics," I could also include sections on Elsevier and OA, embargoes, fallacious and misleading anti-OA arguments and the whole area of peer review. Or maybe not. In any case, we lead off with the sad case of Jeffrey Beall.

Since Beall's chief claim to fame is his ever-growing list of supposedly predatory OA journals, and since I'm showing the case for treating Beall as a questionable source, I have to say this: In case you're thinking "Walt's claiming there are no scam OA journals," I'm not—and toward the end of this essay, I'll quote some useful ways to avoid scam journals regardless of their business model.

"Reed Elsevier is continuing to deliver on its long term strategic and financial priorities. With underlying revenue growth across all major business areas, operating profit and earnings grew well in 2013. We made good progress on organic development and portfolio reshaping, and our strong cash flow enabled us to step up our share buyback programme whilst maintaining balance sheet strength. We are recommending a +7% increase in the full year dividend for Reed Elsevier PLC and +8% for Reed Elsevier NV, in line with growth in adjusted earnings per share at constant exchange rates."

In an effort to increase access to this data, we are now revising our data-sharing policy for all PLOS journals: authors must make all data publicly available, without restriction, immediately upon publication of the article. Beginning March 3rd, 2014, all authors who submit to a PLOS journal will be asked to provide a Data Availability Statement, describing where and how others can access each dataset that underlies the findings. This Data Availability Statement will be published on the first page of each article.

The goal of this issue is to provide a succinct overview of e-book platforms for academic librarians as well as insights into where e-book platforms are headed in the future. Most of the authors work in academic libraries, and their job responsibilities include developing, procuring, promoting, and educating users about e-books. The topics covered include an overview of e-book platforms—including technical aspects and business models—lending platforms, aggregator platforms, commercial publisher platforms, and university press platforms. It is our hope that these articles will add to your knowledge of the current and future state of e-book platforms in academic libraries.

Featured Digital Scholarship Publications

DigitalKoans Overview

DigitalKoans provides news and commentary on digital copyright, digital curation, digital repository, open access, research data management, scholarly communication, and other digital information issues. From April 2005 through March 2016, DigitalKoans had over 13.4 million visitors, over 60.5 million file requests, and over 45.3 million page views. Excluding spiders, there were over 8 million visitors and over 19.8 million page views. It is available via e-mail, RSS feed, and Twitter.