
The academic publication process is not the most efficient of systems. Authors often go through several rounds of submission and rejection trying to find an appropriate publication venue. In fact, it takes on average nearly six months for a paper to reach publication from date of submission, during which time there is plenty of opportunity for competitors to publish similar results.

The reality is that before a paper is accepted by a journal, it is often rejected by one or more others. The reason for rejection need not be a methodological flaw – perhaps the research just isn’t innovative enough for the prestigious titles. During this time, each journal sends the paper for appraisal by experts in the relevant field: peer review. Endless cycles of repetitive review and requests for additional experiments from multiple journals do not necessarily make for efficient scientists, or efficient science.

At last week’s SSP conference in San Francisco, those of us interested in Altmetrics were rather excited to see representatives from each of the major products come together in a session entitled ‘Metrics 2.0: It’s about Time… and People’.

First up was Andrea Michalek of Plum Analytics (who kindly shared her slides here). She revealed a sneak preview of work being done with The Smithsonian, one of Plum’s first customers. Their product, PlumX, is being used to collect data (usage, captures, mentions, social media, citations) in order to generate reports on publication activity in support of research evaluation.

She explained how in scholarly communications, the same article can be published in multiple locations on the web (e.g. publisher website, PubMed Central, Mendeley). Fortunately, Plum collects and displays the counts from each of these individual locations, allowing users to get a full view of the engagement surrounding a particular article, video, presentation, etc. Indeed, she stressed the importance of tracking the impact of all aspects of output, not just the article. She spoke of these ‘2nd level metrics’ and used the example of an author who blogs about his/her research.
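The per-location roll-up Michalek describes can be sketched in a few lines. This is purely illustrative: the record shape and field names below are assumptions, not Plum’s actual data model or API.

```python
from collections import defaultdict

# Hypothetical per-location metric records for a single article
# (illustrative only; not PlumX's real schema).
records = [
    {"location": "publisher website", "metric": "usage", "count": 1500},
    {"location": "PubMed Central", "metric": "usage", "count": 900},
    {"location": "Mendeley", "metric": "captures", "count": 120},
    {"location": "publisher website", "metric": "citations", "count": 14},
]

def aggregate(records):
    """Sum each metric's counts across every location the work appears in."""
    totals = defaultdict(int)
    for r in records:
        totals[r["metric"]] += r["count"]
    return dict(totals)

print(aggregate(records))
# usage is combined across the publisher site and PubMed Central
```

The point of the example is the merge step: because one work lives at several URLs, engagement only becomes visible once the counts are summed per metric rather than reported per location.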

Since 1840, BMJ has been a trusted voice in the development of improved healthcare. We are proud of our heritage but also believe in looking forward. Our objective remains to support medical professionals and organisations in continuously improving the delivery of quality healthcare. By sharing our information, analytical tools and technology during an upcoming hack day (6-7 July), BMJ seeks to help healthcare professionals and organisations improve the care they provide.

An obvious but important message that underpinned discussions on all three days was the importance of sharing data. On the first morning, Simon Hodson of Jisc quoted Geoffrey Boulton of the Royal Society (who have made sharing data a condition of publication): “Publishing articles without making the data available is scientific malpractice.” This is an extreme but not uncommon view.

Rubriq is a new startup attempting to reduce inefficiencies in publishing by providing peer review independent of journals. While others, such as Faculty of 1000, offer this with post-publication reviews, Rubriq focuses on pre-submission review. Rather than replacing peer review completely, Rubriq hopes to provide editors with initial insight, allowing them to reduce time to first decision or use it as a filter (by setting a threshold for a minimum score needed to submit). Rubriq sees the R-Score (an overall score for the paper based on Quality of Research, Quality of Presentation, and Novelty and Interest) as a new article-level metric.
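The idea of combining the three rubric components into one score, and using it as a submission filter, can be sketched as follows. Rubriq’s actual formula and weights are not given in the post, so the equal weighting here is an assumption for illustration only.

```python
def r_score(quality_of_research, quality_of_presentation, novelty_and_interest,
            weights=(1 / 3, 1 / 3, 1 / 3)):
    """Combine the three rubric scores into one overall number.

    Equal weighting is an assumption; Rubriq's real formula is not public here.
    """
    components = (quality_of_research, quality_of_presentation, novelty_and_interest)
    return sum(w * s for w, s in zip(weights, components))

def passes_threshold(score, minimum):
    """The editorial filter described in the post: require a minimum score to submit."""
    return score >= minimum

overall = r_score(4, 3, 5)   # equally weighted mean of the three components
print(overall, passes_threshold(overall, 3.5))
```

The filter function is the interesting design point: a single scalar lets an editor set one cut-off instead of reasoning about three separate dimensions.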

Article-level metrics (or ALMs) were a hot topic at this week’s HighWire publisher meeting in Washington. (HighWire hosts both the BMJ and its stable of 42 specialist journals). From SAGE to eLife, publishers seem sold on the benefits of displaying additional context to articles, thereby enabling readers to assess their impact. These statistics range from traditional indicators, such as usage statistics and citations, to alternative values (or altmetrics) like mentions on Twitter and in the mainstream media.

So, what services are available to bring this information together in one simple interface? There are quite a few contenders in this area, including Plum Analytics, the PLoS Article-Level Metrics application, Science Card, CitedIn and ReaderMeter. One system in particular has received a good deal of attention in the past few weeks: ImpactStory, a relaunched version of total-impact. It’s a free, open-source webapp that’s been built with financial help from the Sloan Foundation (and others) “to help researchers uncover data-driven stories about their broader impacts”.

Publishers face authorship issues on a daily basis. Who should be listed as an author? How can an author role be appropriately acknowledged? How can we discern author responsibility? Linked to this is conflict-of-interest reporting: who needs to report what, and in what context?

A registry that will grant scientists a unique identifying number, helping readers of the literature to distinguish between authors with similar names, launched this week. ORCID, the Open Researcher and Contributor ID, seeks to remedy the systemic name ambiguity problems seen in scholarly research by assigning unique identifiers linkable to an individual’s research output.
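An ORCID iD is a 16-character identifier whose final character is a check digit computed with the ISO 7064 MOD 11-2 algorithm, which ORCID documents publicly. As a small illustration of how the identifier guards against transcription errors:

```python
def orcid_check_digit(base_digits: str) -> str:
    """Compute the final check character of an ORCID iD from its first
    15 digits, using the ISO 7064 MOD 11-2 algorithm ORCID documents."""
    total = 0
    for ch in base_digits:
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

# ORCID's published example iD is 0000-0002-1825-0097; feeding in the
# first 15 digits should reproduce the final check digit, 7.
print(orcid_check_digit("000000021825009"))  # -> 7
```

A mistyped digit almost always changes the check character, so a registry (or a reference manager) can reject the bad iD immediately instead of silently attributing work to the wrong researcher.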

The “publish or perish” model of the academic world has followed a similar pattern since the middle of the last century. It generally takes around seven years from the conception of an idea, to the publishing of a paper, to the point where a critical mass of citations is formally gathered around it.

“Clearly the world moves much, much faster than that now,” argues Andrea Michalek, co-founder of startup Plum Analytics, with researchers posting slides online about their work even before it’s published, and tweets mentioning those discussions and linking back to the content. “All this data exhaust is happening in advance of researchers getting those cited-by counts,” she says, and once a paper is published, the opportunities for online references to it grow.

A common goal of academics is to read all relevant publications within a particular field of expertise. Locating these materials is a challenge (to say the least), and the task is becoming ever more difficult as the number of papers published grows year on year.

Earlier this month, changes were made to Google Scholar to encourage the serendipitous discovery of new research during a scholar’s routine activity. The new service, Scholar Updates, conducts a search on the author’s behalf and provides a list of recommended publications. It builds upon existing research alerts offered by Google Scholar, similar in nature to those of ISI Web of Science and other academic databases.

In response to static, neglected lab websites that have become the norm, a Princeton scientist (Ethan O. Perlstein) has personally invested in the design of a site that will inspire fellow academics to openly share their research. In addition, with his academic appointment coming to an end, http://perlsteinlab.com/ is a great way to establish a personal brand.