Bruce Caron
Website: http://nmri.org
Bruce is the founder and current executive director of the New Media Studio and the New Media Research Institute in Santa Barbara.

Abstract

The real sticking points preventing scientific communication from taking full advantage of digital distribution are the following: 1) top-ranked journals have cornered the reputation economy in terms of impact on tenure (they are a virtual whuffieopoly: for the term “whuffie” see Hunt and Doctorow); 2) these same journals remain locked into the 20th-century (with resemblances to prior centuries) print-based publishing model, built on blind peer review and shaped by the scarcity of space in any printed journal. The task, then, is to release them from their print-based constraints, while rewarding and supporting them to continue as a high-end filter for quality science, and then to transition their whuffie-abilities to a form better suited to the rapid digital dissemination of scientific outcomes. The academy needs great filters to help guide readers to the best science among hundreds of thousands of new papers every year. Universities need fair and broad feedback from the academic community to decide which faculty deserve promotion. The research community needs to accelerate publication speed and minimize editorial overhead. And the public needs markers that help it distinguish good science from the rest. Open-access content is the first step. The next step might need some badges.

As the open-access revolution makes steady inroads into scientific publishing, its goal of universal open access for science is both revolutionary and evolutionary. Open-access content is the first, underpinning notion in the process that is now pushing scientific publication ever so slowly into a Web 2.0 world. The status quo for science publication (described by a speaker in a moment of pique at the PLoS Forum as “sclerotic, idiotic, and dysfunctional”) remains firmly attached to practices from its long past. Open-access content is the catalyst for the next logical step: post-publication peer review. Using digital badges to mark top-quality scientific results carries the best parts of the old system into the digital future.

In a 2010 blog post for the Society for Scholarly Publishing, Why Hasn’t Scientific Publishing Been Disrupted Already?, Michael Clarke wrote, “When Tim Berners-Lee created the Web in 1991, it was with the aim of better facilitating scientific communication and the dissemination of scientific research. Put another way, the Web was designed to disrupt scientific publishing. It was not designed to disrupt bookstores, telecommunications, matchmaking services, newspapers, pornography, stock trading, music distribution, or a great many other industries…. And yet it has [emphasis in the original].” His point is that scientific communication should have been the first arena for radical change fostered by the new opportunities afforded by the Internet, not the last. His conclusion on why scientific communication has not yet changed at its core, despite the efforts of the open-access publishing (and open-science and open-data) movements, is that publishing has become embedded in career advancement based on the older model (dating back originally to 1680); nobody cares to be the first martyr to a new system, while the old system, however inefficient and expensive it might be, still gets one tenure. In short, the reputation of the scholar is carried on the shoulders of the reputations of the journals that publish her works. Journals remain the standards—heraldic gonfalons—that authors can fly for their deans.

Although it does leverage a key advantage of digital distribution, open access is not the main driver of change in the academy’s finely tuned reputation systems. Nothing in the tenure process prevents the top journals from switching over to an open-access system. In fact, top universities are now requiring open access for their faculty’s publications. Indeed, even the Royal Society is now offering some form of open access, although its 2008 statement warns against the uncertainties that new business models might create for researchers and for society: “Careful forethought, informed by proper investigation of the costs and benefits, is required before introducing new models that amount to the biggest change in the way that knowledge is exchanged since the invention of the peer-reviewed scientific journal 340 years ago. Otherwise the exchange of knowledge could be severely disrupted, and researchers and wider society will suffer the resulting consequences.” In other words, change is all right as long as it doesn’t touch our bottom line.

Post-publication peer review: it’s in your future

The next fork in the road is the one that leads to a path less taken, at least so far. This is the path to post-publication peer review. The idea is to immediately make public (i.e., publish on the Internet) any paper that meets some agreed-upon threshold of being scientific. After publication, an open (not yet fully determined) peer-review process would mark the best science outcomes from among these papers. This notion follows on the remarkable success of the arXiv preprint service in physics. A good part of the recent PLoS Forum discussion centered on the practical advantages of rapidly making public all scientific findings, and then using peer review after publication to mark those of real value. Tremendous savings of time, editorial resources, and other expenses would result. The pace of science communication would accelerate noticeably.

But, alas, nobody would ever again get tenure. That is, of course, unless or until the same publishers that wield their status to attract the top science outcomes can be convinced to do this, not in their current role of gatekeeper, but in a new role: that of validating great science through a post-publication imprimatur.

Instead of the old paper-journal gonfalons, digital badges would be offered to reward scientists for publishing great science, regardless of where it is hosted on the Internet. These might be badges offered by top journals, where the badge might also require some additional reworking of the published material, determined by a conversation between the author and the “journal.” Of course, the badge is also a tag, and readers can instantly search for content that has been flagged with a journal’s badge. All of the so-honored papers could also be collected into bound publications (preserving, at least for a while, the notion of a paper journal). The very same reputation that the journal once provided when it accepted only a small percentage of submitted material for publication can now be offered to a similarly small percentage of already-published material. Rewarding great science after the science is already public is clearly a win-win operation for scientists and the public, and could be very much a win for journals as well, if done with care.

In some ways, the current pre-publication, blind peer review by a couple of volunteer scholars is akin to attempting to choose who is a hero in an army before the battle (or the Oscar before the cameras roll), based on judgements about individual character and potential. Only a few soldiers will deserve the medal, but it surely makes better sense to award these after the fighting is done. When a new scientific paper is posted into an Internet collection, it is available for a range of community comments. Hundreds of scientists with similar interests can immediately weigh in on the paper’s value. Subsequent awarding of the badge from one (or more) journals and/or professional organizations and/or universities would further validate the paper’s import. Winning a badge from, say, Nature, Science, or Cell could still mark a paper as stellar. And the badge would still fly high in the face of a tenure committee.

Consumer safety groups and other recommender organizations have long used badges of approval to mark everything from toasters, to movies and plays, to vacation resorts. Indicators of quality are well received in many circumstances. In scholarly, open-access publishing, the Connexions Project at Rice University already uses badges as part of its “lens” system, in which editorial members pick high-quality content and gather it into their lens. The application of quality markers after publication is more practical, more useful, and less prone to error or to the suspicion of bias.

Similar badges could be flown by authors to represent their current university, the school where they received their PhD, or a professional organization of which they are a member. And each of these organizations and schools can also assemble the content of their members into a corpus of honor, and bestow further honors (by some selection method) on the best papers among these. Other badges might be bestowed by a variety of interested editorial collectives, from graduate-student societies to professional organizations. A proud proliferation of badges attached to an openly published paper would show convincingly how the larger community of readers has reviewed and approved the scientific value of the work. Here we have peer review that is far more robust, potentially more fair, and performed at virtually no expense to the Internet host.

For the sake of this argument, let us suppose that one or more of the larger hosting organizations (Google, Internet Archive, Microsoft, Amazon, DataGrid, etc.) opened up a service to host scientific papers. The Google Knol system is a good example. These hosts would accept badges as tags on the digital paper-objects. The awarding of the badges would be decided by the journals and professional organizations, based on their own internal systems of review. Freed from the task of actually publishing the content, the journals and organizations can focus entirely on the process of selecting from the already available science content those few papers that deserve their badge.
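The badge-as-tag mechanism described above can be sketched in a few lines. This is a minimal illustration, assuming a host that simply stores badge tags on paper records while issuers run their own review; the class and function names are hypothetical, not any real host's API.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Badge:
    issuer: str   # the journal or professional society awarding the badge
    label: str    # e.g. "Editors' Selection"

@dataclass
class Paper:
    title: str
    url: str
    badges: set = field(default_factory=set)

def award_badge(paper: Paper, badge: Badge) -> None:
    """The host only records the tag; the issuer's own internal
    review process decides which papers receive it."""
    paper.badges.add(badge)

def search_by_badge(papers, issuer: str):
    """Readers filter the open corpus for papers flagged with a given issuer's badge."""
    return [p for p in papers if any(b.issuer == issuer for b in p.badges)]
```

Because the badge lives on the paper record rather than in a journal's own archive, any number of issuers can tag the same paper, and readers can search across all of them at once.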

Those who would speculate on the inevitability of clever scientists gaming the new badge system for their own advantage might want to reflect on the current system, where every judgement is performed anonymously, and each paper that is declined for publication creates an author who suspects that the system is not fair. And it may not be fair, but it is the only system available at the moment. Print journals must be highly selective because of very practical constraints on their size. Most papers must be rejected, often for reasons that have little to do with their intrinsic merit.

Which takes us back to the journal. Whether this is the journal of a professional society or a commercial journal, there must be some market in play for it to buy into this badge economy. A simple pay-for-badge approach is obviously wrong-headed. Unlike open-access publication, where the first-copy costs are well known, putting a price on a journal’s reputation only tarnishes it. This means that some other economic model needs to be found to a) reward those organizations that have built up their reputations over the course of decades and centuries, and b) open up new reputation arenas where all badges compete for new reputation building.

One way to do the former is to follow the model of “catch shares” in fisheries. What this would mean is that a one-time fund (potentially created by national science agencies, perhaps in partnership with leading foundations) would be distributed among all current science publishers in shares determined by their status in the current marketplace (e.g., by their impact factor and other measures). Publishers would access their share of the fund by agreeing to participate in this new post-publication peer-review system. The shares would be pro-rated over several years, and would support their continuing editorial flagging of top-rated science content. This fund would recognize the value of these journals as filters for top-quality science, a service that all scientists can appreciate and use. Over the years, these journals will need to adjust their business plans to take full advantage of their position as science-information validators. Some funds would be reserved for new efforts to build badges of quality, spreading the opportunity window. In this way, post-publication peer review builds an editorial economy where reputation is reflected by consumption.
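The arithmetic of the catch-shares idea is simple enough to sketch. This is a toy model under stated assumptions: standing is a single number per publisher (an impact-factor-like weight), the fund is split proportionally, and payouts are pro-rated evenly over a fixed horizon. None of these choices come from an actual proposal.

```python
def allocate_catch_shares(fund: float, standing: dict, years: int = 5) -> dict:
    """Split a one-time transition fund among publishers in proportion to
    their current marketplace standing, pro-rated over several years.
    Both the weighting and the horizon are illustrative assumptions."""
    total = sum(standing.values())
    return {
        publisher: [round(fund * weight / total / years, 2)] * years
        for publisher, weight in standing.items()
    }
```

A publisher with three times the standing of another would draw three times the annual payment, for as long as it keeps flagging top-rated content under the new system.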

There are probably several other economic models that would help guide the current peer-review system into a new post-publication peer-acclaim system. This author confesses to no economic sagacity. He sold Apple at $25 (well, at least he did buy it at $14). The point here is that if there are models that can create a sustainable halibut fishery and cap-and-trade CO2, there should be a model that can port the reputations of scientific journals onto a system of badges for post-publication peer review: a model that would bridge between the current system and what must, in all fairness, emerge to bring the promise of digital publishing into the service of the academy.

A couple of remaining issues keep this blog from ending right here. One is the other side of the badge; the other is replicating the quality-adding features of the current editorial system.

The other side of the badge is the need not to reward great science but to protect the academy and the public from poor science. What is the scientific equivalent of the “Good Housekeeping Seal of Approval”? If all papers that, on some level, appear to be science are now available, who might be called upon to clear the brush of poor science from the fields of acceptable (if not spectacular) science? Fortunately, the academy has professional organizations for virtually every discipline and sub-discipline that are well suited to offer their badges of approval to mark the outer boundary of what their communities consider to be good science. Today, poor science is readily available on websites and blogs across the Internet, so there is no protection for the public. Tomorrow, seals of approval from academic organizations will help to mark science worth reading.

The remaining question is this: who will trim the twisted prose and copy-edit all of these self-published scientific papers? Once journals no longer provide these services, they will need to be crowd-sourced to those who read the papers. Moderated editing will allow readers to polish the copy while the author maintains control of the narrative.

13 Comments

How do we get from here to there? — You have made a compelling case for where we would like to be. The question that needs to be answered is: how do we get from here to there? How do we overcome the vested interests in the status quo?

Untitled — Bruce, It is a great idea: publish first to an online pool on any platform, and then get edited and selected (badges) by a pool of selectors: journals, universities, prizes, newspapers, popular magazines, libraries, professional societies, etc. Can it be achieved, or, as Norman says, how do we get there? The hold on power and influence of top journals, top universities, English, and the science establishment has proven very difficult to shake, in spite of the success of open journals. Great knol

Untitled — Thanks for the comments! I think the open-access movement is the wedge here (and the stick). As more state agencies and universities move to require open access (and support immediate open access), a tension is created where a “great science filter” fund (the carrot) to push journals to start offering badges to the best science papers might push the needle over toward the new system. Some people I’ve spoken with resent the idea of paying (off) the current journals. My take is that they have built a valuable service, and they deserve a founding share in the next, digital, version. Hanging on to their share is then their business model.

Untitled — The citation economy is already in the works. Tenure decisions (at UC Santa Barbara, at least) are measuring this metric. So this is a complementary locale in the larger economy. Good point!

Untitled — Bruce and Norman, Everyone wants new metrics to replace the commercial “citation index” and “impact factor,” and alternatives like page views, printouts, email attachments, comments, star rankings, and reviews are some of the factors that could lead to an alternative index if combined with Google Analytics data and PageRank. This index must be validated in diverse subjects and fields (arts, social sciences, ecology, maths, music, biology, science, medicine) for acceptance and routine use. With the prices of journal subscriptions going up, even top university libraries in Europe and the US are finding it difficult to renew subscriptions. PLoS journals now routinely include real-time metrics for all journal articles as well as for the journals, including page views, printouts, downloads, bookmarks, links, and citations. As with knols, very few readers give star ratings or leave comments or reviews. Here is a recent paper which is very similar to good knols: http://www.plosone.org/article/metrics/info:doi/10.1371/journal.pone.0009948
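[Editor's note: a composite index of the kind the comment above describes might look like the following sketch. The weights and signal names are illustrative guesses, not a validated index of any publisher.]

```python
def article_index(metrics: dict, weights: dict) -> float:
    """Fold several usage signals (page views, downloads, bookmarks,
    citations, star ratings) into one score via a weighted sum.
    Signals with no assigned weight contribute nothing."""
    return sum(weights.get(name, 0.0) * value for name, value in metrics.items())
```

Validating such an index across fields would then mean checking that the chosen weights rank articles sensibly in each discipline, since citation and download habits differ widely between, say, maths and medicine.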

Untitled — Krishnan, I’ve had talks with PLoS, Nature Networks, and the Smithsonian as to why they don’t get users to rate and comment on their sites. I would say this shows that you cannot just add a bit of Web 2.0 to your site and expect people to act like it’s their own Facebook profile page. The real future for crowdsourcing peer review will happen when academic societies and universities and other places embed their communications into fully socially networked environments, where they allow the users to have a fair amount of active say in the network’s governance. This follows Clay Shirky’s notion of setting up the right bargain between the network and its members.

role of google knol — I’d like to see journals pop up here at Google Knol, or else on some other site that has a more open review process. I’ve seen the influenza journal and a couple of other journals here at Knol, and of course all the PLoS and related open journals elsewhere (many use OJS, Open Journal Systems, for hosting). I’m not seeing any journals with post-publication or at least public reviews/commenting outside of the hard sciences and medicine yet, though (correct me if there are). My field is education, for example. I explored a bit how to start a journal here and the workflow involved; this is a ‘fake’ journal test: http://knol.google.com/k/doug-holton/journal-test/15pkhk5yia4d5/3# I would like to see Google knols embedded more into “fully socially networked environments” like you mentioned, though, for marketing and communication and so forth (Twitter, Facebook, Google Wave/Buzz, blog mentions, etc.), in addition of course to Google Scholar. Google knols apparently aren’t even listed in Google Scholar, just a handful of citations.

Untitled — Thanks Doug. Knol is a great platform for collaborations and new collections. The PLoS team once told me it takes only about 120 scholars (in an intellectual community of practice) to create a new journal. Doing this on the Knol platform makes a lot of sense.

fantastic post – but do we still need the Journals? — Bruce, Great post. I could see the badges being a great system, but as you mentioned, getting prestigious journals on board would be akin to asking them to commit suicide. Perhaps they could be forced to by funding agencies, but keep in mind that the editorial committees of prestigious journals are composed of the same people who control the funding. I just wrote something along these lines at http://marciovm.com/i-want-a-github-of-science. I got 40k views from 108 countries within 48 hours of posting, which shows the power of blogs and social news aggregators (the post went to #1 on Hacker News). That post looks at the tools software engineers use to distribute open-source code (GitHub in particular) and concludes that open science should learn from them. A key finding is that journals need not be involved; instead one can rely on the credibility and influence of individual scientists, which lets one chip away at the system instead of challenging it head on (the same way Facebook is replacing TV). -Marcio

Untitled — Marcio, Thanks for the comment. We are, IMO, moving away from a time when a few journals are at the center of the research-publication recognition space. I was looking for a way to segue gracefully to a new recognition space by giving them a role during the transition. Not sure we need to do so. You’re right: I could see a GitHub-style replacement eroding their influence without them being on board at all. I really like your GitHub blog, and I’ve tweeted it along. Congrats on its success! What I really like about your GitHub notion is a way to attach new research results to a branch of existing work without having to write all this down again and again (80% of the content of most articles is a description of where the research fits into the larger picture). So we could see more microarticles attaching to existing branches based on shared protocols or shared data, etc.