The Selected Papers Network (Part 1)

Christopher Lee has developed some new software called the Selected Papers Network. I want to explain that and invite you all to try using it! But first, in this article, I want to review the problems it’s trying to address.

There are lots of problems with scholarly publishing, and of course even more with academia as a whole. But I think Chris and I are focused on two: expensive journals, and ineffective peer review.

Expensive Journals

Our current method of publication has some big problems. For one thing, the academic community has allowed middlemen to take over the process of publication. We, the academic community, do most of the really tricky work. In particular, we write the papers and referee them. But they, the publishers, get almost all the money, and charge our libraries for it—more and more, thanks to their monopoly power. It’s an amazing business model:

Get smart people to work for free, then sell what they make back to them at high prices.

People outside academia have trouble understanding how this continues! To understand it, we need to think about what scholarly publishing and libraries actually achieve. In short:

1. Distribution. The results of scholarly work get distributed in publicly accessible form.

2. Archiving. The results, once distributed, are safely preserved.

3. Selection. The quality of the results is assessed, e.g. by refereeing.

4. Endorsement. The quality of the results is made known, giving the scholars the prestige they need to get jobs and promotions.

Thanks to the internet, jobs 1 and 2 have become much easier. Anyone can put anything on a website, and work can be safely preserved at sites like the arXiv and PubMed Central. All this is either cheap or already supported by government funds. We don’t need journals for this.

The journals still do jobs 3 and 4. These are the jobs that academia still needs to find new ways to do, to bring down the price of journals or make them entirely obsolete.

The big commercial publishers like to emphasize how they do job 3: selection. The editors contact the referees, remind them to deliver their referee reports, and communicate these reports to the authors, while maintaining the anonymity of the referees. This takes work.

However, this work can be done much more cheaply than you’d think from the prices of journals run by the big commercial publishers. We know this from the existence of good journals that charge much less. And we know it from the shockingly high profit margins of the big publishers, particularly Elsevier.

It’s clear that the big commercial publishers are using their monopoly power to charge outrageous prices for their products. Why do they continue to get away with this? Why don’t academics rebel and publish in cheaper journals?

One reason is a broken feedback loop. The academics don’t pay for journals out of their own pocket. Instead, their university library pays for the journals. Rising journal costs do hurt the academics: money goes into paying for journals that could be spent in other ways. But most of them don’t notice this.

The other reason is item 4: endorsement. This is the part of academic publishing that outsiders don’t understand. Academics want to get jobs and promotions. To do this, we need to prove that we’re ‘good’. But academia is so specialized that our colleagues are unable to tell how good our papers are. Not by actually reading them, anyway! So, they try to tell by indirect methods—and a very important one is the prestige of the journals we publish in.

The big commercial publishers have bought most of the prestigious journals. We can start new journals, and some of us are already doing that, but it takes time for these journals to become prestigious. In the meantime, most scholars prefer to publish in prestigious journals owned by the big publishers, even if this slowly drives their own libraries bankrupt. This is not because these scholars are dumb. It’s because a successful career in academia requires the constant accumulation of prestige.

The Elsevier boycott shows that more and more academics understand this trap and hate it. But hating a trap is not enough to escape the trap.

Boycotting Elsevier and other monopolistic publishers is a good thing. The arXiv and PubMed Central are good things, because they show that we can solve the distribution and archiving problems without the help of big commercial publishers. But we need to develop methods of scholarly publishing that solve the selection and endorsement problems in ways that can’t be captured by the big commercial publishers.

I emphasize ‘can’t be captured’, because these publishers won’t go down without a fight. Anything that works well, they will try to buy—and then they will try to extract a stream of revenue from it.

Ineffective Peer Review

While I am mostly concerned with how the big commercial publishers are driving libraries bankrupt, my friend Christopher Lee is more concerned with the failures of the current peer review system. He does a lot of innovative work on bioinformatics and genomics. This gives him a different perspective than me. So, let me just quote the list of problems from this paper:

• Expert peer review (EPR) does not work for interdisciplinary peer review (IDPR). EPR means the assumption that the reviewer is expert in all aspects of the paper, and thus can evaluate both its impact and validity, and can evaluate the paper prior to obtaining answers from the authors or other referees. IDPR means the situation where at least one part of the paper lies outside the reviewer’s expertise. Since journals universally assume EPR, this creates artificially high barriers to innovative papers that combine two fields [Lee, 2006]—one of the most valuable sources of new discoveries.

• Shoot first and ask questions later means the reviewer is expected to state a REJECT/ACCEPT position before getting answers from the authors or other referees on questions that lie outside the reviewer’s expertise.

• No synthesis: if review of a paper requires synthesis—combining the different expertise of the authors and reviewers in order to determine what assumptions and criteria are valid for evaluating it—both of the previous assumptions can fail badly [Lee, 2006].

• Journals provide no tools for finding the right audience for an innovative paper. A paper that introduces a new combination of fields or ideas has an audience search problem: it must search multiple fields for people who can appreciate that new combination. Whereas a journal is like a TV channel (a large, pre-defined audience for a standard topic), such a paper needs something more like Google—a way of quickly searching multiple audiences to find the subset of people who can understand its value.

• Each paper’s impact is pre-determined rather than post-evaluated: By ‘pre-determination’ I mean that both its impact metric (which for most purposes is simply the title of the journal it was published in) and its actual readership are locked in (by the referees’ decision to publish it in a given journal) before any readers are allowed to see it. By ‘post-evaluation’ I mean that impact should simply be measured by the research community’s long-term response and evaluation of it.

• Non-expert PUSH means that a pre-determination decision is made by someone outside the paper’s actual audience, i.e., the reviewer would not ordinarily choose to read it, because it does not seem to contribute sufficiently to his personal research interests. Such a reviewer is forced to guess whether (and how much) the paper will interest other audiences that lie outside his personal interests and expertise. Unfortunately, people are not good at making such guesses; history is littered with examples of rejected papers and grants that later turned out to be of great interest to many researchers. The highly specialized character of scientific research, and the rapid emergence of new subfields, make this a big problem.

In addition to such false negatives, non-expert PUSH also causes a huge false-positive problem, i.e., reviewers accept many papers that do not personally interest them and which turn out not to interest anybody; a large fraction of published papers subsequently receive zero or only one citation (even including self-citations [Adler et al., 2008]). Note that non-expert PUSH will occur by default unless reviewers are instructed to refuse to review anything that is not of compelling interest for their own work. Unfortunately, journals assert the opposite policy.

• One man, one nuke means the standard practice in which a single negative review equals REJECT. Whereas post-evaluation measures a paper’s value over the whole research community (‘one man, one vote’), standard peer review enforces conformity: if one referee does not understand or like it, prevent everyone from seeing it.

• PUSH makes refereeing a political minefield: consider the contrast between a conference (where researchers publicly speak up to ask challenging questions or to criticize) vs. journal peer review (where it is reckoned necessary to hide their identities in a ‘referee protection program’). The problem is that each referee is given artificial power over what other people can like—he can either confer a large value on the paper (by giving it the imprimatur and readership of the journal) or consign it zero value (by preventing those readers from seeing it). This artificial power warps many aspects of the review process; even the ‘solution’ to this problem—shrouding the referees in secrecy—causes many pathologies. Fundamentally, current peer review treats the reviewer not as a peer but as one who wields a diktat: prosecutor, jury, and executioner all rolled into one.

• Restart at zero means each journal conducts a completely separate review process of a paper, multiplying the costs (in time and effort) for publishing it in proportion to the number of journals it must be submitted to. Note that this particularly impedes innovative papers, which tend to aim for higher-profile journals, and are more likely to suffer from referees’ IDPR errors. When the time cost for publishing such work exceeds by several fold the time required to do the work, it becomes more cost-effective to simply abandon that effort, and switch to a ‘standard’ research topic where repetition of a pattern in many papers has established a clear template for a publishable unit (i.e., a widely agreed checklist of criteria for a paper to be accepted).

• The reviews are thrown away: after all the work invested in obtaining reviews, no readers are permitted to see them. Important concerns and contributions are thus denied to the research community, and the referees receive no credit for the vital contribution they have made to validating the paper.

In summary, current peer review is designed to work for large, well-established fields, i.e., where you can easily find a journal with a high probability that every one of your reviewers will be in your paper’s target audience and will be expert in all aspects of your paper. Unfortunately, this is just not the case for a large fraction of researchers, due to the high level of specialization in science, the rapid emergence of new subfields, and the high value of boundary-crossing research (e.g., bioinformatics, which intersects biology, computer science, and math).

Toward solutions

Next time I’ll talk about the software Christopher Lee has set up. But if you want to get a rough sense of how it works, read the section of Christopher Lee’s paper called The Proposal in Brief.

30 Responses to The Selected Papers Network (Part 1)

You forgot Editing. The journals actually do a nice job of making sure that papers meet a (low, admittedly) baseline of readability. One important factor is that at a journal, at least one person with good English-language skills will read and edit the paper. Another important factor is consistency of intra-paper references and citations, coherent systems for notation, etc.

It’s true, editing is another service the journals provide. A lot of people I know, who read most of their math papers on the arXiv, come to believe that editing isn’t really worth the price we pay to the big commercial publishers. It can be done for a more reasonable price than those journals charge, that’s for sure.

I agree with J. Baez. Editing cannot cost that much. And this can also be solved by rejecting the paper until readability and consistency are achieved by the authors. I know this requires revision, but revision is cheaper than editing. If the authors don’t know how to write properly in English, they can always hire a translator.

I think there are many ideas to explore here. For instance, the idea of cleaning up working papers. This is already happening, in a way, on blogs where people point out mistakes. Also, instead of using the “prestige of journals”, one possibility might be a kind of startup recommendation. At the beginning a paper does not have citations, but an anonymous review by “randomly” chosen referees could give a startup score to the paper, and that might do the job. All this could easily be centralized into something similar to the arXiv and mirrored everywhere.

In my experience journals often do a very bad job of editing. I’ve had papers in which journals have made the pictures look terrible, and introduced errors in the text, which have required a lot of work to fix.

I’m sure they do this with the best of intentions. But I cannot believe this is a better system than just spelling out the conventions to the author, and making them do the work to fix things.

Those conventions should include good use of English (or whatever language the journal is in) — if the author can’t provide that, it’s reasonable to expect them to pay themselves for professional assistance (a service many universities provide), or to get help from a colleague.

If I may add to the complaints against journals: to ‘outsiders’ at least, many journals seem to operate like clubs to a large extent. This becomes apparent for interdisciplinary articles, or when someone tries to publish in a field in which he or she is unknown to the experts. In such cases the problem lies not only with the ‘peer reviewers’ or even the reviewing system, as noted in the article by Christopher Lee. Rather, it may be a difference in style of presentation, or formulation of the problem, or even the approach to the solution, that is completely alien to the established researchers in the field.

In the present system, it is possible to break into the club (or publish a paper which suffers from these drawbacks) only if some reviewer chooses to suspend objections and work through the paper (and then finds it worth publishing). It seems that in Dr. Lee’s proposal, in which reviewing appears to be mostly voluntary or even pro-active, this is less likely to happen than in the present system.

This is not so much of a complaint against the new proposal as it is a worry.

Although I am not an academic I have watched with interest how the big publishers are reacting to the resistance they are meeting. The sort of money the big publishers have been pulling out of academic publishing is not consistent with their status as a mature industry. The middleman margins they get are usually only seen in my commercial experience in immature markets.

So what to do? It is a bit like Martin Luther saying to those fed up with the Catholic Church: “You don’t need a middleman to talk to God.” He had the alternative ready to go. Here, there is no fully viable alternative that meets the requirements of academic progression.

It is harder in the academic world to get change because publication in “prestigious” journals is embedded in academic remuneration and advancement models.

Longer term, this is what I think might happen. Where I live (Australia) the universities are under funding pressure, and it is dawning on them that massive online courses essentially disrupt the whole economics of traditional bricks-and-mortar universities. It only takes a large employer or two to say that successful completion of relevant online courses is sufficient to get you a job, and the universities essentially become disintermediated. In such a world they will simply not be able to afford the outrageous prices and bundling practices being served up to them.

If that economic reality arrives, the economics of big publishing change fundamentally, just as they do in any other market faced with demand problems. We are not quite there yet, but journalism (a classic middleman role) has collapsed as a profession in the last 10 years due to the impact the internet has had on advertising revenues. You will see discounting of prices initially, and lowering of standards to get volume, etc., but I can conceive of a world in which a good communicator can command the sort of ground that a good entertainer does, irrespective of their publication status. I’m not sure how hard-core research fares in that sort of world—maybe the answer is a simple one: the big publishers truly become a mature industry and live with a 5% profit margin.

Great post. Looking forward to “Next time I’ll talk about solutions”. Is there any kind of public place where people could post papers, and where competent people could do public reviews, almost like this kind of blog?

We’re not waiting for anything! As I attempted to say in the article, Christopher Lee has already set up the software, a preliminary version is ready now, and next time I’ll explain it and invite you all to try using it.

I see I didn’t say that very clearly! I will fix my post to make it clearer.

The “endorsement” (and “impact-guessing”) point causes one more problem. At least outside of mathematics, a lot of papers I see (and almost all in highly prestigious journals like Science and Nature) totally lack any expression of scientific doubt, or any honest claims about the limits of the method (even when these are already known to the authors).

Instead, the introduction part is more like an advertisement part.

Sure, I appreciate far-reaching results, and even dreamy intuitions about the possible applications of results. However, the lack of balance between the positive side and any caveats is, in my opinion, on the verge of scientific dishonesty. Moreover, sometimes it is even hard to make sense of what was _actually_ proven by the paper.

For the same reason, most of the time I enjoy smaller conferences and workshops, where people get more into technical details, rather than the marketing approach (of selling the results to the scientific audience and funding bodies).

A quote from Évariste Galois:
“Unfortunately what is little recognized is that the most worthwhile scientific books are those in which the author clearly indicates what he does not know; for an author most hurts his readers by concealing his difficulties.”

@Piotr: I totally agree with you. The whole point of the selected papers network idea is to return scientific communication to its roots: people would show their colleagues a letter they got from a friend describing a surprising result, or ask the friend to talk about it at the next club meeting. No marketing BS, just informal communication among people with common interests. That was a “selected papers network” by word of mouth, and it served science admirably for a long time. The replacement of informal communication by formal (bureaucratized) communication has spawned endless pathologies (such as the ones you describe) because its transport (mass media, aka the printing press) was fundamentally mismatched to the highly specialized character of scientific research.

RE: your concerns, one basic goal of SPN is to publish the peer reviews, so everyone can see all sides of the question…

Regarding editing, I would just like to add that, aside from the language part, anyone with a reasonably decent knowledge of LaTeX can format their paper to look presentable. I don’t think it is too much to ask for people in the sciences to learn LaTeX (if they don’t already know it).

“This is the part of academic publishing that outsiders don’t understand. Academics want to get jobs and promotions. To do this, we need to prove that we’re ‘good’. But academia is so specialized that our colleagues are unable to tell how good our papers are. Not by actually reading them, anyway! So, they try to tell by indirect methods—and a very important one is the prestige of the journals we publish in.”

I, as an academic myself, do not think that people are unable to tell how good someone’s papers are. I think that most academics, irrespective of their scientific qualifications, are opportunists. They are not willing to make their own judgements! Instead, they rely on paper counts and impact factors. Counting means you do not have to think at all, you do not have to take a position, and you do not take responsibility for your actions, because someone else is doing the job of judgement for you; it is simply the easiest way, and it saves time (not to forget the illusion of objectivity). In the end nearly everyone hates the established system, and nearly everyone does nothing to change it. Because, when it comes to oneself, all the nonsense of impact factors etc. becomes important again, and people start to behave opportunistically again, despite their better knowledge.

I’m afraid I agree with John here when he says “academia is so specialized that our colleagues are unable to tell how good our papers are. Not by actually reading them, anyway!”. There have been times that I really want a certain result outside my area of expertise, and find the result in some preprint or abstract, but find it immensely difficult to decide on my own whether the result is solid and watertight (despite the fact that I’d really like to be able to decide on my own). It’s sometimes awkward asking an expert to get his/her opinion. In that situation, I’ll look at any evidence I can get my hands on. Here, publication in a reputable journal goes a *huge* distance in assuaging any doubts, although of course the ideal circumstance would have been my being personally convinced by deeply understanding the mathematics.

Besides whether papers are correct (in mathematics) or well-argued (in other fields), there’s also the more elusive question of whether they’re important. For example, when we were choosing between algebraic geometers for a job at U.C. Riverside, I found that one had written a paper like this:

This paper studies the asymptotic behavior of the syzygies of a smooth projective variety X as the positivity of the embedding line bundle grows. We prove that as least as far as grading is concerned, the minimal resolution of the ideal of X has a surprisingly uniform asymptotic shape: roughly speaking, generators eventually appear in almost all degrees permitted by Castelnuovo-Mumford regularity. This suggests in particular that a widely-accepted intuition derived from the case of curves — namely that syzygies become simpler as the degree of the embedding increases — may have been misleading.

Who among us can say whether this is a really important result or not? Rather few, I’d say. I understand what all the terms mean except ‘Castelnuovo-Mumford regularity’. But I’m in no position to tell whether this result is important. I’m willing to assume it’s correct, but that’s different.

(In case anyone is wondering, we wound up offering him the job and he didn’t accept it.)

The capitalist system itself is a problem as well. It always reduces cost to the bare minimum. So reviewers are overloaded with papers to handle within their already busy schedules, leading to decisions based on abstracts, or first paragraphs, or the title alone. As on LinkedIn, comments are made on topics without even reading what is being commented on. Decisions are made on a general feeling about the subject matter. How would a Krell of “Forbidden Planet” write a paper that a so-called expert of Earth could even understand? Experts are like the clergy that suppressed Galileo and Copernicus.

A very preliminary version of the software is already up and running, and I’ll explain it and point everyone to it in a few days. If it catches on and grows, it will require help—not just funding, but people with various skills—to improve it. Since the arXiv and PubMed Central are managing quite nicely, I’m not too worried about the ability of citizens, academic institutions and governments to fund this if it turns out to be something people like—especially since it could ultimately save all these players huge amounts of money.

John, I clicked on the link for The Proposal in Brief, and was sent to a bibliography for Lee’s paper Open peer review by a selected-papers network. I didn’t see The Proposal in Brief as a title for any reference under Chris Lee’s name. Did I overlook something?

Science, in a nutshell, is about finding flaws – finding flaws in your methodology, your logic, your techniques, and so on. One way to find flaws is through peer review. By “peer”, we mean people who are as knowledgeable as yourself regarding the topic(s) discussed in your published research. This review of your work by your peers is a very effective way to find flaws in your research, but what we call peer review today is seriously flawed in its implementation. The process (also in a nutshell) goes like this:

1) Submit your work to a small group of people who publish a widely read magazine

2) This small group of people decide if your paper is worth publishing or not

There are many problems with this, the foremost being that this is not an example of peer review; it is an example of review by a committee. Peer review means that flaws in the published work should be published too. I cannot read a paper on a topic that I am not an expert in and then know with complete certainty whether the paper is flawed or not. If that paper is accompanied by a true-to-life peer review, then I can make that determination based on that additional criticism.

Furthermore, there is no way to ensure that the small group of arbitrary people who do the limited “reviewing” will be or will remain unbiased by either money or politics. Both are very heavily involved in this process today. Politics is especially troublesome because it results in the filtering of dissenting or unpopular ideas, especially when that politics is fueled by religious agendas.

Another problem is that there is no way for you to know what ideas or works were rejected or not. If I am using someone else’s paper as a reference and it is flawed, that makes my work flawed, and you cannot pursue the truth based on lies.

Science is not a democracy; you don’t vote for what is fact or not – that is flawed reasoning. The political entity, the IPCC, is very guilty of this nonsense, ultimately determining where resources are allowed to be spent doing research. As history has proven time and time again, politics and science do not work well together, yet here we are, repeating history all over again.

The process of peer review needs to be completely in the hands of peers, not bean counters or CEOs or politicians. The only other thing it would need is to be organized and refereed by mature adults with logical thinking abilities.

[…] academics for their own research through their libraries. Just look up articles on the subject by John Baez or Timothy Gowers to see how true this is (the linked posts are just the most recent of many and I […]
