A more modest proposal

In my previous post I suggested a way in which an online system of submitting and commenting on papers might perhaps work better than our current system of journals, editors and anonymous referees. I am very grateful to all who commented, both positively and (more often) negatively. It has given me a lot to think about. One thing that I wasn’t expecting, but should have expected, was that a number of people just plain don’t like the idea of an online alternative, regardless of the rational arguments. I don’t mean that there aren’t arguments to back up the dislike — merely, that I think that there is a dislike there, which becomes an argument in itself, since if many people have an emotional reaction against a new system, then that makes it less likely that the system will be adopted by enough people to become as officially recognised as the journal system. To avoid misunderstanding, let me stress that I’ve got nothing against emotional reactions, as long as they are backed up with arguments; and in the comments on my previous post they have been. Indeed, the arguments against various aspects of what I suggested have caused me to realize that there are some disadvantages I didn’t think of and others that I underestimated.

In this post, I want to summarize the points made in the comments (for the benefit of anyone who is interested in what was said but doesn’t have time to read through them all), and then make a second suggestion, which I think deals with a number of objections to the first. As with the first, I don’t see the details as set in stone. I think it’s an improvement on the first, but doubtless it can itself be improved on. Whether it reaches the level where one should actually consider trying to implement it is of course quite another matter. But I do think that these issues should be discussed: if we were designing a system from scratch for disseminating and evaluating mathematical output, I don’t think we would come up with the current journal system, though of course that’s not the situation, and historical accidents often result in quite good ways of doing things.

Summary of the reaction to the previous post.

I’ll number the reactions and attribute them, with links to the comments where they were expressed (in more detail). This isn’t a comprehensive list of objections — more like a list of the objections that have had an influence on the new suggestion. (Even then it may not be complete — apologies to anyone that I accidentally miss out.)

1. Andrew Stacey. The incentive system I proposed (roughly speaking, Mathoverflow-type reputation points) will not be enough to make people contribute. Similar attempted sites have failed, and this may be because what people need as motivation is a direct and immediate benefit from contributing.

2. Andy P. Even if a new system is demonstrably better for mathematicians, it still needs to be taken seriously by people in other subjects who have power over mathematicians (e.g. when handing out money). Everyone understands how peer-reviewed journals work, but that won’t be the case for some new website.

3. Alexander Woo. We need a way of rapidly sifting out the vast majority of candidates for positions that may attract hundreds of applications. A website with detailed narrative descriptions of papers will make that an impossibly long process.

4. Henry Cohn. Mathoverflow reputation points work because we know that they’re just a game. If they actually mattered, then abuses of the system and all manner of unpleasantness would be much more likely.

5. Yla Tausczik. A useful aspect of the current system is that journals fix an official version of an article, which can then be the one that other articles refer to.

6. Scott Morrison. To have a realistic chance of success, any proposal should be incremental rather than revolutionary.

7. David Savitt. One of the most valuable aspects of the current system is the kind of nit-picking feedback that ends up improving the presentation of a paper. People would be unwilling to provide that except anonymously — if, that is, they could be bothered to provide it at all.

8. Noah Snyder. So much is published that, whatever incentive systems one tries to provide, most of it would simply not be looked at.

9. Super Mario. It is very hard to persuade a lot of people to use a website, and the success or failure of attempts is sometimes extremely sensitive to tiny details of how the site works.

10. Andy P. The journal system works just fine, so trying to devise a new system is pointless, and potentially damaging.

11. Shahab also makes a number of interesting points, too long to summarize here.

The new suggestion.

Imagine the following common situation. You’ve worked for some time on a problem, and finally you’ve proved something interesting enough to publish — or so it seems to you anyway. So you write a preprint. Are you happy with it? Yes, up to a point, but you have a few residual anxieties, such as whether you’ve really got all those technical lemmas correct, whether you’ve mentioned all the relevant previous work, whether your write-up is going to be comprehensible to anyone else, etc. etc. Wouldn’t it be nice to get some detailed feedback before you submit it for publication? But who is going to be prepared to put in the work it takes to check calculations, comment on presentation, and so on?

That’s where http://www.howsmypreprint.com comes in. (The suggestion for a name is of course not serious.) You put your preprint on the arXiv and you create a page on Howsmypreprint, wait a little while, and then a few weeks later (or perhaps much sooner if the system really works) you get a list of typos, small errors, big errors if there are any, suggestions for how to improve the presentation, and so on.

But why is anybody ready to do that for you? Here’s where a suggestion of Andrew Stacey comes in: if you want your paper checked over in this way, then you have to pay for that service by doing it for other people. In other words, you accrue points for working on other people’s papers, and you spend them when they do work on yours.
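The earn-and-spend mechanic is easy to state precisely. Here is a minimal sketch, with all names and point values hypothetical, of how such a ledger might work: you earn credit by reviewing other people's preprints and spend it to request reviews of your own.

```python
# A minimal sketch of the proposed points economy (all names and
# values are hypothetical illustrations, not part of the proposal).

class PointsLedger:
    """Tracks each user's balance of review credits."""

    REVIEW_REWARD = 1   # points earned for reviewing one preprint
    REVIEW_COST = 1     # points spent to request a review

    def __init__(self):
        self.balance = {}

    def earn(self, user):
        """Credit a user for reviewing someone else's preprint."""
        self.balance[user] = self.balance.get(user, 0) + self.REVIEW_REWARD

    def request_review(self, user):
        """Spend points to put a preprint in the review queue.

        Returns True if the user had enough credit, False otherwise.
        """
        if self.balance.get(user, 0) < self.REVIEW_COST:
            return False
        self.balance[user] -= self.REVIEW_COST
        return True


ledger = PointsLedger()
ledger.earn("alice")                       # Alice reviews Bob's preprint...
assert ledger.request_review("alice")      # ...so she can request one herself,
assert not ledger.request_review("alice")  # but not a second one on credit.
```

The key design point is the last line: nobody can draw feedback out of the system without first putting work in.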

That’s the basic idea. There are many details to discuss, but first let me say why I think that in principle it can deal with almost all the objections above. The numbers here will correspond to the numbers up there.

1. There is now a genuine selfish incentive for contributing to the site. If it is seen to work, then people will be keen to use the service, and therefore keen to contribute.

2. The site is not intended to supplant the journal system. It is meant to provide a new service.

However, it could have a profound influence on the journal system. For instance, if I get detailed feedback on my preprint, I could then submit not just the paper but the feedback too. Then the work of the journal could be greatly reduced: all they need from the referee is an assessment of how interesting the paper is, and the difficult bit — reading it carefully and making lots of suggestions — has been done already. Some journals might start to insist that all their submissions must first have spent a certain period of time on the site.

3. Since journals still survive, we still have a rapid sifting mechanism.

4. The points on the site are no longer reputation points — they are “brownie points”. In case that’s just a UK expression, I mean that they are rewards for a service rather than an indication of how amazing you are. Also, since the purpose of the site is not to evaluate papers, there isn’t much reason to game the system. (The only one I can think of is trying to earn lots of points by giving rather rushed and incomplete feedback. I’ll discuss that potential problem later.)

5. Not a problem as journals still survive.

6. This system is incremental rather than revolutionary, since it is an addition to what we have now, which could gradually replace certain aspects of it (the main one being that the hard work done by referees would be done at a different stage of the process).

7. Not a problem — the feedback could be provided anonymously.

8. If the points system was properly calibrated (which might be a challenge) then something like Kirchhoff’s current law ought to apply: on average, if you contributed to the site, you would be rewarded for your contribution. To put it more crudely, all those authors writing uninteresting papers would be helping out with other uninteresting papers.

9. I can’t say that this proposal addresses the problem that it’s hard to predict what will work.

10. I think most of these objections don’t apply to this revised proposal.

So I’ve ended up saying that all the objections that a new proposal can reasonably be expected to deal with have been dealt with. However, that leaves another question: does this suggestion throw away so much that it ends up being pointless? If all that happens is that you sometimes get feedback on a paper before you submit it instead of getting it after you’ve submitted it, has anything significant changed?

I’ll discuss this at some length, but before I do, I’d remind you of Scott Morrison’s point, that change should be evolutionary. This is meant as a first step (which might be the only step we ever wanted to take), so part of the point of it is that it is not a big change. What I’d like to argue is that it’s a good small change, and that it could potentially lead to further evolutionary steps — by the gradual addition of extra features to the site.

But suppose that we just stick with the proposal above, which leaves the work of evaluating, certifying and archiving to journals but potentially takes from journals the more arduous task of reading carefully through submissions. Doesn’t that just leave everybody doing the same amount of work, with no further benefits?

I think not. One benefit of this system would be that the voluntary work we do by critically reading other people’s papers would be coupled more closely to the benefit we get from having our own papers critically read. At the moment, if we do a lazy job, or sit on a paper for a long time, or refuse to referee it because we are too busy, almost nobody will find out, so the negative consequences for us are close to zero. With the online system (which, remember, is first and foremost a supplement to the current system, which would only gradually come to replace certain aspects of it), we would be putting in the work in order to earn a reward. That would feel fair.

I also think that carefully reading a paper and making suggestions for improvements is a very different process from deciding whether it is good enough for a particular journal. This system could in principle decouple these two tasks. One person could do the careful reading and report via the website. Another, for the journal, could make a judgment on its suitability for publication. What’s more, the second person would be looking at a revised and improved paper, and would (if things work as I envisage them) have access to the report of the first person. So they would be making a judgment with more information available. I think this would make the job of refereeing papers for journals much less painful and much more streamlined. Something like it happens already when one is asked to give a quick judgment on whether it is worth refereeing a paper at all, but wouldn’t it be better to make those quick judgments after a paper has been through the “cleansing” process? And wouldn’t it be better for people who find it hard to get their results published if they could at least get some feedback?

Perhaps those would be fairly small gains — I’m not sure — but an online system would come with a lot of flexibility that the current journal system does not provide, which could potentially add considerably to those gains. For example, if one added back some of the features I suggested earlier, like the possibility of offering constructive comments on other people’s work, then the journal referee would have more information to go on, such as how other people in the area were reacting to the paper. Another gain that I’ve already mentioned is that it would be easy to allow different kinds of mathematical document to receive feedback, even if they were not intended for journal publication.

Fine details.

There is a potential problem with the points system, which is that it wouldn’t be right to reward people for giving just any feedback to papers: it has to be useful feedback, of a kind that demands quite a bit of time. It would be unfair if people were to receive detailed and helpful feedback in return for having offered feedback that merely mentioned one or two easily spotted typos here and there.

How can this problem be overcome? One idea is that when a report is offered on a paper, the author of that paper can say how satisfied they are with the report, on a scale from 1 to 5, say. So if your report says merely, “This looks OK to me, except that on page 2 line 5 you’ve written “the the”,” then you won’t get rewarded very much. I think it might be an idea to have a feature a bit like Mathoverflow’s “acceptance” of answers: if somebody does such a good job on your paper that it’s clear that there’s no need for anyone else to make a comprehensive list of detailed suggestions, then you “accept” their report — and they are duly rewarded. But the satisfaction mark could take into account how difficult you thought your paper was to work through in the first place.
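As a concrete (and entirely hypothetical) illustration, the satisfaction mark and the acceptance feature could be combined into a single reward rule; the base reward and bonus values here are made up for the sketch.

```python
# Hypothetical reward rule: scale the reviewer's points by the author's
# satisfaction mark (1-5), with a bonus when the report is "accepted".
BASE_REWARD = 10   # points for a fully satisfactory report (assumption)
ACCEPT_BONUS = 5   # extra points for an accepted report (assumption)

def report_reward(satisfaction, accepted=False):
    """Points earned for one report, given the author's 1-5 mark."""
    if not 1 <= satisfaction <= 5:
        raise ValueError("satisfaction mark must be between 1 and 5")
    points = BASE_REWARD * satisfaction // 5
    return points + (ACCEPT_BONUS if accepted else 0)

print(report_reward(1))                  # cursory "the the" report
print(report_reward(5, accepted=True))   # accepted comprehensive report
```

Under a rule like this, a report that merely flags one typo earns very little, while an accepted, comprehensive report earns the maximum, which is exactly the incentive gradient the scheme needs.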

Should these reports be public, and should they be anonymous? One possibility is that the writer of the report could decide whether he or she wanted to be named and was willing for the report to be made public. The author would also have a say in whether the report was public. If both referee and author were happy to have the report made public, then it would be. One could also have a private link to the report, which the author could make available to the journal to which he or she decides to submit the paper.

Another feature one might have is a sort of reverse acceptance, where the writer of a report would tick a box to confirm that the author’s new draft has dealt satisfactorily with the suggestions made. Again, this information could speed up the process of conventional publication considerably.

What if the author of the paper unfairly fails to recognise the hard work put in by somebody who writes a report? I don’t see an easy solution to this if the report remains private; if it’s public, then the unfairness of the author would be there for all to see. However, I think that only in rather difficult, exceptional cases would there be any reason for authors to behave in this way. Perhaps some people would be a little ungenerous, but if the referee had put in a lot of work, then surely the vast majority of authors would be happy to reward it appropriately.

A very simple additional feature that could be helpful is “certification buttons” that you press to give some useful information to other people. One might be, “This is a serious mathematical paper.” It wouldn’t say anything about whether the paper was correct, but just that it wasn’t the work of a crank. If you pressed that button, you would get a very small addition to your points, and it would be a matter of public record that you had pressed it. (The same would go for all certification buttons, to help people judge the value of the certifications.)

Another might be, “I haven’t checked in detail, but I’m confident that this proof is essentially correct.” Yet another, for which more points would be on offer, could be, “I have checked carefully and am happy to confirm that the proof is essentially correct.” (That wouldn’t be a guarantee that every last detail was correct, but just that the certifier — who would be named — was very confident in the results.)

What happens if cranks start certifying each other’s papers? There are many possible answers to this. One I like is due to Noam Nisan (or at least, I got it from this comment of his), which is to set up “networks of trust”. If for any reason I decide that I trust the judgments of some reviewer, I click on a box that creates an edge between me and that reviewer (in a graph of which we are both vertices). Various algorithms can be used to derive information from the resulting graph about who to trust if you begin with some small group of people that you definitely trust. And official institutions could set up their own networks, possibly making them public.
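In code, one simple such algorithm might look as follows; the decay factor, the threshold, and the breadth-first scheme are my own assumptions for the sketch, not part of Nisan's comment. Trust spreads outward from a seed set along endorsement edges, attenuating with each step, so that endorsements reachable only through cranks count for nothing.

```python
# Hypothetical sketch of a "network of trust": breadth-first trust
# propagation over an endorsement graph, attenuating with distance.

from collections import deque

def trust_scores(edges, seeds, decay=0.5, threshold=0.1):
    """Propagate trust from a seed set through endorsement edges.

    edges: dict mapping each reviewer to the reviewers they endorse.
    seeds: reviewers you trust directly (score 1.0).
    decay: attenuation per step away from the seeds (assumption).
    Returns a dict of reviewer -> trust score above the threshold.
    """
    scores = {s: 1.0 for s in seeds}
    queue = deque(seeds)
    while queue:
        person = queue.popleft()
        next_score = scores[person] * decay
        if next_score < threshold:
            continue
        for endorsed in edges.get(person, []):
            if endorsed not in scores:  # keep the first (shortest-path) score
                scores[endorsed] = next_score
                queue.append(endorsed)
    return scores

# Cranks certifying each other stay disconnected from the seed set:
edges = {
    "me": ["ana"], "ana": ["ben"],               # my network
    "crank1": ["crank2"], "crank2": ["crank1"],  # mutual certification
}
scores = trust_scores(edges, seeds=["me"])
# "ana" and "ben" get positive scores; "crank1" and "crank2" get none.
```

The point of the example is that mutual certification among cranks is harmless: however enthusiastically they endorse each other, no path connects them to anyone you trust, so their scores stay at zero.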

Summary.

As I see it, the main properties of this second suggestion are these.

(i) It is designed to be a useful supplement to the journal system rather than a replacement for it.

(ii) It could streamline the work we do for journals.

(iii) The incentive for working on this site would be that others would do the same for you. (That’s sort of true for the current system, except that if you don’t do your share of the work, others still do it for you.)

(iv) If somebody didn’t want to have anything to do with the new system, that would be fine.

(v) It would be easy to add further features to the system, which could allow both it and the journal system to evolve. Here are three examples. First, if a simple certification system could tell us that people we trusted had judged a paper to be serious and almost certainly correct, we might well have, for many papers, all the information we needed for metrics, sifting out of job applications and the like. We might find we could get by with far fewer journals. Second, if people wanted to experiment with ideas like virtual journals, it would be easy for them to do so. Third, one could make it possible to give feedback in the form of smallish comments that were different in style from the detailed reports that would be the main purpose of the site, but also useful.

Before I stop, let me mention one other feature that I’d like to see, which I forgot to mention earlier. It’s that everyone would start with a credit of say three papers (maybe more if they were PhD students). That’s partly so that the system can get started at all, and partly because beginning mathematicians probably need to get a few papers under their belts before they start refereeing the work of other people. (That said, many graduate students work through recently published papers of more senior people, and could in principle offer extremely useful feedback. That wouldn’t be ruled out at all.)

Another thing I forgot to mention is that since points would be just for earning the right to have feedback on your submissions, there would be no need to make them public, and so no unhealthy competition.

Yet another thing I forgot to mention is Andy P’s view that the journal system ain’t broke so we shouldn’t fix it. Rather than comment on this, I refer you to Noam Nisan’s elegantly written response (to which Andy P in turn responds).

Added later: I make a further suggestion in this comment below, which I think could significantly improve the chances of a site like this working.

91 Responses to “A more modest proposal”

Thank you for these very thoughtful outside-the-box posts on publishing. I think there are a lot of plausible ideas herein.

One thing I find suboptimal about the current system is this: the name of the journal in which a paper appears has become a proxy for the quality of the paper. This is bad in two ways: first, because it reduces a complex evaluation of quality to a metric with only a few bits of entropy (a dozen maybe? 4096 journal names?); and second, because the standard deviation of quality in any journal is rather high (since we’re all only human), and so the metric is only sort of order-preserving.

Why do I make that comment here? Because one negative consequence of this proposal, I fear, would be to enhance this quality-proxy problem. The more a journal referee’s job became simply “is this paper good enough for this journal”, the more those decisions might devolve into subjective/human/biased opinions that poorly reflected actual mathematical quality.

One argument I’ve seen is that if an actual person is prodding someone to write a review, then it’s harder to refuse.

However, there’s no reason why the same cannot be done for the above model. People who haven’t contributed much over a predefined time period could be prodded to promise a certain amount of contribution in the next few months.

This would of course require that such a site have people in charge (perhaps changing from time to time) who agree to do the prodding.

Christian Robert once wrote about a journal that gives credit for refereeing and requires such credit or money for publishing. Those might be the right people to ask if you ever actually want to (find someone to) implement your ideas.

Personally, I think we should start at the very end of your suggestions, i.e., Noam Nisan’s comment about a web of trust against cranks. This is, in essence, the description of a social network, one dedicated to a single purpose: giving information about researchers you trust. I find this idea easy to implement using existing technology, and very useful when it comes to assessing researchers.

It might be criticized that this will allow influential researchers to have too much of an influence by endorsing young researchers, but, realistically, this already happens in the journal system right now — such a web of trust would make it transparent.

The second point I’d like to make is that I think a centralized site is not a good idea. If I’m producing mathematical content such as reviews, I’d prefer to be in full control over it, i.e., have it on my website.

This is not a problem at all, of course. Your site could either endorse Creative Commons licences or be more of an aggregator site (like researchblogging.org) than a centralized site like Mathoverflow.

I don’t think that any of the ideas you describe really require a centralized site anyway and certainly any web-of-trust idea could be realized better with a so-called “federated social network”. Incidentally, this is what boolesrings.org is all about.

One note — the “you have to give something to get something” point system is apparently quite common in filesharing services. So there’s presumably a lot of data on how to tweak such a system to encourage maximum participation (especially in a context in which wherewithal to participate is not equally distributed.)

“Does this suggestion throw away so much that it ends up being pointless?”

I feel that even if this suggestion provides a good improvement over the current system, it would be much more rewarding to try to work out a version of your original proposal.

I will try to sketch an amended version of “How might we get to a new model of mathematical publishing?” taking as a starting point the 10+ objections helpfully summarized in the current post.

Assume we need evolution instead of revolution, as described by Scott Morrison (6). In that case, the proposal would be to create a website where people can upload their papers as preprints, in the same manner as the arXiv works for physicists. As far as I know, there is no disadvantage in doing so on the part of journals (some of them even encourage it), so this website wouldn’t be a direct competitor. As has already been outlined, add a comment system to politely criticize a given paper (when you write a new comment, you’re asked to select what kind of feedback you’re giving – suggestions, pointing out mistakes, giving a concise review, etc. – in order to place such comments in an orderly fashion).

I read a comment saying that some people dislike the use of ratings, but I think it’s essential to add a rating system (à la MathOverflow) to incentivize positive comments and to help other users separate the wheat from the chaff. As this website, in principle, wouldn’t interfere with real-life affairs, objection (3) is not a problem any more. However, I expect that, as the site grows in quality, it will add something to real life, but I’d argue it will be mainly positive. Actually, we can even enforce that: allow upvoting or downvoting of papers (and comments), but instead of relying on a clear-cut measure for a user, only give honor badges based on performance on the site, highlighting character traits that are useful to future employers. In this way, you don’t provide a tool that can be abused. Also, a nice addition would be a mechanism for interacting with other authors in order to nurture a network of collaboration.

In this scenario, you really want to get your preprint published on this website (because it gives you feedback, exposure, etc.), so you’re allowing yourself to publish a better paper (7). Also, with some effort (for example, if you’re responsive enough to comments), you earn badges, so that if an employer ever wants to look over your profile, they will see badges you have actually earned and, arguably, that you’d be proud to show off.

Andy P’s objection (2) is not really a problem with the above suggestion, but let me clarify one thing: the source code for this website would be openly available to everyone, so if any kind of metric is needed in the inner workings of the site, that piece of algorithm will be out there to be understood.

Henry Cohn’s point (4) about abuses of the system applies to everything. Obviously, we would design countermeasures similar to the ones implemented by StackOverflow against suspicious activity.

Yla Tausczik (5) described a useful aspect of the current system: when a paper is published, it’s done and it can be cited by other sources. Well, that’s partially useful, but it begets a much more serious problem: a paper that is corrected (or retracted) after being published will not reach every other source referring to it, so the mistake will prevail for years. This problem is pervasive in other fields. In any case, you may want to refer to the paper on the journal’s website but still be able to check the last modification date of the “preprint” on this website. If journals ceased to exist, you would value correctness more than fixedness.

Most of the arguments you answered in your past post deal with common issues in a very satisfactory manner, so I won’t repeat them. As I said, I think the first approach is more valuable than this ‘more modest proposal’. Hopefully I have added something useful towards bringing back the original idea with some modifications.

As to “What happens if cranks start certifying each other’s papers?”, why not allow only mathematicians to use the site? I suggest that in order to sign up, one would need to be invited/endorsed by somebody who is already signed up.

I like this idea, and I think it has nothing to do with replacing journals and everything to do with improving the quality of mathematics papers. Personally I spend a lot of time writing single-author papers, and, after I have been working on a paper for long enough, I can no longer tell what is good and bad about its presentation for someone who is not intimately familiar with it (and sometimes I can no longer tell if my arguments are accurate). However, people tend to be unwilling to “pre-referee” the paper (a job which a co-author might do). Such “pre-refereeing” comments, when they do come, are often extremely useful in making papers more readable, or pointing out problems which need to be fixed before submission to a journal. I would be very willing to do this pre-refereeing job for other people in exchange for them doing it for my papers, but a mechanism to organise this is missing at present.

1) “everything to do with improving the quality of mathematics papers” – yes! Absolutely agree.
Well, maybe I would say that improving the quality of EXPOSITION in mathematical papers is the first concern; the quality of results will be improved as a consequence.
I feel that maybe 90-99% of current mathematical writing is not well written, or let us say there is room for improvement.
And I think we actually badly need to improve this.
(Being in Moscow, I am outside the current publishing system: no access to journals or MathSciNet, only the arXiv, so I have NO opinion about it.) But I obviously feel that we badly need to improve the expository quality of our papers; sometimes even close colleagues in the field write papers about which I can only ask: who do you think will read this, and what for? Sometimes they are written in a manner that makes simple things look very complicated. And complicated things get lost in tons of inessential details.

I think that if the quality of exposition improves, then the quality of results will improve as a consequence: if a paper is badly written and the results are not very important, no one will actually check it in the current system, so the current system to some extent encourages cheating of that kind: if the results are bad, just write a bad exposition and no one will check it. But if the exposition improves, then it will be easy to see that the results are bad, and hence people will lose their reputations.

2) “I can no longer tell what is good and bad about its presentation for someone who is not intimately familiar with it” – again, absolutely agree, I also often feel the same.
Moreover, one exposition can be good for one person and another for another person. Tim’s idea gives a way to deal with this: if someone comments that they are unclear about a lemma, you can answer by giving another proof that might be closer to their background. And here the feedback is very important, because putting all possible proofs in a paper is not reasonable, but it is reasonable to supply one when someone requests it.

Moreover, I feel that when I am trying to polish a paper, and hence reading and rereading it many times, my mind begins to cheat me: I begin to learn the text by heart. The problem is that I am learning NOT what is ACTUALLY written, but what I INTENDED to write down, and these can be very different things :) When I reread, my mind shows me not what is written but the fake things I intended to write, so I just cannot see the gaps or mistakes.

There is a simple solution: put the paper in a drawer, keep it there for two months, then check it again. But usually I must finish the paper before some grant deadline or whatever, so this does not work.

Best wishes, Alex
PS
Let me also remark on the current publishing system. I understand that it plays an important role as a kind of rating agency: a good journal accepts your paper, so your rating is high. But I would say that here in Moscow it does not seem to have much influence. I think that if journals disappeared, most of us would not even notice, because we read papers from the arXiv; journals are not available to us, and actually we do not know much about which western journals are good and which are bad. I have heard that Annals and Inventiones are cool, but what is number 3? I really do not know, and I think many people here do not even know about Annals and Inventiones… Friends who are physicists working in hep-th look at citation counts, not journals. The reasons for this situation are economic: you can get a permanent position, but you will not be paid reasonable money, so…
Well, maybe now the situation is beginning to change: a maths department recently appeared at the economics university, which is very rich owing to its close connection to the government, so they have begun to pay money and hire foreign professors, and they will look at publication lists. But before, people actually knew each other personally, giving talks at seminars and so on, and the influence of journals was not so high.

A problem I see with this is that established research groups already have small networks of people they can turn to for reading a paper. Someone in this position, or someone with an established reputation, will feel more confident submitting their work for open review and will tend to benefit most from the results.

It seems then that creating this type of site is (in effect) an attempt to publicize and grow such networks. But I don’t know how well this will work without the kind of gradual exposure process which takes place now.

Perhaps there should be mechanisms to choose a network of reviewers for a paper dynamically, similar to friending on Facebook.
At ‘zero order’, this just reinforces existing networks: increasing your exposure gets you a little closer to the way things might be in a journal review.

Hmm, this could be pretty interesting. Anything that helps people get useful feedback would be great.

I’m probably being naive, but I wonder whether this really needs a point system at all. I don’t really like the idea that people who have points will feel entitled to receive feedback. Should they feel cheated if nobody finds their papers interesting? Should people who have received feedback feel obligated to provide it for boring papers if nobody is asking for feedback on interesting papers at the moment, or can they legitimately wait for papers to show up that they care about? (I also worry a little about the idea of a rating scale for feedback: if a beginner offered genuine feedback on one of my papers but I thought the feedback was mediocre, I would feel conflicted about giving it a poor grade.)

I see the purpose of the point system as enforcing fairness, so nobody can profit from getting feedback without being willing to put in some work themselves. This perspective suggests there’s a feedback shortage, and people have to be given incentives to provide it. However, I suspect there’s not nearly as much of a shortage as it might appear.

For example, I read many more papers than I write, and when I read a paper, I often think of things I might like to tell the author. If I find corrections that need to be made, I do tell the author, but there are lots of other sorts of comments that seem less pressing (comments on clarity rather than correctness, other suggestions, ideas for further work, connections), and I usually don’t try to communicate these when I don’t know the author.

One reason is that it is not clear how receptive the author will be, and I don’t want to waste time writing to someone I fear might not want feedback (maybe they don’t want to put any more work into the writing, or the final version has already been sent to the publisher). They may even resent what I thought of as constructive criticism.

If I knew the author really wanted feedback, I would be much more likely to provide it. After all, I’m already doing the reading and thinking anyway, so writing my feedback down is a small price to pay for getting to help polish the paper.

I’d guess I’m not so atypical in this way. A point system isn’t going to convince me to read papers I don’t care about (or to read substantially more papers), and for the papers I do care about I’d actually be happy to offer feedback and suggestions.

So for me, most of the benefit of this system would come just from having a current database of who wanted feedback and what (if anything) they were particularly hoping to hear about.

Can it provide feedback with enough technical depth? I’m not an editor myself, but my impression from the other side of the aisle is that editors spend a lot of their time trying to find people with the available time and the background needed to review a given paper. For fields which require a lot of core knowledge, or which are fairly narrow, the success of a system like this might depend on how many of the “main players” adopt it.

Can it provide immediate incentives to those who don’t have something to release to public scrutiny? Ph.D. students, postdoctoral researchers, and researchers who publish less often come to mind. Even for those who are active, there isn’t much reason to do the work (and it is hard work) except near the end of a writing cycle, and the length of such a cycle is best measured in years.

Can it avoid a skew towards reviews of low-hanging fruit? In a writing group with the kinds of incentives that you mention, there are often very strong motivators to review things that are short, things that are less deep, or things that are “flashy” in order to grab some quick points. I don’t think that anybody wants this to be the ultimate result.

The suggestion sounds quite reasonable. I am a bit worried about conflicts of interests: how do we avoid (even anonymous) positive reviews by close colleagues? Editorial boards are supposed to find *suitable* referees, this is an important point.

I suggest that there are no anonymous reviews. However, there may be reviews of “reviewing boards” (self-organized groups on specific topics with no link to a specific publisher and no decision-making system), that is, reviews written by an anonymous member of such a board acting on behalf of the board. This semianonymity may solve the problem.

I think we want to distinguish carefully between reviews and suggestions for improvement. This proposal is primarily about the latter: I see no compelling need for the reports to be named, and two reasons for them not to be. First, by their very nature such reports contain a number of “negative” comments (such as that the proof of a certain lemma is too complicated, or a certain section is unnecessarily hard to understand). Secondly, if there is to be eBay-style feedback (as suggested in a comment below) as a way of ensuring that people who work on a paper are reasonably conscientious about it, then the author should be able to express an honest opinion without worrying about who they are offending or pleasing. However, if (as seems to me to be a natural and good extension of the basic concept) people can also post reviews on the site, then I very much agree that these should be named. That would make people much less willing to be unpleasant, and it would also make any relationship between reviewer and author transparent.

I like this idea a lot. It gives a quick and easy way to give and receive feedback on papers, which to me at least seems very useful. However, I would suggest that instead of trying to implement it directly, one should start with small incremental steps and use the experience gathered along the way.

One such first step could be a site based on the MathOverflow software, where instead of questions one posts abstracts and links to preprints. This idea has the big advantage that it would be fairly easy to implement. Based on how it performs and what flaws it has, one would be able to form a more informed opinion on what the next step should be.

Also I think that it is better to try to make this work without awarding points at all. Points on internet forums seem to have two functions: they are used to filter content, and they act as an ego boost which in turn is a way to get the user addicted. The need to filter comments in this preprint feedback site should not be an issue, at least at the beginning. As for the ego boost, I believe that any system which encourages people to concentrate on getting recognition rather than on making useful contributions to mathematics is ultimately harmful. Even if points are used as a way to pay for feedback the ego problem will remain. So, unless it turns out that the site cannot function without a point system, such a system should not be introduced.

The journal system isn’t broken if one has access to a top university’s library with decades worth of physical journals, substantial subscriptions to digital journals and easy access to inter-library loans.

However, I think many researchers, especially those in developing countries or at less prominent universities in first-world countries, are not in this position, and so it is important to keep thinking about accessibility.

The best way I can think of offhand to simultaneously avoid both the rewarding-trivial-reviews problem and the not-rewarding-good-reviews problem is to make the awarding of points a payment from the reviewee to the reviewer of, by default, n points, with the ability to adjust the size of the award from 0 to, say, 2n (or some other greater-than-n value). Now trivial reviews won’t be rewarded, but to solve the other half, the reviewee always surrenders at least n points—the only incentive to award less is to, essentially, lodge a complaint about the quality of a review.

The chief issue I see is that the question of when one must make an award becomes a thorny one. The ability to see a review without paying the base cost becomes a loophole in the system, but the possibility that one’s review may never even be looked at simply to avoid that cost is highly demotivating. The best solution I can think of is to require payment in advance for some number of reviews (possibly with the balance refundable after some period of time), and then to mark those papers that no longer have a prepaid balance—it’s still possible to review older submissions, but the potential reviewer is warned in advance of the possibility that their review will simply never be looked at.
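The prepaid-escrow scheme described in the last two paragraphs can be made concrete with a toy model. Everything below is hypothetical: the class name, the bookkeeping, and the exact clamping rule are invented for illustration, not a proposed implementation.

```python
class ReviewEscrow:
    """Toy model of the prepaid review-payment scheme sketched above.

    The author prepays a base price of n points per expected review;
    each completed review is then awarded an adjustable amount between
    0 and 2n, with n as the default.  The base price is always
    surrendered, so awarding less than n only serves as a complaint.
    """

    def __init__(self, balance, base_price):
        self.balance = balance        # author's free points
        self.base_price = base_price  # the "n" of the scheme
        self.escrow = 0               # points locked up for reviews

    def prepay(self, num_reviews):
        """Lock up base_price points for each expected review."""
        cost = num_reviews * self.base_price
        if cost > self.balance:
            raise ValueError("insufficient points to prepay")
        self.balance -= cost
        self.escrow += cost

    def has_prepaid_balance(self):
        """Papers without a prepaid balance would be flagged to reviewers."""
        return self.escrow >= self.base_price

    def award(self, amount=None):
        """Pay a reviewer: default n, clamped to the range [0, 2n]."""
        n = self.base_price
        if self.escrow < n:
            raise ValueError("no prepaid review left to pay for")
        if amount is None:
            amount = n
        amount = max(0, min(amount, 2 * n))  # clamp to [0, 2n]
        extra = max(0, amount - n)           # any bonus comes from the balance
        if extra > self.balance:
            raise ValueError("insufficient points for the bonus")
        self.escrow -= n                     # base price always surrendered
        self.balance -= extra
        return amount
```

For instance, with a base price of 2, prepaying for three reviews locks up 6 points; an award of 10 is clamped to 4 (that is, 2n), and even an award of 0 still surrenders the base price, which is exactly the "complaint" semantics described above.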

Then there’s the issue of potential griefing by consuming prepaid reviews with garbage. No inspiration for how to address that at the moment. The big downside that I can see at the moment is that the whole business would require actual economy design, rather than being able to take a “they’re just brownie points” attitude.

I think it would be useful to identify the shortcomings of the current system that we are hoping to fix with these proposals. I see three issues with the current system:
1) Subscriptions are very expensive. Moreover, a lot of that money goes to publishers who are increasingly doing less and less to add value to the process.
2) It takes much longer than it should for a paper to go through the publication process.
3) Most people use the journal as a proxy for the quality of the result. However, this turns out to be almost useless for comparing any two papers in isolation (though somewhat better in an averaged sense).

Let me address these:
1) This is in many ways the easiest issue to fix: mathematicians should just get together and set certain requirements for journals to meet (regarding cost, availability, etc.); we should then, en masse, refuse to submit to and referee for journals not meeting these criteria. The fact that we haven’t done so indicates that this isn’t an issue that people really care about. It would also be useful for professional societies to start more journals (so as to direct the profits that journals do generate back to the community).
2) This is really a problem with human nature. Your typical 20-page paper takes about 10 months to referee. Now, I imagine that in very few cases this is because it really takes 10 months to do the job. Experience tells me that it is because referees know that it is acceptable to take 10 months, and so the paper sits on their desk for 9.5 months before being examined. It would be very interesting to find ways to incentivize fast refereeing.
3) This is a much more difficult issue. One source of trouble here is that it is very difficult to shop a paper around because of (2). Perhaps journals should have multiple series (say A, B, C, D and E), which would serve as proxies for quality. Based on the referee report, the editors would assign the paper to a series. Submitting the paper to one journal would then really be like submitting it to 5 different journals of varying quality. Alternatively, one could consider making referee reports portable. So when I get a lengthy report from the Annals saying my paper is very good, but not quite Annals quality, I can then port it over to a slightly lesser journal and not have to wait another year for the process to start over.

Perhaps people can identify other shortcomings of the system, but I think the three I mention above are the biggest issues. This proposal does do a bit on (2) and (3); however, since it is complementary to the journal system, it doesn’t do much on (1). I note that Terry Tao has essentially been running a forum for feedback on his preprints on his blog for the past 5 years or so. One should note that even though many of these papers are very high quality and feedback is expressly encouraged, feedback (on average) has been somewhat modest.

As an incremental improvement, I think the best idea along these lines would be for MathSciNet to start a wiki feature to collect typos (probably requiring some sort of certification to join to filter out cranks). I think this would be a valuable service and would be a good jumping off point for further developments.

1) The subscription situation is a complete disaster that many have tried to solve for years with only partial success so far (e.g., Open Access, which is now increasingly allowed, as with Springer Open Choice). However, the positive aspects of commercial journals are seldom mentioned in these discussions. They were often started because there was an interest in expanding research that non-profit journals couldn’t or didn’t bother to cover. Because they are commercial, obviously once they got established they started to want to make maximal profit. The academic community could have seen this coming. It seems unfair to want to wipe them out now – they are what they are.
2) Even trivial papers can take a few days’ effort to referee properly.
E.g., the source of a copyright infringement can be cleverly hidden in a long list of references, a small but tricky argument can be omitted on purpose, etc. Journals could facilitate anonymous contact with the authors during the refereeing process, which would then be reported with the review. This could be helpful, e.g., if the authors stubbornly refuse to answer questions.
3) It seems a centralized idea to have Annals decide where a paper should be published, though it is a similar idea to the best universities making hiring decisions for others – in some more anarchist countries such a system is resented. Besides, journals are run in different ways, e.g., the editors have varying degrees of freedom in their decisions.
Also, if you treat your paper as an entity, you care where it goes to live. Then it suffices to prepare it to be a good enough fit for a chosen journal, or take other necessary measures. E.g., anticipating a jealous report, it is not uncommon for a co-author to hire a student of an editor during refereeing.
With how conservative the math community has always been, opening up MathSciNet could already be a revolution. Fiddling with arXiv type ideas would be just following up on the physicists.

I like the general idea of your point 3). I always have problems choosing the right journal – not so much because of the right “reputation level” but more because of the thematic fit. The title often does not say much (consider names like “Journal of Pure and Applied …”). Moreover, most descriptions of the scope of a journal naturally leave ample room for interpretation. As an example, I have sometimes thought that it would be good to submit a paper to SIAM in general and let some “meta-editor” decide to which journal it should go (this could happen either before reviewing or after acceptance). The same applies to several IEEE Transactions…

A pure rating of A, B, C, … reminds me of school grades; are there editors who would like to act as teachers to other researchers? I am not sure. And what papers would go into the “worst-level journal” (which merely says “borderline work – barely publishable”)?

What I was going to say has partially been said by Jon above, but not completely, so I’ll still say it. I’m not sure I agree with Henry Cohn that points are unnecessary — I don’t think that people are going to do anything that requires serious hard work without some kind of incentive. (The current incentive — not wanting to annoy an editor — would be absent in this system.)

However, one thing that worries me about what I proposed above is determining an appropriate reward for a report. Subsequently, I thought of the same idea as Jon, which is to have a kind of market. When you join the site, you start with a certain credit, and you can then spend it as follows. When you want a paper of yours worked on carefully by somebody else, you post a link to the paper, the task you want done (e.g. checking very carefully that all the parameter dependences are correct in Section 5 — or something more general like reading through the entire paper carefully and commenting on presentation, correctness, etc.), and the number of credit points you are prepared to pay whoever does it for you. If nobody is prepared to do it, you can lower your price, and in that way a fair price is reached.

The immediately obvious drawback is that somebody might offer a big reward for going through a long and difficult paper, and I might go through it in a very cursory way. To get round that problem, I suggest (i) that authors are quite careful to specify what exactly it is that they want done (there could be some fixed formulae to use for this) and (ii) that after the job is done, the author gets to rate the transaction in an eBay-like way. That way, if somebody offers to do the job on my paper, I can look up their feedback history. If I see lots of five-star feedback and comments like, “Thanks for an amazingly thorough and conscientious job!” then I’ll be inclined to accept, whereas if I see lower ratings and comments like, “Missed several typos and an error in the proof of Lemma 2.2,” then I won’t be so keen. Under this system, it would probably be best if reviewers used pseudonyms, so that authors would feel able to be honest in their ratings and not worried about offending a colleague or friend.

I stress once again that for this new proposal points have nothing to do with ego boosts and showing off. They are solely a form of currency that you want to earn in order to get other people to do some hard work on your papers.
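The market mechanism just described – post a task with an offered number of points and lower the offer until somebody accepts – could be modelled crudely as follows. The function and its parameters are invented for illustration; a real site would of course run this over time rather than in a loop, and the author would consult each taker's eBay-style feedback history before accepting.

```python
def post_task(balance, opening_offer, willing_prices):
    """Toy model of the descending-price matching described above.

    `willing_prices` maps potential reviewers to the minimum number of
    points for which they would take on the task.  The offer starts at
    `opening_offer` (capped at the author's balance, since you cannot
    offer points you don't have) and drops one point at a time until
    some reviewer is willing, approximating the "fair price" the
    proposal describes.  Returns (reviewer, agreed_price), or (None, 0)
    if nobody is willing at any affordable price.
    """
    offer = min(opening_offer, balance)
    while offer > 0:
        takers = [r for r, minimum in willing_prices.items() if minimum <= offer]
        if takers:
            # pick deterministically here; in practice the author chooses
            return sorted(takers)[0], offer
        offer -= 1  # nobody bites: lower the price and try again
    return None, 0
```

Note that the price only falls as far as it needs to: if someone is already willing at the opening offer, the deal closes there, which matches the "if nobody is prepared to do it, you can lower your price" dynamic.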

Interesting thoughts, it looks like it is converging towards something really feasible and useful. And this dichotomy between publicly-known authors and anonymous commenters is so natural to most anyway that it can only help mass adoption later on.

A few more ideas and questions: (a) would any anonymous commenter have access to any paper, or might there be a matching process? For instance, if Alice registers as an anonymous commenter under the pseudonym dreamerA, she could have to mention her specialities (say a few MSC codes, or arXiv subtopics). These would stay invisible to others, but one can imagine two things: either there would be an icon on each of dreamerA’s reviews indicating whether she listed herself as a specialist or not; or, more stringently, dreamerA would be unable to do any “in-depth review” jobs outside her specialities, only “typos & general style” ones. (I think most people would be honest and not claim to know a topic when they don’t, so such matching would be trustworthy.)

(b) There are three kinds of typos, I think: text typos like “the the”, TeX typos (which not everyone might know how to fix), and actual math typos (a wrong index which changes the meaning, etc.). So perhaps it would be nice to require authors to specify which kind of typo they agree a commenter found (by ticking relevant boxes, say). It would be a shame if some nitpicky author gave eBay-style feedback like “did a comprehensive review but missed several typos”, which is lukewarm, when in fact the reviewer skipped the silly typos and concentrated on giving interesting mathematical remarks. That is, encourage authors not to ask for a complete package from only one commenter, but actually to split jobs at least a little.

I think this new site could really be launched and tested in beta soon with a few willing authors and anonymous commenters, just to see what happens.

This mathoverflow/social-networking model of publication sounds pretty awful to me. One of the reasons the journal system works now (to the extent that it does) is that the journals choose experts to be referees, and in theory the referee will take care to make sure the paper is correct and appropriate for the journal. If you switch to the big social-networking funfest, then whoever shows up to referee a paper will get to give his opinion, and given the official nature of the site, his opinion would suddenly be elevated for no reason whatsoever. And the upvoting adds a groupthink aspect: the opinions of whoever spends the most time on the site are weighted more than others’. And I don’t think the regulars would be particularly representative of the math community. For example, on mathoverflow the set of prominent users is distinctly tilted towards male graduate students and postdocs interested in algebra-related areas who are interested in their position on the high-score list, I mean in increasing their reputation. These will be the ones refereeing the most papers and generally having the most influence, based solely on their level of interest in the site(s).

Another aspect of this groupthink model is that the importance of research areas becomes weighted towards the areas of interest to the most frequent users. Based on mathoverflow, for example, functional nth roots of e^x or sin(x) would become exciting research areas, possibly for years to come. The opinion of an undergraduate with a C average becomes as important as the opinion of a Harvard professor. I realize that measures can be taken to prevent this, but if the current internet is any indication at all, it will be difficult.

“The opinion of an undergraduate with a C average becomes as important as the opinion of a Harvard professor.”

And that’s exactly the way it should be. We should hear everybody’s arguments and based on what they say, decide whether it makes sense or not. Otherwise, we are falling into a fallacy (argumentum ad verecundiam).

Gowers: I was referring to your previous post, not this one. (I included my response here because I wasn’t sure if anyone was still looking at the responses for that post)

The proposal on this entry is a lot different of course and I don’t really find it that objectionable, as long as people aren’t pressured to use it. If people want to solicit feedback from the public it’s their right.

Zarrax: one aspect of your response is very interesting to me. This aspect was: you like it “as long as people aren’t pressured to use it.”

To the extent the traditional referee system works at all, it is because there *is* significant pressure to use it. If you habitually turn down referee work, it is harder to get your own papers looked at (and sometimes the papers of your students and colleagues as well). If you accept referee work, it is mostly thankless, but you can at least mention where you referee on your CV, and it sometimes builds sympathy with editors so that your own submissions get a more favorable hearing. In any case there is universal agreement that referee work “counts” as service to the professional community. There is pressure to participate.

With no pressure to participate, I see any “social refereeing” website as developing much as MO and other online projects have: a strong and committed base of enthusiasts heavily interacting with each other, but nothing approaching universal involvement (or even widespread involvement). If it escaped this and became widespread, there would almost certainly come pressures to participate that would eventually manifest themselves in external ways. People who didn’t participate would be seen as not doing their fair share. A lot of people bristle at the idea of nonactivity on a website affecting them professionally, and I think this is where a lot of the objections are coming from, fundamentally. The fear is that eventually we won’t get hired or promoted because we didn’t log in to some website often enough. We are conditioned by history to see traditional refereeing as something that “counts”, so we accept that nonparticipation has negative consequences. But most of us, even younger people, are far from seeing websites in this way. I think the cultural transition that would lead to widely popular social-networked refereeing, if it does take place, will take place over a traditional time scale (e.g. years and years) and not an Internet time scale, because it requires a change in how we view professional obligations.

We are in an interesting time right now. Having a web presence is increasingly important – there was a recent discussion on the Secret Blogging Seminar blog related to this, in terms of graduate school admission – but it is in a kind of grey area. Web activity has “semi-official” status: there is a tendency to think that we “know less” about an applicant who only exists as a CV, letters, and personal statement, and that we “know more” about someone who has all that plus a big Internet presence. But it also has an “unofficial” status, leading some people to see participation online as illegitimate or even negative. (I partially sympathize with this: if one’s teaching evaluations are consistently low, and one is not active in the affairs of one’s own department, then spending 20 hours a week online answering math questions under one’s real name is a public declaration of how little one values working for the people one is actually paid to work for.)

This newer proposal seems ok to me. I do wonder if it would catch on, though… mathematicians are busy as it is, and furthermore might feel uncomfortable discussing their research in a public venue. It might be hard to induce people to participate, even with a point system. My guess (and it’s just a guess) is that, like a lot of math on the web, you’d end up with a dedicated core of people who are enthusiastic users, but most people would still just put things on the arXiv and passively solicit feedback that way. This doesn’t mean it wouldn’t end up being very helpful to those who do use it, though.

As a computer scientist type person, I preferred something closer to your first proposal. Indeed, I think it would be very feasible to implement. My suggestion is that conferences start using this to handle paper submissions. The journals could continue to exist for longer and more thorough reviews. I guess this wouldn’t be much of an impact in your field since conference publications aren’t common.

It seems that computer scientists haven’t yet had the confidence to implement SciWi. Doing reviews through MathSciNet has the advantage that it would be approved by the community first.

SciWi makes reviews of an article possible only if an editor (e.g., an author, a competitor, etc.) starts a “journal” on it (an article can have multiple journals). This might need to be modified somewhat to suit the maths culture. E.g., individuals who feel strongly about their accomplishments could start “journals” on whole research areas. Even regular journals could be evaluated as “journals”, which could help deal with situations such as, e.g., a cartel taking over an editorial board.

I would be happy to help with programming in such a project, if it can grow out of this discussion. If this were to be polymath, then the first contributors could be offered the ownership of domains: mathematics-reviews, math-reviews, maths-reviews, which are available with all extensions at this time.

Wow, I really like your new approach, I think it has a much higher chance of success. (The devil is still in the execution, of course, but the overall design looks very solid to me as it provides the right incentives.)

Concerning the point system, I would try to keep it as simple as possible (complexity generally hinders adoption). Maybe three options for the author: “don’t reward comment”, “reward small comment”, “reward significant comment” with a point value of 0, 1 and 2 each.

The “graph of trust” feature is not very useful; humans are very good at judging and remembering each other, so there is no need to keep track of that in software. Just give each one a real name / pseudonym that looks like a real name and you’re fine. (It might become useful when the graph grows very large.) I would leave this feature out in a first iteration and only add it when people start to describe roundabout solutions for it on their blogs.

There is one huge problem with the point system: it’s a market. If you want to keep it functional as a market, which is necessary to keep the incentives aligned, you will have to deal with inflation, deflation, price changes and so on. This is hard. Every MMORPG has to deal with this, and still most have a completely dysfunctional economy. Here is a fascinating article on the In-game Economics of Ultima Online to give you an impression of what can go wrong. Maybe your market avoids these issues, but you have to keep an eye on it. After all, it’s the core of your system.

On top of that, changes in the value of brownie points will upset a lot of users, so you may lose a lot of contributors if anything changes. Many an MMORPG has been killed because the developers changed the game to the point where they alienated their initial audience.

It would be easier to decouple the point market from the incentives (like MathOverflow, where no one cares about reputation points), but I don’t see how that’s possible. To comment on a paper, you do need some kind of tangible reward, don’t you?

I’m with Henry on the incentives part: I don’t see the need. If I read a paper and find some errors, then this gives me a place to post them – hopefully easily. If I have no real incentive to read a paper in the “real world” then chucking a few “reputation points” at me isn’t really going to help. So the big benefit of this, to me, is that something I would do anyway (ie annotate a paper as I read it) now has a chance to be of use to others with very little extra effort from me.

This, of course, already goes on unofficially (the nCafe has had many threads of this type), but there is considerable value in having a centralised place.

Where there might be a place for measuring how “helpful” someone has been is with non-mathematical reading. I might, if I have a bit of spare time, read over a paper that isn’t particularly in my field if I knew that the author was specifically interested in improving their English. Then the author might want to know if I’m generally regarded as being a good proofreader in that regard, and I might want some way to choose amongst all the people who want their papers read in this way, and “contribution to society” is as good a way to judge this as any.

But certainly at the mathematical level I don’t think that my behaviour would be influenced to a great degree by any incentives, and if it were I’d probably wish that it wasn’t.

I disagree about points. I once heard that the average number of readers of a mathematics paper is less than 1. I don’t know whether it’s true, but assuming that there’s a long tail, there will be huge numbers of papers that people won’t have happened to read, so if these were posted on the site, I think there has to be an incentive for people to read them. Note that I am absolutely not suggesting reputation points in this second proposal. Yes, I’m suggesting points, but they are solely a form of currency. You earn points by reading people’s papers, and you spend them by having your papers read. So calling them “reputation points” is completely inappropriate, for two reasons. First, it would be silly to say that your reputation had gone down when someone read a paper of yours, and secondly, I don’t imagine the number of points you have being public knowledge anyway: the system would not allow you to offer more points than you actually had, but that’s the only difference they would make. I think having a points system like this would hugely increase the take-up of the site, whereas not having points would result in the vast majority of papers being ignored, which would I think kill off the site pretty quickly.

Also, the way I now see things since writing this comment, people who used the site would specify fairly precisely what they wanted done and offer a certain number of points for it. So checking the English might be one such task, as might checking the calculations in just one section, or something like that.

(bother … can’t reply to replies … hopefully this will end up in the right place)

I misunderstood that part of the initial proposal then. Yes, that sort of “point” system is better than an all-round “reputation”. I’ve often thought that the SE/MO system would work better if votes were *potential* points. When you vote on a question you say, “I’d like to read an answer to this” and that tells the answerer how much they might gain by answering, but they only get those points if the original voter comes back and actually looks at the answer and votes for that as well. But I digress.

I like the idea of being able to say, “I would like this looked at and I’m willing to ‘pay’ for someone to do it,” where the payment is in the form of agreeing to look at someone else’s work (or a result of having looked at someone else’s work).

Let me try to imagine what it might be like. Say I’ve been using the system for a while so have some points (I really, really want to shout “What do points mean?” but I shall resist the temptation). I post a paper on this site. I declare that I’m particularly keen to know if the proof of theorem 5.2 is correct and am willing to pay a (declared) number of points. So I “hand over” those points to the system. There should be some way to ensure that I now have no incentive not to award those points. I’d probably also put up a few points for “other helpful remarks”.

Now several people take a look at the paper and post their opinions, most likely concentrating on that theorem but hopefully also with other helpful remarks. At the end of some fixed time period, I have to decide how to allocate the points, and there has to be a safety check that if none of them is useful then I don’t have to allocate them (on first iteration, I still don’t get them back, just the time period is extended).

How should I do that? Clearly, I’d like to be fair, and seen to be fair. So the community could have some input. But on the other hand, they’re *my* points and it’s *my* paper so I should have the final say. Maybe the system could suggest a distribution of points based on the votes (assuming we have votes) or some other indication and then I’m allowed to skew that distribution a little but not completely.

What I’m trying to avoid in that is the danger of someone posting something that they believe to be helpful, but which ultimately isn’t. Or is maybe subsumed in someone else’s larger work. The system could reward contributions, or at least contributions that have been somehow marked as “positive” contributions.

Perhaps the system should put up some points on a scale depending on how many points I put up; perhaps log(n) + c. Then, when I’ve allocated my points, the system allocates its points according to some horrendously complicated formula whose goal is to “smooth out” my allocation and ensure that no one is unjustly done out of their points.
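Just to make this toy idea concrete (the blending weight and the log(n) + c top-up are pure guesses, not anything anyone has actually proposed), the “horrendously complicated formula” could be as simple as a weighted blend of the author’s allocation with the community votes:

```python
import math

def smoothed_allocation(author_alloc, votes, c=1.0, weight=0.8):
    """Blend the author's point allocation with community votes.

    author_alloc: dict contributor -> points the author wants to give.
    votes: dict contributor -> community votes for their contribution.
    The system adds log(n) + c bonus points (n = author's total) and
    distributes the combined pot: `weight` of it follows the author's
    wishes, the rest follows the votes, so no contributor with votes
    is done out of their points entirely.
    """
    n = sum(author_alloc.values())
    pot = n + math.log(n) + c if n > 0 else 0.0
    total_votes = sum(votes.values())
    result = {}
    for person in set(author_alloc) | set(votes):
        author_share = author_alloc.get(person, 0) / n if n else 0.0
        vote_share = votes.get(person, 0) / total_votes if total_votes else 0.0
        result[person] = pot * (weight * author_share + (1 - weight) * vote_share)
    return result
```

With this sketch, an author who allocates everything to one person still leaks a fraction of the pot to anyone the community voted for.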

This is just me playing with the idea to see how I like it. I’ve no idea if any of the above is reasonable, feasible, or desirable. But I’m the kind of person that needs to imagine how I might actually use something like this in order to get an idea of whether or not I like the idea.

I think the problem of how to award points to people who’ve done tasks is difficult to solve by means of points alone. That’s why I suggested a feedback system (somewhat similar to the one on eBay). If I post my paper and ask for Lemma 5.1 to be checked, people can offer their services. I then look at their feedback: if they generally seem to do a good job, then I award them the contract, so to speak. It would take a bit of time for the feedback system to start working (since people would need to do some work to get the feedback), but it would provide people with a strong incentive to do a good job. Some authors might be very mean with their feedback, but there wouldn’t really be any reason to be.

I also like this idea a lot. Like Andrew Stacey, I am somewhat ambivalent about what (if any) incentives should exist. For papers whose results I need to understand and use, I already have the best possible incentive for careful checking: self-interest! I am unlikely to check other papers very carefully no matter the incentives.

On the other hand, I admire people who can carefully check papers that are not so directly connected with their own work (and these people do exist). It seems only fair to reward them, and I like the “transactional” system proposed here better than the “reputation point” system used at MathOverflow.

See my response to Andrew Stacey’s comment just above. When people have a good incentive for reading the paper (namely, that they were going to do so anyway), then I’d simply say that that’s great for them — they pick up points for doing something they wanted to do anyway. In that respect it’s a bit like being paid to prove theorems …

You make good points. And, as I wrote above, I appreciate that the point system might motivate some people. For me, the “killer app” here is the existence of a formal system for giving and receiving suggestions and comments. I have to admit that I hope it eventually evolves into a replacement for (at least) commercial journals, and tactically I think this type of incremental change is a great idea.

I like this idea, though getting widespread adoption could be tricky, and, without an external standard of value or method of exchange, the ‘currency’ of points could be plagued by extreme instability. A real economy needs a central bank to set interest rates so that inflation stays under control but there is enough currency out there to keep the economy moving. What provides this overall regulating function for this currency?

One of the reasons I like this proposal is that it helps the mathematical community beyond just the most active researchers. At least in the United States, there are many people who publish only one paper every several years in a mediocre journal. These people are engaged in research not to advance the discipline of mathematics but because they and their employers believe that having faculty who are connected to their discipline benefits students (by which I mostly mean undergraduates) in various ways, one of which is to provide students with genuine opportunities to work on unsolved problems. (At the same time, these people and their employers also believe that small class sizes are hugely beneficial to students. Since resources are finite, this means more time spent teaching and far less time to devote to research.)

One of the real strengths of the current journal system is that it permits and to some extent encourages the creation of mediocre journals, and it provides a somewhat hidden way for more active researchers to subsidize these less active researchers to some extent as far as refereeing goes. (Each one of them may submit very few papers, but they referee even fewer, and there are a lot of them.)

I think a lot of value could be created without pushing anyone to read or evaluate more papers. We all read and evaluate papers constantly as part of our research (not just as referees). We just need to get people to share the insights they gain in their reading. I posted some comments about this here:

papercritic.com requires registering through mendeley.com. However, to complete one’s profile they ask for your academic status, which can only be “student” or one of a few other professional categories. In particular, the two fields named “researcher” are institutionalized. In the old times science was a hobby. With viXra.org doing fine, this ongoing discrimination against non-professionals is ridiculous.

I looked at science-advisor.net, a review site mentioned on Michael Nielsen’s blog. However, to complete registration there, all fields must be filled in, e.g., Institution/company and Position/Occupation. Anyway, it seems hard to get people to review. It might have to do with the academic culture, which, if it is feudal, discourages independent action within established groups. Maybe a better idea would be to use technology to try to change this first. Why not start a prototype review site just in additive combinatorics?

I think many ideas here are wonderful, and we badly need a new model of publishing (and not only in mathematics). Here is an idea I have been nursing for a long time, and which it now seems appropriate to share with other people:

Why not add a new “dimension” to papers being published, so that, besides authors and referees, papers would come with an endorser, or several endorsers? So, for instance, if I convince some respected (or not so respected) colleague to endorse my paper, not only will (s)he take on a responsibility, but this endorsement would affect acceptability in a journal, book chapter or the like. An endorser is not an editor, and would not even need to write a report or justify her/his opinion in any written form, just to “sign back”. In many cases, if my endorser is a very well-respected member of the academic community, this would already be a sign of quality. Quite the contrary if my endorser has a debatable reputation: it would then be easier to suspect “cranks helping cranks”.

The idea has already been used in scientific societies (for example by the Académie des Sciences de Paris, and perhaps by many others).
The basic point, as I see it, is to fully recognize, in a new model, the fact that mathematics is made by mathematicians (and science as a whole is made by scientists), while at the same time giving credit to people who donate their time helping science get published, and emphasizing mutual responsibility.

Let me give you an example why this type of endorsement requirement would be harmful in the current academic culture.

Suppose that: (1) your advisor mentions an interesting problem that he would like to solve but doesn’t know how to; you then go do a postdoc, where you see a talk mentioning that a solution of the same problem is in progress; (2) the postdoc is for one year, so you quickly have to come up with a result to impress another potential employer; (3) suppose that you are absolutely sure that a solution of that problem will get you a job with someone you know a bit; (4) you decide your academic survival depends on whether you can solve the problem; moreover, another postdoc friend of yours would be very interested in cracking this problem with you.

If you and your friend solve the problem there is no way that the resulting paper can be endorsed by an academic employee, unless he/she is an editor, in which case the interest of the journal may be greater than that of enforcing community rules.

In fact, what used to happen with independent postdoc actions was, e.g., typographical errors in a paper, leading to it being printed again (CMP 2000). In the example I mentioned, the paper was sent to an AMS journal, which accepted it, then asked that it be moved to an Elsevier one, where it was successfully printed. From a holistic perspective, the one-year postdoc sponsor and the AMS editor were happy.

Speaking of non-profit journals, what are these strange-looking changes about? PAMS now accepts only papers no longer than 15 pages in print; on the other hand, the Annals is soliciting excellent papers no longer than 20 pages. If a journal wants to reposition itself, shouldn’t it announce this well in advance? Someone can spend a lifetime coming up with a result acceptable to the Annals, so this seems a bit contrary to the mathematical culture.

[…] a massive website combining the functionality of arXiv, Math Overflow, and more. There is also a revised (mostly scaled down) version, where the website would mostly serve as a venue for exchanging constructive […]

Speaking of refereeing maths papers, my impression is that it is a disaster when it comes to finding errors. E.g., recently I was reading a somewhat interesting rigorous result in a Khayyam Publishing journal when I noticed that the authors made a short calculation about something in the model from which it is clear that they didn’t understand that applying the inverse Fourier transform after the Fourier transform gives the identity only under some assumptions on the function.

Can anything be done about this? If maths culture is like a club, then we are supposed to be positive towards each other, so exchanges like those in Physical Review are out of the question. However, if we can’t guarantee that our papers are rigorous, then, e.g., we can’t blame physicists for being interested mainly in closed-form solutions.
From my experience, approaching an author about a nontrivial error behind closed doors is OK only if the paper can be fixed while still in press. It might be argued that the community is so close-knit that editors know the specialists in finding errors, who are happy to referee. However, it can happen that such a rare specialist misses an error in a paper he co-authors, in which case his reputation is gone and he gets interested in something else. In other words, a group action might be more helpful here.

Two negatives give a positive, so it might be a maths-friendly idea to set up a site specializing only in error finding. At this time the domains mathematicserrors, matherrors and mathserrors are available with all extensions.

This is exactly the point of my answer below: the site you describe should give a certification, mandatory for further submission to journals. Because they need this certification, submitters will have to certify other papers in order to have their own papers certified.

The generic idea of tweaking a paper in public has to be a good thing. Forgive me, but I thought the Polymath project did that? The mathematics was conducted openly, but wasn’t a paper also written publicly? Wasn’t the paper tweaked?

To my mind this is the way to go. Open science, leading to an iteratively written/edited paper on a wiki, leading to submission to a peer-reviewed journal as a sanctioned/anonymously reviewed summary of a milestone. Apologies for the self-promotion, but that’s what we did, in chemistry: http://www.nature.com/nchem/journal/v3/n10/full/nchem.1149.html

The added feature here is the points incentive to contribute/referee. That’s interesting, and will probably need to be launched and itself tweaked rather than created perfectly de novo.

The decoupling of approving a paper as publishable vs assessment of its importance is the mission of PLoS One (am on the board). But there is nothing to stop tweaking on a different website before submission in order to improve the quality of the paper, and the incentive structure you mention might promote that. Very interesting discussion.

Speaking of publishing, here is a thought on Walter’s suggestion that “we badly need a new model of publishing (and not only in mathematics)”. From my experience, it seems that referees are prone to burning time looking for typos rather than serious errors. If maths culture is about follow-ups, then this may be a group action: a typo initially gives the impression that something is wrong, so if typos are eliminated, later readers will have an easier time, possibly leading to the outcome that only a generalization of the original paper is fully correct. However, typo spotting might require the help of a specialist, if even respected companies can’t handle it. E.g., I have a Piatnik deck of World War 2 Planes cards; the nine of spades has “Vickers-Armtrong” at the top. Admittedly, this may be only because of the subject matter. Another Piatnik deck I have, Battleships (e.g., ace of clubs is the Admiral Graf Spee), seems OK.

Actually some published papers (even, e.g., in the Annals) have typos, so it might be a maths-friendly idea to set up a site dealing with them. At this time the domain publishedtypos is available with all extensions.

[…] Tim Gowers has been asking similar questions for peer review in mathematics (see here and here) As we have seen above, such a system is likely top be highly controversial as well as difficult to […]

1. It seems to me that you would like to speed up the evolution. Of course at a certain (small and unpredictable) scale you will, but I doubt that any big mutation will survive.

2. After the first round of negative comments you propose something which, essentially, tries to make people read papers. If they are interested then they read, if not then they don’t. I don’t think that any artificial incentive scheme will change this simple rule.

3. What you are proposing is (in some form) happening already (surely, to many of us):
a. I find a paper submitted to the arXiv interesting, I read it, and I send comments to the author(s).
b. When I write a paper, I send a preliminary version to a few colleagues and ask for comments.
c. I get comments on my papers from people I don’t know.

May I comment on item 3?
You write: “… I send comments to the author(s) …”.
This is one of the key points: you send them ONLY to the authors. But maybe your comments would be interesting to other people who read the paper? In the current situation there is no way for them to know your opinion. Tim’s proposal deals with this.

You could say that arXiv now has a “trackback” system, so that is one way to solve the problem, but it has obvious disadvantages: e.g., maybe I do not want anybody to comment on my paper, in which case I simply do not send it to Tim’s site, whereas with arXiv I cannot control this.
The other advantages of Tim’s site are clearly described by Henry Cohn in a comment above, which I like very much.

This is an extremely good idea, but it will only work when the actual referees of established journals say to the editors: “I am fed up with reviewing papers full of mistakes, or papers that have already been reviewed somewhere else and rejected, and I do not want to be dishonest by accepting a paper that I cannot certify. I am a great mathematician, and I am only here to say whether this result is important or original, not to check the details. Therefore, I will only accept to review papers which are already certified to be correct by http://www.mypreprintcontainsnomistakes.com. Then it’ll only take me half an hour, and not 12 months, to review one.”

But, hey, that’s not free! For one article submitted, I will have to review 5 articles! Why 5? Simply because 4 people are reviewing my own paper.

5 − 4 = 1: the remaining one is the Fields medalist (or established specialist) who will not review the paper but will handle the reviewing. It is he who will scatter the 25 deliberate mistakes through the TeX file, in order to be sure that the would-be-submitter reviewers are not just pretending to do their job by saying, “I’m fine with this article :-)”.
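As a toy sketch of this planted-mistakes check (everything here, from the function name to the 80% threshold, is my own invention, not part of the comment above): the handler records where the deliberate errors were inserted, and a reviewer’s report passes only if it flags some minimum fraction of them.

```python
def review_passes(planted, flagged, threshold=0.8):
    """Check a reviewer's diligence against planted mistakes.

    planted: set of locations (e.g. line numbers) where the handler
             inserted deliberate errors into the TeX file.
    flagged: set of locations the reviewer reported as erroneous.
    The reviewer passes if they caught at least `threshold` of the
    planted errors; extra (genuine) findings are never penalized.
    """
    if not planted:
        return True
    caught = len(planted & flagged)
    return caught / len(planted) >= threshold
```

One design point worth noting: only the intersection with the planted set is measured, so a reviewer who also reports genuine mistakes is never punished for it.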

These masters of the game (the specialists who handle the reviewing) also earn, in the meanwhile, their credits towards getting their own papers reviewed for certification.

The side benefits: people will tend to publish less, but of better quality, in order to avoid this time-consuming process. The papers which are certified, but not accepted in big journals, are nevertheless accessible to the public, with a guarantee of the validity of the results. As already mentioned, there is much less work for the referees of journals. And since it is more difficult to let your friends’ articles pass if three other anonymous people have to certify the paper (at the risk of losing your own points because you didn’t point out the 25 mistakes), there will be fewer cases of “this paper shouldn’t have been accepted”.
And last but not least: the more the publishers externalize their added value, the easier it will be to overtake them.

A small additional remark here is that the relationship between judging quality and judging correctness could go in one of two ways. One could either judge the quality and say, “I accept, conditional on the paper being certified correct at ismypapercorrect.com,” or one could, as you suggest, refuse even to look at the paper until its (probable) correctness had been certified.

I also had a rather depressing thought, which is that if you know that there is a website that’s explicitly devoted to judging correctness, and that people are in some sense forced to do the work reasonably well, then there would be a temptation to submit somewhat sloppily written papers to that site. To guard against that, I think that reviewers should have the right to assess how conscientious authors are, so that authors could gain a reputation for producing carefully written papers that were not a nightmare to review.

I completely agree with what you suggest in your second point: the number of referees needed to certify your paper should depend on the degree of correctness you have shown in your previous submissions:

If you have previously submitted very good papers (i.e. with almost no mistakes), then you should be assigned fewer reviewers to certify your future papers. In particular, because of that you will have to referee fewer papers to “pay” for the certification of the paper currently under certification, and this should be a strong incentive to submit good papers.

(The number of papers you have to referee for a submission must equal the number of referees needed for your own paper, if one wants to have a zero-sum game, i.e. something workable.)
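The zero-sum condition in that parenthetical can be checked in a toy simulation (all parameters hypothetical): if every submission requires k referees, and the submitter must pay by performing k refereeing jobs, then credits earned and credits spent balance across the whole system.

```python
import itertools

def simulate(authors, papers_per_author, k):
    """Toy model of the zero-sum refereeing economy.

    Each author submits papers (each costing k credits) and referees
    papers by others (each job earning 1 credit). Referee jobs are
    assigned round-robin among authors, skipping the submitter.
    Returns the final credit balance per author.
    """
    balance = {a: 0 for a in authors}
    jobs = []
    for a in authors:
        for _ in range(papers_per_author):
            balance[a] -= k          # submission costs k credits
            jobs.extend([a] * k)     # and generates k referee jobs
    cycle = itertools.cycle(authors)
    for submitter in jobs:
        referee = next(cycle)
        while referee == submitter:  # nobody referees their own paper
            referee = next(cycle)
        balance[referee] += 1
    return balance
```

The balances always sum to zero precisely because the cost of a submission equals the number of referee jobs it creates; with any other ratio, credits would steadily inflate or drain away.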

Let me add that, in order to encourage reviewers to do their job properly, one could also make the number of refereed papers needed to “pay” for a submission depend not only on the papers they submit, but also on the quality of their work as referees.

Your first point is interesting, in particular as a way of introducing the use of ismypapercorrect.com, because otherwise a submitter could think: “What is the point of getting my paper certified in order to be able to submit it to the ‘annals of certified papers’, if I am not sure it will later be accepted by that journal? I would rather submit it to another journal which does not require certification.”

Once the use of certification becomes widespread, it could become mandatory.

Academia always used to be a gift economy; we respect the people who contribute the most, just as in the potlatch of the Pacific Indian tribes. Contribution can mean starting a new discipline, proving a key theorem, reviewing papers, acting as an external examiner, raising money to endow studentships, writing a popular book, going on a speaking tour round schools… and most of us try to do several of these things. When assessing people for tenure or promotion, we weigh the whole contribution.

The gift-economy aspects of publishing are now breaking down and we’re getting spammed out. We get dozens of emails every week, particularly from Elsevier journals, demanding that we referee papers, often in fields in which we have little interest. The request doesn’t come from someone we know any more, but from a secretary’s secretary. We can’t submit a free-form report but have to give a paper umpteen different numerical scores. Frankly it’s too much bother. So the response rate drops, and the spam volume mounts.

What can we do? The old model has been trashed by the rapid consolidation in the industry, which has stripped out costs and discarded the personal touch. At the same time it’s charging our universities a fortune for material we produced ourselves.

One way forward is what the medical journals have done; when I used to referee papers for the likes of the BMJ they used to pay me. Tim, if you want to pay reviewers, then if the journal is for-profit you should pay them in folding cash money.

Another is to take the open knowledge route, as exemplified by the Wellcome Trust’s decision that all its funded research must be made available to all online. Harvard has taken a similar view.

I’ve plumped for open knowledge. I no longer review for publications that don’t make their content available to all without charge.

arxiv.org’s weakness is that it hasn’t admitted that it rejected a poster-margin paper; however, it is now doing a better job of spotting copyright infringements than many journals, e.g., the admin note at http://arxiv.org/abs/1112.5501

@Yael
People like “C” and “C++”, so the hope is that it will be the same with arXiv and arXiv++ :)

@Ralph

Sorry, I did not quite catch what you mean.
I do not see much wrong with that paper, at least looking only at the “admin notes”: it seems the author first proved the result for R^6, then generalized it to R^{2n}, and the date difference is 4 years…

Wiki and MO are amusing examples of how great things can be created with nothing but “good will” and rewards in the form of some kind of “visibility” or “fame”.

I think these motivational resources would not be enough for this project, but before discussing other motivations it is, imho, necessary to understand how much extra motivation we need. Maybe not so much?

PS
An addition to my previous post about motivations.
I forgot another one, which also comes for free and is related to “good will”:

Third motivation: we might want to see that what we are doing is at least somewhat useful. And the internet provides an inaccurate but easy way to see this: various counters of how many times your page has been visited (this works on Wiki), how many upvotes you get on MO, etc.
Small but pleasant things :)

Example. When I wrote a page for Wiki I thought it would be visited several times per year (because I know more or less everyone who is interested in the subject), but unexpectedly it is visited several times per day. This makes me a little happy, and quite puzzled about who might be visiting it.

That simple depressing idea did not come to my mind :) although it should have. Probably some robots visit it; it does not matter…

In a post above you mentioned that if this were to be implemented you might help with the programming.
Do you have ideas about how it should be programmed? Could one take some ready-made solution and upgrade it? What are the ideas?

Gowers’s proposal has an enormous number of virtues (e.g., one referees what one is reading anyway). But to give it teeth, it would help if some journals used the refereeing on howsmypreprint as a partial basis for publication – it would be up to the editors to decide on the value of the comments received, and whether they suffice to make a decision or whether a conventional referee report is needed. This could run in parallel with the existing system.

[…] saying about the review process (my summary with links is here). Tim Gowers, for example, has been talking of a possible website where papers could be deposited as drafts (arXiv style) and then people be […]

I like the proposal very much. I would also like to point out that things could be raised to a higher level in the referee/author correspondence within the journal refereeing and revision system as well.

As a referee, I often see ways to improve a paper before I have read it completely. If I send these as a response, they will not be delivered to the author until the other referees respond, so there is no use in giving such remarks early. But those remarks, if received early, could give the author the opportunity to make the changes faster, and would also enable me to read the rest of the paper in a more readable form to start with. Also, if I send mostly expositional requests and smaller remarks to the author, and want to postpone more serious reading until those requests are met, some journals do not allow me the option of asking for a major revision afterwards, because I did not ask for a major revision in the first round! This is ridiculous: whatever is needed to raise the quality to an acceptable level, one should be able to suggest it at ANY stage of the process. A referee often wants to help the author improve the paper, but with only one or two iterations of communication allowed this is often hopeless.

From the point of view of an author, a small communication with the referee is also often needed: for example, if I do not understand some of the questions raised, I may want to ask for clarification of a point in the review. If this has to go through the editor, it may aggravate the editor, and one is cautious not to do that. So one is left having to answer the referee report with only partial understanding. Thus I think more informal, inter-stage communication should be allowed between the referee and the author, of course with the online system ensuring the anonymity of the referees, their independence/separation (so that one does not see what the others think), and control by the editor (who can always see the record of all communication).

[…] original (for example, I have been inspired by posts and related discussions as this one and its follow-up), and I have never taken any real action to see whether they could be tweaked and somehow […]