The problem with radical redesign is that it is hard to tell which change caused which effect.

I suggest that we as a community focus on one problem at a time. If we want to focus on multiple problems, maybe each conference should attack one at a time, so at least each variable can be tested separately.

Let's start with the problem of low quality reviews. Here is a modest initial proposal based on an economic model:

Each review should have two components: (1) a technical summary and feedback, and (2) a subjective evaluation wholly supported by the technical evaluation in (1).

The technical summary should be presented to the authors before decisions are made, and the authors will rate each review based on how well the reviewer understood the paper. So will other PC members (anonymously). The results will be used to rate PC members and reviewers and provide them with tokens.

PC members and reviewers will need to spend these tokens to get their own papers published at top conferences in the future. The monetary system will need to be worked out, but we can let junior researchers borrow tokens from the central bank at the start of their careers, so that the system does not harm them early on. Eventually, though, everyone has to pay in quality reviews for the papers they want to publish.
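To make the rating and borrowing mechanics concrete, here is a minimal sketch. The 1-5 rating scale, the token multiplier, and the "reviews repay the loan first" rule are my illustrative assumptions, not part of the proposal:

```python
def tokens_for_review(author_ratings, pc_ratings):
    """Average the (anonymous) ratings a review received from authors
    and PC members, and award tokens proportionally.
    The 1-5 scale and the x2 multiplier are illustrative."""
    ratings = list(author_ratings) + list(pc_ratings)
    if not ratings:
        return 0
    avg = sum(ratings) / len(ratings)
    return round(2 * avg)

class Account:
    """A researcher's token balance. Junior researchers may start with
    a loan from the 'central bank'; earned tokens pay down the loan
    before adding to the spendable balance (an illustrative rule)."""

    def __init__(self, loan=0):
        self.balance = loan   # tokens available to spend on submissions
        self.debt = loan      # outstanding loan to repay via reviews

    def earn(self, tokens):
        repay = min(self.debt, tokens)
        self.debt -= repay
        self.balance += tokens - repay

    def spend(self, cost):
        if self.balance < cost:
            return False      # not enough tokens to submit
        self.balance -= cost
        return True
```

So a junior researcher can submit on borrowed tokens right away, but their first well-rated reviews go toward repaying the loan rather than funding further submissions.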

These are initial thoughts, and the proposal should certainly be refined to address potential abuses. For example, the technical part of the review should be devoid of all subjective opinions and hidden praise, so that reviewers are not tempted to flatter the authors in order to earn tokens. Also, feedback from authors of papers in the bottom 33% should probably not count toward awarding tokens.

From my past experience with conferences that use two-phase review (rebuttal, or whatever you want to call it): it rarely changes anything in the final program. In other words, it's a lot of overhead for very little impact on the outcome of the review process.

Thanks for the insight! My question would be a different one, though: does rebuttal/rebattle [1] change something from the perspective of the /reviewers/? Do you write your review more carefully (knowing it could be questioned) or not?

If the former, I guess it's worth the overhead. If not, you are right: it's not worth the bother.

Well, in all venues where I encountered the two-stage review system, I did not alter my first-round reviews.

What changed (observing other reviews, not mine) was:

1. the discussion was a bit longer (to give the authors time to read the first round of reviews, and to respond to them).
2. some small mistakes, which would usually be purged during the discussion phase, were purged by the authors instead. Nothing too big slipped through anyway.
3. some authors felt they were heard (usually those who fixed small mistakes in the reviews), and some authors felt they were ignored twice.

All in all, the gain for the review process was marginal (again, in my experience), and in some cases negative: the extra deadline sometimes prevented people from "missing" the end-of-review/beginning-of-discussion deadline with a real review, so they had to submit a very short review and work on the full review afterwards.

Actually, what I was proposing is largely orthogonal to current "two-stage" review systems.

My point was to have a system where authors and fellow PC members review the reviewers. Furthermore, this review would cause bad reviewers to lose the right to publish their own work at future top conferences.

This would create (I think) a powerful incentive for reviewers to spend the time to craft better reviews -- at the very least, to better understand, technically, what is going on in a paper they are supposed to be reviewing.

--

Finally, coming back to the points raised in this thread about multi-stage reviews: At TCC 2013 this year, we tried out a system which allowed for *freeform* interaction between PC members and authors (i.e. a "poly-stage" review process). In my opinion as the PC chair with a global view of what happened, this interaction was extremely helpful, especially with papers that were "on the edge", or were misunderstood during the review process.

No, there would be nothing special about serving on PCs. Every member of the community would need to "earn" the right to submit papers by providing good reviews. Serving on a PC would be an opportunity to earn a lot of credits toward submitting and publishing papers.

Junior researchers could "borrow" credits at the start of their careers to submit papers despite not having had an opportunity to review other papers. We would need to ensure an equitable way to provide opportunities for everyone who is willing and able to review papers. (This would increase the pool of able people who are willing to provide quality reviews.)

Does this clarify the basic idea of the proposal? As I wrote in the first post, I think the idea would need substantial refinement before it could be deployed.

"Re" the question about "first submissions". There is a quite obvious solution: Give everybody some credit to begin with [1]. Let's say everybody gets 100 credits as a freshman. Every submission to proc/IACR costs 20 credits (in total - to be split among the authors), every review earns 10 credits for a good review, 5 credits for a so-so review and 0 credits for a bad review. You are blocked after 3 bad reviews in a row (meaning there is some time in between these reviews). Being in a PC earns 35 credits, plus 2 credits for each good sub-review. If it was a self-review, the rules from above apply. So everybody can submit 5 papers without having reviewed anything so far. As soon as you are in a PC, you earn so much credits that you don't really have to worry about submitting papers anymore. Everybody else who behaves "nicely" get enough credits to have the right to submit.

Problem: how do people get requests for reviews? I got my first review requests from my supervisor (Bart Preneel), and it took three years before somebody else from the community contacted me. So a lonely Ph.D. student somewhere out in the wild, wild world can submit 5 papers and that's it, as nobody will request reviews from them early enough.

Hence, if we were to adopt this system, we would need a mechanism ensuring that people who have already submitted papers (and received good marks; maybe not "accept", but certainly something better than always "strong reject") get requests for reviews. It's certainly manageable, but it does introduce more constraints for the editors.

All in all, I haven't made up my mind yet. It's certainly a completely different proposal for dealing with the review problem (turn authors into reviewers :-) ), but it has subtle points that need to be addressed first.

It could work at a large scale (e.g. proc/IACR), but I do not think it can be tested with only one conference.

Best,
Christopher

[1] The idea is borrowed from an SF novel about a society where "Social Darwinism" was introduced: everybody can only consume as much money for health care or "lethal situations" as s/he previously earned. No money can be transferred between people, including spouses, parents <-> children, .... An obvious problem is babies, who would die rather quickly without health care. In the novel, they get 500 credits upfront: enough for normal babies, but a problem for babies born with diseases. A frightening idea - but a very good novel :-) I couldn't track down the author / title again, though :-(

The way I imagined it is that this would be a (somewhat) actively managed process.

There would be a "central bank" that is responsible for managing the currency. It would have the dual mandate of:
1) Ensuring incentives for high-quality reviews
2) Full employment for crypto-researchers

The central bank could intervene from time to time, especially for researchers working in smaller sub-areas, to ensure that researchers have the credits they need for their research to get published. At the same time, it should not intervene too much, lest any researcher get the impression that they can avoid providing high-quality reviews and still demand high-quality reviews for their own submissions.

But in general, I think the idea of having a large pool of willing volunteers waiting to get opportunities to provide high quality reviews to PCs can only help. I would generally love to have a (long) list of willing expert sub-reviewers when I serve on PCs.