Across a wide range of scientific communities, there is growing interest in accelerating and improving the progress of scholarship by making the peer review process more open. Multiple new publication venues and services are arising, especially in the life sciences, but each represents a single point in the multi-dimensional landscape of paper and review access for authors, reviewers and readers.
In this paper, we introduce a vocabulary for describing the landscape of choices regarding open access, formal peer review, and public commentary. We argue that the opportunities and pitfalls of open peer review warrant experimentation in these dimensions, and discuss desiderata of a flexible system.
We close by describing OpenReview.net, our web-based system in which a small set of flexible primitives support a wide variety of peer review choices, and which provided the reviewing infrastructure for the 2013 International Conference on Learning Representations. We intend this software to enable trials of different policies, in order to help scientific communities explore open scholarship while addressing legitimate concerns regarding confidentiality, attribution, and bias.

This is an enlightening and very well written paper that helps to put open scholarship and reviewing in perspective, clarifying and discussing the choices that must be made, and introducing the basic ideas behind the openreview.net system and its pilot trial with ICLR 2013.
Among the concerns about our current reviewing and publishing practices in computer science (page 1, 2nd column, top paragraph), I would add that conference reviewing for large conferences such as NIPS and ICML is plagued by what I call the "reviewer crunch": the reviewing period for the conference becomes a bottleneck during which it is very difficult to allocate the appropriate reviewers to a paper because they are already too busy with other papers to review. As past program chair for such large conferences, I had the impression that this phenomenon seriously reduced the quality of the reviews, and hence not just the quality of the accepted papers, but also hurt the perception of fairness and credibility of the conference. This is in contrast with the reviewing process for journals, for which reviewing is distributed throughout the year.
I believe that this situation could be alleviated by an open reviewing system such as the one proposed here, combined with the proposed feature of hosting multiple reviewing entities that may pass judgement on the same paper. Indeed, I believe that the ideal situation would be that authors can submit their paper for review at any time during the year, as soon as the work is considered ready for reviewing (and initial publication!). The evaluation and reviewing work done, e.g., in the context of a journal, should then be available to a conference that chooses to present that paper for oral dissemination. Note that this proposal would also address the troubling issue of journal vs. conference publications: conference publication has come to be the dominant form in computer science, unlike in other disciplines such as the biological sciences, thus forming barriers to collaboration between disciplines. I believe that different versions of the same paper (along with the associated reviews, comments and responses) should be centralized (or overlayed) in one place, including a submitted version, a journal version or a conference version (which may be shorter for the sake of quick reviewing). Different reviewing entities, if they agree to collaborate in this respect, should be able to share reviews and reviewer identities, to make this not only possible but reviewing-efficient. Program chairs could then decide to either accept an already reviewed paper or request additional reviews, based on the reviews and comments from the authors and the public.
Please briefly explain what an "overlay journal" is.
On page 1, 2nd column, 2nd paragraph: I did not understand "to protect readers from wasting time on bad papers".
On page 3, 2nd column, bottom bullet: why would authors pay to allow reviews to be ported across reviewing entities?
At top of page 6, regarding "portable peer review": I think this is a very important feature (see my above rant) but I doubt that openreview.net will be the only system in town. Therefore, it would be important to work towards interoperability of different open reviewing systems and to define message-passing conventions between them, maybe based on the concepts introduced in this nice paper.
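To make this concrete, here is a minimal sketch of what a portable peer-review record exchanged between systems might look like. This is entirely hypothetical: no such interchange convention exists yet, and every field name here is an illustrative assumption, not part of any real system or of the paper under review.

```python
import json

# Hypothetical sketch of a portable review record that two open reviewing
# systems might exchange. All field names are illustrative assumptions;
# no such interchange convention has been standardized.
def make_review_record(paper_id, venue, rating, text, reviewer_id=None):
    """Bundle one review into a plain dict ready for JSON transport.

    reviewer_id defaults to None so that anonymity policies can be
    preserved when reviews are ported between venues.
    """
    return {
        "paper_id": paper_id,        # stable identifier, e.g. a DOI or arXiv id
        "venue": venue,              # the reviewing entity that produced the review
        "rating": rating,            # venue-specific score
        "text": text,                # the review body
        "reviewer_id": reviewer_id,  # None when the reviewer remains anonymous
    }

record = make_review_record("arXiv:1234.5678", "ICLR 2013", 7, "Solid contribution.")
print(json.dumps(record, indent=2))
```

A receiving venue could then decide, from such a record alone, whether to accept the existing reviews or to request additional ones, as suggested above.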
Section 5, 4th bullet: weren't the reviewer identities also visible to the program chairs?
Section 6, regarding the replacement of arXiv's functionality inside openreview.net: Are revisions handled in the same flexible way as on arXiv? Can authors who do want to use arXiv still use it without having to duplicate the effort of propagating revisions?
Typos:
- duplicated 'of' in {of of a paper}
- insert 'as' in {user's a "todo" list}

Thanks for the comments! We fixed all of the minor issues in the current revision.
First off, regarding revisions: revision tracking will be available in the next release of OpenReview.net, associated with our local hosting of the PDFs. The idea of tracking arXiv papers is a good one-- we had implemented that when the PDF hosting was exclusively on arXiv, but hadn't thought of giving authors the option to use both systems in parallel with automatic synchronization. I've added a ticket for this request but can't predict when it will get done.
We have been discussing the benefits of hosting multiple venues together, to allow for portable peer review and for multiple endorsements of the same paper by different venues. However, it is a really important insight that this feature can be leveraged to bridge the journal/conference divide-- I for one had not fully appreciated this angle. I think it would be hugely beneficial to help clarify and ideally unify what constitutes "publication" in different fields, and to remove arbitrary barriers to collaboration when there is a mismatch. This can also be an issue within a field: for instance, the lack of a more formal "publication" in the form of a citable Proceedings was an issue brought up by several respondents to the ICLR 2013 survey. A related question is the relative weighting of workshop vs. conference papers; this seems to me entirely murky. Another tricky issue is that many venues won't consider work that has been previously published, which makes it all the more important to say exactly what "publication" is.

This is a great paper. Many people have discussed the need to modernize peer review. It is nice to see the authors taking this seriously and doing something about it. The development of openreview.net is a very important milestone.
I agree with the authors that separating dissemination from evaluation is key.
I don't really have any specific comments on the manuscript; I just wanted to add my support for these ideas.

I appreciate the clear descriptions of the dimensions, vocabulary, and cultural views of open scholarship. Seeing the key trends and concerns laid out before the discussion of the system has been very helpful to me as a social psychologist. I expect the paper will be referenced frequently because it provides a framework to guide researchers and practitioners across many disciplines.
My comments are simply thoughts and questions I had as I read the paper. I imagine the authors have actively considered these points in their work. My intent is to continue discussion in this forum about methods for understanding the effects of using open review processes and tools on the development of scientific communities and on accelerating collaboration and discovery.
The authors' persuasive call for open experimentation makes me want to know how the platform can enable and encourage communities to gather and share data on outcomes from the different possible configurations of the system, both within a community that is testing multiple methods and across communities.
From the discussion of ICLR 2013, it looks like the system records usage stats on things like numbers of papers, responses, anonymous responses, reviewer follow-ups, etc. Is there an option to share this usage data within a community as it is emerging? How might sharing periodic aggregate activity data within a community (in addition to the email about new activity on past papers) encourage greater discussion? Some groups may even choose to set community goals (#'s of papers, #'s of comments, contests for most "liked" comments)...and could use a tool for reporting progress. What types of in-progress feedback will deepen collaboration and discovery?
Beyond the pilot, will the tool also provide ways for users to follow up with their communities on the perceptions of the process, possibly through a survey or a comment/blog process, after the decisions have been made, as done after the ICLR 2013 conference? Asking people to reflect on the process is one way to help people recognize that many fears in anticipation don't turn out to be scary in practice. I would hope most communities will design their own feedback process, but incorporating options in the tool itself could accelerate our larger community's understanding of attitude change and intended/unintended consequences over time.
How about an option to have an ongoing open comment section about the review process as the process is unfolding? Groups who talk about the group process tend to develop more explicit norms - like, "participation through commenting is really valuable so let's do it," or "we should watch out for dismissing papers that are written in English as a second language," or "wow, people are posting some really new, interesting stuff..." How can tools be designed to accelerate the achievement of the wider scientific community-building objectives of the group?
What do adopters of the system really want to know / really want to create by using it? What do we as a community experimenting with the social and scientific benefits of transparent decision processes want to know and create? What other data do we want to collect? Pairing the system's user-configurable review policies with user-configurable feedback and group-development tools is a potential next step. Eager to hear everyone's thoughts.
These are a few examples of the wider question: given that the platform is designed to encourage customized experimentation with open review, how is it also designed to encourage customized learning from the process?
The OpenReview.net system allows for systematic exploration of powerful variables. It is also accelerating social change. Pretty exciting.

Thanks for your comments! We absolutely agree that collecting quantitative data on the community response is key to evaluating different policies, describing their effects, and optimizing them for different communities. I like the idea of providing continuous aggregate metrics for a venue, though I think that will be largely informational-- i.e. I doubt that anyone would be motivated to contribute to a conversation in order to meet a # of comments goal, but I could be wrong. Tracking "liked" comments and other measures of the impact of a paper, comment, author, or venue is certainly part of the long-term vision. I think users will be very interested in seeing such impact measures, and they are more likely to be motivated by personal reputation scores (as have been used to great effect on sites like StackOverflow).
I think the ICLR 2013 survey was really enlightening; thanks again for your help with that! Administering such formal surveys on an ongoing basis to participants in many conferences may not be sustainable. Users will burn out on them, and the IRB restrictions may make it difficult to report continuously updating aggregate results--though that would be interesting and I can look into it.
It will be much easier, and perhaps more effective, to create a public comment area for every conference where reviewing processes (and perhaps other matters regarding conference organization) can be discussed. This could be a bit tricky, though, in that we might not want to artificially segregate discussions regarding different venues when they may have identical policies. We already have a ticket about creating a public web forum for discussing these issues generally (i.e., not associated with any venue).

Just two remarks about the described tool openreview.net:
* How is the openreview.net software licensed? Let me suggest publishing the source code under the GNU AGPLv3, in the spirit of open science. This might start a community of developers and scientists collaborating to improve the tool.
* It would be very useful for papers to contain a link to their openreview.net page, to stimulate post-publication reviews. (E.g. I only found *this page* by chance.)

Thanks for your feedback!
Regarding open source: the code is closed for now, because it is at an early stage of development, rapidly changing, and is just not ready for any public examination or contributions. For instance: due to the sensitivity of the contents of an OpenReview installation (e.g., identities of anonymized persons), we need to be very careful not to inadvertently reveal any security-compromising information as part of the source code.
We are certainly open to releasing the code in the future when it is more stable and after a thorough security audit, but even then there will be several tricky issues to consider. For instance, it would be highly beneficial for users to log in to a single system that hosts many venues, at least within a given scientific discipline. That allows a unified todo list (e.g., when submitting to and reviewing for multiple conferences and journals), and provides for "portable peer review". This goal may be undermined if many independent installations spring up, at least until such time as a standard interchange format is developed as Yoshua suggests above. Relatedly, if anything goes wrong on some third-party installation--such as data loss, a security breach, excessive downtime, or failure to maintain long-term archives of accepted papers--we would not want that to reflect negatively on the OpenReview system.
We entirely agree that papers should have links back to their discussion pages here, at least by assigning DOIs and perhaps also through the CrossMark system, which can help alert readers to new revisions of a paper. These are certainly on our todo list!