Pages

Sunday, April 04, 2010

Peer Review VI

Peer review is at the heart of knowledge discovery. It is thus no surprise that it is frequently a subject of discussion on this and other science blogs. The other day I read a great post by Cameron Neylon, Peer Review: What is it good for? It occurred to me that instead of writing the 124th comment there, it might be more fruitful to discuss here what I've - oddly enough - repeatedly suggested on others' blogs, but evidently not on my own.

To first make sure we're on the same page, here's how peer review works today. You write a manuscript and submit it to a journal, where it is assigned to an editor. Exactly what happens then depends somewhat on the journal. In some cases the editors sort out a lot of submissions already at this stage. At very high impact journals like Science and Nature, editors reject the vast majority of papers not because they're wrong but because they're not deemed relevant enough, but these journals are exceptions. In the general case, your manuscript will be assigned a number and be sent to one or several of your colleagues who (are thought to) work on related topics and who are asked to write a report on your paper. That's the peer review.

The assignment of reviewers to your paper doesn't always work too well, so they have a chance to decline and suggest somebody else. At least in my experience, in most cases the manuscript will be sent to two people. Some journals settle on one, some do three. In some cases they might opt for an additional report later on. In any case, the reviewers are typically asked for a written assessment of the manuscript; that's the most important part. They are also asked whether the manuscript is suitable for that particular journal. And typically they'll have to judge the manuscript on a five-point scale for how interesting the content is, how good the presentation is, and so on. Finally, they'll have to make a recommendation on whether the paper is suitable for publication in its present form, whether revisions should be done or have to be done, or whether it's not suitable for publication. The reviewer's judgement is anonymous: the author does not generally know who they are. (That's the default. There is in principle nothing prohibiting the reviewer from contacting the author directly, but few do that. And in some cases it's just easy to guess who wrote a report.)

In most cases the report will say revisions have to be done. You then revise the paper, reply to the queries, and resubmit. The exchange is mediated by the editor and can go back and forth several times, though most journals limit this process to a few resubmissions. The whole procedure can take anything from a few weeks to several years. In the end your paper might get rejected anyway. While in some cases one learns something from the reports, the process is mostly time-consuming and frustrating. The final decision on your manuscript is made by the editor, who will generally, but not always, follow the advice of the reports.

On the practical side, manuscript and review submission is typically done through a web interface. Every publisher has their own labyrinth of usernames, passwords, access codes, manuscript numbers, article-tracking numbers, and other numbers. The most confusing part for me is that some journals assign different author and reviewer accounts to the same person. I typically write reports on one or two papers per month. That's more than ten times the number of manuscripts I submit. There are very few journals that pay reviewers for their reports; JHEP announced a while back that they would. Most people do it as a community service.

There are several well-known problems with this procedure. Most commonly bemoaned is that since the reviewers are anonymous, they might abuse the report to their own advantage. That might take the form of recommending the author cite the reviewer's papers (you're well advised to do that). In the worst case they might reject a paper not because they found a mistake but because it's incompatible with their own convictions. One can decline to review a paper in such a case due to "conflict of interest," but we all have conflicting interests; otherwise we'd all be doing the same thing. What happens more often, though, is that the reviewer didn't read, or at least didn't think about, the paper and writes a sloppy report that might mistakenly accept a flawed paper or reject a sound one.

I have written many times that the way to address these community-based problems - to the extent it's possible - is to make sure researchers' sole concern is the advancement of science, free from the sorts of financial pressure, public pressure, or time pressure that might make it seem smart to down-thumb competitors. But that's not the aspect I want to focus on here.

Instead I want to focus here on the practical problems. One is that the exchange between the author and the reviewers is unnecessarily slow and cumbersome. Submitting a written report and having it passed on by the editor must be a relic from the times when such a report was carved in stone and shipped across the ocean. It would be vastly preferable if journals instead provided an online interface for communicating with the author that preserves the anonymity of the reviewer. This would allow the reviewer to ask quick, clarifying questions, which would not only make their task easier but also prevent misunderstandings. Typically, if the reviewer has misunderstood something in your manuscript, you have to do a lot of tip-toeing to tell them they're wrong without telling them they're wrong because, you see, the editor is reading every word and knows who they are. The idea isn't that such a communication interface should replace the report, just that it could accompany the review process, with a written report submitted after some determined amount of time.

Another practical problem with today's peer review process is that when your paper gets rejected and you submit it to another journal, you have to start all over again. This multiplies efforts on both the authors' and the reviewers' sides. This is one of the reasons why I'm suggesting decoupling the peer review process from the journals. The idea is simply that the author would submit their manuscript not for publication to a journal, but first to some independent agency to obtain a report (or several). This report would then be submitted with the manuscript to the journal. The journal would still have to assess whether the manuscript is suitable for that particular journal, but the point is that the author has a stamp of legitimacy independent of a journal reference. It's up to the author then what to do with the report. You could, for example, just use the report (in some summarized form) with an arXiv upload.
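To make the decoupling idea concrete, here is a minimal sketch in Python of the report as a portable object an author could carry from agency to journal to arXiv. All class and field names here are hypothetical illustrations, not a real standard or API.

```python
from dataclasses import dataclass

@dataclass
class ReviewReport:
    """A journal-independent peer review report (hypothetical sketch)."""
    manuscript_id: str      # e.g. an arXiv identifier
    agency: str             # the independent agency that issued the report
    assessment: str         # the written assessment, the most important part
    recommendation: str     # e.g. "publishable", "minor revisions", "reject"
    anonymous: bool = True  # agencies could differ on this

    def stamp(self) -> str:
        """A one-line summary an author could attach to a preprint."""
        return f"{self.agency}: {self.recommendation} ({self.manuscript_id})"

# The author obtains the report first, then carries it to any journal,
# or simply attaches the stamp to an arXiv upload.
report = ReviewReport(
    manuscript_id="arXiv:1004.0000",
    agency="Hypothetical Review Agency",
    assessment="Methods are sound; conclusions are supported.",
    recommendation="minor revisions",
)
print(report.stamp())
```

The point of the sketch is only that the report belongs to the author, not to any one journal, so a rejection doesn't discard the reviewing work.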

There could be several such peer review agencies, and they might operate differently. E.g., some might pay the reviewer, some might not. Some might ask for a fee, some might not. One would see which works best. In the long run I would expect the reports' relevance to depend on the reputation of the agency.

This would address several problems at once. One is the "power of journals" frequently criticized by the open-access movement. That's because in many, if not most, fields a journal reference is still considered a sign of quality of your work. But journal subscriptions are very costly, and thus the pressure to publish in journals often means scientific studies are not freely accessible to the public. The reason is simply that today subscription journals are the main providers of peer review, but there is actually no reason for this connection. Decoupling publication from the review process would remove that problem. It would also address the above-mentioned problem that when your manuscript gets rejected from one journal, you have to start the peer review all over again. Finally, it would add more diversity to the procedure in that the reviewers and authors could choose which agency suits them best. For example, some might have an anonymous, and some an open, review process. Time would tell which in the end is more credible.

There is the question of financing of course. Again, there could be several models for that. As I've said many times before, I think if it's a public service, it should be financed like a public service. That is to say, since peer review is so essential to scientific progress, such peer review agencies should be eligible to apply for governmental funding. One would hope though that, as today, most of the time and effort would be carried by the researchers themselves as a community service.

63 comments:

Another nice piece to add to the series you have regarding peer review, which in this case deals more with the efficiency of the process rather than its fairness or any particular work's general relevancy. To be honest, I don't quite understand how it would happen that a researcher would send their paper to an inappropriate journal to begin with; if that were the case, wouldn't that be more reflective of the author's understanding of the significance of their work than anything else?

I've probably got it wrong, yet as I understand it, in many cases the reason rejected papers are then sent to another journal is that there is a pecking order or ranking among the journals themselves, and even competition amongst them for the better and more significant papers and authors. I was just wondering, then, how your proposal would impact this so as to be an improvement over the current system.

Hi Bee, you write: This is one of the reasons why I'm suggesting to decouple the peer review process from the journals. The idea is simply that the author would submit their manuscript not for publication to a journal, but first to some independent agency to obtain a report (or several). This report would then be submitted with the manuscript to the journal. The journal will still have to assess whether the manuscript is suitable for this particular journal, but the point is that the author has a stamp of legitimacy independent from a journal reference. It's up to the author then what to do with the report. You could for example just use the report (in some summarized form) with an arxiv upload.

If the arXiv provided a way to attach a "quality" classification to papers, and if there were a peer-reviewing agency that provided such quality reports, then the publishing industry instantly dies.

It sometimes happens, for example, that a paper submitted to a theoretical physics journal would actually fit better in a mathematical physics journal. Most publishers have many journals, so they'll suggest transferring the manuscript. That's one example. Another is that some journals have different sorts of articles: research articles, review articles, notes, comments, so-called "topical reviews," etc. It isn't always so clear which paper belongs where. Another typical example is Physical Review Letters. To get a paper published there you need to explain why it qualifies as a "rapid communication." Lacking that, your paper might be published as a regular article in PRD, which is an excellent journal but doesn't have the same impact.

I don't think that my proposal would change very much about what you call "pecking order." I can't see how. Best,

I don't think so. First, not every field has something like an arXiv. Then, there's all the old-fashioned people like me who want to have their work printed (dead tree format!). I would still try to get my papers published. But maybe more importantly, journals play a more essential role for the communities than you make it seem. They reference, edit, filter and order content, some journals (especially the high impact ones) do PR work for their authors. Papers published in a journal are usually easier to read because they have certain standard notations, somebody has checked the references etc etc. It is probably true that some of the smaller, more or less irrelevant journals would die. But well, that's natural selection. Best,

I'm not sure how much more we can ask of governments in terms of funding. Any monies sent to peer review agencies could be money spent on actual research itself. Then who controls the peer review agencies? Whoever supplies the money for them, that's who. I'm not saying it's a bad idea, Bee, in fact I like it very much. I just foresee a bunch of complaints and potentially unending debates about the specific structure.

Wait a minute, I know and Eureka .... let's hold a conference on how best to set these things up. Aruba, anyone?

Seriously, peer review is dissolving (and in strange ways) thanks to technology. A single arXiv pre-print by Verlinde gets the world talking. Hawking said in one of his books that by 2008 or 2015 (I forget which) there would be one scientific paper written per minute. Thanks to the Chinese, make that per second. Who has the time to keep up? In short, there's a bottleneck.

In conclusion, you describe an important problem and present a decent solution, but the implementation will get hairy. Good luck and thanks for raising the issue. Meanwhile, I will waste the next 10 minutes of my life reading Lubos' (recent) review of Lisi's E8=T.O.E. theory. Everyone stares at a train wreck. :-p

I believe what Hawking was referring to was the number of papers generally classified as scientific across all categories and specialties. In the end, however, the vast majority of papers in the theoretical physics domain are exploratory or suggestive, being not much beyond well-defined hypotheses. Case in point: all those DSR papers written on the variable speed of light being nixed by one demonstrating them as irrelevant.

So I find theoretical research papers to be more the open dialogue of science, which should be encouraged to have even greater (qualified) transparency than it currently has. I think in part this is what Bee is in favour of and struggling to find a methodology for. I myself would like to see all prequalifying discussions such as Bee proposes be able to be monitored by anyone found generally qualified to do so, like simply having a doctorate in the subject.

I'm reminded of a debate I read about once between J.S. Bell and a colleague at CERN regarding the actions of SR in a specific instance, where the vast majority of the researchers polled agreed with the colleague's opinion, even though Bell was then able to show this to be wrong, and from first principles; which is somewhat shocking, as one would expect any of them to have understood it. This had me wondering afterwards how much of this misinformation within science permeates the discipline, being as great a roadblock to its progress as anything else. That's why I would agree that, in the more general sense, transparency should be increased and broadened to serve as a way of helping in this regard.

Having been around here for as long as I have, I can see how it might feel "demonstrated" as to how one may place trust "in the system."

How it can work when placed in another's hands, and how one might feel if one's future was perceived as well from that standpoint.

In this, it may be about the idea of gender advances, which can be about issues of equality?

I mean, you're talking about "the motivation" as well as what you might want changed?

In Peter's case, and closely tied to the maths (Tegmarkian for you), we see where you might have exceptions about mathematics at the basis of the universe, while delving into the physics and its truth, as can be understood from this historical standpoint?

Understanding where you have been and what you see in the system can greatly enhance the roads you have walked while not being fully aware of the method by such advance.

It had indeed been current in your own learning curve?

Of course this is what I see; while not fully constructive of the larger picture of the methodology, it does put before me the idea that public relations are to be maintained, and paramount to the advance of culture by these close ties between discussion and dialogue about the advances in science being timely and current.

Truth seekers generally do wade through "all the information" to get to the source.:)

So there is "one's own standard" by which one sets in stone a path by which all shall walk after?

Thanks for your explanation; as I suspected, there is more to it than I realized. Nonetheless, I don't find that you disagreed there is this ranking even among the journals themselves, and thus the significant benefit gained through such competition. So I would suspect that journals would still reject papers based solely on this, even with all the other criteria having been met.

This of course brings us back to the quality aspect of things, which seems to stubbornly resist having a predefined metric applied to it. So it's like asking what quality is, with only being able to answer what is good enough to be thought able to work and then demonstrated as such; which of course is what science itself is supposed to serve in accomplishing. Perhaps it all comes down to what Pirsig noted: first one has to be shown to care, and therein we ask ourselves how this is able to be discerned, or better still, encouraged.

My only experience in all this is from being involved in the manufacturing industry, where quality control and quality assurance are often confused as being one and the same. Quality control is something that can be given metrics, so that in the end one knows what the incidence of failure should be relative to the degree of inspection and the technical method(s) of assembly used.

However, quality assurance in the end can only be accomplished by all those involved in the process, as it relates to how much they care about the quality of what they produce; it can only be assured by them, rather than by anything applied before or after to measure success or failure. Now it's true caring can be encouraged by means of positive or negative motivation, yet unless one considers people the same as donkeys who respond only to the carrot and stick, who other than the individuals themselves can have quality assured? So my more general thought rests in how to increase the quality of individuals, as more fundamental than considering how to measure the quality of what it is we would have them produce.

Instead of having a new peer review board, can't we have standardized interfaces to do this work? E.g., the telecom companies of the world could not interoperate without standards. So if the [owners of the] journals can be convinced that standard information-interchange processes and formats would make the job of peer review easier, then they could set this up among themselves, just as the telecom companies set up standards bodies to figure out how to interoperate.
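A minimal sketch of what such a standard interchange format might look like, using an invented JSON layout (no such standard actually exists; all field names are illustrative): one journal exports a manuscript's full review history, and the next journal imports it instead of starting the reviewing from scratch.

```python
import json

# Hypothetical interchange record for one manuscript's review history.
review_record = {
    "manuscript_id": "MS-2010-0417",
    "rounds": [
        {
            "round": 1,
            "reports": [
                {"reviewer": "anonymous-1", "recommendation": "revise",
                 "comments": "Clarify the derivation in section 3."},
                {"reviewer": "anonymous-2", "recommendation": "accept",
                 "comments": "Minor typos only."},
            ],
            "author_response": "Section 3 rewritten; typos fixed.",
        },
    ],
}

# One journal exports the record...
payload = json.dumps(review_record, indent=2)
# ...and another journal imports it, review history intact.
restored = json.loads(payload)
print(restored["rounds"][0]["reports"][0]["recommendation"])
```

The design point is only that the review history travels with the manuscript, which is exactly what would make rejection-and-resubmission cheaper for everyone involved.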

Hi, I have read your post but not the comments. Broadly I agree with quite a lot of what you say, but... what journals don't use an online tracking/submission system these days? So far as I know, almost all the main publishers have these systems now. The idea of a peer-review system independent of the journals has been made before. I know it sounds sensible, but I don't see how it would work. For example, you get just as many complaints about grant or funding peer review, and that is decoupled from the journal process. As well as the considerable hurdle (which you mention) of cost and practicability (who operates it, etc.), it removes any sense of individuality of the journal - how can a journal apply its own criteria if some Politburo is deciding what it should publish? What you'd end up with is a vast vat of articles, with little sense of which are most significant - as this is provided, by and large, by the journal you publish in.

The best way to run a peer-review system is to have two or more independent peer reviewers who do not sign their reports. The authors have the chance to respond (and because the refs are not named, they cannot bully, charm or manipulate them) and ALL the refs are sent ALL the reports from the previous rounds, with the authors' response. If there is anything unfair about any of the reports, or some undeclared interest (in either direction), it will come out.

For papers it offers to publish, a journal could publish the refs' reports with the paper (again, no names necessary, because what is important is the arguments the referees make, not their names). This is resource-intensive for a high-quality journal that has several rounds of review: because the paper evolves so much in the process, the journal would have to publish each round of reports/responses and each version of the ms. This would be quite costly, and in these days of resistance to paying for what you read... would the community want this?

For papers that the journal cannot offer to publish, a manuscript transfer service, so that the refs' reports can be attached to the ms when it is submitted elsewhere, is worth exploring more broadly than just within one publisher. NPG does this, but so does a consortium of neuroscience journals that have different publishers. This process can cut out a lot of time for authors between having a ms returned from one journal and being published in another.

No, of course I didn't disagree that there is some ranking among journals. While I don't think the impact factor is, as an exact number, taken very seriously, there's no denying that every field has a few leading journals. There are also typically some less-known national journals that play a somewhat different role.

In any case, you are of course right that the big question here is one of quality. The thing is that an editor's decision to publish or reject a paper might be due to many reasons. These reasons might or might not agree with your criteria for quality, which I would argue is to some extent subjective, in particular when it comes to the question of how interesting a paper might be for the reader. I would then argue that decoupling the peer review, which provides an indispensable measure of such quality, from the further considerations that might play a role in publishing or not publishing a paper would make that process cleaner. Best,

I think you have misunderstood what I was suggesting. Nobody would dictate to a journal what to publish. You just would not get the reports on your manuscript via the journal. You would first get them from elsewhere; then they're yours. Then you would submit them with your manuscript to the journal for consideration.

As I wrote, the journals would still have to decide if your paper is suitable for that particular journal. But the point is that if the editor decides, for whatever reason, not to publish it, you still have a (hopefully good) report that you can either just use with some online database (that is now non-peer-reviewed), or use to submit the paper elsewhere.

I also don't know of any journals that don't use online systems these days. I suggested one could use these better by offering an online communication interface between the reviewer and the author that is not passed on, bit by bit, by the editor (though it would be accessible to the editor). Best,

What you suggest would also be an improvement, but I frankly think it would be even harder to realize than what I suggest. My suggestion has the advantage that you can try it without convincing publishers it's a good thing to do. You could simply start offering such a journal-independent peer review service. This would be of immediate use for now non-peer-reviewed open access journals or databases (like the arXiv), since authors could just add that report (instead of a journal reference). If these reports obtained some generally accepted level of reliability, you could just submit them to a journal with your manuscript. One would think that in the long run editors would just use these reports instead of requesting their own. If they find it necessary for one reason or another, however, nothing stops them from sending the paper out for an additional review. Best,

Yes, you are of course right. The question is, would people trust that system? The answer is, needless to say, that initially they would not. Trust can only be established over time. But look at the situation now: do you have the impression there is a lot of trust in the current peer review practice? I don't have that impression. Even my colleagues who use it all the time are cynical about its randomness. I think the change I have suggested would, over the course of time (say, a decade), be trusted by more people than the current system. Best,

I believe the future is in electronic repositories of scientific articles. Peer review should be handled by online communities of authors associated with such repositories.

Every author would have an account and the possibility to rate every paper in the repository, ranking them as crackpot, bad, erroneous, ok, good, brilliant, or other. Everyone would always be able to see who rated each paper and what their rating was.

The community could, in addition, collect membership fees and pay professional reviewers whose sole job would be to review papers and ensure a neutral point of view. The integrity of such reviewers would be evaluated by members. A few arbitrators could also be hired (or recruited from the community) to settle disputes. Those entering a dispute should be made to cover part (or all) of the cost.

Membership fees should probably depend on the amount of community work each member is doing; for example, the more reviewing one does, the less one pays. The quality of reviews could also be factored in.

All papers in such repository should be clearly labeled as to their content. This system should also be augmented by automatic classifiers based on keywords or something like that.

Full disclosure should make the system harder to game.

Authors should be encouraged to focus on quality of papers not quantity.
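The fully transparent scheme described above can be sketched as a toy model: every rating is tied to a named account, so anyone can inspect who rated what. All names, identifiers, and the rating categories below are taken from or invented for this illustration, not from any real system.

```python
from collections import Counter

RATINGS = ("crackpot", "bad", "erroneous", "ok", "good", "brilliant")

# (paper_id, rater) -> rating; the mapping itself is public, so every
# rating is attributable. Re-rating simply overwrites the old entry.
ratings = {}

def rate(paper_id, rater, rating):
    """Record a public, attributed rating of a paper."""
    if rating not in RATINGS:
        raise ValueError(f"unknown rating: {rating}")
    ratings[(paper_id, rater)] = rating

def tally(paper_id):
    """Public tally of all ratings for one paper."""
    return Counter(r for (p, _), r in ratings.items() if p == paper_id)

rate("paper-1", "alice", "good")
rate("paper-1", "bob", "brilliant")
rate("paper-1", "carol", "good")
print(tally("paper-1"))
```

Full disclosure is built in by construction: since `ratings` keys name the rater, gaming the tally means doing so under one's own name.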

Tspin: That's all very nice, but you're talking past reality. If you did it this way, the vast majority of papers would never get any rating. The papers that would get ratings would be those by already well-known people. Scientists are meanwhile infamous for their hesitation to publicly comment on other people's research articles online. (I just read another blogpost on this the other week. Will see if I can find it.)

Yes, full disclosure would make the system harder to game. I believe that's a good idea, but you have to face that at least today too many people wouldn't want to use it. That's why I'm saying introduce some of these peer review agencies, some could be anonymous, some not, and just see which one does better in the long run. Best,

Subjectiveness of position does not make for a "consistent measure of the quality of the work" - that is what I hear you saying.

At some point one cannot care about what people think; having this inner belief that you are living the truth as best you can, and living by the principles of science as you know them, the central core position is not going to waver much under the consternation "of the belief" that one's fate rests in other hands to which one has granted the power of existence.

This is indeed a difficult position "to believe in" if you believe the world of science is indeed the trade by which you earn your payment.

I have seen the differences in the way one can live while still pursuing the science that they hold dear to their heart while gaining payment from other means.

You've seen it as well.

So this brings us back to the trade upon which all your training has rested to seek support to pursue the science you have come to believe in shall provide the best truth so as to lead one to a better comprehension of the world around us.

Just clarifying "the situation a little more" as you had come to reveal it as I see it.

I agree that the system would need to motivate people to rate papers, but it's certainly doable. There are many ways in which it can be done:

1. First, there should be ratings, comments, and reviews. Ratings would be as easy as selecting a radio button, while comments and reviews would be more involved. That way even people with little time could at least rate the paper.

2. An account would be required to download or submit papers, so having to register would no longer be a barrier (and it's a huge disincentive now for commenting on papers, for example).

3. Financial incentive - for example, each time you download a paper you have an option: either rate the paper within, say, two weeks, or pay a small fee for a professional reviewer.

4. There would be a natural incentive to rate and review - it indirectly benefits you, since you take advantage of other people's ratings and reviews to filter papers.

5. Each paper you cite would automatically be rated by you as good unless you explicitly include a different rating in the citation. (A citation metric would also be available for each paper.)

6. Professional communities and universities could require their members and staff to rate a certain number of papers quarterly as a community service. Since the rating is transparent, one could easily verify whether it's done.

7. Graduate students could be required to review some papers.

8. As already said, there could be professional reviewers paid by the community.

9. External entities could offer paid filtering and reviewing services (as Faculty of 1000 does now, for example).

10. Finally, I believe that once the community gets used to rating and reviewing papers, it will become natural and the initial barriers will go away.

So while I agree that people would have to be motivated to rate and review papers (especially at first), I believe there are many ways in which it can be made to work, and eventually rating and reviewing should come naturally to those using the system.

Some good points here. However, unless I missed it, you left out a very important (to many of us) issue: the anonymity of the submitters, not of the reviewers. A lesser-known, and "hence" not very important, person will not be taken as seriously as a big name. People are like that; there's no point in pretending they aren't. It is especially hard if you are offering a challenging point.

Some journals offer the "anonymous review" option in that other sense of the term; it has likely saved many folks from rejection for inappropriate subconscious reasons. I wish I could find a link to a "sting" operation a while ago, where previously accepted papers were resubmitted to scientific journals under aliases (hence, as if by "nobodies") and many were rejected. Anyone know of it?

Peer review agencies are also a valid alternative, though I believe community peer review has some advantages: I think it better leverages the community, it could be faster, it offers the possibility of an efficient dialogue between researchers, and it offers a good way of assessing research impact - all reviews are transparent, located in one place, and available to all.

Another point is that I am not sure how one could force publishers to use peer review agencies if they fought to keep the status quo. The idea of online community peer review would just skip publishers altogether.

Peer review is a three-edged sword. Consistency "merely" requires scholarship. Defects get through, but retraction and frank scandal often (usually?) follow when the whole community looks.

Politics is not "merely." An Einstein or a Yang and Lee overturning a discipline is intolerable in the present day. There is nothing neutral or abstract about a referee hamstringing a competitor.

Risk is the third edge. Polywater, cold fusion, Podkletnov, the Fifth Force... were sensational but not malicious. They were bad observations repeatedly made by the rules. There is a Scylla-and-Charybdis incentive for a journal to seek sensationalism, especially if it cannot be gainsaid (dark matter). Experimentalists are deeply disincentivized (told to go to Hell) by grant funding and publication both. Nobody wants to hear that Millikan got the electron's charge wrong (a bad value for the viscosity of air; selective data exclusion) by way of a better measurement.

(An electron's charge is 1.602176487x10^(−19) C. Millikan reported 1.5924x10^(−19) C. The 0.6% divergence is more than five times Millikan's stated standard error.)
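A quick back-of-the-envelope check of that divergence figure, using only the two values quoted in the comment above (the script is purely illustrative):

```python
# Check the ~0.6% divergence between Millikan's reported electron
# charge and the modern CODATA value cited in the comment.
e_modern = 1.602176487e-19   # C, CODATA value quoted above
e_millikan = 1.5924e-19      # C, Millikan's reported value

divergence = (e_modern - e_millikan) / e_modern
print(f"divergence = {divergence:.2%}")  # prints: divergence = 0.61%
```

So the quoted 0.6% figure holds up; whether it exceeds five times Millikan's stated standard error depends on his error estimate, which is not given here.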

Bottomline: Before you ask for more scientific science metrics, deliver scientific evidence that the use of such metrics is beneficial for scientific progress to start with.

In Against Measure, you provide some foundational information there to support the extension of Peer Review "as a fundamental discrete question" as to providing "some measure" yet there is no qualifier that seems to have a sustainable picture?

By name then, shall the distinctions set you apart, author or subject, who is the peer review?

If the names are removed from any assumption as to bias, then who is to know who is commenting, and who shall not judge based upon the sector of knowledge being represented? No one knows the qualifier by name.

vast amounts of new data are available on scientific interactions thanks to the Internet

As if by example of some "first principle" this "id" that follows the researcher and peer review system, does not allow for any identification by name, but by the links as a url identifier that follows one's career?

In the spirit of impartiality and presence in the marketplace of ideas, the papers rest on their own merit and, as well, introduce new principles or thoughts to action that help to push knowledge forward.

I am very opposed to the idea of allowing people to "quick rate" a scientific paper by a radio button. I am totally not interested in that sort of judgement, and I think if you allow people to do it, you're just creating a lot of social side-effects that will further streamline and distort interests. Before you suggest something like this, I would ask you to figure out what effect it has had on consumers' taste that readers can rate books on amazon on a five star scale and customers can pick their reads by following what many other people liked. Scientists are supposed not to be influenced by social bias, but they are human and you shouldn't unnecessarily support such bias.

I don't want people to judge a paper if they are not also required to provide a written explanation for their judgement that contains a scientific argument. If they don't have any justification for their opinion, it's scientifically irrelevant. What would happen is simply that people will down-rate papers of people they don't like, and up-rate papers of their friends and papers that cite their own work. What exactly is it that you hope to achieve with that? Best,

That is right, I haven't commented on double-blind review once again because I've written several times already (for example here, here and here) that it's a nice idea but practically unfeasible. It would for example mean that you cannot upload a paper on a pre-print server before you submit it for peer-review, otherwise the reviewer could very easily find out who wrote it. It is also pointless in that reviewers are picked from among people who work on related topics, and in most of these subfields people know each other anyway, and they know who is working on what. Best,

Now that I think of it, the peer review process is more synonymous with the quality control process than with the quality assurance role, so perhaps having it reside with a separate entity might be the proper thing to do. Then of course we have the quality assurance aspect, which in industry relates more specifically to the expectations of the customer; in most companies this becomes the primary responsibility of management, in both defining and implementing it, with the end user's expectations made clear to everyone involved.

Now in science's case we could then deem the journals as being responsible for quality assurance, yet really are they the correct ones for the task? The answer resides with who the customer is, which in the present model appears to be the scientists themselves. However, can we say this is correct, since that would be like saying only bakers are the customers of bread? So I would ask who the customers of science are, or whether this is identifiable at all, as science is in the business of discovering truth, and so from whom should it be excluded? This is not to have an answer proposed, simply an attempt to perhaps better understand the question.

Tommaso Dorigo has nice things to say about you and Stefan, Bee, here, specifically this page. Well done.

As far as I'm concerned, peer review took a big hit when a Frank Wilczek-edited journal allowed a notorious Bogdanov brothers paper to be published. I don't blame Frank for that, nor should anyone else, IMO. The Bogdanovs definitely deserved their PhD's ... in Con Artistry, not Mathematics and Theoretical Physics.

@Bee regarding double-blind: I accept that it has limitations, but first of all things don't have to be perfect to be worthwhile, and the main point is to help relative unknowns. The "outsiders" are the ones least subject to the limitations you note, and most targeted to be helped by minimizing categorical ad hominem tendencies in reviewers. Such submitters would often be generalists with a cogent, even "dangerous" idea that deserves more scrutiny. (BTW, what sort of "specialist" would e.g. come up with or critique something like DSR, or some interpretation of quantum mechanics?) They might well not have a pre-print up, and wouldn't be known to others in the field.

Even if reviewers could guess others in their clique, the greater point is to keep them from sensing "Oh, that's nobody" instead of which particular person it is. And in general, withholding submitter name reduces the overall incidence of personal bias - so why not? It's silly to refuse a simple, cost-free (?) option just because it isn't perfect. Many journals still offer it, and they should. I'm going to ask for it if I can.

Note, you stated per one of your quotees: [Jeffrey] Di Leo regards double-blind reviews, in which the reviewer doesn't know the author's identity (which he calls "totally anonymous"), as "not as problematic" as the standard partially anonymous peer review.

Segue: haven't others here been asked to submit names of their own reviewer prospects? Do journals use these people?

Finally, many of the complaints about anonymity are regarding perpetual anonymity as of blog posts under handles, or that reviewers never become known to the submitters - but reviewers and everyone else will find out who the submitter is, if and when the work gets published.

Bee: "I am very opposed to the idea of allowing people to "quick rate" a scientific paper by a radio button. I am totally not interested in that sort of judgement, and I think if you allow people to do it, you're just creating a lot of social side-effects that will further streamline and distort interests. Before you suggest something like this, I would ask you to figure out what effect it has had on consumers' taste that readers can rate books on amazon..."

I don't get your objection. I am not talking about stars but specific judgments. The buttons could be: crackpot, erroneous (explain), ok (as in valid results), good (as in important results), brilliant (as in breakthrough), other (explained). And the idea is that after you read it you at least choose one of those. There would still be an option to comment or write a review. Also, all judgments are signed, so you can set it up to only show those of your co-workers, for example.

Why would you be "totally not interested in that sort of judgement" ? Do you think scientists will be dishonest in their opinions?

And what does amazon book rating have to do with it? Do you think the validity and importance of scientific papers is as arbitrary as book tastes?

"Do you think the validity and importance of scientific papers is as arbitrary as book tastes? "

The question is a mixup. The validity and importance of a scientific paper is what you want to know. The taste of people might or might not be related to that. It's people's taste that is irrelevant for a scientific judgement. It doesn't play a role. The point you don't understand is that if there is a way for scientists to quickly and easily find out others' tastes, it begins to play a role. Scientists are human. They do pay attention to what others think, whether or not there's a scientific reason for it. There is ample research showing that there's a natural inclination to attach interest to things other people find interesting. There is research showing people are more inclined to believe in things many other people believe in, in particular if you respect these people or if they are friends. Do you see now what I'm saying: Allowing a cheap, quick judgement that does not require the reviewer to carefully read the paper and to formulate a written report justifying their judgement is like opening Pandora's box. It is a perfect way to vastly amplify sociological effects in the community.

Consider what's going to happen if some Prof VIP and his friend rate a paper (maybe of their student?) as top. Nowadays, nobody knows and nobody cares. Consequently, that student has to prove himself. What would happen with the rating publicly available is that this student would get a vastly increased amount of attention. That attention comes at the expense of others. It's a rich-get-richer, poor-get-poorer setting that amplifies problems we see already today: streamlining, networking, social games. I don't want that. It's bad enough as it is already. Best,

Again, as in a previous discussion topic ("against measure"), I think it is important to review the problem from a more fundamental perspective. I think there are two questions that complicate the peer review process: the first one is about "productivity", and the other is about the possibility of reviewing papers with the authors anonymous, which is nonexistent today (so that the contents of the paper are reviewed without any prejudices against the authors). I will only comment a little bit on the "productivity" issue.

What I mean is: there is too much "productivity" out there, as if it were a good thing by itself. "Performance for performance's sake". Too many papers, the vast majority being small increments (if that) to established results. This makes peer review a burden most of the time. Having to deal with such a large number of papers unavoidably impacts the review process in a very direct sense. I think this makes all the practical issues that you mention increasingly cumbersome.

(To applied sciences, my rationale may or may not apply.) What I think should be done is a change of paradigm, in the sense that scientists should be encouraged to produce *less* -- not more! -- and focus on higher quality, real, relevant advancements. Some people would find this completely strange, but I think that we would make better science -- throughout the process: from the problem motivation, development, solution, presentation, etc. to the review per se -- if we **stop aiming at productivity**, **stop aiming at performance for performance's sake**, and turn towards the value of insight, creativity and problem solution/analysis skills. This should be the source of incentive for better science and consequently a more manageable review process, which could continue as it is today without major changes (except for allowing author anonymity, as mentioned above, something that I am strongly for).

I really mean it. Pure "productivity" (viz., the aim at high multiplicative paper production) should count *against* the author. It is very improbable to be so "highly productive" and, at the same time, to offer *real* advancement in every single one of the numerous papers per year published along with a diverse web of collaborators. People usually think it is a very good thing to have "many collaborators" (again, I do not refer here to the large lists of collaborators in a big lab/experiment). Generally, this is only a means to multiply the number of papers per author. It is a shame. People should **prove themselves more** by publishing high-quality, insightful **solo papers**, more than a large number of collaborative papers with incremental results.

With the two issues above well established, then I believe that peer review should become more bearable and constructive, with maybe just a very few modifications to the current system.

I don't think it's so much the issue of collaborating or not collaborating, or even the quantity of papers per se, but rather this aspect of theoretical physics that forces alliances between theorists into camps as to which approach is the best overall direction or which problems are the most worthy to be working on. Strangely enough, I think science has allowed itself to be too much affected by the other practices and vocations of the world, rather than it having more effect on them.

That is, when you spoke about paradigm shifting: until the last half century or so it was science that for the most part was provoking the shifts, only now to have the opposite occur, with all these metrics for evaluating effort, contribution and value, which relate more to the industrial complex than to the aspirations, methods and traditions of pure academia.

The fact is this has all happened before, when ancient Greek culture was adopted by the Romans, where the schools of pure thought were transformed to be seen only valuable for their technical utility, to be all eventually lost to empire building and maintenance. That’s to say science went from expressing the greater and broader aspects of the consciousness of our species, to only reflect the mind of it or more specifically its will. That is science should not be taken as simply a means to an end, when it really is what has us to know what those ends might and can entail.

"In order to form an immaculate member of a flock of sheep one must, above all, be a sheep."

I agree with you. The pressure to publish is definitely one of the prime reasons why peer review today works so badly. You could decompose that into financial and time pressure. As I mentioned in my post, these are important points that we've discussed elsewhere. Here, I just wanted to focus on some practical issues. While the improvements I suggested don't address all problems, I believe they could at least make the system work more efficiently and with that save time and effort, which in turn could improve the quality of reviews.

I don't really have any good idea how to address the issue of paper over-production on that practical level. To some extent that problem might take care of itself, in that I've come to notice increasingly more papers remain eternal pre-prints (the word "pre-print" itself is somewhat ironic). The problem with the paper production isn't simply that the total number plays a big role (in the sense of a metric as we discussed earlier). I mean, if you're a young researcher the first, say, 10 papers or so are really essential for you to be taken seriously. But whether you have 80 or 200 papers doesn't make much of a difference. Even the dumbest metrics take into account that the number of papers written is an expression not of scientific quality but more of the pace of research in some field, habits in that field, and personal research style. I thus suspect that the paper production is mainly due to peer pressure, combined with papers being, still, the one and only accepted mode of scientific conversation. The latter can to some extent be addressed practically:

For example, I would think that if recorded seminars made the step to being more widely accepted as references, then paper production would go down simply because you could use your seminar for the all-important "said it first" reference. PIRSA was ambitiously proposed to become the arxiv for seminars. I still think it's a great idea. (I don't know what happened to that idea and what the status is. Either way, it would probably take some time for the idea to take off.)

A similar remark could be made about blogposts. Except that, and I've said this many times before, since blogposts can be edited past their publication date without there being a track record for these changes, they are not very suitable for scientific purposes. I believe that the arxiv's meticulous tracking of changes made to submissions is one of the prime reasons why scientists lead their discussions via paper uploads instead of via more flexible web 2.0 tools. That's a software problem that could easily be overcome though. (My suggestion was simply that there would be an option to "lock" a blogpost at some date in some form so that it cannot be modified without breaking that "lock". With this, a blogpost could obtain a date stamp. That of course only works if you use software provided elsewhere and not one you're running yourself. It doesn't seem so complicated to me to do that.) Best,
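A minimal sketch of how such a "lock" might work, assuming a third-party hosting service records a content hash plus timestamp at lock time (the function names here are hypothetical, purely to illustrate the idea):

```python
import hashlib
from datetime import datetime, timezone

def lock_post(content: str) -> dict:
    """Freeze a blogpost: record a hash of its content and a timestamp.
    In practice the hosting service, not the author, would store this."""
    return {
        "sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        "locked_at": datetime.now(timezone.utc).isoformat(),
    }

def verify_lock(content: str, lock: dict) -> bool:
    """Return True if the post is unchanged since it was locked."""
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    return digest == lock["sha256"]

post = "Blogpost text as of the lock date."
lock = lock_post(post)
print(verify_lock(post, lock))                     # True: untouched
print(verify_lock(post + " Edited later.", lock))  # False: lock broken
```

Any later edit, however small, changes the hash and thus visibly "breaks the lock", which is exactly the date-stamp property the comment asks for; the trust resides in whoever stores the lock record.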

Just to clarify: I'm not totally against collaboration, I do have collaborations myself and they have proven very constructive. What I am against is when one attempts to increase his/her collaboration web just in order to increase his/her chance of having more papers published.

Yes, it is difficult to address the overproduction of papers at any practical level. It is a natural imposition of the system, and has other influences, as you mention.

I also think pirsa is a nice idea.

Concerning blogposts, if I am not mistaken, WordPress keeps a log of modifications, so it would only be a case of making this log explicitly shown below the blogpost; as you mention, it should be a relatively easy issue to address.

What is your opinion on the question that I have mentioned -- on allowing authors to submit their papers for peer review anonymously (only after acceptance revealing their names)? I think this is a very practical issue to address.

I think it's a good idea, but I think its effect will be very limited; see my reply to Neil B above. For one, it would basically render the idea of a pre-print server pointless. And besides that, due to the specialization in most fields of science, it won't be too hard to guess who the author is if you make only a little effort. (Respectively, if you find nothing, the author will be pretty much unknown.) It is incidentally not true that double-blind review is nonexistent today; it's just not common, and it's pretty much nonexistent in our community. (I know there are journals that will do double-blind review if the author asks for it, but I can't recall which. Maybe somebody else knows?) Best,

I'm aware. It's just that I am old-fashioned, since I usually post at the arxiv (for instance) *after* acceptance for publication... But I can understand that high impact results, especially from large experiments/observations, cannot wait too long.

it won't be too hard to guess who the author is

Yes, but that will only be more probable in a small and highly specialized community.

It is incidentally not true that double-blind review is not existent today, it's just not common and it's pretty much inexistent in our community.

I have published in various areas -- from astrophysics/cosmology to software engineering to condensed matter (not that I'm proud of that, it's a big suffering). From my own experience and from my husband's, who is an engineer, double-blind review (or even triple) is common in engineering, computer science and software engineering, and acceptance to major conferences is usually valued over journal publication, as I have noticed.

(BTW, I have also noticed that in astrophysics, preprints with no peer review, i.e., not published in main journals, are generally not taken into consideration. This may have changed in the last few years, I can't tell.)

Yes, the acceptance of pre-print (or never-print) papers differs greatly from field to field. I recently talked to somebody (cond-mat, stat-mech) who told me that it's actually only a small fraction of the papers there that appear on the arxiv. (I suppose he knows what he's talking about.) It is probably not too surprising that in hep-th and hep-ph the acceptance of arxiv papers seems to be the largest. My vague impression is that when a paper is unpublished it will sink to the bottom of the sea within a scale of 1-2 years. A paper that's unpublished and uncited after 2+ years is likely to remain unpublished and uncited forever. That doesn't happen if the paper was published, or, alternatively, if it's well integrated by citations. If a paper is well-cited, it doesn't matter much if it's published or not. In particular, many review articles never seem to get published, and why should they? Best,

I'll get back if need be, but IIRC American Journal of Physics and Foundations of Physics offer double-blind review. As I said before, I don't trust people's flaw-riddled psyches, and a sting operation showed that name-recognition distorts appreciation of inherent contribution. (Bee, what if one of your papers was sent to LuMo for review? ;-O) IMHO it's better to suppress that as an option, and take your chances they'll guess who you are anyway.

I guess nowadays many people start studying some subject in a given research area, make a personal review, and afterwards just make it publicly available by posting it on the arxiv as a paper. In the old times, such surveys used to be kept as personal research annotations.

As for the arXiv: not always a sure bet to get a "good" paper in anyway, even if you get sponsorship and an account (I have an account ...). There's been lots of criticism of their snobbery, picky attitude (I mean, beyond what "pickiness" ideally should be), retaliation against critics or unpopular notions (and that, very inconsistent considering weird stuff there like birds from the future trying to stop the LHC!). Also, aren't "editorial" decisions effectively made by a crew of grad students? Check this out: Archive Freedom Organization

Well, there's viXra to get things up if need be, but does anyone take viXra pieces seriously?

Physicists' best chance of spotting an effect of "quantum gravity"—the melding of quantum mechanics and Einstein's theory of gravity—may have evaporated. According to some quantum-gravity theories, the speed of light may change very slightly with the light's wavelength, and experimenters are searching for the effect in radiation from distant stellar explosions. Those searches may be in vain, however. If light's speed varied in this way, then untenable paradoxes would arise, one theorist argues. The speed variations must be at least 23 orders of magnitude smaller than experimental limits set last year, she says.

"It's incredibly hard to find an observable effect of quantum gravity, so I'm a little bit sorry about the result," says Sabine Hossenfelder of the Nordic Institute for Theoretical Physics in Stockholm. Some theorists say that subtle quantum-mechanical effects may avert the paradox, however.

The debate centers on a decade-old idea known as DSR—for "doubly special relativity" or "deformed special relativity." DSR attempts to reconcile Einstein's theory of special relativity—which says the speed of light is the same for all observers, even if they're moving relative to one another—with the possibility that the speed of light also depends on its wavelength. Such a dependence had been suggested by theories of "noncommutative geometry" and emerges from some theories of "loop quantum gravity" (Science, 8 November 2002, p. 1166)....

Thanks :-) Well, some of my colleagues are a little unhappy about my paper (see here and here), so the discussion isn't settled yet. In this case, peer review was actually very helpful to improve the paper. (That wasn't so much because of the content, but because after squeezing 20 pages onto 4 the argumentation suffered somewhat.) Best,

Thanks for the links, Bee. Your work has apparently put some people on the defensive. We are reminded this is inevitable whenever a paper is written that falsifies or attempts to falsify a much worked-on speculative theory. Doubly so if the falsification is based on thought experimentation and not empirical experiment.

Amelino-Camelia's work is primarily DSR, so we can fully understand his reaction. You will of course react to his reaction, he will then respond similarly, you will rebut, etc. etc., until the community comes to some kind of consensus, if it ever does.

As far as Lee Smolin goes, he has many oars in many waters, which I can definitely relate to. I close with a passage from a notable person in the community who shall remain nameless for the time being (not Lubos) from an e-mail he sent me regarding Lee. I do not necessarily agree with all of his points, but he is more or less open-minded (therefore definitely not Lubos) and I do respect his opinion even if he leans toward superstrings. So did Lee once when he was young. Lee is my first of many heroes in this "game" of quantum gravity, so it pains me a bit to write this:

"Some have questioned the rigor of Lee Smolin's work and complain that he dabbles in multiple areas without much follow-through. Whether or not this is a fair criticism, I think it *is* fair to say that his work over the past decade has focused on big-picture issues rather than drilling down into specific theories. Consequently, people who *do* focus on those theories say that he gets some things wrong. For instance, he insists that string theory is not background-independent, but this is really true only of 1980s-era string theories. AdS/CFT provides a fully background-independent theory. Also, see Joe Polchinski's review of his book at http://blogs.discovermagazine.com/cosmicvariance/2006/12/07/guest-blogger-joe-polchinski-on-the-string-debates/ "

I love doing you favors Bee, even unasked for ones, as you and Stefan are heroes of mine as well, and definitely 2 of my favorites, and I have many heroes, beginning chronologically with Clerk-Maxwell and Einstein and the wonderful Paul Dirac. And here's a shout-out to Michele Besso for being the sounding board for old Albert in May 1905, 5 weeks before he submitted his first Special Relativity paper. Awesome.

I am aware of Lee's response, and you can be sure I sent that to my friend. He didn't respond. :-)

I love Lee, I'm not sure I can make that any plainer. He's a polymath IMO. Others disagree, shrug. God forbid on something as fundamental as "reality" there would be disagreement, but the historical record says otherwise: Lorentz, Planck and Einstein stubbornly holding on to their pet ideas being perfect examples. Why should we in our time be any different?

Gratuitous On-Topic stuff (and cheekily so):

I love the way physicists love the word "duality." (Again, beginning with Clerk-Maxwell and the "electric field" and the "magnetic field.") So ... I cheekily ask:

Steven, Bee - I am well aware of the prejudice against thought experiments, and defensiveness towards appealing speculative notions. I ran into some flack with a proposal to show the decoherence interpretation (decoherence occurs, the argument is over whether it accounts for measurements ending up as effective mixtures) was experimentally in contradiction to standard QM. It was claimed I made a math error, which had to be retracted - but that critic insisted the result had no relevance to the issue anyway. (Read up from link as motivated.) Actually, this experiment could be easily done, it just hasn't been AFAICT.

As I posted to Bee's FB Page: There is a motivation, even maybe a "need" for DSR, but it has problems - so "what's a universe to do"? (not angling for more about that specific.)

As for some escapes from heavy review by peers (or for some, their "betters"!) - there are sometimes cracks a talented person can wriggle through. American Journal of Physics (reminder, offers double-blind review) used to have a "Questions and Answers" section. A person could pose an issue as a question, like "Would putting a particle in a refractive medium foil the Heisenberg Microscope by allowing shorter in-medium wavelength but same photon momentum?" to which an answer might come in later, e.g.: the photon actually has more effective momentum in the medium! See, a point can be posed if the writer doesn't know the answer, can't figure it out but thinks it's a good challenge question. Some of my proposed paradoxes could well be presented this way - as stimuli, not fait accompli.

(BTW, I have a better challenge to the HM: use birefringent materials as test particles to alter the angle of polarized light - we can see the particles against the background that way, even if imperfectly: but no momentum is transferred (except occasional scattering - but the scattering was necessary in each observation in the original thought-experiment).)

Chad Orzel of the Uncertain Principles blog, whose specialty, atomic/molecular laser cooling, is the same as that of US Dept. of Energy head Steven Chu, who won the Nobel Prize in Physics for it (not Chad ... Steven).

Neil wrote again: Actually, this experiment could be easily done, it just hasn't been AFAICT.

As far as you can tell. And whose responsibility is THAT, Neil? Could it be ... oh I don't know (but in fact I do) ... yours?!

C'mon man, you live in Southern Virginia outside of Norfolk, right? There are plenty of great colleges and universities there. SELL yourself, man! It's just a friggin' Mach-Zehnder Interferometer experiment, how hard can it be?

My advice, freely given but apparently not yet accepted, is to write a paper at viXra (arXiv spelled backward). Not nearly as good as a (stuck-up, elitist) arXiv pre-print, OK, but better than what you have out there now. You have to point to SOMEthing, Neil, other than a Google search for the most talked-about QI guy.

Please do NOT make those you wish to do you a favor DO WORK to GET YOU. Show them, clearly, on ONE webpage that at least LOOKS professional, what you are asking for.

Yes, Steven, you have some good points. Not entirely, since it isn't really a proper expectation that one person must go through the entire process of both explaining what should happen (on the theory side) and doing the experiment. One person often hands it over to whoever can do the lab work.

I don't think interferometry is easy or cheap, considering e.g. 1/8-wavelength-grade positioning requirements, unless indeed at, say, J-Lab. Yeah, I know (not just "of") some people there who could do it. Maybe they would, I should ask. But anyway, the argument "on paper" can be made given normal assumptions, since we already know "what standard QM predicts." That's what I'm defending against the DI - ironically enough - so this time I'm being quite the opposite of "cranks" like Randall Mills.

One of the problems, as revealed by that fracas: the DI isn't a clear and well-posed theory anyway. Its advocates don't easily explain "what happens" different from traditional QM, partly because the latter can't really say either! It's hard to know just what they are saying (like with MWI, sometimes "entangled" so to speak with DI.) It comes across like a post-modernist essay, or figuring out what Wittgenstein really meant. I can't help that. All I know is, I can disprove the claim they make sometimes about mixtures. I think other things are more important now but maybe I'm on to something. More time opens up soon.

I should try viXra, but I wonder if writing as a science journalist making an original point can be done too. I need to master quantum optics conventions to make a good point anywhere. Blogs - you make some good points on yours, right? I guess you mean one needs to really publish something, not just opining but a specific claim. Your experiences? Finally: OK, I get that the Google brag gets annoying. (Heh, but you do realize some people pay a lot to get high in search ... ;-)

I need to master quantum optics conventions to make a good point anywhere.

And there you have it, the crux of your problem, Neil. You really DO have to write a paper, wherever you publish it (online or ... not), and have the math be spectacular, or at least correct. I don't subscribe to either Many-Worlds or the Anthropic Landscape, but I'm sure the mathematics in each case, by Everett III and Susskind, was right. Um, so? Does it reflect reality?

That is THE question, Neil, but before any of your/our "peers" can answer that most important question, you'll have to present yourself mathematically. THEY, after all, have peers as well (not to mention superiors/mentors), and will have to present your maths as a reason to move forward into experimentation, should they agree with you.

Logic, Neil, as Aristotle showed us, is very hard to argue with. Not that that stops some people. But the logic has to be shown, on paper or on a screen, or no one will take you seriously. Sorry about that, but that's just the way it is.

That's ... the system.

At least QM is a linear theory and not a non-linear one like Quantum Field Theory. How hard can it be?

OK Steven, I won't dwell on my particulars much more, to ward off OT digression, but your question is good in general: I meant very specific conventions. My argument absolutely was mathematical and not just "logical." (I'm not sure what you mean, unless generalized complaints against circular arguments etc., which I also make.) Don't you recall from the post? I calculate output channel amplitudes given various assumptions, and compare them. That is the essence of weighing theories against each other. You're right, it's not hard, and I could easily calculate the amplitudes given the old QM assumption of maintaining wave states, versus the DI presumption (again, as best I can gather from their starry-eyed expositions) of mixtures occurring at BS2.

I meant the very specific practice of conventions (as regards phase shifts), not the big-picture level of math versus argument. It matters since C.O. said my notation was confusing; maybe that explains why he got a false impression. What I wrote already approached that whole format, it just wasn't quite good enough (like a sloppy translation). This sort of thing comes up as an obstacle to review. There are standards about how to label different states, whether to write amplitudes separately or fold them into one symbol, and so on. Compare SRT, where over just the last few decades they have decided to drop the sub-zero from "proper mass" and just call it "mass" (and not even everyone agrees).

In any case, once hashed out we could all agree on the standard prediction for output amplitudes (same as I first claimed), we just disagreed on what those calculated results mean for measurement theory. And like it or not, philosophical wrangling has to enter at that later point. One way to look at the QM problem is, you go in a progression "philosophy" - math/hard science - "philosophy."

Given the current system, you're up against the wall in lacking a PhD in Physics or in Mathematics. That means you have to go the extra distance, given the current system of peer review. Regarding THAT, I think Bee's conclusion in Peer Review V is perfect. I mean it's a perfect conclusion, not that the system is perfect. The system obviously is not.

But America having some kind of Peer Czar, a completely terrible idea, is NOT the solution. What if Len Susskind were appointed the Peer Czar? What would that mean for non-Anthropics like Gross, or the Princeton IAS, or non-Superstring quantum gravitationists in general?

Nothing pleasant, I assure you, and the person who would appoint Len would know nothing about Science; otherwise they wouldn't appoint him. This is a basic problem with America, and with the CIA in particular: political appointees are next to worthless; only the workers (the CIA analysts) matter, yet the political appointee screws up their efforts.

Politics!

Specifically, very few people really care about Quantum Interpretation. Well, you do. Fine. Do what you want. I have no idea if your theory is wrong or right, but I DO know it's provable/falsifiable. I think most of us would like to see your suggested experiment be run to settle your falsifiable hypothesis once and for all, so you either continue in your work (if you're right), or move your brilliant mind on to something else more important and also immediately provable/falsifiable, if not.

Yes, I've had experience with Mach-Zehnder interferometry. It's neither expensive nor does it take a long time. Most universities already have labs with the apparatus in-house for heat-transfer experimentation. You need a quiet, stable room, a table, balloon tires under the tabletop for damping, the tabletop itself (slate will do), a laser, beam splitters, mirrors, a screen, and a camera. Run the experiment, analyze the results, write the report. No big shakes.

Yes Stefan and Bee, I read the post. I think you yourselves haven't appreciated the practicalities of a peer-review system that is run by the authors. If you, the author, get peer reviewers' reports yourself, they are not necessarily going to be objective or the best judge of your paper. Journals like to choose their peer-reviewers for themselves, for many reasons. There is at least one journal that does what you suggest (authors submit with reports, or at least, with an endorsement) - Biology Direct. Why not check that out?

Sorry, but I think you still didn't get what I thought was very clearly written. You would not "get the reports yourself" in any other way than you currently get them. I did not say authors should be picking the reviewers. I am saying: take the peer review process as it is today, and decouple it from the journals. That's all. I already said previously that yes, journals might still want to get another report for one or the other reason. Nothing speaks against that. I just think that if the reports are reliable, that would become unnecessary over the course of time. That one odd journal asks authors to submit reports with their paper (if I understand correctly, reports from self-selected colleagues?) doesn't address at all what I was aiming at. The advantage of what I'm suggesting is that you would have your reports to use for any journal, or use them with a non-reviewed journal, or a pre-print server. I've said all that in my post and in several comments above. Best,

Creative Bee has had THE good idea. I never thought of peer review agencies before; that's indeed interesting. Those submitting articles know from experience how time-consuming submission is, especially when the manuscript has already been submitted once and rejected. The horrible thing is that one is not allowed to submit a manuscript to several journals at the same time. Imagine a world in which one were allowed to apply for ONE job ONLY at a time: how long would one stay without work in such a world? With peer review agencies, one could submit a manuscript to several agencies at once, then submit it to a journal with the best report obtained.