Arvind Narayanan's journal

Finished 5 reviews in the last week. Phew. If my reviews are any indication, only one of them will get accepted. Kind of mirrors the luck I've been having with my own submissions. Insane competition. Three out of the five I reviewed deserved to be published, IMHO, but there simply isn't enough space to accommodate all good papers. Referees are therefore forced to look for reasons to reject rather than consider the merits of the paper.

Part of the reason is the publish-or-perish problem. Because of the pressure to publish, authors tend to rush to print without completing their research or double-checking all their claims, split one paper into two (or many), or work on ill-motivated problems. Another reason is the amount of pure crap that gets submitted. Bluntly put, some authors simply aren't smart enough to make useful contributions to science, and should go look for work elsewhere. Another source of crap is undergrad authors doing all they can to bolster their resumes for grad school applications (I should know; I've been there :-) Bad papers sometimes get accepted because they are not obviously bad or because they go to referees who don't specialize in the subfield. Even when they don't get accepted, referees have to spend a considerable amount of time before deciding that they are looking at nonsense, leaving them less time to do justice to reviews of the good papers.

Publish-or-perish isn't going to go away any time soon, but crapflooding appears to me to be the product of two peculiarities of the publishing process in computer science: the forum is almost always a conference (as opposed to journals; the field moves too fast for the archaic journal reviewing process to keep up), and reviewing is double blind, meaning that neither the author nor the reviewer knows who the other party is. Double blindness definitely has big advantages in ensuring the fairness of the reviewing process, but it lowers authors' inhibitions about submitting junk papers. Here's a simple solution: publish the reviews of all papers, along with the authors' names (and possibly the submitted versions of the papers), on the conference website after reviewing ends. Sit back and watch as crapflooding automagically diminishes, reviewers' load decreases, and the genuinely good papers get their due.