Pages

Friday, April 20, 2007

FOCS/STOC Reviewing

Bill Gasarch has a long list of questions about paper review processes at FOCS/STOC (what happened to SODA?) following up on Vijay Vazirani's not-radical-enough guest post about submitting videos along with submissions. Since it's tedious to post a list of answers to so many questions in comments, I'll post it here, and let Blogger trackback work its magic.

Is the community really driven by these conferences? An underlying assumption of these discussions has been that someone judges us based on the number of STOC/FOCS papers we have. Who is this mysterious someone? Is it our departments? Our colleges? Ourselves? Granting agencies?

Well it's not really that mysterious: I'd say all of the above.

Is it bad that we are so judged? PRO: It's good to have a central place where you know the good papers are. CON: The rest of the items on this list are about the problems that arise in judging quality. CON: Some of these papers are never put into proper journal form. CAVEAT: Is the journal-refereeing system really so good that we can decry its absence here?

Nothing wrong with being judged. However, the assumption that FOCS/STOC is the repository of all "good papers" is problematic. And personally, I'm tired of people complaining all the time about journal refereeing. The SAME people who review papers cursorily for conferences are the ones who do it for journals. If conference reviewing is "OK", what makes journal refereeing so terrible?

Other fields do not have high-prestige conferences; why do we, and is it a good thing? Our field moves fast, so we want to get results out fast. It is not clear that FOCS/STOC really do this. Using the web and/or blogs can get the word out. Important new results in theory get around without the benefit of conferences. For results just below that threshold, it's harder to say.

I'm sorry. I'm having a hard time getting through the fog of hubris around this statement. I fail to understand how we can have the gall to stand up and say that CS theory moves a lot faster than any field that abjures conferences. Bio journals publish every few weeks, and there are mountains of papers that show up. For a closer-to-home example of the herd mentality that often drives research in theoryCS, papers in string theory appear at high rates without needing the acceleration of conferences.

Are the papers mostly good?

As I was saying to someone the other day, it's unlikely that you'll find more than a couple of bad FOCS/STOC papers in each proceedings. So, yes.

Is there a name-school bias? Is there a name-person bias? Some have suggested anonymous submissions to cure this problem.

After a recent SODA, someone levelled this complaint against the committee, both in terms of committee composition, as well as paper acceptance. There is some info on this matter here.

Is there an area bias? There are several questions here: (1) is the list of topics on the conference announcement leaving off important parts of theory? (2) is the committee even obeying the list as is? (3) have some areas just stopped submitting?

I'd argue that there is (mostly) faddism rather than bias; certain topics gain a certain cachet in certain years, and often papers that in other years might not have cracked a FOCS/STOC list do so. However, (3) is definitely true to a degree for computational geometry and data structures. I've lost count of the number of people who claim that they only submit to STOC/FOCS because of tenure pressure. It's not clear to me whether actions match words in this regard, but the prevailing sentiment, at least in these two areas, is strong.

Is there a hot-area bias?

YES: see above.

Is there a mafia that controls which topics get in?

I've never been on a FOCS/STOC committee, but I'd strongly doubt this. I think our community is full of strong personalities, and it's hard to imagine that many people working together with such cohesion :)

Is there a bias towards people who can sell themselves better? Towards people who can write well?

Undoubtedly so. However, this cannot be viewed as a problem per se. It's just a function of human nature: if you understand something better, or it seems clearer, you will often view it favorably. And there's nothing wrong with creating an evolutionary pressure towards better writing and "framing".

Is there a bias towards making progress on old problems rather than starting work on new problems?

Not really. I think there's a mix of preferences, and that reflects individual interests.

Is there a bias towards novel or hard techniques?

Again, I suspect people have their own viewpoints; some like beautiful proofs, some like novel techniques, some are impressed by technical wizardry, and so on.

Is it just random? Aside from the clearly good and clearly bad papers, is it random? Is even determining clearly good and clearly bad also random? One suggestion is to make it pseudo-random by using NW-type generators. This solves the problem: since acceptance really is random, it is less prestigious, and most of the problems on this list go away. It would also save time and effort, since you would not need a program committee.

Well, since the vast majority of submissions to any conference are in the mushy middle, we have to invoke the proposition that I think is attributed to Baruch Awerbuch: the probability of a paper being accepted is a random variable whose expectation is a function of the paper quality, and whose variance is a function of the program committee.
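Awerbuch's quip can be made concrete with a toy simulation. Everything below — the quality scale, the acceptance threshold, and the Gaussian noise model — is my own illustrative assumption, not anything from the quip itself: each review is the paper's "true quality" plus committee-dependent noise, and a paper is accepted when its average review clears a fixed bar.

```python
import random

def review_score(quality, committee_noise, n_reviews=3):
    """Average of a few noisy reviews: the expectation tracks paper
    quality, while the spread tracks the committee's noise level."""
    return sum(random.gauss(quality, committee_noise)
               for _ in range(n_reviews)) / n_reviews

def accept_rate(quality, committee_noise, threshold=5.0, trials=10000):
    """Fraction of simulated submissions that clear the threshold."""
    hits = sum(review_score(quality, committee_noise) >= threshold
               for _ in range(trials))
    return hits / trials

random.seed(0)
# A clearly strong paper survives any committee; a paper only slightly
# above the bar is at the mercy of the committee's variance.
strong = accept_rate(quality=8.0, committee_noise=1.0)
decent_calm = accept_rate(quality=6.0, committee_noise=0.5)
decent_noisy = accept_rate(quality=6.0, committee_noise=2.5)
```

Under these made-up numbers, the clearly good paper gets in essentially always, while the mushy-middle paper's fate swings noticeably with the committee's variance — which is exactly the aphorism's point.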

Are there many very good papers that do not get in? It has been suggested that we go to double sessions so that more get in. If the quality of papers has been going up over time, this might make sense and would not dilute quality.

This goes back to the dual conflicting roles of a conference: quality stamping and dissemination. You will almost certainly dilute the first by increasing paper acceptances, but you will only help the second.

Is 10 pages too short for submissions? This was part of Vijay's video suggestion. Are figures and diagrams counted in those 10 pages? If they are, they shouldn't be.

Too short for what? Reviewers, with the number of papers they need to review, can hardly be expected to read 20-page submissions. And as an aside, if you can spend the time on a video review to explain the paper, why can't you use the same time to write a GOOD outline of the main ideas in one section of the paper itself?

Are many submissions written at the last minute and hence badly written?

YES

Are many submissions written by the authors taking whatever they have by the deadline and shoving it into a paper?

OFTEN

Since the conference is about all of theory, can any committee do a good job?

Vijay was partially addressing this problem by trying to find a way to make their job easier.

Committees try, with extensive outsourcing, but it's essentially an impossible task, and one should not expect perfection.

Do other conferences have these problems? That is, do the more specialized conferences have similar problems? Why or why not?

Allowing PC members to submit changes the review dynamics tremendously. It increases the reviewer pool, reduces reviewer load, but also reduces the quality of review ratings. Most other communities allow PC submissions, so it's really hard to compare. I am NOT advocating going this route.

Do you actually get that much out of the talks? If not, is it still valuable to go for the people you meet in the hallways?

Schmoozing is much more effective for me than attending talks. But that's just me. Others may have different viewpoints.

For all the items above, even if true, are they bad? Some have suggested that bias towards big-names is okay.

Bias towards "big-names" is braindead. Bias towards quality is perfectly fine. A correlation between "big-names" and quality does not imply that one should be replaced by the other.

8 comments:

The SAME people who review papers cursorily for conferences are the ones who do it for journals. If conference reviewing is "OK", what makes journal refereeing so terrible?

Being at the moment on two conference PCs simultaneously (both at their "active" stage: ESA and CSR), I can say that conference reviewing differs not just in numerical parameters; it is a different thing altogether.

I do not quite understand your comment: do you suggest replacing STOC/FOCS with journals or forgetting journal publications after STOC/FOCS ones?

My larger point is that we like to vilify journals as the 'other': journals are bad, they are this, they are that. And yet it's the same population of people doing both conference and journal reviewing.

Maybe I have been lucky with journals, but the quality of comments that I've gotten back from journal submissions has been light years ahead of the conference comments.

I've seen FOCS comments that are either one-liners ("we accept, it is interesting") or "we reject, our community does not do things this way", without even suggesting what "this way" or "our way" is, or whether they know of an alternative community that might do things in a spirit similar to the submission.

Journal comments, especially from SICOMP, tend to be thorough and insightful, and are usually done within a few months.

This is easy to fix. First, make a point of writing an entire page of comments in free-flow form. At the end, go over it and transfer the confidential parts into the "comments for PC only" area (you'll be surprised how little needs to be transferred). End result: the author gets a page full of the reviewer's thoughts and opinions. This is an effortless way to provide much better comments to the authors.