This is a conference that aims to be the venue for the first papers in new areas. This prompted people to ask me afterward if we shouldn’t start a new conference devoted to second papers. I thought this was an appealing idea, and perhaps the conference could be called Follow-ups in Computer Science; a snarky colleague, however, suggested that we already have two such conferences, and they are called STOC and FOCS.

ICS has a steering committee entirely composed of past and future Turing Award winners, so surely they know what they are doing. A common complaint I heard, however, was that it isn’t clear exactly what the motivations and the goals of this conference are, what papers are being sought (surely you cannot fill up a 30-paper conference with first papers, each opening up a new area), and so on.

Helpfully, Oded Goldreich, one of the promoters of ICS, has written a statement about the goals of ICS, as well as a longer essay on what is wrong with STOC and FOCS. The arguments made in the essay are Oded’s motivations for the new conference.

As I have said before, I agree with the importance of conceptual innovations, and of simplicity, but I disagree with the claim that our current review system undervalues such points. Hence, I think that initiatives such as the “letter on conceptual contributions” and now ICS will not correct an imbalance, but rather will create an imbalance, penalizing the necessary, hard, and unglamorous technical work by which we understand new ideas, exploit and simplify their applications, and create the conditions such that the next new ideas are “in the air” and the right person at the right time can get them, and so on.

23 thoughts on “So this is what “FOCS” stands for”

I hadn’t seen Oded’s statement. Thanks for pointing to it. I must admit, despite your descriptive adverb “Helpfully”, I did not find it particularly helpful (or his other essay). That is, I’ve already heard the argument that conceptual contributions are not sufficiently taken into account currently in FOCS/STOC conferences (a point about which people may reasonably disagree). I didn’t feel I got anything additional out of these statements. YMMV.

I wonder if, based on what you’ve said, you intend to vote with your feet and not submit to ICS. I also wonder how many others intend to vote with their feet in that way, even if they feel they have a paper with a conceptual contribution, and continue to submit to FOCS/STOC.

In the interest of giving specific examples (something I’ve found lacking in the ICS discussion thus far), I would like to say that I greatly enjoyed your paper “Max Cut and the Smallest Eigenvalue”. In my mind, it is an example of a paper that combines both technical merit and conceptual thinking. That is, you seem to have spent as much time and space in the paper describing how the results relate to other results, and what the thinking is behind the results, as on the results themselves. I think it’s an excellent example of what FOCS/STOC aim for in their papers, and I’m sure any of ICS/FOCS/STOC would be fortunate to have similar papers.

It has become very popular in blogs to knock STOC and FOCS. I’ve been attending FOCS and STOC almost as long as Oded. Of course, back then, giants strode the earth (by that I mean many of the founders of the field – in fact many are still around). These were the formative years of complexity-based cryptography, NC parallel algorithms, and computational learning theory. There were major advances in circuit lower bounds nearly annually. Chernoff bounds had only recently come into wide usage. Fourier analysis of Boolean functions was still to come. Papers were duplicated into a dozen or fifteen hard copies and sent by surface mail or courier to the PC chair, to be redistributed by mail to the committee members. Student registration did not include the lunches, so we were forced to head off-site for lunch.

I think that there is a bit of nostalgia in Oded’s comments. The existence of Best Paper Awards is not, as Oded suggests, indicative of any fundamental change – there has been a Best Student Paper award (the Machtey Award) at FOCS for around 30 years. Certainly during my postdoc in 1986-87 at MIT I recall the competition aspect of STOC/FOCS submission as acute. People always used to talk about the really hot papers, which were typically scheduled in the first and/or last time slots in the conference.

It was usual then for people to be a few years out before they were on PCs. The first STOC/FOCS PC I was on was the one for 1992 FOCS (the PCP theorem certainly made that exciting!). I was just on the 2009 STOC PC. We didn’t have anything as momentous as the PCP theorem this time but we had just as many innovative, creative, and conceptual papers as in 1992 (even as a percentage of accepted papers) and the reviews were much more carefully done than in 1992. In terms of its operation, the recent PC was much more legalistic than the earlier one but I don’t view that as a fundamental difference.

We have both lost and gained in diversity of papers. In the early 1980’s STOC/FOCS used to have about 10% of papers in logic. Even through the early 1990’s computational geometry papers were regular fixtures. The gains all relate to the broadening of the field such as algorithmic problems in economics and quantum computation. The existence of parallel sessions has also meant that it is easier for different areas to drift further from each other.

A major difference between the two PCs is that a much higher percentage of papers in 1992 had introductions that spoke to a general audience. Many more of the submissions trumpeted their key new idea right up front (though this was sometimes a bit oversold). There has been an increase of sophistication along with the increase in breadth. Where we need to work harder is in addressing our work to the widest possible audience in our field. We need to make sure, as the field broadens and deepens, that we write papers for more than just the experts.

The ICS could be a great venue, and there is certainly some advantage to the shorter time between submission and publication. I am with Luca and Michael, though, in thinking that STOC/FOCS do a pretty good job overall in the balance of styles of papers accepted.

I am also currently unconvinced by the proponents of ICS that something is wrong with STOC/FOCS, but I’m open to being convinced.

I can think of one way in which the proponents of ICS can help convince me that this conference is needed:

Suggest a list of 20 papers that did not get accepted into STOC/FOCS, yet had serious (potential) contributions and impact, and were written in the last 3 years.

Since it can be embarrassing (and unconvincing) to suggest one’s own rejected papers as examples of what’s wrong with the system, believers in ICS should suggest papers written by others. If there actually is room for 30 papers a year that are missed by STOC/FOCS, then finding 20 from the last 3 years that don’t involve the organizers of ICS should be easy. Where are all these papers?

Thanks, Luca, for calling attention to my essay. I do hope that it will help to clarify the basis for my various actions. Convincing others is of secondary importance to me now; I wish first to be understood, even by those who may not agree with my analysis.

Indeed, Luca, we differ in our view and analysis of the current reality, as well as in our concerns regarding the future. I think that there is a significant imbalance in the review process (and it reflects a similar imbalance in the attitudes of the community), and I hope ICS and/or something else can correct it. I see absolutely no danger of “penalizing the necessary, hard, and unglamorous technical work” in the near future. I guarantee that if I ever see such a danger I’ll fight against it no less than anybody else.

A word to Paul. I agree that the competition was always there, but the question is one of intensity, and more so of the balance between this aspect and the actual contents. It is also true that one award, created to commemorate a person whom many liked and whose early death shocked them, has existed for decades, but more than a handful were created in the current decade. And note, this is just one concrete example. Lastly, I’m well aware of the danger of being seen as a nostalgic old guy (if not a useless and power-hungry old idiot). I’m risking this image because the issue at hand is more important to me than my image. I think you (and all others) should take this seriously. But let me stop here.

A word to Anup. I can easily make such a list, but I will not, because it would lead to an “out of point” discussion of each of these examples.

Finally, to everybody: I don’t like writing on blogs (mainly for technical reasons), but I will be happy to correspond directly via email with anybody who wishes to do so. I will also be happy to receive further questions, which I may answer on my aforementioned webpage.

I think the following ideal model might be helpful to understand the purpose of ICS:

Say there are two kinds of works in TCS, technical or conceptual. A healthy area needs an appropriate ratio between the two kinds, so it makes sense to adjust the ratio to fit the area.

However, given the huge career pressure on young researchers in TCS (I guess that’s also true for many other areas) and the prestige of STOC/FOCS, it’s not that STOC/FOCS adjust their preferences to fit the area but the other way around: the preferences of the area are driven by the taste of these two conferences.

Therefore in order to find the real optimal ratio between technical and conceptual works, just relying on STOC/FOCS is not sufficient. ICS plays the role of a “buffer” for potential conceptual works.

The current uncertainty about ICS is actually a good thing, in that it makes people’s interests more independent of the “halo” of certain conferences, and thus can more accurately represent the real appetite of TCS for conceptual works.

I’m surprised this proposal is so “controversial”. The ICS organizers are not forcing STOC and FOCS to change. Though some may not agree, there clearly is a subset of the community that does not feel served by STOC and FOCS. So what is wrong with them introducing a new venue more in line with the type of research they appreciate? Isn’t this the normal and healthy way to proceed? Don’t most conferences and workshops get started this way?

If there actually is room for 30 papers a year that are missed by STOC/FOCS, then finding 20 from the last 3 years that don’t involve the organizers of ICS should be easy. Where are all these papers?
Why is this a good metric for establishing the need/importance of ICS? In addition to the benefits outlined by the ICS proponents, the very existence of ICS could motivate people to write more conceptual papers. It could also help keep some people in theory who otherwise might have lost interest. I also suspect the conference could serve as a bridge with other areas from outside theory (especially with the rest of computer science).

So even if there are only 10 papers in the first few years, ICS may still have some value to the community.

p.s.: I forgot to answer Anup’s last Q (i.e., “where are these papers”). I have answered it on my opinion page (along with a few other Qs). Let me reproduce this Q&A below.

[Q: Where are the potential ICS papers, which were not accepted to STOC/FOCS]

Some of these have appeared in special area conferences and some have not appeared in any conference. In both cases, the impact of these works on TOC has been reduced and/or delayed. The question of whether a certain work should appear in a TOC-wide conference or in a special area conference is addressed in my opinion page “Where to submit” (http://www.wisdom.weizmann.ac.il/~oded/on-submit.html).

My point was only that STOC and FOCS are also quite welcoming of conceptual contributions (and certainly not less welcoming than in the past as Oded has suggested). Have there been what turn out to be good conceptual papers that have not been accepted over the last few years? Of course. There also have been many more papers that are good technical contributions that are not accepted.

I am quite supportive of ICS. It has a number of features that could make it very appealing. One aspect that I think is very important has not received much discussion: the potential multi-round interaction with the PC. In my experience a number of the conceptual papers that are not accepted have some serious flaws in the submitted version. Because they are working with new concepts, the authors sometimes don’t get the definitions right which allows them to be met by trivial solutions as well as the desired ones. The authors may have what might be a good concept but no good examples to back it up. Authors are sometimes not aware of other definitions/approaches that have large similarity with theirs and so can’t distinguish their work from closely related work. The papers may be very sloppily and hastily written and so some of the arguments may have major gaps (or even be wrong as stated).

Many of these problems can be addressed with a multi-round process of reviewing and shepherding of papers, which the ICS call seems to suggest. The structure of STOC/FOCS reviewing is not designed for such a process – shepherding of papers is quite rare. The default assumption is that papers should be judged based on what was originally submitted. If concerns about correctness or clarity come up during the reviewing process, then authors may be contacted for clarification, but in general the PC is not going to go out of its way to reshape a seriously flawed paper even though it has a germ of an interesting concept.

I answered the issues that you raised (in Comment #2), which were not raised by Luca. As to your main claim, shared by Luca, I disagree with both of you, but I’m very happy with this type of disagreement.

Wrt the new issue that you raise (this time in favor of ICS), I’m afraid I have to disagree too. As you note, interaction with authors was practiced in the past by some PC members of STOC/FOCS; this is nothing special to ICS, except maybe the official encouragement to do so, which is positive but quite redundant from where I stand. I disagree more strongly with your view that conceptual aspects are easier to handle by such interactions, and with your willingness to forgive such glitches. Personally, I’m more critical of unclarity when it comes to conceptual issues (assuming, of course, that one can verify the claims/issues despite the unclarity…). Similarly, I think that revising conceptual issues may be harder than revising technical ones.

Oded

p.s.: I forgot to note before that I liked very much the attitudes expressed by YIN and JKL.

Responding to Yin’s comment, the problem is that there rarely are “conceptual papers” and “technical papers.” Every paper has conceptual contributions and technical contributions, and the issue is how their merits are balanced in evaluating the paper. This is especially sensitive for the borderline papers, which are ranked nearly randomly by the committee, so that even a small bias in a direction or the other can be decisive.

So talking about the extreme cases distracts from the fact that a small change in the intentional and subconscious biases of reviewers in balancing technical versus conceptual merits can make a very notable difference in choosing the bottom 30-50% of STOC-FOCS papers. In turn, this will make a notable difference on which students will find their work recognized and so on. I realize that if the proponents of ICS and of the letter consider the current balance wrong, then it makes sense to pressure the community to change. Respectfully, I disagree.

I would also like to add that there are more dimensions to the merits of a paper than the technical and the conceptual contributions. For example, (1) how much weight should we give to the quality and clarity of the presentation? (2) how much weight to the confidence of the committee on the correctness of the proof, and the rigor of the arguments? (3) how much weight to whether the paper is interesting to a large section of the audience of the conference? I think (1) and (2) should be given more weight than they currently are. I have no opinion on (3), but Oded does.

“I think (1) and (2) should be given more weight than they currently are.”

Am I uninformed? I thought the things that should be evaluated, in descending order of importance, are:

a. Importance of the problem/new concept.
b. Correctness of the solution.
c. Generality of the solution method (are general techniques developed/lemmas proven which might help others with their problems?).

You seem to be implying that correctness isn’t given much weight. Is this really true? When I review papers, I pay close attention to correctness.

When I signed the so-called “conceptual manifesto”, my primary concern was something quite different than the focus of the subsequent debates. In retrospect, I feel it was a poor choice of terminology that suggested an artificial dichotomy between “conceptual” and “technical” papers, with many people interpreting these words in different ways. For me, the most important point was something that I suspect is quite uncontroversial, and which I’ll try to articulate now…

As the discussions show, there are many different kinds of contributions that papers make – making progress on known important problems, introducing new models and questions, developing new techniques, bringing simplicity and clarity to previously complex/confused areas, drawing new connections between topics, etc. All of these are important for any field, yet it is natural that there is a diversity of opinions about what the right balance between these should be. Indeed, the “optimal” balance (if there is one) is probably very dependent on the specific topic/field/community and its current state of research.

Note that some of the above kinds of contributions might be called “technical” (making progress on existing important problems, developing new techniques) and others “conceptual”. But purposely missing from my list is the question of how “difficult” or “easy” the paper is. Indeed, I feel that a paper’s “difficulty” is orthogonal to its value, and should not be a significant criterion in deciding whether to accept it. Instead, we should be trying to assess how much we *learn* (or will learn) from a paper, how it contributes to advancing the state of knowledge in the field. We may learn a lot from a difficult paper because of the significance of the final result or because of techniques developed along the way, but either way, the paper’s difficulty is not the *reason* for its value. To some extent, the same holds for simplicity – if we prefer simpler solutions (when they exist), it is because they tend to clarify our understanding, tend to be more efficient/practical, etc.

This probably all seems obvious, but what led me to sign the conceptual manifesto was an experience on a PC where I felt that too much of the discussion centered on how difficult papers were. I think it is quite easy to fall into this habit, since difficulty feels like a more objective criterion on which to judge papers – much easier to apply than “how might this paper advance the field?”. But there’s no point in using an objective criterion if it is irrelevant to our actual goals…

My impressions from one PC experience should not be enough to draw general conclusions. But when I heard complaints from enough others that seemed symptomatic of the same underlying problem, it seemed worthwhile to try and bring the issue out. For me, “conceptual” was intended in a broad sense, to encompass what we learn from a paper – its take-away messages – as opposed to how much “technical work” was involved. But understandably these words were interpreted in different ways…

Here I’m not attempting to speak for other signers of the conceptual manifesto (much less the organizers of ICS). Certainly some people feel that there are certain kinds of contributions (e.g. “first” papers) that are particularly disadvantaged in the review process. I have some feelings about this too, but (a) I think reasonable people can disagree about what is the right balance, and (b) I don’t think we can begin to really discuss or address these issues unless we are able to resist reverting to the criterion of “difficulty” in evaluating papers.

[One minor point in response to Anup’s question: the challenge in giving specific examples is that you rarely know if and why papers are rejected, except when the authors are close to you (in which case you seem biased) or when you serve on a PC (in which case everything is confidential). For this reason, I don’t have a clear picture of the trend over the past couple of years. (I have no complaints about my own papers, and I haven’t served on a FOCS/STOC PC in a little while.) Nevertheless, one somewhat recent example that Anup’s specific question brings to mind is that the first few papers that were developing the notion of differential privacy did not appear in STOC/FOCS (but I have no idea whether they were even submitted, or what the original versions of those papers looked like), though now that the area is more “developed” it seems welcome. However, as Oded says, I think that focusing on this one example would be “out of point” (especially since Anup’s question was about ICS, and my comments above were not).]

Thanks, Luca, for responding to my comments. I agree with you that the “technical/conceptual” dichotomy oversimplifies the reality. I also understand your worry about the big changes caused by small biases. But as a proponent of ICS, I do not want to use a strong word like “wrong” to describe the current STOC/FOCS. I prefer the word “unknown”: the optimal balance of conceptual contributions to the field is unknown to us if we keep using STOC/FOCS as the ONLY venues for top theory work. It’s not that they are wrong; it’s that they are so prestigious that people watch them closely and may change to suit them.

My view is that theory as a field has grown and there are many threads of work competing for space. When there is heavy competition, it is easier for any PC to apply “technical difficulty” as a metric for the “borderline” papers. This is not going to change substantially as long as the competition for slots is large. Some feel that a big-tent approach with more slots will allow more people to attend and present their work at STOC/FOCS, and I am sympathetic to that. Others want a single-track, workshop-like feel for STOC/FOCS where they can hear only about the latest and best work chosen by a reasonable PC. We cannot have both. I do not believe that the conference system is the key incentive for people to do good work; however, we are stuck with it now, and change seems difficult.

When there is heavy competition, it is easier for any PC to apply “technical difficulty” as a metric for the “borderline” papers.

That has not been my experience with STOC/FOCS PCs. The metric for borderline papers has explicitly been “Which papers would be of most value to the community given the rest of the program?” This means that the (n+1)-st rated paper on a popular topic is less likely to get in than a paper that introduces a new connection to other research areas, a new problem that will be grist for future papers, or a new technique that might be useful for solving other problems.

Popular research topics are where the heavy competition manifests itself the most. With so much competition, a smaller percentage of these papers stand out from the pack and so the hit rate for them is lower, though there are exceptions when research in an area is perceived to be advancing particularly quickly.

I don’t want to talk about ICS, but generally, I think that the “conceptual vs technical” debate misses the point and may be damaging to our community.

The real potential problem I see in reviews is superficiality. As the field becomes broader, papers become harder for PC members to evaluate. Time pressure, technical complexity, and bad writing don’t make their job any easier. So there is a temptation to fall into various stereotypes, such as “this paper is too easy” or “I don’t like the entire subfield”, and generally to judge the paper by what is easiest to measure, such as the quantitative factor being improved or one’s first impression of the “story” of the possible application. (I have to say that most people do try hard to avoid superficiality, and you often see people who wrote very short reviews but, in the PC meeting, discuss the paper extensively in a way that makes it clear they actually read it deeply. But still, in my mind this is the danger we need to guard against.)

I’m afraid that now we’ll just have one more stereotype: “this is a technical paper that just does weight lifting”. More generally, I hope this debate won’t polarize the community, and that we won’t have “pro-conceptual” and “pro-technical” PC members who try to correct imbalances at the expense of carefully evaluating each paper on a case-by-case basis.

Well said, Boaz, I fully agree. My previous comment (and my intention in signing the “conceptual manifesto”) was about trying to push against one particular kind of superficiality I had seen in the review process, but this is not the only kind of superficiality and nothing will be gained if we just replace it by other forms of superficiality.

On a lighter note, there should also be an LCS – ‘Last-papers in Computer Science’ – which focuses on, as the title suggests, the final word on a problem. For example, achieving a hardness result equal to the best known approximation factor of an optimization problem (or vice versa) would be the last word on that problem.