Federating the "big four" computer security conferences

Last year, I wrote a report about rebooting the CS publication process (Tinker post, full tech report; an abbreviated version has been accepted to appear as a Communications of the ACM viewpoint article). I talked about how we might handle four different classes of research papers (“top papers” which get in without incident, “bubble papers” which could well have been published if only there were capacity, “second tier” papers which are only of interest to limited communities, and “noncompetitive” papers that have no chance), and I suggested that we need to redesign our publication process, primarily by adopting something akin to arXiv.org on a massive scale. My essay goes into detail on the benefits and challenges of making this happen.

Of all the related ideas out there, the one I find most attractive is what the database community has done with the Proceedings of the VLDB Endowment (see also their FAQ). In short, if you want to publish a paper in VLDB, one of the top conferences in databases, you must submit your manuscript to PVLDB. Submissions then go through a journal-like two-round reviewing process. You can submit a paper at any time and you’re promised a response within two months. Accepted papers are published immediately online and are also presented at the next VLDB conference.

I would love to extend the PVLDB idea to the field of computer security scholarship, but this is troublesome when our “big four” security conferences — ISOC NDSS, IEEE Security & Privacy (the “Oakland” conference), USENIX Security, and ACM CCS — are governed by four separate professional societies. Back in the old days (ten years ago?), NDSS and USENIX Security were the places you sent “systems” security work, while Oakland and CCS were where you sent “theoretical” security work. Today, that dichotomy doesn’t really exist any more. You pretty much just send your paper to the conference with the next deadline. Pretty much the same community of people serves on each program committee, and the same sorts of papers appear at every one of these conferences. (Although USENIX Security and NDSS may well still have a preference for “systems” work, the “theory” bias at Oakland and CCS is gone.)

My new idea: Imagine that we set up the “Federated Proceedings of Computer Security” (representing a federation of the four professional societies in question). It’s a virtual conference, publishing exclusively online, so it has no effective limits on the number of papers it might publish. Manuscripts could be submitted to FPCS with rolling deadlines (let’s say one every three months, just like we have now), and conference-like program committees would be assembled for each deadline. (PVLDB has continuous submissions and publications. We could do that just as well.) Operating like a conference PC, each committee would rapidly accept top papers, which would be “published” with the speed of a normal conference process. The “bubble” papers that would otherwise have been rejected by our traditional conference process would now have a chance to be edited and go through a second round of review with the same reviewers. Noncompetitive papers would continue to be rejected, as always.

How would we connect FPCS back to the big four security conferences? Simple: once a paper is accepted for FPCS publication, it would appear at the next of the “big four” conferences. Initially, FPCS would operate concurrently with the regular conference submission process, but it could quickly replace it as well, just as PVLDB quickly became the exclusive mechanism for submitting a paper to VLDB.

One more idea: there’s no reason that FPCS submissions need to be guaranteed a slot in one of the big four security conferences. It’s entirely reasonable that we could increase the acceptance rate at FPCS and have a second round of winnowing to decide which papers are presented at our conferences. This could either be designed as a “pull” process, where separate conference program committees pick and choose from the FPCS accepted papers, or as a “push” process, where conferences give a number of slots to FPCS, which then decides which papers to “award” with a conference presentation. Either way, any paper that’s not immediately given a conference slot is still published, and any such paper that turns out to be a big hit can always be awarded a conference presentation, even years after the fact.

This sort of two-tier structure has some nice benefits. Good-but-not-stellar papers get properly published, better papers get recognized as such, and the whole process operates with lower latency than our current system. Furthermore, we get many fewer papers going around the submit/reject/revise/resubmit treadmill, thus lowering the workload on successive program committees. It’s full of win.

Of course, there are many complications that would get in the way of making this happen:

We need a critical mass to get this off the ground. We could initially roll it out with a subset of the big four, and/or with more widely spaced deadlines, but it would be great if the whole big four bought into the idea all at once.

We would need to harmonize things like page length and other formatting requirements, as well as have a unified policy on single vs. double-blind submissions.

We would need a suitable copyright policy, perhaps adopting something like the USENIX model, where authors retain their copyright while agreeing to allow FPCS (and its constituent conferences) the right to republish their work. ACM and IEEE would require arm-twisting to go along with this.

We would need a governance structure for FPCS. That would include a steering committee for selecting the editor/program chairs, but who watches the watchers?

What do we do with our journals? FPCS changes our conference process around, but doesn’t touch our journals at all. Of course, the journals could also reinvent themselves, but that’s a separate topic.

In summary, my proposed Federated Proceedings of Computer Security adapts many of the good ideas developed by the database community with their PVLDB. We could adopt it incrementally for only one of the big four conferences or we could go whole hog and try to change all four at once.

Comments

I don’t think that centralizing the publication process in security is a good idea. There have been problems with various instances of program committees over the years, and the diversity of conferences helps protect against these problems — even if there is overlap between program committees, different program committees do in fact yield different decisions.

Indeed, if this proposal gained traction, the result would certainly be that attention would shift to independent conferences (new or existing).

I don’t understand your point about VLDB — VLDB is hardly the only show in town when it comes to databases: PODS, SIGMOD, ICDE, and EDBT/ICDT are certainly major, prestigious conferences for publishing database results and are independent of VLDB.

Wouldn’t this move wipe out any distinction between Oakland, USENIX Security, CCS, and NDSS? For better or for worse, these conferences still differ from each other in the topics they accept for publication. (Personal note: I do not think NDSS is on par with Oakland, USENIX Security, or CCS.)

On the second idea (letting conference PCs select papers from the FPCS accepts): What happens to FPCS accepts that are not picked by a particular conference PC? Are they denied conference publication forever? Are they considered again by the next conference PC? Wouldn’t this make the queue of such “resubmits” grow without bound?

Maybe we should first focus on fixing the journal-reviewing process in computer security. Nowadays it often takes 3-6 months to get a response.

I don’t really see strong distinctions between the big four. Certainly the distinctions were once larger and clearer than they are now. Ultimately, you can judge the similarity of the conferences by the people on their program committees, and it’s the usual suspects on each and every one.

With regard to papers accepted by FPCS and not picked up by any of the conferences, these papers would still be truly published in FPCS. They just wouldn’t have the bonus of being presented at one of the conferences. In effect, it’s a second-class outcome, but far better than being flat-out rejected. I’d prefer to see more of those papers published rather than being resubmitted and rejected again and again. Sure, it wouldn’t be as prestigious as appearing at a conference, but it would let the authors move on and do something new.

As to fixing our journals, part of the problem is that they’re not prestigious. Nobody cares. The genius of the PVLDB idea is that you try to merge the good aspects of a journal (consistent multi-round reviewing) with the good aspects of a conference (fast turn-around from submission to publication).

Counterpoint: maybe my idea is too radical. As Doug pointed out, the rest of the academic database world still publishes at “traditional” conferences. Maybe we should just convert one of our big four conferences to a straight-up copy of the PVLDB model and see how that goes. If successful, it could grow into my FPCS idea.

I do agree with Doug’s comment about the need for diversity in PCs. Perhaps one way to handle this concern would be to give the authors an option to “reboot” their paper, enabling them to purge the comments given by an overly negative PC and start over with the next PC. The original proposal is still useful because many of the rejects may be “weak” and the authors could easily justify to the same set of reviewers that they have addressed all concerns (so only a few authors would be motivated to use the reboot process). However, if the authors feel that the reviews are overly critical, they can simply reboot the process and start over. The number of reboots could be limited to one or two to prevent abuse and conserve PC resources.

Today, every time you resubmit a paper, you effectively reboot. It’s sensible that authors deserve some control in this matter. Somehow, though, anything we do needs to account for the limited attention span of PC members. I’ve lost count of how many times I’ve reviewed substantially identical submissions in successive PC meetings, neither of which deserved to be published. Where’s the mechanism to say enough is enough?

I feel like our field has matured enough to now mimic the publication model of other fields. Let journals be gatekeepers of science, and let conferences be venues for social networking and presenting papers/results. The common refrain is that journals take years for publication. But that is not necessarily true in today’s world. TISSEC already seems to be turning around reviews in about 2-3 months (if reviewers are cooperative), and the mean number of cycles to acceptance will be similar to the federated proposal here. Once a paper is accepted, it can go up on ArXiv or some such popular repository. Papers are available soon, they are peer reviewed rigorously, and will become part of printed proceedings. Win-win-win?

So that being said, what do people think about the PLoS One model (see links below), where reviewers are gatekeepers of “good science” without comment on novelty? That way papers that are technically sound make it through peer review quickly, and then the community decides what is novel and what is not. Many good papers at these big four conferences get rejected based on 3 or so people conferring a judgment on “novelty”. In today’s world, why rely on 3 people to make this call on “novelty”? If a paper is novel and interesting, it will get more buzz, more citations, and spur more research. Boring papers (but technically sound) will be forgotten.

To some extent, a not-terribly-selective journal plus a selective second phase of conference invitations is one logical extreme of my FPCS idea. You basically have two knobs: how selective you want to be in the initial submission phase to FPCS, and how selective you want to be when conferences take all, some, or few of the FPCS acceptances.

I propose that FPCS be more selective than PLoS One, and that a second level of down-selection happen for conference presentations, but the exact ratios could well be fine-tuned later on.

This may “slow down” our field, but I don’t really see that as a bad thing. One interesting retrospective question to ask is how many papers at each of the big four security conferences each year are earth-shattering, game-changing results (5 out of 30?) — and how many are solid or otherwise incremental pieces of work that probably belong as deltas to a journal publication.

I like your emphasis on trying to balance speed of publication with promotion of higher-quality material.

Freedom to Tinker is hosted by Princeton's Center for Information Technology Policy, a research center that studies digital technologies in public life. Here you'll find comment and analysis from the digital frontier, written by the Center's faculty, students, and friends.