Conference Acceptance Rates

Acceptance rates are one of the key ways of measuring the quality of conferences. I think it’s time we collect that data for conferences relevant to visualization. I have put together a page for this, and have found some of that data. But I need your help to fill in the gaps and suggest other conferences that would be of interest.
I originally only wanted to include the VisWeek conferences, but then I figured it would make sense to extend the list a little bit. So currently, there is InfoVis, Vis, VAST, EuroVis, AVI, UIST, and CHI. I am open to suggestions for further conferences to include, but I want to keep focusing on visualization. So even though CHI and UIST are included, I don’t want to expand into HCI much more.

There is also missing data. Especially for InfoVis, IEEE Xplore doesn’t have the prefaces for conferences before 2005. Some of the early Vis conferences and a few others are also missing. Please take a look at the list and if you have access to proceedings for years where numbers are missing, send them to me or leave a comment below.

Given that this is about visualization conferences, I figured it would make sense to also visualize the data. The first visualization shows acceptance rates from 1990 to 2010. Not every gap in the lines indicates missing data; the different conferences simply started in different years.

The following shows numbers of submissions. CHI’s increase in submissions is impressive; even more impressive is that they kept their acceptance rate nearly constant through this enormous growth.

The next task is to look into impact factors for visualization journals. That information seems to be much harder to come by though, unfortunately. But it would be very worthwhile.

Robert Kosara is Senior Research Scientist at Tableau Software, and formerly Associate Professor of Computer Science. His research focus is the communication of data using visualization. In addition to blogging, Robert also runs and tweets.

Comments

Does acceptance rate really correlate with conference quality? I've never heard a convincing argument for this.

Most of the conferences I know of with very low acceptance rates seem to suffer from the "in-crowd" syndrome, where they only accept papers from the same few people every year, no matter what those papers are.

Perhaps something connecting the papers & acceptance rates with the number of other papers using them as a reference (to find “foundation” papers) would be more useful.

There are certainly limits; a very low acceptance rate can be problematic. That's also the reason CHI didn't just decide to keep the number of papers constant and let the acceptance rate drop to 10% or so when submissions were increasing.

But it's still meaningful. There's no question that a conference with a 25% rate will have higher-quality papers than one with 40% or more. A rate around 20-25% is probably best: it's selective while still leaving room for new ideas. It's also a question of how democratic and open the community is, and whether it's controlled by a few powerful people.

Citation counts are another good metric, but much harder to come by. There’s also a PageRank-like approach that looks at how highly cited the papers are that are citing your paper, etc. But the problem with that is getting the data, parsing references, etc.
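The idea behind that PageRank-like approach can be sketched in a few lines: a paper's score depends on the scores of the papers citing it, so a citation from a highly cited paper counts for more than one from an obscure paper. Here is a minimal sketch over a toy citation graph; the paper names and the graph itself are hypothetical, and a real implementation would face exactly the data-gathering and reference-parsing problems mentioned above.

```python
def citation_rank(citations, damping=0.85, iterations=50):
    """PageRank-style scores. citations maps paper -> list of papers it cites."""
    papers = set(citations) | {p for refs in citations.values() for p in refs}
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in papers}
        for paper in papers:
            refs = citations.get(paper, [])
            if refs:
                # distribute this paper's weight among the papers it cites
                share = damping * rank[paper] / len(refs)
                for cited in refs:
                    new_rank[cited] += share
            else:
                # paper with no references: spread its weight evenly
                for p in papers:
                    new_rank[p] += damping * rank[paper] / n
        rank = new_rank
    return rank

# Hypothetical graph: papers A and B both cite C; C cites nothing.
scores = citation_rank({"A": ["C"], "B": ["C"], "C": []})
```

In this toy example C, being the only cited paper, ends up with the highest score, while A and B score equally, which is the behavior the metric is after.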

I am completely with you that extreme rates may be a hint that something is wrong with the conference. But what makes a good conference? If it is just a "meeting" of the "best" papers, we are only generating CO2 by flying there. Seriously, though: meeting the right people, having the chance to bump into each other and exchange ideas, probably contributes far more to a good conference. Citation counts are probably another thing you have to believe in, though often the correlation you have been talking about is there.

Good academic work is not necessarily bound to playing according to the commonly established system. Or vice versa, there is far too much happening in the community which just pays homage to the system. Both your suggested measures are tightly bound to the system – which may certainly be a good orientation to start off.

Depending on your motivation level, you could ping the chairs to see if they have records for the missing years. It would be good to have the data, I’m quite irritated that they didn’t include it in the first place for the archival record.

Thanks also for the names of the chairs, I’m going to ask them for the missing acceptance rates. One problem I see with the move towards electronic proceedings is that they don’t seem to include those prefaces (at least VisWeek 2009 didn’t), and the digital libraries are sometimes missing pages.