
Last night, I finally finished a marathon month-plus of reviewing for machine learning and machine-learning-adjacent conferences. Because of my own poor calendar organization, I foolishly agreed to join the program committees for IJCAI 2015 (Machine Learning Track), KDD 2015, ICML 2015, and UAI 2015, all of which had reviewing periods during March and the first bit of April.

My paper assignments were six for IJCAI, five for KDD, six for ICML, and five for UAI. While reviewing these 22 papers, I recorded my initial overall recommendation (prior to discussion and author response) for each one, just to measure how I tend to score papers. I figured I’d post some of these recordings here, with the major caveat that these are tiny sample sizes, heavily biased by the papers and topics I like to bid on. I’m also converting all scores to the scale [strong reject, reject, weak reject, weak accept, strong accept], both to simplify my data and to muddy it up a bit, so no smartypants can de-anonymize my reviews based on this silly blog post.

For IJCAI, my recommendations for my six papers were one reject, one weak reject, three weak accepts, and one strong accept.

For KDD, my recommendations for my five papers were three rejects, one weak reject, and one strong accept.

For ICML, my recommendations for my six papers were two weak rejects, three weak accepts, and one strong accept.

For UAI, my recommendations for my five papers were two rejects, two weak rejects, and one weak accept.

Overall, I recommended six rejects, six weak rejects, seven weak accepts, and three strong accepts; I gave zero strong reject recommendations. If my initial vote were the only one that counted, the accept rates would be 67% for IJCAI (4/6), 20% for KDD (1/5), 67% for ICML (4/6), and 20% for UAI (1/5). Overall, my acceptance rate was a rather high 45% (10/22).
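
Since this is all just counting, here is a quick Python sketch to double-check the arithmetic. The lists are simply my per-paper recommendations from above, transcribed by hand:

```python
# Sanity check of the accept rates above.
# Each list holds my initial per-paper recommendations for that conference,
# transcribed from the counts reported in this post.
scores = {
    "IJCAI": ["reject", "weak reject", "weak accept", "weak accept",
              "weak accept", "strong accept"],
    "KDD": ["reject", "reject", "reject", "weak reject", "strong accept"],
    "ICML": ["weak reject", "weak reject", "weak accept", "weak accept",
             "weak accept", "strong accept"],
    "UAI": ["reject", "reject", "weak reject", "weak reject", "weak accept"],
}

accepting = {"weak accept", "strong accept"}

total_accepts = total_papers = 0
for conf, recs in scores.items():
    accepts = sum(rec in accepting for rec in recs)
    total_accepts += accepts
    total_papers += len(recs)
    print(f"{conf}: {accepts}/{len(recs)} = {accepts / len(recs):.0%}")

print(f"Overall: {total_accepts}/{total_papers} = "
      f"{total_accepts / total_papers:.0%}")
```

Running it reproduces the numbers above: 67%, 20%, 67%, 20%, and 45% overall.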

So what is the takeaway message? I’m not sure; this still isn’t enough data to tell much of anything. Let me attempt some claims anyway.

The numbers suggest that I like ICML and IJCAI papers better than UAI and KDD papers. I would be pretty surprised if this were true and not just randomness, and it’s hard to tell, since the IJCAI ML track is a brand-new idea. I usually think of myself as liking UAI papers the most among the medium-sized ML conferences.

The numbers suggest that I like ICML papers about graphical models, structured prediction, and relational learning, since those are the topic areas I usually bid on and that the Toronto Paper Matching system usually assigns to me. This is plausible, but it’s not consistent with my low accept rate for UAI.

By a similar argument, the numbers suggest that I don’t like KDD papers on graph mining and relational models. This is also plausible, but surprising. I really like the problem area of mining complex network data, but maybe I’m often unsatisfied by the methods people propose; it’s possible I’m too critical of this kind of work.

Sorry, these are all pretty weak analyses; the sample size is just too small. If I want to understand my own biases better, I need to volunteer for even more reviewing (note to self: do not do this) or keep better records from previous years of reviewing.

Only one thing is absolutely clear from this month of reading all these submissions: seriously, everyone needs to stop using the word “employ.”