It’s reviewing season for the summer conferences, so here’s something that’s on my mind as I’m doing my reviews.

One crappy thing that happens a lot in machine learning research is that researchers do non-scientific things like over-claiming, staking ownership, and designing bad experiments. We end up with paper after paper, each claiming to present the best method, with cherry-picked experiments that only demonstrate that the authors can draw prettier curves than other authors.

Sometimes authors use phrases like “our method” a lot in their description of the approach they’re demonstrating. Sometimes I even see tables or plots describing the final results from experiments where the legend entries are “our method,” “So and so’s method,” “SVM,” etc. This type of naming hints at a lack of objectivity.

Naming the proposed method is usually better, especially when the name actually describes the thing (so not an acronym that uses a letter from the middle of one of the words… c’mon people). Then the authors become scientists trying to understand the properties of some approach they discovered. And yes, they still get credit for discovering it, they just get fewer eye rolls.

This attitude also encourages poor experiment design. As computer scientists, we should want to understand the behavior of algorithms, so really good experiments would test many hypotheses about how the new algorithm performs under different conditions. We want to understand the strengths, weaknesses, and tradeoffs in comparison to other known methods. But many experiments in papers only test one hypothesis: “our method is the best method ever and you should purchase it.”

This problem is bad enough that I almost never trust the results of experiments in papers; at best I treat them as synthetic sanity checks, even when they use real data.

I’m certainly also quite guilty of this unscientific attitude and behavior. It’s very hard to avoid. On one hand, as scientists, we want to advance the world’s knowledge on machine learning, but on the other hand, as people who do science for a living, we want credit for advancing the world’s knowledge. That often leads to our papers reading more like patent applications than descriptions of scientific discovery. Yuck.

In conclusion, I’ve pointed out an annoyance and proposed no great solution for it. So I guess this qualifies as just ranting. But my method of pointing out this problem improves upon the state-of-the-art method by so-and-so et al. by 11%.