Baidu and the ImageNet Challenge

The ImageNet challenge is a benchmark object-recognition task in computer vision that proved to be a game changer for the deep learning community.

In 2012 a group of researchers published a breakthrough result at NIPS on this challenge that significantly outperformed all previous attempts. Their approach used an old technique, known as a convolutional neural network, with some very clever advances on the engineering side and some innovative thinking about how regularisation might be performed.

That might be the end of the story, but of course it is not. A hitherto neglected technique has begun hitting the pages of leading newspapers such as the New York Times. A storm has been brewing.

With even better engineering and even more innovative thinking, the benchmark error has been whittled down by a series of big-name companies and leading lights in the field. Experts in deep learning are in high demand. Companies have massively redirected their investment strategies.

On Tuesday it was revealed that one of those results was obtained by breaking the rules of the system.

How did they break the rules?

The models are graded by taking a test, which gives you a score. Of course, if you take a test many times then sometimes, even if you are just guessing, you will score higher by chance. And that is what happened in the case of this result: the benchmark has strict rules about how many times you can take the test, and those rules were broken.
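The statistical point is worth making concrete: if you evaluate against a fixed test set often enough, the *best* score you observe drifts upward even when every attempt is pure guessing. Below is a minimal sketch in Python with entirely made-up numbers (a 5-class task, a 1,000-item test set, 200 submissions) — purely illustrative, nothing to do with the actual benchmark's classes or rules.

```python
import random

def best_of_many_submissions(n_test=1000, n_classes=5, n_submissions=200, seed=0):
    """Simulate a 'model' that guesses uniformly at random on every test item,
    and report the best accuracy observed across many repeated submissions."""
    rng = random.Random(seed)
    expected = 1 / n_classes  # true accuracy of a single round of pure guessing
    best = 0.0
    for _ in range(n_submissions):
        # Each item is correct with probability 1/n_classes.
        correct = sum(rng.randrange(n_classes) == 0 for _ in range(n_test))
        best = max(best, correct / n_test)
    return expected, best

expected, best = best_of_many_submissions()
print(f"true accuracy of guessing:       {expected:.3f}")
print(f"best score over 200 submissions: {best:.3f}")
```

With these numbers, a single guessing run is expected to score 20%, but the best of 200 runs will typically land a few points higher purely by luck of the draw — which is exactly why benchmarks cap the number of submissions.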

Who broke the rules? Is it a conspiracy?

Well, as far as I can see, it would be pretty easy for anyone who was under pressure to achieve the result to break the rules in the naive way in which they seem to have been broken. And fortunately the competition organisers discovered the rule breach (perhaps because it was so naively done). So no, I don’t think it is likely to be a grand conspiracy.

Unfortunately, the people who did it are Chinese. And now, for example, on this Engadget post, we are getting comments that are tantamount to 19th century Victorian racism.

I want to be very firm on this. As a community we have created a problem. It may have been a young individual, under pressure, who made a serious mistake in a high-stakes, multimillion-dollar game of “who’s got the deepest net”. He made the mistake in a paper that hadn’t even been reviewed, yet that paper was widely reported on in the leading media of our day. That doesn’t seem very healthy for our research.

My position is: “Get a grip people”. If the stakes weren’t so high then the fall wouldn’t be so great.

I can spend time criticising Baidu if I like. I worry about an internet search company that has such close ties to its own government that there may well be an open door between the two. A search company that is now investing so much money into its AI that it could give that government the detailed information on its people that would render any of our world’s historical secret police forces or networks of informers totally obsolete. One that makes 1984 look like a children’s nursery story. I worry also that other governments don’t seem to be looking on such a system with horror, but with envy. However, that’s an entirely different issue.

With regard to this issue of whether a company tried to cheat a competition that had probably already run its useful course, I’d like to point the finger back at us, the research community, for overhyping its relevance. We’ve placed people in a position where they feel so much pressure that they make such a silly decision. This is not a ‘blue laser’ achievement here, it’s a data set of pictures. And one that may have already served its (very valuable) purpose. Let’s not use it as the cornerstone of a modern witch hunt. Rather, let’s focus on the very real issue of a community that values such a result so highly.