3/15/2008

COLT has a call for open problems due March 21. I encourage anyone with a specifiable open problem to write it down and send it in. Just the effort of stating an open problem precisely and concisely has often been helpful in my own work toward a solution, and there is a substantial chance someone else will solve it. To increase the chance that someone takes it up, you can even put a bounty on the solution. (Perhaps I should raise the $500 bounty on the K-fold cross-validation problem, as it hasn’t yet been solved.)

Have you considered that your K-fold cross-validation problem may not be stated clearly enough, or expressed in a way that makes its importance clear? Having followed a few of your links on this problem and looked at a few slides, I must admit that it is still not clear to me what exactly you are after. In addition, throwing around controversial statements like “Everybody does CV. Nobody knows why” seems counterproductive. If there is a technique for which people have a good intuition of why they do it or should do it, this must be it. Penalised likelihoods, information criteria, large-margin methods… sure, not everybody understands why. But I’d think most CV practitioners have a good intuitive justification for why they do it. One may argue about how sound this intuition is, but it’s a far cry from “nobody knows why”.

I hear the complaint that the statement is overstated. From a mathematical viewpoint, I don’t think there is much overstatement, because we don’t have a very thorough analysis of CV compared to other methods. But for those looking from a practical viewpoint, I’m sure it often seems overstated, since CV is frequently effective in practice.

Based on what I know about cross-validation, I believe there is a fair chance that people’s intuition about CV is wrong a small-but-significant fraction of the time (perhaps 5%). This “failure once in a while” behavior is difficult to develop intuition for from empirical practice, but it would be revealed by a mathematical analysis.
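To make the “failure once in a while” behavior concrete, here is a minimal simulation sketch (my own illustration, not the open problem itself): K-fold CV is run repeatedly on pure-noise data, where labels are independent of features and no classifier can truly beat 50% accuracy, and we count how often CV nonetheless reports a much better score. The sample size, fold count, thresholds, and the 1-nearest-neighbour classifier are all arbitrary choices for illustration:

```python
import random

def kfold_cv_accuracy(xs, ys, k, rng):
    """Estimate accuracy of a 1-nearest-neighbour classifier via K-fold CV."""
    n = len(xs)
    idx = list(range(n))
    rng.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    correct = 0
    for fold in folds:
        train = [i for i in idx if i not in fold]
        for i in fold:
            # 1-NN: predict the label of the closest training point.
            j = min(train, key=lambda t: abs(xs[t] - xs[i]))
            correct += ys[j] == ys[i]
    return correct / n

rng = random.Random(0)
n, k, trials = 20, 5, 200
estimates = []
for _ in range(trials):
    xs = [rng.random() for _ in range(n)]       # features: pure noise
    ys = [rng.randint(0, 1) for _ in range(n)]  # labels independent of features
    estimates.append(kfold_cv_accuracy(xs, ys, k, rng))

# True accuracy of any classifier on this data is 50%; count the runs
# where CV reports something substantially better.
optimistic = sum(e >= 0.7 for e in estimates) / trials
print(f"mean CV accuracy: {sum(estimates) / trials:.2f}, "
      f"fraction of runs with CV accuracy >= 0.70: {optimistic:.2f}")
```

The CV estimate is centered near the truth on average, but on small samples an individual run occasionally lands far from it; that rare failure mode is exactly what is hard to notice from everyday empirical use and what a mathematical analysis would quantify.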