Aside from regression coefficients, what are commonly used approaches to measure one variable's “sensitivity” to another variable?

Last thought: I did show them the Achen paper that I linked above, which provides fairly comprehensive examples of how the boilerplate regression approach can go wrong. They acknowledged both that it could go wrong and that there was not much theoretical reason why it should work correctly in the first place ... and at the same time essentially said they did not care, because they wanted something expressed as regression coefficients regardless of the ramifications of that sort of model. And these were highly educated veterans running a long-standing, successful company. :/
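One regression-free sensitivity measure the question invites is rank correlation: it asks only whether y tends to move with x, without assuming the linear functional form behind a regression coefficient. A minimal sketch with toy data (the helper names and the example values are illustrative, not from the discussion):

```python
# Spearman rank correlation as a "sensitivity" measure that does not
# assume linearity, unlike a regression slope.

def ranks(values):
    """Rank data from 1..n (no tie handling in this toy sketch)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation computed on the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# A monotone but strongly nonlinear relationship: a fitted linear slope
# misrepresents it, while rank correlation reports perfect monotone
# sensitivity (rho = 1 for any strictly increasing relationship).
x = [1, 2, 3, 4, 5]
y = [v ** 3 for v in x]
print(spearman(x, y))  # 1.0
```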

How to assess statistical significance of the accuracy of a classifier?

And in some application domains, say financial markets, where you get to use the classifier in many, many roughly independent cases, being just a bit better than chance (R-squareds of 11% or 12% are considered great) can mean a lot. In those cases, even a boosted classifier with an R-squared of 15% might be considered very good -- in which case it really matters whether you can statistically resolve that the weak classifiers are definitely better than guessing.
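For the "better than guessing" question on a binary task, an exact one-sided binomial test is the standard tool. A minimal sketch, assuming n roughly independent cases and a 50% chance rate (the counts below are made-up illustrations, not from the thread):

```python
# Exact one-sided binomial test: is accuracy k/n on n independent
# binary decisions better than guessing at rate p?
from math import comb

def binom_pvalue_greater(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): exact one-sided p-value."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# A weak classifier that is right 560 times out of 1000 independent
# cases: only 56% accuracy, yet decisively better than a coin flip.
p = binom_pvalue_greater(560, 1000)
print(p < 0.001)  # True
```

With many independent cases, even a small edge over chance becomes statistically unambiguous, which is exactly the financial-markets point above.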



How to assess statistical significance of the accuracy of a classifier?

Not if you are boosting a bunch of weak classifiers, which is a very common activity. You may care about discrimination once you reach the fully boosted final classifier, but there is a lot of work between the start and the finish, and demonstrating that a complicated classifier empirically performs better than chance is important.
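The reason those barely-better-than-chance weak classifiers are worth certifying can be sketched with a Condorcet-style calculation: under the strong (and here purely illustrative) assumption that their errors are independent, a majority vote of m classifiers, each correct with probability 0.55, is right far more often than any single one:

```python
# Why weak-but-real edges matter for boosting-style combination:
# exact accuracy of a majority vote of m independent voters, each
# correct with probability p (independence is an idealization).
from math import comb

def majority_vote_accuracy(m, p):
    """P(more than m/2 of m independent p-accurate voters are right)."""
    need = m // 2 + 1  # votes needed for a correct majority (m odd)
    return sum(comb(m, k) * p**k * (1 - p)**(m - k)
               for k in range(need, m + 1))

for m in (1, 11, 101):
    print(m, round(majority_vote_accuracy(m, 0.55), 3))
```

The accuracy climbs steadily with m, but only if each voter is genuinely above chance; a voter at exactly 50% contributes nothing, which is why statistically resolving "better than guessing" for each weak classifier matters.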