Friday, April 15, 2016

Some time ago, I got to attend a dinner for undergraduates who had completed an honors thesis in psychology. Each student's faculty advisor would stand up and say some nice things about them.

The advisors would praise students for their motivation, their ideas, their brilliance, and so on. And then they would say something about the student's research results.

For some students, the advisor would say, with regret, that the student's idea hadn't borne fruit. "It was a great idea, but the data didn't work out..." they'd grimace, before concluding, "Anyway, I'm sure you'll do great." In these cases one knows that the research project is headed for the dustbin.

For other students, the advisor would gush, "So-and-so's an incredible student, they ran the best project, we got some really great data, and we're submitting an article to [Journal X]."

Somewhere in this, one gets the impression that the significance of results indicates the quality of a research assistant. Significant results are headed for the journals, while nonsignificant results are rewarded with a halfhearted, "Well, you tried."

I suspect that there is a heuristic at play that goes something like this: Effect size is a ratio of signal to noise. Good RAs collect clean data, while bad RAs collect noisy data. Therefore, a good RA will find significant results, while a bad RA might not.

But that, of course, assumes there is signal to be found. If the null is true, there's no signal to detect, no matter how precisely your RAs measure.
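The point can be checked with a quick simulation. The sketch below (my own illustration, not from the original post; the function names and parameter values are invented for the example) runs many two-group experiments in which the null is exactly true, once with very "clean" data and once with very noisy data, and counts how often a two-sided test rejects at alpha = .05. Under a true null, both the careful and the sloppy RA land significant results at roughly the nominal 5% rate.

```python
import math
import random
import statistics

def two_sample_p(a, b):
    """Approximate two-sided p-value for a difference in means.

    Uses a Welch-style z statistic with a normal approximation,
    which is adequate for the moderately large samples used below.
    """
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    # Two-sided tail probability of a standard normal.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def false_positive_rate(noise_sd, n=50, trials=2000, alpha=0.05, seed=0):
    """Fraction of null experiments (equal group means) with p < alpha."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Both groups drawn from the same distribution: the null is true.
        a = [rng.gauss(0, noise_sd) for _ in range(n)]
        b = [rng.gauss(0, noise_sd) for _ in range(n)]
        if two_sample_p(a, b) < alpha:
            hits += 1
    return hits / trials

# The "good" RA collects clean data; the "bad" RA collects noisy data.
# Both false-positive rates come out near 0.05.
print(false_positive_rate(noise_sd=0.1, seed=1))  # precise measurements
print(false_positive_rate(noise_sd=5.0, seed=2))  # sloppy measurements
```

Noise level changes the power to detect a real effect, but when there is no effect, the rejection rate is pinned at alpha regardless of how clean the data are.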

In any case, as unfair as it is, it's probably good for the undergrads to learn how the system works. But I'm hoping that at the next such banquet, the statistical significance of an undergrad's research results will have little to do with their perceived competence.