Comments, observations and thoughts from two left coast bloggers on applied statistics, higher education and epidemiology. Joseph is a new assistant professor. Mark is a marketing statistician and former math teacher.

Tuesday, October 14, 2014

[I have the nagging feeling that I'm not using the proper terminology with the following but the underlying concepts should be clear enough. At least for a blog post.]

Let's talk about three levels of selection effects:

The first is initial selection. At this level, certain traits of potential subjects influence the likelihood of their being included in the study. If you ask for volunteers in person, you will end up underrepresenting shy people. If you use mail surveys, you will underrepresent the homeless.
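A quick toy simulation makes the point. The numbers here are mine, purely for illustration: suppose 30% of the population is shy, and shy people are much less likely to volunteer when asked in person.

```python
import random

random.seed(1)

# Illustrative assumption: 30% of the population is "shy".
population = ["shy" if random.random() < 0.3 else "outgoing"
              for _ in range(100_000)]

# Illustrative assumption: shy people volunteer far less often.
volunteer_prob = {"shy": 0.05, "outgoing": 0.25}
sample = [p for p in population if random.random() < volunteer_prob[p]]

pop_shy = population.count("shy") / len(population)
sample_shy = sample.count("shy") / len(sample)
print(f"shy share, population: {pop_shy:.1%}")
print(f"shy share, volunteers: {sample_shy:.1%}")
```

With these made-up rates, shy people make up roughly 30% of the population but only around 8% of the volunteer sample, even though nothing about the study itself excluded them.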

The second level comes after a study starts. You will frequently lose subjects over time. This type of selection is particularly dangerous because you cannot assume that the likelihood of dropping out is independent of the target variable. The issue comes up all the time in medical studies. For serious conditions, a turn for the worse can make it extremely difficult to continue treatment. The result is that the people who stick around till the end of the study are far more likely to be those who were getting better.
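Here's a sketch of how that attrition distorts results, again with made-up numbers: each subject's true change in health averages out to zero, but subjects who are worsening drop out more often than subjects who are improving.

```python
import random

random.seed(42)

# Each subject has a true change in health score; the population
# average change is zero (no real treatment effect).
n = 10_000
true_changes = [random.gauss(0, 1) for _ in range(n)]

completers = []
for change in true_changes:
    # Illustrative dropout rule: worsening subjects drop out 60% of
    # the time, improving subjects only 10% of the time.
    drop_prob = 0.6 if change < 0 else 0.1
    if random.random() > drop_prob:
        completers.append(change)

true_mean = sum(true_changes) / len(true_changes)
observed_mean = sum(completers) / len(completers)
print(f"true mean change:      {true_mean:+.3f}")
print(f"mean among completers: {observed_mean:+.3f}")
```

The treatment does nothing, yet the completers show a clearly positive average change, because the analysis only sees the people who stuck around.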

(Up until now, the types of selection bias we have discussed, though potentially serious, are generally not deliberate. Their consequences are unpredictable and they happen to even the best and most conscientious of researchers. That is no longer the case with level three.)

The third level concerns attempts to manipulate attrition so as to affect the results of a study. In these cases, researchers will attempt to get rid of those subjects who are likely to drag down the average. This is blatant data cooking and it can be remarkably effective. In school administration, the term of art is "counseling out." It is shockingly widespread, particularly among the "no excuses" charter schools.

The effect of this practice on kids can be brutal but that is a topic for another post. What interests us here are the statistical concerns; what are the analytic implications of this policy? In terms of direction, the answer is simple: schools that engage in these policies will see their test scores artificially inflated. In terms of magnitude, there is really no telling. The potential for distortion here is huge, particularly when you take into account the possibility of peer effects.
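To get a rough feel for the direction of the distortion (the magnitude, as noted, depends on details we can't know), here is a toy calculation with invented scores: a cohort of 73 students, of whom only the top-scoring 32 remain by graduation. This deliberately ignores peer effects, which would likely make things worse.

```python
import random

random.seed(0)

# Invented test scores on a SAT-like scale for a cohort of 73.
cohort = [random.gauss(500, 100) for _ in range(73)]
full_mean = sum(cohort) / len(cohort)

# Worst case "counseling out": only the 32 highest scorers remain.
survivors = sorted(cohort, reverse=True)[:32]
survivor_mean = sum(survivors) / len(survivors)

print(f"mean, full cohort of 73: {full_mean:.1f}")
print(f"mean, remaining 32:      {survivor_mean:.1f}")
```

Even with no peer effects and no actual improvement in instruction, the surviving group's average jumps substantially above the cohort's true average.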

Put bluntly, in cases like this ("The first Success graduating class, for example, had just 32 students. When they started first grade in August 2006, those pupils were among 73 enrolled at the school"), data showing above-average results are almost meaningless.