
10 February 2016

Mount St. Mary's, Metrics, and Perverse Incentives

by Ed Kazarian

As I remarked on Facebook yesterday, there is a lot of spectacular mendacity involved in the current crisis at Mount Saint Mary's University. As of yesterday, the University's provost has been forced to resign, and two faculty members have been summarily fired, one a tenured associate professor of philosophy and another an untenured professor of law. The justification for these firings, where available, made explicit reference to violating a "duty of loyalty," which adds to the already overwhelming impression that they come in retaliation for the exposure of the university president's plan to cull incoming students deemed likely to leave school without completing their first year before the school was required to report enrollment data to the federal government.* As a whole, the case is outrageous—and one hopes that these firings will be reversed, that the president and any board members who engineered them will be forced to resign, and that the principles of academic freedom, tenure, and the university's contractual obligations to its employees and its pedagogical obligations to its students that have been abrogated in the whole mess will be restored, reaffirmed, and strengthened. (Anyone who hasn't should consider signing the petition begun by John Schwenkler, located here.)

But while our attention is held by outrage over what is happening to these faculty and the cavalier attitude toward students reflected in the plan, we run the risk of overlooking the way that this case is an instance of a much more general problem. With the rise of various forms of quantitative assessment protocols (many of which, in practice, have been implemented ad hoc, and not always by folks with the training or expertise to produce reliable social science), we have also gotten a substantial increase in pressure to improve performance on such metrics, and thus to improve one's position on the rankings that are inevitably derived from them—rankings which have very real consequences for institutions, both in terms of their ability to recruit students (and their tuition) and in terms of other funding flows, like federal student aid money.

The Mount Saint Mary's president's program was intended for precisely such a purpose, in this case in response to pressure from the federal government to improve performance on a 'retention' metric. The plan was to offer students deemed likely to negatively impact the university's performance on that metric** a combination of pressure and inducements (including a refund of tuition) to leave, or to allow themselves to be dismissed, before they would have 'counted.' As such, it's apparently not fraudulent, since the students in question would be off the books before reporting of enrollments was required. However, it is obviously contrary both to the underlying point of trying to measure institutional performance vis-à-vis 'retention' and to the mission of an educational institution: ensuring that admitted students are moving through—or, more narrowly in this case, remaining enrolled in—a degree program.

But then, it seems as if the emphasis on metrics isn't doing what it is supposed to, namely to create pressure on institutions to improve the way they serve students. Instead, it does much the same thing that similar emphasis has done in primary and secondary education, where charter schools that have the ability to select for students who are likely to perform well on various standardized measures will do so, leaving other institutions to deal with the consequences of enrolling a disproportionately high percentage of students likely to struggle.* In Philadelphia, where I live, this is resulting in the slow reduction of the non-charter public school system to a rump composed largely of magnet institutions and more or less 'failing' schools. In higher ed, we can already see signs that the link between metrics (and rankings) and funding is pushing institutions to abandon or abrogate the elements of their missions that involve serving student populations that are likely to negatively impact ranking performance.

The Mount Saint Mary's case amounts, indeed, to precisely that—regardless of the fact that the students are being given full refunds on tuition, etc.; they are being quite comprehensively abandoned. And in many respects, it may not be the worst possible case. These students, at least, are not being left on the hook for substantial tuition expenses (though those are not the only expenses they will have incurred coming to college in the first place).

None of this is to say that good metrics, well designed and properly used, aren't important tools in figuring out what does and does not work in the way we are serving our students. But in a climate of pervasive insecurity both for institutions as a whole and for the various smaller constituencies within our institutions (departments, programs, individual faculty members, etc.), the link between metrics, rankings, and financing (and therefore institutional survival), seems to have a high potential for institutional toxicity.

** People absolutely should read Adriel Trott's excellent demolition of the rationale behind the attempt to determine, by means of a survey of students' mindsets in their first two weeks of school, whether they are likely to succeed in college, or even their first year. This program seems to be an excellent example of the tendency for junk social or psychological 'science' to get folded into this whole metric-based approach to things, to which I alluded above.
