In which a veteran of cultural studies seminars in the 1990s moves into academic administration and finds himself a married suburban father of two. Foucault, plus lawn care. Comments are welcome. Comments for general readership can be posted directly after the blog entry. For private comments, I can be reached at deandad at gmail dot com. The opinions expressed here are my own (or those of commenters), and not those of my (unnamed) employer.

Monday, December 12, 2011

Punching Above Their Weight

Yes, I know, sports metaphors are inherently suspect, since they’re culturally coded as ‘male’ and therefore patriarchal and they subtly reinscribe the very blah blah blah.

But sometimes they’re really useful.

In boxing, “punching above his weight” refers to a boxer whose strength is greater than you would expect for someone in his weight class. Since size and power are roughly related, a middleweight who hits with the power of a heavyweight is said to punch above his weight. It’s a compliment; in essence, it’s saying that you overachieve relative to your resources.

Kevin Carey’s recent piece on picking high-performing community colleges reminded me of this old truism. The best performing community colleges aren’t necessarily those with the highest graduation or transfer rates; they’re those that consistently punch above their weight.

In this case, I’d take “weight” to refer to a combination of student characteristics and budget. Given a student body with demographic profile x -- age, race, income, even gender -- and a budget of z dollars per student, in general we should expect a grad/transfer rate of y. If a college does notably better than y, it’s punching above its weight, and it must be doing something right. If it does notably worse than y, it’s time to ask some difficult questions.
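The reasoning above amounts to fitting a predictive model and looking at residuals. A minimal sketch, with entirely made-up numbers (a real analysis would use something like IPEDS data and a far more careful specification):

```python
# Toy "expected completion rate" model: predict y from demographics and
# budget, then flag colleges whose actual rate beats the prediction.
# All figures below are invented for illustration.
import numpy as np

# Each row: [pct_low_income, pct_over_25, budget_per_student_in_$thousands]
X = np.array([
    [0.55, 0.40,  9.0],
    [0.30, 0.25, 12.0],
    [0.60, 0.50,  8.0],
    [0.20, 0.20, 14.0],
    [0.45, 0.35, 10.0],
])
# Observed grad/transfer rates for the same five colleges
actual = np.array([0.28, 0.41, 0.30, 0.46, 0.33])

# Fit a simple linear model by least squares (intercept included)
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, actual, rcond=None)
expected = A @ coef

# Positive residual = punching above your weight
residuals = actual - expected
for i, r in enumerate(residuals):
    verdict = "above" if r > 0 else "at or below"
    print(f"College {i}: punches {verdict} its weight ({r:+.3f})")
```

The point of the sketch is only the last two steps: the comparison that matters is actual minus expected, not the raw rate.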

In that formula, “y” will differ from college to college. For example, I’m not shocked to discover that community colleges in South Dakota have some of the highest graduation rates in the country. Given the paucity of four-year options out there, I’d expect that many of the higher-achieving high school grads start out at the local community college, since it’s often the only game in town. In the more densely populated Northeast, by comparison, four-year colleges are everywhere, so community colleges tend to draw more uniformly from the lower ranks of the high schools. That fact, all by itself, changes the “y.”

Sports fans have been doing this kind of math for years. In baseball, for example, it’s easy enough to add up a team’s aggregate statistics and develop an “expected wins” total. A team that’s well above its expected wins total is either clever or lucky; a team well below is either snakebit or inept. By that measure, it’s entirely possible to say that the manager of a third-place team did a far better job than the manager of the second-place team. If the third place team didn’t have much to work with, and the second place team underachieved relative to its gargantuan payroll, then just looking at wins and losses won’t tell the story.
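The best-known version of this in baseball is Bill James’s “Pythagorean expectation,” which estimates winning percentage from runs scored and runs allowed; a quick sketch:

```python
# Pythagorean expectation: expected wins ~= G * RS^2 / (RS^2 + RA^2),
# where RS = runs scored, RA = runs allowed, G = games in the season.
def pythagorean_wins(runs_scored: int, runs_allowed: int, games: int = 162) -> float:
    rs2, ra2 = runs_scored ** 2, runs_allowed ** 2
    return games * rs2 / (rs2 + ra2)

# A team that outscores its opponents 800-700 over a season "should"
# win about 92 games; a record far from that suggests luck (good or bad).
print(round(pythagorean_wins(800, 700), 1))
```

The gap between actual and expected wins is exactly the “clever or lucky / snakebit or inept” measure described above.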

I’d like to find the colleges that are consistently doing better than their expected wins, and find out how they’re doing it.

I don’t hold out a lot of hope for this to catch on, even though it seems like a no-brainer. The winners in the current system have no incentive at all to upend the rules, especially when they probably wouldn’t fare terribly well. (Put differently: let me swap the student body at my college with the student body at Swarthmore, and let’s see what happens to graduation rates at each.) But if we’re serious about improving the performance of the masses, we’re not going to achieve that by competing to see who does the best job of keeping the masses out. Given that nearly half of the undergrads in America are at community colleges, coming up with measures that are contextually relevant seems like an obvious good. If you have x percent of students in poverty, and y percent who don’t speak English, and z percent over the age of 25, and you do a far better than predicted job for those students, then you’ve really achieved something. I’d like to find out which colleges fit that profile, and what they’re doing that I can steal.

I suppose one could object that this amounts to a kind of profiling, but I’d argue that it’s much more realistic and useful than pointing out to someone in a low-income part of the northeast that racially homogeneous states without four-year colleges achieve higher grad rates. The point is not to remake every college into a racially homogeneous cluster of high achievers; too many would be left behind. Telling me I could improve the college’s numbers by changing its demographics doesn’t help me at all; telling me who does better with the same demographics at least gives me a place to start.

Anonymous is correct that the analysis of a high-quality data set can be done with simple tools. Our college does that, and it helps us see our weaknesses.

However, there is no National Common Data Set, only lots of separate ones that vary from state to state and college to college. For example, there is no nationally agreed-upon placement test for developmental ed that is used for every entering student at every college. There is probably no tag for "graduated from HS Y" or "transferred to/from college X", let alone whether they transferred in good standing or not. (One of our biggest successes is taking a student who flunked out of one school to an AA so they can transfer to a four-year program.)

Great idea, Dean Dad. I would add a developmental placement scale score (figured from SAT or ACT or one of several placement tests) to your list of categories. Every school should have that info, which is easier to get than a measure of "poverty" but would only be shared if people had confidence it would not be misused. (Much the same can be said of "outcomes" data.)

It isn't profiling. We need to know where to put our resources. Is our failure with developmental students due to teaching or curriculum or attitudes or something specific to how our state lies to our HS graduates about being ready for college or even just being in a classroom? Each has a different solution.

I would add a developmental placement scale score (figured from SAT or ACT or one of several placement tests) to your list of categories. Every school should have that info, which is easier to get than a measure of "poverty" but would only be shared if people had confidence it would not be misused.

I doubt most of my students have taken any of these types of tests. Most CC students don't, or if they have, many of the scores are probably from so long ago that they wouldn't be that useful.

Where would you get the baseline data for Value Added? Even our outcomes assessment process looks only at the end point, not the change.

Anonymous @5:00PM -

I'm shocked. EVERY student entering our CC must take our placement test if they don't have an acceptable current score for English and math from a source such as the SAT or ACT or a current placement test that we accept.

You must have one heck of a HS system where you are if every high school graduate is ready for college algebra!

Don't most colleges have some sort of predicted outcome (based on high school performance, college board scores, and the like) as part of the profile for each student admitted? (I remember Wisconsin having such a thing in 1971.)

If so, isn't evidence that students systematically exceed those expectations evidence of something being added by the college?

We have an in-house placement exam for math, science and English, but I don't know whether those scores would correlate with outcomes the way a standardized exam like the SAT would. I read that comment as if the poster thought that CC students had SAT scores, which is highly doubtful in my neck of the woods.

Actually, South Dakota doesn't have a community college system. There are some technical institutes that fill part of that niche, but there is only one "community college," and it's actually a private, non-profit institution. Sorry to be picky...

There's a Chronicle article on personalization software in higher ed that makes me wonder how long before selective colleges (or even "open admission" colleges that nonetheless target certain people for advertising or scholarships) figure out how to estimate which students will punch above their weights.

To continue with the boxing analogy: another issue that CCs face is whether to stick with a ten-round format or to extend the match indefinitely. This is pretty relevant to the debate now going on in California. Aiming toward a time-limited format that favors students who need less remediation and who are ready to sign up for a specific goal within a semester or a year of enrollment sounds appealing. There is the counter-proposal that this limits opportunity for the less-prepared (i.e. individual students who punch below their weight, at least for the present). Tough choices.