June 23, 2010

Collegiate Lying Assessment?

I should be asleep, but it's summer, we've released our teenage son from stricter bedtimes, and he is practicing sax. Most of the tasks on my plate require a little more concentration than I can muster after midnight, so I will write instead about lying--or about whether we can tell much about someone's skills when we deliberately put them in a decontextualized situation where either the situation is a known lie or the individual in question can get ahead by lying.

What put that into my head was the description of the Collegiate Learning Assessment (CLA) published by the creators of the test. The CLA is one of three supposed assessments of college learning offered as options within part of the Voluntary System of Accountability, and apart from the controversies over generic assessments of college graduates' learning and the mediocre statistical properties of the supposed value-added measures of non-longitudinal samples, it's important to look at the details of the assessments themselves.

On its face, the CLA looks like a plausible assessment of reasoning skills in a written context. As described by its creators, the CLA sometimes consists of a performance task keyed to a simulated case with attendant (fictional) documents, and sometimes it is a prompt to critique a specific argument. The samples provided were both from public policy--specifically, crime. Thus far, it looks something like a cross between high school debate and the AP history "document-based questions." I've constructed some assignments around fictional cases as a way to fine-tune what students have to confront and how it ties in to the issues they need to address. And it is common enough for essay prompts to quote someone's opinion on the topic at hand and ask for a critical assessment. As I said, it's plausible on its face.

But a funny thing happens once you remove either type of task from the subject in which it's embedded: those who are rating student responses do not have the substantive expertise to check student assertions. If someone responds to a simulated case in my class with statements about education research that are clear misunderstandings of course material, they're not going to get an A. Same with a response to a "please evaluate this statement" prompt. With the CLA, however, there is no such check unless the human rater happens to have substantive expertise aligned with the prompt (in the samples, criminology, sociology, or government). And even there, the scoring guidelines appear to ignore the veracity of student statements. It is entirely about whether someone can construct or criticize an argument in response to prompts.

In this particular case (with a prompt about crime policy), suppose a student lied about criminology research--made up four names and said that they were famous criminologists who had conducted research on effective deterrents. How should such a response be scored? I think I know the answer, because K-12 students in Florida are sometimes encouraged to make up details for the state's writing exam. As far as I am aware, such fabrication is rewarded as success in providing "artistic verisimilitude to an otherwise bald and unconvincing narrative" (W. S. Gilbert, The Mikado). If you know something to the contrary about the CLA, please point me to it in comments, but nothing I've read thus far is particularly reassuring on this point.

Maybe that's what we should be doing in college, producing Sophists who can turn a nice phrase and fake their way through school, through job interviews, through a daily three-hour radio political talk show, or through professional reports about things like bridge safety and oil-drilling backup systems.

Or maybe we should acknowledge that if it's to have any value, a general-education program has to have some substance, and assessments of its success need to be rooted in the areas where it putatively requires some learning. Not writing and reasoning in general, but writing and reasoning about the stuff that's in the gen-ed curriculum.