November 30, 2005

Joann Robinson and E.D. Nixon

Tomorrow is the 50th anniversary of Rosa Parks' staying seated on a Montgomery, Alabama, bus. Since her death, journalists should have had a chance to absorb some of the lesser-known details of her life and the boycott. In particular, I hope that the commemorations honor the contributions of Joann (JoAnn?) Gibson Robinson and Edgar Daniel Nixon. Robinson was a teacher at Alabama State College, was head of the Women's Political Council at the time, and had herself been a victim of city bus drivers before. Nixon was a labor leader and head of the state Voters League and NAACP. Nixon advised the Women's Political Council in its preplanning for a boycott, and it was the WPC that decided not to start a boycott when other women had been arrested on city buses earlier in 1955.

I'd love to have been a fly on the wall at the ministers' meeting that day, when Nixon and Robinson convinced them to lead a long-term organization: "All right, these women and E.D. say we need a leader. Martin, you're going to be our leader, because you've only been here a short time, and it'll cost the rest of us dearly if we have to escape town." I know that's not precisely what happened, but I also know the myth's not quite right, either.

Robinson did write a memoir in the 1980s. Does anyone know if and where she's living today? She was born in 1912 and would be in her 90s.

Writing tics and reading idiosyncrasies

In one paragraph I'm reading this morning, a student wrote about courts' passing a decision. The student is clearly confusing legislation with judicial opinions (and two branches of government), though maybe it's a matter of our slippery language: Congress passes laws, and the Supreme Court passes down law. (There's also the unlikely possibility that the student considered the case the judicial equivalent of a kidney stone.)

But apart from conceptual confusion, student writing is chock-full of tics: habits that interfere with communication. Some tics are perennial, the weeds of passive voice and antecedentless pronoun, comma splice and homonym confusion. Some tics fall in odd patterns, though, such as the invasion of alien whereases that sprouted six years ago in West Central Florida, or the whilsts that my native Floridians use only in writing. (The last doesn't bother me when coming from my English friends.)

(To be honest, education journals are full of writing tics as well, from impact used as a transitive verb to the neologism rubric, which most people would understand better as grading criteria. Sometimes, though, such bad writing can inspire Bulwer-Lytton contest entries.)

I suspect some writing tics come from students' thinking that they need $5 words to impress teachers. And there may be some truth in the impression, in part because Florida's writing test does reward multisyllabic Latinate words. Such verbal prestidigitation may be oxymoronic, but it's fungible. I respect students who can explain abstract concepts in down-to-earth terms, because such writing shows that a student got it. But tics are habitual.

Then there are the odd uses of prepositions, and I'm not sure what to make of them. As Steven Pinker has written, verbs are little tyrants: once you choose them, the sentence often has to take a certain form. I think he used to lay as a paradigm of the tyrant verb. Not only is it transitive, but it requires an adverb: one cannot simply lay a book. But picking up on the nasty rules that verbs (and lesser tyrants) lay down requires both an ear for such patterns and enough exposure to writing. Maybe the nonstandard uses of prepositions suggest that a student is overreaching in language, trying to use the $5.15 verb without reading the directions. Or maybe they reflect considerable independent efforts at reading difficult material without the teaching guidance that can smooth one's learning and make it possible to pay attention to the language as well as the ideas.

Such independent efforts often result in another error that is more common with older students: overlaying experience and prejudices on a reading without considering the reading in itself. I've written about my concern with close readings and textuality before, and I'm still puzzled about how to teach that attention. One student in my honors-college class said she hated Ian Hacking's The Social Construction of What? because she couldn't hear his voice in her head, so she clearly has that capacity (and may be unable to shake the habit!). Is that mental listening a teachable skill?

[I]n the classroom, academic freedom rests on the notion of faculty expertise, ... It derives from values that attach to the distinctive role of the professional scholar, a member of a self-regulating corporate body whose job it is to certify that expertise. Academic freedom pertains to scholars as professionals, not individuals... Students do not have this kind of academic freedom and they ought not to be encouraged to believe that they do. [Emphasis added.]

This appeared in the same issue of Academe where Joseph Heathcott explained the inappropriateness of the guild analogy for graduate training. Scott has been the head of AAUP's Committee A, doing good work, and she's allowed the occasional mistake. This one's a doozy. You'd think that by now we'd realize that to the extent that academic freedom is a special form of free speech for individuals, it inheres in the institutional circumstances, not entirely in the person's characteristics. (If I quit my job and started working for a corporation, would I have special speech rights compared with my neighbor just because I have a Ph.D.?) Yes, faculty have authority over the class in important ways, but, sheesh, this is an inapt and particularly foolish bit of phrasing.

Then there's the strange case of John Daly, the Warren Community College adjunct faculty member who insulted a student at the institution in an e-mail exchange about the Iraq war and then was pressured to resign because his statement was politically incorrect, rude, and embarrassing to the college and, in addition, he's an adjunct and thus vulnerable. It's hard to pick a starting point for criticism; there are so many from which to choose. Seventy wrongs still don't make a right.

I turn around (okay, stop writing entries on academic freedom for a few months), and the world goes to pieces. What's wrong with you guys?

November 28, 2005

Growth models

All righthaving beaten a future article for Education Policy Analysis Archives halfway into shape, I'm taking some time for relaxation and my sidewise way of looking at education policy, or at what passes for it.

While a few journalists have had a reaction-fest with this, there has been no acknowledgment of the existing literature on so-called growth models, their political implications, or the gaps in that literature....

I'll state up front that it's fine to focus on political questions; moreover, I've argued in The Political Legacy of School Accountability Systems that the political questions are ultimately the important ones, and that it's impossible to have a technocratic solution to political problems. Just don't ignore the technical issues (and for those, see Linn, 2004). Haycock of the Education Trust is ultimately right about the focus on philosophical questions, regardless of whether I might agree with her on specifics.

Big political questions

So what are the policy/political questions? A few to consider:

The dilemma between setting absolute standards and focusing on improvements. As Hochschild and Scovronick (2003) have pointed out, there's a real tension between the two, and it's impossible to resolve it completely. On the one hand, there are concrete skills adults need to be decent citizens (yea, even productive ones). On the other hand, focusing entirely on absolute standards without acknowledging the work that many teachers do with students with low skills is unfair to the teachers who voluntarily choose to work in hard environments. And, no, I'm not going to take BS from either side claiming, on the one hand, that we need to be kind to kids (and deny them the skills they need??) or, on the other hand, that we need to take a No Excuses approach toward those lazy teachers (and who are you going to find to teach in high-poverty schools when the teachers you've insulted have left??).

The question of how much improvement to expect. Here, Bill Sanders' model (we'll take it on faith for the moment that he's accurately representing his model; more later on this point) is close to an average of one year's growth per year in school (see Ballou, Sanders, & Wright, 2004, for the most recent article on his approach). But for students who are behind either their peers or where we'd like them to be, Haycock is right: one year's growth is not enough (see Fuchs et al., 1993, for a more technical discussion and the National Center on Student Progress Monitoring for resources).

The tension between the public equity purposes of schooling and the private uses of schooling to gain or maintain advantages. Here's one thought experiment: Try telling wealthy suburban parents, "We want your kids to improve this year, but not too much, because we want poor kids in the city or older suburb nearby to catch up with your children in achievement and life chances." If anyone can keep a straight face while claiming the parents so told would just sit back and say, "Sure," then I have some land to sell you in Florida.

Where is intervention best applied? Andrew Rotherham's false dichotomy between demographic determinists and accountability hawks aside, arguments by David Berliner are about where to intervene to improve children's learning, not about giving up. (I should state here that of course I have heard teachers and some of my students fall into the trap of this dichotomy, but that's a constructed dynamic from which we can and must escape. To dismiss Berliner and others as if they fall into the trap is to shut off one escape route. Shame on those who carelessly elide the two.)

Assumptions that technocratically-triggered sanctions based on either growth or absolute formulae work. I have yet to be convinced that such a kick-in-the-pants effect is strong enough or without side effects. This is not to say that I don't believe in coercion. I am just a believer in shrewd coercion, not the application of statistical tubafors (you'll have to search for the term on that page).
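To make the how-much-improvement point above concrete, here's a minimal back-of-the-envelope sketch. Every number is invented, and the function is purely illustrative; the only claim is the arithmetic itself: a student who starts behind and grows only as fast as everyone else never catches up.

```python
# Hypothetical arithmetic (all numbers invented): gaps and growth are in
# grade-equivalents per year.
def years_to_catch_up(initial_gap, annual_growth, peer_growth=1.0):
    """Years until the gap closes, or None if it never does."""
    if annual_growth <= peer_growth:
        return None  # the gap holds steady or widens
    return initial_gap / (annual_growth - peer_growth)

print(years_to_catch_up(2.0, 1.0))  # one year's growth per year: never closes
print(years_to_catch_up(2.0, 1.5))  # 1.5 years' growth per year: 4.0 years
```

That is the sense in which "one year's growth per year" is not enough for students who are behind: only above-average growth shrinks a gap.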

Statistical issues with multilevel modeling

Among education researchers, the tool of choice right now for measuring growth is probably so-called multilevel modeling. Why multilevel modeling became the tool of choice is probably an accident of recent educational history (combining the more recent pushes for accountability with the development of multilevel statistical tools), but it allows a variety of accommodations to the real life of schools, where students are affected not only by a teacher but by a classroom environment shared with other kids, as well as by the school and their own (and their family's) characteristics. That's a mouthful and only skims the surface.

But there are both technical and policy/political issues with the use of multilevel modeling software (and I use that more generic term rather than referring to specific software packages or procedures). Let me first address some of the technical issues:

Vertical scaling. Some statistical packages need a uniform scale on which the achievement of students at different grades and ages is placed, so that the score of a 7-year-old can be compared to an 8-, 9-, or 10-year-old's achievement, allowing some comparison across grades. This is not necessary with packages that use prior scores as covariates, but anything that looks at a measure of growth in some way all but requires a uniform (or vertical) scale. The problems with such vertical scaling stem from the fact that it is very, very difficult to do the type of equating across different grades (and equivalent curricula!) that is necessary to put students on a single scale. Learning and achievement are not like weight, where you can put a 7-year-old and a 17-year-old on the same scale. Essentially, equating is a piecemeal process of pinning together a few points of separate scales (each more closely normed). At least two consequences follow:

Measurement errors in a vertical scale will be larger than errors in a single-grade scale, which test manufacturers have far more experience norming.

The interpretation of differences on a vertical scale will be rather difficult. One reason is the change in academic expectations among different grades, unless you narrow testing to a limited range of skills. But the other reason is subtler: the construction of a vertical scale can only be guaranteed to be monotonic (higher scores in a single-grade test will map to higher scores on the cross-grade, vertical scale), not linear. There will almost inevitably be some compression and expansion of the scale relative to single-grade test statistics. That nonlinearity is not a problem for estimation (since models of growth can easily be nonlinear). But the compression/expansion possibility makes interpretation of growth difficult. Does 15-point growth between ages 10 and 11 mean the same thing as 15-point growth between ages 15 and 16? Who the heck knows!
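For anyone who wants to see the compression problem in miniature, here's a toy sketch. The scale function below is invented out of whole cloth (no real test works this way); the only point is that a monotonic but nonlinear vertical scale makes identical latent gains show up as different scale-point gains.

```python
import numpy as np

# Toy illustration of a monotonic but nonlinear vertical scale. The
# mapping is invented; nothing here comes from a real instrument.
def vertical_scale(latent):
    # A square-root curve: the reported scale compresses at the top.
    return 300.0 * np.sqrt(latent)

# The same 0.2 gain in latent ability at two points on the scale:
young = vertical_scale(1.2) - vertical_scale(1.0)
older = vertical_scale(3.2) - vertical_scale(3.0)
print(round(young, 1), round(older, 1))  # 28.6 17.0
# Identical latent growth yields different scale-point gains, so a
# "15-point gain" need not mean the same thing at different ages.
```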

Swallowing variance. As Tekwe et al. (2004) point out in a probably-overlooked part of their article, the more complex models of growth swallow a substantial part of the available variance before getting to the "effects" of individual schools and teachers. This is inevitable with any statistical estimation technique with multiple covariates (or factors, independent variables, or whatever else you want to call them), but it has some serious consequences for using growth models for accountability purposes. It erodes the legitimacy of such accountability models among statistically-literate stakeholders, who see that most variance is accounted for (even if in a noncausal sense) by issues other than schools and teachers. In addition, this process leaves the effect estimates for individual teachers and schools very close to zero and to each other. Thus, with Sanders' model as used in Tennessee, the vast majority of effects for teachers (in publicly-released distributions) are statistically indistinguishable. Never mind all my other concerns about judging teachers by technocracy: this just isn't a powerful tool even for summative judgments.
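Here's a toy simulation of the variance-swallowing point. This is not Tekwe et al.'s model or any real accountability system; every number is invented. It just shows that when a covariate such as prior achievement dwarfs the between-school variation, most of the variance is accounted for before any school "effect" enters.

```python
import numpy as np

# Toy simulation (all numbers invented): student scores driven mostly by
# prior achievement, with a small school effect on top.
rng = np.random.default_rng(0)
n_schools, n_per = 50, 40
school = np.repeat(np.arange(n_schools), n_per)
school_effect = rng.normal(0, 1.0, n_schools)    # between-school sd = 1
prior = rng.normal(0, 10.0, n_schools * n_per)   # covariate sd ten times larger
noise = rng.normal(0, 5.0, n_schools * n_per)
score = prior + school_effect[school] + noise

# Share of variance accounted for (noncausally) by the covariate alone:
share = 1 - (score - prior).var() / score.var()
print(f"share swallowed by prior achievement: {share:.2f}")
# Most variance goes to the covariate before school "effects" are
# estimated, leaving those estimates packed near zero and near each other.
```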

Convergence of estimates. In the packages I know, the models don't always converge (result in stable parameter estimates), given the data. Researchers with specific, focused questions will often fiddle manually with equations and variables to achieve convergence, but you can't really make idiosyncratic adjustments in an accountability system that claims to be stable and uniform over time; or, rather, you shouldn't make such idiosyncratic adjustments and keep a straight face while claiming that the results are uniform and stable over time.

Political complications of multilevel models

In addition to the technical considerations, there are issues with multilevel modeling that are more political in nature than technical/statistical:

Omissions of student data. This is true of any accountability system that allows exemptions, but it's especially true of any model of growth that omits students who move between test dates. It's a powerful incentive for schools to perform triage on marginal students in high school, either subtly or openly. I've heard of such triage efforts in Florida, though it's hard to demonstrate intentionality. But even apart from the incentive for triage, it's hard to claim that any accountability system targets the most vulnerable when those are frequently the students who move between schools, systems, and states. And the more years included in a model, the less that movers count in accountability.

The complexity factor. Technical issues with complex statistical models are, well, complex and difficult to understand without some statistical background, and such complexity requires sufficient care with educating policymakers. That's especially important with growth models, which are pretty easy to sell to lawmakers who may be looking for a technocratic model that they don't have to think too hard about. Here's a reasonable test: will Andrew Rotherham's blog ever mention the technical problems with growth models? Will the briefs put out by various education policy think tanks explain the technical issues, or will they prove the term to be an oxymoron?

Proprietary software. I think that William Sanders still holds all data and the internal workings of his package to be proprietary trade secrets, even though they're used as public accountability mechanisms in Tennessee, at least (anywhere else, dear readers?) (Fisher, 1996). How can anyone justify using a secret algorithm for public policy in an environment (education) where everyone (and the justification for accountability itself) expects transparency? (For other commentaries about Sanders' model, see Alicias, 2005; Camilli, 1996; Kupermintz, 2003, and an older description of my own involvement in the earlier discussions of Tennessee's system. For his own description, see Ballou, Sanders, & Wright, 2004; Sanders & Horn, 1998.)

Life-course models

One of my concerns with the increasingly complex world of statistical models of growth is their amazing disconnect from fields that should be natural allies. We have great statistical packages that are incredibly complex, but some days they seem more like solutions in search of problems than a logical outgrowth of the need to model growth and development in children.

As stated earlier, one problem is the attempt to put student skills, knowledge, and that vague thing we call achievement in an area on one scale. Unlike weight, there isn't a cognitive measuring tool I'm aware of in which all children would have interpretable scores (nonzero measures on an equal-interval scale, to choose one goal). But for now, let's assume that someday psychometricians find the Holy Grail of vertical scales (or maybe that would be a Holy Belay Line to climb down after scaling the...). Even waving away that problem, I'm still troubled by the almost gory use of statistical packages without some thought about the underlying models.

If one were interested largely in describing rather than modeling growth, one could start with nonparametric tools such as locally-weighted regression (or LOESS) and move on to functional data analysis. Those areas of statistics seem like logical ways to approach the types of longitudinal analysis that the call for modeling growth seems to require.

Then there is demography. I'll admit I'm a bit partial to it (having a master's from Penn's demography group), but few education researchers have any formal training in a field whose model assumptions are closer to epidemiology and engineering statistics than to psychometrics. In demography, the basic conceptual apparatus revolves around analyzing the risk of events to which a population is exposed. The bread and butter of demography are births and deaths, or fertility and mortality. The fundamental measure is the event-occurrence rate, and the conceptual key to mathematical demography is the assumption that behind any living population is a corresponding stationary population equivalent: a hypothetical or synthetic cohort that one can conceive of as exposed to the conditions in a population during a period of time, rather than the conditions a birth cohort experiences. It's as if a group of 1,000 babies were all born at the first instant of January 1, 1997, and a time machine at the end of December 31 flipped everyone who survived back to the beginning of the year. It's an imaginary, lifelong version of Groundhog Day, but one with the happy consequence that the synthetic cohort would never hear of Monica Lewinsky. What happens to that synthetic cohort never happens to a real birth cohort, but it does capture the population characteristics of 1997. You can find the U.S. period life table for 1997 online in a PDF file, with absolutely no mention of Monica Lewinsky. (There is much I'm omitting in this description of a stationary population equivalent, I know!)
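For readers who like to see the machinery, here's a bare-bones sketch of a synthetic cohort. The age intervals and mortality rates below are invented, not the actual 1997 U.S. life table; the point is only the mechanics: apply one period's age-specific rates to 1,000 hypothetical babies, assuming a constant hazard within each age interval.

```python
import numpy as np

# Invented period rates, NOT real 1997 data: deaths per person-year,
# for successive age intervals of the given widths (in years).
widths = [1, 4, 10, 30, 20, 35]
rates = [0.007, 0.0004, 0.0002, 0.001, 0.008, 0.06]

l = 1000.0                      # radix: 1,000 babies at exact age 0
survivors = [l]
for n, m in zip(widths, rates):
    l *= np.exp(-n * m)         # constant-hazard survival over the interval
    survivors.append(l)
print([round(s) for s in survivors])
```

Real life tables refine this considerably (average person-years lived by those who die in an interval, and so on), but the column of survivors at each exact age is the same idea.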

Demography offers a few aids to this business of modeling growth, because its bailiwick is looking at age-associated processes. Or, as a program officer for the National Institute on Aging explained at a conference session I attended a few weeks ago, aging is a lifelong process. Trite, I know, but it's something that the growth-modeling wannabes should learn from, for two reasons.

One is the equally obvious (almost Yogi Berra-esque) observation that as children grow older, their ages get bigger. Unfortunately, most school statistics are reported by administrative grade, not age, and this makes comparability on almost any subject (from graduation to achievement) virtually impossible. The only reputable source of national information about achievement I'm aware of that is based on age, not grade, is the NAEP Long-Term Trends reports, pegged to 9-, 13-, and 17-year-olds tested in various years from 1971 to 2004. Some school statistics used to be reported by age, in age-grade tables, which I'm finally figuring out how to use reasonably. But you could have some achievement testing conducted by age and ... well, enough of that rant.

The broader use of demography should be the set of perspectives and tools that demographers have developed for measuring and modeling lifelong processes. Social historians have an awkward term for this: life-course analysis. What changes and processes occur over one's life, and how do you analyze them? Some education researchers acknowledge at least a chunk of this perspective, most notably in the literature on retention, where you cannot take achievement in a specific grade's curriculum as evidence of the (in)effectiveness of retention in improving achievement. You can only find out the answer by looking at what happens to children as they grow older.

Some of the more sophisticated mathematical models of population processes have direct parallels in education that could be explored fruitfully. To take one example unrelated to achievement growth, parity progression (women's moves from having 0 children to 1, to 2, to ...) is an analog of progression through grades, and more could be done using parity-progression-ratio estimates to see what happens with grade progression.

But, to growth... variable-rate demographic models hold considerable promise, at least in theory, for analyzing changes from cross-sectional data. In the standard (multilevel model) view, you focus on longitudinal data and toss cross-sectional information, because (you think) there is no way to separate out cohort effects from real growth effects. Aha! But here demography has an idea (stationary population equivalents) and a tool (variable-rate modeling). While the risk model of demography requires proportionate changes, natural logs, and e to the power of ... well, you get the idea. I'm going to provide a brief sketch and two possible directions. For more details, see Chapter 8 of Preston, Heuveline, and Guillot (2001). (And remember, we're magically waving away all psychometric concerns. We'll get back to that a bit later.)

We're going to consider the measured achievement of 10-year-olds in 2006 (on a theoretically perfect vertically-scaled instrument) in two different ways, one related to changes among 10-year-olds across years and the other to the experience of this cohort, and use the combination to relate observed information from two cross-sectional test administrations to the underlying population dynamics (in this case, achievement growth through childhood).

First, let's compare the achievement of 10-year-olds in 2006 to that of 10-year-olds in 2005. It doesn't matter whose is better (or if they're equal). My son is now 10 years old (and will still be 10 for the next round of annual tests here in Florida), so let's suppose that the achievement of 10-year-olds in 2006 is higher than that of 10-year-old students the year before. Then we could think of achievement as follows:

The achievement of 10-year-olds in 2006 = achievement of 10-year-olds in 2005 and some growth factor in achievement among 10-year-olds between 2005 and 2006

For now, it doesn't matter whether the "and" refers to an additive growth factor, a proportionate one, or some other function. And if the 10-year-olds in 2005 did better, the growth factor is negative, so it doesn't matter who did better.

Second, let's compare the achievement of 10-year-olds in 2006 to 9-year-olds in 2005 in a parallel way:

The achievement of 10-year-olds in 2006 = achievement of 9-year-olds in 2005 and some growth factor in achievement between the ages of 9 and 10 for 2005-06.

Note: this "growth factor" is part of the underlying population characteristic that we are interested in (implied growth in achievement between ages, across the ages of enrollment).

Now, let's combine the two statements into one:

the achievement of 10-year-olds in 2005 and some growth factor in achievement among 10-year-olds between 2005 and 2006 = the achievement of 9-year-olds in 2005 and some growth factor in achievement between the ages of 9 and 10 for 2005-06.

Without assuming any specific function here, this statement expresses the relationship between cross-sectional information across ages as one that combines changes within a single age (across the period) and changes across ages (within the period). Demographers' models of population numbers and mortality are proportional, so the "and" in both cases is a multiplicative function. But one could assume an additive function, too, or something else (a variety of functions), and the concept would still work. Once one estimates the changes within single years of age, one can then accumulate those differences and, within the model, estimate the underlying achievement growth between ages, which is the critical information of interest. When the interval between test administrations is equal to the interval between the ages (four years, for NAEP long-term trends), the additive version with linear interpolation of age-specific change measures is identical to the change between 9-year-olds in 1980 and 13-year-olds in 1984, etc. But this method allows estimating those period-specific rates when the test dates aren't as convenient, and the exponential estimates are different.
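Here's the additive version of that identity in a few lines, with invented scale scores standing in for the two cross-sections (these are not NAEP numbers; they exist only to show the bookkeeping).

```python
# Additive version of the identity above, with invented scale scores.
# Each dict maps age -> mean achievement for that year's cross-section.
A_2005 = {9: 210.0, 10: 218.0}   # hypothetical 2005 cross-section
A_2006 = {10: 219.5}             # hypothetical 2006 cross-section

# Within-age period change among 10-year-olds, 2005 to 2006:
period_change = A_2006[10] - A_2005[10]                    # 1.5

# Implied growth between ages 9 and 10 for 2005-06 (the quantity of
# interest), recovered from the cross-sectional age gap plus the
# within-age period change:
implied_growth = (A_2005[10] - A_2005[9]) + period_change  # 8.0 + 1.5

# The identity: 2006 10-year-olds = 2005 9-year-olds "and" implied growth.
assert A_2006[10] == A_2005[9] + implied_growth
print(implied_growth)  # 9.5
```

The multiplicative (proportional) version replaces the sums with ratios and exponentials, per the demographic convention, but the accounting is the same.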

Of course, this assumes perfect measurement, something that I'd be very cautious of, especially given the paucity of data sets apart from the NAEP long-term trends tables. I've played around with those, and the additive and proportionate models come up with virtually identical results with national totals, assuming linear change in the age-specific growth measures (since we only have measures for 9-, 13-, and 17-year-olds).

(Units for the vertical axis come from the NAEP scale.)

(Changing the interpolation of age-specific growth rates to a polynomial fit doesn't change the additive model much. It shrinks the estimates of growth in the exponential model a bit but doesn't change trends. And, yes, I'm aware of the label problem: arithmetic should be additive or linear.)

There are odd results (does anyone know of reasons why the reading results were unusually high in 1992? Are the results for 17-year-olds in 2004 unusually low for any reason? I was using the bridge results), and there are all sorts of caveats one should apply to this type of analysis, from the complexity of estimating standard errors of derived data, to changes in administration for students with disabilities, to the comparability of 2004 results and, oh, I'm sure there's more. The point is that demographic methods provide some feasible tools precisely for looking at age-related processes, if we'd only look.

Update (12/8)

I foolishly forgot to mention a 2004 RAND publication, Evaluating Value-Added Models for Teacher Accountability, which describes the limits of growth models for accountability. Thanks to UFT's Edwize blog for pointing it out (though I have a few bones to pick with the larger post; I don't have enough time right now...).

Dropouts and the military

Jim Horn at Schools Matter (not Jim Horne, the former Commissioner of Education in Florida) recently noted the existence of a sub-GED Army enlistment program, suggesting that there's a reciprocity between higher graduation standards (especially graduation tests) and needing cannon fodder for Iraq. While he does note an online article boosting the GED Plus enlistment program, and while he joins several other bloggers in noting the program, Tim Schmoyer corrects the record: the Army first announced the program in 2000, and the incentive in the late Clinton era was to draw people from what was then the tail end of the boom-times labor market (I assume the Army took a year for internal development of the guidelines, so the development process would have started in 1999), not to counter the bad news from Iraq by lowering standards.

And I'm not sure from the Army press release that the numbers are affected that much. The program takes only high school dropouts who score fairly highly on the qualifying test, have no criminal or arrest record, and meet several other criteria, including having what's described vaguely as high motivation. I've met a few military officers who scrutinize retention statistics very carefully, and they would have been quite skeptical of any program that let in a significant number of high school dropouts. From their perspective, GED recipients have higher general discharge rates (i.e., neither honorable nor dishonorable) and are risky enough compared to recipients of regular high-school degrees. In normal times, the military doesn't have the "acceptable failure rate" dilemma of school systems.

November 26, 2005

Family myths, redux

Judith Warner's column Kids Gone Wild in tomorrow's NY Times is another complaint about the decline of the American family, this time dressed up as concerns about declining parenting. She briefly acknowledges that children have always been seen as unruly and then blithely ignores what she wrote. If children were somehow unruly both before this current spate of malparenting and during it, does that mean that parenting doesn't matter?

I'm not going to excuse poor parenting, but where were the editors of the Times in publishing this anecdotal pablum?

Paraphrasing self-test

After looking for a way to help students test their paraphrasing skills, I've modified what real programmers have done to create a test-your-skills-at-paraphrasing page (and the links to the folks who really deserve credit are at the bottom of it). It won't cover errors such as borrowing metaphors, but it should give a reality check to someone who thinks that replacing every third word is appropriate. As usually happens with my webpages, I wait until I see someone really clever who creates something useful and then tweak it a bit.

The big one, which may or may not be big in reality, is the announcement that the Department of Education will be inviting a limited number of states to pilot growth models. I've been observing the discussion of growth or value-added models of student achievement for more than a decade, and I'll have something to say on this in the next few days, but I've got some other projects that take precedence this weekend: a hobby rocketry science project to be the flunky for, some bills to pay, some articles to finish prepping for Education Policy Analysis Archives, etc.

November 14, 2005

Al-Arian and academic freedom, redux

The article Greg McColm and I wrote, A University’s Dilemma in the Age of National Security (PDF), is now out in the National Education Association's Thought and Action, Fall 2005 issue (pp. 163-77). We've been working on this for over two years; or, rather, Greg has done the vast bulk of the work and I've been putting in chip shots, academically speaking. He deserves any credit for clever turns of phrase as well as for a persistence that many other academics don't have. It's a little different from what we submitted, but that's life with editing. Among other things I've learned in working on this article is that some disciplines don't have standard citation styles, because the rival proprietary journals have different ones, so the standard is to use the citation style of the source that material came from. But I'm sure that's not why you're going to read the full entry, which is about the criminal trial that's entering its concluding stage this week. Note: the article itself is unrelated to the trial, since it was written well before the trial started. Its appearance at the close of the trial is just coincidence.

This week, the prosecution will rebut the case raised by the lawyers for the four defendants. Al-Arian's team rested without presenting witnesses, but the others presented a few witnesses each before the summations. Journalists have described the summations in essence as a battle over circumstantial evidence. Are the disparate pieces suggesting funding links between the defendants and the Palestinian Islamic Jihad enough to show that they raised money for PIJ with the intent to support the specific organizational mechanics of terrorism (and not just ancillary activities of PIJ)? There are bound to be appeals upon any convictions (and the several hundred pages of jury instructions, along with the dozens of decisions Judge Moody made in the course of the trial, will be fodder for them), but this seems to be the central question of the conspiracy and terrorism charges. (The other charges, about fraudulent immigration applications, are a whole other kettle of fish, and journalists haven't touched those at all, at least as far as I can tell.)

I haven't been sitting in the jury box, so I don't know the full evidence and won't comment on the key question. I'm sure that if there is a conviction, many will claim that the conviction is proof that the administration of USF did the right thing by trying to fire Al-Arian before indictment and by firing him right afterwards. That is essentially an argument that the end result of a criminal trial justifies the employment actions of a university. In some cases, where the basic facts are known before trial, that might well be the case. But I'm not so sure it holds here, with Al-Arian, not because he's anything like an angel. Far from it. But there are a few points that remain, specific to the trial:

The firing of Al-Arian after the indictment was a purely symbolic and political act. There was no payroll difference for the university between an unpaid leave of absence during a trial, at the end of which a conviction ends the job, and firing a professor after indictment. In both cases, the defendant is unpaid.

Many of the factual assumptions of Al-Arian's critics turned out to be in error, especially if you agree with the prosecution's case. In the early 1990s, Al-Arian wasn't adding to PIJ coffers, from all reports of the prosecution case that I've read. Instead, he was desperately seeking to raid PIJ accounts to support the think-tank he had co-founded. This prosecution claim doesn't necessarily negate the critics' central point, but it does bear on the criticism of Al-Arian that he was using his employment at USF as a cover to legitimate the funding of terrorism. He may well have used his employment at USF as a mechanism to start a think-tank with delusions of Palestinian intelligentsia gravitas, eventually willing to propose various financial mechanisms to keep it afloat. (This is detail from the prosecution's case, detail that Al-Arian's lawyers may or may not dispute.)

The immigration-fraud charges are a safety-valve for the federal government. If Al-Arian and the other defendants are acquitted on the more serious charges but are convicted on the fraud charges (which I am guessing have a lower threshold to prove), those convictions will be powerful tools at deportation hearings, which (I am also guessing) would proceed on a track parallel to the appeals of any criminal convictions. A fraud charge may not carry lengthy prison time beyond what the defendants have already served before trial, but such convictions could be used in deportation hearings. The end result might well be an even more complicated legal mess than some of my friends and colleagues are predicting.

If there is no conviction or deportation order left standing at the end of the day, there is still Al-Arian's grievance against his firing. The USF administration's decision to fire Al-Arian on his indictment hinges on the legitimacy of that indictment, whose counts changed before trial and which (if it comes back to an employment case) would then rest on allegations never proven in a court of law, at least as far as the law is concerned. The machinations to fire Al-Arian before indictment might well be used by Al-Arian's civil lawyer(s) as evidence that the termination decision was pretextual. And Al-Arian's civil lawyer in 2003 filed a pro-forma grievance under administrative rules passed by our Board of Trustees under the assumption that the Collective Bargaining Agreement with the faculty union was void, an assumption that Florida's courts have since ruled invalid. One more mess to consider.

If there is no standing conviction or deportation order at the end of all this, and there is a university grievance process that results in upholding Al-Arian's dismissal, the AAUP investigation of USF will probably become active again. In the summer of 2003, staff and members of Committee A reported to the annual meeting that under AAUP procedures, universities were given more due process than they usually give faculty: a university's hearing process had to be concluded (or none started) for AAUP to officially censure the administration. Because USF's administration and Al-Arian's lawyer agreed to suspend the process for a post-termination grievance pending the outcome of a criminal trial, AAUP's staff and Committee A leadership concluded that the annual meeting could not fairly consider the censuring of USF's administration. But if Al-Arian is freed and the grievance proceeds, then that stoppage on AAUP action is lifted (at least as I read the AAUP process). That doesn't guarantee censure, but it does make some discussion within AAUP highly likely, at least in the annual meeting.

For those who long argued for Al-Arian's termination, before an indictment, I wonder if they considered the likely results (at least until an indictment): a man turned cause célèbre, with loads of time on his hands to raise funds for Palestinian causes. If those causes included terrorism,...

For those who long argued for Al-Arian's termination, and who are delighted that Al-Arian is on trial, I wonder if they thought that federal agents were better or worse at investigation than university administrators, or if in retrospect they preferred that the administration hire private investigators, who could possibly have interfered with or discovered the clandestine wiretaps of the feds.

Since Al-Arian's lawyer filed his post-termination grievance in 2003 using non-union procedures, the United Faculty of Florida (my faculty union) is out of the loop officially regardless of the results of the trial, any deportation hearings, or the grievance process. Of course, I'm not ruling anything out given the twists and turns of all this. My longstanding concern here has been with the long-term consequences of administration actions on faculty morale and the university environment, and while there are many things that are operating significantly better today than almost four years ago, this episode is another patch of tarnish on USF's history. The administrators and trustees who served in late 2001-early 2003 may not have been responsible for all of the things coming at them, but they made enough errors to contribute to problems. Until someone convinces me otherwise, I think the university would have been better off waiting until an indictment and putting Al-Arian on unpaid leave until the end of the trial (and subsequent proceedings) or waiting for evidence that would clearly justify discipline or termination on its face. The guy is no model of university citizenship, but that's not the entire question here.

Correction (7:30 a.m., Tuesday): It looks like the jury instructions only took three hours for the judge to read. Deliberations start today.

Do I look like I'm from MIT?

Yesterday, I was at the Central Park hawk bench, enjoying a discussion with one of the birding regulars there about Pale Male, when she asked me if I taught at MIT. Well, I'll admit that's the most unusual question I've ever heard from an adult. Occasionally I pretend that I've heard a rude question for the very first time (I've never been asked that by a stranger!), to give another person (especially students) a hint that you don't ask that.

But I really had never been asked if I taught at MIT before. So I explained that I taught at USF, and after a bit more conversation, it turned out that she had probably never heard of USF even though she had lived in Florida for several years, before she had become a birder.

Incidentally, New York was great. I'm exhausted, and I didn't get enough work done, but it was an absolutely wonderful trip. And I've been invited to coauthor a review article over the next few months. Fun! Challenging. Trying to conceive of fitting that in with everything else. Anyone know a temporal equivalent of a shoehorn?

Dear Sherman Dorn,
I am replying to the blog on your website with the above title regarding my book, Rock Me Gently.
Firstly, I would point out that I do not consider myself to be a writer as I have never written a book before and I do not intend to write another in the future. Therefore I am not what you describe as a "professional plagiarist". Nor have ever claimed to have a photographic memory - that is media hype. My story is based on the diary I kept as an eight year old child, which is still in my possession. Therefore my story is not a lie, but a true memoir.
It took me seven years to write my book and my main aim was to give a voice to the two eleven year old friends of mine who died at the convent I attended as a child. Since the publication of my story, I have received numerous letters from people who experienced the same kind of abuse that I did as a child. All of them tell me that my story has helped them come to terms with what happened to them as children and for the first time in their lives, they are able to speak up about it.
I learnt to tell my story by reading and imitating the masters and I am truly sorry for the mistakes I made throughout the years as I wrote it. I am not trying to excuse what I have done, but would only add that the similarities between my book and that of other authors is minimal in comparison to the amount of words contained in my story.
Please would you read my book and judge for yourself. If you agree, I would like to send you a free copy
Kind regards,
Judith Kelly.

I should note that a comment on my earlier entry also defended Ms. Kelly. I guess she and her friends read blogs, or at least Google her name (or maybe check Technorati).

November 6, 2005

Waves of globalization

This morning, I was chairing an attrition-attacked panel at the Social Science History Association meeting, where one presenter just never made it and where the discussant had to e-mail me her comments because of a minor family crisis. But the comments started a very nice discussion after two papers that appeared to be an odd juxtaposition: on the one hand, a paper by an economics grad student on post-Civil War literacy rates (1870 census) and counties with Freedmen's Bureau activity, and on the other, a paper by Penn State-Behrend historian Liz McMahon on the relationship between Qur'anic and Protectorate/colonial schools on 20th-century Zanzibar's Pemba island, after the British takeover and emancipation.

I don't remember how we got around to discussing globalization, but the idea floated in the small group that there have been several waves of ideas, behaviors, patterns, spread by global mechanics as a conveyor or vector: disease, ideas, materials, wealth, and social relationship repertoires.

I'm not sure where to go with that (I don't quite have the span of knowledge to wrap my brain around it), but it seems far more historical than Thomas Friedman's "flat-world" book. Anyone want to pick up the idea and run with it (or let loose the cruel pack of vicious facts on the innocent hypothesis, to paraphrase Bloom County)?

November 5, 2005

Every slice of humanity has its...

In principle, I'm a supporter of NBPTS's activities as a growing process for applicants. Sometimes, though, national certification doesn't mean that the teacher understands the principles of the discipline, as in this St. Pete Times story:

"I guess you could say I'm a creationist," said Marcia DeMeza, a national board-certified science teacher at Lake Gibson High School in Lakeland.... DeMeza, who has been teaching for 38 years, ordered a video on intelligent design after receiving an advertisement for it. "The students really received the video well," she said.

November 3, 2005

Conference time!

Second public presentation of the net-flow stuff tomorrow, with a poster in the main book exhibit at the Social Science History Association, meeting this weekend in Portland. It's rainy in Portland. I'm not quite crazy enough to laminate my papers to get them safely between hotels. Or maybe I'm crazy enough not to...

In any case, the SSHA is full of serious quantitative folks, so I'll probably be asked to show my work: the spreadsheets with the calculations.

I'm still figuring out what to do with the places and times where things don't quite look right. It could be a huge set of typographical errors, or maybe problems in how carefully officials wrote down the figures one year. (There's a great example in Union County in 1938-39, where every child in a grade is the same age. How amazing! How unbelievable.) Or maybe instability with small populations, which I can believe with African American students in White County (all 125 of them in one year), but not whites in Terrell County the same year. Hmmn...
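Those sanity checks amount to a couple of simple rules. Here's a minimal, purely hypothetical Python sketch of that kind of screening; the record layout, field names, and the 200-pupil cutoff are my inventions for illustration, not the actual structure of my spreadsheets:

```python
# Hypothetical screening rules for county-by-year enrollment records.
# Field names and the population threshold are illustrative assumptions.

def flag_suspicious(records, min_population=200):
    """Return (county, year, reason) tuples for records that look unreliable."""
    flags = []
    for rec in records:
        ages = rec["ages_by_pupil"]  # one recorded age per pupil in a single grade
        # Rule 1: every pupil in a grade recorded at the same age is implausible.
        if len(ages) > 5 and len(set(ages)) == 1:
            flags.append((rec["county"], rec["year"],
                          "every pupil in the grade has the same recorded age"))
        # Rule 2: small populations make year-to-year rates statistically unstable.
        if rec["enrollment"] < min_population:
            flags.append((rec["county"], rec["year"],
                          "small population; rates may swing wildly between years"))
    return flags

# Invented records echoing the oddities described above.
records = [
    {"county": "Union", "year": 1938, "enrollment": 400,
     "ages_by_pupil": [12] * 40},            # suspiciously uniform ages
    {"county": "White", "year": 1938, "enrollment": 125,
     "ages_by_pupil": [11, 12, 12, 13, 14, 12]},
]
for county, year, reason in flag_suspicious(records):
    print(county, year, reason)
```

Flagged records would still need a human judgment call, of course: a uniform-age grade is almost certainly a recording error, while a small-population swing may be perfectly real.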

November 1, 2005

Alito and academic freedom

It looks like Samuel Alito's opinions on academic freedom are mixed. In one case, Saxe v. State College Area Sch. Dist., 240 F.3d 200 (2001), Alito wrote an opinion striking down an overly broad anti-harassment policy. So far, so good in terms of protecting expression.

Then there's Edwards v. California Univ. of Pa., 156 F.3d 488 (1998), in which Alito wrote an opinion undermining the claims of a faculty member to individual academic freedom separate from institutional academic freedom. The passage that is most worrisome follows (in the extended entry):

We do not find it necessary to determine whether the district court's instruction adequately defined the "reasonably related to a legitimate educational interest" standard because, as a threshold matter, we conclude that a public university professor does not have a First Amendment right to decide what will be taught in the classroom. This conclusion is compelled by our decision in Bradley v. Pittsburgh Bd. of Educ., 910 F.2d 1172 (3d Cir. 1990), where we explained that "no court has found that teachers' First Amendment rights extend to choosing their own curriculum or classroom management techniques in contravention of school policy or dictates." Id. at 1176. Consistent with this observation, we concluded that "[a]lthough a teacher's out-of-class conduct, including her advocacy of particular teaching methods, is protected, her in-class conduct is not." Id. (citation omitted). Therefore, although Edwards has a right to advocate outside of the classroom for the use of certain curriculum materials, he does not have a right to use those materials in the classroom. Accord Boring v. Buncombe County Bd. of Educ., 136 F.3d 364, 370 (4th Cir. 1998) (in banc) ("We agree . . . that the school, not the teacher, has the right to fix the curriculum."); Kirkland v. Northside Indep. Sch. Dist., 890 F.2d 794, 800 (5th Cir. 1989) ("Although the concept of academic freedom has been recognized in our jurisprudence, the doctrine has never conferred upon teachers the control of public school curricula."). But see Bishop v. Aronov, 926 F.2d 1066, 1075 (11th Cir. 1991) (finding that a public university's restrictions on a professor's in-class speech "implicate[d] First Amendment freedoms").

Our conclusion that the First Amendment does not place restrictions on a public university's ability to control its curriculum is consistent with the Supreme Court's jurisprudence concerning the state's ability to say what it wishes when it is the speaker. The following passage from Rosenberger v. University of Virginia, 515 U.S. 819 (1995), addresses this issue in the university context:

[W]hen the State is the speaker, it may make content-based choices. When the University determines the content of the education it provides, it is the University speaking, and we have permitted the government to regulate the content of what is or is not expressed when it is the speaker or when it enlists private entities to convey its own message. . . . It does not follow, however, . . . that viewpoint-based restrictions are proper when the University does not speak itself or subsidize transmittal of a message it favors but instead expends funds to encourage a diversity of views from private speakers. A holding that the University may not discriminate based on viewpoint of private persons whose speech it facilitates does not restrict the University's own speech, which is controlled by different principles.

Id. at 833-34. Since the University's actions in the instant case concerned the "content of the education it provides," id. at 833, we find that the University was acting as speaker and was entitled to make content-based choices in restricting Edwards's syllabus.

Edwards's reliance on the principle of academic freedom does not affect our conclusion that the University can make content-based decisions when shaping its curriculum. The Supreme Court has explained that "[a]cademic freedom thrives not only on the independent and uninhibited exchange of ideas among teachers and students, but also, and somewhat inconsistently, on autonomous decisionmaking by the academy itself." Regents of Univ. of Michigan v. Ewing, 474 U.S. 214, 226 n.12 (citations omitted). The "four essential freedoms" that constitute academic freedom have been described as a university's freedom to choose "who may teach, what may be taught, how it shall be taught, and who may be admitted to study." Regents of Univ. of California v. Bakke, 438 U.S. 265, 312 (1978) (opinion of Powell, J.) (quotations omitted). In sum, caselaw from the Supreme Court and this court on academic freedom and the First Amendment compel the conclusion that Edwards does not have a constitutional right to choose curriculum materials in contravention of the University's dictates.

There are two concerns I have here. One is the substantive judgment entailed, which suggests that the First Amendment protects an academic's statements only outside class. I can't overstate how troubling that is. (FIRE staff, what do you think?)

But what is more worrisome is the slippery logic here, conflating all sorts of things: restrictions on K-12 teachers with restrictions on university teachers, the state as speaker with university teachers, and general public employees with university teachers. There's the acknowledgment that the 11th Circuit had at least somewhat different reasoning (and a sort of balancing test) in Bishop v. Aronov, but no grappling with the reasoning there or with the broader question of when institutions have sway and when university teachers do. [Update: After a bit of thought, I should clarify that I don't agree with Bishop v. Aronov. My point is that Alito's reasoning here is brief and apparently untroubled by the possibility that he was essentially wiping out First Amendment protections for university teaching, period.]