The following was a private letter from Gerhard Casper, president of Stanford University, to James Fallows, editor of U.S. News & World Report. With the permission of both, it has since entered the public domain.

STANFORD UNIVERSITY

OFFICE OF THE PRESIDENT
GERHARD CASPER

September 23, 1996

Mr. James Fallows

Editor
U.S. News & World Report
2400 N Street NW
Washington, DC 20037

Dear Mr. Fallows:

I appreciate that, as the new editor of U.S. News & World Report, you have much to do at this moment. However, it is precisely because you are the new editor that I write to you, personally.

I emphasize you, because of your demonstrated willingness to examine journalism in the same way that journalism examines all other facets of society. And I say personally because my letter is for your consideration, and not a letter to the editor for publication.

My timing also is related to the recent appearance of the annual U.S. News "America's Best Colleges" rankings. As the president of a university that is among the top-ranked universities, I hope I have the standing to persuade you that much about these rankings - particularly their specious formulas and spurious precision - is utterly misleading. I wish I could forego this letter since, after all, the rankings are only another newspaper story. Alas, alumni, foreign newspapers, and many others do not bring a sense of perspective to the matter.

I am extremely skeptical that the quality of a university - any more than the quality of a magazine - can be measured statistically. However, even if it can, the producers of the U.S. News rankings remain far from discovering the method. Let me offer as prima facie evidence two great public universities: the University of Michigan-Ann Arbor and the University of California-Berkeley. These clearly are among the very best universities in America - one could make a strong argument for either in the top half-dozen. Yet, in the last three years, the U.S. News formula has assigned them ranks that lead many readers to infer that they are second rate: Michigan 21-24-24, and Berkeley 23-26-27.

Such movement itself - while perhaps good for generating attention and sales - corrodes the credibility of these rankings and your magazine itself. Universities change very slowly - in many ways more slowly than even I would like. Yet, the people behind the U.S. News rankings lead readers to believe either that university quality pops up and down like politicians in polls, or that last year's rankings were wrong but this year's are right (until, of course, next year's prove them wrong). What else is one to make of Harvard's being #1 one year and #3 the next, or Northwestern's leaping in a single bound from #13 to #9? And it is not just this year. Could Johns Hopkins be the 22nd best national university two years ago, the 10th best last year, and the 15th best this year? Which is correct, that Columbia is #9 (two years ago), #15 (last year) or #11 (this year)?

Knowing that universities - and, in most cases, the statistics they submit - change little from one year to the next, I can only conclude that what are changing are the formulas the magazine's number massagers employ. And, indeed, there is marked evidence of that this year.

Then there is "Financial resources," where Stanford dropped from #6 to #9, Harvard from #5 to #7. Our resources did not fall; did other institutions' rise so sharply?

I infer that, in each case, the formulas were simply changed, with notification to no one, not even your readers, who are left to assume that some schools have suddenly soared, others precipitously plummeted.

One place where a change was made openly was, perhaps, the most openly absurd. This is the new category "Value added." I quote the magazine:

"Researchers have long sought ways to measure the educational value added by individual colleges. We believe that we have created such an indicator. Developed in consultation with academic experts, it focuses on the difference between a school's predicted graduation rate - based upon the median or average SAT or ACT scores of its students and its educational expenditures per student - and its actual graduation rate."

This passage is correct that such a measure has long been sought. However, like the Holy Grail, no one has found it, certainly not the "we" of this passage. The method employed here is, indeed, the apotheosis of the errors of the creators of these ratings: valid questions are answered with invalid formulas and numbers.

Let me examine an example in "Value added": The California Institute of Technology offers a rigorous and demanding curriculum that undeniably adds great value to its students. Yet, Caltech is crucified for having a "predicted" graduation rate of 99% and an actual graduation rate of 85%. Did it ever occur to the people who created this "measure" that many students do not graduate from Caltech precisely because they find Caltech too rigorous and demanding - that is, adding too much value - for them? Caltech could easily meet the "predicted" graduation rate of 99% by offering a cream-puff curriculum and automatic A's. Would that be adding value? How can the people who came up with this formula defend graduation rate as a measure of value added? And even if they could, precisely how do they manage to combine test scores and "education expenditures" - itself a suspect statistic - to predict a graduation rate?

Were U.S. News, under your leadership, to walk away from these misleading rankings, it would be a powerful display of common sense. I fear, however, that these rankings and their byproducts have become too attention-catching for that to happen.

Could there not, though, at least be a move toward greater honesty with, and service to, your readers by moving away from the false precision? Could you not do away with rank ordering and overall scores, thus admitting that the method is not nearly that precise and that the difference between #1 and #2 - indeed, between #1 and #10 - may be statistically insignificant? Could you not, instead of tinkering to "perfect" the weightings and formulas, question the basic premise? Could you not admit that quality may not be truly quantifiable, and that some of the data you use are not even truly available (e.g., many high schools do not report whether their graduates are in the top 10% of their class)?

Parents are confused and looking for guidance on the best choice for their particular child and the best investment of their hard-earned money. Your demonstrated record gives me hope that you can begin to lead the way away from football-ranking mentality and toward helping to inform, rather than mislead, your readers.

Speaking of rankings, someone did a very long and detailed post a few months back that broke down the various conferences by regional markets and the like, concluding that certain schools not highly ranked would have a very hard time breaking into the BCS. I thought it was very well done, but I lost the link. If someone has it, please post it. Thanks!

Thanks, FMW. As one of the members who has used the US News rankings as part of these discussions I feel obliged to offer some counterpoints to the letter.

Mr. Casper makes many valid points in his well-written letter, most notably about the dangers of the perceptions created when a system of measurement (rankings) is devised for something essentially unmeasurable (colleges). That some institution must be labeled last place is an unfortunate consequence, and that some institutions can perhaps unjustly claim "superiority" over others serves little to no purpose. These rankings must be taken with a huge grain of salt.

That being said, I don't think the average (and especially the above-average) college applicant uses this system as the be-all and end-all of decision-making criteria. Nor do we regard a school finishing #24 as discernibly bad, as Mr. Casper might suggest. We all know Harvard presents a different academic world than Louisville or North Texas, and that the difference in scores between Yale and Penn may be so marginal that the difference in rankings is essentially meaningless.

What value does come from these rankings is the research and information behind them. In fact, such rankings are merely the media stepchild of the schools' own promotional efforts. As our guidance counselors and university reps will tell you, data concerning research funds, SAT scores of incoming freshmen, and so on are all used to paint a picture of a school's general worth, so that prospective students, financial partners, and others can see the benefits and results. While someone's attempt to group all of this data into a single equation may seem erroneous, or at least haughty, it has also served to promote post-secondary education in general and to help prospective students learn about a greater variety of schools than they might otherwise encounter.

Like the BCS, the system is far from perfect and could be improved, if not scrapped. And no one should take these rankings as Gospel. They are simply one organization's perspective, one poll in the overall BCS of "how to choose a college."

We see some people throwing out all kinds of opinions about college rankings, i.e., the US News & World Report listings, AAU membership, flagship and land-grant observations, and of course personal biases. There is a website, possibly among others, where people argue about and discuss college academics. Here is something I found a bit amusing among a few Florida folks discussing FSU vs UF vs USF vs UM:

I have visited that site before. It's interesting that such a site also has a series of forums dedicated to the specific universities and colleges it considers the nation's "top universities," like a ranking (e.g., University of Michigan, University of Virginia, UCLA, Cal-Berkeley, etc.).

That FSU vs UF vs USF vs UM thread sounds like a bunch of Gators and Seminoles arguing. I think the discussion shouldn't focus so much on which school is better within the state as on what their academic reputations are nationally. That kind of discussion would be useful for someone trying to determine which university or college is best for them or their children, based less on any rankings, for or against, than on their own individual needs in a college education.

As far as conference alignment and an "academic" criterion for membership are concerned, which is seen even at the mid-major level (see the Big Sky), the opinions that matter most on how such a criterion is measured are those of the actual decision makers: the presidents or chancellors of the conference's existing member institutions as they weigh expansion.

Last edited by metropolitan on Tue Jun 06, 2006 10:37 pm, edited 1 time in total.
