Of Interest

A regular reader sent us (pdf) these details behind this year’s US News rankings. Let’s spend five days discussing them. Today is Day 2.

Continuing our examination of the first portion of the data:

Note the key importance of Faculty Resources. On almost all other measures, Williams is very similar to its peer group, as we would expect. From the methodology:

Faculty resources (20 percent): Research shows that the more satisfied students are about their contact with professors, the more they will learn and the more likely they are to graduate. U.S. News uses five factors from the 2015-2016 academic year to assess a school’s commitment to instruction.

Class size is 40 percent of this measure. Schools receive the most credit in this index for their proportion of undergraduate classes with fewer than 20 students. Classes with 20-29 students score second highest; those with 30-39 students, third highest; and those with 40-49 students, fourth highest. Classes that have 50 or more students receive no credit.

Faculty salary (35 percent) is the average faculty pay, plus benefits, during the 2014-2015 and 2015-2016 academic years, adjusted for regional differences in the cost of living using indexes from the consulting firm Runzheimer International. U.S. News also weighs the proportion of professors with the highest degree in their fields (15 percent), the student-faculty ratio (5 percent) and the proportion of faculty who are full time (5 percent).
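The class-size banding described above can be sketched as a simple scoring function. Note the per-band credit values (1.0, 0.75, 0.5, 0.25, 0) and the sample section sizes below are invented for illustration; U.S. News does not publish the exact credits it assigns.

```python
# Hypothetical sketch of the class-size credit scheme described above.
# Band credits are assumed values, not U.S. News's actual numbers.
def class_size_credit(n_students: int) -> float:
    if n_students < 20:
        return 1.00   # most credit: fewer than 20 students
    if n_students < 30:
        return 0.75   # second highest: 20-29
    if n_students < 40:
        return 0.50   # third: 30-39
    if n_students < 50:
        return 0.25   # fourth: 40-49
    return 0.0        # 50 or more: no credit

# Made-up fall section sizes for one department.
sections = [18, 19, 22, 35, 70]
index = sum(class_size_credit(n) for n in sections) / len(sections)
print(f"class-size index: {index:.2f}")
```

Under a scheme like this, nudging a 22-person section down to 19 buys a full credit step, while letting a 55-person section grow to 70 costs nothing, which is exactly the Clemson tactic described below.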

We will look tomorrow at some of the underlying details of this score, but, to the extent that there is a single explanation for the big 5-point gap between Williams and its peers, Faculty Resources is it.

David, can you provide one single piece of evidence that this ranking is in any way important to the college’s reputation, let alone critically important? Maybe it was in the 1980s when these rankings first came out. But I don’t think you can.

Foolish reader! If the rankings weren't important, then how do you explain this?

A presentation by Catherine Watt, the former institutional researcher and now a staff member at Clemson University, laid bare in a way that is usually left to the imagination the steps that Clemson has (rather brazenly) taken since 2001 to move from 38th to 22nd in U.S. News’s ranking of public research universities. …

When President James F. Barker took over the South Carolina institution in 2001, he vowed in his initial interview to move Clemson into the top 20 (a distinction that many research universities covet, but few can achieve, given that most of those already in the top 20 aren’t eager to relinquish their spots). Although many people on the campus were skeptical, Clemson has pursued the goal almost single-mindedly, seeking to “affect — I’m hesitating to use the word ‘manipulate,’ ” Watt said — “every possible indicator to the greatest extent possible.” She added: “It is the thing around which almost everything revolves for the president’s office.”

That statement was among the first at Watt’s session that provoked murmurs of discomfort (and more) from the audience — there would be many more as she described the various steps Clemson had taken to alter its profile in order to improve its U.S. News standing. …

The easiest moves, she said, revolved around class size: Clemson has significantly increased the proportion of its classes with fewer than 20 students, one key U.S. News indicator of a strong student experience. While Clemson has always had comparatively small class sizes for a public land-grant university, it has focused, Watt said, on trying to bump sections with 20 and 25 students down to 18 or 19, but letting a class with 55 rise to 70. “Two or three students here and there, what a difference it can make,” she said. “It’s manipulation around the edges.”

If the rankings are not important, then why do Clemson (and dozens of other schools) go to so much trouble to manipulate them?

Some of our snottier readers may mock Clemson for this manipulation, but such mockery just demonstrates their naivete. Consider Williams' class sizes this fall. Example:

You think that the English department made a careful study of the optimal size of 100-level classes and just happened to decide that 19 or fewer was best for our students? Ha! President Morton O. Schapiro wanted Williams to be #1 in US News, and he decreed that, to the greatest extent possible, classes should have fewer than 20 students. His legacy lives on.

Not that there is anything wrong with that!

16 Responses to “US News Details II”

ephalum says:

You have provided not one iota of evidence that U.S. News rankings have any effect — any — on students choosing between Williams and its immediate peers. In fact, all of the evidence shows just the opposite — when Amherst, Williams, Pomona, Swarthmore, Midd and Wellesley switch around in the rankings, there is no change in the relative volume of applications received, or the relative caliber of incoming students. None. Now Williams may do things purposefully to secure a top ranking, but that doesn't mean that top ranking has any correlation with the desirability of the institution relative to its close peers. And that is my point. If it did, Williams would receive 1,000 more applications each year than Amherst, rather than around 1,000 fewer, and would have students with higher SAT scores and class ranks, rather than essentially even. If Williams dropped to #3 in the rankings next year, that would impact the school and its students in no discernible way whatsoever, beyond eliminating the ability to offer certain chants at Amherst-Williams sporting events.

You have provided not one iota of evidence that U.S. News rankings have any effect — any — on students choosing between Williams and its immediate peers

Then why did Morty purposely game the rankings by setting 19 as the maximum size of many classes? In your view, Morty is stupid since this gaming, at most, affects our ranking by just a couple of positions. I, on the other hand, think that Morty is smart and that he did this (and other things) in order to preserve Williams at #1 because this does matter to a small but meaningful number of applicants and their families. Do you know more or less than Morty about this topic?

DDF –
I can’t tell if you are being deliberately obstinate. You are smart, good at logic. Someone asked for proof of its effects on the school, and what those effects are. You respond by saying “CLEARLY VERY SMART PEOPLE THINK IT’S IMPORTANT”. Sure, I agree. But that doesn’t seem to address the question. Do you think it does?

Phrased another way: Morty is very smart, and knowledgeable. What evidence do you think he saw that the ranking is so important? Do you have any of that evidence you could share with us?

David, if the argument “Morty did it, so it must have been smart” carried the day, then the entire purpose of this blog is defunct. Good to know. Wasn’t Morty in favor, for example, of anchor housing? And at least a dozen other policies that you have roundly critiqued, often in 30-part posts? Was he somehow less smart when he proposed those policies? Just because Morty is smart doesn’t mean he is always right. Do you agree with his position, for example, on safe spaces on campus? It is just laughable that YOU of all people are making this argument. If there were any evidence that being number one in U.S. News actually had a demonstrable impact on the caliber of Williams’ applicant/admitted student pool, it would be an easy argument to prove. Just go back historically, look at the years Williams was number 1, 2, or 3, and run a statistical regression on the succeeding year’s application pools and entering class credentials. You’ve heard of R, right :)?
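The regression the commenter proposes is easy to sketch. The numbers below are entirely made up (not real Williams application data); the point is only the shape of the test: regress next-year applications on this-year rank and look at the slope and correlation.

```python
# Minimal sketch of the proposed test, using invented (hypothetical) data.
import numpy as np

# Hypothetical: US News rank in year t, applications received in year t+1.
rank = np.array([1, 1, 2, 1, 3, 2, 1, 1, 2, 1])
next_year_apps = np.array([6900, 7100, 6950, 7200, 6800,
                           7000, 7150, 7050, 6900, 7250])

# Ordinary least squares fit: apps ~ intercept + slope * rank.
slope, intercept = np.polyfit(rank, next_year_apps, 1)
r = np.corrcoef(rank, next_year_apps)[0, 1]

print(f"slope: {slope:.1f} applications per rank position")
print(f"correlation: {r:.2f}")
```

With real data, a slope indistinguishable from zero would support the commenter's claim that small rank moves don't matter; a large negative slope would support David's.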

And by the way, I am happy that Williams is number one. It’s nice to have attended the purported number one school, I guess. But it’s obvious that current generations of applicants couldn’t care less about number 1, 2, or 3. They do a bit more homework than that. If they did care, we would have seen a gradual decline in the caliber of Amherst’s student body relative to Williams’ over time, and a bigger decline in Swarthmore’s and Pomona’s. None of those things have happened in response to Williams’ rankings dominance. But you know which college has the best student body, at least measured by purely numeric credentials, of any non-tech L.A. school? It’s Pomona, even though it has never been ranked even in the top two. Now if Williams dropped to, say, 20, I imagine that might have some small impact on applications and such. But that is never going to happen.

1) There is tons of evidence that many/most administrators (not just Morty) think the rankings are important: they devote a lot of time and energy to trying to manipulate them! We all agree on that, right? It is possible that they are wrong, of course, but I don’t think so.

2) I have had conversations over the years with students (a score or so) in which it was clear that Williams’ #1 ranking mattered, either to them or to their parents. But this is just an anecdote, at best, and could be very biased by all the international students I know.

3) I think that all the readers of this blog believe that a consistent ranking of #1 would, over several years, have an effect relative to a ranking of #10. Correct? The debate seems to be over whether a ranking of #1 matters compared to a ranking of #2 or #3. Of course, once you admit that the difference between #1 and #10 matters, you have to decompose that large (?) effect into nine component parts: the effect of going from #10 to #9, the effect of going from #9 to #8, et cetera. And, of those nine pieces, I bet the biggest is the one from #2 to #1. Would anyone disagree?
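The decomposition argument above is just a telescoping sum: the total #10-to-#1 effect equals the sum of the nine one-step effects. The per-step numbers below are invented purely to illustrate the shape of the claim (that the #2-to-#1 step could be the largest piece).

```python
# Toy illustration of the decomposition argument. The nine per-step
# effects are invented numbers, not estimates from any real data.
step_effects = {
    (2, 1): 0.30, (3, 2): 0.15, (4, 3): 0.10,
    (5, 4): 0.09, (6, 5): 0.08, (7, 6): 0.08,
    (8, 7): 0.07, (9, 8): 0.07, (10, 9): 0.06,
}

# The total #10 -> #1 effect telescopes into the nine step effects.
total = sum(step_effects.values())
biggest = max(step_effects, key=step_effects.get)

print(f"total effect of moving from #10 to #1: {total:.2f}")
print(f"largest single step: #{biggest[0]} -> #{biggest[1]}")
```

Whether the steps are really distributed this way is an empirical question; the identity only guarantees that the nine pieces sum to the whole.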

Thanks for pushing me to look more closely into this topic. Turns out that there is a large academic literature, all of which says more or less the same thing: the rankings matter. Example:

Despite the widespread popularity of the U.S. News & World Report College rankings there has been no empirical analysis of the impact of these rankings on applications, admissions, and enrollment decisions, as well as on institutions’ pricing policies. Our analyses indicate that a less favorable rank leads an institution to accept a greater percentage of its applicants, a smaller percentage of its admitted applicants matriculate, and the resulting entering class is of lower quality, as measured by its average SAT scores.

The difference between statistical significance and substantive importance seems worth considering here. The impact of a move of 1 or 2 spots in the rankings produces statistically significant results, but they look substantively unimportant in the models David found.

If U.S. News and World Report included temperature and weather data, then I don’t think Williams would hold the top spot. I’m a little surprised that these factors don’t have a higher weighting, or any weighting at all, in their model.

JCD –
That’s surprising to you? It’s surprising to me that that’s surprising to you. It wouldn’t even cross my mind to use weather as a metric in ranking the academic excellence of an institution. Maybe that’s just because I’m more close-minded than you.

I’m a native Californian. I don’t think it is normal or healthy or safe to live in an environment where the climate can kill you.

Part of a quality education is staying alive and comfortable enough that you can study and concentrate. I don’t see how anyone can do the highest quality work in an environment where even simple tasks like walking from class to class become the equivalent of an icy polar exploration.

When I had an office in Stetson Hall, it got so cold that the maintenance staff brought in a heater for me.

If you have never lived in an area with an excellent climate, then I can understand why you (or others) might underestimate how important the climate is to one’s success.

Is there any doubt that US News, flawed though it may be, has helped Williams as an institution? For those older — err, more experienced — posters on the site, were Williams / Amherst / Swarthmore (the only three schools ever to be ranked first in the LAC rankings) generally considered to be the H/Y/P of liberal arts schools prior to the mid-1980s? My vague understanding is that Williams was not thought of in this fashion prior to US News, but of course is now universally recognized (by all who are knowledgeable about LACs) as the best in the business.

I could be mistaken, but sometime in the early years of the US News rankings I thought Carleton slipped into the top 3. I am a bit embarrassed to admit it, but I only applied because of that — I’d gotten some track recruiting letters from them and pretty much ignored them until I saw they were a top three-type school in those rankings.

If JeffZ and Derek stand with me, then who would dare stand against me!