Brian's student quality rankings seem particularly salient to me today because I spent several hours this week investigating elementary and secondary schools in Utah. One of the metrics by which these schools are evaluated is standardized test scores. This method of evaluation has many obvious shortcomings, but it has the virtue of inter-school comparability.

My use of standardized test scores to evaluate elementary and secondary schools in Utah is quite different from the use of LSAT scores to evaluate law schools. While the standardized test scores are viewed as an output measure, the LSAT scores are an input measure. (Of course, the standardized test scores taken by elementary and secondary school students are not entirely output measures, as the inputs affect the results significantly. Nevertheless, my sense is that most people use the scores to say something about the quality of the schools' educational programs.)

Why should students evaluating law schools care about "student quality" as measured by LSAT scores? Shouldn't they be more interested in the quality of instruction or the variety of program offerings? Are differences in LSAT scores reflected in the quality of classroom discussion? Or are those differences manifest in other aspects of the law school experience?

With regard to the effect of LSAT scores on teaching, I often have heard transfer students observe that the instruction at second-tier school X is as effective as -- perhaps more effective than -- the instruction at first-tier school Y. Indeed, many professors at second- or third-tier law schools have a substantial personal investment in the idea that they are every bit as good at this part of their job as the more famous law professors under whom they studied in law school. And my own experience offers no reason to doubt this. Nevertheless, my experience has been that differences in LSAT scores are reflected quite dramatically in the quality of classroom discussions. Students at high-LSAT law schools ask more penetrating questions and engage in more challenging discussions of the materials than students at low-LSAT law schools. (Does this mean that students at high-LSAT law schools are getting a better legal education? That's not so clear to me.)

Another aspect of this debate, often ignored, relates to the spread between the high-LSAT students and the low-LSAT students. As an instructor, I try to teach to the "high middle" (say, the 75th percentile). This strategy is intended to engage a large number of students, including the top students in the class, but the risk is that the bottom students will be left behind. This risk increases as the spread between the top and bottom students widens.

Are differences in LSAT scores manifest in other aspects of the law school experience? My impression is that students at high-LSAT law schools have much different career aspirations/opportunities than students at low-LSAT law schools. The former are likely to pursue partnerships at large firms, prestigious government appointments, careers in legal academe, etc., while the latter largely aim for practices in small and midsize firms, positions in local district attorneys' offices, and the like. These divergent aspirations/opportunities probably are reflected in the types of activities that students pursue outside the classroom. As a result, they have a significant influence on the student life of the law school.

LSAT scores are a bit like star rankings of high school football players. Five-star athletes don't always become All-Americans in college, and two-star players sometimes win the Heisman Trophy. But programs that consistently recruit four- and five-star athletes play a different brand of football from programs that recruit the two- and three-star players. In short, LSAT scores matter.

For whatever it is worth in extending the analogy, I would suggest that LSAT scores are more like combine measurements for NFL draft prospects than the star ratings of high school football players.

If I am correct, star ratings are more of a reflection of a body of work over a high school career. Combine measurements, on the other hand, are a collection of raw measurements, most of which NFL draft prospects spend months specifically preparing for.

I suppose that doesn't change the fact that LSAT scores do matter to some degree, just as combine measurements also matter to some degree.

From my perspective, the thought may be generally moot. As one who transferred from a tier 3 to a tier 1 law school, I have witnessed very little difference in the way of penetrating and engaging discussion here at my new school (which has markedly higher student LSAT scores).

IMO, student quality entails far more than simply intellect. What about high levels of collaboration and concern for other students -- something I experienced to a high degree at my tier 3 school?

Perhaps I am simply taking your high school to college analogy and moving it to college and the NFL, and in the process revealing my bias against the LSAT. In any event, as a Green Bay fan, I can't help but think of Tony Mandarich's impressive combine stats, and the fact that Brett Favre was taken in the second round...

Having friends at many different law schools (and different tiers of law schools), I think your analysis is right on. Having attended a top 20 law school, I think much of what was great about my experience was being surrounded by exceedingly smart individuals who asked difficult questions--both in the classroom and outside.

I also think this holds true for certain important extracurriculars, such as Law Review. A large part of what was great about being on Law Review at my school was spending time informally with many of the brightest students and engaging in critical dialogue about everything from current affairs to law.

Back to football, lots of schools have great coaches. And surely someone playing on a mediocre team can still be a standout. (See Garrett Wolfe last year at Northern Illinois.) But playing at Florida or Ohio State has to hone a great player in ways that playing at a mediocre school simply does not. The competition just isn't the same.

This post misses the point as to why so-called student quality rankings, such as the one posted by Leiter, are on their face idiotic. These rankings do not address whether there is any marked (or measurable) difference between schools whose 75th percentiles differ by 1, 2, or 3 points. At what point is the difference between LSAT scores meaningful to be able to draw any conclusions about the alleged overall quality of the student body?

The post suggests that for incoming students (and I suppose faculty in general) there is something worthwhile in looking at the ordinal ranking of law schools based on LSAT scores. Inherent in that suggestion is the understanding that the LSAT score (75th percentile) must reveal something about the quality of students at one school versus the quality of students at another school. My point is that this type of comparison is virtually impossible when there is no measurable difference between the LSAT scores of students at school A versus school B.

All the post says is that when you have a "large" enough spread between LSAT scores (which the poster doesn't quantify but refers to as "high LSAT schools" versus "low LSAT schools") you are likely to see some difference in the quality of the students. That observation doesn't validate the utility of ordinally ranking law schools by LSAT score. The fact that it is possible to rank law schools by 75th percentile LSAT scores (or 25th for that matter) doesn't necessarily mean that doing so will reveal any meaningful information that would be useful for comparing the quality of students at those schools. This type of ranking would be relevant only if small changes (1, 2, or 3 points) in the LSAT score actually did correlate with differences in student quality -- a claim that I doubt the poster (or folks like Leiter) is willing to make. At best it might suggest that clustering schools by LSAT scores makes more sense (assuming that one could actually measure the difference in student quality using LSAT scores).
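The clustering idea in the comment above can be made concrete. One simple approach is to sort schools by 75th-percentile LSAT score and start a new tier wherever the drop to the next school exceeds some threshold. The scores, school names, and threshold below are all hypothetical, purely for illustration:

```python
# Hypothetical 75th-percentile LSAT scores for a handful of schools
# (the names and numbers are invented, not real data).
schools = {"A": 172, "B": 171, "C": 170, "D": 166, "E": 165, "F": 160}

def cluster_by_gap(scores, min_gap=3):
    """Group schools into tiers, starting a new tier whenever the
    drop from one score to the next is at least min_gap points."""
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    tiers, current = [], [ranked[0]]
    for prev, cur in zip(ranked, ranked[1:]):
        if prev[1] - cur[1] >= min_gap:
            tiers.append(current)
            current = []
        current.append(cur)
    tiers.append(current)
    return [[name for name, _ in tier] for tier in tiers]

print(cluster_by_gap(schools))  # [['A', 'B', 'C'], ['D', 'E'], ['F']]
```

Note that the result depends entirely on where the threshold is set, which is exactly the "where to draw the lines?" problem raised later in this thread.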

Alex poses a fair question, and it is one that I have pondered. This is all just impressionistic, of course, but here are a few more thoughts.

First, in my view, all of the ordinal rankings of law schools are silly in the way that Alex describes. I have said that for as long as I can remember. And, as Alex suggests, I find some value in thinking about law schools in clusters.

Second, I believe that LSAT scores measure something meaningful about student quality. In Brian's student quality rankings, distinctions between the LSAT scores at the top schools and LSAT scores at the bottom schools are easy to see. Having taught at several law schools on the list, my experience has been that those differences show up in the classroom.

Third, the tricky question is in Alex's first comment: "At what point is the difference between LSAT scores meaningful to be able to draw any conclusions about the alleged overall quality of the student body?" I agree that fine distinctions are difficult. Is Yale at 176 meaningfully better on this measure than Harvard at 175? I would be surprised. How about the difference between Yale and Chicago at 172? That is a big enough difference that it might show up, though both schools have first-class students. I suspect that a difference of that size is more likely to show up between lower ranked schools because the ceilings will be different.

I have moved between schools and observed dramatic fluctuations within a single school that resulted in differences at the 75th percentile in the four- to five-point range. In my view, that gap yielded noticeable changes in the quality of classroom discussions.

The ordinal ranking is for presentation purposes, and does not prejudge what the relevant differences are. Gordon, what alternative form of presentation is there besides an ordinal listing? One could group the schools into clusters, but where to draw the lines? With the basic data there -- LSAT, GPA, class size -- readers can draw the inferences that make sense, I would have thought. As you point out, there are in fact differences between student bodies, and they do track the significant LSAT differences. So, e.g., in my own recent experience, I thought there was almost no difference between the best students I had at Chicago last fall and the best students I have at Texas. But I also thought the "average" at Chicago was stronger than the average at Texas -- results that fit the LSAT differentials (among other things) reasonably well (so, e.g., our 90th percentile LSAT is like Chicago's 75th, which, given the differences in class size, adds up to roughly the same number of students with the strongest numerical credentials; but the median numerical credentials are quite a bit lower at Texas than at Chicago).

It's a fair point about presentation, Brian. You are looking at this from the vantage point of a producer of rankings, and I was writing from the vantage point of a consumer. My reference to ordinal rankings being "silly" was intended to suggest that we consumers of rankings shouldn't read too much into small differences in placement.

Gordon, what bugs me about using LSAT to judge student quality is that it doesn't seem to capture what admissions committees themselves find significant.

For example, admission at my school was extremely competitive. I myself got in everywhere but Yale. Almost all my friends did as well. I know of no one here who did not get into Columbia or NYU; and no one I met at admissions weekend chose Columbia or NYU over my school. Yet both Columbia and NYU are supposed to have a stronger student body than my school.

The inference is this. The admissions committee accepting students does not seem to agree with the use of the LSAT to measure student quality. Otherwise it wouldn't fill its classes with a student body that had on average a lower LSAT than the student body at schools where admissions was noticeably less competitive, and which were less desired among students. The committee obviously wants the strongest student body it can get.

Surely the admissions committees are not fumbling in the dark. What metrics are they using to indicate student strength? Admissions committees have access to a much more extensive picture of each student, and they have a lengthy historical record against which to compare each applicant with similar past applicants. Given what these committees know, why not use their decisions to measure student strength?

Knowing what percentages of students were cross-admitted to which schools and where those students chose to go would capture some of this "hidden" information, wouldn't it?
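One way to use the cross-admit data proposed above is a revealed-preference tally: for each pair of schools, count what fraction of students admitted to both chose one over the other. The records below are entirely invented (school "A" stands in for the commenter's unnamed school), just to show the shape of the computation:

```python
from collections import defaultdict

# Hypothetical cross-admit records: (school chosen, school turned down).
# Each entry is one student admitted to both schools. Invented data.
choices = [
    ("A", "Columbia"), ("A", "Columbia"), ("Columbia", "A"),
    ("A", "NYU"), ("A", "NYU"), ("NYU", "Columbia"),
]

# Head-to-head tallies: wins[x][y] = students who chose x over y
wins = defaultdict(lambda: defaultdict(int))
for chosen, declined in choices:
    wins[chosen][declined] += 1

def win_rate(x, y):
    """Fraction of cross-admits between x and y who chose x."""
    total = wins[x][y] + wins[y][x]
    return wins[x][y] / total if total else None

print(win_rate("A", "Columbia"))  # 2 of the 3 shared admits chose A
```

A full version of this idea would need pairwise comparisons aggregated into a ranking (and a large sample of admits), but even raw head-to-head rates would capture some of the "hidden" information the comment describes.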

Flash, "what bugs me about using LSAT to judge student quality is that it doesn't seem to capture what admissions committees themselves find significant."

No one claims that LSAT scores are the only relevant criterion for admissions. Admissions committees vary in their reliance on numbers, but every admissions committee I have encountered considers both numerical and non-numerical factors.

I agree that information on cross-admissions could be quite revealing, though I have no idea whether anyone has attempted to gather that sort of information. Admissions officers have a good sense for how well they compete with peer schools, but I am not sure whether they get their information from admittees or some other source.