I did a bit of SSRN/LSN hacking, and determined that ... there are some pretty major differences in the average number of downloads different LSN journals get. The below relies on data from just under a quarter million papers that are classified to LSN e-journals, and only extends to LSN classifications (i.e. working paper series or other SSRN e-journal classifications are not included).

This is an updated ranking of flagship law reviews at US law schools (updated as of March 20, 2018, including the 2019 US News numbers). ... The ranking table below includes all of the law reviews that ranked in the top 150 in the MetaRanking, including all journals that ranked in the top 100 in at least one of the following rankings: US News Peer Reputation Score Ranking (avg., 2010-2019), US News Overall Ranking (avg., 2010-2019), the Washington & Lee University ranking (current version, 2009-2016; default weighting), the Google Scholar ranking (index as of June 2017), and the W&L Impact Factor Ranking (not included in the MetaRank). ...

The IRS offers a rather puzzling explanation for why the continued failure to afford proper processing to at least some of the victim applicants should not prevent a finding of cessation. That explanation is that the organizations whose applications were still pending “were involved in ‘litigation’ with the Justice Department ….” The Service’s brief further illuminates this point with a footnote explaining that “[u]nder long-standing procedures, administrative action on an application for exemption is ordinarily suspended if the applicant files suit in court.”

It is not at all clear why the IRS proposes that not ceasing becomes cessation if the victim of the conduct is litigating against it. The IRS position is reminiscent of Catch-22 from the novel of the same name.

Under that “catch,” World War II airmen were not required to fly if they were mentally ill. However, anyone who applied to stop flying was evidencing rationality and therefore was not mentally ill. “You are entitled to an exemption from flying,” the government said, “but you can’t get it as long as you are asking for it.”

Parallel to Joseph Heller’s catch, the IRS is telling the applicants in these cases that “we have been violating your rights and not properly processing your applications. You are entitled to have your applications processed. But if you ask for that processing by way of a lawsuit, then you can’t have it.”

We would advise the IRS: if you haven’t ceased to violate the rights of the taxpayers, then there is no cessation. You have not carried your burden, be it heavy or light.

The IRS’s only further attempt to justify the lack of cessation as to some of the applicants is to refer to its Catch-22 litigation rule as a “longstanding policy.” To this we would advise the IRS: if you haven’t ceased discriminatory conduct, the fact that you have been failing to cease it for a long time does not create cessation. You still have not carried your burden….

Even if we assumed there was voluntary cessation, we would still conclude that the government has not carried its burden to establish mootness because it has not demonstrated that “(1) there is no reasonable expectation that the conduct will recur [or] (2) interim relief or events have completely and irrevocably eradicated the effects of the alleged violation.” ...

[T]he complaints alleged extensive discriminatory conduct including “delayed processing … harassing, probing, and unconstitutional requests for additional information that … required applicants to disclose, among other things, donor lists, direct and indirect communications with members of legislative bodies, Internet passwords and user names, copies of social media and other Internet postings, and even the political and charitable activities of family members.” While the Inspector General’s Report references many of these discriminatory actions, neither it nor anything else presented by the government meets the heavy burden of establishing that “interim relief or events have completely and irrevocably eradicated the effects of the alleged violation.”

The problem (we've encountered it in philosophy in the past, but now everyone there knows Google Scholar is worthless for measuring journal impact) is that there is no control for the volume of publishing by each journal: any journal that publishes more pages and articles per year will do better than a peer journal with the same actual impact that publishes fewer articles and pages.

Includes only flagship/general law reviews at ABA accredited schools (I think I've captured (almost) all of these, but let me know if I've missed any). Rankings are calculated based on the average of Google's two scores (h5-index and h5-median), as proposed here by Robert Anderson. The final column shows how much a journal's rank has changed in 2016 versus last year's ranking (0 indicates no change, a positive number indicates the ranking has gone up in 2016, while a negative number indicates a drop in ranking in 2016).

Publications and citations are essential to the research academic. They help separate experts from novices in a given field. They provide metrics for universities to gauge the quality of their professors’ scholarship. In legal scholarship there is a particularly meaningful measure that distinguishes law from other disciplines: citations in published opinions. Supreme Court citations to law reviews convey the importance of an article to a particular area of law.

I decided to create a meta-ranking from the leading contenders for gauging the relative importance of journals and publication offers: US News Overall Ranking (averaged from 2010-2017), US News Peer Reputation Ranking (also averaged from 2010-2017), W&L Combined Ranking (at default weighting; 2007-2014), and Google Scholar Metrics law journal rankings (averaging the h-index and h-median of each journal, as proposed here by Robert Anderson). I've ranked each journal within each ranking system, averaged these four ranks using a 25% weighting for each, and computed and ranked the final scores. I think this approach benefits from incorporating a couple different forms of impact evaluation (W&L + Google) while not disregarding the general sentiment that law school “prestige” (USN combined rank + peer reputation rank, each averaged over an 8-year period) is an important factor in law review placement decisions.
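The averaging step is simple enough to sketch in a few lines of Python. This is a reconstruction of the described method, not the author's actual code; the function names are mine:

```python
# Sketch of the MetaScore computation described above: a journal's rank in
# each of the four systems, equally weighted at 25% apiece.
def meta_score(usn_peer, usn_overall, wl, google):
    """Equal-weighted average of the four per-system ranks."""
    return (usn_peer + usn_overall + wl + google) / 4

# Harvard Law Review's ranks in the table below are 1 (peer), 2 (overall),
# 2 (W&L), and 1 (Google), yielding its MetaScore of 1.5.
print(meta_score(1, 2, 2, 1))  # 1.5
```

Journals are then sorted by MetaScore, lowest first, to produce the MetaRank column.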

Here are the Top 25:

| MetaRank | Journal | Change from USN Rank | MetaScore | Avg. USN Peer Rank | Avg. USN Overall Rank | W&L Rank | Google Rank |
|---:|---|---:|---:|---:|---:|---:|---:|
| 1 | Harvard Law Review | 1 | 1.5 | 1 | 2 | 2 | 1 |
| 2 | The Yale Law Journal | -1 | 1.75 | 1 | 1 | 3 | 2 |
| 3 | Stanford Law Review | 0 | 2.75 | 3 | 3 | 1 | 4 |
| 4 | Columbia Law Review | 0 | 3.75 | 4 | 4 | 4 | 3 |
| 5 | University of Pennsylvania Law Review | 2 | 6.5 | 9 | 7 | 5 | 5 |
| 6 | Michigan Law Review | 4 | 8 | 8 | 10 | 8 | 6 |
| 7 | California Law Review | 1 | 9 | 7 | 8 | 12 | 9 |
| 8 | New York University Law Review | -2 | 9.25 | 6 | 6 | 14 | 11 |
| 8 | Virginia Law Review | 1 | 9.25 | 9 | 9 | 9 | 10 |
| 10 | The Georgetown Law Journal | 4 | 9.75 | 13 | 14 | 6 | 6 |
| 11 | Texas Law Review | 4 | 12 | 15 | 15 | 10 | 8 |
| 12 | University of Chicago L. Rev. | -7 | 12.75 | 5 | 5 | 25 | 16 |
| 12 | Duke Law Journal | -1 | 12.75 | 11 | 11 | 16 | 13 |
| 14 | Cornell Law Review | -1 | 13.25 | 12 | 13 | 15 | 13 |
| 15 | UCLA Law Review | 1 | 13.5 | 16 | 16 | 7 | 15 |
| 16 | Northwestern University Law Review | -4 | 15.25 | 14 | 12 | 13 | 22 |
| 17 | Minnesota Law Review | 3 | 15.75 | 20 | 20 | 11 | 12 |
| 18 | Vanderbilt Law Review | -1 | 17.5 | 17 | 17 | 20 | 16 |
| 19 | Notre Dame Law Review | 4 | 21.75 | 27 | 23 | 19 | 18 |
| 20 | Iowa Law Review | 5 | 22.5 | 27 | 25 | 18 | 20 |
| 21 | Boston University Law Review | 3 | 24.25 | 25 | 24 | 22 | 26 |
| 22 | William and Mary Law Review | 8 | 25.5 | 32 | 30 | 21 | 19 |
| 23 | The George Washington L. Rev. | -2 | 26 | 23 | 21 | 29 | 31 |
| 23 | North Carolina Law Review | 11 | 26 | 21 | 34 | 28 | 21 |
| 25 | Southern California Law Review | -7 | 26.5 | 19 | 18 | 32 | 37 |
| 26 | Boston College Law Review | 5 | 27.25 | 29 | 31 | 23 | 26 |

The big movers here (in this ranking versus the average US News Overall Rank from 2010-2017) seem to be the following, though quite a few other journals moved around as well:

Following up on last week's post on the 2015 Google Law Review Rankings: my Pepperdine colleague Rob Anderson has expanded his annual Google Law Review Rankings to include specialty, secondary, and law-related peer-reviewed journals. The Tax Law Review is the only tax journal to make the list of the Top 299 law reviews, at #121. Here are the ten most cited articles in the Tax Law Review over the past five years:

The Journal of Criminal Justice has been on a roll. Once considered a somewhat middling publication — not in the same league as top journals like Criminology and Justice Quarterly — it is now ranked No. 1 in the field according to its impact factor, which measures the average number of citations a journal receives and is meant to indicate which titles are generating the most buzz.

Rocketing to No. 1 is even more impressive when you find out that in 2012 the Journal of Criminal Justice was way back in 22nd place. That’s quite a leap!

Predictably, that sharp uptick made some researchers in a field devoted to misdeeds a tad suspicious. Among them was Thomas Baker, an assistant professor of criminal justice at the University of Central Florida. So Mr. Baker did what good researchers in all fields do: He took a hard look at the data. Then, after emailing it to a few friends, he decided to publish what he had found in the field’s widely read newsletter, The Criminologist.

What he found was this: Much of the rise in the journal’s impact factor was due to citations in articles published in the Journal of Criminal Justice itself.
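The mechanics are easy to illustrate. An impact factor is just average citations per article, so a journal's citations to itself feed straight into its own numerator. A minimal sketch, with invented counts (not the journal's actual figures):

```python
# Invented numbers: how self-citations move a citations-per-article metric.
def impact_factor(total_citations, articles_published):
    """Average citations per published article."""
    return total_citations / articles_published

outside_cites, self_cites, articles = 250, 350, 100

with_self = impact_factor(outside_cites + self_cites, articles)
without_self = impact_factor(outside_cites, articles)
print(with_self)     # 6.0
print(without_self)  # 2.5
```

With the self-citations stripped out, the metric falls back toward what outside scholars actually made of the work, which is the comparison Baker's analysis invites.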

Attorney well-being and depression are topics of great concern, but there has been no theory-driven empirical research to guide lawyers and law students seeking well-being. This article reports a unique study establishing a hierarchy of five tiers of factors for lawyer well-being, including choices in law school, legal career, and personal life, and psychological needs and motivations established by Self-Determination Theory. Data from several thousand lawyers in four states show striking patterns, repeatedly indicating that common priorities on law school campuses and among lawyers are confused or misplaced. Factors typically afforded most attention and concern, those relating to prestige and money (income, law school debt, class rank, law review, and USNWR law school ranking) showed zero to small correlations with lawyer well-being. Conversely, factors marginalized in law school and seen in previous research to erode in law students (psychological needs and motivation) were the very strongest predictors of lawyer happiness and satisfaction. Lawyers were grouped by practice type and setting to further test these findings. The group with the lowest incomes and grades in law school, public service lawyers, had stronger autonomy and purpose and were happier than those in the most prestigious positions and with the highest grades and incomes. Additional measures raised concerns: subjects did not broadly agree that judge and lawyer behavior is professional, nor that the legal process reaches fair outcomes. Specific explanations and recommendations for lawyers, law teachers, and legal employers are drawn from the data, and direct implications for attorney productivity and professionalism are explained.

Watching my youngest son draft and redraft his high school essays under the watchful eye of his English teacher, who is smitten by the inerrant wisdom of Strunk and White’s Elements of Style, I was curious how the best legal scholarship in the country fares by classic rules of writing.
To simplify my task, I have chosen one rule that is easily quantifiable. ... "[T]he expression 'the fact that' should be revised out of every sentence in which it occurs." ...

A ten-year search of the number of times “the fact that” appeared in the flagship journals of the top law schools reveals the following:

This article is the latest in a series of simple annual studies of the sales of some leading law reviews, undertaken with an eye to getting an admittedly rough and partial sense of the state of publishing in the legal academy. Over the years, the data itself has turned out to be a little bit interesting in spots. More interesting (perhaps), and more amusing and worrisome (certainly), have been the continuing small discoveries that some law reviews report relatively low paid circulation numbers to the U.S. Postal Service (which appear only in tiny-type government forms buried in the rarely read front- or back-matter of the reporting law review), but then tout higher sales numbers in promotional sections of their websites. It is reminiscent of the way some law schools have number-fudged their presentation of other kinds of data to, for example, U.S. News & World Report. The law review-school comparison might prompt the reader to wonder light-heartedly how many law school deans were once law review editors. But answering that question would be too easy, and too far afield from the focus here on publishing in the legal academy. There is, however, another question whose answer might be more interesting, and more likely to lead to intriguing comparisons. The question: How have the size and composition of law review editorial staffs changed over time, in absolute terms and in terms of their relationship to the product they put out? Possible comparisons will probably suggest themselves. This year’s report covers the usual ground relating to paid circulation and associated editorial behavior. It also offers a limited and tentative first take on the production question.

As I have previously noted, Tax Notes fares poorly in the Impact Factor category (citations/number of articles published) because W&L apparently counts as "articles" all of the advance sheet material in Tax Notes.

Tax Notes is #1 by a wide margin in the number of citations in law reviews, with more than double the citations of its nearest competitor:

Tax Notes is #1 by a wide margin in citations in law reviews (762 v. #2's Virginia Tax Review's 595), but fares relatively poorly (.001, ranked #20) in the Impact Factor category (citations/number of articles published). My guess is that W&L counted as "articles" all of the advance sheet material in Tax Notes. (Note: I omitted the NYU Journal of Law and Business from the above chart because it is not a tax journal.)
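The arithmetic behind that guess is straightforward: in a citations-per-article ratio, a denominator swollen by advance-sheet items swamps even a field-leading citation count. The article counts below are invented for illustration; only the 762 figure comes from the post:

```python
# How the denominator choice drives a citations-per-article impact factor.
def impact_factor(citations, articles):
    return citations / articles

cites = 762  # Tax Notes' law review citation count, per the post

print(impact_factor(cites, 300))     # counting only full-length articles
print(impact_factor(cites, 300000))  # counting every advance-sheet item too
```

The same numerator produces either a respectable or a near-zero impact factor, depending entirely on what the counter treats as an "article."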

My friend and colleague Rob Anderson (Pepperdine) has expanded his Google Law Review Rankings to cover 216 law reviews based on articles published in 2007-2011 (with links to the most-cited articles for each journal). Here are the Top 25, along with each journal's ranking in the Washington & Lee law review rankings:

Tax Notes is #1 by a wide margin in citations in law reviews (1165 v. #2's Virginia Tax Review's 554), but fares relatively poorly (.001, ranked #24) in the Impact Factor category (citations/number of articles published). My guess is that W&L counted as "articles" all of the advance sheet material in Tax Notes. (Note: I omitted the NYU Journal of Law and Business from the above chart because it is not a tax journal.)

Update: Thanks to Omri Marian for letting me know that Washington & Lee has released an updated ranking based on citations to articles published in 2005-2012. I will blog those rankings in a forthcoming post.

My initial decision on where to publish has typically been guided by the US News rankings of law schools, which, in legal publication circles, are used as a proxy for the quality of a law school’s journal. ... To be sure, there are other means by which to choose among publication offers. Washington & Lee University, for example, ranks journal impact, i.e., how often the journal is cited. ...

Given the major flaws in the two primary journal ranking systems, I would like to see a law professor develop a ranking methodology based on authors’ experiences with the publishing journals. Law professors are already ranking nearly every imaginable thing under the sun—see, for example, here, here, here, here, and here. And a “law review author ranking” would actually be meaningful. I would love for a semi-mathematically inclined professor to run with this idea, and conduct an annual survey of authors (nearly all of whom will be his/her fellow law professors) in order to rank their law journal editing and publishing experiences.

I’ll get the ball rolling. The categories to be ranked could include: timeliness of the publication (on time = 10 points); time allowed for the author to review edits (two weeks = 10 points); deference to the author’s style (high deference = 10 points); creation of errors during the editing process (no editor-created errors = 10 points); responsiveness to the author’s edits (short response time = 10 points); and quality of the journal’s website (an up-to-date website posting the article = 10 points). Of course, there are probably a dozen other categories that could be included, but the total number of categories ranked should be few, and the respondents should be guaranteed anonymity, in order to induce participation by authors.

It is true that law review editors turn over every year, and a new batch takes their place. This means that a great experience with “Journal A” could easily have been a bad experience had the article been published a year earlier or later. It is further true that some law professors—especially those seeking tenure—will, by necessity, continue to be slaves to the US News rankings when selecting among publication offers. However, ranking the journals on the quality of their editing process would still do two important things.

First, by ranking certain categories, such as whether the editors were deferential to the author’s writing style, authors would be clearly communicating to journal editors what they value in the publication process. And most of the editors will likely respond by improving performance in these areas. ...

And second, if a particular journal ranks high, it will likely be a source of pride, which will transfer to the next year’s editorial board. Similarly, if a particular journal ranks low, that too will be passed on, and will give the next year’s board the incentive to do better than its predecessor board. Remember, rankings are powerful. Law review editors are students, and some students do drastic, life-ruining things based on rankings, e.g., going into debt $150,000 or more to go to a law school ranked in the 20s instead of taking a full scholarship at a school ranked in the 50s.

In 2011, for the first time since the U.S. Postal Service began requiring law reviews to track and report their circulation numbers, no major law review had more than 2,000 paying subscribers. The Harvard Law Review remains the top journal, but its paid circulation has declined from more than 10,000 during much of the 1960s and ’70s to about 5,000 in the 1990s to 1,896 last year.

Maybe it’s the hundred-degree heat talking, but I think law review rankings are a little bit useful. As a reader and researcher, I do make some use of an article’s placement as a screen for how close of an initial read to devote to it. When I look at the c.v.’s of two scholars whose work I’ve never read, I’m probably inclined to look more attentively at the work of the one with the fancy cites. Yeah, I said it. Put away the pitchforks, dear readers: I don’t think I’m alone. Satisficing is not going away. ...

It would be nice, then, if there were reliable guides to the signaling value of a given journal placement. U.S. News gives us a decent if limited signal; since most authors agree that at the pinnacle its rankings are roughly meaningful, we get scarcity. So we can assume that journals at the top are more selective than others. Whether they make good decisions when picking the few from the many we don't know. ... Is there a better way to rank journals? ...

An approximation of a value-neutral approach might be to simply rank publications based on the use other scholars make of them. (For a thoughtful review of why that method works and what its problems are, see Russell Korobkin, 26 FSU L. Rev. 851, and Ronen Perry.) Korobkin argues that, basically, citation counts create the least bad set of incentives; usefulness to others seems like a decent result even if it's somewhat distorting of the real scholarly mission. ...

Well, the Washington & Lee Law Library, as many readers will know, offers a ranking of law journals based on total citations and "impact factor," or IF. ... As weak as IF is in general, W&L’s implementation is particularly problematic. ...Finally, to be parochial, W&L only uses Westlaw to generate its citation counts, and Westlaw doesn’t include Tax Notes, a major publication for us tax types. (This is also our gripe with Leiter). So tax articles are (sniff) even more under-appreciated. ...

[W]hat I'd particularly like to see is some kind of quality-weighted influence measure, along the lines of Google PageRank, as described here.

For the past couple of years we have needled the Harvard Law Review (HLR) about its tendency to err on the side of inflation when describing the size of its subscriber base. So, it seems only fair now to salute the HLR’s recent correction, and to note that the extravagant circulation claims made these days by the Virginia Law Review make the HLR’s old claims seem downright modest. This year we are offering two new perspectives on the law review business. The first is really just a bigger version of an old one. We have added several law schools’ flagship law reviews to our little tables of journal circulation rates. The newcomers are: Boston University Law Review, Emory Law Journal, Minnesota Law Review, Indiana Law Journal, Illinois Law Review, Notre Dame Law Review, Boston College Law Review, Iowa Law Review, William and Mary Law Review, George Washington Law Review, Fordham Law Review, Alabama Law Review, North Carolina Law Review, Washington Law Review, Washington and Lee Law Review, Ohio State Law Journal, UC Davis Law Review, Georgia Law Review, Wisconsin Law Review. We also corrected a few errors in earlier versions of the tables and filled in a few blanks, an exercise that will doubtless be repeated in the future. The second new perspective is a look at the distant past, when only a few law reviews published any circulation numbers. A casual review of some of those early numbers, in tandem with an equally casual glance at the advertising pages of those early law reviews, provides an ironic reminder of a plausible piece of conventional wisdom about the decline in sales of print editions of law reviews: that the decline has been and is being caused by the rise of searchable electronic databases and of an Internet via which to conveniently tap into those databases.

[T]ake a look at the graph on page 550. It shows the trends in paid subscriptions at three leading law reviews — the HLR, the Yale Law Journal, and the Columbia Law Review — for which we have at least some data from the 1960s to the present. (The graph is prettier than it ought to be because we have filled in the blanks and smoothed the curves for each journal by assuming that its circulation rates in years for which we lack data are the same as the rates in the immediately preceding years.) The gray bar cutting across all three circulation trend lines marks the period during which Westlaw advertisements began appearing in the law reviews. Correlation does not indicate causation, of course, but it is hard to resist the thought that the appearance of that ink-on-paper Westlaw advertisement in the November 1979 ink-on-paper HLR marked what might eventually turn out to be the beginning of the end for the ink-on-paper HLR, and for ink-on-paper law reviews more generally.

Mr. Koulikov examines the level of coverage that articles originally published in law reviews receive in eight major general academic databases. His findings are very similar to those of other discipline-specific database coverage studies, and reveal that coverage varies widely by database, regardless of the database’s claim to cover legal periodicals. This has particular implications for the level of engagement that nonlegal scholars have with the literature of legal academia, and for the potential for meaningful interaction between legal scholars and their peers in other academic fields.

For our second annual study of the law review business [see the first study here], we added circulation data for four flagship law reviews (UCLA, Texas, USC, and Washington University) and two specialty journals (NYU’s Tax Law Review and Duke’s Law and Contemporary Problems). We also corrected a few errors in the tables in our first study and filled in a few blanks. And, finally, we noticed something that might be worth thinking about: the possibility that the law school combover culture has infected law reviews.

Davies documents an enormous decline in law review circulation over the 1979-2009 period. The Tax Law Review's circulation, for example, has declined 89.1% from a peak of 5,685 in 1980-81 to 620 in 2006-07.

This brief Essay reports a study of citations to every article published in 1992 in thirteen leading law journals. It uses citations as a proxy (an admittedly poor one) of article quality and then compares the citations across journals. There are, not surprisingly, vast differences in the number of citations per article. While articles in the most elite journals receive more citations on average than the other less elite (but still highly regarded) journals studied, some articles in the less elite journals are more heavily cited than many articles in even the most elite journals. In keeping with studies in other disciplines and other citation studies of legal journals, the results here suggest that we should be wary of judgments about quality based on place of publication. We should also be wary of judgments about quality of scholarship based on the number of citations, and we should, therefore, continue to evaluate scholarship through close reads of it.

Since the year 2000, LawTV has compiled the list of the best law schools in the United States, based on qualitative (rather than quantitative) criteria. More than half a million pre-law students, law students, law professors, and lawyers use the Law School 100 rankings each year.

The Law School 100 includes every ABA-accredited law school. The top 100 law schools are listed in their ranking order. The second 100 law schools (Tier 2 law schools) are listed in alphabetical order.

[L]aw reviews could be ranked, as are newspapers and other periodicals, based on circulation. Surprisingly, although the U.S. Post Office collects circulation figures for periodicals desiring reduced postage rates, we found no attempt in the literature to rank law reviews based on circulation. Our own preliminary ranking of law reviews by circulation yielded surprising results. Only five of the top twenty law reviews, but eight of those ranked lower than one-hundred (as measured by U.S. News & World Report), are included in the top twenty law reviews based on circulation figures.

[Here were the Top 10 law reviews by circulation, along with the schools' U.S. News peer reputation ranking:

1. Harvard: 7500 (1)
2. Arkansas (Fayetteville): 5000 (97)
3. Yale: 4500 (1)
4. Arkansas (Little Rock): 3800 (119)
5. Cornell: 3500 (11)
6. McGeorge: 3200 (108)
7. Boston University: 3000 (25)
7. Brooklyn: 3000 (64)
7. Seattle: 3000 (108)
7. South Carolina: 3000 (87)]

Ross E. Davies (George Mason) has compiled the circulation figures of the general law reviews at the Top 15 law schools as ranked by U.S. News in Law Review Circulation, Green Bag Almanac & Reader 164 (2009). Here is the abstract:

Many law reviews are required by law to publish accurate reports of basic information about their subscribers and circulation. But many do not -- do not report accurate information or do not report information at all. Perhaps this is in response to steep declines in subscriptions, which the available reports illustrate.

Davies documents a 62.4% decline in law review circulation over this 29-year period, from 47,543 in 1979-80 (3,170 per law review) to 17,878 in 2007-08 (1,192 per law review) (using data from the closest year if data was missing for either 1979-80 or 2007-08). The biggest percentage declines were:
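Davies's headline figures check out against the totals reported above (the divisor of 15 follows from the Top 15 coverage noted in the preceding paragraph). A quick verification in Python:

```python
# Totals from the text: aggregate paid circulation across the 15 law reviews.
start_total, end_total, n_reviews = 47_543, 17_878, 15

decline_pct = 100 * (start_total - end_total) / start_total
print(round(decline_pct, 1))           # 62.4  (% decline, 1979-80 to 2007-08)
print(round(start_total / n_reviews))  # 3170  (per law review in 1979-80)
print(round(end_total / n_reviews))    # 1192  (per law review in 2007-08)
```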

A large-scale, multinational attempt in Europe to rank humanities journals has set off a revolt. In a protest letter, some journal editors have called it "a dangerous and misguided exercise." The project has also started a drumbeat of alarm in this country, as U.S.-based scholars begin to grasp the implications for their own work and the journals they edit.

The ranking project, known as the European Reference Index for the Humanities, or ERIH, is the brainchild of the European Science Foundation, which brings together research agencies from many countries. It grew from a desire to showcase high-quality research in Europe. Panels of four to six scholars, appointed by a steering committee, compiled initial lists of journals to be classified in 15 fields. Each journal was assigned to a category — A, B, or C — depending on its reputation and international reach. ...

My MoneyLaw colleague Tom Bell (Chapman) notes Michigan's new Wolverine Scholars Program -- in which Michigan undergrads with a minimum 3.80 GPA are admitted to Michigan Law School if they agree not to take the LSAT. The rankings benefit is that there is no LSAT score to report to U.S. News, while the minimum 3.80 GPA will boost Michigan's median 3.64 GPA, which counts 10% in U.S. News' methodology. Other schools presumably will follow Michigan's lead and create similar programs to recruit their undergrads while also goosing their U.S. News ranking.

The rankings motive is further corroborated by the disqualification if the potential Wolverine Scholar has taken the LSAT. ... [T]here are terrible externalities from this alleged merit-based program. It is impossible to deny that the Wolverine Scholars program will encourage students to (a) take easier classes and majors to avoid the need to take the LSAT to get into an elite law school, (b) discourage extracurriculars that will threaten the 3.8, and (c) make a lot of Michigan undergraduate professors miserable with complaints from students that their B+ or A- grade is going to blow their Wolverine Scholar application.

From a rankings perspective, what happens when you get 20, 30, or 40 candidates with 3.8+ UGPA and no LSAT score? From day 1 of admissions season, Michigan has much greater latitude to lock in higher median LSAT and UGPA numbers--because zero Wolverine Scholars are dragging down the LSAT and all are helping the UGPA numbers. Further, because of the idiosyncrasies of the USNWR rankings formula, see Ted Seto's Understanding the U.S. News Law School Rankings, at the upper ranges, small changes in UGPA have a much greater sway on rankings than a single LSAT point. For example, in the simulation model that Andy Morriss and I created, a move from 3.64 to 3.66 has a greater effect than a move from 169 to 170. If Michigan can get to a 3.80 UGPA, they could tie with NYU at #5.

Early Assurance applicants are exempt from taking the LSAT and registering with the LSDAS. Instead, please include an official transcript with at least five semesters of undergraduate grades. Early Assurance applicants must submit two recommendations, one of which must be the Early Assurance Dean's Certification Form. Competitive Early Assurance applicants should have an undergraduate GPA of at least a 3.8.

Texas Lawyer has released its annual ranking of the nine Texas law schools, based on a survey completed by 1,132 students enrolled at the schools (the response rate ranged from 10% to 25% at each school). The ranking equally weighs eight variables:

Teaching Quality

Faculty Accessibility

Preparation for Practice

Placement Office Helpfulness

Collegiality

Student Diversity

Technology

Library Services

Here is the overall ranking of the Texas law schools under the Texas Lawyer methodology, along with their rankings in U.S. News and World Report (overall and peer reputation) and SSRN downloads (as well as their rank among U.S. law schools):

Writing in the on-line edition of the National Law Journal, Dean Gary Simson of Case Western says the following about U.S. News’ recent announcement of possible changes to its methodology:

This announcement, and the wrench that it threatens to throw into structural changes that have been made to avoid being disadvantaged by a deeply flawed methodology, should cause law school faculties and administrations everywhere to finally say ‘enough’ and that they are done participating in a ranking system that has done substantial harm and little, if any, good to legal education in the United States.

In response, Robert Morse of U.S. News states:

If a law school refuses to provide U.S. News directly with statistical data from their annual American Bar Association (ABA) accreditation data questionnaire, then U.S. News still can get almost all of that school’s official ABA data from the ABA website. U.S. News would still be able to rank a law school, even if it refused to participate.

Robert Morse, Director of Data Research at U.S. News & World Report, responds to the forthcoming National Law Journal op-ed by Gary J. Simson, Dean at Case Western (which fell ten places in the U.S. News overall rankings last year to #63), Say "Enough" to "U.S. News" (blogged here):

The U.S. News rankings also do not, as the dean implies, have a negative impact on legal education and law school admissions. The rankings provide prospective law school students with information about the relative merits of law schools that is not available from any other source. Going to a law school is a very expensive and time-consuming process, and our rankings provide one tool for students to use in choosing the best school for their needs.

The next U.S. News law school rankings aren't published until late March 2009, and we do not plan to make a decision on this issue until fall 2008 or early 2009. As we have done in the past before we change our methodology, U.S. News will carefully consider the impact of any such modification.

Many law school deans are upset about the recent announcement by U.S. News & World Report that it is seriously considering revising its law school rankings methodology to treat part-time students' entering credentials (LSAT score and undergraduate GPA) no differently than full-time students'. [blogged here, here, here, here, here, here, and here] ...

There is much room for reasonable debate among law deans as to how problematic the U.S. News proposal is, whether any deficiencies in it are curable with fine-tuning, and whether U.S. News should instead be thinking seriously about its treatment of transfer students. However, it seems beyond debate that it is truly depressing that law deans, who have so many important educational issues to address, feel the pressure they undeniably feel to make important decisions about their schools in response to a popular magazine's educationally unsophisticated decisions about ranking methodology. ...

Deans feel obliged to become experts in the ways of winning in the rankings, and in seeking higher rankings; the faculty and administration all too often make structural decisions about the law school with the rankings foremost in mind. In an effort to boost entering students' credentials they cut, often quite dramatically, the number of students in the first-year class. Then, to make up for the lost income to their heavily tuition-dependent school, they increase, often quite dramatically, the number of transfer students or LL.M. students and they develop a part-time program or expand an existing one. They economize by not filling faculty lines vacated by retirements and departures and by downsizing the staff. They diminish or even eliminate need-based financial aid in favor of using scholarship money to target incoming students who will boost the median LSAT and GPA. ...

What, then, do I propose now in response to the U.S. News announcement of a possible change in its ranking formula? I propose that law school faculties and administrations treat the announcement as a wake-up call and recognize how much they have allowed themselves to be at the mercy of editors whose primary interest is selling magazines, rather than providing a means of ranking schools that actually might promote the things that make for genuine greatness in a law school. This announcement, and the wrench that it threatens to throw into structural changes that have been made to avoid being disadvantaged by a deeply flawed methodology, should cause law school faculties and administrations everywhere to finally say "enough" and that they are done participating in a ranking system that has done substantial harm and little, if any, good to legal education in the United States. Even the faculty and administration at the most highly ranked schools — those schools that today appear to be winning in the rankings game — should recognize that they have a major stake in abandoning a system that, at some magazine editors' whim, could be suddenly revamped in ways that could send those schools plummeting from their lofty perch.

Assessing “value added” on a relative basis by school is not only knowable; we actually know it for many schools. So we just need to present the data in a convenient and user-friendly way for survey respondents, and then the rankings will move for at least a handful of schools, beginning next spring – and then we get our race to the top, beginning next summer. Caron's chart tells us: it can be done. Which is why, in my humble opinion, this chart might well be the most important in the history of legal education.

I previously blogged (here, here, here, and here) the announcement (here and here) by Robert Morse, Director of Data Research at U.S. News & World Report, that the magazine is "seriously studying" two changes to its law school rankings methodology that would affect 24.5% of the overall ranking:

The proposal is strongly opposed by deans at schools with part-time programs designed for students who are years past college graduation and often well into careers outside the law. They warn that a school's place on the U.S. News list is so important that some schools would drop the part-time programs rather than slip lower in the national rankings.

I previously blogged (here, here, and here) the announcement (here and here) by Robert Morse, Director of Data Research at U.S. News & World Report, that the magazine is "seriously studying" two changes to its law school rankings methodology that would affect 24.5% of the overall ranking:

If US News starts counting the LSATs of part time and transfer students, currently cheating law schools have to choose between tuition and rankings. The schools that choose ranking concerns over tuition receipts will admit fewer people with lower LSAT scores, who are likely to be disproportionately older, poorer, female, and/or People of Color. The schools that choose tuition will admit these students into their full time first year classes, treating them like everybody else, rather than as second class citizens. So whether this change helps or hurts women (and other affected groups) is going to depend on how many law students prioritize tuition, and perhaps also value the increased diversity of their first year classes that will likely result from accepting students with somewhat lower LSAT scores.

I previously blogged the announcement by Robert Morse, Director of Data Research at U.S. News & World Report, that the magazine is "seriously studying" two changes to its law school rankings methodology that would affect 24.5% of the overall ranking:

Compute the bar passage rate indicator (2% of the ranking) as the ratio of the school's bar pass rate to the jurisdiction's bar passage rate, using only the data of first-time takers who are graduates of ABA-accredited schools.
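The proposed indicator above is a simple ratio, which a minimal sketch makes concrete. The pass rates below are hypothetical examples, not data for any actual school:

```python
# Minimal sketch of the proposed bar-passage indicator (2% of the ranking):
# the school's first-time pass rate divided by the jurisdiction's overall
# first-time rate for graduates of ABA-accredited schools.

def bar_passage_indicator(school_rate, jurisdiction_rate):
    """Ratio of school pass rate to jurisdiction pass rate (first-time takers)."""
    return school_rate / jurisdiction_rate

# Hypothetical school: 88% first-time pass rate in a jurisdiction averaging 80%.
# A ratio above 1.0 means the school beat the state average.
print(bar_passage_indicator(0.88, 0.80))
```

A ratio formulation, rather than the raw pass rate, is what lets schools in hard-exam jurisdictions be compared fairly with schools in easier ones.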

Brian Leiter and Dan Solove both criticize the first proposed change because it will result in schools cutting back on their night programs and thus the opportunities they provide to nontraditional students,