Agoraphilia

Thursday, July 03, 2014

Is anyone still checking this blog in the hope that I might start blogging again? If so, you're in luck! To publicize my new book, Economics of the Undead (co-edited with James Dow and featuring chapters written by 20+ other authors), I have created an Economics of the Undead website. In addition to the book's table of contents, chapter excerpts, and a course guide, there's also a blog featuring posts with the latest econ-undead news and commentary. The book's official publication date is July 11, but you can pre-order now. And, if you would be so kind, like/tweet/share/follow/pimp the book and blog to anyone you think might appreciate them!

Friday, September 13, 2013

There’s a meme floating around that the storyline of Breaking Bad constitutes a scathing indictment of the U.S. healthcare system. The latest entry is this comic strip, which says that if Breaking Bad had been set in the U.K., it would be an “entirely different story” – one that ends in just 5 panels. But it’s not just comic strips. Daily Kos says that Breaking Bad “Displays [the] Brutality of American Private Health Insurance Non-System,” while Tricia Romano at the Daily Beast says the show “Is Fully Dependent on Our Broken Health-Care System.” There are probably other examples.

The problem with this claim isn’t that the U.S. healthcare system is actually wonderful. It’s not. The problem is that it’s just not consistent with the actual TV show. I can verify this because I’ve rewatched the whole first season (and much of the second) over the last couple of weeks.

Walter White makes his first foray into the meth business before health expenditures are even mentioned. Walter does have insurance coverage, and his HMO will cover his cancer treatment. It’s true that Walter mentions at some point that his HMO isn’t very good, but that’s as far as it goes. As it turns out, Walter doesn’t even intend to endure the treatment (as revealed a few episodes later in “Cancer Man”). It’s very clear that Walter’s overriding goal is to leave a nest egg for his wife, disabled son, and unborn baby.

Eventually, health costs do become an issue when Skyler pressures Walter to undergo treatment after all. But it’s not because his HMO won’t pay. It’s because Skyler finds an oncologist who is not just one of the best in Albuquerque, but one of the top 10 oncologists in the nation. It turns out this super-doctor with his fancy cancer treatment is not covered by the HMO, and the out-of-pocket price is $90,000. Some will say that’s the smoking gun that indicts the U.S. healthcare system. But no system in the world offers high-end care to everyone. The vaunted U.K. and Canadian systems offer care to every citizen, but they don’t offer the best care to every citizen. That’s just not possible. A single-payer system is essentially a giant public HMO, and just like a private HMO, it sometimes denies treatment or (more relevant here) denies the highest-quality treatments. Citizens who aren’t happy with the coverage provided by the government system have to pay for better care themselves, either through supplementary private insurance or out of pocket. Sometimes they even travel to foreign countries, like the U.S., for that care.

To reiterate: Walter White has health insurance, and it would have covered his cancer treatment. The only reason Walter needs so much money for medical bills is because he opts out of his insurance coverage in favor of higher-quality, more expensive treatment. And even then, it’s clear this isn’t Walter’s only motivation. In the episode “Seven Thirty-Seven,” Walter calculates how much he needs to sock away, and he comes up with $737,000, not just the $90,000 for the cancer treatment. This is a story that could have been told in many countries, including both the U.K. and Canada.

I’m not saying we can’t imagine a version of Breaking Bad that does condemn the U.S. healthcare system. For instance, they could have had Walter lose his job, and his health insurance with it, right before getting his cancer diagnosis. Less plausibly, they could’ve had his deductible and copayments be so large that he has to cook meth to pay them. (I say “less plausibly” because while those sums can be large, they’re probably not large enough to explain Walter White’s extreme actions.)

But Breaking Bad did not choose either of these routes. In fact, the show often goes out of its way to show that ultimately it’s not really about money at all for Walter; it’s about pride. Pride is why he didn’t want to have treatment in the first place. When his former colleague Elliot Schwartz offers to pay for Walter’s non-covered cancer treatment, it’s pride that makes Walter say no. And pride is why Walter continues to cook meth long after he’s achieved his monetary goals. Blaming the events of Walter White’s life on the U.S. healthcare system isn’t just wrong; it’s missing the entire point of the show.

Monday, August 05, 2013

Everyone’s talking about the lab-grown meat burger. I’ve been expecting this for years, and I think it’s extremely cool. I love me some science, and I’d totally give the burger a try. But let’s suppose the technology improves and the price drops enough for meat-without-feet to displace traditional beef. Is this clearly a good thing for the cows?

I don’t think there’s a clear answer; instead, it depends on a rather obscure philosophical question. For simplicity, let’s say we believe in animal-utilitarianism. We want to maximize the happiness (or utility) of the cows. But what are we trying to maximize, the average utility or total utility? The answer matters, because a widespread conversion to lab-grown meat would drastically reduce the number of cows being raised around the world.

If it’s average utility you care about, then the switch is probably good for the cows. Assume the few cows remaining are treated like kings; in that case, average happiness per cow will be very high.

But is average utilitarianism plausible? Average utilitarianism has some bizarre implications, not the least of which is opposition to adding new creatures with utility that is positive but below the average. If you currently have just one cow living like a king (utility of 100), and you add one more cow who lives like an earl (utility of 50), the average utility drops to 75. From an average-utilitarian perspective, you should oppose the creation of this new cow. Which is weird, because it seems like living like an earl – or even substantially worse than an earl – should be fine. I’d rather be a living pauper than not living at all.
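The arithmetic of the objection is easy to check. Here's a quick sketch; the utility numbers are, of course, just the illustrative ones from the example:

```python
king = 100   # utility of a cow living like a king
earl = 50    # utility of a new cow living like an earl

avg_before = king / 1            # 100.0: average with one cow
avg_after = (king + earl) / 2    # 75.0: average after adding the earl-cow
total_before = king              # 100
total_after = king + earl        # 150

# The average utilitarian opposes adding the earl-cow (average falls),
# even though the new cow's life is well worth living (total rises).
assert avg_after < avg_before
assert total_after > total_before
```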

Okay, so suppose we’re interested in total utility. In that case, it’s not clear whether the advent of lab-grown meat is good for the cows. If we suppose (as some animal rights activists would have us believe) that the life of a typical cow in the status quo is worse than death – that is, it has negative utility – then it would be better for the species to go extinct than continue as it is. But I’m doubtful that the life of a typical cow really has negative utility; I think it’s probably very low but positive. And even if it’s not positive now, it could be if we all switched to consuming free-range instead of factory-farmed cattle. If cattle do have lives with low-but-positive utility, then a mass conversion to lab-grown beef would certainly reduce the total utility of the cow population.
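The trade-off can be made concrete with invented numbers (the herd sizes and per-cow utilities below are mine, purely for illustration):

```python
# Invented numbers: many cows with low-but-positive utility in the status quo,
# versus a few pampered cows after a mass switch to lab-grown meat.
cows_now, utility_now = 1_000_000, 2      # big herd, life barely worth living
cows_lab, utility_lab = 1_000, 100        # tiny herd, treated like kings

total_now = cows_now * utility_now        # 2,000,000
total_lab = cows_lab * utility_lab        # 100,000

# Average utility per cow rises dramatically, but total utility collapses.
assert utility_lab > utility_now
assert total_lab < total_now
```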

But total utilitarianism has problems, too, the most important being that it plausibly falls prey to Derek Parfit’s “repugnant conclusion”: that the best possible outcome is a maximally-sized population living lives just barely worth living.

So which should we support, average or total? Sadly, philosophy offers no clear answer. Both positions lead to some strange conclusions. David Friedman has offered a kind of “third way” between these two flavors of utilitarianism (based on what economists call a “partial ordering”), but I never really understood his solution intuitively. Some people would reject utilitarianism entirely, which may be plausible for humans, but for animals it’s hard to think of any reasonable alternative. (The vegetarian-libertarian Robert Nozick famously supported “natural rights for humans, utilitarianism for animals.”) Personally, I lean toward an ill-defined compromise of sorts between average and total utilitarianism, but I don’t claim to have any coherent definition – let alone a defense – of this position.

In any case, I think you have to conclude that lab-grown meat is not obviously superior to a continued reliance on traditional meat, even from the perspective of the cows themselves. To the extent you place any weight at all on the total number of cows, any large-scale reduction in demand for beef potentially raises serious concerns. Incidentally, the same logic applies to a widespread adoption of no-lab-meat vegetarianism as well.

Saturday, March 09, 2013

I just posted this on Facebook, and I thought I might as well post it here as well.

This is the marginal utility theory of Daylight Saving Time. If you could, you would allocate your daylight according to marginal utility -- starting with the most valuable hour to have daylight, then the second most valuable hour to have daylight, and so on. Suppose, as seems to be true for many people, that your ordering (from most to least valuable) is something like this: The hours you want lit the most are from 7am-5pm. (Don't worry about the ordering of preferences within that period, because you'll get that much daylight even in the dead of winter, at least where I live.) Next, you'd like some daylight in the evening, after 5pm. And least important are the early morning hours, before 7am.

Under standard time, you've got your first period (7am-5pm) covered even in winter (at least where I live, Los Angeles). But as the daylight hours get longer, they are distributed approximately equally on both sides of that time period. This is inconsistent with your preference ordering, because you'd rather get the added daylight on the evening side. By mid-March, you've added a full hour in the morning (6am-7am) and a full hour in the evening (5pm-6pm). But you'd much rather have had both hours in the evening (5pm-7pm). Switching to DST accomplishes this. And it does a similar thing the rest of the summer, although the specific hours swapped change.

But in that case, why not have DST year-round? Because if you did, then in the dead of winter you'd have daylight from 8am to 6pm. That means you'd be getting a less-valued hour of daylight, 5pm-6pm, instead of the more-valued hour from 7am-8am.

In other words, the order in which nature provides us with added hours of sun doesn't match our preferred ordering. Nature adds hours in a symmetric fashion, while our preferences order them asymmetrically, wanting to add more in the evening before we add more in the morning.
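A toy version of the argument, with invented utility values per hour of daylight that follow the preference ordering above (7am-5pm hours most valued, then evening, then early morning):

```python
# Invented values for having a given hour lit (higher = more valuable).
value = {
    "7am-8am": 20,   # inside the top-priority 7am-5pm window
    "5pm-6pm": 10,   # evening hours come next...
    "6pm-7pm": 9,
    "6am-7am": 3,    # ...and early morning hours last
    "5am-6am": 1,
}

# Mid-March: nature adds one hour on each side of 7am-5pm.
standard_time = value["6am-7am"] + value["5pm-6pm"]
# DST shifts the clock so both added hours land in the evening instead.
dst = value["5pm-6pm"] + value["6pm-7pm"]
assert dst > standard_time  # the swap matches the preference ordering

# But year-round DST in mid-winter would trade 7am-8am for 5pm-6pm: a bad swap.
assert value["7am-8am"] > value["5pm-6pm"]
```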

And in case you're wondering: yes, this scheme does imply that we might want to have Daylight SUPER Saving Time in mid-summer, so that we could trade an hour of daylight at 4am (useless!) for an hour at 9pm (awesome!). But given how much people bitch about changing over twice a year, the adjustment costs of four+ times a year would be too great.

Friday, March 08, 2013

Anyone wondering what kind of projects have been keeping me from blogging lately? Well, here's one...

Call for Abstracts

Economics of the Undead: Blood, Brains & Benjamins

Glen Whitman & James P. Dow, Editors

The editors seek abstracts for essays exploring the relationship between economics and the undead, especially zombies and vampires. The chosen essays will appear in a collection to be published by Rowman & Littlefield.

Ideal contributions will use economic reasoning to address issues related to the undead, use the undead as a means of exploring economic thought, or both. Abstracts and final essays should be written in an accessible and engaging style for a popular audience. Contributions should also make relevant reference to the undead in pop culture, such as the Twilight saga, Buffy the Vampire Slayer, the novels of Anne Rice, World War Z, the films of George Romero, True Blood, and The Walking Dead.

Possible topics include: supply and demand in the market for blood; the operation of zombie labor markets; the political economy of responding to undead threats; macroeconomic recovery after a zombie apocalypse; what zombie and vampire behavior tell us about rational-choice modeling; etc.

Submission Guidelines:

1. Send abstract of paper (100-500 words) in Word or compatible format.
2. Include resumé/CV for each author.
3. Submit by email to both glen.whitman@gmail.com and jpdow@verizon.net.
4. Submission deadline is 21 April 2013 (extended from 7 April 2013).
5. For accepted abstracts, first drafts of essays will be due 15 July 2013.

Feel free to forward this to anyone with economics training or experience who might be interested in contributing. Although we are only asking for abstracts at this time, if you have already written an unpublished article that fits the subject matter, you may submit the article in its entirety.

Wednesday, December 12, 2012

Would you pay good money for accurate predictions about important events, such as election results or military campaigns? Not if the U.S. Commodity Futures Trading Commission (CFTC) has its way. It recently took enforcement action against overseas prediction markets run by InTrade and TEN. The alleged offense? Allowing Americans to trade on claims about future events.

The blunt version: If you want to put your money where your mouth is, the CFTC wants to shut you up.

A prediction market allows its participants to buy and sell claims payable upon the occurrence of some future event, such as an election or Supreme Court opinion. Because they align incentives with accuracy and tap the wisdom of crowds, prediction markets offer useful information about future events. InTrade, for instance, accurately called the recent U.S. presidential vote in all but one state.

As far as the CFTC is concerned, people buying and selling claims about political futures deserve the same treatment as people buying and selling claims about pork futures: Heavy regulations, enforcement actions, and bans. Co-authors Josh Blackman, Miriam A. Cherry, and I described in this recent op-ed why the CFTC’s animosity to prediction markets threatens the First Amendment.

The CFTC has already managed to scare would-be entrepreneurs away from trying to run real-money prediction markets in the U.S. Now it threatens overseas markets. With luck, the Internet will render the CFTC's censorship futile, saving the marketplace in ideas from the politics of ignorance.

Why take chances, though? I suggest two policies to protect prediction markets and the honest talk they host. First, the CFTC should implement the policies described in the jointly authored Comment on CFTC Concept Release on the Appropriate Regulatory Treatment of Event Contracts, July 6, 2008. (Aside to CFTC: Your web-based copy appears to have disappeared. Ask me for a copy.)

Second, real-money public prediction markets should make clear that they fall outside the CFTC's jurisdiction by deploying notices, setting up independent contractor relations with traders, and dealing in negotiable conditional notes. For details, see these papers starting with this one.

Tuesday, November 27, 2012

The Foundation for Economic Education recently invited me to join its flagship publication, The Freeman, as a regular contributor. It just published my first article, No Exit: Are Honduran Free Cities DOA? Here's an excerpt:

Eager to bring Hong Kong-style growth to their beleaguered Central American country, Hondurans amended their constitution in 2011. The new provisions allowed the creation of quasi-sovereign special development regions. Libertarians thrilled at the prospect.

By making it easier to escape from bad government to better government, the Honduran plan would put the forces of competition and choice in the service of the Honduran people. Formerly, Hondurans who voted with their feet had to flee their homeland. Now, they could stay and wait for good government to come to them--at least to the neighborhood.

Those grand visions came to nothing, however. Instead, the Honduran Supreme Court struck down the constitutional amendments as ... unconstitutional. Does that spell the end of the Honduran experiment in newer, freer cities?

Monday, November 05, 2012

I plan to vote on Tuesday, for the same reasons I enunciated eight years ago. Nevertheless, I respect the position of libertarians who choose not to vote on grounds of principle (“the whole system is corrupt and I refuse to take part”) or rational cost-benefit analysis (“my vote won’t make a difference, and I might get hit by a truck on the way to polls”). So libertarian non-voters, I’m not talking to you right now.

I’m talking to the libertarians who do vote. To be more specific, I’m talking to libertarians who have found some reason to think that one of the major party candidates is the lesser of two evils. I respect that, too. Even though your vote won’t really swing the election to your favored candidate, taking part in the democratic process often means exaggerating the importance of your vote. Personally, I like to imagine that I represent all similarly situated people with similar beliefs, and then I vote the way I’d like to see the whole group vote.

Thus, if I were in a swing state where conceivably a group of libertarian-minded voters could affect the outcome if they all voted together, I would hold my nose and vote for one of the two major party candidates.

According to the New York Times electoral map, only 7 states are considered “toss-ups”: CO, FL, IA, NH, OH, VA, WI. To these, you might add the 8 “leaning” states: ME, MI, MN, NM, NV, PA (for Obama), AZ and NC (for Romney). If you’re a libertarian voter in one of these 15 states, then I have nothing useful to tell you.

But that leaves 35 states that are solidly in the Democratic or Republican camp, with a combined eligible-voter population of over 136 million (about half that number voted in 2008). None of these states would by any stretch of the imagination get tipped by your vote-of-exaggerated-size. In these states, there is no good reason to vote for Obama or Romney. You can vote your conscience with no fear that your conscience will have doomed our country to the greater of two evils.

And fortunately, there is an excellent vote-of-conscience choice available this year: Gary Johnson. Imagine if everyone like us (that is, libertarians in non-swing states) voted for Johnson. If even 1% of voters were in this category, Johnson would get over a million votes -- which might actually be enough to get some attention, and maybe establish a beachhead for another run in 2016.

Okay, that probably won’t happen. But your vote was never going to make a difference anyway. Not anywhere, in truth, but certainly not in a non-swing state. So why not vote for the only candidate who comes even close to representing your beliefs? Vote for Johnson.

UPDATE (11/6/12): To clarify, I have no particular love for the Libertarian Party, and my argument is not about setting up the LP for future elections. It's about setting up Gary Johnson for another run in 2016, whether as a Libertarian, Republican, or Independent. And it's also about casting a vote of conscience, irrespective of consequences.

Friday, July 27, 2012

I just learned today that the Laemmle Sunset 5, a theater famous for showing independent films, shut down late last year. The theater's website explains the theater's demise:

Eventually, new multiplexes such as the Arclight and the Grove opened nearby and began nabbing the artful specialty films that had long been the Sunset 5’s exclusive domain.

When a small business closes in the face of competition from larger firms, it's common to hear complaints about the capitalist system -- along with calls for subsidies or government protection (such as having the location designated a "historical landmark"). So I was pleasantly surprised to read the next line:

Such is the forward motion of time and commerce.

I appreciate and respect the owners' choice to accept the theater's fate with equanimity.

For those worried about the lack of venues for independent films, two things: First, as the quote above indicates, the Arclight and Grove theaters were able to squeeze out the Laemmle Sunset 5 in part because they offered independent fare in addition to the usual major-studio movies. And second, the Sunset 5 has been acquired by Robert Redford's Sundance Cinemas, which is currently renovating the theater. As Tom Bernard, co-president of Sony Pictures Classics, says in the linked article: "Maybe fresh blood will bring new life into the theater and come new cash too. A face lift on the theater may attract new audiences and make it a place to be." Only time will tell.

UPDATE (added immediately after posting): Before anybody says it, I should point out that the Laemmle family will do just fine. They have other theaters, including new ones opening elsewhere in the L.A. area. Obviously, it would be harder for someone who owned only one location to greet the news with such equanimity. Nevertheless, I respect how the Laemmles responded. Moreover, the usual calls for subsidy and protection don't just come from the small business owners, but from people who have an interest in the business, including suppliers (like indie producers) and devoted customers. I'm pleased that apparently didn't happen here.

Thursday, July 26, 2012

1. A fine of $60 million, which is approximately equal to the (past) annual gross revenue of the football program.

2. A four-year ban from postseason play; thus, Penn State will not be allowed to share in the conference’s bowl revenue, an estimated loss of about $13 million a year.

3. A cut in the number of football scholarships it can award each year.

4. The NCAA also erased 14 years of Penn State victories, wiping out 111 of Paterno’s wins and stripping him of his standing as the most successful coach in the history of big-time college football. Former Florida State coach Bobby Bowden, with 377 major-college victories, will replace Paterno, while Paterno will be credited with 298 instead of 409.

I don’t get #4. How do you change history? What about all the former players and spectators who know what actually happened, and what about all the newspaper and TV archives that attest to what actually happened? How is the NCAA going to change all that? It makes no sense to me.

Two of my father’s friends replied. One of them, JAB, argued that punishment #4 was an attempt to impose “a punishment that hits them in a place other than the wallet,” which was needed in order to send a message that all those wins are less important than honor and integrity. The other, Mike, said that history is changed all the time; as an example, he offered a story in which a company accidentally pays an employee $100,000 when it should have paid him $10,000, and then takes back $90,000 after noticing the error.

Here is how I responded:

I don’t think JAB and Mike’s responses to my dad’s question are sufficient.

With respect to JAB’s point, no one is disputing the NCAA’s motivation. They want to punish Penn State for bad behavior, and clearly there are ways they can do it (see items #1-3). The question is whether the NCAA has the ability to change history. History is what it is. If you committed murder, could the government punish you by changing your birthday? Of course not; your birth happened on a particular day. They could pretend it happened on a different day, or not at all, but that wouldn’t change the fact.

With respect to Mike’s point, it shows that it’s possible to remedy past events, but not to change them. In his example, you were still paid $100,000 in the month of February, period. The mistake was made, and the money appeared in your bank account. The subsequent take-back fixes the mistake, but it does not change history.

I would add to Dad’s point that trying to change history creates historical anomalies and contradictions. If Penn State didn’t win a particular playoff game, that implies that its opponent must have won. But then why didn’t that opponent appear in the subsequent playoff game? (Not being a follower of college football, I realize that “playoff” might be the wrong word, and the structure for determining champions isn’t like the NFL’s bracket structure. But you see my point.)

If I were to defend NCAA’s punishment #4, my defense would rely on the notion of a “social fact.” Some facts are true by the nature of physics, chemistry, etc., and therefore do not change based on human desire or behavior -- such as that the earth orbits around the sun. But social facts are different. They are true based on human conventions and values that define them as such. For instance, there is no “cosmic truth” about who won a game of chess. Rather, who won the game is a function of a set of rules for play, and those rules were invented by humans. Likewise, whether Philip and Elaine [my parents] are married is a matter of social convention -- what we as a society regard to be necessary and sufficient conditions for marriage.

In the case of college football, what NCAA is basically saying is that a “win” isn’t what you think it is. You may have thought that a “win” meant scoring a larger number of points by means of touchdowns, field goals, etc. (and of course, all those things are social facts as well). But NCAA is saying that, in an NCAA-qualified game, a “win” means both scoring a larger number of points within the game and abiding by certain standards outside the game. Thus, it is possible to decide or discover, retroactively, that what seemed to be a win was not. In short, NCAA claims the right to define the social facts within its sphere of control.

However, it would also be valid for someone (like Dad) to respond thus: “Okay, fine, you can define ‘win’ however you want for NCAA purposes. But you can’t redefine ‘win’ for the general public. The general public understands ‘win’ in terms of the conventional rules of football. And by those rules, it’s a fact that Penn State won all those games, and NCAA is powerless to change that.”

I leave it as an exercise to apply this lesson to the question of who won the 2000 election.

Tuesday, July 10, 2012

Some people are touting the statistics reported here, which show that the national debt has increased by a smaller percentage under Obama than the four previous presidents.

Reagan: 189%
Bush I: 55%
Clinton: 37%
Bush II: 86%
Obama: 35%

I double-checked the numbers, and they are technically correct. But they’re also meaningless.

First, these figures compare presidents instead of presidential terms. It shouldn’t be surprising that a two-term president will tend to rack up more debt than a one-term president.

Second, this is one of those instances where percentages are completely misleading. Each administration inherits the debt built up by all previous administrations, and that inherited debt provides the denominator for calculating the percentage increase. As a result, the percentage is automatically pulled downward for later presidents simply because they are later.

(For comparison, imagine if the entire $14.9 trillion in debt accumulated since 1980 had been added in equal-sized chunks by all eight presidential terms. That would be $1.87 trillion per term. Yet the percentages wouldn’t be equal at all. They would decline with every single term, from a high of 201% for Reagan 1 to a low of 13% for Obama.)
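The parenthetical example is easy to verify. The starting debt of roughly $0.93 trillion below isn't stated in the text, but it's implied by the other figures ($14.9 trillion added in eight equal chunks, with a 201% first-term increase):

```python
chunk = 1.87    # trillions of dollars added per presidential term (equal chunks)
debt = 0.93     # trillions: approximate 1980 starting debt, implied by the 201% figure

pcts = []
for term in range(8):                  # eight terms, Reagan 1 through Obama
    pcts.append(100 * chunk / debt)    # percentage increase relative to inherited debt
    debt += chunk                      # the next term inherits a bigger denominator

# Equal dollar increases every term, yet the percentage falls monotonically...
assert all(earlier > later for earlier, later in zip(pcts, pcts[1:]))
# ...from about 201% (Reagan 1) down to about 13% (Obama).
assert round(pcts[0]) == 201 and round(pcts[-1]) == 13
```

This is the denominator effect in miniature: later presidents look thriftier in percentage terms merely because they start from a larger inherited debt.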

So what happens when we correct for both errors? Correcting the term problem first, and also adjusting dollars for inflation (something else I don’t think the original source did), here are the percentage increases by presidential term:

Now Obama’s record isn’t the best. He has the third highest percentage increase, and he hasn’t even finished his term yet. (I used the most recent national debt figures, which you can find here. For pre-1993 figures, see here.)

But again, the percentage is misleading. It would be better to look at the absolute dollar increase (again, adjusted for inflation). Here’s what you get:

Now it becomes clear: Compared to the previous seven presidential terms, Obama has presided over the largest increase in the national debt. And again, his term isn’t over yet.

Obviously, Obama’s defenders will say his actions were justified. He inherited a terrible economy, a large stimulus was necessary to boost performance, some expenditure increases were outside Obama’s control, etc. Those arguments might even be right, and they’re free to make them… but only after admitting that the national debt did, in fact, increase dramatically during Obama’s term.

One final addendum: these numbers could, of course, be adjusted in many other ways as well. You could adjust for the size of GDP or population. You could change the start-and-end dates to reflect who passed the relevant budgets, or to reflect that a presidential term doesn’t start until about a month after the election; doing so would shift some of Obama’s debt into Bush II’s second term (as well as shorten Obama’s effective time in office). In truth, there’s something inherently silly about trying to attribute changes in the national debt to specific presidents at all, since additional debt results from a complex interplay of policies created by multiple presidents and congresses over time. All I’m really trying to correct here is two very obvious errors that the creators of these particular statistics should have seen instantly, and probably would have seen if they didn’t have partisan blinders on.

Tuesday, June 12, 2012

As in every year since 2005, I’ve again built a model of the U.S. News & World Report ("USN&WR") law school rankings. This latest effort generated a record-high r-squared coefficient: .998673. More about what that means—and more about the one law school that doesn’t fit—below. First, here’s a snapshot comparison of the scores of the most recent (USN&WR calls them “2013”) law school rankings and the model:

As that graphical comparison indicates, the model replicated USN&WR’s scores very closely. Indeed, the chart arguably overstates the differences between the two sets of scores because it shows precise scores for the model but scores rounded to the nearest one for USN&WR.

As I mentioned above, comparing the two data sets generates an r-squared coefficient of .998673. That comes very close to an r-squared of 1, which would show perfect correlation between the two sets of scores. Plainly, the model tracks the USN&WR law school rankings very closely.

In most cases, rounding to the nearest one, the model generated the same scores as those published by USN&WR. In four cases, the scores varied by 1 point. That’s not enough of a difference to fuss over, given that small variations inevitably arise from comparing the generated scores with the published, rounded ones. Consider, for instance, that USN&WR might have generated a score of 87.444 for the University of Virginia School of Law and published it as “87.” The model calculates Virginia’s score in the 2013 rankings as 88.009. The rounded and calculated scores differ by 1.009. But if we could compare the original USN&WR score with the model’s score, we would get a difference of only .565 points. I won’t worry over so small a difference.
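The rounding arithmetic, using the hypothetical unrounded Virginia score from the example above:

```python
underlying = 87.444            # hypothetical unrounded USN&WR score
published = round(underlying)  # USN&WR publishes the rounded score: 87
model = 88.009                 # the model's calculated score

gap_vs_published = model - published    # 1.009: looks like a full-point miss
gap_vs_underlying = model - underlying  # 0.565: the true discrepancy

# Rounding alone can turn a sub-point discrepancy into an apparent 1-point miss.
assert published == 87
assert gap_vs_published > 1 > gap_vs_underlying
```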

You know what does worry me, though? Look at the far right side of the chart above. That red “V” marks the 4.48 difference between the 34 points USN&WR gave to the University of Idaho School of Law and the score that the model generated. Idaho showed a similar anomaly in last year’s model, though then it was not alone. This year, only Idaho does much better in the published rankings than in the model.

Tuesday, May 22, 2012

Huzzah for U.S. News and World Report! The most recent edition of its law school rankings includes the median LSAT and GPA of each school’s entering class. Finally. I have long argued that USN&WR should publish all of the data that it uses in its rankings. How else can the rest of us (read: rankings geeks) understand how—and, indeed, whether—the rankings work? Though USN&WR remains short of that ideal, disclosing median LSATs and GPAs represents a major step towards making the rankings more transparent and, thus, trustworthy.

USN&WR started the trend towards transparency last year, when it began publishing the “volume and volume equivalents” measures that it uses in its law school rankings. That input counts for only .75% of a school’s score, however. Median LSATs and GPAs together count for 22.5% of a school’s score, in contrast, making their disclosure by USN&WR all the more helpful.

There remain only two categories of data that USN&WR still uses in its law school rankings but does not disclose: overhead expenditures/student (worth 9.75% of a school’s score in the rankings) and financial aid expenditures/student (worth 1.5%). It isn’t evident why USN&WR declines to publish those inputs, too, though perhaps the financial nature of the data raises special concerns. If USN&WR cannot bring itself to publish overhead expenditures/student and financial aid expenditures/student, however, it should abandon those measures. They serve as poor proxies for the quality of a school’s legal education, and if we cannot double-check the figures, we cannot trust their accuracy.

Tuesday, May 15, 2012

Former student Gabe Krupa, remembering a lecture I gave on the topic of misleading graphs and statistics, alerted me to this graphic showing the fall in Yahoo’s enterprise value. (I didn’t know the term enterprise value, but apparently it’s similar to market capitalization but with a few tweaks.)

As you can see, from 2006 to 2012, Yahoo’s enterprise value fell from $54.9 billion to $17.26 billion. The current value is just under a third of its value six years ago. But that big circle looks a lot more than three times larger than the small circle. In fact, it’s about ten times larger.

As Gabe said in his email to me, the creators of this graphic used a 2/3 reduction in the radius of the circle when they should have used a 2/3 reduction in the area. Since the area of a circle increases with the square of the radius, the graphic drastically overstates the difference in value. (To be more specific, the small circle’s radius is about 31.4% of the big circle’s radius. The square of 0.314 is 0.098, meaning the small circle’s area is 9.8% of the big circle’s area.)
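The numbers from the graphic check out in a few lines:

```python
import math

old_value, new_value = 54.9, 17.26   # Yahoo enterprise value, billions, 2006 vs. 2012
ratio = new_value / old_value        # ~0.314: how the value actually shrank

# The graphic applied this ratio to the radius; circle area scales as radius squared,
# so the small circle ends up with only about a tenth of the big circle's area.
area_ratio = ratio ** 2              # ~0.098

# A correct graphic would shrink the radius by the square root of the value ratio.
correct_radius_ratio = math.sqrt(ratio)   # ~0.561

assert abs(area_ratio - 0.098) < 0.001
```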

This kind of error was highlighted in Darrell Huff’s How to Lie With Statistics, first published in 1954. The bad news is that media sources still make the same error, whether purposely or accidentally, almost 60 years later. The good news is that apparently some students really do remember what they learned in class, even years later. My thanks to Gabe for bringing this example to my attention six years after taking my course.