Research, teaching, and mentorship in the sciences


Academic Moneyball


Over the past week, I’ve been reading Moneyball, by Michael Lewis.

I’m not a baseball person (though I do keep tabs on football, i.e. soccer). I found Moneyball to be interesting in its own right, but particularly when considering how its lessons may be applied to academic culture.

Lewis tells the story of Billy Beane, the general manager of a small-budget major league baseball team, who assembled a crew that was better than most big-budget competitors. How did Beane pull this off?

According to Moneyball, Beane saw through the intellectually inbred and reality-challenged worldview that permeated the baseball community at the time. Scouts were picking players — and offering them humongous salaries — on the basis of athletic traits that didn’t help teams win games.

Players were highly valued for certain traits, such as speed, fielding ability, and throwing distance. A player’s physique also had a huge influence on the opinions of scouts, even though it offered only a marginal increase in performance. Beane was the first person in the baseball world to embrace the fact that things like speed, fielding, and outward appearance had a relatively small effect on a player’s ability to help a team win. By paying attention to the numbers, and rejecting the universally accepted common wisdom, he saw that the single most valuable characteristic of a player is the ability to get on base. But this statistic, on-base percentage, was being overlooked by managers and scouts in favor of less useful measures. There’s more depth and nuance to the story, of course, but you get the idea.

Beane ignored what everybody else was thinking and built a team based on the notion that players have to work together to score runs. He assembled a complementary set of players with characteristics that increased the probability of scoring runs. To Beane, what mattered was the ability to score runs, so players with a history of putting up the numbers that increase the probability of runs were valuable. And he was the only person with actual power in the industry who chose to see the world that way, which let him build a very effective team by picking players who were undervalued by the community.

This book kinda rocked my world, by helping me think about how academia also has a conventional wisdom that is mostly independent of reality. The traits that are valued within academia aren’t necessarily the ones that result in substantial academic progress. How is it that we size up academics? The number of publications, grant money, citations, institutional prestige, an impressive demeanor?

What constitutes a successful outcome of the academic enterprise? I would hope you agree that publication frequency and grant revenue don’t matter in the big picture. Think about the academics who have most transformed your field in the past hundred years. Are their successes the result of these kinds of traits, or are they successful because of other characteristics? What kind of science do we want to support and encourage?

As we pick graduate students, hire postdocs, conduct faculty searches, grant tenure, and select honorees, what are we measuring? We are essentially serving as the talent scouts of academia. Are we selecting for the traits that matter? What kinds of great progress do we want to see, and how do we measure the traits that get us there? I think the traits that we tend to focus on obscure the matter rather than promote real advancement. For example, quality mentorship is valued, but not as much as it is worth to our scientific community. Teaching gets little genuine respect. Publication quality is only measured in terms of journal prestige and number of citations, but these measures really have little to nothing to do with quality or the prospect for real impact.

If we use value measures that do not result in genuine scientific success, then we are causing scientists to chase the wrong parameters. If we actually valued the things that build a vigorous and rigorous scientific community, one that produces innovative work on a regular basis, then we’d get more people pursuing this kind of work. We are all familiar with the publish-or-perish mantra, the demand for grant funding, and the need to be associated with a prestigious pedigree. We are also familiar with people in our midst who do great science without cranking out a ton of papers or bringing in lots of money, but most of these people are undervalued.


3 thoughts on “Academic Moneyball”

Great post Terry. I’ll hopefully have more to say in a long-gestating future post, but for now I’ll just note a way in which the Moneyball analogy breaks down a little. In baseball, it’s pretty clear what the “fitness” metric is: the number of games the team wins. Ok, it’s a bit more complicated than that, for instance because teams may want to sacrifice wins now in order to improve their chances of winning many games in the future. But basically: team wins. In contrast, in academia it’s not clear what the appropriate fitness metric is, or even that there is a single one. Billy Beane realized that, objectively, certain kinds of baseball players were being undervalued by the market. In contrast, while academic scientists and their employers certainly value some traits over others, that seems to me to often be a matter of somewhat-subjective judgement calls about what constitutes “good science” or a “good scientist”.

Your post also fudges a bit on what “team” you see academics as a part of. A team of one (themselves)? Their dept.? Their university? Science as a whole? That matters, because activities that increase the fitness of one or more of those “teams” might be costly to one or more of the others.

Responding to Jeremy. It gets worse, I think. I’m more ignorant of philosophy of science than I should be, but I’m aware that a lot of very good philosophers of science have struggled to come up with coherent definitions of goodness of science (Lakatos “progressive research programmes”; Popper “demarcation of pseudoscience”; Conant “fruitful hypotheses”), without [as far as I know] really establishing anything except that it’s a very, very difficult problem. And that’s even before we start trying to quantify goodness objectively …