The Science of Success

MediaPredict

Last month, the publisher Simon & Schuster announced a partnership with a Web site called MediaPredict, which would use the collective judgment of readers to evaluate book proposals. The deal drew scorn from many, who saw it as evidence that publishers, in an era of stagnant sales, had so lost confidence in their own judgment that they were reduced to the methods of “American Idol.” Asking readers to weigh in on a book’s commercial prospects was a recipe for mediocrity, and the experiment was “doomed to fail.” Yet even the idea’s critics recognized that it was a response to a real problem: most books today are not economically successful, which means that much of the time and money that publishers invest in projects is wasted.

For years, publishers have accepted this as unavoidable. After all, in nearly every media business most products flop, and most of the profits come from a small number of huge successes. In the music industry, only ten per cent or so of recordings actually turn a profit. On network television each season, most new shows fail. And in Hollywood, according to the economist Art De Vany, six per cent of films account for the bulk of the industry’s profits. Media companies have learned, accordingly, to diversify their bets and, increasingly, to share their risk with others. (Hollywood studios, for instance, now often bring in outside investors on big projects, limiting their potential downside.) But the process of predicting whether a product will be a hit remains remarkably haphazard and erratic.

Many people argue that it’s foolish to expect otherwise, and that no science of success is possible. In the famous words of the screenwriter William Goldman, “Nobody knows anything.” The fate of a book or a movie, the argument goes, is determined by too many factors to be predictable—advertising, reviews, word of mouth, luck, and, in the case of big hits, a simple desire to see what all the fuss is about. De Vany, for instance, says that the box-office performance of Hollywood films is “chaotic” in the mathematical sense of the term. Three Columbia sociologists recently found something similar in a series of online experiments in which people were divided into eight groups, asked to listen to songs by unknown artists, rate them, and then decide which ones they’d like to download—after being told how often others had downloaded the songs. The highest-rated songs, it turned out, were not always the most frequently downloaded. And in each group a different song ended up topping the charts. In the laboratory, at least, success appeared to be essentially random.
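The Columbia experiment's dynamic can be illustrated with a toy simulation. This is not the sociologists' actual model or data; it is a minimal cumulative-advantage sketch, with made-up numbers, in which listeners tend to download songs that others have already downloaded. Run eight independent "worlds" with identical songs, and different songs rise to the top in different worlds:

```python
import random

def simulate_world(n_songs=48, n_listeners=2000, seed=0):
    """One 'world': listeners see download counts and tend to pick
    already-popular songs (a simplified rich-get-richer model)."""
    rng = random.Random(seed)
    downloads = [1] * n_songs  # every song starts with one download
    for _ in range(n_listeners):
        # choose a song with probability proportional to its downloads
        pick = rng.choices(range(n_songs), weights=downloads)[0]
        downloads[pick] += 1
    # return the index of the chart-topping song in this world
    return max(range(n_songs), key=lambda s: downloads[s])

# Eight independent groups, identical songs -- different charts emerge,
# because early random choices get amplified by social influence.
winners = [simulate_world(seed=s) for s in range(8)]
print(winners)
```

Because the songs are identical in "quality" here, which one wins in a given world is decided entirely by early random choices that social influence then amplifies, which is the sense in which success looks random in the laboratory.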

MediaPredict, however, is wagering that in the real world success is, at least in part, predictable, and it follows a model that, over the past decade, has proved surprisingly effective in forecasting a wide range of events: the prediction market. Prediction markets function like futures markets, except that, instead of betting on the future performance of a company or a commodity, people can bet (often with play money) on things like election outcomes, current events, and product sales. Rather than relying on the gut instincts of a single decision-maker, prediction markets tap the collective intelligence of everyone playing the market. The most successful media prediction market is the Hollywood Stock Exchange, in which traders collectively forecast the box-office performance of Hollywood films, Oscar nominations and results, and the performance of individual actors, with striking accuracy. The market on average picks more than eighty per cent of Oscar nominees correctly, and hasn’t missed more than one Oscar winner in the past four years. More important, it has also done a good job of predicting box-office performance. According to a study by Anita Elberse, a professor at Harvard Business School, the market’s forecasts are off, on average, by sixteen per cent—far from perfect, but a track record that most studio marketing departments would be proud of.
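The article does not describe MediaPredict's internal mechanics, but one standard design for such markets is Robin Hanson's logarithmic market scoring rule (LMSR), an automated market maker whose current price can be read as the market's implied probability of an event. The sketch below is illustrative only; the trader beliefs are invented, and real markets differ in detail:

```python
import math

def price_yes(q_yes, q_no, b=100.0):
    """Current price of a YES share under Hanson's logarithmic market
    scoring rule; it equals the market's implied probability of YES."""
    e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

# With no trades, the market is agnostic: price_yes(0, 0) == 0.5.
# Each trader buys or sells until the price matches her own belief
# (the optimal myopic strategy under LMSR), so the price keeps
# absorbing private information as people bet.
q_yes = q_no = 0.0
for belief in [0.7, 0.6, 0.8, 0.65]:  # hypothetical traders' probabilities
    q_yes = q_no + 100.0 * math.log(belief / (1 - belief))
    print(round(price_yes(q_yes, q_no), 2))
```

The point of the design is that nobody has to poll the traders: the act of betting moves the price, and the price at any moment summarizes what the crowd collectively believes.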

It isn’t just Hollywood, either. A British firm called Brainjuicer has been using collective intelligence to research the prospects of everyday consumer products, and its findings suggest that such techniques can forecast more accurately and subtly than traditional consumer research methods, which have a reputation for producing mediocre results. For instance, prediction markets avoid many of the faults of focus groups, which tend to be dominated by the loudest and most opinionated participants, to drift toward consensus, and to discourage disagreement, making them of limited usefulness. (“Seinfeld,” famously, was a complete bust with focus groups.) Prediction markets, by contrast, are competitive environments, and so they encourage diversity of opinion, minimize people’s influence on one another, and force people to think not only about their own tastes but about those of consumers as a body.

The collective intelligence of consumers isn’t perfect—it’s just better than other forecasting tools. The catch is that to get good answers from consumers you need to ask the right kinds of questions; asking the market to predict how many copies a book will sell, which requires predicting how a wide readership will behave, is better than asking the market to predict which manuscript will get a book deal, which requires predicting the decisions of a small number of editors. (The Simon & Schuster experiment with MediaPredict, unfortunately, focusses more on the latter.) And you need a critical mass of people to participate. It’ll take a while to work out the kinks, but in the long run these markets are tools that few media companies can afford to ignore. Nobody knows anything. But everybody, it turns out, may know something. ♦
