The discussion is provoked by an article recently published in the Journal of Wine Economics that tried to find out, analyzing Wine Spectator ratings of advertised versus unadvertised wines and comparing them with the scores given by Robert Parker’s Wine Advocate, which does not accept advertising.

A Natural Experiment?

Economists are drawn to Freakonomics-style “natural experiments” like this one, but there are several reasons why it would be difficult to prove systematic advertiser bias at Wine Spectator if it existed (and I don’t think it does).

First, you would need a set of ratings that would give you a “true” value for each wine, which could be compared with the critic scores to reveal bias. However, research published in the Journal of Wine Economics actually suggests that wine ratings are highly variable — different experts can give the same wine markedly different ratings and, indeed, the same wine judge can give the same wine different scores in blind tastings.

I don’t think, based on this research, that it is possible to assemble a set of objective, unbiased scores to serve as a “control” data set. Without that control, it is hard to prove anything, even with sophisticated statistical tools.
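A back-of-the-envelope simulation can illustrate the point (this is a hypothetical sketch, not from the article or the study; the wine count, noise level, and bias size are all assumptions): when individual ratings of the same wine vary by a few points, even a modest advertiser bias is hard to separate from ordinary judge noise.

```python
import random
import statistics

random.seed(42)

N = 200            # number of wines rated (assumed)
JUDGE_SD = 3.0     # points of noise per rating (assumed, Hodgson-style variability)
TRUE_BIAS = 1.0    # hypothetical score bump for advertised wines

def rate(quality, bias=0.0):
    """One noisy critic score for a wine of a given underlying quality."""
    return quality + bias + random.gauss(0, JUDGE_SD)

qualities = [random.gauss(90, 2) for _ in range(N)]
advertised = [i % 2 == 0 for i in range(N)]

# Critic A accepts ads and is (hypothetically) biased toward advertisers;
# Critic B is the ad-free "control" — unbiased toward ads but just as noisy.
a_scores = [rate(q, TRUE_BIAS if ad else 0.0) for q, ad in zip(qualities, advertised)]
b_scores = [rate(q) for q in qualities]

# Estimate the bias as the advertised-vs-unadvertised gap in score differences.
diffs = [a - b for a, b in zip(a_scores, b_scores)]
gap = (statistics.mean(d for d, ad in zip(diffs, advertised) if ad)
       - statistics.mean(d for d, ad in zip(diffs, advertised) if not ad))

print(f"estimated advertiser bias: {gap:.2f} points (true value: {TRUE_BIAS})")
```

Re-running with different seeds shows the estimate bouncing around by a point or more in either direction — the per-rating noise is several times larger than the bias being hunted, which is exactly why a noisy "control" critic makes the statistics so slippery.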

The Mondovino Effect

Second, it is kind of ironic that Wine Advocate was used in this study as the control. Although WA doesn’t accept advertising and is presumably immune from advertiser influence, that doesn’t necessarily mean it is unbiased. Indeed, the scuttlebutt in the wine world (see Mondovino for example) is that Robert Parker has a very definite (you might say biased) idea of what wine should be and that his ratings very much reflect his particular palate.

Let me say that I am not sure how true this is and, because I am not an expert wine taster, I am not the right person to judge it. I will say that I have tasted some very un-Parker wines (in terms of the style stereotype) that received high WA scores, so I am somewhat of a Parker-bias skeptic.

If, however, we accept for the sake of argument the conventional wisdom that certain types of wines are more likely to get high Parker scores than others, then it seems like WA ratings are a problematic control for this experiment. You would be comparing possibly-biased apples with oranges that could be biased in a different way.

The problem gets worse when you consider the oft-heard accusation that some winemakers tailor their wines to Parker’s palate in an attempt to get good ratings. Some of these makers of “Parker wines” are probably more likely than others to advertise in WS and other publications. It’s a messy situation, don’t you think? Hard to extract an objective control from all the alleged “noise” in the data, and even more difficult to track down potential bias.

Market Discipline

Finally, I’d like to suggest one more irony. The premise of the original study is that wine critics will cheat their clients (the subscribers) when the benefits in terms of advertising revenue are high enough (so long as they can keep their bias secret).

I understand that subscribers don’t want to play if they think the game is fixed, but it seems to me that the same holds true for advertisers. Who would want to buy ads in a magazine if doing so looked equivalent to paying “protection” money to the mob? I’d run the other way. Wine-critic publications need both to be unbiased and to appear unbiased in order to prevent the advertising equivalent of “capital flight.”

And so the final irony is this. Because Wine Advocate does not accept advertising, it is not subject to this sort of “market discipline.” Robert Parker could in theory be as biased in his ratings as he wants and it would never cost him a single advertising dollar.

Does Parker exploit this freedom to promote particular interests? No, I don’t think so, but he could. Simply being ad-free is no guarantee of virtue (or objectivity) any more than selling ads is an indicator of vice.

Critics (and not just wine critics) live in a messy world of complex incentives and disincentives. We, the consumers of critic opinions, need to understand the situation and make our own subjective judgments.


2 responses

Let me add one point to this good analysis, Mike. While every reader/consumer should be aware of potential biases, unmitigated skepticism seems as short-sighted as unquestioning credulity. Consider the source! If a critic, or a publication, or a friend, has acted in ways that you judge honest and reliable, then perhaps you are justified in extending some trust, even if you can’t “prove” the lack of bias through mathematical calculations of statistical significance.

When wine critics are NIST-certified for accuracy you could have a real control group. Until then, Hodgson’s paper establishes a reasonable range for the most consistent judges, and after reviewing both Hodgson’s and Reuter’s papers I believe both Wine Spectator and Wine Advocate are rating wines to an accuracy of about plus or minus 2 points regardless of advertising. I would like to see the relationship between the mean, mode, and median. If the mode in Reuter’s study is skewed more than 4 points one way or the other that might be interesting… still, it would only be a subjective analysis, and as you point out wine suffers enough from that :-)

The Wine Economist

What would you get if you crossed the Wine Spectator, America's best-selling wine magazine, with the Economist, the world's leading business weekly? The answer is this blog, The Wine Economist, which analyzes and interprets today's global wine markets. Staff: Mike Veseth (editor-in-chief) & Sue Veseth (contributing editor).