Do Numbers Count?

When I write up our panel tastings I always include our top 10 wines (or beers or spirits). After several recent columns it occurs to me that I ought to explain more precisely what these lists mean.

I’m thinking in particular of an April 1 column on 2007 German spätlese rieslings, which inspired a spirited debate on the bulletin board at the Robert Parker web site over the rankings. Similar debates, though to a lesser extent, have followed recent columns on Sancerre and American pilsners.

It’s easy to attribute too much meaning to these tastings. The best way to understand and evaluate a wine is to open a bottle and drink it, preferably over the course of a meal. Good wines are living things, and they change with age, with temperature, and with context. The Pour is dedicated to evaluating wines in that way. But for many purposes beyond this blog, that is not always practical.

For one thing, if you are trying to assess a vintage, a wine genre or a region, there are simply too many wines and not enough time for this method. So instead, we gather wines in a room — we limit ourselves to 25 at a time — and we taste and spit and evaluate. As professionals, we taste blind, and we try to block out considerations of context and to extrapolate what the wines would taste like with meals, properly aged and in other circumstances. I would like to think that we do a pretty good job with this.

After the tasting is done, I look at our notes and our scores and I rank the wines. Here’s where things apparently get tricky.

What do those rankings actually mean? In the spätlese tasting, our No. 1 wine was the ’07 Bockenauer Felseneck from Schäfer-Fröhlich in the Nahe. Our No. 10 wine was the ’07 Niederhäuser Hermannshöhle from Dönnhoff in the Nahe. Does this ranking mean that Schäfer-Fröhlich is a better producer than Dönnhoff?

Not at all. It simply means that at our tasting (we taste all the wines blind), we liked the Bockenauer Felseneck better than the Niederhäuser Hermannshöhle. We didn’t dislike the Dönnhoff wine, and I am a great admirer of Dönnhoff in general.

It’s important to remember that tastings like these are snapshots of a wine at a particular frozen moment. All the wines on our top 10 lists are good wines, though they may vary in style, and how they taste at any given time relative to one another may change as well. That said, we are making educated guesses at how the wines will stack up. But at the end of the day, is there really that much difference between the three-and-a-half star Schäfer-Fröhlich and the two-and-a-half star Dönnhoff?

Oh, forgive me. That’s a silly rhetorical question. We live in a wine culture where it does seem to make a big difference whether a wine receives a “90’’ or an “89’’ in a consumer magazine. Of course, that’s a big difference in public perception. The difference between those two wines is in fact very small, as any critic will tell you. And a month later, if those wines were retasted under similar conditions, those scores might well be reversed.

My point is that these tastings are excellent guides to picking out some of the best wines in the area we are covering. But it’s a mistake to overanalyze the incremental scores among the top wines. There may be genuine differences in style and precision of these wines, but it would be wrong to conclude that because one wine is ranked higher than another, the producer is the better one. Evaluations of producers require long-term analysis. Tastings like these assess the moment.

Wait, wait: if you’re only reviewing, say, 15 wines, it certainly IS fair to ask whether 3.5 stars is a big difference over 2.5 stars. If you truly feel the differences between the wines are so slight as to be meaningless, then don’t use the star system; simply say “we find these 15 wines to be well worth your attention.”

And bringing “producer” into the discussion introduces a red herring: you’re comparing wines, not producers.

And also, it’s disingenuous to treat the difference between 2.5 stars and 3.5 stars (in essence, a 9-point scale) as somehow equivalent to the difference between 89 and 90 (in essence, a 100-point scale). On the first scale, one wine gets a B and the other an A. On the second, one is a B+ bumping against an A- and the other an A- bumping against a B+.

If your shorthand for ranking suggests huge disparities when you feel they don’t exist, then don’t use the shorthand. And if you feel you must, or your editor feels you must, then explain what the shorthand means for each review (like the restaurant reviews do, and like Wine Spectator, Wine Advocate, et al. do). But definitely realize the stars aren’t suggesting what you say here, and realize that if folks don’t read this, the stars will continue to suggest something you’re claiming they don’t mean.

Tasting 20 wines, in rapid succession, without food, is a strange practice that really has little in common with how most people consume wine.

My friends and I conduct blind tastings: a first round before dinner, a second with dinner. The opinions change dramatically once food is served. I am sure with a different menu the opinions would continue to change.

I think that the now (thankfully) questioned preference in rankings for jammy, rich wines of high alcohol is due to the way wine is tasted, for these confected flavors, luscious on their own, have no place on the dinner table.

I want a wine that makes my dinner taste better, and that tastes better because of what is on my plate.

There are so many factors in tasting wine, which is part of the reason I have never given scores nor understood them. One of the least talked about is order. A customer at our tasting last Saturday asked me, “Why is it that I can come to a tasting and love a wine, but it does not taste the same when I open it at home?” The answer was easy: “Because you were not drinking Bourgueil, or whatever, before it.”

In a “tasting” setting the showiest wines tend to stand out, but I have to agree with TJ above: that kind of flash, more often than not, has no place at the dinner table. Points mean nothing when your wine is clobbering your food.

I’m not sure Bill missed the point. I have to agree that Mr. Asimov’s approach to tasting (which I like) seems at odds with the scoring system (which I don’t, much). At odds with scoring *at all*, really, but especially with a system that gives the impression of large differences because it works on a short scale.

The real question, though, is why rate at all? Personally, I barely look at the scores, and often I can tell, from the descriptions, that I am more likely to enjoy a lower-ranked bottle than the “winner”. I think the column would be better if it just did away with rankings, listed wines alphabetically, and told us (as it usually does) what was liked or disliked about each wine, and whatever can be inferred about the region, varietal or whatever, from that sample.

That’s just me, though. I get the sense that much of the readership believes the scores to be very significant. Which is a shame.

The impulse to score and rank wines is apparently far too strong to be resisted. It happens constantly and endlessly, and it is totally understandable. Still, I think it should be resisted: I really believe scoring has led the world of wine in the wrong direction. I realize the practice is far too entrenched ever to go away, but, as with addressing global warming, inevitability should not deter one from trying.

I’d like to see each wine commented on as to its merits, its potential, and just how it could best be consumed, along with weaknesses or flaws, as Pedro suggests.

Wine production, and the consumption and enjoyment of it, should not be constantly placed in a competitive context. It strikes me as a typically American cultural misstep to place such emphasis on scoring, ranking and the like. How is the dominance of the biggest, loudest wines to be avoided?

People look to critics for guidance (even instruction) on what to buy; the problem is that personal taste, mindset, and setting are so variable that the “snapshot” reality of a review also negates its value. So then, why do it?

Comments appear to be critical of the New York Times’ top-10 ratings. Why? Wine professionals cannot deny that Americans like to rate wine. When it comes to wine, 100-point scores may be the only ratings Times readers want to know. Why not abandon top-10 lists, so we can compare your scores to the other critics’?

I strongly agree with the comments by Pedro and ned. I don’t care about scores, and I very often find myself preferring the 89 rather than the 94 wine. Or, to put it another way, I usually prefer the second label from a big-time winery because the “premium” wine is so often overoaked, oversweet, overextracted and so on. The second label wine tends to be less futzed with.

As someone who imports wine, I confess that I will trumpet a high score because retailers in particular are swayed by the numbers.

I’m with Bill too. It’s interesting that even with such a long exposition on what the rankings mean, so much divergence of opinion remains. It’s not surprising that the ranking itself is so misleading: in general, all measurement systems have several sources of error, and unless the extent of the error and its sources are known and explicit, the ranking has no clear meaning.

Wondering whether wines should be rated via a star system (like Guide Michelin) or a numbered scale is utter nonsense! (Is a 100-point scale a better gauge than a 10-point scale? Much ado about nothing, in the immortal words of the Bard).

We would all do well to keep in mind that wine ratings and their first cousins, blind tastings, are nothing more than marketing tools designed to move merchandise or, in the case of columnists, to help sell advertising space for their publications. Wine ratings and tastings are the height of pedantry and have as much to do with experiencing wine as fantasy football has with actually playing the game.

Wine is not a religion, it is not an object of affection, and it is certainly not an end in itself. Wine is simply meant to help unlock the various subtle flavors of the meal with which it is served, thus enhancing the eating experience. If wine is being sipped as a cocktail, its role is to help open one’s appetite for the meal to follow.

In countries with centuries of wine tradition (France, Italy and Spain come to mind), the locals love their wine and are quite knowledgeable, yet spend absolutely no time pondering about ratings (which do not exist) or reading wine columns (which are quite rare); they learn about which wine goes better with what by actually sampling wines along with food. That’s the role that bistros and brasseries play in France, and what tapas bars in Spain and cantinas in Italy are all about.

I beg to differ with Gustav. Wine tastings with friends over dinner can be a blast, and (in my experience) merry affairs that are anything but pedantic. There is usually no consensus over what is “best” and hence no rankings, other than personal preferences. I’ve discovered great new wines, and seen my prejudices overturned. I also know what (more or less) characterizes different vintages in Bordeaux, Burgundy, and the Rhône; it has proven useful in buying wines and pairing them with food. My favorite aspect of blind tastings is how the most expensive bottle is rarely the top choice.

All of these ranking systems reflect judgments by someone at some time about relative quality; all may be more or less reliable. Why single out the wine critic’s judgment (whether expressed in stars or numbers) as somehow fundamentally different, and fatally flawed? I personally would sooner rely on a critic whose palate and methodology I trust than the 1855 Classification to lead me to a good value in 2006 Bordeaux.

I don’t think TJ and I are in disagreement at all. There’s absolutely nothing pedantic about having blind tastings with friends, as TJ describes, with a first round before dinner and a second with dinner; it’s a great way to have fun and learn about wine! That’s precisely what a good neighborhood brasserie in France and a neighborhood tapas joint in Spain provide.

What I find not only pedantic but totally useless are the blind tastings (the “taste, spit, and evaluate” process) described by Mr. Asimov, done for no purpose other than to “rank” wines. Ranking them in relation to what? It’s absolutely foolish to rank them in relation to each other, for each wine, being a living organism, interacts differently with the same type of food.

Taste is a very personal thing. At dinner at the home of friends a few months ago, we found that an unranked 2004 Don Valentin Malbec, from Mendoza, Argentina, at $18 per bottle, did wonders for a superb cut of steak with peppercorns, served medium-rare, whereas a $70 bottle of 1999 Bertrand Ambroise Nuits St Georges les Vaucrains, rated 94 by Robert Parker’s Wine Advocate, made the same cut of beef taste like moldy shoe leather. Does that mean the Argentine Malbec is a better wine than the Nuits St Georges? Not necessarily, but to us, on that particular occasion, it certainly was!

The joy of wine is finding out how it tastes with food. In that regard, rankings are nonsense and the ranking process undertaken by Mr. Asimov and most so-called “wine professionals” (taste, spit, evaluate) is unnatural and totally useless. Rankings and wine columns do nothing more than feed one’s ego and add to one’s sense of snobbery. To really learn about and appreciate wine there is no substitute for tastings as described by TJ, whether at home with friends or at a BYOB restaurant.

To echo Eric’s words, consider Olympic swimmers. Yes, there’s Michael Phelps, but the swimmers who come in behind him are only fractions of a second off. Even if you come dead last in an Olympic event, you are still an Olympic athlete; you are among the best. Likewise, if you deliberately taste 25 wines known for their quality, and do it blind with the intention of ranking them, they will be ranked, but, to their credit, all of them remain among a recommended suite of the best.

I really enjoyed this post. In my experience, wine tasters and wine experts operate like a secret society, trying to keep the lower class of drinker from reaching the upper echelon of wines. I am glad that a site like this exists to help educate those looking for a deeper understanding of wine, beer and fine spirits.

“All the wines on our top 10 lists are good wines…” Does this mean your tasting coordinator screens the wines for quality before you taste? If that’s true, then Dylan’s comment stands as a good explanation. It makes sense to me: Who wants to read reviews where all the wines tasted are poor examples of the winemaker’s craft?

Gustav: Whatever else you may say about it, blind tasting is what keeps tasters honest. I agree that professional tasting is an abstraction, but what’s the alternative? “Taste, spit, and evaluate” may sound unappealing, but it’s an essential part of how I make my living. It is, in fact, work–something customers sometimes have a hard time understanding.

In the end someone else’s ranking of a wine may encourage you to purchase a wine the first time, but if you don’t enjoy it you’re not going to buy it again, regardless of the star/rating that some ‘expert’ has given it.

Personally, I’ve found a few wines by trial and error that I really enjoy, and I buy them by the case. I’ve found that I prefer certain varieties of grapes to others and stick with those. I may not be a connoisseur, but I know what I like, and it doesn’t really matter to me if anyone else agrees.