Monday, January 9, 2017

Are there biases in wine quality scores from semi-professionals?

I have previously discussed biases in the quality ratings of professional commentators, as well as in the pooled scores from the amateurs at Cellar Tracker. These biases involve the over-representation of certain scores compared to adjacent ones, indicating subconscious preferences on the part of the raters.
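One way to make "over-represented compared to adjacent scores" concrete is to flag any score whose frequency clearly exceeds the average of its two neighbours. This is only a minimal sketch of that idea, not the method actually used in these posts; the 1.5 cut-off factor is an arbitrary illustrative choice.

```python
def bump_scores(props, factor=1.5):
    """Flag scores whose proportion exceeds the mean of their two
    neighbours by `factor` -- a rough sign of a preferred score.

    `props` maps integer score -> proportion of wines with that score.
    """
    flagged = []
    for s in sorted(props):
        if (s - 1) not in props or (s + 1) not in props:
            continue  # need both neighbours for a comparison
        neighbour_mean = (props[s - 1] + props[s + 1]) / 2
        if props[s] > factor * neighbour_mean:
            flagged.append(s)
    return flagged

# Hypothetical proportions, not the real Cellar Tracker data:
print(bump_scores({87: 0.10, 88: 0.30, 89: 0.10, 90: 0.12}))  # [88]
```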

This leaves us to contemplate what semi-professionals might do. By this, I mean those people who rate thousands of wines, and have their own web page where they actively write about wine, but who do not necessarily make a living from doing this. These people are far more active than most of the contributors to sites like Cellar Tracker, and yet they are not involved with wine full-time. Do they show the same rating biases as the professionals?

As my examples, I have chosen two of the people who have been most active on Cellar Tracker: Richard Jennings, and Jeff Leve.

Richard Jennings runs the RJonWine web site. He is the most frequent contributor of tasting notes on Cellar Tracker, with twice as many as anyone else (in excess of 45,000). Snooth has a Getting to Know Richard Jennings article from August 2014, in which he notes that he has been the most prolific tasting note writer on Cellar Tracker since 2007, having signed up in 2004.

His tasting notes cover an eclectic range of wines, coming from 328 different Cellar Tracker wine categories. Nevertheless, he shows a distinct preference for pinot noir and chardonnay, which together comprise 30% of his ratings.

At the time I downloaded his quality-score data (beginning of July 2016), there were 44,714 ratings available, from which I constructed the above frequency histogram. As in my previous posts on this topic, the height of each vertical bar in the graph represents the proportion of wines receiving the score indicated on the horizontal axis.

As you can see, this graph is very symmetrical, but it does show some biases, although they are somewhat different from those of the professionals. It seems that a score of 88 is over-represented and 89 under-represented, compared to what would be expected; and possibly 91 is also over-represented compared to 90. So, Jennings does not show the 90-score bias so prevalent among the professional wine critics. Indeed, he may actually be veering the other way, and somewhat avoiding scores of 90.

Jeff Leve runs the Wine Cellar Insider web site, which specializes in Bordeaux, Rhône and California wines. He is also one of the most prolific contributors to Cellar Tracker (in excess of 13,000 notes), having joined in 2010. His tasting notes cover 41 different Cellar Tracker categories, with a distinct preference for Bordeaux, which comprises 70% of the notes. At the time I downloaded his quality-score data (beginning of July 2016), there were 12,617 ratings available, from which I constructed the second histogram.

You will note that Leve has a greater proportion of high-scoring wines than does Jennings (i.e. the graph is skewed to the right). He also shows biases more typical of the professional critics. It looks like 90 is over-represented at the expense of 91, and a score of 100 might also be over-represented. In the lower part of the quality scale, the main scores used are 55, 60, 65, 70, 75 and 80, while 85 looks over-represented as well.

So, the answer to my question is "yes and no" — some semi-professionals are similar to professionals and some are not.

