Metacritic: The Spectre At The Feast?

Or ...Why Publishers And Developers Need To Place Less Stock In Review Scores

I wrote an article last year titled Review Scores: Fear and Loathing and Cultural Worth that looked at our industry's relationship with Metacritic and the nature of reviewing, asking the question "do we review games too highly?" It was in that article that I took a look at the number 7 and pondered who made it the bad guy - whether cowardly journalism or hysterical publishers were to blame for a shift in our industry's thinking that has made anything below 8/10 (or 85% in the cases of Irrational and Obsidian) considered a failure.

More and more these days we are reminded that this is an industry, that results are quantified, that sales data and reactions are noted and studied. But to place livelihoods, and perhaps therefore lives themselves, at the mercy of a mechanism that delivers an average score from sources of varying quality, whose criteria for dishing out those individual scores may be wildly different, seems somewhat foolish. It's a mean number on more than one level.

On an individual basis, scores are useful tools. As I mentioned in my previous article, working out where a game fits on the scale, balancing out its pros and cons against peers and precedent, can help to hone argument and focus debate. A score's role is then one of support for the main review, not a replacement for it.

It's important to remember that Metacritic is simply a collection of the opinions of individuals with different tastes, shaped by diverse editorial styles, varying critical criteria, and using different analytical methods. There are those who attempt some semblance of objectivity, those who are painstakingly methodical, others who embrace the subjectivity of it all, and many more in between. Of course, the diverse nature of the critical audience is a good thing: through such approaches critical consensus may be reached and truly great games uncovered, but such a process is always relatively vague. Such vagueness extends to the properties of Metacritic itself, whose selection criteria can be a little bit murky.

At best, Metacritic is a helpful tool that allows you to see what a range of people think about a game. The numbers might sway you one way or another in terms of reading a review, which then might contribute to your choice over whether to purchase now, later, or never, but it's become an industry standard simply by means of there being little else to challenge it. A little transparency would go a long way, particularly in terms of site selection, to the benefit of creators, critics, and audience alike.

From an altruistic perspective, there's perhaps something appealing about developers getting bonuses based on quality rather than sales numbers. Certain parts of the media have jumped on such an occurrence with knives out and gleaming, but isn't it a vindication of good work being done? Shouldn't we as journalists be pleased that our words carry so much weight? Isn't this progress? There are things at stake, which naturally places more importance on the culture being created and the criticism of that culture, which surely equals cultural progress?

Except that it doesn't. Metacritic is a fundamentally flawed system, and business decisions shouldn't be based on aggregators. Using Metacritic as a benchmark in any capacity says worrying things about the fragility of our industry. It should be simple. A game is made, 50 people all over the world who are experts in the field write reviews and submit scores, then those scores are averaged, and then the cage comes down. But how are these experts chosen? How does Metacritic's standardisation take into consideration the different methods of scoring, and the imbalance of weighting? What of differences in critical theory, and how do you account for taste? If one shitty review can ruin the Christmases of 300 people, how is that right?

The Irrational job listing is particularly troubling. That publishers dish out extra money based on a number-crunching system with seemingly arbitrary admission requirements is one thing. They're suits, they're supposed to make clunky decisions that we can all howl at. But developers using Metacritic to ferret out talent is a bad thing. Most of the games out there have been made by more than one person: having worked for a games company that delivered titles which garnered 84% scores or lower should not be an indictment of an individual's work.

Admittedly, this is but one of a number of requirements, but if it's there more as a deterrent, what does that say about Irrational? Whichever way you look at it, even mentioning an innocuous little aggregation service like Metacritic in a guideline to applying for employment immediately transforms a consumer tool into something rather more sinister.

The business practice of tying money to critical appraisal also widens the door for corruption: it encourages bad journalism by muddying ethical considerations, and facilitates ever-higher scores that serve a piece of paper rather than the consumer. It encourages publishers to appeal more to the journalist than the consumer, to block difficult sites out and set a glass ceiling. This is an industry in which freezing out uncomfortable media outlets is incredibly easy to do, and tying money to review scores, bonuses to averages, and your staff roster to Metacritic only serves to make that worse.

Metacritic itself is no bad thing, not at all. I like the fact that it exists. But, much like a Goldie Lookin' Chain song about guns and rappers, irresponsible use of it can cause problems. If this is a trend that is to continue, I only see worrying times ahead.

My brother writes occasional game reviews for play.tm (his friend owns the site) - some of those reviews are counted as "Critic" reviews, but my brother is definitely NOT an "expert" critic. He doesn't even play that many games.

I'm also convinced that Metacritic's colour-coding has played a pivotal role in the destructive idea that a 60-70% score is 'bad' or 'mediocre.' By colouring this band yellow, the most influential aggregator on the Internet is putting out warning signs: yellow is a colour usually associated with hazard, potential danger, indecision (just look at traffic lights) and even cowardice, and I'm certain that this has informed many gamers' decisions, even if only on a subconscious level. If you're undecided about whether to buy a game and a yellow banner is the first thing you see, it's absolutely going to 'colour' your perception of its worth regardless of the actual score, and it devalues the percentage band by doing so.

A 60% game is above average and worth playing by genre fans and its target audience. A 70% is good - properly, seriously good. If Metacritic had coloured this band green and perhaps adopted blue for 85-90, I reckon that public perception would be very different.