Reviewing Expectations: A Community Blog

Recently, Game Informer's Dan Ryckert reviewed Resistance 3. What followed was a massive attack on his profile page by gamers who did not agree with his opinion. Some even went so far as to call for his firing over what they perceived as a bias against the PlayStation 3.

This got me thinking about the expectations that we as gamers have of those writing reviews. A few questions came to mind, and I decided to ask Saint, Mray901, Born4This, and Warbuff for their thoughts on each one. I'll also be sharing my own thoughts on each question.

Question #1

If a reviewer comes across a major glitch in a game, should they replay to see if the glitch was a freak occurrence, or should they mention the glitch in their review and potentially affect the score?

Saint wrote:

I don't think most people realize the pressure that comes with reviewing video games at the professional level. First, you are providing a service to the community that can largely persuade them to purchase or not purchase a particular title based solely on your comments. As much as that can affect the money people spend on the game in question, it can also affect the profits the developer/publisher enjoy. I think video game journalism can make a game successful and profitable just as easily as it can make the game fail and sell miserably. That's a lot of power.

Second, many people who work in the video game journalism industry know or at least have the opportunity to meet the developers - and if they are too critical of the time, energy and money spent on developing a game and don't show the proper respect and appreciation, it could reduce future opportunities to work with that developer or team. That doesn't mean journalists should ever print anything other than factual information or cower in fear from the fallout of an honest review, but it does mean an honest review can strain the professional relationship if it's not conducted properly.

Finally, reviewing games professionally can affect your reputation and the integrity of the company you work for, and since reviews can, and often do, come down to professional opinion, that can be a balancing act of epic proportions.

I'm usually fairly lenient in passing judgment on a game, because I appreciate the effort the developer put into it. I think for the most part, companies don't set out to make bad games, but because of budget and talent limitations, deadlines, competition and a whole bunch of other factors, it often happens. If I found a glitch in a game, I would certainly see if I could reproduce it. If I couldn't, then it would have little effect on my review score - I would just chalk it up as an anomaly. If I could, then I would evaluate the impact of the glitch. Is it a mission kill, a mere annoyance, something obvious, or near impossible to discover? If it's something that prevents forward progress in the game and is easily detectable by the QA testers, then yes, I would be harsh in my review. But if we're talking about something as minor as sticking C4 on the back of a jeep, driving up a steep hill and detonating the C4 to launch the jeep across the map (Battlefield 2 style), well, that might just earn it some cool points.

Mray wrote:

I say they should do both. They should replay the part to make sure it's not always going to be a problem, but point it out to warn consumers and developers alike of the glitch.

If there's a game-breaking glitch or something similar, I feel that it needs to be pointed out by critics, if only for the developer to take notice and work on a patch.

Born4This wrote:

If a major glitch is found, the reviewer should replay that scenario to see if it was a freak occurrence. In Assassin's Creed II, I came across a glitch where a roof guard started duplicating when I tried to kill him, and his duplicates eventually filled the screen. Initially, I thought this was part of the game (after all, Desmond was experiencing these things through the Animus). After the game started to slow down (and possibly even froze - I don't recall), I decided it was a glitch and restarted the game. It never happened again. Personally, I don't think it would have been fair of me to call the game glitchy if I had written a review of it. Glitches happen, but frequency is key for them to be mentioned in a review.

Warbuff wrote:

This question strikes a chord with me, since my job is game testing. It's no fun reading reviews or comments about a title I worked on and seeing folks complain about bugs. With how much money we throw down to play a new game, it's reasonable for us to expect a high degree of polish. That said, I feel it is a reviewer's responsibility to give consumers a heads-up if there are bugs that affect gameplay.

How impactful a bug is to gameplay is also important to note, so yes, there should be at least a minor effort of investigation. It isn't helpful to berate a game for bugs if they were a one-time occurrence, but we should still know they happened. Whether or not this impacts the score is up to the individual writing the review. Scores are completely subjective. If glitches affected gameplay, which in turn affected how much fun the reviewer was having, we should expect that to be reflected. It's possible to have a buggy game that reviews and scores well because the core game is still fun to play.

My opinion is:

It really depends on the severity of the glitch. A major game-breaking glitch, such as the one experienced by many PS3 players of Assassin's Creed: Brotherhood, should be mentioned if the reviewer comes across it, regardless of whether they can recreate it. A glitch that forces the player to restart the game is something consumers should at least be aware of, even if it does not necessarily affect the score; while the developer can patch the game post-release to fix the issue, some people who purchase the game will not have an internet connection.

For smaller glitches that have an effect on gameplay, the reviewer should attempt to recreate them. If they can, the glitch warrants a mention, as with what Dan experienced in his review of Resistance 3, where a single enemy would glitch out of view, preventing progress. I feel Dan handled the mention of the glitch properly in his review, as it occurred multiple times during his time with the game.

For smaller glitches that don't affect gameplay, it depends upon the frequency with which they occur. Good examples of games with a lot of glitches that don't really affect gameplay are the Fallout and Elder Scrolls games. They have a large number of glitches, but most are more humorous than anything. However, if the glitches were frequent enough to come at the expense of immersion, I believe a reviewer should mention them.

In the end, as with the final score of a game, it ultimately depends upon the experience that the reviewer had with the game.

Question #2

Looking at average review scores for publications on Metacritic, most, if not all, have an average review score above what they consider to be an average game. Do you believe this can be attributed to games improving, reviewers not wanting to negatively impact the industry, or simply a skewed perception of the quality of the average game?

Saint wrote:

I think it's more likely a combination of factors instead of one simple smoking gun. Certainly over time, elements of games have improved, but I don't know that you can say games today are better than they were five or ten years ago - graphically yes, but gameplay - not necessarily. So, while this might contribute to a disparity with review scores, I don't think it's the biggest issue.

I don't think most reviewers worry too much about impacting the industry negatively, but I do think it's human nature that when we review a game, we tend to lean towards a more positive score even when the game doesn't meet our standards and expectations of an average game. This has shifted the review scale to the point where if a game isn't rated in the nineties, it's seen as an average game.

One other element I think contributes to review scores is the age and experience of the person reviewing the game and the number of games they have played in their time. If you haven't played very many games, then of course something from this generation will be a more memorable experience, but gamers who have been around a long time and have something to compare the experience with might evaluate the game differently.

Mray wrote:

Honestly, I believe it has something to do with video games as a whole improving. While the gaming industry is still quite young, it has come incredibly far in terms of innovation. When you look at this generation of home consoles, you can easily see just how much power they have, and just how much they can do.

When a developer makes a new game, they can use said power, plus other games' innovations, fail miserably at it, and still make something that could be considered "average".

Of course, some reviewers may actually soften their reviews a bit just to avoid negatively impacting the industry. Perhaps it's just a different mentality for different critics; we may never be certain.

Born4This wrote:

I'd say it's attributable to the skewed perception of the quality of the average game. Everyone has a different idea of what an average game is and what an average score for an average game is.

Warbuff wrote:

First off, Metacritic compiles scores from lots of sources that all have varying scales. Game Informer has a 1-10 scale with 7 being average. Gaming Nexus grades games the same way school teachers do. Their C- average for Halo 3: ODST was converted to 42/100 by Metacritic, garnering Gaming Nexus lots of negativity from fans. I feel it is unfortunate that investors rely so heavily on Metacritic scores. Scores are subjective, scores are scaled differently per publication, and Metacritic isn't equipped to convert all of them properly.

I'm wondering just how many times I'll type "subjective" in these responses. I attribute all discrepancies to skewed perception. Allow me to provide examples.

Going back a few months, Andrew Reiner reviewed Red Faction: Armageddon and gave it a 7.25; according to GI's grading scale, that is slightly above average. Yet in the final sentence of his review, Reiner tells us we are better off playing Red Faction: Guerrilla. To me it seems like he doesn't feel this average game is worth playing.

Digging back to 2009, Matt Helgeson gave Deadly Creatures a 7.0. In his opinion, the game is "genuinely charming" and "might be the ticket [...] for gamers looking for something different - provided they can cope with unpolished mechanics." He acknowledges the game would not be enjoyable for everyone, but its uniqueness makes it worth playing for some gamers.

I could spend hours going through old reviews, but I just wanted to illustrate my point that reviews are skewed based on the individual and the game itself.

Game Informer's definition for a 7: "The game's features may work, but are nothing that even casual players haven't seen before. A decent game from beginning to end."

My opinion is:

I don't believe it has to do with games improving, as games haven't changed that much on a fundamental level over the last decade or more. I believe the perception of an average game, and of what deserves an average score, is skewed, but I believe the ultimate reason for that skewed perception is that reviewers have established their reviews in such a manner.

I think that there are a couple of reasons as to why this has happened.

The first is a bit of a dark spot for the industry: publishers' control over the release of their games for review. A good example of this was when Assassin's Creed was reviewed by Electronic Gaming Monthly. They gave the game a 5.75 review score, and Ubisoft allegedly stopped allowing them access to early previews and review builds of games. In this way, publishers have the ability to control how a game reviews and to push review scores higher than they would otherwise be.

Another aspect is whether a reviewer's enjoyment of a game has an effect upon how they judge its technical issues or flaws. I'd like to say that gaming journalism is a shining example of journalistic integrity, but I don't believe that every gaming journalist is able (or willing) to analyze all elements of a game, as opposed to just whether they enjoyed it or not. Again, this goes back to the first question and knowing when the technical issues of a game warrant attention.

Question #3

If a sequel to a well-reviewed game does not make drastic changes to its core gameplay mechanics or game engine, yet maintains a high standard for gameplay and enjoyment, should the game's review score be the same as the previous title's, or lower?

Saint wrote:

I suppose the best way for me to tackle this question is with an example. Dead Space 1 and 2 are nearly identical in every way. Dead Space 2 might offer a few new weapons and a couple of features not present in the original, but overall they are nearly identical. They are both truly awesome video games, and while I think there are facets of each I like better than the other, I could easily give Dead Space 2 the same score as the original, or higher, and feel comfortable doing so. I don't always look for more flair to call a game better than the original. In fact, if Modern Warfare 2 had emulated the original Modern Warfare more closely, I think I would've liked it better. I think they went overboard trying to improve the content and actually took away from the original experience.

Mray wrote:

I think it depends on how much the first game impacts the industry. If it's something incredibly new and original, then the sequel will likely be given a lower review score just because it's not as fresh.

Conversely, some games can be good but have bigger and better sequels that fix everything about the first game without necessarily making huge changes in terms of gameplay (case in point: Uncharted).

Born4This wrote:

This is a tough question because "enjoyment" is so subjective. On one hand, you have games that improve or change their core mechanics and game engine, but whose story/setting/characters may not be as interesting as the previous game's (see Crysis). On the other hand, you have games that basically keep the same core mechanics and engine, but improve on the story/setting/characters (see Portal). There are so many X factors involved in reviewing, too.

For example, industry impact. What if Half-Life 3 came out tomorrow, but wasn't much different than Half-Life 2 (similar graphics, mechanics, and story/setting/characters)? Half-Life 2 didn't receive high praise just because it was a quality game, but also because it was innovative and did some things that other games hadn't done at the time, or at least not as well (physics, facial animations, etc.). It may not sound fair, but reviews are often based on expectations. There have been a lot of great FPS games since Half-Life 2 was released. If Half-Life 3 didn't do anything innovative and was basically Half-Life 2 with a different story, I wouldn't cry foul if it received a lower score.

Warbuff wrote:

This is tricky. On the one hand, every game should be judged on its own merit. On the other hand, it's important that players know how much a sequel resembles or differs from its predecessor. I don't feel that repeating mechanics should count against a game in a review just because. Folks put a lot of weight into review scores of sequels, but just because a game received a 9 doesn't mean it would 2, 5, or 10 years down the road.

Both the original Halo CE and Halo: Reach were scored a 9.5 by Game Informer. Does that mean a retro review of Halo CE would receive the same score today? Halo: Reach uses MANY gameplay mechanics that were introduced in the original. But if they work and people have fun, why should the score of the sequel suffer on a technicality?

My opinion is:

It really depends upon how the technology in the game stacks up against what else is available on the market. For instance, Call of Duty has been fairly similar from a gameplay perspective and hasn't changed much in terms of its engine over the last few entries. Yet, while not making many changes, it has consistently been at the top of what is available on the market, so there hasn't been a reason to change the score. Now that it faces strong competition from Battlefield 3, it will be interesting to see how Modern Warfare 3 reviews.

While I don't believe a drastic reduction in score is called for, it appears that Battlefield is set to surpass Call of Duty from a technological standpoint while emulating much of what has made Call of Duty such a popular game. If so, it would only make sense for the score to drop slightly as other games catch up to or surpass it, effectively raising the bar.

This is not to say that a game is bad for staying similar from entry to entry, but as games get better, expectations for them are raised as well.

I hope that you all have enjoyed this blog, and feel free to share your opinions for each of the questions. I think it is interesting to see how different gamers perceive reviews and what expectations they have for them.

On a side note, I am on a new work schedule at my job, which has me working only 3 days a week (in 12-hour shifts). This will give me a lot more time to blog, so expect a flurry of blogs in the near future, as I'll definitely be taking advantage.

In closing, I would like to thank the contributors to this blog for their participation. I always enjoy the chance to put together a collaborative blog like this, and they were great to work with. Again, I hope that you all enjoyed it; please leave some feedback.