Notice anything? Like the fact that Famitsu has now given out nearly as many perfect scores in the last two years as it did in the last 20? At this point, there's no denying it: Something has changed with regards to Famitsu's editorial policy on perfect scores – in an instant, it has gone from almost absurdly stringent to positively magnanimous. But why?

First, let us be sure we are fully disabused of the notion that Famitsu's four reviewers reach their scores by the barbarically primitive methods the rest of us use, such as playing the game in question and then writing down what they think. In his book The Japanese Have a Word For It, former Tokyo journalist Boyé Lafayette De Mente says this about the Japanese magazine business:

When I first began working in the magazine and newspaper publishing industry in the early 1950s, I discovered that the editorial department was expected to cooperate in bringing in advertising revenue. In some publications, the editorial departments were little more than adjuncts to the advertising side. ... Editorial matter was regularly slanted to build up obligation among potential advertisers.

And yes, there's a word for it: chōchin kiji, a "lantern article" that sheds (positive) light on a favored subject.

More recently, and specifically to games journalism, the indispensable Kevin Gifford translated some recent remarks on Dragon Quest's perfect score by Nikkei writer and Independent Game Developers Association Japan coordinator Kiyoshi Shin:

The total score from (Famitsu's) four reviewers is seen to have a certain level of impact among retailers and users, so everyone has to be conscious of it. ... When it comes to power, game media is going to lose out to game companies every time. Japan's game companies have an aversion to getting scores applied to their releases, and the media is obligated to consider that in their actions — if a publisher refuses to give an outlet advance information, then that's it. I had the editor-in-chief of one publication tell me once that "adding scores is simply a difficult proposition for us as a business."

Meanwhile, American game media is filled with score-based reviews. ... As a result, you often see cases where major releases with enormous advertising budgets behind them are faced with low scores. Meanwhile, games with high-scoring reviews are usually backed up by users and have a tendency to be long-selling hits; poor games receiving high scores are a rarity.

In summary, note that Shin actually has to sit down and explain to his Japanese audience that, in America, bad games get low scores.

At that awkward *Last Remnant* postmortem at this year's Game Developers Conference, moderator Mark Cerny pointedly asked the game's creators about that very discrepancy. Their answer was that they felt that American game reviewers only considered what they themselves felt about the game, but that Japanese ones were more open-minded and considered what other players would think.

I could come up with reasons why this does not pass the smell test, but I only need to point out that if we extrapolate this position to its logical conclusion, every game must get a perfect score, because somebody out there is going to love anything unconditionally.

Don't get me wrong: I can hardly imagine that Dragon Quest IX and Monster Hunter Tri are going to be critically panned outside Japan. What I've heard thus far has been nothing but praise.

However, when viewed through the lens of what we know about the Japanese magazine business, and scores in particular, it's hard to believe that Famitsu's shift was due purely to an independent editorial desire to use the whole range of scores.

Since I have absolutely no knowledge of Famitsu's internal policy, I cannot say for sure why this is happening now. But it is certainly true that the increase in perfect scores has coincided with the decline of Japan's game market. Neither game hardware nor software has been selling well. It all seems a bit too perfect that just as game sales drop off, games suddenly get rated higher. Perfect *Famitsu* scores get a lot of attention, which can only translate into longer lines on release day.

Of course, that's a finite resource, isn't it? Perfect Famitsu reviews are worth a lot only because they're so rare. They used to be given out to only one game per console. What happens to the value of a 40/40 when they're given to three games per year?

(One last thing: With this review, the only major Japanese videogame maker that has not received a 40/40 score is Sony. I predict they will end up with one by the end of 2010.)