Reviews for video games are almost as old as video games themselves. Ever since gaming became a common household phenomenon, magazines and, more recently, websites have been offering their opinions on games, complete with a comprehensive scoring system, or rather several competing systems. Everything from a grade-school A-to-F scale to a numeric score out of 100 seeks to gauge the quality of a game for the reader.
Not to sound nostalgic, but back in the day, games could be about, well, anything. Some of the greatest games of the '80s and '90s came out of unlikely genres and titles, games that were the first of their kind. Of course, not all of them were diamonds; there was plenty of rough, too. Because games encompassed and conveyed such a broad range of ideas, a scalar form of rating was necessary, and it actually worked back then. Arguably, some of the best and worst games came out during this booming period, when every company and developer was looking to one-up the competition with both their hardware and their games, and not in an extra-life context.
As a result, there were triple-A titles like Super Mario 64 alongside horrendous movie tie-ins and B-rate games like Superman 64. The quality of games was so sporadic and inconsistent that 1-to-5 and 1-to-10 scales were necessary, if not obligatory, because titles covered the full range of quality. Look at the modern landscape, however, and games almost ubiquitously land in the top fifth of the scale, scoring 80 or above. Games have become consistently good, or at least consistent with one another. Quality is relative, after all. The title that earns a 2/5 or a 3/10, while still an occurrence, is a rare one in the 21st century.
This tracks with production costs. Back then, a game could be made quickly and without an excessive investment of money or effort. The streamlined process allowed for rapid, continual development, which made the overall quality of games highly variable. Modern games, by contrast, are enormous undertakings, with hundreds, if not thousands, of people dedicated to a single title for years before its release. If the idea or mechanic behind a game is outright bad, chances are it won't be supported, or it will be refined into a better state before much effort is invested. As a result, quality has plateaued: few titles stand out as superb, but nearly all are equally satisfactory.
A good example is the coveted "game of the year" title. A few console generations ago, if a game truly was great, it would win game of the year almost unanimously. Now, however, "game of the year" can go to any number of great titles; the line has blurred. You can go to two different magazines, websites, or networks and get several different answers for the true "best." This isn't necessarily a bad thing: it shows that games have improved in quality overall. But it also undermines the attempt to quantify a game's quality numerically, as review scores do.
So when we look at games as they are now, the true "range" of scores is less substantial than the industry makes it out to be. Some of my favorite games fall within the mediocre range of scores, but they're still a hell of a lot of fun. The make-or-break factor that landed them in the 70-80 range is the degree of polish rather than the quality of the game itself. Take Fallout: New Vegas: it fell one point shy of the rating "quota" that would have granted its developers a bonus. To think that one point could make the difference in whether a person keeps their job is more than a bit suspect. What separates an 84 average score from an 85? Nothing, really; it comes down to how many bugs a game has and how severe they are, a tally of negatives rather than positives. The game is still fun, but review scores attempt to rate things in an over-simplified manner, calibrated to keep every game on the same scale.
The fact of the matter is that games still vary, but not enough to warrant a 100-point or even a 10-point rating scale. If you don't like real-time strategy games, you still won't like one even if it's the most polished, pristine game ever made. That doesn't make it bad, but it does suggest you would prefer something else. If I see two games, one rated 83 and one rated 85, I'm not naturally inclined to buy the slightly higher one. I look at the titles, see what they're about, get a feel for each game, and then make a choice. And yet modern reviews perpetuate just this sort of thinking by awarding minute differences in scores to games that ultimately prove incomparable, a victim of over-simplification.
With sites like Metacritic, a degree of truth can be reached: average every variant of a score, and we at least come out with a rough consensus. On the other hand, public ratings, and ratings in general, seem to be polarized, following a trend of all-or-nothing support. When there is an eight-point gap between the average critic score and the average player score on a ten-point scale, one starts to wonder which can be trusted. The answer, it would seem, is neither. Nobody can tell you which game is the best; you, the reader, must determine that for yourself.
I personally hate review scores. But if I were required to use one, I would take up a binary scoring system, embracing the all-or-nothing pattern that already dominates. A game either gets a 1 (it's worth checking out; try it if you like this sort of thing) or a 0 (there is no reason you should ever play this game, even if you love the genre and the series). What we find then is that most games get that 1/1 score, which makes sense if you think about it. Nobody sets out to make a bad game, and games lately have been good on the whole. What has lapsed in quality, however, is review scores. After all, you are the only one who can decide what you like.