Not too long ago, console wars revolved almost entirely around who had the better hardware. Even today, a lot of weight (and debate) hangs on which of the "Big 3" can better render a realistic-looking environment on its console. But what's so great about realism anyway?
Think back a few decades: games amounted to moving a few squares on a screen through a maze of other squares. Saying that video games started out simple is an understatement; more often than not, the player did exactly one thing. Pong consisted of nothing more than rotating a little knob on a paddle controller to move a line up and down the screen. There were no objectives, no missions, just the wonder of seeing a reaction occur on-screen every time you twisted it. And, of course, the simple pleasure of beating your friends or family members in a friendly game of virtual tennis.
Back then, the progress of game design was simpler. Games weren't aiming to do anything in particular, just to do more. It was an entirely new expressive canvas, open to all sorts of perspectives and interpretations, with one unique ingredient: player interaction. Moving forward, however, that canvas became richer and open to more input as hardware, and consequently software, made leaps and bounds in rendering ability. Pretty soon, you could move in not just two, but four directions. You weren't just a collection of squares; you became the beginnings of a full-on character. Games didn't become better as a direct result of this increase in processing power; rather, because the people developing games had access to better, more diverse technology, a larger variety of games, both good and bad, came to be.
Looking to the present, how far have we come? Pretty damn far. Games span a variety of themes, genres, and media. To the credit of ever-improving technology, we now look upon the primitive roots of video gaming with a mixture of wonder and disbelief. After all, how people could play games lacking even a tenth of modern graphical polish is a question only the older generation can answer: it was fun. Fun! The sort of input-output system that games pioneered was intriguing, to say the least, just as the ability of cameras to capture snippets of the real world was once mystifying.
Yet there's a stagnation setting in, one that didn't plague the industry years ago. Graphical output has become not an expansion of opportunity for developers, but the benchmark for the quality of your system.
As we move closer and closer towards realism in video games and the ability to program life-like environments and characters, it seems like games are being shoehorned into being realistic. After all, if the ability is there, then not using it is just wasted potential. Or so it seems. In this way, realism is a detriment to the variety and natural variability that once made video games unique; just because the ability is there doesn't mean that developers or even players want their video games to feel like real life. That's what we already have the real world for. That's not to say that realistic games are bad; I love feeling immersed in a post-apocalyptic wasteland or a war zone. But I also love feeling immersed in an exotic, fantastic land. More often than not, immersion is not a matter of graphical capacity, but of the choices made in developing the game itself.
There are still a few things that developers can't do even with our current, admittedly advanced technology. Rockstar's L.A. Noire comes to mind, with its facial-animation and expression-reading mechanic, a feature that would be difficult to implement without an impressive level of detail. Even so, the mechanic was leaned on to the point where it became almost comical how drastically facial expressions changed when a character was lying. Reliance on technology for a game mechanic only goes so far; the rest must be made up with elbow grease and smart decisions on the part of the developer.
In that sense, the most important part of making a video game is (or should be) striking a balance between gameplay and graphical fidelity. Having one without the other doesn't make a game. Even from a practical perspective, one negative consequence of the resources involved in making better-looking games is that they have become larger in file size and shorter in length, owing to the cost and hours involved in making everything look pristine. The blind pursuit of realism accomplishes little beyond satisfying a demand, largely non-existent, for games to look ever closer to the real world.
I look forward to what the next generation of consoles has in store, not because of the swanky new specs that each will surely have, but because I think the focus will gradually shift from computing power to gameplay. And if we're lucky, graphics will remain a benchmark of ability, a toolbox for developers, and nothing more.