By Deane Barker | January 28, 2013 | 3 Comments
New hardware may be game changer for video game sector: This is a fairly nondescript article about the current state of gaming consoles and the introduction of next-gen consoles this year. But it included this bit, which bothers me a little:
Today, the Xbox 360 is entering its eighth year on the market, while the PlayStation 3 and Nintendo Wii have been out for more than six years.
“Developers have really squeezed out every last bit of innovation they could out of the current hardware,” says Jesse Divnich, analyst with Electronic Entertainment Design and Research. “Both developers and consumers are screaming for new technology.”
This annoys me. What does new hardware get you? More processing power. What are game developers hungry to use that for? Better graphics.
When this guy says that developers have “squeezed out every last bit of innovation,” does he really mean that? Or does he mean that we’ve just topped out on graphics power, and new and better graphics are the only great new thing game developers could use right now?
I’ve been bitching about this for a while. This is from a post about seven years ago:
Say you doubled the intelligence of the enemy AI. How much processing power would you need to do this compared to the power required to run the graphics? My guess is that it’d be inconsequential.
So what this means is that the entire…point of making a new system is graphics, graphics, graphics. Think about it: when the ads tout “better games,” what do they mean? More mentally challenging? More thought-provoking? More…what? No, they mean “better graphics.” It’s the be-all and end-all of systems these days.
It’s worth noting that the most innovative games to come out over the past few years aren’t on consoles – they’re on mobile devices.
The word “innovative” is misleading in that statement.
Related: the video game industry has ALWAYS used processing power and size to promote itself. Why the hell does ANY kid need to know that their Super Nintendo is a 16-bit system? Yet, that’s how entire generations of casual video gamers defined each step in the console wars.
Look, it’s not just better graphics. Having more processing power allows game developers to develop at higher levels of abstraction. It allows things like physics middleware to thrive, because there are processing cycles to run it. It allows things like gravity guns or even the animation system behind the Uncharted series. It allows things like the parkour in Assassin’s Creed or Mirror’s Edge.
It allows better AI, and more importantly, it allows having lots of AI characters. For instance, in Hitman: Absolution, there’s one scene set in a big, crowded market. There are lots and lots of people milling around in relatively realistic ways. It feels almost like a real market. Or look at open-world games like GTA or Saints Row; those depend heavily on being able to run lots of sophisticated AI every frame.
With more processing power and more memory, you can make each frame a more intelligent frame. Now maybe that will be used for graphics, and you know, better graphics aren’t always just for show. But a lot of the time that power will be used for other things, too.
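The frame-budget idea can be made concrete with a toy sketch. All the numbers below are made up for illustration (every game's real costs differ), but the shape of the argument holds: at 60 fps a game gets roughly 16.7 ms per frame, rendering eats a fixed slice of that, and whatever is left over is the budget for AI and everything else.

```python
# Toy model of a per-frame time budget. Numbers are hypothetical,
# chosen only to illustrate the rendering-vs-AI tradeoff.

FRAME_BUDGET_MS = 1000 / 60  # ~16.7 ms per frame at 60 fps

def frame_cost_ms(num_agents, render_ms=12.0, ms_per_agent=0.02):
    """Total frame time: a fixed rendering cost plus a small per-agent AI cost."""
    return render_ms + num_agents * ms_per_agent

def max_agents(render_ms=12.0, ms_per_agent=0.02):
    """How many AI agents fit into whatever time rendering leaves over."""
    return int((FRAME_BUDGET_MS - render_ms) / ms_per_agent)

# With rendering taking 12 ms of the ~16.7 ms frame, only a couple
# hundred agents fit. Faster hardware (halve both costs) leaves room
# for several times as many agents, or for smarter per-agent AI.
print(max_agents())                                   # baseline hardware
print(max_agents(render_ms=6.0, ms_per_agent=0.01))   # "next-gen" hardware
```

The point of the sketch is that the AI budget is a leftover: if new hardware shrinks the rendering slice, the space for "more intelligent frames" grows disproportionately, even before anyone touches the graphics.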
Also, one of the annoyances with consoles is that PC games are often held back: even though the PC has more power available, the same game has to work on consoles too, with their much smaller processor and memory budgets. So the PC version can only take advantage of better graphics and control schemes. But when you raise the lowest common denominator (the console), everyone benefits.
An excellent comment. Very enlightening. Thank you.