GAMING EQUIPMENT AND DE-SEVERANCE
Since Minority Report’s debut in 2002, the concept of a clean, swipe-able slate has been stuck in the imagination of developers of all stripes. It was thought that an infinite set of possibilities would follow when the entire screen doubled as the interface. Countless research projects and tech implementations have been inspired by this sci-fi daydream. The reckless pursuit of this fantasy had reached a fever pitch by the time the first iPhone was released in 2007.
IN THE BEGINNING
Fast forward to 2014 and the chain of influence that began with science fiction has had an unmistakable effect. Beyond obvious products like cell phones and computers, existing devices like refrigerators and lawnmowers have since been graced with a glassy surface and an elegant gestural interface. Controlling the world through an intermediary window with simple, intuitive movements has never seemed so obvious.
So it’s a no-brainer that AAA video games would want in on this boon to creative interaction – right?
But is this approach truly worth the hefty R&D investment for a medium founded on interactivity? What’s missing in this approach to design and interfacing?
For now let’s just note that such a design path inherently limits the motor complexity demanded of our incredibly adept and capable hands.
Let’s consider a very basic movement, something performed daily.
Take a drink. Consider how the weight of the glass shifts in your hand as you knock it back, how this transition of heft communicates directly to you how much is left. The liquid is cold, the glass perspires – you know to hold the glass so as to prevent a spill. If there is some kind of grip, your fingers automatically find the ridges.
Pick up a book. Reflect on how the distribution of weight across the pages tells you how far along you are. As you flip through, you can instantly tell if you’ve accidentally grabbed two pages, based on the paper’s thickness. Lick your finger and you can catch a single page and cleanly turn to the next.
Consider how effortless this all is. You’ve done this countless times. You don’t even think about it.
These objects were designed to work with the full range of motion embodied in our wrists and fingers. They provide feedback in subtle ways that instantly communicate relevant information to you. The brilliant economy of their design is measured precisely by the lack of conscious thought involved in using them. The mechanics of the interaction recede entirely into the background and you just do it.
Heidegger called this Ready-to-Hand. The ‘tool’ almost literally becomes part of you and your activity, qua tool user.
WHAT CAN I DO?
What can you do in these situations? How can you physically manipulate these interfaces? What information do they relay to you, and how?
The answers to these questions inevitably vary depending on the specific app or game at work, but in almost all cases they demand conscious attentiveness. In the vast majority of cases, your eyes and appendages are occupied in the pursuit of your intentionality. There’s also a lack of discrete feedback when performing inputs.
Although phenomenologically very different, touch-based controls and motion controls share these regrettable traits. You have to put real mental effort into controlling these experiences and visually processing the result of your actions. Such games are usually only capable of providing audio/visual cues when you flub an input, further taxing the conscious mental connection.
Notice that the degree of interaction is also very shallow for second screens: inputs are often achieved with two-dimensional movements of a single finger. Although there is a broad range of potential motions we can pull off with our ten digits, these interfaces usually involve only a single finger operating on a flat plane. Despite this rudimentary use of natural selection’s most profound triumph, the employed hand usually cannot engage in other actions simultaneously.
REINVENTING THE WHEEL
Other mediums that involve a more passive consumption of media are content with this paradigm, since the core functionality they’re shooting for is serviced just fine by this mode of interaction. A single finger suffices when the level of involvement demands only a linear series of selections.
But the console-based video game experience is not well served by this approach. Our medium demands highly specific and sophisticated inputs to control the action on screen and to truly highlight the unique selling point of video games.
Controllers have enjoyed enduring popularity in this medium thanks to the very same qualities highlighted above in our more basic examples. You’re marshaling a great deal of your hand’s complexity. Clever methods of feedback convey information. The interaction quickly becomes unconscious.
The counterargument is that additional interaction and functionality is nonetheless being delivered to gaming. No one is suggesting that we ditch the controller entirely, right? Why not add another dimension to our favorite gaming experiences?
The problem is preclusion: interaction is a zero-sum game. There’s no way to fully integrate a touch screen or motion controls because they occupy our hands and eyes. Development dollars spent on touch-screen interaction similarly forestall efforts to enhance the controller itself.
Extra Credits nicely spells out the current problems with the Kinect in their episode on the subject:
Instead of using dash-encumbered Heidegger terms, they call this phenomenon ‘kinesthetic projection’. Their analogy of the unhappy tummy at the four-minute mark hits the nail on the head: these kinds of interactive stumbling blocks are more about gut intuition than cerebral comprehension.
By comparison, controllers provide the perfect vehicle for losing yourself in a game. This approach fits snugly with the most common goal for any design team – strip away the medium while reinforcing everything that allows the player to become the character.
Second screens actually distract the player from this deeper connection, if only by diverting attention away from the on-screen avatar. Motion controls run into the near-impossible challenge of effectively conveying information to the sensor while still maintaining a kinetically intuitive motion that matches the on-screen avatar.
An embarrassing dogma began congealing during the PS2/Xbox generation and had largely solidified by 2005: the complacent idea that the dual-analog, four-shoulder-button controller needed no significant improvement. With minimal arguments to the contrary, that mentality dominated the state of affairs until very recently.
But now fresh attention is being paid to both the Microsoft and Sony controllers, precisely the kind of consideration that enhances the unconscious, intricate interaction that makes video games so appealing.
Often these subtle improvements don’t look like much on paper but ultimately translate into fantastic gameplay solutions in the hands of the right developers. These don’t have to be flashy new mechanics either – solving existing problems that have long plagued genres is just as valuable.
I see the touchpad on the PS4 as just that kind of problem solver. Sony has hit every note with this upgrade to the DualShock: it doesn’t require visual attention, it includes force feedback, it can be operated with the thumbs without disturbing the index and middle fingers, and it’s multi-directional.
For example, it could handle in-game weapon selection, freeing up the D-pad for any number of more useful things, such as camera control. The touchpad could perform some of the same functions currently handled by the Wii U pad, but without the diversion of attention.
I’d argue that this wave of emphasis on second-screen and motion interaction has been the result of developers favoring novel, one-off game mechanics that either
A) engage players for specific set pieces, puzzles, boss fights, simple menus, etc., and are then abandoned; or
B) reframe the tried-and-true action under a new, highly contrived control scheme.
In the modern age of endless sequels, developers find themselves in a tight spot: they must maintain the linchpins of the franchise while creating new features or mechanics. What will their bullet points be at the next E3 when they’re hawking their fourth sequel in five years?
The result is stuff like…
And all of this happens at the expense of potential upgrades to the quintessential video game tool: the controller. Touch-based games for Android and iOS are their own beast – they fit naturally into the greater gaming ecosystem. But it’s a different story for AAA games, where the development budgets are far more extravagant and the gambles that much more calamitous.
There are exceptions – the best of them incorporate the second screen or motion device in ways that fit with the gameplay and mitigate the problems mentioned above.
NEVER ENOUGH ALREADY
Right now this is our vision of the future:
It’s important for game developers not to buy into this supposedly obvious destiny. It doesn’t ultimately benefit the unique qualities of the medium; it doesn’t touch the real selling point that sets us apart from mere video.
By contrast, controller innovations tend to upgrade the bread-and-butter gameplay, enhancing the experience as a whole. They nix persistent problems and expand a developer’s options for providing on-the-fly gameplay.
There’s room for the two of ‘em, both have demonstrated their value. But the balance has been disturbed – there’s far too much attention and money being paid to these novel gameplay ideas as opposed to core game mechanics.
The “novelty” gameplay approach is a definitive dead end, since it cannot be fully incorporated into controller-led gameplay. Until full-on VR wins out, 95% of games (worth playing) will be powered by controllers.
And now, a shameless Heidegger diagram: