Last month, while writing an article for Rock Paper Shotgun, HM asked several developers for their thoughts on the physical interface between player and game. Robin Arnott, the audio engineer behind Deep Sea and SoundSelf, responded with a short essay. Extracts of this essay appeared in the completed article, but today Electron Dance presents the essay in full.
The original motivation behind Deep Sea was a dirt simple question: how do I maximize immersion? It was a curiosity drive! I started out knowing from my own experience that fear can short-cut the rational mind and touch players at a pre-cognitive level. But all the design decisions, like blinding the player, or playing back their breathing to obscure the critical information, all of that was me blindly reaching into the darkness and holding onto what seemed to work. I'm very fortunate to have stumbled onto some ideas that worked incredibly well, but the great irony of Deep Sea's development is that I didn't know why they worked. It took about two years of watching people play Deep Sea for me to reverse-engineer my own game and figure out the why.
SoundSelf is built on those understandings. I've since come to see immersion as a function of trance. In other words, immersion is in the same family of experience as hypnosis, meditation, and Pentecostal possession. So while SoundSelf is a radically different game from Deep Sea, to me as a designer they are both knots in the same thread. Only now, instead of accidentally stumbling into a hypnotism design-space, I see that what I'm doing is literally hypnosis. This is tremendously freeing because I don't have to depend on crutches like the fear response any more, and I can use these hypnosis techniques to induce ecstasy instead.
Games that reject visuals are rare because most games are about handling data... I think that's what a lot of people think a game is! Handling data and making information-based decisions is as much a part of the paradigm of this medium as words are to literature. And humans are visual creatures, which means that we can process a lot of data in images. Games like Deep Sea and SoundSelf are about getting away from data.
But in terms of accessibility, it's often thought that Deep Sea is a game for the blind, which it emphatically is not. Deep Sea is a game about weakness and dis-empowerment, and you get that by losing your primary sense. The blind, with their hyper-acute sense of hearing, can "see" straight through the brain-trickery that makes Deep Sea frightening. I definitely think there's a huge untapped market for games for the blind. If I were doing this stuff for money, I'd be making games for the blind.
I think until we get past this paradigm of games being about interpreting and managing data, controllers will still be based around the dexterous hands and fingers. There are a ton of biofeedback technologies that are mature in their development. The only reason these haven't been integrated into controllers yet is that market leaders think players want more of the same: games about navigating data and making decisions. That's not what players want, that's just what the edges of this particular skybox look like. What players actually want are experiences that take them on a journey. Systems for navigating information (we can call those systems "games" if you like) are just a familiar tool for getting there.
What's exciting though is that VR is already shattering that paradigm. If I were in the console business right now, I'd be looking for a way to get ahead of the curve by integrating heartbeat sensors, breath-tracking, and EEGs into peripherals. The next generation of what we call games will not be about using the body as a means of control. It will be defined by experiences that blur the lines between self and software. This leap is right around the corner.