Sense and Sensibility in the Nintendo Wii

[Previously published on Gamasutra]

This article is about game feel. It aims to show that gameplay is more than just game mechanics and interaction: it is the crafting of these into realistic experiences around the limitations and possibilities of the medium being used. The article starts with a brief section on how to render a fictional world into a realistic experience through the transliteration of senses. It continues with an analysis of the “opening minutes” of our experience with the Nintendo Wii game console, followed by a very brief conclusion. I’m indebted to Michel Chion and Steve Swink; their works were instrumental in helping me articulate my thoughts.

The Real and the Rendered

Rendering a fictional world into a realistic experience is a complex process based on the transliteration of senses, that is, the evoking of senses by means of other senses.

To render something “real” is by no means a new issue: it lies at the heart of communication and has been examined in other media (and in the arts) for centuries.

What exactly are we talking about when we speak of such rendering? Let’s take an example from silent film, brought to attention by an early pioneer of film theory, Béla Balázs:

Shot 1: Close-up view of a gun’s trigger being pulled.
Shot 2: A cloud of birds suddenly taking off from a tree.


In Shot 2, although no sound is available, we clearly “see” the gun being fired; or, to put it more precisely, we “hear-see” it.

In this example, the transliteration of one sense into another (sight into sound) is a realistic rendering of how gunfire “feels”; yet we must see the depth in this rendering. While the sight of the birds is rendered into the sound of the gunfire, the transliteration doesn’t stop there. By means of association, the “sound” we believe we hear is rendered into yet another category of sense, that of touch: we “feel” the impact of the bullet; we blink and lean our heads back as if we were trying to escape a slap in the face.

The realness we experience here is “rendered” reality, and by this we must understand the realness of the fictional world of the film. We clearly see how this plays a role in immersion, in our departure from reality and our crossing into the virtual reality of the film.

This kind of rendering by means of the transliteration of senses is also at work during the creation of game feel. I will try to explain how it works by taking a look at the opening minutes of our experience with the Nintendo Wii game console.

The Main Menu Screen of the Nintendo Wii

Love at First Sight (and Sound)

The main menu screen of the Nintendo Wii console doesn’t waste a second in setting up the conditions of the feel it wants to create. In our first contact with it (before we start using the controller), it utilizes sight and sound to render this feel.

The first step is to use the visuals to set up the interface as a three-dimensional space with tangible objects. This tangible ludic realm is rendered by the calculated use of the following design elements:

Design element: Stacked Interaction Panels (a combination of two layers which display menu buttons, date, and time)
How the transliteration works: The main menu features two interaction panels which are separated by the use of contours and shading.
Type of rendering: Sight to Sight

Design element: Navigation Arrows (pulsating, small-sized arrows to the right and left of the screen)
How the transliteration works: The arrows hover and pulsate above the interaction panels, adding to the depth of the view.
Type of rendering: Sight to Sight

Design element: Menu Buttons (these resemble TV screens and display animated images, which can in itself be considered a play on the established visual codes and conventions used to call on the user to break the “fourth wall”)
How the transliteration works: The volumetric design of the buttons creates the illusion of three-dimensionality (or depth), something that connotes touch.
Type of rendering: Sight to Sight & Touch (spatialization and texture)

Design element: Theme Music (a “gentle” melody with high pitch and soft timbre, whose echoing tunes overlap to create a tingling, vibrating feel)
How the transliteration works: The tingling and vibration of the music stirs up our inscape and stimulates our sensibility with regard to touch. It opens up an inner space that makes us feel the depth of the “vessel” that our body is.
Type of rendering: Sound to Sight & Touch (spatialization and texture)

In terms of design, the visuals and music are geared towards creating the illusion of depth and stimulating our senses as a first necessary step to “prepare” the user for the experience of a “realistic” feel of controls.

I believe that the musical theme plays a central, embracing role here. The music fosters a hierarchy of senses in which the sense of touch is raised to the dominant figure against the background of all other senses. The musical theme, we could say, *is* about sensuality: it is about establishing an “inner” connection with the “hard”ware.

Love at First Motion

Once we start using the Wii controller, it sets the cursor and other animated graphics into motion. The interplay between these elements as we carry out our actions is designed to enhance the game feel that was hinted at through our first contact with the interface.

Let’s have a closer look:

Design element: Cursor (a stylized hand that leaves traces when being moved)
How the transliteration works: Moving the cursor leaves a quickly disappearing trace on the screen, resembling the feel of touching a surface with our fingertips.
Type of rendering: Sight to Touch (trajectory of movement)

Design element: Menu Buttons (as described above, but changing in size and contour color when highlighted)
How the transliteration works: The button animation creates the illusion that the buttons are three-dimensional objects moving under the impact of a physical force (similar to when we poke a box).
Type of rendering: Sight to Touch (texture)

Design element: Button Description
How the transliteration works: When the cursor rests long enough on a button, a button description appears and hovers over the interface.
Type of rendering: Sight to Sight

Design element: Sound Effect
How the transliteration works: The only sound effect used in the interface is heard when the button description appears. The sound creates the illusion that the button description “grows” into our sight, whereas in reality it is not animated that way: it is a single frame appearing and disappearing, not an object that closes up on us and gets buried under the interface panels.
Type of rendering: Sound to Sight (trajectory of movement)

Design element: Wii Controller
How the transliteration works: When hovering over buttons, the controller vibrates in a staccato-like manner, which feels like stumbling over something or scratching a rough surface. The vibration also makes us recall the sound that is generated when such stumbling or scratching takes place.
Type of rendering: Touch to Sound to Touch (trajectory of movement; texture)

We see that the rendering of realistic movement in the Nintendo Wii interface is achieved by the agglomeration of various senses through the cooperation of both interface and controls. What fosters the game feel is a synchronization of sight (volumetric indicators and animations), sound effects (attack-decay, timbre), music (tone, pitch, timbre, harmony), and touch (vibration).

In the case of interacting with a button, we see how the synchronization of controller vibration, cursor movement, and button animations (or, more precisely, the interplay of the transliteration of senses in each of them) creates the sensation that we “click” something. Clicking a button feels like the slowly disappearing impression of an object that we have touched.


The fun and immersion of gameplay cannot be explained merely by game mechanics or interaction per se. We need to take into account their rendering into a realistic experience. It is this complex process of transliteration of senses and their interplay, often going unnoticed during play, that gives players the game feel that they describe as a fun and immersive (hence realistic) experience.

How many stomachs does an interface have?

Interaction design for a game is a difficult task, because it requires the designer to create a classification system that has a limited number of classes yet is flexible enough to represent a great number of objects in the game world in an easy-to-access way. Often this is a matter of distributing the semantic workload between the graphic interface and the various input peripherals (such as mouse, keyboard, or gamepad). Since each input device has a unique character, the results of the combinations can be highly different.

Looking at the various input devices that we have at our disposal today while playing games (keyboard, mouse, guitars, steering wheels, the Wii controller, etc.), we see that they can work on very different levels of abstraction. For example, a keyboard enables spelling on the phonetic level. The buttons of a mouse or a gamepad, on the other hand, are more like ideograms. Combinations of these various input devices form the basis for various naming and calling conventions, all of them constructing worlds with different qualities, quantities, and behavior.

However, the success of interaction design will often depend a lot on how the graphical interface digests the game world and decomposes it into semantically nutritious pieces that can then be passed over for further treatment to other “stomachs”, such as the menu bars, keys, and controllers. In that sense it could be said that interaction design is like establishing a digestive system with many stomachs, like that of a cow (hence our initial question: how many stomachs does an interface have?).

To digest or not to digest, that is the question!


Here, however, the digestion works on a linguistic and semantic level. Often, the graphic interface will make the first big digestion and break down the virtual world into a group of semantic and narrative cores which will then be ready for processing on the next level, that of the button arrangement on the gamepad, for example. The classification that decomposes the game world into units will set the constraints for key assignments or for the type of input device that will be sufficient to handle things (the interaction design of RTS games for consoles, for example, is known as a true challenge). Through this digestion on several levels, the game world turns into a classified system of existents and interactibles that enables the player to take a course of meaningful action in the very context that the same digestion process has constructed. Achieving the appropriate level of categorical sensibility through this process of decomposition will mostly depend on the finesse that lies behind the design of this system of stomachs.
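The multi-stomach digestion described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (none of these names come from any real game): the first “stomach” is the interface classifying raw world objects into semantic classes, and the second is the binding of each class to a single controller input.

```python
# First stomach: the graphic interface classifies the raw objects
# of the game world into a small number of semantic classes.
# All object names and classes here are illustrative assumptions.
CLASSES = {
    "orc": "enemy",
    "health potion": "item",
    "chest": "item",
    "bridge": "location",
}

# Second stomach: each semantic class constrains the key assignment,
# so one button can "utter" many different concrete objects.
BINDINGS = {
    "enemy": "X (attack)",
    "item": "A (pick up)",
    "location": "B (move)",
}

def digest(obj: str) -> str:
    """Trace an object through both levels of digestion."""
    cls = CLASSES.get(obj, "unclassified")
    return f"{obj} -> {cls} -> {BINDINGS.get(cls, 'no binding')}"

for obj in ["orc", "health potion", "bridge", "chest"]:
    print(digest(obj))  # e.g. "orc -> enemy -> X (attack)"
```

The point of the sketch is that a handful of classes (the limited classification system mentioned earlier) can represent an arbitrarily large world while keeping the controller’s semantic workload small.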

Through the classification of random objects, the interface creates a world with recognizable patterns

Through its system of classification, the interface creates an order: The random mix of objects turns into a world with recognizable patterns.

Let’s just remember some good examples for now (who likes to think of bad examples anyway?). Diablo had a wonderfully simple point-and-click interface which classified the game world into three categories of existents (all of which could be unified under one universal category, “targets”): killables, collectibles, and destinations. The player using the mouse had no need to further explain what action she wanted to take. Clicking on the interface, depending on where the mouse was resting at that moment, meant either “kill this”, “collect this”, or “go there”. A further category of actions could arise from an additional decomposition provided by the mouse: left-click and right-click could mean “kill this (with this)” or “kill this (with that)”. Continuous clicking would repeat the action, which meant that the number of actions was also easy to input. In short, the various “stomachs” had decomposed everything perfectly, so the game had a very fluent process of “spelling”. It went particularly well with the very addictive reward schedule. All this created a great, zero-friction game flow.
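The Diablo-style click semantics described above can be sketched as a tiny dispatcher. This is not Diablo’s actual code, just an assumed reconstruction of the idea: the click carries no verb of its own; the verb is inferred entirely from the category of the existent under the cursor, and the mouse button adds one further layer of decomposition.

```python
from dataclasses import dataclass

@dataclass
class Existent:
    """Anything in the game world the cursor can rest on."""
    name: str
    category: str  # "killable", "collectible", or "destination"

def on_click(target: Existent) -> str:
    """The player never spells the action; the category implies the verb."""
    if target.category == "killable":
        return f"kill {target.name}"
    if target.category == "collectible":
        return f"collect {target.name}"
    return f"go to {target.name}"

def on_click_with_button(target: Existent, button: str) -> str:
    """Left- vs. right-click decomposes 'kill this' into 'with this/that'."""
    action = on_click(target)
    if target.category == "killable":
        weapon = "this" if button == "left" else "that"
        return f"{action} (with {weapon})"
    return action

print(on_click(Existent("skeleton", "killable")))      # kill skeleton
print(on_click(Existent("gold pile", "collectible")))  # collect gold pile
print(on_click(Existent("doorway", "destination")))    # go to doorway
```

Note how the entire “grammar” of the game fits in two small functions; this is the fluent “spelling” the paragraph above refers to.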

Another example is the famous The Sims. The Sims also had a very digestive interface, but this time one which nested almost all “verbs” into the two interactible classes, “objects” and “characters” (the difference here is one based on humanistic assumptions; technically there is no difference between “character” and “object” in the way The Sims decomposes its world). It constructed a multi-layered, deep world with this method. The player wasn’t bothered with lengthy spelling procedures to express the actions she wanted done; she just chose them from a palette. The dominant “sentence structure” or grammar that allowed for communication between player and game world was:

Subject  – Object  – Verb

For example:

Character A – Character B – Kiss


Character B – Fridge – Prepare Dinner

Each sentence that the player constructed was displayed within the frame of the screen via a system of “uniform” icons at the top of the screen, so that the player could edit the “paragraph” of events that she had just “written” through the orders she dispatched to the characters. If she didn’t want a sentence to be carried out, she just deleted it from this “visual list” of future events. This high level of digestive groundwork greatly reduced the semantic workload of the controller and gave the player ample time and freedom for strategic thinking.
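The Subject–Object–Verb grammar and the editable “visual list” of future events can be sketched as a simple action queue. This assumes nothing about The Sims’ real implementation; the class and method names are illustrative only.

```python
from collections import deque

class ActionQueue:
    """The 'paragraph' of pending events the player writes and edits."""

    def __init__(self):
        self.queue = deque()

    def write(self, subject: str, obj: str, verb: str) -> None:
        """The player picks a verb from a palette; no spelling needed."""
        self.queue.append((subject, obj, verb))

    def delete(self, index: int) -> None:
        """Removing an icon from the visual list cancels a future event."""
        del self.queue[index]

    def sentences(self) -> list[str]:
        """Render each queued event in Subject - Object - Verb order."""
        return [f"{s} - {o} - {v}" for s, o, v in self.queue]

q = ActionQueue()
q.write("Character A", "Character B", "kiss")
q.write("Character B", "Fridge", "prepare dinner")
q.delete(0)  # the player changes her mind about the kiss
print(q.sentences())  # ['Character B - Fridge - prepare dinner']
```

Because every event is a uniform (subject, object, verb) triple, the same editing operations apply to any action in the game, which is what keeps the controller’s workload so low.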

Modern gamepads can take over a lot of the semantic workload in today’s games, though. In fighting games like Tekken, a player may “utter” a great number of attacking and defending moves without any procedure that involves the use of the interface. But as much as it seems like the graphic interface does not do a lot of work in this genre, it is again the way in which the world was decomposed through it that allows the game design in general to focus on detailed gamepad use during the spelling of actions.