Last night was the monthly meeting of Interact Seattle, a designer/developer group, and since I now live really close to where they meet (yay!) I decided to join them.

The topic for the night was User Experience (UX) design in gaming, presented by Karisma Williams, who is currently working on the Xbox Kinect project (more about that later!). The presentation focused largely on the design of game menus and what are called Heads-Up Displays (HUDs), which include any visible counters, maps, life bars, etc. on the screen. Though menus and HUDs make up only part of the UX of a game, they have a huge impact on the user’s satisfaction with the product.

As far as Karisma was concerned, the gaming industry is far behind where it needs to be when it comes to UX design. In the past, things like menus and HUDs were tacked on at the last minute and used as quick and easy solutions for hurdles encountered in the game’s design. For example, in a first-person shooter, the amount of ammo you have left is generally displayed as a HUD overlaying the game, but a more elegant and realistic solution would be to display the ammo count on the visible gun the character is holding.

Menus can also be a user’s first impression of a game. In order to get to actual gameplay, someone must successfully navigate through an opening menu. Generally this isn’t a very hard problem to solve… create a few boxes or bubbles on the screen and let the user navigate via the buttons on the controller. However, as the way we interact with technology evolves, this problem becomes more difficult. Take Xbox Kinect as an example. With Kinect, there is no controller and there are no buttons… the only way the user interacts with the game is through motion. So how do you control a menu without controls?

I’m finding that a similar problem is arising in UX design for touch screen devices. If you take away the mouse as a navigational tool, you also take away the ability to use things like a hover state or a right click. For example, when you hover over what looks like a button on a website, it changes color, giving you instant feedback that it is, in fact, a clickable element. With new modes of interaction, we must find new ways to give users feedback.

When creating menus for Kinect games, a common theme emerged: keep it simple. Many of the game menus her team created utilize a hover state; think of using The Force in Star Wars. The end goal was to create strong UX design that got users into the game itself quickly and with minimal frustration. Until the technology is able to capture finer muscle movements with greater fidelity, it’s back to basics.

It is also important to keep in mind that with a new mode of interaction comes an entirely new learning curve for the user. If a new user is thrown in front of a Kinect game, let’s say a car racing game, how do they approach the situation? If you don’t have any buttons to press, how do you get started? Would you know to mimic turning the key in the ignition to start your car? Designing for this kind of interaction requires a certain level of understanding of human psychology. We must understand how people learn in order to effectively teach them how to play an entirely new type of game.

My takeaways from the presentation:

  1. Only show the user what they need to see when they need to see it
  2. User Research (UR) is key; do it early and often
  3. I really want a Kinect 🙂