
Saturday, 15 September 2007

Heuristic Evaluation for Playability

I've just added Desurvire's professional site, Behavioristics, to the links section, and have read a paper she co-authored with Caplan and Toth, "Using Heuristics to Evaluate the Playability of Games".

In this paper the authors present a system of design guidelines called Heuristic Evaluation for Playability (HEP), which they claim is particularly useful for evaluating pre-production prototypes and for addressing issues of game story and interface.

They categorise their 43 heuristics into game play (16), game story (8), game mechanics (7) and game usability (12). What interests me is that game play is defined in terms of mastery, of beating the game. No consideration is given to subversive or emergent play that takes place beyond the designers' intentions.

Reading this piece makes me wonder about the best way to evaluate game usability. The following quotation strikes me as a good case study:
Each session was organized as a one-on-one evaluation session, in an environment similar to the one where they would actually play the game. Participants were given instructions to begin the game, asked to think aloud, and asked several probing questions while using the prototype.

I don't know of any game where I have to think out loud and be probed while I play. Surely this has an effect on how I play, and on how I perceive the game? A less intrusive approach might be to use automated data-gathering techniques inside the game to monitor the player's progress. At some stage it would be necessary to ask for a subjective evaluation of affect, but I wonder whether this couldn't be inferred from the gameplay itself: given the option to continue playing or to stop early with no negative consequences, it would be fairly easy to recognise whether or not the player was having fun. The main problem with thinking aloud and probing questions is that they disrupt the sense of immersion in the game. This would have significant consequences for games which require concentration, or which rely on mood for their aesthetic pleasure.
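
To make the idea concrete, here's a minimal sketch in C++ of the kind of in-game logging I have in mind. The TelemetryLog class, the event names and the plain CSV output are my own assumptions for illustration, not anything proposed in the paper.

    #include <chrono>
    #include <fstream>
    #include <string>

    // Minimal in-game event logger: appends timestamped events to a CSV
    // file so a play session can be analysed afterwards, without
    // interrupting the player with think-aloud prompts.
    class TelemetryLog {
    public:
        explicit TelemetryLog(const std::string& path)
            : out_(path, std::ios::app),
              start_(std::chrono::steady_clock::now()) {}

        // Record a named event, e.g. "level_complete" or "player_death".
        void record(const std::string& event, const std::string& detail = "") {
            const auto elapsed = std::chrono::duration_cast<std::chrono::seconds>(
                std::chrono::steady_clock::now() - start_).count();
            out_ << elapsed << ',' << event << ',' << detail << '\n';
        }

    private:
        std::ofstream out_;
        std::chrono::steady_clock::time_point start_;
    };

    int main() {
        TelemetryLog log("session.csv");
        log.record("session_start");
        log.record("level_complete", "level_1");
        // The affect signal I have in mind: offered a free choice with
        // no penalty, does the player keep playing or stop early?
        log.record("continue_offered");
        log.record("player_choice", "continue");  // or "quit"
        log.record("session_end");
    }

A run of choices to continue, when quitting costs nothing, is exactly the kind of behavioural evidence of fun that needn't interrupt play at all.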

I wonder if HEP is a ludological / narratological analysis at the expense of the carnal pleasures of game play?

In conclusion they state,
The user studies findings highlighted issues specific to the game; boredom, challenge and pace level, as well as terminology. These issues were not found through HEP, whose benefit was in ensuring general game principles.

This is an important acknowledgement of where HEP is appropriate: it ensures that general game principles are met, while game-specific issues such as boredom, challenge and pacing still emerge only from user studies.

Desurvire, Heather; Caplan, Martin; Toth, Jozsef A. "Using Heuristics to Evaluate the Playability of Games". CHI '04 Extended Abstracts on Human Factors in Computing Systems (2004).

Friday, 14 September 2007

Usability Testing Example

Gamasutra have an article by Microsoft's Sauli Laitinen titled "Better Games Through Usability Evaluation and Testing".

There's also an article in the Journal of Usability Studies which I discuss in another post.

Interactions

I've added Interactions to the links section. Especially interesting was the special issue "More funology: games" which featured the following articles,

Pagulayan, Randy and Steury, Keith. "Beyond usability in games"

Chao, Dennis L. "Computer games as interfaces"

Laurel, Brenda. "Narrative construction as play"

The ACM requires a subscription to view these articles.

Isbister's Character Design

Both Gamasutra and Dr. Dobb's have excerpts from Katherine Isbister's book Better Game Characters by Design: A Psychological Approach.

There's also more of her work available online at Friendly Media, and at her blog, Game Empathy.

Of particular interest is Noah Schaffer's white paper, Heuristics for Usability in Games, in which he presents a set of heuristics for game design.

Also of interest are Isbister's presentation, Perform or Else!, and footage about "extroverted play design".

Isbister, Katherine. Better Game Characters by Design: A Psychological Approach (Morgan Kaufmann, 2006).

Monday, 3 September 2007

Thief 3 Usability Testing

Shannon Lucas and Denise Fulton present a brief summary of their usability testing of Ion Storm's game Thief 3: Deadly Shadows in an article called "What We Learned Evaluating the Usability of a Game".

They made use of the "Rapid Iterative Testing and Evaluation" (RITE) methodology developed by Medlock et al. at the Microsoft Game User Research Group to guide designers and programmers during the game's early alpha stages.

An interesting observation is that, while they didn't have time to review the videotaped footage during testing, it was useful as a resource for programmers to observe their AI under real operating conditions. It doesn't sound like they made use of data visualisation or game metrics, relying instead on the players' subjective, first-hand reports. For example, the conclusion that "8 out of 10 players had difficulty using the blackjack tool" appears to be based on player reports rather than measurements of the weapon's use during gameplay.
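
To ground a claim like that in measurement rather than self-report, the combat code could simply count attempts and outcomes. A minimal sketch, with a hypothetical WeaponStats counter of my own devising (nothing from the actual Thief 3 codebase):

    #include <cstdio>

    // Hypothetical per-weapon counters: incremented wherever the player
    // swings the blackjack and wherever a knockout actually lands, so
    // "difficulty" becomes a measured success rate.
    struct WeaponStats {
        const char* name;
        int attempts = 0;
        int successes = 0;

        void onAttempt() { ++attempts; }
        void onSuccess() { ++successes; }

        double successRate() const {
            return attempts ? static_cast<double>(successes) / attempts : 0.0;
        }
    };

    int main() {
        WeaponStats blackjack{"blackjack"};
        // In the real game these calls would sit in the combat code;
        // here we simulate a short, frustrating session.
        for (int i = 0; i < 10; ++i) blackjack.onAttempt();
        blackjack.onSuccess();
        blackjack.onSuccess();
        std::printf("%s: %d/%d knockouts (%.0f%% success)\n",
                    blackjack.name, blackjack.successes, blackjack.attempts,
                    100.0 * blackjack.successRate());
    }

Aggregated across ten playtesters, the same counters would tell you not just that the blackjack was hard to use, but where and how often it failed.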

"What We Learned Evaluating the Usability of a Game". Usability and User Experience Community of the Society for Technical Communication. October 2004 issue (Vol 11, No. 2).

Psychology in Game Research

There's a lightweight article over at The Escapist titled "The Perception Engineers" about the role of psychology in game usability analysis, with comment from Daniel Gunn, Randy Pagulayan and Tim Nichols from Microsoft Game User Research.

Academia - Industry Contact

John Hopson from Microsoft Game User Research wrote an article for Gamasutra last year, entitled "We're Not Listening: An Open Letter to Academic Game Researchers". He addresses the lack of communication between industry and academia and proposes ways to pitch academic research to developers. The central message is,

"if the research doesn’t include specific practical recommendations or a measurable impact on the final product, don’t bother trying to sell it to the industry."


Sounds obvious, but there's a lot of research out there that the industry is unaware of. It's our job to make them sit up and take notice.

Wednesday, 22 August 2007

Halo 3

Yesterday Wired ran an article about Randy Pagulayan's team at Microsoft Game User Research and their user testing of Halo 3.

It's an entertaining read with a few tidbits of information on how they conduct their usability evaluation process. Highlights for me include the observation that dying can be fun. This is clear to a game player or developer, but might come as a surprise to a usability specialist focussed on effective and efficient interfaces. That "skin meters, cardiac monitors, and facial electromyographs" were needed to deduce it is quite surprising; I'd have thought a questionnaire would suffice. Of course, this depends on the kind of game. If you get to see an entertaining animation, or a tactically useful slow-motion replay of your death, then there's clearly 'value' for the player who died. In a multi-player deathmatch game, however, where you have to sit and wait for several minutes until all of the other players finish their game, dying is clearly going to be frustrating - unless there is some activity to take part in, such as chatting with other dead characters or trailing the players still alive.

Most interesting for me is the description of the tools Pagulayan's team devised to capture and visualise playthroughs. This is where I see a hybrid programmer / usability expert as being extremely useful to game development. Conditionally compiled code could be added to a particular build of the game to enable data tracking, perhaps offloaded to disk or to a database on the local network; the other half of the process would be to construct tools to analyse and visualise that data. It would work like a semi-independent tools and QA team, feeding analysis back to the designers. A sketch of the data-tracking side follows.
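
Here's how that data-tracking half might look, assuming a hypothetical ENABLE_TELEMETRY build flag and a simple per-tick sampling hook. This is my own invention, not a description of the tools Pagulayan's team actually built.

    #include <cstdio>

    // Telemetry is compiled in only for instrumented builds, so shipping
    // builds carry no tracking code at all. Build the instrumented
    // version with: g++ -DENABLE_TELEMETRY -o playtest playtest.cpp
    #ifdef ENABLE_TELEMETRY
    #include <fstream>

    namespace telemetry {
        // Append one sample per game tick: time, player position, health.
        // An external tool can later render these as heat maps of where
        // players die, linger or get lost.
        inline void sample(double t, float x, float y, float z, int health) {
            static std::ofstream out("playtest.csv", std::ios::app);
            out << t << ',' << x << ',' << y << ',' << z << ','
                << health << '\n';
        }
    }
    #define TRACK_SAMPLE(t, x, y, z, hp) telemetry::sample(t, x, y, z, hp)
    #else
    // In normal builds the macro compiles away to nothing.
    #define TRACK_SAMPLE(t, x, y, z, hp) ((void)0)
    #endif

    int main() {
        // Stand-in for the game loop: in an instrumented build each
        // iteration appends one row to playtest.csv.
        for (int tick = 0; tick < 5; ++tick) {
            TRACK_SAMPLE(tick / 30.0, tick * 1.5f, 0.0f, 2.0f,
                         100 - tick * 10);
        }
        std::printf("session complete\n");
    }

The analysis and visualisation tools would then live entirely outside the game, reading these logs, which keeps the instrumented build cheap and the shipping build clean.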