Cognitive Walkthrough: bare bones & quickie example

1. Bare bones

From Philip Craiger's page at http://istsvr03.unomaha.edu/gui/cognitiv.htm


The main focus of the cognitive walkthrough is to establish how easy a system is to learn. More specifically, the focus is on learning through exploration. Experience shows that many users prefer to learn how to use a system by exploring its functionality hands-on, rather than through formal training or careful study of the user's manual. So the checks made during the walkthrough ask questions that address this exploratory kind of learning. To do this, the evaluators go through each step in the task and provide a story about why that step is or is not good for a new user.

To do a walkthrough (from here on, "walkthrough" refers to the cognitive walkthrough and not to any other kind of walkthrough), you need four things.

  1. A description of the prototype of the system. It doesn't have to be complete, but it should be fairly detailed. Details such as the location and wording for a menu can make a big difference.
  2. A description of the task the user is to perform on the system. This should be a representative task that most users will want to do.
  3. A complete, written list of the actions needed to complete the task with the given prototype.
  4. An indication of who the users are and what kind of experience and knowledge the evaluators can assume about them.

Given this information, the evaluators step through the action sequence (item 3 above) to critique the system and tell a believable story about its usability. To do this, for each action, the evaluators try to answer the following four questions.


A. Will the users be trying to produce whatever effect the action has?

Are the assumptions about what task the action is supporting correct given the user's experience and knowledge up to this point in the interaction?


B. Will users be able to notice that the correct action is available?

Will users see, for example, the button or menu item through which the next action is actually performed in the system? This is not asking whether they will know that the button is the one they want; it is merely asking whether it is visible to them at the time when they need to invoke it. An example of a negative supporting story for this question might be a VCR remote control with a hidden panel of buttons that is not obvious to a new user.


C. Once users find the correct action at the interface, will they know that it is the right one for the effect they are trying to produce?

This complements the previous question. It is one thing for a button or menu item to be visible, but will the users know that it is the one they are looking for to complete their task?


D. After the action is taken, will users understand the feedback they get?

Assuming the users performed the correct action, will they know that they did? This is the completion of the execution/evaluation interaction cycle. To determine whether they have accomplished their goal, users need appropriate feedback.
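To make the bookkeeping concrete, here is a minimal, purely illustrative sketch (in Python) of one way to record the four inputs and the per-action answers to questions A-D. The class and field names are assumptions made for this sketch, not part of the method, and the sample entries anticipate the photocopier example in section 2.

    from dataclasses import dataclass, field

    # Hypothetical recording template for a cognitive walkthrough.
    # The four inputs and the per-action questions A-D come from the text above;
    # the names below are illustrative assumptions only.

    @dataclass
    class ActionStory:
        action: str                 # one step from the written action sequence
        a_wants_effect: str         # A. Will users be trying to produce this effect?
        b_action_noticeable: str    # B. Will users notice the correct action is available?
        c_recognized_as_right: str  # C. Will users know it is the action they want?
        d_feedback_understood: str  # D. Will users understand the feedback they get?

    @dataclass
    class Walkthrough:
        prototype_description: str  # 1. fairly detailed description of the prototype
        task_description: str       # 2. a representative task
        action_sequence: list       # 3. complete, written list of actions (strings)
        user_profile: str           # 4. who the users are and what they can be assumed to know
        stories: list = field(default_factory=list)  # one ActionStory per action

    # Example: the power-on action of the desktop photocopier discussed in section 2.
    copier = Walkthrough(
        prototype_description="Desktop copier: numeric keypad, 'Copy' button, power button on the back",
        task_description="Copy a single page",
        action_sequence=["Turn on the power", "Put the original on the machine", "Press 'Copy'"],
        user_profile="Any office worker, familiar with common office machines",
    )
    copier.stories.append(ActionStory(
        action="Turn on the power",
        a_wants_effect="Shaky: the user may assume the machine is already on (no power indicator)",
        b_action_noticeable="No: the switch is on the back and unlabeled",
        c_recognized_as_right="Doubtful: not the kind of switch that usually turns on office equipment",
        d_feedback_understood="Unclear: nothing in the sketch tells the user the machine is now on",
    ))

Nothing in the method depends on such a structure; it simply mirrors the four inputs listed above and questions A-D, with one story per action.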


2. Quickie example

(from Lewis & Rieman)

Here's a brief example of a cognitive walkthrough, just to get the feel of the process. We're evaluating the interface to a personal desktop photocopier. A design sketch of the machine shows a numeric keypad, a "Copy" button, and a push button on the back to turn on the power. The machine automatically turns itself off after 5 minutes of inactivity. The task is to copy a single page, and the user could be any office worker. The actions the user needs to perform are to turn on the power, put the original on the machine, and press the "Copy" button.

In the walkthrough, we try to tell a believable story about the user's motivation and interaction with the machine at each action. A first cut at the story for action number one goes something like this: The user wants to make a copy and knows that the machine has to be turned on. So she pushes the power button. Then she goes on to the next action.

But this story isn't very believable. We can agree that the user's general knowledge of office machines will make her think the machine needs to be turned on, just as she will know it should be plugged in. But why shouldn't she assume that the machine is already on? The interface description didn't specify a "power on" indicator. And the user's background knowledge is likely to suggest that the machine is normally on, like it is in most offices. Even if the user figures out that the machine is off, can she find the power switch? It's on the back, and if the machine is on the user's desk, she can't see it without getting up. The switch doesn't have any label, and it's not the kind of switch that usually turns on office equipment (a rocker switch is more common). The conclusion of this single-action story leaves something to be desired as well. Once the button is pushed, how does the user know the machine is on? Does a fan start up that she can hear? If nothing happens, she may decide this isn't the power switch and look for one somewhere else.

That's how the walkthrough goes. When problems with the first action are identified, we pretend that everything has been fixed and we go on to evaluate the next action (putting the document on the machine).

You can see from the brief example that the walkthrough can uncover several kinds of problems. It can question assumptions about what the users will be thinking ("why would a user think the machine needs to be switched on?"). It can identify controls that are obvious to the design engineer but may be hidden from the user's point of view ("the user wants to turn the machine on, but can she find the switch?"). It can suggest difficulties with labels and prompts ("the user wants to turn the machine on, but which is the power switch and which way is on?"). And it can note inadequate feedback, which may make the users balk and retrace their steps even after they've done the right thing ("how does the user know it's turned on?").

3. Two frequent misunderstandings (also from Lewis & Rieman)

  1. Many evaluators stumble through an interface they don't know, and then evaluate the stumbling process. But the aim is to evaluate the optimal action sequence and to fix all the trouble spots that would cause a deviation from that sequence.


  2. The walkthrough does not test real users, so it usually finds far more problems than you would find with a single user in a single test session.