By guest bloggers Erica Olmsted-Hawala, U.S. Census Bureau and Betty Murphy, Human Solutions, Inc.
A common practice in usability testing is to ask people to describe what they are doing as they complete a task. With the “Think-Aloud” protocol, or method, test participants express their thoughts, feelings, and reactions to a website while they work. This method helps usability practitioners learn how users experience websites and other user interfaces.
In a recent DigitalGov University webinar, usability experts Erica Olmsted-Hawala (U.S. Census Bureau) and Betty Murphy (Human Solutions, Inc.) discussed different kinds of Think-Aloud protocols and how the choice of protocol can impact the results.
Erica began by explaining the theory behind the Think-Aloud method and the different approaches used in usability testing:
1. Traditional method: This method requires that the test administrator say as little as possible. The only permissible verbal cue is “Keep talking.”
2. Speech-communication method: This method allows the test administrator to acknowledge the test participant’s verbalizations with “Mm-hmm” or “Uh-huh” in addition to “Keep Talking.”
3. Coaching: The test administrator asks the participant for feedback and actively intervenes with probes such as “Did you notice that link up here?”, “You’re doing great,” or “Can you explain why you clicked on that link?”
Usability practitioners may not be aware of the distinctions between the methods. In particular, they may not know that the traditional and speech-communication methods allow only such limited responses. Practitioners seem to use a mix of methods without much thought about their approach; indeed, usability test reports rarely specify which Think-Aloud protocol was used. However, Erica and Betty showed that the approach used can influence the results.
They shared the results of an experiment looking at whether the style of the Think-Aloud protocol matters to the results of usability testing[i]. In the experiment, they asked test participants to use an informational website and provide feedback. They assigned participants to one of the three Think-Aloud methods above, or to a “silent” control, where participants did not think aloud at all. During the sessions, they collected data on participants’ accuracy, task time, and satisfaction.
The results showed that:
- Those in the “coaching” condition performed better than those in the other three conditions, suggesting that the coaching provides users with assistance in completing the tasks that would not be available when working on their own.
- Those in the “coaching” condition were more satisfied than those in the other two Think-Aloud conditions. This result suggests that participants who are coached during usability testing will report higher satisfaction than actual users in the field would experience.
- There were no differences among the four conditions in the amount of time it took participants to complete their tasks. This is important because it is commonly believed that the Think-Aloud protocol will increase (or at least impact) task time.
- There were no differences between the traditional and speech-communication methods.
Because the Think-Aloud method can impact the results of a usability test, Erica and Betty recommended that usability practitioners consider the following when they conduct usability tests:
- The coaching method may provide assistance that users would not get on their own.
- The traditional or speech-communication methods will reflect an environment more similar to what users might experience “in the field,” where they will have no assistance.
- Usability practitioners should carefully consider which method to use for each usability test and discuss the implications of the decision with the test sponsors.
- When writing the documentation for the test, usability practitioners should accurately report and document the Think-Aloud method they used.
[i] Olmsted-Hawala, E., Murphy, E., Hawala, S., and Ashenfelter, K. (2010). “Think-Aloud Protocols: A Comparison of Three Think-Aloud Protocols for Use in Testing Data Dissemination Web Sites for Usability.” Proceedings of CHI 2010, ACM Conference on Human Factors in Computing Systems. ACM Press: pp. 2381–2390.