Heuristic Evaluation of User Interfaces
Jakob Nielsen and Rolf Molich. 1990.
Heuristic evaluation is an informal method of usability analysis where a number of evaluators are presented with an interface design and asked to comment on it. Four experiments showed that individual evaluators were mostly quite bad at doing such heuristic evaluations and that they only found between 20 and 51% of the usability problems in the interfaces they evaluated. On the other hand, we could aggregate the evaluations from several evaluators to a single evaluation and such aggregates do rather well, even when they consist of only three to five people.
This research established heuristic evaluation as a legitimate, practical method for identifying usability issues without requiring extensive resources or formal user testing. Before this, teams faced a stark choice between rigorous but expensive, time-consuming user testing and completely informal, unreliable "eyeballing" of interfaces.
Their key innovation was demonstrating that while individual evaluators typically found only 20-51% of usability problems, aggregating evaluations from just 3-5 people could identify approximately two-thirds of usability issues.
Different evaluators find different problems, even when using the same evaluation criteria. As a PM, involving 3-5 reviewers with diverse perspectives will dramatically improve your ability to catch issues early.
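The aggregation effect can be sketched with a back-of-the-envelope model: if each evaluator independently finds each problem with probability p, a group of n evaluators finds it with probability 1 − (1 − p)ⁿ. The independence assumption is a simplification of ours, not the paper's analysis, but the numbers line up with its reported 20–51% individual find rates and roughly two-thirds aggregate coverage.

```python
def aggregate_find_rate(p: float, n: int) -> float:
    """Expected fraction of problems found by n independent evaluators,
    each with individual find rate p (simplifying independence assumption)."""
    return 1 - (1 - p) ** n

# Individual find rates in the paper ranged from 20% to 51%.
for p in (0.20, 0.35, 0.51):
    summary = ", ".join(
        f"n={n}: {aggregate_find_rate(p, n):.0%}" for n in (1, 3, 5)
    )
    print(f"individual rate {p:.0%} -> {summary}")
```

Even at the low end (p = 20%), five evaluators together cover about two-thirds of the problems, which is why 3–5 reviewers is the sweet spot the paper identifies.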
They call out nine usability heuristics to keep an eye on:
- Simple and natural dialog
- Speak the user’s language
- Minimize user memory load
- Be consistent
- Provide feedback
- Provide clearly marked exits
- Provide shortcuts
- Good error messages
- Prevent errors
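Several of these heuristics compound in a single moment of the interface. As an illustrative (invented, not from the paper) example, consider how an error message can violate or satisfy "speak the user's language," "good error messages," and "provide clearly marked exits" all at once:

```python
def bad_error() -> str:
    # Violates "speak the user's language": internal code, jargon,
    # and no constructive next step for the user.
    return "ERR 0x4F: I/O FAILURE"

def good_error(filename: str) -> str:
    # Plain language, states the problem precisely, and suggests a way
    # out -- matching "good error messages" and "clearly marked exits".
    return (f"Couldn't save '{filename}' because the disk is full. "
            "Free up space or choose another location, then try again.")

print(bad_error())
print(good_error("report.txt"))
```

The heuristics are cheap to apply precisely because they work at this level of granularity: a reviewer can flag a single dialog or message without needing a full usability test.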
This paper fundamentally changed how we think about product evaluation by making structured usability assessment accessible, establishing the optimal number of reviewers needed, and creating a framework of heuristics that remains the foundation of many design systems today.