Heuristic Review

A principled expert walkthrough that surfaces usability problems in a design before any user testing takes place.

Overview

A heuristic review is an expert evaluation technique where reviewers assess a product or interface against a set of established usability principles, called heuristics. Rather than observing real users struggle, you apply accumulated knowledge about how good interfaces behave to predict where your design is likely to cause friction.

The method was formalized by Jakob Nielsen and Rolf Molich in 1990, and Nielsen's 10 usability heuristics (things like "visibility of system status," "error prevention," and "match between system and the real world") remain the most widely used framework. Their staying power comes from how well they capture the patterns that tend to trip people up across virtually any interface or product type.

What makes a heuristic review useful isn't that it replaces user testing. It's that it does something user testing can't do cheaply: clear out the obvious problems first. When you run usability testing on a product full of basic violations, participants spend their time being confused by things you could have caught in an afternoon. A good heuristic review focuses your testing budget on the things experts can't predict, which is where the real learning happens.

When to Use It

  • When you've inherited a product and need a quick read on its usability before planning a full research effort.
  • Before scheduling usability testing, to clear out low-hanging issues and focus participant time on harder questions.
  • When time or budget doesn't allow for user research and you need something more principled than gut-feel critique.
  • When evaluating a competitor's product to understand where your design has an advantage or a gap.
  • When you want to structure a design critique so it produces citable findings instead of opinion.

This method works best when the evaluators have real experience with UX principles. A heuristic review by people who don't understand what the heuristics actually mean tends to produce a long list of personal preferences dressed up as expert findings. If your reviewers need a primer before the session, budget time for that.

How It Works

Start by defining what you're reviewing and why. A full product audit is different from evaluating a single checkout flow. Getting specific about scope keeps the review manageable and makes findings more actionable.

Pull together a small group of reviewers, ideally three to five people. Research suggests that a single reviewer catches only about a third of usability problems, while five reviewers together catch most of them. Beyond five, the returns diminish fast. Each reviewer should work through the experience independently before the group compares notes. Independent review prevents the first person to speak from anchoring everyone else's observations.

Give each reviewer a defined set of tasks to walk through, not a free-range exploration. As they work through each task, they document every issue they encounter, citing the specific heuristic it violates. A finding without a heuristic citation is just an opinion. The citation forces the reviewer to articulate why something is a problem, not just that it feels off.

After individual reviews are complete, bring findings together. Look for patterns: issues flagged by multiple reviewers carry more weight than lone observations. Assign severity ratings (Nielsen's standard scale runs from 0 to 4, cosmetic to catastrophic) so that when the list lands in someone's sprint backlog, they know where to start.
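The consolidation step can be sketched as a small script. The field names, example heuristics, and locations below are illustrative, not part of the method; the logic simply merges findings that several reviewers reported independently and orders the result by severity and agreement:

```python
from collections import defaultdict

def consolidate(findings):
    """Merge independent reviewer findings into one prioritized issue list.

    Each finding is a dict with 'reviewer', 'heuristic', 'location', and
    'severity' (Nielsen's 0-4 scale). Findings sharing a heuristic and
    location are treated as the same issue; the highest severity wins.
    """
    issues = defaultdict(lambda: {"reviewers": set(), "severity": 0})
    for f in findings:
        issue = issues[(f["heuristic"], f["location"])]
        issue["reviewers"].add(f["reviewer"])
        issue["severity"] = max(issue["severity"], f["severity"])
    # Most severe first; ties broken by how many reviewers flagged it.
    return sorted(
        ({"heuristic": h, "location": loc,
          "severity": v["severity"], "reviewer_count": len(v["reviewers"])}
         for (h, loc), v in issues.items()),
        key=lambda i: (-i["severity"], -i["reviewer_count"]),
    )

findings = [
    {"reviewer": "A", "heuristic": "visibility of system status",
     "location": "checkout step 2", "severity": 3},
    {"reviewer": "B", "heuristic": "visibility of system status",
     "location": "checkout step 2", "severity": 4},
    {"reviewer": "B", "heuristic": "error prevention",
     "location": "signup form", "severity": 2},
]
for issue in consolidate(findings):
    print(issue["severity"], issue["reviewer_count"], issue["heuristic"])
```

Taking the maximum severity on a merge is one defensible convention; a team could just as well average the ratings or re-rate contested issues together in the consolidation meeting.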

Tips

Don't let reviewers propose solutions during the review. The job in this phase is to document problems clearly and precisely. Solutions come later. Mixing diagnosis with prescription slows everything down and often results in fixing the symptom instead of the cause.

Make sure findings are specific enough to act on. "The navigation is confusing" is not a finding. "The secondary navigation disappears when a user enters the checkout flow, removing their only path back to browsing" is. Push reviewers to describe exactly where the problem occurs, what triggers it, and what the user is likely to experience.

Watch for false positives. Not every heuristic violation is worth fixing. A finding needs to actually harm the user experience in a meaningful way to earn a spot on the priority list. Severity ratings help, but someone on the team needs to exercise judgment before the list goes to engineering.

The Output

A categorized list of usability issues, each tied to a specific heuristic and rated by severity. This is documentation that can go directly into a prioritization conversation, not a general impressions summary.

This output feeds naturally into an Impact/Effort Matrix for prioritization, or into a design sprint if the findings are serious enough to warrant immediate attention. If usability testing is planned, the heuristic review findings should shape which task scenarios get the most scrutiny.

Related Methods

  • Usability Testing: Comes after. Run the heuristic review first to clear obvious issues, then use testing to catch the problems expert review misses.
  • Experience Mapping: Runs alongside. A heuristic review maps cleanly onto an experience diagram, annotating where violations occur at each stage.
  • Impact/Effort Matrix: Comes after. Use it to prioritize the findings list once the review is complete.
  • Journey Mapping: Comes before. A journey map helps identify which flows and touchpoints are highest priority to evaluate.
  • Interviewing: Comes after. When heuristic findings raise questions about user mental models or real-world behavior, interviews are a good next step.