Interviewing
A direct conversation method for understanding why people think, feel, and behave the way they do.
Overview
Qualitative interviewing is the core of discovery work. You sit down with someone (in person or remotely) and ask them about their experiences, behaviors, and mental models. What comes back is the kind of context no analytics dashboard or survey can give you: the story behind the action, the frustration behind the workaround, the motivation behind the choice.
The distinction that matters most is between interviewing to confirm and interviewing to learn. Most teams unconsciously do the former. They arrive with hypotheses baked in, write questions that practically telegraph the expected answer, and walk out with data that ratifies what they already believed. Good interviewing means designing questions around behaviors and experiences, not opinions about your product. "Tell me about the last time you had to deal with X" gets you somewhere. "Do you like feature Y?" mostly gets you noise.
Interviews are best suited to generating hypotheses, not validating them. If you finish a session thinking "that confirms our direction," something probably went wrong. You should be leaving with new questions, contradictions worth examining, and details that didn't fit your model.
When to Use It
- At the start of a project, when you need to understand the problem space before jumping to solutions.
- When your team is operating on assumptions about user behavior that nobody has actually tested.
- When you need emotional context: what makes something frustrating, meaningful, or worth switching for.
- When you have quantitative signals (from analytics or surveys) but no explanation for what they mean.
- When you're trying to build personas grounded in real behavioral patterns rather than demographic guesses.
Interviewing is not the right method when you need statistical significance or comparative data at scale. It's also not a substitute for usability testing. Interviews reveal what people think and feel. Usability testing observes what they actually do with a specific interface. Both matter, and conflating them creates blind spots.
How It Works
Start with a focused research question, not a broad topic. "We want to understand our users" isn't narrow enough to structure a good guide. "We want to understand how people currently manage X and where things break down" gives you something to build from.
Write questions around experiences and behaviors. Open-ended prompts that invite storytelling ("Walk me through how you handled that last week") yield richer data than anything with a yes/no answer. Then prepare follow-up probes for each main question: specific things you can ask when an answer is shallow or vague. "Can you give me an example?" and "What happened next?" go further than most people expect.
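The pairing of main questions with prepared probes can be sketched as a simple data structure. This is only an illustration of how a guide might be organized, not a standard tool or format; every name and prompt below is hypothetical.

```python
# Illustrative sketch: an interview guide structured as data, so each main
# question stays paired with its follow-up probes. All names and prompts
# are hypothetical examples, not part of any standard research tool.

GUIDE = {
    "research_question": (
        "How do people currently manage X, and where does it break down?"
    ),
    "questions": [
        {
            "prompt": "Walk me through the last time you had to deal with X.",
            "probes": [
                "Can you give me an example?",
                "What happened next?",
            ],
        },
        {
            "prompt": "Tell me about any workarounds you've built up over time.",
            "probes": [
                "How did you first come up with that?",
                "What would happen if you couldn't do it that way?",
            ],
        },
    ],
    "closing": "Is there anything you'd want me to know that I didn't ask about?",
}

def print_guide(guide):
    """Render the guide as a one-page facilitator reference."""
    print(f"Research question: {guide['research_question']}\n")
    for i, question in enumerate(guide["questions"], start=1):
        print(f"{i}. {question['prompt']}")
        for probe in question["probes"]:
            print(f"   - probe: {probe}")
    print(f"\nClosing: {guide['closing']}")

print_guide(GUIDE)
```

Keeping the probes attached to their parent question, rather than in a separate list, mirrors how a facilitator actually uses them: as branches to follow when an answer is shallow, not as questions in their own right.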
The professional standard is a two-person team: one facilitator, one note-taker. The facilitator maintains eye contact and follows the conversation wherever it goes. The note-taker captures everything without breaking the rhythm. If you're running solo, record the session (with consent) so you can stay fully present instead of transcribing.
Silence is a technique. When a participant trails off mid-thought, most interviewers fill the gap. Don't. Wait. The unfinished thought often contains the most useful thing they said.
Tips
Treat your guide as a compass, not a script. If a participant mentions something unexpected and interesting, follow it. You can return to your question list.
Stay neutral. Visible reactions, whether enthusiasm when someone confirms your hypothesis or deflation when they don't, shape what they say next. Participants pick up on it faster than you'd think.
Build toward depth. Save sensitive or probing questions for later in the session. Trust accumulates over time, and the same question lands differently in minute five versus minute thirty.
End with something open: "Is there anything you'd want me to know that I didn't ask about?" This surfaces useful material more often than you'd expect.
The Output
Raw research material: notes, recordings, transcripts, and direct quotes. This isn't immediately actionable on its own. The value lives in synthesis, which is why interviewing almost always feeds directly into Affinity Mapping as the next step.
A well-run set of interviews also gives your team shared exposure to real user language and real user problems. That shared context shapes decisions long after the formal deliverables are done.
Related Methods
- Affinity Mapping: Comes after. Interviewing generates the raw material. Affinity mapping turns it into themes your team can act on.
- Persona Profile: Comes after. Behavioral patterns from interviews become the foundation for personas grounded in evidence rather than assumption.
- Journey Mapping: Comes after. Interview findings map naturally onto the stages of a user's experience over time.
- Survey Design: Runs alongside. Surveys can quantify what interviews surface qualitatively, or screen participants before recruiting them for deeper conversations.
- How Might We: Comes after. Themes from your interview synthesis become raw material for opportunity framing and ideation.
- Usability Testing: Related but distinct. Interviews uncover the "why" behind behavior. Usability testing observes the "what" in the context of an actual interface.