Contextual Inquiry

In Brief
Contextual inquiry is a field research method that combines semi-structured interviews with direct observation of the customer in the actual environment where the problem occurs. You visit the customer’s workspace, watch them work, and ask questions in the moment. The output is a rich, qualitative picture of real workflows, tacit knowledge the customer cannot articulate in a traditional interview, and workarounds or substitute products that reveal unmet needs.
Common Use Case
You have identified a pain point for your prospective user, but you aren’t sure about their real behavior and workflows. You visit them in person, watch them act in real situations, and discover behaviors and workarounds they never mentioned in interviews. You want to see the real behaviors so you can design a better solution.
Helps Answer
- What problems does the customer actually experience in their environment?
- What workarounds or substitute products is the customer already using?
- How often does this problem happen in practice?
- What does the customer know about the problem that they cannot easily put into words?
- What steps make up the customer’s real workflow?
Description
Contextual inquiry is a field-research method built on the master/apprentice model: the researcher visits the customer’s actual environment, watches the work happen, and asks questions in the moment rather than after the fact. The method was developed by Karen Holtzblatt and Hugh Beyer at Digital Equipment Corporation in the late 1980s and codified in their 1997 book Contextual Design, with a practitioner-focused follow-up, Rapid Contextual Design, published in 2004.
Why observation produces different data than interviews. Customers can describe what they consciously do; they cannot describe the workarounds, sticky-note hacks, and tacit shortcuts they don’t notice themselves doing. A retrospective interview gets you the customer’s mental model of their work. A contextual inquiry gets you the work itself — including the parts the customer would never think to mention. Workarounds, environmental constraints, and interruptions only appear when you watch the work happen.
Four operating principles. The Nielsen Norman Group’s articulation of contextual inquiry’s four principles is a useful summary of how the researcher behaves on-site. Context means going to the real place where the work happens, not a conference room. Partnership means treating the session as a shared inquiry — the customer drives the work, the researcher follows and asks. Interpretation means voicing your reading of what you just saw and inviting the customer to confirm or correct it. Focus means having a defined research question so you notice what matters and don’t drown in everything.
The output is qualitative and small-sample. It produces hypotheses, not statistics — pair it with a downstream evaluative method (paper prototype, comprehension test, closed-ended survey) once the field data has shaped a hypothesis worth testing.
Where AI fails for this method. AI cannot replace physical presence. The serendipitous noticing — a sticky note on a monitor, an awkward physical workaround, the social dynamics of a workspace — is the entire value of the method. Use AI to accelerate analysis after the session, not to replace your presence during it.
How to
Prep
- Define your research question. One sentence. “How do mid-market accountants actually close the books at month-end?” is a research question. “Learn about accountants” is not. The question shapes what you’ll watch for and which workflow steps you’ll ask the customer to walk you through.
- Recruit five or more customers in their real environment. A contextual inquiry done at the customer’s desk during their actual workday produces different data than one done over coffee. If the problem only happens at certain times (month-end close, surge periods, mornings), schedule the visit for then. Recruit at least five customers — small samples mean any single observation is an anecdote until a pattern repeats.
- Train the observer pair. Pair the lead researcher with a second observer when possible — one focuses on the work, the other captures notes and timestamps. Both should agree in advance on the four principles (context, partnership, interpretation, focus) and on what counts as a workaround worth probing. If you’re solo, plan to record the session so you can re-watch instead of trying to take notes mid-observation.
- Prepare a framing statement. Two or three sentences you read at the start of the session: who you are, what you’re trying to learn, that you’ll mostly watch, and that you may interrupt occasionally to ask what they just did. Make clear they’re the expert and you’re the apprentice.
- Prepare an observation checklist, not a script. Unlike an interview, you’re not running a list of questions. You’re going to watch real work. The checklist is a reminder of which workflow steps and decision points you want to make sure get covered if they don’t surface naturally. Include space for: workarounds noticed, tools or substitute products in use, interruptions, body language during difficult moments.
- Get permission to record. Audio at minimum, video or screen-share if it doesn’t change behavior. Promise the customer you’ll send them a summary and ask before quoting them anywhere.
Execution
- Set the master/apprentice frame. Read your framing statement. Establish that the customer drives the work and you follow. Say explicitly: “I’d like to watch you do [task] the way you normally would. I’ll mostly watch and ask a few questions along the way.” Get explicit consent to record.
- Observe first, ask later. Resist the urge to interrupt. The customer will work through steps that look obvious to them but contain the real signal. Note questions for later — most will be answered in the next two minutes of work without you asking. The principle is partnership: you and the customer are figuring out the work together, you are not interviewing them.
- Probe in the moment, but only when it’s natural. When the customer hits a workaround, a hesitation, or an interruption, that’s the moment to ask. “What just happened there?” or “Why that step?” The principle is context: the answer to a question asked at the desk in the moment is qualitatively different from the answer to the same question asked at a debrief an hour later.
- Voice your interpretation and let them correct it. Every fifteen or twenty minutes, summarize what you think you just saw: “It sounded like the reason you copied that into a spreadsheet is because the report doesn’t sort the way you need.” The customer will either confirm or correct you. The principle is interpretation: you are testing your reading of the work in real time, not waiting until you’re back at the office to make sense of it.
- Keep the focus tight. Customers will volunteer adjacent stories, edge cases, and complaints about other tools. Note them, but bring the conversation back to the workflow you came to observe. The principle is focus: a session that drifts produces broad context but no usable signal.
- Note what’s around the work. Sticky notes, dual monitors, printed-out cheatsheets, a chat window the customer keeps minimized, coworkers wandering by. The environment is data. A printed-out cheatsheet next to a software interface is the customer’s own admission that the software doesn’t work the way they need.
- Run the remote variant when in-person isn’t possible. For distributed teams or software-only workflows, ask the user to share their screen on a video call while they work through a real task — not a demo, but actual work they’d do anyway. Record with permission. You lose physical context (desk setup, body language, interruptions), but for software products screen-sharing often reveals more about the digital workflow than an in-person visit would. The four principles still apply.
- Wrap with a summary and clarifying questions. In the last five minutes, summarize the workflow back to the customer and ask any remaining clarifying questions. Confirm permission to follow up by email if you have a question after debrief.
Analysis
- Debrief while it’s fresh. Within an hour of the session, the observer pair (or solo researcher) sits down with their notes and the recording and writes up: the workflow as observed, every workaround noticed, every place the customer hesitated, every place the environment intruded. Two hours of debrief per session is normal.
- Run an interpretation session. With your team, walk through each session’s notes and surface every observation as an explicit interpretation: “When the customer copy-pasted the report into Excel, that means the built-in sort is unusable for their workflow.” This is the team’s chance to challenge each interpretation. Disagreement means you don’t yet know what you saw and need to look at the recording or follow up with the customer.
- Build an affinity diagram. Take every observation, every workaround, every interpretation across all sessions and put each on its own sticky note (physical or digital). Group them by what they have in common, not by which session they came from. The clusters that emerge are the patterns. Single-session observations that fit no cluster are anecdotes — note them, but don’t build on them.
- Identify workflow patterns. Look across the affinity clusters for repeated structures: the same workaround appearing in different sessions, the same decision point repeatedly going wrong, the same environmental constraint appearing at different customers. A pattern that shows up in three of five sessions is a pattern. A behavior that shows up in one session is a hypothesis worth probing further.
- Be honest about sample size. Five customers is enough to surface patterns; it is not enough to claim those patterns generalize. State what you observed, in how many sessions, with what variation. Resist extrapolating qualitative observations to the entire population — write findings as “in five sessions we saw X” rather than “users do X.”
- Synthesize into a hypothesis worth testing. The output of analysis is one or two clear hypotheses for downstream evaluative testing — a paper prototype to test against a workaround you discovered, a closed-ended survey to size how widespread the workflow pattern is, a comprehension test on a value proposition you can now articulate. Contextual inquiry doesn’t end the research; it sets up the next round.
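If your sticky notes are digital, the session-count heuristic above ("a pattern that shows up in three of five sessions is a pattern") is easy to make mechanical. A minimal Python sketch, with invented observations, session IDs, and cluster labels purely for illustration:

```python
from collections import defaultdict

# Hypothetical field notes: each observation is tagged with the session it
# came from and the cluster label it received during the affinity-diagram step.
observations = [
    ("s1", "export-to-excel workaround"),
    ("s2", "export-to-excel workaround"),
    ("s4", "export-to-excel workaround"),
    ("s1", "printed cheatsheet"),
    ("s3", "printed cheatsheet"),
    ("s5", "report re-run after interruption"),
]

# Count how many *distinct sessions* each cluster appears in. Repeats within
# one session are still a single anecdote, not a pattern.
sessions_per_cluster = defaultdict(set)
for session, cluster in observations:
    sessions_per_cluster[cluster].add(session)

TOTAL_SESSIONS = 5
for cluster, sessions in sorted(sessions_per_cluster.items()):
    status = "pattern" if len(sessions) >= 3 else "hypothesis"
    print(f"{cluster}: {len(sessions)}/{TOTAL_SESSIONS} sessions -> {status}")
```

The only point of the sketch is the counting rule: a cluster becomes a pattern when it recurs across distinct sessions, not when it recurs several times within one.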
- Confirmation bias: The researcher arrives with a hypothesis and notices only the evidence that supports it. Mitigate by writing your assumptions down before the session and assigning a teammate to actively look for evidence against them.
- Observer effect: The customer changes their behavior because they know they’re being watched, especially in the first ten minutes. Mitigate by asking them to start with a routine warm-up task before getting to the work that matters, and by stating explicitly in your framing statement that there’s no right or wrong way.
- Leading interpretation: When voicing your interpretation, you can phrase it in a way that traps the customer into agreeing. Mitigate by phrasing interpretations as questions (“It looked like X — is that right?”) rather than statements, and by inviting correction explicitly.
- Sample bias: Customers willing to host a contextual inquiry skew toward those who are organized, articulate, and like to demonstrate competence. The customers who would benefit most from your product may be the least likely to volunteer. Mitigate by recruiting through multiple channels and noting which segments you couldn’t reach.
- Apprentice yourself to the customer: Learn how they are currently solving their problems without your product. — @TriKro
- AI cannot replace presence: Contextual inquiry is one of the research methods least disrupted by AI, and that is its strength. The entire point is to be physically present where the problem occurs. Use AI to accelerate your analysis after the session, not to replace your presence during it.
Learn more
Case Studies
Fast Food Milkshake example by Clayton Christensen
Lucky Iron Fish
The founder arrived at his solution after immersing himself in the community he was solving a problem for and studying their culture.
Moen
An Intro to Contextual Inquiries: Case Study Included
Shopping Cart
A perfect example of how you can build better products when you stay in contact with customers and build the product in parallel with customer development.
LEGO
Facing near-bankruptcy in 2003, LEGO sent researchers into children’s homes to observe how they actually played. The pivotal insight came from an 11-year-old German boy whose worn-down sneakers revealed that children seek deep mastery experiences, not instant gratification. This ethnographic research — not surveys — convinced LEGO to refocus on complex brick sets, and by 2014 it had become the world’s largest toy company by revenue.
Guiseppe Getto
Contextual Inquiry With AI (2025): Explores how AI tools are changing contextual inquiry field research, including new opportunities for augmenting observations and probing user workflows, while cautioning that AI cannot replace the human ability to understand subtle contextual factors in natural environments.
Dscout
Sneak a Peek Behind the Scenes with Contextual Inquiry: Dscout documents how mobile-first contextual inquiry tools enable remote field research, allowing researchers to observe participants in their natural settings via video diaries and in-context responses, scaling the traditionally labor-intensive method.
Intuit
Since the launch of Quicken in 1983, Intuit has run a “Follow Me Home” program where employees observe customers using QuickBooks and TurboTax in their actual homes and offices. The mantra is “look to be surprised” — the program is an extensive, ongoing part of how Intuit’s product teams uncover workflow interruptions that interviews alone would miss.
Further reading
- Beyer, H. & Holtzblatt, K. (1997). Contextual Design: Defining Customer-Centered Systems. San Francisco, CA: Morgan Kaufmann.
- Holtzblatt, K., Wendell, J. B., & Wood, S. (2004). Rapid Contextual Design: A How-to Guide to Key Techniques for User-Centered Design. San Francisco, CA: Morgan Kaufmann.
- Whiteside, J., Bennett, J., & Holtzblatt, K. (1988). Usability engineering: Our experience and evolution. In M. Helander (Ed.), Handbook of Human-Computer Interaction (pp. 791-817). New York, NY: Elsevier Science Publishing.
- Nielsen Norman Group: Contextual Inquiry — A UX Research Method
- InContext Design — About
- Intuit Developer Blog: Why every company should be doing a Follow Me Home (2021-01-21)