Product Prototyping - Clickable Prototype

In Brief
A clickable prototype is an interactive digital representation of a product where screens are interconnected through click targets — buttons, links, and hotspots — so users navigate through tasks as if using a real application. No code is written; the interactions are simulated by linking static screens. Despite the simplicity, many users cannot tell a well-made clickable prototype from a real product, which makes it a strong tool for usability testing, stakeholder alignment, and investor demos.
A clickable prototype is the foundation for real usability testing: you can observe whether users complete tasks, measure how long they take, and identify where they get lost. It is a clear step up from a paper prototype, which requires a human facilitator to simulate the interface, but a clear step down from a single-feature MVP or mash-up, which run on real backends with real data.
Common Use Case
You have a candidate UX flow that already passed at low fidelity — paper prototype, storyboard, or solution interview — and now the question is whether the actual interaction design holds up under unsupervised use. You want measurable task-completion data and observable confusion points before any engineering investment, and you can spend a few days in a design tool plus 5 to 8 user sessions to get the read.
Helps Answer
- Can users complete key tasks without guidance?
- How long does it take users to accomplish core tasks?
- Where do users get confused or take wrong paths?
- Is the navigation structure intuitive?
- Does the visual hierarchy guide attention correctly?
- Do users understand what the product does from the first screen?
Description
A clickable prototype sits at a useful point on the prototyping spectrum: high enough fidelity to support genuine usability testing but low enough that no engineering is required. That is what makes it the primary tool for validating that a product’s interaction design works before writing code.
A clickable prototype should not be the first thing you build. If the basic concept and flow have not been validated through paper prototypes, storyboards, or solution interviews, you risk spending days polishing an interaction design for a concept that does not work. A clickable prototype answers “can users use this?” — not “should we build this?”
What a Clickable Prototype Is Not
- Not a working product. It has no backend, no database, no real data. Users click through a predetermined set of screens.
- Not a code prototype. No programming is involved. Everything is created in a design tool.
- Not comprehensive. A good clickable prototype covers 1 to 3 core tasks, not every possible interaction. Unconnected areas should show a “this area is not part of the prototype” message.
When to Use
Use a clickable prototype when:
- You have validated the basic concept through lower-fidelity methods and need to test detailed interaction design.
- You need to run usability testing with measurable task-completion metrics.
- You need to demonstrate the product to stakeholders, investors, or partners who need to “see it working.”
- You want to test specific interaction patterns (navigation, onboarding flows, checkout) before engineering invests in building them.
How to
Prep
- Define the tasks to test. Choose 1 to 3 core tasks that represent the product’s primary value. Each task needs a clear start point and end point — for example, “find a product, add it to cart, and check out” or “create an account and set up a project.”
- Map the screens. For each task, list every screen the user will see, including error states, empty states, and confirmation screens. A typical task takes 5 to 10 screens.
- Design the screens. Build each screen in your design tool. Start with the happy path, then add error states and edge cases. Use realistic content — placeholder text like “Lorem ipsum” confuses users and invalidates results. AI screen generators (v0, Galileo AI, Figma AI) can produce a first pass from a text description in minutes; treat the output as a draft to refine, not as final.
- Connect the screens. Add click targets (hotspots) that link screens. Every clickable element either leads somewhere or shows a “not available in this prototype” indicator. Dead clicks frustrate users and corrupt your test data.
- Pilot the prototype yourself. Walk through every task path twice. Check for dead ends, missing screens, and broken links. Have a colleague test it cold to catch assumptions you missed.
- Recruit and schedule participants. Recruit 5 to 8 users from the target segment. Match technical comfort level to the real audience — tech-savvy participants navigate prototypes more easily than typical users and will give you a falsely optimistic read.
- Write the moderator script. Standardize the task introduction, the think-aloud prompt, and your responses to user questions. Decide in advance what you will and will not help with — typically nothing.
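The "connect the screens" and "pilot the prototype" steps amount to checking a small graph: every screen should be reachable from the start, and only intended endpoints should lack outgoing hotspots. A minimal sketch of that check, with an illustrative link map (the screen names are assumptions, not an export from any real design tool):

```python
# Sketch: model the prototype's screen graph and flag problems before piloting.
# The link map below is illustrative; real tools don't export this format.

from collections import deque

# screen -> screens its hotspots link to
links = {
    "home": ["search", "account"],
    "search": ["results"],
    "results": ["product", "search"],
    "product": ["cart", "results"],
    "cart": ["checkout", "product"],
    "checkout": ["confirmation"],
    "confirmation": [],   # intended end of the task path
    "account": [],        # dead end with no way back -> fix before testing
}

def unreachable(start: str) -> set:
    """Screens that can never be reached from the start screen."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in links.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return set(links) - seen

def dead_ends(terminal: set) -> list:
    """Screens with no outgoing hotspots that are not intended endpoints."""
    return [s for s, out in links.items() if not out and s not in terminal]

print(unreachable("home"))           # set() -> every screen is linked in
print(dead_ends({"confirmation"}))   # ['account'] -> needs a back link
```

Running this before the pilot walkthrough catches the mechanical breakages (orphan screens, missing back links) so the human pilot can focus on content and flow.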
Execution
- Run the sessions one at a time. With 5 to 8 participants from the target segment, give each person the task instructions and observe them using the prototype. Do not help, hint, or react. Their struggle is your data.
- Use a think-aloud protocol. Ask participants to narrate what they are looking at, what they expect to happen, and what they are about to click. Record verbal hesitation (“hmm, I think I’d click…”) because it surfaces uncertainty that successful clicks hide.
- Capture the click path and the failure points. Most prototyping tools record click data automatically. Note where users hesitate, where they backtrack, and where they click outside the connected hotspots — those are the design defects.
- Read the moderator script verbatim. A consistent task introduction across sessions removes facilitator-induced variance. Do not improvise.
- Watch for misidentification. If a participant describes the product as something other than what it is, the value proposition has not landed. Note the words they used.
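The click-path capture above reduces to two signals per session: dead clicks (taps on unconnected elements) and backtracks (returns to an already-visited screen). A sketch of how to extract both from a session log, assuming you can export per-click records of screen, element, and whether the hotspot was connected; the field layout and events are illustrative, not any real tool's format:

```python
# Sketch of a per-session click log; events are illustrative.
session = [
    # (screen shown, element clicked, was the click connected to a hotspot?)
    ("home", "search_bar", True),
    ("search", "filter_icon", False),   # dead click: looks tappable, isn't linked
    ("search", "results_list", True),
    ("results", "back", True),
    ("search", "results_list", True),
    ("results", "product_card", True),
]

# Dead clicks are direct evidence of a misleading affordance.
dead_clicks = [(screen, elem) for screen, elem, linked in session if not linked]

# Compress consecutive repeats into a screen path, then flag revisits (backtracks).
path = []
for screen, _, _ in session:
    if not path or path[-1] != screen:
        path.append(screen)
backtracks = [s for i, s in enumerate(path) if s in path[:i]]

print(dead_clicks)   # [('search', 'filter_icon')]
print(backtracks)    # ['search', 'results']
```

Tally these per screen across all sessions; the screens that accumulate dead clicks and backtracks are the design defects to fix first.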
Analysis
- Calculate task-completion rate per task. Percentage of users who complete the task without help. Below 80% indicates a real usability problem; below 50% indicates a flow-level problem that paper would have caught.
- Plot time on task and first-click accuracy. Large variance between users may indicate inconsistent mental models. First-click accuracy above 70% correlates strongly with overall task success — below that, the call-to-action is wrong, not the task.
- Cluster the path deviations. When 3 of 5 users click the same wrong button, that is a design defect, not a user defect. Patterns matter more than counts at this sample size — for confidence intervals on small-n usability metrics, ask an AI assistant to compute Bayesian credible intervals rather than frequentist significance tests.
- Synthesize the verbal feedback. Group hesitation moments by screen, by widget, and by task phase. Hesitation that clusters on a screen means the screen is the problem. Hesitation that clusters on a phase (“checkout in general feels off”) means the flow is the problem.
- Decide the next move. One of three things should happen: tasks pass cleanly → polish the flow and run quantitative validation in Usability Testing; tasks fail at specific decision points → redesign those screens and re-test the affected paths; tasks fail across the board → return to Paper Prototyping and reassess the concept itself.
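The analysis metrics above can be computed in a few lines, including the Bayesian credible interval on the completion rate: with a uniform Beta(1, 1) prior, observing k successes in n sessions gives a Beta(1 + k, 1 + n - k) posterior, which can be sampled with the standard library alone. The session results below are illustrative, not real data:

```python
# Sketch: pass/fail metrics for a 5-user test, plus a Bayesian credible interval
# on the completion rate via Monte Carlo sampling from the Beta posterior.
# All session results below are made up for illustration.

import random
import statistics

completed = [True, True, False, True, True]            # 4 of 5 finished unaided
first_click_correct = [True, False, True, True, True]
time_on_task = [74, 121, 240, 88, 95]                  # seconds

completion_rate = sum(completed) / len(completed)
first_click_acc = sum(first_click_correct) / len(first_click_correct)

print(f"completion rate:  {completion_rate:.0%}")      # 80%
print(f"first-click acc.: {first_click_acc:.0%}")      # 80%
print(f"median time:      {statistics.median(time_on_task)}s")

# Beta(1, 1) prior + 4 successes / 1 failure -> Beta(5, 2) posterior.
random.seed(0)
k, n = sum(completed), len(completed)
samples = sorted(random.betavariate(1 + k, 1 + n - k) for _ in range(10_000))
lo, hi = samples[250], samples[9750]                   # central 95% interval
print(f"95% credible interval on completion: {lo:.2f}-{hi:.2f}")
```

Note how wide the interval is at n = 5: an observed 80% completion rate is compatible with a true rate well below the pass threshold, which is why five-user results stay directional until a second round confirms them.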
Pitfalls
- Facilitator influence: How you introduce the task or react to user confusion biases the result. Read the moderator script verbatim and do not help users who are stuck. Their struggle is your data.
- Participant sophistication: Tech-savvy participants navigate prototypes more easily than your actual target users. Recruit participants whose technical comfort level matches the target segment, not whoever is convenient.
- Happy path bias: Prototypes tend to work cleanly for the intended flow but break for unexpected behavior. Build realistic error states and edge cases into the prototype before testing.
- Visual fidelity distraction: A polished prototype invites comments on aesthetics rather than usability; a rough prototype gets dismissed as unfinished. Match visual fidelity to the question being tested — wireframe-quality for flow, hi-fi for visual hierarchy.
- Confirmation bias on click data: Click counts can be read to support whatever conclusion the team already wants. Decide pass/fail thresholds before the test, not after.
- Small-sample overconfidence: Five-user tests are directional, not conclusive. Frame conclusions as "early indication" until a second round confirms.
Learn more
Case Studies
Airbnb redesign (2014)
Airbnb’s design team rebuilt the booking flow as a clickable prototype and ran moderated usability tests on the host-onboarding path. The clickable prototype caught a sequencing problem (hosts asked for payment information before they understood the listing fee structure) that the static mockups had not surfaced. The team rerouted the flow before any engineering work began.
Marvel App at the BBC
The BBC used Marvel clickable prototypes to test new news-app navigation patterns with real readers before any iOS/Android engineering investment. The clickable rounds caught two navigation models that tested poorly with older readers, and one of those models had been the team’s preferred direction.
InVision at Adobe
Adobe’s product team used InVision clickable prototypes during the Creative Cloud redesign to validate task flows with creative professionals before a single line of production code was written. The clickable testing surfaced a discoverability problem with the asset-sync feature that became the redesign’s primary fix.
Figma at Microsoft
Microsoft’s design teams routinely use Figma clickable prototypes to test feature flows with internal employees and external customers before committing engineering resources. The shift to clickable testing as a default-before-build step is documented in Microsoft’s design-org case studies.
Further reading
- Steve Krug — Don’t Make Me Think, Revisited (New Riders, 2014)
- Jakob Nielsen — Why You Only Need to Test with 5 Users (Nielsen Norman Group, 2000)
- Jeff Sauro and James R. Lewis — Quantifying the User Experience (Morgan Kaufmann, 2016)
- Jake Knapp, John Zeratsky, and Braden Kowitz — Sprint (Simon & Schuster, 2016)
- Tomer Sharon — Validating Product Ideas Through Lean User Research (Rosenfeld Media, 2016)
- Nielsen Norman Group — How to Conduct a Usability Test on a Mobile Device