Card Sorting - Pain Points

In Brief
Card sorting for pain points is a simple prioritization exercise: you create approximately 10 cards, each representing a customer problem, pain, or unmet need, and ask customers to rank them by importance. While ranking, customers explain their reasoning, revealing not just what matters most but why. The exercise challenges founder assumptions about which problems are most pressing.
For the feature-focused variant of this method, see Card Sorting - Features.
Common Use Case
You have already talked to customers and surfaced several distinct pain points your product could address. Before committing engineering effort to any of them, you need to know which problems matter most to customers — not which ones you find most interesting. You want a forced-choice exercise that surfaces stated priorities rather than letting customers rate everything as “important,” and you want to hear the reasoning behind their choices.
Helps Answer
- Which customer problems are most important to solve?
- How do different sets of customers prioritize the same set of problems?
- Are the problems I think are most important actually the ones customers care about most?
- Are there problems I have not considered that belong on the list?
- How much do customers care about this problem relative to other problems?
Description
Card sorting for pain points is a forced-choice prioritization technique. Most customers, when asked in an interview, will say that all of their problems are important. Card sorting makes them choose. By physically arranging cards from most to least important, customers must make trade-offs that surface stated priorities they would not name in open interviews.
This is a different method from the better-known information-architecture variant of card sorting (Donna Spencer, Nielsen Norman Group), which asks participants to group and label cards to surface category-level mental models. Pain-point card sorting borrows the moderated think-aloud mechanic but applies it to a ranking task on a researcher-supplied set of problems. The two share an elicitation pattern; they answer different questions.
The method is primarily evaluative — you supply the cards, customers rank a fixed set — with a generative tail through the 2-3 blank cards customers can add. The conversation during the sort is where most of the qualitative learning happens. As customers move cards around, they think out loud about why one problem matters more than another. They describe situations, frequencies, consequences, and workarounds. That narration is qualitative data the ranking alone cannot give you.
Including blank cards is a common practice for the generative tail. Customers will sometimes add problems you did not think of, and when multiple customers independently add the same problem, you have discovered a blind spot worth taking seriously.
How to
Prep
- Create the pain point cards. Write approximately 10 cards, each describing one customer problem, pain, or unmet need in clear, jargon-free language. Use the customer’s language, not yours. Each card should describe the problem itself, not a solution. For example: “Spending too much time on manual data entry” rather than “Need an automation tool.” Include 2-3 blank cards so customers can add problems you missed. A minimal deck-and-shuffle sketch follows this Prep list.
- Recruit 8-12 target customers. Card sorting only produces useful patterns when you can compare across multiple sessions. Saturation in qualitative research within a single segment typically requires 8-12 sessions (Guest, Bunce, Johnson 2006); 5-6 is a defensible minimum for early validation. If you suspect multiple personas, recruit at least 3 from each. See Customer Discovery Interviews for sourcing and outreach guidance.
- Plan session mechanics. Decide whether sessions are in-person or remote, how you will record (video, audio, or notes-only — get explicit consent before recording), and whether you will compensate participants. Customer compensation typically ranges from a $25-100 gift card for B2C consumers to a $100-300 honorarium for B2B specialists, varying by time commitment and seniority. Document consent and compensation terms in your recruiting outreach.
- Prepare the session script. Decide how you will introduce the exercise, what follow-up questions you will ask, and how you will record the final ranking and key quotes. Pilot the script with one customer or a colleague before running it for real.
- Pilot the deck. Run one or two pilot sessions before committing to a full round. Pilots almost always surface a card that is too vague, a duplicate, or a problem you did not realize was solution-flavored.
- Revise. If the pilot sessions surface problems, revise the deck and script. Consider using Comprehension Tests to revise the pain point descriptions.
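If you are running a dozen sessions, a small script can keep the deck and the per-session shuffle honest. Below is a minimal sketch: only the data-entry card comes from the example above, the other card texts are hypothetical placeholders, and the reproducible-seed approach is one option among several.
```python
import random

# Hypothetical deck: ~10 problem statements in the customer's language.
# Only the first card comes from the example above; the rest are
# placeholders to replace with your own discovery findings.
PAIN_POINT_CARDS = [
    "Spending too much time on manual data entry",
    "Losing track of customer follow-ups",      # placeholder
    "Reports take too long to pull together",   # placeholder
]
BLANK_CARD_COUNT = 3  # blanks the participant fills in themselves

def deal_session_deck(cards: list[str], session_seed: int) -> list[str]:
    """Return a freshly shuffled copy of the deck for one session.

    Seeding per session keeps each participant's card order reproducible
    in your notes while still varying it between participants, which
    counters anchoring and order effects.
    """
    deck = list(cards)
    random.Random(session_seed).shuffle(deck)
    return deck + ["(blank)"] * BLANK_CARD_COUNT

# Example: a distinct, reproducible order for each of 12 sessions.
for session_id in range(1, 13):
    print(f"Session {session_id}: {deal_session_deck(PAIN_POINT_CARDS, session_id)}")
```
Recording the dealt order in your session notes also lets you check afterwards whether early-position cards ranked systematically higher.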
Execution
- Shuffle and present the cards. Lay all cards face-up on a table (or on a digital whiteboard). Present them in random order to avoid anchoring the customer’s thinking. Explain that there are no right or wrong answers and that you want to understand their perspective.
- Ask the customer to rank by importance. Instruct the customer to arrange the cards from most important (top) to least important (bottom). Encourage them to think aloud as they sort. Ask them to fill in any blank cards with problems that are missing from the set.
- Probe the reasoning. As the customer sorts, ask follow-up questions: “Why did you put that one higher than this one?” “How often do you experience this problem?” “What happens when this problem occurs?” “How do you currently deal with this?” Do not challenge or correct their rankings.
- Take notes. Note clusters and gaps. Pay attention to cards that customers group together (“these are all related”) and cards they set aside as irrelevant (“this is not really a problem for me”). Both patterns are informative. Also note any blank card additions.
- Optional — run a $100 allocation follow-up. After the ranking exercise, give the customer a hypothetical $100 and ask them to allocate it across the cards based on how much they would pay to have each problem solved. This adds a quantitative dimension and reveals intensity of preference. A customer might rank two problems back-to-back but allocate $60 to one and $5 to the other, a gap the ranking alone would hide. The technique is also called the “$100 test” or “Buy a Feature” and is most clearly documented in Luke Hohmann’s Innovation Games (Hohmann, 2007); its measurement lineage in marketing research is “constant-sum scaling.” In remote sessions, ask the customer to type allocations next to each card and read them aloud — the missing physical bills change the ritual but not the data. A small sketch for validating these allocations follows this list.
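If you run the $100 follow-up, it helps to validate each allocation at capture time and flag flat allocations for the analysis step. A minimal sketch, assuming allocations are recorded as a card-to-dollars mapping; the 15% dominance cutoff for “flat” is an arbitrary assumption to tune to your deck size, not an established standard.
```python
def to_shares(allocation: dict[str, float], budget: float = 100.0) -> dict[str, float]:
    """Validate a constant-sum ($100 test) allocation and convert to shares."""
    total = sum(allocation.values())
    if abs(total - budget) > 0.01:
        raise ValueError(f"Allocation sums to {total}, expected {budget}")
    return {card: amount / budget for card, amount in allocation.items()}

def is_flat(allocation: dict[str, float], dominance_threshold: float = 0.15) -> bool:
    """Flag allocations where no single card dominates.

    With ~10 cards an even spread puts ~10% on each card, so the 15%
    threshold is a hypothetical cutoff, not an established standard.
    """
    return max(to_shares(allocation).values()) < dominance_threshold

# Hypothetical session record echoing the $60-vs-$5 example above:
session = {"Manual data entry": 60, "Slow exports": 5, "Duplicate records": 35}
print(to_shares(session))   # {'Manual data entry': 0.6, 'Slow exports': 0.05, ...}
print(is_flat(session))     # False: one problem clearly dominates
```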
Analysis
- Aggregate rankings across customers. After each session, record the final ranking and key quotes. Once you have 8-12 sessions, compile a rank vector per customer and tally how often each problem placed in the top 3, top 5, and bottom half. Problems that recur in the top 3 across most customers are strong candidates for the core value proposition. A tallying sketch follows this Analysis list.
- Affinity-cluster the reasoning quotes. Pull every customer quote that explained a high-rank choice into a single working doc. Group similar reasons together (this is affinity diagramming). The clusters reveal why certain problems rank high — frequency, severity, blocked workaround, money lost — and that why is more actionable than the rank itself.
- Look for divergence as a segmentation signal. Problems with high variance in ranking (some customers rank them first, others last) may indicate distinct customer segments with different needs. Look at what those split groups have in common — role, company size, vertical, life stage — and form a segmentation hypothesis to test.
- Catalog blank-card additions. Problems added on blank cards by multiple customers independently are blind spots worth taking seriously. Group similar additions before deciding whether to add them to the deck on the next round.
- Watch for flat $100 allocations. If the $100 allocation is very spread out (roughly equal amounts on every card), the customer may not have a strong pain point at all — which is itself an important finding. A flat allocation with a high-ranked top problem usually means the top problem is real but mild.
- Note what customers set aside. Problems that customers set aside as irrelevant may reflect your assumptions rather than customer reality. Add them to a running list of cards to drop or rewrite.
- Treat the data as directional. With 8-12 customers, individual outliers can skew aggregate rankings. Look for consistent patterns rather than precise orderings, and treat the data as directional rather than statistically significant — saturation, not significance, is the bar this method clears.
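The tallying and divergence checks above are simple to script once rankings are compiled. A minimal sketch, assuming one rank-ordered list per participant; the participant IDs and card names are hypothetical placeholders.
```python
from collections import defaultdict
from statistics import pstdev

# Hypothetical compiled rankings: one list per participant, most
# important problem first. Replace with your own session records.
rankings = {
    "p01": ["data entry", "follow-ups", "exports", "duplicates"],
    "p02": ["data entry", "exports", "duplicates", "follow-ups"],
    "p03": ["follow-ups", "data entry", "duplicates", "exports"],
}

def tally_top_n(rankings: dict[str, list[str]], n: int) -> dict[str, int]:
    """Count how often each problem lands in a participant's top n."""
    counts: dict[str, int] = defaultdict(int)
    for order in rankings.values():
        for card in order[:n]:
            counts[card] += 1
    return dict(counts)

def rank_spread(rankings: dict[str, list[str]]) -> dict[str, float]:
    """Standard deviation of each card's position across participants.

    High spread means divergence: some participants rank the card near
    the top and others near the bottom, a possible segmentation signal.
    """
    positions: dict[str, list[int]] = defaultdict(list)
    for order in rankings.values():
        for pos, card in enumerate(order, start=1):
            positions[card].append(pos)
    return {card: round(pstdev(p), 2) for card, p in positions.items()}

print(tally_top_n(rankings, n=2))  # e.g. {'data entry': 3, 'follow-ups': 2, 'exports': 1}
print(rank_spread(rankings))
```
With a real deck, sort the rank_spread output descending and compare the high-spread cards against participant attributes (role, company size, vertical) to form the segmentation hypothesis.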
Potential Biases
- Framing bias: The way you describe each problem on a card influences how customers perceive its importance. Use neutral, descriptive language and avoid emotionally charged framing.
- Coverage bias: The cards you choose to include define the conversation. If you leave out an important problem, customers may not think to add it even with blank cards. Conduct exploratory interviews before creating the card set, and always include 2-3 blanks.
- Anchoring and order effects: The first few cards a customer reads can anchor their thinking, and the wording of the highest-stakes card can shift the relative rankings of everything else. Fully randomize card order for every session rather than rotating through a few fixed orders.
- Availability bias: Customers will overweight problems they experienced this week and underweight problems they experience less often but with greater consequences. Probe for frequency separately from rank (“how often does this happen?”) so the rank is not driven entirely by recency.
- Social desirability and interviewer effects: Customers may rank problems they think they “should” care about (security, privacy), or that they think you care about, higher than problems they actually struggle with. The $100 allocation exercise partially counteracts this; staying neutral when probing — no nodding, no “great” — counteracts the rest.
- Small sample size: With 8-12 customers in a single segment, one or two outliers can still skew aggregate rankings. As noted under Analysis, read the results for consistent patterns rather than precise orderings.
Learn more
Case Studies
Shreyas Doshi’s Customer Problem Stack Rank (Stripe)
Shreyas Doshi, who led product on Stripe Atlas and Stripe Capital, formalized “Customer Problem Stack Ranking” (CPSR) as a regular practice for product teams: rather than ask customers if they like an idea, give them the candidate problem alongside other problems they face and ask them to rank. The framing is identical to pain-point card sorting at scale. The OpinionX writeup of Doshi’s approach documents how Stripe and other teams use it before committing engineering effort to a problem.
OpinionX (Daniel Kyne)
The OpinionX founding team ran a customer-problem stack rank against 600 target customers, expecting their original value proposition to surface as a top concern. It came in dead last out of 45 problem statements. But five of the top seven highest-ranking problems were close adjacencies to what they were building. Within a week of the experiment they had rewritten their landing page and onboarding flow around the five high-ranking problems, signed multiple paying customers, and tripled their landing-page-to-trial conversion. The case is a clean example of pain-point ranking changing product direction in days rather than months.
Adobe Dreamweaver — the $100 test
In the late 1990s the Dreamweaver product team gave members of their customer advisory board an imaginary stack of $100 bills and asked them to allocate the money across a list of candidate features. The forced allocation surfaced which features customers genuinely valued versus those they merely said they wanted, and the results were used to define the product’s minimal viable feature set. This is the same prioritization mechanic used in the $100 allocation follow-up to a pain-point card sort; it was popularized in product-management practice by Karen Catlin’s retelling of the Dreamweaver story and codified in Luke Hohmann’s Innovation Games as “Buy a Feature.”
Further reading
- Luke Hohmann — Innovation Games: Buy a Feature
- Greg Guest, Arwen Bunce, Laura Johnson — How Many Interviews Are Enough? An Experiment with Data Saturation and Variability (Field Methods, 2006)
- Donna Spencer — Card Sorting: Designing Usable Categories (Rosenfeld Media)
- Nielsen Norman Group — Card Sorting: Uncover Users’ Mental Models
- Nielsen Norman Group — Three Levels of Pain Points in Customer Experience
- Alex Osterwalder — How Card Sorting Can Help You Understand User Priorities (Strategyzer)
- Card Sorting — Interaction Design Foundation