Prioritization Games - Card Sorting

A customer arranging labelled feature cards in a vertical ranking on a table while an interviewer takes notes

In Brief

Card Sorting for features is a prioritization exercise where you write proposed features or product attributes on individual cards and ask customers to rank them in order of importance. The exercise forces participants to make explicit trade-offs and, crucially, to articulate why they rank features the way they do. The resulting conversations reveal the reasoning behind customer preferences, not just the preferences themselves.

This page covers the feature-prioritization variant of card sorting. The original card sorting technique is a user-experience method for organizing content into categories — defined by Donna Spencer and Todd Warfel as “a user-centered design method for increasing a system’s findability.” Practitioners have since adapted the same physical mechanic to rank features by value to the customer, which is what this method does. Card Sorting for features is not one of Luke Hohmann’s twelve Innovation Games, but it sits naturally alongside them in this section because it serves the same decision shape: ranking candidate features.

Common Use Case

You have a list of eight to twelve candidate features and you can spend an hour with each of three to five target customers. You want to know which features each customer ranks highest, but more importantly, you want to hear them think out loud about why one feature beats another. The output is a ranked list per session plus the verbal reasoning that explains the ranking — not just preferences, but the decision logic behind them.

Helps Answer

  • Which features do customers value most?
  • What is the minimum feature set for launch?
  • Why do customers prefer certain features over others?
  • Are there segments with different feature priorities?
  • What language do customers use to describe features?
Time Commitment

Preparation takes 30 to 60 minutes to create the feature cards. Each session runs 30 to 60 minutes. Plan for three to five sessions with different participants to see patterns.

Materials

Index cards or sticky notes, a marker, and a table to work on. There is no cost if you have basic office supplies. For a remote variant, use a digital whiteboard tool such as Miro or FigJam.

Description

Card Sorting for features works by making the abstract concrete. Instead of asking “what features do you want?” — which produces vague wish lists — you put specific options in front of customers and ask them to choose. The physical act of arranging cards creates a tangible artifact that both you and the customer can discuss, point to, and rearrange.

The key insight is that the ranking itself matters less than the conversation it provokes. When a customer hesitates between two cards, or moves one card above another and then explains why, you are hearing the decision logic that drives real purchasing behavior. Spencer’s Card Sorting: Designing Usable Categories codifies the broader card-sorting tradition this method draws from, including the practice of asking participants to narrate their thinking as they sort.

This method is related to Card Sorting - Pain Points in the Generative Market Research section, which uses the same technique to prioritize problems rather than features. Use the Pain Points variant when you are still exploring the problem space; use this Features variant when you have a validated problem and are deciding what to build.

Optional: $100 Allocation Follow-Up

After ranking, you can ask participants to distribute an imaginary $100 across their top features. This adds a weighting dimension — a customer might rank Feature A first but allocate $40 to Feature A and $35 to Feature B, revealing that the top two are nearly equal in importance while the rest are far behind.

How to

Prep

1. Create feature cards.

Write 8 to 12 proposed features or product attributes on individual cards. Use clear, jargon-free language. Each card should describe one feature in 5 to 10 words, with a brief clarifying sentence if needed. Spencer’s guidance on writing cards for the underlying technique applies here: keep the wording at the customer’s level of abstraction, not your team’s.

2. Include blank cards.

Give participants 2 to 3 blank cards so they can write in features you did not think of. This is one of the most valuable parts of the exercise — the write-ins are where you discover the gaps in your own feature list.

3. Recruit three to five target customers.

A small qualitative sample is enough to surface patterns in the reasoning. Card sorting for information architecture sometimes uses larger samples for quantitative cluster analysis, but that is a different method with a different output. Here you are after the verbalized decision logic, which converges quickly across a handful of representative customers.

Execution

1. Ask the participant to rank.

Place all cards on a table and ask the participant to arrange them from most important to least important. Do not provide further instructions — observe how they approach the task. Spencer and Warfel describe the same hands-off setup for the underlying technique: the goal is to see how the participant makes sense of the set, not to coach them into your preferred answer.

2. Probe the reasoning while they sort.

As they sort, ask open-ended questions: “Tell me why that one went to the top.” “You hesitated between these two — what were you thinking?” “What would happen if you could not have the bottom three?” The point is to capture the verbalized decision logic, not just the final order.

3. Optional: run the $100 allocation.

After ranking, ask participants to distribute $100 of play money across their top 5 to 7 features. This reveals intensity of preference, not just order. A participant who allocates $60 to their top card and $5 each to the next eight is telling you something different from one who spreads $100 evenly across the top five.
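Reading the allocation afterward can be sketched in a few lines: normalize each participant's dollars into weights and flag near-ties between the top picks. The feature names and amounts below are invented for illustration.

```python
# Hypothetical $100 allocation from one participant (feature names invented).
allocation = {
    "Offline mode": 40,
    "Bulk export": 35,
    "Custom reports": 15,
    "Dark theme": 5,
    "API access": 5,
}

def preference_weights(allocation):
    """Normalize a play-money allocation into weights, sorted descending."""
    total = sum(allocation.values())
    return sorted(
        ((feature, dollars / total) for feature, dollars in allocation.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

weights = preference_weights(allocation)
(top_feature, top_w), (second_feature, second_w) = weights[0], weights[1]
# A gap under 10 points between the top two suggests the ranking alone
# overstates the difference in importance.
near_tie = (top_w - second_w) < 0.10
```

Here the $40/$35 split confirms what the example in the Description section describes: the top two cards are nearly equal, whatever the ordinal ranking says.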

4. Document results.

Photograph the final arrangement, record key quotes, and note which blank cards were added. The photograph is the artifact you will compare across sessions; the quotes are the evidence behind the ranking.

Analysis

1. Look for convergence across sessions.

Look for features that consistently land in the top 3 to 5 across multiple sessions. Convergent picks across three to five customers are strong launch candidates. A feature that lands in the top group for every participant is a stronger signal than one that a single participant ranks first by a wide margin.
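The convergence check is mechanical once each session's ranking is recorded as an ordered list. A minimal sketch, with hypothetical feature names:

```python
from collections import Counter

# Hypothetical rankings from three sessions, most important first.
sessions = [
    ["Offline mode", "Bulk export", "Custom reports", "Dark theme"],
    ["Bulk export", "Offline mode", "Dark theme", "Custom reports"],
    ["Offline mode", "Custom reports", "Bulk export", "Dark theme"],
]

def top_k_counts(sessions, k=3):
    """Count how often each feature lands in a participant's top k."""
    counts = Counter()
    for ranking in sessions:
        counts.update(ranking[:k])
    return counts

counts = top_k_counts(sessions)
# Features in every participant's top 3 are the convergent launch candidates.
convergent = sorted(f for f, c in counts.items() if c == len(sessions))
```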

2. Investigate high-variance features.

Pay attention to features that show high variance — some participants rank them first while others rank them last. High variance often indicates different customer segments with different needs. Pull the verbalization data for those features specifically; the explanation usually reveals the segmentation.
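The variance check can be sketched the same way, scoring each feature by the spread of its rank positions across sessions. Data here is again invented; in the example, one feature ranks first for half the participants and last for the rest.

```python
from statistics import pstdev

# Hypothetical rankings from four sessions, most important first.
sessions = [
    ["API access", "Offline mode", "Bulk export"],
    ["API access", "Bulk export", "Offline mode"],
    ["Offline mode", "Bulk export", "API access"],
    ["Bulk export", "Offline mode", "API access"],
]

def rank_spread(sessions):
    """Std-dev of each feature's rank position; 0 means everyone agrees."""
    positions = {}
    for ranking in sessions:
        for pos, feature in enumerate(ranking, start=1):
            positions.setdefault(feature, []).append(pos)
    return {feature: pstdev(p) for feature, p in positions.items()}

spread = rank_spread(sessions)
# The most polarizing feature is the one to segment on.
most_polarizing = max(spread, key=spread.get)
```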

3. Read the blank cards.

Blank cards that participants add are especially valuable. If multiple participants independently add the same missing feature, it is a strong signal that your feature list has a gap. A single write-in is interesting; convergent write-ins across participants are an instruction.

4. Triangulate against adjacent methods.

If you have run a Buy a Feature session or customer interviews on the same feature set, compare the rankings. Features that win in card sorting but lose in Buy a Feature are features customers say they want but would not pay for; features that win in both are roadmap candidates.
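The triangulation reduces to set comparison once you have the top picks from both methods. A sketch with hypothetical feature names:

```python
# Hypothetical top picks from the two methods on the same feature set.
card_sort_top = {"Offline mode", "Bulk export", "Custom reports"}
buy_a_feature_top = {"Offline mode", "Custom reports", "API access"}

# Wins in both methods: roadmap candidates.
roadmap_candidates = card_sort_top & buy_a_feature_top
# Wins in card sorting only: stated preference customers would not pay for.
stated_not_funded = card_sort_top - buy_a_feature_top
```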

Biases & Tips

  • Recency bias: Customers may prioritize features tied to a problem they hit this morning rather than their most important needs overall. Ask explicitly about the past month, not just today, when probing reasoning.
  • Anchoring to your list: The features you include on cards frame the conversation. If you omit a category entirely, it will not come up unless participants use blank cards. Pre-test your card set with a colleague to check for gaps before the first session.
  • Social desirability: In group settings, participants may rank “responsible” features (security, accessibility) higher than they would privately. Run individual sessions, or have participants do an initial silent ranking before any group discussion.
  • Confusion with UX card sorting: Be clear with participants and stakeholders that this is a prioritization exercise, not an information architecture exercise. The goal is to rank by importance, not to group by category.
  • Facilitator validation: Nodding, smiling, or echoing a participant’s choice signals approval and biases the next move. Stay neutral while they sort; save the reactions for the debrief.

Case Studies

Optimal Workshop

Faced with a long backlog of product-improvement ideas, Optimal Workshop’s product team ran a closed feature-prioritization card sort using their own OptimalSort tool. Thirty candidate features were assembled from Customer Success and User Research input, and the whole team ranked each card as “Most important,” “Very important,” or “Important” (with “Not sure” / “No opinion” escape options). A weighted scoring formula (4/2/1) yielded 15 priority features that drove the next planning cycle — a worked example of feature-importance card sorting (distinct from the IA/menu-structure variant).
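The scoring step can be sketched as follows, assuming the 4/2/1 formula maps to the three labels in order and that “Not sure” / “No opinion” votes score zero (the mapping is an assumption, and the features and votes below are invented):

```python
# Assumed mapping of the 4/2/1 formula to the three importance labels.
WEIGHTS = {"Most important": 4, "Very important": 2, "Important": 1}

# Hypothetical votes: feature -> one label per team member.
votes = {
    "Saved study templates": ["Most important", "Most important", "Very important"],
    "CSV export": ["Important", "Very important", "Not sure"],
}

def weighted_score(labels):
    """Sum label weights; escape-option votes contribute nothing."""
    return sum(WEIGHTS.get(label, 0) for label in labels)

scores = {feature: weighted_score(labels) for feature, labels in votes.items()}
```

Sorting features by this score and cutting at a threshold is what produced the 15 priority features in the case study.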
