Product-Market Fit Survey

[Illustration: a figure inside a dashed circle pulling a gift box with a heart closer, representing product-market fit]

In Brief

The Product-Market Fit Survey is a single multiple-choice question that asks existing users, “How would you feel if you could no longer use this product?” with four answer options: “very disappointed,” “somewhat disappointed,” “not disappointed,” and an N/A option for users who no longer use the product. If 40 percent or more of qualified respondents pick “very disappointed,” that is a strong quantitative signal that the product has achieved product-market fit for that segment. The output is a percentage score you can track over time, paired with follow-up questions that reveal what users value most and where the product is falling short.

Common Use Case

You have a few hundred active users and your investors are asking whether it is time to spend heavily on marketing. Before committing budget, you want a quick quantitative check on whether your core users truly depend on the product or are casually trying it out.

Helps Answer

  • Are enough customers deeply attached to this product to justify scaling?
  • Which customer group would miss this product the most?
  • What is the single benefit users would not want to live without?
  • What is missing for the people who are only somewhat disappointed?

Time

Set up a form on your website: 0.5–1 day. Or prepare a longer survey containing the question below: 2–3 days (see Closed-Ended Survey for more details). AI tools can help segment respondents and auto-analyze open-ended follow-up answers, cutting analysis time significantly.

Cost

The survey itself can be run at little to no cost using free tools like Google Forms or Typeform. Dedicated PMF survey tools like pmfsurvey.com offer standardized templates.

Description

“Product/market fit” is a term coined by Marc Andreessen in his 2007 post The Pmarca Guide to Startups, part 4 — The Only Thing That Matters. Andreessen defined it as “being in a good market with a product that can satisfy that market” and described the lived experience of a startup that has it: customers buy as fast as you can build, usage grows as fast as you can add servers, and money piles up in the checking account. When you don’t have it, customers don’t quite get value, word of mouth is flat, sales cycles drag, and reviews are tepid. PMF is the single thing that matters because almost nothing else — team, market, product polish — can rescue a startup that lacks it, and almost nothing else can sink one that has it.

Having achieved product-market fit, you can safely say that you have proven the value hypothesis: the key assumption that underlies why a customer is likely to use your product. PMF is also a proxy for customer excitement — customers recommend you more frequently when you have it.

In 2009, Sean Ellis turned Andreessen’s qualitative description into a quantitative test. Ellis had helped run early growth at Dropbox, Eventbrite, LogMeIn, and Lookout, and noticed that the startups whose growth efforts compounded shared a common pre-condition: when he surveyed their existing users with a single question — “How would you feel if you could no longer use this product?” — at least 40 percent answered “very disappointed.” Below that threshold, paid acquisition usually struggled to compound; above it, marketing investment tended to scale. Ellis tested the survey across roughly one hundred startups before publishing the 40 percent benchmark in his Startup Pyramid essay.

The 40 percent number is a heuristic, not a hard cutoff. Rahul Vohra’s 2018 First Round Review account of how Superhuman built a “PMF engine” using the Sean Ellis test traces a deliberate climb from 22 percent → 33 percent → 58 percent over a series of release cycles by analyzing the qualitative follow-ups, identifying a “high-expectation customer” segment (a phrase Vohra credits to brand strategist Julie Supan), and aiming the product roadmap squarely at the things that segment said were missing. Brian Balfour’s essay The Never Ending Road To Product Market Fit argues PMF is not a single moment but a series of escalating tests — a leading-indicator survey, a leading-indicator engagement metric, a flattening retention curve, and a “trifecta” of growth, retention, and monetization — each of which must hold before scaling. Dan Olsen’s The Lean Product Playbook frames PMF as the top of a five-layer pyramid (target customer → underserved needs → value proposition → feature set → user experience), with the survey as one diagnostic among several.

This page covers the survey itself: how to qualify respondents, run the question, and read the score.

How to

Prep

  1. Define the user qualification filter. Sean Ellis’s original guidance is to send the survey only to users who have (a) experienced the core of the product, (b) used it at least twice, and (c) used it in the last two weeks. Surveying casual or lapsed users contaminates the score with people who never had a chance to depend on the product. If your analytics platform doesn’t track these conditions, define them as best you can — first-week churn is normal and you don’t want to count it as “not disappointed.” A minimal filtering sketch follows this list.
  2. Decide standalone or embedded, then pick the channel and tool. A standalone single-question survey (in-app intercept, email link, or a dedicated form) keeps the PMF question uncontaminated by surrounding questions. Embedding it in a longer customer survey is fine if you place the PMF question early. Common channels: in-app prompt, email to active users, intercom-style chat. Common tools: Google Forms, Typeform, Sprig, the dedicated pmfsurvey.com template.
  3. Set a target sample size. Aim for at least 30 qualified responses before treating the score as anything more than directional, and 100+ before treating it as a real signal. Below 30, a handful of answers swings the percentage by 10+ points.
  4. Draft the follow-up question battery. Vohra’s account adds three follow-ups that turn the score from a number into a roadmap: What type of person do you think would benefit most from this product? (segmentation cue and Julie Supan’s high-expectation-customer framing), What is the main benefit you receive from this product? (the value proposition in your users’ words), and How can we improve this product for you? (the missing-feature gap, especially valuable from “somewhat disappointed” respondents). Add a fourth — Why did you choose that answer? — so you can read the qualitative pattern under the quantitative score.
  5. Pre-commit to segmentation analysis. Decide before launch which segments you’ll cut by (acquisition channel, plan tier, role, company size, signup cohort). The headline percentage often masks one segment well above 40 percent and another well below — the high-expectation customer segment is typically the one to build for. Choosing the cuts upfront prevents post-hoc data dredging.
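
To make step 1 concrete, here is a minimal qualification-filter sketch in Python. The event-record shape and the “core_action” event name are illustrative assumptions; substitute whatever your analytics export actually contains.

```python
from datetime import datetime, timedelta

def qualified_user_ids(events, core_event="core_action", now=None):
    """Return user IDs meeting the qualification criteria from step 1:
    (a) experienced the core of the product (fired the core event),
    (b) used it at least twice, and
    (c) used it in the last two weeks."""
    now = now or datetime.utcnow()
    two_weeks_ago = now - timedelta(days=14)

    core_uses = {}        # user_id -> lifetime count of core-event uses
    recent_users = set()  # user_ids with any activity in the last two weeks
    for user_id, event, ts in events:  # events: (user_id, event_name, datetime)
        if event == core_event:
            core_uses[user_id] = core_uses.get(user_id, 0) + 1
        if ts >= two_weeks_ago:
            recent_users.add(user_id)

    return {u for u, n in core_uses.items() if n >= 2} & recent_users
```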

Execution

  1. Filter the respondent list. Apply the qualification criteria from prep — you want users who have actually used the core product at least twice in the last two weeks. AI-powered customer analytics tools can help identify which users meet the criteria when your database doesn’t have clean behavioral filters.

  2. Send the survey. The single quantitative question is:

    How would you feel if you could no longer use this product?

    • Very disappointed
    • Somewhat disappointed
    • Not disappointed (it isn’t really that useful)
    • N/A — I no longer use the product

    Followed by the prep-drafted follow-ups (main benefit, type of person who would benefit most, what’s missing, why this answer).

    Send via any of:

    • Email link to qualified users
    • In-app intercept (Sprig, Intercom, custom)
    • Typeform / Google Forms link
    • The dedicated pmfsurvey.com template
  3. Run a tight collection window. Two weeks is usually enough — an open-ended window drags on and lets new behavioral signals contaminate the cohort. Send one reminder; do not send two.

  4. Do not coach respondents. If a user replies asking what “very disappointed” means or which option to pick, do not steer the answer. The point is their unprompted reaction. Note the confusion as a signal that the question itself may need a small rewording for your audience, but do not change it mid-collection.

Analysis

If 40 percent or more of qualified respondents picked “very disappointed,” that is the threshold signal Sean Ellis found correlates with a startup that can scale acquisition and grow. Below 40 percent, the product is not yet a “must have” for enough of the surveyed segment, and pouring marketing budget on top is likely to be inefficient.

The headline number is the start of the analysis, not the end:

  • Read the qualitative follow-ups by quantitative bucket. Compare the language “very disappointed” users use to describe the main benefit with the language “somewhat disappointed” users use. The very-disappointed cohort tells you what the product really is in users’ minds; the somewhat-disappointed cohort tells you what’s missing for them to cross over.
  • Segment. Cut the score by the segments you pre-committed to in prep. A 28 percent overall score that hides a 55 percent segment is a different problem than a uniform 28 percent — the first calls for narrowing the target customer; the second calls for fundamental product change. A scoring sketch follows this list.
  • Identify the high-expectation customer. Per Vohra, the segment that scores highest on “very disappointed” is the segment whose expectations you should design for. Building for the median user dilutes the product; building for the high-expectation segment tends to lift the median anyway.
  • Plan the next roadmap. Vohra recommends a roughly 50/50 split: half the roadmap doubling down on the benefits the very-disappointed cohort named, half closing the gaps the somewhat-disappointed cohort named.
  • Track over time. Re-run quarterly. PMF is not a single moment but a series of measurements that should hold (and rise) as the product and audience evolve, per Brian Balfour’s Never Ending Road framing.
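
As a worked example of the headline-versus-segment point (referenced in the Segment bullet above), here is a minimal scoring sketch in Python. The sample records and segment labels are invented for illustration, and the denominator choice (for instance, whether to drop N/A answers from the base) is a decision to fix before launch, not something the code settles for you.

```python
from collections import Counter, defaultdict

# Hypothetical exported responses: (answer, segment).
responses = [
    ("Very disappointed", "self-serve"),
    ("Somewhat disappointed", "self-serve"),
    ("Very disappointed", "enterprise"),
    ("Not disappointed", "enterprise"),
]

def pmf_score(answers):
    """Share of respondents answering 'Very disappointed'."""
    counts = Counter(answers)
    total = sum(counts.values())
    return counts["Very disappointed"] / total if total else 0.0

print(f"Overall: {pmf_score([a for a, _ in responses]):.0%}")  # headline score

by_segment = defaultdict(list)  # score per pre-committed segment
for answer, segment in responses:
    by_segment[segment].append(answer)
for segment, answers in sorted(by_segment.items()):
    print(f"{segment}: {pmf_score(answers):.0%}")
```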

AI tools (Dovetail, Marvin, generic LLMs with the raw responses pasted in) can auto-categorize the open-ended follow-ups and surface the language patterns that distinguish “very disappointed” respondents from “somewhat disappointed” ones. That language analysis directly informs your positioning and marketing copy. AI can also auto-segment quantitative responses by acquisition channel, usage behavior, or demographic profile to find which specific segments have crossed 40 percent even when the overall score has not.
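
Even without AI tooling, a crude version of that language comparison is easy to script. A minimal sketch, assuming the open-ended “main benefit” answers have already been exported per quantitative bucket (the sample answers below are invented):

```python
from collections import Counter

very = ["speed and keyboard shortcuts", "it is so fast", "fast triage of email"]
somewhat = ["nice design", "a calendar view would help", "no mobile app yet"]

def term_counts(texts):
    """Count words across answers, dropping a few filler words."""
    stop = {"and", "it", "is", "a", "the", "of", "so", "no", "yet", "would"}
    return Counter(w for t in texts for w in t.lower().split() if w not in stop)

very_counts, somewhat_counts = term_counts(very), term_counts(somewhat)

# Words over-represented in the very-disappointed bucket point at the core
# benefit; words unique to the somewhat bucket point at the gaps.
distinctive = {w: c for w, c in very_counts.items()
               if c > somewhat_counts.get(w, 0)}
print(sorted(distinctive.items(), key=lambda kv: -kv[1]))
```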

Biases & Tips

  • Order effect: If the PMF question sits inside a longer survey, earlier questions can prime the answer. Place this question early or run it as a standalone.
  • Premature measurement: Asking before users have actually experienced the core product produces noise. Apply the qualification filter (used twice, used in the last two weeks); do not survey first-week signups.
  • Small-sample variance: Below ~30 qualified responses, the “very disappointed” percentage swings widely on a few answers. Aim for 30 minimum and 100+ before treating the score as a defensible signal; a confidence-interval sketch follows this list.
  • Threshold worship: Forty percent is a heuristic Sean Ellis found across roughly one hundred startups, not a physical constant. Treat 38 percent and 42 percent as the same number; treat the trend across quarters and the segment-level scores as more informative than the headline.
  • Selection bias in respondents: Users who bother to complete a survey skew positive. If your response rate is under 10 percent, your “very disappointed” share is probably overstated.
  • AI delivery distortion: Do not administer the PMF question via conversational AI or chatbot. The method’s validity depends on its standardized, simple format. AI-generated preamble or dynamic follow-up questioning introduces variability in how respondents interpret the question. Use AI for targeting and analysis, not for the survey interaction itself.
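
To put a number on the small-sample-variance warning above, a 95 percent Wilson score interval (a standard statistical formula, not part of the Sean Ellis method itself) shows how wide the plausible range around a measured 40 percent score is at different sample sizes:

```python
from math import sqrt

def wilson_interval(k, n, z=1.96):
    """95% Wilson score interval for k successes out of n trials."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

for n in (30, 100, 300):
    lo, hi = wilson_interval(round(0.4 * n), n)
    print(f"n={n}: a measured 40% could plausibly be {lo:.0%} to {hi:.0%}")
```

At 30 responses the interval spans roughly the mid-20s to the high 50s, which is why the score is only directional at that size.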

Next Steps

  • If below 40 percent, segment respondents and focus the roadmap on the highest-scoring segment.
  • Run Customer Discovery Interviews with “very disappointed” respondents to understand what they value most.
  • Run Customer Discovery Interviews with “somewhat disappointed” respondents to learn what would move them to “very disappointed.”
  • Build the features that “somewhat disappointed” users say would make them “very disappointed” to lose.
  • Re-run the survey quarterly to track PMF progress.
  • Set up Dashboards to track behavioral metrics alongside your PMF score and see if survey sentiment matches actual usage patterns.

Case Studies

Superhuman

Rahul Vohra’s First Round Review account documents how Superhuman lifted its Sean Ellis “very disappointed” score from 22 percent to 33 percent and then to 58 percent across a series of release cycles. The team built a four-step engine — segment users, analyze the qualitative follow-ups (main benefit / type of person who benefits most / what’s missing), build the next roadmap on a roughly 50/50 split between doubling down on benefits and closing gaps, then repeat — and aimed every release at a high-expectation-customer segment they had isolated using Julie Supan’s HXC framing. The case study is the canonical worked example of using the PMF survey not as a one-off score but as a continuous product-development instrument.
