Closed-Ended Survey

In Brief

Closed-ended surveys are structured questionnaires that ask participants to choose from a fixed set of answers such as yes/no or A/B/C/D. They are often used to rank the issues most important to participants or to segment a population. Surveys can be repeated over time to identify trends. The output is structured quantitative data that can be analyzed with statistical techniques.

Common Use Case

You’ve talked to a number of customers, and they’ve expressed four distinct pain points you could build a business around. To prioritize which problem to solve, you need to know how big the market for each is. You want to know what percentage of the broader population has those problems and how often the problems occur.

Helps Answer

  • Which of these options is the priority?
  • What percentage of people are interested?
  • What is the trend?
  • How can I segment the population?
  • Are there any unexpected correlations in answers?

Time Required

Varies significantly with the survey length and methodology chosen. It can take under an hour to configure an exit survey on a website, or it can take weeks to perform a large-scale, in-person survey. Typically, surveys take under an hour to prepare and several hours to collect the data.

Cost

Surveys typically can be done for low to no cost. AI agents can help create robust surveys almost instantly and code them into an existing website. A variety of SaaS tools to run surveys also exist. Pen and paper also work well for in-person surveys.

Description

Closed-ended surveys force participants to pick from a fixed set of answers. Every respondent sees the same options, so the output is structured data you can count and compare.

Use them when you have “known unknowns” — you’ve done generative research, you have several hypotheses, and now you need numbers to decide between them. Don’t use them to discover problems you haven’t heard of yet. Open-ended methods are better for that.

Surveys can be run once to make a decision or repeated over time to track a trend. They scale cheaply: the same instrument works for 50 respondents or 5,000.

How to

Prep

1. Write your goal in one sentence.

Keep it focused and don’t try to learn everything at once. A survey built to segment a market looks nothing like one built to measure satisfaction. If you can’t state the goal in one sentence, you’re not ready to write questions.

Examples:

  • “Determine which of four pain points is most common among mid-market CFOs.”
  • “Measure whether satisfaction improved after the Q3 onboarding redesign.”

2. Pick a screener question.

The first question should filter out people who aren’t in your target population. Put it at the top and end the survey for anyone who doesn’t qualify. Try to avoid multiple screeners.

Are you currently responsible for purchasing software for your team? Yes / No

(If No → end survey)

3. Choose your question types.

Each type produces a different kind of data. Use the one that matches what you need to learn.

Dichotomous (yes/no) — Splits the population into two groups. Fast to answer. Use when you need a clean binary.

Do you currently use a project management tool? Yes / No

Multiple choice — Identifies which category a respondent falls into. Use for segmentation or feature preference.

What is your primary role?

  • Founder/CEO
  • Product Manager
  • Engineer
  • Designer
  • Other

Likert scale — Measures degree of agreement or satisfaction on a fixed scale. Use to gauge intensity, not just direction.

“I find it easy to prioritize my daily tasks.” Strongly disagree / Disagree / Neutral / Agree / Strongly agree

Forced ranking — Makes respondents prioritize. Unlike multiple choice, it forces trade-offs. Use when you need relative importance, not just preferences.

Rank from most to least important when choosing a SaaS tool:

  1. Price
  2. Ease of use
  3. Customer support
  4. Integrations

Matrix — Applies the same scale across several items. Efficient but fatiguing. Keep it to five rows or fewer.

Rate each department’s data literacy (1 = Low, 5 = High):

  • Marketing
  • Product
  • Sales
  • Engineering

Hybrid (closed + “Other”) — Add a free-text “Other” option when you aren’t sure your answer list is complete. If more than 10% of respondents choose “Other,” your options are wrong. Go back and fix them.
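
If you or an AI agent will code the survey into a website, it helps to keep the instrument in one structured object so question types, answer options, and skip logic stay explicit. A minimal sketch in Python; the schema and its field names (id, type, options, end_if) are illustrative, not the format of any particular survey tool:

```python
# Hypothetical survey definition. The schema (id, type, options, end_if) is
# illustrative, not the format of any real survey tool.
SURVEY = [
    {
        "id": "screener",
        "type": "dichotomous",
        "text": "Are you currently responsible for purchasing software for your team?",
        "options": ["Yes", "No"],
        "end_if": "No",  # skip logic: disqualify and end the survey
    },
    {
        "id": "role",
        "type": "multiple_choice",
        "text": "What is your primary role?",
        "options": ["Founder/CEO", "Product Manager", "Engineer", "Designer", "Other"],
        "allow_other_text": True,  # hybrid: free-text box when "Other" is chosen
    },
    {
        "id": "ease",
        "type": "likert_5",
        "text": "I find it easy to prioritize my daily tasks.",
        "options": ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"],
    },
    {
        "id": "priorities",
        "type": "forced_ranking",
        "text": "Rank from most to least important when choosing a SaaS tool:",
        "options": ["Price", "Ease of use", "Customer support", "Integrations"],
    },
]
```

A single definition like this can be rendered in any channel and used later to validate responses against the declared options.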

4. Write the questions.

Rules:

  • One concept per question. Don’t ask “Is the product fast and reliable?” — that’s two questions.
  • No hypotheticals. “Would you pay for X?” is worthless. People don’t know.
  • No leading language. “Don’t you agree that…” is not a question.
  • Include “N/A” or “Don’t know” where appropriate. Forcing an answer where none exists creates noise.
  • Keep the total survey under 15 questions. Completion rates drop sharply after that.

5. Order the questions deliberately.

  • Start with the screener.
  • Put easy, non-threatening questions next to build momentum.
  • Place sensitive or complex questions in the middle, after trust is established.
  • End with demographic questions (role, company size, etc.) — these feel impersonal and signal the survey is nearly done.
  • If order effects matter, randomize question sequence across respondents. Most survey tools support this.
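
If your tool doesn’t support randomization, or you’re building the survey yourself, a simple approach is to shuffle only the middle block for each respondent, keeping the screener first and demographics last. A minimal sketch; the question IDs are placeholders:

```python
import random

def question_order(question_ids, n_first=1, n_last=2, seed=None):
    """Per-respondent order: keep the first n_first (screener) and last n_last
    (demographics) questions in place, shuffle everything in between."""
    rng = random.Random(seed)  # seed with a respondent ID for a reproducible order
    head = list(question_ids[:n_first])
    tail = list(question_ids[len(question_ids) - n_last:])
    middle = list(question_ids[n_first:len(question_ids) - n_last])
    rng.shuffle(middle)
    return head + middle + tail

# Each respondent sees the middle questions in a different order.
print(question_order(["screener", "q1", "q2", "q3", "q4", "role", "company_size"], seed=7))
```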

6. Pilot with 5–10 people.

Run the survey on a small group before you launch. You’re checking for:

  • Questions that confuse people
  • Missing answer options
  • How long it actually takes (aim for under 5 minutes)
  • Whether the data you get back answers your stated goal

7. Choose a distribution channel.

Match the channel to your audience:

  • In-app or website exit survey — Best response rates. Use for existing users. Tools like Sprig or Hotjar embed directly.
  • Email — Good for existing customers or mailing lists. Expect 10–30% response rates. Keep the subject line short and specific.
  • Social media or community posts — Cheap reach but self-selecting. Useful for early-stage validation when you don’t have a user base yet.
  • Paid panels — Services like Pollfish or Prolific recruit respondents matching your criteria. Costs $1–5 per response. Use when you need a specific demographic you can’t reach organically.
  • In-person — Pen and paper at events, trade shows, or retail locations. High completion rates but small sample sizes.

8. Set a target sample size.

You need enough responses to trust the results. Rules of thumb:

  • For simple yes/no splits: 100 responses minimum.
  • For comparing subgroups (e.g., role A vs. role B): 30+ per subgroup.
  • For detecting small differences between options: 300+.
  • Use a sample size calculator to get a precise number. Plug in your expected effect size, confidence level (usually 95%), and population size; a sketch of the underlying math follows this list.
  • If your sample will be small (under 50): You can still run the survey. You won’t get statistical significance, but you will get directional signals — which options cluster together, which segments lean one way. Present findings as “early indications,” not conclusions. Small-sample surveys are most useful when paired with follow-up interviews to pressure-test the patterns you see.
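
If you want to see what a sample size calculator does under the hood, the standard formula for estimating a proportion is n = z^2 * p * (1 - p) / e^2, optionally adjusted with a finite-population correction. A sketch in plain Python, assuming 95% confidence (z ≈ 1.96) and the conservative worst case p = 0.5:

```python
import math

def sample_size(margin_of_error=0.05, z=1.96, p=0.5, population=None):
    """Responses needed to estimate a proportion within the given margin of error.
    z=1.96 is 95% confidence; p=0.5 is the conservative worst case."""
    n = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    if population:  # finite-population correction for small target populations
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

print(sample_size(margin_of_error=0.05))                   # ≈ 385 responses
print(sample_size(margin_of_error=0.05, population=2000))  # ≈ 323 for a population of 2,000
```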

Execution

1. Collect responses.

How you collect depends on the channel you chose. In all cases: don’t explain the “right” answer, don’t react to responses, and don’t help participants interpret questions. You want their first instinct, not a coached answer.

Online surveys:

  • Set a deadline. Open-ended collection windows drag on and introduce timing bias.
  • Send one reminder 2–3 days after the initial send. Two reminders maximum — more than that annoys people and biases toward the overly compliant.
  • Monitor completion rates daily. If drop-off spikes at a specific question, that question is the problem.
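
To find where drop-off spikes, count how many respondents answered each question column and look for the biggest step down between consecutive questions. A minimal pandas sketch; the file name responses.csv and the assumption that columns appear in survey order are placeholders for whatever your tool exports:

```python
import pandas as pd

df = pd.read_csv("responses.csv")    # one row per respondent, one column per question

answered = df.notna().sum()          # how many respondents answered each question
reach_rate = answered / len(df)      # share of starters who got that far

# The biggest negative step between consecutive questions marks the problem question.
drop = reach_rate.diff().fillna(0)
print(pd.DataFrame({"answered": answered,
                    "reach_rate": reach_rate.round(2),
                    "drop_vs_previous": drop.round(2)}))
```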

In-person surveys:

  • Read the questions exactly as written. Don’t paraphrase or elaborate — that introduces variation between respondents.
  • Don’t hover. Hand over the survey (or tablet) and step back. People answer differently when they feel watched.
  • If a participant asks what a question means, note the confusion but don’t explain. That question needs rewriting.
  • Collect responses in a consistent format. If using paper, transcribe into a spreadsheet the same day while your memory of any issues is fresh.

Phone or video:

  • Read questions verbatim. For multiple choice, read all options before accepting an answer — people anchor on the first option they hear.
  • Record the session (with consent) so you can verify transcription later.

2. Export the raw data.

Download responses as CSV. Don’t analyze inside the survey tool — export to a spreadsheet or statistical tool where you can manipulate the data freely.

Analysis

1. Clean the data first.

  • Remove incomplete responses (respondents who dropped out partway through).
  • Remove speeders — anyone who finished in less than one-third of the median completion time was clicking randomly.
  • Check for straight-liners: respondents who selected the same answer for every question. Remove them.
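
A minimal pandas sketch of these three cleaning rules, assuming an export with one column per question plus a completion-time column; the duration_seconds column and the q_ prefix are placeholder names for whatever your tool produces:

```python
import pandas as pd

df = pd.read_csv("responses.csv")
question_cols = [c for c in df.columns if c.startswith("q_")]   # placeholder naming

# 1. Incomplete responses: drop anyone with an unanswered question.
complete = df.dropna(subset=question_cols)

# 2. Speeders: finished in under one-third of the median completion time.
cutoff = complete["duration_seconds"].median() / 3
no_speeders = complete[complete["duration_seconds"] >= cutoff]

# 3. Straight-liners: gave the same answer to every question.
straight = no_speeders[question_cols].nunique(axis=1) == 1
clean = no_speeders[~straight]

clean.to_csv("responses_clean.csv", index=False)
print(f"{len(df)} raw -> {len(clean)} usable responses")
```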

2. Start with frequencies.

For each question, count how many respondents chose each option. Calculate percentages. This is your baseline. Look at it before doing anything fancy.
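
In pandas this is a couple of lines per question: value_counts gives the counts, and normalize=True gives the percentages. A sketch, assuming the cleaned export from the previous step was saved as responses_clean.csv (a placeholder name):

```python
import pandas as pd

clean = pd.read_csv("responses_clean.csv")   # output of the cleaning step
question_cols = [c for c in clean.columns if c.startswith("q_")]

# Counts and percentages for every question column.
for col in question_cols:
    counts = clean[col].value_counts()
    pct = clean[col].value_counts(normalize=True).mul(100).round(1)
    print(f"\n{col}")
    print(pd.concat([counts.rename("n"), pct.rename("%")], axis=1))
```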

3. Cross-tabulate.

Cross-tabulation means breaking responses down by subgroup (e.g., role, company size, or answers to other questions). A preference that looks evenly split overall might be 80/20 within a specific segment. That’s where the insight is. You can do this in a spreadsheet with pivot tables, or ask an AI to do it for you.
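
A sketch of the same pivot-table operation in pandas; the column names role and preferred_option are placeholders for whichever segmentation and preference questions you asked:

```python
import pandas as pd

clean = pd.read_csv("responses_clean.csv")

# Preference counts broken down by role, then the same table as row percentages.
counts = pd.crosstab(clean["role"], clean["preferred_option"])
row_pct = pd.crosstab(clean["role"], clean["preferred_option"], normalize="index")

print(counts)
print((row_pct * 100).round(1))   # an 80/20 split inside one segment shows up here
```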

4. Check distributions.

Don’t just look at averages. Look at the shape of how answers are spread out:

  • Bimodal (two peaks) — You may have two distinct segments responding differently. Investigate.
  • Heavily skewed (most answers bunched to one side) — The average is misleading. Report the median instead.
  • Uniform (answers spread evenly) — Respondents are guessing or the question doesn’t discriminate. The question may be poorly worded.
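
A quick way to check shape without plotting is to print the full frequency distribution, the mean versus the median, and the skew for each scale question. A sketch, assuming Likert answers were exported already coded 1–5 under a placeholder likert_ prefix:

```python
import pandas as pd

clean = pd.read_csv("responses_clean.csv")
likert_cols = [c for c in clean.columns if c.startswith("likert_")]   # placeholder naming

for col in likert_cols:
    print(f"\n{col}")
    print(clean[col].value_counts().sort_index())    # two peaks -> possible bimodal split
    print("mean:", round(clean[col].mean(), 2),
          "median:", clean[col].median(),
          "skew:", round(clean[col].skew(), 2))      # strong skew -> report the median, not the mean
```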

5. Test for significance.

You don’t need a statistics background to do this. Export your data as a CSV, paste it into an AI tool, and ask it to run the appropriate test. The key tests and when to ask for them:

  • “Run a chi-square test” — When you’re comparing categories across groups (e.g., “Did segment A prefer option 1 more than segment B?”). This tells you whether the difference is real or just random variation.
  • “Run a t-test” (two groups) or “Run an ANOVA” (three or more groups) — When you’re comparing average scores across groups on Likert-scale or rating data.
  • “Check for correlations” — When you want to know whether answers to two questions move together (e.g., do people who rate ease-of-use highly also rate satisfaction highly?).

The AI will return a p-value. If it’s below 0.05, the pattern is statistically significant — meaning it’s unlikely to be noise. If it’s above 0.05, you can’t trust the pattern yet. You may need more responses.
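
If you’d rather run the tests yourself than paste the CSV into an AI tool, the same three checks take a few lines with scipy. A sketch with placeholder column names (segment, preferred_option, ease_score, satisfaction_score):

```python
import pandas as pd
from scipy import stats

clean = pd.read_csv("responses_clean.csv")

# Chi-square: is the preference distribution different across segments?
table = pd.crosstab(clean["segment"], clean["preferred_option"])
chi2, p, dof, expected = stats.chi2_contingency(table)
print("chi-square p =", round(p, 4))

# t-test: do two groups differ on an average rating?
a = clean.loc[clean["segment"] == "A", "ease_score"]
b = clean.loc[clean["segment"] == "B", "ease_score"]
print("t-test p =", round(stats.ttest_ind(a, b).pvalue, 4))

# Correlation: do two ratings move together?
r, p = stats.pearsonr(clean["ease_score"], clean["satisfaction_score"])
print("correlation r =", round(r, 2), "p =", round(p, 4))
```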

When an AI returns an analysis, open a new session and ask it to double-check and critique the analysis to avoid hallucinations.

6. Flag problems in the instrument.

  • Questions where 80%+ chose the same answer told you nothing. The question was too obvious or too leading.
  • Questions with high “N/A” or “Don’t know” rates were asked of the wrong audience.
  • If “Other” exceeded 10% on a multiple-choice question, your options were incomplete.
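
These checks can be automated across every question. A sketch, again using the placeholder q_ column naming; the 80% and 10% thresholds come from the list above, while the 20% cutoff for N/A is an assumed stand-in for “high”:

```python
import pandas as pd

clean = pd.read_csv("responses_clean.csv")
question_cols = [c for c in clean.columns if c.startswith("q_")]   # placeholder naming

for col in question_cols:
    shares = clean[col].value_counts(normalize=True)
    top = shares.iloc[0]                         # value_counts sorts most-common first
    other = shares.get("Other", 0)
    na = shares.get("N/A", 0) + shares.get("Don't know", 0)
    if top >= 0.80:
        print(f"{col}: {top:.0%} picked one answer (too obvious or leading)")
    if other > 0.10:
        print(f"{col}: 'Other' at {other:.0%} (answer options incomplete)")
    if na > 0.20:                                # assumed cutoff for "high"
        print(f"{col}: {na:.0%} N/A / Don't know (wrong audience for this question)")
```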

Biases & Tips

  • Social desirability bias: Respondents avoid admitting to unsavory behavior, especially if results aren’t confidential. Keep surveys anonymous when possible.
  • Leading questions: Subtle word choices prompt a particular answer. Compare: “Are you for or against an increase in tobacco tax rates?” (neutral) vs. “Are you in favor of increasing tobacco tax rates to protect our children’s health?” (leading). Have someone outside your team review your questions.
  • Emotionally loaded content: Assumptions baked into questions skew results. “Where do you enjoy drinking beer?” presumes the respondent drinks beer. Use neutral framing and provide opt-out options.
  • Order effects: The sequence of questions can prime later answers. Randomize question order across respondents when possible.
  • Willingness-to-pay bias: Never ask people what they would like to pay. They understate or don’t know. Use behavioral methods (pre-sales, pricing page tests) instead.
  • Acquiescence bias: People tend to agree with statements regardless of content. Mix positively and negatively worded items to catch this.
  • AI-drafted questions still need piloting: LLMs can draft questions, scrub for obvious bias, and suggest appropriate scales, but they can’t catch questions that are technically clear yet culturally or contextually wrong for your audience. Always pilot the final survey with 5–10 real humans from your target segment before you spend on distribution.

Next Steps

  • Refine the customer segment with results and do Customer Discovery Interviews with the prioritized segment.
  • For indeterminate results, expand the sample size.
  • For confirmed participant problems, run a Value Proposition Test.
  • Use a Landing Page Test to test whether the segments you quantified will convert on a real value proposition.
  • Use a Comprehension Test to verify that your messaging resonates with the highest-priority segment before scaling.

Learn more

Case Studies

Foursquare

Foursquare, the location-based check-in app, expanded from 23 to 38 cities in October 2009. Limited city availability had been users’ number one complaint, so the team used simple closed-ended surveys and usage data to prioritize which cities to add. The expansion validated that pent-up demand existed outside their initial markets, helping fuel their rapid growth from early-stage startup to mainstream adoption.

Coca-Cola

Coca-Cola conducted approximately 190,000 blind taste tests before launching New Coke in 1985. The tests showed a preference for the sweeter formula, but the company failed to ask whether customers wanted the original replaced. The resulting backlash forced Coca-Cola to reintroduce the original as “Coca-Cola Classic” within 79 days — a cautionary tale about asking the right questions in structured research.

Hyundai/Kia

J.D. Power’s Initial Quality Study uses structured closed-ended surveys asking new vehicle owners to indicate problems from a predefined list across categories like infotainment, drivetrain, and exterior. Automakers Hyundai and Kia used consistently poor IQS rankings as a catalyst for quality overhauls beginning in the early 2000s. By 2016, Kia ranked number one overall in the IQS — the first non-premium brand to top the study in 27 years.

Cleveland Clinic

Cleveland Clinic made the HCAHPS closed-ended patient experience survey — with questions like “How often did nurses treat you with courtesy and respect?” rated on a Never/Sometimes/Usually/Always scale — a central strategic priority under CEO Toby Cosgrove. Starting from the 10th percentile, their scores improved to the top 8% of roughly 4,600 hospitals, becoming a model other health systems studied.

LEGO Friends

Before launching the Friends line, LEGO ran a four-year structured study with 3,500 girls and their mothers, using closed-ended questions about color, model preferences, and play patterns. The quantitative results justified the 2012 launch, which more than doubled first-year sales forecasts and helped triple US/EU girls’ construction-toy revenue from $300M (2011) to $900M (2014).

Pew Research Center

American Trends Panel data shows closed-ended questions have a 1–2% item nonresponse rate, while open-ended questions average 18% (and range up to 50%), a useful empirical anchor for why founders should lean on closed-ended formats once the problem space is mapped.
