Competitor Analysis

[Illustration: a 4×4 competitive analysis matrix with feature icons across the top, animal silhouettes (cat, dog, bird) representing competitors down the left, and green checkmarks and red X marks comparing features]

In Brief

Competitor analysis is most useful when you compare features, pricing, channels, and positioning in relation to a specific target segment — not as an abstract scorecard. The same feature is a strength against one segment and a non-issue against another, so the work is relational, not absolute. The output is a shortlist of competitors, a structured profile of each, and a synthesis (feature × segment matrix, positioning gap analysis, focused SWOT) that points to a defensible opening. AI can assemble most of the raw material in hours, but it routinely fabricates funding figures, customer counts, and feature claims; verification of every numeric and named claim is part of the method, not an optional polish step.

Common Use Case

You have a hypothesis about who else is in this space, but you need a structured map before you commit to a differentiation story or a pricing position. You are not yet ready to interview customers about each rival; you want to see the field, name the two or three you actually compete with for your target segment, and decide where to plant your flag.

Helps Answer

  • Which competitors are our customers most likely comparing us to?
  • How are competitors solving similar customer problems, and where do they fall short for our target segment?
  • What form should our product take to stand out for the segment we care about?
  • How can we differentiate our offering and positioning?
  • What revenue models are competitors using?
  • Which channels and acquisition motions do competitors lean on, and which are crowded versus open?
  • Which partnerships and integrations do competitors rely on, and which can we replicate or avoid?

Time & Cost

A first pass with AI-assisted desk research takes about a day: shortlist generation, structured profiling, and an initial synthesis you can defend in a meeting. A thorough pass — with verified primary sources, fact-checked numeric claims, and a continuous-monitoring setup — takes up to a week. After that, plan a re-run every quarter; competitor maps go stale within 90 days in fast-moving markets.

The cheapest tier is free LLM accounts, public pricing pages, Crunchbase free tier, Reddit, app stores, and Google for around $0. A working tier adds a paid LLM subscription ($20–$100/mo) plus SpyFu or SEMrush ($40–$80/mo) and G2 review access for roughly $200/mo. A continuous-monitoring tier with Crayon ($12K–$47K/yr) or Klue ($16K–$45K/yr) lands at $1K+/mo and is only worth it once you are competing for deals where battlecards pay for themselves.

Description

Competitor analysis is a structured comparison of the players already trying to solve the same job for the same kind of customer. The point is not to count competitors; it is to understand how each one is positioned for a target segment you care about, and to find a gap where your offering produces evidence of stronger fit than theirs.

The method has three stages. Prep produces a shortlist of 8–15 names worth deep work — a number small enough to research thoroughly and large enough to include direct competitors, indirect competitors, and substitutes from adjacent markets. Execution produces a structured record per competitor: features, pricing, channels, customer segments, sentiment, and funding. Analysis turns those profiles into decisions: a feature × segment matrix, a positioning gap analysis, focused SWOTs for the two or three competitors you actually compete with, and a written-up insight that names the gap.

Two principles run through all three stages.

The first is relational comparison. A feature is not a strength in isolation. SOC2 compliance is a strength against an enterprise buyer and irrelevant against a prosumer; a free tier is a strength against a price-sensitive solo user and a liability against a procurement-led buyer. Every cell in your matrix is scored against a named segment, never in the abstract.

The second is citation-backed claims. AI assembles competitive material faster than any human team, but current frontier models still hallucinate funding rounds, customer counts, and feature lists at rates between 3% and 19% depending on the task. Every numeric and named claim on a finished competitor profile must trace to a verifiable URL, and a fact-check pass — by a human or a verification subagent — is part of the method.

For market sizing (TAM / SAM / SOM, segment growth, willingness-to-pay benchmarks), use Secondary Market Research. This page is about who is in the field and where the gaps are; that one is about how big the field is.

How to

Prep

  1. Define the target segment first, not the competitor list. Write a one-paragraph description of the customer you are analyzing competitors against: who they are, what job they are trying to do, what they currently use, and what they care about most. Every later judgment — a feature counting as a strength, a price being a problem, a channel being saturated — is made against this paragraph. If you skip this step you produce a generic landscape map, which is worse than no map because it feels finished.

  2. Pick which competitors are worth deep-research time. Don’t try to map “everyone in the space.” Aim for 8–15 names: 3–5 direct competitors, 3–5 indirect competitors solving the same job differently, and 2–3 substitutes from adjacent markets. Cross-reference at least two of these candidate-pool sources before locking the shortlist:

    • Customer discovery interview transcripts — whom did your interviewees name as alternatives, and what did they switch from?
    • Search trend analysis — which brands rank for the pain queries your target segment types into Google?
    • Keyword competition — who is bidding on your keywords in SpyFu or SEMrush?
    • Open-ended survey responses — answers to “what tool or method do you currently use for X?” and “who else did you consider?”
    • App Store / category browsing — which apps appear in the categories your customer browses?
    • Social listening — which names keep coming up in subreddits, Slack communities, Discord servers, and review threads your customer reads?

    If a name shows up in only one source, treat it as a weak candidate. If it shows up in two or more, it earns a slot.

  3. Decide your dimensions. Pick the 6–10 dimensions that matter most to your target segment, not a generic SaaS rubric. Examples: feature coverage, pricing model, time-to-value, integrations with the buyer’s existing stack, support quality, deployment options, compliance posture, channel mix, partner ecosystem. Drop dimensions that don’t map to a real concern of your segment — “supports SOC2” is dead weight if you sell to indie creators, and “free tier” is dead weight if you sell to procurement-led enterprises.

  4. Sketch the artifacts you will produce. Pick the visualizations that will actually drive the decision in front of you, and stub them out before you collect data. Common artifacts:

    • Feature × segment matrix — competitors on rows, dimensions on columns, scored against the target segment.
    • 2×2 positioning grid — pick the two dimensions your segment cares about most; plot competitors; look for empty quadrants.
    • Petal diagram — for new categories, place yourself at the center and draw petals for each adjacent market your customers will switch from.
    • Pricing-tier comparison — list price, packaging, what’s included at each tier, common discounting patterns.
    • Channel-mix breakdown — paid search, content, partnerships, marketplaces, outbound, community — what share each competitor leans on.

    You do not need every artifact. Pick the two or three that answer the actual decision you are about to make.

  5. Decide where verified primary sourcing matters. Some claims drive decisions (pricing, funding round size, feature presence on a paid plan, named integrations); others are background. Tag each dimension as decision-grade (must be verified to a primary source) or directional (a published roundup or AI summary is fine). Decision-grade claims will go through fact-checking later; directional claims will not. Without this triage, fact-checking either takes forever or doesn’t happen.
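
The two-source rule from step 2 is easy to automate once each discovery source yields a candidate list. A minimal sketch (the company names and source pools are hypothetical; the source keys mirror the bullet list above):

```python
from collections import Counter
from itertools import chain

def shortlist(candidates_by_source: dict[str, list[str]], min_sources: int = 2) -> list[str]:
    """Keep only names that appear in at least `min_sources` distinct sources."""
    # Count distinct sources per name (a name mentioned twice in one source counts once).
    mentions = Counter(chain.from_iterable(set(names) for names in candidates_by_source.values()))
    return sorted(name for name, count in mentions.items() if count >= min_sources)

# Hypothetical candidate pools from three of the six discovery sources.
pools = {
    "interviews":   ["Acme", "Beta", "Gamma"],
    "keyword_bids": ["Acme", "Delta"],
    "app_store":    ["Beta", "Acme"],
}
print(shortlist(pools))  # → ['Acme', 'Beta']  (Gamma and Delta appear in only one source)
```

A name in only one source stays in a "weak candidates" parking lot rather than on the shortlist, exactly as step 2 prescribes.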

Execution

The verified shortlist from Prep is now the input to deep profiling. The goal here is not to write essays per competitor; it is to fill a fixed schema so that the synthesis stage has clean inputs.

  1. Use a fixed profile schema. Each profile records the same fields in the same order. A reasonable default:

    • Company name, URL, year founded, HQ location.
    • Funding stage and total raised (Crunchbase), most recent round.
    • Customer segments served (named, not “everyone”).
    • Top 6–10 features mapped to your dimension list.
    • Pricing model and published tier prices.
    • Channel mix (paid search, content, marketplaces, partnerships, outbound, community) — qualitative description plus traffic share if SimilarWeb is available.
    • Integrations and named partners.
    • Sentiment summary from review sites and forums (positives, complaints, common deal-breakers).
    • Any 2025–2026 product or strategy moves worth noting.
    • Inline [1], [2] citation markers for every numeric and named claim.
    • A Sources list at the end of the profile resolving every marker to a URL.

  2. Collect each field from its canonical source.

    • Funding, stage, founding year → Crunchbase, Pitchbook, SEC filings, press releases → spreadsheet column.
    • Features and positioning copy → competitor’s homepage hero, product pages, pricing page → feature matrix row.
    • Channel mix and GTM motion → SimilarWeb (traffic sources), SpyFu / SEMrush (keyword bids), Meta and Google ad libraries (creative), App Store presence, the competitor’s own integrations / partners page → channel-mix table.
    • Customer segment and sentiment → G2, Capterra, TrustRadius, Reddit, App Store reviews → segment notes + sentiment column.
    • Pricing and revenue model → published pricing page; review threads mentioning negotiated discounts → revenue model column.

  3. AI does the first pass; humans verify decision-grade claims. A capable LLM with web access can fill 70–80% of the profile in hours. The remaining 20–30% — and any claim you tagged as decision-grade in Prep — must be confirmed by visiting the cited source yourself or by routing through a fact-check subagent. Pricing changes weekly on some sites; funding round sizes are routinely overstated in press releases; “X customers” claims rarely include the basis on which a customer was counted.

  4. Flag anything gated. Anything that requires a login or a paid database (Pitchbook full reports, paid Crunchbase, paid Forrester / IDC / Gartner reports, gated G2 detail, LinkedIn Sales Navigator) gets flagged as MANUAL in the profile. AI cannot fetch behind authentication, and a confidently stated number that the agent could not actually retrieve is the most common failure mode of AI-generated profiles.

Analysis

The point of analysis is to produce decisions, not to fill templates. Every artifact below should end in a sentence that names a choice you are now better equipped to make.

  1. Build a feature × segment matrix. Score each competitor’s coverage of each dimension against your target segment, not against a generic buyer. A “Yes” cell only counts if the feature meets the segment’s bar (e.g. “supports SSO” against an enterprise buyer means SAML SSO on the plan they would buy, not OAuth on the free tier). Use a three-state scale — Strong / Partial / Absent — rather than a numeric score; numeric scores invite false precision.

  2. Run focused SWOTs on the two or three competitors you actually compete with. A SWOT scoped to your target segment is useful; a SWOT scoped to “the market” becomes a list-making exercise. For each top competitor, write 3–5 entries per quadrant, and force every entry to end in a decision you would now make differently. “Strength: incumbent brand recognition” → so we lead with social proof in our positioning. “Weakness: reviews mention slow support” → so we lead with response-time SLA in our messaging. If an entry doesn’t end in a decision, drop it.

  3. Add Porter’s Five Forces only if you are entering a mature market. If you are entering an established market with multiple incumbents and clear buyer/supplier relationships, a quick Five Forces pass on competitive rivalry, threat of new entrants, threat of substitutes, buyer power, and supplier power tells you which forces are doing the most to shape margins. In a new or emerging category, skip it; you do not yet have stable forces to map.

  4. Run positioning gap analysis. Plot competitors on the 2×2 grid you sketched in Prep. Then ask the harder question: which empty quadrants represent real demand, and which are empty for a reason? Cross-reference your discovery sources from Prep — interview transcripts, search trends, survey responses. An empty quadrant with search volume and customer mentions is an opening; an empty quadrant with neither is usually empty because no one wants what would go there.

  5. Write the strategic insight. A short write-up — 200–400 words — answering three questions in order:

    • Who are your customers most likely to compare you to? Two or three names, drawn from interview mentions and search behavior, not from the full shortlist.
    • What do those competitors do well for your segment? Be specific, segment-relative, and source-cited.
    • What is the gap that is actually a gap? A claim about an empty position that has demand evidence behind it, not just an empty cell on a grid.

    This is the artifact you actually act on. The matrices and SWOTs exist to make this paragraph defensible.
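
The three-state matrix from step 1 and the gap hunt from step 4 can be sketched together: represent each cell as Strong / Partial / Absent against the named segment, then list dimensions where no competitor is strong. Competitor and dimension names below are hypothetical; a gap this surfaces is only an opening if demand evidence backs it.

```python
# Three-state feature × segment matrix, scored against ONE named segment.
SCORES = {"strong": 2, "partial": 1, "absent": 0}

matrix = {
    "Acme":  {"time_to_value": "strong",  "integrations": "partial", "support": "absent"},
    "Beta":  {"time_to_value": "partial", "integrations": "strong",  "support": "absent"},
    "Gamma": {"time_to_value": "absent",  "integrations": "partial", "support": "partial"},
}

def open_dimensions(matrix: dict[str, dict[str, str]]) -> list[str]:
    """Dimensions where no competitor scores 'strong' — candidate gaps only.

    An empty position still needs demand evidence (step 4) before it counts."""
    dims = next(iter(matrix.values())).keys()
    return [d for d in dims
            if all(SCORES[row[d]] < SCORES["strong"] for row in matrix.values())]

print(open_dimensions(matrix))  # → ['support']
```

The three-state scale keeps the sketch honest: there is no way to average Partial cells into a false-precision numeric ranking.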

Biases & Tips

  • Confirmation bias: Founders don’t want to find competitors. Make the analysis exhaustive, repeated, and externally reviewed. If your map has no direct competitors, you have not looked hard enough.
  • AI hallucination risk: AI competitive research is fast but routinely fabricates funding rounds, customer counts, and feature claims; current frontier models hallucinate at rates of 3–19% on factual recall and citation tasks. Every numeric and named claim must be primary-source-verified before it enters your analysis. Treat unverified AI output as a draft, not a finding.
  • Local-optima trap: If you only look at direct competitors, you optimize for incremental differentiation. Always include 2–3 substitutes from adjacent markets. Airbnb beat hotels by studying Craigslist, not Marriott; Figma beat Adobe by studying Google Docs as much as Sketch.
  • Static-snapshot bias: A competitor map is stale within 90 days in fast-moving markets. Schedule a quarterly re-run, and set Google Alerts (or Crayon / Klue equivalents) on the top 2–3 names so material moves don’t go unnoticed.
  • Absolute-feature scoring: Comparing features in isolation rather than against a named segment produces matrices that look rigorous and mislead consistently. The same feature is a strength for one segment and irrelevant for another. Always score relationally.
  • Rubber-stamping AI synthesis: When an AI produces a finished SWOT, the human reader tends to accept the framing and edit only at the wording level. Force yourself to challenge at least one claim per quadrant against the underlying profile evidence; if you can’t trace it, drop it.
  • The numbers: Don’t worry about too few or too many players. Learn how you fit into the space being created. A market with no competitors is more often a sign you are early than a sign you are alone.
  • Too local: Don’t limit your search to your local area. For most product ideas, take a global perspective; international competitors often define the category your buyer eventually compares you to.
  • Competitor Analysis should color your thinking, create the appropriate context, and help educate you on what’s going on. — @byosko
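
The verification discipline above (every numeric and named claim traces to a citation marker, every marker resolves to a URL) can be run as a lint pass over a drafted profile before it enters your analysis. A heuristic sketch, not a substitute for visiting the sources yourself; the profile text and source map are hypothetical:

```python
import re

def lint_profile(text: str, sources: dict[int, str]) -> list[str]:
    """Flag numeric claims without a [n] marker and markers without a source URL.

    Heuristic only: matches numbers like '$40M' or '200,000' and [n] citations."""
    problems = []
    for line in text.splitlines():
        has_number = re.search(r"\$?\d[\d,.]*\s*[KMB%]?", line)
        markers = [int(m) for m in re.findall(r"\[(\d+)\]", line)]
        if has_number and not markers:
            problems.append(f"uncited numeric claim: {line.strip()!r}")
        for m in markers:
            if m not in sources:
                problems.append(f"marker [{m}] has no source URL")
    return problems

profile = "Raised $40M Series B [1]\nClaims 200,000 users"
print(lint_profile(profile, {1: "https://example.com/press"}))
# flags the second line: a numeric claim with no citation marker
```

A clean lint result means every decision-grade number at least points somewhere; the human fact-check then confirms the cited page actually says what the profile claims.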

Next Steps

  • Validate the gap with Customer Discovery Interviews — does the gap you found match what your target segment actually struggles with?
  • Run Competitor Usability Testing on the top 2–3 competitors to surface the UX-level weaknesses your matrix will not catch.
  • Test the differentiated positioning with a Landing Page Test before committing to it in launch messaging.
  • For market sizing (TAM / SAM / SOM, segment growth, willingness-to-pay benchmarks), use Secondary Market Research — that is a different method and does not belong on this page.
  • Set up continuous monitoring (Google Alerts at minimum, Crayon / Klue if budget allows) on the top 2–3 names and re-run the synthesis quarterly.
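
For the monitoring step, even without a paid tool, a lightweight change detector over the handful of pages you care about (pricing, integrations, changelog) covers the basics. A minimal sketch, assuming you fetch each page's text separately with any HTTP client; the plan names are hypothetical:

```python
import hashlib

def page_fingerprint(page_text: str) -> str:
    """Stable fingerprint of a competitor page (e.g. pricing), ignoring whitespace noise."""
    normalized = " ".join(page_text.split()).lower()
    return hashlib.sha256(normalized.encode()).hexdigest()

def changed(old_fingerprint: str, page_text: str) -> bool:
    """True if the page materially changed since the last run — time to re-verify the profile."""
    return page_fingerprint(page_text) != old_fingerprint

# On each scheduled run, compare the fresh fetch against the stored fingerprint.
baseline = page_fingerprint("Pro plan  $49/mo")
print(changed(baseline, "Pro plan $49/mo"))  # → False (whitespace-only difference)
print(changed(baseline, "Pro plan $59/mo"))  # → True  (price moved)
```

A fingerprint mismatch is a trigger to re-run the affected profile fields, not a finding in itself.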
Learn more

Case Studies

Crayon

We Predicted Our Competitor’s Product Launch Before They Hit Publish: Crayon’s AI-powered competitive intelligence tool (Sparks) analyzed competitor activity across data sources and surfaced predictions about upcoming product launches, demonstrating how AI has turned competitive analysis from periodic desk research into real-time monitoring.

Klue

Acquisition of Ignition for Agentic AI Product Marketing: In September 2025, Klue acquired Ignition, an agentic AI platform for product marketers, signaling the convergence of competitive intelligence and AI-driven product marketing. Klue now serves 200,000+ users with automated competitor tracking and AI-driven battlecards.

Agile Growth Labs

10 Best Competitive Intelligence Tools for SaaS 2025: Overview of how AI-powered CI tools (Crayon at $12K–$47K/year, Klue at $16K–$45K/year) replaced manual spreadsheet-based competitive analysis for SaaS startups, with features like real-time alerts, dynamic battlecards, and automated competitor profiling.

Figma

Studied Adobe’s design tool ecosystem and identified that professionals needed weeks to master it, required desktop installs, and lacked real-time collaboration. By systematically analyzing these gaps, Figma built a browser-based tool whose market share surged from 8% to 57% in three years, prompting Adobe’s attempted $20B acquisition.

Airbnb

Studied Craigslist’s massive user base for short-term rentals and identified poor photos, no trust mechanisms, and impersonal interactions. They cross-posted Airbnb listings and sent professional photographers — both insights driven by competitive analysis of an adjacent-market incumbent’s weaknesses, not direct competitors.
