Social Media Campaign

In Brief
A social media campaign for generative research is a series of posts published on social platforms around a topic or problem area, used to measure audience response through engagement metrics and qualitative feedback. The key distinction from other social media experiments is that this method explores a topic without proposing a specific value proposition or product. You are testing whether an audience exists and what they care about, not whether they will buy something specific.
For testing a specific value proposition through online advertising, see VPT Online Ad. That method is evaluative — it measures response to a defined offer. This method is generative — it discovers what resonates within a problem space.
Common Use Case
You have a hypothesis about a customer segment and a problem space, but you do not yet know whether a real audience exists, what language they use, or which aspects of the problem matter most to them. You want a low-cost, low-commitment way to surface that signal in the wild — observing how people in your target segment react to varied framings of the topic — before committing to interviews, surveys, or a value proposition. You are still in the discovery phase: listening, not selling.
Helps Answer
- Does an audience exist that cares about this topic or problem area?
- What specific aspects of the topic generate the most engagement?
- What language and framing does the audience respond to?
- Which platforms and content formats reach this audience most effectively?
- What questions, objections, and related topics does the audience raise?
- Is this audience reachable and engaged enough to build a business around?
Description
Social media platforms are living laboratories for understanding what people care about. By publishing content around a problem area and observing how people react, you can learn about audience size, engagement patterns, language preferences, and adjacent concerns — all before building anything.
The method is generative because you are not testing a hypothesis about a specific product. Instead, you are publishing varied content around a theme and seeing what resonates. A post about “the frustration of tracking expenses as a freelancer” might get moderate engagement while a post about “the anxiety of not knowing if you have saved enough for taxes” goes viral. This difference tells you something important about how your audience experiences the problem.
The qualitative analysis of comments is often more valuable than the engagement metrics themselves. Comments reveal how people think about the problem, what solutions they have tried, what language they use, and what adjacent problems they face. A single insightful comment thread can generate more actionable learning than a thousand likes.
How to
Prep
- Define your topic area and audience hypothesis. Write a clear statement of the problem space you are exploring and who you believe cares about it. For example: “Small business owners who struggle with cash flow management.” This hypothesis guides your content strategy but should remain flexible based on what you learn.
- Choose 2-3 platforms. Select platforms where your hypothesized audience is likely active: LinkedIn for B2B professionals, Instagram or TikTok for consumer audiences, Twitter/X for tech and media, Reddit for niche communities, Facebook Groups for local or interest-based communities. Do not spread yourself across too many platforms — depth on 2-3 is better than breadth across 6.
- Create a content plan with varied angles. Plan 10-15 pieces of content that approach your topic from different angles: educational posts, provocative questions, personal stories, statistics, polls, and how-to content. Vary the framing to discover which angles resonate most. For example, if exploring project management pain points, create posts about missed deadlines, meeting overload, tool fatigue, and remote collaboration friction.
- Set up tracking. Decide which metrics you will record per post (impressions, engagement rate, click-through rate, follower growth) and where (a simple spreadsheet is enough). Set up native platform analytics access for each chosen platform before you start posting, so you do not lose early data.
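The tracking setup above amounts to a fixed per-post schema. A minimal sketch in Python, where the column names are illustrative assumptions — adapt them to whatever metrics your chosen platforms actually expose:

```python
import csv
import io

# Illustrative per-post tracking schema; these column names are
# assumptions, not mandated by any platform's analytics.
FIELDS = ["date", "platform", "angle", "format", "impressions",
          "likes", "comments", "shares", "clicks", "new_followers"]

def to_csv(records):
    """Render per-post records as CSV text, one row per post.
    Metrics missing from a record default to 0, so partial data
    from a platform that lacks a metric still fits the sheet."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for r in records:
        writer.writerow({f: r.get(f, 0) for f in FIELDS})
    return buf.getvalue()
```

The output pastes straight into any spreadsheet; the point is simply to fix the columns before the first post goes out.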
Execution
- Publish consistently for 2 weeks. Post 3-5 times per week on each platform. Use each platform’s native format (short text on Twitter/X, visual on Instagram, long-form on LinkedIn). Optionally allocate $50-100 to boost 2-3 of your best-performing posts to expand reach beyond your immediate network.
- Engage actively with responses. Reply to every comment. Ask follow-up questions. When someone shares their experience, ask “what have you tried to solve this?” or “how often does this happen?” These comment conversations are primary research data. Do not pitch a product — stay in listening mode.
- Track quantitative metrics per post. For each post, record: impressions/reach, engagement rate ((likes + comments + shares) / impressions), click-through rate (if linking), and follower growth. Use native platform analytics or scheduling tools to aggregate data. Tag each post with its angle and format so you can compare across categories later.
- Capture qualitative data as you go. Screenshot or export interesting comment threads while they are fresh — platforms hide or rearrange older threads, and threaded replies are easy to lose. Keep a running notes file of recurring phrases, unprompted comparisons to existing tools, and emotional language (“I hate that…”, “the worst part is…”).
- Hold the line on listening. If commenters ask “what is your solution?” or “are you building something?” resist the urge to pitch. A short, neutral reply (“I am exploring this problem area, not pitching anything yet — what would you want?”) preserves the generative posture and often produces the richest follow-up data.
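The engagement-rate formula used throughout the steps above is worth pinning down, since "likes + comments + shares / impressions" is easy to misread. A minimal sketch:

```python
def engagement_rate(likes, comments, shares, impressions):
    """Engagement rate = (likes + comments + shares) / impressions.
    Note the parentheses: all interactions are summed first, then
    divided by impressions. Returns 0.0 for posts with no recorded
    impressions rather than raising ZeroDivisionError."""
    if impressions == 0:
        return 0.0
    return (likes + comments + shares) / impressions
```

So a post with 40 likes, 12 comments, and 5 shares on 1,200 impressions has a rate of 57 / 1200, or 4.75% — comparable across posts regardless of how far each one reached.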
Analysis
- Aggregate quantitative metrics by angle and format. Group posts by angle (e.g. “tax anxiety” vs. “expense tracking”) and by format (poll vs. story vs. educational). Compare engagement rate, not raw likes — a post that reached fewer people but engaged a higher fraction of them is a stronger signal.
- Cluster qualitative comments thematically. Categorize comments by theme: agreement/recognition, personal stories, questions asked, solutions mentioned, objections, and related topics raised. Look for patterns across multiple posts. Comments that start with “This is so true because…” are particularly valuable — they contain unprompted customer language and context.
- Interpret high-resonance angles. High engagement on a specific angle indicates which framing of the problem resonates most strongly. Posts where people tag friends or share personal stories indicate emotional resonance — the problem is real and felt. Questions in the comments reveal what information the audience is seeking.
- Interpret low or flat results. If engagement is consistently low across all angles despite adequate reach, the audience may not be active on the chosen platforms, or the problem may not be significant enough to generate discussion. Before concluding the topic does not resonate, sanity-check content quality and posting times — low engagement may reflect execution, not topic.
- Read engagement depth, not just volume. The ratio of passive engagement (likes) to active engagement (comments, shares) indicates depth of interest. High comment rates suggest the topic provokes thought and discussion. Follower growth during the campaign indicates sustained interest, not just one-time engagement.
- Synthesize an audience and problem profile. Compile the angles, language, and themes that drove the most engagement into a one-page synthesis: who showed up, what they said, what they did not say, and which framings now feel like working hypotheses versus dead ends.
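The aggregation and depth checks described above can be sketched in a few lines. The field names match the illustrative tracking schema suggested earlier and are assumptions, not a prescribed format:

```python
from collections import defaultdict

def mean_rate_by(posts, key):
    """Group posts by a key ("angle" or "format") and return the mean
    engagement rate per group, so framings are compared on rate
    rather than raw likes."""
    groups = defaultdict(list)
    for p in posts:
        rate = (p["likes"] + p["comments"] + p["shares"]) / p["impressions"]
        groups[p[key]].append(rate)
    return {k: sum(v) / len(v) for k, v in groups.items()}

def depth_ratio(post):
    """Active engagement (comments + shares) relative to passive
    engagement (likes). Higher values suggest the topic provokes
    discussion, not just approval. Guards against zero likes."""
    return (post["comments"] + post["shares"]) / max(post["likes"], 1)
```

Running `mean_rate_by(posts, "angle")` over the campaign data makes the "tax anxiety vs. expense tracking" comparison concrete: the angle with the higher mean rate is the stronger working hypothesis, and a high `depth_ratio` on its posts is the signal that it provoked real discussion.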
Pitfalls
- Platform bias: Each social media platform has its own demographic skew and content culture. Results on LinkedIn may not generalize to your broader market. Run the campaign on at least 2 platforms to triangulate.
- Engagement-equals-demand fallacy: People who engage with content about a problem are not necessarily people who would pay for a solution. Social media engagement reflects interest, not purchase intent. Always follow up with a method that probes willingness to pay before committing to a product direction.
- Algorithmic amplification: Platform algorithms may amplify certain content types (controversial, emotional, visual) regardless of their relevance to your research question. A viral post may tell you more about the algorithm than about your market.
- Echo chamber effect: If your initial audience is your own network, their engagement patterns may reflect social obligation rather than genuine interest in the topic. Track engagement from people outside your immediate network separately.
- Vocal minority bias: Commenters are a small fraction of your audience. The people who comment may have stronger opinions or more extreme experiences than the silent majority. Treat comment patterns as directional, not representative.
- Confirmation bias: Founders tend to over-weight comments that agree with their hypothesis and dismiss those that disagree. Have someone else categorize a sample of comments blind to your hypothesis as a check.
- Content quality confound: Low engagement may reflect poor content execution (bad headline, wrong posting time, weak visual) rather than lack of interest in the topic. Test multiple formats before concluding the topic does not resonate.
- Short time horizon: Two weeks is enough to detect initial signals, but not enough to account for seasonal patterns or to build a representative audience from scratch. Treat findings as early indications, not conclusions.
Learn more
Case Studies
Buffer “Open” blog and social transparency
Buffer used its public blog and social channels to publish raw operating data (revenue, salaries, diversity stats) and observe which posts about which problems generated the most engagement from founders, marketers, and remote workers. The pattern of resonance directly shaped Buffer’s later product positioning around transparency-friendly teams.
Nomad List (Pieter Levels)
In 2014, Pieter Levels tweeted a publicly editable Google Sheet seeded with ~10 cities he knew, asking digital nomads “do you know any other places that are cool like this?” Within a month, more than a thousand strangers had added cities and — critically — added new columns he hadn’t included (cost of living, internet speed, safety). The engagement told him not just that the problem was real, but which dimensions of “where should I live as a nomad” actually mattered to the audience; those columns became the spine of the eventual Nomad List product.