Vibe-Coded Disposable MVP

In Brief
A vibe-coded disposable MVP is a working prototype built in hours using AI coding assistants, with an upfront commitment to throw all the code away when the experiment ends. You describe the product in natural language, let AI generate the code, deploy it to a live URL, and put it in front of real users to test whether the solution delivers value. The output is behavioral evidence — did users complete the core task, come back, or ask to keep using it? The key discipline is pre-committing to disposal: the code exists to test a hypothesis, not to serve as a foundation for production.
Common Use Case
You have a product hypothesis you want to test with a working prototype as fast as possible, even if the code is throwaway. You describe the idea to an AI coding assistant, deploy a functional version in hours, and put it in front of real users to see whether they engage with it before you invest in production-quality development.
Helps Answer
- Does our solution concept actually solve the customer’s problem?
- Will users engage with this type of product experience?
- Which features matter most to early users?
- Is this problem worth solving with software at all?
Description
A vibe-coded disposable MVP is working software — described in natural language, generated by an AI coding assistant, deployed to a live URL — that you commit to deleting before you write the first line. The term “vibe coding” comes from Andrej Karpathy’s February 2025 description of a workflow where you “fully give in to the vibes, embrace exponentials, and forget that the code even exists” because the LLMs have gotten “too good.” The disposable variant adds one constraint: you set a deletion date before you start building.
What differentiates this from a paper or clickable prototype is that the user is interacting with real software that actually does the thing — not a simulation of the interaction. You get behavioral evidence about whether the solution works, not just whether the interaction flow makes sense. That distinction matters when the hypothesis is about value delivery rather than usability.
What differentiates it from any other single-feature MVP is the pre-commitment to disposal. Vibe-coded code accumulates technical debt at an unusual rate — the LLM optimizes for “make this work right now,” not for the abstractions a production system needs. The disposability constraint is load-bearing: it defends against the gravitational pull of sunk-cost reasoning that turns a throwaway prototype into the hidden foundation of a company. The lineage here runs through Alberto Savoia’s pretotyping discipline (in The Right It, 2019) — “make sure you are building The Right It before you build It right” — which treats the cheap, disposable test as a separate artifact from the eventual production build.
A few framings I use when teaching this:
- “The prototype is the experiment, not the product. When the experiment is over, throw it away.” — @TriKro
- “If you can’t bring yourself to delete the code, that’s a signal you skipped the hypothesis step.” — @TriKro
- “Vibe coding is like sketching on a napkin, except the napkin runs on Vercel.” — @TriKro
How to
Prep
- Write a clear hypothesis. Articulate exactly what you’re testing in the form: “We believe [target customer] will [specific measurable behavior] when given [solution concept] because [reason].” If you can’t write this sentence, you’re not ready to build anything. This is the same build-measure-learn discipline Eric Ries describes in The Lean Startup — the MVP exists to test a specific hypothesis, not to be a small version of the product you wish existed.
- Define the minimum feature set. List only the features required to test your hypothesis. If a feature doesn’t directly relate to the hypothesis, cut it. You’re building a disposable prototype to test one critical user behavior, not a product.
- Pre-commit to disposability. Before writing any code, set a deletion date and write it in the README. Tell your team out loud: “We will delete this on [date], regardless of what happens.” Set a calendar reminder. This pre-commitment is what makes the method work — pretotyping, in Savoia’s framing, depends on the test artifact being separate from the production artifact.
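The calendar reminder can even live in the repo itself. Here is a minimal sketch of such a guard script — the `DELETE_BY` date and the idea of a `check_deletion_date.py` file are illustrative assumptions, not part of the method:

```python
# check_deletion_date.py -- a tiny pre-commitment guard.
# DELETE_BY is a hypothetical deletion date copied from the README.
from datetime import date

DELETE_BY = date(2025, 6, 1)

def days_until_deletion(today: date) -> int:
    """Positive = days left in the experiment; zero or negative = delete now."""
    return (DELETE_BY - today).days

if __name__ == "__main__":
    remaining = days_until_deletion(date.today())
    if remaining <= 0:
        print("DELETE THIS REPO TODAY. The experiment is over.")
    else:
        print(f"{remaining} day(s) left before deletion.")
```

Running it at the start of each working session keeps the deadline visible to everyone touching the prototype.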
- Choose your AI coding tool. Pick one: Cursor, Claude Code, v0, Replit, Bolt, or Lovable are all reasonable defaults. Choose based on what you can deploy fastest, not what’s most powerful. The right tool is the one you can ship from in an afternoon. (Lovable’s mobile launch in April 2026 makes it possible to drive the agent from a phone, with previews rendered in a web browser to comply with Apple’s vibe-coding rules.)
- Recruit 5–15 real users from your target segment. Line them up before the build, not after. The window between “the prototype works” and “I should keep iterating instead of testing” is short, and recruiting after the fact is how that window closes. Make sure these are people from your actual target persona, not friends or fellow founders who will be charitable about a rough demo.
Execution
- Vibe-code the prototype. Use your chosen AI coding assistant to build the MVP:
  - Describe the user flow in plain language.
  - Let the AI generate the code.
  - Don’t refactor. Don’t write tests. Don’t set up CI/CD.
  - Focus on making the user-facing experience feel real enough to test.
  - Hardcode things that would normally be configurable.
  - Use placeholder data where it doesn’t affect the test.
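“Hardcode it” can be as blunt as the following stdlib-only sketch — the product, route, and sample data are all invented for illustration, and a real vibe-coded prototype would be whatever your AI assistant generates:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hardcoded "database" -- placeholder data that would never survive to production.
PLANS = [
    {"id": 1, "name": "Starter", "price": 0},
    {"id": 2, "name": "Pro", "price": 29},
]

class PrototypeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # One route, no auth, no config, no error handling: just enough to test.
        body = json.dumps(PLANS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def run(port: int = 8000) -> None:
    # Not production-grade; it only needs to stay up for the test window.
    HTTPServer(("", port), PrototypeHandler).serve_forever()

# run()  # uncomment to serve the prototype locally
```

The point is the shape, not the stack: one flow, fixed data, zero abstraction.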
- Deploy to a live URL. Put it somewhere real users can access it — Vercel, Netlify, Replit, Lovable, whatever is fastest. The deployment doesn’t need to be production-grade. It needs to stay up for the duration of the test.
- Run the experiment. Put the prototype in front of the 5–15 users you recruited in Prep. Observe their behavior. Measure the specific outcome defined in your hypothesis. Conduct brief follow-up interviews — but lead with what they did, not what they thought.
- Record everything. Session recordings, click paths, drop-off points. The whole point of building real software (rather than a clickable prototype) is that you get behavioral data — capture it.
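If you have no analytics tool wired up, an append-only event log is the cheapest capture mechanism. A sketch, where the file path and event names are assumptions for illustration:

```python
import json
import time
from pathlib import Path

LOG = Path("events.jsonl")  # hypothetical append-only log file

def record(user_id: str, event: str, **detail) -> dict:
    """Append one behavioral event (page view, click, task completion)."""
    entry = {"ts": time.time(), "user": user_id, "event": event, **detail}
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example: log the moment a recruited tester finishes the core task.
# record("tester-07", "task_completed", step="export_report")
```

One line per event in JSONL keeps the data trivially greppable after the prototype is deleted — the log outlives the code.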
- Delete the code. This is the hard part. Do it on the date you committed to in Prep, regardless of how the experiment went. If the hypothesis was validated, build the production version from scratch with proper architecture, informed by everything you learned. The knowledge transfers; the code doesn’t.
Analysis
The signal you’re looking for is behavioral, not verbal. Watch what users did with the prototype, not just what they said about it.
Strong positive signals:
- Users completed the core task without prompting.
- Users asked “when will this be ready?” or “can I keep using this?”
- Users tried to use it for things you didn’t build (indicates high engagement).
- Users shared it with colleagues unprompted.
Weak or misleading signals:
- “This is cool” (politeness, not validation).
- High page views with no task completion (curiosity, not value).
- Users praise the design or technology (they’re evaluating the demo, not the solution).
Red flags:
- Users needed extensive explanation to understand the purpose.
- Users completed the task but showed no interest in continuing.
- Users immediately suggested fundamental changes to the concept (not the execution).
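Separating the strong signals from the misleading ones (e.g. “high page views with no task completion”) can be done directly from an event log. A small illustrative helper — the event names are assumptions, not a standard:

```python
def completion_rate(events: list[dict]) -> float:
    """Fraction of distinct users who completed the core task."""
    users = {e["user"] for e in events}
    completed = {e["user"] for e in events if e["event"] == "task_completed"}
    return len(completed) / len(users) if users else 0.0

events = [
    {"user": "a", "event": "page_view"},
    {"user": "a", "event": "task_completed"},
    {"user": "b", "event": "page_view"},  # curiosity, not value
]
# completion_rate(events) -> 0.5
```

The same pattern extends to return visits and unprompted shares: count distinct users exhibiting the behavior, not raw event volume.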
Pitfalls
- Sunk cost fallacy. The single most dangerous bias in this method. Once the prototype works, founders desperately want to keep the code. “It’s already built, why throw it away?” Because the code was built to test a hypothesis, not to serve customers. Vibe-coded prototypes accumulate technical debt at an extraordinary rate. Keeping the code means building your company on a foundation you don’t understand.
- Scope creep. AI coding tools make it easy to add “just one more feature.” Each addition dilutes your hypothesis and makes results harder to interpret. If you catch yourself saying “while we’re at it,” stop.
- False confidence from a working demo. A functional prototype creates an emotional sense of progress that can overwhelm analytical judgment. The fact that it works does not mean customers want it. These are different questions. Write a specific, falsifiable hypothesis before writing any code: if you can’t articulate what customer behavior would prove the hypothesis wrong, you haven’t defined what you’re testing.
- Selection bias in testers. Because the prototype is rough, you may unconsciously recruit testers who are more forgiving — friends, fellow founders, early adopter types. Make sure your testers match your actual target customer persona.
- Anchoring on implementation. Once you’ve built a solution one way, it’s psychologically harder to imagine fundamentally different approaches. The prototype anchors your thinking to a specific implementation even after you delete the code.
- Pre-commit to disposal before you start building: name a specific date on which the code will be deleted. If you can’t bring yourself to delete working code when the date arrives, that’s a signal you’re invested in the solution rather than the learning.
- Measure behavior, not reactions: did users complete the core task without help? Did they return the next day without being asked? Did anyone try to pay? “This looks great!” is not a data point.
Learn more
Case Studies
Y Combinator W25 batch
In March 2025, YC president Garry Tan said that “for 25% of the Winter 2025 batch, 95% of lines of code are LLM generated.” TechCrunch’s coverage of the same panel attributes the 95%-of-lines figure to YC managing partner Jared Friedman, framed within a “Vibe Coding Is the Future” panel. That figure has become the canonical reference for vibe coding going mainstream in early-stage startups. Read more
Lovable on iOS and Android
Lovable’s mobile launch in April 2026 made vibe-coding accessible from a phone via voice or text prompts, with the agent running autonomously after an initial brief and previews rendered in a web browser to comply with Apple’s vibe-coding rules. It marks the moment when “build a disposable MVP from anywhere” stopped requiring a laptop.