The term was popularized by Andrej Karpathy in early 2025, who described it as surrendering to the AI and not really reading the code — just describing problems, pasting errors, and asking for fixes. But vibe coding has already evolved past that pure definition.
In practice, it exists on a spectrum. At one end: complete novices building things they couldn't have built before. At the other: experienced engineers dramatically compressing their development cycle by speaking instead of typing.
What unifies both is the shift from implementation-first thinking to intent-first thinking. You describe the destination; the AI finds the path.
Good prompting for code is a skill. It's not about magic words — it's about communicating your mental model clearly enough that the AI can reconstruct it.
The systems we've relied on — healthcare, education, community infrastructure, civic participation — are failing faster than institutions can fix them. For most of history, building the tools to address that required a team, capital, and years. That barrier has now fallen.
The person who has lived closest to the problem is now the person best placed to build the solution. Not a startup. Not a product roadmap. A tool — built with whatever time you have, for the people who need it most.
Vibe coding is not interesting because it makes software development faster. It's interesting because it makes it possible for people who have never been able to do it before. That's the change worth building toward.
The best way to understand vibe coding is to do it. Here's a minimal path to your first real session — starting not with the tool, but with the thing that matters:
The gap that kills most first-time builds isn't technical. It's the distance between a real idea and a buildable brief. Most people open an AI tool too early — before they've made the three decisions that determine whether the build will succeed.
Once you can answer all three with one sentence each, you have a v1 spec. Anything beyond that is scope creep. Cut it. Build the one thing first. Every feature you defer from v1 is a feature you'll build better in v2, with real user feedback to guide you.
This is the full loop — from blank page to something you can put in front of a real person. Follow it in order. Don't skip steps. The order matters more than the speed.
After every working change, checkpoint it: git add . && git commit -m "working: [what works]". This is your undo button. AI changes can break things unexpectedly; a committed working state means you can always get back.

When a change breaks things, run git checkout . to discard all uncommitted changes and go back to the last thing that worked. Then try a different approach.

"Ship the small version" is easy advice to give and hard advice to follow when you don't know what shipping means. Here it is concretely: the minimum bar for a v1 that counts, and the three ways to get it in front of people.
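The checkpoint-and-rollback habit can be rehearsed end to end in a throwaway repository. This is a minimal sketch; the file name, commit message, and identity values are illustrative, not prescribed:

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q
git config user.email "you@example.com"   # a fresh repo needs an identity to commit
git config user.name "You"

echo "a working feature" > app.txt
git add . && git commit -q -m "working: app.txt renders"   # checkpoint the working state

echo "an AI edit that broke things" > app.txt   # simulate a bad AI change
git checkout -- .                              # discard all uncommitted changes

cat app.txt   # back to the last committed working state
```

The point of committing at every working state, not just at milestones, is that rollback becomes a one-command reflex instead of an archaeology project.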
Speed is the point of vibe coding. But shipping a product that exposes your users' data, can be exploited by attackers, or creates legal liability for you is not a success — it's a crisis you haven't discovered yet.
The people using what you build are trusting you. That trust is the most fragile thing in any product. Protect it.
Traditional developers understand what they wrote. If something is wrong with the auth logic, they remember why they made that choice. They know which function touches which data.
In vibe coding, the AI writes code you didn't author. You know what it's supposed to do — but you may not know what it actually does in edge cases. That gap is where security vulnerabilities hide.
This is not a reason to stop. It's a reason to build a review habit that accounts for it.
Before launch, verify:

- Secrets in .env, not in source code
- .env added to .gitignore
- Pasting <script>alert(1)</script> into every text field renders as text, with no popup
- npm audit showing 0 critical vulnerabilities

Running an AI audit is not the same as a professional security review. AI is excellent at catching common patterns, like the issues listed above. It is less reliable at catching architectural vulnerabilities, logic flaws in complex flows, or novel attack vectors.
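Parts of this checklist can be scripted. The sketch below assumes a Unix shell and uses illustrative file contents; the dependency audit is left as a comment because it only runs inside a real Node project:

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# 1. Secrets belong in .env, and .env must be listed in .gitignore.
echo "API_KEY=replace-me" > .env
echo ".env" >> .gitignore
grep -qx ".env" .gitignore && echo "ok: .env is gitignored"

# 2. Escaping HTML turns the XSS probe into inert text instead of running it.
payload='<script>alert(1)</script>'
escaped=$(printf '%s' "$payload" | sed 's/</\&lt;/g; s/>/\&gt;/g')
echo "escaped probe: $escaped"

# 3. In a real Node project, also run:
#    npm audit --audit-level=critical
```

A check like the grep above is cheap enough to run before every push; escaping, by contrast, is something your framework should do for you — the probe exists to confirm that it actually does.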
For a product handling sensitive data — health information, financial data, personal identifiers, children's data — a professional security review before public launch is not optional. It's the cost of operating responsibly.