Troubleshooting · Gap 6
When Things Break
20 failure scenarios · Step-by-step recovery · Exact prompts included
Bookmark this guide. Don't read it cover to cover — come here when you're stuck, search for your symptom, and follow the recovery path. Every scenario includes the exact prompt to give the AI, and when to stop prompting and take a different approach entirely.
These are the 20 most common failure modes in vibe coding, in order of how often they stop people cold. Each one has a clear way out.
Problem 01 · AI Behaviour
The AI keeps making the same mistake
You correct it, it fixes the immediate instance, then repeats the mistake elsewhere. Could be a style issue, a pattern it keeps reverting to, or a wrong assumption baked into the session.
1. Name the pattern explicitly. Don't just fix the symptom — identify the rule being broken and state it clearly.
2. Add it as an explicit constraint in your next prompt: "Never do X. Always use Y instead. This applies to every piece of code in this session."
3. If it persists across 3+ corrections, start a new session with the rule stated in the very first message, before any task.
Recovery prompt
You keep [describe the pattern]. This is wrong because [reason]. The correct approach is [describe it specifically]. This rule applies to every piece of code you write in this session. Now re-do the last output following this rule.
Problem 02 · Lost Context
The AI has forgotten what we built earlier in the session
The AI starts suggesting things that contradict earlier decisions, re-introduces code you removed, or proposes approaches you explicitly ruled out. The context window is saturated or the conversation is too long.
1. Don't try to fix it in the same session. When the AI forgets, the session is too long — adding more corrections makes it worse.
2. Create a context summary: ask the AI to "Summarise what we've built so far, what decisions we made, and what we're about to do next."
3. Start a fresh session. Paste the summary as the first message, then continue from where you left off.
Prevention
Keep a running context doc (CONTEXT.md or equivalent) that you update at the end of every session. Pasting it at the start of a new session eliminates context loss entirely.
Problem 03 · Code Errors
The app won't start — cryptic terminal error
You run the app and get an error before it loads. The error message is long, technical, or unclear.
1. Copy the full error message — not just the last line. The root cause is usually near the top.
2. Paste it to the AI with context about your stack and what you last changed.
3. If the AI's fix produces a new error, paste that immediately — don't try to fix it yourself while also accepting AI suggestions.
Recovery prompt
My app won't start. Here's the full error:
[paste complete error]
Stack: [your framework/language]
Last change I made: [what you changed before this broke]
What's causing this and how do I fix it?
Problem 04 · AI Behaviour
The AI rewrites code I didn't ask it to change
You ask for a small change and get back a completely rewritten file or module. The new version may be "better" but it's different from what you had, and you can't easily tell what changed.
1. Don't accept the full rewrite immediately. Ask: "Show me only what changed from the original, as a diff."
2. Review the diff. Accept the changes you asked for; reject anything else.
3. Prevention: Add "Only change what I specifically ask. Do not refactor, restructure, or improve anything else." to your session starter.
The git safety net
Commit before every significant AI interaction. If you accept a rewrite and regret it, git diff shows you exactly what changed and git checkout gets you back. No commits = no safety net.
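The safety net above can be sketched end to end in a throwaway repo. File names, contents, and commit messages are hypothetical:

```shell
# Sketch of the git safety net: checkpoint, accept a rewrite, restore.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "you@example.com"
git config user.name "You"

echo "working version" > app.js
git add -A
git commit -qm "checkpoint: before asking the AI for changes"

# Simulate accepting an AI rewrite you later regret.
echo "sweeping AI rewrite" > app.js

git diff --stat            # shows exactly which files changed
git checkout -- app.js     # restores the checkpoint
cat app.js                 # prints: working version
```

If the rewrite turns out to be an improvement, commit it as the next checkpoint instead of restoring.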
Problem 05 · Architecture
The codebase has become a mess — hard to change anything
Every change breaks something unrelated. Files are sprawling. You're not sure what calls what. Adding a feature feels dangerous. You've accumulated significant technical debt from moving fast.
1. Stop adding features. Commit everything that works. This is a refactoring moment, not a building moment.
2. Ask the AI for a codebase audit: "Read these files and tell me: what are the biggest structural problems? What's causing fragility? What should be refactored first?"
3. Refactor one thing at a time, testing after each change. Never refactor and add features simultaneously.
4. Prevention going forward: After every 3–4 features, schedule a refactoring session before continuing.
Problem 06 · AI Behaviour
The AI is confidently wrong — and I believed it
The AI stated something as fact, you built on it, and later discovered it was wrong — a library API that doesn't exist, a configuration option that was removed, a framework behaviour that's version-specific.
1. For library/API questions: always verify against the official documentation, not just the AI's answer. The AI's training data has a cutoff and may reflect outdated versions.
2. Ask the AI to cite where it got the information: "Is this confirmed in the current documentation? What version is this for?"
3. For complex technical claims, ask: "What's your confidence level on this? Are there alternative approaches I should know about?"
The rule
Verify any specific API, configuration, or library behaviour claim against the official docs before building on it. The AI is excellent at patterns and architecture; it is less reliable on specific current API details.
Problem 07 · Code Errors
It worked, then I changed something small, and now it's broken
Regression after a change that seemed unrelated to the breakage. Classic sign of hidden coupling in the code.
1. Use git diff to see exactly what changed since it last worked. Be precise about the scope.
2. Paste the diff to the AI: "This was working. I made these changes [diff]. Now it fails with [error]. What did I break?"
3. If there's no git diff to consult: methodically undo changes one at a time until it works, then re-apply them one at a time to isolate the culprit.
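Steps 1 and 3 can be sketched in a throwaway repo. Here git stash parks the uncommitted change so you can confirm the bug lives in it; the file contents are hypothetical:

```shell
# Isolating a regression against the last known working commit.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "you@example.com"
git config user.name "You"

echo 'fetchUser(id)' > api.js
git add -A
git commit -qm "last known working state"

# The "small" change that broke something elsewhere.
echo 'fetchUsr(id)' > api.js

git diff                   # the exact lines to paste to the AI

git stash -q               # park the change; retest the app now
cat api.js                 # prints: fetchUser(id)
git stash pop -q           # bring it back to keep isolating
```

With several uncommitted files, git stash push <file> lets you park them one at a time, which is the methodical isolation from step 3.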
Problem 08 · Lost Context
I don't understand the code that was built
You accepted AI output and moved on. Now you need to change something and realise you don't understand how the code works. You don't know if changing X will break Y.
1. Stop and understand before changing anything. Ask the AI: "Explain this file to me line by line. What does each function do? What are the dependencies?"
2. Ask specifically about the change you want to make: "If I change X, what else might break? What should I test?"
3. Going forward: never accept a significant block of code without asking the AI to explain it first. Understanding is a prerequisite for ownership.
Problem 09 · AI Behaviour
Every fix creates a new problem — the patchwork spiral
Three fixes in and the problem count hasn't gone down. Each solution introduces a new issue. You're in a patch spiral.
1. Stop patching immediately. The foundation is wrong and patches won't fix it.
2. Describe the original goal — not the chain of fixes — and ask the AI to approach it fresh: "Ignore our previous attempts. Here is what I need. What's the cleanest way to implement this?"
3. Accept the clean re-implementation. The time spent on patching is already lost; stop trying to recover it.
Problem 10 · Architecture
The AI made an architectural decision I didn't agree to
You asked for a feature; the AI built it by making a structural choice — a new file organisation, a new pattern, a new dependency — that you didn't ask for and may not want.
1. Don't accept it. Ask the AI to explain the architectural choice before you accept the code.
2. If you disagree with the choice: "I don't want to use [approach]. Implement this using [your preferred approach] instead."
3. Prevention: For any feature that could involve structural choices, ask "What approaches could I use to implement this? Give me options with tradeoffs." Choose before the AI builds.
Problem 11 · Shipping
Works locally but breaks in production
Local dev shows everything working. Deployed app crashes, shows blank pages, or API calls fail.
1. Check environment variables first — missing env vars on the deploy platform are the most common cause of works-locally, fails-in-production issues.
2. Check the platform's build logs and runtime logs — the exact error is in there.
3. Paste the production error to the AI with your stack and deploy platform: "Works locally, fails in production on [platform]. Error: [paste]. What's different?"
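One quick way to run step 1 is to compare the keys in your local .env against what is actually set in the target environment. A minimal sketch, with made-up variable names:

```shell
# Flag every key from a local .env that is unset in the current shell.
# Run this on the deploy platform, or against its env listing.
set -e
cd "$(mktemp -d)"

# Hypothetical local .env: the variables the app expects.
cat > .env <<'EOF'
APP_DATABASE_URL=postgres://localhost/dev
APP_STRIPE_SECRET_KEY=sk_test_123
EOF

missing=""
while IFS='=' read -r key _; do
  [ -n "$key" ] || continue
  printenv "$key" >/dev/null || missing="$missing $key"
done < .env
echo "unset in this environment:$missing"
```

Most platforms expose their env through a CLI (for example, vercel env ls or heroku config), so the same comparison works against their output.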
Problem 12 · Code Errors
I can't reproduce the bug
A user reports a bug or you've seen it once but can't make it happen again. You can't debug what you can't see.
1. Add logging to the area where the bug was reported. Even console.log at key points gives you data.
2. Ask the AI: "A user reported [describe exactly what they did and saw]. I can't reproduce it. What conditions could cause this? What should I add logging for?"
3. If using Sentry or error monitoring: check if the error has been recorded with a stack trace — this is often enough to fix it without reproducing.
Problem 13 · AI Behaviour
The AI gives me a solution that's too complex for what I need
You ask for something simple and get back an over-engineered solution with abstractions, design patterns, and generalisations you don't need yet.
1. Be explicit about the scope: "I need the simplest possible solution. I don't need it to be extensible or handle cases beyond [X]. Optimise for readability, not flexibility."
2. State your context clearly: "This is a side project with one user (me). Don't build for scale."
Problem 14 · Lost Context
I don't know if it's safe to ship
You've built something with AI assistance, it works, but you're not confident about whether it's production-safe. You don't know what you don't know.
1. Run the full safety audit prompt from the Safety Guide against all files that handle user data, authentication, or external inputs.
2. Run through the pre-launch checklist in the Deploy Guide.
3. Do the "attacker's five minutes" test: try to access data that shouldn't be yours, submit malformed input, hit routes without auth.
Problem 15 · Shipping
The app is slow — users are complaining
Performance is poor in production. Pages take too long to load, API calls are slow, or the UI feels unresponsive.
1. Measure before optimising. Use Chrome DevTools' Network tab and Lighthouse to identify the actual bottleneck. Slow database? Large images? Unoptimised JS bundle?
2. Paste the Lighthouse report or slow query logs to the AI: "My app scores [X] on Lighthouse. Here are the specific issues. What should I fix first?"
3. Fix the single biggest issue first. Don't optimise prematurely.
Problem 16 · Architecture
I need to add a feature but everything is tightly coupled
Adding a new feature requires touching 6 files and you're not confident you haven't broken something. Classic tight coupling from rapid AI-assisted development.
1. Before adding the feature: "Show me how you would add [feature] given our current code. What would need to change, and why?"
2. If the answer involves many files: "What's the minimum refactoring needed to make this addition clean? Show me that refactor first, then the feature."
3. Refactor and feature-add in separate commits. Never in the same PR.
Problem 17 · Code Errors
TypeScript / type errors everywhere after an AI change
The AI added or modified code and now there are 10+ TypeScript errors across the codebase. The AI's new types don't match the existing ones.
1. Run tsc --noEmit to get the full list of errors.
2. Paste all errors to the AI: "Here are all the TypeScript errors after your last change. Fix them all, ensuring the types are consistent throughout."
3. Prevention: Ask the AI to provide TypeScript types explicitly in any prompt that involves data structures: "Include the TypeScript type definitions for everything you add."
Problem 18 · AI Behaviour
The AI refuses to help with something I legitimately need
The AI declines a reasonable request — usually security-adjacent topics, competitive analysis, or content it interprets as potentially harmful even though your use case is legitimate.
1. Reframe with explicit context: explain who you are, what you're building, and why you need the information. Legitimate context resolves most refusals.
2. Break the request into smaller, less ambiguous pieces. Sometimes a refusal is about the framing, not the substance.
3. Try a different AI tool — different models have different thresholds, and some are more suited to technical security work than others.
Problem 19 · Shipping
Users are hitting errors I didn't see in testing
Real users are doing things you didn't test. Errors are happening in production that never appeared locally.
1. If you don't have Sentry: install it immediately. You're flying blind without error monitoring.
2. With error data: share the Sentry report with the AI — it includes the stack trace, user actions, and browser environment. This is far more useful than trying to reproduce it.
3. For each error class: add a test case that would have caught it. Grow the test suite from production failures.
Problem 20 · Lost Context
I'm completely lost — I don't know where to start
Overwhelm. The codebase is too complex, the bug is too deep, or the project scope has become unclear. You've lost the thread entirely.
1. Stop. Close everything. Take a break. Overwhelm is not a coding problem — it's a state problem, and you can't code your way out of it.
2. Write down — in plain language, away from the screen — what you're trying to achieve at the top level. Just the goal, not the implementation.
3. Come back with that written goal and ask the AI: "Given everything we have, what is the single most important next step to get closer to [goal]?"
4. Do one small thing. Momentum is the cure for overwhelm, not planning.
The real cause
Most "I'm completely lost" moments come from scope creep — the project grew without a corresponding update to the Intent Worksheet. Go back to the Intent framework in the Principles guide. Rewrite your one-liner. The path usually becomes clearer immediately.