⚠ Safety Guide · Critical Reading
Safe, Not Sorry
A complete safety framework for vibe coding — threats, mitigations, and a step-by-step process you can apply today
Vibe coding's speed is its greatest strength — and its greatest safety risk. Moving fast without a safety framework is how you ship data breaches, destroy user trust, and create liabilities you didn't know existed.
This guide doesn't ask you to slow down. It gives you a framework to move fast safely — knowing exactly what to check, when to check it, and what to never skip.
Risk Profile: AI-Generated Code (Without Review)
Missing input validation: 92%
Exposed secrets/API keys: 78%
Unhandled error states: 61%
Correct auth implementation: 35%
Based on common patterns in AI code reviews · Illustrative estimates
The people your tools serve deserve your full attention to their safety. This guide shows you exactly how to deliver that.
Included in The Complete Series ($29).
— 01

Why Vibe Coding Creates Unique Risks

Traditional software development has a built-in safety mechanism: the developer understands what they wrote. If something is wrong, they can reason about it. They remember the edge case they didn't handle. They know which function touches which data.

Vibe coding breaks that mechanism. When AI writes the code, you lose direct ownership of the implementation. You know what it's supposed to do — but not always what it actually does. That gap is where safety issues hide.

This is not an argument against vibe coding. It's an argument for building a deliberate safety practice alongside it — one that accounts for the specific failure modes of AI-generated code.

The Three Safety Gaps in Vibe Coding
Gap 1 — The Knowledge Gap: You don't fully understand what the AI built.

Gap 2 — The Testing Gap: AI code is accepted without systematic verification.

Gap 3 — The Pattern Gap: AI generates common patterns, which include common vulnerabilities.
Who this affects most
Non-technical founders, first-time developers, and fast-moving teams shipping quickly are most exposed. But even experienced engineers who vibe code without review discipline are at risk — speed creates complacency.
— 02

The Threat Landscape: What Can Go Wrong

These are the most common and serious safety issues found in AI-generated code. Understanding each one is the first step to preventing it.

Critical
Secret & Credential Exposure
AI frequently places API keys, database passwords, and other secrets directly in source code — or in places that get committed to git repositories. Once a secret is in a git history, it must be considered permanently compromised, even if you delete it later.
⚠ What AI might write: const API_KEY = "sk-live-abc123xyz..." — directly in your component file, committed to GitHub, visible to anyone with repo access.
Critical
Missing or Broken Authentication
AI-generated auth code often looks correct but contains subtle flaws: JWT tokens not properly validated, sessions not expiring, password reset flows that can be bypassed, or routes that should be protected left open. Auth bugs are catastrophic — they let attackers impersonate users or access any account.
⚠ A common pattern: AI generates a "protect this route" middleware but applies it inconsistently — some sub-routes remain unprotected without it being obvious from reading the code.
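The "inconsistently applied middleware" failure mode can be sketched in plain JavaScript without any framework. The names here (requireAuth, getProfile, getSettings) are illustrative, not from the guide — the point is that nothing fails loudly when the guard is forgotten:

```javascript
// A requireAuth guard that must be applied to EVERY protected handler.
// The bug pattern: forgetting it on one route. Nothing errors — the
// route is simply public, and the omission is easy to miss in review.

function requireAuth(handler) {
  return (req) => {
    if (!req.user) {
      return { status: 401, body: "Unauthorized" };
    }
    return handler(req);
  };
}

// Protected correctly:
const getProfile = requireAuth((req) => ({ status: 200, body: req.user.name }));

// The easy-to-miss mistake — same shape, no guard, silently public:
const getSettings = (req) => ({ status: 200, body: "settings" });
```

This is why step 4 below tells you to check diffs for removed or missing auth middleware: the unprotected version reads almost identically to the protected one.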
Critical
SQL Injection & Query Vulnerabilities
When AI builds database interactions, it may construct queries using string concatenation with user input rather than parameterised queries. This creates SQL injection vulnerabilities — one of the oldest and most dangerous attack vectors — allowing attackers to read, modify, or delete your entire database.
⚠ Vulnerable pattern: db.query("SELECT * FROM users WHERE email = '" + userInput + "'") — user input directly in the query string.
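The fix is to keep user input out of the SQL text entirely and pass it as a bound parameter. This sketch uses a hypothetical buildUserQuery helper and Postgres-style $1 placeholders (MySQL drivers use ?); real code would hand the pair to a driver call such as client.query(sql, params):

```javascript
// Parameterised version of the vulnerable query: the SQL text is a
// constant, and user input travels separately as a bound parameter.

function buildUserQuery(userInput) {
  return {
    sql: "SELECT * FROM users WHERE email = $1", // Postgres-style placeholder
    params: [userInput],
  };
}

const q = buildUserQuery("' OR '1'='1");
// The injection attempt stays inert data inside params — it can never
// become part of the SQL statement itself.
```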
High
No Input Validation or Sanitisation
AI-generated forms and API endpoints frequently accept any input without validation. This enables: Cross-Site Scripting (XSS) attacks where attackers inject malicious scripts; resource exhaustion from unexpectedly large inputs; and business logic bypasses where users submit values outside expected ranges.
⚠ A text field with no length limit, no character filtering, and no server-side validation. An attacker submits 100,000 characters or a script tag — both go straight to the database.
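Server-side validation for that text field can be a small, boring function. The limits and error messages below are illustrative choices, not from the guide — the principle is that every field gets a type check, a length cap, and a content check before anything touches the database:

```javascript
// Minimal server-side validator for a single text field. Rejecting
// script tags here is a backstop, not a substitute for output encoding.

function validateComment(input) {
  if (typeof input !== "string") return { ok: false, error: "not a string" };
  const trimmed = input.trim();
  if (trimmed.length === 0) return { ok: false, error: "empty" };
  if (trimmed.length > 2000) return { ok: false, error: "too long" }; // length cap
  if (/<\s*script/i.test(trimmed)) return { ok: false, error: "markup not allowed" };
  return { ok: true, value: trimmed };
}
```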
High
Insecure Direct Object References (IDOR)
AI often generates endpoints that fetch records by ID without checking whether the logged-in user is authorised to see that record. This means a user can change an ID in a URL or API call to access another user's data. Extremely common in AI-generated CRUD operations.
⚠ GET /api/invoices/1247 — AI generates this but doesn't check that invoice #1247 belongs to the currently authenticated user. Change 1247 to 1248: you see someone else's invoice.
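The IDOR fix is to make the ownership check part of the lookup itself, so changing the ID in the URL can only ever return the caller's own rows. The in-memory invoices array below stands in for a database table; the function names are illustrative:

```javascript
// IDOR in miniature: the same lookup with and without an owner check.

const invoices = [
  { id: 1247, ownerId: "alice", total: 90 },
  { id: 1248, ownerId: "bob", total: 120 },
];

// Vulnerable: fetches by id alone — any authenticated user can read any invoice.
function getInvoiceUnsafe(id) {
  return invoices.find((inv) => inv.id === id) ?? null;
}

// Fixed: authorisation is baked into the query — a wrong owner gets null,
// exactly as if the record did not exist.
function getInvoice(id, currentUserId) {
  return invoices.find((inv) => inv.id === id && inv.ownerId === currentUserId) ?? null;
}
```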
High
Data Stored Without Encryption
Sensitive user data — passwords, personal information, health data, financial details — may be stored in plain text. AI rarely adds encryption proactively. In the event of a database breach, unencrypted data is immediately usable by attackers.
⚠ AI generates user registration storing passwords as plain text in the database. Any database dump immediately exposes every user's password.
Medium
No Rate Limiting
Login endpoints, registration flows, and API endpoints without rate limiting are vulnerable to brute-force attacks and abuse. AI almost never adds rate limiting unless explicitly asked. This can also result in large unexpected cloud bills from automated abuse.
⚠ An attacker writes a script that tries 100,000 password combinations on your login form. Without rate limiting, it runs unimpeded until it finds a match.
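Rate limiting can be sketched as a small fixed-window counter. This in-memory version is illustrative only — a real deployment would use a maintained middleware (e.g. express-rate-limit) backed by a shared store such as Redis:

```javascript
// Minimal in-memory rate limiter: allow at most maxAttempts per key
// (e.g. per IP) within windowMs. Over the limit, the caller should
// reject the request (typically HTTP 429).

function makeRateLimiter(maxAttempts, windowMs) {
  const attempts = new Map(); // key -> array of attempt timestamps
  return function allow(key, now = Date.now()) {
    const recent = (attempts.get(key) ?? []).filter((t) => now - t < windowMs);
    if (recent.length >= maxAttempts) {
      attempts.set(key, recent);
      return false; // over the limit
    }
    recent.push(now);
    attempts.set(key, recent);
    return true;
  };
}
```

Applied to the login example above: a script hammering the endpoint gets three tries per window, then nothing but rejections.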
Medium
Over-permissive CORS Configuration
AI frequently sets CORS to "allow all origins" (Access-Control-Allow-Origin: *) for convenience during development — and this makes it into production. This allows malicious websites to make authenticated API calls on behalf of your users.
⚠ corsOptions = { origin: '*' } — This means any website can make API calls to your backend using a visitor's active session credentials.
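The fix is an explicit allowlist instead of '*'. The origin callback shape below follows the convention used by the cors npm package; the allowlist logic itself is plain JavaScript, and the domains are placeholders:

```javascript
// Explicit CORS origin allowlist instead of the wildcard.

const ALLOWED_ORIGINS = new Set([
  "https://app.example.com",
  "https://www.example.com",
]);

function originAllowed(origin) {
  return ALLOWED_ORIGINS.has(origin);
}

const corsOptions = {
  origin(origin, callback) {
    // Requests with no Origin header (curl, same-origin) are let through;
    // everything else must be on the allowlist.
    if (!origin || originAllowed(origin)) {
      callback(null, true);
    } else {
      callback(new Error("Origin not allowed by CORS"), false);
    }
  },
};
```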
Medium
Dependency Vulnerabilities
AI adds libraries without checking whether those libraries have known security vulnerabilities. Third-party packages are a leading attack vector — attackers target popular packages to inject malicious code into thousands of downstream projects.
⚠ AI adds an npm package that was last updated in 2019 and has 3 known critical CVEs. Your app now inherits all of them.
— 03

The Vibe Coding Safety Framework

SAFE — Security As a Foundational Element
A practical, step-by-step framework designed for real vibe coding workflows
The SAFE framework is organised into four phases: Before you build (prevention), During building (in-session habits), After building (pre-launch review), and Always (ongoing practices). Each phase has specific, actionable steps — not vague guidance. Every step is designed to be completable without deep security expertise.
Before You Build
Configure your environment to be safe by default. Many security issues can be prevented at the setup stage — before any AI writes a line of code. Steps 1–3.
While You Build
In-session habits that prevent security issues from entering your codebase in the first place. The smallest effort, the highest return. Steps 4–7.
Before You Ship
A systematic pre-launch security review. Non-negotiable before any real users touch your product. Steps 8–11.
Always
Ongoing hygiene that continues after launch: dependency updates, secret rotation, and periodic re-audits. Step 12.
— 04

Step-by-Step: The 12 Safety Steps

⬛ Phase 1 — Before You Build
01
Before
Setup
Set up secret management from day one
Never put secrets in your code. Before you write your first prompt, create a .env file and add it to .gitignore. Every API key, database URL, and password lives here — never in source code.
Create .env file in project root before starting
Add .env to .gitignore immediately
Create a .env.example with placeholder values (safe to commit)
Tell AI in your system prompt: "Never hardcode secrets. Always use environment variables."
!
Consider a secrets manager (Doppler, Vault) for team projects
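Once secrets live in .env, application code should read them from process.env (populated by a loader such as dotenv) and fail fast at startup if anything is missing. The requireEnv helper below is an illustrative sketch, not a prescribed API:

```javascript
// Fail-fast config loading: crash at startup on a missing secret,
// rather than hardcoding a fallback value into source code.

function requireEnv(name, env = process.env) {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage at startup — never a literal key in the file:
// const apiKey = requireEnv("API_KEY");
```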
02
Before
Setup
Initialise git immediately and commit often
Git is your safety net and audit trail. Every meaningful change should be committed so you can inspect history, roll back bad AI changes, and see exactly what was modified and when.
Run git init before writing any code
Commit a working state before every AI-assisted change
Never use git add . without first running git diff to review what changed
!
Use a private GitHub/GitLab repo — even for personal projects
Set up GitHub secret scanning to alert on accidentally committed secrets
03
Before
Build
Define your security requirements before prompting
Security is dramatically easier to build in than bolt on. Before your first coding session, write a short security brief that goes into every relevant prompt. This shapes AI output from the start.
Identify what sensitive data your app will handle (passwords, PII, payment info, health data)
Write a security constraints block to include in every coding prompt (see template below)
!
Identify applicable regulations (GDPR if you handle EU user data, PCI-DSS if you handle payments)
Choose a reputable auth library (Clerk, Auth0, Supabase Auth) rather than having AI build auth from scratch
Security Constraints Block — Add to Every Prompt
# Always include this in prompts that involve auth, data, or APIs

SECURITY REQUIREMENTS (non-negotiable):
- Never hardcode secrets, API keys, or credentials
- Always use parameterised queries / ORMs — never string-concatenated SQL
- Validate and sanitise all user inputs on the server side
- Use environment variables for all configuration
- Do not set CORS to allow all origins
- Always hash passwords using bcrypt or argon2 — never store plain text
- Add rate limiting to auth endpoints
- Check authorisation (not just authentication) before returning any user data
⬛ Phase 2 — While You Build
04
During
Session
Review every git diff before committing
Before every commit, run git diff and actually read what changed. You are not looking for perfect code — you are looking for: hardcoded secrets, new dependencies you didn't ask for, or changes to security-sensitive functions.
Run git diff before every commit — even "small" changes
Scan for: any string starting with sk_, key_, secret, or containing long alphanumeric strings
Check for new import / require statements — were new packages added without your knowledge?
!
Check that no auth middleware was removed or bypassed
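The "scan for secret-looking strings" habit from this step can itself be automated with a few regexes over a diff. The patterns below are deliberately crude and illustrative — dedicated tools like gitleaks or truffleHog ship far more complete rule sets:

```javascript
// Flag added diff lines that look like they contain secrets.

const SECRET_PATTERNS = [
  /\bsk_[A-Za-z0-9_]{16,}/,                 // Stripe-style keys
  /\bkey_[A-Za-z0-9_]{16,}/,
  /\bsecret\s*[:=]\s*["'][^"']{8,}["']/i,   // secret = "..."
  /\b[A-Za-z0-9]{40,}\b/,                   // long opaque tokens
];

function findSecretLines(diffText) {
  return diffText
    .split("\n")
    .filter((line) => line.startsWith("+")) // only lines being added
    .filter((line) => SECRET_PATTERNS.some((re) => re.test(line)));
}
```

Wire it into a pre-commit hook and it turns "actually read the diff" into a check that cannot be skipped in a hurry.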
05
During
Session
Ask AI to explain security-sensitive code
Whenever AI generates code that handles auth, data access, or user input, ask it to explain how the security works before moving on. If the explanation doesn't make sense to you, something is wrong — either the code or your understanding of it.
After every auth-related generation: "Explain how this prevents unauthorised access"
After every data-fetching function: "What prevents a user from accessing another user's data here?"
!
After input handling: "How is this input validated and what happens with invalid input?"
Periodically: "Are there any security concerns with the code we've written so far?"
06
During
Session
Audit new dependencies before accepting them
Every library AI adds is a potential attack surface. Before accepting any new package, spend 60 seconds checking it. Many security breaches originate from compromised or malicious npm/pip packages.
Check the package exists on npmjs.com or PyPI and is what it claims to be (spelling matters — "leftpad" vs "left-pad")
Check weekly downloads — very low numbers on common-sounding packages are a red flag
Check last updated date — packages abandoned for years carry unpatched vulnerabilities
!
Run npm audit or pip-audit after adding any new dependency
Ask AI: "Could we accomplish this without adding a new dependency?" before accepting each one
07
During
Session
Never ship from-scratch AI-generated auth to real users
Authentication is one of the hardest things to get right. The risk of subtle, invisible bugs in AI-generated auth code is too high for anything handling real users. Use a proven, maintained auth service instead.
For user-facing apps: use Clerk, Auth0, Supabase Auth, or NextAuth — not custom AI-built auth
Never store passwords in plain text — if AI does this, stop and fix immediately
Verify the auth library you're using is actively maintained and has recent security patches
!
If you must use custom auth: have a security professional review it before launch
⬛ Phase 3 — Before You Ship
08
Pre-Launch
Run the AI security audit prompt
Before any real users access your product, paste your entire codebase (or key files) into Claude and ask for a security review. This is not a substitute for a professional audit — but it catches the most common issues quickly.
Paste every file that handles auth, data, or user input into the audit prompt
Address every Critical and High finding before launch
!
Document Medium findings with a plan to address them post-launch
Re-run after every significant feature addition
AI Security Audit Prompt
# Paste this before the code you want reviewed

Please perform a security audit of the following code. Review specifically for:

1. Hardcoded secrets or credentials
2. SQL injection vulnerabilities
3. Missing input validation or sanitisation
4. Authentication/authorisation gaps (IDOR, missing middleware)
5. Insecure data storage (plain text passwords, unencrypted PII)
6. XSS vulnerabilities
7. CORS misconfiguration
8. Missing rate limiting on sensitive endpoints
9. Insecure dependencies or deprecated functions
10. Any other security concerns

For each issue found:
- Classify as Critical / High / Medium / Low
- Explain the risk in plain language
- Show the vulnerable code
- Provide the fixed code

[PASTE YOUR CODE HERE]
09
Pre-Launch
Scan for secrets in git history
Even if you've removed a secret from your current code, it may still be in your git history — permanently accessible to anyone with repo access. You must scan history, not just current files.
Run git log -p | grep -i "key\|secret\|password\|token" to scan history
Or install truffleHog or gitleaks for automated secret scanning
If a secret is found in history: rotate it immediately (assume it was seen), then clean history
!
Enable GitHub secret scanning if your repo is on GitHub — it alerts automatically
10
Pre-Launch
Run automated vulnerability scans
Several free tools perform automated security checks on your codebase and dependencies. Run them all before launch — they take minutes and catch issues that manual review misses.
npm audit (JS) or pip-audit (Python) — checks for known vulnerable dependencies
Run a Lighthouse audit in Chrome DevTools on your deployed app; the Best Practices category flags common security issues
Check security headers using securityheaders.com after deployment
!
For web apps: run OWASP ZAP (free) for automated vulnerability scanning
Use Snyk (free tier) for continuous dependency monitoring post-launch
11
Pre-Launch
Manual penetration test — the "attacker's five minutes"
Spend five minutes trying to break your own application with the mindset of an attacker. You don't need security expertise — you need curiosity and a checklist. This catches the issues tools miss.
Try accessing another user's data by changing IDs in URLs or API calls
Try submitting HTML or JavaScript in every text field: <script>alert(1)</script>
Try accessing authenticated routes without being logged in by pasting the URL directly
Submit a form 20 times rapidly — is there rate limiting?
!
Try extremely long inputs (10,000+ characters) in every text field
!
Try SQL injection in search fields: ' OR '1'='1
⬛ Phase 4 — Always
12
Ongoing
Practice
Establish ongoing security hygiene
Security is not a one-time check — it's an ongoing practice. These habits protect you after launch, as your app evolves and new vulnerabilities are discovered in libraries you already use.
Run npm audit or equivalent before every significant release
Keep dependencies updated — set a monthly calendar reminder to update and test
Rotate API keys and secrets every 90 days, or immediately after team member offboarding
!
Set up error monitoring (Sentry — free tier) to catch and investigate unexpected errors
!
Re-run the AI security audit prompt after every major feature addition
As your app grows: consider a professional security audit before significant user growth milestones
— 05

Quick Reference: The Safety Checklist

Print this. Pin it. Run through it before every launch.

🔴 Never Ship Without
All secrets in .env, not source code
.env in .gitignore
Git history scanned for secrets
No plain text passwords in database
Auth routes actually protected
AI security audit run and issues fixed
npm audit showing 0 critical/high
CORS not set to wildcard (*) in production
🟡 Do Before Launch
! Rate limiting on login / register / API
! Server-side validation on all inputs
! IDOR test: try accessing other users' data by ID
! XSS test: submit script tags in text fields
! Security headers check at securityheaders.com
! Every dependency checked for known CVEs
! Using a maintained auth service (not custom)
! Error monitoring configured
🟢 Ongoing Practices
Monthly dependency updates
Quarterly secret rotation
Re-audit after every major feature
Snyk or equivalent for continuous monitoring
Review error logs for suspicious patterns
Security constraints in every AI session prompt
Git diff review before every commit
Professional audit before major user milestones
The Most Important Reminder
Security is not something you add at the end. It is something you build into every prompt, every commit, and every session from the very beginning. The SAFE framework makes this the default — not an afterthought.