These are the most common and serious safety issues found in AI-generated code. Understanding each one is the first step to preventing it.
Critical
Secret & Credential Exposure
AI frequently places API keys, database passwords, and other secrets directly in source code — or in places that get committed to git repositories. Once a secret is in git history, it must be considered permanently compromised, even if you delete it later.
⚠ What AI might write: const API_KEY = "sk-live-abc123xyz..." — directly in your component file, committed to GitHub, visible to anyone with repo access.
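A safer pattern is to load secrets from the environment at runtime and fail fast if one is missing, so the key never appears in source or git history. This is a minimal sketch; the variable name API_KEY is illustrative.

```typescript
// Read a required secret from the environment instead of hardcoding it.
// Throwing on a missing value makes misconfiguration fail loudly at startup.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: const apiKey = requireEnv("API_KEY");
```

Pair this with a `.gitignore` entry for your local `.env` file so the values stay out of the repository entirely.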
Critical
Missing or Broken Authentication
AI-generated auth code often looks correct but contains subtle flaws: JWT tokens not properly validated, sessions not expiring, password reset flows that can be bypassed, or routes that should be protected left open. Auth bugs are catastrophic — they let attackers impersonate users or access any account.
⚠ A common pattern: AI generates a "protect this route" middleware but applies it inconsistently — some sub-routes remain unprotected without it being obvious from reading the code.
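One way to avoid the inconsistent-middleware trap is to invert the default: protect every route and opt specific routes out. A sketch of that deny-by-default check, with illustrative paths and a stand-in authentication flag:

```typescript
// Deny by default: only explicitly listed paths skip the auth check,
// so a forgotten sub-route fails closed instead of silently open.
const PUBLIC_PATHS = new Set(["/login", "/signup", "/health"]);

function isProtected(path: string): boolean {
  return !PUBLIC_PATHS.has(path);
}

function authorize(path: string, isAuthenticated: boolean): boolean {
  return !isProtected(path) || isAuthenticated;
}
```

In a real framework you would wire this into global middleware once, rather than remembering to attach it to each route.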
Critical
SQL Injection & Query Vulnerabilities
When AI builds database interactions, it may construct queries using string concatenation with user input rather than parameterised queries. This creates SQL injection vulnerabilities — one of the oldest and most dangerous attack vectors — allowing attackers to read, modify, or delete your entire database.
⚠ Vulnerable pattern: db.query("SELECT * FROM users WHERE email = '" + userInput + "'") — user input directly in the query string.
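The fix is to keep the SQL text static and pass user input separately so the driver binds it safely. This sketch assumes a node-postgres-style `query(text, values)` API with `$1` placeholders; the placeholder syntax (`?`, `$1`, `:email`) varies by driver.

```typescript
// User input goes only into values[], never into the SQL string itself.
type ParameterisedQuery = { text: string; values: unknown[] };

function findUserByEmail(email: string): ParameterisedQuery {
  return {
    text: "SELECT * FROM users WHERE email = $1",
    values: [email],
  };
}

// Then: const q = findUserByEmail(userInput);
//       await db.query(q.text, q.values);
```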
High
No Input Validation or Sanitisation
AI-generated forms and API endpoints frequently accept any input without validation. This enables: Cross-Site Scripting (XSS) attacks where attackers inject malicious scripts; buffer overflows from unexpectedly large inputs; and business logic bypasses where users submit values outside expected ranges.
⚠ A text field with no length limit, no character filtering, and no server-side validation. An attacker submits 100,000 characters or a script tag — both go straight to the database.
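Server-side validation for that text field can be a few lines. The specific limits below (non-empty, 500 characters, no angle brackets) are illustrative assumptions; choose constraints that match your actual business rules, and validate on the server even if the form also validates in the browser.

```typescript
// Minimal server-side validation for a single text field.
type ValidationResult =
  | { ok: true; value: string }
  | { ok: false; error: string };

function validateComment(input: unknown): ValidationResult {
  if (typeof input !== "string") return { ok: false, error: "must be a string" };
  const trimmed = input.trim();
  if (trimmed.length === 0) return { ok: false, error: "must not be empty" };
  if (trimmed.length > 500) return { ok: false, error: "too long (max 500 chars)" };
  if (/[<>]/.test(trimmed)) return { ok: false, error: "angle brackets not allowed" };
  return { ok: true, value: trimmed };
}
```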
High
Insecure Direct Object References (IDOR)
AI often generates endpoints that fetch records by ID without checking whether the logged-in user is authorised to see that record. This means a user can change an ID in a URL or API call to access another user's data. Extremely common in AI-generated CRUD operations.
⚠ GET /api/invoices/1247 — AI generates this but doesn't check that invoice #1247 belongs to the currently authenticated user. Change 1247 to 1248: you see someone else's invoice.
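The missing step is an ownership check before the record is returned. A sketch for the invoice example, where the `Invoice` shape is an assumption and the record has already been loaded by ID:

```typescript
type Invoice = { id: number; ownerId: string; total: number };

// Compare the record's owner to the authenticated user before returning it.
function authorizeInvoiceAccess(invoice: Invoice, currentUserId: string): Invoice {
  if (invoice.ownerId !== currentUserId) {
    // In a real handler, respond 404 rather than 403 so you don't
    // leak that the invoice exists.
    throw new Error("Not found");
  }
  return invoice;
}
```

Better still, scope the query itself (e.g. `WHERE id = $1 AND owner_id = $2`) so unowned records are never fetched at all.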
High
Data Stored Without Encryption
Sensitive user data — passwords, personal information, health data, financial details — may be stored in plain text. AI rarely adds encryption proactively. In the event of a database breach, unencrypted data is immediately usable by attackers.
⚠ AI generates user registration storing passwords as plain text in the database. Any database dump immediately exposes every user's password.
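Passwords should never be stored in any recoverable form; they should be hashed with a slow, salted algorithm. A minimal sketch using Node's built-in scrypt, with a random per-user salt stored alongside the hash:

```typescript
import { scryptSync, randomBytes, timingSafeEqual } from "node:crypto";

// Hash a password with a fresh random salt; store "salt:hash".
function hashPassword(password: string): string {
  const salt = randomBytes(16);
  const hash = scryptSync(password, salt, 64);
  return `${salt.toString("hex")}:${hash.toString("hex")}`;
}

// Re-derive the hash with the stored salt and compare in constant time.
function verifyPassword(password: string, stored: string): boolean {
  const [saltHex, hashHex] = stored.split(":");
  const hash = scryptSync(password, Buffer.from(saltHex, "hex"), 64);
  return timingSafeEqual(hash, Buffer.from(hashHex, "hex"));
}
```

Dedicated libraries such as bcrypt or argon2 are also common choices; the point is that a database dump should yield only salted hashes, never plaintext.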
Medium
No Rate Limiting
Login endpoints, registration flows, and API endpoints without rate limiting are vulnerable to brute-force attacks and abuse. AI almost never adds rate limiting unless explicitly asked. This can also result in large unexpected cloud bills from automated abuse.
⚠ An attacker writes a script that tries 100,000 password combinations on your login form. Without rate limiting, it runs unimpeded until it finds a match.
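A minimal fixed-window rate limiter, keyed by IP or account, can stop that script. The limits below are illustrative, and a production system would typically use a shared store such as Redis rather than in-process memory:

```typescript
// Fixed-window rate limiter: at most maxHits per key per window.
class RateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();

  constructor(private maxHits: number, private windowMs: number) {}

  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // Start a fresh window for this key.
      this.hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.maxHits;
  }
}

// Usage: if (!limiter.allow(req.ip)) respond with 429 Too Many Requests.
```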
Medium
Over-permissive CORS Configuration
AI frequently sets CORS to "allow all origins" (Access-Control-Allow-Origin: *) for convenience during development — and this makes it into production. A wildcard lets any website on the internet call your API and read the responses. Worse, AI sometimes pairs it with code that reflects whatever origin the request sends while also allowing credentials, which lets malicious sites make authenticated API calls on behalf of your users.
⚠ corsOptions = { origin: '*' } — any website can now make API calls to your backend. If credentials are also enabled or the origin is reflected, those calls ride on a visitor's active session.
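The safer approach is an explicit allowlist: echo the origin only when it is one of your own front-end domains. The origins below are placeholders, and the callback shape mirrors the cors package's origin-function style:

```typescript
// Explicit origin allowlist instead of origin: '*'.
const ALLOWED_ORIGINS = new Set([
  "https://app.example.com",
  "https://admin.example.com",
]);

// Return the origin to echo it in Access-Control-Allow-Origin,
// or false so the browser blocks the cross-origin request.
function corsOrigin(origin: string | undefined): string | false {
  if (origin && ALLOWED_ORIGINS.has(origin)) return origin;
  return false;
}
```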
Medium
Dependency Vulnerabilities
AI adds libraries without checking whether those libraries have known security vulnerabilities. Third-party packages are a leading attack vector — attackers target popular packages to inject malicious code into thousands of downstream projects.
⚠ AI adds an npm package that was last updated in 2019 and has 3 known critical CVEs. Your app now inherits all of them.
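For Node projects, a routine audit catches most of this. npm ships a built-in scanner that checks the lockfile's dependency tree against the public vulnerability database:

```shell
# Report known vulnerabilities in the dependency tree (reads package-lock.json).
npm audit

# Apply non-breaking upgrades for the vulnerable packages where available.
npm audit fix
```

Run the audit in CI so a new vulnerable dependency fails the build instead of shipping. Other ecosystems have equivalents (e.g. `pip-audit` for Python, `cargo audit` for Rust).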