Reference
AI Code Vulnerability Index
The Sherlock Forensics AI Code Vulnerability Index catalogs 28 vulnerability patterns commonly found in code generated by GitHub Copilot, Claude, ChatGPT, Cursor, and other AI assistants. Each entry includes a severity rating, the AI tools affected, an observed frequency percentage, and remediation guidance. Sherlock Forensics offers AI code security audits starting at $1,500 CAD. Contact: 604.229.1994.
Last updated: April 2026
Catalog
Vulnerability Patterns in AI-Generated Code
| Vulnerability | Severity | AI Tools Affected | Freq. | Description | Remediation |
|---|---|---|---|---|---|
| Plaintext Password Storage | Critical | All | 38% | Passwords stored in .txt, .json, .csv or database columns without hashing. Any access yields all credentials. | Use bcrypt, scrypt or Argon2 with per-user salts. Never store plaintext passwords. |
| Client-Side Only Authentication | Critical | Cursor, Bolt, Lovable | 42% | Login validation in JavaScript with no server-side verification. Bypassed by modifying browser code or calling API directly. | Implement server-side authentication. Validate sessions on every protected endpoint. |
| No Rate Limiting on Login | High | All | 88% | Authentication endpoints accept unlimited requests. Enables brute-force password attacks and credential stuffing. | Implement rate limiting (e.g. 5 attempts per minute). Add account lockout after repeated failures. |
| JWT Without Expiration | High | ChatGPT, Copilot | 52% | JSON Web Tokens issued without an exp claim. Stolen tokens grant permanent access. | Set short expiration (15-60 minutes). Implement refresh token rotation. |
| Predictable Password Reset Tokens | High | ChatGPT, Cursor | 31% | Reset tokens use sequential IDs, timestamps or Math.random(). Attackers predict valid tokens and take over accounts. | Use crypto.randomBytes() or secrets.token_urlsafe(). Set expiration to 15 minutes. |
| Session Fixation | High | ChatGPT, Copilot | 28% | Session identifier not rotated after login. Attacker sets a known session ID before authentication. | Regenerate session ID after every authentication event and privilege change. |
| String-Concatenated SQL Queries | Critical | All | 54% | User input inserted directly into SQL strings via concatenation or template literals. Enables full database compromise. | Use parameterized queries exclusively. Never concatenate user input into SQL. |
| Command Injection via exec() | Critical | ChatGPT, Copilot | 18% | User input passed to child_process.exec(), os.system() or similar without sanitization. Enables remote code execution. | Use execFile() with argument arrays. Never pass user input to shell commands. |
| XSS via Unsanitized Output | High | All | 31% | User input rendered in HTML without encoding. Enables cookie theft, session hijacking and phishing. | Use framework auto-escaping. Encode all dynamic output. Implement Content-Security-Policy headers. |
| Server-Side Request Forgery | High | ChatGPT, Cursor | 14% | User-supplied URLs fetched by the server without validation. Enables access to internal services and metadata endpoints. | Validate and allowlist destination URLs. Block internal IP ranges. Use a dedicated HTTP client with restrictions. |
| No Input Validation | Medium | All | 72% | User input accepted without type checking, length limits or format validation. Prerequisite for multiple attack classes. | Validate all input server-side. Define schemas for expected data types and ranges. |
| Exposed .env Files | Critical | Cursor, Bolt, Lovable | 78% | .env files with API keys and database credentials accessible via URL, committed to git or bundled into client-side code. | Add .env to .gitignore. Use server-side env loading. Never reference .env in client bundles. |
| Hardcoded API Keys | Critical | All | 61% | API keys, database credentials and JWT secrets embedded directly in source files. Survive into production from AI placeholders. | Use environment variables or secrets management services. Scan git history for committed secrets. |
| Hallucinated Package Dependencies | High | ChatGPT, Copilot | 34% | AI references packages that do not exist. Attackers register these names with malicious code on npm and PyPI. | Verify every dependency against live registries before installing. Automate checks in CI/CD. |
| Math.random() for Security Tokens | High | All | 45% | Non-cryptographic PRNG used for tokens, session IDs and API keys. Output is predictable with sufficient observations. | Use crypto.getRandomValues() in JS, secrets module in Python, SecureRandom in Java. |
| MD5 or SHA1 for Password Hashing | High | ChatGPT, Copilot | 22% | Fast hash algorithms used for passwords. GPU cracking recovers plaintext in minutes to hours. | Use bcrypt, scrypt or Argon2 with appropriate work factors. |
| Broken Object-Level Authorization | High | All | 47% | Users access other users' data by changing ID values in URLs or API requests. App checks login but not resource ownership. | Verify resource ownership on every request. Filter queries by authenticated user ID. |
| Admin Panel Without Auth | Critical | Cursor, Bolt, Lovable | 26% | Administrative interfaces accessible without authentication. AI creates admin UI but omits auth middleware. | Apply authentication and role-based authorization to all admin routes. Use separate admin auth flow. |
| No CSRF Protection | Medium | All | 58% | State-changing requests lack CSRF tokens. Attackers forge requests from other sites using authenticated sessions. | Implement CSRF tokens on all state-changing forms. Use SameSite cookie attribute. |
| Session Tokens in localStorage | High | All | 64% | Auth tokens stored in localStorage instead of httpOnly cookies. Accessible to any XSS payload. | Store tokens in httpOnly, Secure, SameSite cookies. Never expose tokens to JavaScript. |
| Permissive CORS Configuration | Medium | Cursor, Copilot | 51% | Access-Control-Allow-Origin set to * with credentials allowed. Enables cross-origin data theft. | Allowlist specific origins. Never combine wildcard origin with credentials. |
| Firebase/Supabase Rules Disabled | Critical | Cursor, Bolt, Lovable | 35% | Database security rules left in test/open mode. Any authenticated or anonymous user can read and write all data. | Define granular security rules. Test rules with the Firebase/Supabase emulator before deploying. |
| Debug Mode in Production | Medium | ChatGPT, Copilot | 41% | Development flags left enabled in production. Exposes detailed error messages, stack traces and internal paths. | Set NODE_ENV=production or FLASK_DEBUG=0. Use environment-specific configuration files. |
| Missing Security Headers | Medium | All | 83% | No Content-Security-Policy, X-Frame-Options, X-Content-Type-Options or Strict-Transport-Security headers. | Add security headers via middleware or web server configuration. Use helmet.js for Express apps. |
| Exposed Stack Traces | Medium | All | 56% | Unhandled exceptions return full stack traces to users. Reveals internal file paths, dependency versions and logic. | Implement global error handlers. Return generic error messages to users. Log details server-side only. |
| Missing HTTPS Redirect | Medium | Replit, Bolt | 29% | HTTP requests not redirected to HTTPS. Credentials and session tokens transmitted in cleartext. | Force HTTPS redirect at server or load balancer level. Set HSTS header. |
| No Audit Logging | Medium | All | 91% | No logging of authentication events, authorization failures or data access. Breaches go undetected. | Log all auth events, access control failures and data modifications. Set up alerts for anomalous patterns. |
| Insecure Deserialization | Critical | ChatGPT, Copilot | 22% | pickle.loads(), ObjectInputStream or unserialize() on untrusted input without type filtering. Enables remote code execution. | Avoid deserializing untrusted data. Use JSON instead of binary formats. Add strict type allowlisting. |
Frequency percentages are based on aggregate findings from Sherlock Forensics security assessments, January to April 2026. See the full 2026 AI Code Security Report for methodology.
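The parameterized-query remediation for string-concatenated SQL can be sketched with Python's built-in sqlite3 driver (an illustrative in-memory database, not any client's actual schema):

```python
import sqlite3

# Demonstration database: one user record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES (?)", ("alice@example.com",))

# VULNERABLE (do not do this): input concatenated into the SQL string.
#   conn.execute(f"SELECT * FROM users WHERE email = '{user_input}'")

# SAFE: the driver binds the value separately from the query text,
# so a classic injection payload is treated as literal data.
user_input = "' OR '1'='1"
rows = conn.execute(
    "SELECT id, email FROM users WHERE email = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the payload matches no email, instead of dumping the table
```

The same principle applies to every driver and ORM: the query text and the values travel separately, so user input can never change the query's structure.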
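The password-hashing remediation (replacing plaintext storage and fast hashes like MD5/SHA1) can be sketched with the standard library's memory-hard scrypt KDF; the function names and work-factor parameters below are illustrative choices, not a drop-in policy:

```python
import hashlib
import hmac
import os

# Illustrative scrypt work factors; tune for your hardware budget.
SCRYPT_PARAMS = dict(n=2**14, r=8, p=1)

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) using a fresh per-user random salt."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    digest = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return hmac.compare_digest(digest, expected)
```

bcrypt and Argon2 follow the same shape via third-party packages; the essentials are a slow, memory-hard KDF, a unique salt per user, and a constant-time comparison.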
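The remediation for predictable reset tokens and Math.random()-based tokens can be sketched with Python's secrets module plus the 15-minute expiry the table recommends (the record layout and helper names are hypothetical):

```python
import secrets
import time

TOKEN_TTL = 15 * 60  # 15-minute expiry, per the remediation guidance

def issue_reset_token() -> dict:
    """Generate an unpredictable token from the OS CSPRNG, with an expiry."""
    return {
        "token": secrets.token_urlsafe(32),      # never random.random()
        "expires": time.time() + TOKEN_TTL,
    }

def token_valid(record: dict, presented: str) -> bool:
    """Reject expired tokens; compare in constant time to resist timing probes."""
    return (
        time.time() < record["expires"]
        and secrets.compare_digest(record["token"], presented)
    )
```

The JavaScript equivalent is crypto.randomBytes() or crypto.getRandomValues(); the common thread is drawing from a cryptographic RNG rather than a predictable PRNG.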
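The command-injection remediation (argument arrays instead of shell strings) can be sketched in Python, where subprocess with a list plays the role the table assigns to execFile(); the hostile filename is a contrived example:

```python
import subprocess

filename = "report.txt; rm -rf /"  # hostile input embedding a shell command

# VULNERABLE (do not do this): shell=True hands the input to a shell.
#   subprocess.run(f"cat {filename}", shell=True)

# SAFE: an argument list with no shell. The entire input is passed to the
# program as one literal argument; the ';' is never interpreted.
result = subprocess.run(
    ["echo", filename], capture_output=True, text=True, check=True
)
print(result.stdout)  # the filename verbatim, nothing executed
```

Node's execFile("cat", [filename]) gives the same guarantee: without a shell in the middle, metacharacters in user input have no special meaning.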
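The broken object-level authorization fix (verify resource ownership, not just login) reduces to one extra check before returning a record. A minimal sketch with a hypothetical in-memory store:

```python
# Hypothetical data store; in practice this is a database query
# filtered by the authenticated user's ID.
NOTES = {
    1: {"owner_id": 10, "body": "alice's note"},
    2: {"owner_id": 20, "body": "bob's note"},
}

def get_note(note_id: int, current_user_id: int) -> dict:
    """Return a note only if the requester owns it."""
    note = NOTES.get(note_id)
    # Being logged in is not enough: also verify ownership. Raising the
    # same error for "missing" and "not yours" avoids leaking existence.
    if note is None or note["owner_id"] != current_user_id:
        raise PermissionError("not found")
    return note
```

The vulnerable pattern is fetching by ID alone and trusting the URL; the fix is making ownership part of every lookup, ideally in the query itself (WHERE id = ? AND owner_id = ?).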
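The login rate-limiting remediation (e.g. 5 attempts per minute) can be sketched as a sliding-window counter; the class below is an illustrative in-process version, whereas production systems usually back this with Redis or middleware:

```python
import time
from collections import deque

class LoginRateLimiter:
    """Allow at most `limit` attempts per `window` seconds, per key."""

    def __init__(self, limit=5, window=60.0):
        self.limit = limit
        self.window = window
        self.attempts = {}  # key -> deque of attempt timestamps

    def allow(self, key, now=None):
        """Record one attempt for `key`; return False if over the limit."""
        now = time.monotonic() if now is None else now
        q = self.attempts.setdefault(key, deque())
        while q and now - q[0] > self.window:  # drop expired attempts
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True
```

Keying on both username and source IP, plus account lockout after repeated failures, closes the credential-stuffing angle the table describes.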
Related
Further Reading
2026 AI Code Security Report
Full research report with methodology, aggregate statistics and recommendations based on our 2026 assessment data.
5 Vulnerabilities AI Code Assistants Introduce
Deep-dive analysis of the five most common vulnerability patterns with real code examples from audits.
AI Code Security Audit
Our AI code audit service covers every vulnerability pattern in this index. Quick audits from $1,500 CAD.
Get Started
Find These Vulnerabilities in Your Code
Sherlock Forensics tests for every pattern in this index. Quick audits from $1,500 CAD. Order online.
Order Online