What is AI slop?
AI slop is the industry term for unreviewed code generated by AI assistants like GitHub Copilot, Claude and ChatGPT that compiles and functions correctly but contains security vulnerabilities, poor architecture and technical debt. The code looks clean on the surface but hides injection flaws, hardcoded secrets and missing security controls.
The term borrows from "slop" in the AI content world, where it describes low-quality AI-generated text published without human editing. In software development, AI slop describes code that passes basic tests and looks professional but fails under adversarial conditions. It is the digital equivalent of a house that looks beautiful but has no foundation.
AI slop is not the same as bad code written by humans. Human developers make mistakes, but they typically understand the security implications of their design decisions. AI assistants generate code patterns that optimize for functionality without considering threat models, attack surfaces or security architecture.
Real-World Examples of AI Slop
At Sherlock Forensics, we audit AI-generated codebases regularly. Here are the most common patterns we find:
Hardcoded Secrets
AI assistants frequently generate code with API keys, database credentials and secret tokens embedded directly in source files. The AI needs a value to make the code work, so it uses a placeholder that developers often replace with real credentials and commit to version control. We find exposed secrets in approximately 40% of AI-generated codebases we audit.
# AI slop: hardcoded database credentials
db = psycopg2.connect(
    host="production-db.example.com",
    user="admin",
    password="SuperSecret123!"
)
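The fix is to load credentials from the environment at startup and fail loudly if they are missing. A minimal sketch; the variable names DB_HOST, DB_USER and DB_PASSWORD are illustrative, not a standard:

```python
import os

def load_db_config() -> dict:
    """Load database credentials from the environment instead of source code.

    Raises immediately if a variable is missing, so a misconfigured
    deployment fails loudly instead of falling back to a default.
    """
    required = ("DB_HOST", "DB_USER", "DB_PASSWORD")
    missing = [name for name in required if name not in os.environ]
    if missing:
        raise RuntimeError(f"missing environment variables: {', '.join(missing)}")
    return {
        "host": os.environ["DB_HOST"],
        "user": os.environ["DB_USER"],
        "password": os.environ["DB_PASSWORD"],
    }

# db = psycopg2.connect(**load_db_config())
```

In production, populate those variables from a secrets manager rather than a committed .env file.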
SQL Injection Vulnerabilities
AI assistants commonly generate SQL queries using string concatenation instead of parameterized queries. The code works perfectly in testing but allows an attacker to inject arbitrary SQL commands through user input. This is one of the oldest and most dangerous vulnerability classes, and AI consistently produces it.
# AI slop: SQL injection via string concatenation
query = f"SELECT * FROM users WHERE email = '{user_input}'"
cursor.execute(query)
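The fix is a parameterized query, which makes the driver treat user input as data rather than SQL. A runnable sketch using Python's built-in sqlite3 module (psycopg2 uses %s placeholders instead of ?, but the principle is identical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice@example.com')")

# A classic injection payload that would dump every row
# if spliced into the query with an f-string.
user_input = "' OR '1'='1"

# Parameterized query: the payload is compared literally against
# the email column, so it matches nothing.
rows = conn.execute(
    "SELECT * FROM users WHERE email = ?", (user_input,)
).fetchall()
```

Here rows comes back empty: the payload never reaches the SQL parser as code.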
Missing Authentication Checks
AI-generated APIs frequently expose endpoints without proper authentication or authorization. The AI creates the route and the business logic but forgets to verify that the requesting user is authenticated and authorized to access the resource. This leads to unauthorized data access and privilege escalation.
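One common remedy is a decorator that every protected handler must pass through, so authentication cannot be forgotten route by route. The sketch below is framework-agnostic: plain dicts stand in for real request and response objects, and VALID_TOKENS stands in for a real session or token store.

```python
from functools import wraps

VALID_TOKENS = {"secret-token-123"}  # placeholder for a real token store

def require_auth(handler):
    """Reject the request before any business logic runs."""
    @wraps(handler)
    def wrapper(request):
        token = request.get("headers", {}).get("Authorization")
        if token not in VALID_TOKENS:
            return {"status": 401, "body": "unauthorized"}
        return handler(request)
    return wrapper

@require_auth
def get_account(request):
    # Business logic only runs for authenticated callers.
    return {"status": 200, "body": "account data"}
```

Real frameworks provide this as middleware or dependencies (Flask decorators, FastAPI Depends); the point is that the check is centralized, not repeated by hand in each endpoint.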
Insecure Session Management
AI assistants often implement session handling with weak tokens, missing expiration, no CSRF protection and cookies without the Secure and HttpOnly flags. Sessions work during development but are trivially exploitable in production.
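A secure session cookie needs a cryptographically random token plus the Secure, HttpOnly, SameSite and expiry attributes. A sketch using only the standard library (a real application would let its framework set these, but the attributes are the same):

```python
import secrets
from http.cookies import SimpleCookie

def build_session_cookie() -> str:
    """Return a Set-Cookie header value with the attributes AI slop omits."""
    cookie = SimpleCookie()
    cookie["session"] = secrets.token_urlsafe(32)  # unguessable session ID
    cookie["session"]["secure"] = True       # sent over HTTPS only
    cookie["session"]["httponly"] = True     # invisible to JavaScript (XSS defense)
    cookie["session"]["samesite"] = "Lax"    # basic CSRF mitigation
    cookie["session"]["max-age"] = 3600      # hard one-hour expiry
    return cookie["session"].OutputString()
```

SameSite=Lax blocks most cross-site request forgery by default; state-changing endpoints should still verify a CSRF token.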
Overly Permissive CORS
Nearly every AI-generated API we audit ships Access-Control-Allow-Origin: * in production. A wildcard lets any website on the internet read responses from your API, and the common variant that reflects the request's Origin header while also allowing credentials goes further, letting any site make authenticated requests and bypassing the same-origin protections browsers enforce by default.
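The safe policy is an explicit allowlist: echo the origin back only when you recognize it. A framework-agnostic sketch; https://app.example.com is a placeholder for your real frontend origin:

```python
# Explicit allowlist of origins permitted to call this API; never "*".
ALLOWED_ORIGINS = {"https://app.example.com"}

def cors_headers(request_origin: str) -> dict:
    """Return CORS response headers for a known origin, nothing otherwise."""
    if request_origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": request_origin,
            # Tell caches the response varies by requesting origin.
            "Vary": "Origin",
        }
    return {}  # unknown origins get no CORS grant at all
```

Returning no CORS headers for unknown origins means the browser blocks the cross-origin read, which is exactly the default protection the wildcard disables.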
How to Identify AI Slop in Your Codebase
You do not need to be a security expert to spot the warning signs. Look for these indicators:
- Environment files committed to git: Check if .env, .env.local or configuration files with credentials are in your repository history.
- String concatenation in database queries: Search your codebase for f"SELECT, f"INSERT, f"UPDATE or similar patterns. These are almost always injection vulnerabilities.
- TODO and FIXME comments: AI assistants leave placeholder comments like // TODO: add authentication that never get addressed.
- Identical error messages: If every error returns a generic message with no differentiation, the AI likely generated a minimal error handler without proper logging or response handling.
- No rate limiting: Check your login, registration and password reset endpoints. If there is no rate limiting, an attacker can brute-force credentials or flood your system.
- Missing input validation: Submit unexpected data to your forms. Try special characters, extremely long strings and negative numbers. If the application accepts everything without complaint, validation is missing.
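The string-concatenation check above can be automated in a few lines. This is a rough heuristic sketch, not a substitute for real static analysis:

```python
import re

# f-strings that begin a SQL statement are almost always injection bugs.
INJECTION_HINTS = re.compile(r'f"(SELECT|INSERT|UPDATE|DELETE)\b', re.IGNORECASE)

def scan_source(source: str) -> list:
    """Return 1-based line numbers containing a suspicious f-string query."""
    return [
        lineno
        for lineno, line in enumerate(source.splitlines(), start=1)
        if INJECTION_HINTS.search(line)
    ]
```

Run it over each .py file in your repository and review every hit; a clean scan does not prove safety, but any hit deserves immediate attention.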
How to Fix AI Slop
Fixing AI slop requires a systematic approach:
Step 1: Get a professional audit. Before fixing anything, you need a complete inventory of vulnerabilities. A penetration test from Sherlock Forensics identifies every security issue in your application, rates each by severity and provides specific remediation instructions. Quick audits start at $1,500 CAD.
Step 2: Fix critical issues first. Address SQL injection, authentication bypasses, exposed secrets and other critical vulnerabilities immediately. These are the issues attackers exploit within hours of discovery.
Step 3: Implement security fundamentals. Add input validation to every user-facing input. Replace string concatenation with parameterized queries. Move secrets to environment variables managed by a secrets manager. Add rate limiting to sensitive endpoints. Set proper CORS policies.
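Of these fundamentals, rate limiting is the one teams most often skip because nothing enforces it by default. A minimal sliding-window limiter, sketched as plain Python; in production you would typically back this with Redis or your framework's middleware rather than an in-memory dict:

```python
import time

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per client."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self._hits = {}  # client_id -> list of request timestamps

    def allow(self, client_id: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Keep only timestamps still inside the window.
        recent = [t for t in self._hits.get(client_id, []) if now - t < self.window]
        allowed = len(recent) < self.limit
        if allowed:
            recent.append(now)
        self._hits[client_id] = recent
        return allowed
```

Call allow(client_ip) at the top of login, registration and password-reset handlers and return HTTP 429 when it is False.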
Step 4: Add security testing to your pipeline. Integrate static analysis tools like Semgrep or SonarQube into your CI/CD pipeline. These catch common patterns before code reaches production. They are not a replacement for manual testing, but they prevent the most obvious AI slop from shipping.
Step 5: Schedule regular audits. AI slop accumulates with every prompt. If you continue using AI assistants for development, schedule security audits at regular intervals: quarterly for applications in active development, annually for stable applications.
The Scale of the Problem
AI slop is not a niche concern. In 2026, an estimated 70% of new code written by professional developers involves AI assistance. Among non-technical founders using vibe coding tools like Cursor, Bolt and Lovable, the figure is closer to 100%. Every one of these codebases needs security review.
The State of AI Code Security report from Sherlock Forensics found that AI-generated applications average 12 security vulnerabilities per engagement, with 3 rated as critical or high severity. These are not theoretical risks. They are exploitable weaknesses that give attackers access to user data, administrative functions and backend infrastructure.
People Also Ask
Is AI-generated code secure?
Not by default. Research shows that AI-generated code contains security vulnerabilities at rates comparable to or higher than human-written code. AI assistants optimize for functionality, not security. They produce code that works but often lacks input validation, proper authentication, secure session handling and protection against injection attacks. Every AI-generated codebase should be audited.
How do I know if my code is AI slop?
Look for these indicators: hardcoded API keys or secrets in source files, missing input validation on user-facing forms, SQL queries built with string concatenation, authentication logic that can be bypassed, overly permissive CORS headers, missing rate limiting on sensitive endpoints and TODO comments that were never addressed. Sherlock Forensics AI code audits start at $1,500 CAD.
Can AI slop be fixed?
Yes. AI slop is fixable through professional security auditing and targeted remediation. The first step is identifying all vulnerabilities through a penetration test or code audit. Then each issue is remediated following security best practices. Sherlock Forensics provides detailed remediation guidance with every audit, including code examples showing the secure implementation for each finding.