We published the 2026 AI Code Security Report today. It is based on anonymized, aggregate findings from Sherlock Forensics security assessments conducted between January and April 2026. The full report includes the methodology, vulnerability breakdowns by category and comparisons across AI tools.
This post covers the five findings that surprised us most. Not because we did not expect to find vulnerabilities in AI-generated code. We expected that. What surprised us was the consistency, the severity and the specific patterns that kept appearing across every tool and every codebase.
1. Almost Nobody Implements Rate Limiting
Only 12% of AI-built applications implement rate limiting on authentication endpoints
This was the single most consistent finding. 88% of the applications we tested had no rate limiting on login endpoints. No throttling on password reset flows. No limits on API requests. An attacker can try thousands of passwords per minute against most AI-built login pages without triggering any defensive response.
AI code assistants build the login form, the authentication logic, the session management. They do not add rate limiting because rate limiting is a security constraint, not a functional requirement. The login works. The AI considers the task complete.
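What the AI leaves out is often only a few dozen lines. Here is a minimal sketch of the kind of check that was missing from 88% of the applications we tested: a fixed-window rate limiter keyed by IP (or username). The names (`checkRateLimit`, `MAX_ATTEMPTS`) are illustrative, not from any audited codebase, and a production version would back this with a shared store like Redis so limits survive restarts and apply across instances.

```javascript
// Illustrative fixed-window rate limiter for a login endpoint.
// In-memory only: state is lost on restart and not shared across servers.
const WINDOW_MS = 15 * 60 * 1000; // 15-minute window
const MAX_ATTEMPTS = 5;           // attempts allowed per window per key
const attempts = new Map();       // key (e.g. client IP) -> { count, windowStart }

function checkRateLimit(key, now = Date.now()) {
  const entry = attempts.get(key);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    // First attempt in a fresh window: reset the counter.
    attempts.set(key, { count: 1, windowStart: now });
    return { allowed: true, remaining: MAX_ATTEMPTS - 1 };
  }
  entry.count += 1;
  return {
    allowed: entry.count <= MAX_ATTEMPTS,
    remaining: Math.max(0, MAX_ATTEMPTS - entry.count),
  };
}
```

A login handler would call `checkRateLimit(req.ip)` before verifying credentials and return HTTP 429 when `allowed` is false. The point is not this particular algorithm; it is that some throttle must exist between the attacker and the password check.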
2. Secrets Are Everywhere
78% of AI-generated code stores secrets in plaintext or committed .env files
More than three quarters of the codebases we assessed had API keys, database credentials or authentication secrets accessible to anyone who could view the source code. The most common pattern: .env files committed to git repositories and never added to .gitignore. The second most common: API keys hardcoded directly in JavaScript files that ship to the browser.
In several cases we found live Stripe keys, AWS credentials and database connection strings in publicly accessible client-side bundles. These were not test keys. They were production credentials with full access to payment processing and data stores.
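The fix is boring and cheap: keep `.env` out of git (add it to `.gitignore`), load secrets from the environment at startup and fail fast when one is missing. A minimal sketch, with illustrative variable names (`STRIPE_SECRET_KEY`, `DATABASE_URL`) rather than anything from an audited codebase:

```javascript
// Illustrative fail-fast config loader: secrets come from the environment,
// never from source files, and a missing secret stops the app at startup
// instead of failing silently in production.
function requireEnv(name, env = process.env) {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

function loadConfig(env = process.env) {
  return {
    stripeKey: requireEnv('STRIPE_SECRET_KEY', env),
    databaseUrl: requireEnv('DATABASE_URL', env),
  };
}
```

Server-side secrets must also stay server-side: anything bundled into browser JavaScript is public, no matter how it was loaded.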
3. Hallucinated Packages Are a Real Supply Chain Risk
Hallucinated package dependencies appear in 34% of AI-generated Node.js projects
One in three Node.js projects we audited imported at least one package that does not exist on npm. AI assistants synthesize plausible-sounding package names from training-data patterns. The developer runs npm install, the package is not found, and they either remove the import or, in the worst case, install a package an attacker has already registered under that name with malicious code.
This is a vulnerability class unique to AI-generated code. Human developers import packages they have used before or found through research. AI assistants invent packages from statistical patterns. The full AI Code Vulnerability Index catalogs this and 26 other patterns we track.
4. The Exploit Window Is Shorter Than You Think
The average time from deployment to first exploit attempt on an AI-built SaaS: 18 days
Across engagements where clients shared their server logs with us, we found that the average time between deployment and the first automated exploit attempt was 18 days. Attackers use automated scanning tools that identify common patterns in AI-generated code. The predictable structures, default configurations and consistent vulnerability patterns make AI-built applications easy to fingerprint and target.
If your application is live and has not been audited, the window for proactive security is narrower than most founders realize.
5. Nobody Is Watching
91% of AI-built applications have no meaningful security logging
This was the highest-frequency finding in the entire dataset. Nine out of ten AI-built applications had no audit logging for authentication events, no monitoring of authorization failures and no alerting on anomalous access patterns. If these applications are breached, the owner will not know until a customer reports it or the data appears for sale.
AI assistants do not add logging because logging is infrastructure, not a feature. The application works without it. But you cannot detect, investigate or respond to a breach without it.
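A useful baseline does not require a logging platform on day one. Even a structured JSON line per authentication event gives you something to search after an incident. A minimal sketch, with illustrative event names and fields (not a prescribed schema from the report):

```javascript
// Illustrative structured audit logger: one JSON line per security event.
// In production these lines would be shipped to a log aggregator, with
// alerting on spikes in 'login.failure' or 'authz.denied'.
function auditLog(event, details, stream = process.stdout) {
  const entry = {
    ts: new Date().toISOString(),
    event, // e.g. 'login.success', 'login.failure', 'authz.denied'
    ...details,
  };
  stream.write(JSON.stringify(entry) + '\n');
  return entry;
}
```

Call it from every authentication and authorization decision point, for example `auditLog('login.failure', { user, ip })`. The discipline of emitting the event matters more than the transport you choose.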
What Should You Do?
If you built an application with AI and it has real users, processes payments or stores sensitive data:
- Read the full 2026 AI Code Security Report for the complete dataset and recommendations.
- Check your app against the AI Code Vulnerability Index for the 27 most common patterns.
- Use the Security Audit Cost Calculator to estimate your recommended assessment tier.
- Get a professional audit. Quick audits start at $1,500 CAD and are delivered in 3-5 business days. Order online.
Frequently Asked Questions
How secure is AI-generated code?
Based on our 2026 assessment data, AI-generated code is significantly less secure than code written by experienced developers with security training. 92% of AI-generated codebases contained at least one critical vulnerability. The most common issues are missing logging (91%), missing rate limiting (88%) and exposed secrets (78%). AI assistants prioritize functionality over security.
What are the most common AI code vulnerabilities?
The top vulnerability categories we found across AI-generated codebases in 2026 are: missing logging (91%), missing rate limiting (88%), secrets in plaintext (78%), security misconfiguration (67%), broken authentication (65%), injection (54%), broken authorization (47%), insecure dependencies (34%), XSS (31%) and insecure deserialization (22%). The full AI Code Vulnerability Index has detailed descriptions and remediation steps for all 27 patterns.
Should I audit my AI-built app?
If your application has real users, processes payments or stores sensitive data, yes. The data is clear: the vast majority of AI-built applications ship with exploitable vulnerabilities. The average time to first exploit attempt is 18 days after deployment. A $1,500 CAD quick audit that catches a critical vulnerability before it is exploited is orders of magnitude cheaper than the breach it prevents.