The Numbers
We scanned 100 production websites built with AI coding tools between January and March 2026. These were real applications with real users, not test projects. We identified them through public deployment platforms, community forums and client referrals. Our methodology combined automated DAST scanning with manual verification of every critical finding.
The results were not subtle.
Vulnerability Breakdown by Category
We classified findings into 10 categories based on the OWASP Top 10 and our own taxonomy from AI code audits. Here is the prevalence of each category across all 100 applications.
Breakdown by AI Tool
We tracked which AI tool was used to build each application. Some tools performed worse than others, though none produced consistently secure output.
| Tool | Apps Scanned | Avg Critical Findings | Avg Total Findings |
|---|---|---|---|
| Cursor | 34 | 4.2 | 9.8 |
| Bolt | 22 | 5.7 | 13.1 |
| Lovable | 18 | 5.1 | 12.4 |
| Replit Agent | 14 | 4.8 | 11.2 |
| Claude Code | 12 | 3.6 | 8.7 |
Claude Code had the lowest average critical findings, likely because it is more commonly used by developers who have some security awareness; Cursor was close behind. Bolt and Lovable, which target non-technical users, produced applications with the highest vulnerability counts. Even Claude Code, the best performer, still produced critical issues in 83% of cases.
Vibe-Coded vs. Developer-Assisted
We categorized applications into two groups: "vibe-coded" (built entirely by non-developers describing features in natural language) and "developer-assisted" (built by developers using AI as a coding accelerator). The difference was significant.
Developer-assisted applications had roughly half the vulnerabilities of fully vibe-coded apps. However, 79% of developer-assisted applications still had at least one critical finding. Developers using AI tools catch some issues by instinct but miss others because they trust the AI output more than they would trust code from a junior team member.
The Most Dangerous Pattern
The single most dangerous pattern we observed was the combination of exposed secrets and broken authorization, which appeared together in 58% of all applications. In practice, an attacker can pull the database credentials (from a committed .env file or the client-side bundle) and then read any user's data (through IDOR) in a single session. The median time to chain the two, working from an external attacker's perspective, was under 10 minutes.
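A minimal sketch of what this pattern looks like in code. The variable names, routes and data are hypothetical, but the two halves of the chain, secrets copied into the client build and a record lookup with no ownership check, match what we found repeatedly:

```python
# Illustrative sketch of the exposed-secrets + IDOR pattern.
# All identifiers and data here are hypothetical.

# Half 1 -- exposed secrets: a bundler copies every environment
# variable into the client build, so a privileged key ships to
# every browser that loads the page.
CLIENT_BUNDLE_ENV = {
    "API_URL": "https://api.example.com",
    "DATABASE_SERVICE_KEY": "svc_live_...",  # should never leave the server
}

# Half 2 -- broken authorization (IDOR): the handler trusts the
# record id in the URL and never checks who owns the record.
INVOICES = {
    1: {"owner": "alice", "total": 120},
    2: {"owner": "bob", "total": 480},
}

def get_invoice_vulnerable(current_user: str, invoice_id: int) -> dict:
    # BUG: no ownership check -- any authenticated user can read
    # any invoice just by incrementing the id.
    return INVOICES[invoice_id]

def get_invoice_fixed(current_user: str, invoice_id: int) -> dict:
    # The fix: verify the record belongs to the requester before
    # returning it.
    invoice = INVOICES[invoice_id]
    if invoice["owner"] != current_user:
        raise PermissionError("not your invoice")
    return invoice
```

With the vulnerable handler, `get_invoice_vulnerable("alice", 2)` happily returns Bob's invoice; the fixed version raises instead. Neither half of the chain is exotic on its own, which is why the combination is so common.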
In 23% of cases, we could achieve full administrative access within 15 minutes. In 8% of cases, we found evidence that the application had already been compromised before our scan, including unauthorized admin accounts, suspicious database entries and exfiltration scripts.
What Scanners Caught vs. What They Missed
We ran three industry-standard DAST scanners against each application before performing manual analysis. Scanners detected an average of 4.1 issues per application. Manual analysis found an average of 11. The gap was consistent across all applications.
Scanners are useful for catching known vulnerability patterns. They are not useful for understanding whether the application's authorization model is correct, whether business logic can be bypassed or whether the combination of individually moderate findings creates a critical attack path.
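As one hedged illustration of a logic flaw a DAST scanner has no way to flag: a checkout handler that trusts a price supplied by the client. The names below are hypothetical, but the shape recurs; the endpoint looks perfectly well-formed to a scanner, and only a human reviewing the authorization and data-flow model notices where the price comes from:

```python
# Hypothetical checkout handlers illustrating a business-logic flaw.
# A scanner sees a syntactically valid, authenticated endpoint; it
# cannot know that unit_price should never come from the request.

CATALOG = {"pro_plan": 4900}  # price in cents, server-side source of truth

def checkout_vulnerable(request: dict) -> int:
    # BUG: charges whatever the client claims the item costs.
    return request["quantity"] * request["unit_price"]

def checkout_fixed(request: dict) -> int:
    # The fix: look the price up server-side and ignore any
    # client-supplied price field entirely.
    return request["quantity"] * CATALOG[request["item"]]
```

An attacker simply sends `unit_price: 1` and buys the product for a cent. No payload list or fuzzing ruleset catches this; understanding it requires knowing what the business intends the code to do.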
Methodology Notes
All scans were conducted with explicit authorization from application owners. Applications were selected to represent a range of industries, team sizes and AI tools. We excluded applications that had previously undergone professional security assessment. Findings were verified manually to eliminate false positives. The full methodology, including tool configurations and classification criteria, is available in the complete 2026 AI Code Security Report.
What This Means for You
If you built a website or web application with AI, there is a 92% chance it has critical security vulnerabilities. Being a developer does not eliminate the risk: 79% of developer-assisted applications in our sample still had at least one critical finding. If you are a non-technical founder, the probability is near certain.
These are not theoretical risks. Eight percent of the applications we scanned showed signs of prior compromise. Attackers are already targeting AI-built applications because they know the patterns, and they know the vulnerabilities are predictable.
The full dataset, methodology and remediation guidance are in the 2026 AI Code Security Report. It is the most comprehensive analysis of AI-generated code security published to date.