The Experiment
I have spent 20 years breaking into other people's applications. I wanted to know what it felt like on the other side. So I built one.
The idea was simple: a client intake form for security consultants. Upload documents, track engagements, generate invoices. I gave Cursor a detailed prompt and let it build. Two days later I had a working SaaS with authentication, a dashboard, file uploads and Stripe integration.
It looked great. It worked. I was genuinely impressed. Then I did what I do for a living. I attacked it.
Finding 1: My Stripe Key Was in the Browser
The first thing I did was open the browser developer tools and search the JavaScript bundle for the word "key." My Stripe secret key was sitting in a client-side config file. Not the publishable key, which is designed to be public. The secret key. The one that lets you issue refunds, create charges and access customer data.
Cursor had put it there because I told it to "add Stripe payments." It chose the fastest path to a working feature. That path happened to expose the key to every visitor.
Time to find: 90 seconds.
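The safe split is worth spelling out. A minimal Python sketch (names like STRIPE_SECRET_KEY and client_config are illustrative, not the app's actual code): the secret key lives only in server-side environment variables, and the one thing serialized for the browser is the publishable key, which is designed to be public.

```python
import os

# Server-side only: read from the environment, never serialized
# into any response sent to the browser.
STRIPE_SECRET_KEY = os.environ.get("STRIPE_SECRET_KEY", "sk_test_placeholder")

def client_config() -> dict:
    """Config that is safe to embed in the JavaScript bundle.

    Only the publishable key (pk_...) belongs here. The secret key
    (sk_...) must never appear in anything the client can download.
    """
    return {
        "stripePublishableKey": os.environ.get(
            "STRIPE_PUBLISHABLE_KEY", "pk_test_placeholder"
        ),
    }
```

The point is structural: if the secret key is never part of the object that feeds the client bundle, the fastest path to a working feature can no longer leak it.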
Finding 2: Any User Could See Any Other User's Data
I created two test accounts. Logged in as User A. Opened an engagement record. The URL was /engagements/42. I changed it to /engagements/43. User B's engagement loaded without hesitation.
The app checked whether I was logged in. It never checked whether I was authorized to see that specific record. This is called an Insecure Direct Object Reference, and it is the single most common vulnerability in vibe-coded applications. Cursor built authentication but skipped authorization entirely because I never asked for it.
Time to find: 2 minutes.
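The missing check is small. A sketch of the ownership-aware query, using SQLite and hypothetical table and column names rather than the app's real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE engagements (id INTEGER PRIMARY KEY, owner_id INTEGER, title TEXT)"
)
conn.execute("INSERT INTO engagements VALUES (42, 1, 'User A engagement')")
conn.execute("INSERT INTO engagements VALUES (43, 2, 'User B engagement')")

def get_engagement(user_id: int, engagement_id: int):
    # Authorization lives in the query itself: the row must belong
    # to the requesting user, not merely exist.
    row = conn.execute(
        "SELECT id, title FROM engagements WHERE id = ? AND owner_id = ?",
        (engagement_id, user_id),
    ).fetchone()
    if row is None:
        # Same response whether the record is missing or merely not
        # yours, so IDs cannot be enumerated by probing.
        raise PermissionError("not found")
    return row
```

Fetching /engagements/43 as User A now fails the same way a nonexistent record would.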
Finding 3: SQL Injection in the Search Bar
I typed a single quote character into the search field. The app returned a database error with the full SQL query visible in the response. I did not even need to craft a proper injection payload. The error message told me the table structure, the column names and the database engine.
From there, a UNION-based injection would have given me every row in every table. User emails, hashed passwords (more on that in a moment), uploaded documents, billing records. Everything.
Time to find: 3 minutes.
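The fix is parameterized queries, which every mainstream database driver supports. A sketch with SQLite and a made-up clients table:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE clients (name TEXT)")
db.execute("INSERT INTO clients VALUES ('Acme'), ('O''Brien Consulting')")

def search_clients(term: str) -> list[str]:
    # The ? placeholder keeps user input as data. A quote character
    # in the search term can no longer terminate the SQL string,
    # so there is nothing for an injection payload to grab onto.
    pattern = f"%{term}%"
    rows = db.execute(
        "SELECT name FROM clients WHERE name LIKE ?", (pattern,)
    )
    return [r[0] for r in rows]
```

Typing a single quote into this version just searches for names containing a quote. The separate problem, leaking raw database errors to the client, is fixed by catching exceptions at the route boundary and returning a generic error.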
Finding 4: Passwords Hashed With MD5
When I examined the authentication code, I found that Cursor had chosen MD5 for password hashing. MD5 has been considered cryptographically broken since 2004, and it was never a good fit for password storage in the first place because it is fast by design: a modern GPU can brute-force billions of MD5 hashes per second. Combined with the SQL injection from Finding 3, an attacker could extract and crack every password in the database in minutes.
The fix is bcrypt or Argon2. Cursor knew about both. It just did not default to them.
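The right shape looks like this. I am using scrypt here because it ships in Python's standard library; bcrypt and Argon2 need a third-party package but follow the same pattern, and the cost parameters below are illustrative, not tuned recommendations:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A slow, memory-hard KDF with a fresh random salt per user.
    # n, r, p control the cost; raise them as hardware improves.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)
```

The per-user salt means identical passwords hash differently, and the deliberate slowness turns "billions of guesses per second" into thousands.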
Finding 5: The Admin Route Was Guessable
I navigated to /admin and got the admin dashboard. No additional authentication. No role check. If you were logged in as any user, you were an admin. Cursor generated the admin panel because I asked for one. It did not occur to the model to restrict access to it.
Time to find: 30 seconds.
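The missing piece is a role check layered on top of the login check. A framework-agnostic sketch, with a hypothetical user dict standing in for whatever session object the app actually uses:

```python
from functools import wraps

class Forbidden(Exception):
    pass

def require_role(role: str):
    # Being logged in is authentication. Having the right role is
    # authorization. Admin routes need both.
    def decorator(handler):
        @wraps(handler)
        def wrapped(user, *args, **kwargs):
            if user.get("role") != role:
                raise Forbidden(f"requires role: {role}")
            return handler(user, *args, **kwargs)
        return wrapped
    return decorator

@require_role("admin")
def admin_dashboard(user):
    return "admin dashboard"
```

With this in place, a logged-in regular user hitting /admin gets a 403 instead of the keys to the kingdom.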
Finding 6: File Uploads Accepted Anything
The document upload feature accepted any file type with no size limit. I uploaded a PHP webshell disguised as a PDF. The server saved it to a publicly accessible directory. Visiting the file URL executed the PHP code. Full remote code execution on my own server.
This one stung. I know better. But I did not review the upload handler because Cursor generated it and "it worked."
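The defenses here are an extension allowlist, a size cap, and discarding the client-supplied filename. A sketch (limits and extension list are examples, not recommendations for every app):

```python
import os
import secrets

ALLOWED_EXTENSIONS = {".pdf", ".docx", ".png"}
MAX_UPLOAD_BYTES = 10 * 1024 * 1024  # 10 MB, illustrative

def store_name_for_upload(filename: str, data: bytes) -> str:
    # Allowlist the extension so "shell.pdf.php" is rejected
    # (only the final extension counts), and cap the size.
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"file type not allowed: {ext or 'none'}")
    if len(data) > MAX_UPLOAD_BYTES:
        raise ValueError("file too large")
    # The stored name is random, never the client's. The upload
    # directory should also sit outside the web root, or be served
    # with script execution disabled.
    return secrets.token_hex(16) + ext
```

Even with all of that, serving uploads from a directory where the server will execute code is the real sin. Validation is the seatbelt; a non-executable storage location is the brakes.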
Finding 7: No Rate Limiting Anywhere
The login page had no rate limiting. No account lockout. No CAPTCHA. I could attempt passwords at machine speed forever. The API endpoints were the same. No throttling on any route. An attacker could brute-force credentials, scrape data or launch denial-of-service attacks without any resistance.
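Even a crude limiter changes the economics. A sliding-window sketch keyed by IP or account (the limit and window below are placeholders; production apps usually reach for a shared store like Redis so limits survive restarts and multiple servers):

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` events per `window` seconds per key."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.events = defaultdict(deque)  # key -> timestamps of recent events

    def allow(self, key: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.events[key]
        while q and now - q[0] > self.window:
            q.popleft()  # drop events that have aged out of the window
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True
```

Five login attempts per minute per IP is invisible to a real user and fatal to a brute-force script.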
Finding 8: The .env File Was Accessible
I visited /.env in my browser. The file downloaded. Database credentials, Stripe keys, JWT secret, SMTP password. Everything needed to completely compromise the application was available to anyone who knew to check.
This is not a Cursor problem specifically. It is a deployment problem that AI tools do not warn you about. The default server configuration served static files from the project root, and .env was in the project root.
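The deployment-side fix has two parts: serve static files only from a dedicated public directory that .env never lives in, and refuse dotfiles and path traversal outright. A sketch assuming a hypothetical /srv/app layout:

```python
from pathlib import Path

# Hypothetical layout: /srv/app holds the project (including .env),
# and only /srv/app/public is ever served to browsers.
PUBLIC_DIR = Path("/srv/app/public").resolve()

def resolve_static(url_path: str) -> Path:
    candidate = (PUBLIC_DIR / url_path.lstrip("/")).resolve()
    if not candidate.is_relative_to(PUBLIC_DIR):
        raise PermissionError("path traversal")       # blocks /../.env
    rel = candidate.relative_to(PUBLIC_DIR)
    if any(part.startswith(".") for part in rel.parts):
        raise PermissionError("dotfiles are never served")  # blocks /.env, /.git
    return candidate
```

Real servers express the same rules in configuration, for example nginx location rules denying paths that start with a dot, but the principle is identical: the project root is not the web root.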
The Final Count
In total, I found 14 vulnerabilities across the application. Six were critical. Four were high severity. Four were medium. Zero were false positives. The entire audit took about 45 minutes.
I want to be clear about something: I am not blaming Cursor. The tool did what I asked. I asked for features and it built features. I did not ask for security and it did not build security. That is the fundamental problem with vibe coding. The gap between "working" and "secure" is enormous, and the person building the app usually cannot see it.
What I Learned
Building this app was humbling. I have written hundreds of pentest reports telling clients they should have caught these issues. Then I built an app with AI and produced every single one of them myself. The tooling makes it easy to build and almost impossible to secure without deliberate effort.
If you are shipping a vibe-coded app, you are shipping with these vulnerabilities unless someone has specifically looked for them. The AI will not warn you. The deployment platform will not warn you. Your users definitely will not warn you. An attacker will.
Test It Yourself
I turned this experience into a tool. You can run the same basic checks against your own application and see what comes back. It is not a replacement for a professional pentest, but it will show you the obvious things. The things I found in under five minutes.
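To give a flavor of what "the obvious things" means, here is a toy version of the idea, not the actual tool: probe a handful of paths and flag the same exposures described above. The fetch callable and response shapes are assumptions for the sketch.

```python
def quick_checks(fetch):
    """Run a few obvious probes; `fetch(path)` returns (status, body).

    Illustrative only. A real scanner handles redirects, timeouts,
    authentication, and many more checks than these three.
    """
    findings = []
    status, body = fetch("/.env")
    if status == 200 and "=" in body:
        findings.append("/.env is downloadable")
    status, body = fetch("/admin")
    if status == 200:
        findings.append("/admin loads without a role check")
    status, body = fetch("/search?q='")
    if "sql" in body.lower() or "syntax" in body.lower():
        findings.append("search endpoint leaks SQL errors")
    return findings
```

Three of the eight findings above reduce to one HTTP request each. That is how cheap this class of attack is.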
If the results scare you, that is the correct response.