Your Login Page Looks Great. It Is Also Wide Open.
You asked your AI coding tool to build a login page. It did. The page renders beautifully. The form submits. The user gets redirected to their dashboard. Everything works. You ship it.
Here is the problem: "it works" and "it is secure" are completely different things. AI coding tools optimize for making features functional. They do not optimize for making features safe. The login page is the single most attacked surface of any web application. And in our experience auditing vibe-coded apps at Sherlock Forensics, the login page is almost always the weakest point.
These are the 10 security disasters we find most often in vibe-coded authentication systems.
1. Passwords Stored in Plaintext Files
This is the one that makes security professionals lose sleep. When a non-technical founder asks an AI to "build a login system with a database," the AI sometimes interprets "database" very loosely. We have seen apps where user credentials are stored in users.txt, data/accounts.json or even a CSV file in the project directory.
The passwords in these files are stored exactly as the user typed them. No hashing. No encryption. Nothing. If an attacker reaches that file (and there are many ways to: a misconfigured static directory, an exposed backup, a path traversal bug), they get every username and password in cleartext. Every single one.
What makes this worse: most people reuse passwords. A breach of your 50-user side project gives attackers credentials to try against Gmail, banking sites and corporate accounts.
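The fix is to store a salted, slow hash instead of the password itself. Here is a minimal sketch using only Python's standard library (`hashlib.scrypt`); a production system would more commonly reach for a maintained library such as argon2 or bcrypt, and the function names here are illustrative, not from any particular framework:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> str:
    """Hash a password with scrypt and a random per-user salt."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt.hex() + ":" + digest.hex()

def verify_password(password: str, stored: str) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    salt_hex, digest_hex = stored.split(":")
    salt = bytes.fromhex(salt_hex)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(digest, bytes.fromhex(digest_hex))

record = hash_password("hunter2")
print(verify_password("hunter2", record))  # True
print(verify_password("wrong", record))    # False
```

The point of the random salt is that two users with the same password get different stored values, and the point of scrypt's cost parameters is that each guess in an offline attack is deliberately slow.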
2. Client-Side Only Authentication
This one is shockingly common. The AI generates a login form that checks the username and password using JavaScript in the browser. The "authentication" happens entirely on the client side. The correct password might be hardcoded in the JavaScript file or fetched from an API endpoint that returns the full user list.
An attacker does not even need to guess the password. They open browser developer tools, read the JavaScript and find the credentials sitting right there. Or they skip the form entirely and request the protected pages directly, because nothing on the server ever checked who they were.
Real authentication must happen on the server. The browser sends credentials to the server. The server verifies them. The server creates a session. If any part of that verification happens in the browser, it is not authentication. It is decoration.
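That server-side flow can be sketched in a few lines. This is a framework-free illustration with hypothetical names (`credentials_valid`, `SESSIONS`); in a real app the credential check would compare against a salted hash in the database and the session store would live in Redis or the database, not a dict:

```python
import secrets

SESSIONS = {}  # session id -> username; state lives on the server

def credentials_valid(username: str, password: str) -> bool:
    # Placeholder check. In a real app: look up the user and verify
    # the password against a stored salted hash.
    return username == "alice" and password == "correct horse"

def login(username: str, password: str):
    """Verify on the server, then issue an opaque random session id.
    The browser only ever sees the session id, never any secret."""
    if not credentials_valid(username, password):
        return None
    session_id = secrets.token_urlsafe(32)  # unguessable
    SESSIONS[session_id] = username
    return session_id

def current_user(session_id):
    """Every protected request re-checks the session store on the server."""
    return SESSIONS.get(session_id)

sid = login("alice", "correct horse")
print(current_user(sid))        # alice
print(login("alice", "wrong"))  # None
```

Nothing in this flow ships a password, a hash or a user list to the client, which is exactly what the client-side version gets wrong.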
3. No HTTPS
When you type your password into a login form served over HTTP (not HTTPS), that password travels across the internet in plaintext. Anyone on the same WiFi network, any network hop between you and the server, and any compromised router can read it.
Many vibe-coded apps deployed to custom domains skip HTTPS configuration because the AI did not include it in the deployment steps. The app works over HTTP so the founder assumes everything is fine. It is not. Without HTTPS, every login attempt broadcasts credentials to anyone listening.
4. SQL Injection in the Login Form
If your app uses a database (even SQLite) and the AI built the login query using string concatenation, your login form is vulnerable to SQL injection. An attacker can type something like ' OR '1'='1 into the password field and bypass authentication entirely.
This is not theoretical. This is one of the oldest and most well-known attacks in web security and it still works on vibe-coded apps because AI code assistants generate vulnerable queries with alarming consistency. The fix is parameterized queries. The AI knows about parameterized queries. It just does not always use them.
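The difference between the vulnerable and safe versions is one line. This sketch uses Python's built-in sqlite3 module with an in-memory database (and a fake stored hash, since real systems fetch the hash and verify it in application code rather than comparing it in SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "fakehash"))

def find_user(username: str, password_hash: str):
    # Placeholders (?) make the driver treat input as data, never as SQL.
    # Never build this query with f-strings or + concatenation.
    return conn.execute(
        "SELECT username FROM users WHERE username = ? AND password_hash = ?",
        (username, password_hash),
    ).fetchone()

# The classic injection payload is now just an odd literal string:
print(find_user("alice", "' OR '1'='1"))  # None -- injection fails
print(find_user("alice", "fakehash"))     # ('alice',)
```

With string concatenation, that same payload rewrites the WHERE clause to be always true; with placeholders, it is merely a password nobody has.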
5. No Rate Limiting on Login Attempts
Without rate limiting, an attacker can try thousands of passwords per second against your login form. This is called a brute force attack and it is trivial to execute with free tools. If your users have weak passwords (and they do), the attacker will get in.
Rate limiting means: after 5 failed attempts, lock the account or add a delay. After 10 failed attempts, require a CAPTCHA. After 20, block the IP. AI-generated login systems almost never include this. The AI builds the happy path where the user enters the correct password on the first try.
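The first tier of that policy fits in a few lines. This is an in-memory, single-process sketch with illustrative names and thresholds; a deployed app would typically back this with Redis or a framework middleware so limits survive restarts and apply across servers:

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5
WINDOW_SECONDS = 300  # count failures within a 5-minute window

_failures = defaultdict(list)  # username -> timestamps of recent failures

def allow_attempt(username: str, now=None) -> bool:
    """Return False once an account has too many recent failed logins."""
    now = time.time() if now is None else now
    recent = [t for t in _failures[username] if now - t < WINDOW_SECONDS]
    _failures[username] = recent  # drop failures outside the window
    return len(recent) < MAX_ATTEMPTS

def record_failure(username: str, now=None) -> None:
    _failures[username].append(time.time() if now is None else now)

# Simulate five failed attempts; the sixth is refused until the window passes.
for _ in range(5):
    record_failure("alice", now=0.0)
print(allow_attempt("alice", now=1.0))    # False -- locked out
print(allow_attempt("alice", now=600.0))  # True  -- window expired
```

The caller checks `allow_attempt` before verifying the password and calls `record_failure` on a wrong one; rate limiting by IP address works the same way with the IP as the key.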
6. Predictable Password Reset Tokens
When a user clicks "forgot password," the app generates a reset link with a token. If that token is predictable, an attacker can generate valid reset links for any account. Common patterns we find in vibe-coded apps include sequential numbers (reset?token=1, reset?token=2), timestamps, user IDs and tokens generated with Math.random(), which is not cryptographically secure.
A secure reset token must be generated using a cryptographically secure random number generator, must be long enough to resist brute forcing, must expire after a short window and must be single-use. Most AI-generated reset flows get zero of these right.
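All four requirements fit in a short sketch built on Python's `secrets` module (the store and function names here are illustrative):

```python
import hashlib
import secrets
import time

RESET_TOKEN_TTL = 15 * 60  # tokens expire after 15 minutes
_pending = {}  # sha256(token) -> (username, issued_at)

def issue_reset_token(username: str) -> str:
    token = secrets.token_urlsafe(32)  # CSPRNG, ~256 bits: unguessable
    # Store only a hash, so a database leak does not leak usable tokens.
    _pending[hashlib.sha256(token.encode()).hexdigest()] = (username, time.time())
    return token

def redeem_reset_token(token: str):
    key = hashlib.sha256(token.encode()).hexdigest()
    entry = _pending.pop(key, None)  # pop makes the token single-use
    if entry is None:
        return None
    username, issued_at = entry
    if time.time() - issued_at > RESET_TOKEN_TTL:
        return None  # expired
    return username

t = issue_reset_token("alice")
print(redeem_reset_token(t))  # alice
print(redeem_reset_token(t))  # None -- already used
```

Compare that to `Math.random()`: its output is seeded state that an attacker can reconstruct, while `secrets` draws from the operating system's cryptographic randomness.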
7. Sessions Stored in localStorage
After a user logs in, the app needs to remember that they are logged in. AI tools frequently store session data in the browser's localStorage. This is a security problem because localStorage is accessible to any JavaScript running on the page.
If your app has a cross-site scripting (XSS) vulnerability anywhere, and vibe-coded apps usually do, an attacker can steal every session token from localStorage and hijack user accounts. Secure session tokens belong in HTTP-only cookies that JavaScript cannot access.
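Setting those cookie attributes is a one-time fix at login. Here is a sketch using Python's standard `http.cookies` module to build the `Set-Cookie` header; web frameworks expose the same flags through their own response APIs:

```python
import secrets
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = secrets.token_urlsafe(32)
cookie["session"]["httponly"] = True   # invisible to document.cookie, so XSS cannot read it
cookie["session"]["secure"] = True     # only ever sent over HTTPS
cookie["session"]["samesite"] = "Lax"  # withheld on most cross-site requests
cookie["session"]["path"] = "/"

header = cookie["session"].OutputString()
print(header)  # session=...; HttpOnly; Path=/; SameSite=Lax; Secure
```

With `HttpOnly` set, even a successful XSS payload cannot exfiltrate the session token, which is exactly the failure mode localStorage invites.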
8. No CSRF Protection
Cross-Site Request Forgery (CSRF) means an attacker tricks a logged-in user's browser into making requests to your app. Without CSRF protection, an attacker can create a malicious page that changes a user's password, transfers their data or modifies their account settings just by getting the user to visit a link.
CSRF tokens are standard in any mature web framework. But when an AI generates a custom login system from scratch, CSRF protection is almost always missing. The AI does not include it because you did not ask for it and it is not required for the feature to "work."
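One common construction is a token derived from the session with a server-side secret, embedded in every form as a hidden field and checked on every state-changing request. A minimal HMAC-based sketch (names are illustrative; frameworks ship this built in):

```python
import hashlib
import hmac
import secrets

CSRF_KEY = secrets.token_bytes(32)  # server-side secret, never sent to the client

def csrf_token(session_id: str) -> str:
    """Derive a per-session token; render it into forms as a hidden field."""
    return hmac.new(CSRF_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def csrf_valid(session_id: str, submitted: str) -> bool:
    """Require the token back on every POST and compare in constant time.
    A forged cross-site request carries the cookie but cannot know the token."""
    return hmac.compare_digest(csrf_token(session_id), submitted)

sid = "example-session-id"
tok = csrf_token(sid)
print(csrf_valid(sid, tok))        # True
print(csrf_valid(sid, "guessed"))  # False
```

The attacker's page can make the victim's browser send the session cookie automatically, but it has no way to read or forge the matching token.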
9. Unprotected /admin Route
The AI built you an admin panel at /admin. The admin panel lets you manage users, view data and configure the app. The AI put a login check on the admin page. But the API endpoints that the admin page calls? No authentication check at all.
An attacker does not even need to find the admin page. They find the API endpoints by reading the JavaScript source and call them directly. POST /api/admin/delete-user works without any authentication because the AI only protected the frontend route, not the backend endpoint.
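The fix is to enforce the check on every backend endpoint, not just the page route. A framework-free sketch of the idea as a decorator, using a plain dict in place of a real request object and session store (both hypothetical here; real frameworks do this with middleware):

```python
import functools

SESSIONS = {"s1": {"username": "alice", "is_admin": True},
            "s2": {"username": "bob", "is_admin": False}}

def require_admin(handler):
    """Wrap every admin API handler -- not just the admin page route."""
    @functools.wraps(handler)
    def wrapper(request, *args, **kwargs):
        user = SESSIONS.get(request.get("session_id"))
        if user is None or not user["is_admin"]:
            return {"status": 403, "body": "forbidden"}
        return handler(request, *args, **kwargs)
    return wrapper

@require_admin
def delete_user(request):
    # Only reachable with a valid admin session.
    return {"status": 200, "body": "deleted " + request["target"]}

print(delete_user({"session_id": "s2", "target": "carol"}))  # status 403
print(delete_user({"session_id": "s1", "target": "carol"}))  # status 200
```

Because the check lives on the endpoint itself, calling `POST /api/admin/delete-user` directly fails exactly the way clicking the button in the UI would.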
10. Exposed .env File
Your .env file contains your database password, API keys, JWT secret and possibly your Stripe secret key. If your server is misconfigured, which is common in vibe-coded deployments, visiting yoursite.com/.env in a browser downloads the file.
Attackers check this automatically. Bots crawl the internet requesting /.env on every domain they find. If your file is accessible, your database credentials, payment processing keys and authentication secrets are compromised within hours of deployment. Read our 5-minute security checklist to test for this before you launch.
What This Means for You
If you built your login page with AI, there is a high probability it has at least three of these vulnerabilities. We have never audited a vibe-coded authentication system that had zero issues. The reality of what happens when an attacker finds your app is not pretty.
This is not the AI's fault. AI tools are designed to make things work. Security is about making things not break when someone actively tries to break them. Those are fundamentally different goals.
The fix is straightforward: get a professional security audit before you launch. Not after your first breach. Not after a user reports their account was compromised. Before. Sherlock Forensics runs penetration tests specifically for vibe-coded apps starting at $1,500. We find what the AI missed and give you clear instructions to fix it.
Not sure if your app needs a pentest? Read our guide: Do I Need a Pentest for My Side Project?
Frequently Asked Questions
Is a login page built with AI secure?
In most cases, no. AI coding tools generate login pages that appear functional but contain serious security flaws. Common issues include plaintext password storage, client-side only authentication checks, missing rate limiting and insecure session management. A professional penetration test is the only way to know for certain whether your login system is safe.
Can hackers see my .txt database?
Yes. If your application stores data in .txt, .json or .csv files within your web directory, attackers can download those files by guessing or discovering the file path. Automated bots constantly scan for common file paths like /data/users.txt and /db/accounts.json.
Do I need a pentest for my side project?
If your side project has a login page, stores user data or processes payments, yes. The number of users does not determine breach severity. Even 50 compromised passwords can fuel credential stuffing attacks across other platforms. Read our full decision guide here.
How much does a security audit cost for a small app?
Sherlock Forensics offers quick security audits for vibe-coded and small applications starting at $1,500. This includes testing of authentication, authorization, injection vulnerabilities, secrets exposure and server configuration. Order online or call 604.229.1994 to scope your audit.