Lovable Left the Door Open for 48 Days and Called It a Feature
A $6.6 billion vibe coding platform exposed every user's source code and database credentials through a basic API flaw — then closed the bug report and denied anything happened.
War stories from the AI code trenches.
Four AI coding CLIs, one Enter key, and your entire dev machine is someone else's playground.
Turns out a PR title is all it takes to make Claude, Gemini, and Copilot hand over your API keys.
A supply chain attack turned a password manager into a credential harvester specifically targeting Claude, Cursor, and Codex users.
A $6.6 billion vibe coding platform left database credentials wide open, then blamed everyone but themselves.
A Cursor agent session got socially engineered into delivering the AMOS Stealer. Two minutes from clone to credential exfiltration.
A $6.6 billion vibe coding platform left thousands of projects wide open for 48 days, then blamed HackerOne.
AI-generated code promises speed but often delivers security vulnerabilities. Learn about common pitfalls like SQL injection, leaked keys, and broken authentication, and how to fix them before disaster strikes.
The most popular AI code editor had an RCE vulnerability that let attackers hijack your terminal through shell built-ins nobody thought to block.
An AI productivity tool nobody's heard of just gave hackers the keys to Vercel's kingdom — and maybe your app's secrets too.
Real production disasters from apps built with Cursor, Lovable, and Bolt. Client-side security, exposed databases, and the fix-and-break death spiral that's costing founders real money.
CrowdStrike researchers discovered that innocent geographical references can cause AI models to generate drastically more insecure code. The implications for vibe coding are chilling.
A critical vulnerability in Claude Code silently disables user-configured security rules when commands get complex. The fix exists in their codebase. They just never shipped it.
New research tracking AI-generated vulnerabilities shows a 583% increase in security flaws, with Claude Code leading the pack. Here's what the data tells us about the real price of 'ship fast, fix never.'
Anthropic's Claude Code source leak and mass GitHub takedown affected thousands of repos. Here's what it means if you're building with AI coding tools.
A supply chain compromise in the popular LiteLLM AI proxy package exposed API keys and cloud credentials across thousands of projects. Here's what vibe coders need to know.
The most popular HTTP library on npm was compromised with a RAT. If you vibe-coded your app, you probably have no idea what's in your dependency tree.
Claude Code's entire proprietary codebase was exposed via an npm source map file. If Anthropic can't get this right, what chance does your vibe-coded app have?
AI coding tools confidently recommend packages that don't exist. Attackers are registering them. Here's how slopsquatting works and why your vibe-coded app might already be compromised.
Built your app with AI? Here are the warning signs that your codebase is a ticking time bomb — and what to do about it.