Vercel's AI Tool Got Popped and Your Env Vars Went With It
If you deployed anything to Vercel in the last two months and stored secrets in environment variables you didn’t mark as “sensitive,” today is a bad day.
Vercel confirmed this morning that attackers breached their internal systems after compromising Context.ai — a third-party AI tool one of their employees had connected via OAuth. The attackers used that foothold to hijack the employee’s Google Workspace account, pivot into Vercel’s infrastructure, and access customer environment variables that weren’t encrypted at rest.
Read that again: an AI tool you’ve never heard of just became the attack vector for one of the largest deployment platforms on the internet.
The Kill Chain
Here’s how it went down. Back in February, a Context.ai employee got hit with Lumma Stealer — an infostealer that scraped their Google Workspace credentials, Supabase keys, Datadog logins, and AuthKit tokens. The attackers sat on that access for weeks.
Then they followed the OAuth chain. Context.ai had OAuth connections to customer accounts, including at least one Vercel employee. The attackers used that trust relationship to walk straight into Vercel’s internal environment. No brute force, no zero-day, no sophisticated exploit. Just a compromised AI tool and an OAuth token that nobody revoked.
Once inside, they accessed environment variables — API keys, database credentials, third-party tokens — for a “limited subset” of Vercel customers. If those variables weren’t flagged as sensitive (the setting that encrypts them at rest), they were sitting there in plaintext.
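If you want a quick triage pass over your own project, a grep across your variable names catches the obvious offenders. This is a heuristic sketch, not an official Vercel tool — it assumes you’ve got names one per line (say, copied out of `vercel env ls` output), and the pattern is a starting point, not a complete definition of “secret”:

```shell
# Heuristic triage: flag variable names that look secret-shaped.
# Assumes names arrive one per line on stdin. The pattern below is
# an illustrative guess at common naming conventions -- extend it
# for whatever your team actually uses.
flag_risky() {
  grep -Ei '(secret|token|key|password|database_url)' || true
}

# Example run with hypothetical names (not from the breach data):
printf 'NEXT_PUBLIC_SITE_NAME\nSTRIPE_SECRET_KEY\nDATABASE_URL\n' | flag_risky
# prints STRIPE_SECRET_KEY and DATABASE_URL
```

Anything this flags that isn’t stored as a sensitive variable is exactly the class of value the attackers could have read in plaintext.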
Why This Matters for Every Vibe-Coded App
Vercel is the deployment platform for the vibe coding generation. Next.js, React, the entire Vercel-first ecosystem — if you followed a tutorial in the last two years, you probably deployed to Vercel. Millions of apps live there.
And now someone from the ShinyHunters orbit is allegedly selling what they describe as customer API keys, source code, and database records on a cybercrime forum. ShinyHunters publicly denied involvement, which, sure. Somebody’s selling it.
The crypto community is already scrambling. If your Vercel-hosted app talks to a blockchain and kept signing keys in environment variables, your private keys may have just become public ones.
The Lesson Nobody Will Learn
This is a supply chain attack, and the supply chain now includes AI tools. Every productivity app, every “AI assistant for developers,” every OAuth connection you casually approve — each one is a potential bridge from some random startup’s compromised AWS environment directly into your production infrastructure.
The vibe coding pitch is: connect everything, automate everything, ship faster. Nobody mentions that “connect everything” means your attack surface now includes every tool in every employee’s workflow. Context.ai didn’t even need to be compromised through some exotic vector. An employee got phished. The oldest trick in the book, amplified by modern trust architecture.
Vercel says they’re working with Mandiant and law enforcement. They’re telling customers to rotate any secrets stored in non-sensitive environment variables immediately. If you’re a Vercel customer, do that now. Not after you finish reading this.
What You Should Actually Do
Check your Vercel environment variables. Right now. If anything is stored as a non-sensitive variable — database URLs, API keys, anything with the word “secret” or “token” in the name — rotate it. Today.
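With the Vercel CLI, a rotation pass looks roughly like this. `vercel env ls`, `vercel env rm`, and `vercel env add` are real subcommands; verify the exact flags against your installed version with `vercel env add --help` — in particular, treat the `--sensitive` flag here as an assumption, not a guarantee:

```shell
# Rough rotation pass with the Vercel CLI. DATABASE_URL is a stand-in
# for whichever variable you're rotating. Check flags against your CLI
# version; --sensitive is assumed, not confirmed.
vercel env ls production                 # see what's stored, and how
vercel env rm DATABASE_URL production    # remove the burned value
vercel env add DATABASE_URL production --sensitive
# ^ paste the NEW credential when prompted. Re-adding the old value
#   accomplishes nothing -- the attackers already have it.
```

Order matters: issue the new credential at the provider first, update Vercel, redeploy, and only then revoke the old one, so you don’t take your own app down mid-rotation.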
Then audit your OAuth connections. Every third-party app that has access to your Google Workspace, your GitHub, your deployment platform — ask yourself if you’d bet your production database on that company’s security posture. Because that’s exactly the bet you’re making.
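There’s no single command that audits every OAuth grant, but you can at least enumerate some of what’s connected. As one example — assuming you use the gh CLI — GitHub App installations on your account are queryable; classic OAuth app grants are not, and need a manual pass through Settings → Applications in the web UI:

```shell
# List GitHub App installations on your account via the gh CLI.
# GET /user/installations is a real GitHub REST endpoint; this only
# covers GitHub Apps, so review classic OAuth grants in the UI too.
gh api /user/installations --jq '.installations[].app_slug'
```

Do the same walk for Google Workspace (Admin console → Security → API controls) and your deployment platform. Every slug that comes back is a company whose security posture you’re implicitly trusting.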
The AI tools aren’t just writing your code now. They’re part of your infrastructure. And when they get compromised, they take you with them.