Slopsquatting: AI-Hallucinated Packages Are a Supply Chain Nightmare
There’s a special kind of confidence that AI coding tools have when they’re completely wrong. They don’t hedge. They don’t say “I’m not sure this package exists.” They just tell you to npm install react-native-auth-helper or pip install flask-http-utils like it’s the most natural thing in the world.
The problem? Those packages don’t exist. Or rather, they didn’t exist — until an attacker registered them.
Welcome to slopsquatting, the supply chain attack that only works because AI can’t stop making things up.
How It Works
The mechanics are almost embarrassingly simple:
- An AI coding tool hallucinates a package name that sounds plausible but doesn’t exist on npm, PyPI, or another registry.
- A researcher (or attacker) notices these hallucinated names are consistent — the same models suggest the same fake packages over and over.
- The attacker registers the hallucinated package name on the public registry and fills it with malicious code.
- A developer asks an AI to solve a problem, gets told to install the package, and runs the install command without checking.
That’s it. No zero-day exploit. No sophisticated intrusion. Just an AI confidently lying and a developer trusting it.
The Numbers Are Ugly
Recent research found that nearly 20% of code samples generated by popular AI models across Python and JavaScript included hallucinated package names. These aren’t random one-off mistakes — they’re persistent and repeatable. Ask the same model the same question ten times, and it’ll recommend the same nonexistent package in seven of them.
Security researchers have linked hallucinated packages to at least 35 new CVE entries in March 2026 alone. That’s not a rounding error. That’s a trend line heading straight up.
And here’s the kicker: the models aren’t getting better at this. Despite improvements in syntax accuracy (now above 95%), the security pass rate for AI-generated code hovers around 55%. Nearly one in two code snippets contains a known vulnerability. The models are getting better at writing code that runs. They’re not getting better at writing code that’s safe.
Why Vibe Coders Are the Prime Target
If you’re a seasoned developer, you might read pip install fastapi-auth-utils in an AI response and think, “Huh, I’ve never heard of that package. Let me check PyPI first.” You have the instinct to verify because you’ve been burned before.
Vibe coders don’t have that instinct. They’re building apps by describing what they want in plain English and trusting whatever comes back. The whole point of vibe coding is that you don’t need to understand the underlying details. But those details are where the danger lives.
Surveys show that nearly half of developers using AI coding tools don’t manually review the generated code before deploying it. For vibe coders building their first app with Cursor or Bolt, that number is likely much higher. When the AI says “install this,” they install it. When it says “add this dependency,” they add it.
A malicious slopsquatted package can:
- Steal environment variables (including API keys, database credentials, and secrets)
- Create backdoors that persist even after the package is removed
- Exfiltrate user data silently in the background
- Spread downstream to every user who installs your app
The Three-Minute Check That Could Save You
Before you install any package an AI recommends, do this:
- Search the registry. Go to npmjs.com or pypi.org and search for the exact package name. Check when it was published, who published it, and how many downloads it has. A package with 12 downloads published last week is a red flag.
- Check the source. Click through to the GitHub repo (if one exists). Does it have real commits? Real contributors? Or is it a single file uploaded two days ago?
- Cross-reference. Google the package name. If the only results are AI-generated tutorials and no real documentation, the package probably doesn’t exist in any legitimate form.
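The first of those checks is easy to script. Here’s a minimal sketch for the Python side, using PyPI’s public JSON API (`https://pypi.org/pypi/<name>/json`); the red-flag thresholds are illustrative choices, not official guidance, and `fetch_pypi_metadata` and `red_flags` are names invented for this sketch:

```python
# Check whether a package an AI recommended actually exists on PyPI, and
# whether its metadata looks suspicious. Thresholds are illustrative.
import json
from datetime import datetime, timezone
from urllib.request import urlopen
from urllib.error import HTTPError

def fetch_pypi_metadata(name: str):
    """Return PyPI's JSON metadata for `name`, or None if it doesn't exist."""
    try:
        with urlopen(f"https://pypi.org/pypi/{name}/json") as resp:
            return json.load(resp)
    except HTTPError as err:
        if err.code == 404:
            return None  # hallucinated or simply unregistered
        raise

def red_flags(meta: dict) -> list:
    """Heuristic warnings from already-fetched metadata (works offline)."""
    flags = []
    releases = meta.get("releases", {})
    if len(releases) <= 1:
        flags.append("only one release ever published")
    uploads = [f["upload_time_iso_8601"]
               for files in releases.values() for f in files]
    if uploads:
        first = min(datetime.fromisoformat(u.replace("Z", "+00:00"))
                    for u in uploads)
        age_days = (datetime.now(timezone.utc) - first).days
        if age_days < 30:
            flags.append(f"first published only {age_days} days ago")
    if not meta.get("info", {}).get("project_urls"):
        flags.append("no linked homepage or source repository")
    return flags
```

A `None` from `fetch_pypi_metadata` means the package the AI recommended doesn’t exist at all; a non-empty `red_flags` list means it exists but deserves the full three-minute treatment before you install it.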
This takes three minutes. It could save you from shipping malware to your users.
It Gets Worse: Apple Is Watching
If you needed another reason to audit your dependencies, here it is: Apple started blocking updates for vibe coding apps in the App Store this month, citing concerns about apps that can alter their own functionality without review. The walls are closing in on unvetted AI-generated code from multiple directions simultaneously.
The ecosystem is sending a clear message: the era of blindly shipping whatever an AI produces is ending. Whether it’s supply chain attacks through hallucinated packages, the 45% vulnerability rate in AI-generated code, or platform gatekeepers cracking down — the bill for “move fast and don’t verify” is coming due.
What You Can Do Right Now
If you’ve built an app with AI assistance, your first step is understanding what’s actually in your codebase. Run a scan. Check your dependencies. Look for packages you didn’t intentionally add and verify the ones you did.
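One way to start that audit, sketched below under the assumption that your intended dependencies live in a requirements.txt (the function names are invented for this sketch; adapt to your lockfile of choice):

```python
# Audit sketch: compare what's actually installed in the current Python
# environment against the dependencies you intentionally declared.
from importlib.metadata import distributions

def declared_packages(requirements_text: str) -> set:
    """Parse package names out of a requirements file's text."""
    names = set()
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if line:
            # Crude name extraction: stop at the first version specifier.
            for sep in ("==", ">=", "<=", "~=", "!=", ">", "<", "["):
                line = line.split(sep, 1)[0]
            names.add(line.strip().lower())
    return names

def unexpected_installed(declared: set) -> set:
    """Installed distributions that were never declared on purpose."""
    installed = {d.metadata["Name"].lower() for d in distributions()}
    return installed - declared
```

Anything in `unexpected_installed` isn’t necessarily malicious (transitive dependencies land there too), but it’s exactly the list of packages you should be able to explain.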
Our free URL scan can help you spot exposed secrets, security misconfigurations, and other vulnerabilities that AI coding tools love to introduce. It takes thirty seconds and might be the most productive half-minute you spend today.
Because the thing about slopsquatting is that you won’t know it happened until it’s too late. The malicious package installed cleanly. Your app works fine. Everything looks normal — right up until your users’ data shows up on a paste site.
Don’t trust the vibes. Verify the code.