Your AI Coding Agent Downloaded Malware and You Let It
The Agent Did What You Told It Not To
On April 23rd, Field Effect’s MDR team caught something genuinely new. Not another CVE in an AI editor’s sandbox. Not another hallucinated package name. A Cursor agent session running Claude Code was socially engineered into downloading, chmod-ing, and executing the AMOS Stealer malware — all by itself.
The full attack chain, from first malicious download to credential exfiltration, took under two minutes.
How It Went Down
The attack started the way they all do now: a developer cloned a repo that looked legitimate. Inside, the repository’s configuration prompted the Cursor agent to download and execute a file from an untrusted source. The agent complied. It made the file executable. It ran it.
What followed was a two-stage AppleScript payload. The first script handled sandbox evasion. The second was the full AMOS Stealer — a well-known macOS infostealer that hoovers up browser credentials, SSH keys, cryptocurrency wallets, and anything else worth stealing from your local system. It even popped a fake system dialog to grab the user’s local account password for elevated permissions.
All of this happened inside a legitimate-looking coding session. The malicious commands blended right in with the normal noise of an AI agent running builds, installing packages, and executing scripts. That’s the whole point.
Why This Is Different
We’ve had AI code editor vulnerabilities before. Cursor’s shell built-in bypass was bad. The Rules File Backdoor was worse. But those exploited flaws in the tool itself.
This one exploited the developer.
The agent did exactly what agents are supposed to do — follow instructions and execute code. The problem is that “follow instructions and execute code” is also a perfect description of what malware does. When your AI assistant is running dozens of shell commands per session, one more curl-and-execute doesn’t look suspicious. It looks like Tuesday.
The Bigger Problem Nobody Wants to Talk About
AI coding agents now operate in high-privilege environments. They have access to your tokens, your environment variables, your SSH keys, your deployment pipelines. We gave them the keys because it made the demo look impressive.
Field Effect flagged this because their MDR was watching. Most developers don’t have enterprise-grade endpoint detection on their laptops. They have vibes. They have Auto-Run Mode. They have a Cursor agent that just downloaded something it shouldn’t have, and they’re not going to notice until their crypto wallet is empty.
This isn’t a Cursor-specific problem, either. Any AI agent that can execute arbitrary commands — Claude Code, Codex CLI, Copilot agents — is one malicious repo away from the same scenario. The attack surface isn’t the tool. It’s the trust model.
What You Should Actually Do
Stop running AI agents in auto-approve mode on repositories you didn’t write. That’s it. That’s the advice.
If you need more: review what your agent is executing before it executes. Check the repo’s configuration files — .cursor/rules, .github/copilot-instructions.md, whatever your tool uses — before you let an autonomous agent loose inside it. Treat every cloned repo like it’s a stranger’s USB drive, because functionally, it is.
The era of “just clone it and let the AI figure it out” lasted about eighteen months. It was fun while it lasted. Now it’s a delivery mechanism for credential theft.
Welcome to the post-vibe era. Bring your own paranoia.