Anthropic MCP STDIO RCE: The 'By-Design' Supply Chain Vulnerability

Critical · CVE-2025-49596 · Disclosed · 3 min read

OX Security found that Anthropic's official MCP SDKs hand configuration values directly to OS command execution over the STDIO transport. Any path that lets an attacker influence MCP server configuration (a malicious package, a tampered config file, untrusted user input) becomes arbitrary command execution. Ten CVEs landed across downstream projects including LiteLLM, LangChain, LangFlow, Flowise, LettaAI, LangBot, MCP Inspector, and Cursor. The supply chain footprint: 7,000+ publicly accessible servers, 150M+ package downloads, up to 200,000 vulnerable instances. Anthropic's response: this is by design. Sanitisation is the developer's job.

What happened

OX Security's research team found that Anthropic's MCP SDKs implement the STDIO transport in a way that hands configuration values directly to operating-system command execution. The exposure isn't a single bug. It's the design.

Anywhere an attacker can influence MCP server configuration becomes a code execution path. A malicious package in the dependency chain. A tampered config file. Untrusted user input that ends up in a server spec. All of them compile to a process spawn with attacker-controlled arguments somewhere upstream.
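To make the execution path concrete, here is a minimal sketch of the vulnerable pattern. The config shape (`command` plus `args`) mirrors typical MCP server specs; the package name is illustrative, and `spawn_argv` is a hypothetical stand-in for what the SDK's STDIO transport does internally, not actual SDK code.

```python
# Hypothetical MCP server spec of the kind found in an MCP config file.
# In the vulnerable pattern, these fields flow straight into a process spawn.
server_spec = {
    "command": "npx",
    "args": ["-y", "@evil/backdoored-mcp-server"],  # attacker-influenced
}

def spawn_argv(spec: dict) -> list[str]:
    # The STDIO transport builds an argv directly from the config.
    # Nothing validates the command or arguments, so a tampered spec
    # executes whatever it names when the argv is handed to the OS.
    return [spec["command"], *spec.get("args", [])]

argv = spawn_argv(server_spec)
```

Whoever controls the spec controls the argv; that is the whole vulnerability class.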

Ten CVEs landed across downstream projects: LiteLLM, LangChain, LangFlow, Flowise, LettaAI, LangBot, MCP Inspector, Cursor. OX flagged the issue at the protocol level and recommended structural changes. Anthropic's reply: the STDIO execution model is a secure default, and sanitisation is the developer's responsibility. The SDK behaviour stays.

The PolicyLayer angle

When the protocol vendor tells you sanitisation is your job, the policy layer is how you make that responsibility enforceable instead of aspirational. The model can be jailbroken, the SDK won't change, downstream projects will ship the unsafe default for years. The thing under your control is the boundary the agent passes through to reach tools.

The pattern that contains a by-design RCE: every MCP server runs through a policy gate before its tools are exposed to the agent. Tool allowlists per task, manual approval for any server with untrusted config provenance, kill-switch for runaway invocations, deny-by-default for anything that looks like shell or process control. None of these depend on the SDK getting fixed.
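The gate can be sketched in a few lines. This is illustrative, not a real PolicyLayer API; the allowed-commands set is an assumption standing in for a site-specific policy.

```python
# Assumed site policy: only these launcher commands may start MCP servers.
ALLOWED_COMMANDS = {"npx", "uvx", "node", "python"}
# Anything that looks like shell or process control is denied outright.
SHELL_LIKE = {"sh", "bash", "zsh", "cmd", "powershell"}

def gate(spec: dict, provenance: str) -> str:
    """Decide whether an MCP server spec may be launched."""
    cmd = spec.get("command", "")
    if cmd in SHELL_LIKE:
        return "deny"             # deny-by-default for shell/process control
    if cmd not in ALLOWED_COMMANDS:
        return "deny"             # deny-by-default for unknown commands
    if provenance != "trusted":
        return "manual-approval"  # untrusted config provenance needs a human
    return "allow"
```

Every decision here runs before the spawn, which is why none of it depends on the SDK changing.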

This disclosure is the AI era's open-redirect moment. The protocol won't fix itself, the SDKs won't fix themselves, and downstream projects will keep shipping the unsafe defaults. Defence has to live one layer up.

Mitigations

Treat MCP server configuration as a code execution input. Don't accept untrusted config sources. Pin SDK versions and audit packages downstream. Run MCP processes in sandboxes (containers, isolated users, read-only filesystems where possible). Block public IP access to anything launching MCP STDIO subprocesses. Monitor for anomalous command lines from MCP-spawned processes.
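One way to treat the config as a code execution input is to pin a known-good fingerprint and refuse to launch anything else. A minimal sketch, assuming a JSON config file; the `mcpServers` key and the pinning workflow are illustrative.

```python
import hashlib
import json

def config_fingerprint(raw: bytes) -> str:
    """SHA-256 fingerprint of the raw config bytes."""
    return hashlib.sha256(raw).hexdigest()

def load_config(raw: bytes, pinned: str) -> dict:
    """Parse the MCP config only if it matches the pinned fingerprint."""
    if config_fingerprint(raw) != pinned:
        raise PermissionError("MCP config does not match pinned fingerprint")
    return json.loads(raw)
```

Pin the fingerprint at review time, check it at launch time: a tampered config file then fails closed instead of spawning an attacker-chosen process.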

FAQs

Did Anthropic patch this?

No. Anthropic confirmed the behaviour is by design, calling the STDIO execution model a secure default and placing sanitisation responsibility on developers. Downstream projects have patched their own usage. The SDK behaviour stays.

How many systems are affected?

OX Security identified 7,000+ publicly accessible servers and 150M+ package downloads in the supply chain, with up to 200,000 vulnerable instances across the ecosystem at disclosure.

Is using STDIO MCP servers safe?

Depends on your config supply chain. If the server spec, args, and environment all come from sources you trust end to end, fine. The risk is that almost no real deployment meets that bar: configs come from npm packages, marketplaces, user input, automated tooling. Treat STDIO MCP launch as a privileged operation.
