Ravie LakshmananApr 20, 2026Artificial Intelligence / Vulnerability
Cybersecurity researchers have discovered a critical “by design” weakness in the Model Context Protocol’s (MCP) architecture that could pave the way for remote code execution and have a cascading effect on the artificial intelligence (AI) supply chain.
“This flaw enables Arbitrary Command Execution (RCE) on any system running a vulnerable MCP implementation, granting attackers direct access to sensitive user data, internal databases, API keys, and chat histories,” OX Security researchers Moshe Siman Tov Bustan, Mustafa Naamnih, Nir Zadok, and Roni Bar said in an analysis published last week.
The cybersecurity company said the systemic vulnerability is baked into Anthropic’s official MCP software development kit (SDK) across any supported language, including Python, TypeScript, Java, and Rust. In all, it affects more than 7,000 publicly accessible servers and software packages totaling more than 150 million downloads.
At issue are unsafe defaults in how MCP configuration works over the STDIO (standard input/output) transport interface, resulting in the discovery of 10 vulnerabilities spanning popular projects like LiteLLM, LangChain, LangFlow, Flowise, LettaAI, and LangBot –
- CVE-2025-65720 (GPT Researcher)
- CVE-2026-30623 (LiteLLM) – Patched
- CVE-2026-30624 (Agent Zero)
- CVE-2026-30618 (Fay Framework)
- CVE-2026-33224 (Bisheng) – Patched
- CVE-2026-30617 (Langchain-Chatchat)
- CVE-2026-33224 (Jaaz)
- CVE-2026-30625 (Upsonic)
- CVE-2026-30615 (Windsurf)
- CVE-2026-26015 (DocsGPT) – Patched
- CVE-2026-40933 (Flowise)
These vulnerabilities fall under four broad categories, effectively triggering remote command execution on the server –
- Unauthenticated and authenticated command injection via MCP STDIO
- Unauthenticated command injection via direct STDIO configuration with hardening bypass
- Unauthenticated command injection via MCP configuration edit through zero-click prompt injection
- Unauthenticated command injection through MCP marketplaces via network requests, triggering hidden STDIO configurations
“Anthropic’s Model Context Protocol gives a direct configuration-to-command execution via their STDIO interface on all of their implementations, regardless of programming language,” the researchers explained.
“As this code was meant to be used in order to start a local STDIO server, and give a handle of the STDIO back to the LLM. But in practice it actually lets anyone run any arbitrary OS command, if the command successfully creates an STDIO server it will return the handle, but when given a different command, it returns an error after the command is executed.”
Interestingly, vulnerabilities based on the same core issue have been reported independently over the past year. They include CVE-2025-49596 (MCP Inspector), LibreChat (CVE-2026-22252), WeKnora (CVE-2026-22688), @akoskm/create-mcp-server-stdio (CVE-2025-54994), and Cursor (CVE-2025-54136).
Anthropic, however, has declined to modify the protocol’s architecture, citing the behavior as “expected. While some of the vendors have issued patches, the shortcoming remains unaddressed in Anthropic’s MCP reference implementation, causing developers to inherit the code execution risks.
The findings highlight how AI-powered integrations can inadvertently expand the attack surface. To counter the threat, it’s advised to block public IP access to sensitive services, monitor MCP tool invocations, run MCP-enabled services in a sandbox, treat external MCP configuration input as untrusted, and only install MCP servers from verified sources.
“What made this a supply chain event rather than a single CVE is that one architectural decision, made once, propagated silently into every language, every downstream library, and every project that trusted the protocol to be what it appeared to be,” OX Security said. “Shifting responsibility to implementers does not transfer the risk. It just obscures who created it.”
Found this article interesting? Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.


