What threat do MCP servers pose to AI? [Q&A]

AI needs to be able to call upon tools and retrieve data from numerous disparate sources. Historically this meant building a custom integration for each source, a resource-intensive process, until Anthropic developed a neat solution in the form of the Model Context Protocol (MCP).

However, security issues have now emerged with MCP servers that could threaten adoption. We spoke to Shreyans Mehta, CTO at Cequence Security, about what this means for AI and how the business can embrace the technology safely.

BN: What is MCP and why is it fundamental to AI?

SM: Model Context Protocol (MCP) is an open-source standard that was unveiled by Anthropic in November 2024. It provides a universal interface and a standardized method of communication for AI models, effectively freeing AI from having to rely solely on its training data by providing a way to access current data from other sources in real time. MCP acts like a common language between the LLM and those data sources, avoiding the need to create custom integrations for each system, and it’s taken the world by storm, much as APIs did years ago when they made it faster and more convenient to roll out apps. There are now hundreds of official public MCP servers and thousands of unofficial ones. MCP is also crucial for agentic AI, as it allows autonomous agents to access applications, tools, and data in order to complete tasks.
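To make the "common language" idea concrete, the sketch below builds the kind of JSON-RPC 2.0 envelopes that MCP clients and servers exchange. The method names (initialize, tools/list, tools/call) come from the MCP specification; the tool name and argument values are hypothetical, and this is an illustrative fragment rather than a full protocol transcript.

```python
import json

def jsonrpc_request(request_id, method, params):
    """Build a JSON-RPC 2.0 request envelope, the wire format MCP uses."""
    return {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}

# 1. The client opens the session and negotiates capabilities.
initialize = jsonrpc_request(1, "initialize", {
    "protocolVersion": "2024-11-05",   # version string from the MCP spec
    "clientInfo": {"name": "example-client", "version": "0.1.0"},
    "capabilities": {},
})

# 2. The client asks the server which tools it exposes.
list_tools = jsonrpc_request(2, "tools/list", {})

# 3. The model invokes one of those tools by name with arguments.
call_tool = jsonrpc_request(3, "tools/call", {
    "name": "search_flights",          # hypothetical tool name
    "arguments": {"from": "LHR", "to": "JFK"},
})

for msg in (initialize, list_tools, call_tool):
    print(json.dumps(msg))
```

Because every data source speaks this one envelope format, the LLM side never needs a bespoke integration per system, which is the point the protocol exists to make.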

BN: How do MCP servers integrate with the existing architecture of the business?

SM: Much like any other server, requests are routed via the MCP server, which acts as a bridge, channelling requests from the LLM to internal or external resources. Prompts made via the LLM use an MCP client to connect to the MCP server and to access data repositories. MCP servers also enable the LLM to go beyond its usual capabilities, so it can access read-only data, for instance, or utilize tools to perform tasks such as calling APIs, writing to a database or modifying a file. It’s this latter functionality that will empower agentic AI: rather than the AI assistant providing the user with a summary in response to a request, the AI agent will be able to act upon it, finding real-time information on airline tickets, for example, booking, and even paying for the flight.
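The bridge role described above can be sketched as a tiny dispatcher: a tools/call request arrives from the client and the server routes it to the named capability, whether that is read-only data access or an action. The tool names and handler bodies here are invented for illustration; a real server would implement the full MCP specification.

```python
def read_fare(args):
    # Read-only data access, e.g. querying a fares system (stubbed here).
    return {"route": f"{args['from']}-{args['to']}", "fare": 325.00}

def book_flight(args):
    # An action tool: the write-capable functionality that empowers agentic AI.
    return {"status": "booked", "route": f"{args['from']}-{args['to']}"}

TOOLS = {"read_fare": read_fare, "book_flight": book_flight}

def handle_tools_call(request):
    """Dispatch a JSON-RPC 'tools/call' request to the named tool."""
    params = request["params"]
    tool = TOOLS.get(params["name"])
    if tool is None:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32602, "message": "unknown tool"}}
    return {"jsonrpc": "2.0", "id": request["id"],
            "result": tool(params["arguments"])}

resp = handle_tools_call({
    "jsonrpc": "2.0", "id": 7, "method": "tools/call",
    "params": {"name": "book_flight", "arguments": {"from": "LHR", "to": "JFK"}},
})
print(resp["result"]["status"])   # → booked
```

The same dispatch shape covers both the read-only and the action tools, which is why a single MCP server can serve as the one hub in front of many systems.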

If the MCP server is developed internally, it performs the same function, acting as a single hub to access multiple tools and data sources such as CRM and ERP systems. It again eliminates the need to create a custom API that integrates with these systems, acting as a universal integration layer to connect to disparate systems, and usually is configured with the same access permissions as the user to prevent privilege abuse.

BN: How are MCP servers putting AI and the applications and data they access at risk?

SM: The chief strength of MCP — its connectivity — is also proving a major weakness. Stories began to surface this year about MCP servers exhibiting a host of vulnerabilities.

In May, researchers showed it was possible to hijack an AI agent via a vulnerability in the GitHub MCP server. If a developer asked the AI assistant to check for open issues, the agent would read a malicious issue, be prompt-injected, and follow commands instructing it to access private repositories and leak the data it discovered.

A month later, Asana disclosed a bug that could have exposed its users’ domain data to other Asana MCP users. Security researchers then separately announced the discovery of 7,000 MCP servers that were publicly accessible, with hundreds exposed to anyone on the same network. In the end, around 70 were found to have major flaws ranging from excessive permissions to unchecked input handling. These issues could allow the MCP server to be taken over and used nefariously and, as MCP servers effectively act as proxies, they would mask the malicious actor client-side.

More recently, researchers unearthed a misconfiguration of the Smithery.ai MCP hosting service that enabled them to access sensitive files and obtain over-privileged administrator credentials. These provided access to over 3,000 hosted AI servers, enabling the theft of API keys and secrets from thousands of customers using hundreds of services, demonstrating the potential cascade effect an attack could have and the fragility of the AI supply chain.

BN: If the business builds its own MCP server, does this make the process more secure? Is this a viable long-term strategy?

SM: There’s no reason why the business shouldn’t create its own MCP servers, and many will already be looking to develop these to gain competitive advantage. But building one from the ground up takes a lot of work to get from prototype to enterprise-grade solution, such as nailing down authentication, monitoring, and logging. It therefore makes more sense to create MCP servers within a secure build environment where such controls are prioritized.

However, there will come a time when the business needs to access other MCP servers that are not under its control. In this situation, it’s vital to validate that the MCP server comes from a reputable, known provider.
MCP servers have already sprung up, seemingly overnight, that offer unofficial ways to connect to reputable APIs, with reports stating these now exist for LinkedIn, YouTube and AWS, for example. Unfortunately, developers are now giving their AI agents access to these services via these servers with very little knowledge of who is controlling them and how the server is protected, if at all.

BN: What steps can be taken to ensure the business can use MCP servers securely?

SM: Businesses will need to be able both to create their own MCP servers and to connect to third-party ones securely, and one of the most effective ways of doing that is via an AI Gateway.

Developers can stand up an MCP server within such a solution by using standardized API specifications as the primary input, thereby eliminating the need to dedicate significant resources to development. Additional controls can then be applied to limit potential business logic abuse of the server. These include pass-through authentication or integration with OAuth identity infrastructures to ensure AI agents can only access the systems and data they are supposed to.
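The pass-through authentication idea can be sketched as a small gateway check: the agent's request carries the end user's bearer token, the gateway verifies the requested scopes against what this agent is allowed, and the same token travels on to the MCP server so downstream systems enforce the user's own permissions. The scope names, token format, and request shape below are all assumptions for illustration, not any particular gateway's API.

```python
# Hypothetical scopes this agent is permitted to request.
ALLOWED_SCOPES = {"crm:read", "erp:read"}

def forward_to_mcp(request):
    """Validate the caller's token and scopes, then pass the token through."""
    token = request["headers"].get("Authorization", "")
    scopes = set(request.get("scopes", []))
    if not token.startswith("Bearer "):
        return {"status": 401, "error": "missing bearer token"}
    if not scopes <= ALLOWED_SCOPES:
        return {"status": 403, "error": "scope not permitted for this agent"}
    # The unchanged Authorization header goes on to the MCP server, so the
    # agent can never exceed what the underlying user is entitled to access.
    upstream = {"headers": {"Authorization": token}, "body": request["body"]}
    return {"status": 200, "forwarded": upstream}

ok = forward_to_mcp({"headers": {"Authorization": "Bearer abc123"},
                     "scopes": ["crm:read"], "body": {"query": "accounts"}})
denied = forward_to_mcp({"headers": {"Authorization": "Bearer abc123"},
                         "scopes": ["erp:write"], "body": {}})
print(ok["status"], denied["status"])   # → 200 403
```

Forwarding the user's own credential, rather than a shared service account, is what keeps the MCP server from becoming a privilege-escalation shortcut.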

When it comes to validating other MCP servers, the AI Gateway can create a verified registry of trusted servers, while official APIs can be transformed into MCP-compatible endpoints. And because all AI-API traffic is monitored to track user and agent behavior, the applications they are accessing and the API calls being made, any rogue activity can be detected and mitigated. In this way, the business can exercise control both over the MCP servers it develops and those it does not, so that when attackers do begin to abuse the infrastructure in earnest, its applications, data, and AI are protected.
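The verified registry amounts to an allowlist check before any agent connection is made: only MCP servers whose host appears in the registry are reachable. The registry entries below are made-up hostnames, and a production gateway would pair this with the monitoring described above; this is a minimal sketch of the gating logic only.

```python
from urllib.parse import urlparse

# Hypothetical verified registry of trusted MCP server hosts.
TRUSTED_REGISTRY = {
    "mcp.internal.example.com",    # in-house server
    "partner-mcp.example.net",     # vetted third party
}

def is_trusted(server_url):
    """Allow a connection only if the server's host is in the registry."""
    host = urlparse(server_url).hostname
    return host in TRUSTED_REGISTRY

print(is_trusted("https://mcp.internal.example.com/v1"))    # → True
print(is_trusted("https://unofficial-linkedin-mcp.io/v1"))  # → False
```

A check this simple would already block the unofficial look-alike servers mentioned earlier, since they never make it onto the vetted list in the first place.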

Image credit: jujong11/depositphotos.com