Ravie LakshmananMar 31, 2026Cloud Security / AI Security
Cybersecurity researchers have disclosed a security “blind spot” in Google Cloud’s Vertex AI platform that could allow artificial intelligence (AI) agents to be weaponized by an attacker to gain unauthorized access to sensitive data and compromise an organization’s cloud environment.
According to Palo Alto Networks Unit 42, the issue relates to how the Vertex AI permission model can be misused by taking advantage of the service agent‘s excessive permission scoping by default.
“A misconfigured or compromised agent can become a ‘double agent’ that appears to serve its intended purpose, while secretly exfiltrating sensitive data, compromising infrastructure, and creating backdoors into an organization’s most critical systems,” Unit 42 researcher Ofir Shaty said in a report shared with The Hacker News.
Specifically, the cybersecurity company found that the Per-Project, Per-Product Service Agent (P4SA) associated with a deployed AI agent built using Vertex AI’s Agent Development Kit (ADK) had excessive permissions granted by default. This opened the door to a scenario where the P4SA’s default permissions could be used to extract the credentials of a service agent and conduct actions on its behalf.
After deploying the Vertex agent via Agent Engine, any call to the agent invokes Google’s metadata service and exposes the credentials of the service agent, along with the Google Cloud Platform (GCP) project that hosts the AI agent, the identity of the AI agent, and the scopes of the machine that hosts the AI agent.
Unit 42 said it was able to use the stolen credentials to jump from the AI agent’s execution context into the customer project, effectively undermining isolation guarantees and permitting unrestricted read access to all Google Cloud Storage buckets’ data within that project.
“This level of access constitutes a significant security risk, transforming the AI agent from a helpful tool into a potential insider threat,” it added.
That’s not all. With the deployed Vertex AI Agent Engine running within a Google-managed tenant project, the extracted credentials also granted access to the Google Cloud Storage buckets within the tenant, offering more details about the platform’s internal infrastructure. However, the credentials were found to lack the necessary permissions required to access the exposed buckets.
To make matters worse, the same P4SA service agent credentials also enabled access to restricted, Google-owned Artifact Registry repositories that were revealed during the deployment of the Agent Engine. An attacker could leverage this behavior to download container images from private repositories that constitute the core of the Vertex AI Reasoning Engine.
What’s more, the compromised P4SA credentials not only made it possible to download images that were listed in logs during the Agent Engine deployment, but also exposed the contents of Artifact Registry repositories, including several other restricted images.
“Gaining access to this proprietary code not only exposes Google’s intellectual property, but also provides an attacker with a blueprint to find further vulnerabilities,” Unit 42 explained.
“The misconfigured Artifact Registry highlights a further flaw in access control management for critical infrastructure. An attacker could potentially leverage this unintended visibility to map Google’s internal software supply chain, identify deprecated or vulnerable images, and plan further attacks.”
Google has since updated its official documentation to clearly spell out how Vertex AI uses resources, accounts, and agents. The tech giant has also recommended that customers use Bring Your Own Service Account (BYOSA) to replace the default service agent and enforce the principle of least privilege (PoLP) to ensure that the agent has only the permissions it needs to perform the task at hand.
“Granting agents broad permissions by default violates the principle of least privilege and is a dangerous security flaw by design,” Shaty said. “Organizations should treat AI agent deployment with the same rigor as new production code. Validate permission boundaries, restrict OAuth scopes to least privilege, review source integrity and conduct controlled security testing before production rollout.”
Found this article interesting? Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.


