Only 28 percent of firms think they can prevent damage from rogue AI agents

As AI agents gain the autonomy to initiate actions, access systems, and interact with other agents without direct human oversight, traditional security models are struggling to cope.

A new report from Keyfactor, conducted in partnership with Wakefield Research, finds that 86 percent of cybersecurity professionals agree that without unique, dynamic digital identities, AI agents and autonomous systems cannot be fully trusted.

While cybersecurity professionals acknowledge AI-based vulnerabilities, only half have implemented governance frameworks to address them, and just 28 percent believe they could actually prevent a rogue agent from causing damage. Agentic AI security should therefore be a board-level priority, yet 55 percent of security leaders say their C-suite is not taking agentic AI risks seriously enough, creating a recognition-action gap that leaves organizations vulnerable.

See also:
What happens if agentic AI falls into the wrong hands? [Q&A]
Agentic AI is being rolled out before organizations are ready for the identity risks
New report warns of looming agentic AI and quantum fraud risks

“As businesses race to deploy autonomous AI systems, the security infrastructure to protect them is falling dangerously behind,” says Jordan Rackie, CEO of Keyfactor. “C-Suite must provide the resources and support security teams need to enable their organizations to trust what their AI is doing, prove it, and stop it when needed. This is the next frontier of digital trust, and identity will define the winners in the next decade of AI.”

The report also finds that as vibe coding gains momentum in software development, a critical security gap is emerging: 68 percent of respondents lack full visibility into or governance of AI-generated code contributions. This creates an untenable risk as AI assistants write increasingly large portions of enterprise codebases without the fundamental safeguards that make code trustworthy.

You can get the full report from the Keyfactor site.

Image Credit: Twoapril Studio/Dreamstime.com