
According to new research, 90 percent of enterprises say they have visibility into their AI footprint, yet 59 percent have confirmed or suspect the presence of shadow AI within their environments. This suggests that employees are using unsanctioned AI tools or deploying agentic AI systems outside established monitoring and governance processes.
The survey of over 650 cybersecurity decision-makers, conducted by ArmorCode in partnership with the Purple Book Community, also finds that 70 percent of organizations have confirmed or suspected vulnerabilities introduced by AI-generated code in their production systems. This highlights how the speed of AI-assisted development is outpacing traditional security review cycles.
“The greatest AI security threat isn’t what organizations can’t see — it’s what they can see but can’t govern fast enough to stop. The PBC State of AI Risk Management 2026 report underscores just how urgent this governance gap has become,” says Sangram Dash, PBC Charter Member and CISO and VP of IT at Sisense.
Among other findings, 73 percent of organizations say AI-assisted development is increasing software velocity beyond the pace security teams can review, contributing to the widespread presence of AI-generated vulnerabilities in production. Furthermore, 73 percent report extensive AI usage in their development processes, while 78 percent say they are piloting or deploying agentic AI systems capable of taking autonomous action.
More than half (51 percent) of enterprises use 11 or more security scanning and vulnerability management tools, creating siloed insights and operational complexity that make it harder for teams to prioritize the greatest risk to their business.
In addition, 46 percent of respondents say they spend significant time triaging vulnerabilities that ultimately don’t matter, while critical issues remain buried across disconnected tools.
“These findings show that the real challenge is not AI adoption itself, but the governance required to manage it responsibly at enterprise scale,” says Karthik Swarnam, chief security and trust officer at ArmorCode and Purple Book Community member. “Across the industry, visibility into AI is improving, but the volume and speed of change are outpacing how teams actually operate. Signals are coming from everywhere, and without clear ownership and action, things slip through. That’s why many organizations are ending up with more unsanctioned AI than sanctioned, and risk in places they didn’t expect.”
You can get the full AI Risk Management Report from the Purple Book Community site.
Image credit: cunaplus/depositphotos.com
