Enterprises lack visibility into AI usage

A new report reveals that while 72 percent of organizations believe they have full visibility into AI usage, 65 percent still report detecting unauthorized shadow AI, revealing a structural gap between perceived control and operational reality.

The study from CultureAI, with research conducted by Censuswide among 300 senior technology, security, and risk leaders across North America and Europe, finds that AI is widely used: 67 percent of security leaders report broad use across the organization, while 27 percent report use confined to specific functions.

Currently, AI use is mostly focused on core functions like data analysis and RevOps (72 percent), software development and engineering (59 percent), and customer support (43 percent). Yet the vast majority of respondents (91 percent) expect AI usage to grow across their entire organization over the next 12 months, with 41 percent expecting significant growth. However, risk scales with usage, and when exposure grows faster than controls, an organization often has little time to prepare.

Concerns around AI include compliance exposure (56 percent), data leakage via prompts and uploads (52 percent), credential compromise (40 percent), and intellectual property loss (39 percent). Despite this, nearly half (46 percent) of respondents rate AI risk as moderate or lower.

Most organizations have policies, committees, and training in place, but lack mechanisms that operate in real time at the point where AI risk is actually created, such as prompts, uploads, and embedded AI features inside SaaS tools. Sixty-two percent of organizations report they have already implemented a formal AI governance framework, while a further third are actively developing one. Similarly, 67 percent say they have established an AI or risk committee with explicit oversight responsibilities. This confidence, however, sits alongside clear operational gaps: 20 percent of respondents acknowledge that their policies are not actively enforced, and more than a third lack dedicated AI detection capabilities altogether.

Oliver Simonnet, lead cybersecurity researcher at CultureAI, says, “Generative AI is now embedded across everyday workflows, often beyond traditional IT oversight. While many organizations believe they have governance frameworks in place, our research reveals a widening gap between perceived control and operational reality. The most significant AI risks in 2026 aren’t theoretical; they’re practical, high-probability risks tied to everyday use. Policies set intent, but without real-time enforcement at the point of use, risk is created quietly and at scale. To adopt AI at scale responsibly, businesses must move beyond policy and implement real-time, enforceable controls where risk is actually created.”

The full report is available from the CultureAI site.

Image credit: BiancoBlue/depositphotos.com