Over half of GenAI breaches in finance involve regulated data


As financial services organizations rapidly adopt generative AI, the risk of exposing sensitive financial and customer data is increasing. New research shows regulated data accounts for 59 percent of all data policy violations related to GenAI usage, highlighting the scale of the challenge in protecting compliance-sensitive information.

The study from Netskope Threat Labs finds that across the financial services sector 70 percent of users are actively using GenAI tools and 97 percent are interacting with applications that incorporate GenAI-powered features indirectly. In addition, 94 percent of users are using GenAI applications that rely on user data for training.

At the same time, organizations are making progress in reducing shadow AI usage. The proportion of users relying on personal GenAI applications has dropped significantly from 76 percent to 36 percent over the past year, while adoption of organization-managed GenAI solutions has increased from 33 percent to 79 percent. However, the number of users switching between personal and organization-managed GenAI accounts has risen from 9 percent to 15 percent, increasing the risk of sensitive financial data moving between unmanaged and secure environments.

The GenAI ecosystem is also diversifying. ChatGPT remains the most widely used application, adopted by 76 percent of organizations, followed by Google Gemini at 68 percent. Newer tools are also gaining traction quickly, with Google NotebookLM reaching 39 percent adoption and AssemblyAI rising sharply from just one percent in June 2025 to 37 percent, reflecting growing demand for specialized AI capabilities.

Organizations are also taking a cautious approach to risk, with tools such as ZeroGPT (46 percent), DeepSeek (44 percent) and PolitePost (43 percent) among the most frequently blocked GenAI applications due to security and compliance concerns.

Ray Canzanese, director of Netskope Threat Labs, says, “As financial institutions accelerate their adoption of generative AI, they are also expanding the number of pathways through which sensitive data can be exposed. While the shift towards organization-managed tools is a positive step, our findings show that risks persist, particularly where personal and enterprise usage overlap. To reduce risk, organizations need a layered approach — inspecting all web and cloud traffic to stop malware, blocking non-essential applications, and using data loss prevention to protect sensitive information. Technologies like remote browser isolation also play a key role in enabling safe access to higher-risk websites.”

To find out more, get the full report from the Netskope site.

Image credit: photonphoto/depositphotos.com