Shadow AI threat increases as employees take risks to meet deadlines

A new study, based on a survey of 2,000 respondents, finds that 86 percent now use AI tools at least weekly for work-related tasks. However, 34 percent admit to using free versions of company-approved AI tools, raising concerns about where sensitive corporate data is stored, processed, and accessed.

The research from BlackFog also shows that among respondents using shadow AI tools not approved by their employer, 58 percent rely on free versions, which often lack enterprise-grade security, data governance, and privacy protections.

There appears to be a general acceptance of risk among employees, with 63 percent of respondents believing it is acceptable to use AI tools without IT oversight if no company-approved option is provided. The ‘speed outweighs security’ mindset is reinforced by the fact that 60 percent of respondents agree that using unsanctioned AI tools is worth the security risks if it helps them work faster or meet deadlines. Additionally, 21 percent believe their employer would ‘turn a blind eye’ to the use of unapproved AI tools as long as work is completed on time.

It is also concerning that those at senior levels are more likely to accept risk. Some 69 percent of respondents at president or C-level, and 66 percent of those at director or senior VP level, believe speed trumps privacy or security. In contrast, just 37 percent of people in administrative roles and 38 percent in junior executive positions share this view.

The type of data being shared with AI tools is also worrying: 33 percent of employees have shared research or data sets, 27 percent have shared employee data such as staff names, payroll, or performance information, and 23 percent have shared financial statements or sales data. In addition, 51 percent admit to connecting or integrating AI tools with other work systems or apps without IT department approval or oversight.

Dr. Darren Williams, CEO and founder of BlackFog, says, “This research is a stark indication not only of how widely unapproved AI tools are being used, but also the level of risk tolerance amongst employees and senior leaders. This should raise red flags for security teams and highlights the need for greater oversight and visibility into these security blind spots. AI is already embedded in our working world, but this cannot come at the expense of the security and privacy of the datasets on which these AI models are trained.”

You can find out more on the BlackFog site.

Image credit: dimikwiat/depositphotos.com