Why shadow AI is the next big compliance challenge [Q&A]


Shadow IT — workers using their own devices or choice of apps because they see their employer’s tech as slow or not up to the job — has long been an issue for enterprises. The rise of easily available generative AI tools has only added to the problem.

We spoke to Mat Newfield, president and chief commercial officer of Diversified, about the rise of shadow AI and how businesses can address the security and compliance risks that it introduces.

BN: How big a problem is shadow AI at the moment?

MN: Shadow AI is a substantial and growing risk for enterprises. According to our Diversified Workplace Technology Maturity Survey of over 1,600 US employees, 89 percent of workers report using their own devices or apps for work tasks because company-provided tools are too cumbersome or slow.

This trend shows that unsanctioned tools, including AI tools, are being adopted widely, often daily. Once employees start using their personal devices or non-approved applications to do work, especially when traditional IT tools are inadequate, shadow AI becomes a systemic problem rather than a niche issue.

BN: Is shadow AI always born from a desire for increased efficiency, or can it be a more malicious act?

MN: Of course we want to believe that shadow AI begins with good intentions. Employees simply want to get their work done faster, or more effectively, especially when corporate systems are too slow, restrictive, or not fit for purpose. However, because usage bypasses IT or compliance oversight, it also creates fertile ground for malicious actors. Unsanctioned AI tools could be used to exfiltrate data, leak intellectual property, or create vulnerabilities that attackers can exploit.

BN: How can shadow AI compromise a company’s intellectual property or competitive edge?

MN: Shadow AI can undermine a company’s IP and competitive advantage in several ways:

  • Uncontrolled data exposure: Employees might paste confidential code, R&D materials, designs, customer data, or strategy documents into public AI tools, thereby risking leaks or inadvertent sharing. Analysts have documented cases where proprietary source code or internal documents were uploaded to consumer AI tools, leading to data exposure.
  • Loss of visibility/audit trail: Because unsanctioned AI tools operate outside approved enterprise systems, IT and security teams lose visibility into what data is being processed, by whom, and where it goes. That blind spot makes it very difficult to detect unauthorized copying or exfiltration of sensitive information.
  • Regulatory or compliance risk: If sensitive data is uploaded to third-party AI services without consent or proper controls (especially in regulated industries), the company risks compliance violations or even legal repercussions. That could damage reputation and erode competitive position.
  • Undermining competitive advantage: If proprietary algorithms, product plans, creative IP, or strategic documents are exposed via shadow AI, rivals (or external actors) might gain access to them, potentially eroding the company’s market differentiation or first-mover advantage.

BN: What are the essential components of a robust AI governance framework that businesses need to put in place?

MN: Based on best practices and informed by the security-by-design philosophy we promote in The Trojan Horse in Your Tech Stack: Securing AV & Media Before It’s Too Late, a robust AI Governance Framework should include the following core components:

  • Identity and Access Management (IAM): All AI tools — including media/AV, collaboration, and AI applications — should be integrated into enterprise identity management systems. Use role-based access control (RBAC) and enforce multi-factor authentication (MFA).
  • Visibility and Monitoring: Maintain comprehensive logging, telemetry, and real-time monitoring of AI tool usage. Ensure that AI deployments are visible to security teams, with alerts for suspicious behavior or unauthorized data uploads.
  • Integrity and Secure Configuration: Treat AI tools as part of the critical infrastructure. Patch appropriately, validate software/firmware, securely manage vendor supply chains, and ensure data transmitted to or from AI tools is encrypted and managed under enterprise policy.
  • Policy & Usage Guidelines (Governance and Compliance): Define clear, enforceable policies around acceptable AI use, data handling, permitted/forbidden content, and compliance requirements. Ensure all employees understand these rules.
  • Approval and Vetting Process for AI Tools: Before allowing any AI tool into the environment — whether for AV/media, collaboration, or generative AI — evaluate it for compliance, security posture, vendor trustworthiness, data governance, and alignment with corporate standards. (This reflects the ‘Alignment’ pillar described in The Trojan Horse framework.)
  • Education & Culture: Train employees on the risks of unsanctioned tools, safe AI usage, data sensitivity, compliance, and security hygiene. Encourage a culture where employees understand why governance matters and feel comfortable requesting approved AI tools rather than sidestepping IT.
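To make the approval and monitoring components above concrete, here is a minimal, hypothetical Python sketch. The tool names, the allow-list, and the helper function are invented for illustration; the point is simply that gating AI tool access against a vetted list, while logging every request, gives security teams the audit trail that shadow AI otherwise erases.

```python
# Hypothetical "approve and monitor" gate for AI tool requests.
# The allow-list contents and tool names are illustrative assumptions,
# not references to any real product or vendor.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-governance")

# Approval and vetting: only tools that have passed enterprise review.
APPROVED_AI_TOOLS = {"enterprise-copilot", "internal-summarizer"}

def request_ai_tool(user: str, tool: str) -> bool:
    """Return True if the tool is approved; log every request for visibility."""
    allowed = tool in APPROVED_AI_TOOLS
    # Visibility and monitoring: every request is timestamped and logged,
    # approved or not, so usage remains auditable.
    log.info(
        "%s | user=%s tool=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), user, tool, allowed,
    )
    if not allowed:
        # Route the employee to the vetting process instead of silently
        # blocking, which is what drives shadow AI in the first place.
        log.warning("tool=%s unapproved; route user=%s to vetting", tool, user)
    return allowed
```

In practice such a gate would sit in an identity-aware proxy or SSO layer rather than application code, but the logic is the same: an enforceable allow-list plus an audit log.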

BN: Should a business adopt a ‘ban and block’ approach to consumer AI tools, or is an ‘enable and govern’ strategy more effective for long-term control?

MN: When considering the use of AI tools, it is critical to take this risk seriously and make the right decision for your organization, and that decision will differ from one organization to the next. But one thing is certain: organizations must prioritize a strategy that reduces shadow IT. To answer the question directly, most organizations will find that an ‘enable and govern’ approach is more effective and sustainable. Here’s why:

  • Given that the majority of employees (89 percent, according to the Diversified survey) are already using personal devices or apps for work, a pure ban is likely to drive shadow-AI use under the radar, creating unmanaged risks.
  • By contrast, enabling your people with approved AI tools, building governance around them, integrating them into IAM, and monitoring usage can actually channel user demand into secure, compliant, controlled environments. That turns potential security liability into strategic advantage (innovation, productivity, competitive intelligence).
  • A governed approach maintains visibility and reduces data-leakage risk, while supporting employees’ needs for efficiency and flexibility. Over time, this fosters a culture of trust and responsible AI usage rather than an adversarial ‘us vs. them’ situation between security teams and employees.

Image credit: casarda/depositphotos.com