
Artificial intelligence is transforming the way many areas of business operate. But with the benefits come new risks to corporate data.
We spoke to Rohan Sathe, CEO and co-founder of Nightfall AI, to find out how AI risks exposing sensitive information and what companies can do to protect themselves.
BN: How are shadow AI applications creating new data security vulnerabilities for enterprises?
RS: We define shadow AI as the unauthorized or unmonitored use of AI tools by employees — think pasting source code or customer data into chatbots — which creates exposure risks outside of IT governance. This definition aligns with what we’re seeing from other industry players like IBM and Splunk. Shadow AI is essentially AI being used without approval or oversight, which introduces these blind spots and potential data exfiltration risks. The combination of easy-to-use Generative AI apps and the lack of proper controls is why this problem is growing so rapidly.
BN: Will the proliferation of AI tools fundamentally change how intellectual property gets exposed and stolen?
RS: The landscape has really shifted dramatically. We started with SaaS scanning — think Slack and Google Drive — around 2019-2020. Then Generative AI guardrails became critical starting in 2023, and now we’re seeing this urgent need for autonomous, intelligent threat prevention that can scale with organizational growth. At Uber Eats, I was dealing with petabyte-scale data spread across many different systems, which is an environment where sensitive information can move really quickly and often invisibly. That experience, combined with what the whole industry learned from incidents like Uber’s 2016 breach — where attackers basically leveraged credentials that were exposed in code on GitHub to reach AWS data — really highlighted how this combination of data sprawl, credentials, and cloud infrastructure creates this outsized risk without better detection and guardrails.
BN: What strategies should organizations implement to safeguard against modern data exposure risks?
RS: From what we consistently hear from customers, there are really two main levers that make the biggest difference. First is pre-submission controls — actually catching sensitive content before it's sent to AI tools, or uploaded to or copied on the web. Second is AI-native detection that moves beyond legacy pattern-matching to understand data lineage and context.
Our browser extension and endpoint agents actually scan prompts, clipboard activity and browser uploads before any data is submitted. We can redact or block risky content in real time — before a ChatGPT prompt is sent, for example. We're also tracing lineage so security teams know if a file originated in a corporate system. We deploy on macOS and Windows with Chromium, Safari and Firefox extensions that provide before-you-send redaction, along with clipboard copy/paste and file upload blocking.
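The pre-submission flow described here can be sketched as a scan-then-redact step that runs before a prompt ever leaves the browser. This is a minimal illustration only: the function name and the bare regex patterns below are assumptions for the sketch, not Nightfall's actual detectors (which, per the interview, move well beyond pattern matching).

```python
import re

# Illustrative patterns only; a production DLP engine uses trained,
# context-aware detectors rather than standalone regexes.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_and_redact(prompt: str) -> tuple[str, list[str]]:
    """Scan a prompt before submission and redact anything flagged."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt, findings

# A prompt an engineer might paste into a chatbot (hypothetical values).
redacted, found = scan_and_redact(
    "Debug this: connect(key='AKIA1234567890ABCDEF', owner='jane@corp.com')"
)
```

In a real deployment this check would run inline in the extension, with the redacted text substituted before the request is sent — the "before-you-send" behavior the interview describes.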
What’s really powerful is our noise reduction through continuous learning. Our system understands content and file lineage, learns from user annotations and actions, and identifies safe workflows to suppress low-risk activity. This dramatically reduces false positives compared to legacy DLP solutions.
BN: How does AI-powered data loss prevention differ from traditional approaches, and why is it more effective against today’s threats?
RS: Nightfall's AI detection platform already delivers highly accurate, low-noise results — 95 percent precision compared to the 5 to 30 percent typical of traditional regex or rules-based DLP. We're also doing real-time threat detection and risk prioritization using LLMs, transformers, and computer vision, with custom file and sensitivity classifiers that can uncover movement of intellectual property and high-value documents that go way beyond simple rules-based entity detection.
This is a pretty stark contrast to legacy DLP, which is mostly about after-the-fact detection. Security operations teams are struggling with increasingly complex tools, legacy pattern-matching DLP, constant manual policy tuning, and just crushing alert fatigue. These issues slow investigations, increase overhead, and reduce security effectiveness. Our customers tell us they’re seeing this transformation from alert fatigue to focused, high-impact security actions.
BN: What techniques can help solve the operational challenges that security teams face day-to-day?
RS: Even after the noise is gone, the real work begins. In large organizations, SecOps teams can still face hundreds of legitimate alerts every day. Sifting through them to separate business-approved workflows from risky data hygiene issues or insider threats can eat up hours. That’s where our Nightfall Nyx platform takes on this investigative heavy lifting — accelerating analysis so teams can focus on action, not searching and sorting through pages of alerts.
Nyx connects the dots across exfiltration events — users, domains, devices, data types, file names, and more — surfacing patterns instantly. Through its natural-language interface, analysts can act on patterns, investigate findings, produce reports, and get recommended actions in seconds. Tasks that once took two hours can now be done in under two minutes — roughly a 20× time saving.
BN: How has your approach to securing information evolved?
RS: In the early days, we wanted to use machine learning to discover and protect sensitive data wherever it lives across cloud apps and modern workflows. When we came out of stealth in 2019, we positioned ourselves as a cloud-native, ML-powered SaaS DLP solution with a vision of building the ‘control plane for cloud data.’ As we expanded beyond SaaS to cover data exfiltration across endpoints and generative AI, ‘AI-native DLP’ became our umbrella term.
Our product evolution has tracked this shift from reactive and manual operations to proactive, intelligent automation. We announced Generative AI coverage in 2023, expanded to exfiltration prevention, encryption, and email protection in 2024, and now with Nyx, we’re ushering in what we see as the next era of agentic AI in data protection — transforming alert fatigue into focused, high-impact security actions across SaaS, endpoints, and AI tools.
BN: Do you see tools like this becoming a default layer of enterprise security, and where do you see the trend heading?
RS: I think the trajectory suggests yes, tools like Nightfall will become a default layer of control for enterprise environments. We’re seeing widespread Generative AI adoption plans across enterprises, and major platforms like Microsoft Entra Internet Access are rolling out inline, pre-submission controls for Generative AI traffic. When you pair that with the industry consensus around Shadow AI risks, it’s reasonable to expect pre-submission, AI-aware DLP to become a default control layer alongside things like identity and access management and endpoint detection and response.
Our long-term vision builds on what we articulated at launch — to be the control plane for cloud data — but now we’re extending that with autonomous operations and agentic AI capabilities. We envision a future where security posture improves continuously without piling more work on analysts, where AI eliminates the need for specialized domain expertise, and where organizations can shift from reactive, manual security operations to proactive, intelligent threat prevention.
In practice, that means AI that both understands data in context and takes safe, intelligent actions — investigate, coach, redact, block — across SaaS, endpoints, email, and Shadow AI. We want to close the loop from detection to prevention, giving security teams an always-on intelligent partner that gets smarter with every investigation and transforms weeks of manual forensics into minutes of focused response.
Image credit: Funtap/depositphotos.com