
A new report finds that 99 percent of security operations centers use AI, and 77 percent of security teams regularly rely on AI, automation or workflow tools. Yet manual or repetitive work still consumes 44 percent of security teams’ time, contributing to emotional exhaustion and fatigue for 76 percent of respondents.
The study from workflow platform Tines, based on a global survey by Sapio Research of more than 1,800 security leaders and practitioners, also highlights significant obstacles to scaling AI and automation for meaningful returns. Key factors include security and compliance concerns (35 percent), limited resources (32 percent) and integration gaps between tools (31 percent). These limitations help explain why nearly all security professionals (92 percent) believe that intelligent workflows, which unite automation, AI and humans to move work smoothly across systems and people, would add value to their organizations.
“The signal is clear: AI alone won’t fix broken security operations. Teams see its enormous potential for time savings and morale gains, but without strong governance and well-designed workflows, that potential remains out of reach,” says Thomas Kinsella, co-founder and chief customer officer at Tines. “Our research shows that real relief comes when organizations pair AI adoption with clear guardrails and intelligent workflows, redesigning how security work actually gets done.”
Security teams report gains from AI across several core functions, with threat detection (61 percent), identity and access monitoring (56 percent) and compliance and policy writing (55 percent) the most frequently cited as highly effective use cases. However, these gains have largely been applied at the task level and have not yet translated into broad changes in overall workloads and processes. Still, looking ahead, confidence in AI's long-term impact is high: 86 percent of respondents say they are optimistic that it will create new career opportunities, and 81 percent say their organizations are prepared to re-skill or hire for AI-related roles.
Interestingly, AI has become a defining factor in the risk landscape. The top five cybersecurity concerns cited by respondents heading into 2026 either center on or are being transformed by AI: data leakage through copilots and agents (22 percent), third-party and supply chain risks (21 percent), evolving regulations (20 percent), shadow AI (18 percent) and prompt injection attacks (18 percent).
You can get the full report from the Tines site.
Image credit: belchonock/depositphotos.com
