
Although there’s been a lot of talk about how AI has boosted the effectiveness of phishing threats, analysis by Hoxhunt shows that for most of 2025, fewer than five percent of the attacks reported each month by its four million users were AI-generated.
However, December saw a 14-fold surge: AI-generated phishing attacks that bypassed email filters and landed in inboxes jumped from four percent to 56 percent of all reported attacks across the Hoxhunt global threat detection network over the Christmas holiday season.
Phishing campaigns using .ics calendar invites are surging too, and they were found to be six times more likely to trick users into clicking than typical phishing attacks. Because these invites automatically appear as meetings in users’ calendars, they remain behind like landmines even after the original attack email has been successfully reported, creating a second, long-lasting opportunity for a malicious click.
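Why do calendar-invite lures outlive the email that delivered them? A minimal sketch helps: an iCalendar (.ics) attachment sent with `METHOD:REQUEST` is treated by many mail clients as a meeting request and placed directly on the recipient’s calendar, where it persists independently of the message. The snippet below builds such a payload; all names, addresses, and URLs are hypothetical, and whether a client auto-adds the event depends on its settings.

```python
# Sketch of a minimal RFC 5545 iCalendar meeting request, illustrating why
# .ics lures persist: METHOD:REQUEST tells mail clients to treat the payload
# as an invite and add it to the calendar. All identifiers are hypothetical.

def build_ics_invite(organizer: str, attendee: str, summary: str, url: str) -> str:
    """Return a minimal iCalendar REQUEST payload as CRLF-joined text."""
    lines = [
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//Example//Demo//EN",
        "METHOD:REQUEST",  # this line makes clients process it as an invite
        "BEGIN:VEVENT",
        "UID:demo-0001@example.invalid",
        "DTSTAMP:20251201T090000Z",
        "DTSTART:20251224T100000Z",
        "DTEND:20251224T103000Z",
        f"ORGANIZER:mailto:{organizer}",
        f"ATTENDEE:mailto:{attendee}",
        f"SUMMARY:{summary}",
        f"DESCRIPTION:Join here: {url}",  # the lure link lives in the event body
        "END:VEVENT",
        "END:VCALENDAR",
    ]
    return "\r\n".join(lines)

invite = build_ics_invite(
    "it-support@example.invalid",
    "victim@example.invalid",
    "Mandatory password reset briefing",
    "https://phish.example.invalid/reset",
)
```

Deleting the carrier email removes the attachment but not the calendar entry the client already created, which is the “second opportunity” the report describes.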
Hoxhunt’s latest report highlights how phishing techniques are evolving, with a deep dive into AI-generated attacks. It finds 43 percent of AI-generated phishing emails contain malicious links, while 20 percent use open redirects to evade filters, 11 percent contain malicious attachments and five percent include malicious phone numbers tied to callback phishing.
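The open-redirect technique mentioned above works because the visible link points at a trusted domain while a query parameter forwards the click to the attacker’s page, so URL-reputation filters see only the trusted host. A minimal sketch, using hypothetical domains:

```python
# Sketch of the open-redirect pattern: the link a filter (or user) inspects
# shows a trusted host, but the redirect parameter carries the real target.
# Both domains here are hypothetical illustrations.
from urllib.parse import urlparse, parse_qs

link = "https://trusted.example.com/redirect?url=https://phish.example.invalid/login"

visible_host = urlparse(link).hostname                   # what reputation checks see
real_target = parse_qs(urlparse(link).query)["url"][0]   # where the click actually lands
```

Here `visible_host` is `trusted.example.com`, while `real_target` is the phishing page, which is why filters keyed on the link’s domain can wave the message through.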
Mika Aalto, co-founder and CEO at Hoxhunt, says:
Our research shows that AI-generated phishing went from a trickle to a flood almost overnight. The lesson for security leaders is clear: if attackers can use AI to scale social engineering, defenders must use AI to scale human cyber skills.
The biggest mistake companies can make in the AI era is believing technology alone will solve social engineering. Attackers are targeting human behavior. That means the defense must strengthen human behavior as well. The advantage will go to whoever understands that technology is a lever, not a replacement, for influencing human psychology.
We’ve expected AI to reshape cybercrime for years, so the answer isn’t panic, it’s preparation. Right now there’s a wave of alarmist messaging around AI threats that almost resembles social engineering itself. Deepfakes are real, but they’re still rare and highly targeted. If companies focus training on exotic attacks instead of the common social engineering tactics people face every day, they’re not optimally managing human risk.
The full report is available from the Hoxhunt site.
Image credit: BiancoBlue/depositphotos.com
