AI-generated deepfakes used to drive attacks


As generative AI tools have become more powerful, affordable and accessible, cybercriminals are increasingly adopting them to support attacks ranging from business fraud to extortion and identity theft.

A new report from Trend Micro shows that deepfakes are no longer just hype but are being used in real-world exploitation, undermining digital trust, exposing companies to new risks, and boosting the business models of cybercriminals.

David Sancho, senior threat researcher at Trend says, “AI-generated media is not just a future risk, it’s a real business threat. We’re seeing executives impersonated, hiring processes compromised, and financial safeguards bypassed with alarming ease. This research is a wake-up call — if businesses are not proactively preparing for the deepfake era, they’re already behind. In a world where seeing is no longer believing, digital trust must be rebuilt from the ground up.”

The research finds that threat actors no longer need underground expertise to launch convincing attacks. Instead, they are using off-the-shelf video, audio and image generation platforms, many of which are marketed to content creators, to produce realistic deepfakes that deceive both individuals and organizations. These tools are inexpensive, easy to use, and increasingly capable of bypassing identity verification systems and security controls.

There’s also an expanding cybercriminal ecosystem in which these platforms are used to execute convincing scams. As a result, CEO fraud is becoming increasingly hard to detect as attackers use deepfake audio or video to impersonate senior leaders in real-time meetings.

Recruitment processes are also being compromised by fake candidates who use AI to pass interviews and gain unauthorized access to internal systems. In addition, financial services firms are seeing a surge in deepfake attempts to bypass KYC (Know Your Customer) checks, enabling anonymous money laundering with falsified credentials.

Trend Micro urges businesses to take proactive steps to minimize their risk exposure and protect their people and processes. These include educating staff on social engineering risks, reviewing authentication workflows, and exploring detection solutions for synthetic media.

You can read more and get the full report on the Trend Micro site.

Image credit: Wrightstudio/Dreamstime.com