When agentic AI meets ad fraud — how bots are breaking digital marketing [Q&A]

Fraudulent advertising has been around for a long time. The internet made it easier, and now, with AI browsers and tools reshaping ad traffic, it's becoming harder to tell bots from buyers, or to know what any given engagement is actually worth.

We talked to Mike Schrobo, CEO of Fraud Blocker, about how this shift is disrupting digital marketing and ad fraud detection, and what businesses can do about it.

BN: How does traffic generated by an agentic AI bot differ from that of a standard, less sophisticated bot?

MS: The bots of yesteryear were far more predictable than today's. Clicks at regular intervals, identical session times, and robotic patterns indicated that the browser likely wasn't human. Repetition and uniformity were dead giveaways.
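To make that uniformity signal concrete, here is a minimal, hypothetical sketch (illustrative only, not Fraud Blocker's actual detection logic) that flags a session whose inter-click gaps are nearly identical, using the coefficient of variation of the gaps:

```python
from statistics import mean, pstdev

def looks_automated(click_times, cv_threshold=0.05):
    """Flag a session whose inter-click intervals are suspiciously uniform.

    click_times  -- sorted click timestamps (seconds) from one session.
    cv_threshold -- illustrative cutoff; human click gaps vary far more.
    """
    if len(click_times) < 3:
        return False  # too few clicks to judge
    gaps = [b - a for a, b in zip(click_times, click_times[1:])]
    avg = mean(gaps)
    if avg == 0:
        return True  # repeated clicks with no delay at all
    # Coefficient of variation: spread of the gaps relative to their mean.
    return pstdev(gaps) / avg < cv_threshold

print(looks_automated([0.0, 2.0, 4.0, 6.0, 8.0]))   # metronomic bot: True
print(looks_automated([0.0, 1.3, 5.8, 6.4, 11.9]))  # irregular human: False
```

Real systems weigh many such signals together; a single timing threshold like this is exactly the kind of check that modern AI-driven bots, discussed below, are built to evade.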

Click farms of the past are a good example of how these bots were used to perpetuate ad fraud. The methodology wasn't cutting-edge, but platforms don't catch every fake engagement, and not every business has ad fraud protection. So, predictable clicks still drained marketing budgets by exploiting the structure of digital advertising, which offers micro-payments per action.

Bad actors hired workers in developing countries to inflate clicks fraudulently and, in turn, payouts. By partnering with ad networks and selling ad space on their own websites, the scammers directed massive amounts of fake traffic to generate ad impressions while companies were hit with the bill. Ad fraud is already a scam eight times the size of credit card fraud and agentic AI adds a new degree of difficulty.

AI-powered bots vary their behavior based on context — scrolling at human-like speeds, pausing on content, even abandoning carts the way real shoppers do. The result is engagement that looks human but doesn’t convert. These bots can operate autonomously and convincingly, making traffic harder to categorize and fraud detection more challenging.

BN: What’s the financial risk for major ad platforms (like Google or Meta) if they fail to adequately address agentic AI fraud?

MS: That’s a good question because they aren’t impartial. The major players reimburse some invalid clicks but don’t catch everything, and our data shows manual refund requests have about a 10 percent approval rate at Google. Further, it’s not in their interest to really “fix” this issue. After all, when your business model rewards traffic volume over quality, aggressively filtering out fraud cuts into revenue. These middleman platforms are caught between satisfying advertisers and protecting their own bottom lines.

I see a few things happening here. First, advertisers may lose trust when they see engagement metrics rise while sales decline — a telltale sign of bot inflation. Platforms need to be seen as taking action to protect the integrity of their respective brands and the sector as a whole. If advertisers realize they are paying for fraud, then that’s when we could see them demand clawbacks and pull budgets (or shift to other channels).

Additionally, on the operational side, ad platforms will need to invest heavily in defense systems to keep pace with automated bad actors. These costs could get passed on to advertisers through higher pay-per-click rates. This evolution suggests multiple effects: eroding trust, budget flight, and rising costs. With agentic AI predicted to go mainstream in the next 12 months, platforms are running out of time to get ahead of this.

BN: In light of the recent Android ad fraud example, what pre-emptive security measures should platforms implement to defend against large-scale, malware-free agent replication of fraudulent bid requests?

MS: Yes, this was a particularly notable ad fraud scheme that covertly co-opted user smartphones worldwide. At its peak, it infected 38 million devices and generated 2.3 billion fraudulent bid requests per day.

In a nutshell, hackers created malicious apps that appeared legitimate and published them on the Google Play Store. The downloaded apps then secretly launched browsers that navigated to scammer-controlled domains. From there, these devices became ‘ghost’ click farms across a massive distributed network without users even knowing.

Perhaps even more concerning is that this scheme used traditional bot ad engagement. Imagine what such a large network could get away with in the agentic era.

Going forward, platforms need a multi-layered defense. First, real-time behavioral analysis spots patterns humans wouldn't generate: identical session durations across thousands of devices, or coordinated traffic spikes to the same domains within seconds of each other. Second, IP and device fingerprinting identifies when known devices suddenly shift their behavior; anomalous traffic patterns are precisely what led researchers to uncover the scheme above. Third, emerging technologies such as blockchain-based device attestation show promise for creating immutable records of provenance and traffic history that fraudsters can't easily manipulate.

BN: If agentic AI bots can behave indistinguishably from real users, should that traffic be valued differently? What other kinds of fraud do these new tools potentially enable?

MS: This is another unknown that’s up for debate. AI agents are essentially bots that do your bidding. If an agentic browser — like the one recently released by OpenAI — searches for a new backpack on your behalf, that’s valuable marketing intent that triggers remarketing pixels. Days later, the human user sees retargeted ads and converts. Should advertisers pay full price for that agent activity? Half? Nothing? This is an ongoing conversation.

The other problem is that agentic browsers are potentially hackable. A prompt injection attack could instruct that same agent to visit competitor sites and drain ad budgets. Yet the metrics look almost identical to legitimate activity, with the agent searching from the user's device using the same IP, browser, and session information.

Clearly, we’ll need better ways to distinguish bots from browsers, and real-time protection to block compromised agents before they cause harm. Additionally, ad platforms must be more transparent about the percentage of traffic that’s agent-driven so advertisers can make informed decisions about what they’re buying.

BN: Will the battle against ad fraud become a constant AI-vs-AI arms race, and if so, how do legitimate defense mechanisms gain a sustainable advantage?

MS: Most likely, yes, and advertisers are right to fight fire with fire. Projections show ad fraud costing $172 billion by 2028, an increase of almost $100 billion over five years, and that’s an estimate from before the arrival of agentic AI.

This is a battle of scale, and advertisers have little choice but to match the pace. It's encouraging to see the sector striving to automate real-time oversight of its ad ecosystem. Smart tools can identify micro-behaviors that sophisticated bots still struggle to replicate authentically (scroll patterns, session durations, keystroke dynamics) and get ahead of fake engagement. Our platform alone analyzes 60 million IP addresses each month to detect these threats as they happen.

Additionally, if you see something, say something. Novel ad fraud methods are on the way, and better threat intelligence sharing keeps everyone safer. The key is to prioritize proactive detection over reactive refunds: advertisers are better off blocking fraud before charges arrive than scrambling for reimbursement after the fact.

I’ve worked in marketing for over two decades, and while this is probably the most sophisticated threat I’ve seen, it’s heartening that we now have detection tools that operate at the same scale as the attacks. That gives me confidence we can win this arms race and protect digital ad integrity.

Image credit: BiancoBlue/depositphotos.com