
Artificial intelligence has infiltrated so many walks of life that it should come as no surprise that it is being used for ill as well as good. Drawing attention to this, Microsoft has issued a warning about the use of AI in cyberattacks.
Just as AI is being used to speed up coding, writing, design and so many other things, threat actors are turning to artificial intelligence to make their lives easier as well. An article from Microsoft Threat Intelligence warns of the growing threat this poses for enterprises.
Microsoft writes: “Threat actors have incorporated automation into their tradecraft as reliable, cost‑effective AI‑powered services lower technical barriers and embed capabilities directly into threat actor workflows. These capabilities reduce friction across reconnaissance, social engineering, malware development, and post‑compromise activity, enabling threat actors to move faster and refine operations. For example, Jasper Sleet leverages AI across the attack lifecycle to get hired, stay hired, and misuse access at scale”.
The use of AI is now so widespread that Microsoft says it appears at every stage of a cyberattack. The company goes on to say:
As threat actors integrate AI into their operations, they are not limited to intended or policy‑compliant uses of these systems. Microsoft Threat Intelligence has observed threat actors actively experimenting with techniques to bypass or “jailbreak” AI safety controls to elicit outputs that would otherwise be restricted. These efforts include reframing prompts, chaining instructions across multiple interactions, and misusing system or developer‑style prompts to coerce models into generating malicious content.
As an example, Microsoft Threat Intelligence has observed threat actors employing role-based jailbreak techniques to bypass AI safety controls. In these types of scenarios, actors could prompt models to assume trusted roles or assert that the threat actor is operating in such a role, establishing a shared context of legitimacy.
AI is also used to support the infrastructure behind cyberattacks, such as automatically generating websites and domains:
Threat actors have leveraged generative adversarial network (GAN)–based techniques to automate the creation of domain names that closely resemble legitimate brands and services. By training models on large datasets of real domains, the generator learns common structural and lexical patterns, while a discriminator assesses whether outputs appear authentic. Through iterative refinement, this process produces convincing look‑alike domains that are increasingly difficult to distinguish from legitimate infrastructure using static or pattern‑based detection methods, enabling rapid creation and rotation of impersonation domains at scale, supporting phishing, C2, and credential harvesting operations.
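To see why GAN-generated domains are hard to catch, it helps to look at what a simple "static or pattern-based detection method" actually does. The sketch below is illustrative only (the brand list and distance threshold are assumptions, not from Microsoft's post): it flags domains within a small edit distance of a known brand, which catches crude typosquats like "micros0ft.com" but not novel, plausible-looking names a trained generator can produce.

```python
# Illustrative static look-alike-domain check using edit distance.
# Brand list and threshold are hypothetical examples, not from the report.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

KNOWN_BRANDS = ["microsoft", "office365", "outlook"]  # illustrative list

def looks_like_impersonation(domain: str, max_distance: int = 2) -> bool:
    """Flag domains whose first label is a near-miss of a known brand.

    Exact matches (distance 0) are the legitimate domains themselves,
    so only distances 1..max_distance are treated as suspicious.
    """
    label = domain.split(".")[0].lower()
    return any(0 < levenshtein(label, brand) <= max_distance
               for brand in KNOWN_BRANDS)
```

A check like this catches character-swap typosquats, but a GAN trained on real domain corpora can emit names that are structurally plausible without being close (in edit distance) to any single brand string, which is exactly the evasion the quoted passage describes.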
Microsoft also warns of emerging trends such as AI-enabled malware, which the company says will “embed or invoke models during execution rather than using AI solely during development”.
Take a look at the Microsoft Threat Intelligence blog post for more of the team’s findings, as well as advice on how to mitigate such attacks.
