Grok tests if UK can penalize platforms for sexualized deepfakes generated by AI.
Elon Musk’s X is currently under investigation in the United Kingdom after failing to stop the platform’s chatbot, Grok, from generating thousands of sexualized images of women and children.
On Monday, UK media regulator Ofcom confirmed that X may have violated the UK’s Online Safety Act, which requires platforms to block illegal content. The proliferation of “undressed images of people” by X users may amount to intimate image abuse, pornography, and child sexual abuse material (CSAM), the regulator said. And X may also have neglected its duty to stop kids from seeing porn.
“Reports of Grok being used to create and share illegal non-consensual intimate images and child sexual abuse material on X have been deeply concerning,” an Ofcom spokesperson said. “Platforms must protect people in the UK from content that’s illegal in the UK, and we won’t hesitate to investigate where we suspect companies are failing in their duties, especially where there’s a risk of harm to children.”
X risks fines, Grok block
X is cooperating with the probe, Ofcom said, noting that X met a “firm” deadline last week to explain what steps it’s taking to comply with the UK law. Ofcom declined Ars’ request to share more details about possible changes X has already made to either limit Grok in the UK or more broadly, since the investigation is “live.”
Indonesia and Malaysia have already blocked Grok as the chatbot's outputs remain unchecked, and the UK could be next if X fails to comply with the Online Safety Act. X could additionally face fines of up to 10 percent of its global revenue.
It’s unclear how long the probe will take to conclude. Ofcom’s spokesperson told Ars that the agency will progress the investigation “as a matter of the highest priority, while ensuring we follow due process.” The probe will end “as soon as reasonably possible.”
X will have an opportunity to respond to Ofcom’s preliminary ruling before any final decision is made.
Ars could not reach X for comment on the probe, but Musk has complained that Grok critics are looking for an “excuse for censorship,” the BBC reported. X has previously said that it will report harmful outputs to law enforcement and permanently suspend accounts that abuse Grok to nudify images in ways X deems illegal. The platform also began charging some users to edit images rather than blocking the outputs.
Grok tests UK’s power to regulate deepfakes
Shortly after Ofcom announced the probe, UK Technology Secretary Liz Kendall said the country would be bringing a new law into force that makes it illegal for companies to supply tools designed to create sexualized images, the BBC reported.
Before Kendall’s announcement, it seemed possible for X to escape the investigation unscathed due to “gaps” in the Online Safety Act, according to the chairwomen of the UK Parliament’s technology and media committees, the BBC reported.
“There are doubts as to whether the Online Safety Act actually has the power to regulate functionality—that means generative AI’s ability to nudify someone’s image,” Caroline Dinenage, chairwoman of the culture, media, and sport committee, told the BBC.
The chairwomen suggested that the UK may need to update the law to better explain platforms’ duties to remove or prevent the making and sharing of sexualized deepfakes. In a document defining illegal content, however, Ofcom emphasizes that deepfakes can count as both CSAM and intimate image abuse, suggesting X could face penalties under the Online Safety Act for some of Grok’s outputs, even if Ofcom cannot require changes to Grok’s functionality.
Ofcom noted that in its view, CSAM does include “AI-generated imagery, deepfakes and other manipulated media,” which “would fall under the category of a ‘pseudo-photograph.’” As Ofcom explained, “If the impression conveyed by a pseudo-photograph is that the person shown is a child, then the photo should be treated as showing a child.”
Similarly, “manipulated images and videos such as deepfakes should be considered within the scope” of intimate image abuse, Ofcom said. “Any photograph or video which appears to depict an intimate situation” that a real person would not want publicly posted should “be treated as a photograph or video actually depicting such a situation.”
Some Grok fans contend that outputs that undress people and put them in skimpy bikinis or underwear aren’t abuse. However, the UK law further specifies that an “intimate situation” can include an image in which a person’s “genitals, buttocks, or breasts” are “covered only with underwear” or “covered only by clothing that is wet or otherwise transparent.”
However long the probe takes, Ofcom acted urgently to intervene, and UK officials shocked by the scandal have confirmed that they are moving quickly to protect people in the UK from being targeted by Grok’s worst outputs.
While Ofcom does not directly refer to Musk’s comments on censorship, the regulator takes a defensive stance in its announcement, apparently anticipating that argument by pointing out that X, not the regulator, is responsible for deciding what content is illegal and should be removed.
“The legal responsibility is on platforms to decide whether content breaks UK laws, and they can use our Illegal Content Judgements Guidance when making these decisions,” Ofcom noted. “Ofcom is not a censor—we do not tell platforms which specific posts or accounts to take down.”