Three teenage girls from Tennessee have filed a proposed class-action lawsuit against Elon Musk’s artificial intelligence company, xAI, accusing its Grok chatbot of turning their real photographs into AI-generated child sexual abuse material (CSAM) that circulated across online platforms.
The complaint was filed Monday in the U.S. District Court for the Northern District of California. It names Musk and other xAI leaders and accuses the company of releasing Grok with features capable of producing sexually explicit images of real people, including minors. The plaintiffs include two minors and a third victim who is now an adult but was underage when the events described in the lawsuit occurred.
According to the complaint, the case began in December when one victim, identified as Jane Doe 1, received an anonymous Instagram message from a Discord user. The message warned her that explicit images of her were circulating online in a folder that included content involving many minors.
The Discord user sent Jane Doe 1 several AI-generated images and videos depicting her and 18 other girls. The files included one video and four images that used her real face and body, but altered them into sexualized poses. The anonymous user also provided a link to a Discord server that contained additional images and videos.
Jane Doe 1 recognized the original photographs used to create the manipulated images. Most of the source images had been posted on her social media accounts when she was still a minor. She also recognized several girls in the folder as classmates from her school.
One altered image used a photograph from her Instagram account and digitally removed a blue bikini to depict her without clothing. Another image appeared to modify her yearbook photograph to create a topless version. Other manipulated images used photographs taken at her school’s Homecoming event.
The complaint states that Jane Doe 1 struggled to distinguish the altered content from real images. After reviewing the files, she contacted other girls she recognized before the matter was reported to local law enforcement.
Police opened a criminal investigation and determined that the suspect had maintained a close and friendly relationship with Jane Doe 1, which gave him access to her Instagram photographs. Investigators searched the suspect’s phone and discovered a third-party application that had licensed or purchased access to xAI’s Grok model.
Law enforcement concluded the suspect used that application to alter photographs of the girls into explicit AI-generated images and videos.
According to the lawsuit, the suspect uploaded the manipulated files to the file-sharing platform Mega. The complaint states that the images were used as a bartering tool in Telegram group chats involving hundreds of users. The suspect allegedly traded the AI-generated CSAM files for sexually explicit images of other minors.
The suspect was arrested in December. The lawsuit does not specify the charges filed in that case.
By February, the two other plaintiffs learned through the investigation that the suspect had also used their images to create explicit AI content.
The complaint states that the victims experienced severe emotional distress following the discovery of the images. Jane Doe 1 reportedly has recurring nightmares and difficulty eating and sleeping. The lawsuit also states that attending school has become “anxiety-producing” for her.
Another victim fears the situation could affect college admissions. A third victim said she feels too frightened to attend her graduation ceremony.
The lawsuit states that the files circulating online included the victims’ real first names and the name of their school. According to the complaint, this information creates a risk that online predators could identify the girls and stalk them.
The legal filing also states that the victims could receive alerts from the National Center for Missing and Exploited Children in the future if criminal defendants are found possessing or distributing CSAM files depicting them.
Attorneys representing the plaintiffs argue that xAI designed and marketed Grok in ways that enabled the creation of explicit images involving minors.
“xAI knowingly designed, marketed, and profited from an AI image and video generator capable of creating sexually explicit content depicting real people, including children, while refusing to implement the industry-standard CSAM prevention measures used by every other major AI company,” the complaint states.
The lawsuit also focuses on Grok’s “Spicy Mode,” a feature introduced in November that allowed users to generate “NSFW,” or “not safe for work,” content that could include sexual or violent imagery.
According to the complaint, the chatbot was configured to assume “good intent” when users included words such as “teenage” or “girl” in prompts. Attorneys also state that Musk promoted, in posts on X, Grok’s ability to digitally undress people, which led users to direct the chatbot to remove clothing from photographs of women and children.
Annika K. Martin of Lieff Cabraser Heimann & Bernstein, one of the attorneys representing the plaintiffs, described the impact of the alleged abuse.
“These are children whose school photographs and family pictures were turned into child sexual abuse material by a billion-dollar company’s AI tool and then traded among predators,” Martin said.
“Elon Musk and xAI deliberately designed Grok to produce sexually explicit content for financial gain, with no regard for the children and adults who would be harmed by it.”
Martin also said the victims intend “to hold xAI accountable for every child they harmed in this way.”
Another attorney involved in the case, Vanessa Baehr-Jones of Baehr-Jones Law, said the company released technology that could be used to exploit minors.
“xAI chose to profit off the sexual predation of real people, including children, despite knowing full well the consequences of creating such a dangerous product,” Baehr-Jones said.
The lawsuit also addresses how Grok is distributed. According to the complaint, xAI licenses access to its model to third-party applications. The suspect in the case allegedly used one such application, which relied on xAI servers to generate the AI content requested by users.
The complaint states that the company licenses access to its servers instead of releasing the full Grok model to developers.
“xAI has not made Grok’s AI model publicly available and has not licensed Grok in its entirety but instead licenses the use of its servers to these middlemen companies,” the lawsuit says.
The filing alleges that sexually explicit content generated through these applications is hosted on xAI servers before being distributed to users.
“On information and belief, xAI possessed the CSAM of Plaintiffs on its servers after Grok produced their CSAM and then transported and distributed the unlawful contraband to its customer/user, namely, the perpetrator,” the complaint states.
The lawsuit arrives after several months of controversy surrounding Grok’s image generation tools.
Researchers from the Center for Countering Digital Hate estimated that Grok generated approximately three million sexualized images during an 11-day period in late December 2025 and early January 2026. About 23,000 of those images appeared to depict children.
A review of Grok Imagine, a standalone image generation application, examined around 800 outputs and found that nearly 10 percent appeared to contain child sexual abuse material.
Despite those findings, Elon Musk wrote on X in January that he was “not aware of any naked underage images generated by Grok,” stating he had seen “literally zero.”
Musk also said Grok was designed to refuse illegal requests and block attempts to edit images of real people wearing revealing clothing such as bikinis. He said that if users managed to bypass those restrictions, it would be treated as a bug and fixed immediately.
On January 14, xAI announced it had introduced new safeguards to stop Grok from undressing real people in territories where the practice is illegal.
“We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers,” the company said in a safety statement.
The company also restricted image generation to paying subscribers on X. Lawyers representing the plaintiffs state, however, that the image generation tools remained accessible through the Grok app and website.
The complaint also states that third-party applications with licensed access to Grok continued generating explicit images through xAI servers after the safeguards were announced.
Researchers and watchdog organizations have tracked the spread of AI-generated sexual abuse imagery online. The Internet Watch Foundation reported a 26,362% increase in AI videos depicting child sexual abuse in 2025. The organization discovered 3,440 such videos that year, compared with 13 in 2024, and 65% of the videos were classified in the most severe category.
Kerry Smith, Chief Executive of the Internet Watch Foundation, described how AI tools are changing the scale of the problem.
“Our analysts work tirelessly to get this imagery removed to give victims some hope. But now AI has moved on to such an extent, criminals essentially can have their own child sexual abuse machines to make whatever they want to see,” Smith said.
The lawsuit also cites research published Tuesday in the journal Archives of Sexual Behavior. The study, conducted with mostly heterosexual men, reported that AI-generated nude images were considered more sexually appealing than real photographs.
The complaint against xAI includes claims under Masha’s Law, the Trafficking Victims Protection Act, and California state law. The plaintiffs are seeking damages, punitive damages, and injunctive relief.
Government agencies and lawmakers have also started examining Grok’s features. Members of the U.S. House Energy and Commerce Committee launched an inquiry requesting information about the chatbot’s “nudification tool,” its safety guardrails, and the reasoning behind its image generation feature.
In Europe, the Council of the European Union backed a proposal from the European Commission that would prohibit AI chatbots and other digital tools from undressing individuals or creating CSAM.
xAI did not immediately respond to requests for comment.
