In early December, a teenager from Tennessee reportedly received a message from an anonymous Instagram account alerting her that sexually explicit deepfake images of her had been shared on a Discord server.
One image was allegedly created from a photo taken at her school’s homecoming event last September. Another image, which reportedly showed her topless, seemed to have been generated from her yearbook picture taken last June.
After the teenager, who is now an adult, managed to access the link to the Discord server, she supposedly discovered images and videos of at least 18 other girls who were minors at that time, many of whom she recognized from school.
The fresh allegations are outlined in a 44-page complaint filed on Monday in federal court in San Jose, California, by three plaintiffs from Tennessee referred to as Jane Does. The anonymous plaintiffs are taking legal action against Elon Musk’s AI company, xAI, over its generative AI model known as Grok. The proposed class-action lawsuit claims xAI carelessly designed Grok in a way that facilitated such abuse and then opted to limit access to the technology to paid users and third-party companies instead of addressing the issue.
“xAI – and its founder Elon Musk – saw a business opportunity: an opportunity to profit off the sexual predation of real people, including children,” reads the complaint obtained by Rolling Stone. “Limiting the image- and video-generation features to paid subscribers … does not prevent the creation of AI-generated CSAM (child sexual abuse material), it merely ensures that xAI will profit from all such content.”
Rolling Stone reached out to xAI for comment on Monday but did not receive an immediate reply.
“No one should have to live with the fear that these survivors now carry with them, but I am inspired by their strength and clarity of purpose in bringing this lawsuit on behalf of themselves and other minors in the Class,” Vanessa Baehr-Jones of Baehr-Jones Law, one of the firms representing the plaintiffs, told Rolling Stone in a statement.
“These are children whose school photographs and family pictures were turned into child sexual abuse material by a billion-dollar company’s AI tool and then traded among predators. Elon Musk and xAI deliberately designed Grok to produce sexually explicit content for financial gain, with no regard for the children and adults who would be harmed by it,” said Annika K. Martin of Lieff Cabraser, whose firm is also representing plaintiffs. “Without xAI, this harmful, illegal content could never have existed. The lives of these girls have been shattered by the devastating loss of privacy and the deep sense of violation that no child should ever have to experience. We intend to hold xAI accountable for every child they harmed in this way.”

Grok’s image-generation feature gained traction in late December after Musk announced that users on X could use Grok to edit images posted on the platform with a single click. Although Grok was not supposed to create fully nude images, users allegedly bypassed restrictions by requesting changes so subjects looked like they were wearing “transparent bikinis or bikinis made of dental floss while posed suggestively,” according to the lawsuit. The image-generation feature was restricted to paying users starting January 9th, with additional technical limitations added shortly after, on January 14th.

In a post on X dated January 16th, Musk claimed he wasn’t aware of any naked images involving minors produced by Grok. “Literally zero,” he wrote. “When asked to generate images it will refuse anything illegal since Grok’s operating principle is obeying laws applicable anywhere. There may be times when adversarial hacking prompts something unexpected; if that’s true we fix those issues immediately.”
The Center for Countering Digital Hate has reported reviewing a random sample of 200,000 images out of around 4.6 million produced by Grok between December 29, 2025, and January 8, 2026. From this sample, the group estimated that about three million sexualized images were generated during that timeframe, including roughly 23,000 depicting children. One Grok-generated image apparently displayed six young girls in microbikinis and remained publicly accessible on X as of January 15.
The second plaintiff in the lawsuit stated she was informed by law enforcement on February 12, 2026, that she had also been targeted. According to the complaint, the individual who managed the Discord server used Instagram photos of her wearing a blue bikini at the beach last October to generate images of her without clothing. The alleged perpetrator was arrested in December, the lawsuit states.

The three plaintiffs say they have suffered severe emotional distress. Their lawsuit aims to hold xAI liable for the creation and distribution of the alleged child sexual abuse material and demands a jury trial.









