Grok is generating thousands of AI "undressing" deepfakes every hour on X

"X, the deepfake porn site formerly known as Twitter"

A hot potato: xAI's Grok chatbot has drawn widespread controversy over its ability to digitally "undress" women and children, a practice that has surged since late December. A new report says Grok is generating thousands of these deepfake images every hour. For comparison, the other top websites for such content average 79 similar images per hour combined.

Genevieve Oh, a social media and deepfake researcher, carried out a 24-hour analysis (January 5 to 6) of images the @Grok account posted to X. The account generated about 6,700 images per hour that were identified as sexually suggestive or nudifying.

There has been a long-running pushback against nudify apps that use AI to undress people without their consent – several of these sites have been sued in the past.

Unlike the usual nudify apps, Grok does not charge users to undress people and is available to millions of X users. It's helping normalize these images on X – the Financial Times recently ran the headline "X, the deepfake porn site formerly known as Twitter."

One of the women targeted by these fake sexualized images is the mother of one of Elon Musk's sons. Writer and political strategist Ashley St Clair, who became estranged from Musk after the birth of their child in 2024, told the Guardian that Musk supporters were using the tool to create a form of revenge porn, and had even undressed a picture of her as a child.

In a reply to users last week, Grok said that most cases of minors appearing in its generated sexualized images could be prevented through advanced filters and monitoring, but it admitted that "no system is 100% foolproof." It added that xAI was prioritizing improvements and reviewing details shared by users.

Musk has always positioned Grok as a less restricted chatbot that supposedly prioritizes free speech. In August, xAI introduced a new Spicy Mode for Grok designed to output content that is usually NSFW. Oh calculated that, overall, 85% of Grok's images are now sexualized.


An X spokesperson said that the company takes action against illegal content by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary. "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content," they said.

Several countries, including France, the UK, India, Australia, Malaysia, and Brazil, are now investigating Grok over the creation of nonconsensual sexualized images involving women and children.

Platforms have long used Section 230 of the US Communications Decency Act to shield themselves from liability for user-generated content, but critics argue that with AI, the platform itself is creating the image.

Image credit: Salvador Rios