AI chatbot Grok posts sexual images of minors after ‘lapses in safeguards’
The Straits Times

SAN FRANCISCO - Billionaire Elon Musk’s artificial intelligence chatbot Grok said “lapses in safeguards” led to the generation of sexualised images of minors that it posted to social media site X.
In a series of posts on X this week responding to user queries, Grok said it had created images of minors in minimal clothing in response to user prompts over the past few days, violating its own acceptable use policy, which prohibits the sexualisation of children.
The offending images were taken down, it added.
“We’ve identified lapses in safeguards and are urgently fixing them,” Grok posted on Jan 2, adding that child sexual abuse material is “illegal and prohibited”.
The rise of AI tools that can generate realistic pictures of undressed minors highlights the challenges of the content moderation and safety systems built into AI image-generation models.
Even tools that claim to have guard rails can be manipulated, allowing for the proliferation of material that has alarmed child safety advocates.
The Internet Watch Foundation, a nonprofit that identifies child sexual abuse material online, reported a 400 per cent increase in such AI-generated imagery in the first six months of 2025.
The company xAI has positioned the chatbot as more permissive than other mainstream AI models, and last summer introduced a feature called “Spicy Mode” that permits partial adult nudity and sexually suggestive content.
The service prohibits pornography involving real people’s likenesses and sexual content involving minors, which is illegal to create or distribute.
Representatives for xAI, the company that develops Grok and runs X, did not immediately respond to a request for comment.
As AI image generation has become more popular, the leading companies behind the tools have released policies about the depictions of minors.
OpenAI prohibits any material that sexualises children under 18 and bans any users who attempt to generate or upload such material.
Google has similar policies that forbid “any modified imagery of an identifiable minor engaging in sexually explicit conduct”.
Black Forest Labs, an AI start-up that has previously worked with X, is among the many generative AI companies that say they filter child abuse and exploitation imagery from the datasets used to train AI models.
In 2023, researchers found that a massive public dataset used to build popular AI image generators contained at least 1,008 instances of child sexual abuse material.
Many companies have faced criticism for failing to protect minors from sexual content.
Meta Platforms Inc said over the summer that it was updating its policies after a Reuters report found that the company’s internal rules let its chatbot hold romantic and sensual conversations with children.
The Internet Watch Foundation has said that AI-generated imagery of child sexual abuse has progressed at a “frightening” rate, with material becoming more realistic and extreme.
In many cases, AI tools are used to digitally remove clothing from a child or young person to create a sexualised image, the watchdog has said. BLOOMBERG