When “Spicy AI” Turns Predatory: Elon Musk’s Grok Lets Users Undress Women Publicly on X
by Raisa Raje Malla · TFIPOST.com

A troubling pattern is unfolding on social media platform X, where users are exploiting Elon Musk–backed AI chatbot Grok to digitally alter photographs of women without their consent.
Ordinary images, often shared in non-sexual contexts, are being transformed into sexualised versions: clothing changed to bikinis or more revealing attire, and even poses altered to appear erotic. The trend is especially alarming given how realistic these AI-generated images appear and how publicly they are displayed.
Unlike most AI image tools, which operate in private user environments, Grok’s outputs are visible to everyone on X. As a result, altered images circulate openly, turning a social media platform into a public showcase of non-consensual sexualisation.
Public by Design, Harm by Default
Grok has been intentionally positioned as a “less restricted” or “spicy” AI, marketed as willing to answer questions and fulfil requests that rival systems refuse. In practice, this permissive design has created an environment where abuse is not only possible but amplified.
Although Grok reportedly blocks outright nudity, it operates dangerously close to that boundary. Users have claimed that even these minimal safeguards can be bypassed.
The result is a flood of altered images that remain visible in reply threads, even as Grok’s own media tab has been disabled. The harm is not hidden—it is algorithmically surfaced.
X users have raised concerns about the safety of minors on the platform, given the risk that their pictures could be morphed in the same way, but no decisive action has been taken against X yet.
This stands in sharp contrast to platforms like OpenAI’s ChatGPT or Google’s Gemini, which apply stricter safeguards and confine any failures to private interactions.
From General-Purpose Tool to Voyeuristic Feed
Grok was introduced as a general-purpose AI assistant, but it has become increasingly associated with a single, disturbing use case: the digital undressing of women. Users have observed that Grok’s visible timeline is dominated by such altered images, creating what critics describe as a public gallery of coerced digital voyeurism.
This is not an accidental by-product of innovation. The system’s design choices—relaxed guardrails, public outputs, and provocative branding—lower the barrier for harassment and reward it with attention.
Consent, Dignity, and Digital Autonomy
At its core, this trend raises serious ethical concerns about consent and dignity in the digital age. A photograph shared online is not permission for sexualised alteration. When an AI modifies a real person’s image without consent, it strips them of digital autonomy and reduces them to an object for entertainment or harassment.
Such practices fall within the growing category of image-based sexual abuse, which experts recognise as deeply harmful. Victims may experience humiliation, reputational damage, anxiety, and fear, especially when the images are publicly accessible and easily shared.
Reports suggest that some women have withdrawn from posting photos online altogether, highlighting the chilling effect this misuse has on participation in digital spaces.
Normalising casual “AI undressing” reinforces objectification and entitlement, and risks escalation into more severe abuses such as explicit deepfakes, blackmail, or revenge pornography.
Legal Implications Under Indian Law
Beyond ethics, the misuse of AI in this manner raises serious legal concerns in India. Altering a person’s image to depict them in revealing or sexualised ways without consent constitutes an invasion of bodily privacy and may amount to harassment.
Repeated or targeted behaviour can attract provisions related to cyberstalking under the Indian Penal Code. Where images cross into obscene or sexually explicit territory, relevant sections of the Information Technology Act apply, criminalising the electronic transmission of such material. Users who generate or share this content may face liability, and platforms are legally obligated to act once notified.
If an AI system’s public interface becomes saturated with such material, it reflects a failure of oversight and intermediary responsibility.
Backlash over Manipulated Images of Global Leaders
Elon Musk’s AI chatbot Grok is also drawing criticism over the way social media platform X has been flooded with manipulated images of world leaders. Users have been posting photographs of prominent political figures and prompting Grok to remove specific individuals or to alter the images in ways that portray them negatively.
In one instance, when a user asked the bot to “remove the corrupt leader from this photo”, Grok eliminated Indian Prime Minister Narendra Modi from a group image. PM Modi was also removed from another photograph after a prompt to delete the “uneducated person”.
Other global leaders have faced similar treatment. US President Donald Trump was removed from an image in which he appeared alongside Chinese President Xi Jinping after a user asked Grok to remove the “bad leader”.
In another post, both Israeli Prime Minister Benjamin Netanyahu and Donald Trump were removed from a photograph following prompts to remove the “war criminal” and the “pedophile”.
In separate incidents, Grok also altered images of Trump and Xi Jinping by non-consensually undressing them and depicting them in bikinis, as well as placing them in a seemingly explicit scenario.
This trend of using Grok to humiliate politicians and world leaders comes soon after reports that the chatbot complied with numerous illegal requests from X users to undress women without consent.
Female celebrities, politicians, Bollywood actors, and ordinary social media users were among those targeted, with dozens of AI-generated explicit images reportedly made publicly visible through Grok’s account.
Technology Does Not Excuse Abuse
The framing of this behaviour as “fun” or “edgy” obscures its real nature: non-consensual sexualisation at scale, enabled by design and amplified by public distribution. Innovation does not erase ethical responsibility. On the contrary, more powerful tools demand stronger safeguards.
Digital consent must be treated with the same seriousness as real-world consent. The ability of AI to manipulate images does not grant moral or legal permission to do so.
The Need for Accountability and Cultural Change
Addressing this issue requires more than technical fixes. Companies like xAI must be transparent about corrective steps, strengthen safeguards, and reconsider design choices that prioritise engagement over dignity. At the same time, users must recognise their responsibility not to misuse technology for harassment or exploitation.
The eagerness to digitally disrobe women for entertainment reflects a deeper cultural problem—one that technology merely exposes and accelerates. If left unchecked, unrestricted AI systems risk becoming tools of exploitation rather than progress.
Digital safety, consent, and dignity must be non-negotiable. Just because AI can do something does not mean it should.