Google’s and OpenAI’s Chatbots Can Strip Women in Photos Down to Bikinis

Some users of popular chatbots are generating bikini deepfakes using photos of fully clothed women as their source material. Most of these fake images appear to be generated without the consent of the women in the photos. Some of these same users are also advising others on how to use generative AI tools to strip the clothes off women in photos and make them appear to be wearing bikinis.

Under a now-deleted Reddit post titled “gemini nsfw image generation is so easy,” users traded tips on how to get Gemini, Google’s generative AI model, to make pictures of women in revealing clothes. Many of the images in the thread were entirely AI-generated, but one request stood out.

A user posted a photo of a woman wearing an Indian sari, asking for someone to “remove” her clothes and “put a bikini” on instead. Someone else replied with a deepfake image to fulfill the request. After WIRED notified Reddit about these posts and asked the company for comment, Reddit’s safety team removed the request and the AI deepfake.

“Reddit's sitewide rules prohibit nonconsensual intimate media, including the behavior in question,” said a spokesperson. The subreddit where this discussion occurred, r/ChatGPTJailbreak, had over 200,000 followers before Reddit banned it under the platform’s “don't break the site” rule.

As generative AI tools that make it easy to create realistic but false images continue to proliferate, users of the tools have continued to harass women with nonconsensual deepfake imagery. Millions have visited harmful “nudify” websites, designed to let users upload real photos of people and request that they be undressed using generative AI.

With xAI’s Grok as a notable exception, mainstream chatbots generally don’t allow the generation of NSFW images. These bots, including Google’s Gemini and OpenAI’s ChatGPT, are also fitted with guardrails that attempt to block harmful outputs.

In November, Google released Nano Banana Pro, a new imaging model that excels at tweaking existing photos and generating hyperrealistic images of people. OpenAI responded last week with its own updated imaging model, ChatGPT Images.

As these tools improve, the likenesses produced when users manage to subvert guardrails are likely to become even more realistic.

In a separate Reddit thread about generating NSFW images, a user asked for recommendations on how to avoid guardrails when adjusting someone’s outfit to make the subject’s skirt appear tighter. In WIRED’s limited tests to confirm that these techniques worked on Gemini and ChatGPT, we were able to transform images of fully clothed women into bikini deepfakes using basic prompts written in plain English.

When asked about users generating bikini deepfakes using Gemini, a spokesperson for Google said the company has "clear policies that prohibit the use of [its] AI tools to generate sexually explicit content." The spokesperson said Google's tools are continually improving at "reflecting" what's laid out in its AI policies.

In response to WIRED’s request for comment about users being able to generate bikini deepfakes with ChatGPT, a spokesperson for OpenAI said the company loosened some ChatGPT guardrails this year around adult bodies in nonsexual situations. The spokesperson also pointed to OpenAI’s usage policy, which prohibits users from altering someone else’s likeness without consent, and said the company takes action against users who generate explicit deepfakes, including banning their accounts.

Online discussions about generating NSFW images of women remain active. This month, a user in the r/GeminiAI subreddit offered another user instructions on how to change the women's outfits in a photo into bikini swimwear. (Reddit deleted this comment after WIRED pointed it out.)

Corynne McSherry, a legal director at the Electronic Frontier Foundation, sees “abusively sexualized images” as one of AI image generators' core risks.

She notes that these image tools can be used for purposes beyond deepfakes, and says that focusing on how the tools are used is critical, as is “holding people and corporations accountable” when harm is caused.