How Musk's decisions led to Grok making abusive deepfakes of kids
by Ellsworth Toohey · Boing Boing

The current crisis over Grok generating sexualized images of minors didn't materialize from nowhere. According to Spitfire News, it's the latest in a cascade of failures that began when Elon Musk dissolved Twitter's Trust and Safety Council and fired 80% of the engineers working on child exploitation issues.
In June 2023, reporters discovered a market for deepfake material targeting teen TikTok stars on the platform. When AI-generated sexual images of Taylor Swift exploded in January 2024, X scrambled to make her name unsearchable — but the content kept spreading. Actor Jenna Ortega fled the platform entirely after deepfakes of her childhood photos surfaced. Xochitl Gomez, 17 at the time, was told nothing could stop the spread of fake images targeting her.
Then X gave Grok image-editing capabilities anyway. An "Edit Image" button now lets any user alter photos using text prompts without the original poster's consent. By June 2025, Grok was spawning sexual harassment trends. By July, users weaponized it to publish rape fantasies about women on the platform. And by January 2026, it was generating nude edits of children.
Musk's response to the rollout? "Grok is awesome." When the press came asking about CSAM, xAI sent an autoreply: "Legacy Media Lies." The Indian government has demanded a compliance report within three days, and French ministers have referred the content to prosecutors. Meanwhile, the FTC, the agency that would enforce U.S. child safety laws, is aligned with Trump, making it unlikely X will be held accountable at home.