AI writing witch hunts hurt autistic writers most

Boing Boing

JA Westenberg, a writer and commentator who is autistic, argues that AI writing detection is junk science, and that the writers it harms most are often autistic or otherwise neurodivergent people whose natural prose style looks suspicious to the detectors.

The tools have a documented accuracy problem. OpenAI shipped its own text classifier in early 2023 and withdrew it six months later after it correctly identified only 26% of actual AI-generated text, worse than a coin flip. The major commercial alternatives (GPTZero, Pangram, Originality.ai) share the same structural flaw: they are pattern matchers that flag text which statistically resembles LLM output. The catch is that a great deal of ordinary human writing resembles LLM output, because LLMs learned to write from human text.

A 2023 Stanford study found that detectors disproportionately flag the writing of non-native English speakers and neurodivergent writers. Simpler sentence structures, fewer idioms, predictable word choices: the features that mark someone working in a second language, or an autistic person writing naturally, register as machine-generated to the detection tools. Westenberg raises the problem from inside the flagged population.
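One of the signals detection vendors have publicly described is "burstiness," the variance in sentence length across a passage; uniform sentences read as machine-like, varied ones as human. Here is a toy sketch of that heuristic (purely illustrative, not any vendor's actual algorithm), which shows how plain, evenly paced prose of the kind described above can score as "AI":

```python
def burstiness(text: str) -> float:
    """Variance in sentence length (in words). Detection heuristics
    treat low variance -- uniform sentences -- as a machine-like signal."""
    raw = text.replace("!", ".").replace("?", ".").split(".")
    sentences = [s.split() for s in raw if s.strip()]
    lengths = [len(s) for s in sentences]
    mean = sum(lengths) / len(lengths)
    return sum((n - mean) ** 2 for n in lengths) / len(lengths)

# Short, uniform sentences -- common in second-language or autistic
# writing -- produce low variance and look "AI" to such a heuristic.
plain = "I went to the store. I bought some bread. I walked home. I ate lunch."
varied = ("After a long, meandering walk, I went to the store. Bought bread. "
          "Then, because the weather turned, I hurried home and ate.")
print(burstiness(plain) < burstiness(varied))  # True: plain prose scores lower
```

The point of the sketch is the failure mode, not the method: a statistic like this cannot distinguish a language model from a human who simply writes in short, regular sentences.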

The stakes aren't abstract. Hachette published Mia Ballard's debut horror novel, Shy Girl, in November 2025. A Reddit thread and a YouTube video titled "I'm pretty sure this book is ai slop" (1.2 million views) triggered a pile-on. The UK edition was pulled, the US release was killed, and the Amazon listing was removed. Ballard told the New York Times her name was ruined for something she says she didn't do.

"Mia Ballard sold 1,800 books," says Westenberg. "She had a 3.51 on Goodreads. She was nobody. Most writers are nobody. The internet ate her alive because it felt good to have a villain."
