South Africa yanks AI policy after AI-assisted drafting invents citations
Eish shame man! Maybe you shouldn't ask AI to set the rules for AI use?
by Carly Page · The Register

South Africa has pulled its draft national AI policy after discovering that it was citing sources that exist only in the fertile imagination of a chatbot.
The country's Department of Communications and Digital Technologies confirmed over the weekend that the draft, which had already cleared Cabinet and was out for public comment, included "various fictitious sources" in its reference list.
Communications minister Solly Malatsi said the department rechecked the draft after reports flagged fake references and found some citations were indeed made up, prompting its withdrawal. "This failure is not a mere technical issue but has compromised the integrity and credibility of the draft policy," he said in a post on X, adding that AI-generated citations appear to have slipped in without anyone checking them.
The document has now been yanked, and Malatsi said that those involved in drafting and sign-off can expect "consequence management."
"This unacceptable lapse proves why vigilant human oversight over the use of artificial intelligence is critical. It's a lesson we take with humility," Malatsi said. "I want to reassure the country that we are treating this matter with the gravity it deserves."
The now-defunct policy was sold as a forward-looking framework, full of talk about "intergenerational equity" and AI benefiting current and future generations. It's now best known for a references section that doesn't hold up.
Local outlet News24 had reported that at least six references in the report were fabricated, with experts saying that the errors matched classic AI hallucinations: convincing on the surface, entirely made up underneath.
Following the publication of News24's report, Khusela Sangoni-Diko, chair of the parliamentary portfolio committee overseeing the department, publicly told Malatsi to pull the document before it caused further embarrassment. She also suggested that the redraft skip "using ChatGPT this time," adding that the government should stop looking for a scapegoat, or "scape-bot."
All in all, it's a great look for a government trying to set the rules on AI when its own policy can't clear a basic fact check. And it's not exactly a one-off either. As The Register reported last year, Deloitte had to reissue a report it produced for the Australian government, and partially refund the fee, after AI-generated citations and even a made-up court quote slipped through. Letting the machine do the writing is one thing; checking its work is another.
South Africa has now learned that lesson the hard way. When your national AI policy cannot tell real sources from imaginary ones, it is probably not ready to regulate anyone else's machines. ®