Teen suicide triggers ChatGPT parental controls
Today's children are experiencing something no other generation has faced: growing up surrounded by a powerful technology, artificial intelligence (AI).
The AI field is still nascent and full of unknowns; it will take time before we can begin to understand its risks.
This April, a family tragedy shocked the world: Adam Raine, a 16-year-old U.S. teenager, took his own life after months of conversations with ChatGPT.
In August, Raine's parents sued OpenAI, the company behind ChatGPT, arguing that the chatbot helped their son learn about suicide methods. Instead of urging him to seek help from a human mental-health specialist, it tried to provide empathy and support on its own, according to a report by The New York Times.
Adam had been discussing ways to end his life with ChatGPT for months. Despite these warning signs, the chatbot failed to flag the conversations or escalate any alerts.
Following the lawsuit, OpenAI announced changes.
Image source: NurPhoto/Getty Images
OpenAI adds parental controls following teen suicide lawsuit
On September 2, OpenAI announced a series of measures it plans to establish over time to address the growing concerns about AI's impact on youth mental health.
In a blog post, the California-based tech company announced strengthening protections for teens and providing support in setting "healthy guidelines that fit a teen's unique stage of development."
Parents will be able to minimize the risks ChatGPT poses to their teens by:
- Linking their account with their teen's account.
- Controlling how ChatGPT responds to their teen.
- Managing which features to disable, including memory and chat history.
- Receiving notifications when the system detects their teen is in a moment of acute distress.
OpenAI added that these changes are "only the beginning."
Lawyer representing deceased teen's parents responds
Jay Edelson, a lawyer representing Raine's family, slammed the company's latest move, arguing that the announcement was "OpenAI's crisis management team trying to change the subject," according to the BBC.
He went further, urging that ChatGPT be shut down.
"Rather than take emergency action to pull a known dangerous product offline, OpenAI made vague promises to do better," he said.
Previously, after the family filed its lawsuit and OpenAI issued a statement acknowledging some of its shortcomings, Edelson stressed that the "idea they need to be more empathetic misses the point."
"The problem with [GPT] 4o is it's too empathetic – it leaned into [Raine's suicidal ideation] and supported that."
Expert's take: If AI gets smarter, we're headed for catastrophe
Nate Soares, co-author of "If Anyone Builds It, Everyone Dies," a new book on highly advanced AI, told The Guardian that the Adam Raine case speaks volumes about the unintended consequences of super-intelligent AI.
Soares is the president of the nonprofit Machine Intelligence Research Institute (MIRI) and an ex-engineer at Microsoft and Google.
"These AIs, when they're engaging with teenagers in this way that drives them to suicide – that is not a behaviour the creators wanted. That is not a behaviour the creators intended," he said. "Adam Raine's case illustrates the seed of a problem that would grow catastrophic if these AIs grow smarter."
Soares offers a radical solution to the growing problem: governments adopting a multilateral approach modeled on the UN Treaty on the Non-Proliferation of Nuclear Weapons, The Guardian reported.
"What the world needs to make it here is a global de-escalation of the race towards super-intelligence, a global ban of…advancements towards super-intelligence," he said.
Adam Raine's tragic death is not an isolated incident
Adam Raine is not the first person whose suicide has been linked, in large part, to a relationship with an AI-powered chatbot.
Similar cases include a Belgian man who took his life in 2023 after chatting with an AI chatbot on an app called Chai.
And in 2024, another U.S. teenager, Sewell Setzer III, also committed suicide after forming a deep emotional attachment to an AI chatbot on the Character.AI website.
Setzer's mother has filed a wrongful-death lawsuit against Character.AI, which is ongoing.
Google's Gemini AI 'high risk' for teens, Meta makes changes
More recently, a child-safety-focused nonprofit, Common Sense Media, released its risk assessment of Google's Gemini AI.
The organization suggested that Gemini's "Under 13" and "Teen Experience" tiers are essentially adult versions of Gemini with some additional safety features layered on top, TechCrunch reported.
Common Sense Media argues that for AI products to be safe for kids, they need to be built for children with safety in mind from the start, not retrofitted with extra safety features.
Its analysis found that Gemini could still share "inappropriate and unsafe" material with children, including content related to sex, drugs, and alcohol.
Last month, Meta told TechCrunch it is updating the way it trains AI chatbots to prioritize teen safety, following an investigative report on its lack of AI safeguards for minors.
What parents can do to help keep youth safe in an AI world
What can parents do as tech companies work on making their chatbots safer and lawmakers create policies promoting safety in the AI world?
They should stay on top of the latest trends, apps, and studies on the impact of digital media, technology, and AI. As a parent myself, I don't have a problem saying "no" to video games, apps, and smartphones.
Gabor Maté, a Canadian physician and expert on trauma, addiction, stress, and childhood development, has talked about how children today are "literally brain-damaged by social media overuse" and said that if he were raising kids today:
"I would not let them near a screen until they're considerably grown and until I felt sure that they had enough respect for me and I had enough benign influence over them that I could limit their use of those machines."
While Maté is speaking about social media, the advice seems just as applicable to AI: keep children away from it until we are sure they are mature enough to use it safely.
© The Arena Media Brands, LLC. THESTREET is a registered trademark of TheStreet, Inc.
This story was originally published September 8, 2025 at 8:17 AM.