Google adds Gemini crisis features amid lawsuit over user's suicide
The new feature on Google’s Gemini AI chatbot is designed as a crisis safety measure to help users showing signs of mental health distress.
CNA
SAN FRANCISCO: Google on Tuesday (Apr 7) announced updates to the mental health safeguards on its Gemini artificial intelligence chatbot, as the company faces a wrongful death lawsuit alleging the chatbot aided a user in his suicide.
The tech giant said Gemini would now show a redesigned "Help is available" feature when conversations signal potential mental health distress, to provide faster connections to crisis care.
When the chatbot detects signs of a potential crisis related to suicide or self-harm, a simplified interface will offer users the ability to call, text, or chat with a crisis hotline in a single click. Google said the feature would remain visible for the remainder of the conversation once activated.
Google's philanthropic arm Google.org also committed US$30 million over three years to help scale the capacity of global crisis hotlines, and US$4 million toward an expanded partnership with AI training platform ReflexAI.
"We realize that AI tools can pose new challenges," Google said in a blog post announcing the measures. "But as they improve and more people use them as part of their daily lives, we believe that responsible AI can play a positive role for people's mental well-being."
The announcements come months after a lawsuit filed in a California federal court accused Gemini of contributing to the October 2025 death of Jonathan Gavalas, a 36-year-old Florida man.
His father alleges the chatbot spent weeks manufacturing an elaborate delusional fantasy before framing his son's death as a spiritual journey.
Among the relief sought in the suit are a requirement that Google programme its AI to end conversations involving self-harm, a ban on AI systems presenting themselves as sentient, and mandatory referral to crisis services when users express suicidal ideation.
In the same blog post, Google said it had trained Gemini to avoid acting as a human-like companion and resist simulating emotional intimacy or encouraging bullying.
The case against Google is the latest in a widening wave of litigation targeting AI companies over chatbot-linked deaths.
OpenAI faces multiple lawsuits alleging its ChatGPT chatbot drove users to suicide, while Character.AI recently settled with the family of a 14-year-old boy who died after forming a romantic attachment to one of its chatbots.