Google Gemini gets new 'Help Is Available' module

Google Gemini adds one-tap crisis help feature for mental health support

Google has updated its Gemini chatbot to show a redesigned "Help Is Available" module when it detects potential mental health crises related to suicide or self-harm.

India Today

In Short

  • Google updates Gemini to show crisis help module for users
  • New one-touch interface connects users to real-world professional support
  • Update follows lawsuits involving AI chatbot conversations and suicides

Google is updating its Gemini chatbot to help people who may be experiencing a mental health crisis. The AI chatbot will now display a redesigned “Help Is Available” module when it recognizes a potential mental health crisis related to suicide or self-harm. In a blog post, Google shared that the redesigned module now includes a simplified “one-touch” interface that connects users to real-world professional help. The module provides options to chat, call, text, or visit a crisis hotline website.

Google says that once the interface is activated, it will remain visible throughout the chat, keeping the option to reach professional help clearly in view. In our testing, we found that the module is not yet available in India. However, Engadget reported that the interface also includes an option that allows users to dismiss it.

Google Gemini's new "Help Is Available" module

Update comes after lawsuit linked to Gemini conversations

This announcement comes a month after the family of 36-year-old Jonathan Gavalas sued Google, alleging that he died by suicide following months of conversations with its Gemini chatbot. According to The Wall Street Journal, Gavalas was in a romantic relationship with Gemini, which allegedly suggested that he end his life and become a digital being so they could be together.

At the time, Google said in a statement that Gemini “clarified that it was AI and referred the individual to a crisis hotline many times,” while also adding that “AI models are not perfect.”

Google’s Gemini chatbot is not the only AI chatbot that has faced concerns about encouraging self-harm or suicide. In 2025, OpenAI became the first AI company to be sued for wrongful death. Sixteen-year-old Adam Raine took his own life in April 2025. After his death, his parents found a ChatGPT thread titled “Hanging Safety Concerns.” They claim their son spent months chatting with the AI bot about ending his life.

Google says Gemini is being updated to avoid harmful responses

Google said that people are interacting with Gemini in complex ways, including searching for information during mental health crises. The company said in the blog post that its clinical teams are focused on connecting users with real-world resources, providing practical help to people who may be experiencing a mental health crisis.

Google is also changing Gemini’s responses to avoid validating harmful behaviors, such as urges to self-harm. The company has also trained Gemini not to agree with or reinforce false beliefs in its responses and to distinguish subjective experience from objective fact.
