OpenAI launches Trusted Contact feature amid lawsuits over ChatGPT self-harm conversations
OpenAI has introduced a new optional ChatGPT safety feature called "Trusted Contact" that can alert a chosen person if a user discusses self-harm in a concerning way.
by Om Gupta · India Today

In Short
- OpenAI launches Trusted Contact feature for ChatGPT users in crisis
- Feature alerts trusted person if serious self-harm risk is detected
- Announcement comes amid lawsuits linked to ChatGPT conversations
On Thursday, OpenAI announced a new feature called “Trusted Contact,” an optional safety feature designed to help in situations where a user talks about self-harm during conversations with ChatGPT. If the system detects signs of possible self-harm, it can alert a trusted person chosen by the user, such as a family member or friend, so they can check in and offer support. The feature arrives as OpenAI faces several lawsuits from families who claim their loved ones were influenced by conversations with ChatGPT before dying by suicide.
The lawsuits allege that the chatbot sometimes responded in ways that appeared to encourage harmful thoughts or failed to stop dangerous conversations; in some cases, the families claim, it even discussed or helped plan self-harm methods. The courts have not yet determined whether OpenAI is legally responsible.
How the Trusted Contact feature works
If ChatGPT’s monitoring systems detect that a user may be discussing self-harm in a serious or dangerous way, the user will first be informed that their chosen “Trusted Contact” could be notified. After that, a specially trained human review team checks the conversation to decide whether the situation appears genuinely concerning.
If the reviewers believe there is a serious safety risk, ChatGPT can send a short alert to the user’s trusted contact through email, text message, or the ChatGPT app. OpenAI said the alert will only state that the user may be going through a mental health crisis or discussing self-harm in a concerning way. It will not share the user’s private chats or conversation details.
The notification will also include guidance on how the trusted person can safely and sensitively reach out to help. OpenAI said it aims to complete these reviews and send any needed notifications within one hour.
“While these serious safety situations are rare, when they do arise, our systems are designed to support timely review and response,” OpenAI said.
Expansion of earlier safety systems
The new “Trusted Contact” feature builds on an earlier safety system that already allowed parents or guardians to receive alerts if a linked teenage user showed signs of serious emotional distress. OpenAI is now expanding that idea so users over 18 can also choose a trusted person, such as a friend, family member, or caregiver, to receive safety alerts if they appear to be in crisis.
“We will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress,” OpenAI said.