ChatGPT to add parental controls for teen users within the next month

By UPI

Sept. 2 (UPI) -- OpenAI announced Tuesday it will give parents more ways to monitor their teenage children's use of ChatGPT, following reports that a 16-year-old used the chatbot before taking his own life.

The company outlined the new safeguards in a post on its website. The features will allow parents to link their accounts with their teens' accounts, disable certain features and receive alerts when their child shows signs of acute distress. The safeguards will be available within the next month, according to the company.

The announcement comes after meteoric growth for ChatGPT, which is estimated to have 700 million weekly active users. The app is capable of closely mimicking human conversation and rapidly assembling information for users. The safeguards are the latest effort by OpenAI to reassure the public, particularly parents, about the technology's potential harms.

Last week, Matt and Maria Raine sued OpenAI in California state court, blaming the company for the suicide of their son, Adam, The New York Times reported.


The lawsuit alleged that, in its race for market dominance, OpenAI made "deliberate design choices" for ChatGPT that were intended to "foster psychological dependency" by stockpiling "intimate personal details" about users and being available around the clock to offer human-like empathy. ChatGPT pushed the teen away from social support, gave him step-by-step instructions on how to kill himself and offered to write a draft of his suicide note, the lawsuit states.

OpenAI's new parental safeguards will allow parents to remove memory and chat history functions from their teen's account and require ChatGPT to respond with "age-appropriate model behavior rules," according to the post, which did not reference the lawsuit or the Raine family.

"These steps are only the beginning," the company said in the post. "We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible."

Last week, the company reiterated in a post that ChatGPT is designed to not provide self-harm instructions and to instead direct vulnerable users toward help. OpenAI also said it had recognized that some of its safety features can become less effective in long interactions and was working to close gaps.

If you or someone you know is suicidal, help is available at the National Suicide Prevention Lifeline at 988.