OpenAI, ChatGPT logo

ChatGPT to add parental controls amid child safety concerns

OpenAI's decision followed mental health concerns and a lawsuit by parents whose teenage son died by suicide, allegedly after encouragement from ChatGPT

Premium Times

OpenAI announced on Tuesday that it will introduce parental control features on ChatGPT amid concerns about mental health risks and harmful use among teenagers.

In a post on its website on Tuesday, the company stated that the features, which will be available in December, will link parents’ accounts to their children’s accounts so parents can monitor their children’s activity.

According to OpenAI, parents will also be able to monitor their children’s chat history and the AI’s responses, and will be alerted to red flags if a child or teenager asks about something potentially risky.

OpenAI’s decision followed a series of concerns raised by experts about the chatbot’s effects on teenagers’ mental health, as well as a lawsuit filed by the parents of Adam Raine, a 16-year-old from California who died by suicide on 11 April, allegedly after encouragement from ChatGPT.

In the lawsuit, filed in San Francisco state court, the parents accused OpenAI of wrongful death and violations of product safety laws, claiming the GPT-4o model spent months encouraging the teenager towards suicide rather than steering him away from it.

To strengthen protections for teens using ChatGPT, OpenAI said the chatbot’s growing use among young people made healthy guidelines for AI use necessary.

“Many young people are already using AI. They are among the first ‘AI natives,’ growing up with these tools as part of daily life, much like earlier generations did with the internet or smartphones.

“That creates real opportunities for support, learning, and creativity, but it also means families and teens may need support in setting healthy guidelines that fit a teen’s unique stage of development.”

Outlining how parental controls will be built into teens’ use of ChatGPT, OpenAI stated that parents will be able to receive notifications about their children’s activity and disable certain features if necessary.

“Earlier this year, we began building more ways for families to use ChatGPT together and decide what works best in their home. Within the next month, parents will be able to link their account with their teen’s account (minimum age of 13) through a simple email invitation.

“Parents will also be able to control how ChatGPT responds to their teen with age-appropriate model behaviour rules, which are on by default.

“Parents will be able to manage which features to disable, including memory and chat history.

“Parents will receive notifications when the system detects their teen is in a moment of acute distress. Expert input will guide this feature to support trust between parents and teens,” OpenAI said.

The AI company further stated that it plans to rely on expert advice to build more features that will be helpful to users of the chatbot.

“These controls add to features we have rolled out for all users, including in-app reminders during long sessions to encourage breaks.

“These steps are only the beginning. We will continue learning and strengthening our approach, guided by experts, to make ChatGPT as helpful as possible. We look forward to sharing our progress over the coming 120 days,” OpenAI noted.

Earlier, as part of guidelines on users’ well-being and mental health, OpenAI announced plans to improve its safeguards in four key areas: expanding interventions to more people in crisis, making it easier to reach emergency services and get help from experts through ChatGPT, enabling connections to trusted contacts, and strengthening protections for teens.

The tech company said it would rely on its Expert Council on Well-Being and AI and its Global Physician Network to provide both the depth of specialised medical expertise and the breadth of perspective needed to shape these features.