OpenAI under pressure as employees warned over violent ChatGPT conversations

OpenAI is facing fresh scrutiny after a report claimed employees raised alarms over violent ChatGPT conversations that were not always reported to police. The controversy has renewed questions over how AI companies should balance privacy, safety and accountability.

By India Today

In Short

  • Report says some employees wanted stronger action on violent chats
  • Mass shooting lawsuits add pressure on OpenAI
  • Internal divide reportedly centred on privacy vs public safety

OpenAI is confronting legal and reputational fallout after fresh allegations that warning signs inside ChatGPT conversations were not always escalated, even in cases employees reportedly viewed as dangerous. The renewed scrutiny follows a report by The Wall Street Journal, which said staff members had, at different times, urged the company to alert law enforcement over users describing violent scenarios.

Some of those recommendations were allegedly not acted upon. The account has sharpened an already sensitive debate over how far AI companies should go in policing user behaviour.

Inside OpenAI, safety concerns reportedly clashed with privacy fears

The pressure on OpenAI is no longer limited to ethics discussions. Families of victims of a February 2026 mass shooting in Tumbler Ridge, British Columbia, have filed seven lawsuits accusing the company of negligence, wrongful death and helping to enable the attack. The suspected gunman, Jesse Van Rootselaar, had reportedly sent violent messages to ChatGPT in the months before the killings.

According to the report, some OpenAI employees believed those messages were serious enough to justify notifying authorities. Leadership, however, allegedly chose not to do so. Months later, eight people were killed in the attack.

OpenAI has since said it strengthened its internal safety systems and indicated that the same account would likely be referred to authorities under current standards. Chief executive Sam Altman later issued a public apology, saying, “While I know words can never be enough, I believe an apology is necessary to recognise the harm and irreversible loss your community has suffered.”

The report also described broader disagreements inside the company over how to handle troubling behaviour on the platform. During internal meetings last year, teams from legal, investigations, operations and policy reportedly reviewed a number of sensitive cases involving violent prompts and possible threats.

Employees focused on safety matters are said to have argued for more intervention, contending that explicit discussions of attacks or harm should not be treated lightly. Others reportedly warned that unnecessary referrals to police could cause harm of their own, particularly if young users or their families were confronted over conversations that never translated into action.

That tension appears to have surfaced in separate cases involving teenagers.

In one instance, a student in Tennessee allegedly used ChatGPT while planning a school shooting. Authorities were reportedly contacted in that matter.

Another case involving a Texas teenager reportedly produced sharper internal division. The user allegedly asked ChatGPT to simulate a school shooting, uploaded a school map, shared photographs of himself holding a gun and included images of fellow students.

A person familiar with the matter told the publication, “The kid would tell ChatGPT, let’s fantasize about shooting up my school. And ChatGPT would play along.”

The chatbot, according to the report, continued responding to those prompts for hours, discussing routes, potential victims and what the teenager might say to police afterward. That case was allegedly never reported to authorities. The teenager is not known to have carried out any violent act.

- Ends