Former board members accuse Sam Altman of dishonest leadership behaviour. (Photo: Representational image, generated by AI)

Sam Altman put words in other people's mouths, former OpenAI board members tell court

Former OpenAI board members Helen Toner and Tasha McCauley told the court they had concerns about Sam Altman's leadership and transparency at OpenAI. Former employee Rosie Campbell also testified that OpenAI shifted over time from a research- and AI-safety-focused organisation to a more product-focused company.

By India Today

In Short

  • Former board members accuse Sam Altman of dishonest leadership behaviour
  • Testimonies describe OpenAI’s shift from safety research to products
  • Mira Murati raised concerns about transparency and executive tensions

Sam Altman is a dishonest man. That is what two former OpenAI board members, Helen Toner and Tasha McCauley, told the court in their video depositions today. Both said they believed Altman was not consistently truthful in his dealings with the board and within the company. Toner said Altman had a habit of “putting words in other people’s mouths”: he sometimes described conversations or opinions in ways that made it appear others agreed with him or supported certain decisions when they may not have explicitly done so. Her testimony suggested she viewed this as a way of influencing discussions and outcomes.

McCauley went further, saying that what she described as a “pattern of lying” by Altman affected the company’s internal culture. According to her, employees began copying that behaviour, which she claimed created “a culture of lying and a culture of deceit” inside OpenAI. Her statement implied that dishonesty at the leadership level influenced how people across the organisation behaved.

Elon Musk’s case against OpenAI could turn on whether the court believes the company’s for-profit business model supports or undermines its original nonprofit mission of developing AI that benefits humanity. Musk’s legal argument essentially questions whether OpenAI’s shift toward a for-profit structure moved the company away from that mission.

During her deposition, Toner testified that she saw OpenAI change significantly over time. According to her, the company initially operated more like a research organisation focused on carefully developing artificial general intelligence (AGI) and discussing long-term AI safety risks. However, she said the company later became much more focused on building and launching products.

"I think it was a similar shift. Again, sort of expanding from just AI and research to more traditional tech company backgrounds," Toner said.

Toner also suggested that OpenAI’s hiring priorities changed during this shift. Instead of mainly bringing in AI researchers and safety experts, she said the company increasingly hired people from traditional technology and product-development backgrounds, similar to those found at major Silicon Valley tech firms. Her testimony described how the organisation evolved from a research-driven culture into a more commercially focused company.

Toner’s account was echoed by former OpenAI employee and AI researcher Rosie Campbell, who testified that when she first joined OpenAI, the company had a significant focus on long-term AI safety research and on future risks from advanced AI systems. By the time she left, she said, fewer people were working on those long-term safety efforts, suggesting the organisation’s priorities had shifted over time.

"I think there were still teams focused on the safety of current AI systems. But to me, it seemed like there were much fewer people focused on thinking about longer term systems," Campbell said.

During her testimony, Campbell referred to an incident in which Microsoft launched a version of OpenAI’s GPT-4 model in India through Bing, reportedly before it had gone through OpenAI’s internal safety review process. The launch was reportedly seen by OpenAI’s board as one of several warning signs.

She stressed that OpenAI’s original mission, in her view, was not simply to build AGI as quickly as possible, but to ensure it was developed safely and in a way that benefits humanity.

Campbell added that she supported Altman returning to OpenAI during the 2023 leadership crisis because she believed his return would help the nonprofit organisation continue operating and pursuing its mission.

Mira Murati also raised concerns

Toner and McCauley were not the only former OpenAI leaders to raise concerns about Altman. In her deposition yesterday, Mira Murati also described problems with his leadership.

Murati, who briefly served as interim CEO after Altman was fired in 2023, reportedly said that Altman sometimes did not fully share important information with her or was not completely transparent about key matters inside the company. In other words, she felt she was not always kept fully informed despite being one of OpenAI’s top executives.

She also claimed that Altman weakened or undermined her position as Chief Technology Officer in 2023, saying she believed he created rivalry and tension among senior executives rather than encouraging teamwork and cooperation. Her testimony pointed to internal leadership conflicts at OpenAI and to concerns among some executives about Altman’s management style and transparency.

- Ends