'Unconstrained by Truth': Ronan Farrow's Deep Dive into OpenAI Boss Sam Altman Reveals Sociopathic Tendencies of AI Kingpin

by Lucas Nolan · Breitbart

Journalists Ronan Farrow and Andrew Marantz have published an investigation into Sam Altman, the AI kingpin behind OpenAI, revealing a troubling history of deception and sociopathic tendencies. One former OpenAI board member explains, “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”

The New Yorker has published a major investigation of OpenAI CEO Sam Altman written by Ronan Farrow and Andrew Marantz. The article provides a fascinating and deeply researched view into the life of Altman, including blow-by-blow details of his short-lived ouster from the company.

The piece explains that prominent figures in the AI world hold a deep distrust of Altman, with many using the word “sociopathic” to describe his personality. Altman’s list of detractors extends beyond OpenAI cofounder Ilya Sutskever, who left the company after a failed attempt to oust Altman, and Anthropic CEO Dario Amodei, a bitter rival. As Farrow and Marantz write, even former OpenAI board members see Altman as being “unconstrained by truth:”

Yet most of the people we spoke to shared the judgment of Sutskever and Amodei: Altman has a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart. “He’s unconstrained by truth,” the board member told us. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.” The board member was not the only person who, unprompted, used the word “sociopathic.” One of Altman’s batch mates in the first Y Combinator cohort was Aaron Swartz, a brilliant but troubled coder who died by suicide in 2013 and is now remembered in many tech circles as something of a sage. Not long before his death, Swartz expressed concerns about Altman to several friends. “You need to understand that Sam can never be trusted,” he told one. “He is a sociopath. He would do anything.” Multiple senior executives at Microsoft said that, despite Nadella’s long-standing loyalty, the company’s relationship with Altman has become fraught. “He has misrepresented, distorted, renegotiated, reneged on agreements,” one said. Earlier this year, OpenAI reaffirmed Microsoft as the exclusive cloud provider for its “stateless”—or memoryless—models. That day, it announced a fifty-billion-dollar deal making Amazon the exclusive reseller of its enterprise platform for A.I. agents. While reselling is permitted, Microsoft executives argue OpenAI’s plan could collide with Microsoft’s exclusivity. (OpenAI maintains that the Amazon deal will not violate the earlier contract; a Microsoft representative said the company is “confident that OpenAI understands and respects” its legal obligations.) The senior executive at Microsoft said, of Altman, “I think there’s a small but real chance he’s eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer.”

Farrow and Marantz explain in their article that Altman’s sociopathic tendencies don’t just bruise the egos of other executives. His approach to business has caused real-world problems, such as ChatGPT launching without the proper safety guardrails in place:

By then, internal messages show, executives and board members had come to believe that Altman’s omissions and deceptions might have ramifications for the safety of OpenAI’s products. In a meeting in December, 2022, Altman assured board members that a variety of features in a forthcoming model, GPT-4, had been approved by a safety panel. Toner, the board member and A.I.-policy expert, requested documentation. She learned that the most controversial features—one that allowed users to “fine-tune” the model for specific tasks, and another that deployed it as a personal assistant—had not been approved. As McCauley, the board member and entrepreneur, left the meeting, an employee pulled her aside and asked if she knew about “the breach” in India. Altman, during many hours of briefing with the board, had neglected to mention that Microsoft had released an early version of ChatGPT in India without completing a required safety review. “It just was kind of completely ignored,” Jacob Hilton, an OpenAI researcher at the time, said.

Breitbart News social media director and author Wynton Hall explains in his instant bestseller, Code Red: The Left, the Right, China, and the Race to Control AI, that conservatives must develop a plan to deal with the bias baked into AI by leftists in Silicon Valley. When the personalities running AI companies are as troubling as Sam Altman’s, it takes an effective framework to gain the benefits of AI without the bias and downsides.

Read the full article at the New Yorker here.

Lucas Nolan is a reporter for Breitbart News covering issues of AI, free speech, and online censorship.