Anthropic CEO says world is running out of time to fix critical software flaws after Claude Mythos launch

Anthropic CEO Dario Amodei warned of a narrow window to fix vulnerabilities uncovered by Mythos, as AI systems in countries like China are not far behind.

by · India Today

In Short

  • Mythos uncovers thousands of vulnerabilities across critical global software systems
  • Governments and banks closely monitor risks, restrict wider AI access
  • Experts call risks temporary but urge urgent fixes and safeguards

Anthropic CEO Dario Amodei has warned that the world has a narrow window to find hidden security flaws in its systems. The warning came as Anthropic’s latest model, Mythos, found tens of thousands of software vulnerabilities—some of them very old—in important systems used by companies, governments, and banks. The “narrow window” Amodei refers to is the limited time available to fix these problems before hackers can exploit them. The urgency stems from the fact that AI systems in countries like China are not far behind, trailing by only about 6 to 12 months.

“The danger is just some enormous increase in the amount of vulnerabilities, in the amount of breaches, in the financial damage that’s done from ransomware on schools, hospitals, not to mention banks,” Amodei said.

Amodei made these comments at a public event hosted by Anthropic, where he appeared on stage alongside Jamie Dimon, the head of JPMorgan Chase. At the same event, Anthropic also introduced a new set of AI tools, called “agents,” designed to automate financial tasks such as certain types of work in banking and finance.

Potential cyber exploits growing with new generation AI models

Anthropic’s Mythos model is so powerful that access to it is tightly restricted to a small, invite-only group under “Project Glasswing,” which includes around 40 organisations such as Apple, Google, and Microsoft. According to Amodei, the scale of potential cyber exploits has grown significantly with each new generation of Anthropic’s AI models. He noted that an earlier model identified about 20 vulnerabilities in the Firefox browser, while Mythos found nearly 300.

Overall, Amodei said, the number of vulnerabilities detected across different software systems has now reached into the tens of thousands, highlighting both the increasing capability of AI and the growing cybersecurity risks that come with it.

AI should be regulated like cars

Around the world, governments are paying close attention to Mythos because of its potential cybersecurity impact, especially in sensitive sectors like banking and finance. In India, Finance Minister Nirmala Sitharaman held a high-level meeting with top bank officials to discuss the risks the model could pose. At the same time, in the United States, the White House is reportedly closely monitoring how Mythos is being released. According to The Wall Street Journal, authorities even stepped in to stop Anthropic from expanding access to more clients, showing how seriously they view the potential risks.

On regulation, Amodei argued that AI should be governed in a way similar to the automotive industry: just as cars must have safety features such as brakes before they are allowed on the road, AI systems should be required to have safeguards. At the same time, he said, regulation should not be so strict that it slows down innovation.

Unpatched risks a ‘transitory period’

Amodei said that most of the security flaws found by Anthropic’s Mythos model have not been made public yet. The reason is simple: they remain unpatched. If these vulnerabilities were openly disclosed now, hackers could immediately use them to attack systems, so they are being kept confidential until the affected companies can repair them.

At the same time, Dimon agreed that the cybersecurity concerns around AI are real but described them as a “transitory period”: a temporary phase in which risks run high because AI is advancing quickly, before systems, defenses, and regulations catch up and reduce those dangers.

- Ends