Concern arose from unauthorised access to Anthropic's Claude Mythos AI model.

AI security risk for banks? Why has Nirmala Sitharaman raised concerns

At the centre of the issue is Anthropic's new AI model, known as Claude Mythos Preview, capable of identifying and exploiting vulnerabilities across major operating systems and web browsers when prompted.

By India Today

In Short

  • Finance Minister Nirmala Sitharaman held a meeting on AI risks to banking
  • Concern arose over unauthorised access to Anthropic's Claude Mythos AI model
  • The Claude Mythos AI tool can find security flaws, but its misuse poses a threat to banks

A high-level meeting called by Finance Minister Nirmala Sitharaman with top bank officials on Thursday has brought a new concern into focus: whether rapidly advancing artificial intelligence tools could pose a risk to the country’s banking system.

The discussion centred around the potential cybersecurity threats linked to Anthropic’s Claude Mythos model, a powerful AI system that has recently come under scrutiny after reports of unauthorised access.

The development has added urgency to discussions within the government and financial sector, especially as banks increasingly rely on digital systems and AI tools.

WHAT HAS TRIGGERED THE CONCERN

At the centre of the issue is Anthropic’s new AI model, known as Claude Mythos Preview.

The company itself has described the model as highly advanced, capable of identifying and exploiting vulnerabilities across major operating systems and web browsers when prompted.

The model is not meant for public use. It is being tested in a controlled environment under a programme called Project Glasswing, with access limited to a handful of companies such as Nvidia, Google, Amazon Web Services, Apple and Microsoft.

However, according to a Bloomberg report, a small group of users managed to gain access to the model through a third-party vendor environment on the same day it was introduced for limited testing.

Anthropic has confirmed that it is investigating the claim of unauthorised access.

"We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," an Anthropic spokesperson said.

WHY THIS MATTERS FOR BANKS

This incident has raised a key concern: if a restricted and highly powerful AI system can be accessed without permission, the risk of misuse grows in sectors that rely heavily on digital infrastructure, including banking.

The Claude Mythos model is designed to detect security flaws. But in the wrong hands, such capabilities could potentially be used to exploit weaknesses in financial systems.

This is where the government’s concern comes in.

Sitharaman’s warning highlights the risk that AI tools could move from being a defence mechanism to a possible threat if not properly controlled.

WHY A MEETING WAS CALLED

The concern over such risks has led to discussions at the highest levels. The government is understood to be assessing how emerging AI tools could impact financial stability, cybersecurity, and data protection.

The unauthorised access incident appears to have acted as a trigger, showing that even controlled AI deployments may not be fully secure.

This has pushed regulators to take a closer look at how banks are using AI and what safeguards are in place.

The issue is not just about one AI model. It reflects a broader shift: AI systems are rapidly becoming more powerful and more capable.

As banks adopt AI for fraud detection, customer service, and operations, the same technology, if misused, could be turned around to identify gaps in their systems.

This creates a new kind of risk where the threat is not just hackers, but advanced tools that can automate and scale attacks.

The government is likely to focus on strengthening safeguards around AI use in financial systems. This could include tighter controls, better monitoring, and clearer rules on how such technologies are deployed.

The message from the recent warning is clear. While AI offers efficiency and innovation, it also brings new risks that regulators and banks cannot ignore.

The trigger may have been a single incident, but the concern is much wider — how to ensure that powerful AI tools do not become a threat to the financial system itself.

- Ends