Anthropic Mythos 101: India wants access, US says limit access, all of it explained
Claude Mythos has triggered a global access battle, with India seeking entry even as the US pushes for tighter controls from Anthropic. The reported cybersecurity capabilities of this new AI model have raised both excitement and fear among world leaders.
by Ankita Garg · India Today

In Short
- India seeks Mythos access over cyber defence and AI competitiveness concerns
- US reportedly wants limits over misuse and national security risks
- Canada, UK and EU also discussing Mythos risks and impact
The next big global power struggle may not be over oil, weapons or semiconductor chips. It may be over access to an advanced AI system called Claude Mythos. Developed by Anthropic, Mythos has become the talk of the town for its stupendous capabilities, specifically around cybersecurity. Mythos is designed to detect hidden flaws in software systems, some of them decades old. The flipside is that all this power, should it fall into the wrong hands, could have serious security implications for the world at large.
That is exactly why Anthropic has made Mythos available to a super-select group of companies under the Project Glasswing initiative. The catch is that Glasswing is currently limited to US companies, with the White House monitoring the situation carefully. The list includes Apple, Google, Microsoft, Nvidia and others. Indian companies have been left out, at least in the initial rollout. In other words, India wants access to Mythos, while the US wants to limit it.
Unlike mainstream AI tools like ChatGPT and Gemini, Mythos is being viewed as a strategic technology. Access to it could help strengthen cyber defences, improve software systems both new and legacy, and speed up technical work. But the same capabilities, when misused, could expose the world to far-reaching risks, particularly in areas like banking and finance. That is why the Mythos debate has quickly moved beyond Silicon Valley and into government circles worldwide.
Why Mythos is potentially dangerous and why world leaders are concerned
What makes Mythos concerning is its reported ability to find vulnerabilities in software at scale. BBC reported that Anthropic itself has revealed Mythos Preview has already discovered "thousands of high-severity flaws, including some affecting every major operating system and web browser." The company also claimed the model identified one vulnerability that had remained hidden in a system for 27 years.
Those claims are significant because much of the global digital economy still depends on legacy software. Banks, telecom networks, transport systems, power grids and government infrastructure often run on layers of old and complex code. A system that can rapidly detect hidden weaknesses could help secure them. But the same system could also be misused to search for weak targets faster than humans can.
That dual-use nature is what has alarmed world leaders. Mythos is not being seen as just another AI assistant, but as a tool that could influence cybersecurity preparedness at a national level.
Why India cares, why it was not given access, and what it is doing now
For India, the issue goes beyond curiosity over a new AI product. India has a massive developer base, a fast-growing startup ecosystem and rising ambitions in AI. Access to a model like Mythos could help Indian companies build stronger enterprise software, improve automation and strengthen cybersecurity systems.
Officials are also said to be worried that if advanced security-focused AI remains concentrated abroad, key sectors such as banking and power could become more exposed over time. The concern is not only about innovation, but about dependence on foreign AI systems in the future.
Reports suggest India's urgency increased because no Indian company was included in Anthropic's initial Project Glasswing access group. That reportedly led to concerns that Indian firms could be left behind in the next phase of AI capability deployment.
Following a high-level review led by Finance Minister Nirmala Sitharaman, ministries are now exploring technical and policy pathways to secure controlled access for domestic firms. India is also said to be in talks with US officials, Anthropic and early users of the system. Industry body NASSCOM has reportedly urged inclusion, saying Indian companies already play a major role in securing global software systems.
What Canada, the UK and the EU are saying
The Mythos issue has also reached global economic and regulatory discussions.
Canadian Finance Minister François-Philippe Champagne told BBC that Mythos had been discussed at an International Monetary Fund meeting in Washington DC. He said the matter was serious enough to attract the attention of finance ministers and described the technology as an "unknown unknown."
In the UK, Bank of England Governor Andrew Bailey told BBC that authorities were looking carefully at what the latest AI developments could mean for cyber-crime risks.
Meanwhile, the European Union has reportedly also been in talks with Anthropic regarding concerns linked to Mythos. That suggests the debate is no longer limited to the US and India but is becoming a wider international policy issue.
Why the US is limiting access
On the other side of the debate are concerns around national security and misuse. AI systems that can write code, automate technical tasks and locate vulnerabilities may be extremely useful for defence, but they can also become dangerous if used irresponsibly.
According to The Wall Street Journal, White House officials have been concerned about the broader expansion of access to Mythos before stronger safeguards are in place. The US government reportedly does not want Anthropic to release the Mythos AI model to 70 more organisations, on top of the 40 firms that have already gained access to it. Supporters of tighter control argue that once a frontier AI system spreads widely, it becomes much harder to monitor how it is being used or copied.
There is also a strategic dimension. Countries leading in advanced AI could gain major economic and cybersecurity advantages. Restricting access, even temporarily, may help preserve that lead while regulations and safety frameworks catch up.
- Ends