A view shows a Microsoft logo at Microsoft offices in Issy-les-Moulineaux near Paris, France, Jan 9, 2025. (Photo: REUTERS/Gonzalo Fuentes)

Microsoft, Google and xAI to give US government early access to AI models for security checks


WASHINGTON: Microsoft, Google and Elon Musk’s xAI agreed to give the US government early access to new artificial intelligence models for national security testing, as US officials grow alarmed by the hacking capabilities of Anthropic’s newly unveiled Mythos.

The Centre for AI Standards and Innovation at the Department of Commerce said on Tuesday (May 5) that the agreement would allow it to evaluate the models before deployment and conduct research to assess their capabilities and security risks. The agreement fulfills a pledge the Trump administration made in July 2025 to partner with technology companies to vet their AI models for “national security risks”.

Microsoft will work with US government scientists to test AI systems “in ways that probe unexpected behaviours,” the company said in a statement, adding that the two sides will develop shared datasets and workflows for testing its models. Microsoft has signed a similar agreement with the UK’s AI Security Institute, according to the statement.

Concern is growing in Washington over the national security risks posed by powerful AI systems. By securing early access to frontier models, US officials are aiming to identify threats ranging from cyberattacks to military misuse before the tools are widely deployed.


Advanced AI systems, including Anthropic's Mythos, have in recent weeks caused a stir globally, including among US officials and corporate America, over their ability to supercharge hackers.

"Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," CAISI Director Chris Fall said in a statement.

The move builds on previous agreements with OpenAI and Anthropic, established in 2024 under the Biden administration when CAISI was known as the US Artificial Intelligence Safety Institute. Under former President Joe Biden, the institute focused on developing AI tests, definitions and voluntary safety standards. It was led by Biden tech adviser Elizabeth Kelly, who has since joined Anthropic, according to her LinkedIn profile.

CAISI, which serves as the government's main hub for AI model testing, said it had already completed more than 40 evaluations, including on cutting-edge models not yet available to the public. 

Developers frequently hand over versions of their models with safety guardrails stripped back so the centre can probe for national security risks, the agency said.

xAI did not immediately respond to a request for comment. Google declined to comment.

Last week, the Pentagon said it had reached agreements with seven AI companies to deploy their advanced capabilities on the Defense Department's classified networks as it seeks to broaden the range of AI providers working across the military.

The Pentagon announcement did not include Anthropic, which has been embroiled in a dispute with the Pentagon over guardrails on the military's use of its AI tools. 

Source: Reuters/fs
