AI labs should pass safety review to get US government contracts, group says

May 11: The Trump administration should screen cutting-edge artificial intelligence models for security threats before they are publicly released and withhold lucrative government contracts from those that fail review, an advocacy group told U.S. officials on Monday.

The White House is grappling with the implications of Anthropic's Mythos, which could make complex cyberattacks easier and quicker to execute, posing national security risks. 

Americans for Responsible Innovation urged the Trump administration to develop methods to vet upcoming frontier models from large developers for cyberattack and weapons development capabilities.

Companies should have to pass the review to be eligible for government contracts, the group said in a letter to administration officials.

The U.S. Center for AI Standards and Innovation already reviews some AI models through voluntary agreements with OpenAI, Anthropic, and, more recently, Google, Microsoft and xAI.

CAISI should take the lead on developing mandatory requirements, and Congress should create a permanent office within the U.S. Department of Commerce to enforce them, the group said.

The proposed requirements would apply to companies that spend $100 million or more a year on compute to train frontier models, or that make at least $500 million in revenue annually from AI products and services.

California enacted a similar threshold for safety reporting requirements last year.

Source: Reuters