OpenAI admits new models likely to pose 'high' cybersecurity risk

Better models also mean higher risk

· TechRadar

News By Sead Fadilpašić published 11 December 2025



  • OpenAI warns future LLMs could aid zero‑day development or advanced cyber‑espionage
  • Company is investing in defensive tooling, access controls, and a tiered cybersecurity program
  • New Frontier Risk Council will guide safeguards and responsible capability across frontier models

Future OpenAI large language models (LLMs) could pose higher cybersecurity risks: in theory, they could develop working zero-day remote exploits against well-defended systems, or meaningfully assist with complex and stealthy cyber-espionage campaigns.

This is according to OpenAI itself, which said in a recent blog post that cyber capabilities in its AI models are “advancing rapidly”.

While this might sound sinister, OpenAI is actually viewing this from a positive perspective, saying that the advancements also bring “meaningful benefits for cyberdefense”.

Crashing the browser

To prepare in advance for future models that might be abused this way, OpenAI said it is “investing in strengthening models for defensive cybersecurity tasks and creating tools that enable defenders to more easily perform workflows such as auditing code and patching vulnerabilities”.

The best way to go about it, as per the blog, is a combination of access controls, infrastructure hardening, egress controls, and monitoring.

Furthermore, OpenAI announced that it would soon introduce a tiered program giving users and customers working on cybersecurity tasks access to enhanced capabilities.

Finally, the Microsoft-backed AI giant said it plans to establish an advisory group called the Frontier Risk Council. The group will consist of seasoned cybersecurity experts and practitioners and, after an initial focus on cybersecurity, is expected to expand its remit to other risk areas.
