NVIDIA and OpenAI partnership highlights performance gains and efficiency improvements. (Photo: NVIDIA)

Sam Altman gives Codex access to all NVIDIA employees; Jensen Huang says welcome to the age of AI

Anthropic's rapid rise with Claude Code challenges OpenAI, which is advancing Codex with GPT-5.5 and partnering with NVIDIA to boost performance and efficiency.

India Today

In Short

  • Anthropic reaches massive valuation driven by Claude Code success
  • OpenAI pushes Codex evolution with GPT-5.5 integration
  • NVIDIA partnership highlights performance gains and efficiency improvements

Anthropic has risen rapidly, reaching a valuation of $350 billion within five years. Much of that growth has been driven by Claude Code, an AI coding tool that can write hundreds of lines of code on its own. But, as they say, one's gain is another's loss. OpenAI is feeling the heat of Anthropic's growth, and it is not giving up. Over the past several months, OpenAI has evolved Codex from a simple code-completion tool into a comprehensive agentic software engineering ecosystem, and it has now rolled out its latest GPT-5.5 model on Codex. OpenAI has also experimented with Codex by deploying it across NVIDIA's own infrastructure.

"We tried a new thing with NVIDIA to roll out Codex across a whole company, and it was awesome to see it work," Sam Altman wrote in an X post.

Sam Altman's X post.

NVIDIA partnership shows measurable gains

NVIDIA, in a blog post, said that its engineers have had access to GPT-5.5 through the Codex app for a few weeks, and the gains are measurable. Codex, based on GPT-5.5, is running on NVIDIA GB200 NVL72 rack-scale systems.

“Over 10,000 NVIDIANs—across engineering, product, legal, marketing, finance, sales, HR, operations, and developer programs—are already using GPT-5.5-powered Codex to achieve, in their words, ‘mind-blowing’ and ‘life-changing’ results,” NVIDIA wrote in a blog post.

Efficiency battle and token advantage

While OpenAI did not reveal why it chose NVIDIA for Codex, the move could be related to token efficiency and to presenting itself as a better alternative to Anthropic. Last month, Anthropic users complained that they were exhausting their Claude token limits much faster than before. Tokens are the units in which AI models measure text and usage: the more intensive the work, the more tokens a task consumes.
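As a rough illustration of why token counts matter for usage limits, a common rule of thumb is that English text averages about four characters per token. This heuristic is only a sketch; real models use learned subword tokenizers, not this approximation, and the four-characters figure is an assumption, not something any of these companies has stated here.

```python
# Rough token-count estimate for a piece of text. Real AI models use
# learned subword tokenizers (e.g. BPE); the ~4-characters-per-token
# figure is just a widely cited rule of thumb for English text.

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Return an approximate token count for `text`."""
    return max(1, round(len(text) / chars_per_token))

prompt = "Refactor this function to handle empty input gracefully."
print(estimate_tokens(prompt))  # -> 14 (56 characters / 4)
```

A model that completes the same coding task with fewer tokens therefore costs less to run and eats into a user's quota more slowly, which is the efficiency angle both companies are emphasising.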

OpenAI says of GPT-5.5: “It also uses significantly fewer tokens to complete the same Codex tasks, making it more efficient as well as more capable.”
NVIDIA, in its blog post, said the system running Codex delivers 35x lower cost per million tokens and 50x higher token output per second per megawatt.

“Served on GB200 NVL72, which is capable of delivering 35x lower cost per million tokens and 50x higher token output per second per megawatt compared with prior-generation systems—economics that make frontier-model inference viable at enterprise scale,” NVIDIA wrote.

“Debugging cycles that once stretched across days are closing in hours. Experimentation that previously required weeks is turning into overnight progress in complex, multi-file codebases,” it added.

Industry backing and future outlook

NVIDIA is also showing confidence in OpenAI's Codex: in a company-wide email, CEO Jensen Huang urged everyone to use the tool, writing, “Let’s jump to lightspeed. Welcome to the age of AI.”
