Nvidia's $4 trillion moment came with a quiet warning sign

The Fresno Bee

Nvidia is hurtling toward the end of 2025 after a banner year in which it redefined what it means to be a chip giant in the AI boom. Much of that success rests on its much-vaunted hardware.

But the next set of headlines isn't just about faster chips. It's about where those chips end up and whether Nvidia can hold on to the software world it rules.

The Nvidia chips China wasn't supposed to touch just showed up anyway

U.S. export restrictions were designed to keep Nvidia's most advanced AI hardware out of China.

Instead, they've pushed that demand into a new shape: harder to control, easier to scale, and already flush with cash.

A Financial Times report says Tencent is tapping Nvidia's advanced Blackwell chips through a data center outside Osaka run by Datasection, a Japanese company that pivoted from marketing to AI data centers.


Tencent will gain access to a large number of Datasection's 15,000 Nvidia Blackwell processors without the chips ever being shipped to China, according to people familiar with the matter.

The "neocloud" model suggests you shouldn't buy GPUs, but instead rent them.

And the demand is clear, as Datasection CEO Norihiko Ishihara notes.

Beneath Nvidia's milestone, new pressures are starting to surface.


A $272 million check says the business isn't a side hustle

The numbers in the FT reporting are the kind that transcend "interesting" and start looking "systemic."

Datasection agreed to pay $272 million for 5,000 Nvidia B200 chips for its Osaka facility, according to the FT. This was backed by a $406 million, three-year contract linked to a major customer relationship.

Then came the next swing: an $800 million, three-year deal for a Sydney data center that will hold tens of thousands of Nvidia's newer B300 chips. Datasection says the first 10,000 B300s will cost $521 million.
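For scale, a rough back-of-the-envelope calculation from those reported figures, a sketch rather than disclosed per-unit pricing, lands in the mid-$50,000s per chip:

```python
# Back-of-the-envelope math from the FT-reported deal figures; these are not
# disclosed unit prices, and the contracts bundle far more than the chips.
osaka_deal = 272_000_000        # $272M for 5,000 B200 chips (Osaka)
sydney_tranche = 521_000_000    # $521M for the first 10,000 B300 chips (Sydney)

print(osaka_deal / 5_000)       # 54400.0 -> roughly $54,400 implied per B200
print(sydney_tranche / 10_000)  # 52100.0 -> roughly $52,100 implied per B300
```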

This is not the move of a company that believes demand is at its peak.

Rather, if you think demand is just starting to pick up and you want to be the toll booth, this is how you build a workaround.

Washington's rulebook changed, and the timing is the whole story

The Financial Times also reported that rules drawn up under the Biden administration would have closed the loophole that makes this offshore leasing arrangement possible, but President Donald Trump scrapped those plans in May. Datasection, the FT said, promptly closed its Osaka deal after that.

President Trump's stance on AI has already affected Nvidia in other ways.

Related: A $1 trillion IPO looms, and Amazon may have just locked in early

An AI action plan rolled out in late July lifted limits on Nvidia's H200 processors and made it easier for AI data centers to obtain permits. Nvidia also briefly became the first company to reach a $4 trillion market capitalization, on July 9.

All of that means investors need to accept that Nvidia's policy risk isn't a one-way street. Rules can tighten, loosen, or reverse, and new business models will fill whatever gaps they leave.

Now comes the move that hits Nvidia where it actually lives

The "neocloud" story is about where Nvidia hardware can be used. The next story is about why Nvidia hardware is the default in the first place.

And that is where Nvidia's long-run problem gets harder.

Google is working on a project called TorchTPU that aims to make PyTorch run more easily on Google's TPUs, Reuters reports. The goal is to make developers less dependent on Nvidia's CUDA ecosystem.

Related: Oracle just made a power move Wall Street can't ignore

Google can build hardware all day long. The developer experience is the hard part, and it's where Nvidia's real edge lives.

That's why one sentence from Reuters is important not as a demand indicator, but as a positioning signal: "We are seeing massive, accelerating demand for both our TPU and GPU infrastructure," a Google Cloud spokesperson told the news agency.

That's not a business giving up on the market; it's a corporation telling customers, "You have choices, and we're going to make them easy for you."

Why Meta's involvement should make Nvidia nervous

The Reuters report also indicated that Google is working with Meta, the company behind PyTorch, to speed up the TorchTPU project.

That matters because it goes after Nvidia's edge from the inside.

Developers don't want to rewrite their stacks. They want the path of least resistance.

If PyTorch runs well on TPUs, the switching-cost penalty that protects Nvidia's margins shrinks. Not overnight, but enough that procurement teams start asking tougher questions, and enough that the biggest customers negotiate better prices.
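To make the switching-cost point concrete, here is a minimal sketch. TorchTPU itself isn't public, so the sketch leans on PyTorch's existing XLA backend (torch_xla) as a stand-in: the model and training step stay the same whichever accelerator sits underneath, and only the device selection and the final "flush" differ.

```python
# Minimal sketch, not the TorchTPU API (which isn't public): it uses PyTorch's
# existing XLA backend (torch_xla) to illustrate low switching cost -- the
# model and training step are identical, only device handling changes.
import torch
import torch.nn as nn

try:
    import torch_xla.core.xla_model as xm    # present only on TPU hosts
    device = xm.xla_device()                  # a TPU core via the XLA backend
    on_tpu = True
except ImportError:
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    on_tpu = False

model = nn.Linear(512, 10).to(device)         # identical model code either way
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 512, device=device)       # dummy batch for illustration
y = torch.randint(0, 10, (32,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()

if on_tpu:
    # Reduces gradients, steps, and flushes the lazy XLA graph.
    xm.optimizer_step(optimizer, barrier=True)
else:
    optimizer.step()                           # plain CUDA/CPU path
```

The pitch of a project like TorchTPU, as Reuters describes it, is to shrink even that remaining seam until the choice of backend is boring.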

Financial impact: The numbers show the scale of this battlefield

This is where the official filings come in handy: They tie the hype to real money.

Nvidia is making a lot of money from data centers

Nvidia's third-quarter report showed a record $57.0 billion in revenue, including a record $51.2 billion from its Data Center business.

That's the engine room that makes the whole AI trade work.

Related: US Navy bets $448M on Palantir AI to speed shipbuilding

And Nvidia's filings show that China is still a big part of the story. In its annual report for fiscal year 2025, Nvidia noted that its Data Center revenue in China rose, but remained "well below" pre-export-control levels as a share of overall Data Center revenue.

When overseas companies step in to serve Chinese demand, it's not just politics; it's also a direct channel into the part of Nvidia's business that matters most.

Google Cloud is stacking backlog like a war chest

Alphabet's 10-Q for the quarter ended Sept. 30, 2025, reported $157.7 billion in remaining performance obligations (revenue backlog), "primarily related to Google Cloud," with just over 55% expected to be recognized over the next 24 months.

That's the kind of backlog that pays for long fights, like making TPUs more appealing to PyTorch devs.

Meta's spending tells you this isn't theory

Meta expects to spend between $70 billion and $72 billion on capital expenditures in 2025, including payments on finance leases.

You don't spend that kind of money while planning to stay locked into a single vendor forever. If TorchTPU makes switching easier, Meta has every reason to keep pushing the ecosystem away from reliance on one supplier.

The market doesn't need Nvidia to "lose" for this to matter

This is the part investors often miss: Nvidia can keep making money and still come under margin pressure if:

  • Huge buyers have real choices.
  • The costs of switching software go down.
  • Offshore demand channels become more responsive to political changes.

That's how dominance gets eroded, not destroyed.

What to watch next for Nvidia

Here's what to watch to tell whether the next Nvidia headline is signal or just noise.

  • Does the "neocloud" model spread beyond Japan and Australia to other places where big data centers are being built?
  • Does TorchTPU arrive with a real developer experience (documentation, tooling, and performance on par with CUDA), or does it remain just a project?
  • Are the biggest AI buyers starting to talk publicly about multi-stack strategies, even while Nvidia remains their primary supplier?

The AI economy still depends on Nvidia.

But when Chinese companies can rent Nvidia compute from data centers in other countries and Google can make "not-Nvidia" easier to use in PyTorch, the next chapter is no longer just a demand story.

It becomes a leverage story. And those are always messier.

Related: Box office is booming in 2025 but Netflix's $82.7 billion surprise raises alarms

TheStreet

This story was originally published December 24, 2025 at 12:07 PM.