Anthropic eyes custom AI chips as it looks to power Claude and cut reliance on Big Tech hardware

Anthropic is reportedly exploring building its own AI chips to power Claude, as demand for computing surges and chip shortages continue to bite. While it's still early days, the move could eventually reduce its reliance on Nvidia and even partners like Google.

India Today

In Short

  • Anthropic is considering building its own AI chips to tackle hardware scarcity
  • It currently depends on Google, Nvidia and Amazon for AI chip supply
  • It recently signed a long-term deal with Google and Broadcom for TPU design

Anthropic is currently one of the hottest AI companies, pushing out new models at a rapid pace. However, as the company deals with rapid growth and expands its business, it is now facing a crunch of the high-performance AI chips needed to power its Claude AI family. The company currently relies on Nvidia, Google and Amazon for this hardware. But now, a new report suggests it is considering building its own chips to meet this growing demand.

According to a recent Reuters report, Anthropic is exploring the possibility of developing its own artificial intelligence (AI) chips to reduce its reliance on external suppliers and tackle the ongoing shortage of high-performance computing hardware.

Right now, Anthropic depends heavily on Amazon’s chips, primarily AWS Trainium and AWS Inferentia, as well as Google’s tensor processing units (TPUs) and Nvidia GPUs to train and run its AI software and chatbot, Claude. But with demand for AI skyrocketing in 2026, access to these chips has become one of the biggest bottlenecks in the industry. That’s pushing companies to think beyond simply buying hardware and instead build their own.

That said, Anthropic isn’t committing just yet. The report suggests the idea is still in its early stages, with no final chip design or dedicated team in place. In simple terms, the company is testing the waters rather than diving in.

Meanwhile, Anthropic recently entered into a long-term partnership with Google and Broadcom to work on tensor processing units (TPUs). The deal is part of a much larger push: a reported $50 billion investment to expand computing infrastructure in the United States.

So, on one hand, Anthropic is strengthening its relationship with Google. On the other, it’s exploring ways to become less dependent on it. The reason is simple: the ongoing chip shortage and the increasing competition for computing resources are pushing companies to secure more control over their infrastructure.

So what does it mean if Anthropic builds its own chips? For Google, it could mean that one of its major AI customers may eventually rely less on its TPU ecosystem. For Nvidia, which dominates the AI GPU market, it signals yet another major player looking to reduce its dependence on the company's hardware. Notably, Anthropic isn't the first to consider building its own chips to power AI systems. Amazon, Google and Microsoft have already gone down this path. Elon Musk has also announced TeraFab, a new manufacturing facility backed by Tesla, SpaceX and xAI, which aims to build 2nm chips.

Of course, designing AI chips will not be an easy road for Anthropic. It can cost upwards of $500 million and requires hiring highly specialised engineers, many of whom are already working at companies like Apple, Nvidia and Google. Beyond talent, there’s also the challenge of manufacturing, testing and scaling production, all of which take years to get right.

At this point, there are no details on what Anthropic's chip might look like or how it would differ from existing solutions. But the company clearly intends to gain more control over performance, cost and the availability of computing power.

- Ends