Anthropic may spend $200 billion on Google Cloud, showing how the AI boom is fueling Big Tech
Anthropic's massive Google Cloud deal highlights how AI demand is reshaping cloud computing, with OpenAI and Anthropic driving billions in infrastructure spending.
by Om Gupta · India Today

In Short
- Anthropic commits $200 billion to Google Cloud infrastructure deal
- AI firms dominate cloud backlogs, driving massive future revenues
- Big Tech investments and demand fuel rapid AI industry expansion
Anthropic has reportedly committed to spending $200 billion on Google Cloud over five years as part of its recent agreement with the company. The AI firm plans to use a huge amount of Google’s cloud infrastructure—like servers and computing power—to run and train its AI systems. According to The Information, the deal is so large that it could make up over 40 per cent of Google’s “revenue backlog”. For the unversed, a backlog here means future revenue Google is already contracted to receive from customers, even if the services haven’t been delivered yet.
Anthropic and OpenAI now make up more than half of the $2 trillion in backlogs at major cloud companies like Amazon Web Services, Microsoft Azure, and Google Cloud Platform, as per the report. This shows that AI demand is now the main driver of growth for cloud computing, with a large chunk of future business tied to just a few major AI players.
Big tech bets fuel the AI boom
Google’s parent company, Alphabet, is also investing up to $40 billion in Anthropic, deepening its partnership with the artificial intelligence startup, which is also its rival in the global AI race. These expensive, interconnected deals are fueling the AI boom. Companies like NVIDIA and Google are not just selling AI chips but also investing directly in AI companies like OpenAI and Anthropic. At the same time, those AI companies spend huge amounts of money buying hardware (like GPUs) and cloud services to build and run their models.
In simple terms, these big tech giants invested early in AI startups, betting that as the startups grew, they would need massive amounts of computing power—servers, storage, and infrastructure—which the giants could then sell to them.
That “gamble” is now paying off. AI companies such as OpenAI and Anthropic are expected to spend enormous amounts on these services. Earlier estimates suggest that by 2026, OpenAI alone could spend around $45 billion on servers, while Anthropic could spend about $20 billion.
Scaling compute through strategic deals
Anthropic also signed a separate agreement in April involving Broadcom, which works with Google on specialised AI chips. That deal covers large-scale computing capacity using Tensor Processing Units (TPUs), Google’s custom chips designed for AI workloads. This multi-gigawatt TPU capacity is expected to start coming online in 2027.
Demand for Anthropic’s Claude models has been very strong. Because more people and businesses are using these models, Anthropic needs much more computing power. That’s why it has been signing a series of large deals to secure additional infrastructure capacity.
A multi-provider hardware strategy
Instead of relying on just one company, Anthropic says it trains and runs its Claude models on a mix of AI hardware from multiple providers: Trainium chips from Amazon Web Services, Tensor Processing Units from Google, and GPUs from NVIDIA.