Broadcom Reveals $21 Billion Google TPU Order from Anthropic

Meet Silicon Valley's Generative AI Darling

Broadcom disclosed during its Q4 2025 earnings call that it received a $10 billion order in the previous quarter to supply Google’s latest Tensor Processing Units (TPUs) to Anthropic.

“In Q4, we received an additional $11 billion order from this same customer for delivery in late 2026,” said Hock Tan, CEO of Broadcom. This brings Anthropic’s total TPU orders to $21 billion.

Furthermore, the company revealed a $73 billion backlog of AI product orders, which are expected to be shipped over the next six quarters (18 months).

TPUs are specialised accelerators developed by Google for AI workloads. Now in their seventh generation, TPUs are available to customers through Google Cloud and power many of Google’s internal systems, including training and deployment of the Gemini family of models.
Google designs the TPU architecture, while Broadcom converts those designs into manufacturable silicon and handles volume production. The relationship mirrors Google’s long-standing strategy of controlling key AI hardware design while relying on semiconductor partners for fabrication expertise.

Anthropic, a long-term user of TPUs, recently announced plans to significantly scale its infrastructure. The company intends to deploy one million TPUs, backed by more than one gigawatt of new compute capacity coming online in 2026. This represents one of the largest dedicated AI compute buildouts in the industry.

Several other companies have also confirmed their use of TPUs, including Meta, Cohere, Apple and Ilya Sutskever’s new startup, Safe Superintelligence (SSI).

A report from The Information indicates that Meta is evaluating the deployment of TPUs in its data centres starting in 2027.

The growing adoption of TPUs stems from their power efficiency and tight optimisation for AI training and inference, creating increasing competitive pressure on NVIDIA’s GPU dominance.

Broadcom said it now has five TPU/XPU (custom AI accelerator) customers—with Google and Anthropic named on the call. Reports and industry analysis indicate that Meta and ByteDance are also among its custom AI chip customers, though Broadcom has not publicly confirmed the full roster.


According to new analysis from SemiAnalysis, TPU v7 (Ironwood) has roughly 10% lower peak floating-point throughput (FLOPS) and memory bandwidth than NVIDIA’s GB200 platform, yet still delivers a stronger performance-per-total-cost-of-ownership (TCO) profile.

SemiAnalysis estimates that Google’s internal cost to deploy Ironwood is about 44% lower than deploying an equivalent NVIDIA system.

Even when priced for external customers, TPU v7 offers an estimated 30% lower TCO than NVIDIA’s GB200, and roughly 41% lower TCO than the upcoming GB300.

SemiAnalysis notes that if Anthropic achieves around 40% model FLOPs utilisation (MFU) on TPUs—a realistic figure given the company’s compiler and systems expertise—the effective training cost per FLOP could be 50-60% lower than what GB300-class GPU clusters are expected to deliver.
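The arithmetic behind that claim can be sketched in a few lines: effective cost per FLOP is total cost of ownership divided by the FLOPs actually delivered (peak throughput times MFU). The figures below are illustrative assumptions chosen only to match the ratios reported above (a 41% TCO gap, ~10% peak-FLOPS gap, and a hypothetical 30% vs 40% MFU split), not published vendor numbers.

```python
# Illustrative sketch: how TCO and MFU combine into effective cost per FLOP.
# All numeric inputs are assumptions for demonstration, not vendor figures.

def cost_per_effective_flop(tco_per_hour: float,
                            peak_flops: float,
                            mfu: float) -> float:
    """Dollars per usefully delivered FLOP.

    tco_per_hour : total cost of ownership per accelerator-hour (normalised $)
    peak_flops   : peak throughput in FLOP/s
    mfu          : model FLOPs utilisation, in [0, 1]
    """
    delivered_flops_per_hour = peak_flops * mfu * 3600  # FLOPs achieved per hour
    return tco_per_hour / delivered_flops_per_hour

# Hypothetical normalised inputs: GPU cluster TCO = 1.0 with ~10% more
# peak FLOPS; TPU TCO = 0.59 (41% lower, per the GB300 comparison above).
gpu_cost = cost_per_effective_flop(tco_per_hour=1.00, peak_flops=1.10e15, mfu=0.30)
tpu_cost = cost_per_effective_flop(tco_per_hour=0.59, peak_flops=1.00e15, mfu=0.40)

saving = 1 - tpu_cost / gpu_cost
print(f"Effective training cost per FLOP is {saving:.0%} lower")
```

Under these assumed inputs the saving lands at roughly 51%, inside the 50-60% range SemiAnalysis cites; the point is that a modest MFU edge compounds with the TCO gap.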

The post Broadcom Reveals $21 Billion Google TPU Order from Anthropic appeared first on Analytics India Magazine.

