AMD and OpenAI Unveil Massive Chip Deal for AI Inference October 8, 2025 by Alex Woodie
Another week, another massive investment in chips by an AI firm. This week’s edition features OpenAI committing to buying billions of dollars worth of AI accelerators from AMD, one of the semiconductor companies aiming to knock Nvidia off its GPU pedestal.
The deal unveiled Monday calls for OpenAI to buy up to 6 gigawatts worth of AMD AI accelerators over the next five years, with the first gigawatt consisting of AMD’s next-generation Instinct GPU, the MI450, delivered in the second half of 2026.
In exchange for the chips, OpenAI will get a warrant for up to 160 million shares of AMD common stock, representing roughly 10% of AMD's outstanding shares. AMD's stock opened up 33% on the news.
OpenAI requires massive numbers of processors to train its large language models (LLMs) and to run them for inference, but has struggled to obtain enough. Nvidia remains the preferred supplier for large-scale AI training with its Blackwell line of GPUs and Grace Blackwell superchips, but it’s been unable to meet insatiable market demand.
OpenAI has sourced AI accelerators wherever it can get them. It uses AMD's MI300X and MI350X series chips, and it recently entered into a contract with XPU-maker Broadcom to design its own custom AI accelerators, in a deal that is reportedly worth $10 billion.
Dr. Lisa Su, Chair and CEO of AMD
The deal with AMD, which CEO Lisa Su said Sunday could generate tens of billions of dollars of revenue for AMD by 2030, demonstrates that the market is receptive to names other than Nvidia when it comes to AI training and inference. In fact, AMD’s MI450 stacks up fairly nicely against Nvidia’s best.
The MI450 is expected to feature up to 432GB of HBM4 memory and deliver upwards of 40 PFLOPS of FP4 compute capacity when it ships in 2H26. That compares favorably with Nvidia's next-generation Vera Rubin superchip, which will feature 288GB of HBM4 per GPU and deliver 50 PFLOPS of dense FP4 compute capacity when it ships next year.
Nvidia has benefited enormously from its dominant position in AI accelerators. Its revenues have increased by nearly 7x over the past four years, to $130.5 billion for fiscal 2025, and it has become the most valuable company in the world with a $4.5 trillion market cap.
Nvidia has searched for ways to invest its capital, and, not surprisingly, one of them is investing in AI companies so they can buy more of its chips. Two weeks ago, Nvidia announced a plan to invest $100 billion in OpenAI in a deal that involves OpenAI eventually buying 10 gigawatts worth of Nvidia's chips, starting with the delivery of next-gen Vera Rubin GPUs in the second half of 2026.
The AI boom has kicked off a massive spending spree to build new AI data centers. Cloud giants like Meta, Google, Microsoft, and OpenAI are spending trillions of dollars to build hundreds of new data centers around the country and outfit them with systems, processors, storage, and networking.
The focus in AI has shifted from model training to inference workloads. OpenAI, in particular, is looking to offer inference services to customers as a way to turn a profit. Last year, CEO Sam Altman told investors the company was on pace to lose $44 billion through 2029, when it finally expects to bring in more than it spends.
This article first appeared on our sister publication, HPCwire.
About the author: Alex Woodie
Alex Woodie has written about IT as a technology journalist for more than a decade. He brings extensive experience from the IBM midrange marketplace, including topics such as servers, ERP applications, programming, databases, security, high availability, storage, business intelligence, cloud, and mobile enablement. He resides in the San Diego area.