While NVIDIA has dominated the AI market with its GPU offerings, the company continues to face rising competition from potential challengers. A growing threat comes from application-specific integrated circuits (ASICs).
Bondcap, a US-based venture capital firm, said in a report that demand for NVIDIA has outpaced its supply, and companies are also seeking AI-specific hardware for efficient results in training and deploying AI models.
“Unlike GPUs, which are designed to support a wide range of workloads, ASICs are purpose-built to handle specific computational tasks with maximum efficiency. In AI, that means optimised silicon for matrix multiplication, token generation, and inference acceleration.”
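To make that comparison concrete, the dominant computation in both training and inference is dense matrix multiplication. The naive kernel below is an illustrative sketch only (not any vendor's actual implementation), written as it would be for a GPU in CUDA; an AI ASIC instead hard-wires the same multiply-accumulate pattern into fixed-function silicon.

    // Illustrative only: a naive NxN matrix-multiply kernel in CUDA.
    // GPUs run this as general-purpose code; AI ASICs bake the same
    // multiply-accumulate pattern directly into dedicated silicon.
    __global__ void matmul(const float* A, const float* B, float* C, int N) {
        int row = blockIdx.y * blockDim.y + threadIdx.y;   // output row
        int col = blockIdx.x * blockDim.x + threadIdx.x;   // output column
        if (row < N && col < N) {
            float acc = 0.0f;
            for (int k = 0; k < N; ++k)
                acc += A[row * N + k] * B[k * N + col];    // dot product
            C[row * N + col] = acc;
        }
    }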
Recently, an internal memo from global finance giant JPMorgan showed a rise in the firm’s forecast for the total addressable market of ASICs, from $25 billion to $30 billion.

Jukan Choi, a semiconductor market analyst, shared the memo on X, which revealed a 40-50% CAGR in the ASIC market that caters to compute acceleration, serving customers like Google, Microsoft, Meta, Amazon, and other AI-focused companies.
Moreover, a report from Taiwanese media outlet United Daily News noted that the ASIC supply chain is set to grow faster than NVIDIA's in 2026, citing another report from Macquarie Securities.
This is due to rising demand for chip-on-wafer-on-substrate (CoWoS) packaging. The technology packs multiple chips together for better performance for customers like AWS, Google, and Meta. These companies are also set to develop chips in-house to reduce their reliance on NVIDIA.
AWS and Google are Turning to In-House Chips Instead of NVIDIA
For instance, Google has established itself well in the ASIC market with its tensor processing units (TPUs), and it recently launched its sixth generation.
These TPUs have played an instrumental role in building and serving its Gemini family of AI models to users. Google released the technical report for the Gemini 2.5 models, revealing that the model was trained on a massive cluster of Google’s fifth-generation TPUs.
Moreover, Amazon Web Services (AWS) senior director for customer and product engineering, Gadi Hutt, told CNBC that the company wants to reduce AI training costs and offer an alternative to NVIDIA’s GPUs.
The CNBC report added that Project Rainier, AWS’s initiative to build an AI supercomputer, will now comprise half a million of the company’s Trainium2 chips. This order would have traditionally gone to NVIDIA.
Hutt also said that while NVIDIA’s Blackwell offers better performance than Trainium2, the latter provides better cost efficiency. The company also claims that Trainium2 offers a 30-40% better price-performance ratio than the current generation of GPUs.
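As a rough illustration of what such a ratio means (the figures here are hypothetical, not AWS’s): if a GPU delivers 1.2x the training throughput of a Trainium2 chip but costs 1.6x as much, then Trainium2’s performance per dollar is 1.6 / 1.2 ≈ 1.33, i.e. roughly 33% better price-performance, which is how a slower chip can still win on cost efficiency.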
In March, Reuters reported that Meta is testing its first in-house chip for training its AI models to reduce reliance on NVIDIA. The report stated that the company is working with Taiwan Semiconductor Manufacturing Company (TSMC) to produce the chip.
The chip is part of the Meta Training and Inference Accelerator (MTIA) series, and the company plans to start using it to train its models by 2026.
Moreover, Commercial Times, a Taiwanese media outlet, reported on Wednesday that OpenAI is building its own training chip, which is expected to be launched in the fourth quarter of this year.
Furthermore, companies like Marvell and Broadcom are among the leading players in the custom ASIC market, assisting firms in building AI infrastructure.
Marvell has partnered with AWS to develop custom chips, while Broadcom is said to be assisting Meta and OpenAI with their upcoming hardware.

Source: x.com/Jukanlosreve
Besides, companies like Cerebras, SambaNova, and Groq have developed ASIC hardware systems that significantly improve the speed at which AI models generate outputs.
Recently, Cerebras announced that running Meta’s newest large model on its hardware outperformed NVIDIA’s Blackwell systems and set a new record previously held by the latter.
‘I Believe Most ASIC Projects Will Be Cancelled’
Though large companies like Google and AWS have achieved success with their ASIC projects, new entrants such as Cerebras and Groq will keep competing with NVIDIA, alongside numerous startups that have emerged to challenge them.
Citing a former Microsoft employee with expertise in cloud computing technology, AlphaSense, a market intelligence firm, said in a post on X that third-party ASIC companies will face a “steep uphill battle” against NVIDIA due to the lack of a mature software stack like CUDA.
“These companies often have to directly assist clients in adapting models to their chips, making scalability difficult,” AlphaSense noted.
For context, CUDA is NVIDIA’s software stack, which lets developers program the company’s GPUs to their needs.
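The sketch below, a minimal and purely illustrative CUDA program (the kernel and values are made up for this example), shows what that developer experience looks like: write a kernel, launch it across thousands of GPU threads, and let the mature toolchain handle the rest. It is this layer, plus the libraries built on top of it, that rival ASIC vendors have to rebuild from scratch.

    // A minimal, illustrative CUDA program: define a kernel and launch it
    // over many GPU threads. Names and values are hypothetical examples.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void add(const float* a, const float* b, float* out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n) out[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        float *a, *b, *out;
        cudaMallocManaged(&a, n * sizeof(float));       // unified CPU/GPU memory
        cudaMallocManaged(&b, n * sizeof(float));
        cudaMallocManaged(&out, n * sizeof(float));
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        add<<<(n + 255) / 256, 256>>>(a, b, out, n);    // launch on the GPU
        cudaDeviceSynchronize();                        // wait for completion

        printf("out[0] = %.1f\n", out[0]);              // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(out);
        return 0;
    }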
Moreover, even Groq CEO Jonathan Ross stated that NVIDIA will continue to maintain its position in the market, despite his company offering hardware systems that outperform NVIDIA’s GPUs in inferencing, which is the process of extracting an output from an AI model.
“Training should be done on GPUs,” Ross said in an interview earlier this year. “I think NVIDIA will sell every single GPU they make for training.”
He also added that inference-specific hardware will work hand-in-hand with NVIDIA’s GPUs. Ross said that if Groq deployed large volumes of lower-cost inference chips, the demand for training would increase.
Thus, he said it works best for developers to train their models on NVIDIA GPUs and then use Groq’s hardware for inference. “They [NVIDIA] don’t offer fast tokens and cheap tokens. It’s a very different product, but what they do very well is training, and they do it better than anyone else,” he said.
NVIDIA CEO Jensen Huang has repeatedly agreed that the company’s biggest challenge is high-speed inferencing, and recently called it the “ultimate extreme computing problem”.
However, for obvious reasons, he does not seem to be bullish on these ASIC projects. In a Q&A session at NVIDIA’s GTC 2025 event, Huang said, “I believe most of them (ASIC projects) will get cancelled.”
Listen to Jensen explaining why most ASIC projects are likely to be cancelled and the promising future of Nvidia's NVLink ecosystem. pic.twitter.com/hmWuA2e3Fk
— The AI Investor (@The_AI_Investor) June 11, 2025
He stated that most will not surpass the NVIDIA hardware available to customers, particularly considering the pace at which the company advances. Therefore, these ASIC projects must catch up with NVIDIA to offer anything remotely superior.