IndiaAI’s 40,000 GPUs Are Still Not Enough to Train Big AI Models

In less than a year, the IndiaAI Mission has turned into one of the largest GPU programmes. More than 34,000 GPUs are already empanelled, over three times the original target of 10,000. Another 6,000 are in the pipeline, bringing the total to nearly 40,000 GPUs.

The government claims this proves that India has the hardware capacity to support its AI ambitions.

Union minister Ashwini Vaishnaw said the programme has achieved scale. “Against a 10,000 target, 34,000 GPUs are already empanelled, and another 6,000 are in process. That means 40,000 GPUs we are able to provide to our development community,” he told Moneycontrol.

The mission has ₹10,000 crore in backing and has already disbursed ₹111 crore in GPU subsidies. Through the IndiaAI compute portal, startups, universities, and researchers can access GPUs at subsidised rates, sometimes under a dollar an hour.

Competition has also driven costs down by more than 10% between the first and second tender rounds.

Counting the Chips

Sarvam AI is the biggest beneficiary so far. The Bengaluru startup landed 4,096 NVIDIA H100s through Yotta Data Services and nearly ₹99 crore in subsidies. Sarvam is expected to ship India’s first large language model by early next year, though the launch has been delayed from its original six-month target.

Other selected startups include Gnani.ai, GAN.ai and Soket. These companies are building foundational models using GPUs provided by empanelled cloud and datacentre operators such as Jio, Yotta, CtrlS, Tata Communications, NxtGen, Netmagic (NTT Global), Cyfuture India, Sify Digital Services, Vensysco Technologies, Locuz Enterprise Solutions and Ishan Infotech.

Cyfuture India will provide GPUs, including NVIDIA H100, L40S, A100, AMD MI300X, MI325X, and Intel’s Gaudi 2 and Gaudi 3. Ishan Infotech will offer H100, H200, and L4 units.

Netmagic will supply H100, H200, L40S, and L4 GPUs, and AMD MI300X. Sify Digital Services will provide H100, H200, and L4. Locuz Enterprise Solutions will offer the H200. Vensysco Technologies will provide H100 and A100. Yotta Data Services will supply the NVIDIA B200.

Sunil Gupta, CEO of Yotta, told AIM that the company already has 8,000 H100s and 1,000 L40s. “Most large-scale AI model development in India today is happening on Yotta’s infrastructure,” Gupta said. He added that Yotta has ordered 8,000 NVIDIA B200 Blackwells, which are expected to go live between December and January.

Meanwhile, RackBank, a full-stack sovereign AI infrastructure platform, has opened a ₹1,000 crore Raipur facility capable of housing 100,000 GPUs. NTT DATA and Neysa are building a ₹10,500 crore Hyderabad cluster with space for 25,000 GPUs. CtrlS is adding another ₹4,000 crore facility in Chennai.

IndiaAI is also diversifying beyond NVIDIA. Tender rounds now include AMD MI300s, Intel Gaudi accelerators, AWS Trainium and Inferentia chips, and Google’s Trillium TPUs.

The third tender brought in 3,850 units, including 1,050 Trillium TPUs — India’s first allocation of purpose-built AI silicon outside GPUs.

Vaishnaw has warned against dependence on foreign models and stressed that India must build its own models. “If we don’t build our own models, others will and we may not get access,” he said. The government is also preparing to develop an indigenous GPU within three to five years.

The Training Challenge

Despite the GPU allocations, projects are only just starting to move forward.

Gupta said Sarvam was initially allocated about 1,600 GPUs before receiving the remaining balance. He confirmed that Soket AI has recently received 1,536 GPUs from Yotta.

Soket, led by Abhishek Upperwal, is planning a 120-billion-parameter Indic language model. It will start with a 7-billion-parameter model in six months, scale to 30 billion, and then to 120 billion within a year.

Upperwal told AIM the model will be open source and optimised for defence, healthcare and education. Soket has already released a 1-billion-parameter model.

Gnani.ai is another key beneficiary.

E2E Networks recently secured a ₹177 crore order to supply GPU resources to Gnani.ai. The deal covers 1.3 crore GPU hours over a year, with H100 and H200 units allocated.
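Taken at face value, the deal’s headline numbers imply a blended rate of roughly ₹136 per GPU hour. A back-of-the-envelope check (the ₹84-per-dollar exchange rate is an assumption for illustration, not from the deal itself):

```python
# Rough check on the E2E Networks / Gnani.ai deal:
# Rs 177 crore for 1.3 crore GPU hours over one year.
CRORE = 10_000_000

deal_value_inr = 177 * CRORE
gpu_hours = 1.3 * CRORE
inr_per_usd = 84  # assumed exchange rate, for illustration only

rate_inr = deal_value_inr / gpu_hours  # blended Rs per GPU hour
rate_usd = rate_inr / inr_per_usd      # approximate USD per GPU hour

# How much capacity does 1.3 crore hours represent if spread over a year?
hours_per_year = 365 * 24
avg_concurrent_gpus = gpu_hours / hours_per_year

print(f"Blended rate: Rs {rate_inr:.0f} (~${rate_usd:.2f}) per GPU hour")
print(f"Roughly {avg_concurrent_gpus:.0f} GPUs running continuously for a year")
```

That works out to about $1.60 per GPU hour blended across H100 and H200 capacity, and the equivalent of roughly 1,500 GPUs running around the clock.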

E2E has also acquired Jarvis Labs, a GPU cloud startup based in Coimbatore. Jarvis Labs founder Vishnu Subramanian will join E2E, bringing the startup’s IP, hardware and customers with him, strengthening E2E’s position as Gnani builds a 14-billion-parameter voice model covering more than 40 languages.

Do We Have Enough?

Some industry leaders remain cautious. A S Rajgopal, CEO of NxtGen, said India needs at least 100,000 GPUs. “I think we currently have only about 22,000 GPUs all across,” Rajgopal told AIM.

He expects NVIDIA’s Blackwell chips to narrow the gap, but believes the timing is critical.

“It is important that we actually step in early so that there’s time to ramp this business and serve it before competition catches up,” he said. Rajgopal added that NxtGen’s strategy is not simply GPU rentals but building out enterprise inference.

“The first difference is that we have a lot of trust with our customers. We don’t lie,” he said.

But Gupta from Yotta said the demand is strong. “IndiaAI is plush with demand for making more LLMs. Not just horizontal LLMs but also vertical-specific models, and I think that is what the country needs,” he said.

IndiaAI has pushed GPU access at scale and lowered costs. The next challenge is whether startups like Sarvam, Gnani.ai, GAN.ai or Soket can translate the hardware into working models.

