From Shortages to Scale, io.net’s Approach to Rewriting AI Compute Access

The world is in the middle of an AI compute crunch. Demand for GPUs has exploded far beyond the pace at which traditional cloud providers can build, provision, or price capacity. As startups scramble, enterprises stall projects, and researchers wait in line for hardware, a new model for scaling compute is emerging, one that aggregates unused or underutilised GPU supply and delivers it as a flexible, transparent marketplace.

The global race for AI compute has pushed enterprises, researchers and startups to look beyond traditional hyperscalers. In an interaction with AIM, io.net CEO Gaurav Sharma said he believes decentralised GPU networks may be the only model that can scale fast enough, and India may end up playing a far larger role in this sector than anyone expects.

Sharma’s journey spans two decades across Linux kernel engineering, AWS, Agoda, eBay, and Binance. In recent years, he noticed the shortage of compute in the AI space. That bottleneck, he said, was directly hurting builders. The answer was to create a platform that could return control, affordability and transparency to developers.

Decentralisation, But Not As India Knows It

Sharma argued that decentralisation is widely misunderstood, especially in markets where centralised cloud still dominates.

“When people think of decentralisation, they’re not very clear what we are talking about… decentralisation can mean different things to different people,” he said.

io.net’s network pools GPUs from individual users, data centres and global contributors, then routes workloads based on availability, stability and price. The pitch to customers is not ideological. “We don’t even talk in terms of decentralisation,” Sharma noted. Companies simply care about reliability and cost, not Web3 philosophy.

He compared the model to MakeMyTrip for GPUs, “aggregating supply from multiple suppliers… whatever they need, we give them.”

In his view, cloud incumbents have little incentive to fix today’s pricing and capacity barriers. A decentralised marketplace can fill that gap faster than building new data centres, which take months just to procure hardware.

India is not yet a decentralisation-first market, but paradoxically, that makes it more important to io.net’s plan.

Sharma highlighted three key advantages for training AI models in India, especially for projects prioritising speed and cost. First, a deep reservoir of technical talent with GPU configuration expertise that is often scarce in the US. Second, cost-efficient operations, as favourable labour and energy economics keep GPU running costs lower than in Western markets.

Third, faster scaling, since India's large engineering talent pool enables rapid provisioning for global AI workloads.

Making the Marketplace Work, Tackling Challenges

io.net is tackling the immediate AI compute shortage by adopting a rapid Web3-based scaling strategy, bypassing the slow, multi-year expansion model typical of traditional networks.

Instead of relying on a large conventional funding round to incentivise data centre integration, the company used a $40 million Web3 raise and tokenomics. This approach served the dual purpose of quickly building the necessary network infrastructure to overcome the “cold start problem” and fostering community engagement through airdrops and continuous product testing with minimal capital expenditure.

However, this accelerated scaling introduces two unavoidable operational challenges. The first is the continuous maintenance of quality and data accuracy, since marketplace platforms are inherently vulnerable to degradation over time. Like any marketplace, io.net must continually refresh its network data to keep its listed GPU and data centre inventory reliable, accepting that occasional negative customer experiences are a business reality that demands patience.

The second challenge is overcoming customer scepticism. Even with a strong inventory, new users are typically cautious, starting with small-scale testing (e.g., 10-15 GPUs) over a period of two to three months before committing to greater demands. While this initial verification phase slows down potential exponential growth, it is considered a necessary element for building “flywheel momentum.” This caution is expected to diminish as io.net’s reputation solidifies through successful adoption by peer networks and established companies.

io.net earns through a platform fee and revenue share with data centres. Operating costs scale minimally because the platform is horizontally scalable.

Who’s Using io.net Today?

Sharma listed a wide mix of customers — from IIT Bombay and UC Berkeley to Eros Now and robotics platform Frodobots. Startups working on audio generation, image models and voice-to-song synthesis use the GPUs heavily.
Through partnerships with Antler and YC, around 15–20 or more early-stage companies now rely on io.net for compute.

He highlighted AI startup Wondera.ai, which uses io.net for an LLM that can generate songs in the voice of specified artists.

In just six months of monetisation, io.net crossed $25 million in revenue, with larger contracts in the pipeline. The company expects India to become a major contributor to supply, engineering and customer demand.

Sharma believes hyperscalers cannot build data centres fast enough to keep pace with the explosion in AI workloads. A decentralised GPU marketplace, he said, may be the only model that can scale at internet speed.

The post From Shortages to Scale, io.net’s Approach to Rewriting AI Compute Access appeared first on Analytics India Magazine.
