
ESDS Software Solution Limited has launched a sovereign-grade GPU-as-a-Service offering. Unveiled during the company’s 20th Annual Day, the service aims to meet the growing compute demands of AI/ML, GenAI and LLM workloads across enterprises, BFSI, research institutions and government agencies.
With the launch, ESDS said it positions itself as a full-stack provider spanning cloud, managed services, data centre infrastructure and software solutions, now adding large-scale, sovereign-grade GPU infrastructure to its portfolio.
The company said the service is designed to deliver high-performance AI compute at a global scale.
The announcement comes as global spending on AI-optimised servers, including GPUs and accelerators, is expected to reach $329.5 billion by 2026, driven by the increasing need for deterministic, high-throughput computing environments.
ESDS said its new platform enables organisations to run mission-critical AI workloads on purpose-built GPU SuperPODs designed for secure operations, consistent performance and low-latency distributed training.
The company has extended its existing cloud and managed-services expertise into a fully managed GPU infrastructure stack intended to help organisations scale AI on a reliable architectural foundation.
Piyush Somani, promoter, managing director and chairman of ESDS, said in a statement that the move addresses surging demand for large-scale AI infrastructure.
“With this launch, we are democratising access to large-scale GPU clusters and SuperPODs, making them straightforward, transparent and purpose-built for enterprises that have AI ambitions,” Somani said.
He added that ESDS’s GPU SuperPODs “fundamentally change that narrative by delivering predictable performance, stability and scale.”
“To empower customers even further, we created the SuperPOD Configurator tool that lets businesses choose their GPU model, design their cluster and instantly gain visibility into the architecture and cost.”
At the core of the offering is a lineup of high-performance GPU systems, including NVIDIA DGX and HGX platforms based on the B200, B300 and GB200 GPUs, the rack-scale GB200 NVL72 architecture, and AMD's MI300X platforms.
These systems, the company said, are designed to support extremely large model training, accelerate inference workloads, run simulations and manage large-scale clustered data operations.
The company said its GPU SuperPODs use high-bandwidth NVLink, unified memory pools, intelligent scheduling, enhanced thermal management and AI-tuned orchestration to ensure predictable performance at any scale.
The service portfolio includes consultancy for captive GPU clusters, supply and deployment of GPU environments, dedicated GPU infrastructure-as-a-service, hybrid CPU+GPU cloud options and a fully managed on-demand GPU cloud.
ESDS will manage architecture design, network optimisation, container orchestration, performance tuning and 24×7 monitoring with AI/ML Ops support.
A key part of the rollout is the SuperPOD Configurator, a tool that helps enterprises design AI infrastructure by selecting GPU models, compute density, memory profiles, storage tiers and interconnect options.
The system automatically generates optimised architectures, performance estimates and cost projections.
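ESDS has not published the Configurator's internals, but the workflow it describes maps onto a straightforward parameterised estimator: pick a GPU model and cluster shape, then derive an architecture summary and a cost projection. The sketch below is a hypothetical illustration of that pattern only; the GPU catalogue, memory figures and hourly rates are invented placeholders and do not reflect ESDS or vendor pricing.

```python
"""Hypothetical sketch of a SuperPOD-style configurator.

All GPU specs and hourly rates below are illustrative placeholders,
not ESDS or vendor pricing.
"""

from dataclasses import dataclass


@dataclass
class GpuProfile:
    name: str
    memory_gb: int          # HBM per GPU (assumed figure)
    hourly_rate_usd: float  # assumed on-demand rate per GPU-hour


# Placeholder catalogue keyed by GPU model; values are illustrative only.
CATALOGUE = {
    "B200": GpuProfile("B200", 192, 9.50),
    "MI300X": GpuProfile("MI300X", 192, 6.75),
}


def estimate_cluster(model: str, gpus_per_node: int, nodes: int,
                     hours_per_month: float = 730.0) -> dict:
    """Return a rough architecture summary and monthly cost projection."""
    gpu = CATALOGUE[model]
    total_gpus = gpus_per_node * nodes
    return {
        "gpu_model": gpu.name,
        "total_gpus": total_gpus,
        "aggregate_hbm_gb": total_gpus * gpu.memory_gb,
        "est_monthly_cost_usd": round(
            total_gpus * gpu.hourly_rate_usd * hours_per_month, 2
        ),
    }


if __name__ == "__main__":
    # Example: a 4-node cluster with 8 GPUs per node.
    print(estimate_cluster("B200", gpus_per_node=8, nodes=4))
```

A production configurator would layer storage tiers, interconnect options and performance modelling on top of this kind of lookup, but the core idea of turning a handful of selections into an architecture and cost estimate is the same.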
ESDS cited a research lab that cut training time for a 50-billion-parameter model from over 40 days to 10 days, reduced costs by 60%, and achieved 30× faster inference after moving to NVL72-based GPU systems with optimised containers and high-speed NVLink.
The company said its offering is built to global AI performance standards but designed and optimised in India, and noted that it serves over 1,300 enterprise, BFSI and government clients with transparent pricing, flexible consumption models and integrated cloud and managed services.