NVIDIA detailed new developments in AI infrastructure at the Open Compute Project (OCP) Summit, outlining advances in networking, compute platforms and power systems. The company also shared new benchmarks for its Blackwell GPUs and plans to introduce 800-volt direct current (DC) power designs for future data centres.
Speaking at a press briefing ahead of the OCP Summit, NVIDIA executives said the company aims to support the rapid growth of AI factories by coordinating “from chip to grid”.
Joe DeLaere, data centre product marketing manager at NVIDIA, said the surge in AI demand requires integrated solutions in networking, compute, power and cooling, and that NVIDIA’s contributions will remain open to the OCP community.
Meta will integrate NVIDIA’s Spectrum-X Ethernet platforms into its AI infrastructure, while Oracle Cloud Infrastructure (OCI) will adopt the same technology for large-scale AI training clusters.
NVIDIA said Spectrum-X is explicitly designed for AI workloads, claiming it achieves “95% throughput with zero latency degradation”.
On performance, NVIDIA highlighted new open-source benchmarks showing a 15-fold gain in inference throughput for its Blackwell GB200 GPUs compared to the previous Hopper generation. “A $5 million investment in Blackwell can generate $75 million in token revenue,” the company said, linking performance efficiency directly to AI factory returns.
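The revenue figure is consistent with the claimed throughput gain, under the simplifying assumption that token revenue scales linearly with inference throughput. A quick back-of-the-envelope check (illustrative only, not NVIDIA's methodology):

```python
# Illustrative check of the claimed AI factory economics.
# Assumption: token revenue scales linearly with inference throughput.
investment = 5_000_000     # stated Blackwell investment, USD
throughput_gain = 15       # claimed Blackwell vs Hopper inference gain
revenue = investment * throughput_gain

print(f"${revenue:,}")     # $75,000,000 — matches the stated figure
```

The linear-scaling assumption is the key caveat: real returns would also depend on utilisation, token pricing and operating costs, none of which the company detailed.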
NVIDIA also confirmed that the forthcoming Rubin and Rubin CPX systems will build on the MGX rack platform and are expected to launch in the second half of 2026.
A significant focus was the industry move towards 800V DC power delivery, which NVIDIA presented as a way to cut energy losses and support higher rack densities. The company is working with infrastructure providers, including Schneider Electric and Siemens, to develop reference architectures.
When asked by AIM about how OCP contributions and Spectrum-X adoption by Meta and Oracle may affect smaller enterprises, NVIDIA said the technology is designed for all scales. “Spectrum-X becomes the infrastructure for AI; it serves enterprise, cloud and the world’s largest AI supercomputers,” said Gilad Shainer, SVP of marketing at NVIDIA.
The company confirmed new NVLink Fusion partnerships with Intel, Samsung Foundry and Fujitsu to expand custom silicon integration within MGX-compatible racks. NVIDIA will also publish a technical white paper on 800V DC design and present full architectural details during the OCP Summit.
The post NVIDIA Unveils New Partners & Plans across AI Networking, Compute, OCP appeared first on Analytics India Magazine.