Nvidia Contributes Vera Rubin Rack Innovations to OCP Community

October 13, 2025 by Jaime Hampton


Nvidia is continuing its collaboration with the Open Compute Project (OCP) through new rack-level design contributions for its upcoming Vera Rubin system. At this week’s OCP Global Summit, the company announced that its Vera Rubin MGX rack will incorporate several innovations developed to align with OCP standards.

The updates are part of Nvidia’s effort to prepare datacenters to transform into what it calls “giga-scale AI factories,” facilities that integrate compute, power, and cooling as a unified design. According to Joe DeLaere, datacenter product marketing manager at Nvidia, the Vera Rubin system extends the company’s open MGX architecture, which was first shared with OCP last year and has since been used across multiple server designs.

“We all know AI demand is exploding. Datacenters are evolving toward giga-scale AI factories that manufacture intelligence and generate revenue,” DeLaere said during a press briefing. “But to maximize that revenue, the networking, the compute, the mechanicals, the power and the cooling, all have to be designed as one. We're at the center of this transformation across grid to chip with an open, collaborative approach.”

Rack-Level Innovations

The new Vera Rubin rack design introduces multiple hardware improvements meant to increase efficiency and speed up deployment. One of these improvements is a new liquid-cooled bus bar capable of delivering up to 5,000 amps of current, a design that Nvidia says supports greater power density and delivery for large-scale AI workloads. Complementing the bus bar are advanced supercapacitors that provide up to 20 times more energy storage than the Blackwell generation, which Nvidia says will help reduce surges in grid power demand and allow for more compute resources in the same footprint.
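To put the bus bar rating in perspective, deliverable power scales linearly with both voltage and current (P = V × I). A minimal sketch, pairing the quoted 5,000-amp figure with the 800-volt DC distribution Nvidia describes later in the announcement (the combination is illustrative, not a figure Nvidia has published):

```python
# Illustrative only: "up to 5,000 A" is the quoted bus bar rating;
# 800 V comes from Nvidia's DC power architecture announcement.
def delivered_power_mw(voltage_v: float, current_a: float) -> float:
    """Return deliverable DC power in megawatts: P = V * I."""
    return voltage_v * current_a / 1e6

print(delivered_power_mw(800, 5000))  # 4.0 (MW at full rated current)
```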

Mechanically, the Vera Rubin compute tray introduces a PCB midplane to create a cable-free interior, reducing assembly time and improving serviceability. A new modular expansion base at the front of the tray will support integration of Rubin CPX GPUs and ConnectX-9 SuperNICs. The system is also fully liquid-cooled and designed to operate at inlet temperatures up to 45°C, which Nvidia claims will eliminate inefficiencies seen in other solutions that require chilling down to 32°C or lower. Nvidia confirmed that these Vera Rubin MGX rack-level innovations will be exhibited on the OCP show floor and contributed to the OCP community following the event.

Power and Connectivity for Giga-Scale Systems

Nvidia is also introducing a new 800-volt DC power architecture designed to replace legacy 415-volt AC systems in datacenters. The approach moves power conversion upstream, delivering DC current directly to the rack to reduce energy loss and simplify the electrical path from the grid to the compute node. By removing several layers of AC-DC conversion, Nvidia claims the design will simplify the entire system, allowing for more GPUs per AI factory and more performance per watt.
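The efficiency argument for higher-voltage distribution follows from basic conduction physics: for a fixed load, current falls as voltage rises, and resistive loss falls with the square of the current (P_loss = I²R). A minimal sketch with hypothetical load and resistance values (generic physics, not Nvidia's published numbers, and ignoring AC-specific effects):

```python
def conduction_loss_w(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """I^2 * R loss in a distribution path for a given load and supply voltage."""
    current = power_w / voltage_v
    return current ** 2 * resistance_ohm

load = 1_000_000  # hypothetical 1 MW rack-scale load
r = 0.001         # hypothetical 1 milliohm path resistance
loss_415 = conduction_loss_w(load, 415, r)
loss_800 = conduction_loss_w(load, 800, r)
print(loss_415 / loss_800)  # (800/415)^2, roughly 3.7x lower loss at 800 V
```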


Several partners plan to adopt the 800-volt architecture in next-generation datacenters, including Foxconn, which is building a 40-megawatt facility in Taiwan to support Nvidia systems, as well as Oracle Cloud and CoreWeave. Nvidia said it is working with more than 20 companies across the hardware stack to create a shared blueprint for scaling AI factories.

Alongside the new power design, Nvidia highlighted updates to its NVLink Fusion ecosystem, which enables direct CPU-GPU integration and high-bandwidth interconnects across compute nodes. Intel will build x86 processors that link directly into Nvidia infrastructure using NVLink Fusion, while Samsung Foundry will offer custom CPU and XPU manufacturing to meet growing demand for heterogeneous compute. Fujitsu is also integrating its Monaka series CPUs with Nvidia GPUs through NVLink Fusion.

Maintaining OCP Compatibility

While some industry peers have explored double-wide configurations, Nvidia is retaining the single-wide OCP rack form factor for Vera Rubin. DeLaere said the design minimizes copper cabling and shortens interconnect distances, allowing for the highest NVLink data rates with fewer cables. A double-wide setup, he noted, would require “flyover” cables between sides of the rack, adding complexity and signal loss. Nvidia is continuing with the single-wide OCP architecture used across multiple existing systems, a configuration DeLaere described as mature and well-proven.

The Vera Rubin rack updates align with Nvidia’s strategy of contributing open, interoperable hardware designs to the OCP community while maintaining control of its core technologies. By basing its next-generation rack architecture on OCP standards, the company aims to accelerate industry adoption of unified compute, power, and cooling designs tailored to large-scale AI systems. For more details, read Nvidia’s blog post.

