NVIDIA Contributes to Open Compute Project
Today, NVIDIA announced that it has shared elements of its NVIDIA Blackwell accelerated computing platform design with the Open Compute Project (OCP) to drive the development of open, efficient, and scalable data center technologies. Additionally, NVIDIA has expanded its support for OCP standards with NVIDIA Spectrum-X.
At the OCP Global Summit, NVIDIA will be presenting key portions of the NVIDIA GB200 NVL72 system electro-mechanical design to the OCP community. This includes rack architecture, compute and switch tray mechanicals, liquid-cooling and thermal environment specifications, and NVIDIA NVLink cable cartridge volumetrics to support higher compute density and networking bandwidth.
NVIDIA has made several official contributions to OCP, including the NVIDIA HGX H100 baseboard design specification, to enable a wider range of offerings from computer makers and promote the adoption of AI.
Expanded alignment of the NVIDIA Spectrum-X Ethernet networking platform with OCP community-developed specifications allows companies to maximize the performance potential of AI factories deploying OCP-recognized equipment while maintaining software consistency.
"By advancing open standards, we're helping organizations worldwide take advantage of the full potential of accelerated computing and create the AI factories of the future," said Jensen Huang, founder and CEO of NVIDIA.
Accelerated Computing Platform for the Next Industrial Revolution
NVIDIA's accelerated computing platform, based on the NVIDIA MGX modular architecture, powers a new era of AI. The liquid-cooled GB200 NVL72 system connects 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs in a rack-scale design, delivering 30x faster real-time trillion-parameter large language model inference than the NVIDIA H100 Tensor Core GPU.
The NVIDIA Spectrum-X Ethernet networking platform, now including the next-generation NVIDIA ConnectX-8 SuperNIC, supports OCP's Switch Abstraction Interface (SAI) and Software for Open Networking in the Cloud (SONiC) standards, accelerating Ethernet performance for scale-out AI infrastructure.
ConnectX-8 SuperNICs feature accelerated networking at speeds of up to 800Gb/s and programmable packet processing engines optimized for massive-scale AI workloads.
Critical Infrastructure for Data Centers
NVIDIA is collaborating with global electronics makers to simplify the development of AI factories. Partners such as Meta are building on the Blackwell platform to meet the growing computational demands of large-scale artificial intelligence.
Learn more about NVIDIA's contributions to the Open Compute Project at the 2024 OCP Global Summit, taking place at the San Jose Convention Center from Oct. 15-17.