The Open Compute Project Foundation (OCP), a non-profit organization that develops open hardware specifications for data centers, announced that NVIDIA and Meta will contribute their own designs to its Open Systems for AI initiative.

The Open Systems for AI project was announced in January 2024 with the participation of Intel, Microsoft, Google, Meta, NVIDIA, AMD, Arm, Ampere, Samsung, Seagate, Supermicro, Dell and Broadcom. The initiative's goal is to develop open standards for AI clusters and the data centers that host them. Open Systems for AI is expected to improve the efficiency and sustainability of AI platforms and to enable equipment supply chains spanning multiple manufacturers.

As part of the initiative, NVIDIA will contribute to OCP the specifications for electromechanical design elements of its GB200 NVL72 rack-scale accelerator systems, including the rack and liquid-cooling architectures and the compute and switch tray mechanicals.

Image source: NVIDIA

In addition, NVIDIA will expand support for OCP standards in its Spectrum-X network infrastructure, ensuring compatibility with the OCP Switch Abstraction Interface (SAI) and Software for Open Networking in the Cloud (SONiC). This will let customers use Spectrum-X adaptive routing and telemetry-based congestion control to improve the performance of Ethernet connections in scalable AI infrastructure. ConnectX-8 SuperNIC adapters with OCP 3.0 support are expected in 2025.

In turn, Meta will donate its Catalina AI Rack architecture, designed specifically for building high-density AI systems based on the GB200, to the Open Systems for AI project. This is expected to enable the OCP organization to "drive the innovation needed to create a more resilient AI ecosystem."
