Google has shown its own version of the NVIDIA Blackwell GB200 NVL super accelerator for its cloud AI platform, Datacenter Dynamics reports. Google's solution differs from the Meta and Microsoft designs presented earlier, which underscores hyperscalers' strong interest in NVIDIA's new AI platform. Google said it is working closely with NVIDIA to shape the "sustainable computing infrastructure of the future."
More details about the new platform are expected at a future Google conference; for now, the exact configuration of Google's super accelerator is unclear. The photo shows two racks: one holds an undisclosed number of GB200 accelerators, while the other contains Google equipment, including power supplies, switches, and cooling modules. Although NVIDIA recommends InfiniBand as the interconnect for its AI platforms, some observers believe Google relies on its own Ethernet-based infrastructure. The company already uses optical circuit switches (OCS) of its own design in AI clusters built around its proprietary TPU accelerators.
The Microsoft design also consists of two racks: the second houses a huge heat exchanger, which likely serves not one rack of accelerators but several at once. Microsoft and NVIDIA are known to have disagreed earlier over the layout of GB200 platforms. Finally, the Meta variant is the closest to NVIDIA's original GB200 NVL72, whose specifications NVIDIA has opened up within the Open Compute Project (OCP). Earlier, NVIDIA also abandoned the "compromise" GB200 NVL36×2 super accelerators, which themselves occupy two racks.