American businessman Elon Musk announced on his account on the social network X that his AI startup xAI has launched "the world's most powerful cluster for AI training." According to Musk, the system will provide "a significant advantage in training the world's most powerful AI by every metric by December of this year."
"A system with 100,000 liquid-cooled H100s on a single RDMA fabric has become the most powerful cluster for AI training in the world," Musk wrote. It is unknown whether the businessman personally took part in launching the AI supercomputer, but a published photo shows that he at least spoke with xAI engineers while the equipment was being connected.
Earlier this year, the media reported on Musk's plan to launch, by the fall of 2025, a so-called "gigafactory for computing": a giant data center housing the world's most powerful AI supercomputer. Assembling the training cluster required purchasing a huge number of Nvidia H100 accelerators. It appears the businessman lacked the patience to wait for the release of the H200 accelerators, let alone the upcoming Blackwell-generation B100 and B200 models, which are expected before the end of this year.
Musk later wrote that the AI supercomputer would be used to train an AI that is the most powerful in all respects. This most likely refers to the Grok 3 model, whose training is expected to be completed by the end of this year. Notably, the AI supercomputer, located in a data center in Memphis, appears to significantly outclass its peers in accelerator count: the Frontier supercomputer is built on 27,888 AMD accelerators, Aurora uses 60,000 Intel accelerators, and Microsoft's Eagle uses 14,400 Nvidia H100 accelerators.