Elon Musk is aiming to take the lead in the race to build the next generation of neural networks. To that end, he plans to double the size of the xAI Colossus AI cluster, which currently comprises 100,000 Nvidia H100 GPU accelerators.
The news came from Nvidia and was later confirmed by Musk on his account on the social network X, where he wrote that the Colossus AI supercomputer will soon house 200,000 Nvidia H100 and H200 accelerators in a facility of more than 7,000 m² in Memphis, Tennessee. Colossus is notable for how quickly xAI assembled it and brought it online: supercomputers usually take years to build, but according to Musk the entire process took just 122 days from start to finish.
Musk's company xAI has already spent at least $3 billion building Colossus: the cluster's 100,000 Nvidia H100 accelerators typically cost around $30,000 apiece. He now intends to expand the cluster with the more powerful H200, at roughly $40,000 per unit, which means several billion dollars more in spending, not counting the electricity needed to keep the cluster running. His stated goal is to grow the installation to 300,000 accelerators by next summer, with later upgrades planned around Nvidia's latest Blackwell B200.
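The article's figures allow a rough back-of-envelope check of the hardware bill. The sketch below uses only the approximate per-unit prices cited above ($30,000 per H100, $40,000 per H200) and assumes, for illustration, that the jump from 100,000 to 200,000 accelerators is filled entirely with H200 units; the real mix and actual negotiated prices are not public.

```python
# Back-of-envelope estimate of Colossus hardware costs,
# using the approximate per-unit prices cited in the article.
H100_PRICE = 30_000  # ~$30k per Nvidia H100 (article's figure)
H200_PRICE = 40_000  # ~$40k per Nvidia H200 (article's figure)

current_cost = 100_000 * H100_PRICE    # existing 100,000 H100s
# Assumption: the second 100,000 units are all H200s.
expansion_cost = 100_000 * H200_PRICE

print(f"Current H100 outlay:   ${current_cost / 1e9:.1f}B")    # $3.0B
print(f"H200 expansion:        ${expansion_cost / 1e9:.1f}B")  # $4.0B
print(f"Combined hardware:     ${(current_cost + expansion_cost) / 1e9:.1f}B")
```

The first line reproduces the $3 billion figure reported for the existing cluster, and the expansion term shows why the upgrade implies "several more billion dollars" on top of it.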
Musk is betting heavily on Nvidia's AI accelerators to further develop xAI's Grok chatbot and other neural-network-based products. Earlier this week, a video showing the inside of an xAI cluster appeared online, revealing rows of server racks filled with Nvidia accelerators.