Nvidia introduced a personal AI supercomputer. In May of this year, the company will begin selling a system called Project Digits, which is based on the new GB10 Grace Blackwell superchip. It has enough processing power to run large AI models, including LLMs, while being small enough to fit on a desk and run from a standard wall outlet. Previously, this kind of computing power required much larger and more power-hungry systems.

Image source: NVIDIA

“AI will become central to every application in every industry. Thanks to Project Digits, the Grace Blackwell superchip will be available to millions of developers,” said Nvidia CEO Jensen Huang. “Placing an AI supercomputer on the desks of every data scientist, AI researcher and student will give them the opportunity to participate in shaping the age of AI.”

Project Digits, about the size of a desktop mini PC such as a Mac mini, can run AI models with up to 200 billion parameters and has a starting price of $3,000. For even more demanding workloads, two Project Digits systems can be linked to handle models with up to 405 billion parameters (Meta’s largest model, Llama 3.1, has 405 billion parameters).

The GB10 Grace Blackwell superchip delivers up to 1 petaflop of AI performance at FP4 precision, meaning it can perform one quadrillion operations per second for training and running AI models. The system pairs a GPU with the latest-generation Nvidia CUDA cores and fifth-generation Tensor Cores, connected via NVLink-C2C to a Grace CPU with 20 energy-efficient Arm cores. MediaTek contributed to the development of the GB10, helping to optimize its energy efficiency and performance. Each system ships with 128 GB of unified coherent memory and up to 4 TB of NVMe storage.
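To see why 128 GB of unified memory is plausible for a 200-billion-parameter model (and two linked systems for a 405-billion-parameter one), here is a rough back-of-envelope sketch in Python. The 4-bit weight size follows from FP4; the 20% overhead factor for KV cache and activations is an assumption for illustration, not an Nvidia figure.

```python
# Rough back-of-envelope estimate of model memory footprint at FP4,
# assuming 4-bit weights plus a configurable overhead factor for
# KV cache and activations (the overhead value is an assumption).

def fp4_footprint_gb(params_billions: float, overhead: float = 1.2) -> float:
    bytes_per_param = 0.5  # 4 bits = 0.5 bytes per weight
    weights_gb = params_billions * 1e9 * bytes_per_param / 1e9
    return weights_gb * overhead

for params, memory_gb in [(200, 128), (405, 256)]:
    need = fp4_footprint_gb(params)
    print(f"{params}B params: ~{need:.0f} GB needed, {memory_gb} GB available")

# 200B params: ~120 GB needed, 128 GB available  (one system)
# 405B params: ~243 GB needed, 256 GB available  (two linked systems)
```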

Users will also have access to Nvidia’s library of AI software, including development kits, orchestration tools and pre-trained models from the Nvidia NGC catalog. The system runs Nvidia DGX OS, a Linux-based operating system, and supports popular tools such as PyTorch, Python and Jupyter notebooks. Developers can fine-tune models with the Nvidia NeMo framework and accelerate data science workflows with the Nvidia RAPIDS libraries.
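As a minimal sketch of the kind of local workflow this stack enables, the PyTorch snippet below checks that the GPU is visible and runs a small forward/backward pass; it assumes a standard PyTorch install with CUDA support, and the layer sizes are purely illustrative rather than anything specific to Project Digits.

```python
# Minimal PyTorch sketch: confirm the GPU is available and run a tiny
# forward/backward pass locally. Model shape and batch size are illustrative.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Running on: {device}")

model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.GELU(),
    nn.Linear(4096, 1024),
).to(device)

x = torch.randn(8, 1024, device=device)   # small dummy batch
loss = model(x).pow(2).mean()              # placeholder loss
loss.backward()                            # gradients computed on the device
print("Backward pass OK, first-layer grad norm:",
      model[0].weight.grad.norm().item())
```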

Users can develop and test their AI models locally on Project Digits, then deploy them to cloud services or data center infrastructure that leverage the same Grace Blackwell architecture and Nvidia AI Enterprise software platform.

Note that this is not Nvidia’s first “consumer” AI supercomputer. In December, the company announced a $249 version of its Jetson computer for AI applications aimed at hobbyists and startups, called the Jetson Orin Nano Super, which can handle LLMs of up to 8 billion parameters.
