NVIDIA and the French company Mistral AI have announced Mistral NeMo 12B, a large language model (LLM) designed for a range of enterprise tasks such as chatbots, data summarization, and working with program code.
Mistral NeMo 12B has 12 billion parameters and a context window of 128,000 tokens. Inference uses the FP8 data format, which is said to reduce memory requirements and speed up deployment without any loss of response accuracy.
The model was trained with the Megatron-LM library, part of the NVIDIA NeMo platform, on 3,072 NVIDIA H100 accelerators in DGX Cloud. Mistral NeMo 12B is claimed to handle multi-turn dialogues, mathematical problems, programming, and similar tasks well, and to exhibit "common sense" and "world knowledge". Overall, it is reported to deliver accurate and reliable performance across a wide range of applications.
The model is released under the Apache 2.0 license and is offered as a NIM container. Deploying the LLM, according to its creators, takes minutes rather than days. A single NVIDIA L40S accelerator, GeForce RTX 4090, or RTX 4500 is enough to run the model. Key advantages of deployment via NIM include high efficiency, low computational cost, security, and privacy.
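As an illustration of what a NIM-based deployment typically involves, the sketch below follows NVIDIA's general NIM conventions (registry login, a GPU-enabled container exposing an OpenAI-compatible HTTP API). The container image path, model identifier, and port are assumptions for illustration, not details confirmed by the announcement:

```shell
# Authenticate against NVIDIA's NGC container registry
# (an NGC API key in $NGC_API_KEY is assumed).
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin

# Pull and run the Mistral NeMo 12B NIM container (hypothetical image path),
# exposing its OpenAI-compatible API on port 8000.
docker run --rm --gpus all \
  -e NGC_API_KEY \
  -p 8000:8000 \
  nvcr.io/nim/nv-mistralai/mistral-nemo-12b-instruct:latest

# Once the server is up, query it like any OpenAI-compatible endpoint,
# e.g. for one of the enterprise tasks mentioned above (summarization).
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "nv-mistralai/mistral-nemo-12b-instruct",
       "messages": [{"role": "user", "content": "Summarize the attached report."}]}'
```

The same pattern applies to the chatbot and code-assistance use cases: only the request payload changes, while the container and endpoint stay the same.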