At the SIGGRAPH conference in Denver, Colorado (USA), Nvidia presented NIM (Nvidia Inference Microservices), a suite of products designed to simplify the deployment of artificial intelligence in enterprises.

Image source: nvidia.com

Running generative AI systems on a company's own infrastructure requires setting up hardware and software and acquiring the relevant expertise – something not every company has, and Nvidia offers to solve this problem for a fee. Nvidia chips underpin most of the systems that power AI; thanks to this, the company's revenue doubled over the past year and is expected to double again this year.

NIM, which bundles software and related services, is part of Nvidia's AI Enterprise product, which costs $4,500 per GPU per year; the software and services are designed to run on Nvidia hardware. “Nvidia NIM is a comprehensive solution for deploying generative AI, simplified for developers, but built for applications that can scale,” said Kari Briski, Nvidia vice president of product management for AI and HPC software development kits.

The preliminary version of NIM offered about 100 microservices for AI inference; the company has now released the final version of the product. For example, Getty Images microservices help generate high-resolution images with improved interpretation of text prompts, while Shutterstock's Edify-based 3D generator accepts both text and images as prompts. Nvidia's software and services will also help create virtual worlds for the Apple Vision Pro, and digital twin technology will help train the computers that control robots, making the machines more human-like without developers having to do this work manually.
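
In practice, each NIM microservice is a container that exposes an HTTP inference API; for the language-model NIMs, Nvidia documents an OpenAI-compatible endpoint. As an illustrative sketch only – the host, port and model name below are assumptions rather than details from the article – a request to a locally deployed NIM might look like this:

```python
import requests

# Hypothetical local NIM deployment; the host, port and model id are assumptions.
NIM_URL = "http://localhost:8000/v1/chat/completions"  # OpenAI-compatible endpoint

payload = {
    "model": "meta/llama3-8b-instruct",  # example model id; substitute the NIM actually deployed
    "messages": [
        {"role": "user", "content": "Summarize what NIM microservices are in one sentence."}
    ],
    "max_tokens": 128,
}

# Send the inference request and print the generated reply.
response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Because the endpoint follows the OpenAI API convention, existing client code and libraries can generally be pointed at such a microservice with little or no modification.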
