Nvidia Powers New Hugging Face Inference Service, AI Industrial Solutions

Nvidia continued its push to simplify the development of AI applications by powering a new inference-as-a-service offering through model repository Hugging Face and introducing fresh microservices for industrial generative AI use cases.

At the SIGGRAPH 2024 conference in Denver Monday, the AI computing giant announced that Hugging Face’s new service will run on Nvidia’s DGX Cloud and rely on its inference microservices to help developers “rapidly deploy” popular large language models such as Meta’s Llama 3 family and Mistral AI’s models.

[Related: 12 Big Nvidia, Intel And AMD Announcements At Computex 2024]

Known officially as Nvidia NIM, the microservices consist of AI models served in optimized containers that developers can integrate with their applications. The company debuted them in early June with support for more than 40 models developed by Nvidia and others.
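
As a rough illustration of how such a containerized model is typically consumed, the sketch below assumes a NIM container deployed locally that exposes an OpenAI-compatible endpoint; the URL, port and model identifier are assumptions for illustration, not details from the announcement.

from openai import OpenAI

# Point the standard OpenAI client at an assumed local NIM deployment.
client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-used-for-local-deployments",
)

# Send a chat request to the containerized model and print the reply.
response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",  # illustrative model identifier
    messages=[{"role": "user", "content": "Summarize NIM microservices in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)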

NIM is available to businesses through the Nvidia AI Enterprise software suite, which costs $4,500 per GPU per year. It’s free to members of the Nvidia Developer Program.

The new inference service from Hugging Face enables developers to “quickly prototype with open-source AI models hosted on the Hugging Face Hub and deploy them in production,” according to the Santa Clara, Calif.-based AI computing giant.
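
As a rough sketch of what that prototyping could look like in practice, the snippet below uses the huggingface_hub client to call a Llama 3 model hosted on the Hub; the model ID and token are placeholders, and whether a given request is served by the new DGX Cloud-backed service depends on how the account and model are configured.

from huggingface_hub import InferenceClient

# Create a client for an open model hosted on the Hugging Face Hub.
client = InferenceClient(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # illustrative model ID
    token="hf_...",  # placeholder access token
)

# Request a chat completion and print the generated text.
response = client.chat_completion(
    messages=[{"role": "user", "content": "Draft a one-line product blurb."}],
    max_tokens=64,
)
print(response.choices[0].message.content)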

The new Hugging Face service complements the model repository’s Train on DGX Cloud service that it announced with Nvidia a year ago at SIGGRAPH 2023.

New NIM Microservices Enable Industrial GenAI Use Cases

Nvidia also announced that it is releasing new NIM microservices that will help developers bring generative AI capabilities to industrial sectors like manufacturing and robotics.

These include what it called the “world’s first generative AI models for OpenUSD development,” referring to Universal Scene Description, the open 3-D framework originally developed by Pixar that Nvidia uses to connect its Omniverse platform with other 3-D applications.

Available in preview now or coming soon, the initial batch of OpenUSD NIM microservices allows developers to, among other things, search through libraries of OpenUSD, 3-D and image data using text or image inputs; generate realistic materials for 3-D objects; assemble OpenUSD-based scenes using text prompts; and increase a physics simulation’s resolution with AI-based upscaling.
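
For context on the format these microservices search, generate and assemble, here is a minimal sketch of an OpenUSD scene authored through the standard pxr Python bindings; the file name and prim paths are illustrative and not tied to any specific NIM.

from pxr import Usd, UsdGeom

# Create a new stage, the top-level container for an OpenUSD scene.
stage = Usd.Stage.CreateNew("example_scene.usda")

# Define a root transform and a simple sphere beneath it.
world = UsdGeom.Xform.Define(stage, "/World")
sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")
sphere.GetRadiusAttr().Set(0.5)

# Set the default prim and write the layer to disk.
stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()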

Nvidia also released a connector that links data between the Unified Robotics Description Format (URDF) and OpenUSD, which is meant to help developers port robotics data across design, simulation, reinforcement learning and other applications. In addition, the company is giving developers the ability to build their own OpenUSD data connectors through its new OpenUSD Exchange software development kit.

“Until recently, digital worlds have been primarily used by creative industries; now, with the enhancements and accessibility NVIDIA NIM microservices are bringing to OpenUSD, industries of all kinds can build physically based virtual worlds and digital twins to drive innovation while preparing for the next wave of AI: robotics,” said Rev Lebaredian, vice president of Omniverse and simulation technology at Nvidia, in a statement.
