Join top executives in San Francisco on July 11-12 to hear how leaders are integrating and optimizing AI investments for success. Learn more
As the demand for generative AI continues to grow, Nvidia is doing everything it can to ensure that its technology stack for model development and deployment reaches where companies prefer to work.
Case in point: the chip giant’s latest partnership with Microsoft to integrate Nvidia AI Enterprise software into Azure Machine Learning (Azure ML) and bring deep learning frameworks to Windows 11 PCs.
The move, announced today at the ongoing Microsoft Build developer conference, accelerates both enterprise and individual AI efforts, and comes just hours after Nvidia announced Project Helix with Dell to bring generative AI to on-premises facilities.
Integration with Microsoft Azure Machine Learning
In simple terms, Nvidia AI Enterprise can be described as a secure, end-to-end software platform that accelerates the data science pipeline and streamlines production AI development and deployment.
With this partnership, Nvidia is adding this software layer to Azure ML, providing Azure cloud customers with an enterprise-ready solution to rapidly build, deploy, and manage custom AI applications. As part of this, Azure ML users get access to the more than 100 AI frameworks, pre-trained models, and development tools, such as Nvidia Rapids, that come with Nvidia AI Enterprise, as well as Nvidia’s accelerated compute resources to speed the training and inference of specific AI models, including LLMs.
“The combination of Nvidia AI Enterprise software and Azure Machine Learning will help enterprises accelerate their AI initiatives with a seamless and efficient path from development to production,” said Manuvir Das, vice president of enterprise computing at Nvidia, in a release.
Coming to the Azure marketplace
In addition to the integration, which is available in an invitation-only preview via Nvidia’s community registration, the companies also announced that Nvidia AI Enterprise and Omniverse Cloud are coming to the Microsoft Azure marketplace.
“What this means is that customers who have existing relationships with Azure can use the contracts they already have to access Nvidia AI Enterprise and use it within Azure ML or separately on instances of their choice,” Das said on a press call.
The same is true of Omniverse Cloud, which provides developers and enterprises with a full-stack cloud environment to design, develop, deploy, and manage industrial metaverse applications at scale.
AI in Windows 11
Finally, to help developers build AI models on laptops, Nvidia announced that all of its GPU-accelerated deep learning frameworks will be enabled on Windows. This, the companies said, will be done through the Windows Subsystem for Linux (WSL), which combines the best of Windows and Linux and allows AI libraries built for Linux to run on a Windows laptop.
Nvidia said it has been working closely with Microsoft to offer GPU acceleration and support for its entire AI software stack within WSL, allowing developers to use Windows PCs for all their local AI development needs.
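As a hypothetical illustration (not drawn from the announcement): Linux-built tooling running under WSL can detect that it is in a WSL environment rather than native Linux, because WSL kernels include "microsoft" in their release string. A minimal sketch, assuming a standard WSL2 kernel naming convention:

```python
import platform

def running_in_wsl() -> bool:
    """Return True when this Linux environment is WSL on a Windows machine.

    WSL2 kernels report release strings such as
    '5.15.90.1-microsoft-standard-WSL2', so the 'microsoft' substring
    distinguishes WSL from native Linux. On Windows or macOS this
    simply returns False.
    """
    uname = platform.uname()
    return uname.system == "Linux" and "microsoft" in uname.release.lower()

print(running_in_wsl())
```

A library could use a check like this to decide, for example, whether GPU devices are being passed through from the Windows host driver rather than a native Linux driver.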
While this will make leading generative AI models available on PCs, Das noted that users will still have the option to do large-scale training on Azure.
“Of course, you can use Nvidia AI Enterprise and Azure ML to do the training and then push the models to Nvidia PCs,” he noted. “It’s the same Nvidia stack, so it will run there.”
Microsoft Build runs through Thursday, May 25.