Nvidia Reveals Its AI Platform Powered by Azure from Microsoft

Microsoft and Nvidia expanded their collaboration with the addition of Nvidia’s generative artificial intelligence (AI) foundry service to Microsoft Azure. The move is part of a broader joint effort between the two companies to accelerate software and offload general-purpose computing in order to boost data center performance, reduce carbon emissions, improve energy efficiency, and lower costs for customers.

The generative AI (genAI) foundry service offers an end-to-end platform for building custom genAI models on Microsoft Azure, combining DGX Cloud AI supercomputing with Nvidia’s AI foundation models, NeMo framework, and tools.

During Ignite 2023’s opening address, Microsoft CEO Satya Nadella invited Nvidia CEO Jensen Huang to the stage to emphasize the two tech titans’ expanding collaboration on accelerated computing. “We work together throughout the whole stack,” Nadella declared.

Huang added that Microsoft and Nvidia together built two of the fastest AI supercomputers in the world: “one in your house, one in my house.” A machine of that class usually takes several years to build and commission, but the two companies completed theirs in just a few months. One of them, Huang boasted, “seems without even trying to be the third-fastest supercomputer on the planet.”

Accelerated computing and AI, however, pose a cross-stack challenge for data centers. “From chips to APIs, everything has been transformed as a result of genAI,” Huang stated, calling genAI “the single most significant platform transition in computing history.”

Because genAI has made it possible for every business to employ AI, Huang said the technology “for the very first time is now useful, versatile and quite frankly easy to use.” Modern businesses are approaching AI in three primary ways: using proprietary data to train and deploy custom AI models, integrating AI into applications such as Microsoft’s Copilot software, and consuming public cloud services such as ChatGPT.

In the same manner that TSMC assists Nvidia in the production of GPUs, Nvidia hopes to help clients create proprietary large language models (LLMs) with the launch of Nvidia AI Foundry on Azure. “We’ll be a foundry for AI,” Huang declared.