Microsoft and Nvidia team up to build massive cloud AI supercomputer

17 Nov 2022

Image: © Nmedia/Stock.adobe.com

The partnership will combine Microsoft Azure with Nvidia GPUs, networking and software to help enterprises create their own AI systems.

Nvidia and Microsoft have entered a multi-year collaboration to build one of the most powerful AI supercomputers in the world.

The partnership will see Microsoft Azure’s advanced supercomputing infrastructure combine with tens of thousands of Nvidia GPUs. This collaboration makes Azure the first public cloud to incorporate Nvidia’s full AI stack, including its networking tech and AI enterprise software.

Nvidia said this combination will help enterprises to train, deploy and scale AI systems, including large state-of-the-art models.

“AI technology advances as well as industry adoption are accelerating,” said Nvidia enterprise computing VP Manuvir Das. “The breakthrough of foundation models has triggered a tidal wave of research, fostered new start-ups and enabled new enterprise applications.

“Our collaboration with Microsoft will provide researchers and companies with state-of-the-art AI infrastructure and software to capitalise on the transformative power of AI,” Das added.

Both tech giants already have experience with developing supercomputer systems. Nvidia’s Cambridge-1 supercomputer launched in the UK last year with the goal of advancing healthcare through AI.

In 2020, Microsoft announced it had built one of the top five publicly disclosed supercomputers in the world, in a partnership with OpenAI and hosted by Azure.

Microsoft’s executive VP of its cloud and AI group, Scott Guthrie, said AI is fuelling the “next wave of automation” across enterprises and industrial computing, letting organisations “do more with less” amid economic uncertainty.

“Our collaboration with Nvidia unlocks the world’s most scalable supercomputer platform, which delivers state-of-the-art AI capabilities for every enterprise on Microsoft Azure.”

As part of the collaboration, Nvidia’s full stack of AI workflows and software development kits will be made available to Azure enterprise customers.

Nvidia said it will also utilise Azure’s scalable virtual machine instances to research further advances in generative AI.

These models can create new content such as text, audio, video and images, based on the existing content they were trained on. Examples include text-to-image AI generators such as OpenAI’s DALL-E.

Last year, Nvidia and Microsoft teamed up to train a massive AI-powered language model. The two companies said this model had 105 layers and 530bn parameters, three times as many parameters as OpenAI’s GPT-3.

Nvidia earnings

Meanwhile, Nvidia reported a decline in third-quarter revenue after a steep drop in its gaming unit.

The company’s revenue fell 17pc compared to the same period last year, to $5.93bn. However, this exceeded analyst estimates of $5.77bn.

Gaming revenue for Nvidia dropped by 51pc year on year to $1.57bn, but the company’s overall revenue was boosted by growth in its data centre division. Nvidia’s data centre revenue was $3.83bn for the third quarter, an increase of 31pc year on year.

The company’s CEO and founder, Jensen Huang, said Nvidia is quickly adapting to the macro environment and paving the way for new products.

“Nvidia’s pioneering work in accelerated computing is more vital than ever,” Huang said. “Limited by physics, general purpose computing has slowed to a crawl, just as AI demands more computing.

“Accelerated computing lets companies achieve orders-of-magnitude increases in productivity while saving money and the environment,” he added.


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com