Wallstreetcn
2024.02.20 18:16

Microsoft quietly reduces reliance on NVIDIA: developing an alternative network card.

Media reports say the network card Microsoft is developing is similar to NVIDIA's ConnectX-7. Development may take more than a year. If successful, it could save Microsoft money and reduce the time OpenAI spends training large AI models on Microsoft servers.

Microsoft is not only developing its own artificial intelligence (AI) chips but is also attempting to reduce its reliance on NVIDIA across other AI-related products. These efforts, however, are not as high-profile as the AI chips announced last year.

On Tuesday, February 20th, Eastern Time, media reports cited insiders as saying that Microsoft is developing its own network card, similar to NVIDIA's ConnectX-7, to ensure fast data movement between Microsoft servers. Microsoft hopes the new device will save the company money while improving the performance of its servers built on NVIDIA chips.

Network cards are a key data-center technology for speeding up traffic between servers. Reports suggest that developing the new network card may take over a year. If successful, the product could pay off for Microsoft's large investment in OpenAI by reducing the time it takes OpenAI to train large AI models on Microsoft servers. When Microsoft runs NVIDIA's AI chips in its data centers, servers must move the large volumes of data required by AI customers such as OpenAI, which can lead to overload. OpenAI's CEO, Sam Altman, has privately expressed concerns about how Microsoft's computing power compares with Google's.

Insiders revealed that Microsoft CEO Satya Nadella has appointed Pradeep Sindhu, co-founder of network equipment maker Juniper Networks, to lead the network card effort. Public records show that Microsoft acquired Sindhu's server chip startup, Fungible, last year, and that he and his team have since joined Microsoft.

Sindhu himself has not responded to the reports, and an NVIDIA spokesperson declined to comment. In a statement, a Microsoft spokesperson said that as part of its systems approach to Azure cloud infrastructure, Microsoft focuses on optimizing every layer of the stack, and that the company frequently develops new technologies to meet customer needs, including networking chips.

If the report is confirmed, it would mark the second time in three months that Microsoft has been said to be moving into competition with NVIDIA. NVIDIA has previously estimated that annual sales of its server networking equipment would exceed $10 billion.

In mid-November last year, Microsoft launched two high-end custom chips for Azure services: Microsoft's first AI chip, Maia 100, and the Arm-based cloud-native CPU Cobalt 100, a competitor to Intel processors. Maia 100 is used to run cloud training and inference workloads for OpenAI models, Bing, GitHub Copilot, and ChatGPT. It is manufactured on TSMC's 5-nanometer process with 105 billion transistors, about 30% fewer than the 153 billion in the MI300X, AMD's challenger to NVIDIA's AI chips.

At the time, some media commented that Maia 100 might compete directly with NVIDIA's chips and serve as an alternative to them.

Rani Borkar, the head of Microsoft's Azure hardware systems and infrastructure, stated that Microsoft was testing how Maia 100 could meet the needs of its Bing search engine, AI chatbot Copilot, GitHub Copilot coding assistant, and OpenAI model GPT-3.5 Turbo.

According to Borkar, Maia 100 has been tested on Microsoft's Bing and Office AI products, and OpenAI is also trialing it. This suggests that cloud training and inference for models like ChatGPT may come to run on this chip.