Wallstreetcn
2023.12.07 01:08

"The most powerful computing chip" is here, MI300X is faster than H100, but how much faster? Will Microsoft be the winner?

The "most powerful computing chip," the MI300X, has made its debut and is expected to lower the cost of developing AI models, putting pressure on NVIDIA. The MI300X significantly outperforms NVIDIA's H100 GPU: roughly 30% faster floating-point throughput, 60% higher memory bandwidth, and more than twice the memory capacity. Its lead over NVIDIA's newer H200 is much narrower, but the extra performance can still translate into a better user experience. AMD must also convince companies that have built on NVIDIA to switch to its chips. Lisa Su said AMD does not need to defeat NVIDIA; a strong second place can still do well.

At the AMD investor conference held this Wednesday, Meta, OpenAI, and Microsoft announced that they will be using AMD's latest AI chip, Instinct MI300X. This move indicates that the tech industry is actively seeking alternatives to the expensive NVIDIA GPUs.

There are high expectations for the MI300X, as the industry has long been frustrated with the high price and limited supply of NVIDIA's flagship GPUs. If the MI300X can be widely adopted, it has the potential to lower the cost of developing AI models and put pressure on NVIDIA.

How much faster is it compared to NVIDIA's GPUs?

AMD stated that the MI300X, built on a new architecture, offers significant performance improvements. Its standout feature is 192GB of cutting-edge high-bandwidth memory (HBM3), which enables faster data transfer and can accommodate larger AI models.

Lisa Su directly compared the MI300X and its system to NVIDIA's previous-generation flagship GPU, the H100.

In terms of basic specifications, the MI300X has a 30% higher floating-point operation speed and a 60% higher memory bandwidth than the H100. The memory capacity is also more than twice that of the H100.

Of course, the more natural comparison is with NVIDIA's latest GPU, the H200. The MI300X still leads on paper, but its advantage over the H200 is narrower: only a slight edge in memory bandwidth, though nearly 40% more memory capacity.
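As a rough sanity check, the quoted ratios can be reproduced from publicly listed chip specifications. Note that the specific figures below (HBM capacity, bandwidth, dense FP16 throughput) are assumptions drawn from vendor spec sheets, not from the article itself:

```python
# Sanity check of the article's comparisons using publicly listed specs.
# Capacity in GB of HBM, bandwidth in TB/s, dense FP16 in TFLOPS.
# These numbers are assumptions from vendor spec sheets, not the article.
specs = {
    "H100":   {"hbm_gb": 80,  "bw_tbs": 3.35, "fp16_tflops": 989.5},
    "H200":   {"hbm_gb": 141, "bw_tbs": 4.8,  "fp16_tflops": 989.5},
    "MI300X": {"hbm_gb": 192, "bw_tbs": 5.3,  "fp16_tflops": 1307.4},
}

for rival in ("H100", "H200"):
    for key, label in [("fp16_tflops", "FP16 throughput"),
                       ("bw_tbs", "memory bandwidth"),
                       ("hbm_gb", "memory capacity")]:
        ratio = specs["MI300X"][key] / specs[rival][key]
        print(f"MI300X vs {rival} {label}: {ratio:.2f}x")
```

Under these assumed figures, the ratios line up with the article's claims: about 1.32x FP16 throughput, 1.58x bandwidth, and 2.4x capacity versus the H100, but only about 1.10x bandwidth and 1.36x capacity versus the H200.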

Lisa Su believes:

"This performance can directly translate into a better user experience. When you ask a model a question, you want it to answer faster, especially as the answers become more complex."

Lisa Su: AMD doesn't need to beat NVIDIA; second place is good enough

The main challenge for AMD is whether companies that have relied on NVIDIA will invest time and money to adopt another GPU supplier.

Lisa Su also acknowledged that these companies do need to "make an effort" to switch to AMD chips.

On Wednesday, AMD told investors and partners that it has improved ROCm, its software suite that competes with NVIDIA's CUDA. CUDA has been one of the main reasons AI developers favor NVIDIA.

Price is also important. AMD did not disclose MI300X pricing on Wednesday, but it will need to undercut NVIDIA's flagship chips, which sell for around $40,000 each. Lisa Su stated that AMD's chips must cost less to buy and operate than NVIDIA's in order to convince customers to switch.

AMD also announced that it has secured MI300X orders from some of the largest GPU buyers. Meta plans to use MI300X GPUs for AI inference tasks. Kevin Scott, Microsoft's Chief Technology Officer, said the company will deploy the MI300X in its Azure cloud computing service. Oracle's cloud service will also use the MI300X, and OpenAI will support AMD GPUs in its software product Triton.

According to the latest report from research firm Omdia, Meta, Microsoft, and Oracle are all major buyers of NVIDIA H100 GPUs in 2023.

AMD has not provided sales forecasts for the MI300X, but it estimates roughly $2 billion in total data center GPU revenue for 2024. By comparison, NVIDIA's data center revenue exceeded $14 billion in the most recent quarter alone, though that figure includes chips other than GPUs.

Looking ahead, AMD expects the AI GPU market to grow to $400 billion, double its previous forecast, a measure of how high expectations for high-end AI chips have become.

Lisa Su candidly told the media that AMD does not need to defeat NVIDIA to do well in this market. In other words, a second-place player can still thrive.

When talking about the AI chip market, she said:

"I think it can be said with certainty that NVIDIA is currently the market leader. We believe that by 2027, the market size could exceed $400 billion. We can get a share of that."