AMD releases new artificial intelligence chip aiming to compete with Nvidia Blackwell
AMD announced plans to mass-produce its new MI325X artificial intelligence chip in the fourth quarter of this year to strengthen its position in a market dominated by Nvidia. The chip adopts new memory technology to speed up AI computation, supporting up to 256GB of HBM3E memory and 6TB/s of bandwidth. Despite the new product launches, AMD's stock price declined. The company also introduced a new generation of server CPUs aimed at accelerating artificial intelligence processing.
According to the financial news app Zhitong Finance, AMD (AMD.US) announced plans to start mass production of the new MI325X artificial intelligence chip in the fourth quarter of this year to enhance its competitiveness in a market dominated by Nvidia (NVDA.US). The company revealed the news at an event held in San Francisco on Thursday, at a time when demand for AI processors from major tech companies such as Microsoft (MSFT.US) and Meta (META.US) continues to far outstrip supply.
AMD stated that the MI325X uses the same architecture as the MI300X launched last year but is equipped with new memory technology that improves AI computing speed. The chip is designed to compete with Nvidia's Blackwell architecture.
The MI325X chip features up to 256GB of HBM3E memory, 64GB more than the MI300X, with bandwidth rising from 5.3TB/s to 6TB/s. The core architecture is unchanged, including 5nm XCD compute dies, a 6nm IOD, 3.5D packaging, 153 billion transistors, and 304 compute units. It is worth noting that the power consumption of the MI325X reaches 1000W, up from 750W for the MI300X. The MI325X also supports eight chips working in parallel to form a single platform with 2TB of HBM3E memory and 48TB/s of aggregate bandwidth.
Despite the new product announcements, AMD's stock price declined slightly during the event. The company also introduced several new networking chips to accelerate data transfer between chips and systems in data centers. In addition, AMD unveiled a new generation of server central processing units (CPUs) code-named "Turin," including one version specifically designed to feed data to graphics processing units (GPUs) and further accelerate AI processing. The flagship chip has nearly 200 processing cores, is priced at $14,813, and uses the Zen 5 architecture, which AMD says delivers up to a 37% speed boost on advanced AI computing workloads.
Although AMD has launched new products, they are not expected to significantly dent Nvidia's data center revenue in the short term. Market analysis shows that in July of this year, AMD raised its AI chip revenue forecast for the year from $4 billion to $4.5 billion, benefiting from the boom in generative AI products and strong demand for the MI300X chip. Analysts predict that AMD's data center revenue this year will reach $12.83 billion, while Nvidia's data center revenue is expected to reach as high as $110.36 billion.
As of Thursday's close, AMD's stock price had fallen 4% to $164.18.