Hyper Race | SK Hynix Raises DDR5 Prices by 15%-20%

Wallstreetcn
2024.08.13 05:17

HBM is squeezing production capacity while AI server orders keep climbing

Author: Zhou Yuan / Wall Street News

Although NVIDIA's stock price has fallen nearly 20% from its June 20th high (as of August 12th), demand for HBM (High Bandwidth Memory) remains urgent.

On August 13th, Wall Street News learned exclusively from the supply chain that SK Hynix has raised the price of its DDR5 DRAM by 15%-20%. Supply chain sources told Wall Street News that the increase is mainly due to the capacity squeeze caused by HBM3/3E production, and that it was also driven by growing orders for downstream AI servers.

In May of this year, SK Hynix announced that most of its HBM capacity for 2024 and 2025 had already been fully booked. Micron Technology made a similar statement in March.

According to TrendForce, HBM's share of the overall memory market is expected to grow roughly 2.5-fold in 2024, from 2% in 2023 to 5% this year, and to reach approximately 10% by 2025.

A brief review of HBM's role in NVIDIA's AI accelerator cards: when an AI accelerator card (graphics card or GPU) is packaged, multiple HBM (DRAM) stacks are packaged together with the GPU.

The HBM stacks sit on either side of the GPU; within each stack, the DRAM dies are stacked and interconnected by TSVs (through-silicon vias). The GPU and the HBM stacks are mounted on an interposer (a silicon die used purely for interconnection) via uBumps; the interposer then connects through bumps to the packaging substrate, and the substrate's BGA solder balls finally attach to the PCB.
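Purely as an illustrative aid, here is a minimal Python sketch that prints the signal path described above, layer by layer; the labels are descriptive only, not vendor terminology or real part names.

```python
# Illustrative sketch of the HBM-to-PCB interconnect chain described above.
# Each entry pairs a layer with the link that carries signals to the next layer.
signal_path = [
    ("DRAM dies within the HBM stack", "TSVs (through-silicon vias)"),
    ("HBM stack / GPU die",            "uBumps onto the silicon interposer"),
    ("Silicon interposer",             "bumps onto the packaging substrate"),
    ("Packaging substrate",            "BGA solder balls"),
    ("PCB",                            None),  # end of the chain
]

for layer, link in signal_path:
    if link is None:
        print(layer)
    else:
        print(f"{layer}\n  --{link}-->")
```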

HBM occupies about 94% less surface area than GDDR, and its bandwidth per watt is more than three times that of GDDR5; in other words, delivering the same bandwidth takes roughly one third the power.
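Spelled out, the per-watt claim implies the following (the 3x ratio is the article's figure; any absolute wattages would be purely illustrative):

$$
\frac{(\text{bandwidth}/\text{watt})_{\text{HBM}}}{(\text{bandwidth}/\text{watt})_{\text{GDDR5}}} \approx 3
\;\Longrightarrow\;
P_{\text{HBM}} \approx \tfrac{1}{3}\,P_{\text{GDDR5}} \ \text{at equal bandwidth.}
$$

For example, if a GDDR5 configuration drew 30 W to sustain some bandwidth, an HBM configuration would draw on the order of 10 W for the same throughput.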

This type of memory was originally designed to give GPUs faster data transfer and large memory capacity. Before AI accelerator cards emerged, HBM's powerful performance lacked a matching workload, so there was little market for it. With the arrival of ChatGPT (based on GPT-3.5), demand for AI training and inference speed surged, and HBM found its market.

Because emerging AI workloads place high demands on data transfer rates, HBM's high-bandwidth characteristics have become sought after. Combined with the heterogeneous integration route used in packaging (which not only reduces physical footprint but also allows more memory to sit closer to the processor), this significantly increases memory access speed.

The direct result of these techniques is a much wider memory interface: an HBM stack exposes far more interconnect contacts on its underside than the number of traces connecting DDR memory to a CPU. Compared with traditional memory technologies, HBM therefore offers higher bandwidth, a larger I/O count, lower power consumption, and a smaller physical footprint.
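For a rough sense of scale (these widths are the standard figures for the respective interfaces, not numbers from the article): a single HBM stack presents a 1024-bit interface, versus 64 bits for a conventional DDR channel:

$$
\frac{1024\ \text{bits (one HBM stack)}}{64\ \text{bits (one DDR channel)}} = 16\times \ \text{wider}
$$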

HBM has now reached its fifth generation, HBM3E; before it came HBM (first generation), HBM2 (second generation), HBM2E (third generation), and HBM3 (fourth generation). The sixth generation, HBM4, is under development: SK Hynix says it will reach mass production of HBM4 in the second half of 2025, while Samsung Electronics claims to be ahead and will launch HBM4 in the first half of 2025.

HBM3E signals at per-pin speeds of 8 Gbps and above, allowing a stack to move up to about 1.225 TB of data per second; at that rate, a 163-minute full HD movie (about 1 TB) downloads in under a second. Samsung Electronics has already released a 12-layer stacked HBM3E with a capacity of up to 36GB; SK Hynix and Micron Technology have similar products, with bandwidth exceeding 1TB/s.
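The stack bandwidth follows directly from interface width and per-pin rate. As a sketch, assuming the standard 1024-bit HBM interface and a per-pin rate of 9.6 Gb/s (a figure chosen to reproduce the roughly 1.2 TB/s class numbers above, not taken from the article):

$$
1024 \times 9.6\ \text{Gb/s} = 9830.4\ \text{Gb/s} = 1228.8\ \text{GB/s} \approx 1.23\ \text{TB/s},
$$

so downloading a 1 TB file would take about $1 / 1.23 \approx 0.8$ seconds.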

Globally, only three companies can produce this type of memory: SK Hynix, Samsung Electronics, and Micron Technology. SK Hynix currently holds the largest market share at around 52.5%, Samsung Electronics about 42.4%, and Micron Technology 5.1%.

As for the market itself, TrendForce forecasts that HBM will account for over 20% of total DRAM market value this year, rising to over 30% by 2025.

Demand for AI accelerator cards in this AI wave has not diminished despite the delayed launch of NVIDIA's latest product, Blackwell.

As the biggest beneficiary of this AI wave, SK Hynix has had to cut DDR5 production capacity under overwhelming market demand for HBM: the requirements of a new HBM production line far exceed those of standard DRAM, so bringing up capacity takes more time.

There is one more piece of less optimistic news: in addition to NVIDIA's strong demand for HBM in AI accelerator cards, AMD's Versal series FPGAs also use HBM, further exacerbating the shortage of this high-performance memory product.