Track Hyper | Samsung has completed HBM4 design: leading SK Hynix by a half step

Wallstreetcn
2025.01.05 09:58

A slight lead, and Tesla also wants to get involved

Author: Zhou Yuan / Wall Street News

In the HBM technology race with its South Korean rival SK Hynix, Samsung Electronics has finally regained some face.

On January 5, supply chain sources reported that the memory business of Samsung's DS (Device Solutions) division completed the design of the HBM4 logic die in early January this year.

Based on this design, Samsung Electronics' foundry division is running trial production on a 4nm process. Once final performance verification of the logic die is complete, Samsung will provide HBM4 samples for verification.

The logic chip, also known as the logic die (or base die), sits at the bottom of the HBM stack of DRAM dies and is the core component that controls the multiple layers of DRAM above it.

The core advantage of HBM lies in its 3D stacking technology, which stacks multiple DRAM dies vertically. High-speed signals between the dies pass through Through-Silicon Vias (TSVs), greatly shortening data transmission paths and reducing latency, so the stack can feed processors with extremely high bandwidth.

Since Samsung Electronics lost its leading position in the HBM market to SK Hynix, it reshuffled the technical organization and leadership of its DS division four or five times in 2023, gritting its teeth in an effort to use HBM4 to gradually win back from SK Hynix the industry standing it once held exclusively.

In the HBM3e generation, Samsung Electronics' position as market leader was taken over by SK Hynix.

Throughout 2024, Samsung repeatedly reshuffled its technical staff, aiming to pull level with SK Hynix's HBM development in the HBM4 generation through new technical means, chiefly by using its own 4nm process for the base die.

This edge mainly exploits SK Hynix's lack of in-house foundry capability: previous reports indicated that SK Hynix has tied itself strategically to TSMC, relying on TSMC's 5nm process for the design and manufacturing of its HBM4 base die.

HBM (High Bandwidth Memory) is primarily used in high-performance computing (HPC), artificial intelligence (AI), and graphics processing (GPU) fields.

HBM technology has developed to the sixth generation, namely HBM (first generation), HBM2 (second generation), HBM2e (third generation), HBM3 (fourth generation), HBM3e (fifth generation), and HBM4 (sixth generation).

The first generation of HBM offered 128 GB/s of bandwidth, marking the start of high-speed data transmission for stacked memory; by HBM4, the data rate has reached 6.4 GT/s over a 2048-bit interface, giving a single stack about 1.6 TB/s of bandwidth, roughly 1.4 times that of HBM3e, while power consumption can be reduced by 30%.
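As a rough sanity check on these figures, the short Python sketch below recomputes per-stack bandwidth as data rate times interface width. The HBM4 numbers are the ones quoted above; the HBM3e pin rate used for comparison is an assumption (the article does not state it), chosen only to illustrate how the roughly 1.4x ratio arises.

```python
# Back-of-the-envelope check of the per-stack bandwidth figures quoted above.
# The HBM4 data rate (6.4 GT/s) and interface width (2048 bits) come from the
# article; the HBM3e pin rate below is an assumed value for illustration only.

def stack_bandwidth_gbs(data_rate_gtps: float, interface_bits: int) -> float:
    """Peak per-stack bandwidth in GB/s: transfers/s * bits per transfer / 8."""
    return data_rate_gtps * interface_bits / 8

hbm4_gbs = stack_bandwidth_gbs(6.4, 2048)   # ~1638 GB/s, i.e. ~1.6 TB/s
hbm3e_gbs = stack_bandwidth_gbs(9.2, 1024)  # assumed 9.2 GT/s over a 1024-bit bus

print(f"HBM4 per stack : {hbm4_gbs / 1000:.2f} TB/s")
print(f"HBM3e per stack: {hbm3e_gbs / 1000:.2f} TB/s")
print(f"HBM4 / HBM3e   : {hbm4_gbs / hbm3e_gbs:.2f}x")  # close to the ~1.4x cited in the text
```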

Such high transfer speeds put pressure on HBM4 mainly in the form of power consumption (heat), which in turn affects HBM4's performance. Within the full HBM4 package, the component that generates the most heat is the base die, which is why Samsung Electronics hopes to gain an edge with a more advanced 4nm process.

However, Samsung Electronics has never rivaled TSMC in power control on advanced nodes. Can it surpass TSMC this time? That will only become clear from test results once official HBM4 samples are released. From this perspective, Samsung Electronics' "lead" is limited to one aspect, faster design completion, while actual performance remains unknown.

Samsung Electronics is well aware of this, which is why industry reports quote the company as saying: "We no longer have the advantage of opening up a significant gap over competitors in the memory business as we did before; but since we have our own foundry process, we are optimistic about quickly manufacturing logic dies to meet customers' customization needs."

Clearly, Samsung Electronics understands that this time it has only finished the base die design ahead of its competitor by using its own 4nm process; it has not achieved a comprehensive lead over SK Hynix.

To stay further ahead of SK Hynix, Samsung Electronics also intends to stack its HBM4 with sixth-generation 10nm-class (1c) DRAM dies, whereas SK Hynix currently uses fifth-generation 10nm-class (1b) DRAM.

Even a slight technological lead is, after all, still a lead, isn't it?

Earlier industry reports indicated that Samsung Electronics plans to use a new method, hybrid bonding, to stack 16-high (16-layer) HBM4 products. HBM4 currently comes in two configurations, 12-high and 16-high, while HBM3e comes in 8-high and 12-high versions.

Hybrid bonding is a process that joins stacked dies directly through copper connections, without the traditional "bumps" used to link chips, thereby reducing size and improving performance.

To date, Samsung Electronics has adopted the more advanced "thermal compression bonding with non-conductive film (TC-NCF)" technique, in which a film-like material is placed between dies at each stacking step; this approach allows HBM products of up to 12 layers to be stacked.

At present, Samsung Electronics is pushing its foundry process forward rapidly. Because previous generations of its products lagged behind competitors, the company is accelerating HBM4 so that it can respond quickly to customers' sample-testing and improvement requests.

SK Hynix is not idle either: the company plans to mass-produce HBM4 by the end of 2025, and Samsung Electronics is on a similar timeline.

One more piece of HBM4-related news suggests that Samsung Electronics' all-out effort on the HBM4 generation may pay off.

Like Microsoft, Meta, and Google, Tesla is also seeking samples of the upcoming HBM4 memory chips, and it has recently approached both Samsung Electronics and SK Hynix.

Tesla plans to integrate HBM4 into its Dojo supercomputing platform to speed up training of the neural networks behind its "Full Self-Driving" capability. HBM4 could also be deployed in Tesla's data centers and in future autonomous vehicles.

The Dojo system currently uses older HBM2e chips to train the complex AI models that Tesla's Full Self-Driving capability relies on, and Tesla urgently needs to replace them with more powerful HBM products to cope with rapidly growing data volumes.