Zhitong
2024.06.06 03:33

Guosheng Securities: NVIDIA's Ethernet-based million-card clusters comprehensively upgrade the communication network thesis

Guosheng Securities released a research report stating that NVIDIA has launched a million-card cluster architecture based on Ethernet, which will bring new demand and a stronger investment thesis to the communication market, marking the entry of large cloud and technology giants' AI inference demand into a high-growth period. The move is intended to meet the needs of downstream cloud vendors by enabling ultra-large-scale GPU cloud computing clusters. Ethernet offers better scalability and more room for optimization, and is well suited to cloudification and resource virtualization, aligning with the demands of AI inference and cloud computing. The event will push communication networks to new heights.

According to Zhitong Finance APP, Guosheng Securities released a research report stating that the larger demand for inference requires million-card, ultra-large-scale GPU clusters, which represent an entirely new large-scale incremental market for communication networks. NVIDIA's Ethernet layout is also intended to meet the development needs of downstream cloud vendors' customers. Such ultra-large-scale GPU cloud computing clusters will bring new demand and a stronger investment thesis to a communication market dominated by switches and optical modules. The team believes NVIDIA's entry into the Ethernet ecosystem signals that AI inference demand from large cloud and technology giants is entering a period of rapid growth.

Event: At a recent conference in Taiwan, NVIDIA announced the next-generation Ethernet switch Spectrum-X, stating that it will be able to support ultra-large-scale data center clusters at the million-card level in the future.

Why does this matter? North America has entered the stage of practical application of large AI models, and numerous innovative applications built on them are emerging. Judging from their strategies, almost all cloud vendors are actively selling their cloud infrastructure to customers: by supporting a wide range of open-source large-model projects, providing the ability to quickly deploy large models and the applications built on them, and offering aggressive pricing, they are trying to win over small and medium-sized AI startups on extremely friendly terms. This type of demand is primarily for inference, indicating that cloud vendors have entered a stage of competing for inference customers. The dynamic is highly similar to the previous round of rapid growth in cloud computing, except that the CPUs in the infrastructure clusters have been replaced by GPUs.

Ethernet-based million-card clusters are pushing communication networks to new heights. Compared with InfiniBand's ultra-high performance on the training side, Ethernet offers better scalability and more room for optimization, along with stronger advantages in cloudification and resource virtualization; these strengths grew out of the rapid development of ultra-large-scale cloud computing on the previous generation of Ethernet architecture. Demand on the AI inference side is essentially the same as cloud computing, but GPUs generate orders of magnitude more data traffic than CPUs. Cloud computing fundamentally builds its scale advantage on ultra-large clusters and resource virtualization; an ultra-large-scale cloud computing cluster built on millions of GPUs will place unprecedented requirements on the communication network, driving massive demand for high-end Ethernet switches and high-speed optical modules.

Limited to training? Clearly not. The market used to believe that InfiniBand was the core communication-network demand, assuming that only training clusters required efficient networks while inference did not. Judging from the current situation, however, the larger inference demand, which requires million-card, ultra-large-scale GPU clusters, represents an entirely new large-scale incremental market for communication networks. We believe NVIDIA's Ethernet layout is also intended to meet the development needs of downstream cloud vendors' customers. Objectively speaking, demand on the AI inference side far exceeds that on the training side, and such ultra-large-scale GPU cloud computing clusters will bring new demand and a stronger investment thesis to a communication market dominated by switches and optical modules.

Investment Recommendation: We believe NVIDIA's entry into the Ethernet ecosystem marks the beginning of a high-growth period for AI inference demand from major cloud and technology giants. We recommend that investors focus on network product suppliers in the Ethernet ecosystem, with a key focus on optical module vendors such as InterXuChuang and XinYiSheng, as well as Tianfu Communication. In the switch segment, we recommend Shanghai Electric and Industrial Fulian. We also recommend watching Ethernet-related stocks in the US market, such as Broadcom, Arista, and Marvell.

Risk Warning: AI progress may fall short of expectations; competition may intensify.