
AI frenzy continues! NVIDIA: still in short supply into 2025 | AI dehydration

B100 supply is ramping faster than expected: shipments are due to begin in Q2, with volumes building gradually through Q3. This year, the B-series chips will contribute a significant amount of revenue.

Author: Zhang Yifan

Editor: Shen Siqi

Source: Hard AI

Despite analysts' already high expectations ahead of the earnings call, the figures NVIDIA actually disclosed exceeded market forecasts.

On May 23, NVIDIA released its fiscal Q1 2025 results, with both revenue and profit exceeding expectations.

Even before the earnings call, impressive data from NVIDIA's AI chip supply chain had pushed analysts' expectations high: consensus called for 246% year-on-year revenue growth. Even that forecast fell well short of the 262% year-on-year growth NVIDIA actually reported.
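As a sanity check on that arithmetic, here is a minimal sketch, assuming the publicly reported revenue figures of roughly $7.19 billion for Q1 FY2024 and $26.04 billion for Q1 FY2025 (taken from NVIDIA's filings, not from this article):

```python
# Rough sketch of the headline growth arithmetic.
q1_fy24_revenue = 7.19   # Q1 FY2024 revenue, $B (assumed from public filings)
q1_fy25_revenue = 26.04  # Q1 FY2025 revenue, $B (assumed from public filings)

actual_growth = q1_fy25_revenue / q1_fy24_revenue - 1
consensus_growth = 2.46  # the 246% analyst estimate cited above

print(f"reported YoY growth: {actual_growth:.0%}")    # ~262%
print(f"consensus estimate:  {consensus_growth:.0%}")  # 246%
```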

From the information officially disclosed by the company, there are several reasons for this better-than-expected performance.

1. B100 Supply Exceeding Expectations

Although demand for NVIDIA's B-series and H-series is strong, given the tight supply of HBM and CoWoS packaging, as well as TSMC's constrained advanced-node capacity, the market had previously expected B100 shipments to begin in the second half of 2024 at the earliest. Some more conservative institutions even put first shipments in Q4 2024, arguing that B100 volume would not ramp until 2025.

However, on the earnings call the company revealed that Blackwell chips will begin shipping in Q2, ramp up in Q3, and are expected to bring in significant revenue this year.

The company also addressed earlier market concerns about a slowdown in chip demand: the H200 and B-series remain in short supply, and the shortage may persist into 2025.

The company attributed this strong demand mainly to the substantial return on investment it enables. It disclosed that for every $1 spent on NVIDIA AI infrastructure, cloud providers have the opportunity to earn $5 in GPU instance-hosting revenue over four years; likewise, for every $1 spent on HGX H200 servers, API providers hosting Llama 3 services can earn $7 in revenue over four years.
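A minimal sketch of that payback math follows, assuming the revenue multiples accrue evenly over the four-year hosting window (the even spread is an illustrative simplification, not something the company specified):

```python
# Payback math cited on the call: total revenue per $1 of spend
# over a four-year hosting window.
def hosting_return(spend: float, revenue_multiple: float, years: int = 4):
    """Return (total revenue, simple average revenue per year)."""
    total = spend * revenue_multiple
    return total, total / years

# Cloud providers: $1 of NVIDIA AI infrastructure -> $5 over four years
total, per_year = hosting_return(1.0, 5.0)
print(f"GPU hosting: ${total:.0f} total, ${per_year:.2f}/year per $1 spent")

# API providers on HGX H200 serving Llama 3: $1 -> $7 over four years
total, per_year = hosting_return(1.0, 7.0)
print(f"Llama 3 serving: ${total:.0f} total, ${per_year:.2f}/year per $1 spent")
```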

2. Business Expansion

NVIDIA has expanded its business to become a supplier of complete systems, not just a seller of GPUs.

The large clusters built by companies like Meta and Tesla are examples of essential infrastructure for AI production. NVIDIA is currently working with over 100 customers to build AI clusters ranging from hundreds to tens of thousands of GPUs, with some reaching 100,000 GPUs.

Beyond commercial customers, there is also significant demand from sovereign AI. The company stated on the call that sovereign AI revenue will grow from zero last year to billions of dollars this year.

3. Ethernet Product Spectrum-X to Contribute Billions of Dollars in Revenue

To meet Ethernet-ecosystem demand in the networking market, NVIDIA has followed InfiniBand with Spectrum-X, a line of networking devices built for the Ethernet ecosystem. On the call, the company shared the latest Spectrum-X developments.

Spectrum-X is now in volume production with multiple customers, including one deployment in a large 100,000-GPU cluster.

For reference, at its recent earnings call, switch leader Arista said its Ethernet products are planned to connect 100,000 GPUs by 2025; Ethernet's share of AI computing clusters may gradually expand.

Spectrum-X opens a brand-new market for NVIDIA's networking business, and the company expects it to become a multibillion-dollar product line within a year.

4. Demand for Autonomous Driving AI

At Tesla's earnings call last month, Musk revealed that Tesla will have 85,000 NVIDIA H100 GPUs for AI training by the end of 2024.

This time, NVIDIA stated on the call that it is currently supporting Tesla's expansion of its AI training cluster to 35,000 H100 GPUs, an expansion that has driven significant progress in Tesla's FSD V12.

The company expects automotive to become its largest enterprise vertical within the data center business this year, driving multibillion-dollar revenue opportunities across on-premises and cloud deployments.

5. Inference Chips Are Getting Harder to Build; NVDA's Barrier Remains

AI chips are divided into training chips and inference chips.

Market rumors had previously held that the technical barriers to inference chips were low, which could produce a more fragmented competitive landscape on the inference side and erode NVIDIA's share of that market.

The company responded on the call that as model complexity, the number of users, and the number of queries per user all increase, inference chips will grow more complex as well.

Actual shipment data also backs this up: over the past four quarters, inference has driven approximately 40% of the company's data center revenue.
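As a rough illustration of what that share implies, the sketch below assumes the ~40% figure applies to quarterly data center revenue and uses the roughly $22.6 billion Q1 FY2025 data center figure from NVIDIA's public report (an assumption, not a number in this article):

```python
# Implied revenue split, assuming the ~40% inference share applies to
# a single quarter of data center revenue.
dc_revenue_bn = 22.6     # Q1 FY25 data center revenue, $B (assumed from filings)
inference_share = 0.40   # share cited by the company

print(f"implied inference-driven revenue: ${dc_revenue_bn * inference_share:.1f}B")
print(f"implied training-driven revenue:  ${dc_revenue_bn * (1 - inference_share):.1f}B")
```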

Finally, the call provided guidance for Q2 FY2025: expected revenue of $28 billion, plus or minus 2%; an expected GAAP gross margin of 74.8%, plus or minus 50 basis points; and a full-year gross margin expected in the mid-70% range. The company noted that its future revenue growth will come from annual iterations of new products, as well as long-term revenue from networking and software.
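For clarity, here is a small sketch converting those guidance midpoints and tolerances into explicit ranges:

```python
# Convert the Q2 FY2025 guidance midpoints and tolerances above
# into explicit low/high ranges.
rev_mid_bn, rev_tol = 28.0, 0.02  # $28B revenue, plus or minus 2%
gm_mid, gm_bp = 0.748, 0.005      # 74.8% GAAP gross margin, plus or minus 50 bp

print(f"revenue range:      ${rev_mid_bn*(1-rev_tol):.2f}B to ${rev_mid_bn*(1+rev_tol):.2f}B")
print(f"gross margin range: {gm_mid-gm_bp:.1%} to {gm_mid+gm_bp:.1%}")
# -> revenue $27.44B to $28.56B; gross margin 74.3% to 75.3%
```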