Jensen Huang: Mass production of Blackwell has already begun, with deliveries scheduled for the fourth quarter

Wallstreetcn
2024.08.29 01:46

Jensen Huang was interviewed by foreign media after NVIDIA released its earnings report. He said that the most important thing for NVIDIA is to improve the performance and energy efficiency of its next-generation products, and that NVIDIA's cloud strategy is to become the best AI partner within every cloud rather than a standalone cloud provider.

At 7:00 a.m. Beijing time on the 29th (7:00 p.m. US Eastern Time on the 28th), after NVIDIA released its earnings report, Jensen Huang was interviewed by Bloomberg TV host Ed Ludlow. Regarding the market's concern about the Blackwell delay, he said, "Blackwell has started mass production and will begin shipping in the fourth quarter."

The earnings report released last night showed that NVIDIA's second-quarter revenue hit a single-quarter record, beating expectations with 122% growth. The third-quarter revenue guidance implies growth of at most 83%, the first time in six quarters that growth falls below 100%; that is stronger than the Wall Street consensus but below the most optimistic expectations of continued triple-digit growth. After hours, NVIDIA initially rose more than 2% but later fell more than 8%.

The key points of the interview are summarized as follows:

  • "We have started mass production (Blackwell) and will begin shipping in the fourth quarter. We will generate billions of dollars in Blackwell revenue."
  • Applications for accelerated computing are not limited to generative AI, but also include database processing, data preprocessing, and post-processing.
  • Some countries realize that digital knowledge is part of natural and national resources. They must harvest this knowledge, process it, and transform it into national digital intelligence. This is what we call sovereign AI.
  • NVIDIA's most important focus is to improve the performance and efficiency of next-generation products.
  • Strong demand for AI chips, with significant improvement in supply compared to the same period last year.
  • NVIDIA's cloud strategy: to become the best AI partner in every cloud, rather than an independent cloud provider.

Blackwell has started mass production and will begin shipping in the fourth quarter

Ed Ludlow:

I think the market hopes to learn more about Blackwell; they want more details. Looking at the whole conversation and the record, this clearly seems to be a production issue, not a fundamental design issue with Blackwell. But what does real-world deployment look like? Is there a delay in the deployment schedule that affects the product's revenue?

Jensen Huang:

We have made significant changes to increase Blackwell's production capacity, which is great. Today, we are testing Blackwell around the world. We show people the Blackwell systems we have brought up and are running; you can find pictures of Blackwell systems online. We have started mass production and will begin shipping in the fourth quarter. We will generate billions of dollars in Blackwell revenue, and we will build from there.

Of course, initially the demand for Blackwell far exceeds its supply because demand is so high. But our supply will be very large, and we will be able to ramp up starting in the fourth quarter. We will have billions of dollars in revenue, and we will build from there into the first quarter, the second quarter, and next year. We will also perform well next year.

Applications for accelerated computing are not limited to generative AI

Ed Ludlow:

Apart from supercomputing and exascale computing, what are the demands for accelerated computing?

Jensen Huang:

Supercomputing accounts for about 45% of our current data center business, and the business is relatively diversified: we have supercomputing, internet service providers, sovereign AI, industry, and enterprises. So it is quite diversified. Segments other than supercomputing account for the remaining 55%.

Now, application usage across the entire data center starts with accelerated computing. Of course, generative AI, which is what we are best known for, has received the most attention. But at the core we also do database processing, data preprocessing and post-processing that then feed generative AI, transcoding, scientific simulations, computer graphics, and of course image processing. So there are many applications where people use accelerated computing, and generative AI is one of them. Let's see, what else can I say? I think that's it.

Sovereign AI: Treating digital data as important as land and air

Ed Ludlow:

Please talk about sovereign AI. We have discussed this topic before, and hearing some of the behind-the-scenes details is really interesting. In this fiscal year, sovereign AI sales will reach, I think you said, double-digit billions of dollars. But for laymen, what does that mean? Does it mean making deals with specific governments? If so, please specify where.

Jensen Huang:

Not necessarily; sometimes it means making deals with specific regional service providers funded by governments. That's usually the case. Take Japan, for example: I think the Japanese government has provided billions of dollars in subsidies to several different internet companies and telecom companies to fund their artificial intelligence infrastructure. India has a sovereign AI program and is building its artificial intelligence infrastructure. Canada, the UK, France, Italy, Singapore, Malaysia, you know, many countries are subsidizing their regional data centers so that they can build their artificial intelligence infrastructure.

They realize that their country's knowledge, their data, their digital data, is also a natural resource, not just the land they sit on or the air above their heads. They now realize that digital knowledge is part of their natural and national resources, and they must harvest this knowledge, process it, and transform it into national digital intelligence. This is what we call sovereign AI. You can imagine that almost every country in the world will eventually realize this and build its own artificial intelligence infrastructure.

The most important thing is to improve the performance and energy efficiency of next-generation products

Ed Ludlow:

You used the word "resources," which makes me think of energy needs. I remember on the conference call you mentioned that the computing requirements for the next generation of models will increase by several orders of magnitude, but how will the energy requirements grow? What advantages do you think NVIDIA has in this regard?

Jensen Huang:

Well, the most important thing we do is improve the performance and energy efficiency of next-generation products. At the same power level, Blackwell's performance is many times higher than Hopper's. That is energy efficiency: higher performance at the same power consumption, or the same performance at lower power consumption. That's the first point.

The second point is the use of liquid cooling. We support air cooling and we support liquid cooling, but liquid cooling is much more energy efficient. So, combining all of these, you make significant progress.

Equally important is realizing that artificial intelligence models can be trained in different locations. We will increasingly see AI being trained elsewhere, and the model then being brought back and used close to people, even running on your PC or phone.

So we will train large models, but the goal is not necessarily to always run large models. Of course, that can be done for some advanced services and very high-value AI, but these large models are likely to help train and teach smaller models. What we will ultimately have is, you know, several large models that can train a bunch of small models, and those can run anywhere.

Jensen Huang: Strong demand growth, significant improvement in supply compared to last year

Ed Ludlow:

You clearly explained that demand for building generative AI products, at the model level or even the GPU level, is greater than current supply. Especially in the case of Blackwell, please explain the dynamics of your product supply and whether you see quarter-over-quarter improvement from the end of this fiscal year to some point next year.

Jensen Huang:

In fact, our growth indicates that our supply is improving, and our supply chain is very extensive, one of the largest in the world. We have incredible partners who have done an outstanding job in supporting our growth. As you know, we are one of the fastest-growing technology companies in history, and none of this would have been possible without very strong demand and very strong supply. We expect the supply in the third quarter to exceed the second quarter. We expect the supply in the fourth quarter to exceed the third quarter, and we expect the supply in the first quarter to exceed the fourth quarter. So I think our supply situation next year will see significant improvement compared to last year.

Regarding demand: Blackwell is such a leap, and a few things are happening. You know, just among the foundation model makers themselves, the scale of the foundation models has grown from billions of parameters to tens of trillions of parameters. They are also learning more languages, not just human languages; they are learning the language of images, sounds, and video, and even the language of 3D graphics. Once they can learn these languages, they can understand what they see, and they can also generate what they are asked to generate. They are learning the language of proteins, chemicals, and physics, whether that is fluids or particle physics. So they are learning many different languages, or learning the meaning of what we call modalities, but basically they are learning languages.

So the scale of these models is expanding. They are learning from more data, and there are more model makers than a year ago. Because of all these different modalities, the number of model makers has increased significantly. That is just one factor; the frontier model makers and foundation model makers have themselves made great progress.

Moreover, the generative AI market has truly diversified. You know, in addition to internet service providers, there are also startups, and now enterprises are joining in as well. Different countries are all joining. So demand is indeed growing.

NVIDIA's Cloud Strategy: Becoming the Best AI Partner in Every Cloud, Not a Standalone Cloud Provider

Ed Ludlow:

I'm sorry to interrupt you. Your business has also become diversified. When I told our audience that you were about to come on, I received a lot of questions. Perhaps the most common question is: what is NVIDIA? We have talked about you as a systems supplier, but there is a lot of attention on NVIDIA's GPU cloud. Finally, I would like to ask: do you have plans to become a true cloud computing provider?

Jensen Huang:

No. Our GPU cloud aims to be the best version of the NVIDIA cloud embedded in every cloud. NVIDIA DGX Cloud is embedded in GCP, Azure, AWS, and OCI. So we build our cloud inside their clouds so that we can achieve the best version of our cloud, work with them to make the cloud infrastructure, the AI infrastructure, the NVIDIA infrastructure as excellent as possible, and deliver the best overall total cost of ownership (TCO). So this strategy is very effective.

Of course, we are heavy consumers of AI because we create a lot of AI ourselves. Without AI, our chips could not be designed. Without AI, our software could not be written. So we ourselves use, you know, a lot of AI: the autonomous vehicle and general robotics work we are doing, the Omniverse work. So we use DGX Cloud ourselves. We also use it as an AI foundry; we make AI models for companies that want that expertise. Just as TSMC is a foundry for chips, we are an AI foundry.

So we do this for three fundamental reasons: first, to have the best version of NVIDIA in every cloud; second, because we are heavy consumers ourselves; and third, because we use it as an AI foundry to help all the other companies.