Wallstreetcn
2024.03.06 07:00

NVIDIA's CFO said the AI infrastructure market could approach $2 trillion, stressing that NVIDIA is not just a hardware company selling AI chips but a provider of comprehensive accelerated computing solutions for data centers.


Author: Bu Shuqing

Source: Hard AI

At the recently held 2024 Morgan Stanley TMT Conference, NVIDIA said it sees enormous commercial potential in AI and accelerated computing, and expects the AI infrastructure market to grow far beyond the current $1 trillion. NVIDIA CEO Jensen Huang has said in a roadshow that the market size could approach $2 trillion.

Kress stated that NVIDIA is not just positioning itself as a hardware company providing AI chips, but as a company offering comprehensive accelerated computing solutions for data centers.

As more products are introduced, NVIDIA's gross margin may settle back to around 75%, as the company moves into a phase of product diversification after the H100-driven surge.

Addressing questions about inference contributing about 40% of data center revenue, Kress noted that NVIDIA has a vast recommendation-engine and search base; with generative AI still in its early stages, the inference business is expected to keep growing.

Beyond data centers, NVIDIA is also turning its attention to personal computers and workstations. Kress noted that in the future, models will not live only in the cloud: smaller LLMs can also be deployed on laptops.

Kress also highlighted that NVIDIA is shortening its product innovation cycle from 2 years to about 1 year. This pace is hard for competitors to match and positions the company for even faster innovation ahead.

Here are some highlights from Kress at the 2024 Morgan Stanley TMT Conference:

  • The revolution of accelerated computing is imminent. This revolution is crucial as Moore's Law has reached its limit. New platforms will be necessary to drive accelerated computing and may coexist with us for the next few decades.
  • Every quarter, we have the same task, which is to increase supply for customers, and we have achieved that. Even as we enter this year, we will continue this practice.
  • As we introduce more types of products in the future, we may return to the level before the launch of H100, with a gross margin of around 75%. I believe this is a reasonable level.
  • There is indeed a lot of wasted power, consumed by inefficiently built data centers that may need to be rebuilt outright.
  • When you weigh the availability of next-generation products, the likelihood of tight supply, and the certification cycle, the best and most advanced product available today is still the H100.
  • In this brand-new era of privacy, marketing needs a complete reshaping, and recommendation engines play a crucial role in it. Search is another very important workload... Generative AI is just getting started. With our vast recommendation-engine and search base, inference should, almost by definition, continue to grow.
  • Models will not live only in the cloud. There may be smaller models on PCs and laptops, and some creative professionals may build a separate model on their workstations.

Full Interview with Colette M. Kress, Executive Vice President and Chief Financial Officer, at the 2024 Morgan Stanley TMT Conference

Joseph Lawrence Moore, Executive Director, Research Division, Morgan Stanley: Great to have you back. I'm Joe Moore. A warm welcome to Colette M. Kress, CFO of NVIDIA. Let me quickly run through the research disclosures. For important disclosure information, please visit the Morgan Stanley research disclosure website at morganstanley.com. If you have any questions, please contact your Morgan Stanley sales representative.

Colette, before we begin, a joke I made earlier: I don't know how you manage your life, because NVIDIA takes up a big part of mine, and there are 30 people like me here covering you, while you are the one CFO of NVIDIA, so you must be very busy. Thank you very much for taking the time to be here.

Perhaps we can start here. You were here last year, when we were just beginning to understand the importance of ChatGPT to the entire AI ecosystem. NVIDIA's data center revenue was about $4 billion a quarter then, and now it is about to reach $20 billion per quarter. I'm not exaggerating; it really is close to $20 billion per quarter.

How did you achieve this goal in one year? I mean, when you were sitting here a year ago, did you ever consider this possibility?

Did Jensen Huang say, "Hey, are we ready for potential demand of $20 billion per quarter?" How did you do it? How did you meet this demand?

Colette M. Kress: First, I must make a brief opening statement. As a reminder, this presentation contains forward-looking statements. I advise investors to read the reports we file with the SEC to understand the risks and uncertainties our business faces.

Let's go back a year. It was truly an extraordinary year, and an exciting period. Yes, it was a busy time. But a year ago, standing on this stage, we had a completely different perspective. Generative artificial intelligence was still at an exploratory stage; people were learning what ChatGPT was and how it could be used.

From our perspective, it was a significant product. We knew OpenAI well; we had been collaborating with them for several years and understood their work. We might have seen it as just another important milestone in our long journey of deep learning and GPU-accelerated applications, but it turned out to be a very important large language model (LLM), an area we had long been working in.

But things have certainly changed, given the global attention. And when I say global: every country, every company, every consumer, every CEO in the world now has a real understanding of what AI can do, whether in generating revenue or improving business efficiency and productivity.

In terms of our company's overall goal, we have been focused on accelerated computing for about 15 years now. Our mission has been to help people understand accelerated computing and prepare for the coming transformation. This transformation is crucial because Moore's Law has reached its limit; new platforms will be needed to drive accelerated computing, and they may be with us for the next few decades.

Artificial intelligence has turned out to be the killer application driving the early adoption of accelerated computing. Therefore, we are tirelessly expanding our platform, systems, software, and everything we can do for future data centers. We are extremely excited about this, we see it as a turning point, and generative artificial intelligence is a crucial part of this work.

Joseph Lawrence Moore:

Fantastic. Perhaps we can talk about the demand side, but before that, maybe you can talk about how you achieved such rapid growth. I find myself discussing NVIDIA more and more with non-semiconductor investors. Achieving 4 to 5 times growth in an enterprise of this scale and complexity is truly remarkable, requiring all the specialized multi-chip packaging processes and so on. We have seen other companies facing supply constraints in similar situations while growing only 40% or 50%.

So how did NVIDIA do it? How did you achieve such aggressive expansion? I will touch on some people's supply chain anxieties about you eventually catching up with demand. How do you view this? I mean, ultimately, it's about striving to catch up with demand.

Colette M. Kress: Correct. When it comes to supply expansion, we have to increase supply from multiple angles. This is not something that can be achieved overnight. For many years we have discussed the flexibility and redundancy a future supply chain would need; every business must consider this when expanding. It simply happened sooner than we expected. But we are still building on the foundation we had laid.

First of all, please remember that many of our suppliers and partners have been working with us for decades. It's a two-way conversation: they ask how they can help us expand capacity with existing suppliers and how they can help us add supply from new ones. We are also seeking new suppliers ourselves to add capacity for the work we are doing.

Lastly, we are focused on shortening the manufacturing cycle, breaking it down, and finding ways to get inventory to market faster. In the past year we have made significant progress. Every quarter we have the same task, to increase supply for customers, and we have achieved that. Going into this year, we will continue this practice. So we are very satisfied with this work.

Joseph Lawrence Moore: Thank you. Looking at the current environment, do you feel you are close to meeting demand? People watch delivery lead times closely for an answer to this question. But there are still issues such as power supply and rack space that your customers are dealing with, and in many cases those have not kept up with the supply you provide. It seems final demand for GPUs remains strong and unmet. So, to what extent do you think you are meeting this demand?

Colette M. Kress: Yes. We have been facing this demand for about a year now. People are not saying they need these on a certain day, but they are queuing up in advance to meet their anticipated needs. We are meeting a significant portion of that demand. But please remember, this largely depends on a very important product we launched into the market about a year ago, the H100. The H100 has been a real success in our platform architecture. But remember, we have new products about to enter the market, which will move into the next phase of supply and demand management.

I may discuss this in more detail later. Our focus is on maintaining a balance between supply and demand, while helping customers understand the requirements for building and deploying these products as we launch them.

Looking back, leasing a data center, or building one out from an unequipped shell, can take a year before everything inside is set up and ready. The planning process is lengthy because you also have to consider what changes when new products launch: How should I think about the data center's power supply? How should I think about its overall configuration? Our top customers and data center builders have been preparing for the accelerated computing transition for years and keeping all of this aligned. The current situation is good.

There is indeed a lot of wasted power, consumed by inefficiently built data centers that may need to be rebuilt outright. That may be the first thing they want to do. In the long run, though, securing power for future data centers will be a key requirement. Yes, our top clients are already working on all these issues.

Joseph Lawrence Moore:

During the last earnings call, Jensen Huang mentioned a hyperscale customer that extended the depreciation period of its traditional servers. You also suggested that, to some extent, NVIDIA is displacing traditional server workloads, or at least competing for traditional server budgets. Is this true? Or do people simply want to upgrade servers but need to address AI first? Do you think you are now competing with traditional server workloads? Is it a budget issue, or a functionality issue?

Colette M. Kress:

Yes, that's a very good question. It's hard to say how this will ultimately play out, but there may be multiple scenarios. What we see in the market is that over the past 10 to 20 years, roughly $250 billion has been spent annually on data center capital expenditure, and that figure was stable for a long time. Last year it grew for the first time in a long while. The focus is on accelerated computing, of course, but you also see companies extending the lifespan of existing x86 servers, letting them keep running rather than upgrading.

When considering the use of capital, all companies will consider what return on investment they will get. They may prioritize the most important projects. For them, the most important project at the moment is to remain competitive in the field of artificial intelligence. All companies will embed artificial intelligence technology. Therefore, artificial intelligence has become a very important part of their capital expenditures.

The question is whether they will continue investments of the low-return type. Probably not; those will be the first to be replaced by more efficient solutions such as accelerated computing and artificial intelligence. So I think you will see this pattern going forward.

Joseph Lawrence Moore:

That's very helpful. Thank you. You mentioned during the earnings call that when new products launch, you may face demand you cannot meet. You have officially announced the H200, but there are other products not yet announced. I think Jensen Huang revealed in some news reports that you will also see very high demand for the next generation of products. Can you talk about this? How can you foresee high demand for unannounced products before their launch?

Colette M. Kress:

Yes. Over decades of refining how we bring products to market, one important step is staying in close contact with major clients we have worked with for more than 10 years. Early in a new architecture, it comes as no surprise to them, because we work to understand their needs and incorporate those requirements into our architecture. They then get a good view of the specifications and samples of upcoming products, even at an early stage.

We also have a good understanding of their expected demand levels, which helps us gauge their needs when launching a new architecture. This is why Jensen Huang said supply will be very tight; in other words, demand may exceed our initial supply. So once again, we are starting to meet the demand in front of us.

Joseph Lawrence Moore: Great. I know you will soon say more about these new products, but based on all the marketing you have done and the feedback we hear from customers, they look very promising. So as the launch approaches, how do you think about the transition? When very attractive new products like the B100 and B200 appear, is there a risk that old products stall? Or do you already know where all the H100s will go during the interim?

Colette M. Kress: Yes. This ties into our work to shorten the architecture cycle from 2 years to 1 year. But even within the same architecture, we can now launch additional key products to meet particular market needs; the H200, built on the H100 foundation, is an example.

We have seen time and again that once you adopt an architecture and keep using it, you must certify it for systems, software, and security, which is an important part of customers' processes. That continuity will drive an important demand cycle for many. And keep in mind that many people in this room, in this city, have not even touched an H100 yet. So even as we introduce new products, access to the H100 remains very important, both for clusters already built and for those not yet started. When you weigh the availability of next-generation products, the likelihood of tight supply, and the certification cycle, the best and most advanced product available today is still the H100.

Joseph Lawrence Moore: Great. Let's follow up on those customers with unmet needs. You have talked about government buyers and non-traditional enterprise software buyers. Obviously a sovereign nation would want its own hardware rather than building on a public cloud. And one global region has not been allowed to purchase your products for several months. So, can you summarize where this unmet demand is coming from? When you say every government in the world wants to do this, is that an exaggeration, or do you really see that level of demand?

Colette M. Kress: Yes, there are unique needs across many different fields, including industries we have not yet reached. We have very strong connections in healthcare, financial services, automotive, manufacturing, and other sectors. But we also see every enterprise software company advancing generative AI applications, so there is still significant demand from American companies. We have also discussed sovereign AI, and the unique perspective OpenAI and ChatGPT created. ChatGPT is native to the United States; it uses American language, American culture, and American slang throughout. Many other countries want something of their own culture and language. So the work of building LLMs in those regions is being driven by sovereign nations, and it is very important.

You will also see companies interested in building their own LLMs on that basis and applying them at the corporate level. This interest from emerging enterprises and sovereign nations has become an important trend. We have a fairly large project pipeline here and collaborate with them, because they have not yet reached the computing capacity we have in the United States.

So, we will continue to make significant efforts to build for these new enterprises and sovereign nations, while also delivering products that meet the expectations of our partners.

Joseph Lawrence Moore:

Great. Excellent. You mentioned on this quarter's earnings call that about 40% of data center revenue comes from inference. This is actually the question I have been asked most since the quarter, because we are still at a relatively early stage of actually using these models. A figure like this suggests the growth outlook for inference is clear and strong for quite some time. But how do you know? When I talk to people in the cloud computing business, they say it's just another GPU and they are not sure how much of it is inference. You said on the call that this is an estimate, but you believe a conservative one. Maybe you can talk more about this 40% figure?

Colette M. Kress:

Yes, that's a very good question for us, and I don't want the number to catch you off guard. What we actually did is look at our most powerful systems and the work our engineers do with partner companies. We really studied all the projects we are running with many clients and were able to categorize their use cases.

So yes, we are in the early stages of generative AI, and many people are still building LLMs. Some companies have already reached the monetization stage, like Copilot. But alongside LLMs, a hugely important inference workload is the recommendation engine, which powers the applications on everyone's phones in this room and is reshaping news, shopping, and restaurant discovery.

In this brand-new era of privacy, marketing needs a complete overhaul, where the recommendation engine plays a crucial role.

Search is another very important workload. I understand the excitement about generative AI, but search remains huge. What we are focused on is the future of inference: not the simple, binary inference of the past 30 years, but inference that understands massive data and responds within milliseconds. We know we will play a very important role in this market. The generative AI era is just beginning. With our vast recommendation-engine and search base, inference should, almost by definition, continue to grow.
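To make "millisecond-response inference on massive data" concrete, here is a minimal, illustrative sketch, not NVIDIA's actual stack, of the kind of recommendation-engine scoring Kress describes: one batched matrix product ranks an entire (hypothetical) million-item catalog for a user. All sizes and names are made up for illustration.

```python
# Toy recommendation inference: score every catalog item for one user via an
# embedding dot product, then take the top-k. Illustrative only; the catalog
# size, embedding width, and random embeddings are hypothetical placeholders.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

n_items, dim = 1_000_000, 128                        # hypothetical catalog / width
item_emb = torch.randn(n_items, dim, device=device)  # stand-in for trained embeddings
user_emb = torch.randn(dim, device=device)

start = time.perf_counter()
scores = item_emb @ user_emb                          # one matmul scores all items
top_scores, top_idx = torch.topk(scores, k=10)        # surface the 10 best candidates
if device == "cuda":
    torch.cuda.synchronize()                          # wait for the GPU before timing
print(f"scored {n_items:,} items in {(time.perf_counter() - start) * 1e3:.1f} ms")
```

On an accelerator, this style of batched scoring is what keeps end-to-end latency in the millisecond range even as catalogs grow.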

Joseph Lawrence Moore: When we hear cloud computing companies talk, at this conference and elsewhere, about reducing the cost of each query, does that push people toward NVIDIA or away from it? How should we view this dynamic? Using NVIDIA chips for inference is expensive, but obviously very efficient.

Colette M. Kress: Costs must be broken down. What they need to consider is not just the system cost but the total cost of operation. For example, during the pandemic, one of the biggest things people focused on was studying their electricity bills to find wasted power, even from machines that were not actually doing useful work. NVIDIA systems, with their engineering, software, and power efficiency, are the most efficient inference solution. That's why people turn to NVIDIA.

Not only do they deliver excellent response rates, they are also the most energy-efficient while achieving them. Power consumption rises initially but falls quickly in operation. So you must consider the overall cost of the system and everything around it; focusing only on the chip price is the wrong way to look at it.
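A quick back-of-the-envelope sketch of Kress's total-cost argument: fold system price, lifetime, throughput, and power into a single cost-per-query figure. Every number below is a hypothetical placeholder, not an NVIDIA or competitor datapoint.

```python
# Toy total-cost-of-ownership comparison: a pricier accelerator that serves far
# more queries per watt can beat a cheaper, slower one on cost per query.
# All figures are invented for illustration.
def cost_per_million_queries(system_price_usd, lifetime_years,
                             queries_per_second, power_kw,
                             electricity_usd_per_kwh=0.10):
    seconds = lifetime_years * 365 * 24 * 3600
    total_queries = queries_per_second * seconds
    energy_cost = power_kw * (seconds / 3600) * electricity_usd_per_kwh
    return (system_price_usd + energy_cost) / total_queries * 1e6

expensive_fast = cost_per_million_queries(30_000, 4, 2_000, 0.7)  # hypothetical
cheap_slow     = cost_per_million_queries(5_000, 4, 150, 0.4)     # hypothetical
print(f"fast system: ${expensive_fast:.2f} per million queries")  # ~ $0.13
print(f"slow system: ${cheap_slow:.2f} per million queries")      # ~ $0.34
```

Under these made-up inputs, the system with six times the sticker price still wins on cost per query, which is the point about chip price being the wrong lens.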

Joseph Lawrence Moore: Alright, that's really helpful. Thank you. A few other figures: you mentioned networking, which I believe has now reached a $13 billion annualized revenue run rate. Not a surprise, since you cited a $10 billion run rate last quarter, but still a very impressive number, given that some of your semiconductor competitors get excited about networking opportunities of around $1 billion. It seems you are dominating networking just as you are processors. Could you talk about this figure and your expectations for its continued growth?

Colette M. Kress: Yes, this is truly remarkable. The Mellanox team joined NVIDIA and added a critical, networking-focused module to data center computing. You can have the best systems and processors for accelerated computing, but without a high-quality network, the potential of that computing goes to waste.

So in the years since the acquisition, we have worked closely to integrate networking into our work, focusing mainly on traffic patterns and speed. Traffic patterns inside the data center are crucial during the inference stage, and they matter in every direction, east-west and north-south alike.

Our InfiniBand platform has always been the gold standard for AI and accelerated computing clusters, so when we sell data center computing architectures, we often sell the two together, and that will continue. We also have new products coming, including Spectrum-X. Spectrum-X focuses on Ethernet, a crucial standard in many enterprises, so we can now deliver the same benefits over Ethernet as with InfiniBand. We are very excited about these upcoming products.

Joseph Lawrence Moore: Fantastic. The pushback from skeptics is that you have shortages, that you engage in bundled sales, and so on... but that's not what we have seen. It looks like there is unmet demand on the networking side as well.

Colette M. Kress: When anything good takes off, at some point you hit challenges in building these great products. Obviously these are not commodity products; at times demand for our cables has run high. But we believe that, through the suppliers and partners we have developed, we have solved most of those issues.

Joseph Lawrence Moore: Great. Fantastic. You also mentioned that the software and services business reached the $1 billion mark for the first time. Can you talk about what this is all about? What are the main components?

Colette M. Kress: Yes. We are pleased that the software, services, and SaaS business reached a $1 billion annualized level at the end of last year. Its main component is NVIDIA AI Enterprise (AIE), which can be sold separately or together with our main platforms and is crucial for enterprises. There is also our work at the SaaS level with DGX Cloud, and the services we provide around many systems, including support for people building models with NeMo and BioNeMo, plus our overall support for the entire system.

We have a wide range of software. This will again be an important growth driver, as we will introduce automotive software sold together with our overall infrastructure in that field. Omniverse is also a great product. And as generative AI develops, demand for software licenses will keep growing, to keep software current, add new features, and ensure security; as we continue to grow in the enterprise sector, AIE will become very important.

Joseph Lawrence Moore: Fantastic. Regarding services, I get many questions about DGX Cloud, where you essentially partner with hyperscale cloud providers to offer cloud-based services. People wonder how aggressively you are pursuing this business. And secondly, are some hyperscale cloud providers anxious about the possibility of NVIDIA competing with them?

Colette M. Kress: Yes. I think it's important to use the right tone when talking about our cloud service provider partners. They are building large-scale multi-tenant computing environments for enterprises, where they have great expertise. When they discuss software with businesses, it's a great opportunity to say, "Can I introduce you to NVIDIA to help with the software side?" So it's a win-win: cloud service providers build the computing environment, while we provide the overall software solution.

Cloud service providers, clients, and NVIDIA are all satisfied, with NVIDIA selling and establishing relationships in the software field. That's how it currently operates.

Alternatively, clients can work with NVIDIA directly on software, services, and solutions, whether they are building LLMs or developing applications on top of their LLMs.

But remember, we have purchased computing capacity from the cloud service providers. So they still earn revenue from that part of the business, while customers simply work with us directly on our platform. Either way, it's a great outcome for them.

Joseph Lawrence Moore:

Fantastic. Perhaps you can talk about competition. It feels like every week there is a press release about custom silicon from your hyperscale cloud customers, and there are many startups. And merchant products from AMD and Intel seem to be attracting some share of attention. How do you view all of this? Are you more focused on beating competitors, or on NVIDIA's own development?

Colette M. Kress:

Our focus is completely different from many other companies', which concentrate on silicon or chips designed for specific workloads.

Let's step back to NVIDIA's overall vision as a platform company, one that can provide solutions for any future data center computing. Creating a platform is a completely different exercise from creating chips. Our focus is to ensure that, at any data center level, we can supply the different components, whether computing infrastructure, network infrastructure, or the memory subsystem, and assemble them into a complete supercomputer.

So that is our business, and it comes with an end-to-end software stack. The stack lets people work at whatever depth the continuing development of AI requires: CUDA for those who need to go deep and stay in sync with the latest AI developments, cuDNN for deep neural networks, NeMo and BioNeMo, plus SDKs and APIs, a complete set of end-to-end solutions fully integrated with our data center infrastructure. So our market positioning and solutions are completely different from other companies'.

We may occasionally see simple chips emerge. No problem. But remember, customers must weigh the overall cost of using those chips. There is now a large community of developers on the NVIDIA platform, and developers want to spend time where other developers are. So when you consider other silicon, you must convince developers that investing time there is worthwhile. It is not just the cost of the chip; it is a question of total cost of ownership.

Joseph Lawrence Moore: Fantastic. I have a few more questions, then I'll open it up to the audience. Edge AI is often seen as a different opportunity from NVIDIA's core. Yet you have a lot of activity in this area: Jensen Huang is active in robotics, and many automotive workloads will eventually shift to edge AI. Can you talk about how you view the edge opportunity and your ability to penetrate this field?

Colette M. Kress: Yes. There is an idea that not everything will happen in data centers. Looking at autonomous driving cars and the automotive field, we have long realized that some processing needs to be done inside the car. You may also see similar situations in the robotics field. We have a robot platform called Jetson. I think some key points around this area will be a good focus at the upcoming GTC conference in the next few weeks.

But I also want to look beyond the data center, to personal computers and workstations and the significant role they will play. Models will not live only in the cloud. There may be smaller models on PCs and laptops, and some creative professionals may build a separate model on their workstations.

We have seen interest and growth in workstations, where people can do preliminary work before deploying in data centers. So where will AI transactions take place? Yes, in the cloud. But you will also see applications on key devices such as cars and robots, which are very good examples.
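For a sense of what "a smaller model on a laptop" looks like in practice, here is a minimal sketch using the open-source Hugging Face transformers library. This is an illustration, not an NVIDIA product; the model named below is one publicly available small chat model chosen as an example, and any comparable small model would do.

```python
# Run a small LLM locally on a laptop. Requires: pip install transformers torch
# The ~1.1B-parameter model below is a public example of a laptop-friendly size.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"     # example small model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # runs on CPU by default

inputs = tokenizer("What is accelerated computing?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)   # short local completion
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern scales down to workstation prototyping and up to data center deployment, which is the cloud-to-device continuum Kress describes.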

Joseph Lawrence Moore: Fantastic. I have more data center questions, but I would like to ask a financial question first, and then open the floor.

You obviously believe NVIDIA's gross margin is very good in the near term, but you expect it to decline in the second half of the year. Can you talk about that? Is it caution, not wanting to be held to a very high gross margin target? Or are there things you must pursue that will lower the gross margin?

Colette M. Kress: Yes. Our gross margin for the fourth quarter reached 75%, and our guidance for the first quarter is at the same level.

We successfully launched the H100, which, together with work with other suppliers, improved our gross margin. But as we introduce more types of products, we may return to the pre-H100 level, a gross margin of around 75%. I think this is a reasonable level, and of course we will discuss the specific composition of the product portfolio. That is our plan: the product mix will be different. The H100 brought us great success, and we refined its manufacturing process and reduced costs.

Joseph Lawrence Moore: Great, let's see if there are any questions from the audience.

Unknown Analyst:
We are very optimistic about NVIDIA and the AI revolution led by Jensen Huang. I have two questions. The first is about the long-term outlook. For example, industry peers such as AMD and TSMC have made long-term forecasts, expecting the market size to reach $400 billion by 2029.

Joseph Lawrence Moore:
In 2027, that's correct.

Unknown Analyst:
In 2027. If we borrow TSMC's forecast, it's roughly $300 billion. We care a great deal about the long-term profit potential of every company we analyze, not just yours. How do you view the upside potential for long-term profits? That's the first question. The second is about the product innovation cycle shortening from 2.5 years to 1 year, a significant challenge for competitors. Can you talk about products like the B100 and H100? Is it possible to shorten this cycle further in the future?

Colette M. Kress:
There are indeed many forecasts of market size, whether for 2027 or beyond. Our view of the market is relatively macro: we look at installed data center infrastructure and its transformation to accelerated computing and AI. Even just the transformation of the existing $1 trillion of infrastructure is a $1 trillion opportunity for us.

Moreover, applying AI on top of accelerated computing not only improves efficiency but creates new value, potentially exceeding the existing infrastructure: handling data that was previously unmanageable and finding solutions more effectively will bring significant growth. Jensen Huang has said in a roadshow that the market size could approach $2 trillion, not just $1 trillion. In our view, NVIDIA is not just an accelerator company but a company dedicated to accelerated computing for data centers. So we believe the market potential is even greater.

Unknown Analyst:
In a previous earnings call, Jensen Huang talked about multimodal inference. How do you see demand for multimodal inference, for example in text-to-video applications? How do you see it driving growth in inference demand? As investors, we care about the demand growth trend. That's the first question. The second is about government spending: how much of it do you see going to inference, or is it mainly focused on training? Thank you.

Colette M. Kress:
Regarding inference, that's a great question. As we discussed earlier, inference accounts for about 40% of data center revenue, and there is still considerable growth ahead. Our inference focus extends beyond standard data to video and other emerging areas such as recommendation engines and biology applications. As for government AI spending, it is initially weighted toward training, especially natural language processing and localization; building large-scale models for a country or region is the first task. Outside the United States, both government funding and corporate efforts play a role. Of course, once training is complete, application development and customized solutions also become crucial.

Joseph Lawrence Moore: Well, our time is almost up. Colette, thank you very much for your insights.

Colette M. Kress: Thank you, everyone.