What will AI capital expenditure be next year?
According to investment bank forecasts, AI capital expenditures may reach $300 billion next year, exceeding the total cost of the Apollo moon landing program. Although market concerns about an AI capex bubble have eased, high expectations for 2025 may weigh on growth in 2026. Morgan Stanley and Goldman Sachs have both raised their capital expenditure forecasts, with the main increase coming from Amazon. Other players such as xAI, Oracle, and the NCPs are also expanding aggressively, with overall capital expenditures expected to reach $400-450 billion next year.
After the giants reported results last week, investment banks successively raised their overall capex forecasts for next year. Morgan Stanley raised its figure from USD 270 billion to USD 300 billion (see the chart below), with the main increase coming from Amazon, which was revised from USD 79 billion to USD 96 billion. Microsoft was previously the highest, but Morgan Stanley now puts Amazon at the top, which is noteworthy.
Naturally, the 2026 forecast has also been raised, to +12% growth over 2025.
Goldman Sachs also raised its forecast by nearly USD 30 billion (its total differs because it counts only AWS for Amazon).
Of course, this is just a few CSPs, not including xAI, Oracle, and a large number of NCPs led by Coreweave, as well as China's BATJ:
- xAI + Tesla: market expectations are for 300,000 B-series cards (I've heard even higher numbers), which would require about USD 10-15 billion.
- Oracle: market expectations sit at USD 18-20 billion.
- NCPs: too varied to total precisely, but take the largest, Coreweave, as an example: its capex plan is very aggressive. The CFO once confidently stated, "Once the capacity for 2025 is in place, it will be quickly snapped up." Based on its power capacity plan (from 350MW this year to 850MW next year), 2025 capex can be estimated at around USD 15 billion. The other NCPs can basically be understood as NVIDIA's "computing power distributors," reaching every corner of the market like capillaries. Total NCP capex next year is expected to be around USD 50 billion (covering many long-tail demands).
- Chinese internet: roughly estimated at USD 30 billion.
- Enterprise: much of this demand may shift to the cloud next year. Power bottlenecks, the GB200 deployment threshold, and cheap leasing of older cards like the H100 make self-building less attractive than renting. Sovereign AI, however, remains significant, with NVIDIA guiding to a potential run rate of around USD 10-20 billion.
- Unknown variables: Apple, Ilya, etc.
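The Coreweave figure above can be back-of-the-envelope checked from its power plan. A minimal sketch, assuming (this number is my assumption, not from the source) an all-in build cost of roughly $30M per MW of AI datacenter capacity:

```python
# Rough Coreweave 2025 capex estimate from power capacity growth.
# Assumption (not from the source): ~$30M all-in cost per MW of
# AI datacenter capacity (GPUs + networking + facility).
mw_this_year = 350       # current power capacity, MW
mw_next_year = 850       # planned capacity for next year, MW
cost_per_mw = 30e6       # assumed all-in cost per MW, USD

added_mw = mw_next_year - mw_this_year   # 500 MW of new capacity
capex_est = added_mw * cost_per_mw       # dollars
print(f"Estimated capex: ${capex_est / 1e9:.0f}B")  # → $15B
```

Under that assumed unit cost, 500MW of added capacity implies roughly the USD 15 billion the market expects.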
Adding these up gives a total of USD 400-450 billion (last year's comparable figure was around USD 300 billion). Of course, not all of this is AI, but AI definitely accounts for over 60%, while traditional computing has been squeezed for two consecutive years. Nor does all AI spending go to GPUs; there are also datacenter switches. And spending on land for new DCs over the past two years can shift toward GPUs in the following two years...
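The tally can be reproduced as a quick sum over the component estimates quoted above (single figures treated as point estimates; the grouping labels are mine):

```python
# Bottom-up tally of next year's capex, in USD billions.
# Figures are the bank/market estimates quoted in the text; ranges are (low, high).
components = {
    "Big CSPs (Morgan Stanley)": (300, 300),
    "xAI + Tesla":               (10, 15),
    "Oracle":                    (18, 20),
    "NCPs (Coreweave et al.)":   (50, 50),
    "Chinese internet":          (30, 30),
    "Sovereign AI (NV guide)":   (10, 20),
}
low = sum(lo for lo, hi in components.values())
high = sum(hi for lo, hi in components.values())
print(f"Total: ${low}B-${high}B")
```

This lands at roughly $418-435B, consistent with the USD 400-450 billion range cited.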
Over USD 400 billion, with AI at USD 300 billion: the numbers are indeed shocking. By 2025, annual spending will already exceed the total cost of the Apollo moon landing program (calling it a modern "moon landing" is no exaggeration).
But judged against the current profitability of the tech giants, it's actually not that bad. I pulled each company's operating cash flow, subtracted the capex figures above, and found that free cash flow is, roughly, still increasing.
Why? Because making money from the core business is just too easy... steady growth plus margin improvement, with the profits all in cash and the monopolistic businesses performing better and better. The core businesses are making money faster than capex is rising...
Moreover, whether it's leasing capacity for training, selling APIs for inference, or moving recommendation systems from CPUs to GPUs, the CSPs' investments do generate returns. As mentioned before, they are just the "shovel sellers," not the "miners" who bear the ultimate risk.
For the NCPs, things are also not as dire as the "plummeting rental prices" narrative suggests. A friend who covers NCPs told me that, in terms of business model, they basically secure demand first and then build capacity, so the business risk itself is not high, and the ROI is worked out clearly in advance.
Especially in North America, rack vacancy rates are actually very low (within 3%), essentially full. If inference demand rises next year, long-tail demand will overflow to them, and the NCP business can keep profiting.
The enterprise market is indeed diverging. Over the past year, many startups that burned VC money on GPUs have failed, and many Fortune 500 POC trials with purchased cards have fizzled. This is genuinely wasted capital expenditure, but it should not be inflated into a "bubble." Waste is inevitable in the early stages and is not a leading indicator; some waste may well persist.
What does this mean for investment?
The biggest prior concern about the AI trade was that a capex bubble would burst, making it uninvestable. That concern can be set aside for now... After the capex revisions, the logic supporting NVIDIA's re-rating is that 2025 EPS is solidified with upside, and the certainty of 2026 growth is enhanced.
If something has to be considered in advance, it is how much room is left for marginal capex upgrades. For example, as the stock keeps rising and 2025 EPS expectations get fully priced in, how much growth is left for 2026? The higher the 2025 capex expectations, the harder it becomes for 2026 to grow, or to grow substantially.
Of course, this all depends on whether inference demand explodes next year, i.e., whether the o1-style inference scaling law is validated across downstream fields, especially in economic terms. Only then can GPUs shift from a "cost item" to an "asset item" ("buy more, earn more"), and the business from "cyclical capital expenditure" to "recurring stable expenditure."
On the other hand, are ASICs gaining room? The whole capex pool is concentrating toward the giants, and only the giants have the motivation and the scenarios to scale ASICs. In the first phase of accelerated computing, workloads were general-purpose and only GPUs were available; in the second phase, dedicated-chip projects multiply but lack scale; the third phase may be large-scale ASIC deployment.
Broadcom and Marvell now have more and more projects, TPU ramped last year, and Trainium 2 ramps next year. Are we slowly moving toward the third phase? I understand NVIDIA's monopoly power all too well, as well as the flaws of the ASIC business itself. But I can't argue with the fact that this USD 400 billion pool is just too large...
Moreover, most of it may be inference in the future. A contact in Seattle told me the second-generation ASIC plan is in place and will at least be stronger than MAIA; GB200 still has to be fought over, but in-house development will also accelerate. By contrast, a certain "massage parlor" is in a somewhat awkward spot...
Source: "Information Equity"; original title: "How much AI capital expenditure will there be next year?"