How did the market value exceed USD 1 trillion? Broadcom told a grand narrative about ASICs
Broadcom exceeded market expectations in its latest quarterly results, with CEO Hock Tan forecasting an AI revenue opportunity of USD 60-90 billion by 2027, coming primarily from three major clients (Google, Meta, ByteDance). This forecast implies that AI revenue will nearly double each year, reflecting Broadcom's strong growth potential in the ASIC market. In 2024, Broadcom's AI revenue is expected to be USD 12 billion, contributed mainly by Google's TPU and by networking products.
Broadcom's performance this quarter has undoubtedly exceeded expectations, with CEO Hock Tan guiding to a 2027 AI revenue SAM (serviceable addressable market) of USD 60-90 billion. He specifically noted that this SAM metric is deliberately strict, accounting only for revenue opportunities from the three existing major clients. This is far higher than market expectations, and it implies that from this year to 2027, AI revenue (ASIC + networking) will nearly double each year.
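As a sanity check on the "nearly double each year" claim, the implied annual growth rate from the 2024 figure to the 2027 SAM band can be computed directly. The revenue figures are from the article; treating the SAM endpoints as the growth targets is my own framing:

```python
# Implied annual growth from 2024 AI revenue (USD 12B, per the article)
# to the 2027 SAM of USD 60-90B. Figures from the text; nothing official.
rev_2024 = 12.0                       # USD billions
sam_2027 = {"low": 60.0, "high": 90.0}  # USD billions
years = 3                             # 2024 -> 2027

def implied_cagr(start, end, years):
    """Compound annual growth rate between two revenue points."""
    return (end / start) ** (1 / years) - 1

for label, end in sam_2027.items():
    print(f"{label}: {implied_cagr(rev_2024, end, years):.0%} per year")
```

The result is roughly 71% per year at the low end and 96% at the high end, which is what the article means by revenue "nearly doubling" annually.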
In fact, this is not the first time the CEO has made such claims. Back in July, during a JPM roadshow, Hock Tan stated that "Broadcom's AI revenue opportunity over the next five years is USD 150 billion." That figure was based on five clients contributing USD 150 billion combined, meaning each client would contribute USD 30 billion over the next five years (still less aggressive than this time). Later, at the Goldman Sachs conference in September (which Jensen Huang also attended), Hock Tan expressed an even more optimistic outlook: "50% of future AI FLOPs will be ASICs, and CSPs' internal use may even be 100% ASICs." In other words, every giant's AI compute allocation would come to resemble Google's (Google's internal workloads run almost entirely on TPUs, while external offerings such as GCP run almost entirely on NVIDIA GPUs).
So, what does an annual revenue opportunity of USD 60-90 billion mean?
According to Hock Tan, the USD 60-90 billion from the three clients (Google, Meta, ByteDance) implies that each will have an AI chip procurement demand of USD 20-30 billion by 2027 (ASIC + networking). In 2024, this metric corresponds to only USD 12 billion of Broadcom revenue: Google's TPU contributes USD 8 billion, while Tomahawk 5 and Jericho3 switches plus PCIe Gen5/Gen6 networking contribute USD 3 billion. Meta contributes only a little, and ByteDance contributes almost nothing. The fourth and fifth clients the company has explicitly mentioned have only just established cooperation roadmaps, so their revenue contribution is also zero. (Most likely they are OpenAI and Apple. OpenAI has given Broadcom two generations of ASIC projects, starting in 2026, using 3nm and 2nm processes as well as 3D SoIC packaging. The Apple project is more ambiguous: according to a senior figure involved in the project in North America, both the front end and back end have been sorted out internally, but it is unclear what additional dies will be given to Broadcom, such as switches. There is also a sixth client whose identity is uncertain; it could be Tesla or a domestic Chinese company.)
Looking at the clients separately: it is conceivable that Google will increase its contribution from the current USD 8 billion to USD 25 billion in TPU procurement by 2027. Especially with yesterday's release of Gemini 2.0 Flash and Agent products, combined with Google's global traffic and vast application ecosystem, inference demand is bound to surge. Previously, the TPU CoWoS numbers that people tracked were generally low, largely due to the shifting ratio between training and inference cards, with inference's share of compute rising rapidly.
Meta's MTIA is expected to contribute USD 2-3 billion in revenue to Broadcom in 2025, which implies roughly tenfold growth over the three years from 2025 to 2028. Considering that MTIA supports Meta's recommendation engine, and its replacement of GPUs has only just begun, ASIC deployment and further GPU displacement should in theory have huge headroom. This does not yet count GenAI compute demand: Meta is also stockpiling large numbers of H-series cards (over 600,000 by year-end), building a 100,000-card cluster to train Llama 4, and purchasing GB200 (Ariel). That training demand can later be converted to inference; whether ASICs will then be used to support GenAI inference, and whether that falls within Hock Tan's assumptions, is uncertain.
The most interesting discovery is ByteDance, because the expectations above mean its ASIC procurement demand must climb from USD 0 to USD 20-30 billion. Let's calculate. The single-card ASP assumption carries a lot of uncertainty, depending on the HBM content at that time. If the chip launches in 2026-2027, it will be competing against Rubin or even Rubin-Next, with HBM capacity targeting over 500 GB. That implies a single-card BOM cost above USD 10,000 and a single-card selling price of around USD 20,000. On that basis, an annual shipment of one million cards aligns with Broadcom's repeated talk of a "million-card cluster." Clearly, ByteDance is simultaneously buying NVIDIA, purchasing domestic chips, and stockpiling for long-term ASICs, and each slice of that procurement looks very substantial.
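The back-of-the-envelope math above can be laid out explicitly. All inputs (BOM cost, ASP, shipment volume) are the article's assumptions, not confirmed figures:

```python
# Back-of-the-envelope check on the ByteDance scenario from the text.
# All inputs are the article's assumptions, not confirmed data.
bom_cost = 10_000             # USD per card, driven by >500 GB of HBM
asp = 20_000                  # USD per card, assumed selling price
annual_shipments = 1_000_000  # "million-card cluster" scale

revenue = asp * annual_shipments / 1e9  # USD billions
gross_margin = 1 - bom_cost / asp       # margin implied by BOM vs. ASP

print(f"Implied annual revenue: ${revenue:.0f}B")
print(f"Implied gross margin:   {gross_margin:.0%}")
```

At these assumptions the implied revenue is USD 20 billion per year, which lands at the bottom of the USD 20-30 billion range the article attributes to each client; higher ASP or volume would push it toward the top.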
Finally, AI networking revenue is expected to grow from USD 3 billion this year to USD 15 billion by 2027 (equivalent to NVIDIA's networking revenue this year).
In summary, the grand claim boils down to this: the three major clients will each reach an annual procurement scale of one million ASICs by 2027-2028, while the fourth and fifth clients are also ramping quickly.
On the other hand, Microsoft's next-generation project has gone to Marvell, with a significant commitment. According to statements at Marvell's recent management meeting, the Microsoft opportunity is "significantly" larger than expected and will become the company's largest revenue contributor. Microsoft's investment in ASICs may also be related to recent changes in its AI strategy. In a BG2 podcast interview yesterday, Satya Nadella said he holds a different view on AI development from Sam Altman: Sam wants huge training resources from Microsoft, but Satya believes the application layer is the key and the model layer will be commoditized. Evidently, Microsoft's focus has shifted to inference, and the diversity of inference scenarios will provide more opportunities for ASICs. At this point, nearly all the giants are building ASICs, just at different stages.
What conclusion can we draw? Broadcom has sketched a hardware blueprint for three years out. Today is clearly the era of NVIDIA GPUs, but in three years Broadcom "believes", or "hopes", that ASICs will at least split the market evenly with GPUs, or even replace them. This represents the will of the CSPs, who have sought alternative supply outside NVIDIA from the very beginning; but NVIDIA remains too strong, and in 2025 they still have to scramble for GB200 allocations. Yet whenever an opening appears, ASICs, whose cost and supply chain the CSPs themselves control, will push in. Take the recent case of Trainium 2: it was actually planned long ago and its performance metrics are merely average, but AWS recently raised its demand because GB200 is in short supply and workloads are migrating to inference very quickly, giving ASICs an opening. The CSPs must therefore have given Broadcom/Marvell very large long-term commitments; whether those commitments are fulfilled depends on how AI develops and how fast competitor NVIDIA iterates. But this "will" will persist.
Broadcom told a grand narrative about ASICs.
Risk warning and disclaimer
The market has risks, and investment requires caution. This article does not constitute personal investment advice and does not take into account the specific investment goals, financial situation, or needs of individual users. Users should consider whether any opinions, views, or conclusions in this article are suitable for their specific circumstances. Investing on this basis is at your own risk.