
Meta deepens investment in NVIDIA: millions of chips to be deployed in the coming years, Grace CPU adopted for the first time

Meta has agreed to deploy "millions" of NVIDIA chips internally over the next few years, further solidifying the already close partnership between the two giants in the artificial intelligence industry.
According to a statement released on Tuesday, Meta has committed to using more AI processors and networking equipment from NVIDIA. Additionally, Meta will use NVIDIA's Grace CPU for the core components of its standalone computers for the first time. This deployment will cover AI accelerator products based on NVIDIA's current Blackwell architecture, as well as the upcoming Vera Rubin design.
Meta accounts for 9% of NVIDIA's revenue. Meta CEO Mark Zuckerberg stated in the announcement: "We are excited to expand our collaboration with NVIDIA to build state-of-the-art computing clusters using their Vera Rubin platform, providing personal superintelligence to everyone around the globe."
The agreement reaffirms Meta's loyalty to NVIDIA amid the rapidly changing AI competitive landscape. Currently, NVIDIA's systems are still regarded as the gold standard for AI infrastructure, bringing in hundreds of billions of dollars in revenue for the company. However, competitors are offering alternatives, and Meta is also developing its own chip components.
Following the announcement, the stock prices of both NVIDIA and Meta rose over 1% in after-hours trading, while NVIDIA's rival AMD saw its stock price drop over 3%.
The parties did not disclose specific investment amounts or timelines. Ian Buck, NVIDIA's Vice President of Accelerated Computing, noted that it is reasonable for companies like Meta to test alternatives, but emphasized that only NVIDIA can provide the complete components, systems, and software ecosystem a company needs to maintain a leading position in AI.
Meanwhile, Zuckerberg has made AI Meta's top priority, committing hundreds of billions of dollars to building the infrastructure needed for the next generation of AI. Meta expects to set a spending record in 2026; Zuckerberg said last year that the company would invest $600 billion in U.S. infrastructure projects over the next three years. Meta is building several gigawatt-scale data centers across the U.S., including in Louisiana, Ohio, and Indiana. One gigawatt of power is enough to supply electricity to approximately 750,000 households.
Buck emphasized that Meta will be the first large data center operator to use NVIDIA CPUs in standalone servers. Typically, NVIDIA provides these CPUs alongside its high-end AI accelerators.
This move signals that NVIDIA is pushing further into a space traditionally dominated by Intel and AMD. It also gives large data center operators an alternative to developing their own chips, such as those designed by Amazon's AWS.
Buck stated that the application scenarios for these chips are continuing to grow.
As the parent company of Facebook and Instagram, Meta will not only use these chips itself but will also leverage computing capacity that other companies provide on NVIDIA's architecture.
NVIDIA's CPUs will increasingly be used for tasks such as data processing and machine learning. Buck said: "The CPU has many different types of workloads. We found that Grace is an excellent data center back-end CPU, meaning it excels at handling back-end computing tasks. In these back-end workloads, it can actually deliver a twofold performance-per-watt improvement."
