Wallstreetcn
2023.07.03 05:29

Microsoft, Google (Alphabet), and Amazon are waging a cloud war in the era of large models.

Investing in the most fundamental infrastructure - AI acceleration chips - has become the first priority in today's competition among cloud providers. In addition to developing chips, these cloud giants also engage in strategic investments to seize AI customers and projects. However, in the long run, large-scale models are the true key to determining the success or failure of market competition.

As internet and enterprise software companies tighten their cloud spending, slowing growth has gradually become a dark cloud hanging over cloud vendors.

The emergence of ChatGPT has broken through this bottleneck: AI will reshape software. Cloud vendors' customers - software companies - are actively integrating the AI capabilities of large models into their existing workflows to achieve higher levels of automation.

With the influx of new cloud customers dwindling, software companies are no longer adopting the cloud for its own sake but striving to improve productivity with AI. "This is the biggest increment in the cloud computing market for the next decade. Computing infrastructure is the absolute beneficiary of large models," a veteran of the cloud computing industry explained to GeekPark.

In light of this prospect, the major overseas cloud giants - Microsoft, Amazon, Google, and Oracle - have moved quickly. Over the past few months, these giants have poured resources into developing large models, making strategic investments, and designing their own AI chips. The era of large models is in full swing, and they have set their sights on the new generation of AI software customers.

The market order that once seemed unassailable is far from settled; the cloud market is being rapidly reshuffled as the giants open a new chapter of competition.

After all, the mobile internet era showed how quickly a big brother can fall: Nokia went from a 70% share of the mobile phone market to irrelevance within a few years, all because of a single wrong decision. On large models, the cloud industry quickly reached a consensus: this AI revolution is no minor variable. Judging from the industry's breakneck pace, even today's leaders could be left behind.

Half of 2023 has already passed, and this article will focus on several major overseas cloud giants to analyze the key factors driving their competition today.

01 Developing AI-specific chips: don't rely solely on NVIDIA

In the era of large models, the scarcest resource for cloud service providers today is computing power, or more precisely, AI chips. Investing in the most fundamental infrastructure - AI acceleration chips - has become the first priority in the competition among cloud vendors.

Scarcity and high cost are considered the main reasons cloud vendors are accelerating their self-developed chip efforts. Even a tech tycoon like Musk has complained that "this thing (the NVIDIA GPU) is harder to get than drugs," quietly buying ten thousand cards from NVIDIA for his AI company X.AI while also snapping up a large amount of Oracle's idle server capacity.

This scarcity shows up directly in the cloud giants' businesses as losses caused by the "bottleneck." Even Microsoft, which took the lead, has reportedly imposed GPU rationing on its internal AI research teams, seen various new plans delayed, and forced new customers to wait months to get onto Azure because of the GPU shortage.

Even venture capital firms have taken to stockpiling NVIDIA chips to compete for deals; all sides have gone to extremes for these graphics cards.

The other name for scarcity is "expensive." Given that large models demand tens of times more computing power, the price of these cards will only keep climbing. An investor recently told GeekPark: "At the beginning of the year, a single A100 card cost 80,000 yuan; now it has skyrocketed to 160,000 yuan, and it is still hard to get." Correspondingly, the astronomical sum the cloud giants must pay for tens of thousands of cards amounts to an "Nvidia tax."
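To put that "tax" in perspective, here is a back-of-the-envelope sketch using only the figures quoted above; the ten-thousand-card cluster size echoes the Musk anecdote and is purely illustrative.

```python
# Rough estimate of the "Nvidia tax" implied by the quotes above.
# Assumptions (illustrative only): one A100 now costs ~160,000 yuan,
# and a large-model cluster needs on the order of 10,000 cards.
price_per_card_yuan = 160_000
cards = 10_000

total_yuan = price_per_card_yuan * cards
print(f"Hardware alone: {total_yuan:,} yuan (~{total_yuan / 1e9:.1f} billion yuan)")
# -> Hardware alone: 1,600,000,000 yuan (~1.6 billion yuan),
#    before networking, power, and replacement costs are even counted.
```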

Microsoft, currently in the spotlight, knows this pain best. A month ago, The Information exclusively reported that Microsoft had assembled a 300-person "dream team" to accelerate its self-developed AI chips; the server chip, codenamed Cascade, may launch as early as next year.

Beyond the supply bottleneck, cloud providers' push into chip design carries another implication: GPUs are not necessarily the best-suited chips for running AI, and self-developed alternatives can be optimized for specific AI tasks.

Admittedly, most advanced AI models today run on GPUs, because GPUs handle machine learning workloads better than general-purpose processors. But GPUs are still general-purpose chips, not processing platforms native to AI computation. As the Faraday Research Institute's "A Crack in the Nvidia Empire" points out, GPUs were not designed specifically for training neural networks, and as AI develops rapidly these mismatches become more apparent. Patching each scenario with CUDA and assorted techniques is an option, but not the optimal one.

Amazon, Google, and Microsoft have been developing ASICs - application-specific integrated circuits - which are better suited to artificial intelligence. According to The Information's interviews with chip industry practitioners and analysts, Nvidia GPUs helped train the models behind ChatGPT, but ASICs typically perform such tasks faster and with lower power consumption.

Amazon, Microsoft, and Google have all raised the priority of in-house chips, each developing two types for their data center units: standard computing chips, and chips designed specifically for training and running machine learning models - the kind that can power chatbots like ChatGPT.

Currently, Amazon and Google have developed custom ASICs for key internal products and have made these chips available to customers through the cloud. Since 2019, Microsoft has also been committed to developing custom ASIC chips to power large-scale language models.

According to performance data released by the cloud providers and their customers, some of these in-house chips - Amazon's Graviton server chips and the AI-specific chips released by Amazon and Google - already rival the performance of traditional chipmakers' products. Google's TPU v4, for instance, is 1.2 to 1.7 times faster than Nvidia's A100 while drawing 1.3 to 1.9 times less power.
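If one assumes the speed advantage and the power advantage compound independently (the source reports the two ranges separately, so this is an assumption), the implied performance-per-watt gap works out as follows:

```python
# Combining the quoted TPU v4 vs. A100 ranges, assuming the speed and
# power advantages are independent and compound multiplicatively.
speed_low, speed_high = 1.2, 1.7    # TPU v4 compute speed relative to A100
power_low, power_high = 1.3, 1.9    # factor by which TPU v4 draws less power

print(f"Implied perf/watt advantage: "
      f"{speed_low * power_low:.1f}x to {speed_high * power_high:.1f}x")
# -> Implied perf/watt advantage: 1.6x to 3.2x
```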

02 Strategic Investment Competition: Big Players Spend Money to "Buy Customers"

In addition to chip development, the second key point of competition among major overseas cloud giants is external strategic investment to snatch AI customers and projects.

Compared with venture capital firms, the big players hold an absolute advantage in strategic investment. The collaboration between OpenAI and Microsoft is the prime example, marking the start of the marriage between large models and strategic investment. The reason is that the resource barrier for large models and their applications is extremely high: money alone is nowhere near enough to win AI projects. After all, companies like Google, Microsoft, AWS, Oracle, and NVIDIA can not only write hefty checks but also supply scarce resources such as cloud credits and GPUs.

From this perspective, the contest for projects and customers is playing out among the cloud giants alone, with no other credible contenders. They are engaged in a new game: securing commitments from AI companies to use their cloud services rather than a competitor's.

Microsoft, as OpenAI's exclusive cloud provider, effectively foots OpenAI's enormous cloud bill and in return gains equity, priority access to OpenAI's products, and other enviable benefits.

Microsoft's competitors are racing to win over other AI customers, offering steep discounts and credits to land their business. Some critics argue this amounts to buying customers, though holding equity in current or future customers is not uncommon in the enterprise software field.

According to an earlier report by The Information, Oracle has also offered computing credits worth hundreds of thousands of dollars as incentives for AI startups to rent Oracle cloud servers.

Google may be the most aggressive of the major cloud providers, offering a combination of cash and Google Cloud credits in exchange for equity in AI startups. Earlier this year, Google invested $400 million in Anthropic, one of OpenAI's main startup challengers. In February, Google Cloud announced it had become Anthropic's "preferred" cloud provider.

Recently, Google invested $100 million in Runway, an AI company in the video-editing field. Yet before that, Amazon AWS had touted Runway as a key AI startup customer: in March of this year, AWS and Runway announced a long-term strategic partnership naming AWS their "preferred cloud provider." Now Runway looks like a pawn in the showdown between Google and Amazon, as it is also expected to rent cloud servers from Google.

Earlier, Google Cloud also announced partnerships with two other popular AI companies: Midjourney in the field of visual storytelling and chatbot app Character.ai, the latter of which used to be a key cloud customer for Oracle. These transactions will help Google catch up with larger cloud computing competitors - AWS and Microsoft. It is still too early to judge, but Google Cloud is making a strong push.

Among the 75 AI software companies in The Information's database, Google provides at least some cloud services to 17 - more than any other cloud provider. Amazon follows closely, with at least 15 using AWS. Microsoft and Oracle provide cloud services to six and four, respectively. Multi-cloud use is of course common in the industry: at least 12 of the 75 use more than one provider.

03 Large-scale models are the real key to victory

Computing power and investment are the early battlegrounds of this cloud war. In the long run, however, large models are the real key to deciding who wins the market.

Microsoft owes its current lead to the OpenAI partnership: with its formidable engineering capabilities, it folded GPT-4 into the Microsoft "suite" within a few months. Over the past six months, Microsoft has gained cloud market share by leveraging priority access to OpenAI's products and cutting prices on enterprise software, while also lifting revenue by charging more for the upgraded product line, now known as Microsoft 365 Copilot.

According to research by CloudStart Capital, Microsoft's underlying models are heavily dependent on OpenAI. After integrating large-scale models, Microsoft began selling bundled application layer products such as Teams, Power BI, and Azure at lower prices.

Microsoft CFO Amy Hood told investors in April that as more people start using OpenAI services, OpenAI will generate revenue for Azure.

The latest reports indicate that Microsoft is charging certain Office 365 customers a 40% premium to test AI features such as drafting text in Word documents and generating PowerPoint slides automatically. At least 100 customers have paid a flat fee of up to $100,000, and within less than a month of launch, the AI features of Microsoft 365 Copilot had reportedly brought Microsoft more than $60 million in revenue.

In stark contrast to Microsoft stands the long-time cloud leader, Amazon Web Services (AWS), which is falling behind in the large-model field and faces mounting challenges.

AWS was an early mover in AI cloud services, with efforts dating back to around 2016. But customers found little use for what it offered - facial recognition, text-to-realistic-speech conversion, and primitive chatbots for customer service tasks, among others. In 2017, AWS launched SageMaker, a digital tool that helps engineers develop and use machine learning models, which was long AWS's flagship AI product.

In the years that followed, however, AWS failed to catch the wave of large language models. Microsoft has been selling enterprise AI products built on the GPT-series models since November 2021; Google, meanwhile, has landed major AI startups as cloud customers and sold proprietary AI software to them; even Oracle, a laggard in cloud computing, has its advantages in supplying computing resources to AI startups.

Having realized how late it is, AWS is now racing to catch up. In April, it announced a cloud service that lets customers build the large models of Stability AI, Anthropic, and AI21 Labs into their own products as foundations; in return, AWS shares a portion of the revenue with these partners.
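That offering matches what AWS has since branded Amazon Bedrock. Purely as an illustration of the pattern - renting a partner's foundation model through AWS's own API - here is a minimal Python sketch using boto3's bedrock-runtime client; the model ID and request schema follow Anthropic's Claude text-completion format on Bedrock, but treat the specifics as assumptions to verify against current documentation, not a definitive integration.

```python
import json

import boto3  # AWS SDK for Python

# Client for invoking foundation models hosted on AWS.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request body in Anthropic's Claude text-completion schema on Bedrock;
# the model ID and schema here are illustrative - check what your
# account and region actually offer.
body = json.dumps({
    "prompt": "\n\nHuman: Why do cloud vendors host partners' models?\n\nAssistant:",
    "max_tokens_to_sample": 200,
})

response = client.invoke_model(
    modelId="anthropic.claude-v2",  # a partner's model, served through AWS
    body=body,
)
print(json.loads(response["body"].read())["completion"])
```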

At the 2023 Google I/O conference, CEO Sundar Pichai introduced Google's latest AI developments. | Image source: Google Official Website

Google, on the other hand, rose early but arrived late. As the industry giant with the deepest accumulation in large models, Google responded quickly to ChatGPT's release, launching the conversational chatbot Bard and the next-generation large language model PaLM 2. But the launch event turned into a debacle, and the pace of product releases since has disappointed - a sharp contrast with Microsoft's engineering muscle.

Lastly, it is worth mentioning that Oracle, which dropped out of the cloud market's top tier long ago, has unexpectedly mounted a counterattack in this wave of enthusiasm.

Long at a disadvantage in the cloud, Oracle has found astonishing success renting cloud servers to well-known AI startups that compete with OpenAI. According to The Information, this is partly because Oracle's cloud can run complex machine learning models more economically than Amazon Web Services or Google Cloud.

Oracle's way into the AI race looks much like AWS's: besides developing its own AI software to sell to customers, it also sells access to open-source AI software and other AI developers' products.

In addition, insiders say Oracle has begun testing OpenAI's products to enrich its B2B product line, which includes human resources and supply chain management software - though Oracle is more likely to build its own software for this purpose. Such AI capabilities could help Oracle customers quickly generate job descriptions and schedule meetings between recruiters and candidates, but the company is still deciding which products to improve first.