
"If GPU supply were sufficient, growth would already exceed 40%!" Microsoft addresses market concerns on its earnings call: we lack capacity, not orders

Microsoft's record capital expenditure of $37.5 billion triggered a market sell-off, with shares dropping more than 6% in after-hours trading. On the earnings call, the CFO stated bluntly that "if it weren't for supply shortages, Azure's growth rate would exceed 40%," pushing back on the doubts. CEO Nadella revealed that paid Copilot seats surged 160% and revenue from the Fabric data platform jumped 60%, and announced the launch of the in-house Maia 200 chip, which cuts total cost of ownership by more than 30%, demonstrating both AI monetization and long-term cost control while emphasizing the enormous AI demand for "storage adjacent to compute."

On January 29, Microsoft released its second quarter financial report for fiscal year 2026. Although revenue (USD 81.3 billion) and earnings per share (USD 4.14) both exceeded Wall Street expectations, the stock price fell more than 6% in after-hours trading.
The market's conflicted reaction comes down to this: Microsoft is burning cash at an unprecedented rate, but the growth of its cloud business appears to be lagging that spending pace.
The financial report shows that Microsoft's capital expenditures surged approximately 66% year-over-year this quarter, reaching a record USD 37.5 billion. In contrast, Azure's cloud business revenue grew by 39% (38% at constant currency). Although this figure is still impressive, given such massive investments, some investors had expected to see more explosive growth or were concerned that the return cycle on AI investments would be significantly prolonged.
In the subsequent earnings call, Microsoft CEO Satya Nadella and CFO Amy Hood faced analysts' sharp questions about "return on investment (ROI)" without evasion, instead presenting a core logic: the current growth limit is not demand, but supply.
"If all GPUs were allocated to Azure, the growth rate would have exceeded 40% long ago."
During the call, Morgan Stanley analyst Keith Weiss directly asked: capital expenditure growth is faster than expected, but Azure's growth has slightly slowed, raising investor concerns about ROI.
In response, CFO Amy Hood provided the most substantial reply of the session:
"If I allocated all the GPUs that just went live in Q1 and Q2 to Azure, our KPI (growth rate) would have exceeded 40% long ago."
Hood explained that Microsoft is facing a "resource allocation battle." The newly added computing power not only needs to meet external customer demand for Azure but must also prioritize internal AI products that are growing rapidly—especially Microsoft 365 Copilot and GitHub Copilot, as well as long-term R&D innovation.
"Our customer demand continues to exceed our supply capacity."
Hood emphasized that currently, about two-thirds of the massive expenditures are allocated to short-term assets like servers (GPU/CPU), which directly reflects the current supply-demand tension.
CEO Nadella: What we value is customer lifetime value (LTV), not a single business.
In response to market concerns, CEO Nadella further explained from a strategic perspective:
"You can't just look at Azure."
“You also need to look at M365 Copilot, GitHub Copilot, Dragon Copilot, and Security Copilot; they each have their own gross profit structure and lifecycle value.”
He clearly stated that Microsoft does not pursue extreme short-term growth for any single business but rather aims for a long-term LTV (Customer Lifetime Value) portfolio:
“We hope to allocate computing power to build the ‘optimal long-term LTV portfolio’ under supply constraints.”
Is AI investment too early? Management repeatedly emphasizes that contracts are “locked in”
In a subsequent Q&A with Bernstein Research, the risks of AI hardware investment were further amplified.
Analysts pointed out directly: the server depreciation cycle is 6 years, while the weighted average remaining term of remaining performance obligations (RPO) is only 2.5 years. Does this imply a duration mismatch risk?
CFO Hood responded that most of the GPUs the company purchases are already locked in by contracts for their entire lifespan, and many Azure-related GPU contracts cover the full usage cycle, so there is no risk of being "unable to sell" the capacity.
AI monetization: Copilot seats surge by 160%
To prove that the massive investment is turning into real revenue, Microsoft disclosed a series of impressive AI commercialization data during the conference call.
Nadella revealed that paid seats for Microsoft 365 Copilot have increased 160% year-over-year, reaching 15 million paid users. Furthermore, daily active users have grown tenfold year-over-year, a figure aimed at refuting market chatter about a "decline in AI tool usage."
“This is a record quarter,” Nadella stated, noting that the number of large enterprise customers with over 35,000 seats has doubled, including institutions like Pfizer and NASA.
In the coding domain, GitHub Copilot paid subscription users reached 4.7 million, a 75% year-on-year increase. This indicates that AI is not only penetrating the consumer side but also accelerating its adoption in B2B productivity tools.
Self-developed chip Maia 200: "Total cost reduced by 30%"
Faced with the high costs charged by hardware vendors like NVIDIA, Microsoft is also accelerating efforts to reduce its dependence on them.
Nadella announced on the call that Microsoft's self-developed Maia 200 accelerator officially launched this week. He boldly claimed:
“Maia 200 provides over 10 Petaops of computing power at FP4 precision, with total cost of ownership (TCO) reduced by more than 30% compared to the latest generation hardware in our fleet.”
This move is interpreted as a key strategy for Microsoft to control AI infrastructure costs and improve gross margins. Nadella clearly stated that Microsoft will begin large-scale deployment of self-developed chips starting with inference and synthetic data generation.
“AI requires a lot of storage”
Additionally, while the market's attention is fixed on GPUs, Microsoft management revealed another side of the AI coin during the call: storage and data management. The explosion of AI agents is reshaping demand for data infrastructure. Nadella pointed out that for an agent to work effectively, it must be built on the company's "data and knowledge." This has directly ignited the growth of Microsoft's unified data platform, Microsoft Fabric. He revealed on the call:
“The annual revenue run rate for Fabric has now exceeded $2 billion... continuing to be the fastest-growing analytics platform in the market, with revenue growth of 60% year-over-year.”
This growth rate indicates that companies are frantically cleaning, storing, and managing their core data to prepare for the AI era.
At the end of the Q&A session, when a Barclays analyst asked about the driving force behind cloud transformation, Nadella emphasized the indispensable role of storage in AI architecture from a technical foundational logic perspective. He clearly stated that AI workloads are not just about accelerators (GPUs):
“By the way, even for training tasks, AI training tasks require a bunch of compute and a bunch of storage very close to the compute.”
He further explained that in future inference scenarios, the agent model will not run on GPUs alone; it will also require conventionally provisioned resources, namely "compute and storage."
Confident About the Future
Regarding the market's concerns about AI demand in 2027 and beyond, Microsoft has shown strong confidence. Although the stock price is under short-term pressure, the signal from management is very clear: this is an "arms race" over computing power, and whoever can acquire more GPUs and deploy them more efficiently will reap the largest dividends from this wave of AI diffusion.
For investors worried about the return cycle, Nadella set the tone with one sentence:
“In fact, even at this early stage, we have built an AI business that is larger than some of the biggest franchises we have taken decades to build.”

Full Translation of Microsoft's Q2 FY2026 Earnings Call:
Company Participants
Jonathan Neilson, Vice President of Investor Relations
Satya Nadella, Chairman and CEO
Amy Hood, Executive Vice President and CFO
Q&A Session Analysts
Keith Weiss, Morgan Stanley
Mark Moerdler, Bernstein Research
Brent Thill, Jefferies
Karl Keirstead, UBS
Mark Murphy, JP Morgan
Brad Zelnick, Deutsche Bank
Raimo Lenschow, Barclays
Host
Hello everyone, welcome to Microsoft's Q2 fiscal year 2026 earnings call. At this time, all participants are in listen-only mode. There will be a question-and-answer session following the formal remarks. (Operator instructions) Just a reminder, this conference is being recorded.
I am now pleased to introduce Jonathan Neilson, Vice President of Investor Relations. Please go ahead.
Jonathan Neilson
Good afternoon, thank you for joining us today. Joining me on the call are Chairman and CEO Satya Nadella, CFO Amy Hood, Chief Accounting Officer Alice Jolla, and Corporate Secretary and Deputy General Counsel Keith Dolliver.
On Microsoft's Investor Relations website, you can find our earnings press release and financial summary slides, which are intended to supplement our prepared remarks for today's call and provide a reconciliation of GAAP and non-GAAP financial metric differences. More detailed outlook slides will be available on Microsoft's Investor Relations website when we provide outlook comments during this call.
During this call, we will discuss certain non-GAAP items. The non-GAAP financial metrics provided should not be viewed as a substitute or superior to the financial performance metrics prepared in accordance with GAAP. They are included as additional clarification items to help investors further understand the company's performance in the second quarter and the impact of these items and events on the financial results.
Unless otherwise noted, all growth comparisons we make during today's call are year-over-year. We will also provide growth rates at constant currency (if available) as a framework to assess our underlying business performance, excluding the effects of foreign exchange rate fluctuations. If the growth rate at constant currency is the same, we will only mention the growth rate.
We will post the prepared remarks on our website immediately following the call until the full transcript is available. Today's call is being webcast live and recorded. If you ask a question, it will be included in our live stream, transcript, and any future use of the recording. You can replay the call and view the transcript on Microsoft's Investor Relations website.
During this call, we will make forward-looking statements, which are predictions, projections, or other statements about future events. These statements are based on current expectations and assumptions and are subject to risks and uncertainties. Actual results may differ materially due to factors discussed in today's earnings press release, comments made on this call, and the risk factors described in our Forms 10-K and 10-Q and other reports and filings with the U.S. Securities and Exchange Commission. We undertake no obligation to update any forward-looking statements.
Next, I will hand the meeting over to Satya.
Satya Nadella
Thank you very much, Jonathan.
In this quarter, Microsoft Cloud revenue surpassed $50 billion for the first time, a year-over-year increase of 26%, reflecting the strength of our platform and accelerating demand. We are in the early stages of AI diffusion and its broad impact on GDP. As this diffusion accelerates and spreads, the total addressable market (TAM) at every layer of our technology stack will grow significantly.
In fact, even at this early stage, we have built an AI business that is larger in scale than some of our largest franchise businesses that took us decades to establish. Today, my remarks will focus on three layers of our stack: Cloud and Token Factory, Agent Platform, and High-value Agentic Experiences.
Speaking of our Cloud and Token Factory, the key to long-term competitiveness lies in shaping our infrastructure to support new large-scale workloads. We are building infrastructure for the heterogeneous and distributed nature of these workloads, ensuring it meets the geographic and segment-specific needs of all customers, including long-tail customers.
Our key metric for optimization is "tokens per dollar per watt," which boils down to leveraging chips, systems, and software to improve utilization and reduce total cost of ownership (TCO). A good example of this is our 50% throughput improvement on one of the highest capacity workloads in OpenAI reasoning, powering our Copilot.
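Nadella does not break the metric down further, but as a rough sketch (with entirely hypothetical throughput, cost, and power figures, since none are disclosed on the call), "tokens per dollar per watt" composes like this:

```python
def tokens_per_dollar_per_watt(tokens_per_sec: float,
                               cost_per_hour_usd: float,
                               power_watts: float) -> float:
    """Composite efficiency metric: token throughput normalized by
    both hourly cost and power draw. Higher is better."""
    tokens_per_hour = tokens_per_sec * 3600
    return tokens_per_hour / (cost_per_hour_usd * power_watts)

# Hypothetical fleet numbers, for illustration only: a 50% throughput
# gain at unchanged cost and power lifts the metric by the same 50%,
# which is how a pure software-level win shows up in this measure.
baseline = tokens_per_dollar_per_watt(10_000, 2.0, 700)
improved = tokens_per_dollar_per_watt(15_000, 2.0, 700)
print(improved / baseline)  # 1.5
```

The point of the composite metric is that chip upgrades, cooling, and software optimizations all move the same number, so they can be traded off against each other.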
Another example is unlocking new capabilities and efficiencies for our Fairwater data center. In this case, we connected sites in Atlanta and Wisconsin through an AI wide-area network, establishing a pioneering AI super factory. Fairwater's dual-layer design and liquid cooling technology enable us to run higher GPU densities, improving performance for large-scale training and reducing latency. Overall, we added nearly 1 gigawatt of total capacity just this quarter.
At the chip level, we have NVIDIA, AMD, and our own Maia chips, providing the best overall fleet performance, cost, and supply across multiple hardware generations. Earlier this week, we launched the Maia 200 accelerator. The Maia 200 delivers over 10 Petaops of computing power at FP4 precision, with a total cost of ownership (TCO) more than 30% lower than the latest generation of hardware in our fleet. We will begin scaling its use with reasoning and synthetic data generation for our superintelligence team, followed by inference for Copilot and Foundry.
Given that AI workloads are not only about AI accelerators but also consume significant general-purpose compute, we are pleased with the progress on the CPU front. Cobalt 200 represents another significant leap, with over a 50% performance improvement compared to our first custom processor designed for cloud-native workloads.
Sovereignty is increasingly becoming a focus for customers, and we are expanding our solutions and global footprint to match. Just this quarter, we announced data center investments in seven countries to support local data residency requirements. We offer the most comprehensive set of sovereignty solutions across public cloud, private cloud, and national partner clouds, allowing customers to choose the right approach for each workload based on the local control they require.
Next, I would like to talk about the Agent Platform.
Like every platform transition, all software is being rewritten. A new application platform is being born. You can think of Agent as a new application; to build, deploy, and manage Agents, customers will need a model catalog, tuning services, orchestration tools, context engineering services, AI security, management, observability, and safety.
It all starts with having a wide selection of models. Our customers want to use multiple models across any workload, and they can fine-tune and optimize them based on cost, latency, and performance requirements. We offer the broadest selection of models among all hyperscale cloud providers. This quarter, we added support for GPT-5.2 and Claude 4.5. Over 1,500 customers are using Anthropic and OpenAI models on Foundry.
We are seeing increasing demand for specific regional models (including Mistral and Cohere) as more customers seek sovereign AI options, and we continue to invest in our first-party models, which are optimized to meet the highest value customer scenarios in productivity, coding, and security.
As part of Foundry, we also empower customers to customize and fine-tune models. Customers increasingly want to capture their tacit knowledge and translate it into model weights, as this is their core intellectual property (IP). For companies, this may be the most critical sovereignty consideration, as AI spreads more broadly in our GDP, and every company needs to protect its enterprise value.
For Agents to work effectively, they need to be rooted in enterprise data and knowledge. This means connecting their Agents to record systems, operational data, analytical data, and semi-structured and unstructured productivity and communication data. This is precisely what we are doing through the unified IQ layer that spans Fabric, Foundry, and the data powering Microsoft 365.
In the world of contextual engineering, Foundry Knowledge and Fabric are gaining momentum. Foundry Knowledge provides better context through automated source routing and advanced agent retrieval while respecting user permissions. Fabric brings together end-to-end operations, real-time, and analytical data. Since its full launch two years ago, Fabric's annual revenue run rate has now exceeded $2 billion, with over 31,000 customers, and it continues to be the fastest-growing analytics platform in the market, with a year-on-year revenue growth of 60%.
Overall, the number of customers spending over $1 million per quarter on Foundry has grown by nearly 80%, driven by strong growth across various industries, with over 250 customers expected to process more than 1 trillion tokens on Foundry this year.
There are many great examples showing that customers are using all these features of Foundry to build their own agent systems. Alaska Airlines is creating natural language flight search, BMW is accelerating design cycles, Land O'Lakes is achieving precision agriculture for cooperative members, and Symphony AI is addressing bottlenecks in the fast-moving consumer goods industry. Of course, Foundry remains a powerful gateway to the entire cloud service. The vast majority of Foundry customers use additional Azure solutions, such as developer services, application services, and databases, as they scale.
In addition to Fabric and Foundry, we are also addressing the issue of knowledge workers building agents through Copilot Studio and Agent Builder. Over 80% of Fortune 500 companies have active agents built using these low-code and no-code tools.
With the surge of agents, every customer needs new ways to deploy, manage, and secure them. We believe this creates a significant new category and substantial growth opportunity. This quarter, we launched Agent 365, enabling organizations to easily extend their existing governance, identity, security, and management to agents. This means that the same controls they are already using in Microsoft 365 and Azure now extend to the agents they build and deploy in our cloud or any other cloud. Partners like Adobe, Databricks, GenSpark, Glean, NVIDIA, SAP, ServiceNow, and Workday have already integrated Agent 365. We are the first provider to offer such a cross-cloud agent control plane.
Now let's turn to the High-value Agentic Experiences we are building.
AI experiences are intent-driven and begin to work within the scope of tasks. We are entering an era of macro authorization and micro control across domains. Intelligence using multiple models is embedded in various form factors. You can see this in chat, the new Agent inbox application, coworker scaffoldings, Agent workflows embedded in applications and IDEs used daily, and even in command lines with file system access and skills.
This is the approach we are taking through our first-party Copilot series that spans key areas.
In the consumer space, for example, the Copilot experience spans chat, news, information feeds, search, creation, browsing, shopping, and integration with operating systems, and the momentum is strong. The daily active users of our Copilot applications have nearly tripled year-over-year, and through Copilot Checkout, we partner with PayPal, Shopify, and Stripe so that customers can make purchases directly within the app.
For Microsoft 365 Copilot, we focus on productivity across the organization. WorkIQ leverages the underlying data of Microsoft 365 to create the most valuable, stateful Agents for each organization. It provides powerful reasoning capabilities regarding people, roles, artifacts, communications, as well as history and memory, all within the organization's secure boundaries. The accuracy and latency of Microsoft 365 Copilot powered by WorkIQ are unparalleled, delivering faster and more accurate work outcomes than competitors. We have seen the largest quarter-over-quarter improvement in response quality to date.
This has driven record usage intensity, with the average number of conversations per user doubling year-over-year. Microsoft 365 Copilot is also becoming a true daily habit, with daily active users increasing tenfold year-over-year. We have also seen strong momentum for the Researcher Agent, which supports OpenAI and Claude models, as well as Agent modes in Excel, PowerPoint, and Word.
Overall, this is a record quarter for the increase in Microsoft 365 Copilot seats, with a year-over-year growth of over 160%. We are seeing accelerated quarter-over-quarter growth in seat additions, now with 15 million paid Microsoft 365 Copilot seats, and several times that number in enterprise chat users. We are witnessing larger-scale commercial deployments. The number of customers with over 35,000 seats has doubled year-over-year. Pfizer, ING, NASA, the University of Kentucky, the University of Manchester, the U.S. Department of the Interior, and Westpac have all purchased over 35,000 seats. Publicis alone purchased over 95,000 seats, covering nearly all employees.
We are also gaining share in Dynamics 365, with Agents built into the entire suite. A great example is how Visa uses our customer knowledge management Agent in Dynamics to transform customer conversation data into knowledge articles, and how Sandvik uses our sales qualification Agent to automate lead qualification among tens of thousands of potential customers.
In the coding field, we see strong growth across all paid GitHub Copilot subscriptions. Copilot Pro Plus subscriptions for individual developers increased 77% quarter-over-quarter, bringing total paid Copilot subscribers to 4.7 million, up 75% year-over-year. For example, after successfully rolling out Copilot to over 30,000 developers, Siemens has gone all in on GitHub, adopting the full platform to boost developer productivity.
GitHub Agent HQ is the organizational layer for all coding agents (such as Anthropic, OpenAI, Google, Cognition, and xAI) in the context of customer GitHub code repositories. Through Copilot CLI and VS Code, we provide developers with the full range of form factors and models needed for an AI-first coding workflow. When you add WorkIQ as a skill or MCP to our developer workflow, it becomes a game changer that can present more context, such as emails, meetings, documents, projects, messages, etc. You can simply ask the agent to plan and execute changes to the codebase based on updates specified in SharePoint or using the transcripts from the last engineering and design meeting in Teams.
We go beyond this; with the GitHub Copilot SDK, developers can now embed the same runtime, multi-model, multi-step planning, tools, MCP integration, and authorization flow behind Copilot CLI directly into their applications.
In terms of security, we have added dozens of new and updated security Copilot agents in Defender, Entra, Intune, and Purview. For example, Icertis's SOC team used the security Copilot agent to reduce manual classification time by 75%, which is a true game changer in an industry facing severe talent shortages. To make it easier for security teams to get started, we are launching the security Copilot to all E5 customers, and our security solutions are becoming an essential part of managing organizational AI deployments. This quarter, Purview audited 24 billion Copilot interactions, a year-on-year increase of 9 times.
Finally, I want to talk about two other high-impact agent experiences.
First, in the healthcare sector, Dragon Copilot is the leader in its category, helping over 100,000 healthcare providers automate workflows. After a successful trial with primary care physicians, Mount Sinai Health is now deploying Dragon Copilot to providers across its entire system. Overall, this quarter we helped document 21 million patient visits, a threefold increase year-over-year.
Secondly, when it comes to science and engineering, consumer goods companies like Unilever and EDA companies like Synopsys are using Microsoft Discovery to orchestrate dedicated agents for end-to-end R&D. They are capable of reasoning over scientific literature and internal knowledge, formulating hypotheses, initiating simulations, and continuously iterating to drive new discoveries.
In addition to AI, we continue to invest in all core franchises to meet the needs of our customers and partners, and we are seeing strong progress. For example, in terms of cloud migration, the IaaS adoption rate of our new version of SQL Server is more than twice that of the previous version. In security, we now have 1.6 million security customers, including over 1 million using four or more of our workloads.
Windows has reached an important milestone, with Windows 11 users reaching 1 billion, a year-on-year increase of over 45%. This quarter, we gained market share in Windows, Edge, and Bing. LinkedIn achieved double-digit member growth, and paid video advertising grew by 30%. In gaming, we are committed to delivering great gaming experiences on Xbox, PC, cloud, and all other devices. We have seen record numbers of PC players and paid streaming hours on Xbox.
Finally, we are very pleased with our current delivery situation and building a full stack to seize future opportunities.
Next, I will hand the meeting over to Amy to introduce our financial results and outlook, and I look forward to answering everyone's questions later.
Amy Hood
Thank you, Satya, and good afternoon, everyone.
With the growing demand for our products and the focused execution of our sales team, we have once again exceeded expectations in revenue, operating income, and earnings per share while investing to drive long-term growth.
This quarter, revenue was $81.3 billion, an increase of 17%, with a 15% increase at constant currency. Gross profit grew by 16%, with a 14% increase at constant currency, while operating income grew by 21%, with a 19% increase at constant currency. Earnings per share were $4.14, an increase of 24%, with a 21% increase at constant currency, reflecting the results after adjusting for the impact of the OpenAI investment.
The boost from foreign exchange to reported results was slightly below expectations, particularly in Intelligent Cloud revenue. The company's gross margin was 68%, slightly down year-over-year, primarily driven by ongoing investments in AI infrastructure and the increasing usage of AI products, partially offset by continued efficiency improvements (especially in Azure and M365 commercial cloud) and a shift in sales mix towards higher-margin businesses.
Operating expenses grew by 5%, with a 4% increase at constant currency, primarily driven by investments in computing power and AI talent R&D, as well as impairment charges in the gaming business. Operating margin increased year-over-year to 47%, exceeding expectations.
Just a reminder, we still account for our investment in OpenAI using the equity method. Due to OpenAI's capital restructuring, we now record gains or losses based on our share of the changes in net assets on its balance sheet, rather than based on our share of operating profits or losses on its income statement. As a result, we recorded a gain that pushed other income and expenses in the GAAP results to $10 billion. Excluding the impact of OpenAI, other income and expenses were slightly negative and below expectations, primarily driven by net investment losses.
Capital expenditures (CapEx) were $37.5 billion this quarter, with about two-thirds of the capital expenditures allocated to short-term assets, mainly GPUs and CPUs. Customer demand continues to exceed our supply. Therefore, we must balance the following demands: better aligning our procurement supply to meet the growing Azure demand, expanding first-party AI usage (across services like M365 Copilot and GitHub Copilot), increasing allocations to R&D teams to accelerate product innovation, and continuously replacing obsolete servers and network equipment. The remaining expenditures were for long-term assets that will support monetization for the next 15 years and beyond.
This quarter, total finance leases amounted to $6.7 billion, primarily for large data center sites. Cash paid for property, plant, and equipment (PP&E) was $29.9 billion. Operating cash flow was $35.8 billion, up 60%, driven by strong cloud billings and collections. Free cash flow was $5.9 billion, down quarter-over-quarter, reflecting higher cash capital expenditures as the finance-lease share of the portfolio declined. Finally, we returned $12.7 billion to shareholders through dividends and share buybacks, a 32% year-over-year increase.
Now let's look at our business performance. Commercial bookings grew by 230%, with a 228% increase at constant currency, driven by the previously announced large Azure commitments from OpenAI (reflecting multi-year demand) and the commitments from Anthropic announced in November, as well as healthy growth in our core annuity sales activities. Commercial remaining performance obligations (RPO) continued to be reported on a net basis after deducting reserves, increasing to $625 billion, a 110% year-over-year increase, with a weighted average remaining term of about 2.5 years. About 25% will be recognized as revenue in the next 12 months, a 39% year-over-year increase. The remaining portion to be recognized after the next 12 months grew by 156%. Approximately 45% of our commercial RPO balance comes from OpenAI. The remaining significant balance grew by 28%, reflecting ongoing broad customer demand across the entire portfolio.
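Taking Hood's disclosed percentages at face value, the implied dollar amounts are easy to back out (an illustrative check on the figures above, not company guidance):

```python
# Figures disclosed on the call.
rpo_total = 625e9        # commercial RPO, in USD
openai_share = 0.45      # approx. share of the RPO balance from OpenAI
next_12m_share = 0.25    # approx. share recognized within 12 months

openai_rpo = rpo_total * openai_share          # about $281 billion
next_12m_revenue = rpo_total * next_12m_share  # about $156 billion

print(f"OpenAI portion of RPO:  ${openai_rpo / 1e9:.0f}B")
print(f"Recognized within 12mo: ${next_12m_revenue / 1e9:.0f}B")
```

In other words, roughly three-quarters of the $625 billion backlog sits beyond the next twelve months, which is what makes the 2.5-year weighted average term, and the depreciation-mismatch question above, material.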
Microsoft Cloud revenue was $51.5 billion, a 26% increase, with a 24% increase at constant currency. Microsoft Cloud gross margin was slightly better than expected at 67%, down year-over-year, as ongoing AI investments were partially offset by the continued efficiency improvements mentioned earlier.
Now looking at segment performance.
Productivity and Business Processes revenue was $34.1 billion, a 16% increase, with a 14% increase at constant currency. M365 commercial cloud revenue grew by 17%, with a 14% increase at constant currency, driven by consistent execution in the core business and an increased contribution from Copilot. ARPU growth was once again led by E5 and M365 Copilot. Paid M365 commercial seats grew 6% year-over-year to over 450 million, with an expanded install base across all customer segments, primarily in our small and medium-sized business and frontline worker products. M365 commercial product revenue grew 13%, with a 10% increase at constant currency, exceeding expectations due to higher-than-expected transactional purchases of Office 2024. M365 consumer cloud revenue grew 29%, with a 27% increase at constant currency, again driven by ARPU growth. M365 consumer subscriptions grew 6%. LinkedIn revenue grew 11%, with a 10% increase at constant currency, driven by marketing solutions. Dynamics 365 revenue grew 19%, with a 17% increase at constant currency, with sustained growth across all workloads. Segment gross profit grew 17%, with a 15% increase at constant currency, and gross margin increased, again driven by improved efficiency in M365 commercial cloud, partially offset by ongoing AI investments (including the impact of growing Copilot usage). Operating expenses grew 6%, with a 5% increase at constant currency, and operating income grew 22%, with a 19% increase at constant currency. Due to improved operating leverage and the higher gross margin noted above, operating margin increased year-over-year to 60%.
Next is the Intelligent Cloud segment. Revenue was $32.9 billion, growing 29%, with a 28% increase at constant currency. In Azure and other cloud services, revenue grew 39%, with a 38% increase at constant currency, slightly exceeding expectations, thanks to continued efficiency improvements across our hardware fleet, which allowed us to reallocate some capacity to Azure and monetize it this quarter. As previously mentioned, we continue to see strong demand across workloads, customer segments, and geographic regions, with demand continuing to exceed available supply. In our on-premises server business, revenue grew 2%, with a 1% increase at constant currency, exceeding expectations, driven by demand for hybrid solutions, including benefits from the SQL Server 2025 release, as well as higher transactional purchasing ahead of memory price increases. Segment gross profit grew 20%, with a 19% increase at constant currency. Gross margin declined year-over-year, driven by ongoing AI investments and a shift in sales mix toward Azure, partially offset by efficiency improvements in Azure. Operating expenses grew 3%, with a 2% increase at constant currency, and operating income grew 28%, with a 27% increase at constant currency. Operating margin was 42%, down slightly year-over-year, as increased AI investments were mostly offset by improved operating leverage.
Now looking at the More Personal Computing business. Revenue was $14.3 billion, down 3%. Windows OEM and devices revenue grew 1%, remaining flat at constant currency. Windows OEM grew 5% on strong execution, continuing to benefit from the end of support for Windows 10. Results exceeded expectations as inventory levels remained elevated and purchases were pulled forward ahead of memory price increases. Search and news advertising revenue ex-TAC (excluding traffic acquisition costs) grew 10%, with a 9% increase at constant currency, slightly below expectations due to some execution challenges. As expected, sequential growth slowed as revenue from third-party partnerships normalized. In gaming, revenue declined 9%, or 10% at constant currency. Xbox content and services revenue fell 5%, or 6% at constant currency, below expectations, impacted by first-party content performance that affected the broader platform. Segment gross profit increased 2%, or 1% at constant currency, with gross margin up year-over-year, driven by a sales-mix shift toward higher-margin businesses. Operating expenses grew 6%, or 5% at constant currency, driven by the gaming impairment charges mentioned earlier and investments in compute and AI talent. Operating income decreased 3%, or 4% at constant currency, with operating margin flat year-over-year at 27%, as higher operating expenses were mostly offset by higher gross margin.
Now, turning to our Q3 outlook, unless otherwise noted, all figures are in USD.
Based on current exchange rates, we expect foreign exchange to add 3 percentage points to total revenue growth. Within the segments, we expect foreign exchange to add 4 points to Productivity and Business Processes revenue growth, and 2 points each to Intelligent Cloud and More Personal Computing. We expect foreign exchange to add 2 points to COGS (cost of goods sold) and operating expense growth. As a reminder, this impact reflects movement relative to exchange rates a year ago.
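The guidance above quotes foreign exchange as a point contribution to growth, which is also how the constant-currency figures throughout the remarks are derived. A minimal sketch of that arithmetic (the function name and figures are illustrative, not Microsoft's disclosed methodology):

```python
def constant_currency_growth(reported_growth_pct, fx_points):
    """Approximate constant-currency growth by removing the FX
    contribution (stated in percentage points) from reported growth."""
    return reported_growth_pct - fx_points

# Hypothetical segment: 16% reported growth with a 2-point FX tailwind
print(constant_currency_growth(16, 2))  # 14 (constant-currency growth)
```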
Starting with the company overall. We expect revenue to be between $80.65 billion and $81.75 billion, or growth of 15% to 17%, with commercial business continuing to grow strongly, partially offset by consumer business. We expect COGS to be between $26.65 billion and $26.85 billion, or growth of 22% to 23%. Operating expenses are expected to be between $17.8 billion and $17.9 billion, or growth of 10% to 11%, driven by ongoing investments in R&D, AI computing capabilities, and talent, with a low year-ago comparison. Operating margin is expected to decline slightly year-on-year.
Excluding any impact from our investment in OpenAI, other income and expenses are expected to be around $700 million, driven by fair market gains from our equity investment portfolio and interest income, partially offset by interest expenses (including interest payments related to data center financing leases). We expect the adjusted effective tax rate for Q3 to be around 19%.
Next, we expect capital expenditures to decline sequentially, due to normal fluctuations in cloud infrastructure build-out and the timing of financing lease deliveries. As we work to narrow the supply-demand gap, we expect the mix of short-term assets to remain similar to Q2.
Now looking at our commercial business. In terms of commercial bookings, after adjusting for last year's OpenAI contracts, we expect healthy growth in the core business on a continuously growing base. As a reminder, the significant OpenAI contracts signed in Q2 represent multi-year demand, which will lead to some quarterly fluctuation in future bookings and RPO growth rates. Microsoft Cloud gross margin should be around 65%, declining year-over-year, driven by ongoing AI investments.
Now let's look at the segment guidance.
In terms of Productivity and Business Processes, we expect revenue to be between $34.25 billion and $34.55 billion, representing a growth of 14% to 15%. For M365 Commercial Cloud, we anticipate revenue growth at constant currency to be between 13% and 14%, maintaining stable year-over-year growth on a large and continuously expanding base. The accelerating momentum of Copilot and ongoing adoption of E5 will again drive ARPU growth. Assuming the normalization of transactional purchasing trends for Office 2024, M365 Commercial product revenue should see a low single-digit decline quarter-over-quarter. Just a reminder, M365 Commercial products include components that may vary due to current revenue recognition dynamics. Driven by ARPU growth and sustained subscription volumes, M365 Consumer Cloud revenue growth should be in the mid-to-high 20% range. For LinkedIn, we expect revenue growth to be in the low double digits. In terms of Dynamics 365, we expect revenue growth to be in the high teens, with all workloads continuing to grow.
For Intelligent Cloud, we expect revenue to be between $34.1 billion and $34.4 billion, representing a growth of 27% to 29%. For Azure, we expect third-quarter revenue growth at constant currency to be between 37% and 38%, while the year-over-year comparison base includes significantly accelerated growth rates in the third and fourth quarters of last year. As mentioned earlier, demand continues to exceed supply, and we will need to continue balancing the supply of incoming inventory allocated here with other priorities. Just a reminder, depending on the timing of capacity delivery and go-live, as well as current revenue recognition based on contract mix, year-over-year growth rates may experience quarterly fluctuations. In our on-premises server business, as growth rates normalize following the release of SQL Server 2025, we expect revenue to see a low single-digit decline, although rising memory prices may introduce additional volatility to transactional purchasing.
In terms of More Personal Computing, we expect revenue to be between $12.3 billion and $12.8 billion. Windows OEM and devices revenue is expected to decline in the low teens, as Windows 10 end-of-support revenue normalizes and elevated inventory levels draw down this quarter; Windows OEM revenue itself is expected to decline by about 10%. The range of potential outcomes remains wider than normal, partly due to the potential impact of rising memory prices on the PC market. Search and news advertising revenue (net of traffic acquisition costs) is expected to grow in the high single digits. Even as we work to improve execution, we expect Bing and Edge share to continue to grow, driven by sales. As contributions from third-party partnerships continue to normalize, we expect sequential growth to slow. In Xbox content and services, we expect a mid-single-digit revenue decline against a year-ago comparison that benefited from strong content performance, partially offset by growth in Xbox Game Pass. Hardware revenue is expected to decline year-over-year.
Now, some additional thoughts on the remaining time of this fiscal year and beyond.
First, foreign exchange. Based on current exchange rates, we expect foreign exchange to contribute less than one percentage point to total revenue and COGS growth in the fourth quarter, with no impact on operating expense growth. Within the segments, we anticipate that foreign exchange will increase productivity and business process revenue, as well as more personal computing revenue, by about one percentage point, while contributing less than one percentage point to intelligent cloud revenue growth.
With the strong work done in the first half of the year prioritizing investments in key growth areas, along with the favorable mix impact of higher Windows OEM and commercial on-premises revenue, we now expect fiscal year 2026 operating margins to improve slightly. We previously noted the potential impact of rising memory prices on the Windows OEM and on-premises server markets. In addition, continued increases in memory prices will affect capital expenditures, although the impact on Microsoft Cloud gross margin will be more gradual, as these assets depreciate over six years.
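The "more gradual" margin impact follows from six-year straight-line depreciation: a dollar of pricier hardware reaches cost of goods sold one-sixth at a time. A minimal illustration (the $6B figure is hypothetical):

```python
def annual_depreciation(asset_cost, useful_life_years=6):
    """Straight-line depreciation: cost is spread evenly over the useful life."""
    return asset_cost / useful_life_years

# A hypothetical $6B of incremental server spend, inflated by memory prices,
# flows into COGS at $1B per year rather than hitting margins up front.
incremental_capex = 6e9
print(f"Annual COGS impact: ${annual_depreciation(incremental_capex)/1e9:.1f}B")  # $1.0B
```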
Finally, we achieved strong revenue growth in the first half of the year and are investing in every layer of the stack to continue providing high-value solutions and tools to our customers.
With that, let's move into the Q&A session. Jonathan.
Jonathan Neilson
Thank you, Amy. We will now move to the Q&A session. In fairness to others, we ask that participants ask only one question each. Operator, can you please repeat your instructions?
Q&A Session
Operator
Thank you (operator instructions). Our first question comes from Keith Weiss at Morgan Stanley. Please go ahead.
Q - Keith Weiss
Great. Thank you for taking my question. I see Microsoft's earnings report shows a 24% year-over-year profit growth, which is an impressive result; you executed well, revenue growth is strong, and margins are expanding. However, I see that in after-hours trading, the stock is still down. I think one of the core issues weighing on investors is the pace of capital expenditures (CapEx) growing faster than we expected, while Azure's growth may be a bit slower than we anticipated.
I think this fundamentally comes down to concerns about the return on investment (ROI) of this capital expenditure over time. So I hope you can help us fill in some blanks on how we should think about capacity expansion and how much Azure growth this can bring in the future. More importantly, when these investments materialize, how should we think about their ROI? Thank you.
A - Amy Hood
Thank you, Keith. Let me start, and Satya will certainly add some broader comments.
I think first, you are asking a very direct question, and I think many investors are doing the same, which is to draw a link between capital expenditures and the Azure revenue number. Last quarter, and again this quarter, we have tried to talk more specifically about where all the capital expenditures are going, especially the short-term capital expenditures across CPUs and GPUs.
Sometimes, I think it's best to view the Azure guidance we provide as a capacity allocation guideline regarding how much Azure revenue we can deliver.
Because when we spend capital and specifically invest in GPUs (of course, this also applies to CPUs, but GPUs are more specific), we are actually making long-term decisions. The first thing we do is address the increasing usage and sales of M365 Copilot, GitHub Copilot, and our first-party applications, as well as the pace of acceleration. Then we ensure the long-term nature of our investments in R&D and product innovation. I believe a significant part of the product acceleration you've seen from us over the past period is because we allocated GPUs and capacity to many talented AI personnel we've hired over the past few years.
The remaining portion is then used to serve the continuously growing demand for Azure. One way to think about this: we are sometimes asked about it, and if all the GPUs that went live in Q1 and Q2 had been allocated to Azure, the reported growth rate would already have exceeded 40%.
I think the most important point is to realize that this is about investing in all levels of the stack that benefit customers. I hope this helps in thinking about capital growth. It is reflected in revenue growth across various parts of the business and also manifests as an increase in operating expenses as we invest in personnel.
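Hood's framing, that reported Azure growth reflects how much newly landed capacity is allocated to Azure rather than any demand ceiling, can be made concrete with a toy model. All numbers here are hypothetical, chosen only to reproduce the 39%-versus-40%+ contrast in the remarks:

```python
def implied_growth(base_revenue, capacity_revenue, share_pct):
    """Growth rate Azure would report if share_pct percent of the new
    capacity's revenue potential were allocated to Azure."""
    return 100 * (capacity_revenue * share_pct / 100) / base_revenue

base = 100.0          # prior-period Azure revenue (indexed, hypothetical)
new_capacity = 50.0   # revenue potential of newly landed GPU capacity (hypothetical)

# Allocating 78% of new capacity to Azure (rest to Copilots, first-party, R&D)
print(implied_growth(base, new_capacity, 78))   # 39.0 -> the reported growth rate
# Allocating all of it would push reported growth past 40%
print(implied_growth(base, new_capacity, 100))  # 50.0
```

The point of the sketch is that the gap between 39% and 40%+ is an allocation choice across the LTV portfolio, not a demand shortfall.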
A - Satya Nadella
Yes, I think you covered it comprehensively, Amy. But fundamentally, as investors, when you consider our capital and the gross margin (GM) profile of our portfolio, you should obviously consider Azure.
But you should also consider M365 Copilot, GitHub Copilot, Dragon Copilot, and Security Copilot. All of these have gross margin profiles and lifetime value (LTV). I mean, if you think about it, acquiring an Azure customer is very important to us, but acquiring a customer for M365 or GitHub or Dragon Copilot is equally important. And by the way, these are all incremental businesses for us.
So, we don't want to just maximize one of our businesses. We want to allocate capacity in a way that allows us to essentially build the best LTV portfolio under supply constraints. That's one aspect. The other aspect that Amy mentioned is R&D. I mean, you have to consider that computing is also R&D, that's its second element.
So we leverage all of this, obviously for long-term optimization.
Q - Keith Weiss
Great. Thank you.
A - Jonathan Neilson
Thank you, Keith. Operator, next question please.
Operator
The next question comes from Mark Moerdler of Bernstein Research. Please go ahead.
Q - Mark Moerdler
Thank you very much for taking my question, and congratulations on this quarter's performance.
Another question we believe investors want to understand is how you view the transition from hardware capital expenditure investments to revenue and margin. You capitalize servers over six years, but your RPO average term has increased from two years last quarter to 2.5 years. How can investors be assured that, since much of this capital expenditure is AI-centric, you will be able to generate sufficient revenue over the six-year lifespan of the hardware to achieve robust revenue and gross margin growth, similar to CPU revenue? Thank you.
A - Amy Hood
Thank you, Mark. Let me start at a high level, and Satya can add on. I think when you consider the average term, what you need to remember is that the average term is a combination of a wide range of contract arrangements we have. Many of those around M365 or the commercial applications portfolio are short-term, right? Three-year contracts, so frankly, their terms are quite short.
The rest are primarily longer-term Azure contracts, which you saw this quarter when you saw the term extend from about two years to two and a half years. One way to think about this is that most of the capital we are spending today and the large number of GPUs we are purchasing have contracts signed for most of their useful life.
So one way to think about this is that much of the risk you point to does not exist, because that capacity has already been sold for its useful life. The reason the blended term looks shorter is partly the M365 component; if you look only at Azure's RPO, it is longer. Much of that is CPU-based, but the GPU contracts we are talking about, including with some of our largest customers, are sold for the entire useful life of the GPUs, so I don't think the kind of risk you might be pointing to exists. Hope that helps.
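The blended 2.5-year figure Hood describes is a weighted average of short commercial-apps contracts and longer Azure commitments. A sketch with hypothetical weights shows how the blend works (the balances and terms below are ours, chosen only to land near the quoted 2.5 years):

```python
def weighted_avg_term(balances_and_terms):
    """Weighted-average remaining term across contract buckets,
    weighted by each bucket's RPO balance."""
    total = sum(balance for balance, _ in balances_and_terms)
    return sum(balance * term for balance, term in balances_and_terms) / total

# Hypothetical mix: shorter commercial-apps RPO blended with longer Azure RPO
mix = [(200e9, 1.5),   # (balance USD, remaining term in years)
       (425e9, 3.0)]
print(round(weighted_avg_term(mix), 2))  # 2.52 -> near the quoted ~2.5 years
```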
A - Satya Nadella
Yes, in addition to what Amy mentioned (i.e., that it has been contracted for its useful life), I would add that we do run even the latest models on older generations of hardware, if you will, and that gives us durability. Ultimately, that's why we think about continuously aging the hardware fleet, right? It's not just about buying a bunch of equipment in a given year; rather, every year you are riding Moore's Law, adding equipment, applying software, and then optimizing across the whole fleet.
A - Amy Hood
Mark, perhaps I should clarify one point, in case it's not obvious, which is that over the lifespan of usage, your delivery efficiency actually increases, so in places where you have sold the entire lifecycle, the margins actually improve over time. So I think this might be a good reminder for people, as we have always seen this in our CPU hardware fleet.
Q - Mark Moerdler
That's a great answer. Thank you very much, thanks.
A - Jonathan Neilson
Thank you, Mark. Operator, next question please.
Operator
The next question comes from Brent Thill of Jefferies. Please go ahead.
Q - Brent Thill
Thank you, Amy. Regarding the 45% backlog related to OpenAI, I'm curious if you could comment on that.
Clearly, there are concerns about its durability. I know you might not be able to say too much on this, but I think everyone is worried about this risk exposure. It would be great if you could share your perspective and what you and Satya are seeing.
A - Amy Hood
I think I might think about this in a very different way, Brent.
The first thing to focus on is that the reason we talk about this number is that 55% or about $350 billion relates to the breadth of our portfolio, which is a wide customer base across solutions, across Azure, across industries, and across geographies. This is a huge RPO balance, larger than most peers and more diversified than most peers. Frankly, I think we are very confident about this. When you think about just this portion growing by 28%, it’s really impressive work in terms of breadth and the adoption curves we are seeing, which is what I get asked about the most.
It is growing by customer segment, by industry, and by geography. So it’s very consistent. Now if you ask me about OpenAI and the contracts and health, listen, it’s a great partnership. We continue to be their scale provider.
We are pleased to be doing this. We sit underneath one of the most successful businesses being built, and we continue to feel good about it. It keeps us at the forefront of content building and application innovation.
A - Jonathan Neilson
Thank you, Brent. Operator, next question please.
Operator
The next question comes from Karl Keirstead of UBS. Please go ahead.
Q - Karl Keirstead
Okay.
Thank you very much. Amy, regardless of how you allocate capacity between first-party and third-party, can you qualitatively comment on the amount of capacity you are about to launch? I think the addition of 1 gigawatt in the December quarter is extraordinary, suggesting that capacity increases are accelerating. However, I believe many investors are focused on Fairwater in Atlanta and Fairwater in Wisconsin, and regardless of how this capacity is allocated in the coming quarters, we would like to hear some comments on the magnitude of the capacity increase. Thank you.
A - Amy Hood
Yes, Karl, I think we’ve already mentioned a few things. We are working as hard as we can to increase capacity as quickly as possible. You mentioned specific locations like Atlanta or Wisconsin. Those are projects that have been delivered over several years, so I wouldn’t necessarily focus on specific locations.
What we really need to do, and what we are working very hard on, is to increase capacity globally. A lot of that will increase in the U.S. You will see the locations you mentioned, but we also need to increase globally to meet the customer demand and increased usage we are seeing. You know, we will continue to build long-term infrastructure.
The way to think about this is that we need to ensure we have available power, land, and facilities, and once they are completed, we will continue to put in GPUs and CPUs as quickly as possible. Ultimately, we will work to execute as quickly and operate as efficiently as possible so that the fleet achieves the highest possible utilization.
So I think this is really not about two locations. Karl, I would absolutely abstract this out. That is a multi-year delivery timeline, but in reality, we just need to get it done at every location we are currently building or starting to build. We are working as quickly as we can.
Q - Karl Keirstead
Okay. Got it. Thank you.
A - Jonathan Neilson
Thank you, Karl. Operator, please go to the next question.
Operator
The next question comes from Mark Murphy of JP Morgan. Please go ahead.
Q - Mark Murphy
Thank you very much, Satya. The performance results of the Maya 200 accelerator look outstanding, especially compared to TPU, Trainium, and Blackwell, which have been around much longer. Can you put this in perspective: how much of a core competitive advantage does Microsoft believe the chip could become? And Amy, does this have a noteworthy impact on the gross margin profile underpinning future inference costs?
A - Satya Nadella
Yes, no, thank you for your question.
So, there are a few things. One is that, in various forms, we have been building our own chips for a very long time. So we are very, very excited about the progress of the Maya 200. Especially when we consider running GPT-5.2 and the performance we can achieve on GEMMs at FP4 precision, it proves a point: when you have a new workload, a new shape of workload, you can start to innovate end-to-end between the model and the chip.
The entire system is not just about the chip, but also how the rack-scale network works, which is optimized for this specific workload.
Another thing is that we are clearly going back and forth and working closely with our own superintelligent team, leveraging all our models. You can imagine that anything we build will be fully optimized for Maya. So, we feel good about this. I think overall, we are in such an early stage, even looking at the number of chip innovations and system innovations. Even since December, I think the fresh thing is that everyone is talking about low-latency inference, right? So one thing we want to ensure is that we are not locked into anything.
If anything, we have a good partnership with NVIDIA and AMD; they are innovating, and we are innovating. You want your hardware fleet to achieve the best TCO at any given point in time. This is not a game of single generations. I think many people just talk about who is ahead right now; remember, you have to stay ahead at every point in time going forward.
This means you really want to consider letting a lot of the innovation happening outside come into your hardware fleet so that your hardware fleet has a fundamental advantage at the TCO level. This is how I see it: we are excited about Maya, we are excited about Cobalt, we are excited about our DPU, our network cards. So we have a lot of system capabilities. This means we can vertically integrate.
And just because we can vertically integrate doesn't mean we only vertically integrate. So we want to have flexibility here. That’s what you see us doing.
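The FP4 point in Nadella's answer is ultimately about inference economics: 4-bit weights need a quarter of the memory of 16-bit weights, which translates into fewer accelerators per model served. A rough arithmetic sketch (the 70B parameter count is hypothetical, not a reference to any Microsoft model):

```python
def weight_memory_gb(num_params, bits_per_weight):
    """Memory needed to hold model weights at a given precision, in GB."""
    return num_params * bits_per_weight / 8 / 1e9

params = 70e9  # a hypothetical 70B-parameter model
fp16 = weight_memory_gb(params, 16)
fp4 = weight_memory_gb(params, 4)
print(f"FP16: {fp16:.0f} GB, FP4: {fp4:.0f} GB ({fp16/fp4:.0f}x smaller)")
# FP16: 140 GB, FP4: 35 GB (4x smaller)
```

Real FP4 deployments also need scaling metadata and do not always quantize every layer, so the 4x figure is an upper bound on the savings.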
A - Jonathan Neilson
Thank you, Mark. Operator, please go to the next question.
Operator
The next question comes from Brad Zelnick of Deutsche Bank. Please go ahead.
Q - Brad Zelnick
Okay.
Thank you very much. Satya, we heard a lot about frontier transformation from Judson and the Ignite conference. We also see customers achieving breakthrough gains as they adopt the Microsoft AI stack. Can you help us build the momentum for enterprises starting these journeys, and what expectations do they have for how much their spending with Microsoft can expand as they become frontier companies? Thank you.
A - Satya Nadella
Yes. Thank you for the question. So I think one thing we are seeing is adoption across our three major suites, right? So if you look at M365, you look at what's happening in the security space, you look at GitHub. In fact, this is quite fascinating.
What I mean is that these three things have actually created a compound effect for our customers in the past, just like Entra as an identity system or Defender as a protective system across these three areas is very helpful. But what you see now is something like WorkIQ, right?
Just to give you a sense, for any company using Microsoft today, the most important database is the data underlying Microsoft 365, because it holds all this implicit information, right? Who are your people? What are their relationships? What projects are they working on? What are their artifacts, their communications? So this is a very important asset for any business process or workflow context. In fact, the scenario I mentioned in my remarks is that you can now surface WorkIQ as an MCP server in a GitHub code repository and say, hey, please check my design meetings in Teams over the past month and tell me whether my code repository reflects them. I mean, this is a pretty powerful way to think about how things that might previously have lived separately in our tools business and GitHub business suddenly become transformative, right? In a sense, the agent backplane is really changing the company, right? I think that's the most magical thing: you deploy these things, and suddenly the agent is helping you coordinate, bringing more leverage to your business.
Besides that, of course, there's transformation, which is what businesses are doing. How should we think about customer service? How should we think about marketing? How should we think about finance? How should we think about and build our own Agents? That's where all the services in Fabric and Foundry come in, and of course, the GitHub tools are helping them, even low-code and no-code tools. I have some statistics on the usage of these tools, but one of the things that excites me more is these new Agents, systems, M365 Copilot, GitHub Copilot, Security Copilot, all of these combined together, through all the data and all the deployments to compound benefits, I think this could be the most transformative effect at the moment.
Q - Brad Zelnick
Thank you. Very helpful.
A - Jonathan Neilson
Thank you, Brad. Operator, we have time for one last question.
Operator
The last question will come from Raimo Lenschow of Barclays. Please go ahead.
Q - Raimo Lenschow
Perfect. Thank you for letting me ask a question. Over the past few quarters, we've talked about the CPU and GPU sides of Azure.
You made some operational adjustments at the beginning of last January. Can you talk about what you saw there, and perhaps more broadly, whether customers are realizing that if they want to do AI properly, migrating to the cloud is important? What are we seeing in terms of cloud transformation? Thank you.
A - Jonathan Neilson
I'm sorry, Raimo, I didn't quite catch that. Are you asking about the Azure CPU side, or can you repeat the question?
Q - Raimo Lenschow
Yes.
Yes. Sorry. I want to know about the CPU aspect of Azure because we made some operational adjustments there. We also hear a lot from the front lines that people realize if you want to do proper AI, you need to go to the cloud, and that is driving momentum.
Thank you.
A - Satya Nadella
Yes. I think I understand. So first, as I mentioned in my remarks, when you think about AI workloads, you shouldn't think of them only as AI accelerator compute, right? Because, in a sense, any agent requires more than that.
The agent then works through the tools it uses, perhaps a container, which obviously runs on general compute. In fact, whenever we think about building out the hardware fleet, we think about it at scale. By the way, even AI training jobs require a bunch of compute and a bunch of storage very close to that compute. The same thing is happening in inference.
So in inference, the agentic model essentially requires you to provision computers, that is, compute resources, for the agent. They run on GPUs, but they also need computers, i.e., general compute and storage. That's what's happening even in the new workloads. The other thing you mentioned is that cloud migration is still ongoing.
In fact, one statistic I have is that the latest SQL Server as an IaaS service in Azure is growing. So that's one of the reasons we need to think about our commercial cloud and balance it with the rest of our AI cloud because when customers bring their workloads and bring new workloads, they need all these infrastructure elements in the regions they are deploying.
Q - Raimo Lenschow
Yes. Okay.
Perfect. Thank you.
A - Jonathan Neilson
Thank you, Raimo. The Q&A session of today's earnings call ends here.
Thank you all for joining us today, and we look forward to speaking with you soon. Thank you, everyone.
A - Amy Hood
Thank you.
Operator
Thank you.
Today's call is concluded. You may now disconnect, and we appreciate your participation. Good night.
