MSFT Microsoft Corporation

Microsoft FY26 Second Quarter Earnings Conference Call
Q2 2026
Wednesday, January 28, 2026

Speakers: Jonathan Neilson, Satya Nadella, Amy Hood

Prepared Remarks
Jonathan Neilson

Good afternoon and thank you for joining us today. On the call with me are Satya Nadella, chairman and chief executive officer, Amy Hood, chief financial officer, Alice Jolla, chief accounting officer, and Keith Dolliver, corporate secretary and deputy general counsel.

On the Microsoft Investor Relations website, you can find our earnings press release and financial summary slide deck, which is intended to supplement our prepared remarks during today’s call and provides the reconciliation of differences between GAAP and non-GAAP financial measures. More detailed outlook slides will be available on the Microsoft Investor Relations website when we provide outlook commentary on today’s call.

On this call we will discuss certain non-GAAP items. The non-GAAP financial measures provided should not be considered as a substitute for, or superior to, the measures of financial performance prepared in accordance with GAAP. They are included as additional clarifying items to aid investors in further understanding the company's second-quarter performance and the impact these items and events have on the financial results.

All growth comparisons we make on the call today relate to the corresponding period of last year unless otherwise noted. We will also provide growth rates in constant currency, when available, as a framework for assessing how our underlying businesses performed, excluding the effect of foreign currency rate fluctuations. Where growth rates are the same in constant currency, we will refer to the growth rate only.
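The constant-currency convention described above can be illustrated with a small calculation. This is a generic sketch of the standard approach, not Microsoft's actual methodology or figures; all numbers and the single-subsidiary setup are invented for illustration:

```python
# Hypothetical example: one foreign subsidiary reporting in EUR.
prior_revenue_eur = 100.0    # prior-year quarter revenue, in EUR
current_revenue_eur = 110.0  # current quarter revenue, in EUR
prior_fx = 1.10              # USD per EUR, prior-year quarter
current_fx = 1.00            # USD per EUR, current quarter

# As-reported growth translates each period at that period's own rate,
# so currency swings flow into the growth figure.
reported_growth = (current_revenue_eur * current_fx) / (prior_revenue_eur * prior_fx) - 1

# Constant-currency growth restates the current period at prior-year rates,
# isolating underlying business performance from FX fluctuations.
cc_growth = (current_revenue_eur * prior_fx) / (prior_revenue_eur * prior_fx) - 1

print(f"as reported: {reported_growth:+.1%}")        # +0.0%
print(f"constant currency: {cc_growth:+.1%}")        # +10.0%
```

Here a 10% gain in local currency is fully masked by a weaker euro in the as-reported figure, which is exactly the distortion the constant-currency view removes.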

We will post our prepared remarks to our website immediately following the call until the complete transcript is available. Today's call is being webcast live and recorded. If you ask a question, it will be included in our live transmission, in the transcript, and in any future use of the recording. You can replay the call and view the transcript on the Microsoft Investor Relations website.

During this call, we will be making forward-looking statements which are predictions, projections, or other statements about future events. These statements are based on current expectations and assumptions that are subject to risks and uncertainties. Actual results could materially differ because of factors discussed in today's earnings press release, in the comments made during this conference call, and in the risk factor section of our Form 10-K, Forms 10-Q, and other reports and filings with the Securities and Exchange Commission. We do not undertake any duty to update any forward-looking statement.

Satya Nadella

Thank you, Jonathan.

This quarter, the Microsoft Cloud surpassed $50 billion in revenue for the first time, up 26% year-over-year, reflecting the strength of our platform and accelerating demand.

We are in the beginning phases of AI diffusion and its broad GDP impact.

Our TAM will grow substantially across every layer of the tech stack as this diffusion accelerates and spreads.

In fact, even in these early innings, we have built an AI business that is larger than some of our biggest franchises that took decades to build.

Today, I will focus my remarks across the three layers of our stack: Cloud & Token Factory, Agent Platform, and High Value Agentic Experiences.

When it comes to our Cloud & Token Factory, the key to long term competitiveness is shaping our infrastructure to support new high-scale workloads.

We are building this infrastructure out for the heterogeneous and distributed nature of these workloads, ensuring the right fit with the geographic and segment-specific needs of all customers, including the long tail.

The key metric we are optimizing for is tokens per watt per dollar, which comes down to increasing utilization and decreasing TCO using silicon, systems, and software.
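The metric named above can be read as throughput normalized by both power and cost. A back-of-the-envelope sketch with invented numbers (not Microsoft fleet data) shows how the two levers mentioned, higher utilization and lower TCO, move it:

```python
# Hypothetical fleet figures -- purely illustrative, not Microsoft data.
tokens_per_second = 2_000_000  # aggregate inference throughput
power_watts = 50_000           # power draw of the serving fleet
cost_per_hour_usd = 400.0      # amortized total cost of ownership per hour

# "Tokens per watt per dollar": throughput divided by power, then by cost.
# Raising utilization lifts the numerator; silicon, systems, and software
# improvements shrink the denominators.
tokens_per_watt = tokens_per_second / power_watts
tokens_per_watt_per_dollar = tokens_per_watt / cost_per_hour_usd

print(tokens_per_watt)             # 40.0
print(tokens_per_watt_per_dollar)  # 0.1
```

Under this framing, a 50% throughput gain at flat power and cost improves the metric by the same 50%, which is why workload-level software optimization shows up directly in the economics.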

A good example of this is the 50% increase in throughput we were able to achieve in one of our highest volume workloads – OpenAI inferencing powering our Copilots.

Another example was the unlocking of new capabilities and efficiencies for our Fairwater datacenters.

In this instance, we connected both our Atlanta and Wisconsin sites through AI WAN to build a first-of-its-kind AI superfactory.

Fairwater’s two-story design and liquid cooling allow us to run higher GPU densities and thereby improve both performance and latencies for high-scale training.

All up, we added nearly one gigawatt of total capacity this quarter alone.

At the silicon layer, we have NVIDIA, AMD, and our own Maia chips delivering the best all-up fleet performance, cost, and supply across multiple generations of hardware.

Earlier this week, we brought online our Maia 200 accelerator.

Maia 200 delivers 10+ petaFLOPS at FP4 precision with over 30% improved TCO, compared to the latest generation hardware in our fleet.

We will be scaling this starting with inferencing and synthetic data generation for our superintelligence team, as well as inferencing for Copilot and Foundry.

And given AI workloads are not just about AI accelerators, but also consume large amounts of general-purpose compute, we are pleased with the progress we are making on the CPU side as well.

Cobalt 200 is another big leap forward, delivering over 50% higher performance compared to our first custom-built processor for cloud-native workloads.

Sovereignty is increasingly top of mind for customers, and we are expanding our solutions and global footprint to match.

We announced datacenter investments in seven countries this quarter alone, supporting local data residency needs.

And we offer the most comprehensive set of sovereignty solutions across public, private, and national partner clouds, so customers can choose the right approach for each workload, with the local control they require.

Next, I want to talk about the agent platform.

Like in every platform shift, all software is being rewritten. A new app platform is born.

You can think of agents as the new apps.

And to build, deploy, and manage agents, customers will need a model catalog, tuning services, harnesses for orchestration, services for context engineering, AI safety, management, observability, and security.

It starts with having broad model choice.

Our customers expect to use multiple models as part of any workload that they can fine-tune and optimize based on cost, latency, and performance requirements.

And we offer the broadest selection of models of any hyperscaler.

This quarter, we added support for GPT 5.2, as well as Claude 4.5.

Already, over 1,500 customers have used both Anthropic and OpenAI models on Foundry.

We are seeing increasing demand for region-specific models, including Mistral and Cohere, as more customers look for sovereign AI choices.

And we continue to invest in our first-party models, which are optimized to address the highest value customer scenarios, such as productivity, coding, and security.

As part of Foundry, we also give customers the ability to customize and fine-tune models.

Increasingly, customers want to be able to capture the tacit knowledge they possess inside of model weights as their core IP.

This is probably the most important sovereign consideration for firms as AI diffuses more broadly across our GDP and every firm needs to protect their enterprise value.

For agents to be effective, they need to be grounded in enterprise data and knowledge.

That means connecting agents to systems of record and operational data, analytical data, as well as semi-structured and unstructured productivity and communications data.

And this is what we are doing with our unified IQ layer, spanning Fabric, Foundry, and the data powering Microsoft 365.

In a world of context engineering, Foundry Knowledge and Fabric are gaining momentum.

Foundry Knowledge delivers better context with automated source routing and advanced agentic retrieval, while respecting user permissions.

And Fabric brings together end-to-end operational, real-time, and analytical data.

Two years since it became broadly available, Fabric’s annual revenue run rate is now over two billion dollars, with over 31,000 customers.

And it continues to be the fastest growing analytics platform on the market, with revenue up 60% year-over-year.

All-up, the number of customers spending one million dollars-plus per quarter on Foundry grew nearly 80%, driven by strong growth in every industry.

And over 250 customers are on track to process over one trillion tokens on Foundry this year.

There are many great examples of customers using all of this capability on Foundry to build their own agentic systems.

Alaska Airlines is creating natural language flight search.

BMW is speeding up design cycles.

Land O’Lakes is enabling precision farming for co-op members.

And Symphony.AI is addressing bottlenecks in the CPG industry.