Meta Platforms, Inc.
Second Quarter 2025 Results Conference Call
July 30th, 2025
Thank you. Good afternoon and welcome to Meta’s second quarter 2025 earnings conference call.
Our remarks today will include forward-looking statements, which are based on assumptions as of
today. Actual results may differ materially as a result of various factors including those set forth in
today’s earnings press release, and in our quarterly report on Form 10-Q filed with the SEC. We
undertake no obligation to update any forward-looking statement.
During this call we will present both GAAP and certain non-GAAP financial measures. A
reconciliation of GAAP to non-GAAP measures is included in today’s earnings press release. The
earnings press release and an accompanying investor presentation are available on our website at
investor.atmeta.com.
We had another strong quarter with more than 3.4 billion people using at least one of our apps
each day -- and strong engagement across the board. Our business continues to perform very
well, which enables us to invest heavily in our AI efforts.
Over the last few months we have begun to see glimpses of our AI systems improving themselves.
The improvement is slow for now, but undeniable. Developing superintelligence -- which we define
as AI that surpasses human intelligence in every way -- we think is now in sight.
Meta's vision is to bring personal superintelligence to everyone -- so that people can direct it
towards what they value in their own lives. We believe this has the potential to begin an exciting
new era of individual empowerment.
A lot has been written about the economic and scientific advances that superintelligence can
bring. I am extremely optimistic about this. But I think that if history is a guide, then an even more
important role will be how superintelligence empowers people to be more creative, develop
culture and communities, connect with each other, and lead more fulfilling lives.
To build this future, we've established Meta Superintelligence Labs, which includes our
foundations, product, and FAIR teams, as well as a new lab that is focused on developing the next
generation of our models. We're making good progress towards Llama 4.1 and 4.2 -- and in
parallel, we're also working on our next generation of models that will push the frontier in the next
year or so.
We're building an elite, talent-dense team. Alexandr Wang is leading the overall team, Nat
Friedman is leading our AI products and applied research, and Shengjia Zhao is Chief Scientist for
the new effort. They're all incredibly talented leaders and I'm excited to work closely with them
and the world-class group of AI researchers, and infrastructure and data engineers that we're
assembling.
I've spent a lot of time building this team this quarter, and the reason that so many people are
excited to join is because Meta has all the ingredients required to build leading models and deliver
them to billions of people. The people who are joining us will have access to unparalleled compute
as we build out several multi-GW clusters. Our Prometheus cluster is coming online next year and
we think it'll be the world's first 1GW+ cluster. We're also building out Hyperion, which will be able
to scale up to 5GW over several years. And we have multiple more titan clusters in development as
well.
We're making all these investments because we have conviction that superintelligence is going to
improve every aspect of what we do.
From a business perspective, I mentioned last quarter that there are five basic opportunities that
we're pursuing: improved advertising, more engaging experiences, business messaging, Meta AI,
and AI devices. So I can go into a bit of detail on each.
On advertising, the strong performance this quarter is largely thanks to AI unlocking greater
efficiency and gains across our ads system. This quarter, we expanded our new AI-powered
recommendation model for ads to new surfaces and improved its performance by using more
signals and a longer context. It's driven roughly 5% more ad conversions on Instagram and 3% on Facebook.
We're also seeing good progress with AI for ad creative -- with a meaningful percent of our ad
revenue now coming from campaigns using one of our Generative AI features. This is going to be
especially valuable for smaller advertisers with limited budgets, while agencies will continue the
important work to help larger brands apply these tools strategically.
The second opportunity is more engaging experiences. AI is significantly improving our ability to
show people content that they’re going to find interesting and useful. Advancements in our
recommendation systems have improved quality so much that it has led to a 5% increase in time
spent on Facebook and 6% on Instagram just this quarter.
There's a lot of potential for content itself to get better too. We're seeing early progress with the
launch of our AI video editing tools across Meta AI and our new Edits app, and there's a lot more to
do here.
The third opportunity is business messaging. I've talked before about how I believe every business
will soon have a business AI just like they have an email address, social media account, and
website. We're starting to see some product market fit in a number of countries where we're
testing these agents, and we're integrating these business AIs into ads on Facebook and
Instagram, as well as directly into e-commerce websites.
The fourth opportunity is Meta AI. Its reach is already quite impressive with more than a billion
monthly actives. Our focus is now deepening the experience and making Meta AI the leading
personal AI. As we continue improving our models we see engagement grow, so our next
generation of models is going to continue to really help here.
The fifth opportunity is AI devices. We continue to see strong momentum with our Ray-Ban Meta
glasses, with sales accelerating. We're also launching new performance AI glasses with the Oakley
Meta HSTNs. They have longer battery life and a higher-resolution camera, and are designed for sports.
The percent of people using Meta AI is growing and we're seeing new users' AI retention increase
too, which is a good sign for that continued use. I think that AI glasses are going to be the main
way that we integrate superintelligence into our day-to-day lives, so it's important to have all
these different styles that appeal to different people in different settings.
Finally, we’re seeing people continue to spend more time with our Quest ecosystem and the
community continues to grow steadily. We launched the Meta Quest 3S Xbox Edition last month,
and we're seeing record interest in cloud gaming. And beyond gaming, we continue to see a
broader set of use cases with media and web-browsing contributing a significant portion of
engagement.
We’re going to have more to share on all of this, especially our Reality Labs work, at Connect on
September 17th, so I encourage you to tune into that.
Overall, this has been a busy quarter. Strong business performance and real momentum in
assembling both the talent and the compute needed to build personal superintelligence for
everyone. I am very grateful for our teams who are working hard to deliver this, and thanks to all of
you for being on this journey with us. And now, here's Susan.
Let’s begin with our consolidated results. All comparisons are on a year-over-year basis unless
otherwise noted.
Q2 total revenue was $47.5 billion, up 22% on both a reported and constant currency basis.
Q2 total expenses were $27.1 billion, up 12% compared to last year.
In terms of the specific line items:
Cost of revenue increased 16%, driven mostly by higher infrastructure costs and payments to
partners, partially offset by a benefit from the previously announced extension of server useful
lives.
R&D increased 23%, mostly due to higher employee compensation and infrastructure costs.
Marketing & Sales increased 9%, primarily due to an increase in professional services related to
our ongoing platform integrity efforts as well as marketing costs, partially offset by lower
employee compensation.
G&A decreased 27%, driven mostly by lower legal-related costs.
We ended Q2 with over 75,900 employees, down 1% quarter-over-quarter as the vast majority of
the employees impacted by performance-related reductions earlier this year were no longer
captured in our headcount. This was partially offset by continued hiring in priority areas of
monetization, infrastructure, Reality Labs, AI, as well as regulation and compliance.
Second quarter operating income was $20.4 billion, representing a 43% operating margin.
Our tax rate for the quarter was 11%, which reflects excess tax benefits from share-based
compensation due to the increase in our share price versus prior periods.
Net income was $18.3 billion or $7.14 per share.
Capital expenditures, including principal payments on finance leases, were $17.0 billion, driven by
investments in servers, data centers and network infrastructure.
Free cash flow was $8.5 billion. We repurchased $9.8 billion of our Class A common stock and paid
$1.3 billion in dividends to shareholders. We also made $15.1 billion in non-marketable equity
investments in the second quarter, which includes our minority investment in Scale AI along with
other investment activities. We ended the quarter with $47.1 billion in cash and marketable
securities and $28.8 billion in debt.
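As a quick sanity check on the consolidated figures above, the reported operating margin and EPS are internally consistent. A back-of-the-envelope sketch (the implied diluted share count is our own back-calculation, not a figure stated on the call):

```python
# Back-of-the-envelope checks on the reported Q2 2025 consolidated figures.
revenue_b = 47.5     # total revenue, $B (reported)
op_income_b = 20.4   # operating income, $B (reported)
net_income_b = 18.3  # net income, $B (reported)
eps = 7.14           # diluted EPS, $ (reported)

# Operating margin: operating income / revenue.
op_margin = op_income_b / revenue_b
print(f"operating margin ≈ {op_margin:.0%}")  # ≈ 43%, matching the call

# Implied diluted share count (our back-calculation, not stated on the call).
implied_shares_b = net_income_b / eps
print(f"implied diluted shares ≈ {implied_shares_b:.2f}B")  # ≈ 2.56B
```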
I’ll begin with our Family of Apps segment.
Our community across the Family of Apps continues to grow, and we estimate more than 3.4
billion people used at least one of our Family of Apps on a daily basis in June.
Q2 Total Family of Apps revenue was $47.1 billion, up 22% year-over-year.
Q2 Family of Apps ad revenue was $46.6 billion, up 21% or 22% on a constant currency basis.
Within ad revenue, the online commerce vertical was the largest contributor to year-over-year
growth.
On a user geography basis, ad revenue growth was strongest in Europe and Rest of World at 24%
and 23%, respectively. North America and Asia-Pacific grew 21% and 18%.
In Q2, the total number of ad impressions served across our services increased 11%, with growth
mainly driven by Asia-Pacific. Impression growth accelerated across all regions due primarily to
engagement tailwinds on both Facebook and Instagram and, to a lesser extent, ad load
optimizations on Facebook. The average price per ad increased 9%, benefiting from increased
advertiser demand, largely driven by improved ad performance. Pricing growth slowed modestly
from the first quarter due to the accelerated impression growth in Q2.
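The volume and pricing figures above compose roughly into the reported ad revenue growth. A simplified first-order decomposition (ignoring mix shifts and rounding, so the factors don't compose exactly):

```python
impression_growth = 0.11  # ad impressions, year-over-year (reported)
price_growth = 0.09       # average price per ad, year-over-year (reported)

# Revenue growth ≈ (1 + impression growth) * (1 + price growth) - 1.
implied_revenue_growth = (1 + impression_growth) * (1 + price_growth) - 1
print(f"implied ad revenue growth ≈ {implied_revenue_growth:.0%}")
# ≈ 21%, in line with the reported 21% Family of Apps ad revenue growth
```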
Family of Apps other revenue was $583 million, up 50%, driven by WhatsApp paid messaging
revenue growth as well as Meta Verified subscriptions.
We continue to direct the majority of our investments toward the development and operation of
our Family of Apps. In Q2, Family of Apps expenses were $22.2 billion, representing 82% of our
overall expenses. Family of Apps expenses were up 14%, mainly due to growth in employee
compensation and infrastructure costs, partially offset by lower legal-related costs.
Family of Apps operating income was $25.0 billion, representing a 53% operating margin.
Within our Reality Labs segment, Q2 revenue was $370 million, up 5% year-over-year due to
increased sales of AI glasses, partially offset by lower Quest sales.
Reality Labs expenses were $4.9 billion, up 1% year-over-year driven by higher non-headcount
related technology development costs.
Reality Labs operating loss was $4.5 billion.
Turning now to the business outlook. There are two primary factors that drive our revenue
performance: our ability to deliver engaging experiences for our community, and our effectiveness
at monetizing that engagement over time.
On the first, daily actives continue to grow across Facebook, Instagram and WhatsApp as we
make additional improvements to our recommendation systems and product experiences.
We continue to see momentum with video engagement in particular. In Q2, Instagram video time
was up more than 20% year-over-year globally. We’re seeing strong traction on Facebook as well,
particularly in the US where video time spent similarly expanded more than 20% year-over-year.
These gains have been enabled by ongoing optimizations to our ranking systems to better identify
the most relevant content to show.
We expect to deliver additional improvements throughout the year as we further scale up our
models and make recommendations more adaptive to a person’s interests within their session.
Another emphasis of our recommendations work is promoting original content. On Instagram,
over two-thirds of recommended content in the US now comes from original posts. In the second
half, we’ll be focused on further increasing the freshness of original posts so the right audiences
can discover original content from creators soon after it is posted.
We are also making good progress on our longer-term ranking innovations that we expect will
provide the next leg of improvements over the coming years. Our research efforts to develop
cross-surface foundation recommendation models continue to progress. We are also seeing
promising results from using LLMs in Threads recommendation systems. The incorporation of
LLMs is now driving a meaningful share of the ranking-related time spent gains on Threads.
We’re now exploring how to extend the use of LLMs in recommendation systems to our other
apps. We’re leveraging Llama in several other back-end processes as well, including actioning bug
reports so we can identify and resolve recurring issues more quickly and efficiently. This has
resulted in top-line bug reports in the US & Canada in Facebook Feed and Notifications dropping
by roughly 30% over the past 10 months.
The primary way we’re using Llama in our apps today is to power Meta AI, which is now available
in over 200 countries and territories. WhatsApp continues to be the largest driver of queries as
people message Meta AI directly for tasks such as information gathering, homework assistance,
and generating images. Outside of WhatsApp, we’re seeing Meta AI become an increasingly
valuable complement to our content discovery engines. Meta AI usage on Facebook is expanding
as people use it to ask about posts they see in Feed and find content across our platform in
Search. Another way we expect Meta AI will help with content discovery is through the automatic
translation and dubbing of foreign-language content into the audience’s local language. We'll have
more to share on our efforts there later this year.
Moving to Reality Labs. The growth of Ray-Ban Meta sales accelerated in Q2, with demand still
outstripping supply for the most popular SKUs despite increases to our production earlier this
year. We’re working to ramp supply to better meet consumer demand later this year.
Now to the second driver of our revenue performance: increasing monetization efficiency.
The first part of this work is optimizing the level of ads within organic engagement.
We continue to optimize ad supply across each surface to better deliver ads at the time and place
they are most relevant to people. In Q2, we also began introducing ads within Feed on Threads
and the Updates tab on WhatsApp, which is a separate space away from people’s chats.
As of May, advertisers globally can now run video and image ads to Threads users in most
countries, including the United States. While ad supply remains low and Threads is not expected to
be a meaningful contributor to overall impression growth in the near-term, we are optimistic
about the longer-term opportunity with Threads as the community and engagement grow and
monetization scales.
On WhatsApp, we are rolling out ads in Status and Channels, along with Channel Subscriptions in
the Updates tab to help businesses reach the more than 1.5 billion daily actives who visit that part
of the app. We expect the introduction of ads in Status will be gradual over the course of this year
and next, with low levels of expected ad supply initially. We also expect WhatsApp ads in Status to
earn a lower average price than Facebook or Instagram ads for the foreseeable future, due in part
to WhatsApp’s skew toward lower monetizing markets and more limited information that can be
used for targeting. Given this, we do not expect ads in Status to be a meaningful contributor to
total impressions or revenue growth for the next few years.
The second part of increasing monetization efficiency is improving marketing performance. There
are three areas of this work that I’ll focus on today: improving our ads systems, advancing our ads
products, including by building tools that assist in ads creation, and evolving our ads platform to
drive results that are optimized for each business’ objectives.
First is our ads systems, where we’re innovating in both the ads retrieval and ranking stages to
serve more relevant ads to people. A lot of this work involves us continuing to advance the
modeling innovations we’ve introduced previously while expanding their adoption across our
platform.
The Andromeda model architecture we began introducing in the second half of 2024 powers the
ads retrieval stage of our ads system, where we select the few thousand most relevant ads from
tens of millions of potential candidates. In Q2, we made enhancements to Andromeda that
enabled it to select more relevant and more personalized ad candidates, while also expanding
coverage to Facebook Reels. These improvements have driven nearly 4% higher conversions on Facebook Reels.
Our new Generative Ads Recommendation System, or GEM, powers the ranking stage of our ads
system, which is the part of the process after ads retrieval where we determine which ads to show
someone from candidates suggested by our retrieval engine. In Q2, we improved the performance
of GEM by further scaling our training capacity and adding organic and ads engagement data on
Instagram. We also incorporated new advanced sequence modeling techniques that helped us
double the length of event sequences we use, enabling our systems to consider a longer history of
the content or ads that a person has engaged with in order to provide better ad selections. The
combination of these improvements increased ad conversions by approximately 5% on Instagram
and 3% on Facebook Feed and Reels in Q2.
Finally, we expanded coverage of our Lattice model architecture in Q2. We first began deploying
Lattice in 2023 with our later stage ads ranking efforts, allowing us to run significantly larger
models that generalize learnings across objectives and surfaces in place of numerous, smaller ads
models that have historically been optimized for individual objectives and surfaces. In April, we
began deploying Lattice to earlier stage ads ranking models as well. This is leading not only to
greater capacity and engineering efficiency, but also improved performance, with the recent
Lattice deployments driving a nearly 4% increase in ad conversions across Facebook Feed and
Reels in Q2.
Next, ads products. Here, we’re seeing strong momentum with our Advantage+ suite of AI
powered solutions.
In Q2, we completed the rollout of our streamlined campaign creation flow for Advantage+ sales
and app campaigns, which makes it easier for advertisers to realize the performance benefits from
Advantage+ by having it turned on at the beginning. We’ve seen lifts in advertiser adoption of
Sales and App campaigns since we’ve expanded availability and are working to complete the
rollout for leads campaigns in the coming months.
Within our Advantage+ creative suite, adoption of gen AI ad creative tools continues to broaden.
Nearly 2 million advertisers are now using our video generation features, Image Animation and
Video Expansion, and we’re seeing strong results with our text generation tools as we continue to
add new features. In Q2, we started testing AI-powered translations so that advertisers can
automatically translate the caption of their ads to 10 different languages. While it’s early, we’ve
seen promising performance lifts in our pre-launch tests.
We are also continuing to see strong adoption of Image Expansion among small and medium-sized
advertisers, which speaks to how these tools help businesses who have fewer resources to
develop creative. With larger advertisers, we expect agencies will continue to be valuable partners
in helping apply these new tools to drive performance.
Outside of Advantage+, we’re seeing good momentum in business messaging, particularly in the
US where click-to-message revenue grew more than 40% year-over-year in Q2. The strong US
growth is benefiting from a ramp in adoption of our Website to Message ads, which drive people
to a business's website for more information before choosing to launch a chat with the business
in one of our messaging apps.
Finally, we continue to evolve our ads platform to drive results that are optimized for each
business’ objectives and the way they measure results.
In Q2, we completed the global rollout of our incremental attribution feature, which is the only
product on the market that optimizes for and reports on incremental conversions, which are
conversions that would not have happened without a person seeing the ad.
We also launched Omnichannel ads globally in Q2, which enable advertisers to optimize for
incremental sales both in-store and online with just one campaign. In tests, advertisers using
Omnichannel ads have seen a median 15% reduction in total Cost Per Purchase compared to
website-only optimization.
Next, I would like to discuss our approach to capital allocation. Our primary focus remains
investing capital back into the business, with infrastructure and talent being our top priorities.
I’ll start with hiring. Our approach to adding headcount continues to be targeted at the company’s
highest priority areas. We expect talent additions across all of our priority areas will continue to
drive overall headcount growth through this year and 2026, while headcount growth in our other
functions remains constrained. Within AI, we’ve had a particular emphasis on recruiting leading
talent within the industry as we build out Meta Superintelligence Labs to accelerate our AI model
development and product initiatives.
Next, infrastructure. We expect having sufficient compute capacity will be central to realizing
many of the largest opportunities in front of us over the coming years. We continue to see very
compelling returns from our AI capacity investments in our core ads and organic engagement
initiatives, and expect to continue investing significantly there in 2026. We also expect that
developing leading AI infrastructure will be a core advantage in developing the best AI models and
product experiences, so we expect to ramp our investments significantly in 2026 to support that
work.
Moving to our financial outlook. We expect third quarter 2025 total revenue to be in the range of
$47.5-50.5 billion. Our guidance assumes foreign currency is an approximately 1% tailwind to
year-over-year total revenue growth, based on current exchange rates. While we are not providing
an outlook for fourth quarter revenue, we would expect our year-over-year growth rate in the
fourth quarter of 2025 to be slower than the third quarter as we lap a period of stronger growth in
the fourth quarter of 2024.
We expect full year 2025 total expenses to be in the range of $114-118 billion, narrowed from our
prior outlook of $113-118 billion and reflecting a growth rate of 20-24% year-over-year.
While we’re still very early in planning for next year, there are a few factors we expect will provide
meaningful upward pressure on our 2026 total expense growth rate. The largest single driver of
growth will be infrastructure costs, driven by a sharp acceleration in depreciation expense growth
and higher operating costs as we continue to scale up our infrastructure fleet. Aside from
infrastructure, we expect the second largest driver of growth to be employee compensation as we
add technical talent in priority areas and recognize a full year of compensation expenses for
employees hired throughout 2025. We expect these factors will result in a 2026 year-over-year
expense growth rate that is above the 2025 expense growth rate.
Turning now to the capex outlook. We currently expect 2025 capital expenditures, including
principal payments on finance leases, to be in the range of $66-72 billion, narrowed from our prior
outlook of $64-72 billion and up approximately $30 billion year-over-year at the mid-point. While
the infrastructure planning process remains highly dynamic, we currently expect another year of
similarly significant capex dollar growth in 2026 as we continue aggressively pursuing
opportunities to bring additional capacity online to meet the needs of our AI efforts and business
operations.
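The capex guidance implies a rough prior-year base. A back-of-the-envelope sketch (the implied 2024 figure is our own back-calculation from the stated midpoint and the "up approximately $30 billion" comment, not a number stated on the call):

```python
capex_low_b, capex_high_b = 66, 72  # 2025 capex guidance range, $B (reported)
yoy_increase_b = 30                 # approximate YoY increase at midpoint (reported)

# Midpoint of the guided range, and the prior-year base it implies.
midpoint_b = (capex_low_b + capex_high_b) / 2
implied_2024_capex_b = midpoint_b - yoy_increase_b
print(f"2025 midpoint ≈ ${midpoint_b:.0f}B; implied 2024 capex ≈ ${implied_2024_capex_b:.0f}B")
```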
On to tax. With the enactment of the new U.S. tax law, we anticipate a reduction in our U.S. federal
cash tax for the remainder of the current year and future years. There are several alternative ways
of implementing the provisions of the Act, which we are currently evaluating. While we estimate
that the 2025 tax rate will be higher than our Q2 tax rate, we cannot quantify the magnitude at
this time.
In addition, we continue to monitor an active regulatory landscape, including the increasing legal
and regulatory headwinds in the EU that could significantly impact our business and our financial
results. For example, we continue to engage with the European Commission on our Less
Personalized Ads offering, or LPA, which we introduced in November 2024 based on feedback
from the European Commission in connection with the DMA. As the Commission provides further
feedback on LPA, we cannot rule out that it may seek to impose further modifications to it that
would result in a materially worse user and advertiser experience. This could have a significant
negative impact on our European revenue, as early as later this quarter. We have appealed the
European Commission’s DMA decision but any modifications to our model may be imposed during
the appeal process.
In closing, this was another strong quarter for our business as our investments in infrastructure
and technical talent continue to improve core ads performance and engagement on our platforms.
We expect the significant investments we’re making now will allow us to continue leveraging
advances in AI to extend those gains and unlock a new set of opportunities in the years to come.
Operator: To ask a question, please press star one on your touchtone phone. To withdraw
your question, again press star one. Please limit yourself to one question.
Please pick up your handset before asking your question to ensure clarity. If
you are streaming today’s call, please mute your computer speakers. And your
first question comes from the line of Eric Sheridan with Goldman Sachs.
Please go ahead.
Eric Sheridan: Thanks so much for taking the questions. Mark, when you think about where
the AI parts of your business have been evolving over the last three to six
months, I wanted to know what your key learnings were as you went deep into
that strategy that inform some of the shifts in both talent acquisition and
compute.
Couple that with some of the blog posts you put out recently in terms of how that
strategy might have evolved based on those key learnings. And Susan, building
on Mark’s comments on scaling talent and compute, I wanted to know if you
could go a little bit deeper in how we should be thinking about those two
components driving some of the commentary you’ve given around OpEx and
CapEx over the next 12 to 18 months. Thanks so much.
Mark Zuckerberg: Yes. Sure. I can start. At a high level, I think that there are all these questions
that people have about what are going to be the timelines to get to really
strong AI or Superintelligence or whatever you want to call it.
And I guess at each step along the way so far, we’ve observed that the more
aggressive, or fastest, assumptions have been the ones that have most
accurately predicted what would happen. And I think that just continued to
happen over the course of this year, too.
And I think, certainly, some of the work that we’re seeing with teams
internally being able to adapt Llama 4 to build autonomous AI agents that can
help improve the Facebook algorithm to increase quality and engagement, and
the like.
I mean that’s like a fairly profound thing if you think about it. I mean it’s
happening in low volume right now. So I’m not sure that, that result by itself
was a major contributor to this quarter’s earnings or anything like that.
But I think the trajectory on this stuff is very optimistic. And I think one of
the interesting challenges in running a business like this now is that there’s
just a very high chance, it seems, that the world is going to look pretty
different a few years from now. And on the one hand, there are all these
things that we can do, there are improvements to our core products that exist.
And then I think we have this principle that we believe in across the company,
which we tell people, take Superintelligence seriously. And the basic principle is
this idea that we think that this is going to really shape all of our systems
sooner rather than later, not necessarily on the trajectory of a quarter or two,
but on the trajectory of a few years.
And I think that’s just going to change a lot of the assumptions around how
different things work across the company. So anyway, we’re continually
observing how this works and what the trajectory and pace of AI progress
have been.
I think it continues to be on the faster end. And that informs a lot of
decisions -- everything from the importance and value of having the absolute
best, most talent-dense team at the company, to making sure that we have a
leading compute fleet so that the researchers here have more compute per
person to lead their research and then roll it out to billions of people across
our products, making sure that we build and drive these products through all
the different things that we do.
Which I think is one of the things that our company is the best in the world
at: when we take a technology, we’re good at driving it through all of our
apps and our ad systems, so it’s not just going to sit on the vine.
I don’t think there’s any other company that is as good as we are at taking
something and getting it in front of billions of people. So yes, we’re just
going to push very aggressively on all of that.
But at some level, yes, there’s a bet here based on the trajectory and the
signals that we’re seeing, and we’re just trying to read it.
Susan Li: Eric, for the second part of your question, we haven’t, in fact, kicked off our
budgeting process for 2026. So thinking about next year, there are clearly
many, many moving pieces in a very dynamic operating environment.
But there are certain aspects that we have some visibility into today including
the rough shape of our 2026 infrastructure plans. And that flows through into
our expense expectations next year. And we also have some visibility into the
compensation expense growth that we’ll recognize from the AI talent that
we’re hiring this year.
And so those two things are part of why we gave a little bit of an early preview
into the expectations for growth for 2026 total expenses as well as for 2026
CapEx.
So on the total expenses side, as I mentioned, we expect infrastructure will be
the single largest contributor to 2026 expense growth. That’s driven primarily
by a sharp acceleration in depreciation expense growth in 2026, largely driven
by recognizing incremental depreciation from assets that we purchase and
place in service in '26, as well as from infrastructure deployed through 2025
for which we'll recognize a full year of depreciation next year. We expect
depreciation expense growth to be a bigger factor in '25
and '26 than it has been in prior years. And then the other component of infra
cost growth next year would come from higher operating expenses including
energy costs, leases, maintenance and operational expenses that are
associated with maintaining that fleet.
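To make the depreciation mechanics described above concrete, here is a minimal sketch of how partial-year versus full-year recognition of straight-line depreciation produces that acceleration; the asset cost, useful life, and in-service date below are hypothetical for illustration, not Meta's actual figures:

```python
# Hypothetical illustration: an asset placed in service mid-2025 contributes
# only a partial year of straight-line depreciation in 2025, then a full year
# in 2026, so depreciation expense growth accelerates even with no new spend.

def annual_depreciation(cost, useful_life_years, months_in_service):
    """Straight-line depreciation recognized within one calendar year."""
    return cost / useful_life_years * (months_in_service / 12)

cost = 1_200  # hypothetical server cost, in $M
life = 5      # assumed useful life, in years

dep_2025 = annual_depreciation(cost, life, 6)   # placed in service July 2025
dep_2026 = annual_depreciation(cost, life, 12)  # full year recognized in 2026

print(dep_2025)  # 120.0
print(dep_2026)  # 240.0
```

Under these assumptions, 2026 recognizes twice the depreciation of 2025 on the same asset, which is the "full year of depreciation next year" effect.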
And we also expect some increased spend on cloud services in ‘26 to meet our
capacity needs as well as growth in network-related costs.
So a lot going on, on the infrastructure side as it contributes to the 2026 total
expense number. After that, employee compensation is the next largest driver
of expense growth in '26, again driven primarily by the investments that we're
making in technical talent, including recognizing a full year of compensation
expense for the AI talent we hire this year.
I realize this answer is getting a little long, so I’ll try to wrap up quickly. On the
CapEx side, the big driver of our increased CapEx in ‘26 will be scaling GenAI
capacity as we build out training capacity that’s going to drive higher spend
across servers, networking, data centers next year.
We also expect that we're going to continue investing significantly in core AI in
2026. And again, this is a very dynamic area of planning, but we wanted to
share our early thoughts as things are shaping up.
Operator: Your next question comes from the line of Brian Nowak with Morgan Stanley.
Brian Nowak: Thanks for taking my questions. I have two. The first one, Mark, just to kind of
go back to the intelligence labs and sort of the vision for Superintelligence.
As you sit here now versus 12 months ago, can you walk us
through any changes in the technological constraints or gating
factors that you're most focused on overcoming in the next 24 months, which
may be different than they were in the past, to make sure you can
really lead in superintelligence over the next ten years?
And then the second one to Susan or Mark, one on the core, you’ve made so
many improvements to the core to drive higher engagement,
recommendations, et cetera.
Can you just walk us through a couple of the factors you’re still most excited
about to come in the next 18 months that you think could drive a further lift to
engagement on the core platform? Thanks.
Mark Zuckerberg: Yes, sure. I mean, in terms of the research agenda, there are a bunch of areas
that we're very focused on.
And there’s obviously different scaling paradigms, and I don’t want to get too
much into the detail of research that we’re doing on this.
But I think that for developing superintelligence at some level, you’re not just
going to be learning from people because you’re trying to build something that
is fundamentally smarter than people.
So it's going to need to learn how to improve itself -- or you're going to need
to develop a way for it to do that. And that has big
implications for how we build products, how we run the company, new things
that we can invent, new discoveries that can be made, and society more broadly.
I think that's just a very fundamental part of this. In terms of the shape of
the effort overall, I guess I've just gotten a little bit more convinced that
small, talent-dense teams are the optimal configuration for driving
frontier research. And it's a bit of a different setup than we have on our other
world-class machine learning systems.
So if you look at like what we do in Instagram or Facebook or our ad system, we
can very productively have many hundreds or thousands of people basically
working on improving those systems, and we have very well-developed
systems for kind of individuals to run tests and be able to test a bunch of
different things. You don’t need every researcher there to have the whole
system in their head.
But I think for the leading research on superintelligence, you really
want the smallest group that can hold the whole thing in their head, which
drives, I think, some of the physics around the team size and the
dynamics around how that works. But I'll hand it over to Susan to talk about
more of the practical stuff.
Susan Li: Brian, on the forward-looking roadmap for the core recommendation
engine, there are a handful of things that we're focused on in the
near term.
One is we’re focused on making recommendations even more adaptive to what
a person is engaging with during their session so that the recommendations we
surface are the most relevant to what they’re interested in at that moment.
And we’re making optimizations to help the best content from smaller creators
break out by matching it to the right audiences sooner after it gets posted. And
we’re also working on improving the ability for our systems to discover more
diversified and niche interests for each person through interest exploration and
learning explicit user preferences.
We’re also planning to scale up our models further and incorporate more
advanced techniques that should improve the overall quality of
recommendations. But we also have a lot of long-term bets in the hopper
around areas like developing foundational models that will support
recommendations across multiple services, incorporating LLMs more deeply
into our recommendation systems.
And a big focus of this work is going to be on optimizing the systems to make
them more efficient, so that we can continue to scale up the capacity that we
use for our recommendation systems without eroding the ROI that we deliver.
Operator: Your next question comes from the line of Doug Anmuth with JPMorgan.
Douglas Anmuth: Thanks so much for taking the questions. One for Mark and one for Susan.
Mark, on open source, has your
thinking changed here at all, just as you pursue superintelligence and push for
even greater returns on your significant infrastructure investments? And then,
Susan, your comments on ‘26 CapEx suggest more than $100 billion of spend
next year potentially. Do you continue to expect to finance all this yourself? Or
could there be opportunities to partner here? Thanks.
Mark Zuckerberg: Yes. I mean on open source, I don’t think that our thinking has particularly
changed on this. We’ve always open-sourced some of our models and not open
sourced everything that we’ve done.
So I would expect that we will continue to produce and share leading open
source models. I also think that there are a couple of trends that are playing
out.
One is that we’re getting models that are so big that they’re just not practical
for a lot of other people to use. So it’s -- we would kind of wrestle with whether
it’s productive or helpful to share that or if that’s really just primarily helping
competitors or something like that.
So I think that there’s that concern. And then obviously as you approach real
superintelligence, I think there is a whole different set of safety concerns that I
think we need to take very seriously, which I wrote about in my note this morning.
But overall, I expect we'll continue to do a lot of open sourcing work.
I expect us to continue to be a leader there. And I also expect us to continue to
not open source everything that we do, which is a continuation of what
we've been doing. And yes, I think Susan will talk a little bit
more about the infrastructure, but it really is a massive investment.
We think it will be good over time. But we do take very seriously that this is
just a massive amount of capital to convert into many gigawatts of compute,
which we think is going to help us produce leading research and quality
products. And in running the business, I do look for opportunities to basically
convert capital into quality of products that we can deliver for people.
But this is certainly a massive bet that we're focused on, and we
want to make sure that what we build accrues to the best products
that we can deliver to the billions of people who use our services.
Susan Li: Doug, on your second question about how we expect to finance the growing
CapEx next year. We certainly expect that we will finance some large share of
that ourselves, but we’re also exploring ways to work with financial partners to
codevelop data centers.
We don’t have any finalized transactions to announce, but we generally believe
that there will be models here that will attract significant external financing to
support large-scale data center projects that are developed using our ability to
build world-class infrastructure while providing us with flexibility should our
infrastructure requirements change over time. So we are exploring many
different paths.
Operator: Your next question comes from the line of Justin Post with Bank of America.
Justin Post: Great, thank you. I’ll ask another one on the infrastructure. Mark, your spend is
now approaching some of the biggest hyperscalers out there. Do you think of
all this capacity as mostly for internal use? Or do you think there's a way to
share it, or even come up with a business model that leverages that capacity
for external uses?
And then Susan, when you think about the ROI on this CapEx, I’m sure you
have internal models, I’m sure you can’t share all that, but how are you thinking
about the ROI? And are you optimistic about the long-term returns? Thank you.
Susan Li: Justin, I can go ahead and take a crack at both of those. And obviously Mark,
you should feel free to weigh in. Right now we are focused on ensuring that we
have enough capacity for our internal use cases, which includes all of the
core AI work that we do to support the recommendation engine on the
organic content side and all the ads ranking and recommendation work.
And then, of course, to make sure that we are building the training capacity
that we think we need in order to build frontier AI models, and to make sure
that we're preparing ourselves for the types of inference use cases that we
might have ahead of us as we eventually focus not only
on developing frontier models, but also on expanding into the kinds of
consumer use cases that we hope will be widely useful and engaging
for our users.
So at present, we’re not really thinking about external use cases on the
infrastructure, but I’d say it’s a good question. On your second question, which
is really around the sort of ROI on CapEx, there are a couple of things.
So again, on the core AI side, we continue to see strong ROI. Our ability to
measure that is quite good, and we feel very good about the rigorous
measurement and the returns that we see there.
On the GenAI side, we are clearly much, much earlier on the return curve and
we don’t expect that the GenAI work is going to be a meaningful driver of
revenue this year or next year.
But we remain generally very optimistic about the monetization opportunities
that will open up, and Mark spoke to them in his script, the sort of five pillars, so
I won't repeat them here. But we believe these are opportunities that are very
adjacent and intuitive in terms of where our business is today, which is why
they would be big opportunities for us and why there will be big markets
attached to each of them.
The last thing I would add here is that we are
building the infrastructure with fungibility in mind. Obviously there are a lot of
things that you have to build up front in terms of the data center shells, the
networking infrastructure, et cetera.
But we will be ordering servers, which will ultimately be the biggest bulk of
CapEx spend, as and when we need them, and making the
best decisions at those times about where the capacity will be put
to use.
Operator: Your next question comes from the line of Mark Shmulik with Bernstein. Please
go ahead.
Mark Shmulik: Yes, thank you for taking my questions. Mark, as you go after the
Superintelligence vision, especially for those of us on the outside, what are kind
of some of the markers or KPIs that you’re tracking on whether you’re on track
and making progress? Is it really against kind of those five pillars you outlined
above? Or should we be thinking more broadly?
And Susan, obviously AI is delivering great ROI today on all those investments,
while also building towards longer-term goals. Just curious, has there
been any change or adjustment to how you think about the relationship
between revenues or core business performance and the cadence of
investments? Thank you.
Mark Zuckerberg: Yes. In terms of what to look at, what I'm going to look at internally is the
quality of the people on the teams, the quality of the models that we're
producing, the rate of improvement of our other AI systems across the
company, and the extent to which the leading foundation models that
we're building contribute to improving all the other AI systems and
everything that we're doing around the company.
Then I think you just get into our standard product and business playbook,
which is translating that technology into new products, which will first scale to
billions of people and then over time we will monetize.
But I think that there's going to be some lag in that, right? And that, I think, is
always the way that we work: whether we're building some new social
product or something like Meta AI, we're
going to work on getting to leading scale and building the highest quality product,
and focus on that for a few years. And then once we're really confident in that
position, we'll focus on ramping up the business around it.
So, going back to the last question a little bit, when you
compare this business to some of the cloud businesses, we do have this
delay where we focus on doing research and then
ramping consumer products, and it often takes some period of time before
we're really ramping up the business around it.
I think that’s kind of a known property of our business and the cycle around it.
But I guess, on the flip side, we believe that if you are building
superintelligence, you should use all of your GPUs to
serve your customers really well with it.
And we think there's going to be a much higher return from deploying that
capacity directly than from just renting or leasing out the
infrastructure the way other companies do.
Susan Li: On the second part of your question, we’ve said in the past that our primary
focus from a profitability perspective is driving consolidated operating profit
growth over time. And it won’t be linear.
In some years, we’ll deliver above-average profit growth. And in years where
we’re making big investments, I think we will see that impact the amount of
operating profit growth that we can deliver. And at the moment, we see a lot of
attractive investment opportunities that we believe are going to set us up to
deliver compelling profit growth in the coming years for all of our investors.
And so we're focused on constraining investments elsewhere as we pursue
those opportunities. But we really believe that this is a time for us to make
investments in the future of AI, as I think it will both open up new opportunities
for us and strengthen our core business.
Operator: Your next question comes from the line of Ron Josey with Citi. Please go
ahead.
Ronald Josey: Great, thanks for taking the question. Mark, I wanted to ask you about Meta AI.
I think you talked about on the call growing engagement overall, particularly
on WhatsApp, and now you have 1 billion users on the platform and the focus is
now on driving personalization.
Can you talk about the roadmap to
drive adoption here, particularly with Behemoth coming online at some point?
And then as people are using Meta AI with WhatsApp, thoughts on search and
queries and potentially monetizing that.
Mark Zuckerberg: Yes. I'm not going to get super deep into the roadmap on this. But the basic
dynamic is that as we continue improving the models behind Meta AI through
post-training, and as we swap in the updated models,
like when we go from Llama 4 to Llama 4.1 when we have that, engagement
increases. The models are inherently pretty general.
So yes, you focus on specific areas, but in general the model just gets better
at a lot of different things that people want to ask it or do with it. And I
think each version will improve engagement too, both through what we're
doing on a week-to-week basis in terms of continuing to train it, and when we
drop new generations or big dot releases of each generation.
So we’re focused on that. I’m not going to go into the specific research areas or
capabilities that we’re planning on dropping in the future. But obviously I’m
pretty excited about it.
Operator: Our last question comes from the line of Youssef Squali with Truist Securities.
Youssef Squali: Great, thank you guys for taking the questions. I have two. So Mark, the Ray-
Ban initiative has been a home run for you guys so far. Where are we on the
development of glasses as that new computing platform that you've talked
about in the past? Is it moving faster or slower than you thought? And as you
leverage Meta AI, do you believe glasses will ultimately replace smartphones?
Or do you need a new form factor that’s AI first? And then, Susan, just quickly,
how do you guys see SBC progressing over the next couple of years? Is it fair to
assume it will grow materially faster than revenue and OpEx? And how do you
minimize shareholder dilution? Thank you.
Mark Zuckerberg: Yes. I can talk a bit about the glasses. I'm very excited about the
progress that we're making, both with the Ray-Ban Metas and with the
Oakley Meta HSTNs, too, and other things that we have
planned.
Yes. I mean this product category is clearly doing quite well. And I think it’s
good for a lot of things. It is stylish eyewear, so people like wearing them just as
glasses.
It has a bunch of interesting functionality. And then the use of Meta AI in them
just continues to grow, and the percent of people who are using it for that on a
daily basis is increasing, and that’s all good to see.
I mean I continue to think that glasses are basically going to be the ideal form
factor for AI because you can let an AI see what you see throughout the day,
hear what you hear, and talk to you. Then you can add a display, whether it's the
kind of wide holographic field of view like we showed with Orion or just a
smaller display that might be good for showing some information.
And that’s also going to unlock a lot of value where you can just interact with
an AI assistant throughout the day in this multimodal way. It can see the
content around you. It can generate a UI for you, show you information and be
helpful.
I mean, I personally think -- I wear contact lenses, and I feel like if I didn't have
my vision corrected, I'd be at a sort of cognitive disadvantage going through
the world. And I think in the future, if you don't have glasses with AI or
some way to interact with AI, you're probably going to be at a
pretty significant cognitive disadvantage compared to the people you're
working with or competing against.
So I think that this is a pretty fundamental form factor. There are a lot of
different versions of it. Right now we’re building ones that I think are stylish,
but aren’t focused on the display.
I think that there's a whole set of different things to explore with displays. This
is what we've been building toward with Reality Labs over the last 5 to 10
years: basically doing the research on all of these different things.
The other thing that's
awesome about glasses is that they're going to be the ideal way to blend the
physical and digital worlds together. I think that's going to be really
important, too, and AI is going to accelerate that, too.
It's just that if you'd asked me five years ago whether we'd have the kind of
holograms that create immersive experiences or superintelligence first, I think
most people would have thought that you'd get the holograms first. And it's
this interesting quirk of the tech industry that I think we're going to end
up having really strong AI first.
But because we’ve been investing in this, I think we’re just several years ahead
on building out glasses. And I think that, that’s something that we’re excited to
keep on investing in heavily because I think it’s going to be a really important
part of the future.
Kenneth Dorell: Youssef, we didn’t quite catch your second question, do you mind just
repeating it?
Youssef Squali: Sure. Just as you look at the spend on stock-based compensation over the next
couple of years with all these hires, I'm assuming we're going to see that
grow materially faster than revenue and OpEx. And I just
want to know what you guys are planning to do to minimize shareholder
dilution. Is it mostly buybacks or anything else? Thank you.
Susan Li: Thanks, Youssef. So the impact of the increased compensation
costs, including SBC, of our AI hires this year is reflected in the revised 2025
expense outlook and in the comments I made about the 2026 expense
outlook.
Those are obviously a big driver of 2026 expense growth as we recognize the
full year of compensation for the additional talent we're bringing on. So we've
factored that into our expense outlook. Having said
that, we are certainly very focused on keeping an eye on
dilution.
And we generally believe that our strong financial position is going to allow us
to support these investments while continuing to repurchase shares as part of
the buyback program that offsets equity compensation, as well as
provide quarterly cash dividend distributions to our investors.
Kenneth Dorell: Great. Thank you, everyone, for joining us today. We look forward to speaking
with you again soon.
Operator: This concludes today’s conference call. Thank you for your participation, and
you may now disconnect.