Microsoft’s Head of Cloud Predicts an ‘Explosion’ of AI Use. How Agents Could Change the Game.
Oct 15, 2025 16:20:00 -0400 by Tae Kim | #AI #Barron's Tech
Scott Guthrie, executive vice president of cloud and artificial intelligence at Microsoft, says growth in AI agents is “going to lead to an explosion of infrastructure usage.” (Aaron M. Sprecher/Bloomberg)
This article is from the weekly Barron’s Tech email newsletter. Sign up here to get it delivered directly to your inbox.
Smart Operator. Hi everyone. OpenAI is practically a household name these days, but six years ago it was just another start-up with little name recognition. Microsoft had the prescience then to invest in the company, becoming a partner in its growth several years before ChatGPT sparked the artificial intelligence boom.
Today, OpenAI is the leader in AI technology with market-leading products that include ChatGPT, models for enterprises, and the recently released video app Sora.
Thanks partly to its OpenAI connection, Microsoft’s cloud business, known as Azure, has been gaining share from Amazon Web Services in the cloud computing market. Azure sales grew 39% year-over-year in its most recent quarter versus 18% growth for Amazon Web Services.
But Azure is now facing its own competition from new AI cloud vendors. OpenAI itself has made large AI capacity deals with other providers such as Oracle and CoreWeave.
Barron’s Tech recently spoke with Microsoft senior executive Scott Guthrie, who runs the company’s cloud and AI group. We talked about the state of AI demand, how companies are finding returns on investment, and Microsoft’s strategy for working with OpenAI and AI chip suppliers going forward.
Here are edited highlights from our conversation:
Barron’s: How big is the current computing shift to AI? Where are we going next?
Scott Guthrie: It’s just as profound as going back to the Industrial Revolution, when you suddenly had railroads, communications, and a variety of technologies that allowed businesses to reach beyond their local market.
AI has this ability to optimize processes and decisions and then be able to decide if you want to scale up or scale down the amount of resources dynamically. That changes the way every business can operate.
The average consumer has seen ChatGPT or Copilot. They might think of AI as a very request-response-like model. I type in this window, it gives me a response.
Where we’re going is a more agentic workflow. Instead of asking it questions, it’s more about assigning something to the agent to research, and it comes back an hour later, or a couple of days later, with a much more complete solution.
For instance, what’s a good way to derisk my supply chain if I’m worried about tariffs? What’s a good way to optimize the reach of my website? The AI will call multiple systems, work with data, and come back with a detailed set of solutions.
It’s a shift from request-response to something that’s more of an agent that’s working on your behalf or working on your business’s behalf. As that happens, and as people get the value from that, it’s going to lead to an explosion of infrastructure usage.
The models will continue to get smarter and more use cases will start to appear. That fundamentally is why we keep saying we’re supply constrained. We’re going to bring a lot more live this year. And that’s going to keep going up, but at the same time, the demand keeps going up. It’s a good problem to have versus other problems.
What are the use cases for AI that you’re seeing from Microsoft’s customers? How are they benefiting?
Ultimately the business value will be the thing that drives it. For use cases, we’re hearing from customers on developer productivity. I don’t hear anyone saying, I’ve tried GitHub Copilot or Cursor or some of the other equivalents and not seen productivity improvements. If you get 40% productivity value for 20 bucks a month, that’s a pretty phenomenal ROI.
We’re seeing it in healthcare. The nice thing with AI summaries is it frees doctors from having to type lots of notes. Their quality of life gets better. The hospital can measure that business value if a physician can go from seeing 10 patients to 14 patients.
Then customer support. Most organizations are starting to look at AI to answer more customer questions through self-service, but also to provide more AI assistance to call center agents. I think that’s going to be another one where businesses can easily measure the business impact.
Microsoft has the right of first refusal to sell additional AI cloud capacity to OpenAI. Recently, though, Oracle, Stargate, and others made deals with OpenAI. Can you explain why Microsoft passed on those deals?
We have a great partnership with OpenAI, and we’re building out gigawatts of capacity for OpenAI. We haven’t talked as publicly about the projects as others that are entering the space have.
Some of the press around projects with other providers won’t start for a while or might not come to market for multiple years. We like to celebrate when we ship things as opposed to when we start things.
We’re not going to build everything. There are projects that we deliberately choose to go after and there are projects that we sometimes pass on. We are trying to make sure we get the balance right in terms of pre-training capacity, inferencing capacity, and post-training capability. And then in different parts of the world, we have different needs.
When you look at our capex, I think it’s very aggressive. And at the same time, we want to be disciplined and be smart about it. Let’s invest in the projects that are going to deliver the highest ROI and that we feel confident we can use for a huge number of use cases.
What about your AI chip strategy? How do you decide when to use Nvidia versus your internal projects?
The approach we’ve taken with it is very similar to the approach we’ve taken with CPUs. Once upon a time, we only had one CPU provider. And then several years ago, we kind of adopted a two plus one strategy. We’re a great customer and partner with AMD and we’re a great customer and partner with Intel.
Then we’ve also built our own ARM64 processor, which we call Cobalt. TSMC manufactures the wafers for us, but the chip is completely designed in-house.
Our strategy is we have two great third-party providers and one great internal offering. We have maximum leverage and flexibility. And we get better pricing from our external providers when they know there’s two external providers and one internal option that are all viable and all good solutions. It’s a good way to drive innovation and then, frankly, make sure that from a cost perspective we’re in a good position. We’re doing a very similar thing with AI chips.
We have a very, very deep partnership with Nvidia. We’ll continue to have a great partnership with Nvidia. But we’ve also been public that we’re the first cloud provider to launch with AMD GPUs. We want to make sure we have both Nvidia and AMD in our fleet, and they’re both great third-party offerings. Then we have our own product called Maia that we’re building as well.
If we have that same dynamic we have with CPUs, with GPUs, it means that all of us are accelerating like crazy to deliver more innovation with lower costs.
Similarly, with the AI model landscape, how do you think through your strategy in using OpenAI? I noticed Microsoft is working on internal models and has recently started using Anthropic.
At a high level for models, the most important thing is customer success and customer choice. We never want to lose a customer or have a customer feel like if I bet on Microsoft, I’m locked into a specific model and if that model ever falls behind, I’m in trouble.
We obviously have a very, very deep partnership with OpenAI, but we do now support Anthropic, both in things like GitHub Copilot as well as Microsoft 365 Copilot. We want to support all the models.
In addition to OpenAI and Anthropic, we’re building our internal Microsoft AI models and have our own model training team. Again, I think it’s a similar dynamic to what we have with CPUs and GPUs, have multiple big partnerships externally, but also have internal first-party competitiveness.
It maximizes customer choice. It ensures that if, in any given generation, a model or a chip ever falls behind, it doesn’t impact us and we have options.
The history of tech is that there are ups and downs with every generation. We’re big enough that we want to always have that flexibility where if any of our partners ever have a hit product, we are able to take advantage of it. And when any of our partners ever have less of a hit product, then we’re not that exposed.
Thanks for your time, Scott.
This Week in Barron’s Tech
- ASML Earnings Are Good Enough to Lift the Stock Despite Sales Miss. Here’s Why.
- Oracle to Deploy 50,000 AI Chips From AMD. The Stock Is Rising.
- Google Pours Another $24 Billion Into AI. This Country Is the Major Beneficiary.
- Broadcom Stock Surges on OpenAI Deal for Custom AI Systems
- Nvidia Touts Software Advantage in Beating Rivals Like AMD
Write to Tae Kim at tae.kim@barrons.com or follow him on X at @firstadopter.