Microsoft Ignite 2023 was one part a celebration of yearlong technological innovations, one part announcing the general availability of previously announced products, one part vision, one part ecosystem and four parts copilots everywhere.
Copilots promise a historic software-led productivity increase. Perhaps for the first time in industry history, we're seeing huge demand for software coincide with the ability to make software easier to write. Just as Amazon Web Services turned the data center into an application programming interface, copilots are turning software development into natural language, enabling many more people to create. The implications for productivity are massive, and we believe they will kick off a new wave of growth that will become increasingly noticeable throughout 2024.
In this Breaking Analysis, we give you our impressions of Microsoft Ignite 2023. TheCUBE Research Analyst George Gilbert and CUBE Collective contributor Sarbjeet Johal both weighed in for this episode, and we'll also share some recent Enterprise Technology Research data that shows the progression of some of the major artificial intelligence players in the past 12 months and the relative impact generative AI has had on each of their businesses.
Satya’s keynote underscored Microsoft’s current gen AI lead
As usual, Microsoft Corp. Chief Executive Satya Nadella (pictured) was a strong presenter. His main focus was on the AI copilot stack that is going to supercharge the next wave of innovation. There were several "we're announcing the general availability of…" types of announcements, including Azure Boost, Microsoft's server offload engine (analogous to AWS Nitro), Fabric, Microsoft's modern data platform, and Copilots for 365 and Studio, in addition to more than 100 new updates.
Nvidia Corp. CEO Jensen Huang was onstage doing his thing and talking about the Nvidia supercomputer clusters it has jointly built with Microsoft. Importantly, this is not Azure infrastructure… it’s not Azure Boost. Rather, this is running on Nvidia systems infrastructure with a thin layer of Azure software to help OpenAI train and run its models. Microsoft is currently winning in training and inference of large language models, maybe not so much out of foresight but because it had plugged internal holes and outsourced the infrastructure to Nvidia, which has put it in a really good position.
Where was Sam Altman? Now we know…
At the time of recording the video for this post (Thursday evening), we noted that while Jensen was on stage with Satya, Sam Altman, CEO of OpenAI, was not. We found that interesting particularly after watching the OpenAI launch recently where we felt Sam Altman somewhat underplayed Satya’s presence at the event. Of course, on Friday afternoon, we saw the news about Altman’s ouster from OpenAI.
Azure follows the leader in infrastructure, then pulls a judo move with its AI stack
Microsoft’s graph connects all the pieces and is a linchpin of its competitive advantage
Just like AWS, Azure now effectively has a Nitro, a Graviton, an Inferentia and a Trainium — custom silicon we knew was coming and is finally here. And Satya spent most of his time double-clicking into the AI stack, which looks like this:
We don’t have the time today to dig in too deep, but let’s say a few things here starting at the bottom and moving up.
Satya gave a nice commercial for Azure as the world's computer and reiterated Microsoft's commitment to have 100% of its energy usage be renewable by 2025. He talked about the network and the hollow core fiber it's manufacturing, its servers and chips — which we'll discuss in a moment along with alternatives from Advanced Micro Devices Inc. and Nvidia — and went all the way up the stack into the data layer, spending a lot of time on the copilots.
The Graph is a semantic layer that connects all the elements and enables copilots to act
The interesting thing we see evolving is the Microsoft Graph, which connects all apps, services and the infrastructure that supports them. It's essentially a semantic layer that makes all the elements and the data feeding them coherent. The reason this is important is that all these copilots work on the Graph, and it allows them to take action. The idea is the copilots know what to do and can be a system of agency that acts with fidelity and confidence because the data is all coherent and trusted.
The way to think about this is Copilot will be the new UI that helps us gain access to the world’s knowledge and your organization’s knowledge. But most importantly, it’s your agent that helps you act on that knowledge.
That is enabled by the Microsoft knowledge graph.
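To make the idea concrete, here is a minimal sketch of how a copilot-style agent might pull organizational context from the Microsoft Graph REST API. The `v1.0` endpoint paths are real Graph resources; the helper function, its name and the simplified token handling are our own illustration, not Microsoft's implementation.

```python
# Hypothetical sketch of an agent reading context from Microsoft Graph.
# The base URL and resource paths are real Graph API v1.0 endpoints;
# the helper and token handling are simplified for illustration.

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_graph_request(resource: str, token: str) -> tuple[str, dict]:
    """Build the URL and auth headers for a Graph API call on `resource`."""
    url = f"{GRAPH_BASE}/{resource.lstrip('/')}"
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers

# Example: the signed-in user's next five calendar events -- the kind of
# coherent, trusted context a copilot reasons over before taking action.
url, headers = build_graph_request("me/calendar/events?$top=5", "<access-token>")
# An agent would then issue the call, e.g. requests.get(url, headers=headers),
# and feed the JSON response into the LLM's context window.
```

The point is that because every app and service hangs off the same graph, one uniform request pattern gives the copilot both the knowledge and the handle it needs to act on that knowledge.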
The services underneath in Azure support the upper layers of the stack and drive consumption of compute, storage, networking, database and all the platform services that power not only all the productivity software but also the copilots that consume all these resources.
This architecture and its self-supporting model from infrastructure to software powered by autonomous AI is a massive flywheel for the consumption of Azure services.
Chip wars heat up
Let’s briefly talk about the custom silicon Microsoft has announced.
Two chips were announced. Maia is the AI chip for inference and training. Maia is manufactured on a five-nanometer TSMC process and has 105 billion transistors. It's being packaged in a data center configuration with closed-loop cooling that can be retrofitted into existing data center infrastructure. That's not unique to Microsoft, by the way — we've seen other vendors taking a similar approach — but it's cool, no pun intended.
The Azure Cobalt CPU on the right (above) is a 128-core chip built on an Arm Neoverse design, intended for general-purpose cloud services on Azure.
As we said earlier, we now have the Azure version of Nitro virtualization and offload, AI chips like Inferentia and Trainium, and a Graviton in the form of Cobalt. A big question is how much of a lead AWS has: it announced Graviton in 2018 and its AI chips in 2019 and 2021. Microsoft's chips are Arm-based, so time to tape-out will be compressed, and if Microsoft can line up foundry capacity, which it appears to have done, perhaps it can close the gap on AWS.
Custom silicon is critical because hyperscalers will optimize workloads through integration and develop features that confer unique advantage to their clouds.
High-level puts and takes from Ignite 2023
Some of the high-level themes at Ignite are shared below.
Microsoft has an offering for professional developers with its GitHub Copilot. It has Copilot Studio, a platform for citizen developers, a Copilot for 365 for its users, its search copilot (bye bye Bing), and Azure ops copilots, including a security copilot, which is new… copilots everywhere, with a promise of vertical-market copilots.
By the way, Microsoft announced a number of security products at Ignite, and though some are playing catch-up, they’re still essential. Check out this post by SiliconANGLE security journalist David Strom for more info.
As well, there was lots of emphasis on ecosystems from infrastructure partners to independent software vendors.
We talked earlier about the resource graph and its power.
A big takeaway ahead of AWS re:Invent is that the dynamics of the LLM market are evolving quickly. Microsoft has a differentiated and leading strategy thanks to its OpenAI investment, and the pace at which Microsoft is moving is AWS-like. The integration of services and apps via the semantic graph, and its juxtaposition with AWS' diverse, choice-oriented ethos, is notable.
But the door is still open for AWS the week after Thanksgiving to show its stuff. Amazon will have the last word in AI in 2023 at the show. It has to combat the narrative that AWS is the old guard cloud. Our guess is we’ll see a strong showing from Amazon as usual, but the pressure is on and the clock is ticking.
Keeping up with the AI Joneses
To underscore the importance of not falling too far behind in the AI race, let’s bring in some ETR data to show what has happened since OpenAI’s announcement of ChatGPT.
The graphic above uses a format we’ve shown many times. It depicts machine learning and AI spending patterns among some of the leading platforms. The vertical axis is Net Score or spending momentum, and the horizontal axis is an indicator of presence in the data, determined by the N mentions in the quarterly survey of more than 1,700 information technology decision makers.
In the upper right you can see OpenAI. ETR started tracking OpenAI in July of 2023 and you can see where it is today. This is an astoundingly strong Net Score. Literally off the charts.
Also, you can see the position of Microsoft just below OpenAI, and actually more ubiquitous on the X axis. But look where Microsoft was in October 2022. Compare that with AWS. Although it has made moves, they're not nearly as significant as those made by Microsoft and even Google. As we said in our research note last week, there appears to be a correlation between announcements, general availability and the adoption of gen AI offerings. Though this is not unusual, what is striking is the speed at which adoption is occurring post-general availability.
The moves that Google, Microsoft and, of course, OpenAI made were more dramatic than AWS’. We view this data as a proxy for market presence and with the general availability of AWS Bedrock last month and new announcements likely at re:Invent, we expect big moves from Amazon coming into 2024.
For context, we plot IBM Corp.'s Watson and Oracle Corp. Last week we published on IBM's big move up with watsonx post-GA. IBM was below Oracle last survey. We're going to be watching all this in the January survey. The point is this is a really tight race, and that race is on in a big way. A lot of folks talk about this being a marathon and it is… but that doesn't mean there's plenty of time to relax. Getting a head start in this race and keeping close to the lead is going to confer competitive advantage in our view.
We've seen that advantage already go to Microsoft from the standpoint of mindshare and initial revenue. But the market is still small, so we'll keep monitoring its pulse.
This whole idea of copilots everywhere, where everyone becomes a developer, is powerful. If the new interface to technology is words, this is going to give us a massive productivity boost. It’s starting with the laptop/desktop end-user interaction. It’s rapidly moving to developers so they can develop software faster and that’s going to go into many different use cases, vertical markets and domain-specific LLMs along the gen AI power law.
As we’ve talked about, this flywheel of productivity is in play. Erik Brynjolfsson at the recent UiPath Forward conference said that he’d be disappointed if productivity doesn’t grow to 3% to 4% annually, up from its tepid 1.2%.
The second point above is that accelerated demand for software is meeting new ease of building software. This is the first time we've ever seen that in the industry, and it's going to create an interesting dynamic. John Furrier has raised the point that there could be an unintended consequence for Microsoft: his theory is that with AI, developers can build better productivity software than Microsoft's own. Perhaps this is what AWS customers or partners are banking on — leveraging LLMs to compete with Microsoft by creating better software.
The third point above is the Microsoft Graph, where all apps, services and their supporting infrastructure become connected and coherent in a data layer. This is yet another massive flywheel for Azure. The key point is that it's not only your assistant; it also allows the AI to take action. It becomes a system of agency.
End-user productivity is king, and as we've said, 2023 is the year of technology innovation. The year 2024, in our view, must be the year of showing return on investment and productivity.
Those companies that can show ROI are going to distance themselves from their competitors.
Keep in touch
Many thanks to George and Sarbjeet for the help this week. Thanks to Alex Myerson and Ken Shifman on production, podcasts and media workflows for Breaking Analysis. Special thanks to Kristen Martin and Cheryl Knight who help us keep our community informed and get the word out. And to Rob Hof, our EiC at SiliconANGLE.
Also, check out this ETR Tutorial we created, which explains the spending methodology in more detail. Note: ETR is a separate company from Wikibon and SiliconANGLE. If you would like to cite or republish any of the company’s data, or inquire about its services, please contact ETR at [email protected].
All statements made regarding companies or securities are strictly beliefs, points of view and opinions held by SiliconANGLE Media, Enterprise Technology Research, other guests on theCUBE and guest writers. Such statements are not recommendations by these individuals to buy, sell or hold any security. The content presented does not constitute investment advice and should not be used as the basis for any investment decision. You and only you are responsible for your investment decisions.
Disclosure: Many of the companies cited in Breaking Analysis are sponsors of theCUBE and/or clients of Wikibon. None of these firms or other companies have any editorial control over or advanced viewing of what’s published in Breaking Analysis.
Join the community that includes more than 15,000 #CubeAlumni experts, including Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger, and many more luminaries and experts.