So they’re looking at the Akamai solution with great interest, I would say. Now in terms of being cheaper, Akamai has for a long time been the world’s most distributed platform. We run one of the world’s largest backbones. We have worked for many, many years to be incredibly efficient in terms of moving data around and doing the delivery. And that gives us a real advantage in being able to do this now for compute in a very cost-effective way. Now that said, I am sure that the hyperscalers pay a little bit less for their hardware than we do, probably not a lot less, maybe a little less. But when it comes to everything else, I think Akamai is in an excellent position. And what we’re seeing in the marketplace is that customers will get their best offer from the hyperscalers, and we can be a lot lower than that and be very profitable doing it.
And the good news, I think, for Akamai is that here you’ve got a $10-plus billion market growing at 20% a year. And we’re a tiny player there compared to the hyperscalers. So we can operate at a level that is not threatening to them in any way. They’ve got to worry about each other. And there’s plenty of room for us to take on a lot of revenue at lower price points and very good margins for Akamai. Now in terms of the marketplace, that’s an area, obviously, where the hyperscalers are way ahead. For some of those folks, every application ever made is available in their marketplace as a managed service. We are growing our marketplace. We are growing the tools that are available on Akamai Connected Cloud. And the first applications that we’re targeting are applications that are more easily able to be cloud agnostic, that are not locked in, that aren’t using 20 other applications in the marketplace.
And so those are much easier to port onto Akamai Connected Cloud. And so there are some applications that won’t be able to port in the near term. But we don’t need all those applications. All we need to get going is a tiny share of that market. And we’ve identified, just for example, in the media vertical alone, a lot of applications that are amenable to moving to our platform. And of course, a lot of our partners in our marketplace to begin with are media-related companies, for that reason: because they’re also threatened by the hyperscalers, and they’re very excited about having media applications be on Akamai. Ed, do you want to add anything there?
Ed McGowan: Yes. Just a couple of things. Tom talked a little bit about the leverage we have with the backbone, but we also have a lot of other leverage in the company with our go-to-market, where the focus is going to be initially on our installed base. As Tom talked about, in media to start, there’s a tremendous amount; we pretty much work with every major brand. So there’s a lot of leverage from that perspective. Also, the people who build and deploy the network are the same people who are building and deploying our CDN network. And we’re getting leverage with our co-location vendors and things like that. So I’ve seen some pretty large proposals go out that get us margins that are pretty similar to the company’s margins: gross margins that are somewhere between security and delivery, and operating margins that potentially could be even greater as we get scale to the bottom line.
So there’s an enormous amount of margin leverage. If you think of the math we’re doing on how much we’re saving, the amount of capital that we’re deploying and the cost, we’re going to be saving a tremendous amount of money by moving our own applications, and we’ll be able to offer some of that savings to our customers as well.
Operator: The next question is from Alex Henderson of Needham & Company.
Alex Henderson: Great. I’m rather astounded that we haven’t heard the word AI so far in this conference call, at least I don’t think we have. So can you talk a little bit about the impact of AI in terms of your opportunity to bring it to compute? Is it something that you think you can bring to the edge piece, or to run on your security or CDN edge? And alternatively, is it a risk in the sense that inference AI is going to be distributed, but a lot of the compute process might be more centralized, and in that context, does that diminish the willingness of people to move applications to your compute? So how do I think about the inference AI opportunity and the risk of customers being more challenged in moving?
Tom Leighton: Okay. Great question. In fact, there’s a lot of components to this one, too. At a high level, there’s a lot of potential opportunity, I would say. But let me step back just a minute. Akamai has been using AI and machine learning in our products for a long, long time. Obviously, it’s useful for anomaly detection and bot detection: when an entity is accessing their bank account with the right credentials, is it the right person or not? Or detecting that malware has infected an application inside an enterprise. There are lots of ways that we’ve been using AI and machine learning. Now with Gen AI, I think it helps some, but it really helps the attacker. It’s much easier now to morph malware into a lot of different forms, which makes it harder to detect.
Our teams have created some very nasty bots very quickly using Gen AI. And I think we’re already seeing more penetration as a result of Gen AI. That’s one area where it really is being actively used today. Now on that side of the house, the implication is that there’s more risk in terms of cybersecurity for enterprises. They’re going to get penetrated more. And so you really have to double down on your defense in depth. I think it makes products like [indiscernible] segmentation even more critical because you’re going to get penetrated, and the key is to identify it quickly and proactively block the spread. And I think when you look at our growth rate there, it’s very, very hot, with a market-leading solution. Now you also asked about compute and the impact of Gen AI there.
I do think over time, it will suck up a lot more compute. And that’s good for vendors selling compute, and Akamai sells compute. And you’re right, there’s a difference between the generation of the model, which, for large models, is very heavy and will be done in core compute and storage data centers. Inference engines can run at the edge, and it will make sense to run them at the edge for many applications. And we already have several partners that are porting their AI models onto Akamai for inference engines, and I expect that they will be selling that in our marketplace to other companies. And so, in fact, we’re already in a sense using AI as we put our internal applications onto Akamai Connected Cloud. So I think you will see, over time, a lot of revenue generated there because of all the uses that I think will come about through AI.
Now you asked about risk. You do the model generation in the big data centers. That’s not a risk for Akamai because we’ve got 2 dozen of those today. It’s just done in a different place; it won’t be done at the edge. But inference engines, yes, a lot of that work will be done at the edge, and we’re in a great place there because other companies don’t have an edge anything like Akamai’s.
Operator: Next question is from Abdul Khan of Evercore.
Q – Unidentified Analyst: This is Dua speaking for Abdul. I just wanted to ask you broadly about the enterprise spend environment. I know we’ve previously noted elongating sales cycles, and I was just generally curious whether there’s any change there. And if you had to characterize it, is enterprise IT spend incrementally worse or better or about the same versus, let’s say, 90 days ago?