Operator: The next question comes from Madeline Brooks of Bank of America. Please go ahead.
Madeline Brooks: Hi, team. Thanks so much for taking my question tonight. Just one on security. Outside of Guardicore, I want to touch on the trends you’ve seen this year across the rest of the Zero Trust portfolio, and maybe whether you’re feeling any additional competitive pressure now that the market has really expanded there? And then I have one follow-up.
Tom Leighton: Yes. In Web App Firewall, we’ve been the market leader there for 10 years, since we started that marketplace with Web App Firewall as a service. And after 10 years, you do get competition. But we’re still the market leader by a good margin, and that’s a good growth business for us. We’ve added a lot of capabilities on top: bot management and, more recently, Account Protector and Client-Side Protection, so that customers of commerce sites can stay safe going to the site. You’re going to need that now for compliance. There’s Brand Protector, which identifies phishing sites and keeps them from stealing user information. And of course, we’ve been doing denial-of-service protection for a long, long time now.
We have a market leadership position there. And then on the enterprise side, of course, there’s Guardicore, which we talked about doing very well. And I’m really excited about API security. I think over the longer term, that becomes as big a marketplace, and just as important, as Web App Firewall has become. And our goal there is to become the market leader. Already, in the go-to-market motion, there’s a strong synergy between Web App Firewall and API security. We built a very easy way to do a proof of concept for our Web App Firewall customers, and that’s where we’re getting a lot of early traction. Also, we’ve integrated with a lot of the load balancers and other firewalls out there so that we can sign on new customers who are not using our CDN or Web App Firewall.
So I think, there’s a variety of areas in security that are working very well for us.
Madeline Brooks: Thanks so much, Tom. And then just one quick one on compute, too. I think if we think about earnings that have happened so far, especially with hyperscalers like AWS, Microsoft, Meta, we’ve kind of heard of this theme of the optimization inflection in terms of cloud computing, meaning maybe this year we’re going to see a little bit more investment in new workloads. I’m just wondering if you’ve heard of any of those trends among your customers who are thinking about compute for the first time, or maybe if you’re seeing increased appetite for compute for this coming year versus 2023. Thanks so much.
Tom Leighton: Yes, compute is an enormous marketplace and growing rapidly, and there are always new applications being created, not just workloads migrating from a data center into the cloud, but brand new applications. So that’s where we’re seeing a lot of traction. Also, in some cases, lift and shift out of a data center or out of a hyperscaler. But it’s just an enormous marketplace and a great place for us to operate. And even those that are optimizing, that’s, I guess, not such a great thing for the hyperscalers, but we’re part of that trend. It’s great for us, because we can help customers reduce cloud spend. And we’ve gotten very good feedback from our early adopters of Akamai Connected Cloud that they’re saving a lot of money. So the trend to optimization is a positive thing for Akamai.
Operator: The next question comes from Rishi Jaluria of RBC. Please go ahead.
Rishi Jaluria: Wonderful. Thanks so much for taking my questions. And let me echo my colleagues in thanking Tom Barth. It’s been a great decade working with you, and I’m really excited for your next chapter. I wanted to drill in a bit more, going back to Gecko. I guess, number one, can you talk a little bit about edge inferencing and what those use cases look like? It’s one of those things we hear a lot of talk about in theory, but in practice, as you’re talking with your customers and having those conversations, what can that look like? And what positions Gecko uniquely for that? And then maybe financially, and I know it’s still early, are you assuming real Gecko contribution on the compute line in your guidance for the year? Or is that something that, as it gets traction, could lead to more upside beyond what you model? Thank you.
Tom Leighton: Great. So let me start with edge inferencing. And so some of the examples I gave, that’s exactly what’s happening for commerce sites in figuring out in real time what content you’re actually going to give to the user that’s coming to the site. Ad targeting, what ad do they get? Anything that involves personalization. On the security side, a ton of inferencing is used to analyze real-time data. For example, even our own bot management solution. Is that entity that’s coming to the site, is it a bot or is it a human? And even if it’s a human and they have the right credentials, is it the right human? And you use AI and inferencing for that, and you’ve got to do it really fast. You can’t afford to send it back to the centralized data center because you’ve got a massive number of people that you’ve got to process in real time, especially if you’re doing some kind of live event.
And so being at the edge matters, because you can be scalable, you can handle it locally, you get great performance, you can make it be real time. And Akamai’s unique value proposition with Gecko is that we’re going to be able to now support this, not in a few cities, but in a hundred cities by the end of this year. So anything you can put in a VM, virtual machine, which is most things, you’re going to be able to do that in a hundred cities. And then ultimately in hundreds of cities, because we can put this in general Akamai Edge pods. And then next will be containers, which is pretty much the rest of what you do in cloud computing. And then to be able to spin it all up automatically. It’s a whole new concept for compute that I think is very powerful, and there’s a high overlap of wanting to do that with inferencing engines, where you’re trying to do something intelligent based on that end user or that end entity that’s interacting with the application.
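To make the edge-inferencing idea Tom describes a bit more concrete, here is a deliberately simplified sketch of the kind of real-time bot-versus-human scoring he mentions. The signal names, weights, and thresholds are invented for illustration and are not Akamai’s Bot Manager logic or API; the point is only that the decision has to be computed per request, in milliseconds, close to the user rather than in a centralized data center.

```typescript
// Hypothetical illustration only: a per-request "is this a bot?" score
// computed at the edge. Signals, weights, and thresholds are made up
// for this sketch and are not Akamai's actual detection model.

interface RequestSignals {
  requestsLastMinute: number;   // request rate seen from this client
  headlessUserAgent: boolean;   // UA string matches known automation tools
  mouseOrTouchEvents: number;   // client-side telemetry, if available
  credentialReuseScore: number; // 0..1, how often these credentials appear elsewhere
}

// A simple linear scoring rule stands in for whatever trained model would run here.
function botScore(s: RequestSignals): number {
  let score = 0;
  score += Math.min(s.requestsLastMinute / 100, 1) * 0.4;
  score += (s.headlessUserAgent ? 1 : 0) * 0.3;
  score += (s.mouseOrTouchEvents === 0 ? 1 : 0) * 0.1;
  score += s.credentialReuseScore * 0.2;
  return score; // 0 = almost certainly human, 1 = almost certainly bot
}

// The decision is made locally at the edge node, so the request never has to
// round-trip to a central region before being allowed, challenged, or blocked.
function decide(s: RequestSignals): "allow" | "challenge" | "block" {
  const score = botScore(s);
  if (score > 0.8) return "block";
  if (score > 0.5) return "challenge";
  return "allow";
}

// Example usage with made-up values; prints "block".
console.log(decide({
  requestsLastMinute: 240,
  headlessUserAgent: true,
  mouseOrTouchEvents: 0,
  credentialReuseScore: 0.7,
}));
```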
Now, in terms of Gecko, we’re just now in the early stages of getting it deployed. We’re in nine cities, and we’ll get the 10th city up in another month or so. By the end of the year, we’ll be in a total of a hundred cities supporting compute. So not a lot of revenue is factored into the guidance based on Gecko for this year; that would come more next year. This year’s revenue guidance is based on the original 25 core compute regions that we set up by the end of last year. Now, we will deploy this, of course, just as fast as we can and as customers want to adopt it. And hopefully we have the situation where we want to build out more and get more compute capacity because there’s so much demand for compute on Akamai. Ed, do you have any color you want to add around the guidance there?
Ed McGowan: No, I think you captured it right, Tom. We’re not really anticipating anything. I mean, one thing we are doing this year is making changes to our comp plan with our reps so that they all have to sell compute this year. So you could see things; reps get very creative. I learned in sales that comp drives behavior. So by leaning in here and making it something that all reps have to do, we should see a lot more use cases, a lot more opportunities, etcetera. So there’s always a chance that we could be surprised here by the creativity of our field bringing us opportunities, but we did not factor in anything material as it relates to Gecko.
Rishi Jaluria: All right, wonderful. Thank you so much, guys.
Operator: The next question comes from Michael Elias of TD Cowen. Please go ahead.
Michael Elias: Great. Thanks for taking the questions. Two, if I may. First, on Gecko: presumably the pops that you have are already supporting security and delivery workloads. So from an architecture perspective, can you help us think about what expanding the compute platform into these pops means? Is it just additional co-location deployments and, on the CapEx side, networking gear and servers? Any color that you could give there in terms of the mechanics of what the expansion would look like? And then second, Ed, last year you were talking about elongation of enterprise sales cycles. Just curious what you’re seeing in terms of the buying behavior of your customer base. Any notable call-outs there? Thank you.
Tom Leighton: All right. So I’ll take the first one. With Gecko, that is, generally speaking, in existing Akamai edge pops, and in particular, they tend to be the larger ones where we already have a lot of equipment that’s already connected into our backbone. What we’d be doing is adding additional servers, and for compute, it would be a beefier server, plus additional colo for those servers. But all the other infrastructure is generally already there; it’s already connected in, and we already have delivery and security operating there. So it does become a very efficient way for us to deploy Gecko. And Ed, do you want to take the second one there?
Ed McGowan: Sure. Yes. So I think the trend is that acquiring new customers is always challenging in an environment like this. The one exception to that is probably in the security space. That tends to hold up because, with the new SEC reporting and disclosure requirements, and with CISOs now potentially being held criminally liable for breaches and things like that, audit committees are spending more and more time on cybersecurity as a topic. That tends to be a budget that, one, you don’t typically cut and, two, you’re generally adding to. But yes, new customers are challenging. I do think this kind of environment helps us with what we were just talking about in the last few questions, around optimizing cloud spend.
Certainly, if you’ve seen what we’ve done, we’ve saved a tremendous amount of money ourselves. So I think that can also help us in this particular environment. But definitely, new customer acquisition is a bit more challenging, although we’re still doing pretty well. Obviously, the environment can change, but it hasn’t been a major factor for us yet.
Michael Elias: Great. Thank you.
Operator: The next question comes from Ray McDonough of Guggenheim Securities. Please go ahead.
Raymond McDonough: Great, thanks for sneaking me in. Maybe, Tom, just a follow-up to a prior question on Gecko. You mentioned that your edge sites right now don’t have full-stack compute, so how much work needs to be done to converge what you already have at your edge sites with what you’ve done in terms of building out the nodes’ capabilities? Should we expect there to be a common software stack across both edge and centralized sites? And if so, is the plan to have that in place by year-end?
Tom Leighton: Yes. Great question. So what we’re doing now, as I mentioned in the last question, is deploying more hardware in existing edge regions, generally the larger edge regions. We already have network there, and we already have delivery and security co-located there, so it’s some additional colo and servers. And yes, the goal is to put it all on one common software stack. Initially, the Linode stack is moving into these edge pops for Gecko. But once we get the support for virtual machines and containers, then next we want to add the software stack that we have for delivery, for security, and for function as a service, which automatically, for example, spins up JavaScript apps in milliseconds based on end-user demand. We want all of that to be operating on containers and VMs, so that you don’t have to think ahead of time about how many VMs you want in each of these hundreds of cities.
It just happens based on end-user demand. You automatically get new ones spun up, load balancing, failover; it’s really a very compelling concept. And that doesn’t exist in the cloud marketplace today. That is the vision. I think you really nailed it when you talked about the common software stack, because only Akamai has that full edge platform today, that software stack around delivery and security, which will now include compute.
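As a rough sketch of what that model looks like from a developer’s perspective, the snippet below shows the only piece the customer would write: a small request handler, here a toy personalization function. Everything around it is an assumption for illustration rather than Akamai’s EdgeWorkers or Gecko API; it is wrapped in a plain Node HTTP server just so it runs standalone, whereas in the platform Tom describes the runtime would invoke the handler directly and handle spin-up, load balancing, and failover per city based on demand.

```typescript
// Hypothetical sketch, not Akamai's actual API: the developer supplies only
// the handler; the platform decides where and how many instances to run.

import * as http from "http";

// The application logic: pick content for this user in real time.
function personalize(userSegment: string): { banner: string; currency: string } {
  switch (userSegment) {
    case "returning-shopper":
      return { banner: "loyalty-offer", currency: "USD" };
    case "new-visitor":
      return { banner: "welcome-discount", currency: "USD" };
    default:
      return { banner: "generic", currency: "USD" };
  }
}

// Local wrapper so the sketch runs standalone. On the kind of platform
// described on the call, there would be no server code here at all:
// instances would spin up in whatever cities see traffic and be torn
// down when demand goes away.
const server = http.createServer((req, res) => {
  const segment = req.headers["x-user-segment"];
  const body = personalize(typeof segment === "string" ? segment : "unknown");
  res.setHeader("content-type", "application/json");
  res.end(JSON.stringify(body));
});

server.listen(8080, () => console.log("toy edge handler listening on :8080"));
```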
Raymond McDonough: Appreciate that color. And maybe as a quick follow-up, as we think about the expansion of the Linode footprint last year, can you help us understand as much as you can, how much of the space currently built out was for internal use purposes to help with that third-party cloud savings versus space that’s online now that’s revenue generating? And I know we’ve talked about this in the past, but any color around what we should expect from a customer utilization perspective that might be embedded in your guidance as we move through year-end? That would be helpful.
Ed McGowan: Yes, sure. So the majority of the growth in the compute guidance is going to come from enterprise compute, so the stuff that we built out over the last year. If you go back and look at the math, we’ve said, roughly speaking, $1 of CapEx is $1 of revenue. You can look at what we’re doing for compute build-out now versus what we did last year. We said it was roughly $100 million all in for our internal use. So that leaves a pretty significant amount left for customer demand. Now obviously, given the way people buy today, they pick a location, etcetera, so it’s not going to be exactly dollar for dollar right now, but it’s a general rule of thumb. So I would say the majority of what we built out is for customer usage.
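For readers following the math, Ed’s rule of thumb can be written out directly. Only the roughly $1-of-CapEx-to-$1-of-revenue heuristic and the approximately $100 million internal-use figure come from his answer; the total compute build-out number below is a placeholder, since it is not given on the call.

```typescript
// Back-of-the-envelope version of the rule of thumb from the call.
// totalComputeCapex is NOT a disclosed figure; substitute the actual
// build-out spend to use this.
const totalComputeCapex = 300_000_000; // hypothetical placeholder, USD
const internalUseCapex = 100_000_000;  // "roughly $100 million all in for our internal use"

// "$1 of CapEx is $1 of revenue", per the call, as a rough heuristic.
const capexToRevenueRatio = 1.0;

const customerFacingCapex = totalComputeCapex - internalUseCapex;
const impliedRevenueCapacity = customerFacingCapex * capexToRevenueRatio;

console.log(`Implied customer-facing revenue capacity: ~$${(impliedRevenueCapacity / 1e6).toFixed(0)}M`);
```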
Raymond McDonough: Great. Thanks for sneaking me in. Appreciate it.
Operator: The next question comes from Tim Horan of Oppenheimer. Please go ahead.
Timothy Horan: Thanks, guys. Following up on Ray’s question. So I’m assuming the goal here is to get to one single platform where customers can access the full range of services relatively easily on, I guess, one on-ramp. When do you think you’ll get there? And secondly, the Gecko product, it sounds like it’s completely serverless? And is it a development platform also? Thanks.
Tom Leighton: Yes. So I think one platform, really, in terms of being able to do everything together with all the same software, so that we have our edge software running with the Linode software to spin up VMs and containers, that’s not until 2025. We are, first, combining the infrastructure, and of course, customers can buy the services as a package. We have common reporting now in many cases. But in terms of doing all the automatic spinning up and truly serverless use of VMs and containers, think 2025 for that. And let’s see, what was the other question you had?
Timothy Horan: So the new product, Gecko, it is primarily a serverless product, it sounds like. And do you have all the support there for developers to completely run their applications on this new platform?