Akamai Technologies, Inc. (NASDAQ:AKAM) Q4 2023 Earnings Call Transcript

Page 4 of 4

Tom Leighton: Yes. It depends how you define serverless. Initially with Gecko, you would operate it the same way you would Linode: you decide how many VMs and containers you want in the various cities. And it is very developer-friendly, works just like Linode. So if you’re familiar there, that would now work in, well, at the end of the year, 100 cities for your virtual machines. Now if you define serverless to be something that doesn’t exist today out in the marketplace for VMs and containers, where they just spin up automatically like we do today for Function as a Service and for JavaScript, that’s what comes in 2025.

Timothy Horan: If you don’t mind me asking, it sounds a lot like what Cloudflare is doing, but you’re saying it doesn’t exist today. Tom, can you maybe talk a little bit about what’s different, what you’re doing?

Tom Leighton: Yes, that’s a great question. They don’t support VMs or containers at all, never mind serverless or anything else; they just don’t have support for that. They don’t do this full-stack cloud computing.

Timothy Horan: Got it. Thanks a lot.

Operator: The next question comes from Alex Henderson of Needham. Please go ahead.

Alex Henderson: Great. So it seems pretty clear that Guardicore is a critical piece of your security growth, and obviously it is perturbing the overall growth rate. I was hoping you could give us some sense of what the security product lines, excluding Guardicore, look like in terms of their growth rates. Any sizing of that growth, even a ballpark, would be quite helpful. And then second, I was hoping you could talk a little bit about inferencing; you mentioned it, but I think it came in kind of as an afterthought as opposed to the primary focus. Can you talk about your involvement in AI inferencing at the edge, to what extent that requires the 2025 kind of structure or what needs you have there, and whether you’re putting GPUs out at the edge in order to facilitate that?

Tom Leighton: Ed, do you want to go with the first one, and I’ll take the second one?

Ed McGowan: Sure. Why don’t I take the first one? In the spirit of it being a year-end call, I’ll break out these numbers at a high level for you, but I won’t be doing it every quarter. So if I look at what we call the app and API security bucket, that’s our largest bucket; that includes bot management, our fraud products, our web app firewall, and our new API security product. That’s actually growing over 20% in Q4. So that’s been incredible. Guardicore itself, if I normalize for the one-time software that we did last Q4, is growing at about 6%. And infrastructure and services are growing sub-10%.

Alex Henderson: Just to be clear, is this the full year growth rate? Or is this the fourth quarter growth rate?

Ed McGowan: This is the fourth quarter growth rate. The full year, I don’t have [indiscernible].

Alex Henderson: That’s sufficient. I just needed to know what it was.

Tom Leighton: Okay. Yes, in terms of the question around inferencing and AI and so forth: yes, we’re building full-stack compute to have great performance at a lower price point, and to have that available in hundreds of cities. And one of the many things that you would do with that is AI inferencing, and that’s not an afterthought. In fact, we’ve been using AI in our products for, well, 10 years; bot management, for example, runs on Akamai Connected Cloud. It’s one of many things that run on it. So not an afterthought. There is an enormous amount of buzz now about AI, and I think a lot of that is justified. I think there’s a lot more compute going to be consumed because of AI, and it is a strong use case among our customers that are using Akamai Connected Cloud.

That said, it’s not all AI. In fact, our biggest customers are doing media workflow, doing live transcoding, and that’s not using AI. So I think AI is an important use case, one of several use cases. Now in particular, you asked about 2024 versus 2025: it’s being done already on our platforms. There’s no need to wait until the end of the year unless you want to do it in 100 cities; then that comes at the end of the year. No need to wait until 2025, when the instances are spun up automatically instead of by design ahead of time, the way compute works today. You talked about GPUs. Akamai has GPUs deployed, and we’re deploying more. We’ve used them in the past for graphics, and going forward we’ll probably use them for Gen AI uses. We’re not really deploying them right now in the edge PoPs.

And that’s just because you don’t need to; it’s not cost-effective. At the edge PoPs, you’re going to be doing the inferencing, and for the inferencing you can use GPUs, but we’re also using CPUs. And right now, we get a better ROI on the CPUs. So I guess there’s a lot of confusion there as well. Now GPUs are critical for doing training, especially for large language models, and that’s going to be done in the core, and we’re not supporting that as a key use case today. We could in theory, right, we have all the technology to do it, but that’s not where we’re focused in terms of getting the best ROI for our platform. And for that matter, with most of the work with these models, most of the compute is done when you’re using them for the inferencing.

You do the training, you spend the cycles so it learns and you get it ready to go, and then you operate it. And it’s the operation where the vast majority of the cycles are, and that can be done on CPUs. In many cases, the cases I mentioned, for personalization, for security, for data analytics, that’s done at the edge, or has good reason to be done there, using CPU-based hardware.

Alex Henderson: Great. Thanks for the complete answer.

Operator: The next question comes from Jonathan Ho of William Blair. Please go ahead.

Jonathan Ho: Hi, good afternoon. Just one question from me. How important is the global load balancing capability? And what does that maybe mean for your ability to either attract more customers or to drive revenue from that product? Thank you.

Tom Leighton: Yes, that’s very helpful because it makes it much more scalable. You have failover, so it’s much more reliable. And I think it’s a basic capability we’ve had, of course, forever, it seems, in delivery and security, and now that’s available for compute. So I think that’s important, and it greatly increases the market we can go after for compute.

Jonathan Ho: Thank you.

Tom Leighton: And operator, we have time for one last question.

Operator: Our last question will come from Rudy Kessinger of D.A. Davidson. Please go ahead.

Rudy Kessinger: Hey, thanks for squeezing me in, guys. Ed, if my math is correct, even if I exclude the $100 million in compute CapEx last year intended for moving over internal workloads, between last year and this year it looks to be about $400 million in compute CapEx. And going back to that kind of $1 in CapEx equals $1 of revenue capacity: $400 million in CapEx, roughly $200 million of compute growth 2024 versus 2022. Do you feel like you guys are maybe overbuilding at all? Or what gives you the confidence, I guess, in the pipeline and the ramping usage to spend so much on another round of build-out this year when we’re not yet seeing growth accelerate, right? You’re guiding to 20% growth next year, and that’s flat with Q4.

Ed McGowan: Yes. So let me address that part first. I would say if you look at the underlying components of what’s growing, it’s actually that enterprise compute opportunity that’s growing very, very fast; it would be kind of foolish to break out the percentages because they’re going off of small numbers and adding very big numbers to them. Now, also part of our strategy is to be competitive and have big core centers in many cities, and that does require a larger build-out. So there’s a lot of capacity that we have to sell. And then also, we’re seeing demand in certain cities, and you have to build out more capacity where you’re getting demand. And then the Gecko sites that we’re building out, it’s not a significant — I mean it’s a decent amount of capital.

But I think that is another big key differentiator for us. And as Tom mentioned, we think there’s a big opportunity there. So I know a lot of people have been questioning us being able to take on large workloads, et cetera. We clearly have a lot of capacity out there. As I talked about earlier, we’ve made the change with our compensation plans where our reps now have to sell compute, so we’re going to see a lot more of that. We’ve done a tremendous amount with the platform in terms of adding functionality: we built out the platform, connected it to our backbone, and we have a lot of new compute partners. The platform is ready to be sold, so we’re pretty optimistic about it. And I think we’re building in a pretty responsible manner. As I talked about, our CapEx is relatively modest for this business right now.

So I think we’re in pretty good shape.

Tom Leighton: And with that, that will end today’s call. I want to thank everyone for joining, and have a great evening.

Operator: The conference has now concluded. Thank you for attending today’s presentation, and you may now disconnect.