Cloudflare, Inc. (NYSE:NET) Q4 2023 Earnings Call Transcript

Andrew Nowinski: Sounds like it, especially with Mark Anderson joining now as well. I want to ask a quick follow-up on Vectorize, because I think it’s really interesting how it may potentially be driving growth of R2 as well. From talking to some industry folks, it sounds like customers are starting to recognize how important a vector database is as it relates to inference and fine-tuning models. But I’m curious, I know it’s still early days, but what is the feedback on Vectorize? And are you actually seeing a pull-through, or sort of a push to revenue, on to R2 as well from that? Thanks.

Matthew Prince: Yes. The way that we see the space in general is that we want to have all of the different components that you need in order to build a full-featured AI application. And that means supporting as many models as possible, and some of the work that we’re doing with the major model vendors, like the Metas of the world, and the marketplaces like Hugging Face gives us that. We want to have the best place for inference, and we think that we’re Goldilocks in that space: the centralized public clouds are too far away, and the device that you’re holding in your hand or wearing on your wrist often doesn’t have enough power. But where we are sitting in between gives us a really incredible place for inference. But then the next step of that is you have to be able to take those models and make them your own.

And that’s exactly what Vectorize does. It allows you to customize those models and tune them, the fine-tuning around making them your own. And what’s been interesting about that is that being local and having presence all around the world ends up mattering in various places. An example that a customer gave us the other day was that if you have a model and it responds in the United States and it spells colour, C-O-L-O-U-R, it feels very foreign, whereas if it’s in the UK and it spells color, C-O-L-O-R, it also feels very foreign. And so the ability to not just tune models but tune them locally, while still having the power of beefy GPUs that can then run the inference task, is a really killer combination. And as you said, it’s built on top of the existing primitives that we have, including R2, which is our object storage service.
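[Editor’s note: to make the vector database discussion above concrete, here is a purely illustrative sketch of the core operation such a database performs, nearest-neighbor search over embedding vectors. This is not Cloudflare’s Vectorize API; the index structure, document ids, and three-dimensional vectors are all hypothetical toy values.]

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def query(index, vector, top_k=1):
    """Return the ids of the top_k stored vectors most similar to `vector`."""
    scored = sorted(index.items(),
                    key=lambda item: cosine_similarity(item[1], vector),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]

# Toy index: hypothetical document ids mapped to pretend embeddings.
index = {
    "colour-uk": [0.9, 0.1, 0.0],
    "color-us": [0.1, 0.9, 0.0],
    "unrelated": [0.0, 0.0, 1.0],
}

print(query(index, [0.85, 0.15, 0.0]))  # -> ['colour-uk']
```

A real vector database replaces the brute-force `sorted` scan with an approximate nearest-neighbor index so queries stay fast at millions of vectors, but the retrieval contract is the same: embed the query, return the closest stored items.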

And so while it’s early in the entire AI space, I think we are very well positioned and very strategic. And if you look at things like the downloads of our libraries on public open source repos, they are taking off like crazy. It is incredibly encouraging, and I think this will become a larger and larger part of our business going forward.

Andrew Nowinski: Thank you.

Operator: Your next question comes from the line of Hamza Fodderwala with Morgan Stanley. Your line is open.

Hamza Fodderwala: Hey, good evening. Thank you for taking my question. Maybe just to stick on the Edge AI theme, Matthew. I’m just curious, I know it’s very early days, but as you deploy these GPUs at all your locations worldwide, how are you feeling from a capacity standpoint, from a CapEx perspective, and your ability to meet the demand as some of this inference starts to ramp? And then maybe a follow-up for Thomas. I think variable revenues are a very low percentage of your overall sales. Can you give us any context on how that looked in Q4 relative to Q3? Thank you.

Matthew Prince: Yes. So I’ll take the first question and Thomas can take the second. I’ll start in a place that you might not expect, which is that our success in the Zero Trust market has actually helped empower our ability to invest in the developer platform and especially the AI market. To understand that, the nature of how our business works is that every single service that we run is capable of running on every single piece of equipment that’s out there. So we don’t run a different network for our security products than we do for our performance products. We don’t run a different network for the CASB service that we have versus our Access service versus our DDoS mitigation; it’s all the same machines running across all of that.

And so one of the secrets to Cloudflare’s success has been that we’ve been able to always look for places where there is unused capacity and then effectively act as a giant scheduler in order to make that capacity more productive. And what’s been interesting about the Zero Trust space is the very nature of how that traffic works: our first-generation services were all reverse proxy services, where the traffic was flowing kind of in one direction, whereas our Zero Trust services are all what’s called forward proxy services, where the traffic is flowing in the other. But it turns out that you can actually have traffic flow in both directions without incurring additional CapEx cost. And so as a bigger and bigger part of our revenue comes from the Zero Trust products, it actually frees up CapEx as a percentage of revenue for us to go after other opportunities.

And so that’s what has freed us up to go after a lot of the AI opportunities that we’ve had. It has freed up our ability to acquire the GPUs and invest behind the demand that we’re seeing. And I feel really good because, at core, what we are really good at is running a giant network as a giant scheduler and then wringing as much efficiency and utilization out of it as possible. And so we have been able to stay ahead of the demand that we’ve seen for the GPUs and other resources that we need for some of these newer products. And we are complementing that by adding revenue which is much less CapEx intensive and much more CapEx efficient because it’s running on top of the same platform. Hopefully, that makes sense.
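[Editor’s note: the “giant scheduler” idea described above can be sketched in miniature. This is not Cloudflare’s actual scheduler; the machine names, capacity units, and workload names are all hypothetical, and the policy shown, greedily placing each workload on whichever machine currently has the most unused capacity, is just one simple way to soak up headroom.]

```python
def schedule(tasks, machines):
    """Greedy scheduler sketch: place each task on the machine with the
    most unused capacity that can still fit it.

    `machines` maps machine name -> free capacity units;
    `tasks` maps task name -> required units.
    Returns a task -> machine assignment (tasks that don't fit are skipped).
    """
    free = dict(machines)
    placement = {}
    # Place the largest workloads first so they get first pick of headroom.
    for task, need in sorted(tasks.items(), key=lambda t: -t[1]):
        # Pick the machine with the largest remaining free capacity.
        best = max(free, key=free.get)
        if free[best] >= need:
            free[best] -= need
            placement[task] = best
    return placement

machines = {"m1": 10, "m2": 6}                   # free capacity units
tasks = {"inference": 7, "ddos": 4, "casb": 3}   # required units
print(schedule(tasks, machines))  # -> {'inference': 'm1', 'ddos': 'm2', 'casb': 'm1'}
```

The point of the sketch is the shape of the problem, not the policy: when every service can run on every machine, any spare units anywhere in the fleet become schedulable capacity for new workloads like GPU inference.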

But it really is part of the key to our business. And it’s part of why we have the gross margins that we do where some of the other companies that are point solutions only doing one of these things have much worse margin profiles.

Thomas Seifert: Before I answer your second question, maybe a bit more color on what Matthew just said. You really could see the inherently efficient architecture of the network in the fourth quarter. Our network CapEx was 8% of revenue in the fourth quarter, and we were ahead of plan in delivering GPU capacity: we were at 120 cities against our plan of 100. And the guidance of 10% to 12% of revenue for this year includes a full build-out, bringing GPU capacity into pretty much every location we have, and on top of that, making sure that the network grows with the traffic that we put on it. So that is, I think, one of the key competitive advantages we have, the network architecture, and the fourth quarter was just another proof point.

Regarding your question on variable revenue, variable revenue in the fourth quarter was a bit higher than it was in the third quarter, but it’s still a small percentage of our overall revenue. This will change over time, as our packaging and the offerings we bring to market will have higher variable components. But for the fourth quarter, it was still a small part of our overall revenue and revenue mix.

Hamza Fodderwala: Thank you.

Operator: Your next question comes from the line of Jonathan Ho with William Blair. Your line is open.

Jonathan Ho: Hi. Good afternoon. Matthew, you mentioned that a third of Workers AI accounts are new to Workers. How could this potentially be an accelerant to your developer platform overall? And could you maybe give us some color on the adoption of Workers, in terms of the serverless model, by developers relative to competitors?

Matthew Prince: Yes, I think that if you study developer platforms, they inherently take some time to really take off and take hold. It averages about 10 years from when they launch before they seem like they’re really getting there. And the reason for that is that you need early adopters, those early adopters need to build something which is a killer app, where people say, wow, I couldn’t have built that in any other way. Some of the people who are behind that first killer app need to then go on and start other services, other companies, build other things. There’s an ecosystem that has to develop around all of that, and that inherently takes some time. Developer platforms are adopted; they’re not sold.

And so again, I think that we’ve been very excited about the rate at which our developer platform has been adopted at Cloudflare, but we’ve also been realistic that for that developer platform to take off, inherent in how these develop, there is some time. There are a handful of ways that you can take shortcuts. One of those is to attach yourself to other developer ecosystems which are really robust. And so in some cases we are effectively providing the underlying infrastructure for major other platforms that are out there, a number of the big e-commerce platforms and other things. That helps train developers early on how to build with us, and again is one of the shortcuts to get there. Another one is attaching yourself to wherever there is a lot of interest in the developer ecosystem.