Cloudflare, Inc. (NYSE:NET) Q2 2023 Earnings Call Transcript

Keith Weiss: One clarification. So in that world view, it sounds like you think we’re going to see more kind of smaller open source models and distribution of a lot of very small models versus, like, a world view that everything is going to consolidate into a big GPT or LaMDA model over time. Is that correct?

Matthew Prince: Not necessarily, but we run enough capacity out at the edge of our network that we can run fairly large, I mean, very large models out at our network. And what I think is a little bit confusing is that if you’re trying to do the training of the models, then you need the absolute latest, greatest GPU, the H100 from Nvidia right now, and there’s a lot of constraint in getting those chips. But there’s actually a sweet spot for inference tasks, which isn’t necessarily at the absolute cutting edge. And so Cloudflare is not the right place to actually process the training of models. That makes much more sense to do in a more traditional, centralized data center model, much like the traditional hyperscale public clouds.

And in those cases, you have to have the latest, greatest GPUs. But when you’re doing inference, again, a lot of that’s going to run on your device, and a lot of that is also going to run inside the network. And we’re going to be able to, with a much lower capex spend, leverage the edge of our network in order to be able to do that processing extremely efficiently. And maybe we don’t need the H100. Maybe we can live with an A100 or, you know, whatever is, again, a generation or two behind. But that’s also the difference between training and inference. Inference doesn’t necessarily need the latest, greatest GPU. Does that make sense?

Keith Weiss: Yeah, super helpful. Thank you.

Thomas Seifert: Keith, what I would add is, I always remind people that to truly understand the competitive moat of Cloudflare and the efficiency of the business model, you have to start with the network and how it’s efficiently architected: off-the-shelf hardware and a completely integrated, homogeneous software stack that allows you to run every product on every server in every location. So it’s this massive, globally distributed network that is not only efficiently designed to handle large volumes of data, but also large volumes of simultaneous requests. And that makes it already today very well suited for inference tasks, which by nature often involve processing many requests simultaneously. And that requires less computational power than training a model.

So I think the architecture of the network itself puts us in this really advantageous position. And that’s why we are so confident that the business model is going to hold, and it’s one of the reasons why we’re able to bring the CapEx ratio down for the year despite the fact that AI workloads are being put more and more on our network. So you really have to, as I always say, go back and really understand the efficient architecture of the network itself, and you’ll find the answer there.

Keith Weiss: Excellent. Super interesting guys. Thank you.

Operator: And we will take our next question from Andrew Nowinski with Wells Fargo. Your line is open.

Andrew Nowinski: Okay. Thank you. And congrats on another great quarter. I wanted to shift gears and ask about Zero Trust. Are there any more details you could provide on the record Zero Trust contract you talked about, whether that was a displacement of another vendor, and maybe why they selected Cloudflare? And then I have a quick follow-up. Thank you.

Matthew Prince: Sure. I think that in almost all of the Zero Trust deals that we see, we are at least in competition with some of the first-generation Zero Trust vendors. In many of them, there’s an incumbent vendor, and we are displacing them. Usually, when that happens, it’s because the usability of the existing Zero Trust vendor has been really bad. It’s crazy that with some of the leading Zero Trust vendors, if you try to use your laptop on a United Airlines flight, the Wi-Fi captive portal doesn’t work. That’s obviously unacceptable. Maybe that was acceptable in the pandemic when no one was traveling, but now that people are traveling, that’s something that just doesn’t hold up anymore.

And so I think the thing that has been an advantage for Cloudflare is that we almost think of ourselves at times as a consumer company. We have a Zero Trust product that you can download to your phone right now and use with 1.1.1.1, and that has been running on so many millions of devices. And we work with device manufacturers to actually build our network directly into their applications. Those things give us visibility to be able to focus on performance, to be able to focus on end user experience, and to be able to directly replace, oftentimes, what have been those sort of first-generation Zero Trust vendors that frankly don’t have the same user experience and the same performance. And so in many of these cases, in almost all the cases, we’re at least competing with the other, more traditional Zero Trust vendors.

And in many cases now, we’re displacing them.