Matthew Prince: Yes. So first of all, I mean, the world is getting a lot more complicated, and we’re seeing even nation-state actors turning to DDoS attacks to disrupt services around the world. And a new attack vector, which our team, alongside Google and AWS, helped discover and announce this last quarter, is generating attacks that are literally almost doubling the total volume of traffic on the entire Internet while they’re underway. The nature of how we’re able to stop those attacks, and the architecture that lets us stop them, is unique to Cloudflare. And we’re seeing even some of the large hyperscale public clouds that have their own limited DDoS mitigation services point customers to us, because we’re the best in the world at this.
And I think that’s a real differentiator for us. The pricing also is important. What’s unique is that every single server that is part of Cloudflare’s network can run every single service. So as we stop these massive attacks, not only are we technically better able to stop them, we’re able to do it without changing our underlying pricing, because it doesn’t drive up our costs. Early on, we said that we should pass that advantage on to our customers. And so we created pricing that was, as you said, very customer-centric and that’s appreciated by the market. I think more and more people are leaning in on DDoS and using us for that. And what we’re seeing is that we can then use that as sort of the milk in the grocery store, where we can sell other products across our suite.
And just like I said before, customers don’t just want to protect the front door. They don’t just want to protect the back door. They want to protect all of the parts of their business. And so we’re seeing that having collective solutions from a platform that can solve DDoS, Zero Trust, WAF, rate limiting, bot management, and access control, all behind one single pane of glass, is a very, very compelling offering. Or, to put it somewhat starkly, if you look at some of the other Zero Trust vendors that are out there, they’re actually Cloudflare customers using our DDoS mitigation products, because we’re the best in the world at them.
Shrenik Kothari: Great. That’s super helpful. Just a quick follow-up on what you said around Zero Trust. I mean, I agree, your margin really allows you to disrupt the market and enables you to use pricing as a competitive advantage. And of course, you discussed the DDoS pricing, and on Zero Trust you’re bundling around SASE and Zero Trust. It still seems like you guys are pricing more attractively than the rest of the market. Just curious, are you also thinking about going for more premium pricing, given where the market is and the strength of the demand, and trying to push forward on the margin front? Is that a lever that you guys are thinking through?
Matthew Prince: I think that we can use price there as a weapon to win business. We have tended not to see a lot of price sensitivity there, so we’re not going to just push on that if we don’t have to. I think the place that is more attractive is actually in how we create platforms where you can have a complete network security solution. And it’s also really powerful that we can run our Zero Trust products at extremely high margins, for the same reasons as the DDoS mitigation products: if you took all of the other Zero Trust vendors that are out there and added up all their traffic, we could add them all to Cloudflare’s network without significantly increasing our underlying COGS of delivering that traffic.
And so, that gives us an advantage over time. And we do believe that whoever has the lowest cost of service tends to win over the long term. That is something that is very difficult for any of our competitors in that space to match.
Shrenik Kothari: Got it. That’s super helpful. Thanks a lot.
Operator: Your next question comes from the line of Alex Henderson from Needham & Company. Please go ahead.
Alex Henderson: Great. Thank you so much. Matt, you guys continue to amaze me with the ability to anticipate things five, six, seven years before they happen; I think about the microthreading of microservices in your serverless platform as an example. And now you’re talking about having left slots open for inference AI, six years ahead of schedule. It’s pretty amazing prescience. But I was hoping you could talk a little bit about the uniqueness of the platform as we move into a world driven by inference AI. It’s pretty clear to me that the combination of the Workers platform, the location of your edge, and all of the other elements of the service platform at the edge gives you a unique positioning, particularly with the R2 and the vector stuff that you’ve announced. So is there anybody else that is in a reasonable position to compete with you in that context? Or are you as unique as you look to me in this competitive landscape?
Matthew Prince: Thanks for the kudos. I think we sometimes are a little bit early. For people who were paying close attention, almost three years ago we actually did an announcement with NVIDIA that was a bit of a trial balloon in the space, to see how much demand there was. At the time, there wasn’t a ton of demand, but we could see how models were improving and how inference was improving. We knew that this was something that was coming, and so we learned from that first effort. I think we’ve built a really strong relationship with the NVIDIA team, in part because of that, and because of some of the work that we’ve done with them in the networking space. But I think that we try to learn and buy ourselves the flexibility over time to be able to deliver in this space.
I don’t know of anybody else that has an architecture like ours, where we made the hard decision early on to say that every machine everywhere can run every task, so that we don’t have dedicated scrubbing centers and we don’t have dedicated regions for one service or another. That has required us to invent a lot of technology, build a lot of intellectual property around that technology, and develop a lot of know-how in running a network like that. It is harder up front to build it that way, but it results in a much higher level of efficiency and a much faster pace of innovation, and we’re able to capitalize on that today. And so I think it would require a complete rearchitecture from any of the providers that we know in order to be able to do what we’ve done in this space.
And I think it’s, again, part of the secret to our continued pace of innovation. And again, really proud of our team and everything that they’ve done to be able to deliver it.
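[Editor's note: The architecture Prince describes, where any machine can absorb any workload rather than traffic being routed to dedicated scrubbing centers or per-service regions, can be illustrated with a minimal sketch. The TypeScript below is purely hypothetical and is not Cloudflare's code; the service names, node pool, and load threshold are invented to show the placement logic, which considers only proximity and spare capacity, never service type.]

```typescript
// Hypothetical sketch: in a homogeneous fleet, request placement ignores which
// service is being asked for, because every node can run every service.
type Service = "ddos" | "waf" | "zeroTrust" | "workers" | "inference";

interface Node {
  id: string;
  city: string;
  load: number; // fraction of capacity in use, 0..1 (illustrative)
}

// Pick the closest node with headroom; `nodes` is assumed to be pre-sorted by
// network proximity to the client. The service argument is deliberately unused:
// there is no "DDoS cluster" or "inference region" to route to in this model.
function placeRequest(nodes: Node[], _service: Service): Node {
  return nodes.find((n) => n.load < 0.8) ?? nodes[0];
}

// A DDoS flood and an inference call land on the same pool of machines.
const pool: Node[] = [
  { id: "ams-01", city: "Amsterdam", load: 0.35 },
  { id: "fra-07", city: "Frankfurt", load: 0.92 },
];
console.log(placeRequest(pool, "ddos").id);      // "ams-01"
console.log(placeRequest(pool, "inference").id); // "ams-01"
```

[In this model, adding a new workload such as inference does not require standing up a new class of hardware or a new region; it is absorbed by the same placement rule.]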
Alex Henderson: One last question along the same line, if I could. The inference AI market: how much of it do you expect to be at the edge, and how much of that inference do you expect might be in more centralized or regionalized locations? Thanks.
Matthew Prince: So I mean, my thesis around this is that probably most inference tasks will be run directly on your device: on your Apple device, on your Samsung device, on your LG device, whatever that is. But ideally, you’re going to want to be able to seamlessly hand off, whether you’re using a low-power device that needs to optimize for battery life or for the lowest bill of materials, or you’re trying to run a task which is so big that you’re going to have to hand it off to a device nearby. And so you want the rails between those things to be as seamless and efficient as possible, and from a user’s experience, you’re going to want that to be transparent to them. I think the most powerful devices out there are going to get more and more powerful with the models that are running on them. But for less powerful devices, devices that, again, have to have weeks of battery life but still need to be smart, or for the most interesting models that are bigger and can do more interesting things, I think it’s going to naturally make sense for that inference to be running as close as possible to the end user.
I don’t see a ton of reasons why you would run inference back in some centralized location. I think that is going to carry a performance penalty, a regulatory penalty, and actually a cost disadvantage in sending it back to a central location. And so as we build this out and give people the tools to run those sophisticated models at the edge, I think it’s going to be a two-horse race: the phone and device manufacturers, which are going to get better and better over time, and connectivity clouds like Cloudflare, which are going to deliver on those models that can’t run on the end device itself.
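[Editor's note: Prince's point about seamless handoff between on-device inference and a nearby edge location can be sketched in a few lines. The TypeScript below is a hypothetical illustration only: the endpoint URL, thresholds, and the idea of sizing by parameter count are assumptions made for the example, not a real Cloudflare or device API.]

```typescript
// Hypothetical sketch of device-to-edge inference handoff: run locally when the
// model is small enough and battery allows, otherwise offload to a nearby
// inference endpoint. Names and numbers are illustrative, not a real API.
interface InferenceRequest {
  prompt: string;
  modelParamsBillions: number; // rough proxy for how heavy the model is
}

const ON_DEVICE_LIMIT_B = 3; // assume the device can only run small models
const LOW_BATTERY = 0.2;     // below this, prefer offloading to save power

async function runOnDevice(req: InferenceRequest): Promise<string> {
  // Stand-in for a local, on-device model runtime.
  return `local answer to: ${req.prompt}`;
}

async function runAtEdge(req: InferenceRequest): Promise<string> {
  // Stand-in for a nearby inference endpoint; the URL is made up for this sketch.
  const res = await fetch("https://edge.example.com/v1/infer", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(req),
  });
  return res.text();
}

// The "rails" between device and edge: callers never need to know where the
// work actually ran.
async function infer(req: InferenceRequest, batteryLevel: number): Promise<string> {
  const offload =
    req.modelParamsBillions > ON_DEVICE_LIMIT_B || batteryLevel < LOW_BATTERY;
  return offload ? runAtEdge(req) : runOnDevice(req);
}

// Example usage: a small model on a charged phone stays on the device; a 70B
// request or a low battery would route through runAtEdge instead.
infer({ prompt: "summarize this note", modelParamsBillions: 1 }, 0.9).then(console.log);
```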