Jayshree Ullal: Okay, Amit. Again, I’ll share a few words and I’d love for Anshul to step in and say some too. Look, if you look back three years ago, we started seriously investing in the enterprise. And back in 2020, we had a small enterprise business, and it was largely composed of, as you rightly pointed out, data center and some high-performance compute and low-latency HFT. Can’t ever forget our original heritage. But in the last three years, we have made an investment and seen a significant uptake in enterprise customers wanting to do business with Arista. Historically, it’s been the high-tech enterprise and the financials. And today, we’re seeing a much better cross section of verticals, including healthcare and education, and we expect to see more and more distributed enterprises.
And to your question on data center versus campus, the answer is yes, to both. We actually see one uniform architecture where you can have a universal spine that connects into a wired leaf, a wireless leaf, a storage cluster, a compute cluster, a border leaf for routing, and WAN transit. It’s pretty exciting that Arista is truly and remarkably setting the tone for a two-tier defined architecture across the enterprise, and building that modern operating model based on CloudVision.
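To make that two-tier picture concrete, here is a minimal, purely illustrative sketch (not an Arista product or configuration) of a universal spine with the leaf roles Jayshree lists; the `Spine` class, role names, and the two-hop check are assumptions added for illustration only.

```python
# Illustrative sketch only: models the two-tier "universal spine" idea, where
# every leaf role attaches directly to the same spine layer, so any
# leaf-to-leaf path is leaf -> spine -> leaf, regardless of the leaf's role.
from dataclasses import dataclass, field

@dataclass
class Spine:
    name: str
    leaves: list = field(default_factory=list)

    def attach(self, leaf_name: str) -> None:
        self.leaves.append(leaf_name)

    def hops(self, src: str, dst: str) -> int:
        # In a two-tier design, traffic between any two leaves crosses
        # exactly one spine: two links end to end.
        assert src in self.leaves and dst in self.leaves
        return 2  # src-leaf -> spine -> dst-leaf

universal_spine = Spine("universal-spine")
for role in ["wired-leaf", "wireless-leaf", "storage-leaf",
             "compute-leaf", "border-leaf-routing", "wan-transit"]:
    universal_spine.attach(role)

print(universal_spine.hops("wireless-leaf", "storage-leaf"))  # 2
```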
Anshul Sadana: Amit, this is Anshul here. We have a great team being led by Chris Schmidt and Ashwin Kohli in this space. And now we sell in many, many countries around the world. And as Jayshree mentioned, in both data center and campus, customers are coming to us for the automation, for the higher quality, and for the visibility that we’re able to bring to them across the board in one architecture, one OS, and one CloudVision. That message resonates with every CIO today, and they are no longer worried about Arista being this new kid on the block that’s a risky move for them. We are, in fact, becoming the de facto standard, and they like it. That’s why the momentum just continues. It’s good execution by the team and getting to more and more customers around the world.
Liz Stine: Thank you, Amit.
Operator: Your next question comes from the line of Ben Bollin with Cleveland Research. Your line is open.
Ben Bollin: Thanks for taking the question. Good evening, everyone. Jayshree and Anshul, I was hoping you might be able to comment a little bit about your thoughts as you make progress in the backend network around GPU cluster opportunity, how you see that developing versus what you’ve shared with us previously? And any color in particular around both pre-existing and the opportunity for net new wins would be helpful. Thanks.
Jayshree Ullal: Sure. Again, this is an area that Anshul lives and breathes more than I do, so I’ll give you some executive comments. But, Ben, as I see it, the back-end network was something we didn’t even see a few months or years ago, and it was largely dominated by InfiniBand. Today, if I look at the five major designs for AI networking, one of them is still very InfiniBand dominated, while all the others we’re looking at are adopting a dual strategy of both Ethernet and InfiniBand. So, I think AI networking is going to become more and more favorable to Ethernet, and particularly with the Ultra Ethernet Consortium and the work they’re doing to define a spec, you’re going to see more products based on UEC. You’re going to see more of a connection between the back-end and the front-end using IP as a singular protocol.
And so, we’re feeling very encouraged that especially in 2025, there will be a lot of production rollout of back-end and of course front-end based on Ethernet. Over to you, Anshul.
Anshul Sadana: Sure, thanks, Jayshree. Ben, our cloud titan customers, as well as the specialty providers, have been great partners of ours. So, the level of partnership and co-development that’s going on in this space is high. Just like in previous cycles and previous products we’ve done with them, there’s a lot of fine-tuning needed in these back-end networks to get the maximum utilization of the GPUs. And as you know, we are good at these engineering projects, so the teams are enjoying it. The activity is much, much higher than before. And the goal is to scale these clusters as quickly as possible so our customers can run their jobs faster. We’re feeling good about it. You’ve heard comments from Jayshree as well in the past, and you’ll hear more at the Analyst Day on this topic, too, but all good on the activity front over here.
Ben Bollin: Thank you.
Jayshree Ullal: I think one thing to just add is the entropy and efficiency of these large language models and the job completion time is becoming so critical that it’s not just about packet latency, it’s really about end-to-end latency. And this is something our team, especially our engineers, know a lot about from the early days. So, we’re really working this end to end.
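As a rough illustration of why end-to-end latency, rather than per-packet latency alone, drives job completion time in these clusters: the sketch below uses made-up numbers and is not from the call; it simply shows that a synchronized training step can only finish when its slowest flow completes, so the tail sets the job time even when the average looks healthy.

```python
# Hypothetical numbers only: a synchronized AI training step across many GPUs
# completes when the *slowest* flow finishes, so job completion time is set by
# end-to-end tail latency, not by the average per-packet latency.
flow_times_ms = [10.2, 10.4, 10.1, 10.3, 23.7, 10.2, 10.5, 10.3]

average_ms = sum(flow_times_ms) / len(flow_times_ms)
step_time_ms = max(flow_times_ms)  # the step waits for the straggler

print(f"average flow time: {average_ms:.1f} ms")    # looks fine on average...
print(f"step (job) time:   {step_time_ms:.1f} ms")  # ...but one slow flow stalls the step
```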
Liz Stine: Thanks, Ben.
Operator: Your next question comes from the line of Aaron Rakers with Wells Fargo. Your line is open.
Aaron Rakers: Yeah, thanks for taking the question. I just want to kind of dovetail off that last question a little bit. I know, Jayshree, last quarter, I think it was, you commented that you’d expect to see pilot deployments for these AI opportunities in ’24 and then meaningful volume in 2025. First of all, do you reaffirm that view, or has that changed at all? And then on that, can you give us some context of how you see network spend intensity for these AI fabrics relative to what, I think in the past, has been kind of a high-single-digit percent of compute spend on networking in classical cloud infrastructure environments?
Jayshree Ullal: Well, first of all, Aaron, the first question is easy. I reaffirm that view, and there will be more on November 9 at our Analyst Day. So, if I tell you everything now, you may not attend that session. Coming back to networking spend versus the GPUs and the rest of the compute, et cetera, I would say it started to get higher and higher with 100 gig, 400 gig, and 800 gig, where the optics and the switches are more than 10%, perhaps even 15%, and in some cases 20%. A lot of it is governed by the cables and optics, too. But the percentage hasn’t changed a lot in high-speed networking. In other words, it’s not too different between 10, 100, 200, 400, and 800 gig. So, you’ll continue to see that 10% to 15% range.
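A purely hypothetical back-of-the-envelope reading of that 10% to 15% range (the dollar figure below is invented for illustration and is not guidance from the call):

```python
# Hypothetical illustration of the 10%-15% range cited above: for a given
# AI/compute buildout, switching plus cables and optics would land roughly here.
compute_spend_musd = 100.0  # invented example figure, not from the call

low, high = 0.10, 0.15
print(f"networking spend: ${compute_spend_musd * low:.0f}M to "
      f"${compute_spend_musd * high:.0f}M of a ${compute_spend_musd:.0f}M buildout")
```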