Arista Networks, Inc. (NYSE:ANET) Q1 2024 Earnings Call Transcript

Jayshree Ullal: Thank you, Meta.

Meta Marshall: Thank you.

Operator: And your next question comes from the line of Ben Bollin with Cleveland Research Company. Your line is open.

Ben Bollin: Good afternoon, everyone. Thanks for taking the question. Jayshree, you made a comment that back when you had guided in November, you had about three to six months of visibility. Could you take us through what type of visibility you have today and maybe compare and contrast the different subsets of customers and how they differ? Thank you.

Jayshree Ullal: Thank you, Ben. That’s a good question. So let me take it by category, like you said. In the Cloud and AI titans in November, we were really searching for even three months visibility. Six would have been amazing. Today, I think after a year of tough situations for us where the Cloud Titans were pivoting rather rapidly to AI and not thinking about the cloud as much, we’re now seeing a more balanced approach where they’re still doing AI, which is exciting, but they’re also expanding their regions on the cloud. So I would say our visibility has now improved to at least six months and maybe it gets longer as time goes by. On the Enterprise, I don’t know. I’m not a bellwether for macro, but everybody else is citing macro, but I’m not seeing macro.

What we’re seeing with Chris Schmidt and Ashwin and the entire team is a profound amount of activity in Q1 better than we normally see in a Q1. Q1 is usually, come back from the holidays, January is slow. There’s some East Coast storms to deal with. Winter is still strong, but we have had one of the strongest activities in Q1, which leads us to believe that it can only get better for the rest of the year. Hence the guide increase from an otherwise conservative team of Chantelle and myself. And then the tier two cloud providers, I want to speak to them for a moment because not only are they strong for us right now, but they are starting to pick up some AI initiatives as well. So they’re not as large, of course, as the Cloud Titans, but the combination of the service providers and the tier two specialty providers is also seeing some momentum.

So overall, I would say our visibility has now improved from three months to over six months. And in the case of the enterprise, obviously our sales cycles can be even longer. So it takes time to convert into wins, but the activity has never been higher. Thanks, Ben.

Ben Bollin: Thank you.

Operator: And your next question comes from the line of Michael Ng with Goldman Sachs. Your line is open.

Michael Ng: Hey, good afternoon. Thank you very much for the question. It was very encouraging to hear about the migration of trials to pilots with ANET’s production rollout to support GPUs in the range of, I think you said, 10,000 to 100,000 GPUs for 2025. First, I was just wondering if you could talk about some of the key determinants of how we end up in that range, high end versus low end. And then second, assuming $250,000 per GPU, that would imply about $25 billion of compute spending. ANET’s target of $750 million would only be about 3% of the high end. And I think you’ve talked about 10% to 15% networking as a percentage of compute historically. So I was just wondering if you could talk about what I may be missing there, if there’s anything to call out in those assumptions. Thank you.
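[The analyst’s back-of-envelope math can be checked directly. A minimal sketch; all figures below are the analyst’s own assumptions from the question (GPU count range, per-GPU cost, historical networking ratio), not company guidance:]

```python
# Sanity check of the analyst's arithmetic (assumptions from the question).
gpus = 100_000                        # high end of the cited 2025 range
cost_per_gpu = 250_000                # assumed all-in compute cost per GPU, $
compute_spend = gpus * cost_per_gpu   # implied compute spending, $

arista_target = 750_000_000           # ANET's stated AI revenue target, $
share = arista_target / compute_spend # target as a share of compute spend

historical_low, historical_high = 0.10, 0.15  # networking as % of compute, per the question

print(f"Implied compute spend: ${compute_spend / 1e9:.0f}B")   # $25B
print(f"Target as share of compute: {share:.0%}")              # 3%
```

At the low end of the range (10,000 GPUs, $2.5B of compute), the same $750 million target would be 30% of compute, which is why the high-end comparison against the 10%–15% historical ratio is the one the analyst flags.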

Jayshree Ullal: Yes, thank you, Michael. I think we could do better next year. But your point is well taken that in order to go from 10,000 GPUs to 30,000, 50,000, 100,000, a lot of things have to come together. First of all, let’s talk about the data center or AI center facility itself. There’s a tremendous amount of work and lead time that goes into the power, the cooling, the facilities. And so now when you’re talking this kind of production, as opposed to proving something in the lab, that’s a key factor. The second one is the GPU, the number of GPUs, the location of the GPUs, the scale of the GPUs, the locality of these GPUs. And should they go with Blackwell? Should they build with a scale up inside the server or scale out to the network?

So the whole center of gravity, what’s nice to watch, which is why we’re more constructive on the 2025 numbers, is that the GPU lead times have significantly improved, which means more and more of our customers will get more GPUs, which in turn means they can build out the scale-out network. But again, a lot of work is going into that. And the third thing I would say is the scale, the performance, how much radix they want to put in. And I’ll give a quick analogy here. We ran into something similar on the cloud when we were talking about four-way ECMP or eight-way ECMP, or these rail-based designs, as it’s often called, and the number of NICs you connect to go eight-way or four-way or 12-way, or switch over and go to 800 gig. The performance and scale will be the third metric.

So I think power, GPU, locality, and performance of the network are the three major considerations that allow us to get more positive on the rate of production in 2025.

Michael Ng: Thank you.

Operator: And your next question comes from the line of Matthew Niknam with Deutsche Bank. Your line is open.

Matthew Niknam: Hey, thanks so much for taking the question. I got to ask one more on AI. Sorry to beat a dead horse. But as we think about the stronger start to the year and the migration from trials to pilot specific in relation to AI, is there a ramp towards getting to that 750 mil next year? And I guess more importantly, is there any material contribution baked into this year’s outlook, or is there any contribution that may be driving the two-percentage point increase relative to the prior guide for 2024? Thanks.

Jayshree Ullal: Chantelle, you want to take that? I’ve been talking about AI a lot. I think you should.

Chantelle Breithaupt: Yes, I can take this AI question. So when you think about the $750 million target that has become more constructive, per Jayshree’s prepared remarks, that’s a glide path. It’s not zero in 2024; it’s a glide path that ends in 2025 at the $750 million. A glide path, not a hockey stick.

Jayshree Ullal: It’s not zero this year, Matt, for sure.

Matthew Niknam: Thank you.

Jayshree Ullal: Thanks, Matt.

Operator: And your next question comes from the line of Sebastian Naji with William Blair. Your line is open.

Sebastian Naji: Thanks. Good afternoon. I’ve got a non-AI question here. So maybe you can talk a little bit about some of the incremental investments that you’re making within your go-to-market this year, particularly as you look to grab some share from competitors. A lot of them are going through some type of disruption, one or the other, acquisitions, etcetera. And then what you might be doing with the channel partners to land more of those mid-market customers as well?

Jayshree Ullal: Yes, Sebastian, we’re probably doing a little more on investment than we have made progress on channel partners, to be honest. The last couple of years, we were getting very apologetic about our lead times. Our lead times have improved. So we have stepped up our investment on go-to-market, where I’m expecting Chris Schmidt and Ashwin’s team to grow significantly. And judging from the activity they’ve had and the investments they’ve been making in 2023 and 2024, we’re definitely going to continue to put the pedal to the metal on that. I think our investments in the AI and cloud titans remain about the same, because while there is a significant technical focus on the systems engineering and product side, we don’t see a significant change on the go-to-market side.

And on the channel partners, I would say where this really comes into play, and this will play out over multiple years, it’s not going to happen this year, is on the campus. Today our approach on the campus is really going after our larger enterprise customers. We’ve got 9,000 customers, probably 2,500 that we’re really going to target. And so our mid-market is more targeted at specific verticals, like healthcare, education, public sector. And then we appropriately work with the channel partners in the region, in the country, to deal with that. To get to the first billion, I think this will be a fine strategy. As we aim beyond $750 million to $1 billion, and then need to go to the second billion, absolutely we need to do more work on channels. This is still work in progress.

Thanks, Sebastian.

Sebastian Naji: Great, thank you.

Operator: And your next question comes from the line of Aaron Rakers with Wells Fargo. Your line is open.

Aaron Rakers: Yes, thanks for taking the questions. I’m going to shift gears away from AI, actually. Jayshree, if we look at the server market over the past handful of quarters, we’ve seen unit numbers down quite considerably. I’m curious, as you look at some of your larger cloud customers, how you would characterize the traditional server side, and whether or not you’re seeing signs of them moving past this kind of optimization phase, and whether or not you think a server refresh cycle in front of you could be an incremental catalyst for the company.

Jayshree Ullal: Yes, no, I think, if you remember, there was this one dreadful year where one of our customers skipped a server cycle. But generally speaking, on the front-end network now, we’re going back to the cloud, and we do see server refresh and server cycles continue to be in the three to five years. For performance upgrades, they like three, but occasionally some of them may go a little higher. So, absolutely, we believe there will be another cloud cycle because of the server refresh and the associated use cases, because once you do that on the server, there’s appropriately the regional spine, and then the data center interconnect, and the storage, and so much ripple effect from that server use case upgrade. That side of compute and CPU is not changing. It’s continuing to happen. In addition to which, we’re also seeing more and more regional expansion. New regions are being created and designed and outfitted for the cloud by our major titans.