Aaron Rakers: Yes, thank you.
Jayshree Ullal: Thank you, Aaron.
Operator: And your next question comes from the line of Karl Ackerman with BNP Paribas. Your line is open.
Karl Ackerman: Yes, thank you. Jayshree, you spoke about how you are not seeing slowness in enterprise. I’m curious whether that is being driven by the growing mix of your software revenue, and do you think the deployment of AI networks on-prem can be a more meaningful driver for your enterprise and financial customers in the second half of fiscal 2024, or will that be more of a fiscal 2025 event? Thank you.
Jayshree Ullal: Oh, that’s a really good question. I have to analyze this some more. I would say our enterprise activity is really driven by the fact that Ken has produced some amazing software quality and innovation. And we have a very high-quality universal topology where you don’t have to buy five different OSs and 50 different images and operate this network with thousands of people. It’s a very elegant architecture that applies to the data center use case you just outlined for leaf-spine. The same universal spine can apply to the campus. It applies to the wide area. It applies to the branch. It applies to security. It applies to observability. And you bring up a good point that while the enterprise use cases for AI are small, we are seeing some activity there as well.
Relative to the large AI titans, they’re still very small, but think of them as back in the trial phase I was describing earlier: trials, pilots, then production. So a lot of our enterprise customers are starting to go into the trial phase of GPU clusters. That’s a nice use case as well. But the biggest ones are still in the data center, the campus, and the general-purpose enterprise.
Karl Ackerman: Thank you.
Liz Stine: Operator, we have time for one last question.
Operator: Thank you. And your final question comes from the line of David Vogt with UBS. Your line is open.
David Vogt: Great. Thanks, guys. And congratulations. So, Jayshree, I want to go back to AI, the roadmap, and the deployment schedule for Blackwell. It sounds like it’s a bit slower than maybe initially expected, with initial customer delivery late this year. How are you thinking about that in terms of your roadmap specifically, and how does that play into what you’re thinking about for 2025, in a little bit more detail? And does that late delivery put a bit of a pause on some of the cloud spend in the fall of this year, as there seems to be somewhat of a technology transition going on towards Blackwell and away from the legacy product? Thanks.
Jayshree Ullal: Yes, we’re not seeing a pause yet. I don’t think anybody’s going to wait for Blackwell necessarily in 2024, because they’re still bringing up their GPU clusters. And how a cluster is divided across multiple tenants, the choice of host, memory, and storage architectures, optimizations on the GPU for collective communication libraries, specific workloads, resilience, visibility, all of that has to be taken into consideration. All this to say, a good scale-out network has to be built no matter whether you’re connecting to today’s GPUs or future Blackwells. So they’re not going to pause the network because they’re waiting for Blackwell. They’re going to get ready for the network, whether it connects to a Blackwell or a current H100.
So as we see it, the training workloads and the urgency of getting the best job completion time are so important that they’re going to spare no expense on the network side. And the network side can be ready no matter what the GPU is.
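To make the collective communication point concrete, here is a minimal, illustrative sketch, assuming a PyTorch training job with the NCCL backend launched via torchrun. None of this setup comes from the call; it simply shows the all-reduce pattern whose time across the scale-out network largely sets job completion time, whichever GPU generation sits at the endpoints.

```python
# Illustrative sketch (assumed setup, not from the call): one all-reduce,
# the collective that dominates gradient exchange in large training jobs.
# Launch with: torchrun --nproc_per_node=<gpus_per_host> this_script.py
# torchrun sets RANK, WORLD_SIZE, LOCAL_RANK, and MASTER_ADDR/PORT.
import os
import torch
import torch.distributed as dist

def main():
    # NCCL moves the buffers over the fabric (Ethernet/RoCE or InfiniBand).
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Stand-in for one gradient bucket: 256 MB of fp16 on this GPU.
    grads = torch.ones(128 * 1024 * 1024, dtype=torch.float16, device="cuda")

    # Sum-reduce across every rank in the cluster. The wall-clock cost of
    # this call is bounded by the slowest network path between GPUs, which
    # is why congestion handling and visibility in the network matter so
    # much for job completion time.
    dist.all_reduce(grads, op=dist.ReduceOp.SUM)
    torch.cuda.synchronize()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```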
Liz Stine: Thanks, David. This concludes the Arista Networks first quarter 2024 earnings call. We have posted a presentation that provides additional information on our results, which you can access on the Investor section of our website. Thank you for joining us today, and thank you for your interest in Arista.
Operator: Ladies and gentlemen, thank you for joining. This concludes today’s call, and you may now disconnect.