Operator: Your next question comes from the line of Ittai Kidron from Oppenheimer. Please go ahead. Your line is open.
Ittai Kidron: Thanks, and congrats to you as well, Ita, I’ll miss you. And, Chantelle, good luck, of course, to you in your new role. A couple of questions for me. First of all, on the cloud mix, it kind of declined a little bit on the year. Maybe you can tell us what are your underlying working assumptions for ’24? And then more broadly on the ’24 guide, Chantelle, Ita, it feels like you’re talking about a $600 million increase year-over-year in revenue. It feels like half of it can already come from AI networking, given your ’25 targets, and you seem very comfortable about your ’25 targets, so I would think you should be comfortable about ’24 as well. So if I assume that $200 million to $300 million come from AI networking this year, why should the rest of the business generate only $300 million to get to your annual targets? Why such aggressive conservatism here on the guide?
Jayshree Ullal: Okay, Ittai, let me take the first question, and then I’ll pass it over to Ita and Chantelle for what you call conservatism. So first of all, our cloud mix is very strong, very good. But I think what you should take away from this is not that our cloud mix came down, but that our enterprise did really, really well. And since 100% is the total pie, when something does really well, the others look less so. So we’re doing well in all three sectors, and we’re very proud of the enterprise momentum. AI is going to come. It is yet to come; certainly in 2023, as I’ve said to you many, many times, it was a very small part of our number, but it will gradually increase. Okay, which one of my fantastic CFOs wants to take the conservatism question?
Chantelle Breithaupt: I’ll start the [indiscernible]. Thank you for the well wishes. I think coming into 2024, it’s a balanced view in the sense that we want to have multiple options to get to our year, and so we’ll work through what those mixes are and how to get to the performance that we’ve laid out in our guidance. I think Jayshree very eloquently put it in the sense of ’23, ’24, ’25 and what we expect from AI going from trials to pilots to production. And so we’ll work through what that means in 2024. But rather than change anything in Q1 at this time, we’re just going to go a quarter at a time, especially with me coming in, and we’ll see how the year progresses.
Ittai Kidron: All right. Good luck.
Chantelle Breithaupt: Thank you.
Operator: Your next question comes from the line of Alex Henderson from Needham & Company. Please go ahead. Your line is open.
Alex Henderson: Ita, I can’t believe you’re leaving us. I’m going to miss you. Go ahead.
Jayshree Ullal: No, she said she will miss you.
Alex Henderson: I’m sorry, go ahead.
Ita Brennan: Go ahead, Alex. Ask your question.
Alex Henderson: So, the question I have really is what are you hearing from the field, particularly in the enterprise segment. There’s been a lot of noise about indigestion of the large amounts of volume that have been shipped to various companies. And clearly, there’s some concern that there’s been some oversupply over the last year into the enterprise market. And I think you talk to a lot of CIOs. What are they telling you in terms of where their IT spending intentions are for ’24? Where are they saying the spending is going relative to networking gear versus alternative spending priorities? Thanks.
Jayshree Ullal: That’s a good question, Alex. I certainly talk to a lot of CIOs and CEOs. And if I rewind the clock to January last year, I think things were a lot spookier then. We were going through this whole financial crisis, Silicon Valley Bank, this, that and the other. And if I now fast forward to a year later, our momentum in the enterprise is actually stronger now than it was a year ago. So all these [Technical Difficulty] customers are looking for that innovation: a modern network model, CI/CD principles, bringing DevOps, NetOps and SecOps all together. And so Arista continues, in my view, with the large TAM we have in the enterprise, at least $30 billion out of that $60 billion, to find the opportunity to really deliver that vision of client to cloud and break down the operational silos. And I would say today, the CIOs recognize us as the pure-play innovator more than any other company.
Alex Henderson: Great. Thank you.
Jayshree Ullal: Thanks, Alex.
Operator: Your next question comes from the line of Atif Malik from Citi. Please go ahead. Your line is open.
Atif Malik: Thank you for taking my question. Jayshree, thanks for providing those comments on the four wins against InfiniBand. Now, your networking competitor announced a collaboration with NVIDIA on Ethernet AI enterprise solutions last week. Can you talk about what this means for your Ethernet back-end business, if anything?
Jayshree Ullal: Yeah. I don’t understand the announcement as well as my competitor probably does. I think it has more to do with UCS and Cisco validated designs. Specific to our partnerships, you can be assured that we’ll be working with the leading GPU vendors. And as you know, NVIDIA has 90% to 95% of the market. So, Jensen and I are going to partner closely. It is vital to get a complete AI network design going. We will also be working with our partners at AMD and Intel. So we will be the Switzerland of XPUs, whatever the GPU might be, and we look to supply the best network ever.
Atif Malik: Thank you.
Operator: Your next question comes from the line of Tim Long from Barclays. Please go ahead. Your line is open.
Tim Long: Thank you. Yeah, Ita, going to miss you as well, good luck. So I wanted to follow up a little bit more on that AI, Jayshree. You talked about those wins. Could you just give a little bit more color there? Do you think these deployments are going to be more sole-sourced, or will there be multiple vendors? Did you face a different competitive landscape than normal in these? And what are you thinking about the breadth of this business? I’m sure it’s a lot of the really large customers, as you said, right now. But can you talk a little bit about how you see this moving into other service providers or the enterprise vertical? Thank you.
Jayshree Ullal: Yeah. Thanks, Tim. Okay. So let me just step back and say the first real consultative approach from Arista is to provide our expertise on how to build a robust back-end AI network. And so the whole discussion of Ethernet versus InfiniBand becomes really important because, as you may recall, a year ago I told you we were outside looking in; everybody had an InfiniBand HPC cluster that was kind of getting bundled into AI. But a lot has changed in a year. And the popular product we are seeing right now as the back-end cluster for AI is the Arista 7800 AI spine, which in a single chassis, with north of 500 terabits of capacity, can give you a substantial number of ports at 400 or 800 gig.
So you can connect up to 1,000 GPUs just doing that. And that kind of data-parallel scale-out can improve the training time dimensions of large LLMs and the massive integration of training data. And of course, as we shared with you at the Analyst Day, we can expand that to a two-tier AI leaf and spine with a 16-way CMP to support close to 10,000 GPUs, nonblocking. This lossless Ethernet architecture, along with the overlay we will have on top of it from the Ultra Ethernet Consortium in terms of congestion control, packet spraying and working with a suite of UEC-compatible NICs, is what I think will make Ethernet the default standard for AI networking going forward. Now, will it be sole source? [Indiscernible] I would be remiss if I didn’t tell you that our cloud networking isn’t sole sourced.
So probably our AI won’t be either. But today’s models are moving very rapidly, relying on high bandwidth and predictable latency, and the focus on application performance requires you to be sole sourced initially. Over time, I’m sure it’ll move to multiple sources, but I think Arista is very well positioned for the first innings of AI networking, just like we were for the cloud networking decade. One other thing I want to say is, although a lot of these customers are doing AI pivots, these AI pivots will result in revisiting the front-end cloud network, too. So this AI anatomy is being really well understood. And if you take a deep look at the centerpiece of it, which is all the GPUs, they have to connect to something very reliable, and this is really where we come in.
And so being actively involved is going to pay a lot of dividends, but we’re still very much in the first innings of AI.
Tim Long: Great. Thank you.
Jayshree Ullal: Thanks, Tim.
Operator: Your next question comes from the line of Ben Reitzes from Melius Research. Please go ahead.
Ben Reitzes: Hey, thanks for the question. And obviously, Ita, it’s been great working with you. Thanks for all you’ve done for us. I wanted to ask about your guidance and the conservatism from another lens here. With regard to 2024, since your November 9 Analyst Day, some things have changed. Microsoft, Meta and Google have all raised their CapEx forecasts for 2024. Obviously, your guidance for 2024 stays the same, and I know you’re usually conservative. And then for 2025, AMD upped their TAM for AI very significantly, by a multiple. And I guess they’re seeing something that many of us are seeing with regard to the future demand. And you’ve kept your guidance at $750 million. With that backdrop and the changes since November 9, and you guys keeping your guidance, and I understand you’re conservative, do you mind addressing your conservatism, or your guidance, from those lenses, both with regard to ’24 and ’25, Jayshree?