Jayshree Ullal: Hey, David. I think, first of all, the TAM we took up was for 2027, not Q3, Q4 2023, just to be clear. And I think we continue to feel very optimistic about our long-term demand in enterprise, cloud, and AI. So, we shouldn’t confuse the comps and difficulty of comparing Q3 2022 with Q3 2023 with our long-term demand and TAM. Both are valid statements. But as you know, the cloud is a volatile market, and the titans will spend a lot one year and then spend a little less the other year as they’re digesting it and deploying it. So, if you look across multiple years, we’re going to have the strong demand.
Anshul Sadana: And David, this is very consistent with what we talked about last quarter, right? I mean, because of the pattern and the comps, we’re going to grow each quarter consecutively, but you will see that deceleration just because of how last year’s revenue, kind of, trended as well. So, there isn’t anything new here. In fact, we probably took up the overall number a little bit to get to the 26%.
Jayshree Ullal: Right. We said 25% in November, now we’re saying 26%.
David Vogt: Right. Thank you. Thanks, guys.
Jayshree Ullal: Thank you.
Operator: Your next question comes from the line of Meta Marshall with Morgan Stanley. Your line is open.
Meta Marshall: Great. Thanks. Maybe just zeroing in, kind of, on the cloud titans vertical. You mentioned kind of reduced visibility, but just wanted to clarify: had you seen any changes in orders or any push-outs within the quarter, or within, kind of, your near-term guidance, of orders that you thought were going to take place that are maybe getting pushed out? Thanks.
Jayshree Ullal: Yes, Meta, I’ll let Anshul answer the question, but I would say it’s sort of a give and a take. Some things are getting pushed out and some are getting pulled in. The silver lining is clearly AI. That’s not getting pushed out, but some of the deployments of cloud regions are getting pushed out. Anshul, you want to add to that?
Anshul Sadana: Meta, if I can add some more color on the key areas that we’ve been tracking. I know we all want to talk only about AI, but the other key area is the backbone, which is what started the 400-gig cycle in the first place. Those deployments are progressing as expected as well. So that part is steady state. And obviously, AI is growing compared to what we knew before.
Meta Marshall: Great. Thank you.
Operator: Your next question comes from the line of Sebastien Naji with William Blair. Your line is now open.
Sebastien Naji: Hi, thanks for taking the question. Just given this discussion around generative AI, maybe can you frame for us the advantages of Ethernet for building out these AI network fabrics and any metrics you might have that highlight these advantages versus something like InfiniBand?
Jayshree Ullal: Sure. I think I said this before, but I think the Number 1 advantage of Ethernet is the fact that you’re building a standards-based, multi-vendor, highly interoperable network, where everything from troubleshooting to familiarity when you’re connecting to the GPU clusters is very well known. So from a best-of-breed horizontal approach, Ethernet can win every time, where a vertically integrated alternative would generally struggle. Having said that, the vertical approach that InfiniBand adopted for high-performance computing can be applied to GPU clusters as well. So, I think it all depends on the customers’ clusters and how large they are, and the larger they become, the more it favors Ethernet.
Operator: Your next question comes from the line of Tal Liani with Bank of America. Your line is now open.