Ita Brennan: Yes. No, I don’t like balloon as a word. I mean there are certain suppliers where the lead times require us to carry inventory. So, we will continue to do that. I think on the purchase commitments, we talked about this a little bit at the Analyst Day as well. I mean as lead times start to move around, obviously, we will work with the contract manufacturers on those. That’s why, over time, that number should come down as we align to lead times with the contract manufacturers.
Anshul Sadana: Okay. And on the Meta question, the Meta architecture already is quite modular with the design we talked about, the 7388. It can go up to 256-way ECMP, 256 next hops. The cluster sizes are smaller, so they don’t need 256. Maybe they can start with 16 or 32. So, that’s already built into the model today. I don’t believe it has any impact on us. Same thing on the 7800 AI spine, they can add a number of line cards based on the number of GPUs or racks that they are connected to. So, we are very, very efficient already and this fits very well in their model.
Liz Stine: Thank you, Erik. Operator, next question.
Operator: Your next question comes from the line of Sami Badri with Credit Suisse. Please go ahead. Your line is open.
Sami Badri: Great. Just two quick ones for me. The first one is for Ita. Can we just talk about the benefits of some of the price increases that you guys have put through the portfolio and the effect they had on gross margins? And then the second question is for Jayshree. Jayshree, you have given us a ballpark quantification of the number of months of visibility you have with some of your biggest customers. Could you give us an update on that same type of visibility?
Ita Brennan: Yes. I think on the pricing piece of it, I mean for sure, we are getting some benefit from the pricing. But as time goes on, in this dynamic environment, it starts to be harder to track; it kind of gets lost in the overall growth of the business. But we did check, and there is definitely some uptick from pricing there. It’s just not something that we are tracking on an ongoing basis.
Jayshree Ullal: And in terms of visibility, Sami, in the past, we have seen as much as a year’s visibility. If I were to guess, I think as the lead times improve, that visibility will reduce. Maybe it’s down to three quarters now. And the visibility was very much tied to planning cycles. When the planning cycles were longer than a year, because our lead times were longer than a year, then we got greater visibility.
Liz Stine: Thank you, Sami.
Operator: Your next question comes from the line of George Notter with Jefferies. Please go ahead. Your line is open.
George Notter: Hi there. I am curious about why you guys think you can take share from InfiniBand going forward in AI and HPC environments. I am just curious about what the logic is there. Thanks.
Jayshree Ullal: Yes. There are two big reasons. I think in the past, Ethernet was always trying to catch up to InfiniBand in terms of performance and bandwidth. Today, as we start talking about 400, 800, 1.2 terabits, the options on Ethernet are much greater and more cost-effective than anything else out there. The other is, I think historically, InfiniBand has been more for high-performance compute use cases. We are very bullish on the AI workloads and their impact on Ethernet, where we don’t believe InfiniBand has any particular advantage and, in fact, Ethernet does.
Liz Stine: Thank you, George. We have time for one last question.
Operator: Your final question comes from the line of Simon Leopold with Raymond James. Please go ahead. Your line is open.