So E is interesting not just in the flash transition, but in the ability of customers to move to a more consistent portfolio overall.
Rob Lee: And Meta, just to jump in here. I think part of your question had to do with building comfort with customers around the transition. Look, I think it’s important to realize that the transition from hard disk to the flash we’re offering with E is completely seamless. Another way to look at it is, there are basically no reasons for a customer to say, hey, I would like to keep disk — there really are no puts and takes. The only reason that customers have held onto disk in a lot of these environments, historically, has been price. And now, with what we’ve been able to do with E, we have effectively neutralized that. I think that’s something that perhaps goes underappreciated. And I think customers will make that transition, because it’s not a re-architecture, it’s not a redesign. It really is just a seamless and effectively instantaneous improvement on all dimensions.
Kevan Krysler: And let me just touch on that last question on order-to-ship time for FlashBlade//E: it’s consistent with what we see across our portfolio. So no significant difference there.
Paul Ziots: Thank you, Meta. Next question, please.
Operator: We go next now to Pinjalim Bora at JPMorgan.
Pinjalim Bora: Great, guys. Congrats on the quarter. Staying on the AI theme, I wanted to ask you about Portworx — that was a little bit surprising. I don’t think people are thinking about Portworx and AI together, so maybe talk about the AI opportunity with respect to Portworx. What are you seeing? What kind of workloads are these — on-premise or cloud? And is this more on the training or inference side of AI? That would be helpful. Thank you.
Charlie Giancarlo: Yeah. Thanks, Pinjalim. Well, first of all, it’s both inference and training. As you might imagine, a lot of these new developments are being made in container-based and Kubernetes-based environments. And Portworx is without equal in terms of its ability to manage storage of all types for Kubernetes and containers, and to do it on bare metal, in the cloud, or on top of our infrastructure. These are very large environments. Portworx has always been really superior when it gets to large-scale production, and so when developers know a project is going to go large scale when they scale out, they’re starting off with Portworx for their stateful data management.
Kevan Krysler: And Pinjalim, just to add on to that. I look at the Portworx and cloud-native piece of this puzzle as really a part of the overall set of environments that we see as being positively impacted by the uptake of AI technology. And so, certainly, number one is AI training infrastructure and environments — we’ve talked to you a lot about that. Number two is really the demand to store more and more data in the enterprise, remove the silos, and really move more of that cold data into the warm. And then, as Charlie says, number three is looking at the application environments that the trained AI models are connected to. If you look at where a lot of that data is coming into enterprises, it’s coming from multiple sources: business data, databases, IoT sensors, machine data all over the place.
A lot of these application environments are highly dynamic and very aligned to open-source, cloud-native technologies. It’s also important to realize that getting these trained AI models deployed and connected to real-time systems is ultimately the goal for a lot of these enterprises. And so, when you look at the application environments driving these real-time systems that folks want to plug chatbots or what have you into, again, they are all very heavily built on cloud-native architectures and open-source software, and they have the needs for agility, scalability, and elasticity that those architectures afford, while at the same time having the enterprise capabilities that technologies like Portworx can offer. And really, that’s what we’re seeing out there today.
Paul Ziots: Thank you, Pinjalim. Next question, please.
Operator: We’ll go next now to Wamsi Mohan at Bank of America.
Wamsi Mohan: Hi. Yes. Thank you so much. Charlie, we’ve not really seen a large uptick in on-premise AI-driven workloads, but you mentioned sort of this large inference opportunity at enterprises. Any thoughts on when that can happen — do you see that in calendar ’24 or ’25 from a materiality perspective? And if I could, subscription ARR has been decelerating over the last four quarters. How should we think about the growth trajectory here? Thank you.
Charlie Giancarlo: It’s an interesting question. I would say that we do see opportunities on-prem for AI in highly specialized environments. So I think that is a real thing; of course, many of those customers are waiting for delivery of GPU and AI-based processing systems and environments. And I would say that a lot of the focus currently has been on the compute side of it, with a little less focus by customers on the storage infrastructure, because they’ve been so focused on compute. I believe that’s just starting to become a better-known and understood requirement for these AI systems. But I would disagree, Wamsi — I’d say that we are starting to see interest, if not yet deployments, on-prem.
Kevan Krysler: And Wamsi, let me touch on the subscription ARR growth. We are definitely pleased with what we saw in terms of subscription ARR growth, especially Evergreen//One, which is outperforming the already strong expectations that we had at the beginning of the year. As a reminder from my prepared remarks, closed Evergreen//One contracts where the effective service date has not yet started are excluded from the ARR calculation. Our subscription ARR growth would have been 28% had we included those contracts. And look, as a result of product revenue being lower for our CapEx sales, we do have less attach of our Evergreen subscription, which is also reflected in our subscription ARR growth rate.
Paul Ziots: Thank you, Wamsi. And just a kind reminder to everyone to please ask one question consisting of one part; we’d be happy to have you ask another question later on in the queue. Next question, please.
Operator: Next now to Krish Sankar at TD Cowen.
Krish Sankar: Yeah. Hi. Thanks for taking my question. Charlie, I had a question on AI too. From a storage standpoint, which do you think benefits the most for AI workloads: block storage or object storage? And also, can you help us clarify what products in your portfolio today support InfiniBand, and how should we think about that in the future?
Charlie Giancarlo: Absolutely. I would say that a lot of the AI environments use block because it’s very straightforward, especially with programmers, and block can be utilized for any structure that you want underneath. But increasingly, it’s moving to an object-based environment, because over time object is easier: it’s more efficient, and it requires less state than what’s generally necessary in a block environment. So I think we’re starting to see that shift, but it’s been taking a lot of time to get to very high-performance object. Rob, do you want to add?