Hewlett Packard Enterprise Company (NYSE:HPE) Q4 2023 Earnings Call Transcript

And that’s why HPE went bold on that front last June to basically make the announcement that we’re going to offer supercomputing as a public cloud instance, so customers can use it as a virtual private cloud. So we feel very good about that. GreenLake continued to be very strong. Just to put it in context, we added $1 billion in TCV quarter-over-quarter. We added 2,000 customers, and ARR, obviously, is a function of the deferred revenue that we materialize over time, but what customers really love about our experience is that it’s hybrid by design. They can consume anything from edge to cloud through HPE GreenLake, where they pay CapEx or OpEx, it doesn’t really matter in the end. But they really love the experience. And that’s why we’re building the AI components into the same platform.

So those two are very strong. Edge, obviously, had tremendous momentum. I think we’re going to have the typical adjustments, but that’s why we spend a lot of energy and time on adding more capabilities to the edge platform: security, for one, private 5G, and data center networking, which adds to the momentum, understanding there will be potentially some digestion in the campus and branch. But as Jeremy said, we are very well covered for the first half of 2024. So we have to see that. And then compute, right, is a typical business that goes through this cyclicality. So last year, obviously, we had a huge amount of orders, and we converted those orders faster than people expected. And in the back half of this year, we saw sequential demand improvements in units and stable AUPs. And now we start seeing upticks in the mix with AI inferencing, which uses these accelerators.

But Q3 demand was higher than Q2, and Q4 was higher than Q3. So I think it’s fair to say we have stabilized, and we are improving. I would not call it yet a recovery. And on storage, I believe we’re going to see some improvements over time because of AI demand, which requires file and object, and we have a great portfolio with HPE Alletra, and we intend to capitalize on that. But for three consecutive quarters now, we have seen stability and improvement. And in Q4, we saw revenue improvements on a sequential basis. So customers are prioritizing the spend where it makes sense, but ultimately we have a portfolio that can meet their needs wherever they are, and HPE GreenLake is the way we deliver all of this, which ultimately, for shareholders, drives higher margins and higher reported revenues and profit.

Sorry?

Jeff Kvaal: Cannibalization.

Antonio Neri: Cannibalization. Sorry, Jeff, remind me, cannibalization. We have no evidence of that yet. I think that will become clear when traditional, CPU-driven compute returns to some normal levels. But remember, not every customer has deployed cloud across their enterprise. There is still quite a bit of journey to go. And there are clearly customers assessing what is the best place to deploy that, whether it’s in a public cloud or whether it’s repatriating on-prem because of the cost or because of data. I think AI is a huge driver of repatriation in my mind because if you have data distributed across multiple states, it’s very hard to really train and fine-tune the models when you have data everywhere. And our focus there is really providing them an automated data pipeline with our unified analytics platform. So fundamentally, it’s early to say. But so far, in the traditional compute business, we have not seen evidence of cannibalization at this point in time.

Jeff Kvaal: Samik, thank you, Gary.

Operator: The next question is from Simon Leopold with Raymond James. Please go ahead.

Simon Leopold: Thanks for taking the question. I wanted to see if you could talk a bit about the trends you’re seeing in compute for the non-accelerated platforms. And really, the thing I’m trying to tease out here is sort of this issue of AI projects pulling budget or sucking oxygen out of the room versus organizations buying up compute platforms to prepare for AI inferencing and embracing AI as an inferencing element, not just training.

Antonio Neri: Yes. Thanks, Simon. Again, maybe I will elaborate a little more on the comments I made before. So we saw Q4-over-Q3 and Q3-over-Q2 improvement in demand in units. And a lot of that was CPU-driven. Although there is a small base of AI-accelerated units, APUs if you will, where we saw an increase in Q4. But I will say the unit growth in Q4 was not driven by the APUs; it was driven by a combination of CPUs, the vast majority, and some APUs, because the base is still very, very small. So definitely, customers are preparing for that. Again, they are all assessing what is the best place to deploy this model. That’s why I do believe the inferencing side will accelerate over time, whether that is doing some pre-training or POCs or really deploying in production.

And I think many customers also will accelerate deployments of tuning solutions on-prem because of the data aspect I talked about before. No question, they are still digesting what they bought last time on the CPU side of the house. But again, we saw some improvements in demand sequentially in units. And then let’s remind ourselves that we, and the industry, are also going through the transition to Sapphire Rapids. And ultimately, we call that the Gen 11 platform. That became now, what, Jeremy? 25% of the mix, which…

Jeremy Cox: 53% of orders in Q4.