Antonio Neri : Sure, I can start. I will say, listen, I think if you look at our server segment that we just published, we delivered very strong performance. I mean, we are in the target range we committed to a while back of 11% to 13%. And the fact that we’re bringing these together gives us the flexibility and opportunity to maximize the blended margin here as we go forward. To give a reference, when you sell an EX system, generally it’s a liquid-cooled system that tends to gravitate to the supercomputing side or large AI clusters of thousands of GPUs. But an EX system today supports up to 80,000 GPUs in one system, and that’s because of our interconnect fabric, HPE Slingshot. In fact, some of those systems have 80,000 GPUs and maybe 40,000 CPUs in one system.
But then you have other customers that are maybe at 2,000 or 4,000 GPUs. And depending on which location they pick, if they need liquid cooling, we deploy those. Now generally speaking, the XD platform, the Cray XD platform, is the one that has the density and is more oriented to the ability to mix many different configurations, and that’s where the vast majority of the action is today in AI. And ProLiant Gen11 actually is more used for inferencing, or in some areas for fine-tuning as well. So we have the flexibility to meet all those demands with our unique IP. And on top of that, we actually layer our machine learning development environment. In fact, there are customers that come to us just for the MLDE environment. Later on, we pull the server. Now on AUPs, I will tell you that when you sell an XD, the AUP can be 20 times the value of a traditional server with CPUs. And in EX, it can be up to 35 times.
And so as we go forward, the ability to optimize margin through the configs and attach the services, whether it’s our data center services plus the software and the Operational Services allows us to really drive the best outcome for our shareholders.
Shannon Cross: Thank you, Tim. Gary, we’ll take one final question.
Operator: And that final question will come from Lou Miscioscia from Daiwa Capital Markets. Please go ahead.
Louis Miscioscia : Hey, thank you for taking my question. Antonio, I guess the question I have is, since you’re talking a lot about data centers, I’m wondering whether what’s going to happen is that the vast majority of x86 applications are going to start to shift over to really be accelerated with GPUs due to the concern of Moore’s Law coming to an end? And what I’m asking about is not really inference and it’s not training. These are just normal applications, sort of like the same way architectures shifted from IBM mainframes or PA-RISC years and years ago to x86 and eventually to cloud. Do you think that that’s going to shift over to running on GPUs?
Antonio Neri : Well, thank you for the question. I think we need to understand there are two worlds that will coexist. There is the cloud-native world. Think about the cloud-native world, where you have thousands and thousands of applications running on thousands and thousands of servers and they share everything. That architecture will exist for a long, long time because it’s cost efficient. And the realization is that those applications were designed for that type of environment, moving from the traditional monolithic approach to a more cloud-centric approach. And then you have these AI applications, where you may have one application, only one, running on thousands and thousands of servers with accelerated compute. And it’s a little bit far-fetched to say everything is going to move there.
I argue that you will have inferencing solutions where a CPU will be just fine. Think about your phone, right? The phone will have, at some point, the ability to manage a large language model of, let’s say, 20 billion or 30 billion parameters, or the PC, maybe in the 80 billion to 100 billion parameters. But when you go higher than that, obviously, you potentially need a server at the edge. And what I’ll call an 8-way GPU will be the right way to go. So I argue there will be a mix in the transition here for a long period of time. Not everything will go to a GPU. It also depends on how these large language models and all the AI applications get constructed. Now you made another interesting point which I want to make sure all of you remember.
We, as a company, now have two public instances of AI, powered with renewable energy, where we are supporting some of these customers, including a hyperscaler. And going forward, enterprise customers, because they don’t have the space and the cooling and the understanding of how to run these systems at scale. That’s a unique differentiation Hewlett Packard Enterprise has, in addition to building systems and shipping them. And I think that’s an opportunity for us because that will drive stickiness to our HPE GreenLake platform, which obviously will drive recurring revenues but also better attach of software and services down the road. And Juniper will play a huge role in that environment.
Shannon Cross: Thank you, Lou. Let me now turn it back to Antonio for concluding remarks.
Antonio Neri : Well, thank you, Shannon. And thank you, everyone. I know you have been covering multiple calls today. I know it’s late on the East Coast, but I will leave you with a few comments. Number one, we have the right strategy and the right team at the right time. This quarter, obviously, was a little bit mixed because of the revenue. But remember, a lot of revenue also went through the ARR, so we need to understand that implication going forward. I’m very confident about the future. And the moves we have made and continue to make, including the Juniper acquisition, will allow us to participate in this inflection point with unique IP. Everybody, obviously, is focused on this AI momentum and the server side, but you need more than servers.
AI will drive the need for more ports. That means you need more networking bandwidth. That’s for sure. Also, let’s not forget, we need to do this responsibly. One of the things I’m really proud about at our company is the commitment to social responsibility, doing all of this while addressing the sustainability and ethical challenges and the responsibility around AI. But listen, it just came out two weeks ago that HPE was ranked number one in the JUST Capital ranking, something we are proud of. And I know shareholder value, all of that; we have to take some actions here. We are really focused on strong execution and discipline, something we have shown now for six-plus years. And that’s why I’m confident in the adjusted guidance we provided with Marie.
And as we get into ‘25, obviously with the pending acquisition, I feel HPE will be in an even stronger position as we get through 2024. So thank you for your time, and I hope to connect with you soon.
Operator: Ladies and gentlemen, this concludes our call for today. Thank you. You may now disconnect.