Ananda Baruah: Hey, good afternoon, guys. Really appreciate it. Antonio, I would love to get any context you can provide going back to the AI and large language model conversation. Is there any useful way for us to think about the difference in required resources that you're seeing for those applications relative to typical high-performance compute applications? And for those AI-type projects, are you also seeing any impact on storage attach, and is there any networking attach impact as well? Would just love context on those rates. Thanks a lot.
Antonio Neri: No, thank you. Well, we have been in the AI business now for many years, right? So — and we have been specifically in the AI-at-scale business. We have several key differentiations in that business, right? Number one is what you refer to as networking; I call it interconnect fabric. The ability to connect 40,000 GPUs at scale requires a unique, differentiated fabric. That's what the Frontier system is all about. And as I think about the next generation of this, we can easily double to 80,000 GPUs because our software and our silicon scale to those levels. And so that's a unique value proposition that you don't get in the traditional commoditized cloud environment. The other key differentiation we have is the programming environment we acquired through the Cray acquisition, because when you develop these AI models, you have to deploy and manage them at scale to take advantage of the massive set of capabilities.
That also is a unique software value proposition that's very hard to duplicate. And then last but not least, to be able to leverage all these wonderful capabilities, you have to be able to prepare the data. And the data pipeline requires a lot of work upfront because the data has to be clean and compliant and all of that. And that's why our acquisitions — Determined AI and Pachyderm in particular — now allow us to automate that data pipeline. But we are not stopping there. We continue to move up and build what I call the platform as a service for developers, so they can take advantage of this automation for the data, train the models and then deploy the models. And if they need supercomputing-type capabilities, we will be there for them. So that's why I said early on, we are at a unique point in time, where an inflection in the market intersects a unique set of capabilities, which we intend to fully capitalize on, top to bottom.
Not just at the hardware level, but all the way to the software level. And you will hear more about that as we come to the next months and quarters. And I'm really excited about that opportunity because we already have customers coming to us saying they need that, and they are generally enterprise customers that deploy these large language models but don't want to spend hundreds of millions of dollars — they want to use it as-a-Service. Okay. Well, thank you, everyone. I always appreciate you making the time to talk to us. I know today was an incredibly busy day with all the earnings being posted. But let me remind you of a couple of things. First of all, today's results are not a coincidence. They are a combination of many things we have done over a longer period of time.