Matthew Ramsay: Jensen, I wanted to ask a couple of questions on DGX Cloud. And I guess we’re all talking about the drivers of the services and the compute that you’re going to host on top of these services with the different hyperscalers. But I think we’ve been kind of watching and wondering when your data center business might transition to more of a systems-level business, meaning pairing InfiniBand with your Hopper product and your Grace product and selling things more at the systems level. I wonder if you could step back and talk about how, over the next 2 or 3 years, the mix of business in your data center segment evolves from maybe selling cards to systems and software? And what can that mean for the margins of that business over time?
Jensen Huang: Yes, I appreciate the question. First of all, as you know, our Data Center business is a GPU business only in the context of a conceptual GPU, because what we actually sell to the cloud service providers is a board, a fairly large computing board of 8 Hoppers or 8 Amperes connected through NVLink switches. And so this board represents essentially 1 GPU. It’s 8 chips connected together into 1 GPU with a very high-speed chip-to-chip interconnect. And so we’ve been working on, if you will, multi-die computers for quite some time. And that is 1 GPU. So when we think about a GPU, we actually think about an HGX GPU, and that’s 8 GPUs. We’re going to continue to do that. And the thing that the cloud service providers are really excited about is hosting our infrastructure for NVIDIA to offer, because we have so many companies that we work directly with.
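To make the topology Huang describes concrete, the sketch below is a minimal illustration, not anything shown on the call. It uses the standard CUDA runtime calls cudaDeviceCanAccessPeer and cudaDeviceEnablePeerAccess to enable direct peer-to-peer memory access between every pair of GPUs on a node; on an HGX board, where the 8 GPUs are linked through NVLink switches, this is the kind of mechanism that lets software address the whole board much like the single conceptual GPU described above. The 8-GPU count comes from the passage; everything else is illustrative.

```cuda
// Minimal sketch (assumption, not NVIDIA's code): enable peer-to-peer
// access between every pair of GPUs on a node, so NVLink-connected GPUs
// such as the 8 on an HGX board can read each other's memory directly.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);          // e.g. 8 on an HGX board
    printf("Found %d GPUs\n", count);

    for (int i = 0; i < count; ++i) {
        cudaSetDevice(i);                // make GPU i the current device
        for (int j = 0; j < count; ++j) {
            if (i == j) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, i, j);
            if (canAccess) {
                // After this call, kernels running on GPU i can
                // dereference pointers allocated on GPU j; on NVLink
                // parts this traffic takes the direct chip-to-chip path.
                cudaDeviceEnablePeerAccess(j, 0);
                printf("GPU %d -> GPU %d: peer access enabled\n", i, j);
            }
        }
    }
    return 0;
}
```

Compiled with nvcc and run on a multi-GPU node, this prints which peer paths were enabled; over NVLink those paths are the high-speed chip-to-chip interconnect mentioned above rather than PCIe.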
We’re working directly with 10,000 AI start-ups around the world and with enterprises in every industry. And all of those companies would really love to be able to deploy into the cloud at least, or into the cloud and on-prem, and oftentimes multi-cloud. And so by having NVIDIA DGX and NVIDIA’s full-stack infrastructure in their cloud, we’re effectively attracting customers to the CSPs. This is a very, very exciting model for them, and they welcomed us with open arms. We’re going to be the best AI salespeople for the world’s clouds. And for the customers, they now have instant access to the most advanced infrastructure. They have a team of people who are extremely good, from the infrastructure to the acceleration software, the NVIDIA AI open operating system, all the way up to AI models.
Within 1 entity, they have access to expertise across that entire span. And so this is a great model for customers, it’s a great model for CSPs, and it’s a great model for us. It lets us really run like the wind. As much as we will continue to advance DGX AI supercomputers, it does take time to build AI supercomputers on-prem, and it’s hard no matter how you look at it. And so now we have the ability to really prefetch a lot of that and get customers up and running as fast as possible.
Operator: Your next question comes from the line of Timothy Arcuri with UBS.