Unidentified Analyst: Wonder if you could give some more color about the inventory charges you took in the quarter and then internal inventory in general. In the documentation, you talked about that being a portion of inventory on hand plus some purchase obligations. And you also spoke in your prepared remarks that some of this was due to China data centers. So if you can clarify what was in those charges. And then in general, for your internal inventory, does that still need to be worked down? And what are the implications if that needs to be worked down over the next couple of quarters?
Colette Kress: Thanks for the question, Chris. So, as we highlighted in our prepared remarks, we booked an entry of $702 million for inventory reserves within the quarter. Most of that, nearly all of it, is related to our data center business, due to the change in expected demand looking forward for China. When we look at the data center products, a good portion of this was the A100, which we wrote down. Now, looking at the inventory we have on hand and the inventory that has increased, a lot of that is due to our upcoming architectures coming to market: our Ada architecture, our Hopper architecture, and even more in terms of our networking business. We have been building for those architectures to come to market.
We are always looking at our inventory levels at the end of each quarter against our expected demand going forward. But I think we've done a solid job, at least in this quarter, based on that expectation going forward.
Operator: Your next question comes from the line of Timothy Arcuri with UBS.
Timothy Arcuri: Colette, I have a two-part question. First, is there any effect of stockpiling in the data center guidance? I ask because you now have the A800, which is a modified version of the A100 with a lower data transfer rate. So, one could imagine that customers might be stocking that while they can still get it. And the second part relates to the inventory charge; can you just go into that a little bit more? Because last quarter, it made sense that you took a charge because revenue was less than you thought, but this quarter revenue came in pretty much in line, and it sounded like China was a net neutral. So, is the charge related to just working A100 inventory down faster? Is that what the charge is related to?
Colette Kress: Sure. So, let me talk about the first part of your question. In most of our data center business, we're working with customers specifically on their needs to build out accelerated computing and AI. It's just not a business where units are being held; they're usually for very, very specific products and projects that we see. So, I'm going to answer no, nothing that we can see. On your second question regarding the inventory provisions: at the end of last quarter, we were beginning to see softness in China. We've always been looking at our needs long term; it's not a statement about inventory in the current quarter. It usually takes two or three quarters for us to build product for future demand.
So, that's always the case with the inventory that we are ordering. Now, looking at what we've seen in terms of continued lockdowns and continued economic challenges in China, it was time for us to take a hard look at what we think we'll need for data center going forward, and that led to our write-downs.
Operator: Your next question comes from the line of Stacy Rasgon with Bernstein.
Stacy Rasgon: Colette, I had a question on the commentary you gave on the sequentials. It sounded like data center maybe had some China softness issues. You said gaming resumed sequential growth. But then you said sequential growth for the company is driven by auto, gaming and data center. How can all three of those grow sequentially if the overall guidance is kind of flattish? Are they all just growing a little bit, or is one of them actually down? How do we think about the segments into Q4 given that commentary?
Colette Kress: Yes. Thanks, Stacy. So, your question is regarding the sequentials from Q3 to the guidance that we provided for Q4. As we see the numbers in terms of our guidance, you're correct, it is only growing about $100 million. And we've indicated that three of those platforms will likely grow just a little bit. But our pro visualization business we think is going to be flattish and likely not growing, as we're still working on correcting the channel inventory levels to get to the right amount. It's very difficult to say which will have that increase. But again, we are planning for all three of those market platforms to grow just a little bit.
Operator: Your next question comes from the line of Mark Lipacis with Jefferies.
Mark Lipacis: Jensen, I think for you, you’ve articulated a vision for the data center where a solution with an integrated solution set of a CPU, GPU and DPU is deployed for all workloads or most workloads, I think. Could you just give us a sense of — or talk about where is this vision in the penetration cycle? And maybe talk about Grace — Grace’s importance for realizing that vision, what will Grace deliver versus an off-the-shelf x86, do you have a sense of where Grace will get embraced first or the fastest within that vision? Thank you.
Jensen Huang: Thanks, Mark. Grace's data-moving capability is off the charts. Grace is also memory coherent with our GPU, which allows our GPU to expand its effective fast GPU memory by a factor of 10. That's not possible without special capabilities that are designed between Hopper and Grace and the architecture of Grace. And so, Grace is designed for very large data processing at very high speeds. Those applications include, for example, data processing for recommender systems, which operate on petabytes of live data at a time. It's all hot. It all needs to be fast, so that you can make a recommendation within milliseconds to hundreds of millions of people using your service. It is also quite effective at AI training and machine learning. And so, those kinds of applications are really terrific. I think I've said before that we will have production samples of Grace in Q1, and we're still on track to do that.
Operator: Your next question comes from the line of Harlan Sur with JPMorgan.
Harlan Sur: Your data center networking business, I believe, is driving about $800 million per quarter in sales, with very, very strong growth over the past few years. Near term, as you guys pointed out, the team is driving strong NIC and BlueField attach to your own compute solutions like DGX, and more partner announcements like VMware. But we also know that networking has pretty large exposure to general-purpose cloud and hyperscale compute spending trends. So, what's the visibility and growth outlook for the networking business over the next few quarters?
Jensen Huang: Yes, if I could take that. First, thanks for your question. Our networking, as you know, is heavily indexed to high-performance computing. We don't serve the vast majority of commodity networking. All of our networking solutions are very high end, and they're designed for data centers that move a lot of data. Now, if you have a hyperscale data center these days and you are deploying a large number of AI applications, it is very likely that the network bandwidth you provision has a substantial implication on the overall throughput of your data center. So, the small incremental investment they make in high-performance networking translates to billions of dollars of savings in provisioning the service, frankly, or billions of dollars more throughput, which improves their economics.
And so, these days, with disaggregated and AI provisioning in data centers, high-performance networking is really quite fantastic, and it pays for itself right away. That's where we are focused: high-performance networking for provisioning AI services and the AI applications that we focus on. You might have noticed that NVIDIA and Microsoft are building one of the largest AI infrastructures in the world, and it is completely powered by NVIDIA's 400 gigabits per second InfiniBand network. And the reason for that is that the network pays for itself instantaneously. The investment that you're going to put into the infrastructure is so significant that if you were to be dragged down by slow networks, obviously, the efficiency of the overall infrastructure would not be as high.
And so, in the places where we focus, networking is really quite important. It goes all the way back to when we first announced the acquisition of Mellanox. I think at the time, they were doing a few hundred million dollars a quarter, about $400 million a quarter. And now, we're practically coming up on doing in a quarter what they used to do in a year. And so, that kind of tells you about the growth of high-performance networking. It is indexed to overall enterprise and data center spend, but it is highly indexed to AI adoption.
Operator: Your next question comes from the line of Aaron Rakers with Wells Fargo.