I can say the same regarding our prepaids. Our prepaid agreements are designed to make sure that we have the reserve capacity we need at several of our manufacturing suppliers as we look forward. So I wouldn't read anything into the numbers being approximately the same, as we are increasing our supply. They just have different lengths, as we have sometimes had to buy things with long lead times or things that needed capacity to be built for us.
Operator: Your next question comes from the line of Ben Reitzes from Melius Research. Your line is open.
Ben Reitzes: Yeah. Thanks. Congratulations on the results. Colette, I wanted to talk about your comment regarding gross margins and that they should go back to the mid-70s, if you don't mind unpacking that. Also, is that due to the HBM content in the new products, and what do you think are the drivers of that comment? Thanks so much.
Colette Kress: Yeah. Thanks for the question. We highlighted in our opening remarks our Q4 results and our outlook for Q1. Both of those quarters are unique in their gross margin, as they include some benefit from favorable component costs in the supply chain, across both our compute and networking products and in several different stages of our manufacturing process. Looking forward, we have visibility into a mid-70s gross margin for the rest of the fiscal year, taking us back to where we were before this Q4 and Q1 peak. So we're really looking at a balance of our mix. Mix is always going to be the largest driver of what we will be shipping for the rest of the year, and those are really the drivers.
Operator: Your next question comes from the line of C.J. Muse from Cantor Fitzgerald. Your line is open.
C.J. Muse: Yeah. Good afternoon, and thank you for taking the question. Bigger picture question for you, Jen-Hsun. When you think about the million-x improvement in GPU compute over the last decade and expectations for similar improvements in the next, how do your customers think about the long-term usability of their NVIDIA investments that they’re making today? Do today’s training clusters become tomorrow’s inference clusters? How do you see this playing out? Thank you.
Jensen Huang: Hey, CJ. Thanks for the question. Yeah, that's the really cool part. The reason we're able to improve performance so much is that we have two characteristics about our platform. One is that it's accelerated, and two, it's programmable. It's not brittle. NVIDIA is the only architecture that has gone from the very beginning, literally from when Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton first revealed AlexNet and CNNs, all the way through RNNs, LSTMs, reinforcement learning, deep reinforcement learning, and transformers. Every single version and every species of AI that has come along, vision transformers, multi-modality transformers, and now time-sequence models, every single variation, we've been able to support it, optimize our stack for it, and deploy it into our installed base.
This is really the amazing part. On the one hand, we can invent new architectures and new technologies, like our Tensor cores, like our transformer engine for Tensor cores, and improved numerical formats and structures of processing, as we've done with the different generations of Tensor cores, while supporting the installed base at the same time. And so, as a result, all of our new software algorithm inventions, and all of the industry's new model inventions, run on our installed base. On the other hand, whenever we see something revolutionary, like transformers, we can create something brand new, like the Hopper transformer engine, and implement it into future generations. And so we simultaneously have the ability to bring software to the installed base and keep making it better and better, so our customers' installed base is enriched over time with our new software.
On the other hand, for new technologies, we create revolutionary capabilities. Don't be surprised if, in a future generation, all of a sudden amazing breakthroughs in large language models are made possible. And those breakthroughs, some of which will be in software because they run CUDA, will be made available to the installed base. And so we carry everybody with us on the one hand, and we make giant breakthroughs on the other.
Operator: Your next question comes from the line of Aaron Rakers from Wells Fargo. Your line is open.
Aaron Rakers: Yeah. Thanks for taking the question. I wanted to ask about the China business. I know that in your prepared comments you said that you started shipping some alternative solutions into China. You also pointed out that you expect that contribution to continue to be about a mid-single-digit percent of your total data center business. So I guess the question is, what is the extent of the products that you're shipping today into the China market, and why shouldn't we expect other alternative solutions to come to market and expand your breadth to participate in that opportunity again? Thank you.
Jensen Huang: At the core, remember that the U.S. government wants to limit the latest capabilities of NVIDIA's accelerated computing and AI to the Chinese market, and at the same time the U.S. government would like to see us be as successful in China as possible. Within those two constraints, within those two pillars if you will, are the restrictions. And so we had to pause when the new restrictions came out. We immediately paused so that we understood what the restrictions are, and we reconfigured our products in a way that is not software hackable in any way. That took some time. So we reset our product offering to China, and now we're sampling to customers in China. And we're going to do our best to compete in that marketplace and succeed in that marketplace within the specifications of the restrictions.