There are millions of CUDA experts around the world; the software is all accelerated, the tools are all accelerated. And then very importantly, they like the reach. They like the fact that they can reach so many users after they develop the software. And it is the reason why we just keep attracting new applications. And then finally, this is a very important point. Remember, the rate of CPU computing advance has slowed tremendously. Whereas in the first 30 years of my career we saw 10x in performance at about the same power roughly every 5 years, that rate of continued advance has slowed. At a time when people still have really, really urgent applications that they would like to bring to the world, they can’t afford to do that with power consumption continuing to go up.
Everybody needs to be sustainable. You can’t continue to consume ever more power. By accelerating a workload, we can decrease the amount of power it uses. And so this multitude of reasons is really driving people to use accelerated computing, and we keep discovering new exciting applications.
Operator: Your next question comes from the line of Atif Malik with Citi.
Atif Malik: Colette, I have a question on data center. You saw some weakness in build plans in the January quarter, but you’re guiding to year-over-year acceleration in April and through the year. So if you can just rank order for us the confidence in the acceleration: is that based on your H100 ramp, or generative AI sales coming through, or the new AI services model? And also, if you can talk about what you’re seeing in the enterprise vertical.
Colette Kress: Sure. Thanks for the question. When we think about our growth, yes, we’re going to grow sequentially in Q1 and do expect year-over-year growth in Q1 as well. It will likely accelerate from there going forward. So what do we see as the drivers of that? Yes, we have multiple product cycles coming to market. We have H100 in market now. We are continuing with new launches as well that pair our GPU computing with our networking. And then we have Grace coming likely in the second half of the year. Additionally, generative AI has definitely sparked interest among our customers, whether those be CSPs, whether those be enterprises, whether those be start-ups. We expect that to be a part of our revenue growth this year.
And then lastly, let’s just not forget that, given the end of Moore’s Law, there’s an era here of focusing on AI, focusing on accelerated computing. So as the economy improves, this is probably very important to the enterprises, and it can be fueled by the existence of cloud-first options for the enterprises. I’m going to turn it to Jensen to see if he has any additional things he’d like to add.
Jensen Huang: No, you did great. That was great.
Operator: Your last question today comes from the line of Joseph Moore with Morgan Stanley.
Joseph Moore: Jensen, you talked about the roughly 1 million times improvement in your ability to train these models over the last decade. Can you give us some insight into what that looks like over the next few years, to the extent that some of your customers with these large language models are talking about 100x the complexity over that kind of time frame? I know Hopper has 6x better transformer performance. But what can you do to scale that up? And how much of that just reflects that it’s going to be a much larger hardware expense down the road?