We’re very disciplined about it. We make architecture compatibility, if you will, job one. And that has conveyed to the world the certainty of our platform stability. NVIDIA’s platform stability certainty is the reason why everybody builds on us first and the reason why everybody optimizes on us first. All the engineering and all the work that you do, all the invention of technologies that you build on top of NVIDIA, accrues to and benefits everybody that uses our GPUs. And we have such a large installed base: millions and millions of GPUs in the cloud, 100 million GPUs in people’s PCs, just about every workstation in the world, and they are all architecturally compatible. And so, if you are an inference platform and you’re deploying an inference application, you are basically an application provider.
And as a software application provider, you’re looking for a large installed base. Data processing: before you can train a model, you have to curate the data, you have to dedupe the data, maybe you have to augment the data with synthetic data. So, process the data, clean the data, align the data, normalize the data. All of that data is measured not in bytes or megabytes; it’s measured in terabytes and petabytes. And the amount of data processing, the data engineering, that you do before training is quite significant. It could represent 30%, 40%, 50% of the amount of work that you ultimately do in creating a data-driven machine learning service. And so data processing is just a massive part. We accelerate Spark, we accelerate Python.
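(For a concrete sense of what that GPU-accelerated curation pass can look like, here is a minimal sketch using RAPIDS cuDF; the parquet file names and the "text" column are hypothetical placeholders, not anything described on the call.)

```python
# Minimal sketch of a GPU-accelerated data-curation pass with RAPIDS cuDF.
# The parquet paths and the "text" column are hypothetical placeholders.
import cudf

df = cudf.read_parquet("raw_corpus.parquet")      # load raw records onto the GPU

df = df.drop_duplicates(subset="text")            # dedupe
df["text"] = df["text"].str.strip().str.lower()   # clean / normalize
df = df[df["text"].str.len() > 0]                 # drop empty rows

df.to_parquet("curated_corpus.parquet")           # hand off to training
```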
One of the coolest things that we just did is called cuDF Pandas. Without one line of code change, Pandas, which is the single most successful data science framework in the world, is now accelerated by NVIDIA CUDA. Just out of the box, without a line of code, the acceleration is really quite terrific, and people are just incredibly excited about it. And Pandas was designed for one purpose and one purpose only: data processing, for data science. And so NVIDIA CUDA gives you all of that.
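(To illustrate the zero-code-change claim: cuDF ships a pandas accelerator mode that you enable before importing pandas, after which unmodified pandas code runs on the GPU where supported and falls back to CPU pandas otherwise. A minimal sketch; the CSV file and column names are hypothetical.)

```python
# Enable cuDF's pandas accelerator mode before importing pandas.
# The pandas code below is unchanged; supported operations run on the GPU,
# and anything unsupported transparently falls back to CPU pandas.
import cudf.pandas
cudf.pandas.install()

import pandas as pd

df = pd.read_csv("sales.csv")                      # hypothetical dataset
summary = df.groupby("region")["revenue"].sum()    # ordinary pandas call
print(summary)
```

(In a Jupyter notebook the same effect comes from `%load_ext cudf.pandas`, or from running a script as `python -m cudf.pandas script.py`.)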
Operator: Your final question comes from the line of Harlan Sur of J.P. Morgan. Your line is open.
Harlan Sur: Good afternoon. Thanks for taking my question. If you look at the history of the tech industry, the companies that have been successful have always been focused on ecosystem: silicon, hardware, software, strong partnerships and, just as importantly, an aggressive cadence of new products and more segmentation over time. The team recently announced a more aggressive new product cadence in data center, from every two years to now every year, with higher levels of segmentation: training, inferencing, CPU, GPU, DPU, networking. How do we think about your R&D OpEx growth outlook to support a more aggressive and expanding forward roadmap, but more importantly, what is the team doing to manage and drive execution through all of this complexity?
Jensen Huang: Gosh. Boy, that’s just really excellent. You just wrote NVIDIA’s business plan, and you described our strategy. First of all, there is a fundamental reason why we accelerate our execution, and the reason is that it fundamentally drives down cost. The combination of TensorRT-LLM and H200 reduced the cost of large model inference for our customers by a factor of four. That includes, of course, our speeds and feeds, but mostly it’s because of our software; the software benefits so much because of the architecture. And so we want to accelerate our roadmap for that reason. The second reason is to expand the reach of generative AI.
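(For context on the software side of that cost reduction, TensorRT-LLM exposes a high-level Python API for building and running optimized inference engines. The sketch below is an illustration under assumptions: the model name and prompt are placeholders, and the exact API surface varies across TensorRT-LLM releases.)

```python
# Minimal sketch of optimized large-model inference with TensorRT-LLM's
# high-level Python API. The model name and prompt are placeholders, and
# the API surface may differ across TensorRT-LLM releases.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-2-7b-hf")        # builds an optimized engine

params = SamplingParams(max_tokens=64, temperature=0.8)
outputs = llm.generate(["What is accelerated computing?"], params)

for out in outputs:
    print(out.outputs[0].text)
```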
The world’s number of data center configurations is kind of the amazing thing: NVIDIA is in every cloud, but not one cloud is the same. NVIDIA is working with every single cloud service provider, and not one of them has the same networking, control plane, or security posture. Everybody’s platform is different, and yet we’re integrated into all of their stacks and all of their data centers, and we work incredibly well with all of them. And not to mention, we then take the whole thing and create AI factories that are standalone. We take our platform and we can put it into supercomputers, we can put it into enterprise. Bringing generative AI to enterprise is something nobody’s ever done before, and we’re right now in the process of going to market with all of that. And so the complexity includes, of course, all the technologies, all the segments, and the pace.