NVIDIA Corporation (NASDAQ:NVDA) Q4 2023 Earnings Call Transcript February 22, 2023
Operator: Good afternoon. My name is Emma, and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA’s Fourth Quarter Earnings Call. Thank you. Simona Jankowski, you may begin your conference.
Simona Jankowski: Thank you. Good afternoon, everyone, and welcome to NVIDIA’s conference call for the fourth quarter of fiscal 2023. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer. I’d like to remind you that our call is being webcast live on NVIDIA’s Investor Relations website. The webcast will be available for replay until the conference call to discuss the financial results for the first quarter of fiscal 2024. The content of today’s call is NVIDIA’s property. It can’t be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations.
These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today’s earnings release, our most recent Forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, February 22, 2023, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.
With that, let me turn the call over to Colette.
Colette Kress: Thank you, Simona. Q4 revenue was $6.05 billion, up 2% sequentially and down 21% year-on-year. Full year revenue was $27 billion, flat from the prior year. Starting with data center. Revenue of $3.62 billion was down 6% sequentially and up 11% year-on-year. Fiscal year revenue was $15 billion, up 41%. Hyperscale customer revenue posted strong sequential growth, though short of our expectations as some cloud service providers paused at the end of the year to recalibrate their build plans. Though we generally see tightening that reflects overall macroeconomic uncertainty, we believe this is a timing issue, as end-market demand for GPUs and AI infrastructure is strong. Networking grew, but a bit less than we expected on softer demand for general-purpose CPU infrastructure.
The total data center sequential revenue decline was driven by lower sales in China, which was largely in line with our expectations, reflecting COVID and other domestic issues. With cloud adoption continuing to grow, we are serving an expanding list of fast-growing cloud service providers, including Oracle and GPU specialized CSPs. Revenue growth from CSP customers last year significantly outpaced that of Data Center as a whole as more enterprise customers moved to a cloud-first approach. On a trailing 4-quarter basis, CSP customers drove about 40% of our Data Center revenue. Adoption of our new flagship H100 data center GPU is strong. In just the second quarter of its ramp, H100 revenue was already much higher than that of A100, which declined sequentially.
This is a testament to the exceptional performance of the H100, which is as much as 9x faster than the A100 for training and up to 30x faster for inference of transformer-based large language models. The transformer engine of H100 arrived just in time to serve the development and scale-out of inference of large language models. AI adoption is at an inflection point. OpenAI’s ChatGPT has captured interest worldwide, allowing people to experience AI firsthand and showing what’s possible with generative AI. These new types of neural network models can improve productivity in a wide range of tasks, whether generating text like marketing copy, summarizing documents, creating images for ads or video games, or answering customer questions. Generative AI applications will help almost every industry do more, faster.
Generative large language models with over 100 billion parameters are the most advanced neural networks in today’s world. NVIDIA’s expertise spans the AI supercomputers, algorithms, data processing and training methods that can bring these capabilities to enterprises. We look forward to helping customers with generative AI opportunities. In addition to working with every major hyperscale cloud provider, we are engaged with many consumer Internet companies, enterprises and start-ups. The opportunity is significant and is driving strong growth in data center that we expect to accelerate through the year. During the quarter, we made notable announcements in the financial services sector, one of our largest industry verticals. We announced a partnership with Deutsche Bank to accelerate the use of AI and machine learning in financial services.
Together, we are developing a range of applications, including virtual customer service agents, speech AI, fraud detection and bank process automation, leveraging NVIDIA’s full computing stack, both on-premise and in the cloud, including NVIDIA AI Enterprise software. We also announced that NVIDIA captured leading results for AI inference in a key financial services industry benchmark for applications such as asset price discovery. In networking, we see growing demand for our latest-generation InfiniBand and HPC-optimized Ethernet platforms, fueled by AI. Generative AI foundation model sizes continue to grow at exponential rates, driving the need for high-performance networking to scale out multi-node accelerated workloads. Delivering unmatched performance, latency and in-network computing capabilities, InfiniBand is the clear choice for power-efficient, cloud-scale generative AI.
For smaller-scale deployments, NVIDIA is bringing its full accelerated stack expertise and integrating it with the world’s most advanced high-performance Ethernet fabrics. In the quarter, InfiniBand led our growth as our Quantum-2 400 gigabit per second platform is off to a great start, driven by demand across cloud, enterprise and supercomputing customers. In Ethernet, our 400 gigabit per second Spectrum-4 networking platform is gaining momentum as customers transition to higher speeds, next-generation adapters and switches. We remain focused on expanding our software and services. We released version 3.0 of NVIDIA AI Enterprise with support for more than 50 NVIDIA AI frameworks and pretrained models, and new workflows for contact center intelligent virtual assistants, audio transcription and cybersecurity.
Upcoming offerings include our NeMo and BioNeMo large language model services, which are currently in early access with customers. Now to Jensen to talk a bit more about our software and cloud business.
Jensen Huang: Thanks, Colette. The accumulation of technology breakthroughs has brought AI to an inflection point. Generative AI’s versatility and capability have triggered a sense of urgency at enterprises around the world to develop and deploy AI strategies. Yet the AI supercomputer infrastructure, model algorithms, data processing and training techniques remain an insurmountable obstacle for most. Today, I want to share with you the next level of our business model to help put AI within reach of every enterprise customer. We are partnering with major cloud service providers to offer NVIDIA AI cloud services, offered directly by NVIDIA and through our network of go-to-market partners, and hosted within the world’s largest clouds.
NVIDIA AI as a service offers enterprises easy access to the world’s most advanced AI platform, while remaining close to the storage, networking, security and cloud services offered by the world’s most advanced clouds. Customers can engage NVIDIA AI cloud services at the AI supercomputer, acceleration library software or pretrained AI model layers. NVIDIA DGX is an AI supercomputer, and the blueprint of AI factories being built around the world. AI supercomputers are hard and time-consuming to build. Today, we are announcing the NVIDIA DGX Cloud, the fastest and easiest way to have your own DGX AI supercomputer: just open your browser. NVIDIA DGX Cloud is already available through Oracle Cloud Infrastructure, with Microsoft Azure, Google GCP and others on the way.
At the AI platform software layer, customers can access NVIDIA AI Enterprise for training and deploying large language models or other AI workloads. And at the pretrained generative AI model layer, we will be offering NeMo and BioNeMo, customizable AI models, to enterprise customers who want to build proprietary generative AI models and services for their businesses. With our new business model, customers can engage NVIDIA’s full scale of AI computing from their private cloud to any public cloud. We will share more details about NVIDIA AI cloud services at our upcoming GTC, so be sure to tune in. Now let me turn it back to Colette on gaming.
Colette Kress: Thanks, Jensen. Gaming revenue of $1.83 billion was up 16% sequentially and down 46% from a year ago. Fiscal year revenue of $9.07 billion was down 27%. Sequential growth was driven by the strong reception of our 40 Series GeForce RTX GPUs based on the Ada Lovelace architecture. The year-on-year decline reflects the impact of the channel inventory correction, which is largely behind us. Demand in the seasonally strong fourth quarter was solid in most regions. While China was somewhat impacted by disruptions related to COVID, we are encouraged by the early signs of recovery in that market. Gamers are responding enthusiastically to the new RTX 4090, 4080 and 4070 Ti desktop GPUs, with many retail and online outlets quickly selling out of stock.
The flagship RTX 4090 has quickly shot up in popularity on Steam to claim the top spot for the Ada architecture, reflecting gamers’ desire for high-performance graphics. Earlier this month, the first phase of gaming laptops based on the Ada architecture reached retail shelves, delivering NVIDIA’s largest-ever generational leap in performance and power efficiency. For the first time, we are bringing enthusiast-class GPU performance to laptops as slim as 14 inches, a fast-growing segment previously limited to basic tasks and apps. In another first, we are bringing the 90-class GPUs, our highest-performing models, to laptops, thanks to the power efficiency of our fifth-generation Max-Q technology. All in, RTX 40 Series GPUs will power a wide lineup of gaming and creator laptops, setting up for a great back-to-school season.
There are now over 400 games and applications supporting NVIDIA’s RTX technology for real-time ray tracing and AI-powered graphics. The Ada architecture features DLSS 3, our third-generation AI-powered graphics, which massively boosts performance. Cyberpunk 2077, one of the most advanced games, recently added DLSS 3, enabling a 3 to 4x boost in frame-rate performance at 4K resolution. Our GeForce NOW cloud gaming service continued to expand in multiple dimensions: users, titles and performance. It now has more than 25 million members in over 100 countries. Last month, it enabled RTX 4080 graphics horsepower in the new high-performance Ultimate membership tier. Ultimate members can stream at up to 240 frames per second from the cloud with full ray tracing and DLSS 3.
And just yesterday, we made an important announcement with Microsoft. We agreed to a 10-year partnership to bring to GeForce NOW Microsoft’s lineup of Xbox PC games, which includes blockbusters like Minecraft, Halo and Flight Simulator. And upon the close of Microsoft’s Activision acquisition, it will add titles like Call of Duty and Overwatch. Moving to Pro Visualization. Revenue of $226 million was up 13% sequentially and down 65% from a year ago. Fiscal year revenue of $1.54 billion was down 27%. Sequential growth was driven by desktop workstations, with strength in the automotive and manufacturing industrial verticals. The year-on-year decline reflects the impact of the channel inventory correction, which we expect to end in the first half of the year.
Interest in NVIDIA’s Omniverse continues to build, with almost 300,000 downloads so far and 185 connectors to third-party design applications. The latest release of Omniverse has a number of features and enhancements, including support for 4K, real-time path tracing, Omniverse Search for AI-powered search through large untagged 3D databases, and Omniverse cloud containers for AWS. Let’s move to automotive. Revenue was a record $294 million, up 17% sequentially and up 135% from a year ago. Sequential growth was driven primarily by AI automotive solutions. New program ramps at both electric vehicle and traditional OEM customers helped drive this growth. Fiscal year revenue of $903 million was up 60%. At CES, we announced a strategic partnership with Foxconn to develop automated and autonomous vehicle platforms.
This partnership will provide scale for volume manufacturing to meet growing demand for the NVIDIA Drive platform. Foxconn will use NVIDIA Drive Hyperion compute and sensor architecture for its electric vehicles. Foxconn will be a Tier 1 manufacturer producing electronic control units based on NVIDIA Drive Orin for the global market. We also reached an important milestone this quarter. The NVIDIA Drive operating system received safety certification from TÜV SÜD, one of the most experienced and rigorous assessment bodies in the automotive industry. With industry-leading performance and functional safety, our platform meets the higher standards required for autonomous transportation. Moving to the rest of the P&L. GAAP gross margin was 63.3%, and non-GAAP gross margin was 66.1%.
Fiscal year GAAP gross margin was 56.9%, and non-GAAP gross margin was 59.2%. Year-on-year, Q4 GAAP operating expenses were up 21% and non-GAAP operating expenses were up 23%, primarily due to higher compensation and data center infrastructure expenses. Sequentially, GAAP operating expenses were flat, and non-GAAP operating expenses were down 1%. We plan to keep them relatively flat at this level over the coming quarters. Full year GAAP operating expenses were up 50%, and non-GAAP operating expenses were up 31%. We returned $1.15 billion to shareholders in the form of share repurchases and cash dividends. At the end of Q4, we had approximately $7 billion remaining under our share repurchase authorization through December 2023. Let me turn to the outlook for the first quarter of fiscal ’24.
We expect sequential growth to be driven by each of our 4 major market platforms, led by strong growth in data center and gaming. Revenue is expected to be $6.5 billion, plus or minus 2%. GAAP and non-GAAP gross margins are expected to be 64.1% and 66.5%, respectively, plus or minus 50 basis points. GAAP operating expenses are expected to be approximately $2.53 billion. Non-GAAP operating expenses are expected to be approximately $1.78 billion. GAAP and non-GAAP other income and expenses are expected to be an income of approximately $50 million, excluding gains and losses from non-affiliated investments. GAAP and non-GAAP tax rates are expected to be 13%, plus or minus 1%, excluding any discrete items. Capital expenditures are expected to be approximately $350 million to $400 million for the first quarter and in the range of $1.1 billion to $1.3 billion for the full fiscal year 2024.
Further financial details are included in the CFO commentary and other information available on our IR website. In closing, let me highlight upcoming events for the financial community. We will be attending the Morgan Stanley Technology Conference on March 6 in San Francisco and the Cowen Healthcare Conference on March 7 in Boston. We will also host GTC virtually with Jensen’s keynote kicking off on March 21. Our earnings call to discuss the results of our first quarter of fiscal year ’24 is scheduled for Wednesday, May 24. Now we will open up the call for questions. Operator, would you please poll for questions?
Q&A Session
Operator: Your first question comes from the line of Aaron Rakers with Wells Fargo.
Aaron Rakers: Clearly, on this call, a key focal point is going to be the monetization effect of your software and cloud strategy. I think as we look at it, I think, straight up, the enterprise AI software suite, I think, is priced at around $6,000 per CPU socket. I think you’ve got pricing metrics a little bit higher for the cloud consumption model. I’m just curious, Colette, how do we start to think about that monetization contribution to the company’s business model over the next couple of quarters relative to, I think, in the past, you’ve talked like a couple of hundred million or so? Just curious if you can unpack that a little bit.
Colette Kress: So I’ll start and turn it over to Jensen to talk more, because I believe this will be a great topic and discussion also at our GTC. Regarding our plans in terms of software, we continue to see growth. Even in our Q4 results, we made quite good progress in both working with our partners, onboarding more partners and increasing our software. You are correct. We’ve talked about our software revenues being in the hundreds of millions. And we’re getting even stronger each day, as Q4 was probably a record level in terms of our software. But there’s more to unpack there, and I’m going to turn it to Jensen.
Jensen Huang: Yes. First of all, taking a step back, NVIDIA AI is essentially the operating system of AI systems today. It starts from data processing to learning, training, to validations, to inference. And so this body of software is completely accelerated. It runs in every cloud. It runs on-prem. And it supports every framework, every model that we know of, and it’s accelerated everywhere. By using NVIDIA AI, your entire machine learning operation is more efficient, and it is more cost effective. You save money by using accelerated software. Our announcement today of putting NVIDIA’s infrastructure in, and having it hosted from within, the world’s leading cloud service providers accelerates the enterprise’s ability to utilize NVIDIA AI Enterprise.
It accelerates people’s adoption of this machine learning pipeline, which is not for the faint of heart. It is a very extensive body of software. It is not deployed in enterprises broadly, but we believe that by hosting everything in the cloud, from the infrastructure through the operating system software, all the way through pretrained models, we can accelerate the adoption of generative AI in enterprises. And so we’re excited about this new extended part of our business model. We really believe that it will accelerate the adoption of software.
Operator: Your next question comes from the line of Vivek Arya with Bank of America.
Vivek Arya: Just wanted to clarify, Colette, if you meant data center could grow on a year-on-year basis also in Q1? And then, Jensen, my main question relates to 2 small related ones. The computing intensity for generative AI, if it is very high, does it limit the market size to just a handful of hyperscalers? And on the other extreme, if the market gets very large, then doesn’t it attract more competition for NVIDIA from cloud ASICs or other accelerator options that are out there in the market?
Colette Kress: Thanks for the question. First, talking about our data center guidance that we provided for Q1: we do expect sequential growth in terms of our data center, strong sequential growth. And we are also expecting growth year-over-year for our data center. We actually expect a great year, with our year-over-year growth in data center probably accelerating past Q1.