Cadence Design Systems, Inc. (NASDAQ:CDNS) Q3 2023 Earnings Call Transcript October 23, 2023

Cadence Design Systems, Inc. beats earnings expectations. Reported EPS is $1.26, expectations were $1.21.

Operator: Good afternoon. My name is Bo, and I will be your conference operator today. At this time, I would like to welcome everyone to the Cadence Third Quarter 2023 Earnings Conference Call. All lines have been placed on mute to prevent any background noise. After the speakers’ prepared remarks, there will be a question-and-answer session. [Operator Instructions] Thank you. I will now turn the call over to Richard Gu, Vice President of Investor Relations for Cadence. Please go ahead, sir.

Richard Gu: Thank you, operator. I would like to welcome everyone to our third quarter of 2023 earnings conference call. I’m joined today by Anirudh Devgan, President and Chief Executive Officer; and John Wall, Senior Vice President and Chief Financial Officer. The webcast of this call and a copy of today’s prepared remarks will be available on our website at cadence.com. Today’s discussion will contain forward-looking statements, including our outlook on future business and operating results. Due to risks and uncertainties, actual results may differ materially from those projected or implied in today’s discussion. For information on factors that could cause actual results to differ, please refer to our SEC filings, including our most recent Forms 10-K and 10-Q, CFO commentary and today’s earnings release.

All forward-looking statements during this call are based on estimates and information available to us as of today, and we disclaim any obligation to update them. In addition, we will present certain non-GAAP measures, which should not be considered in isolation from, or as a substitute for, GAAP results. Reconciliations of GAAP to non-GAAP measures are included in today’s earnings release. For the Q&A session today, we’d ask that you observe a limit of one question and one follow-up. Now, I will turn the call over to Anirudh.

Anirudh Devgan: Thank you, Richard. Good afternoon, everyone, and thank you for joining us today. I’m pleased to report that Cadence delivered strong results for the third quarter of 2023. We exceeded our Q3 guidance on all key metrics and are raising our outlook for 2023. John will provide more details on our financials shortly. Notwithstanding the macro uncertainties, design activity remains strong, driven by transformative generational trends such as AI, hyperscale computing, 5G and autonomous driving. Growing hyperconvergence between electrical and mechanical domains, systems and semis, and hardware and software, is driving the need for tightly integrated co-design and analysis solutions. Additionally, trends such as a growing number of 3D-IC and chiplet designs, and system companies building custom silicon, are accelerating.

In this rapidly evolving design landscape, the relevance of AI-driven design automation cannot be overstated, as it’s enabling customers to accelerate their pace of innovation, while enabling them to meet their targets more efficiently. Over the past few years, we focused initially on incorporating powerful AI algorithms in our core engines, and then built our Generative AI solutions on top of our software platforms. We are seeing growing momentum for our comprehensive JedAI Generative AI platform, with an increasing number of customers adopting these solutions, and achieving exceptional quality of results and productivity gains. While still in the early stages, sales of our GenAI solutions have nearly tripled in the last year. Our solutions are enabling marquee AI infrastructure platform companies to deliver their next-generation compute, networking, and memory products.

Last quarter, we referenced our successes with NVIDIA and Tesla. This quarter, we’re pleased to announce that Broadcom has accelerated the adoption of Cadence Cerebrus across multiple business units, achieving impressive quality of results. In Q3, we pioneered applying GenAI’s LLM capabilities to chip design, successfully collaborating with Renesas on accelerating the path from functional specification to final design. This is a key step in demonstrating the potential of LLMs to automate the translation of natural-language specifications into final chip design and verification tasks, thereby boosting quality and efficiency. We also renewed and deepened collaborations with some large semi and systems customers in the 5G, AI, hyperscale and connectivity areas.

For instance, we strengthened our long-standing partnership with a global marquee systems company through a significant expansion of our EDA software, hardware, design IP and system solutions. As the digital transformation in aerospace and defense accelerates, we continued our momentum by enhancing our core EDA and systems footprint with several customers, including at two market-shaping companies. Now, let me share some of the business highlights, starting with Digital IC. With 11 new wins, our digital full flow, which delivers industry-leading quality of results at the most advanced nodes, continued proliferating with market-shaping customers. We are very pleased with the accelerating momentum of our flagship Cadence Cerebrus GenAI solution, whose transformative results have led to its deployment at all of our top 10 digital customers and in about 300 tapeouts to date.

Imagination Technologies used Cadence Cerebrus and our digital full flow on its latest 5 nanometer GPU design in the cloud, to achieve a 20% reduction in leakage power. Next, I will talk about our Functional Verification business, which had another strong quarter with 18% year-over-year revenue growth. Ever-growing system design complexity coupled with the need for first time right silicon, continued to drive strong demand for our Palladium Z2 and Protium X2 hardware platforms that provide industry-leading system verification and software bring-up capabilities. Our hardware business had a record Q3 with close to half of the hardware orders including both platforms. Highlights for the quarter included a major dynamic duo expansion with a top AI and automotive chip supplier, and a significant deal with a market shaping datacenter chip company.

Our flagship Custom IC business continued to pave the way in analog innovation, delivering 15% year-over-year revenue growth. We’re pleased with the reception of our AI-driven Virtuoso Studio solution, as several marquee customers are adopting it for their N2 and N3 designs and it has had close to a thousand downloads since its launch six months ago. Nisshinbo Micro Devices utilized the Virtuoso Studio Custom IC design platform to gain a 30% reduction in turnaround time for routing analog blocks. In Q3, we continued investing in our IP business and closed the acquisition of the Rambus PHY IP assets. Customer reception to the addition of their HBM and GDDR IP to our Star Design IP portfolio has been overwhelmingly positive. Design IP had a record bookings quarter with strong AI and chiplet design activity, especially in the mobile, automotive and hyperscaler verticals.

In addition, we launched our Tensilica Neo NPU IP and NeuroWeave software tools to accelerate on-device and edge AI performance. Our System Design and Analysis business that is driving our expansion beyond EDA continued to deliver strong growth, increasing revenue by 20% year-over-year. On the PCB front, Allegro X AI has several successful engagements with market-shaping customers underway, and we announced OrCAD X, our next-generation AI-driven PCB Design solution, enabled by Cadence OnCloud, and targeting small and medium businesses. Our Fidelity CFD platform continued its strong momentum with customers in automotive, aerospace and defense and industrial verticals. In summary, I’m pleased with our team’s continued innovation and execution.

We’re well positioned to benefit from the tremendous opportunities ahead, as we help customers design their differentiated products with improved quality of results, productivity and shorter time to market. I did want to take a moment to comment on the unfolding conflict in the Middle East. The ongoing violence and the loss of innocent lives is truly heartbreaking and a matter of global concern. The wellbeing of our employees and their families in the region is of utmost importance to us and we’ll continue doing everything we can to support them. Our thoughts are with everyone who has family, friends and loved ones there and we are helping out by providing humanitarian aid through the Cadence Giving Foundation. John will now go through the Q3 results and present our Q4 and updated 2023 outlook.

John Wall: Thanks, Anirudh, and good afternoon, everyone. I am pleased to report that Cadence delivered another strong quarter of top- and bottom-line results in Q3. All businesses contributed to revenue growth, and we completed more hardware installations in Q3 than we originally assumed. Here are some of the financial highlights from the third quarter, starting with the P&L. Total revenue was $1.023 billion. GAAP operating margin was 28.6%, and non-GAAP operating margin was 41.1%. GAAP EPS was $0.93, and non-GAAP EPS was $1.26. Next, turning to the balance sheet and cash flow. Cash balance at quarter-end was $962 million, while the principal value of debt outstanding was $650 million. Operating cash flow was $396 million. And we used $125 million to repurchase Cadence shares in Q3.

Before I provide our updated outlook, I’d like to highlight that our outlook contains the usual assumption that export control regulations that exist today remain substantially similar for the remainder of the year. Our updated outlook for fiscal 2023 is, revenue in the range of $4.06 billion to $4.1 billion. GAAP operating margin in the range of 30.5% to 31%. Non-GAAP operating margin in the range of 41.5% to 42%. GAAP EPS in the range of $3.48 to $3.54. Non-GAAP EPS in the range of $5.07 to $5.13. Operating cash flow in the range of $1.3 billion to $1.4 billion, and we expect to use at least 50% of our annual free cash flow to repurchase Cadence shares. As a result, for Q4, we expect revenue in the range of $1.039 billion to $1.079 billion.

GAAP operating margin of approximately 31%. Non-GAAP operating margin of approximately 42%. GAAP EPS in the range of $0.85 to $0.91. Non-GAAP EPS in the range of $1.30 to $1.36, and we expect to use approximately $125 million of cash to repurchase Cadence shares. As usual, we’ve published a CFO Commentary document on our Investor Relations website, which includes our outlook for additional items, as well as further analysis and GAAP to Non-GAAP reconciliations. In summary, we are on track to deliver a strong 2023. I am pleased with our team’s continued execution of our Intelligent System Design strategy. With our updated outlook for 2023, at the midpoint, we now expect: revenue growth of approximately 15%; non-GAAP operating margin of approximately 41.75%, a seventh consecutive year of greater than 50% incremental operating margin; and non-GAAP EPS of $5.10, a sixth consecutive year of high-teen or better non-GAAP EPS growth.
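
As a quick sanity check on the figures above, here is a minimal Python sketch that recomputes the midpoints John cites from the guidance ranges stated on the call. The only inputs are the quoted ranges, so it is purely illustrative.

```python
# Recompute the midpoints implied by the fiscal 2023 and Q4 guidance ranges
# quoted on the call. The output should match the "approximately 41.75%"
# operating margin and "$5.10" non-GAAP EPS figures cited in the remarks.

def midpoint(low, high):
    """Midpoint of a guidance range."""
    return (low + high) / 2

fy_revenue_mid = midpoint(4.06, 4.10)        # $ billions -> 4.08
fy_non_gaap_opm_mid = midpoint(41.5, 42.0)   # percent    -> 41.75
fy_non_gaap_eps_mid = midpoint(5.07, 5.13)   # dollars    -> 5.10
q4_revenue_mid = midpoint(1.039, 1.079)      # $ billions -> 1.059

print(f"FY2023 revenue midpoint:        ${fy_revenue_mid:.2f}B")
print(f"FY2023 non-GAAP op margin mid:  {fy_non_gaap_opm_mid:.2f}%")
print(f"FY2023 non-GAAP EPS midpoint:   ${fy_non_gaap_eps_mid:.2f}")
print(f"Q4 revenue guidance midpoint:   ${q4_revenue_mid:.3f}B")
```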

As always, I’d like to close by thanking our customers, partners, and our employees for their continued support. And with that, operator, we will now take questions.

Operator: Thank you. [Operator Instructions] And your first question comes from Charles Shi at Needham.

Q&A Session

Charles Shi: Hey, good afternoon. Thanks for taking my question. I want to ask a little bit about the backlog. It looks like your backlog was up compared with last quarter, which implies very good bookings for the September quarter. Do you expect the backlog to continue growing into year-end? You talked about second-half booking strength, so I want to see where it goes from here. Thank you.

John Wall: Yeah, Charles, thanks for the question, and thanks for remembering what we said last quarter. Yeah, we expect a very strong second half for bookings, and Q4 is exceptionally strong. So, our expectations are for a very, very strong bookings quarter in Q4.

Charles Shi: Thanks. Maybe one quick follow-up. You raised your full-year revenue outlook by a little less than your Q3 revenue beat, which implies that the full-year outlook you provided one quarter ago was largely accurate, but that there was some timing shift in revenue pulling in from Q4 to Q3. Was that related to your comment about the timing of hardware installations? Thank you.

John Wall: That’s right, Charles. In hindsight, I was a little too prudent in the Q3 guide with respect to hardware installations that were scheduled in China around the end of September. If you recall, in our guide we assumed that those installations would fall into Q4. In actual fact, we completed those hardware installations, and the second half looks stronger than we thought this time last quarter. But even with all of that hardware, we beat our expectations in Q3, and Q4 is higher than we thought.

Charles Shi: Thank you.

Operator: Thank you. We go next now to Gianmarco Conti at Deutsche Bank.

Gianmarco Conti: Yeah. Hi, thank you for taking my questions. I guess my first one would be: when do you expect to give out more AI KPIs, whether on contract value uplift or penetration rates? Or just any color you can provide on how we can quantify the AI tailwind in your numbers, and whether we are going to see this coming through in bookings at some point? Thank you.

Anirudh Devgan: Yeah. Hi, it’s Anirudh. Let me take that. Like we mentioned in the prepared remarks, we are seeing a lot of activity in AI, and that’s across multiple customers and multiple verticals. So, whether it’s system companies designing their own chips, or, of course, the semiconductor companies designing them, or, as we mentioned this time, for example, Broadcom, which helps other companies design them, we are participating in the AI design process in all three ways. And last time we talked about NVIDIA and Tesla. Then, on top of that, there is applying AI to our own products, and we have these additional generative AI products on top of our base products that also drive revenue. So, the first part, which is the build-out of the AI infrastructure, whether it’s with large semi companies like NVIDIA, large system companies like Tesla, or companies like Broadcom, is a big part of our business.

We don’t break that out specifically because it’s sometimes difficult to figure out exactly what part of a customer’s business is AI or not, and we don’t want to be in the position of trying to guess what the customers are using it for. But AI is a significant portion of design activity, and that buildout is going to happen for years. Now, there’s a second part of our business in which we are selling AI products ourselves, like Cerebrus and Verisium and our JedAI platform, which has five main products. In that segment, where our own software products and IP products are AI enabled, we did comment that even though it’s early in the process, our revenue from our own AI products has almost tripled from a year ago. So, we are very pleased with that progress.

So, I just want to highlight that, and also say there’s another part of AI, which is the buildout of infrastructure, which is more difficult to predict.

John Wall: And Gian, I would just add to that: when Anirudh calls out that the revenue from those products has almost tripled in the space of 12 months, we’re not reclassifying any revenue. This is direct revenue attributable to those five products that we have in our JedAI platform.

Gianmarco Conti: Right. That’s really good. Thank you. I just have a follow-up, perhaps talking a little bit about China. Could you comment on whether there is any impact from the entity list and the new rules coming into place? And also, do you have any visibility into Chinese customers, to understand whether they’re designing at more mature versus advanced nodes? I feel like there’s been a lot of conversation around whether there is a way to track whether EDA tools are being used in China for mature versus advanced nodes. Any color on this would be great. Thank you.

Anirudh Devgan: Yes, that’s a good question, because there were a lot of recent reports on some of the changes in regulation. For us, there’s not that much difference. Most of the regulations were targeted at certain chip companies or manufacturing companies. As you know, we are in the design process, so the latest round of regulations doesn’t have a big effect on Cadence’s business. Now, there are some companies added to the entity list, so we monitor that carefully. But since we are so diversified geographically and in terms of customers, that’s not a significant impact either. And all the guidance that we just gave includes the impact of all the regulations that were announced recently. Of course, we carefully follow all U.S. regulations, but the latest change is not that material to our business.

Gianmarco Conti: Okay, thank you.

Operator: Thank you. We’ll go next now to Harlan Sur at JPMorgan.

Harlan Sur: Good afternoon. Thanks for taking my question. Macro conditions in the semiconductor industry are still fairly muted, right? We’re close to a cyclical bottom, but recovery seems more gradual than expected across many different end markets, right? Accelerated computing and AI are strong; auto, industrial, and the enterprise and service provider markets are still relatively soft. So, across some of the metrics that you track, renewals, hardware buys, IP take rates, is the team seeing any signs of hesitation or pushouts across your different customers or different businesses?

Anirudh Devgan: Yeah, Harlan, that’s a good question. Like we mentioned last time, we still see a lot of strong design activity, and compared to three months ago, I would say the activity is similar. Like you mentioned, some segments are going through tough times and some segments, like accelerated compute and AI, have a lot of growth. But overall, as you know, these products that our customers are designing take several years to develop, and we are part of the R&D cycle. So, what we see is customers still investing in R&D and building their products for the future, and we are glad to partner with them. So, I would say that, largely, the environment is similar to what it was three months ago.

John Wall: Yeah, absolutely. And on the hardware side, we’re producing hardware as fast as we have been all year. You can see in the 10-Q that we filed today that the value of finished goods in our inventory was less than $10 million at the end of Q3. So, demand is still really strong, and we’re producing the hardware as quickly as we can. We’re also expecting a very strong Q4 for our IP group; they’re delivering a number of silicon solutions to our customers in Q4, and I think that sets up a really strong quarter for that group, which we have been expecting all year.

Harlan Sur: Yeah. No, I appreciate the comments there. One of your large AI SoC customers recently laid out their future road maps, right? And given the complexity of all these next-generation AI compute workloads right, they’re actually accelerating their chip road maps, so new GPU chip every year versus every two years, which was their prior cadence. And then on top of that, they’re starting to segment their product lines, right? So, not only accelerating road maps but more chips per product family. I’ve got to believe that other competitors in this space are doing exactly the same thing. Are you guys seeing the step-up in design activity? Obviously, much higher productivity is required. So, how is this all being sort of reflected in the business momentum and your visibility?

Anirudh Devgan: Yeah, good point. Like you said earlier, the macro environment is challenging; some of the segments are weaker, some are stronger. But design activity is very strong, and I would say the two very, very strong verticals in terms of design activity, for the future of the semi and system business, are data center and AI, and then automotive, given the electrification and the massive transformation that’s happening. You know this anyway, Harlan, but if you look at the next three, four years, these two segments, AI-driven data centers and automotive, will grow significantly. And because they are growing, first of all, the cadence of those end-customer products is increasing.

And also, they need to be more and more efficient, given that design activity and complexity are going up. So, there is more design activity and also more use of AI to accelerate and be more productive. We are even using AI internally to be more productive ourselves. So this is definitely true for these two big verticals, and it’s a multi-year trend. And you mentioned some of the large customers we are very fortunate to work with. We always say we want to win with the winners, and we always focus on the leading companies in the data center and AI space, and now also in the automotive space. So that activity is strong, and I expect it to continue.

Harlan Sur: Yeah. Well, thank you.

Operator: Thank you. We go next now to Gary Mobley at Wells Fargo.

Gary Mobley: Hey, guys, thanks for taking my question. John, your upfront license revenue year-to-date has averaged around 17%. I think typically, it’s 15%. Given where you’re at in the verification hardware product cycles, Z2 and X2 and the conversion of the backlog there, how do you see that upfront revenue trending, looking into next year? And related to that, how would you see the influence on overall growth next year?

John Wall: Yeah, great question, Gary. I mean, we’re always watching that carefully. As you know, last year, the upfront piece ticked up to 15%. This year, I think in the 10-Q, if you look on a rolling four-quarter basis through the end of Q3, it’s at 16% now, but your point is right that it’s probably closer to 17% for the first three quarters. I think that’s a reflection of the strength of hardware. The ratable and recurring part of the business is 84% of trailing 12-month revenue, and if you look at our guide at $4.08 billion, we’re assuming essentially about a 13% growth rate on the recurring revenue line for the year, which is consistent with the roughly 13% three-year CAGR as well. Of course, we’re not guiding next year.

Gary Mobley: Understood. All right. I suspect we’re not going to get any more AI metrics out of you, Anirudh, but maybe you can just give us a sense of where we’re at in the commercialization of the five different AI tools? Have those started working their way into baseline license renewals, or are they still on a per-design basis? And maybe give us a sense of when you expect them to cut into baseline licensing activity?

Anirudh Devgan: Yeah, Gary, that’s a good point. We are watching that carefully, of course. As you know, JedAI and these five major platforms are new products that our customers engage with us on, and they run on top of our existing leading platforms. So, it depends on the customer. I would say we are still in the early stages of adoption of these AI products because, as you know, any of these new software tools takes years to fully deploy, right? The same thing happened in digital, and with any major platform release we do. So, even though we are like two years into it, I think it will still take some time to fully deploy these products. And what we have said in the past is that, at least in my experience with digital about seven, eight years ago, it took about two contract cycles for customers to fully deploy, okay?

So we’re probably two, three years into it, with still three, four years to go, which is a good thing in my mind because this is the natural progression of deployment. Now, it depends on the customer. Some customers are adopting them in a much bigger way; especially, like in the previous discussion, the new AI designers and hyperscalers, where there is an increased cadence of design activity, are adopting them maybe a little faster than some of the other verticals. Some are still trying them on a few designs or in a few groups. But we have seen some pretty broad deployment, and that helps our overall engagement with that particular customer. So that’s what I would say, Gary: I think it’s still early, but the good thing, as we mentioned in the prepared remarks, is that all top customers are now fully engaged and some of the results are truly remarkable.

Actually, I was talking to one major customer recently, and they are getting like 8% to 10% power improvement from Cerebrus, okay? We have mentioned several of these in the past also. That’s a huge improvement; sometimes it’s roughly equivalent to a node migration. Typically, when you go from one node to another, you may get like a 10% to 15% PPA improvement, and you’re getting close to that, or roughly two-thirds of it, from better AI tools. So the value is there. And that’s what we are focused on: making sure the products really provide value and then working with customers at the pace of deployment they want to see, because it’s a natural process to try some and then deploy. But some of them are doing it much faster, like I mentioned.
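
As a small illustrative check on that comparison, the ratio of the quoted ranges’ midpoints is consistent with the “close to that, or roughly two-thirds” characterization; the arithmetic below uses only the figures stated above.

```python
# Compare the quoted 8-10% power gain from Cerebrus with the quoted 10-15%
# PPA gain from a typical node migration, using the midpoint of each range.
# Purely illustrative arithmetic on the figures stated in the call.

cerebrus_gain = (0.08 + 0.10) / 2        # midpoint of the 8-10% range
node_migration_gain = (0.10 + 0.15) / 2  # midpoint of the 10-15% range

ratio = cerebrus_gain / node_migration_gain
print(f"Cerebrus gain as a share of a node migration: {ratio:.0%}")  # ~72%
```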

Gary Mobley: Thank you, both.

Operator: Thank you. We go next now to Jay Vleeschhouwer at Griffin Securities.

Jay Vleeschhouwer: Thank you. Good evening. For my first question, I’d like to ask a variant of the EDA market environment question. So, on the one hand, what are you seeing in terms of unscheduled new business, that is to say, intra-contract new or expansion business that could be construed positively? On the other hand, how concerned are you about the evident deceleration of semi R&D growth? It’s still reasonably good, better than four or five years ago, but so much lower than it’s been. A lot of that can be attributable to Intel. But still, how are you thinking about those two different dynamics? And then I’ll ask a follow-up.

Anirudh Devgan: Yeah, hi, Jay. Good question. We are watching it carefully. Like we said earlier, design activity is still strong, but of course, the macro environment is challenging. It’s natural that even though customers realize they need to invest in R&D for the future, if their revenue is impacted by the macro situation, those decisions become a lot more prudent. That’s just the natural business process. But in general, the large customers in the big segments are all investing in R&D, and design activity is still strong. We had a good Q3 in terms of bookings, like we mentioned, so we’ll see what happens in Q4, and that will also give us a better idea going forward.

John Wall: Jay, as you know, a lot of our customers come back and purchase add-ons during the course of their baseline contracts, and with the teams releasing significant new products from the different R&D groups, customers have an intent to come back and keep purchasing. When we launch AI tools, they don’t wait for the baseline renewal to come up or to expire to purchase them; they’ll purchase add-ons, they’ll purchase a few licenses, and then hopefully proliferate more on the contract renewal. And as you know, we have a lot of contracts that come up for renewal in Q4.

Jay Vleeschhouwer: Understood. For a follow-up, I’ll ask about some interesting Cadence management comments at last month’s Cadence Live event up in Boston. There was an interesting comment about the role of AI in “derisking schedules,” in addition to the design exploration use case. What’s interesting there is that, historically, schedule risk or completion risk has had more to do with the back end of the process, for example, physical verification. So, to the extent that more of that risk mitigation moves up earlier in the process, do you think there will be a spending-share shift within the totality of EDA spend, perhaps some from the back end toward the front end, where you play with a lot of your tools?

Anirudh Devgan: Hey, Jay, I would say that it should lead to more design activity if we are able to reduce risk in the design process. As you know, this is the history of EDA and the history of automation over the last 20, 30 years. I remember in the old days, in the ’90s, we would take like five years and 500 engineers to design some big chip. Now that takes six to 12 months and far fewer engineers, so it’s like 100 times more efficient than 25 years ago. And I think AI can provide the next level of improvement in productivity and risk mitigation; part of it is also risk mitigation. So I think it should lead to more design activity, especially by the system companies. Now, on the shift from front-end to back-end, I think the back-end is still a complicated process.

And so, even though some things can be pulled up front using AI or using hardware platforms, I still think the back-end design process requires a lot of work, so I would expect it to affect all of them. The other thing we are trying to do on the front-end, as you may have noticed, is really to incorporate LLMs, like in our partnership with Renesas, because a lot of the front-end process has been less formal. The back-end process, especially once we have RTL and then go to gates and GDS, is a very formal, very structured process. But the front-end of the process, especially verification and specification, has been less formalized, and I think AI and LLMs can help formalize that, which definitely, like you said, can minimize the risk.

But I think activity should still be strong in both front-end and back-end. And our goal anyway is to make the design easier, so more customers and more people can do them.

Jay Vleeschhouwer: Thank you, Anirudh.

Operator: Thank you. We go next now to Jason Celino at KeyBanc Capital Markets.

Jason Celino: Great. Thanks for taking my question. Maybe first for John, on the Q4 guide. Apologies for asking this again, but folks might be wondering tomorrow, why aren’t we seeing more upside to the guide for the fourth quarter? Where might there be some conservatism, or what might we be overlooking in terms of the setup?

John Wall: Yeah, Jason, your question probably emanates from the fact that we beat by $23 million in Q3 and raised by $10 million, but that was mainly due to a prudent guide for Q3 with respect to certain hardware installations. Overall, we’ve taken the year up by $10 million at the midpoint. But Q4 is expected to be a strong bookings quarter and a particularly strong quarter for our IP Silicon Solutions group; they’re going to have an excellent Q4, which we’ve been expecting all year. I think if there’s upside, it’ll probably come from that group.

Jason Celino: Okay. No, that’s fair. And then just my quick follow-up on backlog. I know you’ve got some weird comps because of the hardware stuff. But when might we see like year-over-year growth again? Or I guess if we stripped out the hardware-related backlog, I don’t know if there’s any way to share what type of growth you might be seeing?

John Wall: Yeah. Just to give you a bit of color on that: if you recall, at the end of last year, our backlog included about 28 weeks of lead time on hardware. We’re down to an eight-to-10-week range now on lead time for hardware, so of course, we’ve eaten into some backlog as a result of that. But I think we troughed out essentially in the middle of the year. We’re expecting the second half to be stronger for contract renewals because of the number of contracts expiring in the second half. In Q3, you saw backlog starting to tick back up again, and we’d expect it to tick up again in Q4 because we’re expecting a strong bookings quarter. The one I’d really look at is the annual value; the CRPO is the one I track, because I’m looking at the annual value.

And I think when you compare the annual value at the end of this year with the annual value of backlog at the end of last year, the thing to remember is the fact that there’ll be so much less hardware in it, I would expect, because we have the production capacity now to deliver on the hardware.

Jason Celino: Okay. Perfect. No, it’s super helpful. Thank you.

Operator: We go next now to Vivek Arya at Bank of America.

Vivek Arya: Thanks for taking my question. I appreciate it’s early for a ’24 outlook, but Anirudh, I was hoping that you could give us some color given that your model is 85% recurring. So just conceptually, what is the likelihood Cadence can maintain this kind of mid-teens growth rate? And what would make ’24 different or similar to ’23 from a growth perspective?

Anirudh Devgan: Yeah. Hi, Vivek. As before, in Q3 we don’t comment on the next year. We are diligent; we want to make sure we finish out the year, see what Q4 looks like, and then we’ll be glad to share our assessment of the full year on the February earnings call. That’s what we have done in the past, and that has worked out well, right? So…

Vivek Arya: Okay. On the IP side, John, I think you mentioned that you’re expecting a strong quarter for IP in Q4. I was wondering how much your two recent acquisitions would contribute to that? And just longer term, do you think IP as a category overgrows or undergrows EDA? And does that influence your growth prospects? So, both a near- and a longer-term question on the IP business.

John Wall: Let me take the first part of that, and then I’ll hand it over to Anirudh for the second part. I think in relation to the IP business, like I say, we’re expecting a strong Q4 for that group. I mean, if you look at the guide we’ve given for the year, essentially, we’re guiding to 14% to 15% revenue growth for the year, which means Q4 over Q4 is going to grow kind of between 15% and 20%. Now, largely that’s due to the strength of our IP business in Q4. Do you want to talk about the longer term?

Anirudh Devgan: Yeah, Vivek. As you know already, more customers are outsourcing their IP needs, and we have always participated in that, but we have always said we want to participate with a star IP portfolio so that it is more and more profitable. The profitability of our IP business has improved over the last few years, and overall we are happy with it. So, now we’re trying to see what other areas it can grow into while maintaining profitability. And the areas that are emerging strongly are chiplet-based design and 3D-IC, which are used for a lot of AI and hyperscaler applications. That’s also the reason we bought the PHY assets of Rambus, which is HBM and GDDR based IP.

So, I feel that our IP portfolio is now in the right areas, with use in automotive, hyperscaler and AI applications. Most of these markets are evolving toward chiplet-based and 3D-IC-based designs, which also require certain new IP like UCIe and other things. So, as a result, we are investing more in our IP business, as you saw, and we expect a strong Q4, and then we’ll see what happens in ’24.

John Wall: Just to clarify, the contribution from acquisitions is likely to be immaterial for this year. So, the strong Q4 that we’re expecting is really from organic business.

Vivek Arya: Thank you.

Operator: Thank you. We go next now to Ruben Roy at Stifel.

Ruben Roy: Thank you. Anirudh, I wanted to ask if you could maybe talk a little bit more in detail about the collaboration with Renesas and kind of incorporating Generative AI, LLM into chip design. I think you mentioned some expectations for quality improvement, efficiency improvement. I would think that longer term, you’d be thinking about productivity improvement as well. Are those milestones that you’re expecting to have answers about within the next year, two years? It sounds like this is sort of a longer-term collaboration and sort of testing going on today. Just wondering sort of what you’re thinking about timeframe in terms of incorporating some of these types of tools into chip design. Along with that, just the final part is, would you consider this a leading-edge design that Renesas is working on? Or if you could talk a little about the type of design, that’d be great. Thank you.

Anirudh Devgan: Yes, absolutely. We are very pleased with the collaboration with Renesas. If you follow them, they have a whole initiative around AI for their design process, and we are glad to be a very close partner with Renesas, as we are with other companies. We just wanted to highlight Renesas this time, and the collaboration is broad-based. I think they’re using almost all of our AI tools, whether it’s Cerebrus for digital or Verisium and other tools for verification. And we are also doing some new collaboration with them on LLMs, like we mentioned. The LLM collaboration is fairly broad-based; it can be applied to any kind of design, and Renesas has a range of designs all the way from advanced nodes to mainstream nodes.

You may know all this already, but of course the quality of results can be better and productivity can be better with AI, and there are other benefits that are also true for large global companies like Renesas. There are two that I would like to highlight, which came to the forefront with our partnership with Renesas. One is that all these large companies have geographically diverse teams, right? It’s not that the team is only in one location; typically, they are in multiple locations. The good thing with AI is that, in a lot of cases, it can do design better than a human can, but it also depends on the starting point, right? If you have a geographically diverse team, not all teams are super experts.

So, if the AI tool is as good as or better than your best team, then the reason to deploy it is that, just by the nature of human productivity, there is variation across the organization, and the results can be even greater in the teams that historically have not performed as well as you would like. The other thing is also true in terms of experience, and this will happen with AI in other industries as well, but it’s definitely happening in chip design. If you have three years’ experience doing chip design versus 20 years’ experience, with AI that gap is narrowing, so less experienced engineers can be almost as productive as more experienced engineers. So, apart from the productivity and quality-of-results benefits, it has this almost workforce-management benefit for a large organization like Renesas, because they have teams in multiple locations and also a wide experience range from young engineers to experienced engineers.

And we are seeing this in other companies as well. What is also interesting is that the companies that adopt these AI tools first and faster will benefit more versus their peers. We are seeing the fast-moving companies benefit, and Renesas is definitely one of them. And then we talked about, of course, Broadcom; we talked about NVIDIA; we talked about Tesla; and there are so many other great, large multinational companies we have the privilege of working with. So, beyond productivity, there is a whole workforce development benefit of AI, which is actually quite profound.

Ruben Roy: That’s very helpful. Thanks for all that detail, Anirudh. Just a quick follow-up. From what you’re saying, it sounds like this should be incremental. EDA has grown nicely; if you look at core EDA growth over the last several years, you like to call out the three-year CAGR. But from what’s going on here, should we assume this would be incremental to the way you’ve seen EDA grow? As you think about software renewals or add-ons, as John talked about, over the next 12, 18, 24 months, would you say this would be incremental to the mid-teens growth that the EDA tools have delivered over the last three years or so?

John Wall: I would comment on that. I think our style at Cadence is to be patient with our customers, and we’ll go at the pace they’re ready for. As Anirudh said earlier in the call, we expect to proliferate our AI tools across our entire customer base over about two contract cycles. Some are adopting more rapidly and embracing the AI tools. Some are adopting the AI tools as add-ons but might be shaving back their configuration somewhere else; that tends to be a false economy, because they’ll just come back and purchase more add-ons later. So, to get the full effect, it probably takes a couple of contract cycles, but we’re very, very pleased with the start we’ve made.

Ruben Roy: It’s very helpful. Thank you, John.

Operator: Thank you. We go next now to Josh Tilton at Wolfe Research.

Josh Tilton: Hey, guys, thanks for squeezing me in. Can you hear me?

Anirudh Devgan: Clear.

Josh Tilton: Great. My first question is just how does the 4Q hardware pipeline look compared to kind of some of the strength that you saw in the first three quarters of the year? And given that you mentioned that the macro is still challenging, is there any extra conservatism in the Q4 guide to account for the potential for maybe some hardware to slip into next year?

John Wall: Yeah, that’s a great question. The pipeline is very strong. The hardware demand just continues to amaze me; it’s tremendous. That verification group is performing at a really, really high level and in such a consistent fashion for probably eight quarters now, so we’re very pleased with that. You might have noticed that we kept the same range on the guide as we had for Q3, because we thought there’s probably a broader array of potential outcomes with the amount of business that we expect to sign in Q4. We’re expecting a strong bookings quarter in Q4, and there is a strong pipeline for hardware. But like I said in relation to the AI question, we’re very patient with our customers.

We’ll go at their pace. And naturally, if something slips from Q4 to Q1, it goes from this year to next year; or, vice versa, you can have stuff that customers were planning to buy in Q1 happening in Q4 as well. But I think we’ve accounted for that in the guide. Everything we know is in our guidance.

Josh Tilton: Super helpful. And then, just a follow-up. Obviously, on AI, I can’t not touch it. But as that business of yours triples, are you seeing the drive or the want to adopt these AI tools cause more of your users to make full flow decisions when maybe this has been more of a best of breed market historically?

Anirudh Devgan: Yes, absolutely. That’s a very good point, because our AI tools run on the full flow by nature, whether that’s digital implementation or verification. And of course, we believe we have best-of-breed tools at the base anyway; you have to have the full flow, the basic engines, be best in class, and then add AI on top of them that is best in class. But it is helping the underlying tools. As we have commented in the past, the AI tools by nature use a lot of underlying tools. So, when Cerebrus runs, for example, which is our AI tool for digital implementation, one of the most difficult tasks in chip design, one run of Cerebrus will typically run on 10, 20 machines, okay?

And each of them could be like 32 CPUs or 16 CPUs. So, they are using a lot of compute, and they are also using a lot of underlying licenses; it could be like 10 instances of Innovus that Cerebrus is running. And then it is also synthesis, place and route and sign-off in the case of digital; logic simulation, formal verification and hardware in the case of verification. And the same thing with analog: it’s not just Virtuoso, but Spectre too. So, the full flow is definitely enabled, but it also typically requires more instances, because we’re doing AI-based design, an AI-based intelligent search of the design space, which requires multiple runs. Typically, instead of one or two runs, it may require 100 or 200 runs. But the user was doing that manually in a sequential manner, and we can do it automatically in a more parallel manner.

So, it definitely helps, and it’s still worth it because you get much better PPA. It’s like using more compute, more software and more automation in place of more human effort, and we can do it faster and with better PPA.
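
To make the orchestration pattern described above concrete, here is a small, purely hypothetical Python sketch: an orchestrator launches many implementation trials in parallel, each standing in for an underlying tool run on its own machine, and keeps the best PPA result. The function names, parameters and scoring are invented placeholders for illustration, not Cadence tool interfaces or the actual Cerebrus algorithm.

```python
# Hypothetical illustration of AI-orchestrated design-space search: many
# parallel implementation runs (each consuming an underlying tool instance
# and a block of CPUs), with the best PPA result kept at the end.
from concurrent.futures import ProcessPoolExecutor
import random

def propose_parameters(trial: int) -> dict:
    # Placeholder for the AI search step; a real orchestrator would pick
    # tool settings based on the results of earlier runs, not at random.
    return {"trial": trial,
            "effort": random.choice(["medium", "high"]),
            "target_density": random.uniform(0.6, 0.8)}

def run_implementation(params: dict) -> dict:
    # Placeholder for one full place-and-route run on its own machine;
    # returns a mock PPA (power, performance, area) score.
    score = random.random()  # stand-in for measured power/timing/area
    return {"params": params, "ppa_score": score}

if __name__ == "__main__":
    num_trials = 100  # versus the one or two runs done manually in sequence
    with ProcessPoolExecutor(max_workers=20) as pool:  # roughly 10-20 machines
        results = list(pool.map(run_implementation,
                                [propose_parameters(t) for t in range(num_trials)]))
    best = max(results, key=lambda r: r["ppa_score"])
    print("Best trial:", best["params"], "score:", round(best["ppa_score"], 3))
```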

Josh Tilton: Super helpful. Thank you, guys.

Operator: We go next now to Joe Vruwink at Baird.

Joe Vruwink: Great. Hi, everyone. Sorry to belabor the backlog questions, but I suppose I’m going to. If we rewind two years and look at 3Q and 4Q of 2021, current RPO then went up by, I think, nearly $400 million sequentially. Is that maybe how you would start to frame the renewal values that are coming up and what you could potentially look to build on? And the second part of my backlog question gets back to Jason’s question on the changing composition of hardware and software. Given what Cadence has been able to do on production capacity and ramping there, does that change the relationship in terms of what needs to be sitting in backlog at year-end in order to support some sort of next-12-month revenue expectation?

John Wall: Hi, Joe, great questions there. I guess if you profile last year’s growth, a large portion of that growth would have been, of course, the hardware we weren’t able to service at the time. The reason I called out the lead times was that, at the end of last year, that backlog, the current-year or next-12-months backlog if you like, contained about 26 to 28 weeks of lead time for hardware. That’s down to about eight to 10 weeks now. So, to answer the second part of your question, when you get to the end of this year, because we’ve ramped up hardware production, there will be less need for revenue to come out of backlog for next year’s revenue than there was for this year.

And like I said, we’ve kept production at the same level all year. We ratcheted it up in Q1, and we’ve maintained that production level to try and reduce those lead times, because we think it makes us more competitive with customers. I was impressed that this time last year people were waiting over six months for our hardware solutions, but we’d be silly to assume that would continue; it’s important to get the lead times down to eight to 10 weeks, and I think that’s a more normal level to get to. But yeah, we’re very, very pleased with the progress we’ve made so far this year. And again, we’re not really talking about next year, but we’ve got a very busy Q4 ahead of us.

Joe Vruwink: Great. Thanks, John. If I can squeeze one more in: I think we’re about to lap the OpenEye acquisition. I just wanted to see how that has generally tracked relative to your original expectations? And maybe get an update on how Cadence is thinking about the opportunity from the Molecular Sciences group and the role you can play in life sciences looking forward?

Anirudh Devgan: Absolutely. We are super excited about that, about molecular design and the future. It’s almost like where EDA was maybe 20 years ago. Before I talk specifically about molecular design, I want to explain our product strategy and how this is synergistic with it. I see a lot of activity in our main products, which I would describe as three layers. The middle layer is the actual software products that we have, whether that’s EDA design, system simulation, finite element analysis, computational fluid dynamics or molecular simulation. That’s the middle layer of the cake. And below that is this new emergence of computational hardware, which is special-purpose hardware.

In the past, we had special-purpose hardware with Palladium, which is our custom chip, and now we also use FPGAs for Protium. We have, of course, x86-based Intel and AMD CPUs, and recently a lot of activity on GPUs, especially NVIDIA GPUs and accelerated computing. That’s the bottom layer of the cake. And then the top layer of that three-layer cake is AI orchestration: AI can provide this new level of automation and productivity, doing what was typically done by humans, like we talked about with Cerebrus or Verisium. So, this three-layer cake is central to our product strategy: the middle is simulation, which is physics-based or biology-based; the top is AI orchestration; and the bottom is computational hardware.

And then, this can be verticalized across multiple verticals, whether it’s EDA and chip design, package design with Allegro X AI, Clarity and Optimality, CFD with Fidelity, and, more importantly, like you asked, biosimulation. The reason we acquired OpenEye is that it gives us that critical middle layer of physics-based biological simulation, which very few companies can do, and then we can add to it AI-based drug discovery and computational hardware with GPUs. As you may know, OpenEye has the Orion cloud-based platform, which already runs on GPUs, giving a significant speedup for biosimulation. And recently, and we’ll talk more about it in the future, we expanded our collaboration with one major top-five pharmaceutical company to do traditional and AI-based drug discovery on top of the OpenEye and Orion platform.

And going forward, as I mentioned last time, I think the application of AI is going to go through three phases. The first phase of AI adoption is building out the infrastructure, like we talked about with great companies like NVIDIA and Tesla and now Broadcom; and there are so many other hyperscaler companies, as you know, all building out AI infrastructure. The second phase of AI adoption is applying AI to our own products, like Cerebrus and JedAI and Verisium; we talked about that today as well. That’s going pretty well, and we talked about the progress over the last year, but I think that will still take several years to play out.

And then, the third phase of AI adoption is AI applied to areas that were not automated in the past, okay? I think that may take longer, maybe five years plus, but that is where digital biology and life sciences come in. There is a huge application of AI there, and to do it properly, we need that three-layer cake: we need AI on top, we need biosimulation with OpenEye, and then computational hardware with our leading compute platforms. So, I’m very optimistic about the future. It will take some time; this doesn’t happen in a quarter or two. But it’s right to invest for the future, and it is synergistic with the other parts of our business; a lot of the biosimulation is similar to what we do in circuit simulation or CFD and things like that.

So, overall, I would say we are still in the early innings with biology and biosimulation and OpenEye, but it is a good start, and we are investing in it for the future in a controlled way, of course. We’re always financially disciplined. But I think the potential is there for it to emerge as one of the big areas in the future.

Joe Vruwink: That’s great. Thanks very much, Anirudh.

Operator: Thank you. And our final question comes from Andrew DeGasperi at Berenberg.

Andrew DeGasperi: Thanks for fitting me in. Just had two quick ones; I know most of them were answered so far on this call. First, on the margin: John, in terms of the guidance for the year, I know you took down slightly the top end of the range for operating margin on a non-GAAP basis. Just wondering if you could lay out what the puts and takes are there? Is it revenue mix? Is it the recent acquisitions you made that might have sort of crystallized that number? And second, in terms of the investments that you’re making for next year, is the pace of hiring going to change at all based on what you’re seeing right now?

John Wall: Great questions, Andrew. Yeah, in relation to the margin, the recent acquisitions are more dilutive this year. We’re picking up very, very little revenue but picking up expense immediately, and that kind of narrowed the range of margin outcomes for us, to a midpoint of 41.75%, which I think works out to be about $5.10 on non-GAAP EPS. Sorry, what was the second part of the question? I’ve forgotten the second part of the question.

Andrew DeGasperi: No worries. It was just on hiring, in terms of how you’re thinking about it so far.

John Wall: Yeah, I mean, it’s great we continue to attract top talent to Cadence. You may notice, though, in our 10-Q, we did some restructuring. In August, we initiated a restructuring plan to better align our resources with our business strategy, and we incurred about $12 million of costs comprised of severance payments and termination benefits in relation to headcount reductions. But I would kind of categorize that as a bit of housekeeping and preparation for next year.

Andrew DeGasperi: Understood. Thank you.

Operator: Thank you. I’m now going to turn it back over to Anirudh Devgan for closing remarks.

Anirudh Devgan: Thank you all for joining us this afternoon. Strong execution of our Intelligent System Design strategy and a customer-first mindset continue to drive growth as we expand our portfolio with new, innovative AI-driven solutions. We are proud of our inclusive culture and our focus on enabling sustainable innovation, and honored to have recently been named to Newsweek’s America’s Greenest Companies 2024 list. On behalf of our Board of Directors, we thank our customers, partners, and investors for their continued trust and confidence in Cadence. Thank you.

Operator: Thank you for participating in today’s Cadence third quarter 2023 earnings conference call. This does conclude today’s call. You may now disconnect.
