Cadence Design Systems, Inc. (NASDAQ:CDNS) Q1 2023 Earnings Call Transcript

John Wall: So, that — I just wanted to clarify that. Yes, the — so the remainder of the existing bookings will be in backlog, but that will reduce until we get to the actual renewal date. And then, on the renewal date, we’ll have a new booking come through.

Charles Shi: Got it.

John Wall: And that will be additive to the backlog.

Charles Shi: Got it. So, just want to really clear this up. I think another analyst just asked about this: the implied bookings seem to be down in Q1. And I think you did provide some color, but I want to ask from another perspective. I think in the March 2021 quarter, if you still remember the details, you had some similar, relatively lower bookings back then. And now, in hindsight, it looks like it was just some one-off weakness in bookings and didn’t mean much in terms of the broader industry trend. Is that the same case this time? Or are you seeing something more indicative of a trend going on in the industry today? Thank you.

John Wall: Oh, thanks for the opportunity to clarify there, Charles. Certainly, I didn’t see any weakness in bookings in Q1. Bookings in Q1 were stronger than we were actually expecting at the start of the quarter. What I was trying to convey was that we had very few software contracts coming up for renewal in Q1, and therefore bookings were expected to be light. The beauty of the recurring revenue model that we have is that the timing of those renewals is not what is especially important; it’s the annual value of those renewals. And we continue to see growth in the annual value of those bookings, and we see growth across all of the businesses.

Charles Shi: Thank you, John.

John Wall: Thanks.

Operator: Your next question comes from the line of Harlan Sur with J.P. Morgan. Your line is now open.

Harlan Sur: Hi. Good afternoon, and nice job on the strong quarterly execution. We had a call last week with one of the largest ASIC semiconductor companies, a very large customer of yours, and they’ve got chip design programs with many of the cloud and hyperscaler companies. And they told us that they’re seeing a pickup just over the last 60 to 90 days, a meaningful pickup in design activity and design project pull-ins on their accelerated compute and AI SoC programs from their hyperscale customers. I guess it’s not a surprise given the AI arms race amongst the cloud titans. But have you seen this recent pickup in customer design activity and program pull-ins reflected in your recent discussions on upcoming renewals and/or customer engagements?

Anirudh Devgan: Yeah, Harlan, that’s a great point. So, in general, like I mentioned earlier, there is a lot of strong design activity. We see it with our customers, both on the semi and the system side, and when I talk to our ecosystem partners, right, the foundries and the IP providers. So overall, I think design activity is very strong. Now, in particular, to your specific question on AI, I definitely see a lot more interest. And one of the reasons, which you may know already, is this new kind of generative AI and all this talk about ChatGPT. Traditionally, search or this kind of AI inferencing was done on a CPU. But with these new generative AI tools, of course, the training is done on GPUs as always, but even inferencing, when you ask it a question, a lot of the inference is done on GPUs, which is a great acceleration platform but traditionally more expensive than a CPU platform.

So, not only will it drive more and more adoption of GPUs and accelerated computing in the cloud, but customers will also naturally look for customized silicon that can do it much more effectively and efficiently, both for performance and power. So, we do see generative AI, and adoption of this more reinforcement learning based kind of training and inference, especially the inference part of it, driving more silicon demand and more customized silicon, so you are correct in your observation. But in general, right, we have talked for several years about the need for customized silicon, whether it is for generative AI now, or in general for self-driving or a variety of applications, and that is expected to continue, and we are pleased to see continued momentum in that space.

Harlan Sur: No, I appreciate that. And then, maybe just a follow-on to that question: many of your cloud and OEM customers that historically have worked with these large ASIC companies, we’re also hearing that some of them may be trying to build extra distance, right, and pull together the capability to do the entire chip design themselves, right, what we call COT-based models. I would think that would mean further expansion of their design teams beyond just front-end design, right, which again would mean maybe more market opportunity for things like your Virtuoso franchise and many of your back-end physical implementation and verification tools. Are you guys seeing this trend as well?

Anirudh Devgan: Yes, absolutely. So, I think if you look at this kind of transition of system companies doing their own silicon, I think there are at least three phases of that. And that’s why I’ve commented in the past that we are still in the second inning of this. And you can look at other companies, like in the mobile space, when they started doing their own silicon. So, the first phase is using an ASIC provider. And, actually, we have great relationships with almost all the major ASIC providers; we have very deep partnerships with the world-leading ASIC providers in all geographies. But typically, the system companies will start with an ASIC provider and then go to a COT flow, or customer-owned tooling. And in that, they will do more and more back-end design, so usually that is more opportunity for Cadence.

And it also means the front-end and back-end can be optimized together. I mean, the ASIC provider can do that too, but typically, the customer will go to a COT flow over time. So that’s one thing that has already happened in other system companies and is happening in the newer ones. And the other trend, of course, is that they will do more and more designs. Initially, they’ll start with one or two designs, and you can see these examples in all the public ones, like Amazon starting in the beginning with a networking chip. And once that networking chip was successful, they moved to Graviton, which is a compute server. And then, once that’s successful, they go to an AI chip. So, the number of chips also increases. And then, the third reason that there is more and more business is that new system companies get in.

Traditionally, like in the auto space, in the beginning only one or two will do it, and then more companies will do it. So, that’s why I think this is a growing trend that will last for a while, because there are companies moving from part of the flow in-house to the full flow in-house, which is moving from ASIC to COT; more and more designs being done; and thirdly, more and more companies doing silicon. So, all these three trends are positive for Cadence and the products that we supply.

Harlan Sur: Yeah, insightful. Thank you.

Operator: Your next question comes from the line of Jason Celino with KeyBanc. Your line is now open.