Harlan Sur: Good afternoon. Thanks for taking my question. Macro conditions in the semiconductor industry are still fairly muted, right? We’re close to a cyclical bottom, but the recovery seems more gradual than expected across many different end markets, right? Accelerated computing and AI are strong; the auto, industrial, enterprise, and service provider markets are still relatively soft. So, across the metrics that you track, renewals, hardware buys, IP take rates, is the team seeing any signs of hesitation or pushouts across your different customers or different businesses?
Anirudh Devgan: Yeah, Harlan, that’s a good question. Like we mentioned last time, we still see a lot of strong design activity, and I would say that compared to three months ago, the activity is similar. Like you mentioned, some segments are going through tough times and some segments, like accelerated compute and AI, have a lot of growth. But overall, as you know, the products our customers are designing take several years to develop, and we are part of that R&D cycle. So, what we see is customers still investing in R&D and in our products for the future, and we are glad to partner with them. So, largely, I would say the environment is similar to what it was three months ago.
John Wall: Yeah, absolutely. And on the hardware side, we’ve been producing hardware as fast as we can all year. You can see in our 10-Q that we filed today that the value of finished goods in our inventory was less than $10 million at the end of Q3. So, demand is still really strong and we’re producing the hardware as quickly as we can. We’re expecting a very strong Q4 for our IP group as well. I mean, they’re delivering a number of silicon solutions to our customers in Q4, and I think that sets up a really strong quarter for that group, but we’ve been expecting that all year.
Harlan Sur: Yeah. No, I appreciate the comments there. One of your large AI SoC customers recently laid out their future road maps, right? And given the complexity of all these next-generation AI compute workloads, they’re actually accelerating their chip road maps, so a new GPU chip every year versus every two years, which was their prior cadence. And on top of that, they’re starting to segment their product lines, right? So, not only accelerating road maps but also more chips per product family. I’ve got to believe that other competitors in this space are doing exactly the same thing. Are you guys seeing that step-up in design activity? Obviously, much higher productivity is required. So, how is all of this being reflected in the business momentum and your visibility?
Anirudh Devgan: Yeah, good point. I mean, like you said earlier, the macro environment is challenging; some of the segments are weaker, some are stronger. But design activity is very strong. And I would say two verticals are especially strong for the future of the semi and system business in terms of design activity: data center and AI, and then automotive, given the electrification and the massive transformation that’s happening there. You know this anyway, Harlan, but if you look at the next three, four years, these two segments, AI-driven data centers and automotive, will grow significantly. And because they are growing so fast, first of all, the cadence of those end-customer products is increasing.

And they also need to be more and more efficient, given that design activity and complexity are going up. So there is more design activity and also more use of AI to accelerate and be more productive. Even we are using AI internally to be more productive ourselves. So, definitely for these two big verticals, this is a multi-year trend, not a short-term one. And you mentioned some of our large customers, which we are very fortunate to work with. We always say we want to win with the winners, and we always focus on the leading companies in the data center and AI space and also now in the automotive space. So that activity is strong, and I expect it to continue.
Harlan Sur: Yeah. Well, thank you.
Operator: Thank you. We go next now to Gary Mobley at Wells Fargo.
Gary Mobley: Hey, guys, thanks for taking my question. John, your upfront license revenue year-to-date has averaged around 17%. I think typically, it’s 15%. Given where you’re at in the verification hardware product cycles, Z2 and X2 and the conversion of the backlog there, how do you see that upfront revenue trending, looking into next year? And related to that, how would you see the influence on overall growth next year?
John Wall: Yeah, great question, Gary. I mean, we’re always watching that carefully. As you know, last year the upfront piece ticked up to 15%. This year, in the 10-Q, if you look at it on a rolling four-quarter basis through the end of Q3, it’s at 16% now, but to your point, it’s probably closer to 17% for the first three quarters, and I think that’s a reflection of the strength of hardware. On the ratable and recurring part of the business, that’s about 84% of the trailing 12-month revenue, and if you look at our guide at [40, 80] (ph), we’re assuming essentially about a 13% growth rate on the recurring revenue line for the year, which is consistent with the three-year CAGR of about 13% as well. Of course, we’re not guiding next year.
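[For readers who want to reproduce these kinds of figures, here is a minimal sketch of how a rolling four-quarter upfront-revenue share and a three-year CAGR are typically computed; the revenue numbers in it are hypothetical placeholders for illustration only, not Cadence’s actuals.]

```python
# Illustrative only: how a rolling four-quarter upfront share and a
# three-year CAGR are computed. All revenue numbers here are hypothetical
# placeholders, not Cadence's reported figures.

# Hypothetical trailing four quarters of revenue ($M), split into the
# upfront (hardware/IP) piece and the ratable/recurring piece.
trailing_quarters = [
    {"upfront": 160, "recurring": 840},
    {"upfront": 170, "recurring": 850},
    {"upfront": 165, "recurring": 860},
    {"upfront": 175, "recurring": 870},
]

total_revenue = sum(q["upfront"] + q["recurring"] for q in trailing_quarters)
upfront_revenue = sum(q["upfront"] for q in trailing_quarters)
print(f"Rolling four-quarter upfront share: {upfront_revenue / total_revenue:.1%}")

# Three-year CAGR from hypothetical start and end annual revenue ($M):
# CAGR = (end / start) ** (1 / years) - 1
start_revenue, end_revenue, years = 1_000, 1_443, 3
cagr = (end_revenue / start_revenue) ** (1 / years) - 1
print(f"Three-year CAGR: {cagr:.1%}")  # ~13% with these placeholder inputs
```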
Gary Mobley: Understood. All right. I suspect that we’re not going to get any more AI metrics out of you, Anirudh, but maybe you can just give us a sense of where we’re at in the commercialization of the five different AI tools. Have those started working their way into the baseline license renewals, or are they still sold on a per-design basis? And maybe give us a sense of when you expect them to cut into baseline licensing activity.
Anirudh Devgan: Yeah, Gary, that’s a good point. So, we are watching that carefully, of course. As you know, JedAI and these five major platforms are new products that our customers engage with us on, and they run on top of our existing leading platforms. So, it depends on the customer. I would still say we are in the early stages of adoption of these AI products because, as you know, any of these new software tools take years to fully deploy, right? This is what happened in digital and in any major platform release we do. So, even though we are about two years into it, I think it will still take some time to fully deploy these products. And what we have said in the past is that, typically, at least in my experience with digital about seven, eight years ago, it took about two contract cycles to fully deploy, okay?
So there are still three, four years to go. We’re probably two, three years into it with still three, four years to go, which is a good thing in my mind because this is the natural progression of deployment. Now, it depends on the customer. Some customers are adopting them in a much bigger way, especially, as in the previous discussion, the new AI designers and hyperscalers, where there is an increased cadence of design activity, so they are adopting them maybe a little faster than some of the other verticals. So it just depends; some have still only tried them on a few designs or in a few groups. But we have also seen some pretty broad deployment, and that helps our overall engagement with that particular customer. So that’s what I would say, Gary: I think it’s still early, but the good thing, as we mentioned in the prepared remarks, is that all top customers are now fully engaged and some of the results are truly remarkable.
Actually, I was talking to one major customer recently, and they are getting like 8% to 10% power improvement from Cerebrus, okay? And we have mentioned several examples like this in the past. I mean, that’s a huge improvement; sometimes that’s roughly equivalent to a node migration. Typically, when you go from one node to another, you may get like a 10% to 15% PPA improvement, and you’re getting close to that, or roughly two-thirds of that, from better AI tools. So the value is there. And that’s what we are focused on: make sure the products really provide value, and then work with the customers at the pace of deployment that they want, because it’s a natural process to try some designs and then deploy broadly. But some of them are doing it much faster, like I mentioned.
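[As a rough sanity check on that comparison, using only the ranges quoted above, the midpoints line up as follows; this is illustrative arithmetic only, not any additional disclosed data.]

```python
# Rough check of the comparison above, using midpoints of the quoted ranges.
cerebrus_power_gain = (0.08 + 0.10) / 2   # ~9%: power improvement quoted for Cerebrus
node_migration_ppa = (0.10 + 0.15) / 2    # ~12.5%: typical PPA gain cited for a node migration
ratio = cerebrus_power_gain / node_migration_ppa
print(f"AI-tool gain as a fraction of a node-migration gain: {ratio:.0%}")  # ~72%, i.e. roughly two-thirds
```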