And we are seeing this in other companies as well. And I think what is also interesting is that the companies that adopt these AI tools first and fastest will benefit more versus their peers. So, we are seeing that with the fast-moving companies, and Renesas is definitely one of them. And then we talked about, of course, Broadcom, we talked about NVIDIA, we talked about Tesla, and there are so many other great, large multinational companies we have the privilege of working with. So, there is more than one benefit; there is a whole workforce development benefit of AI, which is actually quite profound.
Ruben Roy: That’s very helpful. Thanks for all that detail, Anirudh. I guess just a quick follow-up. It sounds, from what you’re saying, like this should be incremental. EDA has grown nicely, and if you look at the core EDA growth over the last several years, you guys like to call out the three-year CAGR. From what’s going on here, should we assume that this would be incremental to the way you’ve seen EDA grow? As you think about software renewals or adding add-ons, as John talked about, over the next 12, 18, 24 months, would you say this would be incremental to that mid-teens growth that the EDA tools have been growing at over the last three years or so?
John Wall: Well, I would comment on that. I think our style at Cadence is to be patient with our customers, and we’ll go at the pace that they’re ready for. As Anirudh said earlier in the call, we expect to proliferate our AI tools across our entire customer base over about two contract cycles. Some are adopting the AI tools more rapidly and embracing them. Some are adopting the AI tools as add-ons, but they might be shaving back their configuration somewhere else. That tends to be a false economy, because they’ll just come back and purchase more add-ons later. So, to get the full effect, it probably takes a couple of contract cycles, but we’re very, very pleased with the start we’ve made.
Ruben Roy: That’s very helpful. Thank you, John.
Operator: Thank you. We go next now to Josh Tilton at Wolfe Research.
Josh Tilton: Hey, guys, thanks for squeezing me in. Can you hear me?
Anirudh Devgan: Clear.
Josh Tilton: Great. My first question is just how does the 4Q hardware pipeline look compared to kind of some of the strength that you saw in the first three quarters of the year? And given that you mentioned that the macro is still challenging, is there any extra conservatism in the Q4 guide to account for the potential for maybe some hardware to slip into next year?
John Wall: Yeah, that’s a great question. The pipeline is very strong. I mean, the hardware demand just continues to amaze me; it’s just tremendous. That verification group is performing at a really, really high level, and in such a consistent fashion, through probably eight quarters now. So, very pleased with that. You might’ve noticed that we kept the same range on the guide from Q3, because we thought there’s probably a broader array of potential outcomes with the amount of business that we expect to sign in Q4. We’re expecting a strong bookings quarter in Q4, and there is a strong pipeline for hardware. But like I said in relation to the AI question, we’re very patient with our customers.
We’ll go at their pace. And naturally, if something slips from Q4 to Q1, it goes from this year to next year; or, vice versa, you can have stuff that customers are planning to buy in Q1 happening in Q4 as well. But I think we’ve accounted for that in the guide. Everything we know is in our guidance.
Josh Tilton: Super helpful. And then, just a follow-up. Obviously, on AI, I can’t not touch it. But as that business of yours triples, are you seeing the drive, or the desire, to adopt these AI tools cause more of your users to make full-flow decisions, when maybe this has been more of a best-of-breed market historically?
Anirudh Devgan: Yes, absolutely. That’s a very good point. Our AI tools will run on the full flow by nature, whether that’s digital implementation or verification. And of course, we believe we have best-of-breed tools on the base anyway. You have to have the full flow, the basic engines, be best in class, and then add AI on top of them, which is best in class. But it is helping the underlying tools. And as you know, we have commented in the past that the AI tools by nature use a lot of underlying tools. So, when Cerebrus runs, for example, which is our AI tool for digital implementation, one of the most difficult tasks in chip design, one run of Cerebrus will typically run on 10 or 20 machines, okay?
And each of them could be 16 or 32 CPUs. So, they are using a lot of compute, and they are also using a lot of underlying licenses. It could be, say, 10 instances of Innovus that Cerebrus is running. And then it is also synthesis, place and route, and sign-off in the case of digital; logic simulation, formal verification, and hardware in the case of verification. And the same thing with analog: it’s not just Virtuoso, but also Spectre. So, the full flow is definitely enabled, but it also typically requires more instances, because we are doing AI-based design, an AI-based intelligent search of the design process. So, it will require multiple runs. Typically, instead of one or two runs, it may require 100 or 200 runs. But the user was doing that manually in a sequential manner, and we can do it automatically in a more parallel manner.
So, it definitely helps. And even though it uses more runs, it’s still worth it, because you get much better PPA. It’s like using more compute, more software, and more automation in place of more human effort, and we can do it faster and with better PPA.
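For readers who want a concrete picture of the "AI-based intelligent search" pattern Anirudh describes, the following is a minimal Python sketch: dispatch many candidate tool runs in parallel across a parameter grid and keep the best PPA result. The parameter names, the stand-in run_place_and_route() helper, and the scoring function are hypothetical illustrations, not Cadence's Cerebrus implementation or API.

```python
# Illustrative sketch only: a toy model of the parallel, AI-style design-space
# search described above (many concurrent tool runs instead of one or two
# manual, sequential runs). The parameter names, the stand-in
# run_place_and_route() function, and the PPA scoring are invented for
# illustration; this is not Cadence's Cerebrus implementation or API.
import random
from concurrent.futures import ProcessPoolExecutor
from dataclasses import dataclass


@dataclass
class RunResult:
    params: dict
    power: float  # lower is better
    perf: float   # higher is better
    area: float   # lower is better

    def ppa_score(self) -> float:
        # Toy composite PPA metric; a real flow would weight real sign-off data.
        return self.perf - self.power - self.area


def run_place_and_route(params: dict) -> RunResult:
    """Stand-in for one underlying tool run (e.g., one Innovus instance).

    In practice each run would occupy a full machine (16-32 CPUs) and a
    tool license; here we just produce a deterministic, noisy PPA outcome.
    """
    rng = random.Random(repr(sorted(params.items())))
    return RunResult(
        params=params,
        power=rng.uniform(0.5, 1.5) * params["target_freq_ghz"],
        perf=params["target_freq_ghz"] * rng.uniform(0.8, 1.0),
        area=rng.uniform(0.9, 1.1) / params["utilization"],
    )


def explore(candidates: list[dict], workers: int = 20) -> RunResult:
    """Launch many runs in parallel (vs. a human iterating one at a time)
    and keep the configuration with the best PPA."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(run_place_and_route, candidates))
    return max(results, key=RunResult.ppa_score)


if __name__ == "__main__":
    # A grid of candidate configurations; a real AI-driven search might
    # evaluate 100 or 200 such runs, per the remark above.
    search_space = [
        {"target_freq_ghz": f, "utilization": u}
        for f in (1.0, 1.2, 1.4, 1.6, 1.8)
        for u in (0.60, 0.65, 0.70, 0.75, 0.80)
    ]
    best = explore(search_space, workers=20)
    print("Best PPA found:", best.params, round(best.ppa_score(), 3))
```

The point of the sketch is the shape of the workflow: each candidate run stands in for a full tool invocation on its own machine and license, the pool runs them concurrently rather than one after another, and an objective over power, performance, and area picks the winner.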
Josh Tilton: Super helpful. Thank you, guys.
Operator: We go next now to Joe Vruwink at Baird.
Joe Vruwink: Great. Hi, everyone. Sorry to belabor the backlog questions, but I suppose I’m going to. If we rewind two years and look at the 3Q and 4Q of 2021, current RPO then went up by, I think, nearly $400 million sequentially. Is that maybe how you would start to frame the renewal values that are coming due and what you could potentially look to build on? And then, the second part of my backlog question gets back to Jason’s question on the changing composition of hardware and software. Given what Cadence has been able to do on production capacity and ramping there, does that change the relationship in terms of what needs to be sitting in backlog at year-end in order to support the next 12 months’ revenue expectation?
John Wall: Hi, Joe, great questions there. I guess, the way you’d profile last year’s growth, a large portion of that growth would have been, of course, the hardware we weren’t able to service at the time. And the reason I called out the lead times was that, at the end of last year, that backlog, the current or next-12-months backlog if you like, contained about 26 to 28 weeks of lead time for hardware. That’s about eight to 10 weeks now. So, to answer the second part of your question, when you get to the end of this year, because we’ve ramped up hardware production, there’ll be less need for revenue to come out of backlog for next year’s revenue than there was for this year.