John Wall: Yes, Harlan. That’s a great question. And thanks for highlighting that — because I had that on my list of things to say. I think there’s something funny going on with the rounding when you apply the growth rates for SD&A; for Q1 over Q1, the actual growth rate is probably high single digits. I know that’s lapping tough comps against Q1 ’23. If you look on a two-year CAGR basis, I think SD&A is up about 17% per annum. But we’re expecting strong SD&A growth again this year, and it will be higher than the Cadence average. That’s our expectation.
Harlan Sur: Great. Thanks for that. And then Anirudh, there have been lots of new accelerated compute AI SoC announcements just over the past few weeks, where we saw the flagship Blackwell GPU announcement by one of your big customers, Nvidia. But we’ve actually seen even more announcements by your cloud and hyperscale customers bringing their own custom ASICs to the market: Google with TPU v5 and their Arm-based CPU, and Meta unveiled their second-generation AI accelerator chips as well. And in addition to that, their road maps seem to be accelerating. So can you give us an update on your systems and hyperscale customers? I mean, are you seeing design activity accelerating within this customer base? And is the contribution mix from these customers rising above that roughly 45% level going forward?
Anirudh Devgan: Yeah, Harlan, that’s a very good observation. The pace of AI innovation is increasing, and not just in the big semi companies, but of course in these system companies. And I think several announcements did come out, right, including — I think it’s now public that Meta is designing a lot of silicon for AI, and of course Google, Microsoft, Amazon. So all the big hyperscaler companies, along with Nvidia, AMD, Qualcomm, and the others — Samsung had an AI phone this year. So there is a lot of acceleration both on the semi side and on the system side. And we are involved with all the major players there, and we are glad to provide our solutions. And this is the other thesis we have talked about for years now, right — five years, seven years — that the system companies will do silicon for a lot of reasons: for customization, for schedule and supply chain control, and for cost benefits, if there is enough scale.
And I think with the workload of AI — if you look at some of the big hyperscaler and social media companies, they are talking about using 20,000, 24,000 GPUs to train these new models. I mean, this is an immense amount. And as the size of the models and the number of models increase, the number required to train these models, and of course to do inference on them, could go much, much higher than it is right now. So I think we are still in the early innings in terms of system companies developing their own chips while at the same time working with the semi companies. So I expect that to grow, and our business with those system companies doing silicon, I would say, is growing faster than the Cadence average.
But the good thing is the semi guys are also doing a lot of business. So I don’t know if that 45% will change, because that’s a combination of a lot of companies. But overall, the AI and hyperscaler companies are doing a lot more, and so are the big semi companies.
Harlan Sur: Perfect. Thank you.
Operator: I’ll now turn it back over to Anirudh Devgan for closing remarks.
Anirudh Devgan: Thank you all for joining us this afternoon. It is an exciting time for Cadence, as our broad portfolio and product leadership position us well to maximize the growing opportunities in the semiconductor and systems industry. And on behalf of our employees and our Board of Directors, we thank our customers, partners and investors for their continued trust and confidence in Cadence.
Operator: Thank you for participating in today’s Cadence first quarter 2024 earnings conference call. This concludes today’s call, and you may now disconnect.