Synopsys, Inc. (NASDAQ:SNPS) Q4 2023 Earnings Call Transcript

Jason Celino: Great. Thanks for taking my question. The backlog number is quite impressive. Looking back at the last couple of years, it looks like it’s sequentially the biggest quarter-over-quarter increase we’ve ever seen, up 20% year-over-year. I just wanted to check how much of this has been driven by AI design activity or AI tool revenue. It sounds like it was more broad-based, but I just wanted to check.

Sassine Ghazi: With AI, we communicated last quarter what we’re observing on average, and I emphasize the average because in some cases we were seeing a much bigger number and in others a smaller one: about 20% contract-over-contract growth when a customer is renewing their EDA agreement and asking for the AI capability to be added. The other point that I made earlier with Vivek, I’d like to emphasize as well: we are in the early stages of that monetization with AI. We still have customers that are in early adoption, meaning if they have x number of projects, they may be using it on one or two projects, while other customers started early and are going more aggressively. But to give you a sense, it’s roughly 20% contract-over-contract growth we’re observing on the EDA side.

Jason Celino: Okay. Interesting. Thank you. And then one question on the OCF guidance. It will be down next year. Are you saying that there’s a $600 million headwind from cash taxes and additional amortization, and then that it’s going to grow just in line with the net income growth that you’re after? Can you clarify?

Shelagh Glaser: Yeah, Jason, thanks for the question. So the total impact from taxes is $600 million as we move to the new tax rules where R&D is capitalized. Out of that $600 million, $200 million was payable in November, which we’ve already paid, and that was for 2023; the other $400 million of that $600 million is for 2024. And again, both of those are impacted by the amortization of R&D. As we move into 2025 and beyond, we anticipate that the growth of the cash tax rate will align with the growth of our operating income. So there is a big step up this year, but we do not anticipate that same level of step-up as we move forward.

Jason Celino: Got it. But this is the new base level, right?

Shelagh Glaser: It is, because going forward, for the foreseeable future, we will be amortizing R&D. And for us, about half of our R&D is in the US and about half is outside the US. So that new scheme is with us for the foreseeable future.

Jason Celino: Got you. Okay, great. Thank you.

Sassine Ghazi: Thank you.

Shelagh Glaser: Thank you, Jason.

Operator: We’ll take our next question from Jay Vleeschhouwer with Griffin Securities.

Jay Vleeschhouwer: Thank you. Good evening. A question for Aart and Sassine first, and a financial follow-up for Shelagh. For Aart and Sassine: one of the tailwinds that you’ve had for the last number of years, and we’ve spoken about this often, is what you’ve referred to as domain-specific architectures on the part of your customers, or specialty chips. And AI is probably a very special case of that. The question is, how do you think the whole phenomenon of domain-specific design at either the chip or system level is going to be fundamentally altered by the AI phenomenon? And in turn, what does it mean for any additional investments you need to make outside of R&D, for example, in AE capacity, services, et cetera?

Sassine Ghazi: Yeah. Thank you, Jay, for the question. You’re right. On the whole domain-specific architecture trend, I want to say about six or seven years ago we started seeing a number of customers investing in it, mostly hyperscalers. And you can argue that before that, a couple of the mobile companies started optimizing their silicon based on their own system software. With the hyperscalers, initially the target was: based on a specific workload, can they develop a chip that is more effective for power, performance, cost, et cetera? And the answer is yes, and they made a number of these investments. Now you’re seeing another wave of expanded investment around AI and how they can train the models that they are creating for, again, their specific applications.

And I’m sure you’ve noticed that in the last couple of weeks the top three hyperscalers all announced their own silicon investments and chips for AI-specific training, to drive more optimization and efficiency for their workloads. What that means for Synopsys is that not only is it another chip that our customer base is investing in, which drives both EDA and IP; typically these are at the most advanced node, and in many cases with a different methodology, pushing towards multi-die and chiplets, which opens up the door for IP, per the comments I made earlier. From our solution point of view, yes, we’re expanding the solution offering to enable our customers to design these complex chips. On IP, I want to say it’s fairly straightforward: you deliver to standards to connect these chips.