And I don’t think it’s a zero-sum game. I’ve been saying this for a while. I think both segments, the merchant side as well as the custom side, are going to grow to be enormous TAMs. And so yes, we’re right in the mix on all the major opportunities and programs. We have assembled an incredible team and an incredible portfolio of technology. We announced today our continued partnership with TSMC, with a very full and robust roadmap of IP and technology to specifically address AI and accelerated computing applications for the next generation. So that’s got a lot of excitement with the customers as well. So yes, overall, we’re very bullish, and we think all the boats are going to lift on this one, and it certainly creates a huge new TAM opportunity for Marvell that we really didn’t have just a few years ago.
Operator: The next question will come from Christopher Rolland with Susquehanna.
Christopher Rolland: I guess, first of all, Matt, thanks for giving the custom ASIC opportunity numbers again. I think that means, probably means you guys have a pretty strong pipeline. But I would love to drill down on that a little bit more. So I guess, first of all, maybe you can talk about these opportunities, training versus inference, how you see that versus perhaps just compute? And then what’s kind of the — do you have a secret sauce that’s landing these? Like your competitor has really used their, at the time, leading-edge SerDes IP as a driver of their wins. Is there some special sauce of yours that’s winning these?
Matt Murphy: Yes, hopefully, some of the custom numbers were helpful. Among investors last year and this year, there was a lot of interest in what the size of that was. And now that we’ve got the visibility, it’s great to start articulating where we think that is. I think on the second piece of your question around training versus inference and compute: at this point, I would just put it in a very large bucket of what I’m calling deca-billions in TAM creation. I don’t know that I’m prepared, certainly on this call, to slice it out. And I don’t even think we know where this is going to be in three to five years relative to that exact split. But I would say the opportunity set for us crosses all of those applications, okay?
And then relative to where we are in terms of our technology, our competitiveness, I mean it’s pretty astonishing if you think about it. When we won the designs we won, Chris, at 5-nanometer, that was really our first time as Marvell competing in this segment, especially in the data center. The origins of this business go back to our purchase of Avera in 2019. We pivoted that roadmap during the pendency of the close, and right after, from 14-nanometer technology all the way to 5-nanometer. And even though it was our first time out, our SerDes was competitive, along with packaging technology, interconnect, manufacturing scale, and strategic partnerships up and down the supply chain. That enabled us to win a significant share of ASIC business, which is now coming to fruition here in fiscal ’25 and ’26.
We followed that up with our 3-nanometer platform, which has now been a big part of the opportunity set in the funnel we’re competing in today. We are in a completely different position now relative to 5-nanometer because we had the learnings, we had the readiness. And now we’re really leaning in, okay, in terms of a being-first mindset around not just nanometers, but die-to-die interconnect, packaging, SerDes technology, optics, and really chiplets, and thinking about how to stitch all this together. And on top of that, with the scale of Marvell and our strategic partnerships, we really look to our customers like an incredibly solid, trustworthy long-term partner to help them realize their AI silicon ambitions. So that’s where we are today.
I mean, we did great on 5-nanometer, we’re going to do great on 3-nanometer, and we’re going to do great in the future. Thanks.
Operator: The next question will come from Quinn Bolton with Needham & Company.
Quinn Bolton: I’m going to follow up on the cloud ASIC question as well, but I guess I’m coming at it from a margin perspective. Matt, you look at some of these programs on the cloud compute side, to the extent that they integrate high-bandwidth memory: it certainly does nice things to the ASP, but there’s also a pretty significant cost that you may have to pass through. And so I’m wondering, can you talk about the gross margins on some of these ASIC platforms, how do they compare to the corporate average? And as you ramp, especially if cloud ASICs get to be similar in size to the optics, which I know are good margins, how do you see the margin mix changing with this now very, very strong ramp of cloud ASIC programs? And then I’ve got a quick follow-up, if I can sneak it in.
Matt Murphy: Sure. Yes. I mean, the way we’ve talked about that business, actually since Avera in 2019, is that it would always, due to the nature of the custom business, be at a lower gross margin than the company average and the target. At the same time, because of the NRE, or non-recurring engineering, that we receive as part of the customer funding of these programs, the operating margins of this business tend to be very competitive and, over time, in line with our company targets. When you start talking about these very, very large programs, you’ve got opportunity, given the volume and scale, to really drive a lot of leverage on the OM line in the model. But the GMs, just due to the nature of that business model, will always be a little bit lower than the total.