Credo Technology Group Holding Ltd (NASDAQ:CRDO) Q1 2024 Earnings Call Transcript

So we feel very good about that. We’re seeing more customer interest. When we talk about the chiplet business in general, there are lots of different kinds of chiplets. We’re really specializing in connectivity, so SerDes chiplets are our target, and we expect that generally to be a large opportunity. One of the things that we’ve talked about in the past is our efforts for PCIe Gen 6. That effort is well underway. We see a big opportunity within servers, whether they’re compute or AI. Internally that network is managed with the PCIe standard, and there is a bandwidth explosion happening inside the box. And so we see a large opportunity for PCIe retimers as well as chiplets, and people have talked about the UCIe standard as being a die-to-die interconnect – off-package, it would be PCIe. So we’re definitely going to be in that market long-term.

Brett Simpson: And just a follow-up on this. I mean a lot of chipmakers are talking about next-gen architectures for AI where they’re physically separating out the IO from their main compute – their main accelerator. Can you talk a bit about what that means for Credo? How do you position for some of these next-gen architectures? And are you engaging with any of these sorts of projects at this stage? Thanks.

Bill Brennan: Yes, I think my understanding is that’s what I just spoke about – this UCIe standard that’s being driven by Intel. We’re part of that group. We’re active, and this is defining a standard where you can do chip-to-chip connectivity, and then off-chip you can manage it in different ways. We think the right approach is to go with fast connections that are off-package, and we’re going to bring the same kind of advantages to that opportunity that we’ve brought to everything we’ve been involved with, which is faster connectivity with better power efficiency. But that, I think, is what maybe you’re referring to. That’s the way that I perceive it.

Brett Simpson: And, Dan, maybe just a final one. In terms of the guide for next quarter, I wanted to just ask about some of the licensees for your USB for V2. Are you guiding for any royalty revenues in the current quarter from some of these licensees or not? Thanks.

Bill Brennan: No, that’s all beyond the current fiscal year. We haven’t given guidance on that yet.

Brett Simpson: Okay, thank you.

Operator: Thank you. And one moment for our next question. And our next question comes from Vijay Rakesh from Mizuho. Your line is now open.

Vijay Rakesh: Yes, hi. Thanks, Bill and Dan. Just a question on the AI side. I know you talked about maybe a bigger ramp in ’24, ’25, but you also talked about 5 times the content, getting to 20 servers per rack as you go to AI. Any thoughts on what percent of your cables now go to the AI side? And then as we look out, is the ramp on AI with Habana only, or do you see opportunities on the AMD MI300, et cetera, as well?

Bill Brennan: I can say that the opportunity is broad for us. Anything that’s Ethernet, we see as a big opportunity. As it relates to your question about overall percentages, right now we’re really in high-volume production with one customer, and we’ve got a second lined up that is at the early stages of ramp. It’s hard for me to really project without detailed forecasts, but my expectation is that both of those customers will eventually buy our solutions for their AI platforms in a significant way. And so I would say general compute will continue to be large for us, but I think AI will ultimately be where you’ll see the bulk of our cables, really at 100 gig lane rates, in the near future.