Richard Shannon: Can you hear me now?
Operator: Yes sir, please proceed.
Richard Shannon: All right, great. Thanks. Dan, I have a question for you, based on a comment in your prepared remarks. I'm not sure if I caught it correctly, but I think you said you had three 10% customers, and including your next largest one, the top four each were supporting a different product line. I think we can all guess what the first one is, but I wonder if you can delineate specifically which product lines each of the next three customers were primarily purchasing?
Bill Brennan: Yeah, it actually covers the broad gamut of our product lines. Obviously the largest one, Microsoft, is AEC, but for a long time our Line Card PHY business has been strong, so you could assume that would be in there. Optical DSP, we have been gaining traction there starting with Q1, as we described last quarter. And then our chiplet business we described a bit last quarter as well. So that kind of covers all of the different product lines that are materially contributing at this point in time.
Richard Shannon: Okay. You didn't say in your prepared remarks that DSP was 10%, though you have talked about it in this context in the past. Should I assume it's near that level, or is one of the 10% customers DSP?
Dan Fleming: Yeah, I mean, you could assume it's near that, if it's not at that, given where it is. And we haven't changed our expectations there: for next fiscal year, our target is to be at 10% or more of revenue for optical DSP. And as our first production ramp is occurring with a large hyperscaler, you might expect that we'd have a quarter or two this year where it crosses 10% based upon their build schedule.
Richard Shannon: Okay, all right, fair enough. Thanks for that characterization. I guess my second question is on product gross margins. We've had a couple of quarters that were, I guess, somewhat volatile, but I think you're still talking directionally upwards over time here. Specifically on product gross margins, with the growth in AECs, is it fair to think that that product line's gross margin has continued to grow? And has it been somewhat steady, or is the volatility coming from that line?
Dan Fleming: Yeah, over the long term, I would expect most of our product lines will grow a bit in gross margin, really due to increasing scale. That had been a large part of our story last fiscal year with the Microsoft reset. This year, fluctuations in gross margin have really been more about product mix as opposed to scale. Although now that we're approaching a point where we'll be exiting the year at record levels of revenue, that scale factor will come in again. So I would expect some uplift in AECs, as well as really across the board, as we stay on target to achieve that 63% to 65% overall gross margin.
Operator: Thank you. Our next question comes from the line of Quinn Bolton of Needham and Company.
Quinn Bolton: Thanks for taking my question. I just wanted to follow up on your comments about both Microsoft Ignite and Amazon's re:Invent conference. You talked about the Maia 100 Accelerator racks. I think in the Microsoft blog there were certainly lots of purple cables, so it's great to see. But can you give us some sense, in that Maia 100 rack, are we talking about as many as 48 multi-hundred-gig AECs for the back-end network, as well as a number of lower-speed AECs for the front-end network? And then from re:Invent, is Amazon looking at similar architectures, or can you just give us some sense of what the AEC content might look like in some of those AI racks?
Bill Brennan: Yeah, on the Maia platform, I think you've got it absolutely right. The back-end network is comprised of 800-gig, or 100-gig-per-lane, AECs. The front-end network is also connected with Credo AECs, and those are lower speed. So you're right in terms of the total number in the rack, and you could kind of visually see that when they introduced it as part of the keynote. I would say that Amazon is also utilizing Credo AECs for front-end connections as well as back-end. And so, given the nature of those two different types of networks, there are going to be some strong similarities between the architectures.
Quinn Bolton: And Bill, I think in the past you had talked about some of these AI applications, and I think you're referring to the back-end networks here, possibly not ramping until kind of late fiscal 2024, and then maybe not until fiscal 2025. It sounds like, at least in the Microsoft announcement, that they may be starting to ship these racks as early as kind of early next year. So I'm wondering, could you give us an update on when you think you see volume revenue from AECs in the back-end networks? Could that be over the next couple of quarters, or do you still think it may be further out than that?
Dan Fleming: Well, I think it's playing out the way we've expected, and we've spoken about this on earlier calls: in our fiscal 2024, the type of volume or revenue that we've built into the model is really based on qualification and small pilot types of builds. So it's meaningful, but not necessarily what you would expect to see from a production ramp. And as we look out into fiscal 2025, we are still being somewhat conservative about when exactly these are going to ramp. It was nice to see all of these things talked about publicly in November. However, deploying these at volume scale is a complicated thing that they've got to work through. And so when we talk about when exactly the linear ramp starts, that's something we're confident is going to happen in fiscal 2025, but we can't necessarily pinpoint what quarter.
Quinn Bolton: Understood.
Operator: Thank you. Please stand by. Our next question comes from the line of Tore Svanberg of Stifel. Please go ahead, Tore.
Tore Svanberg: Yes, thank you. I just had a follow-up. Bill, I think you've said in the past that for the AEC business with AI, you're looking at roughly a 5 times to 10 times opportunity versus general compute. And I guess, related to Quinn's question about the timing of how that plays out, is that 5 to 10 times primarily on the back-end side, or are you also starting to see the contribution on the front-end side of the AI clusters?
Bill Brennan: Yeah, so I think generally, as we talk about AI versus general compute, we're starting to think about it in terms of front-end networks and back-end networks. When we see a rack of AI appliances, of course there's going to be a front-end network that looks very similar to what we see for general compute. So to a certain extent, from a ratio perspective, serving the front-end network is really something that's common to both general compute and AI. You might see a larger number of general compute servers in a rack, so you might say that per rack, the front-end opportunity for general compute might be a little bit larger than for AI. But generally, when we think about the back-end networks, the network that is really connecting every GPU within a cluster, that's where we see the big increase in overall networking density.