Credo Technology Group Holding Ltd (NASDAQ:CRDO) Q2 2024 Earnings Call Transcript November 29, 2023
Credo Technology Group Holding Ltd reports earnings in line with expectations. Reported EPS was $0.01, matching expectations of $0.01.
Operator: Ladies and gentlemen, thank you for standing by. At this time, all participants are in a listen-only mode. Later, we will conduct a question-and-answer session. [Operator Instructions] I would now like to turn the conference over to Dan O’Neil, please go ahead, sir.
Dan O’Neil: Good afternoon and thank you all for joining us on our fiscal 2024 second quarter earnings call. Today I am joined by Credo’s Chief Executive Officer, Bill Brennan; and Chief Financial Officer, Dan Fleming. I’d like to remind everyone that certain comments made in this call today may include forward-looking statements regarding expected future financial results, strategies and plans, future operations, the markets in which we operate, and other areas of discussion. These forward-looking statements are subject to risks and uncertainties that are discussed in detail in our documents filed with the SEC. It’s not possible for the company’s management to predict all risks nor can the company assess the impact of all factors on its business or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statement.
Given these risks, uncertainties, and assumptions the forward-looking events discussed during this call may not occur and actual results could differ materially and adversely from those anticipated or implied. The company undertakes no obligation to publicly update forward-looking statements for any reason after the date of this call to conform these statements to actual results or to changes in the company’s expectations except as required by law. Also during this call, we will refer to certain non-GAAP financial measures, which we consider to be important measures of the company’s performance. These non-GAAP financial measures are provided in addition to and not as a substitute for or superior to financial performance prepared in accordance with US GAAP.
A discussion of why we use non-GAAP financial measures, along with reconciliations between our GAAP and non-GAAP financial measures, is available in the earnings release we issued today, which can be accessed through the Investor Relations portion of our website. I will now turn the call over to our CEO. Bill?
Bill Brennan: Thank you, Dan. Welcome to everyone joining our Q2 fiscal 2024 earnings call. I’ll start with an overview of our fiscal Q2 results. I’ll then discuss our views on our outlook. After my remarks, our CFO, Dan Fleming, will provide a detailed review of our Q2 financial results and share the outlook for the third fiscal quarter. We will then be happy to take questions. For the second quarter, Credo reported revenue of $44 million and non-GAAP gross margin of 59.9%. Our Q2 results and our future growth expectations are driven by the accelerating market opportunity for high-speed and energy-efficient connectivity solutions. We target port speeds up to 1.6 terabits per second with solutions including Active Electrical Cables or AECs, optical DSPs, laser drivers and TIAs, Line Card PHYs, SerDes Chiplets, and SerDes IP licensing, enabling us to address a broad spectrum of connectivity needs throughout the digital infrastructure market.
Each of these solutions leverages our core SerDes technology and our unique customer-focused design approach. As a result, Credo delivers application-specific, high-speed solutions with optimized energy efficiency and system cost, and our advantage expands as the market moves to 100-gig per lane speeds. Within the data center market today, we’ve seen dramatically increasing demand for higher bandwidth, higher density, and more energy-efficient networking. This demand is driven by the proliferation of generative AI applications. For the past several years, Credo has been collaborating with our customers on leading-edge AI platforms that are now in various stages of ramping production. In fact, the majority of Credo revenue will be driven by AI applications in the foreseeable future.
Now, I will review our overall business in more detail. First, I’ll discuss our optical business. I am pleased with the traction we’ve been gaining in this market. In the quarter, we continued shipping to multiple global hyperscaler end customers, and we are making progress in positioning Credo to add additional hyperscaler end customers in the upcoming quarters, targeting 400-gig and 800-gig applications. Credo also has optical designs in various stages with module customers and networking OEMs for the fiber channel market, and with service providers for 5G infrastructure deployments. Credo plays a disruptive role in the optical DSP market. Our fundamental SerDes technology is leveraged to provide a compelling combination of performance, energy efficiency, and system cost.
Additionally, we focus on solving our customers’ problems and market challenges through engineering innovation. At the OFC Optical Conference in March of this year, there was an important call to action to address the unsustainable power and cost increases for optical modules in the 800-gig and 1.6T generations. Much industry discussion has ensued this year, especially related to the plausibility of the Linear Pluggable Optics architecture, or LPO, also sometimes referred to as Linear Direct Drive. The LPO architecture is based on eliminating all optical DSP functionality. The industry has widely concluded that the LPO architecture will not be feasible for a material percentage of the optical module market, and that DSP functionality is critical to maintaining industry standards and interoperability, as well as achieving the bit error rate performance necessary for high yields in volume production.
However, this does not mean that the industry call to action will be unanswered. Credo’s response following OFC was to look at innovative ways to drastically reduce DSP power, and subsequently cost, through architectural innovation. Today, Credo issued a press release introducing our Linear Receive Optical, or LRO, DSPs. Our LRO DSP products provide optimized DSP capability in the optical transmit path only, and eliminate the DSP functionality in the optical receive path. This innovative architecture, as optimized by Credo, effectively reduces the optical DSP power by up to 50% and at the same time lowers cost by eliminating unneeded circuitry. Our LRO products address the pitfalls of the LPO architecture by maintaining standards and enabling interoperability among the many components of an optical system.
And the DSP functionality maintains the equalization performance that’s critical to high yields in volume production. We’ve already shipped our Dove 850 800-gig LRO DSP device and evaluation boards to our lead optical and hyperscale end customers for their development and testing. While any revenue ramp will be a ways out, I view this innovation as the latest example of Credo pioneering a new product category that directly addresses the energy and system cost challenges faced by the hyperscalers, especially for AI deployments. Regarding our AEC solutions, Credo continues to be an AEC market leader. While our initial success in our AEC business has been connecting front-end data center networks for general compute and AI appliances, we have seen an expansion in our AEC opportunity in the back-end networks that are fundamental to AI cluster deployments.
Due to the sheer bandwidth required by back-end networks, an acceleration in single-lane speeds and networking density is driving the need for AECs, given their significant benefits compared to both passive copper cables and active optical cables, or AOCs, for in-rack connectivity. We continue to make progress with our first two hyperscale customers for both front-end and back-end networks, and we’re especially encouraged to see Credo AECs prominently featured in the leading-edge deployments introduced at their respective conferences in November. Years in the making, we continue to maintain strong and close working relationships with our customers, and I’m pleased to say that in Q2 we made our initial shipments of 800-gig production AECs, an industry first, and again we’ve demonstrated our market leadership.
We also continue to expand our hyperscaler customer base, with one in [indiscernible] with 400-gig AEC solutions and another in development with 800-gig AEC solutions. Additionally, we’ve seen increased need for 400-gig and 800-gig AECs among Tier 2 data center operators and service providers. As a group, these customers contribute meaningful revenue to Credo. I’ll also highlight one of Credo’s announcements at the recent Open Compute Conference in October. Credo announced the P3 Pluggable Patch Panel system, a multi-tool that gives service providers and hyperscalers the freedom, by using the P3 and AECs, to decouple pluggable optics from core switching and routing hardware. The combination of the P3 and AECs enables network architects to optimize for power distribution and system cost, as well as to bridge varying speeds between switching and optical ports.
We’re engaged with several customers and believe these efforts will result in meaningful revenue in the future. To sum up, we remain confident that the increasing demand for greater networking bandwidth driven by AI applications, combined with the extraordinary value proposition of our AEC solutions, will drive continued AEC market expansion. Now, regarding our Line Card PHY business, Credo is an established market leader with our Line Card PHY solutions, which include retimers, gearboxes, and MACsec PHYs for data encryption. Our overall value proposition becomes even more compelling as the market accelerates to 100-gig per lane deployments. According to our customer base, Credo’s competitive advantage in this market segment derives from the common thread across all of our product lines, which is leading signal-integrity performance optimized for energy efficiency and system cost.
We’re building momentum and winning design commitments for our Screaming Eagle 1.6T PHYs and for our customer-sponsored next-generation 1.6T MACsec PHY. We remain excited about the prospects for this business with networking OEMs and hyperscale customers. Regarding our SerDes IP licensing and SerDes Chiplet businesses, Credo’s SerDes IP licensing business remains a strategically important part of our business. We have a complete portfolio of SerDes IP solutions that span a range of speeds, reach distances, and applications, with process nodes from 28 nanometer to 4 nanometer, and our initial 3 nanometer SerDes IP for 112-gig and 224-gig is in fab now. During Q2, we secured several licensing wins across networking and data center applications. Our wins include new and recurring customers, a testament to our team’s execution in contributing to our customers’ success.
We’re also enthusiastic about the prospects for our chiplet solutions. During Q2, we secured a next-generation, 112-gig, 4 nanometer SerDes Chiplet win that includes customer sponsorship. Credo is aligned with industry expectations that chiplets will play an important role in the highest-performance designs in the future. In conclusion, Credo delivered strong fiscal Q2 results. We remain enthusiastic about our business given the market demand for dramatically increasing bandwidth. This plays directly to Credo’s strengths, and we’re one of the few companies that can provide the necessary breadth of connectivity solutions at the highest speeds, while also optimizing for energy efficiency and system cost. As we embark on the second half of fiscal 2024, we expect continued growth that supports a more diversified customer base across a diversified range of connectivity solutions.
Lastly, I’m pleased to announce that, yesterday, Credo published our first ESG report, which can be found on our website. As reiterated several times today in my comments, energy efficiency is built into our DNA and is a key part of our report. We aspire to be leaders across the ESG spectrum, and we strive to help enable our customers to be leaders as well. I’m very pleased with how Credo is pursuing our goals, and we look forward to continuing our positive ESG efforts. At this time, Dan Fleming, our CFO, will provide additional financial details. Thank you.
Dan Fleming: Thank you, Bill, and good afternoon. I will first review our Q2 results and then discuss our outlook for Q3 of fiscal 2024. As a reminder, the following financials will be discussed on a non-GAAP basis unless otherwise noted. In Q2, we reported revenue of $44 million, up 25% sequentially and down 14% year-over-year. Our IP business generated $7.4 million of revenue in Q2, up 165% sequentially and up 125% year-over-year. IP remains a strategic part of our business, but as a reminder, our IP results may vary from quarter-to-quarter, driven largely by specific deliverables to pre-existing or new contracts. While the mix of IP and product revenue will vary in any given quarter over time, our revenue mix in Q2 was 17% IP, above our long-term expectation for IP, which is 10% to 15% of revenue.
We expect IP as a percentage of revenue to be within our long-term expectations for fiscal 2024. Our product business generated $36.7 million of revenue in Q2, up 13% sequentially and down 24% year-over-year. Our top three end customers were each greater than 10% of our revenue in Q2. In fact, our top four end customers each represented a different product line, which illustrates the increasing diversity of our revenue base. Our team delivered Q2 gross margin of 59.9% at the high end of our guidance range and up 10 basis points sequentially. Our IP gross margin generally hovers near 100% and was 95.6% in Q2. Our product gross margin was 52.7% in the quarter, down 405 basis points sequentially due to product mix and some minor inventory-related items, and up 39 basis points year-over-year.
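The blended gross margin Dan reports follows arithmetically from the segment figures he gives; a minimal sketch using the stated segment revenues and margins (the ~17% IP mix is implied by the revenue split, not assumed):

```python
# Blended Q2 gross margin recomputed from the stated segment figures.
ip_revenue = 7.4          # $M, IP revenue
product_revenue = 36.7    # $M, product revenue
total_revenue = ip_revenue + product_revenue   # ~ the $44M reported

ip_gm = 0.956             # IP gross margin, 95.6%
product_gm = 0.527        # product gross margin, 52.7%

ip_mix = ip_revenue / total_revenue            # ~17% of revenue
blended_gm = ip_mix * ip_gm + (1 - ip_mix) * product_gm

print(f"IP mix: {ip_mix:.0%}, blended gross margin: {blended_gm:.1%}")
# → IP mix: 17%, blended gross margin: 59.9%
```

The recomputed 59.9% matches the reported total gross margin, which is why the quarter's margin sits well above the product-only figure despite the product mix headwind.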
Total operating expenses in the second quarter were $27.1 million at the low end of our guidance range, down 1% sequentially and up 9% year-over-year. Our year-over-year OpEx increase was a result of an 11% increase in R&D as we continued to invest in the resources to deliver innovative solutions. Our SG&A was up 5% year-over-year. Our operating loss was $731,000 in Q2 compared to operating income of $3.2 million a year ago. The second quarter operating loss represented a sequential improvement of $5.7 million. Our operating margin was negative 1.7% in the quarter compared to positive 6.1% last year, due to reduced top-line leverage. We reported net income of $1.2 million in Q2, compared to net income of $2.2 million last year. Cash flow from operations in the second quarter was $5 million, an increase of $3.3 million year-over-year, due largely to a net reduction of inventory of $5 million in the quarter.
CapEx was $2 million in the quarter, driven by R&D equipment spending, and free cash flow was $3 million, an increase of $6.9 million year-over-year. We ended the quarter with cash and equivalents of $240.5 million, an increase of $2.9 million from the first quarter. We remain well capitalized to continue investing in our growth opportunities while maintaining a substantial cash buffer. Our accounts receivable balance increased 17% sequentially to $32.7 million, while days sales outstanding decreased to 68 days, down from 73 days in Q1. Our Q2 ending inventory was $35.8 million, down $5 million sequentially. Now, turning to our guidance, we currently expect revenue in Q3 of fiscal 2024 to be between $51 million and $53 million, up 18% sequentially at the midpoint.
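Several of the figures above hang together arithmetically; a minimal sketch, assuming the conventional DSO formula with a 91-day quarter (the company's exact DSO convention isn't stated on the call):

```python
# Free cash flow, days sales outstanding, and the Q3 guide midpoint,
# recomputed from the stated Q2 figures.
operating_cash_flow = 5.0    # $M, cash flow from operations
capex = 2.0                  # $M
free_cash_flow = operating_cash_flow - capex          # matches the stated $3M

accounts_receivable = 32.7   # $M, Q2 ending balance
q2_revenue = 44.0            # $M
days_in_quarter = 91         # assumption: 91-day quarter
dso = accounts_receivable / q2_revenue * days_in_quarter

guide_midpoint = (51.0 + 53.0) / 2                    # $M
sequential_growth = guide_midpoint / q2_revenue - 1

print(f"FCF: ${free_cash_flow:.0f}M, DSO: {dso:.0f} days, "
      f"Q3 growth at midpoint: {sequential_growth:.0%}")
# → FCF: $3M, DSO: 68 days, Q3 growth at midpoint: 18%
```

All three recomputed values line up with the figures quoted in the prepared remarks, so DSO improving while receivables grew 17% is explained by revenue growing faster still.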
We expect Q3 gross margin to be within a range of 59% to 61%. We expect Q3 operating expenses to be between $28 million and $30 million. And we expect Q3 diluted weighted average share count to be approximately 166 million shares. We are pleased to see fiscal year 2024 continue to play out as expected. While we see some near-term upside to our prior expectations, the rapid shift to AI workloads has driven new and broad-based customer engagement. We expect that this rapid shift will enable us to diversify our revenue throughout fiscal year 2024 and beyond, as Bill alluded to. However, as new programs at new and existing customers ramp, we remain conservative with regard to the upcoming quarters as we continue to gain better visibility into forecasts at our ramping customers.
In summary, as we move forward through fiscal year 2024, we expect sequential revenue growth, expanding gross margins due to increasing scale and improving product mix, and modest sequential growth in operating expenses. As a result, we look forward to driving operating leverage in the coming quarters. And with that, I will open it up for questions.
Q&A Session
Operator: [Operator Instructions] We’ll pause for just a moment to compile the Q&A roster. Our first question comes from the line of Toshiya Hari of Goldman Sachs.
Toshiya Hari: Hi. Good afternoon. Thank you so much for the question. I had two questions. First one on the revenue outlook, I just wanted to clarify, Dan, I think you mentioned sequential growth throughout the fiscal year. So April, I’m assuming is up sequentially. I guess that’s the first part. And then the second part, as you think about calendar 2024, Bill, you gave quite a bit of color by product line. At a high level, the outlook sounds pretty constructive across AEC and your optical business and I guess your SerDes business as well. But if you can try to quantify the growth that you’re expecting into calendar 2024 and what the top three key drivers are, that would be helpful? Thank you.
Dan Fleming: Yeah. So with regard to fiscal 2024 on your first question, generally speaking, we’re very pleased with our quarterly sequential growth this year. And as we stated in our prepared remarks, our Q3 guide is up 18% sequentially to $52 million at the midpoint. But as we stated on our call previously, we expect modest top-line growth from fiscal year 2023 to 2024. So the key takeaway there is that there’s no change in our overall expectation for fiscal year 2024.
Bill Brennan: For the second question, I would just reiterate what Dan has said. As we look at our fiscal 2024, it’s playing out very much like we expected. So really no change there. We expect what I think should be considered fast sequential growth, and it’s been driven by multiple factors. AECs, optical, chiplets; really, we’re firing on all cylinders.
Toshiya Hari: And Bill, sorry if I wasn’t clear. Calendar 2024 or fiscal 2025, I realize it’s early and you’ve got many moving parts, but based on customer engagements, all the color you provided across product lines, how are you thinking about the overall business into next year, if you could [Multiple Speakers]
Dan Fleming: So, we’re not providing any formal guidance right now at this point for fiscal year 2025. However, as you can imagine, we do expect meaningful growth based on all the customer engagements that we have. And as Bill mentioned, we continue to have lots of irons in the fire, but as we’ve stated, it takes a long time to turn a lot of these engagements into meaningful revenue, which will happen throughout the course of the year.
Toshiya Hari: Okay, got it. And then as my follow up on gross margins, as you noted in your remarks, Dan, I think your product gross margins were down sequentially in the October quarter, off a really high base in July, but curious what drove the sequential decline there? And then as you look ahead, I think you talked about gross margins expanding over the next couple of quarters. I think you said, what are the drivers there? And if you can speak to foundry costs potentially going from a headwind to something more neutral into calendar 2024 and how the diversification of your customer base helps your gross margins going forward, that would be helpful? Thank you.
Dan Fleming: Yeah, so there was a lot to that question. Generally speaking, as you correctly note, our Q2 product gross margin was down sequentially from Q1, and if you recall, Q1 was up substantially, about 700 basis points, from Q4. It’s easy to read probably too much into these movements quarter-over-quarter at the scale that we’re at right now, because there are slight product mix changes from quarter to quarter. In Q2, we also had some very minor inventory-related items that impacted product gross margin. But the most important thing is that there’s no change to our long-term expectations. Our gross margin expectation over the upcoming years is to expand to the 63% to 65% range, and from fiscal 2023 to 2024, you’re seeing that play out, although it’s not quite linear from quarter to quarter, and that’ll continue to play out through next year as well.
Operator: Thank you. Our next question comes from the line of Tom O’Malley of Barclays.
Thomas O’Malley: Hey, guys. Good afternoon and thanks for taking my question. I just wanted to clarify something you said on the call. You guys have talked previously about two customers that you’re ramping with AEC. You talked about one customer in qualification with 400G and one in development with 800G. I just want to make sure you’re still referring to processes that you’ve talked about before, or are those new developments that you guys are talking about? Thank you.
Bill Brennan: I think we’ve alluded to those developments in the past, but these are additional hyperscale customers. So for the first two that we’ve got, November was kind of a big month. Both of them had shows, so Microsoft Ignite really prominently displayed their Maia AI appliance and rack, and you can see the Credo AECs prominently displayed as part of that rack. So that’s really something we’ve messaged in the past, and now it’s been publicly announced and shown. And also, Amazon is having the re:Invent conference right now as we speak. And if you look at the demos on the show floor, you’ll see our 50-gig and 100-gig per lane products as part of those demonstrations. And so of the two additional customers, one we’re in qual with, and we’re expecting qualification to be completed sometime in the upcoming quarter, maybe give or take a month or so.
And then the other one is more of a long-term plan as we’re putting together, an 800-gig customer-specific solution for another hyperscaler.
Thomas O’Malley: Super helpful. And then just on the optical side, you guys had previously talked about a new 400G customer. Is the upside in the near term the beginning of that ramp, or are you just seeing additional traction from customers you talked about in the past? I know there were some Chinese customers that you were looking to get back into a revenue run rate. Can you just help me understand where the strength you’re seeing on the optical DSP side is coming from? Thanks.
Bill Brennan: Yeah, so generally, we continue to ramp with the partner that we’re engaged with serving the US hyperscaler. So that ramp is going to happen over the next several quarters. We’re also seeing further signs of life in our customer base in China. And so we’ve actually got demand that we’re seeing from three or four hyperscalers in China. As far as the new US hyperscaler that we’ve talked about, really, that is not baked into any of the numbers that we’ve talked about. And so, if we can ultimately close that, we expect it will impact revenues in the fiscal 2025 time frame.
Operator: Thank you. Our next question comes from the line of Tore Svanberg of Stifel. Tore Svanberg, your line is open. Please go ahead. Please make sure your line is unmuted, and if you are on a speakerphone, lift your handset.
Tore Svanberg: Yes. Can you hear me?
Operator: Yes, sir. Please proceed.
Tore Svanberg: Yes. Sorry about that. Yeah. Bill, my first question was on the LRO product that you just announced this afternoon. You did say that this is something that should generate revenues longer term, but I think the market is also very, very hungry for lower costs near term. So what kind of time frame are we looking at here as far as when that product could be in production?
Bill Brennan: So I think the first message is that we’ve shipped samples that are going to be built into modules. We’ve shipped eval boards that are going to be thoroughly tested by our lead hyperscale customer. And so [indiscernible] is really now. And so the typical development time for an optical module is on the order of 12 months to get to production. And that’s really based on building and qualifying the module and then going through qualification with the hyperscale end customer. And so if we look at kind of a best-case scenario, we’re talking about something on the order of 12 months from now, so it could impact our fiscal 2025.
Tore Svanberg: That’s very helpful. And as my follow-up, I know the first half of the year, there were still some headwinds, obviously, from your largest customer inventory digestion on the compute side. I’m just wondering is that now — as we look at the January quarter, is that headwind completely behind you or is there still some lingering effects there?
Bill Brennan: Well, I think, as we think about the front-end networks at this lead customer of ours, the application is general compute as well as AI. And so, of course, both of these applications are contributing to the digestion of the inventory that was built up as a result of the pivot earlier in the year. And so as we look at fiscal 2024, I think we’ve got good visibility. And as for exactly when it turns back on, I think we’re still being conservative in the sense that we’ve got to wait for that to really develop in our fiscal 2025.
Tore Svanberg: Great. Thank you very much.
Bill Brennan: Thank you, Tore.
Operator: Thank you. And for our next question it comes from the line of Karl Ackerman of BNP Paribas.
Karl Ackerman: Yeah. Thank you, gentlemen. Two questions, if I may. The first question is a follow-up to the previous one, but with the solutions you introduced today, you address both DSP-based and non-DSP-based optical links. How do you see the adoption of non-DSP-based solutions for back-end network connections in calendar 2024? And as you address that question, I guess, why not introduce an AEC solution for back-end networks?
Bill Brennan: So, let me take the first part of that question. Really, the two solutions that we’ve got for optical are what we might call a full DSP, which is kind of the traditional approach, where there’s a DSP on the transmit path as well as the receive path on a given optical link. That activity is going to continue. The product that we announced today eliminates the DSP on the receive path and has it on the transmit path only. And so you might say that that would be half of the DSP on a typical optical link, and so those are really the two solutions that we’re promoting. We believe that completely eliminating the DSP is really not something that’s going to play out in a big way. Analysts have been out front saying that they don’t see it ever being more than 10% of the market, if it achieves that level.
So you’d have to have very tight control over the entire link to be able to manage that. And that’s just not the typical scenario in the market today. Typically people are putting together various solutions, and interoperability is really the key, as well as troubleshooting and ultimately yielding in production. The second part of your question was regarding AECs, and we are absolutely building AECs for back-end networks. And the AECs are really covering in-rack, three [indiscernible] or less solutions. Then there are rack-to-rack connections, and those are all optical connections, whether they’re AOCs or transceivers. And especially in that situation, for rack-to-rack connectivity within a cluster, that’s where we really believe the LRO DSP is going to be highly applicable and really quite valuable to customers.
Karl Ackerman: Thanks for that. For my follow-up, I wanted to pivot to your IP business. This is primarily [indiscernible] data center today, or at least a data center focused application. But over time, the idea is that as PAM3 ramps, it will transition more toward the consumer. How do you expect the end market mix of your IP business to transition toward consumer over the next few quarters? Thank you.
Bill Brennan: So as we look at our IP business primarily today, it’s Ethernet. We’ve talked about one large consumer license that we’ve engaged on for consumer, and that’s moving to 40-gig PAM3 for the CIO 80 or 80 gigabits per second, two lanes of 40-gig for that market. And that market is going to be out sometime in the future, probably on the order of two to three years before that ramps production. I don’t expect it to be a big part of our IP business long-term. I expect that our Ethernet IP business will continue very strongly. And I also believe that from a PCIe perspective, we’ll be able to talk about that as we bring our 64-gig and 128-gig solutions to market.
Operator: Thank you. Please stand by. Our next question comes from the line of Vijay Rakesh of Mizuho. Please go ahead, Vijay.
Vijay Rakesh: Yeah. Hey, Bill and Dan. Just on the PPP, the Pluggable Patch Panel, is that included in your AEC numbers, and are all three of your customers using it? Or how do you see that ramping, I guess?
Bill Brennan: Yeah, so you broke up a little on the line, but I’ll answer the question by saying that the P3 was something that was developed in conjunction with a leading service provider. They spoke about their challenges as they were connecting ZR optics to routers or switch ports. And so this was really developed with them and their application in mind, also knowing that in developing this solution, it would become a multi-tool in a sense, able to solve different networking problems associated with power and cooling and control plane access. And so our lead customer is a service provider, but we’re seeing that there are also applications where this really fits well, when we talk about the situation where switch and router port speeds are different from the optic speeds that a customer wants to use.
So a customer could connect 800-gig ZR optics with 400-gig switch ports, or, vice versa, they could move to the fastest switches with 800-gig ports but still use 400-gig ZR. So in a sense, this P3 system can gearbox and really seamlessly connect different-speed optics with different-speed ports on routers and switches. Also, from a thermal distribution standpoint, this is a really useful tool, because some customers want to use lower-cost, smaller switches that lack the power and cooling envelope for advanced ZR optics. So you would have a lot of stranded ports. In a sense, you can take that thermal management away from the switch. And so there are multiple applications. We introduced this at OCP, realizing that putting out a multi-tool like this basically enables optics to be connected directly with AECs as a different type of solution.
We were surprised at some of the great ideas that the engineers who came by our booth at OCP came up with. So generally, when we think about this product, we think about it in terms of a combination of the P3 and AECs. We developed the P3 to basically be a catalyst for more AEC demand.
Vijay Rakesh: Got it. And so in better utilizing the stranded ports, I guess, does the P3 with the AEC actually double your content on the server to the rack or —
Bill Brennan: It’s hard to say. I don’t think there’s a relative reference point on content. These are new applications, and with our lead customer, we think that the content can be significant. But the nice thing is, this is really an application where, as we prove it out with our lead customer, we think many service providers will pick it up.
Vijay Rakesh: Got it. And then the last question on your 10% customers: how many were there in the quarter? And if you were to look out, let’s say, exiting calendar 2025, any thoughts on how many 10% customers you think you would be working with?
Dan Fleming: Yeah, so for Q2, we had, as you’ll see, when our Q is filed, we had three 10% end customers. Recall last quarter, we added an additional disclosure to show end customers. So you’ll see the largest one was 29%. Generally, we don’t disclose who our 10% customers are, but obviously the 29% one was Microsoft. Most importantly, we continue to expand our customer base throughout the year. One of the customers — one of those three end customers is a new end customer, as you’ll see in our disclosure. So it’s hard to answer the latter part of your question, how many we’ll add at the end of the year, but I would guess maybe four.
Operator: Thank you. Please stand by for our next question. Our next question comes from the line of Suji Desilva of ROTH MKM.
Suji Desilva: Hi, Bill. Hi, Dan. My question is on the competitive landscape. I’m wondering what you’re seeing in the chip-based AEC efforts, the chip-plus-cable guys competing with you. Are you able to provide a faster time to market? Is that one of the reasons you’re in some of these demo racks, perhaps? And maybe you can talk about the share you think you might have in the AEC market versus its size? Thanks.
Bill Brennan: I think we’ve been consistent in saying that we don’t expect to maintain 100% of the AEC market, and we do see competitors. As this product category becomes more and more established as the de facto way of making short in-rack connections, we do see more competitors. The way that we’re organized, for sure we’re going to be able to deliver better time to market. And what we’re seeing is that for the high-volume applications, customers are asking for special features and special functions, and fundamentally, we take responsibility for the whole solution as a company. Although we’re a chip company, I’ve built a systems organization for AECs. And so we’re the ones that are working directly with the hyperscalers. We’re the ones having daily conversations when crunch time comes.
And so for sure, we’ve got a time-to-market advantage. As for how our market share will ultimately play out, I hope that we maintain more than 50% long term. And I think that’s a function of being first, and a function of having a model that delivers just a better experience for hyperscale customers directly.
Suji Desilva: Okay. All right. Thanks, Bill. And then my other question is on the customer base and where they are in the racks. You talked about Amazon and Microsoft demoing the racks, and they seem like they’re a little bit ahead of the rest of the customer base, but perhaps you can clarify that? And if so, are the other folks really close behind them, or do those guys have maybe a substantial technical lead? Just trying to figure out how the customers may waterfall in for you.
Bill Brennan: Yeah, I think from a timing standpoint, I would expect the third customer would probably ramp in the upcoming two to three quarters, since it takes time for these new platforms to be deployed, and then the fourth customer would be following that by a number of quarters. Each one of these customers is different, in a sense, in the architectures they’ve decided to take to market. So I wouldn’t say the first two are necessarily ahead from a technology standpoint; it’s just that they’ve chosen to move forward more quickly than the others.
Operator: Thank you. Our next question comes from the line of Richard Shannon of Craig-Hallum. Richard, please make sure your line is unmuted and if you’re on a speaker phone, lift your handset.
Richard Shannon: Can you hear me now?
Operator: Yes sir, please proceed.
Richard Shannon: All right, great. Thanks. Dan, I have a question for you, based on a comment in your prepared remarks. I’m not sure if I caught it correctly, but I think you said you had three 10% customers, and including your next largest one, the top four each were supporting a different product line. I think we can all guess what the first one is, but I wonder if you can delineate specifically which product lines each of the next three customers were primarily purchasing?
Bill Brennan: Yeah, it kind of covers the broad gamut of our product lines, actually. So, obviously, the largest one being Microsoft is AEC, but for a long time our Line Card PHY business has been strong. So you could assume that would be in there. Optical DSP, we have been gaining traction there, starting with Q1, as we described last quarter. So — and then our chiplet business we described a bit last quarter as well. So that kind of covers all of the different product lines that are materially contributing at this point in time.
Richard Shannon: Okay. Since you didn’t say it in your prepared remarks, and you have talked about it in this context in the past, you didn’t say DSP was 10%. So should I assume that one of the 10% customers is DSP, or close to it?
Dan Fleming: Yeah, I mean, you could assume it’s near that, if not at it. And we haven’t changed our expectations there. We expect for next fiscal year our target is to be at 10% or more of revenue for optical DSP. And as our first production ramp is occurring with a large hyperscaler, you might expect that we’d have a quarter or two this year where it trips 10% based upon their build schedule.
Richard Shannon: Okay, all right, fair enough. Thanks for that characterization. I guess my second question is on product gross margins. We’ve had a couple of quarters of, I guess, somewhat volatile results, but I think you’re still talking directionally upwards over time here. Maybe specifically on the product gross margins, with the growth in AECs, is it fair to think that that product line’s gross margins have continued to grow? And have they been somewhat steady, or is the volatility coming from that line?
Dan Fleming: Yeah, I would expect that over the long term, most of our product lines will grow a bit in gross margin, really due to increasing scale. That had been a large part of our story last fiscal year with the Microsoft reset. This year, fluctuations in gross margin have really been more about product mix as opposed to scale. Although now that we’re approaching a point where we’ll be exiting the year at record levels of revenue, that scale factor will come in again. So I would expect some uplift in AECs, as well as really across the board, as we stay on target to achieve that 63% to 65% overall gross margin.
Operator: Thank you. Our next question comes from the line of Quinn Bolton of Needham and Company.
Quinn Bolton: Thanks for taking my question. I just wanted to follow up on your comments about both Microsoft Ignite and the re:Invent conference for Amazon. You talked about the Maia 100 accelerator racks, and I think in the Microsoft blog there were certainly lots of purple cables, so it’s great to see. But can you give us some sense of what we’re talking about in that Maia 100 rack? As many as 48 multi-hundred-gig AECs for the back-end network, as well as a number of lower-speed AECs for the front-end network? And then for re:Invent, is Amazon looking at similar architectures, or can you just give us some sense of what the AEC content might look like in some of those AI racks?
Bill Brennan: Yeah, on the Maia platform, I think you’ve got it absolutely right. The back-end network is comprised of 800-gig, or 100-gig-per-lane, AECs. The front-end network is also connected with Credo AECs, and those are lower speed. So you’re right in terms of the total number in the rack, and you could kind of visually see that when they introduced it as part of the keynote. I would say that Amazon is also utilizing Credo AECs for front-end connections as well as back end. And so, just given the nature of those two different types of networks, there are going to be some strong similarities between the architectures.
Quinn Bolton: And Bill, I think in the past, you had talked about some of these AI applications, and I think you’re referring to the back end networks here. You might not ramp until kind of late fiscal 2024, and then maybe not until fiscal 2025. It sounds like at least, in the Microsoft announcement, that they may be starting to ship these racks as early as kind of early next year. And so I’m kind of wondering, could you give us an update? When do you think you see volume revenue from AECs in the back end networks? Could that be over the next couple of quarters or do you still think it may be further out than that?
Dan Fleming: Well, I think that it’s playing out the way that we’ve expected, and we’ve spoken about this on earlier calls: in our fiscal 2024, the types of volume or revenue that we’ve built into the model are really based on qualification and small pilot types of builds. So it’s meaningful, but not necessarily what you would expect to see from a production ramp. And so as we look out into fiscal 2025, we are still being somewhat conservative about when exactly these are going to ramp. It was nice to see all of these things talked about publicly in November; however, deploying these at volume scale is a complicated thing that they’ve got to work through. And so when we talk about when exactly the linear ramp starts, that’s something we’re confident is going to happen in fiscal 2025, but we can’t necessarily pinpoint which quarter.
Quinn Bolton: Understood.
Operator: Thank you. Please stand by. Our next question comes from the line of Tore Svanberg of Stifel. Please go ahead, Tore.
Tore Svanberg: Yes, thank you. I just had a follow-up. So Bill, I think you’ve said in the past that for the AEC business with AI, you’re looking at sort of a 5 times to 10 times opportunity versus general compute. And I guess related to Quinn’s question, sort of the timing of how that plays out is, again, that 5 to 10, primarily on the back end side, or are you also starting to see the contribution on the front end side of the AI clusters?
Bill Brennan: Yeah, so I think generally, as we talk about AI versus general compute, we’re starting to think about it in terms of front-end networks and back-end networks. When we see a rack of AI appliances, of course there’s going to be a front-end network that looks very similar to what we see for general compute. And so to a certain extent, the way it plays out from a ratio perspective, serving the front-end network is really something that’s common to both general compute and AI. You might see a larger number of general compute servers in a rack, so per rack, the front-end opportunity for general compute might be a little bit larger than AI. But generally, when we think about the back-end networks, the network that is really connecting every GPU within a cluster, that’s where we see the big increase in overall networking density.
And Quinn earlier talked about the idea of having 48 connections to the back-end network, 48 AECs within an AI appliance rack that are dedicated to the back-end network, versus, say, if it’s a rack with eight appliances, there’d be eight AECs for the front end. So in an actual appliance rack, we could talk about 5 to 6 times the volume. But then when we think about the switch racks that are part of that back-end network, there’s also an additional opportunity there. And that’s when we can think about the overall opportunity compared to front end being 5 to 10 times the volume.
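The ratio Bill walks through can be checked on the back of an envelope; the figures below are the illustrative numbers from the call, not a product spec or customer commitment:

```python
# Back-of-envelope check on the AEC content ratio for an AI appliance rack,
# using the illustrative counts discussed on the call.

BACK_END_AECS_PER_AI_RACK = 48   # GPU-to-back-end-network connections
FRONT_END_AECS_PER_AI_RACK = 8   # one per appliance, eight appliances per rack

in_rack_ratio = BACK_END_AECS_PER_AI_RACK / FRONT_END_AECS_PER_AI_RACK
print(in_rack_ratio)  # -> 6.0, i.e. roughly the "5 to 6 times" in-rack figure

# The back-end switch racks add further AECs on top of this, which is how
# the overall opportunity stretches toward the "5 to 10 times" range.
```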
Tore Svanberg: That’s very helpful. My last question, and I have to ask you this question just given your strong SerDes IP, relates to the chiplet market. Obviously, the CPU market is the first to embrace that, but are you starting to see the GPU market moving in the direction of chiplets as well, or is it just way too early for that?
Bill Brennan: I think that the standard that Intel has been promoting, the UCIe standard, is going to be a big market for chiplets, and for us that ties in closely with the efforts that we’re making on PCIe. One thing I would note is that the acceleration in speeds is happening really across the board. We’ve been targeting the 64-gig PAM4, PCIe Gen 6 and CXL 3 market, but I also see an acceleration toward the next generation, 128-gig. That’s very much part of what’s happening with this explosion in the AI market: this need for faster and faster speeds. And so I think you’re going to see the same type of thing that’s happened in Ethernet happen with PCIe. At OCP this year, we presented kind of a vision piece on the possibility of CXL, and PCIe, being the protocol for back-end network connectivity as well as an expansion in front-end networks.
So there’s really exciting things coming in the future as we see that standard accelerating.
Tore Svanberg: Great. Thank you so much.
Operator: Thank you. There are no further questions at this time. Mr. Brennan, I turn the call back over to you.
Bill Brennan: Thank you very much for the questions. We really appreciate the participation and we look forward to following up on the call backs. Thank you.
Operator: This concludes today’s conference call. You may now disconnect.