Credo Technology Group Holding Ltd (NASDAQ:CRDO) Q4 2023 Earnings Call Transcript May 31, 2023
Credo Technology Group Holding Ltd reported earnings in line with expectations: EPS of -$0.04 versus expectations of -$0.04.
Operator: Good day, and thank you for standing by. Welcome to the Credo Q4 Fiscal Year 2023 Earnings Conference Call. Please be advised that today’s conference is being recorded. I would now like to go ahead and turn the call over to Dan O’Neil. Please go ahead.
Dan O’Neil: Good afternoon, and thank you all for joining us today for our fiscal 2023 fourth quarter and year-end earnings call. Joining me today from Credo are Bill Brennan, our Chief Executive Officer; and Dan Fleming, our Chief Financial Officer. I’d like to remind everyone that certain comments made in this call today may include forward-looking statements regarding expected future financial results, strategies and plans, future operations, the markets in which we operate, and other areas of discussion. These forward-looking statements are subject to risks and uncertainties that are discussed in detail in our documents filed with the SEC. It’s not possible for the company’s management to predict all risks, nor can the company assess the impact of all factors on its business or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statements.
Given these risks, uncertainties and assumptions, the forward-looking events discussed during this call may not occur, and actual results could differ materially and adversely from those anticipated or implied. The company undertakes no obligation to publicly update forward-looking statements for any reason after the date of this call to conform these statements to actual results or to changes in the company’s expectations, except as required by law. Also during this call, we will refer to certain non-GAAP financial measures, which we consider to be important measures of the company’s performance. These non-GAAP financial measures are provided in addition to, and not as a substitute for or superior to, financial measures prepared in accordance with U.S. GAAP.
A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures is available in the earnings release we issued today, which can be accessed using the Investor Relations portion of our website. With that, I’ll now turn the call over to our CEO. Bill?
Bill Brennan: Thanks, Dan, and good afternoon, everyone. Thank you for joining our Q4 fiscal ’23 earnings call. I’ll begin by providing an overview of our fiscal year ’23 and fiscal Q4 results. I will then highlight what we see going forward into fiscal ’24. Dan Fleming, our CFO, will follow my remarks with a detailed discussion of our Q4 and fiscal year ’23 financial results and share our outlook for the first quarter. Credo is a high-speed connectivity company, delivering integrated circuits, system-level solutions and IP licenses to the hyperscale data center ecosystem, along with a range of other data centers and service providers. All our solutions leverage our core SerDes technology and our unique customer-focused design approach, enabling Credo to deliver optimized, secure high-speed solutions with significantly better power efficiency and cost.
Our electrical and optical connectivity solutions deliver leading performance at port speeds ranging from 50 gig up to 1.6 terabits per second. While we primarily serve the Ethernet market today, we continue to expand into other standards-based markets as the need for higher speed, more power-efficient connectivity increases exponentially. Credo continues to have significant growth expectations within the accelerating market opportunity for high-speed connectivity solutions. In fact, the onset of generative AI applications is already accelerating the need for higher speed and more energy-efficient connectivity solutions. And this is where Credo excels. I’ll start with comments on our fiscal 2023 results. Today, Credo is reporting results from our first full fiscal year as a public company.
In fiscal ’23, Credo achieved just over $184 million in revenue, up 73% over fiscal ’22, and we achieved non-GAAP gross margin of 58%. Product revenue increased 87% year-over-year, primarily due to the ramp of our active electrical cable solutions. License revenue grew 28% year-over-year, from $25 million to $32 million. Throughout fiscal ’23, we had several highlights across our product lines. For active electrical cables, or AECs, we continue to lead the market Credo pioneered during the last several years. Our team continued to quickly innovate with application-specific solutions, and we’ve been successful in expanding our engagement to include multiple data centers and service providers. Our customer-focused innovation has led to more than 20 different versions of AECs shipped for qualification or production in the last year, and we remain sole-sourced in all our wins.
And while our significant power advantage was a nice-to-have a couple of years ago, it’s increasingly becoming imperative as our hyperscaler customers are pushed to lower their carbon footprint. For optical DSPs, Credo continued to build momentum by successfully passing qualification for 200-gig and 400-gig solutions at multiple hyperscalers with multiple optical module partners. In addition, Credo introduced our 800-gig optical DSPs, laser drivers and TIAs, and we announced our entry into the coherent optical DSP market. For line card PHYs, we continue to expand our market leadership. In particular, Credo built upon our position as the leader for MACsec PHYs with over 50% market share. We also extended our performance and power efficiency advantages for 100 gig per lane line card PHYs with the introduction of our Screaming Eagle family of retimers and gearboxes with up to 1.6 terabits per second of bandwidth.
For IP licensing, we continue to build on our offering of highly optimized SerDes IP. In the year, we licensed SerDes IP in several process nodes from 4-nanometer to 28-nanometer with speeds ranging from 28 gig to 112 gig and reach performance ranging from XSR to LR. We believe our ability to innovate to deliver custom solutions remains unparalleled. We maintain very close working relationships with hyperscalers, and we’ll continue to collaborate with them to deliver solutions that are optimized to their needs. Despite recent macroeconomic headwinds in the data center industry, we believe the need for higher speed with better power efficiency will continue to grow. This plays perfectly to Credo’s strengths, which is why we remain optimistic about our prospects in fiscal ’24 and beyond.
I will now discuss the fourth quarter more specifically. In Q4, we delivered revenue of $32.1 million and non-GAAP gross margin of 58%. I’ll now provide an overview of key business trends for the quarter. First, regarding AECs. Market forecasters continue to expect significant growth in this product category due to the benefits of AECs compared to both legacy direct attach copper cables and active optical cables, which are significantly higher power and higher cost. With our largest customer, we’re encouraged by our development progress on several new AEC programs, including an acceleration in the first 100-gig per lane AI program, where they intend to deploy Credo AECs. We saw the initial ramp of a second hyperscale customer, which we expect to grow meaningfully throughout the year.
We’re ramping 50 gig per lane NIC-to-ToR AEC solutions for both their AI and compute applications. And I’m happy to report that Credo has been awarded this customer’s first 100-gig per lane program. We’re also actively working to develop several other advanced AEC solutions for their next-generation deployments. We continue to make progress with additional customers as well. We remain in flight with two additional hyperscalers and are also engaged in meaningful opportunities with service providers. We’ve seen momentum building for AEC solutions across AI, compute and switch applications, and we continue to expect to benefit as speeds move quickly to 100 gig per lane. Regarding our progress on optical solutions: in the optical category, we’ve leveraged our SerDes technology to deliver disruptive products, including DSPs, laser drivers and TIAs for 50 gig through 800 gig port applications.
We remain confident we can gain share over time due to our compelling combination of performance, power and cost. In addition to the hyperscalers that have previously production-qualified Credo’s optical DSPs, we started the production ramp of a 400-gig optical DSP for a U.S. hyperscaler as the end customer. At OFC in March, we received very positive feedback on our market solutions, including our Dove 800 products, as well as on our announcement to enter the 100-gig ZR coherent DSP market. We’re well positioned to win at hyperscalers across a range of applications, including 200 gig, 400 gig and 800 gig port speeds. We’re also engaged in opportunities for Fibre Channel, 5G, OTN and PON applications with optical partners, service providers and networking OEMs. Now turning to our line card PHY category.
During the fourth quarter, we saw growing interest in our solutions, specifically for our Screaming Eagle 1.6 terabit per second PHYs. We’ve already been successful winning several design commitments from leading networking OEMs and ODMs for the Screaming Eagle devices. Credo was selected due to our combination of performance, signal integrity, power efficiency and cost effectiveness. We also made significant development progress with our customer-sponsored next-generation 5-nanometer 1.6 terabit per second MACsec PHY, which we believe will extend our leadership well into the future. Regarding our SerDes IP licensing and SerDes chiplet businesses, our IP deals in Q4 were primarily led by our 5 and 4-nanometer 112-gig SerDes IP, which, according to customers, offers a significant power advantage versus competition based on our ability to power optimize to the reach of an application.
Our SerDes chiplet opportunity continues to progress. Our collaboration with Tesla on their Dojo supercomputer design is an example of how connectivity chiplets can enable advanced next-generation AI systems. We’re working closely with customers and standards bodies such as the UCIe Consortium to ensure we retain leadership as the chiplet market grows and matures. We believe the acceleration of AI solutions across the industry will continue to fuel our licensing and chiplet businesses. To sum up, the hyperscale landscape has shifted swiftly and dramatically in 2023. Compute is now facing a new horizon, which is generative AI. We expect this shift to accelerate the demand for energy-efficient connectivity solutions that perform at the highest speeds.
From our viewpoint, this technology acceleration increases the degree of difficulty and will naturally slim the field of market participants. We remain confident that our technology innovation and market leadership will fuel our growth as these opportunities materialize. We expect to grow sequentially in Q1 and then continue with sequential quarterly revenue growth throughout fiscal ’24. We believe our growth will be led by multiple customers across our range of connectivity solutions, which will result in a more diversified revenue base as we exit fiscal ’24. I’ll now turn the call over to our CFO, Dan Fleming, who will provide additional details. Thank you.
Dan Fleming: Thank you, Bill, and good afternoon. I will first provide a financial summary of our fiscal year ’23, then review our Q4 results and, finally, discuss our outlook for Q1 and fiscal ’24. As a reminder, the following financials will be discussed on a non-GAAP basis, unless otherwise noted. Revenue for fiscal year ’23 was a record at $184.2 million, up 73% year-over-year, driven by product revenue that grew by 87%. Gross margin for the year was 58.0%. Our operating margin improved by 13 percentage points even as we grew our product revenue mix. This illustrates the leverage that we can produce in the business. We reported earnings per share of $0.05, a $0.18 improvement over the prior year. Moving on to the fourth quarter.
In Q4, we reported revenue of $32.1 million, down 41% sequentially and down 14% year-over-year. Our IP business generated $5.7 million of revenue in Q4, down 55% sequentially and down 49% year-over-year. IP remains a strategic part of our business, but as a reminder, our IP results may vary from quarter to quarter, driven largely by specific deliverables on preexisting contracts. While the mix of IP and product revenue will vary in any given quarter, our revenue mix in Q4 was 18% IP, above our long-term expectation for IP, which is 10% to 15% of revenue. We continue to expect IP as a percentage of revenue to come in above our long-term expectations for fiscal ’24. Our product business generated $26.4 million of revenue in Q4, down 37% sequentially and flat year-over-year.
Our team delivered Q4 gross margin of 58.2%, above the high end of our guidance range and down 94 basis points sequentially due to lower IP contribution. Our IP gross margin generally hovers near 100% and was 97.4% in Q4. Our product gross margin was 49.7% in the quarter, up 245 basis points sequentially and up 167 basis points year-over-year, due principally to product mix. Total operating expenses in the fourth quarter were $27.2 million, within guidance, up 6% sequentially and 25% year-over-year. Our year-over-year OpEx increase was a result of a 36% increase in R&D as we continue to invest in the resources to deliver innovative solutions. Our SG&A was up 12% year-over-year as we built out public company infrastructure. Our operating loss was $8.5 million in Q4, a decline of $10.7 million year-over-year.
Our operating margin was negative 26.4% in the quarter, a decline of 32.2 percentage points year-over-year due to reduced top line leverage. We reported a net loss of $5.7 million in Q4, $8.3 million below last year. Cash flow used by operations in the fourth quarter was $11.8 million, a decrease of $14.2 million year-over-year due largely to our net loss and changes in working capital. CapEx was $3.9 million in the quarter driven by R&D equipment spending and free cash flow was negative $15.7 million, a decrease of $8.4 million year-over-year. We ended the quarter with cash and equivalents of $217.8 million, a decrease of $15.2 million from the third quarter. This decrease in cash was a result of our net loss and the investments required to grow the business.
We remain well capitalized to continue investing in our growth opportunities while maintaining a substantial cash buffer for uncertain macroeconomic conditions. Our accounts receivable balance increased by 14.6% sequentially to $49.5 million, while days sales outstanding increased to 140 days, up from 72 days in Q3 due to lower revenue. Our Q4 ending inventory was $46.0 million, down $4.3 million sequentially. Now turning to our guidance. We currently expect revenue in Q1 of fiscal ’24 to be between $33 million and $35 million, up 6% sequentially at the midpoint. We expect Q1 gross margin to be within a range of 58% to 60%. We expect Q1 operating expenses to be between $26 million and $28 million. We expect Q1 basic weighted average share count to be approximately 149 million shares.
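[Editor's note: as a quick sanity check on the figures Dan cites, the arithmetic below reproduces the stated 140-day DSO and the 6% sequential guidance midpoint; the 91-day quarter length is an assumption, not a figure from the call.]

```python
# Sanity check of the Q4 figures cited on the call.
# Assumption: a 91-day fiscal quarter (not stated on the call).
q4_revenue = 32.1           # $M, total Q4 revenue
accounts_receivable = 49.5  # $M, Q4 ending accounts receivable
days_in_quarter = 91

# Days sales outstanding = receivables / revenue per day
dso = accounts_receivable / (q4_revenue / days_in_quarter)
print(round(dso))  # 140, matching the "140 days" cited

# Q1 guidance midpoint ($33M-$35M) vs. Q4 revenue
q1_midpoint = (33 + 35) / 2
sequential_growth = q1_midpoint / q4_revenue - 1
print(f"{sequential_growth:.1%}")  # 5.9%, i.e. "up 6% sequentially at the midpoint"
```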
We feel we have moved through the bottom in the fourth quarter. While we see some near-term upside to our prior expectations, we remain cautious about the back half of our fiscal year due to uncertain macroeconomic conditions. In summary, as we move forward through fiscal year ’24, we expect sequential revenue growth, expanding gross margins due to increasing scale, and modest sequential growth in operating expenses. As a result, we look forward to driving operating leverage as we exit the year. And with that, I’ll open it up for questions. Thank you.
Q&A Session
Follow Credo Technology Group Holding Ltd
Operator: Thank you. The first question that we have is coming from Tore Svanberg of Stifel. Your line is open.
Tore Svanberg: Yes. Thank you. For my first question, in regards to the Q1 guidance, as far as what’s driving the growth: given your gross margin comment, I assume that AEC will probably continue to be down, with perhaps the growth coming from, kind of, DSP and IP. Is that sort of the correct thinking? Or if not, please correct me.
Dan Fleming: Hi, Tore, this is Dan. So you’re correct in that, if you look at the sequential increase in gross margin from Q3 to Q4, while our product revenue was down, that’s really reflective of a favorable product mix, where AEC, which as we all know is on the lower end of our margin profile, contributed less of the overall product mix. That trend will continue in Q1. And I would characterize the growth as broadly across all of our other product lines, not really singling out one specific product line that’s taking up the slack from AEC, so to speak.
Tore Svanberg: Sounds good. And as my follow-up question for you, Bill, with generative AI, as you mentioned in your script, things are clearly changing. I was just hoping you could talk a little bit more granular about how it impacts each business. I’m even thinking about sort of the 800-gig PAM4 cycle. I mean is that getting pulled in? So yes, I mean, how – if you could just give us a little bit more color on how generative AI could impact each of your four business units at this point? Thank you.
Bill Brennan: Sure, sure, absolutely. So I think, generally, AI applications will create revenue opportunities for us across our portfolio. I think the largest opportunity that we’ll see is with AEC. However, with optical DSPs, there will definitely be a big opportunity there as well. Even line card PHYs, chiplets, even SerDes IP licensing will get an uplift as AI deployments increase. So maybe I can start first with AEC. Now, it’s important to identify the differences from traditional compute server racks, which use what’s commonly referred to as the front-end network: basically a NIC-to-ToR connection, then the ToR up to the leaf and spine network. The typical compute rack would have 10 to 20 AECs in rack, meaning in-rack connections from NIC to ToR.
And I’d highlight that the leading-edge lane rate today for these connections with compute servers is 50 gig per lane. Within an AI cluster, in addition to the front-end network, which is similar, there’s a back-end network referred to as the RDMA network. And that basically allows the AI appliances to be networked together within a cluster directly. And if we start going through the math, this back-end network has 5 to 10x the bandwidth of the front-end network. The other important thing to note is that within these RDMA networks, there are leaf-spine racks as well. And so if we look at one example of a customer that we’re working with in deploying, the AI appliance rack itself will have a total of 56 AECs between the front-end and back-end networks.
Each leaf-spine rack is a Clos rack, or disaggregated chassis, which will have 256 AECs. And so when we look at it from an overall opportunity for AEC, this is a huge uplift in volume. The volume coincides with the bandwidth. Now, lane rates will move quickly; certain applications will go forward at 50 gig per lane, others will go straight to 100 gig per lane. And so we see probably a 5x-plus revenue opportunity difference between the typical compute server racks, apples-to-apples, versus an AI cluster. So to extend that into optical: there’s typically a large number of AOCs in the same cluster. So you can imagine that the short in-rack connections are going to be done with AECs. These are three meters or less.
But these appliances will connect to the back-end leaf-spine racks, these disaggregated racks, and all of those connections will be AOCs. Those are connections that are greater than three meters. And so if we look at this, this is all upside to, say, a traditional compute deployment, where there are really no AOCs connecting rack to rack. Okay? So when we look at the overall opportunity, we think that the additional AEC opportunity within an AI cluster is probably twice as large, with twice as many connections, as AOCs. But the AOC opportunity for us will be significant in the sense that AOCs represent the most cost-sensitive portion of the optical market. And so it’s also a lower technology hurdle, since the optical connection is well defined, and it’s within the cable.
So this is a really natural spot for us to be disruptive in this market. We see some of our customers planning on deploying with 400-gig AOCs. Others are planning to go straight to 800-gig AOCs. So we view AEC as the largest opportunity. Optical DSPs for sure will get an uplift in the overall opportunity set. But also, if we look at Tesla as an example, that’s an example of where, as they deploy, we’re going to see a really nice opportunity for the chiplets that we did for them for that Dojo supercomputer. And it’s an example of how AI applications are doing things completely differently, and we view that, long term, this will be kind of a natural thing for us to benefit from. We can extend that to SerDes IP licensing. Many of the licenses that we’re doing now are targeting different AI applications.
And also, don’t forget line card PHYs; the opportunity with the network OEMs and ODMs is also increasing. And of course, line card PHYs are something that go on those switch line cards that are developed. So generally speaking, I think that AI will drive faster lane rates. And we’ve been very, very consistent with our message that as the market hits the knee in the curve on AI deployments, we’re naturally going to see lane rates go more quickly to 100 gig per lane. And that’s where we really see our business taking off. So we’re getting a really nice revenue increase from 50 gig per lane applications, but we really see acceleration as this move to 100 gig per lane happens. And especially when you start thinking about the power advantages that all of our solutions offer compared to others that are doing similar things.
That might have been more than you were looking for, but…
Tore Svanberg: No, that’s a great overview. Thank you so much, Bill. That was great. Thank you.
Bill Brennan: Sure.
Operator: Thank you for your question. And one moment while we prepare for the next question. And the next question will be coming from Quinn Bolton of Needham & Company. Your line is open.
Quinn Bolton: Thank you very much for taking my question. Bill, maybe a follow-up to Tore’s question, just sort of the impact of generative AI on the business. Given that most of your AEC revenue today comes from the standard compute racks rather than AI racks, what do you see in terms of potential cannibalization at least in the near term, as these hyperscalers prioritize building out the AI racks potentially at the expense of compute deployments again in the near term?
Bill Brennan: So I feel very good about how we’re positioned. It is the case that our first ramp with our largest customer was a compute rack. I think we’re very well positioned with our customer as they transition to AI deployments. So we’ve talked in the past about two different types of deployments at the server level. Of course, compute will continue, and we can all guess as to what the ratio is going to be between compute and AI. We’ve got the road map very well covered for compute. So I think we’re well set. And so as that resumes at our largest customer, I think we’re going to be in good shape. I’m actually more excited about the acceleration of the AI program that we’ve been working on with the same customer for close to a year.
And so I feel like we’re well covered for both compute and AI, and that’s really a long-term statement. So a little bit of new information, I would say, is with our second hyperscale customer, just to give an update generally on that and then relate it back to the same point that I was making about the earlier customer. We are right on track with the AEC ramp. The first program is the compute server rack that we’ve talked about. We saw small shipments in Q4, and we expect to see a continued ramp through fiscal ’24. However, during the past several months, a new AI application has taken shape. If we had talked 100 days ago, we wouldn’t have talked about this program. And so we quickly delivered a different configuration of the AEC that was designed for the compute server rack.
So if you recall, we did a straight cable as well as a Y-cable configuration. They asked us to deliver a new configuration that had specific changes that were needed for their deployment. And we delivered the new configuration within weeks, which is another example of the benefit of how we’re organized. The qualification is underway, and we expect this AI appliance rack to also ramp in our fiscal ’24. It’s unclear as to the exact schedule from a time standpoint and a volume standpoint. But we feel like this is going to be another significant program for us. And so I think that for both our first and our second hyperscale customers, we’re covering the spectrum between compute and AI. So I feel like we’re really in great shape.
So hopefully, that answers your question. If I take it a little bit further and say, okay, long term, let’s say it’s 80% compute, 20% AI. Because the opportunity for us is 5 times larger in AI, the opportunities might be similar in size if the ratio is like that; compute might be equal to AI from an AEC perspective. I think that any way that goes, we’re going to benefit. If it goes 50-50, that’s a big upside for us with AEC, given the fact that there’s larger volume and larger dollars associated with an AI cluster deployment. And so I think that for us, it won’t affect us one way or another; maybe in the near-term quarters, yes. But the situation at our first customer really hasn’t changed since the last update. So we think that the year-over-year increase in revenue for that customer will happen in FY ’25, as we’ve discussed before.
Quinn Bolton: Okay. But no further push out or delay of the compute rack at the first hyperscaler given the potential reprioritization to AI in the near term?
Bill Brennan: Well, the new program qualifications, we’ve talked about two of them, are still scheduled in the relatively near future. And of course, as those get qualified and ramp, we’ll see benefit from that. But it’s a little bit tough to track month by month, right? That’s a little bit too specific from a time frame standpoint. So we’ve seen a slight delay, but it’s not something that we’re necessarily concerned about.
Quinn Bolton: Got it, Bill. And then just a clarification on the second hyperscaler. I think in the last update, you said you may not yet have a hard forecast for that hyperscaler’s needs on the AEC side. Have you received sort of a hard PO, or at least a more reliable forecast, that you’re now sort of forecasting that business from in fiscal ’24?
Bill Brennan: Yes, it’s coming together. It’s coming together. And I think we feel comfortable saying that the revenue that will be generated by this second customer will be significant. And I’m not exactly able to talk about how significant. I think that we’re – we continue to view this through a conservative lens because we really don’t know how the second half is going to shape up. But all the indicators that we’ve heard over the last 90 days are quite positive. And I think Dan referenced the fact that in Q2, we expect significant material revenue as that starts.
Quinn Bolton: Perfect. Thank you.
Operator: Thank you. One moment while we prepare for the next question. And our next question will be coming from Suji Desilva of Roth Capital. Your line is open.
Suji Desilva: Hi, Bill. Hi, Dan. I just wanted to talk about the AEC products. You have multiple products, and I just want to know if there are certain ones that are more relevant to an AI rack versus a traditional compute rack? Or are they all applicable across the board?
Bill Brennan: So I would say that I wouldn’t classify all of these solutions together, I wouldn’t lump them together. We’re very much looking at the AEC opportunity as one where we’re positioned to implement really customer-specific requests. And so part of what we’re seeing is that most of the designs we’re engaged in now have something very specific to a given customer. And I can say that we’re seeing a large number of customers moving to 100 gig per lane quickly, but we’re also seeing customers that are reconfiguring existing NICs and building different AI appliances with those NICs. And so they’re going to be able to ramp with 50 gig per lane solutions. Now, as far as configurations go, we see straight cable opportunities.
We see Y-cable opportunities. We see opportunities where, just recently, we had a customer ask us to have 100 gig on one end of the cable and 50 gig on the other end of the cable. And so obviously, that’s a breakout cable. But it’s an interesting challenge because this is the first time we’ll be mixing different generations of ICs. And so again, this is something we’re able to do because we’re so unique, in the sense that we have a dedicated organization internal to Credo that’s responsible for delivering these system solutions. It’s really that single party that’s responsible for collaborating with the customer, designing, developing, delivering, qualifying and then supporting the designs with our customers. And I can’t emphasize enough that when you give engineers at these hyperscalers the opportunity to innovate in this space, they come up with ideas that have never been thought of.
And it’s something that we’re getting really good uptake on. And of course, our investment in the AEC space is really unmatched by any of our competition. I think we’re unique in the sense that we can offer this type of flexibility. So to answer your question, I couldn’t really point to one type of cable that is going to be leaned on.
Suji Desilva: That’s helpful in painting the picture of how the cable is being deployed here. And then also, I believe in the prepared remarks, Bill, you mentioned 20 AECs being qualified for shipments, if I heard that right. I’m curious across how many customers or how many programs that is, just to understand the breadth of that qualification effort?
Bill Brennan: Yes, I would say that there’s a set of hyperscalers that are really the large opportunity within the industry for AECs. But we’ve also had a lot of success with data centers that might not qualify as a capital-H hyperscaler, as well as service providers. And so we can look at the relationships with hyperscalers directly, and there are several SKUs that we’ve delivered. And there’s even more in the queue for these more advanced next-generation systems. But even if you look at the number of $1 million per quarter or per year customers that we’ve got, the list is really increasing. The product category, I think, has really been solidified over the last 6 to 9 months. And you see that also because a lot of companies are announcing that they intend to compete longer term.
Suji Desilva: Right, okay. Thanks, Bill.
Operator: Thank you. One moment while we prepare for the next question. And our next question will be coming from Karl Akerman of BNP. Your line is open.
Karl Akerman: Thank you. I have two questions. Good afternoon, Dan and Bill. I guess, first off, it’s great to see the sequential improvement in your fiscal Q1, but I didn’t hear you confirm your fiscal ’24 revenue outlook from 90 days ago. And I guess, could you just speak to the visibility you have on your existing programs that gives you confidence in the sequential growth that you spoke about throughout fiscal ’24? If you could just touch on that, that would be helpful.
Dan Fleming: Yes. Thanks, Karl. This is Dan. So generally speaking, as we’ve described, we see some near-term upside, but we still remain cautiously optimistic about the back half of the year. So we’re very comfortable ultimately with the current estimates for the back half. We certainly have increasing visibility as time passes, and we hope to provide meaningful updates over the upcoming quarters. But we’re working hard to expand these growth opportunities for FY ’24 and beyond, and we remain very encouraged with what we’re seeing, especially with the acceleration of AI programs.
Karl Akerman: Got it. Understood. Thanks for that. I guess as a follow-up, of the DSP opportunity that you’ve highlighted in the prepared remarks, are you seeing your design engagements primarily in fiscal ’24 on coherent offerings? Or are you seeing more opportunities in DCI for your 400 gig and 800 gig opportunities? Thank you.
Dan Fleming: Yes. So the large opportunities that we’re seeing are really within the data center. And I can say that it’s across the board – 200 gig, 400 gig and 800 gig – all of these hyperscalers have different strategies as to how they’re deploying optical. I think we continue to make progress with 200 and 400. And I think we’re in a really good position from a time-to-market perspective on 800 gig. And so we can talk about the cycles that we’re spending with every hyperscaler. We’re also aligning ourselves very closely in a strategic go-to-market strategy with select optical module companies. And as it relates to DCI and coherent specifically, we’re in development for the first solution that we’re pursuing, which is 100 gig ZR. And we feel like that development will take place throughout this year and that we’ll see first revenue in the second half of calendar ’24.
But as far as 400 gig goes, that would really be a second, follow-on DCI opportunity for us. Now in the ZR space, we’re going to be unique because we’ll market and sell the DSP to optical module makers. And so we intend to engage 3 to 4 module makers in addition to our partner, EFFECT Photonics, and that makes us somewhat unique in the sense that other competitors are going directly to market with the ZR module. I’d highlight that power is really an enabler here. The key thing is we can do a 100-gig ZR module and fit under the power ceiling for a standard QSFP connector, which is roughly 4.5 watts. So there’s a large upgrade cycle from 10 gig modules that this will enable, but there are also new deployments in addition. So that gives you a little bit of flavor on coherent, but I really see our opportunities more within the data center.
Karl Akerman: All right. Thank you.
Operator: Thank you. One moment while we prepare for the next question. And our next question will be coming from Vivek Arya of Bank of America. Your line is open.
Vivek Arya: Thanks for taking my question. Bill, I’m curious to get your perspective on some of these technology changes. One is the role of InfiniBand, which is getting more share in these AI clusters. What does that do to your AEC opportunity? Is that a competitive situation? Is that a complementary situation? And then the other technology question: some of your customers and partners have spoken about their desire to consider co-packaged optics and linear direct drive type architectures. What does that do to the need for stand-alone pluggables?
Bill Brennan: Thanks. I appreciate the opportunity to talk about Ethernet versus InfiniBand because there’s been a lot said about that. Generally, we see coexistence. Depending on how you look at the market forecast information, there is a point soon in the future where Ethernet exceeds InfiniBand for AI specifically. Beyond AI, I think it’s game over already. Whether you measure the TAM in ports or dollars, Ethernet is forecasted to far exceed InfiniBand in the out years, so calendar ’25 and beyond. From an absolute TAM perspective, in dollar terms, forecasters are showing that Ethernet surpasses InfiniBand by 2025. And the forecasts show a CAGR for Ethernet of greater than 50%, while for InfiniBand they’re showing a CAGR of less than 25%.
And you can also look at this from a port cost perspective, where InfiniBand is 2 to 4 times the ASP per port compared to Ethernet, depending on who you talk to. So in a sense, it’s no secret that the world will continue to do what the world does; they’ll pursue cost-effective decisions. And we think from a technology standpoint, they’re very similar. So from a cost perspective, apples-to-apples, if you think that an InfiniBand port is 2 to 4x the cost of an Ethernet port, in a sense you could justify that 1 to 3 of those Ethernet ports are free in comparison to InfiniBand. So our position here is that we really believe Ethernet is going to prevail. We’re working on so many AI programs, and every single one of them is Ethernet.
Vivek Arya: And then, Bill, on the move by some of your customers to think about co-packaged optics and direct drive – and maybe let me just ask Dan a follow-up on fiscal ’24 first. I think, Dan, you suggested you are comfortable with where expectations are right now. That still implies a very steep ramp into the back half. So I’m just trying to square the confidence in the full year with some of the macro caution that came through in your comments.
Dan Fleming: Yes, we are confident in how we have guided. And as I mentioned, we’re very comfortable with the current estimates. If we look at FY ’24, as you alluded to, Vivek, we see strong sequential top line growth throughout the year in order to achieve those numbers. And it’s kind of well documented what’s happened with Microsoft for us this fiscal year. So if we exclude Microsoft, what that means is we have in excess of 100% year-on-year growth of product revenue from other customers, which, again, we’re very confident, based on all of the traction that we’ve seen recently, that we’ll be able to achieve. And of course, I’ll just reiterate that one of the key drivers is AI in some of those programs. So hopefully, that gives you some additional color on our confidence for FY ’24.
Bill Brennan: Yes, regarding your question about linear direct drive – that was, I think, this year’s shiny object at OFC. The idea is really about how to address the power challenges by basically moving away from the optical DSP. This is not a new idea. There was a big move towards linear direct drive in the 10 gig space when that was emerging, and the fact is there are really none in existence; I think the DSP has shown that it is really critical to closing the system. Our feeling is that we’ll see much of the same this year. I think Marvell did a great job in setting expectations correctly. They did a long session right after OFC that I think addressed it quite well.
I think you’ll see small deployments where every link in the system is very, very controlled. But these are typically very, very small in terms of the overall TAM. Now, if the real goal is power, that’s exactly where we play. So we’re fully signed up to looking at unique approaches in the future to be able to offer compelling things from a power perspective. And it’s not like I’m completely dismissing the concept that was really behind the idea of linear direct drive. We’re actually viewing that as a potential opportunity for us to solve the problem differently. But generally speaking, I don’t think you’ll see a future where linear direct drive is measured in any kind of significant way. That’s not to say that people aren’t spending money trying to prove it out right now; that is happening.
And regarding CPO, that was something that was talked about for many, many years prior. And I think on that as well, you’ll see smaller deployments if that’s ultimately something that some customers embrace. But I don’t think you’ll see it in a big way. That’s simply not what the customer base is looking for.
Vivek Arya: Thank you.
Operator: Thank you. One moment for the next question. And our next question will be coming from David Wu of Mizuho. Please go ahead.
Q – Unidentified Analyst: Yes. This is David on for Vijay from Mizuho. My first question: assuming that in fiscal ’25, data center demand in general improves and you see continued new AI ramps, can you provide any more color on the puts and takes there and the type of operating leverage you can achieve?
Bill Brennan: Well, we’re not giving specific guidance yet for fiscal ’25. But you’re right in that the ingredients certainly exist for operating leverage. We should exit FY ’24 with pretty robust operating leverage, and we would expect, based on what we know now, that to carry forward into FY ’25. But we haven’t framed yet, of course, what that’s going to ultimately look like.
Q – Unidentified Analyst: Okay. Sure. And I guess for my second question, when you’re talking with hyperscalers on these new AI applications, how important is sort of your TCO advantage when they’re exploring your solution? Or are they currently kind of just primarily focused on time-to-market and maximum performance and just getting their AI deployments out there?
Bill Brennan: So I just want to make sure you said total cost of ownership?
Q – Unidentified Analyst: Yes, yes.
Bill Brennan: Yes. I think it’s hands down in favor of AEC. So if we look at 100 gig lane rates, I think the conclusion throughout the market is that there are two ways to deploy short cable solutions: it’s really AEC or AOC. From a CapEx standpoint, AECs are about half the cost – about half the ASP for an apples-to-apples solution. From an OpEx standpoint, also about half the power. So I think the TCO benefit is significant. The other thing you’ve got to consider, especially when you’re down in server racks – these are different than switch racks in the sense that having a failure with your cable solution becomes a very urgent item. And when we think about AOCs, the reliability in terms of number of years is probably anywhere from one third to one tenth that of the AECs we sell – we talk about a 10-year product life.
And so it kind of matches or exceeds the life of the rack that is being deployed, and the same cannot be said for any kind of optical solution. So I think across the board, hands down, the TCO is much more favorable for AEC.
Q – Unidentified Analyst: Okay. Thank you.
Operator: Thank you. And our next question will be coming from Quinn Bolton of Needham & Company. Your line is open.
Quinn Bolton: Thanks. Two quick follow-ups. One, Dan, was there any contra revenue in the April quarter?
Dan Fleming: That’s an excellent question, Quinn. I’m glad you caught that. There was, and you will see that when we file our 10-K. In the past, you’ve been able to see that in our press release in our GAAP to non-GAAP reconciliation. But from Q4 and going forward, we’re no longer excluding that contra revenue from our non-GAAP financials. And this really came about through a comment from the SEC – not singling out Credo, but actually all Amazon suppliers with whom Amazon has a warrant. The positive side of this change is that you’ll still be able to track ultimately what that warrant expense is when we file our Qs and Ks. And looking historically, it doesn’t really make a reporting difference on a non-GAAP basis.
The difference was not material. And it just makes the calculation a little more straightforward going forward. Our only non-GAAP reconciling item going forward, at least for the foreseeable future, is really just share-based compensation.
Quinn Bolton: Got it. So the revenue doesn’t change, you just won’t be making the adjustments for the contra revenue in the non-GAAP gross margin calculation going forward?
Dan Fleming: That’s exactly correct. Yes. Revenue is still revenue. It has a portion of it, which is contra-revenue, which obviously brings down the revenue a little bit.
Quinn Bolton: Got it. Okay. And then for Bill, would you expect in fiscal ’24, a meaningful initial ramp of the 200 or 400 gig optical DSPs? Or would you continue to encourage investors sort of think that the optical DSP ramp is really beyond a fiscal ’24 event at this point?
Bill Brennan: I think that when we think about significant, we think about crossing the 10% of revenue threshold, and we don’t see that until fiscal ’25. We do see signs of life in China. And as I said, we’re shipping 400-gig optical DSPs to a U.S. hyperscaler now. My expectation is throughout the year, we’re going to have a lot more success stories to talk about, but those ramps will most likely not take place within the next 3 quarters. So it’s really a fiscal ’25 target at this point.
Quinn Bolton: Got it. But it starts this year; it’s just not going to be meaningful because it doesn’t hit the 10% threshold?
Bill Brennan: Right, exactly.
Quinn Bolton: Got it, okay. Thank you.
Operator: Thank you. One moment. While we have a follow-up question. And that question will be coming from Tore Svanberg of Stifel. Your line is open.
Tore Svanberg: Yes. Tore here. Bill, maybe a follow-up to the previous question about 200 and 400 gig – I’m a little bit more curious about 800 gig. Are you seeing any changes at all to the timelines there? I think the expectation was that the 800-gig market would maybe take off in the second half of next calendar year. But with all these new AI trends, I’m just wondering if you’re seeing any pull-in activity there, or maybe even some cannibalization versus 200 gig and 400 gig?
Bill Brennan: My expectation is that this is really a calendar year ’24 type of market takeoff. And whether it’s the second half or first half – we, of course, would like to see it in the first half, given that that would imply success in pulling in AI programs. And there’s a lot of benefit that comes with 800 gig modules, and the implication that has for our AEC business. But I definitely see it in that time frame. I don’t really see it as a cannibalization of 200 and 400 gig – unless you look at it as these new deployments being in lieu of the old technology. But like I said before, every hyperscaler has their own strategy related to the port size that they plan on deploying.
Everybody’s got a unique architecture. And where we see optical is typically in the leaf-spine network, for anything above the TOR. In AI, I think the real opportunity is going to be with AOCs. And that, I think, is going to be a very large 800-gig market when those AI clusters really begin deployment, which, again, I think could be in calendar ’24. So I appreciate the question, though.
Tore Svanberg: Thank you.
Operator: Thank you. That concludes the Q&A for today. I would like to turn the call back over to Bill Brennan for closing remarks. Please go ahead.
Bill Brennan: We really appreciate the participation today, and we look forward to following up on the call backs. So thanks very much.
Operator: This concludes today’s conference call. Thank you all for joining, and everyone, enjoy the rest of your evening.