Astera Labs, Inc. Common Stock (NASDAQ:ALAB) Q3 2024 Earnings Call Transcript November 4, 2024
Astera Labs, Inc. Common Stock beats earnings expectations. Reported EPS is $0.23, expectations were $0.17.
Operator: Hello. Good afternoon. My name is Jeremy, and I will be your conference operator today. At this time, I would like to welcome everyone to the Astera Labs Q3 2024 Earnings Conference Call. All lines have been placed on mute to prevent any background noise. After management’s remarks, there will be a question-and-answer session. [Operator Instructions]. Thank you. I will now turn the call over to Leslie Green, Investor Relations for Astera Labs. Leslie, you may begin.
Leslie Green: Thank you, Jeremy, and good afternoon, everyone, and welcome to the Astera Labs third quarter 2024 earnings conference call. Joining us on the call today are Jitendra Mohan, Chief Executive Officer and Co-Founder; Sanjay Gajendra, President and Chief Operating Officer and Co-Founder; and Mike Tate, Chief Financial Officer. Before we get started, I would like to remind everyone that certain comments made in this call today may include forward-looking statements regarding, among other things, expected future financial results, strategies and plans, future operations and the markets in which we operate. These forward-looking statements reflect management’s current beliefs, expectations and assumptions about future events which are inherently subject to risks and uncertainties that are discussed in today’s earnings release and the periodic reports we file from time to time with the SEC, including risks set forth in the final prospectus relating to our IPO.
It is not possible for the company's management to predict all risks and uncertainties that could have an impact on these forward-looking statements or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statement. In light of these risks, uncertainties and assumptions, the results, events or circumstances reflected in the forward-looking statements discussed during this call may not occur, and actual results could differ materially from those anticipated or implied. All of our statements are made based on information available to management as of today, and the company undertakes no obligation to update such statements after the date of this call as a result of new information, future events or changes in our expectations, except as required by law.
Also during this call, we will refer to certain non-GAAP financial measures, which we consider to be an important measure of the company’s performance. These non-GAAP financial measures are provided in addition to and not as a substitute for or superior to financial results prepared in accordance with U.S. GAAP. A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures is available in the earnings release we issued today, which can be accessed through the Investor Relations portion of our website and will also be included in our filings with the SEC, which will also be accessible through the Investor Relations portion of our website. With that, I would like to turn the call over to Jitendra Mohan, CEO of Astera Labs.
Jitendra Mohan: Thank you, Leslie. Good afternoon, everyone, and thanks for joining our third quarter conference call for fiscal year 2024. Today, I will provide a quick Q3 financial overview, discuss several of our recent company-specific product and strategic announcements, followed by a discussion around the key secular trends driving our market opportunity. I will then turn the call over to Sanjay to delve deeper into our growth strategy. Finally, Mike will provide additional details on our Q3 results and our Q4 financial guidance. Astera Labs delivered strong Q3 results, setting our fifth consecutive record for quarterly revenue at $113 million, which was up 47% from the last quarter and up 206% versus the prior year. Our business has entered a new growth phase with multiple product families ramping across AI platforms, featuring both third-party GPUs and internally developed AI accelerators, which drove the Q3 sales upside versus our guidance.
We also demonstrated strong operating leverage during Q3, with non-GAAP operating margin expanding to over 32%, and delivered non-GAAP EPS of $0.23. Looking into Q4, we expect our revenue momentum to continue, largely driven by the Aries PCIe and Taurus Ethernet product lines. The Scorpio Fabric Switch family is continuing to ship in preproduction volumes. The criticality of connectivity in modern AI clusters continues to grow with trillion-parameter model sizes, multistep reasoning models and faster, more complex AI accelerators. These developments present a tremendous opportunity for Astera Labs' intelligent connectivity platform to enhance AI server performance and productivity with our differentiated hardware and software solutions. At the start of Q4, we announced our fourth product line, the Scorpio Smart Fabric Switch family, which expands our mission of solving the increasingly complex connectivity challenges within AI infrastructure, both for scale-out and for scale-up networks.
Extending beyond our current market footprint of PCI Express and Ethernet Retimer-class products and controller-class devices for CXL memory, our Scorpio Smart Fabric Switch family delivers meaningfully higher functionality and value to our AI and cloud infrastructure customers. We estimate that Scorpio will expand the total market opportunity for our four product families to more than $12 billion by 2028. The Scorpio family unlocks a large and growing opportunity across AI server head node scale-out applications with the P-Series and AI accelerator scale-up clustering use cases with the X-Series. The P-Series devices directly address the challenge of keeping modern GPUs fed with data at ever-increasing speeds, while the X-Series devices improve the efficiency and size of AI clusters.
The Scorpio family was purpose-built from the ground up for these AI-specific workloads with close alignment with our hyperscaler and AI platform partners. At the recent 2024 OCP Global Summit, we demonstrated the industry’s first PCIe Gen 6 fabric switch, which is currently shipping in preproduction volumes for AI platforms. We are happy to report that we already have design wins of both Scorpio P-Series and X-Series and that our recent product launches further accelerated strong customer and ecosystem interest. Over the coming quarters, we expect to further expand our business opportunities for the Scorpio product family across PCIe Gen 5, PCIe Gen 6 and platform-specific customized connectivity platforms. Further expanding our market opportunity, Astera Labs has joined the Ultra Accelerator Link, UALink Consortium as a promoting member on the Board of Directors, along with industry-leading hyperscalers and AI platform providers.
This important industry initiative places us at the forefront of developing and advancing an open, high-speed, low-latency interconnect for scale-up connectivity between accelerators. Astera Labs' deep expertise in developing advanced silicon-based connectivity solutions, along with a strong track record of technology execution, makes us uniquely suited to contribute to this compelling and necessary industry initiative. We see a shift towards shorter AI platform refresh cycles, and hyperscalers are increasingly relying on their trusted partners as they deploy new hardware in their data center infrastructure at an unprecedented pace and scale. To date, we have demonstrated strong execution with our Aries, Taurus, Leo and now Scorpio product families.
Our products increase data, networking and memory bandwidth and capacity, and our COSMOS software provides our hyperscaler customers with the tools necessary to monitor and observe the health of their expensive infrastructure to ensure maximum system utilization. To conclude, Astera Labs' expanding product portfolio, including the new Scorpio Smart Fabric Switches, is cementing our position as a critical part of AI connectivity infrastructure, delivering increased value to our hyperscaler customers and unlocking additional multiyear growth trajectories for Astera Labs. With that, let me turn the call over to our President and COO, Sanjay Gajendra, to discuss some of our recent product announcements and our long-term growth strategy.
Sanjay Gajendra: Thanks, Jitendra, and good afternoon, everyone. We are pleased with our Q3 results and strong financial outlook for Q4. Overall, we believe we have entered a new phase at the company based on two key factors. First is the increased diversity of our business. In 3Q, our business diversified significantly with new product lines and form factors going to high-volume production. We also started ramping on multiple new AI platforms based on internally developed AI accelerators at multiple customers, to go along with the continued momentum with third-party GPU-based AI platforms. Together, these helped us achieve record sequential growth in 3Q. Second, with the introduction of the Scorpio Smart Fabric Switches, our market opportunity has significantly expanded.
This new product line delivers increased value to our hyperscaler customers and, for Astera Labs, unlocks higher dollar content in AI platforms and additional multiyear growth trajectories. Let me explain our business drivers in more detail. As noted, we now have multiple product lines, generations and form factors in high-volume production. This includes Aries Smart DSP Retimers and Smart Cable Modules for PCIe 5.0 and Taurus Smart Cable Modules for 200-gig and 400-gig Ethernet active electrical cables. We are also shipping preproduction volumes of Leo CXL memory controllers, Aries Retimers for PCIe 6.0 and Scorpio fabric switch solutions for PCIe head node connectivity. All these new products deliver increased value to our customers and therefore command higher ASPs. Our first-to-market Scorpio PCIe Gen 6 fabric switch addresses a multibillion-dollar opportunity with a ground-up architecture designed for AI data flows and delivers maximum predictable performance per watt in mixed-mode PCIe head node connectivity compared to incumbent solutions.
We are currently shipping Scorpio P-Series in preproduction quantities to support qualification for customized AI platforms based upon leading third-party GPUs. Interest in Scorpio P-Series has accelerated since the formal launch given its differentiated features, and as a result, we are engaging in multiple design opportunities across a diverse spectrum of AI platforms. These include both PCIe Gen 6 and PCIe Gen 5 implementations on third-party GPU and internal accelerator-based platforms. Overall, we are very bullish on the market for our entire product portfolio across third-party GPU-based systems, with increasing dollar content per GPU on our new design wins, and we believe that Astera's opportunity with internally developed AI accelerator platforms can be even larger, with opportunities both in the scale-out and back-end scale-up clustering use cases.
This additional market opportunity has unlocked design activity for the Aries and Taurus product lines for reach extension within and between racks, and for our newly introduced Scorpio X-Series Fabric Switches for homogeneous accelerator-to-accelerator connectivity at maximum bandwidth. The Scorpio X-Series is built upon our software-defined architecture and leverages our COSMOS software suite to support a variety of platform-specific customizations, which provide hyperscalers with valuable flexibility. As we look ahead to 2025, we will begin to ramp designs across new internally developed AI accelerator-based platforms that will incorporate multiple Astera Labs product families, including Aries, Taurus and Scorpio. As a result, we will continue to benefit from increased dollar content per accelerator in these next-generation AI infrastructure deployments.
Though we remain laser focused on AI platforms, we continue to see large and growing market opportunities within the general-purpose compute space for our PCIe, Ethernet and CXL product families. While the transition to faster bandwidth requirements within general purpose computing trails the leading adoption across AI systems, the market size remains substantial. Our Aries business within general purpose compute is poised to benefit from the transition of PCIe peripherals to Gen 5 speeds and the introduction of new CPU generations from Intel and AMD. We are also ramping volume production of our Taurus SCMs for front end networking across hyperscaler general-purpose server platforms. We expect to see broadening adoption of Leo CXL memory controllers across the ecosystem as CXL capable server CPUs are deployed in new cloud infrastructure over the coming years.
In summary, we believe Astera Labs has entered a new growth phase, and we are well positioned to outpace industry growth rates through a combination of strong secular tailwinds and the expansion of our silicon content opportunity in AI and general-purpose cloud platforms. We have become a trusted and strategic partner to our customers, with over 10 million smart connectivity solutions widely deployed and field tested across nearly all major AI infrastructure programs globally. The introduction of the Scorpio Smart Fabric Switch family and our strategic involvement with the UALink standard for scale-up connectivity are the next critical steps in our corporate journey. We are hard at work collaborating with our partners to identify and develop new technologies that will expand Astera's footprint from retimer solutions for connectivity over copper within the rack to fabric and optical solutions that connect AI accelerators across the data center.
While we have come a long way, there is a remarkable sense of urgency and energy within the company for the opportunities that lie ahead. With that, I will turn the call over to our CFO, Mike Tate, who will discuss our Q3 financial results and our Q4 outlook.
Mike Tate: Thanks, Sanjay, and thanks to everyone for joining the call. This overview of our Q3 financial results and Q4 guidance will be on a non-GAAP basis. The primary difference in Astera Labs' non-GAAP metrics is stock-based compensation and its related income tax effects. Please refer to today's press release, available on the Investor Relations section of our website, for more details on both our GAAP and non-GAAP Q4 financial outlook as well as a reconciliation of our GAAP to non-GAAP financial measures presented on this call. For Q3 of 2024, Astera Labs delivered record quarterly revenue of $113.1 million, which was up 47% versus the previous quarter and 206% higher than revenue in Q3 of 2023. During the quarter, we shipped products to all major hyperscalers and AI accelerator manufacturers, and we recognized revenue across all four of our product families.
Our Aries product family continues to be our largest sales contributor and helped drive the upside in our revenues this quarter. Aries revenues are being driven by continued momentum with third-party GPU-based AI platforms as well as strong ramps on new platforms, featuring internally developed AI accelerators from multiple hyperscaler customers. Also, in Q3, Taurus revenue started to diversify beyond 200 gig applications with an initial production ramp of our 400-gig Ethernet-based systems which are designed into both AI and general-purpose compute systems. Q3 Leo CXL revenues continue to be driven by our customers’ purchasing preproduction volumes for the development of their next generation CXL capable compute platforms. Lastly, we began to ship preproduction volumes of our recently announced Scorpio Smart Fabric Switch family during Q3.
Q3 non-GAAP gross margin was 77.8%, down 20 basis points compared with 78% in Q2 of 2024 and better than our guidance of 75%, driven by higher sales volume and a favorable product mix. Non-GAAP operating expenses for Q3 were $51.2 million, up from $41.1 million in the previous quarter, driven by greater-than-expected hiring conversion during the quarter as we aggressively pushed to support additional commercial opportunities. Within Q3 non-GAAP operating expenses, R&D expenses were $36 million, sales and marketing expenses were $7 million and general and administrative expenses were $8.3 million. Non-GAAP operating margin for Q3 was 32.4%, up from 24.4% in Q2, demonstrating strong operating leverage as revenue growth outpaced higher operating expenses.
Interest income in Q3 was $10.9 million. Our non-GAAP tax provision was $7.3 million for the quarter, which represented a tax rate of 15% on a non-GAAP basis. Non-GAAP fully diluted share count for Q3 was 173.8 million shares, and our non-GAAP diluted earnings per share for the quarter was $0.23. Cash flow from operating activities for Q3 was $63.5 million, and we ended the quarter with cash, cash equivalents and marketable securities of $887 million. Now turning to our guidance for Q4 of fiscal 2024. We expect Q4 revenue to increase to within a range of $126 million to $130 million, up roughly 11% to 15% from the prior quarter. For Q4, we expect continued strong growth from our Aries product family across a diverse set of AI platforms, some of which are just starting to ramp, and also from our Taurus SCMs for 400-gig applications, as well as additional preproduction shipments of our Scorpio P-Series switches.
We expect non-GAAP gross margins to be approximately 75%. The sequential decline in gross margin is driven by an expected product mix shift towards hardware solutions during the quarter. We expect non-GAAP operating expenses to be in a range of approximately $54 million to $55 million as we continue to expand our R&D resource pool across headcount and intellectual property. Interest income is expected to be approximately $10 million. Our non-GAAP tax rate should be approximately 10%, and our non-GAAP fully diluted share count is expected to be approximately 179 million shares. Adding this all up, we are expecting non-GAAP fully diluted earnings per share in a range of approximately $0.25 to $0.26. This concludes our prepared remarks. And once again, we appreciate everyone joining the call.
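The guided EPS range can be roughly reconciled from the other inputs above, and the Q3 actuals tie out the same way. The short Python sketch below is only a back-of-envelope check of the arithmetic stated on the call, using the rounded figures as disclosed; it is not a company-provided model, and small differences versus the reported and guided EPS come from rounding.

```python
# Back-of-envelope reconciliation of the non-GAAP figures stated on the call.
# Illustrative outside sketch only, not a company model; inputs are the
# rounded numbers disclosed above, so results match to within rounding.

def implied_eps(revenue_m, gross_margin, opex_m, interest_m, tax_rate, shares_m):
    """Implied non-GAAP diluted EPS (dollars) from revenue ($M), gross margin,
    operating expenses ($M), interest income ($M), tax rate and diluted
    share count (millions)."""
    gross_profit = revenue_m * gross_margin
    operating_income = gross_profit - opex_m
    pretax_income = operating_income + interest_m
    net_income = pretax_income * (1.0 - tax_rate)
    return net_income / shares_m

# Q3 2024 actuals: $113.1M revenue, 77.8% gross margin, $51.2M opex,
# $10.9M interest income, 15% tax rate, 173.8M shares -> ~$0.23 as reported.
q3 = implied_eps(113.1, 0.778, 51.2, 10.9, 0.15, 173.8)

# Q4 2024 guidance: $126M-$130M revenue, ~75% gross margin, $54M-$55M opex,
# ~$10M interest income, ~10% tax rate, ~179M shares. The endpoints work out
# to roughly $0.25-$0.27, bracketing the guided $0.25 to $0.26.
q4_low = implied_eps(126.0, 0.75, 55.0, 10.0, 0.10, 179.0)
q4_high = implied_eps(130.0, 0.75, 54.0, 10.0, 0.10, 179.0)

print(f"Q3 implied EPS ~${q3:.2f}; Q4 implied EPS ~${q4_low:.2f}-${q4_high:.2f}")
```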
And now we will open the line up for questions. Operator?
Operator: All right. Thank you. [Operator Instructions]. Our first question comes from Harlan Sur from JPMorgan. Please go ahead.
Harlan Sur: Good afternoon, and congratulations on the strong results. Your core Retimer business looks very strong this year and looks strong next year. The majority of the XPU shipments are still going to be, I think, Gen 5 based, where your market share is still somewhere in the range of, I think, 95%. And your customers, both merchant and ASIC, are ramping new SKUs starting in the second half of this year and first half of next year. We've also got the ramp of your Gen 6 products, Retimers and Scorpio Switch products, with your lead GPU customer, which is ramping now, plus AEC is firing and SCM is firing. So given all of this activity, I assume your visibility and backlog is quite strong. Maybe you can just qualitatively articulate your confidence in driving sequential growth over the next several quarters, or at least visibility for a strong first half of next year versus the second half of this year?
Sanjay Gajendra: Yes. Thanks, Harlan. Yes, right now, our visibility is very strong, both, as always, with our backlog position, but also with the breadth of designs we have. Right now, we're really entering a new phase of growth here where our revenue streams are clearly diversifying. If you look at right now versus a year ago, we're very excited about the breadth of designs and product lines that we're ramping on. So in the back half of this year, obviously, Taurus has been very incremental, and that continues in Q4, and the programs that we're ramping on Taurus are just getting started. So we have good confidence that Taurus will continue to grow nicely into 2025. Aries, as you alluded to, the Gen 5 story still has a lot of legs.
We have both with the merchant GPUs, but also on these internal ASIC designs, a lot of those products are just starting. And with the Gen 5, we have both the front-end connectivity and the back-end connectivity. Incrementally, Gen 6 will start to play out as well. And with Gen 6, we get an ASP boost on top of that. And then finally, with Leo, we’ve been talking about Leo for quite a while, but these are new types of technologies that are being adopted in the marketplace, and we’re happy to report that we do have line of sight to our first production shipments starting in the middle of the year next year.
Harlan Sur: Perfect. Thank you for that. And then on the early traction you're getting with your Scorpio switch portfolio, the team has talked about some of the performance benefits, both at the silicon level and chip level. But how much of the differentiator is your COSMOS software stack, which your customers have already built into their data center management systems and adopted with your Aries Retimer products? Now you've got the same software stack integrated into your switch products, which enables telemetry, link performance, link monitoring, tuning of parameters, all that kind of stuff, which is so critical for data center managers. So is the software stack familiarity and current adoption a big differentiator in the traction with Scorpio?
Jitendra Mohan: Harlan, this is Jiten. And you're absolutely right. There are several things that differentiate the Scorpio family. First and foremost, I would say that we built this from the ground up for AI applications. If you think about historical deployments of PCI Express switches, they were generally built for server applications, for storage and things like that, which are quite different from AI. So we designed the chip for AI applications. We put in the performance that's required for these AI applications. In addition, even the form factor has been designed for AI, so rather than building a large switch, we ended up building a smaller device such that you are not running these high-speed signals all over the board. So all of that is very purpose-built for AI.
Now to your point about software, as you remember, our chips are architected with a software-first approach. So we can deliver a lot of customization based on COSMOS, which is something that our hyperscaler customers are looking for, especially for the X family, which is deployed in more homogeneous GPU-to-GPU connectivity. Where the Scorpio family sits, we have access to a lot more diagnostic information, and we can couple that with the information that we are collecting from our other deployed families, such as Aries and even Taurus, to provide a holistic view of the AI infrastructure to the data center operators. So both the purpose-built nature of these devices on the hardware side as well as the software stack that comes with it is a big differentiator for us.
Harlan Sur: Very insightful. Thank you.
Operator: Our next question comes from the line of Joe Moore from Morgan Stanley. Please go ahead.
Joe Moore: Great. Thank you. And congratulations on the numbers. I wonder if you could talk about the Scorpio business in terms of — you gave a number for 2028. Can you give us a sense for what that ramp looks like? Any kind of qualitative discussion of how big it could be in calendar ’25, would be very helpful. Thank you.
Sanjay Gajendra: Yes. Thanks, Joe. Yes. What's exciting is, since our public release, we're seeing a lot of incremental excitement from the customer base. And what's really neat about the Scorpio product family is the diversity of designs that we're seeing. So clearly, being first with Gen 6, there's a lot of interest in that application. But there are still a lot of Gen 5 opportunities developing that we're addressing as well, because a Gen 6-capable switch is backwards compatible. So both design opportunities are open for us. And then incrementally, we have the X-Series that does the back-end connectivity, and that's kind of a greenfield opportunity as well. So these designs are generally a little more customized systems.
So the bring-up and qualification cycle is a bit longer. So we take a conservative view on how they ramp, like we always do with most of our business. But overall, we expect Scorpio to exceed 10% of our revenues in 2025 as the deployments get into production during the course of the year, and to exit the year at a very good run rate with good momentum going into 2026.
Joe Moore: That’s great. Thank you. And then on Leo, you talked about some of the ramps there. I guess this application, in particular, of these large memory servers being able to actually reduce the CPU count and maintain this high-memory bandwidth. Can you just talk about that application? And are you seeing that as a material factor next year as well?
Sanjay Gajendra: Yes. So Joe, this is Sanjay here. Yes. So I think, like we have been maintaining, with any new technology, it takes a little bit of time for things to mature. So right now, the way we look at CXL is it's a transition from sort of the crawl stage to the walk stage. There are three or four key use cases that have emerged. One of them is what you noted, which is the large in-memory database applications. And there, the use case becomes one of how do you enable more memory capacity. In the past, this was done by adding additional CPUs into the server box to provide for more memory channels. But what we have demonstrated is that by using Leo, you're not only able to get higher performance from the added memory, but from an overall TCO standpoint, it's significantly less expensive than adding additional CPUs. So that's one key use case that we see from a deployment standpoint.
But having said that, I think at OCP this year, you might have seen some of our videos and all that, there has been a tremendous amount of different platform-level solutions being deployed for various high-bandwidth applications, HPC applications, including some of the rack-level disaggregation types of use cases. So to that standpoint, there are many different ways in which the technology can develop, but just like any new technology, it will take some time before the requisite ecosystem and software is built. So we are in that period right now, getting those pieces in place, and 2025 is when we expect the production volumes to begin.
Joe Moore: Great. Thank you.
Operator: Our next question comes from the line of Blayne Curtis from Jefferies. Please go ahead.
Blayne Curtis: Hi, good afternoon and congrats on the results. I wanted to ask you, last quarter you talked about the September growth. I think it was like $20 million, and you kind of loosely said it was one-third Retimers, one-third PCIe cabling and one-third Taurus. I'm not expecting you to dial us in completely, but you kind of doubled that. So I'm just curious about the strength you're seeing between your products, if you could add a little bit more color. And then also just between AI accelerators and GPUs, obviously, the big GPU vendor has a ramp coming with Blackwell, but that's not exactly now. So I'm just kind of curious what's driving the strength in September and December a little bit more?
Mike Tate: Yes. Thanks, Blayne. Yes. In Q3, our business really benefited from the strong contribution of multiple product lines. Aries SCM and Taurus both had really big, strong ramp quarters. Those ramps were kind of consistent with our expectations. The upside to the guidance was driven largely by Aries revenue, both for the third-party GPUs, but also from the strong ramps on new platforms with internally developed AI accelerators. And we're seeing that across multiple hyperscaler customers, so it's not just one. So the upside was largely driven by that Aries revenue.
Blayne Curtis: Thanks. And then, Mike, I want to ask you about the addition of the Scorpio product line. Before, you kind of talked about how, when some of the products like Taurus or the PCIe modules ramp, it would be a little bit dilutive to margins because it's not a chip sale. How do you think about it if you do have switches greater than 10%, maybe switch margins versus Aries? How do we think of that blend next year?
Mike Tate: Yes. So for Scorpio, there will be a broader range of margins. There are different use cases, so it depends on the functionality and the volume. But at this point, we believe Scorpio will not impact our long-term gross margin target of 70%, and it was contemplated when we set up those targets. I'd say overall, beyond just Scorpio, though, we are seeing a wider range of margin profiles across our product portfolio. So mix will be an important contributor from a quarter-to-quarter perspective. But we still feel good about the 70% target.
Blayne Curtis: Thanks, Mike.
Operator: Our next question comes from the line of Thomas O’Malley from Barclays. Please go ahead.
Thomas O’Malley: Hi, guys. Thanks for taking my questions. My first one is just on the X-Series for Scorpio. I think this is the first real kind of backend switch that we’ve seen in the marketplace for PCIe. Could you talk about your positioning there? How far you think you’re ahead? And would you expect the same kind of competitive dynamic that you’re seeing in the P-Series switch? Just talk about where you are competitively and just from an opportunity perspective, do you think longer term, the X Series is a bigger opportunity than the P-Series?
Jitendra Mohan: Tom, let me take the question. This is Jitendra. So you have three questions in there, so you don't get a follow-up. So for the first question, let me address the P-Series first. The P-Series is actually broadly applicable to all of the accelerators. All of the accelerators require connectivity from the GPU or the accelerator to the head node or to the NICs and so on. So the P-Series is applicable to all of them. The P family supports PCI Express Gen 6, so that's where the deployment will happen. I already mentioned some of the advantages of the family, but at the same time, we should not underestimate the Gen 5 socket. There are also Gen 5 designs that are taking place right now, both with the third-party GPUs as well as with internally developed ASICs.
So I think that's the market opportunity we see with the P-Series. We estimate the TAM for the P-Series to be about $1 billion, plus or minus, today and growing to double that over time. But you're also correct that we do think, over the longer period, the X-Series will have a bigger TAM. The TAM today is nearly zero. It's not very commonly used outside of the Nvidia ecosystem. We do expect many hyperscalers to start deploying this, starting with the X family and the designs that we have. And we are able to do that because of the architecture that we have. Because of our software-defined architecture, we can customize many parts of the X-Series to cater to the specific requirements of the hyperscalers, both on the side of performance, the exact configuration and port counts that they require and so on, and also the diagnostics framework that they require to monitor their infrastructure.
So over time, we do expect X-Series to become larger. Now I also mentioned during the remarks that we have joined the UALink consortium and that gives us another market, another opportunity where we can play with back-end interconnect.
Thomas O’Malley: Helpful. Let me sneak in another one. I know I broke the rules there on the first one. But the second is just on the Taurus product family. So historically, you’ve been concentrated within one customer. Can you talk about the breadth of your engagements there as that kind of expanded to multiple customers? And when you look out into next year is that going to be largely consolidated to one or two customers or do you see that kind of proliferating across multiple?
Sanjay Gajendra: Yes. This is Sanjay here. Let me take that. Yes. So 2025 is a year in which we think the business will get broader. As we've always said, AEC, or active electrical cable, is a case-by-case situation, meaning it's not like every hyperscaler uses active electrical cables. So to that standpoint, we do expect, as data rates go higher with 800 gig and so on, that market to be much more diversified than what it is today. So with that said, today, if you look at our business, we do have our AEC modules, or Taurus modules, going into both AI and compute platforms. There are different kinds of cables in terms of the various configurations that they need to support. So overall, I want to say we are fairly diversified with our business today, but as the speeds increase in 2025 and beyond, we do expect that the customer base will continue to evolve. With the note, like I highlighted, that every infrastructure will be different, the places where AEC would be used will differ between the various hyperscalers.
Operator: Our next question comes from the line of Tore Svanberg from Stifel. Please go ahead.
Tore Svanberg: Yes. Thank you and congrats on the strong results. You mentioned that PCIe Gen 6 is going to be in preproduction this quarter. When do you expect it to be an actual volume production? Would that be in the first half of next year, Q1?
Sanjay Gajendra: I want to say it depends on the customers' timelines, so we don't want to speak for any of our customers on what they are communicating. But in general, what I would say, similar to what we've shared with you in the past, is that our design wins are in customized rack implementations. So to that standpoint, the timing of qualification and deployment would be based on that. But in general, between the initial design wins we had and where we are now, we are engaging with multiple opportunities, both for Gen 5 and Gen 6 and both for third-party GPUs as well as internally developed GPUs. So to that standpoint, our opportunity on Scorpio continues to grow. And as you look at 2025 overall, like Mike suggested, we do expect our contribution from Scorpio to be in excess of 10% of our overall revenue.
Tore Svanberg: That's very helpful. And I had a follow-up question on Scorpio in relation to your PCIe Retimer business. So would those typically pull each other? Meaning, could there be instances where there's a PCIe switch with somebody else's Retimer, or do they pretty much go hand in hand, especially given the customer software?
Sanjay Gajendra: Yes. So if you look at today's deployments with Gen 5, at least from an industry snapshot standpoint, it's mix and match, right, meaning our Retimers get used with switch products from other vendors. Because of our software-based architecture, we are able to uniquely customize and optimize for different system-level configurations. So that is what it is today. Going forward, with COSMOS, we do see an advantage because we have integrated the management framework, the customization framework and the optimization type of feature set across all of our products. Meaning, if a customer is using COSMOS today for Aries, they will very easily be able to extend the infrastructure that they've already built to run on top of our Scorpio devices. That's a unique advantage we bring compared to some of the alternatives out there.
Tore Svanberg: Very helpful. Thank you.
Operator: Our next question comes from Ross Seymore from Deutsche Bank. Please go ahead.
Ross Seymore: Hi, thanks. So, yes, a couple of questions, and congrats on the strong results. The first question: Jitendra, you mentioned, and Sanjay, you too, the importance of the ASIC side of the business really ramping up strongly. What was the inflection point that's really driving that? Is it something where the market itself is just getting stronger? Or is there something about the inflection point of your technology being adopted and penetrating the market faster?
Sanjay Gajendra: Yes. So I think in terms of the ASIC designs, it's fairly public knowledge now that all the hyperscalers have doubled down on the amount of investment they're doing in their own internal ASIC programs. The third-party GPUs, obviously, have done a great job, but hyperscalers are also starting to realize where the money is in terms of their AI use cases and workloads. So to that standpoint, we have been seeing an increased investment from hyperscalers in their internal programs. And we are, of course, addressing those across all of our product lines. So if you look at our business today, like we highlighted in the prepared remarks, we have truly entered a new phase in terms of our overall business, where we not only have the third-party GPU-based designs, we also have several internally developed accelerator-based AI platforms.
And then we have multiple of our product lines ramping on these platforms. One additional point I would note is that for the internally developed AI platforms, we get to play not only in the front-end network, connecting the GPU to the CPU and storage, we also get to play in the back end, which generally tends to be, like I call it, fertile land where there are multiple connectivity requirements that we can service with our Aries and Taurus product lines and now, of course, with the Scorpio X-Series product line.
Ross Seymore: Thanks for that. And I guess as my one follow-up, a quarter ago we were having significant debates about your statements about the average content per GPU. Obviously, that's not as big a topic today now that we know about Scorpio in more detail. But if I revisit that, you still talked on this call about the average content going up. Is that just because of Scorpio, something you had in your back pocket before that you obviously couldn't tell us about? Or do you still believe the Retimer content in some way, shape or form will be bigger on most of these platforms going forward, especially for the third-party merchant suppliers?
Sanjay Gajendra: Yes. So I think when we talked about it before, there were two reasons we highlighted. One is, generally speaking, with each new generation of a protocol like PCIe going from Gen 5 to Gen 6, there is an ASP uplift. That's number one. Number two, of course, we were hinting at the Scorpio product line, which, because of the value it delivers to customers, is at a higher ASP, as you can imagine. So overall, if you look at the design wins we have today, the dollar content per GPU goes up; that's one way to look at it based on what we've shared before. The other way to look at it is that for internally developed platforms, we get to play also in the back-end network, like I noted, and we get to address some of the scale-out networks that are based on Ethernet using our Taurus modules.
So overall, if you look at sort of the increasing speeds, the additional product lines, as well as the fact that the internally developed, AI accelerator-based platforms are starting to gain more and more traction, when you look at all of them, on average, our content is on the up.
Ross Seymore: Thank you.
Operator: Our next question comes from the line of Quinn Bolton from Needham. Please go ahead.
Quinn Bolton: Hey, thanks for squeezing me in. Just a follow-up clarification, maybe. For the Scorpio family being over 10% of revenues, is that largely from the P-Series? Or would you expect any X-Series production revenue in 2025, given the longer, I think, design cycle for the back-end scale-up networks?
Mike Tate: We have designs for both — both P and X will contribute to revenues. The P designs will generally be first, but we do see X starting in the back half of the year as well.
Quinn Bolton: Excellent. And then can you give us some sense for the P-Series and the X-Series? On the Retimer side, I think the market generally looked at the Aries Retimer attach rate per GPU or per accelerator. Is there any framework you can provide for us for the P-Series and X-Series? Would you expect a typical attach rate per GPU or accelerator? Would that be 1:1? Would it be less than 1:1? Could it be higher than 1:1? Any thoughts on attach rates for the P-Series and X-Series? Thank you.
Sanjay Gajendra: Yes. So it's a very broad question because there are all kinds of implementations out there. At a high level, I'll share three points. The first is that the P-Series is broadly applicable, in the sense that it could work for a third-party GPU or an internally developed accelerator, because every accelerator, no matter where it comes from, needs to connect to the head node side, which generally includes the networking, storage as well as the CPU. So to that standpoint, that will be a very broadly used device. And where it's used, it's 1:1, meaning every GPU would need one of our Scorpio P-Series devices. That's number one. Number two is the X-Series. Now these are generally used for GPU-to-GPU interconnect, right?
So to that standpoint, depending on the configuration, the number of devices is a function of the number of ports that an X Series device exposes and really depends on how the back-end fabric is built. And to that standpoint, again, it truly depends on how the configuration is being built. And this one, like Mike noted, it’s a greenfield use case, meaning if you keep Nvidia and NV Switch aside, everyone else is starting to build configurations that are obviously going to need some kind of a switching functionality, which is what we are addressing with our X Series device. So that’s the second point to keep in mind. And then in general, what I would say is that overall, depending on where things are and how big of a chip that we’re building, the attach rate will continue to evolve.
But in general, the dollar content that we’re talking about is expected to continue to grow both because we are adding more functionality with devices like Scorpio. And at the same time, we are seeing additional pull-in for products like Aries and Taurus and other things that we’re working on.
Operator: Our next question comes from the line of Mark Lipacis from Evercore ISI. Please go ahead.
Mark Lipacis: Hi, thanks for taking my question. A question on diversification. If you think longer term, you can diversify by your customer base and then by your product line. So pick your time in the future, three years out, five years out: what do you think your split will be between the merchant GPU players versus the custom AI accelerator players, in terms of how your products will be attached to either type of solution? And then let's just say, three years out, you have five product lines. Would you expect to still have a skew to one? Or would you expect to have, like, 20% in each product line bucket or something like that? If you could help us out with how you're thinking about diversification three years out, I think that would be helpful. And then I have a follow-up. Thanks.