Astera Labs, Inc. Common Stock (NASDAQ:ALAB) Q4 2024 Earnings Call Transcript February 11, 2025
Operator: Hello. Good afternoon. My name is Joe, and I will be your conference operator today. At this time, I would like to welcome everyone to the Astera Labs Q4 ‘24 Earnings Call. All lines have been placed on mute to prevent any background noise. After management’s remarks, there will be a question-and-answer session. [Operator Instructions] Thank you. I will now turn the conference over to Leslie Green, Investor Relations of Astera Labs. Leslie, you may begin.
Leslie Green: Thank you, Joe. Good afternoon, everyone, and welcome to the Astera Labs fourth quarter 2024 earnings conference call. Joining us today on the call today are Jitendra Mohan, Chief Executive Officer and Co-Founder; Sanjay Gajendra, President and Chief Operating Officer and Co-Founder; and Mike Tate, Chief Financial Officer. Before we get started, I would like to remind everyone that certain comments made in this call today may include forward-looking statements regarding, among other things, expected future financial results, strategies and plans, future operations and the markets in which we operate. These forward-looking statements reflect management’s current beliefs, expectations and assumptions about future events which are inherently subject to risks and uncertainties that are discussed in detail in today’s earnings release and the periodic reports and filings we file from time to time with the SEC, including the risks set forth in the final prospectus relating to our IPO and our upcoming filing on Form 10-K.
It is not possible for the company’s management to predict all risks and uncertainties that could have an impact on these forward-looking statements or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statement. In light of these risks, uncertainties and assumptions, the results, events or circumstances reflected in forward-looking statements discussed during this call may not occur, and actual results could differ materially from those anticipated or implied. All of our statements are made based on information available to management as of today, and the company undertakes no obligation to update such statements after the date of this call to conform to these as a result of new information, future events or changes in our expectations, except as required by law.
Also during this call, we will refer to certain non-GAAP financial measures, which we consider to be an important measure of the company’s performance. These non-GAAP financial measures are provided in addition to and not as a substitute for or superior to financial results prepared in accordance with U.S. GAAP. A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures is available in the earnings release we issued today, which can be accessed through the Investor Relations portion of our website and will also be included in our filings with the SEC, which will also be accessible through the Investor Relations portion of our website. With that, I would like to turn the call over to Jitendra Mohan, CEO of Astera Labs.
Jitendra?
Jitendra Mohan: Thank you, Leslie. Good afternoon, everyone, and thanks for joining our fourth quarter conference call for fiscal year 2024. Today I’ll provide an overview of our Q4 and full year 2024 results, followed by a discussion around the key secular trends and the company-specific drivers that will help Astera Labs to deliver above-industry growth in 2025 and beyond. I will then turn the call over to Sanjay to discuss our medium- and long-term growth strategy in more detail. Finally, Mike will provide details of our Q4 ‘24 financial results, in addition to our financial guidance for Q1 of 2025. Astera Labs delivered strong Q4 results and set our sixth consecutive record for quarterly revenue at $141 million, which was up 25% from last quarter and up 179% versus Q4 of the prior year.
Revenue growth during Q4 was primarily driven by our Aries PCIe Retimer and Taurus Ethernet Smart Cable Module product families. Within these product families, we saw additional diversification driven by strong demand for both AI scale-up and scale-out connectivity. Leo and Scorpio momentum also continued, with both products shipping in pre-production volumes during the fourth quarter to support our customers’ qualifications across multiple platforms for a variety of use cases. Looking at the full year, our strong Q4 finish culminated in an outstanding 2024, which saw full year sales increase by 242% year-over-year to $396 million. The value of our expanding product portfolio across both hardware and software was reflected in a robust fiscal 2024 non-GAAP gross margin of 76.6%.
Over the past 12 months, we have aggressively broadened our technology capabilities by investing in our R&D organization to solve next-generation connectivity infrastructure challenges. We successfully increased our headcount in 2024 by nearly 80%, to 440 full-time employees. In Q4, we also closed a small but strategic acquisition that brought in a talented group of architects and engineers to help accelerate our product development, strengthen our foundational IP capability and provide holistic connectivity solutions for our hyperscaler customers at rack scale. Our revenue growth in 2024 was largely driven by Aries products, along with a strong ramp of Taurus in the fourth quarter. We expect 2025 to be a breakout year as we enter a new phase of growth driven by production revenue from all four of our product families to support a diverse set of customers and platforms.
In 2025, our Aries and Taurus Retimer products are on track to continue their strong growth trajectory. Also, Astera Labs is poised to be a key enabler of CXL proliferation over the next several years, with the volume ramp of our Leo family expected to start in the second half of ‘25. Finally, our Scorpio Smart Fabric Switches will begin ramping this year, with new and broadening engagements for scale-up with our X-Series and scale-out with our P-Series switches. In time, we expect Scorpio Fabric Switches to become our largest product line, given the size and growth of the market opportunity for AI fabrics. Across our industry, hyperscalers are pushing the boundaries of scale-up compute to support large language models that continue to grow in capacity and complexity.
Recent algorithmic improvements have shown the potential to deliver AI applications with better return on investment for AI infrastructure providers. These innovations enable increased adoption and broader use cases for AI across the industry. The secular trends underlying our business are projected to be robust in 2025, driven by growing CapEx investments by hyperscalers in AI and cloud infrastructure. Hyperscalers are deploying internal ASIC-based rack-scale AI servers that use end-to-end scale-up networks to deliver larger, higher-performance and more efficient clusters. These scale-up networks require every accelerator to connect with every other accelerator with fully non-blocking, high-throughput and low-latency data paths. This drives the need for more and faster interconnects, and the homogeneity of such a system allows for many optimizations and innovations.
For PCIe-based scale-up clusters, our innovative Scorpio X-Series and Aries Retimer families are perfectly suited to provide custom-built interconnect solutions. In addition to PCIe-based scale-up opportunities, we are excited about the next, potentially broader opportunity with Ultra Accelerator Link, or UALink. This is an impactful initiative by the AI industry to develop an open scale-up interconnect fabric for the entire market. Early in 2025, significant progress has been made advancing the development of UALink to provide the industry with a high-speed, scale-up interconnect for next-generation AI clusters. The UALink Consortium recently expanded its Board of Directors to include several more technology leaders, including Alibaba Cloud and Apple.
Given our intimate involvement within this open standard, we are seeing overall engagement accelerate for Astera Labs’ next-generation high-speed connectivity solutions. In summary, we anticipate the market opportunity for high-speed connectivity to increase at a faster rate than underlying AI accelerator shipments. We look to take full advantage of these robust trends by broadening our existing product portfolio with differentiated hardware and software solutions across multiple protocols and interconnect media. We are accelerating our pace and level of development, driven by our customers, to deliver new products that address the rapidly growing market opportunity ahead of us. With that, let me turn the call over to our President and COO, Sanjay Gajendra, to discuss our growth strategy in more detail.
Sanjay Gajendra: Thanks, Jitendra, and good afternoon, everyone. 2024 was a significant year for Astera Labs as we diversified our business across multiple vectors. We launched our Scorpio Fabric Switches, generated revenue from all four of our product lines, and transitioned into high volume production for Aries and Taurus Smart Cable Modules. We also started ramping multiple new AI platforms, based on internally developed AI accelerators at multiple customers to go along with continued momentum with third-party GPU-based AI platforms. This expansion to internal accelerator-based platforms took off in the third quarter and helped us establish a new revenue baseline for our business with continued growth in the fourth quarter.
As we look into 2025, we see strong secular trends across the industry, supported by higher CapEx spend by our customers, broadening deployment of AI infrastructure driven by more efficient AI models, and company-specific catalysts that should drive above-market growth rates for Astera Labs. Specifically, for 2025, we expect three key business drivers. One is the continued deployment of internally developed AI accelerator platforms that incorporate multiple Astera Labs product families, including Aries, Taurus and Scorpio. As a result, we will continue to benefit from increased dollar content per accelerator in these next-generation AI infrastructure systems. Based on known design wins, backlog and forecasts from multiple customers, we see strong continued growth of our Aries PCIe Gen 5 products in 2025.
As AI accelerator cluster sizes scale within the rack and rack-to-rack, we see meaningful opportunities to drive reach extension with our Aries Retimer solutions in both chip-on-board and smart cable module formats. Our Taurus product family has demonstrated strong growth for the past several quarters, paving the way for solid revenue contributions in 2025. We continue to see good demand for Taurus 400-gig Ethernet solutions utilizing our smart cable modules, both for AI and general-purpose compute infrastructure applications. Looking ahead, we view the transition to 800-gig Ethernet as a late-2025 event, opening a broader market opportunity in 2026. Additionally, design activity for Scorpio X-Series products across next-generation scale-up architectures in AI accelerator platforms is also showing exciting momentum as we continue to broaden our customer engagements.
The Scorpio X-Series is built upon a software-defined architecture, leverages our COSMOS software suite and supports a variety of platform-specific customizations, which enables valuable flexibility for our customers. We are pleased to report that we have received the first pre-production orders for our Scorpio X-Series product family. The second driver for our 2025 business is the expected production ramp of custom AI racks based on industry-leading third-party GPUs. We are shipping pre-production quantities to support qualification of designs utilizing our Scorpio P-Series and Aries PCIe Gen 6 solutions to maximize GPU throughput, while leveraging our customers’ own internal networking hardware and software capabilities. These programs are driving higher dollar content opportunities for Astera Labs on a per-rack and per-accelerator basis, and we expect volume deployments to begin in the second half of this year.
At the DesignCon 2025 trade show, we demonstrated the better-together combination of our PCIe Fabric Switch, PCIe Retimer and 100-gig-per-lane Ethernet Retimer solutions, utilizing our COSMOS software suite. This provides deeper levels of telemetry to pinpoint connectivity issues in complex topologies, while enabling tighter integration of COSMOS APIs into a customer’s operating stack. We also showcased the first public demonstration of end-to-end interoperability between Scorpio Fabric Switches, Aries Retimers and Micron’s PCIe Gen 6 SSDs. This demonstration highlighted the maturity of our PCIe Gen 6 solutions, the growing PCIe Gen 6 ecosystem and our performance leadership, doubling the maximum storage throughput possible today and setting a new industry benchmark.
The third driver for our 2025 business is general compute in the datacenter. We expect revenue growth from general compute-based platform opportunities, featuring new CPUs, new network cards and SSDs, with our Aries PCIe Retimer, Taurus Ethernet Smart Cable Module and Leo CXL product families. While general compute is a smaller portion of our business compared to AI servers, we benefit from the diversity of multiple layers of growth. Overall, we’re excited by the many opportunities and secular trends in front of us to drive 2025 revenues. We are also encouraged by our customers and partners increasing their trust in us and opening new opportunities for new product lines to support their platform roadmaps. As a result, we will continue to aggressively invest in R&D to further expand our product and technology portfolio as we work to increase our total addressable market. We will build upon our semiconductor, software and hardware capabilities to deliver comprehensive connectivity solutions at rack scale that ensure robust performance, maximum system utilization and capital efficiency.
As we look to 2026 and beyond, our playbook remains the same: one, stay closely aligned with our customers and partners; two, innovate exponentially in everything we do; and three, continue to be laser-focused on product and technology execution. Our long-term growth strategy is to aggressively attack the large and growing high-speed connectivity markets. We estimate our portfolio of hardware and software solutions across retimers, controllers and fabric switches will address a $12 billion market by 2028. Significant portions of this market opportunity, such as AI fabric solutions for back-end scale-up applications, are greenfield in nature. With a diverse and broad set of technology capabilities, we are partnering with key AI ecosystem players to help solve the increasingly difficult system-level interconnect challenges of tomorrow.
By helping to eliminate data, networking and memory connectivity bottlenecks, our value proposition expands and will drive our dollar content opportunity higher. In conclusion, we are motivated by the meaningful opportunity that lies before us and will continue to passionately support our customers by strengthening our technology capabilities and investing in the future. Before I turn the call over to our CFO, Mike Tate, to discuss Q4 financial results and our Q1 outlook, I want to take a quick moment to thank our customers, our partners and, most importantly, our team and their families for a great 2024. With that, Mike?
Mike Tate: Thanks, Sanjay, and thanks to everyone for joining the call. This overview of our Q4 financial results and Q1 2025 guidance will be on a non-GAAP basis. The primary difference in Astera Labs non-GAAP metrics is stock-based compensation and its related income tax effects. Please refer to today’s press release available on the Investor Relations section of our website for more details on both our GAAP and non-GAAP Q1 financial outlook as well as a reconciliation of our GAAP to non-GAAP financial measures presented on this call. For Q4 2024, Astera Labs delivered record quarterly revenue of $141.1 million, which was up 25% versus the previous quarter and 179% higher than the revenue in Q4 of 2023. During the quarter, we enjoyed strong revenue growth of both our Aries and Taurus Smart Cable Module products supporting both scale up and scale out PCIe and Ethernet connectivity for AI rack level configurations.
For Leo CXL and Scorpio Smart Fabric Switches, we shipped pre-production volumes as our customers work to qualify their products for production deployments later in 2025. Q4 non-GAAP gross margin was 74.1%, down from September-quarter levels due to a product mix shift towards hardware-based solutions with both our Aries and Taurus Smart Cable Modules. Non-GAAP operating expenses for Q4 were $56.2 million, up from $51.3 million in the previous quarter, as we continue to scale our R&D organization to expand and broaden our long-term market opportunity. As previously mentioned on this call, we closed a small acquisition toward the latter half of the quarter, which also contributed to slightly higher spending during the period. Within Q4 non-GAAP operating expenses, R&D expense was $37.8 million, sales and marketing expense was $8.1 million and general and administrative expense was $10.4 million.
Non-GAAP operating margin for Q4 was 34.3%, up from 32.4% in Q3, which demonstrates strong operating leverage as revenue growth outpaced increased operating expenses. Interest income in Q4 was $10.6 million. On a non-GAAP basis, given our cumulative history of non-GAAP profitability, starting in Q4 we will no longer be accounting for a full valuation allowance on our deferred tax assets. As a result, in the fourth quarter we realized an income tax benefit from this change, resulting in a Q4 tax benefit of $7.6 million and an income tax benefit rate of 13%, which compares to our previous guidance of an income tax expense rate of 10%. Non-GAAP fully diluted share count for Q4 was 177.6 million shares, and our non-GAAP diluted earnings per share for the quarter was $0.37.
Excluding the impact of the Q4 tax benefit just noted, and based on a 10% non-GAAP income tax rate during the quarter as previously guided, non-GAAP EPS would have been $0.30. Cash flow from operating activities for Q4 was $39.7 million, and we ended the quarter with cash, cash equivalents and marketable securities of $914 million. Now turning to our guidance for Q1 of fiscal 2025. We expect Q1 revenues to increase to within a range of $151 million to $155 million, up roughly 7% to 10% from the prior quarter. For Q1, we expect continued growth from our Aries product family across multiple customers over a broad range of AI platforms. We look for our Taurus SCM revenue for 400-gig applications to also provide a strong contribution to the top line in Q1.
Our Leo CXL controller family will continue shipping in pre-production quantities to support ongoing qualification ahead of the volume ramp in the second half of 2025. Finally, we expect our Scorpio product revenue to grow sequentially in Q1, driven by growing pre-production volumes of designs for rack-scale systems. We continue to expect Scorpio revenue to comprise at least 10% of our total revenue for 2025, with acceleration exiting the year. We expect non-GAAP gross margin to be approximately 74%, as the mix between our silicon and hardware modules remains consistent with Q4. We expect first quarter non-GAAP operating expenses to be in a range of approximately $66 million to $67 million. Operating expense growth in Q1 is largely driven by three factors: one, continued momentum in expanding our R&D resource pool across headcount and intellectual property; two, seasonal labor expense step-ups associated with annual performance merit increases and payroll tax resets; and three, a full-quarter contribution of the strategic acquisition we executed in the latter part of Q4. We continue to be committed to driving operating leverage over the long term via strong revenue growth, while reinvesting into our business to support the new market opportunities associated with next-generation AI and cloud infrastructure projects. Interest income is expected to be approximately $10 million. Our non-GAAP tax rate should be approximately 10%, and our non-GAAP fully diluted share count is expected to be approximately 180 million shares.
Adding this all up, we are expecting non-GAAP fully diluted earnings per share in a range of approximately $0.28 to $0.29. This concludes our prepared remarks. And once again, we appreciate everyone joining the call. And now we will open the line for questions. Operator?
Operator: [Operator Instructions] Your first question comes from the line of Harlan Sur of JPMorgan. Your line is open.
Harlan Sur: Good afternoon and congratulations on the strong results and execution. One big inflection in accelerated compute and AI, as you mentioned, is the ramp of numerous AI ASIC XPU programs – TPU at Google, Trainium at Amazon, MTIA at Meta – with multiple new programs in the not-too-distant future. It looks like these custom ASIC program ramps are growing faster than the overall merchant GPU market. The question is, what percentage of your business last year came from merchant GPU AI systems versus ASIC-based systems, and where do you expect that mix to be, say, exiting this year, as ASIC programs seem to be on a faster growth trajectory? You have a very, very strong attach rate across your products, as you guys mentioned. I am just wondering if the team agrees with me on this view?
Jitendra Mohan: Yeah, we’re very excited about the addition of the internal AI accelerator programs. In particular, on those programs where we are doing the scale-up connectivity, the unit volumes step up in a meaningful way compared to the merchant GPU designs that we see right now. As we look at 2024, the first half of the year was predominantly merchant GPUs – they were the first to really adopt our product lines. Then Q3 was the first quarter it inflected up with the internal AI accelerators; you especially see that with our Taurus and our Aries SCM business inflecting up. Q3 was a partial quarter and then Q4 was a full quarter, so that really set a nice baseline of revenues. Now, if you look into 2025, we see both contributing growth.
The first half of the year will be more predominantly the internal AI accelerator programs. But as you get into the back half of the year, the transition on the merchant GPUs will also be very strong for us. This is where you’ll see the custom rack configurations start to deploy, and that’s where we see a big dollar content increase per GPU with Scorpio starting to ramp.
Harlan Sur: Appreciate that. And on the balance sheet, inventories are up almost 80% sequentially in the December quarter. That’s an all-time high for the team. I think it’s up 60% versus the average inventory level over the past four quarters. Is the significant step-up reflective of a strong multi-quarter shipment profile across the overall portfolio? Or maybe reflective of a step up in more of your board level solutions? Or is it a kind of a combination of both?
Jitendra Mohan: Well, if you remember, in Q3 our revenues were very strong – we were up 47% sequentially. A lot of that strength developed during the quarter, so we drew down our inventories pretty significantly in Q3. Now in Q4, we had time to build back to a more normalized level. So this level of inventory is actually where we feel much more comfortable. We always want to be in a position to support upside from our customers, because most of our programs are sole-sourced. But this level reflects the growth in our business now.
Harlan Sur: Great. Thank you.
Operator: Your next question is from the line of Blayne Curtis of Jefferies. Your line is open.
Blayne Curtis: Hey, good afternoon, thanks for taking my question. I have two. I was just kind of curious – you mentioned the strength in Taurus. I don’t know if you’re going to delve into how big that was in December. And then Mike, I wanted to ask on gross margin. I know it’s a mix of hardware, and you are seeing strength from your ASIC customer there with multiple products. I am just kind of curious, as you think about ‘25 as Scorpio ramps, how that mix shifts and what the impact is on the shape of gross margins for this year?
Mike Tate: Yeah. So for Q4, you see the margins ticked down, and we did highlight that Taurus and Aries, which are in the module form factors, grew as a percentage of our total revenues, and the upside in the quarter was in particular from Taurus as well. So that drove the margin down to 74.1%, which is very close to what we had expected. Now, if you go into 2025, we still see good contribution from Taurus and Aries SCM modules. But as we make it through the year, the Aries boards and chips as well as Leo and Scorpio are a positive for us as well. So we think in Q1 and Q2 we should have a consistent margin profile of around 74%. And as we highlighted, margins will be trending down closer to the longer-term model of 70%, but it all depends on the mix of our hardware versus silicon.
Blayne Curtis: Thanks. And then, I was just kind of curious if you could – now that you’re a quarter removed from launching Scorpio – just kind of comment on the design momentum you’ve had in terms of the number of engagements? And I am just kind of curious, from a competitive landscape standpoint for scale-up, outside of NVLink, what else is out there that you’re seeing that you’re competing against with those products?
Sanjay Gajendra: Yeah, Blayne. This is Sanjay here. Let me help answer that. As you know, Scorpio has got two series: the P-Series for PCIe and head-node use cases, which tend to be pretty broad, and then the X-Series for GPU connectivity on the back end. So overall, since we launched, we continue to pick up multiple design opportunities, at this point both for the P-Series as well as for the X-Series. The X-Series tends to have a somewhat longer design-in and qualification cycle just because it’s going into the back-end GPU side, but the front end, the P-Series, is what we expect to start contributing meaningful revenue in the second half of this year as the production volumes take off. We have been shipping pre-production for the P-Series already, and for the X-Series we have started receiving our first pre-production orders.
So to that standpoint, what I want to share is that the momentum on Scorpio has definitely been more than what we expected, largely driven by the feature set we have implemented: Scorpio is the first set of fabric devices for PCIe and back-end connectivity developed from the ground up for AI use cases. So to that standpoint, customers see the value in the features we have and the performance that we are delivering.
Jitendra Mohan: And Blayne, this is Jitendra. Your question on what else is out there? Of course, clearly, NVLink is the one that is most widely deployed within the NVIDIA ecosystem, and we don’t play there. Other than that, there is the PCIe-based scale-up network that Sanjay just talked about, and the other alternative is an Ethernet-based scale-up network. The difference between the two is really that Ethernet is a very commonly used standard but was not designed for scale-up. So the latencies are quite a bit higher and you don’t quite get the performance out of Ethernet, which is why we see many of our customers gravitate towards PCIe-based systems, which are inherently better suited for this application.
And what we see happening in the future is the industry might try to get behind UALink, which is a developing standard that we are very excited about. With UALink, you get the benefit of both Ethernet speeds as well as the lower latency and the memory-based IO of a PCIe-like protocol. Now over time, we do expect Scorpio to become our largest product line, just because the market for scale-up interconnect is so large. So we are very excited about what’s coming in this space.
Blayne Curtis : Thanks guys.
Operator: Your next question comes from the line of Joe Moore of Morgan Stanley. Your line is open.
Joe Moore: Great. Thank you. I know there’s a lot of attention on the DeepSeek innovations that we saw a couple weeks ago. Can you talk about what you’ve seen, or how you would position that with regard to other innovations that we’ve seen? Do you see it as deflationary to the long-term opportunity? I would just love to get your perspective on this.
Jitendra Mohan: Hi Joe. This is Jitendra again. So let me start on that. First of all, you are right, there is a lot of discussion, a lot of articles that have been written about DeepSeek, so I am not sure exactly what we will be able to add. But what I do want to point out is that what matters most is what our customers, the hyperscalers, think about DeepSeek. And in the face of that announcement, they have all gone and increased their CapEx spending. So that really shows that the hyperscalers believe in the future of AI, and the continued demand for GPUs and AI accelerators is likely to continue. I can give you my perspective on why that is, if you break it down into two parts. If you look at inference, DeepSeek has shown that algorithmic improvements will drive the cost of inference lower.
And we have seen time and again that when the cost goes down, the adoption goes up. It happened with PCs. It happened with phones. It happened even with servers when virtualization kicked in, and we do think that AI will follow a similar trajectory, so it gets more adoption. And then if you look at training – as consumers, you and I are all looking for better results from these models. By embracing some of the innovations that the DeepSeek team has put forward, the quality of results from these models will go up, and that again is beneficial for the overall AI ecosystem. So our focus has always been on enabling AI infrastructure with both third-party GPUs as well as ASIC platforms, and to the extent that any of these dynamics play out, we stand to benefit from the trends.
Joe Moore: It’s very helpful. Thank you. And then, for my follow-up, you talked about Leo ramp in the second half of the year. I know you are seeing quite a bit of interest in kind of memory bandwidth boosting kind of capabilities. Can you help us size how important that could be in the second half?
Mike Tate: Yeah, so we’ve been working with our customers as the next-generation CPUs that support CXL come to market. These are going to initially be very high memory-density applications, high performance compute type applications. So we’ll see those start to pull in the back half of the year. Ultimately, longer term, we do expect CXL technology to be very beneficial for more mainstream general compute, so we hope to see that play out in 2026 and ‘27.
Sanjay Gajendra: Yes, to add to what Mike said, Joe, just to give you a little bit broader picture. Based on the fact that we have been working closely with the hyperscalers and CPU vendors for quite some time now, it’s become pretty clear that there are three or four applications that are driving the ROI, or the use cases, for CXL. At this point, we understand that the first one is getting deployed in the second half of this year, and we do expect additional use cases and the associated opportunities to come along probably in 2026 and beyond.
Joe Moore: Okay. Thank you.
Operator: Your next question comes from the line of Tore Svanberg of Stifel. Your line is open.
Tore Svanberg : Yes, thank you and congrats on the strong results. I had a question on Scorpio. I think you said you still expect it to be more than 10% of revenues in calendar ’25. Obviously, that number is now higher. Is that going to be predominantly the P Series, or are you going to get some contribution already from the X Series in calendar ’25?
Jitendra Mohan : We expect contributions from both. The P Series will be first to launch, and then for the X Series we expect contributions in the latter part of the year.
Tore Svanberg : Great. And just a follow-up on Taurus. Could you just talk a little about the revenue profile there? How diversified is it by customer and use case? And how do you think about that business first half versus second half? Because obviously, if it is more diversified in nature, then maybe second half would be greater than first half. But, yeah, just some profile of that revenue base right now?
Jitendra Mohan : Yeah, so we’re shipping both 200 gig and 400 gig. 400 gig is what really launched here in Q3 and Q4, and we have multiple designs across different types of configurations, and we also support different cable providers and different form factors. The 400 gig opportunity is still a relatively limited opportunity set out there, so we were focusing primarily on our lead customers. This should continue to provide good, strong growth in 2025, driven by both AI and general-purpose applications. And then in the latter part of the year, we see the market start to transition to 800 gig.
Tore Svanberg : Great. Thank you.
Operator: Your next question comes from the line of Tom O’Malley of Barclays. Your line is open.
Tom O’Malley : Hey guys. Thanks for taking my questions. My first was on your prepared remarks. You mentioned that over time, Scorpio would become your biggest product line. I don’t think that you mentioned that before. Perhaps you could talk about the timeframe you are thinking about for that product line taking over. Is that something that we could see potentially as early as 2026? And is that a function of just Scorpio growing faster than you had originally thought, or perhaps Aries coming down to some extent? Just understanding why you made that comment in the preamble?
Jitendra Mohan : Yeah, so to add some color to that, the ASP profile of a Retimer-class device and a switch device tends to be very different, meaning on the switch side, we do get a significantly higher ASP. And if you look at, at least for the customized AI racks that are being deployed, we are essentially adding a Scorpio socket to go along with the Retimer socket. And given that attach rate configuration, what we also see is that the dollar content per GPU will go up. But in general, the switch is a much bigger TAM out there. And then we get to play both in the front-end with the P Series and the back-end. The back-end obviously tends to be a lot higher volume in many ways, because many GPUs are talking to each other, and we benefit from having a high-ASP device like the X Series switches being deployed at a scale that’s much more significant compared to any of our other products so far.
It does not mean that Retimers or CXL controllers are going to go away. It simply means that the TAM that we’re able to address with Scorpio tends to be larger. And given the market momentum and opportunities that we are seeing, including some of the roadmap products we’re developing, we feel confident that going forward Scorpio will continue to evolve and become our flagship product, both from a technology and a revenue standpoint.
Tom O’Malley : Super helpful. And then, my follow-up was just on Mike’s commentary on one of the first questions here on the call. You talked about the year 2025 on the Aries side, how in the first half of the year you would see more internal AI efforts, followed by the second half of the year being more merchant GPU. That comment was a bit surprising to me, given we’re going through a big product transition now at your large customer. Is there any change in the way that you see the ramp of 2025 versus where you did before? I would have anticipated maybe the merchant GPU being a bit stronger earlier in the year. Just any reason behind those comments; they caught me a little off guard. Thank you.
Jitendra Mohan : Sure. So, first of all, the merchant GPU drives both Scorpio and Aries. The big incremental piece of the merchant GPUs is the Scorpio content, which is all kind of new for us. The designs that we have are complex in nature. They’re all new. So, to get them productized and ramped, we are looking at that to start in the back half of the year. Right now, the first half of the year is pre-production. These are all customized customer configurations, and the customization adds a little bit of lead time to the volume ramps.
Operator: Your next question comes from the line of Ross Seymore of Deutsche Bank. Your line is open.
Ross Seymore : Hi guys. Thanks. I will ask a couple questions. The first one is a little bit higher level, and it’s on diversification. You talked about product diversification, with Scorpio being over 10% of your revenues going forward. But there’s also an admittedly concentrated batch of hyperscalers, so there is the customer concentration side as well. So, whether it be on the customer side or the product side or both, is there any way you can give us a little bit of a framework for how 2024 ended, and how you think 2025 will differ from a diversification lens?
Jitendra Mohan : Yes. Overall in 2024, and in fact in 2025 still, we are shipping to all the hyperscalers across the multiple different product sets that we have. So there should not be any doubt or question about that. But having said that, there are some nuances that are important to keep in mind when you are dealing with the datacenter market. The first thing, like we always say, is that customer concentration is an occupational hazard in the datacenter market, just because there are only a handful of hyperscalers. The second thing to keep in mind is that the hyperscalers differ in terms of their maturity when it comes to internal accelerator chip development. Some are more advanced than others. For us, when we are designing for and counting revenue from internal accelerator programs, obviously there will be a difference between the hyperscalers where we get to play both on the merchant as well as the internal accelerator programs, compared to hyperscalers where we only have the merchant silicon opportunity.
So there is that nuance. The other one that is also true is that the appetite for new technology deployment differs from hyperscaler to hyperscaler. So there will be some hyperscalers that are pretty aggressive in terms of deploying new technology; others take some time. So you can expect that in a given window of time there will be situations where our revenue will be coming more from a given hyperscaler versus another. It does not mean that the second hyperscaler is not a potential customer for us. It simply means that they take more time to deploy something, just given their own workloads and other things that they are tracking. But overall, our goal is to make sure that the revenue contribution we get reflects the share that each hyperscaler has, meaning if a given hyperscaler has a certain percentage of the market in terms of cloud services, then we expect to see similar numbers in our share.
Ross Seymore : That’s very helpful. Thank you for that. Mike, one for you. You mentioned earlier about the gross margin, why it was down a little bit below your guidance in the fourth quarter and stays flat in the first. Sounds like you said it stays flat in the second, as well. Is there a hardware/module mix shift in the second half? It sounded like, from what you said, that it does go back away from the hardware side a little bit more towards the chip level. Was I hearing that correctly? Any sort of update on that would be helpful.
Mike Tate : Yeah, we do see growth in the hardware, but it should stay at a similar level as the rest of the business. So it’s not growing as a percentage of revenue in the first half of the year.
Operator: Your next question comes from the line of Quinn Bolton of Needham. Your line is open.
Quinn Bolton: Hey guys. I wanted to come back to Joe’s question on DeepSeek. Obviously, one of the benefits is greater deployment of AI models, which arguably means a shift towards inferencing. On the inferencing side, do you guys see the need for clusters that are as large as you’ve historically seen on the training side, or do we see bigger adoption of inferencing clusters of a smaller size? Is that a positive or a negative for the connectivity TAMs?
Jitendra Mohan : Let me take a crack at that. So, first of all, at the high level, I would say that our business is not strongly dependent on inferencing versus training. We tend to benefit from both of those. Now, the point that you made is valid, that you don’t need as large of a cluster for inferencing as you need for training. Having said that, if you look at the DeepSeek announcement, or for that matter what other folks are announcing as well, these came with models which actually require far more compute than they have required historically. So over time, we do expect that the unit of compute will become a kind of rack-level AI server; what typically used to be a 2U or 4U server will now be at a rack level.
And when you go to a rack level, you come up with these different connectivity challenges that we are very well positioned to address, as we are doing today with many of our different product lines. So, all in all, as this unit of compute goes to rack level, we will see higher opportunity. We already are participating in other form factors with our Scorpio P and Aries Retimer type products. And so, overall, we don’t see this as a headwind or a tailwind. We just stand to benefit from both of those.
Quinn Bolton: Got it. Got it. Very helpful. And then, a second question just on the Scorpio P family. You mentioned that that growth is really driven by the customized versions of merchant GPU-based platforms. Wondering, do you guys have engagements for Scorpio P on the ASIC platforms as well?
Thank you.
Jitendra Mohan : Yeah, absolutely. For the customers that we are engaged with right now, we do see opportunities both for the P Series and the X Series as it relates to ASIC platforms: the P Series more for the head node connectivity, to interconnect GPUs with CPUs, storage and networking, and the X Series for the back-end, to interconnect GPUs.
Operator: Your next question comes from the line of Atif Malik of Citi. Your line is open.
Atif Malik : Hi. Thank you for taking my questions. My first question is on co-packaged optics, Jitendra or Sanjay. There is a bit of discussion in terms of its timing. Can you share your thoughts on the volume of that, and what impact it could have on your Retimer and PCIe opportunities?
Jitendra Mohan : Yes. So let me start. First of all, at the high level, we don’t expect CPO to negatively impact our business in the near future, with the current products that we have or even the next generation of products that we have. The way we look at it, at 200 gigabits per second per lane at the rack level, the connectivity will remain largely copper. And as a matter of fact, I think the industry will work very hard to keep it at copper even at 400 gigabits per second. The reason for that is our customers really prefer copper, because it is easier to deploy. When you go to optical, specifically CPO type solutions, you introduce a lot of additional components into what used to be a purely silicon-based package, which has its own challenges for reliability as well as serviceability.
So in general, what we have seen from our customers is, if they can stay with copper, they will stay with copper. If they cannot stay with copper because the bandwidths are too high and so on, they will go to pluggable optics. And when pluggable optics are not feasible, only then do they go to co-packaged optics. As a result, we see the first instances of co-packaged optics happening where the data rates are the highest, the line speeds are the highest and the density of interconnects is the highest, which typically happens in Ethernet switches. So, our view is that that’s where CPO will get deployed first, and that’s not an area that we play in today. So it’s not likely to impact our near- to medium-term revenues. And for the longer term, we will continue to explore different media types and keep watching this space for the solutions that our customers might like to see from us.
Atif Malik : Great. Thank you. And Mike, can you talk about how should we think about Opex for the year?
Mike Tate: Yeah, like we highlighted, we’re going to continue to invest aggressively in the business. Q1 is a bigger step up than typical for the reasons we outlined, including the small acquisition we did. So the rate of growth will hopefully normalize a little bit, but right now we really believe it’s a time to press our advantage and invest in the business. We do have a goal of hitting a long-term operating model of 40% operating margins, and that will be driven more by inflections in revenue growth than by controlling our investment in R&D.
Atif Malik : Thanks.
Operator: Your next question comes from the line of Suji Desilva of ROTH Capital. Your line is open.
Suji Desilva: Hi, Jitendra, Sanjay, Mike. First question, just to be clear on the customer focus, I know it’s almost entirely hyperscale. I’m just curious if you’ve been approached by, or are trying to engage, the emerging AI companies versus the large established players that have a bit of a new business model?
Jitendra Mohan : Yeah, absolutely. And just to be very clear, right, we talk about hyperscalers because they are the ones that are deploying the big infrastructure right now in terms of AI training and so on. But as a company, we are tracking both the OEM space for enterprise applications, as well as the emerging AI players with various different form factors, right? So, we are tracking it. But what we also believe is that the first generation of systems that are being rolled out right now take something that’s directly available from a company that’s providing GPUs, or from folks that are integrating that into OEM-level boxes. So, that is the first generation. That will probably continue, with a little bit more customization in Gen 2, before we start seeing more hardware-level decisions being made in future generations.
So overall, we’re tracking it. But the fact of the matter right now is that the bulk of the TAM that’s available is through the hyperscalers, and that’s what we talked about. But at the same time, every OEM out there that’s building AI servers is buying components from us, either directly or through base boards that they procure from GPU platform providers.
Suji Desilva: Okay. Appreciate all the color there. And then my other question is on UA Link. I’m just wondering, what do you think the biggest challenges are, and the timing impacts, of cutting over from Ethernet or PCIe to UA Link? And most importantly, what are the share and content uplift implications of UA Link gaining traction?
Jitendra Mohan : No, that’s a great question. So UA Link promises to be a very good initiative for the industry, to try to bring everybody together for a standardized scale-up cluster. We see a lot of benefit for Astera Labs because of our prominent position on the board. Over time, we will develop a full UA Link portfolio to address the connectivity requirements at the rack level that our customers will have. In terms of the timing, the standards group is working to release the final specifications. That is supposed to happen at the end of Q1. So, we expect the earliest products to hit the market sometime in 2026, which is when we will start to see the first instances of UA Link.
Suji Desilva: Okay, thanks.
Operator: Your next question comes from the line of Richard Shannon of Craig-Hallum. Your line is open.
Richard Shannon: Hi guys. Thanks for taking my question as well. I will ask a question on the competitive dynamics going on between your Retimer and P Series switch products, because obviously you have a really strong position in Retimers coming into the market, and in the P Series you would have another competitor who’s very strong there. And while you’ve got COSMOS, the software layer, in switching there, switches are generally thought of as more of a stickier kind of chip. So I’m wondering, how do the competitive dynamics play out here? Do we see kind of an attach rate in the P Series similar to what you’re seeing with Aries any time soon?
Sanjay Gajendra: Yeah, so, first of all, this is a big market, and let’s keep that in mind. When you start thinking about this switch type of socket, there are many sockets, Gen 6, Gen 5 and so on, right? So there are several different areas that we can go after, and surely the competitors are seeing that, and you have folks that are playing there. The angle that we have taken, there is the software and COSMOS point that you made, which brings in a lot of diagnostics, telemetry and fleet management type features that scale across all of our products, Scorpio, Retimers and everything else that we do, right? But the bigger reason why Scorpio is gaining traction is how it’s architected. The incumbent switches were originally designed for storage applications.
So, obviously, the feature sets that are incorporated are more tuned for addressing and attaching to SSD drives. What we have done is essentially created a device, for the first time, where the data flows are created for GPU-to-GPU traffic, with the bandwidth optimization and other capabilities that are needed. So in general, the functionality itself is just better, and that’s being appreciated by our customers. And based on that, we continue to build on the portfolio, and we do expect that we will be playing a significant role in that market. And in terms of the timing itself, there is a timing aspect here: obviously, the Gen 6 window is now, and as far as we know, we are the only ones in the market providing PCIe Gen 6 switches. Many times in connectivity, what happens is that being the first vendor to provide a solid product that is scaling and getting qualified goes a long way in terms of maintaining and building a competitive barrier.
Richard Shannon: Okay. Excellent. Thanks for those thoughts. My second question is on the Scorpio X Series. Wondering if you can give us a picture, looking forward, of the size of the scale-up domains that hyperscalers are looking to do. I can’t remember the exact number of what NVIDIA does for theirs today, but I imagine it’s going to go up quite a bit here. And to what degree does that size play into your commentary about Scorpio eventually being your largest product line? Thank you.
Jitendra Mohan : Yeah, let me take a crack at that. This is Jitendra. So we’ve mentioned previously that, not counting NVIDIA, we expect the TAM for the Scorpio X family to be $2.5 billion or more by 2028. And if you look at what it is today, it’s effectively nearly zero. So it’s a very, very rapidly growing TAM, and it’s the largest TAM that we have, which is why we are bullish on the prospects of Scorpio and it becoming our largest product line over time.
Operator: Thank you. There are no further questions. I turn the call back over to Leslie Green for closing remarks.
Leslie Green: Thank you, Joe. And thank you, everyone for your participation and questions. We look forward to updating you on our progress.
Operator: This concludes today’s conference call. You may now disconnect.