Astera Labs, Inc. Common Stock (NASDAQ:ALAB) Q2 2024 Earnings Call Transcript August 7, 2024
Operator: Good afternoon. My name is Audra, and I will be your conference operator today. At this time, I would like to welcome everyone to the Astera Labs Q2 2024 Earnings Conference Call. All lines have been placed on mute to prevent any background noise. After management's remarks, there will be a question-and-answer session. [Operator Instructions] I will now turn the call over to Leslie Green, Investor Relations for Astera Labs. Leslie, you may begin.
Leslie Green: Good afternoon everyone and welcome to the Astera Labs Second Quarter 2024 Earnings Conference Call. Joining us on the call today are Jitendra Mohan, Chief Executive Officer and Co-Founder; and Sanjay Gajendra, President and Chief Operating Officer and Co-Founder; and Mike Tate, Chief Financial Officer. Before we get started, I’d like to remind everyone that certain comments made in this call today may include forward-looking statements regarding, among other things, expected future financial results, strategies and plans, future operations and the markets in which we operate. These forward-looking statements reflect management’s current beliefs, expectations and assumptions about future events which are inherently subject to risks and uncertainties that are discussed in detail in today’s earnings release and the periodic reports and filings that we file from time to time with the SEC, including the risks set forth in the final prospectus relating to our IPO.
It is not possible for the company’s management to predict all risks and uncertainties that could have an impact on these forward-looking statements or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statements. In light of these risks, uncertainties and assumptions, the results, events or circumstances reflected in the forward-looking statements discussed during this call may not occur, and actual results could differ materially from those anticipated or implied. All of our statements are based on information available to management as of today, and the company undertakes no obligation to update such statements after the date of this call to conform to these as a result of new information, future events or changes in our expectations, except as required by law.
Also during this call, we will refer to certain non-GAAP financial measures, which we consider to be important measures of the company's performance. These non-GAAP financial measures are provided in addition to, and not as a substitute for or superior to, financial results prepared in accordance with US GAAP. A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures is available in the earnings release we issued today, which can be accessed through the Investor Relations portion of our website and will also be included in our filings with the SEC, which will also be accessible through the Investor Relations portion of our website. With that, I'd like to turn the call over to Jitendra Mohan, CEO of Astera Labs.
Jitendra?
Jitendra Mohan: Thank you, Leslie. Good afternoon, everyone, and thanks for joining our second quarter conference call for fiscal 2024. AI continues to drive a strong investment cycle, as entire industries look to expand their creative output and overall productivity. The velocity and dynamic nature of this investment in AI infrastructure is generating highly complex and diverse challenges for our customers. Astera Labs' intelligent and flexible connectivity solutions are developed ground up to navigate these fast-paced, complicated deployments. We are working closely with our hyperscaler customers to help them solve these challenges across diverse AI platform architectures that feature both third-party and internally developed accelerators.
In addition to these favorable secular trends, we are also benefiting from new company-specific product cycles across multiple technologies, which will also contribute to our growth in the form of higher average silicon content per AI platform. A strong leadership position and great execution by our team resulted in record revenue for Astera Labs in the June quarter, which supports our strong outlook for the third quarter and gives us confidence in our ability to continue outperforming industry growth rates. Astera Labs delivered strong Q2 results, setting a new record for quarterly revenue, with strong non-GAAP operating margin and positive operating cash flows. Our revenue in Q2 was $76.9 million, up 18% from the previous quarter and up 619% from the same period in 2023.
Non-GAAP operating margin was 24.4%, and we delivered $0.13 of non-GAAP diluted earnings per share. Operating cash flow generation was also strong during the quarter, coming in at $29.8 million. With continued business momentum and a broadening set of growth opportunities, we are investing in our customers by rapidly scaling the organization. During the quarter, we expanded our Cloud-Scale Interop Lab to Taiwan and announced the opening of a new R&D center in India. We also announced the appointment of Bethany Mayer to our Board of Directors, bringing additional strategic leadership to the company. Today, Astera Labs is focused on three core technology standards: PCI Express, Ethernet, and Compute Express Link. We are shipping three separate product families supporting these different connectivity protocols, all generating revenue and in various stages of adoption.
Let me touch upon our business with each of these product families and how we support them with our differentiated architecture and COSMOS software suite. Then I will turn the call over to Sanjay to dive deeper into our growth strategy. Finally, Mike will provide additional details on our Q2 results and our Q3 financial guidance. First, let us talk about PCI Express. During the quarter, we saw continued strong demand for our Aries product family to drive reliable PCI Gen 5 connectivity in AI systems by delivering robust signal integrity and link stability. While merchant GPU suppliers drove early adoption of PCI Gen 5 into real systems over the past year, we are now also seeing our hyperscaler customers introduce and ramp new AI server programs based upon their internally developed accelerators utilizing PCI Gen 5.
Looking ahead, AI accelerator processing power is continuing to increase at an incredible pace. The next milestone for the AI technology evolution is the commercialization of PCIe Gen 6, which doubles the connectivity bandwidth within AI servers, creating new challenges for link reach, reliability and latency. Our Aries 6 PCIe Retimer family helps to solve these challenges with the next generation of our software-defined architecture, offering a seamless upgrade path from our widely deployed and field-tested Gen 5 solutions. We have started shipping initial quantities of preproduction orders of our PCIe Gen 6 solution, Aries 6, to support our hyperscaler customers' initial program developments based on Nvidia's Blackwell platform, including GB200.
We look forward to supporting more significant production ramps in the quarters to come. Next let us talk about Ethernet. Our portfolio of Taurus Ethernet smart cable modules helps relieve connectivity bottlenecks by overcoming reach, signal integrity and bandwidth issues by enabling robust 100-gig per lane connectivity over copper cables or AEC. Today, we are pleased to announce that our 400-gig Taurus Ethernet SCMs have shifted into volume production, with an expected ramp through the back half of 2024. This ramp is happening across multiple platforms in multiple cable configurations, and we are working with multiple cable partners to support the expected volumes. Taurus will be ramping across a multitude of 400-gig applications to scale out connectivity on both AI compute platforms, as well as general purpose compute systems.
We are excited about the breadth and diversity of our Taurus design wins and expect the product family to be accretive to our corporate growth rate going forward. Next is Compute Express Link, or CXL. We continue to work closely with our hyperscaler customers on a variety of use cases and applications for CXL. In Q2, we shipped material volume of our Leo products for preproduction large-scale deployment in data centers. We expect to see data center platform architects utilize CXL technology to solve memory bandwidth and capacity bottlenecks using our Leo family of products. The initial deployments are targeting memory expansion use cases, with production ramps starting in 2025 when new CXL-capable CPUs are broadly available. Finally, I’d like to spend a moment on COSMOS, which is a software platform that brings all of our product families together.
We have discussed how COSMOS not only runs on our chips, but also in our customers' operating stacks to deliver seamless customization, optimization and monitoring. The combination of our semiconductor and hardware solutions with COSMOS software enables our products to become the eyes and ears of connectivity infrastructure, helping fleet managers ensure their AI and cloud infrastructure is operating at peak utilization. By improving the efficiency of their data centers, our customers are able to generate higher ROI and reduce downtime. To summarize, sustained secular trends in AI adoption, design wins across diverse AI platforms at hyperscalers featuring both third-party and internally developed accelerators, and increasing average dollar content in next-generation GPU-based AI platforms give us confidence in our ability to outperform industry growth rates.
With that, let me turn the call over to our President and COO, Sanjay Gajendra to discuss some of our recent product announcements and our long-term growth strategy.
Sanjay Gajendra: Thanks, Jitendra and good afternoon, everyone. We are pleased with our robust Q2 results and strong top-line outlook for Q3. But we are even more excited about the volume and breadth of opportunities that lie ahead. Today, I will focus on five growth vectors that we believe will help us to grow our business faster than industry growth rates over the long term. First, Astera Labs is in a unique position with design wins across diverse AI platform architectures featuring both third-party and internally developed accelerators. This diversity gives us multiple paths to grow our business. This hybrid approach of using third-party and internally developed accelerators allows hyperscalers to optimize their fleet to support unique workload requirements and infrastructure limitations, while also improving capital investment efficiency.
Our intelligent connectivity platform with its flexible software-based architecture enables portability and seamless reuse between platforms while creating growth opportunities for all our product families. In addition to the third-party GPU platforms, we also expect to see several large deployments based on internally developed AI accelerators hitting production volume over the next few quarters and driving incremental PCIe and Ethernet volumes for us. Second, we see increasing content on next-generation AI platforms. Nvidia’s Blackwell GPU architecture is particularly exciting for us, as we expect to see strong growth opportunities based on our design wins as hyperscalers compose solutions based on Blackwell GPUs, including GB200 across their data center infrastructure.
To support various AI workloads, infrastructure challenges, software, power and cooling requirements, we expect multiple deployment variants for this new GPU platform. For example, Nvidia cited 100 different configurations for Blackwell in their most recent earnings call. This growing trend of complexity and diversity presents an exciting opportunity for Astera Labs as our flexible silicon architecture and COSMOS software suite can be harnessed to customize the connectivity backbone for a diverse set of deployment scenarios. Overall, we expect our business to benefit from the Blackwell introduction with higher average dollar content of our products per GPU, driven by a combination of increasing volumes and higher ASPs. The next growth vector is the broadening applications and use cases for our Aries product family.
Aries is in its third generation now and represents the gold standard for PCIe retimers in the industry. The introduction of the new Aries 6 retimers, built upon the company's widely deployed and battle-tested PCIe 5 retimers, and the industry transition to PCIe Gen 6 will be a catalyst for increasing PCIe retimer content for Astera. Our learnings from hundreds of design wins and production deployments over the last several years enable us to quickly deploy PCIe Gen 6 technology at scale. As Jitendra noted, we're now shipping initial quantities of preproduction volume for Aries 6 and currently have meaningful backlog in place to support the initial deployment of hyperscaler AI servers featuring Nvidia's Blackwell GPUs, including GB200. We're also very excited about the incremental PCIe connectivity market expansion that will be driven by multi-rack GPU clustering.
Similar to the dynamic within the Ethernet AEC business, the reach limitations of passive PCIe copper cables are a bottleneck for the number of GPUs that can be clustered together. Our purpose-built Aries smart cable modules solve these issues by providing robust signal integrity and link stability over materially longer distances, improving rack airflow, while actively monitoring and optimizing link health. This PCIe AEC opportunity is in the early stages of adoption and deployment, and we view the multi-rack GPU clustering application as a new and growing market opportunity for our Aries product family. In June, we announced the industry's first demonstration of end-to-end PCIe optical connectivity to provide unprecedented reach for larger GPU clusters.
We are proud to broaden our PCIe leadership once again by demonstrating robust PCIe links over optical interconnects between GPUs, CPUs, CXL memory devices and other PCIe endpoints. This breakthrough expands our intelligent connectivity platform to allow customers to seamlessly scale and extend high-bandwidth, low-latency PCIe interconnects over optics. Overall, we expect our Aries PCIe retimer business to deliver strong growth as system complexity, platform diversity and speeds continue to increase and, on average, result in higher retimer content per GPU in next-generation AI platforms. Next, in addition to the strong growth prospects of our Aries product family across the PCIe ecosystem, we are also seeing our Taurus product family for Ethernet AEC applications start to meaningfully contribute to growth in the back half of 2024.
What is exciting about these ramps is the diversity in application and use cases. We are seeing demand for our Taurus product family for both AI and general compute platforms. We’re supporting the market with multiple cable configurations, including straight, Y cables and X cables. We will be shipping volume into hyperscaler build-outs, supporting multiple cable vendors to enable a diverse supply chain that is crucial for hyperscalers. Overall, we are very excited about Taurus becoming yet another engine of growth as we look to expand the top-line while also diversifying our product family contributions. Last but not least, CXL is an important technology to solve memory bandwidth and capacity bottlenecks in compute platforms. We are working closely with our hyperscaler partners to demonstrate various use cases for this technology and starting to deploy our Leo CXL controllers in preproduction racks in data centers.
We have incorporated the learnings, customization and security requirements into our COSMOS software and have the most robust, cloud-ready CXL solution in the industry. We have demonstrated that our Leo CXL Smart Memory Controllers improve application performance and reduce TCO in compute platforms. Very importantly, we can accomplish many of these performance gains with zero application-level software changes or upgrades. Overall, we remain very excited about the potential of CXL in data center applications. Finally, our close collaboration and front-row seat with hyperscalers and AI platform providers continues to yield valuable insights regarding the direction of compute technologies and the connectivity topologies that will be required to support them.
This close collaboration is helping us identify new product and business opportunities and additional engagement models across our entire intelligent connectivity platform, which we believe will drive strong, long-term growth for Astera. With that, I’ll turn the call over to our CFO Mike Tate, who will discuss our Q2 financial results and our Q3 outlook.
Mike Tate: Thanks, Sanjay, and thanks to everyone for joining the call. This overview of our Q2 financial results and Q3 guidance will be on a non-GAAP basis. The primary difference in Astera Labs’ non-GAAP metrics is stock-based compensation and its related income tax effects. Please refer to today’s press release available on the Investor Relations section of our website for more details on both our GAAP and non-GAAP Q3 financial outlook, as well as a reconciliation of our GAAP to non-GAAP financial measures presented on this call. For Q2 of 2024, Astera Labs delivered record quarterly revenue of $76.9 million, which was up 18% from the previous quarter and 619% higher than the revenue in Q2 of 2023. During the quarter, we shipped products to all major hyperscalers and AI accelerator manufacturers.
We recognized revenue across all three of our product families during the quarter, with the Aries product family being the largest contributor, driven by continued momentum in AI-based platforms. In Q2, Taurus revenues continued to primarily ship into 200-gig Ethernet-based systems, and we expect Taurus revenue to now diversify further as we begin to ship volume into 400-gig Ethernet-based systems in the third quarter. Q2 Leo revenues were largely from customers purchasing preproduction volumes for the development of their next-generation, CXL-capable compute platforms, with our customers' production launch timing being dependent on the data center server CPU refresh cycle. Q2 non-GAAP gross margin was 78%, down 20 basis points compared to 78.2% in Q1 of 2024 and better than our guidance of 77%.
Non-GAAP operating expenses for Q2 were $41.2 million, up from $35.2 million in the previous quarter and consistent with our guidance. Within non-GAAP operating expenses, R&D expense was $27.1 million; sales and marketing expense was $6.3 million; and general and administrative expense was $7.8 million. Non-GAAP operating margin for Q2 was 24.4%. Interest income in Q2 was $10.3 million. Our non-GAAP tax provision was $6.8 million for the quarter, which represents a tax rate of 23% on a non-GAAP basis. Non-GAAP fully diluted share count for Q2 was 175.3 million shares, and our non-GAAP diluted earnings per share for the quarter was $0.13. Cash flow from operating activities for Q2 was $29.8 million, and we ended the quarter with cash, cash equivalents and marketable securities of just over $830 million.
Now turning to our guidance for Q3 of fiscal 2024. We expect Q3 revenue to be within a range of $95 million to $100 million, up roughly 24% to 30% sequentially from the prior quarter. We believe our Aries product family will continue to be the largest component of revenue and will be the primary driver of sequential growth in Q3, driven by growing volume deployment within our customers' AI servers. We also expect our Taurus family to drive solid growth quarter-over-quarter as design wins within new 400-gig Ethernet-based systems ramp into volume production. We expect non-GAAP gross margin to be approximately 75%. The sequential decline in gross margin is being driven by an expected product mix shift towards hardware solutions during the quarter.
We expect non-GAAP operating expenses to be in the range of approximately $46 million to $47 million as we remain aggressive in expanding our R&D resource pool across head count and intellectual property. Interest income is expected to be approximately $10 million. Our non-GAAP tax rate should be approximately 20%, and our non-GAAP fully diluted share count is expected to be approximately 177 million shares. Adding this all up, we're expecting non-GAAP fully diluted earnings per share in the range of approximately $0.16 to $0.17. This concludes the prepared remarks. And once again, we are very much appreciative of everyone joining the call. And now we will open the line for questions. Operator?
Q&A Session
Operator: Thank you. [Operator Instructions] We’ll take our first question from Harlan Sur at JPMorgan.
Harlan Sur: Good afternoon. Thanks for taking my question, and congratulations on the strong results. During the quarter, there were lots of concerns around your large GPU customer and one of their next-generation GPU SKUs, the GB200. Glad that the team could clarify that your dollar content across all Blackwell GPU SKUs is actually rising versus prior-generation Hopper. But as you guys mentioned, AI ASIC accelerator mix is rapidly rising, and actually we believe outgrowing GPUs both this year and next year and accounting for like 50% of the XPU mix next year, right? And with ASICs, it's 100% PCIe-based. And as you pointed out, many of these ASIC customers are still in the early stages of the ramp. So given all of this, and given some of the new product ramps with your AEC solution, what's the team's visibility and confidence level on driving continued quarter-on-quarter growth from here, maybe over the next several quarters?
Jitendra Mohan: Harlan, thank you so much for the question. It is great to be in the place that we are here today. We feel very confident about what is to come. Clearly, we don't guide more than one quarter out, so please don't take that as any guidance. But we really believe that we are in the early innings of AI here. All of the hyperscalers are increasing their CapEx targets for the rest of the year. 2025 is expected to be even higher. We heard that the next Llama model requires 10 times more compute to train. So all of these trends are basically driving a radical shift in technology. We are seeing, as you correctly pointed out, a lot of our hyperscaler customers ramp their internally developed AI accelerators in addition to deploying third-party AI accelerators.
And we are very pleased that we have design wins across all of these different platforms. Our customers are ramping their platforms, and we are ramping multiple product families. As Sanjay mentioned, both Aries and Taurus are ramping into these new platforms. So we feel very good about what is in store for the future and feel that, with rising content on a per-GPU basis, we'll be able to outpace market growth in the long term.
Harlan Sur: I appreciate that. And on top of the strong AI demand tailwinds and the new product ramps that you guys articulated today, one thing I haven't baked into my model is the penetration of your retimer technology into general purpose servers, right? And the good news is that we are finally starting to see the flash vendors aggressively bringing Gen 5 PCIe SSDs to the market, which could potentially unlock the retimer opportunity in general purpose servers, where the Gen 5 retimer content today is still zero. So what's the team's outlook? Do you see that there may be some penetration starting in 2025 of your retimer solutions into general purpose servers? And maybe if you could size that potential opportunity for us.
Sanjay Gajendra: Absolutely, Harlan. Sanjay here. Good to hear your voice. Yes, so in general, that's a correct statement. We have several design wins on the compute side. Just for reasons like you highlighted, either SSDs not being Gen 5-ready or dollars being factored into AI platforms instead, there has been slower than expected growth on the general compute side. But at some point, like we keep saying, the servers are going to fall off the rack given how long they've been in the fleet. So we do expect general compute to start picking up, especially as both AMD and Intel get to production with the Turin and Granite Rapids based CPUs. So overall, in 2025 we do expect that the compute platform will start contributing meaningful revenue growth.
Like I noted, we do have design wins already in these platforms for Aries retimers. But I would also like to add that we have design wins for our Taurus Ethernet modules in general compute applications as well. So we should see a two-engine growth story along general compute to go along with all of the things we shared on AI, both for third-party or merchant GPUs, as well as the big change that we are seeing now, which is the ramp in internally developed accelerators, being a meaningful and significant driver of our growth.
Harlan Sur: Thank you.
Operator: We’ll move next to Joe Moore at Morgan Stanley.
Joe Moore: Great. Thank you. I wonder if you could talk about the competitive dynamics within PCIe Gen 5 retimers. There are a number of people who have qualified solutions in China and in the US. Are you seeing any encroachment there? And then in terms of PCIe Gen 6, can you talk about the prospects for when you start to see volume there?
Sanjay Gajendra: Yes, absolutely. Let me take that, Joe. So overall, this is a big and growing market. I think that fact is clear. I mean, the fact that you have larger names jumping into the mix sort of validates the market that the retimer represents. Now a couple of points to keep in mind is that connectivity products, especially PCI Express, tend to have a certain nuance to them, which is the fact that we are the device in the middle. We are always in between GPU, storage, networking and so on. And interoperation, especially at high-volume, cloud-scale deployment, becomes critical. So what we have done in the last three or four years is really work shoulder-to-shoulder with our hyperscaler and AI platform providers to ensure that interoperation is met and that platform-level deployment, whether it is diagnostics, telemetry or firmware management, is all addressed, including the COSMOS software that we provide for fleet-management and diagnostic type of capability.
Those all have been integrated into our customers' operating stacks. So in general, the picture I'm trying to paint here is that the tribal knowledge that we have built and the penetration that we have, not just with the silicon but also the software, give us a significant advantage compared to our competitors. Now having said that, we will continue to work hard. We have several design wins for PCIe Gen 6, like we shared in today's call, that are all designed around the next-generation GPU platforms, specifically the Blackwell-based GPUs from Nvidia, which are publicly noted to support Gen 6. So we'll continue to work through them. We are currently shipping preproduction volume to support some of the initial ramps, including for GB200-based platforms.
So overall, we feel good about the position that we are in, both in terms of Gen 5, as well as transitioning those designs into Gen 6 as the platforms develop and grow.
Joe Moore: Great. Thank you.
Operator: We’ll move next to Blayne Curtis at Jefferies LLC.
Blayne Curtis: Hi, thanks for taking my question. I just want to ask you, in terms of the September outlook, you talked about meaningful revenue from AEC. I mean, I think the other point was that the gross margin guidance was because of the mix, which I'm assuming is because of that ramp. But just trying to size it, I know you don't break out the segments, but if you can kind of just give us some broad strokes as to how much of the growth is coming from Taurus versus Aries in September.
Mike Tate: Yes, Blayne. The margins will come down to the extent we sell more hardware versus silicon. So Taurus is definitely one of those drivers. Also we do modules on the Aries side, and both — we’re seeing growth in both of those. So when you look at the growth guidance we are giving in third quarter, you have the contribution from Taurus, you have the incremental modules on Aries, but also we are seeing a lot of growth just from Aries Gen 5 going into AI servers. And a lot of new platforms, and the platforms generally are getting more content per platform. So when you look at the growth, I think it is kind of balanced between those three drivers, largely.
Blayne Curtis: Got you. And then I want to ask on the Gen 6 adoption moving from preproduction to production. The main GPU in the market supports Gen 6. I think the CPUs that would talk Gen 6 are going to be a bit of a ways away. So I'm just kind of curious about the catalyst there. Do you expect Gen 6 to be in the market next year even if there are no CPUs that kind of speak Gen 6?
Jitendra Mohan: Blayne, it is a great observation. Let me say that, as these compute platforms get more and more powerful to address these growing AI models, the only way to keep them fed, to keep these GPUs utilized, is to get more and more data in and out of these platforms. So in the past, the CPUs played a very central role in terms of being the conduit for all of this information. But with the new accelerated compute architecture, the CPU is largely an orchestration or control engine for the most part; it does do a few other things. But in general, you are trying to get the data in and out of the GPU using the scale-out and scale-up networks that are made up of either PCI Express, Ethernet or NVLink protocols. And as these protocols go faster and faster, we end up seeing more and more demand for the products that we have.
And as a result, as these new systems get deployed, we see higher content for us on a per-GPU basis, and it's largely to improve GPU utilization through these increased data rates.
Sanjay Gajendra: Blayne, if I can add one more point. You didn't quite directly ask us about the September quarter growth, but I do want to be abundantly clear on one point, which is that the growth we are forecasting for the September quarter is based upon not just preproduction sampling, but all of the additional production ramps that we are seeing for both the third-party platforms and also internally developed accelerators. That is what is driving the growth that we are highlighting for September, although there may be other things that you can look at overall.
Operator: We’ll move to our next question from Tom O’Malley at Barclays.
Thomas O’Malley: Hi, guys. Thanks for taking my questions. Congrats on the nice results. I just wanted to ask a broader network architecture question. You talked a little bit more about PCIe over optical. And when you look at the back end today, I think there are a lot of efforts to improve the Ethernet offering as it compares to kind of the leader in the market as they kind of expand NVLink. Could you talk about when you see the inflection point with PCIe over optical kind of being the majority of the back end? Is that something that's coming sooner? Just kind of the time frame there. And then just to explain a little further, I think you mentioned that it comes with a lot of additional retiming content when you use those cables. Just anything additional there, and then I have a follow-up.
Jitendra Mohan: Let me take that. This is Jitendra, Tom. The architectures for AI systems are definitely evolving. And actually, I would say, they are evolving at a very rapid pace. Different customers use different architectures to craft their systems. If you look at Nvidia-based systems, they do use NVLink, which is, of course, a proprietary closed interface. The rest of the world largely uses protocols that are either PCI Express or Ethernet or they are based on PCI Express and Ethernet. And the choice of particular protocol is really dependent upon the infrastructure that the hyperscalers have and how they choose to deploy this technology. Clearly, we play in both. Our Taurus Ethernet Smart Cable Module support Ethernet.
And now with our Aries Smart Cable Modules, we are able to support PCI Express as well. And if you think about the evolution, we started with Aries retimers for driving mostly within-the-box connectivity and shorter-distance connectivity over passive cables. As these networking architectures evolved and you needed to cluster more GPUs together, we went with the Aries Smart Cable Modules that allow you to connect multiple racks together, up to 7 meters of copper cables. And as it expands into even further distances, we go into optical, where we demonstrated running a very popular GPU over 50 meters of optical fiber. So these are all of the tools that we are making available to our hyperscaler partners for them to craft their solutions and deploy AI at the data center scale.
Thomas O’Malley: Helpful. As a follow-up, I know this is a bit of a tougher question, but I do think that there is a lot of confusion out there. And I just would appreciate your thoughts. You mentioned in the prepared remarks hundreds of different types of deployment styles for the GB200. Obviously, certain hyperscalers are going to do it their way and then certain hyperscalers are going to take what’s called kind of the entire system, so the NVL36 or the NVL72. Can you talk about your assumptions for what you think will be the percentage that goes towards the full system versus the hyperscalers that use kind of their own methods, and talk about the content opportunities and how they would kind of play out in those two scenarios?
I do think that Nvidia and others are talking potentially about more systems than historically, but just maybe the puts and takes on how different hyperscalers architect systems and what it means for your content. Thank you.
Jitendra Mohan: Yes. Great questions. And as you pointed out, a lot of moving pieces obviously, right? But here is, I think, what we know and what we can comment on. First of all, all the hyperscalers are indeed deploying new AI platforms that are based on merchant silicon or third-party accelerators, as well as their own accelerators. And overall, we do expect our retimer content to go up. Now if you double-click specifically on Nvidia or the Blackwell system, it comes in many, many different flavors. If you think about the overall Blackwell platform, it is really pushing the technology boundaries. And what that is doing is it’s creating more challenges for the hyperscalers, whether it is power delivery or thermals, software complexities or connectivity.
As these systems grow bigger, run faster and become more complex, we absolutely think that the need for retimers goes up. And that drives our content higher on a per-GPU basis. Now it is harder to predict which particular platform will have what kind of share. That’s not really our business to predict. What we are doing is supporting our customers, our AI platform providers as well as hyperscalers, to make sure that these kinds of high-tech platforms can be deployed as easily as possible. And at the end of the day, what you will find is hyperscalers will have to either adapt their data centers to these new technologies or adapt this new technology to their data centers. And that creates a great opportunity for our products.
We already have design wins across multiple form factors of hyperscaler GPUs as well as the third-party GPUs. And overall, we expect our business to continue to grow strongly. Very exciting times for us.
Thomas O’Malley: Thank you very much.
Operator: Our next question comes from Tore Svanberg at Stifel, Nicolaus.
Jeremy Kwan: Yes. Good afternoon. This is Jeremy calling for Tore. And let me also add my congratulations on a very strong quarter and outlook. A couple of questions. First, could you provide maybe a revenue breakout between the three product segments here? I’m not sure if that was covered at all.
Mike Tate: Yes. We don’t break out specifically the revenue by product. But like we said on the call, the Q2 revenues were driven heavily by the AI growth for Gen 5 and the broadening out of our design win portfolio. When you look into Q3, it’s — the three main drivers are the initial Taurus ramp into 400 gig, the broadening out of AI servers for Gen 5 in both merchant as well as internally developed accelerator programs, and then also we are doing back-end clustering with our Aries SCM modules. So when you look at that, those three drivers are mainly giving us the growth in Q3.
Jeremy Kwan: Great. And then I guess maybe looking more into the Leo CXL. I understand you are shipping preproduction. When are you expecting to see more of a material ramp for Leo?
Sanjay Gajendra: Yes. So in terms of a material ramp, it’s a function of CPUs being available that support CXL 2.0. So we are, of course, tracking the announcements from AMD and Intel, which are expected to get to production in the second half of this year with Turin and Granite Rapids. In general, these things will take a little time in terms of engineering them into platforms. So what we are guiding is that 2025 is when we expect production ramps to pick up on CXL.
Jeremy Kwan: Great. Thanks. And if I could squeeze one last question in. Can you give us maybe a sense of your revenue, how it might break out between modules and stand-alone retimers? Is there a way to kind of look at revenues in that way and how that can impact your SAM growth over time? Thank you.
Mike Tate: Yes. Taurus predominantly is modules. Aries, we are doing the back-end clustering of GPUs with the modules predominantly, but the bulk of the revenues is stand-alone retimers in that product family. Leo, once it ramps, we’ll do add-in cards and silicon, but they’ll be heavily skewed towards silicon.
Operator: We’ll move next to Quinn Bolton at Needham & Company.
Quinn Bolton: Hi, guys. Thanks for taking my question. I guess maybe a follow-up just on the Blackwell question. It looks like there have been some recent architectural or system-level changes at Nvidia with sort of the introduction of the GB200A that looks like it uses PCI interconnect or PCI Express to connect the GPUs and the CPUs and perhaps a deemphasis in the HGX platforms. Just wondering if you see any shifts in content, if that’s favorable, if it’s about a wash going from one platform to the other. And then I’ve got a follow-up.
Jitendra Mohan: Yes. Thank you, Quinn. Unfortunately, it will not be appropriate for us to kind of comment on rumors and third-party information that seems to be circulating around. What we will say is that we are committed to whatever platform our customers want to deploy. Whether it’s a full rack or it’s an HGX server or something in between, we are working with them very, very closely, shoulder to shoulder every day. As Sanjay mentioned, we already have multiple design wins in the Blackwell family including the GB200. We are shipping initial quantities of preproduction to the early adopters. And we do have backlog in place that serves the Blackwell platform, including GB200.
Quinn Bolton: Got it. Okay. Thank you for that. And just maybe a clarification on the Taurus 400-gig ramp as well as the Aries SCM ramps. Are those ramping across multiple hyperscalers? Are they driven by a lead hyperscaler initially and then you would expect to broaden it out to other hyperscalers as we move into 2025?
Sanjay Gajendra: Yes. Good question. Let me take that. So if you think about AECs, in general, 800 gig, where you’re running 100 gig per lane, is the first broad use case that we see for AEC applications. If you look at [data] (ph) rates lower than that, let’s say, 400 gig and so on, it tends to be, frankly, very case by case. It depends on the topology, application and so on. So the good thing about the design wins we have is that these scale across multiple platforms, both from an AI and general compute standpoint, and support various different topologies. And the revenue drivers that we are highlighting for 3Q and beyond are based on supporting these applications. With 800 gig, it becomes much broader, with several different customers essentially requiring AECs.
Quinn Bolton: And is it similar for the Aries SCMs for back-end clustering as well?
Sanjay Gajendra: Exactly. It depends on the topology, in terms of how the back-end networks are designed for the AI subsystems. In general, when it comes to active cabling type of technology, it becomes case by case depending on the infrastructure and how exactly systems are being put together, compared to a component like a PCIe retimer that goes across a broad array of use cases across multiple different deployment scenarios. So that’s the nuance to keep in mind when you look at AEC markets. The volume and the deployment scale tends to be very broad if you are looking at how infrastructures are being put together. So it is one of those things where you look at it case by case. But as long as you’re able to address a wide variety of applications, it does add up very significantly.
Operator: We’ll take our next question from Ross Seymore at Deutsche Bank.
Ross Seymore: Hi, guys. Thanks for asking the question. Apologies if I go back to one that’s been hit on a couple of times, but I want to do it nonetheless, on kind of the Blackwell topic and the content topic. You guys gave us the punchline that you believe your content, on average, will go up per GPU generation to generation. It also seems like you’re getting across that the customization of it is still very broad-based, and so just looking at the vanilla system SKUs and reference designs Nvidia itself has might be misleading. Two-part question to this. Are you of the belief that your content is equal across the board in the same way it was in Hopper? Or do things get more skewed, where there’ll be places where you’ll have a significant step-up in content in some configurations and others where you would have a significant step-down? And the difference between those two might be where investors are getting a little bit confused.
Sanjay Gajendra: Let me try to add a little bit more color on that. But before I do that, let me remind you of two data points we’ve already covered in the Q&A so far. First point, let us be very clear that our PCIe retimer content per GPU, on average, will continue to grow as AI systems scale across various different topologies. And this applies to both third-party, standard merchant GPUs as well as internally developed GPUs. The second reminder that I want to note is that, specifically for Blackwell, we expect our PCIe content per GPU to go up. Now what you are asking about specifically is the deployment scenarios, which right now are evolving, right? So we have design wins for several different topologies, including the GB200.
But if you look at the various different options that Nvidia is offering and how those are being composed and considered by the hyperscalers, that situation is evolving at the moment. The key message that we want to deliver is that, overall, our PCIe content is going to be higher than the Hopper generation. We expect that the design wins that we are starting to see, and that we’re starting to ship from a pre-production standpoint, are all meaningful and will essentially allow us to continue to have a robust growth engine, as far as our PCIe retimer business is concerned.
Ross Seymore: Thanks for that. And I guess as a follow-up, you guys have focused more on this call about the internally developed accelerators than you have in calls in the past. And I realize there haven’t been too many since your IPO. But are you trying to get across the key message that those are really growing as a percentage of your mix, that those are penetrating the market and kind of catching up and taking relative share from the GPU side of things? Or is your commentary meant to get across that Astera itself with its retimers and other components will take significant share in that kind of ASIC market relative to the GPU side?
Sanjay Gajendra: Yes, it’s probably both, to be honest with you, in the sense that we do see it, it’s no secret, right? I think many of the hyperscalers are doing their own accelerators, which are driven by the workloads or the business models that they pursue. I think that will continue as a macro trend in terms of internally developed accelerators going hand in hand with GPUs that are available from Nvidia or AMD or others. So that’s the model that we believe will be here to stay, that hybrid approach. And for us really, the reason we are highlighting is that, of course, we have had a significant business that has grown in the last year or two years from the designs that we have been supporting with the merchant GPU deployments that have happened.
But at the same time, now we’re reaching a point where the accelerator volumes are also starting to ramp up. And for us, the good news is that we are on all the major AI accelerator platforms from a design win standpoint, or at least all the major ones that are out there. So we have multiple paths to grow our business, and that is a very positive thing that we believe will continue to allow us to keep delivering the kind of results that we’re delivering. And as new CPU/GPU architectures come about, just like Nvidia’s Blackwell platform, we do expect to gain from it, both on the retimer content as well as other products that we can serve to this space.
Operator: We’ll take our next question from Richard Shannon at Craig-Hallum Capital Group.
Richard Shannon: Hi, guys. Thanks for taking my question. Maybe a question on PCI Express Gen 6 here. Last call, you talked about some of the design wins being decided in the next six to nine months, and obviously we’re three more months farther forward here. Obviously you’ve got some wins already on Gen 6, but I just want to get a sense of the share of the market kind of looking backwards. How much of that market has been decided versus up for grabs? Maybe you can help characterize what’s left to win in the next three to six months.
Sanjay Gajendra: I’m trying to see how best to answer that question. Let me try to provide some color. On the design win window for these platforms, once GPUs become available, you’re looking at 6 to 12 months before they go to production. So that’s one thing to keep in mind. But also please think about how hyperscalers go about doing their stuff, right? Everyone is in an arms race right now, getting to production as quickly as possible. In many situations, resources are also limited, meaning for every 10 engineers that they may need, they might have two or three, just given the number of platforms and how quickly everyone is trying to move. To that point, what is happening is that many of these engineers are familiar with our Gen 5 retimers.
They’ve designed it across multiple platforms. They’ve built software tools and capabilities around it. And now our Gen 6 retimers are essentially a seamless upgrade from a software standpoint, from a hardware standpoint. So it does offer the lowest risk and fastest path to our customers. And that plays well within their own objectives of trying to get something out quickly and dealing with resources that might not be available at the levels that are required. So overall we’re starting to gain from it, and we are essentially being the leader in the space, being the one that is getting the first crack at these opportunities. And we are doing everything we can to convert those things into design wins and revenue.
Richard Shannon: Okay. Great. My follow-on question is a pretty simple one, just looking at the Taurus line here. Great to see the ramp, you’re at 400 gig, and I don’t want to kind of get too far ahead of what looks to be a pretty nice ramp here in the second half of the year, but I think you’ve talked about the 800-gig generation ramping later in 2025. Any update on that timing and how your design win is looking so far?
Jitendra Mohan: Yes. Good question. So the 800-gig timing, we believe, is going to be late in 2025. Right now, what we are seeing is 400-gig applications for some of the AI systems, and actually we are seeing them for general-purpose compute as well, where you are doing the traditional server to top-of-the-rack connection. So that will continue on for the rest of this year for 400-gig deployments. And then as we get some of the newer NICs that are capable of 100 gig and 200 gig per lane, et cetera, to get to 800 gig, that is where we see a broadening of this market and more deployments across different hyperscalers and different platforms in the latter half of 2025.
Richard Shannon: Okay. Great. Thanks, guys.
Operator: We’ll go next to Suji Desilva at ROTH Capital.
Unidentified Analyst: Hi, Jitendra, this is [Andre] (ph). And Mike, congrats on the progress here. This question maybe may not have been asked explicitly, but can you give us a relative content framework for internally developed versus third-party processors or accelerators? Is it higher for internally developed on average? Or is it hard to generalize like that?
Jitendra Mohan: I would say it’s a little bit hard to generalize. It varies quite a bit. Even within one particular platform, you can have different form factors. If you look at, let’s say, Blackwell, you have HGX, you have MGX, you have NVLs, you have custom racks that are getting deployed. And if you look at each one of them, you’ll find different amounts of content. The number of retimers will vary, and where they get placed will vary. But what is very consistent is that the overall content does go up for us. Now the other factor to consider is the choice of back-end network. Again, for example, if you look at the Blackwell family, they use NVLink, which is a closed proprietary standard that we do not participate in. But when somebody uses PCI Express or a PCI Express-based protocol for their back-end connectivity, then our content goes up pretty significantly, because now we are shipping not only our retimers but also the Aries Smart Cable Modules into that application.
Similarly, if the back-end interconnect is Ethernet, that will benefit our Taurus family of products. So it really varies greatly on what the architecture of the platform is and what form factor it is getting deployed in.
Unidentified Analyst: Okay. Great. That’s very helpful color. Thanks. And then just a quick follow-up here. Was there something inherent in the Blackwell transition from Hopper that made this much platform and architecture diversification possible? Or was it just the hyperscalers getting more sophisticated about what they are trying to do? Or was it the availability of things like Astera’s PCI products? Any color there would be helpful as to how this kind of proliferation of architectures came about.
Jitendra Mohan: I mean, if you look at the Blackwell family, it’s like a marvel of technology. The amount of content that is being pushed into that platform is incredible. And as I mentioned earlier, that does create other problems, right? There is so much compute packed into such a small space. Delivering power to those racks, to the GPUs themselves and the CPUs, is a challenge. How to cool them becomes a challenge. And the fact is that modern data centers are just not equipped to handle many of these issues. So what the hyperscalers are doing is taking these broad platforms and our technology and trying to adapt them so that they fit into their data centers. And that’s where we see a lot of opportunity for our existing products, the ones that we have talked about, as well as some new products that we’ve been working on, again, shoulder to shoulder with our hyperscaler and AI platform customers.
So very excited to see how these new platforms will get rolled out, including Blackwell, including the hyperscaler internal AI platforms and the increased content that we have there.
Unidentified Analyst: Okay. Great. Thanks for the color.
Operator: And finally, we’ll move to Quinn Bolton at Needham & Company.
Quinn Bolton: Hi, guys, just a quick follow-up. I know you had the potential for an early lockup expiring Thursday morning. Just wanted to see if you guys could confirm, are we still within the 10-day measuring period, so that you could trigger that early lockup? Or does the release of second quarter results sort of end that period, and we’re now looking at a September 16 lockup expiration? Thank you.
Mike Tate: Yes. The release of our earnings today triggers a lockup release that opens up on Thursday.
Quinn Bolton: It opens on Thursday. Okay, thank you.
Jitendra Mohan: The early lockup already expired long ago.
Operator: And there are no further questions at this time. I will turn the call back over to Leslie Green for closing remarks.
Leslie Green: Thank you, everyone, for your participation and questions. We look forward to updating you on our progress during our Q3 earnings conference call later this fall. Thank you.
Operator: And this concludes today’s conference call. Thank you for your participation. You may now disconnect.