Astera Labs, Inc. Common Stock (NASDAQ:ALAB) Q1 2024 Earnings Call Transcript

Mike Tate: Sure. So as I mentioned, we started shipping into AI server platforms in volume in Q3, and while a lot of our customers are still in ramp mode to the extent we've been shipping for the past couple of quarters, there are still a lot of designs that haven't even begun to ramp. So we're still in the early phases, and if you look out in time, we see the Gen 5 piece of it in AI continuing to grow into next year as well. So as you look into Q2, the growth that we're guiding to is still largely driven by the Aries Gen 5 deployment in AI servers, both from existing platforms with increased unit volumes and from new customers beginning their ramps as well.

Thomas O'Malley: And then just a broader one. In talking with NVIDIA, they're referencing their GB200 architecture becoming a bigger percent of the mix, with NVL72 being more of the deployments that hyperscalers are taking. When you look at the Hopper architecture versus the Blackwell architecture and their NVL72 platform, where they're using NVLink amongst their GPUs, can you talk about the puts and takes when it comes to your retiming product? Do you see an attach rate that's any different than the current generation?

Jitendra Mohan: Let me take that. Great question. First, let me say that we are just at the beginning phases of AI. We will continue to see new architectures being produced by AI platform providers at a very rapid pace, just to match up with the growth in AI models. And on top of that, we'll see innovative ways that hyperscalers deploy these platforms in their clouds. As these architectures evolve, so do the connectivity challenges. Some challenges are going to be incremental and some are going to be completely new. So, given the increasing speeds and increasing complexity of these new platforms, we do expect our dollar content per AI platform to increase over time. We see these developments providing us good tailwinds going into the future.

So now to your question about the GB200 specifically. First of all, we cannot speak about specific customer architectures. But here is something that is very clear to see: as the AI platform providers produce these new architectures, the hyperscalers will choose different form factors to deploy them. And in that way, no two clouds are the same. Each hyperscaler has unique requirements and unique constraints in deploying these AI platforms, and we are working with all of them to enable these deployments. This combination of new platforms and very cloud-specific deployment strategies presents great opportunities for our PCIe connectivity portfolio. And to that point, as Sanjay mentioned, we announced the sampling of our Gen 6 Retimer during GTC.

If you look at our press release, you will see the broad support from AI platform providers. And to this day, to the best of our knowledge, we are still the only ones sampling an agnostic solution. So, on the whole, given that speeds are increasing, complexity is increasing, and in fact the pace of innovation is going up as well, these all play to our strengths, and we have customers coming to us for new approaches to solve these problems. So we feel very good about the potential to grow our PCIe connectivity business.

Operator: Your next question will come from the line of Quinn Bolton with Needham.

Quinn Bolton: I just wanted to follow up on the use of PCI Express in GPU-to-GPU backend networks. I think that's something you had historically excluded from your TAM, but it looks like it's becoming an opportunity here and starting to ramp in the second half of this year. Wondering if you could just talk about the breadth of some of the custom AI accelerators that are choosing PCI Express as their interconnect over, say, Ethernet? And then I've got a follow-up.

Jitendra Mohan: Again, great question. Just to follow up on the response that we provided before, there are two or three dominant protocols that are used to cluster GPUs together. The one that's most well known, of course, is NVLink, which is what NVIDIA uses and is a proprietary interface. The other two are Ethernet and PCI Express. We do see some of our customers using PCI Express, and I think it's not appropriate to say who, but certainly PCI Express is a fairly common protocol. It is the one that's natively found on all GPUs, CPUs, and other data center components. Ethernet is also very popular, and to the extent that a particular customer chooses to use Ethernet or PCI Express, we are able to support them both with our solutions: the Aries PCIe Retimer family as well as the Taurus Ethernet Retimer family.

We do expect these two to make meaningful contributions to our revenue, as I mentioned, starting at the end of this year and then, of course, continuing into next year.

Quinn Bolton: And my second question: you guys have talked about the introduction of new products as new TAM expansion activity, and I'm not going to ask you to introduce them today. But just in terms of timing as we think out, is the timeline for these new products sort of an introduction later this year or in 2025, with a revenue ramp in 2026? Is that the general framework investors should be using to think about the new products you've discussed?

Sanjay Gajendra: Again, I think we as a company don't talk about unreleased products or their timing. But what I can share with you is the following. First, we've been very fortunate to be in the central seat of AI deployment and to enjoy great relationships with the hyperscalers and AI platform providers. So we get to see a lot, we get to hear a lot in terms of requirements. Clearly, we are going to be developing products that address the bottlenecks, whether on the data side, the network side, or the memory side. So we are working on several products, as you can imagine, that would all be developed from the ground up for AI infrastructure and enable connectivity solutions that help deploy AI applications sooner.

There is a lot going on: a lot of new infrastructure, a lot of new GPU announcements, CPU announcements. So, given the pace of this market and the changes that are upcoming, we do anticipate that this will all start having a meaningful, incremental revenue impact on our business.

Operator: Your next question will come from the line of Ross Seymore with Deutsche Bank.

Ross Seymore: I wanted to go into the ASIC versus GPU side of things. As ASICs start to penetrate this market to certain degrees, how does that change, if at all, the retimer TAM that you would have? And I guess even the competitive dynamic in that equation, considering one of the biggest ASIC suppliers is also an aspiring competitor of yours?

Jitendra Mohan: So, great question again. Let me just refer back to what I said, which is that we will see more and more different solutions come to market to address the evolving AI requirements. Some of them are going to be GPUs from the known AI providers like NVIDIA, AMD, and others. And some others will be custom-built ASICs, built typically by hyperscalers, whether that's AWS or Microsoft or Google and others. The requirements for these two types of systems are common in some ways, but they do differ, for example, in what particular type of backend connectivity they use and exactly what I/Os are going into each of these chips. The good news is that with the breadth of portfolio that we have and our close engagement with several ASIC providers as well as the GPU providers, we understand the challenges of these systems very well.

And not only are we providing solutions that address those today with the current generation, we are engaged with them very closely on the next generation, on the upcoming platforms, whether they are GPU based or ASIC based, to provide these solutions. A great example was the Aries SCM, where, using our trusted PCI Express Retimer solution, we enabled a new way of connecting some of these ASICs on the backend network.

Sanjay Gajendra: And just maybe if I can add to that, one way to visualize the connectivity market, or subsystem, is as the nervous system within the human anatomy. Right? It's one of those things where you don't want to mess with it. Yes, there will be ASIC vendors, and there are off-the-shelf options. But once the nervous system is built and tested, especially one like what we have developed, built specifically for AI applications, there's a lot of qualification and a lot of software investment that hyperscalers have done. And they want to reuse that across different kinds of topologies, whether ASIC based or merchant silicon based. And we do see that trend happening when we look at the customers that we're engaged with today.

And protocols like PCI Express, Ethernet, and CXL, which is especially where Aries and Taurus play, are standards based. So from that standpoint, whatever architecture ends up being used, we believe that we stand to gain from it.

Ross Seymore: I guess as my follow-up, one quick one for Mike. How should we think about OpEx beyond the second quarter? I know there's a bigger step up there with a full quarter of being a publicly traded company, etcetera, but just walk us through your OpEx plans for the rest of the year, or even toward the target model?

Mike Tate: Yes, thanks, Ross. We are continuing to invest quite a bit in headcount, particularly in R&D. There are so many opportunities ahead of us that we'd love to get a jump on those products and also improve time to market. That being said, we're pretty selective about who we bring into the company, so that will meter our growth. And we believe our OpEx, although it's going to be increasing, will probably not increase at the rate of revenue over the near and long term. And that's why we feel good about a long-term operating margin model of 40%. So over time, we feel confident we can trend in that direction even with increasing investment in OpEx.

Operator: Your next question will come from the line of Suji Desilva with ROTH MKM.

Suji Desilva: Hi Jitendra, Sanjay, Mike, congrats on the first quarter here. On the backend addressable market here that's non-NVLink, I'm trying to understand whether the PCIe and Ethernet opportunities will be adopted at a similar pace out of the gate, or whether PCIe would lead that adoption in the non-NVLink backend opportunity?

Sanjay Gajendra: It's hard to say at this point just because there is so much development going on here. I mean, you can imagine the non-NVIDIA ecosystem will rely on standards-based technologies, whether it is PCI Express or Ethernet. And the advantage of PCI Express is that it's low latency. Right? Significantly lower latency compared to Ethernet. So there are some benefits to that. And there are certain extensions that people consider adding on top of PCI Express when it comes to proprietary implementations. So, overall, from a technology standpoint, we do see PCI Express having that advantage. Now, Ethernet has also been around, so we'll have to wait and see how all of this develops over the next, let's say, 6 to 18 months.