Advanced Micro Devices, Inc. (NASDAQ:AMD) Q4 2023 Earnings Call Transcript

Operator: And the next question comes from the line of Matt Ramsay with TD Cowen. Please proceed with your question.

Matthew Ramsay: Thank you very much. Good afternoon. Lisa, I wanted to ask, I mean, there's been so much focus and scrutiny, as there should be, on the really exciting progression with MI300. We've progressed over the last six months from, I think, some doubts in the investment community on the software and your ability to ramp the product, and now you've proven that you're ramping it with, I think you said, dozens of customers across different end-markets. So, what I'm interested in hearing a little bit more about is this: you guys have been open about what some of the forward programs in your traditional server business look like from a roadmap perspective, and I'd be interested to hear how you're thinking about the roadmap in your MI accelerator family.

Is it going to continue to be parts that are CPU and GPU together? Or is it primarily a GPU-only roadmap? What kind of cadence are you thinking about? Any kind of color you can give us on some of the forward roadmap trajectory for that program would be really helpful. Thanks.

Lisa Su: Yeah, sure, Matt. So, I appreciate the comments. I think the traction that we're getting with the MI300 family is really strong. I think what's benefited us here is our use of chiplet technologies, which has given us the ability to have sort of both the APU version as well as the GPU version, and we continue to use that to differentiate ourselves; that's how we get our memory bandwidth and memory capacity advantages. As we go forward, you can imagine, like we did in the EPYC timeframe, we planned multiple generations in sequence. That's the way we're planning the roadmap. One of the things I will note about the AI accelerator market is that the demand for compute is so high that we are seeing sort of an acceleration of the roadmap generations here, and we are similarly planning an acceleration of our roadmap.

I would say that we'll talk more about the overall roadmap beyond MI300 as we get later into this year. But you can be assured that we're working very closely with our customers to have a very competitive roadmap for both training and inference that will come out over the next couple of years.

Matthew Ramsay: Thank you for that, Lisa. Just as a follow-up, I guess one of the questions that I've been getting a lot, in different forms, is with respect to the $400 billion TAM that you guys have laid out for 2027. Maybe you could give us a little look under the hood, because I've got 100 versions of the same question, which is: how the heck did you come up with that number? So, if you could give us a little bit more in terms of, are we talking about systems and accelerator cards? Are we talking about just the silicon? Are we talking about full servers? And what kind of unit assumptions? Any kind of thing that you can give us on market sizing, or on what gives you the visibility so early into this generative AI trend to give a precise number three years out, would be really, really helpful. Thank you.

Lisa Su: Sure. Well, Matt, I don't know how precise it is, but I think we said approximately $400 billion. But I think what we need to look at is growth rate and how we get to those growth rates. I think we expect units to grow a substantial double-digit percentage. But you should also expect that content is going to grow. So, if you think about how important memory and memory capacity are as we go forward, you can imagine that we'll see acceleration there, and just in the overall content as we go to more advanced technology nodes. So, there's some ASP uplift in there. And then, what we also do is plan longer-term roadmaps with our customers in terms of how they're thinking about sort of the size of training clusters, the number of training clusters.

And then, there's the fact that we believe inference is actually going to exceed training as we go into the next couple of years, just given that more enterprises adopt. So, as we look at all those pieces, I think we feel good that the growth rate is going to be significant and sustained over the next few years. In terms of what's in that TAM, it really is accelerator TAM. So, within accelerators, there are certainly GPUs, and there will also be some ASICs and other accelerators in that TAM. As we think about sort of the different types of models, from smaller models to fine-tuning of models to the largest large language models, I think you're going to need different silicon for those different use cases. But from our standpoint, GPUs are still going to be sort of the compute element of choice when you're talking about training and inferencing on the largest language models.

Operator: And the next question comes from the line of Joe Moore with Morgan Stanley. Please proceed with your question.

Joseph Moore: Great. Thank you. I think you talked about the MI300 cloud workloads being kind of split between the more customer-facing workloads versus internal. Can you talk about how you see the breakdown of that, and how is your ecosystem progressing? This is a brand-new chip. It seems impressive that they are able to support kind of a broad range of customer-facing workloads in the cloud.

Lisa Su: Yeah, sure, Joe. So, yes, look, we are really happy with how the MI300 has come up, and we've now deployed it and are working with a number of customers. What we have seen is that ROCm 6 has certainly been very important, as well as the direct optimization with our top cloud customers. We always said that the best way of optimizing the software is working directly on the most important workloads. And we've seen performance come up nicely, which is what we expected, frankly; with the GPU capabilities, we knew we would have to do some level of optimization, but the optimization has gone well. I think, to your broader question, the way I look at this is, there are lots of opportunities for us to work directly with large customers, both on the Cloud side as well as on the Enterprise side, who have specific training and inferencing workloads.

Our job is to make it as easy as possible for them, and so our entire tool chain, sort of the overall ROCm suite, has really gone through significant progress over the last six to nine months. And then, we're also getting some nice support from the open-source community. The work that Hugging Face is doing is tremendous in terms of just real-time optimization on our hardware, and our partnership with OpenAI on Triton and our work across a number of these open-source models have helped us actually make very rapid progress.

Joseph Moore: Great. And for my follow-up, I guess a lot of the forecasting of your business that I'm hearing is coming from the supply chain, and we're sort of hearing AMD is building X in Asia. I guess, how would you ask us to think about that? Are you looking at being kind of sold out for the year, so that the supply chain would be close to revenue? Are you building for the best-case scenario? I just worry sometimes about expectations when people hear the supply chain numbers. And I'm just curious how you bridge the gap.