Advanced Micro Devices, Inc. (NASDAQ:AMD) Q3 2023 Earnings Call Transcript

Jean Hu: Yeah, Matt. Thank you for the question. Yes, there are a few puts and takes, especially in a mixed demand environment. So let me comment on Q3 first. We are very pleased with our sequential gross margin expansion of 140 basis points, even though, as you mentioned, Embedded segment revenue actually declined by double digits sequentially. There are two primary drivers. The first one is that Data Center grew 21% sequentially, which provided a tailwind to our gross margin. Secondly, as we went through the inventory correction in the PC market, we did encounter some headwinds in Client segment gross margin, but in Q3 we saw very significant improvement in Client segment gross margin. I think going forward, the pace of Client segment improvement will moderate, but it will continue to drive incremental gross margin improvement.

So that really is why we were able to drive sequential growth in Q3. And in Q4, I would say the major dynamic is that with very strong double-digit growth in the Data Center business, we definitely have a tailwind, which more than offsets the Embedded segment declining by double digits sequentially again. I think going forward, it's really mix that is primarily driving our gross margin, but we feel pretty good about the second half of next year: when we can expand Data Center significantly and, especially, the Embedded segment starts to recover, we should be able to drive more meaningful gross margin improvement in the second half.

Operator: And the next question comes from the line of Ross Seymore with Deutsche Bank. Please proceed with your question.

Ross Seymore: Lisa, I had a question on the MI300 side of things. When you go to market, obviously, there's been shortages this year of GPU accelerators, and so a second source is definitely needed. But beyond just providing that second-source role, can you just walk us through some of the competitive advantages that the customer list you're going to talk about on the sixth is finding to be so attractive relative to your primary competitor?

Lisa Su: Yeah. I think there's a couple of different things, Ross. I mean, if we start with, it's just a very capable product. The way it's designed from a chiplet standpoint, we have very strong compute as well as memory capacity and memory bandwidth. In inference, in particular, that's very helpful. And the way to think about it is, on these large language models, you can't fit the model on one GPU. You actually need multiple GPUs. And if you have more memory, you can actually use fewer GPUs to infer those models, and so it's very beneficial from a total cost of ownership standpoint. From a software standpoint, this has been perhaps the area where we've had to invest more and do more work. Our customers and partners are actually moving towards an environment where they're more able to move across different hardware, really optimizing at the higher-level frameworks.

And that's reducing the barrier to entry of taking on a new solution. And we're also talking very much about, going forward, what the road map is. It's very similar to our EPYC evolution. When you think about our closest partners in the cloud environment, we work very closely with them to make each generation better. So I think MI300 is an excellent product, and we'll keep evolving it as we go through the next couple of generations.
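The memory-capacity point above can be sketched with rough arithmetic. The model sizes and per-GPU capacities below are illustrative assumptions, not AMD or customer figures; the calculation is a lower bound that counts only the space for model weights, ignoring activations, KV cache, and runtime overhead.

```python
import math

def gpus_needed(model_params_b: float, bytes_per_param: int, hbm_gb_per_gpu: int) -> int:
    """Minimum GPUs needed just to hold the model weights in memory.

    model_params_b: model size in billions of parameters
    bytes_per_param: 2 for 16-bit weights, 1 for 8-bit, etc.
    hbm_gb_per_gpu: usable high-bandwidth memory per accelerator, in GB
    """
    model_gb = model_params_b * bytes_per_param  # billions of params x bytes each = GB
    return math.ceil(model_gb / hbm_gb_per_gpu)

# A hypothetical 70B-parameter model in 16-bit weights needs ~140 GB:
print(gpus_needed(70, 2, 80))   # 80 GB per GPU  -> 2 GPUs
print(gpus_needed(70, 2, 192))  # 192 GB per GPU -> 1 GPU
```

Halving the GPU count for the same model is where the total-cost-of-ownership benefit Lisa describes comes from: fewer accelerators, fewer nodes, and less inter-GPU communication per inference.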

Ross Seymore: For my follow-up, I want to focus on the OpEx side of things. You guys have kept that pretty tight over the years. Jean, I just wondered what the puts and takes on that might be heading into 2024. I think you're exiting this year up high single digits, maybe 10%, year-over-year. Any sort of unique puts and takes, especially as you guys are driving for all that MI300 success, as we think about OpEx generally in 2024?

Jean Hu: Yeah. Thanks for the question. Our team has done an absolutely great job in reallocating resources within our budget envelope to really invest in the most important areas in AI and the data center. We are actually in the planning process for 2024. I can comment at a very high level: given the tremendous opportunities we have in AI and the Data Center, we definitely will increase both R&D investment and go-to-market investment to address those opportunities. I think the way to think about it is, our objective is to drive top-line revenue growth much faster than OpEx growth, so our investment can drive long-term growth. And we can also leverage our operating model to actually expand earnings much faster than revenue. That's really how we think about running the company and driving operating margin expansion.

Operator: And the next question comes from the line of Harsh Kumar with Piper Sandler. Please proceed with your question.

Harsh Kumar: Hi, Lisa. I had a strategic one for you and then somewhat of a tactical one. On the strategic side, as your key competitor is sort of getting their act together on manufacturing technology and nodes, would it not be feasible to think that their manufacturing cost could be significantly better, let's say, than yours? And if that's the case down the line, one year or two years out, I'm curious what kind of value-add offerings AMD would have to provide to a customer to keep the market share that you have in the server space, the data center space, and then keep that growing as well?

Lisa Su: Yes, Harsh. Maybe I should just take a step back and talk about the engagement that we have with our data center customers. When we think about the EPYC portfolio and what we've been able to build over the last few generations, and what we have going forward with Zen 5 and beyond, process technology is only one piece of the optimization. It's really about process technology, packaging, where we're leading the usage of chiplets and 2.5D and 3D integration, and then architecture and design. So it's really the holistic product. And from a pricing standpoint, actually, price is only one aspect of the conversation. Much of the conversation is on how much performance can you give me at what efficiency.