Joseph Moore: Great. And now that you’re getting a look at volume in that space, can you talk about, are the gross margins there going to be comparable to your other Data Center businesses?
Jean Hu: Yeah. So on the gross margin side, we do expect our GPU gross margin to be accretive to corporate average. Of course, right now, we are at a very, very early beginning of the ramp of the product. As you probably know, typically when you ramp new product, it takes some time to improve yield, testing time, manufacturing efficiency. So typically, it takes a few quarters to ramp the gross margin to a normalized level. But we are quite confident that our team is executing really well.
Operator: And the next question comes from the line of Timothy Arcuri with UBS. Please proceed with your question.
Timothy Arcuri: Lisa, I also wanted to ask about that $2 billion number for Data Center GPU next year. That’s still a pretty small portion, obviously, of the total TAM. Where do you think that can go? Do you think when we look at this out a couple of years, do you think you can be 15%, 20% share for total Data Center GPU or do you have aspirations to be even larger than that?
Lisa Su: Yeah, Tim. I mean, I would say that, first of all, this is an incredibly exciting market, right? I think we all see the growth in generative AI workloads. And the fact is, we’re just at the very early innings of people truly adopting it for enterprise, business productivity applications. So I think we are big believers in the strength of the market. We previously said we believe that the compound annual growth rate could be 50% over the next three or four years. And so we think the market is huge and there will be multiple winners in this market. Certainly, from our standpoint, we want to be — we’re playing to win and we think MI300 is a great product, but we also have a strong road map beyond that for the next couple of generations.
And we would like to be a significant player in this market so we’ll see how things play out. But overall, I would say that I am encouraged with the progress that we’re making on hardware and software and certainly with the customer set.
Timothy Arcuri: Thanks a lot. And then Jean, I just wanted to ask about March. I know that there's a lot of moving parts. It sounds like Data Center is up, but PC is going to be down, normal seasonal, and Embedded and Gaming sound like they're down as well. So can you just help us shape sort of how to think about March? Is it down a smidge? Is it flat? Could it be up a little bit? And maybe then how to think about first half versus back half next year, if you even want to go there. Thanks.
Jean Hu: Hey, Tim. We're guiding one quarter at a time. But just to help you with some of the color, as Lisa mentioned earlier, we said the Data Center GPU revenue will be flattish sequentially. That's the first thing, right? The mix will shift from majority El Capitan in Q4 to predominantly AI in Q1. So, because of the long-lead-time manufacturing cycle, we feel like it's going to be a similar level of revenue for Data Center GPU. But in general, if you look at our business, we do have seasonality. Typically in Q1, the Client business, server business, and Gaming business are seasonally down. Of course, right now, we definitely have a little bit more than normal seasonality, given the Embedded and Gaming dynamics we are seeing right now.
Server and Client are typically down sequentially and seasonally, too. But overall, I think we are really focused on just execution. We probably can provide more color when we get closer to Q1 2024, and, Lisa, please add if we have any color we can provide on the whole year 2024.
Lisa Su: Yeah. No, I think that covers it. When we look at the various pluses and minuses, I think we feel very good about the Data Center business. It continues to be a strong growth driver for us as we think about 2024, for both server as well as MI300. Client as well, we think, incrementally improves from a market standpoint, and we believe we can gain share, given the strength of our product portfolio. And then we have the headwinds of the Embedded inventory correction that we'll go through in the first half, and the console cycle. So I think those are the puts and takes.
Operator: And our next question comes from the line of Vivek Arya with Bank of America. Please proceed with your question.
Vivek Arya: Thanks for taking my question. Lisa, on the MI300, many of your hyperscaler customers have internal ASIC solutions ready or in the process of getting them ready. So if inference is the primary workload for MI300, do you think it is exposed to replacement by internal ASICs over time or do you think both MI300 and ASICs can coexist, right, along with the incumbent GPU solution?
Lisa Su: Yeah. I think, Vivek, when we look at the set of AI workloads going forward, we actually think they're pretty diverse. I mean, you have sort of the large language model training and inference, then you have what you might do in terms of fine-tuning off of a foundational model, and then you have, let's call it, straight inferencing. So I think within that framework, we absolutely believe that MI300 has a strong place in the market, and that's what our customers are telling us, and we're working very closely with them. So yes, I think there will be other solutions, but particularly for the LLMs, I think GPUs are going to be the processor of choice, and MI300 is very, very capable.
Vivek Arya: Got it. And then a question, Lisa, on just this interplay between AI and traditional computing, especially as it relates to ASPs and units. It seems like server CPU makers are kind of holding the line on price per core. But at the same time, the cloud players are extending the depreciation and replacement cycle of traditional server CPUs. So I'm just curious to get your take. What do you think is the interplay between units and ASPs? If you were to take a snapshot of what you have seen in '23 and how it informs you as you look at '24, is it possible that maybe unit growth in server is not that high, but you are able to make up for it on the ASP side? So just give us some color on, one, what is happening to traditional computing deployments? And secondly, is there a difference in the unit and ASP interplay on the server CPU side?
Lisa Su: Yeah. I think it's a good point, Vivek. So if I take a look at 2023, I think it's been a mixed environment, right? There was a good amount of, let's call it, caution in the overall server market. There was a bit of inventory digestion at some of the cloud guys, and then some optimization going on with enterprise — again, somewhat mixed. I think as we go forward, we've returned to growth in the server CPU market. Within that realm, with, for example, 4th Gen EPYC at somewhere between 96 and 128 cores, you just get a lot of compute. So I do think there is a framework where unit growth may be more modest, but ASP growth, given the core count and the compute capability, will contribute to overall growth. So from a traditional server CPU standpoint, I think we do see those trends. 2023 was a mixed environment, and I think it improves as we go into 2024.