Lisa Su: Yes. I think, Aaron, what I would say is that the need for a refresh of, let’s call it, older equipment is certainly there. So we see a refresh cycle coming. We also see AI head nodes as another place where we see growth in, let’s call it, the more traditional server market. Our sweet spot is really in the highest-performance, high core count, energy-efficiency space, and that is playing out well. We’ve also traditionally been very strong in, let’s call it, cloud first-party workloads, and that is now extending to cloud third-party workloads, where we see enterprises that are in, let’s call it, more of a hybrid environment adopting AMD both in the cloud and on-prem. So I think overall, we see it as a continued good progression for us for the server business going through 2024 and beyond.
Aaron Rakers: Thank you.
Lisa Su: Thanks.
Operator: Thank you. Our next question is from Vivek Arya with Bank of America Securities. Please proceed with your question.
Vivek Arya: Thanks for taking my question. Lisa, I just wanted to go back to the supply question and the $4 billion outlook for this year. I think at some point, there was a suggestion that the $4 billion number, right, that there were still supply constraints. But I think more recently, you said that you have supply visibility significantly beyond that. Given that we are almost at the middle of the year, I would have thought that you would have much better visibility into the back half. So is the $4 billion number a supply-constrained number? Or is it a demand-constrained number? Or, relatedly, if you could give us some sense of what the exit rate of your GPU sales could be. I think on the last call, $1.5 billion was suggested. Could it be a lot more than that in terms of your exit rate of MI for this year?
Lisa Su: Yes. Vivek, let me try to make sure that we answer this question clearly. From a full-year standpoint, our $4 billion number is not supply capped. We do have supply capability above that. It is more back-half weighted. So if you’re looking at the near term, I would say, for example, in the second quarter, we do have more demand than we have supply right now, and we’re continuing to work on pulling in some of that supply. By the way, I think this is an overall industry issue. This is not at all specific to AMD. I think overall, AI demand has exceeded everyone’s expectations in 2024. So you’ve heard it from the memory guys. You’ve heard it from the foundry guys. We’re all ramping capacity as we go through the year.
And as it relates to visibility, we do have good visibility into what’s happening. As I said, we have great customer engagements going forward. My goal is to make sure that we pass all of the milestones as we’re ramping products. And as we pass those milestones, we put that into the overall full-year guidance for AI. But in terms of how customer progression is going, it’s actually going quite well, and we continue to bring new customers on and we continue to expand workloads with our current customers. And so hopefully, that clarifies the question, Vivek.
Vivek Arya: Thank you, Lisa. Maybe one not on MI, but on the embedded business. I think you sound a bit more measured about Q2 and the second-half rebound, which is similar to what we have heard from a lot of the auto and industrial peers. But where are you in the inventory-clearing cycle? And if embedded has a somewhat more measured rebound in the back half, what implication does that have for gross margin expansion? Can we continue to expect, I don’t know, 100 basis points a quarter in terms of gross margin expansion because of the data center mix? Or just any puts and takes on embedded and then what it means for gross margins in the back half? Thank you.
Jean Hu: Hi, Vivek, thank you for the question. I think the embedded business declined a little bit more than expected, really due to weaker demand in some of the markets; very specifically, communications has been weak, along with some pockets of industrial and automotive. As you mentioned, it’s actually quite consistent with the peers. We do think the first half is the bottom for the embedded business, and we will start to see a gradual recovery in the second half. And going back to your gross margin question: when you look at our gross margin expansion in both Q1 and the Q2 guide, the primary driver is the strong performance on the data center side. Data center will continue to ramp in the second half, and I think that will continue to be the major driver of gross margin expansion in the second half. Of course, if embedded is doing better, we’ll have more of a tailwind in the second half.
Vivek Arya: Thank you.
Operator: Thank you. Our next question is from Timothy Arcuri with UBS. Please proceed with your question.
Timothy Arcuri: Thanks very much. I also wanted to ask about your data center GPU road map. The customers that we talk to say that they’re engaged, not just because of MI300, but really because of what’s coming. And it seems like there’s a big demand shift to rack scale systems that try to optimize performance per square foot given some of the data center and power constraints. So can you just talk about how important systems are going to be in your road map? And do you have all the pieces you need as the market shifts to rack scale systems?
Lisa Su: Yes, sure, Timothy. Thanks for the question. For sure, look, our customers are engaged in a multigenerational conversation. So we’re definitely looking out over the next couple of years. And as it relates to the overall system integration, it is quite important. It is something that we’re working very closely with our customers and partners on. There is a significant investment in networking, and we’re working with a number of networking partners as well to make sure that the scale-out capability is there. And to your question of whether we have the pieces: we absolutely do have the pieces. I think the work that we’ve always done with our Infinity Fabric, as well as the Pensando acquisition, has brought in a lot of networking expertise.
And then we’re working across the networking ecosystem with key partners like Broadcom and Cisco and Arista, who were with us at our AI data center event in December. So our work right now on future generations is not just specifying a GPU. It is specifying, let’s call it, full system reference designs, and that’s something that will be quite important going forward.