We’re participating across all of our segments, whether that’s client, edge, enterprise, or our foundry opportunities, delivering AI everywhere.
John Pitzer: Thanks, Vijay. Jonathan, can we have the next caller, please?
Operator: Certainly. And our next question comes from the line of Timothy Arcuri from UBS Securities. Your question please.
Timothy Arcuri: Thanks a lot. Dave, I also wanted to ask about gross margin. You did say it should be better next year, but it is really whipping around a lot, and it looks like you sort of have to exit this year at 48 or maybe a little higher, which is already well above the 45.5 that you’ll be at this year, because you’re guiding up 200 basis points. So I know you don’t want to guide next year, but could you even qualitatively help us: can you sustain those margins at that level? I ask because last year you sort of exited at 49% and then things crashed here during the first half of the year. So can you help us think about what some of the puts and takes will be next year off of that high base that you’re going to exit this year at?
David Zinsner: Yeah. Good question, Tim. So there will be additional start-up costs next year, but on a percent-of-revenue basis we do think they will be lower, so that should help lift the margins. Of course, the expectation would be that we’d see growth in revenue, which also should help. On top of that, we are already seeing good decision-making, and changes in decision-making, around how we operate under the new business structure we have at this point. A lot of that doesn’t actually show up in the P&L yet: the decisions get made this year, but many of the benefits of those decisions don’t show up until next year and the year after. So we should see some benefit from that as well.
The other thing that has whipped our margins around a bit over the last few years is this practice where we reserve material all the way up until PRQ. Pat just mentioned that Sierra Forest has just hit PRQ. Ordinarily we would take a whole bunch of reserves on Sierra Forest and then release them as we started shipping beyond the PRQ date. We won’t be doing that going forward, so that should help reduce the volatility of the gross margin. It will be more a function of revenue growth, the spending profile in the fabs, the start-up costs that we have, and the mix.
John Pitzer: Tim, do you have a follow up question?
Timothy Arcuri: I do. Yes. I want to ask about server CPU share. I think the assumption for March was that server share was going to be pretty flat. So the question is, was that the case? And you sound maybe a little bit less optimistic, if I’m reading between the lines, on share into the back half of the year, just given how long it takes these things to impact your share. So your bullish outlook for the second half of the year sounds like it’s more market-driven than share-driven. Can you just clarify that? Thanks.
Patrick Gelsinger: Well, overall, I’d just say it’s hard to predict exactly how these will play out in light of the overall GenAI surge that we’ve seen. That said, the products are good. We came into the year improving our market share position in the first quarter. It does take time to ramp these new products, but better products, rebuilding trust with our customers that we’re delivering on them, and now hitting what we would call the early end of the cycle on these new products is generating a lot of interest from the market and from customers. On new use cases, I also demonstrated a 70-billion-parameter model running natively on Granite Rapids at our Vision event. All of this makes us more and more confident in our business execution.
We’re also seeing that we don’t need socket count to increase; ASPs are going up with the core counts on our new leadership products as well. So all of that, plus a fairly optimistic view from our OEMs and our channel partners on upgrade cycles, is building momentum with customers across the industry. We feel very comfortable that we’re stabilizing our position, we have an improving roadmap, and we do expect to see share gains as we end the year and go into ’25.
John Pitzer: Jonathan, can we have the next caller, please?
Operator: Certainly. Our next question comes from the line of Srini Pajjuri from Raymond James. Your question please.
Srini Pajjuri: Thank you. My question is on the client side. I think, Pat, you mentioned something about supply constraints impacting your 2Q outlook. Could you provide some color as to what’s causing those supply constraints and when you expect them to ease as we go into the second half? And then, on AI PCs, I think you’ve been talking about 40 million or so potentially shipping this year. Could you maybe put that into some context as to how it actually helps Intel? Is it just higher ASPs? Is it higher margin? I would think these products also come with higher costs. I just want to understand how we should think about the benefit to Intel as these AI PCs ramp.
Patrick Gelsinger: Yeah. Thank you. Overall, as we’ve seen, the AI PC category is a hot product area, and we declared this as we finished last year. We’ve just been incrementing up our AI PC, or Core Ultra, product volumes throughout. We’re meeting the customer commitments that we’ve had, but customers have come back and asked for upside on multiple occasions across different markets, and we are racing to catch up to those upside requests. The constraint has been on the back end: wafer-level assembly, one of the new capabilities that is part of Meteor Lake and our subsequent client products. So we’re working to catch up and build more wafer-level assembly capacity to meet those requests. How does it help us? It’s a new category, and new-category products will generally carry higher ASPs, as your question suggests, but we also think it brings new use cases.
And new use cases over time create a larger TAM. It creates an upgrade cycle that we’re seeing. It creates new applications, and we’re seeing essentially every ISV AI-ifying their apps, whether it’s the communications capabilities of Zoom and Teams for translation and contextualization, new security capabilities with CrowdStrike and others finding new ways to do security on the client, or creators and gamers taking advantage of this. So we see that every PC is going to become an AI PC over time. And when you have that kind of cycle underway, Srini, everybody starts to say, how do I upgrade my platforms? We even demonstrated how we’re using the AI PC in Intel factories now to improve yields and performance inside our own factories.
And as I’ve described, it’s like a Centrino moment: Centrino ushered in WiFi at scale, and we see the AI PC ushering in these new use cases at scale, which is going to be great for the industry. But as the unquestioned market leader, the leader in creating the category, we think we’re going to differentially benefit from the emergence of the AI PC.
John Pitzer: Srini, do you have a follow up?
Srini Pajjuri: Yes, John. Thank you. And I guess my other question is on the other bucket. Dave, I think you talked about Altera potentially exiting the year at a $2 billion run-rate; from current levels, that’s a pretty steep ramp. And I think you also said NEX growth will accelerate over the next couple of quarters. So given the telecom weakness that we’re seeing out there, I’m just curious what’s giving you that visibility or confidence. Is this driven by some new products, or is it just the market recovering? Any color would be helpful. Thank you.
David Zinsner: Yeah. On Altera, this is not unprecedented: when you see a massive work-down of inventory, of course, that significantly impacts revenue. But as that normalizes, you start shipping to end consumption again. So it’s actually a pretty easy lift to get to the $2 billion mark once we’re through the inventory digestion period. So I think we have high confidence in that.
Patrick Gelsinger: And others have commented on their inventory cycles as well in the FPGA category. We have good products in the second half of the year, with Agilex starting to ramp as well.
David Zinsner: And then on NEX, of course, that business has also gone through its own inventory adjustments, so we have good confidence around that reversing, which will help drive strength. And then some of the products that were tailored to the AI space, FNIX, for example, will see strength through the year. So that should drive good revenue growth through the year as well.
Patrick Gelsinger: Yeah. And also on NEX, the AI networking products are strong, including our IPU products; we’re seeing strength in that area. So it’s inventory as well as products. Even though, as your question suggests, the communications sector and the service providers are weaker through the year, pretty much every other aspect of that business, and Edge AI, as Dave said, is seeing strength as we go into the second half of the year and into ’25.
John Pitzer: Thanks, Srini. Jonathan, can we have the next caller, please?
Operator: Certainly. Our next question comes from the line of Vivek Arya from Bank of America Securities. Your question please.
Vivek Arya: Thanks for taking my question. Pat, just a conceptual question. In a GenAI server with accelerators, how important is the role of the specific CPU? Or is it easily interchangeable between yours, AMD’s, or ARM’s? I guess the question is, if most of the workload is being done on the accelerator, does it really matter which CPU I use? And can the move toward GenAI servers essentially shrink the TAM for x86 server CPUs? Because a number of your cloud customers have announced ARM-based server alternatives. So I’m just curious how you think about that conversion over to GenAI and what it means for the x86 server CPU TAM going forward.
Patrick Gelsinger: Yeah, thanks, Vivek. We spoke at our Vision event about use cases like RAG, retrieval-augmented generation, where the LLM might run on an accelerator, but all of the real-time data, all of the databases, all of the embeddings are running on the CPU. So you’re seeing all of these data environments, which are already running on Xeon and x86, being augmented with AI capabilities to feed an LLM. And I believe this whole area of RAG becomes one of the primary use cases for enterprise AI. If you think about it, an LLM might be trained on one- or two-year-old data, but many business processes and environments are real-time. You’re not going to be retraining constantly, and that’s where this front-end database area becomes very prominent.
All of those databases run on x86 today, and all of them are being enhanced for use cases like RAG. That’s why we see this unlock occurring: the data sits on-prem, in the x86 database environments that are all being enhanced for these use cases. And as we’ve shown, we don’t need accelerators in some cases; we can run a 70-billion-parameter model natively on Xeon with extraordinary TCO value for customers. Furthermore, in all of the IT environments that enterprises run today, they have the security, the networking, and the management technologies in place; they don’t need to upgrade or change those for any of these use cases. So we see a lot of opportunity here to build on the enterprise asset that we have with the Xeon franchise, and we’re also going to be aggressively augmenting that.
And we’re commonly the head node even when other accelerators or other GPUs are being used. And as we’ve described, Xeon plus Gaudi, we think, is going to be a very powerful opportunity for enterprises. So in many of those cases, we see this as a market lift: new applications, new use cases, new energy coming to enterprise AI. Here we are in year 23 of the cloud, and while 60% of the workload has moved to the cloud, over 80% of the data remains on-prem under the control of the enterprise, and much of that is underutilized in businesses today. That’s what GenAI is going to unlock, and a lot of that is going to happen through the x86 CPU; we see a powerful cycle emerging. And I would just point you back to what we described at Vision.
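[Editor’s note: the RAG flow Gelsinger describes — CPU-side retrieval over a local document store feeding an LLM that may run on an accelerator — can be sketched in a few lines. This is a minimal illustrative sketch only; the toy bag-of-words "embedding" stands in for a real neural encoder, and all function names are hypothetical, not any Intel or Gaudi API.]

```python
# Minimal RAG sketch: retrieval and the document store stay on the CPU;
# only the final augmented prompt would be handed to an LLM.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use a neural encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """CPU-side retrieval step: rank documents by similarity to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the query with retrieved, real-time context for the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Q1 gross margin improved on lower start-up costs.",
    "The cafeteria menu changes on Fridays.",
]
print(build_prompt("gross margin update", docs))
```

The point of the sketch is the division of labor: the embedding, ranking, and database lookups are ordinary CPU work over data that already lives on-prem, which is why the retrieval side of RAG maps naturally onto existing x86 server infrastructure.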