Karl Ackerman: Yes, thank you. Hock, weakness in broadband, server and storage customers is understandable given what your peers have said this earnings season. But perhaps you could speak to the backlog visibility you have with your customers in those markets that would indicate those markets could begin to order again and see sequential growth in the second half through the calendar year? Thank you.
Hock Tan: You’re correct. We are, as I say, almost near the trough. This year, ’24, the first half, for sure, will be the trough. The second half of ’24, we don’t know yet. But I’ll tell you what: we have 52-week lead times, as you know, and we are very disciplined in sticking to them. And based on that, we are seeing bookings lately that are significantly up from bookings a year ago.
Karl Ackerman: Thank you.
Operator: Our next question comes from the line of Christopher Rolland with Susquehanna. Your line is open.
Christopher Rolland: Thanks for the question. So Hock, this one is for you on optical. So, our checks suggest that you’re vertically integrating there. You’re now putting in your own drivers, TIAs, you’re starting to get traction in PAM4 DSP. And I think you kind of had an early lead in 100-gig data center lasers as well. And this is — a lot of this should be on the back of AI networking that appears to be exploding here. So I was wondering if you could help us size the market and then also talk about how fast this is growing for you. I think there may have been some clues in that one-third number the AI you gave us, but perhaps if you can kind of double click or square that for us, it would be great. Thanks.
Hock Tan: Okay. Before you get carried away, please: those other categories outside AI accelerators, all those things like PAM4 DSPs, optical components, retimers, they are small compared to the Tomahawk switches and Jericho routers used in AI networks. And we are also in an environment where, as you all know, traditional enterprise networking is in a bit of a slowdown now. So all of this demand is driven very much by AI. And that tends to push us into a line of thinking that could be very biased, because what it is showing is that the mix and the content of networking relative to compute is very skewed, very different in an AI data center compared to a traditional CPU-based data center. So I don’t want to lead you guys the wrong way.
But you’re right, in the AI data center there’s quite a bit of content in DSPs, PAM4s, optical components, retimers and PCI Express switches. But they are still not that big in the overall scheme of things compared to what we sell in switches and routers, and compared to AI accelerators they are even smaller. As I said, of the AI revenue of $10 billion plus this year, 70% would be AI accelerators and 30% everything else. And within that 30% or so, I would say more than half, more like 20 points, are the switches and routers. And the rest are the various retimer and DSP components, because, unlike what you said, we are not vertically integrated in the sense that we do not do the entire optical transceiver.
We don’t do that. Those are typically manufactured by OEM contract manufacturers like the Innolight and Eoptolink guys in China, where those guys are much more competitive; but we provide the key components we talked about. So when you look at it that way, you can understand the weighting of the various values.
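Hock’s revenue split can be sketched as rough back-of-the-envelope arithmetic. This is an illustrative sketch only, applying the approximate percentages quoted on the call to a flat $10 billion (the actual figure is "$10 billion plus"):

```python
# Illustrative arithmetic only, based on the approximate figures from the call:
# ~$10B AI revenue, ~70% AI accelerators, ~30% everything else, and
# "more than half" of that 30% (roughly 20 points) being switches and routers.
ai_revenue_b = 10.0                                # total AI revenue, $B (quoted as "$10B plus")

accelerators_b = ai_revenue_b * 0.70               # custom AI accelerators
networking_b = ai_revenue_b * 0.30                 # everything outside accelerators
switch_router_b = ai_revenue_b * 0.20              # Tomahawk switches / Jericho routers
optical_etc_b = networking_b - switch_router_b     # retimers, PAM4 DSPs, optical components

print(accelerators_b, switch_router_b, optical_etc_b)  # 7.0 2.0 1.0
```

On these numbers, the component businesses Rolland asked about (retimers, PAM4 DSPs, optical) would be on the order of $1 billion, which is why Hock characterizes them as small relative to accelerators and to switching/routing.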
Christopher Rolland: Super helpful. Thank you, Hock.
Operator: Our next question comes from the line of Toshiya Hari with Goldman Sachs. Your line is open.
Toshiya Hari: Hi, thank you for taking the question. Hock, I think we all appreciate the capabilities you have in terms of custom compute. I asked this question on the call last quarter. But there is one competitor based in Asia who continues to be pretty vocal and adamant that on one of the future designs at your largest customer they may have some share, and we’re picking up conflicting evidence and getting a bunch of investor questions. I was hoping you could address that, and your confidence level in maintaining, if not extending, your position there? Thank you.
Hock Tan: You know, I can’t stop somebody from trash talking, okay? It’s the best way to describe it. Let the numbers speak for themselves, please, and leave it that way. And I add to it like most things we do in terms of large critical technology products. We tend to always have, as we do here, a very deep strategic and multiyear relationship with our customer.
Toshiya Hari: Understood. Thank you.
Operator: Our next question comes from the line of Vijay Rakesh with Mizuho. Your line is open.
Vijay Rakesh: Yes, hi, Hock. Just on the custom silicon side, obviously, you guys dominate that space. But you mentioned two customers, only two major customers. But just wondering what’s really holding back other hyperscalers from ramping up their custom silicon side. And on the flip side, you’re hearing some peers talk about custom silicon road maps as well, so if you could hit both? Thanks.
Hock Tan: Well, number one, we don’t dominate this market. I only have two customers; I can’t be dominating it with two. Number two, it takes years and a lot of heavy lifting to create that custom silicon, because you need to do more than just the hardware, the silicon, to really have a solution for generative AI, or even AI broadly, in those data centers. It’s more than just silicon. You have to invest a lot in creating software models that work on your custom silicon. You’ve got to match your business model in the first place, which leads you to create foundation models, which then need to work and be optimized on the custom silicon you are developing. So it’s an iterative process.