Broadcom Inc. (NASDAQ:AVGO) Q2 2023 Earnings Call Transcript June 1, 2023
Broadcom Inc. beats earnings expectations. Reported EPS is $10.32, expectations were $10.08.
Operator: Welcome to Broadcom Inc.’s Second Quarter Fiscal Year 2023 Financial Results Conference Call. At this time, for opening remarks and introductions, I would like to turn the call over to Ji Yoo, Head of Investor Relations of Broadcom Inc.
Ji Yoo: Thank you, operator, and good afternoon, everyone. Joining me on today’s call are Hock Tan, President and CEO; Kirsten Spears, Chief Financial Officer; and Charlie Kawwas, President, Semiconductor Solutions Group. Broadcom distributed a press release and financial tables after the market closed, describing our financial performance for the second quarter fiscal year 2023. If you did not receive a copy, you may obtain the information from the Investors section of Broadcom’s website at broadcom.com. This conference call is being webcast live and an audio replay of the call can be accessed for one year through the Investors section of Broadcom’s website. During the prepared comments, Hock and Kirsten will be providing details of our second quarter fiscal year 2023 results, guidance for our third quarter, as well as commentary regarding the business environment.
We will take questions after the end of our prepared comments. Please refer to our press release today and our recent filings with the SEC for information on the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call. In addition to U.S. GAAP reporting, Broadcom reports certain financial measures on a non-GAAP basis. A reconciliation between GAAP and non-GAAP measures is included in the tables attached to today’s press release. Comments made during today’s call will primarily refer to our non-GAAP financial results. I’ll now turn the call over to Hock.
Hock Tan: Thank you, Ji, and thank you, everyone, for joining us today. So, in our fiscal Q2 2023, consolidated net revenue was $8.7 billion, up 8% year-on-year. Semiconductor Solutions revenue increased 9% year-on-year to $6.8 billion. And Infrastructure Software grew 3% year-on-year to $1.9 billion as the stable growth in core software more than offset softness in the Brocade business. Now, as I start this call, I know you all want to hear about how we are benefiting from this strong deployment of generative AI by our customers. To put this in perspective, our revenue today from this opportunity represents about 15% of our semiconductor business. Having said this, it was only 10% in fiscal ’22. And we believe it could be over 25% of semiconductor revenue in fiscal ’24.
In fact, over the course of fiscal ’23 that we’re in, we are seeing a trajectory where our quarterly revenue entering the year doubles by the time we exit ’23. And in fiscal third quarter ’23, we expect this revenue to exceed $1 billion in the quarter. But, as you well know, we are also a broadly diversified semiconductor and infrastructure software company. And in our fiscal Q2, demand for IT infrastructure was driven by hyperscale, while service providers and enterprise continued to hold up. Following the 30% year-on-year increases we have experienced over the past five quarters, overall IT infrastructure demand in Q2 moderated to mid-teens percentage growth year-on-year. As we have always told you, we continue to ship only to end-user demand.
We remain very disciplined on how we manage inventory across our ecosystem. We exited the quarter with less than 86 days on hand, a level of inventory consistent with what we have maintained over the past eight quarters. Now, let me give you more color on our end markets. Let me begin with wireless. As you saw in our recent 8-K filing, we entered into a multiyear collaboration with a North American wireless OEM on cutting-edge wireless connectivity and 5G components. Our engagement in technology and supply remains deep, strategic and long term. Q2 wireless revenue of $1.6 billion represented 23% of semiconductor revenue. Wireless revenue declined seasonally, down 24% quarter-on-quarter and down 9% year-on-year. In Q3, as we just begin the seasonal ramp of the next-generation phone platform, we expect wireless revenue to be up low single digits sequentially.
We expect, however, that it will remain around flattish year-on-year. Moving on to networking. Networking revenue was $2.6 billion and was up 20% year-on-year, in line with guidance, representing 39% of our semiconductor revenue. There are two growth drivers here. One, continued strength in deployment of our merchant Tomahawk switching for traditional enterprise workloads as well as Jericho routing platforms for telcos; and two, strong growth in AI infrastructure at hyperscalers from compute offload and networking. And speaking of AI networks, Broadcom’s next-generation Ethernet switching portfolio, consisting of Tomahawk 5 and Jericho3-AI, offers the industry’s highest performance fabric for large-scale AI clusters by optimizing the demanding and costly AI resources.
These switches based on an open distributed disaggregated architecture will support 32,000 GPU clusters running at 800 gigabit per second bandwidth. Ethernet fabric, as we know it, already supports multi-tenancy capability and end-to-end congestion management. This lossless connectivity with high QoS performance has been well proven over the last 10 years of network deployment in the public cloud and telcos. In other words, the technology is not new. And we are, as Broadcom, very well positioned, to simply extend our best-in-class networking technology into generative AI infrastructure, while supporting standard connectivity, which enables vendor interoperability. In Q3, we expect networking revenue to maintain its growth year-on-year of around 20%.
Next, our server storage connectivity revenue was $1.1 billion or 17% of semiconductor revenue and up 20% year-on-year. And as we noted last quarter, with the transition to next-generation MegaRAID largely completed and enterprise demand moderating, we expect server storage connectivity revenue in Q3 to be up low single digits year-on-year. Moving on to broadband. Revenue grew 10% year-on-year to $1.2 billion and represented 18% of semiconductor revenue. Growth in broadband was driven by continued deployments by telcos of next-generation 10G PON and cable operators of DOCSIS 3.1 with high attach rates of Wi-Fi 6 and 6E. And in Q3, we expect our broadband growth to moderate to low-single-digit percent year-on-year. And finally, Q2 industrial resales of $260 million increased 2% year-on-year as the softness in China was offset by strength globally in renewable energy and robotics.
And in Q3, we forecast industrial resales to be flattish year-on-year on continuing softness in Asia, offset by strength in Europe. So, in summary, Q2 Semiconductor Solutions revenue was up 9% year-on-year. And in Q3, we expect semiconductor revenue to grow mid-single digits year-on-year. Turning to software. In Q2, Infrastructure Software revenue of $1.9 billion grew 3% year-on-year and represented 22% of total revenue. As expected, continued softness in Brocade was offset by the continuing stable growth in core software. Relating to core software, consolidated renewal rates averaged 114% over expiring contracts. And in our strategic accounts, we averaged 120%. Within the strategic accounts, annualized bookings of $564 million included $133 million, or 23%, of cross-selling of other portfolio products to these same core customers.
Over 90% of the renewal value represented recurring subscription and maintenance. And over the last 12 months, consolidated renewal rates averaged 117% over expiring contracts. And among our strategic accounts, we averaged 128%. Because of this, our ARR, the indicator of forward revenue at the end of Q2 was $5.3 billion, up 2% from a year ago. And in Q3, we expect our Infrastructure Software segment revenue to be up low single digits percentage year-on-year as the core software growth continues to be offset by weakness in Brocade. On a consolidated basis, we’re guiding Q3 revenue of $8.85 billion, up 5% year-on-year. Before Kirsten tells you more about our financial performance for the quarter, let me provide a brief update on our pending acquisition of VMware.
We’re making good progress with our various regulatory filings around the world, having received legal merger clearance in Australia, Brazil, Canada, South Africa and Taiwan and foreign investment control clearance in all necessary jurisdictions. We still expect the transaction will close in Broadcom’s fiscal 2023. The combination of Broadcom and VMware is about enabling enterprises to accelerate innovation and expand choice by addressing their most complex technology challenges in this multi-cloud era. And we are confident that regulators will see this when they conclude their review. With that, let me turn the call over to Kirsten.
Kirsten Spears: Thank you, Hock. Let me now provide additional detail on our financial performance. Consolidated revenue was $8.7 billion for the quarter, up 8% from a year ago. Gross margins were 75.6% of revenue in the quarter, about 30 basis points higher than we expected on product mix. Operating expenses were $1.2 billion, down 4% year-on-year. R&D of $958 million was also down 4% year-on-year on lower variable spending. Operating income for the quarter was $5.4 billion and was up 10% from a year ago. Operating margin was 62% of revenue, up approximately 100 basis points year-on-year. Adjusted EBITDA was $5.7 billion or 65% of revenue. This figure excludes $129 million of depreciation. Now, a review of the P&L for our two segments.
Revenue for our Semiconductor Solutions segment was $6.8 billion and represented 78% of total revenue in the quarter. This was up 9% year-on-year. Gross margins for our Semiconductor Solutions segment were approximately 71%, down approximately 120 basis points year-on-year, driven primarily by product mix within our semiconductor end markets. Operating expenses were $833 million in Q2, down 5% year-on-year. R&D was $739 million in the quarter, down 4% year-on-year. Q2 semiconductor operating margins were 59%. So, while semiconductor revenue was up 9%, operating profit grew 10% year-on-year. Moving to the P&L for our Infrastructure Software segment. Revenue for Infrastructure Software was $1.9 billion, up 3% year-on-year and represented 22% of revenue.
Gross margins for Infrastructure Software were 92% in the quarter and operating expenses were $361 million in the quarter, down 3% year-over-year. Infrastructure Software operating margin was 73% in Q2, and operating profit grew 8% year-on-year. Moving to cash flow. Free cash flow in the quarter was $4.4 billion and represented 50% of revenues in Q2. We spent $122 million on capital expenditures. Days sales outstanding were 32 days in the second quarter compared to 33 days in the first quarter. We ended the second quarter with inventory of $1.9 billion, down 1% from the end of the prior quarter. We ended the second quarter with $11.6 billion of cash and $39.3 billion of gross debt, of which $1.1 billion is short term. The weighted average coupon rate and years to maturity of our fixed rate debt is 3.61% and 9.9 years, respectively.
Turning to capital allocation. In the quarter, we paid stockholders $1.9 billion of cash dividends. Consistent with our commitment to return excess cash to shareholders, we repurchased $2.8 billion of our common stock and eliminated $614 million of common stock for taxes due on the vesting of employee equity, resulting in the repurchase and elimination of approximately 5.6 million AVGO shares. The non-GAAP diluted share count in Q2 was 435 million. As of the end of Q2, $9 billion was remaining under the share repurchase authorization. Excluding the potential impact of any share repurchases, in Q3, we expect the non-GAAP diluted share count to be 438 million. Based on current business trends and conditions, our guidance for the third quarter of fiscal 2023 is for consolidated revenues of $8.85 billion, and adjusted EBITDA of approximately 65% of projected revenue.
In Q3, we expect gross margins to be down approximately 60 basis points sequentially on product mix. That concludes my prepared remarks. Operator, please open up the call for questions.
Q&A Session
Operator: [Operator Instructions] And today’s first question will come from the line of Ross Seymore with Deutsche Bank.
Ross Seymore: Hock, I might as well start off with the topic that you started; AI these days is everywhere. Thanks for the color that you gave and the percentage of the sales that it was potentially going to represent into the future. I wanted to just get a little bit more color on two aspects of that. How have you seen the demand evolve during the course of your quarter? Has it accelerated, and in what areas, et cetera? And are there any competitive implications for it? We’ve heard from some of the compute folks that they want to do more on the networking side. And then obviously, you want to do more into the compute side. So I just wondered how the competitive intensity is changing, given the AI workload increases these days.
Hock Tan: Okay. Well, on the first part of your question, yes. On last earnings call, we indicated there was a strong sense of demand, and we have seen that strong demand continue unabated. Now, of course, we all realize manufacturing lead times on most of these cutting-edge products are fairly extended. I mean, you don’t manufacture these products under our process in anything less than six months or thereabouts. And while there is strong demand and a strong urgency of demand, the ability to ramp up will be more measured, addressing the demands that are most urgent. On the second part, no, we’ve always seen competition. And really, even in traditional workloads in enterprise data centers and hyperscale data centers, our business, our markets in networking, switching and routing, continue to face competition.
So really nothing new here. Competition continues to exist, and we — each of us do the best we can in the areas we are best at doing.
Operator: One moment for our next question. That will come from the line of Vivek Arya with Bank of America Securities.
Vivek Arya: Hock, I just wanted to first clarify. I think you might have mentioned it, but I think last quarter, you gave very specific numerical targets of $3 billion in ASICs and $800 million in switches for fiscal ’23. I just wanted to make sure if there is any specific update to those numbers. Is it more than $4 billion in total now, et cetera? And then my question is, longer term, what do you think the share is going to be between kind of general purpose GPU-type solutions versus ASICs? Do you think that share shifts towards ASICs? Do you think it shifts towards general purpose solutions? Because if I look outside of the compute offload opportunity, you have generally favored, right, more the general purpose market. So, I’m curious, how do you see this share between general purpose versus ASICs play out in this AI processing opportunity longer term?
Hock Tan: On the first part of your question — you guys love your questions in two parts, so let’s do the first part first. We guided, or we indicated, that for fiscal ’23 the revenue we are looking at in this space is $3.8 billion. There’s no reason, nor are we trying, now in the middle of the year, to change that forecast at this point. So, we still keep to that forecast we’ve given you for fiscal ’23. We’re obviously giving you a sense of trajectory in my remarks on what we see ’24 to look like. And that, again, is a broad trajectory of the guidance, nothing more than that, just to give you a sense for the accelerated move from ’22 to ’23 and headed into ’24. Nothing more than that. But in terms of the specific numbers that you indicated we gave, we stay by our forecast of $3.8 billion for fiscal ’23, frankly, because in my view, it’s a bit early to give you any revised forecast.
Then beyond that, on your broader question, ASICs versus merchant, I always favor merchant, whether it’s in compute, whether it’s in networking. In my mind, long term, merchant will eventually, in my view, have a better shot at prevailing. But what we’re talking about today is, obviously, a shorter-term issue versus a very long-term issue. And the shorter-term issue is, yes, compute offload exists. But again, the number of players in compute offload ASICs is very, very limited, and that’s what we continue to see.
Operator: One moment for our next question. And that will come from the line of Harlan Sur with JP Morgan.
Harlan Sur: Great to see the strong and growing ramp of your AI compute offload and networking products. Hock, on your next-generation AI and compute offload programs that are in the design phase now, you’ve got your next-gen switching and routing platforms that are being qualified. Are your customers continuing to push the team to accelerate the design funnel, pull in program ramp timing? And then, I think you might have addressed this, but I just wanted to clarify, all of these solutions use the same type of very advanced packaging, like stacked die, HBM memory [Indiscernible] packaging. And not surprisingly, this is the same architecture used by your AI GPU peers, which are driving the same strong trends, right? So is the Broadcom team facing, or expected to face, advanced packaging and advanced substrate supply constraints? And how is the operations team going to manage through all of this?
Hock Tan: Well, you’re right in that this kind of AI product, these generative AI products, next generation and current generation, all use very leading-edge technologies in silicon wafers, substrates and packaging, including memory stacking. But from a consumption standpoint, there are still products out there. There is still capacity out there, as I said. And this is not something you can ship or deploy right away. It takes time. And we see it as a measured ramp that has started in fiscal ’23 and will continue its pace through to ’24.
Harlan Sur: And on the design win funnel, are you seeing customers still trying to pull in all of their designs?
Hock Tan: Well, our basic opportunity still lies in the networking of AI networks. And we have the products out there. And we are working with many, many customers, obviously, to put in place this distributed, disaggregated architecture of Ethernet fabric on AI. And yes, there’s a lot of obvious interest and lots of designs that exist out there.
Operator: One moment for our next question. And that will come from the line of Timothy Arcuri with UBS.
Timothy Arcuri: Hock, I was wondering if you can sort of help shed some light on the general perception that all this AI spending is sort of boxing out traditional compute. Can you talk about that? Or is it that just CapEx budgets are going to have to grow to support all this extra AI CapEx? I mean, the trick is probably somewhere in between, but I’m wondering if you can help shed some light on just the general perception that all of this is coming at the expense of the traditional compute and the traditional infrastructure. Thanks.
Hock Tan: Your guess is as good as mine, actually. I can tell you this. I mean, you’re right, there are these AI networks, and budgets are now allocated more and more by the hyperscalers towards these AI networks. But not necessarily, particularly in enterprise, at the expense of traditional workloads and traditional data centers. I think there’s definitely coexistence. And a lot of the large amount of spending on AI today that we see, for us, is very much at the hyperscalers. And so, enterprises are still focusing a lot of their budgets, as they have, on the traditional data centers and traditional workloads supporting x86. But it’s just maybe too early for us to really figure out whether there is cannibalization.
Operator: One moment for our next question. And that will come from the line of Ambrish Srivastava with BMO Capital Markets.
Ambrish Srivastava: I have a less sexy topic to talk about, but obviously very important in how you manage the business. Can you talk about lead times, especially in light of demand moderating and manufacturing cycle times coming down, not to mention the six months that you highlighted for the cutting edge? Are you still staying with the 52-week kind of lead time quoted to customers, or has that changed? Thank you.
Hock Tan: By the way, it’s 50. Yes, my standard lead time for our products is 50 weeks, and we are still staying with it, because it’s not so much about the lead time to manufacture the products as our interest, and, frankly, the mutual interest between our customers and ourselves, in taking a hard look at providing visibility for us, ensuring we can supply, and supply in the right amount at the right time, the requirements. So yes, we’re still sticking to 50 weeks.
Operator: One moment for our next question, and that will come from the line of Harsh Kumar with Piper Sandler.
Harsh Kumar: Yes. Hey Hock, I was hoping you could clarify something for us. I think earlier in the beginning of the call, when you gave your AI commentary, you said that gen AI revenues are 15%-odd today and they’ll go to 25% by the end of 2024. That’s practically all your growth. That’s the $3 billion to $4 billion that you’ll grow. So looking at your commentary, I know your core business is doing really well. So I know that I’m probably misinterpreting it. But I was hoping that there’s no cannibalization going on in your business, but maybe you could clarify for us.
Hock Tan: As I answered an earlier question from a peer of yours, we do not see — obviously, we do not know, but we do not see — cannibalization. These are early innings, relatively speaking, and budgets don’t change that rapidly. If there’s cannibalization, obviously, it comes from where the spending goes in terms of priority. It’s not obvious to us; there isn’t the clarity to be able to tell you there’s cannibalization, not in the least. And by the way, if you look at the numbers and all the growth is coming from it, perhaps you’re right. But as we sit here in ’23, we still show growth in the rest of our business, in the rest of our products. Perhaps that growth is augmented with the growth in our AI revenue, in delivering AI products, but it’s not entirely all our growth. I would say at least half the growth is still in our traditional business; the other half may be out of generative AI.
Operator: One moment for our next question. And that will come from the line of Karl Ackerman with BNP Paribas.
Karl Ackerman: Hock, you rightly pointed to the custom silicon opportunity that supports your cloud AI initiatives. However, your AI revenue that’s not tied to custom silicon appears to be doubling in fiscal ’23. And the outlook for fiscal ’24 implies that it will double again. Obviously, Broadcom has multiple areas of exposure to AI across PCI switches, Tomahawk, Jericho and Ramon ASICs, and electro-optics. I guess, what sort of role do you see your electro-optics portfolio playing in high-performance networking environments for inferencing and training AI applications?
Hock Tan: Look, what you say is very, very insightful. A big part of our growth now in AI comes from the networking components that we’re supplying into creating this Ethernet fabric for AI clusters. In fact, a big part of it, you hit on. And the rate of growth there is probably faster than our offload computing can grow. And that’s where we are focused. As I say, our networking products are merchant standard products, supporting the very rapid growth of generative AI clusters out there on the compute side. And for us, this growth on the networking side is really the faster part of the growth.
Operator: One moment for our next question, and that will come from the line of Joseph Moore with Morgan Stanley.
Joseph Moore: I wanted to ask about the renewal of the wireless contract. Can you give us a sense for how much concrete visibility you have into content over the duration of that? As you mentioned, it’s both RF and wireless connectivity. Just any additional color you can give us would be great.
Hock Tan: Okay. Well, I don’t want to be nitpicky here. It’s an extension, I would call it, of our existing long-term agreement. And it’s an extension in the form of a collaboration and strategic arrangement, which is the best way to describe it. It’s not really a renewal. But the characteristics are similar, which is, with supply and technology, we supply a bunch of very specific products related to 5G components and wireless connectivity, which is our strength, which is the technology we keep leading in the marketplace. And it’s multiyear. And beyond that, I truly would — I’ll refer you to our 8-K and not provide any more specifics, simply because of sensitivities all around.
Operator: One moment for our next question. And that will come from the line of Christopher Rolland with Susquehanna. Your line is open. Mr. Rolland, your line is open. Okay. We’ll move on to the next question. And that will come from the line of Toshiya Hari with Goldman Sachs.
Toshiya Hari: Hock, I’m curious how you’re thinking about your semiconductor business long term. You’ve discussed AI pretty extensively throughout this call. Could this be something that drives higher growth for your semiconductor business on a sustained basis? I think historically, you’ve given relatively subdued or muted growth rates for your business vis-à-vis many of your competitors. Is this something that can drive sustained growth acceleration for your business? And if so, how should we think about the rate of R&D growth going forward as well? Because I think your peers are growing R&D faster than what you guys are doing today. Thank you.
Hock Tan: Very, very good question, Toshiya. Well, we are still a very broadly diversified semiconductor company, as I pointed out, with still multiple end markets beyond just AI; most of our AI revenue happens to sit in the networking segment of the business, as you all noted and see. So we still have plenty of others. And even as I mentioned, for fiscal ’24, our view is that it could hit over 25% of our semiconductor revenue. We still have a large number of underpinnings for the rest of our semiconductor business. I mean, our wireless business, for instance, has a very strong lease on life for multiple years, and that’s a big chunk of business. It’s just that the AI business appears to be trying to catch up to it in terms of size.
But our broadband, server storage and enterprise business continues to be very, very sustainable. And when you mix it all up, I don’t know; we haven’t updated our long-term forecast, so to speak. I really have nothing more to add than what we already told you in the past. Would it make a difference in our long-term growth rate? Don’t know. We haven’t thought about it. I’ll leave it to you to probably speculate before I put anything on paper.
Operator: One moment for our next question. And that will come from the line of William Stein with Truist Securities.
William Stein: Hock, I’m wondering if you can talk about your foundry relationships. You’ve got a very strong relationship with TSM. And of course, Intel has been very vocal about winning new customers potentially. I wonder if you can talk about your flexibility and openness and considering new partners. And then maybe also talk about pricing from foundry and whether that’s influencing any changes quarter-to-quarter. There have been certainly a lot of price increases that we’ve heard about recently, and I’d love to hear your comments. Thank you.
Hock Tan: Thank you. We tend to be very loyal to our suppliers, the same way we look at customers; it cuts both ways for us. So, there’s a deep abiding loyalty to all our key suppliers. Having said that, we also have to be very realistic about the geopolitical environment we have today. And so, we are also very open to looking, in certain specific technologies, at broadening our supply base. And we have taken steps to constantly look at it, much as we still continue to want to be very loyal and fair to our existing base. And so we continue that way. And because of that partnership and loyalty, for us, a price increase is something that is a very long-term thing; it’s part of the overall relationship. And to put it simply, we don’t move just because of prices. We stay put because of support, service and a very abiding sense of mutual commitment.
Operator: One moment for our next question. And that will come from the line of Edward Snyder with Charter Equity Research.
Edward Snyder: Hock, basically a housekeeping question. It sounded like your comments in the press release on the wireless deal did not include mixed signal, which is part of your past agreement. And everything you seem to have said today suggests that may not be in the mix; you’re in wireless and RF, but you’re also doing a lot of mixed-signal stuff, too. So maybe you can provide some clarity on that. And also, why shouldn’t we expect the increased interest in AI to increase the prospects, if not orders immediately, for the electro-optic products that are coming online? I would think there would be much greater demand, given the clusters and the size of these arrays that people are trying to put together; they provide enormous benefits, I think, in power. Maybe give us some color on that.
Hock Tan: All right. You have two questions here, don’t you?
Edward Snyder: Well, it was a two-part question. I was going to do a three, but…
Hock Tan: Thank you. I love you guys with your multipart questions. Let’s do the first one. You’re right. Our long-term collaboration agreement that we recently announced includes, as it indicated, wireless connectivity and 5G components. It does not include the high-performance analog components, mixed-signal components, that we also sell to the North American OEM customer, right? That doesn’t make it any less strategic, nor any less deeply engaged with each other, I would hasten to add. And on the second part, Ed, if you could indulge me, could you repeat that question?
Edward Snyder: Yes. So, you’re talking about generative AI and the increase in demand that you’re seeing from hyperscale guys. And we’re already seeing how big these clusters can get. And it’s really putting, I don’t want to say, stress on your networking assets. But I would think, given the size of the arrays people are facing, the electro-optic product in Tomahawk 5 that you’re releasing next year, which puts the optics right on the chip, would become more attractive because it significantly reduces the power requirements. And I know it has not been deployed yet, but I would think that interest in that should increase. Am I wrong?
Hock Tan: You’re not wrong. As I indicated upfront in my remarks, yes, we see our next generation coming up, Tomahawk 5, which will have silicon photonics, which is co-packaging, as a key part of that offering, not to mention that it’s going up to 51 terabits per second of cut-through bandwidth. It’s exactly what you want to put in place for very demanding AI networks, especially if those AI networks start running over 32,000-GPU clusters at 800 gigabits per second. Then you really need a big amount of switching, because those kinds of networks, as I mentioned, have to be very low latency, virtually lossless. Making Ethernet lossless calls for some interesting science and technology.
Because, traditionally, Ethernet tends to be lossy. But the technology is there to make it lossless. So all this fits in with our new generation of products. And not to mention our Jericho3-AI, which, as you know, is a router with unique, differentiated technology that allows for very, very low tail latency in how it transmits and reorders packets, so that there is no loss and very little latency. That technology exists in telco network routing, and we now apply it to AI networks in a very effective manner. That’s our whole new generation of products. So yes, we are leaning into this opportunity with our networking technology and next-generation products very much. You hit it right on, and that is one thing that makes AI very exciting for us.
It’s in the networking area, the networking space, that we see the most interesting opportunities.
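[Editor's note: As a rough, hypothetical back-of-envelope illustration of the scale discussed above, the following sketch combines the figures mentioned on the call: a cluster of 32,000 GPUs with 800 Gb/s links each, and Tomahawk 5's roughly 51 Tb/s of switching bandwidth, taken here as the published 51.2 Tb/s. It is not a network design, just arithmetic on the stated numbers.]

```python
# Back-of-envelope sketch using figures cited on the call (assumptions noted).
gpus = 32_000          # "over 32,000 GPU clusters"
link_gbps = 800        # per-GPU Ethernet link speed, in Gb/s
switch_tbps = 51.2     # Tomahawk 5 switching bandwidth (assumed 51.2 Tb/s)

# Aggregate host-facing bandwidth the fabric must terminate, in Tb/s.
total_tbps = gpus * link_gbps / 1_000

# Capacity-equivalent count of Tomahawk 5 chips for a single tier,
# ignoring topology, uplinks, and oversubscription.
min_switches = total_tbps / switch_tbps

print(f"aggregate bandwidth: {total_tbps:,.0f} Tb/s")
print(f"capacity-equivalent Tomahawk 5 chips: {min_switches:.0f}")
```

Even this simplistic lower bound (25,600 Tb/s of aggregate host bandwidth, or 500 switch-chips' worth of capacity) illustrates why switching volume scales so sharply with these AI clusters; a real Clos fabric would need considerably more.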
Operator: One moment for our next question, and that will come from the line of Antoine Chkaiban with New Street Research.
Antoine Chkaiban: I’ll stick to a single-part question. Can you maybe double-click on your compute offload business? What can you tell us about how growth could split between revenue from existing customers and potential diversification of that business going forward? Thank you.
Hock Tan: Thank you. Good question. And I’ll reiterate, in some other ways, the answer I’ve given to other audiences who have asked this question. We really have only one real customer, one customer. And in my forecast, in my remarks so far on offload computing, it is very, very largely around that one customer. It’s not very diversified. It’s very focused. That’s our compute offload business.
Operator: One moment for our next question. And that will come from the line of C.J. Muse with Evercore ISI.
Unidentified Analyst: This is Kurt Swartz [ph] on for C.J. I wanted to touch on software gross margins, which continue to tick higher alongside softness in Brocade. Curious what visibility you may have into Brocade stabilization, and how we should think about software gross margins as the mix normalizes. Thank you.
Hock Tan: Okay. Well, our software segment comprises, you hit it correctly, two parts. The first is our core software products, whose revenues come from selling directly to enterprises. These are your typical infrastructure software products, sold on multiyear contracts. And we have a lot of backlog, something like $17 billion of backlog, averaging almost 2.5 to 3 years in duration. Every quarter, a part of that renews, and we give you the data on it. It’s very stable. And given our historical pattern of renewing on expanding consumption by our core group of customers, we tend to drive that in a very stable manner. The growth rate is very, very predictable, and we’re happy with that. Then we overlay on it a business that is software but is also very appliance-driven: the Fibre Channel SAN business of Brocade.
And that’s very enterprise-driven, very much so. It is only used by enterprises, obviously, and large enterprises at that. And it is a fairly cyclical business. Last year was a very strong up cycle, and this year, not surprisingly, the cycle is not as strong, especially compared year-on-year to the very strong numbers last year. The outcome of combining the two is what we’re seeing today. But in my view, next year the cycle could turn around and Brocade would grow again. And then, instead of 3% year-on-year growth in this whole segment, we could end up with a high single-digit year-on-year growth rate, because on the core software revenue, as I’ve always indicated to you guys, you want to plan long term on a mid-single-digit year-on-year growth rate. And that’s a very predictable part of our numbers.
Operator: And today’s final question will come from the line of Vijay Rakesh with Mizuho.
Vijay Rakesh: Yes. Hi Hock, just a quick one. I’ll keep it a two-part question for you to wrap up. So just wondering what the content uplift for Broadcom is on an AI server versus a general compute server. And if you look at generative AI, what percent of servers today are being outfitted for generative AI? You have a dominant share there. And where do you see that attach ratio for generative AI a year out, looking at fiscal ’24, ’25?
Hock Tan: I’m sorry to disappoint you on your two parts, but it’s too early for me to be able to give you a good or very definitive answer on that. Because by far the majority of servers today are your traditional servers driven by x86 CPUs. And the networking today is still very much traditional data center Ethernet. Because most, if not virtually all, enterprises today are very much still running their own traditional servers on x86. Generative AI is something so new, and in a way its limits are so extended, that what we largely see today is the hyperscale guys deploying those generative AI infrastructures at scale. Enterprises continue to deploy and operate standard x86 servers and Ethernet networking in their traditional data centers.
So what we’re seeing today may be the early part of the whole cycle, which goes to your question, and which is why I cannot give you any definitive view or opinion on what the attach rate or the ratio will be, or whether any stability could be reached in the near term. We see both running and coexisting very much together.
Operator: Thank you. I would now like to turn the call over to Ji Yoo for any closing remarks.
Ji Yoo: Thank you, operator. In closing, we would like to highlight that Broadcom will be attending the BofA Global Technology Conference on Tuesday, June 6. Broadcom currently plans to report its earnings for the third quarter of fiscal ’23 after close of market on Thursday, August 31, 2023. A public webcast of Broadcom’s earnings conference call will follow at 2:00 p.m. Pacific. That will conclude our earnings call today. Thank you all for joining. Operator, you may end the call.
Operator: Thank you all for participating. This concludes today’s program. You may now disconnect.