Arista Networks, Inc. (NYSE:ANET) Q3 2023 Earnings Call Transcript October 30, 2023
Arista Networks, Inc. beats earnings expectations. Reported EPS is $1.83, expectations were $1.58.
Operator: Welcome to the Third Quarter 2023 Arista Networks Financial Results Earnings Conference Call. During the call, all participants will be in a listen-only mode. After the presentation, we will conduct a question-and-answer session. [Operator Instructions] As a reminder, this conference is being recorded, and will be available for replay from the Investor Relations section at the Arista website following this call. Ms. Liz Stine, Arista’s Director of Investor Relations, you may begin.
Liz Stine: Thank you, operator. Good afternoon, everyone, and thank you for joining us. With me on today’s call are Jayshree Ullal, Arista Networks’ President and Chief Executive Officer; and Ita Brennan, Arista’s Chief Financial Officer. This afternoon, Arista Networks issued a press release announcing the results for its fiscal third quarter ending September 30, 2023. If you would like a copy of the release, you can access it online at our website. During the course of this conference call, Arista Networks’ management will make forward-looking statements, including those relating to our financial outlook for the fourth quarter of the 2023 fiscal year, longer-term financial outlook for 2024 and beyond, our total addressable market and strategy for addressing these market opportunities, including AI, customer demand trends, supply chain constraints, component costs, manufacturing output, inventory management, and inflationary pressures on our business, lead time, product innovation, working capital optimization, and the benefits of acquisitions, which are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K, and which could cause actual results to differ materially from those anticipated by these statements.
These forward-looking statements apply as of today, and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call. Also, please note that certain financial measures we use on the call are expressed on a non-GAAP basis and have been adjusted to exclude certain charges. We have provided reconciliations of these non-GAAP financial measures to GAAP financial measures in our earnings press release. With that, I will turn the call over to Jayshree.
Jayshree Ullal: Thank you, Liz, and happy Halloween, everyone. We delivered revenues of $1.51 billion for the quarter, with a non-GAAP earnings per share of $1.83. Services and software support renewals contributed approximately 16.8% of revenue. Our non-GAAP gross margin of 63.1% was influenced by improving supply chain overheads and higher enterprise contributions. As we have said before, gross margins have consistently improved every quarter this year and will stabilize next year in 2024. International contribution registered at 21.5%, with the Americas at 78.5%. As predicted, Arista's supply chain and lead times are improving steadily in 2023, and we expect them to normalize in 2024. We are now projecting 33% annual growth versus our prior Analyst Day forecast of 25% growth for the 2023 calendar year.
During the past year, our cloud titan customers have been planning a different mix of AI networking and classic cloud networking for their compute and storage clusters. Our historic classification of our cloud titan customers has been based on industry definition of customers with or likely to attain greater than 1 million installed compute servers. Looking ahead, we will combine cloud and AI customer spend into one category called cloud and AI titan sector. And as a result of this combination, Oracle OCI becomes a new member of the sector, while Apple shifts to cloud specialty providers. This new cloud and AI titan sector is projected to represent greater than 40% of our total revenue mix due to the favorable AI investments expected in the future.
In terms of enterprise momentum, Arista continues to focus on multi-domain modern software with architectural superiority based on our single EOS, Extensible Operating System, and CloudVision stack. This is truly a unique foundation and differentiator. We have demonstrated our strong execution and uncompromised quality with a predictable release cadence that our customers have come to enjoy and appreciate. The power of our one consistent software stack across a breadth of use cases, be it WAN routing, campus, branch, or data center infrastructure, is truly unmatched by our industry peers. Let me illustrate with a few customer wins. Our first customer win is an international one where the customer is providing services for their interconnect of high-performance compute, HPC, clusters, which often form the foundation for a GPU-as-a-service offering.
Arista's Ethernet modular switch coupled with EOS created a perfect combination of a flagship platform with real-time telemetry leveraging our EOS state-driven publish/subscribe model. Our next win showcases the expansion of Arista in the public sector with their AI initiative. This grant-funded project utilizes Arista's simplified operational models with CloudVision. New AI workloads require high scale, high radix, high bandwidth, and low latency, as well as granular visibility. This build-out of a single EVPN VXLAN-based 400-gig fabric is based on deep buffer spines and underscores the importance of a lossless architecture for AI networking. This last, but not least, customer is an example of a campus WAN. A couple of years ago, the customer was looking to do a complete refresh of their aging campus network, which comprises four major headquarter campuses and several remote sites.
The customer was able to leverage the Arista Validated Design models, AVD, all the way from the data center into the campus network. The customer chose Arista because we were able to offer best-of-breed operational excellence, as well as security with our zero trust AVA sensors for threat mitigation across the entire campus of wired switches. Arista's innovative macro segmentation, MSS, combined with leaf access and core spine, delivered a compelling two-tier cognitive campus solution. These three customers illustrate the power of our platform and software innovations for a modern network model with a low total cost of operation. We are pleased with our trajectory, setting the gold standard in our industry with the lowest CVEs and vulnerabilities and the highest Net Promoter Score for cloud networking.
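To illustrate the "state-driven publish/subscribe model" referenced above in purely general terms, here is a minimal conceptual sketch in Python. It is not Arista's EOS, SysDB, or CloudVision implementation, and every class, function, and path name in it is hypothetical; it only shows the pattern of state writers publishing updates that telemetry subscribers receive.

```python
from collections import defaultdict
from typing import Any, Callable, Dict, List

class StateDB:
    """Toy state database: writers publish path/value updates and
    subscribers receive callbacks for the paths they watch.
    Purely illustrative; not Arista's EOS, SysDB, or CloudVision."""

    def __init__(self) -> None:
        self._state: Dict[str, Any] = {}
        self._subscribers: Dict[str, List[Callable[[str, Any], None]]] = defaultdict(list)

    def subscribe(self, path: str, callback: Callable[[str, Any], None]) -> None:
        # Register interest in a state path (the path string here is hypothetical).
        self._subscribers[path].append(callback)

    def publish(self, path: str, value: Any) -> None:
        # Update the state and notify every subscriber of that path.
        self._state[path] = value
        for callback in self._subscribers[path]:
            callback(path, value)

# Example: a telemetry collector subscribing to a hypothetical counter path.
db = StateDB()
db.subscribe("interfaces/Ethernet1/in_octets",
             lambda path, value: print(f"telemetry update: {path} = {value}"))
db.publish("interfaces/Ethernet1/in_octets", 1_234_567)
```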
And with that, I’d like to hand to Ita, our CFO, for financial specifics.
Ita Brennan: Thanks, Jayshree, and good afternoon. This analysis of our Q3 results and our guidance for Q4 ’23 is based on non-GAAP. It excludes all non-cash stock-based compensation impacts, certain acquisition-related charges, and other non-recurring items. A full reconciliation of our selected GAAP to non-GAAP results is provided in our earnings release. Total revenues in Q3 were $1.51 billion, up 28.3% year-over-year, and well above the upper end of our guidance of $1.45 billion to $1.5 billion. Services and subscription software contributed approximately 16.8% of revenues in the third quarter, up from 15.2% in Q2. International revenues for the quarter came in at $324.7 million, or 21.5% of total revenue, up from 20.9% last quarter.
This quarter-over-quarter increase largely reflected a healthy contribution from our enterprise customers in EMEA and APAC and some reduction in domestic shipments to our cloud titan customers. Overall gross margin in Q3 was 63.1%, well above guidance of approximately 62% and up from 61.3% last quarter. We continue to see incremental improvements in gross margin quarter-over-quarter with higher enterprise shipments and better supply chain costs, somewhat offset by the need for additional inventory reserves as customers refine their forecast product mix. Operating expenses in the quarter were $255.6 million, or 16.9% of revenue, down from last quarter at $287.3 million. R&D spending came in at $164.4 million, or 10.9% of revenue, down from $188.5 million last quarter.
This primarily reflected increased headcount, more than offset by lower new product introduction costs in the period. Sales and marketing expense was $79 million, or 5.2% of revenue, consistent with last quarter, with increased headcount and some reduction in product demo costs. Our G&A costs came in at $12.1 million, or 0.8% of revenue, down from last quarter, reflecting the recovery of some bad debt amounts recorded in prior periods. Our operating income for the quarter was $696.2 million, or 46.1% of revenue. Other income and expense for the quarter was a favorable $42.3 million, and our effective tax rate was 21.3%. This resulted in net income for the quarter of $581.4 million, or 38.5% of revenue. Our diluted share number was 317.6 million shares, resulting in a diluted earnings per share number for the quarter of $1.83, up 46.4% from the prior year.
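As a quick back-of-the-envelope cross-check of those figures, a sketch using only the numbers quoted above shows how the reported non-GAAP EPS follows from the stated operating income, other income, tax rate, and diluted share count (small differences are rounding of the stated tax rate):

```python
# Back-of-the-envelope cross-check of the reported Q3 FY2023 non-GAAP figures.
revenue = 1510.0           # $M, approximately ("$1.51 billion")
operating_income = 696.2   # $M
other_income = 42.3        # $M, favorable other income and expense
tax_rate = 0.213
diluted_shares = 317.6     # millions

pretax_income = operating_income + other_income
net_income = pretax_income * (1 - tax_rate)   # small rounding vs. reported $581.4M
eps = net_income / diluted_shares

print(f"operating margin ~ {operating_income / revenue:.1%}")  # ~46.1%
print(f"net income       ~ ${net_income:.1f}M")                # ~$581M
print(f"diluted EPS      ~ ${eps:.2f}")                        # ~$1.83
```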
Now, turning to the balance sheet. Cash, cash equivalents and investments ended the quarter at approximately $4.5 billion. We did not repurchase shares of our common stock in the quarter. To recap our repurchase program to date, we have repurchased $855.5 million, or 8 million shares, at an average price of $107 per share, under our current $1 billion Board authorization. This leaves $144.5 million available for repurchase in future quarters. The actual timing and amount of future repurchases will be dependent on market and business conditions, stock price, and other factors. Now, turning to operating cash performance for the third quarter. We generated approximately $699 million of cash from operations in the period, reflecting strong earnings performance combined with some increase in deferred revenue and taxes payable.
DSOs came in at 51 days, up from 49 days in Q2, reflecting a strong collections quarter and good linearity of billings. Inventory turns were 1.1 times, down from 1.2 last quarter. Inventory remained flat to last quarter at $1.9 billion, reflecting the ongoing receipt and consumption of components from our purchase commitments and an increase in switch-related finished goods. Our purchase commitments at the end of the quarter were $2 billion, down from $2.2 billion at the end of Q2. We expect the overall purchase commitment number to continue to decline as we further optimize our supply positions. However, we will maintain a healthy position related to key components, especially as we focus on new products. Our total deferred revenue balance is $1.195 billion, up from $1.085 billion in Q2.
The majority of the deferred revenue balance is services-related and directly linked to the timing and term of service contracts, which can vary on a quarter-by-quarter basis. Our product deferred revenue balance increased by $47 million from last quarter. Accounts payable days were 44 days, down from 57 days in Q2, reflecting the timing of inventory receipts and payments. Capital expenditures for the quarter were $11.2 million. Now, turning to our outlook for the fourth quarter. Customer planning horizons for new deployments have shortened in concert with steadily improving lead times. On the supply side, we expect to continue to ship against previously committed deployment plans for some time, targeting supply improvements where most needed, but also being careful not to create redundant customer inventory.
As outlined in our guidance, we expect to make incremental improvements to our 2023 outlook, which now calls for year-over-year revenue growth of approximately 33%. On the gross margin front, we expect gross margins of approximately 63% in the fourth quarter, reflecting ongoing supply chain and manufacturing benefits while maintaining a reasonably healthy cloud contribution. Turning to spending and investments, we expect to monitor the overall macro environment carefully while engaging in targeted hiring in R&D and go-to-market as the team sees the opportunity to acquire talent. On the cash front, while increases in working capital have begun to moderate in recent quarters, our year-to-date 2023 tax payments have been deferred to October, and this will represent a significant incremental use of cash in the fourth quarter, at approximately $352 million.
With all of this as a backdrop, our guidance for the fourth quarter, which is based on non-GAAP results and excludes any non-cash stock-based compensation impacts and other non-recurring items is as follows: revenues of approximately $1.5 billion to $1.55 billion; gross margin of approximately 63%; operating margin of approximately 42%; our effective tax rate is expected to be approximately 21.5%, with diluted shares of approximately 319 million shares. I will now turn the call back to Liz. Liz?
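For reference, a rough sketch of what those guidance midpoints imply, using only the figures above plus one explicitly labeled assumption for other income (carried over from the Q3 level; it is not part of the guidance):

```python
# Rough implied Q4 FY2023 figures from the guidance midpoints above.
revenue_mid = (1500.0 + 1550.0) / 2     # $M
gross_margin = 0.63
operating_margin = 0.42
tax_rate = 0.215
diluted_shares = 319.0                  # millions
other_income_assumed = 42.3             # $M; assumption only, roughly the Q3 level

gross_profit = revenue_mid * gross_margin
operating_income = revenue_mid * operating_margin
implied_opex = gross_profit - operating_income
net_income = (operating_income + other_income_assumed) * (1 - tax_rate)
implied_eps = net_income / diluted_shares

print(f"implied gross profit      ~ ${gross_profit:.0f}M")      # roughly $0.96 billion
print(f"implied operating income  ~ ${operating_income:.0f}M")  # roughly $640M
print(f"implied operating expense ~ ${implied_opex:.0f}M")      # roughly $320M
print(f"implied EPS (with the assumed other income) ~ ${implied_eps:.2f}")
```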
Liz Stine: Thank you, Ita. We will now move to the Q&A portion of the Arista earnings call. To allow for greater participation, I’d like to request that everybody please limit themselves to a single question. Thank you for your understanding. Operator, take it away.
Q&A Session
Operator: We will now begin the Q&A portion of the Arista earnings call. [Operator Instructions] Your first question comes from the line of Samik Chatterjee with JPMorgan. Your line is open.
Samik Chatterjee: Hi, thank you for the question, and congrats on the results. I guess just to keep it simple, Jayshree, if you can give us an update on when we think about the last 90 days, how have the two sort of verticals, cloud titans and enterprise, sort of shown up in terms of momentum of orders and demand relative to where your expectations were 90 days ago? I know some of the cloud companies have talked about their CapEx outlook for next year as well. So, an update on that would be helpful. And on the last call, you did talk about a target for double-digit growth next year. So, how are you thinking in relation to that number still going into the Investor Day? Thank you.
Jayshree Ullal: Okay. Thanks, Samik. First of all, we are looking forward to sharing more detail on Analyst Day. But just to reiterate, our team has always projected at least a double-digit growth for next year and years beyond. So that goal remains unchanged. And we’ll share more with you. Coming back to the last 90 days, as you know, as our lead times improve, our visibility declines. But we don’t see significant change in improvements or declines in the last 90 days. We continue to see good momentum on enterprise and we continue to see a good expected push on the combination of both cloud and AI together.
Samik Chatterjee: Okay. Thank you. Thanks for taking my question.
Liz Stine: Thanks, Samik.
Operator: Your next question comes from the line of Antoine Chkaiban with New Street Research. Your line is open.
Antoine Chkaiban: Thanks very much for taking my question. So, accelerated AI cluster deployment is clearly weighing on traditional infrastructure deployment this year. And I'm keen to hear how sustainable you think this is, because the vast majority of workloads still run on traditional infrastructure, right? So, is it fair to expect a rebound in traditional infrastructure spend next year?
Jayshree Ullal: Yes, thank you, Antoine. I’ll share some of my thoughts and I’d like to hand it over to Anshul for further thoughts. We’ve always looked at the cloud network as a front-end and a back-end. And as we said last year, many of our customers are favoring spending more on the back-end with AI, which doesn’t mean they stopped spending on front-end, but they’ve clearly prioritized and doubled down on AI this year. My guess is as we look at the next few years, they’ll continue to double down on AI, but you cannot build an AI back-end cluster without thinking of the front-end. So we’ll see a full cycle here where, while today the focus is greatly on AI and the back-end of the network, in the future we expect to see more investments in the front-end as well.
Anshul Sadana: Jayshree, that’s right. You said it’s spot on. AI is everyone’s priority right now, and the rest will get touched at the right time.
Antoine Chkaiban: Thank you.
Liz Stine: Thanks, Antoine.
Operator: Your next question comes from the line of Matt Niknam with Deutsche Bank. Your line is open.
Matt Niknam: Hey, thank you for taking the question. One question, very simple one, on services. Pretty nice improvement, about 13% sequential improvement in the quarter. Seasonally, I think we’ve seen low-single digits, mid-single digits. Anything you would call out? And how are we thinking about that for the fourth quarter? Thanks.
Ita Brennan: Yeah, look, I think every now and again you see kind of a pop on the services line. It's usually either somebody has consumed services faster than they intended to, or we've been negotiating a contract and then, when we do actually finally sign the renewal contract, there's some flush of prior periods into the quarter. So, if you look back historically, you'll see that happens from time to time. I don't think it changes the kind of fundamental growth in services we've talked about, that's kind of mid to maybe a little bit higher teens growth on an ongoing basis year-over-year. I don't think it changes that. It's just you do have these little spikes from time to time.
Matt Niknam: Thank you.
Liz Stine: Thanks, Matt.
Operator: Your next question comes from the line of Karl Ackerman with BNP Paribas. Your line is open.
Karl Ackerman: Yes, thank you. I suppose this is a question for Ita, but is the upside in the quarter and outlook coming from a combination of better bookings and working down some of your prior backlog? Just any thoughts in terms of maybe where your backlog may end up relative to normal levels pre-pandemic would be super helpful. Thank you.
Ita Brennan: Yeah, Karl, we don't talk about backlog specifically. I think what we have said is, as lead times improve, you expect to see some reduction in visibility [because] (ph) the time when customers have to place orders changes, right? So we are seeing that dynamic, and we've talked about that dynamic: as lead times get better, we are seeing customer planning horizons shorten. We will still be deploying, if you listen to my prepared remarks; I mean, we are still deploying equipment into next year from plans that we made some time ago, and that's just, again, working with customers and laying out their plans. But in terms of giving specific numbers, we haven't done that.
Liz Stine: Great. Thanks, Karl.
Operator: Your next question comes from the line of Amit Daryanani with Evercore. Your line is open.
Amit Daryanani: Good afternoon, everyone, and congrats on a nice set of numbers here. I was hoping you could talk a little bit more on the enterprise side. You’re seeing some really good strength over here clearly. But maybe you can talk about, is the strength more coming from campus versus the data center side, maybe just qualitatively where you’re seeing better trends? And really the context of this is I think a lot of your peers are seeing a very severe drop in their growth rates as their backlogs have gone away. You don’t seem to be having that issue. So I’m wondering like what is the offset to that and what’s enabling the growth? And to the extent, you can talk about campus versus the data center that would be really helpful. Thank you.
Jayshree Ullal: Okay, Amit. Again, I'll share a few words and I'd love for Anshul to step in and say some too. Look, if you look back three years ago, we started seriously investing in the enterprise. And back in 2020, we had a small enterprise business, and it was largely comprised of, as you rightly pointed out, data center and some high-performance compute and low-latency HFT. We can't ever forget our original heritage. But in the last three years, we have made an investment and seen a significant uptick in enterprise customers wanting to do business with Arista. Historically, it's been the high-tech enterprise and the financials. And today, we're seeing a much better cross section of verticals, including healthcare and education, and we expect to see more and more distributed enterprises.
And to your question on data center versus campus, the answer is yes, to both. We actually see one uniform architecture where you can have a universal spine that connects into a wired leaf, a wireless leaf, a storage cluster, a compute cluster, a border leaf for routing, and WAN transit. It’s pretty exciting that Arista is truly and remarkably setting the tone for a two-tier defined architecture across the enterprise, and building that modern operating model based on CloudVision.
Anshul Sadana: Amit, this is Anshul here. We have a great team, led by Chris Schmidt and Ashwin Kohli, in this space. And now we sell in many, many countries around the world. And as Jayshree mentioned, for both data center and campus, customers are coming to us for the automation, for the higher quality, for the visibility that we're able to bring to them across the board in one architecture, one OS, and one CloudVision. That message resonates with every CIO today, and they are no longer worried about Arista being this new kid on the block that's a risky move for them. We are, in fact, becoming the de facto standard, and they like it, which is why the momentum just continues. It's good execution by the team and getting to more and more customers around the world.
Liz Stine: Thank you, Amit.
Operator: Your next question comes from the line of Ben Bollin with Cleveland Research. Your line is open.
Ben Bollin: Thanks for taking the question. Good evening, everyone. Jayshree and Anshul, I was hoping you might be able to comment a little bit about your thoughts as you make progress in the backend network around GPU cluster opportunity, how you see that developing versus what you’ve shared with us previously? And any color in particular around both pre-existing and the opportunity for net new wins would be helpful. Thanks.
Jayshree Ullal: Sure. Again, this is an area that Anshul lives and breathes more than I do, so I’ll give you some executive comments. But, Ben, as I see it, the back-end network was something we didn’t even see a few months or years ago and was largely dominated by InfiniBand. Today, if I look at the five major designs for AI networking, one of them is still very InfiniBand dominated, all the others we’re looking at are adopting a dual strategy of both Ethernet and InfiniBand. So, I think AI networking is going to become more and more favorable to Ethernet, particularly with the Ultra Ethernet Consortium and the work they’re doing to define a spec, you’re going to see more products based on UEC. You’re going to see more of a connection between the back-end and the front-end using IP as a singular protocol.
And so, we’re feeling very encouraged that especially in 2025, there will be a lot of production rollout of back-end and of course front-end based on Ethernet. Over to you, Anshul.
Anshul Sadana: Sure, thanks, Jayshree. Ben, our cloud titan customers, as well as the specialty providers, have been great partners of ours. So, the level of partnership and co-development that's going on in this space is high. It's just like in previous cycles, previous products that we've done with them; there's a lot of fine tuning needed in these back-end networks to get the maximum utilization of GPUs. And as you know, we are good at these [engineering] (ph) projects. So the teams are enjoying it. The activity is much, much higher than before. And the goal is to scale these clusters as quickly as possible so our customers can run their jobs faster. We're feeling good about it. You've heard comments from Jayshree as well in the past, and you'll hear more at the Analyst Day on this topic too, but all good on the activity front over here.
Ben Bollin: Thank you.
Jayshree Ullal: I think one thing to just add is the entropy and efficiency of these large language models and the job completion time is becoming so critical that it’s not just about packet latency, it’s really about end-to-end latency. And this is something our team, especially our engineers, know a lot about from the early days. So, we’re really working this end to end.
Liz Stine: Thanks, Ben.
Operator: Your next question comes from the line of Aaron Rakers with Wells Fargo. Your line is open.
Aaron Rakers: Yeah, thanks for taking the question. I just want to kind of dovetail off that last question a little bit. I know, Jayshree, last quarter, I think it was you commented that you’d expect to see pilot deployments for these AI opportunities in ’24 and then meaningful volume in 2025. First of all, do you reaffirm that view, or has that changed at all? And then on that, can you give us some context of how you see network spend intensity for these AI fabrics relative to, I think in the past, it’s been kind of high-single-digit percent of compute spend on networking in classical cloud infrastructure environments?
Jayshree Ullal: Well, first of all, Aaron, the first question is easy. I reaffirm that view, and there will be more later on November 9 at our Analyst Day. So, if I tell you everything now, you may not attend that session. Coming back to this networking spend versus the rest of the GPUs, et cetera, I would say it started to get higher and higher with 100 gig, 400 gig, 800 gig, where the optics and the switches are more than 10%, perhaps even 15%, in some cases 20%. A lot of it's governed by the cables and optics too. But the percentage hasn't changed a lot in high-speed networking. In other words, it's not too different between 10, 100, 200, 400, and 800. So, you'll continue to see that 10% to 15% range.
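To put that 10% to 20% range in concrete dollar terms, here is a small hypothetical illustration; the cluster budget below is an invented figure for illustration only and was not disclosed on the call:

```python
# Hypothetical illustration of networking spend as a share of an AI cluster budget.
cluster_budget_musd = 500.0  # $M; invented figure, for illustration only

for share in (0.10, 0.15, 0.20):
    networking_spend = cluster_budget_musd * share
    print(f"at {share:.0%} of cluster spend: ~${networking_spend:.0f}M "
          f"for switches, optics, and cables")
```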
Aaron Rakers: Okay. Thank you.
Liz Stine: Thanks, Aaron.
Operator: Your next question comes from the line of Tal Liani with Bank of America. Your line is open.
Tal Liani: Hi. Jayshree, your tone is definitely better this quarter than last quarter, and you sound more confident in the numbers. And I want to understand if something changed in the last three months that made you more optimistic. I’m looking at the consensus estimates and it looks like the growth rate has been declining for four quarters from like 54% to about 20% next quarter. And then, it troughs at Q1, stays there and recovers after that. Do you agree that we are nearing kind of the end of the down adjustment to the growth rates and then it’s going to stabilize and go up from there? Or how do you look at the risks of that not materializing?
Jayshree Ullal: What do you think, Ita?
Ita Brennan: So, Tal, I think, look, we’ve been talking about kind of the growth decelerating as we move through the year, just because the comps are so high. I think if you look at the discussion we’ve had so far about ’24, and obviously there’s more to come next week, we’ve talked about double-digit growth. But again, we are expecting that there is some moderation on the cloud side of the business next year. So, I think within the bounds of kind of the plans that we’ve laid out and discussions that we’ve laid out, I think we’re executing well, right? We’re giving you some upside in the guide for ’23 and by default almost, some upside in ’24, right? So I think we’re executing well, but within the bounds of what we talked about. And we do believe that there’s moderation of cloud spending as we head into 2024.
Jayshree Ullal: And Tal, I think I need to focus on my tone and maybe sing a song or something, because I felt really [enthusiastic] (ph) last quarter and this quarter.
Ita Brennan: I would say she was pretty happy last quarter.
Jayshree Ullal: I’m a happy kind of gal at the moment.
Tal Liani: We read in between the lines, you know?
Jayshree Ullal: Thanks, Tal.
Liz Stine: Thank you, Tal.
Operator: Your next question comes from the line of Sebastian Naji with William Blair. Your line is open.
Sebastian Naji: All right, thanks for taking the question. I wanted to ask about the change in revenue breakdown and the inclusion of OCI in this new cloud and AI titan segment. Was this the result of a material change in Arista's wallet share at Oracle, or is that business becoming a larger portion of revenue? Anything you can provide there?
Jayshree Ullal: Yeah, no, we don’t do it based on wallet share of Arista. We do it based on definition. So, I think OCI has become a meaningful top-tier cloud customer, and they belong in the cloud titan category in addition to their AI investments as well. So, for reasons of classification and definition, the change is very warranted. And yes, they happen to be a good customer of Arista. That’s nice as well.
Sebastian Naji: Got it. Okay. Thank you.
Liz Stine: Thank you.
Operator: Your next question comes from the line of Meta Marshall with Morgan Stanley. Your line is open.
Meta Marshall: Great, thanks. Jayshree or Anshul, maybe just some commentary on the tier-two and specialty providers, and just what you’re seeing in terms of other people kind of building out some of these AI clusters? You classify some of those customers as largely focused on back-end today and those represent opportunities going forward, or just kind of what the discussion is outside of the cloud titans amongst some of these other guys that are building very large networks? Thanks.
Anshul Sadana: Sure. Meta, this is Anshul. The tier-two cloud providers are doing exactly what the tier-one is doing, just at a smaller scale. So, the activity is out there. Many companies are trying to build these clusters, maybe not hundreds of thousands of GPUs, but thousands of GPUs together in their real estate if they can get them. But the designs that we’re working on with them, the type of sort of features, fine tuning is actually very, very similar to the cloud, just at a smaller scale. So, we’re very happy with that activity. And this is across the board. It’s very positive to see this in the ecosystem that it’s not limited to just four or five customers.
Jayshree Ullal: I think they're also waiting for GPUs like everyone else is. So, there's that common problem; we're not the only one with lead time issues. But to clarify the comment on scale, Anshul and I are also seeing some very interesting enterprise projects at a smaller scale. So, a lot of customers are trying AI for small clusters, not too different from what we saw with HPC clusters back in the day.
Anshul Sadana: Yeah.
Meta Marshall: Thank you.
Operator: Your next question comes from the line of Michael Ng with Goldman Sachs. Your line is open.
Michael Ng: Hey, good afternoon. Thank you very much for the question. I just had one on the OpEx outperformance in the quarter. We saw an unseasonal decline quarter-on-quarter, and I think you mentioned lower product introduction costs that may have helped R&D. I was just wondering if you could talk a little bit more about that aspect of it. Any way we should think about product introductions going forward to help us understand the trajectory of OpEx? Thank you.
Ita Brennan: Yeah, I mean a lot of it is timing, right? We've got a lot of different projects, a lot of different products kind of flowing through the R&D labs right now, so there is going to be some volatility in terms of when the spend shows up, when the proto spend happens, et cetera. So I think we were lower this quarter in Q3 than maybe we even anticipated coming into the quarter. I expect that to come back in the guide for Q4. And again, there may be some volatility in that even going forward, just because it's all about timing, nothing unusual in that. There's just a lot of products kind of going through the R&D labs.
Jayshree Ullal: So, Michael, when the chips are down, our spending is down, but when the chips come on hot, our spending gets hot too. So, expect our prototypes to have some high variability, and we've got a lot of new products in the pipeline that Andy, Anshul, Ken, you are all working on. So, we expect that number to go up over the next four quarters.
Michael Ng: Great, very helpful. Thanks, Jayshree. Thanks, Ita.
Operator: Your next question comes from the line of Atif Malik with Citi. Your line is open.
Atif Malik: Hi, thank you for taking my question. Jayshree, at the recent Open Compute Project Conference, Marvell and Broadcom, leading Ethernet switch merchant chip providers, sounded very confident in terms of Ethernet adoption at hyperscalers like Meta and Oracle as well. And one of your peers has talked about $500 million in AI orders, whether it's custom chips. So, I was curious about your thoughts on the dynamics between custom chip and merchant switch chip providers, and how does that help Arista? Thank you.
Jayshree Ullal: Yeah, Atif, we have been strong proponents, in our last 15, 17 years of Arista's career, of merchant silicon. We look for the best-of-breed chips. My engineering team has built a lot of chips in their past, but we decided to work with the best-of-breed companies, Broadcom being one of our favorite and major suppliers. Of course, in the past, we worked with Intel, Cavium, and we don't rule out other suppliers as well. But this is clearly an area where you can't just build one chip, you have to build a portfolio of silicon. And what Broadcom has done in building that portfolio not only for cloud networking, but for campus and AI, is impressive. And you have to not just look at performance, you have to look at price, density, power.
These are all very important metrics as we look ahead. The root issue here, and we'll share this more with you going forward as well, is not just the merchant silicon, but how you can enable the merchant silicon with the right software and drivers. And this is an area that Arista really excels in. If you just have chips, you can't build a system. Our system-wide features, whether it's dynamic load balancing or the latency analyzer, to really improve the job completion time and deal with the frequent communication in generative AI, are also fundamentally important. You're going to hear a lot more about this next week, so stay tuned.
Atif Malik: Thank you.
Operator: Your next question comes from the line of Ben Reitzes with Melius Research. Your line is open.
Ben Reitzes: Yeah, hey, thanks for the question. Jayshree and Ita, can you discuss a little more your gross margin commentary that it should moderate next year from the 63% levels in the back half? I mean, are we talking about it going to the first half ’23 kind of levels, or just a little bit of a degradation next year? And what would be the reason behind it? Other than lead times, is there any other mix or other issues that would cause it to go down? Thanks.
Ita Brennan: So, Ben, I think what Jayshree's commentary, and my commentary, said is that we have been seeing it incrementally improve as we've gone through the year. We expect it to stabilize. So it's not that we expect it to go down next year, but more that it will stabilize. And then it will become more dependent on customer mix and other things again, similar to where we've been before. But obviously, we'll provide more discussion on this next week too. But the intention was not to say that we think it starts to decline again. It was more that we think it will stabilize after a period where we've been seeing these incremental improvements.
Ben Reitzes: Okay, thanks a lot. Appreciate the color.
Liz Stine: Thanks, Ben.
Operator: Your next question comes from the line of Tim Long with Barclays. Your line is open.
Tim Long: Thank you. I just wanted to hit on the cloud titan vertical, or cloud and AI titan vertical now, if I could. I think, Ita, one of your comments was that it was down a little in the quarter. Two parts here: could you just talk a little bit about that comment? Is this just timing, or are there some different market share dynamics there? And then, secondly, if you could talk a little bit about opportunities at other hyperscalers? I know that's something where there's been trial activity and potential, and it sounds like it might take a little while, but any updates on other cloud titans that could become larger customers? Thank you.
Ita Brennan: Just in terms of cloud, I mean, it’s going to be a good cloud year again in 2023 for us, I think, but we did come into the year saying we wanted, if we could to balance supply a little bit towards enterprise. And we have been doing that. There’s been some — you’ll see it, it’s not a huge mix shift, but there has been some mix shift towards enterprise when we can, and we’re pleased that we’ve been able to do that. Anshul, I don’t know if you want to take the other cloud.
Anshul Sadana: Sure. Tim, the engagement with the other cloud titans who are our customers is still very positive. They're good customers, as many of you know, in routed layers, backbone, and WAN use cases as well. Next week, we'll touch a little bit more on the whole build versus buy topic.
Liz Stine: Thanks, Tim.
Operator: Your next question comes from the line of James Fish with Piper Sandler. Your line is open.
James Fish: Hey, ladies, and Anshul, great quarter. Just on the product side, you guys released a new 25 gig offering recently. I guess what's been the early feedback? What kind of differentiates you down there? And Jayshree, just to clarify here, when you talk about that double-digit growth rate for next year and years beyond, are you talking about a multi-year CAGR, or for '24 specifically and then for '25 and '26 and beyond? Just trying to clarify here. Thanks.
Jayshree Ullal: All right. Okay. Well, Anshul, you want to answer the product question first?
Anshul Sadana: Sure. James, the recent announcement was the launch of our 25 gig ultra-low latency switches. These are the 7130 series. And now the whole world can upgrade their high-frequency trading infrastructure, going from 10 gig to 25 gig at very, very low latency. With crosspoint technology, you're talking about 7 nanoseconds. And we also now introduce Layer 2 and Layer 3 features at about 100 to 130 nanoseconds.
Jayshree Ullal: And Anshul, just to put this in perspective, back in the day, it used to be 500 nanoseconds, right?
Anshul Sadana: That’s right. It only keeps going down.
Jayshree Ullal: Yeah, faster than the speed of light. And James, just to give you a clarification, I was saying as a company, Ita and myself, Anshul, we’re aiming for at least double digits in ’24 and years beyond, but I wasn’t making any forecasts for exact numbers.
James Fish: Helpful. Thanks.
Liz Stine: Thank you, James.
Operator: Your next question comes from the line of Ittai Kidron with Oppenheimer. Your line is open.
Ittai Kidron: Thanks, ladies. Quick question on gross margin. Nice improvements there. Ita, maybe you can go into the details of how much room is there more to go? And I’m just kind of wondering with your customers now looking at your excellent financials and your recovering gross margins, what are the odds that pricing pressures start coming back? Something you probably have not seen much in the last couple of years since COVID, now that margins are normalizing, could prices come down potentially, perhaps even for the — more specifically, to the larger customers of yours?
Jayshree Ullal: Ittai, I'll just start by saying prices are always coming down. As we go from one speed factor to another, between the SerDes technology and the density, the dollar per gigabit is always coming down. So, pricing pressure doesn't change independent of our gross margin. We're always in competitive deals. Where the value really comes in, as I alluded to, is CapEx versus OpEx. We expect pricing to be reasonably stable, but we expect the operational cost to be significantly advantageous with Arista technology. The total cost, the TCO, benefits because of a singular CloudVision, because of our software-driven approach, and because of the fact that we have single-digit vulnerabilities while our industry peers have 100 to 500 of them in a given five-year period. Customers, and enterprises especially, are very fatigued with the poor quality of our competitors, are paying a lot of attention to that, and are willing to pay for that quality.
Anshul Sadana: Ittai, as Jayshree mentioned, I want to emphasize this. The market is very competitive and it has been ever since we started. The gross margin that we report is not the reason why customers try to negotiate price; the gross margin is simply a result of what we've been executing on. I think the [indiscernible].
Ittai Kidron: Very good, thank you.
Liz Stine: Thank you, Ittai.
Operator: Your next question comes from the line of Simon Leopold with Raymond James. Your line is open.
Simon Leopold: Thanks for taking the question. I wanted to see if you would be able — willing to comment on your customer concentration year-to-date. I appreciate it can be lumpy quarter-to-quarter, but given sort of where you were in 2022, I’d just like to get a better understanding of what essentially the progress has been in 2023. And in that context, how big is enterprise as a percent of revenue this year, year-to-date versus where it was last year? Thank you.
Jayshree Ullal: Simon, we're very proud of our customers; even if they're concentrated, we love it. And as you know, last year we had some outsized concentration. If I recall the numbers, Meta was at 26%, and what was Microsoft, Ita?
Ita Brennan: Microsoft was 17%.
Jayshree Ullal: 16% or 17%. While we expect, due to much of the CapEx news you've seen and the shift in AI spending, that it's possible they come down, they're still going to be very strong, north-of-10% contributors to our 2023 results. And even as the denominator may get larger in the forthcoming years, we continue to look at them as two very important and strategic customers for us.
Liz Stine: Thanks, Simon.
Operator: Your next question comes from the line of David Vogt with UBS. Your line is open.
David Vogt: Great. Thanks, guys, for taking the question. I just want to follow up on Simon's question, maybe put it a little bit differently. So, if I think about your market sector trend update, how much of the shift to that 40% to 45% cloud and AI titans reflects the inclusion of new AI use cases going forward and the shift of Oracle, combined with maybe some normalization at Meta and Microsoft? Can you kind of help us think through the dynamics there? And if it looks like share is going to be unchanged with the enterprise and financials, does that suggest to you that those markets are going to grow comparably over the long term across the cycle? Is that the right way to look at it? Thanks.
Jayshree Ullal: Yeah, David, your analysis is really deep on this one. Let me just say how innocently we reported this, which is that Oracle is a greater than 1 million installed server company right now. And both their cloud spend, as OCI, and their AI spend are significant, both as a company and for Arista. But we're not making any assumptions, and that will vary every year, of course, on the mix of Microsoft or Oracle or any other for that matter. We're simply saying AI is going to become such an important component of all our cloud titans that it's now a combined vertical. Don't read too much more into it.
Ita Brennan: Yeah, it’s more of a forward-looking impact, to be honest. Historically, this doesn’t really change the trends that we’ve been talking about previously. It’s really more about the future and how do you think AI will impact these numbers going forward.
Jayshree Ullal: Yeah, AI is too small to have much impact right now, that's right, you know that. But as it starts to become important, then this combined sector will go north of the 39% we have normally forecast.
David Vogt: Great. Understood. Thank you.
Operator: Your next question comes from the line of Erik Suppiger with JMP Securities. Your line is open.
Erik Suppiger: Yeah, thanks for taking the question, and congrats. I know you don’t want to talk about backlog, but can you give us a sense at what point or what time you think your book to bill will return back to 1 or greater than 1, or when will your lead times reach normalized level? And then I have a quick follow-up after that.
Ita Brennan: We’re definitely not going to talk about book to bill if we don’t talk about backlog, Erik. So, look, I think honestly we’re making improvements. Jayshree talked about how lead times are much improved, right? So, we’ll continue to do that. That’s a positive thing for the customers and for the business. And we’re still not back to kind of the turn business that we had some time ago. We’re making progress, but we’re still out there.
Jayshree Ullal: I just want to add that I'm very proud of the progress the team has made. When you look back a few years ago, we were short of components, we were making multi-year purchases. There was a risk of a very large exposure, because you can't get all these forecasts right. And then obviously the mix changes from time to time, especially with the cloud and AI. So, it's very hard to measure our business on book to bill and backlog at a given time. You have to look at it as an overall multi-year trend.
Erik Suppiger: Can you comment then on just when will the lead times be at a normalized level?
Jayshree Ullal: Yeah, and I think we said this. It's been improving consistently, and we expect it to normalize, just like our gross margins, in 2024.
Erik Suppiger: All right, very good. Thank you.
Liz Stine: Operator, we have time for one last question.
Operator: Your last question today comes from the line of Woo Jin Ho with Bloomberg. Your line is open.
Woo Jin Ho: Oh, great. Thanks. I made the cut. Happy Halloween, folks. So, I think there was a mention of merchant silicon earlier in the Q&A. And one of your merchant silicon partners has actually moved up the stack toward service provider routing. I'm just curious if there's any intention of going after that piece if that chip is made available to you?
Anshul Sadana: Sure. Woo Jin, I believe you are referring to the recent announcement from Broadcom of their 25.6T Jericho-family chip.
Woo Jin Ho: Yeah, the Qumran3D.
Anshul Sadana: Qumran3D, exactly. So, it’s the same family, same features. And as you know, we’ve been a great partner of Broadcom for a long time, and we’ll continue to build new products. This is not a new entry, so to speak. We’ve been building these products that can be used as switches or routers for a while, and the bandwidth just doubled going to now 25.6T. You can expect some products from us in the future with those variants as well, but really nothing really changes. Just innovation continues and merchant silicon continues to succeed.
Jayshree Ullal: And the investments, Woo Jin, that we have made in our routing stack over the last 10 years, I want to say, have just gotten better and stronger. Powering the internet, powering the cloud, powering AI, these are hard problems. And they require thousands of engineers' worth of investment to build the right VXLAN, BGP routing, EVPN, et cetera. So, it's not just the chip, it's how we enable the chip to do these complicated routing algorithms.
Woo Jin Ho: Great. Thank you.
Liz Stine: Thanks, Woo Jin. Thank you. This concludes the Arista Networks third quarter 2023 earnings call. We have posted a presentation, which provides additional information on our results, which you can access on the Investors section of our website. As a reminder, Arista will be hosting our 2023 Cloud and AI Innovators Analyst Day on Thursday, November 9. If you are interested in attending virtually, you may register from the Investors section of our website. Thank you for joining us today, and thank you for your interest in Arista.
Operator: Thank you for joining. Ladies and gentlemen, this concludes today’s call. You may now disconnect.