SMART Global Holdings, Inc. (NASDAQ:SGH) Q2 2024 Earnings Call Transcript April 9, 2024
SMART Global Holdings, Inc. beats earnings expectations. Reported EPS is $0.27, expectations were $0.25.
Operator: Good afternoon and thank you for joining the SMART Global Holdings Second Quarter Fiscal 2024 Earnings Call. My name is Kate and I will be the moderator for today’s call. At this time, all lines are in a listen-only mode and will be until the question-and-answer portion of the call. I would now like to turn the call over to Suzanne Schmidt, Investor Relations for SMART Global. Suzanne, you may proceed.
Suzanne Schmidt: Thank you, operator. Good afternoon and thank you for joining us on today’s earnings conference call and webcast to discuss SGH’s Second Quarter Fiscal 2024 Results. On the call today are Mark Adams, Chief Executive Officer; Jack Pacheco, Chief Operating Officer; and Ken Rizvi, Chief Financial Officer. You can find the accompanying slide presentation and press release for this call on the Investor Relations section of our website. We encourage you to go to the site throughout the quarter for the most current information on the company. I would also like to remind everyone to read the note on the use of forward-looking statements that is included in the press release and the earnings call presentation. Please note that during this conference call, the company will make projections and forward-looking statements, including, but not limited to, statements about the company’s growth trajectory and financial outlook.
Forward-looking statements are based on current beliefs and assumptions and are not guarantees of future performance and are subject to risks and uncertainties, including, without limitation, the risks and uncertainties reflected in the press release and the earnings call presentation filed today as well as in the company’s most recent annual and quarterly reports. The forward-looking statements are representative only as of the date they are made and except as required by applicable law, we assume no responsibility to publicly update or revise any forward-looking statements. We will also discuss both GAAP and non-GAAP financial measures. Non-GAAP measures should not be considered in isolation from, as a substitute for or superior to our GAAP results.
We encourage you to consider all measures when analyzing our performance. A reconciliation of the GAAP to non-GAAP measures is included in today’s press release and accompanying slide presentation. And with that, let me turn the call over to Mark Adams. Mark?
Mark Adams: Thank you, Suzanne, and thanks to all of you for joining us today for our Fiscal 2024 Second Quarter Earnings Call. We delivered solid financial results in the second quarter and continued to make great strides in our transformation into a provider of high-performance, high-availability solutions that enterprise customers need to deploy AI on-premise, at the edge and in the cloud. As one of the few players in the industry with decades-long experience in high-performance compute and specialty memory, SGH is uniquely positioned to help companies manage the complexity of AI implementation at scale. As a total solution provider, we offer our customers and partners innovative technology-agnostic hardware configurations, software that manages AI systems for maximum output and availability, and a professional services suite that enables our customers to achieve best-in-class performance and reliability.
Let me summarize our operational results for the quarter. Revenues totaled $285 million in line with the midpoint of our guidance range. Although non-GAAP gross margin was at the lower end of our guidance due to a higher portion of hardware mix, we achieved non-GAAP earnings per share of $0.27, which was above the midpoint of our guidance through better operating expense controls. We exited Q2 with a strong balance sheet. Cash and short-term investments totaled $466 million. Now let me start our business line review with the Intelligent Platform Solutions group or IPS. Our IPS team offers a robust solution set of industry-leading hardware, advanced cluster management software, and best-in-class professional services. This solution portfolio enables our design, build, deploy and manage solutions framework for HPC and AI applications on-premise, at the edge and in the cloud.
In Q2, IPS revenue came in at $141 million, up 19% from our prior quarter, representing 50% of total SGH revenue, thus making IPS the largest component of our overall business in Q2. Our vision is clear: partner with our customers and collaborate with them to build the future of AI. The market continues to see strong investment in the deployment of AI infrastructure solutions by hyperscalers and large-scale cloud service providers or CSPs. The first few months of 2024, however, have confirmed that AI is not just for early adopters anymore. We are seeing signs of AI adoption by larger enterprises in markets such as financial, oil and gas, defense, education and digital media as well as Tier 2 CSPs with projects ranging from proof-of-concepts to large-scale deployments.
Our engagements with existing and potential AI customers have noticeably picked up in volume over the last few months, reflecting this market dynamic. In our conversations with both current and targeted new clients, they have shared the challenges they are facing in deploying AI, trying to manage the complexity that arises as they integrate advanced compute, memory, networking, storage and cooling in large-scale data center rollouts. Our customers ultimately must have high-performance compute running workloads at scale in an environment that provides for maximum uptime and overall efficiency. As a total solution provider, we are ready to meet that challenge. We offer our customers a complete solution that combines innovative hardware design, software to manage AI infrastructure for maximum output and availability, and a suite of professional services, all designed to help them achieve best-in-class performance and reliability.
With our customer-first approach, we put our customers’ priorities at the heart of everything we do, ensuring that each solution we deliver is tailored to their specific requirements and ready to support their success. We believe that AI inferencing at the edge will also be a critical market opportunity because it brings intelligence closer to where the information is most valuable, closer to where decisions are being made. We have expanded our capabilities at the edge with our new next-generation fault-tolerant computing platform, the Stratus ztC Endurance Server. We are seeing strong demand for this platform, which enables our customers to run applications with targeted unplanned downtime estimated in minutes per year. We are developing an approach to enable our customers to implement AI at the edge with a high-performance, high-availability platform.
The ztC Endurance Server is another example of the investments we have made and continue to make to support the needs of our customers, whether on-premise, at the edge or in the cloud. Today, we are also announcing a new member of our management team. I am pleased to announce that Pete Manca has joined us as President of IPS. Pete brings a wealth of experience in building businesses that provide high-availability, high-performance solutions to enterprise customers. Prior to joining our team, Pete served as Senior Vice President and General Manager at Dell Technologies for five years, managing several large businesses, including Converged Solutions, OEM Solutions, and APEX, Dell’s end-to-end portfolio of cloud offerings, ranging from storage to high-performance computing to AI services and solutions.
Prior to Dell, Pete served as President and CEO of Egenera, a leading provider of wholesale cloud computing solutions, underscoring his broad experience and expertise. I am confident that Pete is the right leader to propel the team forward and make the most of the opportunities that are ahead. Pete will work with former IPS President, Dave Laurello, who is transitioning into an advisory role. Dave has been an invaluable partner in transforming IPS. With his guidance, IPS has become more effective and efficient across the board from go-to-market to engineering to manufacturing. He is a leader of high-integrity with an execution mind-set that we will miss. We wish Dave all the best. Turning now to memory, which operates under the SMART Modular brand name.
We provide customers with high-performance, high-reliability memory solutions for specialty markets such as supercomputing, networking and telecom, storage, data centers, industrial and other specialty applications. For Q2, revenue came in at $83 million, or 29% of total SGH sales. As expected, sales declined slightly from Q1 levels, primarily due to continued elevated inventory levels at a number of our large customers. We continue to see signs that the memory cycle is turning upwards. However, as mentioned on our last earnings call, near-term unit demand still remains challenging at some of our traditional enterprise customers. Nevertheless, we remain confident that business will rebound as we move into the second half of our fiscal ’24 and expect revenues to grow sequentially in the third quarter.
AI is also reshaping the memory market landscape as the need for higher density and greater bandwidth becomes increasingly critical to system performance required to handle the most advanced compute workloads. We are expanding our product portfolio to capitalize on the convergence of compute and memory and system-level solutions by leveraging Compute Express Link or CXL memory expansion and switching technology, which allows different parts of a computer memory system to communicate faster and more efficiently. We continue to make progress on CXL product development. We have successfully completed the design of our 8 DIMM DDR5 CXL Add-In Card and anticipate sampling this innovative product to our customers later this year. This high-density solution offers 512 gigabytes of memory, making the system faster and more capable of handling the complex tasks required by large-scale AI and high-performance computing workloads.
During the second quarter, we also shipped initial engineering samples of our 4 DIMM DDR5 CXL Add-In Card to a number of our enterprise customers. This product has a unique patented technology that keeps its footprint within a single-width PCIe slot. This design is exceptionally beneficial for 1U servers because it optimizes space for other PCIe devices, including accelerators and network interface cards. We expect that revenues will begin ramping from this product in early fiscal 2025. Finally, we introduced our Zefr ZDIMM Memory Modules with ultra-high reliability for demanding compute applications and our DDR5 SODIMM products continue to gain market momentum, bolstered by our exclusive I-Temp and Zefr testing offerings, which significantly enhance reliability.
Taken together, these advancements exemplify our ongoing commitment to industry-leading innovation, empowering us to address our customers’ needs. Now turning to Cree LED, which produces application-optimized LEDs for products in markets such as specialty lighting, video screens, outdoor, horticulture, and architectural lighting. In the second quarter of our fiscal 2024, LED Solutions revenue totaled $60 million, or 21% of total SGH sales. As anticipated, second quarter sales were lower sequentially, primarily due to seasonality. With the LED demand environment remaining relatively muted in the near term, we continue to manage our Cree LED operations prudently and are aligning our expenses with current business conditions. We are optimistic that demand trends will begin to improve and currently expect revenues to increase sequentially in the third quarter.
Our R&D team has remained diligently focused on driving the technology development needed for Cree to continue leading in high-value specialty applications. During this past quarter, we launched the XLamp XP-G4 High Intensity LEDs, which are optimized for indoor directional, aftermarket auto and portable applications. We also introduced an extension to our XLamp S product line, targeting the horticultural sector with products tailored for environments like greenhouses, where precision lighting can transform crop growth and yield. As a technology and brand leader with a strong intellectual property portfolio, the Cree LED team continues to lead the charge in lighting innovation. Given our strong R&D and IP portfolio combined with our capital-light outsource model, I am confident in Cree LED’s competitiveness and prospects for future success.
I’ll stop here and hand it over to Ken for a more detailed review of our Q2 financial performance and our guidance for next quarter. Ken?
Ken Rizvi: Thanks, Mark. I will focus my remarks on our non-GAAP results, which are reconciled to GAAP in our earnings release tables and in the investor materials on our website. Now let me turn to our second quarter results. Total SGH revenues were $285 million at the midpoint of our guidance and non-GAAP gross margin came in at 31.5% at the low end of our guidance, primarily driven by a higher mix of hardware revenues. Non-GAAP diluted earnings per share was $0.27 for the second quarter, which was above the midpoint of our guidance and helped by better operating expense controls. In the second quarter, our overall services revenue totaled $49 million, down from the $55 million in the year-ago quarter. Product revenues were $235 million.
Second quarter revenue by business unit was as follows. IPS at $141 million, Memory at $83 million and LED at $60 million. This translates into a sales mix of 50% IPS, 29% Memory and 21% LED. Non-GAAP gross margin for SGH in Q2 was 31.5%, down from 32.1% in the year-ago quarter, driven primarily by lower memory volumes that were partially offset by improved mix in IPS. Gross margin was down sequentially from 33.3% in the prior quarter, primarily due to a higher mix of hardware revenue. Non-GAAP operating expenses for the second quarter were $63.2 million, down from $64.6 million in the first quarter, primarily due to lower variable expenses and cost reduction actions. Operating expenses were also down from $68.7 million in the year-ago quarter.
Non-GAAP diluted earnings per share for the second quarter of 2024 was $0.27 per share compared to $0.24 per share last quarter and $0.87 per share in the year-ago quarter. Adjusted EBITDA for the second quarter of 2024 was $33 million, or 12% of sales, compared to $34 million, or 13% of sales in the last quarter, and $65 million, or 17% of sales in the year-ago quarter. Turning to balance sheet highlights. For working capital, our net accounts receivable totaled $170 million, slightly lower than the $171 million last quarter. Days sales outstanding came in at 41 days, down from 44 days last quarter, primarily due to the timing of invoicing and collections. Inventory totaled $173 million at the end of the second quarter, lower than the $208 million at the end of the prior quarter, and inventory turns were 6.8 times in the second quarter, up from 5.8 times in the prior quarter.
Consistent with past practice, net accounts receivable, days sales outstanding and inventory turnover are calculated on a gross sales and cost of goods sold basis, which were $375 million and $294 million, respectively, for the second quarter. And as a reminder, the difference between gross and net revenue is related to our logistics services, which are accounted for on an agent basis, meaning that we only recognize the net profit on logistics services as revenue. Cash and cash equivalents and short-term investments totaled $466 million at the end of the second quarter, down $88 million from the $553 million in the prior quarter. In the second quarter, we paid down $50 million for the Stratus earn-out and retired approximately $37 million of our term loan facility.
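As a quick illustrative check (my arithmetic, not part of the call), the working-capital metrics Ken cites can be reconciled from the gross figures he gives; the 91-day quarter length is my assumption:

```python
# Illustrative sketch reconciling the stated working-capital metrics.
# Assumes a 91-day fiscal quarter; all dollar figures in millions.

GROSS_SALES = 375     # Q2 gross sales
GROSS_COGS = 294      # Q2 gross cost of goods sold
RECEIVABLES = 170     # net accounts receivable at quarter end
INVENTORY = 173       # inventory at quarter end
DAYS_IN_QUARTER = 91  # assumed quarter length

# Days sales outstanding on a gross-sales basis
dso = RECEIVABLES / GROSS_SALES * DAYS_IN_QUARTER

# Inventory turns: annualized gross COGS over ending inventory
turns = GROSS_COGS * 4 / INVENTORY

print(round(dso))       # 41 days, matching the stated DSO
print(round(turns, 1))  # 6.8 times, matching the stated turns
```

Both rounded results line up with the 41 days and 6.8 turns reported above.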
Second quarter cash flows used in operating activities totaled $22 million compared to $60 million provided by operating activities in the prior quarter. Second quarter cash flows used for operating activities included the $29 million payment of contingent consideration as part of a total of $50 million cash payment for the Stratus earn-out. During the second quarter, we repurchased approximately 106,000 shares of our common stock using $1.9 million. Since our initial repurchase authorization in April of 2022, we have used a total of $72.3 million to repurchase 4.1 million shares through the end of our second quarter. As of our second quarter, we have a total of $78 million available for future repurchases under our authorizations. And to remind everyone, our capital allocation strategy remains the same.
First and foremost, we will continue to invest in our business as we see significant opportunities for further organic growth. Second, we will continue to evaluate acquisition opportunities in a disciplined manner, such as our most recent acquisition of Stratus. And third, the share repurchase authorizations provide us flexibility to return capital to our shareholders in an opportunistic and price-sensitive manner. And finally, we look to retire debt to keep our gross leverage at reasonable levels. We retired $37 million of our term loan in the second quarter, and subsequent to the quarter end we retired an additional $75 million of term loan. These payments bring down the principal by $112 million since the first quarter to $425 million outstanding.
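As an illustrative aside (my arithmetic, not Ken's), the term-loan figures reconcile as follows:

```python
# Illustrative sketch reconciling the term-loan paydown figures.
# All dollar figures in millions.

Q2_PAYDOWN = 37       # retired during the second quarter
POST_Q2_PAYDOWN = 75  # retired subsequent to quarter end
REMAINING = 425       # principal outstanding after both payments

total_paydown = Q2_PAYDOWN + POST_Q2_PAYDOWN
implied_q1_principal = REMAINING + total_paydown

print(total_paydown)         # 112, matching the stated reduction
print(implied_q1_principal)  # 537, the implied principal at the end of Q1
```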
For those of you tracking capital expenditures and depreciation, capital expenditures were $5.2 million in the second quarter and depreciation was $7.2 million. Turning to our third fiscal quarter 2024 guidance. We expect that revenues for the third quarter of 2024 will be approximately $300 million at the midpoint, plus or minus $25 million. Our guidance for the third quarter reflects the following. For IPS, we expect revenues to be flat to slightly up at the midpoint, with additional opportunities that may fall in the third or fourth quarter depending on the timing of deployments. For Memory, we expect revenues to grow in the high-single-digit range sequentially at the midpoint, and for LED, we currently expect revenues to be up in the high-single-digit range sequentially at the midpoint as well.
Our GAAP gross margin for the third quarter is expected to be approximately 29% at the midpoint, plus or minus 1.5%. Non-GAAP gross margin for the third quarter is expected to be approximately 32% at the midpoint, plus or minus 1.5%. Our non-GAAP operating expenses for the third quarter are expected to be approximately $66 million plus or minus $2 million and slightly up from the prior quarter, primarily due to variable expenses associated with the higher expected revenue. GAAP diluted earnings per share for the third quarter is expected to be approximately a $0.07 loss plus or minus $0.15. On a non-GAAP basis, excluding share-based compensation expense, intangible asset amortization expense, debt discount and other adjustments, we expect diluted earnings per share will be approximately $0.30, plus or minus $0.15.
Our GAAP diluted share count for the third quarter is expected to be approximately 52.6 million shares based on our current stock price, while our non-GAAP diluted share count is expected to be approximately 54.4 million shares. Cash capital expenditures for the third quarter are expected to be in the range of $4 million to $6 million. And as a reminder, we are utilizing a long-term projected non-GAAP tax rate of 28%, which reflects currently available information, including the sale of SMART Brazil, which was completed in the first quarter, as well as other factors and assumptions. While we expect to use this normalized non-GAAP tax rate through 2024, the long-term non-GAAP tax rate may be subject to changes for a variety of reasons, including the rapidly evolving global tax environment, significant changes in our geographic earnings mix, or changes to our strategy or business operations.
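As an illustrative sketch (not from the call), the midpoint plus-or-minus guidance above expands into explicit low/high ranges; the helper name is hypothetical:

```python
# Illustrative sketch expanding 'midpoint plus or minus spread' guidance
# into explicit (low, high) ranges. Revenue in millions of dollars,
# margins in percent, EPS in dollars per share.

def guidance_range(midpoint, spread):
    """Hypothetical helper: return (low, high) for midpoint +/- spread."""
    return (midpoint - spread, midpoint + spread)

revenue_q3 = guidance_range(300, 25)          # (275, 325)
non_gaap_gm_q3 = guidance_range(32.0, 1.5)    # roughly 30.5% to 33.5%
gaap_eps_q3 = guidance_range(-0.07, 0.15)     # roughly -$0.22 to $0.08
non_gaap_eps_q3 = guidance_range(0.30, 0.15)  # roughly $0.15 to $0.45

print(revenue_q3)  # (275, 325)
```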
Our forecast for the third quarter of 2024 is based on the current environment, which contemplates the current global macroeconomic headwinds and ongoing supply chain constraints, especially as it relates to our IPS business. This includes extended lead times for certain components that are incorporated into our overall solutions, impacting how quickly we can ramp existing and new customer projects. We continue to manage our operations in a prudent manner as we navigate a challenging environment while also investing in our long-term growth. Please refer to the non-GAAP Financial Information section and reconciliation of GAAP to non-GAAP measure table in our earnings release for further details. Now, let me turn it over to Mark for a few remarks prior to Q&A.
Mark Adams: Thanks, Ken. As we are still in the early innings of fully operational AI infrastructure being deployed at scale for most enterprise customers, we are seeing an accelerating need in the market for a trusted adviser to help with the challenges of deploying this new AI infrastructure. Our 25-plus years of HPC and memory expertise and deployment know-how position us to become a leader in this market by working to solve our customers’ most challenging AI infrastructure needs. As our transformation continues, I want to thank our global team members for their efforts this quarter. We feel we are well-positioned for the exciting market opportunities ahead. Operator, we are now ready for Q&A.
Operator: Absolutely. We will now begin the question-and-answer session. [Operator Instructions] The first question comes from the line of Quinn Bolton with Needham & Company. Quinn, your line is now open.
Q&A Session
Follow Penguin Solutions Inc. (NASDAQ:PENG)
Quinn Bolton: Hey, guys, congratulations on the nice results and outlook. I guess I want to start on the IPS business and Mark maybe just get your thoughts kind of demand across the customer base between large CSPs, Tier 2 CSPs and enterprises where you may be seeing some of the greatest demand for new deployments as you look out into the second half of calendar ’24 and then I’ve got a couple of follow-ups.
Mark Adams: No problem. Thanks, Quinn, for the question by the way. Yeah, so our view of the market is based on our executive engagements with the three segments you talked about. Obviously, the early investors in the market were the large hyperscalers, of which one is a big customer of ours. And then you have the two other segments you mentioned, large enterprises that have an infrastructure requirement in place for AI to enable their future success. And then you have these Tier 2, what we call Tier 2 CSPs, which are companies that have the infrastructure themselves to offer services that enable AI implementations across a number of different verticals, so to speak.
So those three segments are how we look at the market. And I would say that the early investors, as I mentioned, were hyperscalers. However, what we’ve seen a major uptick in is twofold. Large enterprises are evaluating their AI strategy, and the question within that strategy really is: do they do it on-prem in their environment, do they do it in a co-location model where they use an outsourced data center provider for the space and power and have someone like Penguin come in and help them deploy, or do they actually partner with someone in some type of JV structure or co-investment structure for a future build? And we’re seeing all of those different models, but the level of demand signals from those types of customers on the enterprise side is very high.
And then in addition to that, the hyperscalers are largely building to their own requirements. The Tier 2 CSPs are largely investing in infrastructure for enterprises who don’t have access to their own infrastructure, mostly around data centers. And one dynamic I just want to reinforce, and if you talk to anyone in the power and data center markets, there’s a massive shortage today in available data centers for AI. And the massive shortage really is around the power requirements to run AI. And so today, there’s a big pent-up demand for new data center capabilities to drive AI with gigawatt, megawatt type performance that allows you to install these type of large-scale cluster AI platforms. That shortfall is causing a lot of activity as far as who can help people enable these systems as quickly as possible.
And that’s where we’re seeing a lot of the engagements that we have, is where customers are saying, hey, we need this space and we need to be up and running by x. The customers in the enterprise world don’t typically have the infrastructure that one of the big four hyperscalers has in terms of deployment, and they really don’t even know what they don’t know in terms of performance uptime, reliability, design uptime and optimization. These are all things that are critical. So the number of engagements we’ve had, the designs and the proposals leading to more clarity around bookings and even future deliveries, it’s all heading in the right direction.
Quinn Bolton: Excellent. Second question I had you mentioned sort of AI at the edge and seemed to kind of highlight your Stratus offering. I’m wondering, are there opportunities for AI at the edge also in the traditional sort of Penguin business, where you may be setting up larger clusters to run LLMs or other AI infrastructure? So it’s not necessarily just a high-reliability deployment, but you may also just say, hey, we’ve got to get the inferencing as close to the end user as possible. And you could see some pretty large deployments for inferencing equipment, rather than say just training, which I think some of your traditional deployments may have been more focused on.
Mark Adams: I think that’s right. So you really — you kind of — your question really embedded the answer, which is we are now developing solutions and investing in development for future integration into our solution roadmap for providing just that. As you mentioned, if you look at kind of the prior spend in AI, it was more training versus inferencing. And training is never going to go away because when you think about large language models, there’s going to be a bunch of large language models that people use to then extract their own version of what they need from that large language model. So the training piece will go down, I think, as a percentage of the whole; inferencing will obviously take on a greater share in the future.
And with that, as you noted, the more speed and information availability there is closer to the decision-making, the more it could make or break someone’s competitiveness. And so the solutions we’re thinking about are how do we not just, as you said, leverage the platform, but how do we develop and offer our customer base the highest-performing, most reliable systems where there might not be IT resources in that environment? And that’s a key part of our strategy. But you could see a day where we might be designing small cluster platforms or single cluster platforms that, from a cost, reliability and performance perspective, can operate in these end markets, something like oil and gas, maybe on a rig, maybe at a retail store, maybe at an ATM, maybe at a healthcare regional office, what have you.
These are the ways we’re thinking about AI. Clearly, AI at the edge has a bright future. It’s just so early. I mean, AI in general is early. So we’re investing for tomorrow as we think about that solution platform we’re developing.