You think about how we get them booked, but again, we then have to go off and map that out against supply chain confirmation of what we can do in our configurations. And then we talk about deployment schedules and the like. And I do want to call something out, because I don’t think I’ve done a good enough job on this in prior calls. We don’t think about our customers in a buy-sell relationship. That’s not our business. And I know it’s hard, because we used to be a memory module company only, and that industry is very much transactional. But the transformation we repeatedly talk about in the market is not just going from memory to AI infrastructure and HPC. It doesn’t stop there. It really is about how we think of our customer relationships more in terms of clients and engagements, because we’re with them for a while.
We’re not just selling something and disappearing. We design it, we install it, we manage it, and then we make sure it performs over time. So that relationship, again, is why I say we have clients and engagements; we don’t have transactional customers. Yes, we have customers, obviously, but they’re managed as a client and an engagement, and that’s where it shows up in the gross margin line when I compare us to some of the larger competitors we compete against, so to speak. And I’m very pleased with how that process is evolving and playing out. I’d also, Kevin, if it’s okay, like to take the opportunity: as we’re doing this, what’s really become apparent is that the need for what we offer is being validated more and more every day.
What I mean by that is, yes, there have been a lot of GPU sales over the last 12-odd months, but as people deploy them, the complexity is not lost on the customers. Where is that complexity? Well, in the design of a data center. How do I get power to the building, to the transformer, to the main systems? How do I design each part of the data center, from a cold aisle to a hot aisle? How are my racks designed? How are the GPUs managed? How do I maximize uptime and availability? Can I be proactive in detecting future failures that save me downtime? And then, in the future, how do I scale? So we’re in a much different consultative, advisory sale. And these executive engagements I’m talking about are reaffirming that there is a need for this type of trusted advisor relationship.
And I think we’re very well positioned in that. There couldn’t be a better precursor to AI infrastructure than HPC. When you think about HPC, it was really about helping people build large multiprocessor architectures inside of data centers, albeit not for AI at the time. But it evolved that way as we got closer to 2018, ’19, ’20. And there just aren’t that many companies out there with 25-plus years of deployment knowledge across the hardware, the software, and the future add-on services. And I think we’re doing a better job of articulating that value to these customers.
Kevin Cassidy: Okay. Great. Thank you.
Mark Adams: Thank you.
Operator: The next question comes from the line of Thomas O’Malley with Barclays. Thomas, your line is now open.
Unidentified Analyst: Hi. This is Scott on for Tom O’Malley. I wanted to touch on the services line. So it looks like services down ticked pretty meaningfully in the quarter. Should we think about this as sort of the new run rate level or do you expect that it ticks back up? Thank you.
Ken Rizvi: Yeah. So as we look at Q4, I would expect services to tick up. As we’ve highlighted before, within that basket of services, there are point-in-time services, design, implementation, and the like, and there are managed services. We had fewer point-in-time services here in Q3, and I would expect that number to tick back up in Q4. That was embedded in our guide.
Unidentified Analyst: Thank you. That’s helpful. And then one more, if you could touch again on CXL? Could you just give us an idea of when you see the market inflecting more meaningfully there and then the types of customers that you see interested, whether that’s AI or more general purpose?
Mark Adams: Why don’t I take the first part of that, and I’ll let Jack talk more about the productization and the revenue. If you talk to leading technology and engineering executives about the AI performance curve and AI infrastructure, by far at the top of the list are the latency and performance issues caused by the lack of memory bandwidth and availability to the GPU and CPU. And there’s an immense amount of capital going into early-stage companies, as well as some of the larger companies, investing in solutions that help solve this bandwidth issue. The analogy I like to use: you can share storage across a network today; you can’t do that with memory.
So eventually you’ll have those capabilities. I think it’s within CXL 3.0, out in the future, that you’ll be able to pool memory, but that’s not enough. The transport layer for memory is another issue people are trying to tackle, with an optical transport layer and CXL on top of it to maximize speed. High-bandwidth memory is not the only solution, and there will be others as it relates to how we can improve throughput, latency, and the overall system performance of the compute with more innovative hardware solutions like CXL and optical transport integrated into such a solution. But I’ll let Jack talk about the market dynamics around when we see that starting to be a meaningful contributor.