Applied Digital Corporation (NASDAQ:APLD) Q1 2024 Earnings Call Transcript

Wes Cummins: So, Mike, I would say we’re right in the middle of the two scenarios that you described. The funnel has gotten smaller, but we’re not right at the end yet. Does that make sense?

Michael Grondahl: Got it. Yes. Just trying to understand. And then on the prepayments, you mentioned $39.5 million in the August quarter, and then, I think, a $15 million prepayment and a $23 million prepayment you expect this week. Can you say whether the $15 million and the $23 million are from customers three and four? Or do those relate back to customers one and two?

Wes Cummins: It’s both. It’s both for those prepayments: the ones that we talked about that we’ve received already and the one we expect to receive this week.

Michael Grondahl: Got it. Got it. And then lastly, the 1,024 GPUs in August that you put to work, did they all go to customer one?

Wes Cummins: So those came in September, Mike. And they are for customer one.

Michael Grondahl: Okay. Thanks, guys.

Operator: Our next question is from the line of Kevin Dede with H.C. Wainwright. Please proceed with your questions.

Kevin Dede: Hi, Wes. Thanks for taking the question. Maybe you could just help me understand a little better how you’re thinking about AI cloud versus AI hosting, and how that might figure into your calculus, your construction calculus.

Wes Cummins: In what way, Kevin?

Kevin Dede: Well, I guess what I’m wondering is when you go look for your anchor tenants, are you looking at them purely from a cloud customer perspective? Or are you looking at some perhaps from a host perspective, where they’re bringing their own GPUs?

Wes Cummins: Yes, yes. So the anchor tenants are absolutely a hosting business for us, where they will bring their own equipment. It’s just a data center hosting business for us. When you think about anchor tenants for these data centers, we’ve talked about the idea that 70% goes to the colocation hosting style business and then we keep 30% for our own cloud capacity.
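
[Editor's note: To make the split concrete, here is a minimal sketch in Python. Only the 70/30 ratio comes from the call; the function name and the example megawatt figure are illustrative assumptions.]

```python
# A minimal, illustrative sketch of the 70/30 capacity split described
# above. Only the 70/30 ratio comes from the call; the function name and
# the example megawatt figure are assumptions for illustration.

def split_capacity(total_mw: float, hosting_share: float = 0.70) -> dict:
    """Split facility capacity between colocation hosting and own cloud."""
    hosting_mw = total_mw * hosting_share
    return {"colocation_hosting_mw": hosting_mw,
            "own_cloud_mw": total_mw - hosting_mw}

# Hypothetical 100 MW facility:
print(split_capacity(100.0))
# {'colocation_hosting_mw': 70.0, 'own_cloud_mw': 30.0}
```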

Kevin Dede: Then you said you’re comfortable with the colocation that you need to support the 26,000 to 34,000 GPUs you have coming in. How do you figure on moving them from that colocation to your own facilities once those are ready?

Wes Cummins: So we won’t ever move those installations. The way that I think about it from a cloud service perspective is that a lot of that colocation is in what I would call traditional cloud regions. As we ramp up capacity in North Dakota, it’s really being built for very large training clusters. So I think about the training portion of the business, which is being deployed in smaller training clusters in these cloud regions today, moving over to the infrastructure that gets put in place in North Dakota, and then the smaller clusters that we’re building out now being used for inferencing. The way I think this market splits is that there will be training and batch inference, and then some inferencing, done in large facilities like the one we’re building in North Dakota.

And then a lot of the inference portion of the market will sit in what I would call traditional cloud regions. So it works well for our cloud business over time to have that type of architecture: spread into more cloud regions for the inferencing portion of the business, while a lot of the training moves into North Dakota.

Kevin Dede: Okay. The redesign that you did in September, Wes, for your new Ellendale facility, did you have to rethink latency? Or did you have to be more concerned about power backup on those? Or do you still think you can operate under the conditions that you built for in Jamestown?

Wes Cummins: So it’s not about latency or, necessarily, power backup. It’s a redesign specifically for density. It’s designed so we can basically take a network core, go up through multiple floors, and put all of the GPUs around it. The design for the new North Dakota facility is really built around what I call the magic number of 30 meters: how far a rack can be from the network core and still be in the same cluster, on the same spine for networking. You can technically go out to 50 meters from the network core, Kevin, but you have to use single-mode transceivers and optics instead of multimode, so it gets more expensive and a little more difficult. So it’s really designed around how many racks and how many servers we can get within 30 meters of the network core, and that’s why it goes through multiple levels of the building instead of a single level. That’s the primary piece of the redesign.
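
[Editor's note: To illustrate the distance rule Wes describes, a minimal sketch follows. The 30-meter and 50-meter thresholds are from his remarks; the function names, floor height, and cable-run arithmetic are assumptions.]

```python
# A minimal, illustrative sketch of the rack-placement rule described
# above: within ~30 m of the network core a rack can use multimode
# optics; between 30 m and 50 m it needs costlier single-mode
# transceivers; beyond 50 m it falls off the same spine. The function
# names, floor height, and cable-run math are assumptions.

MULTIMODE_LIMIT_M = 30.0
SINGLE_MODE_LIMIT_M = 50.0

def optics_for_distance(distance_m: float) -> str:
    """Pick the optics class for a given cable run to the network core."""
    if distance_m <= MULTIMODE_LIMIT_M:
        return "multimode"
    if distance_m <= SINGLE_MODE_LIMIT_M:
        return "single-mode (more expensive)"
    return "out of range for this spine"

def cable_run_m(horizontal_m: float, floors_away: int,
                floor_height_m: float = 5.0) -> float:
    """Approximate run length: horizontal distance plus vertical riser."""
    return horizontal_m + floors_away * floor_height_m

print(optics_for_distance(cable_run_m(20.0, floors_away=1)))  # multimode (25 m)
print(optics_for_distance(cable_run_m(28.0, floors_away=3)))  # single-mode (43 m)
```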

Kevin Dede: And what are the heat implications, though, if you’ve got multiple racks stacked, right, given heat’s tendency to rise?

Wes Cummins: Yes. So this is a liquid-cooled facility. That’s the other part of the redesign; it was air-cooled before. You can effectively do air cooling, in our opinion, up to roughly 50 kW per rack. But as you start to go above that, you really need to move to a liquid-cooled solution. So the new facility is designed for liquid cooling, and it will go to 150 kW per rack. It still has the floor space to do it at 45 kW, which is what we’re doing in Jamestown, but this will be built for liquid cooling.
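
[Editor's note: A minimal sketch of the cooling rule of thumb outlined above. The roughly 50 kW air-cooling ceiling and 150 kW design target are from the call; the function shape and return strings are assumptions.]

```python
# A minimal, illustrative sketch of the cooling rule of thumb from the
# call: air cooling up to roughly 50 kW per rack, liquid cooling above
# that, with 150 kW as the stated design target for the new facility.
# Function name and return strings are assumptions.

def cooling_for_rack(kw_per_rack: float, design_max_kw: float = 150.0) -> str:
    if kw_per_rack <= 50.0:
        return "air cooling is viable"
    if kw_per_rack <= design_max_kw:
        return "liquid cooling required"
    return "exceeds the facility's design envelope"

print(cooling_for_rack(45.0))   # Jamestown-style density: air cooling is viable
print(cooling_for_rack(150.0))  # new facility target: liquid cooling required
```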

Kevin Dede: Okay. Thanks, Wes. Appreciate it. Appreciate the detail.

Wes Cummins: Sure.

Operator: [Operator Instructions] Thank you. At this time, this concludes our question-and-answer session. I’d now like to turn the call back over to Wes Cummins.

Wes Cummins: Thank you, operator, and thanks, everyone, for joining our call. I look forward to speaking with you on Thursday at our Investor Day, which will be held in Midtown Manhattan. And again, thanks to all of our employees for the efforts they put in the last quarter and this quarter to date. I look forward to speaking with you on our next quarterly call.

Operator: Thank you. Thank you for joining us today for Applied Digital’s conference call. You may now disconnect.
