Applied Blockchain, Inc. (NASDAQ:APLD) Q3 2023 Earnings Call Transcript

Wes Cummins : Yeah, that’s correct. So there are a couple of things going on there, George, and it’s new – some of this is just from yesterday. From that site specifically, we’re beyond that portion. And also, on that site specifically, they are talking about the grid balancing, or demand response, benefits in the legislature now. And the setup of that site – it’s not an area where we take advantage of those programs in Texas to reach the price on energy that we’re contracting there. So, we don’t believe it has any impact on us.

George Sutton : Gotcha. Great. And then, on the use cases for the HPC, I want to make sure I fully understood when we talk machine learning versus what I was viewing as large language models. I just want to make sure – have we seen somewhat of a use-case change as we look to deploy with new customers?

Wes Cummins : No, no, the large language models fall into that machine learning category. So it’s beyond just training a model. When I say machine learning, it’s training the model – you can call it AI or machine learning – but the applications that are running now include what we call natural language processing.

George Sutton : Okay, perfect. Thank you.

Wes Cummins : Thanks George.

Operator: Thank you. Our next question comes from the line of John Todaro with Needham and Company. Please proceed with your question.

John Todaro: Thanks for taking my question, guys. Two questions here. First, on the Bitcoin mining piece: can you just remind us, at least for your major mining contracts, when those are up for renewal? And then, any expectations you have with the halving coming up in Q1 ’24, which should raise those mining costs, I should say?

Wes Cummins : Yeah, so the vast majority of what’s contracted is under five-year contracts that are effectively just starting as we energize Ellendale. In Jamestown, I don’t know off the top of my head the exact contract life remaining, but it’s probably in the two-and-a-half-year time frame. And that’s the majority of our contracts. And then, on the halving – yeah, the halving event – it’s pretty simple to think through that. I think we run a really efficient operation, and our current customers are very profitable at the current Bitcoin price and at prices significantly below the current Bitcoin price, when you think about just variable cost.

But effectively, your cost per coin will roughly double at the halving event. So that really depends on two things: the price of Bitcoin and the network hash rate. And, John, the thing about Bitcoin over its life is that there’s generally a self-correcting mechanism between network hash rate and price that allows miners to stay profitable. There will be periods of time where they won’t be. But at the end of the day, it’s the most efficient operators who will stay online post-halving. We’re running only S19 Pros, and the majority of what we’re turning on now, and what will be in the new facilities, are all XPs. So we’re running all the latest-model mining equipment, and we think that we and our customers are well positioned for that.
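To make that arithmetic concrete, here is a minimal sketch, using purely hypothetical fleet numbers (not figures from the call), of why variable cost per coin roughly doubles at the halving if the power price and share of network hash rate stay put:

```python
# Illustrative sketch only: fleet power cost and hash-rate share are
# hypothetical assumptions, not Applied Blockchain figures.
# The 2024 halving cuts the block reward from 6.25 to 3.125 BTC, so the same
# daily energy spend earns half as many coins.

power_cost_per_day = 10_000.0      # USD per day to run a fleet (assumed)
share_of_network_hashrate = 0.001  # fraction of total network hash rate (assumed)
blocks_per_day = 144               # ~one block every 10 minutes

def variable_cost_per_btc(block_reward_btc: float) -> float:
    btc_mined_per_day = blocks_per_day * block_reward_btc * share_of_network_hashrate
    return power_cost_per_day / btc_mined_per_day

pre = variable_cost_per_btc(6.25)    # ~$11.1k per BTC mined
post = variable_cost_per_btc(3.125)  # ~$22.2k per BTC mined
print(pre, post, post / pre)         # ratio is exactly 2.0
```

If the Bitcoin price doesn’t move and the network hash rate doesn’t fall, margins compress by that factor; in practice, as noted above, less efficient miners tend to drop off the network, which pulls difficulty back down.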

John Todaro: Got it. Thanks for that. And then, just the other question, on the HPC segment: obviously, there has been a lot of interest in the space recently. I would imagine that’s kind of increased the competition there. Could you just discuss that a little bit and some of the competitive landscape?

Wes Cummins : Yeah. I expect there to be plenty of competition and people entering the space. I expect it from the traditional data center providers, and I expect new entrants to come into the space. I think it’s a really large opportunity. And just to frame this opportunity – why I think we have it and what the opportunity set is – we’ve talked about these next-generation data centers a fair amount, but let me give you some comparisons of traditional data centers versus what we’re building and what we see in the future. A traditional data center is generally built for ultra-low latency – video streaming, all of those types of things. Traditional data centers build power to the rack, and typically for a full rack they deliver about 7.5 kilowatts.

And so generally their capability is somewhere around 10 to 15 kilowatts to a rack. When we load a rack full of boxes with eight A100 GPUs – which is kind of the gold standard right now for machine learning and AI – if you want that rack full, it requires about 40 kilowatts to power that rack. So versus traditional, you’re looking at almost a five-fold increase in the power needed, and that directly correlates to the amount of heat created. We’re talking to the largest players in the industry and spending a lot of time with them, and the view is that needs are going from 40 to 70 kilowatts per rack. So this power density is increasing significantly, and it’s really difficult to retrofit older data centers to do this.
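As a rough back-of-the-envelope check of those figures (only the per-rack numbers quoted above are used; nothing else is assumed):

```python
# Power-density comparison using the per-rack figures quoted on the call.
traditional_rack_kw = 7.5   # typical delivery to a full traditional rack
gpu_rack_kw = 40.0          # rack packed with 8x A100-class servers
future_rack_kw = 70.0       # where the largest players see density heading

print(gpu_rack_kw / traditional_rack_kw)     # ~5.3x a traditional rack today
print(future_rack_kw / traditional_rack_kw)  # ~9.3x if density reaches 70 kW
```

That first ratio is the “almost five-fold increase” referenced above, and it scales directly with the heat that has to be removed from the rack.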

And then you’re thinking about the amount of power that they need. I think you’re going to see a trend, specifically for machine learning and AI – because it’s a very unique load that looks a lot more like what we’re handling now – to move these data centers closer to the power source. You don’t need the ultra-low latency aspect for these workloads. So we’re positioned very well. We have large amounts of low-cost power, and we’re sitting mostly in very cold locations. In our data center, for example, the airflow is significantly higher than you would see in a traditional data center, just because we want to use air cooling to cut down on electricity usage and lower our cost significantly. A traditional data center might have maybe half a mile per hour of airflow through it; our new facility is designed for around eight miles per hour of airflow – just massive amounts of airflow for cooling.

The climate in North Dakota is absolutely perfect for this. We are sitting on a fiber grid. We have 100 Gig connectivity now, and we are going to 400 Gig at the end of the year. So we have really good fiber connectivity, which is also necessary – you just don’t need the ultra-low latency aspect. We’re sitting in a really good spot. Now, is it easy for everyone to go into this space? Just like we’ve seen in building out hosting capacity for Bitcoin mining, that was hard, and I think this is even harder. We’ve put a really good team in place. We’ve pulled some people out of a large company that builds data centers here in the US, and internationally as well; they were building data centers for some of the largest hyperscalers.

So we have people who are experienced with doing this. We’ve spent a lot of time on the specific design, and now we’re implementing it. So I do think we have a significant lead versus almost everyone out there. We’ve seen a few other smaller players that have been doing this for a few years, but I think you’re going to see some of the larger ones move toward it. Still, I do think we have a pretty good advantage, given where we are now, the sites we already have, the operations we already have, and the knowledge base we have – really, on the electrical engineering side – as far as being able to deal with this kind of power density already. But I do expect competition. I expect this to be an extraordinarily large industry, and there’s no way you have an industry like that without a lot of competition.