Nebius Group N.V. (NASDAQ:NBIS) Q4 2024 Earnings Call Transcript February 20, 2025
Nebius Group N.V. beats earnings expectations. Reported EPS is $-0.34567, expectations were $-0.39.
Neil Doshi: Hello, and welcome to Nebius Group’s Fourth Quarter and Full Year 2024 Earnings Conference Call. My name is Neil Doshi, Head of Investor Relations. Joining me today to discuss our results are Arkady Volozh, Founder and CEO, and the rest of the Nebius management team. Before we get started, I would like to take this opportunity to remind you that our remarks today will include forward-looking statements. Actual results may differ materially from those contemplated by these forward-looking statements. Factors that could cause these results to differ materially are set forth in today’s earnings press release and in our quarterly report on Form 6-K filed with the SEC. Any forward-looking statements that we make on the call are based on assumptions as of today, and we undertake no obligation to update these statements as a result of new information or future events.
During this call, we will present both GAAP and certain non-GAAP financial measures. A reconciliation of GAAP to non-GAAP measures is included in today’s earnings press release. The earnings press release is available on our website at group.nebius.com/investor-hub. And now, I’d like to turn the call over to Arkady.
Arkady Volozh: Thank you, Neil, and welcome everyone to our fourth quarter earnings call. As we said on our last call, our aim is to build one of the world’s largest AI infrastructure companies. We believe that we are well positioned to do this because we have a proven track record of building and running efficient data centers with stable power, delivering GPU-based AI compute infrastructure and cloud services, and offering a wide range of value-added services to businesses that are adopting GenAI. Since we publicly launched Nebius in July of last year, just seven months ago, we have been extremely focused on putting in place the foundation to support our future growth in ‘25 and beyond. I’d like to share what we accomplished in the fourth quarter alone.
First, we resumed trading as a public company in October, becoming the first publicly traded AI-specialized cloud company. This came much faster than expected, as we were still building our corporate and business functions. We also raised $700 million in capital in December in an oversubscribed offering, a deal that saw the likes of NVIDIA, Accel, Orbis, and others enter our capital structure. On the infrastructure side, we successfully expanded our data center footprint and our GPU deployments, and we are building the foundation that will enable us to aggressively scale up this year in the US and Europe. We also launched our new AI cloud platform and migrated all of our customers to it in Q4. And we launched our Inference-as-a-Service platform called AI Studio.
And we made great progress in building our global sales and marketing team, with a particular focus on the US market. In that context, I’m very pleased with everything we were able to accomplish in Q4. Our sales function is now up and running, and we are seeing the results. More clients are coming onto the platform, and our more diversified customer base is already contributing to strong run-rate revenue growth. Based on the contracts already in place, March annualized run-rate revenue will be at least $220 million, and we have more potential contracts in the pipeline. Given this momentum, as well as the anticipated impact of the additional data center capacity that we’re building and the next-generation Blackwell GPUs coming online later this year, our projected December 2025 annualized run-rate revenue of $750 million to $1 billion is well within reach.
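For anyone modeling the figures quoted here: annualized run-rate revenue (ARR) is conventionally the most recent month's revenue multiplied by 12. A minimal sketch of that arithmetic, assuming Nebius uses the standard monthly-times-12 convention (the exact convention is not spelled out on the call):

```python
def annualized_run_rate(monthly_revenue: float) -> float:
    """Annualized run-rate revenue: latest month's revenue scaled to a year."""
    return monthly_revenue * 12.0

def implied_monthly_revenue(arr: float) -> float:
    """Invert the metric: the monthly revenue a given ARR implies."""
    return arr / 12.0

# The call cites a March ARR of at least $220M; under this convention
# that implies monthly revenue of roughly $18.3M.
march_monthly = implied_monthly_revenue(220e6)

# The December 2025 target of $750M to $1B ARR likewise implies
# roughly $62.5M to $83.3M of monthly revenue exiting the year.
dec_monthly_low = implied_monthly_revenue(750e6)
dec_monthly_high = implied_monthly_revenue(1e9)
```

Note that ARR in this sense is a point-in-time extrapolation of the latest month, not a forecast of twelve months of actual revenue.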
Q&A Session
Looking ahead to 2025, we have basically three strategic focus areas. The first is our unique full-stack AI technology. In addition to our European data centers in Finland and France, we started deploying GPU clusters in Iceland and in Kansas City, our first capacity in the US. We will also soon be announcing our first US data center that will be built to our own design. And we will continue to enhance and expand our AI cloud and Inference-as-a-Service platforms. The second area of focus is the capital to invest in our future growth. At the end of last year, we had $2.4 billion in cash, including the $700 million raised in Q4 from high-quality investors. And being a public company also gives us access to a wide variety of efficient financing options.
And finally, another focus point is our corporate structure. Here we have been particularly focused on our sales and marketing and customer support teams. We made great hires from within our industry in the last quarter, and we are already seeing positive results in terms of run-rate revenue growth as we start 2025. In addition to our core infrastructure business, our business units are doing very well. Avride, our autonomous technology platform business, signed a contract with Uber as one of its autonomous technology partners in the US, along with Waymo and Zoox. Uber Eats has already deployed Avride robots for food delivery in Austin, Dallas, and Jersey City. Avride also signed a contract with Grubhub, which now uses Avride robots for food deliveries on college campuses across the United States, starting with Ohio State University.
In addition, Avride’s robots have recently received certification in Japan, and the team will start to explore opportunities there as well. Toloka, our data training platform for GenAI, grew full-year revenue by 140% and diversified its customer base by adding several of the leading AI labs to its client portfolio. It has also completed its transition to a new platform tuned for complex GenAI tasks such as red teaming for AI agents, evaluation of reasoning models, and scalable training by coding and math experts. TripleTen, our education technology business, doubled its new student additions year-over-year in 2024 and maintained its position as one of the leading IT bootcamps in the US by student feedback. Finally, as a reminder, we still own a 28% stake in ClickHouse, which we believe is a significant potential source of value, although it is not part of our consolidated results.
Well, all in all, 2024 was just the starting point for Nebius Group, and I’m excited to enter 2025 with significant momentum and big ambitions to scale and grow our business. And now, I’ll turn it over to Tom for questions.
Tom Blackwell: Thank you very much, Arkady. And thanks to those of you who’ve already sent through your questions. We’ll try to cover as much ground as we can on this call. The first question is about capacity expansion plans, so I’ll come to Andrey Korolenko for this first one. Andrey, can you discuss a little bit more the broader plans for capacity expansion over 2025?
Andrey Korolenko: Sure, Tom. Thank you, and hi, everyone. Before getting into 2025, let me quickly recap what we did last year. We announced a tripling of the capacity of our data center in Finland, and the expansion is going well: construction is on schedule, with the first part of the capacity to be completed in Q3 and the remaining capacity coming online around year-end. We also added a data center facility in France, which is already up and running. And we recently announced our first US cluster in Kansas City, which is now in the deployment phase and should be up and running by the end of this quarter. For 2025, we are already well underway in scaling up further. We can announce today that we are deploying a new H200 cluster in Iceland, which we expect to be available for customers in March.
With the additions of Iceland and Kansas City, our total current capacity comes to around 38,000 GPUs, most of which are H200s. I’m also pleased to announce that we just signed a significant new build-to-suit facility in the US, specifically equipped for Blackwell deployments, and we will update on the details soon. Our guidance calls for 100 megawatts in operation by the end of this year, and we are on track to deliver that. I would also like to highlight that all of these new facilities are scalable to a total of more than 300 megawatts as and when we are ready to do so. And we are working on even further expansion, focusing on greenfield sites in the US and Europe that would increase our capacity multiple times over. That’s it, Tom.
Tom Blackwell: Thank you very much, Andrey. Arkady, I’ll come to you on the next question, as it relates to you. In some of your recent public appearances, you talked about potential growth in capacity that went beyond the stated guidance. Can you give us an update on how you’re thinking about capacity expansion generally?
Arkady Volozh: Well, I think what you’re referring to is my comments at a recent conference, where we were having a more general discussion about the overall market opportunity, which we believe is huge. During that conversation, I talked about the potential for building a gigawatt or more of capacity if we see that the demand is there. And I still believe that having a team like ours, with decades of experience building hundreds of megawatts of data centers and developing specific products, gives us an advantage. As Andrey said, we already have over 300 megawatts secured, and we know how to scale well beyond that.
Tom Blackwell: Thanks, Arkady. And I’ll stick with you. In the press release, you reiterated the 2025 ARR guidance. The question is, how are you thinking about that, and what gives you the confidence that you’ll be able to achieve it?
Arkady Volozh: Well, there are a number of factors that give us confidence in our full-year ARR guidance. First, we’re significantly scaling up our data center capacity this year, and we will have more than enough to support this ARR number. Second, we have not yet deployed all of our H200 GPUs, and we have Blackwells reserved and coming online later this year; these may well be the main source of our income. And finally, we entered 2025, as we said, with great momentum. As I said earlier, even with existing capacity and based on contracts already in place, our March ARR will be at least $220 million, and we have more potential deals to come soon. So we feel very good that we are well on track to hit the ARR guidance through the end of 2025. And in reality, we believe the opportunity to scale could be even much bigger.
Tom Blackwell: Thank you, Arkady. There’s a question about revenue in Q4. Roman, I wonder if I can come to you to explain a little bit how we saw the revenue dynamics in the last quarter.
Roman Chernin: Yeah, thank you, Tom. Talking about Q4 revenue and the annualized revenue run rate as of the end of Q4, I believe these were mostly timing issues that were very specific to Q4, and we now feel that we are very much on track. Digging into the details, what we observed is that deal lead times with customers were longer than in previous periods of 2024, as customers became much more selective and wanted to do more in-depth proof-of-concept testing. But we believe this was industry-wide. In addition, we had a couple of large customers that completed their engagements with us in Q4. This was anticipated, but because of the longer lead times, we were not able to replace them before the end of the year.
We were also launching our new AI cloud platform and put a lot of effort into moving our entire customer base to the new version, which took some resources to support the migration. We are now done with that and fully focused on scaling. It’s also worth mentioning that during this period we were very busy building out our sales and marketing teams; essentially, we had to do it from scratch. We were pleased with our efforts, but many of these hires came late in the quarter, and we are now starting to see the early results in terms of new contracts. But to reiterate, as Arkady said, this was more a timing issue, and we are back on track in 2025.
Tom Blackwell: Okay, thank you very much, Roman. And actually, I’ll stay with you, since this picks up on a topic you briefly addressed around the sales function. Can you give a little more color on the ramp-up of the sales function, and when do you think you’ll start to see that flow into additional deals and revenue?
Roman Chernin: Sure. I think we made very good progress on the hiring front, and we’re starting to see the benefits, with new customer additions and a ramp-up in revenue this quarter, as Arkady mentioned. Generally speaking, we were very focused on building our sales, marketing, and customer success functions in the US because, as you can imagine, a lot of our customers come from the US. And even though we brought on board pretty strong people, it normally takes six months or more for new sales hires to become fully productive. So I believe we are moving in the right direction and we already see some results, but the growth opportunity ahead of us is huge, and we’ll continue to develop our sales, marketing, and other customer-related functions accordingly.
Tom Blackwell: Thanks very much, Roman. And actually, one more for you, which is around the customers. Can you give some color in terms of the types of customers that we’re working with and attracting, and also if there’s anything you can comment on in terms of length of contracts that we’re seeing?
Roman Chernin: Yeah. In terms of customers, I want to emphasize that we are building a very flexible platform that lets us attract and work with a wide variety of customers, from small ones to large enterprises. Many of our current customers are, unsurprisingly, GenAI-native startups and tech-native companies. These kinds of customers appreciate the flexibility and scalability of the platform and the fact that we can spin up capacity very quickly and provide services in a flexible mode. We also see that the mix of contracts is pretty diverse. We see some short-term, on-demand contracts, which we actually benefit from, because we believe we’re one of the few players in the market who can really serve cloud-type workloads. But as the Blackwells come on stream, we anticipate moving to more of a mix of short-term and long-term contracts, since at the start of the Blackwell generation we expect to sign longer-term contracts.
Tom Blackwell: Okay, that’s great. And just a reminder to all the investors on the line: you can send questions through the chat function. Okay, so we have a question about the competitive environment. Daniel, I’ll come to you on this one. Basically, can you help us think about what the competitive environment looks like now? Roman made a reference to longer deal times. Is that because you see the market becoming more competitive? And generally, how do you see the market evolving going forward?
Unidentified Company Representative: Yeah, thanks, Tom. Roman did touch on this, but I’ll see if I can add a little more color. In terms of market developments, the longer lead times for deals are less a function of increased competition in the market and more a reflection of the increasing maturity of our customers, coupled with the larger scale of the opportunities that we’re seeing now versus six months ago. That’s the picture on the customer side. We’re now seeing longer proofs of concept with new customers, as Roman mentioned. And so as we build out our AI-specific cloud platform and its functionality, we expect this trend to work in our favor, as we’re able to demonstrate not only our price advantage but also the quality and flexibility of our platform.
And then finally, on the question about scale: we’re increasingly working with customers looking for clusters in the multi-thousand-GPU range. Logically, these are more significant investment decisions, and so they take a little longer from a sales-cycle perspective.
Tom Blackwell: Okay, that’s great. Thank you very much, Daniel. There’s a question that’s come through about DeepSeek. Obviously, DeepSeek took the market a little bit by storm a couple of weeks ago. Basically, the question is, what do we think about it, and what impact, if any, have we seen on our business? Arkady, maybe I can ask you to give some thoughts on that one.
Arkady Volozh: Well, DeepSeek just highlights how dynamic this market is. It was a great example demonstrating that our strategy of deploying flexible and extendable AI-focused cloud infrastructure is working, because we were actually able to meet the demands of customers. At the infrastructure level, we saw incremental demand for several thousand NVIDIA H200 chips, because that was the best processor to run DeepSeek inference. So we had a spike in demand for those chips at the end of January, and we were able to meet that unexpected demand. We’re pretty sure there will be more DeepSeeks on the way. The industry is very young, and these kinds of developments, like DeepSeek specifically, just lowered the bar for building really good high-end models, which will actually help accelerate the growth of the whole industry.
Tom Blackwell: Thank you for that, Arkady. Next, we’ve got a question around the full stack. The point is that we’ve talked a lot about how we see that as an interesting advantage against our peers in the space. The question is, can you give us a bit more color on how we’re thinking about full stack, why we see it as an advantage, and what type of customers are utilizing the software layer at the top end of the stack in particular. Daniel, maybe I can come to you on this one.
Unidentified Company Representative: Yeah. Thanks, Tom. I do think that when we consider our direct competitors in this space, and really the entire landscape, having a full-stack approach is the thing that sets us apart. But it’s important to understand what that means. For us, it’s really four main components. The first is our ability to leverage our experience in building and operating data centers, specifically for the power-intensive workloads that we see across the AI landscape. We are experts at building highly power-efficient data centers, with low PUE, or power usage effectiveness, and this lays the foundation of our cost advantage among our neocloud peers. So that’s a really important first component. With that, we couple our expertise in developing our own hardware, from servers and racks to motherboards and cooling.
This gives us a massive advantage in terms of our ability to provide the most cost-effective solution on the market. The third component is the AI-specific cloud platform that we’ve built from the ground up, in a way that is targeted and bespoke to AI workloads. This is what helps us deliver the flexibility and reliability that our customers need, especially as they look to providers to help them navigate the sometimes choppy waters of deploying AI infrastructure. And the last component is the value-added services. These help us extend the economic life of our GPUs and deliver value well above a pure bare-metal offering. In turn, this gives us an enhanced return on invested capital and creates new revenue streams for the company that enable us to command software-like margins.
So we’re very excited about all of the innovation that Nebius is bringing to market, from the top to the bottom of the stack. Going forward, we think pure bare-metal offerings will become more commoditized, and it’s our investments now across the full stack, especially at the top end, that are going to pay the biggest dividends in the mid to long term.
Tom Blackwell: Okay, that’s great, Daniel. Thank you very much. Okay, there’s a question around hardware deployment plans for this year. Andrey, maybe I can come to you on this one. Can you give a general update, and more specifically, when do you expect to be able to deploy the GB200s and B200s from the next Blackwell series?
Andrey Korolenko: Sure, Tom. As of now, we are continuing with the deployments of H200s, and the last deployed H200s will be available to customers in late March or early April. Right after that, we will be switching to the Blackwell generation, which will include B200s, GB200s, and GB300s later in the year. B200s should be available in Q2, GB200s should be available on the platform in Q3, and after that, as I said earlier, we plan to deploy GB300s later this year.
Tom Blackwell: Very good. We’re all waiting for the Blackwells to come on stream. Thank you, Andrey. For the next question, Neil, perhaps I can come to you. This is really about modeling the business: how should we be thinking about the cadence of revenue run rate and revenue over the year?
Neil Doshi: Yeah, Tom. We really are in the early days of building our business, and there are many variables, including how quickly we can build data centers and the timing of GPU delivery, deployment, and availability. As Andrey mentioned, we have more H200s coming online in Iceland and Kansas City by April, and then we’ll have Blackwell GPU deployments coming on after that, so we’re excited to see those Blackwells come online. And as Roman also said, our sales team is ramping up, but it takes about six months for that team to get fully productive. So, putting all these elements together, we expect that most of our ARR and revenue will come in the second half of the year and will probably be weighted more toward the end of the year.
Tom Blackwell: Okay. Thank you, Neil. The next question is about investment plans for 2026 and beyond, and specifically how we’re thinking about raising capital to support those future expansion plans. I’m happy to take that one. Right now, the full management focus is on executing the plans for 2025. As I reiterated in the press release today and as Arkady mentioned, we’re targeting to exit the year with an ARR of $750 million to $1 billion, and based on the very positive dynamics we’ve been seeing coming into 2025, we see that target as very much within reach. At the same time, we’re looking at the roadmap for the next generation of GPUs, the B300s, GB300s, and even further improvements.
And I think we’re really still at the stage of assessing the scope of addressable demand beyond ’25, so it’s too early to comment specifically and publicly. But obviously, as Arkady mentioned, we believe the opportunity here is very significant. Generally speaking, our aim going forward is to move as fast and as aggressively as we can to grow the business and to maximize value for our shareholders. This will clearly require significant capital beyond what we currently have, but we’re very confident in our ability to continue funding this growth. We’ve seen very strong interest and demand from different sides, and we’ll be opportunistic in terms of when and how we raise.
So, we’ve got another question that comes back to the next-generation Blackwell series. Roman, maybe I can come to you on this one. We launched presales of GB200s and B200s right at the very end of 2024. Can you update us on the progress there and the feedback you’re getting from the market around the next generation?
Roman Chernin: Yeah, I can comment on this. First, we should mention that we’re still working through the initial customer demand for Blackwells, which is obviously a function of delivery and deployment timelines. But I can already say that at this stage demand is positive, and we expect to complete pre-sales for a proportion of the initial deliveries. We’re now discussing a variety of reserved contracts, with lengths from one year and up, and we have different levels of pre-commitments for multiple thousands of GPUs ahead of their delivery. So, at this point, we see positive dynamics, supported by our ongoing efforts in building out sales and marketing as well as solid demand in the market overall. And we’ll update you on how the Blackwell rollout is going on the next calls.
Tom Blackwell: Okay. Thank you very much, Roman. One moment. The next question is around return on invested capital. The question is, how should the market be thinking about CapEx investments, and what general return do we expect to be able to receive on those investments? Neil, perhaps I can come to you on that question.
Neil Doshi: Yeah, we seem to get this question quite a bit. If we think about the Hopper generation of GPUs, we were really excited about it, especially with the H200s coming online, and based on some of our calculations, we believe the payback period is somewhere in the 2.5 to 3 year range. When it comes to Blackwells, it’s a little too early to say what the payback period will be. But to put it into perspective, there are certain other elements of our business that we think can actually make that payback period a little bit faster. First, we’re able to build very efficient data centers, and we have developed our own racks and servers, so we think that’s an area of cost savings.
In addition, we are a full-stack business, and as a full-stack business we have additional value-added services that we can offer, which could really help accelerate the ROIC, as that software layer is a high-margin element of our business. That software piece is still very small today, but we think that as we grow our customer count and start building more tools, software, and services, it can be a nice contributor to revenue and margin and help with that ROIC.
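As a rough sketch of the payback arithmetic behind the 2.5-to-3-year figure Neil cites: a simple (undiscounted) payback period is just upfront capex divided by the annual net cash flow the hardware generates. The dollar amounts below are hypothetical placeholders for illustration, not figures from the call:

```python
def simple_payback_years(capex: float, annual_net_cash_flow: float) -> float:
    """Simple (undiscounted) payback period in years."""
    if annual_net_cash_flow <= 0:
        raise ValueError("payback is undefined without positive cash flow")
    return capex / annual_net_cash_flow

# Hypothetical example: a GPU cluster costing $300M that generates
# $110M of annual net cash flow pays back in about 2.7 years,
# inside the 2.5-3 year range cited for the Hopper generation.
hopper_like = simple_payback_years(capex=300e6, annual_net_cash_flow=110e6)

# Higher-margin value-added services raise the annual cash flow on the
# same capex: at $120M per year the same cluster pays back in 2.5 years.
with_services = simple_payback_years(capex=300e6, annual_net_cash_flow=120e6)
```

The point being made on the call is that the software layer raises the cash flow per dollar of capex, which mechanically shortens the payback period and lifts ROIC.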
Tom Blackwell: Okay, great. Thank you, Neil. We’ve had a few questions asking how things are going with the other businesses: Toloka, TripleTen, and Avride. Arkady, perhaps I can ask you to address that and give a sense of how things are going with the three.
Arkady Volozh: Okay, let me go back to them again. They’re all growing very well, as I said. Toloka grew revenue by 140% in 2024 versus 2023. Last year they pivoted the platform to focus more on experts rather than simple crowdsourced data labeling, and maybe most important, they added almost all of the leading AI labs as customers, virtually all the big names. TripleTen also had a great year. Revenue was up even more, 250%, and they ended last year with [14,000] (ph) students, mostly in the US, which was of course a record high in the company’s short history. On Avride, as you saw, they entered into a partnership with Uber in the US, and Avride delivery robots are already being tested in various markets with Uber Eats, in Dallas, Austin, and Jersey City.
And the cars are expected to come into service later this year. Avride also entered into a partnership with Grubhub, under which we provide robots for food delivery on college campuses. We recently deployed around 100 robots for the food delivery service at Ohio State University, and in the first week they were already making more than 100 deliveries a day; sorry, 1,000 a day of course, which is 7,000 to 8,000 a week. Avride also received certifications in Japan and started exploring opportunities in that country, and we think Japan is yet another huge opportunity for autonomous robots and cars. And as we said before, we are planning to bring one or more co-investors into this business.
Because, as we said, this is yet another business with substantial capital requirements, but at the same time a great market opportunity, which became obvious this year. There’s nothing specific to say about new co-investors right now, but we will definitely keep everyone updated as things progress.
Tom Blackwell: Thank you very much, Arkady. And I’ll just, for the benefit of everyone on the line, just confirm a couple of the numbers that we misspoke. So, with the Grubhub partnership, which got off to a great start, we’ve deployed around 100 robots already for food delivery service, and they were already showing 1,000 daily deliveries in the first week alone. So, we’re very pleased with how that’s going, and again, we see a lot of opportunity there and around that space and a lot of interest in it from the outside. Okay, so just a reminder again, please keep your questions coming through the chat function. We still have a little bit more time. There’s been a question on the kind of regulatory side, and so actually I’ll read it out.
It’s: how does the US IFR on the Framework for Artificial Intelligence Diffusion impact the outlook for the Nebius core business? Have you assessed whether Nebius will be able to qualify for universal verified end user status under the rules? I’m happy to have a go at that one. While it remains to be seen how the framework will be implemented in practice, and if indeed there will be any changes, our initial reading of the text as it stands right now is that, as a Netherlands-headquartered business with our current largest data center in Finland and a very significant part of our expansion focused in and around the US market, we anticipate that we’ll be able to qualify for UVEU status, and we don’t anticipate any material negative impact on our business or prospects at this time.
But again, it’s a little bit early; we will be watching it, and it remains to be seen how it will all be implemented in practice. Okay, continuing on the regulatory side of things, there’s a question which says that we’ve stated previously that we’re servicing US clients while a lot of our data centers are currently in the EU. Do we see any risks to the business if tariffs are introduced against the EU, as President Trump has indicated? And are we considering a corporate relocation to the US? Well, President Trump indicates a lot of things, but as of now, we don’t see any material risks to our business in this regard. While our initial data center footprint indeed started in Europe, we’re now actively building out a physical footprint in the US.
And that’s in terms of data centers as well as GPU hardware, the team, and so on. So we believe that we’re going to continue to be very well positioned to service US and global customers from our data center footprint around the world. In terms of the corporate headquarters, our headquarters are indeed in Amsterdam, and we’re happy here. But as a global business powering the needs of AI companies around the world, many of whom are in the US, it obviously makes sense for us to keep building out our on-the-ground presence in the US. Andrey earlier outlined some of the specific plans around data center and GPU cluster expansion in the US, and Arkady and Roman made reference to the fact that we’ve been building out the sales function with a real focus on the US market.
And you might remember we announced that we opened sales and customer-facing offices in San Francisco and Dallas, and we’ll continue building up the US presence going forward. Sorry, that was a long-winded way of saying that we don’t anticipate any material risk around tariffs. We think we have the right geographic spread to continue developing very efficiently on both sides of the Atlantic. Okay, next there’s a question about the Stargate project, a potential European Stargate, and whether Nebius could be a beneficiary. Just for context, this refers to what we heard during the week of Davos about US plans to invest $500 billion in AI infrastructure.
Not long after that, France came out with their plan to invest $100 billion in AI infrastructure. And generally there’s been a lot of interest and enthusiasm around these big mega projects in the sector. So, Arkady, maybe I can come to you for how you’re thinking about those projects and potentially what they could mean for us.
Arkady Volozh: Well, we are definitely believers that AI infrastructure is a huge opportunity, so it’s good to see so much focus in this space now. I just think that when the time comes, probably later this year, someone will need to come and build all of this. We have the technology and the know-how to build these AI cloud data centers at scale, and I believe we are one of the best-positioned companies to help build it.
Tom Blackwell: Okay. Thank you very much, Arkady. Okay, so there’s a question around pricing. And so, Roman, I’ll come to you on this one. There’s been some discussion of price declines for the H100s in Q4 of last year. Did you experience this? And how have the market dynamics changed in the last few months?
Roman Chernin: Yeah. Like the rest of the market, we did see some price pressure in Q4. It was partly expected as H200s came online. But in general, we believe we have one of the best cost structures in the market, and we can be successful even in a price-pressure environment. More importantly, going forward the significant majority of our fleet will be on new generations of chips, like the Blackwells coming this year, which will command higher pricing, and we believe we are well positioned to earn healthy margins on the new generation of chips.
Tom Blackwell: Okay, great. Then there’s a question that’s come through asking us to clarify where we are on GPUs, in terms of what’s already been deployed. I know you touched on that briefly, Andrey, in one of your earlier remarks, but maybe I can ask you to come back to that and elaborate a little on where we are on GPU capacity.
Andrey Korolenko: Sure, Tom. As I already mentioned, we’ll have around 38,000 GPUs deployed in the platform and available for customers by the end of March, 20,000 of which are H200s. Just to remind everyone, we’ve been deploying these over the last half year: the H200s arrived for deployment in late October and November, so the deployment window for us was essentially the second half of Q4 and into Q1.
Tom Blackwell: Okay. Thank you, Andrey. Neil, maybe I can come to you for a question around the revenue and EBITDA guidance. We provided guidance on the last call; in light of everything, can you provide an update on where we stand against that guidance?
Neil Doshi: Yeah. As you read in our press release and heard in Arkady’s prepared remarks, we did reiterate our annualized run-rate revenue guidance of $750 million to $1 billion exiting 2025. This is essentially a reiteration of the revenue and EBITDA guidance we provided previously. Just as a reminder, we said revenue for Nebius Group should be in the range of $500 million to $700 million, and that EBITDA will remain negative for the full year, though we should pass breakeven at some point during the year. We haven’t given specific timing; there are a lot of factors as we’re ramping and scaling the business. But with more Blackwells coming online, and, as Roman said, with us starting to pre-sell some of the Blackwells, we believe that will help drive better revenue and overall margin growth for the business.
Tom Blackwell: Okay. Thank you, Neil. There was a specific question. In the press release earlier, we had referenced that in Q4, we had some churn with some of our clients. And the question is just if we can provide any more color around that. Daniel, perhaps I can come to you on that one.
Unidentified Company Representative: Sure. Yeah, we should expect a certain amount of churn in a market such as the AI market, period. That’s baked into our plans, and in some ways it highlights one of the things that makes us unique: the range of customers we serve. From single instances of inference with our AI Studio, to self-service, to large implementations of thousands of GPUs at a time, as I mentioned earlier, to bare metal, it’s a very wide range. At the lower end, we expect churn to be part of the number. A great example: as DeepSeek hit the news, our demand for H200s really skyrocketed, and we expect more of that in this market. So that’s one element of seeing churn across the broad spectrum of our offering.
The other is that as we exited ’24 and moved into ’25, there were a lot of learnings in terms of how we service our customers and perfect our AI platform. As we did that, there were things we picked up and experienced with our customers, and we expect to solve those as we extend into ’25.
Tom Blackwell: That’s great. Thank you very much, Daniel. Once again, at the beginning of the call we introduced Neil Doshi, who we are very pleased has joined us as Head of Investor Relations, based in San Francisco. And Neil, you joined us just in time; there’s a good question for you. The question asks for an update on where we are in terms of sell-side analyst coverage, and, more generally, as we’re putting together the IR calendar, how we’re thinking about IR for the coming year.
Neil Doshi: Yeah. Thanks, Tom. We are building out the IR plan for 2025. In terms of sell-side coverage, today we don’t really have any covering analysts. As everyone knows, we did not have the luxury of an IPO, so we are one of the few companies with our market cap that has zero sell-side analyst coverage. This is really a priority for us: we would like several analysts from broker-dealers and research firms to pick up coverage of our stock. We’ve started to have good conversations with many of those analysts and will continue to engage with them. We’ll also be attending investor conferences this year; we really want to elevate Nebius’s brand and recognition in the investment community.
And so, for example, Arkady will actually be speaking at the Goldman Sachs Disruptive Technology Symposium in London in March. And he’ll be doing a fireside chat there. And then finally, we’ll consider hosting analysts and investors at industry conferences and maybe even do a tour of one of our data centers, but more on that to come. But overall, we’re excited to build a plan and work with both our institutional and retail investors.
Tom Blackwell: Great. Thank you, Neil. So, we have a question about the Dutch tax liability. To recap for everyone, we said on the previous call that there was a potential Dutch tax payment, probably at the upper end of $400 million, but that we were in discussion with the Dutch tax authorities to potentially bring that down. The reason for the variability was that the amount would depend on how the shares we received as part of the consideration for the divestment would be used and treated. Through the shares issued in the PIPE transaction we announced at the end of last year, when NVIDIA, Accel, Orbis and others came into the capital structure, we were able to utilize part of the treasury shares, which already reduced some of the potential obligation on that front.
Anyway, we’re happy to say that in February we submitted the required filings and paid tax amounting to $180 million to the authorities to settle the remaining potential liability. This isn’t completely done yet, but we’re reasonably confident that $180 million will end up being the final amount, which would mean that, out of the $400 million we had previously guided to, $220 million would be freed up to put into our expansion plans. So, as I say, it’s not 100% done, but we’re reasonably confident that’s the way things will end up.
Tom Blackwell: Okay. So, thank you very much to everyone for having joined us on the call. It’s only our second quarterly results call since we became Nebius, but we’re looking forward to many more. And maybe just before we close, I’ll turn the floor back to Arkady to wrap up.
Arkady Volozh: Yeah, thanks, Tom. Again, the company didn’t exist seven months ago, and look where we are today. We are public on NASDAQ. We brought in high-profile investors through the PIPE. We started building our infrastructure, and we now have hundreds of megawatts of data center capacity secured for this year, with big plans going forward. We deployed the first tens of thousands of GPUs and are awaiting the next generation, with tens of thousands more GPUs to be deployed this year. We completely relaunched our cloud platform, as well as our inference-as-a-service platform. We now have hundreds of customers and are growing quickly; it will soon be thousands. And we believe we’re going to win a lot of big contracts on this new generation of Blackwells.
Our other businesses are growing well too, and there’s a lot going on there. So all in all, the group, which again didn’t exist just seven months ago, is doing very well, and we’re looking forward to great results this year.
Tom Blackwell: Thank you, Arkady. I can certainly confirm it’s been a busy time, and at the same time we’re just getting going. So thank you very much to everyone for joining us: good morning, good afternoon, and good evening, wherever you are. We look forward to following up with many of you over the coming period. Thanks again, and be in touch. Thank you.