Innodata Inc. (NASDAQ:INOD) Q4 2024 Earnings Call Transcript February 20, 2025
Innodata Inc. beats earnings expectations. Reported EPS is $0.31, expectations were $0.11.
Operator: Good afternoon, ladies and gentlemen, and welcome to the Innodata Fourth Quarter and Fiscal Year 2024 Results Conference Call. At this time, all lines are in listen-only mode. Following the presentation, we will conduct a question-and-answer session. [Operator Instructions] This call is being recorded on Thursday, February 20, 2025. I would now like to turn the conference over to Amy Agress, General Counsel at Innodata, Inc. Please go ahead.
Amy Agress: Thank you, Inay. Good afternoon, everyone. Thank you for joining us today. Our speakers today are Jack Abuhoff, CEO of Innodata, and Marissa Espineli, Interim CFO. Also on the call today is Aneesh Pendharkar, Senior Vice President, Finance and Corporate Development. We’ll hear from Jack first, who will provide perspective about the business, and then Marissa will follow with a review of our results for the fourth quarter and fiscal year 2024. We’ll then take questions from analysts. Before I get started, I’d like to remind everyone that during this call, we will be making forward-looking statements, which are predictions, projections, or other statements about future events. These statements are based on current expectations, assumptions, and estimates, and are subject to risks and uncertainties.
Actual results could differ materially from those contemplated by these forward-looking statements. Factors that could cause these results to differ materially are set forth in today’s earnings press release, in the risk factors section of our Form 10-K, Form 10-Q, and other reports and filings with the Securities and Exchange Commission. We undertake no obligation to update forward-looking information. In addition, during this call, we may discuss certain non-GAAP financial measures. In our SEC filings, which are posted on our website, you will find additional disclosures regarding these non-GAAP financial measures, including reconciliations of those measures with comparable GAAP measures. Thank you. I will now turn the call over to Jack.
Jack Abuhoff : Thank you, Amy, and hello, everyone. Our Q4 ’24 revenue totaled $59.2 million, a year-over-year increase of 127% for the quarter. This exceeded our projected Q4 revenue guidance of $52 million to $55 million for Q4 ’24. And our adjusted EBITDA for the quarter was $14.1 million, or 23.9% of revenue, a 231% year-over-year increase. For the full year 2024, we delivered $170.5 million of revenue, up 96% over 2023. And our full year adjusted EBITDA was $34.6 million, or 20.3% of revenue, a 250% year-over-year increase. We finished the year with $46.9 million of cash, up from $13.8 million of cash at the end of 2023. Our $30 million credit facility remains undrawn. We are very pleased with these results. In the fourth quarter, we experienced accelerating business momentum across key strategic imperatives that we believe will serve our medium and long-term growth plans.
The momentum we’re achieving gives us confidence to forecast 2025 as another year of strong growth. In terms of guidance, in 2025, we will be taking the same approach we took in 2024. We’ll start the year with an initial growth forecast based primarily on won and near-in forecastable business. And in succeeding quarters, if we win new business, we’ll update the guidance. Last year at this time, we forecasted 20% growth for 2024. But then we revised this initial guidance upward multiple times through the course of the year as we won more business, ultimately delivering 96% revenue growth. We are now forecasting 40% or more revenue growth for 2025, and we will update this initial guidance through the course of the year. Our strong business momentum is reflected in revenue growth, margin expansion, broadening customer relationships, and continued progress on our strategic roadmap.
We are laser-focused on providing big tech companies with the data engineering they require to develop generative AI frontier models. We believe our efforts are paying off. In Q4 and in January, we were awarded additional programs and expansions with our largest customer, valued at approximately $24 million of annualized run rate revenue. These newest awards expand our total annualized run rate revenue with this customer to approximately $135 million. With our other big tech customers, we’re also seeing accelerated demand for our services. Sequentially, from Q3 2024 to Q4 2024, our revenues from our largest big tech customer grew by 8%, while our aggregate revenues from our other seven big tech customers grew by 159%. This increased growth by our other big tech customers, which we hope will continue in 2025, serves as validation of our land and expand strategy, and we expect it will continue to diversify our revenue base.
Our confidence that these seven other big tech customers will collectively become a significant part of our revenue makeup in 2025 is bolstered by the progress we made in Q4 in building relationships, expanding work, securing new wins, gaining traction, and earning trust. The number of projects and pilots we have underway with these customers significantly increased in Q4. This includes several pilots running now, which hold the promise, potentially, of seven or even eight-figure wins. As we discussed last quarter, our strategy encompasses both services and platforms. On the services side, we intend to be a go-to partner for big techs that are building generative AI frontier models and enterprises that seek to transform their products and operations with generative AI technologies.
We believe these are lucrative markets which we are well-suited to serve. The first focus area is big tech, and we believe we are positioned to benefit from big tech’s aggressive planned investments in generative AI. Following recent earnings reports from the Magnificent Seven, it is estimated that Amazon, Meta, Microsoft, and Google parent Alphabet will spend a cumulative $325 billion in CapEx and investments in 2025, driven by continued commitment to building out their artificial intelligence offerings. Amazon expects CapEx to be over $100 billion, up from $83 billion in 2024, with its CEO reiterating his previous views that AI is a, quote, once-in-a-lifetime type of business opportunity. Meta expects its 2025 CapEx to be between $60 billion and $65 billion, which, even at the bottom end of this guidance, is over 50% higher than its 2024 CapEx of $39.2 billion.
Its CEO has termed 2025 as the, quote, defining year for AI. Microsoft, meanwhile, expects to spend $80 billion in its fiscal 2025, which will end in June. And Alphabet has forecast 2025 CapEx of $75 billion, which is almost 50% higher than its 2024 CapEx of $52.5 billion. Recent earnings press releases have reinforced the commitment by these large tech companies to accelerate their AI investments with the goal of approaching AGI, artificial general intelligence. We believe that the long road to AGI will be paved with data: multilingual and multimodal data, data for safety and alignment, meta learning and reasoning data, computer use, agentic and operator data, industry-specific data, and data for real-world modeling and simulation. Now, we know models perform better when supervised fine-tuning data is high-quality, large-scale, highly consistent, and diverse.
An industry analogy to explain where we are in capturing data is to imagine the realm of all useful data to be the size of a football. By comparison, today’s best-performing LLMs have been trained with data sets that are probably the size of a dime. What’s even more interesting is that much of this uncaptured but useful data does not even exist explicitly today, such as how to execute a multi-step process using a series of websites or how to reason through complex domain-specific problems. We believe this likely means an even greater need for investments in our services that will be necessary to achieve the goal of AGI. We intend for Innodata to be at the forefront of providing these services. Moreover, we believe that innovations in hardware optimization, such as those that lower the costs of compute required to train tomorrow’s LLMs on data, will enable big tech to accelerate their investments in data.
We saw this kind of innovation recently from DeepSeek, the Chinese AI research lab. Their innovative use of several existing technologies, which enabled models to be trained on more data with less compute, is in fact part and parcel of the technology revolution previously popularized as Moore’s Law for the semiconductor industry. We expect DeepSeek’s hardware optimization techniques to be quickly absorbed by our largest customers, much like other recent hardware optimization techniques that received less fanfare and future hardware optimization techniques that are inevitable. We believe that there is no viable substitute for pre-training data and fine-tuning data to progress to AGI. Techniques such as data distillation, using output of existing models to train new models, may result in high performance on benchmarks because benchmarks are inherently biased toward the past.
But limiting data diversity in this way actually results in a more limited performance and ultimately causes what’s referred to as model collapse. DeepSeek relied heavily on data distillation. That’s why in the last few weeks we’re seeing more and more of the limits of what their model can do. We’re also seeing big tech companies putting in place the technologies to effectively shut the back door to future data distillation. In addition to supplying supervised fine-tuning data, we are increasingly identifying opportunities to source and transform pre-training data to solve the issues around IP infringement. One of the big tech companies that we signed in 2024 engaged us on pre-training data in Q4, which resulted in $3 million of Q4 revenue.
We’re also finding expanded opportunities with big tech companies in LLM safety and evaluation. Just last month we won two LLM trust and safety engagements with a big tech company that we value at approximately $3.6 million of annualized revenue run rate. Let’s talk a bit about the enterprise market. The DeepSeek hardware optimization that we just spoke of, which makes both training and inferencing less expensive, will, we believe, significantly catalyze enterprise gen AI adoption. This has recently been talked about as an example of the Jevons Paradox, the simple idea that when technological progress makes a resource cheaper or more efficient to use, it often leads to an increase in demand for that resource. We believe we are on the precipice of a rapid acceleration in enterprise adoption of generative AI.
This, we believe, will result from hardware optimization that lowers the cost of building gen AI solutions and managing gen AI infrastructure, as well as advancements in high-quality open-source models that can be fine-tuned as expert agents, innovations in orchestrating agentic AI ecosystems, and frontier models capable of performing deep research, utilizing both websites and tomorrow’s agents. We’re seeing that enterprise customers struggle with access to gen AI talent, so we’re building technical roadmaps to capture both operational enhancement and product innovation, and we’re building prototypes that move into development. And as their ecosystems become more densely populated with AI agents, we anticipate that they will struggle with issues around safety and trust as well.
We believe our enterprise gen AI focus presents opportunities for us to continue delivering strong revenue growth in 2026 and beyond. The strong relationships we have with leading information companies, financial services companies, and other businesses have been and are expected to be the right proving ground for enterprise AI solutions and services. We believe we already have a line of sight to double-digit growth with a number of these customers in 2025, based on forecasted gen AI-related spend. Moreover, our gen AI focus creates a clear path for reinvesting in our business with what we anticipate to be near-term payback. Our 2025 budget calls for us to reinvest a portion of our cash from operations back into the business, while at the same time exceeding our 2024 adjusted EBITDA.
Our scheduled investments are largely in people, spanning technology, product development, operations, and sales. We are pleased with how successful we have been recently at recruiting select top talent from prominent technology companies and leading competitors. They find our business momentum attractive, as well as the opportunity to build practices that align to the industry and technology trends that I just described. The work we’re doing with our big tech customers on trust and safety is helping to inform our development of our automated trust and safety platform. We believe that our automated trust and safety platform will be useful to enterprises to measure how their models and agents are working, to surface vulnerabilities and misalignments, and to identify specific training required for continuous improvement.
We’re building this for the agentic era, in which we anticipate companies will depend on a rich ecosystem of agents to power their operations and products. In the last few months, we have worked hand-in-hand with prospective customers and partners, designing functions and features. We demoed the platform for the big tech company that I spoke about earlier as having just engaged us on a $3.6 million trust and safety program. And I believe what they saw helped us seal the deal. We expect to beta release the product to select charter customers in Q2. I’ll now turn the call over to Marissa to go over the financial results, after which Marissa, Aneesh, and I will be available to take questions from our analysts. Marissa?
Marissa Espineli : Thank you, Jack, and good afternoon, everyone. Revenue for Q4 2024 reached $59.2 million, reflecting a year-over-year increase of 127%. This exceeded our expectations, benefiting from a project we delivered in the quarter for one of our new big tech customers. For 2024 as a whole, we grew 96%. For the quarter, adjusted gross margin was 48%, representing a four-percentage-point sequential increase from the 44% we achieved in Q3 of 2024. This was a strong result, driven by a project for one of our new big tech customers in the fourth quarter that yielded strong margin, as well as automation efforts and management initiatives that reduced headcount and optimized costs in our Synodex business. For the year, adjusted gross margin climbed slightly from 42% to 43%, which would have been 45% without the $3.6 million of recruiting costs we incurred in Q2 to scale the business.
Adjusted EBITDA for the fourth quarter was $14.1 million, or 23.9% of revenue, up from $4.3 million year-over-year. For the year, adjusted EBITDA was $34.6 million, or 20.3% of revenue, up from $9.9 million in 2023. Our results in both the quarter and the year demonstrated the strong operating leverage characteristics of our business. Net income was $10.3 million in the fourth quarter, up from $1.7 million in the same period last year. For the year, net income was $28.7 million, which was 3,256% higher than 2023. We were able to utilize the benefit of accumulated net operating losses, or NOLCO, in Q3 and in Q4 to partially offset our tax provision. As a result, our effective tax rate for 2024 was approximately negative 17.1%. Without the benefit of NOLCO, our effective tax rate would have been approximately 20%.
Absent any changes to our tax environment and variables, we expect our 2025 tax rate to be in the range of 28% to 31.5%. Our cash position at the end of Q4 was $46.9 million, up from $26.4 million at the end of Q3, and up from $13.8 million at year-end 2023. We still have not drawn on our $30 million Wells Fargo credit facility that we increased in Q2 of 2024. The amount drawable under this facility at any point in time is determined based on a borrowing-base formula. Jack also mentioned that in 2025, we plan on investing in extending our capabilities. Our internal budget has us dialing up strategic hires and product development expenditure as we sow the seeds for long-term growth. From a budget and forecasting perspective, we believe we can dial up reinvestment of operating cash flow while at the same time aiming to exceed 2024’s adjusted EBITDA.
That’s all on my side. Thank you, everyone, for joining today. Operator, please open the line for questions.
Q&A Session
Operator: [Operator Instructions] Your first question comes from the line of George Sutton from Craig Hallum. Please go ahead.
George Sutton : Thank you. Fabulous job, guys. So, I was particularly excited to see the 159% sequential growth from your seven other big tech customers. I wondered if you could just lay out the future for that group, if you would. And how does it dovetail with the number of pilots that you’re seeing? You mentioned seven to eight-figure opportunities. Are those coming from those seven or those additional customers?
Jack Abuhoff: Hi, George. Well, thank you very much for that. Great question. So, the pilots that we’re talking about are coming from a combination of those additional big techs. And I can think of one enterprise that we also have, you know, a very large-scale deal that’s in what we would refer to as a pilot stage right now. So, you know, we’re focused this year on big techs in order to achieve our growth plans. But we’re, you know, we’re working hard on the enterprise side as well. We’re putting in place some key partnerships, some key wins, some key capabilities. And we’re seeing that start to bubble up a bit in the pipeline.
George Sutton: So, I want to go back to the football-dime analogy that you used in terms of where the current models are and talk about your largest customer by example. And this is really a duration question, because I think one of the challenges people have is how long will this customer be this size or larger? Can you just address it from that football-dime analogy perspective?
Jack Abuhoff: Yes. So, the analogy that we’re using, and I think I can credit one of our internal data scientists with this. I don’t know if he got it from somewhere else, but I find it very useful. So, we think about the bounds of, you know, the containerization of all human knowledge, as expressed or potentially expressible as data, as being the size of a football. And if we hold that visual image in mind, we can think of, by comparison, the data that’s been used to train today’s LLMs as being the size of a dime. Now, what does that mean? That means that there’s a whole lot of additional data that the models of the future are going to need to learn from to be able to function at something that, over time, begins to resemble AGI, the ability to closely mimic the capabilities of a human.
And when we, you know, peel that back a bit, we see lots of different things. We see expert data, reasoning data, multilingual data, multimodal data, meta-learning. So, learning and expressing as data, you know, how do human beings think when they take apart a problem, when they assign components of that problem out, when they order, you know, their operations around solving a particular problem? And the problem doesn’t even have to be a particularly sophisticated one, although it can be. So, there’s a ton of data that needs to be captured and that needs to be addressable in order for the models to learn from it. And that, we believe, is our opportunity, or one of our opportunities. It’s not even our only opportunity, but it’s a very exciting opportunity.
And given what you were all and we were all reading about in their recent earnings reports in terms of the uptick in, you know, capital spending, principally for these technologies and these capabilities, we believe that we’re still in the early innings.
George Sutton: Got you. And then one other question. A lot of discussion relative to models being either open or closed. Can you talk about your opportunity as it may differ from one versus the other?
Jack Abuhoff: Yes, sure. So, I think that we’ve got opportunities very clearly on both sides of the fence. We’re working on open source models with customers for whom open source is a primary strategy, and we’re working with customers who are building closed source models. I think open source is particularly interesting because, in combination with the likely declines in cost of inferencing and costs of computing cycles necessary to train models, the world of opportunity is going to open up well past today’s integration strategies for an enterprise. It’s going to open up in terms of training very specific agents, you can call them small language models, building on top of open source models, doing complete supervised fine-tuned, SFT, models based on that.
That’s a huge opportunity for us. You know, there’s a lot that enterprises struggle with, in terms of even where is their data? What are their policies around data? How do they access it? Where’s a federated golden source for the latest data? Well, all of that is data engineering that’s going to get worked out with our help, we hope. And on top of that, building the models that form up their complex future agentic ecosystems. So we see opportunity all over the place right now.
George Sutton: All right. Good stuff. Thank you very much.
Operator: Thank you. And your next question comes from the line of Allen Klee from Maxim Group. Please go ahead.
Allen Klee : Great. Congrats. One business question, then a couple of financials. The business one is, there’s been a lot of questions in the market with DeepSeek, and some thinking that maybe through inferencing, or using other AI models to kind of figure out the answers instead of training, there could potentially be less training done. And I think you did address it, but could you just kind of go into that a little more? Because I think there’s some questions in the market.
Jack Abuhoff: Yes, sure. Happy to. So it’s well respected and well recognized among data scientists that distillation of data, that is, using data from existing models to train new models, creates model collapse. What happens is model diversity drops, and that certainly compromises the upper limit of performance. What you observe is memorization versus true cognition. Now, you can do that cleverly in a way that maximizes how you perform on benchmarks. We call that benchmaxing. And I think clearly the DeepSeek team did an amazing job at benchmaxing across a diversity of benchmarks. So hats off to them for that. But the fact remains that when you do distillation, you’re hugely compressing the data. You’re introducing a huge amount of bias into the model.
You’re inviting model collapse. And that’s why even if you read the technical report from DeepSeek, much less the notes and transcripts that are available of some closed door discussions they had in China with their researchers, they clearly recognize the limitations of what they’ve done. They clearly talk about how going forward they’re going to need to do a lot more work and a lot more investment in terms of data in order to catch up with true performance. So data distillation is a technique. It’s a known technique. That wasn’t an innovation. Data distillation creates model collapse. And that’s why I don’t think you’re going to see the people that are really driving toward AGI in a serious way and creating the frontier models embracing that as a viable technique.
Allen Klee: Very good. Thank you. And a couple of financial questions. I’ll try to be quick. Your gross margins were 45.2% versus 40.8% in the third quarter and 34.8% in the prior year, fourth quarter. You mentioned some of the reasons why; one of them, I heard you guys say, was a project for a new tech customer, but also automation and reduced headcount in Synodex. What I’m trying to understand is, do you think there’s potential from where it’s at now to potentially expand as we go through ’25? Or is there any reason why it was unusually high in the fourth quarter?
Jack Abuhoff: Aneesh, do you want to take that?
Aneesh Pendharkar : Yes, of course, Jack. So Allen, I guess just in terms of Q4 specifically, with that customer, there were certain projects which we were working on where we had pretty healthy margins. And then, just with the mix effect with some of our other customers, we were able to generate, as you said, 45% in Q4. As we think about 2025, holistically, there’s always going to be some puts and takes as it relates to gross margin, as it relates to specific programs with specific customers. We, on a fully loaded basis, will generally be targeting around the 40% adjusted gross margin level for net new opportunities, which we think is a pretty healthy level of margin for a services company. Of course, across all the opportunities, as we continue to win new customers, we’d be hoping to get more than that. But that’s kind of how we’re thinking about net new opportunities on the margin front. I hope that makes sense, Allen.
Allen Klee: Yes, that’s very helpful. Thank you. The last question, and I know you may not answer this because you haven’t done it yet, but can you help us in any way? You say you’re going to be reinvesting, which I believe is the right thing to do for medium and long-term growth, and EBITDA will be up year over year. Is there anything you can help us with, qualitatively, in terms of what being up year over year means and how to think about that?
Jack Abuhoff: I think what our plan calls for, at least, is to be investing primarily in people across an array of different areas. As you know, I mean, as is evident, we prize operational excellence. We’re doing a lot still to tweak and automate and improve and everything else. As we make investments, we’re going to be very careful about those. We believe that there’s such a rich opportunity landscape that we can make investments and get a very near term return on those investments. We see that we’re able to hire some people who can be very impactful within our business. They love the momentum we’ve got right now. They love the relationship, or excuse me, the reputation that they hear we’re creating in the market when they talk to our customers.
They like our plan. They like our vision for what our capabilities are going to be now and into the future. But I think we’re reluctant to fix a number in terms of what those investments will be, even though in our plan we have a number. We want to be somewhat flexible. We want to quite possibly dial that up as we dial up forecastable growth. But we’re going to be disciplined about it. That’s why one of the things we want to measure ourselves against, and hopefully hold ourselves to, is that as we dial up those investments, we’ll also be looking at adjusted EBITDA, with our stated goal of beating last year.
Allen Klee: Okay, great. Thank you very much.
Operator: Thank you. And your next question comes from the line of Hamed Khorsand from BWS Financial. Please go ahead.
Hamed Khorsand : Hi. Just on the topic of you’re looking to invest, are you at capacity now with the client base you have? Do you need it for new projects? Do you need it because you have a sales funnel? Why do you feel like you need to expand your headcount now?
Jack Abuhoff: Sure. So I think it’s helpful, Hamed, to distinguish between cost of goods and SG&A. On the cost of goods side, we’re able to expand in close proximity to opportunity. We don’t need to carry a large bench of capabilities or bench of talent. We don’t experience constraint in terms of revenue and revenue opportunity relative to cost of goods. What we do though, in cost of goods is we prepare for the next phases of growth. We want to make sure that we’ve got the management talent and technical talent and other things required in order to achieve our growth ambitions on that side. Now, on the SG&A side, it’s a question of ambition. It’s a question of, well, how much more do we want to do? How much opportunity do we want to try to seize?
What are the things that we think we have justification in potentially winning with the relationships we have and the brand that we have and the capabilities we have? And frankly, there’s a lot that we see there. So what you’re seeing when we talk about investments is a reflection of the ambition that we have for who we think we can become. The people we’re hiring today, they know how to do things I can hardly even understand. And I love that because we’re getting deep into the technology. We’re operating in ways that go well beyond anything that we’ve ever done in the past. And the work that we’re doing is well-received by our customers. So we’re going to keep feeding the beast there. And we’ll do so, as I said, in response to the last question, in a way that is disciplined, in a way that enables us to have our cake and eat it too.
So to both show, hopefully, year-over-year improvements in key operating metrics, as well as growth levels like what we’re showing now and expanded capabilities.
Hamed Khorsand: Okay. And then on the pilot trials you were talking about earlier, is that opportunity as far as taking business away from a competitor or competitors? Or is that brand-new projects that you hope to win?
Jack Abuhoff: I think that there’s probably a bit here and there that we’ve taken from competitors. I know that that is the case here and there. But for the most part, that’s not our strategy. This pie is expanding so rapidly that we’re focused not on eating someone else’s lunch relative to yesterday’s — I’m mixing metaphors terribly, yesterday’s pie. But we’re focused on, as that pie expands, how do we get a disproportionate share of that expansion? What do we need to be able to do? What do we need to have in place? What do we need to be able to prove so that we’re seen as adding disproportionate value relative to our competitors?
Hamed Khorsand: Okay. And then my final question is, do you feel like you’re under less stress because you’re cash-positioned now? Or do you think because you’re at a higher revenue run rate, you still need a little bit more liquidity than you have now?
Jack Abuhoff: I think we’re very well-positioned. We’ve seen the increase in cash that we’ve got on our balance sheet now. We haven’t tapped our credit facility at all. We’re forecasting a very significant level of free cash flow generation. And we’re very targeted and very disciplined in terms of the investments we’re going to be making. So, I think we’ve got what it takes and we’ve got what it needs. That said, as I’ll repeat again, we’re very ambitious. We want to be in a position to seize opportunity. And as that opportunity presents itself to us, we’ll, I hope, make the right decisions.
Hamed Khorsand: Okay. Thank you.
Jack Abuhoff: Operator, thank you. So, yes, Q4 was a record quarter. 2024 was a record year. We entered 2025 with really strong momentum. We’re starting the year with guidance of 40% or more revenue growth, and we’ll update that as we go forward, much like we did in 2024. Our confidence is underpinned by the continuing increase we’re seeing in customer demand. We’re announcing today new wins of $24 million in annualized run rate revenue from our largest customer, bringing our total run rate to approximately $135 million with this customer. And at the same time, you know, equally exciting, we grew revenue from our other seven big techs by 159% sequentially. And we think this is a big deal because it shows that our land and expand strategy is working.
The macro environment is working in our favor as well. We believe the big techs are continuing to dial up their capital commitments to AI, and at the same time, we’re anticipating a rapid acceleration in enterprise adoption, thanks in part to DeepSeek and other research labs that are optimizing hardware utilization, which then lowers entry costs for enterprises. So, you know, the net-net is we believe we’re in the right place at the right time, and that a potentially massive opportunity exists in front of us to position ourselves for continued strong growth in 2026 and beyond. Our plan calls for reinvesting in the business while, at the same time, hopefully growing 2025’s adjusted EBITDA over 2024’s. The balance sheet is strong, with $46.9 million in cash at year-end and an undrawn $30 million credit facility.
So, we’ve clearly got the flexibility to execute our strategy. So, yes, again, thank you all for participating today and for being on this journey with us. We’re committed to nothing less than making Innodata one of the greatest AI services companies out there. We’ll look forward to updating you on our progress as the year progresses.
Operator: Thank you. And this concludes today’s call. Thank you for participating. You may all disconnect.