Jack Abuhoff: So no, we're happy to take you through that. Basically, what we're adjusting for is the stock-based compensation and D&A.
Tim Madey: So there's an add-back there. Okay.
Jack Abuhoff: So that would be the add-back, and you'll get leverage on that add-back because it won't necessarily keep increasing at the same rate as revenue will.
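A minimal sketch of the add-back arithmetic being described here, using hypothetical figures rather than company financials, to show how adjusted EBITDA gets leverage when the add-backs grow more slowly than revenue:

```python
# Illustrative sketch of the adjusted-EBITDA add-back arithmetic.
# All figures are hypothetical, not company guidance.

def adjusted_ebitda(operating_income, stock_based_comp, depreciation_amortization):
    """Adjusted EBITDA = operating income plus the non-cash add-backs."""
    return operating_income + stock_based_comp + depreciation_amortization

# Two hypothetical quarters: revenue grows, but the add-backs stay roughly flat,
# so adjusted EBITDA grows faster than revenue does ("leverage" on the add-back).
q_a = {"revenue": 20.0, "operating_income": 1.0, "sbc": 1.5, "d_and_a": 0.5}
q_b = {"revenue": 24.5, "operating_income": 2.5, "sbc": 1.6, "d_and_a": 0.5}

for name, q in (("Quarter A", q_a), ("Quarter B", q_b)):
    adj = adjusted_ebitda(q["operating_income"], q["sbc"], q["d_and_a"])
    print(f"{name}: revenue ${q['revenue']}M, adjusted EBITDA ${adj}M")
```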
Tim Madey: Okay. I understand now. And my last question: I was on the Microsoft call the other day and I couldn't help but notice that they're using Copilot also. You trademarked that with PR co-pilot. How does that work, with them using Copilot around large language models as well?
Jack Abuhoff: Well, I think it’s a really good name.
Tim Madey: It's a great name. I was just kind of wondering, did they talk to you before they started using that name? Or are they licensing that from you, or?
Jack Abuhoff: They're not. And that certainly isn't our biggest concern. I think it's a great description for the way these technologies can be used to augment the work that people do and provide that kind of real-time, real live assistance. And I think the exciting thing is that those technologies, and certainly our PR co-pilot, are just going to get better and better and more and more personalized. So I'm happy we picked a name that other people think is cool too, and maybe there's a benefit for us in that. There are certainly no lawsuits that we're initiating.
Tim Madey: I know that. Just one last quick question. I was thinking about an earlier question: we've been tracking you for years, and you've had some great projects over the years. I was wondering if you could talk a little bit about that history, what you learned on some of those projects and how it relates to your current business, kind of tying that lineage or heritage together for us?
Jack Abuhoff: Yes, happy to. So what we've built the business on over the years is creating large-scale, high-quality data for companies where errors are not welcomed, where errors are not tolerated. The tolerance for mistakes is virtually non-existent. So we've developed technology around that, processes around that and DNA around that. And we've done this in lots of different domains, by which I mean subject areas: medical, health care, legal, regulatory, tax, financial, insurance, on and on and on. Now, the thing to know about large language models and AI fundamentally is that beyond compute for training and inferencing, the next key ingredient is data, and the higher the quality of the data, the better performing the AI will be.
So we're able to take that fundamental core competency that we have and pivot off of it very directly to create high-quality AI. That's why I like to think that all of the work we've done over now decades has been kind of a training camp. It's like training for the Olympics. Now we're in the Olympics, and we're bringing a lot of very relevant training to the table.
Tim Madey: Yes. That's some of the criticism I've heard of large language models: if the data set is not right, the answer might sound logical but it could be false. How do you ensure, or could you talk a little bit more about, the skill set of putting together the right data set for the right model to make sure that you're getting the right output?
Jack Abuhoff: Yes. So there's a little bit of danger there in conflating two problems. One is that the model just doesn't work very well: the language isn't helpful, its kind of cognitive ability isn't there, and things like that. The other, related issue is hallucination, and you don't necessarily solve hallucination through the quality of data. You solve hallucination in some respects through the kind of work you're doing on performance evaluation, the trust and safety work, and the kinds of data that you're feeding into it, but it's not just a data quality problem.
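A minimal sketch of the distinction being drawn here, using a hypothetical generate() placeholder and a toy reference set; the fluency check stands in for general model quality, while the grounding check stands in for a separate hallucination-style evaluation against known facts:

```python
# Illustrative sketch only: toy reference data and a placeholder model call,
# not a real evaluation pipeline.

REFERENCE_QA = [
    {"question": "What year was the company founded?", "answer": "1988"},
    {"question": "Where is the company headquartered?", "answer": "New York"},
]

def generate(prompt: str) -> str:
    """Placeholder for a real model call; returns a canned response for the sketch."""
    return "The company was founded in 1988."

def fluency_score(text: str) -> float:
    """Crude proxy for general output quality (length/structure), not factuality."""
    return min(len(text.split()) / 20.0, 1.0)

def is_grounded(response: str, reference_answer: str) -> bool:
    """Crude hallucination check: does the response contain the known fact?"""
    return reference_answer.lower() in response.lower()

for item in REFERENCE_QA:
    response = generate(item["question"])
    print(item["question"])
    print(f"  fluency ~ {fluency_score(response):.2f}, "
          f"grounded: {is_grounded(response, item['answer'])}")
```

The point of separating the two checks is the one made above: a response can score well on general quality while still failing the grounding check, which is why hallucination gets its own evaluation rather than being treated purely as a data quality issue.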
Operator: Thank you. We have reached the end of our question-and-answer session. And I will now turn the call over to Jack Abuhoff for closing remarks.
Jack Abuhoff: Great. Well, thank you, operator, and thank you, everybody, for your great questions. I'll recap a little bit. We now have hard-fought master services agreements with 5 of the 10 largest technology companies in the world for generative AI development. We're super excited about that. We're expecting these companies to spend billions of dollars over the next several years on training and fine-tuning generative AI models. We're now ramping up, or soon expect to be ramping up, engagements with all of these companies. I guess in Q3, we got a taste of the growth that we believe is in store, and we anticipate further growth in Q4 and continuing into 2024. As we said, we're guiding to $24.5 million or more of revenue in Q4. Today, we also announced having signed an agreement with yet another of the world's largest tech companies, adding to our already rich roster of opportunities.
And with the significant incremental adjusted EBITDA gains we're delivering, we're demonstrating that we have what it takes not just to grow aggressively but to grow aggressively and profitably as we harness the opportunity that's in front of us and the tailwinds we're benefiting from. My team and I are energized by what we've accomplished, by the number of new major accounts we now have to deliver growth, and by the magnitude of the market opportunity that's in front of us. We believe we're now just at the early stages of exploiting these market opportunities, and we believe that these market opportunities are themselves at their early stages. So very exciting. And again, thank you all. We'll be very much looking forward to our next call with you.
Operator: Thank you. This does conclude today’s conference. You may disconnect your lines at this time. Thank you for your participation.