C3.ai, Inc. (NYSE:AI) Q3 2024 Earnings Call Transcript February 28, 2024
C3.ai, Inc. misses on earnings expectations. Reported EPS was -$0.60282, against expectations of -$0.28. C3.ai, Inc. wasn't one of the 30 most popular stocks among hedge funds at the end of the third quarter.
Operator: Good day and thank you for standing by. Welcome to C3 AI’s Third Quarter Fiscal Year 2024 Conference Call. At this time, all participants are in a listen-only mode. After the speakers’ presentation, there will be a question-and-answer session. [Operator Instructions] Please be advised that today’s conference is being recorded. I would now like to hand the conference over to your speaker today, Amit Berry. Please go ahead.
Amit Berry: Good afternoon and welcome to C3 AI’s Earnings Call for the Third Quarter of Fiscal Year 2024, which ended on January 31st, 2024. My name is Amit Berry and I lead Investor Relations at C3 AI. With me on the call today are Tom Siebel, Chairman and Chief Executive Officer; Juho Parkkinen, Chief Financial Officer; and Hitesh Lath, Chief Accounting Officer. After the market closed today, we issued a press release with details regarding our third quarter results, as well as a supplemental to our results, both of which can be accessed through the Investor Relations section of our website at ir.c3.ai. This call is being webcast and a replay will be available on our IR website following the conclusion of the call. During today’s call, we will make statements related to our business that may be considered forward-looking under federal securities laws.
These statements reflect our views only as of today and should not be considered representative of our views as of any subsequent date. We disclaim any obligation to update any forward-looking statements or outlook. These statements are subject to a variety of risks and uncertainties that could cause actual results to differ materially from expectations. For a further discussion on material risks and other important factors that could affect our actual results, please refer to our filings with the SEC. All figures will be discussed on a non-GAAP basis unless otherwise noted. Also, during today’s call, we will refer to certain non-GAAP financial measures. A reconciliation of GAAP to non-GAAP measures is included in our press release. Finally, at times, in our prepared remarks or in response to your questions, we may discuss metrics that are incremental to our usual presentation to give greater insight into the dynamics of our business or our quarterly results.
Please be advised that we may or may not continue to provide this additional detail in the future. And with that, let me turn the call over to Tom.
Thomas Siebel: Thank you, Amit. Good afternoon, everyone, and thank you for joining our call today. C3 AI had a great third quarter. Total revenue of $78.4 million grew 18% year-over-year, exceeding the high end of our guidance range. The total number of customer engagements was 445, an increase of 80% compared to 247 a year ago. Subscription revenue was $70.4 million, constituting 90% of total revenue and increasing 23% from a year ago. Non-GAAP gross profit was $54.7 million, representing a 70% gross margin. Our GAAP operating loss was $82.5 million. Our non-GAAP operating loss was $25.8 million, better than our guidance for a loss of $40 million to $46 million. Our non-GAAP net loss per share was $0.13. We ended the quarter with $723.3 million in cash, cash equivalents and investments.
All these numbers exceeded our guidance and exceeded analysts’ consensus. This is the 13th consecutive quarter as a public company in which we have met or exceeded our revenue guidance range. None of this should have come as a surprise. The enterprise AI market is on fire. We have been predicting for some years that the market for enterprise AI would be quite large. Those predictions were subject to much speculation in the analyst community and media. As of February 2024, I believe it’s broadly resolved that the enterprise AI market opportunity is substantially larger than anyone predicted, constituting a secular change in the composition and growth rate of enterprise software writ large. Cloud infrastructure is scaling rapidly; NVIDIA grew 265% year-over-year.
NVIDIA’s data center GPU sales grew by 409% year-over-year. Now, some believe that this capacity is being built to use LLMs to write Christmas cards in the style of Charles Dickens and write college application essays to Yale. But that’s simply not the case. This capacity is being built to run enterprise AI applications, stochastic optimization of the supply chain, supply network risk, predictive maintenance, demand forecasting, fraud detection, production optimization, customer engagement, predictive medicine, precision medicine, and government services. And that is what we do at C3 AI. So as we power into 2024 and 2025, the world is very much coming our way. C3 AI has been preparing for this enterprise AI market explosion for 15 years. We started in 2009 at the infancy of AWS.
This is before Azure and before Google Cloud even existed. We carefully designed, developed and tested a reference AI software platform architecture now known as the C3 AI Platform. We then used that platform to design, develop and bring to market over 45 production enterprise AI applications that address the value chains of manufacturing, financial services, agribusiness, chemicals, lumber, paper products, utilities, oil and gas, state and local government, defense and intelligence. The market interest in enterprise AI is staggering. Virtually all commercial, military and government organizations are focused today on leveraging AI to improve their operations, optimize their processes and transform their businesses.
Our qualified opportunity pipeline has increased by 73% from a year ago, led by C3 Generative AI opportunities. Our go-to-market with partners is driving strong pilot additions, and it’s still in the earliest innings. In the third quarter, we closed 62% more bookings with our partner network than we did in Q2, and 337% more bookings than a year ago. Bottom line, the market demand for enterprise AI products is overwhelming. Turning to customers in the commercial sector: in Q3, we signed new agreements with Boston Scientific, Pantaleon, Booz Allen, Southwire, Carpenter Technology, Florida Crystals, Santa Ana Agriculture, Cerveceria Guatemala, AbbVie, T-Mobile, Bloom Energy, Ball Corporation, DLA Piper, Carlisle Companies and Holcim, among others.
Holcim, a European leader in sustainable building solutions, embarked upon a production pilot with C3 AI beginning in May 2023 to configure and deploy the C3 AI Reliability suite. Following a successful six-month pilot, Holcim entered into a four-year subscription agreement with C3 AI to scale the reliability application across its more than 100 cement plants. Holcim’s predictive maintenance program monitors a large number of assets and will generate significant yearly economic value in reduced maintenance costs and production increases. Holcim is also implementing C3 Generative AI to enrich the reliability application and to assist with complex machinery troubleshooting. Another example: DLA Piper, a global law firm pioneering technology innovation in the legal sector, worked with C3 AI to create a first-of-its-kind generative AI application to streamline the analysis of complex legal agreements.
In just three months, DLA Piper applied C3 Generative AI to reduce the attorney time it takes to create over 200-point due diligence analyses of limited partner agreements, reducing the effort by 80%. As a result of this application, DLA Piper is realizing new operational efficiencies and freeing up their attorneys’ time to focus on delivering increased value to their clients. State and local government had a huge quarter, generating 29% of our bookings. Lighthouse agreements with San Mateo County, Daly City and Riverside County, California are spearheading growing awareness for our state and local government suite of applications, and we’re now closing deals nationally. We see huge potential to expand our footprint through the counties, cities and municipalities across the United States.
The San Mateo County Sheriff’s Office and the Daly City Police Department both signed multiagency expansion contracts for the C3 AI Law Enforcement application as a countywide crime investigation platform to combat organized retail, vehicle and cargo theft. This contract involves a combined 15 local police departments and 16 agencies deploying the application in coordination, including Burlingame, Daly City, the City of San Mateo, San Bruno, Atherton, Redwood City, South San Francisco, Menlo Park, Foster City, Belmont, Pacifica, East Palo Alto, Colma, Broadmoor, Brisbane and others. These deployments promise to significantly decrease the investigative timelines to solve crimes and empower multi-agency collaboration to help prevent crime. These engagements were funded through the Organized Retail Theft grant from the California Board of State and Community Corrections.
And by the way, of all the submissions to this board, San Mateo County’s submission ranked number one in priority as the best submission in the state, with C3 AI Law Enforcement being the top investment in their initiatives to combat retail theft in the State of California. We also executed an agreement, actually multiple agreements, with San Mateo County to deploy C3 AI Property Appraisal, C3 AI Law Enforcement and C3 Generative AI applications. The Assessor-County Clerk-Recorder and Elections office licensed C3 AI Residential Property Appraisal and C3 AI Commercial Property Appraisal to modernize the appraisal of over 230,000 parcels each year, constituting more than $300 billion in assessed property value, and to do this more accurately, more expeditiously and more defensibly.
Our US federal business continues to show significant strength. Third quarter revenue was up over 100% year-over-year, and bookings were up 85%. We signed new and expansion agreements with the Missile Defense Agency, the Department of Defense, the United States Air Force, and the US intelligence community, including seven new generative AI agreements at the Missile Defense Agency, the United States Air Force, JROC, and the US Marine Corps. We have a really substantial growth opportunity in federal, and we’re laying the groundwork to seize it. We’re getting strong traction with partners in the federal space, not the least of which is AWS. We deployed our federal solutions on the AWS Marketplace for the US intelligence community in June of last year, and we’re basically on speed dial, collaborating with AWS federal executives every day in the defense and intelligence communities.
Additionally, and importantly, we entered into an enhanced partnership agreement with Paradyme. This is a high-end professional services organization in the federal sector, and we did this to increase our capacity for deploying appropriately cleared data scientists and application engineers into our classified government installations. We’ve aligned with Paradyme in our commitment to support the defense and intelligence communities in their critical missions related to national security. Under the new agreement, Paradyme will significantly grow its number of dedicated C3 AI staff to accelerate joint selling and to accelerate the delivery of these secure, classified, and powerful AI applications, providing predictive insights to these federal agencies.
In addition, Paradyme will jointly market the C3 Generative AI for Defense application and the C3 AI Law Enforcement application. I want to highlight the continued diversification in our go-to-market efforts. Our business continues to diversify across industries. In the last quarter, Q3, our bookings distribution by industry was 29% state and local government, 25% federal, defense and aerospace, 16% manufacturing, 11% agribusiness, 8% chemicals, 7% professional services, 2% energy and utilities, 1% food processing and consumer packaged goods, and 1% oil and gas. Now let’s take a minute, and this is important, so please pay attention, and look at the evolution of our go-to-market model. I particularly want to talk about the transition to consumption-based pricing.
You’ll recall that six quarters ago, we changed our go-to-market model from leading with a subscription-based pricing model to leading with a consumption-based pricing model. The general idea being that rather than customers licensing $10 million, $20 million, $30 million, $40 million, $50 million upfront, they would simply do a project, a pilot project, for, say, $0.5 million for six months: we’ll bring the predictive maintenance application, the supply chain optimization, the demand forecasting application, whatever it may be, into full production in six months, and the customer pays $0.5 million, which is insignificant. And if the customer likes it, they can keep it, and they pay, depending upon the volumes, generally $0.40 to $0.50 per vCPU hour to run the application.
Bring it live; if you like it, keep it. That was the idea. And this clearly and substantially lowered the price barrier for companies to engage with us. And we’ve seen this now in the number of deals that we close each quarter and the size of the transactions that we’re doing; it’s been very effective. So this model has been and continues to be very successful in driving engagement, and our contract volume has grown as we expected. Now, at that time we provided expectations of vCPU consumption ramps, pricing and pilot conversion rates. We’ve now compared those to the empirical data that we’ve realized over the past four quarters, and it turns out those initial assumptions have proved to be quite accurate. Our estimates on vCPU usage and how that ramps, say, over four, five, six, seven, eight quarters are pretty darn accurate. I would say that plus or minus 10%, it’s about right on. We assumed a pilot conversion rate of 70%, and from what we’re seeing, that appears to be about right. We’re also pleased to share that the pricing on the converted pilots is pretty much in line with our expectations. Now, when we began the transition, we assumed that virtually all pilots would convert to month-to-month, pay-as-you-go, consumption-based pricing. Counter-intuitively, it hasn’t quite worked out that way. The empirical data show that after the trial, when they have the option, the majority of our customers are choosing to sign multi-year subscription contracts rather than month-to-month, pay-as-you-go pricing. So why are they doing that?
After pilot completion, many customers are electing to deploy multiple applications across multiple business units, and they’re electing for a multi-year committed subscription pricing model as it offers them a more predictable cost model at large scale. Now note also, for those of you working on models, it’s not quite that simple, because that subscription agreement also involves a consumption runtime component. So this is not going to be easy to model for those of you working on your spreadsheets. Now, the good news is that our customers are making significant commitments to C3 AI, and our multi-year subscription agreements are a positive indicator of the depth of their commitment. Also, our data show that over the term of the contract, the revenue from these subscription agreements is generally equal to the revenue from our consumption agreements.
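For readers trying to model this, here is a minimal, purely illustrative sketch of why the two paths can net out to similar revenue. Every figure in it, the $0.45 midpoint of the $0.40 to $0.50 per vCPU-hour range cited above, the roughly $0.5 million pilot fee, the usage ramp, and the committed amounts, is a hypothetical assumption for the example, not a C3 AI disclosure.

```python
# Hypothetical illustration only: all numbers are assumptions, not C3 AI disclosures.
# It sketches why a month-to-month consumption ramp and a committed multi-year
# subscription (which also carries a consumption runtime component) can net out
# to similar cumulative revenue over roughly 10 quarters.

VCPU_RATE = 0.45        # assumed $/vCPU-hour, midpoint of the $0.40-$0.50 range cited above
PILOT_FEE = 500_000     # assumed six-month production pilot fee (~$0.5 million)

def consumption_revenue(quarterly_vcpu_hours):
    """Pilot fee plus pay-as-you-go usage billed at the assumed vCPU rate."""
    return PILOT_FEE + sum(hours * VCPU_RATE for hours in quarterly_vcpu_hours)

def subscription_revenue(quarters, committed_per_quarter, runtime_per_quarter):
    """Pilot fee plus a committed subscription plus a smaller consumption runtime component."""
    return PILOT_FEE + quarters * (committed_per_quarter + runtime_per_quarter)

# Assumed usage ramp over 10 quarters (vCPU-hours per quarter, growing 25% a quarter).
ramp = [200_000 * 1.25 ** q for q in range(10)]

print(f"Consumption path:  ${consumption_revenue(ramp):,.0f}")
print(f"Subscription path: ${subscription_revenue(10, 250_000, 50_000):,.0f}")
```

With these made-up inputs, both paths land near $3.5 million over ten quarters, which is the sense in which the choice can be roughly revenue neutral; the real outcome depends on a customer's actual ramp and committed amounts.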
So whatever the customer chooses is fine with us, because it’s revenue neutral over, say, 10 to 12 quarters. Bottom line, our go-to-market transition is working, as evident from the growth of our opportunity pipeline, the increase in our customer engagements, and increased revenue. Now, let’s take a minute and talk about generative AI, because this is a very significant market development. The opportunity ahead of us in generative AI is enormous. C3 AI has been at the vanguard of enterprise AI innovation now for 15 years. We’ve spent 15 years building enterprise AI applications for manufacturing, supply chain, demand chain, finance, defense and intelligence, smart grid, oil and gas, et cetera. In all fairness, we largely established the enterprise AI category.
Now, with the advent of generative AI, this is fundamentally changing the nature of the market. It’s changing the nature of the human-computer interface. It’s unlocking new use cases and creating breakthrough opportunities for nonobvious applications in new industries and enterprises that we would not have expected, and we’re just in the earliest days of this. Gartner predicts that by 2026, over 80% of enterprises will be using generative AI, including the deployment of generative AI-enabled applications in their production environments. This is up from basically zero in 2022. Per Gartner, and this is a quote, organizations that do not consistently manage AI risks are exponentially inclined to experience adverse outcomes, such as security failures, financial and reputational loss, and social harm.
We are leveraging our first-to-market advantage in scalable, trusted enterprise AI to bring secure, deterministic, hallucination-free, traceable, domain-specific generative AI solutions and generative AI-augmented applications to market. And we’re seeing a groundswell of interest in all of our generative AI offerings and remarkable uptake of the C3 Generative AI suite. Our generative AI customer activity has ramped sharply since we introduced this product to market four quarters ago. We’re now applying generative AI in areas we wouldn’t have expected: operator assistance in a major manufacturing facility, customer assistance at a global financial services company, and field technical support at a major multinational manufacturing group.
DLA Piper is using generative AI to significantly reduce the labor associated with limited partner agreement due diligence. Another leading law firm is using the corpus of S-1s contained in sec.gov to train a large language model to reduce the attorney labor in generating first drafts of S-1s for IPO candidates. I mean, imagine you train a large language model on the corpus of S-1s in sec.gov, and then you want to come up with the first draft of your S-1 if and when the IPO market opens again. You put in the name, the address, the finances, the key risk factors, hit the carriage return and, ipso facto, the first draft of the S-1 is there. And we’ve saved god knows how many person-weeks of legal associates’ time, where now they can just edit and get it done.
So it’s really pretty neat. Baker Hughes is using C3 Generative AI on top of Workday and ServiceNow to provide its global employees broad and immediate answers to employee questions about policies, benefits, compensation, what have you. Riverside County in California is using C3 Generative AI to assist their staff in answering citizen questions about taxation, zoning, building codes, et cetera. We continue to be really impressed by the broad range of applications to which this generative AI technology is applicable. In Q3, we closed 17 generative AI application pilots across a broad range of industries, including federal, defense, aerospace, agriculture, forestry, food processing, manufacturing, state and local government, chemicals, life sciences, and others.
And these generative AI pilots span Europe, Latin America and North America. There are no bounds to this. We continue to drive innovation in the generative AI market with our highly differentiated C3 Generative AI solution. Our newest innovations include omni-modal data support. You’ve heard about multi-modal. Well, multi-modal doesn’t cut it. Multi-modal includes text and images; that doesn’t get you very far. You need enterprise data, ERP, CRM, OSI PI data, telemetry, images, text, photos, voice. So in order for this dog to hunt, multi-modal doesn’t get you anywhere; you need omni-modal data support, which is what we support in the C3 AI product today. We have advanced parsing and embedding capabilities to increase reasoning on tables and images within documents.
We have planning and execution of complex multi-step workflows, multilingual support, and the automatic invocation of advanced math tooling. We do this in a manner that is virtually hallucination-free, and it’s LLM-agnostic. So our solution here is really quite unique, it’s getting a lot of traction, and we are installed today in some of the most secure installations on the planet Earth. As we announced in last quarter’s conference call, we made a well-considered decision to accelerate our investments in generative AI to seize market share in this large and rapidly growing market opportunity. As a result of that investment last quarter, our web page views are up 57% year-over-year in Q3.
Our organic search traffic is up 68%, and unique visitors to our website are up an eye-popping 230%. So this decision is already accelerating our business, already accelerating our product innovation, and it’s definitely driving broader market awareness of what we do in enterprise AI and in generative AI. That being said, we continue to expect that we will operate a positive free cash flow business in Q4, the next quarter. Now, while we’re not giving fiscal year ’25 guidance yet, we continue to expect positive free cash flow for the full year of fiscal year ’25. Let me talk a minute about our International Users Group Conference, C3 Transform. We’ll be holding our Fifth Annual International Users Group Conference, C3 Transform, in Boca Raton next week, March 5th through 7th.
Over 500 customers and partners are registered to attend, including leaders from almost every industry sector. General sessions on March 5th and 6th will be substance-packed, including C3 AI product roadmaps, C3 AI customer success stories, best practices in enterprise AI from C3 AI customers and partners, AI innovation in defense and intelligence, and discussions from experts about the past and future of generative AI. And I’m very pleased to announce that the C3 Transform general sessions will be simulcast to qualified investors and analysts; those of you who are interested can register online beginning this Friday and tune in real time to participate in any of the general sessions that you would like. And I think you’ll find them worthwhile; these are not cheesy sales pitches, folks.
This is pretty substantive stuff. And for those of you who are either interested in C3 or interested in AI in general, we hope you’ll join us and you’re welcome to. In conclusion, business is good, prospects look bright, and C3 AI has returned to accelerating growth. Now let’s talk about guidance. Given current market conditions, we are increasing revenue guidance for Q4 and for this fiscal year, 2024. For Q4, we’re anticipating revenue in the range of $82 million to $86 million, and for the year, we’re anticipating revenue in the range of $306 million to $310 million. Our non-GAAP loss from operations is expected to be in the range of $43.5 million to $51.5 million for the quarter, and our non-GAAP loss from operations for the year is expected to be in the range of $115 million to $123 million.
Now let me turn the call over to my colleagues, Juho Parkkinen, the Chief Financial Officer, and Hitesh Lath, the Chief Accounting Officer for additional detail. Juho?
Juho Parkkinen: Thank you, Tom. I will now provide a recap of our Q3 financial results and some additional color on pilot activity. Then I’ll discuss factors that will drive our financials in Q4 and in FY’25. All figures are non-GAAP unless otherwise noted. Total revenue for the third quarter increased 17.6% year-over-year to $78.4 million. Subscription revenue increased 23.4% year-over-year to $70.4 million and represented 89.8% of total revenue. Professional services revenue was $8.0 million and represented 10.2% of total revenue. Gross profit for the third quarter was $54.7 million and gross margin was 69.7%. As a reminder, we continue to expect short-term pressure on our gross margins due to a higher mix of pilots, which carry a greater cost of revenue during the pilot phase of the customer lifecycle.
Also, as we discussed last quarter, we expect short-term pressure on our operating margin due to the investments we’re making in generative AI and upgrading customers to our platform version 8.3. Operating loss for the quarter was negative $25.8 million, compared to our guidance range of negative $40 million to negative $46 million. The improvement in operating loss versus guidance was driven by our team’s ongoing focus on disciplined expense management, as well as the timing of additional investments we’re making to capture market share. At the end of Q3, our accounts receivable balance was $173.5 million, including unbilled receivables of $102.6 million. Total allowance for bad debt remains low at $400,000, and we have no concerns regarding collections.
The general health of our accounts receivable remains strong. Six quarters ago, we announced a transition from subscription-based pricing to consumption-based pricing, a standard in the industry. We anticipated and announced that this transition would have a short- to medium-term negative effect on revenue growth and RPO, as the average sales price was significantly reduced and the contracts often lacked a time-certain, multi-period commitment. As the transition progressed, we expected to return to revenue growth as customer engagements accelerate and customers expand their consumption. Reflecting our transition to more consumption-based contracts, we reported third quarter GAAP RPO of $186.9 million, which is down 28.8% from last year, and current GAAP RPO of $172 million, which is down 2.4% from last year.
Free cash flow for the quarter was negative $45.1 million. We continue to be very well capitalized and closed the quarter with $723.3 million in cash, cash equivalents and marketable securities. Now I’ll provide some additional metrics for the third quarter. During the quarter, we started 29 pilots, a 71% increase from last year and down 19.4% from last quarter due to the timing of sales activity. Ten industries were represented in our pilot starts. At quarter-end, we had cumulatively signed 138 pilots, of which 132 are still active, meaning they are still in their original three-to-six-month term, have been extended for some duration, have converted to a consumption or license contract, or are currently being negotiated for a production license. Our customer engagement count for the quarter was 445, an 80.2% increase from 247 a year ago.
As a last item on our call today, I wanted to inform you that after three wonderful years at C3 AI, I will be stepping down as the company’s CFO and assuming a role as VP of Finance. I have been at C3 AI for three amazing years, two of them as its CFO. I’m grateful to Tom and the Board of Directors for the opportunity. It has been a privilege to work beside Tom and the entire executive team in this role. As you all are aware, C3 AI is a unique, hyper-performance technology company: 100% in the office, incredibly hard-working, fast-paced. You have to be on 150% all the time. And for me personally, I need to take a break and step aside for a bit and spend more time with my family. I am pleased to inform you that my colleague Hitesh Lath is assuming the role of CFO.
I hired Hitesh about three months ago as our Chief Accounting Officer. Hitesh comes to us from Ernst & Young, where he was a partner for over eight years, and brings 24 years of total experience serving large multinational technology clients. I’ve known Hitesh for over a decade, and in fact, when I was at Ernst & Young, I worked for him on some of the engagements. From day one, Hitesh jumped right in and was heavily involved in preparing the financials for this quarter. I’m very excited to place the finance team in Hitesh’s capable hands, and I expect him to continue the progress we have made and take the team to new heights. I will remain a C3 AI employee, advising Hitesh and Tom and assisting the team as necessary to make sure our operations are smooth and our financial reports continue to be pristine.
Hitesh, would you like to say something?
Hitesh Lath: Thank you, Juho. I have been here at C3 AI for about three months, and I’m very excited to take on this role. These are great times for AI and for C3 AI, and I look forward to working with Tom and the rest of the executive team and being a part of the growth story. With that, I’d like to hand it over to the operator for Q&A.
Q&A Session
Operator: Thank you. [Operator Instructions] Please stand by while we compile the Q&A roster. And our first question comes from Timothy Horan of Oppenheimer.
Timothy Horan: Thanks, guys. Could you give us a few more metrics on the productivity improvements your customers are seeing, or success stories, and where generative AI is adding the most value as you see it now? Thanks.
Thomas Siebel: Timothy, hi, it’s Tom. Thanks for the question. To the extent that you have time, either you or one of your associates, dial in next week to our users group conference, because our customers are going to say what productivity increases they’re getting. Shell has stood up and said they’re getting $2 billion in economic benefit. The United States Air Force, who will be presenting, has stood up and said that they’re getting a 25% increase in aircraft availability from our predictive maintenance application that’s deployed as a standard in the Air Force. A 25% increase in capability in the United States Air Force is a lot of capability. So those are two examples. With generative AI, there’s just no telling where this goes.
Whether it’s sitting on top of Workday, sitting on top of ServiceNow, sitting on top of SAP, sitting on top of Salesforce, writing contracts for lawyers, the work we’re doing at DLA Piper. For those of you who tune into our users group conference, they’ll talk about the RSO application. This is one of the largest AI applications on Earth, deployed at the United States Air Force. We fused the data from 22 weapon systems into a unified federated image, F-15, F-16, F-18, F-35, KC-135, et cetera, including the telemetry from many of these devices. A B-1 bomber has 42,000 sensors, and many emit telemetry at something like 8 hertz.
So there’s a lot of data, on the order of 100 terabytes of data. And like any application of this nature, it has a highly technical interface. When we put generative AI on top of it, it has a Mosaic browser user interface. The Mosaic browser, you guys know, is basically the Google browser; Google copied it. The Mosaic browser came out of the University of Illinois, I think in 1993. So now anybody in the Air Force with the proper authority can simply ask any question about any weapon system or weapon systems and immediately get the answer. What are my readiness levels for F-35 squadrons in Central Europe? What’s my cost of running the B-1B program in each of the last five years? What are my biggest parts issues associated with the F-35 project?
Whatever it may be. So as we deploy these enterprise AI applications en masse across organizations, generative AI provides a user interface that makes the change management process much simpler. Those are examples in both of those areas. And I don’t know how big this generative AI thing is, but it’s bigger than a breadbox, I can tell you that.
Timothy Horan: Thank you.
Operator: Thank you. One moment for our next question. And our next question comes from Sanjit Singh of Morgan Stanley.
Sanjit Singh: Thank you for taking the questions. And sorry, Juho, to see you go, and best of luck spending time with the family. Tom, I wanted to ask a little bit about retrieval-augmented generation. It seems like that’s an AI access pattern that’s getting really, really popular across enterprises and across AI companies, and there are a lot of companies pursuing this opportunity. I wanted to hear a little bit about how C3 is differentiated in terms of providing this capability to enterprises, what differentiates RAG from C3 versus some of the other players that are trying to make this a reality for customers.
Thomas Siebel: Sanjit, enterprise AI in general or specifically generative AI?
Sanjit Singh: Generative AI, but specifically retrieval-augmented generation, RAG. It seems like one of those use cases that’s catching fire with enterprises, and I just wanted to see how C3 is allowing customers to pursue RAG use cases.
Thomas Siebel: Okay. Well, with RAG, what we’re usually referring to is a technology that allows the answer to be traceable. So when you ask the question, it tells you where the answer comes from, and most large language models will not do that. With C3 Generative AI, by combining 15 years’ worth of platform architecture with the large language model, we’re able to solve the generative AI equation in a highly differentiated manner. With C3 Generative AI, all the answers are deterministic, not random. That means every time you ask the same question, you get the same answer. Go on to Bard or go on to ChatGPT; it doesn’t work that way. Every time you ask the same question, or if two people ask the same question, you get different answers.
Secondly, everything that we do is traceable, and this is where we use RAG, retrieval-augmented generation technology, so we know exactly where the answer came from. Most large language models will not do that. Because we have the temperature turned down to zero, we don’t hallucinate; if it doesn’t know the answer, it tells you so rather than making one up. Most of these other solutions are unimodal; you know that. You can put in anything you want as long as it’s text. That’s not useful. Now they’re thinking about multimodal. Multimodal, if you ask Andrew Ng, means text plus images. That’s not that useful either. If we’re going to do generative AI, for example on the Air Force application, it needs to be omni-modal: text, images, graphics, enterprise data, telemetry, voice, signals, what have you.
And so we’re omni-modal. Almost all of these other LLM solutions, whether they come from OpenAI or Anthropic or Google or whoever they might be, are LLM-specific. Our answer is LLM-agnostic, so you can change out the large language model every week as these guys out-innovate each other. One of the big hobgoblins associated with large language models is data exfiltration. This is what is stopping these applications from being installed everywhere; see Samsung for details. There’s a lot of research, particularly coming out of Carnegie Mellon University, showing the cyber-attack vectors that are opened by these large language models. And then you know the story about Samsung, with the massive data exfiltration of their intellectual property onto the public internet.
Because of the way our system is structured, the LLM has no access to the data; therefore, it can’t exfiltrate data and it’s not a vector for cyber-attack. Also, because it doesn’t have access to the data, it doesn’t have the opportunity to hallucinate. Another reason why these LLMs are not being installed has to do with IP liability. The IP liability associated with these large language models that are trained on the public internet is unbounded. Unbounded liability is a problem for Bank of America. Unbounded liability is a problem at Cargill. It’s a problem at any responsible organization, even Morgan Stanley. The way our system works, only your data are available to the large language model, so there is no IP liability problem.
And finally, our solutions are LLM-agnostic. So our solutions are highly differentiated from the other solutions that are out there. And for any one of those reasons I mentioned, that system does not get installed at Goldman Sachs, does not get installed at Morgan Stanley, Koch Industries, the United States Air Force, or the CIA. That dog doesn’t hunt, and ours does. That’s the difference.
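To make the pattern described above concrete, here is a minimal, generic sketch of a retrieval-augmented generation loop with the temperature pinned at zero and source citations returned with every answer. It illustrates the general technique only; it is not C3 AI's implementation or API, and the names used (search_index, call_llm, the toy corpus) are hypothetical stand-ins.

```python
# Generic RAG sketch: retrieve source passages first, answer only from those passages,
# keep temperature at zero so the same question yields the same answer, and return
# citations so the answer is traceable. Hypothetical stand-ins throughout.

from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str
    text: str

# Toy in-memory "index"; a real deployment would query an enterprise search or vector index.
CORPUS = [
    Passage("maintenance-manual-12", "Replace the pump seal every 4,000 operating hours."),
    Passage("policy-hr-03", "Employees accrue 1.5 vacation days per month of service."),
]

def search_index(question: str, k: int = 5) -> list[Passage]:
    """Hypothetical retriever: naive keyword overlap stands in for real retrieval."""
    terms = set(question.lower().split())
    scored = [(sum(t in p.text.lower() for t in terms), p) for p in CORPUS]
    return [p for score, p in sorted(scored, key=lambda s: -s[0]) if score > 0][:k]

def call_llm(prompt: str, temperature: float = 0.0) -> str:
    """Hypothetical LLM call; temperature 0.0 would make a real model's output repeatable."""
    return "(model answer grounded in the passages above)"  # placeholder response

def answer_with_citations(question: str) -> dict:
    passages = search_index(question)
    if not passages:
        # Nothing retrieved: refuse rather than let the model guess.
        return {"answer": "I don't know.", "sources": []}
    context = "\n\n".join(f"[{p.doc_id}] {p.text}" for p in passages)
    prompt = (
        "Answer strictly from the passages below. If the answer is not there, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return {
        "answer": call_llm(prompt, temperature=0.0),
        "sources": [p.doc_id for p in passages],  # traceability: where the answer came from
    }

print(answer_with_citations("How often should the pump seal be replaced?"))
```

The design choices in the sketch mirror the points in the answer above: the model is only shown retrieved passages, it refuses when nothing is retrieved, the temperature is pinned at zero for repeatable output, and every response carries the document IDs it was grounded in.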
Sanjit Singh: That’s a super comprehensive answer, Tom. I’m really looking forward to watching some of those sessions at C3 Transform next week.
Thomas Siebel: And there is a session specifically on that with Nathaniel Christian, who you know, James Lawrence from RSO, and Rowan Curran from Forrester Research, at 9:15 on March 6th. I highly recommend it.
Sanjit Singh: Perfect. I did have one follow-up, and it goes to some of the color that you provided on customers coming out of pilot and how they are choosing to license going forward. With more of a preference, I guess, for subscription contracts, what does that imply for quarterly revenue over the next couple of quarters? Because we’ve been thinking there might be a headwind as customers move to consumption contracts, and that consumption would eventually grow and become revenue accretive. So now, with this mix shift, directionally, how can I think about that impacting quarterly revenue if that trend continues from here going forward?
Thomas Siebel: Let’s see, I’m trying to find a headwind in either of these stories. Sometime you’ll have to tell me offline where the headwind is. The bottom line is the customer has the option of licensing it on consumption, saying, okay, we’ll take it month to month and we’ll pay by the vCPU hour; or, as many of them do, they want price certainty because they plan on expanding in a pretty big way, and they say, hey guys, let’s talk about a three-year commitment, we’re going to commit to a certain amount in year one, year two, year three, and there’s a consumption pricing component. Now, the bottom line is, Sanjit, it’s revenue neutral to us over 10 quarters, so it doesn’t matter. Our job is to win the customer’s business, and however the customer wants to buy it, we’re going to sell it. But it has really no meaningful impact on our revenue modeling. It did come as a surprise, though; it’s counterintuitive.
Sanjit Singh: Yeah, that’s great. I’ll leave it there and give the floor to other analysts. Thank you so much, Tom.
Operator: Thank you. One moment for our next question. And our next question comes from Pat Walravens of JMP Securities.
Patrick Walravens: Oh, great. I’d like to start with one sort of financial question, and then, Tom, I have a big-picture one for you. So, Juho, maybe on your way out: this is your Q3, next quarter is Q4, and then you’d be giving us some guidance for fiscal ’25, just to keep us all in check a quarter ahead of it. But not to get ahead of ourselves, can we just get some sense of some boundaries in terms of what we should think about for fiscal ’25?
Juho Parkkinen: Pat, thanks, first of all for that question. But I mean, we’re still planning for FY’25. I mean, we think the opportunity is massive. We’re very excited about the generative AI opportunity, but it’s too early for us to give you any sort of guidance as to what the FY’25 revenues look like. We are confident on free cash flow positivity for ’25, however.
Patrick Walravens: Okay, that’s helpful. All right. Then Tom, the big one for you is, I’m just wondering about the future demand curve for, you know, nation-states, sovereign clouds, that sort of thing. I had breakfast with another AI executive this week, and one of the comments he made was, if you don’t have access, if you don’t have your own committed access to AI in-country, you are dead. So, is he overstating the case, or is this another area where there’s potentially a lot of work for you guys to do?
Thomas Siebel: Who’s “you”? Who doesn’t have access?
Patrick Walravens: Pick whatever. Pick the Prime Minister or President of any of our allies.
Thomas Siebel: Well, I think in France, Germany, and the UK, we do have sovereign access. So we’re not finding data sovereignty a problem, because one or more of the cloud providers does guarantee data sovereignty. I think actually the interesting trend, Pat, that we’re seeing, and this is really counterintuitive, is that we’re going to see the return of in-house data centers. Like what HPE is leading with, what do they call it, GreenLake, with these supercomputers that’ll be inside of Goldman Sachs and Bank of America and other firms. I think we’re going to see a return to in-house data centers, believe it or not, where people have the GPUs inside. But data sovereignty is not a difficult problem to solve, and we’re able to address it. We have customers all over the world. I believe it’s a level-zero requirement, and our friends at Azure and AWS in particular do nail that quite well.
Patrick Walravens: Okay. Can I go a little deeper on your comment about why we’re returning to in-house data centers?
Thomas Siebel: Why do I think that?
Patrick Walravens: Yeah, well what are you seeing that’s driving that?
Thomas Siebel: I’m hearing this in the marketplace, and even my engineers have talked about it, that it’s going to be cost-effective for us to build. Now, you’d think the constraint is that somebody needs to call up Jensen and beg for GPUs. Let’s say we can figure out how to do that; maybe we know somebody who knows Jensen. The hard part is we can’t get power. So everybody sees the constraint today as the availability of GPUs; I think the constraint soon is going to be the availability of power. You cannot get power to build a data center in Silicon Valley. You know this, right? Northern Silicon Valley is PG&E, which, I have no comment on that. And Southern Silicon Valley is some other power company, I know not what, but they will not give you power for a data center. So I think this GPU constraint is ephemeral; soon it’s going to be power.
Patrick Walravens: All right. Awesome. Thank you.
Operator: Thank you. One moment for our next question. And our next question comes from Mike Cikos of Needham & Company.
Unidentified Analyst: Great. Thanks, guys. This is Matt [indiscernible] on for Mike Cikos over at Needham. Good to hear that the consumption transition is tracking in line with your initial targets. On that note, what can you tell us about the size of your sales force, the ramping of these reps, and the number of pilots being signed per sales rep relative to your expectations?
Juho Parkkinen: Hi, Matt. This is Juho. So, on sales, we are still hiring actively in all of our sales functions, but not as fast as we’d like, so our sales force is not as large as we initially projected when we provided those assumptions, but we continue to ramp up on that. As for ramping up the sales force, I think previously we’ve said that everybody should make some sort of expectations and assumptions in their models as to when reps close their first pilots, but a reasonable assumption would be 1.5 to 2 quarters before they get fully ramped up and get going on closing their first pilots. And then, Matt, what was your third question?
Unidentified Analyst: Just if anything has changed as far as like how many pilots you’re expecting each rep to close. I know that was part of the initial assumptions.
Juho Parkkinen: Yeah, once again, we lead with the pilot sales motion, so all the sales reps, when they close new business, are expected to lead with a pilot. So we would expect each of the sales reps to bring in at least a pilot a quarter once they’re fully ramped up and everything’s at scale, but we’re not quite there yet.
Unidentified Analyst: Okay, great. Thanks for that. And then has the company noted any change in the length of sales cycles this quarter versus the prior quarter given management’s comments on customer considerations for AI governance?
Thomas Siebel: I think we published that, did we?
Juho Parkkinen: We took it out of the Q. But the sales cycle is about the same.
Thomas Siebel: It’s about the same. It’s about the same, Matt. It hasn’t changed.
Juho Parkkinen: Yeah, which is about 3.5 months.
Unidentified Analyst: All right. Beautiful. Thanks so much, guys.
Operator: Thank you. One moment for our next question. And our last question comes from Kingsley Crane of Canaccord Genuity.
Kingsley Crane: Great. Thank you for taking the question. I commend you on the great Q3 and really strong momentum in the business. With respect to your comments on NVIDIA and enterprise AI, I think what we’re all trying to figure out is when NVIDIA’s 200-plus percent growth and the hardware investments start flowing into the software layer. We’re seeing some of it already, but is that more like one year, five years? I think we see the tidal wave coming, but we’re all trying to time it.
Thomas Siebel: Well, you know, Kingsley, it’s a good question, but I can tell you all this infrastructure is not being put out there to play games, okay? It’s being put out there to run enterprise AI applications. These guys are out there building the highway for us, and thank goodness for that. I think you’re generally aware of the market interest in AI applications, and maybe more than generally, probably specifically aware of the interest in generative AI applications, and that’s what those GPUs are going to be running. And the good news is they’ll be there. So there are a lot of things coming into place.
Kingsley Crane: Thank you. That’s really helpful. And the last one is just that we’re seeing so much investment domestically, and you’ve had such great success both with federal and with state and local. How are you viewing the international opportunity today, both in Europe and in regions like APAC?
Thomas Siebel: Well, I disclosed, I think, last quarter that our performance in EMEA was significantly substandard, and I was pretty clear about that. It should come as no big surprise to anybody on this call that we made some organizational changes. I’m pleased to report that those changes have been quite positive, and we’re seeing the levels of sales activity and customer engagement increasing dramatically in the EMEA theater and, interestingly enough, the South American theater too. So thanks for the question, and we’re seeing very positive news there.
Kingsley Crane: That’s great. Thank you. And Juho, it’s been fantastic working with you; I wish you the best of luck. Thanks again.
Juho Parkkinen: Thanks, Kingsley.
Operator: Thank you. I would now like to hand it back to Mr. Siebel for closing remarks.
Thomas Siebel: Ladies and gentlemen, thank you for your time this afternoon. We appreciate the opportunity to update you on the state of our business. I can tell you, for those of you who have visited us, that this is a very unique place. We have the only full parking lot in Silicon Valley. There are 500 or 600 people here working with us today, shoulder to shoulder; they’re here Monday through Friday. And we’re working in Chicago, Atlanta, Tysons, Washington DC, London, Rome, Paris. We’re all at it. We have a very unique, high-performance corporate culture that I think is going to serve as a real strong competitive advantage in the long run. So this place is just kind of vibrating with excitement, and I think for all of us involved, it’s the professional experience of a lifetime.
And we thank you for the opportunity to share it with you, and we look forward to bringing you up to speed next quarter. One last thought: for those of you who have time, I really encourage you to dial into C3 Transform next week, because I think you’ll find these are professional presentations that are quite substantive, and it will be a good use of your time. Ladies and gentlemen, thank you very much, and we look forward to talking again soon.
Operator: This concludes today’s conference call. Thank you for participating and you may now disconnect.