C3.ai, Inc. (NYSE:AI) Q1 2024 Earnings Call Transcript September 6, 2023
C3.ai, Inc. beats earnings expectations. Reported non-GAAP EPS was -$0.09, against expectations of -$0.17.
Operator: Good day and thank you for standing by. Welcome to C3.AI’s First Quarter Fiscal Year 2024 Conference Call. At this time, all participants are in a listen-only mode. After the speakers’ presentation, there will be a question-and-answer session. [Operator Instructions] Please be advised that today’s conference is being recorded. I would now like to hand the conference over to your speaker today, Amit Berry. Please go ahead.
Amit Berry: Good afternoon and welcome to C3.AI’s Earnings Call for the First Quarter of Fiscal Year 2024, which ended on July 31st, 2023. My name is Amit Berry and I lead Investor Relations at C3.AI. With me on the call today are Tom Siebel, Chairman and Chief Executive Officer, and Juho Parkkinen, Chief Financial Officer. After the market close today, we issued a press release with details regarding our first quarter results, as well as a supplemental to our results, both of which can be accessed through the Investor Relations section of our website at ir.c3.ai. This call is being webcast and a replay will be available on our IR website following the conclusion of this call. During today’s call, we will make statements related to our business that may be considered forward-looking under Federal Securities Laws.
These statements reflect our views only as of today and should not be considered representative of our views as of any subsequent date. We disclaim any obligation to update any forward-looking statements or outlook. These statements are subject to a variety of risks and uncertainties that could cause actual results to differ materially from expectations. For a further discussion on the material risks and other important factors that could affect our actual results, please refer to our filings with the SEC. All figures will be discussed on a non-GAAP basis unless otherwise noted. Also during the course of today’s call, we will refer to certain non-GAAP financial measures. A reconciliation of GAAP to non-GAAP measures is included in our press release.
Finally, at times in our prepared remarks or in response to your questions, we may discuss metrics that are incremental to our usual presentation to give greater insight into the dynamics of our business or our quarterly results. Please be advised that we may or may not continue to provide this additional detail in the future. And with that, let me turn the call over to Tom.
Thomas Siebel: Thank you, Amit. Good afternoon, everyone, and thank you for joining our call today. We’re off to a strong start for fiscal year ’24. Our revenue came in at the high end of our guidance, exceeded analyst consensus, and we’re seeing significant traction across our business. This is the 11th consecutive quarter as a public company in which we have met or exceeded our revenue guidance. Following the release of ChatGPT in November of 2022, we are seeing a dramatic increase in demand for enterprise AI adoption. In Q1, we experienced strong traction with our enterprise AI applications and especially strong traction with C3 Generative AI. Let’s take a look at our revenue highlights for the first quarter. Total revenue for the quarter was $72.4 million, coming in at the high end of our guidance of $70 million to $72.5 million and exceeding the analyst consensus.
Subscription revenue for the quarter was $61.4 million, constituting 85% of total revenue. Gross profit for the quarter was $40.5 million, representing a 56% gross margin. Non-GAAP gross profit for the quarter was $49.6 million, representing a 69% non-GAAP gross margin. GAAP RPO was $334.6 million. Current RPO was $170.6 million. GAAP net loss per share was $0.56. Our non-GAAP net loss per share was $0.09; both were substantially better than analyst consensus expectations. We finished the quarter with $809.6 million in cash, cash equivalents, and investments, exceeding the average analyst consensus of $774.3 million. Net cash provided by operating activities was $3.9 million and free cash flow was negative $8.9 million, significantly better than the analyst consensus of negative $38.7 million.
The market interest in applying enterprise AI to business processes appears to be expanding exponentially, fueled by the interest in ChatGPT and other consumer generative AI tools initially released late last year. CEOs, business leaders, military leaders, and investors are all focused on how they can take advantage of these powerful new tools to improve operational processes. In Q1, we entered into new and expanded agreements with Saudi Arabia’s smart city NEOM; Nucor, a steel company; Roche; Pantaleon, a sugar producer in Central America; Ball Corporation; Cargill; Con Ed; Shell; Tyson Foods; and the US Department of Defense. Our partner ecosystem continues to expand. In Q1, we closed 60% of our agreements with and through our partner network, including Google Cloud, AWS, Microsoft, and Booz Allen Hamilton.
Our qualified partner opportunity has increased by over 100% in the past year, and our qualified pipeline with our cloud providers grew by 61% just from Q4 ’23 to Q1 ’24. C3.AI’s federal business is showing significant strength, with federal bookings up 39% compared with the year-ago quarter. The company continues to expand its work with the US Department of Defense with new and expanded projects with the Chief Digital and AI Office, CDAO, the US Marine Corps, the US Air Force, the Missile Defense Agency, and the Defense Counterintelligence and Security Agency. C3.AI commercial customers, including Shell, Georgia-Pacific, Koch Industries, Bank of America, and others, and the US Department of Defense, continue to expand their C3 application footprints, increasingly now including C3 Generative AI, realizing outsized economic benefit from digital transformations using C3’s enterprise AI.
Let’s talk about a few of these. First, the Department of Defense. Our business relationships with the Department of Defense are extensive and rapidly expanding. The DoD uses the C3.AI platform and C3.AI applications across many services, components, and combatant commands to realize significant improvement in readiness and decision advantage. One example: beginning in 2017, we started to work for the US Air Force to improve readiness and apply predictive maintenance for the E-3 Sentry, an aircraft that you probably know of as the AWACS. By fusing the handwritten maintenance notes with the flight logs, historical inventory, and pilot logs, C3 AI Readiness improved the Air Force’s legacy maintenance procedure substantially. Following this initial project, the United States Air Force Rapid Sustainment Office selected C3.AI for an additional readiness project called Condition-Based Maintenance Plus, CBM+, to apply similar analytics-based predictive maintenance approaches to the B-1 strategic bomber and other aircraft weapon systems.
This configuration of C3 AI Readiness for the United States Air Force, called the Predictive Analytics and Decision Assistant, or PANDA, went live into production and is now scaled out to over 16 Air Force aircraft weapon systems. This system, PANDA, was subsequently selected as the system of record for all United States Air Force predictive maintenance applications. This is the only system of record for an AI application in the Department of Defense that we’re aware of. The goal of C3 AI PANDA is to realize up to a 25% increase in overall aircraft mission capability, and when rolled out to all aircraft in the United States Air Force, this is budgeted to realize a $3 billion cost savings in maintenance and readiness. Let me talk for a minute about the CDAO.
The Department of Defense Chief Digital and AI Office. This is the organization that is chartered with selecting the AI platform of record for all of the DoD. We began working with them less than a year ago, initially to bring the C3.AI platform into production across a number of unclassified, secret, and top-secret enclaves as part of CDAO’s Advana ecosystem, a centralized data repository for the entire Department of Defense. Our first project showed how nodal analysis for contested logistics can radically improve when AI systems are applied to US Transportation Command, or TRANSCOM, data. This application took a simulation-based approach to provide options in response to global logistics disruptions. We were able to accelerate the time it takes to conduct this kind of nodal analysis from days to minutes.
Less than a year later, C3.AI is now engaged in a dozen projects through CDAO, including contested logistics, strategic force readiness, supply chain visibility, commander’s dashboards, and combined joint all-domain command and control. Let’s take a look at Shell. Shell has been an important customer since 2018. The C3.AI applications are continuing to expand across the entire Shell asset base, including upstream, downstream, integrated gas, renewables, and retail, to address asset integrity, optimization, ESG, and predictive maintenance. Today, Shell’s C3.AI predictive maintenance program monitors almost 20,000 pieces of equipment, and because C3.AI can identify failure in advance with very high levels of accuracy, this can both increase production and prevent potential disasters such as offshore oil rig failures, the cost of which may be incalculable.
The economic benefit for Shell is enormous, and they have given presentations at Bank of America and other conferences where they estimate it to be in excess of $2 billion per year. In the past three months, Shell and C3.AI have further expanded deployments: applying AI-based estimation techniques in subsurface reservoir management, deploying a new C3.AI-based Shell oil condition monitoring application for its customers to reduce unplanned downtime and optimize maintenance of heavy-duty assets, and expanding Shell’s use of the C3.AI ESG solution. Let’s switch to Koch Industries. We continued to expand our partnership with Koch, particularly at Georgia-Pacific and Flint Hills Resources. We generated almost 4 million monthly predictions across 300-plus assets using our reliability and C3.AI supply chain applications.
Georgia-Pacific is realizing up to 5% improvements in overall equipment effectiveness. Koch also initiated two Generative AI projects to help process data, documents, and files. Georgia-Pacific is improving efficiency in triaging and resolving equipment, production, and maintenance issues and automating processing for paper manufacturing. Flint Hills Resources is using C3 Generative AI to increase efficiency and improve information access in commodity trading operations. At Bank of America, our C3 applications are deployed to deliver customer insights, optimize business workflows, and provide recommendations to its liquidity product specialists and treasury sales officers. The liquidity team is responsible for managing the bank’s cash flows. Every day, over 500 liquidity and sales users log in to the C3.AI applications.
The bank is applying AI-based techniques to assess client responsiveness and sensitivity in a fluctuating interest rate environment. Three applications are in production today at Bank of America, and others are in development. All are expected to generate significant annual benefits, especially in a higher interest rate environment, where balance retention, optimal pricing of interest rates, and efficiency of sales and operations become important drivers of profitability and expense reduction. Let’s talk for a minute about C3 Generative AI because, ladies and gentlemen, this is big. By combining the power of the tried, tested, and proven C3.AI platform that we’ve built over the course of the last 14 years with the large language models that you’ve been reading about every day, C3 Generative AI enables immediate interaction with the relevant, and frequently massive, corpus of data, documents, and signals associated with enterprise domains.
For example, machines, factories, systems, supply chains, natural phenomena, biological systems, and operating divisions. We use a natural language interface to rapidly locate, retrieve, and present relevant data across an entire enterprise’s information systems, allowing users to use the full power of AI to optimize productivity, monitor systems, forecast demand, and, in general, understand what is happening, what will happen, how to plan, and how to maximize efficiency. The production adoption and customer success since our initial March 2023 C3 Generative AI release has been immediate. In the last quarter, C3.AI closed eight new agreements for C3 Generative AI, addressing use cases across multiple industries, including agriculture, consumer packaged goods, defense, intelligence, manufacturing, state and local government, oil and gas, and utilities.
To date, we have closed 12 generative AI agreements and have a pipeline of more than 140 qualified C3 Generative AI enterprise opportunities. Over 140 qualified opportunities in less than six months. So to put this in perspective, our qualified pipeline of Generative AI sales opportunities exceeds that of any other product we’ve introduced, and even that of all the products we’ve released in the last 14 years. This is big. To meet market demand, C3.AI today announced the immediate availability of the new C3 Generative AI suite, including 28 new domain-specific Generative AI solutions for industries, business processes, and enterprise systems. C3 Generative AI provides fine-tuned, tailored, domain-specific Generative AI solutions that mitigate the crippling problems that prevent the widespread industry adoption of LLMs. The market response to our Generative AI offerings is simply staggering.
We believe that the advent of Generative AI may more than double the immediately addressable market opportunity available to C3.AI, and now, with our suite of Generative AI products out the door, you can expect that we will be investing in the coming quarters to promote, market, and support these initiatives. The 28 applications that we released today, and that are available today, fall into three categories. First, C3 Generative AI for industries. This includes Generative AI for aerospace, for defense, for financial services, for healthcare, intelligence, manufacturing, oil and gas, telecommunications, and utilities. Our family of products to address the requirements of business processes includes C3 Generative AI for customer service, C3 Generative AI for energy management, C3 Generative AI for ESG, C3 Generative AI for finance, for human resources, for process optimization, for reliability, and C3 Generative AI for supply chain.
Finally, and importantly, we’re releasing a family of C3 Generative AI solutions for enterprise systems. Ladies and gentlemen, this is not slide-ware of the sort being offered by other software vendors. This is production software, available to order today, available to ship today, and live within 12 weeks. These products include C3 Generative AI for Databricks, C3 Generative AI for Microsoft Dynamics 365, C3 Generative AI for Oracle ERP, C3 Generative AI for Oracle NetSuite, C3 Generative AI for Palantir, for Salesforce, for SAP, for ServiceNow, for Snowflake, and C3 Generative AI for Workday. LLM support is immediately available in these products for Falcon 40B, Llama 2, Flan-T5, Azure GPT-3.5, AWS Bedrock Claude 2, Google PaLM 2, OpenAI GPT-3.5, and MPT-7B.
Additional support will be announced for leading LLMs as the market develops. By combining the power of LLMs and Generative AI with the tried, tested, and proven C3.AI platform, we believe C3 Generative AI solves the troubling problems endemic to all other Generative AI solutions currently being proposed in the marketplace. Firstly, the answers from C3 Generative AI are deterministic, not random. I mean, every time you ask the same question, you get the same answer, not a different answer. All answers are immediately traceable with one click to ground truth. So honestly, the LLMs that you’re playing with on ChatGPT and Google Bard or whatever, they don’t tell you where the answers come from because they don’t know where the answers come from.
With C3.AI we can tell you. We give you the link, where immediately you can go to ground truth, no matter what the question is. How am I doing against my diversity goals in North America, okay? Which of my product lines are the least profitable? What are the readiness levels of my F-35 squadrons in Central Europe? Where are the gaps in my satellite coverage in INDOPACOM? We give you the answer and it will tell you exactly where the answer came from. With C3.AI, by combining the LLM with all the investment in the platform, the LLM is firewalled from the data, minimizing the risk of LLM-caused data exfiltration (see Samsung for details, you’ve all read about it) and closing off the many LLM-caused cyber-attack vectors that are now becoming evident.
There’s a lot of research here. If you look at what’s going on in the research that Zico Kolter is doing at Carnegie Mellon, you’ll see that they’re finding really troubling cyber security problems associated with these LLMs that do not manifest themselves in the C3 solution. The C3 Generative AI solution assures the enforcement of all enterprise access and cyber security controls, in addition to providing multi-factor authentication and data encryption both in motion and at rest. LLM reasoning is limited to enterprise-owned and enterprise-licensed data, mitigating the potentially unbounded risks that you’re now starting to read about, okay, the virtually unlimited IP liability associated with other LLM solutions. And C3 Generative AI is LLM-agnostic, not dependent on any specific LLM, okay.
We allow enterprises to interchange LLMs at will, taking advantage of the massive ongoing innovation that we’re going to see in LLMs in the coming years. You can just switch one in and switch one out and all the applications keep running. Finally, because of the way C3 Generative AI is structured, the fact that we have firewalled the LLM from the data itself (we can go into this some other time), we have basically almost eliminated any risk of hallucination. So it basically does not hallucinate. If it doesn’t know the answer, it comes back and says, I don’t know the answer, I can’t tell you the answer, or I don’t have access to the answer. It’s not going to make up some line of creative prose like you’ve all seen with the LLMs you’ve played with on the Internet.
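As an illustrative aside, a minimal sketch of the retrieval-grounded pattern described above might look like the following: the language model only ever sees passages retrieved from the enterprise corpus, every answer carries links back to its sources, the system refuses rather than inventing prose when nothing relevant is found, and the LLM is a pluggable callable that can be swapped out. All names here are hypothetical; this is not C3.AI's actual implementation.

```python
# Illustrative sketch only; hypothetical names, not C3.AI's actual implementation.
# Shows the general retrieval-grounded pattern: the model never sees the raw data
# store, only retrieved passages; answers cite sources; the system refuses when
# nothing relevant is retrieved instead of inventing prose.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Passage:
    text: str
    source_url: str  # the "ground truth" link returned with every answer


def keyword_retrieve(question: str, corpus: List[Passage], k: int = 3) -> List[Passage]:
    """Toy retriever: rank passages by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(p.text.lower().split())), p) for p in corpus]
    scored = [(s, p) for s, p in scored if s > 0]
    scored.sort(key=lambda sp: sp[0], reverse=True)
    return [p for _, p in scored[:k]]


def answer(question: str, corpus: List[Passage], llm: Callable[[str], str]) -> Dict:
    """Compose an answer only from retrieved passages; cite sources; refuse if none."""
    passages = keyword_retrieve(question, corpus)
    if not passages:
        return {"answer": "I don't have access to an answer for that.", "sources": []}
    context = "\n".join(p.text for p in passages)
    prompt = ("Answer strictly from the context below.\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    return {"answer": llm(prompt), "sources": [p.source_url for p in passages]}


def echo_llm(prompt: str) -> str:
    """Stand-in 'model' for the demo: parrots back the retrieved context.
    Any real model could be plugged in here instead, since the application only
    depends on the llm callable's signature (prompt in, text out)."""
    return prompt.split("Context:\n", 1)[1].split("\n\nQuestion:", 1)[0]


if __name__ == "__main__":
    corpus = [Passage("Q1 subscription revenue was $61.4 million, 85% of total revenue.",
                      "https://example.com/q1-report#revenue")]
    print(answer("What was subscription revenue?", corpus, echo_llm))
    print(answer("Where are the satellite coverage gaps?", corpus, echo_llm))  # refusal
```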
All C3 Generative AI applications can be fully deployed within 12 weeks for $250,000, and they’re available today, right now actually, on the AWS marketplace, the Google Cloud marketplace, and the Azure marketplace. The licensing model is straightforward. C3.AI supports the customer to bring its Generative AI application into production. We do it in 12 weeks; after that, the customer continues to pay per vCPU hour with volume discounts. The Generative AI market appears huge. Bloomberg Intelligence predicts this market will reach $1.3 trillion by 2032. Much of this will accrue to chip manufacturers, cloud service providers, and professional service providers; the balance will accrue to Generative AI applications. If we double-click on this Generative AI application box, projected by Bloomberg to reach $280 billion in the same time frame, we believe the bulk of it will accrue to providers of software that enables businesses to apply LLMs to improve business processes and associated decision-making.
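As a rough illustration of the licensing arithmetic described above, the sketch below combines the $250,000 fixed deployment fee with a pay-as-you-go runtime charge per vCPU hour (using the $0.35 rate cited later in the Q&A). The volume-discount tiers are purely hypothetical assumptions, included only to show how discounts might be layered in.

```python
# Rough illustration of the consumption pricing described on the call: a $250,000
# fixed fee to reach production in ~12 weeks, then a per-vCPU-hour runtime charge.
# The discount tiers below are hypothetical; they were not disclosed on the call.

PILOT_FEE = 250_000          # one-time fee covering the 12-week deployment
RATE_PER_VCPU_HOUR = 0.35    # runtime charge after go-live ($ per vCPU hour)

# Hypothetical cumulative volume-discount tiers: (cumulative vCPU-hour cap, discount)
DISCOUNT_TIERS = [
    (2_000_000, 0.00),    # first 2M vCPU-hours at list price
    (float("inf"), 0.10), # everything beyond that at a 10% discount
]


def first_year_cost(vcpu_hours: float) -> float:
    """Pilot fee plus tiered consumption charges for a year of runtime."""
    cost, remaining, prev_cap = PILOT_FEE, vcpu_hours, 0.0
    for cap, discount in DISCOUNT_TIERS:
        in_tier = min(remaining, cap - prev_cap)
        cost += in_tier * RATE_PER_VCPU_HOUR * (1 - discount)
        remaining -= in_tier
        prev_cap = cap
        if remaining <= 0:
            break
    return cost


if __name__ == "__main__":
    # e.g. 100 vCPUs running around the clock for a year is roughly 876,000 vCPU-hours
    for hours in (100_000, 876_000, 3_000_000):
        print(f"{hours:>9,} vCPU-hours -> ${first_year_cost(hours):,.0f}")
```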
Now, countless start-ups today are proposing companies based on Generative AI for one industry niche or another, okay, whether for doctors’ offices or insurance or automotive or pharmaceutical companies and what have you. They’re taking their pitches around to the venture capitalists up and down Silicon Valley, and many are getting significant funding, in some cases with private market valuations in the billions of dollars. In each case, their big idea is that a handful of entrepreneurs propose to apply LLMs to develop market-specific, business process-specific, and application-specific LLM solutions. Well, C3.AI offers these solutions today, and we offer them from a well-capitalized company with almost 1,000 seasoned professionals, partnered with a powerful partner ecosystem and a global footprint.
The market opportunity appears enormous. We have demonstrated in recent quarters that we have solid management and expense controls in place. In Q4 of last year, our cash flow from operations was a positive $27 million. In Q1 of ’24, cash flow from operations was $3.9 million. Non-GAAP operating loss substantially beat market expectations in both Q4 of ’23 and Q1 of ’24. We finished Q1 of ’24 with $809.6 million in cash and investments, a decrease of $2.8 million from the prior quarter. Now, after careful consideration with our leadership and our marketing partners, we have made the decision to invest in Generative AI: to invest in lead generation, to invest in branding, to invest in market awareness, and to invest in marketing and customer success related to our Generative AI solutions.
The market opportunity is immediate and we intend to seize it. So while we still expect to be cash positive in Q4 of this year and in fiscal year ’25, we will be investing in our Generative AI solutions, and at this time we do not expect to be non-GAAP profitable in Q4 of ’24. We want to see what actually happens in the market in the next couple of quarters and how this plays out, and we will update you as we know more, but right now you can expect non-GAAP profitability to come somewhere in the Q2 to Q4 time frame of fiscal year ’25. We have a tight rein on our financial controls. We are operating a disciplined business and we’re making this decision to invest in Generative AI because we are confident that it is in the best interests of our shareholders.
C3.AI was well ahead of its time in predicting the scale of the opportunity in enterprise AI applications. When we began, the market was nascent, and as the market has developed and expanded, we’ve expanded our branding and our marketing offerings to meet market expectations. While we have believed for over a decade that this market would be quite large, even we could not have anticipated the size and growth rate of the AI market that we now address. C3.AI has spent the last 14 years preparing for this opportunity and now the market is coming to us. Our technology foundation is tried, tested, and proven. We have a strong portfolio of enterprise AI applications in place. We have a pricing and distribution model that meets the needs of the market.
We have a quality brand, a strong partner ecosystem, and a long list of satisfied customers. We are armed with a battalion of professional services employees deployed around the world. Our partner ecosystem with Google Cloud, AWS, Azure, Booz Allen, Baker Hughes, and others is well-developed and expanding. The company is well-capitalized, with a strong senior leadership team. And now I will turn it over to my colleague, Juho Parkkinen, our Chief Financial Officer, to talk about more specific financial details associated with our performance last quarter. Juho?
Juho Parkkinen: Thank you, Tom. I will now provide a recap of our financial results, give some color around the expected drivers of our financial results for the remainder of the year, and walk you through our second quarter and full year fiscal ’24 guidance. Finally, I will conclude with some additional information related to the consumption-based revenue model we introduced a year ago. All figures will be discussed on a non-GAAP basis unless otherwise noted. First quarter revenue increased 10.8% year-over-year to $72.4 million; subscription revenue was up 7.6% and represented 85% of total revenue. As we discussed last quarter, we expected professional services to be within our historical range of 10% to 20%, with our actual professional services coming in at 15% of the mix.
Gross profit for the first quarter was $49.6 million and gross margin was 68.6%. I would like to remind everyone on the call that we expect short-term pressure on our gross margin due to a higher mix of pilots, which carry a higher cost of revenue during the pilot phase of our customer life cycle. We are pleased with our progress in managing expenses and our success in getting the entire employee base bought into the mission of managing our company with expense discipline. Our success in expense management is reflected in our first quarter operating loss of $20.7 million, which was better than our guidance of a loss of $25 million to $30 million. Operating loss margin was 28.6%. As Tom mentioned, the Generative AI opportunity is so massive that we believe it is in the best interest of our company and our shareholders, together with our first-mover advantage, to seize the market opportunity by making incremental investments in sales, marketing, and customer success.
As a result, we are revising our fiscal 2024 expense guidance to reflect these investments. I will provide details when I discuss guidance. Turning to RPO and bookings. We reported GAAP RPO of $334.6 million, which is down 27% from last year. This was expected as we transition to consumption-based agreements. Current GAAP RPO is $170.6 million, which is down 1.7% from last year. We continue to see positive trends in diversifying our pilot bookings, with Q1 pilots representing eight industry sectors. Turning to cash flow, operating cash flow was $3.9 million in the quarter and free cash flow was a negative $8.9 million, reflecting expenses related to the build-out of our new corporate headquarters. We closed the quarter with a strong balance sheet, with $809.6 million of cash, cash equivalents, and investments.
Our total cash and investments balance decreased by only $2.8 million from last quarter. We continue to be very well capitalized. Our accounts receivables are in good shape at $122.6 million at the end of Q1, compared to $134.6 million last quarter. Total allowance for bad debt remains low at $359,000 and we have no concerns regarding collections. As it relates to our consumption business model, I would like to provide two key updates. First, we previously told you that we are assuming a 70% conversion rate from the pilot phase to the production phase. At quarter-end, we had signed a total of 73 pilots; 70 of these are active, meaning that they either converted within their original six-month term, were extended for one to two months, or are currently being negotiated for a production license.
Second, regarding consumption data, our actual vCPU consumption over the last three quarters is slightly higher than our original estimates. Finally, our customer engagements increased to 334 from 287 in Q4 ’23. Now turning to guidance. We’re guiding Q2 revenue to a range of $72 million to $76.5 million. We expect our non-GAAP loss from operations to range from negative $27 million to negative $40 million. As mentioned before, the Generative AI opportunity is so massive that we have decided to invest for success. As a result, we expect to cross into non-GAAP profitability in the course of FY ’25. We will provide more updates on this in future calls. We expect to be cash flow positive for Q4 ’24 and for the full fiscal year FY ’25. For full year FY ’24, we are maintaining our previous guidance for revenue in the range of $295 million to $320 million and increasing the non-GAAP loss from operations to a range between negative $70 million and negative $100 million.
I’d now like to turn the call over to the operator to begin the Q&A session.
Q&A Session
Operator: Thank you. [Operator Instructions] Our first question comes from Patrick Walravens with JMP Securities. You may proceed.
Patrick Walravens: Great. Thank you very much. So it’s great to hear about the demand levels and all the activity. Tom, can you talk a little bit about the linearity in the quarter, how that was and how things closed out? At your investor event, you told us that you had closed 16 agreements, and you ended up with 32. But if you look back a quarter, you had 10 at the midpoint and you ended up with 43. So it makes it seem like maybe the second half wasn’t quite as good as you would have hoped, but I don’t know. Maybe I’m interpreting that wrong, too?
Thomas Siebel: Or maybe the first half was great.
Patrick Walravens: Right. Okay.
Thomas Siebel: It’s like the glass-half-full model. I would say this might have been our best quarter ever in terms of linearity, I’m not sure, okay, in terms of predictability. Without getting too specific, I would say the business activity in the course of the quarter was quite consistent.
Patrick Walravens: Okay. And then if we multiply the average TCV times the number of deals, we get a total TCV number, which, I mean, you guys are the only ones to disclose, so thank you for that transparency. And if you look at that, it was around $26 million this quarter, and then last quarter it was $52 million, almost twice as much. So I just want to make sure we understand what’s going on here: is TCV not a good indication of what you’re actually doing in the quarter?
Thomas Siebel: Well, we used to compensate people on TCV, and that’s back when we used to do $10 million, $20 million, $30 million, $40 million, $50 million deals, Pat. And now we’re doing $250,000 projects in Generative AI and $0.5 million projects for the balance of our enterprise products. The Generative AI projects last 12 weeks; the other pilot projects generally last up to six months. So it follows directly that TCV goes down and RPO goes down. And by the way, gross margins go down in the short run, okay? Because when we’re doing these Generative AI pilots for $0.25 million, or whatever it may be, there is no way we are not going to succeed, at any cost, let’s say, on the first 50 of these guys.
And if we have to overinvest to make that pilot successful, we’re going to do it. And so I’m not certain that RPO is meaningful going forward. I’m not certain that TCV is either. I’ve been trying to drive that down, as you’re well aware. Fifteen or 20 quarters ago, our average contract value, I think, was about $15 million. And our average contract value today, I think, is less than $1 million, right? So this is a good thing.
Patrick Walravens: Okay, great. And then lastly, Juho, probably for you. You have a footnote on the balance sheet where there is a related party, presumably Baker Hughes, that still shows $75 million in accounts receivable from them, the same as last quarter. Are you guys okay with that?
Thomas Siebel: It’s a lot bigger than $75 million.
Juho Parkkinen: No, total. All right. Yes, we are okay with that. I’m not entirely sure how to interpret your question and we have no collection concerns from any of our customers. Our bad debt reserve is only at $359,000 and all of our customers are paying on time and in full, so no concerns there.
Patrick Walravens: Okay, thank you.
Operator: Thank you. One moment for questions. Our next question comes from Mike Cikos with Needham. You may proceed.
Mike Cikos: Hey, I appreciate the new pronunciation of the last name. Thanks for taking the questions here, guys. A couple of questions, first on the guidance, and I appreciate this pivot you guys are making to take advantage of this opportunity, where it really feels like the Gen AI demand has come online, right? I think my question is more around the guidance, if you will, and where I’m going with this is, given the increase we’re talking about in the go-to-market investments, which is obviously acting as a drag on your operating losses, no question about it, why aren’t we seeing some sort of benefit when looking at the fiscal ’24 revenues? Why maintain that guidance as we sit here today?
Thomas Siebel: Hi, Mike. We’ve been doing the best we can since we’ve been a public company to be credible in setting expectations, and we have met or exceeded expectations in every quarter that we’ve been a public company, okay? Now, we are in uncharted territory still with the consumption pricing model and we’re definitely in uncharted territory with Generative AI, okay. Now, if I were to take all the spreadsheets of all my product groups and business plans, you can be sure that they come to a larger number than we’ve talked about in guidance, okay. But our position is we’re comfortable with the guidance that’s out there today, okay, and at the same time, we feel comfortable that after a couple of quarters of acceleration, we’ll be able to look you straight in the eye and say, guys, we’re planning on significantly accelerated growth. But I don’t want to do it prematurely.
I don’t want to lose credibility and I think this is the responsible thing to do.
Mike Cikos: All right. Thank you for laying that out, I appreciate that. And I guess another one, I totally understand the commentary on RPO and even CRPO declining, and I guess this is more for Juho here, but with the transition to the consumption model, shouldn’t we be seeing CRPO remain more resilient as these consumption pilots start to convert? Or are consumption pilots, even when they move to production, not necessarily going to be showing up in CRPO? Can you provide any more color on that, please?
Juho Parkkinen: Yeah, absolutely. So effectively the CRPO is flat, right, and the way the consumption-based business model works is that we start with a pilot phase; that pilot amount would be in RPO in the quarter that we signed that deal. The consumption phase, unless the customer were to sign up for volume discounts, is never going to be in RPO, because it’s invoiced after consumption, so you only ever see that in revenue.
Thomas Siebel: So if it were a 100% consumption model, RPO would be zero.
Juho Parkkinen: That is exactly right.
Mike Cikos: Okay. And the expectation is that most of these customers would not be signing up for those larger volume commitments, so that is going to be an expected drag on the RPO and CRPO.
Juho Parkkinen: Yeah.
Thomas Siebel: Yeah.
Mike Cikos: Okay, all right, thank you for that. Thank you.
Thomas Siebel: And that’s much easier to buy than saying $10 million, $20 million, $30 million, $40 million, $50 million. I think one deal we did was $0.5 billion if I’m not mistaken, okay, well, it was $300 million plus a couple of things. Now we’re saying, hey, it’s $0.5 million, and if you like it, keep it, okay? And so after they pay their $0.5 million, if it goes that way, there is no RPO.
Juho Parkkinen: That’s right.
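To make the RPO mechanics discussed above concrete, here is a small illustrative calculation with hypothetical numbers: only the contracted pilot fee lands in RPO when the deal is signed, while post-conversion usage is invoiced after it is consumed and therefore shows up only in revenue.

```python
# Illustrative numbers only, following the explanation above: under the consumption
# model, only the contracted pilot fee enters RPO in the quarter the deal is signed;
# post-conversion usage is invoiced after it is consumed, so it appears in revenue
# but never in RPO (unless the customer pre-commits for volume discounts).

PILOT_FEE = 250_000  # contracted, so it is booked into RPO at signing

# Hypothetical usage-based billings in the four quarters after conversion.
quarterly_consumption_revenue = [0, 120_000, 180_000, 240_000]

rpo_contribution = PILOT_FEE                                    # the only contracted amount
total_revenue = PILOT_FEE + sum(quarterly_consumption_revenue)  # pilot + consumption

print(f"RPO contribution at signing  : ${rpo_contribution:,}")
print(f"Revenue over first 5 quarters: ${total_revenue:,}")
# RPO stays flat (or declines as the pilot is recognized) even while consumption
# revenue keeps growing, which is the dynamic described in the exchange above.
```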
Mike Cikos: Got it, got it. And maybe just one more if I could, and apologies for taking all the time here, but I did just want to circle up. I know that you guys are talking about the C3 Generative AI pilots being $250,000 and 12 weeks, and the remaining product lines, I believe, and correct me if I’m wrong, typically have about six months for those pilots. Can you help us think through what’s different: is it just that time to value on these Gen AI pilots is so much quicker that you think these customers can convert that much faster?
Thomas Siebel: It is quicker, Mike. In one case, we might have to load all the data, model the supply chain, and build machine learning models that fit the scale of an enterprise like Cargill, which is roughly a $100 billion business, or the United States Air Force, which is a pretty big business, okay? With generative AI, we don’t have to do any of that, okay? We just load their data into a deep learning model, and it takes the learnings from those data and stores them in a vector store. And we’re kind of the Masters of the Universe at aggregating structured data, unstructured data, sensor data, enterprise data, images, what have you, into a unified federated image. We have 14 years of that, we’re really good at that, so that’s easy. And then all the mappings are worked out by one deep learning model and stored in a vector data store, so we don’t have these huge data science projects that we have at all these other organizations.
So, yes, the time to value is faster, the implementation effort is easier, and technically, honestly, it’s an order of magnitude easier problem.
Mike Cikos: Awesome. Thank you very much, guys, I appreciate it.
Thomas Siebel: And there is nobody who doesn’t want to talk about it.
Mike Cikos: Great to hear. Thank you, guys.
Operator: Thank you. One moment for questions. Our next question comes from Kingsley Crane with Canaccord Genuity. You may proceed.
Kingsley Crane: Hi. Thanks for taking the question, and congrats on the result. It sounds like your plan is to invest more in lead-gen, branding, market awareness, and customer success. You’ve mentioned that you have more than 140 qualified leads in Gen AI, so it seems like you’ve done tremendously well in generating leads. So as we think about the incremental change to the profit guidance, how are you balancing investments between customer success and pilot conversion versus lead-gen and brand awareness?
Thomas Siebel: I’m sorry, what was the — how we’re balancing between customer success and lead-gen? Okay. A lot of what we’re looking at is branding and lead-gen, Kingsley. Kind of like what we did in 2021 when we established the brand for enterprise AI; that worked out pretty well, and we’re going out to plant a flag on this Generative AI market. We’re first to market. How many companies out there have 28 enterprise Generative AI solutions? I know how many: exactly one. And we’re going to communicate that, we’re going to make it available. So that’s what the bulk of it is. At the same time, if we have a customer in any one of these markets where we need to really get resources to make them successful with their pilot, you can be sure we’re going to make them successful with their project. And as we get down the learning curve, we’ll get increasingly efficient at it and gross margins will go up.
Kingsley Crane: Okay, thanks, Tom. That makes a lot of sense. And if I could ask one more, hoping to gain some clarity on the 28 domain-specific Gen AI solutions. So, for example, if you’re an oil and gas customer, you’re building a solution in sales, and this is ultimately linked into Salesforce, does that require three separate apps, or how would that be consumed and priced?
Thomas Siebel: That will be one. Basically, it’s priced per CPU. I mean, it looks like it’s going to be a judgment call whether those are discrete projects or whether the union of them is one generative AI application. As you have described it, the union of them is one generative AI application: it will be $0.25 million to bring it live in 12 weeks, and after that you pay $0.35 per vCPU hour or vGPU hour.
Kingsley Crane: Okay, very helpful. Keep up the good work. Thank you.
Thomas Siebel: And when it comes to runtime pricing, it doesn’t really matter whether it’s one application or whether it’s three; it’s going to be the same amount of runtime.
Kingsley Crane: Thank you.
Operator: Thank you. One moment for questions. Our next question comes from Pinjalim Bora at JPMorgan. You may proceed.
Noah Herman: Hey, guys, this is Noah on for Pinjalim, thanks for taking our questions. So on the 70 pilots that are active at the moment, if we exclude the pilots that have been extended one or two months, is there any way to parse out how many of these pilots are under production licenses? And I have a quick follow-up.
Thomas Siebel: Thanks, Noah, for the question. So I think at this point, the way we are looking at this is that there were 73 pilot deals that we’ve been doing; 70 are either converted, still in the pilot process, or we are negotiating a production license on those. I think the meaningful message you should take from this is that out of 73 pilots, we only have three noes. So we feel very comfortable and very bullish about how that pilot program is currently progressing.
Noah Herman: Understood. And then maybe just to double-click on the gross margins, I know you commented that with the transition to consumption —
Thomas Siebel: Let me comment on the noes. The noes weren’t because the pilots weren’t successful, okay. I know exactly what they are, and they were hugely successful. That said, what happened is the genius CIO went to the CEO and said, we’re going to build this ourselves out of a bunch of tinker toys, so they let him go do that. He’s going to go at that for about two years. They’re not going to be able to resolve their cyber security problems, their IP infringement problems, they’re going to have data exfiltration problems, they’re going to have random answers, and they’ll be back. So the sales cycle there was just a little bit longer than we thought. They’re not lost. They’re just suspended. Sorry, could —
Noah Herman: No. I appreciate that, appreciate the clarity. And just a quick follow-up on the gross margins. Just any way to kind of help us with our model going forward in terms of how to think about gross margins, I know you laid out your commentary about this quarter’s impact, but just any additional thoughts there would be helpful for the year?
Juho Parkkinen: I mean, I think, Noah, the punch line is that we are still expecting some margin pressure, and as there are going to be more pilots, there’s going to be margin pressure until consumption becomes a more dominant portion of the revenue stream, which would then offset it and start picking that margin back up. So continue to expect some pressure on the gross margin.
Operator: Thank you. One moment for our next question. Our next question comes from Sanjit Singh with Morgan Stanley. You may proceed.
Sanjit Singh: Thank you for taking the questions. I had one for Tom and one for Juho. Tom, what’s the vision around multimodal? I know there’s a lot of interest around the language models, but as you think about the different diffusion models, video, audio, image, what’s the vision around supporting those types of models if multimodal becomes the dominant deployment architecture for enterprise AI?
Thomas Siebel: Are you talking about data, Sanjit? I’m not sure I understand the question.
Sanjit Singh: Yeah, what I was referring to is, obviously the GPT models are language models and they’ve taken the world by storm, but there are other AI models that deal with image, audio, video, with other sources of data, as we think of —
Thomas Siebel: So you are commenting on the fact that these large language models tend to be almost exclusively limited to text, HTML, and code. So other sorts of data they don’t know how to ingest. Okay, good, good. So let’s talk about this. We are the Masters of the Universe at ingesting what you call multimodal data, okay: images from space, trajectories of hypersonics, high-speed telemetry, trading volume, the rate at which electrons are going across the grid, enterprise data, free text. And so we’re using our standard architecture to ingest those data. We’re using one of our standard deep learning models to basically parse this data and store all the relationships in a vector data store, okay. All we’re using the large language model for is interacting with you and me, to handle the natural language, to understand what we’re saying, and it takes the answer back from the data and gives it to us in prose rather than some gibberish that might be spewed out of SAP.
Sanjit Singh: Right. It makes perfect sense.
Thomas Siebel: That is one of the reasons why people find our generative AI solution attractive: we’re tried, tested, and proven at ingesting any kind of data they can think of.
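As a simplified, purely illustrative sketch of the ingestion pattern described in this exchange (hypothetical names, not C3.AI's actual pipeline), the example below normalizes heterogeneous records such as sensor telemetry, free-text maintenance notes, and image metadata into a single searchable store, with each entry keyed back to its source system; a language model's only downstream job would be to phrase a retrieved result in natural language.

```python
# Simplified sketch of the multimodal ingestion pattern described above; all names
# are hypothetical, not C3.AI's actual pipeline. Heterogeneous records are
# normalized into one searchable store, each entry keyed back to its source system.

from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class Record:
    source_system: str          # e.g. "historian", "CMMS", "imagery-catalog"
    kind: str                   # "telemetry" | "free_text" | "image_meta"
    payload: Dict[str, Any]
    text_view: str = field(default="")  # normalized text used for indexing/search


def normalize(record: Record) -> Record:
    """Turn any record type into a flat text view so one index can hold them all."""
    if record.kind == "telemetry":
        record.text_view = " ".join(f"{k}={v}" for k, v in record.payload.items())
    elif record.kind == "free_text":
        record.text_view = record.payload["note"]
    elif record.kind == "image_meta":
        record.text_view = f"image of {record.payload['subject']} taken {record.payload['time']}"
    return record


def ingest(records: List[Record]) -> List[Record]:
    """Build the unified store; a real system would also compute embeddings here."""
    return [normalize(r) for r in records]


if __name__ == "__main__":
    store = ingest([
        Record("historian", "telemetry",
               {"pump_7_vibration_mm_s": 9.4, "ts": "2023-07-30T02:00Z"}),
        Record("CMMS", "free_text",
               {"note": "Pump 7 bearing noise reported by night shift."}),
        Record("imagery-catalog", "image_meta",
               {"subject": "pump 7 housing", "time": "2023-07-30"}),
    ])
    for r in store:
        print(f"[{r.source_system}] {r.text_view}")
```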
Sanjit Singh: Understood. And then the question for Juho: if I look at the presentation and where we are in the transition, Phase 1 versus Phase 2, it sounds like we’ve just started Phase 2, and the chart sort of implies that we get to revenue neutral by seven quarters in, and we’re about four quarters in, and then revenue accretive about eight quarters in, so about three or four quarters away. Is that still the timeline we should be thinking about in terms of revenue acceleration? Any color around that would be hugely helpful.
Juho Parkkinen: So, Sanjit, the chart that you’re looking at, I think you should think about it on a kind of per-customer basis, right? It’s not necessarily the entirety of how our business is going, but the idea is that some of the original early pilots from last year’s Q2 and Q3 are now starting to enter the Phase 2 category. And as I mentioned in my prepared remarks, we have preliminary data on actual vCPU consumption for those first three quarters and it’s slightly above what we modeled before. So we are in the fourth quarter of the transition and we are starting to see some very positive indicators with respect to how the consumption will run for these consumption-based deals.
Sanjit Singh: Got it. I appreciate the context. Thank you.
Juho Parkkinen: Thank you.
Operator: Thank you. And we have time for one final question. Our final question comes from Michael Turits with KeyBanc Capital Markets. You may proceed.
Eric Heath: Hey, thanks for taking the question. This is Eric Heath on for Michael. So I wanted to ask on Baker Hughes, a two-part question. First, can you give us some color on what changed with the relationship such that they are no longer considered a related party? And then secondly, and I hope this isn’t too nuanced, but if I take that $16.5 million of Baker revenue contribution for two months in the quarter and extrapolate that out for an additional month, I get about $24 million versus what we were thinking of around $20 million. So I guess my question is, how did the Baker Hughes contribution in the quarter compare to your expectations, and is there any way to understand how the non-Baker Hughes business did relative to your guidance? Thanks.
Thomas Siebel: First of all, Baker Hughes is no longer a related party because they monetized some of their stock. Remember, they bought some stock some time ago for about 3 bucks and they sold it for, I forget what the rough number was, it could be off by a buck or 2, I don’t know, but they bought it for next to nothing and sold it for a lot, so it was a pretty darn good trade. And today, because they own less than 5%, by definition they are no longer a related party. As it relates to the Baker Hughes revenue, you should actually know that; didn’t we provide that in the memo we wrote like three quarters ago? I’m sorry, I forgot who asked the question.
Juho Parkkinen: What was your name?
Eric Heath: Hey, Tom, it’s Eric from KeyBanc.
Thomas Siebel: Okay. We were actually — it’s on our website. It’s on our IR site. You’re going to be able to see what the minimum Baker Hughes revenue is. We provided that in great detail and it’s on the IR site.
Eric Heath: Any way to just quickly frame how it was in the quarter relative to your expectations and contribution? Sorry if it’s something that I forgot about.
Thomas Siebel: Exactly what we expected.
Juho Parkkinen: That’s right.
Thomas Siebel: Exactly what we expected.
Eric Heath: All right. Thank you.
Juho Parkkinen: Thank you.
Amit Berry: I guess that was our last question. Ladies and gentlemen, that’s all from Tom and Juho. Thank you for your time. Thank you for your attention, and we look forward to providing an update at the end of our second quarter. So thanks a lot. Stay tuned and hopefully we’ll have some exciting things to report.
Operator: Thank you. This concludes today’s conference call. Thank you for participating. You may now disconnect.