Zapata Computing Holdings Inc. Common Stock (NASDAQ:ZPTA) Q1 2024 Earnings Call Transcript May 15, 2024
Operator: Good day, and welcome to the Zapata Computing Holdings Inc. First Quarter 2024 Financial Results and Business Update Conference Call. As a reminder, this conference is being recorded. It is now my pleasure to introduce Eduardo Royes. Thank you. You may begin.
Eduardo Royes: Thank you. On today’s call are Christopher Savoie, Chief Executive Officer and Co-Founder, Yudong Cao, Chief Technology Officer and Co-Founder, and Jon Zorio, Chief Revenue Officer of Zapata AI. Earlier today, Zapata AI issued a press release announcing its first quarter 2024 results. Following prepared remarks, we will open up the call for questions. Before we begin, I would like to remind you that this call may contain forward-looking statements. While these forward-looking statements reflect Zapata AI’s best current judgment, they are subject to risks and uncertainties that could cause actual results to differ materially from those implied by these forward-looking statements. These risk factors are discussed in Zapata AI’s filings with the SEC and in the release issued today, which are available in the Investors section of the company’s website.
Zapata AI undertakes no obligation to revise or update any forward-looking statements to reflect future events or circumstances, except as required by law. With that, I’d like to turn over the call to Christopher Savoie, CEO and Co-Founder of Zapata. Christopher?
Christopher Savoie: Thank you, Eduardo, and greetings to all. We appreciate you joining our inaugural earnings call. Since this is the first earnings call of many, we will spend a few minutes setting some context and highlighting the Zapata AI story before turning to an overview of business, technical, and financial highlights from Q1 and more recent developments. Zapata AI is at the forefront of the industrial generative AI revolution, and we’re extremely proud to now be trading on the NASDAQ exchange following the closing of our business combination with Andretti Acquisition Corp in late Q1. We would like to once again thank the Andretti team for their vision, partnership, and unwavering support. We are confident that our public listing will allow us to further advance our position as a technology leader in this nascent industry while providing the resources required to scale up our business.
I look forward to providing our stakeholders with updates on our success on a quarterly basis going forward. Since our founding in 2017, Zapata has been an early pioneer in generative AI, innovating new techniques before the term generative AI entered the mainstream as the digital transformation priority that it is now. And we have only accelerated the pace of our innovation as we continue our journey as pioneers and trailblazers today. While much of the industry is focused on large language models and related LLM use cases, we are expanding the scope of what’s possible with this revolutionary technology to address mission-critical operations, analytics, and business intelligence use cases. Specifically, we are unlocking the powerful insights that can be hidden in real-time data from sensors, nodes, and other previously untapped sources across an enterprise’s operating environment.
This includes predicting and simulating the future more accurately and in near real-time, detecting anomalies more quickly and with higher accuracy, creating virtual sensors to infer data for critical variables that would be difficult or impossible to measure directly, and generating optimization recommendations to drive better and faster decision-making. And as we’ve demonstrated working trackside with Andretti Global at INDYCAR races, we can run these decision support capabilities in challenging, extreme, industrial environments on edge networks. For those unfamiliar, edge refers to processing that takes place nearer to where the data actually originates or where it’s actually being generated. Zapata is also unique as one of the only pure-play public companies that offer quantum and quantum-based algorithms for generative AI.
These techniques deliver unique advantages, including more accurate and more expressive AI models. They also provide our customers with an on-ramp to the revolutionary potential of quantum computing as we expect the hardware to mature in the coming years. But today, through these techniques, Zapata is enabling our customers to benefit from supercharged models and Gen AI capabilities. We have long held that generative AI will be the first place we see a practical quantum advantage, and we continue to believe this today. We are seeing vast opportunities for what we have coined Industrial Generative AI, which spans many industries including financial services, telecom, transport and logistics, government and defense, biotech and pharma, and manufacturing, including automotive, energy, chemical, and materials.
Our team has deep experience working across many of these sectors. Throughout our history, our enterprise and government customers have included Sumitomo Mitsui Trust Bank, BBVA, BP, BASF, DARPA, and Andretti Global, and we have worked with numerous research and university partners. At a time of growing anxiety about big tech monopolizing AI, we are addressing the concerns we hear from our prospective customers every day about being forced to lock in with an AI vendor. With Orquestra, our open-source platform for developing and deploying industrial generative AI, we are giving enterprises the freedom to use the best hardware and software tools from across the ecosystem for their unique challenges and use cases. We support a range of deployment options, and all of our models and custom applications can be integrated with and live in production in any customer’s environment.
During the first quarter, we continued to make strong progress in demonstrating the powerful breakthrough technology we have at our fingertips. To dive deeper into this, in a few minutes I will turn the call over to our CTO, Yudong Cao. But first, I want to emphasize how excited we are about our sales and business development pipeline. Based on active discussions we are having with potential customers and partners today, we are particularly excited about the opportunity set ahead of us in five key industries: pharma and biotech, financial services, insurance, telecom, and defense. To elaborate a bit, I’ll start with pharma and biotech. We are confident that we have the potential to significantly reduce drug development and manufacturing timelines, thus bringing drugs to market faster, while also materially reducing development and production costs.
In financial services and insurance, recent work we have delivered has demonstrated our ability to drastically reduce compute costs and runtimes for highly complex risk and compliance models, as well as dramatically speeding up classical approaches to Monte Carlo simulations. These techniques have the potential to transform core operations, freeing up resources, bringing new products to market faster, unlocking substantial savings for customers, and enabling them to make faster, more informed pricing, risk, and trading decisions. In telecom, early customer discussions indicate that there is a potential market for us to deploy the same capabilities and expertise we have delivered trackside for Andretti to anticipate, predict, and respond to network disruptions before they occur.
We can also help telcos optimize their network operations to prepare for high-traffic moments while streamlining how they respond to these events (and wait till you hear about the yellow flag prediction). These are billion-dollar challenges. Applying our generative AI and anomaly detection capabilities could allow communication service providers to significantly reduce these types of burdensome and costly events, which in turn could result in major OpEx reductions and improvements in operations, agility, and customer satisfaction. To this end, we are excited to formally announce our global strategic partnership with Tech Mahindra, one of the most highly respected global technology partners in the telecom space. This partnership is a game changer for Zapata, and it provides us access to Tech Mahindra’s incredible portfolio of global telecom providers, bringing us closer to where we can do the most good with our quantum-based generative AI solutions, especially our real-time machine learning on the edge, our sensor intelligence platform, and our optimization capabilities.
During Q1, we have been in active dialogues with two leading telcos in the US exploring the possibilities in this collaboration. Jon will speak more about our partnership with Tech Mahindra in a few minutes. Finally, defense. In this area, we have shown that our technology can build high-performance applications that provide literally mission-critical decision support capabilities in contested and unpredictable edge environments where timing is of the essence and where connectivity is typically a challenge. We cannot wait to share more as we continue to make progress on these and other opportunities. Now over to Yudong to summarize our technical highlights from Q1. Yudong?
Yudong Cao: Thank you, Christopher. Picking up on the pharmaceuticals theme, in February, Zapata AI made history by generating, for the first time anywhere, viable cancer drug candidates using quantum-enhanced generative AI, together with our partners at Insilico Medicine, the University of Toronto, and Harvard University. Not only were those drug candidates completely new to the scientific literature, they also showed superior binding affinity over molecules generated by purely classical generative AI. We believe this milestone demonstrates the potential for quantum-enhanced generative AI to aid in molecular discovery and other complex design challenges. To pursue these opportunities further, we announced in February a partnership with quantum compute provider D-Wave to develop commercial applications for quantum-enhanced generative AI in molecular discovery.
In defense, we have continued to reach technical milestones in our work with DARPA and the US DoD, including software delivery. With the Phase 2 award we were granted last year, we have been steadily improving BenchQ, our open-source tool for benchmarking quantum computing applications. This work will help the US Government and the quantum community understand the resources required to unlock the utility of high-value quantum use cases across multiple hardware modalities. We’re in year three of this partnership, and we’re not slowing down. Finally, I’ll end with a major milestone from our work with Andretti Global. For the first time, we deployed our yellow flag prediction model live in a race on our Sensor Intelligence Platform, or SIP.
For those unfamiliar with auto racing, a yellow flag is flown when there’s an accident, dangerous condition, or other hazard on the racetrack that may require response vehicles or personnel to intervene. Given the massive danger posed by race cars going 200-plus miles per hour past the tow trucks and around people standing on the racetrack, all of the race cars on track are forced to drastically slow down, bunch together, and hold position while the issue is addressed. The implications of the yellow flag on race strategy are significant. Every driver has to pit several times during the race to refuel and change tires. Strategically timing these pit stops around yellow flags can win or lose a race, because it can mean pitting while the field is spread out and moving at full speed or pitting when everyone is bunched together and unable to pass under the yellow flag.
Either can be advantageous depending on the scenario. Using a combination of live and extensive historical race data from sensors around the track and the cars themselves, we were able to accurately predict the likelihood that a yellow flag will be flown in the next five laps, giving Andretti’s strategists a critical window of time to adjust their strategy before the yellow flag was actually flown. We have recorded this demonstration with the actual race data from the INDYCAR St. Petersburg race earlier this season, and it’s available on our website and our digital channels; check it out. We look forward to continuing to evolve our capabilities and to sharing additional such examples of our technology in action. With that said, I’ll turn it over to Chief Revenue Officer, Jon Zorio, for an update on our commercial progress.
Jon Zorio: Thanks, Yudong. The yellow flag prediction model Yudong mentioned represents just one component of the powerful suite of generative AI and ML applications that we’re developing and deploying in support of Andretti’s various race teams. As another compelling example, we’ve deployed generative AI to create virtual sensors that infer real-time data for key race variables that would otherwise be unmeasurable, like tire slip angle during the race. We’ve been able to tune our models to over 99% accuracy. Other use cases include lap time prediction, fuel savings optimization, and tire degradation analytics. All of these applications run on our Sensor Intelligence Platform, or SIP, which we’ve deployed at the edge in our mobile Race Analytics Command Center, or RACC.
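The calibrate-then-infer pattern behind such a virtual sensor can be sketched in a few lines. Everything below, including the linear model, the variable names, and the numbers, is an illustrative assumption, not Zapata’s actual implementation, which the call describes only as a tuned generative model:

```python
# Hypothetical sketch of a "virtual sensor": calibrate a mapping from a
# directly measured signal (here, lateral acceleration) to a variable
# that is hard to measure live (here, tire slip angle), then infer the
# latter in real time. The linear model and all numbers are
# illustrative assumptions only.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b on paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

class VirtualSensor:
    def __init__(self, xs, ys):
        self.a, self.b = fit_linear(xs, ys)

    def infer(self, x):
        # Real-time inference of the unmeasured variable.
        return self.a * x + self.b

# Hypothetical calibration data from an instrumented test session,
# where slip angle *was* measured directly.
lateral_g = [0.5, 1.0, 1.5, 2.0, 2.5]
slip_deg = [1.1, 2.0, 3.1, 4.0, 5.1]

vs = VirtualSensor(lateral_g, slip_deg)
estimate = vs.infer(1.8)  # inferred slip angle (degrees) at 1.8 g
```

In production, the regression would be replaced by a far richer learned model fed by many correlated sensor channels, but the calibrate-on-instrumented-data, infer-live shape of the workflow is the same.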
The RACC has proven to be an excellent marketing and demonstration tool for our prospective customers: as they physically tour the interior, the math, the models, and the generative AI all of a sudden become real. They can see for themselves the power of real-time analytics at the edge, and they leave INDYCAR races truly impressed with the possibilities for harnessing our technology for their business. We’re now well into our third season with the Andretti team. As a testament to the value we continue to deliver for Andretti on and off the track, in the first quarter we signed a significant commercial expansion with Andretti worth $1 million in ACV for 2024, under which the Zapata team is building an innovative database solution. This Q1 booking represents 42.8% sequential growth in bookings versus Q4 2023.
This agreement significantly expands our work with Andretti’s engineering, operations, and race strategy teams with the goal of delivering significant innovation and operational efficiencies across Andretti’s global presence in multiple racing series and driver teams. The database solution will also help build the foundation for Andretti’s expansion to future race series. In addition, we’ve also expanded our co-branding and marketing relationship to enhance visibility around our cutting-edge innovation in INDYCAR Racing with Zapata now formally recognized as Andretti’s official artificial intelligence partner. We continue to believe that our work with Andretti is critical to raising the visibility of industrial generative AI and demonstrating our capabilities in the context of real customer problems.
The same challenges we’re tackling with Andretti, around real-time analytics and time-compressed decision-making relying on massive amounts of streaming data in challenging environments, are directly applicable to any number of use cases across industries, from financial services to advanced manufacturing to network operations and telecom, transportation and logistics, energy, and the battle space. To put it in perspective, upwards of one terabyte of data comes off of one INDYCAR during a race, flowing from hundreds of sensors, not to mention additional streaming data sources from the track environment and INDYCAR itself. In financial services, we continue our work with our valued customer, Sumitomo Mitsui Trust Bank, or SMTB. We started working with SMTB in November of last year.
With SMTB, we’re applying generative AI to produce synthetic financial time series data for practical purposes, including simulating a range of plausible scenarios for future market movements. These scenario simulations will enable traders and investors to be prepared to make decisions more quickly, more accurately, and with more confidence. We’re also helping risk managers conduct more sophisticated stress tests and supporting derivative traders to better hedge their portfolios in addition to enabling more efficient and trustworthy derivative pricing calculations and value adjustments or XVAs. We hope to share some results from this work in the coming months. As Christopher mentioned earlier, we’re thrilled to formally announce our global strategic partnership with Tech Mahindra, initially focusing on the telecom vertical and our portfolio of leading telecom service providers.
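The scenario-simulation workflow described above, learning from observed market data and then sampling many plausible future paths for traders and risk managers, can be sketched in miniature. The plain resampling approach below is an illustrative stand-in, not the generative model used with SMTB:

```python
# Hypothetical sketch of scenario simulation from historical returns.
# Plain resampling stands in for the generative model described on the
# call; only the workflow shape is the point: learn from observed
# returns, then sample many plausible future price paths.
import random

def simulate_paths(returns, horizon, n_paths, start=100.0, seed=0):
    """Sample future price paths by resampling historical daily returns."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_paths):
        price, path = start, [start]
        for _ in range(horizon):
            price *= 1.0 + rng.choice(returns)
            path.append(price)
        paths.append(path)
    return paths

observed = [0.01, -0.02, 0.005, 0.015, -0.01]  # illustrative daily returns
scenarios = simulate_paths(observed, horizon=20, n_paths=1000)

# A risk manager could then stress-test against the worst outcomes,
# e.g. taking the 5th-percentile final price as a crude VaR-style figure.
finals = sorted(path[-1] for path in scenarios)
worst_5pct = finals[int(0.05 * len(finals))]
```

A richer generative model would replace the resampler so that the sampled paths capture dependencies the raw history alone cannot, but the downstream uses (stress tests, hedging, pricing) consume the scenarios the same way.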
Throughout Q1, we’ve been engaged in working sessions with teams from several of the largest US-based telcos, and we’re focusing in on specific use cases where we can apply our solutions. This has the potential to be a game changer for Zapata, and you can read more about it in the press release which hit the wires yesterday. In Q1, we also made great progress in delivering a breakthrough transformation pilot for a large UK-based insurer, working in collaboration with one of Zapata’s global services partners focused on the financial services space. We’re excited to share more about the successful outcome in the near future and where this may lead. We continue to build our ecosystem of industry-focused partners, and we’ve also worked hard to deepen our relationships with current partners like One Stop Systems or OSS.
We expect to be collaborating closely with them on several opportunities in the defense space. During Q1, I was pleased to see strong growth in our pipeline for direct customer engagement, and those logos comprise a who’s who of leading brands and organizations across our core verticals, from pharma, healthcare, financial services, insurance, telco, automotive, and manufacturing to defense. From a go-to-market operations standpoint, we’ve been pragmatically adding sales and BD capacity in North America and in our primary theaters in EMEA and Japan. We continue to work to enhance the Zapata brand and raise our profile; to evolve our industry vertical orientation and our value-based messaging; to produce original thought leadership; to identify our ideal customer personas and buyers to enhance our consultative go-to-market and sales motion; to tell compelling customer stories and demonstrate the value of our capabilities; and to up-level our day-to-day prospecting activities with the objective of building awareness and executive mind share.
We’re in the process of building other lead gen and referral programs to expand and cultivate our network of senior executive friends of Zapata. We’re tracking these activities and our progress systematically, and we’re building the process discipline that will help us scale. Much more to come here. Let me now turn it back over to Christopher to go through Zapata’s Q1 financials.
Christopher Savoie: Thanks, Jon. Since this is our first earnings call, we are providing a little extra color where applicable to help you better understand our business. But before I get into our financial results, I wanted to say a quick welcome to Sumit Kapur, who has recently joined us and will be our CFO effective May 20th. Sumit is an experienced CFO with expertise in scaling growth tech companies. We look forward to having him handle the discussion around financials going forward. We would also like to thank our outgoing CFO, Mimi Flanagan, for her years of incredible dedication and commitment to Zapata, including supporting us through the rigorous listing process. We appreciate her continued role as a consultant to the company.
So with that said, starting with a recap of the first quarter 2024 results. Q1 2024 revenues were $1.22 million, which compares to revenues of $1.5 million in Q1 2023. The period-over-period change primarily reflects a decrease of $0.5 million from the completion of certain customer contracts that occurred subsequent to Q1 2023, partially offset by an increase of $0.2 million from ongoing customer contracts entered into subsequent to March 31, 2023. As a reminder, we primarily earn revenue via annual or multiyear subscriptions to our software platform, Orquestra, which is available on a stand-ready basis, as well as through the provision of consulting services. Gross margin in Q1 2024 was 14%, flat with Q1 2023. Gross margins can be quite volatile and lumpy at these revenue levels.
Operating costs during the period were $5.24 million versus $5.3 million in Q1 2023. Of note, general and administrative costs made up more than 40% of our operating costs and were up $0.74 million period over period to $2.2 million, primarily associated with costs related to the merger with Andretti Acquisition Corporation. We currently do not anticipate another significant step-up in G&A costs in the near future. Putting all of this together, our GAAP operating loss was $5.08 million in Q1 2024, generally in line with the loss of $5.09 million in the year-ago quarter. Our GAAP net loss during Q1 2024 was $22.32 million and reflects the impact of $17.18 million of other non-cash expenses. Our Q1 2023 net loss was $5.07 million. As of May 10, 2024, we had 31.98 million basic shares outstanding.
Before I turn to our balance sheet and cash flows, a quick reminder that we closed our business combination with Andretti Acquisition Corp on March 28, 2024. As such, our reported results reflect net cash brought in through the transaction, although there have been subsequent financing transactions, which I will touch on momentarily. On March 31, 2024, we had $7.39 million in cash and cash equivalents, including $0.14 million in restricted cash. Net cash used by operating activities was $2.15 million during the first quarter of 2024. Included in this figure is $2.55 million in cash generated by working capital. During Q1 2024, we raised a total of $6.10 million through financing activities. This includes proceeds from the issuance of additional senior secured notes prior to the closing of the business combination with Andretti, as well as funds brought in from the business combination.
We have raised additional capital subsequent to the end of Q1 2024. Specifically, we brought in $2.5 million through our forward purchase agreement with Sandia Investment Management and have raised $2.9 million as of May 10th through our equity line of credit with Lincoln Park. We plan to be judicious, flexible, and opportunistic as we fund our growth strategy going forward while remaining disciplined on cost control. However, given the inherent lumpiness in our business and where we stand in our company’s lifecycle today, we will not be providing formal guidance at this time. That concludes our discussion of our financial results for the period, and I’ll now offer some closing remarks. Earlier in my remarks, I expressed my enthusiasm about the constructive conversations we’re having with potential new partners across industries, especially in pharma, financial services, insurance, telecom, and defense.
These are all sectors where our work has demonstrated very real, tangible benefits, and we believe we will have more to say across these fronts in the upcoming quarters. We are only at the beginning of our journey as a public company, and we look forward to sharing more milestones as we grow and continue to lead the industrial generative AI revolution. Thank you for your time and attention. Operator, we’re ready for questions.
Operator: Thank you. We will now be conducting a question-and-answer session. [Operator Instructions] And our first question comes from the line of Michael Latimore with Northland Capital Markets.
Q&A Session
Michael Latimore: Congrats on your first earnings call there. The Tech Mahindra relationship is very interesting. Can you talk a little bit more about the prospects you see there? Is the intent to supplement current network and fault management capabilities, or replace them? And then how extensively could you get deployed? I mean, are you going to be in specific network elements like the backbone versus CPE devices? Just a little more clarity on the opportunity there would be great.
Christopher Savoie: Sure. I can’t get into, obviously, proprietary things about these folks’ networks and whatnot. But in a generic sense, yes, it follows the story of the yellow flag prediction, really. There, we have five different models, one looking a lap ahead, two laps ahead, three laps ahead, different models working in an ensemble. Imagine having a model on each node of a network, either beside a router or next to a base station, next to the tower. You could conceivably have the ability to report, say, jitter on the line or other signals that indicate a possible malfunction or incoming network traffic events, and pre-report those before you get a "20 devices down" type of message. So that has applicability obviously in telco, but you can think more broadly about other edge network types of situations like power management, the grid, and other places where this may be applicable.
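The ensemble structure described here, one model per look-ahead horizon with the outputs combined into a single early warning, can be sketched as follows. The threshold-based stand-in models, the signal scale, and the trigger level are all hypothetical; only the per-horizon ensemble shape reflects what is described on the call:

```python
# Sketch of the multi-horizon ensemble idea: one model per look-ahead
# horizon (one lap ahead, two laps ahead, ...), each emitting a
# probability, combined into a single early-warning alert. The
# threshold-based "models" are hypothetical stand-ins for real learned
# models; only the ensemble structure is the point.

def make_horizon_model(horizon, threshold=10.0):
    """Stand-in model: risk probability from a scalar signal, damped
    for farther (more uncertain) horizons."""
    def predict(signal):
        return min(1.0, signal / threshold) / horizon ** 0.5
    return predict

# Five models, mirroring the one-to-five-laps-ahead ensemble.
ensemble = [make_horizon_model(h) for h in range(1, 6)]

def alert(signal, trigger=0.5):
    """Fire if any horizon's probability crosses the trigger level."""
    probs = [model(signal) for model in ensemble]
    return max(probs) >= trigger, probs

fired, probs = alert(8.0)  # e.g. a jitter reading of 8.0 on some scale
```

The same skeleton transfers to the telco setting: replace "laps ahead" with "minutes ahead" on each network node, and the alert fires before the "20 devices down" message ever arrives.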
Michael Latimore: And then maybe can you just comment a little bit more on the pipeline here, a little more, I don’t know, quantification of anything: number of prospects, how fast the prospect count is growing. Any more color on the pipeline?
Christopher Savoie: Yeah, I think I can say that it’s growing significantly. And the demand, as Palantir said in their call recently, the demand for generative AI is pretty relentless. So the pipeline is growing. We don’t want to provide guidance on that. I think what’s going to be important is watching the bookings come in over the next few quarters. And hopefully, as you saw reported today with an increase through customer expansion in one account, you’ll see new logos and new pipeline coming in, as well as expansion over the coming quarters to meet that demand.
Operator: And our next question comes from the line of Rohit Kulkarni with ROTH MKM.
Rohit Kulkarni: A few questions come to mind. At a big-picture level, maybe talk about the team and your hiring plans as you set yourself up to be a successful public company. And then, also, on the IP and the patent portfolio: maybe talk through what types of opportunities you think can be unlocked through the pretty robust IP portfolio that you seem to have. Those are the first questions, and then I have a couple of follow-ups.
Christopher Savoie: Sure. First of all, yes, you will see Sumit joining us as CFO, with a fantastic background, who actually has an applied mathematics background as well, so he understands the technology we’re into in a very deep way. So we’re excited about that. I think you’ll see some additional executive-level hires coming over the next quarters. And then it’s continued growth: as we add revenues and customers in our generative AI pipeline, obviously, to meet that demand, we’ll be continuing to add high-level engineers and scientists to our already extremely talented pool of people here. And we’re going to continue to do that globally, as we have. You can’t always bring everyone who has this kind of background to North America, because they’re not a dime a dozen.
So we’ve been pretty advantaged, I think, in the fact that we can operate globally with our people mechanisms. We have people in Europe, we have people in Japan, and in other geographies that are key for our global expansion, because the customer base here is Global Fortune 100-type companies that are also global. And you mentioned the IP. Yeah, we have a pretty significant portfolio there. As was in our general presentation, you will have seen that we rank pretty highly, especially with our quantum-enhanced AI techniques, which we believe are going to give us a really differentiated advantage in the marketplace and in what we can deliver with our technology. Yudong commented, and I did a bit, about our drug discovery work with Insilico Medicine, Harvard, and the University of Toronto, where we developed actual drug candidates leveraging this quantum-enhanced AI technology, and that’s really exciting.
We’re continuing on that work in the context of our work with D-Wave and other hardware providers as we go and that is an advantage. So it’s not just the IP, but the actual ability to bring this into commercial relevance, that’s really exciting for us.
Rohit Kulkarni: Perfect. Great. And then just on the business side, maybe provide more color, to the extent you can, around your conversations with these large customers that you’re trying to sell to. Over the last six, maybe nine, months, how are these large companies migrating from experiments and pilots on Gen AI into actual production projects with mission-critical applications in Gen AI? Maybe talk through that adoption curve, and where do you think we are in this adoption curve right now, through your conversations and through your demos and the various leads that you have in your business pipeline?
Christopher Savoie: Yes. Well, thank you for that question, because I think these are indeed early days, but there is some transition that we’re observing in the marketplace. We had the OpenAI, I guess, event horizon, if you will, where people realized, wow, generative AI has the possibility to do some really incredible things, with GPT-3 and then 4 and recent releases, and other folks’ models. But these are language models. And I think we moved pretty quickly, in quick order, into a kicking-the-tires kind of mode to see where we could use large language models to do things. I think there was a little bit of an expectation that a large language model could possibly be a general AI that knows everything about everything, kind of omniscient.
Maybe that was the expectation of some people in the market, some people in the C-suites of these large companies. But I think as things got into the brass tacks, the blocking and tackling of, okay, where can I really use this, it became obvious that there’s no one language model that rules them all. Where the industry is going, I think, is: well, we’re going to use small language models and also other smaller models that are better at things like numbers. Large language models are language models. They’re good at language. They’re not necessarily good at analytics and numbers. Actually, they fail grade school-level math sometimes. So, what we’re finding, and it’s to our advantage, is that now people want to do useful things in the enterprise, in production, with this.
They realize that while there is power in this generative AI revolution, you’re going to need different models, and several models, to do the kind of stuff that you want to do, like improve your network performance if you’re a telco, or reduce your costs, or get a race car to have a better strategy. These kinds of things are numerical in nature, oftentimes combined with maybe some language UI on top of them. But for the most part, you’re going to need lots of different models, and we can provide those models. And so this shift has been advantageous in getting us into the current conversation. So, LLMs, language models, which we can also do, are the conversation starter. But when it gets to the brass tacks of what you need to do in an enterprise, think FP&A analysis, trading strategies, different things you might do in finance, actuarial science if you’re an insurance company.
These kinds of things involve analytical-numerical models, which we excel at, and which really opens up our market opportunity. That’s where people are moving now, we’re sensing, in the enterprise space.
Rohit Kulkarni: And one last question, just on the core value prop and the receptivity that you’re getting. The pipeline you mentioned is across a variety of industries, pretty diverse, like pharma, financial services, telco, defense, and the use cases for each of those verticals could actually be quite diverse. So is there a common thread that you feel is emerging as you are building applications and selling to these pretty different industries? Perhaps there is a common thread of reducing costs or creating new products, or anything to that effect where you think you’re seeing early signs of success or greater adoption. I would just love to understand: where is the common factor here across these diverse industries for a young company like yours?
Christopher Savoie: Sure. It sounds very diverse, but the good news for us is that a lot of the use cases are mathematically the same. It's kind of a geeky answer, but it's important: a lot of these involve series data, time series data, and those are the kinds of data we're attacking in a lot of these use cases. So while drug discovery, discovering a new molecule, race car strategy, and insurance may not sound like the same thing at all from a use case perspective, and certainly they're not from the perspective of what the data is, the way we formulate the problem and solve it is mathematically the same: time series or series data. Maybe I can get Yudong on the call here to expound on that a little bit. Mathematically, they're the same thing.
And I'll say another thing. This is why, in these different verticals, the partnerships are so critical: we don't want to, or pretend to, have the expertise to be the experts in the domain in every one of these verticals. That's why it's great to have a partner like Tech Mahindra that knows telco really well, already has fantastic customers like AT&T and Verizon, and knows those customers, to be able to take us into those accounts and work with the people with the domain expertise. So it's very complementary. And that gives us global reach into verticals with the same math and the same tools and the same platform, without stretching ourselves and increasing our costs. Yudong, I don't know if you want to comment a little bit on the time series data.
Yudong Cao: Yes. Fundamentally, we are solving sequential data modeling problems. Molecules can be cast as a sequential representation. The timing of scoring and race data takes a time series form. Financial data takes a time series form as well. Even text data is a sequential form of data. So what we have done is look at the underlying structure of these problems, the mathematical structure, and develop quantum techniques toward those. On the science side, there's definitely a very strong sense of convergence. I'll also add that through projects like Andretti, we have developed the machine learning practice, the engineering practice, and the overall team practice of how to get our algorithms deployed into production. That operationalization expertise is also something that is repeatable across the board.
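The convergence described here, treating race telemetry, financial series, and encoded molecules as one class of sequence modeling problem, can be sketched in miniature. The following is an illustrative toy example, not Zapata's actual models: a single autoregressive fitting routine applied unchanged to two very different stand-in series.

```python
import numpy as np

def fit_ar(series: np.ndarray, lags: int) -> np.ndarray:
    """Fit a linear autoregressive model by least squares.

    The same routine applies to any sequential data: race telemetry,
    financial returns, or a numerically encoded molecular sequence.
    """
    # Lagged design matrix: row t = [x[t], ..., x[t+lags-1]] predicts x[t+lags]
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_next(series: np.ndarray, coef: np.ndarray) -> float:
    """One-step-ahead forecast from the last len(coef) observations."""
    return float(series[-len(coef):] @ coef)

# Two very different domains, one model class.
t = np.arange(200)
telemetry = np.sin(0.1 * t)                   # stand-in for periodic race telemetry
financial = np.cumsum(0.01 * np.cos(0.05 * t))  # stand-in for a financial series

for name, s in [("telemetry", telemetry), ("financial", financial)]:
    coef = fit_ar(s[:-1], lags=4)
    print(name, "next-step forecast:", round(predict_next(s[:-1], coef), 4))
```

The point is only that the problem formulation, fit a sequence model, forecast the next step, is identical across domains even though the data semantics differ completely.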
Operator: Our next question comes from the line of Brian Dobson with Chardan Capital Markets.
Brian Dobson : So you mentioned several industries that you see as key moving forward that would be a good fit for your technology. Which of those might you focus on the most in the immediate future? And can you share any feedback you may have received from those industries regarding your technology?
Christopher Savoie: Yes, I have to be careful because, obviously, these are mission-critical applications that we're doing. So the feedback is along the lines of: yes, we can deploy; yes, this is relevant; yes, this is exciting. And there's an immediate recognition, when we get the technical people on those sides involved, that the math is applicable to what they are doing. As far as focus goes, it's four or five verticals. Some of them, like we said, through partners; some of them more directly, like in the finance area with SMTB. We announced that relationship, which began in Q4 last year, where we're working directly with the customer on predictive trading scenario generation with generative AI in the context of portfolio management and trading strategies.
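The trading scenario generation mentioned here can be sketched in a highly simplified form: a generative model samples many plausible future price paths, and a strategy is then evaluated against all of them. This toy uses geometric Brownian motion; it is an illustration of the general idea only, not the company's method, and the parameter values are made up.

```python
import numpy as np

def generate_scenarios(last_price: float, mu: float, sigma: float,
                       horizon: int, n_paths: int, seed: int = 0) -> np.ndarray:
    """Generate synthetic price paths under geometric Brownian motion.

    Scenario generation in miniature: sample many plausible futures,
    then evaluate risk or a trading strategy across all of them.
    """
    rng = np.random.default_rng(seed)
    # Per-step log-returns: drift plus Gaussian noise, shape (n_paths, horizon)
    log_returns = mu + sigma * rng.standard_normal((n_paths, horizon))
    return last_price * np.exp(np.cumsum(log_returns, axis=1))

paths = generate_scenarios(last_price=100.0, mu=0.0002, sigma=0.01,
                           horizon=20, n_paths=1000)
# One risk summary across scenarios: the 5th-percentile terminal price
var_5 = np.percentile(paths[:, -1], 5)
print(f"5th-percentile price after 20 steps: {var_5:.2f}")
```

A richer generative model would replace the Gaussian sampler, but the downstream use, evaluating portfolio decisions against an ensemble of generated scenarios, has the same shape.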
And that is replicable, cookie-cutter style, across many financial areas, as you can imagine. We've also had some success delivering similar capabilities in the insurance area, which is adjacent to bank financing and trading. And so that expansion is particularly important. We think the applicability of this time series data, just as in racing, is really the same problem: time series and real-time updates as the market trades. These are places where we can add value pretty immediately. And we have some initial traction with a very large bank in Japan already, so we expect to be able to leverage that very much in that sector in the coming quarters. And then, very importantly, I think we all know how important it is to do things on the edge, with things that fly by themselves.
In the defense industry, in recent years and months, it's become obvious how important that is to our defenses: the ability for us to deliver a generative AI solution that allows us to predict things and see things at the edge, and update data in real-time. You can see where that would be important to national defense, particularly in an era where unmanned vehicles and unmanned robotic machines are becoming more and more prevalent in the defense space. So that's another one that we're very focused on and very excited about, and we hope to be able to announce some of our progress in the coming quarters.
Brian Dobson: That's very exciting and great color. Thank you for that. Speaking of what you've learned through your work with racing, can you speak to some of the synergies that INDYCAR has offered as far as facilitating new business introductions? And that yellow flag predictive model that you have is very impressive. Have you been able to use the RACC to illustrate that your technology can work in a, call it, less-than-ideal or suboptimal environment?
Christopher Savoie: Absolutely. And it's important to note that we have two NVIDIA GPUs in a server in the truck at that RACC, in an environment that's horrible. There's actually a damper factory and damper testing facility in that truck, shaking the thing. So you can imagine a cloud-based solution won't work there. At one of the recent races, the power to the INDYCAR series got cut, so they had to throw a yellow flag before the race even started. So this is the kind of environment you're in: ad hoc tracks that are just set up with Cat 7 cable running across the city, just for a weekend. It's very interesting in that way. A lot of the time it's not a fixed road course, which creates a really horrible network environment.
The truck itself gets really hot, and access to the cloud is not always guaranteed. We actually have a Starlink on the truck in case connectivity becomes a problem, just to get data. So in some ways it's really the most horrific environment you could have to try to do AI in. But the fact that we've been able to do it, and been able to operationalize there, is really important for folks who run, say, power stations and power grid networks and telcos, who have to work with exactly those kinds of problems in those kinds of situations. And there's the ability to do that in real-time with very fast updates, in a sport where, at the Indy 500 in a couple of weeks, there was a tie in qualifying last year, I believe, down to 1/10,000th of a second after 8 miles at 240 miles an hour at peak.
So that's the kind of environment, and our marketing relationship there is really valuable. People can not just think of it as a concept, but see it actually be there: see the trucks, see the analytics going on live, and look at the kind of environment we have there. And that, I think, creates a lot of confidence with our customer base that we can deliver in some of the most extreme environments. And if we can do it there, then a banking solution on the cloud, where we pretty much have 24x7 connections to market data, is less horrible than that. It may sound extreme, but ironically there are actually fewer data points and fewer variables to monitor in trading portfolio situations than in the car situation.
So it's a really good forum. We're able to take our customers to actually see it, feel it, touch it, see the data as it comes in while they're watching the race happen live, and really get the experience of seeing it in production. And that has obviously translated into a very real pipeline. Hopefully, we'll be able to give more color on that in the coming quarters, with new logos and new wins that we'll be able to announce. And hopefully, we'll be able to tell you that a lot of that came directly from these interactions at the racetrack.
Operator: Our next question comes from the line of Yi Fu Lee with Cantor Fitzgerald.
Yi Fu Lee: Maybe I have one question for each gentleman, starting with Chris. Thank you for fleshing out how your AI is different from a standard large language model, whether from Anthropic or OpenAI. I was wondering if you could compare your competitive edge, Chris, in terms of coupling edge computing with quantum physics, against other startups. What does Zapata AI do differently versus competitors?
Christopher Savoie: Well, I think certainly the quantum edge that we have, and the mathematical capability and the capability of the people we have doing that math, really gives us a competitive edge. I mean, we have some of the brightest minds on the planet in quantum math, and we've been working with the Defense Department and the DARPA program on benchmarking these algorithms for them for a couple of years now. So we really do have that people advantage, and the IP advantage that comes out of it. We have really the most cutting-edge math that we can apply to these generative AI models, and I think that's a key advantage. And then, I think Yudong said it best in saying that we're bringing this to production and you can actually see it.
It's actually working. These models are doing something every weekend at the track. They'll be there at the Indy 500, with the yellow flag models running in real-time and updating in real-time. So we can actually do this stuff in production. This isn't just a kick-the-tires kind of experiment or a POC. We can actually make this stuff work for making real, important decisions. So it really is decision science, and it's the ability to deploy a decision-assisting AI into enterprise environments that really gives us an edge. We're not thinking we can do this or saying we can do this. We've actually done it, past tense.
Yi Fu Lee: And then moving on to Yudong. Yudong, you mentioned the work with D-Wave on drug discovery, working with Harvard University. I was wondering if you could give us some color on how you could transition from working with academics to more commercial opportunities with the large pharmas?
Yudong Cao: Yes. So the machine learning model by itself, and also the infrastructure, can be used for other types of discovery, like material discovery or other types of design problems. Our role in the project is that we take a given target and propose molecular designs. This assumes that there is a partner that has identified a target, and another partner that works downstream to produce those molecules. So this is an embodiment of what Christopher was saying: where do we get our repeatability, and how does our math plug into an actual process? We're not a drug company; we don't actually make molecules. We're also not a bioinformatics company that does target identification.
But we plug in very nicely, and we have ongoing partnerships with D-Wave and, through the universities, we will connect with organizations that can perform target identification, drug testing, and manufacturing. By partnering with these organizations in the ecosystem, plus our generative AI capability, this is how we truly go to market with the entire pipeline, essentially.
Yi Fu Lee: And then ending with Jon, on the Tech Mahindra partnership, which sounds very exciting. Can you describe a little bit more about this opportunity? It sounds very big, and you could probably bring it to all the US carriers like Verizon, AT&T, et cetera. Maybe a little bit on this opportunity, please?
Jon Zorio: Sure. Happy to. I think Christopher touched on it briefly in earlier comments. The power is in placing our models at the edge of those large networks, with literally hundreds of millions of nodes of data. So lack of data is not the problem; it's being able to have the fidelity and the sensitivity to pick up very slight signals, anomaly detections that might indicate some sort of an intrusion, some sort of change in traffic pattern or, as Christopher said, possible jitter, something happening with a piece of hardware. We can have that sensitivity at the farthest reaches of the network, and place our models, with real-time machine learning, at different places along the hierarchy to pinpoint where something may be changing, or where there might be a signal that was previously undetectable.
So if there is a disruption, the carrier can react to it immediately, route traffic somewhere else, potentially proactively dispatch a truck. And when you zoom out and think about this problem for a large carrier, with all the truck rolls and all the outages, and there was a large outage recently that we all probably were impacted by, these are literally $1 billion problems. So when we're sitting down with some of these carriers, and they're taking us through their pain points, and we're doing the math together to figure out how we can co-solve this: if they had more intelligence baked into the network, and the ability to pinpoint where a problem happened and update correlation engines to send out teams or automate responses in a much more timely, proactive manner, the savings are just tremendous.
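The edge-side sensitivity described here, flagging faint deviations locally before they become outages, can be sketched with a toy baseline. This is a simple rolling z-score check, illustrative only and far simpler than a generative approach; the class name and thresholds are made up.

```python
from collections import deque
import math

class EdgeAnomalyDetector:
    """Toy rolling z-score detector: the kind of lightweight check
    that could run at a network node to flag faint anomalies locally."""

    def __init__(self, window: int = 50, threshold: float = 4.0):
        self.buf = deque(maxlen=window)  # rolling baseline of recent readings
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if `value` deviates sharply from the rolling baseline."""
        if len(self.buf) >= 10:  # wait for a minimal baseline first
            mean = sum(self.buf) / len(self.buf)
            var = sum((x - mean) ** 2 for x in self.buf) / len(self.buf)
            std = math.sqrt(var) or 1e-9
            if abs(value - mean) / std > self.threshold:
                return True  # anomaly: keep it out of the baseline
        self.buf.append(value)
        return False

det = EdgeAnomalyDetector()
# Normal jittery telemetry, then a sharp spike at the end
stream = [10.0 + 0.1 * (i % 5) for i in range(100)] + [25.0]
flags = [det.observe(v) for v in stream]
print("anomaly at index:", flags.index(True))
```

Running one such detector per node keeps the decision at the edge; only the flagged events, not the raw stream, would need to travel up the hierarchy to a correlation engine.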
And frankly, I think you're picking up on the point that these networks are more or less the same in terms of their characteristics and how they operate. So if we're able to drive value in one, there's no reason we couldn't drive value in many more. And it doesn't just have to be telco. Maybe it's a utility, maybe it's an airline, maybe it's rail: anytime there's a large, geographically dispersed, complex network of anything, large devices, large complex machinery, that we need to monitor and take action on quickly, that's very much in our sweet spot. So the repeatability and extensibility of the solution is pretty tremendous.
Operator: Thank you. We have reached the end of our question-and-answer session. And with that, I would like to turn the floor back over to Christopher Savoie for closing comments.
Christopher Savoie: Thank you very much. And thank you for all the great questions. We look forward to talking to you again on our next call. Thank you. Bye now.
Operator: This concludes today’s teleconference. You may disconnect your lines at this time. Thank you for your participation.