This is a critical development because it opens up a new scale for conversational automation and, essentially, a much larger TAM for our products. Today, approximately 80% of business-to-consumer conversations still occur over the phone, and I think none of us are satisfied with the IVR-style automation presented in that context. One of the main drivers of this dissatisfaction is that dialogues are a long-tailed set of events. They are extremely varied, and existing automations can’t handle all that variety without massive effort to code and maintain. The latest LLMs not only unlock this set of conversations today, but over time, they’ll unlock more conversations in other enterprise use cases as well.
Also, as we’ll see in a moment with Matt’s demonstration, opportunities open up outside the enterprise altogether, like the health domain. But it’s worth diving into what we mean by long tail for just a minute. If that’s the real problem here, what does that mean? It means that most dialogues are unique and don’t fit into a tidy box. The easiest way to understand this is to look at some real brands and some real conversations. For example, a major retailer who has millions of conversations on LivePerson annually has a 20% automation rate. Here’s a real conversation on their platform about scheduling a delivery. We all imagine scheduling a delivery to be a fairly simple, repetitive task.
You fill out a form and you give it your address. But here, you can see this person is trying to figure out if they can coordinate the delivery with a kitchen remodel, and they don’t even know when the kitchen remodel is going to be finished. They want to make sure their appliance shows up on time and gets hooked up properly — all the things that you and I would care about. Up until now, this would have resulted in a human escalation or programming some bespoke flow into a chatbot with a host of parameters that had to be managed. Now this is automatable. One more example: a major bank and a customer simply trying to pay their mortgage. Again, we think of this as a simple, repetitive event. But as you can see here, it’s often not.
The customer in this case has had a payment increase that they were unaware of, and they claim a previous agent made them a promise. If you take large language models and combine them with the LivePerson data set, that claim becomes an auditable fact, for example, and the brand’s policy on matters like this can be inferred from the data and executed by a machine. Incidentally, we worked with this particular brand to use the latest generation of generative AI to recognize thousands of intents like this with greater than 90% accuracy, which until now had been unheard of. One of the more exciting aspects of all of this is that the effort to build systems like this is radically reduced as well. When we bring together brand knowledge bases and brand conversational data in our platform with large language models, we can have a working system up and running with minutes of effort rather than weeks or months.
In fact, over the last 6 to 8 weeks, we’ve built over 200 conversational AI systems in this way, and we’ve already begun demonstrating this to some of our largest customers. You’ll see examples of systems that can handle this level of depth and flexibility of conversation shortly in Matt’s WildHealth demonstration. And that’s what I’ve got for now. So thanks, Rob, and thanks, everyone, for listening.