Frederick Havemeyer: Matt, I wanted to ask also, on both renewal rates and net retention rates, understanding, as you said earlier, that a couple of data points does not yet a trend make. But it looks like your total gross renewal rate ticked down slightly quarter by quarter in 2023, while your cloud subscription revenue retention rate ticked up. So I wanted to ask, is there anything happening between the total company business and cloud that would be worth calling out at this point that could be attributable to that?
Matt Calkins: Yes. First of all, I want to address that downtick. Our gross revenue retention rate did indeed tick down from 99% to 98%, bottoming at 97%, and it has now risen back to 98%. And I just want to clarify that though that may have been a downtick, it is still best in class. These are still remarkable numbers. And then secondly, I want to say there has been just a little bit of migration, a very small amount, from on-premise to cloud, at a point when I thought there wouldn't be any more, but there was just a little bit. And so that may be impacting the numbers a small amount.
Operator: Our next question comes from the line of Thomas Blakey from KeyBanc Capital Markets.
Thomas Blakey: I have a couple here. Maybe first, on the heels of Fred's great question on the data fabric, I think he also asked about the actual use cases, so if you could maybe double-click on that, Matt. And then after answering that, since we're hearing an uptick from our calls on Gen AI, especially in the enterprise: if these customers don't use your data fabric, what are these organizations going to do architecturally in terms of breaking down silos and bringing all their data together? Is it something akin to Appian solutions, or a cloud-based data warehouse like Snowflake? I just want to understand the pros here; if they don't use you, what are they going to have to use to launch these Gen AI enterprise applications? That would be great.
Matt Calkins: No, that's a great question: what are they going to do without data fabric? Well, Snowflake is one obvious example. Snowflake is asking, give us all your data. It's like a modern data warehouse. Just pile everything you can into this one data source, and when you do, we've already got a partnership lined up for Gen AI on top of it. That's fine, if you can move all your data there, if you can move all of it. But, boy, I talk to a lot of CIOs and I can't remember any of them saying that they could move all of their data, or even all of their pertinent data, into a central repository, Snowflake's or anyone else's. So typically, today, AI runs either on one giant silo, like Snowflake, or on "all you can train," which I'll address in a moment, or on a data fabric.
If it's "all you can train," then essentially you're saying that AI isn't going to run on a source; it's just trained on everything you can upload, right? So you can upload one source after another if you want, but you've got data-loading costs, you've got data-freshness issues, and you've got issues with variable levels of personal security access to that data. There are a lot of flaws with that strategy. And I think the very idea of training, at great length, an algorithm that the CIO does not own is problematic for a lot of tech decision-makers. So even though there is the data-lake-with-Snowflake strategy and there is the train-an-external-algorithm-on-everything-pertinent strategy, these are not plausible strategies. And what I see happening most of the time, in the absence of data fabric, is that AI is too limited in the data it knows.
AI runs on one silo and just one. And I think that is, unfortunately, the typical fallback in the absence of data fabric.
Thomas Blakey: That's interesting. Any use cases that you've seen maybe sprout up early in the evolution, or that you're planning for in '24?
Matt Calkins: Well, you mean use cases for data fabric? Yes, most of our customers actually use data fabric. We've got a terrific usage rate, somewhere between 80% and 90%, which is strong participation for a feature. That's because it's so beneficial. It makes it easy to connect to data sources; even if you're using just one, it makes it intuitive and simple. But if you're using multiple, it's a huge step forward over what was possible in the past. And it also makes it far easier for a user to develop new applications, because we objectize all of the data that's been touched by the data fabric so that a creator of a new report or process can just grab and drag and drop that object. All of these objects of data across the enterprise are now, in effect, draggable objects within the development environment.