And it’s great to see that all the work we’ve been putting into machine learning, incorporating it into the core of our platform, making sure that users can always bring all of their trace data, metrics data, and log data into one place and apply both the algorithms we provide and algorithms they might bring, is actually being recognized. Some of the customer examples I gave included customers using AIOps, but it’s also the analyst recognition, which really makes me feel good that the work we are doing is being seen and recognized. And we feel very confident that this will help us continue to become a stronger and stronger player in observability.
Operator: Our next question comes from Tyler Radke with Citi.
Tyler Radke: Yes. I wanted to start first, Ash, with the ML use cases you talked about, a question related to the one Pinjalim asked. How are you seeing that broader opportunity beyond ML and AIOps? Given there’s been a lot of interesting application of large language models to search, how are you approaching that? And then could you give us an update on enterprise tier adoption? How are you seeing uptake of that premium SKU in this environment, and what percent of the base has migrated over?
Ash Kulkarni: Yes, absolutely. Machine learning has been a core element of what we’ve done at Elastic for many years, and I’ve talked about it in the past. When we came out with version 8.0, we talked a lot about vector search and machine learning. Right now there’s a lot of interest in this area, AI and machine learning, thanks to ChatGPT and OpenAI. And in some ways, ChatGPT and the very large transformer models they have introduced are really complementary to what we do. In the long term, this is a really exciting opportunity, though I think we can all agree it’s still early phases. If you think about Elastic, our platform provides the foundation for customers to build applications on top of it with their data, using vector search and machine learning.
And over the last several releases, we’ve been investing thoughtfully in this direction, and we’ve now added the ability for customers to load large transformer models directly into Elasticsearch. So they don’t have to take their data out of Elasticsearch; they can run these models and perform inference on that data right inside Elasticsearch. And many of these transformer models, like the ones built by OpenAI, rely on embeddings, so the core functionality they require is vector search. These are the areas we’ve talked about in the past, but hopefully now you get a sense of how these things are coming together.
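To make the embeddings-plus-vector-search point concrete, here is a minimal sketch of what a kNN vector search request against Elasticsearch 8.x might look like. The index name `products`, the field `title_embedding`, and the three-dimensional example vector are illustrative assumptions, not details from the call; real embedding models produce vectors with hundreds of dimensions, stored in a `dense_vector` field.

```python
def knn_query(query_vector, field="title_embedding", k=10, num_candidates=100):
    """Build a request body for Elasticsearch's kNN search option (8.x).

    `query_vector` would normally come from running the same embedding
    model over the user's natural language query. Field and index names
    here are hypothetical.
    """
    return {
        "knn": {
            "field": field,              # dense_vector field holding embeddings
            "query_vector": query_vector,  # embedding of the search text
            "k": k,                        # number of nearest neighbors to return
            "num_candidates": num_candidates,  # per-shard candidates to consider
        },
        "_source": ["title"],  # return only the title of each hit
    }

# Example: a (toy) 3-dimensional query embedding
body = knn_query([0.12, -0.53, 0.07])
```

In practice this body would be sent to the `_search` endpoint of an index (e.g. via the official Python client’s `es.search(index="products", ...)`), with the trade-off between recall and latency tuned through `num_candidates`.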
And customers can build new natural language search applications on our platform, powered by vector search and these kinds of machine learning functionalities. So in the long term, I feel this is going to be a really awesome opportunity, but it’s still very early days. We are working on it, and we’ve been innovating in this area for a long time. And we believe that our role as a foundational platform, where customers can bring their data and apply these kinds of ML models to it in Elasticsearch, is what’s going to be really compelling for us.