We’re still in the very early days. We’ve got some really interested customers who are doing some interesting things and working with us on pilots. Our engineering and product and field teams are really focused on this, but we are in the very, very early days of taking the automation of relational migrations to the next level by leveraging Gen AI.
Rishi Jaluria: All right, wonderful. That’s really helpful. And then I wanted to ask about MongoDB Serverless. I know this is something you’ve at least been talking about for a while. Given a lot of the concerns around cloud optimization and rationalization, with customers overpaying and having to figure out how to optimize their footprint, it feels like that could be a natural tailwind for MongoDB Serverless, especially because you were early to embrace it. Can you talk a little bit about what you’re seeing in terms of adoption there, how we should be thinking about that opportunity, and maybe what this could look like over the next several years in terms of serverless adoption versus traditional consumption adoption in Atlas? Thanks.
Dev Ittycheria: Yes, so just to be clear, when we talk about serverless, what it basically means for customers is that they don’t have to think about capacity planning; the workload can scale up and scale down based on the needs of the use case and whatever the compute and other resource needs are. So there’s been a lot of interest from customers. In the first stage, it was a lot of smaller workloads, where customers didn’t want to go provision a dedicated cluster and wanted to be able to leverage our serverless functionality. We think long term that almost every workload will become serverless, because over time that will be the way most applications are provisioned. But we’re in the early days, and the receptivity and use of our serverless functionality has been very high.
And you’re right: versus those legacy platforms that can’t offer similar solutions, MongoDB becomes that much more attractive, because a development team and architecture team doesn’t have to worry about capacity planning. They can just build the app, knowing that in the background our infrastructure can scale up and down as their usage goes up and down.
Rishi Jaluria: Wonderful. Really helpful. Thank you.
Operator: Thank you. One moment, please. Our next question comes from the line of Brent Bracelin of Piper Sandler. Your line is open. Brent Bracelin, your line is open.
Hannah Rudoff: Hi guys, this is Hannah on for Brent. Thanks for taking my question. Just one from me. Subscription gross margins remained above 80% for the second straight quarter, even with that continued mix shift to Atlas. I know you mentioned efficiency improvements to Atlas, but are we at a structural point, Michael, where gross margins can remain at that 80%-plus range into next year?
Michael Gordon: Yes, so what I would say is, if you think about it, Atlas gross margins continue to be lower than Enterprise Advanced gross margins. And while we’re very pleased with that 80% margin performance on a subscription-margin basis in Q3, Atlas is, quote-unquote, only two-thirds of revenue, so there is still a delta between the two. I think that will have a slightly dilutive effect on margins as Atlas increases as a percent of overall revenue.
Hannah Rudoff: Okay, that makes sense. Thank you.
Operator: Thank you. One moment please. Our next question comes from the line of Patrick Colville of Scotiabank. Your line is open.
Joe Vandrick: Hi, this is Joe Vandrick on for Patrick Colville. As of 3Q, it looks like about 29% of direct sales customers are spending over $100K on the platform. Just curious where you think that percentage can trend over the longer term, and how big the opportunity is within these existing direct sales customer accounts. Thanks.
Michael Gordon: Yes, Joe. I would say that we still believe we have a very small percentage of wallet share in most accounts. And obviously, the smaller the customer, the bigger the wallet share, but in most direct sales customers our percent of wallet share is still quite small. So we see a big opportunity there, and as we talked about, a big part of our new business came from acquiring new workloads within existing customers. That is a big focus for our go-to-market teams, and the runway is quite long for that trend to continue.
Joe Vandrick: Great, and just one more for me. You kind of touched on this, but what’s the feedback been from those customers who have used Vector Search in preview? And obviously with Vector Search comes quite a bit more data, so how are you making sure that customers don’t receive a surprise bill and end up unhappy?
Dev Ittycheria: Yes, so, as we mentioned earlier, the feedback on our Vector Search has been very positive; even when it was in public preview we were getting a lot of feedback. And then we saw this report that came out. Obviously, we don’t talk to people who are using alternatives, we just focus on our own customers, but we were pleased to see that out of all the products available in the marketplace, our preview product had the highest NPS score. If you unpack that, why do customers like using MongoDB? Because it is one tightly integrated solution: you can tightly integrate capturing vector data, metadata, and then data regarding a particular use case, and that becomes very, very attractive. It just becomes much more seamless and easier to use versus either using point solutions or some kludgy solution that’s been put together.
So, I think that’s a big reason why we remove friction from a developer’s workflow and why the MongoDB approach is so much better to use than any other alternative approach. In terms of your question around the amount of data and data bills: obviously vectors can be memory intensive, and the number of vectors you generate will drive the amount of usage on those nodes. That’s one of the reasons we also introduced dedicated Search Nodes, so you can asymmetrically scale particular nodes of your application, especially Search Nodes, without having to increase the overall size of your cluster. So you’re not, to your point, shocked with a big bill for non-usage. You only scale the nodes that really need that incremental compute and memory versus the nodes that don’t, and that becomes a much more cost-effective way for people to do this. Obviously, that’s another differentiator for MongoDB.