Because eventually, given our [run anywhere] (ph) strategy, it will make it easier for your ultimate move to the cloud, if that’s what you wind up doing.
Raimo Lenschow: Okay, perfect. Thank you.
Operator: Thank you. [Operator Instructions] Our next question comes from Kash Rangan of Goldman Sachs. Your line is open.
Kash Rangan: Hey, thank you very much. Congrats on the results. One quick one for Dev and one hopefully quick one for Michael as well. Dev, you talked about Generative AI applications and described the three-layer architecture. When do you think it hits the sweet spot of how MongoDB is positioned from a timing standpoint? When do these Generative AI applications start to really drive underlying need for the kind of databases that you’re best suited for? One for Michael. In your assumptions, when I take away the $40 million of the upfront, that’s like a couple of percentage points of growth. I’m just trying to understand what kind of consumption trends you are using to build guidance. Was it the average of fiscal ‘24 consumption trends, or weighted more to the second half or exiting the fourth quarter?
Any color there would be tremendously useful. And I also want to ensure the sales force is still selling EA and can get comp for EA, because it does not look like you’re giving much weight to EA in your forecast. That’s it for me. Thank you.
Dev Ittycheria: Thanks, Kash. I’ll take the first question. In regards to when we see enterprises deploying [indiscernible] production, I think it’s a combination of customers getting comfortable with the technology and also these technologies maturing from both a performance and a cost point of view. If you’ve played with ChatGPT or any of the other chatbots or large language models out there, you’ll know that for the performance of these applications, you need to get response times in the one to two to three second range, depending on the type of question you’re asking. A chatbot is a very simple and easy-to-understand use case, but to embed that technology into a sophisticated application making real-time decisions based on real-time data, the performance and, to some degree, the cost of these architectures are still not there.
Also, customers are still in the learning phase. They’re experimenting, they’re prototyping, but I would say you’re not seeing a lot of customers really deploy AI applications at scale. So I would say this year is a year where they’re probably going to roll out a few applications, learn, and then, as they get more experience and become more comfortable, roll out more and more applications as these technologies mature and the costs come down. We feel very good about our positioning, because from an architecture point of view, the document model, the flexible schema, the ability to handle real-time data, performance at scale, the unified platform, and the ability to handle data, metadata, and vector data with the same query language, same semantics, et cetera, make us very, very attractive.
The other thing that we’re finding is that, unlike a typical sale where someone’s deciding to either build a new workload or modernize a workload, the AI decision is more of a centralized decision than ever. So it allows us to go higher in the organization, and we’re actually engaging with customers at much more senior levels because, obviously, this is coming down as a top-down initiative. And so this allows us to position ourselves as a very modern platform designed for these new, modern use cases and workloads. So we feel good about our positioning, but as I said, I think this year is going to be continued experimentation and rollout of some individual applications.
Michael Gordon: And then on the consumption questions, Kash, thanks for that. Overall, if you look at the guidance and the piece parts that we’ve tried to share with you, when you take into account the $80 million of impact from the unused commitments and the multi-year outperformance, you’ll see around 500 basis points of headwind at the top-line level. We also walked you through our expectation that non-Atlas will be modestly down, given the $40 million portion that isn’t recurring. So when you piece all of those together, you’ll probably come to the conclusion that Atlas looks consistent from a consumption growth standpoint, and that’s in line with the stable trends we’ve seen over the course of fiscal ‘24.
So we’re using those fiscal ‘24 numbers. Obviously, there are some seasonal adjustments that we have factored in, but that’s really what we’re seeing there. And then on the last part of your question around EA: we do still sell EA. We don’t tend to sell EA into new accounts; it tends to go into an existing account that is expanding its MongoDB footprint, and sellers do get compensated on EA. In part, it goes back to the comment from the earlier question that our sales reps really can’t dictate the IT deployment environment at a customer. So yes, they get paid on that. And to your comment about the EA results or expectations, that’s really just a result of the difficult compare, in part given that multi-year dynamic.
Operator: Thank you. One moment, please. Our next question comes from the line of Brent Bracelin of Piper Sandler. Your line is open.
Brent Bracelin: Good afternoon. Thank you. Michael, we’re going to stick with the guidance theme here. If I look at last year, you guided to, I think, 16% growth and ended up doing 31% for the full year. Even if I take out the $80 million tailwind you talked about, that’s still 25% growth. You’re guiding to 14% growth this year, again with roughly a 5% headwind, so closer to 19% organically. Are you more confident going into this year than last year as you think about the trends? Is the 14% comparable to the 16% initial guide last year, or is it really more like 19% on an adjusted basis compared to 16%? I know it’s a little confusing, but we’re getting a lot of questions on it. Thanks.