Michael Gordon: Yeah, no, it’s fine. I go back to what I said in response to, I think it was Sanjit’s question. There’s been no fundamental change in our approach in terms of how we’re looking at and determining our guidance. I do think the confidence point is correct. We do have more confidence. We have more data. If you think back a year ago, as one of the questions indicated, there was much more macro uncertainty. Over the course of fiscal ‘24, we saw narrower variability and more consistent results, and that does give us increased confidence. I think we also have another year under our belt in terms of understanding the seasonality trends of Atlas. I know Atlas is a big business, but it’s still a relatively young one, especially when you think about getting quarterly data points.
And so I think we have more confidence and a better handle on that. And then lastly, while there is a difficult compare on EA, we talked about this in the second half of last year: at some point, you can only continue to be surprised by EA so much. And so on what I think was our third quarter call, we talked about how we were upping our views on what EA could do. And so all of that is baked into the guide.
Brent Bracelin: Helpful color there. And then just, Dev, as you think about the million dollar question, when do you think these AI tailwinds, the interest in Vector, start to really impact your business? It sounded like you think we’ll see another year of more experimentation before we see big production moves. Is that the right take? Just walk us through your current thinking on when AI really starts to show up in your business. Thanks.
Dev Ittycheria: Yeah, I think it’s going to show up in our business when people are deploying AI apps at scale, right? So I think that’s going to be at least another year. But that being said, we do see some really interesting startups who are building on top of MongoDB, so that gives us confidence about our platform’s fit for these sophisticated workloads. But, given all the noise around AI, you have to remember we’re still in the very, very early days. The performance of some of these systems is what I would classify as okay, not great. The cost of inference is quite expensive, so people have to be quite careful about the types of applications they deploy. There’s some debate about open versus closed source LLMs. Do they use use case-specific LLMs or more general-purpose LLMs?
So there’s a lot of learning going on. And obviously, there was an announcement today that yet another company had delivered better performance than GPT-4. So there’s a lot going on in this space. For people to really get comfortable picking a stack and deploying workloads en masse is going to take a bit of time. There are obviously some outliers who are being far more aggressive, but that’s essentially what we see across our customer base. The good news is that we feel we’re well positioned. We feel that a unified platform really resonates with people: one way to handle data, metadata and vector data; that we are open and composable; and that we integrate not only with all the different LLMs but also with different embedding models.
We also essentially integrate with some of the emerging application frameworks that developers want to use. So we think we’re well positioned, and you’ll see us continue to expand and broaden our reach in this category, but I do think it’s going to take a little bit of time.
Brent Bracelin: Makes sense. Thank you.
Operator: Thank you. One moment, please. Our next question comes from the line of Karl [Technical Difficulty] of UBS. Your line is open.
Unidentified Analyst: Okay, great. Maybe one for Dev and one for Mike. Dev, just because my first question follows on that, I’ll go to you first. There are certainly some voices in the industry that would argue that even in advance of AI applications being deployed at scale, which you just said might take a year, enterprises might look to spend more to modernize their existing data stack and on data readiness in advance of those AI apps going into production. Are you seeing any of that type of behavior that could precede the in-production deployment timeframe?
Dev Ittycheria: Yeah, I touched a little bit on relational migrations. I mean, that’s one area where a lot of people feel they have a lot of data trapped in these legacy platforms. As we’ve shared, we’ve always had customers migrate from legacy SQL apps to MongoDB, but the hardest part was basically rewriting the application. Generative AI essentially lowers the cost of doing so. We are running a bunch of pilots with customers. Customers are very aligned. We have access to senior-level decision makers. And we’re learning a lot. Obviously, we’re learning about the effectiveness of some of these AI technologies. We’re learning about how you have to handle old languages, old libraries, old packages, the different versions. And the variability in all of that makes it clear that this will require a mix of product and services.
Product alone today will not solve the problem. So we do think this is a big opportunity, but we’re in the early days. And as I’ve said in the past, even when I talked to investors about this pre-AI, there was no big red easy button to press to migrate a SQL app to MongoDB. And while GenAI makes that easier, it’s still going to take a little bit of time. But it’s definitely exciting, and there are a lot of customers leaning in. So we’re excited about the opportunity, but there’s a lot of work to do.