Brent Bracelin: Thank you. I wanted to go back to the discussion around the increase we’re seeing in model training capacity. Clearly, billions of incremental dollars are going into NVIDIA GPUs here, and you need data to train the models. I appreciate there’s going to be a lag relative to when that spend hits the data layer. But are there any technical hurdles that need to be overcome? Or do you think this cycle is different and that there are other considerations as well? Just thinking through the investment we’re seeing right now in infrastructure, what other factors do we need to think about before it starts to impact the data layer? Thanks.
Frank Slootman: I’ll start, and then maybe Christian can follow; he can think about the question while I’m talking. You can’t characterize AI as one thing, right, because the things that people are doing with unstructured data and the whole notion of copilots and systems and tutors and all that, it’s very much focused on contextual data. And we see action with support call records, contact centers and so on. But then you look at Snowflake, which sits on mountains of structured, proprietary enterprise data, and that’s a different realm for AI than the very text-model-oriented type of inquiry. And I have to say, just from all my conversations with customers, people are behind, I would say, on the textual side [Technical Difficulty] proprietary data, how we’re going to approach that.
We view that as our business, and we’re driving that very hard, hence the emphasis on getting your data [indiscernible] in order, because you just cannot unleash a large language model and hope for the best, given all the issues we’ve mentioned before around governance and simply understanding what kind of data we are generating in the process. That’s why I said that in the early going, you’re going to see a lot of upside from [indiscernible], in that analysts are going to be able to generate data far more quickly and far better than they ever have before. And we’re massively reducing the skill and sophistication requirements to be able to do that. Data in and of itself is going to be a big driver for us.
Christian Kleinerman: Yes. Christian here. I would add maybe two areas in addition to what Frank mentioned. The first one is around having the right data to feed into these models. Frank started the call with the point that there is no AI strategy without a data strategy. And it is very pleasing that the results of traditional ML or Gen AI are a function of having the right data, the right data quality, the right metrics. The technology will only be as good as the data that is fed into it. All of the investments that we make in data quality and cleansing and pipelines, all of that is very important. The other piece that I think will be a technical imperative for everyone doing AI and Gen AI is around measurement and feedback: how good are the solutions, how do I know if there are potential biases in the data, or are there gaps in the model’s understanding and performance?
Those two are inherent parts of the lifecycle, and [interestingly, they all run to] (ph) having a great data foundation as an enabling service.
Brent Bracelin: Helpful color. And then lastly for Mike on consumption, one follow-up. The implied Q4 product growth is, I think, 26% at the midpoint. I know there’s a delta between signing growth and consumption. Exiting this year, do you think product growth stabilizes in the mid-20% range, maybe starts to reaccelerate next year, or is it just too early to tell?