Jeff Silber: Thanks so much. I’m sorry to keep homing in on Market Intelligence, but I wanted to focus on margins. I know you slightly lowered the adjusted operating margin guidance, but it still implies a pretty steep ramp in the second half of the year. Are there any timing issues or cost cuts that are impacting this? And if not, how do you expect to see that margin acceleration in the back half?
Ewout Steenbergen: Yes. Thank you so much for that question. You’re absolutely right that we see quite a lot of seasonality in margins in Market Intelligence, and, by the way, also in several of our other segments during this year. That has to do with the factor I mentioned before around incentive compensation, where we saw a large pullback in the second and third quarter of last year. So, that is what we are going to lap this year. But then, in the fourth quarter, we have much easier comps. And just to give you a data point for the company as a whole: we are still expecting expenses for this year to end up in low single digit growth territory.
So, you would expect for that reason that fourth quarter margins are going to be really strong, and expense growth in the fourth quarter is going to be really low in order to achieve those outcomes. Let me also give you another data point in terms of margin expansion for the company as a whole: we still expect 60 to 160 basis points of margin expansion. And for Market Intelligence, you’re also looking at quite a significant increase in margins over the next two quarters, from a trailing 12-month level now of 32.4% to a range of 33.5% to 34.5% for the whole year of 2023. So, yes, the third quarter is still a little bit depressed by the expense seasonality, but the fourth quarter will have very strong margins and really low expense growth.
That’s the overall trend that you should expect for Market Intelligence and the other divisions.
Douglas Peterson: Thank you, Jeff.
Operator: Thank you. Our next question comes from Andrew Nicholas with William Blair. Your line is open.
Unidentified Analyst: Hi. Good morning. This is Tom [indiscernible] for Andrew Nicholas. I wanted to ask a question around AI, and I was curious about your strategy for allocating resources to develop AI capabilities across the segments. Given S&P’s extensive portfolio, it seems like resources could be stretched thin. So, I was wondering if you take more of a holistic approach, where there is a universal capability for developing AI across the segments, or if there are specific segments where you are focusing AI development. Thank you.
Douglas Peterson: Andrew, let me start, and then I’m going to hand it over to Ewout. I want to go back six or seven years, to when we first made our investment in Kensho and started thinking about what the future was going to look like. And we actually see that future playing out right now. We felt that five years from then, which is now, and 10 years from then, which is five years from now, people like us would be making decisions assisted by artificial intelligence and machine learning tools. And in the last year or so, it’s become apparent that there’s another leg to that stool. It’s not just artificial intelligence and machine learning; it’s also generative AI. We’ve been embracing that across the company.
Kensho has developed expertise in different types of generative AI models. They are the go-to source in the company for learning about what we can do and which models we could be applying. As you saw in our prepared remarks, we have a governance approach around AI, which is to ensure that we’re always thinking about our customers. We have a hybrid philosophy about using multiple types of models and sources, whether internally driven from Kensho or the divisions, or external, from open sources or partnerships. When you look at the needs for developing AI solutions, you end up actually having to stack multiple models on top of each other. And you require really careful management of your data, so that you can protect it as well as ensure that it will be used and displayed in the right way.
And then the third part of our governance is to ensure that we’re all protecting our data. This is one of the first things we did when we started seeing generative AI models: we took a step back to make sure that we could protect our data and our IP. But let me hand it over to Ewout, and then over to Edouard, to talk a little bit more about some of the things we’re seeing in the company.
Ewout Steenbergen: Tom, one of the things that I’m really excited about is that, on the one hand, we have Kensho, which is a relatively small group but really an innovation accelerator within the company. And then we have all of our colleagues around the world, because we have so many data scientists, technology engineers, data experts, and experts in a lot of specific areas and fields around the company. And you have to bring all of those together in order to capture the acceleration and the opportunities that generative AI and large language models are bringing for us as a company. So, Kensho is focusing on a couple of areas that will have the biggest impact for the company as a whole. This could be use cases that we are developing.
But also think about collecting the data and structuring it in token sets that are the most readable for large language models. From a technical perspective, that’s actually a really important challenge. But the good thing is that Kensho has been doing that kind of work for the last five years: we have experience working with AI, we know how to prioritize it, and we know how to track the economic benefits. So, we have a lot of experience dealing with this, and we’re not trying to invent it for the first time at this moment. But then, on the other hand, we also have crowdsourcing. We have a lot of colleagues contributing to these kinds of initiatives, collecting data for large language models, and many other initiatives we are experimenting with across the board.
So, let me hand it over to Edouard, because there’s also a lot of great experimentation and development going on in Mobility.
Edouard Tavernier: Thank you, Ewout. Thank you, Doug. Just to add to what you said, I want to bring up one example of how we’re thinking about innovating at scale with a new technology like generative AI. As Doug said, our experience in earlier forms of AI and our cloud investment over the past two years, I think, stand us in good stead to harness this new technology, but I think we have to recognize there’s a step change here. So, as we think about how we enable our organization, one of the big strengths of our strategy is to upskill people and make sure we familiarize them with this technology. Earlier in July, within Mobility, we actually launched an internal solution called Autopilot, which is all about bringing the tool to hundreds of our colleagues within the Mobility division.
And we’ve already seen dozens of use cases developed over the past two or three weeks, and we learn every day from this particular solution. We learn not just about large language models, but also about data curation, what kind of UI we need to develop, and what kind of business governance we need around it. That’s a great example of one of the ways in which, as an organization, we’re learning and harnessing this new technology. Thank you.
Operator: Thank you. Our next question comes from Craig Huber with Huber Research Partners. Your line is open.