And Jim, if I could just sneak one in to you: could you talk about the total MIPS growth, not just the cycle growth, but the total MIPS growth that you've recently experienced as we look out over the next couple of years? Just trying to understand the pricing increase and how you may be able to use pricing on TP to leverage that MIPS. That's it for me. Many thanks.
Arvind Krishna: So Keith, let me address the first part of your question. So if I look at it in the short-term, let’s just walk through. What does the consulting team actually do? They spend a lot of time upfront with the business side going through business processes, worrying about how to optimize business processes, cutting across silos, coming up with data architectures that could help the business, and helping the business decide how they’re going to roll forward. All of that, to be candid, holds even in a world of gen AI, assuming it gets to perfection. Now let’s get to the next level. A lot of what our teams do, and it is perhaps unique to us, is we tend to work on much more mission-critical systems. We tend to work much more on things which are fundamental to the business around financials, supply chain, cyber resilience and so on.
Will gen AI and large language models have an impact? Absolutely. I would tell you that if I look over a five- to 10-year horizon, I would agree with you that for that second part of consulting, not the first, I would expect to see a 30% productivity improvement. Now I would tell you that in that time frame, I believe that for IBM, that’s an advantage, not a disadvantage. Why? Absolutely, it’s disinflationary. But if we share that with the client, that means we can win more work. And the total labor pool we need to drive an amount of revenue is lower because there is obviously value in the technology that we are using, be it for test automation or code writing. And maybe I can be a little bit boastful. I’ll use that word, Keith, and talk about one example that we have.
So we wrote a code assistant for Ansible, and it is part of our watsonx family. That code assistant for Ansible, which I think is the most widely adopted language for IT deployment, can help our Ansible developers get up to 60% more productive. Now that’s one piece of it. IT deployment overall, I would then say, gets probably somewhere between 10% and 30% overall productivity. So that’s why I think that 30% number is a good one to keep in mind. But to go across all the environments, COBOL, Java, icon, AI models, that’s going to take a few years before our clients get the full confidence around that topic.
Jim Kavanaugh: Yes. Keith, thanks for the question. On mainframe overall, we’re pretty pleased given where we’re at, GA plus four quarters in. So we wrapped on last year. Remember, we talked a lot last year: unlike the prior programs, we came out with a second-quarter launch in a highly seasonal transactional quarter, and we grew 77% last year overall, and we wrapped on that this year. I think we printed down 30% here in the second quarter. But on a two-year CGR, so far, five quarters in, it’s the most successful mainframe program that we’ve had. Now why is it important? Yes, definitely, there’s the value we deliver to clients, whether it’s embedded AI, scale, cyber-resilient security, or cloud-native development, but it’s important internally to us, because it’s a platform that drives a multiplier effect, to your question, around transaction processing.