Nigel Coe: Great. Thanks very much.
Operator: Thank you. Our next question comes from Andy Kaplowitz from Citigroup. Andy, please go ahead.
Andy Kaplowitz: Hey good morning, everyone.
Dave Cote: Good morning.
Andy Kaplowitz: Gio, you said it’s difficult to discern AI versus not AI in your markets. But as you said, your pipeline of opportunities seems to have increased in the last 90 days, and you turned EMEA cloud/hyperscale to green. So is there a way to quantify how much bigger the pipeline is that is feeding into the expectation of flat bookings in Q3 and up modestly in Q4? And then, just stepping back, would you say the AI contribution in orders is coming earlier in 2023 than you expected?
Giordano Albertazzi: Well, the acceleration of pipeline that I referred to when we were going through the slides is specific to industry – sorry, to region and customer, really. So it varies, but acceleration it is. How big? It is, again, part of a wave of demand. As I explained, this is additional demand, and it is additional demand that will probably also drive the more traditional types of loads and compute further up. Specifically, the comment about EMEA is that we start to see this effect hit EMEA as well. In any case, we wanted to send the message that we see acceleration in colo/cloud across the board because of AI. How big, exactly? Again, it’s premature to say. The market is moving. I would say it’s hard to distinguish what is AI and what is not, but we know that some technologies are specifically for high density, specifically for GPUs.
And that is a little bit easier to track. But we know that there are a lot of traditional technologies – a lot of traditional technologies – that are there to enable AI as well. And that’s true for the power part of our portfolio and for the thermal part of our portfolio.
Andy Kaplowitz: Thanks guys.
Giordano Albertazzi: Thanks.
Dave Cote: Thanks, Andy.
Operator: Thank you. Our next question comes from Amit Daryanani from Evercore. Amit, please go ahead.
Amit Daryanani: Thanks for taking my question, and congrats on a nice print. Gio, I was hoping you could talk a little bit more about cooling AI clusters. Liquid cooling clearly becomes more important, especially at higher densities, but there seem to be multiple ways you can do liquid cooling: direct-to-chip is something you folks do, but I think there’s immersion and other forms of it. So from your perspective, would Vertiv’s strategy be to have a broader liquid cooling solution across different formats, or would you want to focus more on direct-to-chip? And then, as it relates to these AI clusters, can you just touch on what you think your economics look like versus the corporate average?
Giordano Albertazzi: So – I want to reiterate the message that I had when I was going through the slides: when we talk about liquid cooling here, we really talk about how we extract heat from the heat generation point, i.e., the chip, to outside the server, outside the rack and into the data hall either way or not [ph]. This is the novelty. This is the new part. As I was saying, actually, there are three ways to make that extraction. A, continue to do it through air. And we have seen air-cooled racks going all the way to north of 40 kilowatts per rack density. But clearly, at a certain stage, as one of our slides was explaining, liquid kicks in. Liquid kicks in, in the form of immersion or direct-to-chip. We have both in our portfolio.