The cloud was once hailed as the ultimate solution, offering infinite scalability, on-demand flexibility, and freedom from the hassle of managing physical infrastructure. For many, it delivered on those promises.
However, a significant shift is underway: the 2024 Barclays CIO Survey found that 83% of enterprise CIOs planned to repatriate at least some workloads from the public cloud back to on-premises or private infrastructure in 2024, a substantial increase from 43% in 2020.
This wave of cloud repatriation isn’t about going backwards – it’s about making smarter, more strategic choices. Especially for predictable, high-volume workloads, the economics and performance benefits of keeping operations in-house are becoming impossible to ignore. But repatriation isn’t as simple as flipping a switch.
Before making the move, organisations must fully understand the cost, complexity, and infrastructure requirements needed to support modern workloads on-premises. That’s where emerging technologies like advanced liquid cooling come into play – offering the efficiency and density gains that make an on-premises approach viable again at scale.
Shifting Clouds
The shift away from cloud-first strategies isn’t just noise; it’s a response to growing economic and operational pressure. Cloud sticker shock is real, and organisations are recognising that not every workload belongs in the cloud. For predictable, high-volume tasks – like analytics pipelines or AI training – on-premises infrastructure can offer more consistent performance and clearer cost control.
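That cost-control argument can be sanity-checked with back-of-the-envelope arithmetic. The sketch below compares a hypothetical always-on cloud instance against an amortised on-premises server; every figure is an illustrative assumption, not a quote from any provider.

```python
# Illustrative break-even sketch for a steady, always-on workload.
# All prices below are placeholder assumptions, not real quotes.

HOURS_PER_MONTH = 730

# Hypothetical cloud cost: one GPU instance, on-demand.
cloud_rate_per_hour = 3.00          # assumed $/hour
cloud_monthly = cloud_rate_per_hour * HOURS_PER_MONTH

# Hypothetical on-prem cost: comparable server amortised over 4 years,
# plus an assumed allowance for power, cooling, and operations.
server_capex = 40_000               # assumed purchase price, $
amortisation_months = 48
power_and_ops_monthly = 350         # assumed $/month

onprem_monthly = server_capex / amortisation_months + power_and_ops_monthly

print(f"Cloud:   ${cloud_monthly:,.0f}/month")
print(f"On-prem: ${onprem_monthly:,.0f}/month")

breakeven_months = server_capex / (cloud_monthly - power_and_ops_monthly)
print(f"Hardware pays for itself in ~{breakeven_months:.0f} months "
      f"at full utilisation")
```

Under these assumptions the server pays for itself in under two years; the same arithmetic run for bursty or idle workloads tips the other way, which is exactly why repatriation is workload-by-workload rather than all-or-nothing.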
Beyond cost, data gravity and compliance are major concerns. Transferring large volumes of data across cloud environments can be expensive, introduce latency, and increase regulatory risk. In the EU and beyond, data sovereignty requirements are tightening, making single-tenant edge data centres an increasingly attractive option for enterprises seeking control and locality.
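Data gravity has a concrete price tag. A minimal sketch, using an assumed per-gigabyte egress rate, shows how transfer charges scale with volume; the rate and volumes are placeholders, not any provider’s published pricing.

```python
# Rough egress cost estimate for moving data out of a cloud environment.
# The per-GB rate is an assumed placeholder; real rates vary by
# provider, region, and negotiated discounts.

dataset_tb = 500                    # assumed dataset size, TB
egress_rate_per_gb = 0.09           # assumed $/GB transferred out

one_time_egress = dataset_tb * 1024 * egress_rate_per_gb
print(f"One-time transfer of {dataset_tb} TB: ~${one_time_egress:,.0f}")

# Recurring pipelines that repeatedly pull data across environments
# incur that cost every month, not once.
monthly_sync_tb = 50                # assumed recurring transfer volume, TB
monthly_cost = monthly_sync_tb * 1024 * egress_rate_per_gb
print(f"Recurring {monthly_sync_tb} TB/month: ~${monthly_cost:,.0f}/month")
```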
Add in growing geopolitical uncertainty, from cross-border data laws to regional trade tensions, and the picture becomes even more complex.
Repatriating workloads is becoming a key lever for regaining autonomy, improving resilience, and future-proofing IT investments.
Repatriation isn’t just a strategic or financial shift – it comes with very real physical consequences. When organisations decide to bring workloads back on-premises, they often underestimate the infrastructure demands required to support them. The workloads themselves haven’t stood still. Thanks to the explosion of AI, machine learning, and real-time analytics, compute intensity has increased dramatically. What once ran comfortably in a modest virtualised environment now demands high-density servers, GPU clusters, and specialised accelerators.
Cooling Bottleneck
This shift has exposed a major gap in enterprise readiness. Many existing data centres were built in an era of lighter thermal and power requirements, designed for traditional CPU-based, air-cooled systems. These facilities are quickly reaching their limits.
Simply put, the physical environment can no longer keep pace with the performance expectations of modern workloads. To meet new power and cooling demands, organisations are often forced to overprovision space, deploy expensive workarounds, or risk operational inefficiencies.
The result is a cooling bottleneck. As compute demand rises, so does the heat – and with it, the cost and complexity of managing it. Without modernisation, legacy infrastructure becomes a constraint rather than a foundation for innovation.
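The bottleneck is easy to quantify, because virtually all electrical power drawn by IT equipment ends up as heat. The sketch below estimates the heat load of one GPU-dense rack against an assumed air-cooling ceiling; the wattages and the 15 kW limit are illustrative assumptions, though they sit in the ranges commonly cited for legacy air-cooled rooms.

```python
# Back-of-the-envelope rack heat load, with assumed figures.
# Rack power draw is approximately equal to rack heat load.

servers_per_rack = 4
gpus_per_server = 8
watts_per_gpu = 700                 # assumed accelerator power draw
other_watts_per_server = 2_000      # assumed CPUs, memory, fans, etc.

rack_kw = servers_per_rack * (gpus_per_server * watts_per_gpu
                              + other_watts_per_server) / 1_000
print(f"Estimated rack heat load: {rack_kw:.0f} kW")

air_cooling_limit_kw = 15           # assumed ceiling for a legacy air-cooled room
if rack_kw > air_cooling_limit_kw:
    print(f"Exceeds the assumed {air_cooling_limit_kw} kW air-cooled limit "
          f"by {rack_kw / air_cooling_limit_kw:.1f}x")
```

A single rack like this produces roughly double what a typical legacy room was designed to remove, which is why the workarounds tend to involve half-empty racks and wasted floor space.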
That’s why new approaches to thermal management, particularly liquid cooling, are emerging as critical enablers for repatriation at scale. In 2024, 20.1% of enterprises reported using some form of liquid cooling in their data centres, a figure expected to nearly double to 38.3% by 2026, according to a survey from The Register.
Hybrid Liquid Cooling for Data Centres
As enterprises repatriate workloads, the question is not only where compute happens, but how that compute is supported. This is where hybrid liquid cooling shines. It delivers the thermal performance today’s high-density, GPU-accelerated workloads demand, without requiring a complete redesign of existing data centre infrastructure. For organisations modernising on-premises without the luxury of expanding their physical footprint, that’s a game-changer.
Hybrid liquid cooling enables significantly more compute per square foot, allowing enterprises to do more with their existing space and power envelopes. This density is critical not only for supporting AI and analytics workloads today, but for preparing for the edge-driven, distributed compute environments of tomorrow. As data moves closer to the source, whether in branch offices, industrial settings, or telco environments, cooling must become both more efficient and more adaptable.
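The density gain can be framed as deliverable power per unit of floor space. A minimal sketch, under assumed per-rack cooling ceilings, illustrates the difference; both ceilings and the footprint figure are placeholders rather than vendor specifications.

```python
# Compute density per unit floor area under two assumed cooling ceilings.
# All figures are illustrative assumptions.

rack_footprint_sqft = 10            # assumed floor area per rack incl. aisles
air_kw_per_rack = 15                # assumed air-cooled ceiling
liquid_kw_per_rack = 60             # assumed hybrid liquid-cooled ceiling

for label, kw in [("Air-cooled", air_kw_per_rack),
                  ("Hybrid liquid-cooled", liquid_kw_per_rack)]:
    print(f"{label}: {kw / rack_footprint_sqft:.1f} kW per sq ft")
```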
Enterprises need to plan for performance and long-term sustainability. Liquid cooling systems dramatically reduce energy consumption compared to traditional air cooling, supporting broader ESG initiatives while compounding the operational cost savings enterprises hope to achieve by exiting the cloud.
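One way to reason about those savings is through PUE (power usage effectiveness), the ratio of total facility power to IT power; overhead shrinks as PUE approaches 1.0. The sketch below compares annual energy cost under two assumed PUE values; the IT load, tariff, and both PUE figures are illustrative assumptions.

```python
# Annual energy cost comparison via PUE, with assumed values.
# PUE = total facility power / IT power.

it_load_kw = 500                    # assumed steady IT load
pue_air = 1.6                       # assumed legacy air-cooled facility
pue_liquid = 1.15                   # assumed hybrid liquid-cooled facility
tariff = 0.12                       # assumed $/kWh
HOURS_PER_YEAR = 8_760

def annual_energy_cost(pue: float) -> float:
    """Total facility energy cost for one year at the given PUE."""
    return it_load_kw * pue * HOURS_PER_YEAR * tariff

saving = annual_energy_cost(pue_air) - annual_energy_cost(pue_liquid)
print(f"Air-cooled:    ${annual_energy_cost(pue_air):,.0f}/year")
print(f"Liquid-cooled: ${annual_energy_cost(pue_liquid):,.0f}/year")
print(f"Annual saving: ~${saving:,.0f}")
```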
Reduced thermal stress means less wear and tear on hardware, longer equipment lifecycles, and fewer maintenance headaches – translating to even more efficiency gains over time.