Thermal Management in Data Centres: Liquid Cooling and Heat Reuse

Data centres have always produced a lot of heat, but the rise of AI training, dense GPU racks, and high-power CPUs has changed the scale of the problem. By 2025, it’s common to see individual chips drawing 500–1,000 W, while some modern racks are designed for 50–100 kW and beyond, far more than traditional air-cooled layouts can handle. This is why liquid cooling has moved from “specialist HPC rooms” into mainstream planning for new builds and major retrofits. At the same time, operators are being pushed to treat waste heat as a resource rather than a nuisance, particularly across Europe where reporting and sustainability expectations have tightened.

Why air cooling is hitting its limits in 2025

Air cooling still works well for many standard enterprise workloads, but it becomes inefficient when heat density rises faster than airflow can realistically scale. The bottleneck is simple physics: air’s volumetric heat capacity is several thousand times lower than water’s, so moving enough of it through high-density racks requires large fans, high static pressure, and increasingly complex containment. As a result, the cooling energy overhead grows quickly, raising operating costs and making it harder to keep Power Usage Effectiveness (PUE) stable.
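
The scale of that difference is easy to check with the basic heat-transport relation Q = ṁ·c_p·ΔT. The sketch below compares the air and water flows needed to carry away the same load; the 50 kW load and 10 K temperature rise are illustrative assumptions, and the fluid properties are textbook values.

```python
# Rough comparison: fluid flow needed to remove 50 kW at a 10 K temperature rise.
# Uses Q = m_dot * c_p * delta_T; the load and delta-T are illustrative assumptions.

Q = 50_000.0        # heat load in W (a plausible high-density rack)
DELTA_T = 10.0      # allowed coolant/air temperature rise in K

# Approximate properties at typical operating conditions.
CP_AIR = 1_005.0    # J/(kg*K)
RHO_AIR = 1.2       # kg/m^3
CP_WATER = 4_180.0  # J/(kg*K)
RHO_WATER = 997.0   # kg/m^3

m_dot_air = Q / (CP_AIR * DELTA_T)       # kg/s of air
m_dot_water = Q / (CP_WATER * DELTA_T)   # kg/s of water

vol_air = m_dot_air / RHO_AIR            # m^3/s
vol_water = m_dot_water / RHO_WATER      # m^3/s

print(f"Air:   {m_dot_air:.2f} kg/s ≈ {vol_air:.2f} m^3/s")
print(f"Water: {m_dot_water:.2f} kg/s ≈ {vol_water * 1000:.2f} L/s")
print(f"Volumetric flow ratio (air/water): {vol_air / vol_water:,.0f}x")
```

With these numbers, water does the same job at roughly one three-thousandth of the volumetric flow, which is why fan power escalates so quickly as rack density climbs.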

High-density AI deployments add another constraint: temperature stability at the chip level. Even if the room temperature looks acceptable, hotspots around GPUs, VRMs, and memory can trigger thermal throttling. That throttling is not just a performance issue—it can also disrupt workload predictability and complicate capacity planning. Many operators now design for “performance per rack” rather than “servers per hall”, which naturally pushes them towards cooling approaches that remove heat closer to the source.

Water and sustainability pressures also matter. Traditional evaporative cooling can be very efficient from an energy perspective, but it consumes water—sometimes at a scale that becomes politically and operationally sensitive in water-stressed regions. Several large operators have publicly shifted toward closed-loop approaches that reduce or even remove evaporative water use, and liquid cooling plays a major part in that strategy.

What makes liquid cooling different from “better air cooling”

Liquid cooling is not simply “air cooling with a stronger chiller”. It changes the heat pathway. Instead of trying to cool the entire room and hoping the air will carry heat away from components, liquid cooling captures heat at or near the chip. That means less wasted effort cooling parts of the server that don’t need it, and far less energy spent pushing air through tight spaces.

Because liquids carry far more heat per unit volume than air, they can remove the same thermal load with much lower flow rates and smaller temperature differences. This opens the door to higher coolant temperatures (depending on design), which can reduce compressor use and support “free cooling” for more of the year. In practice, this can lower cooling electricity demand while enabling higher rack densities.
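
As a rough illustration of what warm-water loops make possible, the sketch below estimates a loop’s return temperature from its supply temperature, load, and flow. The 80 kW rack, 40 °C supply, and 1.5 kg/s flow are illustrative assumptions, not vendor figures.

```python
# Illustrative direct-to-chip loop: return temperature for a given load.
# T_return = T_supply + Q / (m_dot * c_p); all figures are assumptions.

CP_WATER = 4_180.0  # J/(kg*K)

def return_temp(t_supply_c: float, load_w: float, flow_kg_s: float) -> float:
    """Coolant return temperature after absorbing `load_w` watts."""
    return t_supply_c + load_w / (flow_kg_s * CP_WATER)

# A hypothetical 80 kW rack on a warm-water loop at 1.5 kg/s.
t_ret = return_temp(t_supply_c=40.0, load_w=80_000.0, flow_kg_s=1.5)
print(f"Return temperature: {t_ret:.1f} °C")  # ~52.8 °C with these assumptions
```

A return in the low 50s °C is warm enough to be useful on the reuse side, which is exactly the point made in the next paragraph.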

Perhaps the most overlooked benefit is heat quality. Air exhaust from a traditional room is often too low-grade and too diffuse for easy reuse. Liquid loops, by contrast, can deliver heat in a controlled and concentrated form, which is exactly what you need if you want to send it to a building heating system, a district heating network, or an industrial process.

Main liquid cooling approaches: direct-to-chip and immersion

In 2025, most projects fall into two major categories: direct-to-chip (also called cold plate cooling) and immersion cooling. Direct-to-chip attaches cold plates to high-heat components—typically CPUs, GPUs, and sometimes memory or VRMs. Coolant flows through these plates, lifting heat away efficiently. The rest of the server can still be air-cooled, which makes this approach practical for gradual adoption.
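
To see why coolant temperature and chip temperature trade off in a cold-plate design, a minimal steady-state model treats the whole cold-plate path as a single effective thermal resistance. The 700 W device power and 0.05 K/W resistance below are assumed values for illustration only.

```python
# Minimal cold-plate model: chip temperature from coolant temperature and an
# effective thermal resistance. R_th and power are illustrative assumptions.

def chip_temp(t_coolant_c: float, power_w: float, r_th_k_per_w: float) -> float:
    """Steady-state die temperature: T_chip = T_coolant + P * R_th."""
    return t_coolant_c + power_w * r_th_k_per_w

# A hypothetical 700 W accelerator behind a 0.05 K/W cold-plate path.
t = chip_temp(t_coolant_c=45.0, power_w=700.0, r_th_k_per_w=0.05)
print(f"Estimated die temperature: {t:.0f} °C")  # 80 °C with these assumptions
```

The model makes the design tension visible: raising the coolant temperature improves free cooling and heat reuse, but every extra degree on the supply side appears directly at the die.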

Immersion cooling takes a different route: servers (or server components) sit in a bath of dielectric fluid. Heat transfers directly from all surfaces into the fluid, which is then circulated through a heat exchanger. Immersion can be extremely effective for dense compute, but it often requires more specialised hardware choices and operational procedures (maintenance, handling, compatibility, and fluid management).

There is also a “hybrid reality” that many operators now accept: you do not have to pick a single method for an entire site. Mixed halls are increasingly common—air cooling for legacy or low-density racks, direct-to-chip for new AI clusters, and occasional immersion deployments for very specific workloads or research environments.

Practical selection criteria: what engineers check first

The first question is usually the heat density target. If you are designing for very high rack power, direct-to-chip is often the most straightforward path because it integrates with familiar server form factors while delivering strong thermal performance. Immersion becomes more attractive when you want maximum density and the operational model fits your organisation.

The second factor is facility integration. Direct-to-chip generally connects to a secondary water loop via a coolant distribution unit (CDU). This can be retrofitted into many facilities with careful planning around piping, redundancy, and leak detection. Immersion often changes the floor layout, maintenance workflow, and sometimes even procurement strategy, because the cooling system is tightly linked to the compute hardware.
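
As a sketch of what those secondary-loop guardrails might look like in software, the toy check below flags leaks, low flow, and excessive delta-T. The field names and thresholds are invented for illustration and are not drawn from any real CDU product.

```python
# Toy CDU telemetry check: the kind of guardrails a secondary-loop monitor
# might enforce. Field names and thresholds are made up for illustration.

from dataclasses import dataclass

@dataclass
class CduReading:
    supply_temp_c: float   # secondary-loop supply temperature
    return_temp_c: float   # secondary-loop return temperature
    flow_lpm: float        # loop flow, litres per minute
    leak_sensor_wet: bool  # moisture detected under manifolds

def check(reading: CduReading) -> list[str]:
    alarms = []
    if reading.leak_sensor_wet:
        alarms.append("LEAK: moisture detected; isolate affected manifold")
    if reading.flow_lpm < 100.0:
        alarms.append("FLOW LOW: check pumps and valve positions")
    if reading.return_temp_c - reading.supply_temp_c > 15.0:
        alarms.append("DELTA-T HIGH: load may exceed design flow")
    return alarms

print(check(CduReading(45.0, 58.0, 220.0, False)))  # healthy reading -> []
```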

Finally, engineers look at long-term economics: energy costs, water constraints, capacity growth, and equipment lifecycle. Liquid cooling can raise upfront costs (CDUs, manifolds, plumbing, monitoring), but it may lower operational costs by reducing fan power, increasing usable IT density, and improving opportunities for heat reuse. The “right” choice is rarely universal; it depends on the power roadmap, the building constraints, and how quickly the operator expects workloads to intensify.
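
The structure of that trade-off is easy to sketch as a simple payback calculation. Every input below is an assumption chosen to show the shape of the maths, not a benchmark for any real project.

```python
# Back-of-envelope payback for a liquid-cooling retrofit. All inputs are
# assumptions illustrating the structure of the calculation.

capex_premium = 400_000.0       # extra upfront cost: CDUs, manifolds, plumbing (EUR)
fan_energy_saved_kwh = 900_000  # annual cooling/fan energy avoided (kWh/year)
power_price = 0.15              # EUR per kWh
heat_revenue = 30_000.0         # annual income or offset from heat reuse (EUR/year)

annual_saving = fan_energy_saved_kwh * power_price + heat_revenue
print(f"Simple payback: {capex_premium / annual_saving:.1f} years")  # ~2.4 years here
```

Even a crude model like this makes the sensitivity obvious: electricity price and the amount of fan energy displaced dominate the result, and heat-reuse income shortens the payback further.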

Waste heat recovery

Turning waste heat into a resource: reuse models that work

Heat reuse is no longer just a sustainability talking point. In 2025, it’s becoming a measurable design goal, especially in Europe where energy performance reporting and decarbonisation targets are pushing operators to demonstrate efficiency beyond PUE. The key is to treat waste heat as a product with specifications: temperature level, reliability, seasonality, and delivery method.
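
One way to make “heat as a product” concrete is to write the specification down as a data structure. The fields below mirror the parameters just listed; the names and values are invented for illustration.

```python
# Treating waste heat as a product: a minimal spec a heat off-taker might
# ask for. Fields mirror the parameters named above; values are illustrative.

from dataclasses import dataclass

@dataclass
class HeatProductSpec:
    supply_temp_c: float    # guaranteed delivery temperature
    min_capacity_kw: float  # firm heat available to the off-taker
    availability_pct: float # delivery reliability commitment
    seasonal_profile: str   # e.g. "flat" or "winter-weighted"
    delivery: str           # e.g. "plate heat exchanger to DH network"

offer = HeatProductSpec(55.0, 800.0, 98.0, "flat",
                        "plate heat exchanger to DH network")
print(offer)
```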

Liquid cooling improves the feasibility of reuse because it can supply warmer, more controllable heat than typical air exhaust. Depending on the loop design, operators may deliver water at temperatures suitable for preheating domestic hot water, feeding heat pumps, or supporting district heating networks. Even when heat pumps are required, starting with higher-grade heat improves their coefficient of performance and strengthens the business case.
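
The thermodynamics behind that claim is straightforward: an ideal heat pump’s heating COP is T_hot / (T_hot − T_cold) with temperatures in kelvin, so a warmer source shrinks the lift. The sketch below compares an air-exhaust-grade source with a liquid-loop-grade source; the 50%-of-Carnot figure is a common rule-of-thumb assumption, not a measured value.

```python
# Why higher-grade source heat helps: ideal (Carnot) heating COP for lifting
# data-centre heat to a district-heating temperature. Real machines reach a
# fraction of the ideal; 50% is used here as a rule-of-thumb assumption.

def carnot_cop(t_source_c: float, t_sink_c: float) -> float:
    """Ideal heating COP between a source and sink temperature."""
    t_sink_k = t_sink_c + 273.15
    t_source_k = t_source_c + 273.15
    return t_sink_k / (t_sink_k - t_source_k)

for source in (30.0, 50.0):  # air-exhaust-grade vs liquid-loop-grade heat
    ideal = carnot_cop(source, 70.0)
    print(f"{source:.0f} °C source -> 70 °C sink: ideal COP {ideal:.1f}, "
          f"~{0.5 * ideal:.1f} at 50% of Carnot")
```

Doubling the effective COP roughly halves the electricity needed per unit of delivered heat, which is often the difference between a marginal and a viable reuse project.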

There are several viable reuse pathways. The most common is exporting heat to nearby buildings—offices, residential blocks, hospitals, or universities—where there is consistent demand. Another model is industrial synergy, where data centre heat supports low-temperature processes. In colder climates, district heating integration is particularly attractive, but it requires coordination with local utilities and long-term contracts to justify infrastructure investment.

Designing for reuse: what must be planned from day one

To reuse heat effectively, the thermal system must be designed around stable delivery. That means thinking beyond cooling: supply temperature targets, buffering, redundancy, and monitoring become critical. A district heating network, for example, needs predictable heat availability and clear performance parameters. Operators often use heat exchangers to isolate loops for safety and compliance while still transferring energy efficiently.
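
Buffering, for instance, reduces to a tank-sizing formula: V = Q·t / (ρ·c_p·ΔT). The sketch below sizes a tank to ride through a short interruption in heat export; all figures are illustrative assumptions.

```python
# Sizing a buffer tank to ride through a short interruption in heat export.
# V = Q * t / (rho * c_p * delta_T); all figures are illustrative assumptions.

Q = 500_000.0             # exported heat, W
RIDE_THROUGH_S = 1_800.0  # 30 minutes of buffered delivery
RHO = 997.0               # kg/m^3, water
CP = 4_180.0              # J/(kg*K)
DELTA_T = 15.0            # usable temperature swing in the tank, K

volume_m3 = Q * RIDE_THROUGH_S / (RHO * CP * DELTA_T)
print(f"Buffer volume: {volume_m3:.1f} m^3")  # ~14.4 m^3 with these assumptions
```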

Commercial structure matters as much as engineering. Successful reuse projects usually involve clear agreements on pricing, maintenance responsibility, and what happens if either side changes demand. Some operators treat heat as a revenue stream; others see it as a route to planning approval, carbon reduction, or community benefit. The practical outcome is the same: without contractual clarity, reuse projects are hard to sustain.

Finally, reuse must be measured properly. In 2025, it’s common to track not only PUE, but also metrics tied to water usage (WUE), energy recovery (ERF and ERE), and carbon impact (CUE). Operators increasingly publish sustainability performance data, and regulators in parts of the EU require standardised reporting. If heat reuse is part of the strategy, the site needs metering and reporting capability from the start, not as an afterthought.
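
As an illustration of what such reporting can look like, the sketch below computes PUE alongside the Green Grid / ISO 30134-style reuse and water metrics. The annual figures are invented for illustration.

```python
# Example of the metrics a reuse-aware site might report alongside PUE.
# Definitions follow the common Green Grid / ISO 30134-style forms; the
# annual figures below are invented for illustration.

it_energy_mwh = 40_000.0
facility_energy_mwh = 52_000.0
reused_heat_mwh = 9_000.0
water_m3 = 18_000.0

pue = facility_energy_mwh / it_energy_mwh
erf = reused_heat_mwh / facility_energy_mwh                    # Energy Reuse Factor
ere = (facility_energy_mwh - reused_heat_mwh) / it_energy_mwh  # Energy Reuse Effectiveness
wue = water_m3 / it_energy_mwh                                 # m^3/MWh == L/kWh

print(f"PUE {pue:.2f} | ERF {erf:.2f} | ERE {ere:.2f} | WUE {wue:.2f} L/kWh")
```

Metering for these figures has to be designed in: separate IT and facility energy measurement, heat-export metering at the exchanger, and water metering all need to exist before the first annual report is due.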