The data infrastructure industry is facing a number of challenges in today’s digital world.
Demand for data services is growing at a phenomenal rate and yet there has never been a greater pressure, or duty, to deliver those services as efficiently and cleanly as possible.
As every area of operation comes under greater scrutiny to meet these demands, one area in particular – cooling – has come into sharp focus. It is an area not only ripe for innovation, but where significant progress has been made that shows a way forward for a greener future.
According to some estimates, the number of internet users worldwide has more than doubled since 2010, while internet traffic has increased some 20-fold. Furthermore, as technologies emerge that are predicted to be the foundation of future digital economies, such as streaming, cloud gaming, blockchain, machine learning and virtual reality, demand for digital services will rise not only in volume, but also sophistication and distribution.
Increasingly, the deployment of edge computing, bringing compute power closer to where it is required and where data is generated, will see demand for smaller, quieter, remotely managed infrastructure. This one area alone is expected to grow at a compound annual growth rate (CAGR) of 16% through 2026, reaching a market worth more than $11 billion, according to GlobalData.
This level of development brings significant challenges for energy consumption, efficiency, and architecture. The IEA already estimates that data centres and data transmission networks are responsible for nearly 1% of energy-related greenhouse gas (GHG) emissions. While it acknowledges that since 2010, emissions have grown modestly despite rapidly growing demand, thanks to energy efficiency improvements, renewable energy purchases by information and communications technology (ICT) companies, and broader decarbonisation of electricity grids, it also warns that to align with the net zero by 2050 target, emissions must halve by 2030.
This is a significant technical challenge. Firstly, Moore's law has been an ever-present effect throughout the last several decades of ICT advancement: it observes that compute power more or less doubles, and costs halve, every two years or so. As transistor densities approach the single-nanometre scale, however, they are becoming ever more difficult to increase, and no less a figure than the CEO of Nvidia has asserted that Moore's law is effectively dead. This means that, in the short term, meeting demand will require deploying more equipment and infrastructure, at greater density. Added to this are recent developments from both Intel and AMD, whose high-end data centre processors will operate in the 350-400W range, further exacerbating energy demand.
All of these changes will impact cooling infrastructure and cost
In this scenario of increasing demand, higher densities, larger deployments, and greater individual energy demand, cooling capacity must be ramped up too.
Air as a cooling medium was already reaching its limits: it is difficult to manage, imprecise, and somewhat chaotic. As rack systems become more demanding, often mixing both CPU- and GPU-based equipment, individual rack demands are approaching or exceeding 30 kW each. Air-based systems at large scale also tend to demand very high water consumption, for which the industry has received criticism in the current environment. One estimate equated the water usage of a mid-sized data centre to that of three average-sized hospitals.
Liquid cooling technologies have developed as a way of meeting demand for both the volume and density needed for tomorrow’s data services. Studies with different liquid cooling techniques have established that they can be anything from 50 to 1,000 times more efficient than air cooling.
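A large part of that advantage comes down to basic physics: a liquid can carry far more heat per unit volume than air for the same temperature rise. The sketch below illustrates this with textbook property values for water and air; these figures are general reference values, not numbers drawn from the studies cited in this article.

```python
# Illustrative comparison of volumetric heat capacity (textbook values at
# roughly room temperature, not figures from any cited study).
# Volumetric heat capacity = density * specific heat capacity.

AIR_DENSITY = 1.2           # kg/m^3
AIR_SPECIFIC_HEAT = 1005    # J/(kg*K)
WATER_DENSITY = 998         # kg/m^3
WATER_SPECIFIC_HEAT = 4186  # J/(kg*K)

def volumetric_heat_capacity(density, specific_heat):
    """Joules absorbed per cubic metre per kelvin of temperature rise."""
    return density * specific_heat

air = volumetric_heat_capacity(AIR_DENSITY, AIR_SPECIFIC_HEAT)
water = volumetric_heat_capacity(WATER_DENSITY, WATER_SPECIFIC_HEAT)

print(f"Water carries ~{water / air:.0f}x more heat per unit volume than air")
```

The resulting ratio, in the thousands, helps explain why even modest liquid flow rates can replace large volumes of moving air, and why the efficiency multiples reported for liquid cooling vary so widely with the technique used.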
Liquid cooling takes many forms, but the three primary techniques currently are direct-to-chip, rear door heat exchangers, and immersion cooling.
Direct-to-chip (DtC), or direct-to-plate, cooling places a metal plate on the chip or component, allowing liquid to circulate within enclosed chambers and carry heat away. This highly effective technique is precise and easily controlled, and is often used in specialist applications such as high-performance computing (HPC) environments.
Rear door heat exchangers, as the name suggests, are close-coupled indirect systems that circulate liquid through embedded coils to remove server heat before exhausting into the room. They have the advantage of keeping the entire room at the inlet air temperature, making hot and cold aisle cabinet configurations and air containment designs redundant, as the exhaust air cools to inlet temperature and can recirculate back to the servers. The most efficient units are passive in nature, meaning server fans move the air as necessary. They are currently regarded as limited to 20 kW to 32 kW of heat removal, though units incorporating supplemental fans can handle higher loads of up to around 60 kW.
Immersion technology employs a dielectric fluid that submerges equipment and carries away heat through direct contact. Whilst for many, liquid immersion cooling immediately conjures up the image of a bath brimful of servers and dielectric fluid, precision liquid immersion cooling operates at rack chassis level, with servers and fluid in a sealed container. This enables operators to immerse standard servers with certain minor modifications, such as fan removal, as well as sealed spinning disk drives. Solid-state equipment generally does not require modification.
A distinct advantage of the precision liquid cooling approach is that full immersion provides liquid thermal density, absorbing heat for several minutes after a power failure without the need for back-up pumps. Liquid capacity equivalent to 42U of rack space can remove up to 100 kW of heat in most climate ranges, using outdoor heat exchanger or condenser water, allowing the employment of free cooling.
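The ride-through claim can be sanity-checked with a simple energy balance: the fluid mass, its specific heat, and the allowable temperature rise determine how much heat it can absorb before intervention is needed. The sketch below uses entirely assumed values for the fluid volume, properties, load, and temperature headroom; none of these figures come from the article or any vendor specification.

```python
# Hypothetical ride-through estimate for a sealed immersion chassis.
# Every input below is an assumption for illustration only.

FLUID_VOLUME_M3 = 0.25        # assumed dielectric volume in a 42U chassis
FLUID_DENSITY = 830           # kg/m^3, plausible for a dielectric fluid
FLUID_SPECIFIC_HEAT = 2100    # J/(kg*K), assumed
ALLOWED_TEMP_RISE_K = 15      # assumed headroom before components overheat
IT_LOAD_W = 50_000            # assumed 50 kW of heat with pumps stopped

fluid_mass = FLUID_VOLUME_M3 * FLUID_DENSITY                                # kg
absorbable_energy = fluid_mass * FLUID_SPECIFIC_HEAT * ALLOWED_TEMP_RISE_K  # J
ride_through_s = absorbable_energy / IT_LOAD_W                              # s

print(f"Estimated ride-through: {ride_through_s / 60:.1f} minutes")
```

Even with these rough assumptions, the result lands in the minutes range, consistent with the article's point that the fluid itself buys time after a power failure without back-up pumps.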
Cundall’s liquid cooling findings
According to a study by engineering consultants Cundall, liquid-cooling technology consistently outperforms conventional air-cooling, in terms of both power usage effectiveness (PUE) and water usage effectiveness (WUE).
This, says the report, is principally due to the much higher operating temperature of the facility water system (FWS), compared to the cooling mediums used for the air-cooled solutions. In all air-cooled cases, considerable energy and water is consumed to arrive at a supply air condition that falls within the required thermal envelope. The need for this is avoided with liquid-cooling, it states. Even in tropical climates, the operating temperature of the FWS is high enough for the hybrid coolers to operate in economiser free cooling mode for much of the time, and under peak ambient conditions, sufficient capacity can be maintained by reverting to ‘wet’ evaporative cooling mode.
A further consistent benefit, the report adds, is the reduction in rack-count and data hall area that can be achieved through higher rack power density.
The study found consistent benefits in energy efficiency and consumption, water usage, and space reduction across multiple liquid cooling scenarios, from hybrid to full immersion, as well as OpEx and CapEx advantages.
In hyperscale, colocation, and edge computing scenarios, Cundall found the total cost of cooling information technology equipment (ITE) per kWh consumed was 13-21% lower with liquid cooling than with the base case of current air-cooling technology.
In terms of emissions, Cundall states that PUE and total power usage effectiveness (TUE) are lower for the liquid-cooling options in all tested scenarios. Expressed as kg CO2 per kW of ITE power per year, the reduction exceeded 6% for colocation, rising to almost 40% for edge computing scenarios.
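For readers unfamiliar with these metrics: PUE is total facility energy divided by ITE energy, while TUE also accounts for overhead inside the servers themselves (fans, power conversion), combining an IT-internal ratio (ITUE) with PUE. The sketch below uses invented example figures, not numbers from the Cundall study, to show how liquid cooling can improve both metrics at once.

```python
# Minimal sketch of the PUE and TUE metrics. All energy figures are
# invented for illustration and do not come from the Cundall study.

def pue(total_facility_kwh, ite_kwh):
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / ite_kwh

def tue(itue, pue_value):
    """Total power usage effectiveness: server-internal overhead (ITUE) * PUE."""
    return itue * pue_value

# Hypothetical air-cooled vs liquid-cooled facilities with identical IT load.
air_pue = pue(total_facility_kwh=1_500_000, ite_kwh=1_000_000)     # 1.50
liquid_pue = pue(total_facility_kwh=1_100_000, ite_kwh=1_000_000)  # 1.10

# Liquid cooling can also remove server fans, lowering ITUE as well.
air_tue = tue(itue=1.30, pue_value=air_pue)
liquid_tue = tue(itue=1.15, pue_value=liquid_pue)

print(f"PUE: air {air_pue:.2f} vs liquid {liquid_pue:.2f}")
print(f"TUE: air {air_tue:.2f} vs liquid {liquid_tue:.2f}")
```

The point of TUE is visible in the example: because immersion removes fan power from inside the servers, the improvement it captures is larger than the facility-level PUE gain alone would suggest.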
What does the immediate future hold in terms of liquid cooling?
Combinations of liquid and air cooling techniques, in hybrid implementations, will be vital in providing a transition, especially for legacy facilities, to the efficiency- and emissions-conscious cooling that current and future facilities require. Though immersion techniques offer the greatest effect, hybrid cooling offers an improvement over air alone, with OpEx, performance, and management advantages.
Even as the data infrastructure industry institutes initiatives to better understand, manage and report sustainability efforts, such as the Climate Neutral Data Centre Pact, the Open Compute Project, and 24/7 Carbon-free Energy Compact, more can and must be done to make every aspect of implementation and operation sustainable.
Developments in liquid cooling technologies are a significant step forward that will enable operators and service providers to meet demand, while ensuring that sustainability obligations and goals can be met. Initially hybrid solutions will facilitate legacy operators to make the transition to more efficient and effective systems, while more advanced technologies will ensure new facilities are more efficient, even as capacity is built out to meet rising demand.
By working collaboratively with the broad spectrum of vendors and service providers, cooling technology providers can ensure that requirements are met, enabling the digital economy to develop to the benefit of all, while contributing towards a liveable future.