By Curt Wallace
A recent report on energy efficiency in data centers made a splash in the industry. The article, "Recalibrating Global Data Center Energy-Use Estimates," argues that energy use in data centers may not have increased at the scale reported by previous analyses. While that is good news for an energy-hungry industry that continues to grow with escalating demands for data, it's no reason for data centers to rest on their laurels. With the increasing reliance on digital services and the adoption of new technologies like artificial intelligence (AI) and smart systems, data centers will continue to need more energy to satisfy demand. And since cooling is one of the two major uses of electricity in a data center, reducing cooling costs is one of the clearest ways to increase efficiency and cut power consumption.
In 2016, data center operators worldwide spent $8 billion on cooling, a figure expected to reach $20 billion by 2024 if left unchecked. The driver is the sheer amount of data being generated and needing a place to be stored: a projected 175 zettabytes by 2025. Artificial intelligence changes the equation, too: AI workloads run on GPU servers, which run considerably hotter than CPU-based systems and can significantly exceed the 5-10 kW per rack of a typical air-cooled data center.
Many data center operators, along with mainstream information and communications technology (ICT) companies such as Dell and Hewlett Packard, understand the challenge posed by new technologies, rising heat loads, and the pressure to reduce energy consumption. Any data center looking to significantly improve efficiency can take a page out of their playbook: focus on reducing the electricity used to cool the servers. Liquid immersion cooling is a way to achieve that reduction.
Cutting cooling costs is one of the most significant ways to improve data center energy efficiency: with the industry-average PUE for 2019 at 1.67, roughly $0.67 of overhead, most of it cooling, is spent for every $1 of IT power load. Immersing the servers in a non-conductive fluid, with roughly 1,200 times the heat-transfer capacity of air, absorbs 100% of the heat and can maintain optimal core temperatures with a much lower delta between the coolant and the heat source. This eliminates the need for compressor-based cooling and reduces the mechanical work and electrical power needed to run chiller plants. Beyond being a more energy-efficient cooling solution, liquid immersion cooling also reduces the costs to acquire, maintain, and power computer room air conditioning (CRAC) systems, with chiller maintenance alone typically running an industry average of 3-5% of upfront cost annually.
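The PUE arithmetic above can be sketched in a few lines of Python. This is a minimal illustration, not from the report: the function name is mine, and treating the full PUE overhead as cooling spend is an assumption, since overhead also includes power distribution losses and lighting.

```python
# Illustrative PUE arithmetic. PUE = total facility energy / IT equipment
# energy, so non-IT overhead per unit of IT energy is simply PUE - 1.
# Attributing all of that overhead to cooling is an assumption (overhead
# also covers power distribution losses, lighting, etc.).

def overhead_per_it_dollar(pue: float) -> float:
    """Facility overhead spent for every $1 of IT power load."""
    return pue - 1.0

avg_2019_pue = 1.67  # industry-average PUE cited in the article
print(f"Overhead per $1 of IT load: ${overhead_per_it_dollar(avg_2019_pue):.2f}")
# prints "Overhead per $1 of IT load: $0.67"
```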
Liquid immersion cooling starts to be worthwhile at densities as low as 15 kW per rack and can easily cool up to 100 kW per rack (theoretically up to 200 kW per rack when paired with a chilled water system), compared with rear-door heat exchangers, which max out at about 15 kW per rack. And liquid cooling does all this in a reduced footprint (no need for air conditioning units or raised floors) and allows server fans to be disabled or removed, cutting server power by 10-30%, all while maintaining a stable thermal environment. Installations of liquid immersion cooling systems have seen total data center energy usage fall by 50% and PUE drop as low as 1.02.
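To put the two PUE figures cited here side by side, a back-of-the-envelope sketch under stated assumptions: the 1 MW IT load is a hypothetical example value, and the article's 50% total-energy figure also reflects savings from removing server fans, which PUE alone does not capture.

```python
# Hedged sketch: facility-level power implied by the PUE figures cited
# above (1.67 industry average vs. 1.02 reported for immersion cooling).
# The 1 MW IT load is a hypothetical example, not a figure from the text.

def facility_power_kw(it_load_kw: float, pue: float) -> float:
    """Total facility power implied by a given IT load and PUE."""
    return it_load_kw * pue

it_load = 1000.0  # hypothetical 1 MW of IT equipment
air_cooled = facility_power_kw(it_load, 1.67)
immersion = facility_power_kw(it_load, 1.02)
savings_pct = 100 * (air_cooled - immersion) / air_cooled
print(f"Air-cooled facility draw: {air_cooled:.0f} kW")   # 1670 kW
print(f"Immersion-cooled draw:    {immersion:.0f} kW")    # 1020 kW
print(f"Facility-level savings:   {savings_pct:.0f}%")    # 39%
```

The ~39% here comes from PUE alone; fan removal and other server-side gains account for the rest of the 50% reduction the article cites.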
Soon enough, increasing processing speeds and compute loads, changing technologies, and growing demand will outpace the efficiency gains data center operators have been relying on to reduce energy use. Data centers will need forward-thinking strategies and management to become as energy efficient as possible without adding complexity. Liquid immersion cooling is the answer. It can be deployed quickly, incrementally, and cost-effectively, and it reduces energy costs. It may be the only way data center operators can effectively scale up and participate in the global AI race.
Wallace serves as Senior Solutions Architect at Green Revolution Cooling (GRC), where he works with end users to provide an understanding of GRC's capabilities and its potential to decrease energy usage and increase resiliency in data centers. He earned a BS in Education/Mathematics from Eastern Illinois University.