
The Edge Is Hot, But Keep Cool

May 1, 2019

Data Center Changes Require Thermal Management Evolution

Data centers, whether they are multi-story hyperscale facilities in excess of 10 MW or small facilities designed to reduce latency at the edge of the network, require a means of rejecting the heat generated by the servers. The methods for achieving this are changing along with the industry as the demand for efficiency and sustainability continues to increase. Cooling methods may vary depending on facility size and location, but one thing remains constant: reliable thermal management is a mission-critical requirement.

Large Data Centers: 5 Trends

1. Bigger. It’s a fact. Large data centers will only get larger with form factors differing from what we’ve seen in the past. Today’s large projects are more than 10 megawatts in size, dwarfing what was considered "large" even 2 or 3 years ago. Standardized or high-volume custom solutions with common design platforms are used in colo and hyperscale projects to drive down costs, improve efficiency, reduce risk, and increase speed to market.

2. Warmer. ASHRAE's 2016 thermal guidelines put the recommended operating envelope at 18ºC (64.4ºF) to 27ºC (80.6ºF). Raising supply temperatures toward the upper end of that range opens the door for more cooling technologies to be applicable, but it's important to remember that the thermal management system, while designed to lower operating costs and reduce CapEx, must still ensure availability of the IT hardware.

3. Taller. As the cost of real estate increases in areas where large data centers are concentrated and the availability of land becomes increasingly scarce, it only makes sense to build up instead of out to reduce the amount of land required for these facilities. Multi-story data centers, particularly in areas such as northern Virginia and the San Francisco Bay area, have become the norm.

4. Drier. As data centers look to reduce their environmental impact on local communities, the reduction or complete elimination of water has been an important focus. While it is not likely that chilled water-cooling technology utilizing cooling towers for final heat rejection will be eliminated completely, it's a technology that can add significant cost in terms of infrastructure and maintenance to any deployment. Additionally, the ongoing cost of water in areas where it may be a relatively scarce commodity and the recurring cost of water treatment continue to increase operating costs. Finally, the sheer volume of water consumed, estimated at 6.75 million gallons per year per MW,1 may have a hidden cost that takes away from a facility's sustainability story.


These factors, as well as many other advantages, are part of why water-free cooling systems have grown in popularity in recent years. Today's water-free economization systems can deliver annual efficiencies comparable to chilled water systems with cooling towers, but without the challenges of water. By utilizing pumped refrigerant technology in place of energy-intensive mechanical cooling through parts of the year, industry-leading systems can deliver annual mechanical power usage effectiveness (PUE) between 1.05 and 1.20 and save those nearly 7 million gallons of water per year per MW previously mentioned. Recent enhancements to the controls of such systems have increased annual energy savings further, by up to 50% in certain applications.
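As a rough, back-of-the-envelope illustration of how those figures scale, the sketch below applies the cited 6.75 million gallons per MW per year and a mechanical PUE within the quoted 1.05 to 1.20 band to a hypothetical 10 MW facility; the facility size and the exact PUE value are assumptions chosen for illustration only.

```python
# Rough arithmetic based on the figures cited in this article.
# Assumptions: a hypothetical 10 MW IT load, the cited water-use estimate of
# roughly 6.75 million gallons per MW per year for tower-based cooling, and a
# mechanical PUE of 1.10 (within the 1.05-1.20 range quoted above).

it_load_mw = 10                      # hypothetical facility IT load
water_gal_per_mw_year = 6.75e6       # cited estimate for tower-based cooling
mechanical_pue = 1.10                # assumed value within the quoted range

# Water a water-free economization system would avoid using each year.
water_saved_gal = it_load_mw * water_gal_per_mw_year

# Cooling power overhead implied by the mechanical PUE:
# mechanical PUE = (IT power + cooling power) / IT power.
cooling_power_mw = it_load_mw * (mechanical_pue - 1)

print(f"Water avoided per year: {water_saved_gal:,.0f} gallons")
print(f"Cooling overhead at PUE {mechanical_pue}: {cooling_power_mw:.1f} MW")
```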

Pumped refrigerant systems use a refrigerant pump instead of compressors in low to moderate ambient conditions, saving approximately 95% of the energy the compressors would otherwise use while still delivering the designated level of cooling. The systems rely on outdoor ambient temperatures and IT load to optimize operation instead of defined outdoor temperature set points, allowing the operator to maximize potential economization hours.
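To illustrate the concept only, and not any particular vendor's control logic, the sketch below picks an operating mode from outdoor ambient temperature and current IT load; the changeover thresholds and the load-dependent adjustment are hypothetical values chosen for illustration.

```python
# Illustrative mode selection for a pumped refrigerant economizer.
# The temperature thresholds and load scaling below are hypothetical;
# real systems optimize continuously against ambient conditions and IT load.

def select_cooling_mode(outdoor_temp_c: float, it_load_fraction: float) -> str:
    """Return an operating mode given outdoor temperature and IT load (0-1)."""
    # Lighter IT loads can stay on the refrigerant pump at warmer ambients,
    # so shift the changeover points with load (assumed relationship).
    pump_only_limit_c = 10.0 - 5.0 * it_load_fraction     # hypothetical
    mixed_mode_limit_c = 20.0 - 5.0 * it_load_fraction    # hypothetical

    if outdoor_temp_c <= pump_only_limit_c:
        return "pumped refrigerant only (compressors off)"
    if outdoor_temp_c <= mixed_mode_limit_c:
        return "mixed mode (pump assisted, partial compressor duty)"
    return "full mechanical cooling (compressors on)"

# Example: a mild day at 60% IT load.
print(select_cooling_mode(outdoor_temp_c=8.0, it_load_fraction=0.6))
```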

Such systems deliver a consistent data center environment through physical separation of indoor and outdoor air streams, preventing the cross-contamination or transfer of humidity. The systems can be arranged in a variety of capacities and configurations and are available as split systems or as package systems. For package systems deployed external to the building, air leakage can be kept to a minimum (<1% under normal operating conditions).

In large deployments, where units are purchased in high volumes, pumped refrigerant systems also save costs by providing added capacity without the need for large, centralized chiller plants, pumps, or cooling towers.

Alternative cooling solutions such as direct evaporative cooling systems are also being deployed, particularly in regions of moderate temperatures and low levels of humidity. These systems take advantage of the cool external air for free cooling. During times of higher external temperatures, the system uses a wetted media pad to cool the air as it is drawn in. DX or chilled water solutions may be added to the systems as mechanical "trim", but this depends on the data center’s physical location and desired internal conditions.

Such systems offer advantages such as lower peak power requirements compared to traditional compressor-based systems and lower overall energy consumption, all while consuming minimal water throughout the year, but they require a wider allowable temperature and humidity operating range to truly capture these savings. Consideration must also be given to how the data center will be cooled when outside air is not available, which may drive the requirement for a full mechanical system to be in place for such an event.
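For the wetted-media approach described above, the achievable supply air temperature can be estimated from the pad's saturation effectiveness using the standard direct evaporative cooling relationship; in the sketch below, the 85% effectiveness and the example outdoor conditions are assumptions, not figures from any specific product.

```python
# Direct evaporative cooling estimate: the supply temperature approaches the
# outdoor wet-bulb temperature in proportion to the media pad's effectiveness.
# The effectiveness value and example conditions below are assumptions.

def evaporative_supply_temp_c(dry_bulb_c: float, wet_bulb_c: float,
                              pad_effectiveness: float = 0.85) -> float:
    """T_supply = T_db - effectiveness * (T_db - T_wb)."""
    return dry_bulb_c - pad_effectiveness * (dry_bulb_c - wet_bulb_c)

# Example: a 35 C dry-bulb day with a 20 C wet bulb (a dry climate) yields
# roughly 22 C supply air, workable for facilities operating toward the
# warmer end of the ASHRAE recommended range.
print(f"{evaporative_supply_temp_c(35.0, 20.0):.1f} C supply air")
```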


5. More flexible. Raised floor environments are becoming less common in today's design and construction. Slab floor construction, in tandem with hot-aisle containment, is becoming the norm to drive down building costs and increase speed of deployment. This changes the cooling profile, as airflow can no longer be altered by simply moving floor tiles. It presents a new challenge: making sure the cool air reaches where it's most needed. Advanced thermal controls may be used to integrate rack sensors with cooling units to ensure the system is working properly and efficiently.
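As a simplified illustration of that sensor-to-cooling-unit integration, and not any specific product's algorithm, the sketch below adjusts a cooling unit's fan speed from the warmest rack inlet temperature reported; the target inlet temperature and proportional gain are assumed values.

```python
# Simplified control step for slab-floor, hot-aisle-contained environments:
# raise or lower cooling unit airflow based on the warmest rack inlet sensor.
# The 25 C target and the proportional gain are illustrative assumptions.

def adjust_fan_speed(current_speed_pct: float,
                     rack_inlet_temps_c: list[float],
                     target_inlet_c: float = 25.0,
                     gain_pct_per_c: float = 5.0) -> float:
    """Return a new fan speed (%) bounded to 20-100%."""
    hottest_inlet = max(rack_inlet_temps_c)
    error_c = hottest_inlet - target_inlet_c
    new_speed = current_speed_pct + gain_pct_per_c * error_c
    return max(20.0, min(100.0, new_speed))

# Example: inlet sensors read 24.1, 26.3, and 25.0 C; airflow ramps up slightly.
print(f"{adjust_fan_speed(60.0, [24.1, 26.3, 25.0]):.0f}% fan speed")
```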

In addition to pumped refrigerant or evaporative cooling, there are several other cooling enhancements or alternatives that can be incorporated into raised floor and non-raised floor environments to improve efficiency or solve specific challenges, such as expanding capacity or supporting higher density racks:

• Containment: Aisle containment prevents hot and cold air from mixing and improves efficiency. When the air is allowed to mix, the temperature of the air returning to the cooling units is lower, which reduces their efficiency.

• Rack chimneys: Another option is the use of chimneys mounted to the back of racks to capture the heated server exhaust and channel the air directly to the ceiling plenum. The heated air is then returned to the computer room air conditioning unit for room recirculation. While this maximizes server space, it could limit future flexibility as racks are basically fixed in position by the ducting.

• In-row cooling: Row-based thermal management units are designed to sit alongside equipment racks, with a footprint similar to the racks themselves, and provide cool air efficiently to the front of the racks.

• Rear door cooling: Heat exchanger modules may be installed on racks in place of existing rear doors to deliver up to 50kW of room-neutral cooling. Multiple units may connect to a coolant distribution unit that transfers heat between the building’s chilled water source and circulating cooling water. The rear door heat exchangers can be installed on the back of racks and use the server fans to push the air through the heat exchanger (passive) or fans can be added to assist in drawing air through the coil (active).
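To put the 50kW rear-door figure in perspective, the sketch below estimates the cooling-water flow a coolant distribution unit would need to circulate, using the basic heat-balance relationship Q = m·cp·ΔT; the 10ºC water-side temperature rise is an assumed design point, not a figure from the article.

```python
# Estimate the cooling-water flow needed to carry away a rear-door heat load.
# Uses Q = m_dot * cp * delta_T with water properties; the 10 C water-side
# temperature rise is an assumed design point for illustration.

WATER_CP_KJ_PER_KG_K = 4.186   # specific heat of water
WATER_DENSITY_KG_PER_L = 1.0   # approximate

def required_water_flow_lps(heat_load_kw: float, delta_t_c: float = 10.0) -> float:
    """Return the approximate water flow in liters per second."""
    mass_flow_kg_s = heat_load_kw / (WATER_CP_KJ_PER_KG_K * delta_t_c)
    return mass_flow_kg_s / WATER_DENSITY_KG_PER_L

# Example: a fully loaded 50 kW rear door needs roughly 1.2 L/s (about 19 GPM).
flow = required_water_flow_lps(50.0)
print(f"{flow:.2f} L/s (~{flow * 15.85:.0f} GPM)")
```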

Stay Cool Amid the Changes

Keeping data and compute closer to the customer is what's behind the growth of the edge. The Internet of Things (IoT) and 5G networks are 2 factors driving this growth, but there are many more. Small data centers, and even converted storage spaces or closets, make up this powerful market.

On the business side, edge computing is where the majority of data is stored. It's a revenue driver, and as such, failure is not an option. Insufficient cooling capacity is a primary concern of edge IT managers, and with good reason. Relying on a building's air conditioning is asking for trouble: the level of cooling provided may not be sufficient to reliably meet the thermal management needs of a given space.

Dedicated cooling equipment for edge applications is a 24/7 consideration for ensuring proper temperature, humidity levels, and airflow. Today's edge thermal management systems typically offer remote system controls and monitoring, and can even tie into building management systems (BMS). Some remote monitoring even offers management via an IoT smart device app.

Cooling options are available to fit various site limitations, including mounting above dropped ceilings or on walls. Mini-splits have gained popularity in these spaces with the advent of higher-efficiency offerings using variable capacity compressors and fans. Additionally, rack cooling systems are becoming popular, offering up to 3.5 kW of cooling with heat rejection to the room or ceiling plenum.

Despite all the options, customers continue to push for efficient yet easy-to-install systems to lower their overall deployment costs. Budget restrictions remain a concern for many edge applications, but it's difficult to put a price on availability. The edge is not a place to cut corners.

Flexibility remains key, as does keeping a finger on the pulse of what's new in thermal management. On the horizon, liquid cooling options are in development that provide cooling directly at the chip, allowing for higher performance servers, increased density, and lower overall operating costs.


Every day is a new chapter in data center thermal management, and it will only become more critical as new applications are developed and faster service is demanded. PUE levels never thought achievable have become commonplace today, lowering OpEx. Millions of gallons of water are saved daily, adding to the sustainability story of thousands of data centers.

Endnote
1. "Ignore Data Center Water Consumption at Your Own Peril," The Uptime Institute, June 2016.

About the Author

Dave Klusas

Dave Klusas is Senior Director, Custom & Large Systems Offerings, Liebert Thermal Management, Vertiv. For more information, please visit https://www.vertiv.com/en-us.