Title: Raising Inlet Air Temperature for a Hybrid-Cooled Server Retrofitted With Liquid Cooled Cold Plates
In typical data centers, servers and IT equipment are cooled by air, and nearly half of the total IT power is dedicated to cooling. Hybrid cooling combines air and liquid cooling: the main heat-generating components are cooled by water or water-based coolants, while the remaining components are cooled by air supplied by a CRAC or CRAH unit. Retrofitting air-cooled servers with cold plates and pumps offers an advantage for thermal management of CPUs and other high-heat-generating components. In this study, the CPUs of a typical 1U server were retrofitted with cold plates, and the server was tested at raised coolant inlet conditions. The server operated at maximum utilization of CPUs, DIMMs, and PCH for inlet coolant temperatures from 25 °C to 45 °C, following the ASHRAE guidelines. The server was also tested for pump and fan failure scenarios by reducing the number of operating pumps and fans. To reduce cooling power consumption at the facility level and increase air-side economizer hours, the hybrid-cooled server can be operated at raised inlet air temperatures. The trade-off between energy savings at the facility level from raising inlet air temperatures and the possible increase in server fan power and component temperatures is investigated. A detailed CFD analysis with a minimum number of server fans provides a way to find an operating range of inlet air temperature for a hybrid-cooled server. Model changes are carried out in 6SigmaET for an individual server and compared against experimental data to validate the model. The results from this study can help determine room-level operating set points for data centers housing hybrid-cooled server racks.
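The facility-versus-server trade-off described above can be sketched numerically. As a rough illustration only: server fan power is assumed to follow the cube of fan speed (fan affinity laws), required fan speed is assumed to rise linearly with inlet air temperature, and facility-level savings are assumed linear per degree of raised inlet. Every coefficient below is hypothetical; the paper's actual values come from experiments and 6SigmaET CFD.

```python
# Illustrative sketch of the inlet-air-temperature trade-off for a
# hybrid-cooled server. All coefficients are hypothetical placeholders.

def fan_power(speed_fraction, p_rated=12.0):
    """Fan affinity laws: power scales with the cube of speed.
    p_rated: assumed rated power per fan in watts."""
    return p_rated * speed_fraction ** 3

def required_fan_speed(t_inlet_c, t_ref_c=25.0, slope=0.02):
    """Assumed linear rise in required fan speed as inlet air warms."""
    return min(1.0, 0.5 + slope * (t_inlet_c - t_ref_c))

def facility_savings(t_inlet_c, t_ref_c=25.0, w_per_degc=40.0):
    """Assumed facility-level cooling savings per degree of raised inlet."""
    return w_per_degc * (t_inlet_c - t_ref_c)

def net_benefit(t_inlet_c, n_fans=6):
    """Facility savings minus the extra server fan power, relative to 25 C."""
    baseline = n_fans * fan_power(required_fan_speed(25.0))
    now = n_fans * fan_power(required_fan_speed(t_inlet_c))
    return facility_savings(t_inlet_c) - (now - baseline)

for t in (25, 30, 35, 40, 45):
    print(f"{t} degC inlet: net benefit {net_benefit(t):.1f} W")
```

With these placeholder numbers the facility savings dominate the cubic fan-power penalty across the whole 25–45 °C range; the paper's point is that real measurements and CFD are needed to find where that balance actually tips.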
Award ID(s):
1738811
NSF-PAR ID:
10100237
Date Published:
Journal Name:
Proceedings of the ASME 2018 International Mechanical Engineering Congress and Exposition
Page Range / eLocation ID:
V08BT10A044
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Modern data centers are operated at high power, and their operation, maintenance, and cooling account for almost 2 percent (70 billion kilowatt-hours) of total energy consumption in the US, with IT components and the cooling system occupying the major portion. Although data centers are designed to perform efficiently, cooling high-density components remains a challenge, so alternative methods of improving cooling efficiency have become the main lever for reducing cooling cost. Because liquids offer higher specific heat capacity, density, and thermal conductivity, hybrid cooling can bring the advantages of liquid cooling to the high-heat-generating components of traditional air-cooled servers. In this experiment, a 1U server is equipped with cold plates to cool the CPUs while the rest of the components are cooled by fans. Predictive fan and pump failure analyses are performed, which also help explore options for redundancy and reduce cooling cost by improving cooling efficiency. Redundancy requires knowledge of planned and unplanned system failures. As the main heat-generating components are liquid cooled, warm-water cooling can be employed to observe the effects of raised inlet conditions in a hybrid-cooled server under failure scenarios. The ASHRAE class W4 guidance for liquid cooling is chosen, giving an operating range of 25 °C to 45 °C. Experiments are conducted separately for the pump and fan failure scenarios. Computational loads of idle, 10%, 30%, 50%, 70%, and 98% are applied while powering only one pump, and the miniature dry-cooler fans are controlled externally to maintain a constant coolant inlet temperature. Because the remaining components such as the DIMMs and PCH are air cooled, maximum memory utilization is applied while the number of fans is reduced in each fan-failure case. Component temperatures and power consumption are recorded in each case for performance analysis.
  2. Modern Information Technology (IT) servers are typically assumed to operate in quiescent conditions with almost zero static pressure differential between inlet and exhaust. However, when operating in a data center containment system, the thermal status of the IT equipment is a strong function of the non-homogeneous air space, the IT utilization workload, and the overall facility cooling system design. To implement a dynamic, interfaced cooling solution, the interdependencies and variabilities between the chassis, rack, and room levels must be determined. This paper examines, at the chassis level, the effect of positive and negative static pressure differentials between server inlet and outlet on thermal performance, fan control schemes, the direction of airflow through the servers, and fan energy consumption. A web server with internal airflow paths segregated into two separate streams, each with a dedicated fan or group of fans within the chassis, is operated over a range of static pressure differentials across the server. Experiments were conducted to observe steady-state CPU temperatures and fan power consumption. Furthermore, the transient response of the server fan-speed control scheme to a typical peak in IT computational workload while operating at negative pressure differentials is reported. The effects of the internal airflow paths within the chassis are studied through experimental testing and simulations for flow visualization. The results indicate that at higher positive differential pressures across the server, increasing server fan speeds has minimal impact on the cooling of the system. On the contrary, at lower, negative differential pressures, server fan power becomes strongly dependent on the operating pressure differential. More importantly, it is shown that an imbalance of flow impedances in internal airflow paths, combined with the fan control logic, can trigger recirculation of exhaust air within the server. For accurate prediction of airflow in cases where a negative pressure differential exists, this study proposes an extended fan performance curve, rather than a regular fan performance curve, as the fan boundary condition for Computational Fluid Dynamics simulations.
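The "extended" fan performance curve can be illustrated with a small lookup. Unlike a conventional curve, which ends at the free-delivery point, the extension continues into negative static pressure rise so a CFD fan boundary condition stays defined when external pressurization pushes flow through the fan. The data points below are illustrative, not the paper's measurements.

```python
# Sketch of an extended fan performance curve used as a CFD fan
# boundary condition. Points are (flow in CFM, static pressure rise
# in Pa) at a fixed fan speed; values are hypothetical.

import bisect

EXTENDED_CURVE = [
    (0.0, 120.0),
    (10.0, 95.0),
    (20.0, 60.0),
    (30.0, 20.0),
    (35.0, 0.0),    # free delivery: where a conventional curve ends
    (45.0, -40.0),  # extension: fan acts as a flow restriction
    (55.0, -95.0),
]

def pressure_rise(flow_cfm):
    """Piecewise-linear interpolation on the extended curve, clamped
    at the table's endpoints."""
    flows = [q for q, _ in EXTENDED_CURVE]
    if flow_cfm <= flows[0]:
        return EXTENDED_CURVE[0][1]
    if flow_cfm >= flows[-1]:
        return EXTENDED_CURVE[-1][1]
    i = bisect.bisect_right(flows, flow_cfm)
    (q0, p0), (q1, p1) = EXTENDED_CURVE[i - 1], EXTENDED_CURVE[i]
    return p0 + (p1 - p0) * (flow_cfm - q0) / (q1 - q0)
```

A solver querying a flow rate beyond free delivery (e.g. 40 CFM here) gets a negative pressure rise instead of falling off the end of the table, which is the behavior the study argues is needed for negative-differential-pressure cases.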
  3.
    Abstract

    In today’s world, most data centers have multiple racks with numerous servers in each. The large amount of heat dissipated has become the biggest server-level cooling problem for data centers: the more heat to be removed, the more total energy is required to run the data center. Although still the most widely used methodology, air cooling has reached the limits of its capabilities, especially for High-Performance Computing data centers. Liquid-cooled servers have several advantages over their air-cooled counterparts, chief among which are higher thermal mass and lower maintenance. Nanofluids have been used in the past to improve the thermal efficiency of traditional dielectric coolants in the power electronics and automotive industries, and they have shown great promise in improving the convective heat transfer of coolants through a proven increase in thermal conductivity and specific heat capacity.

    The present research investigates the thermal performance enhancement of a de-ionized water-based dielectric coolant with copper nanoparticles for higher heat transfer from the server cold plates. Detailed 3-D modeling of a commercial cold plate is completed, and the CFD analysis is performed in the commercial CFD code ANSYS CFX. The results compare the improvement in heat transfer due to improved coolant properties with data available in the literature.
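The conductivity enhancement that motivates nanofluid coolants can be estimated with the classical Maxwell effective-medium model. The abstract does not state which model the authors used; this sketch is included only to illustrate why a small volume fraction of copper raises the coolant's thermal conductivity.

```python
# Classical Maxwell effective-medium model for the thermal conductivity
# of a dilute nanoparticle suspension. Shown for illustration; the
# study's own property models may differ.

def maxwell_k_eff(k_fluid, k_particle, phi):
    """Effective conductivity (W/m-K) at particle volume fraction phi."""
    num = k_particle + 2 * k_fluid + 2 * phi * (k_particle - k_fluid)
    den = k_particle + 2 * k_fluid - phi * (k_particle - k_fluid)
    return k_fluid * num / den

# Water (~0.6 W/m-K) with 2% by volume copper (~400 W/m-K) particles:
k = maxwell_k_eff(0.6, 400.0, 0.02)
print(f"k_eff = {k:.3f} W/m-K")  # roughly a 6% enhancement over water
```

Because the particle conductivity is so much higher than the fluid's, the model is nearly insensitive to the exact value of `k_particle` at low volume fractions; the enhancement is driven mainly by `phi`.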

  4. The most common approach to air cooling of data centers involves pressurizing the plenum beneath the raised floor and delivering airflow to racks via perforated floor tiles. This approach is thermodynamically inefficient, due in large part to the pressure losses through the tiles. Furthermore, it is difficult to control flow at the aisle and rack level since the flow source is centralized rather than distributed. Distributed cooling systems are more closely coupled to the heat-generating racks. In overhead cooling systems, flow can be distributed to individual aisles by placing the air mover and water-cooled heat exchanger directly above an aisle. Two arrangements are possible: (i) placing the air mover and heat exchanger above the cold aisle and forcing cooled air downward into the cold aisle (Overhead Downward Flow, ODF), or (ii) placing them above the hot aisle and forcing heated air upward from the hot aisle through the water-cooled heat exchanger (Overhead Upward Flow, OUF). This study focuses on the steady and transient behavior of overhead cooling systems in both ODF and OUF configurations and compares their cooling effectiveness and energy efficiency. The flow and heat transfer inside the servers and heat exchangers are modeled using physics-based approaches that yield differential-equation-based mathematical descriptions. These models are programmed in MATLAB™ and embedded within a CFD computational environment (the commercial code FLUENT™) that computes the steady or instantaneous airflow distribution. The complete computational model simulates the full airside flow and thermal field, the instantaneous temperatures within and pressure drops through the servers, and the instantaneous temperatures within and pressure drops through the overhead cooling system. Instantaneous overall energy consumption (1st Law) and exergy destruction (2nd Law) were used to quantify overall energy efficiency and to identify inefficiencies within the two systems. Server cooling effectiveness, based on an effectiveness-NTU model of the servers, was used to assess the cooling effectiveness of the two overhead approaches.
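An effectiveness-NTU treatment of a server can be sketched by modeling it as a single-stream heat exchanger: air at mass flow rate m_dot picks up heat from components approximated as an isothermal surface. The UA value and temperatures below are hypothetical; the paper derives its server models from physics-based equations in MATLAB coupled to FLUENT.

```python
# Sketch of a single-stream effectiveness-NTU server model.
# UA and temperatures are illustrative assumptions, not the paper's values.

import math

def ntu(ua_w_per_k, m_dot_kg_s, cp_j_kg_k=1005.0):
    """Number of transfer units for the air stream (cp of air assumed)."""
    return ua_w_per_k / (m_dot_kg_s * cp_j_kg_k)

def effectiveness(ntu_value):
    """Single-stream limit (capacity ratio -> 0): eps = 1 - exp(-NTU)."""
    return 1.0 - math.exp(-ntu_value)

def exit_air_temp(t_in_c, t_surface_c, ua_w_per_k, m_dot_kg_s):
    """Air exit temperature; eps is the fraction of the maximum possible
    temperature rise (t_surface - t_in) actually achieved."""
    eps = effectiveness(ntu(ua_w_per_k, m_dot_kg_s))
    return t_in_c + eps * (t_surface_c - t_in_c)

# Example: 20 g/s of air entering at 25 C, components near 70 C
t_out = exit_air_temp(t_in_c=25.0, t_surface_c=70.0,
                      ua_w_per_k=15.0, m_dot_kg_s=0.020)
```

In this framing, "server cooling effectiveness" compares the achieved air temperature rise to the maximum possible rise, which is what makes it a useful common yardstick for the ODF and OUF configurations.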
  5. Abstract
Until recently, transistor density trends followed Moore's law, doubling every generation and increasing power density. After the breakdown of Moore's law, computational performance gains were achieved with multicore processors, leading to nonuniform power distribution and localized high temperatures that make thermal management even more challenging. Cold-plate-based liquid cooling has proven to be one of the most efficient technologies for overcoming these thermal management issues. Traditional liquid-cooled data center deployments provide a constant flow rate to servers irrespective of workload, leading to excessive coolant pumping power, so a further gain in the efficiency of liquid cooling in data centers is possible. The present investigation proposes dynamic cooling using an active flow control device that regulates coolant flow rates at the server level. This device can save pumping power by controlling flow rates based on server utilization. The device contains a V-cut ball valve connected to a micro servo motor used to vary the valve angle. The valve position was changed by servo motor actuation at predetermined rotational angles to vary the flow rate through the valve. The device was characterized by quantifying the flow rates and pressure drop across it at different valve positions, using both computational fluid dynamics and experiments. The proposed flow control device was able to vary the flow rate between 0.09 lpm and 4 lpm at different valve positions.
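The control idea above can be sketched as a two-stage mapping: server utilization to valve angle, then valve angle to coolant flow. The 0.09–4 lpm range is from the study; the intermediate calibration points and the utilization-to-angle policy below are hypothetical.

```python
# Sketch of server-level dynamic flow control via a V-cut ball valve.
# Endpoint flows (0.09 and 4 lpm) are from the study; all intermediate
# calibration points and the control policy are hypothetical.

ANGLE_TO_LPM = [  # (valve angle in degrees, flow in lpm)
    (0, 0.09), (30, 0.8), (60, 2.2), (90, 4.0),
]

def flow_for_angle(angle_deg):
    """Piecewise-linear interpolation over the calibration table."""
    pts = ANGLE_TO_LPM
    angle_deg = max(pts[0][0], min(pts[-1][0], angle_deg))
    for (a0, q0), (a1, q1) in zip(pts, pts[1:]):
        if a0 <= angle_deg <= a1:
            return q0 + (q1 - q0) * (angle_deg - a0) / (a1 - a0)

def angle_for_utilization(util_frac):
    """Hypothetical policy: open the valve linearly with CPU utilization."""
    return 90.0 * max(0.0, min(1.0, util_frac))

# At 50% utilization the server receives a mid-range flow instead of
# the full 4 lpm a constant-flow deployment would pump continuously.
q = flow_for_angle(angle_for_utilization(0.5))
```

Because pump power grows with both flow rate and pressure drop, throttling flow at partial utilization is where the pumping-power savings the abstract describes would come from; the real device would use the measured angle-flow-pressure characterization rather than this table.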