Title: Experimental Study of Improved Chassis and Duct Redesign for Air-Cooled Server
In the United States, data centers consume about 2% of total electricity production, and up to 40% of that energy is used by the cooling infrastructure to remove heat from the components inside the facility. With recent technological advancement, power consumption has continued to rise, and a consequence of this increased energy consumption is a growing carbon footprint, which is a concern across the industry. In air cooling, the high heat-dissipating components inside a server must receive sufficient airflow for efficient cooling, and ducting is provided to direct the air toward those components. In this study, the duct in an air-cooled server is optimized: vanes are added to improve the airflow, and side vents are installed on the sides of the server chassis, upstream of the duct, to bypass some of the cool air entering from the front where the hard drives are located. Experiments were conducted on a Cisco C220 air-cooled server with the new duct and the bypass in place, and their effects are quantified by comparing the temperatures of components such as the Central Processing Units (CPUs) and the Platform Controller Hub (PCH), along with the savings in total fan power consumption. A 7.5°C drop in temperature is observed, and savings of up to 30% in fan power consumption can be achieved with the improved design compared with the standard server.
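The reported fan power savings are consistent with the fan affinity laws, under which fan power scales roughly with the cube of rotational speed. The sketch below is illustrative only (the 0.887 speed ratio is an assumed value, not a number from the paper): a modest reduction in required fan speed is enough to yield savings on the order of 30%.

```python
# Illustrative sketch (not from the paper): the fan affinity laws give
# P2 / P1 = (N2 / N1) ** 3, i.e. fan power scales with the cube of speed.

def fan_power_ratio(speed_ratio: float) -> float:
    """Relative fan power for a given relative fan speed (affinity law)."""
    return speed_ratio ** 3

# Assumed example: if the improved duct lets the fans run at ~88.7% of
# their original speed while holding component temperatures:
savings = 1.0 - fan_power_ratio(0.887)
print(f"Fan power savings: {savings:.0%}")  # -> Fan power savings: 30%
```

This cubic relationship is why even small airflow-efficiency gains from ducting translate into disproportionately large fan power savings.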
Award ID(s):
2209751
PAR ID:
10454827
Author(s) / Creator(s):
Date Published:
Journal Name:
2023 22nd IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITherm)
Page Range / eLocation ID:
1 to 8
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In typical data centers, the servers and IT equipment are cooled by air, and almost half of total IT power is dedicated to cooling. Hybrid cooling combines air and liquid cooling: the main heat-generating components are cooled by water or water-based coolants, while the remaining components are cooled by air supplied by a CRAC or CRAH unit. Retrofitting air-cooled servers with cold plates and pumps offers advantages for thermal management of CPUs and other high heat-generating components. In a typical 1U server, the CPUs were retrofitted with cold plates and the server was tested at raised coolant inlet conditions. The study showed that the server can operate at maximum utilization for CPUs, DIMMs, and PCH for inlet coolant temperatures from 25–45 °C, following the ASHRAE guidelines. The server was also tested for pump and fan failure scenarios by reducing the numbers of fans and pumps. To reduce cooling power consumption at the facility level and increase air-side economizer hours, the hybrid-cooled server can be operated at raised inlet air temperatures. The trade-off between energy savings at the facility level from raising the inlet air temperature and the possible increase in server fan power and component temperatures is investigated. A detailed CFD analysis with a minimum number of server fans can help identify an operating range of inlet air temperature for a hybrid-cooled server. Changes to the model are carried out in 6SigmaET for an individual server and compared to the experimental data to validate the model. The results from this study can help determine room-level operating set points for data centers housing hybrid-cooled server racks.
  2. Modern data centers are operated at high power for increased power density, maintenance, and cooling, accounting for almost 2 percent (70 billion kilowatt-hours) of total energy consumption in the US. IT components and the cooling system occupy the major portion of this energy consumption. Although data centers are designed to perform efficiently, cooling the high-density components remains a challenge, so alternative methods to improve cooling efficiency have become the driver to reduce cooling cost. Because liquids offer higher specific heat capacity, density, and thermal conductivity, hybrid cooling can bring the advantages of liquid cooling to the high heat-generating components of traditional air-cooled servers. In this experiment, a 1U server is equipped with a cold plate to cool the CPUs while the rest of the components are cooled by fans. In this study, predictive fan and pump failure analyses are performed, which also helps explore options for redundancy and reduce cooling cost by improving cooling efficiency. Redundancy requires knowledge of planned and unplanned system failures. As the main heat-generating components are cooled by liquid, warm-water cooling can be employed to observe the effects of raised inlet conditions in a hybrid-cooled server under failure scenarios. The ASHRAE class W4 guidance for liquid cooling is chosen for the experiment, giving an operating range of 25°C–45°C. The experiments are conducted separately for the pump and fan failure scenarios. Computational loads of idle, 10%, 30%, 50%, 70%, and 98% are applied while powering only one pump, and the miniature dry-cooler fans are controlled externally to maintain a constant coolant inlet temperature. As the remaining components, such as the DIMMs and PCH, are cooled by air, maximum memory utilization is applied while the number of fans is reduced in each fan-failure case. The component temperatures and power consumption are recorded in each case for performance analysis.
  3. In recent years, there have been phenomenal increases in Artificial Intelligence and Machine Learning, which require collecting and mining data sets to teach computers to learn and to analyze images and speech. Machine Learning tasks require substantial computing power to carry out numerous calculations, so most such servers are powered by Graphics Processing Units (GPUs) instead of traditional CPUs, as GPUs provide more computational throughput per dollar spent. The Open Compute Project forum recently introduced the state-of-the-art machine learning server "Big Sur". The Big Sur unit consists of a 4OU (OpenU) chassis housing eight NVIDIA Tesla M40 GPUs and two CPUs, along with SSD storage and hot-swappable fans at the rear. Airflow management is a critical requirement when implementing air cooling for rack-mount servers, to ensure that all components, especially critical devices such as CPUs and GPUs, receive adequate flow as required. In addition, component locations within the chassis play a vital role in the passage of airflow and affect the overall system resistance. In this paper, a sizeable improvement in chassis ducting is targeted to counteract the effects of air diffusion at the rear of the airflow duct in the "Big Sur" Open Compute machine learning server, wherein the GPUs are located directly downstream of the CPUs. A CFD simulation of the detailed server model is performed with the objective of understanding the effect of airflow bypass on GPU die temperatures and fan power consumption. The cumulative effect was studied through simulations to quantify improvements in the server's fan power consumption. The reduction in acoustic noise levels caused by the server fans is also discussed.
  4. Abstract: Data centers are witnessing an unprecedented increase in processing and data storage, resulting in an exponential increase in server power density and heat generation. Data center operators are looking for green, energy-efficient cooling technologies with low power consumption and high thermal performance. Typical air-cooled data centers must maintain safe operating temperatures for high power-consuming server components such as CPUs and GPUs. This makes air cooling inefficient, in terms of both heat transfer and energy consumption, for applications such as high-performance computing, AI, cryptocurrency, and cloud computing, forcing data centers to switch to liquid cooling. Additionally, air cooling has a higher OPEX to account for higher server fan power. Liquid Immersion Cooling (LIC) is an affordable and sustainable cooling technology that addresses many of the challenges of air cooling. LIC is becoming a viable and reliable cooling technology for many high-power applications, leading to reduced maintenance costs, lower water utilization, and lower power consumption. In terms of environmental effect, single-phase immersion cooling outperforms two-phase immersion cooling. There are two types of single-phase immersion cooling: forced and natural convection. Forced convection has a higher overall heat transfer coefficient, which makes it advantageous for cooling high-powered electronic devices, while natural convection makes it possible to simplify the cooling components, including eliminating the pump. There are, however, advantages to forced convection, especially low-velocity flow, where the pumping power is relatively negligible.
This study compares a baseline forced-convection single-phase immersion-cooled server run at three different inlet temperatures against four natural-convection configurations that use different server powers and cold plates. Since the buoyancy of the hot fluid is leveraged to generate flow in natural convection, cold plates are designed to remove heat from the server. For performance comparison, a natural-convection model with cold plates is designed in which water is the fluid flowing in the cold plate. A high-density server is modeled in Ansys Icepak, with a total server heat load of 3.76 kW. The server is made up of two CPUs and eight GPUs, each chip having its own thermal design power (TDP). For both heat transfer conditions, the fluid used in the investigation is EC-110, operated at inlet temperatures of 30°C, 40°C, and 50°C. The coolant flow rate in forced convection is 5 GPM, whereas the flow rate in the natural-convection cold plates is varied. CFD simulations are used to reduce chip case temperatures using both forced and natural convection. The pressure drop and pumping power are also evaluated for the given inlet temperature range, and the best operating parameters are established. The numerical study shows that forced-convection systems can maintain much lower component temperatures than natural-convection systems, even when the natural-convection systems are modeled with enhanced cooling characteristics.
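The pumping power evaluation mentioned above follows from the ideal hydraulic relation P = ΔP · Q. The sketch below is a hedged illustration: the 5 GPM flow rate is from the study, but the 20 kPa loop pressure drop is an assumed placeholder, not a value reported in the abstract.

```python
# Hedged sketch: ideal hydraulic pumping power is P = deltaP * Q.
# The 5 GPM flow rate comes from the study; the pressure drop below
# is a hypothetical value chosen only to illustrate the arithmetic.

GPM_TO_M3S = 6.309e-5  # 1 US gallon per minute in m^3/s

def pumping_power_w(delta_p_pa: float, flow_gpm: float) -> float:
    """Ideal hydraulic power (W) to drive flow_gpm against delta_p_pa."""
    return delta_p_pa * flow_gpm * GPM_TO_M3S

# Example: an assumed 20 kPa loop pressure drop at the 5 GPM study flow rate:
print(f"{pumping_power_w(20e3, 5.0):.1f} W")  # -> 6.3 W
```

Actual pump input power would be higher once pump and motor efficiencies are included; the point is that single-digit watts of hydraulic power can be small relative to a 3.76 kW server heat load, which is why low-velocity forced convection can be attractive.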
  5. Modern Information Technology (IT) servers are typically assumed to operate in quiescent conditions, with an almost zero static pressure differential between inlet and exhaust. However, when operating within a data center containment system, the IT equipment's thermal status is a strong function of the non-homogeneous air space, the IT utilization workloads, and the overall facility cooling system design. To implement a dynamic, interfaced cooling solution, the interdependencies and variabilities between the chassis, rack, and room levels must be determined. In this paper, the effects of positive as well as negative static pressure differentials between the inlet and outlet of servers on thermal performance, fan control schemes, the direction of airflow through the servers, and fan energy consumption are observed at the chassis level. In this study, a web server whose internal airflow paths are segregated into two separate streams, each with a dedicated fan or group of fans within the chassis, is operated over a range of static pressure differentials across the server. Experiments were conducted to observe the steady-state CPU temperatures and fan power consumption. Furthermore, the transient response of the server fan-speed control scheme to a typical peak in IT computational workload while operating at negative pressure differentials is reported. The effects of the internal airflow paths within the chassis are studied through experimental testing and simulations for flow visualization. The results indicate that at higher positive differential pressures across the server, increasing server fan speeds has minimal impact on the cooling of the system. On the contrary, at lower, negative differential pressures, server fan power becomes strongly dependent on the operating pressure differential.
More importantly, it is shown that an imbalance of flow impedances in the internal airflow paths, combined with the fan control logic, can trigger recirculation of exhaust air within the server. For accurate prediction of airflow where a negative pressure differential exists, this study proposes applying an extended fan performance curve, instead of a regular fan performance curve, as the fan boundary condition for Computational Fluid Dynamics simulations.
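One way to read the "extended fan performance curve" idea: a regular fan curve is tabulated only over the measured static-pressure range, so a CFD solver has no defined operating point once the server sees a negative pressure differential; extending the curve by extrapolation fills that gap. The sketch below illustrates this with piecewise-linear interpolation that also extrapolates past the last measured point; the sample data points are made up for illustration and are not from the paper.

```python
# Hedged sketch: extend a tabulated fan curve beyond its measured range so a
# CFD fan boundary condition stays defined at negative pressure differentials.
# The sample (flow, pressure) points below are hypothetical, not measured data.

def extended_fan_curve(flow, flows, pressures):
    """Piecewise-linear fan curve that extrapolates beyond measured flows."""
    if flow <= flows[0]:
        i = 0                      # extrapolate below the first segment
    elif flow >= flows[-1]:
        i = len(flows) - 2         # extrapolate past the last segment
    else:
        i = next(k for k in range(len(flows) - 1) if flows[k + 1] > flow)
    slope = (pressures[i + 1] - pressures[i]) / (flows[i + 1] - flows[i])
    return pressures[i] + slope * (flow - flows[i])

flows_cfm = [0, 10, 20, 30]   # hypothetical volumetric flow points (CFM)
dp_pa = [120, 90, 50, 0]      # hypothetical static pressure rise (Pa)

# Within the measured range the curve interpolates as usual:
print(extended_fan_curve(15, flows_cfm, dp_pa))   # -> 70.0
# Past free delivery the extended curve goes negative, giving the solver a
# valid operating point when external pressure pushes extra flow through:
print(extended_fan_curve(35, flows_cfm, dp_pa))   # -> -25.0
```

Linear extrapolation is only one possible choice; the point is that the boundary condition remains single-valued outside the vendor-measured range, which is what the CFD simulation needs when the server operates at a negative differential.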