
Title: CFD Optimization of the Cooling of Yosemite Open Compute Server
Over the past few years, energy consumption by IT equipment in data centers has risen steadily, increasing the need to minimize the environmental impact of data centers by optimizing energy consumption and material use. The Open Compute Project was started in 2011 to share specifications and best practices for highly energy-efficient and economical data centers with the community. The first Open Compute server was the 'Freedom' server: a vanity-free, fully custom design using a minimal number of components, deployed in a data center in Prineville, Oregon. Within the first few months of operation, considerable energy and cost savings were observed, and progressive generations of Open Compute servers have been introduced since. Initially, the servers used for compute purposes mainly had a two-socket architecture. In 2015, the Yosemite Open Compute server was introduced for higher compute capacity. Yosemite has a system-on-a-chip architecture with four CPUs per sled, providing a significant improvement in performance per watt over previous generations. This study focuses on air flow optimization in the Yosemite platform to improve its overall cooling performance. Commercially available CFD tools make it possible to model these servers thermally and predict their efficiency. A detailed server model is generated using a CFD tool and optimized to improve the air flow characteristics in the server. The thermal model of the improved design is compared to the existing design to show the impact of air flow optimization on flow rates and flow speeds, which in turn affect CPU die temperatures and cooling power consumption, and thus the overall cooling performance of the Yosemite platform.
Emphasis is placed on more effective utilization of the server fans compared with the original design and on improving air flow characteristics inside the server through improved ducting.
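The coupling the abstract describes between flow rate, die temperature, and cooling power can be sketched with a first-order energy balance. This is a generic textbook model, not the paper's CFD methodology; the flow rate, power, and thermal-resistance values are hypothetical placeholders.

```python
def air_temp_rise_c(power_w, flow_cfm, rho=1.16, cp=1005.0):
    """Bulk air temperature rise across the server: dT = P / (m_dot * cp).

    power_w  -- heat rejected into the air stream [W]
    flow_cfm -- volumetric air flow through the chassis [CFM]
    rho, cp  -- air density [kg/m^3] and specific heat [J/(kg*K)]
    """
    m_dot = rho * flow_cfm * 4.719e-4  # CFM -> m^3/s, then to kg/s
    return power_w / (m_dot * cp)

def die_temp_c(t_inlet_c, power_w, r_ja_k_per_w):
    """First-order die temperature: inlet air temperature plus
    power times junction-to-air thermal resistance."""
    return t_inlet_c + power_w * r_ja_k_per_w
```

Doubling the delivered flow halves the bulk temperature rise, which is why ducting changes that recover bypassed air translate directly into die-temperature headroom.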
Award ID(s):
1738811
PAR ID:
10065914
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
ASME 2017 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems
Page Range / eLocation ID:
V001T02A011
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In recent years there has been phenomenal growth in Artificial Intelligence and Machine Learning, which requires collecting and mining large data sets to train computers for tasks such as image and speech recognition. Machine learning workloads require substantial computing power, so most such servers are built around Graphics Processing Units (GPUs) rather than traditional CPUs, since GPUs provide more computational throughput per dollar spent. The Open Compute Project recently introduced the state-of-the-art machine learning server "Big Sur". A Big Sur unit consists of a 4OU (OpenU) chassis housing eight NVIDIA Tesla M40 GPUs and two CPUs, along with SSD storage and hot-swappable fans at the rear. Managing the airflow is a critical requirement in air cooling of rack-mount servers, to ensure that all components, especially critical devices such as CPUs and GPUs, receive adequate flow; component locations within the chassis also play a vital role in the passage of airflow and affect the overall system resistance. In this paper, a sizeable improvement in chassis ducting is targeted to counteract the effects of air diffusion at the rear of the air flow duct in the "Big Sur" Open Compute machine learning server, in which the GPUs are located directly downstream of the CPUs. A CFD simulation of the detailed server model is performed with the objective of understanding the effect of air flow bypass on GPU die temperatures and fan power consumption. The cumulative effect was studied through simulations to quantify improvements in server fan power consumption. The reduction in acoustic noise levels caused by the server fans is also discussed.
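The link the abstract draws between reduced bypass, fan power, and acoustics follows the classical fan affinity laws: shaft power scales with the cube of fan speed, and radiated sound power is commonly estimated to change by about 50·log10 of the speed ratio. A minimal sketch (the speeds in the usage note are illustrative, not measurements from the Big Sur study):

```python
import math

def fan_power_ratio(n_new_rpm, n_old_rpm):
    """Fan affinity law: shaft power scales with the cube of rotational speed."""
    return (n_new_rpm / n_old_rpm) ** 3

def fan_noise_delta_db(n_new_rpm, n_old_rpm):
    """Rule-of-thumb fan law: sound power changes by ~50*log10(speed ratio) dB."""
    return 50.0 * math.log10(n_new_rpm / n_old_rpm)
```

If better ducting lets the fans run at 80% of their previous speed, shaft power drops to roughly half (0.8^3 ≈ 0.51) and noise by about 4.8 dB.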
  2. Abstract In recent years there has been phenomenal development in cloud computing, networking, virtualization, and storage, which has increased the demand for high-performance data centers. The demand for higher CPU (Central Processing Unit) performance and increasing Thermal Design Power (TDP) trends in the industry call for advanced cooling systems that offer high heat transfer capability. Maintaining CPU temperature within specified limits with air-cooled servers becomes a challenge beyond a certain TDP threshold. Among the equipment used in data centers, the energy consumption of the cooling system is significantly large, typically estimated at over 40% of the total energy consumed. Advancements in Dual In-line Memory Modules (DIMMs) and CPU compatibility have led to higher overall server power consumption: recent trends show DIMMs consuming 20 W or more each, and each CPU can support up to 12 DIMM channels. Therefore, a data center packing high-power dense compute systems together demands efficient cooling for all server components. In single-phase immersion cooling, electronic components or servers are submerged in a thermally conductive dielectric fluid, allowing heat to be dissipated from all the electronics. The broader focus of this research is to investigate the heat transfer and flow behavior in a 1U air-cooled spread-core configuration server with heat sinks, compared to cold plates attached in series in an immersion environment. Cold plates have extremely low thermal resistance compared to standard air-cooled heat sinks. Generally, immersion fluids are dielectric, while the fluids used in cold plates are electrically conductive, which poses several challenges. In this study we focus only on understanding the thermal and flow behavior, but it is important to acknowledge the challenges associated with combining the two.
The coolant used for the cold plate is a 25% propylene glycol-water mixture, and the fluid used in the tank is EC-100, a commercially available synthetic dielectric fluid. A Computational Fluid Dynamics (CFD) model is built such that only the CPUs are cooled by cold plates while the auxiliary electronic components are cooled by the immersion fluid. A baseline CFD model of an air-cooled server with heat sinks is compared to the immersion-cooled server with cold plates attached to the CPUs. The server model uses a compact cold-plate model representing its thermal resistance and pressure drop. Results of the study show the impact on CPU temperatures for various fluid inlet conditions and predict the cooling capability of the integrated cold plate in an immersion environment.
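A compact cold-plate model of the kind the abstract mentions reduces the plate to a thermal resistance plus a quadratic pressure-drop curve. The resistance and loss coefficient below are illustrative placeholders, not the study's fitted values:

```python
def cpu_case_temp_c(t_coolant_in_c, power_w, r_th_k_per_w=0.05):
    """Compact model: case temperature = coolant inlet temperature
    plus CPU power times the cold plate's effective thermal resistance."""
    return t_coolant_in_c + power_w * r_th_k_per_w

def cold_plate_dp_kpa(flow_lpm, k=0.9):
    """Compact model: pressure drop fitted as dP = k * Q^2,
    the usual quadratic-in-flow form for a hydraulic component."""
    return k * flow_lpm ** 2
```

This pair is enough for a system-level CFD solver to couple the cold plate to the loop without resolving its internal microchannels.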
  3. Abstract This paper proposes a computational fluid dynamics (CFD) simulation methodology for the multi-design-variable optimization of heat sinks for natural convection single-phase immersion cooling of high power-density data center server electronics. Immersion cooling provides the capability to cool higher power densities than air cooling. Because of this, retrofitting data center servers initially designed for air cooling for immersion cooling is of interest. A common area of improvement is optimizing the air-cooled component heat sinks for the fluid and thermal properties of liquid-cooling dielectric fluids. Current heat sink optimization methodologies for immersion cooling demonstrated in the literature rely on a server-level optimization approach. This paper proposes a server-agnostic approach to immersion-cooling heat sink optimization by developing a heat sink-level CFD model to generate a dataset of optimized heat sinks for a range of variable input parameters: inlet fluid temperature, power dissipation, fin thickness, and number of fins. The objective function of the optimization is minimizing heat sink thermal resistance. The optimized heat sink designs exhibit improved cooling performance and reduced pressure drop compared to traditional heat sink designs. This study also shows the importance of considering multiple design variables in the heat sink optimization process and extends immersion heat sink optimization beyond server-dependent solutions. The proposed approach can also be extended to other cooling techniques and applications where optimizing the design variables of heat sinks can improve cooling performance and reduce energy consumption.
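The multi-variable optimization described above (fin count and fin thickness against a thermal-resistance objective) can be illustrated with a brute-force sweep over a deliberately simplified surrogate model: uniform convection coefficient, ideal fins, and a minimum channel-gap constraint. All dimensions and the h value are hypothetical; the actual study derives the objective from CFD, not from this closed-form surrogate.

```python
def fin_array_resistance(n_fins, t_fin_mm, width_mm=60.0, height_mm=25.0,
                         length_mm=60.0, h=600.0, min_gap_mm=0.8):
    """Simplified thermal resistance of a straight-fin array, R = 1/(h*A).

    Assumes a uniform convection coefficient h [W/(m^2*K)] and 100% fin
    efficiency -- a coarse stand-in for the CFD objective function.
    Returns None when the fins no longer fit with the minimum channel gap.
    """
    gap = (width_mm - n_fins * t_fin_mm) / max(n_fins - 1, 1)
    if gap < min_gap_mm:
        return None
    # Wetted area: two faces per fin plus the exposed base between fins
    a_fins = n_fins * 2 * (height_mm / 1000) * (length_mm / 1000)
    a_base = (n_fins - 1) * (gap / 1000) * (length_mm / 1000)
    return 1.0 / (h * (a_fins + a_base))

def best_design(fin_counts, thicknesses_mm):
    """Brute-force sweep: return (resistance, n_fins, t_fin) with minimum R."""
    designs = [(fin_array_resistance(n, t), n, t)
               for n in fin_counts for t in thicknesses_mm]
    return min(d for d in designs if d[0] is not None)
```

In this surrogate, more and thinner fins always win until the gap constraint bites; in the real CFD-driven optimization, h itself degrades as channels narrow, which is exactly why the multi-variable search is needed.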
  4. Abstract Data centers are witnessing an unprecedented increase in processing and data storage, resulting in an exponential increase in server power density and heat generation. Data center operators are looking for green, energy-efficient cooling technologies with low power consumption and high thermal performance. Typical air-cooled data centers must maintain safe operating temperatures for high-power server components such as CPUs and GPUs, which makes air cooling inefficient in terms of heat transfer and energy consumption for applications such as high-performance computing, AI, cryptocurrency, and cloud computing, and is driving data centers to switch to liquid cooling. Air cooling also carries higher OPEX to account for higher server fan power. Liquid Immersion Cooling (LIC) is an affordable and sustainable cooling technology that addresses many of the challenges of air cooling. LIC is becoming a viable and reliable technology for many high-power applications, leading to reduced maintenance costs, lower water utilization, and lower power consumption. In terms of environmental impact, single-phase immersion cooling outperforms two-phase immersion cooling. There are two types of single-phase immersion cooling: forced and natural convection. Forced convection has a higher overall heat transfer coefficient, which makes it advantageous for cooling high-powered electronic devices, whereas natural convection allows the cooling loop to be simplified, including elimination of the pump. Forced convection retains advantages, however, especially at low flow velocities where the pumping power is relatively negligible.
This study compares a baseline forced-convection single-phase immersion-cooled server, run at three different inlet temperatures, with four natural-convection configurations that use different server powers and cold plates. Since natural convection leverages the buoyancy of the hot fluid to generate flow, cold plates are designed to remove heat from the server; for performance comparison, a natural-convection model with cold plates is built in which water is the working fluid in the cold plate. A high-density server is modeled in Ansys Icepak with a total heat load of 3.76 kW. The server comprises two CPUs and eight GPUs, each chip having its own thermal design power (TDP). For both heat transfer conditions, the immersion fluid is EC-110, operated at inlet temperatures of 30°C, 40°C, and 50°C. The coolant flow rate in forced convection is 5 GPM, whereas the flow rate in the natural-convection cold plates is varied. CFD simulations are used to reduce chip case temperatures using both forced and natural convection. Pressure drop and pumping power are also evaluated for the given inlet temperature range, and the best operating parameters are established. The numerical study shows that forced-convection systems maintain much lower component temperatures than natural-convection systems, even when the natural-convection systems are modeled with enhanced cooling characteristics.
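The pumping-power evaluation mentioned above reduces to the ideal hydraulic relation P = ΔP·Q. A minimal sketch with the unit conversions made explicit (the 20 kPa pressure drop in the test is a placeholder, not a result from the study):

```python
def pumping_power_w(dp_kpa, flow_gpm):
    """Ideal hydraulic pumping power: P [W] = dP [Pa] * Q [m^3/s]."""
    q_m3_per_s = flow_gpm * 6.309e-5  # US gallons/min -> m^3/s
    return dp_kpa * 1000.0 * q_m3_per_s
```

At a 5 GPM forced-convection flow rate, even tens of kPa of loop pressure drop costs only a few watts of ideal hydraulic power; real pump wall power is higher by the inverse of the pump efficiency, but this is the sense in which low-velocity forced flow makes pumping power relatively negligible.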
  5. In typical data centers, the servers and IT equipment are cooled by air, and almost half of the total IT power is dedicated to cooling. Hybrid cooling combines air and liquid: the main heat-generating components are cooled by water or water-based coolants, while the remaining components are cooled by air supplied by a CRAC or CRAH unit. Retrofitting air-cooled servers with cold plates and pumps offers advantages for thermal management of CPUs and other high-heat-generating components. In this work, the CPUs of a typical 1U server were retrofitted with cold plates and the server was tested at raised coolant inlet temperatures. The study showed that the server can operate at maximum utilization for CPUs, DIMMs, and PCH at inlet coolant temperatures from 25-45 °C while following ASHRAE guidelines. The server was also tested for pump and fan failure scenarios with reduced numbers of fans and pumps. To reduce cooling power consumption at the facility level and increase air-side economizer hours, the hybrid-cooled server can be operated at raised inlet air temperatures. The trade-off between energy savings at the facility level from raising the inlet air temperature and the possible increase in server fan power and component temperatures is investigated. A detailed CFD analysis with a minimum number of server fans provides a way to find an operating range of inlet air temperature for a hybrid-cooled server. The model is built in 6SigmaET for an individual server and compared to experimental data for validation. The results of this study can help determine room-level operating set points for data centers housing hybrid-cooled server racks.
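The facility-level trade-off investigated above can be caricatured with a toy fan curve: fan speed rises linearly with inlet air temperature above a setpoint, and fan power follows the cube of speed (affinity law). Every number below is illustrative, not taken from the experiments:

```python
def server_fan_power_w(t_inlet_c, base_w=20.0, t_ramp_start_c=25.0,
                       t_max_c=45.0, max_speed_ratio=2.0):
    """Toy fan curve: constant power below the ramp start, then speed ramps
    linearly up to max_speed_ratio at t_max_c; power scales with speed cubed."""
    if t_inlet_c <= t_ramp_start_c:
        return base_w
    frac = (min(t_inlet_c, t_max_c) - t_ramp_start_c) / (t_max_c - t_ramp_start_c)
    speed = 1.0 + (max_speed_ratio - 1.0) * frac
    return base_w * speed ** 3
```

In this toy model, raising inlet air from 25°C to 35°C more than triples fan power per server (20 W to 67.5 W); it is this cubic penalty, summed over the rack, that must be weighed against economizer savings at the facility level.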