

Search results for all records where Award ID contains: 1738811


  1. Abstract

    With further development of electronic systems capable of larger functional densities, power density is increasing. Immersion cooling achieves the best power usage effectiveness (PUE) among data center cooling techniques, and there is ongoing interest in optimizing it to its full potential. This paper presents the effect of inclination and thermal shadowing on two-phase immersion cooling using FC-72. Boiling was simulated with the RPI (Rensselaer Polytechnic Institute) wall boiling model, with two empirical correlations used to calculate the bubble departure diameter and the nucleation site density. A constant heat flux boundary condition was assumed, the bath was kept at the boiling temperature of FC-72, and the container pressure was assumed to be atmospheric. The study showed that, due to thermal shadowing, the boiling boundary layer can lie over the top chipset and increase the vapor volume fraction there, ultimately raising the maximum temperature of the second chip. The other main observation is that a higher chip inclination angle decreases the maximum chip temperature by up to 3 °C.

     
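The abstract above does not name the empirical closures used with the RPI model. As an illustration only, the Tolubinsky-Kostanchuk bubble-departure-diameter correlation and the Lemmert-Chawla nucleation-site-density correlation, common defaults for RPI wall boiling, can be sketched as follows (the coefficient values are the usual defaults, and the example subcooling and superheat are assumptions, not values from the paper):

```python
import math

def bubble_departure_diameter(dT_sub, d_ref=0.6e-3, dT_ref=45.0):
    """Tolubinsky-Kostanchuk correlation: departure diameter in metres
    shrinks exponentially with liquid subcooling dT_sub (K)."""
    return d_ref * math.exp(-dT_sub / dT_ref)

def nucleation_site_density(dT_sup, m=210.0, p=1.805):
    """Lemmert-Chawla correlation: active sites per m^2 as a power law
    of wall superheat dT_sup (K)."""
    return (m * dT_sup) ** p

# Example (assumed conditions): 5 K subcooling, 10 K wall superheat
d_bw = bubble_departure_diameter(5.0)
n_sites = nucleation_site_density(10.0)
```

Both quantities feed the partitioned wall heat flux in the RPI model; CFD codes expose the coefficients above as tunable model constants.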
  2. Abstract

    In today's world, most data centers have multiple racks with numerous servers in each of them. The large amount of heat dissipated has become the biggest server-level cooling problem for data centers: the more heat to be dissipated, the more total energy is required to run the data center. Although still the most widely used cooling methodology, air cooling has reached the limits of its capability, especially for High-Performance Computing data centers. Liquid-cooled servers have several advantages over their air-cooled counterparts, primarily higher thermal mass and lower maintenance. Nanofluids have been used in the past to improve the thermal efficiency of traditional dielectric coolants in the power electronics and automotive industries. Nanofluids have shown great promise in improving the convective heat transfer properties of coolants because of a proven increase in thermal conductivity and specific heat capacity.

    The present research investigates the thermal performance enhancement of a de-ionized water-based dielectric coolant with copper nanoparticles for higher heat transfer from the server cold plates. Detailed 3-D modeling of a commercial cold plate is completed, and the CFD analysis is done in the commercially available CFD code ANSYS CFX. The obtained results compare the improvement in heat transfer due to the improved coolant properties with data available in the literature.

     
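The abstract above does not state which effective-property model was used for the nanoparticle-laden coolant. A common first approximation for the thermal conductivity of a dilute suspension is the Maxwell model, sketched here with nominal property values that are assumptions, not values from the paper:

```python
def maxwell_k_eff(k_bf, k_p, phi):
    """Maxwell model for the effective thermal conductivity of a dilute
    suspension: k_bf is the base fluid conductivity, k_p the particle
    conductivity, phi the particle volume fraction."""
    num = k_p + 2.0 * k_bf + 2.0 * phi * (k_p - k_bf)
    den = k_p + 2.0 * k_bf - phi * (k_p - k_bf)
    return k_bf * num / den

# Assumed values: de-ionized water (~0.6 W/m-K) with copper
# particles (~400 W/m-K) at 2% volume fraction
k_eff = maxwell_k_eff(0.6, 400.0, 0.02)
```

The model predicts a modest conductivity gain at low loadings, which is why experimental nanofluid studies often report enhancements above the Maxwell baseline.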
  3. Abstract

    Modern-day data center administrators are finding it increasingly difficult to lower the costs incurred in mechanical cooling of their IT equipment. This is especially true for high-performance computing facilities serving workloads such as artificial intelligence, bitcoin mining, and deep learning. Airside economization, or free air cooling, has long been available as a technology for reducing mechanical cooling costs. In free air cooling, under favorable ambient conditions of temperature and humidity, outside air can be used to cool the IT equipment. In doing so, the IT equipment is exposed to sub-micron particulate and gaseous contaminants that may enter the data center facility with the cooling airflow.

    The present investigation uses a computational approach to model the airflow paths of particulate contaminants entering the IT equipment, using a commercially available CFD code. A Discrete Phase Model (DPM) approach is chosen to calculate the trajectories of the dispersed contaminants. A standard RANS approach is used to model the airflow in the server, and the particles are superimposed on the flow field by the CFD solver using Lagrangian particle tracking. The server geometry was modeled in 2-D with a combination of rectangular and cylindrical obstructions, to capture the effect of obstruction type and aspect ratio on particle distribution. Identifying discrete areas of contaminant proliferation from the concentration fields produced by changing geometries will help mitigate particulate-contamination-related failures in data centers.

     
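Lagrangian particle tracking of the kind described above integrates each particle's equation of motion through the resolved flow field. A minimal one-way-coupled sketch with Stokes drag (valid for the sub-micron particles discussed, at low particle Reynolds number) is shown below; the uniform fluid velocity is a stand-in for the CFD flow field, and all parameter values are nominal assumptions:

```python
def track_particle(x0, v0, fluid_vel, d_p=1e-6, rho_p=2000.0,
                   mu=1.8e-5, dt=1e-6, steps=1000):
    """One-way-coupled Lagrangian tracking with Stokes drag (Re_p << 1).

    tau = rho_p * d_p**2 / (18 * mu) is the particle relaxation time;
    the explicit Euler step below is stable for dt < 2 * tau.
    """
    tau = rho_p * d_p ** 2 / (18.0 * mu)
    x, v = list(x0), list(v0)
    for _ in range(steps):
        for i in range(len(x)):
            v[i] += (fluid_vel[i] - v[i]) / tau * dt  # drag acceleration
            x[i] += v[i] * dt
    return x, v

# A particle released at rest relaxes to the local fluid velocity
# over a few relaxation times (tau ~ 6.2e-6 s for these values).
x, v = track_particle([0.0, 0.0], [0.0, 0.0], [1.0, 0.0])
```

In a full DPM solver the fluid velocity is interpolated at the particle position each step, and additional forces (gravity, Saffman lift, turbulent dispersion) can be added to the same update loop.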
  4. Abstract

    Increased demand for computer applications has driven a rise in data generation, resulting in high power density and heat generation in servers and their components, which require efficient thermal management. Because of the low heat-carrying capacity of air, air cooling is not an efficient method of data center cooling. Hence, liquid immersion cooling, in which the server is directly immersed in a dielectric liquid, has emerged as a prominent method. The thermal conductivity of dielectric liquids can be drastically increased by introducing nanoparticles between 1 and 150 nm in size; non-metallic nanoparticles are used so that the liquid retains its dielectric properties.

    In the present study, alumina nanoparticles with a mean size of 80 nm are mixed with mineral oil at mass concentrations of 0 to 5%. The properties of the mixture were calculated from theoretical formulas as functions of temperature. Heat transfer and the effect of nanoparticle concentration on the junction temperature of the processors were simulated using CFD techniques on an Open Compute server with two processors in a row. The junction temperature was studied for flow rates of 0.5, 1, 2, and 3 LPM at inlet temperatures of 25, 35, and 45 degrees Celsius. The chosen heat sink geometries were parallel-plate, pin-fin, and plate-fin heat sinks.

     
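Converting the mass concentrations quoted above into volume fractions and mixture properties can be sketched with the standard mixing rules below. The mineral-oil and alumina property values are nominal assumptions, and the paper's temperature-dependent formulas are not reproduced here:

```python
def mixture_props(w, rho_bf=850.0, rho_p=3970.0, cp_bf=1900.0, cp_p=880.0):
    """Nanofluid density and specific heat from particle mass fraction w.

    rho_bf/cp_bf: base fluid (mineral oil, assumed values);
    rho_p/cp_p: particle (alumina, assumed values). SI units.
    """
    # mass fraction -> volume fraction
    phi = (w / rho_p) / (w / rho_p + (1.0 - w) / rho_bf)
    rho_nf = (1.0 - phi) * rho_bf + phi * rho_p   # volume-weighted density
    cp_nf = (1.0 - w) * cp_bf + w * cp_p          # mass-weighted specific heat
    return phi, rho_nf, cp_nf

# 5% mass concentration, the upper end of the range studied
phi, rho_nf, cp_nf = mixture_props(0.05)
```

Note that a 5% mass loading corresponds to only about a 1% volume fraction here, because alumina is several times denser than the oil; density rises while specific heat falls slightly.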
  5. Abstract Transistor density trends until recently followed Moore's law, doubling every generation and thereby increasing power density. After the breakdown of Moore's law, computational performance gains were achieved using multicore processors, leading to nonuniform power distribution and localized high temperatures that make thermal management even more challenging. Cold-plate-based liquid cooling has proven to be one of the most efficient technologies for overcoming these thermal management issues. Traditional liquid-cooled data center deployments provide a constant flow rate to servers irrespective of the workload, leading to excessive consumption of coolant pumping power; a further enhancement in the efficiency of liquid cooling in data centers is therefore possible. The present investigation proposes dynamic cooling using an active flow control device to regulate coolant flow rates at the server level. This device can save pumping power by controlling flow rates based on server utilization. The flow control device contains a V-cut ball valve connected to a micro servo motor that varies the valve angle. The valve position was varied by servomotor actuation to predetermined rotational angles to change the flow rate through the valve. The device was characterized by quantifying the flow rates and the pressure drop across it at different valve positions, using both computational fluid dynamics and experiments. The proposed flow control device was able to vary the flow rate between 0.09 lpm and 4 lpm across valve positions.
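Mapping servo angle to coolant flow rate, as the device characterization above does, amounts to interpolating over measured calibration points. The calibration table below is hypothetical, chosen only to span the reported 0.09-4 lpm range; a real controller would use the measured angle-flow curve of the V-cut valve:

```python
def flow_rate_lpm(angle_deg, calibration=None):
    """Piecewise-linear interpolation of flow rate (lpm) from valve angle.

    The default calibration points are hypothetical placeholders spanning
    the 0.09-4 lpm range reported in the abstract.
    """
    if calibration is None:
        calibration = [(0, 0.09), (30, 0.8), (60, 2.2), (90, 4.0)]
    angles = [a for a, _ in calibration]
    angle_deg = max(angles[0], min(angle_deg, angles[-1]))  # clamp to range
    for (a0, q0), (a1, q1) in zip(calibration, calibration[1:]):
        if a0 <= angle_deg <= a1:
            t = (angle_deg - a0) / (a1 - a0)
            return q0 + t * (q1 - q0)

# Half-open valve under the assumed calibration
q_mid = flow_rate_lpm(45)
```

A server-utilization controller would invert this mapping: pick a target flow rate from the workload, then command the servo to the nearest calibrated angle.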
  6. Abstract Over the last decade, several hyperscale data center companies such as Google, Facebook, and Microsoft have demonstrated the cost-saving capabilities of airside economization with direct/indirect heat exchangers by moving to chiller-less air-cooled data centers. Under pressure from data center owners, information technology equipment OEMs like Dell and IBM are developing equipment that can withstand peak excursion temperature ratings of up to 45 °C, clearly outside the recommended envelope and into ASHRAE's A4 allowable envelope. As popular and widespread as these cooling technologies are becoming, airside economization comes with its challenges. There is a risk of premature hardware failures or reliability degradation posed by uncontrolled fine particulate and gaseous contaminants in the presence of temperature and humidity transients. This paper presents an in-depth review of the particulate and gaseous contamination-related challenges faced by modern data center facilities that use airside economization. The review summarizes specific experimental and computational studies that characterize airborne contaminants and the associated failure modes and mechanisms. In addition, standard lab-based and in-situ test methods for measuring the corrosive effects of particles and corrosive gases, as a means of testing equipment robustness against these contaminants under different temperature and relative humidity conditions, are also reviewed. The paper also outlines cost-sensitive mitigation techniques, such as improved filtration strategies, that can be utilized for efficient implementation of airside economization.
  7. Abstract Structural components such as printed circuit boards (PCBs) are critical in the thermomechanical reliability assessment of electronic packages. Previous studies have shown that geometric parameters such as thickness, and mechanical properties such as elastic modulus, directly influence the reliability of electronic packages. Elastic material properties of PCBs are commonly characterized using equipment such as tensile testers and then used in computational studies. In certain applications, however, viscoelastic material properties are important. Viscoelastic behavior becomes evident when the glass transition temperature of a material is exceeded, and operating conditions or manufacturing steps such as lamination and soldering may expose components to such temperatures. Knowing the viscoelastic behavior of the different components of electronic packages is important for performing accurate reliability assessments and for designing components such as PCBs that remain dimensionally stable after manufacturing. Previous researchers have used creep and stress relaxation test data to obtain the Prony series terms that represent viscoelastic behavior. Others have used dynamic mechanical analysis to obtain frequency-domain master curves that were converted to the time domain before obtaining the Prony series terms. In this paper, nonlinear solvers were applied to frequency-domain master curves from dynamic mechanical analysis to obtain Prony series terms, and finite element analysis was performed to assess the impact of including viscoelastic properties in reliability assessment. The computational results were used in a comparative assessment of the impact of including viscoelastic behavior in reliability analysis under thermal cycling and drop testing for Wafer Level Chip Scale Packages.
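The frequency-domain fitting step described above can be illustrated with a generalized Maxwell (Prony) storage-modulus model. If the relaxation times are fixed in advance (e.g. one per decade of the master curve), the moduli enter the model linearly and can be recovered by ordinary least squares; the paper's nonlinear solvers additionally adjust the relaxation times themselves. A self-contained sketch, with all numerical values synthetic:

```python
def storage_modulus(omega, e_inf, terms):
    """Storage modulus of a generalized Maxwell model:
    E'(w) = E_inf + sum_i E_i * (w*tau_i)^2 / (1 + (w*tau_i)^2)."""
    return e_inf + sum(e_i * (omega * tau) ** 2 / (1.0 + (omega * tau) ** 2)
                       for e_i, tau in terms)

def fit_prony(freqs, data, taus):
    """Least-squares fit of (E_inf, E_1..E_n) with fixed relaxation times.

    The model is linear in the moduli, so the normal equations
    A^T A x = A^T b are solved by Gaussian elimination with pivoting.
    """
    # Design matrix: column 0 -> E_inf, column i -> i-th Prony term.
    A = [[1.0] + [(w * tau) ** 2 / (1.0 + (w * tau) ** 2) for tau in taus]
         for w in freqs]
    n = len(taus) + 1
    M = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(n)]
         for i in range(n)]
    b = [sum(A[k][i] * data[k] for k in range(len(A))) for i in range(n)]
    for col in range(n):                      # forward elimination
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n                             # back substitution
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(M[r][c] * x[c]
                           for c in range(r + 1, n))) / M[r][r]
    return x[0], list(zip(x[1:], taus))
```

Fitting synthetic data generated from known terms recovers them exactly, which is a useful sanity check before fitting a measured master curve.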
  8. Abstract The continuous rise of cloud computing and other web-based services has propelled the data center proliferation seen over the past decade. Traditional data centers use vapor-compression-based cooling units that not only reduce energy efficiency but also increase operational and initial investment costs due to the redundancies involved. Free air cooling and airside economization can substantially reduce the information technology equipment (ITE) cooling power consumption, which accounts for approximately 40% of the energy consumption of a typical air-cooled data center. However, this cooling approach entails an inherent risk of exposing the ITE to harmful ultrafine particulate contaminants, thus potentially reducing equipment and component reliability. The present investigation attempts to quantify the effects of particulate contamination inside the data center equipment and ITE room using computational fluid dynamics (CFD). The boundary conditions to be used were analyzed through detailed modeling of the ITE and the data center white space. Both two-dimensional and three-dimensional simulations were performed for detailed analysis of particle transport within the server enclosure. The primary pressure-loss obstructions, such as heat sinks and dual inline memory modules inside the server, were analyzed to visualize localized particle concentrations within the server. A room-level simulation was then conducted to identify the most vulnerable locations of particle concentration within the data center space. The results show that parameters such as higher velocities, heat sink cutouts, and higher-aspect-ratio features within the server tend to increase the particle concentration inside the servers.
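One way to see why higher velocities and tighter server features increase particle deposition, as found above, is the particle Stokes number: when it approaches unity, a particle can no longer follow the flow around an obstruction and impacts it instead. A minimal sketch, where the particle density and air viscosity are nominal assumptions:

```python
def stokes_number(d_p, velocity, length, rho_p=2000.0, mu=1.8e-5):
    """Stokes number Stk = rho_p * d_p**2 * U / (18 * mu * L).

    d_p: particle diameter (m); velocity: flow speed U (m/s);
    length: characteristic obstruction scale L (m).
    Stk << 1: the particle tracks the streamlines around an obstruction;
    Stk >~ 1: inertia carries it into the obstruction (impaction).
    """
    return rho_p * d_p ** 2 * velocity / (18.0 * mu * length)

# Assumed conditions: 3 m/s channel flow, 1 mm heat-sink fin gap
stk_submicron = stokes_number(1e-6, 3.0, 1e-3)   # ~0.02: follows the flow
stk_coarse = stokes_number(10e-6, 3.0, 1e-3)     # ~1.9: impacts the fins
```

Because Stk scales linearly with velocity and inversely with feature size, doubling the local airspeed or halving a fin gap pushes more of the particle size distribution into the impaction regime.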