AC vs. Hybrid AC/DC Powered Data Centers: A Workload Based Perspective
Proponents of AC-powered data centers have implicitly assumed that the electrical load presented to all three phases of an AC data center is balanced. To ensure this, servers are connected to the AC power phases to present identical loads, assuming a uniform expected utilization level for each server. We present an experimental study demonstrating that, with the inevitable temporal changes in server workloads or with dynamic server capacity management based on known daily load patterns, balanced electrical loading across all power phases cannot be maintained. Such imbalances introduce a reactive power component that represents an effective power loss and reduces the overall energy efficiency of the data center, resulting in a handicap against DC-powered data centers, where such a loss is absent.
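The loss mechanism the abstract describes can be illustrated with a small phasor sketch (the phase currents below are hypothetical, not measurements from the study): when the three phase loads diverge, their phasor sum no longer cancels, and the residual neutral current dissipates extra power.

```python
import cmath
import math

def neutral_current(i_a, i_b, i_c):
    """Phasor sum of the three line currents (RMS magnitudes).

    For perfectly balanced loads the 120-degree-displaced phasors
    cancel and the neutral current is zero; any imbalance leaves a
    residual current that dissipates extra I^2*R power.
    """
    a = cmath.exp(1j * 2 * math.pi / 3)  # 120-degree rotation operator
    return i_a + i_b * a + i_c * a ** 2

# Balanced loading: identical current on every phase
print(abs(neutral_current(10, 10, 10)))  # ~0 A: phasors cancel

# Workload drift shifts load onto phase A (hypothetical values)
print(abs(neutral_current(14, 9, 7)))    # ~6.2 A of residual current
```

The residual current grows with the degree of imbalance, which is why workload-driven load drift translates directly into extra conductor loss.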
- Award ID(s):
- 1738793
- PAR ID:
- 10162332
- Date Published:
- Journal Name:
- IEEE Conference on Industrial Informatics, Aalto University, Espoo, Finland
- Page Range / eLocation ID:
- 1411 to 1418
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Adoption of renewable energy in power grids introduces stability challenges in regulating the operating frequency of the electricity grid. Thus, electrical grid operators call for frequency regulation services from end-user customers, such as data centers, which help balance the grid's stability by dynamically adjusting their energy consumption based on the grid's needs. As renewable energy adoption grows, the average reward price of frequency regulation services has become much higher than the electricity cost, so there is a strong cost incentive for data centers to provide frequency regulation service. Many existing techniques for modulating data center power cause significant performance slowdown or provide a low amount of frequency regulation provision. We present PowerMorph, a tight QoS-aware data center power-reshaping framework that enables commodity servers to provide practical frequency regulation service. The key behind PowerMorph is using a "complementary workload" as an additional knob to modulate server power, which provides high provision capacity while satisfying the tight QoS constraints of latency-critical workloads. We achieve up to 58% improvement in TCO under common conditions, and in certain cases can even completely eliminate the data center electricity bill and provide a net profit.
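As a rough sketch of the "complementary workload" idea (a hypothetical control rule, not PowerMorph's actual controller), the complementary job's power budget can be set to whatever gap remains between the regulation target and the latency-critical workload's draw, clamped to the server's feasible range:

```python
def complementary_power(target_w, lc_power_w, idle_floor_w, cap_w):
    """Power (watts) the complementary workload should draw so total
    server power tracks the regulation target. Hypothetical sketch:
    the real framework also enforces QoS on the latency-critical job.
    """
    gap = target_w - lc_power_w
    # Cannot exceed what the server cap leaves over, nor dip below idle
    return max(idle_floor_w, min(gap, cap_w - lc_power_w))

print(complementary_power(300, 200, 0, 400))  # 100 W fills the gap
print(complementary_power(100, 200, 0, 400))  # 0 W: cannot go below floor
```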
-
Server-level power monitoring in data centers can significantly contribute to their efficient management. Nevertheless, due to the cost of a dedicated power meter for each server, most data center power management focuses only on UPS- or cluster-level power monitoring. In this paper, we propose a novel low-cost power monitoring approach that uses only one sensor to extract the power consumption of all servers. We utilize the conducted electromagnetic interference (EMI) of server power supplies to measure their power consumption from non-intrusive single-point voltage measurements. We present a theoretical characterization of conducted EMI generation in server power supplies and its propagation through the data center power network. Using a set of ten commercial-grade servers (six Dell PowerEdge and four Lenovo ThinkSystem), we demonstrate that our approach can estimate each server's power consumption with less than ~7% mean absolute error.
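The single-point idea can be sketched as follows: each power supply's switching activity leaves a spectral peak in the measured voltage, and the peak amplitude near each switching frequency can be mapped to that server's power via a separate calibration. The frequencies and signal below are synthetic assumptions, not the paper's data or calibration.

```python
import numpy as np

def emi_peak_amplitude(voltage, fs, f_switch, bw=200.0):
    """Amplitude of the conducted-EMI peak near a PSU's switching
    frequency, extracted from a single-point voltage trace.
    """
    spectrum = np.abs(np.fft.rfft(voltage)) / len(voltage)
    freqs = np.fft.rfftfreq(len(voltage), d=1.0 / fs)
    band = (freqs > f_switch - bw) & (freqs < f_switch + bw)
    return spectrum[band].max()

# Synthetic trace: two PSUs switching at distinct (assumed) frequencies
fs = 1_000_000                      # 1 MS/s sampling
t = np.arange(0, 0.01, 1 / fs)
trace = 0.8 * np.sin(2 * np.pi * 65_000 * t) + 0.3 * np.sin(2 * np.pi * 90_000 * t)

a1 = emi_peak_amplitude(trace, fs, 65_000)
a2 = emi_peak_amplitude(trace, fs, 90_000)
# Per-server power would then come from a calibrated map, e.g.
# P ~ k * amplitude + b, fitted against a reference meter.
print(a1, a2)  # ~0.4 and ~0.15: the two peaks are separable
```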
-
Abstract Data centers are witnessing an unprecedented increase in processing and data storage, resulting in an exponential increase in servers' power density and heat generation. Data center operators are looking for green, energy-efficient cooling technologies with low power consumption and high thermal performance. Typical air-cooled data centers must maintain safe operating temperatures to accommodate cooling for high-power server components such as CPUs and GPUs. This makes air cooling inefficient with regard to heat transfer and energy consumption for applications such as high-performance computing, AI, cryptocurrency, and cloud computing, forcing data centers to switch to liquid cooling. Additionally, air cooling has a higher OPEX to account for higher server fan power. Liquid Immersion Cooling (LIC) is an affordable and sustainable cooling technology that addresses many of the challenges of air cooling. LIC is becoming a viable and reliable cooling technology for many high-power applications, leading to reduced maintenance costs, lower water utilization, and lower power consumption. In terms of environmental effect, single-phase immersion cooling outperforms two-phase immersion cooling. There are two types of single-phase immersion cooling methods, namely forced and natural convection. Forced convection has a higher overall heat transfer coefficient, which makes it advantageous for cooling high-powered electronic devices. With natural convection, it is possible to simplify cooling components, including elimination of the pump. There are, however, some advantages to forced convection, especially low-velocity flow, where the pumping power is relatively negligible.
This study compares a baseline forced-convection single-phase immersion-cooled server run at three different inlet temperatures with four natural-convection configurations that use different server powers and cold plates. Since the buoyancy of the hot fluid is leveraged to generate natural flow in natural convection, cold plates are designed to remove heat from the server. For performance comparison, a natural-convection model with cold plates is designed in which water is the fluid flowing in the cold plate. A high-density server is modeled in Ansys Icepak, with a total server heat load of 3.76 kW. The server comprises two CPUs and eight GPUs, each chip having its own thermal design power (TDP). For both heat transfer conditions, the fluid used in the investigation is EC-110, operated at inlet temperatures of 30°C, 40°C, and 50°C. The coolant flow rate in forced convection is 5 GPM, whereas the flow rate in the natural-convection cold plates is varied. CFD simulations are used to reduce chip case temperatures through both forced and natural convection. Pressure drop and pumping power are also evaluated for the given inlet temperature range, and the best operating parameters are established. The numerical study shows that forced-convection systems can maintain much lower component temperatures than natural-convection systems, even when the natural-convection systems are modeled with enhanced cooling characteristics.
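The forced-versus-natural gap has a simple first-order explanation via Newton's law of cooling, T_case = T_inlet + Q/(hA): forced convection's larger heat transfer coefficient h directly shrinks the temperature rise. The h values and heat-sink area below are illustrative assumptions, not results from the study.

```python
def case_temperature(t_inlet_c, power_w, h_w_per_m2k, area_m2):
    """First-order chip case temperature: T_case = T_inlet + Q / (h * A)."""
    return t_inlet_c + power_w / (h_w_per_m2k * area_m2)

# 300 W chip, 40 C inlet, 0.05 m^2 effective heat-sink area (assumed)
forced = case_temperature(40, 300, 1500, 0.05)   # assumed forced-convection h
natural = case_temperature(40, 300, 400, 0.05)   # assumed natural-convection h
print(forced, natural)  # 44.0 55.0: forced convection runs cooler
```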
-
Abstract Physics-based modeling aids in designing efficient data center power and cooling systems. These systems have traditionally been modeled independently under the assumption that the inherent coupling of effects between the systems has negligible impact. This study tests the assumption through uncertainty quantification of models for a typical 300 kW data center supplied through either an alternating current (AC)-based or direct current (DC)-based power distribution system. A novel calculation scheme is introduced that couples the calculations of these two systems to estimate the resultant impact on predicted power usage effectiveness (PUE), computer room air conditioning (CRAC) return temperature, total system power requirement, and system power loss values. A two-sample z-test for comparing means is used to test for statistical significance with 95% confidence. The power distribution component efficiencies are calibrated to available published and experimental data. The predictions for a typical data center with an AC-based system suggest that the coupling of system calculations results in statistically significant differences for the cooling system PUE, the overall PUE, the CRAC return air temperature, and total electrical losses. However, none of the tested metrics are statistically significant for a DC-based system. The predictions also suggest that a DC-based system provides statistically significant lower overall PUE and electrical losses compared to the AC-based system, but only when coupled calculations are used. These results indicate that the coupled calculations impact predicted general energy efficiency metrics and enable statistically significant conclusions when comparing different data center cooling and power distribution strategies.
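The coupling the study models can be sketched with a toy calculation (the aggregate distribution efficiency and cooling COP below are hypothetical, not the paper's calibrated component models): distribution losses become heat that the cooling system must also remove, so the two systems cannot be sized independently.

```python
def coupled_pue(it_kw, dist_efficiency, cooling_cop):
    """Toy coupled PUE: power-distribution losses add to the heat load
    the cooling system must remove, raising total facility power.
    """
    loss_kw = it_kw * (1.0 / dist_efficiency - 1.0)  # distribution loss
    heat_kw = it_kw + loss_kw                        # heat seen by cooling
    cooling_kw = heat_kw / cooling_cop               # cooling system power
    return (it_kw + loss_kw + cooling_kw) / it_kw    # PUE = total / IT

# 300 kW IT load (as in the study), hypothetical 94%-efficient
# distribution and a cooling COP of 4
print(round(coupled_pue(300, 0.94, 4.0), 3))  # 1.33
```

Ignoring the coupling means the cooling term sees only `it_kw`, understating total power, which is exactly the kind of difference the study quantifies.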