
Title: Characterization of 300 GHz Wireless Channels for Rack-to-Rack Communications in Data Centers
Journal Name:
IEEE International Symposium on Personal, Indoor and Mobile Radio Communications
Sponsoring Org:
National Science Foundation
More Like this
  1. This study presents an experimental and numerical characterization of pressure drop in a commercially available direct liquid cooled (DLC) rack. Investigating pressure drop in a DLC system is important because it determines the required pumping power, which in turn affects the energy efficiency of the data center. The main objective of this research is to assess the flow rate and pressure distributions in a DLC system to enhance reliability and cooling-system efficiency. Further objectives are to evaluate the accuracy of flow network modeling (FNM) in predicting the flow distribution in a DLC rack and to identify manufacturing limitations in a commercial system that could impact cooling-system reliability. The main components of the investigated DLC system are the coolant distribution module (CDM), the supply/return manifold module, and the server module, which contains a cold plate. Extensive experimental measurements were performed to study the flow distribution and to determine the pressure characteristic curves for the server modules and the CDM. A methodology was also described for developing an experimentally validated FNM of the DLC system with high accuracy. The measurements revealed a flow maldistribution among the server modules, which is attributed to the manufacturing process of the micro-channel cold plate. The average errors in predicting the flow rate of the server module and the CDM using FNM are 2.5% and 3.8%, respectively. The accuracy and short run time make FNM a good tool for design, analysis, and optimization of DLC systems. The pressure drop in the server module is found to account for 56% of the total pressure drop in the DLC rack, and further analysis showed that 69% of the server-module pressure drop is associated with the module's plumbing (corrugated hoses, disconnects, fittings). The server cooling modules are designed to provide secure connections and flexibility, which come at a high pressure-drop cost.
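The flow-network idea in the abstract above can be illustrated with a minimal sketch. Assuming each parallel server branch behaves as a quadratic hydraulic resistance (ΔP = k·Q², a common FNM idealization), the flow split follows from two constraints: all parallel branches see the same pressure drop, and the branch flows sum to the total. The coefficients and function name below are illustrative assumptions, not values from the paper:

```python
import math

def parallel_flow_split(k_coeffs, q_total):
    """Split a total flow among parallel branches with dP = k_i * Q_i**2.

    Equal pressure drop across branches gives Q_i = sqrt(dP / k_i);
    conservation of flow (sum of Q_i = q_total) then fixes dP in closed form.
    Returns (dP, [Q_1, ..., Q_n]).
    """
    s = sum(1.0 / math.sqrt(k) for k in k_coeffs)
    dp = (q_total / s) ** 2
    flows = [math.sqrt(dp / k) for k in k_coeffs]
    return dp, flows

# Hypothetical rack: two identical server branches and one with 4x the
# resistance (e.g., a cold plate narrowed by manufacturing variation).
dp, flows = parallel_flow_split([1.0, 1.0, 4.0], q_total=5.0)
```

With these toy coefficients the high-resistance branch receives half the flow of its neighbors, mirroring the maldistribution the measurements revealed.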
  2. Low-latency online services have strict Service Level Objectives (SLOs) that require datacenter systems to support high throughput at microsecond-scale tail latency. Dataplane operating systems have been designed to scale up multi-core servers with minimal overhead for such SLOs. However, as application demands continue to increase, scaling up is not enough, and serving larger demands requires these systems to scale out to multiple servers in a rack. We present RackSched, the first rack-level microsecond-scale scheduler that provides the abstraction of a rack-scale computer (i.e., a huge server with hundreds to thousands of cores) to an external service with network-system co-design. The core of RackSched is a two-layer scheduling framework that integrates inter-server scheduling in the top-of-rack (ToR) switch with intra-server scheduling in each server. We use a combination of analytical results and simulations to show that it provides near-optimal performance comparable to centralized scheduling policies, and is robust for both low-dispersion and high-dispersion workloads. We design a custom switch data plane for the inter-server scheduler, which realizes power-of-k choices, ensures request affinity, and tracks server loads accurately and efficiently. We implement a RackSched prototype on a cluster of commodity servers connected by a Barefoot Tofino switch. End-to-end experiments on a twelve-server testbed show that RackSched improves the throughput by up to 1.44x and scales out the throughput near linearly, while maintaining the same tail latency as one server until the system is saturated.
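The power-of-k-choices policy mentioned in the abstract above can be sketched in a few lines. This is a simplified host-side simulation of the scheduling rule only: sample k candidate servers at random and dispatch to the least loaded. RackSched itself realizes this in the ToR switch data plane with accurate load tracking and request affinity, which this sketch omits; all names here are illustrative:

```python
import random

def power_of_k_choices(loads, k, rng=random):
    """Sample k distinct candidate servers uniformly at random and
    return the index of the least-loaded candidate."""
    candidates = rng.sample(range(len(loads)), k)
    return min(candidates, key=lambda i: loads[i])

# Toy dispatch loop: route 10,000 requests across 12 servers with k = 2.
loads = [0] * 12
for _ in range(10_000):
    loads[power_of_k_choices(loads, 2)] += 1
```

Even with only two random candidates per request, the resulting per-server load stays close to the uniform average, which is why power-of-k choices approximates centralized least-loaded scheduling at a fraction of the coordination cost.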