Title: Monte Carlo Based Server Consolidation for Energy Efficient Cloud Data Centers
The growing energy consumption of data centers is a pressing global problem, and effective server consolidation is at the heart of energy-efficient cloud data centers. The server consolidation problem can be modeled as a variant of bin packing in which the constraints are multidimensional, heterogeneous vectors rather than scalars, and the goal is to satisfy the requested resource allocations using the minimum number of physical servers. Since bin packing is NP-hard, practical solutions rely on heuristics. First Fit Decreasing (FFD) based heuristics have been shown to be effective, both in theory and in practice, for the one-dimensional homogeneous case. However, the multidimensional and heterogeneous aspects of server consolidation complicate the problem, and adapting FFD to it requires additional research. In this paper, we present a new FFD-based server consolidation technique that uses a Monte Carlo method and Shannon entropy to account for resource bottlenecks and to adjust dynamically to variance in the utilization of different resources. The proposed heuristic outperforms existing techniques in all scenarios, coming within 2-5% of optimal on average for medium to high variance in resource utilization, and no more than 10% worse than optimal on average in all scenarios.
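To make the setup concrete, here is a minimal sketch of FFD applied to the vector ("multidimensional") bin-packing formulation above, with a hypothetical entropy-based weighting used to order the VMs. The paper's actual Monte Carlo sampling and entropy formulation are not detailed in this abstract, so `entropy_weights` and `ffd_consolidate` are illustrative names and simplifications, not the published algorithm.

```python
import math

def entropy_weights(demands):
    """Weight each resource dimension by the Shannon entropy of its
    normalized demand distribution across VMs. Hypothetical weighting:
    the paper's exact entropy / Monte Carlo formulation is not given
    in the abstract."""
    dims = len(demands[0])
    weights = []
    for d in range(dims):
        column = [v[d] for v in demands]
        total = sum(column)
        probs = [x / total for x in column if x > 0]
        weights.append(-sum(p * math.log(p) for p in probs))
    s = sum(weights)
    return [w / s for w in weights] if s > 0 else [1.0 / dims] * dims

def ffd_consolidate(demands, capacity):
    """First Fit Decreasing on entropy-weighted scalarized sizes; each
    open server is tracked as a vector of residual capacities."""
    w = entropy_weights(demands)
    order = sorted(range(len(demands)), reverse=True,
                   key=lambda i: sum(wi * di for wi, di in zip(w, demands[i])))
    servers, placement = [], {}
    for i in order:
        for s, residual in enumerate(servers):
            # A VM fits only if it fits in *every* dimension.
            if all(d <= r for d, r in zip(demands[i], residual)):
                servers[s] = [r - d for d, r in zip(demands[i], residual)]
                placement[i] = s
                break
        else:  # no open server fits: provision a new one
            servers.append([c - d for d, c in zip(demands[i], capacity)])
            placement[i] = len(servers) - 1
    return placement, len(servers)

# Example: 2-D demands (CPU, memory fractions) on unit-capacity servers.
vms = [(0.5, 0.2), (0.3, 0.6), (0.4, 0.4), (0.2, 0.1), (0.6, 0.3)]
placement, used = ffd_consolidate(vms, (1.0, 1.0))
print("servers used:", used)
```

Tracking each server as a vector of residual capacities, so that a VM fits only if it fits in every dimension, is what distinguishes vector bin packing from its scalar counterpart.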
Award ID(s):
1657296 1828521
PAR ID:
10157200
Author(s) / Creator(s):
Date Published:
Journal Name:
2019 IEEE International Conference on Cloud Computing Technology and Science (CloudCom)
Page Range / eLocation ID:
263 to 270
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In modern computing systems, jobs' resource requirements often vary over time. Accounting for this temporal variability during job scheduling is essential for meeting performance goals. However, theoretical understanding of how to schedule jobs with time-varying resource requirements is limited. Motivated by this gap, we propose a new setting of the stochastic bin-packing problem in service systems that allows for time-varying job resource requirements, also referred to as 'item sizes' in traditional bin-packing terms. In this setting, a job or 'item' must be dispatched to a server or 'bin' upon arrival. Its resource requirement may vary over time while in service, following a Markovian assumption. Once the job's service is complete, it departs from the system. Our goal is to minimize the expected number of active servers, or 'non-empty bins', in steady state. Under our problem formulation, we develop a job dispatch policy, named Join-Requesting-Server (JRS). Broadly, JRS lets each server independently evaluate its current job configuration and decide whether to accept additional jobs, balancing the competing objectives of maximizing throughput and minimizing the risk of resource capacity overruns. The JRS dispatcher then uses these individual evaluations to decide which server to dispatch each arriving job to. The theoretical performance guarantee of JRS holds in the asymptotic regime where the job arrival rate scales linearly with a scaling factor r. We show that JRS achieves an additive optimality gap of O(√r) in the objective value, where the optimal objective value is Θ(r). When specialized to constant job resource requirements, our result improves upon the state-of-the-art o(r) optimality gap. Our technical approach highlights a novel policy conversion framework that reduces the policy design problem to a single-server problem.
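As a rough illustration of the dispatch structure described above (and only that: the per-server acceptance rule JRS actually derives is the paper's core contribution and is not given in the abstract), the sketch below has each server apply a hypothetical fixed-headroom test to its own job configuration, while the dispatcher routes each arrival to any 'requesting' server. Job sizes are treated as static here rather than Markov-modulated, for brevity.

```python
import random

CAPACITY = 1.0        # normalized server ('bin') capacity
SAFETY_MARGIN = 0.3   # hypothetical headroom covering the largest possible
                      # job requirement; a stand-in for JRS's derived rule

def requesting(jobs):
    """Each server decides, from its own job configuration alone, whether
    it wants more work: accept only while current load plus headroom
    still fits within capacity."""
    return sum(jobs) + SAFETY_MARGIN <= CAPACITY

def dispatch(servers, job_size):
    """Route the arriving job to any requesting server; open a new
    server ('bin') only if none requests work."""
    for jobs in servers:
        if requesting(jobs):
            jobs.append(job_size)
            return
    servers.append([job_size])

random.seed(0)
servers = []
for _ in range(50):
    dispatch(servers, random.uniform(0.05, 0.3))
print("active servers:", len(servers))
```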
  2. Proponents of AC-powered data centers have implicitly assumed that the electrical loads presented to the three phases of an AC data center are balanced. To assure this, servers are connected to the AC power phases to present identical loads, assuming a uniform expected utilization level for each server. We present an experimental study demonstrating that, with the inevitable temporal changes in server workloads or with dynamic server capacity management based on known daily load patterns, balanced electrical loading across all power phases cannot be maintained. Such imbalances introduce a reactive power component that represents an effective power loss and lowers the overall energy efficiency of the data center, resulting in a handicap against DC-powered data centers, where such a loss is absent.
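For intuition on why imbalance costs energy, the following phasor calculation (standard three-phase arithmetic from circuit theory, not taken from the paper) shows that equal per-phase currents cancel in the neutral conductor, while drifting server loads leave a residual current that dissipates power without doing useful work.

```python
import cmath, math

def neutral_current(i_a, i_b, i_c):
    """Magnitude of the phasor sum of the three line currents (amps),
    with the phases 120 degrees apart."""
    a = cmath.rect(i_a, 0.0)
    b = cmath.rect(i_b, -2.0 * math.pi / 3.0)
    c = cmath.rect(i_c, 2.0 * math.pi / 3.0)
    return abs(a + b + c)

print(f"balanced:   {neutral_current(10.0, 10.0, 10.0):.2f} A")  # ~0 A
print(f"imbalanced: {neutral_current(14.0, 10.0, 6.0):.2f} A")   # nonzero residual
```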
  3. Data center operators generally overprovision IT and cooling capacities to address unexpected utilization increases that can violate service quality commitments, which results in energy wastage. To reduce this wastage, we introduce HCP (Holistic Capacity Provisioner), a service-latency-aware management system for dynamically provisioning server and cooling capacity. Short-term load prediction is used to adjust the online server capacity and concentrate the workload onto the smallest possible set of online servers. Idling servers are completely turned off based on a separate long-term utilization predictor. HCP targets data centers that use chilled-air cooling and varies the cooling provided commensurately, using adjustable aperture tiles and speed control of the blower fans in the air handler. An HCP prototype supporting server heterogeneity is evaluated with real-world workload traces/requests and realizes up to 32% total energy savings while limiting the 99th-percentile and average latency increases to at most 6.67% and 3.24%, respectively, against a baseline system in which all servers are kept online.
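A minimal sketch of the two-timescale control idea described above: the short-term predictor sets how many servers stay online, the long-term predictor decides how many of the rest can be powered off entirely, and cooling follows the online set. The constants, predictor inputs, and linear cooling model below are assumptions for illustration, not HCP's actual implementation.

```python
import math

SERVER_CAPACITY = 100.0  # requests/s one server can handle (assumed)
TOTAL_SERVERS = 48       # servers under management (assumed)

def provision(short_term_pred, long_term_pred):
    """Two-timescale provisioning: the short-term load prediction
    (requests/s) sets the online server count onto which work is
    concentrated; the long-term prediction decides how many servers
    must stay powered; the remainder are turned off completely."""
    online = min(TOTAL_SERVERS, math.ceil(short_term_pred / SERVER_CAPACITY))
    keep_powered = min(TOTAL_SERVERS, math.ceil(long_term_pred / SERVER_CAPACITY))
    idle = max(0, keep_powered - online)
    off = TOTAL_SERVERS - online - idle
    # Cooling follows the online set: scale blower-fan speed and tile
    # aperture with the active heat load (a linear model is an assumption).
    fan_fraction = online / TOTAL_SERVERS
    return online, idle, off, fan_fraction

print(provision(short_term_pred=1500.0, long_term_pred=2400.0))
# -> (15, 9, 24, 0.3125): 15 online, 9 idle-but-powered, 24 off
```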
  4. Apache Mesos, a cluster-wide resource manager, is widely deployed at massive scale in several clouds and data centers. Mesos aims to provide high cluster utilization via fine-grained resource co-scheduling and resource fairness among multiple users through Dominant Resource Fairness (DRF) based allocation. DRF takes into account the different resource types (CPU, memory, disk I/O) requested by each application and determines the share of each cluster resource that can be allocated to the applications. Mesos has adopted a two-level scheduling policy: (1) DRF to allocate resources to competing frameworks and (2) task-level scheduling by each framework for the resources allocated in the previous step. We have conducted experiments in a local Mesos cluster with frameworks such as Apache Aurora, Marathon, and our own framework, Scylla, to study resource fairness and cluster utilization. Experimental results show how informed decisions regarding the second-level scheduling policy of frameworks and attributes such as the offer holding period, offer refusal cycle, and task arrival rate can reduce unfair resource distribution. A bin-packing scheduling policy on Scylla with Marathon can reduce unfair allocation from 38% to 3%. By reducing unused free resources in offers, we bring the unfairness down from 90% to 28%. We also show how the task arrival rate can be used to reduce the unfairness from 23% to 7%.
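Since the abstract leans on Dominant Resource Fairness, a compact sketch of DRF's progressive-filling idea (per Ghodsi et al.) may help: each round, one task's worth of resources goes to the framework whose dominant share, i.e., the largest fraction it holds of any single resource, is smallest. The capacities and per-task demand vectors below are made up for illustration.

```python
CAPACITY = {"cpu": 90.0, "mem": 180.0}            # cluster totals (assumed)
DEMANDS = {"A": {"cpu": 1.0, "mem": 4.0},         # per-task demand vectors
           "B": {"cpu": 3.0, "mem": 1.0}}         # (assumed)

def drf_allocate():
    """Progressive filling: repeatedly grant one task to the framework
    with the lowest dominant share until its next task no longer fits."""
    used = {r: 0.0 for r in CAPACITY}
    alloc = {f: {r: 0.0 for r in CAPACITY} for f in DEMANDS}
    tasks = {f: 0 for f in DEMANDS}
    while True:
        # Dominant share: largest fraction of any one resource held.
        g = min(DEMANDS,
                key=lambda f: max(alloc[f][r] / CAPACITY[r] for r in CAPACITY))
        d = DEMANDS[g]
        if any(used[r] + d[r] > CAPACITY[r] for r in CAPACITY):
            # Chosen framework no longer fits; a full scheduler would
            # keep trying the others, but we stop here for brevity.
            return tasks
        for r in CAPACITY:
            used[r] += d[r]
            alloc[g][r] += d[r]
        tasks[g] += 1

print(drf_allocate())  # -> {'A': 30, 'B': 20}
```

With these numbers, framework A's dominant resource is memory and B's is CPU; the loop equalizes their dominant shares at 2/3 each, allocating 30 tasks to A and 20 to B.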