Title: When Green Computing Meets Performance and Resilience SLOs.
This paper addresses the urgent need to transition to global net-zero carbon emissions by 2050 while retaining the ability to meet joint performance and resilience objectives. The focus is on computing infrastructures, such as hyperscale cloud datacenters, that consume significant power and thus produce increasing amounts of carbon emissions. Our goal is to (1) optimize the usage of green energy sources (e.g., solar energy), which is desirable but expensive and relatively unstable, and (2) continuously reduce the use of fossil fuels, which have a lower cost but a significant negative societal impact. Meanwhile, cloud datacenters strive to meet their customers' requirements, e.g., service-level objectives (SLOs) on application latency or throughput, which are impacted by infrastructure resilience and availability. We propose a scalable formulation that combines sustainability, cloud resilience, and performance as a joint optimization problem with multiple interdependent objectives to address these issues holistically. Given the complexity and dynamicity of the problem, machine learning (ML) approaches, such as reinforcement learning, are essential for achieving continuous optimization. Our study highlights the challenges of green energy instability, which necessitates innovative ML-centric solutions across heterogeneous infrastructures to manage the transition towards green computing. Underlying the ML-centric solutions must be methods to combine classic system resilience techniques with innovations in real-time ML resilience (not addressed heretofore). We believe that this approach will not only set a new direction in the resilient, SLO-driven adoption of green energy but also enable us to manage future sustainable systems in ways that were not possible before.
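The abstract does not give the formulation itself. As a rough, hypothetical sketch of how interdependent sustainability, performance, and resilience objectives might be folded into a single signal for a reinforcement-learning controller, the Python below uses made-up weights and signal names (green/fossil energy drawn, tail latency versus an SLO target, and availability); it is illustrative only, not the paper's method.

```python
# Hypothetical sketch (not from the paper): a weighted multi-objective reward an
# RL controller might optimize, combining green-energy usage, fossil-fuel usage,
# and SLO compliance. All weights and signal names are illustrative assumptions.

def joint_reward(green_kwh, fossil_kwh, p99_latency_ms, slo_latency_ms,
                 availability, w_green=1.0, w_fossil=2.0, w_slo=5.0, w_avail=3.0):
    """Higher is better: reward green energy use; penalize fossil energy,
    SLO violations, and lost availability."""
    slo_violation = max(0.0, (p99_latency_ms - slo_latency_ms) / slo_latency_ms)
    return (w_green * green_kwh
            - w_fossil * fossil_kwh
            - w_slo * slo_violation
            - w_avail * (1.0 - availability))

# Example: a control interval that used mostly solar but slightly missed its latency SLO.
print(joint_reward(green_kwh=80, fossil_kwh=20, p99_latency_ms=110,
                   slo_latency_ms=100, availability=0.999))
```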
Award ID(s):
2029049
PAR ID:
10546463
Publisher / Repository:
Institute of Electrical and Electronics Engineers
Edition / Version:
1
Volume:
1
Issue:
1
ISBN:
979-8-3503-9570-9
Page Range / eLocation ID:
17-22
Subject(s) / Keyword(s):
sustainability, green energy, cloud computing, resilience, machine learning, machine learning resilience
Format(s):
Medium: X; Other: pdf
Size(s):
445 kb
Location:
Brisbane, Australia
Sponsoring Org:
National Science Foundation
More Like this
1. Cloud providers are adapting datacenter (DC) capacity to reduce carbon emissions. With hyperscale datacenters exceeding 100 MW individually, and in some grids exceeding 15% of power load, DC adaptation is large enough to harm power grid dynamics, increasing carbon emissions and power prices or reducing grid reliability. To avoid such harm, we explore coordination of DC capacity changes, varying the scope in space and time. In space, coordination scope spans a single datacenter, a group of datacenters, and datacenters with the grid. In time, scope ranges from online to day-ahead. We also consider what DC and grid information is used (e.g., real-time and day-ahead average carbon, power price, and compute backlog). For example, in our proposed PlanShare scheme, each datacenter uses day-ahead information to create a capacity plan and shares it, allowing global grid optimization (over all loads, over the entire day). We evaluate DC carbon emissions reduction. Results show that local coordination scope fails to reduce carbon emissions significantly (3.2%–5.4% reduction). Expanding coordination scope to a set of datacenters improves slightly (4.9%–7.3%). PlanShare, with grid-wide coordination and full-day capacity planning, performs the best, reducing DC emissions by 11.6%–12.6%, 1.56x–1.26x better than the best local, online approach. PlanShare also achieves lower cost. We expect these advantages to increase as renewable generation in power grids increases. Further, a known full-day DC capacity plan provides a stable target for DC resource management.
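PlanShare's actual optimization is not reproduced here; as a simplified illustration of the day-ahead planning idea, the hypothetical sketch below greedily places a flexible compute backlog into the lowest-carbon hours of a day-ahead forecast, subject to a per-hour capacity cap. The function name, forecast values, and 100 MW cap are all assumptions for illustration.

```python
# Illustrative sketch only: given a day-ahead carbon-intensity forecast and a
# flexible compute backlog, build an hourly capacity plan that shifts work
# toward the lowest-carbon hours.

def day_ahead_capacity_plan(carbon_forecast_g_per_kwh, backlog_mwh, max_mw=100.0):
    """Greedy plan: fill the cleanest hours first, up to max_mw per hour.
    Returns a list of planned MW for each forecast hour."""
    plan = [0.0] * len(carbon_forecast_g_per_kwh)
    remaining = backlog_mwh
    for hour in sorted(range(len(carbon_forecast_g_per_kwh)),
                       key=lambda h: carbon_forecast_g_per_kwh[h]):
        if remaining <= 0:
            break
        alloc = min(max_mw, remaining)  # one-hour slot, so MW == MWh
        plan[hour] = alloc
        remaining -= alloc
    return plan

# Example: cleaner midday (solar) hours attract most of the flexible load.
forecast = [500, 480, 460, 450, 440, 430, 400, 350, 300, 250, 200, 180,
            170, 180, 220, 280, 350, 420, 470, 500, 520, 530, 520, 510]
print(day_ahead_capacity_plan(forecast, backlog_mwh=600))
```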
2. The rapid increase in computing demand and corresponding energy consumption has focused attention on computing's impact on the climate and sustainability. Prior work proposes metrics that quantify computing's carbon footprint across several lifecycle phases, including its supply chain, operation, and end-of-life. Industry uses these metrics to optimize the carbon footprint of manufacturing hardware and running computing applications. Unfortunately, prior work on optimizing datacenters' carbon footprint often succumbs to the sunk cost fallacy by considering embodied carbon emissions (a sunk cost) when making operational decisions (i.e., job scheduling and placement), which leads to operational decisions that do not always reduce the total carbon footprint. In this paper, we evaluate carbon-aware job scheduling and placement on a given set of servers for several carbon accounting metrics. Our analysis reveals that state-of-the-art carbon accounting metrics that include embodied carbon emissions in operational decisions can increase the total carbon footprint of executing a set of jobs. We study the factors that affect the added carbon cost of such suboptimal decision-making. We then use a real-world case study from a datacenter to demonstrate how the sunk carbon fallacy manifests itself in practice. Finally, we discuss the implications of our findings in better guiding effective carbon-aware scheduling in on-premise and cloud datacenters.
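To make the sunk-cost argument concrete, here is a toy worked example (numbers invented, not from the paper): once both servers already exist, their embodied carbon is sunk, so a placement metric that still counts an embodied-carbon share can pick the server whose operational emissions are higher, increasing the footprint actually added by the decision.

```python
# Toy numbers illustrating the "sunk carbon fallacy": counting sunk embodied
# carbon in placement decisions can steer a job to the server with higher
# *operational* emissions, increasing the total footprint actually added.

servers = {
    # operational emissions of running the job, and the embodied-carbon share a
    # lifecycle metric would attribute to the job's runtime (both in gCO2).
    "old_server": {"operational": 200, "embodied_share": 10},
    "new_server": {"operational": 150, "embodied_share": 80},
}

def pick(metric):
    return min(servers, key=lambda s: metric(servers[s]))

embodied_aware = pick(lambda s: s["operational"] + s["embodied_share"])
operational_only = pick(lambda s: s["operational"])

print("embodied-aware metric picks:", embodied_aware,
      "-> adds", servers[embodied_aware]["operational"], "gCO2")
print("operational-only metric picks:", operational_only,
      "-> adds", servers[operational_only]["operational"], "gCO2")
```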
3. Traditional datacenter design and optimization for TCO and PUE are based on static views of power grids as well as computational loads. Power grids exhibit increasingly variable prices and carbon emissions, and will become more so as government initiatives drive further decarbonization. The resulting opportunities require dynamic, temporal metrics (e.g., not simple averages), flexible systems, and intelligent adaptive control. Two research areas represent new opportunities to reduce both carbon and cost in this world of variable power, carbon, and price: first, the design and optimization of flexible datacenters; second, cloud resource, power, and application management for variable-capacity datacenters. For each, we describe the challenges and potential benefits.
4. Large-scale network-cloud ecosystems are fundamental infrastructures for supporting future 5G/6G services, and their resilience is a primary societal concern for the years to come. Unlike a single-entity ecosystem (in which one entity owns the whole infrastructure), in multi-entity ecosystems (in which the networks and datacenters are owned by different entities) cooperation among such different entities is crucial to achieve resilience against large-scale failures. Such cooperation is challenging since different entities may not disclose confidential information, e.g., detailed resource availability. To enhance the resilience of multi-entity ecosystems, carriers are important, as all the entities rely on carriers' communication services. Thus, in this study we investigate how to perform carrier cooperative recovery in case of large-scale failures and disasters. We propose two-stage cooperative recovery planning that incorporates coordinated scheduling for swift recovery. Through preliminary numerical evaluation, we confirm the potential benefit of carrier cooperation in terms of both recovery time and recovery cost/burden reduction.
5. Major innovations in computing have been driven by scaling up computing infrastructure while aggressively optimizing operating costs. The result is a network of worldwide datacenters that consume a large amount of energy, mostly in an energy-efficient manner. Since the electric grid powering these datacenters provided a simple and opaque abstraction of an unlimited and reliable power supply, the computing industry remained largely oblivious to the carbon intensity of the electricity it uses. Much like the rest of society, it generally treated the carbon intensity of the electricity as constant, which was mostly true for a fossil fuel-driven grid. As a result, the cost-driven objective of increasing energy efficiency (doing more work per unit of energy) has generally been viewed as the most carbon-efficient approach. However, as the electric grid is increasingly powered by clean energy and is exposing its time-varying carbon intensity, the most energy-efficient operation is no longer necessarily the most carbon-efficient operation. There has been a recent focus on exploiting the flexibility of computing's workloads, along temporal, spatial, and resource dimensions, to reduce carbon emissions, which comes at the cost of either performance or energy efficiency. In this paper, we discuss the trade-offs between energy efficiency and carbon efficiency in exploiting computing's flexibility and show that blindly optimizing for energy efficiency is not always the right approach.
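A small worked example (with invented numbers) of the energy-efficiency versus carbon-efficiency trade-off under a time-varying carbon-intensity forecast: a slow, low-power run started immediately uses less energy but emits more than a faster run deferred to a cleaner (solar-peak) hour.

```python
# Toy illustration: with time-varying grid carbon intensity, the most
# energy-efficient schedule is not always the most carbon-efficient one.

# Hourly carbon intensity forecast in gCO2 per kWh (clean solar midday).
intensity = [600, 580, 560, 540, 400, 300, 200, 150, 140, 150, 250, 400]

def emissions(start_hour, hours, kwh_per_hour):
    """Total emissions of a job drawing constant power over consecutive hours."""
    return sum(kwh_per_hour * intensity[start_hour + h] for h in range(hours))

# "Energy-efficient" plan: slow, low-power run starting now (3 h x 0.3 kWh = 0.9 kWh).
eff = emissions(start_hour=0, hours=3, kwh_per_hour=0.3)
# "Carbon-efficient" plan: defer to the solar peak and sprint (1 h x 1.2 kWh = 1.2 kWh).
carbon = emissions(start_hour=7, hours=1, kwh_per_hour=1.2)

print(f"energy-efficient plan: 0.9 kWh, {eff:.0f} gCO2")    # less energy, more carbon
print(f"carbon-efficient plan: 1.2 kWh, {carbon:.0f} gCO2")  # more energy, less carbon
```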