Title: Adapting Datacenter Capacity for Greener Datacenters and Grid
Cloud providers are adapting datacenter (DC) capacity to reduce carbon emissions. With hyperscale datacenters exceeding 100 MW individually, and in some grids accounting for more than 15% of power load, DC adaptation is large enough to disrupt power grid dynamics, increasing carbon emissions and power prices or reducing grid reliability. To avoid such harm, we explore coordination of DC capacity changes at varying scope in space and time. In space, coordination scope spans a single datacenter, a group of datacenters, and datacenters coordinating with the grid. In time, scope ranges from online to day-ahead. We also consider what DC and grid information is used (e.g., real-time and day-ahead average carbon, power price, and compute backlog). For example, in our proposed PlanShare scheme, each datacenter uses day-ahead information to create a capacity plan and shares it, allowing global grid optimization (over all loads, over the entire day). We evaluate the resulting reduction in DC carbon emissions. Results show that local coordination scope fails to reduce carbon emissions significantly (3.2%–5.4% reduction). Expanding coordination scope to a set of datacenters improves results slightly (4.9%–7.3%). PlanShare, with grid-wide coordination and full-day capacity planning, performs best, reducing DC emissions by 11.6%–12.6%, 1.56x–1.26x better than the best local, online approach. PlanShare also achieves lower cost. We expect these advantages to increase as renewable generation in power grids grows. Further, a known full-day DC capacity plan provides a stable target for DC resource management.
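The abstract does not include code for PlanShare, but the core idea, a datacenter turning day-ahead forecasts and its compute backlog into an hourly capacity plan that the grid can then optimize against, can be sketched. The following is a minimal, hypothetical illustration (all names and numbers are invented), not the paper's implementation.

```python
"""Illustrative day-ahead capacity planning in the spirit of PlanShare.

Hypothetical sketch: place a datacenter's flexible compute backlog into the
lowest-carbon hours of a day-ahead forecast, producing an hourly capacity
plan that could then be shared with the grid operator.
"""

def plan_day_ahead(carbon_forecast, firm_load_mw, flexible_mwh, cap_mw):
    """Return a 24-entry hourly capacity plan (MW).

    carbon_forecast: list of 24 day-ahead carbon intensities (gCO2/kWh)
    firm_load_mw:    list of 24 must-run load levels (MW)
    flexible_mwh:    total deferrable compute energy to place (MWh)
    cap_mw:          datacenter power capacity limit (MW)
    """
    plan = list(firm_load_mw)
    remaining = flexible_mwh
    # Visit hours from cleanest to dirtiest and fill spare capacity there first.
    for hour in sorted(range(24), key=lambda h: carbon_forecast[h]):
        if remaining <= 0:
            break
        headroom = cap_mw - plan[hour]   # MW of unused capacity this hour
        add = min(headroom, remaining)   # 1-hour steps, so MW == MWh
        plan[hour] += add
        remaining -= add
    if remaining > 0:
        raise ValueError("backlog does not fit under the capacity cap")
    return plan

if __name__ == "__main__":
    forecast = [420, 410, 400, 390, 380, 300, 220, 150,   # overnight -> morning
                120, 100,  90,  95, 110, 140, 180, 240,   # midday solar
                320, 380, 430, 450, 440, 435, 430, 425]   # evening peak
    firm = [60.0] * 24                                     # MW of inflexible load
    plan = plan_day_ahead(forecast, firm, flexible_mwh=300.0, cap_mw=100.0)
    print([round(p, 1) for p in plan])
```

Once fixed for the day, a plan of this form is the "stable target for DC resource management" the abstract refers to.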
Award ID(s):
1901466 1832230
NSF-PAR ID:
10428097
Author(s) / Creator(s):
Date Published:
Journal Name:
ACM Symposium on Future Energy Systems (E-Energy 2023)
Page Range / eLocation ID:
200 to 213
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Ardakanian, Omid ; Niesse, Astrid (Ed.)
The rapid growth of datacenter (DC) loads can be leveraged to help meet renewable portfolio standard (RPS, renewable fraction) targets in power grids. The ability to shift DC loads over time provides a mechanism to deal with the temporal mismatch between non-dispatchable renewable generation (e.g., wind and solar) and overall grid loads, and this flexibility ultimately facilitates the absorption of renewables and grid decarbonization. To this end, we study DC-grid coupling models, exploring their impact on grid dispatch, renewable absorption, power prices, and carbon emissions. With a detailed model of grid dispatch, generation, topology, and loads, we consider three coupling approaches: fixed, datacenter-local optimization (online dynamic programming), and grid-wide optimization (optimal power flow); a loose stand-in for the local online policy is sketched after this list. Results show that understanding the effects of dynamic DC load management requires studies that model the dynamics of both load and power grid. Dynamic DC-grid coupling can produce large improvements: (1) reduced grid dispatch cost (-3%), (2) increased grid renewable fraction (+1.58%), and (3) reduced DC power cost (-16.9%). It also has negative effects: (1) increased costs for both DCs and non-DC customers, (2) differentially increased prices for non-DC customers, and (3) large power-level changes that may harm DC productivity.
  2. Traditional datacenter design and optimization for TCO and PUE is based on static views of both power grids and computational loads. Power grids exhibit increasingly variable prices and carbon emissions, and will become more so as government initiatives drive further decarbonization. The resulting opportunities require dynamic, temporal metrics (e.g., not simple averages), flexible systems, and intelligent adaptive control. Two research areas represent new opportunities to reduce both carbon and cost in this world of variable power, carbon, and price: first, the design and optimization of flexible datacenters; second, cloud resource, power, and application management for variable-capacity datacenters. For each, we describe the challenges and potential benefits.
  3. As modern server GPUs become increasingly power intensive, better power management mechanisms can significantly reduce power consumption, capital costs, and carbon emissions in large cloud datacenters. This letter uses diverse datacenter workloads to study the power management capabilities of modern GPUs. We find that current GPU management mechanisms have limited compatibility and monitoring support under cloud virtualization, and that their implementations of Dynamic Voltage and Frequency Scaling (DVFS) and power capping are sub-optimal, imprecise, and non-intuitive. Consequently, efficient GPU power management is not widely deployed in clouds today. To address these issues, we make actionable recommendations for GPU vendors and researchers. (A brief illustration of the power-monitoring and power-capping knobs appears after this list.)
  4. Generative AI applications, exemplified by ChatGPT, Dall-E 2, and Stable Diffusion, are exciting new workloads consuming growing quantities of computing. We study the compute, energy, and carbon impacts of generative AI inference. Using ChatGPT as an exemplar, we create a workload model and compare request direction approaches (Local, Balance, CarbonMin), assessing their power use and carbon impacts. Our workload model shows that for ChatGPT-like services, inference dominates emissions, in one year producing 25x the carbon emissions of training GPT-3. The workload model characterizes user experience, and experiments show that carbon emissions-aware algorithms (CarbonMin) can both maintain user experience and reduce carbon emissions dramatically (35%); a simplified request-direction sketch appears after this list. We also consider a future scenario (2035 workload and power grids) and show that CarbonMin can reduce emissions by 56%. In both cases, the key is intelligent direction of requests to locations with low-carbon power. Combined with hardware technology advances, CarbonMin can hold the emissions increase to only 20% over 2022 levels for a 55x greater workload. Finally, we consider datacenter headroom to increase the effectiveness of shifting. With headroom, CarbonMin reduces 2035 emissions by 71%.
  5. The end of Dennard scaling and the slowing of Moore's Law have put the energy use of datacenters on an unsustainable path. Datacenters are already a significant fraction of worldwide electricity use, with application demand growing at a rapid rate. We argue that substantial reductions in the carbon intensity of datacenter computing are possible with a software-centric approach: by making energy and carbon visible to application developers on a fine-grained basis, by modifying system APIs to enable informed trade-offs between performance and carbon emissions, and by raising the level of application programming to allow flexible use of more energy-efficient means of compute and storage. We also lay out a research agenda for systems software to reduce the carbon footprint of datacenter computing.

     
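As referenced in item 1 above, that paper's datacenter-local coupling model is an online dynamic program; the sketch below swaps in a much simpler online threshold policy purely to show the shape of the decision: each hour the datacenter sees only the current price or carbon signal and its own backlog, and chooses how hard to run. Everything here is a hypothetical illustration, not the paper's method.

```python
"""Loose stand-in for datacenter-local online load shifting (item 1)."""

def online_shift(signal_stream, firm_mw, cap_mw, backlog_mwh, alpha=0.2):
    """Yield an (hour, power_mw) decision for each observed signal value."""
    avg = None
    for hour, signal in enumerate(signal_stream):
        # Track a running average of the signal seen so far (online, no forecast).
        avg = signal if avg is None else (1 - alpha) * avg + alpha * signal
        if signal <= avg and backlog_mwh > 0:
            # Cheap/clean hour: run flexible work, but no more than the backlog.
            power = min(cap_mw, firm_mw + backlog_mwh)
        else:
            power = firm_mw
        backlog_mwh -= power - firm_mw   # 1-hour steps: extra MW == MWh served
        yield hour, power

if __name__ == "__main__":
    prices = [42, 38, 35, 31, 30, 33, 45, 60, 55, 40, 28, 25]  # $/MWh, made up
    for hour, mw in online_shift(prices, firm_mw=60, cap_mw=100, backlog_mwh=120):
        print(f"hour {hour:2d}: run at {mw:.0f} MW")
```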
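For item 3 above, the power-monitoring and power-capping knobs under study are commonly exposed through nvidia-smi; a rough sketch of reading power draw and applying a cap follows. Flag support varies by GPU, driver, and virtualization mode (one of the letter's findings), setting a limit normally requires elevated privileges, and the 250 W value is arbitrary.

```python
"""Sketch of GPU power monitoring and power capping via nvidia-smi (item 3)."""
import subprocess

def gpu_power_draw_watts(index=0):
    """Read the current power draw (W) of one GPU via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "-i", str(index),
         "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
        text=True)
    return float(out.strip())

def set_gpu_power_cap(watts, index=0):
    """Apply a software power cap to one GPU (requires root on most systems)."""
    subprocess.check_call(
        ["nvidia-smi", "-i", str(index), f"--power-limit={watts}"])

if __name__ == "__main__":
    print(f"GPU 0 draws {gpu_power_draw_watts(0):.1f} W")
    set_gpu_power_cap(250, index=0)   # hypothetical 250 W cap
```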
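For item 4 above, a simplified, hypothetical version of carbon-aware request direction in the spirit of CarbonMin is sketched below: each request is routed to the lowest-carbon region that still has serving capacity. Region names, intensities, and capacities are invented; this is not the paper's implementation.

```python
"""Illustrative carbon-aware request direction in the spirit of CarbonMin (item 4)."""

def carbon_min_route(regions, requests):
    """regions: {name: {"carbon": gCO2/kWh, "free_slots": int}}"""
    placements = []
    for req in requests:
        # Candidate regions with headroom, then pick the cleanest one.
        open_regions = [r for r, s in regions.items() if s["free_slots"] > 0]
        if not open_regions:
            raise RuntimeError("no serving capacity left")
        target = min(open_regions, key=lambda r: regions[r]["carbon"])
        regions[target]["free_slots"] -= 1
        placements.append((req, target))
    return placements

if __name__ == "__main__":
    regions = {"hydro-north": {"carbon": 30,  "free_slots": 2},
               "solar-west":  {"carbon": 90,  "free_slots": 3},
               "coal-east":   {"carbon": 600, "free_slots": 10}}
    for req, region in carbon_min_route(regions, requests=range(6)):
        print(f"request {req} -> {region}")
```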