Reducing buildings’ carbon emissions is an important sustainability challenge. While scheduling flexible building loads has previously been used for a variety of grid and energy optimizations, reducing carbon footprint with such flexible loads poses new challenges, since these methods must balance energy and carbon costs while also limiting the user inconvenience caused by delaying loads. This article highlights the potential conflict between electricity prices and carbon emissions and the resulting tradeoffs in carbon-aware and cost-aware load scheduling. To address this tradeoff, we propose GreenThrift, a home automation system that leverages the scheduling capabilities of smart appliances and knowledge of future carbon intensity and cost to reduce both the carbon emissions and the costs of flexible energy loads. At the heart of GreenThrift is an optimization technique that automatically computes schedules based on user configurations and preferences. We evaluate the effectiveness of GreenThrift using real-world carbon intensity data, electricity prices, and load traces from multiple locations and across different scenarios and objectives. Our results show that GreenThrift closely matches the offline optimal, retaining 97% of the savings when optimizing carbon emissions alone. Moreover, we show how GreenThrift balances the conflict between carbon and cost, retaining 95.3% and 85.5% of the potential carbon and cost savings, respectively.
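The abstract above does not spell out GreenThrift's optimization, so the following is only a minimal sketch of one plausible carbon- and cost-aware scheduler for a single deferrable load, assuming hourly carbon and price forecasts, a user-chosen carbon/cost weight `alpha`, and a per-slot delay penalty; all names and parameters are illustrative, not GreenThrift's interface.

```python
# Illustrative sketch only (not GreenThrift's actual optimizer): pick a start slot for a
# deferrable load that minimizes a weighted sum of carbon, cost, and a delay penalty.
from dataclasses import dataclass

@dataclass
class FlexibleLoad:
    duration: int       # number of hourly slots the load runs
    deadline: int       # load must finish within this many slots
    energy_kwh: float   # energy drawn per slot

def schedule_load(load, carbon_forecast, price_forecast, alpha=0.5, delay_penalty=0.0):
    """carbon_forecast: gCO2/kWh per slot; price_forecast: $/kWh per slot.
    alpha weights carbon (1.0) vs. cost (0.0); delay_penalty models user
    inconvenience per slot of deferral. A real system would normalize the
    carbon and cost terms to comparable scales before weighting them."""
    best_start, best_score = None, float("inf")
    for start in range(load.deadline - load.duration + 1):
        slots = range(start, start + load.duration)
        carbon = sum(carbon_forecast[t] * load.energy_kwh for t in slots)
        cost = sum(price_forecast[t] * load.energy_kwh for t in slots)
        score = alpha * carbon + (1 - alpha) * cost + delay_penalty * start
        if score < best_score:
            best_start, best_score = start, score
    return best_start, best_score

# Example: a 2-hour dishwasher cycle that must finish within the next 8 hours.
carbon = [420, 380, 300, 250, 260, 350, 400, 450]          # gCO2/kWh (made up)
price = [0.30, 0.28, 0.22, 0.25, 0.20, 0.18, 0.31, 0.33]   # $/kWh (made up)
load = FlexibleLoad(duration=2, deadline=8, energy_kwh=1.2)
print(schedule_load(load, carbon, price, alpha=0.7, delay_penalty=5.0))
```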
This content will become publicly available on December 1, 2025.
Data-driven Algorithm Selection for Carbon-Aware Scheduling
As computing demand continues to grow, minimizing its environmental impact has become crucial. This paper presents a study of carbon-aware scheduling algorithms, focusing on reducing the carbon emissions of delay-tolerant batch workloads. Inspired by the Follow the Leader strategy, we introduce a simple yet efficient meta-algorithm, called FTL, that dynamically selects the most efficient scheduling algorithm based on real-time data and historical performance. Without fine-tuning or parameter optimization, FTL adapts to variability in job lengths, carbon intensity forecasts, and regional energy characteristics, consistently outperforming traditional carbon-aware scheduling algorithms. Through extensive experiments on real-world data traces, FTL achieves 8.2% and 14% improvements in average carbon footprint reduction over the closest runner-up algorithm and the carbon-agnostic algorithm, respectively, demonstrating its efficacy in minimizing carbon emissions across multiple geographical regions.
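The abstract gives only a high-level description of FTL; a minimal Follow-the-Leader meta-scheduler in that spirit could look like the sketch below, where the `algorithms` and `carbon_of` interfaces are assumptions for illustration rather than the paper's API.

```python
# Hedged sketch of a Follow-the-Leader meta-scheduler: before each job, dispatch with
# the base algorithm whose cumulative carbon footprint on past jobs is lowest.
def follow_the_leader(jobs, algorithms, carbon_of):
    """algorithms: dict name -> function(job) -> schedule (assumed interface).
    carbon_of: function(schedule) -> realized carbon emissions in gCO2."""
    cumulative = {name: 0.0 for name in algorithms}  # hindsight carbon per algorithm
    total_carbon = 0.0
    for job in jobs:
        # The current "leader" is the algorithm with the lowest carbon so far.
        leader = min(cumulative, key=cumulative.get)
        # Evaluate every algorithm on this job so the tallies stay comparable.
        carbon = {name: carbon_of(algo(job)) for name, algo in algorithms.items()}
        total_carbon += carbon[leader]   # emissions actually incurred by the leader
        for name in algorithms:
            cumulative[name] += carbon[name]
    return total_carbon
```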
- PAR ID: 10591395
- Publisher / Repository: ACM
- Date Published:
- Journal Name: ACM SIGEnergy Energy Informatics Review
- Volume: 4
- Issue: 5
- ISSN: 2770-5331
- Page Range / eLocation ID: 148 to 153
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Real-time data stream processing at the edge is crucial for time-sensitive tasks within large-scale IoT systems. Task scheduling plays a key role in managing Quality of Service (QoS), necessitating a prioritization system that distinguishes between high- and low-priority tasks to ensure efficient data processing on edge nodes. Existing scheduling algorithms rigidly prioritize tasks deemed high-priority, often at the expense of fairness and overall system efficiency. In this paper, we propose a Priority-aware Fair Task Scheduling (FTS-Hybrid) algorithm that addresses these challenges by managing priority-based task execution in a controlled manner. Our task scheduling algorithm streamlines resource utilization and enhances system responsiveness, contributing to low latency and high throughput, outperforming competing techniques including First-Come-First-Serve (FCFS), Round Robin (RR), and Priority Scheduling (PS). We implemented FTS-Hybrid on Apache Storm and evaluated its performance using an open-source real-time IoT benchmark (RIoTBench). Experimental results show that FTS-Hybrid reduces task execution latency by 24%, 31%, and 26% compared with FCFS, RR, and PS, respectively, by strategically mitigating queuing delays under dynamic workload conditions. (A priority-plus-fairness sketch appears after this list.)
- The rapid increase in computing demand and the corresponding energy consumption have focused attention on computing's impact on the climate and sustainability. Prior work proposes metrics that quantify computing's carbon footprint across several lifecycle phases, including its supply chain, operation, and end-of-life. Industry uses these metrics to optimize the carbon footprint of manufacturing hardware and running computing applications. Unfortunately, prior work on optimizing datacenters' carbon footprint often succumbs to the sunk cost fallacy by considering embodied carbon emissions (a sunk cost) when making operational decisions (i.e., job scheduling and placement), which leads to operational decisions that do not always reduce the total carbon footprint. In this paper, we evaluate carbon-aware job scheduling and placement on a given set of servers for several carbon accounting metrics. Our analysis reveals that state-of-the-art carbon accounting metrics that include embodied carbon emissions in operational decisions can increase the total carbon footprint of executing a set of jobs. We study the factors that affect the added carbon cost of such suboptimal decision-making. We then use a real-world case study from a datacenter to demonstrate how the sunk carbon fallacy manifests itself in practice. Finally, we discuss the implications of our findings for better guiding effective carbon-aware scheduling in on-premise and cloud datacenters. (A small numeric illustration appears after this list.)
- An important goal of modern scheduling systems is to efficiently manage power usage. In energy-efficient scheduling, the operating system controls the speed at which a machine processes jobs, with the dual objective of minimizing energy consumption and optimizing the quality-of-service cost of the resulting schedule. Since machine-learned predictions about future requests can often be learned from historical data, a recent line of work on learning-augmented algorithms aims to achieve improved performance guarantees by leveraging predictions. In particular, for energy-efficient scheduling, Bamas et al. [NeurIPS '20] and Antoniadis et al. [SWAT '22] designed algorithms with predictions for the energy minimization with deadlines problem, achieving an improved competitive ratio when the prediction error is small while also maintaining worst-case bounds even when the prediction error is arbitrarily large. In this paper, we consider a general setting for energy-efficient scheduling and provide a flexible learning-augmented algorithmic framework that takes as input an offline and an online algorithm for the desired energy-efficient scheduling problem. We show that, when the prediction error is small, this framework gives improved competitive ratios for many different energy-efficient scheduling problems, including energy minimization with deadlines, while also maintaining a bounded competitive ratio regardless of the prediction error. Finally, we empirically demonstrate that this framework achieves improved performance on real and synthetic datasets. (A generic offline/online combiner sketch appears after this list.)
- Cloud platforms are increasing their emphasis on sustainability and reducing their operational carbon footprint. A common approach to reducing carbon emissions is to exploit the temporal flexibility inherent to many cloud workloads by executing them in periods with the greenest energy and suspending them at other times. Since such suspend-resume approaches can incur long delays in job completion times, we present a new approach that exploits the elasticity of batch workloads in the cloud to optimize their carbon emissions. Our approach is based on the notion of carbon scaling, similar to cloud autoscaling, where a job dynamically varies its server allocation based on fluctuations in the carbon cost of the grid's energy. We develop a greedy algorithm for minimizing a job's carbon emissions via carbon scaling that is based on the well-known problem of marginal resource allocation. We implement a CarbonScaler prototype in Kubernetes using its autoscaling capabilities, along with an analytic tool to guide the carbon-efficient deployment of batch applications in the cloud. We then evaluate CarbonScaler using real-world machine learning training and MPI jobs on a commercial cloud platform and show that it can yield i) 51% carbon savings over carbon-agnostic execution; ii) 37% over a state-of-the-art suspend-resume policy; and iii) 8% over the best static scaling policy. (A greedy marginal-allocation sketch appears after this list.)
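For the first item above (FTS-Hybrid), the following is a hedged sketch of a priority-aware yet fair scheduler built on a simple serving ratio between two queues; the ratio-based rule is an assumption for illustration, not the paper's algorithm or its Apache Storm implementation.

```python
# Hedged sketch of a priority-aware fair scheduler: serve up to `high_per_low`
# high-priority tasks for every low-priority task so low-priority work is not starved.
from collections import deque

class PriorityFairScheduler:
    def __init__(self, high_per_low=3):
        self.high, self.low = deque(), deque()
        self.high_per_low = high_per_low
        self._served_high = 0

    def submit(self, task, high_priority=False):
        (self.high if high_priority else self.low).append(task)

    def next_task(self):
        # Prefer high-priority work, but yield to the low-priority queue once the
        # serving ratio is reached (unless there is no low-priority work waiting).
        if self.high and (self._served_high < self.high_per_low or not self.low):
            self._served_high += 1
            return self.high.popleft()
        if self.low:
            self._served_high = 0
            return self.low.popleft()
        return None

# Example: with a 2:1 ratio, one low-priority task is interleaved after two high ones.
sched = PriorityFairScheduler(high_per_low=2)
for i in range(5):
    sched.submit(f"hi-{i}", high_priority=True)
sched.submit("lo-0")
print([sched.next_task() for _ in range(6)])  # ['hi-0', 'hi-1', 'lo-0', 'hi-2', 'hi-3', 'hi-4']
```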
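For the second item (the sunk carbon fallacy), a tiny numeric illustration with invented numbers shows how adding amortized embodied carbon to an operational placement metric can select the server that actually emits more.

```python
# Invented numbers illustrating the sunk-cost effect: amortized embodied carbon is
# already emitted, so including it in a placement metric can pick the server whose
# operational (i.e., avoidable) emissions are higher.
servers = {
    # name: (amortized embodied gCO2 per job-hour, operational gCO2 per job-hour)
    "old_inefficient": (5.0, 120.0),
    "new_efficient": (40.0, 90.0),
}

def place(metric):
    """Pick the server minimizing the given carbon accounting metric."""
    return min(servers, key=lambda name: metric(*servers[name]))

print(place(lambda embodied, operational: operational))             # new_efficient (90 g/h emitted)
print(place(lambda embodied, operational: embodied + operational))  # old_inefficient (120 g/h emitted)
```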
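For the third item (learning-augmented energy-efficient scheduling), the sketch below shows one generic way a combiner can take an offline, prediction-based plan and a robust online policy and hedge between them; the switching rule, the cubic power model, and the interfaces are assumptions for illustration, not the authors' framework.

```python
# Hedged sketch of a learning-augmented combiner for speed scaling, assuming the
# classic energy model energy = speed ** alpha. Not the authors' construction.
def energy(speed, alpha=3.0):
    return speed ** alpha

def combine(requests, predicted_speeds, online_policy, trust_factor=2.0):
    """requests: observed work arriving per time step.
    predicted_speeds: speeds planned offline from *predicted* requests.
    online_policy: function(t, request) -> speed, with worst-case guarantees.
    Follow the prediction-based plan while its accumulated energy stays within
    trust_factor of the online policy's; otherwise fall back to the online policy,
    which bounds the damage from arbitrarily bad predictions."""
    schedule, plan_energy, online_energy = [], 0.0, 0.0
    for t, request in enumerate(requests):
        online_speed = online_policy(t, request)
        online_energy += energy(online_speed)
        if t < len(predicted_speeds) and plan_energy <= trust_factor * online_energy:
            speed = predicted_speeds[t]       # predictions still look trustworthy
        else:
            speed = online_speed              # robustness fallback
        plan_energy += energy(speed)
        schedule.append(speed)
    return schedule
```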
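For the last item (CarbonScaler), the abstract describes a greedy algorithm built on marginal resource allocation; the sketch below illustrates that idea under a concave scaling profile, with the per-server power draw, profile, and slot data all invented for the example.

```python
# Hedged sketch of greedy marginal allocation for "carbon scaling": repeatedly add one
# server-hour wherever the carbon cost per marginal unit of work is lowest, until the
# job's total work is covered. Not the CarbonScaler implementation.
import heapq

def carbon_scale(total_work, carbon_per_slot, throughput, max_servers, power_kw=0.3):
    """carbon_per_slot: gCO2/kWh forecast per hourly slot.
    throughput[k]: work completed per hour with k servers (concave in k).
    power_kw: assumed power draw per server."""
    alloc = [0] * len(carbon_per_slot)   # servers allocated in each slot
    done = 0.0
    heap = []                            # (carbon per marginal unit of work, slot)
    for t, ci in enumerate(carbon_per_slot):
        heapq.heappush(heap, (ci * power_kw / (throughput[1] - throughput[0]), t))
    while done < total_work and heap:
        _, t = heapq.heappop(heap)
        k = alloc[t]
        done += throughput[k + 1] - throughput[k]
        alloc[t] = k + 1
        if alloc[t] < max_servers:       # re-offer this slot at its next marginal cost
            marginal = throughput[alloc[t] + 1] - throughput[alloc[t]]
            heapq.heappush(heap, (carbon_per_slot[t] * power_kw / marginal, t))
    return alloc

# Example: 10 units of work, diminishing returns beyond the first server.
profile = [0, 4, 7, 9, 10]                                   # work/hour with 0..4 servers
print(carbon_scale(10, [300, 120, 150, 400], profile, 4))    # -> [0, 2, 1, 0]
```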