
Title: VECMAN: A Framework for Energy-Aware Resource Management in Vehicular Edge Computing Systems
In Vehicular Edge Computing (VEC) systems, the computing resources of connected Electric Vehicles (EVs) are used to fulfill the low-latency computation requirements of vehicles. However, local execution of heavy workloads may drain a considerable amount of energy in EVs. One promising way to improve energy efficiency is to share and coordinate computing resources among connected EVs. However, uncertainty in the future location of vehicles makes it hard to decide which vehicles participate in resource sharing and how long they share their resources so that all participants benefit. In this paper, we propose VECMAN, a framework for energy-aware resource management in VEC systems composed of two algorithms: (i) a resource selector algorithm that determines the participating vehicles and the duration of the resource-sharing period; and (ii) an energy manager algorithm that manages the computing resources of the participating vehicles with the aim of minimizing the computational energy consumption. We evaluate the proposed algorithms and show that they considerably reduce the vehicles' computational energy consumption compared to state-of-the-art baselines. Specifically, our algorithms achieve between 7% and 18% energy savings compared to a baseline that executes workloads locally, and an average of 13% energy savings compared to a baseline that offloads vehicles' workloads to RSUs.
Authors:
Award ID(s):
1948365 1724227
Publication Date:
NSF-PAR ID:
10280442
Journal Name:
IEEE Transactions on Mobile Computing
Page Range or eLocation-ID:
1 to 1
ISSN:
1536-1233
Sponsoring Org:
National Science Foundation
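
The abstract above does not give the algorithms themselves, but the two-stage structure it describes can be illustrated with a small sketch. Everything below is an assumption for illustration: the residency-based selection rule, the linear power/throughput energy model, and the greedy lowest-joules-per-operation assignment are stand-ins, not VECMAN's actual formulation.

```python
# Hypothetical sketch of a two-stage selector/manager, not VECMAN itself.
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: int
    residency_s: float      # predicted time (s) the vehicle stays connected (assumed known)
    power_w: float          # computational power draw while active (W)
    throughput_ops: float   # operations per second at that power draw

def select_participants(vehicles, min_residency_s):
    """Resource selector (sketch): keep vehicles predicted to stay long enough
    for sharing to pay off; the sharing period is the shortest residency
    among those selected."""
    selected = [v for v in vehicles if v.residency_s >= min_residency_s]
    period_s = min((v.residency_s for v in selected), default=0.0)
    return selected, period_s

def assign_workload(selected, period_s, total_ops):
    """Energy manager (sketch): place work on the most energy-efficient
    vehicles (fewest joules per operation) first."""
    plan, remaining = {}, total_ops
    for v in sorted(selected, key=lambda v: v.power_w / v.throughput_ops):
        if remaining <= 0:
            break
        ops = min(remaining, v.throughput_ops * period_s)  # cap at vehicle capacity
        plan[v.vid] = ops
        remaining -= ops
    return plan, remaining  # any remainder would have to go to an RSU

fleet = [Vehicle(1, 120.0, 30.0, 2e9),
         Vehicle(2, 40.0, 25.0, 1.5e9),
         Vehicle(3, 300.0, 45.0, 4e9)]
participants, period = select_participants(fleet, min_residency_s=60.0)
plan, leftover = assign_workload(participants, period, total_ops=5e11)
print(period, plan, leftover)
```

The point of the sketch is only the division of labor: selection fixes who shares and for how long, and the manager then minimizes energy within that window.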
More Like this
  1. The low-latency requirements of connected electric vehicles and their increasing computing needs have made it necessary to move computational nodes from cloud data centers to edge nodes such as road-side units (RSUs). However, offloading the workload of all vehicles to RSUs may not scale well to an increasing number of vehicles and workloads. To solve this problem, computing nodes can be installed directly on the smart vehicles, so that each vehicle can execute its heavy workload locally, thus forming a vehicular edge computing system. On the other hand, these computational nodes may drain a considerable amount of energy in electric vehicles. It is therefore important to manage the resources of connected electric vehicles to minimize their energy consumption. In this paper, we propose an algorithm that manages the computing nodes of connected electric vehicles for minimized energy consumption. The algorithm achieves energy savings for connected electric vehicles by exploiting the discrete settings of computational power for various performance levels. We evaluate the proposed algorithm and show that it considerably reduces the vehicles' computational energy consumption compared to state-of-the-art baselines. Specifically, our algorithm achieves 15-85% energy savings compared to a baseline that executes workload locally and an average of 51% energy savings compared to a baseline that offloads vehicles' workloads only to RSUs.
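
The key mechanism named in this abstract, discrete computational power settings for various performance levels, can be sketched as a feasibility-then-minimize search: among the settings that still meet the workload's deadline, pick the one with the lowest total energy. The power/throughput levels and the energy = power × time model below are illustrative assumptions, not the paper's measured settings.

```python
# Illustrative discrete performance levels: (power_watts, ops_per_second).
LEVELS = [
    (10.0, 0.8e9),
    (18.0, 1.6e9),
    (35.0, 2.4e9),
]

def best_level(workload_ops, deadline_s):
    """Return the setting minimizing energy = power * (workload / throughput),
    subject to workload / throughput <= deadline; None if no setting fits."""
    feasible = [(p, r) for p, r in LEVELS if workload_ops / r <= deadline_s]
    return min(feasible, key=lambda pr: pr[0] * workload_ops / pr[1], default=None)

# The lowest level misses the deadline; of the rest, the slower one wins on energy.
print(best_level(2.0e9, deadline_s=2.0))  # -> (18.0, 1.6e9)
```

Note that the energy-optimal feasible level is often not the fastest one, which is exactly why exposing the discrete settings matters.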
  2. Apache Mesos, a two-level resource scheduler, provides resource sharing across multiple users in a multi-tenant clustered environment. Computational resources (i.e., CPU, memory, disk, etc.) are distributed according to the Dominant Resource Fairness (DRF) policy. Mesos frameworks (users) receive resources based on their current usage and are responsible for scheduling their tasks within the allocation. We have observed that multiple frameworks can cause fairness imbalance in a multi-user environment. For example, a greedy framework consuming more than its fair share of resources can deny resource fairness to others. The user with the least dominant share is considered first by the DRF module to get its resource allocation. However, the default DRF implementation in Apache Mesos' Master allocation module does not consider the overall resource demands of the tasks in the queue for each user/framework. This lack of awareness can lead to poor performance, as users without any pending task may receive more resource offers, and users with a queue of pending tasks can starve due to their high dominant shares. In a multi-tenant environment, the characteristics of frameworks and workloads must be understood by cluster managers to be able to define fairness based on not only resource share but also resource demand and queue wait time. We have developed a policy-driven queue manager, Tromino, for an Apache Mesos cluster where tasks for individual frameworks can be scheduled based on each framework's overall resource demands and current resource consumption. The dominant-share and demand awareness of Tromino, and scheduling based on these attributes, can reduce (1) the impact of unfairness due to a framework-specific configuration, and (2) unfair waiting time due to higher resource demand in a pending task queue. In the best case, Tromino can significantly reduce the average waiting time of a framework by using the proposed Demand-DRF aware policy.
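
For reference, the baseline DRF policy that Tromino extends can be written as a short loop: repeatedly offer resources to the framework with the lowest dominant share. The capacities and per-task demands below reproduce the classic example from the DRF paper and are not taken from Tromino; Tromino's demand awareness is indicated only as a comment.

```python
CAPACITY = {"cpu": 9.0, "mem": 18.0}

frameworks = {
    "A": {"demand": {"cpu": 1.0, "mem": 4.0}, "used": {"cpu": 0.0, "mem": 0.0}},
    "B": {"demand": {"cpu": 3.0, "mem": 1.0}, "used": {"cpu": 0.0, "mem": 0.0}},
}

def dominant_share(fw):
    # A framework's dominant share is its largest fractional use of any resource.
    return max(fw["used"][r] / CAPACITY[r] for r in CAPACITY)

def drf_allocate():
    used_total = {r: 0.0 for r in CAPACITY}
    while True:
        # Plain DRF: offer to the framework with the lowest dominant share.
        # (A demand-aware policy like Tromino's would also weigh each
        #  framework's pending-queue demand at this step.)
        candidates = [
            (name, fw) for name, fw in frameworks.items()
            if all(used_total[r] + fw["demand"][r] <= CAPACITY[r] for r in CAPACITY)
        ]
        if not candidates:
            break
        _, fw = min(candidates, key=lambda nf: dominant_share(nf[1]))
        for r in CAPACITY:
            fw["used"][r] += fw["demand"][r]
            used_total[r] += fw["demand"][r]
    return {n: fw["used"] for n, fw in frameworks.items()}

# Classic DRF outcome: A runs 3 tasks, B runs 2, equalizing dominant shares at 2/3.
print(drf_allocate())
```

The loop makes the abstract's complaint concrete: nothing in it looks at how many tasks a framework still has queued, only at what it currently uses.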
  3. Recent advances in computing algorithms and hardware have rekindled interest in developing high-accuracy, low-cost surrogate models for simulating physical systems. The idea is to replace expensive numerical integration of complex coupled partial differential equations at fine time scales performed on supercomputers, with machine-learned surrogates that efficiently and accurately forecast future system states using data sampled from the underlying system. One particularly popular technique being explored within the weather and climate modelling community is the echo state network (ESN), an attractive alternative to other well-known deep learning architectures. Using the classical Lorenz 63 system, and the three-tier multi-scale Lorenz 96 system (Thornes T, Duben P, Palmer T. 2017 Q. J. R. Meteorol. Soc. 143, 897–908. (doi:10.1002/qj.2974)) as benchmarks, we realize that previously studied state-of-the-art ESNs operate in two distinct regimes, corresponding to low and high spectral radius (LSR/HSR) for the sparse, randomly generated reservoir recurrence matrix. Using knowledge of the mathematical structure of the Lorenz systems along with systematic ablation and hyperparameter sensitivity analyses, we show that state-of-the-art LSR-ESNs reduce to a polynomial regression model which we call Domain-Driven Regularized Regression (D2R2). Interestingly, D2R2 is a generalization of the well-known SINDy algorithm (Brunton SL, Proctor JL, Kutz JN. 2016 Proc. Natl Acad. Sci. USA 113, 3932–3937. (doi:10.1073/pnas.1517384113)). We also show experimentally that LSR-ESNs (Chattopadhyay A, Hassanzadeh P, Subramanian D. 2019 (http://arxiv.org/abs/1906.08829)) outperform HSR-ESNs (Pathak J, Hunt B, Girvan M, Lu Z, Ott E. 2018 Phys. Rev. Lett. 120, 024102. (doi:10.1103/PhysRevLett.120.024102)), while D2R2 dominates both approaches. A significant goal in constructing surrogates is to cope with barriers to scaling in weather prediction and simulation of dynamical systems that are imposed by time and energy consumption in supercomputers. Inexact computing has emerged as a novel approach to helping with scaling. In this paper, we evaluate the performance of three models (LSR-ESN, HSR-ESN and D2R2) by varying the precision or word size of the computation as our inexactness-controlling parameter. For precisions of 64, 32 and 16 bits, we show that, surprisingly, the least expensive D2R2 method yields the most robust results and the greatest savings compared to ESNs. Specifically, D2R2 achieves 68× computational savings, with an additional 2× if precision reductions are also employed, outperforming ESN variants by a large margin. This article is part of the theme issue ‘Machine learning for weather and climate modelling’.
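
A minimal sketch of the D2R2 idea as described, i.e., regularized polynomial regression mapping the current state to the next: quadratic features with a ridge solve, trained on a Lorenz 63 trajectory. The feature library, integration scheme, step size, and penalty below are illustrative choices, not the paper's configuration.

```python
import numpy as np

def lorenz63_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of Lorenz 63 -- crude, but fine for a demo.
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

def quad_features(X):
    # Polynomial "library": [1, x, y, z, all pairwise products].
    ones = np.ones((X.shape[0], 1))
    pairs = np.stack([X[:, i] * X[:, j]
                      for i in range(3) for j in range(i, 3)], axis=1)
    return np.hstack([ones, X, pairs])

# Generate training data and fit a ridge-regularized next-state map.
traj = [np.array([1.0, 1.0, 1.0])]
for _ in range(5000):
    traj.append(lorenz63_step(traj[-1]))
X, Y = np.array(traj[:-1]), np.array(traj[1:])
Phi, lam = quad_features(X), 1e-6
W = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ Y)

# One-step training error; a real evaluation would use held-out trajectories.
print(np.max(np.abs(Phi @ W - Y)))
```

Because Lorenz 63's right-hand side is itself quadratic, a quadratic library can represent the one-step map almost exactly, which gives some intuition for why a polynomial surrogate competes with ESNs here.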
  4. Cloud computing has motivated renewed interest in resource allocation problems with new consumption models. A common goal is to share a resource, such as CPU or I/O bandwidth, among distinct users with different demand patterns as well as different quality-of-service requirements. To ensure these service requirements, cloud offerings often come with a service level agreement (SLA) between the provider and the users. An SLA specifies the amount of a resource a user is entitled to utilize. In many cloud settings, providers would like to operate resources at high utilization while simultaneously respecting individual SLAs. There is typically a trade-off between these two objectives; for example, utilization can be increased by shifting resources away from idle users to “scavenger” workload, but with the risk of the former becoming active again. We study this fundamental trade-off by formulating a resource allocation model that captures basic properties of cloud computing systems, including SLAs, highly limited feedback about the state of the system, and variable and unpredictable input sequences. Our main result is a simple and practical algorithm that achieves near-optimal performance on the above two objectives. First, we guarantee nearly optimal utilization of the resource even when compared with the omniscient offline dynamic optimum. Second, we simultaneously satisfy all individual SLAs up to a small error. The main algorithmic tools are a multiplicative weight update algorithm and a primal-dual argument to obtain its guarantees. We also provide numerical validation on real data to demonstrate the performance of our algorithm in practical applications.
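
A toy version of a multiplicative-weights allocator in the spirit described: each round the resource is split in proportion to user weights, and a user's weight is boosted when its running share falls below its SLA fraction. The update rule, step size, and convergence behavior below are assumptions for illustration, not the paper's construction or its guarantees.

```python
EPS = 0.1
slas = {"u1": 0.5, "u2": 0.3, "u3": 0.2}   # entitled fractions, summing to 1
weights = {u: 1.0 for u in slas}
received = {u: 0.0 for u in slas}

ROUNDS = 1000
for t in range(1, ROUNDS + 1):
    total_w = sum(weights.values())
    for u in slas:
        received[u] += weights[u] / total_w   # this round's proportional share
    for u in slas:
        # Multiplicative update: boost users running below their SLA,
        # damp users running above it.
        deficit = slas[u] - received[u] / t
        weights[u] *= (1.0 + EPS) if deficit > 0 else (1.0 - EPS)

# Long-run average shares should track the SLA fractions approximately.
print({u: round(received[u] / ROUNDS, 3) for u in slas})
```

Even this toy shows the appeal of the approach under limited feedback: the allocator only needs each user's cumulative share versus entitlement, not a model of future demand.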
  5. Robust Adaptive Secure Secret Sharing (RASSS) is a protocol for reconstructing secrets and information in distributed computing systems, even in the presence of a large number of untrusted participants. Since the original Shamir's Secret Sharing scheme, there have been efforts to secure the technique against dishonest shareholders. Early on, researchers determined that the Reed-Solomon encoding property of Shamir's share distribution equation and its decoding algorithm could tolerate cheaters among up to one third of the total shareholders. However, if the number of cheaters grows beyond the error-correcting capability (distance) of the Reed-Solomon codes, reconstruction of the secret is hindered. Untrusted participants or cheaters could hide in the decoding procedure, or even frame the honest parties. In this paper, we solve this challenge and propose a secure protocol that is no longer constrained by the limitations of Reed-Solomon codes. As long as there is a minimum number of honest shareholders, the RASSS protocol is able to identify the cheaters and retrieve the correct secret or information in a distributed system with probability close to 1 and with less than 60% hardware overhead. Furthermore, the adaptive nature of the protocol enables considerable hardware and timing resource savings and makes RASSS highly practical.
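
For context, the base scheme RASSS hardens is Shamir's Secret Sharing, sketched below over a prime field with Lagrange reconstruction. This shows only honest-party share generation and recovery; the cheater identification that constitutes RASSS's actual contribution is not shown, and the field prime is an arbitrary demo choice.

```python
import random

P = 2**61 - 1  # a Mersenne prime; any prime larger than the secret works

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        # Evaluate the degree-(k-1) polynomial at x via Horner's rule.
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret

shares = make_shares(123456789, k=3, n=6)
print(reconstruct(shares[:3]))  # any 3 of the 6 shares suffice -> 123456789
```

Seen through this sketch, the abstract's point is that reconstruction degrades once too many of the (xi, yi) pairs are forged, since the underlying Reed-Solomon decoding can no longer correct them; RASSS targets exactly that regime.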