Title: Energy-Aware Resource Management in Vehicular Edge Computing Systems
The low-latency requirements of connected electric vehicles and their increasing computing needs have made it necessary to move computation from cloud data centers to edge nodes such as road-side units (RSUs). However, offloading the workload of all vehicles to RSUs may not scale well as the number of vehicles and workloads grows. To solve this problem, computing nodes can be installed directly on smart vehicles, so that each vehicle can execute heavy workloads locally, forming a vehicular edge computing system. On the other hand, these computing nodes may drain a considerable amount of energy in electric vehicles. It is therefore important to manage the resources of connected electric vehicles to minimize their energy consumption. In this paper, we propose an algorithm that manages the computing nodes of connected electric vehicles for minimized energy consumption. The algorithm achieves energy savings for connected electric vehicles by exploiting the discrete settings of computational power for various performance levels. We evaluate the proposed algorithm and show that it considerably reduces the vehicles' computational energy consumption compared to state-of-the-art baselines. Specifically, our algorithm achieves 15-85% energy savings compared to a baseline that executes workload locally, and an average of 51% energy savings compared to a baseline that offloads vehicles' workloads only to RSUs.
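The abstract does not spell out the algorithm's formulation, but the core idea it names, exploiting discrete computational power settings for different performance levels, can be illustrated with a minimal sketch. Everything below (the settings table, power figures, and workload parameters) is hypothetical, not the paper's actual model:

```python
# Hypothetical sketch: pick the cheapest discrete compute setting that still
# meets a workload's deadline. Settings, power draws, and cycle counts are
# illustrative, not the paper's actual numbers.

# (frequency in GHz, power draw in watts) for each discrete performance level
SETTINGS = [(0.8, 4.0), (1.2, 7.5), (1.6, 12.0), (2.0, 18.0)]

def pick_setting(cycles_giga, deadline_s):
    """Return (freq, power, energy) of the lowest-energy setting that meets
    the deadline, or None if no setting is fast enough."""
    best = None
    for freq, power in SETTINGS:
        runtime = cycles_giga / freq          # seconds to finish the workload
        if runtime > deadline_s:
            continue                          # too slow: deadline violated
        energy = power * runtime              # joules spent at this setting
        if best is None or energy < best[2]:
            best = (freq, power, energy)
    return best

if __name__ == "__main__":
    # 3.2 giga-cycles of work with a 3-second deadline
    print(pick_setting(3.2, 3.0))
```

In this toy example the slowest setting misses the deadline and the fastest one wastes power, so the sketch settles on an intermediate level, which is the kind of trade-off the abstract alludes to.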
Award ID(s):
1724227
PAR ID:
10183074
Author(s) / Creator(s):
Date Published:
Journal Name:
Proc. of the IEEE International Conference on Cloud Engineering (IC2E 2020)
Page Range / eLocation ID:
49 to 58
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In Vehicular Edge Computing (VEC) systems, the computing resources of connected Electric Vehicles (EVs) are used to fulfill the low-latency computation requirements of vehicles. However, local execution of heavy workloads may drain a considerable amount of energy in EVs. One promising way to improve energy efficiency is to share and coordinate computing resources among connected EVs. However, the uncertainty in the future location of vehicles makes it hard to decide which vehicles participate in resource sharing and how long they share their resources so that all participants benefit. In this paper, we propose VECMAN, a framework for energy-aware resource management in VEC systems composed of two algorithms: (i) a resource selector algorithm that determines the participating vehicles and the duration of the resource-sharing period; and (ii) an energy manager algorithm that manages the computing resources of the participating vehicles with the aim of minimizing computational energy consumption. We evaluate the proposed algorithms and show that they considerably reduce the vehicles' computational energy consumption compared to state-of-the-art baselines. Specifically, our algorithms achieve between 7% and 18% energy savings compared to a baseline that executes workloads locally, and an average of 13% energy savings compared to a baseline that offloads vehicles' workloads to RSUs.
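As a rough illustration of the two-stage structure this abstract describes (a resource selector followed by an energy manager), the following sketch shows one plausible shape. The vehicle model, dwell-time selection rule, and greedy placement are assumptions for illustration, not VECMAN's actual algorithms:

```python
# Hypothetical sketch of a resource selector that picks which EVs participate
# and for how long, followed by an energy manager that assigns workload to the
# selected vehicles. All fields and rules below are illustrative.
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: str
    expected_dwell_s: float        # predicted time the vehicle stays in range
    idle_capacity_gcycles: float   # spare compute it could share
    watts_per_gcycle: float        # energy cost of computing on this vehicle

def select_resources(vehicles, min_dwell_s):
    """Resource selector: keep vehicles likely to stay long enough, and set
    the sharing period to the shortest dwell time among them."""
    chosen = [v for v in vehicles if v.expected_dwell_s >= min_dwell_s]
    period = min((v.expected_dwell_s for v in chosen), default=0.0)
    return chosen, period

def manage_energy(chosen, workload_gcycles):
    """Energy manager: greedily place work on the cheapest vehicles first."""
    plan, remaining = {}, workload_gcycles
    for v in sorted(chosen, key=lambda v: v.watts_per_gcycle):
        share = min(v.idle_capacity_gcycles, remaining)
        if share > 0:
            plan[v.vid] = share
            remaining -= share
    return plan, remaining   # remaining > 0 means offload the rest to an RSU

if __name__ == "__main__":
    fleet = [Vehicle("ev1", 120, 5, 3.0), Vehicle("ev2", 40, 8, 2.0),
             Vehicle("ev3", 200, 4, 2.5)]
    chosen, period = select_resources(fleet, min_dwell_s=60)
    print(period, manage_energy(chosen, workload_gcycles=7))
```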
  2. Recently, with the advent of the Internet of Everything and 5G networks, the amount of data generated by edge scenarios such as autonomous vehicles, smart industry, 4K/8K video, virtual reality (VR), and augmented reality (AR) has exploded. These trends bring real-time, hardware-dependence, low-power, and security requirements to edge facilities and have rapidly popularized edge computing. Meanwhile, artificial intelligence (AI) workloads have dramatically shifted the computing paradigm from cloud services to mobile applications. Unlike AI in the cloud or on mobile platforms, which is widely deployed and well studied, the performance of AI workloads and their resource impact on edge platforms are not yet well understood; an in-depth analysis and comparison of their advantages, limitations, performance, and resource consumption in edge environments is lacking. In this paper, we perform a comprehensive study of representative AI workloads on edge platforms. We first summarize modern edge hardware and popular AI workloads. Then we quantitatively evaluate three categories (classification, image-to-image, and segmentation) of the most popular and widely used AI applications in realistic edge environments based on the Raspberry Pi, Nvidia TX2, and other devices. We find that the interaction between hardware and neural network models incurs a non-negligible impact and overhead on AI workloads at the edge. Our experiments show that performance variation and differences in resource footprint limit the availability of certain types of workloads and algorithms on edge platforms, and that users need to select the appropriate workload, model, and algorithm based on the requirements and characteristics of the edge environment.
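The study's exact measurement methodology is not given in this abstract; a minimal sketch of the kind of latency benchmarking it describes for a classification workload on an edge device might look like the following. The model choice, input size, and repetition counts are assumptions:

```python
# Hypothetical sketch: time a classification model's inference on an edge
# device. Warmup runs absorb one-time allocation costs before measurement.
import time
import torch
import torchvision.models as models

def benchmark(model, input_shape=(1, 3, 224, 224), warmup=5, runs=20):
    """Return mean per-inference latency in seconds over `runs` iterations."""
    model.eval()
    x = torch.randn(*input_shape)
    with torch.no_grad():
        for _ in range(warmup):            # warm caches / lazy allocations
            model(x)
        times = []
        for _ in range(runs):
            start = time.perf_counter()
            model(x)
            times.append(time.perf_counter() - start)
    return sum(times) / len(times)

if __name__ == "__main__":
    net = models.mobilenet_v2(weights=None)   # small model typical for edges
    print(f"mean latency: {benchmark(net) * 1000:.1f} ms")
```

Running the same script on a Raspberry Pi, a Jetson-class board, and a workstation is the sort of cross-platform comparison the study performs at much larger scale.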
  3. Software applications and workloads, especially within the domains of Cloud computing and large-scale AI model training, exert considerable demand on computing resources and thus contribute significantly to the overall energy footprint of the IT industry. In this paper, we present an in-depth analysis of software coding practices that can play a substantial role in increasing an application's overall energy consumption, primarily through suboptimal utilization of computing resources. Our study encompasses a thorough investigation of 16 distinct code smells and other coding malpractices across 31 real-world open-source applications written in Java and Python. We provide compelling evidence that various common refactoring techniques, typically employed to rectify specific code smells, can unintentionally escalate an application's energy consumption, and we illustrate that a discerning, strategic approach to code smell refactoring can yield substantial energy savings: selective refactorings reduce energy consumption by up to 13.1% and carbon emissions by 5.1% per workload on average. These findings underscore the potential of selective and intelligent refactoring to substantially increase the energy efficiency of Cloud software systems.
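One common way to obtain per-workload energy figures like those reported above is to read the Linux RAPL energy counter around a workload before and after a refactoring. The sketch below assumes an Intel CPU on Linux (the counter path may require root and wraparound is ignored), and the two workload variants are illustrative, not the study's benchmarks:

```python
# Hypothetical sketch: compare energy of a "smelly" and a refactored workload
# by sampling the RAPL package-energy counter around each run.
import time

RAPL = "/sys/class/powercap/intel-rapl/intel-rapl:0/energy_uj"

def read_energy_uj():
    with open(RAPL) as f:
        return int(f.read())

def measure(workload):
    """Return (joules, seconds) consumed while running `workload()`."""
    e0, t0 = read_energy_uj(), time.perf_counter()
    workload()
    return (read_energy_uj() - e0) / 1e6, time.perf_counter() - t0

def before():   # e.g., repeated string concatenation in a loop
    s = ""
    for i in range(200_000):
        s += str(i)
    return s

def after():    # refactored version using a single join
    return "".join(str(i) for i in range(200_000))

if __name__ == "__main__":
    print("before:", measure(before))
    print("after :", measure(after))
```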
  4. Connected Autonomous Vehicles (CAVs) have achieved significant improvements in recent years. CAVs can share sensor data to improve autonomous driving performance and enhance road safety. The CAV architecture depends on roadside edge servers for latency-sensitive applications; these servers are equipped with high-performance embedded edge computing devices that perform computation with low power requirements. Because the number of vehicles varies over the course of the day and vehicles can request different CAV applications, the computation requirements on the roadside edge computing platform also vary. Hence, a framework for dynamic deployment of edge computing platforms can ensure CAV applications' performance and proper usage of the devices. In this paper, we propose R-CAV, a framework for drone-based roadside edge server deployment that provides roadside units (RSUs) based on computation requirements. Our proof-of-concept implementation of an object detection algorithm on an Nvidia Jetson Nano demonstrates the framework's feasibility. We posit that the framework will advance the intelligent transport system vision by ensuring CAV applications' quality of service.
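The paper's deployment model is not reproduced in this abstract; as a hedged sketch, the dispatch decision could reduce to sizing the number of drone-carried RSUs against the current computation demand. The per-unit capacity and per-vehicle demand figures below are hypothetical, not R-CAV's actual parameters:

```python
# Hypothetical sketch: dispatch enough drone-carried RSUs to cover the current
# object-detection demand from vehicles in an area. Figures are illustrative.
import math

RSU_CAPACITY_FPS = 30.0      # frames/s of detection one drone RSU sustains

def rsus_needed(vehicles, fps_per_vehicle=5.0):
    """Number of drone RSUs to deploy for the current number of vehicles."""
    demand = vehicles * fps_per_vehicle
    return math.ceil(demand / RSU_CAPACITY_FPS)

if __name__ == "__main__":
    for n in (3, 12, 40):     # demand varies over the day
        print(f"{n} vehicles -> deploy {rsus_needed(n)} drone RSU(s)")
```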
  5. Vehicular edge computing relies on the computational capabilities of interconnected edge devices to manage incoming requests from vehicles. This offloading process enhances the speed and efficiency of data handling, ultimately boosting the safety, performance, and reliability of connected vehicles. While previous studies have concentrated on processor characteristics, they often overlook the significance of the connecting components. Limited memory and storage resources on edge devices pose challenges, particularly in the context of deep learning, where these limitations can significantly affect performance. The impact of memory contention has not been thoroughly explored, especially regarding perception-based tasks. In our analysis, we identified three distinct behaviors of memory contention, each interacting differently with other resources. Additionally, our investigation of Deep Neural Network (DNN) layers revealed that certain convolutional layers experienced computation time increases exceeding 2849%, while activation layers showed a rise of 1173.34%. Through our characterization efforts, we can model workload behavior on edge devices according to their configuration and the demands of the tasks. This allows us to quantify the effects of memory contention. To our knowledge, this study is the first to characterize the influence of memory on vehicular edge computational workloads, with a strong emphasis on memory dynamics and DNN layers. 
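A per-layer timing measurement of the kind this characterization reports can be sketched with forward hooks; comparing the resulting timings with and without a co-running memory-hungry process would expose the contention effects described above. The model and the layer types instrumented below are illustrative assumptions, not the study's workloads:

```python
# Hypothetical sketch: record per-layer forward-pass time with PyTorch hooks.
# Rerunning under an artificial memory hog would show contention-induced slowdown.
import time
import torch
import torch.nn as nn

def time_layers(model, x):
    """Return {layer_name: seconds} for one forward pass (CPU, synchronous)."""
    starts, durations, handles = {}, {}, []

    def pre_hook(name):
        return lambda module, inp: starts.__setitem__(name, time.perf_counter())

    def post_hook(name):
        return lambda module, inp, out: durations.__setitem__(
            name, time.perf_counter() - starts[name])

    for name, module in model.named_modules():
        if isinstance(module, (nn.Conv2d, nn.ReLU)):   # conv + activation layers
            handles.append(module.register_forward_pre_hook(pre_hook(name)))
            handles.append(module.register_forward_hook(post_hook(name)))
    with torch.no_grad():
        model(x)
    for h in handles:
        h.remove()
    return durations

if __name__ == "__main__":
    net = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
    print(time_layers(net, torch.randn(1, 3, 64, 64)))
```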