

Title: R-CAV: On-Demand Edge Computing Platform for Connected Autonomous Vehicles
Connected Autonomous Vehicles (CAVs) have achieved significant improvements in recent years. CAVs can share sensor data to improve autonomous driving performance and enhance road safety. The CAV architecture depends on roadside edge servers for latency-sensitive applications. These roadside edge servers are equipped with high-performance embedded edge computing devices that perform computations with low power requirements. Because the number of vehicles varies over the course of the day and vehicles can request different CAV applications, the computation requirements of the roadside edge computing platform also vary. Hence, a framework for dynamic deployment of edge computing platforms can ensure both CAV application performance and proper utilization of the devices. In this paper, we propose R-CAV, a framework for drone-based roadside edge server deployment that provisions roadside units (RSUs) based on the computation requirement. Our proof-of-concept implementation of an object detection algorithm on an Nvidia Jetson Nano demonstrates the proposed framework's feasibility. We posit that the framework will advance the intelligent transport system vision by ensuring the quality of service of CAV applications.
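
The core idea, on-demand provisioning of drone-based RSUs as aggregate compute demand changes, can be illustrated with a minimal sketch. The sizing rule, capacity figure, and class names below are assumptions for exposition, not the authors' actual R-CAV algorithm.

```python
# Hypothetical sketch: size a drone-based RSU fleet from current compute demand.
from dataclasses import dataclass
from math import ceil

@dataclass
class AppRequest:
    vehicle_id: str
    app: str          # e.g., "object_detection"
    gflops: float     # estimated compute demand of this request

JETSON_NANO_GFLOPS = 472.0  # rough peak of a Jetson Nano-class edge device (assumed)

def rsus_needed(requests: list[AppRequest], utilization_cap: float = 0.7) -> int:
    """Deploy enough RSUs that aggregate demand stays below the utilization cap."""
    demand = sum(r.gflops for r in requests)
    usable_per_rsu = JETSON_NANO_GFLOPS * utilization_cap
    return ceil(demand / usable_per_rsu) if demand > 0 else 0

if __name__ == "__main__":
    rush_hour = [AppRequest(f"veh{i}", "object_detection", 60.0) for i in range(12)]
    off_peak = rush_hour[:3]
    print("RSUs at rush hour:", rsus_needed(rush_hour))  # fleet scales up
    print("RSUs off-peak:   ", rsus_needed(off_peak))    # fleet scales down
```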
Award ID(s):
1642078
NSF-PAR ID:
10400171
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the IEEE World Forum on the Internet of Things (WF-IOT)
Page Range / eLocation ID:
65 to 70
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Mobile devices such as drones and autonomous vehicles increasingly rely on object detection (OD) through deep neural networks (DNNs) to perform critical tasks such as navigation, target tracking, and surveillance, to name a few. Due to their high complexity, executing these DNNs requires excessive time and energy. Low-complexity object tracking (OT) is thus used alongside OD, where the latter is applied periodically to generate "fresh" references for tracking. However, the frames processed with OD incur large delays, which does not comply with real-time application requirements. Offloading OD to edge servers can mitigate this issue, but existing work focuses on optimizing the offloading process in systems where the wireless channel has very large capacity. Herein, we consider systems with constrained and erratic channel capacity, and establish parallel OT (at the mobile device) and OD (at the edge server) processes that are resilient to large OD latency. We propose Katch-Up, a novel tracking mechanism that improves the system's resilience to excessive OD delay. We show that this technique greatly improves the quality of the reference available to tracking and boosts performance by up to 33%. However, while Katch-Up significantly improves performance, it also increases the computing load of the mobile device. Hence, we design SmartDet, a low-complexity controller based on deep reinforcement learning (DRL) that learns to achieve the right trade-off between resource utilization and OD performance. SmartDet takes as input highly heterogeneous context information related to the current video content and the current network conditions to optimize the frequency and type of OD offloading, as well as Katch-Up utilization. We extensively evaluate SmartDet on a real-world testbed composed of a Jetson Nano as the mobile device and a GTX 980 Ti as the edge server, connected through a Wi-Fi link, to collect several network-related traces as well as energy measurements. We consider a state-of-the-art video dataset (ILSVRC 2015 - VID) and state-of-the-art OD models (EfficientDet 0, 2 and 4). Experimental results show that SmartDet achieves an optimal balance between tracking performance, measured as mean Average Recall (mAR), and resource usage. With respect to a baseline with full Katch-Up usage and maximum channel usage, we still increase mAR by 4% while using 50% less channel and 30% less of the power resources associated with Katch-Up. With respect to a fixed strategy using minimal resources, we increase mAR by 20% while using Katch-Up on only one third of the frames. 
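
The trade-off a SmartDet-style controller optimizes can be sketched as a reward that rewards tracking accuracy and penalizes channel and energy use, with an epsilon-greedy policy over a coarse context. The state fields, action set, and weights below are assumptions for exposition, not the paper's actual formulation.

```python
# Illustrative accuracy-vs-resource trade-off for an OD-offloading controller.
from dataclasses import dataclass
import random

ACTIONS = [
    ("skip_offload", False),           # track locally only
    ("offload_efficientdet0", False),  # light OD model, no Katch-Up
    ("offload_efficientdet4", True),   # heavy OD model, Katch-Up enabled
]

@dataclass
class Context:
    motion: float        # how fast the scene changes (0..1)
    channel_mbps: float  # current uplink capacity
    battery: float       # remaining energy budget (0..1)

def reward(mar: float, channel_used: float, power_used: float,
           alpha: float = 1.0, beta: float = 0.3, gamma: float = 0.3) -> float:
    """Higher mAR is good; channel and power usage are penalized."""
    return alpha * mar - beta * channel_used - gamma * power_used

def choose_action(q_values: dict, ctx: Context, epsilon: float = 0.1):
    """Epsilon-greedy pick over a coarse discretization of the context."""
    key = (round(ctx.motion, 1), ctx.channel_mbps > 5.0, ctx.battery > 0.5)
    if random.random() < epsilon or key not in q_values:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_values[key].get(a[0], 0.0))
```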
  2. Connected autonomous vehicles (CAVs) have fostered the development of intelligent transportation systems that share critical safety information with minimum latency and make driving decisions autonomously. However, the CAV environment is vulnerable to different external and internal attacks. Authorized but malicious entities that provide wrong information pose a challenge in preventing internal attacks. An essential requirement for thwarting internal attacks is to identify the trustworthiness of the vehicles. This paper exploits interaction provenance to propose a trust management framework for CAVs that considers both in-vehicle and vehicular network security incidents, supports flexible security policies, and ensures privacy. The framework contains an interaction provenance recording and trust management protocol that extracts events from interaction provenance and calculates trustworthiness using fuzzy policies based on the events. Simulation results show that the framework is effective and can be integrated with the CAV stack with minimal computation and communication overhead. 
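
A minimal sketch of the trust-scoring step, assuming events have already been extracted from interaction provenance. The event attributes, fuzzy-style penalty rule, and weights are illustrative assumptions, not the framework's actual policies.

```python
# Hypothetical fuzzy-style trust score from provenance-derived security events.
def trust_score(events: list[dict]) -> float:
    """Map security events to a trust value in [0, 1]; 1.0 means fully trusted."""
    score = 1.0
    for e in events:
        severity = e.get("severity", 0.0)  # 0..1 from in-vehicle / network monitors
        recency = e.get("recency", 1.0)    # 1.0 = just happened, decays toward 0
        # "Severe AND recent" incidents cost the most.
        penalty = min(severity, recency) * 0.5 + severity * recency * 0.5
        score -= 0.3 * penalty
    return max(0.0, score)

if __name__ == "__main__":
    benign = [{"severity": 0.1, "recency": 0.2}]
    malicious = [{"severity": 0.9, "recency": 0.9}, {"severity": 0.7, "recency": 0.4}]
    print("benign vehicle trust:   ", round(trust_score(benign), 2))
    print("malicious vehicle trust:", round(trust_score(malicious), 2))
```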
  3. Advanced sensing technologies and communication capabilities of Connected and Autonomous Vehicles (CAVs) empower them to capture the dynamics of surrounding vehicles, including the speeds and positions of vehicles behind them, enabling judicious responsive maneuvers. This acquired dynamics information spurred the development of various cooperative platoon controls, designed in particular to enhance platoon stability with reduced spacing for a reliable increase in roadway capacity. These controls leverage abundant information transmitted through various communication topologies. Despite these advancements, the impact of different vehicle dynamics information on platoon safety remains underexplored, as current research predominantly focuses on stability analysis. This knowledge gap highlights the critical need for further investigation into how diverse vehicle dynamics information influences platoon safety. To address this gap, this research introduces a novel framework based on the concept of phase shift, aiming to scrutinize the trade-offs between the safety and stability of CAV platoons formed upon a bidirectional information flow topology. Our investigation focuses on platoon controls built upon bidirectional information flow topologies that use diverse vehicle dynamics information. Our findings emphasize that integrating various types of information into CAV platoon controls does not universally yield benefits. Specifically, incorporating spacing information can enhance both platoon safety and string stability. In contrast, velocity difference information can improve either safety or string stability, but not both simultaneously. These findings offer valuable insights into the formulation of CAV platoon control principles built upon diverse communication topologies. This research contributes a nuanced understanding of the intricate interplay between safety and stability in CAV platoons, emphasizing the importance of information dynamics in shaping effective control strategies.
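
For context, a generic bidirectional platoon control law of the kind analyzed above feeds back spacing and velocity-difference errors with respect to both the predecessor and the follower. The gains, desired gap, and initial conditions below are illustrative assumptions, not values or results from the paper.

```python
# Generic bidirectional feedback control law for a CAV platoon (illustrative).
import numpy as np

def bidirectional_accel(x: np.ndarray, v: np.ndarray, i: int,
                        gap: float = 10.0, k_s: float = 0.5, k_v: float = 0.8) -> float:
    """Commanded acceleration for vehicle i (index 0 is the leader)."""
    front_spacing_err = (x[i - 1] - x[i]) - gap                    # too close -> negative
    rear_spacing_err = (x[i] - x[i + 1]) - gap if i + 1 < len(x) else 0.0
    front_vel_err = v[i - 1] - v[i]
    rear_vel_err = v[i] - v[i + 1] if i + 1 < len(v) else 0.0
    # Bidirectional feedback: both the predecessor's and the follower's states
    # shape the response, unlike predecessor-only topologies.
    return k_s * (front_spacing_err - rear_spacing_err) + k_v * (front_vel_err - rear_vel_err)

if __name__ == "__main__":
    x = np.array([100.0, 89.0, 80.0, 70.0])  # positions (m), leader first
    v = np.array([20.0, 20.5, 19.0, 20.0])   # speeds (m/s)
    for i in range(1, len(x)):
        print(f"vehicle {i}: accel = {bidirectional_accel(x, v, i):+.2f} m/s^2")
```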

  4. Recently, with the advent of the Internet of Everything and 5G networks, the amount of data generated by edge scenarios such as autonomous vehicles, smart industry, 4K/8K video, virtual reality (VR), and augmented reality (AR) has exploded. These trends bring real-time, hardware-dependence, low-power-consumption, and security requirements to edge facilities and have rapidly popularized edge computing. Meanwhile, artificial intelligence (AI) workloads have dramatically shifted the computing paradigm from cloud services to mobile applications. Unlike the wide deployment and thorough study of AI on cloud and mobile platforms, the performance of AI workloads and their resource impact at the edge are not yet well understood; an in-depth analysis and comparison of their advantages, limitations, performance, and resource consumption in an edge environment is still lacking. In this paper, we perform a comprehensive study of representative AI workloads on edge platforms. We first summarize modern edge hardware and popular AI workloads. Then we quantitatively evaluate three categories (i.e., classification, image-to-image, and segmentation) of the most popular and widely used AI applications in realistic edge environments based on the Raspberry Pi, Nvidia TX2, and other boards. We find that the interaction between hardware and neural network models incurs non-negligible impact and overhead on AI workloads at the edge. Our experiments show that performance variation and differences in resource footprint limit the suitability of certain workloads and algorithms for edge platforms, and users need to select the appropriate workload, model, and algorithm based on the requirements and characteristics of the edge environment. 
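
A small latency-benchmarking harness in the spirit of such a study might look like the sketch below. Model loading is left abstract so any classification, image-to-image, or segmentation model can be plugged in; the warm-up count, run count, and stand-in model are assumptions for exposition.

```python
# Illustrative harness: measure per-inference wall-clock latency on an edge board.
import time
import statistics
from typing import Any, Callable

def benchmark(model: Callable[[Any], Any], sample: Any,
              warmup: int = 5, runs: int = 50) -> dict:
    """Run warm-up iterations, then collect per-inference latency statistics."""
    for _ in range(warmup):
        model(sample)
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        model(sample)
        latencies.append((time.perf_counter() - start) * 1000.0)  # milliseconds
    return {
        "p50_ms": statistics.median(latencies),
        "p95_ms": sorted(latencies)[int(0.95 * len(latencies)) - 1],
        "mean_ms": statistics.fmean(latencies),
    }

if __name__ == "__main__":
    # Stand-in "model": replace with a real DNN on a Raspberry Pi / Nvidia TX2.
    fake_model = lambda x: sum(i * i for i in range(50_000))
    print(benchmark(fake_model, None))
```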
  5. A new breed of applications, such as autonomous driving, and their need for computation-aided quick decision making have motivated the delegation of compute-intensive services (e.g., video analytics) to more powerful surrogate machines at the network edge, known as edge computing (EC). Recently, the notion of pervasive edge computing (PEC) has emerged, in which users' devices can join the pool of computing resources that perform edge computing. Including users' devices increases the computing capability at the edge (adding to the infrastructure servers), but compared with conventional edge ecosystems, it also introduces new challenges, such as service orchestration (i.e., service placement, discovery, and migration). We propose uDiscover, a novel user-driven service discovery and utilization framework for the PEC ecosystem. In designing uDiscover, we considered the Named-Data Networking architecture for balancing users' workloads and reducing user-perceived latency. We propose proactive and reactive service discovery approaches and assess their performance in PEC and infrastructure-only ecosystems. Our simulation results show that (i) the PEC ecosystem reduces user-perceived delays by up to 70%, and (ii) uDiscover selects the most suitable server, with "accurate" delay estimates of less than 10% error, to execute any given task. 
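
The server-selection step described above can be sketched as picking the node (infrastructure server or nearby user device) with the smallest estimated completion delay. The field names, delay model, and numbers below are illustrative assumptions, not uDiscover's actual protocol.

```python
# Illustrative delay-aware selection among candidate PEC compute nodes.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    rtt_ms: float    # measured or advertised network round-trip time
    gflops: float    # advertised compute capability
    queue_ms: float  # current queuing backlog estimate

def estimated_delay(node: Node, task_gflop: float) -> float:
    """Network RTT + queuing delay + compute time, all in milliseconds."""
    compute_ms = task_gflop / node.gflops * 1000.0
    return node.rtt_ms + node.queue_ms + compute_ms

def select_server(nodes: list[Node], task_gflop: float) -> Node:
    """Pick the candidate with the lowest estimated completion delay."""
    return min(nodes, key=lambda n: estimated_delay(n, task_gflop))

if __name__ == "__main__":
    candidates = [
        Node("edge-server", rtt_ms=12.0, gflops=4000.0, queue_ms=30.0),
        Node("peer-device", rtt_ms=4.0, gflops=500.0, queue_ms=5.0),
    ]
    best = select_server(candidates, task_gflop=50.0)
    print("selected:", best.name)
```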