- Award ID(s):
- 2107020
- PAR ID:
- 10338336
- Date Published:
- Journal Name:
- 2022 2nd Workshop on Data-Driven and Intelligent Cyber-Physical Systems for Smart Cities Workshop (DI-CPS)
- Page Range / eLocation ID:
- 7 to 13
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
A critical aspect of autonomous vehicles (AVs) is the object detection stage, which is increasingly performed with sensor fusion models: multimodal 3D object detection models that use both 2D RGB image data and 3D data from a LIDAR sensor as inputs. In this work, we perform the first study analyzing the robustness of a high-performance, open-source sensor fusion model architecture against adversarial attacks, and we challenge the popular belief that the use of additional sensors automatically mitigates the risk of adversarial attacks. We find that, despite the use of a LIDAR sensor, the model is vulnerable to our purposefully crafted image-based adversarial attacks, including disappearance, universal patch, and spoofing attacks. After identifying the underlying cause, we explore some potential defenses and provide recommendations for improved sensor fusion models.
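The image-based attacks described above build on gradient-based perturbation of the camera input. As a hedged illustration only (the abstract does not disclose the actual model or attack code), the following sketch shows an FGSM-style "disappearance" step against a toy linear detection score; all names (`w`, `eps`, the score function) are assumptions for illustration:

```python
import numpy as np

# Illustrative FGSM-style disappearance attack against a toy linear
# detector score s(x) = w @ x. The perturbation steps AGAINST the
# gradient's sign to push the detection score down, within an
# L-infinity ball of radius eps.
def fgsm_disappearance(x, w, eps):
    grad = w                          # d s / d x for a linear score
    x_adv = x - eps * np.sign(grad)   # step to lower the score
    return np.clip(x_adv, 0.0, 1.0)   # stay in the valid pixel range

rng = np.random.default_rng(0)
x = rng.uniform(0.2, 0.8, size=16)    # toy "image" flattened to a vector
w = rng.normal(size=16)               # toy detector weights
x_adv = fgsm_disappearance(x, w, eps=0.05)
print(w @ x_adv < w @ x)              # True: detection score decreased
```

On a real fusion model the gradient would come from backpropagation through the network, and the open question studied in the paper is whether the LIDAR branch compensates for such image perturbations.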
-
The operational safety of Automated Driving System (ADS)-Operated Vehicles (AVs) is a rising concern as AVs are deployed both as test prototypes and in commercial service. The robustness of safety evaluation systems is essential in determining the operational safety of AVs as they interact with human-driven vehicles. Extending earlier work by the Institute of Automated Mobility (IAM) on Operational Safety Assessment (OSA) metrics and infrastructure-based safety monitoring systems, we compare the performance of an infrastructure-based Light Detection And Ranging (LIDAR) system to an onboard vehicle-based LIDAR system in testing at the Maricopa County Department of Transportation SMARTDrive testbed in Anthem, Arizona. The sensor modalities, located both in the infrastructure and onboard the test vehicles, include LIDAR, cameras, a real-time differential GPS, and a drone with a camera. Bespoke localization and tracking algorithms were created for the LIDAR and cameras. In total, 26 different scenarios of the test vehicles navigating the testbed intersection were recorded; in this work, we consider only the car-following scenarios. The LIDAR data collected from the infrastructure-based and onboard vehicle-based sensor systems are used to perform object detection and multi-target tracking, estimating the velocity and position of the test vehicles, and these estimates are used to compute OSA metrics. The comparison of the two systems covers the localization and tracking errors in the estimated position and velocity of the subject vehicle, with the real-time differential GPS data serving as ground truth for the velocity comparison and the drone's tracking results as ground truth for the OSA metrics comparison.
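For a car-following scenario, one widely used OSA-style metric that can be computed from the tracker's position and velocity estimates is time-to-collision (TTC). The sketch below is illustrative only; the function name and values are assumptions, not the IAM's actual OSA implementation:

```python
# Time-to-collision for a car-following scenario: the gap between the
# vehicles divided by the closing speed. When the gap is not closing
# (follower no faster than the lead vehicle), TTC is infinite.
def time_to_collision(gap_m, lead_v_mps, follow_v_mps):
    closing = follow_v_mps - lead_v_mps   # relative (closing) speed, m/s
    if closing <= 0.0:
        return float("inf")
    return gap_m / closing

# Follower at 15 m/s closing on a lead vehicle at 12 m/s, 30 m ahead:
print(time_to_collision(30.0, 12.0, 15.0))  # 10.0 seconds
```

Because the metric divides by small closing speeds, errors in the tracker's velocity estimates propagate strongly into the metric, which is why the paper benchmarks tracking accuracy against differential-GPS ground truth.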
-
Objective This study examines the extent to which cybersecurity attacks on autonomous vehicles (AVs) affect human trust dynamics and driver behavior.
Background Human trust is critical for the adoption and continued use of AVs. A pressing concern in this context is the persistent threat of cyberattacks, which endanger the secure operation of AVs and, consequently, human trust.
Method A driving simulator experiment was conducted with 40 participants who were randomly assigned to one of two groups: (1) Experience and Feedback and (2) Experience-Only. All participants experienced three drives: Baseline, Attack, and Post-Attack. The Attack drive prevented participants from properly operating the vehicle on multiple occasions. Only the "Experience and Feedback" group received a security update in the Post-Attack drive describing the mitigation of the vehicle's vulnerability. Trust and foot positions were recorded for each drive.
Results Findings suggest that attacks on AVs significantly degrade human trust, which remains degraded even after an error-free drive. Providing an update about the mitigation of the vulnerability did not significantly affect trust repair.
Conclusion Trust toward AVs should be analyzed as an emergent and dynamic construct that requires autonomous systems capable of calibrating trust after malicious attacks through appropriate experience and interaction design.
Application The results of this study can be applied when building driver and situation-adaptive AI systems within AVs.
-
As we add more autonomous and semi-autonomous vehicles (AVs) to our roads, their effects on passenger and pedestrian safety are becoming more important. Despite extensive testing before deployment, AV systems are not perfect at identifying hazards in the roadway. Although a particular AV’s sensors and software may not be 100% accurate at identifying hazards, there is an untapped pool of information held by other AVs in the vicinity that could be used to quickly and accurately identify roadway hazards before they present a safety threat.
-
This paper addresses the challenges of computational accountability in autonomous systems, particularly in Autonomous Vehicles (AVs), where safety and efficiency often conflict. We begin by examining current approaches such as cost minimization, reward maximization, human-centered approaches, and ethical frameworks, noting their limitations in addressing these challenges. Foreseeability is a central concept in tort law that limits the accountability and legal liability of an actor to a reasonable scope. Yet current data-driven methods to determine foreseeability are rigid, ignore uncertainty, and depend on simulation data. In this work, we advocate for a new computational approach to establishing the foreseeability of autonomous systems based on the legal “BPL” formula. We present open research challenges, using fully autonomous vehicles as a motivating example, and call on researchers to help autonomous systems make accountable decisions in safety-critical scenarios.
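The "BPL" formula referenced above is the Learned Hand formula from tort law: an actor breaches a duty of care when the burden of precaution B is less than the expected loss, i.e. B < P × L, where P is the probability of harm and L is its magnitude. A minimal sketch of this check, with purely illustrative names and values (the paper's actual computational formulation is not given in the abstract):

```python
# Learned Hand ("BPL") negligence test: duty is breached when the
# burden of taking a precaution is less than the expected harm it
# would prevent, B < P * L.
def breaches_duty(burden, probability, loss):
    return burden < probability * loss

# A cheap precaution (B = 100) against a 1% chance of a 50,000 loss:
print(breaches_duty(100.0, 0.01, 50_000.0))  # True: B = 100 < P*L = 500
```

The paper's argument is that applying such a test computationally is hard precisely because B, P, and L are uncertain at decision time, which is where the proposed foreseeability research agenda comes in.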