Autonomous systems like aircraft and assistive robots often operate in scenarios where guaranteeing safety is critical. Methods like Hamilton-Jacobi reachability can provide guaranteed safe sets and controllers for such systems. However, these same scenarios often involve unknown or uncertain environments, system dynamics, or predictions of other agents. As the system operates, it may acquire new knowledge about these uncertainties and should update its safety analysis accordingly. Unfortunately, prior work on learning and updating safety analyses is limited to small systems of roughly two dimensions due to the computational complexity of the analysis. In this paper, we synthesize several techniques to speed up computation: decomposition, warm-starting, and adaptive grids. Using this new framework, we can update safe sets one or more orders of magnitude faster than prior work, making the technique practical for many realistic systems. We demonstrate our results on simulated 2D and 10D near-hover quadcopters operating in a windy environment.
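To make the warm-starting idea concrete, below is a minimal, self-contained sketch (ours, not the paper's solver): a 1D semi-Lagrangian Hamilton-Jacobi "avoid" computation in which, after the disturbance (wind) bound changes, the re-solve is seeded with the previously converged value function rather than the raw signed distance function. The dynamics, bounds, and grid are illustrative assumptions.

import numpy as np

# Toy dynamics: xdot = 0.5*x + u + d, control |u| <= U, disturbance |d| <= D.
xs = np.linspace(-3.0, 3.0, 301)     # state grid
l = np.abs(xs) - 0.5                 # signed distance; failure set is |x| < 0.5
U, DT, TOL = 0.3, 0.02, 1e-6

def solve_safe_set(V0, D, max_iters=10_000):
    """Semi-Lagrangian iteration V <- min(l, max_u min_d V(x + DT*f(x,u,d)))
    until convergence; {x : V(x) >= 0} is the guaranteed-safe set."""
    V = V0.copy()
    for k in range(max_iters):
        best = np.full_like(V, -np.inf)
        for u in (-U, U):                      # control tries to stay safe
            worst = np.full_like(V, np.inf)
            for d in (-D, D):                  # worst-case disturbance opposes
                f = 0.5 * xs + u + d
                worst = np.minimum(worst, np.interp(xs + DT * f, xs, V))
            best = np.maximum(best, worst)
        V_next = np.minimum(l, best)
        if np.max(np.abs(V_next - V)) < TOL:
            return V_next, k
        V = V_next
    return V, max_iters

V0, n0 = solve_safe_set(l, D=0.6)     # initial solve, cold-started from l
Vw, nw = solve_safe_set(V0, D=0.8)    # wind bound grows: warm start from V0
Vc, nc = solve_safe_set(l, D=0.8)     # cold re-solve, for comparison
print(f"warm start: {nw} iterations vs cold start: {nc}")

Because the update operator is monotone and the old solution lies between the cold-start initialization and the new fixed point, the warm start can only reduce the iteration count in this toy.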
Egocentric abstractions for modeling and safety verification of distributed cyber-physical systems
Modeling is a significant piece of the puzzle in achieving safety certificates for distributed IoT and cyber-physical systems. From smart home devices to connected and autonomous vehicles, several modeling challenges, such as dynamic membership of participants and complex interaction patterns, span application domains. Modeling multiple interacting vehicles can become unwieldy and impractical as vehicles change relative positions and lanes. In this paper, we present an egocentric abstraction for succinctly modeling local interactions among an arbitrary number of agents around an ego agent. These models abstract away the detailed behavior of the other agents and ignore agents that are present but physically distant. We show that this approach can capture interesting scenarios considered in the Responsibility-Sensitive Safety (RSS) framework for autonomous vehicles. As an illustration of how the framework can be useful for analysis, we prove the safety of several highway driving scenarios using egocentric models. The proof technique also brings to the forefront the power of a classical verification approach, namely inductive invariant assertions. We discuss possible generalizations of the analysis to other scenarios and applications.
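As a toy illustration of the inductive-invariant technique in a hypothetical egocentric model (ours, not the paper's construction): the ego tracks only its own speed v and the gap g to the nearest lead agent, which is abstracted as "may stop instantly," and we check that a candidate safety invariant is preserved by a worst-case braking step.

# Hypothetical egocentric model: ego speed v, gap g to the nearest lead agent.
B, DT, G_MIN = 4.0, 0.1, 1.0          # brake decel (m/s^2), step (s), buffer (m)

def inv(g, v):
    # Candidate invariant: the braking distance always fits inside the gap.
    return g >= v * v / (2 * B) + G_MIN

def step(g, v):
    # One braking step, integrated exactly for constant deceleration, with the
    # abstracted lead agent already stopped (the worst case for the ego).
    if v <= B * DT:                   # ego stops within this step
        return g - v * v / (2 * B), 0.0
    v_next = v - B * DT
    return g - (v + v_next) / 2 * DT, v_next

# Inductiveness check on the invariant's boundary, its tightest states:
for v in (0.05 * k for k in range(401)):           # speeds 0..20 m/s
    g = v * v / (2 * B) + G_MIN                    # state exactly on the boundary
    g2, v2 = step(g, v)
    assert g2 + 1e-9 >= v2 * v2 / (2 * B) + G_MIN  # invariant preserved
print("candidate invariant is inductive on all sampled boundary states")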
- Award ID(s):
- 1918531
- PAR ID:
- 10298749
- Date Published:
- Journal Name:
- 2021 IEEE Security and Privacy Workshops (SPW)
- Page Range / eLocation ID:
- 268 to 276
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Driving safety is a top priority for autonomous vehicles. Orthogonal to prior work that handles accident-prone traffic events through algorithm design at the policy level, we investigate a Closed-loop Adversarial Training (CAT) framework for safe end-to-end driving in this paper through the lens of environment augmentation. CAT aims to continuously improve the safety of driving agents by training the agent on safety-critical scenarios that are dynamically generated over time. A novel resampling technique is developed to turn log-replay real-world driving scenarios into safety-critical ones via probabilistic factorization, where adversarial traffic generation is modeled as a product of standard motion-prediction sub-problems. Consequently, CAT can launch more efficient physical attacks than existing safety-critical scenario generation methods and incurs significantly lower computational cost in the iterative learning pipeline. We incorporate CAT into the MetaDrive simulator and validate our approach on hundreds of driving scenarios imported from real-world driving datasets. Experimental results demonstrate that CAT can effectively generate adversarial scenarios countering the agent being trained. After training, the agent can achieve superior driving safety in both log-replay and safety-critical traffic scenarios on the held-out test set. Code and data are available at https://metadriverse.github.io/cat.
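One plausible reading of the factorization described above (our notation; the paper's exact formulation may differ): if p(τ | s) is a standard motion-prediction prior over an adversary's trajectory τ in scene s, the safety-critical distribution is

\[
q(\tau \mid s) \;\propto\; p(\tau \mid s) \cdot \Pr\big[\operatorname{collision}(\tau, \tau_{\text{ego}})\big],
\]

so generating adversarial traffic reduces to resampling ordinary motion-prediction rollouts in proportion to their collision likelihood against the ego's planned trajectory.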
-
This paper addresses the challenges of computational accountability in autonomous systems, particularly in Autonomous Vehicles (AVs), where safety and efficiency often conflict. We begin by examining current approaches such as cost minimization, reward maximization, human-centered approaches, and ethical frameworks, noting their limitations in addressing these challenges. Foreseeability is a central concept in tort law that limits the accountability and legal liability of an actor to a reasonable scope. Yet current data-driven methods to determine foreseeability are rigid, ignore uncertainty, and depend on simulation data. In this work, we advocate for a new computational approach to establish the foreseeability of autonomous systems based on the legal "BPL" formula. We present open research challenges, using fully autonomous vehicles as a motivating example, and call for researchers to help autonomous systems make accountable decisions in safety-critical scenarios.
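For context, the "BPL" formula is Judge Learned Hand's negligence test from United States v. Carroll Towing (1947): an actor is negligent in forgoing a precaution when

\[
B < P \cdot L,
\]

where B is the burden (cost) of the precaution, P the probability of the harm it would avert, and L the magnitude of that harm. As a worked example with numbers of our own choosing, forgoing a $100-per-vehicle sensor upgrade that averts a 0.001 chance of a $500,000 loss fails the test, since $100 < 0.001 × $500,000 = $500.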
-
We present an implementation of a formally verified safety fallback controller for improved collision avoidance in an autonomous vehicle research platform. Our approach uses a primary trajectory-planning system that aims for collision-free navigation in the presence of pedestrians and other vehicles, and a fallback controller that guards its behavior. The safety fallback controller excludes the possibility of collisions by accounting for nondeterministic uncertainty in the dynamics of the vehicle and moving obstacles, and it takes over from the primary controller as necessary. We demonstrate the system in an experimental setup that includes simulations and real-world tests with a 1/5-scale vehicle. In stress-inducing simulation scenarios, the safety fallback controller significantly reduces the number of collisions.
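A self-contained toy of this guarded architecture (ours; the paper's fallback controller is formally verified and far more involved): a monitor accepts the primary command only if, after one step under that command, worst-case braking still stops the vehicle short of the obstacle; otherwise the fallback brake takes over. The 1D model and all constants are illustrative.

# Guarded control: primary speed tracker + verified-style braking fallback.
B, A_MAX, DT, MARGIN = 6.0, 2.0, 0.1, 0.5   # decel/accel (m/s^2), step (s), buffer (m)

def safe_after(x, v, a, x_obs):
    # Predict one step under command a, then assume hard braking at rate B.
    v1 = max(v + a * DT, 0.0)
    x1 = x + (v + v1) / 2 * DT
    return x1 + v1 * v1 / (2 * B) < x_obs - MARGIN

def control(x, v, x_obs, v_des=15.0):
    a = min(A_MAX, v_des - v)               # naive primary: track desired speed
    if safe_after(x, v, a, x_obs):
        return a                            # primary command certified safe
    return -B                               # fallback takes over: hard brake

x, v = 0.0, 10.0                            # approach a stopped obstacle at 100 m
for _ in range(200):
    a = control(x, v, x_obs=100.0)
    v1 = max(v + a * DT, 0.0)
    x, v = x + (v + v1) / 2 * DT, v1
assert x < 100.0
print(f"stopped at x = {x:.2f} m (obstacle at 100 m)")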
-
Connected Autonomous Vehicles (CAVs) are expected to enable reliable, efficient, and intelligent transportation systems. Most motion planning algorithms for multi-agent systems implicitly assume that all vehicles/agents will execute the expected plan with small error and evaluate their safety constraints based on this assumption. This assumption, however, is hard to uphold for CAVs, since they may have to change their plan (e.g., to yield to another vehicle) or be forced to stop (e.g., a CAV may break down). While it is desirable that a CAV never be involved in an accident, it may be hit by other vehicles, and sometimes preventing the accident is impossible (e.g., getting hit from behind while waiting at a red light). Responsibility-Sensitive Safety (RSS) is a set of safety rules that defines the CAV's objective in terms of blame rather than absolute safety. Thus, instead of requiring a CAV algorithm that avoids every accident, RSS ensures that the ego vehicle will not be blamed for any accident it is a part of. The original RSS rules, however, are hard to evaluate for merge, intersection, and unstructured-road scenarios, and they do not prevent deadlock situations among vehicles. In this paper, we propose a new formulation of the RSS rules that can be applied to any driving scenario. We integrate the proposed RSS rules with the CAV's motion planning algorithm to enable cooperative driving of CAVs. We use Control Barrier Functions to enforce safety constraints and compute the energy-optimal trajectory for the ego CAV. Finally, to ensure liveness, our approach detects and resolves deadlocks in a decentralized manner. We conduct experiments to verify that the ego CAV does not cause an accident no matter when other CAVs slow down or stop. We also showcase our deadlock detection and resolution mechanism using our simulator. Finally, we compare the average velocity and fuel consumption of vehicles when they drive autonomously against the case in which they are both autonomous and connected.
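For reference, our summary of the standard constructions this abstract builds on (the paper's exact formulation may differ): the original RSS safe longitudinal distance between a rear vehicle at speed v_r and a front vehicle at speed v_f, with response time ρ, is

\[
d_{\min} = \left[\, v_r\,\rho + \tfrac{1}{2}\, a_{\max}\rho^2 + \frac{(v_r + \rho\, a_{\max})^2}{2\, b_{\min}} - \frac{v_f^2}{2\, b_{\max}} \,\right]_{+},
\]

where a_max bounds the rear vehicle's acceleration during its response time, b_min is its minimum braking rate, and b_max is the front vehicle's maximum braking rate. A Control Barrier Function h(x) = d(x) − d_min(x) can then enforce this constraint inside the energy-minimizing program via the condition

\[
\dot h(x, u) \;\ge\; -\alpha\big(h(x)\big),
\]

which is affine in the control u for control-affine dynamics, with α a class-K function.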