Title: Inter-triggering hybrid automata: a formalism for responsibility-sensitive safety
This paper introduces inter-triggering hybrid automata, a formalism for multi-agent systems in which each agent is modeled as a hybrid automaton and agents interact by triggering discrete transitions (jumps and resets) on their "neighboring" agents. Using this formalism, we define responsibility-sensitive safety as respecting one another's invariances while triggering jumps and resets. This allows us to make a formal connection between responsibility and robust controlled invariant sets for individual agents, leading to a compositional verification framework for the safety of the overall multi-agent system. We discuss several advantages of this viewpoint and illustrate it on a highway driving example.
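To make the triggering-and-responsibility idea concrete, here is a minimal Python sketch, assuming box-shaped invariant sets and toy single-integrator dynamics; the names Agent, flow, reset, and responsible_trigger are illustrative, not the paper's formal constructions.

```python
# A minimal sketch of the triggering-and-responsibility idea from the abstract
# above, assuming box-shaped invariant sets and toy single-integrator dynamics.
from dataclasses import dataclass

@dataclass
class Agent:
    """One agent: a continuous state plus a robust controlled invariant set,
    approximated here as the interval [lo, hi]."""
    x: float
    lo: float
    hi: float

    def in_invariant(self, x=None):
        x = self.x if x is None else x
        return self.lo <= x <= self.hi

    def flow(self, u, dt=0.1):
        # Individual continuous dynamics (toy single integrator).
        self.x += u * dt

    def reset(self, delta):
        # Discrete jump/reset triggered by a neighboring agent.
        self.x += delta

def responsible_trigger(src, dst, delta):
    """Responsibility-sensitive triggering: src may trigger a jump on dst only
    if dst's post-reset state stays inside dst's invariant set."""
    if dst.in_invariant(dst.x + delta):
        dst.reset(delta)
        return True
    return False        # trigger withheld: it would violate dst's invariance

a = Agent(x=0.0, lo=-1.0, hi=1.0)
b = Agent(x=0.5, lo=0.0, hi=2.0)
print(responsible_trigger(a, b, delta=1.0), b.x)   # True 1.5  (b stays in [0, 2])
print(responsible_trigger(a, b, delta=1.0), b.x)   # False 1.5 (would leave [0, 2])
```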
Award ID(s): 1918123
PAR ID: 10195588
Author(s) / Creator(s):
Date Published:
Journal Name: 23rd International Conference on Hybrid Systems: Computation and Control (HSCC 2020)
Page Range / eLocation ID: 1 to 2
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. In this paper, we present a compositional condition for ensuring the safety of a collection of interacting systems modeled by inter-triggering hybrid automata (ITHA). ITHA is a modeling formalism for representing multi-agent systems in which each agent is governed by individual dynamics but can also interact with other agents through triggering actions. These triggering actions result in a jump/reset in the state of other agents according to a global resolution function. A sufficient condition for safety of the collection, inspired by responsibility-sensitive safety, is developed in two parts: self-safety, relating to the individual dynamics, and responsibility, relating to the triggering actions. The condition relies on having an over-approximation method for the resolution function. We further show how such over-approximations can be obtained and improved via communication. We use two examples, a job scheduling task on parallel processors and a highway driving scenario, throughout the paper to illustrate the concepts. Finally, we provide a comprehensive evaluation of how the proposed condition can be leveraged for several multi-agent control and supervision examples.
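The following is a minimal sketch of the two-part condition described in item 1 above, assuming interval invariant sets, a single scalar worst-case reset magnitude as the over-approximation of the resolution function, and no continuous dynamics; the names self_safe, responsible, and collection_safe and the dictionary fields are illustrative assumptions, not the paper's constructions.

```python
def self_safe(x, lo, hi, incoming_bound):
    """Self-safety: the state sits in the invariant set [lo, hi] shrunk by the
    worst-case jump that neighbors may trigger, so any such jump still lands
    inside the invariant set."""
    return lo + incoming_bound <= x <= hi - incoming_bound

def responsible(outgoing_reset, promised_bound):
    """Responsibility: the agent never triggers a reset larger than the bound
    it announced, which is what its neighbors assumed when shrinking."""
    return outgoing_reset <= promised_bound

def collection_safe(agents):
    """Compositional condition: if every agent is self-safe and responsible,
    the whole collection is safe."""
    return all(self_safe(a["x"], a["lo"], a["hi"], a["in_bound"]) and
               responsible(a["out_reset"], a["out_bound"]) for a in agents)

# Two agents exchanging resets of magnitude at most 0.5.
agents = [
    {"x": 0.0, "lo": -2.0, "hi": 2.0, "in_bound": 0.5, "out_reset": 0.4, "out_bound": 0.5},
    {"x": 1.0, "lo": 0.0, "hi": 3.0, "in_bound": 0.5, "out_reset": 0.5, "out_bound": 0.5},
]
print(collection_safe(agents))   # True
```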
  2. Modeling is a significant piece of the puzzle in achieving safety certificates for distributed IoT and cyber-physical systems. From smart home devices to connected and autonomous vehicles, several modeling challenges, such as dynamic membership of participants and complex interaction patterns, span application domains. Modeling multiple interacting vehicles can become unwieldy and impractical as vehicles change relative positions and lanes. In this paper, we present an egocentric abstraction for succinctly modeling local interactions among an arbitrary number of agents around an ego agent. These models abstract away the detailed behavior of the other agents and ignore present but physically distant agents. We show that this approach can capture interesting scenarios considered in the responsibility-sensitive safety (RSS) framework for autonomous vehicles. As an illustration of how the framework can be useful for analysis, we prove safety of several highway driving scenarios using egocentric models. The proof technique also brings to the forefront the power of a classical verification approach, namely inductive invariant assertions. We discuss possible generalizations of the analysis to other scenarios and applications.
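As a toy illustration of the inductive-invariant style of argument mentioned in item 2, the sketch below checks, on sampled states, that a simple worst-case-braking invariant for one ego vehicle following one lead vehicle is preserved over a discrete step; the invariant, the braking model, and all constants are simplified assumptions, not the paper's egocentric models or the exact RSS rules.

```python
def safe_gap(v_ego, v_front, brake=6.0, react=0.5):
    """Gap needed so the ego can stop without contact even if the lead
    vehicle brakes as hard as possible."""
    ego_stop = v_ego * react + v_ego ** 2 / (2 * brake)
    front_stop = v_front ** 2 / (2 * brake)
    return max(ego_stop - front_stop, 0.0)

def invariant_holds(gap, v_ego, v_front):
    return gap >= safe_gap(v_ego, v_front)

def step_worst_case(gap, v_ego, v_front, brake=6.0, dt=0.1):
    """One discrete step in which both vehicles brake at the maximum rate."""
    gap_next = gap + (v_front - v_ego) * dt
    v_ego_next = max(v_ego - brake * dt, 0.0)
    v_front_next = max(v_front - brake * dt, 0.0)
    return gap_next, v_ego_next, v_front_next

# Inductive step, checked numerically on a grid of states: if the invariant
# holds now, it still holds after one worst-case step.
ok = True
for v_e in range(0, 31, 5):
    for v_f in range(0, 31, 5):
        gap = safe_gap(v_e, v_f) + 1.0          # a state satisfying the invariant
        if invariant_holds(gap, v_e, v_f):      # inductive hypothesis
            ok &= invariant_holds(*step_worst_case(gap, v_e, v_f))
print("inductive step holds on sampled states:", ok)
```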
  3. Multi-Agent Reinforcement Learning can be used to learn solutions for a wide variety of tasks, but there are few safety guarantees about the policies that the agents learn. My research addresses the challenge of ensuring safety in communication-free multi-agent environments, using shielding as the primary tool. We introduce methods to completely prevent safety violations in domains for which a model is available, in both fully observable and partially observable environments. We present ongoing research on maximizing safety in environments for which no model is available, utilizing a centralized training, decentralized execution framework, and discuss future lines of research. 
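Below is a minimal sketch of model-based action shielding of the kind described in item 3, assuming a toy grid world in which the safety property is simply that no two agents occupy the same cell; MOVES, shield, and the fallback-to-"stay" rule are illustrative assumptions, not the thesis's domains or shield construction.

```python
MOVES = {"stay": (0, 0), "up": (0, 1), "down": (0, -1),
         "left": (-1, 0), "right": (1, 0)}

def shield(positions, proposed):
    """Filter proposed actions agent by agent; if an action would land on an
    earlier agent's next cell or a later agent's current cell, fall back to
    'stay' so no two agents can end up in the same cell."""
    next_cells, safe_actions = list(positions), []
    for i, action in enumerate(proposed):
        dx, dy = MOVES[action]
        target = (positions[i][0] + dx, positions[i][1] + dy)
        if target in next_cells[:i] + next_cells[i + 1:]:
            action, target = "stay", positions[i]   # veto the unsafe action
        next_cells[i] = target
        safe_actions.append(action)
    return safe_actions

# Two agents heading into the same cell: the second agent's action is vetoed.
print(shield([(0, 0), (2, 0)], ["right", "left"]))   # ['right', 'stay']
```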
  4. Cooperatively avoiding collisions is a critical functionality for robots navigating in dense human crowds; failure to do so could lead to either overaggressive or overcautious behavior. A necessary condition for cooperative collision avoidance is to couple the prediction of the agents' trajectories with the planning of the robot's trajectory. However, it is unclear whether trajectory-based cooperative collision avoidance captures the correct agent attributes. In this work we migrate from trajectory-based coupling to a formalism that couples agent preference distributions. In particular, we show that preference distributions (probability density functions representing agents' intentions) can capture higher-order statistics of agent behaviors, such as willingness to cooperate. Thus, coupling in distribution space exploits more information about inter-agent cooperation than coupling in trajectory space. We introduce a general objective for coupled prediction and planning in distribution space and propose an iterative best response optimization method based on variational analysis with guaranteed sufficient decrease. Based on this analysis, we develop a sampling-based motion planning framework called DistNav that runs in real time on a laptop CPU. We evaluate our approach on challenging scenarios from both real-world datasets and simulation environments, and benchmark against a wide variety of model-based and machine-learning-based approaches. The safety and efficiency statistics of our approach outperform all other models. Finally, we find that DistNav is competitive with human safety and efficiency performance.
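The sketch below shows what "coupling in distribution space" from item 4 can look like in the simplest setting: two agents, three candidate paths each, an intrinsic cost per path, a pairwise collision-cost matrix, and a smoothed (softmax) best-response update. These choices are illustrative assumptions, not the DistNav algorithm or its variational analysis.

```python
import numpy as np

pref_A = np.array([0.2, 0.5, 1.0])      # agent A's intrinsic path costs
pref_B = np.array([1.0, 0.4, 0.3])      # agent B's intrinsic path costs
collision = 5.0 * np.eye(3)             # high cost if both pick the same path

def smoothed_best_response(cost, temp=0.5):
    # Entropy-regularized (softmax) best response to the expected cost.
    w = np.exp(-(cost - cost.min()) / temp)
    return w / w.sum()

p_A = np.full(3, 1 / 3)                 # preference distributions, initially uniform
p_B = np.full(3, 1 / 3)
for _ in range(50):                     # iterative best response in distribution space
    p_A = smoothed_best_response(pref_A + collision @ p_B)
    p_B = smoothed_best_response(pref_B + collision.T @ p_A)

# The distributions settle on non-conflicting paths (A favors path 0, B path 2).
print(np.round(p_A, 2), np.round(p_B, 2))
```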
  5. We consider the problem of designing distributed collision-avoidance multi-agent control in large-scale environments with potentially moving obstacles, where a large number of agents are required to maintain safety using only local information and to reach their goals. This paper addresses collision avoidance, scalability, and generalizability by introducing graph control barrier functions (GCBFs) for distributed control. The newly introduced GCBF is based on the well-established CBF theory for safety guarantees but utilizes a graph structure for scalable and generalizable decentralized control. We use graph neural networks to learn both a neural GCBF certificate and the distributed controller. We also extend the framework from handling state-based models to directly taking point clouds from LiDAR for more practical robotics settings. We demonstrate the efficacy of GCBF in a variety of numerical experiments in which the number, density, and traveling distance of agents, as well as the number of unseen and uncontrolled obstacles, increase. Empirical results show that GCBF outperforms leading methods such as MAPPO and multi-agent distributed CBF (MDCBF). Trained with only 16 agents, GCBF achieves up to a 3x improvement in success rate (agents reach their goals without any collisions) with up to 500 agents, and still maintains a success rate above 50% for more than 1,000 agents when other methods fail completely.
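Finally, a drastically simplified sketch of the control-barrier-function filtering idea that GCBF (item 5) builds on, for single-integrator agents in discrete time. The pairwise barrier h, the constants, the communicated neighbor velocities, and the exhaustive search over candidate velocities (in place of the paper's learned graph-neural-network certificate and controller) are all illustrative assumptions.

```python
import numpy as np

R_SAFE, R_SENSE, ALPHA, DT = 1.0, 4.0, 0.5, 0.1

def h(pi, pj):
    # Barrier: squared inter-agent distance minus squared safety radius (positive = safe).
    return np.sum((pi - pj) ** 2) - R_SAFE ** 2

def cbf_ok(p, i, v, neighbor_v):
    """Discrete-time CBF-style condition against every sensed neighbor of agent i."""
    for j in range(len(p)):
        if j == i or np.linalg.norm(p[i] - p[j]) > R_SENSE:
            continue                          # outside the local sensing graph
        h_now = h(p[i], p[j])
        h_next = h(p[i] + v * DT, p[j] + neighbor_v[j] * DT)
        if h_next < (1 - ALPHA) * h_now:      # barrier would decay too fast
            return False
    return True

def safe_velocity(p, i, v_nominal, neighbor_v, candidates):
    """Pick the admissible candidate velocity closest to the nominal command."""
    admissible = [v for v in candidates if cbf_ok(p, i, v, neighbor_v)]
    if not admissible:
        return np.zeros(2)                    # conservative fallback: stop
    return min(admissible, key=lambda v: np.linalg.norm(v - v_nominal))

# Two agents approaching head-on: agent 0's nominal forward command is overridden.
p = np.array([[0.0, 0.0], [1.4, 0.0]])
neighbor_v = {1: np.array([-1.0, 0.0])}       # sensed neighbor velocity (assumed known)
cands = [np.array([vx, vy]) for vx in (-1.0, 0.0, 1.0) for vy in (-1.0, 0.0, 1.0)]
print(safe_velocity(p, 0, np.array([1.0, 0.0]), neighbor_v, cands))   # [0. 0.]
```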