Title: Verification of Neural Network Compression of ACAS Xu Lookup Tables with Star Set Reachability
Neural network approximations have become attractive for compressing data used by automation and autonomy algorithms on storage-limited and processing-limited aerospace hardware. However, unless these neural network approximations can be exhaustively verified to be safe, they cannot be certified for use on aircraft. This manuscript evaluates the safety of a neural network approximation of the unmanned Airborne Collision Avoidance System (ACAS Xu). First, a set of ACAS Xu closed-loop benchmarks, based on a well-known open-loop benchmark, is introduced; these benchmarks are challenging for current verification tools to analyze because of the complex, high-dimensional plant dynamics. Additionally, the switching and classification-based nature of the ACAS Xu neural network system poses a further challenge for existing analysis methods. Experimental evaluation considers selected scenarios in which the safety of the ownship aircraft's neural network action selection is assessed with respect to an intruder aircraft over time in a closed-loop control evaluation. Set-based analysis of the closed-loop benchmarks is performed with the star set representation in both the NNV tool and the nnenum tool, demonstrating that set-based analysis is becoming increasingly feasible for the verification of this class of systems.
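As a rough illustration of the representation the analysis relies on (a sketch only, not the NNV or nnenum API): a star set is the affine image of a constrained parameter box, {c + V alpha : C alpha <= d}, and affine network layers map star sets to star sets exactly, which is what makes the representation well suited to layer-by-layer reachability. The Python sketch below shows that exact affine step; the dimensions and layer weights are arbitrary stand-ins.

```python
# A minimal star-set sketch (illustrative only; not the NNV or nnenum data structures).
import numpy as np

class Star:
    """Star set {c + V @ alpha : C @ alpha <= d} with center c, basis V, predicate (C, d)."""
    def __init__(self, c, V, C, d):
        self.c, self.V, self.C, self.d = c, V, C, d

    def affine(self, W, b):
        # The image of a star set under x -> W @ x + b is again a star set:
        # transform the center and basis, keep the predicate unchanged.
        return Star(W @ self.c + b, W @ self.V, self.C, self.d)

# A unit box of initial-state uncertainty written as a star set.
n = 3
init = Star(c=np.zeros(n),
            V=np.eye(n),
            C=np.vstack([np.eye(n), -np.eye(n)]),  # encodes -1 <= alpha_i <= 1
            d=np.ones(2 * n))

# Exact image of the set through one (hypothetical) affine layer of an advisory network.
W, b = np.random.randn(5, n), np.random.randn(5)
out = init.affine(W, b)
print(out.c.shape, out.V.shape)  # (5,), (5, 3)
```

The ReLU layers are where the real work happens: each neuron either splits the star exactly or is over-approximated, and the two tools differ mainly in how they organize and prune that step.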
Award ID(s):
1918450
PAR ID:
10297296
Author(s) / Creator(s):
Date Published:
Journal Name:
AIAA Scitech 2021 Forum
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Neural network approximations have become attractive to compress data for automation and autonomy algorithms for use on storage-limited and processing-limited aerospace hardware. However, unless these neural network approximations can be exhaustively verified to be safe, they cannot be certified for use on aircraft. An example of such systems is the unmanned Airborne Collision Avoidance System (ACAS) Xu, which is a very popular benchmark for open-loop neural network control system verification tools. This paper proposes a new closed-loop extension of this benchmark, which consists of a set of 10 closed-loop properties selected to evaluate the safety of an ownship aircraft in the presence of a co-altitude intruder aircraft. These closed-loop safety properties are used to evaluate five of the 45 neural networks that comprise the ACAS Xu benchmark (corresponding to co-altitude cases) as well as the switching logic between the five neural networks. The combination of nonlinear dynamics and switching between five neural networks is a challenging verification task accomplished with star-set reachability methods in two verification tools. The safety of the ownship aircraft under initial position uncertainty is guaranteed in every scenario proposed. 
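A hedged sketch of the closed-loop structure this benchmark exercises, with placeholder networks, illustrative turn rates, and a simplified planar plant (the real benchmark supplies the networks, input encoding, and dynamics): the previous advisory selects which of the five networks runs, the minimum-cost output becomes the next advisory, and the ownship heading changes accordingly.

```python
# Hedged sketch of one step of the closed-loop benchmark (placeholder networks and a
# simplified planar plant; only the switching structure follows the benchmark description).
import numpy as np

ADVISORIES = ["COC", "WL", "WR", "SL", "SR"]  # clear-of-conflict, weak/strong left/right
TURN_RATE_DEG = {"COC": 0.0, "WL": 1.5, "WR": -1.5, "SL": 3.0, "SR": -3.0}  # illustrative values

def encode(state):
    # Relative geometry fed to the networks: range, bearing, relative heading, speeds.
    x_o, y_o, p_o, x_i, y_i, p_i, v_o, v_i = state
    rho = np.hypot(x_i - x_o, y_i - y_o)
    theta = np.arctan2(y_i - y_o, x_i - x_o) - p_o
    return np.array([rho, theta, p_i - p_o, v_o, v_i])

def step(state, prev_adv, networks, dt=1.0):
    net = networks[prev_adv]                               # switching: one network per previous advisory
    adv = ADVISORIES[int(np.argmin(net(encode(state))))]   # pick the minimum-cost advisory
    x_o, y_o, p_o, x_i, y_i, p_i, v_o, v_i = state
    p_o += np.radians(TURN_RATE_DEG[adv]) * dt             # ownship turns per the advisory
    x_o += v_o * np.cos(p_o) * dt; y_o += v_o * np.sin(p_o) * dt
    x_i += v_i * np.cos(p_i) * dt; y_i += v_i * np.sin(p_i) * dt
    return (x_o, y_o, p_o, x_i, y_i, p_i, v_o, v_i), adv

# Dummy stand-ins for the five co-altitude networks, just to exercise the loop.
networks = {a: (lambda z: np.random.rand(5)) for a in ADVISORIES}
state, adv = step((0.0, 0.0, 0.0, 8000.0, 0.0, np.pi, 250.0, 200.0), "COC", networks)
```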
  2. The ACAS X family of aircraft collision avoidance systems uses large numeric lookup tables to make decisions. Recent work used a deep neural network to approximate and compress a collision avoidance table, and simulations showed that the neural network performance was comparable to the original table. Consequently, neural network representations are being explored for use on small aircraft with limited storage capacity. However, the black-box nature of deep neural networks raises safety concerns because simulation results are not exhaustive. This work takes steps towards addressing these concerns by applying formal methods to analyze the behavior of collision avoidance neural networks both in isolation and in a closed-loop system. We evaluate our approach on a specific set of collision avoidance networks and show that even though the networks are not always locally robust, their closed-loop behavior ensures that they will not reach an unsafe (collision) state. 
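For contrast with the exhaustive analysis described above, here is a minimal sampling-based check of the local robustness question (sampling can only find counterexamples, never prove robustness, which is exactly why the cited work uses formal methods); net stands for any callable returning the five advisory costs.

```python
import numpy as np

def locally_robust_sample(net, x, eps, n_samples=1000, rng=None):
    """Return False if any sampled perturbation within the eps-box flips the advisory."""
    rng = rng or np.random.default_rng(0)
    base = int(np.argmin(net(x)))                         # nominal minimum-cost advisory
    for _ in range(n_samples):
        x_pert = x + rng.uniform(-eps, eps, size=x.shape)
        if int(np.argmin(net(x_pert))) != base:           # advisory changed inside the box
            return False
    return True
```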
  3. N. Matni, M. Morari (Ed.)
    This paper proposes a computationally efficient framework, based on interval analysis, for rigorous verification of nonlinear continuous-time dynamical systems with neural network controllers. Given a neural network, we use an existing verification algorithm to construct inclusion functions for its input-output behavior. Inspired by mixed monotone theory, we embed the closed-loop dynamics into a larger system using an inclusion function of the neural network and a decomposition function of the open-loop system. This embedding provides a scalable approach for safety analysis of the neural control loop while preserving the nonlinear structure of the system. We show that one can efficiently compute hyper-rectangular over-approximations of the reachable sets using a single trajectory of the embedding system. We design an algorithm to leverage this computational advantage through partitioning strategies, improving our reachable set estimates while balancing runtime with tunable parameters. We demonstrate the performance of this algorithm through two case studies. First, we demonstrate this method's strength in complex nonlinear environments. Then, we show that our approach matches the performance of the state-of-the-art verification algorithm for linear discretized systems.
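A toy version of the embedding idea on a double integrator with a hypothetical 1-Lipschitz feedback law standing in for the neural network controller (the paper obtains the controller's inclusion function from an existing verification algorithm; every numeric choice below is a placeholder):

```python
# Toy embedding-system sketch: propagate a hyper-rectangle [lo, up] through one
# closed-loop step using an inclusion function for the controller and the
# monotone structure of a double-integrator plant.  Illustrative only.
import numpy as np

def controller_inclusion(lo, up):
    # Placeholder bounds on u = pi(x) over the box, assuming pi is 1-Lipschitz in
    # each coordinate; the paper derives such bounds from a neural network verifier.
    mid = (lo + up) / 2.0
    rad = np.sum(up - lo) / 2.0
    u_mid = -mid[0] - mid[1]              # hypothetical stabilizing feedback
    return u_mid - rad, u_mid + rad

def embed_step(lo, up, dt=0.1):
    u_lo, u_up = controller_inclusion(lo, up)
    # x1' = x1 + dt*x2 and x2' = x2 + dt*u are monotone in (x2, u), so the
    # bounds map through directly, giving a hyper-rectangular over-approximation.
    lo_next = np.array([lo[0] + dt * lo[1], lo[1] + dt * u_lo])
    up_next = np.array([up[0] + dt * up[1], up[1] + dt * u_up])
    return lo_next, up_next

lo, up = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
for _ in range(5):
    lo, up = embed_step(lo, up)
print(lo, up)  # bounds on the reachable set after five steps
```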
  4. We present a framework based on interval analysis and monotone systems theory to certify and search for forward invariant sets in nonlinear systems with neural network controllers. The framework (i) constructs localized first-order inclusion functions for the closed-loop system using Jacobian bounds and existing neural network verification tools; (ii) builds a dynamical embedding system whose evaluation along a single trajectory directly corresponds to a nested family of hyper-rectangles provably converging to an attractive set of the original system; (iii) utilizes linear transformations to build families of nested parallelotopes with the same properties. The framework is automated in Python using our interval analysis toolbox npinterval, in conjunction with the symbolic arithmetic toolbox sympy, and demonstrated on an 8-dimensional leader-follower system.
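Following the nesting property described above, the invariance certificate itself reduces to a containment check; plain numpy stands in for the npinterval toolbox here, and embed_step is assumed to return the hyper-rectangle bounds after one step of a discrete-time embedding system (for example, the toy embed_step sketched under the previous entry).

```python
import numpy as np

def certifies_invariance(lo, up, embed_step):
    # If one step of the embedding dynamics maps the box [lo, up] into itself,
    # the box is forward invariant for the closed-loop system it embeds.
    lo_next, up_next = embed_step(lo, up)
    return bool(np.all(lo_next >= lo) and np.all(up_next <= up))
```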
  5. There has been an increasing interest in using neural networks in closed-loop control systems to improve performance and reduce computational costs for on-line implementation. However, providing safety and stability guarantees for these systems is challenging due to the nonlinear and compositional structure of neural networks. In this paper, we propose a novel forward reachability analysis method for the safety verification of linear time-varying systems with neural networks in feedback interconnection. Our technical approach relies on abstracting the nonlinear activation functions by quadratic constraints, which leads to an outer-approximation of forward reachable sets of the closed-loop system. We show that we can compute these approximate reachable sets using semidefinite programming. We illustrate our method in a quadrotor example, in which we first approximate a nonlinear model predictive controller via a deep neural network and then apply our analysis tool to certify finite-time reachability and constraint satisfaction of the closed-loop system. 
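The abstraction at the heart of this approach is the sector quadratic constraint: an activation phi lying in the sector [alpha, beta] satisfies (phi(x) - alpha*x) * (beta*x - phi(x)) >= 0 for every x, and both ReLU and tanh lie in the sector [0, 1]. The snippet below is only a numerical sanity check of that pointwise fact; assembling such constraints into the semidefinite program that bounds the reachable sets is the paper's contribution and is not reproduced here.

```python
# Pointwise check of the sector-[0, 1] quadratic constraint for ReLU and tanh.
import numpy as np

alpha, beta = 0.0, 1.0
x = np.linspace(-5.0, 5.0, 1001)
for phi in (lambda z: np.maximum(z, 0.0), np.tanh):
    qc = (phi(x) - alpha * x) * (beta * x - phi(x))
    assert np.all(qc >= -1e-12)  # the quadratic constraint holds along the whole grid
print("sector [0, 1] QC holds for ReLU and tanh on the sampled grid")
```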