We present an approach for the synthesis and verification of neural network controllers for closed-loop dynamical systems modelled as ordinary differential equations. Feedforward neural networks are ubiquitous function approximators, especially in the machine learning literature. The proposed verification technique constructs an over-approximation of the system trajectories using a combination of tools such as Sherlock and Flow*. In addition to computing reach sets, we incorporate counterexamples, or bad traces, into the synthesis phase of the controller. We alternate between verification and counterexample generation until the system outputs a fully verified controller, or training fails to produce a neural network compliant with the desired specifications. We demonstrate the effectiveness of our approach on a suite of benchmarks ranging from 2 to 17 variables.
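The back-and-forth between verification and counterexample generation described above can be sketched as a simple loop. The functions below are hypothetical stand-ins, not the paper's implementation: `verify_reach_sets` plays the role of the Sherlock/Flow* reachability check, and `train_controller` stands in for retraining on a counterexample, here synthesizing a scalar gain `k` for the toy system x' = (1 - k)x, which is stable iff k > 1.

```python
# A minimal sketch of the counterexample-guided synthesis loop. All
# names and the toy dynamics are illustrative assumptions.

def verify_reach_sets(k):
    """Return None if verified, else a counterexample initial state."""
    return None if k > 1.0 else 1.0  # any x0 diverges when k <= 1

def train_controller(k, counterexample):
    """Nudge the controller parameter to repair the bad trace."""
    return k + 0.5  # stand-in for a gradient step on the counterexample

def synthesize(k=0.0, max_iters=10):
    for _ in range(max_iters):
        cex = verify_reach_sets(k)
        if cex is None:
            return k  # fully verified controller
        k = train_controller(k, cex)
    raise RuntimeError("training failed to produce a compliant network")
```

Starting from `k = 0.0`, the loop generates counterexamples until the gain crosses the stability threshold and the verifier accepts.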
Building Verified Neural Networks for Computer Systems with Ouroboros
Neural networks are powerful tools. Applying them in computer systems—operating systems, databases, and
networked systems—attracts much attention. However, neural networks are complicated black boxes that may
produce unexpected results. To train networks with well-defined behaviors, we introduce Ouroboros, a system that
constructs verified neural networks. Verified neural networks are those that satisfy user-defined safety properties,
known as specifications. Ouroboros builds verified networks by a training-verification loop that combines deep
learning training and neural network verification. The system employs multiple techniques to fill the gap between
today’s verification and the properties required for systems. Ouroboros also accelerates the training-verification
loop by spec-aware learning. Our experiments show that Ouroboros can train verified networks for five applications
that we study and has a 2.8× speedup on average compared with the vanilla training-verification loop.
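The training-verification loop described above can be illustrated with a deliberately tiny example. This is not Ouroboros's spec-aware mechanism; it swaps in a simple projection step to keep the parameters inside the verified region, for a toy model f(x) = w·x whose data loss pulls w toward 2 while the specification demands f(1) ≤ 1.

```python
# A sketch of coupling training with a verifier, assuming a toy scalar
# model f(x) = w * x and the spec f(1) <= 1. The projection step is a
# stand-in for the paper's actual spec-aware learning.

def verify(w):
    """Toy verifier: the spec f(1) <= 1 for f(x) = w*x holds iff w <= 1."""
    return w <= 1.0

def train_verified(w=0.0, lr=0.1, epochs=50):
    for _ in range(epochs):
        w -= lr * 2.0 * (w - 2.0)  # gradient step on data loss (w - 2)^2
        if not verify(w):
            w = 1.0                # project back into the verified region
    return w
```

The data loss alone would drive w to 2 and violate the spec; interleaving verification keeps the trained parameter at the boundary value w = 1.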
- Award ID(s): 2237295
- PAR ID: 10479393
- Publisher / Repository: Sixth Conference on Machine Learning and Systems
- Date Published:
- Journal Name: Sixth Conference on Machine Learning and Systems
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Neural network approximations have become attractive to compress data for automation and autonomy algorithms for use on storage-limited and processing-limited aerospace hardware. However, unless these neural network approximations can be exhaustively verified to be safe, they cannot be certified for use on aircraft. An example of such systems is the unmanned Airborne Collision Avoidance System (ACAS) Xu, which is a very popular benchmark for open-loop neural network control system verification tools. This paper proposes a new closed-loop extension of this benchmark, which consists of a set of 10 closed-loop properties selected to evaluate the safety of an ownship aircraft in the presence of a co-altitude intruder aircraft. These closed-loop safety properties are used to evaluate five of the 45 neural networks that comprise the ACAS Xu benchmark (corresponding to co-altitude cases) as well as the switching logic between the five neural networks. The combination of nonlinear dynamics and switching between five neural networks is a challenging verification task accomplished with star-set reachability methods in two verification tools. The safety of the ownship aircraft under initial position uncertainty is guaranteed in every scenario proposed.
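The kind of closed-loop reachability computation this benchmark targets can be illustrated with interval arithmetic: push a set of states through the controller network and then through one step of the plant dynamics. The one-neuron ReLU "advisory" network and the scalar dynamics below are made-up assumptions; the actual tools use star-set methods, which are far tighter than this coarse interval over-approximation.

```python
# A toy closed-loop reachability step using interval arithmetic.
# Controller and plant are illustrative stand-ins, not ACAS Xu.

def interval_affine(lo, hi, w, b):
    """Image of [lo, hi] under x -> w*x + b."""
    a, c = w * lo + b, w * hi + b
    return (min(a, c), max(a, c))

def interval_relu(lo, hi):
    return (max(lo, 0.0), max(hi, 0.0))

def reach_step(lo, hi):
    # hypothetical one-neuron controller: u = relu(-0.5*x + 1)
    u_lo, u_hi = interval_relu(*interval_affine(lo, hi, -0.5, 1.0))
    # discrete-time plant: x' = x + 0.1*u (over-approximated interval sum)
    return (lo + 0.1 * u_lo, hi + 0.1 * u_hi)
```

Iterating `reach_step` yields a sequence of boxes guaranteed to contain every trajectory of the closed loop, which is what the safety properties are checked against.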
Ivrii, Alexander; Strichman, Ofer (Eds.). Artificial Neural Networks (ANNs) have demonstrated remarkable utility in various challenging machine learning applications. While formally verified properties of their behaviors are highly desired, they have proven notoriously difficult to derive and enforce. Existing approaches typically formulate this problem as a post facto analysis process. In this paper, we present a novel learning framework that ensures such formal guarantees are enforced by construction. Our technique enables training provably correct networks with respect to a broad class of safety properties, a capability that goes well beyond existing approaches, without compromising much accuracy. Our key insight is that we can integrate an optimization-based abstraction refinement loop into the learning process and operate over dynamically constructed partitions of the input space that consider accuracy and safety objectives synergistically. The refinement procedure iteratively splits the input space from which training data is drawn, guided by the efficacy with which such partitions enable safety verification. We have implemented our approach in a tool (ART) and applied it to enforce general safety properties on the unmanned aircraft collision avoidance system (ACAS Xu) dataset and the Collision Detection dataset. Importantly, we empirically demonstrate that realizing safety does not come at the price of much accuracy. Our methodology demonstrates that an abstraction refinement methodology provides a meaningful pathway for building both accurate and correct machine learning networks.
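The input-partition refinement idea described above can be sketched as recursive box splitting: keep subdividing input regions until each piece either verifies or falls below a minimum width. `verifies` is a hypothetical stand-in for the per-partition safety check a tool like ART would run; here a box "verifies" when a toy bound holds over the whole box.

```python
# A small sketch of verification-guided input-space refinement.
# The property and thresholds are illustrative assumptions.

def verifies(lo, hi):
    # toy property: f(x) = x*x stays below 1 on the box => need max < 1
    return max(lo * lo, hi * hi) < 1.0

def refine(lo, hi, min_width=0.25):
    """Return the sub-boxes of [lo, hi] on which the property verifies."""
    if verifies(lo, hi):
        return [(lo, hi)]
    if hi - lo <= min_width:
        return []  # give up on this region (would need more training)
    mid = (lo + hi) / 2.0
    return refine(lo, mid, min_width) + refine(mid, hi, min_width)
```

The returned partition localizes exactly where the property already holds; the remaining regions are where training effort (or further refinement) should be spent.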
This paper presents Verisig, a hybrid system approach to verifying safety properties of closed-loop systems using neural networks as controllers. We focus on sigmoid-based networks and exploit the fact that the sigmoid is the solution to a quadratic differential equation, which allows us to transform the neural network into an equivalent hybrid system. By composing the network's hybrid system with the plant's, we transform the problem into a hybrid system verification problem which can be solved using state-of-the-art reachability tools. We show that reachability is decidable for networks with one hidden layer and decidable for general networks if Schanuel's conjecture is true. We evaluate the applicability and scalability of Verisig in two case studies, one from reinforcement learning and one in which the neural network is used to approximate a model predictive controller.
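The transformation in Verisig rests on the mathematical identity that the sigmoid solves the quadratic ODE s' = s·(1 − s). A quick numerical check of that identity (this verifies the fact the paper exploits, not the tool itself):

```python
import math

# Check that d/dx sigmoid(x) equals sigmoid(x) * (1 - sigmoid(x)),
# the quadratic-ODE identity underlying the hybrid-system encoding.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative_via_ode(x):
    """Derivative predicted by the ODE s' = s * (1 - s)."""
    s = sigmoid(x)
    return s * (1.0 - s)

def finite_difference(f, x, h=1e-6):
    """Central-difference numerical derivative for comparison."""
    return (f(x + h) - f(x - h)) / (2.0 * h)
```

Because the derivative of the activation is a polynomial in its value, each sigmoid neuron can be written as a continuous-time state variable, which is what lets the network compose with the plant as one hybrid system.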
This work presents a methodology for analysis and control of nonlinear fluid systems using neural networks. The approach is demonstrated in four different study cases: the Lorenz system, a modified version of the Kuramoto-Sivashinsky equation, a streamwise-periodic two-dimensional channel flow, and a confined cylinder flow. Neural networks are trained as models to capture the complex system dynamics and estimate equilibrium points through a Newton method, enabled by back-propagation. These neural network surrogate models (NNSMs) are leveraged to train a second neural network, which is designed to act as a stabilizing closed-loop controller. The training process employs a recurrent approach, whereby the NNSM and the neural network controller are chained in closed loop along a finite time horizon. By cycling through phases of combined random open-loop actuation and closed-loop control, an iterative training process is introduced to overcome the lack of data near equilibrium points. This approach improves the accuracy of the models in the most critical region for achieving stabilization. Through the use of L1 regularization within loss functions, the NNSMs can also guide optimal sensor placement, reducing the number of sensors from an initial candidate set. The data sets produced during the iterative training process are also leveraged for conducting a linear stability analysis through a modified dynamic mode decomposition approach. The results demonstrate the effectiveness of computationally inexpensive neural networks in modeling, controlling, and enabling stability analysis of nonlinear systems, providing insights into the system behavior and offering potential for stabilization of complex fluid systems.
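The equilibrium-finding step described above can be sketched in one dimension: given a surrogate map x' = f(x), find a fixed point f(x) = x with Newton's method. In the paper the derivative comes from back-propagation through the trained NNSM; here, as an illustrative assumption, the surrogate is the hand-written map f(x) = tanh(x) + 0.5 with its analytic derivative.

```python
import math

# Newton's method for an equilibrium f(x) = x of a toy surrogate model.
# `surrogate` and its gradient are stand-ins for an NNSM and its
# back-propagated Jacobian.

def surrogate(x):
    return math.tanh(x) + 0.5

def surrogate_grad(x):
    return 1.0 - math.tanh(x) ** 2

def newton_equilibrium(x0, tol=1e-10, max_iters=50):
    x = x0
    for _ in range(max_iters):
        g = surrogate(x) - x          # residual of the fixed-point equation
        if abs(g) < tol:
            return x
        dg = surrogate_grad(x) - 1.0  # derivative of the residual
        x -= g / dg                   # Newton update
    return x
```

For the full fluid systems, the same scheme runs on the vector-valued surrogate with Jacobian-vector products supplied by automatic differentiation.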