Deep Neural Networks (DNNs) have become a popular tool for solving various real-world problems. DNNs' sophisticated structure allows them to learn complex representations and features; however, that structure and the use of floating-point arithmetic also make them computationally expensive. For this reason, Binary Neural Networks (BNNs) are widely used on edge devices, such as microcomputers. Like other DNNs, BNNs are vulnerable to adversarial attacks: even a small perturbation of the input may lead to an errant output. Unfortunately, only a few approaches have been proposed for verifying BNNs. This paper proposes an approach to verify BNNs on a continuous input space using star reachability analysis. Our approach can compute both exact and overapproximate reachable sets of BNNs with Sign activation functions and use them for verification. The proposed approach can also efficiently construct a complete set of counterexamples when a network is unsafe. We implemented our approach in NNV, a neural network verification tool for DNNs and learning-enabled Cyber-Physical Systems. The experimental results show that our star-based approach is less conservative, more efficient, and more scalable than a recent SMT-based method implemented in Marabou. We also provide a comparison with the quantization-based tool EEVBNN.
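The exact reachability computation described above hinges on one observation: Sign is piecewise constant, so an input region can be partitioned into sub-regions on which every neuron's sign, and hence the layer's output, is fixed. The paper does this with star sets and LP feasibility checks; the sketch below is hypothetical, not the paper's implementation, and illustrates the same case split with coarse axis-aligned boxes instead.

```python
# Partition a box [lo, hi] into sub-boxes on which Sign(x_i) is
# constant in every dimension. Each dimension that straddles zero
# doubles the number of regions -- the exponential worst case that
# overapproximate reachability methods avoid.

def sign_split(lo, hi):
    regions = [((list(lo), list(hi)), [])]  # (box, fixed sign vector)
    for i in range(len(lo)):
        refined = []
        for (rlo, rhi), signs in regions:
            if rlo[i] >= 0:           # entirely non-negative: Sign = +1
                refined.append(((rlo, rhi), signs + [1]))
            elif rhi[i] < 0:          # entirely negative: Sign = -1
                refined.append(((rlo, rhi), signs + [-1]))
            else:                     # straddles zero: split into two cases
                plo = list(rlo); plo[i] = 0.0
                nhi = list(rhi); nhi[i] = 0.0  # boundary kept in both (sketch)
                refined.append(((plo, list(rhi)), signs + [1]))
                refined.append(((list(rlo), nhi), signs + [-1]))
        regions = refined
    return regions
```

For the box [-1, 1] × [2, 3], only the first dimension straddles zero, so the split yields exactly two regions, with output sign vectors (+1, +1) and (-1, +1); a star-set version replaces each box with a constrained star and each sign test with an LP feasibility call.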
StarV: A Qualitative and Quantitative Verification Tool for Learning-Enabled Systems
Abstract This paper presents StarV, a new tool for verifying deep neural networks (DNNs) and learning-enabled Cyber-Physical Systems (Le-CPS) using star reachability analysis. Distinguished from existing star-based verification tools such as NNV and NNENUM, StarV not only offers qualitative verification techniques using Star and ImageStar reachability analysis but is also the first tool to use ProbStar reachability for quantitative verification of DNNs with piecewise-linear activation functions and of Le-CPS. Notably, it introduces a novel ProbStar Temporal Logic formalism and associated algorithms, enabling quantitative verification of the temporal behaviors of DNNs and Le-CPS. Additionally, StarV presents a novel SparseImageStar set representation and an associated reachability algorithm that lets users verify deep convolutional neural networks and semantic segmentation networks with greater memory efficiency. StarV is evaluated against the state of the art on many challenging benchmarks. The experiments show that StarV outperforms existing tools in many aspects, such as timing performance, scalability, and memory consumption.
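The star reachability underlying StarV has a closed-form affine step: a star set S = {c + V a : P(a)}, with center c, basis V, and predicate P over the coefficients a, maps through a layer x ↦ W x + b to the star {(W c + b) + (W V) a : P(a)}. Only the center and basis change; the predicate is carried over untouched. A minimal plain-Python sketch (illustrative, not StarV's API):

```python
# Affine image of a star set S = { c + V a : P(a) } under x -> W x + b.
# The image is { (W c + b) + (W V) a : P(a) }: the predicate P, and so
# the constraint set on a, is unchanged, which is what makes star-based
# reachability exact on affine layers.

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def matmul(W, V):
    cols = list(zip(*V))  # columns of V are the star's generators
    return [[sum(w * vi for w, vi in zip(row, col)) for col in cols]
            for row in W]

def affine_star(c, V, W, b):
    c2 = [ci + bi for ci, bi in zip(matvec(W, c), b)]
    return c2, matmul(W, V)  # predicate carries over as-is
```

Soundness can be spot-checked pointwise: for any coefficient vector a satisfying P, mapping the star point c + V a through the layer gives the same result as forming the image star's point c' + V' a.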
- PAR ID:
- 10649230
- Publisher / Repository:
- Springer Nature Switzerland
- Date Published:
- Page Range / eLocation ID:
- 376 to 394
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
This manuscript presents the updated version of the Neural Network Verification (NNV) tool. NNV is a formal verification software tool for deep learning models and cyber-physical systems with neural network components. NNV was first introduced as a verification framework for feedforward and convolutional neural networks, as well as for neural network control systems. Since then, numerous works have made significant improvements in the verification of new deep learning models, as well as tackling some of the scalability issues that may arise when verifying complex models. In this new version of NNV, we introduce verification support for multiple deep learning models, including neural ordinary differential equations, semantic segmentation networks and recurrent neural networks, as well as a collection of reachability methods that aim to reduce the computation cost of reachability analysis of complex neural networks. We have also added direct support for standard input verification formats in the community such as VNNLIB (verification properties), and ONNX (neural networks) formats. We present a collection of experiments in which NNV verifies safety and robustness properties of feedforward, convolutional, semantic segmentation and recurrent neural networks, as well as neural ordinary differential equations and neural network control systems. Furthermore, we demonstrate the capabilities of NNV against a commercially available product in a collection of benchmarks from control systems, semantic segmentation, image classification, and time-series data.
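The reachability methods such tools provide trade tightness for cost. The cheapest end of that spectrum, interval bound propagation, is enough to show the overall shape of a robustness check: push per-dimension input bounds through each affine layer and ReLU, then compare the output bounds against the property. A hedged sketch (not NNV's star-based algorithm, and much coarser):

```python
# Interval bound propagation through an affine layer and ReLU.
# A positive weight pulls from the matching bound, a negative weight
# from the opposite one; ReLU just clips both bounds at zero.

def affine_bounds(lo, hi, W, b):
    out_lo, out_hi = [], []
    for row, bi in zip(W, b):
        out_lo.append(bi + sum(w * (l if w >= 0 else h)
                               for w, l, h in zip(row, lo, hi)))
        out_hi.append(bi + sum(w * (h if w >= 0 else l)
                               for w, l, h in zip(row, lo, hi)))
    return out_lo, out_hi

def relu_bounds(lo, hi):
    return [max(l, 0.0) for l in lo], [max(h, 0.0) for h in hi]
```

A property such as "output 0 stays below 2 on the whole input box" is verified whenever the propagated upper bound is below 2; when the bounds are too loose to decide, tighter representations such as Star or ImageStar sets are the remedy.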
-
Pérez, Guillermo A.; Raskin, Jean-François (Eds.) Deep neural networks (DNNs) are increasingly being deployed to perform safety-critical tasks. The opacity of DNNs, which prevents humans from reasoning about them, presents new safety and security challenges. To address these challenges, the verification community has begun developing techniques for rigorously analyzing DNNs, with numerous verification algorithms proposed in recent years. While a significant amount of work has gone into developing these verification algorithms, little work has been devoted to rigorously studying the computability and complexity of the underlying theoretical problems. Here, we seek to contribute to bridging this gap. We focus on two kinds of DNNs: those that employ piecewise-linear activation functions (e.g., ReLU), and those that employ piecewise-smooth activation functions (e.g., Sigmoids). We prove the following two theorems: (i) the decidability of verifying DNNs with a particular set of piecewise-smooth activation functions, including Sigmoid and tanh, is equivalent to a well-known, open problem formulated by Tarski; and (ii) the DNN verification problem for any quantifier-free linear arithmetic specification can be reduced to the DNN reachability problem, whose approximation is NP-complete. These results answer two fundamental questions about the computability and complexity of DNN verification, and the ways it is affected by the network's activation functions and error tolerance, and could help guide future efforts in developing DNN verification tools.
-
Silva, A. (Ed.) We present SceneChecker, a tool for verifying scenarios involving vehicles executing complex plans in large cluttered workspaces. SceneChecker converts the scenario verification problem to a standard hybrid system verification problem, and solves it effectively by exploiting structural properties in the plan and the vehicle dynamics. SceneChecker uses symmetry abstractions and a novel refinement algorithm, and, importantly, is built to boost the performance of any existing reachability analysis tool as a plug-in subroutine. We evaluated SceneChecker on several scenarios involving ground and aerial vehicles with nonlinear dynamics and neural network controllers, employing different kinds of symmetries, using different reachability subroutines, and following plans with hundreds of waypoints in complex workspaces. Compared to two leading tools, DryVR and Flow*, SceneChecker shows a 14× average speedup in verification time, even while using those very tools as reachability subroutines.