We present a scalable methodology for verifying stochastic hybrid systems against inequality linear temporal logic (iLTL) or inequality metric interval temporal logic (iMITL) specifications. Using the Mori–Zwanzig reduction method, we construct a finite-state Markov chain reduction of a given stochastic hybrid system and prove that this reduced Markov chain is approximately equivalent to the original system in a distributional sense. This approximate equivalence means that analyzing the Markov chain with respect to a suitably strengthened property allows us to conclude whether the original stochastic hybrid system meets its temporal logic specifications. Based on this, we propose the first statistical model checking algorithms for verifying stochastic hybrid systems against correctness properties expressed in iLTL or iMITL. We demonstrate the scalability of the proposed algorithms in a case study.
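As a toy illustration of the distribution-level reasoning behind iLTL-style properties (this is not the paper's algorithm; the chain, the atomic proposition, and the threshold below are all hypothetical), the following sketch propagates the state distribution of a small finite Markov chain and checks an inequality proposition at every step of a bounded horizon:

```python
import numpy as np

# Hypothetical 3-state Markov chain reduction (each row sums to 1).
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.00, 0.05, 0.95]])
pi0 = np.array([1.0, 0.0, 0.0])  # initial state distribution

# iLTL-style atomic proposition: an inequality over the distribution,
# here "probability mass in state 2 stays below 0.6" (threshold made up).
def atomic_holds(dist, threshold=0.6):
    return dist[2] < threshold

# Check a bounded "globally" operator by propagating the distribution
# forward: pi_{t+1} = pi_t @ P.
def globally(P, pi, horizon=100):
    dist = pi.copy()
    for _ in range(horizon):
        if not atomic_holds(dist):
            return False  # proposition violated at some step
        dist = dist @ P
    return True

print(globally(P, pi0))
```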
On Neural Network Equivalence Checking Using SMT Solvers
Two pretrained neural networks are deemed (approximately) equivalent if they yield similar outputs for the same inputs. Equivalence checking of neural networks is of great importance because it enables replacing learning-enabled components with (approximately) equivalent ones when additional requirements must be fulfilled or security threats addressed, as is the case with knowledge distillation, adversarial training, and similar techniques. In this paper, we present a method for solving various strict and approximate equivalence checking problems for neural networks by reducing them to SMT satisfiability checking problems. This work explores the utility and limitations of the neural network equivalence checking framework and proposes avenues for future research and improvements toward more scalable and practically applicable solutions. We present experimental results for diverse types of neural network models (classifiers and regression networks) and equivalence criteria, working toward a general, application-independent equivalence checking approach.
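As a minimal sketch of the reduction to SMT (not the paper's exact encoding; the two one-neuron networks, the input box, and the tolerance eps below are made up for illustration), approximate equivalence of two tiny ReLU networks can be phrased as a satisfiability query with the Z3 Python API: ask for an input that separates the networks by more than eps, and read unsatisfiability as approximate equivalence on the domain.

```python
from z3 import Real, Solver, If, Or, And, unsat

def relu(t):
    return If(t > 0, t, 0)

x = Real('x')

# Two hypothetical one-neuron ReLU networks (weights chosen for illustration).
f1 = relu(x)                   # network A
f2 = 0.5 * relu(2 * x) + 0.05  # network B: same shape, plus a small bias

eps = 0.1  # approximate-equivalence tolerance (assumed)

s = Solver()
s.add(And(x >= -1, x <= 1))              # bounded input domain
s.add(Or(f1 - f2 > eps, f2 - f1 > eps))  # search for a counterexample

# unsat: no input in [-1, 1] separates the networks by more than eps,
# i.e., they are approximately equivalent on that domain.
print("approximately equivalent" if s.check() == unsat else s.model())
```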
- Award ID(s):
- 1801546
- PAR ID:
- 10359212
- Editor(s):
- Bogomolov, S.; Parker, D.
- Date Published:
- Journal Name:
- Formal Modeling and Analysis of Timed Systems. FORMATS 2022
- Volume:
- 13465
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
We study the theory of neural networks (NNs) through the lens of classical nonparametric regression problems, with a focus on NNs' ability to adaptively estimate functions with heterogeneous smoothness -- a property of functions in Besov or Bounded Variation (BV) classes. Existing work on this problem requires tuning the NN architecture based on the function spaces and sample sizes. We consider a "Parallel NN" variant of deep ReLU networks and show that standard weight decay is equivalent to promoting ℓp-sparsity (0 < p < 1) of the coefficient vector of an end-to-end learned function basis, i.e., a dictionary. Using this equivalence, we further establish that by tuning only the weight decay, such a Parallel NN achieves an estimation error arbitrarily close to the minimax rates for both the Besov and BV classes. Notably, it gets exponentially closer to minimax optimal as the NN gets deeper. Our research sheds new light on why depth matters and how NNs are more powerful than kernel methods.
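A minimal sketch of the "Parallel NN" idea (a toy PyTorch construction, not the paper's exact architecture; the number of branches, widths, depth, learning rate, and weight-decay value are all assumptions): several narrow deep ReLU subnetworks are trained side by side, their outputs are summed, and weight decay is the only regularization knob that gets tuned.

```python
import torch
import torch.nn as nn

class ParallelNN(nn.Module):
    """Sum of many narrow deep ReLU subnetworks (sizes are assumptions)."""
    def __init__(self, n_branches=32, depth=4, width=4):
        super().__init__()
        def branch():
            layers = [nn.Linear(1, width), nn.ReLU()]
            for _ in range(depth - 1):
                layers += [nn.Linear(width, width), nn.ReLU()]
            layers += [nn.Linear(width, 1)]
            return nn.Sequential(*layers)
        self.branches = nn.ModuleList(branch() for _ in range(n_branches))

    def forward(self, x):
        return sum(b(x) for b in self.branches)

# Toy 1-D regression target with heterogeneous smoothness:
# wiggly on the left half, nearly linear on the right.
x = torch.linspace(0, 1, 256).unsqueeze(1)
y = torch.where(x < 0.5, torch.sin(20 * x), 0.2 * x) + 0.05 * torch.randn_like(x)

model = ParallelNN()
# Weight decay is the single tuned regularizer (value here is made up).
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fn = nn.MSELoss()

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```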
-
Checking query equivalence is of great significance in database systems. Prior work in automated query equivalence checking took the first steps toward formally modeling and reasoning about query optimization rules, but supports only a limited number of query features. In this paper, we present Qed, a new framework for query equivalence checking based on bag semantics. Qed uses a new formalism called Q-expressions that models queries using different normal forms for efficient equivalence checking and, unlike prior work, models features such as integrity constraints and NULLs in a principled way. Our formalism also allows us to define a new query fragment that encompasses many real-world queries and admits a complete equivalence checking algorithm, assuming a complete first-order theory solver. Empirically, Qed can verify 299 out of 444 query pairs extracted from the Calcite framework and 979 out of 1287 query pairs extracted from CockroachDB, more than 2× the number of cases proven by the prior state-of-the-art solver.
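To make "bag semantics" concrete (a toy illustration only: this tests two hypothetical queries on one example database, which is the opposite of Qed's symbolic proof procedure), two queries are bag-equivalent on a database when they return the same multiset of rows, so duplicates count:

```python
from collections import Counter

# A hypothetical table of (id, dept) rows, containing a duplicate row.
emp = [(1, 'cs'), (2, 'cs'), (2, 'cs'), (3, 'ee')]

# Q1: SELECT dept FROM emp WHERE id >= 2
q1 = [dept for (eid, dept) in emp if eid >= 2]

# Q2: SELECT dept FROM emp WHERE NOT (id < 2)
q2 = [dept for (eid, dept) in emp if not (eid < 2)]

# Under bag semantics, results are multisets, so duplicates matter.
print(Counter(q1) == Counter(q2))  # True on this database
print(Counter(q1))                 # Counter({'cs': 2, 'ee': 1})

# Note: agreement on one database can only refute equivalence; proving it
# for all databases requires symbolic reasoning, which is what Qed does.
```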
-
Recent research shows that the dynamics of an infinitely wide neural network (NN) trained by gradient descent can be characterized by the Neural Tangent Kernel (NTK) [27]. Under the squared loss, the infinite-width NN trained by gradient descent with an infinitely small learning rate is equivalent to kernel regression with the NTK [4]. However, this equivalence is currently known only for ridge regression [6], while the equivalence between NNs and other kernel machines (KMs), e.g., support vector machines (SVMs), remains unknown. In this work, we therefore establish the equivalence between NNs and SVMs: specifically, between the infinitely wide NN trained by soft-margin loss and the standard soft-margin SVM with NTK trained by subgradient descent. Our main theoretical results include establishing the equivalence between NNs and a broad family of L2-regularized KMs with finite-width bounds, which prior work cannot handle, and showing that every finite-width NN trained with such regularized loss functions is approximately a KM. Furthermore, we demonstrate that our theory enables three practical applications: (i) non-vacuous generalization bounds for NNs via the corresponding KM; (ii) nontrivial robustness certificates for the infinite-width NN (where existing robustness verification methods would provide vacuous bounds); and (iii) intrinsically more robust infinite-width NNs than those obtained from previous kernel regression.
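A minimal numpy sketch of the empirical (finite-width) NTK for a one-hidden-layer ReLU network, followed by kernel ridge regression with it (the architecture, width, toy data, and ridge parameter are assumptions; the results above concern the infinite-width limit, which the empirical NTK only approximates):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 2, 512                              # input dim, hidden width (assumed)
W = rng.normal(size=(m, d)) / np.sqrt(d)   # hidden-layer weights
a = rng.normal(size=m) / np.sqrt(m)        # output weights

def features(X):
    """Per-input gradient of f(x) = a . relu(W x) w.r.t. all parameters."""
    pre = X @ W.T                   # (n, m) pre-activations
    act = np.maximum(pre, 0)        # relu(W x): gradient w.r.t. a
    gate = (pre > 0).astype(float)  # relu'(W x)
    # gradient w.r.t. row W_j is a_j * gate_j * x; flatten per input
    gW = (a * gate)[:, :, None] * X[:, None, :]        # (n, m, d)
    return np.concatenate([act, gW.reshape(len(X), -1)], axis=1)

def empirical_ntk(X1, X2):
    # NTK(x, x') = <grad_theta f(x), grad_theta f(x')>
    return features(X1) @ features(X2).T

# Kernel ridge regression with the empirical NTK on toy data.
X = rng.normal(size=(50, d))
y = np.sin(X[:, 0])                                     # toy target
K = empirical_ntk(X, X)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), y)   # ridge term assumed

X_test = rng.normal(size=(5, d))
print(empirical_ntk(X_test, X) @ alpha)                 # predictions
```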
-
Untrained convolutional neural networks have emerged as highly successful tools for image recovery and restoration. They are capable of solving standard inverse problems such as denoising and compressive sensing with excellent results by simply fitting a neural network model to measurements from a single image or signal, without the need for any additional training data. For some applications, this critically requires additional regularization in the form of early stopping of the optimization. For signal recovery from a few measurements, however, untrained convolutional networks have an intriguing self-regularizing property: even though the network can perfectly fit any image, the network recovers a natural image from few measurements when trained with gradient descent until convergence. In this paper, we provide numerical evidence for this property and study it theoretically. We show that, without any further regularization, an untrained convolutional neural network can approximately reconstruct signals and images that are sufficiently structured, from a near-minimal number of random measurements.
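A minimal PyTorch sketch of fitting an untrained convolutional network to a single noisy image, in the spirit described above (the architecture, synthetic image, and iteration count are made up; for denoising, the limited iteration budget plays the role of the early stopping the abstract mentions):

```python
import torch
import torch.nn as nn

# Hypothetical noisy target image (in practice, a real measurement).
torch.manual_seed(0)
clean = torch.zeros(1, 1, 32, 32)
clean[0, 0, 8:24, 8:24] = 1.0                 # a simple square
noisy = clean + 0.3 * torch.randn_like(clean)

# Small untrained CNN mapping a fixed random code to an image.
net = nn.Sequential(
    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
z = torch.randn(1, 8, 32, 32)                 # fixed input code, never updated

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(500):                       # early stopping: do not fit to convergence
    opt.zero_grad()
    loss = ((net(z) - noisy) ** 2).mean()     # fit the single noisy measurement
    loss.backward()
    opt.step()

denoised = net(z).detach()                    # network output is the recovered image
```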