Intermediate verification languages like Why3 and Boogie have made it much easier to build program verifiers, transforming the process into a logic compilation problem rather than a proof automation one. Why3 in particular implements a rich logic for program specification with polymorphism, algebraic data types, recursive functions and predicates, and inductive predicates; it translates this logic to over a dozen solvers and proof assistants. Accordingly, it serves as a backend for many tools, including Frama-C, EasyCrypt, and GNATProve for Ada SPARK. But how can we be sure that these tools are correct? The alternate foundational approach, taken by tools like VST and CakeML, provides strong guarantees by implementing the entire toolchain in a proof assistant, but these tools are harder to build and cannot directly take advantage of SMT solver automation. As a first step toward enabling automated tools with similar foundational guarantees, we give a formal semantics in Coq for the logic fragment of Why3. We show that our semantics are useful by giving a correct-by-construction natural deduction proof system for this logic, using this proof system to verify parts of Why3’s standard library, and proving sound two of Why3’s transformations used to convert terms and formulas into the simpler logics supported by the backend solvers.
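To make the "correct-by-construction natural deduction proof system" concrete, here is a heavily simplified sketch of the idea as an inductive provability judgment over a deeply embedded syntax. It is written in Lean purely for illustration (the paper's development is in Coq and covers Why3's full logic with polymorphism, algebraic data types, and recursive and inductive definitions); the names `Form` and `Proves` and the rule set are ours, not the paper's.

```lean
-- Illustrative sketch only (in Lean, not the paper's Coq development): a tiny
-- propositional fragment with natural deduction as an inductive judgment.
-- Why3's real logic adds polymorphism, algebraic data types, recursive
-- functions and predicates, and inductive predicates.
inductive Form where
  | atom : Nat → Form
  | imp  : Form → Form → Form
  | conj : Form → Form → Form

-- Derivability is defined by the rules themselves, so every inhabitant of
-- `Proves Γ A` is a well-formed derivation: correct by construction.
inductive Proves : List Form → Form → Prop where
  | assm  : ∀ {Γ : List Form} {A : Form}, A ∈ Γ → Proves Γ A
  | impI  : ∀ {Γ : List Form} {A B : Form}, Proves (A :: Γ) B → Proves Γ (Form.imp A B)
  | impE  : ∀ {Γ : List Form} {A B : Form}, Proves Γ (Form.imp A B) → Proves Γ A → Proves Γ B
  | conjI : ∀ {Γ : List Form} {A B : Form}, Proves Γ A → Proves Γ B → Proves Γ (Form.conj A B)
  | conjE₁ : ∀ {Γ : List Form} {A B : Form}, Proves Γ (Form.conj A B) → Proves Γ A
  | conjE₂ : ∀ {Γ : List Form} {A B : Form}, Proves Γ (Form.conj A B) → Proves Γ B

-- Soundness would then be stated against a semantic interpretation, roughly:
--   Proves Γ A → (every formula in Γ is true under the semantics) → A is true.
```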
Modular Primal-Dual Fixpoint Logic Solving for Temporal Verification
We present a novel approach to deciding the validity of formulas in first-order fixpoint logic with background theories and arbitrarily nested inductive and co-inductive predicates defining least and greatest fixpoints. Our approach is constraint-based and reduces the validity checking problem for a given first-order fixpoint logic formula (formally, an instance in a language called µCLP) to a constraint satisfaction problem for a recently introduced predicate constraint language. Coupled with an existing sound and relatively complete solver for the constraint language, this reduction alone already gives a sound and relatively complete method for deciding µCLP validity, but we further improve it to a novel modular primal-dual method. The key observations are (1) µCLP is closed under complement, so each (co-)inductive predicate in the original primal instance has a corresponding (co-)inductive predicate representing its complement in the dual instance obtained by taking the standard De Morgan dual of the primal instance, and (2) partial solutions for (co-)inductive predicates synthesized during the constraint solving process on the primal side can be used as sound upper bounds for the corresponding (co-)inductive predicates on the dual side, and vice versa. By solving the primal and dual problems in parallel and exchanging each other's partial solutions as sound bounds, the two processes mutually reduce each other's solution spaces, thus enabling rapid convergence. The approach is also modular in that the bounds are synthesized and exchanged at the granularity of individual (co-)inductive predicates. We demonstrate the utility of our fixpoint logic solving by encoding a wide variety of temporal verification problems in µCLP, including termination/non-termination, LTL, CTL, and even full modal µ-calculus model checking of infinite-state programs. The encodings exploit modularity in both the program and the property by expressing each loop and (recursive) function of the program, and each sub-formula of the property, as an individual (possibly nested) (co-)inductive predicate. Together with our modular primal-dual µCLP solving, we obtain a novel approach to efficiently solving a wide range of temporal verification problems.
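Observation (1) is essentially De Morgan dualization lifted to fixpoints: conjunction and disjunction swap, least fixpoints swap with greatest fixpoints, and every predicate reference is replaced by a reference to its complement. The sketch below illustrates this over a made-up toy AST (`Pred`/`And`/`Or`/`Mu`/`Nu`; quantifiers and background-theory atoms are omitted); it is not the actual µCLP syntax or any part of the paper's solver.

```python
# Illustrative toy only: a made-up AST for nested fixpoint formulas, not the
# actual µCLP syntax (quantifiers and background-theory atoms are omitted).
from dataclasses import dataclass
from typing import Union

@dataclass
class Pred:
    """Reference to a (co-)inductive predicate; `complement` marks a reference
    to the predicate representing its complement in the dual instance."""
    name: str
    complement: bool = False

@dataclass
class And:
    left: "Formula"
    right: "Formula"

@dataclass
class Or:
    left: "Formula"
    right: "Formula"

@dataclass
class Mu:
    """Least-fixpoint (inductive) predicate definition."""
    name: str
    body: "Formula"

@dataclass
class Nu:
    """Greatest-fixpoint (co-inductive) predicate definition."""
    name: str
    body: "Formula"

Formula = Union[Pred, And, Or, Mu, Nu]

def dual(f: Formula) -> Formula:
    """Standard De Morgan dual: swap And/Or and Mu/Nu, and send every predicate
    reference to a reference to its complement.  The dual of mu X. phi is
    nu X'. dual(phi), where X' represents the complement of X."""
    if isinstance(f, Pred):
        return Pred(f.name, not f.complement)
    if isinstance(f, And):
        return Or(dual(f.left), dual(f.right))
    if isinstance(f, Or):
        return And(dual(f.left), dual(f.right))
    if isinstance(f, Mu):
        return Nu(f.name, dual(f.body))
    if isinstance(f, Nu):
        return Mu(f.name, dual(f.body))
    raise TypeError(f"unexpected formula: {f!r}")
```

Observation (2) is then what drives the method: a partial solution synthesized for a predicate on the primal side serves as a sound upper bound for the corresponding complement predicate on the dual side (and vice versa), so the two parallel solvers continually prune each other's search spaces.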
- PAR ID: 10602124
- Publisher / Repository: Association for Computing Machinery (ACM)
- Journal Name: Proceedings of the ACM on Programming Languages
- Volume: 7
- Issue: POPL
- ISSN: 2475-1421
- Format(s): Medium: X
- Size(s): p. 2111-2140
- Sponsoring Org: National Science Foundation
More Like this
This article presents a type-based analysis for deriving upper bounds on the expected execution cost of probabilistic programs. The analysis is naturally compositional, parametric in the cost model, and supports higher-order functions and inductive data types. The derived bounds are multivariate polynomials that are functions of data structures. Bound inference is enabled by local type rules that reduce type inference to linear constraint solving. The type system is based on the potential method of amortized analysis and extends automatic amortized resource analysis (AARA) for deterministic programs. A main innovation is that bounds can contain symbolic probabilities, which may appear in data structures and function arguments. Another contribution is a novel soundness proof that establishes the correctness of the derived bounds with respect to a distribution-based operational cost semantics that also includes nontrivial diverging behavior. For cost models like time, derived bounds imply termination with probability one. To highlight the novel ideas, the presentation focuses on linear potential and a core language. However, the analysis is implemented as an extension of Resource Aware ML and supports polynomial bounds and user-defined data structures. The effectiveness of the technique is evaluated by analyzing the sample complexity of discrete distributions and with a novel average-case estimation for deterministic programs that combines expected cost analysis with statistical methods.
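For a flavor of the kind of bound involved, consider a toy tick-counting program whose expected cost is linear in the size of its input list, with a symbolic probability appearing as a coefficient. This Python sketch is only illustrative; the actual analysis is implemented as an extension of Resource Aware ML and infers such bounds statically rather than by simulation.

```python
import random

def process(xs, p):
    """Toy probabilistic program (not from the paper): for each element, take
    an expensive step (cost 2) with probability p, otherwise a cheap step
    (cost 1).

    Under a tick-based cost model, the expected cost is (1 + p) * len(xs):
    a bound that is linear in the size of the data structure and contains the
    symbolic probability p as a coefficient.
    """
    cost = 0
    for _ in xs:
        if random.random() < p:
            cost += 2  # expensive branch, taken with probability p
        else:
            cost += 1  # cheap branch, taken with probability 1 - p
    return cost

# Empirically, averaging process(range(100), 0.25) over many runs approaches
# the predicted expected cost (1 + 0.25) * 100 = 125.
```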
In this paper, we study the problem of formal verification for Answer Set Programming (ASP), namely, obtaining a formal proof showing that the answer sets of a given (non-ground) logic program P correctly correspond to the solutions to the problem encoded by P, regardless of the problem instance. To this aim, we use a formal specification language based on ASP modules, so that each module can be proved to capture some informal aspect of the problem in an isolated way. This specification language relies on a novel definition of (possibly nested, first-order) program modules that may incorporate local hidden atoms at different levels. Then, verifying the logic program P amounts to proving some kind of equivalence between P and its modular specification.
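As a drastically simplified picture of what "the answer sets correspond to the intended solutions" means, the Python sketch below compares the unique answer set of a ground, negation-free toy program (its least model) against an independently stated specification. Everything here is hypothetical and far from the paper's setting, which handles non-ground programs, nested modules, hidden atoms, and instance-independent formal proofs.

```python
def least_model(rules):
    """Least model of a ground, negation-free (definite) logic program, which
    is also its unique answer set.  Computed by iterating the immediate-
    consequence operator to a fixpoint.
    rules: list of (head_atom, [body_atoms]); facts have an empty body."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

# Toy ground program encoding reachability from node "a" over edges a->b, b->c.
program = [
    ("edge(a,b)", []), ("edge(b,c)", []),
    ("reach(a)", []),
    ("reach(b)", ["reach(a)", "edge(a,b)"]),
    ("reach(c)", ["reach(b)", "edge(b,c)"]),
]

# "Specification" for this one instance: the nodes actually reachable from a.
expected = {"reach(a)", "reach(b)", "reach(c)"}

answer_set = {atom for atom in least_model(program) if atom.startswith("reach")}
assert answer_set == expected  # the program captures the intended solutions
```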
Steinmetz, A. (Ed.) Manual building code compliance checking is a time-consuming, labor-intensive, and error-prone process. Automated logic-based reasoning is an essential step in the automation of this process. There have been previous studies using logic programming languages for automated logic-based reasoning to support automated compliance checking (ACC) of building designs with building codes. As a high-performance implementation of the standard logic programming language, B-Prolog was widely used in these studies. However, because B-Prolog supports dynamic predicates and user-defined operators, a predicate's behavior can vary with different user definitions, so B-Prolog is sometimes not reliable for building code reasoning. As a more expressive, scalable, and reliable alternative to B-Prolog, Picat, a logic-based multi-paradigm programming language, provides a new and potentially more powerful platform for automated logic-based reasoning in ACC. To explore the potential value of Picat in ACC, in this study the authors compared Picat and B-Prolog performance in automatically checking 20 requirement rules in the 2015 International Building Code. The experimental results showed that automated checking of the building code rules was faster in the B-Prolog version, whereas the Picat version was more reliable. This could be the result of B-Prolog using unification and Picat using pattern matching for indexing rules. More potential applications of Picat in the ACC domain need further research. Furthermore, this schema could be used in the teaching of ACC to graduate construction students, illustrating the need to focus on the reliability, predictability, and scalability of the process, in order to provide a practical solution to improving code compliance checking processes.
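For a rough flavor of rule-based compliance checking with one-directional pattern matching (the dispatch style Picat uses), here is a Python sketch rather than Picat or B-Prolog code, with a made-up requirement and threshold rather than an actual 2015 IBC provision.

```python
# Hypothetical illustration in Python (not Picat or B-Prolog), with a made-up
# requirement and a stand-in threshold rather than an actual 2015 IBC rule.
def check_door_rule(element: dict) -> list[str]:
    """One-directional pattern matching over a design-model element: a rule
    fires only on elements that match its pattern, in the spirit of
    pattern-matching-based rule indexing."""
    violations = []
    match element:
        case {"type": "door", "clear_width_mm": w} if w < 813:  # 813 mm is a stand-in minimum
            violations.append(f"door clear width {w} mm is below the assumed minimum")
        case {"type": "door"} if "clear_width_mm" not in element:
            violations.append("door is missing the clear_width_mm attribute")
    return violations

# Example: a non-compliant door in a hypothetical design model.
print(check_door_rule({"type": "door", "clear_width_mm": 760}))
```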
A major goal of grounded language learning research is to enable robots to connect language predicates to a robot’s physical interactive perception of the world. Coupling object exploratory behaviors such as grasping, lifting, and looking with multiple sensory modalities (e.g., audio, haptics, and vision) enables a robot to ground non-visual words like “heavy” as well as visual words like “red”. A major limitation of existing approaches to multi-modal language grounding is that a robot has to exhaustively explore training objects with a variety of actions when learning a new such language predicate. This paper proposes a method for guiding a robot’s behavioral exploration policy when learning a novel predicate based on known grounded predicates and the novel predicate’s linguistic relationship to them. We demonstrate our approach on two datasets in which a robot explored large sets of objects and was tasked with learning to recognize whether novel words applied to those objects.
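One way to picture this kind of guidance is to score each exploratory behavior by how informative it has been for known predicates that are linguistically close to the novel word. The following Python sketch is a hypothetical rendering of that idea, not the paper's model; the function names, the embedding source, and the similarity-weighted averaging are all illustrative assumptions.

```python
# Hypothetical sketch, not the paper's method: names, the embedding source,
# and the similarity-weighted averaging are illustrative assumptions.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def behavior_scores(novel_vec, known_preds, word_vecs, informativeness):
    """Score each exploratory behavior (e.g., grasp, lift, look) for a novel
    word by averaging how informative that behavior was for already-grounded
    predicates, weighted by linguistic (embedding) similarity to the novel word.

    novel_vec:       embedding of the novel word
    known_preds:     names of already-grounded predicates
    word_vecs:       dict mapping predicate name -> embedding
    informativeness: dict mapping behavior -> {predicate name: score in [0, 1]}
    """
    scores = {}
    for behavior, per_pred in informativeness.items():
        weights = [max(cosine(novel_vec, word_vecs[p]), 0.0) for p in known_preds]
        total = sum(weights)
        scores[behavior] = (
            sum(w * per_pred[p] for w, p in zip(weights, known_preds)) / total
            if total > 0 else 0.0
        )
    return scores  # explore behaviors in decreasing order of score
```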