Title: Computing Uniform Interpolants for EUF via (conditional) DAG-based Compact Representations
The concept of a uniform interpolant of a quantifier-free formula with respect to a list of symbols, while well known in the logic literature, has long been unknown to the formal methods and automated reasoning community. This concept is precisely defined. Two algorithms for computing the uniform interpolant of a quantifier-free formula in EUF endowed with a list of symbols to be eliminated are proposed. The first algorithm is non-deterministic and generates a uniform interpolant expressed as a disjunction of conjunctions of literals, whereas the second algorithm gives a compact representation of a uniform interpolant as a conjunction of Horn clauses. Both algorithms exploit efficient dedicated DAG representations of terms. Correctness and completeness proofs are supplied, using arguments combining rewrite techniques with model theory.
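For orientation, the defining property of a uniform interpolant can be stated as follows; this formulation and the small EUF example after it are illustrative glosses, not text taken from the paper. Given a quantifier-free formula $$\varphi(\underline{e},\underline{y})$$ and a list $$\underline{e}$$ of symbols to be eliminated, a quantifier-free formula $$\psi(\underline{y})$$ not containing $$\underline{e}$$ is a uniform interpolant of $$\varphi$$ with respect to $$\underline{e}$$ when $$\varphi \models_{EUF} \psi$$ and, for every quantifier-free $$\theta(\underline{y},\underline{z})$$ not containing $$\underline{e}$$ (with $$\underline{z}$$ fresh), $$\varphi \models_{EUF} \theta$$ implies $$\psi \models_{EUF} \theta$$. In other words, $$\psi$$ is the strongest quantifier-free consequence of $$\varphi$$ in the restricted signature and, unlike an ordinary Craig interpolant, it does not depend on a particular second formula. For example, eliminating $$e$$ from $$f(e,y_1) = y_2 \wedge f(e,y_3) = y_4$$ yields the uniform interpolant $$y_1 = y_3 \rightarrow y_2 = y_4$$: the implication follows from the premises by congruence, and any model of the implication can be expanded with a fresh $$e$$ satisfying the original conjunction. Conditional equations of this shape illustrate why the compact output of the second algorithm naturally takes the form of Horn clauses.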
Award ID(s):
1908804
PAR ID:
10642533
Author(s) / Creator(s):
; ;
Date Published:
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The concept of a uniform interpolant for a quantifier-free formula from a given formula with a list of symbols, while well-known in the logic literature, has been unknown to the formal methods and automated reasoning community. This concept is precisely defined. Two algorithms for computing the uniform interpolant of a quantifier-free formula in EUF endowed with a list of symbols to be eliminated are proposed. The first algorithm is non-deterministic and generates a uniform interpolant expressed as a disjunction of conjunctions of literals, whereas the second algorithm gives a compact representation of a uniform interpolant as a conjunction of Horn clauses. Both algorithms exploit efficient dedicated DAG representations of terms. Correctness and completeness proofs are supplied, using arguments combining rewrite techniques with model theory.
  2. The concept of uniform interpolant for a quantifier-free formula from a given formula with a list of symbols, while well-known in the logic literature, has been unknown to the formal methods and automated reasoning community for a long time. This concept is precisely defined. Two algorithms for computing quantifier-free uniform interpolants in the theory of equality over uninterpreted symbols (EUF) endowed with a list of symbols to be eliminated are proposed. The first algorithm is non-deterministic and generates a uniform interpolant expressed as a disjunction of conjunctions of literals, whereas the second algorithm gives a compact representation of a uniform interpolant as a conjunction of Horn clauses. Both algorithms exploit efficient dedicated DAG representations of terms. Correctness and completeness proofs are supplied, using arguments combining rewrite techniques with model theory.
  3. In this article, we enrich McCarthy’s theory of extensional arrays with a length and a maxdiff operation. As is well known, some diff operation (i.e., some kind of difference function showing where two unequal arrays differ) is needed to keep interpolants quantifier-free in array theories. Our maxdiff operation returns the max index where two arrays differ; thus, it has a univocally determined semantics (a rough unpacking of this intended semantics is sketched after this list). The length function is a natural complement of such a maxdiff operation and is needed to handle real arrays. Obtaining interpolation results for such a rich theory is a surprisingly hard task. We get such results via a thorough semantic analysis of the models of the theory and of their amalgamation and strong amalgamation properties. The results are modular with respect to the index theory; we show how to convert them into concrete interpolation algorithms via a hierarchical approach realizing a polynomial reduction to interpolation in linear arithmetics endowed with free function symbols.
  4. Oh, A; Naumann, T; Globerson, A; Saenko, K; Hardt, M; Levine, S (Ed.)
    We investigate replicable learning algorithms. Informally, a learning algorithm is replicable if it outputs the same canonical hypothesis over multiple runs with high probability, even when different runs observe a different set of samples from the unknown data distribution. In general, such a strong notion of replicability is not achievable. Thus, we consider two feasible notions of replicability called list replicability and certificate replicability. Intuitively, these notions capture the degree of (non-)replicability. The goal is to design learning algorithms with optimal list and certificate complexities while minimizing the sample complexity. Our contributions are the following. 1. We first study the learning task of estimating the biases of $$d$$ coins, up to an additive error of $$\varepsilon$$, by observing samples. For this task, we design a $$(d+1)$$-list replicable algorithm. To complement this result, we establish that the list complexity is optimal, i.e., there are no learning algorithms with a list size smaller than $$d+1$$ for this task. We also design learning algorithms with certificate complexity $$\tilde{O}(\log d)$$. The sample complexity of both these algorithms is $$\tilde{O}(\frac{d^2}{\varepsilon^2})$$, where $$\varepsilon$$ is the approximation error parameter (for a constant error probability). 2. In the PAC model, we show that any hypothesis class that is learnable with $$d$$ nonadaptive statistical queries can be learned via a $$(d+1)$$-list replicable algorithm and also via a $$\tilde{O}(\log d)$$-certificate replicable algorithm. The sample complexity of both these algorithms is $$\tilde{O}(\frac{d^2}{\nu^2})$$, where $$\nu$$ is the approximation error of the statistical query. We also show that for the concept class \dtep, the list complexity is exactly $$d+1$$ with respect to the uniform distribution. To establish our upper bound results, we use rounding schemes induced by geometric partitions with certain properties. We use the Sperner/KKM lemma to establish the lower bound results.
  5. Several approaches to quantifier-free interpolation algorithms for theories of arrays have been proposed; they extend the language with a binary function skolemizing the extensionality principle. In FoSSaCS 2021, the last three authors studied the enrichment of McCarthy’s theory of extensional arrays with a maxdiff operation. This paper discusses the implementation of the interpolation algorithm proposed in FoSSaCS 2021 using the Z3 API. The implementation allows the user to choose iZ3, Mathsat, or SMTInterpol as the interpolation engine. The tool returns a formula in SMTLIB2 format, which allows compatibility with model checkers and invariant generators using that format. We compare our algorithm with state-of-the-art interpolation engines. Our experiments using unsatisfiable formulæ extracted with the model checker UAutomizer show the feasibility of our tool. For that purpose, we used C programs from the ReachSafety-Arrays and MemSafety-Arrays tracks of SV-COMP. (A minimal sketch of validating an interpolant through the Z3 API appears after this list.)
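Two clarifying sketches for the related items above follow.

On item 3: the description of maxdiff can be unpacked into constraints of roughly the following shape (a paraphrase of the stated intent, not the paper's axiomatization): $$a \neq b \rightarrow a[\mathrm{maxdiff}(a,b)] \neq b[\mathrm{maxdiff}(a,b)]$$ and $$i > \mathrm{maxdiff}(a,b) \rightarrow a[i] = b[i]$$, together with a fixed convention for the value of $$\mathrm{maxdiff}(a,a)$$.

On item 5: the tool described there emits interpolants in SMTLIB2 format, so their defining conditions can be checked independently. Below is a minimal, hypothetical sketch using the Z3 Python API that validates a candidate binary (Craig) interpolant I for a pair (A, B) over an EUF signature; it is not the paper's implementation, it does not compute interpolants, and all formulas in it are invented for illustration.

# Hypothetical sketch: check that I is a Craig interpolant for (A, B),
# i.e., A |= I and I /\ B is unsatisfiable, using the Z3 Python API.
from z3 import DeclareSort, Function, Consts, Solver, Implies, Not, And, unsat

S = DeclareSort('S')                        # an uninterpreted sort (EUF setting)
f = Function('f', S, S)                     # free unary function symbols
g = Function('g', S, S)
a, b, y = Consts('a b y', S)                # free constants

A = And(y == f(a), g(y) == b)               # A mentions the symbol y
B = g(f(a)) != b                            # B shares only a, b, f, g with A
I = g(f(a)) == b                            # candidate interpolant over shared symbols

def valid(fml):
    # A formula is valid iff its negation is unsatisfiable.
    s = Solver()
    s.add(Not(fml))
    return s.check() == unsat

print("A implies I:", valid(Implies(A, I)))             # expected: True
s = Solver()
s.add(And(I, B))
print("I and B unsatisfiable:", s.check() == unsat)     # expected: True

A check of this kind can also serve as a sanity test on the output of any of the interpolation engines (iZ3, Mathsat, SMTInterpol) mentioned in the abstract.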