Uniform Interpolants in EUF: Algorithms using DAG-representations
The concept of a uniform interpolant for a quantifier-free formula from a given formula with a list of symbols, while well-known in the logic literature, has been unknown to the formal methods and automated reasoning community for a long time. This concept is precisely defined. Two algorithms for computing quantifier-free uniform interpolants in the theory of equality over uninterpreted symbols (EUF) endowed with a list of symbols to be eliminated are proposed. The first algorithm is non-deterministic and generates a uniform interpolant expressed as a disjunction of conjunctions of literals, whereas the second algorithm gives a compact representation of a uniform interpolant as a conjunction of Horn clauses. Both algorithms exploit efficient dedicated DAG representations of terms. Correctness and completeness proofs are supplied, using arguments combining rewrite techniques with model theory.
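To illustrate the flavor of the problem, the sketch below handles the simplest case only: a conjunction of ground EUF equalities whose terms are shared Python tuples (a poor man's DAG), a naive congruence closure, and a projection that keeps the implied equalities avoiding the symbols to be eliminated. This is a minimal sketch and not the paper's algorithm — it handles neither disequalities nor the Horn-clause output, and all names are illustrative.

```python
from itertools import combinations

def subterms(t):
    """All subterms of a ground term; terms are strings (constants) or
    tuples ('f', arg1, ..., argn) for function applications."""
    yield t
    if isinstance(t, tuple):
        for a in t[1:]:
            yield from subterms(a)

def symbols(t):
    """Every constant and function symbol occurring in t."""
    if isinstance(t, tuple):
        return {t[0]}.union(*(symbols(a) for a in t[1:]))
    return {t}

def project_equalities(equations, eliminate):
    """Close the equations under congruence, then keep the implied
    equalities between subterms that avoid the eliminated symbols."""
    nodes = set()
    for l, r in equations:
        nodes.update(subterms(l))
        nodes.update(subterms(r))
    parent = {t: t for t in nodes}          # union-find over DAG nodes
    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]
            t = parent[t]
        return t
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    for l, r in equations:
        union(l, r)
    # congruence rule: merge f(s1..sn) and f(t1..tn) when all si ~ ti
    apps = [t for t in nodes if isinstance(t, tuple)]
    changed = True
    while changed:
        changed = False
        for a, b in combinations(apps, 2):
            if (a[0] == b[0] and len(a) == len(b) and find(a) != find(b)
                    and all(find(x) == find(y) for x, y in zip(a[1:], b[1:]))):
                union(a, b)
                changed = True
    keep = [t for t in nodes if not (symbols(t) & eliminate)]
    return {frozenset((l, r)) for l, r in combinations(keep, 2)
            if find(l) == find(r)}

# From a = f(e) and b = f(e), eliminating e leaves the consequence a = b.
print(project_equalities([(('f', 'e'), 'a'), (('f', 'e'), 'b')], {'e'}))
```

Eliminating e from a = f(e) ∧ b = f(e) yields a = b, the standard introductory example of this kind of symbol elimination.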
- Award ID(s): 1908804
- PAR ID: 10642522
- Publisher / Repository: Logical Methods in Computer Science
- Date Published:
- Journal Name: Logical Methods in Computer Science
- Volume: Volume 18, Issue 2
- ISSN: 1860-5974
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Oh, A; Naumann, T; Globerson, A; Saenko, K; Hardt, M; Levine, S (Ed.) We investigate replicable learning algorithms. Informally, a learning algorithm is replicable if the algorithm outputs the same canonical hypothesis over multiple runs with high probability, even when different runs observe a different set of samples from the unknown data distribution. In general, such a strong notion of replicability is not achievable. Thus we consider two feasible notions of replicability called list replicability and certificate replicability. Intuitively, these notions capture the degree of (non-)replicability. The goal is to design learning algorithms with optimal list and certificate complexities while minimizing the sample complexity. Our contributions are the following. 1. We first study the learning task of estimating the biases of $d$ coins, up to an additive error of $\varepsilon$, by observing samples. For this task, we design a $(d+1)$-list replicable algorithm. To complement this result, we establish that the list complexity is optimal, i.e., there are no learning algorithms with a list size smaller than $d+1$ for this task. We also design learning algorithms with certificate complexity $\tilde{O}(\log d)$. The sample complexity of both these algorithms is $\tilde{O}(\frac{d^2}{\varepsilon^2})$, where $\varepsilon$ is the approximation error parameter (for a constant error probability). 2. In the PAC model, we show that any hypothesis class that is learnable with $d$ nonadaptive statistical queries can be learned via a $(d+1)$-list replicable algorithm and also via a $\tilde{O}(\log d)$-certificate replicable algorithm. The sample complexity of both these algorithms is $\tilde{O}(\frac{d^2}{\nu^2})$, where $\nu$ is the approximation error of the statistical query. We also show that for the concept class \dtep, the list complexity is exactly $d+1$ with respect to the uniform distribution. To establish our upper bound results we use rounding schemes induced by geometric partitions with certain properties. We use the Sperner/KKM Lemma to establish the lower bound results.
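The grid-rounding idea behind list replicability (the entry above credits rounding schemes induced by geometric partitions) can be sketched for a single coin: round the empirical bias to a grid of spacing 2ε with an offset shared across runs, so runs that see different samples but nearby empirical means return the identical canonical value. The paper's $(d+1)$-list algorithm uses higher-dimensional partitions; this one-dimensional sketch, including the function name, is an illustrative assumption.

```python
def replicable_bias_estimate(samples, eps, offset):
    """Round the empirical bias of a coin to the grid
    {offset + 2*eps*k : k an integer}.  With an offset shared across
    runs, any two runs whose empirical means land in the same grid cell
    output the same canonical value -- the replicability property."""
    p_hat = sum(samples) / len(samples)
    cell = 2 * eps
    k = round((p_hat - offset) / cell)
    return offset + k * cell

# Two runs with different samples and nearby empirical means (0.70 vs 0.72)
# round to the same grid point:
run1 = replicable_bias_estimate([1] * 7 + [0] * 3, eps=0.1, offset=0.05)
run2 = replicable_bias_estimate([1] * 18 + [0] * 7, eps=0.1, offset=0.05)
print(run1 == run2)
```

A run only risks a different output when its empirical mean falls near a cell boundary, which a good choice of random offset makes unlikely; that is where the list bound comes from.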
-
Algorithms for computing congruence closure of ground equations over uninterpreted symbols and interpreted symbols satisfying associativity and commutativity (AC) properties are proposed. The algorithms are based on a framework for computing a congruence closure by abstracting nonflat terms by constants, as proposed first in Kapur's congruence closure algorithm (RTA97). The framework is general, flexible, and has been extended also to develop congruence closure algorithms for the cases when associative-commutative function symbols can have additional properties including idempotency, nilpotency, identities, cancellativity and group properties, as well as their various combinations. Algorithms are modular; their correctness and termination proofs are simple, exploiting modularity. Unlike earlier algorithms, the proposed algorithms neither rely on complex AC-compatible well-founded orderings on nonvariable terms nor need to use the associative-commutative unification and extension rules in completion for generating canonical rewrite systems for congruence closures. They are particularly suited for integrating into Satisfiability Modulo Theories (SMT) solvers. A new way to view the Groebner basis algorithm for polynomial ideals with integer coefficients as a combination of the congruence closure over the AC symbol * with the identity 1 and the congruence closure over an Abelian group with + is outlined.
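The flattening step the entry above attributes to Kapur's framework — abstracting nonflat terms by constants — can be sketched as follows. The fresh names c1, c2, … and the tuple representation are illustrative assumptions; AC symbols, term orderings, and completion are out of scope for this sketch.

```python
def flatten(equations):
    """Replace each non-constant subterm by a fresh constant, producing
    flat rules f(c, ..., c) = c_i plus equations between constants.
    Shared subterms receive the same name, so the result mirrors a DAG
    of the input terms."""
    names, flat, fresh = {}, [], [0]

    def name(t):
        if not isinstance(t, tuple):          # constants stay as they are
            return t
        key = (t[0],) + tuple(name(a) for a in t[1:])
        if key not in names:
            fresh[0] += 1
            names[key] = f"c{fresh[0]}"
            flat.append((key, names[key]))    # flat rule: f(args) = c_i
        return names[key]

    consts = [(name(l), name(r)) for l, r in equations]
    return flat, consts

# f(g(a)) = b flattens to g(a) = c1, f(c1) = c2 and the equation c2 = b.
rules, eqs = flatten([(('f', ('g', 'a')), 'b')])
print(rules, eqs)
```

After flattening, a plain union-find over the constants plus one congruence pass over the flat rules suffices for ground congruence closure, which is what makes the framework modular.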
-
In this article, we enrich McCarthy’s theory of extensional arrays with a length and a maxdiff operation. As is well-known, some diff operation (i.e., some kind of difference function showing where two unequal arrays differ) is needed to keep interpolants quantifier free in array theories. Our maxdiff operation returns the max index where two arrays differ; thus, it has a univocally determined semantics. The length function is a natural complement of such a maxdiff operation and is needed to handle real arrays. Obtaining interpolation results for such a rich theory is a surprisingly hard task. We get such results via a thorough semantic analysis of the models of the theory and of their amalgamation and strong amalgamation properties. The results are modular with respect to the index theory; we show how to convert them into concrete interpolation algorithms via a hierarchical approach realizing a polynomial reduction to interpolation in linear arithmetics endowed with free function symbols.
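The univocal semantics of maxdiff described above is easy to state executably. The sketch below fixes one convention as an assumption — equal-length arrays, with -1 returned when they agree everywhere — since the theory's actual treatment of length and indices is richer than this.

```python
def maxdiff(a, b):
    """Largest index at which the two (equal-length) arrays differ;
    returns -1 when they agree everywhere.  This pins down a unique
    value, unlike a generic 'diff' that may return any witness index."""
    assert len(a) == len(b), "sketch assumes equal lengths"
    for i in range(len(a) - 1, -1, -1):
        if a[i] != b[i]:
            return i
    return -1

print(maxdiff([3, 1, 4, 1], [3, 1, 4, 1]))  # -1: the arrays are equal
print(maxdiff([3, 1, 4, 1], [3, 7, 4, 5]))  # 3: the last disagreement wins
```

Taking the *maximum* disagreement, rather than an arbitrary one, is what gives the operation a single determined value in every model.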