Abstract: We study the effective front associated with first-order front propagations in two dimensions ($n=2$) in the periodic setting with continuous coefficients. Our main result says that the boundary of the effective front is differentiable at every irrational point. Equivalently, the stable norm associated with a continuous $\mathbb{Z}^2$-periodic Riemannian metric is differentiable at irrational points. This conclusion was obtained decades ago for smooth metrics [4, 6]. To the best of our knowledge, our result provides the first nontrivial property of effective fronts in the continuous setting, which is the standard assumption in the partial differential equations (PDE) literature. Combined with the sufficiency result in [15], our result leads to a realization-type conclusion: with continuous coefficients, a polygon can be an effective front if and only if it is centrally symmetric with rational vertices and nonempty interior.
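For context, a standard formulation of the stable norm (the notation $g$ for the metric and $d_g$ for its distance is ours, not the abstract's) is the large-scale average of the Riemannian distance:

```latex
% Stable norm of a Z^2-periodic Riemannian metric g on R^2,
% where d_g is the distance induced by g (standard definition):
\[
  \|h\|_{\mathrm{st}} \;=\; \lim_{t \to \infty} \frac{d_g(0,\, t h)}{t},
  \qquad h \in \mathbb{R}^2 .
\]
% "Irrational points" are boundary points of the unit ball lying in
% directions h whose slope h_2/h_1 is irrational; the result above says
% the norm (equivalently, the effective front's boundary) is
% differentiable at every such point.
```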
Evaluating Relaxations of Logic for Neural Networks: A Comprehensive Study
Symbolic knowledge can provide crucial inductive bias for training neural models, especially in low-data regimes. A successful strategy for incorporating such knowledge involves relaxing logical statements into sub-differentiable losses for optimization. In this paper, we study the question of how best to relax logical expressions that represent labeled examples and knowledge about a problem; we focus on sub-differentiable t-norm relaxations of logic. We present theoretical and empirical criteria for characterizing which relaxation would perform best in various scenarios. In our theoretical study, driven by the goal of preserving tautologies, the Łukasiewicz t-norm performs best. However, in our empirical analysis on the text chunking and digit recognition tasks, the product t-norm achieves the best predictive performance. We analyze this apparent discrepancy, and conclude with a list of best practices for defining loss functions via logic.
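To make the comparison concrete, here is a minimal sketch (our own illustration, not the paper's code) of how the two t-norms compared above relax an implication A → B over model probabilities, and how the resulting truth value becomes a loss:

```python
import numpy as np

# Hedged sketch: two common t-norm relaxations of the implication
# 'antecedent -> consequent', turned into losses to *minimize*.
# Inputs are model probabilities in [0, 1]; function names are
# illustrative, not the paper's exact formulation.

def lukasiewicz_implication(a: float, b: float) -> float:
    """Lukasiewicz residuum: truth value of 'a -> b'."""
    return min(1.0, 1.0 - a + b)

def product_implication(a: float, b: float) -> float:
    """Product (Goguen) residuum of 'a -> b'."""
    return 1.0 if a <= b else b / a

def loss_from_truth(truth: float, kind: str = "log") -> float:
    """Relax 'the statement holds' into a loss: 1 - t, or -log t."""
    return 1.0 - truth if kind == "linear" else -np.log(max(truth, 1e-12))

# Example: the model is fairly sure of A (0.9) but unsure of B (0.4),
# so the constraint A -> B is violated and both losses are positive.
a, b = 0.9, 0.4
print(loss_from_truth(lukasiewicz_implication(a, b), "linear"))  # 0.5
print(loss_from_truth(product_implication(a, b)))  # -log(4/9) ~ 0.81
```

One intuition for the empirical gap, under this toy setup: the log-of-product loss grows without bound as the consequent's probability shrinks, while the Łukasiewicz loss is capped at 1, so the two give very different gradient signals on badly violated constraints.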
- Award ID(s):
- 1801446
- PAR ID:
- 10347015
- Date Published:
- Journal Name:
- Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21)
- Page Range / eLocation ID:
- 2812 to 2818
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
We present Scallop, a language which combines the benefits of deep learning and logical reasoning. Scallop enables users to write a wide range of neurosymbolic applications and train them in a data- and compute-efficient manner. It achieves these goals through three key features: 1) a flexible symbolic representation that is based on the relational data model; 2) a declarative logic programming language that is based on Datalog and supports recursion, aggregation, and negation; and 3) a framework for automatic and efficient differentiable reasoning that is based on the theory of provenance semirings. We evaluate Scallop on a suite of eight neurosymbolic applications from the literature. Our evaluation demonstrates that Scallop is capable of expressing algorithmic reasoning in diverse and challenging AI tasks, provides a succinct interface for machine learning programmers to integrate logical domain knowledge, and yields solutions that are comparable or superior to state-of-the-art models in terms of accuracy. Furthermore, Scallop's solutions outperform these models in aspects such as runtime and data efficiency, interpretability, and generalizability.
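The provenance-semiring idea behind feature 3) can be sketched in a few lines of Python (our illustration; Scallop's actual engine and syntax differ): map conjunction and disjunction in a rule body to differentiable semiring operations, so a derived fact's score is differentiable in the input facts' probabilities.

```python
import torch

# Illustrative only: score the derived fact path(0, 2) from the rule
#   path(a, c) :- edge(a, b), edge(b, c)
# using a probability-style semiring: AND -> product, OR -> noisy-or.
# Scallop's real engine generalizes this via provenance semirings.

edge = {  # fact -> probability (e.g., the output of a neural net)
    (0, 1): torch.tensor(0.9, requires_grad=True),
    (1, 2): torch.tensor(0.8, requires_grad=True),
    (0, 2): torch.tensor(0.1, requires_grad=True),
}

def conj(*ps):  # logical AND under the product semiring
    out = torch.tensor(1.0)
    for p in ps:
        out = out * p
    return out

def disj(ps):  # logical OR as noisy-or: 1 - prod(1 - p)
    out = torch.tensor(1.0)
    for p in ps:
        out = out * (1 - p)
    return 1 - out

# Two derivations of path(0, 2): the direct edge, or 0 -> 1 -> 2.
score = disj([edge[(0, 2)], conj(edge[(0, 1)], edge[(1, 2)])])
score.backward()  # gradients flow back to every input fact
print(float(score), float(edge[(0, 1)].grad))  # 0.748, 0.72
```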
We consider the problem of establishing that a program-synthesis problem is unrealizable (i.e., has no solution in a given search space of programs). Prior work on unrealizability has developed some automatic techniques to establish that a problem is unrealizable; however, these techniques are all black-box, meaning that they conceal the reasoning behind why a synthesis problem is unrealizable. In this paper, we present a Hoare-style reasoning system, called unrealizability logic, for establishing that a program-synthesis problem is unrealizable. To the best of our knowledge, unrealizability logic is the first proof system for overapproximating the execution of an infinite set of imperative programs. The logic provides a general, logical system for building checkable proofs about unrealizability. Similar to how Hoare logic distills the fundamental concepts behind algorithms and tools to prove the correctness of programs, unrealizability logic distills into a single logical system the fundamental concepts that were hidden within prior tools capable of establishing that a program-synthesis problem is unrealizable.
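A toy example may help fix the idea (our construction, not from the paper). The search space below is infinite, and the point of unrealizability logic is to prove nonexistence over the whole space with one Hoare-style argument rather than by enumeration; the code merely checks a finite slice and records the full proof as a comment.

```python
# A tiny unrealizable synthesis problem (our toy example): the search
# space is all programs of the form
#     f(x) = x + c        for an integer constant c,
# and the spec demands f(0) == 0 and f(1) == 2.
# No c works: f(0) == 0 forces c == 0, but then f(1) == 1 != 2.

def satisfies_spec(c: int) -> bool:
    f = lambda x: x + c
    return f(0) == 0 and f(1) == 2

# Enumeration can only ever check finitely many candidates...
assert not any(satisfies_spec(c) for c in range(-1000, 1001))

# ...whereas the logical argument covers all of Z at once:
# f(0) = c, so the spec pins c = 0; then f(1) = 1, contradicting
# f(1) = 2. Hence the problem is unrealizable.
```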
We consider the problem of answering queries about formulas of first-order logic based on background knowledge partially represented explicitly as other formulas, and partially represented as examples independently drawn from a fixed probability distribution. PAC semantics, introduced by Valiant, is one rigorous, general proposal for learning to reason in formal languages: although weaker than classical entailment, it allows for a powerful model-theoretic framework for answering queries while requiring minimal assumptions about the form of the distribution in question. To date, however, the most significant limitation of that approach, and more generally of most machine learning approaches with robustness guarantees, is that the logical language is ultimately essentially propositional, with finitely many atoms. Indeed, the theoretical findings on the learning of relational theories in such generality have been resoundingly negative. This is despite the fact that first-order logic is widely argued to be most appropriate for representing human knowledge. In this work, we present a new theoretical approach to robustly learning to reason in first-order logic, and consider universally quantified clauses over a countably infinite domain. Our results exploit symmetries exhibited by constants in the language, and generalize the notion of implicit learnability to show how queries can be computed against (implicitly) learned first-order background knowledge.
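A minimal sketch of the sampling side of PAC semantics, in a propositional toy setting of our own (not the paper's first-order machinery): a query is accepted as (1 − ε)-valid if it holds on at least a (1 − ε) fraction of sampled worlds.

```python
import random

# Toy sketch of PAC-semantic validity (propositional; the
# distribution over worlds below is a hypothetical stand-in).

random.seed(0)

def sample_world():
    a = random.random() < 0.95
    b = a or (random.random() < 0.2)  # built so that 'a -> b' holds
    c = random.random() < 0.5
    return {"a": a, "b": b, "c": c}

def pac_valid(query, samples=10_000, eps=0.1) -> bool:
    hits = sum(query(sample_world()) for _ in range(samples))
    return hits / samples >= 1 - eps

print(pac_valid(lambda w: not w["a"] or w["b"]))  # 'a -> b': accepted
print(pac_valid(lambda w: w["c"]))                # 'c': rejected
```

This acceptance criterion is weaker than classical entailment but needs no explicit model of the distribution, which is the trade-off the abstract alludes to.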
Krylov subspace methods are a ubiquitous tool for computing near-optimal rank-k approximations of large matrices. While "large block" Krylov methods with block size at least k give the best known theoretical guarantees, block size one (a single vector) or a small constant is often preferred in practice. Despite their popularity, we lack theoretical bounds on the performance of such "small block" Krylov methods for low-rank approximation. We address this gap between theory and practice by proving that small block Krylov methods essentially match all known low-rank approximation guarantees for large block methods. Via a black-box reduction we show, for example, that the standard single vector Krylov method run for t iterations obtains the same spectral norm and Frobenius norm error bounds as a Krylov method with block size ℓ ≥ k run for O(t/ℓ) iterations, up to a logarithmic dependence on the smallest gap between sequential singular values. That is, for a given number of matrix-vector products, single vector methods are essentially as effective as any choice of large block size. By combining our result with tail-bounds on eigenvalue gaps in random matrices, we prove that the dependence on the smallest singular value gap can be eliminated if the input matrix is perturbed by a small random matrix. Further, we show that single vector methods match the more complex algorithm of [Bakshi et al. '22], which combines the results of multiple block sizes to achieve an improved algorithm for Schatten p-norm low-rank approximation.
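The single vector method in question can be sketched in a few lines of NumPy (our simplified illustration; function and parameter names are ours): build the Krylov space spanned by Ag, (AAᵀ)Ag, ..., (AAᵀ)ᵗ⁻¹Ag from one random vector g, orthonormalize it, and project A onto it.

```python
import numpy as np

# Minimal sketch of a block-size-one ("single vector") Krylov method
# for rank-k low-rank approximation.

def single_vector_krylov_lowrank(A, k, t, seed=0):
    assert t >= k, "need a Krylov space of dimension at least k"
    rng = np.random.default_rng(seed)
    v = A @ rng.standard_normal(A.shape[1])
    V = [v / np.linalg.norm(v)]
    for _ in range(t - 1):
        v = A @ (A.T @ V[-1])            # next Krylov vector
        V.append(v / np.linalg.norm(v))  # normalize for stability
    Q, _ = np.linalg.qr(np.column_stack(V))  # orthonormal basis
    # Best rank-k approximation within the span of Q:
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U[:, :k]) * s[:k] @ Vt[:k]

A = np.random.default_rng(1).standard_normal((200, 100))
A_k = single_vector_krylov_lowrank(A, k=5, t=40)
print(np.linalg.norm(A - A_k) / np.linalg.norm(A))  # relative error
```

The result above says that running this with t iterations should be roughly as accurate as a block method with block size ℓ ≥ k run for O(t/ℓ) iterations, up to the stated logarithmic gap dependence.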