
This content will become publicly available on April 1, 2026

Title: The logic of cardinality comparison without the axiom of choice
We work in the setting of Zermelo-Fraenkel set theory without assuming the Axiom of Choice. We consider sets with the Boolean operations together with the additional structure of comparing cardinality (in the Cantorian sense of injections). What principles does one need to add to the laws of Boolean algebra to reason not only about intersection, union, and complementation of sets, but also about the relative size of sets? We give a complete axiomatization. A particularly interesting case is when one restricts to the Dedekind-finite sets. In this case, one needs exactly the same principles as for reasoning about imprecise probability comparisons, the central principle being Generalized Finite Cancellation (which includes, as a special case, division-by-$$m$$). In the general case, the central principle is a restricted version of Generalized Finite Cancellation within Archimedean classes which we call Covered Generalized Finite Cancellation.
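For orientation, the division-by-$$m$$ principle mentioned above is usually stated as follows (a standard ZF formulation, paraphrased here rather than quoted from the paper): writing $$m \cdot A$$ for the disjoint union of $$m$$ copies of a set $$A$$ and $$\leq$$ for comparison by injections, for every positive integer $$m$$,

$$ m \cdot A \leq m \cdot B \ \Longrightarrow\ A \leq B. $$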
Award ID(s):
2153823 2419591
PAR ID:
10645819
Author(s) / Creator(s):
Publisher / Repository:
Elsevier
Date Published:
Journal Name:
Annals of Pure and Applied Logic
Volume:
176
Issue:
4
ISSN:
0168-0072
Page Range / eLocation ID:
103549
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Seki, Shinnosuke; Stewart, Jaimie Marie (Ed.)
    Molecular programmers and nanostructure engineers use domain-level design to abstract away messy DNA/RNA sequence, chemical, and geometric details. Such domain-level abstractions are enforced by sequence design principles and are a key enabler for scaling up complex multistranded DNA/RNA programs and structures. Determining the most favoured secondary structure, or Minimum Free Energy (MFE), of a set of strands is typically studied at the sequence level but has seen limited domain-level work. We analyse the computational complexity of MFE for multistranded systems in a simple setting where we allow only 1 or 2 domains per strand. On the one hand, with 2-domain strands, we find that the MFE decision problem is NP-complete, even without pseudoknots, and requires exponential-time algorithms assuming SAT does. On the other hand, in the simplest case of 1-domain strands there are efficient MFE algorithms for various binding modes. However, even in this single-domain case, MFE is P-hard for promiscuous binding, where one domain may bind to multiple others, as used experimentally by Nikitin [Nat. Chem., 2023]; this in turn implies that strands consisting of a single domain can efficiently implement arbitrary Boolean circuits.
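    As a rough illustration of the 1-domain case described above (a toy sketch, not the paper's energy model or algorithm: the domain naming scheme, the fixed per-bond energy dG, and the assumption that a domain binds only its exact complement are simplifications introduced here), finding a minimum-free-energy configuration reduces to pairing as many complementary strands as possible:

    ```python
    # Toy sketch of MFE for 1-domain strands (illustrative assumptions only):
    # each strand is a single domain name such as "a" or its complement "a*",
    # a domain binds only its complement, and every bound pair contributes a
    # fixed energy dG < 0 while unbound strands contribute 0.
    from collections import Counter

    def toy_mfe(strands, dG=-1.0):
        """Return the energy of a maximally paired (minimum free energy) configuration."""
        counts = Counter(strands)
        pairs = sum(min(counts[d], counts[d + "*"])
                    for d in counts if not d.endswith("*"))
        return pairs * dG

    # Example: two copies of a, one a*, one b, two b* -> two bound pairs.
    print(toy_mfe(["a", "a", "a*", "b", "b*", "b*"]))  # -2.0
    ```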
  2. We introduce a notion of complexity of diagrams (and, in particular, of objects and morphisms) in an arbitrary category, as well as a notion of complexity of functors between categories equipped with complexity functions. We discuss several examples of this new definition in categories of wide common interest such as finite sets, Boolean functions, topological spaces, vector spaces, semilinear and semialgebraic sets, graded algebras, affine and projective varieties and schemes, and modules over polynomial rings. We show that on one hand categorical complexity recovers in several settings classical notions of nonuniform computational complexity (such as circuit complexity), while on the other hand it has features that make it mathematically more natural. We also postulate that studying functor complexity is the categorical analog of classical questions in complexity theory about separating different complexity classes. 
  3. While Diffusion Monte Carlo (DMC) is in principle an exact stochastic method for ab initio electronic structure calculations, in practice the fermionic sign problem necessitates the use of the fixed-node approximation and trial wavefunctions with approximate nodes (or zeros). This approximation introduces a variational error in the energy that can potentially be tested and systematically improved. Here, we present a computational method that produces trial wavefunctions with systematically improvable nodes for DMC calculations of periodic solids. These trial wavefunctions are efficiently generated with the configuration interaction using a perturbative selection made iteratively (CIPSI) method. A simple protocol in which both exact and approximate results for finite supercells are used to extrapolate to the thermodynamic limit is introduced. This approach is illustrated for carbon diamond using Slater–Jastrow trial wavefunctions including up to one million Slater determinants. Fixed-node DMC energies obtained with such large expansions are much improved, and the fixed-node error is found to decrease monotonically and smoothly as a function of the number of determinants in the trial wavefunction, a property that opens the way to better control of this error. The cohesive energy extrapolated to the thermodynamic limit is in close agreement with the estimated experimental value. Interestingly, this is also the case at the single-determinant level, indicating very good error cancellation in carbon diamond between the bulk and atomic total fixed-node energies when using single-determinant nodes.
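    For context, extrapolation protocols of this general kind are often written as a finite-size correction of the form (a generic template stated here for orientation, not necessarily the exact protocol of the paper):

    $$ E_{\mathrm{exact}}(\infty) \approx E_{\mathrm{exact}}(N) + \big[ E_{\mathrm{approx}}(\infty) - E_{\mathrm{approx}}(N) \big], $$

    where $$N$$ labels a finite supercell, the approximate method is cheap enough to be converged to the thermodynamic limit, and the exact (or near-exact) method is affordable only for the finite cells.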
  4. Let (a_k)_{k∈N} be an increasing sequence of positive integers satisfying the Hadamard gap condition a_{k+1}/a_k > q > 1 for all k ∈ N, and let S_n(ω) = \sum_{k=1}^n cos(2π a_k ω), n ∈ N, ω ∈ [0, 1]. Then S_n is called a lacunary trigonometric sum, and can be viewed as a random variable defined on the probability space Ω = [0, 1] endowed with Lebesgue measure. Lacunary sums are known to exhibit several properties that are typical for sums of independent random variables. For example, a central limit theorem for (S_n)_{n∈N} has been obtained by Salem and Zygmund, while a law of the iterated logarithm is due to Erdős and Gál. In this paper we study large deviation principles for lacunary sums. Specifically, under the large gap condition a_{k+1}/a_k → ∞, we prove that the sequence (S_n/n)_{n∈N} does indeed satisfy a large deviation principle with speed n and the same rate function I as for sums of independent random variables with the arcsine distribution. On the other hand, we show that the large deviation principle may fail to hold when we only assume the Hadamard gap condition. However, we show that in the special case when a_k = q^k for some q ∈ {2, 3, ...}, (S_n/n)_{n∈N} satisfies a large deviation principle (with speed n) and a rate function I_q that is different from I, and describe an algorithm to compute an arbitrary number of terms in the Taylor expansion of I_q. In addition, we also prove that I_q converges pointwise to I as q → ∞. Furthermore, we construct a random perturbation (a_k)_{k∈N} of the sequence (2^k)_{k∈N} for which a_{k+1}/a_k → 2 as k → ∞, but for which at the same time (S_n/n)_{n∈N} satisfies a large deviation principle with the same rate function I as in the independent case, which is surprisingly different from the rate function I_2 one might naïvely expect. We relate this fact to the number of solutions of certain Diophantine equations. Together, these results show that large deviation principles for lacunary trigonometric sums are very sensitive to the arithmetic properties of the sequence (a_k)_{k∈N}. This is particularly noteworthy since no such arithmetic effects are visible in the central limit theorem or in the law of the iterated logarithm for lacunary trigonometric sums. Our proofs use a combination of tools from probability theory, harmonic analysis, and dynamical systems.
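    For reference, the independent-case rate function I mentioned above is the classical Cramér rate function for i.i.d. copies of cos(2πU) with U uniform on [0, 1] (which have the arcsine distribution); it is recalled here for orientation rather than quoted from the paper:

    $$ I(x) = \sup_{t \in \mathbb{R}} \big( t x - \log I_0(t) \big), \qquad I_0(t) = \frac{1}{2\pi} \int_0^{2\pi} e^{t \cos\theta}\, d\theta, $$

    where $$I_0$$ denotes the modified Bessel function of the first kind.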
  5. The quintessential learning algorithm of empirical risk minimization (ERM) is known to fail in various settings for which uniform convergence does not characterize learning. Relatedly, the practice of machine learning is rife with considerably richer algorithmic techniques, perhaps the most notable of which is regularization. Nevertheless, no such technique or principle has broken away from the pack to characterize optimal learning in these more general settings. The purpose of this work is to precisely characterize the role of regularization in perhaps the simplest setting for which ERM fails: multiclass learning with arbitrary label sets. Using one-inclusion graphs (OIGs), we exhibit optimal learning algorithms that dovetail with tried-and-true algorithmic principles: Occam’s Razor as embodied by structural risk minimization (SRM), the principle of maximum entropy, and Bayesian inference. We also extract from OIGs a combinatorial sequence we term the Hall complexity, which is the first to characterize a problem’s transductive error rate exactly. Lastly, we introduce a generalization of OIGs and the transductive learning setting to the agnostic case, where we show that optimal orientations of Hamming graphs – judged using nodes’ outdegrees minus a system of node-dependent credits – characterize optimal learners exactly. We demonstrate that an agnostic version of the Hall complexity again characterizes error rates exactly, and exhibit an optimal learner using maximum entropy programs. 
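    As background for the one-inclusion-graph machinery used above, the classical realizable-case guarantee of Haussler, Littlestone, and Warmuth (recalled here for orientation; the credit-adjusted agnostic orientations described in the abstract refine this template) states that an orientation of the one-inclusion graph on $$n$$ points with maximum outdegree $$D$$ induces a predictor whose probability of error on a uniformly random held-out point is at most

    $$ \frac{D}{n}, $$

    and for a class of VC dimension $$d$$ an orientation with $$D \leq d$$ always exists.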