Title: On sifted colimits in the presence of pullbacks
Abstract: We show that in a category with pullbacks, arbitrary sifted colimits may be constructed as filtered colimits of reflexive coequalizers. This implies that "lex sifted colimits", in the sense of Garner–Lack, decompose as Barr-exactness plus filtered colimits commuting with finite limits. We also prove generalizations of these results for κ-small sifted and filtered colimits, and their interaction with λ-small limits in place of finite ones, generalizing Garner's characterization of algebraic exactness in the sense of Adámek–Lawvere–Rosický. Along the way, we prove a general result on classes of colimits, showing that the κ-small restriction of a saturated class of colimits is still "closed under iteration".
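As a schematic summary (illustrative notation, not taken from the paper): if D is a sifted category and the ambient category has pullbacks, the first result says that a D-indexed colimit can be computed as
\[
  \operatorname*{colim}_{D} F \;\cong\; \operatorname*{colim}_{i \in I}\, \operatorname{coeq}\bigl(A_i \rightrightarrows B_i\bigr),
  \qquad I \text{ filtered},\quad \text{each } A_i \rightrightarrows B_i \text{ a reflexive pair},
\]
while the second can be read as an equation of exactness properties,
\[
  \text{lex sifted colimits} \;=\; \text{Barr-exactness} \;+\; \text{filtered colimits commuting with finite limits}.
\]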
Award ID(s):
2054508, 2224709
PAR ID:
10318909
Author(s) / Creator(s):
Date Published:
Journal Name:
Theory and applications of categories
Volume:
37
ISSN:
1201-561X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In previous work, we introduced an axiomatic framework within which to prove theorems about many varieties of infinite-dimensional categories simultaneously. In this paper, we establish criteria implying that an ∞-category (for instance, a quasi-category, a complete Segal space, or a Segal category) is complete and cocomplete, admitting limits and colimits indexed by any small simplicial set. Our strategy is to build (co)limits of diagrams indexed by a simplicial set inductively from (co)limits of restricted diagrams indexed by the pieces of its skeletal filtration; a schematic form of this decomposition is sketched after this list. We show directly that the modules that express the universal properties of (co)limits of diagrams of these shapes are reconstructible as limits of the modules that express the universal properties of (co)limits of the restricted diagrams. We also prove that the Yoneda embedding preserves and reflects limits in a suitable sense, and deduce our main theorems as a consequence.
  2. We prove two variations of the classical gluing result of Beilinson–Bernstein–Deligne. We recast the problem of gluing in terms of filtered complexes in the total topos of a D-topos, in the sense of SGA 4, and prove our results using the filtered derived category.
  3. We develop and study a generalization of commutative rings called bands, along with the corresponding geometric theory of band schemes. Bands generalize both hyperrings, in the sense of Krasner, and partial fields, in the sense of Semple and Whittle. They form a ring-like counterpart to the field-like category of idylls introduced by the first and third authors in previous work. The first part of the paper is dedicated to establishing fundamental properties of bands analogous to basic facts in commutative algebra. In particular, we introduce various kinds of ideals in a band and explore their properties, and we study localization, quotients, limits, and colimits. The second part of the paper studies band schemes. After giving the definition, we present some examples of band schemes, along with basic properties of band schemes and morphisms thereof, and we describe functors into some other scheme theories. In the third part, we discuss some "visualizations" of band schemes, which are different topological spaces that one can functorially associate to a band scheme.
  4. We develop a convex analytic approach to analyze finite-width two-layer ReLU networks. We first prove that an optimal solution to the regularized training problem can be characterized as extreme points of a convex set, where simple solutions are encouraged via its convex geometrical properties. We then leverage this characterization to show that an optimal set of parameters yields linear spline interpolation for regression problems involving one-dimensional or rank-one data. We also characterize the classification decision regions in terms of a kernel matrix and minimum ℓ1-norm solutions. This is in contrast to the Neural Tangent Kernel, which is unable to explain predictions of finite-width networks. Our convex geometric characterization also provides intuitive explanations of hidden neurons as auto-encoders. In higher dimensions, we show that the training problem can be cast as a finite-dimensional convex problem with infinitely many constraints (a schematic sketch of such a program appears after this list). Then, we apply certain convex relaxations and introduce a cutting-plane algorithm to globally optimize the network. We further analyze the exactness of the relaxations to provide conditions for the convergence to a global optimum. Our analysis also shows that optimal network parameters can be characterized as interpretable closed-form formulas in some practically relevant special cases.
  5. We develop a convex analytic framework for ReLU neural networks which elucidates the inner workings of hidden neurons and their function-space characteristics. We show that neural networks with rectified linear units act as convex regularizers, where simple solutions are encouraged via extreme points of a certain convex set. For one-dimensional regression and classification, as well as rank-one data matrices, we prove that finite two-layer ReLU networks with norm regularization yield linear spline interpolation. We characterize the classification decision regions in terms of a closed-form kernel matrix and minimum ℓ1-norm solutions. This is in contrast to the Neural Tangent Kernel, which is unable to explain neural network predictions with finitely many neurons. Our convex geometric description also provides intuitive explanations of hidden neurons as auto-encoders. In higher dimensions, we show that the training problem for two-layer networks can be cast as a finite-dimensional convex optimization problem with infinitely many constraints. We then provide a family of convex relaxations to approximate the solution, and a cutting-plane algorithm to improve the relaxations. We derive conditions for the exactness of the relaxations and provide simple closed-form formulas for the optimal neural network weights in certain cases. We also establish a connection to ℓ0–ℓ1 equivalence for neural networks, analogous to minimal-cardinality solutions in compressed sensing. Extensive experimental results show that the proposed approach yields interpretable and accurate models.
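A schematic form of the skeletal-induction strategy in item 1, in illustrative notation not taken from that paper: writing sk_n X for the n-skeleton of a simplicial set X, so that $X \cong \operatorname*{colim}_{n \ge 0} \operatorname{sk}_n X$, a limit of a diagram $F$ indexed by $X$ decomposes as the limit of a tower of limits over the skeleta,
\[
  \lim_{X} F \;\simeq\; \lim_{n \ge 0} \Bigl(\, \lim_{\operatorname{sk}_n X} F\big|_{\operatorname{sk}_n X} \Bigr),
\]
where each stage is built from the previous one together with limits over the newly attached nondegenerate $n$-simplices; the dual decomposition handles colimits.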
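For items 4 and 5, the following is a minimal sketch of the kind of finite-dimensional convex program those abstracts describe, written with numpy and cvxpy (both assumed available). The function name convex_relu_fit, the random-hyperplane sampling of activation patterns, and all parameter defaults are illustrative choices, not the authors' implementation; with every activation pattern enumerated rather than sampled, this is the group-lasso style convex reformulation of regularized two-layer ReLU regression.

    import numpy as np
    import cvxpy as cp

    def convex_relu_fit(X, y, beta=1e-3, n_patterns=50, seed=0):
        """Sampled instance of a finite-dimensional convex program for
        regularized two-layer ReLU regression: sample activation patterns
        D_i = diag(1[X u_i >= 0]) and solve a group-lasso problem over
        per-pattern weight vectors v_i, w_i (illustrative sketch only)."""
        rng = np.random.default_rng(seed)
        n, d = X.shape

        # Sample candidate activation patterns from random hyperplanes and
        # deduplicate them; each column of `masks` is one 0/1 pattern.
        U = rng.standard_normal((d, n_patterns))
        masks = np.unique((X @ U >= 0).astype(float), axis=1)
        P = masks.shape[1]

        V = cp.Variable((d, P))   # weights entering the model with sign +1
        W = cp.Variable((d, P))   # weights entering the model with sign -1

        pred = 0
        constraints = []
        for i in range(P):
            Di = masks[:, i]
            DiX = Di[:, None] * X                    # rows of X kept active by D_i
            signed = (2.0 * Di - 1.0)[:, None] * X   # (2 D_i - I) X
            pred = pred + DiX @ (V[:, i] - W[:, i])
            # v_i and w_i must realize exactly the activation pattern D_i.
            constraints += [signed @ V[:, i] >= 0, signed @ W[:, i] >= 0]

        # Group-lasso penalty: sum of Euclidean norms of per-pattern weights.
        penalty = cp.sum(cp.norm(V, 2, axis=0)) + cp.sum(cp.norm(W, 2, axis=0))
        problem = cp.Problem(
            cp.Minimize(0.5 * cp.sum_squares(pred - y) + beta * penalty),
            constraints,
        )
        problem.solve()
        return V.value, W.value, problem.value

Nonzero column pairs (V[:, i], W[:, i]) of the solution can be read as ReLU neurons whose activation pattern on the training data is masks[:, i]; increasing n_patterns tightens this sampled relaxation toward the full convex program.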