Abstract: We give an operadic definition of a genuine symmetric monoidal $G$-category, and we prove that its classifying space is a genuine $E_\infty$ $G$-space. We do this by developing some very general categorical coherence theory. We combine results of Corner and Gurski, Power, and Lack to develop a strictification theory for pseudoalgebras over operads and monads. It specializes to strictify genuine symmetric monoidal $G$-categories to genuine permutative $G$-categories. All of our work takes place in a general internal categorical framework that has many quite different specializations. When $G$ is a finite group, the theory here combines with previous work to generalize equivariant infinite loop space theory from strict space-level input to considerably more general category-level input. It takes genuine symmetric monoidal $G$-categories as input to an equivariant infinite loop space machine that gives genuine $\Omega$-$G$-spectra as output.
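Schematically, the results combine into the following pipeline (the displayed labels are ours, summarizing the abstract rather than quoting the paper's notation):

\[
\{\text{genuine symmetric monoidal } G\text{-categories}\}
\xrightarrow{\ \text{strictification}\ }
\{\text{genuine permutative } G\text{-categories}\}
\xrightarrow{\ \text{infinite loop space machine}\ }
\{\text{genuine } \Omega\text{-}G\text{-spectra}\},
\]

with the classifying space of the input already a genuine $E_\infty$ $G$-space.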
Ultrahomogeneous tensor spaces
A cubic space is a vector space equipped with a symmetric trilinear form. Using categorical Fraïssé theory, we show that there is a universal ultrahomogeneous cubic space V of countably infinite dimension, which is unique up to isomorphism. The automorphism group G of V is quite large and, in some respects, similar to the infinite orthogonal group. We show that G is a linear-oligomorphic group (a class of groups we introduce), and we determine the algebraic representation theory of G. We also establish some model-theoretic results about V: it is ω-categorical (in a modified sense) and has quantifier elimination (for vectors). Our results are not specific to cubic spaces, and hold for a very general class of tensor spaces; we view these spaces as linear analogs of the relational structures studied in model theory.
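Spelling out the opening definition (the symbols $\varphi$ for the form and $k$ for the ground field are our labels, not notation from the article): a cubic space is a pair $(V, \varphi)$ with $\varphi$ trilinear and symmetric,

\[
\varphi \colon V \times V \times V \to k, \qquad
\varphi(v_1, v_2, v_3) = \varphi(v_{\sigma(1)}, v_{\sigma(2)}, v_{\sigma(3)})
\quad \text{for all } \sigma \in S_3,
\]

and its automorphism group is $G = \{\, g \in \mathrm{GL}(V) : \varphi(gu, gv, gw) = \varphi(u, v, w) \text{ for all } u, v, w \in V \,\}$.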
- Award ID(s): 2401515
- PAR ID: 10591347
- Publisher / Repository: Elsevier
- Date Published:
- Journal Name: Advances in Mathematics
- ISSN: 0001-8708
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Greenlees, John (Ed.) Let G be a finite group. We give Quillen equivalent models for the category of G-spectra as categories of spectrally enriched functors from explicitly described domain categories to nonequivariant spectra. Our preferred model is based on equivariant infinite loop space theory applied to elementary categorical data. It recasts equivariant stable homotopy theory in terms of point-set-level categories of G-spans and nonequivariant spectra. We also give a more topologically grounded model based on equivariant Atiyah duality.
- Inspired by constraints from physical law, equivariant machine learning restricts learning to a hypothesis class in which all functions are equivariant with respect to some group action. Irreducible representations or invariant theory are typically used to parameterize the space of such functions. In this article, we introduce the topic and explain a couple of methods to explicitly parameterize equivariant functions that are being used in machine learning applications. In particular, we explicate a general procedure, attributed to Malgrange, to express all polynomial maps between linear spaces that are equivariant under the action of a group G, given a characterization of the invariant polynomials on a bigger space. The method also parameterizes smooth equivariant maps in the case that G is a compact Lie group. (A short numerical check of the equivariance condition appears after this list.)
- We prove that the Novikov conjecture holds for any discrete group admitting an isometric and metrically proper action on an admissible Hilbert-Hadamard space. Admissible Hilbert-Hadamard spaces are a class of (possibly infinite-dimensional) non-positively curved metric spaces that contain dense sequences of closed convex subsets isometric to Riemannian manifolds. Examples of admissible Hilbert-Hadamard spaces include Hilbert spaces, certain simply connected and non-positively curved Riemannian-Hilbertian manifolds, and infinite-dimensional symmetric spaces. Thus our main theorem can be considered an infinite-dimensional analogue of Kasparov's theorem on the Novikov conjecture for groups acting properly and isometrically on complete, simply connected, and non-positively curved manifolds. As a consequence, we show that the Novikov conjecture holds for geometrically discrete subgroups of the group of volume-preserving diffeomorphisms of a closed smooth manifold. This result is inspired by Connes' theorem that the Novikov conjecture holds for higher signatures associated to the Gelfand-Fuchs classes of groups of diffeomorphisms.
- Characterizing how neural network depth, width, and dataset size jointly impact model quality is a central problem in deep learning theory. We give here a complete solution in the special case of linear networks with output dimension one, trained using zero-noise Bayesian inference with Gaussian weight priors and mean squared error as a negative log-likelihood. For any training dataset, network depth, and hidden-layer widths, we find non-asymptotic expressions for the predictive posterior and Bayesian model evidence in terms of Meijer-G functions, a class of meromorphic special functions of a single complex variable. Through novel asymptotic expansions of these Meijer-G functions, a rich new picture of the joint role of depth, width, and dataset size emerges. We show that linear networks make provably optimal predictions at infinite depth: the posterior of infinitely deep linear networks with data-agnostic priors is the same as that of shallow networks with evidence-maximizing, data-dependent priors. This yields a principled reason to prefer deeper networks when priors are forced to be data-agnostic. Moreover, we show that with data-agnostic priors, Bayesian model evidence in wide linear networks is maximized at infinite depth, elucidating the salutary role of increased depth for model selection. Underpinning our results is a novel emergent notion of effective depth, given by the number of hidden layers times the number of data points divided by the network width; this determines the structure of the posterior in the large-data limit. (A one-line computation of this effective depth appears after this list.)
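As a minimal illustration of the equivariance constraint described in the machine learning item above, the following Python sketch numerically checks f(g·x) = g·f(x) for a polynomial map under coordinate permutations. The map f, the choice of group S_3, and all names below are our illustrative assumptions, not content from that article:

    # Minimal sketch: check the equivariance condition f(g.x) = g.f(x)
    # for S_3 acting on R^3 by permuting coordinates. Illustrative only.
    import itertools
    import numpy as np

    def f(x):
        # Polynomial map f_i(x) = product of the other two coordinates;
        # permuting the inputs permutes these products the same way,
        # so f is equivariant under every coordinate permutation.
        x1, x2, x3 = x
        return np.array([x2 * x3, x3 * x1, x1 * x2])

    rng = np.random.default_rng(0)
    x = rng.standard_normal(3)
    for perm in itertools.permutations(range(3)):
        P = np.eye(3)[list(perm)]  # permutation matrix: (P x)_i = x_{perm[i]}
        assert np.allclose(f(P @ x), P @ f(x)), "f is not equivariant"
    print("equivariance verified for all of S_3")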
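And, reading off the effective-depth formula stated in the last item (function and variable names here are ours):

    # Effective depth from the abstract above:
    # (number of hidden layers) * (number of data points) / (network width).
    def effective_depth(num_hidden_layers: int, num_data_points: int, width: int) -> float:
        return num_hidden_layers * num_data_points / width

    # Example: 20 hidden layers, 1000 training points, width 500.
    print(effective_depth(20, 1000, 500))  # -> 40.0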