This content will become publicly available on April 1, 2026

Title: Compositional Sparsity, Approximation Classes, and Parametric Transport Equations
Abstract: Approximating functions of a large number of variables poses particular challenges, often subsumed under the term "Curse of Dimensionality" (CoD). Unless the approximated function exhibits a very high level of smoothness, the CoD can be avoided only by exploiting some typically hidden structural sparsity. In this paper we propose a general framework for new model classes of functions in high dimensions. They are based on suitable notions of compositional dimension-sparsity, quantifying, on a continuous level, approximability by compositions with certain structural properties. In particular, this describes scenarios where deep neural networks can avoid the CoD. The relevance of these concepts is demonstrated for solution manifolds of parametric transport equations. For such PDEs, parameter-to-solution maps do not enjoy the type of high-order regularity that helps to avoid the CoD by more conventional methods in other model scenarios. Compositional sparsity is shown to serve as the key mechanism for proving that sparsity of problem data is inherited, in a quantifiable way, by the solution manifold. In particular, one obtains convergence rates for deep neural network realizations, showing that the CoD is indeed avoided.
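As a rough schematic of the kind of structure the abstract refers to (not the paper's formal definition, which is built on tamed compositions; the symbols g, h_j, S_j and the rate heuristic in the comments are illustrative only), compositional dimension-sparsity can be pictured as follows:

```latex
% Schematic picture of compositional dimension-sparsity (not the paper's
% formal definition): f on [0,1]^d is written as a composition of
% constituent maps, each depending on at most k << d of the variables.
\[
  f(x_1,\dots,x_d) \;=\; g\bigl(h_1(x_{S_1}),\dots,h_m(x_{S_m})\bigr),
  \qquad S_j \subset \{1,\dots,d\}, \quad |S_j| \le k \ll d .
\]
% Heuristically, if g and the h_j have smoothness of order s, each
% constituent can be approximated to accuracy eps with on the order of
% eps^{-k/s} degrees of freedom; composing the approximations (under
% suitable Lipschitz stability of the outer maps) then gives a rate
% governed by the small arity k rather than by the ambient dimension d.
```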
Award ID(s): 2012469, 2245097, 2038080
PAR ID: 10617319
Author(s) / Creator(s):
Publisher / Repository: Springer
Date Published:
Journal Name: Constructive Approximation
Volume: 61
Issue: 2
ISSN: 0176-4276
Page Range / eLocation ID: 219 to 283
Subject(s) / Keyword(s): Tamed compositions; Compositional approximation; Approximation classes; Deep neural networks; Operator learning; Parametric transport equations; Solution manifolds; Nonlinear widths
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Neural networks have demonstrated impressive success in various domains, raising the question of what fundamental principles underlie the effectiveness of the best AI systems and quite possibly of human intelligence. This perspective argues that compositional sparsity, or the property that a compositional function has "few" constituent functions, each depending on only a small subset of inputs, is a key principle underlying successful learning architectures. Surprisingly, all functions that are efficiently Turing computable have a compositionally sparse representation. Furthermore, deep networks that are also sparse can exploit this general property to avoid the "curse of dimensionality". This framework suggests interesting implications about the role that machine learning may play in mathematics.
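As a loose, self-contained illustration of the compositional-sparsity idea in the record above (the function names, formulas, and binary-tree layout are invented for this sketch and are not taken from the cited perspective), the following Python snippet evaluates a function of 8 variables through constituents that each see only 2 inputs:

```python
import math

# Hypothetical constituent functions, each depending on only two inputs;
# the names and formulas are invented for this illustration.
def pair_interaction(a, b):
    return math.tanh(a * b)

def combine(u, v):
    return 0.5 * (u + v) + 0.25 * u * v

def f(x):
    """Compositionally sparse function of 8 variables: a binary tree of
    bivariate constituents, so no constituent ever sees more than 2 inputs."""
    assert len(x) == 8
    # Layer 1: four bivariate constituents on disjoint pairs of inputs.
    h = [pair_interaction(x[2 * i], x[2 * i + 1]) for i in range(4)]
    # Layer 2: combine neighbouring results, again pairwise.
    g = [combine(h[0], h[1]), combine(h[2], h[3])]
    # Layer 3: a single final bivariate constituent.
    return combine(g[0], g[1])

print(f([0.1 * i for i in range(8)]))
```

A deep network whose connectivity mirrors this tree only has to approximate two-variable maps at each node, which is the sense in which sparse architectures can track compositionally sparse targets without paying the full dimension.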
  2. The main claim of this perspective is that compositional sparsity of the target function, which corresponds to the task to be learned, is the key principle underlying machine learning. Mhaskar and Poggio (2016) proved that sparsity of compositional target functions (when the functions are defined on the reals, the constituent functions are also required to be smooth) naturally leads to sparse deep networks for approximation and thus for optimization. This is the case for most CNNs in current use, in which the known sparse graph of the target function is reflected in the sparse connectivity of the network. When the graph of the target function is unknown, I conjecture that transformers are able to implement a flexible version of sparsity (selecting which input tokens interact in the MLP layer) through the self-attention layers. Surprisingly, the assumption of compositional sparsity of the target function is not restrictive in practice, since I prove here that for computable functions (if defined on the reals, with Lipschitz continuous derivatives) compositional sparsity is equivalent to efficient computability, that is, Turing computability in polynomial time.
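The conjecture about self-attention in the record above can be pictured with a minimal, generic single-head attention computation in NumPy (this is not code from the cited perspective; the threshold used to read off an "interaction graph" is an arbitrary illustrative choice): the row-stochastic attention matrix decides, per query token, which other tokens contribute, so a near-sparse attention pattern acts as a data-dependent interaction graph.

```python
import numpy as np

def single_head_attention(X, Wq, Wk, Wv):
    """Generic single-head self-attention.
    X: (n_tokens, d_model). Returns the mixed values and the attention
    weights, whose (i, j) entry says how much token j influences token i."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
n, d = 6, 8
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.5 for _ in range(3))
out, A = single_head_attention(X, Wq, Wk, Wv)

# Tokens whose attention weight exceeds a threshold are the ones that
# effectively "interact" for each query position: a learned, flexible
# sparsity pattern over the inputs.
interaction_graph = A > 0.25
print(interaction_graph.astype(int))
```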
  3. Multiple Instance Learning (MIL) provides a promising solution to many real-world problems where labels are only available at the bag level but are missing for instances due to a high labeling cost. As a powerful Bayesian non-parametric model, Gaussian Processes (GPs) have been extended from classical supervised learning to MIL settings, aiming to identify the most likely positive (or least negative) instance in a positive (or negative) bag using only the bag-level labels. However, focusing solely on a single instance in a bag makes the model less robust to outliers or multi-modal scenarios, where a single bag contains a diverse set of positive instances. We propose a general GP mixture framework that simultaneously considers multiple instances through a latent mixture model. By adding a top-k constraint, the framework is equivalent to choosing the top-k most positive instances, making it more robust to outliers and multi-modal scenarios. We further introduce a Distributionally Robust Optimization (DRO) constraint that removes the limitation of specifying a fixed k value. To ensure predictive power on high-dimensional data (e.g., videos and images), which are common in MIL, we augment the GP kernel's fixed basis functions with a deep neural network that learns adaptive basis functions, so that the covariance structure of high-dimensional data can be accurately captured. Experiments are conducted on highly challenging real-world video anomaly detection tasks to demonstrate the effectiveness of the proposed model.
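A minimal sketch of the top-k aggregation idea from the record above (generic NumPy code, not the authors' GP mixture model; the instance scores and the value of k are placeholders): a bag-level score built from the k highest-scoring instances is less sensitive to a single outlier than taking only the maximum.

```python
import numpy as np

def bag_score_topk(instance_scores, k=3):
    """Aggregate per-instance scores into a bag-level score by averaging
    the k most positive instances (instead of relying on the single
    maximum, which is fragile to outliers)."""
    k = min(k, len(instance_scores))
    top_k = np.sort(instance_scores)[-k:]
    return float(top_k.mean())

# Toy bag: mostly negative instances, a few positive ones, one outlier.
scores = np.array([-1.2, -0.8, 0.9, 1.1, -0.5, 5.0, 0.7])
print("max aggregation   :", scores.max())           # dominated by the outlier 5.0
print("top-3 aggregation :", bag_score_topk(scores))  # (5.0 + 1.1 + 0.9) / 3
```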
  4. One surprising trait of neural networks is the extent to which their connections can be pruned with little to no effect on accuracy. But when we cross a critical level of parameter sparsity, pruning any further leads to a sudden drop in accuracy. This drop plausibly reflects a loss in model complexity, which we aim to avoid. In this work, we explore how sparsity also affects the geometry of the linear regions defined by a neural network and consequently reduces the expected maximum number of linear regions attainable by the architecture. We observe that pruning affects accuracy similarly to how sparsity affects the number of linear regions and our proposed bound on their maximum number. Conversely, we find that selecting the sparsity across layers so as to maximize our bound very often improves accuracy compared to applying the same sparsity uniformly to all layers, thereby providing guidance on where to prune.
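As a small, generic illustration of "selecting the sparsity across layers" from the record above (plain magnitude pruning on random NumPy weight matrices; the layer shapes and per-layer sparsities are made up, and the linear-region bound itself is not reproduced here):

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero out the fraction `sparsity` of entries of W with the smallest
    absolute value (layer-wise magnitude pruning)."""
    k = int(round(sparsity * W.size))
    if k == 0:
        return W.copy()
    threshold = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    return np.where(np.abs(W) <= threshold, 0.0, W)

rng = np.random.default_rng(1)
layers = [rng.normal(size=s) for s in [(784, 300), (300, 100), (100, 10)]]

# Per-layer sparsities: here the wide early layers are pruned more
# aggressively and the small output layer is kept denser; a bound-driven
# method would instead choose these numbers to maximize the expected
# number of linear regions.
per_layer_sparsity = [0.9, 0.7, 0.3]
pruned = [magnitude_prune(W, s) for W, s in zip(layers, per_layer_sparsity)]

total = sum(W.size for W in layers)
remaining = sum(int((W != 0).sum()) for W in pruned)
print(f"overall sparsity: {1 - remaining / total:.3f}")
```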