Title: Rigidly breaking potential flows and a countable Alexandrov theorem for polytopes
We study all the ways that a given convex body in d dimensions can break into countably many pieces that move away from each other rigidly at constant velocity, with no rotation or shearing. The initial velocity field is locally constant a.e., but may be continuous and/or fail to be integrable. For any choice of mass-velocity pairs for the pieces, such a motion can be generated by the gradient of a convex potential that is affine on each piece. We classify such potentials in terms of a countable version of a theorem of Alexandrov for convex polytopes, and prove a stability theorem. For bounded velocities, there is a bijection between the mass-velocity data and optimal transport flows (Wasserstein geodesics) that are locally incompressible. Given any rigidly breaking velocity field that is the gradient of a continuous potential, the convexity of the potential is established under any of several conditions, such as the velocity field being continuous, the potential being semiconvex, the mass measure generated by a convexified transport potential being absolutely continuous, or there being a finite number of pieces. Also we describe a number of curious and paradoxical examples having fractal structure.
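To fix ideas, here is a sketch of the structure the abstract describes, in notation of our own choosing (the symbols $P_i$, $v_i$, $c_i$ are not the paper's): if piece $P_i$ carries velocity $v_i$, a convex potential that is affine on each piece can be written as
$$u(x) = \sup_i \,\bigl(v_i \cdot x + c_i\bigr), \qquad \nabla u(x) = v_i \ \text{on } P_i,$$
and the rigidly breaking motion is then the straight-line flow $X(t, x) = x + t\,\nabla u(x)$.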
Award ID(s):
2106534 2106988
PAR ID:
10655258
Author(s) / Creator(s):
Publisher / Repository:
Mathematical Sciences Publishers
Date Published:
Journal Name:
Pure and Applied Analysis
Volume:
7
Issue:
4
ISSN:
2578-5893
Page Range / eLocation ID:
927 to 956
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. S. Koyejo; S. Mohamed; A. Agarwal; D. Belgrave; K. Cho; A. Oh (Ed.)
    A deep neural network using rectified linear units represents a continuous piecewise linear (CPWL) function, and vice versa. Recent results in the literature estimated that the number of neurons needed to exactly represent any CPWL function grows exponentially with the number of pieces, or exponentially in terms of the factorial of the number of distinct linear components, and that this growth is further amplified linearly by the input dimension. These existing results suggest that representing a CPWL function with a network is expensive. In this paper, we propose much tighter bounds and establish a polynomial-time algorithm to find a network satisfying these bounds for any given CPWL function. We prove that the number of hidden neurons required to exactly represent any CPWL function is at most quadratic in the number of pieces. In contrast to all previous results, this upper bound is invariant to the input dimension. Besides the number of pieces, we also study the number of distinct linear components in CPWL functions. When this number is also given, we prove that the quadratic complexity becomes bilinear, which implies a lower neural complexity because the number of distinct linear components never exceeds the minimum number of pieces of a CPWL function. When the number of pieces is unknown, we prove that, in terms of the number of distinct linear components, the neural complexity of any CPWL function is at most polynomial for low-dimensional inputs and factorial in the worst case, which is significantly better than existing results in the literature.
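A minimal illustration of the CPWL-to-ReLU correspondence the abstract relies on; this is not the paper's construction, just the standard identity max(u, v) = v + relu(u - v), which represents a two-piece CPWL function exactly with a single hidden ReLU neuron. All parameter values below are arbitrary.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Two affine pieces; f(x) = max(a1.x + b1, a2.x + b2) is CPWL with two pieces.
a1, b1 = np.array([1.0, -2.0]), 0.5
a2, b2 = np.array([-0.5, 1.0]), -1.0

def f_cpwl(x):
    return max(a1 @ x + b1, a2 @ x + b2)

def f_relu(x):
    # Hidden layer: one ReLU neuron computing relu((a1 - a2).x + (b1 - b2));
    # output layer: the affine readout a2.x + b2 plus that activation.
    return (a2 @ x + b2) + relu((a1 - a2) @ x + (b1 - b2))

x = np.random.randn(2)
assert np.isclose(f_cpwl(x), f_relu(x))  # exact representation, any x
```

The paper's contribution is the quantitative version of this idea: bounding how many such neurons suffice as the number of pieces grows.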
  2. We study the performance of noisy gradient descent and Nesterov's accelerated methods for strongly convex objective functions with Lipschitz continuous gradients. The steady-state second-order moment of the error in the iterates is analyzed when the gradient is perturbed by an additive white noise with zero mean and identity covariance. For any given condition number κ, we derive explicit upper bounds on noise amplification that only depend on κ and the problem size. We use quadratic objective functions to derive lower bounds and to demonstrate that the upper bounds are tight up to a constant factor. The established upper bound for Nesterov's accelerated method is larger than the upper bound for gradient descent by a factor of √κ. This gap identifies a fundamental tradeoff that comes with acceleration in the presence of stochastic uncertainties in the gradient evaluation. 
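A quick numerical sanity check of the setting described above, under assumptions of our own (stepsize 1/L for both methods, the standard momentum (√κ−1)/(√κ+1), and a diagonal quadratic objective); none of these choices are claimed to be the paper's. The gradient is perturbed by additive white noise with zero mean and identity covariance, and we estimate the steady-state second moment of the error.

```python
import numpy as np

rng = np.random.default_rng(0)
n, kappa = 50, 100.0
lam = np.linspace(1.0, kappa, n)   # eigenvalues of the quadratic: m = 1, L = kappa
alpha = 1.0 / lam.max()            # stepsize 1/L for both methods
beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)  # Nesterov momentum

def steady_state_moment(method, iters=100_000, burn=20_000):
    x = np.zeros(n)
    x_prev = x.copy()
    acc = 0.0
    for t in range(iters):
        if method == "gd":
            g = lam * x + rng.standard_normal(n)     # noisy gradient at x
            x = x - alpha * g
        else:
            y = x + beta * (x - x_prev)              # extrapolation point
            g = lam * y + rng.standard_normal(n)     # noisy gradient at y
            x_prev, x = x, y - alpha * g
        if t >= burn:
            acc += x @ x                             # accumulate ||error||^2
    return acc / (iters - burn)

print("GD steady-state E||x||^2:      ", steady_state_moment("gd"))
print("Nesterov steady-state E||x||^2:", steady_state_moment("nesterov"))
```

On this example the accelerated method's steady-state moment comes out substantially larger than gradient descent's, consistent with the √κ gap the abstract describes.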
  3. Abstract This paper studies an equity market of stochastic dimension, where the number of assets fluctuates over time. In such a market, we develop the fundamental theorem of asset pricing, which provides the equivalence of the following statements: (i) there exists a supermartingale numéraire portfolio; (ii) each dissected market, which is of a fixed dimension between dimensional jumps, has locally finite growth; (iii) there is no arbitrage of the first kind; (iv) there exists a local martingale deflator; (v) the market is viable. We also present the optional decomposition theorem, which characterizes a given nonnegative process as the wealth process of some investment‐consumption strategy. Furthermore, similar results still hold in an open market embedded in the entire market of stochastic dimension, where investors can only invest in a fixed number of large capitalization stocks. These results are developed in an equity market model where the price process is given by a piecewise continuous semimartingale of stochastic dimension. Without the continuity assumption on the price process, we present similar results but without explicit characterization of the numéraire portfolio. 
  4. Carathéodory's theorem says that any point in the convex hull of a set $P$ in $R^d$ is in the convex hull of a subset $P'$ of $P$ such that $|P'| \le d + 1$. For some sets $P$, the upper bound $d + 1$ can be improved; the best upper bound for $P$ is known as the Carathéodory number [2, 15, 17]. In this paper, we study the computational problem of finding a smallest such set $P'$ for a given set $P$ and a point $p$. We call the size of this set the Carathéodory number of the point $p$, or CNP. We show that the problem of deciding the Carathéodory number of a point is NP-hard. Furthermore, we show that the problem is k-LDT-hard. We present two algorithms for computing a smallest set $P'$ when CNP = 2 or 3. Bárány [1] generalized Carathéodory's theorem by using $d + 1$ sets (colored sets) such that their convex hulls intersect. We introduce the colorful Carathéodory number of a point, or CCNP, which can be smaller than $d + 1$, and we extend our results for CNP to CCNP.
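For concreteness, a naive baseline for the CNP = 2 case (not the paper's algorithm): CNP(p) ≤ 2 exactly when p lies on the segment between some pair of points of P, which brute force checks in O(|P|² d) time. Helper names and tolerances below are ours.

```python
import numpy as np
from itertools import combinations

def on_segment(p, q1, q2, tol=1e-9):
    """True if p = q1 + t*(q2 - q1) for some t in [0, 1]."""
    d = q2 - q1
    dd = d @ d
    if dd < tol:                          # degenerate pair: q1 == q2
        return np.allclose(p, q1, atol=tol)
    t = (p - q1) @ d / dd                 # projection parameter onto the line
    return -tol <= t <= 1.0 + tol and np.allclose(q1 + t * d, p, atol=tol)

def cnp_at_most_two(p, P):
    return any(on_segment(p, q1, q2) for q1, q2 in combinations(P, 2))

P = [np.array(v, float) for v in [(0, 0), (2, 0), (0, 2), (2, 2)]]
print(cnp_at_most_two(np.array([1.0, 1.0]), P))  # True: midpoint of a diagonal
```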
  5. Motivated by robust and quantile regression problems, we investigate the stochastic gradient descent (SGD) algorithm for minimizing an objective function f that is locally strongly convex with a sub-quadratic tail. This setting covers many widely used online statistical methods. We introduce a novel piecewise Lyapunov function that enables us to handle functions f with only first-order differentiability, which includes a wide range of popular loss functions such as the Huber loss. Leveraging our proposed Lyapunov function, we derive finite-time moment bounds under general diminishing stepsizes, as well as constant stepsizes. We further establish the weak convergence, central limit theorem, and bias characterization under constant stepsize, providing the first geometric convergence result for sub-quadratic SGD. Our results have wide applications, especially in online statistical methods. In particular, we discuss two applications of our results. 1) Online robust regression: we consider a corrupted linear model with sub-exponential covariates and heavy-tailed noise. Our analysis provides convergence rates comparable to those for corrupted models with Gaussian covariates and noise. 2) Online quantile regression: importantly, our results relax the common assumption in prior work that the conditional density is continuous and provide a more fine-grained analysis for the moment bounds.
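A minimal sketch of the online robust regression setting, assuming constant-stepsize SGD with the Huber loss (whose gradient is Lipschitz but only first-order differentiable at |r| = delta); the stepsize, delta, and noise model below are our illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
d, delta, eta = 5, 1.0, 0.01
theta_star = rng.standard_normal(d)        # unknown true parameter

def huber_grad(r, delta):
    # Derivative of the Huber loss w.r.t. the residual r:
    # quadratic region |r| <= delta, linear region beyond it.
    return r if abs(r) <= delta else delta * np.sign(r)

theta = np.zeros(d)
for t in range(50_000):
    x = rng.standard_normal(d)             # streaming covariate
    y = x @ theta_star + rng.standard_t(df=2)  # heavy-tailed (infinite-variance) noise
    r = x @ theta - y                      # residual
    theta -= eta * huber_grad(r, delta) * x    # constant-stepsize SGD update

print("estimation error:", np.linalg.norm(theta - theta_star))
```

Despite the noise having infinite variance, the bounded Huber gradient keeps the updates stable, which is the kind of behavior the moment bounds above quantify.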