Title: Boundary torsion and convex caps of locally convex surfaces
We prove that the torsion of any closed space curve which bounds a simply connected locally convex surface vanishes at least 4 times. This answers a question of Rosenberg related to a problem of Yau on characterizing the boundary of positively curved disks in Euclidean space. Furthermore, our result generalizes the 4 vertex theorem of Sedykh for convex space curves, and thus constitutes a far reaching extension of the classical 4 vertex theorem. The proof involves studying the arrangement of convex caps in a locally convex surface, and yields a Bose type formula for these objects.
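For reference, a hedged note not taken from the paper itself: the torsion in question is the standard Frenet torsion. For a regular curve $\gamma$ in $\mathbb{R}^3$ with nonvanishing curvature it can be written as below, and the theorem above asserts at least four zeros of $\tau$ along any closed curve bounding a simply connected locally convex surface.

$$\tau \;=\; \frac{\det\big(\gamma',\,\gamma'',\,\gamma'''\big)}{\lVert \gamma'\times\gamma''\rVert^{2}}, \qquad \#\{\,t : \tau(t)=0\,\} \;\ge\; 4.$$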
Award ID(s): 1711400
NSF-PAR ID: 10054418
Author(s) / Creator(s):
Date Published:
Journal Name: Journal of differential geometry
Volume: 105
Issue: 3
ISSN: 0022-040X
Page Range / eLocation ID: 427-486
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. One of the most intriguing conjectures in extremal graph theory is the conjecture of Erdős and Sós from 1962, which asserts that every $n$-vertex graph with more than $\frac{k-1}{2}n$ edges contains any $k$-edge tree as a subgraph. Kalai proposed a generalization of this conjecture to hypergraphs. To explain the generalization, we need to define the concept of a tight tree in an $r$-uniform hypergraph, i.e., a hypergraph where each edge contains $r$ vertices. A tight tree is an $r$-uniform hypergraph such that there is an ordering $v_1,\ldots,v_n$ of its vertices with the following property: the vertices $v_1,\ldots,v_r$ form an edge and for every $i>r$, there is a single edge $e$ containing the vertex $v_i$ and $r-1$ of the vertices $v_1,\ldots,v_{i-1}$, and $e\setminus\{v_i\}$ is a subset of one of the edges consisting only of vertices from $v_1,\ldots,v_{i-1}$. The conjecture of Kalai asserts that every $n$-vertex $r$-uniform hypergraph with more than $\frac{k-1}{r}\binom{n}{r-1}$ edges contains every $k$-edge tight tree as a subhypergraph. The recent breakthrough results on the existence of combinatorial designs by Keevash and by Glock, Kühn, Lo and Osthus show that this conjecture, if true, would be tight for infinitely many values of $n$ for every $r$ and $k$. The article deals with the special case of the conjecture when the sought tight tree is a path, i.e., the edges are the $r$-tuples of consecutive vertices in the above ordering. The case $r=2$ is the famous Erdős–Gallai theorem on the existence of paths in graphs. The case $r=3$ and $k=4$ follows from an earlier work of the authors on the conjecture of Kalai. The main result of the article is the first non-trivial upper bound valid for all $r$ and $k$. The proof is based on techniques developed for a closely related problem where a hypergraph comes with a geometric structure: the vertices are points in the plane in strictly convex position and the sought path has to zigzag between the vertices.
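  As a quick consistency check (my own remark, not part of the abstract above): setting $r=2$ in Kalai's threshold recovers the Erdős–Sós threshold, and a disjoint union of copies of $K_k$ attains it without containing any $k$-edge tree, since such a tree needs $k+1$ vertices.

  $$\frac{k-1}{r}\binom{n}{r-1}\bigg|_{r=2} \;=\; \frac{k-1}{2}\binom{n}{1} \;=\; \frac{k-1}{2}\,n, \qquad e\Big(\tfrac{n}{k}\cdot K_k\Big) \;=\; \frac{n}{k}\binom{k}{2} \;=\; \frac{k-1}{2}\,n.$$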
  2. Abstract An ordered hypergraph is a hypergraph whose vertex set is linearly ordered, and a convex geometric hypergraph is a hypergraph whose vertex set is cyclically ordered. Extremal problems for ordered and convex geometric graphs have a rich history with applications to a variety of problems in combinatorial geometry. In this paper, we consider analogous extremal problems for uniform hypergraphs, and determine the order of magnitude of the extremal function for various ordered and convex geometric paths and matchings. Our results generalize earlier works of Braß–Károlyi–Valtr, Capoyleas–Pach, and Aronov–Dujmović–Morin–Ooms–da Silveira. We also provide a new variation of the Erdős–Ko–Rado theorem in the ordered setting.
  3. Abstract

    We consider the convex quadratic optimization problem in $\mathbb{R}^{n}$ with indicator variables and arbitrary constraints on the indicators. We show that a convex hull description of the associated mixed-integer set in an extended space with a quadratic number of additional variables consists of an $(n+1) \times (n+1)$ positive semidefinite constraint (explicitly stated) and linear constraints. In particular, convexification of this class of problems reduces to describing a polyhedral set in an extended formulation. While the vertex representation of this polyhedral set is exponential and an explicit linear inequality description may not be readily available in general, we derive a compact mixed-integer linear formulation whose solutions coincide with the vertices of the polyhedral set. We also give descriptions in the original space of variables: we provide a description based on an infinite number of conic-quadratic inequalities, which are “finitely generated.” In particular, it is possible to characterize whether a given inequality is necessary to describe the convex hull. The new theory presented here unifies several previously established results, and paves the way toward utilizing polyhedral methods to analyze the convex hull of mixed-integer nonlinear sets.
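    For orientation only, a hedged sketch of the problem class in notation of my own choosing ($Q \succeq 0$, $a$, and the indicator set $Z$ are placeholders, not symbols from the paper): the mixed-integer set couples continuous variables $x$ with binary indicators $z$, and the extended formulation introduces a matrix variable $W$ standing in for $xx^{\top}$, subject to the $(n+1)\times(n+1)$ positive semidefinite condition mentioned above.

    $$\min\Big\{\, a^{\top}x + \langle Q, W\rangle \;:\; x_i(1-z_i)=0,\ i=1,\dots,n,\ \ z\in\{0,1\}^{n}\cap Z,\ \ \begin{pmatrix} 1 & x^{\top}\\ x & W \end{pmatrix}\succeq 0 \,\Big\}.$$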

     
  4. Abstract

    While it is well known from examples that no interesting “halfspace theorem” holds for properly immersed $n$-dimensional self-translating mean curvature flow solitons in Euclidean space $\mathbb{R}^{n+1}$, we show that they must all obey a general “bi-halfspace theorem” (aka “wedge theorem”): two transverse vertical halfspaces can never contain the same such hypersurface. The same holds for any infinite end. The proofs avoid the typical methods of nonlinear barrier construction, relying instead on an approach via distance functions and the Omori–Yau maximum principle. As an application we classify the closed convex hulls of all properly immersed (possibly with compact boundary) $n$-dimensional mean curvature flow self-translating solitons $\Sigma ^n$ in ${\mathbb{R}}^{n+1}$ up to an orthogonal projection in the direction of translation. This list is short, coinciding with the one given by Hoffman–Meeks in 1989 for minimal submanifolds: all of ${\mathbb{R}}^{n}$, halfspaces, slabs, hyperplanes, and convex compacts in ${\mathbb{R}}^{n}$.
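    As a hedged reminder in standard notation (sign conventions vary; $\nu$, $H$, and the translation direction $V$ are conventions of this note, not quoted from the paper): a self-translating soliton is a hypersurface whose translates solve mean curvature flow, which is equivalent to a prescribed-mean-curvature equation.

    $$\Sigma_t := \Sigma + tV \ \text{ solves mean curvature flow} \quad\Longleftrightarrow\quad H = \langle \nu, V\rangle \ \text{ on } \Sigma.$$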

     
  5. Learning the underlying equation from data is a fundamental problem in many disciplines. Recent advances rely on Neural Networks (NNs) but do not provide theoretical guarantees of recovering the exact equations, owing to the non-convexity of NNs. In this paper, we propose Convex Neural Symbolic Learning (CoNSoLe) to seek convexity under mild conditions. The main idea is to decompose the recovery process into two steps and convexify each step. In the first step, searching for the right symbols, we convexify the deep Q-learning. The key is to maintain double convexity for both the negative Q-function and the negative reward function in each iteration, leading to provable convexity of the negative optimal Q-function and hence to learning the true symbol connections. Conditioned on the exact search result, we construct a Locally Convex equation Learning (LoCaL) neural network to convexify the estimation of the symbol coefficients. With such a design, we identify a large region of strict convexity in the loss surface of LoCaL for commonly used physical functions. Finally, we demonstrate the superior performance of the CoNSoLe framework over the state of the art on a diverse set of datasets.
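  A minimal sketch of the second-step idea, under my own simplifying assumptions (ordinary least squares over a made-up symbol library, standing in for, and much simpler than, the authors' LoCaL network): once the symbol search is assumed to have returned the correct terms, fitting their coefficients is a convex problem.

        # Minimal sketch (not the authors' CoNSoLe/LoCaL code): illustrates why the
        # second step -- estimating coefficients once the symbols are fixed -- can be
        # posed convexly. The symbol library and target equation below are made up.
        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(-2, 2, size=200)

        # Hypothetical ground-truth equation: y = 1.5*x + 0.8*sin(x)
        y = 1.5 * x + 0.8 * np.sin(x)

        # Step 1 (symbol search) is assumed solved: suppose it returned {x, sin(x)}.
        library = {"x": x, "sin(x)": np.sin(x)}

        # Step 2: with the symbols fixed, the coefficient fit is an ordinary
        # least-squares problem, i.e. a convex program with a closed-form solution.
        A = np.column_stack(list(library.values()))
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        print(dict(zip(library.keys(), np.round(coeffs, 3))))  # ~ {'x': 1.5, 'sin(x)': 0.8}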