
Title: Nonlinear dimension reduction via outer Bi-Lipschitz extensions
We introduce and study the notion of an *outer bi-Lipschitz extension* of a map between Euclidean spaces. The notion is a natural analogue of the notion of a *Lipschitz extension* of a Lipschitz map. We show that for every map f there exists an outer bi-Lipschitz extension f′ whose distortion is greater than that of f by at most a constant factor. This result can be seen as a counterpart of the classic Kirszbraun theorem for outer bi-Lipschitz extensions. We also study outer bi-Lipschitz extensions of near-isometric maps and show upper and lower bounds for them. Then, we present applications of our results to the prioritized and terminal dimension reduction problems, described next. We prove a *prioritized* variant of the Johnson–Lindenstrauss lemma: given a set of points X ⊂ ℝ^d of size N and a permutation ("priority ranking") of X, there exists an embedding f of X into ℝ^{O(log N)} with distortion O(log log N) such that the point of rank j has only O(log^{3+ε} j) non-zero coordinates – more specifically, all but the first O(log^{3+ε} j) coordinates are equal to 0; moreover, the distortion of f restricted to the first j points (according to the ranking) is at most O(log log j). This result makes progress towards answering an open question by Elkin, Filtser, and Neiman about prioritized dimension reductions. We also prove that given a set X of N points in ℝ^d, there exists a *terminal* dimension reduction embedding of ℝ^d into ℝ^{d′}, where d′ = O(log N / ε^4), which preserves the distances ‖x − y‖ between points x ∈ X and y ∈ ℝ^d up to a multiplicative factor of 1 ± ε. This improves a recent result by Elkin, Filtser, and Neiman. The dimension reductions that we obtain are nonlinear, and this nonlinearity is necessary.
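For orientation only, the sketch below shows the classical (linear) Johnson–Lindenstrauss projection that the prioritized and terminal variants above refine; it is not the paper's construction, which is nonlinear. The function name, the constant 8, and the exact target-dimension formula are illustrative assumptions.

```python
import numpy as np

def jl_embed(X, eps=0.5, seed=0):
    """Classical (linear) Johnson-Lindenstrauss embedding via a random
    Gaussian projection. Pairwise distances among the N rows of X are
    preserved within a 1 +/- eps factor with high probability when the
    target dimension is on the order of log(N) / eps^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    k = max(1, int(np.ceil(8 * np.log(n) / eps**2)))  # illustrative constant
    G = rng.standard_normal((d, k)) / np.sqrt(k)       # scaled Gaussian matrix
    return X @ G

# Tiny usage example: compare one embedded distance to the original.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 1000))
    Y = jl_embed(X, eps=0.5)
    i, j = 3, 17
    orig = np.linalg.norm(X[i] - X[j])
    emb = np.linalg.norm(Y[i] - Y[j])
    print(f"ratio of embedded to original distance: {emb / orig:.3f}")
```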
Authors:
Mahabadi, Sepideh; Makarychev, Konstantin; Makarychev, Yury; Razenshteyn, Ilya
Award ID(s):
1718820
Publication Date:
NSF-PAR ID:
10066349
Journal Name:
Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2018)
Page Range or eLocation-ID:
1088 to 1101
Sponsoring Org:
National Science Foundation
More Like this
  1. Buchin, Kevin; Colin de Verdière, Éric (Eds.)
    In this paper, we prove a two-sided variant of the Kirszbraun theorem. Consider an arbitrary subset X of Euclidean space and its superset Y. Let f be a 1-Lipschitz map from X to ℝ^m. The Kirszbraun theorem states that the map f can be extended to a 1-Lipschitz map f̃ from Y to ℝ^m. While the extension f̃ does not increase distances between points, there is no guarantee that it does not decrease distances significantly. In fact, f̃ may even map distinct points to the same point (that is, it can decrease some distances to zero). However, we prove that there exists a (1 + ε)-Lipschitz outer extension f̃: Y → ℝ^{m'} that does not decrease distances more than "necessary". Namely, ‖f̃(x) - f̃(y)‖ ≥ c √ε · min(‖x-y‖, inf_{a,b ∈ X} (‖x - a‖ + ‖f(a) - f(b)‖ + ‖b-y‖)) for some absolute constant c > 0. This bound is asymptotically optimal, since no L-Lipschitz extension g can have ‖g(x) - g(y)‖ > L · min(‖x-y‖, inf_{a,b ∈ X} (‖x - a‖ + ‖f(a) - f(b)‖ + ‖b-y‖)) even for a single pair of points x and y. In some applications, one is interested in the distances ‖f̃(x) - f̃(y)‖ between images of points x, y ∈ Y rather than in the map f̃ itself. The standard Kirszbraun theorem does not provide any method of computing these distances without computing the entire map f̃ first. In contrast, our theorem provides a simple approximate formula for the distances ‖f̃(x) - f̃(y)‖ (a small illustrative sketch of this formula appears after this list).
  2. Let f be a drawing in the Euclidean plane of a graph G, which is understood to be a 1-dimensional simplicial complex. We assume that every edge of G is drawn by f as a curve of constant algebraic complexity, and that the ratio of the length of the longest simple path to the length of the shortest edge is poly(n). In the drawing f, a path P of G, or its image in the drawing π = f(P), is β-stretch if π is a simple (non-self-intersecting) curve and, for every pair of distinct points p ∈ P and q ∈ P, the length of the sub-curve of π connecting f(p) with f(q) is at most β‖f(p) − f(q)‖, where ‖·‖ denotes the Euclidean distance (a small sketch of this stretch condition on a polygonal curve appears after this list). We introduce and study the β-stretch Path Problem (βSP for short), in which we are given a pair of vertices s and t of G, and we are to decide whether in the given drawing of G there exists a β-stretch path P connecting s and t. We also output P if it exists. The βSP quantifies a notion of "near straightness" for paths in a graph G, motivated by gerrymandering regions in a map, where edges of G represent natural geographical/political boundaries that may be chosen to bound election districts. The notion of a β-stretch path naturally extends to cycles, and the extension gives a measure of how gerrymandered a district is. Furthermore, we show that the extension is closely related to several studied measures of local fatness of geometric shapes. We prove that βSP is strongly NP-complete. We complement this result by giving a quasi-polynomial time algorithm that, for a given ε > 0, β ∈ O(poly(log |V(G)|)), and s, t ∈ V(G), outputs a β-stretch path between s and t if a (1 − ε)β-stretch path between s and t exists in the drawing.
  3. We consider the problem of finding a two-layer neural network with sigmoid, rectified linear unit (ReLU), or binary step activation functions that "fits" a training data set as accurately as possible, as quantified by the training error, and study the following question: *does a low training error guarantee that the norm of the output layer (the outer norm) is itself small?* We answer this question affirmatively for the case of non-negative output weights. Using a simple covering-number argument, we establish that, under quite mild distributional assumptions on the input/label pairs, any such network achieving a small training error on polynomially many data points necessarily has a well-controlled outer norm. Notably, our results (a) have a polynomial (in d) sample complexity, (b) are independent of the number of hidden units (which can potentially be very high), (c) are oblivious to the training algorithm, and (d) require quite mild assumptions on the data (in particular, the input vector X ∈ ℝ^d need not have independent coordinates). We then leverage our bounds to establish generalization guarantees for such networks through the *fat-shattering dimension*, a scale-sensitive measure of the complexity of the class that the network architectures we investigate belong to. Notably, our generalization bounds also have good sample complexity (polynomial in d with a low degree) and are in fact near-linear for some important cases of interest. (A tiny sketch of the outer norm of such a two-layer network appears after this list.)
  4. We consider the (1+ϵ)-approximate nearest neighbor search problem: given a set X of n points in a d-dimensional space, build a data structure that, given any query point y, finds a point x ∈ X whose distance to y is at most (1+ϵ)·min_{x′∈X} ‖x′−y‖ for an accuracy parameter ϵ ∈ (0,1). Our main result is a data structure that occupies only O(ϵ^−2 n log(n) log(1/ϵ)) bits of space, assuming all point coordinates are integers in the range {−n^O(1), …, n^O(1)}, i.e., the coordinates have O(log n) bits of precision. This improves over the best previously known space bound of O(ϵ^−2 n log(n)^2), obtained via the randomized dimensionality reduction method of Johnson and Lindenstrauss (1984). We also consider the more general problem of estimating all distances from a collection of query points to all data points X, and provide almost tight upper and lower bounds for the space complexity of this problem.
  5. Recently, Dvořák, Norin, and Postle introduced flexibility as an extension of list coloring on graphs (J Graph Theory 92(3):191–206, 2019, https://doi.org/10.1002/jgt.22447). In this new setting, each vertex v in some subset of V(G) has a request for a certain color r(v) in its list of colors L(v). The goal is to find an L-coloring satisfying many, but not necessarily all, of the requests. The main studied question is whether there exists a universal constant ε > 0 such that any graph G in some graph class 𝒞 satisfies at least an ε proportion of the requests. More formally, for k > 0 the goal is to prove that for any graph G ∈ 𝒞 on vertex set V, with any list assignment L of size k for each vertex, and for every R ⊆ V and request vector (r(v): v ∈ R, r(v) ∈ L(v)), there exists an L-coloring of G satisfying at least ε|R| requests. If this is true, then 𝒞 is called ε-flexible for lists of size k (a small brute-force checker for this notion on a fixed graph appears after this list). Choi, Clemen, Ferrara, Horn, Ma, and Masařík (Discrete Appl Math 306:120–132, 2022, https://doi.org/10.1016/j.dam.2021.09.021) introduced the notion of weak flexibility, where R = V. We further develop this direction by introducing a tool to handle weak flexibility. We demonstrate this new tool by showing that for every positive integer b there exists ε(b) > 0 so that the class of planar graphs without K_4, C_5, C_6, C_7, B_b is weakly ε(b)-flexible for lists of size 4 (here K_n, C_n, and B_n are the complete graph, a cycle, and a book on n vertices, respectively). We also show that the class of planar graphs without K_4, C_5, C_6, C_7, B_5 is ε-flexible for lists of size 4. The results are tight as these graph classes are not even 3-colorable.
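The two-sided Kirszbraun theorem in item 1 controls ‖f̃(x) − f̃(y)‖ through an explicit quantity that can be computed without constructing f̃. Below is a minimal brute-force sketch of that quantity for a finite set X; the function name is hypothetical and the quadratic loop is purely illustrative.

```python
import numpy as np

def kirszbraun_distance_bound(x, y, X, fX):
    """Brute-force evaluation of
        min(||x - y||, min_{a,b in X} ||x - a|| + ||f(a) - f(b)|| + ||b - y||)
    for a finite set X (rows of the array X) with images fX = f(X).
    Per item 1, ||f~(x) - f~(y)|| lies between c*sqrt(eps) times this
    quantity and (1 + eps) times it."""
    direct = np.linalg.norm(x - y)
    # distances from x to every a in X and from every b in X to y
    dxa = np.linalg.norm(X - x, axis=1)
    dby = np.linalg.norm(X - y, axis=1)
    # pairwise distances between the images f(a), f(b)
    dfab = np.linalg.norm(fX[:, None, :] - fX[None, :, :], axis=2)
    through_X = np.min(dxa[:, None] + dfab + dby[None, :])
    return min(direct, through_X)
```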
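Item 2's β-stretch condition compares arc length to chord length along a drawn path. The sketch below checks the condition on a polygonal curve, but only at pairs of polyline vertices, which is a simplification of the definition (the definition quantifies over all pairs of points on the curve); the function name and the numeric slack are illustrative.

```python
import numpy as np

def is_beta_stretch_at_vertices(poly, beta):
    """Check the beta-stretch condition at pairs of polyline vertices:
    the arc length between two vertices must not exceed beta times their
    Euclidean distance. poly: (n, 2) array of vertices of a simple path."""
    seg = np.linalg.norm(np.diff(poly, axis=0), axis=1)  # segment lengths
    arc = np.concatenate(([0.0], np.cumsum(seg)))        # arc length to each vertex
    n = len(poly)
    for i in range(n):
        for j in range(i + 1, n):
            chord = np.linalg.norm(poly[i] - poly[j])
            if arc[j] - arc[i] > beta * chord + 1e-12:    # small slack for rounding
                return False
    return True

# Example: a right-angle detour has stretch sqrt(2) ~ 1.414 between its endpoints.
corner = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
print(is_beta_stretch_at_vertices(corner, beta=1.5))  # True
print(is_beta_stretch_at_vertices(corner, beta=1.3))  # False
```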
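Item 3's "outer norm" is a norm of the output layer of a two-layer network with non-negative output weights. A tiny illustrative sketch follows; the choice of the ℓ1 norm, the squared-loss training error, and all names are assumptions made here for concreteness, not taken from the excerpt.

```python
import numpy as np

def two_layer_relu(x, W, a):
    """Two-layer network as in item 3: hidden layer W (m x d) with ReLU
    activations and non-negative output weights a (length m)."""
    return a @ np.maximum(W @ x, 0.0)

def outer_norm(a):
    """Norm of the output layer; the l1 norm is an illustrative choice."""
    return np.sum(np.abs(a))

def training_error(X, y, W, a):
    """Mean squared training error on data (X, y)."""
    preds = np.array([two_layer_relu(x, W, a) for x in X])
    return np.mean((preds - y) ** 2)

# Tiny usage example with random weights and data.
rng = np.random.default_rng(0)
d, m, n = 5, 20, 8
W, a = rng.standard_normal((m, d)), np.abs(rng.standard_normal(m))  # a >= 0
X, y = rng.standard_normal((n, d)), rng.standard_normal(n)
print(outer_norm(a), training_error(X, y, W, a))
```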
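Item 5's ε-flexibility asks, for every list assignment and every request vector, for a proper list coloring satisfying at least an ε fraction of the requests. The brute-force checker below evaluates the best achievable number of satisfied requests on one small fixed instance; it is only feasible for tiny graphs, and all names are illustrative.

```python
from itertools import product

def max_satisfiable_requests(n, edges, lists, requests):
    """Brute-force over all list colorings of a small graph on vertices
    0..n-1: return the maximum number of requests (vertex -> color)
    satisfied by any proper coloring c with c[v] in lists[v].
    Item 5's epsilon-flexibility asks this maximum to be >= eps * len(requests)
    for every choice of lists and requests over a whole graph class."""
    best = -1  # stays -1 if no proper list coloring exists
    for colouring in product(*(lists[v] for v in range(n))):
        if any(colouring[u] == colouring[v] for u, v in edges):
            continue  # not a proper coloring
        satisfied = sum(1 for v, col in requests.items() if colouring[v] == col)
        best = max(best, satisfied)
    return best

# Tiny usage example on a 4-cycle with lists of size 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
lists = {0: [1, 2], 1: [1, 2], 2: [1, 2], 3: [1, 2]}
requests = {0: 1, 1: 1}  # conflicting requests on adjacent vertices
print(max_satisfiable_requests(4, edges, lists, requests))  # only one can be met -> 1
```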