Title: Sparse Approximation via Generating Point Sets
For a set P of n points in the unit ball b ⊆ ℝ^d, consider the problem of finding a small subset T ⊆ P whose convex hull ε-approximates the convex hull of the original set. Specifically, the Hausdorff distance between the convex hull of T and the convex hull of P should be at most ε. We present an efficient algorithm to compute such an ε′-approximation of size k_alg, where ε′ is a function of ε, and k_alg is a function of the minimum size k_opt of such an ε-approximation. Surprisingly, there is no dependence on the dimension d in either bound. Furthermore, every point of P can be ε-approximated by a convex combination of points of T that is O(1/ε²)-sparse. Our result can be viewed as a method for sparse, convex autoencoding: approximately representing the data in a compact way using sparse combinations of a small subset T of the original data. The new algorithm can be kernelized, and it preserves sparsity in the original input.
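To make the O(1/ε²)-sparsity claim concrete, here is a minimal sketch, assuming the target point lies in (or very near) the convex hull of T, of the standard Frank-Wolfe / approximate-Caratheodory greedy step. This is not the paper's algorithm; the function name sparse_convex_approx and the iteration cap are ours.

```python
import numpy as np

def sparse_convex_approx(p, T, eps, max_iter=10000):
    """Approximate p by a sparse convex combination of the rows of T,
    running Frank-Wolfe on f(x) = 0.5 * ||x - p||^2 over conv(T).
    For points in the unit ball, O(1/eps^2) iterations suffice, so the
    returned weight vector has O(1/eps^2) nonzero entries."""
    w = np.zeros(len(T))
    i0 = int(np.argmin(np.linalg.norm(T - p, axis=1)))  # warm start at nearest point
    w[i0] = 1.0
    x = T[i0].astype(float).copy()
    for t in range(max_iter):
        if np.linalg.norm(x - p) <= eps:
            break
        g = x - p                      # gradient of f at the current iterate
        j = int(np.argmin(T @ g))      # vertex of conv(T) minimizing <g, v>
        gamma = 2.0 / (t + 2)          # standard Frank-Wolfe step size
        w *= 1.0 - gamma
        w[j] += gamma
        x = (1.0 - gamma) * x + gamma * T[j]
    return w                           # nonzero entries index the support in T
```

Each iteration adds at most one new support point, and Frank-Wolfe drives the squared error down at rate O(1/t) on this quadratic, which is exactly where an O(1/ε²) bound on the support size comes from.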
Award ID(s):
1525971
NSF-PAR ID:
10057818
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms (SODA2016)
Page Range / eLocation ID:
548 to 557
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Bojanczyk, Mikolaj; Chekuri, Chandra (Eds.)
    Given a point set P in the plane, we seek a subset Q ⊆ P whose convex hull gives a smaller, and thus simpler, representation of the convex hull of P. Specifically, let cost(Q,P) denote the Hausdorff distance between the convex hulls CH(Q) and CH(P). Then, given a value ε > 0, we seek the smallest subset Q ⊆ P such that cost(Q,P) ≤ ε. We also consider the dual version: given an integer k, we seek the subset Q ⊆ P with |Q| ≤ k that minimizes cost(Q,P). When P is in convex position, we give an O(n log²n)-time algorithm for the first problem and an O(n log³n)-time algorithm for the second, where the latter running time holds with high probability. When there is no restriction on P, we show the problem can be reduced to APSP in an unweighted directed graph, yielding an O(n^2.5302)-time algorithm when minimizing k and an O(min{n^2.5302, kn^2.376})-time algorithm when minimizing ε, using prior results for APSP. Finally, we show that our near-linear-time algorithms for convex position give 2-approximations for the general case.
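    As a reference point for the cost function defined above, the following is a minimal quadratic-time sketch in the plane; it is our naming and in no way reproduces the paper's O(n log²n) machinery. Since CH(Q) ⊆ CH(P) whenever Q ⊆ P, the Hausdorff distance reduces to the farthest distance from a point of P to CH(Q). Q is assumed to contain at least three non-collinear points so SciPy's ConvexHull is defined.

```python
import numpy as np
from scipy.spatial import ConvexHull

def _dist_to_convex_polygon(p, V):
    """Distance from p to the convex polygon with CCW vertices V
    (0 if p lies inside or on the boundary)."""
    inside, best = True, np.inf
    for i in range(len(V)):
        a, b = V[i], V[(i + 1) % len(V)]
        # negative cross product => p lies strictly right of edge ab => outside
        if (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) < 0:
            inside = False
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        best = min(best, np.linalg.norm(p - (a + t * ab)))  # distance to edge
    return 0.0 if inside else best

def cost(Q, P):
    """Hausdorff distance between CH(Q) and CH(P) for Q a subset of P:
    because CH(Q) sits inside CH(P), it equals the farthest distance
    from a point of P to CH(Q)."""
    V = Q[ConvexHull(Q).vertices]   # hull vertices in CCW order (2D)
    return max(_dist_to_convex_polygon(p, V) for p in P)
```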
    We propose a sequential algorithm for learning sparse radial basis function (RBF) approximations for streaming data. The initial phase of the algorithm formulates RBF training as a convex optimization problem with an objective function on the expansion weights, while the data-fitting requirement is imposed only as an ℓ∞-norm constraint. Each new data point observed is tested for feasibility, i.e., whether the data-fitting constraint is satisfied. If so, that point is discarded and no model update is required. If it is infeasible, a new basic variable is added to the linear program, yielding a primal-infeasible, dual-feasible solution; the dual simplex algorithm is then applied to determine a new optimal solution. A large fraction of the streaming data points requires no update to the RBF model, since these points are similar enough to previously observed data to satisfy the data-fitting constraints. The structure of the simplex algorithm makes the update to the solution particularly efficient, since the inverse of the new basis matrix is easily computed from the old inverse. The second phase of the algorithm involves a non-convex refinement of the convex problem; given the sparse nature of the LP solution, the computational expense of the non-convex algorithm is greatly reduced. We have also found that a small subset of the training data, including the novel data identified by the algorithm, can be used to train the non-convex optimization problem with substantial computational savings and comparable errors on the test data. We illustrate the method on the Mackey-Glass chaotic time series, the monthly sunspot data, and a Fort Collins, Colorado weather data set. In each case we compare the results to artificial neural networks (ANNs) and standard skew-RBFs.
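    A minimal sketch of the first (convex) phase described above, under several assumptions of ours: Gaussian RBFs, an ℓ1 objective on the expansion weights as the sparsity-inducing objective, every retained point serving as a center, and a cold re-solve with scipy.optimize.linprog standing in for the paper's warm-started dual simplex update.

```python
import numpy as np
from scipy.optimize import linprog

def rbf_design(X, C, s=1.0):
    """Gaussian RBF design matrix: Phi[i, j] = exp(-||x_i - c_j||^2 / s^2)."""
    D = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
    return np.exp(-(D / s) ** 2)

def fit_weights(X, y, C, delta, s=1.0):
    """LP phase: minimize ||w||_1 subject to |Phi w - y|_inf <= delta,
    via the standard split w = u - v with u, v >= 0."""
    Phi = rbf_design(X, C, s)
    k = Phi.shape[1]
    c = np.ones(2 * k)                        # objective: sum(u) + sum(v)
    A = np.vstack([np.hstack([Phi, -Phi]),    #  Phi w <= y + delta
                   np.hstack([-Phi, Phi])])   # -Phi w <= delta - y
    b = np.concatenate([y + delta, delta - y])
    res = linprog(c, A_ub=A, b_ub=b, bounds=(0, None))
    return res.x[:k] - res.x[k:]

def process_stream(stream, delta, s=1.0):
    """Discard feasible points; on an infeasible point, add it as a
    center and re-solve (a cold-start stand-in for the warm start)."""
    X, y, w = [], [], None
    for x, t in stream:
        if w is not None:
            pred = float(rbf_design(x[None, :], np.array(X), s) @ w)
            if abs(pred - t) <= delta:
                continue                      # feasible: no model update
        X.append(x); y.append(t)
        w = fit_weights(np.array(X), np.array(y), np.array(X), delta, s)
    return np.array(X), w                     # centers and weights
```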
    We study a clustering problem where the goal is to maximize the coverage of the input points by k chosen centers. Specifically, given a set of n points P ⊆ ℝ^d, the goal is to pick k centers C ⊆ ℝ^d that maximize the service ∑_{p∈P} φ(𝖽(p,C)) to the points P, where 𝖽(p,C) is the distance of p to its nearest center in C, and φ: ℝ+ → ℝ+ is a non-increasing service function. This includes the problem of placing k base stations so as to maximize the total bandwidth to the clients: indeed, the closer a client is to its nearest base station, the more data it can send and receive, and the target is to place k base stations so that the total bandwidth is maximized. We provide an n^{ε^{-O(d)}}-time algorithm for this problem that achieves a (1-ε)-approximation. Notably, the runtime does not depend on the parameter k, and the algorithm works for an arbitrary non-increasing service function φ: ℝ+ → ℝ+.
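    The service objective itself is simple to state in code. The sketch below (our naming) evaluates it and, for tiny instances only, brute-forces over center sets drawn from P itself; restricting centers to P is an extra assumption, since the problem allows arbitrary centers in ℝ^d, and the paper's (1-ε)-approximation avoids this exponential blowup entirely.

```python
import numpy as np
from itertools import combinations

def service(P, C, phi):
    """Total service sum_p phi(d(p, C)), where d(p, C) is the distance
    of p to its nearest center in C."""
    d = np.min(np.linalg.norm(P[:, None, :] - C[None, :, :], axis=2), axis=1)
    return phi(d).sum()

def best_centers_bruteforce(P, k, phi):
    """Exact maximizer over all k-subsets of P used as centers; feasible
    only for tiny n, purely to illustrate the objective."""
    return max((P[list(S)] for S in combinations(range(len(P)), k)),
               key=lambda C: service(P, C, phi))

# Example service function: bandwidth decays with distance.
# C = best_centers_bruteforce(P, k=2, phi=lambda d: 1.0 / (1.0 + d))
```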
    Caratheodory’s theorem says that any point in the convex hull of a set $P$ in $R^d$ is in the convex hull of a subset $P'$ of $P$ such that $|P'| \le d + 1$. For some sets $P$, the upper bound $d + 1$ can be improved; the best upper bound for $P$ is known as the Caratheodory number [2, 15, 17]. In this paper, we study the computational problem of finding a smallest set $P'$ for a given set $P$ and a point $p$. We call the size of this set $P'$ the Caratheodory number of the point $p$, or CNP. We show that the problem of deciding the Caratheodory number of a point is NP-hard. Furthermore, we show that the problem is k-LDT-hard. We present two algorithms for computing a smallest set $P'$ in the cases CNP $= 2$ and CNP $= 3$. Bárány [1] generalized Caratheodory’s theorem by using $d+1$ sets (colored sets) such that their convex hulls intersect. We introduce the Colorful Caratheodory number of a point, or CCNP, which can be smaller than $d+1$, and we extend our results for CNP to CCNP.
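    To make the CNP definition concrete, here is a brute-force check, with names of our own choosing rather than the paper's CNP $= 2$ algorithm, for whether a point has Caratheodory number at most 2, i.e., lies on a segment spanned by two points of $P$.

```python
import numpy as np
from itertools import combinations

def on_segment(p, a, b, tol=1e-9):
    """True if p lies on the closed segment ab (collinear and between)."""
    ab, ap = b - a, p - a
    denom = np.dot(ab, ab)
    if denom == 0:                       # degenerate pair: a == b
        return np.linalg.norm(ap) <= tol
    t = np.dot(ap, ab) / denom           # projection parameter along ab
    if t < -tol or t > 1 + tol:
        return False                     # projection falls outside [a, b]
    return np.linalg.norm(ap - t * ab) <= tol

def caratheodory_pair(p, P):
    """Return a witness pair showing CNP(p) <= 2, or None if no pair
    works. Brute force over all O(n^2) pairs, purely illustrative."""
    for i, j in combinations(range(len(P)), 2):
        if on_segment(p, P[i], P[j]):
            return P[i], P[j]
    return None
```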
    The determinant maximization problem gives a general framework that models problems arising in fields as diverse as statistics [Puk06], convex geometry [Kha96], fair allocations [AGSS16], combinatorics [AGV18], spectral graph theory [NST19a], network design, and random processes [KT12]. In an instance of the determinant maximization problem, we are given a collection of vectors U = {v_1, . . . , v_n} ⊂ ℝ^d, and the goal is to pick a subset S ⊆ U of the given vectors to maximize the determinant of the matrix ∑_{i∈S} v_i v_i^T. Often, the set S of picked vectors must satisfy additional combinatorial constraints, such as a cardinality constraint (|S| ≤ k) or a matroid constraint (S is a basis of a matroid defined on the vectors). In this paper, we give a polynomial-time deterministic algorithm that returns an r^{O(r)}-approximation for any matroid of rank r ≤ d. This improves previous results that give e^{O(r²)}-approximation algorithms relying on e^{O(r)}-approximate estimation algorithms [NS16, AG17, AGV18, MNST20] for any r ≤ d. All previous results use convex relaxations and their relationship to stable polynomials and strongly log-concave polynomials. In contrast, our algorithm builds on combinatorial algorithms for matroid intersection, which iteratively improve any solution by finding an alternating negative cycle in the exchange graph defined by the matroids. While the det(·) function is not linear, we show that taking appropriate linear approximations at each iteration suffices to give the improved approximation algorithm.
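    For contrast with the exchange-graph local search described above, the following is a naive greedy baseline of ours for the cardinality-constrained case (|S| ≤ k) only; it is not the paper's matroid-intersection algorithm. A small ridge term is added because det(∑ v_i v_i^T) is 0 until at least d vectors are chosen.

```python
import numpy as np

def greedy_detmax(U, k, ridge=1e-9):
    """Greedily pick k rows of U, each time adding the vector that most
    increases log det(M + v v^T), where M accumulates the outer products
    (plus ridge * I to keep the determinant positive early on)."""
    d = U.shape[1]
    S, M = [], ridge * np.eye(d)
    for _ in range(k):
        best, best_val = None, -np.inf
        for i in range(len(U)):
            if i in S:
                continue
            # matrix determinant lemma would give this in O(d^2);
            # slogdet keeps the sketch short and numerically safe
            _, logdet = np.linalg.slogdet(M + np.outer(U[i], U[i]))
            if logdet > best_val:
                best, best_val = i, logdet
        S.append(best)
        M += np.outer(U[best], U[best])
    return S   # indices of the chosen vectors
```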