We study the problem of finding the Löwner–John ellipsoid, i.e., the minimum-volume ellipsoid that contains a given convex set. We reformulate the problem as a generalized copositive program and use that reformulation to derive tractable semidefinite programming approximations for instances where the set is defined by affine and quadratic inequalities. We prove that, when the underlying set is a polytope, our method never provides an ellipsoid of higher volume than the one obtained by scaling the maximum-volume inscribed ellipsoid. We empirically demonstrate that our proposed method generates high-quality solutions and that its approximations can be solved much faster than the exact problem, outperforming existing approximation schemes in both solution time and solution quality. We present applications of our method to obtaining piecewise linear decision rule approximations for dynamic distributionally robust problems with random recourse, and to generating ellipsoidal approximations of the set of reachable states of a linear dynamical system when the set of allowed controls is a polytope.
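For context, the classical minimum-volume (Löwner–John) ellipsoid of a polytope given by its vertices can already be computed as a convex program with a log-det objective. The sketch below (using cvxpy, with made-up data and variable names) shows that baseline formulation only; it is not the paper's copositive reformulation or its semidefinite approximations.

```python
import cvxpy as cp
import numpy as np

# Toy polytope in R^2 given by its vertices (illustrative data only).
V = np.array([[0.0, 0.0], [2.0, 0.0], [1.5, 1.0], [0.0, 1.5]])
n = V.shape[1]

# Ellipsoid {x : ||A x + b||_2 <= 1}; its volume is proportional to 1/det(A),
# so maximizing log det(A) minimizes the volume.
A = cp.Variable((n, n), PSD=True)
b = cp.Variable(n)

# Containing the polytope is equivalent to containing its vertices.
constraints = [cp.norm(A @ v + b, 2) <= 1 for v in V]
prob = cp.Problem(cp.Maximize(cp.log_det(A)), constraints)
prob.solve()

print("optimal A:\n", A.value)
print("optimal b:", b.value)
```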
Using SOS and Sublevel Set Volume Minimization for Estimation of Forward Reachable Sets
In this paper we propose a convex Sum-of-Squares optimization problem for finding outer approximations of forward reachable sets for nonlinear, uncertain Ordinary Differential Equations (ODEs) with L2-bounded or pointwise-bounded input disturbances (or both). To make our approximations tight, we seek to minimize the volume of the approximating set. Our approach to volume minimization is based on a convex determinant-like objective function. We provide several numerical examples, including the Lorenz system and the Van der Pol oscillator.
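The Sum-of-Squares machinery underlying such formulations reduces polynomial nonnegativity conditions to semidefinite feasibility. The minimal sketch below (our own toy univariate example, not the reachable-set program itself) certifies that a quartic is a sum of squares by finding a positive semidefinite Gram matrix with cvxpy.

```python
import cvxpy as cp

# Certify that p(x) = x^4 + 4x^3 + 6x^2 + 4x + 5 is a sum of squares by
# finding a PSD Gram matrix Q with p(x) = z(x)^T Q z(x), where z(x) = [1, x, x^2].
Q = cp.Variable((3, 3), PSD=True)

coefficient_matching = [
    Q[0, 0] == 5,                 # constant term
    2 * Q[0, 1] == 4,             # coefficient of x
    2 * Q[0, 2] + Q[1, 1] == 6,   # coefficient of x^2
    2 * Q[1, 2] == 4,             # coefficient of x^3
    Q[2, 2] == 1,                 # coefficient of x^4
]
prob = cp.Problem(cp.Minimize(0), coefficient_matching)  # pure feasibility SDP
prob.solve()

print(prob.status)  # 'optimal' means an SOS certificate (Gram matrix) exists
print(Q.value)
```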
- Award ID(s): 1931270
- PAR ID: 10113771
- Date Published:
- Journal Name: IFAC proceedings series
- ISSN: 0742-5953
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Using the concepts of mixed volumes and quermassintegrals of convex geometry, we derive an exact formula for the exclusion volume v_ex(K) for a general convex body K that applies in any space dimension. While our main interests concern the rotationally-averaged exclusion volume of a convex body with respect to another convex body, we also describe some results for the exclusion volumes for convex bodies with the same orientation. We show that the sphere minimizes the dimensionless exclusion volume v_ex(K)/v(K) among all convex bodies, whether randomly oriented or uniformly oriented, for any d, where v(K) is the volume of K. When the bodies have the same orientation, the simplex maximizes the dimensionless exclusion volume for any d, with a large-d asymptotic scaling behavior of 2^{2d}/d^{3/2}, which is to be contrasted with the corresponding scaling of 2^d for the sphere. We present explicit formulas for the quermassintegrals W_0(K), …, W_d(K) for many different nonspherical convex bodies, including cubes, parallelepipeds, regular simplices, cross-polytopes, cylinders, spherocylinders, ellipsoids, as well as lower-dimensional bodies, such as hyperplates and line segments. These results are utilized to determine the rotationally-averaged exclusion volume v_ex(K) for these convex-body shapes for dimensions 2 through 12. While the sphere is the shape possessing the minimal dimensionless exclusion volume, we show that, among the convex bodies considered that are sufficiently compact, the simplex possesses the maximal v_ex(K)/v(K), with a scaling behavior of 2^{1.6618…d}. Subsequently, we apply these results to determine the corresponding second virial coefficient B_2(K) of the aforementioned hard hyperparticles. Our results are also applied to compute estimates of the continuum percolation threshold η_c derived previously by the authors for systems of identical overlapping convex bodies. We conjecture that overlapping spheres possess the maximal value of η_c among all identical nonzero-volume convex overlapping bodies for d ⩾ 2, randomly or uniformly oriented, and that, among all identical, oriented nonzero-volume convex bodies, overlapping simplices have the minimal value of η_c for d ⩾ 2.
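As a concrete check of the dimensionless exclusion volume in d = 2, the snippet below evaluates the classical integral-geometry formula for randomly oriented congruent convex bodies, v_ex = 2A + P^2/(2π) with A the area and P the perimeter. This is a textbook two-dimensional formula rather than the paper's general-d result, and the shapes are our own illustrative choices.

```python
import numpy as np

def dimensionless_exclusion_area(area, perimeter):
    """v_ex / v for randomly oriented congruent convex bodies in d = 2,
    using the classical formula v_ex = 2*A + P**2 / (2*pi)."""
    return (2.0 * area + perimeter**2 / (2.0 * np.pi)) / area

# Disk of radius 1: the claimed minimizer, v_ex/v = 2^d = 4.
print(dimensionless_exclusion_area(np.pi, 2.0 * np.pi))       # 4.0

# Unit square: strictly larger than the disk value.
print(dimensionless_exclusion_area(1.0, 4.0))                 # ~4.546

# Equilateral triangle (the 2D simplex) with unit side length: larger still.
print(dimensionless_exclusion_area(np.sqrt(3) / 4.0, 3.0))    # ~5.31
```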
-
Approximating a function with a finite series, e.g., one involving polynomials or trigonometric functions, is a critical tool in computing and data analysis. The construction of such approximations via now-standard approaches like least squares or compressive sampling does not ensure that the approximation adheres to certain convex linear structural constraints, such as positivity or monotonicity. Existing approaches that ensure such structure are norm-dissipative, and this can have a deleterious impact, e.g., when numerically solving partial differential equations. We present a new framework that enforces such structure on approximations via optimization and is simultaneously norm-preserving. This results in a conceptually simple convex optimization problem on the sphere, but the feasible set for such problems can be very complex. We establish well-posedness of the optimization problem through results on spherical convexity and design several spherical-projection-based algorithms to numerically compute the solution. Finally, we demonstrate the effectiveness of this approach through several numerical examples.
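As a rough, self-contained illustration of a norm-preserving, structure-enforcing fit (not the authors' spherical-projection algorithms), the toy sketch below enforces positivity of a cubic fit at grid points and then rescales the coefficients back to the norm of the unconstrained least-squares fit; all data and parameter choices are made up.

```python
import cvxpy as cp
import numpy as np

# Toy example: fit a cubic to noisy samples of a nonnegative function,
# enforce nonnegativity on a grid, and preserve the norm of the unconstrained
# least-squares coefficient vector.  Because {c : A c >= 0} is a cone, the
# final rescaling onto the sphere does not break the positivity constraint.
rng = np.random.default_rng(0)
xs = np.linspace(-1.0, 1.0, 40)
ys = np.exp(-3.0 * xs**2) + 0.05 * rng.standard_normal(xs.size)  # toy data

V = np.vander(xs, 4)                          # design matrix of the fit
c0, *_ = np.linalg.lstsq(V, ys, rcond=None)   # unconstrained least-squares fit
radius = np.linalg.norm(c0)                   # norm to be preserved

grid = np.linspace(-1.0, 1.0, 200)
A = np.vander(grid, 4)                        # positivity enforced on this grid

# Nearest coefficient vector (to the unconstrained fit) with A c >= 0 ...
c = cp.Variable(4)
cp.Problem(cp.Minimize(cp.sum_squares(c - c0)), [A @ c >= 0]).solve()
# ... rescaled back onto the sphere of radius ||c0||.
c_struct = radius * c.value / np.linalg.norm(c.value)

print("min on grid:", (A @ c_struct).min(), " norm:", np.linalg.norm(c_struct))
```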
-
In mixed multi-view data, multiple sets of diverse features are measured on the same set of samples. By integrating all available data sources, we seek to discover common group structure among the samples that may be hidden in individualistic cluster analyses of a single data view. While several techniques for such integrative clustering have been explored, we propose and develop a convex formalization that enjoys strong empirical performance and inherits the mathematical properties of increasingly popular convex clustering methods. Specifically, our Integrative Generalized Convex Clustering Optimization (iGecco) method employs different convex distances, losses, or divergences for each of the different data views, with a joint convex fusion penalty that leads to common groups. Additionally, integrating mixed multi-view data is often challenging when each data source is high-dimensional. To perform feature selection in such scenarios, we develop an adaptive shifted group-lasso penalty that selects features by shrinking them towards their loss-specific centers. Our so-called iGecco+ approach selects features from each data view that are best for determining the groups, often leading to improved integrative clustering. To solve our problem, we develop a new type of generalized multi-block ADMM algorithm that uses sub-problem approximations to fit our model more efficiently to big data sets. Through a series of numerical experiments and real data examples on text mining and genomics, we show that iGecco+ achieves superior empirical performance for high-dimensional mixed multi-view data.
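For context, the sketch below sets up the standard convex clustering problem (a squared-error loss plus an ℓ2 fusion penalty on pairwise centroid differences) that iGecco generalizes. It is not the iGecco/iGecco+ formulation, and the toy data, uniform weights, and penalty level are our own choices.

```python
import cvxpy as cp
import numpy as np

# Standard convex clustering: each sample x_i gets its own centroid u_i, and an
# l2 fusion penalty pulls centroids together so that, for large enough lambda,
# groups of samples share a common centroid.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.2, (5, 2)),    # toy cluster around (0, 0)
               rng.normal(3.0, 0.2, (5, 2))])   # toy cluster around (3, 3)
n, p = X.shape

U = cp.Variable((n, p))
lam = 0.5
loss = 0.5 * cp.sum_squares(U - X)
fusion = sum(cp.norm(U[i] - U[j], 2) for i in range(n) for j in range(i + 1, n))
prob = cp.Problem(cp.Minimize(loss + lam * fusion))
prob.solve()

# Samples whose fitted centroids (nearly) coincide belong to the same cluster.
print(np.round(U.value, 2))
```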
-
We propose an algorithm to solve convex and concave fractional programs and their stochastic counterparts in a common framework. Our approach is based on a novel reformulation that involves differences of squared terms in the constraints, and on the subsequent use of piecewise-linear approximations of the concave terms. Using the branch-and-bound (B&B) framework, our algorithm adaptively refines the piecewise-linear approximations and iteratively solves convex approximation problems. The convergence analysis provides a bound on the optimality gap as a function of the approximation errors. Based on this bound, we prove that the proposed B&B algorithm terminates in a finite number of iterations and that the worst-case bound to obtain an ε-optimal solution depends reciprocally on the square root of ε. Numerical experiments on Cobb-Douglas production efficiency and equitable resource allocation problems show that the algorithm efficiently finds highly accurate solutions while significantly outperforming benchmark algorithms on all of the small problem instances solved. A modified branching strategy that takes advantage of the nonlinearity of the convex functions further improves performance. Results are also discussed for solving a dual reformulation and for using a cutting surface algorithm to solve the distributionally robust counterpart of the Cobb-Douglas example models.
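Two building blocks of such a reformulation can be illustrated numerically: a bilinear term can be written as a difference of squares, x*y = ((x + y)^2 - (x - y)^2)/4, and the concave piece -w^2 can be replaced by its secant over a branching interval [l, u], a piecewise-linear underestimator that yields a relaxation when it appears on the left of a <=-constraint. The snippet below is our own illustration of these pieces, not the paper's algorithm.

```python
import numpy as np

def secant_underestimator(w, l, u):
    """Secant of the concave function f(w) = -w**2 through (l, -l**2) and
    (u, -u**2); it under-estimates f on [l, u], so substituting it into a
    <=-constraint enlarges the feasible set (a relaxation that B&B tightens
    by splitting [l, u])."""
    return -(l + u) * w + l * u

l, u = 0.0, 2.0
for w in np.linspace(l, u, 5):
    exact = -w**2
    relax = secant_underestimator(w, l, u)
    # Difference-of-squares identity for a bilinear term, with y fixed at 1.5.
    x, y = w, 1.5
    dos = ((x + y)**2 - (x - y)**2) / 4.0
    print(f"w={w:.2f}  -w^2={exact:6.2f}  secant={relax:6.2f}  "
          f"x*y={x * y:.2f}  DoS={dos:.2f}")
```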