 Award ID(s):
 1851602
 NSFPAR ID:
 10079395
 Date Published:
 Journal Name:
 2017 IEEE Power & Energy Society General Meeting
 Page Range / eLocation ID:
 1 to 5
 Format(s):
 Medium: X
 Sponsoring Org:
 National Science Foundation
More Like this

We present alfonso, an open-source MATLAB package for solving conic optimization problems over nonsymmetric convex cones. The implementation is based on the authors' corrected analysis of a method of Skajaa and Ye. It enables optimization over any convex cone as long as a logarithmically homogeneous self-concordant barrier is available for the cone or its dual. This includes many nonsymmetric cones, for example, hyperbolicity cones and their duals (such as sum-of-squares cones), semidefinite and second-order cone representable cones, power cones, and the exponential cone. Besides enabling the solution of problems that cannot be cast as optimization problems over a symmetric cone, algorithms for nonsymmetric conic optimization also offer performance advantages for problems whose symmetric cone programming representation requires a large number of auxiliary variables or has a special structure that can be exploited in the barrier computation. The worst-case iteration complexity of alfonso is the best known for nonsymmetric cone optimization: $O(\sqrt{\nu}\log(1/\varepsilon))$ iterations to reach an ε-optimal solution, where ν is the barrier parameter of the barrier function used in the optimization. Alfonso can be interfaced with a MATLAB function (supplied by the user) that computes the Hessian of a barrier function for the cone. A simplified interface is also available to optimize over the direct product of cones for which a barrier function has already been built into the software. This interface can be easily extended to include new cones. Both interfaces are illustrated by solving linear programs. The oracle interface and the efficiency of alfonso are also demonstrated using an optimal design of experiments problem in which the tailored barrier computation greatly decreases the solution time compared with using state-of-the-art, off-the-shelf conic optimization software. Summary of Contribution: The paper describes an open-source MATLAB package for optimization over nonsymmetric cones.
A particularly important feature of this software is that, unlike other conic optimization software, it enables optimization over any convex cone as long as a suitable barrier function is available for the cone or its dual, not limiting the user to a small number of specific cones. Nonsymmetric cones for which such barriers are already known include, for example, hyperbolicity cones and their duals (such as sum-of-squares cones), semidefinite and second-order cone representable cones, power cones, and the exponential cone. Thus, the scope of this software is far larger than that of most current conic optimization software. This does not come at the price of efficiency, as the worst-case iteration complexity of our algorithm matches the iteration complexity of the most successful interior-point methods for symmetric cones. Besides enabling the solution of problems that cannot be cast as optimization problems over a symmetric cone, our software can also offer performance advantages for problems whose symmetric cone programming representation requires a large number of auxiliary variables or has a special structure that can be exploited in the barrier computation. This is also demonstrated in this paper via an example in which our code significantly outperforms Mosek 9 and SCS 2.
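The defining property the abstract relies on, a logarithmically homogeneous barrier, can be checked numerically. The sketch below (in Python rather than the package's MATLAB, and not alfonso's actual interface) uses the standard log barrier for the nonnegative orthant, F(x) = -Σ log x_i with barrier parameter ν = n, and verifies the homogeneity identity F(tx) = F(x) - ν log t together with the implied Euler identity ⟨∇F(x), x⟩ = -ν.

```python
# Illustrative sketch only: the log barrier for the nonnegative orthant,
# a logarithmically homogeneous self-concordant barrier with nu = n.
import math

def barrier(x):
    """F(x) = -sum(log x_i); domain is the open nonnegative orthant."""
    return -sum(math.log(xi) for xi in x)

def barrier_grad(x):
    """Gradient of F: dF/dx_i = -1/x_i."""
    return [-1.0 / xi for xi in x]

x = [0.5, 2.0, 3.0]
nu = len(x)          # barrier parameter of the log barrier
t = 1.7

# Logarithmic homogeneity: F(t x) = F(x) - nu * log(t)
lhs = barrier([t * xi for xi in x])
rhs = barrier(x) - nu * math.log(t)
assert abs(lhs - rhs) < 1e-12

# Euler-type identity implied by homogeneity: <grad F(x), x> = -nu
pairing = sum(g * xi for g, xi in zip(barrier_grad(x), x))
assert abs(pairing - (-nu)) < 1e-12
```

The ν appearing in the $O(\sqrt{\nu}\log(1/\varepsilon))$ iteration bound is exactly this barrier parameter, so cones with smaller-ν barriers admit faster worst-case convergence.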

Low-rank matrix recovery is a fundamental problem in machine learning with numerous applications. In practice, the problem can be solved by convex optimization, namely nuclear norm minimization, or by nonconvex optimization, as it is well known that for low-rank matrix problems like matrix sensing and matrix completion, all local optima of the natural nonconvex objectives are also globally optimal under certain ideal assumptions. In this paper, we study new approaches for matrix sensing in a semi-random model where an adversary can add any number of arbitrary sensing matrices. More precisely, the problem is to recover a low-rank matrix $X^\star$ from linear measurements $b_i = \langle A_i, X^\star \rangle$, where an unknown subset of the sensing matrices satisfies the Restricted Isometry Property (RIP) and the rest of the $A_i$'s are chosen adversarially. It is known that in the semi-random model, existing nonconvex objectives can have bad local optima. To fix this, we present a descent-style algorithm that provably recovers the ground-truth matrix $X^\star$. For the closely related problem of semi-random matrix completion, prior work [CG18] showed that all bad local optima can be eliminated by reweighting the input data. However, the analogous approach for matrix sensing requires reweighting a set of matrices to satisfy RIP, which is a condition that is NP-hard to check. Instead, we build on the framework proposed in [KLL$^+$23] for semi-random sparse linear regression, where the algorithm in each iteration reweights the input based on the current solution and then takes a weighted gradient step that is guaranteed to work well locally. Our analysis crucially exploits the connection between sparsity in vector problems and low-rankness in matrix problems, which may have other applications in obtaining robust algorithms for sparse and low-rank problems.
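The measurement model $b_i = \langle A_i, X^\star \rangle$ and the gradient step at the heart of descent-style methods can be made concrete with a toy instance. The sketch below (sizes, seed, and step size are illustrative, and it omits the paper's reweighting, showing only the plain least-squares gradient step the reweighted variant builds on) generates measurements of a rank-1 2×2 matrix and verifies that one gradient step on f(X) = ½ Σ (⟨A_i, X⟩ - b_i)² decreases the loss.

```python
# Toy matrix sensing instance: b_i = <A_i, X*> with Gaussian A_i.
import random

def inner(A, X):
    """Frobenius inner product <A, X> = sum_ij A_ij X_ij."""
    return sum(A[i][j] * X[i][j] for i in range(2) for j in range(2))

random.seed(0)
X_star = [[1.0, 2.0], [2.0, 4.0]]            # rank-1 ground truth
A_list = [[[random.gauss(0, 1) for _ in range(2)] for _ in range(2)]
          for _ in range(8)]
b = [inner(A, X_star) for A in A_list]        # linear measurements

def loss(X):
    return 0.5 * sum((inner(A, X) - bi) ** 2 for A, bi in zip(A_list, b))

def grad(X):
    """grad f(X) = sum_i (<A_i, X> - b_i) * A_i."""
    G = [[0.0, 0.0], [0.0, 0.0]]
    for A, bi in zip(A_list, b):
        r = inner(A, X) - bi
        for i in range(2):
            for j in range(2):
                G[i][j] += r * A[i][j]
    return G

X = [[0.0, 0.0], [0.0, 0.0]]                  # start from zero
step = 0.005                                  # small step for a quadratic loss
G = grad(X)
X_new = [[X[i][j] - step * G[i][j] for j in range(2)] for i in range(2)]
assert loss(X_new) < loss(X)                  # the descent step reduces the loss
```

In the semi-random setting of the paper, some of the `A_list` entries would be adversarial, which is precisely why each iteration first reweights the measurements before taking such a step.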

For fast timescales or long prediction horizons, the AC optimal power flow (OPF) problem becomes a computational challenge for large-scale, realistic AC networks. To overcome this challenge, this paper presents a novel network reduction methodology that leverages an efficient mixed-integer linear programming (MILP) formulation of a Kron-based reduction that is optimal in the sense that it balances the degree of the reduction with the resulting modeling errors in the reduced network. The method takes as inputs the full AC network and a precomputed library of AC load flow data and uses the graph Laplacian to constrain nodal reductions to only be feasible for neighbors of non-reduced nodes. This results in a highly effective MILP formulation, which is embedded within an iterative scheme to successively improve the Kron-based network reduction until convergence. The resulting optimal network reduction is, thus, grounded in the physics of the full network. The accuracy of the network reduction methodology is then explored for a 100+ node medium-voltage radial distribution feeder example across a wide range of operating conditions. It is finally shown that a network reduction of 25-85% can be achieved within seconds and with worst-case voltage magnitude deviation errors within any super node cluster of less than 0.01 p.u. These results illustrate that the proposed optimization-based approach to Kron reduction of networks is viable for larger networks and suitable for use within various power system applications.
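The Kron reduction that the MILP formulation optimizes over is, at its core, a Schur complement on the network Laplacian: eliminating node set e keeps L_red = L_kk - L_ke L_ee^{-1} L_ek. The hand-worked sketch below (node labels and unit admittances are illustrative, not from the paper) eliminates the interior node of a 3-node path and checks that the result is again a Laplacian, with the two series lines merged into one equivalent line.

```python
# Classical Kron reduction of a 3-node path 0 -- 2 -- 1 with unit line
# admittances: eliminate interior node 2 via the Schur complement
#   L_red = L_kk - L_ke * L_ee^{-1} * L_ek.
L = [[ 1.0,  0.0, -1.0],
     [ 0.0,  1.0, -1.0],
     [-1.0, -1.0,  2.0]]

kept, elim = [0, 1], 2
L_ee = L[elim][elim]                       # 1x1 eliminated block, easy to invert
L_red = [[L[i][j] - L[i][elim] * L[elim][j] / L_ee for j in kept]
         for i in kept]

# The reduced matrix is again a Laplacian: symmetric with zero row sums,
# and the two unit lines in series become one line of admittance
# 1 / (1/1 + 1/1) = 0.5 between the kept nodes 0 and 1.
assert all(abs(sum(row)) < 1e-12 for row in L_red)
assert abs(L_red[0][1] - (-0.5)) < 1e-12
```

The paper's contribution is deciding, via the MILP, which nodes to eliminate so that reductions like this one stay within the observed load flow error tolerances.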

To shift the computational burden from real time to offline in delay-critical power systems applications, recent works entertain the idea of using a deep neural network (DNN) to predict the solutions of the AC optimal power flow (AC-OPF) once presented with load demands. As network topologies may change, training this DNN in a sample-efficient manner becomes a necessity. To improve data efficiency, this work utilizes the fact that OPF data are not simple training labels, but constitute the solutions of a parametric optimization problem. We thus advocate training a sensitivity-informed DNN (SI-DNN) to match not only the OPF optimizers, but also their partial derivatives with respect to the OPF parameters (loads). It is shown that the required Jacobian matrices do exist under mild conditions and can be readily computed from the related primal/dual solutions. The proposed SI-DNN is compatible with a broad range of OPF solvers, including a nonconvex quadratically constrained quadratic program (QCQP), its semidefinite program (SDP) relaxation, and MATPOWER; while SI-DNN can be seamlessly integrated into other learning-to-OPF schemes. Numerical tests on three benchmark power systems corroborate the advanced generalization and constraint satisfaction capabilities for the OPF solutions predicted by an SI-DNN over a conventionally trained DNN, especially in low-data setups.

Abstract In this paper, we study the convex quadratic optimization problem with indicator variables. For the $2\times 2$ case, we describe the convex hull of the epigraph in the original space of variables, and also give a conic quadratic extended formulation. Then, using the convex hull description for the $2\times 2$ case as a building block, we derive an extended SDP relaxation for the general case. This new formulation is stronger than other SDP relaxations proposed in the literature for the problem, including the optimal perspective relaxation and the optimal rank-one relaxation. Computational experiments indicate that the proposed formulations are quite effective in reducing the integrality gap of the optimization problems.
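The perspective relaxation that the new SDP formulation dominates can be illustrated numerically. For an indicator z ∈ {0,1} forcing x = 0 whenever z = 0, the epigraph constraint t ≥ x² is tightened to the perspective constraint t ≥ x²/z (for z > 0); the sketch below (sample point chosen for illustration only) shows the strictly larger lower bound at a fractional z, and that the two coincide at integral z.

```python
# Perspective strengthening of t >= x^2 under an indicator z in {0,1}:
# the convex relaxation t >= x^2 / z dominates the naive t >= x^2
# whenever z is fractional.
x, z = 0.6, 0.5              # a fractional point from a continuous relaxation

naive_bound = x ** 2          # naive relaxation:       t >= x^2
persp_bound = x ** 2 / z      # perspective relaxation: t >= x^2 / z

assert persp_bound > naive_bound      # strictly tighter at fractional z
assert x ** 2 / 1.0 == x ** 2         # identical at the integral point z = 1
```

Larger lower bounds at fractional points are exactly what "reducing the integrality gap" means here; the paper's extended SDP relaxation strengthens this further by handling pairs of variables jointly through the $2\times 2$ convex hull description.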