

Title: A global algorithm for AC optimal power flow based on successive linear conic optimization
Recently, there has been significant research interest in the exact solution of the AC optimal power flow (AC-OPF) problem. A semidefinite programming (SDP) relaxation solves many OPF problems globally; however, there exist practical problems for which the semidefinite relaxation fails to yield the global solution. The appropriateness of the relaxation for AC-OPF therefore hinges on the success or failure of the SDP relaxation. This paper formulates AC-OPF as a quadratic optimization problem with a single negative eigenvalue in the objective function, subject to linear and conic constraints. The proposed AC-OPF model covers the classical AC economic dispatch problem, which is known to be NP-hard. By combining successive linear conic optimization (SLCO), convex relaxation, and a line search technique, we present a global algorithm for AC-OPF that can locate a globally optimal solution of the underlying AC-OPF within a given tolerance of the global optimum by solving a sequence of linear conic optimization problems. The proposed algorithm is examined on a modified IEEE 6-bus test system, and promising numerical results are reported.
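As a rough, hedged illustration of the idea summarized above (successively replacing the single concave direction of the quadratic objective by a linear majorant, solving the resulting convex conic subproblem, and refining the iterate with a line search), the sketch below applies that loop to a toy quadratic program. It is a generic convex-concave stand-in, not the paper's SLCO algorithm; cvxpy is used only as an assumed conic modeling layer, and all problem data are made up.

```python
# Minimal sketch: minimize x'Qx + c'x over a polyhedron, where Q has exactly
# one negative eigenvalue.  The concave part is linearized at each iterate,
# the convex conic subproblem is solved, and a line search on the true
# objective picks the next iterate.  Requires numpy and cvxpy.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 4

# Build Q with a single negative eigenvalue: Q = Q_psd - lam * q q^T.
B = rng.standard_normal((n, n))
Q_psd = B @ B.T + np.eye(n)                 # positive definite part
w, V = np.linalg.eigh(Q_psd)
q = V[:, 0]                                 # direction of negative curvature
lam = w[0] + 1.0                            # pushes that eigenvalue to -1
Q = Q_psd - lam * np.outer(q, q)

c = rng.standard_normal(n)
A = np.vstack([np.eye(n), -np.eye(n)])      # simple box: -2 <= x <= 2
b = 2.0 * np.ones(2 * n)

def f(x):
    return x @ Q @ x + c @ x                # true (nonconvex) objective

x_k = np.zeros(n)
for it in range(30):
    alpha = q @ x_k                         # linearization point of concave term
    x = cp.Variable(n)
    # -lam*(q'x)^2 is majorized by -lam*(2*alpha*(q'x) - alpha^2)
    surrogate = cp.quad_form(x, Q_psd) + c @ x \
                - lam * (2 * alpha * (q @ x) - alpha ** 2)
    cp.Problem(cp.Minimize(surrogate), [A @ x <= b]).solve()
    x_new = x.value

    # Grid line search on the true objective between x_k and the subproblem solution.
    ts = np.linspace(0.0, 1.0, 21)
    x_next = min((x_k + t * (x_new - x_k) for t in ts), key=f)

    if np.linalg.norm(x_next - x_k) < 1e-6:
        break
    x_k = x_next

print("stationary point:", np.round(x_k, 4), " objective:", round(f(x_k), 4))
```

The majorization step guarantees monotone descent of the true objective; the paper's global algorithm additionally uses convex relaxation bounds to certify global optimality, which this sketch does not attempt.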
Award ID(s):
1851602
NSF-PAR ID:
10079395
Author(s) / Creator(s):
Date Published:
Journal Name:
2017 IEEE Power & Energy Society General Meeting
Page Range / eLocation ID:
1 to 5
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We present alfonso, an open-source Matlab package for solving conic optimization problems over nonsymmetric convex cones. The implementation is based on the authors’ corrected analysis of a method of Skajaa and Ye. It enables optimization over any convex cone as long as a logarithmically homogeneous self-concordant barrier is available for the cone or its dual. This includes many nonsymmetric cones, for example, hyperbolicity cones and their duals (such as sum-of-squares cones), semidefinite and second-order cone representable cones, power cones, and the exponential cone. Besides enabling the solution of problems that cannot be cast as optimization problems over a symmetric cone, algorithms for nonsymmetric conic optimization also offer performance advantages for problems whose symmetric cone programming representation requires a large number of auxiliary variables or has a special structure that can be exploited in the barrier computation. The worst-case iteration complexity of alfonso is the best known for nonsymmetric cone optimization: $O(\sqrt{\nu}\log(1/\varepsilon))$ iterations to reach an ε-optimal solution, where ν is the barrier parameter of the barrier function used in the optimization. Alfonso can be interfaced with a Matlab function (supplied by the user) that computes the Hessian of a barrier function for the cone. A simplified interface is also available to optimize over the direct product of cones for which a barrier function has already been built into the software. This interface can be easily extended to include new cones. Both interfaces are illustrated by solving linear programs. The oracle interface and the efficiency of alfonso are also demonstrated using an optimal design of experiments problem in which the tailored barrier computation greatly decreases the solution time compared with using state-of-the-art, off-the-shelf conic optimization software. Summary of Contribution: The paper describes an open-source Matlab package for optimization over nonsymmetric cones. A particularly important feature of this software is that, unlike other conic optimization software, it enables optimization over any convex cone as long as a suitable barrier function is available for the cone or its dual, not limiting the user to a small number of specific cones. Nonsymmetric cones for which such barriers are already known include, for example, hyperbolicity cones and their duals (such as sum-of-squares cones), semidefinite and second-order cone representable cones, power cones, and the exponential cone. Thus, the scope of this software is far larger than most current conic optimization software. This does not come at the price of efficiency, as the worst-case iteration complexity of our algorithm matches the iteration complexity of the most successful interior-point methods for symmetric cones. Besides enabling the solution of problems that cannot be cast as optimization problems over a symmetric cone, our software can also offer performance advantages for problems whose symmetric cone programming representation requires a large number of auxiliary variables or has a special structure that can be exploited in the barrier computation. This is also demonstrated in this paper via an example in which our code significantly outperforms Mosek 9 and SCS 2.
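To make the "barrier oracle" notion above concrete, the sketch below shows the kind of information such an oracle supplies: value, gradient, and Hessian of a logarithmically homogeneous self-concordant barrier. It is written in Python for a standard textbook example (the barrier for the nonnegative orthant), not alfonso's actual Matlab interface.

```python
# Generic barrier oracle for the nonnegative orthant: F(x) = -sum(log x_i),
# a logarithmically homogeneous self-concordant barrier with parameter nu = n.
import numpy as np

def orthant_barrier(x):
    """Return F(x), grad F(x), Hess F(x) for F(x) = -sum(log x_i), x > 0."""
    assert np.all(x > 0), "x must lie in the interior of the cone"
    val = -np.sum(np.log(x))
    grad = -1.0 / x
    hess = np.diag(1.0 / x ** 2)
    return val, grad, hess

x = np.array([0.5, 2.0, 3.0])
val, grad, hess = orthant_barrier(x)
nu = x.size                                   # barrier parameter of this barrier

# Logarithmic homogeneity checks: F(t*x) = F(x) - nu*log(t) and <grad F(x), x> = -nu.
t = 1.7
assert np.isclose(orthant_barrier(t * x)[0], val - nu * np.log(t))
assert np.isclose(grad @ x, -nu)
print("barrier value:", val, " parameter nu:", nu)
```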
  2. Low-rank matrix recovery is a fundamental problem in machine learning with numerous applications. In practice, the problem can be solved by convex optimization, namely nuclear norm minimization, or by non-convex optimization, since it is well known that for low-rank matrix problems like matrix sensing and matrix completion, all local optima of the natural non-convex objectives are also globally optimal under certain ideal assumptions. In this paper, we study new approaches for matrix sensing in a semi-random model where an adversary can add any number of arbitrary sensing matrices. More precisely, the problem is to recover a low-rank matrix $X^\star$ from linear measurements $b_i = \langle A_i, X^\star \rangle$, where an unknown subset of the sensing matrices satisfies the Restricted Isometry Property (RIP) and the rest of the $A_i$'s are chosen adversarially. It is known that in the semi-random model, existing non-convex objectives can have bad local optima. To fix this, we present a descent-style algorithm that provably recovers the ground-truth matrix $X^\star$. For the closely related problem of semi-random matrix completion, prior work [CG18] showed that all bad local optima can be eliminated by reweighting the input data. However, the analogous approach for matrix sensing requires reweighting a set of matrices to satisfy RIP, which is a condition that is NP-hard to check. Instead, we build on the framework proposed in [KLL$^+$23] for semi-random sparse linear regression, where the algorithm in each iteration reweights the input based on the current solution, and then takes a weighted gradient step that is guaranteed to work well locally. Our analysis crucially exploits the connection between sparsity in vector problems and low-rankness in matrix problems, which may have other applications in obtaining robust algorithms for sparse and low-rank problems.
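The sketch below illustrates only the shape of the "weighted gradient step" mentioned above, on a factorized matrix sensing objective. The weights here are uniform placeholders; the paper's actual contribution (how to reweight the sensing matrices in each iteration) is not reproduced, and the Gaussian data are an assumption.

```python
# Weighted gradient step for factorized matrix sensing: one step of descent on
# sum_i w_i * (<A_i, U U^T> - b_i)^2 / 2 with respect to the factor U.
import numpy as np

rng = np.random.default_rng(1)
d, r, m = 10, 2, 200

# Ground-truth low-rank matrix and (here: Gaussian) sensing matrices; in the
# semi-random model some A_i would instead be adversarial.
U_star = rng.standard_normal((d, r)) / np.sqrt(d)
X_star = U_star @ U_star.T
A = rng.standard_normal((m, d, d))
b = np.einsum('mij,ij->m', A, X_star)           # b_i = <A_i, X*>

def weighted_grad_step(U, w, eta):
    residual = np.einsum('mij,ij->m', A, U @ U.T) - b
    # d/dU <A_i, U U^T> = (A_i + A_i^T) U
    G = np.einsum('m,mij,jk->ik', w * residual, A + A.transpose(0, 2, 1), U)
    return U - eta * G

U = rng.standard_normal((d, r)) * 0.1           # small random initialization
w = np.ones(m) / m                              # placeholder (uniform) weights
for _ in range(500):
    U = weighted_grad_step(U, w, eta=0.05)

print("relative recovery error:",
      np.linalg.norm(U @ U.T - X_star) / np.linalg.norm(X_star))
```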
  3. For fast timescales or long prediction horizons, the AC optimal power flow (OPF) problem becomes a computational challenge for large-scale, realistic AC networks. To overcome this challenge, this paper presents a novel network reduction methodology that leverages an efficient mixed-integer linear programming (MILP) formulation of a Kron-based reduction that is optimal in the sense that it balances the degree of the reduction with resulting modeling errors in the reduced network. The method takes as inputs the full AC network and a pre-computed library of AC load flow data and uses the graph Laplacian to constrain nodal reductions to only be feasible for neighbors of non-reduced nodes. This results in a highly effective MILP formulation, which is embedded within an iterative scheme to successively improve the Kron-based network reduction until convergence. The resulting optimal network reduction is, thus, grounded in the physics of the full network. The accuracy of the network reduction methodology is then explored for a 100+ node medium-voltage radial distribution feeder example across a wide range of operating conditions. It is finally shown that a network reduction of 25-85% can be achieved within seconds and with worst-case voltage magnitude deviation errors within any super node cluster of less than 0.01 p.u. These results illustrate that the proposed optimization-based approach to Kron reduction of networks is viable for larger networks and suitable for use within various power system applications.
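For reference, the Kron reduction that underlies the method above is the Schur complement of the eliminated node block of the admittance (Laplacian) matrix. The sketch below shows that elimination step on a hypothetical 5-node network; the MILP that chooses which nodes to eliminate is not reproduced here.

```python
# Kron (Schur-complement) reduction: eliminate node set E from an admittance /
# weighted-Laplacian matrix Y while preserving terminal behaviour at kept nodes K.
import numpy as np

# Weighted Laplacian of a small, hypothetical 5-node network.
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 1.5), (3, 4, 1.0), (1, 4, 0.5)]
n = 5
Y = np.zeros((n, n))
for i, j, y in edges:
    Y[i, i] += y; Y[j, j] += y
    Y[i, j] -= y; Y[j, i] -= y

keep = [0, 2, 4]                      # nodes retained in the reduced model
elim = [1, 3]                         # nodes to be eliminated

Y_kk = Y[np.ix_(keep, keep)]
Y_ke = Y[np.ix_(keep, elim)]
Y_ek = Y[np.ix_(elim, keep)]
Y_ee = Y[np.ix_(elim, elim)]

# Reduced admittance matrix: Y_red = Y_kk - Y_ke * inv(Y_ee) * Y_ek.
Y_red = Y_kk - Y_ke @ np.linalg.solve(Y_ee, Y_ek)
print(np.round(Y_red, 3))
```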
  4. To shift the computational burden from real-time to offline in delay-critical power systems applications, recent works entertain the idea of using a deep neural network (DNN) to predict the solutions of the AC optimal power flow (AC-OPF) once presented with load demands. As network topologies may change, training this DNN in a sample-efficient manner becomes a necessity. To improve data efficiency, this work utilizes the fact that OPF data are not simple training labels, but constitute the solutions of a parametric optimization problem. We thus advocate training a sensitivity-informed DNN (SI-DNN) to match not only the OPF optimizers, but also their partial derivatives with respect to the OPF parameters (loads). It is shown that the required Jacobian matrices do exist under mild conditions, and can be readily computed from the related primal/dual solutions. The proposed SI-DNN is compatible with a broad range of OPF solvers, including a non-convex quadratically constrained quadratic program (QCQP), its semidefinite program (SDP) relaxation, and MATPOWER, while the SI-DNN can be seamlessly integrated into other learning-to-OPF schemes. Numerical tests on three benchmark power systems corroborate the improved generalization and constraint-satisfaction capabilities of the OPF solutions predicted by an SI-DNN over a conventionally trained DNN, especially in low-data setups.
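The sketch below illustrates the sensitivity-informed training objective described above: the network is fit to both the OPF solution x*(load) and its Jacobian dx*/dload. It is not the authors' code; PyTorch is an assumed framework, and the labels here are synthetic placeholders rather than solutions and sensitivities of an actual OPF.

```python
# Sensitivity-informed training sketch: match predictions and input Jacobians.
import torch

torch.manual_seed(0)
n_load, n_dec = 6, 4

# Small DNN mapping load demands to (placeholder) OPF decision variables.
model = torch.nn.Sequential(
    torch.nn.Linear(n_load, 32), torch.nn.Tanh(), torch.nn.Linear(32, n_dec)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic stand-ins for the labels: OPF optimizers x*(load) and Jacobians
# dx*/dload (in the paper these come from primal/dual OPF solutions).
W_true = torch.randn(n_dec, n_load)
loads = torch.randn(16, n_load)
x_star = loads @ W_true.T
J_star = W_true.expand(16, n_dec, n_load)

lam = 0.1                                  # weight on the sensitivity term
for epoch in range(100):
    opt.zero_grad()
    total = 0.0
    for l, x_t, J_t in zip(loads, x_star, J_star):
        pred = model(l)
        # Jacobian of the network output with respect to its load input.
        J_pred = torch.autograd.functional.jacobian(model, l, create_graph=True)
        total = total + torch.mean((pred - x_t) ** 2) \
                      + lam * torch.mean((J_pred - J_t) ** 2)
    (total / len(loads)).backward()
    opt.step()

print("final sensitivity-informed training loss:", float(total) / len(loads))
```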
  5. In this paper, we study the convex quadratic optimization problem with indicator variables. For the $2\times 2$ case, we describe the convex hull of the epigraph in the original space of variables, and also give a conic quadratic extended formulation. Then, using the convex hull description for the $2\times 2$ case as a building block, we derive an extended SDP relaxation for the general case. This new formulation is stronger than other SDP relaxations proposed in the literature for the problem, including the optimal perspective relaxation and the optimal rank-one relaxation. Computational experiments indicate that the proposed formulations are quite effective in reducing the integrality gap of the optimization problems.
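For context on the "optimal perspective relaxation" mentioned above, the standard perspective strengthening of a separable quadratic with indicator variables is recalled below. This is a well-known construction stated for reference, not taken from the paper; the linear cost vector $a$ and diagonal weights $d_i$ are generic placeholders.

```latex
% Each term d_i x_i^2 with the indicator logic x_i (1 - z_i) = 0, z_i \in \{0,1\},
% is replaced by its perspective d_i x_i^2 / z_i, written in rotated
% second-order-cone (epigraph) form:
\begin{align*}
\min_{x,\,z,\,t}\quad & a^{\top} x + \sum_{i} d_i\, t_i \\
\text{s.t.}\quad      & x_i^2 \le t_i\, z_i
      \qquad \text{(SOC form of } t_i \ge x_i^2 / z_i\text{)},\\
                      & 0 \le z_i \le 1, \quad (x, z) \ \text{subject to the original constraints.}
\end{align*}
```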

     