Many modern large-scale and distributed optimization problems can be cast in a form in which the objective function is the sum of a smooth term and a nonsmooth regularizer. Such problems can be solved via the proximal gradient method, which generalizes standard gradient descent to the nonsmooth setting. In this paper, we leverage tools from control theory to study the global convergence of proximal gradient flow algorithms. We utilize the fact that the proximal gradient algorithm can be interpreted as a variable-metric gradient method on the forward-backward envelope. This continuously differentiable function can be obtained from the augmented Lagrangian associated with the original nonsmooth problem, and it enjoys a number of favorable properties. We prove that global exponential convergence can be achieved even in the absence of strong convexity. Moreover, for in-network optimization problems, we provide a distributed implementation of the gradient flow dynamics based on the proximal augmented Lagrangian and prove global exponential stability for strongly convex problems.
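For illustration, the minimal sketch below (our own example, not code from the paper; the helper names prox_l1, prox_gradient_step, and forward_backward_envelope are hypothetical) shows a single proximal gradient step and an evaluation of the forward-backward envelope for a composite objective with an ℓ1 regularizer standing in for the generic nonsmooth term.

```python
# Minimal sketch (illustrative only): one proximal gradient step and an
# evaluation of the forward-backward envelope (FBE) for
#     minimize  f(x) + lam * ||x||_1,   f smooth and convex.
# gamma is the step size (should satisfy gamma < 1/L, L = Lipschitz constant of grad f).
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t*||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient_step(x, grad_f, lam, gamma):
    """x+ = prox_{gamma*g}(x - gamma*grad f(x)), the forward-backward step."""
    return prox_l1(x - gamma * grad_f(x), gamma * lam)

def forward_backward_envelope(x, f, grad_f, lam, gamma):
    """FBE(x) = f(x) - (gamma/2)*||grad f(x)||^2 + M_{gamma*g}(x - gamma*grad f(x)),
    where M is the Moreau envelope of g = lam*||.||_1.  The FBE is continuously
    differentiable and, for small enough gamma, shares minimizers with f + g."""
    gfx = grad_f(x)
    v = x - gamma * gfx
    z = prox_l1(v, gamma * lam)                      # prox_{gamma*g}(v)
    moreau = lam * np.abs(z).sum() + np.linalg.norm(v - z) ** 2 / (2.0 * gamma)
    return f(x) - 0.5 * gamma * np.linalg.norm(gfx) ** 2 + moreau

if __name__ == "__main__":
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, -1.0])
    f = lambda x: 0.5 * x @ A @ x - b @ x
    grad_f = lambda x: A @ x - b
    x = np.zeros(2)
    for _ in range(200):
        x = prox_gradient_step(x, grad_f, lam=0.1, gamma=0.2)
    print(x, forward_backward_envelope(x, f, grad_f, lam=0.1, gamma=0.2))
```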
An Exponentially Convergent Primal-Dual Algorithm for Nonsmooth Composite Minimization
We consider a class of nonsmooth convex composite optimization problems in which the objective function is the sum of a continuously differentiable convex term and a potentially non-differentiable convex regularizer. In [1], the authors introduced the proximal augmented Lagrangian method and derived the resulting continuous-time primal-dual dynamics, which converge to the optimal solution. In this paper, we extend these dynamics from continuous to discrete time via forward Euler discretization. We prove explicit bounds on the exponential convergence rate of the proposed algorithm for a sufficiently small step size. Since a larger step size can improve the convergence speed, we further develop a linear matrix inequality (LMI) condition that can be solved numerically to provide rate certificates for general step size choices. In addition, we prove that a wide range of step size values guarantees exponential convergence. We close the paper by demonstrating the performance of the proposed algorithm via computational experiments.
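As a rough illustration of the discretization described above, the sketch below applies forward Euler steps to primal-descent / dual-ascent dynamics on the proximal augmented Lagrangian. It is written under our own simplifying assumptions (identity coupling between the smooth and nonsmooth terms, an ℓ1 regularizer, and ad hoc values of the parameter mu and step size alpha), so it is not the certified algorithm or step size from the paper.

```python
# Sketch (illustrative assumptions: coupling T = I, g = lam*||.||_1,
# f(x) = 0.5*||A x - b||^2; mu and alpha are small ad hoc values, not the
# step sizes certified in the paper).  Forward Euler discretization of the
# primal-descent / dual-ascent dynamics on the proximal augmented Lagrangian
#   L_mu(x; y) = f(x) + M_{mu*g}(x + mu*y) - (mu/2)*||y||^2.
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t*||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def primal_dual_euler(A, b, lam, mu=1.0, alpha=0.005, iters=20000):
    """x^{k+1} = x^k - alpha*grad_x L_mu,  y^{k+1} = y^k + alpha*grad_y L_mu."""
    x = np.zeros(A.shape[1])
    y = np.zeros(A.shape[1])
    for _ in range(iters):
        v = x + mu * y
        z = prox_l1(v, mu * lam)                    # prox_{mu*g}(x + mu*y)
        grad_x = A.T @ (A @ x - b) + (v - z) / mu   # grad f(x) + grad of Moreau envelope
        grad_y = x - z                              # grad_y L_mu(x; y)
        x = x - alpha * grad_x                      # forward Euler, primal descent
        y = y + alpha * grad_y                      # forward Euler, dual ascent
    return x, y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
    lam = 0.1
    x_hat, _ = primal_dual_euler(A, b, lam)
    print("objective:", 0.5 * np.linalg.norm(A @ x_hat - b) ** 2 + lam * np.abs(x_hat).sum())
```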
- Award ID(s): 1809833
- PAR ID: 10128660
- Date Published:
- Journal Name: 2018 IEEE Conference on Decision and Control (CDC)
- Page Range / eLocation ID: 4927 to 4932
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- We consider an in-network optimal resource allocation problem in which a group of agents interacting over a connected graph want to meet a demand while minimizing their collective cost. The contribution of this paper is to design a distributed continuous-time algorithm for this problem inspired by a recently developed first-order transformed primal-dual method. The solution applies to a cluster-based setting in which each agent may have a set of subagents, and its local cost is the sum of the costs of these subagents. The proposed algorithm guarantees exponential convergence for strongly convex costs and asymptotic convergence for convex costs. Exponential convergence when the local cost functions are strongly convex is achieved even when the local gradients are only locally Lipschitz. For convex local cost functions, our algorithm guarantees asymptotic convergence to a point in the minimizer set. Through numerical examples, we show that our proposed algorithm delivers faster convergence than existing distributed resource allocation algorithms.
- We propose a continuous-time second-order optimization algorithm for solving unconstrained convex optimization problems with bounded Hessian. We show that this alternative algorithm has a convergence rate comparable to that of the continuous-time Newton–Raphson method; structurally, however, it is amenable to a more efficient distributed implementation. We present a distributed implementation of our proposed optimization algorithm and prove its convergence via Lyapunov analysis. A numerical example demonstrates our results.
- The primal-dual gradient flow dynamics based on the proximal augmented Lagrangian were introduced in [1] to solve nonsmooth composite optimization problems with a linear equality constraint. We use a Lyapunov-based approach to demonstrate global exponential stability of the underlying dynamics when the differentiable part of the objective function is strongly convex and its gradient is Lipschitz continuous. This also allows us to determine a bound on the step size that guarantees linear convergence of the discretized algorithm.
- In large-scale and model-free settings, first-order algorithms are often used in an attempt to find the optimal control action without identifying the underlying dynamics. The convergence properties of these algorithms remain poorly understood because of nonconvexity. In this paper, we revisit the continuous-time linear quadratic regulator problem and take a step towards demystifying the efficiency of gradient-based strategies. Despite the lack of convexity, we establish a linear rate of convergence to the globally optimal solution for the gradient descent algorithm. The key component of our analysis is relating the gradient-flow dynamics associated with the nonconvex formulation to those of a convex reparameterization. This allows us to provide convergence guarantees for the nonconvex approach from its convex counterpart.
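For the last item above, the gradient that model-free schemes estimate from data has a well-known model-based expression. The sketch below is an illustration under a known-model assumption (it is not the model-free algorithm analyzed in that paper): it runs plain gradient descent over stabilizing feedback gains using the standard LQR gradient formula, with arbitrary illustrative choices of A, B, Q, R, Sigma0, and the initial gain.

```python
# Model-based illustration (not the model-free setting of the paper): gradient
# descent on the continuous-time LQR cost  J(K) = trace(P_K @ Sigma0)  over
# stabilizing gains u = -K x, using the standard gradient formula
#     grad J(K) = 2*(R @ K - B.T @ P_K) @ X_K,
# where P_K and X_K solve the closed-loop Lyapunov equations.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lqr_gradient_descent(A, B, Q, R, Sigma0, K, step=1e-3, iters=5000):
    for _ in range(iters):
        Acl = A - B @ K
        # P_K solves  Acl^T P + P Acl + Q + K^T R K = 0
        P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
        # X_K solves  Acl X + X Acl^T + Sigma0 = 0
        X = solve_continuous_lyapunov(Acl, -Sigma0)
        K = K - step * 2.0 * (R @ K - B.T @ P) @ X   # gradient step on J(K)
    return K

if __name__ == "__main__":
    A = np.array([[0.0, 1.0], [1.0, 0.0]])
    B = np.eye(2)
    Q = R = Sigma0 = np.eye(2)
    K0 = 2.0 * np.eye(2)                             # stabilizing initial gain
    K = lqr_gradient_descent(A, B, Q, R, Sigma0, K0)
    P = solve_continuous_lyapunov((A - B @ K).T, -(Q + K.T @ R @ K))
    print("final gain:\n", K, "\ncost:", np.trace(P @ Sigma0))
```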