Title: A Finitely Convergent Circumcenter Method for the Convex Feasibility Problem
In this paper, we present a variant of the circumcenter method for the Convex Feasibility Problem (CFP), ensuring finite convergence under a Slater assumption. The method replaces exact projections onto the convex sets with projections onto separating halfspaces, perturbed by positive exogenous parameters that decrease to zero along the iterations. If the perturbation parameters decrease slowly enough, such as the terms of a diverging series, finite convergence is achieved. To the best of our knowledge, this is the first circumcenter method for CFP that guarantees finite convergence.
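A minimal numerical sketch of the ingredients described in the abstract, assuming each convex set is given as a sublevel set C_i = {x : g_i(x) <= 0} with a subgradient oracle, and that eps is the positive exogenous perturbation parameter driven to zero along the iterations. This is an illustration of the general idea only, not the authors' exact algorithm.

```python
import numpy as np

def perturbed_halfspace_projection(x, g_val, subgrad, eps):
    """Project x onto the halfspace {y : g_val + <subgrad, y - x> + eps <= 0},
    which separates x from {g <= 0} whenever g_val > 0; the positive eps
    pushes the target halfspace slightly deeper into the set."""
    violation = g_val + eps
    if violation <= 0.0:
        return x.copy()
    return x - (violation / np.dot(subgrad, subgrad)) * subgrad

def circumcenter(points):
    """Point in the affine hull of `points` equidistant from all of them,
    obtained by a least-squares solve of the equidistance conditions."""
    P = np.asarray(points, dtype=float)
    p0, V = P[0], (P[1:] - P[0]).T        # basis vectors of the affine hull
    if V.size == 0:
        return p0
    G = V.T @ V                           # Gram matrix of the basis
    a, *_ = np.linalg.lstsq(G, 0.5 * np.diag(G), rcond=None)
    return p0 + V @ a

def circumcenter_step(x, g_oracles, subgrad_oracles, eps):
    """One illustrative iteration: reflect x across the perturbed separating
    halfspace of each set, then move to the circumcenter of x together with
    all of the reflections."""
    pts = [x]
    for g, s in zip(g_oracles, subgrad_oracles):
        p = perturbed_halfspace_projection(x, g(x), s(x), eps)
        pts.append(2.0 * p - x)           # reflection across the halfspace
    return circumcenter(pts)
```

Per the abstract, when eps decreases slowly enough along the iterations (for example, like the terms of a diverging series), the perturbed scheme reaches a feasible point in finitely many steps.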
Award ID(s):
2307328
PAR ID:
10583842
Author(s) / Creator(s):
Publisher / Repository:
SIAM
Date Published:
Journal Name:
SIAM Journal on Optimization
Volume:
34
Issue:
3
ISSN:
1052-6234
Page Range / eLocation ID:
2535 to 2556
Subject(s) / Keyword(s):
49M27 65K05 65B99 90C25
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. SGD with Momentum (SGDM) is a widely used family of algorithms for large-scale optimization of machine learning problems. Yet, when optimizing generic convex functions, no advantage is known for any SGDM algorithm over plain SGD. Moreover, even the most recent results require changes to the SGDM algorithms, like averaging of the iterates and a projection onto a bounded domain, which are rarely used in practice. In this paper, we focus on the convergence rate of the last iterate of SGDM. For the first time, we prove that for any constant momentum factor, there exists a Lipschitz and convex function for which the last iterate of SGDM suffers from a suboptimal convergence rate of $$\Omega(\frac{\ln T}{\sqrt{T}})$$ after $$T$$ iterations. Based on this fact, we study a class of (both adaptive and non-adaptive) Follow-The-Regularized-Leader-based SGDM algorithms with increasing momentum and shrinking updates (see the update sketch after this list). For these algorithms, we show that the last iterate attains the optimal convergence rate $$O(\frac{1}{\sqrt{T}})$$ for unconstrained convex stochastic optimization problems, without projections onto bounded domains or knowledge of $$T$$. Further, we show a variety of results for FTRL-based SGDM when used with adaptive stepsizes. Empirical results are shown as well.
  2. We develop a convex analytic approach to analyze finite width two-layer ReLU networks. We first prove that an optimal solution to the regularized training problem can be characterized as extreme points of a convex set, where simple solutions are encouraged via its convex geometrical properties. We then leverage this characterization to show that an optimal set of parameters yields linear spline interpolation for regression problems involving one dimensional or rank-one data. We also characterize the classification decision regions in terms of a kernel matrix and minimum $$\ell_1$$-norm solutions. This is in contrast to the Neural Tangent Kernel, which is unable to explain predictions of finite width networks. Our convex geometric characterization also provides intuitive explanations of hidden neurons as auto-encoders. In higher dimensions, we show that the training problem can be cast as a finite dimensional convex problem with infinitely many constraints. Then, we apply certain convex relaxations and introduce a cutting-plane algorithm to globally optimize the network. We further analyze the exactness of the relaxations to provide conditions for convergence to a global optimum. Our analysis also shows that optimal network parameters can be characterized as interpretable closed-form formulas in some practically relevant special cases.
  3. This article introduces a new system property called the closest feasible points (CFP) invariance to characterize systems with actuator saturation. Systems that possess this invariance property include diagonal matrices, completely decentralized (completely decoupled) linear dynamical systems, and dynamical systems with a nonsingular input-independent characteristic (decoupling) matrix that can be made diagonal with row or column rearrangements. However, a single-input single-output system may not possess this property. This system property has implications and applications in control, where actuator saturation is common. For example, when an actuator saturates, the closed-loop performance of a CFP non-invariant plant under a controller that is not a solution to a constrained optimal control problem may degrade considerably. The definition of this property guides the derivation of optimal CFP non-invariance compensators that gracefully reduce the control performance degradation in CFP non-invariant plants. This work characterizes the plants for which clipping and direction preservation of controller outputs are optimal (see the saturation-handling sketch after this list).
  4. In this paper, we propose Nesterov Accelerated Shuffling Gradient (NASG), a new algorithm for convex finite-sum minimization problems. Our method integrates the traditional Nesterov acceleration momentum with different shuffling sampling schemes (see the shuffling sketch after this list). We show that our algorithm has an improved rate of $$O(1/T)$$ using unified shuffling schemes, where $$T$$ is the number of epochs. This rate is better than that of any other shuffling gradient method in the convex regime. Our convergence analysis does not require a bounded domain assumption or a bounded gradient condition. For randomized shuffling schemes, we improve the convergence bound further. Under a certain initial condition, we show that our method converges faster in a small neighborhood of the solution. Numerical simulations demonstrate the efficiency of our algorithm.
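Referenced from item 1 above: a toy sketch of an SGD-with-momentum loop whose momentum factor increases toward 1 and whose update scale shrinks over time. The particular schedules beta_t = t/(t+1) and eta/sqrt(t), and the name sgdm_increasing_momentum, are assumptions for illustration; this is not the paper's FTRL-based construction.

```python
import numpy as np

def sgdm_increasing_momentum(grad, x0, T, eta=0.1, seed=0):
    """Toy SGDM loop with increasing momentum and shrinking updates.
    grad(x, rng) should return a stochastic (sub)gradient at x.
    The schedules below are illustrative assumptions, not the paper's."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    m = np.zeros_like(x)
    for t in range(1, T + 1):
        beta_t = t / (t + 1.0)           # momentum factor increasing toward 1
        eta_t = eta / np.sqrt(t)         # shrinking update scale
        m = beta_t * m + (1.0 - beta_t) * grad(x, rng)
        x = x - eta_t * m
    return x                             # last iterate

# Example: noisy gradients of f(x) = ||x||^2 / 2
noisy_grad = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
x_last = sgdm_increasing_momentum(noisy_grad, x0=np.ones(5), T=1000)
```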
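Referenced from item 3 above: a sketch contrasting the two saturation-handling strategies that the article compares, componentwise clipping and direction preservation, for a hypothetical actuator with symmetric per-channel limits (the symmetric-limit form is an assumption made for simplicity).

```python
import numpy as np

def clip_componentwise(u, u_max):
    """Clip each control channel independently to [-u_max_i, u_max_i]."""
    return np.clip(u, -u_max, u_max)

def preserve_direction(u, u_max):
    """Scale the whole control vector so every channel respects its limit
    while keeping the commanded direction."""
    over = np.max(np.abs(u) / u_max)     # worst relative limit violation
    return u / max(1.0, over)

u = np.array([3.0, -0.5, 1.2])
limits = np.array([1.0, 1.0, 1.0])
print(clip_componentwise(u, limits))     # clips channels independently
print(preserve_direction(u, limits))     # shrinks the whole vector uniformly
```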
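Referenced from item 4 above: a toy shuffled-epoch loop with a Nesterov-style extrapolation step, for a finite sum given as a list of per-component gradient functions. The stepsize, momentum constant, and loop structure are illustrative assumptions, not the NASG algorithm itself.

```python
import numpy as np

def nesterov_shuffling_epochs(grads, x0, epochs, eta=0.1, mu=0.9, seed=0):
    """Toy shuffling-gradient loop: each epoch visits every component of the
    finite sum once in a freshly shuffled order, taking a gradient step at a
    Nesterov-style lookahead point (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    v = x.copy()                              # lookahead (extrapolated) point
    for _ in range(epochs):
        order = rng.permutation(len(grads))   # reshuffle the components
        for i in order:
            x_prev = x
            x = v - eta * grads[i](v)         # gradient step at the lookahead point
            v = x + mu * (x - x_prev)         # Nesterov-style extrapolation
    return x

# Example: f(x) = sum_i ||x - c_i||^2 / 2 with three components
centers = [np.zeros(3), np.ones(3), 2.0 * np.ones(3)]
grads = [lambda x, c=c: x - c for c in centers]
x_out = nesterov_shuffling_epochs(grads, x0=np.full(3, 5.0), epochs=50)
```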