Title: Block-Activated Algorithms For Multicomponent Fully Nonsmooth Minimization
Abstract:
Under consideration are multicomponent minimization problems involving a separable nonsmooth convex function penalizing the components individually, and nonsmooth convex coupling terms penalizing linear mixtures of the components. We investigate the application of block-activated proximal algorithms for solving such problems, i.e., algorithms which, at each iteration, need to use only a block of the underlying functions, as opposed to all of them as in standard methods. For smooth coupling functions, several block-activated algorithms exist and they are well understood. By contrast, in the fully nonsmooth case, few block-activated methods are available and little effort has been devoted to assessing them. Our goal is to shed more light on the implementation, the features, and the behavior of these algorithms, compare their merits, and provide machine learning and image recovery experiments illustrating their performance.
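
For concreteness, a problem template consistent with the abstract's description (separable nonsmooth terms plus nonsmooth coupling of linear mixtures) can be written as follows; the symbols f_i, g_k, and L_{ki} are illustrative choices rather than notation quoted from the paper:

\[
  \underset{x_1,\ldots,x_p}{\text{minimize}}\;\;
  \sum_{i=1}^{p} f_i(x_i)
  \;+\;
  \sum_{k=1}^{q} g_k\!\Big(\sum_{i=1}^{p} L_{ki}\,x_i\Big),
\]

where each f_i and g_k is a proper lower semicontinuous convex function and each L_{ki} is a linear operator. A block-activated proximal algorithm then evaluates, at a given iteration, only the proximity operators of the functions indexed by a chosen block rather than all of them.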
Award ID(s):
1715671
NSF-PAR ID:
10335315
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing
Volume:
2022
Page Range / eLocation ID:
5428 to 5432
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We investigate a modular convex Nash equilibrium problem involving nonsmooth functions acting on linear mixtures of strategies, as well as smooth coupling functions. An asynchronous block-iterative decomposition method is proposed to solve it. 
  2. Subgradient methods comprise a fundamental class of nonsmooth optimization algorithms. Classical results show that certain subgradient methods converge sublinearly for general Lipschitz convex functions and converge linearly for convex functions that grow sharply away from solutions. Recent work has moreover extended these results to certain nonconvex problems. In this work, we seek to improve the complexity of these algorithms by asking the following question: is it possible to design a superlinearly convergent subgradient method? We provide a positive answer to this question for a broad class of sharp semismooth functions. (A generic subgradient iteration with the classical Polyak step size is sketched after this item.)

    Funding: The research of D. Davis was supported by the Division of Mathematical Sciences [Grant 2047637] and the Alfred P. Sloan Foundation.

     
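    As background for the classical results cited in this abstract (and not as the superlinearly convergent method proposed there), a minimal subgradient iteration with the Polyak step size, which converges linearly on sharp convex functions, might look like the sketch below; the test function f(x) = ||x||_1 and the known optimal value f* = 0 are assumptions chosen purely for illustration.

import numpy as np

def polyak_subgradient(x0, f, subgrad, f_star, iters=200):
    """Subgradient method with the Polyak step size (f(x) - f*) / ||g||^2.

    For sharp convex functions (objective growing at least linearly with the
    distance to the solution set), this classical rule converges linearly.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = subgrad(x)
        gap = f(x) - f_star
        if gap <= 0 or not np.any(g):
            break                                # already optimal
        x = x - (gap / np.dot(g, g)) * g         # Polyak step
    return x

# Illustration on the sharp function f(x) = ||x||_1, whose minimum value is 0.
f = lambda x: np.abs(x).sum()
subgrad = lambda x: np.sign(x)                   # a valid subgradient of the l1 norm
print(polyak_subgradient(np.array([3.0, -2.0, 0.5]), f, subgrad, f_star=0.0))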
  3. Abstract

    We present a new feasible proximal gradient method for constrained optimization in which both the objective and the constraint functions are given by the sum of a smooth, possibly nonconvex function and a simple convex function. The algorithm converts the original problem into a sequence of convex subproblems. Formulating each subproblem requires evaluating at most one gradient of the original objective and constraint functions. Either exact or approximate subproblem solutions can be computed efficiently in many cases. An important feature of the algorithm is the constraint level parameter: by carefully increasing this level for each subproblem, we provide a simple way to overcome the challenge of bounding the Lagrange multipliers and show that the algorithm follows a strictly feasible solution path until convergence to a stationary point. We develop a simple, proximal-gradient-descent-type analysis showing that the complexity bound of this new algorithm is comparable to that of gradient descent in the unconstrained setting, which is new in the literature. Exploiting this new design and analysis technique, we extend our algorithms to more challenging constrained optimization problems in which (1) the objective is a stochastic or finite-sum function, and (2) structured nonsmooth functions replace the smooth components of both the objective and the constraint functions. Complexity results for these problems also appear to be new in the literature. Finally, our method can be applied to convex function-constrained problems, for which we show complexities similar to those of the proximal gradient method. (The generic proximal gradient update itself is sketched after this item.)

     
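    To make the "proximal gradient" building block concrete (this is the generic update for a composite objective s(x) + r(x) with s smooth and r convex and simple, not the paper's constrained algorithm with its level parameter), a single step can be sketched as follows; the lasso-type test problem, the l1 regularizer, and the step size are assumptions for illustration only.

import numpy as np

def prox_l1(v, t):
    """Proximity operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_step(x, grad_s, step, prox_r):
    """One update for minimizing s(x) + r(x): a gradient (forward) step on the
    smooth part s followed by a proximal (backward) step on the simple part r."""
    return prox_r(x - step * grad_s(x), step)

# Illustration: s(x) = 0.5 * ||A x - b||^2 and r(x) = lam * ||x||_1 (lasso-type).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam = 0.1
grad_s = lambda x: A.T @ (A @ x - b)
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / Lipschitz constant of grad_s
x = np.zeros(5)
for _ in range(300):
    x = proximal_gradient_step(x, grad_s, step, lambda v, t: prox_l1(v, lam * t))
print(x)                                         # approximate lasso solution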
  4. We develop a unified level-bundle method, called the accelerated constrained level-bundle (ACLB) algorithm, for solving constrained convex optimization problems in which the objective and constraint functions can be nonsmooth, weakly smooth, and/or smooth. ACLB employs Nesterov's accelerated gradient technique, and hence retains the same iteration complexity as existing bundle-type methods when the objective or one of the constraint functions is nonsmooth. More importantly, ACLB can significantly reduce the iteration complexity when the objective and all constraints are (weakly) smooth. In addition, if the objective contains a nonsmooth component that can be written as a specific form of maximum, we show that the iteration complexity of this component can be much lower than that for a general nonsmooth objective function. Numerical results demonstrate the effectiveness of the proposed algorithm. (The underlying Nesterov accelerated gradient step is sketched after this item.)
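    For reference, Nesterov's accelerated gradient step, the classical acceleration technique that ACLB builds on (the bundle machinery itself is not shown), can be sketched as follows; the strongly convex quadratic test objective and the step size 1/L are assumptions for illustration.

import numpy as np

def nesterov_accelerated_gradient(grad, L, x0, iters=500):
    """Nesterov's accelerated gradient method for a smooth convex function with
    an L-Lipschitz gradient; achieves the O(1/k^2) objective-gap rate."""
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L                             # gradient step at the extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)     # momentum extrapolation
        x, t = x_next, t_next
    return x

# Illustration on a strongly convex quadratic 0.5 * x^T Q x - c^T x.
rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
Q = M.T @ M + np.eye(8)
c = rng.standard_normal(8)
L = np.linalg.eigvalsh(Q).max()                              # Lipschitz constant of the gradient
x_hat = nesterov_accelerated_gradient(lambda x: Q @ x - c, L, np.zeros(8))
print(np.linalg.norm(Q @ x_hat - c))                         # residual of the optimality condition Q x = c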