Award ID: 2012243

Note: Clicking on a Digital Object Identifier (DOI) number will take you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (the administrative interval before open release).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Optimization problems with group-sparse regularization arise in many popular applications, such as feature selection and compression for Deep Neural Networks (DNNs). Nonetheless, existing methods in the literature do not perform particularly well when such regularization is combined with a stochastic loss function. In particular, it is challenging to design a computationally efficient algorithm that has a convergence guarantee and computes group-sparse solutions. Recently, a half-space stochastic projected gradient ({\tt HSPG}) method was proposed that partly addressed these challenges. This paper presents a substantially enhanced version of {\tt HSPG}, which we call~{\tt AdaHSPG+}, that makes two notable advances. First, {\tt AdaHSPG+} is shown to have a stronger convergence result under significantly looser assumptions than those required by {\tt HSPG}. This improvement is achieved by integrating variance-reduction techniques with a new adaptive strategy for iteratively predicting the support of a solution. Second, {\tt AdaHSPG+} requires significantly less parameter tuning than {\tt HSPG}, making it more practical and user-friendly. This advance is achieved by designing automatic and adaptive strategies for choosing the type of step employed at each iteration and for updating key hyperparameters. The numerical effectiveness of {\tt AdaHSPG+} is demonstrated on both convex and non-convex benchmark problems. The source code is available at \url{https://github.com/tianyic/adahspg}. (An illustrative sketch of the group-sparse proximal setting these methods target appears after this list.)
  2. This paper introduces a new proximal stochastic gradient method with variance reduction and stabilization for minimizing the sum of a convex stochastic function and a group-sparsity-inducing regularization function. Since the method may be viewed as a stabilized version of the recently proposed algorithm \pstorm{}, we call our algorithm \spstorm{}. Our analysis establishes strong convergence results for \spstorm{}. In particular, we prove an upper bound on the number of iterations required by \spstorm{} before its iterates correctly identify (with high probability) an optimal support (i.e., the zero and nonzero structure of an optimal solution). Most algorithms in the literature with such a support-identification property use variance-reduction techniques that require either periodically evaluating an \emph{exact} gradient or storing a history of stochastic gradients. Unlike these methods, \spstorm{} achieves variance reduction without requiring either, which is advantageous. Moreover, our support-identification result for \spstorm{} shows that, with high probability, an optimal support will be identified correctly in \emph{all} iterations with index above a threshold. We believe this type of result is new to the literature, since the few other existing results prove only that the optimal support is identified with high probability at each iteration with a sufficiently large index (meaning that the optimal support might be identified in some iterations but not in others). Numerical experiments on regularized logistic-loss problems show that \spstorm{} outperforms existing methods in various metrics that measure how efficiently and robustly an algorithm's iterates identify an optimal support. (A hedged sketch of a STORM-style variance-reduced estimator follows the list below.)
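Neither abstract includes pseudocode. As a minimal illustration of the group-sparse regularized setting that both papers target (a generic sketch, not the authors' {\tt AdaHSPG+} or its half-space machinery), the following Python snippet shows the standard proximal operator of the group-$\ell_1$ (group lasso) penalty and one proximal stochastic gradient step; the function names, the {\tt groups} index structure, and the step size {\tt lr} are illustrative assumptions.

    import numpy as np

    def prox_group_l1(x, groups, lam):
        # Proximal operator of lam * sum_g ||x[g]||_2 (block soft-thresholding).
        # Any group whose norm is at most lam is set exactly to zero, which is
        # how group-sparse regularization yields structured, group-wise sparsity.
        out = x.copy()
        for g in groups:
            norm = np.linalg.norm(x[g])
            out[g] = 0.0 if norm <= lam else (1.0 - lam / norm) * x[g]
        return out

    def prox_sgd_step(x, stoch_grad, groups, lam, lr):
        # One proximal stochastic gradient step for E[f(x; xi)] + lam * sum_g ||x[g]||_2;
        # the prox parameter is lr * lam, matching the step size.
        return prox_group_l1(x - lr * stoch_grad, groups, lr * lam)

    # Example: two groups; the small-norm second group is driven exactly to zero.
    x = np.array([1.0, -2.0, 0.05, 0.02])
    groups = [np.array([0, 1]), np.array([2, 3])]
    print(prox_group_l1(x, groups, lam=0.1))   # -> [0.955..., -1.910..., 0.0, 0.0]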
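The second abstract highlights that \spstorm{} achieves variance reduction without periodic exact-gradient evaluations and without storing a history of stochastic gradients. The recursive, momentum-based estimator below is a hedged sketch of the STORM-style technique that has exactly this property (a generic illustration, not the authors' \spstorm{} update); {\tt grad\_fn}, {\tt beta}, and the commented driver line are assumptions for the example.

    import numpy as np

    def storm_estimator(grad_fn, d_prev, x_prev, x_curr, sample, beta):
        # Recursive variance-reduced estimator (STORM-style):
        #   d_t = g(x_t; xi_t) + (1 - beta) * (d_{t-1} - g(x_{t-1}; xi_t))
        # The same sample xi_t is evaluated at both iterates, so no full-gradient
        # pass and no stored table of past stochastic gradients is needed.
        return grad_fn(x_curr, sample) + (1.0 - beta) * (d_prev - grad_fn(x_prev, sample))

    # A proximal iteration of the flavor analyzed above would then apply the
    # group prox from the previous sketch:
    #   x_next = prox_group_l1(x_curr - lr * d_curr, groups, lr * lam)

Once the estimator tracks the true gradient closely enough, the prox step zeros exactly the groups outside an optimal support, which is the support-identification behavior the abstract analyzes.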