Title: Analysis of a Class of Minimization Problems Lacking Lower Semicontinuity
The minimization of nonlower semicontinuous functions is a difficult topic that has been minimally studied. Among such functions is a Heaviside composite function that is the composition of a Heaviside function with a possibly nonsmooth multivariate function. Unifying a statistical estimation problem with hierarchical selection of variables and a sample average approximation of composite chance constrained stochastic programs, a Heaviside composite optimization problem is one whose objective and constraints are defined by sums of possibly nonlinear multiples of such composite functions. Via a pulled-out formulation, a pseudostationarity concept for a feasible point was introduced in an earlier work as a necessary condition for a local minimizer of a Heaviside composite optimization problem. The present paper extends this previous study in several directions: (a) showing that pseudostationarity is implied by (and thus, weaker than) a sharper subdifferential-based stationarity condition that we term epistationarity; (b) introducing a set-theoretic sufficient condition, which we term a local convexity-like property, under which an epistationary point of a possibly nonlower semicontinuous optimization problem is a local minimizer; (c) providing several classes of Heaviside composite functions satisfying this local convexity-like property; (d) extending the epigraphical formulation of a nonnegative multiple of a Heaviside composite function to a lifted formulation for arbitrarily signed multiples of the Heaviside composite function, based on which we show that an epistationary solution of the given Heaviside composite program with broad classes of B-differentiable component functions can in principle be approximately computed by surrogation methods.
Funding: The work of Y. Cui was based on research supported by the National Science Foundation [Grants CCF-2153352, DMS-2309729, and CCF-2416172] and the National Institutes of Health [Grant 1R01CA287413-01]. The work of J.-S. Pang was based on research supported by the Air Force Office of Scientific Research [Grant FA9550-22-1-0045].
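To fix ideas, the following schematic (with illustrative notation that is not taken from the paper) shows the kind of Heaviside composite program described above: a base objective plus arbitrarily signed multiples of a Heaviside function composed with possibly nonsmooth maps, which is exactly where lower semicontinuity can fail.

```latex
% Schematic Heaviside composite program (illustrative notation, not the paper's):
% an open Heaviside function composed with possibly nonsmooth, B-differentiable maps g_i,
% with weights c_i of arbitrary sign.
\[
  \min_{x \in X}\;\; \varphi(x) \;+\; \sum_{i=1}^{m} c_i \,\mathbb{1}\big(g_i(x) > 0\big),
  \qquad
  \mathbb{1}(t > 0) =
  \begin{cases}
    1, & t > 0,\\
    0, & t \le 0.
  \end{cases}
\]
% If some c_i < 0, points arbitrarily close to the set {g_i = 0} can have objective
% contribution c_i < 0 while the contribution on that set is 0, so the overall
% objective need not be lower semicontinuous.
```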
Award ID(s):
2416172
PAR ID:
10625503
Author(s) / Creator(s):
Publisher / Repository:
Mathematics of Operations Research
Date Published:
Journal Name:
Mathematics of Operations Research
ISSN:
0364-765X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Local convergence analysis of the augmented Lagrangian method (ALM) is established for a large class of composite optimization problems with nonunique Lagrange multipliers under a second-order sufficient condition. We present a new second-order variational property called the semistability of second subderivatives and demonstrate that it is widely satisfied for numerous classes of functions, which is important for applications in constrained and composite optimization problems. Using the latter condition and a certain second-order sufficient condition, we are able to establish Q-linear convergence of the primal-dual sequence for an inexact version of the ALM for composite programs. Funding: Research of the first author is partially supported by Singapore National Academy of Science via SASEAF Programme under the grant RIE2025 NRF International Partnership Funding Initiative. Research of the second author is partially supported by the National Science Foundation under the grant DMS 2108546. 
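For orientation on the method being analyzed, the textbook augmented Lagrangian iteration for an equality-constrained model problem is sketched below; this is the standard scheme only, shown under the assumption of a simple constraint c(x) = 0, and it does not reproduce the composite model or the inexactness conditions studied in the paper.

```latex
% Textbook ALM for  min_x f(x)  subject to  c(x) = 0  (standard form, for orientation only):
\[
  \mathcal{L}_\rho(x,\lambda) \;=\; f(x) + \langle \lambda, c(x)\rangle + \tfrac{\rho}{2}\,\|c(x)\|^{2},
\]
\[
  x^{k+1} \;\approx\; \operatorname*{arg\,min}_{x}\; \mathcal{L}_\rho(x,\lambda^{k}),
  \qquad
  \lambda^{k+1} \;=\; \lambda^{k} + \rho\, c(x^{k+1}).
\]
```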
  2. In this paper, we propose AdaBB, an adaptive gradient method based on the Barzilai-Borwein stepsize. The algorithm is line-search-free and parameter-free, and it essentially provides a convergent variant of the Barzilai-Borwein method for general convex optimization problems. We analyze the ergodic convergence of the objective function value and the convergence of the iterates for solving general convex optimization problems. Compared with existing works along this line of research, our algorithm gives the best lower bounds on the stepsize and the average of the stepsizes. Furthermore, we present extensions of the proposed algorithm for solving locally strongly convex and composite convex optimization problems where the objective function is the sum of a smooth function and a nonsmooth function. In the case of local strong convexity, we achieve linear convergence. Our numerical results also demonstrate very promising potential of the proposed algorithms on some representative examples. Funding: S. Ma is supported by the National Science Foundation [Grants DMS-2243650, CCF-2308597, CCF-2311275, and ECCS-2326591] and a startup fund from Rice University. J. Yang is supported by the National Natural Science Foundation of China [Grants 12431011 and 12371301] and the Natural Science Foundation for Distinguished Young Scholars of Gansu Province [Grant 22JR5RA223]. 
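As background for the stepsize being adapted, here is a minimal sketch of gradient descent with the classical (long) Barzilai-Borwein stepsize on a hypothetical quadratic; it does not implement the AdaBB rule, its safeguards, or the stepsize bounds established in the paper.

```python
import numpy as np

def bb_gradient_descent(grad, x0, alpha0=1e-3, max_iter=200, tol=1e-8):
    """Gradient descent with the classical (long) Barzilai-Borwein stepsize.

    Illustrative only: this is the textbook BB iteration, not the AdaBB rule
    from the paper; it shows how the stepsize is formed from successive
    iterate and gradient differences without any line search.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0
    for _ in range(max_iter):
        x_new = x - alpha * g
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        s, y = x_new - x, g_new - g                        # iterate and gradient differences
        sy = float(s @ y)
        alpha = float(s @ s) / sy if sy > 0 else alpha0    # BB "long" stepsize
        x, g = x_new, g_new
    return x

# Hypothetical strongly convex quadratic 0.5*x'Ax - b'x, whose gradient is Ax - b.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
x_min = bb_gradient_descent(lambda x: A @ x - b, x0=np.zeros(2))
```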
  3. We investigate a class of composite nonconvex functions, where the outer function is the sum of univariate extended-real-valued convex functions and the inner function is the limit of difference-of-convex functions. A notable feature of this class is that the inner function may fail to be locally Lipschitz continuous. It covers a range of important, yet challenging, applications, including inverse optimal value optimization and problems under value-at-risk constraints. We propose an asymptotic decomposition of the composite function that guarantees epi-convergence to the original function, leading to necessary optimality conditions for the corresponding minimization problem. The proposed decomposition also enables us to design a numerical algorithm such that any accumulation point of the generated sequence, if it exists, satisfies the newly introduced optimality conditions. These results expand on the study of so-called amenable functions introduced by Poliquin and Rockafellar in 1992, which are compositions of convex functions with smooth maps, and the prox-linear methods for their minimization. To demonstrate that our algorithmic framework is practically implementable, we further present verifiable termination criteria and preliminary numerical results. Funding: Financial support from the National Science Foundation Division of Computing and Communication Foundations [Grant CCF-2416172] and Division of Mathematical Sciences [Grant DMS-2416250] and the National Cancer Institute, National Institutes of Health [Grant 1R01CA287413-01] is gratefully acknowledged. 
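For contrast with the non-Lipschitz setting above, the amenable composite model of Poliquin and Rockafellar and the classical prox-linear step it admits are recalled below in their standard textbook form; the paper's asymptotic decomposition is aimed precisely at the case where the inner map need not be smooth or locally Lipschitz, so this linearization is not available.

```latex
% Amenable composite minimization and the classical prox-linear step
% (standard forms; the paper's inner map c need not admit this linearization):
\[
  \min_{x}\; \varphi\big(c(x)\big), \qquad \varphi \text{ convex}, \; c \text{ smooth},
\]
\[
  x^{k+1} \;\in\; \operatorname*{arg\,min}_{x}\;
  \varphi\Big(c(x^{k}) + \nabla c(x^{k})(x - x^{k})\Big) + \tfrac{1}{2t}\,\|x - x^{k}\|^{2}.
\]
```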
  4. In this paper, we study the generalized subdifferentials and the Riemannian gradient subconsistency that are the basis for non-Lipschitz optimization on embedded submanifolds of ℝ^n. We then propose a Riemannian smoothing steepest descent method for non-Lipschitz optimization on complete embedded submanifolds of ℝ^n. We prove that any accumulation point of the sequence generated by the Riemannian smoothing steepest descent method is a stationary point associated with the smoothing function employed in the method, which is necessary for the local optimality of the original non-Lipschitz problem. We also prove that any accumulation point of the sequence generated by our method that satisfies the Riemannian gradient subconsistency is a limiting stationary point of the original non-Lipschitz problem. Numerical experiments are conducted to demonstrate the advantages of Riemannian ℓ_p (0 < p < 1) optimization over Riemannian ℓ_1 optimization for finding sparse solutions and the effectiveness of the proposed method. Funding: C. Zhang was supported in part by the National Natural Science Foundation of China [Grant 12171027] and the Natural Science Foundation of Beijing [Grant 1202021]. X. Chen was supported in part by the Hong Kong Research Council [Grant PolyU15300219]. S. Ma was supported in part by the National Science Foundation [Grants DMS-2243650 and CCF-2308597], the UC Davis Center for Data Science and Artificial Intelligence Research Innovative Data Science Seed Funding Program, and a startup fund from Rice University.
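A minimal sketch of the smoothing idea on a concrete manifold follows: a non-Lipschitz sparsity term |t|^(1/2) is replaced by a smooth surrogate with parameter mu, and steepest descent is run on the unit sphere with tangent-space projection and retraction. The objective, manifold, smoothing function, and parameter schedule are all illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def smooth_sqrt_abs(t, mu):
    """Smooth surrogate of the non-Lipschitz term |t|**0.5 (illustrative choice)."""
    return (t**2 + mu**2) ** 0.25

def grad_smooth_sqrt_abs(t, mu):
    """Derivative of the surrogate with respect to t."""
    return 0.5 * t * (t**2 + mu**2) ** (-0.75)

def riemannian_smoothing_descent(a, x0, lam=0.1, mu=0.1, lr=0.05, steps=600):
    """Steepest descent on the unit sphere for the hypothetical objective
        f(x) = 0.5*||x - a||^2 + lam * sum_i smooth_sqrt_abs(x_i, mu),
    shrinking the smoothing parameter mu every 100 steps."""
    x = x0 / np.linalg.norm(x0)
    for k in range(steps):
        egrad = (x - a) + lam * grad_smooth_sqrt_abs(x, mu)   # Euclidean gradient
        rgrad = egrad - (x @ egrad) * x                       # project onto the tangent space at x
        x = x - lr * rgrad
        x = x / np.linalg.norm(x)                             # retract back onto the sphere
        if (k + 1) % 100 == 0:
            mu *= 0.5                                         # gradually reduce the smoothing
    return x

x_sparse = riemannian_smoothing_descent(np.array([1.0, 0.05, -0.6]),
                                         np.array([0.3, 0.3, 0.9]))
```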
  5. The matrix sensing problem is an important low-rank optimization problem that has found a wide range of applications, such as matrix completion, phase synchronization/retrieval, robust principal component analysis (PCA), and power system state estimation. In this work, we focus on the general matrix sensing problem with linear measurements that are corrupted by random noise. We investigate the scenario where the search rank r is equal to the true rank of the unknown ground truth (the exactly parametrized case), as well as the scenario where r is greater than the true rank (the overparametrized case). We quantify the role of the restricted isometry property (RIP) in shaping the landscape of the nonconvex factorized formulation and assisting with the success of local search algorithms. First, we develop a global guarantee on the maximum distance between an arbitrary local minimizer of the nonconvex problem and the ground truth under the assumption that the RIP constant is smaller than an explicit threshold. We then present a local guarantee for problems with an arbitrary RIP constant, which states that any local minimizer is either considerably close to the ground truth or far away from it. More importantly, we prove that this noisy, overparametrized problem exhibits the strict saddle property, which leads to the global convergence of the perturbed gradient descent algorithm in polynomial time. The results of this work provide a comprehensive understanding of the geometric landscape of the matrix sensing problem in the noisy and overparametrized regime. Funding: This work was supported by grants from the National Science Foundation, Office of Naval Research, Air Force Office of Scientific Research, and Army Research Office.
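To make the factorized formulation concrete, a minimal sketch follows: synthetic noisy Gaussian linear measurements of a low-rank matrix, the factorized (Burer-Monteiro) least-squares objective with a search rank larger than the true rank, and plain gradient descent. The data, step size, and the use of unperturbed gradient descent are all illustrative assumptions; the paper's perturbed gradient method and RIP-based guarantees are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r_true, r, m = 8, 2, 3, 120        # dimension, true rank, search rank (> r_true), measurements

# Synthetic ground truth and noisy Gaussian linear measurements (illustrative setup).
U_true = rng.normal(size=(n, r_true)) / np.sqrt(n)
M_true = U_true @ U_true.T
A = rng.normal(size=(m, n, n)) / np.sqrt(m)                   # measurement matrices A_i
y = np.einsum('ijk,jk->i', A, M_true) + 0.01 * rng.normal(size=m)

def loss_and_grad(X):
    """Factorized objective f(X) = sum_i (<A_i, X X^T> - y_i)^2 and its gradient in X."""
    M = X @ X.T
    resid = np.einsum('ijk,jk->i', A, M) - y
    S = np.einsum('i,ijk->jk', resid, A)                      # sum_i resid_i * A_i
    return float(resid @ resid), 2.0 * (S + S.T) @ X

X = 0.1 * rng.normal(size=(n, r))                             # overparametrized factor
for _ in range(500):
    f_val, G = loss_and_grad(X)
    X -= 0.01 * G                                             # plain gradient descent step
```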