

This content will become publicly available on August 12, 2025

Title: Parabolic Regularity of Spectral Functions
This paper is devoted to the second-order variational analysis of spectral functions. It is well known that spectral functions can be expressed as the composition of symmetric functions with eigenvalue functions. We establish several second-order properties of spectral functions when their associated symmetric functions enjoy these properties. Our main attention is given to characterizing parabolic regularity for this class of functions. It was observed recently that parabolic regularity can play a central role in ensuring the validity of important second-order variational properties, such as twice epi-differentiability. We demonstrate that for convex spectral functions, parabolic regularity amounts to that of their symmetric functions. As an important consequence, we calculate the second subderivative of convex spectral functions, which allows us to establish second-order optimality conditions for a class of matrix optimization problems.
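To make the composite structure concrete: a spectral function F on symmetric matrices has the form F(X) = f(λ(X)), where λ(X) is the vector of eigenvalues and f is a permutation-invariant (symmetric) function. A minimal numerical sketch, with the helper name `spectral_fn` being our own illustration rather than anything from the paper:

```python
import numpy as np

def spectral_fn(symmetric_f, X):
    """Evaluate the spectral function F(X) = f(lambda(X)), where
    lambda(X) is the eigenvalue vector of the symmetric matrix X
    and f is a symmetric (permutation-invariant) function."""
    eigvals = np.linalg.eigvalsh(X)  # eigenvalues of a symmetric matrix
    return symmetric_f(eigvals)

# A small symmetric matrix with eigenvalues 1 and 3.
X = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# f = sum recovers the trace; f = max recovers the largest eigenvalue.
print(spectral_fn(np.sum, X))  # 4.0 = trace(X)
print(spectral_fn(np.max, X))  # 3.0 = largest eigenvalue
```

Familiar matrix functions such as the trace, the spectral radius of a symmetric matrix, and the nuclear norm all arise this way, which is why second-order properties of f transfer to F.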
Award ID(s):
2108546
PAR ID:
10534375
Author(s) / Creator(s):
;
Publisher / Repository:
Informs
Date Published:
Journal Name:
Mathematics of Operations Research
ISSN:
0364-765X
Subject(s) / Keyword(s):
spectral functions, parabolic regularity, twice epi-differentiability, composite optimization, second-order optimality conditions
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Understanding the role that subgradients play in various second-order variational analysis constructions can help us uncover new properties of important classes of functions in variational analysis. Focusing mainly on the behavior of the second subderivative and subgradient proto-derivative of polyhedral functions, i.e., functions with polyhedral convex epigraphs, we demonstrate that choosing the underlying subgradient, utilized in the definitions of these concepts, from the relative interior of the subdifferential of polyhedral functions ensures stronger second-order variational properties, such as strict twice epi-differentiability and strict subgradient proto-differentiability. This allows us to characterize continuous differentiability of the proximal mapping and twice continuous differentiability of the Moreau envelope of polyhedral functions. We close the paper by proving the equivalence of metric regularity and strong metric regularity for a class of generalized equations at their nondegenerate solutions.
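The simplest polyhedral function, f(x) = |x| (its epigraph is a polyhedral convex set), already shows why differentiability of the proximal mapping and Moreau envelope is delicate: its prox is soft-thresholding, which is piecewise linear with kinks, and its Moreau envelope is the Huber function. A sketch using these standard closed forms (the function names are ours, not the paper's):

```python
import numpy as np

def prox_abs(x, t):
    """Proximal mapping of f = |.|: argmin_y |y| + (1/(2t))*(y - x)**2.
    This is soft-thresholding, which shrinks x toward 0 by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def moreau_abs(x, t):
    """Moreau envelope of f = |.|: min_y |y| + (1/(2t))*(y - x)**2.
    Evaluated at the prox point; this is the Huber function."""
    p = prox_abs(x, t)
    return np.abs(p) + (x - p) ** 2 / (2.0 * t)

t = 1.0
print(prox_abs(2.5, t))    # 1.5: x shrunk toward 0 by t
print(prox_abs(0.5, t))    # 0.0: prox has a kink where |x| = t
print(moreau_abs(2.5, t))  # 2.0: equals |x| - t/2 when |x| > t
```

The prox fails to be continuously differentiable exactly at |x| = t, and the envelope fails to be twice continuously differentiable there, matching the kind of smoothness questions the abstract above characterizes via the choice of subgradient.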
  2. The presence of second-order smoothness for objective functions of optimization problems can provide valuable information about their stability properties and help us design efficient numerical algorithms for solving these problems. Such second-order information, however, cannot be expected in various constrained and composite optimization problems, since we often have to express their objective functions in terms of extended-real-valued functions for which the classical second derivative may not exist. One powerful geometric tool for dealing with such functions is the concept of twice epi-differentiability. In this paper, we study a stronger version of this concept, called strict twice epi-differentiability. We characterize this concept for certain composite functions and use it to establish the equivalence of metric regularity and strong metric regularity for a class of generalized equations at their nondegenerate solutions. Finally, we present a characterization of continuous differentiability of the proximal mapping of our composite functions.
  3. The paper is devoted to a comprehensive study of composite models in variational analysis and optimization, the importance of which for numerous theoretical, algorithmic, and applied issues of operations research is difficult to overstate. The underlying theme of our study is a systematic replacement of conventional metric regularity and related requirements by much weaker metric subregularity ones, which leads us to significantly stronger and completely new results of first-order and second-order variational analysis and optimization. In this way, we develop extended calculus rules for first-order and second-order generalized differential constructions, while paying particular attention in second-order variational theory to the new and rather large class of fully subamenable compositions. Applications to optimization include deriving enhanced no-gap second-order optimality conditions in constrained composite models, complete characterizations of the uniqueness of Lagrange multipliers, strong metric subregularity of Karush-Kuhn-Tucker systems in parametric optimization, and so on.
  4. Inverse problems of identifying parameters in partial differential equations (PDEs) form an important class of problems with many real-world applications. Such problems are commonly studied in an optimization setting, with the various known approaches having their advantages and disadvantages. Although a nonconvex output least-squares (OLS) objective has often been used, a convex modified output least-squares (MOLS) has attracted considerable attention in recent years. However, the convexity of the MOLS has only been established for parameters appearing linearly in the PDEs. The primary objective of this work is to introduce and analyze a variant of the MOLS for the inverse problem of identifying parameters that appear nonlinearly in variational problems. Besides giving an existence result for the inverse problem, we derive first-order and second-order derivative formulas for the new functional and use them to identify conditions under which the new functional is convex. We give a discretization scheme for the continuous inverse problem and prove its convergence. We also obtain discrete formulas for the new MOLS functional and present detailed numerical examples.
  5. This work concerns the local convergence theory of Newton and quasi-Newton methods for convex-composite optimization, in which one minimizes an objective that can be written as the composition of a convex function with a continuously differentiable mapping. We focus on the case in which the convex function is a potentially infinite-valued piecewise linear-quadratic function. Such problems include nonlinear programming, minimax optimization, and estimation of nonlinear dynamics with non-Gaussian noise, as well as many modern approaches to large-scale data analysis and machine learning. Our approach embeds the optimality conditions for convex-composite optimization problems into a generalized equation. We establish conditions for strong metric subregularity and strong metric regularity of the corresponding set-valued mappings. This allows us to extend the classical convergence theory of Newton and quasi-Newton methods to the broader class of non-finite-valued piecewise linear-quadratic convex-composite optimization problems. In particular, we establish local quadratic convergence of the Newton method under conditions that parallel those in nonlinear programming.
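The local quadratic convergence mentioned in the last abstract is easiest to see in the classical special case it generalizes: Newton's method for a smooth equation c(x) = 0, where the error roughly squares at every step near the root. A minimal numerical illustration (not the paper's method, just the textbook baseline being extended):

```python
import numpy as np

def newton(c, dc, x0, iters=6):
    """Classical Newton iteration x_{k+1} = x_k - c(x_k)/dc(x_k)."""
    xs = [x0]
    for _ in range(iters):
        xs.append(xs[-1] - c(xs[-1]) / dc(xs[-1]))
    return xs

# Solve x^2 - 2 = 0; the root is sqrt(2).
c = lambda x: x ** 2 - 2.0
dc = lambda x: 2.0 * x
errs = [abs(x - np.sqrt(2.0)) for x in newton(c, dc, 1.0)]

# Quadratic convergence: each error is roughly the square of the last.
print(errs[:4])
```

Running this, the errors drop from about 4e-1 to 9e-2 to 2e-3 to 2e-6, the doubling of correct digits characteristic of quadratic convergence; the cited paper proves an analogous rate when the smooth equation is replaced by a generalized equation under strong metric (sub)regularity.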