

Search for: All records

Creators/Authors contains: "Mordukhovich, Boris S"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. The paper is devoted to the study of a new class of optimal control problems governed by discontinuous constrained differential inclusions of the sweeping type, where the duration of the dynamic process is included in the optimization. We develop a novel version of the method of discrete approximations, which is of independent qualitative and numerical value, and establish its well-posedness and strong convergence to optimal solutions of the controlled sweeping process. Advanced tools of first-order and second-order variational analysis and generalized differentiation allow us to derive new necessary optimality conditions for the discrete-time problems and then, by passing to the limit in the discretization procedure, for designated local minimizers of the original problem of sweeping optimal control. The obtained results are illustrated by a numerical example.
    Free, publicly-accessible full text available April 1, 2025
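    As a rough schematic of the discrete-approximation idea behind this record (not the paper's actual formulation, which also optimizes the duration of the process and derives necessary optimality conditions), the sketch below runs a classical catching-up scheme for a toy controlled sweeping process whose moving set C(t) is a drifting ball. The function names, the ball-shaped moving set, and the explicit projection step are illustrative assumptions, not taken from the paper.

        import numpy as np

        def proj_ball(y, center, radius):
            # Euclidean projection onto the ball {x : ||x - center|| <= radius}
            d = y - center
            n = np.linalg.norm(d)
            return y if n <= radius else center + radius * d / n

        def catching_up(x0, controls, center_of_C, radius, T):
            # Catching-up scheme  x_{k+1} = proj_{C(t_{k+1})}(x_k + h * u_k)
            # for the toy controlled sweeping dynamics
            #     xdot(t) in -N_{C(t)}(x(t)) + u(t),
            # where C(t) is a ball of the given radius centered at center_of_C(t).
            N = len(controls)
            h = T / N
            x = np.array(x0, dtype=float)
            traj = [x.copy()]
            for k in range(N):
                x = proj_ball(x + h * np.asarray(controls[k]),
                              center_of_C((k + 1) * h), radius)
                traj.append(x.copy())
            return np.array(traj)

        # toy run: the set C(t) drifts to the right and sweeps the state along
        controls = np.zeros((100, 2))                # uncontrolled, for simplicity
        center = lambda t: np.array([2.0 * t, 0.0])  # center of the moving ball C(t)
        traj = catching_up([0.5, 0.0], controls, center, radius=0.4, T=1.0)
        print(traj[-1])

    Refining the time partition (taking larger N) corresponds to the passage to the limit mentioned in the abstract, though in a far simpler setting than the constrained problems treated there.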
  2. The paper proposes and develops a novel inexact gradient descent method (IGD) for minimizing smooth functions with Lipschitzian gradients. We show that the sequence of gradients generated by IGD converges to zero. Convergence of the iterates to stationary points is guaranteed under the Kurdyka–Łojasiewicz (KL) property of the objective function, with convergence rates depending on the KL exponent. The newly developed IGD is then applied to designing two novel gradient-based methods of nonsmooth convex optimization: the inexact proximal point method (GIPPM) and the inexact augmented Lagrangian method (GIALM) for convex programs with linear equality constraints. These two methods inherit global convergence properties from IGD and are confirmed by numerical experiments to have practical advantages over some well-known algorithms of nonsmooth convex optimization.
    Free, publicly-accessible full text available March 25, 2025
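    A minimal sketch, in the spirit of this record, of gradient descent driven by an inexact gradient oracle: the fixed 1/L stepsize, the relative-error noise model in noisy_grad, and the toy least-squares objective are assumptions made here for illustration, not the paper's IGD rules or test problems.

        import numpy as np

        rng = np.random.default_rng(0)

        def igd(inexact_grad, x0, lipschitz, max_iter=2000, tol=1e-6):
            # Schematic inexact gradient descent: x_{k+1} = x_k - (1/L) * g_k,
            # where g_k is the approximate gradient returned by the oracle.
            x = np.asarray(x0, dtype=float)
            step = 1.0 / lipschitz
            for k in range(max_iter):
                g = inexact_grad(x)
                if np.linalg.norm(g) <= tol:   # stop when the inexact gradient is small
                    break
                x = x - step * g
            return x, k

        # toy problem: smooth convex quadratic f(x) = 0.5 * ||A @ x - b||^2
        A = rng.standard_normal((20, 5))
        b = rng.standard_normal(20)
        exact_grad = lambda x: A.T @ (A @ x - b)
        L = np.linalg.norm(A.T @ A, 2)         # Lipschitz constant of the gradient

        def noisy_grad(x, mu=0.2):
            # exact gradient corrupted by a perturbation of relative size mu/2,
            # standing in for whatever approximate oracle is available in practice
            g = exact_grad(x)
            e = rng.standard_normal(g.shape)
            e *= 0.5 * mu * np.linalg.norm(g) / (np.linalg.norm(e) + 1e-16)
            return g + e

        x_end, iters = igd(noisy_grad, np.zeros(5), L)
        print(iters, np.linalg.norm(exact_grad(x_end)))

    Scaling the perturbation with the gradient is one simple way to keep the errors from dominating as the method approaches stationarity; it is not necessarily the error condition used in the paper.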
  3. This paper proposes and develops new linesearch methods with inexact gradient information for finding stationary points of nonconvex continuously differentiable functions on finite-dimensional spaces. Some abstract convergence results for a broad class of linesearch methods are established. A general scheme for inexact reduced gradient (IRG) methods is proposed, where the errors in the gradient approximation automatically adapt to the magnitudes of the exact gradients. The sequences of iterates are shown to have stationary accumulation points when different stepsize selections are employed. Convergence results with constructive convergence rates for the developed IRG methods are established under the Kurdyka–Łojasiewicz property. The obtained results for the IRG methods are confirmed by encouraging numerical experiments, which demonstrate the advantages of automatically controlled errors in IRG methods over other frequently used error selections.
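    A minimal sketch of a linesearch method with inexact gradients whose error adapts to the gradient magnitude, the theme of this record: the Armijo parameters, the relative-error oracle in inexact_grad, and the Rosenbrock test function are illustrative assumptions, not the paper's IRG scheme or stepsize selections.

        import numpy as np

        def irg_backtracking(f, inexact_grad, x0, sigma=1e-4, beta=0.5,
                             max_iter=500, tol=1e-6):
            # Descend along -g_k with Armijo-type backtracking, where g_k is an
            # approximate gradient whose error is assumed to scale with ||g_k||.
            x = np.asarray(x0, dtype=float)
            for k in range(max_iter):
                g = inexact_grad(x)
                gn2 = float(np.dot(g, g))
                if np.sqrt(gn2) <= tol:
                    break
                t = 1.0
                # backtrack until a sufficient-decrease condition holds
                while f(x - t * g) > f(x) - sigma * t * gn2 and t > 1e-12:
                    t *= beta
                x = x - t * g
            return x, k

        # toy nonconvex test: the Rosenbrock function on R^2
        f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
        exact_grad = lambda x: np.array(
            [-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
             200 * (x[1] - x[0]**2)])

        rng = np.random.default_rng(1)

        def inexact_grad(x, mu=0.1):
            # relative-error oracle: perturbation of size (mu/2) * ||exact gradient||
            g = exact_grad(x)
            e = rng.standard_normal(2)
            e *= 0.5 * mu * np.linalg.norm(g) / (np.linalg.norm(e) + 1e-16)
            return g + e

        x_end, iters = irg_backtracking(f, inexact_grad, np.array([-1.2, 1.0]))
        print(iters, x_end, np.linalg.norm(exact_grad(x_end)))

    Tying the allowed error to the size of the computed gradient rather than to a fixed tolerance is the "automatically adapting" idea highlighted in the abstract, sketched here in its simplest form.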
  4. The paper is devoted to a comprehensive study of composite models in variational analysis and optimization, the importance of which for numerous theoretical, algorithmic, and applied issues of operations research is difficult to overstate. The underlying theme of our study is a systematic replacement of conventional metric regularity and related requirements by much weaker metric subregularity ones, which leads us to significantly stronger and completely new results of first-order and second-order variational analysis and optimization. In this way, we develop extended calculus rules for first-order and second-order generalized differential constructions, while paying the main attention in second-order variational theory to the new and rather large class of fully subamenable compositions. Applications to optimization include deriving enhanced no-gap second-order optimality conditions in constrained composite models, complete characterizations of the uniqueness of Lagrange multipliers, strong metric subregularity of Karush-Kuhn-Tucker systems in parametric optimization, and so on.

     
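    To fix ideas about the kind of composite model this record studies, the LaTeX fragment below writes down a generic composite problem, its first-order (KKT) system, and the standard definition of metric subregularity; the notation is generic textbook notation chosen here for illustration rather than the paper's own.

        \documentclass{article}
        \usepackage{amsmath,amssymb}
        \begin{document}
        A generic composite optimization model and related notions:
        \begin{align*}
          &\min_{x \in \mathbb{R}^n}\ \varphi(x) := \varphi_0(x) + \theta\bigl(\Phi(x)\bigr),
            \qquad \varphi_0,\ \Phi \text{ smooth},\quad \theta \text{ l.s.c.\ convex};\\
          &\text{first-order (KKT) system:}\quad
            0 \in \nabla\varphi_0(\bar{x}) + \nabla\Phi(\bar{x})^{*}\bar{\lambda},
            \qquad \bar{\lambda} \in \partial\theta\bigl(\Phi(\bar{x})\bigr);\\
          &\text{metric subregularity of } F \text{ at } (\bar{x},\bar{y}) \in \operatorname{gph} F:\\
          &\qquad \operatorname{dist}\bigl(x, F^{-1}(\bar{y})\bigr)
            \le \kappa\, \operatorname{dist}\bigl(\bar{y}, F(x)\bigr)
            \quad \text{for all } x \text{ near } \bar{x}.
        \end{align*}
        \end{document}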