Title: On Some Quasi-Variational Inequalities and Other Problems with Moving Sets
Since its introduction over 50 years ago, the concept of Mosco convergence has permeated diverse areas of mathematics and the applied sciences, including applied analysis, the theory of partial differential equations, numerical analysis, and infinite-dimensional constrained optimization. In this paper we explore some of the consequences of Mosco convergence for applied problems that involve moving sets, together with historical accounts and a view of modern trends and features. In particular, we focus on connections with the density of convex intersections, finite element approximations, quasi-variational inequalities, and impulse problems.
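For orientation, the standard definition can be stated as follows: a sequence of nonempty, closed, convex sets $K_n$ in a reflexive Banach space $V$ converges to $K$ in the sense of Mosco provided that

    (i)  for every $v \in K$ there exist $v_n \in K_n$ with $v_n \to v$ strongly in $V$;
    (ii) whenever $v_{n_k} \in K_{n_k}$ and $v_{n_k} \rightharpoonup v$ weakly in $V$, it follows that $v \in K$.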
Award ID(s):
2012391
PAR ID:
10253602
Author(s) / Creator(s):
Date Published:
Journal Name:
Journal of Convex Analysis
Volume:
28
Issue:
2
ISSN:
2363-6394
Page Range / eLocation ID:
629-654
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In recent years, there has been a growing need to train machine learning models on huge volumes of data, and designing efficient distributed optimization algorithms for empirical risk minimization (ERM) has therefore become an active and challenging research topic. In this paper, we propose a flexible framework for distributed ERM training through solving the dual problem, which provides a unified description and comparison of existing methods. Our approach requires only approximate solutions of the sub-problems involved in the optimization process, and it is versatile enough to be applied to many large-scale machine learning problems, including classification, regression, and structured prediction. We show that our framework enjoys global linear convergence for a broad class of non-strongly-convex problems, and a refined analysis shows that specific choices of the sub-problems achieve much faster convergence than existing approaches. This improved convergence rate is also reflected in the superior empirical performance of our method.
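    The distributed framework itself is not reproduced here. As a minimal single-machine illustration of the dual approach such methods build on, the sketch below runs stochastic dual coordinate ascent (SDCA) for ridge regression, where each coordinate update of the dual problem has a closed form; all names, data, and parameter choices are illustrative, not the paper's algorithm.

      import numpy as np

      def sdca_ridge(X, y, lam, epochs=50, seed=0):
          """Stochastic dual coordinate ascent for ridge regression:
          min_w (1/2n)||Xw - y||^2 + (lam/2)||w||^2,
          maintaining the primal-dual link w = (1/(lam*n)) * X^T alpha."""
          rng = np.random.default_rng(seed)
          n, d = X.shape
          alpha = np.zeros(n)          # dual variables, one per example
          w = np.zeros(d)              # primal iterate induced by alpha
          sq_norms = np.einsum('ij,ij->i', X, X)
          for _ in range(epochs):
              for i in rng.permutation(n):
                  # closed-form maximizer of the dual along coordinate i
                  delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + sq_norms[i] / (lam * n))
                  alpha[i] += delta
                  w += delta * X[i] / (lam * n)
          return w

      # illustrative usage on synthetic data
      rng = np.random.default_rng(1)
      X = rng.standard_normal((200, 10))
      w_true = rng.standard_normal(10)
      y = X @ w_true + 0.1 * rng.standard_normal(200)
      w = sdca_ridge(X, y, lam=0.1)
      print("recovery error:", np.linalg.norm(w - w_true))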
  2. Buttazzo, G.; Casas, E.; de Teresa, L.; Glowinski, R.; Leugering, G.; Trélat, E.; Zhang, X. (Ed.)
    In this article, we report the results we obtained when investigating the numerical solution of some nonlinear eigenvalue problems for the Monge-Ampère operator v → det D²v. The methodology we employ relies on the following ingredients: (i) a divergence formulation of the eigenvalue problems under consideration; (ii) a time discretization by operator-splitting of an initial value problem (a kind of gradient flow) associated with each eigenvalue problem; (iii) a finite element approximation relying on spaces of continuous piecewise affine functions. To validate this methodology, we applied it to problems with known exact solutions; the results we obtained suggest convergence to the exact solution as the space discretization step h → 0. We also considered test problems with no known exact solution.
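    For orientation, the prototypical nonlinear eigenvalue problem for this operator (stated here as background, in the form going back to Lions; the article treats several variants) seeks a nontrivial convex function $u$ and $\lambda > 0$ such that

      $\det D^2 u = \lambda\,|u|^{n}$ in $\Omega \subset \mathbb{R}^n$, $\quad u = 0$ on $\partial\Omega$, $\quad u$ convex.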
  3. Sparse principal component analysis and sparse canonical correlation analysis are two essential techniques from high-dimensional statistics and machine learning for analyzing large-scale data. Both problems can be formulated as optimization problems with nonsmooth objectives and nonconvex constraints. Because nonsmoothness and nonconvexity bring numerical difficulties, most algorithms suggested in the literature either solve relaxations of these problems or are heuristics that lack convergence guarantees. In this paper, we propose a new alternating manifold proximal gradient method to solve these two high-dimensional problems and provide a unified convergence analysis. Numerical results are reported to demonstrate the advantages of our algorithm.
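    The authors' alternating manifold proximal gradient method is not reproduced here. The sketch below is a minimal, simplified illustration of the underlying idea for single-component sparse PCA on the unit sphere: a gradient step on the variance term, a soft-thresholding (proximal) step for the l1 penalty, and a retraction back to the sphere. Function names, step sizes, and data are illustrative, and this splitting is a heuristic rather than the paper's algorithm.

      import numpy as np

      def soft_threshold(z, t):
          # proximal operator of t * ||.||_1
          return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

      def sparse_pca_sketch(S, mu=0.1, step=0.5, iters=500, seed=0):
          """Heuristic proximal-gradient iteration for
          max_x x^T S x - mu*||x||_1  subject to ||x||_2 = 1,
          where S is a (p x p) sample covariance matrix."""
          rng = np.random.default_rng(seed)
          x = rng.standard_normal(S.shape[0])
          x /= np.linalg.norm(x)
          for _ in range(iters):
              g = 2.0 * S @ x                              # gradient of the variance term
              z = soft_threshold(x + step * g, step * mu)  # prox of the l1 penalty
              nz = np.linalg.norm(z)
              if nz == 0.0:                                # mu too large: all entries thresholded
                  break
              x = z / nz                                   # retraction back to the unit sphere
          return x

      # illustrative usage: covariance with one sparse leading direction
      rng = np.random.default_rng(1)
      v = np.zeros(50)
      v[:5] = 1.0 / np.sqrt(5)
      S = 5.0 * np.outer(v, v) + 0.1 * np.eye(50)
      x = sparse_pca_sketch(S)
      print("support found:", np.nonzero(np.abs(x) > 1e-6)[0])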
  4. In this paper we develop convergence and acceleration theory for Anderson acceleration applied to Newton's method for nonlinear systems in which the Jacobian is singular at a solution. For these problems, the standard Newton algorithm converges only linearly in a region about the solution, and it has previously been observed that Anderson acceleration can substantially improve convergence without additional a priori knowledge and with little additional computational cost. We present an analysis of the Newton-Anderson algorithm in this context and introduce a novel, theoretically supported safeguarding strategy. The convergence results are demonstrated on the Chandrasekhar H-equation and a variety of benchmark examples.
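    As a concrete illustration (not the authors' implementation, and without their safeguarding strategy), the sketch below applies depth-one Anderson acceleration to the Newton iteration for the scalar equation f(x) = x², whose Jacobian f'(x) = 2x is singular at the root x* = 0. Plain Newton halves the error at each step there; the accelerated iteration does markedly better, landing on the root immediately for this exactly quadratic f.

      def newton_anderson1(f, fprime, x0, tol=1e-12, maxit=50):
          """Newton's method with depth-one Anderson acceleration for a
          scalar equation f(x) = 0.  The Newton update w(x) = -f(x)/f'(x)
          plays the role of the residual in the Anderson mixing formula."""
          x_prev = x0
          w_prev = -f(x_prev) / fprime(x_prev)
          x = x_prev + w_prev                     # first step is plain Newton
          history = [x_prev, x]
          for _ in range(maxit):
              if abs(f(x)) < tol or fprime(x) == 0.0:
                  break                           # at (or numerically at) a root
              w = -f(x) / fprime(x)
              dw = w - w_prev
              # least-squares mixing coefficient for window size one
              gamma = 0.0 if dw == 0.0 else w * dw / (dw * dw)
              x_new = x + w - gamma * ((x - x_prev) + dw)
              history.append(x_new)
              x_prev, w_prev, x = x, w, x_new
          return history

      # f(x) = x**2: singular Jacobian at the root x* = 0
      for k, xk in enumerate(newton_anderson1(lambda x: x**2, lambda x: 2.0 * x, 1.0)):
          print(k, xk)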
  5. Analysis of pipe networks involves computing flow rates and pressure differences on pipe segments in the network, given the external inflow/outflow values. This analysis can be conducted using iterative methods, among which the algorithms of Hardy Cross and Newton-Raphson have historically been applied in practice. In this note, we address the local convergence analysis of these algorithms. The loop-based Newton-Raphson algorithm converges quadratically, and we provide estimates for its convergence radius that correct some estimates in the previous literature. In contrast, we show that the convergence of the Hardy Cross algorithm is only linear. This provides theoretical confirmation of experimental observations reported earlier in the literature.
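    As a minimal illustration of the Hardy Cross iteration being analyzed (with made-up pipe resistances), the sketch below corrects one loop at a time in a network of three parallel pipes between two junctions, using the quadratic head-loss model h = r*Q*|Q| and the classical correction dQ = -sum(h) / sum(2*r*|Q|). The geometric decrease of the corrections across sweeps reflects the linear convergence discussed in the note.

      def hardy_cross(r, q_total, iters=100, tol=1e-10):
          """Hardy Cross iteration for three parallel pipes between two
          junctions (two independent loops); head loss h = r*Q*|Q|."""
          q = [q_total / 3.0] * 3                      # initial guess satisfying continuity
          for k in range(iters):
              max_dq = 0.0
              for i, j in [(0, 1), (1, 2)]:            # correct one loop at a time
                  h = r[i] * q[i] * abs(q[i]) - r[j] * q[j] * abs(q[j])
                  dh = 2.0 * (r[i] * abs(q[i]) + r[j] * abs(q[j]))
                  dq = -h / dh
                  q[i] += dq                           # the loop correction preserves
                  q[j] -= dq                           # continuity at the junctions
                  max_dq = max(max_dq, abs(dq))
              if max_dq < tol:
                  break
          return q, k

      # illustrative data: resistances (4, 1, 1), total inflow 4; the exact
      # flows equalize head loss, 4*q1^2 = q2^2 = q3^2, giving (0.8, 1.6, 1.6)
      q, k = hardy_cross([4.0, 1.0, 1.0], 4.0)
      print(f"converged in {k} sweeps: {q}")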