

Search for: All records

Award ID contains: 1715671

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Under consideration are multicomponent minimization problems involving a separable nonsmooth convex function penalizing the components individually, and nonsmooth convex coupling terms penalizing linear mixtures of the components. We investigate the application of block-activated proximal algorithms for solving such problems, i.e., algorithms which, at each iteration, need to use only a block of the underlying functions, as opposed to all of them as in standard methods. For smooth coupling functions, several block-activated algorithms exist and they are well understood. By contrast, in the fully nonsmooth case, few block-activated methods are available and little effort has been devoted to assessing them. Our goal is to shed more light on the implementation, the features, and the behavior of these algorithms, compare their merits, and provide machine learning and image recovery experiments illustrating their performance.
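For readers unfamiliar with the block-activation idea in item 1, the sketch below is a minimal illustration, not one of the algorithms assessed in the paper: because the proximity operator of a separable penalty splits across components, an iteration can evaluate the prox of a single activated block and leave the other components untouched. The l1 penalties, block sizes, and weights are assumptions made purely for the example.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def block_activated_prox_step(x, blocks, weights, active, gamma=1.0):
    """Apply the prox of the separable penalty on the activated block only.

    Because sum_i w_i * ||x_i||_1 is separable, its prox splits across blocks,
    so a block-activated iteration needs to touch a single block.
    """
    x = x.copy()
    sl = blocks[active]
    x[sl] = soft_threshold(x[sl], gamma * weights[active])
    return x

rng = np.random.default_rng(0)
x = rng.standard_normal(9)
blocks = [slice(0, 3), slice(3, 6), slice(6, 9)]   # three components
weights = [0.5, 1.0, 0.2]                          # assumed per-component weights

for n in range(5):
    i = rng.integers(len(blocks))                  # randomly activate one block
    x = block_activated_prox_step(x, blocks, weights, active=i)
    print(f"iteration {n}: activated block {i}")
```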
  2. We show that many nonlinear observation models in signal recovery can be represented using firmly nonexpansive operators. To address problems with inaccurate measurements, we propose solving a variational inequality relaxation which is guaranteed to possess solutions under mild conditions and which coincides with the original problem if it happens to be consistent. We then present an efficient algorithm for its solution, as well as numerical applications in signal and image recovery, including an experimental operator-theoretic method of promoting sparsity.
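An operator T is firmly nonexpansive when ||Tx - Ty||^2 <= <x - y, Tx - Ty> for all x and y. The toy check below samples this inequality for a clipping (saturation) nonlinearity, i.e., the projection onto a box; it is only meant to make the notion concrete, and the operator and dimensions are assumptions rather than the observation models or the algorithm of item 2.

```python
import numpy as np

def saturate(x, lo=-1.0, hi=1.0):
    """Clipping nonlinearity = projection onto the box [lo, hi]^n, hence firmly nonexpansive."""
    return np.clip(x, lo, hi)

rng = np.random.default_rng(1)
worst = -np.inf
for _ in range(10_000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    Tx, Ty = saturate(x), saturate(y)
    lhs = np.linalg.norm(Tx - Ty) ** 2
    rhs = float(np.dot(x - y, Tx - Ty))
    worst = max(worst, lhs - rhs)      # should stay <= 0 up to rounding

print(f"max of ||Tx - Ty||^2 - <x - y, Tx - Ty> over samples: {worst:.2e}")
```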
  3. We propose a novel approach to monotone operator splitting based on the notion of a saddle operator. Under investigation is a highly structured multivariate monotone inclusion problem involving a mix of set-valued, cocoercive, and Lipschitzian monotone operators, as well as various monotonicity-preserving operations among them. This model encompasses most formulations found in the literature. A limitation of existing primal-dual algorithms is that they operate in a product space that is too small to achieve full splitting of our problem in the sense that each operator is used individually. To circumvent this difficulty, we recast the problem as that of finding a zero of a saddle operator that acts on a bigger space. This leads to an algorithm of unprecedented flexibility, which achieves full splitting, exploits the specific attributes of each operator, is asynchronous, and requires activating only blocks of operators at each iteration, as opposed to all of them. The latter feature is of critical importance in large-scale problems. The weak convergence of the main algorithm is established, as well as the strong convergence of a variant. Various applications are discussed, and instantiations of the proposed framework in the context of variational inequalities and minimization problems are presented.
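For context on the existing primal-dual methods that item 3 contrasts itself with, here is a minimal sketch of the classical Chambolle-Pock primal-dual iteration for min_x f(x) + g(Lx); it is not the saddle-operator algorithm of the paper, and the instance (an l1 penalty with a least-squares coupling term) and all parameters are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
L = rng.standard_normal((30, 60))
b = rng.standard_normal(30)
lam = 0.1
Lnorm = np.linalg.norm(L, 2)
tau = sigma = 0.99 / Lnorm           # step sizes satisfying tau * sigma * ||L||^2 < 1

def prox_f(v, t):
    """prox of t * lam * ||.||_1: soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

def prox_gstar(v, s):
    """prox of s * g* for g = 0.5 * ||. - b||^2, computed in closed form."""
    return (v - s * b) / (1.0 + s)

x = np.zeros(60)
y = np.zeros(30)
for n in range(2000):
    x_new = prox_f(x - tau * (L.T @ y), tau)          # primal step: prox of f
    y = prox_gstar(y + sigma * (L @ (2 * x_new - x)), sigma)  # dual step: prox of g*
    x = x_new

print("objective:", lam * np.abs(x).sum() + 0.5 * np.linalg.norm(L @ x - b) ** 2)
```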
  4. We consider the problem of recovering a signal from nonlinear transformations, under convex constraints modeling a priori information. Standard feasibility and optimization methods are ill-suited to tackle this problem due to the nonlinearities. We show that, in many common applications, the transformation model can be associated with fixed point equations involving firmly nonexpansive operators. In turn, the recovery problem is reduced to a tractable common fixed point formulation, which is solved efficiently by a provably convergent, block-iterative algorithm. Applications to signal and image recovery are demonstrated. Inconsistent problems are also addressed.
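The common fixed point viewpoint in item 4 can be made concrete with a toy block-iterative scheme: cyclically applying the projections onto two convex constraint sets, each of which is firmly nonexpansive, drives the iterate to a common fixed point, i.e., a point satisfying both constraints. The particular sets (a measurement hyperplane and a box) are assumptions for the sketch and not the nonlinear operators of the paper.

```python
import numpy as np

# Assumed toy constraints: one linear measurement <a, x> = b and the box [0, 1]^n.
a = np.array([1.0, 2.0, -1.0])
b = 1.5

def proj_hyperplane(x):
    """Projection onto {x : <a, x> = b}; firmly nonexpansive."""
    return x + (b - a @ x) / (a @ a) * a

def proj_box(x):
    """Projection onto [0, 1]^n; firmly nonexpansive."""
    return np.clip(x, 0.0, 1.0)

x = np.zeros(3)
for n in range(200):
    # Block-iterative sweep: activate the two projection operators in turn.
    x = proj_box(proj_hyperplane(x))

print("x =", x)
print("residual <a, x> - b =", a @ x - b)
print("in box:", bool(np.all((0.0 <= x) & (x <= 1.0))))
```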
  5. The goal of this paper is to promote the use of fixed point strategies in data science by showing that they provide a simplifying and unifying framework to model, analyze, and solve a great variety of problems. They are seen to constitute a natural environment to explain the behavior of advanced convex optimization methods as well as of recent nonlinear methods in data science which are formulated in terms of paradigms that go beyond minimization concepts and involve constructs such as Nash equilibria or monotone inclusions. We review the pertinent tools of fixed point theory and describe the main state-of-the-art algorithms for provably convergent fixed point construction. We also incorporate additional ingredients such as stochasticity, block-implementations, and non-Euclidean metrics, which provide further enhancements. Applications to signal and image processing, machine learning, statistics, neural networks, and inverse problems are discussed.
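As one example of the fixed point viewpoint promoted in item 5, the proximal gradient (forward-backward) operator is averaged and its fixed points are exactly the minimizers of the underlying composite problem, so a plain fixed point iteration converges to a solution. The lasso instance, data, and step size below are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 50))
b = rng.standard_normal(20)
lam = 0.5
Lip = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient of 0.5||Ax - b||^2
gamma = 1.0 / Lip                       # any gamma in (0, 2 / Lip) works

def T(x):
    """Forward-backward operator: prox of lam * ||.||_1 after a gradient step.

    T is averaged, so the fixed point iteration below converges, and its fixed
    points are exactly the minimizers of 0.5||Ax - b||^2 + lam * ||x||_1.
    """
    v = x - gamma * (A.T @ (A @ x - b))
    return np.sign(v) * np.maximum(np.abs(v) - gamma * lam, 0.0)

x = np.zeros(50)
for n in range(500):
    x = T(x)                            # fixed point (Krasnosel'skii-Mann) iteration

print("fixed point residual ||Tx - x|| =", np.linalg.norm(T(x) - x))
```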
  6. We show that the weak convergence of the Douglas–Rachford algorithm for finding a zero of the sum of two maximally monotone operators cannot be improved to strong convergence. Likewise, we show that strong convergence can fail for the method of partial inverses.
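For reference, the Douglas–Rachford iteration whose convergence is discussed in item 6 takes the following form when the two maximally monotone operators are subdifferentials of convex functions. The basis pursuit instance below is an assumption chosen so that both resolvents have closed forms; the sketch does not reproduce the paper's counterexamples.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((15, 40))
b = A @ (rng.standard_normal(40) * (rng.random(40) < 0.2))   # consistent measurements
gamma = 1.0
AAt_inv = np.linalg.inv(A @ A.T)

def prox_f(v):
    """Projection onto the affine set {x : Ax = b} (resolvent of its normal cone)."""
    return v + A.T @ (AAt_inv @ (b - A @ v))

def prox_g(v):
    """prox of gamma * ||.||_1: soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - gamma, 0.0)

y = np.zeros(40)
for n in range(2000):
    x = prox_f(y)                       # governing sequence; x converges to a solution
    y = y + prox_g(2 * x - y) - x       # Douglas-Rachford update of the auxiliary point
x = prox_f(y)

print("||Ax - b|| =", np.linalg.norm(A @ x - b), "  ||x||_1 =", np.abs(x).sum())
```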
  7. Motivated by structures that appear in deep neural networks, we investigate nonlinear composite models alternating proximity and affine operators defined on different spaces. We first show that a wide range of activation operators used in neural networks are actually proximity operators. We then establish conditions for the averagedness of the proposed composite constructs and investigate their asymptotic properties. It is shown that the limit of the resulting process solves a variational inequality which, in general, does not derive from a minimization problem. The analysis relies on tools from monotone operator theory and sheds some light on a class of neural network structures whose asymptotic properties have so far remained elusive.
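A quick way to see the claim in item 7 that common activation operators are proximity operators: ReLU coincides with the projection onto the nonnegative orthant, which is the proximity operator of that orthant's indicator function, so a layer x -> ReLU(Wx + c) alternates an affine operator with a proximity operator. The weights and sizes below are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(6)

def relu(x):
    return np.maximum(x, 0.0)

def prox_indicator_nonneg(x):
    """Projection onto the nonnegative orthant, i.e. the prox of its indicator function."""
    return np.clip(x, 0.0, None)

x = rng.standard_normal(8)
print("ReLU equals the projection:", np.allclose(relu(x), prox_indicator_nonneg(x)))

W = rng.standard_normal((8, 8)) / 8.0   # assumed affine weights
c = rng.standard_normal(8) * 0.1        # assumed bias
layer_output = relu(W @ x + c)          # proximity operator applied after an affine map
print("layer output:", layer_output)
```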
  8.