

Title: Linearized Krylov subspace Bregman iteration with nonnegativity constraint
Abstract Bregman-type iterative methods have received considerable attention in recent years due to their ease of implementation and the high quality of the computed solutions they deliver. However, these iterative methods may require a large number of iterations, which reduces their usefulness. This paper develops a computationally attractive linearized Bregman algorithm by projecting the problem to be solved into an appropriately chosen low-dimensional Krylov subspace. The projection reduces the computational effort required for each iteration. A variant of this solution method, in which nonnegativity of each computed iterate is imposed, is also described. Extensive numerical examples illustrate the performance of the proposed methods.
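As a rough illustration of the kind of iteration the paper accelerates, the classical linearized Bregman method for min ‖x‖₁ subject to Ax = b can be sketched in dense NumPy. This is a toy version, not the paper's Krylov-projected algorithm; the threshold `mu`, the iteration count, and the spectral-norm step size are illustrative choices, and the optional projection onto the nonnegative orthant only mimics the nonnegativity-constrained variant.

```python
import numpy as np

def linearized_bregman(A, b, mu=5.0, iters=1000, nonneg=False):
    """Classical linearized Bregman iteration for min ||x||_1 s.t. Ax = b.

    Dense toy version; the paper instead projects the problem onto a
    low-dimensional Krylov subspace to reduce the per-iteration cost.
    """
    delta = 0.9 / np.linalg.norm(A, 2) ** 2   # step size from the spectral norm
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v = v + A.T @ (b - A @ x)             # gradient step on the residual
        x = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)  # soft-threshold
        if nonneg:                            # nonnegativity-constrained variant
            x = np.maximum(x, 0.0)
    return x
```

Each iteration costs one product with A and one with its transpose; projecting onto a small Krylov subspace, as the paper does, replaces these large products with much smaller ones.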
Award ID(s):
1720259 1729509
NSF-PAR ID:
10191507
Author(s) / Creator(s):
Date Published:
Journal Name:
Numerical Algorithms
ISSN:
1017-1398
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Canonical polyadic decomposition (CPD) has been a workhorse for multimodal data analytics. This work puts forth a stochastic algorithmic framework for CPD under β-divergence, which is well-motivated in statistical learning, where the Euclidean distance is typically not preferred. Despite a series of prior works addressing this topic, pressing computational and theoretical challenges, e.g., scalability and convergence issues, still remain. In this paper, a unified stochastic mirror descent framework is developed for large-scale β-divergence CPD. Our key contribution is the integrated design of a tensor fiber sampling strategy and a flexible stochastic Bregman divergence-based mirror descent iterative procedure, which significantly reduces the computation and memory cost per iteration for various β. Leveraging the fiber sampling scheme and the multilinear algebraic structure of low-rank tensors, the proposed lightweight algorithm also ensures global convergence to a stationary point under mild conditions. Numerical results on synthetic and real data show that our framework attains significant computational savings compared with state-of-the-art methods.
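For concreteness, the β-divergence referred to above, together with a generic mirror descent step under the negative-entropy mirror map (exponentiated gradient, a common choice that keeps nonnegative factors positive), can be sketched as follows. These are generic building blocks, not the paper's fiber-sampled algorithm:

```python
import numpy as np

def beta_divergence(x, y, beta):
    """Elementwise beta-divergence d_beta(x | y), summed over entries.

    beta = 2 gives half the squared Euclidean distance, beta = 1 the
    KL divergence, beta = 0 the Itakura-Saito divergence.
    """
    if beta == 1:
        return np.sum(x * np.log(x / y) - x + y)
    if beta == 0:
        return np.sum(x / y - np.log(x / y) - 1)
    return np.sum((x**beta + (beta - 1) * y**beta - beta * x * y**(beta - 1))
                  / (beta * (beta - 1)))

def mirror_descent_step(w, grad, eta):
    """One mirror descent step under the negative-entropy mirror map
    (exponentiated gradient); a positive iterate stays positive."""
    return w * np.exp(-eta * grad)
```

In a stochastic scheme like the one described above, `grad` would be estimated from sampled tensor fibers rather than computed in full.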
  2. Plug-and-Play (PnP) methods are efficient iterative algorithms for solving ill-posed image inverse problems. PnP methods are obtained by using deep Gaussian denoisers instead of the proximal operator or the gradient-descent step within proximal algorithms. Current PnP schemes rely on data-fidelity terms that have either Lipschitz gradients or closed-form proximal operators, which is not applicable to Poisson inverse problems. Based on the observation that Gaussian noise is not an adequate noise model in this setting, we propose to generalize PnP using the Bregman Proximal Gradient (BPG) method. BPG replaces the Euclidean distance with a Bregman divergence that can better capture the smoothness properties of the problem. We introduce the Bregman Score Denoiser, specifically parametrized and trained for the new Bregman geometry, and prove that it corresponds to the proximal operator of a nonconvex potential. We propose two PnP algorithms based on the Bregman Score Denoiser for solving Poisson inverse problems. Extending the convergence results of BPG to the nonconvex setting, we show that the proposed methods converge, targeting stationary points of an explicit global functional. Experimental evaluations conducted on various Poisson inverse problems validate the convergence results and showcase effective restoration performance.
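A minimal sketch of the Bregman proximal gradient update, assuming the Burg entropy h(x) = -Σ log(xᵢ) as the Bregman geometry (a standard choice for Poisson fidelity terms) and omitting the denoiser/regularizer, which the actual PnP schemes described above would include:

```python
import numpy as np

def bpg_step(x, grad_f, step):
    """One Bregman proximal gradient step (regularizer omitted) under the
    Burg entropy h(x) = -sum(log(x)). The update solves
        grad h(x_new) = grad h(x) - step * grad_f(x),
    which with grad h(x) = -1/x gives, elementwise,
        x_new = 1 / (1/x + step * grad_f).
    Assumes 1/x + step * grad_f stays positive.
    """
    return 1.0 / (1.0 / x + step * grad_f)
```

Note how the mirror map keeps the iterate strictly positive, which a Euclidean gradient step would not guarantee; this is the sense in which the Bregman geometry matches Poisson data.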
  3. We present two algorithms to compute system-specific polarizabilities and dispersion coefficients such that the required memory and computational time scale linearly with the number of atoms in the unit cell for large systems. The first algorithm computes the atom-in-material (AIM) static polarizability tensors, force-field polarizabilities, and C6, C8, C9, C10 dispersion coefficients using the MCLF method. The second algorithm computes the AIM polarizability tensors and C6 coefficients using the TS-SCS method. Linear-scaling computational cost is achieved using a dipole interaction cutoff length function combined with iterative methods that avoid large dense matrix multiplies and large matrix inversions. For MCLF, Richardson extrapolation of the screening increments is used. For TS-SCS, a failproof conjugate residual (FCR) algorithm is introduced that solves any linear equation system having a Hermitian coefficient matrix. These algorithms have mathematically provable stable convergence that resists round-off errors. We parallelized these methods to provide rapid computation on multi-core computers. Excellent parallelization efficiencies were obtained, and adding parallel processors does not significantly increase memory requirements. This enables system-specific polarizabilities and dispersion coefficients to be readily computed for materials containing millions of atoms in the unit cell. The largest example studied herein is an ice crystal containing >2 million atoms in the unit cell. For this material, the FCR algorithm solved a linear equation system containing >6 million rows, 7.57 billion interacting atom pairs, 45.4 billion stored non-negligible matrix components used in each large matrix-vector multiplication, and ∼19 million unknowns per frequency point (>300 million total unknowns).
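A plain conjugate residual iteration for a Hermitian (here real symmetric) system illustrates the core recurrence that the FCR algorithm described above builds on; the paper's failproof variant adds safeguards and matrix-free, sparse products that this dense NumPy sketch omits:

```python
import numpy as np

def conjugate_residual(A, b, tol=1e-10, maxiter=1000):
    """Basic conjugate residual iteration for a symmetric matrix A.

    Simplified stand-in for the paper's failproof CR variant; uses the
    standard recurrence with one matrix-vector product per iteration.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    Ar = A @ r
    Ap = Ar.copy()
    rAr = r @ Ar
    for _ in range(maxiter):
        alpha = rAr / (Ap @ Ap)       # minimizes ||b - A x|| along p
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r
        rAr_new = r @ Ar
        beta = rAr_new / rAr          # keeps the A p directions conjugate
        p = r + beta * p
        Ap = Ar + beta * Ap           # update A p without an extra product
        rAr = rAr_new
    return x
```

Because `Ap` is updated by recurrence, each iteration needs only the single product `A @ r`, which is what makes matrix-free, linear-scaling implementations like the one described above possible.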
  4. The thermal radiative transfer (TRT) equations form an integro-differential system that describes the propagation and collisional interactions of photons. Computing accurate and efficient numerical solutions of the TRT equations is challenging for several reasons, the first of which is that TRT is defined on a high-dimensional phase space that includes the independent variables of time, space, and velocity. In order to reduce the dimensionality of the phase space, classical approaches such as the P$_N$ (spherical harmonics) or the S$_N$ (discrete ordinates) ansatz are often used in the literature. In this work, we introduce a novel approach: the hybrid discrete (H$^T_N$) approximation to the thermal radiative transfer equations. This approach inherits desirable properties of both P$_N$ and S$_N$, and indeed reduces to each of these approximations in various limits: H$^1_N$ $\equiv$ P$_N$ and H$^T_0$ $\equiv$ S$_T$. We prove that H$^T_N$ results in a system of hyperbolic partial differential equations for all $T\ge 1$ and $N\ge 0$. Another challenge in solving the TRT system is the inherent stiffness due to the large timescale separation between propagation and collisions, especially in the diffusive (i.e., highly collisional) regime. This stiffness challenge can be partially overcome via implicit time integration, although fully implicit methods may become computationally expensive due to the strong nonlinearity and system size. On the other hand, explicit time-stepping schemes that are not also asymptotic-preserving in the highly collisional limit require resolving the mean-free path between collisions, making such schemes prohibitively expensive. In this work we develop a numerical method that is based on a nodal discontinuous Galerkin discretization in space, coupled with a semi-implicit discretization in time. In particular, we make use of a second order explicit Runge-Kutta scheme for the streaming term and an implicit Euler scheme for the material coupling term.
Furthermore, in order to solve the material energy equation implicitly after each predictor and corrector step, we linearize the temperature term using a Taylor expansion; this avoids the need for an iterative procedure, and therefore improves efficiency. In order to reduce unphysical oscillation, we apply a slope limiter after each time step. Finally, we conduct several numerical experiments to verify the accuracy, efficiency, and robustness of the H$^T_N$ ansatz and the numerical discretizations. 
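The Taylor linearization of the temperature term can be illustrated on a hypothetical scalar toy model, dT/dt = c(E - T^4): linearizing T^4 about the current value turns the implicit Euler update into a single linear (here scalar) solve, with no Newton iteration. The model and constants below are illustrative, not the paper's discretization:

```python
def implicit_euler_linearized(T, dt, c, E):
    """One implicit Euler step for the toy model dT/dt = c * (E - T**4),
    with T**4 linearized about the current value (first-order Taylor):
        T_new**4 is approximated by T**4 + 4*T**3*(T_new - T),
    so the implicit update becomes a single linear solve in T_new:
        T_new * (1 + 4*dt*c*T**3) = T + dt*c*(E + 3*T**4).
    """
    return (T + dt * c * (E + 3.0 * T**4)) / (1.0 + 4.0 * dt * c * T**3)
```

The linearized update preserves the equilibrium T^4 = E exactly, which is one reason this trick is attractive compared with a full Newton solve of the nonlinear coupling.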
  5. Abstract Motivation

    Advances in high-throughput technology have characterized a wide variety of epigenetic modifications and noncoding RNAs across the genome that are involved in disease pathogenesis via the regulation of gene expression. The high dimensionality of both epigenetic/noncoding RNA and gene expression data makes it challenging to identify the important regulators of genes. Conducting a univariate test for each possible regulator–gene pair incurs a serious multiple-comparison burden, and direct application of regularization methods to select regulator–gene pairs is computationally infeasible. Applying fast screening to reduce the dimension before regularization is more efficient and stable than applying regularization methods alone.

    Results

    We propose a novel screening method based on robust partial correlation to detect epigenetic and noncoding RNA regulators of gene expression over the whole genome, a problem that involves both high-dimensional predictors and high-dimensional responses. Compared to existing screening methods, our method is conceptually innovative in that it reduces the dimension of both predictors and responses, and screens at both the node (regulators or genes) and edge (regulator–gene pairs) levels. We develop data-driven procedures to determine the conditioning sets and the optimal screening threshold, and implement a fast iterative algorithm. Simulations and applications to long noncoding RNA and microRNA regulation in kidney cancer and DNA methylation regulation in glioblastoma multiforme illustrate the validity and advantage of our method.
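As a generic building block for this kind of screening, a sample partial correlation between two variables given a conditioning set can be computed by correlating least-squares residuals. This sketch is non-robust and fixes the conditioning set by hand; the method described above uses a robust variant with data-driven conditioning sets:

```python
import numpy as np

def partial_correlation(x, y, Z):
    """Sample partial correlation of x and y given the columns of Z.

    Computed by regressing x and y on [1, Z] via least squares and
    correlating the residuals; a generic, non-robust building block.
    """
    Z1 = np.column_stack([np.ones(len(x)), Z])        # intercept + conditioning set
    rx = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]
    ry = y - Z1 @ np.linalg.lstsq(Z1, y, rcond=None)[0]
    return (rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry))
```

Screening then keeps only the regulator–gene pairs whose partial correlation exceeds a threshold, shrinking the candidate set before any regularization step.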

    Availability and implementation

    The R package, related source codes and real datasets used in this article are provided at https://github.com/kehongjie/rPCor.

    Supplementary information

    Supplementary data are available at Bioinformatics online.

     