
Title: Linearized Krylov subspace Bregman iteration with nonnegativity constraint
Abstract: Bregman-type iterative methods have received considerable attention in recent years due to their ease of implementation and the high quality of the solutions they compute. However, these methods may require a large number of iterations, which reduces their usefulness. This paper develops a computationally attractive linearized Bregman algorithm by projecting the problem to be solved into an appropriately chosen low-dimensional Krylov subspace. The projection reduces the computational effort required for each iteration. A variant of this solution method, in which nonnegativity of each computed iterate is imposed, is also described. Extensive numerical examples illustrate the performance of the proposed methods.
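
As background, the following is a minimal sketch of the classical linearized Bregman iteration for min ||x||_1 subject to Ax = b, with the optional nonnegativity projection the abstract mentions. It deliberately omits the Krylov subspace projection that is the paper's contribution, and the step-size rule and parameter names are assumptions, not the paper's choices.

```python
import numpy as np

def linearized_bregman(A, b, mu=1.0, delta=None, n_iter=500, nonneg=False):
    """Classical linearized Bregman iteration for min ||x||_1 s.t. Ax = b.

    Sketch only: the paper instead projects the problem onto a small
    Krylov subspace so that each step works with low-dimensional data.
    """
    if delta is None:
        # conservative step size; convergence needs delta < 2 / ||A||_2^2
        delta = 1.0 / np.linalg.norm(A, 2) ** 2
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v += A.T @ (b - A @ x)  # dual (Bregman) update with the current residual
        x = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)  # soft-threshold
        if nonneg:
            x = np.maximum(x, 0.0)  # project each iterate onto x >= 0
    return x
```

Per the abstract, the paper's method reduces the cost of the matrix-vector products above by restricting them to a low-dimensional Krylov subspace; the details of that projection are in the paper itself.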
Award ID(s):
1720259 1729509
NSF-PAR ID:
10191507
Journal Name:
Numerical Algorithms
ISSN:
1017-1398
Sponsoring Org:
National Science Foundation
More Like this
  1. Canonical polyadic decomposition (CPD) has been a workhorse for multimodal data analytics. This work puts forth a stochastic algorithmic framework for CPD under β-divergence, which is well-motivated in statistical learning, where the Euclidean distance is typically not preferred. Despite the existence of a series of prior works addressing this topic, pressing computational and theoretical challenges, e.g., scalability and convergence issues, still remain. In this paper, a unified stochastic mirror descent framework is developed for large-scale β-divergence CPD. Our key contribution is the integrated design of a tensor fiber sampling strategy and a flexible stochastic Bregman divergence-based mirror descent iterative procedure, which significantly reduces the computation and memory cost per iteration for various β. Leveraging the fiber sampling scheme and the multilinear algebraic structure of low-rank tensors, the proposed lightweight algorithm also ensures global convergence to a stationary point under mild conditions. Numerical results on synthetic and real data show that our framework achieves significant computational savings compared with state-of-the-art methods.
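
To illustrate the kind of Bregman-divergence mirror descent step this abstract refers to, here is a generic sketch under the negative-entropy mirror map, whose Bregman divergence is the generalized KL divergence. The tensor fiber sampling and β-divergence CPD specifics of the paper are not reproduced, and the step size and toy objective are assumptions.

```python
import numpy as np

def mirror_descent(grad_fn, x0, eta=0.05, n_iter=200):
    """Mirror descent with the negative-entropy mirror map
    phi(x) = sum(x*log(x) - x): the induced Bregman divergence is the
    generalized KL divergence, and the update is multiplicative, so the
    iterate stays in the nonnegative orthant without extra projections.
    """
    x = x0.copy()
    for _ in range(n_iter):
        g = grad_fn(x)            # (possibly stochastic) gradient
        x = x * np.exp(-eta * g)  # multiplicative mirror step
    return x

# toy usage: minimize ||x - t||^2 over x >= 0
t = np.array([1.0, 2.0, 0.5])
x = mirror_descent(lambda x: 2.0 * (x - t), np.ones(3))
```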
  2. We present two algorithms to compute system-specific polarizabilities and dispersion coefficients such that the required memory and computational time scale linearly with the number of atoms in the unit cell for large systems. The first algorithm computes the atom-in-material (AIM) static polarizability tensors, force-field polarizabilities, and C6, C8, C9, C10 dispersion coefficients using the MCLF method. The second algorithm computes the AIM polarizability tensors and C6 coefficients using the TS-SCS method. Linear-scaling computational cost is achieved using a dipole interaction cutoff length function combined with iterative methods that avoid large dense matrix multiplications and large matrix inversions. For MCLF, Richardson extrapolation of the screening increments is used. For TS-SCS, a failproof conjugate residual (FCR) algorithm is introduced that solves any linear equation system having a Hermitian coefficient matrix. These algorithms have mathematically provable stable convergence that resists round-off errors. We parallelized these methods to provide rapid computation on multi-core computers. Excellent parallelization efficiencies were obtained, and adding parallel processors does not significantly increase memory requirements. This enables system-specific polarizabilities and dispersion coefficients to be readily computed for materials containing millions of atoms in the unit cell. The largest example studied herein is an ice crystal containing >2 million atoms in the unit cell. For this material, the FCR algorithm solved a linear equation system containing >6 million rows, 7.57 billion interacting atom pairs, 45.4 billion stored non-negligible matrix components used in each large matrix-vector multiplication, and ∼19 million unknowns per frequency point (>300 million total unknowns).
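
For reference, the classical conjugate residual iteration that a "failproof" variant would safeguard can be sketched as follows. The FCR modifications described in the abstract are not shown, and the plain method below assumes a Hermitian positive definite matrix rather than an arbitrary Hermitian one.

```python
import numpy as np

def conjugate_residual(A, b, tol=1e-10, max_iter=1000):
    """Textbook conjugate residual method for Ax = b with Hermitian A.

    Plain CR is only guaranteed for Hermitian positive definite A; the
    paper's FCR variant adds safeguards (not reproduced here) and claims
    to handle any Hermitian coefficient matrix.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    Ar = A @ r
    Ap = Ar.copy()
    rAr = np.vdot(r, Ar)                 # r^H A r
    for _ in range(max_iter):
        alpha = rAr / np.vdot(Ap, Ap)    # step length minimizing ||r||
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r
        rAr_new = np.vdot(r, Ar)
        beta = rAr_new / rAr
        p = r + beta * p                 # new search direction
        Ap = Ar + beta * Ap              # update A @ p without an extra product
        rAr = rAr_new
    return x
```

Note the single matrix-vector product per iteration; at the problem sizes quoted above (billions of stored matrix entries), that product dominates the cost, which is why a short-recurrence method of this type is attractive.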
  3. The thermal radiative transfer (TRT) equations form an integro-differential system that describes the propagation and collisional interactions of photons. Computing accurate and efficient numerical solutions of the TRT equations is challenging for several reasons, the first of which is that TRT is defined on a high-dimensional phase space that includes the independent variables of time, space, and velocity. In order to reduce the dimensionality of the phase space, classical approaches such as the P$_N$ (spherical harmonics) or the S$_N$ (discrete ordinates) ansatz are often used in the literature. In this work, we introduce a novel approach: the hybrid discrete (H$^T_N$) approximation to the thermal radiative transfer equations. This approach acquires desirable properties of both P$_N$ and S$_N$, and indeed reduces to each of these approximations in various limits: H$^1_N$ $\equiv$ P$_N$ and H$^T_0$ $\equiv$ S$_T$. We prove that H$^T_N$ results in a system of hyperbolic partial differential equations for all $T\ge 1$ and $N\ge 0$. Another challenge in solving the TRT system is the inherent stiffness due to the large timescale separation between propagation and collisions, especially in the diffusive (i.e., highly collisional) regime. This stiffness challenge can be partially overcome via implicit time integration, although fully implicit methods may become computationally expensive due to the strong nonlinearity and system size. On the other hand, explicit time-stepping schemes that are not also asymptotic-preserving in the highly collisional limit require resolving the mean free path between collisions, making such schemes prohibitively expensive. In this work we develop a numerical method that is based on a nodal discontinuous Galerkin discretization in space, coupled with a semi-implicit discretization in time. In particular, we make use of a second-order explicit Runge-Kutta scheme for the streaming term and an implicit Euler scheme for the material coupling term. Furthermore, in order to solve the material energy equation implicitly after each predictor and corrector step, we linearize the temperature term using a Taylor expansion; this avoids the need for an iterative procedure, and therefore improves efficiency. In order to reduce unphysical oscillations, we apply a slope limiter after each time step. Finally, we conduct several numerical experiments to verify the accuracy, efficiency, and robustness of the H$^T_N$ ansatz and the numerical discretizations.
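
The semi-implicit treatment described above (explicit update for the nonstiff term, implicit Euler with a Taylor-linearized stiff term) can be illustrated on a scalar stiff ODE. This is a toy sketch of the linearization idea only, not the paper's RK2 predictor-corrector DG scheme; all names and values below are assumptions.

```python
def semi_implicit_step(u, dt, f_exp, g_stiff, dg_stiff):
    """One step for u' = f_exp(u) + g_stiff(u): f_exp is advanced
    explicitly, while g_stiff is handled by implicit Euler after the
    linearization g(u_new) ~ g(u) + g'(u)*(u_new - u), so each step
    costs one linear solve instead of a Newton iteration.
    """
    # (1 - dt*g'(u)) * (u_new - u) = dt * (f_exp(u) + g_stiff(u))
    return u + dt * (f_exp(u) + g_stiff(u)) / (1.0 - dt * dg_stiff(u))

# toy usage: u' = 1 - 50*u, stiff relaxation toward u = 0.02;
# dt = 0.1 is far above the explicit stability limit (~2/50) yet remains stable
u, dt = 1.0, 0.1
for _ in range(20):
    u = semi_implicit_step(u, dt, lambda v: 1.0, lambda v: -50.0 * v,
                           lambda v: -50.0)
```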
  4. Motivation

    The advancement of high-throughput technology characterizes a wide variety of epigenetic modifications and noncoding RNAs across the genome that are involved in disease pathogenesis via regulating gene expression. The high dimensionality of both epigenetic/noncoding RNA and gene expression data makes it challenging to identify the important regulators of genes. Conducting a univariate test for each possible regulator–gene pair incurs a serious multiple-comparison burden, and direct application of regularization methods to select regulator–gene pairs is computationally infeasible. Applying fast screening to reduce the dimension first, before regularization, is more efficient and stable than applying regularization methods alone.

    Results

    We propose a novel screening method based on robust partial correlation to detect epigenetic and noncoding RNA regulators of gene expression over the whole genome, a problem that involves both high-dimensional predictors and high-dimensional responses. Compared to existing screening methods, our method is conceptually innovative in that it reduces the dimension of both predictors and responses, and screens at both the node (regulators or genes) and edge (regulator–gene pairs) levels. We develop data-driven procedures to determine the conditional sets and the optimal screening threshold, and implement a fast iterative algorithm. Simulations and applications to long noncoding RNA and microRNA regulation in kidney cancer and DNA methylation regulation in glioblastoma multiforme illustrate the validity and advantage of our method.

    Availability and implementation

    The R package, related source code, and the real datasets used in this article are provided at https://github.com/kehongjie/rPCor.

    Supplementary information

    Supplementary data are available at Bioinformatics online.

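The screening idea in the preceding abstract can be illustrated with a plain (non-robust) partial-correlation screen: residualize regulators and genes against a conditioning set, correlate the residuals, and keep pairs above a threshold. This sketch is an illustration under stated assumptions, not the rPCor implementation, which uses robust estimators and data-driven choices of the conditioning sets and threshold.

```python
import numpy as np

def partial_corr_screen(X, Y, Z, threshold=0.3):
    """Keep (regulator, gene) pairs whose absolute partial correlation,
    given the conditioning variables Z, exceeds `threshold`.
    X: (n, p) regulators, Y: (n, q) genes, Z: (n, k) conditioning set.
    """
    n = X.shape[0]
    # residualize X and Y against Z by least squares
    Rx = X - Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    Ry = Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]
    # column-standardize the residuals
    Rx = (Rx - Rx.mean(0)) / Rx.std(0)
    Ry = (Ry - Ry.mean(0)) / Ry.std(0)
    pcor = Rx.T @ Ry / n                       # partial-correlation matrix
    return np.argwhere(np.abs(pcor) > threshold)   # surviving (regulator, gene) pairs
```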
  5. Iterative algorithms have become increasingly popular in computed tomography (CT) image reconstruction since they better handle the adverse image artifacts arising from low-radiation-dose image acquisition. But iterative methods remain computationally expensive. The main cost arises in the projection and backprojection operations, where accurate CT system modeling can greatly improve the quality of the reconstructed image. We present a framework that improves upon one particular aspect: the accurate projection of the image basis functions. It differs from current methods in that it replaces the high computational complexity associated with accurate voxel projection by a small number of memory operations. Coefficients are computed in advance and stored in look-up tables parameterized by the CT system's projection geometry. The look-up tables require only a few kilobytes of storage and can be efficiently accelerated on the GPU. We demonstrate our framework with both numerical and clinical experiments and compare its performance with the current state-of-the-art scheme, the separable footprint method.
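
The look-up-table idea can be sketched as follows: footprint weights are precomputed on a grid of quantized voxel-to-detector offsets, and projection then reduces to table reads. The Gaussian profile, table size, and geometry parameterization below are placeholders, not the coefficients or geometry handling of the paper.

```python
import numpy as np

def build_footprint_lut(profile, n_bins=256, support=2.0):
    """Tabulate a voxel footprint over quantized fractional offsets between
    a projected voxel center and a detector cell. A 256-entry float table
    is on the order of a kilobyte, matching the storage scale the abstract
    describes."""
    offsets = np.linspace(-support, support, n_bins)
    return offsets, profile(offsets)

def project_point(lut, det_pos, proj_coord, value, sino_row):
    """Splat one voxel's contribution onto a detector row using table
    lookups instead of evaluating the footprint integral on the fly."""
    offsets, weights = lut
    for j, d in enumerate(det_pos):
        t = proj_coord - d
        if abs(t) <= offsets[-1]:
            idx = min(np.searchsorted(offsets, t), len(weights) - 1)
            sino_row[j] += value * weights[idx]  # memory read, no arithmetic kernel

# usage: project a unit-intensity voxel landing at detector coordinate 12.3
lut = build_footprint_lut(lambda t: np.exp(-t * t))
row = np.zeros(64)
project_point(lut, np.arange(64) * 0.5, 12.3, 1.0, row)
```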