

This content will become publicly available on May 6, 2026

Title: Enlarged GKB and RSVD for KP approximations illustrated for mixed precision $\ell_1$ image restoration
The singular value decomposition (SVD) of a reordering of a matrix A can be used to determine an efficient Kronecker product (KP) sum approximation to A. We present the use of an approximate truncated SVD (TSVD) to find the KP approximation, and contrast a randomized singular value decomposition algorithm (RSVD), a new enlarged Golub-Kahan bidiagonalization algorithm (EGKB), and the exact TSVD. The EGKB algorithm enlarges the Krylov subspace beyond the given rank of the desired approximation; a suitable rank is determined using an automatic stopping test. We also contrast the use of single and double precision arithmetic to find the approximate TSVDs. To illustrate the accuracy and the efficiency, in terms of memory and computational cost, of these approximate KPs, we consider the solution of the total variation regularized image deblurring problem using the split Bregman algorithm implemented in double precision. Together with an efficient implementation for the reordering of A, we demonstrate that the approximate KP sum can be obtained using a TSVD, and that the new EGKB algorithm compares favorably with the RSVD. These results verify that it is feasible to use single precision when estimating a KP sum from an approximate TSVD.
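For readers who want the construction spelled out, the NumPy sketch below forms the standard Van Loan-Pitsianis rearrangement R(A) and reads a KP sum off its truncated SVD. The dimensions, the helper name kp_sum_approximation, and the use of an exact SVD are illustrative assumptions; in the paper's setting an EGKB or RSVD approximation (possibly computed in single precision) would replace the exact factorization.

import numpy as np

def kp_sum_approximation(A, m1, n1, m2, n2, rank):
    # Approximate A (m1*m2 x n1*n2) by a sum of `rank` Kronecker products
    # via the truncated SVD of the rearrangement R(A).
    # Row j*m1 + i of R(A) is the column-major vectorization of block (i, j) of A.
    R = np.empty((m1 * n1, m2 * n2))
    for j in range(n1):
        for i in range(m1):
            block = A[i * m2:(i + 1) * m2, j * n2:(j + 1) * n2]
            R[j * m1 + i, :] = block.flatten(order="F")
    # Exact truncated SVD of R(A); an approximate TSVD from EGKB or RSVD
    # would be substituted here.
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    terms = []
    for k in range(rank):
        B = (np.sqrt(s[k]) * U[:, k]).reshape(m1, n1, order="F")
        C = (np.sqrt(s[k]) * Vt[k, :]).reshape(m2, n2, order="F")
        terms.append((B, C))
    return terms

# Example: a single Kronecker product is recovered exactly at rank 1.
B0, C0 = np.random.rand(4, 3), np.random.rand(5, 6)
A = np.kron(B0, C0)
A_hat = sum(np.kron(B, C) for B, C in kp_sum_approximation(A, 4, 3, 5, 6, rank=1))
print(np.linalg.norm(A - A_hat))  # close to machine precision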
Award ID(s):
2152704
PAR ID:
10603937
Author(s) / Creator(s):
Publisher / Repository:
Numerical algorithms
Date Published:
Journal Name:
Numerical Algorithms
ISSN:
1017-1398
Subject(s) / Keyword(s):
ℓ1 regularization · Split Bregman · Isotropic · Kronecker product · Single and double precision · Golub-Kahan bidiagonalization · Randomized algorithm
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Tucker decomposition is a low-rank tensor approximation that generalizes a truncated matrix singular value decomposition (SVD). Existing parallel software has shown that Tucker decomposition is particularly effective at compressing terabyte-sized multidimensional scientific simulation datasets, computing reduced representations that satisfy a specified approximation error. The general approach is to get a low-rank approximation of the input data by performing a sequence of matrix SVDs of tensor unfoldings, which tend to be short-fat matrices. In the existing approach, the SVD is performed by computing the eigendecomposition of the Gram matrix of the unfolding. This method sacrifices some numerical stability in exchange for lower computation costs and easier parallelization. We propose using a more numerically stable though more computationally expensive way to compute the SVD by preprocessing with a QR decomposition step and computing an SVD of only the small triangular factor. The more numerically stable approach allows us to achieve the same accuracy with half the working precision (for example, single rather than double precision). We demonstrate that our method scales as well as the existing approach, and the use of lower precision leads to an overall reduction in running time of up to a factor of 2 when using 10s to 1000s of processors. Using the same working precision, we are also able to compute Tucker decompositions with much smaller approximation error.
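For illustration, a minimal NumPy sketch contrasting the two routes to the leading left singular vectors of a short-fat unfolding: the Gram-matrix eigendecomposition used by the existing approach and the QR-preprocessed SVD proposed above. The function names and sizes are illustrative; the parallel, mixed-precision implementation described in the abstract is not reproduced.

import numpy as np

def left_svd_gram(Y, rank):
    # Existing route: eigendecomposition of the small Gram matrix Y @ Y.T
    # (cheaper and easy to parallelize, but squares the condition number).
    w, U = np.linalg.eigh(Y @ Y.T)
    return U[:, np.argsort(w)[::-1][:rank]]

def left_svd_qr(Y, rank):
    # Proposed route: QR of Y.T, then an SVD of only the small triangular factor.
    # Y.T = Q R implies Y = R.T @ Q.T, so the left singular vectors of Y are
    # the left singular vectors of R.T.
    Q, R = np.linalg.qr(Y.T, mode="reduced")
    U, s, Vt = np.linalg.svd(R.T)
    return U[:, :rank]

# A short-fat mode unfolding of a large tensor, here in single precision.
Y = np.random.rand(50, 100_000).astype(np.float32)
U_gram = left_svd_gram(Y, rank=10)
U_qr = left_svd_qr(Y, rank=10)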
  2. Seismic data are often incomplete due to equipment malfunction, limited source and receiver placement at near and far offsets, and missing crossline data. Seismic data contain redundancies because they are repeatedly recorded over the same or adjacent subsurface regions, causing the data to have a low-rank structure. To recover missing data, one can organize the data into a multidimensional array or tensor and apply a tensor completion method. We can increase the effectiveness and efficiency of low-rank data reconstruction based on tensor singular value decomposition (tSVD) by analyzing the effect of tensor orientation and exploiting the conjugate symmetry of the multidimensional Fourier transform. In fact, these results can be generalized to any order tensor. Relating the singular values of the tSVD to those of a matrix leads to a simplified analysis, revealing that the most square orientation gives the best data structure for low-rank reconstruction. After the first step of the tSVD, a multidimensional Fourier transform, frontal slices of the tensor form conjugate pairs. For each pair, a singular value decomposition can be replaced with a much cheaper conjugate calculation, allowing for faster computation of the tSVD. Using conjugate symmetry in our improved tSVD algorithm reduces the runtime of the inner loop by 35%–50%. We consider synthetic and real seismic data sets from the Viking Graben Region and the Northwest Shelf of Australia arranged as high-dimensional tensors. We compare the tSVD-based reconstruction with traditional methods, projection onto convex sets and multichannel singular spectrum analysis, and we see that the tSVD-based method gives similar or better accuracy and is more efficient, converging with runtimes that are an order of magnitude faster than the traditional methods. In addition, we verify that the most square orientation improves recovery for these examples by 10%–20% compared with the other orientations.
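A minimal NumPy sketch of a truncated tSVD that exploits this conjugate symmetry, assuming a real third-order tensor; the rank, sizes, and function name are illustrative, and the seismic reconstruction pipeline built on top of it is omitted.

import numpy as np

def tsvd_truncate(T, rank):
    # Rank-`rank` tSVD approximation of a real third-order tensor, computing
    # SVDs only for the independent frontal slices after the FFT and filling
    # the remaining slices from their conjugate partners.
    n1, n2, n3 = T.shape
    That = np.fft.fft(T, axis=2)
    Ahat = np.zeros_like(That)
    half = n3 // 2 + 1                      # independent slices for real input
    for k in range(half):
        U, s, Vt = np.linalg.svd(That[:, :, k], full_matrices=False)
        Ahat[:, :, k] = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    for k in range(half, n3):               # conjugate pair: no SVD needed
        Ahat[:, :, k] = np.conj(Ahat[:, :, n3 - k])
    return np.real(np.fft.ifft(Ahat, axis=2))

# The "most square" orientation of the data tends to give the best reconstruction.
T = np.random.rand(30, 30, 16)
T_lowrank = tsvd_truncate(T, rank=5)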
  3. DeepTensor is a computationally efficient framework for low-rank decomposition of matrices and tensors using deep generative networks. We decompose a tensor as the product of low-rank tensor factors, where each factor is generated by a deep network (DN) trained in a self-supervised manner to minimize the mean-square approximation error. Our key observation is that the implicit regularization inherent in DNs enables them to capture nonlinear signal structures that are out of reach of classical linear methods such as the singular value decomposition (SVD) and principal components analysis (PCA). We demonstrate that DeepTensor's performance is robust to a wide range of distributions and that it is a computationally efficient drop-in replacement for the SVD, PCA, nonnegative matrix factorization (NMF), and similar decompositions, by exploring a range of real-world applications, including hyperspectral image denoising, 3D MRI tomography, and image classification.
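To make the idea concrete, a minimal PyTorch sketch of the matrix case: two small networks generate the low-rank factors and are trained self-supervised to minimize the mean-square approximation error. The MLP generators, latent codes, and hyperparameters are illustrative assumptions and do not reproduce the DeepTensor architecture.

import torch
import torch.nn as nn

# Matrix case: X (m x n) is approximated by U @ V.T, where the factors U and V
# are produced by two small networks trained on the reconstruction error.
m, n, r = 128, 96, 8
X = torch.randn(m, r) @ torch.randn(r, n)   # synthetic low-rank target

def factor_net(rows, rank, hidden=64):
    # Maps a fixed latent code of length `rank` to a (rows x rank) factor.
    return nn.Sequential(nn.Linear(rank, hidden), nn.ReLU(),
                         nn.Linear(hidden, rows * rank),
                         nn.Unflatten(-1, (rows, rank)))

z_u, z_v = torch.randn(r), torch.randn(r)   # fixed latent codes
net_u, net_v = factor_net(m, r), factor_net(n, r)
opt = torch.optim.Adam(list(net_u.parameters()) + list(net_v.parameters()), lr=1e-3)

for step in range(2000):
    U, V = net_u(z_u), net_v(z_v)
    loss = ((X - U @ V.T) ** 2).mean()      # mean-square approximation error
    opt.zero_grad(); loss.backward(); opt.step()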
  4. This paper proposes an optimization-based method to learn the singular value decomposition (SVD) of a compact operator with ordered singular functions. The proposed objective function is based on Schmidt's low-rank approximation theorem (1907), which characterizes a truncated SVD as a solution minimizing the mean squared error, accompanied by a technique called nesting to learn the ordered structure. When the optimization space is parameterized by neural networks, we refer to the proposed method as NeuralSVD. Unlike existing approaches, the implementation does not require sophisticated optimization tricks.
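For reference, the Schmidt (1907) characterization invoked above can be stated as follows (the notation is ours, not the paper's): for a compact operator with kernel k(x, y), singular values σ_ℓ and singular functions φ_ℓ, ψ_ℓ, the objective

$$ \min_{\{f_\ell\},\{g_\ell\}} \iint \Big( k(x,y) - \sum_{\ell=1}^{L} f_\ell(x)\, g_\ell(y) \Big)^{2} \, dx \, dy $$

is minimized by a rank-L truncated SVD, e.g. f_ℓ = √σ_ℓ φ_ℓ and g_ℓ = √σ_ℓ ψ_ℓ, up to the usual scaling and rotation ambiguities. NeuralSVD parameterizes the f_ℓ and g_ℓ with neural networks and uses nesting to recover them in order.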
  5. Modern deep neural networks (DNNs) often require high memory consumption and large computational loads. In order to deploy DNN algorithms efficiently on edge or mobile devices, a series of DNN compression algorithms have been explored, including factorization methods. Factorization methods approximate the weight matrix of a DNN layer with the product of two or more low-rank matrices. However, it is hard to measure the ranks of DNN layers during the training process. Previous works mainly induce low rank through implicit approximations or via a costly singular value decomposition (SVD) process on every training step. The former approach usually incurs a high accuracy loss, while the latter has low efficiency. In this work, we propose SVD training, the first method to explicitly achieve low-rank DNNs during training without applying SVD on every step. SVD training first decomposes each layer into the form of its full-rank SVD, then performs training directly on the decomposed weights. We add orthogonality regularization to the singular vectors, which ensures the valid form of the SVD and avoids gradient vanishing/exploding. Low rank is encouraged by applying sparsity-inducing regularizers on the singular values of each layer. Singular value pruning is applied at the end to explicitly reach a low-rank model. We empirically show that SVD training can significantly reduce the rank of DNN layers and achieve a greater reduction in computational load under the same accuracy, compared not only to previous factorization methods but also to state-of-the-art filter pruning methods.
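A minimal PyTorch sketch of the idea for a single linear layer: the weight is stored and trained directly in the form U diag(s) Vᵀ, with an orthogonality penalty on the singular vectors and an L1 penalty on the singular values. The class name, penalty weights, and initialization are illustrative assumptions, and the final singular value pruning step is omitted.

import torch
import torch.nn as nn

class SVDLinear(nn.Module):
    # A linear layer stored and trained in its decomposed form W = U diag(s) V^T.
    def __init__(self, in_features, out_features):
        super().__init__()
        W0 = torch.randn(out_features, in_features) / in_features ** 0.5
        U, s, Vh = torch.linalg.svd(W0, full_matrices=False)   # full-rank SVD at initialization
        self.U, self.s, self.V = nn.Parameter(U), nn.Parameter(s), nn.Parameter(Vh.T)

    def forward(self, x):
        # y = x @ W.T with W = U diag(s) V.T
        return ((x @ self.V) * self.s) @ self.U.T

    def penalties(self, ortho=1e-3, sparse=1e-4):
        # Orthogonality regularization keeps (U, s, V) a valid SVD;
        # an L1 term on the singular values encourages low rank.
        I = torch.eye(self.s.numel())
        ortho_pen = ((self.U.T @ self.U - I) ** 2).sum() + ((self.V.T @ self.V - I) ** 2).sum()
        return ortho * ortho_pen + sparse * self.s.abs().sum()

layer = SVDLinear(256, 128)
x = torch.randn(32, 256)
loss = layer(x).pow(2).mean() + layer.penalties()   # a task loss would replace the first term
loss.backward()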