Faster Linear Algebra for Distance Matrices
The distance matrix of a dataset X of n points with respect to a distance function f represents all pairwise distances between the points of X induced by f. Due to their wide applicability, distance matrices and related families of matrices have been the focus of many recent algorithmic works. We continue this line of research and take a broad view of algorithm design for distance matrices, with the goal of designing fast algorithms for fundamental linear algebraic primitives that are specifically tailored to distance matrices. Our results include efficient algorithms for computing matrix-vector products for a wide class of distance matrices, such as the ℓ1 metric, for which we obtain a linear runtime, as well as a quadratic lower bound for any algorithm that computes a matrix-vector product in the ℓ∞ case. Our upper bounds have many further downstream applications, including the fastest algorithms for computing a relative-error low-rank approximation of the distance matrices induced by the ℓ1 and ℓ2 functions and for computing an additive-error low-rank approximation for the ℓ2 metric, as well as applications to fast matrix multiplication, among others. We also give algorithms for constructing distance matrices and show that one can construct an approximate ℓ2 distance matrix in time faster than the bound implied by the Johnson-Lindenstrauss lemma.
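To make the flavor of the matrix-vector product results concrete, here is a small illustrative sketch (not the paper's algorithm, and only near-linear rather than linear time): for the ℓ1 metric, y = Az can be computed without forming the n × n distance matrix A by sorting each coordinate and using prefix sums. The helper name and test sizes below are illustrative assumptions.

```python
import numpy as np

def l1_distance_matvec(X, z):
    """Compute y = A z for the l1 distance matrix A[i, j] = ||X[i] - X[j]||_1
    without forming A. Handling each coordinate with a sort plus prefix sums
    costs O(d * n log n) in total, versus O(n^2 * d) to build A explicitly.
    (An illustrative sketch of the structure such algorithms exploit, not the
    paper's exact linear-time procedure.)"""
    n, d = X.shape
    y = np.zeros(n)
    for c in range(d):
        x = X[:, c]
        order = np.argsort(x)
        xs, zs = x[order], z[order]
        cz = np.cumsum(zs)            # prefix sums of the weights z
        cxz = np.cumsum(xs * zs)      # prefix sums of x * z
        left = xs * cz - cxz                           # sum_{j <= i} (xs_i - xs_j) zs_j
        right = (cxz[-1] - cxz) - xs * (cz[-1] - cz)   # sum_{j > i} (xs_j - xs_i) zs_j
        y[order] += left + right
    return y

# Sanity check against the explicitly formed distance matrix.
rng = np.random.default_rng(0)
X, z = rng.standard_normal((200, 5)), rng.standard_normal(200)
A = np.abs(X[:, None, :] - X[None, :, :]).sum(axis=2)
assert np.allclose(A @ z, l1_distance_matvec(X, z))
```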
- Award ID(s): 2022448
- NSF-PAR ID: 10430252
- Date Published:
- Journal Name: Conference on Neural Information Processing Systems
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
In this paper, we propose an efficient numerical scheme for solving some large-scale ill-posed linear inverse problems arising from image restoration. In order to accelerate the computation, two different hidden structures are exploited. First, the coefficient matrix is approximated as the sum of a small number of Kronecker products. This procedure not only introduces one more level of parallelism into the computation but also enables the use of computationally intensive matrix-matrix multiplications in the subsequent optimization procedure. We then derive the corresponding Tikhonov-regularized minimization model and extend the fast iterative shrinkage-thresholding algorithm (FISTA) to solve the resulting optimization problem. Because the matrices appearing in the Kronecker product approximation are all structured (Toeplitz, Hankel, etc.), we can further exploit their fast matrix-vector multiplication algorithms at each iteration. The proposed algorithm is thus called structured FISTA (sFISTA). In particular, we show that the approximation error introduced by sFISTA is well under control and that sFISTA can reach the same image restoration accuracy as FISTA. Finally, both a theoretical complexity analysis and numerical results are provided to demonstrate the efficiency of sFISTA.
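As a toy illustration of why the Kronecker structure pays off (a minimal numpy sketch with random matrices and an assumed three-term approximation, not the authors' sFISTA code), the identity (B ⊗ C) vec(X) = vec(C X B^T) replaces one huge matrix-vector product with a few small matrix-matrix products; in the image restoration setting the factors would additionally be Toeplitz or Hankel structured.

```python
import numpy as np

# If the coefficient matrix is approximated as A ≈ sum_k B_k ⊗ C_k, then
# A @ vec(X) can be evaluated as sum_k vec(C_k @ X @ B_k.T) without ever
# forming the (m^2 x m^2) matrix A.  Sizes and factors here are illustrative.
rng = np.random.default_rng(0)
m = 32                                  # image is m x m, so A is m^2 x m^2
X = rng.standard_normal((m, m))
terms = [(rng.standard_normal((m, m)),  # B_k
          rng.standard_normal((m, m)))  # C_k
         for _ in range(3)]

# Direct evaluation with the full matrix (what the structure lets us avoid).
A = sum(np.kron(B, C) for B, C in terms)
y_direct = A @ X.flatten(order="F")     # vec(X): column-major stacking

# Structured evaluation via small matrix-matrix products.
Y = sum(C @ X @ B.T for B, C in terms)
assert np.allclose(y_direct, Y.flatten(order="F"))
```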
-
This survey describes probabilistic algorithms for linear algebraic computations, such as factorizing matrices and solving linear systems. It focuses on techniques that have a proven track record for real-world problems. The paper treats both the theoretical foundations of the subject and practical computational issues. Topics include norm estimation, matrix approximation by sampling, structured and unstructured random embeddings, linear regression problems, low-rank approximation, subspace iteration and Krylov methods, error estimation and adaptivity, interpolatory and CUR factorizations, Nyström approximation of positive semidefinite matrices, single-view (‘streaming’) algorithms, full rank-revealing factorizations, solvers for linear systems, and approximation of kernel matrices that arise in machine learning and in scientific computing.
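For a flavor of the methods such a survey covers, below is a bare-bones sketch of the standard randomized range-finder approach to low-rank approximation (illustrative helper name and parameter choices, not code from the survey).

```python
import numpy as np

def randomized_low_rank(A, k, oversample=10, seed=0):
    """Rank-k approximation via a Gaussian random embedding followed by a
    small SVD: sample the range of A, orthonormalize, then factor the
    compressed matrix.  Parameters are illustrative defaults."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)        # orthonormal basis for the sampled range
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ U_small[:, :k], s[:k], Vt[:k, :]

# Example: recover a matrix that is exactly rank 20.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 300))
U, s, Vt = randomized_low_rank(A, k=20)
assert np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A) < 1e-8
```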
-
We investigate sublinear classical and quantum algorithms for matrix games, a fundamental problem in optimization and machine learning, with provable guarantees. Given a matrix, sublinear algorithms for the matrix game were previously known only for two special cases: (1) the maximizing vectors live in the ℓ1-norm unit ball, and (2) the minimizing vectors live in either the ℓ1- or the ℓ2-norm unit ball. We give a sublinear classical algorithm that can interpolate smoothly between these two cases: for any fixed q between 1 and 2, we solve, within some additive error, matrix games where the minimizing vectors are in an ℓq-norm unit ball. We also provide a corresponding sublinear quantum algorithm that solves the same task with a quadratic improvement in the dimensions of the maximizing and minimizing vectors. Both our classical and quantum algorithms are optimal in the dimension parameters up to poly-logarithmic factors. Finally, we propose sublinear classical and quantum algorithms for the approximate Carathéodory problem and the ℓq-margin support vector machines as applications.
-
We study the problem of supervised learning of a metric space under discriminative constraints. Given a universe X and sets S, D ⊆ (X choose 2) of similar and dissimilar pairs, we seek a mapping f : X → Y into some target metric space M = (Y, ρ) such that similar objects are mapped to points at distance at most u, and dissimilar objects are mapped to points at distance at least l. More generally, the goal is to find a mapping of maximum accuracy, that is, the fraction of correctly classified pairs. We propose approximation algorithms for various versions of this problem, for the cases of Euclidean and tree metric spaces. For both of these target spaces, we obtain fully polynomial-time approximation schemes (FPTAS) for the case of perfect information. In the presence of imperfect information we present approximation algorithms that run in quasi-polynomial time (QPTAS). We also present an exact polynomial-time algorithm for learning line metric spaces with perfect information. Our algorithms use a combination of tools from metric embeddings and graph partitioning that could be of independent interest.
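For concreteness, the accuracy objective used above is easy to state in code; the sketch below (a hypothetical helper, with a Euclidean target space assumed) simply counts how many similar/dissimilar constraints a candidate embedding satisfies.

```python
import numpy as np

def accuracy(embedding, S, D, u, l):
    """Fraction of correctly classified pairs: a similar pair (i, j) in S is
    correct if its images are within distance u, and a dissimilar pair in D
    is correct if its images are at distance at least l."""
    ok = 0
    for i, j in S:
        ok += np.linalg.norm(embedding[i] - embedding[j]) <= u
    for i, j in D:
        ok += np.linalg.norm(embedding[i] - embedding[j]) >= l
    return ok / (len(S) + len(D))

# Tiny example: four points embedded on a line, one pair of each kind.
emb = {0: np.array([0.0]), 1: np.array([0.5]), 2: np.array([3.0]), 3: np.array([9.0])}
print(accuracy(emb, S=[(0, 1)], D=[(2, 3)], u=1.0, l=2.0))  # -> 1.0
```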
-
Fast linear transforms are ubiquitous in machine learning, including the discrete Fourier transform, discrete cosine transform, and other structured transformations such as convolutions. All of these transforms can be represented by dense matrix-vector multiplication, yet each has a specialized and highly efficient (subquadratic) algorithm. We ask to what extent hand-crafting these algorithms and implementations is necessary, what structural priors they encode, and how much knowledge is required to automatically learn a fast algorithm for a provided structured transform. Motivated by a characterization of fast matrix-vector multiplication as products of sparse matrices, we introduce a parameterization of divide-and-conquer methods that is capable of representing a large class of transforms. This generic formulation can automatically learn an efficient algorithm for many important transforms; for example, it recovers the O(N log N) Cooley-Tukey FFT algorithm to machine precision, for dimensions N up to 1024. Furthermore, our method can be incorporated as a lightweight replacement for generic matrices in machine learning pipelines to learn efficient and compressible transformations. On a standard task of compressing a single hidden-layer network, our method exceeds the classification accuracy of unconstrained matrices on CIFAR-10 by 3.9 points (the first time a structured approach has done so) with 4X faster inference speed and 40X fewer parameters.
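For reference, the divide-and-conquer target that the learned parameterization recovers in the FFT case can be written in a few lines; the radix-2 Cooley-Tukey sketch below is the classical hand-crafted algorithm (plain numpy, not the paper's learned factorization).

```python
import numpy as np

def fft_recursive(x):
    """Radix-2 Cooley-Tukey FFT (length of x must be a power of two):
    split into even/odd halves, recurse, then combine with twiddle factors,
    giving O(N log N) work overall."""
    n = len(x)
    if n == 1:
        return x.astype(complex)
    even = fft_recursive(x[0::2])
    odd = fft_recursive(x[1::2])
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

# Matches the library FFT to machine precision for N = 1024.
x = np.random.default_rng(0).standard_normal(1024)
assert np.allclose(fft_recursive(x), np.fft.fft(x))
```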