Abstract Can one recover a matrix efficiently from only matrix-vector products? If so, how many are needed? This article describes algorithms to recover matrices with known structures, such as tridiagonal, Toeplitz, Toeplitz-like, and hierarchical low-rank, from matrix-vector products. In particular, we derive a randomized algorithm for recovering an unknown $n \times n$ hierarchical low-rank matrix from only $\mathcal{O}((k+p)\log(n))$ matrix-vector products with high probability, where $k$ is the rank of the off-diagonal blocks and $p$ is a small oversampling parameter. We do this by carefully constructing randomized input vectors for our matrix-vector products that exploit the hierarchical structure of the matrix. While existing algorithms for hierarchical matrix recovery use a recursive “peeling” procedure based on elimination, our approach uses a recursive projection procedure.
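For intuition, here is a minimal NumPy sketch of the non-hierarchical building block: recovering an exactly rank-$k$ matrix from $k+p$ products with $A$ and $k+p$ products with $A^T$ via a randomized range finder. The hierarchical algorithm applies this kind of projection recursively to the off-diagonal blocks; the function names and demo sizes below are illustrative, not the article's code.

```python
import numpy as np

def recover_low_rank(matvec, rmatvec, n, k, p=5, seed=None):
    """Recover an (exactly) rank-k n x n matrix from k+p products
    with A and k+p products with A^T, via a randomized range finder."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n, k + p))
    Y = matvec(Omega)        # Y = A @ Omega: k+p matvecs with A
    Q, _ = np.linalg.qr(Y)   # orthonormal basis for range(A)
    Z = rmatvec(Q)           # Z = A.T @ Q: k+p matvecs with A^T
    return Q @ Z.T           # A = Q Q^T A = Q (A^T Q)^T when rank(A) <= k

# Demo on a synthetic rank-k matrix.
rng = np.random.default_rng(0)
n, k = 200, 8
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))
A_hat = recover_low_rank(lambda V: A @ V, lambda V: A.T @ V, n, k)
print(np.linalg.norm(A - A_hat) / np.linalg.norm(A))  # ~ machine precision
```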
A unified recovery of structured signals using atomic norm
Abstract In many applications, we seek to recover signals from far fewer linear measurements than the ambient dimension, given that the signals have exploitable structure, such as sparsity or low rank. In this paper, we work in a general setting where signals are approximately sparse in a so-called atomic set. We provide general recovery results stating that a convex program can stably and robustly recover signals if the null space of the sensing map satisfies certain properties. Moreover, we argue that such a null space property can be satisfied with high probability if each measurement is sub-Gaussian, even when the number of measurements is very small. New results for recovering signals sparse in a frame and for recovering low-rank matrices are also derived as consequences.
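As a concrete special case, when the atomic set consists of the signed canonical basis vectors, the atomic norm reduces to the $\ell_1$ norm and the convex program is basis pursuit. Below is a small sketch using CVXPY with Gaussian (hence sub-Gaussian) measurements; the problem sizes are illustrative only, not taken from the paper.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, s = 100, 40, 5                           # ambient dim, measurements, sparsity

x_true = np.zeros(n)                           # s-sparse ground truth
x_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # sub-Gaussian sensing map
y = A @ x_true

# Atomic-norm minimization; for the atoms {+-e_i} this is the l1 norm
# (basis pursuit). For low-rank matrix atoms it would be the nuclear norm.
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y]).solve()
print(np.linalg.norm(x.value - x_true))        # near zero in the recovery regime
```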
- Award ID(s): 2307827
- PAR ID: 10483391
- Publisher / Repository: Oxford University Press
- Date Published:
- Journal Name: Information and Inference: A Journal of the IMA
- Volume: 13
- Issue: 1
- ISSN: 2049-8772
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
We consider the rank of a class of sparse Boolean matrices of size $$n \times n$$. In particular, we show that the probability that such a matrix has full rank, and is thus invertible, is a positive constant with value about $0.2574$ for large $$n$$. The matrices arise as the vertex-edge incidence matrices of 1-out 3-uniform hypergraphs. The result that the null space is bounded in expectation can be contrasted with results for the usual models of sparse Boolean matrices, based on the vertex-edge incidence matrix of random $$k$$-uniform hypergraphs. For this latter model, the expected co-rank is linear in the number of vertices $$n$$ [ACO, CFP]. For fields of higher order, the co-rank is typically Poisson distributed.
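The limiting constant can be sanity-checked by Monte Carlo. The sketch below assumes one concrete reading of the 1-out model (edge $j$ contains its generating vertex $j$ plus two further vertices chosen uniformly at random; the paper's exact model may differ) and computes rank over GF(2) with an XOR basis.

```python
import random

def gf2_rank(rows):
    """Rank over GF(2) of a matrix whose rows are int bitmasks."""
    pivot = {}                        # leading-bit position -> reduced row
    rank = 0
    for row in rows:
        while row:
            msb = row.bit_length() - 1
            if msb not in pivot:
                pivot[msb] = row
                rank += 1
                break
            row ^= pivot[msb]         # eliminate the leading bit
    return rank

def full_rank_trial(n, rng):
    # Assumed 1-out 3-uniform model: edge j = {j} plus two other random vertices.
    rows = [0] * n                    # rows[v] = incidence bitmask of vertex v
    for j in range(n):
        others = rng.sample([v for v in range(n) if v != j], 2)
        for v in (j, *others):
            rows[v] |= 1 << j
    return gf2_rank(rows) == n

rng = random.Random(1)
n, trials = 300, 1000
print(sum(full_rank_trial(n, rng) for _ in range(trials)) / trials)
# should hover near the limiting value ~ 0.2574 for large n
```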
Abstract Robust principal component analysis (RPCA) is a widely used method for recovering low-rank structure from data matrices corrupted by significant and sparse outliers. These corruptions may arise from occlusions, malicious tampering, or other causes of anomalies, and the joint identification of such corruptions with the low-rank background is critical for process monitoring and diagnosis. However, existing RPCA methods and their extensions largely do not account for the underlying probabilistic distribution of the data matrices, which in many applications is known and can be highly non-Gaussian. We thus propose a new method called RPCA for exponential family distributions ($e$RPCA), which can perform the desired decomposition into low-rank and sparse matrices when such a distribution falls within the exponential family. We present a novel alternating direction method of multipliers optimization algorithm for efficient decomposition, under either its natural or canonical parametrization. The effectiveness of $e$RPCA is then demonstrated in two applications: the first for steel sheet defect detection and the second for crime activity monitoring in the Atlanta metropolitan area.
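For orientation, the sketch below implements the classical (implicitly Gaussian) RPCA baseline, principal component pursuit solved by an augmented-Lagrangian/ADMM iteration with standard default parameters; it is not the exponential-family variant proposed in the paper.

```python
import numpy as np

def rpca_pcp(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Principal component pursuit: min ||L||_* + lam*||S||_1 s.t. L + S = M,
    solved by an ADMM-style augmented Lagrangian method (Gaussian baseline)."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else (m * n) / (4.0 * np.abs(M).sum())
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(max_iter):
        # Low-rank update: singular value thresholding at level 1/mu.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: entrywise soft thresholding at level lam/mu.
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Y += mu * (M - L - S)                    # dual ascent step
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S
```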
Abstract We study the low-rank phase retrieval problem, where our goal is to recover a $$d_1\times d_2$$ low-rank matrix from a series of phaseless linear measurements. This is a fourth-order inverse problem, as we are trying to recover factors of a matrix that have been observed, indirectly, through some quadratic measurements. We propose a solution to this problem using the recently introduced technique of anchored regression. This approach uses two different types of convex relaxations: we replace the quadratic equality constraints for the phaseless measurements by a search over a polytope, and we enforce the rank constraint through nuclear norm regularization. The result is a convex program in the space of $$d_1 \times d_2$$ matrices. We analyze two specific scenarios. In the first, the target matrix is rank-$$1$$, and the observations are structured to correspond to a phaseless blind deconvolution. In the second, the target matrix has general rank, and we observe the magnitudes of the inner products against a series of independent Gaussian random matrices. In each of these problems, we show that anchored regression returns an accurate estimate from a near-optimal number of measurements, given that we have access to an anchor matrix of sufficient quality. We also show how to create such an anchor in the phaseless blind deconvolution problem from an optimal number of measurements, and we present a partial result in this direction for the general-rank problem.
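In the general-rank Gaussian scenario, the convex program has a compact form: maximize correlation with an anchor minus a nuclear norm penalty, over the polytope defined by the measured magnitudes. A hedged CVXPY sketch follows; the anchor here is an oracle aligned with the truth purely for illustration (the paper constructs anchors from data), and the weight `lam` is an arbitrary illustrative choice.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
d1, d2, r, m = 8, 6, 2, 150

X_true = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))
B = rng.standard_normal((m, d1, d2))             # Gaussian measurement matrices
y = np.abs(np.einsum('kij,ij->k', B, X_true))    # phaseless magnitudes

A0 = X_true / np.linalg.norm(X_true)             # oracle anchor (illustration only)
lam = 1.0                                        # illustrative regularization weight

X = cp.Variable((d1, d2))
objective = cp.Maximize(cp.trace(A0.T @ X) - lam * cp.normNuc(X))
polytope = [cp.abs(cp.sum(cp.multiply(B[k], X))) <= y[k] for k in range(m)]
cp.Problem(objective, polytope).solve()
# Relative error depends on anchor quality, lam, and the number of measurements m.
print(np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))
```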
We study the joint low-rank factorization of the matrices X=[A B]G and Y=[A C]H, in which the columns of the shared factor matrix A correspond to vectorized rank-one matrices, the unshared factors B and C have full column rank, and the matrices G and H have full row rank. The objective is to find the shared factor A, given only X and Y. We first explain that if the matrix [A B C] has full column rank, then a basis for the column space of the shared factor matrix A can be obtained from the null space of the matrix [X Y]. This in turn implies that the problem of finding the shared factor matrix A boils down to a basic Canonical Polyadic Decomposition (CPD) problem that in many cases can be solved directly by means of an eigenvalue decomposition. Next, we explain that by taking the rank-one constraint on the columns of the shared factor matrix A into account when computing the null space of the matrix [X Y], more relaxed identifiability conditions can be obtained that do not require [A B C] to have full column rank. The benefit of the unconstrained null-space approach is that it leads to simple algorithms, while the benefit of the rank-one constrained null-space approach is that it leads to relaxed identifiability conditions. Finally, we briefly discuss a joint unbalanced orthogonal Procrustes and CPD fitting approach for computing the shared factor matrix A from noisy observation matrices X and Y.
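A small NumPy/SciPy illustration of the unconstrained null-space observation: for z = (z1, z2) in the null space of [X Y], full column rank of [A B C] forces the B- and C-contributions to vanish, so X @ z1 lies in the column space of A. The sizes and names below are ours, chosen generically, and the rank-one structure of A's columns is ignored here since this step does not use it.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
n, rA, rB, rC, mX, mY = 30, 2, 3, 3, 12, 12

A = rng.standard_normal((n, rA))                      # shared factor
Bf = rng.standard_normal((n, rB)); Cf = rng.standard_normal((n, rC))
G = rng.standard_normal((rA + rB, mX))                # full row rank (generic)
H = rng.standard_normal((rA + rC, mY))
X = np.hstack([A, Bf]) @ G
Y = np.hstack([A, Cf]) @ H

# For (z1, z2) in null([X Y]), X @ z1 lies in col(A): the B- and C-parts
# must vanish because [A B C] has full column rank.
N = null_space(np.hstack([X, Y]))
W = X @ N[:mX, :]                                     # columns all in col(A)
Q, s, _ = np.linalg.svd(W, full_matrices=False)
basis = Q[:, s > 1e-10 * s[0]]                        # orthonormal basis of col(A)

P = basis @ basis.T                                   # projector onto recovered space
print(np.linalg.norm(A - P @ A) / np.linalg.norm(A))  # ~ 0: col(A) is recovered
```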