

Title: Matrix denoising with partial noise statistics: optimal singular value shrinkage of spiked F-matrices
We study the problem of estimating a large, low-rank matrix corrupted by additive noise of unknown covariance, assuming one has access to additional side information in the form of noise-only measurements. We analyse the Whiten-Shrink-reColour (WSC) workflow, where a ‘noise covariance whitening’ transformation is applied to the observations, followed by appropriate singular value shrinkage and a ‘noise covariance re-colouring’ transformation. We show that under the mean square error loss, a unique, asymptotically optimal shrinkage nonlinearity exists for the WSC denoising workflow, and calculate it in closed form. To this end, we calculate the asymptotic eigenvector rotation of the random spiked F-matrix ensemble, a result which may be of independent interest. With sufficiently many pure-noise measurements, our optimally tuned WSC denoising workflow outperforms, in mean square error, matrix denoising algorithms based on optimal singular value shrinkage that do not make similar use of noise-only side information; numerical experiments show that our procedure’s relative performance is particularly strong in challenging statistical settings with high dimensionality and a large degree of heteroscedasticity.
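As a concrete illustration of the workflow described above, here is a minimal NumPy sketch of the three WSC stages. The noise-covariance estimate, the hard-threshold stand-in used in place of the paper's optimal F-matrix shrinker, and all names and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def wsc_denoise(Y, noise_only):
    """Whiten-Shrink-reColour (WSC) sketch.

    Y          : (p, n) observation, low-rank signal + coloured noise
    noise_only : (p, m) pure-noise side measurements (needs m >= p so the
                 sample covariance below is invertible)
    """
    p, n = Y.shape
    # 1. Estimate the noise covariance from the noise-only measurements.
    Sigma_hat = noise_only @ noise_only.T / noise_only.shape[1]
    w, V = np.linalg.eigh(Sigma_hat)
    W = V @ np.diag(w ** -0.5) @ V.T   # whitening transform Sigma_hat^{-1/2}
    C = V @ np.diag(w ** 0.5) @ V.T    # re-colouring transform Sigma_hat^{+1/2}
    # 2. Shrink singular values of the whitened observation. As a stand-in
    # for the paper's optimal shrinker, hard-threshold at the approximate
    # bulk edge sqrt(n) + sqrt(p) of white unit-variance noise (with finite
    # m the whitened noise is an F-matrix, which is the paper's point).
    U, s, Vt = np.linalg.svd(W @ Y, full_matrices=False)
    s = np.where(s > np.sqrt(n) + np.sqrt(p), s, 0.0)
    # 3. Re-colour back to the original coordinates.
    return C @ (U @ np.diag(s) @ Vt)
```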
Award ID(s):
2238821
PAR ID:
10511478
Author(s) / Creator(s):
Publisher / Repository:
Oxford University Press
Date Published:
Journal Name:
Information and Inference: A Journal of the IMA
Volume:
12
Issue:
3
ISSN:
2049-8772
Page Range / eLocation ID:
2020 to 2065
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper we frame a fairly comprehensive set of spacetime detection problems, where a subspace signal modulates the mean-value vector of a multivariate normal measurement and nonstationary additive noise determines the covariance matrix. The measured spacetime data matrix consists of multiple measurements in time. As time advances, the signal component moves around in a subspace, and the noise covariance matrix changes in scale. 
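A toy simulation of the measurement model sketched in this abstract, assuming a known orthonormal subspace basis, an AR(1)-style base covariance, and a sinusoidal noise-scale profile; all of these specifics are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
p, T, r = 8, 50, 2                       # space dim, time samples, subspace rank

H = np.linalg.qr(rng.standard_normal((p, r)))[0]   # signal subspace basis

# Base noise covariance (AR(1)-style, an arbitrary illustrative choice).
R = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
L = np.linalg.cholesky(R)

X = np.empty((p, T))                     # spacetime data matrix
theta = rng.standard_normal(r)
for t in range(T):
    theta = theta + 0.1 * rng.standard_normal(r)   # signal drifts within the subspace
    scale = 1.0 + 0.5 * np.sin(2 * np.pi * t / T)  # noise covariance changes in scale
    X[:, t] = H @ theta + scale * (L @ rng.standard_normal(p))
```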
  2. Abstract

    Linear transformations are widely used in data assimilation for covariance modeling, for reducing dimensionality (such as averaging dense observations to form “superobs”), and for managing sampling error in ensemble data assimilation. Here we describe a linear transformation that is optimal in the sense that, in the transformed space, the state variables and observations have uncorrelated errors and the gain matrix in the update step is diagonal. We conjecture, and provide numerical evidence, that the transformation is the best possible to precede covariance localization in an ensemble Kalman filter. A central feature of this transformation is a set of scalars, which we term canonical observation operators (COOs), that relate pairs of transformed observations and state variables and rank-order those pairs by their influence in the update. We show for an idealized problem that sample-based estimates of the COOs, in conjunction with covariance localization for the sample covariance, can approximate the true values well, but a practical implementation of the transformation for high-dimensional applications remains a subject for future research. The COOs also completely describe important properties of the update step, such as observation-state mutual information, signal-to-noise ratio, and degrees of freedom for signal, and so give new insights, including relations among reduced-rank approximations to variational schemes, particle-filter weight degeneracy, and the local ensemble transform Kalman filter.

     
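One plausible reading of this construction, for a linear-Gaussian update with prior covariance B, observation operator H, and observation-error covariance R, is the canonical-correlation transform sketched below; the singular values s play the role of the scalar couplings described above, and the per-pair gain s/(1+s^2) is diagonal. This is an assumption consistent with the abstract, not necessarily the paper's exact definition.

```python
import numpy as np

def canonical_transform(B, H, R):
    """Transform to coordinates with uncorrelated unit errors and a
    diagonal gain (sketch). Returns the scalar couplings s (one per
    observation-state pair), the diagonal gain, and the two maps."""
    def inv_sqrt(A):
        w, V = np.linalg.eigh(A)
        return V @ np.diag(w ** -0.5) @ V.T

    Bs = np.linalg.cholesky(B)                 # B^{1/2} (Cholesky factor)
    Ri = inv_sqrt(R)                           # R^{-1/2}
    U, s, Vt = np.linalg.svd(Ri @ H @ Bs, full_matrices=False)
    T_obs = U.T @ Ri                           # observations -> canonical coords
    T_state = Vt @ np.linalg.inv(Bs)           # state -> canonical coords
    # In canonical coordinates the prior and observation errors both have
    # identity covariance and the observation operator is diag(s), so the
    # scalar Kalman gain for each pair is s / (1 + s^2).
    gain = s / (1.0 + s ** 2)
    return s, gain, T_obs, T_state
```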
  3. DeepTensor is a computationally efficient framework for low-rank decomposition of matrices and tensors using deep generative networks. We decompose a tensor as the product of low-rank tensor factors, where each factor is generated by a deep network (DN) trained in a self-supervised manner to minimize the mean-square approximation error. Our key observation is that the implicit regularization inherent in DNs enables them to capture nonlinear signal structures that are out of the reach of classical linear methods such as the singular value decomposition (SVD) and principal components analysis (PCA). By exploring a range of real-world applications, including hyperspectral image denoising, 3D MRI tomography, and image classification, we demonstrate that DeepTensor is robust to a wide range of distributions and serves as a computationally efficient drop-in replacement for the SVD, PCA, nonnegative matrix factorization (NMF), and similar decompositions.
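A toy PyTorch sketch in the spirit of this decomposition for the matrix case: two small MLPs generate the factors U and V from fixed random codes and are trained self-supervised against the squared approximation error. Network sizes, optimizer, and step count are invented for illustration and are not the DeepTensor implementation.

```python
import torch
import torch.nn as nn

def deep_lowrank(X, rank=5, steps=2000, lr=1e-3, hidden=128):
    """Fit X (an m-by-n torch tensor) as U @ V.T, with factors generated
    by small MLPs from fixed random codes and trained on the
    self-supervised mean-square approximation error."""
    m, n = X.shape
    zU, zV = torch.randn(m, 32), torch.randn(n, 32)   # fixed latent codes
    netU = nn.Sequential(nn.Linear(32, hidden), nn.ReLU(), nn.Linear(hidden, rank))
    netV = nn.Sequential(nn.Linear(32, hidden), nn.ReLU(), nn.Linear(hidden, rank))
    opt = torch.optim.Adam([*netU.parameters(), *netV.parameters()], lr=lr)
    for _ in range(steps):
        loss = ((X - netU(zU) @ netV(zV).T) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return netU(zU) @ netV(zV).T      # denoised low-rank reconstruction
```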
  4. Summary

    To construct an optimal estimating function by weighting a set of score functions, we must either know or estimate consistently the covariance matrix for the individual scores. In problems with high dimensional correlated data the estimated covariance matrix could be unreliable. The smallest eigenvalues of the covariance matrix will be the most important for weighting the estimating equations, but in high dimensions these will be poorly determined. Generalized estimating equations introduced the idea of a working correlation to minimize such problems. However, it can be difficult to specify the working correlation model correctly. We develop an adaptive estimating equation method which requires no working correlation assumptions. This methodology relies on finding a reliable approximation to the inverse of the variance matrix in the quasi-likelihood equations. We apply a multivariate generalization of the conjugate gradient method to find estimating equations that preserve the information well at fixed low dimensions. This approach is particularly useful when the estimator of the covariance matrix is singular or close to singular, or impossible to invert owing to its large size.

     
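The key computational idea here, approximating the action of the inverse variance matrix without ever forming the inverse, can be illustrated with ordinary conjugate gradients (the paper uses a multivariate generalization). A minimal sketch, assuming V is symmetric positive definite:

```python
import numpy as np

def cg_apply_inverse(V, s, k=10):
    """Approximate V^{-1} s with k conjugate-gradient steps, i.e. a
    degree-k Krylov approximation; V is never inverted or factorized,
    which matters when V is huge, ill-conditioned, or near-singular."""
    x = np.zeros_like(s)
    r = s - V @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(k):
        Vp = V @ p
        alpha = rs / (p @ Vp)
        x = x + alpha * p
        r = r - alpha * Vp
        rs_new = r @ r
        if np.sqrt(rs_new) < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```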
  5. Abstract

    Selecting the optimal Markowitz portfolio depends on estimating the covariance matrix of the returns of N assets from T periods of historical data. Problematically, N is typically of the same order as T, which makes the sample covariance matrix estimator perform poorly, both empirically and theoretically. While various other general-purpose covariance matrix estimators have been introduced in the financial economics and statistics literature for dealing with the high dimensionality of this problem, we here propose an estimator that exploits the fact that assets are typically positively dependent. This is achieved by imposing that the joint distribution of returns be multivariate totally positive of order 2 (MTP2). This constraint on the covariance matrix not only enforces positive dependence among the assets but also regularizes the covariance matrix, leading to desirable statistical properties such as sparsity. Based on stock market data spanning 30 years, we show that estimating the covariance matrix under MTP2 outperforms previous state-of-the-art methods, including shrinkage estimators and factor models.
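For a Gaussian model, the MTP2 constraint is equivalent to the precision matrix being an M-matrix (non-positive off-diagonal entries), which makes the constrained maximum-likelihood estimate a convex problem. A minimal sketch using cvxpy; this formulation is a standard way to pose the problem, not necessarily the estimator implemented in the paper.

```python
import cvxpy as cp
import numpy as np

def mtp2_covariance(S):
    """Gaussian MLE of the covariance under MTP2 (sketch).

    S : (p, p) sample covariance of the returns.
    Maximizes log det K - tr(S K) over precision matrices K with
    non-positive off-diagonals (the M-matrix / MTP2 constraint).
    """
    p = S.shape[0]
    K = cp.Variable((p, p), PSD=True)
    constraints = [K[i, j] <= 0 for i in range(p) for j in range(p) if i != j]
    prob = cp.Problem(cp.Maximize(cp.log_det(K) - cp.trace(S @ K)), constraints)
    prob.solve()
    return np.linalg.inv(K.value)        # estimated covariance matrix
```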