We present a method that takes a single dual-pixel image as input and simultaneously estimates the image's defocus map (the amount of defocus blur at each pixel) and recovers an all-in-focus image. Our method is inspired by recent works that leverage the dual-pixel sensors available in many consumer cameras to assist with autofocus, and use them to recover defocus maps or all-in-focus images. These prior works solve the two recovery problems independently of each other and often require large labeled datasets for supervised training. By contrast, we show that it is beneficial to treat these two closely connected problems jointly. To this end, we set up an optimization problem that, by carefully modeling the optics of dual-pixel images, solves both problems simultaneously. We use data captured with a consumer smartphone camera to demonstrate that, after a one-time calibration step, our approach improves upon prior work for both defocus map estimation and blur removal, despite being entirely unsupervised.
Learning the Truncation Index of the Kronecker Product SVD for Image Restoration
Recovering an image from a noisy or otherwise compromised observation is an ill-posed inverse problem. To solve it, prior information about the smoothness or structure of the solution must be incorporated through regularization. Here, we consider linear blur operators whose singular value decomposition can be found efficiently; regularization is then obtained by employing a truncated singular value expansion for image recovery. In this study, we focus on images for which the blur operator is separable and can be represented by a Kronecker product, so that the associated singular value decomposition is expressible in terms of the singular value decompositions of the separable components. The truncation index k can then be identified without forming the full Kronecker product of the two terms. This report investigates the problem of learning an optimal k using two methods. The first assumes knowledge of the true images, yielding a supervised learning algorithm based on the average relative error. The second uses generalized cross validation and does not require knowledge of the true images. The approach is implemented and demonstrated to be successful for Gaussian, Poisson, and salt-and-pepper noise types across noise levels with signal-to-noise ratios as low as 10. This research contributes to the field by offering insights into the use of the supervised and unsupervised estimators for the truncation index, and demonstrates that the unsupervised algorithm is not only robust and computationally efficient, but is also comparable to the supervised method.
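For concreteness, the sketch below, a minimal NumPy illustration rather than the paper's implementation, shows the computation the abstract describes: deblurring by a truncated singular value expansion when the blur is separable, with the truncation index k selected by generalized cross validation. The blur model B = Ar @ X @ Ac.T and all function and variable names are assumptions.

```python
# Minimal NumPy sketch (not the paper's implementation) of TSVD deblurring with a
# separable blur and GCV selection of the truncation index k. Assumed model:
# blurred image B = Ar @ X @ Ac.T + noise, so the blur operator is Ac kron Ar.
import numpy as np

def kron_tsvd_deblur(B, Ar, Ac):
    """Return the GCV-selected k and the corresponding TSVD restoration of B."""
    Ur, sr, VrT = np.linalg.svd(Ar)           # SVD of the row-blur factor
    Uc, sc, VcT = np.linalg.svd(Ac)           # SVD of the column-blur factor
    S = np.outer(sr, sc)                      # singular values of the Kronecker operator
    C = Ur.T @ B @ Uc                         # spectral coefficients of the data
    order = np.argsort(S, axis=None)[::-1]    # sort the products sr_i * sc_j, largest first
    N = S.size

    # GCV(k) = ||residual_k||^2 / (N - k)^2, where the residual norm is the energy
    # in the discarded spectral coefficients; pick the k that minimizes it.
    c2 = C.flatten()[order] ** 2
    tail = np.cumsum(c2[::-1])[::-1]          # tail[k] = energy discarded if k terms are kept
    ks = np.arange(1, N)
    gcv = tail[1:] / (N - ks) ** 2
    k_opt = ks[np.argmin(gcv)]

    # TSVD solution: invert only the k_opt largest singular values of the operator.
    keep = np.zeros(N, dtype=bool)
    keep[order[:k_opt]] = True
    keep = keep.reshape(S.shape)
    Filt = np.zeros_like(S)
    Filt[keep] = 1.0 / S[keep]
    X = VrT.T @ (Filt * C) @ VcT              # VrT.T is Vr; VcT is Vc^T
    return k_opt, X
```

Because the operator's singular values are products of the factors' singular values, neither the full Kronecker product nor its SVD is ever formed.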
- Award ID(s): 1757663
- PAR ID: 10524020
- Publisher / Repository: Society for Industrial and Applied Mathematics
- Date Published:
- Journal Name: SIAM Undergraduate Research Online
- Volume: 16
- ISSN: 2327-7807
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- We consider the problem of matrix approximation and denoising induced by the Kronecker product decomposition. Specifically, we propose to approximate a given matrix by the sum of a few Kronecker products of matrices, which we refer to as the Kronecker product approximation (KoPA). Because the Kronecker product is an extension of the outer product from vectors to matrices, KoPA extends the low-rank matrix approximation and includes it as a special case. Compared with the latter, KoPA also offers greater flexibility, since it allows the user to choose the configuration, namely the dimensions of the two smaller matrices forming the Kronecker product. On the other hand, the configuration to be used is usually unknown and needs to be determined from the data in order to achieve the optimal balance between accuracy and parsimony. We propose to use extended information criteria to select the configuration. Under the paradigm of high-dimensional analysis, we show that the proposed procedure selects the true configuration with probability tending to one, under suitable conditions on the signal-to-noise ratio. We demonstrate the superiority of KoPA over low-rank approximations through numerical studies and several benchmark image examples.
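The sketch below is a minimal NumPy illustration, not the authors' code, of the rank-r Kronecker product approximation for a fixed configuration; the configuration selection by an extended information criterion described in the abstract is omitted, and all names are illustrative.

```python
import numpy as np

def kopa(Y, m1, n1, m2, n2, r=1):
    """Approximate Y ((m1*m2) x (n1*n2)) by sum_{k<r} A_k kron B_k."""
    # Rearrangement (Van Loan / Pitsianis): Y = A kron B  <=>  R = vec(A) vec(B)^T,
    # so the best Kronecker sum comes from a truncated SVD of the rearranged matrix.
    R = Y.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    terms = []
    for k in range(r):
        A = (np.sqrt(s[k]) * U[:, k]).reshape(m1, n1)
        B = (np.sqrt(s[k]) * Vt[k]).reshape(m2, n2)
        terms.append((A, B))
    Y_hat = sum(np.kron(A, B) for A, B in terms)
    return terms, Y_hat
```

Increasing r trades parsimony for accuracy, which is exactly the balance the information criterion in the abstract is designed to strike.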
- The singular value decomposition (SVD) of a reordering of a matrix A can be used to determine an efficient Kronecker product (KP) sum approximation to A. We present the use of an approximate truncated SVD (TSVD) to find the KP approximation, contrasting a randomized singular value decomposition algorithm (RSVD), a new enlarged Golub-Kahan bidiagonalization algorithm (EGKB), and the exact TSVD. The EGKB algorithm enlarges the Krylov subspace beyond a given rank for the desired approximation, with a suitable rank determined by an automatic stopping test. We also contrast the use of single- and double-precision arithmetic to find the approximate TSVDs. To illustrate the accuracy of these approximate KPs and their efficiency in terms of memory and computational cost, we consider the solution of the total variation regularized image deblurring problem using the split Bregman algorithm implemented in double precision. Together with an efficient implementation of the reordering of A, we demonstrate that the approximate KP sum can be obtained using a TSVD and that the new EGKB algorithm compares favorably with the use of the RSVD. These results verify that it is feasible to use single precision when estimating a KP sum from an approximate TSVD.
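As one concrete ingredient, here is a basic randomized truncated SVD sketch, in the style of Halko et al., of the kind the abstract contrasts with the exact TSVD and the EGKB algorithm; it can be applied to the rearranged matrix R(A) from the KoPA sketch above. It is not the authors' EGKB code, and parameter names are illustrative.

```python
import numpy as np

def randomized_tsvd(R, rank, oversample=10, n_iter=2, rng=None):
    """Approximate rank-`rank` SVD of R via a randomized range finder."""
    rng = np.random.default_rng(rng)
    Omega = rng.standard_normal((R.shape[1], rank + oversample))
    Q = np.linalg.qr(R @ Omega)[0]        # approximate range of R
    for _ in range(n_iter):               # power iterations sharpen the subspace
        Q = np.linalg.qr(R.T @ Q)[0]
        Q = np.linalg.qr(R @ Q)[0]
    Ub, s, Vt = np.linalg.svd(Q.T @ R, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank]
```

Running the same routine on `R.astype(np.float32)` gives a simple way to probe the single- versus double-precision trade-off the abstract mentions.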
- DeepTensor is a computationally efficient framework for low-rank decomposition of matrices and tensors using deep generative networks. We decompose a tensor as the product of low-rank tensor factors, where each low-rank factor is generated by a deep network (DN) trained in a self-supervised manner to minimize the mean-square approximation error. Our key observation is that the implicit regularization inherent in DNs enables them to capture nonlinear signal structures that are out of reach of classical linear methods such as the singular value decomposition (SVD) and principal component analysis (PCA). We demonstrate that DeepTensor is robust across a wide range of distributions and is a computationally efficient drop-in replacement for the SVD, PCA, nonnegative matrix factorization (NMF), and similar decompositions by exploring a range of real-world applications, including hyperspectral image denoising, 3D MRI tomography, and image classification.
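As a toy illustration of the idea, and not the released DeepTensor code, the following PyTorch sketch factors a matrix as U @ V.T with each factor produced by a small network trained self-supervised to minimize the mean-square approximation error; the network sizes, latent codes, and all names are assumptions.

```python
import torch
import torch.nn as nn

def deep_factorize(X, rank, steps=2000, lr=1e-3):
    """Self-supervised low-rank factorization X ~ U @ V.T with network-generated factors."""
    m, n = X.shape
    def factor_net(dim):
        return nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                             nn.Linear(256, dim * rank))
    net_u, net_v = factor_net(m), factor_net(n)
    z_u, z_v = torch.randn(64), torch.randn(64)      # fixed latent codes
    opt = torch.optim.Adam(list(net_u.parameters()) + list(net_v.parameters()), lr=lr)
    for _ in range(steps):
        U = net_u(z_u).reshape(m, rank)
        V = net_v(z_v).reshape(n, rank)
        loss = ((X - U @ V.T) ** 2).mean()           # self-supervised MSE objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        U = net_u(z_u).reshape(m, rank)
        V = net_v(z_v).reshape(n, rank)
    return U, V
```

The only regularization here is the implicit bias of the networks themselves, which is the property the abstract credits for capturing structure beyond the reach of a plain SVD or PCA.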
- Principal Component Analysis (PCA) is a standard dimensionality reduction technique, but it treats all samples uniformly, making it suboptimal for the heterogeneous data that are increasingly common in modern settings. This paper proposes a PCA variant for samples with heterogeneous noise levels, i.e., heteroscedastic noise, which naturally arises when some of the data come from higher-quality sources than others. The technique handles heteroscedasticity by incorporating it in the statistical model of probabilistic PCA. The resulting optimization problem is an interesting nonconvex problem that is related to, but seemingly not solved by, the singular value decomposition, and this paper derives an expectation maximization (EM) algorithm for it. Numerical experiments illustrate the benefits of using the proposed method to combine samples with heteroscedastic noise in a single analysis, as well as the benefits of careful initialization for the EM algorithm. Index terms: principal component analysis, heterogeneous data, maximum likelihood estimation, latent factors.
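For intuition, here is a compact NumPy sketch of EM iterations for probabilistic PCA with per-group noise variances. It assumes the group label of each sample is known, and the updates are a standard PPCA-style EM adapted to that model rather than necessarily the paper's exact derivation; all names are illustrative.

```python
import numpy as np

def heteroscedastic_ppca_em(X, groups, k, n_iter=100):
    """X: d x n data, groups: length-n int labels, k: latent dimension."""
    d, n = X.shape
    G = groups.max() + 1
    rng = np.random.default_rng(0)
    W = rng.standard_normal((d, k))      # factor loadings
    v = np.ones(G)                       # per-group noise variances
    for _ in range(n_iter):
        # E-step: posterior moments of the latents, using each sample's noise level.
        Ez = np.zeros((k, n))
        Ezz = np.zeros((k, k, n))
        for g in range(G):
            idx = np.where(groups == g)[0]
            Minv = np.linalg.inv(W.T @ W + v[g] * np.eye(k))
            Ez[:, idx] = Minv @ W.T @ X[:, idx]
            Ezz[:, :, idx] = (v[g] * Minv)[:, :, None] + \
                np.einsum('ki,li->kli', Ez[:, idx], Ez[:, idx])
        # M-step: noise-weighted update for W, then per-group variances.
        A = np.zeros((d, k))
        B = np.zeros((k, k))
        for g in range(G):
            idx = np.where(groups == g)[0]
            A += (X[:, idx] @ Ez[:, idx].T) / v[g]
            B += Ezz[:, :, idx].sum(axis=2) / v[g]
        W = A @ np.linalg.inv(B)
        for g in range(G):
            idx = np.where(groups == g)[0]
            res = (np.sum(X[:, idx] ** 2)
                   - 2 * np.sum(Ez[:, idx] * (W.T @ X[:, idx]))
                   + np.trace(W.T @ W @ Ezz[:, :, idx].sum(axis=2)))
            v[g] = res / (len(idx) * d)
    return W, v
```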

