

Title: Random mesh projectors for inverse problems
We propose a new learning-based approach to solve ill-posed inverse problems in imaging. We address the case where ground truth training samples are rare and the problem is severely ill-posed, both because of the underlying physics and because only a few measurements can be acquired. This setting is common in geophysical imaging and remote sensing. We show that in this case the common approach of directly learning the mapping from the measured data to the reconstruction becomes unstable. Instead, we propose to first learn an ensemble of simpler mappings from the data to projections of the unknown image into random piecewise-constant subspaces. We then combine the projections into a final reconstruction by solving a deconvolution-like problem. We show experimentally that the proposed method is more robust to measurement noise and to corruptions not seen during training than a directly learned inverse.
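The projection step can be sketched in a few lines. This is a minimal stand-in: a random labeling of pixels plays the role of the paper's random piecewise-constant subspaces, and all function names are hypothetical.

```python
import numpy as np

def random_partition(shape, n_cells, seed=0):
    # Random labeling of pixels: a crude stand-in for the random
    # piecewise-constant meshes used in the paper.
    rng = np.random.default_rng(seed)
    return rng.integers(0, n_cells, size=shape)

def project_piecewise_constant(img, labels):
    # Orthogonal projection onto images that are constant on each cell:
    # replace every cell by its mean value.
    proj = np.zeros_like(img, dtype=float)
    for c in np.unique(labels):
        mask = labels == c
        proj[mask] = img[mask].mean()
    return proj

img = np.random.default_rng(1).standard_normal((16, 16))
labels = random_partition(img.shape, n_cells=20)
proj = project_piecewise_constant(img, labels)
```

As a quick sanity check, applying the same projection twice leaves the result unchanged, which is the defining property of a projection operator.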
Award ID(s): 1725729
NSF-PAR ID: 10121465
Journal Name: 7th International Conference on Learning Representations, ICLR 2019
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like This
  1. We consider the problem of reconstructing a signal from under-determined modulo observations (or measurements). This observation model is inspired by a relatively new imaging mechanism called modulo imaging, which can be used to extend the dynamic range of imaging systems; variations of this model have also been studied under the category of phase unwrapping. Signal reconstruction in the under-determined regime with modulo observations is a challenging ill-posed problem, and existing reconstruction methods cannot be applied directly. In this paper, we propose a novel approach to the signal recovery problem under sparsity constraints for the special case of modulo folding limited to two periods. We show that, given a sufficient number of measurements, our algorithm perfectly recovers the underlying signal. We also provide experiments validating our approach on toy signal and image data and demonstrate its promising performance.
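For intuition, the modulo observation model itself is easy to simulate. The sketch below (with hypothetical names and a toy sparse signal) illustrates only the measurement process, not the recovery algorithm proposed in the paper.

```python
import numpy as np

def modulo_measure(x, A, R):
    # Each linear measurement A @ x is folded into [0, R) by the modulo.
    return np.mod(A @ x, R)

rng = np.random.default_rng(0)
n, m, R = 20, 8, 1.0                 # ambient dim, measurements (m < n), modulus
x = np.zeros(n)
x[[3, 11]] = 0.4                     # toy sparse signal
A = rng.standard_normal((m, n))      # under-determined measurement matrix
y = modulo_measure(x, A, R)          # folded observations, each in [0, R)
```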
  2. Optical metasurfaces consist of densely arranged unit cells that manipulate light through various light-confinement and scattering processes. Due to their unique advantages, such as high performance, small form factor, and easy integration with semiconductor devices, metasurfaces have been attracting increasing attention in fields such as displays, imaging, sensing, and optical computation. Despite advances in fabrication and characterization, predicting a viable design for a desired optical response remains challenging for complex optical metamaterial systems. The computational cost of obtaining the optimal design grows exponentially as the design complexity increases. Furthermore, design prediction is challenging because the inverse problem is often ill-posed. In recent years, deep learning (DL) methods have shown great promise in the area of inverse design. Inspired by this and by the capability of DL to produce fast inference, we introduce a physics-informed DL framework to expedite the computation for the inverse design of metasurfaces. Adding physics-based constraints improves the generalizability of the DL model while reducing the data burden. Our approach introduces a tandem DL architecture with physics-based learning to alleviate the nonuniqueness issue by selecting designs that are scientifically consistent, with low error in design prediction and accurate reconstruction of optical responses. To prove the concept, we focus on the inverse design of a representative plasmonic device that consists of metal gratings deposited on a dielectric film on top of a metal substrate. The optical response of the device is determined by its geometrical dimensions as well as its material properties. The training and testing data are obtained through Rigorous Coupled-Wave Analysis (RCWA), while the physics-based constraint is derived from solving the electromagnetic (EM) wave equations for a simplified homogenized model.
We consider predicting the design from the optical response at a single incident wavelength or over a spectrum of wavelengths in the visible range. Our model converges with an accuracy of up to 97% for inverse design prediction with the optical response over the visible spectrum as input. With the optical response at a single wavelength as input, the model predicts the design with an accuracy of up to 96% and reconstructs the optical response with 99% accuracy.
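A tandem architecture of this kind pairs an inverse network with a fixed forward model and scores a predicted design by how well its simulated response matches the target. The loss below is an illustrative sketch of that idea, not the paper's exact objective; the weight `w` and all names are assumptions.

```python
import numpy as np

def tandem_loss(pred_design, true_design, recon_response, target_response, w=0.5):
    # Mix design-prediction error (inverse network) with the response-
    # reconstruction error obtained by pushing the predicted design
    # through a fixed forward model.
    design_err = np.mean((pred_design - true_design) ** 2)
    response_err = np.mean((recon_response - target_response) ** 2)
    return w * design_err + (1.0 - w) * response_err

# A prediction that matches both the true design and the target response
# incurs zero loss.
loss = tandem_loss(np.array([1.0, 2.0]), np.array([1.0, 2.0]),
                   np.zeros(4), np.zeros(4))
```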
  3. Imaging through scattering is a pervasive and difficult problem in many biological applications. The high background and the exponentially attenuated target signals due to scattering fundamentally limit the imaging depth of fluorescence microscopy. Light-field systems are favorable for high-speed volumetric imaging, but the 2D-to-3D reconstruction is fundamentally ill-posed, and scattering worsens the conditioning of the inverse problem. Here, we develop a scattering simulator that models low-contrast target signals buried in a heterogeneous strong background. We then train a deep neural network solely on synthetic data to descatter and reconstruct a 3D volume from a single-shot light-field measurement with a low signal-to-background ratio (SBR). We apply this network to our previously developed computational miniature mesoscope and demonstrate the robustness of our deep learning algorithm on scattering phantoms with different scattering conditions. The network can robustly reconstruct emitters in 3D from a 2D measurement with an SBR as low as 1.05 and at depths up to a scattering length. We analyze fundamental tradeoffs based on network design factors and out-of-distribution data that affect the deep learning model's generalizability to real experimental data. Broadly, we believe that our simulator-based deep learning approach can be applied to a wide range of imaging-through-scattering techniques where experimental paired training data are lacking.
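The key ingredient is a simulator that buries weak point sources in a strong, spatially varying background at a prescribed SBR. The toy generator below is only a crude stand-in for the paper's scattering simulator; the names and the background model are assumptions.

```python
import numpy as np

def synth_low_sbr(shape, n_emitters, sbr, seed=0):
    # Point emitters on a heterogeneous background, scaled so that
    # (signal + background) / background at emitter pixels is roughly sbr.
    rng = np.random.default_rng(seed)
    background = 1.0 + 0.2 * rng.random(shape)   # heterogeneous background
    signal = np.zeros(shape)
    rows = rng.integers(0, shape[0], n_emitters)
    cols = rng.integers(0, shape[1], n_emitters)
    signal[rows, cols] = (sbr - 1.0) * background.mean()
    return background + signal, signal > 0

measurement, emitter_mask = synth_low_sbr((32, 32), n_emitters=5, sbr=1.05)
```

Paired (measurement, emitter location) samples from a generator like this are what let the network train without experimental ground truth.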

     
  4. Elastography is an imaging technique to reconstruct elasticity distributions of heterogeneous objects. Since cancerous tissues are stiffer than healthy ones, for decades, elastography has been applied to medical imaging for noninvasive cancer diagnosis. Although the conventional strain-based elastography has been deployed on ultrasound diagnostic-imaging devices, the results are prone to inaccuracies. Model-based elastography, which reconstructs elasticity distributions by solving an inverse problem in elasticity, may provide more accurate results but is often unreliable in practice due to the ill-posed nature of the inverse problem. We introduce ElastNet, a de novo elastography method combining the theory of elasticity with a deep-learning approach. With prior knowledge from the laws of physics, ElastNet can escape the performance ceiling imposed by labeled data. ElastNet uses backpropagation to learn the hidden elasticity of objects, resulting in rapid and accurate predictions. We show that ElastNet is robust when dealing with noisy or missing measurements. Moreover, it can learn probable elasticity distributions for areas even without measurements and generate elasticity images of arbitrary resolution. When both strain and elasticity distributions are given, the hidden physics in elasticity—the conditions for equilibrium—can be learned by ElastNet.
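The equilibrium condition that ElastNet exploits can be illustrated in one dimension: at equilibrium the stress sigma = E * strain has zero divergence, so its finite-difference derivative is a natural physics residual to penalize during training. This 1D sketch is illustrative only (the paper works with 2D elasticity), and the names are hypothetical.

```python
import numpy as np

def equilibrium_residual(E, strain, dx):
    # Stress sigma = E * strain; equilibrium in 1D requires d(sigma)/dx = 0,
    # so the gradient of the stress serves as a physics-based loss term.
    stress = E * strain
    return np.gradient(stress, dx)

# Sanity check: strain chosen so the stress is constant everywhere,
# hence the equilibrium residual vanishes.
x = np.linspace(0.0, 1.0, 50)
E = 1.0 + x                  # linearly varying stiffness
strain = 1.0 / E             # gives stress = 1 everywhere
residual = equilibrium_residual(E, strain, dx=x[1] - x[0])
```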

     
  5. In recent years, large convolutional neural networks have been widely used as tools for image deblurring because of their ability to restore images very precisely. It is well known that image deblurring is mathematically modeled as an ill-posed inverse problem whose solution is difficult to approximate when noise affects the data. Indeed, one limitation of neural networks for deblurring is their sensitivity to noise and other perturbations, which can lead to instability and poor reconstructions. In addition, networks do not necessarily take into account the numerical formulation of the underlying imaging problem when trained end-to-end. In this paper, we propose strategies to improve stability without losing too much accuracy when deblurring images with deep-learning-based methods. First, we suggest a very small neural architecture, which reduces the execution time for training, satisfying a green AI need, and does not excessively amplify noise in the computed image. Second, we introduce a unified framework in which a pre-processing step balances the lack of stability of the subsequent neural-network-based step. Two different pre-processors are presented: the former implements a strong parameter-free denoiser, and the latter is a variational-model-based regularized formulation of the latent imaging problem. This framework is also formally characterized by mathematical analysis. Numerical experiments verify the accuracy and stability of the proposed approaches for image deblurring in the presence of unknown or unquantified noise; the results confirm that the approaches improve network stability with respect to noise. In particular, the model-based framework represents the most reliable trade-off between visual precision and robustness.
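The first pre-processor idea can be sketched as a parameter-free filter feeding a learned network. A median filter here is only a stand-in for the stronger denoiser used in the paper, and the identity `network` is a placeholder for the trained deblurring CNN.

```python
import numpy as np

def median_preprocess(img, k=3):
    # Parameter-free denoising step applied before the learned network.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def stabilized_deblur(img, network):
    # Unified framework: stabilize the input first, then deblur.
    return network(median_preprocess(img))

noisy = np.random.default_rng(0).standard_normal((8, 8))
restored = stabilized_deblur(noisy, network=lambda x: x)  # identity stand-in
```

The pre-processor absorbs part of the input perturbation, so the learned step sees a more stable input; this is the stability/accuracy balance the framework formalizes.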