Title: Holographic particle localization under multiple scattering
Abstract: We introduce a computational framework that incorporates multiple scattering for large-scale three-dimensional (3-D) particle localization using single-shot in-line holography. Traditional holographic techniques rely on single-scattering models that become inaccurate under high particle densities and large refractive-index contrasts. Existing multiple-scattering solvers become computationally prohibitive for large-scale problems, which comprise millions of voxels within the scattering volume. Our approach overcomes this computational bottleneck by computing multiple scattering slicewise within an efficient recursive framework. In the forward model, each recursion estimates the next higher-order multiply scattered field among the object slices. In the inverse model, each order of scattering is recursively estimated by a nonlinear optimization procedure. This nonlinear inverse model is further supplemented by a sparsity-promoting procedure that is particularly effective in localizing 3-D distributed particles. We show that our multiple-scattering model leads to significant improvement in the quality of 3-D localization compared to traditional methods based on the single-scattering approximation. Our experiments demonstrate robust inverse multiple scattering, allowing reconstruction of 100 million voxels from a single 1-megapixel hologram with a sparsity prior. The performance bound of our approach is quantified in simulation and validated experimentally. Our work promises utilization of multiple scattering for versatile large-scale applications.
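As a rough illustration of the slicewise recursion described above, the sketch below simulates an in-line hologram of a sliced scattering volume, where each pass of the loop adds the next higher order of scattering exchanged among the slices. It is a minimal NumPy sketch under assumed conventions (angular-spectrum propagation, plane-wave illumination, illustrative grid and wavelength parameters), not the authors' implementation.

```python
import numpy as np

def angular_spectrum(field, dz, wavelength, dx):
    """Propagate a 2-D complex field by distance dz with the angular-spectrum kernel."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    kz = 2 * np.pi * np.sqrt(np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def hologram_with_multiple_scattering(slices, z, u_in, wavelength, dx, orders=3):
    """Simulate an in-line hologram of a sliced volume, one scattering order at a time.

    slices : list of 2-D scattering-potential arrays (one per axial slice)
    z      : axial distance of each slice from the camera plane
    u_in   : incident plane wave sampled on the camera grid
    """
    # First order: the incident field is scattered once at each slice.
    scat = [slices[m] * angular_spectrum(u_in, -z[m], wavelength, dx)
            for m in range(len(slices))]
    total = sum(angular_spectrum(scat[m], z[m], wavelength, dx)
                for m in range(len(slices)))
    # Each further recursion: the field scattered at slice l re-scatters at slice m,
    # producing the next higher-order contribution exchanged among the slices.
    for _ in range(1, orders):
        scat = [slices[m] * sum(angular_spectrum(scat[l], z[l] - z[m], wavelength, dx)
                                for l in range(len(slices)) if l != m)
                for m in range(len(slices))]
        total += sum(angular_spectrum(scat[m], z[m], wavelength, dx)
                     for m in range(len(slices)))
    # In-line hologram: interference of the unscattered and total scattered fields.
    return np.abs(u_in + total) ** 2
```

Setting orders=1 reduces the sketch to the single-scattering (first Born) approximation that the traditional methods discussed above rely on.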
Award ID(s): 1813910
NSF-PAR ID: 10099626
Author(s) / Creator(s):
Date Published:
Journal Name: SPIE journal
Volume: 1
Issue: 3
ISSN: 0036-1860
Page Range / eLocation ID: 036003
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. The number of noisy images required for molecular reconstruction in single-particle cryo-electron microscopy (cryo-EM) is governed by the autocorrelations of the observed, randomly oriented, noisy projection images. In this work, we consider the effect of imposing sparsity priors on the molecule. We use techniques from signal processing, optimization, and applied algebraic geometry to obtain theoretical and computational contributions for this challenging nonlinear inverse problem with sparsity constraints. We prove that molecular structures modeled as sums of Gaussians are uniquely determined by the second-order autocorrelation of their projection images, implying that the sample complexity is proportional to the square of the noise variance. This improves upon the non-sparse case, where the third-order autocorrelation is required for uniformly oriented particle images and the sample complexity scales with the cube of the noise variance. Furthermore, we build a computational framework to reconstruct molecular structures that are sparse in the wavelet basis. This method combines the sparse representation of the molecule with projection-based techniques used for phase retrieval in X-ray crystallography.
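The key statistic in the result above is the second-order autocorrelation of the projection images, which averages out noise everywhere except at zero shift. The toy sketch below is illustrative only (the image size, noise level, and Gaussian "projection" are assumptions, and it is not the paper's estimator); it shows how that averaged autocorrelation can be accumulated from noisy images.

```python
import numpy as np

def second_order_autocorrelation(images):
    """Average circular autocorrelation A(s) = E[I(x) I(x+s)] over a stack of images."""
    acc = np.zeros(images.shape[1:], dtype=float)
    for img in images:
        F = np.fft.fft2(img)
        acc += np.fft.ifft2(F * np.conj(F)).real / img.size
    return acc / len(images)

# Toy data: one Gaussian "projection" buried in heavy i.i.d. noise.
rng = np.random.default_rng(0)
clean = np.exp(-((np.indices((64, 64)) - 32.0) ** 2).sum(axis=0) / 50.0)
noisy = clean[None] + 2.0 * rng.standard_normal((2000, 64, 64))
A2 = second_order_autocorrelation(noisy)
# In expectation the noise contributes only a spike at zero shift, so away from the
# origin A2 converges to the clean image's autocorrelation as images accumulate.
```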
  2. Integrating regularization methods with standard loss functions, such as least squares or the hinge loss, within a regression framework has become a popular way to learn predictive models with lower variance and better generalization ability. Regularizers also aid in building interpretable models from high-dimensional data, which makes them very appealing. Each regularizer is formulated to capture data-specific properties such as correlation, structured sparsity, and temporal smoothness. Obtaining a consensus among such diverse regularizers while learning a predictive model is therefore important for determining the optimal regularizer for the problem. The advantage of such an approach is that it preserves the simplicity of the final model by selecting a single candidate model, unlike ensemble methods, which use multiple candidate models for prediction. This is called the consensus regularization problem; it has received little attention in the literature because of the inherent difficulty of learning and selecting a model from an integrated regularization framework. To solve this problem, we propose a method that generates a committee of non-convex regularized linear regression models and uses a consensus criterion to determine the optimal model for prediction. Each non-convex optimization problem in the committee is solved efficiently using the cyclic coordinate descent algorithm with a generalized thresholding operator. Our Consensus RegularIzation Selection based Prediction (CRISP) model is evaluated on electronic health records (EHRs) obtained from a large hospital for the congestive heart failure readmission prediction problem. We also evaluate the model on high-dimensional synthetic datasets. The results indicate that CRISP outperforms several state-of-the-art methods, including additive, interaction-based, and other competing non-convex regularized linear regression methods.
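The committee members in the abstract above are each fit by cyclic coordinate descent with a thresholding operator. The sketch below shows the convex l1 special case with plain soft thresholding; the function names and the lasso penalty are assumptions made for clarity, whereas the paper applies generalized thresholding operators for non-convex penalties.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding operator for the l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def coordinate_descent(X, y, lam, n_iter=100):
    """Solve min_w 0.5*||y - Xw||^2 + lam*||w||_1 by cyclic coordinate descent."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)      # per-coordinate curvature ||x_j||^2
    r = y - X @ w                      # residual, kept up to date
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * w[j]        # remove coordinate j's contribution
            rho = X[:, j] @ r
            w[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * w[j]        # add it back with the updated value
    return w
```

Swapping soft_threshold for a different generalized thresholding operator changes the penalty each committee member optimizes while leaving the cyclic update structure intact.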
  3. We consider Bayesian inference for large-scale inverse problems, where computational challenges arise from the need for repeated evaluations of an expensive forward model. This renders most Markov chain Monte Carlo approaches infeasible, since they typically require O(10^4) model runs or more. Moreover, the forward model is often given as a black box or is impractical to differentiate, so derivative-free algorithms are highly desirable. We propose a framework, built on Kalman methodology, to efficiently perform Bayesian inference in such inverse problems. The basic method approximates the filtering distribution of a novel mean-field dynamical system into which the inverse problem is embedded as an observation operator. Theoretical properties are established for linear inverse problems, demonstrating that the desired Bayesian posterior is given by the steady state of the law of the filtering distribution of the mean-field dynamical system, and proving exponential convergence to it. This suggests that, for nonlinear problems which are close to Gaussian, sequentially computing this law provides the basis for efficient iterative methods to approximate the Bayesian posterior. Ensemble methods are applied to obtain interacting particle system approximations of the filtering distribution of the mean-field model, and practical strategies to further reduce the computational and memory cost of the methodology are presented, including low-rank approximation and a bi-fidelity approach. The effectiveness of the framework is demonstrated in several numerical experiments, including proof-of-concept linear and nonlinear examples and two large-scale applications: learning permeability parameters in subsurface flow and learning subgrid-scale parameters in a global climate model. Moreover, the stochastic ensemble Kalman filter and various ensemble square-root Kalman filters are all employed and compared numerically. The results demonstrate that the proposed method, based on exponential convergence to the filtering distribution of a mean-field dynamical system, is competitive with pre-existing Kalman-based methods for inverse problems.
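For orientation, the sketch below shows a single step of a basic, derivative-free ensemble Kalman update, a simpler relative of the mean-field filtering approach described above. The forward map, ensemble size, and noise covariance are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def eki_step(theta, G, y, noise_cov, rng):
    """One derivative-free ensemble Kalman update of the parameter ensemble theta (J x d)."""
    outputs = np.array([G(t) for t in theta])        # J x m forward-model evaluations
    dT = theta - theta.mean(axis=0)
    dG = outputs - outputs.mean(axis=0)
    C_tg = dT.T @ dG / (len(theta) - 1)              # d x m parameter/output covariance
    C_gg = dG.T @ dG / (len(theta) - 1)              # m x m output covariance
    K = C_tg @ np.linalg.inv(C_gg + noise_cov)       # Kalman gain
    # Nudge each particle toward agreement with (perturbed) data y.
    perturbed = y + rng.multivariate_normal(np.zeros(len(y)), noise_cov, size=len(theta))
    return theta + (perturbed - outputs) @ K.T

# Toy usage: recover two parameters of a linear forward map from exact data.
rng = np.random.default_rng(1)
A = np.array([[1.0, 2.0], [0.5, -1.0], [2.0, 0.3]])
truth = np.array([0.7, -1.2])
y = A @ truth
ens = rng.standard_normal((200, 2))                  # initial (prior) ensemble
for _ in range(10):
    ens = eki_step(ens, lambda t: A @ t, y, 0.01 * np.eye(3), rng)
print(ens.mean(axis=0))                              # approximately `truth`
```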
  4. The spectral element method is currently the method of choice for computing accurate synthetic seismic wavefields in realistic 3-D earth models at the global scale. However, it requires significantly more computational time than normal-mode-based approximate methods. Source stacking, whereby multiple earthquake sources are aligned on their origin time and triggered simultaneously, can reduce the computational cost by several orders of magnitude. We present the results of synthetic tests performed on a realistic radially anisotropic 3-D model, slightly modified from model SEMUCB-WM1, with three-component synthetic waveform ‘data’ of 10 000 s duration, filtered at periods longer than 60 s, for a set of 273 events and 515 stations. We consider two definitions of the misfit function, one based on the stacked records at individual stations and another based on station-pair cross-correlations of the stacked records. The inverse step is performed using a Gauss–Newton approach, where the gradient and Hessian are computed using normal mode perturbation theory. We investigate the retrieval of radially anisotropic long-wavelength structure in the upper mantle in the depth range 100–800 km, after fixing the crust and uppermost mantle structure constrained by fundamental-mode Love and Rayleigh wave dispersion data. The results show good performance with both definitions of the misfit function, even in the presence of realistic noise, with degraded amplitudes of lateral variations in the anisotropic parameter ξ. Interestingly, we show that the long-wavelength structure in the upper mantle can be retrieved when considering any one of three portions of the cross-correlation time series, corresponding to where we expect the energy from surface wave overtones, the fundamental mode, or a mixture of the two to be dominant, respectively. We also consider the issue of missing data by randomly removing a successively larger proportion of the available synthetic data and replacing the missing data with synthetics computed in the current 3-D model using normal mode perturbation theory. The inversion results degrade with the proportion of missing data, especially for ξ, and we find that a data availability of 45 per cent or more leads to acceptable results. We also present a strategy for grouping events and stations to minimize the number of missing data in each group. This increases the number of computations but can be significantly more efficient than conventional single-event-at-a-time inversion. We apply the grouping strategy to a real picking scenario and show promising resolution capability despite the use of fewer waveforms and an uneven ray-path distribution. The source-stacking approach can be used to rapidly obtain a starting 3-D model for more conventional full-waveform inversion at higher resolution, and to investigate assumptions made in the inversion, such as trade-offs between isotropic, anisotropic, or anelastic structure, different model parametrizations, or how crustal structure is accounted for.
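The sketch below illustrates, in assumed generic notation, the two ingredients named above: stacking origin-time-aligned records from many events at each station, and measuring misfit either on the stacked traces directly or on their station-pair cross-correlations.

```python
import numpy as np

def stack_sources(records):
    """records: (n_events, n_stations, n_samples) array, already aligned on origin time."""
    return records.sum(axis=0)                    # one summed trace per station

def stacked_trace_misfit(obs_stack, syn_stack):
    """Misfit defined directly on the stacked records at individual stations."""
    return 0.5 * np.sum((obs_stack - syn_stack) ** 2)

def cross_correlation_misfit(obs_stack, syn_stack):
    """Misfit defined on station-pair cross-correlations of the stacked records."""
    total, n_sta = 0.0, obs_stack.shape[0]
    for i in range(n_sta):
        for j in range(i + 1, n_sta):
            c_obs = np.correlate(obs_stack[i], obs_stack[j], mode="full")
            c_syn = np.correlate(syn_stack[i], syn_stack[j], mode="full")
            total += 0.5 * np.sum((c_obs - c_syn) ** 2)
    return total
```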
  5. Intensity Diffraction Tomography (IDT) is a new computational microscopy technique providing quantitative, volumetric, large field-of-view (FOV) phase imaging of biological samples. This approach uses computationally efficient inverse scattering models to recover 3-D phase volumes of weakly scattering objects from intensity measurements taken under diverse illumination at a single focal plane. IDT is easily implemented in a standard microscope equipped with an LED array source and requires no exogenous contrast agents, making the technology widely accessible for biological research. Here, we discuss model-based and learning-based approaches for complex 3-D object recovery with IDT. We present two model-based computational illumination strategies, multiplexed IDT (mIDT) [1] and annular IDT (aIDT) [2], that achieve high-throughput quantitative 3-D object phase recovery at hardware-limited volume rates of 4 Hz and 10 Hz, respectively. We illustrate these techniques on living epithelial buccal cells and Caenorhabditis elegans worms. For strongly scattering object recovery with IDT, we present an uncertainty quantification framework for assessing the reliability of deep-learning-based phase recovery methods [3]. This framework provides a per-pixel evaluation of the neural network's prediction confidence, allowing for efficient and reliable complex object recovery. This uncertainty learning framework is widely applicable for reliable deep-learning-based biomedical imaging and shows significant potential for IDT.
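Under the weak-scattering assumption mentioned above, IDT-style recovery reduces to a linear inverse problem in which known phase transfer functions relate each depth slice to the measured intensity spectra. The sketch below is an assumed, generic per-spatial-frequency Tikhonov solve in that spirit; the array shapes, transfer functions, and regularization are illustrative and it is not the published mIDT/aIDT code.

```python
import numpy as np

def idt_recover_slices(I_spectra, H, beta=1e-2):
    """Recover depth-slice spectra from intensity spectra via a Tikhonov solve.

    I_spectra : (n_meas, Ny, Nx) Fourier spectra of background-subtracted intensities
    H         : (n_meas, n_slices, Ny, Nx) assumed phase transfer functions
    Returns     (n_slices, Ny, Nx) estimated slice spectra.
    """
    n_meas, n_slices, Ny, Nx = H.shape
    out = np.zeros((n_slices, Ny, Nx), dtype=complex)
    for ky in range(Ny):                          # each spatial frequency decouples
        for kx in range(Nx):
            A = H[:, :, ky, kx]                   # n_meas x n_slices system matrix
            b = I_spectra[:, ky, kx]
            AtA = A.conj().T @ A + beta * np.eye(n_slices)
            out[:, ky, kx] = np.linalg.solve(AtA, A.conj().T @ b)
    return out
```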