Adaptive 3D descattering with a dynamic synthesis network
Abstract
Deep learning has been broadly applied to imaging in scattering applications. A common framework is to train a descattering network for image recovery by removing scattering artifacts. To achieve the best results on a broad spectrum of scattering conditions, individual “expert” networks need to be trained for each condition. However, the expert’s performance sharply degrades when the testing condition differs from the training. An alternative brute-force approach is to train a “generalist” network using data from diverse scattering conditions. It generally requires a larger network to encapsulate the diversity in the data and a sufficiently large training set to avoid overfitting. Here, we propose an adaptive learning framework, termed dynamic synthesis network (DSN), which dynamically adjusts the model weights and adapts to different scattering conditions. The adaptability is achieved by a novel “mixture of experts” architecture that enables dynamically synthesizing a network by blending multiple experts using a gating network. We demonstrate the DSN in holographic 3D particle imaging for a variety of scattering conditions. We show in simulation that our DSN provides generalization across a continuum of scattering conditions. In addition, we show that by training the DSN entirely on simulated data, the network can generalize to experiments and achieve robust 3D descattering. We expect the same concept can find many other applications, such as denoising and imaging in scattering media. Broadly, our dynamic synthesis framework opens up a new paradigm for designing highly adaptive deep learning and computational imaging techniques.
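The core idea of the DSN, blending multiple expert networks through a gating network so the effective weights adapt to the scattering condition, can be illustrated with a minimal sketch. This is a hypothetical NumPy toy, not the authors' implementation: a single linear layer whose weight matrix is a softmax-gated blend of K expert weight matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: K experts, each a d_in -> d_out linear layer.
K, d_in, d_out = 3, 8, 4
expert_weights = rng.standard_normal((K, d_in, d_out))

def gate(condition_feature):
    """Softmax gating: maps a condition code to K blending coefficients.
    In the full DSN this would be a small learned gating network."""
    logits = condition_feature
    e = np.exp(logits - logits.max())
    return e / e.sum()

def dynamic_layer(x, condition_feature):
    """Synthesize one layer on the fly by blending the experts' weights."""
    g = gate(condition_feature)                   # shape (K,)
    W = np.tensordot(g, expert_weights, axes=1)   # blended (d_in, d_out) weights
    return x @ W

x = rng.standard_normal(d_in)
y = dynamic_layer(x, np.array([2.0, 0.0, -1.0]))  # condition favors expert 0
```

Because the blend happens in weight space rather than output space, only one synthesized network is evaluated per input, which is the efficiency argument for this style of mixture of experts.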
- PAR ID: 10363272
- Publisher / Repository: Nature Publishing Group
- Date Published:
- Journal Name: Light: Science & Applications
- Volume: 11
- Issue: 1
- ISSN: 2047-7538
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like This
-
Coherent imaging through scatter is a challenging task. Both model-based and data-driven approaches have been explored to solve the inverse scattering problem. In our previous work, we have shown that a deep learning approach can make high-quality and highly generalizable predictions through unseen diffusers. Here, we propose a new deep neural network model that is agnostic to a broader class of perturbations including scatterer change, displacements, and system defocus up to 10× depth of field. In addition, we develop a new analysis framework for interpreting the mechanism of our deep learning model and visualizing its generalizability based on an unsupervised dimension reduction technique. We show that our model can unmix the scattering-specific information and extract the object-specific information and achieve generalization under different scattering conditions. Our work paves the way to a robust and interpretable deep learning approach to imaging through scattering media.
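The analysis strategy above, embedding a network's hidden features with an unsupervised dimension reduction method to see whether scattering-specific and object-specific information separate, can be sketched as follows. PCA is used here only as one possible choice of reduction; the feature vectors are synthetic stand-ins for activations collected under two scattering conditions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in hidden features: 50 samples from each of two scattering conditions,
# shifted apart so a good embedding should separate the two groups.
feats_a = rng.standard_normal((50, 16)) + 3.0
feats_b = rng.standard_normal((50, 16)) - 3.0
features = np.vstack([feats_a, feats_b])

def pca_embed(X, k=2):
    """Project feature vectors onto their top-k principal components."""
    Xc = X - X.mean(axis=0)                      # center the features
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # (n_samples, k) embedding

emb = pca_embed(features)
```

Plotting `emb` colored by condition (or by object identity) is the kind of visualization that reveals whether the network has unmixed the two factors.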
-
Recovering 3D phase features of complex biological samples traditionally sacrifices computational efficiency and processing time for physical model accuracy and reconstruction quality. Here, we overcome this challenge using an approximant-guided deep learning framework in a high-speed intensity diffraction tomography system. Applying a physics model simulator-based learning strategy trained entirely on natural image datasets, we show our network can robustly reconstruct complex 3D biological samples. To achieve highly efficient training and prediction, we implement a lightweight 2D network structure that utilizes a multi-channel input for encoding the axial information. We demonstrate this framework on experimental measurements of weakly scattering epithelial buccal cells and strongly scattering C. elegans worms. We benchmark the network’s performance against a state-of-the-art multiple-scattering model-based iterative reconstruction algorithm. We highlight the network’s robustness by reconstructing dynamic samples from a living worm video. We further emphasize the network’s generalization capabilities by recovering algae samples imaged from different experimental setups. To assess the prediction quality, we develop a quantitative evaluation metric to show that our predictions are consistent with both multiple-scattering physics and experimental measurements.
-
Intensity Diffraction Tomography (IDT) is a new computational microscopy technique providing quantitative, volumetric, large field-of-view (FOV) phase imaging of biological samples. This approach uses computationally efficient inverse scattering models to recover 3D phase volumes of weakly scattering objects from intensity measurements taken under diverse illumination at a single focal plane. IDT is easily implemented in a standard microscope equipped with an LED array source and requires no exogenous contrast agents, making the technology widely accessible for biological research. Here, we discuss model and learning-based approaches for complex 3D object recovery with IDT. We present two model-based computational illumination strategies, multiplexed IDT (mIDT) [1] and annular IDT (aIDT) [2], that achieve high-throughput quantitative 3D object phase recovery at hardware-limited 4 Hz and 10 Hz volume rates, respectively. We illustrate these techniques on living epithelial buccal cells and Caenorhabditis elegans worms. For strong scattering object recovery with IDT, we present an uncertainty quantification framework for assessing the reliability of deep learning-based phase recovery methods [3]. This framework provides a per-pixel evaluation of a neural network prediction's confidence level, allowing for efficient and reliable complex object recovery. This uncertainty learning framework is widely applicable for reliable deep learning-based biomedical imaging techniques and shows significant potential for IDT.
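The per-pixel confidence idea can be illustrated with a toy Monte Carlo estimate: run a stochastic model several times and report the per-pixel mean and standard deviation of the predictions. This is only one common way to obtain such maps, and the toy "network" below (a random dropout mask on the input) merely stands in for a learned phase reconstruction; the cited framework [3] may use a different estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

def stochastic_predict(image, drop_p=0.2):
    """Toy stochastic forward pass: dropout-style mask, rescaled so the
    expectation matches the input."""
    mask = rng.random(image.shape) > drop_p
    return image * mask / (1.0 - drop_p)

def per_pixel_uncertainty(image, n_samples=50):
    """Per-pixel mean prediction and per-pixel standard deviation (confidence map)."""
    preds = np.stack([stochastic_predict(image) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

image = rng.random((8, 8))            # stand-in for an intensity measurement
mean_map, std_map = per_pixel_uncertainty(image)
```

Pixels with a large `std_map` value are the ones a downstream user should treat as unreliable, which is the practical payoff of per-pixel uncertainty maps.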
-
The Deep Operator Network (DeepONet) framework is a distinct class of neural network architecture that one trains to learn nonlinear operators, i.e., mappings between infinite-dimensional spaces. Traditionally, DeepONets are trained using a centralized strategy that requires transferring the training data to a centralized location. Such a strategy, however, limits our ability to secure data privacy or use high-performance distributed/parallel computing platforms. To alleviate such limitations, in this paper, we study the federated training of DeepONets for the first time. That is, we develop a framework, which we refer to as Fed-DeepONet, that allows multiple clients to train DeepONets collaboratively under the coordination of a centralized server. To achieve Fed-DeepONets, we propose an efficient stochastic gradient-based algorithm that enables the distributed optimization of the DeepONet parameters by averaging first-order estimates of the DeepONet loss gradient. Then, to accelerate the training convergence of Fed-DeepONets, we propose a moment-enhanced (i.e., adaptive) stochastic gradient-based strategy. Finally, we verify the performance of Fed-DeepONet by learning, for different configurations of the number of clients and fractions of available clients, (i) the solution operator of a gravity pendulum and (ii) the dynamic response of a parametric library of pendulums.
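The federated update described above, clients computing local gradient estimates that a server averages into one global step, can be sketched with a toy least-squares model in place of a DeepONet. All names and the loss are illustrative assumptions, not the Fed-DeepONet implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_gradient(params, data):
    """Client-side gradient of a toy least-squares loss ||X @ params - 1||^2,
    standing in for a client's DeepONet loss gradient on its private data."""
    residual = data @ params - 1.0
    return 2.0 * data.T @ residual / len(data)

def fed_step(params, client_datasets, lr=0.1):
    """Server step: average the clients' first-order estimates, then update."""
    grads = [local_gradient(params, d) for d in client_datasets]
    return params - lr * np.mean(grads, axis=0)

# Four clients, each holding 20 private samples; data never leaves a client,
# only gradient estimates are communicated.
params = np.zeros(3)
clients = [rng.standard_normal((20, 3)) for _ in range(4)]
for _ in range(200):
    params = fed_step(params, clients)
```

The moment-enhanced variant mentioned in the abstract would replace the plain update in `fed_step` with an adaptive rule (e.g., a momentum or Adam-style accumulator on the averaged gradient).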