

Title: Physics-informed neural network for phase imaging based on transport of intensity equation

Non-interferometric quantitative phase imaging based on the Transport of Intensity Equation (TIE) has been widely used in biomedical imaging. However, analytic TIE phase retrieval is prone to low-spatial-frequency noise amplification, caused by the ill-posedness of the inversion at the origin of the spectrum. Retrieval ambiguities also arise for strongly absorbing samples, because the measurement is insensitive to the curl component of the Poynting vector. Here, we establish a physics-informed neural network (PINN) to address these issues by integrating the forward and inverse physics models into a cascaded deep neural network. We demonstrate that the proposed PINN is efficiently trained using a small set of sample data, enabling the conversion of noise-corrupted 2-shot TIE phase retrievals into high-quality phase images under partially coherent LED illumination. The efficacy of the proposed approach is demonstrated both by simulation using a standard image database and by experiment using human buccal epithelial cells. In particular, high image quality (SSIM = 0.919) is achieved experimentally using a reduced amount of labeled data (140 image pairs). We discuss the robustness of the proposed approach against insufficient training data and demonstrate that the parallel architecture of the PINN is efficient for transfer learning.
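As a point of reference for the low-frequency ill-posedness discussed above, the following is a minimal sketch of the standard analytic 2-shot TIE solver (an FFT-based inverse Laplacian under a uniform-intensity approximation), not the PINN proposed in the paper. The function name, the simple Tikhonov constant `eps`, and the uniform-intensity assumption are illustrative choices.

```python
import numpy as np

def tie_phase_fft(I_minus, I_plus, I0, dz, wavelength, pixel_size, eps=1e-3):
    """Minimal analytic TIE phase retrieval (2-shot, uniform-intensity sketch).

    Solves  -k * dI/dz = I0 * laplacian(phi)  with an FFT-based inverse
    Laplacian.  The small 'eps' term regularizes the singularity at the origin
    of the spectrum, which is the source of the low-spatial-frequency noise
    amplification mentioned in the abstract.
    """
    k = 2.0 * np.pi / wavelength
    dIdz = (I_plus - I_minus) / (2.0 * dz)          # axial intensity derivative

    ny, nx = dIdz.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    freq_sq = FX**2 + FY**2                         # |f|^2, zero at the DC term

    rhs = -k * dIdz / I0
    # Inverse Laplacian in Fourier space: division by -4*pi^2*|f|^2,
    # stabilized by the Tikhonov-style constant eps.
    phi_hat = np.fft.fft2(rhs) / (-4.0 * np.pi**2 * (freq_sq + eps))
    return np.real(np.fft.ifft2(phi_hat))
```

Increasing `eps` suppresses the amplified low-frequency noise but also washes out slowly varying phase, which is the trade-off the paper's PINN is designed to avoid.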

 
NSF-PAR ID: 10379775
Author(s) / Creator(s): ; ; ; ;
Publisher / Repository: Optical Society of America
Date Published:
Journal Name: Optics Express
Volume: 30
Issue: 24
ISSN: 1094-4087; OPEXFF
Page Range / eLocation ID: Article No. 43398
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Purpose

    To develop a strategy for training a physics‐guided MRI reconstruction neural network without a database of fully sampled data sets.

    Methods

    Self-supervised learning via data undersampling (SSDU) for physics-guided deep learning reconstruction partitions the available measurements into two disjoint sets; one is used in the data-consistency (DC) units of the unrolled network and the other is used to define the loss for training (a minimal sketch of this partitioning is given after this item). The proposed training without fully sampled data is compared with fully supervised training with ground-truth data, as well as conventional compressed-sensing and parallel imaging methods, using the publicly available fastMRI knee database. The same physics-guided neural network is used for both the proposed SSDU and supervised training. The SSDU training is also applied to prospectively two-fold accelerated high-resolution brain data sets at different acceleration rates and compared with parallel imaging.

    Results

    Results on five different knee sequences at an acceleration rate of 4 show that the proposed self-supervised approach performs comparably to supervised learning, while significantly outperforming conventional compressed-sensing and parallel imaging, as characterized by quantitative metrics and a clinical reader study. The results on prospectively subsampled brain data sets, for which supervised learning cannot be used due to the lack of a ground-truth reference, show that the proposed self-supervised approach successfully performs reconstruction at high acceleration rates (4, 6, and 8). Image readings indicate improved visual reconstruction quality with the proposed approach compared with parallel imaging at the acquisition acceleration.

    Conclusion

    The proposed SSDU approach allows training of physics‐guided deep learning MRI reconstruction without fully sampled data, while achieving comparable results with supervised deep learning MRI trained on fully sampled data.

     
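The following is a minimal, hypothetical sketch of the measurement partitioning described in the Methods above: the acquired k-space locations are split into two disjoint sets, one feeding the data-consistency units of the unrolled network and the other held out only to define the training loss. The function names, the single-coil Fourier forward model, and the normalized L2 loss are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def partition_mask(sampling_mask, loss_fraction=0.4, rng=None):
    """Split the acquired k-space locations into two disjoint sets:
    theta_mask feeds the data-consistency units of the unrolled network,
    lambda_mask is held out and used only in the training loss."""
    rng = np.random.default_rng(rng)
    sampling_mask = np.asarray(sampling_mask, dtype=int)
    sampled = np.flatnonzero(sampling_mask)
    held_out = rng.choice(sampled, size=int(loss_fraction * sampled.size), replace=False)
    lambda_mask = np.zeros_like(sampling_mask)
    lambda_mask.flat[held_out] = 1
    theta_mask = sampling_mask - lambda_mask          # disjoint by construction
    return theta_mask, lambda_mask

def ssdu_loss(network, y_acquired, sampling_mask):
    """Self-supervised loss sketch: run the network with only the theta subset
    in its DC units, then compare its re-predicted k-space against the held-out
    lambda subset.  'network' is a placeholder for the unrolled reconstruction,
    and a simple single-coil FFT stands in for the MRI forward model."""
    theta_mask, lambda_mask = partition_mask(sampling_mask)
    x_hat = network(y_acquired * theta_mask, theta_mask)   # image estimate
    y_hat = np.fft.fft2(x_hat)                              # forward model (sketch)
    residual = (y_hat - y_acquired) * lambda_mask
    return np.linalg.norm(residual) / np.linalg.norm(y_acquired * lambda_mask)
```

Because the loss is computed only on samples the network never saw in its DC units, no fully sampled reference is needed during training.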
  2. We present a tomographic imaging technique, termed Deep Prior Diffraction Tomography (DP-DT), to reconstruct the 3D refractive index (RI) of thick biological samples at high resolution from a sequence of low-resolution images collected under angularly varying illumination. DP-DT processes the multi-angle data using a phase retrieval algorithm that is extended by a deep image prior (DIP), which reparameterizes the 3D sample reconstruction with an untrained, deep generative 3D convolutional neural network (CNN). We show that DP-DT effectively addresses the missing cone problem, which otherwise degrades the resolution and quality of standard 3D reconstruction algorithms. As DP-DT does not require pre-captured data or pre-training, it is not biased towards any particular dataset. Hence, it is a general technique that can be applied to a wide variety of 3D samples, including scenarios in which large datasets for supervised training would be infeasible or expensive. We applied DP-DT to obtain 3D RI maps of bead phantoms and complex biological specimens, both in simulation and experiment, and show that DP-DT produces higher-quality results than standard regularization techniques. We further demonstrate the generality of DP-DT, using two different scattering models, the first Born and multi-slice models. Our results point to the potential benefits of DP-DT for other 3D imaging modalities, including X-ray computed tomography, magnetic resonance imaging, and electron microscopy.
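Below is a minimal, hypothetical sketch of the deep-image-prior idea that DP-DT builds on: the 3D refractive-index volume is reparameterized as the output of an untrained CNN, and only the network weights are fitted so that a forward scattering model reproduces the multi-angle measurements. The tiny generator, the placeholder `forward_model` callable, and the plain MSE loss are illustrative assumptions; the actual work uses first Born or multi-slice scattering models.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Untrained 3D generator: fixed random input -> refractive-index volume."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 1, 3, padding=1),
        )

    def forward(self, z):
        return self.net(z)

def reconstruct_dip(measurements, forward_model, shape=(32, 64, 64), steps=500):
    """measurements: list of measured images, one per illumination angle.
    forward_model(volume, angle_index) -> simulated image (placeholder for a
    first Born or multi-slice scattering model)."""
    g = TinyGenerator()
    z = torch.randn(1, 1, *shape)                     # fixed random code
    opt = torch.optim.Adam(g.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        volume = g(z)
        # Fit only the CNN weights so the simulated measurements match the data;
        # the CNN's structure acts as the (untrained) image prior.
        loss = sum(((forward_model(volume, i) - m) ** 2).mean()
                   for i, m in enumerate(measurements))
        loss.backward()
        opt.step()
    return g(z).detach()
```

No pre-training or captured dataset appears anywhere in this loop, which is what makes the approach free of dataset bias.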

     
  3. We propose a novel framework for the systematic design of lensless imaging systems based on the hyperuniform random field solutions of nonlinear reaction-diffusion equations from pattern formation theory. Specifically, we introduce a new class of imaging point-spread functions (PSFs) with enhanced isotropic behavior and controllable sparsity. We investigate PSFs and modulation transfer functions for a number of nonlinear models and demonstrate that two-phase isotropic random fields with hyperuniform disorder are ideally suited to construct imaging PSFs with improved performance compared to PSFs based on Perlin noise. Additionally, we introduce a phase retrieval algorithm based on non-paraxial Rayleigh–Sommerfeld diffraction theory and present diffractive phase plates whose PSFs are designed from hyperuniform random fields, called hyperuniform phase plates (HPPs). Finally, using high-fidelity object reconstruction, we demonstrate improved image quality using engineered HPPs across the visible range. The proposed framework is suitable for high-performance lensless imaging systems for on-chip microscopy and spectroscopy applications.
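As an illustration only: the two-phase isotropic random fields referred to above come from nonlinear reaction-diffusion (pattern formation) models. The sketch below uses the Gray-Scott model, a common pattern-forming stand-in, and thresholds the result into a binary (two-phase) mask; the specific equations, parameters, and hyperuniformity analysis of the actual work are not reproduced here.

```python
import numpy as np

def gray_scott_pattern(n=256, steps=5000, Du=0.16, Dv=0.08, F=0.035, k=0.060):
    """Illustrative pattern-forming simulation (Gray-Scott reaction-diffusion).
    The final v field is thresholded into a two-phase (binary) isotropic mask,
    a stand-in for the random fields used to design the PSFs."""
    u = np.ones((n, n))
    v = np.zeros((n, n))
    # Seed a small noisy square so the pattern-forming instability can develop.
    c = slice(n // 2 - 10, n // 2 + 10)
    u[c, c], v[c, c] = 0.50, 0.25
    v += 0.01 * np.random.rand(n, n)

    def lap(a):
        # Periodic 5-point Laplacian (unit grid spacing).
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

    for _ in range(steps):
        uvv = u * v * v
        u += Du * lap(u) - uvv + F * (1 - u)
        v += Dv * lap(v) + uvv - (F + k) * v

    return (v > v.mean()).astype(float)   # two-phase (binary) mask
```

The binary field produced here is isotropic by construction; whether it is hyperuniform depends on the chosen model and parameters, which is exactly the design freedom the framework exploits.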

     
  4. Abstract

    Optical coherence tomography (OCT) is a widely used non-invasive biomedical imaging modality that can rapidly provide volumetric images of samples. Here, we present a deep learning-based image reconstruction framework that can generate swept-source OCT (SS-OCT) images using undersampled spectral data, without any spatial aliasing artifacts. This neural network-based image reconstruction does not require any hardware changes to the optical setup and can be easily integrated with existing swept-source or spectral-domain OCT systems to reduce the amount of raw spectral data to be acquired. To show the efficacy of this framework, we trained and blindly tested a deep neural network using mouse embryo samples imaged by an SS-OCT system. Using 2-fold undersampled spectral data (i.e., 640 spectral points per A-line), the trained neural network can blindly reconstruct 512 A-lines in 0.59 ms using multiple graphics-processing units (GPUs), removing the spatial aliasing artifacts caused by spectral undersampling while closely matching images of the same samples reconstructed from the full spectral OCT data (i.e., 1280 spectral points per A-line). We also successfully demonstrate that this framework can be further extended to process 3× undersampled spectral data per A-line, with some degradation in reconstructed image quality compared to 2× spectral undersampling. Furthermore, an A-line-optimized undersampling method is presented by jointly optimizing the spectral sampling locations and the corresponding image reconstruction network, which further improved the overall imaging performance using fewer spectral data points per A-line than the 2× or 3× spectral undersampling results. This deep learning-enabled image reconstruction approach can be broadly used in various forms of spectral-domain OCT systems, helping to increase their imaging speed without sacrificing image resolution and signal-to-noise ratio.
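To make the spectral-undersampling problem concrete, here is a small, self-contained numpy illustration (not from the paper) of why discarding every other spectral point halves the unambiguous depth range and aliases deep reflectors in a Fourier-domain A-line; this is the artifact the reconstruction network is trained to remove. The A-line model and the reflector positions are arbitrary.

```python
import numpy as np

# In Fourier-domain OCT a reflector at depth z produces a fringe cos(2*k*z)
# across the spectral samples, and the A-line is the FFT of that spectrum.
n_k = 1280                               # spectral points per A-line (full sampling)
k = np.arange(n_k) / n_k                 # normalized wavenumber axis
depths = [100, 300, 500]                 # reflector "depths" in FFT-bin units
spectrum = sum(np.cos(2 * np.pi * d * k) for d in depths)

aline_full = np.abs(np.fft.rfft(spectrum))        # full reconstruction (641 bins)
aline_2x = np.abs(np.fft.rfft(spectrum[::2]))     # 2x undersampled: 640 points, 321 bins

print("full-sampling peak bins   :", np.sort(np.argsort(aline_full)[-3:]))   # ~100, 300, 500
print("2x-undersampled peak bins :", np.sort(np.argsort(aline_2x)[-3:]))     # ~100, 140, 300
# The deepest reflector (bin 500) exceeds the reduced depth range (320 bins)
# and folds back to bin 640 - 500 = 140, i.e., it appears at the wrong depth.
```

The network described above learns to undo this fold-over from data, rather than by changing the hardware or the sampling rate.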

     
  5. Structured illumination microscopy (SIM) is a popular super-resolution imaging technique that can achieve resolution improvements of 2× and greater depending on the illumination patterns used. Traditionally, images are reconstructed using the linear SIM reconstruction algorithm. However, this algorithm has hand-tuned parameters which can often lead to artifacts, and it cannot be used with more complex illumination patterns. Recently, deep neural networks have been used for SIM reconstruction, yet they require training sets that are difficult to capture experimentally. We demonstrate that we can combine a deep neural network with the forward model of the structured illumination process to reconstruct sub-diffraction images without training data. The resulting physics-informed neural network (PINN) can be optimized on a single set of diffraction-limited sub-images and thus does not require any training set. We show, with simulated and experimental data, that this PINN can be applied to a wide variety of SIM illumination methods by simply changing the known illumination patterns used in the loss function and can achieve resolution improvements that match theoretical expectations.
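A minimal, hypothetical sketch of the physics-informed loss described above: the network's super-resolved estimate must reproduce each measured diffraction-limited sub-image through a known forward model (modulate by the illumination pattern, low-pass with the OTF, downsample), so no training set is needed. The forward model, its 2x downsampling, and the function names are simplifying assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def sim_forward_model(sr_estimate, pattern, otf):
    """Placeholder SIM forward model: multiply the super-resolved estimate by a
    known illumination pattern, low-pass filter with the microscope OTF
    (given in unshifted FFT order), and downsample to the raw-image grid.
    sr_estimate, pattern, otf are (H, W) tensors."""
    modulated = sr_estimate * pattern
    blurred = torch.fft.ifft2(torch.fft.fft2(modulated) * otf).real
    return F.avg_pool2d(blurred[None, None], kernel_size=2)[0, 0]

def pinn_sim_loss(sr_estimate, raw_images, patterns, otf):
    """Physics-informed loss: the estimate must reproduce every measured
    diffraction-limited sub-image through the known forward model."""
    return sum(((sim_forward_model(sr_estimate, p, otf) - y) ** 2).mean()
               for p, y in zip(patterns, raw_images))
```

Changing the illumination method then only requires swapping the `patterns` passed to the loss, which mirrors the flexibility claimed in the abstract.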

     