Non-line-of-sight (NLOS) imaging is a light-starved application that suffers from highly noisy measurement data. In order to recover the hidden scene with good contrast, it is crucial for the reconstruction algorithm to be robust against noise and artifacts. We propose here two weighting factors for the filtered backprojection (FBP) reconstruction algorithm in NLOS imaging. The apodization factor modifies the aperture (wall) function to reduce streaking artifacts, and the coherence factor evaluates the spatial coherence of measured signals for noise suppression. Both factors are simple to evaluate, and their synergistic effects lead to state-of-the-art reconstruction quality for FBP with noisy data. We demonstrate the effectiveness of the proposed weighting factors on publicly accessible experimental datasets.
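To make the two weights concrete, the sketch below shows one way an aperture (wall) apodization and a coherence-factor weight could be folded into a confocal backprojection loop. It is a rough illustration under assumptions, not the authors' implementation: the Hann window, the confocal geometry, the omission of the FBP ramp filter, and the standard array-imaging definition of the coherence factor, |Σ s|² / (N Σ |s|²), are all choices made here for brevity.

```python
import numpy as np

def coherence_factor(contribs):
    # Standard array-imaging coherence factor: |sum|^2 / (N * sum of squares).
    # Values near 1 mean the per-measurement contributions agree (coherent);
    # noise-like contributions score close to 0.
    n = len(contribs)
    num = np.abs(np.sum(contribs)) ** 2
    den = n * np.sum(np.abs(contribs) ** 2) + 1e-12
    return num / den

def weighted_backprojection(transients, scan_pts, voxels, t_bins, c=3e8):
    # transients: (n_meas, n_time) histograms recorded at wall points scan_pts (m)
    # voxels: (n_vox, 3) hidden-scene sample points (m); t_bins: time-bin starts (s)
    n_meas = transients.shape[0]
    apod = np.hanning(n_meas)              # hypothetical aperture apodization
    image = np.zeros(len(voxels))
    for v, p in enumerate(voxels):
        contribs = np.empty(n_meas)
        for i, s in enumerate(scan_pts):
            r = np.linalg.norm(p - s)      # wall point -> hidden point distance
            t = 2.0 * r / c                # confocal round-trip time of flight
            k = min(np.searchsorted(t_bins, t), transients.shape[1] - 1)
            contribs[i] = apod[i] * transients[i, k]
        # Plain backprojection sum, down-weighted by spatial coherence.
        image[v] = np.sum(contribs) * coherence_factor(contribs)
    return image
```

In this form the coherence factor acts as a multiplicative, voxel-wise confidence weight, so voxels whose per-measurement contributions do not add up consistently are suppressed rather than smeared across the volume.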
- NSF-PAR ID:
- 10169779
- Publisher / Repository:
- Optical Society of America
- Date Published:
- Journal Name:
- Optics Letters
- Volume:
- 45
- Issue:
- 14
- ISSN:
- 0146-9592; OPLEDP
- Format(s):
- Medium: X
- Size(s):
- Article No. 3921
- Sponsoring Org:
- National Science Foundation
More Like this
-
This work concerns a fluorescence optical projection tomography system for low-scattering tissue, such as lymph nodes, with angular-domain rejection of highly scattered photons. In this regime, filtered backprojection (FBP) image reconstruction has been shown to provide images of reasonable quality, yet a comparison between images obtained by FBP and by iterative reconstruction with a Monte Carlo-generated system matrix demonstrates measurable improvements with the iterative method. In simulated and experimental phantoms, the iterative algorithms consistently outperformed FBP in contrast and spatial resolution. Moreover, when the number of projections was reduced in order to shorten total imaging time, iterative reconstruction suppressed artifacts that hampered FBP reconstruction (structural similarity of the reconstructed images with “truth” improved from 0.15 ± 1.2 × 10⁻³ to 0.66 ± 0.02); and although the system matrix was generated for homogeneous optical properties, when heterogeneity (62.98 cm⁻¹ variance in µs) was introduced into the simulated phantoms, the results remained comparable (structural similarity, homogeneous: 0.67 ± 0.02 vs. heterogeneous: 0.66 ± 0.02).
-
Abstract Non-line-of-sight (NLOS) imaging is a rapidly growing field seeking to form images of objects outside the field of view, with potential applications in autonomous navigation, reconnaissance, and even medical imaging. The critical challenge of NLOS imaging is that diffuse reflections scatter light in all directions, resulting in weak signals and a loss of directional information. To address this problem, we propose a method for seeing around corners that derives angular resolution from vertical edges and longitudinal resolution from the temporal response to a pulsed light source. We introduce an acquisition strategy, scene response model, and reconstruction algorithm that enable the formation of 2.5-dimensional representations (a plan view plus heights) and a 180° field of view for large-scale scenes. Our experiments demonstrate accurate reconstructions of hidden rooms up to 3 meters in each dimension despite a small scan aperture (1.5-centimeter radius) and only 45 measurement locations.
-
Abstract Non-Line-Of-Sight (NLOS) imaging aims at recovering the 3D geometry of objects that are hidden from the direct line of sight. One major challenge with this technique is the weak available multibounce signal, which limits scene size, capture speed, and reconstruction quality. To overcome this obstacle, we introduce a multipixel time-of-flight non-line-of-sight imaging method combining specifically designed Single Photon Avalanche Diode (SPAD) array detectors with a fast reconstruction algorithm that captures and reconstructs live low-latency videos of non-line-of-sight scenes with natural non-retroreflective objects. We develop a model of the signal-to-noise ratio of non-line-of-sight imaging and use it to devise a method that reconstructs the scene such that signal-to-noise ratio, motion blur, angular resolution, and depth resolution are all independent of scene depth, suggesting that reconstruction of very large scenes may be possible.
-
Abstract Optical coherence tomography (OCT) is a widely used non-invasive biomedical imaging modality that can rapidly provide volumetric images of samples. Here, we present a deep learning-based image reconstruction framework that can generate swept-source OCT (SS-OCT) images using undersampled spectral data, without any spatial aliasing artifacts. This neural network-based image reconstruction does not require any hardware changes to the optical setup and can be easily integrated with existing swept-source or spectral-domain OCT systems to reduce the amount of raw spectral data to be acquired. To show the efficacy of this framework, we trained and blindly tested a deep neural network using mouse embryo samples imaged by an SS-OCT system. Using 2-fold undersampled spectral data (i.e., 640 spectral points per A-line), the trained neural network can blindly reconstruct 512 A-lines in 0.59 ms using multiple graphics processing units (GPUs), removing the spatial aliasing artifacts caused by spectral undersampling while also closely matching the images of the same samples reconstructed from the full spectral OCT data (i.e., 1280 spectral points per A-line). We also successfully demonstrate that this framework can be further extended to process 3× undersampled spectral data per A-line, with some performance degradation in the reconstructed image quality compared to 2× spectral undersampling. Furthermore, an A-line-optimized undersampling method is presented by jointly optimizing the spectral sampling locations and the corresponding image reconstruction network, which improved the overall imaging performance using fewer spectral points per A-line compared to the 2× or 3× spectral undersampling results. This deep learning-enabled image reconstruction approach can be broadly used in various forms of spectral-domain OCT systems, helping to increase their imaging speed without sacrificing image resolution and signal-to-noise ratio.
-
Purpose To develop a general phase regularized image reconstruction method, with applications to partial Fourier imaging, water–fat imaging and flow imaging.
Theory and Methods The problem of enforcing phase constraints in reconstruction was studied under a regularized inverse problem framework. A general phase regularized reconstruction algorithm was proposed to enable various joint reconstructions of partial Fourier imaging, water–fat imaging and flow imaging, along with parallel imaging (PI) and compressed sensing (CS). Since phase regularized reconstruction is inherently non-convex and sensitive to phase wraps in the initial solution, a reconstruction technique, named phase cycling, was proposed to render the overall algorithm invariant to phase wraps. The proposed method was applied to retrospectively undersampled in vivo datasets and compared with state-of-the-art reconstruction methods.
Results Phase cycling reconstructions showed reduced artifacts compared to reconstructions without phase cycling and achieved performance similar to state-of-the-art results in partial Fourier, water–fat and divergence-free regularized flow reconstruction. Joint reconstructions of partial Fourier + water–fat imaging + PI + CS, and of partial Fourier + divergence-free regularized flow imaging + PI + CS were demonstrated.
Conclusion The proposed phase cycling reconstruction provides an alternative way to perform phase regularized reconstruction, without the need to perform phase unwrapping. It is robust to the choice of initial solutions and encourages the joint reconstruction of phase imaging applications. Magn Reson Med 80:112–125, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
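The last item above casts phase-sensitive MRI reconstruction as a regularized inverse problem solved jointly over a magnitude image and a phase image. As a generic illustration only (it omits the paper's regularizers, parallel imaging, compressed sensing, and the phase-cycling step that handles phase wraps), the sketch below takes plain gradient steps on magnitude m and phase φ under an assumed single-coil, undersampled-Fourier forward model; the mask, step size, and all names are illustrative.

```python
import numpy as np

def fft2c(x):   # centered, orthonormal 2-D FFT
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(x), norm="ortho"))

def ifft2c(x):
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(x), norm="ortho"))

def phase_regularized_recon(y, mask, n_iter=200, step=0.5):
    # y: undersampled k-space (zero where not acquired); mask: sampling pattern.
    # Gradient updates on magnitude m and phase phi for the data-consistency term
    #   f(m, phi) = || mask * FFT(m * exp(i*phi)) - y ||^2
    # Regularizers and the paper's phase-cycling trick are omitted in this toy version.
    x0 = ifft2c(y)                              # zero-filled initial estimate
    m, phi = np.abs(x0), np.angle(x0)
    for _ in range(n_iter):
        x = m * np.exp(1j * phi)
        r = ifft2c(mask * (fft2c(x) - y))       # A^H (A x - y)
        gm  = 2.0 * np.real(np.exp(-1j * phi) * r)   # gradient w.r.t. magnitude
        gph = 2.0 * m * np.imag(np.exp(-1j * phi) * r)  # gradient w.r.t. phase
        m   = np.maximum(m - step * gm, 0.0)    # keep the magnitude nonnegative
        phi = phi - step * gph
    return m, phi
```

Splitting the unknown into an explicit magnitude and phase is what makes phase priors easy to attach, but it also makes the problem non-convex and wrap-sensitive, which is the gap the paper's phase-cycling procedure is designed to close.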