Title: Correcting anisotropic intensity in light sheet images using dehazing and image morphology
Light-sheet fluorescence microscopy (LSFM) provides access to multi-dimensional and multi-scale in vivo imaging of animal models, with highly coherent volumetric reconstruction of tissue morphology via a focused laser light sheet. The orthogonal illumination and detection pathways of LSFM account for minimal photobleaching and deep-tissue optical sectioning through different perspective views. Although sample rotation and deep-tissue scanning constitute major advantages of LSFM, images may suffer from problems intrinsic to the modality, such as a slight mismatch of refractive indices between the sample and the mounting media and varying quantum efficiency across different depths. To overcome these challenges, we introduce an illumination correction technique integrated with depth-detail amelioration to achieve symmetric contrast in large field-of-view images acquired using a low-power objective lens. Because the angular dispersion of the emitted light flux increases with depth, we combined a dehazing algorithm with morphological operations to enhance poorly separated, overlapping structures with subdued intensity. The proposed method was tested on different LSFM modalities to illustrate its applicability in correcting the anisotropic illumination that affects volumetric reconstruction of the fluorescently tagged region of interest.
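The dehazing-plus-morphology combination described in the abstract can be illustrated in a few lines. The sketch below is a minimal single-channel illustration, assuming a dark-channel-style transmission estimate followed by a white top-hat enhancement; it is not the authors' published pipeline, and all parameter values (`patch`, `omega`, `t0`, `se`) are assumptions for illustration only.

```python
import numpy as np
from scipy import ndimage

def dehaze_and_enhance(img, patch=15, omega=0.9, t0=0.1, se=9):
    """Illustrative intensity correction: dark-channel-style dehazing
    followed by a morphological white top-hat to boost dim structures.

    Parameters are illustrative, not taken from the paper.
    """
    img = img.astype(np.float64)
    # Local minimum acts as a haze/background estimate (dark channel).
    dark = ndimage.minimum_filter(img, size=patch)
    # Bright-intensity reference, analogous to the "airlight" term in dehazing.
    A = max(np.percentile(img, 99.9), 1e-6)
    # Transmission map, clipped away from zero for numerical stability.
    t = np.clip(1.0 - omega * dark / A, t0, 1.0)
    # Standard dehazing recovery: J = (I - A) / t + A.
    dehazed = (img - A) / t + A
    # White top-hat emphasizes small bright features against the background.
    enhanced = dehazed + ndimage.white_tophat(dehazed, size=se)
    return np.clip(enhanced, 0.0, None)
```

In practice the structuring-element size would be matched to the scale of the fluorescently tagged structures, and the transmission estimate to the depth-dependent intensity falloff.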
Award ID(s):
1936519
PAR ID:
10584166
Publisher / Repository:
American Institute of Physics
Date Published:
Journal Name:
APL Bioengineering
Volume:
4
Issue:
3
ISSN:
2473-2877
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Light-sheet microscopes must compromise among field of view, optical sectioning, resolution, and detection efficiency. High-numerical-aperture (NA) detection objective lenses provide higher resolution, but their narrow depth of field inefficiently captures the fluorescence signal generated throughout the thickness of the illumination light sheet when imaging large volumes. Here, we present ExD-SPIM (extended depth-of-field selective-plane illumination microscopy), an improved light-sheet microscopy strategy that solves this limitation by extending the depth of field (DOF) of high-NA detection objectives to match the thickness of the illumination light sheet. This extension of the DOF uses a phase mask to axially stretch the point-spread function of the objective lens while largely preserving lateral resolution. Matching the detection DOF to the illumination-sheet thickness increases the total fluorescence collection, reduces the background, and improves the overall signal-to-noise ratio (SNR), as shown by numerical simulations, imaging of bead phantoms, and imaging of living animals. In comparison to conventional light-sheet imaging with low-NA detection that yields an equivalent DOF, the results show that ExD-SPIM increases the SNR by more than threefold and dramatically reduces the rate of photobleaching. Compared to conventional high-NA detection, ExD-SPIM improves the signal sensitivity and volumetric coverage of whole-brain activity imaging, increasing the number of detected neurons by over a third.
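The DOF-versus-sheet-thickness mismatch motivating ExD-SPIM can be made concrete with the textbook diffraction-limited DOF approximation, DOF ≈ nλ/NA². The numbers below (wavelength, refractive index, sheet thickness) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def depth_of_field_um(wavelength_um, na, n=1.33):
    """Classical diffraction-limited axial DOF approximation: n * lambda / NA^2."""
    return n * wavelength_um / na ** 2

# Illustrative comparison: does the detection DOF cover a 5-um light sheet?
sheet_thickness_um = 5.0
for na in (0.3, 0.8, 1.0):
    dof = depth_of_field_um(0.51, na)  # 510 nm emission, water immersion
    print(f"NA={na}: DOF ~ {dof:.1f} um, covers sheet: {dof >= sheet_thickness_um}")
```

A low-NA lens (NA 0.3) comfortably spans the sheet, while high-NA lenses fall well short of it, which is the gap the phase-mask DOF extension is designed to close.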
  2. Mask-based integrated fluorescence microscopy is a compact imaging technique for biomedical research. It can perform snapshot 3D imaging through a thin optical mask with a scalable field of view (FOV) and a thin device thickness. Integrated microscopy uses computational algorithms for object reconstruction, but efficient reconstruction algorithms for large-scale data have been lacking. Here, we developed DeepInMiniscope, a miniaturized integrated microscope featuring a custom-designed optical mask and a multi-stage physics-informed deep learning model. This reduces the computational resource demands by orders of magnitude and facilitates fast reconstruction. Our deep learning algorithm can reconstruct object volumes over 4×6×0.6 mm³. We demonstrated substantial improvement in both reconstruction quality and speed compared to traditional methods for large-scale data. Notably, we imaged neuronal activity with near-cellular resolution in awake mouse cortex, representing a substantial leap over existing integrated microscopes. DeepInMiniscope holds great promise for scalable, large-FOV, high-speed, 3D imaging applications with a compact device footprint.

# DeepInMiniscope: Deep-learning-powered physics-informed integrated miniscope

[https://doi.org/10.5061/dryad.6t1g1jx83](https://doi.org/10.5061/dryad.6t1g1jx83)

## Description of the data and file structure

### Datasets, models, and codes for 2D and 3D sample reconstructions

The dataset for 2D reconstruction includes test data for green stained lens tissue. Input: measured images of green fluorescent stained lens tissue, disassembled into sub-FOV patches. Output: the slide containing green lens tissue features.

The dataset for 3D reconstruction includes test data for 3D reconstruction of an in-vivo mouse brain video recording. Input: time-series standard deviation of the difference-to-local-mean weighted raw video. Output: reconstructed 4D volumetric video containing a 3-dimensional distribution of neural activities.

## Files and variables

### Download data, code, and sample results

1. Download data `data.zip`, code `code.zip`, and results `results.zip`.
2. Unzip the downloaded files and place them in the same main folder.
3. Confirm that the main folder contains three subfolders: `data`, `code`, and `results`. Inside the `data` and `code` folders, there should be subfolders for each test case.

## Data

### 2D_lenstissue

**data_2d_lenstissue.mat:** Measured images of green fluorescent stained lens tissue, disassembled into sub-FOV patches.

* **Xt:** stacked 108 FOVs of the measured image, each centered at one microlens unit with 720 x 720 pixels. Data dimensions in order of (batch, height, width, FOV).
* **Yt:** placeholder variable for the reconstructed object, each centered at the corresponding microlens unit, with 180 x 180 voxels. Data dimensions in order of (batch, height, width, FOV).

**reconM_0308:** Trained multi-FOV ADMM-Net model for 2D lens tissue reconstruction.

**gen_lenstissue.mat:** Generated lens tissue reconstruction produced by running the model with **2D_lenstissue.py**.

* **generated_images:** stacked 108 reconstructed FOVs of the lens tissue sample by multi-FOV ADMM-Net; the assembled full sample reconstruction is shown in results/2D_lenstissue_reconstruction.png.

### 3D_mouse

**reconM_g704_z5_v4:** Trained 3D multi-FOV ADMM-Net model for 3D sample reconstructions.

**t_img_recd_video0003 24-04-04 18-31-11_abetterrecordlong_03560_1_290.mat:** Time-series standard deviation of the difference-to-local-mean weighted raw video.

* **Xts:** test video with 290 frames, each frame holding 6 FOVs with 1408 x 1408 pixels per FOV. Data dimensions in order of (frames, height, width, FOV).

**gen_img_recd_video0003 24-04-04 18-31-11_abetterrecordlong_03560_1_290_v4.mat:** Generated 4D volumetric video containing a 3-dimensional distribution of neural activities.

* **generated_images_fu:** frame-by-frame 3D reconstruction of the recorded video in uint8 format. Data dimensions in order of (batch, FOV, height, width, depth). Each frame contains 6 FOVs, and each FOV has 13 reconstruction depths with 416 x 416 voxels per depth.

Variables inside the saved model subfolders (reconM_0308 and reconM_g704_z5_v4):

* **saved_model.pb:** model computation graph, including architecture and input/output definitions.
* **keras_metadata.pb:** Keras metadata for the saved model, including model class, training configuration, and custom objects.
* **assets:** external files for custom assets loaded during model training/inference. This folder is empty, as the model does not use custom assets.
* **variables.data-00000-of-00001:** numerical values of model weights and parameters.
* **variables.index:** index file that maps variable names to weight locations in .data.

## Code/software

### Set up the Python environment

1. Download and install the [Anaconda distribution](https://www.anaconda.com/download).
2. The code was tested with the following packages:
   * python=3.9.7
   * tensorflow=2.7.0
   * keras=2.7.0
   * matplotlib=3.4.3
   * scipy=1.7.1

## Code

**2D_lenstissue.py:** Python code for the multi-FOV ADMM-Net model to generate 2D reconstruction results. The function of each script section is described at the beginning of that section.

**lenstissue_2D.m:** Matlab code to display the generated image and reassemble sub-FOV patches.

**sup_psf.m:** Matlab script to load microlens coordinate data and generate the PSF pattern.

**lenscoordinates.xls:** Table of microlens unit coordinates.

**3D mouse.py:** Python code for the multi-FOV ADMM-Net model to generate 3D reconstruction results. The function of each script section is described at the beginning of that section.

**mouse_3D.m:** Matlab code to display the reconstructed neural activity video and calculate temporal correlation.

## Access information

Other publicly accessible locations of the data:

* [https://github.com/Yang-Research-Laboratory/DeepInMiniscope-Learned-Integrated-Miniscope](https://github.com/Yang-Research-Laboratory/DeepInMiniscope-Learned-Integrated-Miniscope)
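A quick sanity check on the downloaded 2D test data can follow the layout the README documents for `Xt` (720 x 720 sub-FOV patches in (batch, height, width, FOV) order). The sketch below is an assumed helper, not part of the released code; the path is illustrative and must point at the unzipped `data` folder.

```python
import numpy as np
from scipy.io import loadmat

def validate_patch_stack(Xt, hw=720):
    """Check that a patch stack follows the (batch, height, width, FOV)
    layout documented in the README; return the number of sub-FOV patches."""
    if Xt.ndim != 4 or Xt.shape[1] != hw or Xt.shape[2] != hw:
        raise ValueError(f"unexpected patch layout: {Xt.shape}")
    return Xt.shape[3]

def load_2d_patches(path):
    """Load Xt from data_2d_lenstissue.mat (path assumed, e.g.
    'data/2D_lenstissue/data_2d_lenstissue.mat' after unzipping)."""
    Xt = loadmat(path)["Xt"]
    validate_patch_stack(Xt)
    return Xt
```

For the lens-tissue test case, `validate_patch_stack` should report 108 patches per the README.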
  3. Light-field fluorescence microscopy uniquely provides fast, synchronous volumetric imaging by capturing an extended volume in one snapshot, but often suffers from low contrast due to the background signal generated by its wide-field illumination strategy. We implemented light-field-based selective volume illumination microscopy (SVIM), where illumination is confined to only the volume of interest, removing the background generated from the extraneous sample volume and dramatically enhancing the image contrast. We demonstrate the capabilities of SVIM by capturing cellular-resolution 3D movies of flowing bacteria in seawater as they colonize their squid symbiotic partner, as well as of the beating heart and brain-wide neural activity in larval zebrafish. These applications demonstrate the breadth of imaging applications that we envision SVIM will enable, in capturing tissue-scale 3D dynamic biological systems at single-cell resolution, fast volumetric rates, and high contrast to reveal the underlying biology.
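The contrast argument behind SVIM reduces to how many planes contribute defocused background. The toy model below is a back-of-the-envelope sketch, not the paper's analysis; the plane counts and per-plane background level are assumptions chosen only to show the scaling.

```python
# Toy signal-to-background model: wide-field illumination excites every plane
# in the sample, while selective volume illumination excites only the planes
# inside the volume of interest. Numbers are illustrative assumptions.
signal = 1.0          # in-focus signal from the plane being imaged (a.u.)
per_plane_bg = 0.05   # assumed defocused background contributed per lit plane

def contrast(lit_planes):
    """Signal-to-background ratio for a given number of excited planes."""
    return signal / (per_plane_bg * lit_planes)

widefield_contrast = contrast(100)  # whole 100-plane sample excited
svim_contrast = contrast(10)        # only a 10-plane volume of interest excited
```

In this model the contrast gain is simply the ratio of total planes to selectively illuminated planes, which is why confining the illumination volume pays off most in thick, densely labeled samples.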
  4. Stimulated Raman projection tomography is a label-free volumetric chemical imaging technology allowing three-dimensional (3D) reconstruction of the chemical distribution in a biological sample from angle-dependent stimulated Raman scattering projection images. However, the projection image acquisition process requires rotating the sample contained in a capillary glass held by a complicated sample rotation stage, limiting the volumetric imaging speed and inhibiting the study of living samples. Here, we report a tilt-angle stimulated Raman projection tomography (TSPRT) system which acquires angle-dependent projection images by utilizing tilt-angle beams to image the sample from different azimuth angles sequentially. The TSPRT system, which is free of sample rotation, enables rapid scanning of different views by a tailor-designed four-galvo-mirror scanning system. We present the design of the optical system, the theory, and the calibration procedure for chemical tomographic reconstruction. 3D vibrational images of polystyrene beads and C. elegans are demonstrated in the C-H vibrational region.
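Reconstruction from angle-dependent projections, as used here, follows the standard tomography recipe. The sketch below is a generic parallel-beam projection/backprojection toy in NumPy/SciPy, assuming simple rotate-and-sum geometry and unfiltered backprojection; it does not model the tilt-angle beam geometry or calibration specific to TSPRT.

```python
import numpy as np
from scipy.ndimage import rotate

def project(img, angles_deg):
    """Parallel-beam projections: rotate the sample, then integrate along rows
    (a toy stand-in for acquiring angle-dependent projection images)."""
    return [rotate(img, a, reshape=False, order=1).sum(axis=0)
            for a in angles_deg]

def backproject(projections, angles_deg, size):
    """Unfiltered backprojection: smear each 1D projection across the image
    and counter-rotate it back into the sample frame, then average."""
    recon = np.zeros((size, size))
    for a, p in zip(angles_deg, projections):
        recon += rotate(np.tile(p, (size, 1)), -a, reshape=False, order=1)
    return recon / len(angles_deg)
```

A practical pipeline would apply a ramp filter to each projection before backprojection (filtered backprojection) to undo the 1/r blur that the unfiltered version leaves behind.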
  5. We exploit memory-effect correlations in speckles for the imaging of incoherent fluorescent sources behind scattering tissue. These correlations are often weak when imaging through thick scattering tissue and under complex illumination patterns, both of which greatly limit the practicality of the associated techniques. In this work, we introduce a spatial light modulator between the tissue sample and the imaging sensor and capture multiple modulations of the speckle pattern. We show that by correctly designing the modulation patterns and the associated reconstruction algorithm, statistical correlations in the measurements can be greatly enhanced. We exploit this to demonstrate the reconstruction of megapixel-sized fluorescent patterns behind the scattering tissue.
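The memory-effect approach rests on a simple identity: within the memory-effect range, the camera image is the object convolved with a shift-invariant speckle pattern, so the image autocorrelation approximates the object autocorrelation (up to a flat background), from which the object can be recovered by iterative phase retrieval. The snippet below sketches only the autocorrelation step via the Wiener-Khinchin theorem; the SLM modulation scheme that enhances these correlations in the abstract is not modeled.

```python
import numpy as np

def autocorr(img):
    """FFT-based autocorrelation (Wiener-Khinchin): remove the mean,
    take |FFT|^2, inverse-transform, and center the zero-lag peak."""
    f = np.fft.fft2(img - img.mean())
    return np.fft.fftshift(np.fft.ifft2(np.abs(f) ** 2).real)
```

Shift invariance is the key property: two copies of the same object at different positions behind the scatterer yield the same autocorrelation, which is what lets a single calibration-free measurement constrain the hidden object.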