Mask-based integrated fluorescence microscopy is a compact imaging technique for biomedical research. It can perform snapshot 3D imaging through a thin optical mask with a scalable field of view (FOV) and a thin device thickness. Integrated microscopy uses computational algorithms for object reconstruction, but efficient reconstruction algorithms for large-scale data have been lacking. Here, we developed DeepInMiniscope, a miniaturized integrated microscope featuring a custom-designed optical mask and a multi-stage physics-informed deep learning model. This model reduces computational resource demands by orders of magnitude and enables fast reconstruction. Our deep learning algorithm can reconstruct object volumes over 4×6×0.6 mm³. We demonstrated substantial improvement in both reconstruction quality and speed compared to traditional methods for large-scale data. Notably, we imaged neuronal activity with near-cellular resolution in awake mouse cortex, representing a substantial leap over existing integrated microscopes. DeepInMiniscope holds great promise for scalable, large-FOV, high-speed, 3D imaging applications with a compact device footprint.

# DeepInMiniscope: Deep-learning-powered physics-informed integrated miniscope

[https://doi.org/10.5061/dryad.6t1g1jx83](https://doi.org/10.5061/dryad.6t1g1jx83)

## Description of the data and file structure

### DeepInMiniscope: Learned Integrated Miniscope

### Datasets, models, and code for 2D and 3D sample reconstructions

The dataset for 2D reconstruction includes test data for green-stained lens tissue.

* Input: measured images of green fluorescent stained lens tissue, disassembled into sub-FOV patches.
* Output: reconstruction of the slide containing green lens tissue features.

The dataset for 3D reconstruction includes test data for 3D reconstruction of an in-vivo mouse brain video recording.

* Input: time-series standard deviation of the difference-to-local-mean weighted raw video.
* Output: reconstructed 4D volumetric video containing the 3-dimensional distribution of neural activities.

## Files and variables

### Download data, code, and sample results

1. Download data `data.zip`, code `code.zip`, and results `results.zip`.
2. Unzip the downloaded files and place them in the same main folder.
3. Confirm that the main folder contains three subfolders: `data`, `code`, and `results`. Inside the `data` and `code` folders, there should be subfolders for each test case.

## Data

### 2D_lenstissue

**data_2d_lenstissue.mat:** measured images of green fluorescent stained lens tissue, disassembled into sub-FOV patches.

* **Xt:** stacked 108 sub-FOV patches of the measured image, each centered at one microlens unit, with 720 × 720 pixels per patch. Data dimensions in order of (batch, height, width, FOV).
* **Yt:** placeholder variable for the reconstructed object, each patch centered at the corresponding microlens unit, with 180 × 180 voxels. Data dimensions in order of (batch, height, width, FOV).

**reconM_0308:** trained Multi-FOV ADMM-Net model for 2D lens tissue reconstruction.

**gen_lenstissue.mat:** generated lens tissue reconstruction produced by running the model with **2D_lenstissue.py**.

* **generated_images:** stacked 108 reconstructed FOVs of the lens tissue sample by Multi-FOV ADMM-Net; the assembled full-sample reconstruction is shown in `results/2D_lenstissue_reconstruction.png`.
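Before running the provided scripts, it can help to confirm that the 2D test data loads and matches the dimensions described above. Below is a minimal sketch; the path is an assumption based on the folder layout described under "Download data, code, and sample results", so adjust it to your unzip location.

```python
# Minimal sketch: load the 2D lens-tissue test data and check its layout.
# The path below is an assumption; adjust it to your unzipped data folder.
import scipy.io as sio

mat = sio.loadmat("data/2D_lenstissue/data_2d_lenstissue.mat")
Xt, Yt = mat["Xt"], mat["Yt"]

# Per the description above: Xt should be (batch, 720, 720, 108)
# and Yt (batch, 180, 180, 108), ordered as (batch, height, width, FOV).
print("Xt:", Xt.shape, Xt.dtype)
print("Yt:", Yt.shape, Yt.dtype)
```

Note that `scipy.io.loadmat` handles MAT-files up to v7; if a file was saved in MATLAB v7.3 (HDF5) format, it would need to be read with `h5py` instead.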
### 3D_mouse

**reconM_g704_z5_v4:** trained 3D Multi-FOV ADMM-Net model for 3D sample reconstructions.

**t_img_recd_video0003 24-04-04 18-31-11_abetterrecordlong_03560_1_290.mat:** time-series standard deviation of the difference-to-local-mean weighted raw video.

* **Xts:** test video with 290 frames; each frame contains 6 FOVs with 1408 × 1408 pixels per FOV. Data dimensions in order of (frames, height, width, FOV).

**gen_img_recd_video0003 24-04-04 18-31-11_abetterrecordlong_03560_1_290_v4.mat:** generated 4D volumetric video containing the 3-dimensional distribution of neural activities.

* **generated_images_fu:** frame-by-frame 3D reconstruction of the recorded video in uint8 format. Data dimensions in order of (batch, FOV, height, width, depth). Each frame contains 6 FOVs, and each FOV has 13 reconstruction depths with 416 × 416 voxels per depth.

Variables inside the saved-model subfolders (reconM_0308 and reconM_g704_z5_v4):

* **saved_model.pb:** model computation graph, including architecture and input/output definitions.
* **keras_metadata.pb:** Keras metadata for the saved model, including model class, training configuration, and custom objects.
* **assets:** external files for custom assets loaded during model training/inference. This folder is empty, as the model does not use custom assets.
* **variables.data-00000-of-00001:** numerical values of model weights and parameters.
* **variables.index:** index file that maps variable names to weight locations in the .data file.

## Code/software

### Set up the Python environment

1. Download and install the [Anaconda distribution](https://www.anaconda.com/download).
2. The code was tested with the following packages:
   * python=3.9.7
   * tensorflow=2.7.0
   * keras=2.7.0
   * matplotlib=3.4.3
   * scipy=1.7.1

## Code

**2D_lenstissue.py:** Python code for the Multi-FOV ADMM-Net model to generate 2D reconstruction results. The function of each script section is described at the beginning of the section.

**lenstissue_2D.m:** Matlab code to display the generated image and reassemble the sub-FOV patches.

**sup_psf.m:** Matlab script to load the microlens coordinate data and generate the PSF pattern.

**lenscoordinates.xls:** microlens unit coordinates table.

**3D mouse.py:** Python code for the Multi-FOV ADMM-Net model to generate 3D reconstruction results. The function of each script section is described at the beginning of the section.

**mouse_3D.m:** Matlab code to display the reconstructed neural-activity video and calculate temporal correlation.

## Access information

Other publicly accessible locations of the data:

* [https://github.com/Yang-Research-Laboratory/DeepInMiniscope-Learned-Integrated-Miniscope](https://github.com/Yang-Research-Laboratory/DeepInMiniscope-Learned-Integrated-Miniscope)
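As an end-to-end sanity check after setting up the environment, one can load the trained 2D model and run it on the test patches. This is only a sketch under assumed paths: `2D_lenstissue.py` is the authoritative driver, and if the SavedModel references custom objects, the definitions in that script are needed at load time.

```python
# Minimal sketch: reconstruct the 2D lens-tissue sample with the trained
# Multi-FOV ADMM-Net. Paths are assumptions; see 2D_lenstissue.py for the
# authoritative loading and reconstruction pipeline.
import scipy.io as sio
import tensorflow as tf

# Load the SavedModel directory (compile=False since we only run inference).
model = tf.keras.models.load_model("code/2D_lenstissue/reconM_0308", compile=False)
Xt = sio.loadmat("data/2D_lenstissue/data_2d_lenstissue.mat")["Xt"]

generated_images = model.predict(Xt)  # stacked sub-FOV reconstructions
print(generated_images.shape)         # expect 180 x 180 patches per FOV

# The sub-FOV patches can then be reassembled into the full image with
# lenstissue_2D.m, as done for results/2D_lenstissue_reconstruction.png.
```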
- Award ID(s): 1847141
- PAR ID: 10644634
- Publisher / Repository: AAAS
- Journal Name: Science Advances
- Volume: 11
- Issue: 37
- ISSN: 2375-2548
- Sponsoring Org: National Science Foundation