Title: Content-aware frame interpolation (CAFI): deep learning-based temporal super-resolution for fast bioimaging
Abstract The development of high-resolution microscopes has made it possible to investigate cellular processes in 3D and over time. However, observing fast cellular dynamics remains challenging because of photobleaching and phototoxicity. Here we report the implementation of two content-aware frame interpolation (CAFI) deep learning networks, Zooming SlowMo and Depth-Aware Video Frame Interpolation, that are highly suited for accurately predicting images in between image pairs, therefore improving the temporal resolution of image series post-acquisition. We show that CAFI is capable of understanding the motion context of biological structures and can perform better than standard interpolation methods. We benchmark CAFI’s performance on 12 different datasets, obtained from four different microscopy modalities, and demonstrate its capabilities for single-particle tracking and nuclear segmentation. CAFI potentially allows for reduced light exposure and phototoxicity on the sample for improved long-term live-cell imaging. The models and the training and testing data are available via the ZeroCostDL4Mic platform.
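The abstract benchmarks CAFI against "standard interpolation methods." A minimal sketch of such a content-unaware baseline, assuming grayscale frames stored as a numpy `(T, H, W)` stack (the function names and array layout are illustrative, not from the paper):

```python
import numpy as np

def linear_midframe(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Naive temporal interpolation: pixel-wise average of two frames.

    This is the kind of content-unaware baseline that learned
    interpolators such as CAFI are compared against; it blurs moving
    structures instead of predicting their motion.
    """
    return 0.5 * (frame_a.astype(np.float64) + frame_b.astype(np.float64))

def double_temporal_resolution(stack: np.ndarray) -> np.ndarray:
    """Insert one interpolated frame between every consecutive pair of
    a (T, H, W) image stack, yielding 2*T - 1 frames."""
    frames = [stack[0].astype(np.float64)]
    for a, b in zip(stack[:-1], stack[1:]):
        frames.append(linear_midframe(a, b))
        frames.append(b.astype(np.float64))
    return np.stack(frames)
```

A learned interpolator replaces `linear_midframe` with a network that predicts where structures move between the two input frames, rather than averaging them in place.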
Award ID(s):
2014862
PAR ID:
10523515
Publisher / Repository:
Nature Methods
Date Published:
Journal Name:
Nature Methods
Volume:
21
Issue:
2
ISSN:
1548-7091
Page Range / eLocation ID:
322 to 330
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Abstract Object tracking in microscopy videos is crucial for understanding biological processes. While existing methods often require fine-tuning tracking algorithms to fit the image dataset, here we explored an alternative paradigm: augmenting the image time-lapse dataset to fit the tracking algorithm. To test this approach, we evaluated whether generative video frame interpolation can augment the temporal resolution of time-lapse microscopy and facilitate object tracking in multiple biological contexts. We systematically compared the capacity of Latent Diffusion Model for Video Frame Interpolation (LDMVFI), Real-time Intermediate Flow Estimation (RIFE), Compression-Driven Frame Interpolation (CDFI), and Frame Interpolation for Large Motion (FILM) to generate synthetic microscopy images derived from interpolating real images. Our testing image time series ranged from fluorescently labeled nuclei to bacteria, yeast, cancer cells, and organoids. We showed that the off-the-shelf frame interpolation algorithms produced bio-realistic image interpolation even without dataset-specific retraining, as judged by high structural image similarity and the capacity to produce segmentations that closely resemble results from real images. Using a simple tracking algorithm based on mask overlap, we confirmed that frame interpolation significantly improved tracking across several datasets without requiring extensive parameter tuning, while capturing complex trajectories that were difficult to resolve in the original image time series. Taken together, our findings highlight the potential of generative frame interpolation to improve tracking in time-lapse microscopy across diverse scenarios, suggesting that a generalist tracking algorithm for microscopy could be developed by combining deep learning segmentation models with generative frame interpolation.
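The abstract describes "a simple tracking algorithm based on mask overlap." A minimal sketch of such overlap-based linking between two labeled segmentation frames, assuming integer label images with 0 as background (the function name and return format are illustrative, not from the paper):

```python
import numpy as np

def link_by_overlap(labels_t: np.ndarray, labels_t1: np.ndarray) -> dict:
    """Link labeled objects across two frames by maximum mask overlap.

    labels_t and labels_t1 are integer label images (0 = background).
    Returns {label_in_frame_t: best_matching_label_in_frame_t1}.
    Interpolated intermediate frames shrink the apparent displacement
    between successive frames, so the same object's masks overlap
    more, making this kind of linking more reliable.
    """
    links = {}
    for lab in np.unique(labels_t):
        if lab == 0:
            continue
        under_mask = labels_t1[labels_t == lab]   # labels beneath this object
        under_mask = under_mask[under_mask != 0]  # drop background pixels
        if under_mask.size:
            vals, counts = np.unique(under_mask, return_counts=True)
            links[int(lab)] = int(vals[np.argmax(counts)])
    return links
```

A full tracker would additionally handle unmatched objects (appearance, disappearance, division), but frame-to-frame linking by maximal overlap is the core step.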
  2. Abstract Since its first demonstration over 100 years ago, scattering‐based light‐sheet microscopy has recently re‐emerged as a key modality in label‐free tissue imaging and cellular morphometry; however, scattering‐based light‐sheet imaging with subcellular resolution remains an unmet target. This is because related approaches inevitably superimpose speckle or granular intensity modulation on to the native subcellular features. Here, we addressed this challenge by deploying a time‐averaged pseudo‐thermalized light‐sheet illumination. While this approach increased the lateral dimensions of the illumination sheet, we achieved subcellular resolving power after image deconvolution. We validated this approach by imaging cytosolic carbon depots in yeast and bacteria with increased specificity, no staining, and ultralow irradiance levels. Overall, we expect this scattering‐based light‐sheet microscopy approach will advance single, live cell imaging by conferring low‐irradiance and label‐free operation towards eradicating phototoxicity. 
  3. Though recent years have witnessed remarkable progress in single image super-resolution (SISR) tasks with the prosperous development of deep neural networks (DNNs), the deep learning methods are confronted with the computation and memory consumption issues in practice, especially for resource-limited platforms such as mobile devices. To overcome the challenge and facilitate the real-time deployment of SISR tasks on mobile, we combine neural architecture search with pruning search and propose an automatic search framework that derives sparse super-resolution (SR) models with high image quality while satisfying the real-time inference requirement. To decrease the search cost, we leverage the weight sharing strategy by introducing a supernet and decouple the search problem into three stages, including supernet construction, compiler-aware architecture and pruning search, and compiler-aware pruning ratio search. With the proposed framework, we are the first to achieve real-time SR inference (with only tens of milliseconds per frame) for implementing 720p resolution with competitive image quality (in terms of PSNR and SSIM) on mobile platforms (Samsung Galaxy S20). 
  4. Abstract There is a significant gap in cost-effective quantitative phase microscopy (QPM) systems for studying dynamic cellular processes while maintaining accuracy for long-term cellular monitoring. Current QPM systems often rely on complex and expensive voltage-controllable components like Spatial Light Modulators or two-beam interferometry. To address this, we introduce a QPM system optimized for time-varying phase samples using azobenzene liquid crystal as a Zernike filter with a polarization-sensing camera. This system operates without input voltage or moving components, reducing complexity and cost. Optimized for gentle illumination to minimize phototoxicity, it achieves a 1 Hz frame rate for prolonged monitoring. The system demonstrated accuracy with a maximum standard deviation of ±42 nm and low noise fluctuations of ±2.5 nm. Designed for simplicity and single-shot operations, our QPM system is efficient, robust, and precisely calibrated for reliable measurements. Using inexpensive optical components, it offers an economical solution for long-term, noninvasive biological monitoring and research applications. 
  5. Abstract Superresolution is the general task of artificially increasing the spatial resolution of an image. The recent surge in machine learning (ML) research has yielded many promising ML-based approaches for performing single-image superresolution including applications to satellite remote sensing. We develop a convolutional neural network (CNN) to superresolve the 1- and 2-km bands on the GOES-R series Advanced Baseline Imager (ABI) to a common high resolution of 0.5 km. Access to 0.5-km imagery from ABI band 2 enables the CNN to realistically sharpen lower-resolution bands without significant blurring. We first train the CNN on a proxy task, which allows us to only use ABI imagery, namely, degrading the resolution of ABI bands and training the CNN to restore the original imagery. Comparisons at reduced resolution and at full resolution with Landsat-8/Landsat-9 observations illustrate that the CNN produces images with realistic high-frequency detail that is not present in a bicubic interpolation baseline. Estimating all ABI bands at 0.5-km resolution allows for more easily combining information across bands without reconciling differences in spatial resolution. However, more analysis is needed to determine impacts on derived products or multispectral imagery that use superresolved bands. This approach is extensible to other remote sensing instruments that have bands with different spatial resolutions and requires only a small amount of data and knowledge of each channel’s modulation transfer function. Significance Statement: Satellite remote sensing instruments often have bands with different spatial resolutions. This work shows that we can artificially increase the resolution of some lower-resolution bands by taking advantage of the texture of higher-resolution bands on the GOES-16 ABI instrument using a convolutional neural network. This may help reconcile differences in spatial resolution when combining information across bands, but future analysis is needed to precisely determine impacts on derived products that might use superresolved bands.
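The proxy task described above (degrade, then train to restore) can be sketched as a training-pair generator. This is a minimal illustration assuming a 2D numpy image and simple block-average downsampling; the actual work degrades imagery consistently with each channel's modulation transfer function, which this stand-in does not model:

```python
import numpy as np

def make_proxy_pair(hr: np.ndarray, factor: int = 2):
    """Build a (degraded, target) pair for the self-supervised proxy task:
    block-average a high-resolution image down by `factor`, so a CNN can
    be trained to restore the original. No external low-resolution data
    is needed, which is why the proxy task can use ABI imagery alone."""
    h, w = hr.shape
    assert h % factor == 0 and w % factor == 0, "image must tile evenly"
    lr = hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return lr, hr
```

Training then minimizes the reconstruction error between the CNN's output on `lr` and the target `hr`; at inference time the same network is applied to the genuinely lower-resolution bands.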