
Title: Phasetime: Deep Learning Approach to Detect Nuclei in Time Lapse Phase Images
Time-lapse microscopy is essential for quantifying the dynamics of cells, subcellular organelles, and biomolecules. Biologists use fluorescent tags to label and track subcellular structures and biomolecules within cells. However, not all tags are compatible with time-lapse imaging, and the labeling itself can perturb cells in undesirable ways. We hypothesized that phase images contain the information required to identify and track nuclei within cells. By combining traditional blob detection, used to generate binary mask labels from the stained-channel images, with a Mask R-CNN detection and segmentation model, we segmented nuclei from phase images alone. The detection average precision is 0.82 at an IoU threshold of 0.5, and the mean IoU between masks generated from phase images and expert-annotated ground-truth masks is 0.735. Given that no ground-truth mask labels were used during training, these results support our hypothesis and enable the detection of nuclei without exogenous labeling.
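As a concrete illustration of the evaluation metric above, the sketch below computes mask IoU in NumPy; a detection counts as a true positive when its IoU reaches the reported 0.5 threshold. The masks here are invented for the example, not data from the study.

```python
import numpy as np

def mask_iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 0.0

# Toy 4x4 masks: predicted nucleus covers 4 pixels, true nucleus 6.
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True
truth = np.zeros((4, 4), dtype=bool); truth[1:3, 1:4] = True
print(mask_iou(pred, truth))  # 4/6 ≈ 0.667, so a match at the 0.5 threshold
```

The reported average precision then follows from counting such matches across all detections, ranked by confidence.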
Journal Name:
Journal of Clinical Medicine
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Quantitative phase microscopy (QPM) enables studies of living biological systems without exogenous labels. To increase the utility of QPM, machine-learning methods have been adapted to extract additional information from the quantitative phase data. Previous QPM approaches have focused either on fluid-flow systems, which provide high-throughput data for cells at single time points, or on time-lapse images that require delayed post-experiment analyses. To date, QPM studies have not imaged specific cells over time with rapid, concurrent analyses during image acquisition. To study biological phenomena or cellular interactions over time, efficient time-dependent methods that automatically and rapidly identify events of interest are desirable. Here, we present an approach that combines QPM and machine learning to rapidly identify tumor-reactive T cell killing of adherent cancer cells, which could be used for identifying and isolating novel T cells and/or their T cell receptors for studies in cancer immunotherapy. We demonstrate the utility of this method by training and validating a machine-learning model on one melanoma-cognate T cell receptor model system, followed by high classification accuracy in identifying T cell killing in an additional, independent melanoma-cognate T cell receptor model system. This general approach could be useful for studying additional biological systems under label-free conditions over extended periods of examination.

  2. Observable reading behavior, the act of moving the eyes over lines of text, is highly stereotyped among the users of a language, and this has led to the development of reading detectors: methods that input windows of sequential fixations and output predictions of whether the fixation behavior during those windows is reading or skimming. The present study introduces a new method for reading detection using Region Ranking SVM (RRSVM). An SVM-based classifier learns the local oculomotor features that are important for real-time reading detection while optimizing for the global reading/skimming classification, making it unnecessary to hand-label local fixation windows for model training. This RRSVM reading detector was trained and evaluated using eye movement data collected in a laboratory context, where participants viewed modified web news articles and had to either read them carefully for comprehension or skim them quickly to select keywords (separate groups). Ground-truth labels were known at the global level (the instructed reading or skimming task) and obtained at the local level in a separate rating task. The RRSVM reading detector accurately predicted 82.5% of the global (article-level) reading/skimming behavior, with accuracy in predicting local window labels ranging from 72-95%, depending on how the RRSVM was tuned for local and global weights. With this RRSVM reading detector, a method now exists for near real-time reading detection without the need for hand-labeling of local fixation windows. With real-time reading detection capability comes the potential for applications ranging from education and training to intelligent interfaces that learn what a user is likely to know based on previous detection of their reading behavior.
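The pooling idea behind RRSVM inference, scoring local fixation windows with a linear model and combining the top-ranked scores into one global reading/skimming decision, can be sketched roughly as follows. The feature layout, weights, and top-k pooling here are illustrative assumptions, not the trained detector from the study.

```python
import numpy as np

def global_reading_score(window_feats, w, region_weights):
    """Sketch of region-ranking inference: score each fixation window
    with a linear model, sort scores descending, and pool the top-k
    windows with fixed rank weights into one global score."""
    scores = window_feats @ w                  # local window scores
    ranked = np.sort(scores)[::-1]             # best windows first
    k = len(region_weights)
    return float(ranked[:k] @ region_weights)  # weighted top-k pooling

rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 4))         # 10 windows x 4 oculomotor features (hypothetical)
w = np.array([0.5, -0.2, 0.1, 0.3])      # hypothetical learned local weights
g = global_reading_score(feats, w, np.array([0.6, 0.3, 0.1]))
label = "reading" if g > 0 else "skimming"
```

Because the global score is built from ranked local scores, training on global labels alone still forces the model to learn useful local window weights, which is why hand-labeling local windows is unnecessary.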
  3. Nowadays, to assess and document construction and building performance, large amounts of visual data are captured and stored through camera-equipped platforms such as wearable cameras, unmanned aerial/ground vehicles, and smartphones. However, because such visual data are recorded nonstop, not all frames in the captured footage are intentionally taken, and thus not every frame is worth processing for construction and building performance analysis. Since many frames will simply have non-construction-related content, before processing the visual data, the content of each recorded frame would otherwise have to be manually investigated for its relevance to the goal of the visual assessment. To address these challenges, this paper aims to automatically filter construction big visual data without human annotations. To overcome the challenges of a purely discriminative approach using manually labeled images, we construct a generative model with an unlabeled visual dataset and use it to find construction-related frames in big visual data from jobsites. First, through composition-based snap point detection together with domain adaptation, we filter out most of the accidentally recorded frames in the footage. Then, we create a discriminative classifier trained with visual data from jobsites to eliminate non-construction-related images. To evaluate the reliability of the proposed method, we obtained ground truth based on human judgment for each photo in our testing dataset. Despite learning without any explicit labels, the proposed method shows a reasonable practical range of accuracy, which generally outperforms prior snap point detection. Through case studies, the fidelity of the algorithm is discussed in detail.
By being able to focus on selective visual data, practitioners will spend less time browsing large amounts of visual data and more time leveraging the visual data to facilitate decision-making in built environments.
  4. In the past two decades, spectral imaging technologies have expanded the capacity of fluorescence microscopy for accurate detection of multiple labels, separation of labels from cellular and tissue autofluorescence, and analysis of autofluorescence signatures. These technologies have been implemented using a range of optical techniques, such as tunable filters, diffraction gratings, prisms, interferometry, and custom Bayer filters. Each of these techniques has associated strengths and weaknesses with regard to spectral resolution, spatial resolution, temporal resolution, and signal-to-noise characteristics. We have previously shown that spectral scanning of the fluorescence excitation spectrum can provide greatly increased signal strength compared to traditional emission-scanning approaches. Here, we present results from utilizing a Hyperspectral Imaging Fluorescence Excitation Scanning (HIFEX) microscope system for live cell imaging. Live cell signaling studies were performed using HEK 293 and rat pulmonary microvascular endothelial cells (PMVECs), transfected with either a cAMP FRET reporter or a Ca2+ reporter. Cells were further labeled to visualize subcellular structures (nuclei, membrane, mitochondria, etc.). Spectral images were acquired using a custom inverted microscope (TE2000, Nikon Instruments) equipped with a 300 W Xe arc lamp and a tunable excitation filter (VF-5, Sutter Instrument Co., equipped with VersaChrome filters, Semrock), controlled through MicroManager. Time-lapse spectral images were acquired from 350-550 nm in 5 nm increments. Spectral image data were linearly unmixed using custom MATLAB scripts. Results indicate that the HIFEX microscope system can acquire live cell image data at acquisition speeds of 8 ms/wavelength band with minimal photobleaching, sufficient for studying moderate-speed cAMP and Ca2+ events.
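The linear unmixing step mentioned above can be illustrated with a minimal NumPy stand-in for the custom MATLAB scripts: each measured pixel spectrum is modeled as a non-negative combination of known endmember spectra, recovered by least squares. The endmember spectra and mixture below are invented for the example.

```python
import numpy as np

def unmix(pixel_spectra, endmembers):
    """Linear spectral unmixing sketch.
    pixel_spectra: (n_pixels, n_bands); endmembers: (n_labels, n_bands).
    Returns least-squares abundances, clipped to be non-negative."""
    coeffs, *_ = np.linalg.lstsq(endmembers.T, pixel_spectra.T, rcond=None)
    return np.clip(coeffs.T, 0.0, None)  # (n_pixels, n_labels)

# Two made-up endmember excitation spectra over 5 bands, mixed 70/30:
E = np.array([[1.0, 0.8, 0.4, 0.1, 0.0],
              [0.0, 0.2, 0.6, 0.9, 1.0]])
mixed = 0.7 * E[0] + 0.3 * E[1]
print(unmix(mixed[None, :], E))  # ≈ [[0.7, 0.3]]
```

In practice a constrained solver (e.g. non-negative least squares) is typically preferred over clipping, since clipping can bias abundances when spectra overlap strongly.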
  5. Microscopic evaluation of resected tissue plays a central role in the surgical management of cancer. Because optical microscopes have a limited depth-of-field (DOF), resected tissue is either frozen or preserved with chemical fixatives, sliced into thin sections placed on microscope slides, stained, and imaged to determine whether surgical margins are free of tumor cells, a costly, time-intensive, and labor-intensive procedure. Here, we introduce a deep-learning extended-DOF (DeepDOF) microscope to quickly image large areas of freshly resected tissue and provide histologic-quality images of surgical margins without physical sectioning. The DeepDOF microscope consists of a conventional fluorescence microscope with the simple addition of an inexpensive (less than $10) phase mask inserted in the pupil plane to encode the light field and enhance the depth invariance of the point-spread function. When used with a jointly optimized image-reconstruction algorithm, diffraction-limited optical performance to resolve subcellular features can be maintained while significantly extending the DOF (200 µm). Data from resected oral surgical specimens show that the DeepDOF microscope can consistently visualize nuclear morphology and other important diagnostic features across highly irregular resected tissue surfaces without serial refocusing. With the capability to quickly scan intact samples with subcellular detail, the DeepDOF microscope can improve tissue sampling during intraoperative tumor-margin assessment, while offering an affordable tool for providing histological information from resected tissue specimens in resource-limited settings.
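The role of the pupil-plane phase mask can be illustrated with a classic wavefront-coding sketch. Note that the DeepDOF mask is jointly optimized with the reconstruction algorithm; the cubic phase profile below is only a standard textbook example of how a pupil phase profile reshapes the point-spread function, and all parameter values are arbitrary.

```python
import numpy as np

# Build a coded pupil function and compute its intensity PSF via FFT.
n = 64
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= 1.0)              # circular aperture
alpha = 20.0                              # mask strength (arbitrary)
phase = alpha * (X**3 + Y**3)             # cubic phase profile (illustrative)
field = pupil * np.exp(1j * phase)        # pupil function with phase mask
psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
psf /= psf.sum()                          # normalized intensity PSF
```

With `alpha = 0` this reduces to the ordinary diffraction-limited PSF; a nonzero phase profile spreads the PSF in a way that varies little with defocus, which is what allows a reconstruction algorithm to recover sharp images across an extended DOF.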