Title: A hierarchical convolutional neural network for mitosis detection in phase-contrast microscopy images
We propose a Hierarchical Convolutional Neural Network (HCNN) for mitosis event detection in time-lapse phase-contrast microscopy. Our method has two stages: first, we extract candidate spatio-temporal patch sequences from the input image sequences that potentially contain mitosis events; then, we determine whether each patch sequence contains a mitosis event using a hierarchical convolutional neural network. In the experiments, we validate the design of the proposed architecture and evaluate mitosis event detection performance. Our method achieves 99.1% precision and 97.2% recall on very challenging image sequences of multipolar-shaped C3H10T1/2 mesenchymal stem cells and outperforms other state-of-the-art methods. Furthermore, the proposed method does not depend on hand-crafted feature design or cell tracking, and it can be straightforwardly adapted to event detection for other cell types.
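Below is a minimal sketch of the second-stage classifier idea, not the authors' exact HCNN: a small CNN that scores one candidate spatio-temporal patch sequence as mitosis or non-mitosis by stacking the frames of the patch as input channels. The frame count, patch size, layer widths, and the choice of PyTorch are illustrative assumptions.

```python
# Hedged sketch (assumed sizes, not the paper's HCNN): classify a candidate
# spatio-temporal patch sequence as mitosis / non-mitosis.
import torch
import torch.nn as nn

class PatchSequenceClassifier(nn.Module):
    def __init__(self, num_frames=5, patch_size=52):
        super().__init__()
        # Treat the temporal frames of one candidate patch as input channels.
        self.features = nn.Sequential(
            nn.Conv2d(num_frames, 32, kernel_size=5), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(self._feat_dim(num_frames, patch_size), 128), nn.ReLU(),
            nn.Linear(128, 2),            # mitosis vs. non-mitosis logits
        )

    def _feat_dim(self, num_frames, patch_size):
        # Probe the conv stack once to size the first fully connected layer.
        with torch.no_grad():
            x = torch.zeros(1, num_frames, patch_size, patch_size)
            return self.features(x).numel()

    def forward(self, x):                 # x: (batch, frames, H, W)
        return self.classifier(self.features(x))

# Usage: score one 5-frame candidate patch sequence.
model = PatchSequenceClassifier()
candidate = torch.randn(1, 5, 52, 52)
print(model(candidate).softmax(dim=1))    # [P(non-mitosis), P(mitosis)]
```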
Award ID(s):
1355406
PAR ID:
10023449
Author(s) / Creator(s):
Date Published:
Journal Name:
MICCAI 2016: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Monitoring and tracking of cell motion is a key component for understanding disease mechanisms and evaluating the effects of treatments. Time-lapse optical microscopy has been commonly employed for studying cell cycle phases. However, manual cell tracking is very time-consuming and has poor reproducibility. Automated cell tracking techniques are challenged by the variability of cell region intensity distributions and by resolution limitations. In this work, we introduce a comprehensive cell segmentation and tracking methodology. A key contribution of this work is that it employs multi-scale space-time interest point detection and characterization for automatic scale selection and cell segmentation. Another contribution is the use of a neural network with class prototype balancing for detection of cell regions. This work also offers a structured mathematical framework that uses graphs for track generation and cell event detection. We evaluated the cell segmentation, detection, and tracking performance of our method on time-lapse sequences of the Cell Tracking Challenge (CTC), and compared our technique to top-performing techniques from the CTC. Performance evaluation results indicate that the proposed methodology is competitive with these techniques and that it generalizes well to diverse cell types and sizes and to multiple imaging techniques.
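As an illustration of the graph-based track generation idea above, the sketch below links detections across consecutive frames into a directed graph and reads candidate tracks off its connected components. This is not the paper's framework; the nearest-neighbor linking rule and the distance gate are assumptions.

```python
# Hedged sketch: detections become graph nodes, frame-to-frame links become
# edges, and each weakly connected component is read out as one candidate track.
import networkx as nx
import numpy as np

def build_tracks(detections, max_dist=20.0):
    """detections: iterable of (frame, det_id, x, y) tuples."""
    g = nx.DiGraph()
    by_frame = {}
    for frame, det_id, x, y in detections:
        g.add_node((frame, det_id), pos=(x, y))
        by_frame.setdefault(frame, []).append((frame, det_id, x, y))
    # Link each detection to nearby detections in the next frame.
    for frame in sorted(by_frame)[:-1]:
        for f1, id1, x1, y1 in by_frame[frame]:
            for f2, id2, x2, y2 in by_frame.get(frame + 1, []):
                if np.hypot(x2 - x1, y2 - y1) <= max_dist:
                    g.add_edge((f1, id1), (f2, id2))
    return [sorted(c) for c in nx.weakly_connected_components(g)]

# Toy example: one cell moving over three frames, plus an isolated detection.
dets = [(0, 0, 10, 10), (1, 0, 12, 11), (2, 0, 15, 13), (0, 1, 50, 50)]
print(build_tracks(dets))
```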
  2. Existing neural cell tracking methods generally use morphological cell features for data association. However, these features are limited by the quality of cell segmentation and are prone to errors in mitosis determination. To overcome these issues, in this work we propose an online multi-object tracking method that leverages both cell appearance and motion features for data association. In particular, we propose a supervised blob-seed network (BSNet) to predict the cell appearance features and an unsupervised optical flow network (UnFlowNet) to capture the cell motions. The data association is then solved using the Hungarian algorithm. Experimental evaluation shows that our approach achieves better performance than existing neural cell tracking methods.
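A hedged sketch of the Hungarian-algorithm association step mentioned above. The cost definitions, the equal appearance/motion weighting, and the use of SciPy's linear_sum_assignment are assumptions for illustration, not the paper's BSNet/UnFlowNet pipeline.

```python
# Combine appearance and motion costs into one matrix and solve the optimal
# track-to-detection assignment with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(appearance_cost, motion_cost, w_app=0.5, w_mot=0.5):
    """Both cost arrays have shape (num_tracks, num_detections)."""
    cost = w_app * appearance_cost + w_mot * motion_cost
    track_idx, det_idx = linear_sum_assignment(cost)
    return list(zip(track_idx, det_idx))

# Toy example: 3 existing tracks, 3 new detections.
app = np.random.rand(3, 3)   # e.g. distances between appearance features
mot = np.random.rand(3, 3)   # e.g. distances between flow-predicted positions
print(associate(app, mot))   # [(0, j0), (1, j1), (2, j2)]
```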
  3. Most modern commodity imaging systems, whether used directly for photography or relied on indirectly for downstream applications, employ optical systems of multiple lenses that must balance deviations from perfect optics, manufacturing constraints, tolerances, cost, and footprint. Although optical designs often have complex interactions with downstream image processing or analysis tasks, today's compound optics are designed in isolation from these interactions. Existing optical design tools aim to minimize optical aberrations, such as deviations from Gauss' linear model of optics, instead of application-specific losses, precluding joint optimization with hardware image signal processing (ISP) and highly parameterized neural network processing. In this article, we propose an optimization method for compound optics that lifts these limitations. We optimize entire lens systems jointly with hardware and software image processing pipelines, downstream neural network processing, and application-specific end-to-end losses. To this end, we propose a learned, differentiable forward model for compound optics and an alternating proximal optimization method that handles function compositions with highly varying parameter dimensions for optics, hardware ISP, and neural nets. Our method integrates seamlessly atop existing optical design tools, such as Zemax. We can thus assess our method across many camera system designs and end-to-end applications. We validate our approach in an automotive camera optics setting, together with hardware ISP post-processing and detection, outperforming classical optics designs for automotive object detection and traffic light state detection. For human viewing tasks, we optimize optics and processing pipelines for dynamic outdoor scenarios and dynamic low-light imaging. We outperform existing compartmentalized design or fine-tuning methods qualitatively and quantitatively across all domain-specific applications tested.
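The alternating-optimization idea can be shown with a toy stand-in; this is not the paper's differentiable optics model or its Zemax integration, and the modules, sizes, and simple every-other-step alternation are assumptions (the proximal terms are omitted). Gradient steps alternate between "optics" parameters and processing-network parameters against one shared end-to-end loss.

```python
# Toy alternating optimization of an "optics" stand-in and a processing net.
import torch
import torch.nn as nn

optics = nn.Conv2d(1, 1, kernel_size=9, padding=4)      # stand-in for a lens/PSF model
network = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 1, 3, padding=1))   # stand-in for ISP + neural net
opt_optics = torch.optim.Adam(optics.parameters(), lr=1e-3)
opt_net = torch.optim.Adam(network.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

scene = torch.randn(4, 1, 32, 32)                        # toy ground-truth images
for step in range(10):
    opt_optics.zero_grad()
    opt_net.zero_grad()
    measured = optics(scene)                             # simulated capture
    restored = network(measured)
    loss = loss_fn(restored, scene)                      # shared end-to-end loss
    loss.backward()
    # Alternate which parameter block is updated each step.
    (opt_optics if step % 2 == 0 else opt_net).step()
```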
  4. This paper delves into the frequency analysis of image datasets and neural networks, particularly Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs), and reveals the alignment property between datasets and network architecture design. Our analysis suggests that the frequency statistics of image datasets and the learning behavior of neural networks are intertwined. Based on this observation, our main contribution consists of a new framework for network optimization that guides the design process by adjusting the network's depth and width to align the frequency characteristics of untrained models with those of trained models. Our frequency analysis framework can be used to design neural networks with better trade-offs between performance and model size. Our results on ImageNet-1k image classification, CIFAR-100 image classification, and MS-COCO object detection and instance segmentation benchmarks show that our method is broadly applicable and can improve network architecture performance. Our investigation into the alignment between the frequency characteristics of image datasets and network architecture opens up a new direction in model analysis that can be used to design more efficient networks.
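As a hedged illustration of the kind of frequency statistic such an analysis might compare across datasets or feature maps (the radially averaged power spectrum below is an assumption, not the paper's framework):

```python
# Radially averaged power spectrum of an image: average FFT power over rings
# of equal spatial frequency, giving a 1-D frequency profile.
import numpy as np

def radial_power_spectrum(img):
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)

img = np.random.rand(64, 64)           # stand-in for a dataset image
profile = radial_power_spectrum(img)
print(profile[:5])                     # low-frequency end of the profile
```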
  5. Modeling temporal event sequences on the vertices of a network is an important problem with widespread applications; examples include modeling influences in social networks, preventing crimes by modeling their space-time occurrences, and forecasting earthquakes. Existing solutions for this problem use a parametric approach whose applicability is limited to event sequences following some well-known distributions, which is not true for many real-life event datasets. To overcome this limitation, in this work we propose a composite recurrent neural network model for learning events occurring on the vertices of a network over time. Our proposed model combines two long short-term memory units to capture the base intensity and the conditional intensity of an event sequence. We also introduce a second-order statistic loss that penalizes divergence between the generated and target sequences' distributions of hop-count distances between consecutive events. Given a sequence of vertices of a network in which an event has occurred, the proposed model predicts the vertex where the next event is most likely to occur. Experimental results on synthetic and real-world datasets validate the superiority of our proposed model over various baseline methods.
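A minimal sketch of a two-LSTM composite model for next-vertex prediction, loosely following the description above. The embedding and hidden sizes and the way the two streams are combined are assumptions, and the second-order statistic loss is omitted.

```python
# Two LSTM streams (loosely mirroring base vs. conditional intensity) whose
# final states are combined to score which vertex the next event lands on.
import torch
import torch.nn as nn

class CompositeEventModel(nn.Module):
    def __init__(self, num_vertices, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_vertices, embed_dim)
        self.base_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.cond_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, num_vertices)

    def forward(self, vertex_seq):                 # (batch, seq_len) vertex ids
        x = self.embed(vertex_seq)
        h_base, _ = self.base_lstm(x)
        h_cond, _ = self.cond_lstm(x)
        h = torch.cat([h_base[:, -1], h_cond[:, -1]], dim=-1)
        return self.out(h)                         # logits over the next vertex

model = CompositeEventModel(num_vertices=100)
history = torch.randint(0, 100, (1, 10))           # one sequence of 10 event vertices
print(model(history).softmax(dim=-1).shape)        # (1, 100) next-vertex distribution
```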