

Title: Rotation equivariant and invariant neural networks for microscopy image analysis
Abstract

Motivation

Neural networks have been widely used to analyze high-throughput microscopy images. However, the performance of neural networks can be significantly improved by encoding known invariance for particular tasks. Highly relevant to the goal of automated cell phenotyping from microscopy image data is rotation invariance. Here we consider the application of two schemes for encoding rotation equivariance and invariance in a convolutional neural network, namely, the group-equivariant CNN (G-CNN), and a new architecture with simple, efficient conic convolution, for classifying microscopy images. We additionally integrate the 2D-discrete-Fourier transform (2D-DFT) as an effective means for encoding global rotational invariance. We call our new method the Conic Convolution and DFT Network (CFNet).
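The key mechanism behind the 2D-DFT step is that the DFT magnitude is invariant to cyclic shifts: if a filter is applied at a set of discrete orientations, rotating the input cyclically permutes the per-orientation responses, and taking the DFT magnitude over that axis removes the permutation. The sketch below illustrates this property in a simplified 1D form (the function name and sizes are illustrative, not taken from CFNet's implementation):

```python
import numpy as np

def rotation_pooled_features(responses):
    """Collapse filter responses indexed by rotation angle into
    rotation-invariant features via DFT magnitude.

    `responses` is a length-R vector: the same filter applied at R
    discrete orientations. Rotating the input image cyclically shifts
    this vector, and the DFT magnitude is invariant to cyclic shifts.
    """
    return np.abs(np.fft.fft(responses))

# Toy check: a rotated input permutes the per-orientation responses,
# but the pooled features are unchanged.
responses = np.array([0.2, 1.3, -0.5, 0.9])   # upright input
rotated = np.roll(responses, 1)               # input rotated by one angular step
print(np.allclose(rotation_pooled_features(responses),
                  rotation_pooled_features(rotated)))  # True
```

CFNet applies this idea with a 2D-DFT over feature maps produced by conic convolutions; the 1D version above only demonstrates why the magnitude spectrum yields the desired invariance.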

Results

We evaluated the efficacy of CFNet and G-CNN as compared to a standard CNN for several different image classification tasks, including simulated and real microscopy images of subcellular protein localization, and demonstrated improved performance. We believe CFNet has the potential to improve many high-throughput microscopy image analysis applications.

Availability and implementation

Source code of CFNet is available at: https://github.com/bchidest/CFNet.

Supplementary information

Supplementary data are available at Bioinformatics online.

 
Award ID(s): 1717205
NSF-PAR ID: 10425987
Publisher / Repository: Oxford University Press
Journal Name: Bioinformatics
Volume: 35
Issue: 14
ISSN: 1367-4803
Pages: i530-i537
Sponsoring Org: National Science Foundation
More Like This
  1. Abstract

    Motivation

    Best performing named entity recognition (NER) methods for biomedical literature are based on hand-crafted features or task-specific rules, which are costly to produce and difficult to generalize to other corpora. End-to-end neural networks achieve state-of-the-art performance without hand-crafted features and task-specific knowledge in non-biomedical NER tasks. However, in the biomedical domain, using the same architecture does not yield competitive performance compared with conventional machine learning models.

    Results

    We propose a novel end-to-end deep learning approach for biomedical NER tasks that leverages the local contexts based on n-gram character and word embeddings via Convolutional Neural Network (CNN). We call this approach GRAM-CNN. To automatically label a word, this method uses the local information around a word. Therefore, the GRAM-CNN method does not require any specific knowledge or feature engineering and can be theoretically applied to a wide range of existing NER problems. The GRAM-CNN approach was evaluated on three well-known biomedical datasets containing different BioNER entities. It obtained an F1-score of 87.26% on the Biocreative II dataset, 87.26% on the NCBI dataset and 72.57% on the JNLPBA dataset. Those results put GRAM-CNN in the lead of the biological NER methods. To the best of our knowledge, we are the first to apply CNN based structures to BioNER problems.
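The core of the approach described above is extracting word-level features from character n-grams with a convolution followed by max-pooling, so that a fixed-size feature is produced regardless of word length. A minimal sketch of that idea follows; all sizes, the random embeddings, and the function name are illustrative assumptions, not GRAM-CNN's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not the paper's): 16-dim char embeddings,
# trigram filters, 8 output channels.
CHAR_DIM, NGRAM, CHANNELS = 16, 3, 8
char_emb = rng.normal(size=(128, CHAR_DIM))           # one row per ASCII char
filters = rng.normal(size=(CHANNELS, NGRAM * CHAR_DIM))

def word_features(word):
    """Character-level CNN feature for a single word: slide a trigram
    window over the character embeddings, then max-pool over positions,
    so the output size is independent of word length.
    (Assumes len(word) >= NGRAM; padding is omitted for brevity.)"""
    chars = char_emb[[ord(c) % 128 for c in word]]    # (len(word), CHAR_DIM)
    windows = np.stack([chars[i:i + NGRAM].ravel()
                        for i in range(len(word) - NGRAM + 1)])
    return (windows @ filters.T).max(axis=0)          # (CHANNELS,)

print(word_features("protein").shape)  # (8,)
```

Because the pooled feature has a fixed size, it can be concatenated with a word embedding and fed to the downstream tagger, which is what lets the method avoid hand-crafted features.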

    Availability and implementation

    The GRAM-CNN source code, datasets and pre-trained model are available online at: https://github.com/valdersoul/GRAM-CNN.

    Supplementary information

    Supplementary data are available at Bioinformatics online.

     
  2. Abstract

    High-throughput materials research is needed to accelerate the development of safe, high-energy-density lithium-ion batteries (LIBs) for electric vehicles and energy storage systems. Artificial intelligence, including machine learning with neural networks such as Boltzmann neural networks and convolutional neural networks (CNNs), is a powerful tool for exploring next-generation electrode materials and functional additives. In this paper, we develop a prediction model that classifies the major composition (e.g., 333, 523, 622, and 811) and different states (e.g., pristine, pre-cycled, and 100 times cycled) of various Li(Ni, Co, Mn)O2 (NCM) cathodes via a CNN trained on scanning electron microscopy (SEM) images. Our trained CNN model achieves a high accuracy of 99.6% on a test set of 3840 images. In addition, the model can be applied to untrained SEM data of NCM cathodes with functional electrolyte additives.

     
  3. Purpose

    We introduce and validate a scalable retrospective motion correction technique for brain imaging that incorporates a machine learning component into a model‐based motion minimization.

    Methods

    A convolutional neural network (CNN) trained to remove motion artifacts from 2D T2‐weighted rapid acquisition with refocused echoes (RARE) images is introduced into a model‐based data‐consistency optimization to jointly search for 2D motion parameters and the uncorrupted image. Our separable motion model allows for efficient intrashot (line‐by‐line) motion correction of highly corrupted shots, as opposed to previous methods which do not scale well with this refinement of the motion model. Final image generation incorporates the motion parameters within a model‐based image reconstruction. The method is tested in simulations and in vivo motion experiments of in‐plane motion corruption.
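The data-consistency search at the heart of this kind of method can be illustrated with a toy 1D analogue: for each candidate motion parameter, simulate the corrupted measurement from the current image estimate and keep the parameter with the smallest residual. The sketch below uses an integer-shift motion model and a direct FFT "acquisition", both stand-ins for the paper's 2D RARE model; in the actual method the reference image would come from the CNN's artifact-reduced output:

```python
import numpy as np

def estimate_shift(corrupted_kspace, reference_image, n_shifts=8):
    """Toy 1-D data-consistency search over candidate motion parameters
    (here, integer shifts). For each candidate, predict the measurement
    from the reference image and score it against the corrupted data;
    return the best-fitting motion parameter."""
    best_shift, best_err = 0, np.inf
    for s in range(n_shifts):
        predicted = np.fft.fft(np.roll(reference_image, s))
        err = np.linalg.norm(predicted - corrupted_kspace)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

# Simulate: the object moved by 3 samples during acquisition.
image = np.sin(np.linspace(0, 4 * np.pi, 64))
measured = np.fft.fft(np.roll(image, 3))
print(estimate_shift(measured, image))  # 3
```

The paper's contribution is making this joint search separable (line-by-line within shots) and using the CNN output to guide convergence; the exhaustive toy search above only shows the residual-minimization principle.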

    Results

    While the convolutional neural network alone provides some motion mitigation (at the expense of introduced blurring), allowing it to guide the iterative joint‐optimization both improves the search convergence and renders the joint‐optimization separable. This enables rapid mitigation within shots in addition to between shots. For 2D in‐plane motion correction experiments, the result is a significant reduction of both image space root mean square error in simulations, and a reduction of motion artifacts in the in vivo motion tests.

    Conclusion

    The separability and convergence improvements afforded by the combined convolutional neural network + model‐based method show the potential for meaningful postacquisition motion mitigation in clinical MRI.

     
  4. Background

    Quantitative analysis of mitochondrial morphology plays important roles in studies of mitochondrial biology. The analysis depends critically on segmentation of mitochondria, the image analysis process of extracting mitochondrial morphology from images. The main goal of this study is to characterize the performance of convolutional neural networks (CNNs) in segmentation of mitochondria from fluorescence microscopy images. Recently, CNNs have achieved remarkable success in challenging image segmentation tasks in several disciplines. So far, however, our knowledge of their performance in segmenting biological images remains limited. In particular, we know little about their robustness, which defines their capability of segmenting biological images of different conditions, and their sensitivity, which defines their capability of detecting subtle morphological changes of biological objects.

    Methods

    We have developed a method that uses realistic synthetic images of different conditions to characterize the robustness and sensitivity of CNNs in segmentation of mitochondria. Using this method, we compared performance of two widely adopted CNNs: the fully convolutional network (FCN) and the U‐Net. We further compared the two networks against the adaptive active‐mask (AAM) algorithm, a representative of high‐performance conventional segmentation algorithms.
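Characterizing segmentation accuracy against synthetic ground truth requires an overlap measure between predicted and true masks. The Dice coefficient below is one standard choice for this kind of comparison (the metric and the toy masks are illustrative; the study's exact evaluation criteria may differ):

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary masks: 2*|A∩B| / (|A|+|B|).
    Used here as an illustrative accuracy measure for comparing a
    predicted mitochondrial segmentation against a known synthetic mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

truth = np.zeros((8, 8), dtype=int); truth[2:6, 2:6] = 1  # 16-pixel object
pred = np.zeros((8, 8), dtype=int); pred[3:6, 2:6] = 1    # 12 pixels, all inside
print(round(dice(pred, truth), 3))  # 0.857
```

Robustness and sensitivity can then be quantified by tracking how such a score degrades across synthetic images of different conditions or with subtly perturbed object shapes.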

    Results

    The FCN and the U‐Net consistently outperformed the AAM in accuracy, robustness, and sensitivity, often by a significant margin. The U‐Net provided the best overall performance.

    Conclusions

    Our study demonstrates superior performance of the U‐Net and the FCN in segmentation of mitochondria. It also provides quantitative measurements of the robustness and sensitivity of these networks that are essential to their applications in quantitative analysis of mitochondrial morphology.

     
  5.
    The success of supervised learning requires large-scale ground-truth labels, which are expensive and time-consuming to produce and may require special annotation skills. To address this issue, many self-supervised or unsupervised methods have been developed. Unlike most existing self-supervised methods, which learn only 2D image features or only 3D point cloud features, this paper presents a novel and effective self-supervised learning approach that jointly learns both 2D image features and 3D point cloud features by exploiting cross-modality and cross-view correspondences, without using any human-annotated labels. Specifically, 2D image features of rendered images from different views are extracted by a 2D convolutional neural network, and 3D point cloud features are extracted by a graph convolutional neural network. The two types of features are fed into a two-layer fully connected neural network to estimate the cross-modality correspondence. The three networks are jointly trained (i.e., cross-modality) by verifying whether two sampled data of different modalities belong to the same object; meanwhile, the 2D convolutional neural network is additionally optimized by minimizing intra-object distance while maximizing inter-object distance of rendered images in different views (i.e., cross-view). The effectiveness of the learned 2D and 3D features is evaluated by transferring them to five different tasks: multi-view 2D shape recognition, 3D shape recognition, multi-view 2D shape retrieval, 3D shape retrieval, and 3D part segmentation. Extensive evaluations on all five tasks across different datasets demonstrate the strong generalization and effectiveness of the 2D and 3D features learned by the proposed self-supervised method.
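    The cross-view objective described above — pulling embeddings of different rendered views of the same object together while pushing different objects apart — can be sketched as a simple contrastive loss. The function below is an illustrative form (the margin value and loss shape are assumptions, not necessarily the paper's exact formulation):

```python
import numpy as np

def cross_view_loss(feat_a, feat_b, same_object, margin=1.0):
    """Contrastive loss sketch for the cross-view objective: minimize
    distance between views of the same object; for different objects,
    penalize only pairs closer than `margin`."""
    d = np.linalg.norm(feat_a - feat_b)
    return d ** 2 if same_object else max(0.0, margin - d) ** 2

a = np.array([0.1, 0.9])    # view 1 of object A
b = np.array([0.1, 0.9])    # view 2 of object A (already aligned)
c = np.array([2.0, -1.0])   # view of object B, farther than the margin
print(cross_view_loss(a, b, True))    # 0.0
print(cross_view_loss(a, c, False))   # 0.0
```

    The cross-modality branch plays a complementary role: rather than a distance in embedding space, a small fully connected network scores whether a 2D feature and a 3D feature come from the same object, and all three networks are trained against that binary verification signal.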