Title: A deep learned nanowire segmentation model using synthetic data augmentation
Abstract: Automated particle segmentation and feature analysis of experimental image data are indispensable for data-driven materials science. Deep learning-based image segmentation algorithms are promising techniques for this goal but are challenging to use because they require a large number of training images. In the present work, synthetic images that resemble the experimental images in their geometrical and visual features are used to train a state-of-the-art Mask region-based convolutional neural network (Mask R-CNN) to segment vanadium pentoxide nanowires, a cathode material, within optical density-based images acquired using spectromicroscopy. The results demonstrate instance segmentation of complex, overlapping nanowire networks in real optical intensity-based spectromicroscopy images and provide reliable statistical information. The model can further segment nanowires in scanning electron microscopy images, which differ fundamentally from the training dataset known to the model. The proposed methodology can be extended to optical intensity-based images of variable particle morphology, material class, and beyond.
Award ID(s):
2038625
PAR ID:
10381649
Author(s) / Creator(s):
; ; ; ; ;
Publisher / Repository:
Nature Publishing Group
Date Published:
Journal Name:
npj Computational Materials
Volume:
8
Issue:
1
ISSN:
2057-3960
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. 3D instance segmentation for unlabeled imaging modalities is a challenging but essential task as collecting expert annotation can be expensive and time-consuming. Existing works segment a new modality by either deploying pre-trained models optimized on diverse training data or sequentially conducting image translation and segmentation with two relatively independent networks. In this work, we propose a novel Cyclic Segmentation Generative Adversarial Network (CySGAN) that conducts image translation and instance segmentation simultaneously using a unified network with weight sharing. Since the image translation layer can be removed at inference time, our proposed model does not introduce additional computational cost upon a standard segmentation model. For optimizing CySGAN, besides the CycleGAN losses for image translation and supervised losses for the annotated source domain, we also utilize self-supervised and segmentation-based adversarial objectives to enhance the model performance by leveraging unlabeled target domain images. We benchmark our approach on the task of 3D neuronal nuclei segmentation with annotated electron microscopy (EM) images and unlabeled expansion microscopy (ExM) data. The proposed CySGAN outperforms pre-trained generalist models, feature-level domain adaptation models, and the baselines that conduct image translation and segmentation sequentially. Our implementation and the newly collected, densely annotated ExM zebrafish brain nuclei dataset, named NucExM, are publicly available at https://connectomics-bazaar.github.io/proj/CySGAN/index.html. 
  2. This paper proposes a modified method for training tool segmentation networks for endoscopic images by parsing the training images into two disjoint sets: one of rectangular representations of endoscopic images and one of polar representations. Previous work [1], [2] demonstrated that certain endoscopic images are better segmented by a U-Net trained on the original rectangular representation alone, while others are better segmented with polar representations. This work extends that observation to the training images and seeks to intelligently decompose the aggregate training data into disjoint image sets: one ideal for training a network to segment original, rectangular endoscopic images and the other for training a polar segmentation network. The training-set decomposition consists of three stages: (1) initial data split and models, (2) image reallocation and transition mechanisms with retraining, and (3) evaluation. In stage (2), two separate frameworks for parsing polar vs. rectangular training images were investigated, with three switching metrics utilized in both. Experiments comparatively evaluated the segmentation performance (via the Sørensen-Dice coefficient) of the in-group and out-of-group images between the set-decomposed models. Results are encouraging, showing improved aggregate in-group Dice scores as well as image sets trending toward convergence.
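The stage-(2) reallocation described above can be sketched in a few lines. This is a hypothetical illustration with an assumed switching metric (per-image Dice under each current model; the paper investigates three such metrics, not specified here):

```python
def reallocate_by_dice(names, dice_rect, dice_polar):
    """Send each training image to the set whose current model segments
    it better, judged by per-image Dice score. Ties stay rectangular.
    (Hypothetical switching rule, for illustration only.)"""
    rect_set, polar_set = [], []
    for name, d_r, d_p in zip(names, dice_rect, dice_polar):
        (rect_set if d_r >= d_p else polar_set).append(name)
    return rect_set, polar_set
```

Iterating this rule with retraining is what drives the two sets toward the convergence the abstract reports.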
  3. Segmentation of echocardiograms plays an essential role in the quantitative analysis of the heart and helps diagnose cardiac diseases. In the past decade, deep learning-based approaches have significantly improved the performance of echocardiogram segmentation. Most deep learning-based methods assume that the image to be processed is rectangular in shape. However, echocardiogram images are typically formed within a sector of a circle, leaving a significant region of the overall rectangular image with no data, a result of the ultrasound imaging methodology. This large non-imaging region can influence the training of deep neural networks. In this paper, we propose to use polar transformation to help train deep learning algorithms. Using the r-θ transformation, a significant portion of the non-imaging background is removed, allowing the neural network to focus on the heart image. The segmentation model is trained on both x-y and r-θ images. During inference, the predictions from the x-y and r-θ images are combined using max-voting. We verify the efficacy of our method on the CAMUS dataset with a variety of segmentation networks, encoder networks, and loss functions. The experimental results demonstrate the effectiveness and versatility of our proposed method for improving the segmentation results.
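The r-θ transformation at the heart of that abstract is a resampling of the x-y image onto a polar grid centred on the sector apex or image centre. A minimal nearest-neighbour sketch (assumed centring and sampling; production code would use bilinear interpolation, e.g. OpenCV's polar remapping):

```python
import numpy as np

def to_polar(image, n_r=None, n_theta=360):
    """Resample a 2-D x-y image onto an (r, theta) grid by nearest-neighbour
    lookup, centred on the image centre. Rows index radius, columns angle."""
    h, w = image.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    n_r = n_r or min(h, w) // 2
    r = np.linspace(0, min(cy, cx), n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return image[ys, xs]          # shape (n_r, n_theta)
```

In the r-θ view, the sector-shaped scan becomes a roughly rectangular band, so far less of the network's input is non-imaging background.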
  4. Abstract: Computational modeling of cardiovascular function has become a critical part of diagnosing, treating and understanding cardiovascular disease. Most strategies involve constructing anatomically accurate computer models of cardiovascular structures, which is a multistep, time-consuming process. To improve the model generation process, we herein present SeqSeg (sequential segmentation): a novel deep learning-based automatic tracing and segmentation algorithm for constructing image-based vascular models. SeqSeg leverages local U-Net-based inference to sequentially segment vascular structures from medical image volumes. We tested SeqSeg on CT and MR images of aortic and aortofemoral models and compared the predictions to those of benchmark 2D and 3D global nnU-Net models, which have previously shown excellent accuracy for medical image segmentation. We demonstrate that SeqSeg is able to segment more complete vasculature and is able to generalize to vascular structures not annotated in the training data.
  5. Electron microscopy images of carbon nanotube (CNT) forests are difficult to segment due to the long and thin nature of the CNTs; the density of the CNT forests, which results in CNTs touching, crossing, and occluding each other; and the low signal-to-noise ratio of electron microscopy imagery. In addition, due to image complexity, it is not feasible to prepare training segmentation masks. In this paper, we propose CNTSegNet, a dual-loss, orientation-guided, self-supervised, deep learning network for CNT forest segmentation in scanning electron microscopy (SEM) images. Our training labels consist of weak segmentation labels produced by intensity thresholding of the raw SEM images and self-labels produced by estimating the orientation distribution of CNTs in these raw images. The proposed network extends a U-Net-like encoder-decoder architecture with a novel two-component loss function. The first component is a Dice loss computed between the predicted segmentation maps and the weak segmentation labels. The second component is a mean squared error (MSE) loss measuring the difference between the orientation histogram of the predicted segmentation map and that of the original raw image. A weighted sum of these two loss functions is used to train the proposed CNTSegNet network. The Dice loss forces the network to perform background-foreground segmentation using local intensity features. The MSE loss guides the network with global orientation features and leads to refined segmentation results. The proposed system needs only a few-shot dataset for training. Thanks to its self-supervised nature, it can easily be adapted to new datasets.
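The two-component loss in that last abstract combines a local intensity term with a global orientation term. A numpy sketch under stated assumptions: `orientation_histogram` here uses gradient orientations weighted by gradient magnitude as a simple stand-in for the paper's CNT orientation-distribution estimate, and the weights `w_dice`/`w_mse` are illustrative, not the published values.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a weak
    (thresholded) binary label; 0 when they match perfectly."""
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def orientation_histogram(image, n_bins=18):
    """Magnitude-weighted histogram of gradient orientations in [0, pi),
    normalized to sum to ~1 -- a simple orientation-distribution proxy."""
    gy, gx = np.gradient(image.astype(float))
    angles = np.arctan2(gy, gx) % np.pi
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(angles, bins=n_bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-6)

def dual_loss(pred, weak_label, raw_image, w_dice=1.0, w_mse=1.0):
    """Weighted sum of the local Dice term and the global orientation term."""
    mse = np.mean((orientation_histogram(pred)
                   - orientation_histogram(raw_image)) ** 2)
    return w_dice * dice_loss(pred, weak_label) + w_mse * mse
```

The Dice term anchors the prediction to the thresholded intensity labels, while the orientation term penalizes predictions whose global directional statistics drift from those of the raw image.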