

Title: A deep learned nanowire segmentation model using synthetic data augmentation
Abstract

Automated particle segmentation and feature analysis of experimental image data are indispensable for data-driven materials science. Deep learning-based image segmentation algorithms are promising techniques for this goal but are challenging to deploy because they require a large number of training images. In the present work, synthetic images that resemble the experimental images in their geometrical and visual features are used to train a state-of-the-art Mask region-based convolutional neural network (Mask R-CNN) to segment vanadium pentoxide nanowires, a cathode material, within optical density-based images acquired using spectromicroscopy. The results demonstrate instance segmentation of complex, overlapping nanowire networks in real optical intensity-based spectromicroscopy images and provide reliable statistical information. The model can further segment nanowires in scanning electron microscopy images, which are fundamentally different from the training dataset known to the model. The proposed methodology can be extended to optical intensity-based images of variable particle morphology, material class, and beyond.
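To illustrate the data-synthesis idea described in the abstract, here is a minimal numpy sketch — hypothetical, not the authors' pipeline — that renders randomly oriented dark "nanowires" on a bright background, mimicking an optical-density image, and returns per-wire instance masks of the kind a Mask R-CNN training loop would consume. All names and parameter choices are illustrative.

```python
import numpy as np

def synthetic_nanowire_image(size=128, n_wires=5, width=3, rng=None):
    """Render randomly oriented straight 'nanowires' on a bright background
    and return the image plus one boolean instance mask per wire."""
    rng = np.random.default_rng(rng)
    img = np.ones((size, size), dtype=np.float32)   # bright background
    yy, xx = np.mgrid[0:size, 0:size]
    masks = []
    for _ in range(n_wires):
        cx, cy = rng.uniform(0, size, 2)            # a point on the wire axis
        theta = rng.uniform(0, np.pi)               # wire orientation
        # signed distance to the wire axis, and position along the axis
        dist = np.abs((xx - cx) * -np.sin(theta) + (yy - cy) * np.cos(theta))
        along = np.abs((xx - cx) * np.cos(theta) + (yy - cy) * np.sin(theta))
        length = rng.uniform(0.3, 0.9) * size
        mask = (dist < width) & (along < length / 2)
        img[mask] *= rng.uniform(0.3, 0.7)          # wires absorb light (darker)
        masks.append(mask)
    img += rng.normal(0, 0.02, img.shape).astype(np.float32)  # sensor noise
    return np.clip(img, 0, 1), np.stack(masks)
```

Because wires multiply the background intensity where they overlap, crossings come out darker than single wires, loosely reproducing the overlapped networks the abstract mentions.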

 
Award ID(s):
2038625
NSF-PAR ID:
10381649
Author(s) / Creator(s):
Publisher / Repository:
Nature Publishing Group
Date Published:
Journal Name:
npj Computational Materials
Volume:
8
Issue:
1
ISSN:
2057-3960
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. 3D instance segmentation for unlabeled imaging modalities is a challenging but essential task as collecting expert annotation can be expensive and time-consuming. Existing works segment a new modality by either deploying pre-trained models optimized on diverse training data or sequentially conducting image translation and segmentation with two relatively independent networks. In this work, we propose a novel Cyclic Segmentation Generative Adversarial Network (CySGAN) that conducts image translation and instance segmentation simultaneously using a unified network with weight sharing. Since the image translation layer can be removed at inference time, our proposed model does not introduce additional computational cost upon a standard segmentation model. For optimizing CySGAN, besides the CycleGAN losses for image translation and supervised losses for the annotated source domain, we also utilize self-supervised and segmentation-based adversarial objectives to enhance the model performance by leveraging unlabeled target domain images. We benchmark our approach on the task of 3D neuronal nuclei segmentation with annotated electron microscopy (EM) images and unlabeled expansion microscopy (ExM) data. The proposed CySGAN outperforms pre-trained generalist models, feature-level domain adaptation models, and the baselines that conduct image translation and segmentation sequentially. Our implementation and the newly collected, densely annotated ExM zebrafish brain nuclei dataset, named NucExM, are publicly available at https://connectomics-bazaar.github.io/proj/CySGAN/index.html. 
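As a sketch of how the CySGAN-style objective above combines its terms — a hypothetical simplification, not the authors' implementation, with the adversarial and self-supervised terms omitted — the total loss can be written as a weighted sum of cycle-consistency and supervised segmentation terms:

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two arrays."""
    return float(np.mean(np.abs(a - b)))

def cysgan_style_loss(x_src, x_cyc_src, y_tgt, y_cyc_tgt,
                      seg_pred, seg_gt, w_cycle=10.0, w_seg=1.0):
    """Weighted sum of cycle-consistency terms (both translation directions)
    and a supervised soft-Dice segmentation term on the annotated source
    domain. Adversarial/self-supervised terms are left out for brevity."""
    cycle = l1(x_src, x_cyc_src) + l1(y_tgt, y_cyc_tgt)
    inter = np.sum(seg_pred * seg_gt)
    dice = 1.0 - 2.0 * inter / (np.sum(seg_pred) + np.sum(seg_gt) + 1e-8)
    return w_cycle * cycle + w_seg * dice
```

With perfect cycle reconstructions and a perfect segmentation the loss goes to zero; the weights (here `w_cycle=10.0`, following common CycleGAN practice) trade translation fidelity against segmentation accuracy.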
  2. Context. With the development of large-aperture ground-based solar telescopes and adaptive optics systems, the resolution of the obtained solar images has become increasingly high. In high-resolution photospheric images, the fine structures of sunspots (umbra, penumbra, and light bridge) can be observed clearly. Research on the fine structures of sunspots can help us understand the evolution of solar magnetic fields and predict eruption phenomena that significantly impact the Earth, such as solar flares. Therefore, algorithms that automatically segment the fine structures of sunspots in high-resolution solar images will greatly facilitate the study of solar physics. Aims. This study aims to propose an automatic fine-structure segmentation method for sunspots that is accurate and requires little time. Methods. We used superpixel segmentation to preprocess a solar image. Next, intensity, texture, and spatial location information were used as features. Based on these features, a Gaussian mixture model was used to cluster the superpixels. According to the different intensity levels of the umbra, penumbra, and quiet photosphere, the clusters were classified into umbra, penumbra, and quiet-photosphere areas. Finally, a morphological method was used to extract the light-bridge area. Results. The experimental results show that the proposed method can segment the fine structures of sunspots quickly and accurately. In addition, the method can process high-resolution solar images from different solar telescopes and achieves satisfactory segmentation performance.
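The final classification step above — assigning clusters to umbra, penumbra, or quiet photosphere by their intensity level — can be sketched as follows. This is an illustrative stand-in for the paper's procedure (assuming exactly three clusters, ordered darkest to brightest); the function name and return shape are hypothetical.

```python
import numpy as np

def name_clusters(image, cluster_map):
    """Map each cluster id to a sunspot region by mean intensity:
    darkest -> umbra, middle -> penumbra, brightest -> quiet photosphere."""
    ids = np.unique(cluster_map)
    means = np.array([image[cluster_map == i].mean() for i in ids])
    order = ids[np.argsort(means)]           # cluster ids, dark to bright
    roles = ["umbra", "penumbra", "quiet_photosphere"]
    return {int(c): r for c, r in zip(order, roles)}
```

The ordering relies only on relative intensity, so it needs no absolute calibration — consistent with the abstract's claim of handling images from different telescopes.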
  3. Segmentation of multiple surfaces in optical coherence tomography (OCT) images is a challenging problem, further complicated by the frequent presence of weak boundaries, varying layer thicknesses, and mutual influence between adjacent surfaces. The traditional graph-based optimal surface segmentation method has proven its effectiveness with its ability to capture various surface priors in a uniform graph model. However, its efficacy heavily relies on handcrafted features that are used to define the surface cost for the “goodness” of a surface. Recently, deep learning (DL) is emerging as a powerful tool for medical image segmentation thanks to its superior feature learning capability. Unfortunately, due to the scarcity of training data in medical imaging, it is nontrivial for DL networks to implicitly learn the global structure of the target surfaces, including surface interactions. This study proposes to parameterize the surface cost functions in the graph model and leverage DL to learn those parameters. The multiple optimal surfaces are then simultaneously detected by minimizing the total surface cost while explicitly enforcing the mutual surface interaction constraints. The optimization problem is solved by the primal-dual interior-point method (IPM), which can be implemented by a layer of neural networks, enabling efficient end-to-end training of the whole network. Experiments on spectral-domain optical coherence tomography (SD-OCT) retinal layer segmentation demonstrated promising segmentation results with sub-pixel accuracy. 
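The graph-based optimal-surface idea above can be illustrated, in its simplest single-surface form, by a dynamic program that picks one row per image column minimizing a total surface cost under a smoothness constraint. This is a textbook-style sketch for intuition, not the paper's multi-surface primal-dual IPM formulation; `delta` bounds the row change between adjacent columns.

```python
import numpy as np

def optimal_surface(cost, delta=1):
    """Dynamic program for a single terrain-like surface: choose one row per
    column minimizing total cost, with |row[j+1] - row[j]| <= delta."""
    H, W = cost.shape
    dp = cost[:, 0].astype(float).copy()     # best cost ending at each row
    back = np.zeros((H, W), dtype=int)       # backpointers for reconstruction
    for j in range(1, W):
        new = np.full(H, np.inf)
        for r in range(H):
            lo, hi = max(0, r - delta), min(H, r + delta + 1)
            k = lo + int(np.argmin(dp[lo:hi]))   # best feasible predecessor
            new[r] = dp[k] + cost[r, j]
            back[r, j] = k
        dp = new
    r = int(np.argmin(dp))
    surf = [r]
    for j in range(W - 1, 0, -1):            # backtrack the optimal path
        r = int(back[r, j])
        surf.append(r)
    return surf[::-1]
```

Learning the per-pixel `cost` with a deep network, as the study proposes, slots directly into this structure: the optimizer stays the same while the costs become data-driven.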
  4. Abstract

    High-fidelity near-wall velocity measurements in wall-bounded fluid flows continue to pose a challenge, and the resulting limitations on available experimental data cloud our understanding of the near-wall velocity behavior in turbulent boundary layers. One of the challenges is the spatial averaging and limited spatial resolution inherent to cross-correlation-based particle image velocimetry (PIV) methods. To circumvent this difficulty, we implement an explicit no-slip boundary condition in a wavelet-based optical flow velocimetry (wOFV) method. It is found that the no-slip boundary condition on the velocity field can be implemented in wOFV by transforming the constraint to the wavelet domain through a series of algebraic linear transformations, which are formulated in terms of the known wavelet filter matrices, and then satisfying the resulting constraint on the wavelet coefficients using constrained optimization for the optical flow functional minimization. The developed method is then used to study the classical problem of a turbulent channel flow using synthetic data from a direct numerical simulation (DNS) and experimental particle image data from a zero pressure gradient, high Reynolds number turbulent boundary layer. The results obtained by successfully implementing the no-slip boundary condition are compared to velocity measurements from wOFV without the no-slip condition and to a commercial PIV code, using the velocity from the DNS as ground truth. It is found that wOFV with the no-slip condition successfully resolves the near-wall profile with enhanced accuracy compared to the other velocimetry methods, as well as other derived quantities such as wall shear and turbulent intensity, without sacrificing accuracy away from the wall, leading to state-of-the-art measurements in the y+ < 1 region of the turbulent boundary layer when applied to experimental particle images.

     
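The key step in the abstract above — expressing a no-slip (zero-velocity) constraint at the wall as a linear constraint on wavelet coefficients — can be demonstrated in one dimension with an orthonormal Haar transform. This is a minimal sketch under simplifying assumptions (one decomposition level, a single wall sample), not the paper's formulation; function names are illustrative.

```python
import numpy as np

def haar1(n):
    """One-level orthonormal Haar analysis matrix for even n
    (first n//2 rows: averages; last n//2 rows: differences)."""
    W = np.zeros((n, n))
    s = 1.0 / np.sqrt(2.0)
    for i in range(n // 2):
        W[i, 2 * i], W[i, 2 * i + 1] = s, s
        W[n // 2 + i, 2 * i], W[n // 2 + i, 2 * i + 1] = s, -s
    return W

def enforce_no_slip(coeffs, W):
    """Project wavelet coefficients onto the subspace where the reconstructed
    wall-adjacent sample v[0] = (W.T @ coeffs)[0] is exactly zero."""
    a = W[:, 0]                         # column mapping coeffs -> v[0]
    return coeffs - (a @ coeffs) * a    # orthogonal projection (||a|| = 1)
```

Because the transform is orthonormal, the projection zeroes the wall sample while leaving every other reconstructed sample untouched — the constrained optimization in the paper generalizes this to the full 2D velocity field.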
  5. ABSTRACT

    We introduce a new class of iterative image reconstruction algorithms for radio interferometry, at the interface of convex optimization and deep learning, inspired by plug-and-play methods. The approach consists in learning a prior image model by training a deep neural network (DNN) as a denoiser, and substituting it for the handcrafted proximal regularization operator of an optimization algorithm. The proposed AIRI (‘AI for Regularization in radio-interferometric Imaging’) framework, for imaging complex intensity structure with diffuse and faint emission from visibility data, inherits the robustness and interpretability of optimization, and the learning power and speed of networks. Our approach relies on three steps. First, we design a low dynamic range training data base from optical intensity images. Secondly, we train a DNN denoiser at a noise level inferred from the signal-to-noise ratio of the data. We use training losses enhanced with a non-expansiveness term ensuring algorithm convergence, and including on-the-fly data base dynamic range enhancement via exponentiation. Thirdly, we plug the learned denoiser into the forward–backward optimization algorithm, resulting in a simple iterative structure alternating a denoising step with a gradient-descent data-fidelity step. We have validated AIRI against clean, optimization algorithms of the SARA family, and a DNN trained to reconstruct the image directly from visibility data. Simulation results show that AIRI is competitive in imaging quality with SARA and its unconstrained forward–backward-based version uSARA, while providing significant acceleration. clean remains faster but offers lower quality. The end-to-end DNN offers further acceleration, but with far lower quality than AIRI.

     
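The third step of the AIRI approach above — plugging a learned denoiser into a forward-backward iteration — follows the standard plug-and-play pattern, sketched here in numpy. This is an illustrative skeleton, not the AIRI implementation: `A` stands in for the measurement operator, and `denoise` would be the trained DNN (a trivial identity is used in place of a real network).

```python
import numpy as np

def pnp_forward_backward(y, A, denoise, step=0.5, n_iter=50, x0=None):
    """Plug-and-play forward-backward iteration: a gradient step on the
    data-fidelity term (1/2)||A x - y||^2, then a denoiser step standing
    in for the proximal/regularization operator."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)        # gradient of the data-fidelity term
        x = denoise(x - step * grad)    # learned denoiser as implicit prior
    return x
```

The non-expansiveness term in the training loss mentioned in the abstract is what guarantees this simple alternation converges when `denoise` is a neural network rather than a true proximal operator.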