Search for: All records

Creators/Authors contains: "Calyam, Prasad"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Free, publicly-accessible full text available March 27, 2024
  2. Free, publicly-accessible full text available May 20, 2024
  3. Free, publicly-accessible full text available May 1, 2024
  4. Electron microscopy images of carbon nanotube (CNT) forests are difficult to segment due to the long and thin nature of the CNTs; the density of the CNT forests, which results in CNTs touching, crossing, and occluding each other; and the low signal-to-noise ratio of electron microscopy imagery. In addition, due to image complexity, it is not feasible to prepare training segmentation masks. In this paper, we propose CNTSegNet, a dual loss, orientation-guided, self-supervised, deep learning network for CNT forest segmentation in scanning electron microscopy (SEM) images. Our training labels consist of weak segmentation labels produced by intensity thresholding of the raw SEM images and self labels produced by estimating the orientation distribution of CNTs in these raw images. The proposed network extends a U-net-like encoder-decoder architecture with a novel two-component loss function. The first component is a dice loss computed between the predicted segmentation maps and the weak segmentation labels. The second component is a mean squared error (MSE) loss measuring the difference between the orientation histogram of the predicted segmentation map and that of the original raw image. A weighted sum of these two loss functions is used to train the proposed CNTSegNet network. The dice loss forces the network to perform background-foreground segmentation using local intensity features. The MSE loss guides the network with global orientation features and leads to refined segmentation results. The proposed system needs only a few-shot dataset for training. Thanks to its self-supervised nature, it can easily be adapted to new datasets.
    Free, publicly-accessible full text available February 12, 2024
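     The dual loss described in the abstract above can be sketched in a few lines. This is a rough illustration, not the paper's implementation: it assumes NumPy arrays in [0, 1], and the `orientation_histogram` helper is a hypothetical stand-in (a gradient-magnitude-weighted histogram of gradient angles) for the paper's orientation-distribution estimate.

     ```python
     import numpy as np

     def dice_loss(pred, weak_label, eps=1e-7):
         """1 - Dice coefficient between a predicted map and a weak label."""
         inter = np.sum(pred * weak_label)
         return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(weak_label) + eps)

     def orientation_histogram(image, bins=18):
         """Normalized histogram of local gradient orientations.

         Hypothetical stand-in for the paper's orientation estimator:
         angles from image gradients, weighted by gradient magnitude.
         """
         gy, gx = np.gradient(image.astype(float))
         angles = np.arctan2(gy, gx)          # orientation in [-pi, pi]
         weights = np.hypot(gx, gy)           # weight by edge strength
         hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi),
                                weights=weights)
         total = hist.sum()
         return hist / total if total > 0 else hist

     def dual_loss(pred, weak_label, raw_image, w_dice=1.0, w_mse=1.0):
         """Weighted sum of the dice loss (local intensity supervision) and the
         MSE between orientation histograms (global orientation guidance)."""
         mse = np.mean((orientation_histogram(pred)
                        - orientation_histogram(raw_image)) ** 2)
         return w_dice * dice_loss(pred, weak_label) + w_mse * mse
     ```

     A perfect prediction drives both terms to zero, while the weights `w_dice` and `w_mse` trade off intensity fidelity against orientation consistency.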
  5. Carbon nanotube (CNT) forests are imaged using scanning electron microscopes (SEMs) that project their multilayered 3D structure into a single 2D image. Image analytics, particularly instance segmentation, is needed to quantify structural characteristics and to predict correlations between structural morphology and physical properties. The inherent complexity of individual CNT structures is further increased in CNT forests due to the density of CNTs, interactions between CNTs, occlusions, and the lack of 3D information to resolve correspondences when multiple CNTs from different depths appear to cross in 2D. In this paper, we propose CNT-NeRF, a generative adversarial network (GAN) for simultaneous depth layer decomposition and segmentation of CNT forests in SEM images. The proposed network is trained using a multi-layer, photo-realistic synthetic dataset obtained by transferring the style of real CNT images to physics-based simulation data. Experiments show promising depth layer decomposition and accurate CNT segmentation results not only for the front layer but also for the partially occluded middle and back layers. This achievement is a significant step towards automated, image-based CNT forest structure characterization and physical property prediction.
    Free, publicly-accessible full text available January 1, 2024
  6. Free, publicly-accessible full text available February 20, 2024
  7. Current scientific experiments frequently involve control of specialized instruments (e.g., scanning electron microscopes), image data collection from those instruments, and transfer of the data for processing at simulation centers. This process requires a "human-in-the-loop" to perform those tasks manually, which, besides requiring significant effort and time, can lead to inconsistencies or errors. Thus, it is essential to have an automated system capable of performing remote instrumentation to intelligently control and collect data from the scientific instruments. In this paper, we propose a Remote Instrumentation Science Environment (RISE) for intelligent image analytics that provides the infrastructure to securely capture images, determine process parameters via machine learning, and provide experimental control actions via automation, under the premise of "human-on-the-loop". The machine learning in RISE aids an iterative discovery process to assist researchers in tuning instrument settings to improve the outcomes of experiments. Driven by two scientific use cases of image analytics pipelines, one in materials science and another in biomedical science, we show how RISE automation leverages a cutting-edge integration of cloud computing, an on-premise HPC cluster, and a Python programming interface available on a microscope. Using web services, we implement RISE to perform automated image data collection/analysis guided by an intelligent agent that provides real-time feedback control of the microscope using the image analytics outputs. Our evaluation results show the benefits of RISE for researchers: higher image analytics accuracy and precious time saved in manually controlling the microscopes, while reducing errors in operating the instruments.
    Free, publicly-accessible full text available December 1, 2023
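     The iterative "human-on-the-loop" cycle that the abstract above describes (capture an image, analyze it with machine learning, feed control actions back to the instrument) can be sketched as a generic feedback loop. This is an illustrative sketch only: the `capture`, `analyze`, and `adjust` callables are hypothetical stand-ins, not RISE's actual microscope or web-service API.

     ```python
     def feedback_loop(capture, analyze, adjust, max_iters=10, target=0.9):
         """Automated instrument-tuning loop, human-on-the-loop style.

         Each iteration captures an image under the current settings, scores it
         with an analytics model, and either stops (quality target reached) or
         lets an intelligent agent adjust the settings. The operator supervises
         the loop rather than driving every step manually.
         Returns (final_settings, final_score, iterations_used).
         """
         settings = {"focus": 0.5}          # hypothetical initial settings
         score = 0.0
         for i in range(max_iters):
             image = capture(settings)      # acquire data from the instrument
             score = analyze(image)         # ML-based image analytics score
             if score >= target:
                 return settings, score, i + 1
             settings = adjust(settings, score)  # automated control action
         return settings, score, max_iters
     ```

     In practice the loop body would span the cloud/HPC boundary (capture on the microscope, analysis on the cluster), but the control flow is the same.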
  8. Free, publicly-accessible full text available November 24, 2023
  9. Free, publicly-accessible full text available October 1, 2023
  10. Free, publicly-accessible full text available December 1, 2023