


Search for: All records

Creators/Authors contains: "Palaniappan, Kannappan"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Electron microscopy images of carbon nanotube (CNT) forests are difficult to segment because the CNTs are long and thin; the density of CNT forests causes CNTs to touch, cross, and occlude one another; and the electron microscopy imagery has a low signal-to-noise ratio. In addition, the complexity of the images makes manual preparation of training segmentation masks infeasible. In this paper, we propose CNTSegNet, a dual-loss, orientation-guided, self-supervised deep learning network for CNT forest segmentation in scanning electron microscopy (SEM) images. Our training labels consist of weak segmentation labels produced by intensity thresholding of the raw SEM images and self-labels produced by estimating the orientation distribution of CNTs in these raw images. The proposed network extends a U-net-like encoder-decoder architecture with a novel two-component loss function. The first component is a Dice loss computed between the predicted segmentation maps and the weak segmentation labels. The second component is a mean squared error (MSE) loss measuring the difference between the orientation histograms of the predicted segmentation map and the original raw image. A weighted sum of these two losses is used to train the proposed CNTSegNet network. The Dice loss forces the network to perform background-foreground segmentation using local intensity features, while the MSE loss guides the network with global orientation features and leads to refined segmentation results. The proposed system needs only a few images for training, and thanks to its self-supervised nature it can easily be adapted to new datasets. 
    Free, publicly-accessible full text available February 12, 2024
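The two-component loss described in the first abstract can be sketched in plain NumPy. Everything here is illustrative rather than the paper's implementation: the orientation histogram is approximated from image gradient directions, and the bin count and the weights `w_dice`/`w_mse` are hypothetical values chosen for the sketch.

```python
import numpy as np

def dice_loss(pred, weak_label, eps=1e-6):
    # Soft Dice loss between predicted foreground probabilities and the
    # weak (intensity-thresholded) segmentation label.
    inter = np.sum(pred * weak_label)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(weak_label) + eps)

def orientation_histogram(image, n_bins=18):
    # Histogram of local gradient orientations, weighted by gradient
    # magnitude -- a crude stand-in for the CNT orientation distribution.
    gy, gx = np.gradient(image.astype(float))
    angles = np.mod(np.arctan2(gy, gx), np.pi)   # orientations in [0, pi)
    mags = np.hypot(gx, gy)
    hist, _ = np.histogram(angles, bins=n_bins, range=(0.0, np.pi), weights=mags)
    total = hist.sum()
    return hist / total if total > 0 else hist

def dual_loss(pred, weak_label, raw_image, w_dice=1.0, w_mse=1.0):
    # Weighted sum of the two loss components: Dice against the weak
    # label, MSE between orientation histograms of prediction and raw image.
    h_pred = orientation_histogram(pred)
    h_raw = orientation_histogram(raw_image)
    mse = np.mean((h_pred - h_raw) ** 2)
    return w_dice * dice_loss(pred, weak_label) + w_mse * mse
```

In a training loop the Dice term would pull the prediction toward the thresholded label pixel by pixel, while the histogram MSE term penalizes predictions whose global orientation statistics drift from those of the raw image.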
  2. Carbon nanotube (CNT) forests are imaged using scanning electron microscopes (SEMs), which project their multilayered 3D structure into a single 2D image. Image analytics, particularly instance segmentation, is needed to quantify structural characteristics and to predict correlations between structural morphology and physical properties. The inherent complexity of individual CNT structures is compounded in CNT forests by the density of CNTs, interactions between CNTs, occlusions, and the lack of 3D information to resolve correspondences when multiple CNTs from different depths appear to cross in 2D. In this paper, we propose CNT-NeRF, a generative adversarial network (GAN) for simultaneous depth-layer decomposition and segmentation of CNT forests in SEM images. The proposed network is trained using a multi-layer, photo-realistic synthetic dataset obtained by transferring the style of real CNT images to physics-based simulation data. Experiments show promising depth-layer decomposition and accurate CNT segmentation results not only for the front layer but also for the partially occluded middle and back layers. This achievement is a significant step toward automated, image-based characterization of CNT forest structure and prediction of physical properties. 
    Free, publicly-accessible full text available January 1, 2024
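The projection problem that the second abstract describes can be illustrated with a minimal compositing sketch in NumPy (a hypothetical helper, not the paper's code): given an ordered stack of per-layer CNT masks, nearer layers occlude farther ones in the 2D image, and the per-pixel visible-layer index is the kind of ground truth a depth-layer decomposition network would be trained to recover.

```python
import numpy as np

def composite_layers(layers):
    # Project an ordered stack of binary CNT masks (front layer first)
    # into a single 2D image, with nearer layers occluding those behind.
    # Returns the composite image and a per-pixel visible-layer index
    # (-1 where no layer is present).
    h, w = layers[0].shape
    composite = np.zeros((h, w))
    visible = np.full((h, w), -1, dtype=int)
    for idx, layer in enumerate(layers):        # iterate front -> back
        newly = (layer > 0) & (visible == -1)   # pixels not yet occluded
        composite[newly] = layer[newly]
        visible[newly] = idx
    return composite, visible
```

Inverting this many-to-one projection, i.e. recovering the layer stack from the composite alone, is what makes the decomposition task ill-posed and motivates training on synthetic data where the layer-wise ground truth is known.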
  3. Current scientific experiments frequently involve controlling specialized instruments (e.g., scanning electron microscopes), collecting image data from those instruments, and transferring the data for processing at simulation centers. This process requires a “human-in-the-loop” to perform those tasks manually, which, besides demanding considerable effort and time, can lead to inconsistencies or errors. It is therefore essential to have an automated system capable of remote instrumentation that intelligently controls and collects data from scientific instruments. In this paper, we propose a Remote Instrumentation Science Environment (RISE) for intelligent image analytics that provides the infrastructure to securely capture images, determine process parameters via machine learning, and issue experimental control actions via automation, under the premise of “human-on-the-loop”. The machine learning in RISE supports an iterative discovery process that helps researchers tune instrument settings to improve the outcomes of experiments. Driven by two scientific use cases of image analytics pipelines, one in materials science and another in biomedical science, we show how RISE automation leverages a cutting-edge integration of cloud computing, an on-premise HPC cluster, and a Python programming interface available on a microscope. Using web services, we implement RISE to perform automated image data collection and analysis, guided by an intelligent agent that provides real-time feedback control of the microscope using the image analytics outputs. Our evaluation shows that RISE helps researchers obtain higher image analytics accuracy and save precious time otherwise spent manually controlling the microscopes, while reducing errors in operating the instruments. 
    Free, publicly-accessible full text available December 1, 2023
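The feedback-control idea behind RISE can be sketched as a toy loop. Here a hypothetical `acquire_image` stands in for the microscope's Python interface, and a scalar sharpness score stands in for the image-analytics output; the real pipeline, microscope API, and agent are of course far more elaborate.

```python
import random

def acquire_image(focus):
    # Hypothetical stand-in for the microscope's Python interface:
    # returns an image-sharpness score from the analytics pipeline.
    # The "true" best focus is fixed at 7.0 for this toy model.
    true_best = 7.0
    return -abs(focus - true_best) + random.uniform(-0.05, 0.05)

def tune_focus(lo=0.0, hi=10.0, steps=20):
    # Minimal agent loop: sweep an instrument setting, score each
    # capture, keep the best. The human is "on the loop", supervising
    # the sweep rather than driving every acquisition by hand.
    best_focus, best_score = lo, float("-inf")
    for i in range(steps + 1):
        focus = lo + (hi - lo) * i / steps
        score = acquire_image(focus)
        if score > best_score:
            best_focus, best_score = focus, score
    return best_focus
```

A real agent would replace the brute-force sweep with something smarter (e.g., an iterative optimizer informed by the machine-learning model), but the capture-score-adjust cycle is the same.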
  4. Free, publicly-accessible full text available December 1, 2023
  5. While the physical properties of carbon nanotubes (CNTs) are often superior to those of conventional engineering materials, their widespread adoption is limited by the difficulty of scaling the properties of individual CNTs to macroscale CNT assemblies known as CNT forests. The self-assembly mechanics that determine the morphology and ensemble properties of CNT forests remain poorly understood, and few experimental techniques exist to characterize and observe the growth and self-assembly processes in situ. Here we introduce in-situ scanning electron microscope (SEM) synthesis based on chemical vapor deposition (CVD) processing. In this preliminary report, we share best practices for in-situ SEM CVD processing and initial CNT forest synthesis results. Image analysis techniques are developed to identify and track the movement of catalyst nanoparticles during synthesis. Finally, we offer a perspective in which in-situ SEM observation is one component of a larger system where numerical simulation, machine learning, and digital control of experiments reduce the role of humans, and human error, in exploring CNT forest process-structure-property relationships. 
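The nanoparticle-tracking step mentioned above can be illustrated with a minimal nearest-neighbour linker in NumPy. Detection of the centroids and the `max_disp` threshold are assumptions made for illustration, not the paper's actual method.

```python
import numpy as np

def link_particles(prev_pts, curr_pts, max_disp=5.0):
    # Greedily link catalyst-particle centroids between two consecutive
    # SEM frames: each previous point is matched to its nearest current
    # point, provided the displacement is plausible and the current
    # point has not already been claimed. Returns (prev_idx, curr_idx)
    # pairs; centroid detection itself is not shown.
    links = []
    used = set()
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(np.asarray(curr_pts) - np.asarray(p), axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_disp and j not in used:
            links.append((i, j))
            used.add(j)
    return links
```

Chaining these frame-to-frame links over a whole image sequence yields per-particle trajectories, from which motion statistics during synthesis can be computed.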
  6. Understanding and controlling the self-assembly of vertically oriented carbon nanotube (CNT) forests is essential for realizing their potential in myriad applications. The governing process–structure–property mechanisms are poorly understood, and the processing parameter space is far too vast to explore exhaustively by experiment. We overcome these limitations by using a physics-based simulation as a high-throughput virtual laboratory and image-based machine learning to relate CNT forest synthesis attributes to their mechanical performance. Using CNTNet, our image-based deep learning classifier module trained with synthetic imagery, combinations of CNT diameter, density, and population growth rate classes were labeled with an accuracy of >91%. The CNTNet regression module predicted CNT forest stiffness and buckling load with a lower root-mean-square error than that of a regression predictor based on CNT physical parameters. These results demonstrate that image-based machine learning trained using only simulated imagery can distinguish subtle CNT forest morphological features and predict physical material properties with high accuracy. CNTNet paves the way to incorporate scanning electron microscope imagery for high-throughput material discovery.

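The comparison metric quoted in the abstract above is root-mean-square error; a short NumPy helper makes the definition concrete (the helper name is ours, not from the paper).

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root-mean-square error between measured and predicted property
    # values -- the metric used to compare the image-based regression
    # against the physical-parameter baseline.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```

A lower RMSE on held-out simulations is what supports the claim that the image-based regressor outperforms the predictor built on CNT physical parameters.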