Electron microscopy images of carbon nanotube (CNT) forests are difficult to segment due to the long and thin nature of the CNTs; the density of the CNT forests, which results in CNTs touching, crossing, and occluding each other; and the low signal-to-noise ratio of electron microscopy imagery. In addition, due to image complexity, it is not feasible to prepare training segmentation masks. In this paper, we propose CNTSegNet, a dual-loss, orientation-guided, self-supervised, deep learning network for CNT forest segmentation in scanning electron microscopy (SEM) images. Our training labels consist of weak segmentation labels produced by intensity thresholding of the raw SEM images and self-labels produced by estimating the orientation distribution of CNTs in these raw images. The proposed network extends a U-net-like encoder-decoder architecture with a novel two-component loss function. The first component is a Dice loss computed between the predicted segmentation maps and the weak segmentation labels. The second component is a mean squared error (MSE) loss measuring the difference between the orientation histogram of the predicted segmentation map and that of the original raw image. A weighted sum of these two loss functions is used to train the proposed CNTSegNet network. The Dice loss forces the network to perform background-foreground segmentation using local intensity features. The MSE loss guides the network with global orientation features and leads to refined segmentation results. The proposed system needs only a few-shot dataset for training. Thanks to its self-supervised nature, it can easily be adapted to new datasets.
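The weighted dual loss described in this abstract can be sketched as follows. This is a minimal PyTorch-style sketch, not the authors' implementation: the weighting factor `alpha`, the gradient-based orientation histogram, and all function names are illustrative assumptions. Note that `torch.histc` is not differentiable, so a real training loss would need a soft (e.g., kernel-weighted) histogram.

```python
import math
import torch

def dice_loss(pred, weak_mask, eps=1e-6):
    # Soft Dice between the predicted map and the weak threshold-based labels.
    inter = (pred * weak_mask).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + weak_mask.sum() + eps)

def orientation_histogram(img, n_bins=36):
    # Hypothetical stand-in: normalized histogram of local gradient orientations.
    gy, gx = torch.gradient(img)
    angles = torch.atan2(gy, gx)  # range (-pi, pi]
    hist = torch.histc(angles, bins=n_bins, min=-math.pi, max=math.pi)
    return hist / (hist.sum() + 1e-6)

def dual_loss(pred, weak_mask, raw_img, alpha=0.5):
    # Weighted sum of the local (Dice) and global (orientation MSE) terms.
    mse = torch.mean((orientation_histogram(pred)
                      - orientation_histogram(raw_img)) ** 2)
    return alpha * dice_loss(pred, weak_mask) + (1.0 - alpha) * mse
```

When prediction, weak label, and raw image agree, both terms vanish; disagreement in either local overlap or global orientation statistics raises the loss.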
Carbon nanotube (CNT) forests are imaged using scanning electron microscopes (SEMs) that project their multilayered 3D structure into a single 2D image. Image analytics, particularly instance segmentation is needed to quantify structural characteristics and to predict correlations between structural morphology and physical properties. The inherent complexity of individual CNT structures is further increased in CNT forests due to density of CNTs, interactions between CNTs, occlusions, and lack of 3D information to resolve correspondences when multiple CNTs from different depths appear to cross in 2D. In this paper, we propose CNT-NeRF, a generative adversarial network (GAN) for simultaneous depth layer decomposition and segmentation of CNT forests in SEM images. The proposed network is trained using a multi-layer, photo-realistic synthetic dataset obtained by transferring the style of real CNT images to physics-based simulation data. Experiments show promising depth layer decomposition and accurate CNT segmentation results not only for the front layer but also for the partially occluded middle and back layers. This achievement is a significant step towards automated, image-based CNT forest structure characterization and physical property prediction.
Current scientific experiments frequently involve control of specialized instruments (e.g., scanning electron microscopes), image data collection from those instruments, and transfer of the data for processing at simulation centers. This process requires a “human-in-the-loop” to perform those tasks manually, which, besides requiring a lot of effort and time, can lead to inconsistencies or errors. Thus, it is essential to have an automated system capable of performing remote instrumentation to intelligently control and collect data from the scientific instruments. In this paper, we propose a Remote Instrumentation Science Environment (RISE) for intelligent image analytics that provides the infrastructure to securely capture images, determine process parameters via machine learning, and provide experimental control actions via automation, under the premise of “human-on-the-loop”. The machine learning in RISE aids an iterative discovery process that assists researchers in tuning instrument settings to improve the outcomes of experiments. Driven by two scientific use cases of image analytics pipelines, one in material science and another in biomedical science, we show how RISE automation leverages a cutting-edge integration of cloud computing, an on-premise HPC cluster, and a Python programming interface available on a microscope. Using web services, we implement RISE to perform automated image data collection and analysis guided by an intelligent agent that provides real-time feedback control of the microscope using the image analytics outputs. Our evaluation results show the benefits of RISE: researchers obtain higher image analytics accuracy, save precious time otherwise spent manually controlling the microscopes, and reduce errors in operating the instruments.
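The capture-analyze-control cycle described for RISE can be sketched as a simple feedback loop. This is a hypothetical sketch of the control pattern only: the callback names, the quality threshold, and the stopping rule are assumptions, not RISE's actual API or microscope interface.

```python
def rise_control_loop(capture_image, analyze, apply_settings, max_iters=10):
    """Hypothetical human-on-the-loop cycle: capture an image, let an ML
    model score it and suggest instrument settings, then apply them
    automatically until the quality target is met."""
    suggested = None
    for _ in range(max_iters):
        img = capture_image()              # e.g., an SEM frame via a microscope API
        quality, suggested = analyze(img)  # ML scores the image, suggests parameters
        if quality >= 0.95:                # good enough; stop tuning
            break
        apply_settings(suggested)          # automated control action
    return suggested
```

In practice the `analyze` step would run remotely (cloud or HPC cluster) with results fed back to the instrument over web services, as the abstract describes.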
While the physical properties of carbon nanotubes (CNTs) are often superior to conventional engineering materials, their widespread adoption into many applications is limited by scaling the properties of individual CNTs to macroscale CNT assemblies known as CNT forests. The self-assembly mechanics of CNT forests that determine their morphology and ensemble properties remain poorly understood. Few experimental techniques exist to characterize and observe the growth and self-assembly processes in situ. Here we introduce the use of in-situ scanning electron microscope (SEM) synthesis based on chemical vapor deposition (CVD) processing. In this preliminary report, we share best practices for in-situ SEM CVD processing and initial CNT forest synthesis results. Image analysis techniques are developed to identify and track the movement of catalyst nanoparticles during synthesis conditions. Finally, a perspective is provided in which in-situ SEM observations represent one component of a larger system in which numerical simulation, machine learning, and digital control of experiments reduces the role of humans and human error in the exploration of CNT forest process-structure-property relationships.
Understanding and controlling the self-assembly of vertically oriented carbon nanotube (CNT) forests is essential for realizing their potential in myriad applications. The governing process–structure–property mechanisms are poorly understood, and the processing parameter space is far too vast to exhaustively explore experimentally. We overcome these limitations by using a physics-based simulation as a high-throughput virtual laboratory and image-based machine learning to relate CNT forest synthesis attributes to their mechanical performance. Using CNTNet, our image-based deep learning classifier module trained with synthetic imagery, combinations of CNT diameter, density, and population growth rate classes were labeled with an accuracy of >91%. The CNTNet regression module predicted CNT forest stiffness and buckling load properties with a lower root-mean-square error than that of a regression predictor based on CNT physical parameters. These results demonstrate that image-based machine learning trained using only simulated imagery can distinguish subtle CNT forest morphological features to predict physical material properties with high accuracy. CNTNet paves the way to incorporate scanning electron microscope imagery for high-throughput material discovery.
The parameter space of CNT forest synthesis is vast and multidimensional, making experimental and/or numerical exploration of the synthesis prohibitive. We propose a more practical approach to explore the synthesis-process relationships of CNT forests using machine learning (ML) algorithms to infer the underlying complex physical processes. Currently, no such ML model linking CNT forest morphology to synthesis parameters has been demonstrated. In the current work, we use a physics-based numerical model to generate CNT forest morphology images with known synthesis parameters to train such a ML algorithm. The CNT forest synthesis variables of CNT diameter and CNT number densities are varied to generate a total of 12 distinct CNT forest classes. Images of the resultant CNT forests at different time steps during the growth and self-assembly process are then used as the training dataset. Based on the CNT forest structural morphology, multiple single and combined histogram-based texture descriptors are used as features to build a random forest (RF) classifier to predict class labels based on correlation of CNT forest physical attributes with the growth parameters. The machine learning model achieved an accuracy of up to 83.5% on predicting the synthesis conditions of CNT number density and diameter. These results are the first step towards rapidly characterizing CNT forest attributes using machine learning. Identifying the relevant process-structure interactions for the CNT forests using physics-based simulations and machine learning could rapidly advance the design, development, and adoption of CNT forest applications with varied morphologies and properties.
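The histogram-descriptor-plus-random-forest pipeline described above can be sketched with scikit-learn. This is an illustrative toy, not the study's code: the intensity-histogram feature, bin count, two synthetic classes, and beta-distributed stand-in imagery are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def intensity_histogram_features(img, n_bins=32):
    # Normalized gray-level histogram as a simple histogram-based texture descriptor.
    hist, _ = np.histogram(img, bins=n_bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

# Synthetic stand-in for simulated CNT forest images of two morphology classes,
# drawn from two different intensity distributions.
rng = np.random.default_rng(0)
X = np.array([intensity_histogram_features(rng.beta(a, 2.0, size=(64, 64)))
              for a in [1.0] * 50 + [3.0] * 50])
y = np.array([0] * 50 + [1] * 50)

# Random forest classifier predicting the (synthetic) class label from the descriptor.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```

In the actual study, the classes would encode combinations of CNT diameter and number density, and accuracy would be measured on held-out simulated images rather than the training set.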
Beyond facilitating transport and providing mechanical support to the leaf, veins have important roles in the performance and productivity of plants and the ecosystem. In recent decades, computational image analysis has accelerated the extraction and quantification of vein traits, benefiting fields of research from agriculture to climatology. However, most of the existing leaf vein image analysis programs have been developed for the reticulate venation found in dicots. Despite the agroeconomic importance of cereal grass crops, like
Oryza sativa (rice) and Zea mays (maize), a dedicated image analysis program for the parallel venation found in monocots has yet to be developed. To address the need for an image‐based vein phenotyping tool for model and agronomic grass species, we developed the grass vein image quantification (grasviq) framework. Designed specifically for parallel venation, this framework automatically segments and quantifies vein patterns from images of cleared leaf pieces using classical computer vision techniques. Using image data sets from maize inbred lines and auxin biosynthesis and transport mutants in maize, we demonstrate the utility of grasviq for quantifying important vein traits, including vein density, vein width and interveinal distance. Furthermore, we show that the framework can resolve quantitative differences and identify vein patterning defects, which is advantageous for genetic experiments and mutant screens. We report that grasviq can perform high‐throughput vein quantification, with precision on a par with that of manual quantification. Therefore, we envision that grasviq will be adopted for vein phenomics in maize and other grass species.
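Two of the vein traits named above (vein density and interveinal distance) can be computed from a binary vein mask with classical operations. The sketch below is a hypothetical NumPy/SciPy illustration, not grasviq's actual code; it assumes parallel veins running vertically through the image and a known pixel-to-millimeter scale.

```python
import numpy as np
from scipy import ndimage

def vein_traits(mask, px_per_mm=100.0):
    # mask: 2D boolean array, True at vein pixels; parallel veins are assumed
    # to run along axis 0 (vertical), as in a cleared monocot leaf piece.
    vein_cols = mask.any(axis=0)                 # columns containing any vein pixel
    labels, n_veins = ndimage.label(vein_cols)   # each run of columns = one vein
    width_mm = mask.shape[1] / px_per_mm
    vein_density = n_veins / width_mm            # veins per mm of leaf width
    # Interveinal distance: mean spacing between vein run centers.
    centers = ndimage.center_of_mass(vein_cols, labels, range(1, n_veins + 1))
    centers = np.sort([c[0] for c in centers])
    interveinal_mm = np.diff(centers).mean() / px_per_mm if n_veins > 1 else np.nan
    return vein_density, interveinal_mm
```

Vein width could be measured analogously from the lengths of the labeled column runs; the real framework additionally performs the segmentation step that produces the binary mask.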