Title: A transfer learning approach for improved classification of carbon nanomaterials from TEM images
The extensive use of carbon nanomaterials such as carbon nanotubes/nanofibers (CNTs/CNFs) in industrial settings has raised concerns over the potential health risks associated with occupational exposure to these materials. These exposures commonly occur in the form of CNT/CNF-containing aerosols, resulting in a need for a reliable structure classification protocol to perform meaningful exposure assessments. However, airborne carbonaceous nanomaterials are very likely to form mixtures of individual nano-sized particles and micron-sized agglomerates with complex structures and irregular shapes, making structure identification and classification extremely difficult. While manual classification from transmission electron microscopy (TEM) images is widely used, it is time-consuming due to the lack of automation tools for structure identification. In the present study, we applied a convolutional neural network (CNN)-based machine learning and computer vision method to recognize and classify airborne CNT/CNF particles from TEM images. We introduced a transfer learning approach that represents images by hypercolumn vectors, which were clustered via K-means and processed into a Vector of Locally Aggregated Descriptors (VLAD) representation to train a softmax classifier with the gradient boosting algorithm. This method achieved 90.9% accuracy on the classification of a 4-class dataset and 84.5% accuracy on a more complex 8-class dataset. The developed model establishes a framework for automatically detecting and classifying complex carbon nanostructures, with potential applications that extend to automated structural classification for other nanomaterials.
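The abstract does not include code; as an illustration of the VLAD encoding step it describes, the following is a minimal sketch only, not the authors' implementation. It assumes hypercolumn descriptors have already been extracted from a pre-trained CNN, and the vocabulary size and variable names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def vlad_encode(descriptors, kmeans):
    """Aggregate local descriptors (N x D) into a single VLAD vector of length K*D."""
    centers = kmeans.cluster_centers_            # (K, D) visual vocabulary
    labels = kmeans.predict(descriptors)         # nearest center for each descriptor
    K, D = centers.shape

    vlad = np.zeros((K, D), dtype=np.float64)
    for k in range(K):
        members = descriptors[labels == k]
        if len(members):
            # accumulate residuals between descriptors and their assigned center
            vlad[k] = (members - centers[k]).sum(axis=0)

    vlad = vlad.ravel()
    # signed square-root (power) normalization followed by L2 normalization
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))
    norm = np.linalg.norm(vlad)
    return vlad / norm if norm > 0 else vlad

# Vocabulary fitted on descriptors pooled across the training images (hypercolumn
# extraction from a pre-trained CNN is assumed to happen elsewhere):
# kmeans = KMeans(n_clusters=64, random_state=0).fit(pooled_training_descriptors)
# x = vlad_encode(image_descriptors, kmeans)     # feature vector for the classifier
```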
Award ID(s): 1826218
PAR ID: 10333846
Author(s) / Creator(s):
Date Published:
Journal Name: Nanoscale Advances
Volume: 3
Issue: 1
ISSN: 2516-0230
Page Range / eLocation ID: 206 to 213
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
1. Machine learning image recognition and classification of particles and materials is a rapidly expanding field. However, nanomaterial identification and classification depend on the image resolution, the image field of view, and the processing time. Optical microscopes are among the most widely used instruments in laboratories across the world because of their nondestructive ability to identify and classify critical micro-sized objects and processes, but identifying and classifying critical nano-sized objects and processes is beyond the capabilities of a conventional microscope due to the diffraction limit of the optics and the small field of view. To overcome these challenges, we developed an intelligent nanoscope that combines machine learning and microsphere array-based imaging to: (1) surpass the diffraction limit of the microscope objective with microsphere imaging to provide high-resolution images; (2) provide large field-of-view imaging without sacrificing resolution by utilizing a microsphere array; and (3) rapidly classify nanomaterials using a deep convolutional neural network. The intelligent nanoscope delivers more than 46 magnified images from a single image frame, allowing more than 1000 images to be collected within 2 seconds. Moreover, the intelligent nanoscope achieves a 95% nanomaterial classification accuracy using a training set of 1000 images, which is 45% more accurate than without the microsphere array. It also achieves a 92% bacteria classification accuracy using a training set of 50,000 images, which is 35% more accurate than without the microsphere array. This platform accomplished rapid, accurate detection and classification of nanomaterials with minuscule size differences, and its capabilities hold the potential to further detect and classify smaller biological nanomaterials, such as viruses or extracellular vesicles.
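The network used in that work is not reproduced here; as an illustration of the deep-CNN classification stage described above, the sketch below shows a minimal PyTorch patch classifier. The layer sizes, input patch size, and class count are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class NanoscopePatchCNN(nn.Module):
    """Minimal CNN that classifies small grayscale image patches into material classes."""

    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                 # global average pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# model = NanoscopePatchCNN(num_classes=3)
# logits = model(torch.randn(8, 1, 64, 64))        # batch of 8 grayscale 64x64 patches
```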
2. Transmission electron microscopy (TEM) is essential for determining atomic-scale structures in structural biology and materials science. In structural biology, three-dimensional structures of proteins are routinely determined from thousands of identical particles using phase-contrast TEM. In materials science, three-dimensional atomic structures of complex nanomaterials have been determined using atomic electron tomography (AET). However, neither of these methods can determine the three-dimensional atomic structure of heterogeneous nanomaterials containing light elements. Here, we perform ptychographic electron tomography from 34.5 million diffraction patterns to reconstruct an atomic-resolution tilt series of a double-wall carbon nanotube (DW-CNT) encapsulating a complex ZrTe sandwich structure. Class averaging the resulting tilt-series images and subpixel localization of the atomic peaks reveal a Zr11Te50 structure containing a previously unobserved ZrTe2 phase in the core. The experimental realization of atomic-resolution ptychographic electron tomography will allow the structural determination of a wide range of beam-sensitive nanomaterials containing light elements.
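The abstract above does not spell out its peak-fitting procedure; the sketch below illustrates one common approach to subpixel localization of atomic peaks, refining detected local maxima with an intensity-weighted centroid. The threshold and window size are assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter, center_of_mass

def refine_peaks(image, threshold, window=3):
    """Detect local maxima above a threshold and refine each to subpixel precision
    using the intensity-weighted centroid of a small window around the peak."""
    is_peak = (image == maximum_filter(image, size=2 * window + 1)) & (image > threshold)
    h, w = image.shape
    coords = []
    for r, c in zip(*np.nonzero(is_peak)):
        r0, r1 = max(r - window, 0), min(r + window + 1, h)
        c0, c1 = max(c - window, 0), min(c + window + 1, w)
        dr, dc = center_of_mass(image[r0:r1, c0:c1])   # centroid within the window
        coords.append((r0 + dr, c0 + dc))
    return np.array(coords)                            # (num_peaks, 2) subpixel positions
```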
3. The parameter space of CNT forest synthesis is vast and multidimensional, making experimental and/or numerical exploration of the synthesis prohibitive. We propose a more practical approach to exploring the synthesis-process relationships of CNT forests: using machine learning (ML) algorithms to infer the underlying complex physical processes. Currently, no such ML model linking CNT forest morphology to synthesis parameters has been demonstrated. In the current work, we use a physics-based numerical model to generate CNT forest morphology images with known synthesis parameters to train such an ML algorithm. The CNT forest synthesis variables of CNT diameter and CNT number density are varied to generate a total of 12 distinct CNT forest classes. Images of the resultant CNT forests at different time steps during the growth and self-assembly process are then used as the training dataset. Based on the CNT forest structural morphology, multiple single and combined histogram-based texture descriptors are used as features to build a random forest (RF) classifier that predicts class labels based on the correlation of CNT forest physical attributes with the growth parameters. The machine learning model achieved an accuracy of up to 83.5% in predicting the synthesis conditions of CNT number density and diameter. These results are a first step toward rapidly characterizing CNT forest attributes using machine learning. Identifying the relevant process-structure interactions for CNT forests using physics-based simulations and machine learning could rapidly advance the design, development, and adoption of CNT forest applications with varied morphologies and properties.
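The specific texture descriptors are not listed in the summary above; as a hedged sketch of the general recipe, the code below pairs simple intensity and gradient-magnitude histograms with a scikit-learn random forest. The bin count, number of trees, and variable names are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def histogram_features(image, bins=32):
    """Histogram-based texture descriptors for a grayscale image: a normalized
    intensity histogram concatenated with a gradient-magnitude histogram."""
    image = image.astype(np.float64)
    gy, gx = np.gradient(image)
    grad = np.hypot(gx, gy)
    h_int, _ = np.histogram(image, bins=bins, density=True)
    h_grad, _ = np.histogram(grad, bins=bins, density=True)
    return np.concatenate([h_int, h_grad])

# X_train = np.stack([histogram_features(img) for img in train_images])
# clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, train_labels)
# predictions = clf.predict(np.stack([histogram_features(img) for img in test_images]))
```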
4. Pollen identification is necessary for several subfields of geology, ecology, and evolutionary biology. However, the existing methods for pollen identification are laborious, time-consuming, and require highly skilled scientists. There is therefore a pressing need for an automated and accurate system for pollen identification, which can benefit both basic research and applied issues such as identifying airborne allergens. In this study, we propose a deep learning (DL) approach to classify pollen grains in the Great Basin Desert, Nevada, USA. Our dataset consisted of 10,000 images of 40 pollen species. To mitigate the limitations imposed by the small volume of our training dataset, we conducted an in-depth comparative analysis of numerous pre-trained Convolutional Neural Network (CNN) architectures using transfer learning methodologies. Simultaneously, we developed and incorporated an innovative CNN model to augment our exploration and optimization of data modeling strategies. We applied different architectures of well-known pre-trained deep CNN models, including AlexNet, VGG-16, MobileNet-V2, ResNet (18, 34, 50, and 101), ResNeSt (50, 101), SE-ResNeXt, and Vision Transformer (ViT), to identify the most promising modeling approach for the classification of pollen grains in the Great Basin. To evaluate the performance of the pre-trained deep CNN models, we measured accuracy, precision, F1-score, and recall. Our results showed that the ResNeSt-110 model achieved the best performance, with an accuracy of 97.24%, precision of 97.89%, F1-score of 96.86%, and recall of 97.13%. Our results also revealed that transfer learning models can deliver better and faster image classification results than traditional CNN models built from scratch. The proposed method can potentially benefit various fields that rely on efficient pollen identification. This study demonstrates that DL approaches can improve the accuracy and efficiency of pollen identification, and it provides a foundation for further research in the field.
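The study above compared many backbones; the sketch below is not the reported ResNeSt-110 model but illustrates the general transfer-learning recipe with a standard torchvision ResNet-50: load ImageNet weights, optionally freeze the backbone, and replace the final fully connected layer with a 40-class head. It assumes torchvision 0.13 or later for the weights API; the freezing strategy is an assumption.

```python
import torch.nn as nn
from torchvision import models

def build_transfer_model(num_classes=40, freeze_backbone=True):
    """Adapt an ImageNet-pretrained ResNet-50 to a new classification task."""
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False          # keep ImageNet features fixed
    # replace the 1000-class ImageNet head with a task-specific head
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# model = build_transfer_model(num_classes=40)
# With the backbone frozen, only model.fc.parameters() receive gradients,
# so training updates just the new classification head.
```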
5. In the past few years, many research studies have been conducted in the field of satellite image classification, for purposes including flood identification, forest fire monitoring, greenery land identification, and land-usage identification. In this field, finding suitable data is often considered problematic, and some research has also been done to identify and extract suitable datasets for classification. Although satellite data can be challenging to deal with, Convolutional Neural Networks (CNNs), which consist of multiple layers of interconnected neurons, have shown promising results when applied to satellite imagery data. In the present work, we first manually downloaded satellite images of four different classes at Florida locations using the TerraFly Mapping System, developed and managed by the High Performance Database Research Center at Florida International University. We then developed a CNN architecture suitable for extracting features and capable of multi-class classification on our dataset. We discuss the shortcomings in the classification due to the limited size of the dataset. To address this issue, we first employ data augmentation and then utilize a transfer learning methodology for feature extraction with the VGG16 and ResNet50 pretrained models. We use these features to classify satellite imagery of Florida. We analyze the misclassifications in our model and, to address them, introduce a location-based CNN model: we convert coordinates to geohash codes, use these codes as an additional feature vector, and feed them into the CNN model. We believe that the new CNN model combined with geohash codes as location features provides better accuracy on our dataset.
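The exact geohash encoding used in that work is not shown; the sketch below illustrates one way to turn coordinates into a small numeric location vector that can be concatenated with CNN image features. The pygeohash helper, the precision, and the concatenation point are assumptions.

```python
import numpy as np
import pygeohash as pgh   # third-party geohash helper, assumed for illustration

GEOHASH_ALPHABET = "0123456789bcdefghjkmnpqrstuvwxyz"   # standard geohash base-32

def geohash_feature(lat, lon, precision=6):
    """Encode a coordinate as a geohash string and map each character to its
    alphabet index, giving a fixed-length numeric vector of location features."""
    code = pgh.encode(lat, lon, precision=precision)
    return np.array([GEOHASH_ALPHABET.index(ch) for ch in code], dtype=np.float32)

# image_features = cnn_backbone(image_batch)                      # (B, F) CNN features
# location = np.stack([geohash_feature(lat, lon) for lat, lon in coords])
# combined = np.concatenate([image_features, location], axis=1)   # input to the classifier head
```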