Title: Deformation Manifold Learning Model for Deformation of Multi-Walled Carbon Nano-Tubes: Exploring the Latent Space
A novel machine learning model is presented in this work to obtain the complex high-dimensional deformation of Multi-Walled Carbon Nanotubes (MWCNTs) containing millions of atoms. Existing approaches for such high-dimensional systems, such as atomistic, continuum, or atomistic-continuum models, are accurate and reliable but computationally prohibitive at this scale, and the cost slows the exploration of the physics of these materials. To alleviate this problem, we developed a machine learning model that combines a) a novel dimensionality reduction technique with b) deep-neural-network-based learning in the reduced dimension. The proposed non-linear dimensionality reduction technique extends functional principal component analysis so that the geometric constraints of the deformation are satisfied exactly; we therefore term it constrained functional principal component analysis. The novelty of this technique is its ability to design a function space in which every function satisfies the constraints exactly, not approximately. The efficient dimensionality reduction, together with the exact satisfaction of the constraints, enables the deep neural network to achieve remarkable accuracy. The proposed model predicts the deformation of MWCNTs very accurately when compared with the deformation obtained through an atomistic-physics-based model. Simulating this complex high-dimensional deformation with the atomistic-physics-based model takes weeks on a high-performance computing facility, whereas the proposed machine learning model predicts the deformation in seconds. The technique also extracts the universally dominant patterns of deformation in an unsupervised manner. These patterns are comprehensible and explain how the model works; the comprehensibility of the dominant modes of deformation yields the interpretability of the model.
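The abstract gives no implementation details, but for homogeneous linear constraints the idea of a function space whose every member satisfies the constraints exactly can be sketched by projecting snapshot data onto the null space of a constraint operator before extracting principal components. The NumPy sketch below is purely illustrative: the snapshot matrix, the constraint matrix C (clamped end points), and all names are invented, and the paper's actual constrained functional PCA may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "deformation snapshots": 200 samples of a field sampled at 50 points.
X = rng.normal(size=(200, 50))

# Hypothetical homogeneous linear constraints C @ u = 0 (clamped end points).
C = np.zeros((2, 50))
C[0, 0] = 1.0   # u at the first grid point is zero
C[1, -1] = 1.0  # u at the last grid point is zero

# Projector onto the null space of C: P = I - C^T (C C^T)^{-1} C.
P = np.eye(50) - C.T @ np.linalg.solve(C @ C.T, C)
Xc = X @ P.T                      # every row now satisfies C @ u = 0 exactly

# PCA of the constrained snapshots (mean removed first).
Xc -= Xc.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
modes = Vt[:5]                    # 5 dominant deformation modes

# Any linear combination of these modes still satisfies the constraints exactly,
# so a network trained on mode coefficients can never violate them.
u = 2.0 * modes[0] - 0.5 * modes[3]
print(np.max(np.abs(C @ u)))      # ~0 up to floating-point error
```

Because the constraint is enforced on the basis itself, exactness holds for every point in the reduced space, not just for the training data.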
Award ID(s):
1937983
PAR ID:
10322729
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
ASME 2021 International Mechanical Engineering Congress and Exposition
Volume:
12
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Principal Component Analysis (PCA) and Kernel Principal Component Analysis (KPCA) are fundamental methods in machine learning for dimensionality reduction. The former finds a low-dimensional linear approximation in finite dimensions, while the latter operates in a typically infinite-dimensional reproducing kernel Hilbert space (RKHS). In this paper, we present a geometric framework for computing the principal linear subspaces in both situations, as well as in the robust PCA case, which amounts to computing the intrinsic average on the space of all subspaces: the Grassmann manifold. Points on this manifold are defined as the subspaces spanned by K-tuples of observations. The intrinsic Grassmann average of these subspaces is shown to coincide with the principal components of the observations when they are drawn from a Gaussian distribution. We show similar results in the RKHS case and provide an efficient algorithm for computing the projection onto this average subspace. The result is a method akin to KPCA that is substantially faster. Further, we present a novel online version of KPCA using our geometric framework. Competitive performance of all our algorithms is demonstrated on a variety of real and synthetic data sets.
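As a toy check of the central claim, the extrinsic (chordal) average of the projectors onto subspaces spanned by single Gaussian observations already recovers the first principal component; the paper's intrinsic Grassmann average is more involved, and all data and names here are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

# Anisotropic Gaussian data in R^3; the dominant direction is the first axis.
stds = np.array([5.0, 1.0, 0.2])
X = rng.normal(size=(4000, 3)) * stds

# Each observation spans a 1-dimensional subspace (K = 1 tuples).
# Average the orthogonal projectors onto those subspaces (chordal mean).
V = X / np.linalg.norm(X, axis=1, keepdims=True)
P_mean = np.einsum("ni,nj->ij", V, V) / len(V)

# Dominant eigenvector of the averaged projector vs. the classical first PC.
w, Q = np.linalg.eigh(P_mean)
subspace_avg_pc = Q[:, -1]

_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
classical_pc = Vt[0]

alignment = abs(subspace_avg_pc @ classical_pc)
print(alignment)  # close to 1: the two directions agree
```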
  2. Output from multidimensional datasets obtained from spectroscopic imaging techniques provides large data suitable for machine learning techniques to elucidate the physical and chemical attributes that define the maximum variance in the specimens. Here, a recently proposed technique of dimensional stacking is applied to obtain a cumulative depth over several LaAlO3/SrTiO3 heterostructures with varying thicknesses. Through dimensionality reduction via non-negative matrix factorization (NMF) and principal component analysis (PCA), it is shown that dimensional stacking provides much more robust statistics and consensus while still being able to separate different specimens of varying parameters. The results of stacked and unstacked samples, as well as the dimensionality reduction techniques, are compared. Applied to four LaAlO3/SrTiO3 heterostructures with varying thicknesses, NMF is able to separate 1) surface and film termination; 2) film; 3) interface position; and 4) substrate attributes from each other with near-perfect consensus. However, PCA results in the loss of data related to the substrate.
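A minimal sketch of dimensional stacking with scikit-learn: several synthetic "specimens" built from the same non-negative spectral sources are concatenated along the pixel axis so that one NMF (and one PCA) decomposition sees all samples at once. The data, shapes, and component counts below are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import NMF, PCA

rng = np.random.default_rng(2)

# Hypothetical spectroscopic images from 4 specimens: each is a stack of
# (pixels x energy-channels) spectra mixed from 3 shared non-negative sources.
sources = rng.random((3, 64))                 # 3 spectral components, 64 channels
specimens = []
for _ in range(4):
    weights = rng.random((500, 3))            # per-pixel abundances
    specimens.append(weights @ sources + 0.01 * rng.random((500, 64)))

# "Dimensional stacking": concatenate all specimens along the pixel axis so a
# single decomposition is fit across every specimen simultaneously.
stacked = np.vstack(specimens)                # shape (2000, 64)

nmf = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
abundances = nmf.fit_transform(stacked)       # non-negative per-pixel weights
pca_scores = PCA(n_components=3).fit_transform(stacked)

print(abundances.shape, pca_scores.shape)
```

The non-negativity of the NMF factors is what keeps the recovered components physically interpretable as additive spectral signatures, in contrast to the signed PCA loadings.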
  3. Most applications of multispectral imaging are explicitly or implicitly dependent on the dimensionality and topology of the spectral mixing space. Mixing space characterization refers to the identification of salient properties of the set of pixel reflectance spectra comprising an image (or compilation of images). The underlying premise is that this set of spectra may be described as a low dimensional manifold embedded in a high dimensional vector space. Traditional mixing space characterization uses the linear dimensionality reduction offered by Principal Component Analysis to find projections of pixel spectra onto orthogonal linear subspaces, prioritized by variance. Here, we consider the potential for recent advances in nonlinear dimensionality reduction (specifically, manifold learning) to contribute additional useful information for multispectral mixing space characterization. We integrate linear and nonlinear methods through a novel approach called Joint Characterization (JC). JC comprises two components. First, spectral mixture analysis (SMA) linearly projects the high-dimensional reflectance vectors onto a 2D subspace comprising the primary mixing continuum of substrates, vegetation, and dark features (e.g., shadow and water). Second, manifold learning nonlinearly maps the high-dimensional reflectance vectors into a low-dimensional embedding space while preserving manifold topology. The SMA output is physically interpretable in terms of material abundances. The manifold learning output is not generally physically interpretable, but more faithfully preserves high dimensional connectivity and clustering within the mixing space. Used together, the strengths of SMA may compensate for the limitations of manifold learning, and vice versa. Here, we illustrate JC through application to thematic compilations of 90 Sentinel-2 reflectance images selected from a diverse set of biomes and land cover categories.
Specifically, we use globally standardized Substrate, Vegetation, and Dark (S, V, D) endmembers (EMs) for SMA, and Uniform Manifold Approximation and Projection (UMAP) for manifold learning. The value of each (SVD and UMAP) model is illustrated, both separately and jointly. JC is shown to successfully characterize both continuous gradations (spectral mixing trends) and discrete clusters (land cover class distinctions) within the spectral mixing space of each land cover category. These features are not clearly identifiable from SVD fractions alone, and not physically interpretable from UMAP alone. Implications are discussed for the design of models which can reliably extract and explainably use high-dimensional spectral information in spatially mixed pixels—a principal challenge in optical remote sensing. 
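The linear SMA step can be illustrated as simple least-squares unmixing against endmember spectra. The endmembers below are random stand-ins, not the globally standardized S, V, D endmembers the paper uses, and a real workflow typically adds sum-to-one or non-negativity constraints on the fractions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical Substrate / Vegetation / Dark endmember spectra (10 bands).
E = rng.random((10, 3))

# A mixed pixel with known fractions summing to 1, plus a little noise.
f_true = np.array([0.5, 0.3, 0.2])
pixel = E @ f_true + 0.001 * rng.normal(size=10)

# Unconstrained linear SMA: least-squares estimate of the endmember fractions.
f_hat, *_ = np.linalg.lstsq(E, pixel, rcond=None)
print(np.round(f_hat, 2))  # approximately the true fractions [0.5, 0.3, 0.2]
```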
  4. CRISPR-Cas9 screens facilitate the discovery of gene functional relationships and phenotype-specific dependencies. The Cancer Dependency Map (DepMap) is the largest compendium of whole-genome CRISPR screens aimed at identifying cancer-specific genetic dependencies across human cell lines. A mitochondria-associated bias has been previously reported to mask signals for genes involved in other functions, and thus, methods for normalizing this dominant signal to improve co-essentiality networks are of interest. In this study, we explore three unsupervised dimensionality reduction methods, autoencoders and both robust and classical principal component analysis (PCA), for normalizing the DepMap to improve functional networks extracted from these data. We propose a novel "onion" normalization technique to combine several normalized data layers into a single network. Benchmarking analyses reveal that robust PCA combined with onion normalization outperforms existing methods for normalizing the DepMap. Our work demonstrates the value of removing low-dimensional signals from the DepMap before constructing functional gene networks and provides generalizable dimensionality reduction-based normalization tools.
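The flavor of this normalization, stripping a dominant low-dimensional signal before building a correlation network, can be sketched with plain PCA. This is not the paper's onion method or robust PCA; the gene-effect matrix and the "shared bias" below are synthetic inventions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical gene-effect matrix: 300 genes x 100 cell lines, with one strong
# shared signal (analogous to the mitochondria-associated bias) laid on top of
# independent gene-specific variation.
shared = rng.normal(size=100)
X = np.outer(rng.normal(size=300), shared) * 3 + rng.normal(size=(300, 100))

# Remove the top principal component before computing gene-gene correlations.
Xc = X - X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 1                                   # number of dominant signals to strip
X_norm = Xc - (U[:, :k] * s[:k]) @ Vt[:k]

net_raw = np.corrcoef(Xc)               # dominated by the shared signal
net_norm = np.corrcoef(X_norm)          # shared signal removed

off = ~np.eye(300, dtype=bool)
print(np.abs(net_raw[off]).mean(), np.abs(net_norm[off]).mean())
```

In this toy setting the raw network shows strong spurious correlations between unrelated genes, while the normalized network's off-diagonal correlations drop to the noise floor, which is the motivation for removing low-dimensional signals before network construction.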
  5. This paper presents a generative statistical model for analyzing time series of planar shapes. Using elastic shape analysis, we separate object kinematics (rigid motions and speed variability) from morphological evolution, representing the latter through transported velocity fields (TVFs). A principal component analysis (PCA) based dimensionality reduction of the TVF representation provides a finite-dimensional Euclidean framework, enabling traditional time-series analysis. We then fit a vector auto-regressive (VAR) model to the TVF-PCA time series, capturing the statistical dynamics of shape evolution. To characterize morphological changes, we use VAR model parameters for model comparison, synthesis, and sequence classification. Leveraging these parameters, along with machine learning classifiers, we achieve high classification accuracy. Extensive experiments on cell motility data validate our approach, demonstrating its effectiveness in modeling and classifying migrating cells based on morphological evolution, marking a novel contribution to the field.
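The PCA-plus-VAR pipeline can be sketched on synthetic data: reduce a high-dimensional series with PCA, fit a VAR(1) model by least squares, and use the coefficient matrix as a descriptor of the dynamics. Everything below (dimensions, the latent dynamics, the linear embedding) is invented for illustration and stands in for the paper's TVF representation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Latent 3-d AR(1) process y_t = A @ y_{t-1} + noise, embedded in 20 dimensions.
T, d = 400, 20
A_true = np.diag([0.9, -0.5, 0.3])
Y = np.zeros((T, 3))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A_true.T + 0.1 * rng.normal(size=3)
W = rng.normal(size=(3, d))
X = Y @ W                                  # observed high-dimensional series

# PCA reduction to 3 components (stand-in for the TVF-PCA scores).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T

# Least-squares VAR(1) fit: A_hat minimizes ||Z[1:] - Z[:-1] @ A^T||.
A_hat, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)
A_hat = A_hat.T

# Eigenvalues of A_hat are invariant to the linear change of basis introduced
# by PCA, so they should approximate the latent dynamics' eigenvalues.
print(np.sort(np.linalg.eigvals(A_hat).real))
```

The recovered eigenvalues characterize the dynamics independently of the embedding, which is why VAR coefficients make usable features for comparing and classifying sequences.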